Re: [Ifeffit] Question on GDS values: amp
Hi Nic,

On Aug 2, 2011, at 6:41 AM, Nicholas.Tse@csiro.au wrote:
Just a sidenote, is the general workflow for fitting XAFS data the following:
Fit the first shell to get reasonable E0 and amp values and then set them; then incrementally add more of the scattering paths, adjusting the delR for each path correspondingly, and adjusting the degeneracy for each atom on the assumption that there are slight differences in the atom spacings.
I know the question is very trivial, but I don't seem to be able to find a general guideline for knowing when the fitting process is over in this type of analysis.
As for your side note, there are several workflows that have success. The most appropriate one depends both on your level of knowledge about the material and your personal preference. I don't particularly favor the one you describe, though, because it's an attempt to fit a few parameters at a time to avoid wrestling with correlations. That doesn't end up actually avoiding correlations; it just locks them in. For instance, if you first guess amp, find a "best fit" value, set it, and then run a fit with something else varied (say, a degeneracy), then you've artificially broken the correlation between amp and degeneracy. But you've done it in a completely arbitrary way...you haven't really explored the space of the two varied jointly. (Of course, if you're only doing a single shell, you can't vary both S02 and N: the fitted amplitude depends only on their product, so they correlate completely. But you certainly can't pretend you can by first varying S02 with N fixed, and then varying N with S02 fixed!)

The two most prevalent valid fitting strategies I've seen are:

"Bottom up." (That's my name for it. I've also heard it called "shell by shell.") In this strategy, you start with a single shell with few constraints; perhaps you guess N (taking S02 from a standard), E0, delR, and ss. You try different nearest neighbors and see what works best. As you begin to gain knowledge about the system, you add more distant shells and begin to add reasonable constraints. In a biological system where you've determined the nearest neighbors are sulfur, for instance, your knowledge of the particular system may suggest what ligands are present, which might provide information about second nearest neighbors. For instance, the number of second nearest-neighbor carbons might be equal to the number of near-neighbor sulfurs, constraining some of the degeneracies. The first sketch below shows this kind of starting point.

"Top down." Start with a highly constrained, multi-shell fit. For instance, you might include all important paths out to 5 angstroms, with the only free parameters S02, E0, and a sigma2 parameter or two. If the fit appears qualitatively close, constraints can then be relaxed to more realistically model the particular features of the material (as one example, you can vary degeneracies to allow for vacancies or nanoscale effects). If the fit does not appear qualitatively close initially, you probably have the wrong starting material. (Amplitudes are often far off initially with this approach, and there may be small phase shifts, but the first few big peaks should be roughly in the right place. A minor variation on this approach is to start with a sum of paths rather than a fit at all.) The second sketch below shows this kind of setup.

Note that both strategies end up in the same place: a modestly constrained multi-shell fit. (And no, that's not always possible -- sometimes first-shell information is all you can get.)

When do you use each? If your material is a modestly modified version of a known crystal, top down is a good way to go. For instance, you might have a doped zinc oxide. You know the structure of zinc oxide; the question is the effect of the dopant. Why waste effort trying to fit just a nearest neighbor first, which can be quite difficult, when you think you know the rough structure out to several angstroms? On the other hand, for something like a protein, you probably don't initially know much about the structure at all. Then bottom up works better. Many problems, such as some environmental problems, fall in between the two, and either approach might be effective.
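(To make the "bottom up" starting point concrete, here is a minimal sketch using larch, the Python successor to Ifeffit. It is a sketch only: the file names, the parameter names, and the fit ranges are hypothetical, and some argument names, such as pathlist versus paths in feffit_dataset, vary between larch versions.)

```python
from larch.io import read_ascii
from larch.fitting import param_group, guess
from larch.xafs import (feffpath, feffit, feffit_dataset,
                        feffit_transform, feffit_report)

# Hypothetical data file containing background-subtracted chi(k).
dat = read_ascii('my_sample.chi', labels='k chi')

# Single-shell start: guess N, E0, delR, and sigma2, with S02 fixed
# to a value determined from a standard.
pars = param_group(n_s  = guess(4.0),    # nearest-neighbor (sulfur) degeneracy
                   e0   = guess(0.0),
                   delr = guess(0.0),
                   ss   = guess(0.003))

shell1 = feffpath('feff_S.dat',          # hypothetical metal-sulfur path
                  s02=0.9,               # fixed, from a standard
                  degen='n_s', e0='e0', deltar='delr', sigma2='ss')

# As knowledge accumulates, add a second shell with reasonable
# constraints -- here, one second-neighbor carbon per sulfur, so the
# carbon degeneracy is tied to n_s rather than being a new free parameter.
pars.delr_c = guess(0.0)
pars.ss_c = guess(0.003)
shell2 = feffpath('feff_C.dat', s02=0.9,
                  degen='n_s',           # N_C constrained equal to N_S
                  e0='e0', deltar='delr_c', sigma2='ss_c')

trans = feffit_transform(kmin=3, kmax=12, kweight=2, dk=1,
                         window='hanning', rmin=1.0, rmax=3.2)
dset = feffit_dataset(data=dat, pathlist=[shell1, shell2], transform=trans)
out = feffit(pars, dset)
print(feffit_report(out))
```

The key move is expressing knowledge about the system as constraints: degen='n_s' on the carbon path encodes "one carbon per sulfur" instead of spending another free parameter.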
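(And a corresponding sketch of the "top down" starting point, with the same caveats about hypothetical file names and version-dependent argument names. Every path shares one amplitude, one E0, one expansion coefficient, and one of two sigma2 values, so even a dozen paths cost only five free parameters.)

```python
from larch.io import read_ascii
from larch.fitting import param_group, guess
from larch.xafs import (feffpath, feffit, feffit_dataset,
                        feffit_transform, feffit_report)

dat = read_ascii('my_sample.chi', labels='k chi')  # hypothetical chi(k) data

# Highly constrained start: five parameters, no matter how many paths.
pars = param_group(s02   = guess(0.9),    # one overall amplitude
                   e0    = guess(0.0),    # one energy shift for all paths
                   alpha = guess(0.0),    # isotropic expansion/contraction
                   ss1   = guess(0.003),  # sigma2 shared by the near shells
                   ss2   = guess(0.005))  # sigma2 shared by the distant shells

# All important paths out to ~5 angstroms from a FEFF calculation on the
# known crystal; degeneracies keep their crystallographic values, and
# each path's delR scales with its own half path length, reff.
paths = [feffpath('feff0001.dat', s02='s02', e0='e0',
                  deltar='alpha*reff', sigma2='ss1'),
         feffpath('feff0002.dat', s02='s02', e0='e0',
                  deltar='alpha*reff', sigma2='ss1'),
         # ... one entry per important path, reusing the same parameters
         feffpath('feff0003.dat', s02='s02', e0='e0',
                  deltar='alpha*reff', sigma2='ss2')]

trans = feffit_transform(kmin=3, kmax=14, kweight=2, dk=1,
                         window='hanning', rmin=1.2, rmax=5.0)
dset = feffit_dataset(data=dat, pathlist=paths, transform=trans)
out = feffit(pars, dset)
print(feffit_report(out))
```

If this constrained fit looks qualitatively right, constraints can then be relaxed one at a time -- floating individual degeneracies to model vacancies, for example.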
A strategy that I've sometimes seen used by beginners in workshops, which is not a good idea, is to fit one shell, get the results, set the values of those parameters to the results of the fit, add another shell, and so on. This is a misunderstanding of the bottom-up approach! Rather than using information from outer shells to achieve better fits on the inner shells (or vice versa), you lock in distortions and make it difficult to evaluate statistically related measures such as error bars. The last sketch below contrasts the two parameter setups.
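(A minimal illustration, with hypothetical parameter names and values, of the difference between locking in first-shell results and merely using them as starting values:)

```python
from larch.fitting import param_group, param, guess

# Not recommended: freeze the first-shell results before adding shell 2.
# With amp and e0 held fixed, their covariance with the new parameters
# never enters the fit, so the error bars on n2, delr2, and ss2 come
# out artificially small.
pars_locked = param_group(amp   = param(0.87, vary=False),  # "set" from fit 1
                          e0    = param(1.3,  vary=False),  # "set" from fit 1
                          n2    = guess(4.0),
                          delr2 = guess(0.0),
                          ss2   = guess(0.003))

# Better: use the first-shell results only as starting values, and let
# the final multi-shell fit vary everything jointly, so the correlations
# are actually explored and show up in the reported uncertainties.
pars_joint = param_group(amp   = guess(0.87),
                         e0    = guess(1.3),
                         n2    = guess(4.0),
                         delr2 = guess(0.0),
                         ss2   = guess(0.003))
```

Hopefully that helps!

--Scott Calvin
Sarah Lawrence College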