Hi Brandon,

I don't find this terribly surprising. First, a little background which you may or may not know: reduced chi-square is a statistical parameter which requires a knowledge of the uncertainty of the measurement to compute. In theory, therefore, it "knows" that a "good" fit to noisy data will not be as close in an absolute sense as a "good" fit to high-quality data.

The problem, however, is that it is difficult to know what the proper quantity to use for the uncertainty of the measurement is in EXAFS analysis. One could use the standard deviation of subsequent scans, but that is only sensitive to random scan-to-scan error. Something like, say, a monochromator glitch is quite reproducible, and yet most of us would consider it to be part of the measurement error.

So the default behavior of Ifeffit is to look at the Fourier transform between 15 and 25 angstroms, and figure that any amplitude there is due to error of some kind, and not signal. It then makes the assumption that the same amount of error is present in the range being fit (i.e. the error is "white"), and from there computes the reduced chi-square (there is a rough sketch of this procedure in the P.S. below). This is in some sense a dubious procedure, but the real problem is that we don't have a good method for estimating the measurement uncertainty, so we have to do something.

As long as we are comparing fits to exactly the same data, on the same k-range, with the same k-weight, with the same windows, then changes in reduced chi-square are worth looking at. If all you've done is change a constraint or change the R-range being fit, for instance, a lower reduced chi-square is a good sign (use the Hamilton test if you want to be rigorous about it; there is a small helper for that in the P.S. as well). But change the k-range, or the k-weight, or the window, or the data, and Ifeffit's estimate of the uncertainty can change wildly, causing a correspondingly wild change in reduced chi-square. After all, one glitch toward the end of the k-range you are fitting can introduce a lot of high-R amplitude into the Fourier transform, and different windows would treat it very differently. But single-point glitches often don't have much effect on the results of your fit, precisely because they do affect the high-R part of the Fourier transform much more than the low-R part.

Ifeffit's default behavior can be overridden, if you so choose. The parameter "epsilon" (available on the Data panel of Artemis) overrides Ifeffit's usual estimate for uncertainty. So in your case, I suggest putting a number--any number--in for epsilon, and then comparing fits using the two windows. Probably you will find that the reduced chi-squares become much more similar to each other.

Incidentally, while in this case the default behavior of Ifeffit is merely distracting, there is a circumstance where it can be a more substantial problem: multiple data-set fits (e.g. on multiple edges of the same sample). If Ifeffit finds uncertainties for the different data sets that are quite different from each other because, for instance, of the presence of a glitch in one, it will in effect weight the data very differently when doing a fit. In multiple data-set fits, therefore, it is often advisable to come up with your own scheme for setting epsilons (perhaps inversely proportional to the edge jump of each set, or something like that--see the toy example at the end of the P.S.), to avoid wonky weightings.

--Scott Calvin
Sarah Lawrence College
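P.S. For the curious, here is a rough numpy sketch of the noise-estimation and reduced chi-square business described above. It is my paraphrase of the procedure as I understand it from the FEFFIT documentation, not Ifeffit's actual source code, and the array names (r, chir, and so on) are hypothetical:

    import numpy as np

    def estimate_eps_r(r, chir, rmin=15.0, rmax=25.0):
        # Treat all |chi(R)| amplitude between rmin and rmax (in
        # angstroms), where no structural signal is expected, as
        # white noise; its RMS amplitude is the noise estimate.
        mask = (r >= rmin) & (r <= rmax)
        return np.sqrt(np.mean(np.abs(chir[mask]) ** 2))

    def reduced_chi_square(chir_data, chir_fit, eps_r, n_idp, n_var, fit_mask):
        # FEFFIT-style statistic evaluated over the fitting range.
        # n_idp is roughly 2*dk*dR/pi (the Nyquist estimate).
        n = fit_mask.sum()
        diff = chir_data[fit_mask] - chir_fit[fit_mask]
        chi2 = (n_idp / (eps_r ** 2 * n)) * np.sum(np.abs(diff) ** 2)
        return chi2 / (n_idp - n_var)

The point of the sketch: the noise estimate sits in the denominator, squared, so halving it quadruples reduced chi-square without the fit itself changing at all. That is exactly why a window change that moves a glitch's amplitude around at high R can swing reduced chi-square by a factor of several while barely touching the fitted parameters.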
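Here too is a minimal version of the Hamilton test I mentioned, using scipy. One EXAFS-specific wrinkle is my own assumption, so check it before leaning on it: Ifeffit reports its R-factor as a sum of squares, while Hamilton's ratio is defined on the square root of such a quantity.

    from scipy.stats import f

    def hamilton_threshold(b, nu, alpha=0.05):
        # Hamilton's critical R-factor ratio.
        #   b  = number of parameters the constraint removes
        #   nu = degrees of freedom of the freer fit (N_idp - N_var)
        return (1.0 + (b / nu) * f.ppf(1.0 - alpha, b, nu)) ** 0.5

    # usage sketch, with R_constrained and R_free as Ifeffit R-factors:
    #   ratio = (R_constrained / R_free) ** 0.5
    #   if ratio > hamilton_threshold(b, nu): the extra parameters
    #   improve the fit significantly at level alpha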
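And finally, a toy example of the kind of epsilon scheme I mean for multiple data-set fits. The numbers are entirely made up; the only idea being illustrated is "epsilon inversely proportional to edge jump":

    # made-up edge steps for a hypothetical two-edge fit
    edge_jump = {"Fe K": 0.85, "Ni K": 0.42}
    eps0 = 1.0e-3   # made-up base noise level
    # smaller edge jump -> noisier normalized data -> larger epsilon
    epsilon = {edge: eps0 / jump for edge, jump in edge_jump.items()}

Each of those epsilons would then be entered by hand (e.g. on the Data panel in Artemis) rather than left to Ifeffit's high-R estimate, so that a glitch in one data set can't quietly down-weight it relative to the others.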
On May 11, 2011, at 12:47 PM, Brandon Reese wrote:

Hello everybody,
I am working on fitting some EXAFS of amorphous materials and have noticed an odd (in my mind) behavior when changing transform windows. I settled on a fit using all three k-weights and the Hanning transform window, obtaining statistical parameters of R=0.0018 and chi_R=361. I decided to change the transform window to a Kaiser-Bessel to see what would happen. The refined parameters came out more or less the same, well within the error bars, with the Hanning window giving slightly smaller error bars. But my statistical parameters changed significantly, to R=0.0022 and chi_R=89.354. It seems that this large change may be related to why we can't use the chi_R parameter to compare fits over different k-ranges, but I am not sure about that. Have other people seen this? I would guess it means that when looking for trends across different data sets, what matters most is applying the same window consistently, rather than which specific window type is used.
Thanks, Brandon