Hi Neil,

This might help. It’s an abbreviated version of the discussion of criteria for judging fit quality that I later expanded on in my textbook. Northwestern doesn’t seem to have a copy of the full book, but you can certainly get it via inter-library loan.

—Scott Calvin
Sarah Lawrence College

On Jul 20, 2016, at 6:59 PM, Neil M Schweitzer <neil.schweitzer@northwestern.edu> wrote:

Being somewhere between a novice and an advanced user of XAS, I have two “philosophical” questions about EXAFS fitting (I hope this is the correct forum to ask them!). I also have a bug to report. I have included a project file of a CeO2 powder for those interested.
 
1)      Generally, I have developed my own methodology for EXAFS fitting which I think generates acceptable fits, at least from the standpoint that the red line seems to follow the blue line pretty closely. My question relates to interpreting the statistics of the fit. I generally study catalysts (i.e., very small metal or oxide nanoclusters supported on a different oxide phase). Due to the size of the clusters of interest, I wouldn't expect them to resemble a bulk crystal, so it is difficult for me to reduce the number of independent variables using known crystal structures. For example, in this reference material I am giving every single-scattering path its own delR and ss, and using a single delE and S02 for all the paths since they come from the same FEFF calculation. Once I am through with my own methodology, I can usually get a close fit even while making every variable a guess (refer to fit 17 in the project file). Admittedly, the reduced chi-square is rather large (5262), and the error on each of the DWFs is the same magnitude as the value itself, which is troubling. However, if I take the best-fit values from fit 17, set every variable except the 3 DWFs, and generate a new fit, all of the statistics improve significantly even though the values of the DWFs did not change at all (fit 19). That is, merely setting the other variables improved the fit (the reduced chi-square is now 2951) and improved the errors on the guess variables (the errors on the DWFs are cut nearly in half). Why is the fit better when none of the values actually changed, and why do the errors improve? How should I report the statistics (i.e., the errors) in a publication? Should I report the fit 17 errors or the fit 19 errors for the DWFs?
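[For concreteness, the behavior in question can be sketched arithmetically, assuming Ifeffit's definition of reduced chi-square as chi-square divided by the degrees of freedom (independent points minus guessed variables). The variable counts and chi-square value below are illustrative, not taken from the project file:]

```python
# Sketch: why reduced chi-square drops when guesses are converted to set
# parameters, even though no best-fit value changes. Assumes the Ifeffit
# definition chi2_nu = chi2 / (N_idp - N_var). All numbers are hypothetical.
def reduced_chi_square(chi2, n_idp, n_var):
    """Reduced chi-square: raw misfit divided by degrees of freedom."""
    return chi2 / (n_idp - n_var)

chi2 = 50000.0   # hypothetical raw chi-square (identical for both fits)
n_idp = 15       # hypothetical number of independent points

# Many guessed variables (fit-17 style): few degrees of freedom left.
print(reduced_chi_square(chi2, n_idp, n_var=10))  # -> 10000.0
# Only the 3 DWFs guessed (fit-19 style): more degrees of freedom,
# so the same raw misfit yields a smaller reduced chi-square.
print(reduced_chi_square(chi2, n_idp, n_var=3))
```

The same mechanism affects the error bars: with fewer guessed variables there are fewer correlated parameters over which the uncertainty must be distributed.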
2)      In terms of the DWFs in general, what value is considered too high? I know the DWFs have a component that relates to temperature-induced disorder in the scattering shell and a component that relates to structural disorder in the scattering shell, but what value would be considered too big for a sample measured at room temperature? I have seen values as high as 0.03 and 0.04 in presentations (sorry, no references), but these seem too large to me. At some point, if the sample is disordered enough, it seems like EXAFS is no longer an appropriate characterization tool to use. What value of DWF would that represent (for a sample measured at room temperature)?
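[One way to put numbers on "too high" is via the standard exp(-2 k² σ²) Debye-Waller damping term in the EXAFS equation; the k value and σ² values below are illustrative:]

```python
import math

# Sketch: how strongly a given sigma^2 damps the EXAFS amplitude, using the
# standard Debye-Waller factor exp(-2 k^2 sigma^2). Illustrative values only.
def dw_damping(k, sigma2):
    """Amplitude reduction from the Debye-Waller term at wavenumber k (1/A)."""
    return math.exp(-2.0 * k**2 * sigma2)

# At k = 10 1/A, sigma^2 = 0.03 A^2 suppresses the signal to well under 1%
# of its undamped amplitude, which is why such values raise eyebrows.
for sigma2 in (0.003, 0.01, 0.03):
    print(f"sigma^2 = {sigma2}: damping at k = 10 is {dw_damping(10.0, sigma2):.4f}")
```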
3)      I found a bug in the log files in the History window. I am running Demeter 0.9.21 on Windows 7. When I generate a new fit, if I change the k-range of the fit, the k-range reported for previous fits in the History window also changes to the current range (e.g., fit 19 was run from 2.1-9.3, as shown in the Data window, while fits 14-16 were run over smaller ranges). What's even stranger is that if I go to a very old fit (i.e., fit 1 or 2 in the project file) and then go back to the new fit, the log file will report the k-range from the earlier fit, not the current fit. Obviously, this makes it very difficult to keep track of fits generated with different k-ranges. I have not tested whether the R-range behaves similarly.
 
Neil
<CeO2_MEE.fpj>
_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
Unsubscribe: http://millenia.cars.aps.anl.gov/mailman/options/ifeffit