Did you do any correction for the multi-electron bumps? Here's a note I made for myself when doing CeO2 EXAFS:

Problem: Ce L3-edge spectra are polluted by multi-electron peaks (MEP), which appear as bumps right in the important range and can corrupt first-shell analysis. Gomilšek et al. (Acta Chim. Slov. 51, 23 (2004)) give estimates for the MEP features of the 3+ and 4+ states.

For 3+, a single Lorentzian peak: height 2.5% of the edge jump, FWHM 17 eV, position 5852 eV.

For 4+, two peaks:

    Pos (eV)   Height   FWHM (eV)
    5857       2.7%     12
    5877       1.1%      7

Their energy calibration is with respect to Cr (5989 eV), but it seems from their Figure 1 that it's the same as mine: the first peak of CeO2 appears at 5730 eV, as I assume. Thus, the procedure might be:

1. Reduce the file to .e (pre-edge subtracted, post-edge normalized).
2. Fit this to 3+ and 4+ references to get the 3+ fraction x3.
3. Subtract from the .e a contribution x3*(3+ MEP) + (1-x3)*(4+ MEP).
4. Convert to k-space and proceed as normal.

Here, ".e" files are pre-edge-subtracted, post-edge-normalized XANES. I wrote a program that fits an unknown .e to a sum of CeO2 and Ce3+ references to get the Ce4+ fraction, then subtracts the peaks. The result is something unpolluted by MEP.

mam

On 7/20/2016 3:59 PM, Neil M Schweitzer wrote:
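The subtraction step of the procedure above can be sketched in a few lines of Python. This is a minimal sketch, not the author's actual program: the function names are mine, and the peak parameters are the Gomilšek et al. values quoted above (heights expressed as fractions of the edge jump).

```python
import numpy as np

def lorentzian(e, e0, height, fwhm):
    """Lorentzian peak with the given peak height at e0 and given FWHM."""
    hw = fwhm / 2.0
    return height * hw**2 / ((e - e0)**2 + hw**2)

def mep_correction(e, x3):
    """MEP model to subtract from a normalized .e spectrum:
    x3 * (Ce3+ peak) + (1 - x3) * (Ce4+ peaks).
    Peak positions/heights/widths are from Gomilsek et al. (2004)."""
    mep3 = lorentzian(e, 5852.0, 0.025, 17.0)
    mep4 = (lorentzian(e, 5857.0, 0.027, 12.0)
            + lorentzian(e, 5877.0, 0.011, 7.0))
    return x3 * mep3 + (1.0 - x3) * mep4

# Usage (mu is the normalized .e spectrum on energy grid `energy`,
# x3 comes from the XANES fit to 3+ and 4+ references):
#   mu_corrected = mu - mep_correction(energy, x3)
```

After this subtraction, the corrected spectrum is converted to k-space and background-subtracted as usual.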
Being somewhere between a novice and an advanced user of XAS, I have two “philosophical” questions about EXAFS fitting (I hope this is the correct forum to ask them!). I also have a bug to report. I have included a project file of a CeO2 powder for those interested.
1) Generally, I have developed my own methodology for EXAFS fitting which I think generates OK fits, at least from the standpoint that the red line seems to follow the blue line pretty closely. My question relates to interpreting the statistics of the fit. Generally, I am studying catalysts (i.e., very small metal or oxide nanoclusters supported on a different oxide phase). Due to the size of the clusters of interest, I wouldn't expect them to resemble a bulk crystal, and therefore it is difficult for me to reduce the number of independent variables using known crystal structures. For example, in this reference material, I am giving every single-scattering path its own delR and ss, and using a single delE and SO2 for all the paths, since they come from the same feff calculation. Once I am through with my own methodology, I can usually get a close fit even while making every variable a guess (refer to fit 17 in the project file). Admittedly, the reduced chi-square is rather large (5262), and the error on each of the DWFs is the same magnitude as the value itself, which is troubling. However, if I take the best-fit values of fit 17, set all of the variables except for the 3 DWFs, and generate a new fit, all of the statistics improve significantly even though the values of the DWFs did not change at all (fit 19). That is, merely setting the other variables improved the fit (the reduced chi-square is now 2951) and improved the errors on the guess variables (the errors on the DWFs are cut close to half). Why is the fit better when none of the values actually changed, and why do the errors improve? How should I report the statistics (i.e., the errors) in a publication? Should I report the fit 17 errors or the fit 19 errors for the DWFs?
2) In terms of the DWFs in general, what value is considered too high? I know the DWFs have a component that relates to temperature-induced disorder in the scattering shell and a component that relates to static (physical) disorder in the shell, but what value would be considered too big for a sample measured at room temperature? I have seen values as high as 0.03 and 0.04 in presentations (sorry, no references), but these seem too large to me. At some point, if the sample is disordered enough, it seems like EXAFS is no longer an appropriate characterization tool to use. What value of the DWF would that represent (for a sample measured at room temperature)?
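One way to get a yardstick for the thermal part of sigma^2 is the Einstein model, sigma^2(T) = hbar^2/(2*mu*kB*theta_E) * coth(theta_E/(2T)); anything well above that at a given temperature points to static disorder. A sketch, where the Einstein temperature and the Ce-O reduced mass used in the example are illustrative assumptions, not fitted values:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053907e-27     # atomic mass unit, kg

def einstein_sigma2(T, theta_E, mu_amu):
    """Thermal sigma^2 (in Angstrom^2) from the Einstein model:
    sigma^2 = hbar^2 / (2*mu*kB*theta_E) * coth(theta_E / (2*T)),
    with reduced mass mu given in amu and temperatures in K."""
    mu = mu_amu * AMU
    coth = 1.0 / math.tanh(theta_E / (2.0 * T))
    return (HBAR**2 / (2.0 * mu * KB * theta_E)) * coth * 1e20  # m^2 -> A^2

# Illustrative: a Ce-O pair (reduced mass about 14.4 amu) with an assumed
# Einstein temperature of 500 K at 300 K gives sigma^2 of a few 1e-3 A^2,
# so a fitted value of 0.03-0.04 A^2 at room temperature would imply a
# very large static contribution on top of the thermal one.
```

With assumptions like these, typical first-shell sigma^2 values at room temperature come out in the 0.003-0.01 A^2 range, which is why 0.03-0.04 looks suspiciously large.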
3) I found a bug in the log files in the history window. I am running Demeter 0.9.21 on Windows 7. When I generate a new fit, if I change the k-range of the fit, the k-range of previous fits in the history window also changes to the current range (e.g., fit 19 was run from 2.1-9.3, as shown in the data window; fits 14-16 were run over smaller ranges). What's even stranger is that if I go to a very old fit (i.e., fit 1 or 2 in the project file) and then go back to the new fit, the log file reports the k-range from the earlier fit, not the current fit. Obviously, this is making it very difficult to keep track of fits generated with different k-ranges. I have not tested whether the R-range behaves similarly.
Neil
_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
Unsubscribe: http://millenia.cars.aps.anl.gov/mailman/options/ifeffit