In another message in that thread, Matt calls that "a reasonable way to start" and makes the valid point that small details can "often get lost in the fit to the more dominant features of the spectra." What I am saying is that the reason that small details "often get lost" is that there are correlations in any data. If you fix one part of the fit and float another part, you have made a very specific decision to ignore an entire set of correlations. That may make the results on those small details much more *precise*, but it most certainly does not guarantee that the results will be more *accurate*.

So yes, you can do what you want to do. I said the same in my last email. But you have to be prepared to defend against the completely valid criticism that doing so arbitrarily removes correlations from the fitting model. That can have an impact on the accuracy of your result. Neither the reduced chi-square (RCS) nor the F test addresses that problem.

Of course, if you dig through all the stuff that I have written over the years, I frequently report (in the case of publications) or recommend (in the case of teaching material) doing things that fall into the category of improving precision while risking accuracy. Indeed, any use of constraints in Artemis could engender this criticism -- and I gas on and on and on about the virtues of constraints whenever I do EXAFS training courses. But I always try to emphasize the importance of honestly assessing the consequences of these actions, both with myself and with my readership. As an example, in http://dx.doi.org/10.1016/j.radphyschem.2009.05.024 I spend a rather long paragraph clearly stating the most egregious approximations I made in the analysis presented in that paper. The remainder of the paper justifies all of that using both the XANES and other published work on the system.

I guess that none of that answered your specific question about the nominal disagreement between the RCS and the F test. I might be exposing a weakness in my own understanding of the F test right now, but I can suggest something to think about. The F test result may be saying something about the normality of the parameters that you actually used in the fit. Try varying some of the procedural parameters of the fit. For example, try limiting or expanding the ranges in k or R by a bit; try different k-weightings; try adding a bit of artificial noise to your chi(k) data -- anything that slightly changes the conditions of the fit without actually changing the details of the model or the information content of the data. Doing so might help clarify what is going on with your statistical tests.
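For the artificial-noise test, something like the following will do the job outside of Artemis. This is only a minimal sketch in Python -- the file name, the two-column layout, and the size of the noise are assumptions about your data, so adjust to taste:

    import numpy as np

    # Minimal sketch: read a two-column chi(k) file, add a little
    # Gaussian noise, and write it back out for re-fitting.
    k, chi = np.loadtxt("mydata.chi", unpack=True)

    # Scale the noise so that it is a small fraction (here 5%) of the
    # RMS of the k^2-weighted chi(k) -- "a bit" of noise, not a lot.
    rng = np.random.default_rng(0)
    kw = np.maximum(k, 0.1) ** 2          # guard against k = 0
    eps = 0.05 * np.sqrt(np.mean((kw * chi) ** 2))
    chi_noisy = chi + rng.normal(0.0, eps, size=chi.size) / kw

    np.savetxt("mydata_noisy.chi", np.column_stack([k, chi_noisy]))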
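And to make the arithmetic of the nominal disagreement concrete: the RCS ratio and the F test are computed from the same chi-square values, but they ask different questions. Here is a minimal sketch of the standard nested-model F test, with invented numbers (not output from any real fit) chosen so that the two tests disagree:

    from scipy.stats import f as f_dist

    # Nested-model F test: is the drop in chi-square from model 1 (p1
    # parameters) to model 2 (p2 > p1 parameters) larger than expected
    # from the extra parameters alone?
    def ftest(chi2_1, p1, chi2_2, p2, n_idp):
        nu2 = n_idp - p2                      # dof of the bigger model
        fval = ((chi2_1 - chi2_2) / (p2 - p1)) / (chi2_2 / nu2)
        pval = f_dist.sf(fval, p2 - p1, nu2)  # small p => model 2 justified
        return fval, pval

    # Invented numbers, with n_idp ~ 2*dk*dR/pi for the fitting window.
    n_idp = 12
    chi2_1, p1 = 40.0, 4
    chi2_2, p2 = 12.0, 7

    print(chi2_1 / (n_idp - p1), chi2_2 / (n_idp - p2))   # RCS: 5.0 -> 2.4
    print(ftest(chi2_1, p1, chi2_2, p2, n_idp))           # F ~ 3.9, p ~ 0.09

Here the RCS drops by more than a factor of two, yet the F test is inconclusive at the 95% level: with so few degrees of freedom left after model 2's parameters, a sizable drop in chi-square is not all that improbable by chance. That, in cartoon form, is how the two statistics can nominally disagree.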
HTH,
B

On Thursday, March 21, 2013 02:38:22 PM Matt Siebecker wrote:
Hello Bruce,
Thanks for your response. By "second shell fits" I mean that the best-fit values for the first shell were determined, then fixed, and the fitting R-range was then moved to the second shell, similar to how Matt Newville describes here:
http://millenia.cars.aps.anl.gov/pipermail/ifeffit/2009-April/008779.html
He describes this as an acceptable approach, although others in the thread disagree.
Essentially, my question is: why do the F-test and the RCS results not agree with each other? The F-test indicates that model 2 may not be a statistical improvement over model 1, while the RCS values show that model 2 is definitely an improvement over model 1. If I consider a greater-than-twofold reduction in the RCS value to be significant, then I would pick model 2.
However, can I apply this logic when fitting the second shell, with the best-fit parameters for the first shell fixed and the R-range set over the second shell?
Thank you again,
Matt S
--
Bruce Ravel  ------------------------------------ bravel@bnl.gov

National Institute of Standards and Technology
Synchrotron Methods Group at NSLS --- Beamlines U7A, X24A, X23A2

Building 535A
Upton NY, 11973

Homepage: http://xafs.org/BruceRavel
Software: https://github.com/bruceravel