Hi Wojciech,

I have another suggestion. It is my secret, completely untested belief (which I am now revealing to everyone on this mailing list!) that some of the cases of "successful" fits using multiple E0's are masking problems caused by not considering a third cumulant.

For those who may not know the role of this parameter, it in essence measures asymmetry in the distribution associated with a path. For example, if a pair of atoms is more likely to be separated by a distance considerably larger than the mean separation than by a distance considerably smaller than the mean separation, then the third cumulant is positive. (The mathematical definition is that the third cumulant is the mean cube of the deviation from the mean, in the same sense that sigma2 is the mean square of the deviation from the mean.)

In most cases, the third cumulant is small. Nevertheless, if it were 0 in all cases, then materials would not show any expansion with temperature! From symmetry arguments, it is pretty clear that the third cumulant is most likely to be significantly nonzero for nearest-neighbor paths.

What does this have to do with fitting different E0's? E0 and the third cumulant both affect the phase of the EXAFS signal, although they are weighted in different ways by k. As a result, if a nonzero nearest-neighbor third cumulant is called for, allowing a different E0 for the nearest-neighbor paths instead would probably also improve the fit statistically. In this case, however, while the use of a third cumulant can be justified on physical grounds relatively easily, the use of a separate E0 is an arbitrary, non-physical attempt to improve the statistics.

So as far as I am concerned, if my fit is not quite working out, I am more inclined to let the third cumulant vary for paths in the first coordination shell than to introduce multiple E0's. In fact, I usually do this at some point during the fitting process even if my fit is behaving fairly well, to reassure myself that the third cumulant is 0 to within the uncertainty of the fit and that constraining it to 0 is not distorting the values of the parameters I am interested in.

Take all of this with a grain of salt; I wrote my dissertation on the third cumulant, and, to paraphrase Bruce, since I've spent a lot of time making a nice hammer, everything tends to look like a nail...

--Scott Calvin
Sarah Lawrence College
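(For reference, the different k-weightings Scott mentions can be written out explicitly. These are the standard cumulant-expansion expressions, not anything taken from his fits; here R is the path half-length and Delta E is an E0 shift in eV.)

```latex
% Oscillatory part of a single path, cumulant expansion through third order:
%   chi_path(k)  ~  sin( 2kR + phi(k) - (4/3) C_3 k^3 + ... )
\Phi_{C_3}(k) \;=\; -\tfrac{4}{3}\,C_3\,k^{3}

% An E0 shift moves the photoelectron wavenumber,
%   k' = \sqrt{k^{2} - (2m/\hbar^{2})\,\Delta E},
% with 2m/\hbar^2 \approx 0.2625\ \mathrm{eV^{-1}\,\AA^{-2}},
% so to leading order the phase 2kR changes by
\Phi_{\Delta E}(k) \;\approx\; -\,\frac{0.2625\,R\,\Delta E}{k}
```

One term grows as k^3 while the other falls off as 1/k, which is why a floated E0 can partially, but only partially, stand in for a missing C3 over a finite k-range.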
I would like to address a couple of questions which are partially related to my recent struggles in fitting some EXAFS data. I'm trying to fit my data using several shells of different neighbors, including a few single scattering paths and also some multiple scattering contributions (mainly collinear multiple scattering paths), all calculated with the help of FEFF 8.20.

Now, I once found in the FEFFIT manual the following suggestion: one might consider using several different E0's for different paths in order to improve the fit. The explanation was based on some approximations in the FEFF code, which include incomplete core-hole shielding, the lack of angular variations of the valence charge distribution, and charge transfer between atoms in polar materials.

My question is the following: does anyone of you have experience with such a procedure? And if so, should one distinguish between the first shell of nearest neighbors and the rest of the atoms in terms of their E0 corrections (using two parameters)? Or perhaps one can use separate E0's for each path?
On Friday 28 May 2004 01:42 pm, Scott Calvin wrote:

SC> It is my secret, completely untested belief (which I am now revealing
SC> to everyone on this mailing list!), that some of the cases of
SC> "successful" fits using multiple E0's are masking problems caused by
SC> not considering a third cumulant.
SC> [...]
SC> In most cases, the third cumulant is small. Nevertheless, if it were
SC> 0 in all cases, then materials would not show any expansion with
SC> temperature!

This is a really good point. A non-zero C3 is often much easier to justify physically than a second E0 parameter.

As I recall, Wojciech is working on some kind of solvated complex. Without the rigidity of a crystal form, I think that it is quite reasonable to expect that something solvated would have a measurable C3.

B

--
Bruce Ravel  ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory           phone: (1) 202 767 2268
Washington DC 20375, USA            fax:   (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage:    http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
Scott said:
It is my secret, completely untested belief (which I am now revealing to everyone on this mailing list!), that some of the cases of "successful" fits using multiple E0's are masking problems caused by not considering a third cumulant.
Hmm, really??? I'm not sure about that. Even disregarding the different k-dependences of E0 and C3, I'm not sure how using different E0's for different paths (especially with the notion that they are to be applied to different coordination species) could mask a third cumulant for a particular path. Do you have an example?

As discussed earlier, using different E0's for different shells does have some physical interpretation: that the single, flat energy origin from the muffin-tin approximation is incomplete. That's about as physical as the first E0 and S02. My guess is that a second E0 is about as likely to be needed as a non-zero C3, unless you have purposely disordered (that is, hot) samples.

It's certainly easy enough to add fudge factors to a fit. The hope is that the model has some physical meaning or that a reviewer would catch abuses!

--Matt
Matt said:
Hmm, really??? I'm not sure of that. Even disregarding the different k-dependences of E0 and C3, I'm not sure how using different E0's for different paths (especially with the notion that they are to be applied to different coordination species) could mask a third cumulant for a particular path. Do you have an example?
I never followed up on a case where I've seen it (except maybe to ask a question of the speaker), but I'm sure I've seen talks where someone revealed that they routinely use a different E0, not for different species, but for the first coordination shell. That's the case that sets off my alarm bells. If someone is applying different E0's to different atom types, as Shelly suggests, then I am much less likely to suspect they are really trying to mask a C3 effect.
As discussed earlier, using different E0's for different shells does have some physical interpretation: that the single, flat energy origin from the muffin tin approximation is incomplete. That's about as physical as the first E0 and S02. My guess is that a second E0 is about as likely to be needed as non-zero C3 unless you have purposely disordered (that is, hot) samples.
I've successfully modeled thermal expansion in fcc metals below room temperature by using a C3, and it requires nearest-neighbor C3's that are significantly nonzero. So although I won't dispute that there may be common materials where more than one E0 is needed for high accuracy (oxides?), it also appears to be true that everyone's favorite practice problem (good old copper) requires a non-zero nearest-neighbor C3 for high-accuracy fits.
It's certainly easy enough to add fudge factors to a fit. The hope is that the model has some physical meaning or that a reviewer would catch abuses!
Amen! :)

--Scott Calvin
Sarah Lawrence College
Hi Scott,

Oh, I definitely agree that C3 can be important, even below room temperature for some systems. The questions are (or were) whether using more than one E0 is ever needed and whether it can be justified. It seems to me that the answer to that is yes.

You suggested that fits using multiple E0s might really be masking the need for a C3, compensating for it with an additional E0 as a fudge factor. It should be easy to make up some data with a non-zero C3 and try to fit it with multiple E0s, or to make up data with multiple E0 shifts and see how well it could be fit with a C3. That might be a fun little project for a student. If someone were to try this, I'd recommend adding some noise to the "data" using random(dist=normal).

--Matt
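Here is a minimal numerical sketch of the kind of test Matt describes, written in plain NumPy rather than Ifeffit; the path length, the C3 value, and the noise level are made-up numbers for illustration. It builds the phase perturbation from a hypothetical C3 and then asks how much of it a single E0 shift can absorb in a least-squares sense.

```python
import numpy as np

k = np.arange(3.0, 15.0, 0.05)   # k grid, 1/Angstrom
R = 2.55                          # hypothetical path half-length, Angstrom
C3 = 2.0e-4                       # hypothetical third cumulant, Angstrom^3
ETOK = 0.2625                     # 2m/hbar^2 in eV^-1 Angstrom^-2

# Phase perturbation produced by the non-zero C3 (cumulant-expansion term),
# plus some Gaussian noise standing in for measurement noise.
dphi_c3 = -(4.0 / 3.0) * C3 * k**3 + np.random.normal(scale=0.01, size=k.size)

# Leading-order phase change produced by an E0 shift dE (in eV).
def dphi_e0(dE):
    return -ETOK * R * dE / k

# One-parameter least squares: how much of the C3 phase can a single dE absorb?
basis = dphi_e0(1.0)
dE_best = np.dot(dphi_c3, basis) / np.dot(basis, basis)
resid = dphi_c3 - dphi_e0(dE_best)

rms = lambda x: np.sqrt(np.mean(x**2))
print(f"best-fit E0 shift: {dE_best:.2f} eV")
print(f"rms phase before: {rms(dphi_c3):.3f} rad, after: {rms(resid):.3f} rad")
```

With numbers like these one would expect the fitted E0 shift to soak up part of the phase but leave a residual that grows toward high k, which is the kind of signature to look for when deciding between the two parameters.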
Hi Matt,

As I was working on my talk for the EXAFS workshop, I came across a rather technical question: how does Ifeffit choose the number of points for the Fourier transforms?

As I recall, FEFFIT used an algorithm which required an integer power of 2, so its default behavior was to use the smallest integer power of 2 greater than the number of points in the chi(k) data, and then pad the rest with 0's. This default behavior could also be over-ridden if desired.

It doesn't look to me like Ifeffit is handling that in the same way--is it always using a very large number of points and padding with 0's, or is it using an algorithm that doesn't require the integer power of 2, or is it interpolating the chi(k) on to a grid of the appropriate size first, or...?

I appreciate any info you can give me on this.

--Scott Calvin
Sarah Lawrence College
Hi Scott,

All Fourier transforms in Ifeffit use arrays of length 2048, padding with zeros as needed.

In earlier versions of Feffit, doing repeated FTs in the fitting loop was noticeably slow; that is probably one of the reasons fitting in R-space was uncommon. Using arrays of length N_fft = 1024, 512, or 256 definitely helped speed things up. In principle, one could use N_fft ~= N_idp, but it turns out that if you use too few FT points, you can easily fall into a false local minimum. So Feffit used 1024, 512, or 256 for fits, and then went back to 2048 to write out the data. Strictly speaking, the FFT doesn't need an array size that is a power of 2, but it turns out to make a very noticeable difference in speed with the FFT routines used.

With Ifeffit, several "anti-optimizations" were made that make the fits slightly slower but more straightforward and simple (relatively speaking, anyway!). The use of double precision and N_fft = 2048 are the main examples. Part of the reason for sticking with 2048 is that you want to be able to view the data at any point in the process, so 'pretty output' is always needed. Also, I doubt many people will be trying to run Artemis on a 386 or a microVAX.

--Matt
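For anyone who wants to see the consequence of those numbers, here is a rough NumPy stand-in for the zero-padded transform Matt describes (not Ifeffit's own routine; the dk = 0.05 1/Angstrom grid spacing and the toy chi(k) are assumptions for illustration):

```python
import numpy as np

nfft, dk = 2048, 0.05
k = np.arange(0.0, 20.0, dk)                            # chi(k) out to 20 1/Angstrom
chi = np.sin(2 * 2.55 * k) * np.exp(-2 * 0.005 * k**2)  # toy single-shell signal

padded = np.zeros(nfft)
padded[: k.size] = chi * k**2               # k^2-weight, then pad with zeros to 2048
chir = np.fft.fft(padded)[: nfft // 2]      # keep the positive-R half

dr = np.pi / (nfft * dk)                    # R-grid spacing of chi(R)
print(f"dR = {dr:.4f} Angstrom over {nfft // 2} points "
      f"(R_max = {dr * (nfft // 2 - 1):.1f} Angstrom)")
```

If the 0.05 1/Angstrom k-grid is right, N_fft = 2048 gives an R-grid spacing of pi/(N_fft * dk), about 0.031 Angstrom, regardless of how many points of real data are in the chi(k) array.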
participants (3)

- Bruce Ravel
- Matt Newville
- Scott Calvin