Another reason, from my point of view, is that FEFF wasn't accurate enough to use on its own without references.  Also, it can still be argued that
the process of data reduction and filtering produces distortions which aren't captured by using FEFF alone.  Further, if one is comparing
very similar systems, e.g. bulk and nano of the same stuff, then with the exception of multiple scattering, the one system should be an ideal
reference for the other.
 
A trick I used to do:  Suppose, for instance, that I wanted to fit a shell of Zn surrounded by Si (not an actual case).  There's no Si-rich
zinc silicide to use as a reference, but it's not too hard to make theta-CuAl2, a compound in which all Cu atoms are surrounded by Al in the
first shell (this was done).  From this, amp and phase functions could be extracted which refer to Cu looking at Al (Cu->Al). Next, to transform this into
the desired Zn->Si, I would do FEFF calculations on identical structures with the atoms changed around and take:
 
A(Zn->Si, semi-empirical) = A(Cu->Al, expt) * [A(Zn->Si, FEFF) / A(Cu->Al, FEFF)]
phi(Zn->Si, semi-empirical) = phi(Cu->Al, expt) + [phi(Zn->Si, FEFF) - phi(Cu->Al, FEFF)]
 
with A, phi being amplitude and phase for a given shell, and appropriate account being taken of the distance differences involved.
 
This makes sense if you consider A~ = A*exp(i*phi) to be one of the factors in a complex chi~ such that chi = Im(chi~); you're
essentially making a correction to ln(A~).  Yes, this was low-rent and subject to errors, but it seemed reasonable provided
one didn't try to take it too far, for instance by swapping Al for Au or an oxide for a metal.
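 
In case it's useful to anyone, here is a minimal sketch (in Python, with array names of my own invention) of that correction applied to amplitude and phase arrays that have already been put on a common k-grid.  It is not the original code, just the two formulas above spelled out:

import numpy as np

def transfer_amp_phase(A_expt, phi_expt,
                       A_feff_target, phi_feff_target,
                       A_feff_ref, phi_feff_ref):
    """Semi-empirical amplitude/phase transfer for one shell.

    All inputs are arrays on the same k-grid (my assumption):
      A_expt, phi_expt               -- extracted from the measured reference (e.g. Cu->Al)
      A_feff_ref, phi_feff_ref       -- FEFF calculation for that same reference geometry
      A_feff_target, phi_feff_target -- FEFF calculation with the atoms swapped (e.g. Zn->Si)

    The amplitude is corrected multiplicatively and the phase additively,
    which amounts to correcting ln(A~) with A~ = A*exp(i*phi).
    """
    A_semi = A_expt * (A_feff_target / A_feff_ref)
    phi_semi = phi_expt + (phi_feff_target - phi_feff_ref)
    return A_semi, phi_semi

# Note: the original procedure also accounted for the difference in
# interatomic distances between the reference and target structures,
# which is not shown here.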
 
Brings back memories, not all of them fond :-)
    mam
----- Original Message -----
From: Frenkel, Anatoly
To: ifeffit@millenia.cars.aps.anl.gov
Sent: Friday, October 22, 2010 2:38 PM
Subject: Re: [Ifeffit] Asymmetric error bars in IFeffit

On a related subject, now I understand why we use the concept of chemical transferability of amplitudes and phases by recycling the same FEFF path for different systems. The true reason is historic: back then it took one hour for one FEFF calculation....
Anatoly


From: ifeffit-bounces@millenia.cars.aps.anl.gov <ifeffit-bounces@millenia.cars.aps.anl.gov>
To: XAFS Analysis using Ifeffit <ifeffit@millenia.cars.aps.anl.gov>
Sent: Fri Oct 22 16:23:08 2010
Subject: [Ifeffit] Asymmetric error bars in IFeffit

Hi all,

I'm puzzling over an issue with my latest analysis, and it seemed like the sort of thing where this mailing list might have some good ideas.

First, a little background on the analysis. It is a simultaneous fit to four samples, made of various combinations of three phases. Mossbauer has established which samples include which phases. One of the phases itself has two crystallographically inequivalent absorbing sites. The result is that the fit includes 12 Feff calculations, four data sets, and 1000 paths. Remarkably, everything works quite well, yielding a satisfying and informative fit. Depending on the details, the fit takes about 90 minutes to run. Kudos to Ifeffit and Horae for making such a thing possible!

Several of the parameters that the fit finds are "characteristic crystallite radii" for the individual phases. In my published fits, I often include a factor that accounts, in a crude way, for the fact that a phase is nanoscale: it assumes the phase is present as spheres of uniform radius and applies a suppression factor to the coordination numbers of the paths as a function of that radius and of the absorber-scatterer distance. Even though this model is rarely strictly correct in terms of morphology and size dispersion, it gives a first-order approximation to the effect of the reduced coordination numbers found in nanoscale materials. Some people, notably Anatoly Frenkel, have published models that deal with this effect much more realistically. But those techniques also require more fitted variables and work best with fairly well-behaved samples. I tend to work with "messy" chemical samples of free nanoparticles where the assumption of sphericity isn't terrible, and the size dispersion is difficult to model accurately.
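 
For concreteness, here is a minimal sketch of one common closed form for such a suppression factor: the fraction of neighbors at a given distance that remain inside a uniform sphere. Whether this exact expression is the one used in the fits described here is my assumption; it is only meant to illustrate the kind of factor involved (and, with the printed values, the nonlinearity discussed a few paragraphs below):

import numpy as np

def sphere_suppression(r, R):
    """Fraction of coordination number retained for a path of half-length r
    (absorber-scatterer distance) in a spherical crystallite of radius R.

    This is the overlap volume of two spheres of radius R whose centers are
    separated by r, divided by the volume of one sphere:
        f = 1 - (3/4)(r/R) + (1/16)(r/R)**3   for r <= 2R, else 0.
    Units just need to be consistent (both in Angstroms, say).
    """
    x = np.asarray(r, dtype=float) / R
    f = 1.0 - 0.75 * x + x**3 / 16.0
    return np.where(x <= 2.0, f, 0.0)

# A 2.5 Angstrom path in crystallites of radius 1, 10, 100, and 1000 nm:
for R in (10.0, 100.0, 1000.0, 10000.0):          # radii in Angstroms
    print(R / 10.0, "nm ->", round(float(sphere_suppression(2.5, R)), 4))
# prints roughly 0.81, 0.98, 0.998, 0.9998 -- a big change from 1 to 10 nm,
# a negligible one from 100 to 1000 nm.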

At any rate, the project I'm currently working on includes a fitted characteristic radius of the type I've described for each of the phases in each of the samples. And again, it seems to work pretty well, yielding values that are plausible and largely stable.

That's the background information. Now for my question:

The effect of the characteristic radius on the spectrum is a strongly nonlinear function of that radius. For example, the difference between the EXAFS spectra of 100 nm and 1000 nm single crystals due to the coordination number effect is completely negligible. The difference between 1 nm and 10 nm crystals, however, is huge.

So for very small crystallites, IFeffit reports perfectly reasonable error bars: the radius is 0.7 +/- 0.3 nm, for instance. For somewhat larger crystallites, however, it tends to report values like 10 +/- 500 nm. I understand why it does that: it's evaluating how much the parameter would have to change to have a given impact on the chi-square of the fit. And it turns out that once you get to about 10 nm, the size could go arbitrarily far above that and not change the spectrum much at all. But it couldn't go that much lower without affecting the spectrum. So what IFeffit means is something like "the best fit value is 10 nm, and it is probable that the value is at least 4 nm." But it's operating under the assumption that the dependence of chi-square on the parameter is parabolic, so it comes up with a compromise between a 6 nm error bar on the low side and an infinitely large error bar on the high side. Compromising with infinity, however, rarely yields sensible results.
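 
To make that asymmetry concrete, here is a rough sketch of how one might profile chi-square by hand, stepping the radius away from its best-fit value on each side until chi-square rises by some threshold. The function names, the bracketing scheme, and the delta-chi-square threshold are my own assumptions, not anything IFeffit provides, and in practice each evaluation is itself a refit with the radius fixed, so this is essentially the brute-force option mentioned in the list below:

def asymmetric_interval(chi2_of, x_best, target=1.0,
                        step=0.1, max_span=1e3, tol=0.1):
    """Bracket the points where chi2(x) - chi2(x_best) crosses `target`
    on each side of the best-fit value, by stepping outward and bisecting.

    chi2_of  -- callable returning chi-square with the parameter fixed at x
                (in practice, a re-run of the fit with the radius fixed;
                it should reject or handle non-physical values like R <= 0)
    Returns (lower, upper); either can be None if no crossing is found
    within max_span, which is exactly the "error bar is effectively
    infinite on this side" situation described above.
    """
    chi2_min = chi2_of(x_best)

    def find_crossing(direction):
        x_in, x_out = x_best, x_best + direction * step
        # Step outward, doubling the distance, until chi2 exceeds the target.
        while abs(x_out - x_best) < max_span:
            if chi2_of(x_out) - chi2_min >= target:
                break
            x_in, x_out = x_out, x_out + direction * abs(x_out - x_best)
        else:
            return None
        # Bisect between the last point inside and the first point outside.
        while abs(x_out - x_in) > tol:
            mid = 0.5 * (x_in + x_out)
            if chi2_of(mid) - chi2_min >= target:
                x_out = mid
            else:
                x_in = mid
        return 0.5 * (x_in + x_out)

    return find_crossing(-1.0), find_crossing(+1.0)

With a loose tolerance the number of refits per side stays modest, but with nine such radii and 90-minute fits it is of course still a great deal of computer time.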

Thus my question is whether anyone can think of a way to extract some sense of these asymmetric error bars from IFeffit. Here are possibilities I've considered:

--Fit something like the log of the characteristic radius, rather than the radius itself. That creates an asymmetric error bar for the radius once it's transformed back, but the asymmetry of the new error bar has no relationship to the uncertainty it "should" possess (a small numeric illustration of this appears after the list). This seems to me like it's just a way of sweeping the problem under the rug and is potentially misleading.

--Rerun the fits setting the variable in question to different values to probe how far up or down it can go and have the same effect on the fit. But since I've got nine of these factors, and each fit takes more than an hour, the computer time required seems prohibitive!

--Somehow parameterize the guessed variable so that it does tend to have symmetric error bars, and then calculate the characteristic radius and its error bars from that. But it's not at all clear what that parameterization would be.

--Ask the IFeffit mailing list for ideas!
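 
Here is the numeric illustration promised in the first option above, with made-up numbers of my own rather than values from the actual fits: if a fit returns ln(R) with a symmetric uncertainty, the back-transformed bounds on R are R*exp(+/-sigma), so the asymmetry is fixed entirely by the exponential and knows nothing about the real shape of chi-square.

import math

# Hypothetical numbers, purely for illustration: a fit to ln(R/nm) that
# returns 2.303 +/- 0.5, i.e. R_best = 10 nm.
lnR, sigma = math.log(10.0), 0.5
R_low, R_best, R_high = math.exp(lnR - sigma), math.exp(lnR), math.exp(lnR + sigma)
print(f"R = {R_best:.1f} nm, range {R_low:.1f} to {R_high:.1f} nm")
# -> R = 10.0 nm, range 6.1 to 16.5 nm.  Asymmetric, yes, but the ratio of
# the upper half-width to the lower one is exp(sigma) no matter how
# chi-square actually depends on R, so it says nothing about the real
# "at least 4 nm, maybe much more" situation described above.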

Thanks!

--Scott Calvin
Sarah Lawrence College

