Thanks, all! Here's what I got out of the discussion: FEFF is calculating the "correct" chi(k), and applying an approximate correction introduces additional sources of error.

But the only way to measure chi(k) is to extract it from unnormalized data, and the original definition of chi was an arbitrary, if sensible, one: chi(E) = mu(E)/mu_o(E) - 1. And mu_o(E), while not known with great accuracy, depends only on the element and the edge (perhaps excepting minor contributions from AXAFS). Not applying a correction, whether McMaster or something more accurate (such as the ones Anatoly and John suggested), is equivalent to using the approximation mu_o(E) = mu_o(E_o), which is less accurate than the alternatives. On the other hand, the effect is almost entirely a shift in the absolute (as opposed to relative) value of sigma^2.

Considering that, it seems to me that applying such a correction would be a good option for Athena when calculating chi(k). (I think it would be more problematic to apply when calculating normalized energy-space data, as in that case the correction would depend on instrumental effects and the absorption of other edges in the sample.)

So, Bruce, I guess this was first a discussion and then a feature request. :)
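To put a rough number on that last point, here is a back-of-the-envelope sketch (in Python) of how the missing mu_o(E)/mu_o(E_o) factor translates into an apparent sigma^2 offset. The E^-3 falloff, edge energy, and k range below are illustrative stand-ins, not the actual McMaster or Cromer-Liberman values:

    import numpy as np

    E0 = 7112.0                      # example: Fe K edge, in eV
    k = np.linspace(2.0, 14.0, 200)  # photoelectron wavenumber, 1/Angstrom
    E = E0 + 3.81 * k**2             # photon energy corresponding to k, in eV

    # mu_o(E)/mu_o(E_o): crude Victoreen-like E^-3 falloff as a stand-in
    # for McMaster or Cromer-Liberman tables.
    mu0_ratio = (E / E0)**-3

    # Normalizing by the edge step instead of mu_o(E) damps chi(k) by
    # mu0_ratio. Fitting ln(mu0_ratio) against k^2 gives a slope of
    # -2*sigma^2_mm, i.e. the shift in the absolute value of sigma^2.
    slope, intercept = np.polyfit(k**2, np.log(mu0_ratio), 1)
    sigma2_mm = -slope / 2.0
    print("apparent sigma^2 offset ~ %.5f Angstrom^2" % sigma2_mm)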
--Scott Calvin
Sarah Lawrence College

On Jun 17, 2011, at 4:14 AM, John J. Rehr wrote:

Hi Scott et al.,
Thanks for bringing up this issue. Whether or not McMaster corrections are useful does seem to depend on details of the measurement. But my question is: for the cases where they are useful, can one do better? As the data and theory get better and better, perhaps we should try to extract more accurate cross sections mu(E). For example, would it be of interest to have embedded-atom cross sections to replace the atomic Cromer-Liberman cross sections or empirical tables?
John
On Thu, 16 Jun 2011, Scott Calvin wrote:
Hi all, I've been pondering the McMaster correction recently.
My understanding is that it is a correction because, while chi(k) is defined relative to the embedded-atom background mu_o(E), we almost always extract it from our data by normalizing by the edge step. Since mu_o(E) drops gradually above the edge, the normalization procedure results in oscillations that are too small well above the edge, which the McMaster correction then compensates for.

It's also my understanding that this correction is the same whether the data are measured in absorption or fluorescence, because in this context mu_o(E) refers only to absorption due to the edge of interest, which is a characteristic of the atom in its local environment and is thus independent of measurement mode.
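As a minimal sketch of what that correction amounts to (assuming chi has already been extracted by normalizing to the edge step, and that mu0_tab(E) is some tabulated bare-atom absorption for the edge of interest; the names here are placeholders, not an existing Athena or Ifeffit routine):

    import numpy as np

    # Sketch only: rescale edge-step-normalized chi(k) back to the
    # mu_o(E) convention. mu0_tab(E) stands in for a tabulated bare-atom
    # absorption (McMaster, Cromer-Liberman, ...) for the element and edge.
    def mcmaster_correct(k, chi_exp, mu0_tab, E0):
        E = E0 + 3.81 * k**2              # photon energy at each k, in eV
        ratio = mu0_tab(E0) / mu0_tab(E)  # > 1 above the edge, since mu_o falls off
        return chi_exp * ratio

    # Example use with a crude E^-3 stand-in for mu0_tab:
    # k = np.linspace(2.0, 14.0, 200)
    # chi_corr = mcmaster_correct(k, chi_exp, lambda E: E**-3, 7112.0)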
So here's my question: why is existing software structured so that we have to put this factor in by hand? Feff, for instance, could simply define chi(k) consistently with the usual procedure, so that it was normalized by the edge step rather than mu_o(E). A card could be set to turn that off if a user desired. Alternatively, a correction could be done to the experimental data by Athena, or automatically within the fitting procedure by Ifeffit.
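As a sketch of what the fitting-side version might look like: if the data are left uncorrected, the model chi from Feff could instead be damped by approximately exp(-2*k^2*sigma2_mm), with sigma2_mm the small offset estimated in the sketch above. Again, this is illustrative only, not an actual Feff or Ifeffit interface:

    import numpy as np

    # Sketch only: the correction folded into the model instead of the data,
    # roughly as a fitting code might do it. sigma2_mm is the apparent
    # sigma^2 offset from the earlier estimate; the function name and
    # signature are illustrative.
    def damp_model_chi(k, chi_theory, sigma2_mm):
        return chi_theory * np.exp(-2.0 * sigma2_mm * k**2)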
Of course, having more than one of those options could cause trouble, just as the ability to put sigma2 into a Feff calculation and into Ifeffit sometimes does now. But wouldn't it make sense to have it available (perhaps even the default) at one of those stages?
--Scott Calvin
Sarah Lawrence College