Hi Scott,

I may have misunderstood your question, but let me comment on it the way I see the problem.

First, chi(k) is defined only above k=0 (E=E0), of course, and it is defined through mu = mu_o*(1 + chi(k)). chi(k) results from the non-monotonic behavior of the scattered photoelectron wave function and is stripped of any information about mu_o. In that sense, chi(k) is completely independent of mu_o, and all parts of chi(k) that FEFF calculates, including f(k), delta(k) and lambda(k), do not have any idea about mu_o either.

Thus, if FEFF defined chi(k) using (mu - mu_o)/(edge step) instead of (mu - mu_o)/mu_o, which is the correct definition of chi(k), then FEFF would have to know the McMaster correction to recover the more accurate chi(k), or would have to know mu_o, which would be even better; but either option introduces additional theoretical approximations into the more accurate chi(k) that FEFF can produce without tampering with the McMaster correction.

Maybe another way to handle this would be to normalize mu not by the edge step but by mu_o. That would bring the experimental chi(k) into better agreement with the way it is calculated by FEFF and with the theoretical EXAFS equation, and would not require any McMaster correction. It would, of course, make chi(k) unrealistic near the edge, since mu_o near the edge is not reliable, but we are not using chi(k) in the k-range between 0 and k_min anyway, so nobody who uses chi(k) for Fourier transform purposes (in fits or whatever) will notice the difference. Thus, such a procedure would do away with the McMaster correction altogether.

Anatoly

________________________________

From: ifeffit-bounces@millenia.cars.aps.anl.gov on behalf of Scott Calvin
Sent: Thu 6/16/2011 8:28 PM
To: XAFS Analysis using Ifeffit
Subject: [Ifeffit] McMaster correction

Hi all,

I've been pondering the McMaster correction recently. My understanding is that it is a correction needed because, while chi(k) is defined relative to the embedded-atom background mu_o(E), we almost always extract it from our data by normalizing by the edge step. Since mu_o(E) drops gradually above the edge, the normalization procedure yields oscillations that are too small well above the edge, which the McMaster correction then compensates for.

It's also my understanding that this correction is the same whether the data is measured in absorption or fluorescence, because in this context mu_o(E) refers only to absorption due to the edge of interest, which is a characteristic of the atom in its local environment and is thus independent of measurement mode.

So here's my question: why is existing software structured so that we have to put this factor in by hand? Feff, for instance, could simply define chi(k) consistently with the usual procedure, so that it was normalized by the edge step rather than by mu_o(E). A card could be set to turn that off if a user desired. Alternatively, a correction could be applied to the experimental data by Athena, or automatically within the fitting procedure by Ifeffit. Of course, having more than one of those options could cause trouble, just as the ability to put sigma2 into a Feff calculation and into Ifeffit sometimes does now. But wouldn't it make sense to have it available (perhaps even the default) at one of those stages?

--Scott Calvin
Sarah Lawrence College
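
For readers who want to see the two conventions side by side numerically, below is a minimal Python sketch. It is not from either message: the edge energy, the decaying background shape, and the toy single-shell chi(k) are all hypothetical stand-ins. It shows that (mu - mu_o)/mu_o reproduces chi(k) exactly, that dividing by the constant edge step shrinks the oscillations as mu_o(E) decays above the edge, and that one way to express the compensation is a multiplicative factor edge_step/mu_o(E) (in practice the McMaster correction is often folded into fit parameters instead, but the multiplicative form makes the bookkeeping transparent).

import numpy as np

# Toy illustration (all values hypothetical): model a smoothly decaying
# post-edge background mu_o(E) and a simple damped-sine chi(k), then
# compare the two normalization conventions discussed in the thread.

E0 = 7112.0                                  # edge energy, eV (Fe K as an example)
E = np.linspace(E0 + 5.0, E0 + 800.0, 2000)  # post-edge energies only
k = np.sqrt(0.2625 * (E - E0))               # photoelectron wavenumber, 1/Angstrom

edge_step = 1.0                                    # mu_o evaluated at the edge
mu_o = edge_step * (1.0 - 0.2 * (E - E0) / 800.0)  # toy decaying background

# Toy "true" chi(k): one damped shell, standing in for the EXAFS equation.
chi_true = 0.1 * np.sin(2.0 * 2.5 * k) * np.exp(-2.0 * 0.005 * k**2) / k

# Measured absorption, per the definition mu = mu_o * (1 + chi(k)):
mu = mu_o * (1.0 + chi_true)

# Correct definition (what FEFF and the EXAFS equation assume):
chi_by_mu0 = (mu - mu_o) / mu_o

# Usual data-reduction shortcut: normalize by the constant edge step.
chi_by_step = (mu - mu_o) / edge_step

# chi_by_step is too small wherever mu_o has decayed below the edge step;
# a McMaster-style multiplicative correction, edge_step/mu_o(E), restores it.
chi_corrected = chi_by_step * (edge_step / mu_o)

print(np.allclose(chi_by_mu0, chi_true))       # True: definitions agree
print(np.allclose(chi_corrected, chi_by_mu0))  # True: correction recovers chi(k)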