[Ifeffit] McMaster correction

Matt Newville newville at cars.uchicago.edu
Thu Jun 16 22:10:42 CDT 2011


Hi Scott,

I would say that it does not belong in Feff, as it is a correction for
the approximation of using a constant value to normalize mu(E) when
extracting chi(k).  That is, in
   chi(E) = [mu(E) - mu_0(E)] / Edge_Step

The approximation is in using a constant value for Edge_Step, while we
ought to normalize by a value that decays with energy as mu(E) does.
I don't think Feff should try to include a correction that makes
assumptions about how chi(k) is extracted from mu(E).
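
To make that concrete, here is a minimal sketch (plain Python, not
Ifeffit or Athena code) contrasting the two normalizations.  The names
mu, mu0, mu0_atomic, and edge_step are just placeholders of mine:
arrays on a common energy grid from one's own background subtraction,
plus a tabulated bare-atom curve scaled to match the edge step just
above the edge.

    def chi_constant_step(mu, mu0, edge_step):
        # the usual extraction: divide by a single number, the edge step
        return (mu - mu0) / edge_step

    def chi_decaying_norm(mu, mu0, mu0_atomic):
        # a McMaster-like extraction: divide by a curve that decays with
        # energy as the tabulated bare-atom absorption does, scaled so
        # that it matches the edge step just above the edge
        return (mu - mu0) / mu0_atomic

The second form is all a McMaster-style correction really amounts to;
the points below are about where that decaying curve should come from
and which program should apply it.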

A McMaster-like correction could be included as a default when
normalizing data, were it not for a few points:
  1.  Hardly anyone measures mu(E) with enough accuracy for the
measured decay to be useful by itself -- tabulated values for the
decay of mu(E) would have to be used (hence the name McMaster).  Of
course, the McMaster correction should be applied to fluorescence as
well as transmission data, and the two typically have very different
instrumentation drifts.

  2.  Which tables should be used, and how accurate are they?

  3.  You have to know the absorbing element and excited edge in order
to make the correction.  I think this is not so easy to automate.

  4.  Ignoring the McMaster correction adds a small static component
to sigma^2, at least for most hard x-ray edges (it's more serious for
low energy edges).  So the error is in the absolute value of sigma^2,
but not in the relative values between two spectra on the same
element/edge (a rough sketch of estimating it from tabulated values
follows below).
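
Here is that sketch: plain Python with numpy, not Ifeffit code.  The
inputs energy, mu_atomic, and e0 are assumed to come from whatever
tables one trusts; only the leading k^2 term is fit, and the quartic
(fourth-cumulant-like) term is ignored.

    import numpy as np

    ETOK = 0.2624682917   # 2m_e/hbar^2: (E - E0) in eV -> k^2 in 1/Ang^2

    def mcmaster_sigma2(energy, mu_atomic, e0):
        # effective sigma^2 (Ang^2) picked up by normalizing with a
        # constant edge step instead of a curve that decays like the
        # tabulated bare-atom absorption mu_atomic(E)
        above = energy > (e0 + 20.0)        # stay clear of the edge region
        k2 = ETOK * (energy[above] - e0)    # k^2 over the fit range
        norm = mu_atomic[above] / mu_atomic[above][0]
        # ln(norm) ~ -2 * sigma^2 * k^2, so a linear fit in k^2 gives sigma^2
        slope, _ = np.polyfit(k2, np.log(norm), 1)
        return -0.5 * slope

Fitting against k^2 rather than energy makes the number directly
comparable to the sigma^2 values one would report from a fit.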

Hope that helps.    I have to admit I'm a little uneasy with the
frequency of "I've been pondering..." discussion topics alternating
with requests to review book chapters, and find myself being more
cautious in my response than I would be if someone were actually
asking a question.   On the other hand, I don't think anyone would
object if
you added a button to Athena that normalized the data in a way that
included a correction for the expected decay of mu(E).

Cheers,

--Matt

On Thu, Jun 16, 2011 at 7:28 PM, Scott Calvin <dr.scott.calvin at gmail.com> wrote:
> Hi all,
>
> I've been pondering the McMaster correction recently.
>
> My understanding is that it is a correction because while chi(k) is defined
> relative to the embedded-atom background mu_o(E), we almost always extract
> it from our data by normalizing by the edge step. Since mu_o(E) drops
> gradually above the edge, the normalization procedure results in
> oscillations that are too small well above the edge, which the McMaster
> correction then compensates for. It's also my understanding that this
> correction is the same whether the data is measured in absorption or
> fluorescence, because in this context mu_o(E) refers only to absorption due
> to the edge of interest, which is a characteristic of the atom in its local
> environment and is thus independent of measurement mode.
>
> So here's my question: why is existing software structured so that we have
> to put this factor in by hand? Feff, for instance, could simply define
> chi(k) consistently with the usual procedure, so that it was normalized by
> the edge step rather than mu_o(E). A card could be set to turn that off if a
> user desired. Alternatively, a correction could be done to the experimental
> data by Athena, or automatically within the fitting procedure by Ifeffit.
>
> Of course, having more than one of those options could cause trouble, just
> as the ability to put sigma2 into a Feff calculation and into Ifeffit
> sometimes does now. But wouldn't it make sense to have it available (perhaps
> even the default) at one of those stages?
>
> --Scott Calvin
> Sarah Lawrence College



