[Ifeffit] amplitude parameter S02 larger than 1

Matt Newville newville at cars.uchicago.edu
Mon Mar 23 23:37:13 CDT 2015


On Mon, Mar 23, 2015 at 9:45 PM, Scott Calvin <scalvin at sarahlawrence.edu>
wrote:

> Hi Anatoly,
>
> The method Ifeffit uses to compute uncertainties in fitted parameters is
> independent of noise in the data because it, in essence, assumes the fit
> is statistically good and rescales the uncertainties so that reduced
> chi-square equals 1. This means that the estimated uncertainties really
> aren't dependable for fits that are known to be bad (e.g., a huge
> R-factor or unrealistic fitted parameters), but since those fits aren't
> generally the published ones, that's OK.
>
> Secondly, the high-R amplitude will not be essentially zero with
> theoretically generated data, even if you don't add noise, because the
> finite chi(k) range produces some truncation ringing even at high R.
>
> Frankly, the default method by which Ifeffit (and Larch? I haven't used
> Larch) estimates the noise in the data is pretty iffy, although there's not
> really a good alternative. The user can override it with a value of their
> own, but as you know, epsilon is a notoriously squirrelly concept in EXAFS
> fitting. The really nice thing about the Ifeffit algorithm is that it makes
> the choice of epsilon irrelevant for the reported uncertainties.
>
> What it is NOT irrelevant for is the chi-square. For this reason, I
> personally ignore the magnitude of the chi-square reported by Artemis, but
> pay close attention to differences in chi-square (actually, reduced
> chi-square) between different fits on the same data.
>
>
I completely agree with this assessment -- fitting test data made from Feff
calculations without added noise does not normally give absurd error bars,
because the rescaling means the estimated uncertainty in the "data" is
mostly unused.
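
To make the rescaling concrete, here is a minimal sketch of the idea in
Python (illustrative only, not Ifeffit's actual source; the function name
and signature are made up):

    import numpy as np

    def rescale_uncertainties(raw_errors, chi_square, n_data, n_varys):
        """Scale raw least-squares uncertainties so reduced chi-square = 1."""
        nu = n_data - n_varys            # degrees of freedom of the fit
        chi2_reduced = chi_square / nu   # > 1: fit worse than assumed noise
        # Scaling by sqrt(reduced chi-square) cancels whatever epsilon was
        # assumed when chi-square was computed, so the reported error bars
        # no longer depend on it.
        return np.asarray(raw_errors) * np.sqrt(chi2_reduced)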

The default method of using the high-R data to estimate the uncertainty can
definitely be called "pretty iffy, but there's not really a good
alternative" for a single spectrum. Using scan-to-scan variations is also a
fine approach, but it can (also) miss some kinds of non-statistical errors.
The high-R method does seem to work reasonably well for *very* noisy data,
though data that noisy is hardly ever analyzed in isolation.
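
For reference, the high-R estimate works roughly like this (a sketch of the
idea in Python; the FFT normalization, R window, and array sizes are
illustrative, not Larch's exact defaults):

    import numpy as np

    def estimate_eps_r(k, chi, kweight=2, rmin=15.0, rmax=30.0, nfft=2048):
        """Estimate the noise level from the high-R part of chi(R).

        Well above any physical path distance, |chi(R)| should contain
        only noise (plus some truncation ringing), so its RMS there gives
        an estimate of epsilon_R.
        """
        dk = k[1] - k[0]
        # Fourier transform of k-weighted chi(k); exp(2ikR) convention
        chir = dk * np.fft.fft(chi * k**kweight, n=nfft) / np.sqrt(np.pi)
        r = np.pi * np.abs(np.fft.fftfreq(nfft, d=dk))
        noise_region = (r >= rmin) & (r <= rmax)
        return np.sqrt(np.mean(np.abs(chir[noise_region])**2))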

I'm not sure what's causing the large S02 values, and haven't looked in
detail at the projects.  But near-neighbor distances of ~3 Ang aren't that
uncommon for metals (silver and gold are 2.9 Ang and lead is 3.5 Ang), and
those work OK -- and don't give very large S02 values.

Are these samples layered and/or anisotropic?  If so, polarization effects
could also affect the amplitudes.
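
For a K edge, each bond contributes roughly 3*cos^2(theta) to the effective
coordination number, where theta is the angle between the X-ray polarization
and the bond direction, so an oriented sample can easily mimic an S02 well
above (or below) 1. A toy illustration in Python (the geometry here is
hypothetical, not taken from the projects in question):

    import numpy as np

    def n_effective(polarization, bond_vectors):
        """Polarization-weighted effective coordination number (K edge)."""
        e = polarization / np.linalg.norm(polarization)
        cos_t = bond_vectors @ e / np.linalg.norm(bond_vectors, axis=1)
        return np.sum(3.0 * cos_t**2)

    # Four in-plane bonds with the polarization in the same plane:
    bonds = np.array([(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)], float)
    print(n_effective(np.array([1.0, 0.0, 0.0]), bonds))  # 6.0 vs isotropic 4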

--Matt

PS: we should implement Hamilton's test (and include other statistics) as
easy-to-run functions in Larch!
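
For comparing two nested fits, the usual F-distribution form of Hamilton's
R-ratio test would be easy to wrap. A sketch (hypothetical function, not an
existing Larch API; fit 1 is the more constrained model and fit 2 the less
constrained one):

    from scipy.stats import f as f_dist

    def hamilton_test(r1, r2, n_varys1, n_varys2, n_independent):
        """P-value that fit 2's lower R-factor is due to chance."""
        b = n_varys2 - n_varys1         # extra parameters in fit 2
        nu = n_independent - n_varys2   # degrees of freedom of fit 2
        fval = (r1**2 / r2**2 - 1.0) * nu / b
        return f_dist.sf(fval, b, nu)

A small p-value (say, below 0.05) suggests the extra parameters in fit 2
genuinely improve the fit.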