[Ifeffit] Question about k-weighting real-world practice and outcomes

Matt Newville newville at cars.uchicago.edu
Sun Dec 16 10:07:51 CST 2018


Hi John,

On Fri, Dec 14, 2018 at 7:56 PM John Ferre <jbferre at uw.edu> wrote:

> Dear EXAFS folk,
>
> I'm a graduate student in Jerry Seidler's group at the University of
> Washington, Seattle.  I've been doing a numerical and experimental study on
> how to optimize k-weighting for EXAFS when the total experimental
> measurement time for a study is constrained, i.e., when you have 1 hour at
> some given I0 flux and your job is to get the best results, in terms of
> best fits or cleanest g(R).
>
> Can anyone point me to 'famous' papers (or at least 'standard' papers) on
> the best use of k-weighting? Also, it would be great if people could
> informally email me how they, personally, do k-weighting and what their
> personal experience has been. My email is jbferre at uw.edu.
>
>
Are you asking about weighting the integration time for data collection, or
weighting the chi(k) data for analysis?  I'm not sure the answer is really
different, but I'm also not sure that this is really an answer or even that
there is a single answer.

I am not aware of any papers that analyze the selection of k-weighting to
optimize data collection efficiency or to optimize actual results of
refinements.  I believe that the idea of always and only k-weighting by a
simple power of k is probably historical: while easy to do and explain, and
clearly helpful for many cases, it is not really all that well justified.
For sure, when extracting structural parameters from a refinement, it is
definitely advantageous to k-weight the data.  But I don't think you're
going to find that some k-weight is always best.

There is a 1/k in the EXAFS equation, and F(k) generally decays with k.
The challenge is that this decay changes with Z.  So, "normally" one might
say that using a k-weight of 2 or 3 is necessary to see lighter elements at
high k.  In fact, F(k) for oxygen sort of looks like k^{-2} at high k, so
if the goal is to refine g(R) for light elements, using a k-weight of 3
seems like a reasonable choice.  If there are heavier scatterers, those
will usually dominate the high-k portion of chi(k).  Disorder terms in g(R)
also strongly reduce chi(k) as a function of k, and k-weighting can sort of
help compensate for that and give a relatively strong and uniform signal
over the full k range.  But these decays are only approximately power laws
and differ from scatterer to scatterer, so any attempt to normalize so that
the g(R) for any scatterer is uniformly sampled over k is probably not
realistic.
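
To make that concrete, here is a minimal numpy sketch of Fourier
transforming k**kweight * chi(k).  This is a simplification, not what
Larch or Athena actually do; the Hanning window, the uniform k grid, and
the normalization are all assumptions of mine:

    import numpy as np

    def xafs_ft(k, chi, kweight=2, nfft=2048):
        # Sketch of the XAFS Fourier transform of k**kweight * chi(k).
        # Assumes chi(k) on a uniform k grid (1/Ang); real codes add
        # window sills, a choice of window functions, etc.
        kstep = k[1] - k[0]
        wchi = chi * k**kweight            # apply the k-weight
        wchi = wchi * np.hanning(len(k))   # crude apodizing window
        padded = np.zeros(nfft)
        i0 = int(round(k[0] / kstep))      # keep data at its true k index
        padded[i0:i0 + len(k)] = wchi
        chir = kstep * np.fft.fft(padded) / np.sqrt(np.pi)
        r = np.pi * np.arange(nfft // 2) / (kstep * nfft)
        return r, np.abs(chir[:nfft // 2])

Running this on the same chi(k) with kweight=1, 2, and 3 and comparing
|chi(R)| is a quick way to see how much the choice changes which
scatterers dominate.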

For many analyses, especially from dilute samples measured in fluorescence,
the noise in the data is appreciable and has an important contribution from
counting statistics.   Some (perhaps most?) data acquisition programs can
k-weight the collection time with a power of k, which will try to make the
counting noise uniform for k^(w) * chi(k) for some k-weight w.  That
definitely misses important sources of noise, but I think the general
observation is that this does actually help for many cases. I don't think
there is much work analyzing this process in detail.  There is always a
preference to reduce noise levels over characterizing noise levels in
excruciating detail.
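
For what it's worth, the counting-statistics argument can be made explicit:
with Poisson noise, sigma_chi scales like 1/sqrt(t), so making the noise in
k**w * chi(k) flat implies dwell times growing like k**(2w).  A hypothetical
sketch (not any particular acquisition program's scheme):

    import numpy as np

    def dwell_times(k, kweight=2, t_total=3600.0):
        # Poisson noise gives sigma_chi ~ 1/sqrt(t), so t(k) ~ k**(2*kweight)
        # roughly flattens the noise in k**kweight * chi(k).  The schedule
        # is scaled so the whole scan takes t_total seconds.
        t = k ** (2 * kweight)
        return t * (t_total / t.sum())

    k = np.arange(2.0, 14.0, 0.05)   # k grid in 1/Ang
    t = dwell_times(k, kweight=2, t_total=3600.0)

Note how steep that schedule is (for kweight=2 the last point gets
thousands of times the dwell of the first), which suggests why a smaller
power or a cap on the maximum dwell might be the compromise made in
practice.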

That is definitely not an answer, but I hope it helps,

--Matt