[Ifeffit] A basic question about data collection

Matt Newville newville at cars.uchicago.edu
Fri Aug 26 09:54:46 CDT 2005


Hi Scott, Anatoly,

Anatoly wrote:

> I am probably missing the point, but it is not immediately
> obvious to me why the following are equivalent in terms of
> improving the signal-to-noise ratio: a) constant E-space
> increment and b) constant k-space increment combined with
> k-dependent integration time.

I think they are pretty much equivalent, though k-weighting the
collection time is preferred, if only because it is more
flexible.

> In a), the data cluster at high E, but each data point in E
> corresponds to a different final state and thus is unique.

Not quite.  Each data point in E corresponds to a set of final
states with a finite energy width (core-hole lifetime, energy
resolution), so mu(E) and mu(E + 0.01eV) are not unique
measurements.  More importantly, at high energies, mu(E) and
mu(E + 2eV) are not unique measures of the EXAFS oscillations
due to atoms within 10Ang of the absorber.  The important thing
to sample (i.e., analyze) is the frequency content of chi(k)
below 10Ang, not mu(E).
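
To put rough numbers on that, here's a minimal sketch of my own
(not from Ifeffit; it just uses the free-electron conversion
k[Ang^-1] = 0.5123*sqrt(E-E0 [eV]) and the fact that chi(k) from
atoms within R_max oscillates no faster than sin(2*k*R_max)):

    import numpy as np

    # sampling chi(k) out to R_max only needs dk <= pi/(2*R_max)
    r_max = 10.0                      # Ang
    dk_max = np.pi / (2.0 * r_max)    # ~0.157 Ang^-1

    # the matching energy step grows with k: E - E0 = 3.81*k^2
    # implies dE = 7.62*k*dk (eV, with k in Ang^-1)
    for k in (4.0, 10.0, 18.0):
        de_max = 7.62 * k * dk_max
        print(f"k = {k:4.1f} Ang^-1: energy steps up to "
              f"{de_max:4.1f} eV are fine")

By k = 18Ang^-1 even ~20eV steps would satisfy the sampling
requirement for R < 10Ang, which is why 2eV steps there are
redundant.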

> Averaging over E-space data in the small interval Delta E,
> (1/Delta E)*Int [xmu(E) dE] is not equivalent to the time
> average of xmu(E) collected at a fixed E: (1/T)*Int [xmu(E)
> dt].

As long as you integrate within the energy resolution limit, it
is.  But, in general, you're right: this is why I would think
that a rolling average (e.g., convolving with a 2eV Lorentzian)
would be the best way to handle rebinning of QEXAFS data.  I
believe a boxcar average works well enough because we're
interpolating to a fine k-grid.  With a k-grid of 0.05Ang^-1,
you're sampling distances out to 31Ang (pi/(2*0.05) = 10*pi).
So, if you lose a little resolution in k because of sloppy
sampling with a boxcar average, the errors in chi below 8Ang are
going to be tiny.
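
For concreteness, here is what I mean by boxcar rebinning onto a
fine k-grid, as a rough Python sketch (the function name and
defaults are mine, for illustration only; this is not the actual
rebinning code in any package):

    import numpy as np

    def rebin_qexafs(energy, mu, e0, dk=0.05):
        """Boxcar-rebin raw QEXAFS mu(E) onto a uniform k-grid.

        Each output point is the mean of all raw points whose k
        lands within +/- dk/2 of the grid point.  A Lorentzian
        convolution would be gentler, but on a 0.05Ang^-1 grid
        the boxcar's loss of k-resolution hardly matters below
        8Ang.
        """
        energy = np.asarray(energy, dtype=float)
        mu = np.asarray(mu, dtype=float)
        above = energy > e0                  # keep post-edge points
        k_raw = 0.5123 * np.sqrt(energy[above] - e0)
        mu_raw = mu[above]
        k_grid = np.arange(dk, k_raw.max(), dk)
        mu_out = np.empty_like(k_grid)
        for i, kc in enumerate(k_grid):
            sel = np.abs(k_raw - kc) <= 0.5 * dk
            if sel.any():
                mu_out[i] = mu_raw[sel].mean()
            else:
                # sparse region: fall back to linear interpolation
                mu_out[i] = np.interp(kc, k_raw, mu_raw)
        return k_grid, mu_out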

> Thus, k^n-weighted integration time, to my mind, is the
> only proper way of reducing statistical noise.

But binning data taken in constant energy steps onto a
k = 0.05Ang^-1 grid does work pretty well.  I'd challenge
someone to show a significant difference between that and k^1
weighting.  Anyway, my experience is that you start k-weighting
the collection time when statistical noise would otherwise
dominate.  Even in those situations you almost always collect
long enough that statistical noise no longer dominates;
k-weighting the collection time just gets you to that condition
faster.
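
As a concrete illustration, here's a made-up sketch of such a
k-weighted dwell-time scheme (t0, kmin, and the exponent n are
all knobs that differ from beamline to beamline):

    import numpy as np

    def dwell_times(k, t0=1.0, kmin=3.0, n=1):
        """Per-point integration time: constant t0 below kmin,
        then t0*(k/kmin)**n above it.  With n = 1 or 2 this
        roughly offsets the falloff of the chi(k) amplitude, so
        the statistical noise in k^n-weighted chi(k) stays about
        flat across the scan."""
        k = np.asarray(k, dtype=float)
        return np.where(k < kmin, t0, t0 * (k / kmin) ** n)

    k = np.arange(2.0, 18.0, 0.05)
    t = dwell_times(k, t0=1.0, kmin=3.0, n=2)
    print(f"total counting time: {t.sum():.0f}s for {k.size} points")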

There are, as Jeremy, Carlo, and Scott mentioned, strategic
reasons for both methods.  With constant E steps you may also be
able to better remove glitches, because you may not have to
throw out as many points (I wouldn't bet on this, but maybe...).
Then again, it's either slower (step scanning) or much faster
(slew scanning) to go in constant energy steps.  If you're
using a solid-state detector for dilute samples, QEXAFS is
probably not going to work.

Scott wrote:

> Aha! So the reason I was taught to collect at k-space
> intervals of 0.05 A^-1 was to avoid interpolation problems,
> NOT for the reason I thought at the time.

I think the original reason you gave (that collecting too finely
in energy unnecessarily oversamples chi(k)) is the correct
reason for sampling evenly in k.  Using constant energy steps
definitely oversamples chi(k) at high k: there's no point in
collecting 14 data points between 18.0Ang^-1 and 18.1Ang^-1
(two k values about 14eV apart) because chi(k) is not changing
that rapidly.  20 data points per Ang^-1 is plenty good enough.
With the common approach of sampling evenly in k with
delta_k = 0.05Ang^-1, the data handling procedures can use
linear interpolation and not lose too much resolution for the
important data (i.e., the stuff between 1 and 8Ang).
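
Spelling out that high-k arithmetic (again using the conversion
E - E0 = 3.81*k^2 eV, with k in Ang^-1):

    # energies above the edge at k = 18.0 and 18.1 Ang^-1
    e1 = 3.81 * 18.0**2       # ~1234.4 eV
    e2 = 3.81 * 18.1**2       # ~1248.2 eV
    print(e2 - e1)            # ~13.8 eV: 1eV steps put ~14 points
                              # into that 0.1 Ang^-1 interval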

> And since I've basically made a career so far out of
> concentrated samples, it's been OK. But as long as I'm willing
> to throw in a binning step, I'd use beam time more efficiently
> if I collected more closely spaced data at high k and less
> closely spaced data at low k than I currently do.  Or I can
> just increase collection times at high k's relative to low k's.

K-weighting the collection time is certainly easy enough to do
(though I guess you have to convince the beamline scientists to
update the software).  If you can't do this, rebinning isn't so
bad either.

--Matt
