[Ifeffit] A basic question about data collection

scalvin at slc.edu
Thu Aug 25 21:58:16 CDT 2005

Hi Anatoly,

I agree--they are not equivalent, and the constant k-space increment with
k-dependent integration time is formally "more proper." But if the spacing
is small compared to the period of an EXAFS oscillation, then there isn't a
lot of difference between the two. It could even be argued that sampling
over a range of k (or E) and binning is less susceptible to artifacts than
choosing fewer points and spending longer on them, although, as was pointed
out earlier, the former takes longer because of mono settling time.

Unfortunately, the beamlines I work on don't have software support for a
k^n-weighted integration time, so I'd have to define a scan with a lot of
segments that gradually increase the integration time. A constant energy
increment is a lazier way to move things in that direction. The real
solution is to think about getting the k-weighted integration time
implemented in the software...
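To see why a constant energy increment moves things in that direction: since k is proportional to sqrt(E - E0), a fixed Delta-E step packs points into k-space with a density that grows roughly linearly in k, which after binning acts like a k^1 time weighting for free. A quick sketch (the edge-relative scan range and 2 eV step are made up for illustration):

```python
import math

# Hypothetical scan: constant 2 eV steps from 50 eV to 800 eV above the edge.
dE = 2.0
energies = [50.0 + dE * i for i in range(376)]  # 50, 52, ..., 800 eV

# Photoelectron wavenumber (Angstrom^-1): k = sqrt(0.2625 * (E - E0)),
# with E - E0 in eV; the energies above are already edge-relative.
ks = [math.sqrt(0.2625 * E) for E in energies]

# Because dk/dE = 0.2625 / (2k), a fixed dE step lands points in k-space
# with a density proportional to k.
for lo, hi in [(4.0, 5.0), (8.0, 9.0), (12.0, 13.0)]:
    npts = sum(1 for k in ks if lo <= k < hi)
    print(f"k in [{lo:4.1f}, {hi:4.1f}): {npts} points")
```

The per-unit-k point count in the three windows grows close to linearly with k, which is the "lazy" weighting described above.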

Question: you say k^n-weighted integration time. Shouldn't it ideally be
k^(2n), since the noise might be expected to decrease as the square root of
the number of counts?
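The scaling behind that question can be checked arithmetically: with Poisson counting statistics, the relative noise on N = rate * t counts is 1/sqrt(N), so holding the noise on k^n-weighted chi(k) flat requires t proportional to k^(2n). A small sketch (the count rate, k-weight, and reference dwell are arbitrary choices, not beamline values):

```python
import math

# Poisson counting: N = rate * t counts, sigma = sqrt(N), so the relative
# noise on one measurement is 1 / sqrt(rate * t).
rate = 1.0e4             # detector count rate, counts/s (illustrative)
n = 2                    # k-weight applied to chi(k) in the analysis
k_ref, t_ref = 3.0, 1.0  # dwell 1 s at k = 3 A^-1 (arbitrary reference)

def dwell(k):
    """Integration time weighted as k^(2n), not k^n."""
    return t_ref * (k / k_ref) ** (2 * n)

for k in [3.0, 6.0, 12.0]:
    t = dwell(k)
    weighted_noise = k ** n / math.sqrt(rate * t)  # noise on k^n * chi(k)
    print(f"k = {k:4.1f}  t = {t:7.1f} s  k^n-weighted noise = {weighted_noise:.5f}")
```

With t growing as k^(2n), the k^n-weighted noise comes out identical at every k; with only a k^n weighting of the dwell time, it would still grow as k^(n/2).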

--Scott Calvin
Sarah Lawrence College

> I am probably missing the point, but it is not immediately obvious to me
> why the following is equivalent in terms of improving the signal to noise:
> a) constant E-space increment and b) constant k-space increment combined
> with k-dependent integration time. In a), the data cluster at high E, but
> each data point in E corresponds to a different final state and thus is
> unique. Averaging over E-space data in the small interval Delta E,
> (1/Delta E)*Int [xmu(E) dE] is not equivalent to the time average of
> xmu(E) collected at a fixed E: (1/T)*Int [xmu(E) dt]. Thus, k^n-weighted
> integration time, to my mind, is the only proper way of reducing
> statistical noise.
