[Ifeffit] A basic question about data collection

Anatoly Frenkel frenkel at bnl.gov
Thu Aug 25 23:30:28 CDT 2005


Hi Scott,

Not exactly related to my last posting, here is an argument against any
tampering with the integration time, whether with k^n or k^(2n) weighting...

The Fourier transform of white noise is easy to calculate: its magnitude
does not depend on r. This property is used as a measure of the statistical
errors in experimental data, by taking the FT magnitude at high r, as
described in this document (page 5, Eq. (6)):

http://ixs.iit.edu/subcommittee_reports/sc/err-rep.pdf
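A quick numerical check of that flat-in-r property (a sketch of the idea only,
not IFEFFIT's implementation; all names below are my own):

```python
import numpy as np

# The FT magnitude of white noise is, on average, independent of r --
# this is what lets the high-r FT magnitude serve as a noise estimate.
rng = np.random.default_rng(0)
n_trials, n_pts = 2000, 512
noise = rng.normal(0.0, 1.0, size=(n_trials, n_pts))  # white noise in k
ft_mag = np.abs(np.fft.rfft(noise, axis=1))           # |FT| vs "r"
mean_mag = ft_mag.mean(axis=0)                        # average over trials

# The mean |FT| at low r and at high r agree to within a few percent:
low_r = mean_mag[1:50].mean()
high_r = mean_mag[-50:].mean()
print(low_r, high_r)
```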

Since k-weighted integration time alters the noise, it is no longer white.
That means its FT at high r is not indicative of the noise at low r,
because it varies with r. Moreover, according to the literature, this behavior
depends strongly on the Fourier window function used. Thus Eq. (6),
as implemented in IFEFFIT, is no longer valid for statistical error
analysis if a variable integration time is used. Eqs. (7) and (8) in the same
summary, which can also be used to estimate statistical errors, do not allow
one to account for a measurement with variable integration times either.
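The window dependence can be illustrated numerically. A minimal sketch,
assuming uncorrelated noise whose amplitude falls as 1/k (as counting times
proportional to k^2 would produce); the function and variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_trials = 512, 400
k = np.linspace(2.0, 16.0, n_pts)

def noise_power_estimate(sigma_k, window):
    """Window-normalized mean FT power of uncorrelated noise with std sigma_k."""
    eps = rng.normal(0.0, 1.0, size=(n_trials, n_pts)) * sigma_k
    power = np.abs(np.fft.rfft(eps * window, axis=1)) ** 2
    return power.mean() / (window ** 2).mean()

flat = np.ones(n_pts)
hann = np.hanning(n_pts)

# White noise: both windows recover the same variance.
w_flat = noise_power_estimate(np.ones(n_pts), flat)
w_hann = noise_power_estimate(np.ones(n_pts), hann)

# Noise with amplitude ~ 1/k: the two windows now give clearly
# different "noise" estimates, so the estimate is window-dependent.
c_flat = noise_power_estimate(1.0 / k, flat)
c_hann = noise_power_estimate(1.0 / k, hann)
print(w_flat / w_hann, c_flat / c_hann)
```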

In summary, in my opinion, one may improve the data quality by varying the
integration time, but it is not straightforward to quantify such an
improvement in terms of its effect on the statistical errors in the data.

Anatoly

******************
Anatoly Frenkel, Ph.D.
Associate Professor
Physics Department
Yeshiva University
245 Lexington Avenue
New York, NY 10016

(YU)  212-340-7827
(BNL) 631-344-3013
(Fax) 212-340-7788

anatoly.frenkel at yu.edu
http://www.yu.edu/faculty/afrenkel


-----Original Message-----
From: ifeffit-bounces at millenia.cars.aps.anl.gov
[mailto:ifeffit-bounces at millenia.cars.aps.anl.gov]On Behalf Of
scalvin at slc.edu
Sent: Thursday, August 25, 2005 10:58 PM
To: XAFS Analysis using Ifeffit
Subject: RE: [Ifeffit] A basic question about data collection


Hi Anatoly,

I agree--they are not equivalent, and the constant k-space increment with
k-dependent integration time is formally "more proper." But if the spacing
is small compared to the size of an EXAFS oscillation, then there isn't a
lot of difference between the two. It could even be argued that sampling
over a range of k (or E) and binning is less susceptible to artifacts than
choosing fewer points and spending longer on them, although as was pointed
out earlier, the former takes longer because of mono settling time.

Unfortunately, the beam lines I work on don't have software implemented to
use a k^n weighted integration time, so I'd have to define a scan with a
lot of segments that gradually increase integration time. Constant energy
increment is a lazier way to move things in that direction. The real
solution is to think about getting the k-weighted integration time
implemented in the software...
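Incidentally, the "lazier" route can be made quantitative: since
k (Å⁻¹) ≈ 0.5123·sqrt(E − E0 in eV), a constant energy step corresponds to a
k-spacing that shrinks as 1/k, so the dwell per unit k grows in proportion to
k. A sketch with made-up numbers:

```python
import numpy as np

# Constant-dE scanning: dk ≈ 0.5123 * dE / (2 * sqrt(E - E0)), which
# shrinks as 1/k, so time spent per unit k grows proportionally to k.
# In other words, constant energy increments act like a k^1-weighted
# integration time. Numbers below are illustrative only.
dE = 2.0                                   # constant energy step, eV
E = np.array([50.0, 200.0, 800.0])         # E - E0, eV
k = 0.5123 * np.sqrt(E)                    # roughly 3.6, 7.2, 14.5 inverse Angstroms
dk = 0.5123 * dE / (2.0 * np.sqrt(E))      # k-spacing of constant-dE points
time_per_k = 1.0 / dk                      # dwell (points) per unit k
print(time_per_k / k)                      # constant ratio: t(k) grows as k
```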

Question: you say k^n weighted integration time. Shouldn't it ideally be
k^(2n), since noise might be expected to decrease as the square root of
the number of counts?
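The square-root argument can be checked with a toy Poisson counting model
(numbers are hypothetical): if the counting time, and hence the expected
counts, scales as k^(2n), the relative noise scales as k^(-n), and
k^n-weighting then leaves the noise uniform in k.

```python
import numpy as np

rng = np.random.default_rng(2)
k = np.array([4.0, 8.0, 16.0])
n = 1  # k-weight used in the analysis

# Counting time t proportional to k^(2n): expected counts grow as k^(2n).
counts0 = 1.0e5                          # hypothetical counts at k = 1
expected = counts0 * k ** (2 * n)
samples = rng.poisson(expected, size=(200000, k.size))

rel_noise = samples.std(axis=0) / expected   # ~ 1/sqrt(N), i.e. k^(-n)
weighted = rel_noise * k ** n                # k^n weighting flattens it
print(weighted)                              # same value at every k
```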

--Scott Calvin
Sarah Lawrence College

> I am probably missing the point, but it is not immediately obvious to me
> why the following is equivalent in terms of improving the signal to noise:
> a) constant E-space increment and b) constant k-space increment combined
> with k-dependent integration time. In a), the data cluster at high E, but
> each data point in E corresponds to a different final state and thus is
> unique. Averaging over E-space data in the small interval Delta E,
> (1/Delta E)*Int [xmu(E) dE] is not equivalent to the time average of
> xmu(E) collected at a fixed E: (1/T)*Int [xmu(E) dt]. Thus, k^n-weighted
> integration time, to my mind, is the only proper way of reducing
> statistical noise.
>
_______________________________________________
Ifeffit mailing list
Ifeffit at millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit



