[Ifeffit] I think I'm starting to understand EXAFS Fourier transforms...

Scott Calvin scalvin at slc.edu
Fri Jun 18 17:50:52 CDT 2004


Hi all,

OK, after speaking with some people who use FT's in other fields,
consulting various resources, and much pondering, I think I understand (and
have resolved) the source of my confusion. Here's my current understanding:

The key is that both our chi(k) and chi(R) data are discrete and on a
finite interval. I think most of us believe that chi(k) is "really" a
continuous function. Experimentally we sampled at various values of k and
may at some point have interpolated on to a grid (ifeffit uses 0.05 inverse
angstroms), but in principle we could take more data (and alter the
interpolation routine) to make the spectrum as fine-grained as we desire.
This is not a statement about resolution, of course, since effects like
core-hole lifetime and instrumental resolution "smear out" the data somewhat.

The question is chi(R). It is tempting to think of it as "really" being a
continuous function which we are only sampling at certain points. But in
what sense is that true? chi(R) does not correspond to nature in the same
way chi(k) does. If we change our k-space interval, chi(R) changes. In an
oxide, for example, some parts of k-space correspond more to scattering off
the oxygens, while others correspond more strongly to metal-metal paths.
Thus the discrete chi(R) we use does not strictly correspond to a sampling
of some other continuous function which we could find if we simply had
enough k-space data.

In fact, because we use a finite interval of chi(k) data, I think
mathematicians would refer to what we are doing as a Fourier series and not
a Fourier transform (this is sometimes disguised by terminology like
"discrete time-limited Fourier transform").

OK, so now consider chi(R). It is intrinsically discrete. At this point
there are several different ways we can look at chi(R) as an aid to
understanding:

One way is to pretend chi(R) is a function which is 0 between the discrete
values at which it actually has meaning. That turns out to correspond to
the Fourier transform of a function which is the chi(k) we used repeated an
infinite number of times (this is known as "periodic extension").
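This equivalence is easy to check numerically. Here's a quick sketch (my own illustration, using NumPy's FFT as a stand-in for the EXAFS transform): the transform of a periodic extension is zero except at the bins of the single-interval transform, where it just picks up a scale factor.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)           # one "interval" of data

once = np.fft.fft(x)
tiled = np.fft.fft(np.tile(x, 4))     # four repetitions of the same interval

# the tiled transform is zero except at every 4th bin,
# where it equals 4x the single-interval coefficient
print(np.allclose(tiled[::4], 4 * once))            # -> True
print(np.allclose(np.delete(tiled, np.s_[::4]), 0)) # -> True
```

So the discrete set of chi(R) values and the periodic extension of chi(k) carry exactly the same information.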

Another is to think of the discrete chi(R) as a sampled version of some
continuous chi(R). Matt made the plausible argument that a good guess is
that chi(k) goes to 0 outside the interval we used. (He also suggested it
might be even better to assume it goes to noise.) In this case we can of
course compute the values chi(R) would have between the points at which it
is actually computed. This is a reasonable model, but I do want to point
out that the argument for chi(k) going to 0 or to noise is more convincing
above kmax than below kmin.
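Note that the "chi(k) is zero outside the interval" model is exactly what zero-padding before the transform assumes, and it is what lets us evaluate chi(R) between the native grid points. A minimal sketch (mine, not ifeffit's actual routine, with made-up data standing in for chi(k)):

```python
import numpy as np

rng = np.random.default_rng(1)
chi = rng.standard_normal(32)        # stand-in for k-weighted chi(k) on a grid

coarse = np.fft.rfft(chi)            # chi(R) on the native R grid
fine = np.fft.rfft(chi, n=4 * 32)    # zero-pad: assume chi(k)=0 outside

# every 4th point of the fine grid reproduces the native grid exactly;
# the points in between are the "interpolated" chi(R) values
print(np.allclose(fine[::4], coarse))   # -> True
```

The padded transform doesn't change anything at the original R points; it only fills in values between them, consistent with the zero-outside-the-interval assumption.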

OK, so what ramifications does this actually have?

First, suppose chi(k) (however we choose to weight it) is a cosine function
and that we choose an interval which is an integer number of periods.
Because it is permissible to view chi(R) as the Fourier transform of a
periodic extension of this function, chi(R) will have a single non-zero
value...no evidence of spreading, sidebands, or leakage. If one chooses to
think of chi(R) as really being continuous, then the sidebands are "really"
there, but were not "sampled." Of course the exact structure of the
"invisible" sidebands depends on the structure of chi(k) outside the
interval we sampled.
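That claim can be checked directly. In this sketch (NumPy, with my own made-up numbers) a cosine with an integer number of periods in the window transforms to a single non-zero bin:

```python
import numpy as np

N = 64                         # samples across the window
cycles = 8                     # integer number of periods in the window
k = np.arange(N) / N
chi = np.cos(2 * np.pi * cycles * k)

spectrum = np.abs(np.fft.rfft(chi))
peaks = np.where(spectrum > 1e-9)[0]
print(peaks)                   # -> [8] : one peak, no visible sidebands
```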

Now take the same cosine function but choose an interval which is a
non-integer number of periods. Periodic extension yields a function with a
sharp discontinuity. Thus chi(R) will have significant non-zero values at
many points, and we will say there are sidebands attributable to truncation
effects. If we choose to hold the view that chi(R) is a sampling of a
continuous function, then we have simply changed the points at which we
sample the function, and sidebands which were previously "invisible" become
"visible." But from the periodic extension viewpoint, we have introduced a
discontinuity where there was none previously. Both viewpoints are defensible!
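The same sketch with a non-integer number of periods (again my own toy numbers) shows the leakage appear:

```python
import numpy as np

N = 64
k = np.arange(N) / N
chi = np.cos(2 * np.pi * 8.5 * k)     # 8.5 periods: the window cuts mid-cycle

spectrum = np.abs(np.fft.rfft(chi))
leaked = np.count_nonzero(spectrum > 1e-9)
print(leaked)                          # many bins now carry amplitude
```

Same underlying cosine, different window; whether you call the new structure "introduced" or "revealed" is a matter of viewpoint, just as argued above.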

Thus, if you like to think in terms of periodic extension, zero-padding
without windowing introduces discontinuities. If you think of chi(R) as a
sampling of some underlying continuous structure, zero-padding merely
reveals structure in chi(R) which was always there (due to truncation) but
previously hidden. There is no disagreement as to result, but there are two
models which can be used to describe what is seen.
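Both descriptions predict the same numbers. Sketch (mine): zero-pad the integer-period cosine from before, and the sidebands that the native grid happened to step over become visible:

```python
import numpy as np

N = 64
k = np.arange(N) / N
chi = np.cos(2 * np.pi * 8 * k)        # integer periods: clean on the native grid

bare = np.abs(np.fft.rfft(chi))
padded = np.abs(np.fft.rfft(chi, n=8 * N))   # pad with zeros to 8x the length

print(np.count_nonzero(bare > 1e-9))         # -> 1 : sidebands invisible
print(np.count_nonzero(padded > 1e-9) > 100) # -> True : truncation structure revealed
```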

Finally, windowing works either because it softens the discontinuities or
because it softens the boxcar function implicit in truncation.
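For completeness, here is the truncated cosine with and without a Hann window (one common taper; ifeffit offers several window shapes, and this sketch is not its actual code). Tapering the edges drops the far sidebands by orders of magnitude:

```python
import numpy as np

N = 64
k = np.arange(N) / N
chi = np.cos(2 * np.pi * 8.5 * k)      # window cuts mid-cycle

pad = 8 * N
boxcar = np.abs(np.fft.rfft(chi, n=pad))                # implicit boxcar window
hann = np.abs(np.fft.rfft(chi * np.hanning(N), n=pad))  # tapered edges

# sideband level well away from the main peak, relative to the peak height
far = slice(200, pad // 2 + 1)
print(boxcar[far].max() / boxcar.max())   # percent-level leakage
print(hann[far].max() / hann.max())       # far smaller
```

The trade-off, of course, is that the windowed peak itself is broader, which is why window choice is a judgment call.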

In any case, the algorithm used by ifeffit is good, because it is
reasonably "democratic." Since the data is padded with a large number of
zeroes, every function suffers similar discontinuities, or, in the
alternative viewpoint, the complete structure of the truncation error is
revealed and we are not at the mercy of some arbitrary artifact of the
points we choose to sample. This means that we don't get funny artifacts
where one choice of k-range gives much less broadening than another.

Sorry for the long rant, but at least now I'm satisfied I understand this
issue...

--Scott Calvin
Sarah Lawrence College



More information about the Ifeffit mailing list