[Ifeffit] more fun with FFT's

Matt Newville newville at cars.uchicago.edu
Wed Jun 9 22:47:18 CDT 2004


Scott,

> Chi(k) is complex? That's a completely new concept to me. I thought chi(R)
> was complex because a complex Fourier transform was used, and that chi(k)
> was real. After all, the peaks of chi(k) correspond to peaks in the energy
> spectrum...if that is the real part of chi(k), what does the imaginary part
> correspond to?

Would you buy the idea that chi(k) is complex with its imaginary
part set to 0?  An FT transforms one complex function to another.
Of course, EXAFS theory is typically presented so that chi(k) is
the imaginary part of a complex function.  My understanding is
that most EXAFS analysis programs from the UW lineage have used
   ~chi(k) = chi_expt(k) + i * 0                (1)

(with '~' meaning complex) as opposed to 
   ~chi(k) = 0 + i * chi_expt(k)                (2)

and then 
   ~chi(R) = FT[ ~chi(k) ] 

This gives a few properties of ~chi(R) that are easy to prove (or
look up in an FT book), such as its real and imaginary parts being
out of phase.  Whether Eq 1 or 2 is used is unimportant unless you
want to try to make sense of the real/imaginary parts of ~chi(R),
such as looking at zero-crossings of phase-corrected Re[chi(R)].  
I think that has been discussed on this list before.

For what it's worth, Ifeffit allows you to use Eq 1 or Eq 2:
fftf() can take real and/or imaginary components.  The default 
is to use Eq 1.
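
To make that concrete, here is a minimal numpy sketch of the two
conventions (my own illustration with a made-up chi(k), not
Ifeffit's fftf() internals):

    import numpy as np

    k = np.arange(0, 12.8, 0.05)                      # k grid, Ang^-1
    chi = np.sin(2 * 2.5 * k) * np.exp(-0.02 * k**2)  # fake chi(k), R ~ 2.5 Ang

    npts = 2048
    chi1 = np.zeros(npts, dtype=complex)
    chi2 = np.zeros(npts, dtype=complex)
    chi1[:len(k)] = chi + 0j       # Eq 1: chi_expt in the real part
    chi2[:len(k)] = 1j * chi       # Eq 2: chi_expt in the imaginary part

    chir1 = np.fft.fft(chi1)
    chir2 = np.fft.fft(chi2)

    # |chi(R)| is identical either way; Re and Im are just rotated:
    print(np.allclose(np.abs(chir1), np.abs(chir2)))  # True
    print(np.allclose(chir2, 1j * chir1))             # True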

> I still disagree with the assertion that you always have
> truncation effects when using a finite k-range. As I understand
> the properties of Fourier transforms, using a finite range is
> identical to using an infinite range with a periodic function
> that repeats the pattern seen in the finite range over and over.
> ...

No!  Using a finite range implies nothing about what happens
outside that range. More on this below, but for now: there are
definitely finite range effects no matter what form of a window
function you use or how carefully you pick the range relative to
your data.

The FT of a step function has non-zero amplitude at all
frequencies, and truncating that series before back-transforming
always gives ripple in the reconstructed step (the Gibbs
phenomenon): you always have truncation effects for a finite
range.
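
Here is a quick numpy illustration of that ripple (a toy example;
the cutoff of 64 frequencies is arbitrary):

    import numpy as np

    n = 2048
    step = np.zeros(n)
    step[:n // 2] = 1.0              # a step function

    coeffs = np.fft.fft(step)
    coeffs[64:-64] = 0.0             # truncate: keep only low frequencies
    recon = np.fft.ifft(coeffs).real

    # The reconstruction rings near the step edges, and the ~9%
    # overshoot does not shrink as the cutoff is raised:
    print(recon.max())               # about 1.09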

> if there is a downside to forcing chi(k) to go to zero at the
> endpoints, so be it, but this strategy means the downside is
> always present. Alternative strategies which encourage picking
> whole periods of chi(k) for the data range, for example, risk
> being self-fulfilling, so that the period chosen for the data
> range might be emphasized in the Fourier transform at the
> expense of other spectral frequencies.

It is not important that chi(k) go to zero at the endpoints or
that you have 'whole periods'.  Well, I don't see how you can get
'whole periods' for a real EXAFS chi(k) because chi(k) is not a
pure sine wave.  It does help reduce the size of truncation
effects if chi(k)*k^w*Window(k) is small at the endpoints.  As
Grant described so well, this is generally at the expense of the
apparent resolution, and the choice of these parameters is (or
should be!!) more important for display than signal content.  
On the other hand, it is important for signal content that you
have enough samples of a particular feature, which is why you
want to err on the side of oversampling, using 2048 points.
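
For concreteness, a numpy sketch of the chi(k)*k^w*Window(k)
recipe with oversampling to 2048 points (the sine-squared window
sills and the kmin/kmax/dk values are my own illustration, not a
recommendation):

    import numpy as np

    k = np.arange(0, 12.8, 0.05)
    chi = np.sin(2 * 2.5 * k) * np.exp(-0.02 * k**2)  # fake chi(k)
    kweight = 2
    kmin, kmax, dk = 2.0, 12.0, 1.0

    # Hanning-style window: sine-squared sills of width dk at each end
    win = np.ones_like(k)
    win[k < kmin - dk / 2] = 0.0
    win[k > kmax + dk / 2] = 0.0
    lo = (k >= kmin - dk / 2) & (k <= kmin + dk / 2)
    hi = (k >= kmax - dk / 2) & (k <= kmax + dk / 2)
    win[lo] = np.sin((np.pi / 2) * (k[lo] - kmin + dk / 2) / dk) ** 2
    win[hi] = np.cos((np.pi / 2) * (k[hi] - kmax + dk / 2) / dk) ** 2

    # oversample: pad the windowed, k-weighted chi(k) out to 2048 points
    padded = np.zeros(2048, dtype=complex)
    padded[:len(k)] = chi * k**kweight * win
    chir = np.fft.fft(padded)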

OK, while I'm at it, let me go after a different, but common,
criticism of FT and zero-padding.  Many general data analysis and
signal processing books state or imply that zero-padding is
somehow cheating and likely to cause artifacts.  The objection is
essentially that by zero-padding you are 'making up data' and
asserting that your signal is zero beyond the data range.  For
many signal processing applications, this is a bad assumption. For
EXAFS, I think it is not so bad an assumption.

For the sake of argument, let's say you have data to 12.5Ang^-1.
With 0.05Ang^-1 grid spacing, 256 points would get you to
12.8Ang^-1, and you might think this should be good enough.  
(Actually, it would be 512 points, as Nyquist's theorem says you
need twice the sample rate.)  Anyway, padding to 2048 points is overkill
even if it does give finer spacing of points for ~chi(R).
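
For the record, with the usual EXAFS convention that R is
conjugate to 2k, the grid spacing in R for an N-point FFT on a
k-grid of spacing dk is

   dR = pi / (N * dk)
      = pi / (2048 * 0.05) ~ 0.031 Ang     for N = 2048
      = pi / (512 * 0.05)  ~ 0.123 Ang     for N = 512

so the padding buys a 4x finer grid for ~chi(R), though of course
no new information.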

Using 512 points is equivalent to saying "I know the data up to
12.8Ang^-1 and have no idea what it is beyond that" while
zero-padding is equivalent to saying "I know the data up to
12.8Ang^-1 and assert that it is zero beyond that".

To me, it seems more reasonable to assert that chi(k)=0 for k well
beyond our last measured point than to assert that we have no
knowledge of chi(k)
beyond our data range.  In fact, we can be pretty sure that chi(k)
dies off quickly with k, and we usually stop measuring when chi(k)
is below the measurement noise.

From a Bayesian point of view, replacing the 'zero-padding' with
'noise-padding' (that is, filling in with noise consistent with
the measurement uncertainties) or 'noisy model-padding' (that is,
filling in with an extrapolation of the most likely model, plus
noise) might be even better than regular old zero padding.  I'm
not aware of anyone doing this, but it would be easy enough to try
(it could be done as an ifeffit script).  Of course, the
k-weighting of chi(k) and its noise may complicate this, but it
might be interesting to try.
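
As a sketch of what noise-padding might look like in numpy
(nothing here is an existing Ifeffit command; 'eps' stands in for
your estimated measurement uncertainty):

    import numpy as np

    k = np.arange(0, 12.8, 0.05)
    chi = np.sin(2 * 2.5 * k) * np.exp(-0.02 * k**2)  # measured chi(k) stand-in
    eps = 0.001     # assumed noise level of chi(k), from your own estimate

    npts = 2048
    rng = np.random.default_rng(0)
    padded = np.zeros(npts, dtype=complex)
    padded[:len(k)] = chi
    # pad with noise consistent with the measurement uncertainty,
    # instead of exact zeros:
    padded[len(k):] = eps * rng.standard_normal(npts - len(k))

    chir = np.fft.fft(padded)
    # Note: if chi(k) is k-weighted before the FT, the padding noise
    # should be weighted consistently, as mentioned above.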

--Matt




