This is in response to Scott's remarks about zero padding and windows. The basic point is that you *always* have Fourier truncation effects if you are using a finite k range. If you Fourier transform a pure sine wave over a finite range, you will get side lobes that go like |sin(x)/x|. This is true whether you zero pad or not; you just won't be able to resolve the side lobe structure if you don't zero-pad. All you will see is a single point. A useful way to think about it is to view the discrete FFT as a sampled version of the continuous transform (side lobes and all), where the r-space sampling is controlled by the size N of the zero-padded array (and dk). The zero padding doesn't change the window distortions, it just allows you to see them better.
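This point is easy to check numerically. A minimal numpy sketch (the grid spacing, the sine frequency, and the array sizes are my own arbitrary choices for illustration, not values from the discussion):

```python
import numpy as np

# A pure sine on a finite k-range, dk = 0.05 (parameters chosen for illustration).
dk = 0.05
k = np.arange(0, 12.8, dk)            # 256 points
chi = np.sin(2 * 4.0 * k)             # sin(2*R*k) with R = 4.0

# Transform with no zero padding: r-space is sampled so coarsely that the
# |sin(x)/x| side lobes fall between the samples.
ft_nopad = np.abs(np.fft.rfft(chi))

# Zero-pad to 2048 points: the same continuous transform, sampled 8x more
# finely, so the side lobe structure becomes visible.
ft_pad = np.abs(np.fft.rfft(chi, n=2048))

# The peak sits at the same frequency either way; padding only refines the sampling.
print(ft_nopad.argmax() / (256 * dk), ft_pad.argmax() / (2048 * dk))
```

The two printed peak positions agree to within one coarse bin; the side lobes visible in `ft_pad` are exactly the truncation effects that the unpadded transform hides between its samples.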
You can of course mitigate the side lobes by applying a k-window to taper the amplitude down to zero at the cutoff, but at the expense of broadening the central part of the peaks.
Thanks to Matt and Grant for the comments. After reading Grant's comments, it appears I have some significant gaps in my understanding of some basic EXAFS concepts: chi(k) is complex? That's a completely new concept to me. I thought chi(R) was complex because a complex Fourier transform was used, and that chi(k) was real. After all, the peaks of chi(k) correspond to peaks in the energy spectrum... if that is the real part of chi(k), what does the imaginary part correspond to?

I still disagree with the assertion that you always have truncation effects when using a finite k-range. As I understand the properties of Fourier transforms, using a finite range is identical to using an infinite range with a periodic function that repeats the pattern seen in the finite range over and over. So if it happens that the function at the end of the range connects smoothly with the function at the beginning of the range, there should be no truncation effects.

This can also be looked at another way. The zeroes of the sinc function due to the finite range correspond exactly with the points in r-space at which the Fourier transform is evaluated, and thus the Fourier transform is non-zero at only one point. I have simulations that show this, but if I missed that chi(k) was complex, that could mean there is an error in this logic.

As I understand Grant's comments, you can show that the sinc function is "really" still there by padding with zeroes, thus improving the resolution in r-space and revealing the side-band structure. I agree that this is what will be observed, but if I consider this from the point of view of the FT of a finite range being the same as the FT of an infinite function which repeats the finite section periodically, then the zero-padding is introducing discontinuities which were not previously there.
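The sinc-zero argument above can be reproduced in a few lines. A minimal sketch (the 10-cycle sine and the grid sizes are arbitrary illustrative choices): when the range holds whole periods, the DFT sample points land exactly on the zeros of the sinc, and zero-padding then samples the lobes between them.

```python
import numpy as np

# A sine completing exactly 10 whole periods over the N samples.
N = 256
n = np.arange(N)
sig = np.sin(2 * np.pi * 10 * n / N)

# Unpadded DFT: the sample points coincide with the sinc zeros, so the
# transform is (numerically) non-zero at a single bin.
ft = np.abs(np.fft.rfft(sig))
print(ft.argmax())                      # -> 10

# Zero-padding evaluates the same continuous transform between those
# zeros, and the side-band structure reappears.
ft_pad = np.abs(np.fft.rfft(sig, n=4 * N))
print(ft_pad[44:60].max() > 1.0)        # -> True
```

Both views in the thread are visible at once here: the unpadded transform really is a single spike, and the padded one really does show the side lobes of the underlying continuous transform.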
It is, I think, a matter of semantics whether one considers the side-bands to be "really" there when the FT is not evaluated at values of r where they are non-zero. But in either way of looking at it, padding with zeroes introduces non-zero values to the discrete FT outside of a region where there were only zero values before.

None of this is meant in any way as a criticism of the algorithm used by Ifeffit as Matt described it. Windowing and then padding with zeroes makes the truncation effects relatively consistent: if there is a downside to forcing chi(k) to go to zero at the endpoints, so be it, but this strategy means the downside is always present. Alternative strategies which encourage picking whole periods of chi(k) for the data range, for example, risk being self-fulfilling, so that the period chosen for the data range might be emphasized in the Fourier transform at the expense of other spectral frequencies.

To sum up, I am now very curious as to whether there is some sense in which chi(k) is complex. If that's true, I'll have to rethink a lot of my understanding of how this all fits together.

--Scott Calvin
Sarah Lawrence College

At 02:26 PM 6/9/2004 -0500, Grant wrote:
...the central part of the peaks.
There is also a common myth that if you choose the cutoff to be where chi is zero anyway, you don't get Fourier truncation effects. This is not true. Even if chi (Amplitude*sin(phase)) is zero at a particular k value, the amplitude may not be. Imagine a thought experiment in which you shift the phase of chi by Pi/2, so a node becomes an antinode. Mathematically this is the same as multiplying the complex chi A exp(i phi) by exp(i Pi/2). Since the Fourier transform is linear, the filtered (transformed, r-windowed, and inverse transformed) data are just shifted in phase by Pi/2. The Fourier filtering distortions are precisely the same whether you cut at a node or an antinode.
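The node/antinode argument follows directly from linearity and can be verified numerically. A sketch with a made-up single-shell complex chi (the amplitude decay, k-range, and frequency are arbitrary choices):

```python
import numpy as np

k = np.arange(2.0, 12.8, 0.05)
amp = np.exp(-0.1 * k)                  # slowly varying amplitude (illustrative)
chi = amp * np.exp(2j * 4.0 * k)        # complex chi = A exp(i phi)

# Shift the phase by Pi/2, turning each node of Im(chi) into an antinode.
chi_shift = chi * np.exp(1j * np.pi / 2)

# Finite-range transform (zero-padded to 2048 points) of both signals.
ft = np.fft.fft(chi, n=2048)
ft_shift = np.fft.fft(chi_shift, n=2048)

# By linearity the transforms differ by exactly exp(i Pi/2): the magnitudes,
# side lobes and all, are identical whether we cut at a node or an antinode.
print(np.allclose(ft_shift, 1j * ft))              # -> True
print(np.allclose(np.abs(ft), np.abs(ft_shift)))   # -> True
```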
I'll try to dig up some simulations I did many moons ago and post them.
Scott,
Chi(k) is complex? That's a completely new concept to me. I thought chi(R) was complex because a complex Fourier transform was used, and that chi(k) was real. After all, the peaks of chi(k) correspond to peaks in the energy spectrum...if that is the real part of chi(k), what does the imaginary part correspond to?
Would you buy the idea that chi(k) is complex with its imaginary part set to 0? An FT transforms one complex function to another. Of course, EXAFS theory is typically presented so that chi(k) is the imaginary part of a complex function. My understanding is that most EXAFS analysis programs from the UW lineage have used

    ~chi(k) = chi_expt(k) + i * 0        (1)

(with '~' meaning complex) as opposed to

    ~chi(k) = 0 + i * chi_expt(k)        (2)

and then

    ~chi(R) = FT[ ~chi(k) ]

This gives a few properties of ~chi(R) that are easy to prove (or look up in an FT book), such as its real and imaginary parts being out of phase. Whether Eq 1 or Eq 2 is used is unimportant unless you want to try to make sense of the real/imaginary parts of ~chi(R), such as looking at zero-crossings of phase-corrected Re[chi(R)]. I think that has been discussed on this list before. For what it's worth, Ifeffit allows you to use Eq 1 or Eq 2: fftf() can take real and/or imaginary components. The default is to use Eq 1.
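The relation between the two conventions is easy to demonstrate with an FFT. A sketch with a made-up chi(k) (the functional form is arbitrary); by linearity, Eq 1 and Eq 2 differ only by an overall factor of i:

```python
import numpy as np

k = np.arange(0.0, 12.8, 0.05)
chi = np.exp(-0.1 * k) * np.sin(2 * 4.0 * k)   # illustrative real chi(k)

# Eq 1: ~chi(k) = chi_expt(k) + i*0
chiR_1 = np.fft.fft(chi + 0j, n=2048)
# Eq 2: ~chi(k) = 0 + i*chi_expt(k)
chiR_2 = np.fft.fft(1j * chi, n=2048)

# The two differ by an overall factor of i, so |chi(R)| is identical and
# the real/imaginary parts simply trade places (up to sign).
print(np.allclose(chiR_2, 1j * chiR_1))             # -> True
print(np.allclose(np.abs(chiR_1), np.abs(chiR_2)))  # -> True
```

This is why the choice only matters when interpreting Re[chi(R)] and Im[chi(R)] individually: |chi(R)| is the same either way.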
I still disagree with the assertion that you always have truncation effects when using a finite k-range. As I understand the properties of Fourier transforms, using a finite range is identical to using an infinite range with a periodic function that repeats the pattern seen in the finite range over and over. ...
No! Using a finite range implies nothing about what happens outside that range. More on this below, but for now: there are definitely finite range effects no matter what form of a window function you use or how carefully you pick the range relative to your data. The FT of a step function always gives finite amplitude for all frequencies, and truncating the series to back transform always gives ripple in the reconstructed step: you always have truncation effects for a finite range.
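The step-function point can be made concrete in a few lines (a sketch; the 256-point step and the 64-term cutoff are arbitrary illustrative choices):

```python
import numpy as np

# A step (window) function: 1 on the data range, 0 outside.
N = 2048
step = np.zeros(N)
step[:256] = 1.0

# Its transform has appreciable amplitude arbitrarily far from zero frequency.
ft = np.fft.rfft(step)
print(np.abs(ft[512:]).max() > 0.5)     # -> True

# Back-transform keeping only the first 64 frequencies: the reconstructed
# step rings and overshoots 1.0 (Gibbs ripple), i.e. truncation effects.
ft_trunc = ft.copy()
ft_trunc[64:] = 0.0
recon = np.fft.irfft(ft_trunc, n=N)
print(recon.max() > 1.0)                # -> True
```

No choice of where the step sits relative to the frequency grid removes the ripple; only tapering the edges (windowing) trades it against broadening.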
if there is a downside to forcing chi(k) to go to zero at the endpoints, so be it, but this strategy means the downside is always present. Alternative strategies which encourage picking whole periods of chi(k) for the data range, for example, risk being self-fulfilling, so that the period chosen for the data range might be emphasized in the Fourier transform at the expense of other spectral frequencies.
It is not important that chi(k) go to zero at the endpoints or that you have 'whole periods'. Well, I don't see how you can get 'whole periods' for a real EXAFS chi(k) anyway, because chi(k) is not a pure sine wave. It does help reduce the size of truncation effects if chi(k)*k^w*Window(k) is small at the endpoints. As Grant described so well, this is generally at the expense of the apparent resolution, and the choice of these parameters is (or should be!!) more important for display than signal content. On the other hand, it is important for signal content that you have enough samplings of a particular feature, which is why you want to err on the side of oversampling, using 2048 points.

OK, while I'm at it, let me go after a different, but common, criticism of FT and zero-padding. Many general data analysis and signal processing books state or imply that zero-padding is somehow cheating and likely to cause artifacts. The objection is essentially that by zero-padding you are 'making up data' and asserting that your signal is zero beyond the data range. For many signal processing applications, this is a bad assumption. For EXAFS, I think it is not so bad an assumption.

For the sake of argument, let's say you have data to 12.5 Ang^-1. With 0.05 Ang^-1 grid spacing, 256 points would get you to 12.8 Ang^-1, and you might think this should be good enough. (Actually, it would be 512 points, as Nyquist's theorem says you need 2x the sample rate.) Anyway, padding to 2048 points is overkill even if it does give finer spacing of points for ~chi(R). Using 512 points is equivalent to saying "I know the data up to 12.8 Ang^-1 and have no idea what it is beyond that", while zero-padding is equivalent to saying "I know the data up to 12.8 Ang^-1 and assert that it is zero beyond that". To me, it seems more reasonable to assert that chi(k)=0 for k well beyond our last data point than to assert that we have no knowledge of chi(k) beyond our data range. In fact, we can be pretty sure that chi(k) dies off quickly with k, and we usually stop measuring when chi(k) is below the measurement noise.

From a Bayesian point of view, replacing the 'zero-padding' with 'noise-padding' (that is, filling in with noise consistent with the measurement uncertainties) or 'noisy model-padding' (that is, filling in with an extrapolation of the most likely model, plus noise) might be even better than regular old zero padding. I'm not aware of anyone doing this, but it would be easy enough to try (it could be done as an ifeffit script). Of course, the k-weighting of chi(k) and its noise may complicate this, but it might be interesting to try.

--Matt
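The noise-padding idea could be prototyped in a few lines outside Ifeffit. A sketch with a made-up chi(k) and an assumed noise level eps (none of these values come from the thread; this is an illustration, not a tested analysis recipe):

```python
import numpy as np

rng = np.random.default_rng(0)
dk = 0.05
k = np.arange(2.0, 12.8, dk)
chi = np.exp(-0.1 * k) * np.sin(2 * 4.0 * k)   # illustrative chi(k)
eps = 0.01                                     # assumed measurement noise level
data = chi + eps * rng.normal(size=k.size)

N = 2048
# Regular zero-padding:
chiR_zero = np.fft.fft(data, n=N)

# 'Noise-padding': beyond the data range, fill in with noise drawn from the
# same (assumed) measurement uncertainty instead of exact zeros.
padded = np.zeros(N)
padded[:k.size] = data
padded[k.size:] = eps * rng.normal(size=N - k.size)
chiR_noise = np.fft.fft(padded)

# Where the signal dominates the noise the two transforms agree closely;
# they differ only down at the noise floor.
peak = np.abs(chiR_zero[:N // 2]).argmax()
print(abs(np.abs(chiR_noise[peak]) - np.abs(chiR_zero[peak]))
      / np.abs(chiR_zero[peak]))
```

A real version would handle the k-weighting of chi(k) and its noise, as noted above.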
participants (2)
- Matt Newville
- Scott Calvin