Hi folks,

This is just a quick note to let everyone know that I have read all of the traffic on the mailing list from the last two weeks. Clearly there are some serious issues with the current release of Artemis that need my attention. I am still in the middle of a very busy period and won't have much time to devote to the codes for a couple more weeks, but I wanted everyone to know that I have read about the problems and will work on solutions just as soon as I get a chance. Thanks to those who commented for your input. As I have said many times before, the codes are only usable because of the contributions of their users.

It seems that most of the non-bug-report traffic was answered to some extent. I probably will not respond to any of the mail from the last two weeks, so if anyone posted a question for me that didn't get answered to your satisfaction, I would recommend posting it again.

I also wanted to comment on a couple of things that made me laugh out loud. One was Paul's comment to Chris Derr asking him to inform the list of the answer once he gets EXAFS all figured out. (Is it 42?) The other was when Doug Pease (sorry for picking on you, Doug, but this *was* funny) referred to Shelly Kelly as Mary Shelly. Mary Shelley, of course, was the author of Frankenstein. I would say that it is unfair to burden Shelly with the responsibility of creating a Frankenstein monster.
That would be me or Matt, I think, depending on whether the interior part or the exterior part seems more monstrous to you ;-)

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory          phone: (1) 202 767 2268
Washington DC 20375, USA           fax:   (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage:    http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
Bruce said:
I also wanted to comment on a couple of things that made me laugh out loud. One was Paul's comment to Chris Derr asking him to inform the list of the answer once he gets EXAFS all figured out. (Is it 42?)
Bruce! I'm shocked at you! Everyone knows that legitimate EXAFS results must always be reported with the uncertainty; the answer is actually 42 +/- pi.

--Scott Calvin
Sarah Lawrence College
Hi all,

OK, after speaking with some people who use FTs in other fields, consulting various resources, and much pondering, I think I understand (and have resolved) the source of my confusion. Here's my current understanding:

The key is that both our chi(k) and chi(R) data are discrete and on a finite interval. I think most of us believe that chi(k) is "really" a continuous function. Experimentally we sampled at various values of k and may at some point have interpolated onto a grid (ifeffit uses 0.05 inverse angstroms), but in principle we could take more data (and alter the interpolation routine) to make the spectrum as fine-grained as we desire. This is not a statement about resolution, of course, since effects like core-hole lifetime and instrumental resolution "smear out" the data somewhat.

The question is chi(R). It is tempting to think of it as "really" being a continuous function which we are only sampling at certain points. But in what sense is that true? chi(R) does not correspond to nature in the same way chi(k) does. If we change our k-space interval, chi(R) changes. In an oxide, for example, some parts of k-space correspond more to scattering off the oxygens, while others correspond more strongly to metal-metal paths. Thus the discrete chi(R) we use does not strictly correspond to a sampling of some other continuous function which we could find if we simply had enough k-space data. In fact, because we use a finite interval of chi(k) data, I think mathematicians would refer to what we are doing as a Fourier series and not a Fourier transform (this is sometimes disguised by terminology like "discrete time-limited Fourier transform").

OK, so now consider chi(R). It is intrinsically discrete. At this point there are several different ways we can look at chi(R) as an aid to understanding. One way is to pretend chi(R) is a function which is 0 between the discrete values at which it actually has meaning.
That turns out to correspond to the Fourier transform of a function which is the chi(k) we used repeated an infinite number of times (this is known as "periodic extension"). Another is to think of the discrete chi(R) as a sampled version of some continuous chi(R). Matt made the plausible argument that a good guess is that chi(k) goes to 0 outside the interval we used. (He also suggested it might be even better to assume it goes to noise.) In this case we can of course compute the values chi(R) would have between the points at which it is actually computed. This is a reasonable model, but I do want to point out that the argument for chi(k) going to 0 or to noise is more convincing above kmax than below kmin.

OK, so what ramifications does this actually have? First, suppose chi(k) (however we choose to weight it) is a cosine function and that we choose an interval which is an integer number of periods. Because it is permissible to view chi(R) as the Fourier transform of a periodic extension of this function, chi(R) will have a single non-zero value... no evidence of spreading, sidebands, or leakage. If one chooses to think of chi(R) as really being continuous, then the sidebands are "really" there, but were not "sampled." Of course the exact structure of the "invisible" sidebands depends on the structure of chi(k) outside the interval we sampled.

Now take the same cosine function but choose an interval which is a non-integer number of periods. Periodic extension yields a function with a sharp discontinuity. Thus chi(R) will have significant non-zero values at many points, and we will say there are sidebands attributed to truncation effects. If we choose to hold the view that chi(R) is a sampling of a continuous function, then we have simply changed the points at which we sample the function, and sidebands which were previously "invisible" become "visible." But from the periodic extension viewpoint, we have introduced a discontinuity where there was none previously.
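[Editor's note: the integer vs. non-integer period argument above can be checked numerically. This is a minimal numpy sketch using a plain cosine rather than real chi(k) data; the grid size and frequencies are arbitrary choices for illustration, not anything from ifeffit.]

```python
import numpy as np

# Compare the DFT of a cosine when an integer vs. a non-integer
# number of periods fits in the sampled interval.
N = 256
n = np.arange(N)

# Case 1: exactly 8 full periods fit in the window, so the
# periodic extension is seamless.
integer_fit = np.cos(2 * np.pi * 8 * n / N)

# Case 2: 8.5 periods -- the periodic extension has a jump at the seam.
fractional_fit = np.cos(2 * np.pi * 8.5 * n / N)

def spectrum(x):
    """Magnitude of the DFT, normalized to its peak value."""
    mag = np.abs(np.fft.rfft(x))
    return mag / mag.max()

s_int = spectrum(integer_fit)
s_frac = spectrum(fractional_fit)

# Integer case: all the weight lands in a single bin -- no visible
# sidebands, exactly as the periodic-extension picture predicts.
print("integer fit: bins above 1% of peak =", np.sum(s_int > 0.01))

# Non-integer case: the discontinuity spreads weight into many bins
# (the "sidebands" attributed to truncation).
print("fractional fit: bins above 1% of peak =", np.sum(s_frac > 0.01))
```

The integer-period case reports a single significant bin, while the half-period offset spreads significant amplitude across dozens of bins, matching the two scenarios described above.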
Both viewpoints are defensible! Thus, if you like to think in terms of periodic extension, zero-padding without windowing introduces discontinuities. If you think of chi(R) as a sampling of some underlying continuous structure, zero-padding merely reveals structure in chi(R) which was always there (due to truncation) but previously hidden. There is no disagreement as to the result, but there are two models which can be used to describe what is seen.

Finally, windowing works either because it softens the discontinuities or because it softens the boxcar function implicit in truncation.

In any case, the algorithm used by ifeffit is good, because it is reasonably "democratic." Since the data are padded with a large number of zeroes, every function suffers similar discontinuities, or, in the alternative viewpoint, the complete structure of the truncation error is revealed and we are not at the mercy of some arbitrary artifact of the points we choose to sample. This means that we don't get funny artifacts where one choice of k-range gives much less broadening than another.

Sorry for the long rant, but at least now I'm satisfied I understand this issue...

--Scott Calvin
Sarah Lawrence College
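[Editor's note: the zero-padding and windowing points in the post above can also be illustrated with a short numpy sketch. This is not ifeffit's actual code path, and the Hann window stands in generically for whatever window one might apply; the sizes are arbitrary.]

```python
import numpy as np

# Take an integer-period cosine, whose bare DFT shows a single spike,
# then zero-pad it: the finer sampling reveals the sinc-shaped sidebands
# of the implicit boxcar truncation that were previously "invisible."
N, pad = 256, 4096
n = np.arange(N)
x = np.cos(2 * np.pi * 8 * n / N)        # integer number of periods

def rel_spectrum(x, nfft):
    """DFT magnitude (zero-padded to nfft points), normalized to its peak."""
    mag = np.abs(np.fft.rfft(x, n=nfft))
    return mag / mag.max()

bare = rel_spectrum(x, N)                # sidebands fall between samples
padded = rel_spectrum(x, pad)            # sampling between the old bins

# Windowing softens the truncation edges before padding, so the
# revealed sideband structure is strongly suppressed.
windowed = rel_spectrum(x * np.hanning(N), pad)

print("bare:     bins above 1% of peak =", np.sum(bare > 0.01))
print("padded:   bins above 1% of peak =", np.sum(padded > 0.01))
print("windowed: bins above 1% of peak =", np.sum(windowed > 0.01))
```

The bare spectrum has one significant bin; zero-padding exposes many significant bins (the always-present truncation structure); windowing shrinks that count dramatically, consistent with either of the two viewpoints described above.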
participants (3)
- Bruce Ravel
- scalvin@slc.edu
- Scott Calvin