Hi, I figured I should comment on some of the issues with background subtraction. I agree with 99.9% of what Bruce said. This probably represents the UWXAFS/Ifeffit party line, and so it's definitely worth questioning. Much about this topic is not completely known, and I'm certainly open to suggestions... Carlo mentioned an algorithm:
> I know that Steve Wasserman and Jeff Terry have a Mathematica
> background subtraction algorithm which allows you to back-FT the
> region below Rbkg and subtract it from chi. This seems to be roughly
> the same idea as the background fit, but is less biased by the
> fitting. I would appreciate your comments.
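For concreteness, here's a minimal numpy sketch of that sort of Fourier filtering. This is just my guess at the idea, not Steve and Jeff's actual code: it assumes chi(k) on a uniform k-grid, skips the k-weighting and window that a real EXAFS transform would apply, and all the names are made up.

    import numpy as np

    def subtract_low_r(k, chi, rbkg=1.0):
        """Remove the R < rbkg part of chi(k) by Fourier filtering."""
        n = len(k)
        dk = k[1] - k[0]                # assumes a uniform k-grid
        # conjugate R values: exp(2ikR) means R = pi * (FFT frequency)
        r = np.fft.rfftfreq(n, d=dk) * np.pi
        chi_r = np.fft.rfft(chi)
        chi_r[r > rbkg] = 0.0           # keep only the low-R components
        bkg = np.fft.irfft(chi_r, n)    # back-FT: a smooth 'background' in k
        return chi - bkg, bkg

The point is just that zeroing everything above Rbkg and back-transforming gives a smooth, background-like function in k-space that can be subtracted from chi.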
I don't know the details of Steve and Jeff's method, but it sounds close to Autobk/spline(). Two other methods are also similar: 1) the method of Cook and Sayers, which smoothed mu(E) to give the background function, and 2) the method of Li, Bridges, and Booth (in their work on Atomic XAFS), who subtracted a first-shell Feff calculation of chi(k) from the data and then filtered out the high-frequency parts to reveal structure in the background. There may be other similar methods or variations as well. These all share the idea that mu0(E) is defined by the lowest-frequency components of mu(E). The subtleties are then 1) whether you use the higher-frequency components in the fit for the background (as most 'classic' background subtractions and Cook and Sayers do), 2) whether you use an estimate of the first-shell EXAFS, and 3) how you control the flexibility of the spline.

Of course, the main issue with background subtraction in EXAFS is that the background extends into the first shell, and may affect the results for the first-shell parameters. This is a "well-known" problem in EXAFS, though perhaps more widely assumed and accepted than actually studied. There's also the related idea that background subtraction should leave a "pretty" chi(k) and chi(R). Again, I think this is not as well established as we might hope, but it's probably reasonable. For now, I'll assume it's right.... If the background "leaks into" the first shell (as everyone knows it does), then the first shell probably also extends into the low-R "background region". Certainly, if there were no overlap of the first shell and background, then any old background subtraction method would work just fine. This suggests that we *should* take the first shell into account when determining the background. I heartily agree with Bruce on this (and most other things).

How this happens is not necessarily obvious. In Autobk/spline(), the first-shell leakage to low-R can be taken into account crudely by supplying a 'standard': a chi(k) spectrum with approximately the right first shell. The idea is that the first shell of the standard can be scaled in amplitude to match the first shell of the data, and so predict the leakage of the first shell to low-R. Without a 'standard', Autobk/spline() simply minimizes the low-R components, which ignores the first-shell leakage. Li, Bridges, and Booth sort of boot-strapped the solution (first using a simple background to get a fitted chi, then subtracting that from mu to get a refined background). Refining the background in Feffit/feffit() is roughly the same concept, but allows a better model of the first shell. It also allows us to assess correlations between background and structural parameters, which seems useful. Another important difference is that Autobk/spline() may use a very different FT range than Feffit/feffit(), so low-R 'ugly bumps' that were reduced in Autobk/spline() might reappear with other FT settings.

The ability to assess the correlations of the background and structural parameters was a main motivation for putting this feature in Feffit. Everyone "knew" that you could get "wrong answers" for the first-shell parameters because the background was "wrong", but it was rare to measure how wrong the first-shell parameters could be. By refining background parameters together with structural parameters, we can now measure this easily. In general, the background parameters and the structural parameters seem to be slightly correlated, with E0 and R1 influenced more than N and sigma2.
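To make that concrete, here's a toy sketch of the Autobk-style minimization. This is emphatically not the real algorithm (which works on mu(E), applies k-weights and windows, ties the number of knots to roughly 2*Rbkg*dK/pi, and can use a standard for the first-shell leakage); it just varies spline values at fixed knots so that the FT of the background-subtracted data has as little amplitude below Rbkg as possible. All the names here are made up.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import least_squares

    def autobk_sketch(k, mu_k, rbkg=1.0, nknots=8):
        """Pick spline values y_j at fixed knots in k so that the FT of
        (mu_k - spline) has minimal amplitude below rbkg."""
        knots = np.linspace(k[0], k[-1], nknots)
        n, dk = len(k), k[1] - k[0]
        r = np.fft.rfftfreq(n, d=dk) * np.pi   # exp(2ikR) convention again
        low_r = r < rbkg

        def resid(yvals):
            bkg = CubicSpline(knots, yvals)(k)
            chi_r = np.fft.rfft(mu_k - bkg)
            # the misfit is the low-R part of chi(R), real and imaginary
            return np.concatenate([chi_r[low_r].real, chi_r[low_r].imag])

        fit = least_squares(resid, x0=np.interp(knots, k, mu_k))
        return CubicSpline(knots, fit.x)(k)    # the background, on the k-grid

A 'standard' would change the residual: instead of pushing the low-R components to zero, you'd push them toward the scaled low-R leakage predicted from the standard's first shell.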
Some people are uncomfortable with the idea of measuring background parameters twice, which is what running Autobk/spline() and then refining the background with Feffit/feffit() amounts to. I don't see a problem with this from an 'information theory' point of view. A fixed set of parameters is set aside to account for the background: Autobk/spline() measures them, and Feffit/feffit() refines that measurement. More simply, if a parameter can be measured, it is allowed to be measured.

But this is a good opportunity to bring up the bkg_cl() command in Ifeffit. By itself, bkg_cl() gives a pretty bad background, though it does give a reasonably good and stable normalization. Importantly, it does no spline fit at all, and so "uses up" no independent points (if you're hung up on counting such things; the usual estimate is N_idp ~ 2*dK*dR/pi). Doing this, and then including the background in feffit(), really does put the refinement of all variables in one fit. Many (possibly most) people greatly prefer refining everything at once. It's easy to do (Athena even has a pull-down menu and guesses the central atom for you!!), and a good thing to try if you're worried about the relation between background subtraction and fitted parameters.

--Matt