Definitely, XRD peaks are not EXAFS oscillations, so you should exclude them from the data analysis. The simplest way is to use the deglitching tool in Athena. A more complicated approach is to subtract the peaks (if you know their shapes very well). If you still have access to the experiment, try measuring the sample at a different sample-surface-to-detector angle; that shifts the positions of the diffraction peaks on the energy scale. After merging a few such scans you eliminate the XRD peaks.

Did you measure with a single-pixel fluorescence detector? For such experiments a multi-element detector can also be useful...

regards kicaj

p.s. I didn't check the attached file. Next time, please give a link to a graph rather than attaching the file.
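One way to realise the merge step kicaj suggests is sketched below in numpy. The point-wise median is an assumption on my part, since kicaj only says "merge" and Athena's merge is an average; a median rejects a Bragg peak as long as it appears in only a minority of the scans:

    import numpy as np

    def merge_scans_reject_bragg(common_energy, scans):
        """Merge repeated scans measured at different sample angles.

        scans : list of (energy, mu) tuples, one per scan
        Each scan is interpolated onto common_energy; the point-wise median then rejects
        a Bragg peak as long as, at any given energy, fewer than half of the scans are
        affected (the peaks move in energy when the sample angle changes).
        """
        stack = np.array([np.interp(common_energy, e, mu) for e, mu in scans])
        return np.median(stack, axis=0)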
On 12-12-11 08:08, Zhaomo Tian wrote:

Dear all,
I have XAFS data for Ag with CO adsorption (Ag K edge); the Ag is a ~300 nm thin film deposited on a Si/SiO2 substrate. In the original μ(E) spectrum, four obvious diffraction peaks appear between 25894 and 26044 eV (file attached), and I expect they will degrade the quality of the fitting. Does anyone know how to deal with these diffraction peaks? Can they be corrected by smoothing or by changing some data points in the original file? I want a correction that will not compromise the data analysis later.
Thanks for your help.
*Tian Zhaomo*
*M.S. candidate*
*Lab. For Photosynthesis Materials and Devices*
Department of Materials Science and Engineering, POSTECH
san 31, Hyoja-Dong, Nam-Gu, Pohang, 790-784, Republic of Korea
office: +82-54-279-2827
mobile: +82-10-7747-3790
e-mail: zhaomo1989@postech.ac.kr
I second that motion about the usefulness of multi-element detectors. Some people suggest that because single-element detectors can now count as fast as multi-element detectors used to, and cover solid angle more efficiently, multi-element detectors are obsolete.

My EXAFS data reduction software includes an option called 'deglitch with reference scalers'. An example of how this works: suppose you have a 7-element detector (elements 0-6) and there's a Bragg peak in elements 4 and 5. Now form the ratio 4/(0+1+2+3+6), where the digits refer to element numbers. Except for the Bragg peak, this ratio should be slowly varying and almost devoid of EXAFS information, because everything in it should be proportional to the same signal. Now do the deglitch thing: fit this ratio to a smooth function (I use a cubic polynomial) in a region outside the Bragg peak and replace the ratio in the Bragg region with the fit. Then, in that region, replace the signal from element 4 with (fitted)*(0+1+2+3+6). Do the same with element 5.

What you've done is to reconstruct an estimate of what elements 4 and 5 should have been doing with reference to the other elements. Yes, you've faked some information, but much less than if you had just 'papered over' the glitch region with a cubic on the sum. This method allows you to deal with Bragg peaks in spectral regions where there is detail without totally obliterating the signal you're after, but it only works if you have multiple elements.

OK, rant-mode off.
mam
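A minimal numpy sketch of the reference-scaler procedure Matthew describes (the function name and argument layout are assumptions, not his actual reduction software; the cubic fit and the replacement of the bad element by the fitted ratio times the sum of the good elements follow his description):

    import numpy as np

    def deglitch_with_reference(energy, channels, bad, good, fit_region, glitch_region):
        """Reconstruct a Bragg-hit detector element from the glitch-free elements.

        channels      : dict {element number: counts array}, e.g. a 7-element detector
        bad           : element containing the Bragg peak (e.g. 4)
        good          : elements free of the peak (e.g. [0, 1, 2, 3, 6])
        fit_region    : (e_lo, e_hi) energy range around the peak used for the cubic fit
        glitch_region : (e_lo, e_hi) energy range to be replaced
        """
        reference = np.sum([channels[g] for g in good], axis=0)
        ratio = channels[bad] / reference      # slowly varying except at the Bragg peak

        in_glitch = (energy >= glitch_region[0]) & (energy <= glitch_region[1])
        in_fit = (energy >= fit_region[0]) & (energy <= fit_region[1]) & ~in_glitch

        # Fit the ratio to a cubic outside the peak, then use the fitted ratio inside it.
        coefs = np.polyfit(energy[in_fit], ratio[in_fit], 3)
        fixed = channels[bad].astype(float)
        fixed[in_glitch] = np.polyval(coefs, energy[in_glitch]) * reference[in_glitch]
        return fixed

The same call would be repeated for element 5, and the corrected channels then summed as usual.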
On Tuesday, December 11, 2012 04:08:03 PM Zhaomo Tian wrote:
I have XAFS data for Ag with CO adsorption (Ag K edge)... In the original μ(E) spectrum, four obvious diffraction peaks appear between 25894 and 26044 eV... Does anyone know how to deal with these diffraction peaks?
You have received some decent advice on how to post-process your data to minimize the effect of the diffraction peaks. I think it bears mentioning that not all problems are best solved after the fact in software. Some problems should be addressed as the data are measured, or even beforehand.

I see from the data file you sent that you were at beamline 10C at Pohang. It is hard to tell for certain from the data file or from the website, but it seems as though you were using a PIPS detector to measure your fluorescence XAFS.

This experimental setup can be a difficult one to combine with a sample that diffracts. The sort of PIPS commonly used at an XAS beamline tends to have a very large surface area. That means that the likelihood of some diffraction peak from the sample hitting the detector during the measurement is pretty high. In fact, it happened 4 times for you. That means you have a lot of data points to deglitch, were you to follow Kicaj's advice. Because you used a PIPS rather than a multi-element detector, you don't have the option of following any of Matthew's (excellent) advice.

I hate to say it, but I think you are screwed. The best you can do is deglitch as best you can. Because you will be removing so many points from the data, you will introduce substantial systematic error into the data set that remains. I don't really see what you can do about that at this late date.

So, what might you do the next time you visit the synchrotron to obtain better data? In fact, there are a number of things that you can consider at the stages of sample preparation or of data collection.

1. You don't say a lot about the sample or its substrate. Perhaps you have a reason that the substrate *must* be crystalline. Perhaps not. Putting your film on an amorphous substrate would obviate the problem of diffraction from the substrate. That may be your best bet.

2. The large size of the PIPS detector is a contributing factor to the problem. Simply using a detector with a smaller surface area reduces (but certainly does not eliminate!) the likelihood of diffraction peaks hitting it. Moving the detector farther away from the sample would serve the same purpose. Of course, doing so would also serve to reduce your count rate, thus reducing the quality of your data. There is a cost to everything!

3. Use the 13-element Ge detector instead of the PIPS. Then you can simply eliminate channels hit by Bragg peaks or do the post-processing trick Matthew described.

4. At my beamline, we have users almost every cycle who measure stuff on crystalline substrates. Our favorite trick is to mount the sample on a spinner (e.g. http://dx.doi.org/10.1063/1.1147815). At my beamline, we actually attach the sample to the sort of small DC fan that is used to cool electronics. Inexpensive, simple, and effective! By keeping the sample constantly in motion, the energy at which the Bragg condition is met is constantly changing. This serves to reduce the effect of the Bragg peak by a few orders of magnitude by spreading it out in energy. Usually, it can be made smaller than chi(k), resulting in analyzable data.

The bottom line is that you have the sort of problem that I think needs to be solved up front rather than after the fact.

I know that's not helpful right now, but hopefully it will be the next time you go to the beamline.
Cheers, B

--
Bruce Ravel ------------------------------------ bravel@bnl.gov
National Institute of Standards and Technology
Synchrotron Methods Group at NSLS --- Beamlines U7A, X24A, X23A2
Building 535A, Upton NY, 11973
Homepage: http://xafs.org/BruceRavel
Software: https://github.com/bruceravel
Here's one last trick that I used to use when I used a plastic scintillator at NSLS and didn't have a spinner. WARNING - this is painful! I used to tape a piece of Polaroid film to the face of the detector, run through a scan, develop the film, see where the white spots were, and put Pb tape on the detector at those spots. Told you it was painful!

Aside from that, I think Bruce is right - you're screwed with 1-channel data, especially if the Bragg peaks are broad and/or come in the XANES region. If you have to take data without a spinner or multi-element detector, the trial-and-error method of changing the sample orientation often works to shift the Bragg peaks away from sensitive parts of the spectrum to where you don't mind too much losing some points.

I have never tried fitting peak shapes to the Bragg peaks (say, a Lorentzian or pseudo-Voigt plus a smooth polynomial; don't subtract the polynomial). I suspect that the peak shapes aren't 'nice' enough for that to work reliably. The shapes of the tails, for instance, would be influenced by the angular divergence of the beam onto the sample and into the monochromator, and by any strain.

mam
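Matthew says above that he has never tried this and doubts the peak shapes are well-behaved enough for it to work reliably. Purely as an illustration of the fit he describes (a pseudo-Voigt plus a smooth polynomial, with only the peak subtracted afterwards), a sketch might look like the following; the function names, energy ranges, and starting values are hypothetical:

    import numpy as np
    from scipy.optimize import curve_fit

    def pseudo_voigt(x, amp, x0, width, eta):
        """Mix of a Lorentzian and a Gaussian of the same width (eta between 0 and 1)."""
        lorentz = width**2 / ((x - x0)**2 + width**2)
        gauss = np.exp(-np.log(2.0) * ((x - x0) / width)**2)
        return amp * (eta * lorentz + (1.0 - eta) * gauss)

    def peak_on_background(x, amp, x0, width, eta, c0, c1, c2):
        """Bragg peak riding on a smooth quadratic stand-in for mu(E)."""
        return pseudo_voigt(x, amp, x0, width, eta) + c0 + c1 * x + c2 * x**2

    def subtract_bragg_peak(energy, mu, e_lo, e_hi, e0_guess):
        """Fit peak + background over [e_lo, e_hi]; subtract only the peak, not the polynomial."""
        sel = (energy >= e_lo) & (energy <= e_hi)
        x = energy - e0_guess          # center the energy axis so the fit is well conditioned
        p0 = [np.ptp(mu[sel]), 0.0, 2.0, 0.5, np.mean(mu[sel]), 0.0, 0.0]
        popt, _ = curve_fit(peak_on_background, x[sel], mu[sel], p0=p0)
        return mu - pseudo_voigt(x, *popt[:4])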
On Wednesday, December 12, 2012 09:49:45 AM Matthew Marcus wrote:
Here's one last trick that I used to use when I used a plastic scintillator at NSLS and didn't have a spinner. WARNING - this is painful! I used to tape a piece of Polaroid film to the face of the detector, run through a scan, develop the film, see where the white spots were, and put Pb tape on the detector at those spots. Told you it was painful!
These days, I suspect the painful part is *finding* Polaroid film! :)

B
(For masochists only): I still have Matthew's plastic scintillator in my cabinet at the NSLS, so if anyone is willing to try this trick with the original hardware, let me know.

Anatoly
On 12-12-12 18:49, Matthew Marcus wrote:
I have never tried fitting peak shapes to the Bragg peaks (say, a Lorentzian or pseudo-Voigt plus a smooth polynomial; don't subtract the polynomial).

I have never tried it either; I simply took care of the data during the experiment, as mentioned in the posts before. And I suppose it would be very painful, more so than the Pb tape... Anyway, good idea with the Pb tape.
Coming back to the problem with Bragg peaks: does anyone know whether it is possible to analyse EXAFS with more than one region of interest, i.e. two or more FT windows - before, between, and after the Bragg peaks? Or is it possible to introduce one's own FT window function in Athena?

regards kicaj
On Wednesday, December 12, 2012 11:09:29 PM Dr. Dariusz A. Zając wrote:
Coming back to the problem with Bragg peaks: does anyone know whether it is possible to analyse EXAFS with more than one region of interest, i.e. two or more FT windows - before, between, and after the Bragg peaks? Or is it possible to introduce one's own FT window function in Athena?
Interesting question. Some days I really love this mailing list.

A similar question came up recently on the mailing list, although I think here you are asking whether regions can be chosen in k-space. (The previous discussion was about regions in R space.)

I am commenting off the top of my head here, so I may be neglecting something. But I think what you are suggesting is to have a windowing function that goes up, then down, then up again, then down again. I think the problem with that is that it would introduce low-frequency components into the transformed data. Granted, the transform of the theory would have the same effect, but I worry about how that kind of windowing function would be correlated with the parameters of the fit.

As Scott suggested in the earlier conversation, another option is to do a multiple data set fit where the data are imported twice. For one instance of the data, the FT window is set over the first range. For the other instance, the window is set over the second range. Again, I worry about how the choice of windows may be correlated with the parameters of the fit, given the limited information content used in each part of the MDS fit.

I don't plan on exploring this myself. But if someone wanted to do so and attempt to convince me that it should be implemented in the software, I am -- as always -- willing to listen.

Cheers, B
Hmm. Another possible way to do it is to delete the bad data points and then do a "slow" FT, which would be a fit of the data, at the points given with no interpolation, to a sum of sines and cosines. This would have the nice feature of using the data as it is and ignoring the bad stuff. Filtering would involve multiplying the sines and cosines by some window function (in R space) and evaluating them *at the given k-points*, not on a regular grid.

This of course means that evaluating FEFF paths and the like is likely to be slow, because you don't get to use recursion relations to evaluate sin(2*R*k(i)+delta) as you would if k(i) were uniformly tabulated. Now that computers are a bazillion times faster than they were when EXAFS analysis traditions were established, maybe that's the way to go. What do you think?

mam
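A sketch of the "very slow FT" Matthew proposes, written as an ordinary least-squares fit of sines and cosines evaluated at the actual (gappy, non-uniform) k points. This is not an existing Ifeffit or Larch routine; the R grid, k-weight, and names are my choices:

    import numpy as np

    def very_slow_ft(k, chi, rgrid, kweight=2):
        """Least-squares 'Fourier transform' of chi(k) sampled at arbitrary k points.

        k, chi : data with the glitched points simply deleted (no interpolation)
        rgrid  : R values at which to evaluate the transform
        Returns complex c[j] whose real/imaginary parts are the cosine/sine amplitudes
        of 2*R_j*k that best reproduce k**kweight * chi(k) in the least-squares sense.
        """
        y = chi * k**kweight
        basis = np.hstack([np.cos(2.0 * np.outer(k, rgrid)),
                           np.sin(2.0 * np.outer(k, rgrid))])
        coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
        n = len(rgrid)
        return coef[:n] + 1j * coef[n:]

    # e.g. rgrid = np.arange(0.0, 8.0, 0.05); np.abs(very_slow_ft(k, chi, rgrid)) then plays
    # the role of |chi(R)| without ever inventing data inside the gaps.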
On Wednesday, December 12, 2012 03:00:41 PM Matthew Marcus wrote:
Hmm. Another possible way to do it is to delete the bad data points and then do a "slow" FT, which would be a fit of the data, at the points given with no interpolation, to a sum of sines and cosines.
I think data should be measured correctly in the first place if at all possible! ;)

That said, your suggestion seems completely valid to me. And, just to be clear, it's completely impossible with Ifeffit. Larch, on the other hand....

B
I agree about the first point, but what's done is done, and the user who started this discussion has data which he needs to do something with.

What's Larch? Why can't Ifeffit do what I suggest? Is it because it uses an algorithm which requires uniform tabulation?

One issue with my proposal - post-edge background subtraction (and even pre-edge, if the Braggie comes at the wrong place) could be corrupted by gaps in the data. Imagine if a spline knot falls in a wide gap.

An example of an effect similar to Bragg peaks but not fixable by screwing with data acquisition: 2-electron peaks in elements like Ce. For those, you just have to subtract the peaks, but fortunately there is literature on how to do that.

mam
Hi Matthew, Bruce, All,
Sorry for not being able to join this discussion earlier. I agree that having glitch-free data is preferred. But I also think it's OK to simply remove diffraction peaks from XAFS data -- you're just asserting that you know those values are not good measurements of mu(E). I don't see it as all that different from removing obvious outliers from fluorescence channels from a multi-element detector -- though that has the appeal of keeping good measurements at a particular energy.
On Wed, Dec 12, 2012 at 5:06 PM, Bruce Ravel wrote:
On Wednesday, December 12, 2012 03:00:41 PM Matthew Marcus wrote:
Hmm. Another possible way to do it is to delete the bad data points and then do a "slow" FT, which would be a fit of the data, at the points given with no interpolation, to a sum of sines and cosines.
I'm not sure that "slow FT" versus "discrete FT" is that important here, though perhaps I'm missing your meaning. My view is that the EXAFS signal is band-limited (finite k range due to F(k) and 1/k, and finite R range due to F(k), lambda(k), and 1/R^2), so that sampling on reasonably fine grids is going to preserve all the information that is really there.

I do think that having richer window functions and spectral weightings would be very useful. You could view chi(k) data with glitches as wanting a window function that had a very small value (possibly zero: remove this point) exactly at the glitch, but had a large value (~1) at nearby k-values.

Another (and possibly best) approach is to assign an uncertainty to each k value of chi. At the glitch, the uncertainty needs to go way up, so that not matching that point does not harm the overall fit.

Larch has a separate Transform class used for each dataset in a fit - this includes the standard FT parameters and fit ranges. It is intended to be able to do this sort of advanced windowing (and in principle, advanced weighting too). Currently (I'd be ready for a release but am commissioning our new beamline and am hoping to post a windows installer by the end of the month) this has "the standard XAFS window functions", but one can create their own window functions (for k- and/or R-space) with suppressed points/regions, etc., and use them as well. I have to admit I haven't tried this, but it was definitely part of the intention, and should work. My hope is to extend this to include some wavelet-like weightings as well.

--Matt
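The following is not Larch's actual Transform API, just a generic numpy sketch of the kind of user-defined k-window Matt describes: a standard Hanning-sill window pulled down to a small value (or zero) over the glitched k ranges. All names and parameters are illustrative:

    import numpy as np

    def notched_hanning_window(k, kmin, kmax, dk, glitch_ranges, floor=0.0):
        """Hanning window with sills of width dk, notched over the Bragg-glitch regions.

        glitch_ranges : list of (k_lo, k_hi) pairs covering the glitches
        floor         : 0 removes those points entirely; a small value merely de-weights them
        """
        win = np.ones_like(k, dtype=float)
        lo, hi = kmin - dk / 2.0, kmin + dk / 2.0
        win[k < lo] = 0.0
        rise = (k >= lo) & (k <= hi)
        win[rise] = np.sin(0.5 * np.pi * (k[rise] - lo) / dk) ** 2
        lo2, hi2 = kmax - dk / 2.0, kmax + dk / 2.0
        win[k > hi2] = 0.0
        fall = (k >= lo2) & (k <= hi2)
        win[fall] = np.cos(0.5 * np.pi * (k[fall] - lo2) / dk) ** 2
        for k_lo, k_hi in glitch_ranges:       # pull the window down across each glitch
            win[(k >= k_lo) & (k <= k_hi)] = floor
        return win

The alternative weighting Matt mentions would instead leave the window at 1 and inflate a per-point uncertainty array over the same glitch_ranges, so that mismatching those points cannot pull the fit.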
On 12/12/2012 8:03 PM, Matt Newville wrote:
Hi Matthew, Bruce, All,
Sorry for not being able to join this discussion earlier. I agree that having glitch-free data is preferred. But I also think it's OK to simply remove diffraction peaks from XAFS data -- you're just asserting that you know those values are not good measurements of mu(E). I don't see it as all that different from removing obvious outliers from fluorescence channels from a multi-element detector -- though that has the appeal of keeping good measurements at a particular energy.
That's kind of my point. The standard methods, which require uniform tabulation, cause you to fill gaps in data by interpolation. Effectively, you're claiming knowledge of data for which you don't have knowledge. Better to use a method which preserves agnosticism about the bad data. The method I described for deglitching with reference scalers tries to preserve what information you do have by allowing you to ignore some channels. That only works if there are 'good' channels to serve as references.
I'm not sure that "slow FT" versus "discrete FT" is that important here, though perhaps I'm missing your meaning. My view is that the EXAFS signal is band-limited (finite k range due to F(k) and 1/k, and finite R range due to F(k), lambda(k), and 1/R^2), so that sampling on reasonably fine grids is going to preserve all the information that is really there.
Again, FFT requires a uniform tabulation of data in k-space, which is generally secured by interpolation. Traditionally, this has been histogram interpolation, which tries to take into account the possibility of having multiple data points within one of the new intervals in k. That method, however, has to 'make up' data when there isn't any in one or more intervals. Actually, there are three possible tiers of speed. There's FFT, there's discrete FT done the slow way by computing the sums as they appear in the textbook (possibly good if you want a funny grid in R space), and then there's what I propose.
I do think that having richer window functions and spectral weightings would be very useful. You could view chi(k) data with glitches as wanting a window function that had a very small value (possibly zero: remove this point) exactly at the glitch, but had a large value (~1) at nearby k-values.
Another (and possibly best) approach is to assign an uncertainty to each k value of chi. At the glitch, the uncertainty needs to go way up, so that not matching that point does not harm the overall fit.
A window function which went to 0 in the glitch region is tantamount to an assertion that chi(k)=0 there, which is generally not so and is probably a lot worse than the error you make by even the lamest interpolation through the glitch. The uncertainty makes more sense, except that if you filter data, the noise spreads out in a correlated manner over adjacent regions. Also, the tacit assumption of an uncertainty-based method is that the noise is uncorrelated within the high-uncertainty region. That said, I can imagine that this is one of those methods that works better than it has any right to.

Now, in my proposed system, how would you fit filtered (q-space) data? The first step is to do the very slow FT (VSFT) on the input data with no interpolation. This leads to a set of sines and cosines which replicate the data. I would then apply the window and do the back-summation of all those sines and cosines at the given points. This will undoubtedly introduce some artifacts near the edges of the deleted regions, simply due to our ignorance of what the data really do in there. Next, in each fit iteration, I would evaluate the model function at each point, then do the VSFT and filtering exactly as had been done on the data, which causes the fit to have the same artifacts.

Since the VSFT and filter operations are linear, it should be possible to pre-compute a kernel which relates the values of unfiltered data to those of filtered data. This would be a matrix of size Np*Np, with Np = # of points. Using the kernel instead of evaluating gazillions of trig functions would be reasonably fast; I suspect that ifeffit does a similar trick. Similarly, fitting in R-space would be accomplished by computing the VSFT on data and model function.

Yes, I know I talk a good game. Unfortunately, I'm too swamped to attempt implementation. Also, I'd have to reinvent a number of very well-engineered wheels to do it.
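Matthew says he is not going to implement this, so the following is only a sketch of the pre-computed kernel he describes, assuming the least-squares version of the VSFT sketched earlier in the thread; rgrid, rwindow, and the function name are hypothetical:

    import numpy as np

    def vsft_filter_kernel(k, rgrid, rwindow):
        """Np x Np linear operator mapping unfiltered chi(k) to R-window-filtered chi(k),
        with chi tabulated at the actual (non-uniform, gappy) k points.

        rwindow : window in R space, same length as rgrid (e.g. 1 on [1.5, 3.5] A, 0 outside)
        """
        basis = np.hstack([np.cos(2.0 * np.outer(k, rgrid)),
                           np.sin(2.0 * np.outer(k, rgrid))])
        analysis = np.linalg.pinv(basis)                # the least-squares VSFT as a matrix
        weights = np.concatenate([rwindow, rwindow])    # same R window for cos and sin parts
        return basis @ (weights[:, None] * analysis)    # K: filtered = K @ unfiltered

    # In each fit iteration, apply the same K to the data and to the model chi evaluated
    # at the same k points, so both carry identical filtering artifacts.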
Larch has a separate Transform class used for each dataset in a fit - this includes the standard FT parameters and fit ranges. It is intended to be able to do this sort of advanced windowing (and in principle, advanced weighting too). Currently (I'd be ready for a release but am commissioning our new beamline and am hoping to post a windows installer by the end of the month) this has "the standard XAFS window functions", but one can create their own window functions (for k- and/or R-space) with suppressed points/regions, etc, and use them as well. I have to admit I haven't tried this, but it was definitely part of the intention, and should work. My hope is to extend this to include some wavelet-like weightings as well.
What is Larch? Is it a replacement for ifeffit? By 'class' do you mean object as in OOP? "And now for something completely different. The larch." - Monty Python

mam
Dear Bruce, so I need to re-read this topic once more ;) Yes, I was thinking about k-space. But as long as the regions are in uniform units connected with the x scale, it should be the same... since the FT and the back-FT are similar.

thanks and regards kicaj

On 12-12-12 23:16, Bruce Ravel wrote:
A similar question came up recently on the mailing list, although I think here you are asking whether regions can be chosen in k-space. (The previous discussion was about regions in R space.)
participants (6):
- "Dr. Dariusz A. Zając"
- Anatoly I Frenkel
- Bruce Ravel
- Matt Newville
- Matthew Marcus
- Zhaomo Tian