question about S02 (passive electron reduction factor)
Dear Ifeffit folks, This is Yu-Chuan. I am now trying to fit my sample, magnesium orthovanadate. I tried to follow the steps in Bruce's Cu example and Scott's ZnO example, and I ran into a problem with my fit. At the beginning of the fit, S02 (amp) is far from 0.9 (only about 0.3). This is very different from Bruce's and Scott's examples, where the fitted amp comes out close to 0.9 from the start even with just four guessed parameters (amp, e0, delr, and ss). Also, John has said that the S02 value should be around 0.9. That's why I am wondering whether there is something wrong with my data processing. Has anyone else run into this kind of problem? Thank you for your help. Good day, Yu-Chuan
Hi Yu-Chuan, There are several possibilities. One is sample prep. If you have an extremely uneven sample (lots of pinholes, etc.) in transmission, it can lead to results like you're describing. Another is normalization. If you're using Athena for background subtraction, make sure you look at the pre-edge line and post-edge curve and see that they look reasonable. Sometimes if the background has a funny shape to it, Athena can create a post-edge curve that shoots way up at the edge energy; this could also lead to the effect you're describing. Finally, look at the correlation between S02 and sigma2... if it is very high (say 0.95 or above), the fit may simply be coming up with low estimates for each. There are, of course, other possibilities as well, but in my experience those are the most common. --Scott Calvin Sarah Lawrence College
Hi, Yesterday we had a question about oddly small S02 values. In his thorough summary of possible explanations, Scott mentioned this: On Wednesday 25 February 2004 01:18 pm, Scott Calvin wrote:
Another is normalization. If you're using Athena for background subtraction make sure you look at the pre-edge line and post-edge curve and see that they look reasonable. Sometimes if the background has a funny shape to it Athena can create a post-edge curve that shoots way up at the edge energy; this could also lead to the effect you're describing.
I thought that a word of explanation about why Athena does what she does might be helpful to folks. In this email I am going to refer to data taken on an iron foil. To follow along, fire up Athena and select "Import a demo project" from the Help menu. Select the `calibrate.prj' file. This contains an iron foil spectrum. Set the plotting range in energy to [-200:1500]. Set the upper bound of the normalization range to 400. Click on the "post-edge line" button in the plotting options section. This will plot the data, the background, and the post-edge line.

Note that the post-edge line is U-shaped, diverging from the data a bit at low energy and significantly at high energy. Note also that the edge step is about 3.05. Now set the upper bound of the normalization range to 1700 and click the red E plotting button. Now the post-edge line goes through the data and the edge step is about 2.86.

So what's going on? Well, the post-edge line is determined by regressing a quadratic polynomial to the data in the user-specified normalization range. When the upper bound was set to 400, the square term in the polynomial was quite large because *that* was the regression to the data over that short data range. When the upper bound was set to 1700, the square term was a lot smaller, the post-edge line was closer to linear, and it was constrained by the regression to follow the data over the entire data range.

Setting the upper bound of the normalization range too low had the effect of increasing the edge step by about 6%. Remember that the edge step is the value of the post-edge line extrapolated back to E0 after the pre-edge line is removed from the data. With the short energy range, the edge step is too large and the chi(k) is attenuated. In the subsequent fit, S02 will have to be similarly smaller to compensate. In this case it was a 6% effect, but if the post-edge line is really screwy, the attenuation could be much larger.
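The mechanism Bruce describes can be sketched in a few lines of Python. This is a schematic illustration only, not Athena's actual code; the `edge_step` function, the synthetic mu(E), and all numbers are invented for the demo:

```python
import numpy as np

# Schematic sketch of a post-edge line (NOT Athena's actual code):
# regress a quadratic to mu(E) over the normalization range, then
# evaluate it at E0.  The edge step is that value, assuming the
# pre-edge line has already been removed.
def edge_step(energy, mu, e0, norm_min, norm_max):
    sel = (energy >= e0 + norm_min) & (energy <= e0 + norm_max)
    coefs = np.polyfit(energy[sel], mu[sel], 2)  # quadratic regression
    return np.polyval(coefs, e0)                 # extrapolate back to E0

# Invented post-edge data: a smooth decay plus some extra curvature
# near the edge (the kind of "funny shape" that misleads a short fit)
e0 = 7112.0  # Fe K edge
energy = np.linspace(e0 + 50.0, e0 + 1700.0, 800)
mu = (3.0 * np.exp(-(energy - e0) / 3000.0)
      + 0.3 * np.exp(-(energy - e0) / 200.0))

# A short normalization range and a long one regress to different
# quadratics, so they extrapolate to different edge steps at E0.
step_short = edge_step(energy, mu, e0, 150.0, 400.0)
step_long = edge_step(energy, mu, e0, 150.0, 1700.0)
print(step_short, step_long)

# An edge step too large by a factor f attenuates chi(k) by 1/f, and
# the fitted S02 shrinks by roughly the same factor to compensate.
```

With these made-up numbers the two ranges give noticeably different steps; in Bruce's foil example the mismatch was about 6%, and the fitted S02 would be off by a similar factor.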
So this was the long-winded way of repeating what Scott said -- plot the post-edge line and make sure the regression was done such that the post-edge line goes through the entire data in a way that seems sensible. It is also, as Scott said, prudent to check the pre-edge line as well.

B

--
********* PLEASE NOTE MY NEW PHONE, FAX, & ROOM NUMBERS ******************
Bruce Ravel -- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory, Washington DC 20375, USA
phone: (1) 202 767 2268
fax: (1) 202 767 4642
NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973
My homepage: http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
Hi Scott, First, I'm not sure how your message got tagged as Spam by Argonne's SpamAssassin (words or phrases selling drugs??? huh???), but that happens well before I can do anything about it. I've forwarded this message on as a false positive for spam. For better archiving, I also included the full message you sent below.
Good point. It is much more straightforward to check the uncertainty than the correlation to determine if that could be responsible for the anomalous S02...so I should have said that Yu-Chuan should check the uncertainty of the S02 parameter. If it is very large (say 0.8), then Ifeffit is in effect reporting that it cannot determine S02 well for some reason. At that point, I think it is helpful to see if there is a high correlation, since that may give a clue as to the reason for the large uncertainty.
Knowing the correlations always helps. And stating a best-fit value without an uncertainty is dangerous (one might assume an implicit uncertainty, which may not be what you mean). This is probably a bit off-topic from the original question (where I'd guess self-absorption to be the main issue...), but let's consider typical fit results: S02 = 0.9 +/- 0.1, sigma2 = 0.015 +/- 0.005, and a correlation between S02 and sigma2 of C_S02_sigma2 = +0.90.

One could conclude from these that a true value of S02 = 1.0 was reasonable. But in order to get to S02 = 1.0, sigma2 has to go up to ~= 0.019 (~= sigma2_best + C_S02_sigma2 * delta_sigma2). S02 = 0.80 is also reasonable, but this implies sigma2 would drop to around 0.011. The correlation means that, although having either S02 = 0.80 OR sigma2 = 0.020 would be reasonable, having both S02 = 0.80 AND sigma2 = 0.020 is much less likely.

The correlation by itself says nothing about the likelihood of having a true value for S02 of, say, 0.5. There is a chance this can happen, but it's small because the uncertainty in S02 is 0.1. The correlation simply tells you how sigma2 would respond if S02 were 0.5, but nothing more.
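Matt's arithmetic can be written out explicitly. The numbers are the ones from his example; the helper function is hypothetical, not an Ifeffit routine:

```python
# Best-fit values, uncertainties, and correlation from the example above
s02_best, s02_err = 0.9, 0.1
sig2_best, sig2_err = 0.015, 0.005
corr = 0.90  # correlation between S02 and sigma2

def sigma2_if_s02_is(s02):
    """Linear estimate of where sigma2 moves if S02 is forced to `s02`:
    sigma2_best + corr * (number of sigmas S02 moved) * sigma2_err."""
    n = (s02 - s02_best) / s02_err
    return sig2_best + corr * n * sig2_err

print(sigma2_if_s02_is(1.0))  # ~0.0195: S02 = 1.0 pairs with larger sigma2
print(sigma2_if_s02_is(0.8))  # ~0.0105: S02 = 0.8 pairs with smaller sigma2
```

This is just the linear-response reading of the correlation: it predicts how one parameter shifts when the other is pushed, and says nothing about whether the best-fit values themselves are right.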
So this brings up a (possibly contentious) point. In their reporting recommendations, the IXS suggests reporting high correlations, particularly when they are between parameters that do not routinely show high correlations. What is the reason for this suggestion? I'm not criticizing it...just looking for the rationale.
It's hard to speak for the IXS, but I'd say that correlations are recommended to be reported because they're important statistics. The correlations, along with the best-fit values and uncertainties, help more fully describe the range of plausible results. It's generally well-known that the parameters (S02,sigma2) and (E0,R) are highly correlated for a single shell of a single data set, and the implications of these are generally understood, I think. Sometimes other variables are correlated, and some people even do complex fits with multiple data sets or generalized variables that are not the simple XAFS parameters ;). In these cases, it may not be obvious how the variables are correlated. I think the IXS committee was concerned about this, and so recommended reporting correlations in such cases. That seems sensible to me. --Matt
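The well-known (S02, sigma2) correlation can be demonstrated with a toy fit. The single-shell amplitude envelope goes roughly like S02 * exp(-2 k^2 sigma2), so its logarithm is a straight line in k^2, and an ordinary polynomial regression returns the parameter covariance directly. All data below are synthetic and purely illustrative:

```python
import numpy as np

# log(amplitude) = log(S02) - 2 * sigma2 * k^2  -- linear in x = k^2
rng = np.random.default_rng(1)
k = np.linspace(3.0, 12.0, 60)   # typical fit range, inverse Angstroms
x = k**2
true_s02, true_sig2 = 0.9, 0.015
amp = true_s02 * np.exp(-2.0 * true_sig2 * x)
amp = amp * np.exp(rng.normal(0.0, 0.02, k.size))  # multiplicative noise

coefs, cov = np.polyfit(x, np.log(amp), 1, cov=True)
fit_sig2 = -coefs[0] / 2.0   # sigma2 from the slope
fit_s02 = np.exp(coefs[1])   # S02 from the intercept

# Correlation between sigma2 and log(S02); the leading minus sign is
# because sigma2 = -slope/2 flips the slope-intercept covariance.
corr = -cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(fit_sig2, fit_s02, corr)
```

With this k range the sigma2-log(S02) correlation comes out around +0.85: the fit can trade a larger S02 against a larger sigma2 with little change in quality, which is exactly the trade-off discussed in this thread.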
On Sat, 28 Feb 2004, Scott Calvin wrote:
Matt,
Good point. It is much more straightforward to check the uncertainty than the correlation to determine if that could be responsible for the anomalous S02...so I should have said that Yu-Chuan should check the uncertainty of the S02 parameter. If it is very large (say 0.8), then Ifeffit is in effect reporting that it cannot determine S02 well for some reason. At that point, I think it is helpful to see if there is a high correlation, since that may give a clue as to the reason for the large uncertainty.
So this brings up a (possibly contentious) point. In their reporting recommendations, the IXS suggests reporting high correlations, particularly when they are between parameters that do not routinely show high correlations. What is the reason for this suggestion? I'm not criticizing it...just looking for the rationale.
--Scott Calvin Sarah Lawrence College
Scott, Yu-Chuan, (Bruce, Grant), I'd like to comment on one of the points here.
Yu-Chuan wrote:
At the beginning of the fit, S02 (amp) is far from 0.9 (only about 0.3). This is very different from Bruce's and Scott's examples. [...] Also, John has said that the S02 value should be around 0.9. That's why I am wondering whether there is something wrong with my data processing. Has anyone else run into this kind of problem?
Scott wrote:
There are several possibilities. One is sample prep. [...] [...] Another is normalization.[...] Finally, look at the correlation between S02 and sigma2...if it is very high (say 0.95 or above) it may simply be coming up with low estimates for each.
Yu-Chuan didn't give details of sample prep or data collection, and I agree with Scott that these are the most likely causes of a low S02 (Scott mentioned problems with transmission measurements, and Grant mentioned how self-absorption can cause similarly low S02 factors for fluorescence measurements). I also agree with Scott that poor normalization will affect S02, though S02 ~ 0.3 would indicate an edge step off by a factor of ~3, which is a pretty big error for normalization. I doubt that is the culprit.

But: the correlation between S02 and sigma2 is NOT the cause of a value for S02 being low by a factor of 3. A high correlation means that a reasonably good fit could also be found by changing both S02 and sigma2 away from their best-fit values. This is not at all the same as saying that either parameter has the wrong best-fit value. In fact, the correlation between a pair of variables says nothing about the trustworthiness of the best-fit values, only how much the uncertainties in the best-fit values depend on other variables. And, just to be clear, the reported uncertainties already take these correlations into account.

Yu-Chuan didn't actually say what the reported uncertainty in S02 was, but it seems that it was small enough that S02 ~= 0.3 could be distinguished from S02 ~= 0.9.

--Matt
participants (4)

- Bruce Ravel
- hsnuboy
- Matt Newville
- Scott Calvin