amplitude parameter S02 larger than 1
Hi all,

I know this question has been asked many times: S02 is expected to be close to, but smaller than, 1, a point that has been explained in earlier threads on this mailing list, for example:

http://www.mail-archive.com/ifeffit%40millenia.cars.aps.anl.gov/msg02237.htm...
http://millenia.cars.aps.anl.gov/pipermail/ifeffit/2003-February/000230.html

However, I keep getting S02 values larger than 1 for a series of similar samples when I fit the data in Artemis. I think the fit is very good: the model I suspected (based on another technique) is confirmed by the XAFS analysis (i.e., it is defensible physically), the statistics are good (R = 0.01, reduced chi-square = 31.4, fit range 1.5-6 Angstrom, k range 3-14 Angstrom^-1), and all the parameters such as bond lengths and sigma2 are physically reasonable. The only thing that makes me uncomfortable is that S02 keeps refining to between 1.45 and 1.55.

In my system the absorber atom occupies two crystallographic sites, so I built a model with paths generated from two FEFF calculations. For the paths from the first and second FEFF calculations, the amplitude parameters are set to S02*P and S02*(1-P) respectively, where P is the occupancy fraction of the first site. Both S02 and P are free parameters in the fit, and P is an important conclusion I want to extract from the XAFS fitting.

However, the fit always gives S02 = 1.45-1.55 and P = 0.51-0.56 (i.e., the 'total amplitude' for each path, S02*P or S02*(1-P), is about 0.7-0.8, smaller than 1). It looks like I have a 'perfect' fit, but I am not sure whether an S02 larger than one is defensible. So I have to ask:

1) Is my current fit with S02 larger than one reasonable? If not, what could be done to get around it?
2) What is the meaning of S02? Physically it is interpreted as a reduction factor associated with electron excitations, but could it also be affected by experimental conditions?
3) Has anyone worked on a multiple-site system that gave S02 larger than one?

Looking forward to your help.

Best,
Yanyun
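To make the parameterization concrete: the fit really determines an effective per-path amplitude for each site, a1 = S02*P and a2 = S02*(1-P); S02 and P are then just their sum and ratio, so any systematic effect that inflates the effective amplitudes (normalization, degeneracy, scattering potentials) feeds directly into the fitted S02. A minimal sketch in plain Python, using the numbers reported above as illustrative inputs:

    # Sketch (illustrative numbers from the post above): the fit constrains
    # the per-site effective amplitudes a1 = S02*P and a2 = S02*(1-P);
    # S02 and P are recovered from their sum and ratio, so anything that
    # inflates a1 and a2 inflates the fitted S02.
    def effective_amplitudes(s02, p):
        return s02 * p, s02 * (1.0 - p)

    def s02_and_p(a1, a2):
        return a1 + a2, a1 / (a1 + a2)

    a1, a2 = effective_amplitudes(1.50, 0.53)
    print(a1, a2)             # ~0.795, ~0.705 -- each below 1, as noted above
    print(s02_and_p(a1, a2))  # ~(1.5, 0.53)   -- back to the fitted S02 and P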
Hi,

One thing to consider is transferring S02 from a reliable source, such as a standard, and then using that value in the fit. Chemical transferability of S02 between similar systems is often acceptable. You could also try constraining its value in the fit. S02 and the Debye-Waller factors are also correlated, which may affect the value.

Hope that helps,
Chris
Hi Chris,

Thank you for your suggestion, but I don't have a standard. Also, I wonder whether a multiple-site situation could behave differently from the normal one-site case with respect to S02.

Best,
Yanyun
The two sites would most likely have very similar S02 values, so within the uncertainty that should not matter much. You do get a value for the site percentage, but what is its uncertainty? A floated parameter carries an uncertainty from the fit.

Chris
Hi Chris,

The site percentage fits to 0.53 +/- 0.04 if S02 is left as a free parameter. If I fix S02 = 0.9, the site percentage fits to 0.72 +/- 0.06. So the uncertainties are pretty small in both cases.

Best,
Yanyun
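Plugging the two results above into the amplitude expressions shows why P moves so much when S02 is fixed: the effective amplitude of site 1 changes only modestly, while that of site 2 collapses. A quick check (a sketch using the fitted values quoted above, uncertainties ignored, with 1.50 standing in for the free-S02 result of 1.45-1.55):

    # Effective per-path amplitudes implied by the two fits reported above.
    for s02, p, label in [(1.50, 0.53, "S02 free (~1.5): "),
                          (0.90, 0.72, "S02 fixed at 0.9:")]:
        print(label, "site1 =", round(s02 * p, 3), " site2 =", round(s02 * (1 - p), 3))
    # S02 free:  site1 ~0.795, site2 ~0.705
    # S02 fixed: site1 ~0.648, site2 ~0.252 -- the site-2 contribution is
    # suppressed by almost a factor of three, which is why the fitted P changes.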
One possible scenario: suppose one site has 6 nearest neighbors and the other has 4, and you choose the 4-coordinated site to construct the FEFF calculation used to model your EXAFS data. If you set the degeneracy to 4 and write the amplitude factor as S02*x for one site plus S02*(1-x) for the other, then S02 will come out larger than it should be, because it compensates for the underestimated degeneracy of the 6-coordinated site.

Anatoly
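A quick worked example of the effect Anatoly describes, with purely illustrative numbers (not taken from Yanyun's fit):

    # If the true coordination of a site is 6 but the model path carries a
    # degeneracy of 4, the fitted S02 absorbs the ratio, since the fit sees
    # only the product S02 * N:  S02_fit * N_model ~ S02_true * N_true.
    s02_true, n_true, n_model = 0.9, 6, 4
    s02_fit = s02_true * n_true / n_model
    print(round(s02_fit, 2))   # 1.35 -- larger than 1 even though the true S02 is 0.9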
Hi Yanyun,

I am hesitant to promote a commercial project from which I directly profit on this list, but it seems to me you are asking a bigger set of questions than can comfortably and sufficiently be answered in this format, and they are questions which have been answered in detail elsewhere.

In my book XAFS for Everyone, I have four pages devoted solely to S02, along with related information elsewhere in the book. Since you have a University of Toronto address, I am guessing you have access to their library. If you don't wish to purchase the book, you can request it via interlibrary loan, at no cost to you or your institution.

In the meantime, a quote from the book that may be useful in thinking about S02:

"Alternatively, one can treat So2 as a phenomenological parameter that accounts for any amplitude suppression independent of k and R, regardless of physical cause (Krappe and Rossner 2004). Under this view, So2 does not have any particular physical meaning, and the k or R dependence of intrinsic losses can be assigned to other parameters."

That's the way I usually think about it--as not having a single physical meaning, but rather as being an empirically observed correction factor relative to simplistic theories which is indicative both of experimental effects and limitations in the theoretical model.

Hope that helps...

--Scott Calvin
Sarah Lawrence College
Hi Scott,

Thank you. Our group has a copy of your book; I'll read it again after my colleague returns it to the shelf. I still want to continue the discussion here.

If we treat S02 as an empirically observed parameter, can I just set S02 = 0.9 or 1.45 and let the other parameters account for the k- and R-dependence? Since S02 is not a simple parameter, and may fold in both theoretical and experimental effects, I feel it is not necessarily smaller than 1, although I admit that S02 smaller than 1 is more defensible, since it represents limitations in both the theoretical model and the experiment. But I have a series of similar samples, and all of their S02 values refine to 1.45-1.55, not smaller than 1. Could this indicate something?

I actually found that in my system, when I set S02 = 0.9 (instead of letting it fit to 1.45), the other parameters definitely change, but the fit is not terrible; it is still a close fit, yet the important site occupancy percentage P changes a lot. So how should I compare and select between the two fits, one with S02 = 0.9 and one with S02 = 1.45, when the two scenarios give different results?

Best,
Yanyun
Yanyun,

As I recall, you are looking at those bizarre skutterudite materials which consist of a metal framework with an enormous gap. Sitting in the gap is your absorber atom. The center point of the gap is, as I recall, over 3 angstroms away from the nearest vertex of the framework. The point I am about to make hinges upon all that being more or less correct.

Feff drops neutral atoms into the specified lattice positions, then does a rather simple-minded algorithm to overlap the charges and come up with the radii that are used to compute the muffin-tin potentials. In the case of one of those atoms rattling about inside the cage, I am skeptical that Feff's model produces a highly reliable set of scattering potentials. Probably ain't bad -- as you said in your first email, your fits look good. But it probably ain't quite right either. As Scott hinted, mistakes in the theory can show up with surprising k- or R-dependence, and surprising amplitude and phase dependence.

I have absolutely no intuition for how Feff might introduce systematic error into a fit for the physical situation of a nearest neighbor at a distance of 3 or more angstroms, so I don't know how to "explain away" an oddly large S02.

That said, I can think of some experiments that /might/ give some insight. Pick something simple, like a metal oxide or a metal sulfide -- something with a cubic structure. You don't want this experiment to get too complicated.

1. Before generating the feff.inp file, make the lattice constant nonphysically large such that the near-neighbor distance is about 3 angstroms.

2. Run Feff and add up all the paths to make a theoretical chi(k) spectrum for your nonphysically large crystal. For a later iteration of this, you might add some synthetic noise to the spectrum.

3. Treat the chi(k) you just made as your "data". Import it and the normal crystal data into Artemis. Run Feff on the normal crystal.

4. Use Artemis's single-scattering path tool to make a path for the first-shell scatterer at the distance you used to make your theoretical data.

5. Make a simple first-shell, four-parameter fit using that SS path.

Can you make a reasonable-looking fit? With sensible error bars? What happens with the amplitude? Is it very large or very small?

Perhaps try the experiment the other way around: fit the "normal" theoretical data with the unphysical Feff calculation.

The point I am driving at is that I wonder if you can figure out what happens to the amplitude in a decent fit when you contrive a situation with an unusually large first-neighbor distance. If you see a trend in these "Feff experiments", perhaps that can help you understand the amplitude in your skutterudite fits.

Again, I have no intuition about this. I have no idea if my suggestion will be fruitful or not. For that matter, I have no idea if my memory of your problem is correct.

But maybe this is a brilliant suggestion. Unlikely, but stranger things have happened :)

B

--
Bruce Ravel ------------------------------------ bravel@bnl.gov
National Institute of Standards and Technology
Synchrotron Science Group at NSLS-II
Building 535A, Upton NY, 11973
Homepage: http://bruceravel.github.io/home/
Software: https://github.com/bruceravel
Demeter: http://bruceravel.github.io/demeter/
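For anyone who would rather script steps 2 and 3 above than click through them, here is a minimal sketch using Larch (the Python successor to Ifeffit). It assumes Feff has already been run on the artificially expanded structure and that its path files live in a directory named feff_expanded/ (a hypothetical name, as are the four path indices); exact function and keyword names may differ between Larch versions, so treat this as a sketch rather than a recipe.

    import numpy as np
    from larch import Group
    from larch.xafs import feffpath, ff2chi

    # Step 2: sum the paths from the Feff run on the expanded lattice into a
    # theoretical chi(k), with the same sigma2 on every path, and optionally
    # add some synthetic noise.  The path files listed here are hypothetical.
    paths_big = [feffpath('feff_expanded/feff%4.4i.dat' % i, sigma2=0.0045)
                 for i in (1, 2, 3, 4)]
    theory = Group()
    ff2chi(paths_big, group=theory)            # fills theory.k and theory.chi
    theory.chi += np.random.normal(scale=2e-4, size=theory.k.size)

    # Step 3: write k and chi(k) to a column file, import it into Artemis as
    # "data", and fit it with a first-shell path from the *normal* structure.
    np.savetxt('expanded_lattice_chi.chi',
               np.column_stack((theory.k, theory.chi)), header='k chi')

From there, steps 4 and 5 (the quick-first-shell path and the four-parameter fit) are easiest to do inside Artemis itself, as Bruce describes.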
Hi Bruce,

Thank you for your insight. The current material I am talking about is a little different: the first-shell distance for one site is about 2.5 angstroms, and for the other site it is 3.3 angstroms.

I am glad you still remember that bizarre system, which I am still working on. You brought up a very interesting test. I will definitely run it to see what happens to the amplitude S02 when the first-shell distance is larger than 3 angstroms. This might take me a while; I will come back to you after I do it.

Best,
Yanyun
Well, I've reviewed your CLS proposals at least twice and discussed your problem with you at an XAFS course. Eventually things do stick, even in my tiny, little brain :)

On this page of the Artemis manual:

http://bruceravel.github.io/demeter/artug/extended/qfs.html

I explain why the Artemis user should be careful not to misuse the quick first shell tool. The bottom line is that unphysical muffin-tin radii in the Feff calculation show up in large part as incorrect amplitudes of the calculated chi(k) for the path, as you can see in the second and third figures on that page. In fact, unphysically large muffin tins lead to larger amplitudes, as shown by the red lines.

It's a little hard to decide exactly how your physical near-neighbor separation of 3.3 A should relate to a Feff calculation that purposefully sets the muffin tins too large, but I think you can see why I'm concerned that this effect might be related to your large fitted S02 values.

B

--
Bruce Ravel ------------------------------------ bravel@bnl.gov
Hello Bruce,

This is to follow up on the *test* experiment you suggested. Attached are three Artemis files.

I chose the Fe (Im-3m) structure for the test. The normal crystal structure has its first-shell distance at 2.48 angstroms. A "large" structure was created so that the first-shell distance reaches 3.11 angstroms.

Both the normal and the large structure were calculated in JFEFF in the same manner (i.e., same number of paths, same calculation procedure, same sigma2 = 0.0045 for all paths) to generate the calculated chi data.

File #1 follows exactly the procedure you described. The quick first shell fits very well; the four parameters other than S02 give normal results, but the amplitude parameter S02 is very large (2.2 +/- 0.14).

I extended the test in #2 and #3 for comparison. In #2, the calculated data based on the large structure is fit with the same large structure. As expected, the fit looks reasonable and the other parameters are normal; however, S02 fits to 1.80 +/- 0.03. In #3, both the data and the structure are normal. In this case, it is not surprising to get a good fit, and all parameters including S02 come out close to their true values.

I think it is clear that a first-shell distance larger than 3 angstroms does have the effect of making the amplitude artificially large.

Best,
Yanyun
Yanyun,

Very interesting! It's rather difficult to understand how to relate what you have found to your actual sample. That is certainly true in a quantitative sense. I don't think this tells you how to interpret the actual value that you are finding for S02. But I think it does give a hint for why you are getting such a large value in your case.

I will be very interested to see how you address this when you publish your results.

For everyone else, I would caution against reading too much into what Yanyun has found. (Even more so than I would caution her!) She is looking at a really unusual situation. Her materials really do have extraordinarily long near-neighbor distances. In fact, last fall when she showed me her data, I was quite shocked that such a thing exists. In all my years doing XAS and being a beamline scientist, I had never seen such a long near-neighbor distance. The FT of the data was really quite remarkable!

Trying to understand a large S02 is a common question on this mailing list and elsewhere. However, I don't think that Yanyun's experience gives much insight into most such problems. Her situation is quite unusual. If you are seeing an oddly large S02, that is something you need to figure out about your sample or your data. In most cases, you cannot explain it away by asserting that Feff made a mistake with the muffin-tin radii.

As an example, I rather doubt that Yanyun's experience can shed much light on the questions that Jatin Rana was asking over the weekend.

Anyway, wow! What an interesting day on the mailing list!

B

--
Bruce Ravel ------------------------------------ bravel@bnl.gov
Hi Bruce,

Thanks for the comments.

Just to wrap up this topic for now: the samples that give a large amplitude are suspected to be more complicated than the normal skutterudites I showed you last fall. The normal skutterudite data never worried me, because its S02 fits in a typical range. In addition, there is an example in the paper PRB 86, 174106 (2012) with a nearest-neighbor distance greater than 3 angstroms, and the authors report very normal S02 values. So I will keep tracking down why the amplitude fits so large in my specific case and how it relates to what we found here; this test result has at least given me some hints for doing so.

Thank you, everyone.

Best,
Yanyun
Hi Yanyun and Bruce,

Your test is a very nice way to check the effect of the muffin-tin radius on the fitted value of the amplitude factor.

Although I haven't run the test myself, I would have expected the error bars produced by fitting theory to "theoretically generated data" to be huge, because no noise was added to the "data" -- or was it? It is not clear why the uncertainties Yanyun reports are so small, given that the analysis method uses the high-R signal in the data as a measure of the noise. Since the data had no noise, the high-R amplitude should be essentially zero, and that would renormalize the error bars to huge values.

I may have misunderstood what was done in this test, and perhaps noise was added automatically as one of the options Yanyun chose. Or was there some other reason why the errors came out small? Thanks for your comments,

Anatoly
________________________________________
From: ifeffit-bounces@millenia.cars.aps.anl.gov [ifeffit-bounces@millenia.cars.aps.anl.gov] on behalf of huyanyun@physics.utoronto.ca [huyanyun@physics.utoronto.ca]
Sent: Monday, March 23, 2015 9:46 PM
To: XAFS Analysis using Ifeffit
Subject: Re: [Ifeffit] amplitude parameter S02 larger than 1
Hi Bruce,
Thanks for the comments.
Just to make an end to this topic so far: the samples which give large
amplitude are suspected to be more complicated than the normal
skutterudites I showned to you last fall. Normal skutterudites data
actually didn't cause me worried because its S02 is fitted in a
typical range. In addition, there is one example on the paper PRB 86,
174106(2012) which has >3 angstrom nearest distance and the authors
reports very normal s02 values. Therefore, I will keep tracking down
why the amplitude is fitted to large and how to relate this amplitude
to what we found here for my specific case.
Thank you everyone.
Best,
Yanyun
Anyway, this test result gave me some hints to tracking down why there is
Quoting Bruce Ravel
On 03/23/2015 04:10 PM, huyanyun@physics.utoronto.ca wrote:
This is to follow up the *test* experiment you suggested. Attached are three Artemis files.
I chose the Fe (im-3m) structure to do the test. The normal crystal structure has its first-shell distance at 2.48 angstrom. A large structure was created so that the first-shell distance reaches 3.11 angstrom.
Both normal and large structure are calculated in JFEFF in the same manner (i.e., same path number, same calculation procedure, same sigma2=0.0045 for all paths) to generate the calculated chi data.
File #1 attached was exactly following the procedure you mentioned. The quick first shell fits very well, four parameters except S02 give normal results. The amplitude parameter S02 is very large (2.2+/-0.14).
I extended the test in #2 and #3 for comparison. In #2, calculated data based on large structure is presented to fit to the same large structure. As expected, we get reasonable fit looking with other parameters normal. However, amplitude S02 fits to 1.80+/-0.03. In #3, both data and structure are normal. In this case, it is not surprising to get good fit and all parameters including S02 turn out to be close to their true values.
I think it is clear that first-shell distance larger than 3 angstrom does has effect in making amplitude artificialy large.
Yanyun,
Very interesting! It's rather difficult to understand how to relate what you have found to your actual sample. That is certainly true in a quantitative sense. I don't think this tells you how to interpret the actual value that you are finding for S02. But I think it does give a hint for why you are getting such a large value in your case.
I will be very interested to see how you address this when you publish your results.
For everyone else, I would caution against reading too much into what Yanyun has found. (Even more so than I would caution her!) She is looking at a really unusual situation. Her materials really do have extraordinarily long near neighbor distances. In fact, last fall when she showed me her data, I was quite shocked that such a thing exists. In all my years doing XAS and being a beamline scientist, I had never seen such a long near-neighbor distance. The FT of the data was really quite remarkable!
Trying to understand a large S02 is a common question on this mailing list and elsewhere. However, I don't think that Yanyun's experience gives much insight to most such problems. Her situation is quite unusual. If you are seeing an oddly large S02, that is something you need to figure out about your sample or your data. In most cases, you cannot explain it away by asserting that Feff made a mistake with the muffin tin radii.
As an example, I rather doubt that Yanyun's experience can shed much light on the questions that Jatin Rana was asking over the weekend.
Anyway, wow! What an interesting day on the mailing list!
B
-- Bruce Ravel ------------------------------------ bravel@bnl.gov
National Institute of Standards and Technology Synchrotron Science Group at NSLS-II Building 535A Upton NY, 11973
Homepage: http://bruceravel.github.io/home/ Software: https://github.com/bruceravel Demeter: http://bruceravel.github.io/demeter/
Hi Anatoly,
The method Ifeffit uses to compute uncertainties in fitted parameters is independent of noise in the data because it, in essence, assumes the fit is statistically good and rescales accordingly. This means that the estimated uncertainties really aren't dependable for fits that are known to be bad (e.g. have a huge R-factor, unrealistic fitted parameters, etc.), but since those fits aren't generally the published ones, that's OK.
Secondly, the high-R amplitude will not be essentially zero with theoretically-generated data, even if you don't add noise, because the effect of having a finite chi(k) range will create some ringing even at high R.
Frankly, the default method by which Ifeffit (and Larch? I haven't used Larch) estimates the noise in the data is pretty iffy, although there's not really a good alternative. The user can override it with a value of their own, but as you know, epsilon is a notoriously squirrelly concept in EXAFS fitting. The really nice thing about the Ifeffit algorithm is that it makes the choice of epsilon irrelevant for the reported uncertainties.
What it is NOT irrelevant for is the chi-square. For this reason, I personally ignore the magnitude of the chi-square reported by Artemis, but pay close attention to differences in chi square (actually, reduced chi square) for different fits on the same data.
--Scott Calvin
Sarah Lawrence College
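(A rough sketch of the high-R noise estimate Scott describes, for readers who want to see the idea in code. This is a conceptual illustration only, not Ifeffit's actual algorithm; the array names r and chir_mag and the 15-25 Angstrom window are assumptions for the example.)

import numpy as np

def high_r_noise(r, chir_mag, rmin=15.0, rmax=25.0):
    """Crude estimate of the noise level eps_R: the rms of |chi(R)| in a
    high-R window where no structural signal is expected.  This mimics the
    idea behind Ifeffit's default epsilon estimate, but it is not the exact
    implementation (Ifeffit also propagates the result back to a k-space
    epsilon, which is omitted here)."""
    mask = (r >= rmin) & (r <= rmax)
    return np.sqrt(np.mean(chir_mag[mask] ** 2))

# r and chir_mag would come from your own Fourier-transformed chi(k),
# e.g. exported from Artemis or produced by Larch's xftf().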
On Mar 23, 2015, at 10:18 PM, Anatoly I Frenkel wrote:
Hi Yanyun and Bruce,
Your test is a very nice way to check the effect of the muffin-tin radius on the fitted value of the amplitude factor. Although I haven't run your test myself, I would have expected that the error bars produced by the fit of a theory to "theoretically generated data" would have been huge, because you haven't added noise to the "data". Or have you? It is not clear why the uncertainties reported by Yanyun are so small, given that the analysis method uses the high-R signal in the data as a measure of noise. Since the data had no noise, the high-R amplitude should be essentially zero, and that would renormalize the error bars to huge values.
Hi Scott and Anatoly,
I didn't add any noise to the calculated data, so I can understand that
the magnitudes of chi-square and reduced chi-square are terribly large,
which is OK.
I read the small uncertainties in the fitted parameters as meaning that
the generated data is very consistent with the guessed model and that
there is no noise in the data; in this test, that is obviously the case.
One thing I don't understand is that the fitted S02 turns out to be much
larger than 1 even though the generated spectrum was calculated with
S02=0.9. So I assume this is due to the muffin-tin effect for a large
(>3 angstrom) first-shell path.
Best,
Yanyun
I completely agree with Scott's assessment -- fitting test data made from Feff calculations without noise added does not normally give absurd error bars, because the estimated uncertainties in the "data" are mostly unused. The default method of using high-R data to estimate the uncertainties can definitely be called "pretty iffy, but there's not really a good alternative" for a single spectrum -- using scan-to-scan variations is also a fine approach, but can (also) miss some kinds of non-statistical errors. The high-R method does seem to work reasonably well for *very* noisy data, but that is hardly ever actually analyzed in isolation.
I'm not sure what's causing the large S02 values, and haven't looked in detail at the projects. But near-neighbor distances of ~3 Ang aren't that uncommon for metals (silver and gold are 2.9 Ang and lead is 3.5 Ang), and those work OK -- and don't give very large S02 values. Are these samples layered and/or anisotropic? If so, polarization effects could also affect the amplitudes.
--Matt
PS: we should implement Hamilton's test (and include other statistics) as easy-to-run functions in Larch!
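(Following up Matt's PS: a minimal sketch of what such an easy-to-run helper might look like, written against plain scipy. The function name and arguments here are hypothetical, not an existing Larch or Ifeffit routine; it follows the R-factor-ratio recipe Scott walks through later in the thread.)

from scipy.special import betainc  # regularized lower incomplete beta I_x(a, b)

def hamilton_test(r_floated, r_fixed, n_idp, n_varys_floated, n_extra=1):
    """Probability that the improvement in R-factor gained by floating
    n_extra additional parameter(s) is just due to noise.

    r_floated       -- R-factor of the fit with the extra parameter(s) floated
    r_fixed         -- R-factor of the fit with those parameter(s) fixed
    n_idp           -- number of independent points in the fit
    n_varys_floated -- number of variables in the less-constrained fit
    n_extra         -- number of extra parameters floated in that fit
    """
    x = r_floated / r_fixed               # ratio of R-factors (better / worse)
    a = (n_idp - n_varys_floated) / 2.0   # half the degrees of freedom
    b = n_extra / 2.0                     # half the number of extra parameters
    return betainc(a, b, x)

# A small result (say, below 0.05) means the extra parameter gives a
# statistically significant improvement; a large one means it does not.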
Hi Yanyun,
Lots of comments coming in now, so I’m editing this as I write it!
One possibility for why you're getting a high best-fit S02 is that the fit doesn't care all that much about what the value of S02 is; i.e. there is a broad range of S02's compatible with describing the fit as "good." That should be reflected in the uncertainty that Artemis reports. If S02 is 1.50 +/- 0.48, for example, that means the fit isn't all that "sure" what S02 should be. That would mean we could just shrug our shoulders and move on, except that it correlates with a parameter you are interested in (in this case, site occupancy). So in such a case, I think you can cautiously fall back on what might be called a "Bayesian prior"; i.e., the belief that S02 should be "around" 0.9, and set S02 to 0.9. (Or perhaps restrain S02 to 0.9; then you're really doing something a bit more like the notion of a Bayesian prior.)
On the other hand, if the S02 is, say, 1.50 +/- 0.07, then the fit really doesn’t like the idea of an S02 in the typical range. An S02 that high, with that small an uncertainty, suggests to me that something is wrong—although it could be as simple as a normalization issue during data reduction. In that case, I’d be more skeptical of just setting S02 to 0.90 and going with that result; the fit is trying to tell you something, and it’s important to track down what that something is.
Of course, once in a while, a fit will find a local minimum, while there's another good local minimum around a more realistic value. That would be reflected by a fit that gave similarly good quantitative measures of fit quality (e.g. R-factors) when S02 is fit (and yields 1.50 +/- 0.07) as when it's forced to 0.90. That's somewhat unusual, however, particularly with a global parameter like S02.
A good way to defend setting S02 to 0.90 is to use the Hamilton test to see if floating S02 yields a statistically significant improvement over forcing it to 0.90. If not, using your prior best estimate for S02 is reasonable.
If you did that, though, I'd think that it would be good to mention what happened in any eventual publication or presentation; it might provide an important clue to someone who follows up with this or a similar system. It would also be good to increase your reported uncertainty for site occupancy (and indicate in the text what you've done). I now see that your site occupancies are 0.53 +/- 0.04 for the floated S02, and 0.72 +/- 0.06 for the S02 = 0.90. That's not so bad, really. It means that you're pretty confident that the site occupancy is 0.64 +/- 0.15, which isn't an absurdly large uncertainty as these things go.
To be concrete, if the Hamilton test does not show a statistically significant improvement from floating S02, then I might write something like this in any eventual paper: "The site occupancy was highly correlated with S02 in our fits, making it difficult to determine the site occupancy with high precision. If S02 is constrained to 0.90, a plausible value for element [X] [ref], then the site occupancy is 0.53 +/- 0.04. If constrained to 1.0, the site occupancy is [whatever it comes out to be]. To reflect the increased uncertainty associated with the unknown value for S02, we are adopting a value of 0.53 +/- [enough uncertainty to cover the results found for S02 = 1.0]."
Of course, if you do that, I’d also suggest tracking down as many other possibilities for why your fit is showing high values of S02 as you can; e.g., double-check your normalization during data reduction.
If, on the other hand, the Hamilton test does show the floated S02 is yielding a statistically significant improvement, I think you have a bigger issue. Looking at, e.g., whether you may have constrained coordination numbers incorrectly becomes more critical.
—Scott Calvin
Sarah Lawrence College
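(A side note on the restraint option Scott mentions above: in Ifeffit and Larch a restraint expression is, roughly speaking, appended to the fit residual, so restraining S02 toward 0.9 adds a Gaussian-prior-like penalty to the quantity being minimized. A minimal sketch of the idea; the prior value and the width of 0.1 are arbitrary illustrative choices, not recommendations.)

def restrained_misfit(chi2_data, s02, prior=0.9, sigma_prior=0.1):
    """Schematic form of what gets minimized when S02 is restrained: the
    usual data misfit plus a penalty that grows as S02 moves away from the
    prior value.  The specific numbers are placeholders for illustration."""
    return chi2_data + ((s02 - prior) / sigma_prior) ** 2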
On Mar 20, 2015, at 12:48 PM, huyanyun@physics.utoronto.camailto:huyanyun@physics.utoronto.ca wrote:
Hi Scott,
Thank you. Our group has one copy of your book; I'll read it again
after my colleague returns it to the shelf. I still want to continue our
discussion here:
If we treat S02 as an empirically observed parameter, can I just set
S02=0.9 or 1.45 and let the other parameters explain the k- and R-
dependence? Because S02 is not a simple parameter, and may include both
theoretical and experimental effects, I feel that S02 does not
necessarily have to be smaller than 1. I admit that an S02 smaller than 1
is more defensible, since it represents limitations in both the
theoretical model and the experiment, but I have a series of similar
samples and all of their S02 values are fitted to 1.45~1.55, never
smaller than 1. Could this indicate something?
I actually found that in my system, when I set S02=0.9 (instead of
letting it fit to 1.45), the other parameters definitely change but the
fit is not terrible; it is still a close fit, but the important site
occupancy percentage P% changes a lot. So how should I compare or select
between the two fits, one with S02=0.9 and one with S02=1.45, when the
two scenarios show different results?
Best,
Yanyun
Hi Scott,
Thank you so much for giving me your thoughts again. It is very helpful
to know how you and other XAFS experts deal with unusual situations.
The floating S02 fits to 1.45+/-0.14, so the fit really doesn't like the
idea of an S02 in the typical range. Rather than simply setting S02 to
0.9, I have to figure out why this happens and what it might indicate.
I guess a Hamilton test is done by adjusting one parameter (i.e., S02)
while keeping the other conditions and the model the same. Is that right?
I record the test as follows:
1) Floating S02: S02 fits to 1.45+/-0.14, R=0.0055, reduced chi^2=17.86, percentage=0.53+/-0.04
2) Set S02=0.7: R=0.044, reduced chi^2=120.6, percentage=0.81+/-0.2
3) Set S02=0.8: R=0.030, reduced chi^2=86.10, percentage=0.77+/-0.07
4) Set S02=0.9: R=0.021, reduced chi^2=60.16, percentage=0.72+/-0.06
5) Set S02=1.0: R=0.017, reduced chi^2=49.5, percentage=0.67+/-0.05
6) Set S02=1.1: R=0.012, reduced chi^2=35.1, percentage=0.62+/-0.03
7) Set S02=1.2: R=0.009, reduced chi^2=24.9, percentage=0.59+/-0.02
8) Set S02=1.3: R=0.007, reduced chi^2=18.9, percentage=0.57+/-0.02
9) Set S02=1.4: R=0.0057, reduced chi^2=16.1, percentage=0.55+/-0.02
10) Floating S02 (1.45+/-0.14), as in 1)
11) Set S02=1.6: R=0.006, reduced chi^2=17.8, percentage=0.53+/-0.02
12) Set S02=2.0: R=0.044, reduced chi^2=120.7, percentage=0.37+/-0.06
Therefore, I would say that S02 falling in the range 1.2~1.6 gives a
statistically improved fit, but S02=0.9 is not terrible either. I agree
with you that I could always be confident saying the percentage is
0.64+/-0.15, but I do want to shrink the uncertainty and think about
other possibilities that could cause a large S02.
I did double-check the data-reduction and normalization process; I
don't think I can improve anything in this step. By the way, I have a
series of similar samples, and their fits all give a floating S02 larger
than one with the same two-site model.
Best,
Yanyun
Quoting Scott Calvin <scalvin@sarahlawrence.edu>:
Hi Yanyun,
I am hesitant to promote a commercial project from which I directly profit on this list, but it seems to me you are asking a bigger set of questions than can comfortably and sufficiently be answered in this format, and they are questions which have been answered in detail elsewhere.
In my book XAFS for Everyone, I have four pages devoted solely to S02, along with related information elsewhere in the book.
Since you have a University of Toronto address, I am guessing you have access to their library. If you don't wish to purchase the book, you can request it via interlibrary loan, at no cost to you or your institution.
In the meantime, a quote from the book that may be useful in thinking about S02:
"Alternatively, one can treat So2 as a phenomenological parameter that accounts for any amplitude suppression independent of k and R, regardless of physical cause (Krappe and Rossner 2004). Under this view, So2 does not have any particular physical meaning, and the k or R dependence of intrinsic losses can be assigned to other parameters."
That's the way I usually think about it -- not as having a single physical meaning, but rather as being an empirically observed correction factor, relative to a simplistic theory, that is indicative of both experimental effects and limitations in the theoretical model.
Hope that helps...
--Scott Calvin Sarah Lawrence College
Hi Yanyun,
To actually do a Hamilton test, the one other thing I need to know is the number of degrees of freedom in the fit... If you provide that, I'll walk you through how to actually do a Hamilton test -- it's not that bad, with the aid of an online calculator, and I think it might be instructive for some of the other people reading this list who are trying to learn EXAFS.
--Scott Calvin
Sarah Lawrence College
Hi Scott,
In all situations, 31.2 independent data points and 24 variables were
used. In the case of setting S02 to a value, 23 variables were used.
Let me know if there is any other info needed.
Best,
Yanyun
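(For anyone puzzled by a non-integer count like 31.2: the number of independent points is a Nyquist-style estimate from the k- and R-ranges of the fit, N_idp ~ 2*dk*dR/pi; Artemis reports a value based on this estimate, possibly with a small endpoint correction. A quick illustration with hypothetical ranges:)

import math

def n_independent_points(kmin, kmax, rmin, rmax):
    """Nyquist-style estimate of the number of independent points,
    N_idp ~ 2 * dk * dR / pi, for the given fitting ranges."""
    return 2.0 * (kmax - kmin) * (rmax - rmin) / math.pi

# A value of 31.2 corresponds to dk * dR = 31.2 * pi / 2, i.e. about 49;
# for example, hypothetical ranges of k = 3-14 (1/Angstrom) and
# R = 1.5-6 (Angstrom) give a similar count:
print(round(n_independent_points(3.0, 14.0, 1.5, 6.0), 1))   # -> 31.5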
Hi Yanyun,
Good. So here's the procedure for a Hamilton test. We're comparing the fit with S02 guessed to the one with S02 set to 0.90, because that was your a priori best guess at S02.
I take the ratio of the first R-factor to the second. You didn't actually say the R-factor for the fit with S02 guessed, but it's clearly around 0.0055 based on the other information you gave. The R-factor for the 0.90 fit is 0.021. So the ratio is 0.0055/0.021 = 0.26, which we'll call x.
For the first fit the degrees of freedom is 31.2 - 24 = 8.2. Take half of that and call that a. So a is 4.1.
The first fit guesses 1 parameter that the second one doesn't. Take half of 1 and call that b. So b is 0.5.
Find a regularized lower incomplete beta function calculator, like this one: http://www.danielsoper.com/statcalc3/calc.aspx?id=37
Enter x, a, and b. The result is 0.001. This means that there is a 0.1% chance that the fits are actually consistent, and that the difference is just due to noise in the data. So in this case, we can't just explain away the high S02 as insignificant.
Of course, you could pretty much eyeball that once you gave me the uncertainties; since your fit said 1.45 +/- 0.14, that's likely to be quite incompatible with S02 = 0.9. Still, it's nice to put that on a firmer statistical basis, and I've personally found the Hamilton test quite helpful for answering "do I need to worry about [X]?" type questions. But in your case, you do need to worry about it.
This discussion has generated several suggestions; hopefully one of them is a good lead!
--Scott Calvin
Sarah Lawrence College
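(The same regularized lower incomplete beta function is available in scipy, for anyone who prefers to reproduce the number above without the online calculator; the values below are simply the ones quoted in this message.)

from scipy.special import betainc   # regularized lower incomplete beta I_x(a, b)

x = 0.0055 / 0.021   # ratio of R-factors, about 0.26
a = 8.2 / 2.0        # half the degrees of freedom used above (4.1)
b = 1.0 / 2.0        # half of the one extra floated parameter (0.5)

print(betainc(a, b, x))   # about 0.001, i.e. roughly a 0.1% chance

# (Strictly, 31.2 - 24 is 7.2 rather than 8.2; using a = 3.6 instead gives a
# probability of a few tenths of a percent, still far below 5%, so the
# conclusion that floating S02 is a statistically significant improvement
# is unchanged.)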
Yanyun,
S02 is also correlated with the Debye-Waller factor (sigma2), so you should look at that parameter in the fits you've done as well.
Chris
Sent from my iPhone
On Mar 20, 2015, at 3:46 PM, huyanyun@physics.utoronto.ca wrote:
Hi Scott,
Thank you so much for giving me your thought again. It is very helpful to know how you and other XAFS experts deal with unusual situations.
The floating S02 is fitted to be 1.45+/-0.14, this just means the fit doesn't like the idea of an S02 in a typical range. Instead of setting S02 to 0.9, I have to figure out why it happens and what it might indicate.
I guess a Hamilton test is done by adjusting one parameter (i.e., S02) while keeping other conditions and model the same. Is that right? So I record this test as following:
1) Floating S02: S02 fits to 1.45+/-0.14, R=0.0055, reduced chi^2=17.86, Percentage=0.53+/-0.04 2) Set S02=0.7, R=0.044, reduced chi^2=120.6, percentage=0.81+/-0.2 3) set S02=0.8, R=0.030, reduced chi^2=86.10, percentage=0.77+/-0.07 3) set S02=0.9, R=0.021, reduced chi^2=60.16, percentage=0.72+/-0.06 4) set S02=1.0, R=0.017, reduced chi^2=49.5, percentage=0.67+/-0.05 5) set S02=1.1, R=0.012, reduced chi^2=35.1, percentage=0.62+/-0.03 6) set S02=1.2, R=0.009, reduced chi^2=24.9, percentage=0.59+/-0.02 7) set S02=1.3, R=0.007, reduced chi^2=18.9, percentage=0.57+/-0.02 8) set S02=1.4, R=0.0057, reduced chi^2=16.1, percentage=0.55+/-0.02 9) Floating S02 to be 1.45+/-0.14 10) set S02=1.6, R=0.006, reduced chi^2=17.8, percentage=0.53+/- 0.02 11) set S02=2.0, R=0.044, reduced chi^2=120.7, percentage=0.37+/-0.06.
Therefore, I will say S02 falling in the range 1.2~1.6 gives statistically improved fit, but S02=0.9 is not terrible as well. I agree with you that I could always be confident to say the percentage is 0.64+/-0.15, but I do want to shrink down the uncertainty and think about other possibilities that could cause a large S02.
I did double-check the data reduction and normalization process, and I don't think I can improve anything in that step. By the way, I have a series of similar samples, and their fits all show a floated S02 larger than one based on the same two-site model.
Best, Yanyun
Quoting Scott Calvin:

Hi Yanyun,
Lots of comments coming in now, so I’m editing this as I write it!
One possibility for why you're getting a high best-fit S02 is that the fit doesn't care all that much about the value of S02; i.e., there is a broad range of S02 values compatible with describing the fit as "good." That should be reflected in the uncertainty that Artemis reports. If S02 is 1.50 +/- 0.48, for example, that means the fit isn't all that "sure" what S02 should be. That would mean we could just shrug our shoulders and move on, except that S02 correlates with a parameter you are interested in (in this case, site occupancy). So in such a case, I think you can cautiously fall back on what might be called a "Bayesian prior"; i.e., the belief that S02 should be "around" 0.9, and set S02 to 0.9. (Or perhaps restrain S02 to 0.9; then you're really doing something a bit more like the notion of a Bayesian prior.)
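To illustrate what I mean by a restraint (as opposed to fixing the value outright): the prior belief about S02 is added to the fit as an extra residual term, so the data can pull S02 away from 0.9 only if doing so buys a genuinely better fit. The sketch below is a generic toy example in Python, not the Artemis syntax; the model, the fake data, and the 0.1 prior width are all placeholders:

    # Toy example of a restraint: append (s02 - prior)/prior_width to the
    # residual vector, so deviations from the prior are penalized like any
    # other misfit.  Purely illustrative; this is not an EXAFS fit.
    import numpy as np
    from scipy.optimize import least_squares

    k = np.linspace(3, 14, 200)
    data = 0.9 * np.sin(2 * 2.5 * k) * np.exp(-2 * 0.005 * k**2)   # fake "data"

    def residual(params, k, data, prior=0.9, prior_width=0.1):
        s02, sigma2 = params
        model = s02 * np.sin(2 * 2.5 * k) * np.exp(-2 * sigma2 * k**2)
        restraint = (s02 - prior) / prior_width      # the "Bayesian prior" term
        return np.append(model - data, restraint)

    fit = least_squares(residual, x0=[1.0, 0.003], args=(k, data))
    print("restrained S02 =", round(fit.x[0], 3))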
On the other hand, if the S02 is, say, 1.50 +/- 0.07, then the fit really doesn’t like the idea of an S02 in the typical range. An S02 that high, with that small an uncertainty, suggests to me that something is wrong—although it could be as simple as a normalization issue during data reduction. In that case, I’d be more skeptical of just setting S02 to 0.90 and going with that result; the fit is trying to tell you something, and it’s important to track down what that something is.
Of course, once in a while a fit will find a local minimum while there’s another good local minimum around a more realistic value. That would be reflected by a fit that gives similarly good quantitative measures of fit quality (e.g., R-factors) when S02 is floated (yielding 1.50 +/- 0.07) as when it’s forced to 0.90. That’s somewhat unusual, however, particularly with a global parameter like S02.
A good way to defend setting S02 to 0.90 is to use the Hamilton test to see if floating S02 yields a statistically significant improvement over forcing it to 0.90. If not, using your prior best estimate for S02 is reasonable.
On Mar 20, 2015, at 12:48 PM, huyanyun@physics.utoronto.ca wrote:
Hi Scott,
Thank you. Our group has one copy of your book; I'll read it again after my colleague returns it to the shelf. I still want to continue our discussion here:
If we treat S02 as an empirically observed parameter, can I just set S02 = 0.9 or 1.45 and let the other parameters explain the k- and R-dependence? Because S02 is not a simple parameter and may include both theoretical and experimental effects, I feel that S02 does not necessarily have to be smaller than 1. I admit that an S02 smaller than 1 is more defensible, since it represents limitations in both the theoretical model and the experiment, but I have a series of similar samples and all of their S02 values are fitted to 1.45~1.55, never smaller than 1. Could this indicate something?
I actually found that in my system, when I set S02 = 0.9 (instead of letting it fit to 1.45), the other parameters definitely change, but the fit is not terrible; it is still a close fit, though the important site-occupancy percentage P% changes a lot. So how should I compare and select between the two fits, one with S02 = 0.9 and one with S02 = 1.45, when the two scenarios give different results?
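One way I can see why the two fits trade S02 against P: each set of paths only constrains the products S02*P and S02*(1-P), so their sum fixes S02 and their ratio fixes P. If the data mainly pin down those two products, rather different (S02, P) pairs can describe them almost equally well. A toy illustration, with amplitudes invented for the example (though close to the numbers quoted in this thread):

    # If the fit really only pins down the two "total amplitudes"
    # A1 = S02*P and A2 = S02*(1-P), then S02 and P follow from sum and ratio.
    A1, A2 = 0.77, 0.68    # invented for illustration
    S02 = A1 + A2          # 1.45
    P = A1 / (A1 + A2)     # ~0.53
    print(f"S02 = {S02:.2f}, P = {P:.2f}")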
Best, Yanyun

Quoting Scott Calvin <scalvin@sarahlawrence.edu>:

Hi Yanyun,
I am hesitant to promote on this list a commercial product from which I directly profit, but it seems to me you are asking a bigger set of questions than can comfortably and sufficiently be answered in this format, and they are questions which have been answered in detail elsewhere.
In my book XAFS for Everyone, I have four pages devoted solely to S02, along with related information elsewhere in the book.
Since you have a University of Toronto address, I am guessing you have access to their library. If you don't wish to purchase the book, you can request it via interlibrary loan, at no cost to you or your institution.
In the meantime, here is a quote from the book that may be useful in thinking about S02:
"Alternatively, one can treat So2 as a phenomenological parameter that accounts for any amplitude suppression independent of k and R, regardless of physical cause (Krappe and Rossner 2004). Under this view, So2 does not have any particular physical meaning, and the k or R dependence of intrinsic losses can be assigned to other parameters."
That's the way I usually think about it--as not having a single physical meaning, but rather as being an empirically observed correction factor, relative to simplistic theories, that reflects both experimental effects and limitations in the theoretical model.
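To spell out where it sits: in the usual form of the EXAFS equation, S02 is the single k- and R-independent prefactor multiplying every path, chi(k) = S02 * sum_i [ N_i * F_i(k) / (k R_i^2) ] * exp(-2 k^2 sigma_i^2) * exp(-2 R_i / lambda(k)) * sin(2 k R_i + phi_i(k)), which is why any amplitude effect that is roughly flat in k and R (a normalization error, for instance, or an occupancy parameterized separately) tends to get absorbed into it.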
Hope that helps...
--Scott Calvin Sarah Lawrence College
participants (6)
- Anatoly I Frenkel
- Bruce Ravel
- Chris Patridge
- huyanyun@physics.utoronto.ca
- Matt Newville
- Scott Calvin