Question about the R_factor in artemis/ifeffit
Dear Matt and/or Bruce,

First, thank you for the great software! I am a former EXAFSPAK user and have been really impressed by Artemis/ifeffit. It's easier to use, more flexible, and gives more reasonable errors for the fit parameters.

My question is about the value of the R_factor for R-space fitting. This seems to be the R_factor for the real component of the Fourier transform only, although the fitting is performed on both the real and imaginary components. Is there a reason that the contribution of the imaginary component to the R_factor is not included?

Sincerely,
Wayne

---
Wayne Lukens
Scientist
Lawrence Berkeley National Laboratory
email: wwlukens@lbl.gov
phone: (510) 486-4305
FAX: (510) 486-5596
On Friday 30 April 2004 06:33 pm, Wayne Lukens wrote:
> First, thank you for the great software! I am a former EXAFSPAK user and have been really impressed by Artemis/ifeffit. It's easier to use, more flexible, and gives more reasonable errors for the fit parameters.
Well, welcome to the "family" and thanks for the kind words!
> My question is about the value of the R_factor for R-space fitting. This seems to be the R_factor for the real component of the Fourier transform only, although the fitting is performed on both the real and imaginary components. Is there a reason that the contribution of the imaginary component to the R_factor is not included?
A quick perusal of Matt's source code does not suggest that what you say is true. Could you be more specific as to why you think that the R-factor is computed incorrectly?

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory            phone: (1) 202 767 2268
Washington DC 20375, USA             fax: (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage: http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
Hi Wayne,

The R-factor is being calculated with both the real and imaginary parts. If you're looking at the source code, the R-factor is calculated in fitfun.f (in src/lib). There, the sum is over the elements of the arrays thifit and chifit. For R-space fits, thifit contains alternating real and imaginary elements of Delta chi(R) (theory minus data) within the fit range, and chifit contains alternating real and imaginary elements of chi(R) for the data. For k-space fits, thifit contains the elements of the k-weighted Delta chi(k) (theory minus data) and chifit contains the k-weighted chi(k) for the data.

But I also believe that it doesn't matter much: because of the nature of the Fourier transform of a purely real function, an R-factor calculated for the real part only should be equal to one calculated for the imaginary part only, and both should be very close to the total R-factor.

Hope that helps,

--Matt

On Sat, 1 May 2004, Bruce Ravel wrote:
> On Friday 30 April 2004 06:33 pm, Wayne Lukens wrote:
>
> > First, thank you for the great software! I am a former EXAFSPAK user and have been really impressed by Artemis/ifeffit. It's easier to use, more flexible, and gives more reasonable errors for the fit parameters.
>
> Well, welcome to the "family" and thanks for the kind words!
>
> > My question is about the value of the R_factor for R-space fitting. This seems to be the R_factor for the real component of the Fourier transform only, although the fitting is performed on both the real and imaginary components. Is there a reason that the contribution of the imaginary component to the R_factor is not included?
>
> A quick perusal of Matt's source code does not suggest that what you say is true. Could you be more specific as to why you think that the R-factor is computed incorrectly?
>
> B
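Matt's claim is easy to check numerically. Below is a minimal sketch in plain Python/numpy (not Ifeffit's actual fitfun.f code; the damped sine waves standing in for data and theory are made up for illustration), comparing the R-factor computed from the real part only, the imaginary part only, and both:

    import numpy as np

    def r_factor(data, fit):
        # R-factor: sum of squared residuals over sum of squared data
        return np.sum(np.abs(data - fit) ** 2) / np.sum(np.abs(data) ** 2)

    # Toy "data" and "theory": damped sine waves standing in for k^2-weighted chi(k)
    k = np.linspace(0.0, 15.0, 512)
    chi_data = np.sin(2 * 2.20 * k) * np.exp(-0.010 * k**2) * k**2
    chi_fit = 0.95 * np.sin(2 * 2.21 * k) * np.exp(-0.011 * k**2) * k**2

    # Fourier transform to R-space; both chi(k) arrays are purely real
    chir_data = np.fft.rfft(chi_data)
    chir_fit = np.fft.rfft(chi_fit)

    print("R (real part only):", r_factor(chir_data.real, chir_fit.real))
    print("R (imag part only):", r_factor(chir_data.imag, chir_fit.imag))
    print("R (both parts):    ", r_factor(chir_data, chir_fit))

The three values come out close to one another (though not necessarily identical), which is consistent with Matt's explanation and with Wayne's real-part-only spreadsheet calculation below matching the number reported by Artemis.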
Hi Matt,

Thank you for the explanation. This makes sense now.

Sincerely,
Wayne

On May 2, 2004, at 8:21 PM, Matt Newville wrote:
> Hi Wayne,
>
> The R-factor is being calculated with both the real and imaginary parts. If you're looking at the source code, the R-factor is calculated in fitfun.f (in src/lib). There, the sum is over the elements of the arrays thifit and chifit. For R-space fits, thifit contains alternating real and imaginary elements of Delta chi(R) (theory minus data) within the fit range, and chifit contains alternating real and imaginary elements of chi(R) for the data. For k-space fits, thifit contains the elements of the k-weighted Delta chi(k) (theory minus data) and chifit contains the k-weighted chi(k) for the data.
>
> But I also believe that it doesn't matter much: because of the nature of the Fourier transform of a purely real function, an R-factor calculated for the real part only should be equal to one calculated for the imaginary part only, and both should be very close to the total R-factor.
>
> Hope that helps,
>
> --Matt
>
> On Sat, 1 May 2004, Bruce Ravel wrote:
>
> > On Friday 30 April 2004 06:33 pm, Wayne Lukens wrote:
> >
> > > First, thank you for the great software! I am a former EXAFSPAK user and have been really impressed by Artemis/ifeffit. It's easier to use, more flexible, and gives more reasonable errors for the fit parameters.
> >
> > Well, welcome to the "family" and thanks for the kind words!
> >
> > > My question is about the value of the R_factor for R-space fitting. This seems to be the R_factor for the real component of the Fourier transform only, although the fitting is performed on both the real and imaginary components. Is there a reason that the contribution of the imaginary component to the R_factor is not included?
> >
> > A quick perusal of Matt's source code does not suggest that what you say is true. Could you be more specific as to why you think that the R-factor is computed incorrectly?
> >
> > B
Hi Bruce,

I calculated the R_factor by hand (well, by Excel, really) for the real component of the Fourier transformed spectrum. The analysis was actually carried out using Artemis, and I calculated the R_factor over the fitting range in R. For the real component, this was exactly equal to the R_factor returned from Artemis. I don't necessarily think that anything is wrong; I was just curious about this.

Sincerely,
Wayne

On May 1, 2004, at 7:48 AM, Bruce Ravel wrote:
> On Friday 30 April 2004 06:33 pm, Wayne Lukens wrote:
>
> > First, thank you for the great software! I am a former EXAFSPAK user and have been really impressed by Artemis/ifeffit. It's easier to use, more flexible, and gives more reasonable errors for the fit parameters.
>
> Well, welcome to the "family" and thanks for the kind words!
>
> > My question is about the value of the R_factor for R-space fitting. This seems to be the R_factor for the real component of the Fourier transform only, although the fitting is performed on both the real and imaginary components. Is there a reason that the contribution of the imaginary component to the R_factor is not included?
>
> A quick perusal of Matt's source code does not suggest that what you say is true. Could you be more specific as to why you think that the R-factor is computed incorrectly?
>
> B
On Monday 03 May 2004 11:29 am, Wayne Lukens wrote:
> I calculated the R_factor by hand (well, by Excel, really) for the real component of the Fourier transformed spectrum. The analysis was actually carried out using Artemis, and I calculated the R_factor over the fitting range in R. For the real component, this was exactly equal to the R_factor returned from Artemis.
I am surprised that it was numerically identical, but I think Matt explained well enough why the R-factor computed using the real part, the imaginary part, or both would be roughly the same.

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory            phone: (1) 202 767 2268
Washington DC 20375, USA             fax: (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage: http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
Hi folks,

I have some questions that for most of you are certainly trivial, but for a newcomer trying to learn things by himself are not, so please bear with me. I am trying to figure out the best parameters for Athena processing of a set of spectra recorded on Ni and Zn protein samples, so I am dealing with spectra containing disorder and a fast-dying signal, and also some degree of noise. I figure that in order to run the fitting I need to start from data whose processing I trust. I also figure that the data will look different (obviously) depending on the parameters I use for the processing with Athena, so I will try to address some points here. I hope someone has some time to answer:

1) I understand that energy calibration should be performed, and I did so by using the atomic edge energy. I also understand that this parameter could be floated in the subsequent fitting. Any comment on this procedure?

2) The next parameter is Rbkg. I have read the literature about this parameter, especially the 1993 paper by Matt. I sort of understood the rationale and method, but I also gathered that the default value of 1.0 can be changed. What are the criteria for changing this? I tend to keep it at the default value, but I feel a bit uneasy about not being able to control this parameter intelligently.

3) I understood from the paper by Matt (and from the "Using Athena" manual by Bruce) that one could use a "standard" to estimate the level of leakage into the small chi(R) region (apodization effects due to Fourier window filtering). The manual states that one can read in a chi.dat file produced with feff. However, I do not understand how to build the feff.inp for feff and produce a useful chi.dat to use as a "standard". Please help?

4) Then there is the k-weight parameter to be changed for the background removal. The default value for this is 1, but higher values are allowed. I noticed that increasing the k-weight for background removal produces a curve in chi(E) which more and more appears to disregard the edge peak, resembling more and more a smooth, monotonically increasing curve. Consequently, the chi(k) changes depending on this parameter, and I again start to be worried about the subsequent fit of it. What are the criteria for choosing this k-weight for the background?

5) The pre-edge range: here the manual (and the online help) states that the range is -200 to - <snip> (btw, is there a way to see the end of the long sentences in the echo area?) but the actual default values are -150/-75 for most cases, while it can be different for different spectra. I do not understand the rationale in choosing these default values. I am guessing that the program somehow finds the "best" range and uses it. If so, I would like to know the criteria for this choice. Also, I read somewhere in your documents that one should try to have the pre-edge and the post-edge lines run parallel. Is this a good criterion? Should I change the pre- and post-edge ranges in order to satisfy this criterion? If the default values yield non-parallel lines, should I worry? If so, what should I do?

6) Spline range: this is another important issue, I think. In the paper by Matt it is stated that "standard practice ... has been to ignore everything below an energy typically 30 eV above the E0" and that Autobk is an advantage because it can read in data very close to the E0. My question is: the default value for k-min in the spline range is set to 0.5 inverse Angstroms (0.952 eV). What are the criteria to change this default value? Also, is there any relationship between this range and the range subsequently used for FT? My guess is that the k-min for FT should always be higher than the k-min for the spline range, but please comment on this. Also, what are the criteria to set the default value for k-max in the background removal spline? Does this relate to k-max for FT, in the sense that the latter should always be smaller than the k-max used for the spline background calculation? I also noticed that (obviously) the peak in the chi(R) spectrum at R values smaller than the first-shell coordination distance decreases as I increase the k-min for the spline calculation, while the first and more distant shells are less influenced by this parameter (even though to a significant extent, which, again, worries me). What is the best value for k-min for the spline, and what are the criteria for deciding?

7) For the FT parameters: Shelly's protocol to define the k-range to best calculate the chi(R) was clear and useful to me. However, I would like to know more about choosing the dk parameter and the window type. I need some general criteria to choose between the various possibilities. I noticed that the Kaiser-Bessel window is the default, but in the literature I almost invariably find the Hanning window. Please comment. I come from the NMR spectroscopy world, and I am used to running FT and playing around with the parameters defining the type of apodization function. However, in that case it was pretty clear to me what standard parameters are used by most people, while here, being new to the game, I need some guidance.

I tried to read all the material available (thanks a lot for such a great effort) and also the previous mailing list messages, but I thought that it was time to ask all the questions I have at once. I feel like a naive cook who is afraid of making mistakes, and therefore reads a recipe very carefully, not to mix the wrong ingredients in the wrong amounts. So, again, please bear with me.

Best,
Stefano

--
____________________________________________
Stefano Ciurli
Professor of Chemistry
Department of Agro-Environmental Science and Technology
University of Bologna
Viale Giuseppe Fanin, 40
I-40127 Bologna, Italy
Phone: +39-051-209-6204
Fax: +39-051-209-6203

"Fatti non foste a viver come bruti, ma per seguir virtute e canoscenza"
Dante Alighieri - Inferno - Canto XXVI
Stefano,

What an excellent email! These are very insightful questions that will, I presume, prompt all sorts of good discussion. What a great use of the mailing list!

I am going to answer a few of the questions in this email, but leave others for later or for others to take a stab at.
> 1) I understand that energy calibration should be performed, and I did so by using the atomic edge energy. I also understand that this parameter could be floated in the subsequent fitting. Any comment on this procedure?
Often energy calibration is done using a reference spectrum (in your case an Ni or Zn foil) measured simultaneously with the sample (or perhaps right before or right after). That way you can measure an energy shift relative to the 0-valent metal.

Also, you should be aware that, although Feff's relative energy scale is accurate, its absolute energy scale may not be. Consequently, one almost always needs an e0 parameter when fitting with Feff in order to "line up" the energy grids of the data and the theory.

This is not to say that energy calibration is pointless. Quite the contrary. Along with the energy shifts I mentioned above, you want all of your data to be aligned and calibrated so that the e0's you measure when fitting are internally consistent within the data ensemble.
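For what it's worth, the calibration step described above is simple to express in code. A minimal sketch, assuming mu(E) arrays for the foil and the sample, and taking E0 as the maximum of the derivative of mu(E) (a common convention, not the only one; the array names are hypothetical, and the Ni K edge at 8333 eV is used just for illustration):

    import numpy as np

    def find_e0(energy, mu):
        # Estimate E0 as the energy of the steepest part of the edge
        return energy[np.argmax(np.gradient(mu, energy))]

    def calibration_shift(energy, mu_foil, tabulated_e0):
        # Shift that puts the reference foil's edge at the tabulated energy
        return tabulated_e0 - find_e0(energy, mu_foil)

    # Hypothetical usage, foil measured simultaneously with the sample:
    # shift = calibration_shift(energy, mu_foil, 8333.0)  # Ni K edge, eV
    # energy_aligned = energy + shift  # apply the same shift to the sample scan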
> 3) I understood from the paper by Matt (and from the "Using Athena" manual by Bruce) that one could use a "standard" to estimate the level of leakage into the small chi(R) region (apodization effects due to Fourier window filtering). The manual states that one can read in a chi.dat file produced with feff. However, I do not understand how to build the feff.inp for feff and produce a useful chi.dat to use as a "standard". Please help?
Since you are looking at proteins, I'll give the "protein answer" rather than the "crystal answer". You will need a Protein Data Bank file for your protein or for something that you think is similar.

A simple example of converting a PDB file to a feff.inp file is shown on page 2.5 ("Preparing the FEFF input file for non-crystalline materials") of this document: http://cars9.uchicago.edu/xafs/NSLS_EDCA/Sept2002/Ravel.pdf

Note that Feff does NOT require that the central atom be at (0,0,0). Also, Feff does NOT require that the atoms list be in any particular order. Thus, you can take just the bit around your metal atom from the PDB file and doctor it up as explained on that page.

If there is no PDB file for your exact protein, pick something similar. As long as it's close, that should be enough to begin interpreting the data.
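To make that recipe concrete, here is a minimal sketch (hypothetical file name; a real feff.inp also needs TITLE, POTENTIALS, and control cards, and one unique potential index per element, which this sketch glosses over) that pulls the atoms within a few Angstroms of the metal out of a PDB file and prints them as a Feff-style ATOMS list:

    import numpy as np

    def pdb_atoms(filename):
        # Parse element symbols and xyz coordinates from ATOM/HETATM records
        atoms = []
        with open(filename) as f:
            for line in f:
                if line.startswith(("ATOM", "HETATM")):
                    xyz = np.array([float(line[30:38]), float(line[38:46]),
                                    float(line[46:54])])
                    # element columns 77-78; fall back to the atom name (rough)
                    elem = (line[76:78].strip() or line[12:16].strip()[0]).capitalize()
                    atoms.append((elem, xyz))
        return atoms

    def print_atoms_list(atoms, center_elem="Ni", rmax=5.0):
        # Print a Feff-style ATOMS list of everything within rmax of the metal.
        # For brevity everything but the absorber is lumped into ipot 1; a real
        # feff.inp needs a POTENTIALS block with one ipot per element.
        center = next(xyz for el, xyz in atoms if el == center_elem)
        print("ATOMS")
        for el, xyz in atoms:
            d = np.linalg.norm(xyz - center)
            if d <= rmax:
                ipot = 0 if d == 0.0 else 1
                x, y, z = xyz - center
                print("  %9.5f %9.5f %9.5f  %d  %s" % (x, y, z, ipot, el))
        print("END")

    # print_atoms_list(pdb_atoms("myprotein.pdb"), center_elem="Ni", rmax=5.0)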
> 5) The pre-edge range: here the manual (and the online help) states that the range is -200 to - <snip> (btw, is there a way to see the end of the long sentences in the echo area?) but the actual default
Go to the Edit menu and select "Echo buffer". The complete sentence is written there. The sentences are being snipped because sentences that are too long make the whole window expand to show them. That is kind of jarring and confusing. As I've been using Athena and finding examples of lines that are too long, I have been editing them to be shorter.
> values are -150/-75 for most cases, while it can be different for different spectra. I do not understand the rationale in choosing these default values. I am guessing that the program somehow finds the "best" range and uses it. If so, I would like to know the criteria for this choice.
The defaults are indeed -200 to -30, but the first value will be reset if it is lower than the first data point. You can set values that you think are appropriate in the preferences dialog.

The rationale for this choice is that, umm... well... ummm....., they are pretty reasonable guesses for most data sets, and when they are not reasonable guesses then you can change them. Not much of a reason, but I don't know what else to say.
> Also, I read somewhere in your documents that one should try to have the pre-edge and the post-edge lines run parallel. Is this a good criterion? Should I change the pre- and post-edge ranges in order to satisfy this criterion? If the default values yield non-parallel lines, should I worry? If so, what should I do?
My, my! Certainly not. Not only are they not parallel by the physics of the absorption processes, they are usually more non-parallel in practice due to detector and sample effects. The pre- and post-edge lines should go through the data. *That* is the only good rule.

OK, I'm going to go get a cup of coffee now. I'll poke at these questions some more later on.

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory            phone: (1) 202 767 2268
Washington DC 20375, USA             fax: (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage: http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
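The "lines should go through the data" rule is easy to picture in code. A minimal sketch of pre-edge subtraction and edge-step normalization, assuming straight lines for both regions (the energy ranges are illustrative guesses; Athena's actual defaults and its choice of post-edge polynomial may differ):

    import numpy as np

    def fit_line(energy, mu, e0, lo, hi):
        # Least-squares line through mu(E) over [e0+lo, e0+hi]
        sel = (energy >= e0 + lo) & (energy <= e0 + hi)
        return np.polyfit(energy[sel], mu[sel], 1)

    def edge_step(energy, mu, e0, pre=(-150.0, -30.0), post=(100.0, 400.0)):
        # The two lines are fit independently -- nothing requires them to be
        # parallel -- and the edge step is their separation evaluated at E0.
        m1, b1 = fit_line(energy, mu, e0, *pre)   # pre-edge line
        m2, b2 = fit_line(energy, mu, e0, *post)  # post-edge line
        return (m2 * e0 + b2) - (m1 * e0 + b1)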
Bruce,
> What an excellent email! These are very insightful questions that will, I presume, prompt all sorts of good discussion. What a great use of the mailing list!
thanks for not making me feel like I am bugging you all the time... :-)) and for the time spent here for us
> Often energy calibration is done using a reference spectrum (in your case an Ni or Zn foil) measured simultaneously with the sample (or perhaps right before or right after). That way you can measure an energy shift relative to the 0-valent metal.
we did not do that... and I have never seen it done for biological samples in the short experience I have had. So I hope it is really not too essential, as long as the fitted E0 values are not too different from the zero-valent atom (a few eV, right?)
> Also, you should be aware that, although Feff's relative energy scale is accurate, its absolute energy scale may not be. Consequently, one almost always needs an e0 parameter when fitting with Feff in order to "line up" the energy grids of the data and the theory.
OK
> This is not to say that energy calibration is pointless. Quite the contrary. Along with the energy shifts I mentioned above, you want all of your data to be aligned and calibrated so that the e0's you measure when fitting are internally consistent within the data ensemble.
OK. Good.
> Since you are looking at proteins, I'll give the "protein answer" rather than the "crystal answer". You will need a Protein Data Bank file for your protein or for something that you think is similar.
I think I understand that I should use one of the .DAT files produced by feff as a standard to optimize the short R range (less than Rbkg), right? Well, I tried to do that. First of all, the purpose of my study is to sort out the number and type of ligands to the Ni and Zn in our protein, so we do not know that. I could in principle use a .DAT file coming from, let's say, Ni(H2O)6^2+ (the nickel hexa-aquo ion), whose structure I could get from a crystallographic database or from a q-chem calculation. Of course in that case I will only have a single shell, but that may be enough? What if I use NONE as a standard? In that case the pre-first-shell peak is not removed very well at all... but I know that it is not that important for the fitting, as for that I will start fitting at higher R... right?
> A simple example of converting a PDB file to a feff.inp file is shown on page 2.5 ("Preparing the FEFF input file for non-crystalline materials") of this document: http://cars9.uchicago.edu/xafs/NSLS_EDCA/Sept2002/Ravel.pdf
I have gone through that. I am able to do it by now :-)) tnx!
> Note that Feff does NOT require that the central atom be at (0,0,0). Also, Feff does NOT require that the atoms list be in any particular order. Thus, you can take just the bit around your metal atom from the PDB file and doctor it up as explained on that page.
OK
> If there is no PDB file for your exact protein, pick something similar. As long as it's close, that should be enough to begin interpreting the data.
nickel or zinc hexa-aquo is enough then?
> > 5) The pre-edge range: here the manual (and the online help) states that the range is -200 to - <snip> (btw, is there a way to see the end of the long sentences in the echo area?) but the actual default
> Go to the Edit menu and select "Echo buffer". The complete sentence is written there.
>
> The sentences are being snipped because sentences that are too long make the whole window expand to show them. That is kind of jarring and confusing. As I've been using Athena and finding examples of lines that are too long, I have been editing them to be shorter.
OK!
> > values are -150/-75 for most cases, while it can be different for different spectra. I do not understand the rationale in choosing these default values. I am guessing that the program somehow finds the "best" range and uses it. If so, I would like to know the criteria for this choice.
> The defaults are indeed -200 to -30, but the first value will be reset if it is lower than the first data point.
OK, now I get it.
> You can set values that you think are appropriate in the preferences dialog. The rationale for this choice is that, umm... well... ummm....., they are pretty reasonable guesses for most data sets, and when they are not reasonable guesses then you can change them. Not much of a reason, but I don't know what else to say.
OK. If the following criterion is wrong - as I understand it is - then the values guessed by the program are actually very good.
> > Also, I read somewhere in your documents that one should try to have the pre-edge and the post-edge lines run parallel. Is this a good criterion? Should I change the pre- and post-edge ranges in order to satisfy this criterion? If the default values yield non-parallel lines, should I worry? If so, what should I do?
> My, my! Certainly not. Not only are they not parallel by the physics of the absorption processes, they are usually more non-parallel in practice due to detector and sample effects. The pre- and post-edge lines should go through the data. *That* is the only good rule.
Who knows where I found that idea? I am certain I have read it somewhere...
> OK, I'm going to go get a cup of coffee now. I'll poke at these questions some more later on.
Being in a different time zone, I am done for today and going to pick up my kids from school. I look forward to reading more of your wisdom (and others' as well, if willing) later on tonight or tomorrow.

Ciao,
Stefano

--
____________________________________________
Stefano Ciurli
Professor of Chemistry
Department of Agro-Environmental Science and Technology
University of Bologna
Viale Giuseppe Fanin, 40
I-40127 Bologna, Italy
Phone: +39-051-209-6204
Fax: +39-051-209-6203

"Fatti non foste a viver come bruti, ma per seguir virtute e canoscenza"
Dante Alighieri - Inferno - Canto XXVI
Okee dokee, a few more thoughts after Matt's and Shelly's posts...

SC> 2) The next parameter is Rbkg. I have read the literature about this parameter, especially the 1993 paper by Matt. I sort of understood the rationale and method, but I also gathered that the default value of 1.0 can be changed. What are the criteria for changing this? I tend to keep it at the default value, but I feel a bit uneasy about not being able to control this parameter intelligently.

Unfortunately there is not a really good rule of thumb. One might say "half the distance to the first peak", but that's not really a very good rule. The issue is that the choice of Rbkg can have a profound impact on the low-frequency Fourier components. (Try setting Rbkg to an absurdly large value, say 2 or 3, and see what damage that does to your data!)

I would say the real answer is that the choice of Rbkg shouldn't be too strongly correlated with the things you are trying to measure -- N, delta_R, sigma^2. Once you have set up your fitting model in Artemis, a little experiment to try is to save chi(k) using several different values of Rbkg and see how the answers change when you do the fits. To completely make up an example, suppose that you save chi(k) for Rbkg=0.75 and 0.95, then do the fits. If the best-fit values of N, delta_R, and so on are the same within their error bars, then it doesn't matter which Rbkg you use in Athena. In that case you would probably use the larger one, because it probably makes the "prettier" picture in terms of removing very low frequency Fourier components.

SC> 3) I understood from the paper by Matt (and from the "Using Athena" manual by Bruce) that one could use a "standard" to estimate the level of leakage into the small chi(R) region (apodization effects due to Fourier window filtering). The manual states that one can read in a chi.dat file produced with feff. However, I do not understand how to build the feff.inp for feff and produce a useful chi.dat to use as a "standard". Please help?

In your follow-up to my original answer to this question, you made it clear that the role of the standard is still not clear to you. As Matt said, the standard need not be perfect. Let me explain why.

The autobk algorithm works by "optimizing" the low-frequency Fourier components in order to select the correct spline function. In the absence of a standard, "optimize" means "minimize". That is, without a standard, autobk finds the spline that makes the chi(R) spectrum as small as possible between 0 and Rbkg.

Again as Matt said, low-Z ligands often benefit from using a standard. The reason is that low-Z ligands tend to have quite short distances, resulting in a peak in chi(R) that has a significant tail into the region below Rbkg. In that case, *minimizing* the components between 0 and Rbkg is a poor idea because they are *supposed* to be non-zero. A standard, then, is used to tell the autobk algorithm what the components between 0 and Rbkg are supposed to look like, and the spline is chosen to make the data look that way between 0 and Rbkg. In that context, the standard need not be perfect -- close should be good enough.

Oh yeah. The feff run should produce a file called "chi.dat". That's a good one to use as a standard. Matt's ifeffit recipe for converting a feffNNNN.dat file into a chi(k) works, too.
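There is a quantitative side to the "absurdly large Rbkg" warning above: in the AUTOBK approach the number of spline variables grows with Rbkg (roughly 2*Rbkg*Delta_k/pi, by the Nyquist criterion), so a large Rbkg gives the spline enough freedom to absorb real first-shell signal. A back-of-the-envelope sketch of that scaling (the exact count used by Ifeffit may differ slightly):

    import numpy as np

    # Number of spline variables scales with the information content below
    # Rbkg: roughly 2 * Rbkg * Delta_k / pi independent points.
    kmin, kmax = 0.0, 15.0
    for rbkg in (0.75, 1.0, 1.5, 2.0, 3.0):
        nspline = int(2 * rbkg * (kmax - kmin) / np.pi) + 1
        print("Rbkg = %4.2f  ->  ~%d spline parameters" % (rbkg, nspline))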
SC> 4) Then there is the k-weight parameter to be changed for the background removal. The default value for this is 1, but higher values are allowed. I noticed that increasing the k-weight for background removal produces a curve in chi(E) which more and more appears to disregard the edge peak, resembling more and more a smooth, monotonically increasing curve. Consequently, the chi(k) changes depending on this parameter, and I again start to be worried about the subsequent fit of it. What are the criteria for choosing this k-weight for the background?

With noisy spectra, the high-energy portion of the data might be dominated by fluctuations that have nothing to do with the exafs. In that case a large k-weight for the background removal might be unduly influenced by stuff that is not the data you want to analyze but instead is detector problems, sample inhomogeneity problems, gremlins, crap, whatever ;-)

SC> 6) Spline range: this is another important issue, I think. In the paper by Matt it is stated that "standard practice ... has been to ignore everything below an energy typically 30 eV above the E0" and that Autobk is an advantage because it can read in data very close to the E0. My question is: the default value for k-min in the spline range is set to 0.5 inverse Angstroms (0.952 eV). What are the criteria to change this default value? Also, is there any relationship between this range and

Matt said:

MN> I typically use kmin=0, kmax=last_data_point, dk=0 for the spline (which are the defaults). Bruce seems to prefer kmin=0.5 or so: it shouldn't make a difference.

The reason I chose to make 0.5 the default is so that Athena will stand a better chance of dealing well with data that have a large white line. For data like that, spline_kmin=0.0 often leads to a poor background removal because the spline just doesn't have the freedom to deal with such a quickly changing part of the spectrum. For many materials 0.0 is probably a better choice, but in Athena I want the default behavior to always be non-stupid. ("Smart" is a bit too difficult for me ;-)

SC> the range subsequently used for FT? My guess is that the k-min for FT should always be higher than the k-min for the spline range, but please comment on this. Also, what are the criteria to set the

Well, the FT range must be equal to or smaller than the spline range, but you probably already figured that out! Other than that, there is no relationship. It is certainly useful in practice for them not to be the same. Getting back to the problem of dealing with a white line, I often find it useful to make spline_kmin 1 or even larger to avoid the white line altogether. That means that the FT range will also be smaller, resulting in less data for fitting, but it seems worth it. Since it is so hard to distinguish data from background under the white line, it is often better to just avoid the problem altogether.

SC> 7) for the FT parameters: Shelly's protocol to define the k-range to best calculate the chi(R) was clear and useful to me. However, I would like to know more about choosing the dk parameter and the window type. I need some general criteria to choose between the various possibilities. I noticed that the Kaiser-Bessel window is the default, but in the literature I almost invariably find the Hanning window. Please comment.

Matt summed this up. (I liked the historical context!) Here is a talk that Shelly gave last year at the NSLS exafs course: http://cars9.uchicago.edu/xafs/NSLS_EDCA/July2003/Kelly.pdf

Check out page 21. It demonstrates really clearly how little the choice of window shape matters.
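That comparison is also easy to reproduce outside of the programs. A minimal numpy sketch on a toy damped sine wave (simplified FT conventions and illustrative kmin/kmax/beta values -- this is not Ifeffit's actual transform code) showing that a Hanning and a Kaiser-Bessel window put the peak of |chi(R)| in the same place:

    import numpy as np

    n = 2048
    k = np.linspace(0.0, 20.0, n)
    chi = np.sin(2 * 2.0 * k) * np.exp(-0.015 * k**2) * k**2  # "shell" at R = 2.0

    kmin, kmax = 3.0, 12.0
    inside = (k >= kmin) & (k <= kmax)

    windows = {"hanning": np.hanning(inside.sum()),
               "kaiser-bessel": np.kaiser(inside.sum(), 6.0)}

    r = np.fft.rfftfreq(n, d=k[1] - k[0]) * np.pi  # e^{2ikR} convention: R = pi*f
    for name, taper in windows.items():
        win = np.zeros(n)
        win[inside] = taper
        chir = np.abs(np.fft.rfft(chi * win))
        print("%-13s  peak of |chi(R)| at R = %.2f" % (name, r[np.argmax(chir)]))

Both windows recover the peak at R = 2.0; the window shape mostly affects the sidelobes and peak width, not the position.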
As for dk, I don't have a good rule of thumb. I generally choose a smallish number.

I wanted also to say a few words about your question regarding peak fitting. In a numerical sense, peak fitting is harder than fitting exafs spectra. An exafs spectrum is basically one or more damped sine waves. If you try to fit that with any ol' damped sine wave that's not too far out of phase, you won't go too far wrong. The same is not true of fitting peak shapes to xanes data. That kind of non-linear fit is notoriously unstable. In my experience, it can be very difficult to fit the centroid of a peak shape to a feature in a xanes spectrum. That is why Athena does not flag the centroid as a variable by default.

I am not completely clear on what's going on with your data, but it seems as though you want to fit a very tiny feature at the very beginning of the data. The problem there is that, if the pre-edge is not completely flat, the lineshape used for the edge step might go through the peak or well below the peak. If so, the peak shape you are trying to use to fit that feature might be poorly defined numerically. I would recommend trying to adjust the parameters by hand until you get a fairly decent representation of the data, then let some of the parameters float in a fit to start understanding where the parameters will want to float off to. If you try to fit the whole xanes spectrum in one quick swoop, though, you are indeed likely to get a poor fit.

SC> I feel like a naive cook who is afraid of making mistakes, and therefore reads a recipe very carefully, not to mix the wrong ingredients in the wrong amounts. So, again, please bear with me.

Well, I have enjoyed cooking my entire life and I like to think I am not so bad at it. In my experience, the one way to truly learn how to make a really good dish is to make some bad ones and think about what went wrong. You'll know when you're getting there because the dish will start to taste good.

The same applies to using the codes. Poke at the buttons and don't fret if weird stuff happens. Think about what it means and why the weird results are inconsistent with what you know about your sample. Eventually the results from the analysis will start to make sense in the context of what you know about your samples. Mmmmmm.... that's good analysis!

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory            phone: (1) 202 767 2268
Washington DC 20375, USA             fax: (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage: http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
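To illustrate the kind of peak fit Bruce describes, here is a minimal sketch with scipy on synthetic data, assuming a simple arctangent edge step plus one Gaussian pre-edge feature (the model, the energies, and all parameter values are made up for illustration). Hand-tuned starting guesses, as Bruce recommends, are what keep the floating centroid from wandering:

    import numpy as np
    from scipy.optimize import curve_fit

    def xanes_model(e, step, e0, width, amp, center, sigma):
        # Arctangent step for the edge plus a Gaussian for the pre-edge peak
        edge = step * (0.5 + np.arctan((e - e0) / width) / np.pi)
        peak = amp * np.exp(-((e - center) ** 2) / (2.0 * sigma**2))
        return edge + peak

    # Synthetic spectrum: a tiny feature riding on the rising edge, plus noise
    rng = np.random.default_rng(0)
    e = np.linspace(8320.0, 8380.0, 400)
    truth = (1.0, 8345.0, 3.0, 0.08, 8338.0, 1.5)
    mu = xanes_model(e, *truth) + rng.normal(0.0, 0.005, e.size)

    # Hand-adjusted guesses first, then let the parameters float in the fit
    p0 = (1.0, 8346.0, 2.0, 0.05, 8338.5, 1.0)
    popt, pcov = curve_fit(xanes_model, e, mu, p0=p0)
    print("fitted centroid: %.2f eV (true value 8338.00)" % popt[4])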
Hi Bruce,
> Oh yeah. The feff run should produce a file called "chi.dat". That's a good one to use as a standard. Matt's ifeffit recipe for converting a feffNNNN.dat file into a chi(k) works, too.
when I run feff, I do not get any chi.dat file. Why?

I tried to use Artemis to do that, but right now the new version I installed crashes when I try to read in a feff calculation (which, btw, I performed on something that should be similar to my metal site and was about to use as a standard...). The trap message states:

# Artemis 0.7.004
# This file created at 12:59:30 on 27 May, 2004
# using darwin, perl 5.008001, Tk 804.027, and Ifeffit 1.2.5
# Workspace: /Users/stefano/.horae/stash/artemis.project.0/
The following message was trapped by Artemis on a SIGDIE:
Artemis0.7.004die/Users/stefano/.horae/stash/ARTEMIS.TRAPCODE(0x1e38760)/Users/stefano/.horae/stash/artemis.project.0/ at /Applications/Ifeffit/bin/artemis line 1550
main::__ANON__('Callback called exit.\x{a}') called at /Applications/Ifeffit/bin/artemis line 0

Stefano

PS: I am using the OSX 10.3 version

--
____________________________________________
Stefano Ciurli
Professor of Chemistry
Department of Agro-Environmental Science and Technology
University of Bologna
Viale Giuseppe Fanin, 40
I-40127 Bologna, Italy
Phone: +39-051-209-6204
Fax: +39-051-209-6203

"Fatti non foste a viver come bruti, ma per seguir virtute e canoscenza"
Dante Alighieri - Inferno - Canto XXVI
Hi,

I also see Stefano's problem trying to import an Atoms.inp into Artemis on Mac OSX. I think it occurs at the "## make a project feff folder / unless ($just_parse) { }" block in sub import_atoms{}. In fact, I get a malloc 'out of memory' error:

*** malloc: vm_allocate(size=2147483648) failed (error code=3)
*** malloc[1056]: error: Can't allocate region
Out of memory!

somewhere in this block. I get a print statement after the Ifeffit::Path->new(), and it seemed to die at the line

(@autoparams = autoparams_define($id, $n_feff, 0)) if $config{autoparams}{do_autoparams};

which seemed odd. I haven't pursued it any more than that.

Running darwin's top, I do not see a noticeable change in memory usage when trying to import the Atoms.inp file. On my laptop, one run said there was 175M free before running Artemis and 140M after starting Artemis, but before opening an Atoms.inp file. A second attempt (stopping some processes, restarting X11) had 240M free before running Artemis, 200M after running Artemis, and the same crash on opening an Atoms.inp file.

I definitely don't see this problem on linux, even on machines with much less actual memory. So I think it's not actual memory usage. That leads me to suspect perl/Tk.

Any ideas, Bruce? Paul?

--Matt
Hi Matt,
> In fact, I get a malloc 'out of memory' error:
>
> *** malloc: vm_allocate(size=2147483648) failed (error code=3)
> *** malloc[1056]: error: Can't allocate region
> Out of memory!
me too!
> That leads me to suspect perl/Tk.
that is what I also suspected. I think there may be a need for some polishing of the Artemis-perl/Tk interface...

Stefano

--
____________________________________________
Stefano Ciurli
Professor of Chemistry
Department of Agro-Environmental Science and Technology
University of Bologna
Viale Giuseppe Fanin, 40
I-40127 Bologna, Italy
Phone: +39-051-209-6204
Fax: +39-051-209-6203

"Fatti non foste a viver come bruti, ma per seguir virtute e canoscenza"
Dante Alighieri - Inferno - Canto XXVI
Stefano, Matt,

Yesterday I finally got around to installing 10.3 on my G4. I'll try to take a look at this odd atoms problem today.

It seems that there is a simple work-around. If the problem involves running the subroutine that parses atoms, you can run atoms and feff outside of Artemis, then delete (or rename to something other than "atoms.inp") the atoms.inp file, then import the feff calculation into Artemis. If an atoms.inp file is not found when you import the feff calculation, the atoms tab will remain deactivated and the (apparently) troublesome sub will not be called.

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 405
Naval Research Laboratory            phone: (1) 202 767 2268
Washington DC 20375, USA             fax: (1) 202 767 4642

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage: http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
Bruce,
> SC> 4) Then there is the k-weight parameter to be changed for the background removal. ... What are the criteria for choosing this k-weight for the background?
>
> With noisy spectra, the high-energy portion of the data might be dominated by fluctuations that have nothing to do with the exafs. In that case a large k-weight for the background removal might be unduly influenced by stuff that is not the data you want to analyze but instead is detector problems, sample inhomogeneity problems, gremlins, crap, whatever ;-)
so I conclude that I should use a LOW k-weight in my case, right?

Stefano

--
____________________________________________
Stefano Ciurli
Professor of Chemistry
Department of Agro-Environmental Science and Technology
University of Bologna
Viale Giuseppe Fanin, 40
I-40127 Bologna, Italy
Phone: +39-051-209-6204
Fax: +39-051-209-6203

"Fatti non foste a viver come bruti, ma per seguir virtute e canoscenza"
Dante Alighieri - Inferno - Canto XXVI