[Ifeffit] Autobk parameters

Bruce Ravel ravel at phys.washington.edu
Wed May 26 15:56:37 CDT 2004


Okee dokee, a few more thoughts after Matt's and Shelly's posts...

SC> 2) The next parameter is Rbkg. I have read the literature about this
SC> parameter, especially the 1993  paper by Matt. I sort of understood
SC> the rationale and method, but I also gathered that the default
SC> parameter of 1.0 can be changed. What are the criteria for changing
SC> this? I tend to keep it to the default value, but I feel a bit uneasy
SC> about not being able to control this parameter intelligently.

Unfortunately there is not a really good rule of thumb.  One might say
"half the distance to the first peak", but that's not really a very
good rule.  The issue is that the choice of Rbkg can have a profound
impact on the low frequency Fourier components.  (Try setting Rbkg to
an absurdly large value, say 2 or 3, and see what damage that does to
your data!)

I would say the real answer is that the choice of Rbkg shouldn't be
too strongly correlated with the things you are trying to measure --
N, delta_R, sigma^2.  Once you have set up your fitting model in
Artemis, a little experiment to try is to save chi(k) using several
different values of Rbkg and see how the answers change when you do
the fits.

To completely make up an example, suppose that you save chi(k) for
Rbkg=0.75 and 0.95, then do the fits.  If the best fit values of N,
delta_R and so on are the same within their error bars, then it
doesn't matter which Rbkg you use in Athena.  In that case you would
probably use the larger one because it makes the "prettier"
picture in terms of removing very low frequency Fourier components.
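
If you want to automate that little experiment, here is a minimal
sketch using xraylarch, Matt's python successor to ifeffit.  The file
name, column labels, and Rbkg values are all made up -- adjust them to
your own data:

   import numpy as np
   from larch.io import read_ascii
   from larch.xafs import autobk

   data = read_ascii('mydata.xmu', labels='energy mu')

   for rbkg in (0.75, 0.85, 0.95):
       # remove the background; autobk puts k and chi into the group
       autobk(data.energy, data.mu, group=data, rbkg=rbkg, kweight=1)
       # save chi(k) so each file can be fed to the same fit in Artemis
       np.savetxt('chi_rbkg_%.2f.dat' % rbkg,
                  np.column_stack((data.k, data.chi)))

If the best fit values from those files agree within their error bars,
your answer does not depend on the choice of Rbkg.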

SC> 3) I understood from the paper by Matt (and from the "Using Athena"
SC> manual by Bruce) that one could use a "standard" to estimate the
SC> level of leakage into the small chi(R) region (apodization effects
SC> due to Fourier window filtering). The manual states that one can read
SC> in a chi.dat file produced with feff. However, I do not understand
SC> how to build the feff.inp for feff and produce a useful chi.dat to
SC> use as a "standard". Please help?

In your follow-up to my original answer to this question, you made it
plain that the role of the standard is still not clear to you.

As Matt said, the standard need not be perfect.  Let me explain why.
The autobk algorithm works by "optimizing" the low frequency Fourier
components in order to select the correct spline function.  In the
absence of a standard, "optimize" means "minimize".  That is, without
a standard, autobk finds the spline that makes the chi(R) spectrum as
small as possible between 0 and Rbkg.

Again as Matt said, low Z ligands often benefit from using a standard.
The reason is that low Z ligands tend to have quite short distances
resulting in a peak in chi(R) that has a significant tail into the
region below Rbkg.  In that case, *minimizing* the components between
0 and Rbkg is a poor idea because they are *supposed* to be non-zero.
A standard then is used to tell the autobk algorithm what the
components between 0 and Rbkg are supposed to look like and the spline
is chosen to make the data look that way between 0 and Rbkg.  In that
context, the standard need not be perfect -- close should be good
enough.

Oh yeah.  The feff run should produce a file called "chi.dat".  That's
a good one to use as a standard.  Matt's ifeffit recipe for converting
a feffNNNN.dat file into a chi(k) works, too.
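
For concreteness, here is a sketch of feeding that chi.dat to the
background removal, again using xraylarch.  Its autobk accepts a
standard through the k_std and chi_std arguments; treat the exact
argument names, file names, and column labels as assumptions:

   from larch.io import read_ascii
   from larch.xafs import autobk

   # chi.dat from the feff run: first column k, second column chi
   std  = read_ascii('chi.dat',    labels='k chi')
   data = read_ascii('mydata.xmu', labels='energy mu')

   # the spline is now chosen so that chi(R) below Rbkg resembles the
   # standard rather than being pushed toward zero
   autobk(data.energy, data.mu, group=data, rbkg=1.0, kweight=1,
          k_std=std.k, chi_std=std.chi)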

SC> 4) Then there is the k-weight parameter to be changed for the
SC> background removal. The default value for this is 1, but higher
SC> values are allowed. I noticed that increasing the k-weight for
SC> background removal produces a curve in the chi(E) which more and more
SC> appears to disregard the edge peak, resembling more and more a
SC> smoothly monotonically increasing curve. Consequently, the chi(k)
SC> changes depending on this parameter, and I again start to be worried
SC> about the following fit of it. What are the criteria to choose this
SC> k-weight for the background?

With noisy spectra, the high energy portion of the data might be
dominated by fluctuations that have nothing to do with the exafs.  In
that case, a large k-weight means the background removal might be
unduly influenced by stuff that is not the data you want to analyze
but is instead detector problems, sample inhomogeneity problems,
gremlins, crap, whatever ;-)
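
An easy way to see this for yourself is to overlay chi(k) extracted
with a few different background k-weights.  This sketch uses the same
assumed file and labels as above:

   import matplotlib.pyplot as plt
   from larch.io import read_ascii
   from larch.xafs import autobk

   data = read_ascii('mydata.xmu', labels='energy mu')
   for kw in (1, 2, 3):
       autobk(data.energy, data.mu, group=data, rbkg=1.0, kweight=kw)
       plt.plot(data.k, data.k**2 * data.chi,
                label='bkg kweight=%d' % kw)
   plt.xlabel('k (1/Ang)')
   plt.ylabel('k^2 * chi(k)')
   plt.legend()
   plt.show()

With noisy data you may see the high k-weight curves pulled around by
the noisy end of the spectrum.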

SC> 6) Spline range: this is another important issue, I think. In the
SC> paper by Matt it is stated that "standard practice ... has been to
SC> ignore everything below an energy typically 30 eV above the E0" and
SC> that Autobk is an advantage because it can read in data very close to
SC> the E0. My question is: the default value for k in the spline range
SC> is set to 0.5 inverse Angstroms (0.952 eV). What are the criteria
SC> to change this
SC> default value? Also, is there any relationship between this range and

Matt said:

MN> I typically use kmin=0, kmax=last_data_point, dk=0 for the spline
MN> (which are the defaults).  Bruce seems to prefer kmin=0.5 or so:
MN> it shouldn't make a difference.

The reason I chose to make 0.5 the default is so that Athena will
stand a better chance of dealing well with data that have a large
white line.  For data like that, spline_kmin=0.0 often leads to a poor
background removal because the spline just doesn't have the freedom to
deal with such a quickly changing part of the spectrum.  For many
materials 0.0 is probably a better choice, but in Athena I want the
default behavior to always be non-stupid.  ("Smart" is a bit too
difficult for me ;-)


SC> the range subsequently used for FT? My guess is that the k-min for FT
SC> should always be higher than the k-min for the spline range, but
SC> please comment on this.  Also, what are the criteria to set the

Well, the FT range must be equal to or smaller than the spline range,
but you probably already figured that out!  Other than that there is
no relationship.  It is certainly useful in practice for them not to
be the same.

Getting back to the problem of dealing with a white line, I often find
it useful to make spline_kmin 1 or even larger to avoid the white line
altogether.  That means that the FT range will also be smaller
resulting in less data for fitting, but it seems worth it.  Since it
is so hard to distinguish data from background under the white line,
it is often better to just avoid the problem altogether.
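
Here is what that looks like in practice, as a sketch with the same
assumed file; the particular numbers are made up:

   from larch.io import read_ascii
   from larch.xafs import autobk, xftf

   data = read_ascii('mydata.xmu', labels='energy mu')

   # start the spline range above the white line ...
   autobk(data.energy, data.mu, group=data, rbkg=1.0, kweight=1,
          kmin=1.0)

   # ... and keep the FT range inside the spline range
   xftf(data.k, data.chi, group=data, kmin=2.0, kmax=12.0,
        dk=1, kweight=2, window='hanning')
   # data.r and data.chir_mag now hold |chi(R)|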

SC> 7) for the FT parameters: Shelly's protocol to define the k-range to
SC> best calculate the chi(R) was clear and useful to me. However, I
SC> would like to know more about choosing the dk parameter and the
SC> window-type. I need some general criteria to choose between the
SC> various possibilities.  I noticed that the kaiser-bessel window is
SC> the default, but in the literature I almost invariably find the
SC> Hanning window. Please comment.

Matt summed this up.  (I liked the historical context!)  Here is a
talk that Shelly gave last year at the NSLS exafs course:
   http://cars9.uchicago.edu/xafs/NSLS_EDCA/July2003/Kelly.pdf
Check out page 21.  It demonstrates really clearly how little the
choice of window shape matters.

As for dk, I don't have a good rule of thumb.  I generally choose a
smallish number.
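
If you want to convince yourself about the window shape and dk, it is
easy to overlay |chi(R)| for a couple of choices.  Another sketch,
with the same assumptions as above:

   import matplotlib.pyplot as plt
   from larch.io import read_ascii
   from larch.xafs import autobk, xftf

   data = read_ascii('mydata.xmu', labels='energy mu')
   autobk(data.energy, data.mu, group=data, rbkg=1.0, kweight=1)

   for win, dk in (('hanning', 1), ('kaiser', 4)):
       xftf(data.k, data.chi, group=data, kmin=2, kmax=12,
            dk=dk, kweight=2, window=win)
       plt.plot(data.r, data.chir_mag, label='%s, dk=%g' % (win, dk))
   plt.xlabel('R (Ang)')
   plt.ylabel('|chi(R)|')
   plt.legend()
   plt.show()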



I wanted also to say a few words about your question regarding peak
fitting.  In a numerical sense, peak fitting is harder than fitting
exafs spectra.  An exafs spectrum is basically one or more damped sine
waves.  If you try to fit that with any ol' damped sine wave that's
not too far out of phase, you won't go too far wrong.

The same is not true of fitting peak shapes to xanes data.  That kind
of non-linear fit is notoriously unstable.  In my experience, it can
be very difficult to fit the centroid of a peak shape to a feature in
a xanes spectrum.  That is why Athena does not flag the centroid as a
variable by default.

I am not completely clear on what's going on with your data, but it
seems as though you want to fit a very tiny feature at the very
beginning of the data. The problem there is that, if the pre-edge is
not completely flat, the lineshape used for the edge step might pass
through the peak or fall well below it.  If so, the peak shape you
are trying to use to fit that feature might be poorly defined
numerically.

I would recommend trying to adjust the parameters by hand until you
get a fairly decent representation of the data, then let some of the
parameters float in a fit to start understanding where the parameters
will want to float off to.
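
Here is one way to set up that hand-then-float workflow, sketched with
the generic lmfit fitting library rather than Athena itself.  The
model is an error-function edge step plus a gaussian peak; the file
name, energies, and starting guesses are all invented:

   import numpy as np
   from lmfit.models import GaussianModel, StepModel

   energy, mu = np.loadtxt('xanes.dat', unpack=True)

   model = (StepModel(form='erf', prefix='edge_') +
            GaussianModel(prefix='peak_'))
   params = model.make_params(edge_amplitude=1.0, edge_center=7125.0,
                              edge_sigma=2.0, peak_amplitude=0.1,
                              peak_center=7114.0, peak_sigma=1.0)

   # first pass: hold the centroid fixed, as Athena does by default
   params['peak_center'].set(vary=False)
   result = model.fit(mu, params, x=energy)

   # once the rest looks sensible, let the centroid float and refit
   result.params['peak_center'].set(vary=True)
   result = model.fit(mu, result.params, x=energy)
   print(result.fit_report())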

If you try to fit the whole xanes spectrum in one quick swoop, though,
you are indeed likely to get a poor fit.



SC> I feel like a naive cook who is afraid of making mistakes, and
SC> therefore reads a recipe very carefully, not to mix the wrong
SC> ingredients in the wrong amounts. So, again, please bear with me.

Well, I have enjoyed cooking my entire life and I like to think I am
not so bad at it.  In my experience, the one way to truly learn how to
make a really good dish is to make some bad ones and think about what
went wrong.  You'll know when you're getting there because the dish
will start to taste good.

The same applies to using the codes.  Poke at the buttons and don't
fret if weird stuff happens.  Think about what it means and why the
weird results are inconsistent with what you know about your sample.
Eventually the results from the analysis will start to make sense in
the context of what you know about your samples.  Mmmmmm.... that's
good analysis!

B

-- 
 Bruce Ravel  ----------------------------------- ravel at phys.washington.edu
 Code 6134, Building 3, Room 405
 Naval Research Laboratory                          phone: (1) 202 767 2268
 Washington DC 20375, USA                             fax: (1) 202 767 4642

 NRL Synchrotron Radiation Consortium (NRL-SRC)
 Beamlines X11a, X11b, X23b
 National Synchrotron Light Source
 Brookhaven National Laboratory, Upton, NY 11973

 My homepage:    http://feff.phys.washington.edu/~ravel 
 EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/



