[Ifeffit] Scanning of "Potential Surfaces" in EXAFS

Matt Newville newville at cars.uchicago.edu
Fri Jul 11 08:08:18 CDT 2003


On the 'Fitting Potential Surface' thread, I agree with most of what
the other responders have said, but wanted to add a few comments of
my own.

First, I think that automating a broader search of parameter space
is a fine thing to do.  I wouldn't bother with more complex
algorithms until the existing least-squares approach with multiple
'random but reasonable' starting guesses has been shown to fail in
some important cases.
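
To be concrete, the driver for that kind of multi-start refinement
can be very small.  Here's a sketch in Python with numpy/scipy
rather than Ifeffit syntax; the residual function, the parameter
box, and the number of starts are all things you'd supply yourself:

    import numpy as np
    from scipy.optimize import least_squares

    def multi_start_fit(residual, lo, hi, nstarts=20, seed=0, **kws):
        """Run least_squares from several random starting guesses drawn
        uniformly inside the box [lo, hi]; return the converged fits
        sorted by final cost (best minimum first)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
        fits = []
        for _ in range(nstarts):
            p0 = lo + (hi - lo) * rng.random(lo.size)   # 'random but reasonable'
            fits.append(least_squares(residual, p0, bounds=(lo, hi), **kws))
        return sorted(fits, key=lambda f: f.cost)

If the sorted costs all agree and the best-fit vectors match, every
random start fell into the same minimum; if not, you've found your
multiple minima with nothing fancier than plain least-squares.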

In fact, though I think this is a fine thing to do, I don't find it
to be too critical for most EXAFS problems.  The truth is that a
first-shell EXAFS fit is a pretty simple and robust problem for
non-linear least-squares.  Fitting N, R, sigma2, and E0 basically
boils down to fitting a damped sine wave, and we have a very good
starting guess for its period.  So it's hard to go too wrong unless
E0 goes so far off that it jumps a period, and that's not hard to
spot (though an automated "warning flag" might be nice).  Fitting a
pair of Lorentzians to NMR, XANES, or XRD data is much less robust,
and more likely to go completely wacky because of "multiple minima"
or the "shallow areas" of parameter space.

EXAFS fits are easy enough that for many years the simplified,
linearized version of the problem (the "log-ratio" method) was
preferred by many experts in the field, and it can work very well
for many problems.  Just the fact that good EXAFS results can be
obtained _without_ any least-squares refinement at all indicates
the robustness of the problem.
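
For anyone who hasn't seen it, the log-ratio (or ratio) method
compares the Fourier-filtered first-shell amplitude and phase of an
unknown against those of a standard, and reads the differences in
sigma2 and R straight from two straight-line fits.  A minimal sketch
in Python, assuming the filtered amplitudes and phases have already
been extracted:

    import numpy as np

    def log_ratio(k, amp_unk, phase_unk, amp_std, phase_std):
        """Ratio-method analysis of two isolated, Fourier-filtered
        first-shell signals.  ln(A_unk/A_std) vs k^2 has slope
        -2*delta_sigma2; (phi_unk - phi_std) vs k has slope 2*delta_R."""
        slope_a, icept_a = np.polyfit(k**2, np.log(amp_unk / amp_std), 1)
        slope_p, _ = np.polyfit(k, phase_unk - phase_std, 1)
        return -slope_a / 2.0, slope_p / 2.0, icept_a   # d_sigma2, d_R, ln of (N/R^2) ratio

No refinement at all, just two linear fits, and for a clean single
shell it works remarkably well.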

Once multiple overlapping shells and multiple-scattering become
important, the robustness and guaranteed uniqueness fade away, and
more elaborate fitting methods are needed.  And multiple minima
become more likely.

On making contour plots: Personally, I don't get too excited about
these.  If there's one minimum, all I get from a contour plot are a)
the best-fit values, b) the uncertainties in the values, and c) the
correlation between the variables.  That is, I get 5 numbers that I
already knew.  If there is more than one statistically significant
minimum, the contour plot tells me nothing except that there are 2
minima (which is again something that I could have figured out with
a lot fewer calculations!).  Maybe the contour plot helps some
people, and many people seem to like showing them, but I've never
seen the point.  I'm willing to accept that I might be in the
extreme minority on this point.
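
To spell out the 'five numbers' point: in the single-minimum case
those numbers already define the contour you would draw.  A sketch
of the delta-chi-square = 1 ellipse they imply for a pair of
variables (quadratic approximation; purely illustrative):

    import numpy as np

    def confidence_ellipse(best, sigmas, correl, npts=200):
        """Points on the delta-chi-square = 1 contour implied by two
        best-fit values, their uncertainties, and their correlation
        (quadratic approximation to the chi-square surface)."""
        cov = np.array([[sigmas[0]**2,                   correl * sigmas[0] * sigmas[1]],
                        [correl * sigmas[0] * sigmas[1], sigmas[1]**2]])
        t = np.linspace(0.0, 2.0 * np.pi, npts)
        circle = np.vstack((np.cos(t), np.sin(t)))
        return np.asarray(best, dtype=float)[:, None] + np.linalg.cholesky(cov) @ circle

When the chi-square surface really is quadratic near the minimum,
the full map just reproduces this ellipse, which is the sense in
which the plot repeats numbers the fit already reported.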

I hope that doesn't discourage you too much; it's just one opinion
on where priorities could be placed....  Anyway, these are all good
things to work on.

Now, onto some of the nuts and bolts:

Like the others, I'd highly recommend starting with the 'random but
reasonable' starting guesses, and letting feffit() find the minimum
(the "attractor") for each guess.  The reported correlations would
give statistics on the contour of parameter space near each minimum.
This would be fairly simple to implement.
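
Once the converged fits are in hand, sorting them into distinct
attractors is also straightforward.  A sketch, following on from the
multi-start driver above (the tolerance is a placeholder; a real one
should probably be tied to each variable's reported uncertainty):

    import numpy as np

    def group_minima(fits, atol=1.e-3):
        """Group converged fits (e.g. from multi_start_fit above) into
        distinct minima ('attractors') by comparing best-fit vectors.
        A real tolerance should reflect each variable's uncertainty."""
        groups = []
        for fit in fits:
            for g in groups:
                if np.allclose(fit.x, g[0].x, rtol=0.0, atol=atol):
                    g.append(fit)        # same attractor as an earlier fit
                    break
            else:
                groups.append([fit])     # a new, distinct minimum
        return groups

One group means the problem behaved like the robust first-shell case
above; more than one means the broader search earned its keep.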

Norbert wrote:

> 1) Computer power is quite fast now - and ifeffit is also really
> fast in computing the fit quality if you do not guess any variable
> (which you don't need in this case as you vary the parameters on
> your own).

It could possibly be made faster at this, too.  That is, a
non_feffit() command could generate the 'chi-square' of (data-model)
without doing a fit.  Another possibility would be for feffit() to
skip the calculation of correlations and just give the best-fit
values and chi-square.  If you're mapping out parameter space
yourself, you may not need the uncertainties found this way (though
they might be a good way to estimate how big your steps in each
parameter should be!).

Also, Ifeffit's feffit() is slower than Feffit for at least one
reason that, while normally a "very good" choice, could possibly be
relaxed: in Ifeffit, all FTs use a grid of 2048 elements at a
spacing of 0.05 Ang^-1, while in Feffit the fits are done with
1024-element arrays, and the difference in speed is noticeable
(well, ~2x).  In the old days this was important, but machines are
fast enough now that it normally doesn't matter.  But if you're
beating on these problems for hours, it might be worth the effort to
allow this speed-up in Ifeffit.
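
Just to make the 'evaluate but don't fit' idea concrete, here's what
that kind of evaluation looks like in Python terms (the model
function and the uncertainty eps are stand-ins; the toy
chi_single_shell above would do for the model):

    import numpy as np

    def misfit(params, k, chi_data, model, eps=1.0):
        """Chi-square-like sum of squares of (data - model) at fixed
        parameter values: no refinement, no correlations, just one
        evaluation that can be looped over a grid of parameters."""
        resid = (chi_data - model(k, *params)) / eps
        return np.dot(resid, resid)

Looping something like that over a grid of parameter values is
really all that mapping parameter space by hand requires; the full
fit machinery only matters when you want it to locate the minimum
for you.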

Grant wrote:
> I don't think this should be very hard to do. Of course it's easy
> to suggest that other people do the work, so I've been playing
> with these approaches myself using mathematica as my sandbox.  The 
> minimization code in feffit would have to be robust enough that it 
> would give up gracefully (not crash) if it became numerically 
> unstable. 

I think it (minpack) is robust enough.  I've never seen it crash,
and it rarely fails with a 'this problem is too hard' error unless
it's stuck somewhere where the variations in chi-square are pinned
to zero.  With Feffit this is rare unless N gets pulled to 0.
Feffit can itself give a 'this problem is too hard' error, but that
means 'took too many iterations'.  Anyway, minpack should be *very*
difficult to crash, but Ifeffit might need to give better messages
if it is stressed harder.

Let me know if any changes would help make this stuff easier to do.

