[Ifeffit] Re: Ifeffit Digest, Vol 5, Issue 4

Grant Bunker bunker at biocat1.phys.iit.edu
Tue Jul 8 13:02:10 CDT 2003


In regard to global fitting, there are much better algorithms than a
simple grid search, something we started with back in the 70s. More recently I
have had some good results with a very simple genetic algorithm called
"differential evolution" by Storn and Price. It seems to be faster
than simulated annealing, and it has the advantage of being very
robust. It behaves well with hard constraints (e.g. integer
programming) included via penalty functions (or, if you like, terms
incorporating a priori knowledge, from a Bayesian point of view). It also
looks to me like it would be trivial to parallelize across multiple
processors.
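For concreteness, the Storn-Price scheme mentioned above can be sketched in a
few dozen lines of Python. This is a minimal DE/rand/1/bin variant under
illustrative control parameters; the function name and defaults are my own, not
anything from feffit:

```python
import random

def differential_evolution(objective, bounds, pop_size=30, mutation=0.7,
                           crossover=0.9, generations=250, seed=1):
    """A minimal DE/rand/1/bin scheme in the spirit of Storn and Price."""
    rng = random.Random(seed)
    ndim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct population members, none equal to i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(ndim)  # guarantee one mutant component
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < crossover or j == jrand:
                    v = pop[a][j] + mutation * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # hard bounds as a clip
                else:
                    trial.append(pop[i][j])
            tc = objective(trial)
            if tc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

A hard constraint can be folded in by having the objective return its value
plus a large penalty whenever the constraint is violated, which is the
penalty-function route described above.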

I think the easiest thing to implement in feffit, however, would be simply
to do a series of minimizations with (say) 1000 random starting points in
the parameter space. This would just entail wrapping the minimization in a
loop and adding a way to summarize the results.

Minimizations starting at different points may end up in the same or different
"attractors". If those correspond to bad fits, throw them out. If they are
adequate fits, keep them. Quite possibly more than one attractor could fit
the data adequately. A cluster analysis of the results should indicate whether
different solutions are equivalent (belong to the same attractor) or not.
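The restart-and-cluster loop just described can be sketched as follows,
assuming a generic local `minimize(objective, x0)` routine standing in for
feffit's fitter; the function and parameter names here are hypothetical:

```python
import random

def multistart(objective, minimize, bounds, n_starts=1000,
               fit_tol=1.0, cluster_tol=1e-2, seed=0):
    """Run a local minimizer from many random starting points, discard
    inadequate fits, and group the survivors into 'attractors' by proximity."""
    rng = random.Random(seed)
    clusters = []  # one (representative_x, best_cost, hits) per attractor
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x, cost = minimize(objective, x0)
        if cost > fit_tol:  # inadequate fit: throw it out
            continue
        for k, (rep, c, hits) in enumerate(clusters):
            # same attractor if every component is within cluster_tol
            if max(abs(a - b) for a, b in zip(x, rep)) < cluster_tol:
                clusters[k] = (rep, min(c, cost), hits + 1)
                break
        else:
            clusters.append((x, cost, 1))  # a new attractor
    return clusters
```

The summary is then just the list of clusters: how many distinct adequate
attractors were found, their best costs, and how often each was hit.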

I don't think this should be very hard to do. Of course, it is easy to suggest
that other people do the work, so I've been playing with these approaches
myself using Mathematica as my sandbox. The minimization code in
feffit would have to be robust enough that it would give up gracefully
(not crash) if it became numerically unstable.

I think it should be fast enough that a global solution could be obtained
in a couple of hours on a modern machine. If not, it could be done in an
intrinsically parallel way on multiple machines.

thanks - grant

On Tue, 8 Jul 2003 ifeffit-request at millenia.cars.aps.anl.gov wrote:

> Send Ifeffit mailing list submissions to
> 	ifeffit at millenia.cars.aps.anl.gov
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
> or, via email, send a message with subject or body 'help' to
> 	ifeffit-request at millenia.cars.aps.anl.gov
>
> You can reach the person managing the list at
> 	ifeffit-owner at millenia.cars.aps.anl.gov
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Ifeffit digest..."
>
>
> Today's Topics:
>
>    1. Re: ARE: [Ifeffit] Re: Ifeffit Digest, Vol 5, Issue 1
>       (Bruce Ravel)
>    2. Re: Estimating error in XAS (Scott Calvin)
>    3. Re: Re: Estimating error in XAS (Matt Newville)
>    4. Re: Estimating error in XAS (Scott Calvin)
>    5. Scanning of "Potential Surfaces" in EXAFS (Norbert Weiher)
>    6. Re: Scanning of "Potential Surfaces" in EXAFS (Bruce Ravel)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 7 Jul 2003 13:11:08 -0400
> From: Bruce Ravel <ravel at phys.washington.edu>
> Subject: Re: ARE: [Ifeffit] Re: Ifeffit Digest, Vol 5, Issue 1
> To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
> Message-ID: <200307071311.08259.ravel at phys.washington.edu>
> Content-Type: text/plain;  charset="iso-8859-1"
>
> On Monday 07 July 2003 12:45 pm, Matt Newville wrote:
> > Anyway, I think using the 'epsilon_k' that chi_noise() estimates as
> > the noise in chi(k) is a fine way to do a weighted average of data.
> > It's not perfect, but neither is anything else.
>
> Exactly right!  As Matt explained, there are reasons to believe that
> the measurement uncertainty is dominated by things that Parseval's
> theorem doesn't address and that measuring those problems is hard.
>
> As Matt said, weighting by 1/chi_noise() is OK.  As Scott said,
> weighting uniformly is OK.  Those two choices are the most OK I
> can think of, so Athena will let you choose between them.
>
> While I am of the chorus of people agreeing with Grant that mu is not
> equal to <If>/<I0>, I can see no reason why Athena should actively
> prevent the user from looking at <If> or <I0>, if that is what he
> wants to do.  The *current* version of Athena prevents this, but
> that's a bug not a feature! ;-)
>
> B
>
>
>
> --
>  Bruce Ravel  ----------------------------------- ravel at phys.washington.edu
>  Code 6134, Building 3, Room 222
>  Naval Research Laboratory                          phone: (1) 202 767 5947
>  Washington DC 20375, USA                             fax: (1) 202 767 1697
>
>  NRL Synchrotron Radiation Consortium (NRL-SRC)
>  Beamlines X11a, X11b, X23b, X24c, U4b
>  National Synchrotron Light Source
>  Brookhaven National Laboratory, Upton, NY 11973
>
>  My homepage:    http://feff.phys.washington.edu/~ravel
>  EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 7 Jul 2003 13:15:45 -0400
> From: Scott Calvin <scalvin at anvil.nrl.navy.mil>
> Subject: [Ifeffit] Re: Estimating error in XAS
> To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
> Message-ID: <p05210602bb2f56d34214@[132.250.126.21]>
> Content-Type: text/plain; charset="us-ascii"
>
> In the anecdotal category, I've seen some fairly bizarre high-r
> behavior on beamline X23B at the NSLS, which I tentatively attribute
> to feedback problems. That line, as many of you know, can be a little
> pathological at times. I've also collected some data to examine this
> issue on X11A, a more conventional beamline, but have never gotten
> around to looking at it--I hope to soon.
>
> In any case, I don't really understand the logic of averaging scans
> based on some estimate of the noise. For that to be appropriate,
> you'd have to believe there was some systematic difference in the
> noise between scans. What's causing that difference, if they're
> collected on the same beamline on the same sample? (Or did I
> misunderstand Michel's comment--was he talking about averaging data
> from different beamlines or something?) If there are no systematic
> changes during data collection, then the noise level should be the
> same, and any attempt to weight by some proxy for the actual noise
> will actually decrease the statistical content of the averaged data
> by overweighting some scans (i.e. random fluctuations in the
> quantity being used to estimate the uncertainty will cause some scans
> to dominate the average more heavily, which is not ideal if the
> actual noise level is the same). If, on the other hand, there is a
> systematic difference between subsequent scans, it is fairly unlikely
> to be "white," and thus will not be addressed by this scheme anyway.
> Perhaps one of you can give me examples where this kind of variation
> in data quality is found.
>
> So right now I don't see the benefit to this method. Particularly if
> it's automated, I hesitate to add hidden complexity to my data
> reduction without a clear rationale for it.
>
> --Scott Calvin
> Naval Research Lab
> Code 6344
>
> Matt Newville wrote:
>
> >
> >I'd assumed that vibrations would actually cause fairly white noise,
> >though feedback mechanisms could skew towards high frequency. Other
> >effects (temperature/pressure/flow fluctuations in ion chamber gases
> >and optics) might skew toward low-frequency noises.  I have not seen
> >many studies of vibrations, feedback mechanism, or other
> >beamline-specific effects on data quality, and none discussing the
> >spectral weight of the beamline-specific noise.
> >
> >On the other hand, all data interpolation schemes do some smoothing,
> >which suppresses high frequency components.  And it usually appears
> >that the high-frequency estimate of the noise from chi_noise() or
> >Feffit gives an estimate that is significantly *low*.
> >
> >Anyway, I think using the 'epsilon_k' that chi_noise() estimates as
> >the noise in chi(k) is a fine way to do a weighted average of data.
> >It's not perfect, but neither is anything else.
> >
> >--Matt
> >
> >_______________________________________________
> >Ifeffit mailing list
> >Ifeffit at millenia.cars.aps.anl.gov
> >http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://millenia.cars.aps.anl.gov/pipermail/ifeffit/attachments/20030707/9e2d0f0c/attachment-0001.htm
>
> ------------------------------
>
> Message: 3
> Date: Mon, 7 Jul 2003 16:07:20 -0500 (CDT)
> From: Matt Newville <newville at cars.uchicago.edu>
> Subject: Re: [Ifeffit] Re: Estimating error in XAS
> To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
> Message-ID:
> 	<Pine.LNX.4.44.0307071217020.17053-100000 at millenia.cars.aps.anl.gov>
> Content-Type: TEXT/PLAIN; charset=US-ASCII
>
> Hi Scott,
>
> On Mon, 7 Jul 2003, Scott Calvin wrote:
>
> > In any case, I don't really understand the logic of averaging scans
> > based on some estimate of the noise. For that to be appropriate,
> > you'd have to believe there was some systematic difference in the
> > noise between scans. What's causing that difference, if they're
> > collected on the same beamline on the same sample? (Or did I
> > misunderstand Michel's comment--was he talking about averaging data
> > from different beamlines or something?) If there are no systematic
> > changes during data collection, then the noise level should be the
> > same, and any attempt to weight by some proxy for the actual noise
> > will actually decrease the statistical content of the averaged data
> > by overweighting some scans (i.e. random fluctuations in the
> > quantity being used to estimate the uncertainty will cause some scans
> > to dominate the average more heavily, which is not ideal if the
> > actual noise level is the same). If, on the other hand, there is a
> > systematic difference between subsequent scans, it is fairly unlikely
> > to be "white," and thus will not be addressed by this scheme anyway.
> > Perhaps one of you can give me examples where this kind of variation
> > in data quality is found.
>
> Using a solid-state detector with low-concentration samples, it's
> common to do a couple scans counting for a few seconds per point,
> then more scans counting for longer time (say, first 3sec/pt then
> 10sec/pt).  The data is typically better with longer counting time
> (not always by square-root-of-time), but you want to use all the
> noisy data you have.  In such a case, a weighted average based on
> after-the-fact data quality would be useful.
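The weighted average described here can be sketched in a few lines, assuming
each scan comes with a per-scan noise estimate eps_i (e.g. the epsilon_k from
chi_noise()) and using standard inverse-variance weights w_i = 1/eps_i**2; the
function name is illustrative:

```python
def weighted_average(spectra, noise):
    """Average several chi(k) scans, sampled on a common grid, with
    inverse-variance weights w_i = 1/eps_i**2, where eps_i is the
    per-scan noise estimate (e.g. chi_noise()'s epsilon_k)."""
    weights = [1.0 / eps ** 2 for eps in noise]
    wsum = sum(weights)
    npts = len(spectra[0])
    avg = [sum(w * scan[j] for w, scan in zip(weights, spectra)) / wsum
           for j in range(npts)]
    eps_avg = wsum ** -0.5  # effective noise of the weighted average
    return avg, eps_avg
```

With equal noise estimates this reduces to the simple average, so the two
choices only differ when the per-scan estimates actually differ.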
>
> > So right now I don't see the benefit to this method. Particularly
> > if it's automated, I hesitate to add hidden complexity to my data
> > reduction without a clear rationale for it.
>
> I can't see a case where it would be obviously better to do the
> simple average than to average using weights of the estimated noise.
> For very noisy data (such as the 3sec/pt scan and the 10sec/pt
> scan), the simple average is almost certainly worse.  Anyway, it
> seems simple enough to allow either option and/or overriding the
> high-R estimate of the noise.
>
> Maybe I'm just not understanding your objection to using the high-R
> estimate of data noise, but I don't see how a weighted average would
> "actually decrease the statistical content of the averaged data"
> unless the high-R estimate of noise is pathologically *way* off,
> which I don't think it is (I think it's generally a little low but
> reasonable).  If two spectra give different high-R estimates of
> noise, saying they actually have the same noise seems pretty bold.
> Of course, one can assert that the data should be evenly weighted or
> assert that they should be weighted by their individually estimated
> noise.  Either way, something is being asserted about different
> measurements in order to treat them as one.  Whatever average is
> used, it is that assertion that is probably the most questionable,
> complex, and in need of a rationale. So maybe it is better to make
> the assertion more explicit and provide options for how it is done.
>
> --Matt
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 7 Jul 2003 18:43:29 -0400
> From: Scott Calvin <scalvin at anvil.nrl.navy.mil>
> Subject: [Ifeffit] Re: Estimating error in XAS
> To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
> Message-ID: <p0521060bbb2fa7121b77@[132.250.126.21]>
> Content-Type: text/plain; charset="us-ascii" ; format="flowed"
>
> Matt Newville writes:
>
> >
> >Using a solid-state detector with low-concentration samples, it's
> >common to do a couple scans counting for a few seconds per point,
> >then more scans counting for longer time (say, first 3sec/pt then
> >10sec/pt).  The data is typically better with longer counting time
> >(not always by square-root-of-time), but you want to use all the
> >noisy data you have.  In such a case, a weighted average based on
> >after-the-fact data quality would be useful.
>
>
> This was the kind of example I was looking for. I agree that in this
> case it makes sense to use the high-R noise estimate for averaging,
> and thus it's useful to have this option implemented in software.
>
> --Scott
>
> ------------------------------
>
> Message: 5
> Date: Tue, 8 Jul 2003 13:24:52 +0200
> From: Norbert Weiher <weiher at tech.chem.ethz.ch>
> Subject: [Ifeffit] Scanning of "Potential Surfaces" in EXAFS
> To: ifeffit at millenia.cars.aps.anl.gov
> Message-ID: <200307081324.52430.weiher at tech.chem.ethz.ch>
> Content-Type: text/plain;  charset="iso-8859-1"
>
> Hi folks,
>
> First of all, thanks to all of you who attended XAFS XII for an interesting
> conference with lots of input and new ideas.
>
> But now to my question:
>
> I have been thinking for a long time about implementing some kind of
> potential-surface scanning algorithm for EXAFS fitting. This procedure has
> long been known in ab initio codes like GAUSSIAN, for example, and can be
> used to check whether you are really in a global minimum on the potential
> surface. Since EXAFS analysis is ultimately a search for a global minimum in
> the parameter space, but you never know whether you really end up there, I
> was planning to do this kind of investigation.
>
> However, before I start off with wild coding :) I would like more opinions
> on this topic. Here are my main arguments for this kind of algorithm:
>
> 1) Computers are quite fast now - and ifeffit is also really fast in
> computing the fit quality if you do not guess any variable (which you don't
> need in this case, as you vary the parameters yourself).
>
> 2) In cases where you would expect large correlations between certain
> variables (e.g. when you have overlapping shells at nearly the same
> distance), one could systematically investigate the influence of small
> changes in the parameter space on the fit.
>
> That's it - now I am really keen on knowing what you think of this idea.
>
> Cheers,
>
> Norbert
> --
> Dr. rer. nat. Norbert Weiher (norbertweiher at yahoo.de)
> Laboratory for Technical Chemistry - ETH Hönggerberg
> HCI E 117 - 8093 Zürich - Phone: +41 1 63 3 48 32
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 8 Jul 2003 09:32:13 -0400
> From: Bruce Ravel <ravel at phys.washington.edu>
> Subject: Re: [Ifeffit] Scanning of "Potential Surfaces" in EXAFS
> To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
> Message-ID: <200307080932.13185.ravel at phys.washington.edu>
> Content-Type: text/plain;  charset="iso-8859-1"
>
> On Tuesday 08 July 2003 07:24 am, Norbert Weiher wrote:
> > I have been thinking for a long time about implementing some kind of
> > potential-surface scanning algorithm for EXAFS fitting. This procedure has
> > long been known in ab initio codes like GAUSSIAN, for example, and can be
> > used to check whether you are really in a global minimum on the potential
> > surface. Since EXAFS analysis is ultimately a search for a global minimum
> > in the parameter space, but you never know whether you really end up
> > there, I was planning to do this kind of investigation.
> >
> > However, before I start off with wild coding :) I would like more opinions
> > on this topic. Here are my main arguments for this kind of algorithm:
> >
> > 1) Computers are quite fast now - and ifeffit is also really fast in
> > computing the fit quality if you do not guess any variable (which you
> > don't need in this case, as you vary the parameters yourself).
> >
> > 2) In cases where you would expect large correlations between certain
> > variables (e.g. when you have overlapping shells at nearly the same
> > distance), one could systematically investigate the influence of small
> > changes in the parameter space on the fit.
>
> This is an excellent idea and, in fact, is among my long-range plans
> for Artemis.
>
> You might take a look at Biochemistry 35 (1996) pp. 9014-9023 and
> other papers by the same authors for an example of what I have in
> mind.  In that example, they raster through a plane of two variables
> and fit the remaining variables.  The end product is a contour plot of
> chi-square in the plane of the two rastered variables.
>
> Another possibility would be to start at Ifeffit's best fit and raster
> by hand through as many variables as you like, saving a matrix where
> the abscissae are the parameters and the values of the matrix elements
> are chi-square.  You could then take cuts through this matrix to
> explore the multi-variate parameter space.
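The rastering described here can be sketched as follows; `chi2` is a
hypothetical callable that runs the fit with the two rastered parameters held
fixed and returns the resulting chi-square, and the helper names are my own:

```python
def chi2_surface(chi2, p1_values, p2_values):
    """Evaluate chi2(p1, p2) on a grid of two rastered parameters.
    The resulting matrix can be fed to any contour-plot routine, and
    cuts through it explore the two-parameter space."""
    return [[chi2(p1, p2) for p2 in p2_values] for p1 in p1_values]

def frange(lo, hi, n):
    """n evenly spaced raster values from lo to hi inclusive."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]
```

For more than two variables the same loop generalizes to a multi-dimensional
matrix indexed by the parameter grids, exactly as described above.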
>
> In any case, it would not be a difficult programming chore to use
> perl, python, or C to walk through the parameters and organize the
> output and to use ifeffit to do the heavy lifting.
>
> Let us know how it works out for you, Norbert.
> B
>
>
>
> --
>  Bruce Ravel  ----------------------------------- ravel at phys.washington.edu
>  Code 6134, Building 3, Room 222
>  Naval Research Laboratory                          phone: (1) 202 767 5947
>  Washington DC 20375, USA                             fax: (1) 202 767 1697
>
>  NRL Synchrotron Radiation Consortium (NRL-SRC)
>  Beamlines X11a, X11b, X23b, X24c, U4b
>  National Synchrotron Light Source
>  Brookhaven National Laboratory, Upton, NY 11973
>
>  My homepage:    http://feff.phys.washington.edu/~ravel
>  EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
>
>
> ------------------------------
>
> _______________________________________________
> Ifeffit mailing list
> Ifeffit at millenia.cars.aps.anl.gov
> http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
>
>
> End of Ifeffit Digest, Vol 5, Issue 4
> *************************************
>



