Regarding global fitting, there are much better algorithms than a simple grid search, which is where we started back in the '70s. More recently I have had some good results with a very simple genetic algorithm called "differential evolution," by Storn and Price. It seems to be faster than simulated annealing, and it has the advantage of being very robust. It behaves well with hard constraints (e.g. integer programming) included via penalty functions (or, if you like, terms incorporating a priori knowledge, from a Bayesian point of view); the first sketch below shows the idea. It also looks to me like it would be trivial to parallelize across multiple processors.

I think the easiest thing to implement in Feffit, however, would simply be to do a series of minimizations with (say) 1000 random starting points in the parameter space; the second sketch below shows the idea. This would just entail wrapping the minimization in a loop and adding a way to summarize the results. Minimizations starting at different points may end up in the same or different "attractors". If those correspond to bad fits, throw them out; if they are adequate fits, keep them. Quite possibly more than one attractor could fit the data adequately. A cluster analysis of the results should indicate whether different solutions are equivalent (belong to the same attractor) or not. I don't think this should be very hard to do.

Of course it is easy to suggest that other people do the work, so I've been playing with these approaches myself, using Mathematica as my sandbox. The minimization code in Feffit would have to be robust enough to give up gracefully (not crash) if it became numerically unstable. I think it should be fast enough that a global solution could be obtained in a couple of hours on a modern machine; if not, it could be done in an intrinsically parallel way on multiple machines.
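First sketch: a toy illustration of the penalty-function idea, using an off-the-shelf differential evolution driver (SciPy's, not anything in Feffit). The misfit function, the meaning of the parameters, and the penalty weight are all invented for illustration:

    import numpy as np
    from scipy.optimize import differential_evolution

    def chi2(p):
        # placeholder misfit; in a real fit this would be the
        # EXAFS sum of squared residuals
        return np.sum((p - np.array([1.3, 4.2])) ** 2)

    def penalty(p):
        # hard constraint folded in as a penalty term: here, a
        # coordination number p[1] that should be near an integer
        return 100.0 * (p[1] - round(p[1])) ** 2

    def objective(p):
        return chi2(p) + penalty(p)

    bounds = [(0.0, 3.0), (1.0, 8.0)]  # a priori parameter ranges
    result = differential_evolution(objective, bounds, seed=1)
    print(result.x, result.fun)

(SciPy's driver also accepts a workers argument, which speaks to the multiple-processor point.)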
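Second sketch: the multistart loop with a crude cluster analysis of the results. It reuses the same placeholder misfit; the local minimizer, the cutoff for an "adequate" fit, and the cluster radius are placeholders one would tune to the real problem:

    import numpy as np
    from scipy.optimize import minimize

    def objective(p):  # same placeholder misfit as in the first sketch
        return np.sum((p - np.array([1.3, 4.2])) ** 2)

    rng = np.random.default_rng(1)
    lo, hi = np.array([0.0, 1.0]), np.array([3.0, 8.0])

    fits = []
    for _ in range(1000):                     # random starting points
        p0 = rng.uniform(lo, hi)
        res = minimize(objective, p0, method="Nelder-Mead")
        if res.success and res.fun < 2.0:     # keep only adequate fits
            fits.append((res.fun, res.x))

    # crude clustering: report each attractor once, best fit first
    clusters = []
    for fun, x in sorted(fits, key=lambda t: t[0]):
        if all(np.linalg.norm(x - cx) > 0.1 for _, cx in clusters):
            clusters.append((fun, x))
    for fun, x in clusters:
        print(f"attractor at {x}, misfit {fun:.3g}")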
thanks - grant

On Tue, 8 Jul 2003 ifeffit-request@millenia.cars.aps.anl.gov wrote:
Today's Topics:
   1. Re: ARE: [Ifeffit] Re: Ifeffit Digest, Vol 5, Issue 1 (Bruce Ravel)
   2. Re: Estimating error in XAS (Scott Calvin)
   3. Re: Re: Estimating error in XAS (Matt Newville)
   4. Re: Estimating error in XAS (Scott Calvin)
   5. Scanning of "Potential Surfaces" in EXAFS (Norbert Weiher)
   6. Re: Scanning of "Potential Surfaces" in EXAFS (Bruce Ravel)
----------------------------------------------------------------------
Message: 1
Date: Mon, 7 Jul 2003 13:11:08 -0400
From: Bruce Ravel
Subject: Re: ARE: [Ifeffit] Re: Ifeffit Digest, Vol 5, Issue 1
To: XAFS Analysis using Ifeffit
Message-ID: <200307071311.08259.ravel@phys.washington.edu>
Content-Type: text/plain; charset="iso-8859-1"

On Monday 07 July 2003 12:45 pm, Matt Newville wrote:
> Anyway, I think using the 'epsilon_k' that chi_noise() estimates as the
> noise in chi(k) is a fine way to do weighted averages of data. It's not
> perfect, but neither is anything else.
Exactly right! As Matt explained, there are reasons to believe that the measurement uncertainty is dominated by things that Parseval's theorem doesn't address, and that measuring those problems is hard.
As Matt said, weighting by 1/chi_noise() is OK. As Scott said, weighting uniformly is OK. Those two choices are the most OK I can think of, so Athena will let you choose between them.
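For concreteness, the two choices amount to something like the sketch below, assuming each scan's chi(k) is already on a common k grid and 'eps' holds the chi_noise()-style noise estimates (the names are illustrative, not Athena's internals):

    import numpy as np

    def average_scans(chi, eps=None):
        # chi: (nscans, npts) array of chi(k) on a common k grid
        # eps: optional per-scan noise estimates, e.g. from chi_noise()
        if eps is None:                  # uniform weighting
            return chi.mean(axis=0)
        w = 1.0 / np.asarray(eps) ** 2   # weight each scan by 1/epsilon_k^2
        return (w[:, None] * chi).sum(axis=0) / w.sum()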
While I am of the chorus of people agreeing with Grant that mu is not equal to <If>/<I0>, I can see no reason why Athena should actively prevent the user from looking at <If> or <I0>, if that is what he wants to do. The *current* version of Athena prevents this, but that's a bug not a feature! ;-)
B
--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 222
Naval Research Laboratory       phone: (1) 202 767 5947
Washington DC 20375, USA        fax:   (1) 202 767 1697

NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b, X24c
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973

My homepage:    http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
------------------------------
Message: 2
Date: Mon, 7 Jul 2003 13:15:45 -0400
From: Scott Calvin
Subject: [Ifeffit] Re: Estimating error in XAS
To: XAFS Analysis using Ifeffit
Content-Type: text/plain; charset="us-ascii"

In the anecdotal category, I've seen some fairly bizarre high-R behavior on beamline X23B at the NSLS, which I tentatively attribute to feedback problems. That line, as many of you know, can be a little pathological at times. I've also collected some data to examine this issue on X11A, a more conventional beamline, but have never gotten around to looking at it--I hope to soon.
In any case, I don't really understand the logic of averaging scans based on some estimate of the noise. For that to be appropriate, you'd have to believe there was some systematic difference in the noise between scans. What's causing that difference, if they're collected on the same beamline on the same sample? (Or did I misunderstand Michel's comment--was he talking about averaging data from different beamlines or something?)

If there are no systematic changes during data collection, then the noise level should be the same, and any attempt to weight by some proxy for the actual noise will actually decrease the statistical content of the averaged data by overweighting some scans (i.e., random fluctuations in the quantity being used to estimate the uncertainty will cause some scans to dominate the average more heavily, which is not ideal if the actual noise level is the same). If, on the other hand, there is a systematic difference between subsequent scans, it is fairly unlikely to be "white," and thus will not be addressed by this scheme anyway. Perhaps one of you can give me examples where this kind of variation in data quality is found.
So right now I don't see the benefit to this method. Particularly if it's automated, I hesitate to add hidden complexity to my data reduction without a clear rationale for it.
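To put a toy number on the overweighting effect described above: a small simulation, assuming every scan really does have the same true noise level while the per-scan noise estimate fluctuates, shows the weighted average coming out systematically noisier than the uniform one (all the sizes here are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    nscans, npts, sigma, ntrials = 10, 50, 0.01, 2000
    mse_w = mse_u = 0.0
    for _ in range(ntrials):
        # identical true noise on every scan, zero underlying signal
        scans = rng.normal(0.0, sigma, (nscans, npts))
        eps = scans.std(axis=1, ddof=1)     # fluctuating noise estimate
        w = 1.0 / eps**2
        weighted = (w[:, None] * scans).sum(axis=0) / w.sum()
        uniform = scans.mean(axis=0)
        mse_w += np.mean(weighted**2)       # residual noise power
        mse_u += np.mean(uniform**2)
    print(f"weighted/uniform noise power: {mse_w / mse_u:.2f}")

The penalty is modest with this many points per scan, but it grows as the noise estimate gets shakier.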
--Scott Calvin
Naval Research Lab
Code 6344
Matt Newville wrote:
> I'd assumed that vibrations would actually cause fairly white noise,
> though feedback mechanisms could skew toward high-frequency noise. Other
> effects (temperature/pressure/flow fluctuations in ion chamber gases and
> optics) might skew toward low-frequency noise. I have not seen many
> studies of vibrations, feedback mechanisms, or other beamline-specific
> effects on data quality, and none discussing the spectral weight of the
> beamline-specific noise.
>
> On the other hand, all data interpolation schemes do some smoothing,
> which suppresses high-frequency components. And it usually appears that
> the high-frequency estimate of the noise from chi_noise() or Feffit
> gives an estimate that is significantly *low*.
>
> Anyway, I think using the 'epsilon_k' that chi_noise() estimates as the
> noise in chi(k) is a fine way to do weighted averages of data. It's not
> perfect, but neither is anything else.
>
> --Matt