That 'flattening' function seems over-complicated to me, and it makes an artificial discontinuity, at least in slope, at E0, which is a somewhat arbitrary quantity. Why not simply divide by the post-edge quadratic, norm(E) = (mu(E) - pre_edge_line(E)) / quadratic(E)? In some cases, where there's a big curvature, it may make sense to divide mu(E) by the quadratic, then subtract a pre-edge.

What I've never solved satisfactorily is the case in which the extrapolation of the pre-edge line crosses the post-edge, so that mu(E) - pre_edge_line(E) < 0 for some part of the range. I've never understood why this happens.
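For concreteness, I mean something like this (a sketch with numpy; the fit ranges are whatever you'd normally pick, and the names are mine, not from anybody's library):

    import numpy as np

    def norm_by_quadratic(energy, mu, e0, pre=(-150, -50), post=(100, 500)):
        # Fit a line to the pre-edge region (offsets relative to e0)
        pmask = (energy >= e0 + pre[0]) & (energy <= e0 + pre[1])
        pre_line = np.polyval(np.polyfit(energy[pmask], mu[pmask], 1), energy)

        # Fit a quadratic to the post-edge region of mu - pre_edge_line
        nmask = (energy >= e0 + post[0]) & (energy <= e0 + post[1])
        quad = np.polyval(np.polyfit(energy[nmask], (mu - pre_line)[nmask], 2),
                          energy)

        # norm(E) = (mu(E) - pre_edge_line(E)) / quadratic(E)
        return (mu - pre_line) / quad

mam

On 5/16/2013 4:47 AM, Matt Newville wrote: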
Hi Matthew, George, Zach,
Thanks for the discussion!
On Wed, May 15, 2013 at 5:41 PM, Matthew Marcus wrote:

I'm not sure what 'flattening' means. Does that mean dividing by a linear or other polynomial function, fitted to the post-edge?

mam
Sorry, I should have been clearer. "Standard Athena/Ifeffit" is to:

a) regress a pre-edge line to mu(E) (no power laws)
b) regress a post-edge quadratic
c) set edge_step = post_edge_quadratic(E0) - pre_edge_line(E0)
d) set norm(E) = (mu(E) - pre_edge_line(E)) / edge_step
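In code, those four steps amount to something like this (just a sketch with numpy; the fit ranges and names are illustrative, not what Athena/Ifeffit actually hard-codes):

    import numpy as np

    def standard_norm(energy, mu, e0, pre=(-150, -50), post=(100, 500)):
        # (a) pre-edge line regressed to mu(E)
        pmask = (energy >= e0 + pre[0]) & (energy <= e0 + pre[1])
        pre_coefs = np.polyfit(energy[pmask], mu[pmask], 1)
        pre_line = np.polyval(pre_coefs, energy)

        # (b) post-edge quadratic regressed to mu(E)
        nmask = (energy >= e0 + post[0]) & (energy <= e0 + post[1])
        post_coefs = np.polyfit(energy[nmask], mu[nmask], 2)

        # (c) edge_step = post_edge_quadratic(E0) - pre_edge_line(E0)
        edge_step = np.polyval(post_coefs, e0) - np.polyval(pre_coefs, e0)

        # (d) norm(E) = (mu(E) - pre_edge_line(E)) / edge_step
        return (mu - pre_line) / edge_step, edge_step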
Flattening (Athena only, now backported to larch) fits a quadratic to the post-edge range (typically E0+100 to the end of data) of norm(E), and then sets

    flattened(E) = norm(E)                                  for E <= E0
    flattened(E) = norm(E) - quadratic(E) + quadratic(E0)   for E > E0
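As a sketch of that flattening step, acting on norm(E) from the steps above (the post-edge range here is the typical default, not a requirement):

    import numpy as np

    def flatten(energy, norm, e0, post_start=100):
        # Fit a quadratic to norm(E) over the post-edge range (E0+100 to end)
        fmask = energy >= e0 + post_start
        coefs = np.polyfit(energy[fmask], norm[fmask], 2)
        quad = np.polyval(coefs, energy)

        # Leave norm(E) alone below E0; above E0, subtract the quadratic
        # re-anchored at its value at E0
        flat = norm.copy()
        above = energy > e0
        flat[above] = norm[above] - quad[above] + np.polyval(coefs, e0)
        return flat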
I think this was originally meant for display purposes only.
Hopefully Bruce can correct me if I'm wrong on any of the details here.
I think it's fair to say that the "Standard Athena/Ifeffit" approach to normalization is simple-minded. It was designed for EXAFS in an era when accessing databases seemed like a challenge, so even for EXAFS it is simple-minded.
Flattening might be better at removing instrumental backgrounds, and better suited for linear analysis of XANES. The main concerns I would have are the potential for a slight discontinuity at E0 and a potentially strong dependence on the choice of E0.
Using something like bkg_cl() (which matches mu(E) to the Cromer-Liberman tables) or MBACK (which I believe is similar, but also accounts for "elastic/Compton leakage" into the pre-edge part of fluorescence spectra) might be better still.
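As a very rough sketch of the matching-to-tables idea (here using mu_elam() from the xraydb package as a stand-in for the Cromer-Liberman tables; bkg_cl() and MBACK both do considerably more than this):

    import numpy as np
    from xraydb import mu_elam  # tabulated mass attenuation, energies in eV

    def match_to_tables(energy, mu, element):
        # Tabulated mu(E) for the absorbing element
        mu_tab = mu_elam(element, energy)

        # Least-squares scale plus linear background, so that a*mu + b*E + c
        # matches the tabulated curve; the b*E + c terms crudely absorb
        # instrumental background
        basis = np.column_stack([mu, energy, np.ones_like(energy)])
        (a, b, c), *_ = np.linalg.lstsq(basis, mu_tab, rcond=None)
        return a * mu + b * energy + c  # measured data on the tabulated scale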
From my point of view, the question is: what's the best way to do this? The pre_edge() function in Larch does include an energy exponent term, and now writes out the "flattened" array, as above. It does not include the scaling MAM described, but that would not be hard. Reimplementing bkg_cl() would not be too hard, but perhaps trying to port MBACK would be better. Perhaps all of the above is best?
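For what it's worth, from larch's python side that looks something like this (spectrum.dat is a hypothetical two-column file, and nvict is, if I have the keyword right, the energy-exponent term):

    import numpy as np
    from larch import Group
    from larch.xafs import pre_edge

    energy, mu = np.loadtxt('spectrum.dat', unpack=True)
    dat = Group(energy=energy, mu=mu)

    # nvict=3 weights the pre-edge fit by E^3 (the energy exponent term)
    pre_edge(dat.energy, dat.mu, group=dat, nvict=3)

    print(dat.edge_step)  # edge step from step (c)
    # dat.norm and dat.flat now hold the normalized and flattened arrays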
--Matt

_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit