[Ifeffit] Larch flattening

Matt Newville newville at cars.uchicago.edu
Fri Jan 25 22:13:07 CST 2019


Hi,

On Fri, Jan 25, 2019 at 12:45 PM Kirill Lomachenko <
kirill.lomachenko at esrf.fr> wrote:

> Dear Matt,
> I have a question and a comment regarding Larch. I would appreciate it if
> you could have a look.
> 1) The flattening algorithm for XAS in Larch works differently from
> Athena's when a linear post-edge function is used for normalization. In
> Larch the normalized spectrum is always fitted by a parabola for
> flattening, no matter which function was used for normalization. That
> works fine when the post-edge function is also a parabola, but when it is
> a straight line, the resulting flattened spectrum does not stick to Y=1,
> as it is expected to (and as it does in Athena). My question is: is this
> a bug or a feature? Just in case, I made some minor modifications to the
> pre_edge.py file that give the same results as Athena when a linear
> post-edge is used. The corrections may be a bit clumsy, but they seem to
> work. If anyone needs them, I can share.
>

Flattening is not well-defined, and probably not well-justified. I view it
as something of a weakness that it is included in Larch at all, and yet I
see many people use it. The definition may well differ from what Athena
does; that is probably a clue to how poorly defined the process is.
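
The idea, roughly, is to fit the normalized data above the edge and
subtract that trend (referenced to its value at E0) so that the post-edge
region hugs y=1. Here is a minimal sketch of that idea in plain numpy --
not the exact code in either Larch or Athena, and the function name
`flatten_norm` and its defaults are made up:

import numpy as np

def flatten_norm(energy, norm, e0, nnorm=2, norm1=150.0, norm2=800.0):
    """Flatten normalized mu(E): a rough sketch, not Larch's implementation.

    energy, norm : arrays of energy (eV) and normalized absorption
    e0           : edge energy (eV)
    nnorm        : degree of the post-edge polynomial (1=line, 2=quadratic)
    norm1, norm2 : fit range in eV, relative to e0
    """
    imin = np.searchsorted(energy, e0 + norm1)
    imax = np.searchsorted(energy, e0 + norm2)
    # fit a polynomial of degree nnorm to the post-edge of the normalized data
    coefs = np.polyfit(energy[imin:imax], norm[imin:imax], nnorm)
    # reference the trend to zero at e0 so the edge step itself is untouched
    trend = np.polyval(coefs, energy) - np.polyval(coefs, e0)
    flat = np.array(norm, dtype=float)
    above = energy >= e0
    flat[above] = flat[above] - trend[above]
    return flat

As Kirill notes, whether the degree used for this fit matches the degree
used for normalization affects how closely the result sticks to y=1.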

For sure, Larch has some improvements in pre-edge removal and normalization
over Ifeffit. For example, Larch can use the Victoreen formula to (at least
mostly) account for the expected decay in mu(E). It also has an
implementation of MBACK, and a modification of it (`mback_norm`) that is
more like the cl_norm function in Ifeffit. In my opinion, matching mu(E)
data to tabulated values has a lot of merit. FWIW, the MBACK algorithm has
a lot of subtle features; I find `mback_norm` to be more consistent and
easier to use.
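
For reference, a minimal usage sketch (assuming a reasonably recent Larch
install; check help(pre_edge) and help(mback_norm) for the exact keywords
in your version -- the filename and column labels below are made up):

from larch.io import read_ascii
from larch.xafs import pre_edge, mback_norm

# read columns of energy and mu from a plain-text scan (hypothetical file)
dat = read_ascii('my_xas_scan.dat', labels='energy mu')

# pre-edge subtraction and normalization; nvict applies an E**nvict
# (Victoreen-like) weighting to the pre-edge fit to better follow the
# expected energy dependence of mu(E)
pre_edge(dat.energy, dat.mu, group=dat, nvict=2)

# normalization by matching mu(E) to tabulated cross-sections (MBACK-like),
# here for a Cu K edge (z=29)
mback_norm(dat.energy, dat.mu, group=dat, z=29, edge='K')

# pre_edge results land on the group: dat.norm, dat.flat, dat.edge_step, ...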

>
> 2) There is a typo in the Larch web manual in the description of the
> nnorm parameter of the pre_edge() function. It is stated to be the number
> of terms in the fitting polynomial (i.e., 1 + degree), whereas it seems
> to be just the degree: nnorm=1 corresponds to a linear function and
> nnorm=2 to a quadratic. In the GUI help it is partially corrected, but
> the phrase "Default=3 (quadratic)" remains. It is not critical at all,
> but it may be misleading for beginners...
>

Ah, sorry and thanks.  The online doc is also different from the
documentation string in the code (which is closer, but still not perfect).

nnorm is the degree of the polynomial (0 for a constant, 1 for a line, 2
for a quadratic; default=2). The word "order" for a polynomial is
apparently not that well-defined; I either learned it wrong long ago or
mis-remembered it. I'm in the process of trying to release the next
version, so I'll make these changes soon.
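
In other words (a quick sketch with synthetic data, keyword names as in
the current code):

import numpy as np
from larch import Group
from larch.xafs import pre_edge

# a synthetic step-like spectrum, just to show how nnorm is interpreted
energy = np.linspace(8800.0, 9800.0, 1001)
mu = 0.1 + 0.9/(1.0 + np.exp(-(energy - 8979.0)/2.0)) + 1.e-4*(energy - 8979.0)
dat = Group(energy=energy, mu=mu)

pre_edge(dat.energy, dat.mu, group=dat, nnorm=1)  # linear post-edge
pre_edge(dat.energy, dat.mu, group=dat, nnorm=2)  # quadratic post-edge (default)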

>
> Taking the opportunity, I would like to thank you for all the great work
> you are doing on Larch. It really helps with large datasets!
>

Great, glad it's useful.
--Matt Newville