[Ifeffit] Peak-Fitting Process in Larix

Matt Newville newville at cars.uchicago.edu
Sun Apr 7 23:13:53 CDT 2024


Hi Ryan,

There are a couple of different questions here.  And it turns out that
this is also coming up against a few small bugs in code that are in the
process of being resolved ;).   I think this is going to be a long
answer: first the "how to do this", then a warning about the buglets,
then pseudo-Voigt functions in general.

First,  on doing peak fitting with arc-tangents in Larix:

> The parameters for the arctangent function are initially estimated
> using the 'pick values from plot' feature. However, modifying these
> parameters doesn't result in corresponding changes in the graph,
> making it difficult to ascertain if it aligns with the data points.
> Considering this, would it be acceptable to set the arctangent's
> amplitude to 1 (normalised edge jump) and position its centre a couple
> of eV below E0?

Yes, you can definitely change the initial values (and set bounds -- but
be careful about setting these too tightly).  Picking values from the
plot is only meant to give rough starting values anyway.  The amplitude
should be about 1 (and you could constrain it to be positive), and the
center should be about "halfway up the edge".  These values should
refine well.

Be careful using arc-tangents in general.  I know that is what many
people (including Farges et al.) used, but I find that "line +
Lorentzian" for the main peak just works better.  Still, I get that
you're trying to reproduce that earlier work, which is fine.  Also note
that "fit baseline" in Larix Pre-Edge Peak Fitting fits that baseline
and assigns fit components named "bline" and "bpeak" ("b" for baseline)
-- these stay in your model.  If you want, you can simply delete these
components from the model.

> Following this, two pseudo-Voigt functions are introduced, with their
> parameters initially estimated. Then, to replicate the conditions of
> '1.3 eV 2σ width and 45% Gaussian,' do I set the pseud_fraction to
> 0.45 and pseud_sigma to 2? I'm uncertain about where to input the
> 1.3 eV width and whether this choice is optimal, especially
> considering that the natural width of the atomic K level at the Mn
> edge is 1.16 eV (Krause, 1979).

Actually, to reproduce Farges's settings, use a 'fraction' of 0.55.
Here "fraction" is the fraction of the Lorentzian.  Sigma is a little
trickier, because I am not 100% certain of the definition of
pseudo-Voigt that they used.  As far as I can tell, most definitions
(including the one we use) take sigma as the sigma of the Lorentzian
(ie, its HWHM), and set a sigma for the Gaussian so that the FWHM is the
same for both the Lorentzian and the Gaussian.  This is not really that
well justified physically (more below), but it is a common approach.
With all that, you should then set sigma to 0.65, so that 2*sigma is
1.3 eV.
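
In lmfit terms, that would look like the sketch below (the center and
amplitude here are made-up placeholders):

    from lmfit.models import PseudoVoigtModel

    peak = PseudoVoigtModel(prefix='pv1_')
    params = peak.make_params(amplitude=0.1, center=6541.0,
                              sigma=0.65, fraction=0.55)
    # sigma=0.65    -> FWHM = 2*sigma = 1.3 eV
    # fraction=0.55 -> 55% Lorentzian, that is, "45% Gaussian"

    # to pin the widths at the published values instead of refining:
    params['pv1_sigma'].set(vary=False)
    params['pv1_fraction'].set(vary=False)

Adding this to the step model above (model = step + peak) builds up the
composite model, roughly what Larix assembles behind the GUI.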

I think that is all of the "technical bits" and mechanics of doing the fits.

Second, on buglets:  There are 2 bugs in the combination of lmfit 1.3.0
(latest) and Larix 0.9.75 (latest).  I'm responsible for both bugs.
Lmfit 1.3.0 is brand new (so, if you're using an older version, wait for
1.3.1 before updating) and breaks the way Larix sets the "arctan" form
of the step function.  A fix is in the works.  A completely separate bug
in Larix 0.9.75 is the bigger problem: saving and re-loading Larix
sessions with Peak Fitting results does not correctly restore them.
This is fixed in the development branch, and I hope to push out Larix
0.9.76 in a day or two, which will fix both of these.

> Finally, I couldn't find the specific paper, but the authors stated
> that due to the significant processing times required for Voigt
> functions, they opted for pseudo-Voigt functions to model instrumental
> and core-hole broadening factors. With improvements in processing
> times, are Voigt functions now the preferred choice, or does the
> pseudo-Voigt function still hold advantages over both?

Third: Voigt functions are a convolution of Lorentzian and Gaussian
peaks.  Historically, these have been used in X-ray powder diffraction
analysis.  The basic idea is that the X-ray source (say, a tube or bend
magnet) gives a broadly Gaussian-like profile of incident angles on a
monochromator.  Monochromators at beamlines have complicated profiles,
but those are also Gaussian-like (the tails get suppressed quickly).  In
XRD, I think the powder-i-ness of the sample is expected to give
Lorentzian-like broadening of peaks.  For XAS, it is the core-level
width that gives a Lorentzian-like energy profile to a peak.

Properly, the Voigt function needs the complex Faddeeva function (the
complex extension of the error function).  In the old days (say,
pre-2000?), implementing this was considered either hard (say, if you
were writing in C) or slow to run... or, well, the people writing
analysis codes were just lazy ;).  So they (I think FullProf may be to
blame) invented the pseudo-Voigt as a fractional sum of a Lorentzian and
a Gaussian with the same FWHM.  It must have worked well enough -- lots
of people use it.
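
To make the two definitions concrete, here is a small sketch of both
profiles (unit area, peaked at x=0; the function names are mine, not
from any particular package):

    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        """True Voigt: a Gaussian (width sigma) convolved with a
        Lorentzian (HWHM gamma), via the complex Faddeeva function."""
        z = (x + 1j*gamma) / (sigma * np.sqrt(2))
        return wofz(z).real / (sigma * np.sqrt(2*np.pi))

    def pseudo_voigt(x, fwhm, fraction):
        """Pseudo-Voigt: a fractional sum of a Lorentzian and a
        Gaussian that share the same FWHM."""
        sig_g = fwhm / (2 * np.sqrt(2 * np.log(2)))  # Gaussian sigma
        hwhm = fwhm / 2                              # Lorentzian HWHM
        gauss = np.exp(-x**2 / (2*sig_g**2)) / (sig_g*np.sqrt(2*np.pi))
        lor = hwhm / (np.pi * (x**2 + hwhm**2))
        return (1 - fraction)*gauss + fraction*lor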

Is there any justification for asserting that the FWHM are the same for
these two components?  Not really.  Like you say, the Gamma (= 2*sigma)
for the core hole is at least nominally known, and for transition-metal
K edges it is around 1 to 1.5 eV (the Mn K value you quote, Gamma = 1.16
eV, corresponds to a Lorentzian sigma of 0.58 eV).  The FWHM of the
source should scale linearly with energy and should also be around 1 eV,
though at modern facilities it ought to be dominated by the mono Darwin
width, which is Gaussian-ish (and depends on which mono crystal is
used).

Is it hard or slow to calculate a Faddeeva function these days?  Nope,
not in Python: it is built into scipy by the kind of people who love to
read Abramowitz and Stegun so that we don't have to.  So, you would be
welcome to use a "real" Voigt function: it does not have a "fraction"
parameter, but it does add a "gamma" parameter that controls "how
Gaussian vs how Lorentzian" the result is.  Or use pseudo-Voigt to go
all in on reproducing the previous results.

My experience with pre-edge peak fitting of data from modern X-ray
beamlines (either a good collimating mirror or an insertion-device
source with divergence comparable to the mono Darwin width) is that a
Voigt profile (even with the default "gamma=sigma", similar to
fraction=0.5 and equivalent FWHM) gives lower fit residuals than a
Gaussian.  Sometimes, if the fit and the data are really good, it seems
like the "real" Voigt does a better job than the pseudo-Voigt.  But that
is at the "detection limit", and it would not change any interpreted
result within the estimated uncertainties.

--Matt