You might find larch and Python useful here. Of course, you would have to code in some particulars about where and in what format your data is stored, but with larch the XAFS pre-edge subtraction and normalization step is a single Python function call, maybe something like this:

```python
#!/usr/bin/env python
from glob import glob
from larch.io import read_ascii
from larch.xafs import pre_edge

filelist = glob('MnXAFS*')

groups = {}
for fname in filelist:
    thisgroup = read_ascii(fname, labels='energy ctime i0 i1 i2 mnka')
    thisgroup.mu = thisgroup.mnka / thisgroup.i0

    pre_edge(thisgroup, pre1=-200.00, pre2=-35.00, norm1=150.00, norm2=350.0)
    groups[fname] = thisgroup

# now you have several normalized XAFS spectra... and now what?

```

That makes a fair number of assumptions about how the data is organized (plain text files, with columns as labeled) and how it should be processed, but it might give you an idea of how to start.


On Tue, Nov 15, 2022 at 1:38 AM 张驰 <czhang2020@sinano.ac.cn> wrote:

How to normalize tens of thousands of XAS data?

I measured a series of spectra of a sample over time by in situ XAS. The results contain tens of thousands of sets of XAS data, and it is impossible to normalize each set individually. I would like to ask where to find the normalization source code of Athena, so that I can modify the code and use my computer to process a large number of spectra. Thanks!

_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
Unsubscribe: http://millenia.cars.aps.anl.gov/mailman/options/ifeffit


--
--Matt Newville <newville at cars.uchicago.edu> 630-327-7411