[Ifeffit] Ifeffit Digest, Vol 162, Issue 17

Christopher Thomas Chantler chantler at unimelb.edu.au
Fri Aug 12 16:36:40 CDT 2016


Topic 1: We have a routine /edit within ifeffit (our modified version) which propagates and fits uncertainty.
Working on a couple of minor details before passing it on to Matt and Bruce for general use.

Topic 3: Monochromator glitches should be eliminated in absorption spectra under normal circumstances if the dark current estimation is well defined. I recommend careful measurement and recording of this for all spectra.

------------------------------------------------------------
Christopher Chantler, Professor, FAIP
Editor-in-Chief, Radiation Physics and Chemistry
Chair, International IUCr Commission on XAFS
President, International Radiation Physics Society
School of Physics, University of Melbourne
Parkville Victoria 3010 Australia
+61-3-83445437 FAX +61-3-93474783
chantler at unimelb.edu.au chantler at me.com
http://optics.ph.unimelb.edu.au/~chantler/xrayopt/xrayopt.html
http://optics.ph.unimelb.edu.au/~chantler/home.html


________________________________________
From: Ifeffit [ifeffit-bounces at millenia.cars.aps.anl.gov] on behalf of ifeffit-request at millenia.cars.aps.anl.gov [ifeffit-request at millenia.cars.aps.anl.gov]
Sent: Friday, 12 August 2016 11:22 PM
To: ifeffit at millenia.cars.aps.anl.gov
Subject: Ifeffit Digest, Vol 162, Issue 17

Send Ifeffit mailing list submissions to
        ifeffit at millenia.cars.aps.anl.gov

To subscribe or unsubscribe via the World Wide Web, visit
        http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
or, via email, send a message with subject or body 'help' to
        ifeffit-request at millenia.cars.aps.anl.gov

You can reach the person managing the list at
        ifeffit-owner at millenia.cars.aps.anl.gov

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Ifeffit digest..."


Today's Topics:

   1. Statistical errors in linear combination fits (Joshua Kas)
   2. Re: Statistical errors in linear combination fits (Bruce Ravel)
   3. Athena interpolation when removing mono glitches
      (Michael Gaultois)
   4. Re: Athena interpolation when removing mono glitches (Bruce Ravel)


----------------------------------------------------------------------

Message: 1
Date: Thu, 11 Aug 2016 13:31:45 -0700
From: Joshua Kas <joshua.j.kas at gmail.com>
To: ifeffit at millenia.cars.aps.anl.gov
Subject: [Ifeffit] Statistical errors in linear combination fits
Message-ID:
        <CAHuhYRm_Dx5+_HAUuOZ0nCwzXwCrS-Subxb9dmBXDsQp8wmSRQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi All,
I was wondering if it is possible to pass a value for the uncertainty in
the data to Athena when doing linear combination fits. We have calculated
chi^2 values using a simple estimate of the statistical uncertainty, and
find that our values differ by several factors of 10 when compared to the
reported values from Athena. I assume that this has to do with value of the
uncertainty that Athena is using, but I certainly could be mistaken. In any
case, the reduced chi^2 reported by Athena is much smaller than 1, while
the fit is off by fairly large amounts compared to any reasonable estimate
of the statistical error.
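
The scaling that can produce such a mismatch is easy to sketch in a few
lines (all names and numbers below are invented for illustration; this is
not what Athena does internally):

```python
import numpy as np

# Toy illustration (not Athena's internals): how the assumed uncertainty
# eps sets the scale of reduced chi^2.  chi^2 = sum(((data - fit)/eps)^2),
# chi^2_nu = chi^2 / (N - P).
rng = np.random.default_rng(0)
N, P = 200, 2                        # data points, free LCF parameters
eps_true = 0.01                      # "true" statistical uncertainty
model = np.sin(np.linspace(0, 6, N))
data = model + rng.normal(0, eps_true, N)

def reduced_chi2(data, fit, eps, n_params):
    chi2 = np.sum(((data - fit) / eps) ** 2)
    return chi2 / (len(data) - n_params)

# With the correct eps, chi^2_nu comes out near 1.  An eps that is 10x
# too large makes chi^2_nu 100x too small -- one way to get values << 1.
print(reduced_chi2(data, model, eps_true, P))
print(reduced_chi2(data, model, 10 * eps_true, P))
```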
Thanks in advance for any help and sorry if this info is already available
in the archives,
Josh Kas

------------------------------

Message: 2
Date: Thu, 11 Aug 2016 16:43:36 -0400
From: Bruce Ravel <bravel at bnl.gov>
To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
Subject: Re: [Ifeffit] Statistical errors in linear combination fits
Message-ID: <507e138f-9ab8-9de3-0629-bfd85aad1e1f at bnl.gov>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 08/11/2016 04:31 PM, Joshua Kas wrote:
> [...]

Not so easy with Ifeffit, much easier with larch.  It's on my to-do
list, but has not yet been implemented in Athena.

B

--
  Bruce Ravel  ------------------------------------ bravel at bnl.gov

  National Institute of Standards and Technology
  Synchrotron Science Group at NSLS-II
  Building 743, Room 114
  Upton NY, 11973

  Homepage:    http://bruceravel.github.io/home/
  Software:    https://github.com/bruceravel
  Demeter:     http://bruceravel.github.io/demeter/


------------------------------

Message: 3
Date: Fri, 12 Aug 2016 11:56:47 +0000
From: Michael Gaultois <mike.g at usask.ca>
To: "ifeffit at millenia.cars.aps.anl.gov"
        <ifeffit at millenia.cars.aps.anl.gov>
Subject: [Ifeffit] Athena interpolation when removing mono glitches
Message-ID:
        <CAADTZ036AFLb_nWjkS3XYS=vguPi+R01GVOWXA5+eK0bJuxdew at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Dear members of the Ifeffit list,

I recently collected some EXAFS data with some significant monochromator
glitches that I am looking to remove. I have used a python script
graciously written by the beamline scientist to remove the offending
regions, but when I import the data into Athena, Athena does some funny
business in an attempt to join together the regions outside of the data
gap. (See the bending away in the dataset and/or attached image.) I have
confirmed by plotting with other software that the strange step-like
behaviour in the mu(E) is present only after importing into Athena (the raw
data is fine).

I have looked through the mailing list archives and also the user manual,
but can't seem to find anything that explains it, or other people who have
experienced this problem in the past. From what I can determine, Athena
joins together the segments to obtain a linear interpolation in the
norm(E)? This leads to a warping in the mu(E).
==How does Athena try to treat this data?==

I was wondering if other people have had similar issues, and what steps can
be taken to remedy the problem. For example, replacing removed data points
with artificial points along a linear interpolation would be possible, but
the act of adding artificial points that don't exist is concerning to me.
==What is the best way to treat data with mono glitches to reduce spurious
features not intrinsic to the sample?==

If you are interested, I have included links to .prj datasets and images to
highlight these problems below.

With thanks for your time,
Michael

----------
.prj file with 4 ways of working up the same data:
http://bit.ly/2bnfNZ5

1) raw data
2) mono glitches removed
3) rebinned data
4) rebinned and manually removed points (This leads to some strange-looking
features in k-space, and this would be less than desirable on the many
datasets we have collected)

images to highlight these problems:
a) mu(E)
http://bit.ly/2aYaJIf

b) norm(E)
http://bit.ly/2aYaX27

c) k
http://bit.ly/2baWh0h

------------------------------

Message: 4
Date: Fri, 12 Aug 2016 09:22:22 -0400
From: Bruce Ravel <bravel at bnl.gov>
To: XAFS Analysis using Ifeffit <ifeffit at millenia.cars.aps.anl.gov>
Subject: Re: [Ifeffit] Athena interpolation when removing mono
        glitches
Message-ID: <c9b4007d-f976-b62d-f1b1-17c8d18c80e1 at bnl.gov>
Content-Type: text/plain; charset=windows-1252; format=flowed


Michael,

Let's start by talking about where chi(k) comes from in the software.

mu(E) is measured on some grid.  Eventually, we want chi(k) to be on a
clear, specified, reasonably (but not excessively) dense grid in k.
chi(E) is (mu(E) - mu0(E)) / mu0(E0).  This places chi(E) on the same
grid as the original data.
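
That step can be sketched like so (the straight-line mu0 and sinusoidal
oscillations are toys for illustration; real codes determine mu0 with a
spline):

```python
import numpy as np

# Sketch of chi(E) = (mu(E) - mu0(E)) / mu0(E0) on the measurement grid.
energy = np.linspace(8980, 9600, 400)            # eV, arbitrary grid
e0 = 9000.0
mu0 = 0.5 + 1e-4 * (energy - e0)                 # toy smooth background
mu = mu0 + 0.02 * np.sin((energy - e0) / 15.0)   # background + oscillations

mu0_at_e0 = np.interp(e0, energy, mu0)           # edge-step normalization
chi_e = (mu - mu0) / mu0_at_e0                   # chi(E), same grid as mu(E)
```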

To prepare for the Fourier transform and the comparison with theory,
chi(E) needs to be converted to k and put onto the specified k grid.
In our software, this is done by interpolation.  The interpolation is
local and does not consider the density of the grid in E.  So long as
the grid in E is not very dense compared to the wavelengths
represented in the data [1], this works fine.  However, with a very
dense E-grid, it is possible to come up with a situation where this
local interpolation -- i.e. using a few points from the original grid
which surround the point on the target k-grid -- will yield
undesirable results.

How do I know it's possible for the interpolation to go all wonky like
this?  You contrived just such a situation :)

Your deglitching step removed about 40 eV worth of data, which
corresponded to about 0.5/Ang. in the region of 11.5/Ang in k -- about
10 points on the k-grid.  This is a BIG gap in the data.

The normal presumption when deglitching is that you are only removing
a few points of data.  If you are removing a huge gash from your data
-- as you did -- you need to think hard about how you fill that gash
back in.

Your data set #2 is a pretty bad solution.  Your data set #4 seems
like a much better solution.  Let's examine why.

In data set #2 you contrived exactly the problem I described above.
You left a gash in the data which was about 10 grid steps wide.  When
going from chi(E) to chi(k), the interpolation was done.  At the first
data point in the gap, the previous few data points were used in the
interpolation.  Since those few points were pointing down-ish, the
first point in the gap was lower than the baseline.  The next point
interpolated even lower, the next even lower.  Eventually, the data
had to hook back up with the other side of the gap, so the
interpolation rose back up.  By cutting a big gash out of the data and
doing a simple interpolation, you introduced the weird, downward
pointing feature at 11.5.
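
The failure mode is easy to reproduce in a sketch.  Here a local
quadratic through the 3 nearest original points stands in for the actual
interpolation (it is not the exact Ifeffit algorithm), applied to toy
oscillatory data with a ~0.5/Ang gash cut out near 11.5/Ang:

```python
import numpy as np

def local_quadratic(k_src, chi_src, k0, npts=3):
    # use the npts original points nearest the target point k0
    idx = np.argsort(np.abs(k_src - k0))[:npts]
    coef = np.polyfit(k_src[idx] - k0, chi_src[idx], npts - 1)
    return np.polyval(coef, 0.0)

k_dense = np.linspace(2.0, 14.0, 600)              # dense measurement grid
chi_true = np.exp(-0.1 * k_dense) * np.sin(5 * k_dense)

keep = (k_dense < 11.25) | (k_dense > 11.75)       # cut a wide gash
k_cut, chi_cut = k_dense[keep], chi_true[keep]

k_grid = np.arange(2.0, 14.0, 0.05)                # target k-grid
chi_interp = np.array([local_quadratic(k_cut, chi_cut, k) for k in k_grid])

in_gap = (k_grid > 11.25) & (k_grid < 11.75)
# Inside the gash, all 3 nearest points sit on one side, so the
# "interpolation" is really an extrapolation and swings away from the
# underlying signal; outside the gash the error stays tiny.
```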

Data set #4 is a much more sensible solution because you rebinned before
deglitching.  Rebinning is implemented in Athena as a convolution with
a square kernel -- this is often called a box car average.  This
passes the convolution kernel over the data and evaluates it at the
target grid points.  The data before rebinning were on a very dense
grid.  After rebinning, they are on the "conventional" energy grid
that was used back in the day when we all did step scans and for which
the background removal algorithm was originally written and optimized.
By rebinning first, the data are smoother and sparser and less
susceptible to the interpolation effect that you saw in data set #2.
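
A boxcar rebin can be sketched like so (toy quick-scan data; the grid
values and noise level are invented, and this is not Athena's exact code
path):

```python
import numpy as np

# Rebin by convolution with a square kernel: each point on the sparse
# target grid gets the mean of the dense-grid samples falling within
# half a target step on either side.
def boxcar_rebin(e_dense, mu_dense, e_target):
    step = np.mean(np.diff(e_target))
    out = np.empty_like(e_target)
    for i, e in enumerate(e_target):
        sel = np.abs(e_dense - e) <= step / 2
        out[i] = mu_dense[sel].mean()
    return out

e_dense = np.linspace(9000, 9500, 4000)            # quick-scan style grid
rng = np.random.default_rng(1)
mu_dense = np.tanh((e_dense - 9100) / 10) + rng.normal(0, 0.05, e_dense.size)

e_target = np.arange(9005.0, 9495.0, 5.0)          # coarser, step-scan style grid
mu_rebinned = boxcar_rebin(e_dense, mu_dense, e_target)
# the rebinned data are smoother and sparser: the noise drops roughly as
# 1/sqrt(points per bin)
```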


Soooooo ..... is Athena broken?  An argument could be made that the
interpolation is the problem and the solution should be to make a
better algorithm.  There is merit to that, but I am going to argue
something else.

I think the problem is that beamlines are implementing quick scanning
without thinking about all the needs of their users.  Your original
data consists of about 4000 points.  It is rebinned to about 600
points -- about the size of a conventional step scan.

Last week, one of my colleagues here at BNL showed me a quick scan
which had almost 200,000 points in it.  Yowzers!

My question to the beamline scientists out there is this: what do your
users want and need?  While it is possible that you might have a power
user with a good reason to examine the data file with 4000 or 200,000
points [2], most of your users want the data rebinned onto a
conventional grid.  So why are beamlines sending their users home with
data in a format that is not what they want?  That's just bad
practice.

Had you simply gone home [3] with your quick scan data rebinned onto a
sensible grid, you would never have even noticed this problem because
you would have naturally fallen into the case of data set #4.

You should demand that from your beamline scientist.

B





[1] I mean the part of the data that is EXAFS, not the part of the
     data that is unnormalized monochromator glitch.

[2] ... and it is certainly true that the beamline scientist needs to
     examine the large, dense, raw data file ...

[3] "But what about the sanctity of raw data" is something that I am
     sure someone is sputtering right now.  Well, to take the example
     of Diamond, /all/ the data are streamed into an HDF5 file.  Column
     data files are then written for the convenience of the user.
     That's a great solution.  The HDF5 file has all the things but the
     user can interact with the salient representation of the
     measurement.




On 08/12/2016 07:56 AM, Michael Gaultois wrote:
> [...]


--
  Bruce Ravel  ------------------------------------ bravel at bnl.gov

  National Institute of Standards and Technology
  Synchrotron Science Group at NSLS-II
  Building 743, Room 114
  Upton NY, 11973

  Homepage:    http://bruceravel.github.io/home/
  Software:    https://github.com/bruceravel
  Demeter:     http://bruceravel.github.io/demeter/


------------------------------

Subject: Digest Footer

_______________________________________________
Ifeffit mailing list
Ifeffit at millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
Unsubscribe: http://millenia.cars.aps.anl.gov/mailman/options/ifeffit


------------------------------

End of Ifeffit Digest, Vol 162, Issue 17
****************************************


