Hi Everyone,

I figured that it had been long enough for version 1.0071, so I posted version 1.0072 today. This version has several small feature improvements and bug fixes. There is also a fairly substantial change to the 'configure ; make' process on Unix: the perl and python extensions are no longer built during this step, but have to be built after installation of the library, with the 'normal method' for these languages:

    cd wrappers/python
    python setup.py install

    cd wrappers/perl
    perl Makefile.PL ; make install

This change greatly simplifies the work of the configuration script and is much more reliable.

The other major change is that the internals of macro handling were largely rewritten so that nested macros work correctly in all cases (including on Windows). Related to that, macros that are run by feffit() and minimize() at each iteration of the fitting loop now run correctly as well.

I have a Windows dll ready, but am having difficulty making standalone executables from the ravelware perl scripts. Hopefully that will get worked out soon, so that a new Windows installer with the latest athena, artemis, and tkatoms, as well as gifeffit and samxas, will be possible.

I've also started (but haven't gotten very far on) an annotated collection of ifeffit scripts, intended to be something of a 'how-to' or 'cookbook' for ifeffit commands that will complement the documents and FAQ. Any suggestions or comments on this would be very welcome.

Finally, I really expect that 1.0072 represents a nearly complete version of Ifeffit 1.0. I believe the XAFS analysis functionality is more-or-less complete at this point. I'm willing to fix bugs and add small features, but would like to try to leave this version stable and to start working (even if slowly!) on much more significant changes that would eventually become Ifeffit 2.

Thanks,
--Matt
Matt:

Is it possible that the new version breaks something that athena is looking for when it determines whether it can be installed on the system? I have to go back to the previous version in order to get athena to install at all.

Carlo
--
Carlo U. Segre -- Professor of Physics
Illinois Institute of Technology
Voice: 312.567.3498    Fax: 312.567.3494
Carlo.Segre@iit.edu    http://www.iit.edu/~segre
Matt and Bruce:

First of all, thanks! I just put athena and ifeffit on our beamline computer at MR-CAT for our users to assess data as they take it. It proved to be very easy for the students and newbies to figure out.

The question that I have is: how does IFEFFIT handle continuous scan data, where the point spacing is almost but not quite uniform in energy all the way through the scan? I have noticed the error of misordered data, which can often show up in continuous scans. We will probably try to fix that ourselves.

What is done with the high density data when converting to k-space? Do you rebin (averaging both E and mu data), do you use a smoothing fit to take advantage of the statistics present in the excess data points, or do you just interpolate and throw away the extra statistics?

Cheers,

Carlo
CS> First of all, thanks! I just put athena and ifeffit on our
CS> beamline computer at MR-CAT for our users to assess data as they
CS> take it. It proved to be very easy for the students and newbies
CS> to figure out.

That's just splendid. I am very pleased and quite flattered by the praise. Thanks! If there are features of the program that MR-CAT and its users would like to see, please let me know.

CS> The question that I have is how does IFEFFIT handle continuous
CS> scan data, where the point spacing is almost but not quite
CS> uniform in energy all the way through the scan.

Well, I am sure that Matt will correct me if I am wrong, but I think I can answer this. mu(E) data is almost never on an even grid, regardless of how it is measured. The background function is evaluated on the energy array of the data using knots that are evenly spaced in wavenumber. When the background is removed, chi(E) -- which is on the original energy grid -- is interpolated onto an even grid in k-space.

CS> I have noticed the error of misordered data which can often show
CS> up in the continuous scans. We will probably try to fix that
CS> ourselves.

Misordered data is handled in a sensible manner in recent versions of athena. It may even be handled correctly ;-)

CS> What is done with the high density data when converting to
CS> k-space? Do you rebin (averaging both E and mu data) or do you
CS> use a smoothing fit to take advantage of the statistics present
CS> in the excess data points, or do you just interpolate and throw
CS> away the extra statistics?

I think I answered this above. I suppose you might say that ifeffit "interpolates and throws", but even that depends on what advantage you claim to be getting by measuring on a finer grid. There are physical limits to the resolution, so in that sense a finer grid does not help. However, measuring for one second per point on, say, a 0.25 eV grid is similar in a counting statistics sense to two measurements of one second per point on a 0.5 eV grid. That counting statistics improvement is not lost in the interpolation of chi(E) to chi(k).

Or, perhaps, I'm missing your point entirely... That happens ;-)

B

--
Bruce Ravel ----------------------------------- ravel@phys.washington.edu
Code 6134, Building 3, Room 222
Naval Research Laboratory                       phone: (1) 202 767 5947
Washington DC 20375, USA                        fax:   (1) 202 767 1697
NRL Synchrotron Radiation Consortium (NRL-SRC)
Beamlines X11a, X11b, X23b, X24c, U4b
National Synchrotron Light Source
Brookhaven National Laboratory, Upton, NY 11973
My homepage:    http://feff.phys.washington.edu/~ravel
EXAFS software: http://feff.phys.washington.edu/~ravel/software/exafs/
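[To make the last step Bruce describes concrete, here is a minimal numpy sketch of interpolating chi(E) onto an even k grid. It is not ifeffit's code: the function name is invented, a linear interpolation stands in for ifeffit's actual three-point scheme, and the constant is simply hbar^2/2m_e in eV*Ang^2, so that E - E0 = 3.81 k^2.]

    import numpy as np

    ETOK = 3.8099821   # hbar^2 / (2 m_e) in eV * Ang^2

    def chi_e_to_chi_k(energy, chi_e, e0, kstep=0.05):
        """Put chi(E), measured on an uneven energy grid, onto an
        even k grid.  chi_e is xmu minus the background, on the
        measured grid; energy is assumed sorted and >= e0."""
        k_data = np.sqrt(np.maximum(energy - e0, 0.0) / ETOK)
        # 0.05 Ang^-1 spacing, as in ifeffit
        k = np.arange(0.0, k_data.max(), kstep)
        # linear interpolation stands in for the 3-point scheme
        return k, np.interp(k, k_data, chi_e)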
On Wed, 17 Jul 2002, Bruce Ravel wrote:
CS> What is done with the high density data when converting to
CS> k-space? Do you rebin (averaging both E and mu data) or do you
CS> use a smoothing fit to take advantage of the statistics present
CS> in the excess data points, or do you just interpolate and throw
CS> away the extra statistics?
BR> I think I answered this above. I suppose you might say that
BR> ifeffit "interpolates and throws", but even that depends on what
BR> advantage you claim to be getting by measuring on a finer grid.
BR> There are physical limits to the resolution, so in that sense a
BR> finer grid does not help. However, measuring for one second per
BR> point on, say, a 0.25 eV grid is similar in a counting statistics
BR> sense to two measurements of one second per point on a 0.5 eV
BR> grid. That counting statistics improvement is not lost in the
BR> interpolation of chi(E) to chi(k).
I think that this last sentence is the answer that I was looking for. I wanted to know whether the counting statistics improvement is propagated in the transformation to k-space. This is good, since it means that we do not have to write our own rebinning or smoothing routines before handing the data off to athena and ifeffit.

Carlo
Carlo said:
BR> However, measuring for one second per point on, say, a 0.25 eV
BR> grid is similar in a counting statistics sense to two
BR> measurements of one second per point on a 0.5 eV grid. That
BR> counting statistics improvement is not lost in the interpolation
BR> of chi(E) to chi(k).
CS> I think that this last sentence is the answer that I was looking
CS> for. I wanted to know if the counting statistics improvement is
CS> propagated in the transformation to k-space. This is good since
CS> it means that we do not have to write our own rebinning or
CS> smoothing routines before handing the data off to athena and
CS> ifeffit.

Hmm.... upon further reflection, I think that what I said is not right. Or, more specifically, it depends upon how the interpolation is done and what the densities are of the starting and ending grids.

Suppose you are interpolating ONTO a 1 eV grid FROM a 0.5 eV grid using three-point interpolation. Then every point you measure gets used to determine points on the new grid. However, if you do the same interpolation ONTO a 1 eV grid FROM a 0.25 eV grid, then half of the points never get used to determine the new grid. For any polynomial interpolation, this argument will hold for any starting grid that is sufficiently dense compared to the ending grid.

I had to reread what Numerical Recipes has to say about cubic spline interpolation (i.e. Ifeffit's splint() function), but it too has this problem of relative densities. Because second derivatives of the starting grid are computed, the splint makes better use of the denser grid. However, the splint may fail to use some of the information provided in the chi(E) to chi(k) conversion -- particularly far above the edge, where the k-grid is very sparse compared to the E-grid.

As Matt said yesterday, Ifeffit uses a quadratic, i.e. three-point, interpolation to make chi(k) from chi(E) (source divers: see src/lib/spline.f at line 364) onto a 0.05 Ang^-1 grid (a hardwired value set in consts.h). Sorry I misled you yesterday.

But since the E-grid does not map linearly onto the k-grid, you will need to think hard about how you do any rebinning. Whatever you come up with, you should let us know. It may be something worth putting into either Athena or Ifeffit.

B
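[Bruce's corrected point -- that plain interpolation from a sufficiently dense grid simply ignores some measured points, while rebinning does not -- is easy to check numerically. The following toy script is entirely illustrative, assuming numpy; nothing in it comes from ifeffit or athena.]

    import numpy as np

    rng = np.random.default_rng(42)
    e_fine = np.arange(0.0, 100.0, 0.25)   # densely measured grid
    e_out = np.arange(0.0, 100.0, 1.0)     # 4x coarser target grid

    def truth(e):                          # smooth stand-in signal
        return np.cos(e / 10.0)

    noisy = truth(e_fine) + rng.normal(0.0, 0.02, e_fine.size)

    # plain interpolation: only the points bracketing each target
    # matter, and here the targets coincide with measured points
    interp = np.interp(e_out, e_fine, noisy)
    print(np.std(interp - truth(e_out)))       # ~0.02: no gain

    # rebinning: average the 4 fine points in each coarse bin
    binned = noisy.reshape(e_out.size, 4).mean(axis=1)
    centers = e_fine.reshape(e_out.size, 4).mean(axis=1)
    print(np.std(binned - truth(centers)))     # ~0.01: sqrt(4) better

[The factor-of-two improvement in the second case is exactly the counting-statistics gain that interpolation alone throws away.]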
Hi Carlo,

Sorry I didn't answer this more clearly.

CS> What is done with the high density data when converting to
CS> k-space? Do you rebin (averaging both E and mu data) or do you
CS> use a smoothing fit to take advantage of the statistics present
CS> in the excess data points, or do you just interpolate and throw
CS> away the extra statistics?

BR> ... However, measuring for one second per point on, say, a
BR> 0.25 eV grid is similar in a counting statistics sense to
BR> two measurements of one second per point on a 0.5 eV grid.
BR> That counting statistics improvement is not lost in the
BR> interpolation of chi(E) to chi(k).

CS> I think that this last sentence is the answer that I was
CS> looking for. I wanted to know if the counting statistics
CS> improvement is propagated in the transformation to k-space.
CS> This is good since it means that we do not have to write
CS> our own rebinning or smoothing routines before handing the
CS> data off to athena and ifeffit.

MN> -- spline() generates k and chi arrays using a 0.05Ang-1 grid
MN>    using a three-point interpolation of the chi(e) data.
MN>    That could possibly be improved, I suppose.

CS> I take this to mean, as Bruce mentioned in the previous
CS> message, that the counting statistics improvement in these
CS> fine-gridded data is used with this three point
CS> interpolation. If this was not the case, I would write a
CS> program to rebin by averaging multiple points to an equally
CS> spaced k-grid in order to use the increased statistics.

The three-point interpolation probably WILL lose some statistics for finely gridded data. That is, if data is binned at fine energy intervals through the EXAFS region, the method currently used will not use all that data to construct chi(k). As an example:

    k = 10.00  ->  E-E0 = 381.0
    k = 10.05  ->  E-E0 = 384.8
    k = 10.10  ->  E-E0 = 388.7

Currently, ifeffit marches in k-space in steps of 0.05 Ang^-1, and uses three energy points (the energy just below, the energy just above, and the next closest point) to make a parabola through chi(E) [that is, xmu(E)-bkg(E) at the energy points of the data], and uses the value of that parabola as chi(k). The result is that if data is binned on a 0.5 eV grid, some of it will be ignored when making chi(k) at k = 10 Ang^-1.

This could be improved. Ifeffit really wants chi(k) on an even k-grid, so the options would be either a finer k-grid or a better interpolation scheme. A finer grid in ifeffit could be possible, but it's a non-trivial change.

A better interpolation scheme is easier to do. Changing from 3-point interpolation to a cubic spline that passes through all the data points of chi(E) would be easy, and would use all the data for each chi(k) point. Whether it actually 'preserves statistics' is a harder question to answer. I think that any gridding of data could be said to lose statistics. Some sort of rolling averaging could be used, which might preserve statistics better at the (small) expense of resolution.

Anyway, the reason that cubic spline interpolation is not currently done is execution speed, but that's probably less important than throwing away data! Changing to the better interpolation scheme is easy enough to try. I could send altered code if you like.

--Matt
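[As a sketch of the scheme Matt describes, the following Python marches in k and evaluates a Lagrange parabola through the three chi(E) points nearest each k value. It is my own rendering, not the Fortran in src/lib/spline.f; the function names and the exact choice of the three neighboring points are approximations.]

    import numpy as np

    ETOK = 3.8099821   # hbar^2 / (2 m_e) in eV * Ang^2

    def parabolic_interp(x, y, x0):
        """Value at x0 of the parabola through the three points of
        (x, y) around x0.  x must be sorted ascending."""
        i = int(np.searchsorted(x, x0))
        i = min(max(i, 1), len(x) - 2)   # clamp so i-1, i, i+1 exist
        xa, xb, xc = x[i - 1], x[i], x[i + 1]
        ya, yb, yc = y[i - 1], y[i], y[i + 1]
        return (ya * (x0 - xb) * (x0 - xc) / ((xa - xb) * (xa - xc))
              + yb * (x0 - xa) * (x0 - xc) / ((xb - xa) * (xb - xc))
              + yc * (x0 - xa) * (x0 - xb) / ((xc - xa) * (xc - xb)))

    def chi_k_from_chi_e(energy, chi_e, e0, kstep=0.05):
        """March in k, evaluating the parabola through chi(E) at each
        point.  energy is assumed sorted and above e0."""
        k_data = np.sqrt((energy - e0) / ETOK)
        k = np.arange(0.0, k_data[-1], kstep)
        chi_k = np.array([parabolic_interp(k_data, chi_e, kk) for kk in k])
        return k, chi_k

[Note how only three measured points enter each output value, which is exactly why data binned more finely than the local k-grid spacing goes unused.]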
I would be willing to test any new code. We have been working with this for diffraction patterns over the summer, and we have not been happy with the spline fit or a Bezier alternative.

Jeff Terry and Steve Wasserman's Mathematica code uses an algorithm which sets bins in energy space using the desired k-space resolution, say delta-k = 0.05, and then averages both mu(E) and E. Then an interpolation scheme eventually puts the data on an exactly even k-space grid, since there is no guarantee that the average E is in the correct place.

Carlo
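[A rough numpy sketch of the rebinning scheme as Carlo describes it -- the Mathematica original is not shown in this thread, so all names and details here are invented: bin edges are placed around an even k grid, E and mu are averaged within each bin, and a final interpolation moves the averages onto the exact grid.]

    import numpy as np

    ETOK = 3.8099821   # hbar^2 / (2 m_e) in eV * Ang^2

    def rebin_onto_k_grid(energy, mu, e0, dk=0.05):
        """Average E and mu within k-bins of width dk, then put the
        bin averages onto an exactly even k grid."""
        k_data = np.sqrt(np.maximum(energy - e0, 0.0) / ETOK)
        k_out = np.arange(dk, k_data.max(), dk)
        # bin edges halfway between neighboring output k values
        edges = np.concatenate((k_out - dk / 2, [k_out[-1] + dk / 2]))
        idx = np.digitize(k_data, edges) - 1
        k_avg, mu_avg = [], []
        for i in range(k_out.size):
            sel = idx == i
            if sel.any():                  # skip empty bins
                k_avg.append(k_data[sel].mean())
                mu_avg.append(mu[sel].mean())
        # the averaged k values land near, not exactly on, the grid,
        # so finish with an interpolation onto the exact grid
        return k_out, np.interp(k_out, np.array(k_avg), np.array(mu_avg))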
Hi Carlo,

On Tue, 16 Jul 2002, Carlo U. Segre wrote:
> First of all, thanks! I just put athena and ifeffit on our beamline
> computer at MR-CAT for our users to assess data as they take it. It
> proved to be very easy for the students and newbies to figure out.
Thanks!! Like Bruce said, that's great news.
> The question that I have is how does IFEFFIT handle continuous scan
> data, where the point spacing is almost but not quite uniform in
> energy all the way through the scan.
>
> I have noticed the error of misordered data which can often show up
> in the continuous scans. We will probably try to fix that ourselves.
>
> What is done with the high density data when converting to k-space?
> Do you rebin (averaging both E and mu data) or do you use a
> smoothing fit to take advantage of the statistics present in the
> excess data points, or do you just interpolate and throw away the
> extra statistics?
It's supposed to handle data binned from continuous scans, including data whose order is not strictly increasing in energy. Of course, it could be making mistakes: we haven't tested this very extensively. There's also the possibility that it's not working as expected (or really desired). What's supposed to happen with out-of-order data is:

-- read_data() normally does not re-order the data. This is because it's not always obvious which column to use to order the data (some data is given as a function of mono angle, energy might not be the first column, etc).

-- if you know (or athena can help you guess) which column should be in strictly increasing order, you can use

     read_data(file =... , sort = 1)

   where 'sort=n' means to sort all the data so that the nth column is strictly increasing. At this point, duplicate values are preserved.

-- the pre_edge() and spline() commands should handle poorly sorted energy/xmu data (including duplicate energy values: here it uses the average of the duplicate values), and will generate arrays for normalized xmu, and background arrays, that exactly match the input energy array (that is, not necessarily sorted). That way, point-by-point subtraction will still work as expected.

-- spline() generates k and chi arrays using a 0.05 Ang^-1 grid and a three-point interpolation of the chi(E) data. That could possibly be improved, I suppose.

So my first guess would be that using read_data(..., sort=1) would help reading the qexafs data.

Hope that helps!

--Matt
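[For anyone pre-treating files outside ifeffit, here is a small numpy sketch -- my own illustration, not ifeffit's internals -- of the same two steps Matt describes: sort on the energy column, then average mu over duplicate energies.]

    import numpy as np

    def sort_and_average(energy, mu):
        """Sort both arrays on energy (what read_data(..., sort=1)
        does), then replace duplicate energies with a single point
        whose mu is the average (as pre_edge()/spline() do)."""
        order = np.argsort(energy, kind="stable")
        e = np.asarray(energy, dtype=float)[order]
        m = np.asarray(mu, dtype=float)[order]
        uniq, inverse = np.unique(e, return_inverse=True)
        sums = np.zeros(uniq.size)
        counts = np.zeros(uniq.size)
        np.add.at(sums, inverse, m)      # accumulate mu per energy
        np.add.at(counts, inverse, 1.0)  # count points per energy
        return uniq, sums / counts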
Matt said:
> -- if you know (or athena can help you guess) which column should
>    be in strictly increasing order, you can use
>
>      read_data(file =... , sort = 1)
>
>    where 'sort=n' means to sort all the data so that the nth column
>    is strictly increasing. At this point, duplicate values are
>    preserved.
This is extremely similar to what Athena already does. When you select a column in the column selection dialog as containing energy, Athena checks to see that it is strictly increasing. If not, it asks you whether you want to ignore the data or have Athena sort it. If you choose to sort it, then Athena will sort all columns such that they are non-decreasing in energy. It will then remove any data points that are not strictly increasing in energy.

This sorting is all done internally, because when I first ran into the problem I was not aware of the sort argument to read_data. Also, sorting is something for which perl has a particularly expressive and flexible syntax. Athena then hands off the sorted, strictly increasing data to Ifeffit for further handling.

B
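[Athena does this in perl; purely for illustration, and keeping one language for the examples in this thread, the same sort-then-drop-duplicates logic might look like this in numpy (the function name is invented here). Note the contrast with read_data's behavior sketched above: Athena drops repeated energies rather than averaging them.]

    import numpy as np

    def athena_style_sort(energy, *columns):
        """Sort every column on energy so energy is non-decreasing,
        then keep only points where energy strictly increases
        (duplicates are dropped, keeping the first occurrence)."""
        order = np.argsort(energy, kind="stable")
        e = np.asarray(energy)[order]
        keep = np.concatenate(([True], np.diff(e) > 0))
        return (e[keep],) + tuple(np.asarray(c)[order][keep] for c in columns)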