Athena, Artemis, Hephaestus
Please help. I just installed the latest version of Ifeffit onto my computer (Windows XP) using the Windows installer. I can run all the programs except the graphical user interfaces Athena, Artemis, and Hephaestus. When I try to run these programs, a DOS window full of gibberish opens and then closes within about 30 seconds. I have no idea what is going on. Can anyone suggest a fix or workaround?

___________________________
Seth Mueller
U.S. Geological Survey
Box 25046 MS 973 BLDG 20
Denver Federal Center
Denver, CO 80225-0046
phone 303.236.1882
fax 303.236.3200
Hi Seth,

My guess is that the problem is due to some left-over 'PAR' directories that need to be cleaned out. These usually live in C:\Documents and Settings\USER\Local Settings\Temp\par-USER\ where USER is your Windows login name. Look for any par-USER folders and delete everything in them (they should contain a bunch of folders with names like 'cache-**********'). Really, it is safe to delete these.

We (Bruce and I) think we have a fix for this problem in the works and hope to get a new installer out very soon that will avoid it. Sorry for the trouble,

--Matt
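P.S. If you'd rather script the cleanup than hunt for the folders by hand, a rough Python sketch like the one below should do it. It assumes the par-* folders live under your %TEMP% directory (which on Windows XP is the Local Settings\Temp folder mentioned above); adjust the path if yours is somewhere else.

    # Remove left-over PAR cache folders (par-USER and their cache-* contents).
    # Rough sketch only -- check what it is about to delete before trusting it.
    import glob
    import os
    import shutil

    temp = os.environ.get("TEMP", r"C:\Temp")
    for pardir in glob.glob(os.path.join(temp, "par-*")):
        print("removing", pardir)
        shutil.rmtree(pardir, ignore_errors=True)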
Hi Matt,
That fixed it! Thank you.
Seth
___________________________
Seth Mueller
U.S. Geological Survey
Box 25046 MS 973 BLDG 20
Denver Federal Center
Denver, CO 80225-0046
phone 303.236.1882
fax 303.236.3200
Hi all,

A basic question about EXAFS data collection strategies occurred to me today, so I thought I'd see what you all think. I was taught to collect data points spaced equally in k-space. The argument given to me was that collecting data points spaced equally in energy over-samples at high k relative to low k, since of course E ~ k^2.

But before analyzing our data (applying a FT or whatever), we generally k-weight it. So let's say we're in a situation where random noise is significant, and suppose k-weight 1 gives a spectrum that is fairly uniform in amplitude over the k-range we are planning to analyze. (I know there have been various rationales put forth for using various k-weights, but that's not the subject of this post...) If the noise is itself independent of k, then our signal-to-noise ratio decreases with increasing k (constant noise; signal falling off as 1/k). So it would seem reasonable to me to correct for that by taking more data at high k than at low k.

A start in that direction would be to collect data spaced equally in energy, although because Poisson noise falls off only as the square root of the number of counts, that's not enough to give a constant signal-to-noise ratio even for a k-weight of 1, and certainly not enough for k-weights of 2 or 3. But at least it's closer than data that's evenly spaced in k.

I'm sure this is something those of you who work with dilute samples have thought a lot about... I eagerly await your collective wisdom.

--Scott Calvin
Sarah Lawrence College
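P.S. To put rough numbers on the E ~ k^2 argument: since E - E0 = (hbar^2/2m) k^2, which is about 3.81 k^2 in eV for k in inverse Angstroms, a constant energy step dE corresponds to dk ≈ dE/(7.62 k), so an even-E scan puts down points whose density in k grows linearly with k. A quick sketch (Python with numpy; the 2 eV step and the k ranges are just examples):

    # Count how many points of an even 2 eV grid land in various 1 Ang^-1
    # wide k intervals, to see the k^1-like weighting of an even-E scan.
    import numpy as np

    HBARSQ_2M = 3.81  # eV * Ang^2, hbar^2/2m for the photoelectron

    def k_of_E(E, E0=0.0):
        """Photoelectron wavenumber (Ang^-1) for energy E (eV) above edge E0."""
        return np.sqrt(np.maximum(E - E0, 0.0) / HBARSQ_2M)

    E = np.arange(0.0, 1240.0, 2.0)   # even 2 eV grid out to ~k = 18
    k = k_of_E(E)

    for klo, khi in [(3, 4), (9, 10), (17, 18)]:
        n = np.sum((k >= klo) & (k < khi))
        print(n, "points between k =", klo, "and", khi)
    # roughly 13, 36, and 67 points: the point density grows about linearly
    # with k, i.e. an even-E scan behaves like a k^1-weighted collection.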
Hi Scott,

Yep, collecting with an even energy grid is essentially k-weighting the collection by k^1. It's not quite that simple -- see below -- but it's close. Of course, you can also step evenly in k and increase the count time at each point. A common approach when using solid-state fluorescence detectors and/or dilute samples is to k-weight the collection time by k, k^2, or k^3, and count for, say, 2 sec per point at low k and ramp up to 10 sec per point at high k (looking at a random recent scan). It definitely helps cut down the total collection time needed to get reasonably clean spectra on dilute systems.

The challenge with using data that's on a fine energy grid is that the routine converting energy to k has to know to use all that data and also know *how* to use all that data. Typically (at least in Ifeffit), data is interpolated from E to an evenly spaced k grid with a fairly simple interpolation scheme. If the energy data are too finely spaced, some data may actually get ignored. Collecting out to k=18 A^-1 with 0.5 eV steps might not smooth out the data as well as you'd hope: since k=18 -> E=1234.4 eV and k=18.05 -> 1241.3 eV, there are ~7 eV between adjacent k-points (assuming Ifeffit's delta_k = 0.05).

This is important for QEXAFS (which typically does sample on a very fine energy grid). I've been told by people doing QEXAFS that a simple boxcar average is good enough for binning QEXAFS data. That's what Ifeffit's rebin() function does. I'd think that a more sophisticated rolling average (convolution) would be better (and not screw up energy resolution), but apparently it's not an issue.

--Matt
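P.S. For what it's worth, here is a minimal sketch of what I mean by k-weighting the collection time (plain Python; the function and the power-law ramp are just illustrative, not any particular beamline's scan software): step evenly in k and ramp the per-point integration time from t_min at the low-k end to t_max at the high-k end.

    # Toy k-weighted count-time schedule: even k steps, integration time
    # ramping from tmin to tmax with a power law of order n.
    import numpy as np

    def count_times(kmin=2.0, kmax=16.0, dk=0.05, tmin=2.0, tmax=10.0, n=2):
        """Return (k grid in Ang^-1, per-point count time in seconds)."""
        k = np.arange(kmin, kmax + dk / 2, dk)
        frac = ((k - kmin) / (kmax - kmin)) ** n
        return k, tmin + (tmax - tmin) * frac

    k, t = count_times()
    print(len(k), "points, total", round(t.sum() / 60.0, 1), "minutes")
    print("t(k=2) =", round(t[0], 1), "s;  t(k=16) =", round(t[-1], 1), "s")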
Aha! So the reason I was taught to collect at k-space intervals of 0.05 A^-1 was to avoid interpolation problems, NOT the reason I thought at the time. And since I've basically made a career so far out of concentrated samples, it's been OK. But as long as I'm willing to throw in a binning step, I'd use beam time more efficiently if I collected more closely spaced data at high k and less at low k than I currently do. Or I can just increase collection times at high k relative to low k.

Thanks, Matt, that clears things up!

--Scott Calvin
Sarah Lawrence College
Hi Scott, Anatoly,

Anatoly wrote:
I am probably missing the point, but it is not immediately obvious to me why the following two approaches are equivalent in terms of improving the signal-to-noise ratio: a) a constant E-space increment, and b) a constant k-space increment combined with k-dependent integration time.
I think they are pretty much equivalent, though k-weighting the collection time is preferred if for no other reason than it is more flexible.
In a), the data cluster at high E, but each data point in E corresponds to a different final state and thus is unique.
Not quite. Each data point in E corresponds to a set of final states with a finite energy width (core-hole lifetime, energy resolution), so values of mu(E) and mu(E+0.01 eV) are not unique measurements. More importantly, at high energies, values of mu(E) and mu(E + 2 eV) are not unique measures of the EXAFS oscillations due to atoms within 10 Ang of the absorber. The important thing to sample (i.e., analyze) is the frequency components of chi(k) below 10 Ang, not mu(E).
Averaging the E-space data over a small interval Delta E, (1/Delta E) * Int[ xmu(E) dE ], is not equivalent to the time average of xmu(E) collected at a fixed E: (1/T) * Int[ xmu(E) dt ].
As long as you integrate within the energy resolution limit, it is. But in general you're right: this is why I would think that a rolling average (e.g., convolving with a 2 eV Lorentzian) would be the best way to handle rebinning of QEXAFS data. I believe the boxcar average works well enough because we're interpolating to a fine k-grid (see the rough rebinning sketch at the end of this message). With a k-grid of 0.05 A^-1, you're sampling distances out to 31 Ang (10*pi). So if you lose a little resolution in k because of sloppy sampling with a boxcar average, the errors in chi below 8 Ang are going to be tiny.
Thus, k^n-weighted integration time, to my mind, is the only proper way of reducing statistical noise.
But binning data taken in constant energy steps onto a k=0.05 grid does work pretty well. I'd challenge someone to show a significant difference between that and k^1 weighting.

Anyway, my experience is that you start k-weighting the collection time when statistical noise would otherwise dominate. Even in those situations you almost always collect long enough that statistical noise no longer dominates; k-weighting the collection time just gets you to that condition faster than not k-weighting. There are, as Jeremy, Carlo, and Scott mentioned, strategic reasons for both methods. With constant E steps you may also be able to remove glitches better, because you may not have to throw out as many points (I wouldn't bet on this, but maybe...). Then again, it's either slower (step scanning) or much faster (slew scanning) to go in constant energy steps. If you're using a solid-state detector for dilute samples, QEXAFS is probably not going to work.

Scott wrote:
Aha! So the reason I was taught to collect at k-space intervals of 0.05 A^-1 was to avoid interpolation problems, NOT the reason I thought at the time.
I think the original reason you gave (that collecting too finely in energy unnecessarily oversamples chi(k)) is the correct reason for sampling evenly in k. Using constant energy steps definitely oversamples chi(k) at high k: there's no point in collecting 14 data points between 18.0 Ang^-1 and 18.1 Ang^-1 (which are 14 eV apart) because chi(k) is not changing that rapidly. 20 data points per Ang^-1 is plenty good enough. Using the common approach of sampling evenly in k, with delta_k = 0.05 Ang^-1, the data handling procedures can use linear interpolation and not lose too much resolution for the important data (i.e., the stuff between 1 and 8 Ang).
And since I've basically made a career so far out of concentrated samples, it's been OK. But as long as I'm willing to throw in a binning step, I'd use beam time more efficiently if I collected more closely spaced data at high k and less at low k than I currently do. Or I can just increase collection times at high k relative to low k.
K-weighting the collection time is certainly easy enough to do (though I guess you have to convince the beamline scientists to update the software). If you can't do this, rebinning isn't so bad either. --Matt
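P.S. Here is the rough rebinning sketch I mentioned above -- a bare-bones boxcar average of finely sampled mu(E) onto an even k grid. This is only meant to illustrate the idea; it is not Ifeffit's actual rebin() code, and the edge energy and toy data are made up.

    # Boxcar-rebin mu(E) sampled on a fine energy grid onto an even k grid:
    # every raw point is assigned to the nearest k-grid point and averaged.
    import numpy as np

    HBARSQ_2M = 3.81  # eV * Ang^2

    def rebin_to_k(energy, mu, e0, dk=0.05):
        """Boxcar-average mu(E) (eV) onto an even k grid of spacing dk (Ang^-1)."""
        k_raw = np.sqrt(np.maximum(energy - e0, 0.0) / HBARSQ_2M)
        kgrid = np.arange(0.0, k_raw.max(), dk)
        idx = np.clip(np.round(k_raw / dk).astype(int), 0, len(kgrid) - 1)
        counts = np.bincount(idx, minlength=len(kgrid))
        sums = np.bincount(idx, weights=mu, minlength=len(kgrid))
        good = counts > 0
        return kgrid[good], sums[good] / counts[good]

    # toy example: 0.5 eV steps out to ~k = 18, with a little noise
    E = np.arange(7112.0, 7112.0 + 1240.0, 0.5)      # 7112 eV = example edge
    mu = np.exp(-0.002 * (E - 7112.0)) + 0.01 * np.random.randn(E.size)
    k, mu_k = rebin_to_k(E, mu, e0=7112.0)
    print(E.size, "raw points ->", k.size, "k-grid points")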
Matt and Scott:

On Thu, 25 Aug 2005, Matt Newville wrote:
This is important for QEXAFS (which typically does sample on a very fine energy grid). I've been told by people doing QEXAFS that a simple boxcar average is good enough for binning QEXAFS data. That's what Ifeffit's rebin() function does. I'd think that a more sophisticated rolling average (convolution) would be better (and not screw up energy resolution), but apparently it's not an issue.
I have been playing with the Athena smoothing and rebinning functionalities, and I think that I prefer the rebinning because smoothing tends to attenuate sharply peaked structure. A rolling average might be good too, but I haven't tried it much. My guess is that for gentle features such as those in the EXAFS region, rebinning, rolling averages, and smoothing will all give statistically indistinguishable results. I may be wrong.

Carlo

--
Carlo U. Segre -- Professor of Physics
Associate Dean for Special Projects, Graduate College
Illinois Institute of Technology
Voice: 312.567.3498  Fax: 312.567.3494
Carlo.Segre@iit.edu  http://www.iit.edu/~segre
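P.S. A quick toy check of that guess (plain numpy; nothing like Athena's actual smoothing or rebinning code, and the peak width, bin size, and number of smoothing passes are arbitrary):

    # Compare how a boxcar rebin and repeated 3-point smoothing attenuate a
    # sharp peak. Noise is left out so the printed numbers are deterministic.
    import numpy as np

    x = np.arange(0.0, 50.0, 0.1)                      # fine grid, 0.1 'eV' steps
    signal = np.exp(-0.5 * ((x - 25.2) / 0.2) ** 2)    # sharp peak, centered in a bin

    # (a) boxcar rebin by a factor of 5 (0.5 'eV' bins)
    rebinned = signal[: (x.size // 5) * 5].reshape(-1, 5).mean(axis=1)

    # (b) ten passes of a 3-point moving average on the fine grid
    smoothed = signal.copy()
    kernel = np.array([0.25, 0.5, 0.25])
    for _ in range(10):
        smoothed = np.convolve(smoothed, kernel, mode="same")

    print("true peak height    :", round(signal.max(), 3))    # 1.0
    print("rebinned peak height:", round(rebinned.max(), 3))  # ~0.8
    print("smoothed peak height:", round(smoothed.max(), 3))  # ~0.66
    # How much each loses depends on the peak width, the bin width, and the
    # number of passes; for the gentle oscillations in the EXAFS region both
    # effects are small.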
participants (4)
- Carlo Segre
- Matt Newville
- scalvin@slc.edu
- Seth Mueller