Hi Scott,

Yep, collecting with an even energy grid is essentially k-weighting the collection by k^1. It's not quite that simple -- see below -- but it's close. Of course, you can also step evenly in k and increase the count time at each point. A common approach when using solid-state fluorescence detectors and/or dilute samples is to k-weight the collection time by k, k^2, or k^3, and count for, say, 2 sec per point at low-k and ramp up to 10 sec per point at high-k (looking at a random recent scan). It definitely helps cut down the total collection time needed to get reasonably clean spectra on dilute systems.

The challenge with using data on a fine energy grid is that the routine converting energy to k has to know to use all that data, and also know *how* to use it. Typically (at least in ifeffit), data is interpolated from E onto an evenly spaced k grid with a fairly simple interpolation scheme. If the energy data are too finely spaced, some data may actually get ignored. Collecting out to k=18 A^-1 with 0.5 eV steps might not smooth out the data as well as you'd hope: since k=18 -> E=1234.4 eV and k=18.05 -> E=1241.3 eV, there's ~7 eV between adjacent k-points (assuming ifeffit's delta_k = 0.05).

This matters for QEXAFS, which typically does sample on a very fine energy grid. I've been told by people doing QEXAFS that a simple box-car average is good enough for binning QEXAFS data; that's what Ifeffit's rebin() function does. I'd think that a more sophisticated rolling average (convolution) would be better (and wouldn't degrade the energy resolution), but apparently it's not an issue.

--Matt
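
A minimal sketch (in Python) of the count-time ramp described above. The 2 sec / 10 sec endpoints and the k^2 weighting are just example numbers, and count_time is a made-up name for illustration, not part of any data-collection macro:

```python
import numpy as np

def count_time(k, t_min=2.0, t_max=10.0, k_max=18.0, weight=2):
    """Per-point count time (sec), ramping from t_min near k=0 up to
    t_max at k=k_max, scaled as (k/k_max)**weight (weight = 1, 2, or 3)."""
    k = np.asarray(k, dtype=float)
    return t_min + (t_max - t_min) * (k / k_max) ** weight

kgrid = np.arange(3.0, 18.0 + 0.05, 0.05)   # even k grid, delta_k = 0.05 Ang^-1
total = count_time(kgrid).sum()
print(f"{len(kgrid)} points, total count time ~ {total/60:.0f} minutes")
```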
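The k-to-E numbers above come from E - E0 = hbar^2 k^2 / 2m, i.e. roughly 3.81*k^2 in eV with k in Ang^-1. A quick check:

```python
import numpy as np

ETOK = 3.8099821   # hbar^2 / (2 m_e), in eV * Ang^2

def ktoe(k):
    """photoelectron wavenumber (Ang^-1) -> energy above E0 (eV)"""
    return ETOK * np.asarray(k, dtype=float) ** 2

print(ktoe(18.00))                 # -> 1234.4 eV
print(ktoe(18.05))                 # -> 1241.3 eV
print(ktoe(18.05) - ktoe(18.00))   # -> ~6.9 eV between adjacent k-points
```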
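And a sketch of what a box-car rebin looks like in practice. This is just the idea, not what Ifeffit's rebin() actually does internally, and the bin-edge choice (halfway between points of the target grid) is my own assumption:

```python
import numpy as np

def boxcar_rebin(energy, mu, new_energy):
    """Average (energy, mu) data into bins centered on new_energy; bin
    edges are taken halfway between adjacent points of new_energy."""
    energy, mu = np.asarray(energy), np.asarray(mu)
    new_energy = np.asarray(new_energy)
    edges = np.concatenate(([-np.inf],
                            0.5 * (new_energy[1:] + new_energy[:-1]),
                            [np.inf]))
    out = np.empty(len(new_energy))
    for i in range(len(new_energy)):
        inbin = (energy >= edges[i]) & (energy < edges[i + 1])
        out[i] = mu[inbin].mean() if inbin.any() else np.nan
    return out

# e.g. QEXAFS-like data on a 0.1 eV grid, rebinned to 2 eV steps
e_fine = np.arange(7100.0, 7700.0, 0.1)
mu_fine = np.exp(-(e_fine - 7120.0)**2 / 50.0) + 0.01 * np.random.randn(len(e_fine))
e_coarse = np.arange(7100.0, 7700.0, 2.0)
mu_coarse = boxcar_rebin(e_fine, mu_fine, e_coarse)
```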