[Ifeffit] Question about transform windows and statistical parameters

Brandon Reese bjreese at gmail.com
Thu May 12 13:40:41 CDT 2011


Thanks again for everyone's very informative and thorough replies; this
mailing list is great!

Bruce, I hope that I didn't convey that reduced chi-square (RCS) isn't
useful.  I use it constantly when figuring out how to appropriately
model my data.  My comment stemmed from what you said about the RCS of a
single fit not being interpretable.  I agree that it would be useful to
say something like "model A reported an RCS 3x that of model B, so for
this analysis model B was used."  But it seems to me that if you had a
table reporting fitted parameters for several samples using model B,
reporting an RCS for each of those samples would amount to reporting an
RCS for a single fit (the "best fit") for each sample, and thus those
RCS values wouldn't be interpretable, unless each sample's fit used a
fixed epsilon, for example the average epsilon across the samples
(assuming the epsilons Artemis reports are reasonably close to one
another).  My logic could certainly be flawed here, but that is the
scenario I envisioned.
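
To make the scenario concrete, here is roughly the dependence I have in
mind (a minimal numpy sketch following the chi-square definition in the
IXS error-reporting recommendation; the function and variable names are
mine, not ifeffit's):

    import numpy as np

    def reduced_chi_square(residuals, epsilon, n_idp, n_var):
        """Ifeffit-style reduced chi-square.

        residuals: (data - model) at the n_data fit evaluation points
        epsilon:   estimated measurement uncertainty
        n_idp:     independent points, roughly 2*(kmax-kmin)*(rmax-rmin)/pi
        n_var:     number of variables floated in the fit
        """
        n_data = len(residuals)
        chi_sq = (n_idp / n_data) * np.sum((residuals / epsilon) ** 2)
        return chi_sq / (n_idp - n_var)

Since epsilon enters only as 1/epsilon**2, a per-sample epsilon that
differs by 10-20% shifts the reported RCS by roughly 20-40% before the
model changes at all, which is the comparability problem I was worried
about.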

Matt, yes, ifeffit's estimate of what epsilon should be differed by a
large factor between the two window functions.  As dk was increased
(with the K-B window), the values of epsilon converged toward the value
reported for the Hanning window.  At dk=5 (for the K-B window), epsilon
was about 25% away from the Hanning-window epsilon (with dk=1), but the
RCS and CS had increased by more than an order of magnitude.  For the
Hanning window I used a dk of 1, and for the K-B window I tried dk
values from 1 to 5.  kmin and kmax were held constant at 2.5 and 11.  I
did notice the large increase in ringing with low dk values and the K-B
window.  The fit quality, by R-factor, was substantially worse with the
K-B window regardless of the dk value.  I will probably stick with the
Hanning window, but I have certainly learned about a few more knobs in
Artemis, thanks.
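
In case it is useful to anyone else poking at the same knobs, this is
the sill convention I understand the Hanning window to use (a minimal
numpy sketch of one common definition; the exact form in ifeffit/Artemis
may differ in details, so treat it as illustrative):

    import numpy as np

    def hanning_window(k, kmin=2.5, kmax=11.0, dk=1.0):
        """sin^2 sills of width dk around a flat top.

        Rises from 0 at kmin - dk/2 to 1 at kmin + dk/2, stays flat,
        then falls back to 0 at kmax + dk/2.  Requires dk > 0.
        """
        win = np.zeros_like(k)
        rise = (k >= kmin - dk / 2) & (k <= kmin + dk / 2)
        win[rise] = np.sin(0.5 * np.pi * (k[rise] - (kmin - dk / 2)) / dk) ** 2
        win[(k > kmin + dk / 2) & (k < kmax - dk / 2)] = 1.0
        fall = (k >= kmax - dk / 2) & (k <= kmax + dk / 2)
        win[fall] = np.cos(0.5 * np.pi * (k[fall] - (kmax - dk / 2)) / dk) ** 2
        return win

    # Sharper sills (smaller dk) have broader Fourier sidelobes, which
    # is the ringing I saw: compare np.abs(np.fft.fft(...)) of windows
    # built with dk = 1 and dk = 5.  (scipy.signal.windows.kaiser gives
    # the Kaiser-Bessel taper shape for the analogous comparison.)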

For a typical group of the data sets I am looking at, the epsilon value
reported by ifeffit varies by 10-20%.  Would setting epsilon to an
average, or to something else a bit better justified than an arbitrary
number, be reasonable for comparing RCS values?  Or would it be best to
leave out the extra complication and report them as is, more in line
with Scott's suggestion?
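
To be concrete about the kind of averaging I mean: since chi-square
scales as 1/epsilon**2, the reported RCS values could presumably be
re-referenced to a pooled epsilon without re-running the fits (a sketch
with made-up numbers, assuming that overall factor is the only epsilon
dependence):

    import numpy as np

    eps = np.array([1.1e-3, 1.3e-3, 1.2e-3])  # ifeffit's per-sample estimates
    rcs = np.array([18.0, 12.0, 15.0])        # RCS from each "best fit"

    eps_pooled = np.sqrt(np.mean(eps ** 2))   # RMS average of the estimates
    rcs_pooled = rcs * (eps / eps_pooled) ** 2

    print(eps_pooled, rcs_pooled)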

Thanks again for all the advice.

Brandon

On Thu, May 12, 2011 at 9:51 AM, Scott Calvin <dr.scott.calvin at gmail.com> wrote:

> Hi Brandon,
>
> Matt and Bruce both gave good, thorough answers to your questions this
> morning. Nevertheless, I'm going to chime in too, because there are some
> aspects of this issue I'd like to put emphasis on.
>
> On May 11, 2011, at 8:46 PM, Brandon Reese wrote:
>
>  I tried your suggestion with epsilon, and the chi-square values came out
> very similar with the different windows.  Does this mean that
> reporting reduced chi-square values in a paper that compared several data
> sets would not be necessary and/or appropriate?
>
>
> Bruce said "no" emphatically, and I say "yes," but I think we've understood
> the question differently. As Bruce says:
>
> Of course, reduced chi-square can only be compared for fitting models which
> compute epsilon the same way or use the same value for epsilon.
>
>
> That's the key point. I've gotten away from reporting values for reduced
> chi-square (RCS). That's a personal choice, and is *not* in accord with
> the International X-Ray Absorption Society's Error Reporting Recommendation,
> available here:
>
> http://ixs.iit.edu/subcommittee_reports/sc/
>
> I think the difficulty in choosing epsilon is more likely to make a reduced
> chi-square number confusing than enlightening. But I *am* moving
> increasingly toward reporting changes in reduced chi-square between fits on
> the same data, and applying Hamilton's test to determine if improvements are
> statistically significant.
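>
> In case the mechanics are helpful, here is a minimal sketch of the
> F-test form often used for Hamilton's test (scipy, placeholder
> numbers; nu is N_idp minus the number of floated variables, and the
> comparison assumes model 2 contains model 1 as a special case):
>
>     from scipy.stats import f as f_dist
>
>     def f_test(chi2_1, nu_1, chi2_2, nu_2):
>         """p-value for the drop in chi-square from model 1 to the
>         bigger model 2 (nu_2 < nu_1); a small p means the
>         improvement is statistically significant."""
>         f_stat = ((chi2_1 - chi2_2) / (nu_1 - nu_2)) / (chi2_2 / nu_2)
>         return f_dist.sf(f_stat, nu_1 - nu_2, nu_2)
>
>     print(f_test(chi2_1=250.0, nu_1=12, chi2_2=150.0, nu_2=10))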
>
>
>  Would setting a value for epsilon allow comparisons across different
> k-ranges, different (but similar) data sets, or a combination of the two
> using the chi-square parameter?
>
>
> Maybe not. After all, the epsilon *should* be different for different
> k-ranges, as your signal to noise ratio probably changes as a function of
> *k*. Using the same epsilon doesn't reflect that.
>
>
>
> In playing around with different windows and dk values my fit variables
> generally stayed within the error bars, but the size of the error bars could
> change by more than a factor of 2.  Does this mean that it would make sense to
> find a window/dk that seems to "work" for a given group of data and stay
> consistent when analyzing that data group?
>
>
> The fact that your variables stay within the error bars is good news. The
> change in the size of the error bars may be related to a less-than-ideal
> value of dk used for the Kaiser-Bessel window.
>
> But yes, find a window and dk combination that seems to work well and then
> stay consistent for that analysis. Unless the data is particularly
> problematic, I'd prefer making a reasoned choice before beginning to fit and
> then sticking with it; *a posteriori* choices for that kind of thing make
> me a little nervous.
>
> * * *
>
> At the risk of being redundant, four quick examples.
>
> Example 1: You change the range of R values in the Fourier transform over
> which you are fitting a data set.
> For this example, RCS is a valuable statistic for letting you know whether
> the fit supports the change in R-range.
>
> Example 2: You change the range of *k* values over which you are fitting
> your data.
> For this example, comparing RCS is unlikely to be useful. You are likely
> trying different k-ranges because you are suspicious about some of the data
> at the extremes of your range. Including or excluding that data likely
> implies epsilon should be changed, but by how much? Thus the unreliability
> of comparing RCS in this case.
>
> Example 3: You change constraints on a fit on the same data range.
> For this example, comparing RCS is very useful.
>
> Example 4: You compare fits on the same data range, with the same model, on
> two different data sets which were collected during the same synchrotron run
> under similar conditions.
> For this example, proceed with caution. You may decide to trust Ifeffit's
> method for estimating epsilon, or you may be able to come up with your own
> (perhaps basing it on the size of the edge jumps). Hopefully issues like
> glitches and high-frequency jitter are nearly the same for both samples,
> which gives you a fighting chance of making reasonable estimates of epsilon.
> Done with a little care, there may be value in comparing RCS for this kind
> of case.
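>
> As a sketch of the "come up with your own" route (illustrative only;
> Ifeffit's built-in estimate is also based on the high-R portion of
> the transform, via Parseval's theorem, though the details differ):
>
>     import numpy as np
>
>     def noise_from_high_r(r, chir_mag, rmin=15.0, rmax=25.0):
>         """Crude noise proxy: RMS of |chi(R)| in a region where no
>         structural signal is expected."""
>         mask = (r >= rmin) & (r <= rmax)
>         return np.sqrt(np.mean(chir_mag[mask] ** 2))
>
>     # For two samples from the same run, one could instead scale one
>     # epsilon by the ratio of edge jumps, since chi = (mu - mu0)/jump:
>     #   eps_2 = eps_1 * jump_1 / jump_2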
>
> --Scott Calvin
> Sarah Lawrence College
>
>