OK, I feel I have to weigh in. I work on a microprobe beamline, where sample motions contribute to the noise, and I rarely find that the noise quality of the EXAFS signal, as measured by running a high-order polynomial through the data and looking at the residuals, matches what the number of counts per point would predict. Also, if you are using an analog detector like an ion chamber, you can't measure the true number of detected quanta, so you can't establish the shot-noise limit. Further, there will be systematics, such as background-subtraction artifacts, which will not behave like white noise. For all these reasons, I don't think an attempt to use a literal chi-squared is going to succeed. I don't think I've ever seen anyone report the true noise quality of their data, anyway. Occasionally someone might report the number of counts per point, but as I said, that only sets an upper limit on the noise quality. What is more intuitive, though less rigorous, to report is the R value.
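As a sketch of the polynomial-residual noise estimate described above (all numbers here are invented for illustration, not from any real beamline): fit a high-order polynomial to synthetic data whose true counting statistics are known, and compare the residual scatter to the shot-noise limit 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: synthetic absorption-like data with ~1e6 counts
# per point, so the shot-noise (counting-statistics) limit is 1/sqrt(N) ~ 1e-3
# as a fraction of the signal.
n_pts = 200
x = np.linspace(-1.0, 1.0, n_pts)
counts_per_point = 1.0e6
shot_sigma = 1.0 / np.sqrt(counts_per_point)
signal = 1.0 + 0.05 * np.sin(3.0 * x) * np.exp(-x)
data = signal + rng.normal(0.0, shot_sigma, n_pts)

# The trick described above: run a high-order polynomial through the data and
# take the scatter of the residuals as an empirical noise estimate.
deg = 15
coefs = np.polynomial.chebyshev.chebfit(x, data, deg)
smooth = np.polynomial.chebyshev.chebval(x, coefs)
empirical_sigma = np.std(data - smooth, ddof=deg + 1)  # ddof ~ fitted params

print(f"shot-noise limit    : {shot_sigma:.2e}")
print(f"polynomial residual : {empirical_sigma:.2e}")
```

For clean synthetic data the two numbers agree; on real microprobe data the residual scatter typically comes out larger than the counts would predict, which is the point being made.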
mam
----- Original Message -----
From: "Scott Calvin"
Matt,
Is this the most recent IXAS report on error reporting standards?
http://www.i-x-s.org/OLD/subcommittee_reports/sc/err-rep.pdf
It uses a rather expansive definition of epsilon, which explicitly includes "imperfect" ab initio standards such as FEFF calculations. It indicates that statistical methods, such as the one ifeffit uses to estimate measurement error, yield a lower limit for epsilon, and thus an overestimate of chi-square.
So I think my statement and yours are entirely compatible.
As far as what should be reported, I do deviate from the IXAS recommendations by not reporting chi-square. Of course, I tend to work in circumstances where the signal-to-noise ratio is very high, and thus the statistical uncertainties make a very small contribution to the overall measurement error. In such cases I have become convinced that the R-factor alone provides as much meaningful information as the chi-square values, and that in fact the chi-square values can be confusing when listed for fits on different data. For those working with dilute samples, on the other hand, I can see that chi-square might be a meaningful quantity.
At any rate, I strongly agree that the decision of which measures of fit quality to report should not depend on what "looks good"! That would be bad science. The decision of what figures of merit to present should be made a priori.
--Scott Calvin
Sarah Lawrence College
On Aug 18, 2009, at 10:40 PM, Matt Newville wrote:
Having a "reasonable R-factor" of a few percent misfit and a reduced chi-square of ~100 means the misfit is much larger than the estimated uncertainty in the data. This is not at all unusual. It does not necessarily mean (as Scott implies) that the estimated uncertainty in the data is unreasonably low; it can also mean that there are systematic problems with the FEFF calculations, which do not account for the data as accurately as it can be measured. For most "real" data, it is likely that both errors in the FEFF calculations and a slightly low estimate of the uncertainty in the data contribute to making reduced chi-square much larger than 1.
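A toy numerical sketch may make this concrete (the numbers and the simplified bookkeeping are invented; real EXAFS codes such as ifeffit use the number of *independent* points, not raw data points, in chi-square). A misfit that is a few percent of the signal gives a small R-factor, but if that same misfit is ~10x the estimated noise epsilon, reduced chi-square comes out near 100:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in for chi(k) data, a fit to it, and an estimated noise eps.
n_pts, n_var = 100, 4                                 # data points, variables
data = np.sin(np.linspace(0.0, 6.0 * np.pi, n_pts))  # stand-in signal
misfit = 0.1 * rng.normal(size=n_pts)                 # fit error, rms ~ 0.1
fit = data - misfit
eps = 0.01                                            # estimated noise level

# Simplified definitions: R-factor is fractional misfit; chi-square scales
# the same misfit by the estimated uncertainty.
r_factor = np.sum((data - fit) ** 2) / np.sum(data ** 2)
chi_square = np.sum(((data - fit) / eps) ** 2)
red_chi_square = chi_square / (n_pts - n_var)

print(f"R-factor           ~ {r_factor:.3f}")
print(f"reduced chi-square ~ {red_chi_square:.0f}")
```

Here the R-factor lands near 0.02 (a "reasonable" fit) while reduced chi-square is on the order of 100, because the misfit, whatever its source, is an order of magnitude larger than eps.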
And, yes, the community-endorsed recommendation is to report either chi-square or reduced chi-square as well as an R-factor. I think some referees might find it a little deceptive to report R-factor because it is "acceptably small" but not reduced chi-square because it is "too big".
_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit