newville at cars.uchicago.edu
Mon Nov 16 11:18:56 CST 2015
On Fri, Nov 13, 2015 at 6:25 PM, Ritimukta Sarangi <ritimukta at gmail.com> wrote:
> Hi Bruce and Matt,
> Are there any obvious examples where Stern's criterion is shown to be
> incorrect? That is, has 2*delk*delR/pi + 2 been shown to over-predict the
> number of available independent parameters? I have often been asked this by
> students/users and failed to explain it well, except by talking about
> sources of error such as systematic, spline, etc.
That's an excellent question. To be honest, I don't think this has been
tested in great detail, with the main issue being "when can you truly tell
that you've used too many parameters".
We typically rely on "when adding a variable doesn't improve reduced
chi-square" as that measure. But, the definition of reduced chi-square
that we use includes a value for N_idp already, so we're more or less
*imposing* the Stern rule.
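As a rough sketch of that point (not Ifeffit's actual implementation; the
function names here are illustrative), the Stern count and a reduced
chi-square that bakes it in might look like:

```python
import numpy as np

def n_idp(kmin, kmax, rmin, rmax):
    """Stern's estimate of independent points: 2*dk*dR/pi + 2."""
    return 2 * (kmax - kmin) * (rmax - rmin) / np.pi + 2

def reduced_chi_square(residual, n_independent, n_varys):
    """Chi-square scaled by degrees of freedom.  Because N_idp from
    the Stern rule is used for n_independent, the rule is effectively
    imposed on the statistic."""
    chi2 = np.sum(residual**2)
    return chi2 / (n_independent - n_varys)
```

So judging "too many parameters" by whether reduced chi-square improves is
circular to the extent that N_idp already sits in the denominator.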
I think lots of us have observed cases where the Stern rule is definitely
optimistic. And that might be understandable because the Stern rule
doesn't include a separate measure of the noise in the data, or what the
distribution of "the effect" of the parameters would be on the signal.
That is, one can always increase the R range to 20 Angstroms, and so
increase N_idp. But if you're trying to fit 25 variables for the part of
the signal between 1.5 and 2.5 Ang, it is probably going to be very hard
to get good measures of all 25 variables. The Stern rule doesn't really
account for this, except in the sense of giving guidance.
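To put hypothetical numbers on that (the k range of 3-13 inv. Ang is an
assumption for illustration):

```python
import numpy as np

dk = 10.0  # k range of 3-13 inverse Angstroms (illustrative)

# Stern count for the R range actually containing the signal, 1.5-2.5 Ang:
n_idp_narrow = 2 * dk * 1.0 / np.pi + 2   # about 8.4 independent points

# Stern count with the R range stretched to 1.5-20 Ang:
n_idp_wide = 2 * dk * 18.5 / np.pi + 2    # about 119.8 independent points
```

Widening the window makes the Stern count easily "allow" 25 variables, but
the information content about the 1.5-2.5 Ang signal has not changed at all.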
It would be interesting to revisit this with better statistical
treatments of information. There are, for example, statistical measures like
the Akaike Information Criterion (
https://en.wikipedia.org/wiki/Akaike_information_criterion) and related
values that might be useful for better understanding when one is using too
many parameters.
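As a sketch of how AIC could be applied here (illustrative numbers,
assuming Gaussian residuals so the least-squares form of AIC holds):

```python
import numpy as np

def aic(n_data, n_varys, chi2):
    """AIC for a least-squares fit with Gaussian residuals:
    AIC = n * ln(chi2 / n) + 2 * k.  Lower is better."""
    return n_data * np.log(chi2 / n_data) + 2 * n_varys

# Two hypothetical fits to the same 200 data points: adding two
# variables lowers chi-square slightly, but AIC charges for them.
aic_small = aic(200, 8, 50.0)
aic_big = aic(200, 10, 49.5)
```

Here the smaller model has the lower AIC: a marginal drop in chi-square does
not pay for the two extra parameters, which is exactly the kind of judgment
the Stern rule alone cannot make.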