Hello all,
I have been a frequent user of Athena for many years, mostly for interpreting P K-edge XANES spectra. Until last week I thought that the R factor in Athena was always defined as:
sum( [data_i – fit_i]^2 )
-------------------------------
sum( data_i^2 )
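For concreteness, here is a minimal sketch of that definition in Python/NumPy (the array names data and fit are placeholders for the normalized mu(E) and the LC fit evaluated on the fitting grid):

    import numpy as np

    def r_factor_textbook(data, fit):
        # Misfit normalized by the sum of squared data values:
        # sum( [data_i - fit_i]^2 ) / sum( data_i^2 )
        data = np.asarray(data, dtype=float)
        fit = np.asarray(fit, dtype=float)
        return np.sum((data - fit) ** 2) / np.sum(data ** 2)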
This definition is also the one given in the online manual, and it has been stated by me and by other colleagues in a number of papers dealing with P K-edge XANES. As it turns out, though, it is not what Athena computes for normalized XANES spectra!

I realized this when I played around with a number of my old LC fits in Excel. While the chi-square value (or, more precisely, the sum of squared residuals) was reproduced perfectly, the “R factors” I obtained with the above definition were consistently a factor of 2 to 3 lower than what Athena reports. I then consulted the Demeter programming documentation (https://bruceravel.github.io/demeter/pods/Demeter/LCF.pm.html), which states that, for normalized mu(E), “Demeter thus scales the R-factor to make it somewhat closer to 10^-2”. However, the equation given on that page reproduces Athena’s R factor even more poorly, so I won’t reiterate it here.

After inspecting the Perl code and trying out different alternatives in Excel, I now believe that the following equation is a more accurate definition of the R factor (correct me if I’m wrong!):
sum( [data_i – fit_i]^2 )
-------------------------------
sum( [data_i – avg data]^2 )
where “avg data” is the arithmetic mean of the data over the LC fitting range. It would be great if others could confirm this. As far as I understand, this won’t affect any of the interpretations we have made over the years; it only changes our understanding of what the reported R factor actually is…
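For completeness, here is the same kind of sketch for this definition (same placeholder arrays as above; this reflects my reading of the Perl source, not an official Demeter routine):

    import numpy as np

    def r_factor_mean_centered(data, fit):
        # Misfit normalized by the variation of the data about its mean:
        # sum( [data_i - fit_i]^2 ) / sum( [data_i - avg]^2 )
        data = np.asarray(data, dtype=float)
        fit = np.asarray(fit, dtype=float)
        avg = np.mean(data)
        return np.sum((data - fit) ** 2) / np.sum((data - avg) ** 2)

Since normalized mu(E) is of order 1 over a typical fitting range, sum( data_i^2 ) is larger than sum( [data_i - avg]^2 ), so the mean-centered denominator yields a larger R factor. That would be consistent both with the factor of 2 to 3 I observed in Excel and with the documentation’s remark about scaling the R factor closer to 10^-2.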