Olga,
On Sat, Mar 7, 2015 at 12:26 AM, Olga Kashurnikova wrote:
I'm sorry for the long message. That is because I did not mean to explain the approach itself, but to understand what is going on with the statistics formulae in IFEFFIT. I understood you to say that k-space is a problem because you didn't rely on it. I thought it was a bad idea for the IFEFFIT statistics, not for Bayes.
I'm sorry, but I simply do not understand what this sentence means. K-space is not a problem because I don't rely on it. I don't rely on k-space because it is a problem. Specifically, using k-space neglects to properly filter out frequency components that we know we're not trying to model.
I would like to know what you mean by 'Bayes doesn't dictate the space'; I couldn't understand it from the papers and books. Maybe you can help me find where it is stated and what the math is? I really didn't know anything about it; in every case I have seen, the analysis was done on the original spectra.
No analysis method specifies the independent variable(s) 'x'. In fact, usually the model is described as a function *of the parameters*, and any independent variable(s) are ignored. That the data happens to also be a function of some 'x' (whether that be time, frequency, energy, wavenumber, distance, voltage, ....) is not important mathematically. Data can be transformed many ways, and can always be modeled statistically. For EXAFS the data in k-space is certainly not any more real or fundamental than the data in R-space.
I have statistically good fits for these compounds, but with constraints, and a Bayesian approach could help decide which parameters are not meaningful. It is hard to decide between models without it.
There are many statistical tests one can do to decide between models and parameters. See, for example, F-tests, the Akaike Information Criterion, the Bayesian Information Criterion, and so on.
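Just to illustrate, here is a rough sketch (plain Python with numpy and scipy, and entirely made-up numbers rather than results from any real fit) of comparing a constrained model to one with extra parameters using AIC, BIC, and an F-test:

import numpy as np
from scipy.stats import f as f_dist

def aic(chi2, nvarys):
    # Akaike Information Criterion (Gaussian errors, up to an additive constant)
    return chi2 + 2 * nvarys

def bic(chi2, nvarys, npts):
    # Bayesian Information Criterion: penalizes extra parameters more strongly
    return chi2 + nvarys * np.log(npts)

n_idp = 18                     # independent points in the fit range (made up)
chi2_a, nvar_a = 35.0, 5       # hypothetical constrained model
chi2_b, nvar_b = 27.0, 8       # hypothetical model with 3 extra parameters

for name, c2, nv in (("constrained", chi2_a, nvar_a), ("extended", chi2_b, nvar_b)):
    print("%-12s AIC = %6.1f   BIC = %6.1f" % (name, aic(c2, nv), bic(c2, nv, n_idp)))

# F-test for the nested pair: is the drop in chi-square bigger than expected by chance?
dof_b = n_idp - nvar_b
fval = ((chi2_a - chi2_b) / (nvar_b - nvar_a)) / (chi2_b / dof_b)
print("F = %.2f, confidence level for keeping the extra parameters = %.2f"
      % (fval, f_dist.cdf(fval, nvar_b - nvar_a, dof_b)))

The model with the lower AIC/BIC is preferred; if the extra parameters don't lower chi-square enough, the simpler model wins.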
I will try a test in R-space now, and will simplify the test if you think that is better, but I am not sure how to carry it through to the end. Should I use epsilon_R and simply use the FT-transformed data and model, and will the Bayes statistics then be the same?
I'm not really sure what you're doing, so I can't answer this. Like, I have absolutely no idea what you're doing for "Bayes statistics". It seems from subsequent messages that you may have figured out the issue, but I don't really understand what you're trying to do. Was calculating chi-square by hand the major stumbling block? That would be much easier with Larch than Ifeffit.
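If it was, a bare-bones sketch of the feffit-style chi-square arithmetic (plain numpy, not Ifeffit or Larch code; the array names in the usage comment are placeholders) would be something like:

import numpy as np

def chisquare(data, model, eps, n_idp, nvarys):
    # Feffit-style chi-square: residual scaled by the uncertainty eps,
    # summed, then rescaled by the number of independent points
    # (n_idp ~ 2*delta_k*delta_R/pi for a Fourier-filtered EXAFS fit).
    resid = (np.asarray(data) - np.asarray(model)) / eps
    chi2 = (n_idp / float(len(resid))) * np.sum(np.abs(resid)**2)
    red_chi2 = chi2 / (n_idp - nvarys)
    return chi2, red_chi2

# chi_data, chi_model, eps are placeholders for arrays on the same grid:
# chi2, red_chi2 = chisquare(chi_data, chi_model, eps, n_idp=18, nvarys=5)

Larch's feffit() reports chi-square, reduced chi-square, and an epsilon estimate for you, so something like this is really only useful for checking numbers by hand.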
I'm not sure the noise can be treated as constant in this case; you see, it depends on the k-weighting and so on. The uncertainty comes from the noise (which can be treated as Gaussian) and from mu0, and it is not quite clear how to add the co-fitting of mu0 then.
Why would co-refining the background function change the methodology? One models some signal (say, "chi(k) + mu0(k)") and then transforms/projects/samples the data and model and compares them. If the data has been transformed, you want the uncertainty in that transformed data.
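To make that concrete, here is a minimal sketch in plain numpy with made-up data; it is not Ifeffit or Larch code, and the window and normalization only approximate what xftf does. It transforms data and model the same way, estimates the uncertainty of the transformed data from its high-R portion, and compares the two over a fit range:

import numpy as np

def xafs_ft(k, chi, kweight=2, kmin=3.0, kmax=12.0, dk=1.0, nfft=2048):
    # k-weight, apply a Hanning-style window, zero-pad, and FFT to get chi(R)
    kstep = k[1] - k[0]
    win = np.ones_like(k)
    win[k < kmin - dk/2] = 0.0
    win[k > kmax + dk/2] = 0.0
    rise = (k >= kmin - dk/2) & (k <= kmin + dk/2)
    fall = (k >= kmax - dk/2) & (k <= kmax + dk/2)
    win[rise] = np.sin(0.5*np.pi*(k[rise] - (kmin - dk/2))/dk)**2
    win[fall] = np.cos(0.5*np.pi*(k[fall] - (kmax - dk/2))/dk)**2
    padded = np.zeros(nfft, dtype=complex)
    padded[:len(k)] = chi * k**kweight * win
    chir = (kstep/np.sqrt(np.pi)) * np.fft.fft(padded)[:nfft//2]
    r = np.pi * np.arange(nfft//2) / (kstep * nfft)
    return r, chir

# made-up data: one "shell" at R ~ 2.5 Ang plus white noise, and a model of
# the same shell; a co-refined mu0(k) would simply be added to the model
# before transforming.
np.random.seed(0)
k = np.arange(0.05, 14.0, 0.05)
signal = 0.5*np.sin(2*2.5*k + 1.0)*np.exp(-2*0.003*k*k)
chi_data = signal + 0.002*np.random.randn(len(k))
chi_model = signal

r, chir_data = xafs_ft(k, chi_data)
_, chir_model = xafs_ft(k, chi_model)

# estimate eps_R from the high-R region, where no structural signal is expected
noise = (r > 15.0) & (r < 25.0)
eps_r = np.sqrt(np.mean(np.abs(chir_data[noise])**2))

# compare data and model over the fit range, scaled by that uncertainty
fit = (r >= 1.0) & (r <= 3.0)
resid = (chir_data[fit] - chir_model[fit]) / eps_r
print("eps_R = %.2g,  chi-square in R-space = %.1f"
      % (eps_r, np.sum(resid.real**2 + resid.imag**2)))

I hope that helps, but I'm not really sure. --Matt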