Grant Bunker says:
I think the easiest thing to implement in feffit, however, would simply be to do a series of minimizations from (say) 1000 random starting points in the parameter space. This would just entail wrapping the minimization in a loop and adding a way to summarize the results.
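For concreteness, here is a minimal sketch of that loop in Python using scipy rather than feffit itself; the two-parameter toy model, the residual() function, and the parameter bounds are hypothetical stand-ins for an actual EXAFS fit:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical stand-in for an EXAFS fit: two parameters, synthetic "data".
k_grid = np.linspace(2.0, 12.0, 200)
data = 3.0 * np.exp(-0.4 * k_grid) * np.sin(2 * k_grid)
data += rng.normal(scale=0.02, size=k_grid.size)

def residual(p):
    """Misfit between the toy model and the synthetic data."""
    return p[0] * np.exp(-p[1] * k_grid) * np.sin(2 * k_grid) - data

lower, upper = np.array([0.0, 0.0]), np.array([10.0, 2.0])  # assumed bounds

results = []
for _ in range(1000):                         # many random starting points
    start = rng.uniform(lower, upper)
    fit = least_squares(residual, start, bounds=(lower, upper))
    results.append((fit.cost, fit.x))

# Summarize: keep only "adequate" fits, e.g. within 2x the best cost found.
best_cost = min(cost for cost, _ in results)
adequate = [x for cost, x in results if cost < 2.0 * best_cost]
```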
Minimizations starting at different points may end up in the same or different "attractors". If those correspond to bad fits, throw them out; if they are adequate fits, keep them. Quite possibly more than one attractor could fit the data adequately. A cluster analysis of the results should indicate whether different solutions are equivalent (belong to the same attractor) or not.
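A hedged sketch of that follow-up step, grouping the adequate solutions from the loop above with hierarchical clustering to count how many distinct attractors survive; the distance threshold here is an assumed value the analyst would need to tune:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

params = np.array(adequate)              # (n_fits, n_params) from the loop above
scale = params.std(axis=0)
scale[scale == 0] = 1.0                  # guard against constant columns
normed = (params - params.mean(axis=0)) / scale

links = linkage(normed, method="single")            # hierarchical clustering
labels = fcluster(links, t=0.5, criterion="distance")  # assumed threshold

for lab in np.unique(labels):
    members = params[labels == lab]
    print(f"attractor {lab}: {len(members)} fits, "
          f"mean parameters {members.mean(axis=0)}")
```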
I think there are at least two different (albeit related) purposes for "scanning potential surfaces" with IFEFFIT.

One is to identify candidates for good fits. This is not just a matter of improving the fitting routines so that there is a greater probability of finding the true global minimum; as most of us know quite well, the statistically best fit is not necessarily the one that corresponds most closely to physical reality. Grant's suggestion sounds like a really good way of identifying candidates for "good" fits, and thus reducing the possibility that a good solution is being overlooked.

A second purpose for looking at the potential surface is to help the researcher better understand the relationships between the variables being fit (better than just a single correlation number, for example). This may be helpful early in the fitting process, perhaps when a good model has not yet been found. It may also be useful for presenting results to a skeptical audience! For these purposes, Bruce's plan to allow scanning over two variables while fitting the rest sounds very useful.

--Scott Calvin
Sarah Lawrence College
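As a rough illustration of the kind of two-variable scan described above (again in Python/scipy rather than Ifeffit's own syntax, with a made-up three-parameter model), one could pin two parameters on a grid and re-fit the remaining one at every grid point, then contour-plot the resulting chi-square surface:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
k = np.linspace(2.0, 12.0, 200)
data = 3.0 * np.exp(-0.4 * k) * np.sin(2 * k + 0.3)
data += rng.normal(scale=0.02, size=k.size)

def residual_free(free, amp, decay):
    """Misfit with amp and decay pinned; only the phase is re-fit."""
    (phase,) = free
    return amp * np.exp(-decay * k) * np.sin(2 * k + phase) - data

amps = np.linspace(1.0, 5.0, 41)      # grid for the first scanned variable
decays = np.linspace(0.1, 0.8, 41)    # grid for the second scanned variable
chi2 = np.empty((amps.size, decays.size))

for i, amp in enumerate(amps):
    for j, decay in enumerate(decays):
        fit = least_squares(residual_free, x0=[0.0], args=(amp, decay))
        chi2[i, j] = 2.0 * fit.cost   # fit.cost is half the sum of squares

# chi2 can now be contour-plotted to show how the two scanned variables
# trade off against each other once the remaining parameter is optimized.
```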