Hi Norbert,

I hope to be able to add my 2 cents to the "global minimization" discussion too. We're just starting a beam run, so I can't respond very quickly to emails, and thought I should respond to this one first. And I had a great time at the XAFS12 conference too!

Restraints are fairly new and not very well documented. In fact, parts of their implementation may change (more on that below), and I'm open to suggestions on how to make this easier/better. You probably know much of what's below, but I'll bore everyone on the list with a more complete description of restraints.

In a normal least-squares fit, you try to minimize chi_square:

                  N  / model_i - data_i \ 2
    chi_square = Sum | ---------------- |
                 i=1 \     epsilon_i    /

One view of this problem is that you have a vector f of length N that you want minimized in the least-squares sense:

    f_i = (model_i - data_i) / epsilon_i    for i = 1, N

and that is the vector the minimization routines in feffit() work on. A restraint simply adds another term, lambda, to the vector to minimize, so that

    f_N+1 = lambda

Really, that's all there is to the mathematics. To use restraints in ifeffit, you simply give the name of a scalar to evaluate as the lambda term, and that term is appended to the vector to be minimized in the least-squares minimization. This allows your restraint to depend on the variables in a fairly arbitrary way.

The interpretation and implications of restraints are more interesting. If you have some expected value for a physical quantity that depends on the fitting variables, you can mimic the rest of the chi_square terms:

    lambda = (calculated_value(x) - predicted_value) / uncertainty

where the calculated value depends on the fitting variables x. That is, you can include some _external knowledge_ of the system that can be modeled along with your XAFS data. The predicted value is now part of your extended data, and the calculated value of that quantity is built from an expanded model.
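To make the "appended term" idea concrete, here is a minimal sketch in plain Python/NumPy + SciPy (not ifeffit's internals — the line-fit model, the numbers, and the 1.5 +/- 0.05 restraint are all made up for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 10, 21)
data = 2.0 * x + 1.0               # ideal "measured" data
epsilon = 0.1                      # assumed uniform measurement uncertainty

def residual(p, restrain=False):
    slope, offset = p
    model = slope * x + offset
    f = (model - data) / epsilon   # the usual N terms of chi_square
    if restrain:
        # f_N+1 = lambda: penalty for the offset straying from 1.5 +/- 0.05
        lam = (offset - 1.5) / 0.05
        f = np.append(f, lam)
    return f

fit_plain = least_squares(residual, [1.0, 0.0], args=(False,))
fit_restr = least_squares(residual, [1.0, 0.0], args=(True,))
```

Without the restraint the fit recovers the offset of 1.0 exactly; with it, the extra residual term pulls the offset part-way toward 1.5, exactly as an extra "data point" would.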
Maybe a simpler view is that you're adding a penalty if some value strays too far from an expected value. For example, you could restrain a distance to a value you believe from diffraction measurements:

    guess delta_R1       = 0.
    def   R1_calc        = delta_R1 + 2.30
    set   R1_expected    = 2.32
    set   R1_uncertainty = 0.03
    def   R1_lambda      = (R1_calc - R1_expected) / (R1_uncertainty)
    feffit(....., restraint = R1_lambda, ...)

At the EXAFS conference, I presented using the Bond Valence model as a restraint. Here the "known valence" is extra data, and the parameterized Bond Valence model relates R and N to the predicted valence for the central atom.

An important point is the scaling, or relative weight, of the traditional (data - model) portion and the restraint. This is currently _totally_ up to the user, and is controlled through the scale of lambda itself. That means (R1_uncertainty) above works not only as the uncertainty in the restraint on R1, but also as a relative weight between the restraint and the rest of the (data - model) part of the vector to be minimized. It would be nice to be able to control the weighting factors better, but I'm not sure how to do this in general. Then again, feffit() also uses a constant value of epsilon_i, which makes a similar assertion about the relative weights of the data.

Currently, you can have up to 10 different restraints. Using restraints and multiple data sets together is somewhat painful right now. Due to an implementation mistake (bug??), all restraints have to be given with the _last_ instance of feffit(). Hopefully this will get fixed soon. It's been proposed that restraints be moved from a feffit() keyword to a command of their own, to put them on more equal footing with 'set', 'guess', and 'def'.

OK, but on to your questions:
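A quick back-of-the-envelope check (plain Python, numbers taken from the R1 example above) of how R1_uncertainty doubles as a relative weight: shrinking it by 10x makes the same deviation cost 100x more in chi_square.

```python
def restraint_chi2(r1_calc, r1_expected=2.32, r1_uncertainty=0.03):
    """Squared contribution of the R1 restraint term to chi_square."""
    lam = (r1_calc - r1_expected) / r1_uncertainty
    return lam ** 2

# a hypothetical fit result 0.03 Ang above the expected distance:
loose = restraint_chi2(2.35, r1_uncertainty=0.03)    # one "sigma" out -> ~1
tight = restraint_chi2(2.35, r1_uncertainty=0.003)   # same offset, 10x weight -> ~100
```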
> I see how you can say that a value should be near some number you know
> from other methods. But how do I apply a range? Of course, I could
> always use the midpoint of the restraint interval and use the way
> described - but can you do it in other ways?
You could probably come up with a "flatter" function than the simple linear

    lambda = (model - data) / uncertainty

Off the top of my head, I'm not sure which would be best. Taking

    lambda = (model - min(max(data, min_val), max_val)) / uncertainty

would prevent the penalty from getting too big, and using

    lambda = sqrt(abs(model - data) / uncertainty)

would ease the penalty. Perhaps using two restraints could give strong penalties for going above some value and below another?? If not, and if such "soft bounds on variables" are really what is desired, they could perhaps be added in some other way...
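The variants above can be sketched as plain Python functions returning the lambda term that would be appended to the fit vector (my naming and my reading of the clipped form — here adapted so it is flat inside an interval, which is one way to realize a range restraint; none of this is ifeffit syntax):

```python
import numpy as np

def clipped_lambda(model, lo, hi, uncertainty):
    # zero penalty while model stays inside [lo, hi]; linear growth outside
    target = min(max(model, lo), hi)
    return (model - target) / uncertainty

def sqrt_lambda(model, data, uncertainty):
    # sqrt form: penalty grows more slowly than the simple linear version
    return np.sqrt(abs(model - data) / uncertainty)

def one_sided_lambdas(model, lo, hi, uncertainty):
    # two one-sided restraints: one punishes going above hi, one below lo
    above = max(model - hi, 0.0) / uncertainty
    below = max(lo - model, 0.0) / uncertainty
    return above, below
```

For a range of, say, [2.30, 2.34] with uncertainty 0.03, both the clipped form and the pair of one-sided restraints give zero penalty anywhere inside the interval and a growing penalty outside it.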
> The last question concerns the weighting factor: Let's keep with Matt's
> example of setting a restraint on a distance to 2.54. If I use the
> weighting of 0.01 shown in the example, what would be the max. and min.
> values the fit can run to? Is it +/- 1 % deviation or something else?
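A numeric sketch in plain Python makes the trade-off concrete (assumed numbers: a single data term that by itself would put the distance at 2.64 +/- 0.02, fit against the 2.54 restraint with weighting 0.01):

```python
import numpy as np

def chi2(r):
    data_term = ((r - 2.64) / 0.02) ** 2   # what the data alone prefer
    restraint = ((r - 2.54) / 0.01) ** 2   # penalty from the restraint
    return data_term + restraint

r_grid = np.linspace(2.50, 2.70, 20001)    # 1e-5 Ang steps
r_best = r_grid[np.argmin(chi2(r_grid))]
# the compromise lands near 2.56, i.e. 0.02 above the restrained value:
# twice the 0.01 weighting, so the fit is not confined to +/- 0.01
```

So there is no hard maximum or minimum: if the data pull hard enough, the fitted value can sit well outside the restraint's uncertainty.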
The uncertainty/weighting factor really just sets the size of the penalty; it is not a hard constraint on the value.

Hope that's not too incoherent and is enough to get you (and anyone else interested) started on using restraints, or at least to provoke more discussion on how to mess with the fitting algorithm...

--Matt

On Wed, 9 Jul 2003, Norbert Weiher wrote:
> Hello, it's me again,
>
> it was a pleasure for me to hear that IFEFFIT can now handle restraints.
> I read the documentation on handling them on the IFEFFIT homepage but
> one thing is still unclear:
>
> Cheers,
> Norbert
--Matt

 |= Matthew Newville       mailto:newville@cars.uchicago.edu
 |= GSECARS, Bldg 434A     voice: (630) 252-0431 / 1713
 |= Argonne Natl Lab       fax:   (630) 252-0436
 |= 9700 South Cass Ave    cell:  (708) 804-0361
 |= Argonne, IL 60439 USA  http://cars9.uchicago.edu/~newville/