Kristine, Bruce,

I've also noticed some weird things with feffit() not always estimating error bars. I believe the situation can be improved so that errors are estimated for more fits. First, let me clarify the two main cases in which uncertainties are not calculated, and what might be done about them.

1. One of the defined variables doesn't affect the fit. This can happen if you define a variable but it doesn't get used, or even if it is used but has no real effect on the fit (for example, guessing deltaR when N=0). This is supposed to give a message like:

   one or more variables may not affect the fit

In this case, feffit() _should_ list the variables that are most likely unused. I believe:
  a) this error gets triggered too often.
  b) the listing of unused variables is not complete.
  c) errors can be estimated in many of the 'false positive' cases anyway.

2. The fitting algorithm takes too many iterations. This is when it reports 'try a simpler problem or better guesses'.... The maximum allowed number of iterations could be increased (computers are fast these days, right??), but there is currently no way to interrupt a fit without killing the whole program, so I'd be willing to increase it, though not too much. It should also be possible to _try_ to estimate the uncertainties in many of these cases, though the results may not be meaningful.

All of these fixes are possible. Whether that will solve the problem for this case is a different matter.... If you can send an Artemis Project where the errors aren't estimated, that would be helpful.

--Matt
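
P.S. For anyone curious why an unused variable kills the error bars: a rough sketch of the idea, in Python rather than ifeffit's internals (this is an illustration, not feffit()'s actual code). A variable with no effect on the fit produces a zero column in the Jacobian, so the usual covariance matrix inv(J^T J) is singular and uncertainties can't be computed the normal way. A pseudo-inverse can flag the suspect variable and still give uncertainties for the variables that do matter, which is the sort of 'false positive' recovery mentioned in 1c:

```python
# Hypothetical illustration (NOT feffit's source): a fit with one
# variable that has no effect, like guessing deltaR when N=0.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 10, 50)
ydata = 3.0 * x + 1.0          # data made with slope=3, intercept=1

def resid(p):
    slope, intercept, unused = p
    # 'unused' is multiplied by zero, so it cannot affect the residual
    return (slope * x + intercept + 0.0 * unused) - ydata

fit = least_squares(resid, x0=[1.0, 0.0, 0.5])

# A zero column in the Jacobian identifies a variable that doesn't
# affect the fit -- this is what the warning message should report.
J = fit.jac
col_norms = np.linalg.norm(J, axis=0)
suspect = np.where(col_norms < 1e-12 * col_norms.max())[0]
print("variables that may not affect the fit:", suspect)

# inv(J.T @ J) would fail here (singular matrix), but the
# pseudo-inverse still yields finite uncertainties for the
# variables that do matter (scale by reduced chi-square in practice).
cov = np.linalg.pinv(J.T @ J)
errs = np.sqrt(np.diag(cov))
print("estimated uncertainties:", errs)
```

The same zero-column test also suggests why the current check can trigger too often: a variable whose Jacobian column is merely small (but nonzero) can look "unused" even though errors could still be estimated for it.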