First, the ranges used by the pre_edge() function for finding the edge step for normalization are now better determined from the actual data range rather than simply being hard-wired numbers. These improvements were long overdue and give noticeably better default results for XANES data, especially for relatively low-energy edges such as the S and Cl K edges.
When reading Athena Project files (say, to import into XAS Viewer), the pre-edge and normalization ranges from the Athena Project file will be preserved. When reading in new raw data, or if you select the "Use Default Setting" button on the Normalization Panel for any group in XAS Viewer, the newer defaults will be used. You can always alter these values, but in trying this out on a range of datasets, the new defaults seem to give a noticeable improvement in almost all cases and are rarely worse.
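To make that concrete, here is a minimal sketch of calling pre_edge() from a Larch script, either letting it pick the ranges from the data or overriding them explicitly. The file name and column labels are placeholders, and the explicit range values are only illustrative, not recommendations:

```python
from larch.io import read_ascii
from larch.xafs import pre_edge

# hypothetical data file with energy and mu columns
dat = read_ascii('S_K_edge.dat', labels='energy mu')

# let pre_edge() choose pre-edge/normalization ranges from the data range
pre_edge(dat.energy, dat.mu, group=dat)
print(dat.e0, dat.edge_step)

# or pass explicit ranges (in eV, relative to e0) to override the defaults
pre_edge(dat.energy, dat.mu, group=dat,
         pre1=-50, pre2=-10, norm1=25, norm2=200)
```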
Second, as a few users have pointed out or gently hinted at over many months, there were sometimes significant differences in the background removals between classic Autobk/Ifeffit/Athena and Larch, with Larch sometimes being noticeably and inexplicably worse. I believe this involved two different problems. One was introduced a while back when implementing an estimate of delta_chi, the variance in chi due to the background subtraction. That estimate is important, but I botched some of the configuration of the number of spline knots, the fit range, and Rbkg. The other problem was that "spline clamps" were simply implemented too differently in Larch and Ifeffit/Athena.
I believe this is now working much better: the background results are much more consistent and no longer occasionally turn out "very bad". They are also generally closer to Autobk/Ifeffit/Athena, and perhaps slightly better, because the fit range in R-space is now more consistently determined (instead of wandering +/- a few R data points around Rbkg, where the misfit will often be largest). In addition, `delta_chi` (never calculated in Ifeffit/Athena) is now also more consistent. One consequence of this change is that a very small change in Rbkg (of, say, 0.01 to 0.05 Ang) may give no difference at all in mu0(E) or in chi(k).
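For reference, a hedged sketch of how the options mentioned above (Rbkg, k-weight, spline clamps, the delta_chi estimate) appear when calling autobk() directly in a script; it assumes the `dat` group from the previous snippet, and the exact keyword defaults in your installed version may differ slightly:

```python
from larch.xafs import autobk

autobk(dat.energy, dat.mu, group=dat,
       rbkg=1.0,                 # R below which the spline absorbs "background"
       kweight=1,                # k-weight used in the spline fit (see note on 2 below)
       clamp_lo=1, clamp_hi=1,   # spline clamps at the low- and high-k ends
       calc_uncertainties=True)  # estimate delta_chi from the background fit

# results are placed on the group: dat.bkg (mu0(E)), dat.k, dat.chi, dat.delta_chi
```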
I bring these changes up because I think they will be noticeable. I believe they are both improvements, but let me know if you find cases that you think are now made worse. Possibly related: one thing I definitely noticed in going through several example data sets was that I tended to favor a k-weight of 2 instead of 1 for background subtraction -- so much so that it seemed like this might be a better default. I have not changed this default yet, but if you have a strong opinion on this, it might be a good topic for discussion here.