Hi all,

OK--this one's been puzzling me for a while, so I thought I'd see what you all had to say about it. One of my students and I performed fits on several samples of platinum nanoparticles to see if we could extract mean sizes by observing the reduction in coordination numbers as a function of absorber-scatterer distance (Anatoly Frenkel and I, among others, have done past work in this area). This worked OK: we extracted believable sizes that are broadly consistent with what was seen via TEM and XRD (there are complications that arise because of polydispersity, but that's a story for another day).

But here's the issue: the uncertainties Ifeffit generates for the particle size are fairly large compared to the differences between the best-fit values for different samples. These uncertainties are reasonable in the sense that varying the details of the fits (e.g. k-range, k-weight, Debye-Waller constraint schemes, whether resolution and/or third-cumulant effects are included, etc.) causes the best-fit values to jump around within the uncertainty range. Thus if a fit reports 15 +/- 4 angstroms for the particle radius, I can construct fits with reasonable R-factors that yield best-fit results of 12 or 18 angstroms. This is perfectly sensible behavior.

But we have also observed that, as long as we use the same fitting details on all samples, the fitted sizes of all samples move up or down together. In other words, if under one set of fitting conditions the best-fit radius for sample A is 15 +/- 4 angstroms while for sample B it is 17 +/- 5 angstroms, under another set of conditions the best-fit radii might be 18 +/- 6 and 20 +/- 7 angstroms respectively, but the size of B always comes out larger than the size of A. In addition, the relative sizes of A and B (and C and D and...) have since been confirmed by other methods (XRD, experiments involving mixtures of samples, etc.).
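For concreteness, here's a minimal Python sketch of the size dependence I'm fitting, assuming the usual geometric result for spherical particles; the function name and the example numbers are mine, purely illustrative:

```python
def coord_number_ratio(r, R):
    """Average coordination-number reduction for an absorber-scatterer
    distance r in a spherical particle of radius R.

    For a sphere, the surviving fraction of a shell at path length r is
    1 - (3/4)(r/R) + (1/16)(r/R)^3, valid for r <= 2R; it vanishes for
    r >= 2R, since no pair can be farther apart than the diameter.
    """
    x = r / R
    if x >= 2.0:
        return 0.0
    return 1.0 - 0.75 * x + x**3 / 16.0

# Illustrative numbers: first-shell Pt-Pt distance of ~2.77 angstroms
# in a particle of 15 angstrom radius
ratio = coord_number_ratio(2.77, 15.0)
```

Since longer paths are suppressed more strongly, fitting several shells at once is what pins down R--which is also why all the path parameters end up so correlated with the size.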
So it seems as if there should be a way to express the "uncertainty in the relative size" of the two samples: B is larger than A by 13 +/- 5%, for example, regardless of the absolute sizes the fits find. But so far the only way I've thought of to do this is to look at all the fits we've tried that yield R-factors below some cut-off, and just average the results for the differences in size. That seems unsatisfactory, however, since the resulting standard deviation depends intimately on whichever fitting details we happened to try. It would be much better if there were some way to fit the difference in size between the two samples directly, but I haven't thought of a good way to do this yet. Any ideas?

--Scott Calvin
Sarah Lawrence College
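P.S. In case it's unclear what I mean by "just average the results," here is a sketch of the bookkeeping in Python. The numbers are made up for illustration; each pair is the best-fit radius for A and B under one set of fitting conditions that passed the R-factor cut-off:

```python
import statistics

# Hypothetical best-fit radii (angstroms) for samples A and B; each
# pair comes from one set of fitting conditions applied identically
# to both samples.
fits = [(15.0, 17.0), (18.0, 20.0), (12.0, 13.8), (16.0, 18.2)]

# Relative size difference (B - A)/A for each set of conditions
rel_diff = [(rb - ra) / ra for ra, rb in fits]

mean_diff = statistics.mean(rel_diff)
std_diff = statistics.stdev(rel_diff)
print(f"B larger than A by {100 * mean_diff:.0f} +/- {100 * std_diff:.0f} %")
```

The objection stands: that standard deviation reflects which fitting conditions we happened to sample, not a real confidence interval on the relative size.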