Some follow-up. This, for example, is from an excellent workshop presentation by Rob Scarrow:
Errors from large particles are independent of thickness
The relative (%) variation in thickness depends on the ratio (particle diameter / avg. thickness), so it is tempting to increase the avg. thickness (i.e. increase μx) as an alternative to reducing the particle diameter.
However, simulations of MnO2 spectra for average Δμ0x = 1, 2 or 3 show that the errors in derived pre-edge peak heights and EXAFS amplitude factors are significant when diameter > 0.2 / Δμ0, but that they are not affected by the average sample thickness. (Δμ0 refers to the edge jump)
The equation at right is given by Heald (quoting earlier work by Stern and Lu). D is the particle diameter, μ1 is the absorption coefficient just below the edge, and Δμ = μ(above edge) − μ1.
I've seen similar claims elsewhere, although Scarrow's is particularly clear and unambiguous. The equation Scarrow gives is indeed the one from Lu and Stern, and the simulations are based on that equation.

That Lu-Stern equation is derived for a monolayer of spheres and then experimentally tested with multiple layers of tape. I'm still trying to work through the math to see how it carries over to multiple layers; I'm not convinced that the N divides out as claimed in the article. As Matt says, it wasn't their main point.

There is no question that if the particle size is large compared to an absorption length, there will be nonuniformity and thus distortions. But compare a monolayer of particles with a diameter of 0.4 absorption lengths to four strips of that kind of tape stacked. Do we really think the distortion due to nonuniformity will be as bad in the stacked case as in the monolayer? In practice, I think many transmission samples fall in roughly that regime, so the question isn't just academic.

I'll keep trying to work through the math and let you know what I find.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory
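P.S. To make the stacked-layers comparison concrete, here is a Monte Carlo sketch; this is my own toy model, not Lu and Stern's derivation. It assumes square-packed spheres, layers whose lateral positions are uncorrelated, and made-up absorption values (μD of 0.15 below the edge and 0.4 above, i.e. the diameter is 0.4 absorption lengths above the edge). It compares the apparent edge jump with the true (uniform-sample) edge jump for one monolayer versus four stacked monolayers:

```python
# Monte Carlo toy model (mine, not Lu & Stern's derivation) of the
# thickness-nonuniformity distortion for N stacked monolayers of spheres.
# Assumptions: square-packed spheres (coverage pi/4), laterally uncorrelated
# layers, and made-up mu values of 0.15/D below and 0.4/D above the edge.
import math
import random

random.seed(0)

def sphere_thickness(D):
    """Path length through one monolayer of square-packed spheres at a
    uniformly random lateral position (0 if the ray passes between spheres)."""
    # Sample a point in the D x D square cell around one sphere.
    x = random.uniform(-D / 2, D / 2)
    y = random.uniform(-D / 2, D / 2)
    r2 = x * x + y * y
    if r2 >= (D / 2) ** 2:
        return 0.0  # ray misses the sphere (gap in the packing)
    return 2.0 * math.sqrt((D / 2) ** 2 - r2)  # chord through the sphere

def apparent_mu_x(mu, D, n_layers, n_rays=100_000):
    """Effective absorbance -ln<exp(-mu*t)> for n_layers stacked monolayers."""
    total = 0.0
    for _ in range(n_rays):
        t = sum(sphere_thickness(D) for _ in range(n_layers))
        total += math.exp(-mu * t)
    return -math.log(total / n_rays)

D = 1.0  # work in units of the particle diameter
mu_below, mu_above = 0.15 / D, 0.4 / D  # assumed values, D = 0.4 abs. lengths

ratios = {}
for n in (1, 4):
    jump_apparent = apparent_mu_x(mu_above, D, n) - apparent_mu_x(mu_below, D, n)
    # The true edge jump uses the *average* thickness: <t> per layer = (pi/6) D
    jump_true = (mu_above - mu_below) * n * (math.pi / 6) * D
    ratios[n] = jump_apparent / jump_true
    print(f"{n} layer(s): apparent/true edge jump = {ratios[n]:.3f}")
```

One thing the model makes explicit: if the layers really are statistically independent, then ⟨exp(−μT)⟩ = ⟨exp(−μt)⟩^N for the total thickness T, so in this idealized picture the fractional distortion comes out the same for any N. Whether real stacked tape layers satisfy that independence assumption is, I think, where the interesting part of the math lies.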