On Nov 22, 2010, at 2:55 PM, Scott Calvin wrote:
> But compare a monolayer of particles with a diameter equal to 0.4 absorption lengths with four strips of tape of that kind stacked. Do we really think the distortion due to nonuniformity will be as bad in the latter case as in the first? In practice, I think many transmission samples fall in roughly that regime, so the question isn't just academic.
OK, I've got it straight now. The answer is yes: the distortion from nonuniformity is as bad for four strips stacked as for the single strip. This is surprising to me, but the mathematics is fairly clear. Stacking multiple layers of tape rather than using one thin layer improves the signal-to-noise ratio, but does nothing for uniformity. So there's nothing wrong with the arguments in Lu and Stern, Scarrow, etc.; it's the notion I had that we use multiple layers of tape to improve uniformity that's mistaken.

A bit on how the math works out: for a Gaussian distribution of thickness, the measured absorption is attenuated (to first order) by a term directly proportional to the variance of the distribution; specifically, -ln<exp(-mu*x)> ~ mu*<x> - (mu^2 sigma^2)/2, where <x> is the mean thickness and sigma^2 is the variance from point to point across the beam spot. The standard deviation in thickness from point to point in a stack of N tapes generally increases as the square root of N (typical statistical behavior). This means that the fractional standard deviation falls off as one over the square root of N. In casual conversation, we would usually call a sample with thickness variations of +/-5% "more uniform" than one with variations of +/-8%, so it's natural to think that a stack of tapes is more uniform than a single one. But since the attenuation is proportional to the variance (i.e., the square of the standard deviation), it actually increases in proportion to N. Since the absorption is also increasing in proportion to N, the attenuation remains the same size relative to the absorption, and the spectrum is as distorted as ever.

This result doesn't actually depend on having a Gaussian distribution of thickness. If each layer has 10% pinholes, for instance, at first blush it seems as if two layers should solve most of the problem: the fraction of pinholes drops to 1%. But those pinholes are now set against a sample that is twice as thick, on average, and thus create nearly as much distortion as before. Add to this that 18% of the sample (the points where exactly one of the two layers has a pinhole) is now half the thickness of the rest, and the situation hasn't improved at all. I've worked through the math, and the cancellation of effects is precise: a two-layer sample has exactly the same nonuniformity distortion as a one-layer one. (There is probably a simple and compelling argument as to why this distortion is independent of the number of randomly aligned layers for ANY thickness distribution, but I haven't yet found it.)

* * *

For me personally, knowing this will cause some changes in the way I prepare samples. First of all, I'm going to move my bias more toward the thin end. My samples are generally pretty concentrated, so signal to noise is not a big issue. If I'm also not improving uniformity by using more layers of tape, there's no reason not to keep the total absorption down around 1 rather than around 2.

Secondly, I'll approach the notion of eyeballing the assembled stack of tapes for uniformity, whether with the naked eye or a microscope, with more caution, particularly when teaching new students. The idea that a sample with no evident pinholes is better than one with some is not necessarily true, as the comparison above between a single layer with 10% pinholes and a double layer with 1% demonstrates. Stressing the elimination of visible pinholes will tend to bias students toward thicker samples, but not necessarily better ones.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory
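
P.S. For anyone who wants to see the pinhole example come out of the arithmetic, here is a minimal Python sketch. It assumes each intact layer is exactly one absorption length thick (mu_t = 1.0, an arbitrary choice) and that pinholes in different layers fall independently, as they should for randomly aligned tapes; with those assumptions the beam-averaged transmission of an N-layer stack can be summed exactly over the binomial distribution of intact layers, no Monte Carlo needed.

from math import comb, exp, log

mu_t = 1.0   # absorbance (mu * thickness) of one intact layer; arbitrary choice
p = 0.10     # pinhole fraction per layer, from the example above

for n in (1, 2, 4):
    # At any given beam-spot point, k of the n layers are intact with
    # binomial probability, so the beam-averaged transmission is exact:
    T_avg = sum(comb(n, k) * (1 - p)**k * p**(n - k) * exp(-mu_t * k)
                for k in range(n + 1))
    ideal = n * (1 - p) * mu_t   # absorbance of a perfectly uniform sample
    measured = -log(T_avg)       # absorbance actually inferred from <T>
    print(f"{n} layer(s): ideal = {ideal:.4f}, measured = {measured:.4f}, "
          f"fractional attenuation = {(ideal - measured) / ideal:.4f}")

The fractional attenuation prints the same for every N (about 6.5% with these numbers). That is consistent with the exact cancellation claimed above, and the factorization <exp(-mu*x_total)> = <exp(-mu*x_single)>^N for independently aligned layers may be the simple argument alluded to above: it makes the apparent absorbance exactly N times the single-layer value, so the distortion per unit absorption cannot depend on N, whatever the thickness distribution of a single layer.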