To my mind, when considering sample preparation the important thing is not so much the "right" thickness as knowing the effects to guard against as the thickness deviates toward the thin or thick side.

As transmission samples become thicker, the problem of "unwanted" photons becomes more severe. Those photons may be harmonics, photons scattered into the I_t detector, or photons from the tails of the resolution curve of the monochromator. As transmission samples become thinner, uniformity becomes more of an issue. If you play with the equations, you'll see that if your sample is a mixture of regions that have a thickness of 1.0 absorption lengths and regions that have a thickness of 2.0 absorption lengths, the spectrum is less distorted than if it is a mixture of 0.5 and 1.0 absorption lengths. So if a sample is on the thick side, it is particularly important to guard against harmonics in the beam and scattered photons. If it is on the thin side, it is particularly important to guard against nonuniformity. (A toy calculation illustrating both failure modes is sketched at the end of this thread.)

To put it another way, the problems are synergistic. With a well-conditioned beam, a uniform sample, and linear detectors, the thickness almost doesn't matter (within reason); at a modern beamline, a total absorption of even 0.05 or 4.0 will work. But as each of those conditions deviates from the ideal, distortions become much more severe.

There's an old joke about someone on a diet going into a fast-food joint and asking for a double bacon cheeseburger, a large fries...and a diet Coke. In XAFS measurements, that attitude actually kind of works, because of the synergies I just discussed. Personally, I trust my ability to condition the beam and minimize scattering more than I trust my ability to make a uniform sample, so I lean a little toward the thicker side.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

On Nov 22, 2010, at 5:13 AM, Welter, Edmund wrote:
Dear Jatin,
the optimum mue*d of 2.x is not derived just from simple photon counting statistics. As Matt pointed out, for transmission measurements at a synchrotron beamline in conventional scanning mode, counting statistics are seldom the issue. Nevertheless, one should avoid measuring subtle changes of absorption at the extreme ends, that is, at transmissions near 0 % or 100 %. In optical photometry this is described by the more or less famous "Ringbom plots", which show how the accuracy of quantitative analysis by absorption measurements (usually, but not necessarily, in the UV/Vis) depends on the total absorption of the sample.
This time the number only comes close to 42: the optimum transmission is 36.8 % (mue = 1). So, to achieve the highest accuracy in the determination of a small Delta c (c = concentration), you should try to measure samples with transmissions near this value (actually the minimum is broad, and transmissions between 0.2 and 0.7 are OK). In our case, though, we are not interested in the concentration of the absorber, but rather in (very) small changes of the transmission, i.e. the absorption, of our samples. Or, using the Bouguer-Lambert-Beer law: in our case mue = -ln(I1/I0) is a function of the absorption coefficient (mue0), while the concentration (c) of the absorber and the thickness (d) of the sample are constant.
mue = -ln(I1/I0) = mue0 * c * d
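A short Python sketch of this optimum (assuming, as in the classical Ringbom treatment, a constant absolute photometric error dT on the measured transmission T):

import numpy as np

# From c = -ln(T)/(mue0*d), a constant absolute error dT in T gives a
# relative concentration error  dc/c = dT / (T * |ln T|),
# which is smallest where T * |ln T| is largest.
T = np.linspace(0.01, 0.99, 981)
rel_err = 1.0 / (T * np.abs(np.log(T)))   # in units of dT

i = np.argmin(rel_err)
print(f"optimum: T = {T[i]:.3f}, i.e. mue = {-np.log(T[i]):.2f}")  # T = 1/e, mue = 1

# The minimum is broad: report the window where the error stays within
# 30 % of its best value.
ok = T[rel_err <= 1.3 * rel_err[i]]
print(f"within 30 % of the optimum: T from {ok.min():.2f} to {ok.max():.2f}")

The minimum at T = 1/e is shallow; over the whole 0.2-0.7 range the relative error stays within roughly 50 % of its best value.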
But then: if the optimum is a mue between 0.35 and 1.6, why do we all measure successfully (OK, more or less ;-) with samples that have a mue between 2 and 3? ...and 0.35 seems desperately small to me! Maybe sample homogeneity is an issue?
Cheers,
Edmund Welter
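To make the trade-off Scott describes concrete, here is a minimal numerical sketch in Python of the two failure modes (a toy model: the 0.3 % leakage fraction and the fixed +/-0.3 absorption-length thickness spread are illustrative assumptions, not beamline values):

import numpy as np

# Toy model: the beam sees a 50/50 mix of two thicknesses, x - s and
# x + s absorption lengths (fixed absolute spread s, e.g. a fixed grain
# size), and a fraction f of the counts in the I_t detector never went
# through the sample (harmonics, scatter, monochromator tails).  The
# apparent absorption is -ln of the area-averaged transmission.
def apparent(m, x, f=0.0, s=0.0):
    T = 0.5 * (np.exp(-m * (x - s)) + np.exp(-m * (x + s))) + f
    return -np.log(T)

# Apparent size of a small wiggle in mue, relative to its true size
# (1.0 would mean no distortion of the EXAFS amplitude).
def amplitude_ratio(x, **kw):
    eps = 1e-3
    return (apparent(1 + eps, x, **kw) - apparent(1 - eps, x, **kw)) / (2 * eps) / x

print("  x    f = 0.003   s = 0.3")
for x in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"{x:4.1f}   {amplitude_ratio(x, f=0.003):9.3f}   {amplitude_ratio(x, s=0.3):7.3f}")

With these assumed numbers, the leakage column degrades as the sample gets thicker, while the fixed-thickness-spread column degrades as it gets thinner: the trade-off described above.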