Re: [Ifeffit] Distortion of transmission spectra due to particle size
Some follow-up. This, for example, is from an excellent workshop presentation by Rob Scarrow:
> Errors from large particles are independent of thickness
>
> The relative (%) variation in thickness depends on the ratio (particle diameter / avg. thickness), so it is tempting to increase the avg. thickness (i.e. increase μx) as an alternative to reducing the particle diameter.
>
> However, simulations of MnO2 spectra for average Δμ0x = 1, 2, or 3 show that the errors in derived pre-edge peak heights and EXAFS amplitude factors are significant when diameter > 0.2/Δμ0, but that they are not affected by the average sample thickness. (Δμ0 refers to the edge jump.)
>
> The equation at right is given by Heald (quoting earlier work by Stern and Lu). D is particle diameter, μ1 is for just below the edge, and Δμ = μ(above edge) − μ1.
I've seen similar claims elsewhere, although Scarrow's is particularly clear and unambiguous. The equation Scarrow gives is indeed the one from Lu and Stern, and the simulations are based on that equation. That Lu-Stern equation is derived for a monolayer of spheres, and then experimentally tested with multiple layers of tape.

I'm still trying to work through the math to see how it works for multiple layers. I'm not convinced that the N divides out as is claimed in the article. As Matt says, it wasn't their main point. There is no question that if the particle size is large compared to an absorption length there will be nonuniformity and thus distortions.

But compare a monolayer of particles with a diameter equal to 0.4 absorption lengths with four strips of tape of that kind stacked. Do we really think the distortion due to nonuniformity will be as bad in the latter case as in the first? In practice, I think many transmission samples fall in roughly that regime, so the question isn't just academic. I'll keep trying to work through the math and let you know what I find.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory
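As background for the exchange that follows, the mechanism at issue can be written compactly (a sketch in assumed notation, not taken from the thread itself): a transmission measurement averages intensity over the beam footprint rather than absorbance, so for a sample with local thickness t the inferred absorbance is

% Sketch of the nonuniformity mechanism (notation assumed, not from the thread).
\[
  (\mu x)_{\mathrm{obs}} \;=\; -\ln\!\left\langle e^{-\mu t}\right\rangle
  \;\le\; \mu\,\langle t \rangle ,
\]
% with equality only for perfectly uniform thickness (Jensen's inequality).

Because the shortfall grows with μ, it is larger above the edge than below it, which is why nonuniformity compresses edge jumps and EXAFS oscillation amplitudes.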
On Mon, Nov 22, 2010 at 4:55 PM, Scott Calvin wrote:
> Some follow-up. This, for example, is from an excellent workshop presentation by Rob Scarrow:
>
> Errors from large particles are independent of thickness

Yes... one can have a sample that is uniform, or made of small particles, and still be too thick. In that sense, having large particles or a sample with widely varying thickness is a separate issue from having a sample that is too thick.

> The relative (%) variation in thickness depends on the ratio (particle diameter / avg. thickness), so it is tempting to increase the avg. thickness (i.e. increase μx) as an alternative to reducing the particle diameter.
>
> However, simulations of MnO2 spectra for average Δμ0x = 1, 2, or 3 show that the errors in derived pre-edge peak heights and EXAFS amplitude factors are significant when diameter > 0.2/Δμ0, but that they are not affected by the average sample thickness. (Δμ0 refers to the edge jump.)
>
> The equation at right is given by Heald (quoting earlier work by Stern and Lu). D is particle diameter, μ1 is for just below the edge, and Δμ = μ(above edge) − μ1.
>
> I've seen similar claims elsewhere, although Scarrow's is particularly clear and unambiguous.
OK. Are you saying there is something wrong with this? Did he say that spheres were stacked directly on top of one another? I'm not seeing that assumption in what you quote. I read it as saying that you can't have spheres that are too thick.

--Matt
On Nov 22, 2010, at 2:55 PM, Scott Calvin wrote:
> But compare a monolayer of particles with a diameter equal to 0.4 absorption lengths with four strips of tape of that kind stacked. Do we really think the distortion due to nonuniformity will be as bad in the latter case as in the first? In practice, I think many transmission samples fall in roughly that regime, so the question isn't just academic.
OK, I've got it straight now. The answer is yes, the distortion from nonuniformity is as bad for four strips stacked as for the single strip. This is surprising to me, but the mathematics is fairly clear. Stacking multiple layers of tape rather than using one thin layer improves the signal to noise ratio, but does nothing for uniformity. So there's nothing wrong with the arguments in Lu and Stern, Scarrow, etc.--it's the notion I had that we use multiple layers of tape to improve uniformity that's mistaken.

A bit on how the math works out: for Gaussian distributions of thickness, the absorption is attenuated (to first order) by a term directly proportional to the variance of the distribution. The standard deviation in thickness from point to point in a stack of N tapes generally increases as the square root of N (typical statistical behavior). This means that the fractional standard deviation goes down as the square root of N. In casual conversation, we would usually identify a sample with thickness variations of +/-5% as being "more uniform" than one with thickness variations of +/-8%, so it's natural to think that a stack of tapes is more uniform than a single one. But since the attenuation is proportional to the variance (i.e., the square of the standard deviation), it actually increases in proportion to N. Since the absorption is also increasing in proportion to N, the attenuation remains the same size relative to the absorption, and the spectrum is as distorted as ever.

This result doesn't actually depend on having a Gaussian distribution of thickness. If each layer has 10% pinholes, for instance, at first blush it seems as if two layers should solve most of the problem: the fraction of pinholes drops to 1%. But those pinholes are now compared to a sample which is twice as thick, on average, and thus create nearly as much distortion as before. Add to this that there is now 18% of the sample that is half the thickness of the rest, and the situation hasn't improved any. I've worked through the math, and the cancellation of effects is precise--a two-layer sample has the identical nonuniformity distortion to a one-layer one.

(There is probably a simple and compelling argument as to why this distortion is independent of the number of randomly aligned layers for ANY thickness distribution, but I haven't yet found it.)

* * *

For me personally, knowing this will cause some changes in the way I prepare samples.

First of all, I'm going to move my bias more toward the thin end. My samples are generally pretty concentrated, so signal to noise is not a big issue. If I'm also not improving uniformity by using more layers of tape, there's no reason for me not to keep the total absorption down around 1, rather than around 2.

Secondly, I'll approach the notion of eyeballing the assembled stack of tapes for uniformity, whether with the naked eye or a microscope, with more caution--particularly when teaching new students. The idea that a sample which has no evident pinholes is a better sample than one that does is not necessarily true, as the example above comparing a single layer with 10% pinholes to a double layer with 1% demonstrates. Stressing the elimination of visible pinholes will tend to bias students toward thicker samples, but not necessarily better ones.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory
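The Gaussian calculation described above can be written out compactly (a sketch in assumed notation, with x-bar and sigma the per-layer mean thickness and standard deviation; not part of the original message):

% Gaussian version of the argument above (notation assumed, not from the thread).
% For thickness t ~ N(\bar{x}, \sigma^2):
\[
  \left\langle e^{-\mu t}\right\rangle
    = \exp\!\left(-\mu\bar{x} + \tfrac{1}{2}\mu^{2}\sigma^{2}\right)
  \quad\Longrightarrow\quad
  (\mu x)_{\mathrm{obs}} = \mu\bar{x} - \tfrac{1}{2}\mu^{2}\sigma^{2}.
\]
% Stacking N independent layers sends \bar{x} \to N\bar{x} and
% \sigma^{2} \to N\sigma^{2}, so the fractional attenuation
\[
  \frac{\tfrac{1}{2}\mu^{2}N\sigma^{2}}{\mu N\bar{x}}
    = \frac{\mu\sigma^{2}}{2\bar{x}}
\]
% is independent of N, which is the cancellation described above.

To leading order the same second-cumulant expansion applies to any thickness distribution, since variances of independent layers add, which is consistent with the pinhole example not depending on Gaussian statistics.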
Scott,

You haven't totally convinced me, but clearly what we all need to do to run powdered samples correctly is to use 2D detectors with pixels much smaller than the absorption length and bin pixels of the same total absorbance. You heard it here first.

Jeremy
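A minimal sketch of the idea Jeremy is proposing, in its simplest limit (the sample map, the value of mu, and the function names here are all hypothetical illustrations, not from the thread): with per-pixel intensities one can average absorbance instead of intensity, which removes the thickness-averaging distortion.

#!/usr/bin/python
# sketch of pixel-resolved detection in its simplest limit; all values
# and names below are illustrative assumptions
import numpy

def conventional_mu_x(i0, i1):
    "single-element detector: intensities summed first, then absorbance"
    return -numpy.log(i1.sum() / i0.sum())

def per_pixel_mu_x(i0, i1):
    "pixelated detector: absorbance per pixel, averaged afterward"
    return (-numpy.log(i1 / i0)).mean()

mu = 2.5                                    # illustrative absorption
thick = numpy.ones(10000)                   # uniform sample ...
holes = numpy.random.random(10000) < 0.10   # ... with 10% pinholes
thick[holes] = 0.0
i0 = numpy.ones(10000)                      # flat incident beam
i1 = numpy.exp(-mu * thick)                 # transmitted intensity

print "conventional: %.3f  per-pixel: %.3f  true mu*<t>: %.3f" % (
    conventional_mu_x(i0, i1), per_pixel_mu_x(i0, i1), mu * thick.mean())

The per-pixel average recovers mu times the mean thickness exactly, while the conventional intensity average falls short; binning pixels of similar absorbance, as suggested, gives the same result while keeping reasonable counting statistics per bin.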
Scott,
> OK, I've got it straight now. The answer is yes, the distortion from nonuniformity is as bad for four strips stacked as for the single strip.
I don't think that's correct.
> This is surprising to me, but the mathematics is fairly clear. Stacking multiple layers of tape rather than using one thin layer improves the signal to noise ratio, but does nothing for uniformity. So there's nothing wrong with the arguments in Lu and Stern, Scarrow, etc.--it's the notion I had that we use multiple layers of tape to improve uniformity that's mistaken.
Stacking multiple layers does improve sample uniformity. Below is a simple simulation of a sample of unity thickness with randomly placed pinholes. First this makes a sample that is 1 layer of N cells, with each cell having a thickness of either 1 or 0. Then it makes a sample of the same size and total thickness, but made of 5 independent layers, with each layer having the same fraction of randomly placed pinholes, so that the total thickness for each cell could be 1, 0.8, 0.6, 0.4, 0.2, or 0. Then it makes a sample with 25 layers.

The simulation below is in Python. I do hope the code is straightforward enough so that anyone interested can follow. The way in which pinholes are randomly selected by the code may not be obvious, so I'll say here that the "numpy.random.shuffle" function is like shuffling a deck of cards, and works on its array argument in-place.

For 10% pinholes, the results are:

# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#  1       |    10.0    |     0.900     |       0.300       |
#  5       |    10.0    |     0.900     |       0.135       |
# 25       |    10.0    |     0.900     |       0.060       |

For 5% pinholes:

# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#  1       |     5.0    |     0.950     |       0.218       |
#  5       |     5.0    |     0.950     |       0.097       |
# 25       |     5.0    |     0.950     |       0.044       |

For 1% pinholes:

# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#  1       |     1.0    |     0.990     |       0.099       |
#  5       |     1.0    |     0.990     |       0.045       |
# 25       |     1.0    |     0.990     |       0.020       |

Multiple layers of smaller particles give a more uniform thickness than fewer layers of larger particles. The standard deviation of the thickness goes as 1/sqrt(N_layers). In addition, one can see that 5 layers with 5% pinholes is about as uniform as 1 layer with 1% pinholes. Does any of this seem surprising or incorrect to you?

Now let's try your case of 1 layer of thickness 0.4 versus 4 such layers stacked, each with 1% pinholes. In the code below, the simulation would look like

# one layer of thickness=0.4
sample = 0.4 * make_layer(ncells, ph_frac)
print format % (1, 100*ph_frac, sample.mean(), sample.std())

# four layers of thickness=0.4
layer1 = 0.4 * make_layer(ncells, ph_frac)
layer2 = 0.4 * make_layer(ncells, ph_frac)
layer3 = 0.4 * make_layer(ncells, ph_frac)
layer4 = 0.4 * make_layer(ncells, ph_frac)
sample = layer1 + layer2 + layer3 + layer4
print format % (4, 100*ph_frac, sample.mean(), sample.std())

and the results are:

# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#  1       |     1.0    |     0.396     |       0.040       |
#  4       |     1.0    |     1.584     |       0.080       |

The sample with 4 layers had its average thickness increase by a factor of 4, while the standard deviation of that thickness only doubled. The sample is twice as uniform.

OK, that's a simple model, and one of thickness only. Lu and Stern did a more complete analysis and made actual measurements of the effect of thickness on XAFS amplitudes. They *showed* that many thin layers are better than fewer thick layers.

Perhaps I am not understanding the points you're trying to make, but I think I am not the only one confused by what you are saying.

--Matt

#!/usr/bin/python
import numpy

def make_layer(ncells, pinhole_frac):
    "make an array of cells of unity thickness with a pinhole fraction"
    data = numpy.ones(ncells)       # [1,1,1,1,1,...]
    pinholes = numpy.arange(ncells) # [0,1,2,3,4,5,...]
    numpy.random.shuffle(pinholes)  # random pixel indices
    # set ncells*pinhole_frac randomly selected pixels to zero
    for pixel in pinholes[:int(ncells*pinhole_frac)]:
        data[pixel] = 0
    return data

print "# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |"
format = "# %2i       |    %4.1f    |     %.3f     |       %.3f       |"

ph_frac = 0.01
ncells = 100*100

# 1 layer of ~unity thickness
sample = make_layer(ncells, ph_frac)
print format % (1, 100*ph_frac, sample.mean(), sample.std())

# 5 layers giving ~unity thickness
layer1 = 0.2 * make_layer(ncells, ph_frac)
layer2 = 0.2 * make_layer(ncells, ph_frac)
layer3 = 0.2 * make_layer(ncells, ph_frac)
layer4 = 0.2 * make_layer(ncells, ph_frac)
layer5 = 0.2 * make_layer(ncells, ph_frac)
sample = layer1 + layer2 + layer3 + layer4 + layer5
print format % (5, 100*ph_frac, sample.mean(), sample.std())

# 25 layers giving ~unity thickness
nlayers = 25
sample = make_layer(ncells, ph_frac)
for i in range(nlayers-1):
    sample = sample + make_layer(ncells, ph_frac)
sample = sample / nlayers
print format % (nlayers, 100*ph_frac, sample.mean(), sample.std())
### END ###
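One way to connect these thickness statistics to the spectral distortion the thread is debating would be to extend the script above to compute the absorbance a transmission measurement would actually report. The following is a sketch along those lines (apparent_mu_x and the value of mu are assumptions added here, not part of the original message), meant to be appended after the script so that make_layer, ncells, and ph_frac are already defined:

# sketch (not in the original message): apparent absorbance from the
# averaged transmitted intensity; mu is an arbitrary illustrative value
def apparent_mu_x(thickness, mu):
    "absorbance inferred from the averaged transmitted intensity"
    return -numpy.log(numpy.exp(-mu * thickness).mean())

mu = 2.5  # hypothetical absorption coefficient per unit thickness

one_layer = 0.4 * make_layer(ncells, ph_frac)
four_layers = (0.4 * make_layer(ncells, ph_frac) +
               0.4 * make_layer(ncells, ph_frac) +
               0.4 * make_layer(ncells, ph_frac) +
               0.4 * make_layer(ncells, ph_frac))

for label, s in (("1 layer ", one_layer), ("4 layers", four_layers)):
    ideal = mu * s.mean()         # absorbance of a perfectly uniform sample
    seen = apparent_mu_x(s, mu)   # what the averaged measurement reports
    print "%s: ideal mu*x = %.3f  apparent = %.3f  deficit = %.4f" % (
        label, ideal, seen, ideal - seen)

Comparing the deficit relative to the ideal mu*x for the two samples gives a direct numerical check of whether stacking reduces the distortion itself, rather than just the spread in thickness.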
participants (3):
- Kropf, Arthur Jeremy
- Matt Newville
- Scott Calvin