Re: [Ifeffit] Distortion of transmission spectra due to particle size
Matt,

Your second simulation confirms what I said:

> The standard deviation in thickness from point to point in a stack of N tapes generally increases as the square root of N (typical statistical behavior).
Now follow that through, using, for example, Grant Bunker's formula for the distortion caused by a Gaussian distribution of thickness:

(mu x)_eff = mu x_o - (mu sigma)^2 / 2

where sigma is the standard deviation of the thickness.

So if sigma goes as the square root of N, and x_o goes as N, the fractional attenuation of the measured absorption stays constant, and the shape of the measured spectrum stays constant. There is thus no reduction in the distortion of the spectrum from measuring additional layers. (A quick numerical check of this scaling follows the quoted message below.)

Your pinholes simulation, on the other hand, is not the scenario I was describing. I agree it is better to have more thin layers rather than fewer thick layers. My question was whether it is better to have many thin layers compared to fewer thin layers. For the "brush sample on tape" method of sample preparation, this is more like the question we face when we prepare a sample. Our choice is not to spread a given amount of sample over more tapes, because we're already spreading as thin as we can. Our choice is whether to use more tapes of the same thickness.

We don't have to rerun your simulation to see the effect of using more tapes of the same thickness. All that happens is that the average thickness and the standard deviation get multiplied by the number of layers. So now the results are:

For 10% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |    10.0    |     0.900     |       0.300       |
#    5     |    10.0    |     4.500     |       0.675       |
#   25     |    10.0    |    22.500     |       1.500       |

For 5% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |     5.0    |     0.950     |       0.218       |
#    5     |     5.0    |     4.750     |       0.485       |
#   25     |     5.0    |    23.750     |       1.100       |

For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |     1.0    |     0.990     |       0.099       |
#    5     |     1.0    |     4.950     |       0.225       |
#   25     |     1.0    |    24.750     |       0.500       |

As before, the standard deviation increases as the square root of N. Using a cumulant expansion (admittedly slightly funky for such a broad distribution) necessarily yields the same result as the Gaussian distribution: the shape of the measured spectrum is independent of the number of layers used! And as it turns out, an exact calculation (i.e., not using a cumulant expansion) also yields the same result of independence.

So Lu and Stern got it right. But the idea that we can mitigate pinholes by adding more layers is wrong.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

On Nov 24, 2010, at 6:05 AM, Matt Newville wrote:
Scott,
> OK, I've got it straight now. The answer is yes, the distortion from nonuniformity is as bad for four strips stacked as for the single strip.
I don't think that's correct.
> This is surprising to me, but the mathematics is fairly clear. Stacking multiple layers of tape rather than using one thin layer improves the signal to noise ratio, but does nothing for uniformity. So there's nothing wrong with the arguments in Lu and Stern, Scarrow, etc.--it's the notion I had that we use multiple layers of tape to improve uniformity that's mistaken.
Stacking multiple layers does improve sample uniformity.
Below is a simple simulation of a sample of unity thickness with randomly placed pinholes. First this makes a sample that is 1 layer of N cells, with each cell either having thickness of 1 or 0. Then it makes a sample of the same size and total thickness, but made of 5 independent layers, with each layer having the same fraction of randomly placed pinholes, so that total thickness for each cell could be 1, 0.8, 0.6, 0.4, 0.2, or 0. Then it makes a sample with 25 layers.
The simulation below is in python. I do hope the code is straightforward enough that anyone interested can follow. The way in which pinholes are randomly selected by the code may not be obvious, so I'll say here that the "numpy.random.shuffle" function is like shuffling a deck of cards, and works on its array argument in-place.
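A minimal reconstruction of such a simulation, assuming the make_layer(ncells, ph_frac) pattern seen in the fragments quoted later in the message (the cell count and print format here are guesses):

import numpy as np

def make_layer(ncells, ph_frac):
    # One layer is ncells cells of unit thickness; a fraction ph_frac of
    # them are pinholes (thickness 0) at random positions.
    # numpy.random.shuffle rearranges the array in-place, like shuffling
    # a deck of cards.
    layer = np.ones(ncells)
    layer[:int(ph_frac * ncells)] = 0.0
    np.random.shuffle(layer)
    return layer

ncells = 100000
fmt = "# %8i | %10.1f | %13.3f | %17.3f |"
for ph_frac in (0.10, 0.05, 0.01):
    print("# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |")
    for nlayers in (1, 5, 25):
        # stack nlayers independent layers, scaled so the total sample
        # thickness stays at 1
        sample = sum(make_layer(ncells, ph_frac) for i in range(nlayers)) / nlayers
        print(fmt % (nlayers, 100 * ph_frac, sample.mean(), sample.std()))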
For 10% pinholes, the results are:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |    10.0    |     0.900     |       0.300       |
#    5     |    10.0    |     0.900     |       0.135       |
#   25     |    10.0    |     0.900     |       0.060       |

For 5% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |     5.0    |     0.950     |       0.218       |
#    5     |     5.0    |     0.950     |       0.097       |
#   25     |     5.0    |     0.950     |       0.044       |

For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |     1.0    |     0.990     |       0.099       |
#    5     |     1.0    |     0.990     |       0.045       |
#   25     |     1.0    |     0.990     |       0.020       |
Multiple layers of smaller particles give a more uniform thickness than fewer layers of larger particles. The standard deviation of the thickness goes as 1/sqrt(N_layers). In addition, one can see that 5 layers with 5% pinholes is about as uniform as 1 layer with 1% pinholes. Does any of this seem surprising or incorrect to you?
Now let's try your case of 1 layer of thickness 0.4 versus 4 layers of thickness 0.4, with 1% pinholes. In the code below, the simulation would look like

# one layer of thickness=0.4
sample = 0.4 * make_layer(ncells, ph_frac)
print format % (1, 100*ph_frac, sample.mean(), sample.std())

# four layers of thickness=0.4
layer1 = 0.4 * make_layer(ncells, ph_frac)
layer2 = 0.4 * make_layer(ncells, ph_frac)
layer3 = 0.4 * make_layer(ncells, ph_frac)
layer4 = 0.4 * make_layer(ncells, ph_frac)
sample = layer1 + layer2 + layer3 + layer4
print format % (4, 100*ph_frac, sample.mean(), sample.std())

and the results are:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#    1     |     1.0    |     0.396     |       0.040       |
#    4     |     1.0    |     1.584     |       0.080       |
The sample with 4 layers had its average thickness increase by a factor of 4, while the standard deviation of that thickness only doubled. The sample is twice as uniform.
OK, that's a simple model, and one of thickness only. Lu and Stern did a more complete analysis and made actual measurements of the effect of thickness on XAFS amplitudes. They *showed* that many thin layers are better than fewer thick layers.
Perhaps I am not understanding the points you're trying to make, but I think I am not the only one confused by what you are saying.
--Matt
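Scott's scaling argument above lends itself to a quick numerical check. A minimal sketch in numpy, under his assumptions (x_o = N*x1 and sigma = sqrt(N)*s1 for N identical tapes; the values of mu, x1, and s1 are arbitrary illustrations):

import numpy as np

# Bunker's Gaussian-distortion formula: (mu*x)_eff = mu*x_o - (mu*sigma)**2/2.
# The fractional attenuation (mu*sigma)**2 / (2*mu*x_o) should then be
# independent of the number of tapes N.
mu, x1, s1 = 2.0, 1.0, 0.3
for n in (1, 5, 25):
    x_o = n * x1                 # total thickness grows as N
    sigma = np.sqrt(n) * s1      # point-to-point spread grows as sqrt(N)
    frac = (mu * sigma)**2 / (2 * mu * x_o)
    print(n, frac)               # the same fraction (here 0.09) for every n

The distortion per unit of total absorption is constant, which is the point: adding more tapes of the same kind does not wash it out.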
It's not that I don't believe in mathematics, but in this case, rather than checking the math, I did a simulation.

I took a spectrum of a copper foil and then calculated the following:
(a) copper foil (original edge step 1.86)
(b) 1/3 original, 1/3 with half absorption, and 1/3 with 1/4 absorption
(c) 1/2 original, 1/2 nothing (a large "pinhole")
(d) 1/4 nothing, 1/2 original, 1/4 double (simulating two randomly stacked layers of (c))

Observation 1: Stacking random layers does nothing to improve chi(k) amplitudes, as has been discussed. They are identical, but I've offset them by 0.01 units.

Observation 2: Pretty awful uniformity gives reasonable EXAFS data. If you don't care too much about absolute N, XANES, or E0 (very small changes), the rest is quite accurate (R, sigma2, relative N).

Perhaps I'll simulate a spherical particle next, with absorption in the center of 10 absorption lengths or so - probably not an uncommon occurrence.

Jeremy

Chemical Sciences and Engineering Division
Argonne National Laboratory
Argonne, IL 60439
Ph: 630.252.9398
Fx: 630.252.9917
Email: kropf@anl.gov
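In transmission, regions of different thickness within the beam footprint combine in intensity, not in mu, so mixtures like Jeremy's (a)-(d) amount to a weighted sum of exp(-mu*t) terms. A hedged numpy sketch of that combination (the mix helper and the toy mu array are inventions for illustration, not Jeremy's actual calculation):

import numpy as np

def mix(mu, parts):
    # parts: list of (area_fraction, thickness_scale) pairs; the area
    # fractions should sum to 1.  Each region transmits I0*exp(-scale*mu);
    # the intensities add, and an effective mu is recovered from the sum.
    i_ratio = sum(w * np.exp(-s * mu) for (w, s) in parts)
    return -np.log(i_ratio)

mu = np.linspace(0.0, 2.5, 6)    # stand-in for a measured mu(E) array
mu_b = mix(mu, [(1.0/3, 1.0), (1.0/3, 0.5), (1.0/3, 0.25)])
mu_c = mix(mu, [(0.5, 0.0), (0.5, 1.0)])                # (c): large "pinhole"
mu_d = mix(mu, [(0.25, 0.0), (0.5, 1.0), (0.25, 2.0)])  # (d): two layers of (c)
print(mu_c)   # compressed relative to mu wherever mu is large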
Whoops. On closer look sigma2 isn't very accurate either.

Jeremy
Jeremy:

In your simulation, "(c) 1/2 original, 1/2 nothing (a large "pinhole")", it appears that chi(k) is half the intensity of the original spectrum. Does it mean that when the pinhole is present, the EXAFS wiggles are half of the original ones in amplitude but the edge step remains the same? Or, equivalently, that the wiggles are the same but the edge step doubled?

Either way, I don't think it is the situation you are describing (a large pinhole). If there is a large pinhole made in a perfect foil (say, you removed half of the area of the foil from the footprint of the beam, and the beam just goes through from the I0 to the I detector unaffected), then, if I0 is a well-behaved function of energy, i.e., the flux density is constant over the entire sample for all energies, the EXAFS in both cases should be the same.

Or I misunderstood your example, or, maybe, the colors?

Anatoly
For the "brush sample on tape" method of > sample preparation, this is more like the question we face > when we prepare a sample. Our choice is not to spread a given > amount of sample over more tapes, because we're already > spreading as thin as we can. Our choice is whether to use > more tapes of the same thickness. > > We don't have to rerun your simulation to see the effect of > using tapes of the same thickness. All that happens is that > the average thickness and the standard deviation gets > multiplied by the number of layers. > > So now the results are: > > For 10% pinholes, the results are: > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > # 1 | 10.0 | 0.900 | 0.300 | > # 5 | 10.0 | 4.500 | 0.675 | > # 25 | 10.0 | 22.500 | 1.500 | > > For 5% pinholes: > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > # 1 | 5.0 | 0.950 | 0.218 | > # 5 | 5.0 | 4.750 | 0.485 | > # 25 | 5.0 | 23.750 | 1.100 | > > For 1% pinholes: > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > # 1 | 1.0 | 0.990 | 0.099 | > # 5 | 1.0 | 4.950 | 0.225 | > # 25 | 1.0 | 24.750 | 0.500 | > > As before, the standard deviation increases as square root of > N. Using a cumulant expansion (admittedly slightly funky for > such a broad > distribution) necessarily yields the same result as the Gaussian > distribution: the shape of the measured spectrum is > independent of the number of layers used! And as it turns > out, an exact calculation (i.e. > not using a cumulant expansion) also yields the same result > of independence. > > So Lu and Stern got it right. But the idea that we can > mitigate pinholes by adding more layers is wrong. > > --Scott Calvin > Faculty at Sarah Lawrence College > Currently on sabbatical at Stanford Synchrotron Radiation Laboratory > > > > On Nov 24, 2010, at 6:05 AM, Matt Newville wrote: > > > Scott, > > > >> OK, I've got it straight now. The answer is yes, the > distortion from > >> nonuniformity is as bad for four strips stacked as for the single > >> strip. > > > > I don't think that's correct. > > > >> This is surprising to me, but the mathematics is fairly clear. > >> Stacking > >> multiple layers of tape rather than using one thin layer > improves the > >> signal to noise ratio, but does nothing for uniformity. So there's > >> nothing wrong with the arguments in Lu and Stern, Scarrow, > etc.--it's > >> the notion I had that we use multiple layers of tape to improve > >> uniformity that's mistaken. > > > > Stacking multiple layers does improve sample uniformity. > > > > Below is a simple simulation of a sample of unity thickness with > > randomly placed pinholes. First this makes a sample that > is 1 layer > > of N cells, with each cell either having thickness of 1 or > 0. Then it > > makes a sample of the same size and total thickness, but made of 5 > > independent layers, with each layer having the same fraction of > > randomly placed pinholes, so that total thickness for each > cell could > > be 1, 0.8, 0.6, 0.4, 0.2, or 0. Then it makes a sample with 25 > > layers. > > > > The simulation below is in python. I do hope the code is > > straightforward enough so that anyone interested can > follow. The way > > in which pinholes are randomly selected by the code may not be > > obvious, so I'll say hear that the "numpy.random.shuffle" > function is > > like shuffling a deck of cards, and works on its array argument > > in-place. 
> > > > For 10% pinholes, the results are: > > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > > # 1 | 10.0 | 0.900 | 0.300 | > > # 5 | 10.0 | 0.900 | 0.135 | > > # 25 | 10.0 | 0.900 | 0.060 | > > > > For 5% pinholes: > > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > > # 1 | 5.0 | 0.950 | 0.218 | > > # 5 | 5.0 | 0.950 | 0.097 | > > # 25 | 5.0 | 0.950 | 0.044 | > > > > For 1% pinholes: > > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > > # 1 | 1.0 | 0.990 | 0.099 | > > # 5 | 1.0 | 0.990 | 0.045 | > > # 25 | 1.0 | 0.990 | 0.020 | > > > > Multiple layers of smaller particles gives a more uniform thickness > > than fewer layers of larger particles. The standard deviation of the > > thickness goes as 1/sqrt(N_layers). In addition, one can > see that 5 > > layers of 5% pinholes is about as uniform 1 layer with 1% pinholes. > > Does any of this seem surprising or incorrect to you? > > > > Now let's try your case of 1 layer of thickness 0.4 with 4 > layers of > > thickness 0.4, with 1% pinholes. In the code below, the simulation > > would look like > > # one layer of thickness=0.4 > > sample = 0.4 * make_layer(ncells, ph_frac) > > print format % (1, 100*ph_frac, sample.mean(), sample.std()) > > > > # four layers of thickness=0.4 > > layer1 = 0.4 * make_layer(ncells, ph_frac) > > layer2 = 0.4 * make_layer(ncells, ph_frac) > > layer3 = 0.4 * make_layer(ncells, ph_frac) > > layer4 = 0.4 * make_layer(ncells, ph_frac) > > sample = layer1 + layer2 + layer3 + layer4 > > print format % (4, 100*ph_frac, sample.mean(), sample.std()) > > > > and the results are: > > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev | > > # 1 | 1.0 | 0.396 | 0.040 | > > # 4 | 1.0 | 1.584 | 0.080 | > > > > The sample with 4 layers had its average thickness increase by a > > factor of 4, while the standard deviation of that thickness only > > doubled. The sample is twice as uniform. > > > > OK, that's a simple model and of thickness only. Lu and > Stern did a > > more complete analysis and made actual measurements of the > effect of > > thickness on XAFS amplitudes. They *showed* that many thin > layers is > > better than fewer thick layers. > > > > Perhaps I am not understanding the points you're trying to > make, but I > > think I am not the only one confused by what you are saying. > > > > --Matt > > > > _______________________________________________ > Ifeffit mailing list > Ifeffit@millenia.cars.aps.anl.gov > http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit >
Anatoly,

I think that may be exactly the point. If you have half the beam on a foil and half off, even with a uniform beam, you can't get the same spectrum as with the whole beam on the foil.

I tried to come up with a quick proof by demonstration, but I got bogged down on normalization. That will have to wait.

Jeremy
Hi Jeremy,
For a sample of thickness t in the beam, and a good measurement being
I_t = I_0 * exp(-t*mu)
I think having 1/2 the sample missing (and assuming uniform I_0) would be:
I_t_measured = (I_0 / 2)*exp(-0*mu) + (I_0/2) * exp(-t*mu)
= I_0 * (1 + exp(-t*mu) / 2
So that
t*mu_measured = -ln(I_t_measured/I_0) = -ln (1 + exp(-t*mu)/2)
An ifeffit script showing such an effect would be:
read_data(cu.xmu, group =good)
plot good.energy, good.xmu
set pinhole.energy = good.energy
set pinhole.xmu = -ln(1 + exp(-good.xmu)/2)
spline(pinhole.energy, pinhole.xmu, kmin=0, kweight=2, rbkg=1)
spline(good.energy, good.xmu, kmin=0, kweight=2, rbkg=1)
newplot(good.k, good.chi*good.k^2, xmax=18, xlabel='k (\A)',
ylabel='k\u2\d\gx(k)', key='no pinholes')
plot pinhole.k, pinhole.chi*pinhole.k^2, key='half pinholes'
With the corresponding plot of k^2 * chi(k) attached.
Corrections welcome,
--Matt
That looks right, except for a minor quibble. Your calc is for 1/3 foil and 2/3 "pinhole": the unnormalized weights of 1 for the hole and 1/2 for the foil correspond to area fractions of 2/3 and 1/3. I think the equation should have an extra set of parentheses:

set pinhole.xmu = -ln((1 + exp(-good.xmu))/2)

Jeremy

PS. That is a whole lot easier than what I did for the plot - export the data to another program, do some calculations, and then reimport the results to Athena. I really need to learn to use ifeffit better. Thanks for the demonstration. :)
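The same check can be made without ifeffit. A hedged numpy sketch using Jeremy's corrected expression; the synthetic mu(E) below is a purely illustrative stand-in for the cu.xmu data:

import numpy as np

# Illustrative absorption: zero pre-edge, an edge step of 1.5, and a small
# sinusoidal "EXAFS" wiggle above the edge (not real copper data).
e = np.linspace(8800, 9800, 2001)
above = e > 9000
mu = np.where(above, 1.5 + 0.05 * np.sin((e - 9000) / 15.0), 0.0)

# Jeremy's corrected expression: half the area is foil, half is hole.
mu_pinhole = -np.log((1 + np.exp(-mu)) / 2)

def osc_over_step(m):
    # crude normalized oscillation size: std of the post-edge signal
    # divided by a rough edge-step estimate (its post-edge mean)
    post = m[above]
    return post.std() / post.mean()

print("good   :", osc_over_step(mu))          # about 0.024
print("pinhole:", osc_over_step(mu_pinhole))  # noticeably smaller

The pinhole damps the normalized wiggles even though the spectrum's shape in energy is barely changed, which is the amplitude suppression Matt's plot shows.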
-----Original Message----- From: ifeffit-bounces@millenia.cars.aps.anl.gov [mailto:ifeffit-bounces@millenia.cars.aps.anl.gov] On Behalf Of Matt Newville Sent: Wednesday, November 24, 2010 3:01 PM To: XAFS Analysis using Ifeffit Subject: Re: [Ifeffit] Distortion of transmission spectra due to particlesize
Hi Jeremy,
For a sample of thickness t in the beam, and a good measurement being I_t = I_0 * exp(-t*mu) I think having 1/2 the sample missing (and assuming uniform I_0) would be:
I_t_measured = (I_0 / 2)*exp(-0*mu) + (I_0/2) * exp(-t*mu) = I_0 * (1 + exp(-t*mu) / 2
So that t*mu_measured = -ln(I_t_measured/I_0) = -ln (1 + exp(-t*mu)/2)
An ifeffit script showing such an effect would be: read_data(cu.xmu, group =good) plot good.energy, good.xmu set pinhole.energy = good.energy set pinhole.xmu = -ln(1 + exp(-good.xmu)/2)
spline(pinhole.energy, pinhole.xmu, kmin=0, kweight=2, rbkg=1) spline(good.energy, good.xmu, kmin=0, kweight=2, rbkg=1)
newplot(good.k, good.chi*good.k^2, xmax=18, xlabel='k (\A)', ylabel='k\u2\d\gx(k)', key='no pinholes') plot pinhole.k, pinhole.chi*pinhole.k^2, key='half pinholes'
With the corresponding plot of k^2 * chi(k) attached.
Corrections welcome,
--Matt
Anatoly,
I think that may be exactly the point. If you have half
foil and half off, even with a uniform beam, you cant get the same spectrum as with the whole beam on the foil.
I tried to come up with a quick proof by demonstration, but I got bogged down on normalization. That will have to wait.
Jeremy
________________________________ From: ifeffit-bounces@millenia.cars.aps.anl.gov [mailto:ifeffit-bounces@millenia.cars.aps.anl.gov] On Behalf Of Frenkel, Anatoly Sent: Wednesday, November 24, 2010 1:33 PM To: XAFS Analysis using Ifeffit Subject: RE: [Ifeffit] Distortion of transmission spectra due to particlesize
Jeremy:
In your simulation, "(c) 1/2 original, 1/2 nothing (a large "pinhole")" it appears that chi(k) is half intensity of the original spectrum. Does it mean that when the pinhole is present, EXAFS wiggles are half of the original ones in amplitude but the edge step remains
Or, equivalently, that the wiggles are the same but the edge step doubled?
Either way, I don't think it is the situation you are describing (a large pinhole). If there is a large pinhole made in a perfect foil (say, you removed half of the area of the foil from the footprint of the beam, and the beam just goes through from I0 to the I detector, unaffected), then, if I0 is a well-behaving function of energy, i.e., the flux density is constant over the entire sample for all energies, the EXAFS in both cases should be the same.
Or have I misunderstood your example, or, maybe, the colors?
Anatoly
________________________________
From: ifeffit-bounces@millenia.cars.aps.anl.gov on behalf of Kropf, Arthur Jeremy
Sent: Wed 11/24/2010 1:08 PM
To: XAFS Analysis using Ifeffit
Subject: Re: [Ifeffit] Distortion of transmission spectra due to particle size
It's not that I don't believe in mathematics, but in this case rather than checking the math, I did a simulation.
I took a spectrum of a copper foil and then calculated the following:
(a) copper foil (original edge step 1.86)
(b) 1/3 original, 1/3 with half absorption, and 1/3 with 1/4 absorption
(c) 1/2 original, 1/2 nothing (a large "pinhole")
(d) 1/4 nothing, 1/2 original, 1/4 double (simulating two randomly stacked layers of (c))
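One way to construct these four test spectra numerically (a sketch of my own, not Jeremy's actual procedure; the file name cu.xmu and its column layout are assumptions):

import numpy as np

def mix(mu, parts):
    # absorbance seen when fractions w of the beam cross regions of scaled
    # thickness s: -ln( sum_i w_i * exp(-s_i * mu) )
    return -np.log(sum(w * np.exp(-s * mu) for w, s in parts))

mu = np.loadtxt("cu.xmu", usecols=(1,))                   # hypothetical mu(E) column
mu_a = mix(mu, [(1.0, 1.0)])                              # (a) the foil itself
mu_b = mix(mu, [(1/3, 1.0), (1/3, 0.5), (1/3, 0.25)])     # (b) three thicknesses
mu_c = mix(mu, [(0.5, 1.0), (0.5, 0.0)])                  # (c) half pinhole
mu_d = mix(mu, [(0.25, 0.0), (0.5, 1.0), (0.25, 2.0)])    # (d) two stacked (c) layers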
Observation 1: Stacking random layers does nothing to improve chi(k) amplitudes as has been discussed. They are identical, but I've offset them by 0.01 units.
Observation 2: Pretty awful uniformity gives reasonable EXAFS data. If you don't care too much about absolute N, XANES, or Eo (very small changes), the rest is quite accurate (R, sigma2, relative N).
Perhaps I'll simulate a spherical particle next, with absorption in the center of 10 absorption lengths or so - probably not an uncommon occurrence.
Jeremy
Chemical Sciences and Engineering Division
Argonne National Laboratory
Argonne, IL 60439

Ph: 630.252.9398
Fx: 630.252.9917
Email: kropf@anl.gov
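The spherical-particle case Jeremy proposes above can be sketched the same way (assumptions mine: center-line absorbance mu*2R = 10, a uniform beam, and the transmission averaged over the projected disk before taking the log):

import numpy as np

mu2R = 10.0                               # absorbance through the center of the sphere
r = np.linspace(0.0, 1.0, 200001)         # radius across the disk, in units of R
chord = mu2R * np.sqrt(1.0 - r**2)        # absorbance along each chord
dr = r[1] - r[0]
trans = np.sum(np.exp(-chord) * 2.0 * r) * dr   # area-weighted mean transmission
print(-np.log(trans))                     # effective absorbance: about 3.9, not 10

Averaging the transmission rather than the absorbance is what produces the distortion: the log of the mean transmission is far below the mean of the log.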
Hi Jeremy, Anatoly,

Thanks, you're absolutely right -- I got the parentheses wrong.

Anatoly,
However, thickness is present in mu*t only because of the total number of absorbers. There are 1/2 absorbers in the foil with 50% holes, and thus (t*mu)_measured is equal to (1/2) * t*mu_measured.
I agree, but I think that you'd have to know your sample was 50% holes to make that work. I was considering 't*mu' to be a single quantity (and really meant it to stand for -ln(I_t/I_0) in the sense of the log of intensities sampled by ion chambers, not absolute fluxes). I think it should be (starting with I_t = I_0 * exp(-tmu)) that a half full / half empty sample will have:

I_t = (I_0 + I_0 * exp(-tmu)) / 2 = I_0 * (1 + exp(-tmu)) / 2

so that

tmu_measured = -ln(I_t / I_0) = -ln((1 + exp(-tmu))/2) = ln(2) - ln(1 + exp(-tmu))

As for whether there is a reduction in chi(k) of a factor of 2 or not, I think this would depend on the sample thickness (or, the relative size of tmu to 1) in the portion of the sample that was non-empty. Using this corrected formula on cu foil data (that actually has an edge jump ~= 2.3, so is probably on the thick side, but is still decent data), I do see a reduction in chi(k) that is a little more than a factor of 2, with some k-dependence. Attached is an Athena project of original and "half empty" data. Am I a pessimist for not calling it "half full"?

--Matt

PS: I did this to make the half empty data, then read the data file into Athena:

read_data(cu.xmu, group=good)
set pinhole.energy = good.energy
set pinhole.xmu = -ln((1 + exp(-good.xmu))/2)
write_data(file=half_empty.xmu, pinhole.energy, pinhole.xmu)
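A back-of-envelope estimate of the size of that suppression (my own numbers, not from the thread): a small wiggle riding on the post-edge absorbance u2 is scaled by the derivative of f(u) = -ln((1 + exp(-u))/2), while the edge step is compressed from u2 - u1 to f(u2) - f(u1):

import numpy as np

def f(u):
    return -np.log((1 + np.exp(-u)) / 2)

u1, u2 = 0.5, 2.8                      # assumed pre- and post-edge t*mu (jump of 2.3)
true_norm = 1.0 / (u2 - u1)            # normalized wiggle amplitude, ideal sample
meas_norm = (1.0 / (1 + np.exp(u2))) / (f(u2) - f(u1))
print(true_norm / meas_norm)           # roughly 3 for these assumed levels

The exact factor depends strongly on the assumed pre-edge absorbance and on k, which is presumably why the real data show a k-dependent value somewhat above 2.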
You are right about the amplitude factor: it should change. The ln(1+x/2) is not the full story, since x is the exponent: x = exp(-mu*t), and when mu*t is small (thin samples), x is not, and vice versa. A more accurate expansion of ln(1+x/2) in the limits of thick and thin samples shows that mu_measured (in the thin film with pinholes) and mu in the thin film differ by a proportionality factor, which is what you are getting.

Thanks,
Anatoly
Matt,

I am not sure it is correct. t*mu_measured should not have the same t as in the original foil. The entire thing is an effective (t*mu)_measured, since it is what we measure in the foil which has holes in it; thickness cannot be defined per se because of the holes. However, thickness is present in mu*t only because of the total number of absorbers. There are 1/2 the absorbers in the foil with 50% holes, and thus (t*mu)_measured is equal to (1/2) * t*mu_measured. Then, your equation should be written as:

(1/2) t*mu_measured = -ln(1 + exp(-t*mu)/2)

Since exp(-t*mu) is much smaller than one, we can expand ln(1+x/2) = x/2, in the zeroth approximation. Then, mu_measured should be approximately equal to mu, and thus the EXAFS measured should be the same, with or without pinholes, provided that I0 is uniform. There should be some differences, because the above equation is an approximation, but there should not be a factor of 2 reduction in chi(k) intensity, I believe. Do you agree?

Anatoly
Hi,

The non-linearity occurs because of the log in the calculation of the absorption coefficient. It becomes clear when you write down the Lambert-Beer law assuming the transmitted intensity is the sum of contributions from the sample (i.e., dependent on the absorption coefficient) and some fraction of unaffected transmitted beam (independent of the absorption coefficient).

I plotted the resulting edge jump effect assuming 1% transparency of a sample ('pinholes') some time ago, by calculating the measured absorption coefficient as a function of the actual absorption coefficient and as a function of sample thickness x. The plot is on page 13 in 'Spectroscopy for Surface Science' (Wiley 1998):

http://books.google.co.uk/books?id=vo5cGstx_Q0C&printsec=frontcover&dq=spectroscopy+for+surface+science&source=bl&ots=kOuTVLyvXY&sig=F7c-2vfvAcdZ4b0o0Qz_BTJdFlo&hl=en&ei=P3vtTMfKEIyEhQflzfzMDA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCQQ6AEwAA#v=onepage&q&f=false

You can see that for thin foils there is good linearity - thick samples run into a saturation effect. Hence the EXAFS is dampened ...

Sven
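A minimal sketch of the saturation Sven describes (the 1% 'transparency' is taken from his description; everything else here is assumed): a fraction f of the beam bypasses the sample, so the apparent absorbance is -ln((1-f)*exp(-mu*x) + f), linear for thin samples but pinned near -ln(f) for thick ones:

import numpy as np

f = 0.01                                     # 1% of the beam bypasses the sample
mux = np.array([0.1, 1.0, 3.0, 10.0])        # true absorbance mu*x
meas = -np.log((1 - f) * np.exp(-mux) + f)
for t, m in zip(mux, meas):
    print("%5.1f -> %6.3f" % (t, m))         # 10.0 -> 4.601, close to -ln(0.01) = 4.6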
Scott,

You said:
the distortion from nonuniformity is as bad for four strips stacked as for the single strip.
As I showed earlier, a four layer sample is more uniform than a one layer sample, whether the total thickness is preserved or the thickness per layer is preserved.
For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |     1.0    |     0.990     |       0.099       |
#     5    |     1.0    |     4.950     |       0.225       |
#    25    |     1.0    |    24.750     |       0.500       |
Yes, the sample with 25 layers has a more uniform thickness.
As before, the standard deviation increases as square root of N. Using a cumulant expansion (admittedly slightly funky for such a broad distribution) necessarily yields the same result as the Gaussian distribution: the shape of the measured spectrum is independent of the number of layers used! And as it turns out, an exact calculation (i.e. not using a cumulant expansion) also yields the same result of independence.
OK... The shape is the same, but the relative widths change. 24.75 +/- 0.50 is a more uniform distribution than 0.99 +/- 0.099. Perhaps this is what is confusing you?
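The arithmetic behind that point, checked against the numbers quoted above: if one layer has thickness x_1 and standard deviation sigma_1, then N layers give sigma_N/x_N = (sigma_1*sqrt(N))/(x_1*N) = (sigma_1/x_1)/sqrt(N):

import math

# (n_layers, ave thickness, std dev) from the table quoted above
for n, x, s in [(1, 0.990, 0.099), (5, 4.950, 0.225), (25, 24.750, 0.500)]:
    print(n, round(s / x, 4), round((0.099 / 0.990) / math.sqrt(n), 4))

The two columns agree to within the simulation's statistical noise.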
So Lu and Stern got it right. But the idea that we can mitigate pinholes by adding more layers is wrong.
Adding more layers does make a sample of more uniform thickness. Perhaps "mitigate pinholes" means something different to you?

In your original message (in which you set out to "track down" a piece of "incorrect lore") you said that Lu and Stern assumed that layers were stacked "so that thick spots are always over thick and thin spots over thin". They did not assume that. Given that initial misunderstanding, and the fact that you haven't shown any calculations or simulations, it's a bit hard for me to fathom what you think Lu and Stern "got right" or wrong.

The main point of their work is that it is better to use more layers to get to a given thickness. You seem to have some objection to this, but I cannot figure out what you're trying to say. This is starting to feel like "The Gossage Vardebedian Papers".

--Matt
participants (5)
- Frenkel, Anatoly
- Kropf, Arthur Jeremy
- Matt Newville
- Scott Calvin
- Sven L.M. Schroeder