Hi Jeremy,
For a sample of thickness t in the beam, and a good measurement being
I_t = I_0 * exp(-t*mu)
I think having 1/2 of the sample missing (and assuming uniform I_0) would give:
I_t_measured = (I_0/2)*exp(0) + (I_0/2)*exp(-t*mu)
= I_0 * (1 + exp(-t*mu)) / 2
So that
t*mu_measured = -ln(I_t_measured/I_0) = -ln((1 + exp(-t*mu))/2)
An ifeffit script showing such an effect would be:
read_data(cu.xmu, group=good)
plot good.energy, good.xmu
set pinhole.energy = good.energy
set pinhole.xmu = -ln((1 + exp(-good.xmu))/2)
spline(pinhole.energy, pinhole.xmu, kmin=0, kweight=2, rbkg=1)
spline(good.energy, good.xmu, kmin=0, kweight=2, rbkg=1)
newplot(good.k, good.chi*good.k^2, xmax=18, xlabel='k (1/\A)',
ylabel='k\u2\d\gx(k)', key='no pinholes')
plot pinhole.k, pinhole.chi*pinhole.k^2, key='half pinholes'
With the corresponding plot of k^2 * chi(k) attached.
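The same distortion can be sketched without Ifeffit in plain numpy; the mu(E) below is synthetic (an edge step plus small EXAFS-like wiggles standing in for the real cu.xmu data), so the numbers are illustrative only:

```python
import numpy as np

# Synthetic stand-in for a measured mu(E)*t: an edge step of 1.5
# plus small EXAFS-like wiggles.  (Not the real cu.xmu data.)
en = np.linspace(8900, 9900, 2001)
step = 1.5 / (1.0 + np.exp(-(en - 8979) / 2.0))
wiggles = 0.05 * np.sin((en - 8979) / 15.0) * (en > 8979)
mu_good = step + wiggles

# Half the beam misses the sample (a 50% "pinhole"):
#   I_t = (I0/2) + (I0/2)*exp(-mu*t)
# so the apparent mu*t is
mu_pinhole = -np.log((1.0 + np.exp(-mu_good)) / 2.0)

# The pinhole shrinks the apparent edge step, and because the
# mapping is nonlinear it also distorts the wiggles above the edge.
print("edge step, good    : %.3f" % (mu_good[-1] - mu_good[0]))
print("edge step, pinhole : %.3f" % (mu_pinhole[-1] - mu_pinhole[0]))
```

The apparent mu*t is everywhere smaller than the true one, and by a different factor below and above the edge, which is why the wiggles are distorted rather than simply scaled.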
Corrections welcome,
--Matt
On Wed, Nov 24, 2010 at 2:27 PM, Kropf, Arthur Jeremy <kropf@anl.gov> wrote:
> Anatoly,
>
> I think that may be exactly the point. If you have half the beam on a foil
> and half off, even with a uniform beam, you can't get the same spectrum as
> with the whole beam on the foil.
>
> I tried to come up with a quick proof by demonstration, but I got bogged
> down on normalization. That will have to wait.
>
> Jeremy
>
> ________________________________
> From: ifeffit-bounces@millenia.cars.aps.anl.gov
> [mailto:ifeffit-bounces@millenia.cars.aps.anl.gov] On Behalf Of Frenkel,
> Anatoly
> Sent: Wednesday, November 24, 2010 1:33 PM
> To: XAFS Analysis using Ifeffit
> Subject: RE: [Ifeffit] Distortion of transmission spectra due to
> particlesize
>
> Jeremy:
>
> In your simulation, "(c) 1/2 original, 1/2 nothing (a large "pinhole")" it
> appears that chi(k) is half the intensity of the original spectrum.
> Does it mean that when the pinhole is present, EXAFS wiggles are half of the
> original ones in amplitude but the edge step remains the same?
> Or, equivalently, that the wiggles are the same but the edge step doubled?
>
> Either way, I don't think it is the situation you are describing (a large
> pinhole). If there is a large pinhole in a perfect foil (say, you removed
> half of the area of the foil from the footprint of the beam, and the beam
> just goes through from I0 to the I detector, unaffected), then, if I0 is a
> well-behaved function of energy, i.e., the flux density is constant over the
> entire sample for all energies, EXAFS in both cases should be the same.
>
> Or have I misunderstood your example, or, maybe, the colors?
>
> Anatoly
>
> ________________________________
> From: ifeffit-bounces@millenia.cars.aps.anl.gov on behalf of Kropf, Arthur
> Jeremy
> Sent: Wed 11/24/2010 1:08 PM
> To: XAFS Analysis using Ifeffit
> Subject: Re: [Ifeffit] Distortion of transmission spectra due to
> particlesize
>
> It's not that I don't believe in mathematics, but in this case rather
> than checking the math, I did a simulation.
>
> I took a spectrum of a copper foil and then calculated the following:
> (a) copper foil (original edge step 1.86)
> (b) 1/3 original, 1/3 with half absorption, and 1/3 with 1/4 absorption
> (c) 1/2 original, 1/2 nothing (a large "pinhole")
> (d) 1/4 nothing, 1/2 original, 1/4 double (simulating two randomly
> stacked layers of (c))
>
> Observation 1: Stacking random layers does nothing to improve chi(k)
> amplitudes as has been discussed. They are identical, but I've offset
> them by 0.01 units.
>
> Observation 2: Pretty awful uniformity gives reasonable EXAFS data. If
> you don't care too much about absolute N, XANES, or Eo (very small
> changes), the rest is quite accurate (R, sigma2, relative N).
>
> Perhaps I'll simulate a spherical particle next with absorption in the
> center of 10 absorption lengths or so - probably not an uncommon
> occurrence.
>
> Jeremy
>
> Chemical Sciences and Engineering Division
> Argonne National Laboratory
> Argonne, IL 60439
>
> Ph: 630.252.9398
> Fx: 630.252.9917
> Email: kropf@anl.gov
>
>
>> -----Original Message-----
>> From: ifeffit-bounces@millenia.cars.aps.anl.gov
>> [mailto:ifeffit-bounces@millenia.cars.aps.anl.gov] On Behalf
>> Of Scott Calvin
>> Sent: Wednesday, November 24, 2010 10:41 AM
>> To: XAFS Analysis using Ifeffit
>> Subject: Re: [Ifeffit] Distortion of transmission spectra due
>> to particlesize
>>
>> Matt,
>>
>> Your second simulation confirms what I said:
>>
>> > The standard deviation in thickness from point to point in
>> a stack of
>> > N tapes generally increases as the square root of N (typical
>> > statistical behavior).
>>
>> Now follow that through, using, for example, Grant Bunker's
>> formula for the distortion caused by a Gaussian distribution:
>>
>> (mu x)eff = mu x_o - (mu sigma)^2/2
>>
>> where sigma is the standard deviation of the thickness.
>>
>> So if sigma goes as square root of N, and x_o goes as N, the
>> fractional attenuation of the measured absorption stays
>> constant, and the shape of the measured spectrum stays
>> constant. There is thus no reduction in the distortion of the
>> spectrum by measuring additional layers.
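Bunker's Gaussian-distribution formula, and the N-layer scaling just described, can be checked numerically. This is a sketch only: mu = 1 per unit thickness and the per-layer thickness and sigma are assumed values, not numbers from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0  # one absorption length per unit thickness (assumed)

def apparent_mux(x0, sigma, n=2_000_000):
    # "Exact" apparent mu*x by Monte Carlo over a Gaussian
    # thickness distribution: -ln< exp(-mu*x) >.
    x = rng.normal(x0, sigma, n)
    return -np.log(np.mean(np.exp(-mu * x)))

# Per-layer thickness 1.0 with sigma 0.2; stacking N identical tapes
# scales x_o by N and sigma by sqrt(N), as argued above.
for n_layers in (1, 4, 16):
    x0 = 1.0 * n_layers
    sigma = 0.2 * np.sqrt(n_layers)
    exact = apparent_mux(x0, sigma)
    bunker = mu * x0 - (mu * sigma) ** 2 / 2
    frac = (mu * x0 - exact) / (mu * x0)  # fractional attenuation
    print("%2d layers: exact %.4f, Bunker %.4f, fractional loss %.4f"
          % (n_layers, exact, bunker, frac))
```

For a Gaussian the cumulant expression is exact, and the fractional loss (2% with these assumed numbers) comes out the same for 1, 4, or 16 layers, which is the independence claimed above.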
>>
>> Your pinholes simulation, on the other hand, is not the
>> scenario I was describing. I agree it is better to have more
>> thin layers rather than fewer thick layers. My question was
>> whether it is better to have many thin layers compared to
>> fewer thin layers. For the "brush sample on tape" method of
>> sample preparation, this is more like the question we face
>> when we prepare a sample. Our choice is not to spread a given
>> amount of sample over more tapes, because we're already
>> spreading as thin as we can. Our choice is whether to use
>> more tapes of the same thickness.
>>
>> We don't have to rerun your simulation to see the effect of
>> using tapes of the same thickness. All that happens is that
>> the average thickness and the standard deviation get
>> multiplied by the number of layers.
>>
>> So now the results are:
>>
>> For 10% pinholes, the results are:
>> # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> # 1 | 10.0 | 0.900 | 0.300 |
>> # 5 | 10.0 | 4.500 | 0.675 |
>> # 25 | 10.0 | 22.500 | 1.500 |
>>
>> For 5% pinholes:
>> # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> # 1 | 5.0 | 0.950 | 0.218 |
>> # 5 | 5.0 | 4.750 | 0.485 |
>> # 25 | 5.0 | 23.750 | 1.100 |
>>
>> For 1% pinholes:
>> # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> # 1 | 1.0 | 0.990 | 0.099 |
>> # 5 | 1.0 | 4.950 | 0.225 |
>> # 25 | 1.0 | 24.750 | 0.500 |
>>
>> As before, the standard deviation increases as square root of
>> N. Using a cumulant expansion (admittedly slightly funky for
>> such a broad
>> distribution) necessarily yields the same result as the Gaussian
>> distribution: the shape of the measured spectrum is
>> independent of the number of layers used! And as it turns
>> out, an exact calculation (i.e.
>> not using a cumulant expansion) also yields the same result
>> of independence.
>>
>> So Lu and Stern got it right. But the idea that we can
>> mitigate pinholes by adding more layers is wrong.
>>
>> --Scott Calvin
>> Faculty at Sarah Lawrence College
>> Currently on sabbatical at Stanford Synchrotron Radiation Laboratory
>>
>>
>>
>> On Nov 24, 2010, at 6:05 AM, Matt Newville wrote:
>>
>> > Scott,
>> >
>> >> OK, I've got it straight now. The answer is yes, the
>> distortion from
>> >> nonuniformity is as bad for four strips stacked as for the single
>> >> strip.
>> >
>> > I don't think that's correct.
>> >
>> >> This is surprising to me, but the mathematics is fairly clear.
>> >> Stacking
>> >> multiple layers of tape rather than using one thin layer
>> improves the
>> >> signal to noise ratio, but does nothing for uniformity. So there's
>> >> nothing wrong with the arguments in Lu and Stern, Scarrow,
>> etc.--it's
>> >> the notion I had that we use multiple layers of tape to improve
>> >> uniformity that's mistaken.
>> >
>> > Stacking multiple layers does improve sample uniformity.
>> >
>> > Below is a simple simulation of a sample of unity thickness with
>> > randomly placed pinholes. First this makes a sample that
>> is 1 layer
>> > of N cells, with each cell either having thickness of 1 or
>> 0. Then it
>> > makes a sample of the same size and total thickness, but made of 5
>> > independent layers, with each layer having the same fraction of
>> > randomly placed pinholes, so that total thickness for each
>> cell could
>> > be 1, 0.8, 0.6, 0.4, 0.2, or 0. Then it makes a sample with 25
>> > layers.
>> >
>> > The simulation below is in python. I do hope the code is
>> > straightforward enough so that anyone interested can
>> follow. The way
>> > in which pinholes are randomly selected by the code may not be
>> > obvious, so I'll say here that the "numpy.random.shuffle"
>> function is
>> > like shuffling a deck of cards, and works on its array argument
>> > in-place.
>> >
>> > For 10% pinholes, the results are:
>> > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> > # 1 | 10.0 | 0.900 | 0.300 |
>> > # 5 | 10.0 | 0.900 | 0.135 |
>> > # 25 | 10.0 | 0.900 | 0.060 |
>> >
>> > For 5% pinholes:
>> > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> > # 1 | 5.0 | 0.950 | 0.218 |
>> > # 5 | 5.0 | 0.950 | 0.097 |
>> > # 25 | 5.0 | 0.950 | 0.044 |
>> >
>> > For 1% pinholes:
>> > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> > # 1 | 1.0 | 0.990 | 0.099 |
>> > # 5 | 1.0 | 0.990 | 0.045 |
>> > # 25 | 1.0 | 0.990 | 0.020 |
>> >
>> > Multiple layers of smaller particles give a more uniform thickness
>> > than fewer layers of larger particles. The standard deviation of the
>> > thickness goes as 1/sqrt(N_layers). In addition, one can
>> see that 5
>> > layers of 5% pinholes is about as uniform as 1 layer with 1% pinholes.
>> > Does any of this seem surprising or incorrect to you?
>> >
>> > Now let's try your case of 1 layer of thickness 0.4 with 4
>> layers of
>> > thickness 0.4, with 1% pinholes. In the code below, the simulation
>> > would look like
>> > # one layer of thickness=0.4
>> > sample = 0.4 * make_layer(ncells, ph_frac)
>> > print format % (1, 100*ph_frac, sample.mean(), sample.std())
>> >
>> > # four layers of thickness=0.4
>> > layer1 = 0.4 * make_layer(ncells, ph_frac)
>> > layer2 = 0.4 * make_layer(ncells, ph_frac)
>> > layer3 = 0.4 * make_layer(ncells, ph_frac)
>> > layer4 = 0.4 * make_layer(ncells, ph_frac)
>> > sample = layer1 + layer2 + layer3 + layer4
>> > print format % (4, 100*ph_frac, sample.mean(), sample.std())
>> >
>> > and the results are:
>> > # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
>> > # 1 | 1.0 | 0.396 | 0.040 |
>> > # 4 | 1.0 | 1.584 | 0.080 |
>> >
>> > The sample with 4 layers had its average thickness increase by a
>> > factor of 4, while the standard deviation of that thickness only
>> > doubled. The sample is twice as uniform.
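The snippet above depends on a make_layer helper that isn't reproduced in this excerpt; a plausible reconstruction (the cell count, format string, and details are my guesses, not the original code) is:

```python
import numpy as np

def make_layer(ncells, ph_frac):
    # One layer of ncells cells: thickness 1 everywhere except a
    # fraction ph_frac of randomly placed pinholes (thickness 0).
    layer = np.ones(ncells)
    layer[: int(round(ph_frac * ncells))] = 0.0  # punch the pinholes
    np.random.shuffle(layer)  # scatter them randomly, in-place
    return layer

ncells, ph_frac = 100_000, 0.01
fmt = "# %8d | %10.1f | %13.3f | %17.3f |"

# one layer of thickness 0.4
sample = 0.4 * make_layer(ncells, ph_frac)
print(fmt % (1, 100 * ph_frac, sample.mean(), sample.std()))

# four independently stacked layers of thickness 0.4
sample = sum(0.4 * make_layer(ncells, ph_frac) for _ in range(4))
print(fmt % (4, 100 * ph_frac, sample.mean(), sample.std()))
```

With 1% pinholes this reproduces the quoted numbers: the 4-layer stack has four times the average thickness but only twice the standard deviation of a single layer.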
>> >
>> > OK, that's a simple model and of thickness only. Lu and
>> Stern did a
>> > more complete analysis and made actual measurements of the
>> effect of
>> > thickness on XAFS amplitudes. They *showed* that many thin
>> layers is
>> > better than fewer thick layers.
>> >
>> > Perhaps I am not understanding the points you're trying to
>> make, but I
>> > think I am not the only one confused by what you are saying.
>> >
>> > --Matt
>> >
>>
>> _______________________________________________
>> Ifeffit mailing list
>> Ifeffit@millenia.cars.aps.anl.gov
>> http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
>>
>