Scott,
> OK, I've got it straight now. The answer is yes, the distortion from nonuniformity is as bad for four strips stacked as for the single strip.
I don't think that's correct.
> This is surprising to me, but the mathematics is fairly clear. Stacking multiple layers of tape rather than using one thin layer improves the signal to noise ratio, but does nothing for uniformity. So there's nothing wrong with the arguments in Lu and Stern, Scarrow, etc.--it's the notion I had that we use multiple layers of tape to improve uniformity that's mistaken.
Stacking multiple layers does improve sample uniformity. Below is a simple simulation of a sample of unity thickness with randomly placed pinholes. First it makes a sample that is 1 layer of N cells, with each cell having a thickness of either 1 or 0. Then it makes a sample of the same size and total thickness, but made of 5 independent layers, each with the same fraction of randomly placed pinholes, so that the total thickness of each cell can be 1, 0.8, 0.6, 0.4, 0.2, or 0. Then it makes a sample with 25 layers.

The simulation script (appended at the end of this message) is in python. I do hope the code is straightforward enough that anyone interested can follow it. The way the code randomly selects pinholes may not be obvious, so I'll say here that the "numpy.random.shuffle" function is like shuffling a deck of cards, and works on its array argument in-place.

For 10% pinholes, the results are:

  # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
  #  1 | 10.0 | 0.900 | 0.300 |
  #  5 | 10.0 | 0.900 | 0.135 |
  # 25 | 10.0 | 0.900 | 0.060 |

For 5% pinholes:

  # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
  #  1 |  5.0 | 0.950 | 0.218 |
  #  5 |  5.0 | 0.950 | 0.097 |
  # 25 |  5.0 | 0.950 | 0.044 |

For 1% pinholes:

  # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
  #  1 |  1.0 | 0.990 | 0.099 |
  #  5 |  1.0 | 0.990 | 0.045 |
  # 25 |  1.0 | 0.990 | 0.020 |

Multiple layers of smaller particles give a more uniform thickness than fewer layers of larger particles. The standard deviation of the thickness goes as 1/sqrt(N_layers) (a short analytic check of this scaling is appended just before the script below). In addition, one can see that 5 layers with 5% pinholes is about as uniform as 1 layer with 1% pinholes. Does any of this seem surprising or incorrect to you?

Now let's try your case: 1 layer of thickness 0.4 versus 4 layers of thickness 0.4, with 1% pinholes. In the code below, the simulation would look like

  # one layer of thickness=0.4
  sample = 0.4 * make_layer(ncells, ph_frac)
  print format % (1, 100*ph_frac, sample.mean(), sample.std())

  # four layers of thickness=0.4
  layer1 = 0.4 * make_layer(ncells, ph_frac)
  layer2 = 0.4 * make_layer(ncells, ph_frac)
  layer3 = 0.4 * make_layer(ncells, ph_frac)
  layer4 = 0.4 * make_layer(ncells, ph_frac)
  sample = layer1 + layer2 + layer3 + layer4
  print format % (4, 100*ph_frac, sample.mean(), sample.std())

and the results are:

  # N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
  #  1 |  1.0 | 0.396 | 0.040 |
  #  4 |  1.0 | 1.584 | 0.080 |

The sample with 4 layers had its average thickness increase by a factor of 4, while the standard deviation of that thickness only doubled. Relative to its thickness, the 4-layer sample is twice as uniform (the same arithmetic is checked in a second snippet below).

OK, that's a simple model, and of thickness only. Lu and Stern did a more complete analysis and made actual measurements of the effect of thickness on XAFS amplitudes. They *showed* that many thin layers are better than fewer thick layers.

Perhaps I am not understanding the points you're trying to make, but I think I am not the only one confused by what you are saying.
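As a cross-check that needs no simulation at all: each cell of a single layer is 1 with probability (1-p) and 0 with probability p, so one layer has a standard deviation of sqrt(p*(1-p)), and averaging N independent layers divides that by sqrt(N). Here is a minimal snippet, separate from the simulation script (the helper name "expected_std" is mine, just for illustration), that reproduces the standard deviations in the tables above:

  import math

  def expected_std(p, nlayers):
      "analytic std dev of an N-layer average with pinhole fraction p"
      return math.sqrt(p * (1.0 - p) / nlayers)

  for p in (0.10, 0.05, 0.01):
      for n in (1, 5, 25):
          print("# %2i | %4.1f | %.3f" % (n, 100*p, expected_std(p, n)))

For p=0.10 this gives 0.300, 0.134, and 0.060 for 1, 5, and 25 layers, matching the simulated values to within the statistics of the 10000 cells.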
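The stacked case follows from the same binomial arithmetic: summing n layers multiplies the mean thickness by n but the standard deviation only by sqrt(n), so the relative spread shrinks as 1/sqrt(n). Another small check snippet, again separate from the script:

  import math

  p, t = 0.01, 0.4   # pinhole fraction and single-layer thickness
  for n in (1, 4):
      mean = n * t * (1.0 - p)                 # mean thicknesses add linearly
      std = t * math.sqrt(n * p * (1.0 - p))   # variances add, so std grows as sqrt(n)
      print("nlayers=%i mean=%.3f std=%.3f std/mean=%.3f" % (n, mean, std, std/mean))

This prints mean=0.396, std=0.040 for 1 layer and mean=1.584, std=0.080 for 4 layers, with the relative spread std/mean dropping from 0.100 to 0.050 -- the factor of two above.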
--Matt

#!/usr/bin/python
import numpy

def make_layer(ncells, pinhole_frac):
    "make an array of cells of unity thickness with a pinhole fraction"
    data = numpy.ones(ncells)        # [1,1,1,1,1,...]
    pinholes = numpy.arange(ncells)  # [0,1,2,3,4,5,...]
    numpy.random.shuffle(pinholes)   # random pixel indices
    # set ncells*pinhole_frac randomly selected pixels to zero
    for pixel in pinholes[:int(ncells*pinhole_frac)]:
        data[pixel] = 0
    return data

print "# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |"
format = "# %2i | %4.1f | %.3f | %.3f |"

ph_frac = 0.01
ncells = 100*100

# 1 layer of ~unity thickness
sample = make_layer(ncells, ph_frac)
print format % (1, 100*ph_frac, sample.mean(), sample.std())

# 5 layers giving ~unity thickness
layer1 = 0.2 * make_layer(ncells, ph_frac)
layer2 = 0.2 * make_layer(ncells, ph_frac)
layer3 = 0.2 * make_layer(ncells, ph_frac)
layer4 = 0.2 * make_layer(ncells, ph_frac)
layer5 = 0.2 * make_layer(ncells, ph_frac)
sample = layer1 + layer2 + layer3 + layer4 + layer5
print format % (5, 100*ph_frac, sample.mean(), sample.std())

# 25 layers giving ~unity thickness
nlayers = 25
sample = make_layer(ncells, ph_frac)
for i in range(nlayers-1):
    sample = sample + make_layer(ncells, ph_frac)
sample = sample / nlayers
print format % (nlayers, 100*ph_frac, sample.mean(), sample.std())
### END ###