Dear all,
Could you please let me know your thoughts on a question that has been
consuming much of my time lately? It goes like this:
When we perform first-shell analysis of elemental crystals and
nanoparticles (e.g. bulk Ge and Ge NPs, or bulk Pt and Pt NPs), should
both the bulk and the NP samples have the same E0 shift relative to
theory (or, in fitting terms, the same deltaE0 when fitting in
Artemis)? And, more importantly, why or why not?
In the literature I can find papers that constrain the NP deltaE0 to
the value obtained for the crystalline foil, as well as papers that
report deltaE0 values for the NPs that differ from the foil. This
confuses me, because whether or not we fix deltaE0 directly affects the
correlated variables delR and C3.
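(If it helps, my understanding of that correlation, which I'd be glad to
have corrected, is that both parameters enter the first-shell phase
2kR: shifting E0 rescales k, and the fit can absorb part of that shift
as a change in R. To first order, delR ~ R*deltaE0 / (2*Ek), with
Ek = hbar^2*k^2/2m ~ 3.81 eV*A^2 * k^2. For the Ge-Ge first shell at
R ~ 2.45 A, a 1.5 eV shift would correspond to roughly 0.03 A in R at
k = 4 1/A but only about 0.008 A at k = 8 1/A; since the effect is not
linear in k, I'd expect the fit to push part of it into C3 as well,
whose phase contribution goes as k^3.)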
Intuitively, I would think that for Ge in particular the NPs could have
a deltaE0 different from the bulk, since quantum confinement widens the
band gap and changes the electronic structure, giving different
occupied/unoccupied states (which is why the NPs show some PL while the
bulk does not). Would you say this is reasonable?
I have used Athena to calibrate (setting the maximum of the energy
derivative to the tabulated atomic value), align the edges, subtract
the background (with the same parameters), and Fourier transform (with
the same parameters) both the bulk and the NP spectra (measured at the
same temperature, under the same run/measurement conditions). I then
fit them in Artemis against the FEFF8-generated crystalline Ge
standard. I do see a significant change in the odd cumulants (delR and
C3) depending on whether I let deltaE0 float or fix it to the bulk
value (the two deltaE0 values differ by 1 to 2 eV). Would you say this
is physically reasonable, or is it more likely a fit artifact?
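In case it helps to see exactly what I am comparing, below is a minimal
sketch of the two fits written for Larch's feffit interface (the Python
counterpart to what I do in Artemis). The file names, fit windows,
starting values, and the bulk deltaE0 of 7.2 eV are placeholders, not
my actual numbers; the only difference between the two runs is whether
e0 is guessed or held fixed.

# Minimal sketch, assuming Larch's feffit interface: fit the Ge NP
# first shell twice -- once with deltaE0 floating, once fixed to the
# bulk value -- and compare delR and C3. File names, windows, and
# starting values are placeholders.

from larch.io import read_ascii
from larch.fitting import param_group, param, guess
from larch.xafs import (feffpath, feffit_transform, feffit_dataset,
                        feffit, feffit_report)

# chi(k) exported from Athena for the NP sample (hypothetical file name)
dat = read_ascii('Ge_NP.chi', labels='k chi')

# same k- and R-windows used for bulk and NP (placeholder values)
trans = feffit_transform(kmin=3.0, kmax=12.0, kweight=2, dk=1.0,
                         window='hanning', rmin=1.5, rmax=3.0)

def run_fit(e0_fixed=None):
    """Build parameters and run the fit; fix deltaE0 if e0_fixed is given."""
    pars = param_group(s02  = param(0.9, vary=True),
                       e0   = (guess(0.0) if e0_fixed is None
                               else param(e0_fixed, vary=False)),
                       delr = guess(0.0),
                       sig2 = guess(0.003),
                       c3   = guess(0.0))
    # first-shell Ge-Ge path from the FEFF8 run for crystalline Ge
    path = feffpath('feff0001.dat', s02='s02', e0='e0',
                    deltar='delr', sigma2='sig2', third='c3')
    dset = feffit_dataset(data=dat, pathlist=[path], transform=trans)
    return feffit(pars, dset)

out_free  = run_fit()              # deltaE0 floats
out_fixed = run_fit(e0_fixed=7.2)  # fixed to the bulk value (placeholder)

# compare delR and C3 (and their uncertainties) between the two fits
print(feffit_report(out_free))
print(feffit_report(out_fixed))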
Thanks,
Leandro