[Ifeffit] A question about the new Artemis version

Scott Calvin scalvin at slc.edu
Tue Mar 30 18:03:25 CST 2004


Hi all,

A project we're working on now seems like something that shouldn't be that
hard to do in Artemis, but it's proving unwieldy. I was hoping someone
could suggest a less cumbersome method:

We have data on 11 related samples, plus a standard. All use the same FEFF
calculation and paths (about 25 paths or so). Each one was analyzed as a
separate project file, and gave us promising results.

Now we would like to perform a multiple-data-set fit -- on all the samples
if possible, or some subset if not. There are a couple of reasons for
wanting to do this. One is to get a more robust determination of parameters
that are universal to all samples, like S02, rather than just depending on
the fit from the standard. The other is that one parameter we are fitting
(crystallite size) quite properly shows a rather large uncertainty in its
fit. But it is clear from playing around that the difference in
crystallite size between samples is much better determined. Our plan is to
guess crystallite size for one particular sample, and guess a difference
from this size for each of the other samples. This should, of course, not
yield a significantly different result from just guessing each size. But
then we'd like to set the crystallite size for the first sample to its
best-fit value. The only purpose for this is to obtain an estimate of the
uncertainty in the =relative= crystallite sizes of the samples.
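
To make the plan concrete, here is a rough sketch of how the parameters
might look in guess/def/set form (the GDS window in Artemis); the names
are just illustrative, shown for two of the samples:

    guess amp     = 0.9              # S02, shared by every data set
    guess size_1  = 30               # crystallite size, sample 1
    guess dsize_2 = 0                # size difference, sample 2 vs. 1
    def   size_2  = size_1 + dsize_2 # crystallite size, sample 2

After the fit converges, size_1 would be changed from guess to set at its
best-fit value and the fit rerun, so that the reported uncertainty on
dsize_2 estimates the uncertainty in the relative size.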

OK, so that's the plan, but the implementation is awkward. It seems to me
that with the current version of Artemis we have to pretend that there is a
separate FEFF calculation for each sample, and we have to enter all the
path parameters anew for each sample. There are a lot of nitty-gritty
constraints on multiple-scattering paths and such, so that's a lot of data
entry. Also, I forget what the current limits are on total number of FEFF
paths and total number of samples--are we going to be hitting limits?

Basically, I can understand if we simply can't fit this many samples
simultaneously with the standard code. That's not a big problem; we can get
most of what we want by fitting a subset of the samples. It's the
data entry that's a bit of a pain. For each sample we have to re-enter all
of the path and GDS information, and I think we'll get a bloated ZIP file,
since presumably the FEFF calculation will be repeated once for each sample
even though they are all identical.

This doesn't seem like all that unusual a scenario. Does anyone have a good
idea of how this should be done in Artemis, or do we just have to bite the
bullet and re-enter all the pertinent information for each sample we bring
in (and live with the bloated project file)?

--Scott Calvin
Sarah Lawrence College

P.S. Thanks to Matt and Bruce again for even making such a thing
conceivable. The analysis on this project is being done by a talented
freshman without coding experience. It is a tribute to the software (and
the student!) that we've been able to get so far so fast.
