This is a great example, Scott! Thanks for sharing your experience. Hopefully ours will be similarly positive, with results in the same ballpark. It's at least comforting to know there's a chance.

Mike

On Aug 13, 2019, at 8:18 PM, Scott Calvin <dr.scott.calvin@gmail.com> wrote:

Yes, I did.

I set up a double-blind experiment with mixtures of various iron standards and an “unknown” iron-containing compound. This was years ago, so I may have a few details wrong, but this should give the gist:

Undergraduates made the mixtures, using random amounts of randomly selected standards. An undergraduate also ordered the “unknown” compound.

We then measured the spectra, along with the spectra of the pure standards (but not the pure “unknown”). The spectra were measured at somewhat different temperatures so that simple linear combination analysis would be of limited use. Three of us then attempted analysis independently: myself, one of my most advanced undergraduates (who did not participate in the preparation of the samples), and a high school student with roughly two weeks training in analysis. We attempted to find what compounds were present, how much of each, and, in the case of the unknown compound, as much as we could suss out structurally.
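For readers unfamiliar with the linear combination analysis mentioned above, here is a minimal sketch of the idea: model the unknown mixture's spectrum as a weighted sum of the measured standard spectra and solve for the weights by least squares. The function name and the synthetic Gaussian "spectra" below are purely illustrative, not from the actual study; measuring standards and mixtures at different temperatures, as Scott describes, breaks the assumption that the mixture is an exact linear combination of the standards.

```python
import numpy as np

def lcf(unknown, standards):
    """Linear combination fit: find weights w so that standards @ w ~ unknown.

    unknown:   (n_points,) array, normalized absorption of the mixture
    standards: (n_points, n_standards) array, one column per standard
    Returns the weight vector and the residual norm.
    """
    w, _, _, _ = np.linalg.lstsq(standards, unknown, rcond=None)
    return w, np.linalg.norm(standards @ w - unknown)

# Synthetic demo: two fake "standard" spectra and a 60/40 mixture.
energy = np.linspace(7100, 7200, 200)
s1 = np.exp(-((energy - 7120) / 5.0) ** 2)
s2 = np.exp(-((energy - 7150) / 8.0) ** 2)
mix = 0.6 * s1 + 0.4 * s2

weights, resid = lcf(mix, np.column_stack([s1, s2]))
```

On this idealized data the fit recovers the 60/40 split essentially exactly; real data (noise, pinholes, temperature differences) is what makes the blind test interesting.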

The results were that we all did OK, but my analyses were the most accurate, the advanced undergraduate less so, and the high school student the least (but still generally in the ballpark).

It was satisfying to know that different analysts could find similar results, that those results reflected reality, and that those with greater expertise did achieve more accurate results.

If I recall correctly, I didn’t publish because the undergraduate who prepared the pure standards did a lousy job (pinholes, etc.), which distorted our measurement of the standards enough to make it more difficult to evaluate our analyses. Still, it showed things work in principle.

The data, including a remeasured set of standards, is still available as the EXAFS Divination Set.

Best,

Scott Calvin
Lehman College of the City University of New York

On Tue, Aug 13, 2019 at 9:41 PM Mike Massey <mmassey@gmail.com> wrote:
Hi Everyone,


I'm curious: has anyone ever tried turning two analysts loose on the same unknown EXAFS spectrum to see if their fits come out with similar conclusions? If you have tried it, how did it work out? Were the conclusions indeed similar? If not, why not, and what did you end up doing about it?

I was talking with a colleague today about our plans for data analysis, and we settled on this approach (since there are two interested parties willing to try to fit a series of unknown EXAFS datasets).

The hope is, of course, that the two analysts will independently reach similar conclusions with similar fits and structural models, but to my mind that outcome is by no means guaranteed. Given the (presumably) wide variation in fitting customs and procedures, I can envision a scenario in which there are major differences.

This got me wondering, "Has anyone tried this?" So I thought I'd ask.


Your thoughts and experiences would be welcome. Thanks!



Mike Massey
_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
Unsubscribe: http://millenia.cars.aps.anl.gov/mailman/options/ifeffit