Debye-Waller factors for metal oxides?
Hi,

There is little discussion of the significance of fitted Debye-Waller factors (ss2) found for metal centers in the metal oxides of heterogeneous catalysts. I am searching for a better way of determining how reasonable these values are in the fits I get.

1. Are phenomenological models like the correlated Debye model and the Einstein model appropriate? Debye temperatures seem hard to find for oxides.

2. Is there an equivalent of ss2 in x-ray crystallography data that I can use for comparison in these materials? What should I look for when reviewing published x-ray crystallographic data?

3. Is the temperature dependence reported for ss2 in some metals (like Cu and Al) similar to that of their respective oxides? For example, in the Debye-Waller factor calculations reported for Al metal by R.C.G. Killean (J. Phys. F: Metal Phys., v. 4, p. 1908, 1974), ss2 changes by a factor of three between 25 C and 300 C. Should I expect a factor-of-three increase in crystalline zeolites (AlO4 structural units)?

4. Specific to my work: I have studied Al-containing oxides at temperatures between 25 and 300 C. I would like to quantify coordination changes by EXAFS fitting of the Al K-edge data. The EXAFS was taken at temperature, and I observe no change in the broadening of the data. Likewise, the fitting shows no sensitivity in the Debye-Waller factor: it is nearly constant at 0.001 A^2 over the temperatures of interest. Since ss2 and the coordination number (CN) are correlated, I would like a way to bound the error on the fitted CN by modeling real physical changes in the Debye-Waller factors. Do you have any suggestions?

Looking forward to your thoughts.

Ian Drake
Graduate Student
UC Berkeley
Hi Ian,

I am looking forward to the responses of others on this issue, because it's always a tough one. But I'll give you some quick thoughts as I see them.

Essentially, there are no simple models for the ss2 values in oxides. Einstein models don't work very well because a separate Einstein temperature would really have to be assigned for each kind of bond, and even then you still don't have a formula to address multiple scattering. Debye models have similar problems.

X-ray crystallography values give the variance around a lattice point; EXAFS gives the variance in the bond length. Thus I think it is true that for distant (i.e. uncorrelated) scatterers, the EXAFS value should be roughly twice the XRD value (somebody please confirm my reasoning on this!). But for bonded pairs this is certainly not true: acoustic modes generally dominate over optical modes under these circumstances, and thus the EXAFS value for nearby pairs should be lower than for distant pairs, all else being equal.

The temperature dependences of metals and oxides are not related in any simple way of which I am aware. After all, that would imply that things like the specific heats of oxides should correlate with those of the metals... think about water ("hydrogen oxide"), for example. Note that if you are expecting changes in CN, it is particularly unlikely that a simple theoretical model will cover changes in ss2, since the former should have an actual physical effect on the latter (as opposed to just a fitting correlation).

OK, so you're probably wondering what you can do. Although there have been some attempts to theoretically model the Debye-Waller factors of oxides that I'm sure people on this list could point you toward, in most cases the more practical solution for your problem is to fit standards, or to complement with other experimental techniques.
In other words, can you find a closely related material that is not expected to show CN changes and measure its ss2's as a function of temperature? Or can you get data on CN's at even a few of the temperatures some other way? --Scott Calvin Sarah Lawrence College
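[A sketch to go with Scott's reasoning above: his factor-of-two estimate for distant, uncorrelated scatterers is easy to sanity-check numerically. This is my illustration, not part of the thread, and it assumes idealized 1-D Gaussian displacements of equal amplitude for both atoms; the 0.8 correlation in the bonded case is likewise invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
u_rms = 0.05  # rms displacement about each lattice site, in Angstroms

uA = rng.normal(0.0, u_rms, n)  # absorber displacement along the bond
uB = rng.normal(0.0, u_rms, n)  # distant scatterer: uncorrelated motion

sigma2_xrd = uA.var()           # variance about a lattice point (XRD-like)
sigma2_exafs = (uB - uA).var()  # variance of the bond length (EXAFS-like)
ratio_uncorr = sigma2_exafs / sigma2_xrd  # ~2 for uncorrelated motion

# In-phase (correlated) motion, same single-site variance, correlation 0.8:
uB_corr = 0.8 * uA + 0.6 * rng.normal(0.0, u_rms, n)
ratio_corr = (uB_corr - uA).var() / sigma2_xrd  # drops well below 2

print(ratio_uncorr, ratio_corr)
```

The correlated case shows why the EXAFS sigma2 for a strongly bonded near neighbor can be much smaller than twice the crystallographic value.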
Hi all,

Theanne Schiros and I are running into a fitting issue that I don't quite know how to solve, other than to suggest a new Ifeffit feature. :)

The system we are working with is a CuInSe2 photovoltaic. Our model allows for the possibility of vacancies as well as site disorder (e.g. a copper sitting in a nominally selenium site). Combined with the known stoichiometry of the samples (generally somewhat Se-rich compared to the nominal formula), that gives us a whole network of constraints. In other words, there are 12 parameters describing the disorder: the amounts of copper, indium, selenium, and vacancies in each of the three kinds of site. But many of those can be "def'd" in terms of the others, since we know there are twice as many selenium sites as copper or indium sites, and we know the stoichiometry. The resulting fits work pretty well. They generate results which are statistically superior to fits with no disorder, and yield values for the various kinds of disorder which are comparable to those predicted theoretically.

So here's the problem: a couple of the "def'd" values come out somewhat negative. I'm not surprised this happens; my sense is that the negative values are comparable to the uncertainties (Theanne has the exact data; I'm mainly kibitzing), so it's not a crisis for the overall fit. But of course I'd like to force the fit not to allow those negative occupancy numbers. I can't think how to do that, though. We can't just put an abs() around the def'd values, because that would mean our constraints based on stoichiometry and the relative number of sites were being applied incorrectly. A restraint doesn't really seem appropriate either.

Does anyone have an idea of how to deal with this? If no one does, perhaps there should be an Ifeffit option for a "hard restraint." This would be something like the max and min functions, except that it would operate by putting a huge penalty on the chi-square when outside the range.
Just to be clear, it would work something like this: y has been def'd to x + z. Then y is also given a hard restraint that it cannot be less than 0. The fit would then generate a positive value of y that was not less than 0 by varying x and z in such a way that it is still true that y = x + z. Is this a feature that would be useful to other people? Am I missing a way to implement this using the current software? Would this create big problems for, e.g., Ifeffit's ability to calculate uncertainties and correlations? Thanks for any thoughts you all have on these issues! --Scott Calvin Sarah Lawrence College
Hi Scott, Theanne,
.... We can't just put an abs() around the def'd values, because that means our constraints based on stoichiometry and the relative number of sites are being applied incorrectly. A restraint doesn't really seem appropriate either.
Does anyone have an idea of how to deal with this?
In a simplified case, you'd like to constrain two coordination numbers to be positive _and_ to add up to some value, right?

  guess n1_var = 5
  def   n1     = abs(n1_var)
  def   n2     = 12 - n1

or, if you're worried about n1 > 12, n2 < 0, you could do:

  guess n1_var = 5
  def   n1     = max(0, min(n1_var, 12))
  def   n2     = 12 - n1

Is that good enough, or am I not seeing the whole complicated picture?
If no one does, perhaps there should be an ifeffit option for a "hard restraint." This would be something like the max and min functions, except that it would operate by putting a huge penalty to the chi-square when outside the range. Just to be clear, it would work something like this: y has been def'd to x + z. Then y is also given a hard restraint that it cannot be less than 0. The fit would then generate a positive value of y that was not less than 0 by varying x and z in such a way that it is still true that y = x + z.
I think this is possible without a change to the code. You can add a restraint penalty when some value is outside a range [lo_val, hi_val]. It's a bit scary looking, but this will do it:

  lo_val = 0.7    ## lower bound
  up_val = 1.0    ## upper bound
  scale  = 1.000  ## scale can be increased to weight this restraint
  target = 0.8    ## this could be a variable, say S02

  ## use penalty as a restraint in feffit()
  def penalty = (max((target-lo_val),abs(target-lo_val)) - min((target-up_val),-abs(target-up_val)) + lo_val - up_val)*scale

This penalty can be used as a restraint, and will be 0 when the target value is between the lower and upper bound.

To set a penalty only for going below the lower bound, it's just

  def penalty = max((target-lo_val),abs(target-lo_val)) - (target-lo_val)

and if the lower bound is 0, it's even simpler:

  def penalty = max(target,abs(target)) - target

Even that is scary enough, and the whole thing may suggest a change to the code to make a simple function that provides a restraint penalty when a variable is outside some user-selected bounds, perhaps as

  bound(x, lo_val, hi_val)   # not implemented!

Is this way of setting a penalty what you had in mind, or am I missing the point?

--Matt

PS:
##########
# To visualize this restraint, an array of penalties can be
# plotted vs. an array of target values:
lo_val = 0.7
up_val = 1.0
scale  = 1.000
# array of target values
m.vals = range(-1, 3, 0.05)
m.lo_bound = scale*(m.vals - up_val)
m.up_bound = scale*(m.vals - lo_val)
m.lo_penalty = max(m.up_bound, abs(m.up_bound)) - m.up_bound
m.up_penalty = -min(m.lo_bound, -abs(m.lo_bound)) + m.lo_bound
m.penalty = (max(m.up_bound, abs(m.up_bound)) - min(m.lo_bound, -abs(m.lo_bound)) + scale*(lo_val - up_val))
newplot m.vals, m.penalty, style=linespoints2
plot m.vals, m.lo_penalty
##########
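[For anyone who wants to convince themselves that Matt's expression behaves as claimed, here is a direct transcription into Python (not Ifeffit syntax, just an illustration): the penalty is zero anywhere between the bounds and rises linearly, with slope 2*scale, outside them.]

```python
def penalty(target, lo_val, up_val, scale=1.0):
    # Direct Python transcription of the Ifeffit restraint expression above.
    # Zero for lo_val <= target <= up_val; grows as 2*scale*(distance
    # outside the nearer bound) otherwise.
    return (max(target - lo_val, abs(target - lo_val))
            - min(target - up_val, -abs(target - up_val))
            + lo_val - up_val) * scale

# Inside the bounds the penalty vanishes; outside, it grows linearly.
for t in (-0.5, 0.0, 0.7, 0.85, 1.0, 1.5):
    print(t, penalty(t, 0.7, 1.0))
```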
Hi Matt,
In a simplified case, you'd like to constrain two coordination numbers to be positive _and_ to add up to some value, right?

  guess n1_var = 5
  def   n1     = abs(n1_var)
  def   n2     = 12 - n1

or, if you're worried about n1 > 12, n2 < 0, you could do:

  guess n1_var = 5
  def   n1     = max(0, min(n1_var, 12))
  def   n2     = 12 - n1
Is that good enough, or am I not seeing the whole complicated picture?
That's a very good (less complicated) example of what I'm thinking of. In the example you just gave, you have to guarantee that n2 is positive by putting limits on n1. Suppose we make it just slightly more complicated, with three coordination numbers that have to be positive and add up to some value. I guess it could be done this way:

  guess n1_var = 5
  def   n1     = max(0, min(n1_var, 12))
  guess n2_var = 5
  def   n2     = max(0, min(n2_var, 12 - n1))
  def   n3     = 12 - n1 - n2

So that strategy might work. It might get very complicated, though; we have something like six constrained sums, with variables often appearing in more than one sum. For example, the fraction of copper atoms in nominally copper sites appears in the sum requiring the total amount of copper to match the stoichiometry, and in the sum requiring the total number of atoms + vacancies in nominally copper sites to match the number of copper sites. We'll look at it, but I think it may be very unwieldy. Your next suggestion, though, should do it:
I think this is possible without a change to the code. You can add a restraint penalty when some value is outside a range [lo_val,hi_val]. It's a bit scary looking, but this will do it:
  lo_val = 0.7    ## lower bound
  up_val = 1.0    ## upper bound
  scale  = 1.000  ## scale can be increased to weight this restraint
  target = 0.8    ## this could be a variable, say S02

  ## use penalty as a restraint in feffit()
  def penalty = (max((target-lo_val),abs(target-lo_val)) - min((target-up_val),-abs(target-up_val)) + lo_val - up_val)*scale

This penalty can be used as a restraint, and will be 0 when the target value is between the lower and upper bound.

To set a penalty only for going below the lower bound, it's just

  def penalty = max((target-lo_val),abs(target-lo_val)) - (target-lo_val)

and if the lower bound is 0, it's even simpler:

  def penalty = max(target,abs(target)) - target
Yes! Very clever. I've used a similar strategy to implement "if-then-else" type assignments. This seems considerably easier to implement in complicated situations than the previous method.
Even that is scary enough, and the whole thing may suggest a change to the code to make a simple function that provides a restraint penalty when a variable is outside some user-selected bounds, perhaps as bound(x, lo_val, hi_val) # not implemented!
Is this way of setting a penalty what you had in mind, or am I missing the point?
Yes, that's exactly what I had in mind. I would certainly find it very useful...this isn't the first system I've analyzed where it would have helped, although I've usually been able to get around it with judicious use of abs() or max/min. I am curious if others would find this feature helpful. Now that you've pointed out how to do the same thing with the current restraints, I don't really need the shortcut...you've seen enough of my fits to know I don't shy away from long expressions. :) But of course it would make life easier. --Scott Calvin Sarah Lawrence College
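[As a sanity check on the chained max/min construction Scott describes earlier in this message, here is a sketch in Python mirroring the Ifeffit defs, with 12 as the assumed total; for any pair of guesses the three occupancies stay non-negative and sum to the total.]

```python
def coords(n1_var, n2_var, total=12.0):
    # Mirrors: def n1 = max(0, min(n1_var, 12))
    #          def n2 = max(0, min(n2_var, 12 - n1))
    #          def n3 = 12 - n1 - n2
    n1 = max(0.0, min(n1_var, total))
    n2 = max(0.0, min(n2_var, total - n1))
    n3 = total - n1 - n2
    return n1, n2, n3

# Exhaustive check over a grid of (possibly silly) guesses:
for a in range(-5, 20):
    for b in range(-5, 20):
        n1, n2, n3 = coords(float(a), float(b))
        assert min(n1, n2, n3) >= 0.0
        assert abs(n1 + n2 + n3 - 12.0) < 1e-12
```

The price of this construction, as the thread notes, is that the clamping is asymmetric (n1 is privileged over n2 over n3), which is one reason the penalty-based restraint can be preferable.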
Hi All,

Regarding the "bound(min,max)" issue: I think that is something many people dream of in IFEFFIT. At present, the implementation of a restraint is (as Matt's example showed) a bit scary. And we have all seen the tricks fits can play, like negative DW factors, silly CNs and so on. So a simple "bound" command as suggested would surely make life a bit easier, especially for new users.

Norbert

--
Dr. rer. nat. Norbert Weiher (norbert.weiher@manchester.ac.uk)
School of Chemical Engineering and Analytical Science
Sackville Street, Manchester, M60 1QD - Phone: +44 161 306 4468
Hi Norbert, Scott, On Wed, 9 Feb 2005, Norbert Weiher wrote:
towards the "bound (min,max)" issue:
I think that is what many people dream of in IFEFFIT. By now, the implementation of a restraint is (as Matt's example showed) a bit scary. And we all have seen the tricks of fits like negative DW factors, silly CNs and so on. So a simple "bound" command as mentioned would for sure make - especially for new users - life a bit easier.
Just for completeness, there are two different ways to give bounds to variables.

The first uses max/min:

  guess x_var = 1
  def   x     = max(low, min(high, x_var))

which gives infinitely hard walls: x will never go outside the bounds.

The second approach is to calculate a penalty when the value goes out of bounds. My earlier post gave an ugly way to do this, and a built-in 'bound()' function could also do this:

  penalty = bound(x_var, low, high, scale)

x_var may go outside the limits, but with a penalty of

  penalty =  (x_var - high)*scale   for x_var > high
  penalty = -(x_var - low)*scale    for x_var < low

A very large value for scale would make a very hard wall, but the strength of the wall is tweakable. This penalty can then be used as a restraint. A bound() function would be easy enough to implement, and I'll do this for the next version (but see below).

Possibly related to this, Bruce has suggested a couple of times that restraints should be implemented with a restraint() command, similar to set() or def(), rather than as an additional scalar parameter to feffit(). That might fit in with this....

--Matt

On upcoming versions: The source code for the next version of Ifeffit won't be available for several weeks (I won't start working on it for at least two weeks). No estimate on a Windows dll. Did I mention that releases of Windows binaries and dlls will lag significantly? I did ask for feedback about a recent Windows update and got exactly two responses. I also asked for help building Windows executables and heard no replies. I take this to mean that there is little interest in the Windows versions of these programs. I cannot make Windows executables for Athena, Artemis, etc. even if I wanted to, and I don't expect that Bruce will be making them real soon either. That's all to say that if you're expecting a Windows version with these changes, you'll be waiting a while.
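[For concreteness, a sketch of the proposed bound() semantics in Python: zero inside the range, a scaled linear penalty outside. This is only an illustration of the proposal; as Matt notes, bound() is not an actual Ifeffit function.]

```python
def bound(x, low, high, scale=1.0):
    # Proposed semantics: 0 inside [low, high];
    # (x - high)*scale above the range, -(x - low)*scale below it.
    # A large scale approximates an infinitely hard wall.
    if x > high:
        return (x - high) * scale
    if x < low:
        return (low - x) * scale
    return 0.0

# e.g. penalty = bound(x_var, 0.0, 1.0, scale=1e6) used as a restraint
for x in (-0.25, 0.5, 1.5):
    print(x, bound(x, 0.0, 1.0))
```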
Hi Matt,
The source code for next version of Ifeffit won't be for several weeks (I won't start working on it for at least two weeks). No estimate on a Windows dll. Did I mention that releases of Windows binaries and dlls will lag significantly? I did ask for feedback about a recent windows update and got exactly two responses. I also asked for help building windows executables and heard no replies. I take this to mean that there is little interest in the Windows versions of these programs.
There's actually a much happier explanation for the lack of response than lack of interest in Windows versions: I think it's evidence that these programs work well and serve most people's needs most of the time in their current "official" incarnations. Not that improvements and extensions to the capabilities won't be appreciated, but the prospect of waiting half a year (or whatever) for the next version is not a hardship when the software is already so powerful and works so well.

Of course, given the immense amount of unpaid time and effort you and Bruce have put into developing and, perhaps more importantly, maintaining and supporting this software, I would encourage users to help in any way they are able, whether that be bug reports, building executables, creating documented examples, or helping to field questions on this list.

--Scott Calvin
Sarah Lawrence College
Hey, I like ifeffit so much that I am planning to come from Japan to thank Matt personally (whilst carrying out an APS experiment!). Paul On Feb 10, 2005, at 1:22 PM, Scott Calvin wrote:
Hi Matt,
The source code for next version of Ifeffit won't be for several weeks (I won't start working on it for at least two weeks). No estimate on a Windows dll. Did I mention that releases of Windows binaries and dlls will lag significantly? I did ask for feedback about a recent windows update and got exactly two responses. I also asked for help building windows executables and heard no replies. I take this to mean that there is little interest in the Windows versions of these programs.
There's actually a much happier explanation for the lack of response than lack of interest in Windows versions. I think that it's evidence that these programs work well, and serve most people's needs most of the times, in their current "official" incarnations. Not that improvements and extensions to the capabilities won't be appreciated, but the prospect of waiting half a year (or whatever) for the next version is not a hardship when the software is already so powerful and works so well. Of course, given the immense amount of unpaid time and effort you and Bruce have put into developing and, perhaps more importantly, maintaining and supporting this software, I would encourage users to help in any way they are able, whether that be bug reports, building executables, creating documented examples, or helping to field questions on this list.
--Scott Calvin Sarah Lawrence College _______________________________________________ Ifeffit mailing list Ifeffit@millenia.cars.aps.anl.gov http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
Dr. Paul Fons
Senior Researcher
National Institute for Advanced Industrial Science & Technology
METI
Center for Applied Near-Field Optics Research (CANFOR)
AIST Central 4, Higashi 1-1-1
Tsukuba, Ibaraki JAPAN 305-8568
tel. +81-298-61-5636
fax. +81-298-61-2939
email: paul-fons@aist.go.jp
Hi folks,

I am sitting here at the beamline waiting for data to roll in and reading the mailing list correspondence that came in while I was on vacation. A month ago Scott and Matt had a very interesting discussion about ways to set boundaries on parameter values in fits. Matt showed a rather complicated math expression using Ifeffit's current restraint mechanism that does essentially the same thing as the hypothetical expression "bound(x,lo_val,hi_val)", where "x" is a guessed parameter and the other two numbers indicate the boundaries beyond which a severe penalty should be applied to the fitting metric. Matt's expression was, as Scott said, rather fiendish and clever. Norbert then piped up suggesting that a "bound" function would be widely appreciated.

One solution is, of course, to wait for Matt to find the time to implement and test a bound function. It occurs to me that another solution is to implement in Artemis a "bound" interface which would take the arguments of the hypothetical bound function mentioned in the last paragraph and construct the long expression that Matt suggested to Scott. All that could be done (as so many things are) behind the scenes and out of view of the casual user.

The Artemis solution has two nice features. One is that it would be much more transparent to the user than trying to implement Matt's fiendish suggestion by hand. The second is that I could probably deliver this feature pretty quickly in Artemis, and it may not be necessary to change Ifeffit. Scott and Norbert seem interested, so perhaps I should put this on the list of things to do...

B
Hi Bruce,

First of all, welcome back from the UK. Good to see your messages popping up in the mailing list again. Hope you are relaxed enough to get back to business...

I am glad to hear that implementing a "bound" function in ARTEMIS is not difficult to handle. I agree with your point that the function need not be implemented in the IFEFFIT library itself. It is, after all, just an interface which produces the expressions that Matt described earlier.

Another subject: did you consider implementing the "load GDS sheet" and "save GDS sheet" options we discussed earlier? This would be great for text editor freaks (like me, I confess...), as you could write your variable definitions externally and crank them back into ARTEMIS to do the fitting business for you.

Norbert

--
Dr. rer. nat. Norbert Weiher (norbert.weiher@manchester.ac.uk)
School of Chemical Engineering and Analytical Science
Sackville Street, Manchester, M60 1QD - Phone: +44 161 306 4468
Hi Bruce, I will add a bound(x,lo_val,hi_val) function to the ifeffit core. It's of general use for modeling and restraints, so it shouldn't be tied to a particular App. I think the main issue Scott and I discussed last month was not so much adding this or any other feature, but getting the results distributed, especially to Windows users, in a timely manner. I think putting features in Artemis doesn't necessarily help that, as no one is currently able to make Windows binaries. In order for Windows users to have access to this feature, it will require a new Artemis.exe. Whether or not it also requires a new ifeffit.dll does not seem like the limiting factor. I read the opinion of Scott, Norbert, and all who responded by personal emails to be that waiting several months for such an update was not a problem. --Matt
The lack of accurate Debye-Waller factors in complex materials like metal oxides can be a real problem. In my view, simple phenomenological methods like the correlated Debye and Einstein models can't always be trusted.

There is an alternative approach to fitting DW factors in such systems which might be useful. The idea is to fit a few spring constants k_ij between the atoms, rather than the DW factors themselves, e.g. for the most important sites. This is efficient since the spring constants are temperature independent. This has been tested by Rossner et al. using their Bayesian approach [Rossner, H. H. & Krappe, H. J. (2004). Phys. Rev. B 70, 104102]. The fitting is done via an analytical formula for the Debye-Waller factors based on a fast recursion method [Poiarkova, A. V. & Rehr, J. J. (1998). Phys. Rev. B 59, 948]. Of course, such an approach still misses contributions to the DW factors due to structural disorder, which are additive.

J. Rehr

On Sun, 6 Feb 2005, Ian James Drake wrote:
Hi,
There is little discussion over the significance of fitted Debye-Waller factors (ss2) found for metal centers in metal oxides of heterogeneous catalysts. I am searching for a better way of determining how reasonable these values are in the fits I get.
1. Are phenomenological models like the correlated Debye model and Einstein model appropriate? It seems that Debye temperatures are hard to find for oxides.
2. Is there an equivalent value for ss2 in x-ray crystallography data that I can refer to for comparison in these materials? What should I look for in reviewing published x-ray crystallographic data?
3. Is the temperature dependence reported for the ss2 for some metals (like Cu and Al) similar to their respective oxides? For example, in Debye-Waller factor calculations reported for Al metal by R.C.G Killean (in J.Phys.F: Metal Phys., v. 4, pg. 1908, 1974), the ss2 changes by a factor of three between 25C and 300C. Would I expect a factor of three increase in crystalline zeolites (AlO4 structural units).
4. Specific to my work: I have studied Al containing oxides at temperatures between 25 and 300 C. I would like to quantify coordination changes by doing EXAFS fitting of the Al K-edge data. The EXAFS was taken at temperature and I observe no change in the broadening of the data. Likewise, the fitting shows no sensitivity in the Debye-Waller factor. It is nearly constant at 0.001 A over the temperatures of interest. Since ss2 and the coordination number (CN) are correlated, I would like a way to bind the error on the fitted CN by modeling real physical changes in the Debye-Waller factors. Do you have any suggestions?
Looking forward to your thoughts.
Ian Drake Graduate Student UC Berkeley
Hi John,

Hello from Japan! The cherry blossoms have just finished falling off the trees here and we are enjoying a beautiful spring. Speaking of spring... we have been doing a lot of DFT calculations in my group recently for phase-change materials, and I was very curious about the possibility of calculating Debye-Waller factors within abinit using some of the same DFT structural models we have been optimizing. I wondered whether it would be possible to do this as a joint research project, or simply to use the code; either would be fine. We have temperature-dependence data for all edges of the materials in question, so checking the DW factors might give us some insight into the correct structural models. Thanks in any case!

Cheers, Paul
Didn't mean to send the last note to the mailing list--- please ignore.
Hi Ian, I think that Scott, John, and Shelly answered some of your questions about XAFS Debye-Waller factors, and gave many of the standard answers about how to deal with these in the analysis of real XAFS data. Here are some more thoughts on this:
1. Are phenomenological models like the correlated Debye model and Einstein model appropriate? It seems that Debye temperatures are hard to find for oxides.
The Einstein model asserts that there is a single, dominant (or effective) vibrational mode between two atoms, with a vibrational amplitude related to the Einstein temperature. For first shell EXAFS, this is almost always an appropriate model. But, if you don't know what that Einstein temperature is, you have to treat it as an unknown. Einstein / Debye temperatures are hard to find for metal oxides because most bulk measurements of thermal properties don't fit with these simple models. That doesn't mean that the metal-near-neighbor-oxygen bond doesn't have a single, dominant (or effective) vibration, just that the other vibrational modes of the solid are important in the total thermal behavior. But a result of the Einstein model that may be important for your application is that sigma2 is linear in T over most of the temperature range.
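[For concreteness, the single-mode sigma2 of the Einstein model can be evaluated directly from sigma^2 = hbar^2/(2*mu*kB*theta_E) * coth(theta_E/2T), which also shows the near-linearity in T that Matt mentions. A Python sketch for an Al-O pair; the Einstein temperature used here is an illustrative guess, not a measured value.]

```python
import math

HBAR = 1.054571817e-34    # J*s
KB   = 1.380649e-23       # J/K
AMU  = 1.66053906660e-27  # kg

def einstein_sigma2(T, theta_E, mu_amu):
    """Einstein-model sigma^2 (Angstrom^2) for a bond with reduced mass
    mu_amu (amu) and Einstein temperature theta_E (K):
    sigma^2 = hbar^2/(2*mu*kB*theta_E) * coth(theta_E/(2T))."""
    mu = mu_amu * AMU
    prefactor = HBAR**2 / (2.0 * mu * KB * theta_E)          # m^2
    return prefactor / math.tanh(theta_E / (2.0 * T)) * 1e20  # -> A^2

# Al-O pair: reduced mass 27*16/(27+16) ~ 10 amu.
# theta_E = 1100 K is a guess chosen only for illustration.
mu_alo = 27.0 * 16.0 / (27.0 + 16.0)
for T in (298.0, 423.0, 573.0):
    print(f"T = {T:5.0f} K  sigma2 = {einstein_sigma2(T, 1100.0, mu_alo):.5f} A^2")
```

Well above theta_E the coth term goes to 2T/theta_E, so sigma2 becomes linear in T; at low T it flattens to the zero-point value.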
2. Is there an equivalent value for ss2 in x-ray crystallography data that I can refer to for comparison in these materials? What should I look for in reviewing published x-ray crystallographic data?
Crystallographic Debye-Waller factors are generally unrelated to XAFS Debye-Waller factors. Many people have tried to make this relation; in my maybe-not-so-humble-as-it-should-be opinion, they're all doomed to fail. They're very different views of thermal and static disorder: one is with respect to the near neighbors, the other with respect to "the fixed stars" of the crystal. On the other hand, molecular vibrational information from IR, Raman, NMR, Mossbauer, etc. also estimates bond strengths and disorder in bond length, and is more like the XAFS Debye-Waller factors in its 'localness'. As others have said, XAFS cares about the vibrational mode between the two atoms, and so is most strongly related to the optical phonon modes. In many of these other vibrational spectra, these modes can be selected and their amplitudes extracted. I don't think much (or enough) effort has been put into relating vibrational measurements to XAFS DWFs; it seems like an interesting and potentially useful approach. It's been more common for people to try to fit the experiment into the Einstein model by taking temperature-dependent data well below room temperature, or to come up with models relating crystallographic and XAFS DWFs. I think these are not appropriate for most systems, including yours.
3. Is the temperature dependence reported for the ss2 for some metals (like Cu and Al) similar to their respective oxides?
No, not at all. Metallic bonds are generally weak, and so tend to have larger sigma2 than metal-oxygen bonds.
For example, in Debye-Waller factor calculations reported for Al metal by R.C.G Killean (in J.Phys.F: Metal Phys., v. 4, pg. 1908, 1974), the ss2 changes by a factor of three between 25C and 300C. Would I expect a factor of three increase in crystalline zeolites (AlO4 structural units).
Nope. The Al-O bond strength is much higher than Al-Al (Al melts at 660C, Al2O3 at 2050C).
4. Specific to my work: I have studied Al containing oxides at temperatures between 25 and 300 C. I would like to quantify coordination changes by doing EXAFS fitting of the Al K-edge data. The EXAFS was taken at temperature and I observe no change in the broadening of the data. Likewise, the fitting shows no sensitivity in the Debye-Waller factor. It is nearly constant at 0.001 A over the temperatures of interest. Since ss2 and the coordination number (CN) are correlated, I would like a way to bind the error on the fitted CN by modeling real physical changes in the Debye-Waller factors. Do you have any suggestions?
As others mentioned, you could model the temperature dependence. You can probably model sigma2 as simply as:

  sigma2(T) = sigma2_off + T * sigma2_slope

with T either in C or K. That is, since you're probably well below the Einstein temperature and the bond-breaking temperature, sigma2 is probably linear in T. The offset term incorporates the static disorder as well as the thermal disorder at your lowest temperature. Then the sigma2 values for all your temperature-dependent data get mapped to two variables (sigma2_off and sigma2_slope). If the coordination number is assumed to be independent of temperature, you'll have three parameters to fit the amplitudes of all your temperature-dependent data.

It sounds like you might find an appreciable static disorder and a very small slope. That could be right, but it would indicate a very strong bond.

Hope that helps,
--Matt
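[Matt's parameterization is just a straight-line fit to sigma2 versus T, so it can also be checked after the fact on independently fitted values; a numpy sketch with invented numbers for illustration.]

```python
import numpy as np

# Hypothetical fitted sigma2 values at each measurement temperature
T  = np.array([25.0, 100.0, 200.0, 300.0])       # deg C
s2 = np.array([0.0010, 0.0011, 0.0012, 0.0013])  # Angstrom^2 (made up)

# Least-squares fit of sigma2(T) = sigma2_off + T * sigma2_slope.
# np.polyfit returns coefficients highest degree first: [slope, offset].
sigma2_slope, sigma2_off = np.polyfit(T, s2, 1)
print("sigma2_off   =", sigma2_off)
print("sigma2_slope =", sigma2_slope)
```

In the constrained fit Matt describes, sigma2_off and sigma2_slope would instead be guessed parameters shared across all the temperature-dependent data sets.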
Hi,
This will probably be the most naive suggestion: the correlation between coordination numbers (CN) and Debye-Waller factors (DW) in your metal oxides may be at least partially resolved by changing the k-weighting in the fit procedure. You can fit data in R space, for example, by weighting simultaneously in k1 and k3, as you have probably done. In principle this should allow you to reduce the covariance between CNs and DWs. Another approach, described in the FEFF manual, that worked well for me is to fix the CN at several values around the optimal region and fit the corresponding DW for a given k-weight. Next you change the k-weight scheme and repeat the fit of DWs at fixed CNs. After performing this procedure you will end up with a grid of DW versus CN at three different k-weightings. Plotting DW vs CN for each k-weight yields a straight line if you are close to the true minimum. The intersections of the three lines will form a triangle whose centroid corresponds to the optimum CN/DW pair. Now that you have extracted k-weight-independent DW factors, you might try to model these with static plus thermal disorder terms. This approach should work well in the first shell of metal oxides.
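The geometry step of this grid procedure can be sketched as follows. The (CN, sigma2) grids below are purely illustrative numbers standing in for the fixed-CN refits Hugo describes (the actual refits would come from Ifeffit/Artemis); the code just fits a line per k-weight and takes the centroid of the pairwise intersections:

```python
import numpy as np
from itertools import combinations

# Illustrative (CN, sigma2) grids: for each k-weight, sigma2 was
# refit with CN fixed at several values near the best-fit region.
grids = {
    1: ([3.5, 4.0, 4.5], [0.00090, 0.00100, 0.00110]),
    2: ([3.5, 4.0, 4.5], [0.00085, 0.00100, 0.00115]),
    3: ([3.5, 4.0, 4.5], [0.00080, 0.00100, 0.00120]),
}

# Fit DW = slope * CN + intercept for each k-weight.
lines = {kw: np.polyfit(cn, dw, 1) for kw, (cn, dw) in grids.items()}

# Pairwise intersections of the three lines form the triangle.
pts = []
for (kw1, (m1, b1)), (kw2, (m2, b2)) in combinations(lines.items(), 2):
    cn_x = (b2 - b1) / (m1 - m2)       # CN where the two lines cross
    pts.append((cn_x, m1 * cn_x + b1))

centroid = np.mean(pts, axis=0)        # centroid of the triangle
print(f"k-weight-independent estimate: CN = {centroid[0]:.2f}, "
      f"sigma2 = {centroid[1]:.5f} A^2")
```

With well-behaved data the triangle is small and its centroid gives a CN/DW pair that is insensitive to the k-weight choice.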
Hugo Carabineiro
----- Original Message -----
From: "Ian James Drake"
Hi,
There is little discussion over the significance of fitted Debye-Waller factors (ss2) found for metal centers in metal oxides of heterogeneous catalysts. I am searching for a better way of determining how reasonable these values are in the fits I get.
1. Are phenomenological models like the correlated Debye model and Einstein model appropriate? It seems that Debye temperatures are hard to find for oxides.
2. Is there an equivalent value for ss2 in x-ray crystallography data that I can refer to for comparison in these materials? What should I look for in reviewing published x-ray crystallographic data?
3. Is the temperature dependence reported for the ss2 for some metals (like Cu and Al) similar to their respective oxides? For example, in Debye-Waller factor calculations reported for Al metal by R.C.G. Killean (J. Phys. F: Metal Phys., v. 4, p. 1908, 1974), the ss2 changes by a factor of three between 25 C and 300 C. Would I expect a factor-of-three increase in crystalline zeolites (AlO4 structural units)?
4. Specific to my work: I have studied Al-containing oxides at temperatures between 25 and 300 C. I would like to quantify coordination changes by doing EXAFS fitting of the Al K-edge data. The EXAFS was taken at temperature, and I observe no change in the broadening of the data. Likewise, the fitting shows no sensitivity in the Debye-Waller factor: it is nearly constant at 0.001 A^2 over the temperatures of interest. Since ss2 and the coordination number (CN) are correlated, I would like a way to bound the error on the fitted CN by modeling real physical changes in the Debye-Waller factors. Do you have any suggestions?
Looking forward to your thoughts.
Ian Drake
Graduate Student
UC Berkeley
Hi all,

OK--this one's been puzzling me for a while, so I thought I'd see what you all had to say about it. One of my students and I performed fits on some different samples of platinum nanoparticles to see if we could extract mean sizes by observing the reduction in coordination numbers as a function of absorber-scatterer distance (Anatoly Frenkel and I, among others, have done some past work in this area). This worked OK: we extracted believable sizes that are consistent in some sense with what was seen via TEM and XRD (there are complications that arise because of polydispersion, but that's a story for another day).

But here's the issue: the uncertainties generated by Ifeffit in the particle size are fairly large compared to the difference between the best-fit values for different samples. These uncertainties are reasonable in the sense that varying details of the fits (e.g. k-range, k-weight, Debye-Waller constraint schemes, whether resolution and/or third-cumulant effects are included, etc.) causes the best-fit values to jump around within the uncertainty range. Thus if a fit reports 15 +/- 4 angstroms for the particle radius, I can construct fits with reasonable R-factors that yield best-fit results of 12 or 18 angstroms. This is perfectly sensible behavior.

But we have also observed that, as long as we use the same fitting details on all samples, the fitted sizes of all samples move up or down together. In other words, if under one set of fitting conditions the best-fit radius for sample A is 15 +/- 4 angstroms while for sample B it is 17 +/- 5 angstroms, under another set of conditions the best-fit radii might be 18 +/- 6 and 20 +/- 7 respectively, but the size of B always comes out larger than the size of A. In addition, the relative sizes of A and B (and C and D and...) have since been confirmed by other methods (XRD, experiments involving mixtures of samples, etc.).
So it seems as if there should be a way to express the "uncertainty in the relative size" between the two samples... B is larger than A by 13 +/- 5%, for example, regardless of the absolute size the fits find. But so far the only way I've thought of for doing this is to look at all the fits we've tried that have yielded R-factors below some cut-off, and just sort of average all the results for the differences in size. That seems unsatisfactory, however, since the standard deviation depends intimately on whatever fitting details we just happened to try.

It would be much better if there were some way to directly fit the difference in size for the two samples, but I haven't thought of a good way to do this yet. Any ideas?

--Scott Calvin
Sarah Lawrence College
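For context, the size extraction Scott describes typically rests on a geometric model: for a spherical particle of radius R, the average coordination number of a scattering shell at distance r is suppressed relative to bulk by a known factor (the form used in Calvin and Frenkel's nanoparticle work). A minimal sketch, with illustrative shell distances for fcc Pt and a hypothetical 15 A radius:

```python
def cn_ratio(r, R):
    """Average CN of a shell at distance r, relative to bulk, for a
    spherical particle of radius R (valid for r <= 2R)."""
    x = r / R
    return 1.0 - 0.75 * x + x**3 / 16.0

# First four fcc Pt shell distances (A) and the predicted CN
# suppression for a hypothetical particle of radius 15 A.
shells = [2.77, 3.92, 4.80, 5.54]
for r in shells:
    print(f"r = {r:.2f} A: CN reduced to {cn_ratio(r, 15.0):.1%} of bulk")
```

Fitting this r-dependent suppression across several shells is what turns coordination numbers into a particle radius; the distant shells, being suppressed most, carry most of the size sensitivity.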
Hi Scott,

Would it make sense to define a factor between the sizes of the two (or more) particles, say "sizeA_v_sizeB", and vary *that* in a fit of the two particles, keeping all the other things (k-weight, ranges, etc.) the same for the two fits? That is, instead of asking "what is the size of particle A and what is the size of particle B?", ask "what is the size of particle A and how much bigger is particle B?". If your observations are right, sizeA_v_sizeB should be statistically different from 1.

--Matt

PS: The prices of airline tickets vary widely with many factors, but for any given flight, the price (the starting "retail" price) of a first-class ticket is always higher than that of a coach ticket. Of course, the coach ticket on some flights can easily be twice the price of a first-class ticket on other flights. But everyone knows first class is always more expensive than coach. ;)
Hi Matt,

I tried that, before realizing it didn't really do anything. Performing a multi-dataset fit with two guessed variables, say size_A and sizeA_v_sizeB, is of course completely equivalent to just having the two sizes as guessed variables. And if size_A is set to some arbitrary (but reasonable) value, sizeA_v_sizeB is equivalent to just fitting size_B, and produces the "absolute" uncertainty again. This would be different if the fit were truly multi-dataset in the sense that we had parameters in common between samples, so that constraining the size of A had some effect on the fit for B. But the parameters that are in common, like S02, we constrained to a standard rather than refining through a multi-dataset fit.

I like your analogy to airline tickets... that is something like the situation we seem to have.

--Scott Calvin
Sarah Lawrence College
Hi Scott,
I think the goal would be to know whether particle A was always larger than particle B (or vice versa). In that case, analyzing data for the two sets together should help: if you allow sizeA and sizeB to be adjusted in the fit, you may well get uncertainties that overlap, but you will also get a measure of their correlation. Like you say, adjusting sizeA and the size ratio would not change the final result, but it would put the emphasis on knowing the correlation, which I think is what you want. But I think this might mean the multi-dataset fit would have to include S02 and other common parameters. It might also need to include some of the changes in k-weight, ranges, etc. that you alluded to earlier. Basically, it would mean looking for the correlation of the sizes.

--Matt

For completeness: let's say you have sizeA = 15 +/- 4 and sizeB = 17 +/- 4. If sizeA and sizeB have a large, positive correlation, increasing sizeA from 15 would mean that sizeB would have to increase from 17. A large, negative correlation would mean sizeB would have to *decrease* from 17, and a small correlation would mean that sizeA could change without necessarily causing any significant change in sizeB. In the airline-ticket analogy, a survey of coach and first-class ticket prices might give a difference in average prices that was smaller than the standard deviation in each price. The prices are positively and highly correlated, but to know that, you have to record both prices for each flight.
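Matt's point about correlations can be made quantitative with standard error propagation: given the two fitted sizes, their uncertainties, and their correlation coefficient, var(B - A) = var(A) + var(B) - 2*cov(A, B). A sketch using the numbers from the example and an assumed (hypothetical) correlation:

```python
import math

# Best-fit sizes and uncertainties (A) from a joint fit, plus an
# assumed correlation coefficient between the two parameters.
sizeA, sigA = 15.0, 4.0
sizeB, sigB = 17.0, 4.0
corr = 0.9   # hypothetical: strongly positively correlated

# var(B - A) = var(A) + var(B) - 2*cov(A, B)
cov = corr * sigA * sigB
sig_diff = math.sqrt(sigA**2 + sigB**2 - 2.0 * cov)

print(f"B - A = {sizeB - sizeA:.1f} +/- {sig_diff:.1f} A")
```

With corr = 0.9 the difference comes out as 2.0 +/- 1.8 A: resolved at roughly the one-sigma level even though the individual +/- 4 error bars overlap heavily, which is exactly the "sizes move together" behavior Scott observed.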
participants (9)
- Bruce Ravel
- Hugo Carabineiro
- Ian James Drake
- John J. Rehr
- Matt Newville
- Norbert Weiher
- Norbert Weiher
- Paul Fons
- Scott Calvin