Different R-factor values
Hello List,

I know that Horae is no longer supported, but I have a quick question about the R-factor. I searched the mailing list and found this post from 2006 concerning different R-factors in the fit log:

> I have a question about the Artemis log file. I noticed that two
> R-factors are reported in the log file. One is in the fifth line and is
> called 'R-factor'; the other one is under the data set fitting
> conditions and is called 'r-factor for this data set'. They have, in
> general, different values. What does that mean? What is the difference
> between the two? It probably means that I am calculating something
> wrongly.

and the reply:

> Here's the concept. If you do a multiple data set fit and you have some
> value of R-factor, you would like to know how that misfit is
> partitioned among the data sets. That is, you'd like to know if one set
> is contributing to the misfit more significantly than the others. When
> Ifeffit computes the R-factor, it is for the entire fit. My idea in
> Artemis was to use the formula for the R-factor from page 18 of
> http://cars9.uchicago.edu/~newville/feffit/feffit.ps on each data set
> in a multiple data set fit and report that in the log file. At some
> point, I must have convinced myself that I was doing the calculation in
> Artemis identically to how it is done in Ifeffit. If you are seeing
> different values, it would seem I was mistaken. I'll make an entry in
> my to-do list to look into that.

I am reviewing some older analysis projects from Artemis and just wanted to know which R-factor more accurately describes the misfit. I suspect the average over the k-weight values, since this was adopted in Demeter?

Thanks,

Chris Patridge

--
********************************
Christopher J. Patridge, PhD
NRC Post Doctoral Research Associate
Naval Research Laboratory
Washington, DC 20375
Cell: 315-529-0501
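For reference, the R-factor defined in that feffit document is the sum of squares of the misfit normalized by the sum of squares of the data, evaluated over the fit range. A minimal numpy sketch of that formula, where chi_data and chi_fit are hypothetical stand-ins for the fitted portions of the data and model (e.g. the real and imaginary parts of chi(R) over the fit window):

    import numpy as np

    def r_factor(chi_data, chi_fit):
        # Fractional misfit: sum((data - fit)^2) / sum(data^2).
        resid = chi_data - chi_fit
        return np.sum(resid**2) / np.sum(chi_data**2)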
Hi Chris,
Might be helpful also to link to the archived thread you're talking about.
http://millenia.cars.aps.anl.gov/pipermail/ifeffit/2006-June/007048.html
Bruce might have to correct me on this, but if I remember right there were
individual-data-set R-factor and chi-square calculations at some point,
which come not from IFEFFIT but from Bruce's own post-fit calculations, and
these eventually were found to be pretty buggy and were dropped.
I don't understand what "the average over the k weights" R factor is;
analyzing the same data set with multiple k weights (which is pretty
typical) still means a single fit result and a single statistical output in
IFEFFIT, as far back as I can remember, anyhow. The discussion about
multiple R-factors is for when you're simultaneously fitting multiple data
sets (i.e. trying to fit a couple different data sets to some shared or
partially shared set of guess variables).
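As a generic illustration of that kind of simultaneous fit -- two data sets sharing a guess variable -- here is a hedged sketch using scipy and a made-up exponential model, not IFEFFIT itself:

    import numpy as np
    from scipy.optimize import least_squares

    x = np.linspace(0, 5, 50)
    # Two made-up "data sets" sharing the same decay rate (0.7) but with
    # independent amplitudes; three guess variables in total.
    y1 = 2.0 * np.exp(-0.7 * x) + 0.01 * np.random.randn(50)
    y2 = 3.5 * np.exp(-0.7 * x) + 0.01 * np.random.randn(50)

    def residual(p):
        amp1, amp2, rate = p
        # One concatenated residual vector: the optimizer reduces the
        # misfit of both data sets at once, so the fit yields a single
        # set of statistics covering everything.
        return np.concatenate([y1 - amp1 * np.exp(-rate * x),
                               y2 - amp2 * np.exp(-rate * x)])

    fit = least_squares(residual, x0=[1.0, 1.0, 1.0])
    print(fit.x)  # roughly [2.0, 3.5, 0.7]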
I think the overall residuals and chi-square are the more statistically meaningful values, as they are actually calculated by the same algorithm used to determine the guess variables - they're the quantities IFEFFIT is attempting to minimize. I don't believe I've reported the per-data-set residuals in my final results, as I only treated them as an internal check for myself. (It would be nice to have them again, though...)
-Jason
Perhaps next time I'll notice the attachment ...
I don't see that "r factor for k-weight=..." in my old projects; I'm not
sure if I just never used that version? I checked some Artemis 0.8.006
logfiles from 2009 and per-k-weight R-factors aren't in there, so that's a
little weird.
I'm still pretty sure, as you say, the overall R-factor is the
statistically meaningful one.
-Jason
Hi Jason, Chris,
I can understand the desire for "per data set" R-factors. I think there are a few reasons why this hasn't been done so far. First, the main purpose of chi-square and the R-factor is to be simple, well-defined statistics that can be used to compare different fits. In the case of the R-factor, the actual value can also be readily interpreted and so mapped to "that's a good fit" or "that's a poor fit" more easily (even if still imperfectly). Second, it would be a slight technical challenge for Ifeffit to make these different statistics and decide what to call them. Third, this is really asking for information on different portions of the fit, and it's not necessarily obvious how to break the whole into parts.

OK, for fitting multiple data sets, it might *seem* obvious how to break up the whole. But, well, fitting with multiple k-weights *is* fitting different data. Also, multiple-data-set fits can mix fits in different fit spaces, with different k-weights, and so on. Should the chi-square and R-factors be broken up for different k-weights too? Perhaps they should. You can give different weights to different data sets in a fit, but how best to do this can quickly become a field of study on its own. I guess that's not a valid reason not to report these....

So, again, I think it's reasonable to ask for per-data-set and/or per-k-weight statistics, but it's not necessarily obvious what to report here. For example, you might also want to use other partial sums-of-squares (based on k- or R-range, for example) to see where a fit was better and worse. Of course, you can calculate any of the partial sums and R-factors yourself. This isn't so obvious with Artemis or DArtemis, but it is possible. It's much easier to do yourself, and to implement for others, with Larch than it is in Ifeffit or Artemis. Patches are welcome for this and/or any other advanced statistical analyses.

Better visualizations of the fit and/or misfit might be useful to think about too.

--Matt
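In the do-it-yourself spirit Matt describes, here is one hedged sketch of what a per-data-set breakdown could look like -- plain numpy on hypothetical chi(R) arrays, not Ifeffit's or Larch's actual API:

    import numpy as np

    def partial_r_factors(data_sets, fits):
        # data_sets, fits: lists of numpy arrays, one pair per data set,
        # each restricted to that set's fit range.
        partials = [np.sum((d - f)**2) / np.sum(d**2)
                    for d, f in zip(data_sets, fits)]
        # The overall R-factor pools every residual before normalizing,
        # so in general it is not the mean of the partial values.
        num = sum(np.sum((d - f)**2) for d, f in zip(data_sets, fits))
        den = sum(np.sum(d**2) for d in data_sets)
        return partials, num / den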
Thank you for the discussion, Matt and Jason.

My main objective was to decide between the two different R-factors reported in some older Artemis fit log files. I suspect that the analysis was prematurely declared complete because the user saw the small R-factor value printed along with the other fit statistics near the beginning of the fit log. Scrolling down the log file reaches the section that gives:

  R-factor for this data set = ?
  k1, k2, k3 weightings R-factors = ?

This "R-factor for this data set" is the average of the R-factors over the k-weights, and it is much larger: say, 0.01 near the top of the log vs. 0.07-0.08 here, turning a typical "good fit" to a single data set into a rather questionable one.

Looking at more recent fit logs from Demeter (attached, just a quick example), the R-factor printed near the beginning of the fit file is equal to the average R-factor over the k-weightings. Therefore the value found near the top of the earlier Artemis log files must have been faulty or buggy, as was said, so one should not rely on that value to evaluate the fits. Sorry for any confusion, but this is all in the name of weeding out good/bad analysis....

Thanks again,

Chris

********************************
Christopher J. Patridge, PhD
NRC Post Doctoral Research Associate
Naval Research Laboratory
Washington, DC 20375
Cell: 315-529-0501
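For concreteness, the averaging Chris describes is just the arithmetic mean of the per-k-weight values; the numbers below are made up to match the magnitudes he quotes:

    r_k = [0.065, 0.075, 0.085]   # hypothetical R-factors at k1, k2, k3
    r_avg = sum(r_k) / len(r_k)   # 0.075 -- the per-data-set value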
Chris et al,

Sorry I didn't pipe up earlier. I didn't have a good chance to sit down and follow this discussion until this afternoon.

I'll start with the practical issue. Recently someone requested per-data-set R-factors in Artemis. That being a perfectly fine request, I sat down to implement it. Since fits in Artemis are usually done with multiple k-weights, it wasn't clear to me how to display the information in the clearest manner.

The "overall" R-factor, the one that Ifeffit reports after the fit finishes, includes all the data and all the k-weights used in the fit. That is certainly a useful number in that it summarizes the closeness of the fit in the aggregate.

As long as I was breaking down the R-factors by data set, I figured it would be useful to do so by k-weight also. I could imagine a scenario where it would be helpful to know how a particular data set and a particular k-weight contributed to the overall closeness of the fit. That should explain the why of what you find in Artemis' log file.

My intent is to use the same formula for the R-factor as in the Ifeffit reference manual. If you do a single data set, single k-weight fit, the overall R-factor, the per-data-set R-factor, and the R-factor at that k-weight (all three are reported regardless) should be the same. It is possible that this is not well enough tested.

Matt's point about Larch being the superior tool for user-specified R-factors is certainly true, although few GUI users would avail themselves of that. If some R-factor other than the one reported by Ifeffit (or, soon, the one reported by default by Larch) is needed, that would be a legitimate request. If something sophisticated or flexible is needed, that too can be put into the GUI.

As for the actual question -- how to "decide" between the R-factors -- well, my take is that that's not a well-posed question. The R-factor is not reduced chi-square. It does not measure *goodness*, it only measures *closeness* of fit. The term "goodness" means something in a statistical context. An R-factor is some kind of percentage misfit without any consideration of how the information content of the actual data ensemble was used. In short, the R-factor is a numerical value expressing how closely the red line overplots the blue line in the plot made after Artemis finishes her fit. Thus, the overall R-factor expresses how closely all the red lines together overplot all the blue lines. The R-factors broken out by data set and k-weight express how closely a particular red line overplots a particular blue line.

HTH,
B
--
Bruce Ravel ------------------------------------ bravel@bnl.gov
National Institute of Standards and Technology
Synchrotron Methods Group at NSLS --- Beamlines U7A, X24A, X23A2
Building 535A
Upton NY, 11973
Homepage: http://xafs.org/BruceRavel
Software: https://github.com/bruceravel
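To put Bruce's closeness-vs-goodness distinction into formulas: the R-factor is a unitless fractional misfit, while reduced chi-square scales the misfit by the estimated measurement uncertainty and the degrees of freedom, following the conventions in the feffit document. A hedged numpy sketch, where epsilon, n_idp, and n_varys stand in for the uncertainty, the number of independent points, and the number of guess variables:

    import numpy as np

    def closeness_and_goodness(data, fit, epsilon, n_idp, n_varys):
        resid = data - fit
        # Closeness: fractional misfit, blind to noise and to how many
        # variables were used.
        r_factor = np.sum(resid**2) / np.sum(data**2)
        # Goodness: misfit in units of the uncertainty, rescaled to the
        # number of independent points and divided by the degrees of
        # freedom (independent points minus guess variables).
        chi_square = (n_idp / len(resid)) * np.sum((resid / epsilon)**2)
        chi_reduced = chi_square / (n_idp - n_varys)
        return r_factor, chi_reduced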
Thanks again, everyone, and thanks for the clarity, Bruce.

Yes, I guess it was a poorly posed question. The purely numerical aspect of R in a single data set fit is certainly not lost on me, but in reviewing the analyses I want to have some concrete arguments against the fits instead of just "your method and fits are indefensible." There are several problems with the fits outside any statistical metrics... in fact, I would guess that this was the rare case where the user depended exclusively on R instead of on the "red line matching the blue line." (see attached)

- Chris

********************************
Christopher J. Patridge, PhD
NRC Post Doctoral Research Associate
Naval Research Laboratory
Washington, DC 20375
Cell: 315-529-0501
participants (4)
- Bruce Ravel
- Christopher Patridge
- Jason Gaudet
- Matt Newville