Hi Teresa,
I'm sorry that I cannot give you a definitive answer. I should admit that when adding PCA methods to Larch and XAS Viewer (which I invite you to try out), I tried to follow the scikit-learn approach but also the Athena implementation. I don't think I ever tested against the results from SixPack. FWIW, Larch only does (easily) PCA on normalized mu(E) or its derivative. I suppose PCA on chi(k) could be added, but I'm somewhat skeptical of that.
In fact, the code at
https://github.com/xraypy/xraylarch/blob/master/larch/math/pca.py (and, to be clear, having this both in Python and publicly available is motivated by exactly these "what does it do?" conversations) has a few different methods to train a PCA set: one directly from scikit-learn, one that basically reproduces Demeter's PCA.pm (modulo slight differences in the underlying math libraries, which should be insignificant), one that uses only non-negative components (not really worthwhile, in my opinion), and one that is hand-coded and includes the IND statistic. I don't know what SixPack does.
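All of these trainers share the same core step: stack the (mean-subtracted) normalized spectra into a matrix and decompose it. A minimal NumPy sketch of that idea (an illustration only, not Larch's actual code; the Gaussian "spectra" here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
energy = np.linspace(0.0, 1.0, 200)

# fake normalized spectra: mixtures of two underlying shapes plus noise
comp1 = np.exp(-(energy - 0.3) ** 2 / 0.01)
comp2 = np.exp(-(energy - 0.6) ** 2 / 0.02)
spectra = np.array([a * comp1 + (1 - a) * comp2
                    + 0.001 * rng.normal(size=energy.size)
                    for a in np.linspace(0.1, 0.9, 8)])

# "training" a PCA model: SVD of the mean-subtracted data matrix;
# rows of vt are the components, s**2 gives the variance per component
mean = spectra.mean(axis=0)
u, s, vt = np.linalg.svd(spectra - mean, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(explained[:3])  # one component dominates this two-shape mixture
```

The differences between the four methods are mostly in normalization conventions and which statistics get computed afterward, not in this decomposition itself.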
I cannot really explain why, but the default method "readily exposed in Larch XAS Viewer" is the hand-coded version of `pca_train`. In principle, they should all be more or less interchangeable. I did some tests with these several years ago, and it might be worth trying that again. If you're up for that, please do try. If not, and you would like to send your project and an outline of what you get, I might be able to look at this too.
"Target transformation" is implemented as `pca_fit`: it asks how well a data set can be explained by the first N components of a training model.
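In spirit, that amounts to projecting the target spectrum onto the first N components and measuring the residual. A rough sketch (the function name and details here are hypothetical, not `pca_fit`'s actual signature):

```python
import numpy as np

def fit_target(target, mean, components, n):
    """Fit `target` with the training mean plus the first n components."""
    coefs = components[:n] @ (target - mean)   # project onto components
    recon = mean + coefs @ components[:n]      # reconstruct from n terms
    rms = np.sqrt(np.mean((target - recon) ** 2))
    return recon, rms

# toy training set: mixtures of two shapes, so 2 components suffice
x = np.linspace(0.0, 1.0, 100)
c1 = np.exp(-(x - 0.3) ** 2 / 0.01)
c2 = np.exp(-(x - 0.7) ** 2 / 0.02)
train = np.array([a * c1 + (1 - a) * c2 for a in np.linspace(0.1, 0.9, 6)])
mean = train.mean(axis=0)
_, _, components = np.linalg.svd(train - mean, full_matrices=False)

inside = 0.25 * c1 + 0.75 * c2    # lies in the training space
outside = np.sin(6 * np.pi * x)   # does not
_, rms_in = fit_target(inside, mean, components, 2)
_, rms_out = fit_target(outside, mean, components, 2)
print(rms_in, rms_out)  # small vs. large misfit
```

A target that is well described by the training set gives a small misfit; one outside the span of the components does not, which is the basis for deciding whether a candidate standard "belongs" in the data.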
For fit statistics: I have seen "SPOIL" used in several places in the EXAFS literature, but I'm afraid I do not actually know a definition for it. If anyone can explain what it is, that would be helpful. Larch can calculate the F1 and IND statistics that are more common in the PCA literature. XAS Viewer exposes and automatically plots IND; it is a very useful way to select how many components are significant.
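For reference, IND is Malinowski's indicator function, computed from the eigenvalues of the data covariance. A sketch of my reading of the standard definition (check larch/math/pca.py for the exact convention Larch uses):

```python
import numpy as np

def ind_statistic(eigenvalues, nrows):
    """Malinowski's IND(n) for n = 1 .. c-1 retained factors.

    eigenvalues: the c covariance eigenvalues, sorted descending.
    nrows: number of spectra (rows) used to build the covariance.
    The n that minimizes IND estimates the number of significant
    components.  (Sketch of the textbook definition, not Larch's code.)
    """
    eigenvalues = np.asarray(eigenvalues, dtype=float)
    c = len(eigenvalues)
    ind = []
    for n in range(1, c):
        # real error RE(n): rms size of the discarded eigenvalues
        re = np.sqrt(eigenvalues[n:].sum() / (nrows * (c - n)))
        ind.append(re / (c - n) ** 2)
    return np.array(ind)

# two large eigenvalues above a noise floor -> IND minimized at n = 2
eigs = [100.0, 50.0, 0.01, 0.01, 0.01, 0.01]
ind = ind_statistic(eigs, nrows=10)
print(int(np.argmin(ind)) + 1)  # -> 2
```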
I'm pretty sure that does not answer your actual question, but maybe it will be helpful. If you or anyone else has suggestions for additions, improvements, or other methods or statistics for PCA and related methods, please let me know.