Idiots look at Data, Part VIII: The Insecurity Index
Throughout our look at chromatograms and parts of chromatograms, we've been counting things that look like they might be problems in the data set. We are not saying they are problems; we're saying they are things that may cause concern. The higher the number, the more careful we want to be about interpreting the data.
Adding up all the numbers, we get an aggregate we'll call the Idiot's Insecurity Index (I3), pronounced "aye-yi-yi". It consists of:
- The number of peaks.
- Rating of the baseline slope on a 1 to 5 scale.
- Rating of bumpiness of the background on a 1 to 5 scale.
- For the peaks of interest, the counts of shoulders, leading or trailing edges, connections to neighbors above the baseline, and neighbors within one peak-width at the baseline.
Let's look at two trivial examples, one from UCLA and one from LNDD, and score them.
UCLA: 3 peaks, flat slope = 1, no bumps = 1, total shoulders = 0, total edges = 0, total connections = 0, neighbors within one peak-width = 0; total = 5.
LNDD: 3 peaks, flat = 1, no bumps = 1, shoulders = 0, edges = 3, connections = 0, neighbors (charitably) = 0; total = 8. Given a choice, it might be better for the pulses to be spaced a little farther apart, and the trails on them might indicate a problem somewhere in the system, we think.
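For the code-minded among you, here's a minimal sketch of the tally in Python. It's our own toy, not anything any lab runs; the two example calls just reproduce the UCLA and LNDD reference scores above.

```python
# Toy tally of the Idiot's Insecurity Index (I3). The function and argument
# names are ours; the example calls reproduce the two reference scores above.

def i3_score(peaks, slope, bumpiness, shoulders, edges, connections, neighbors):
    """Sum the peak count, the two 1-5 ratings, and the per-peak counts."""
    return (peaks + slope + bumpiness
            + sum(shoulders) + sum(edges) + sum(connections) + sum(neighbors))

ucla = i3_score(3, 1, 1, [0], [0], [0], [0])   # 3 peaks, flat, no bumps -> 5
lndd = i3_score(3, 1, 1, [0], [3], [0], [0])   # same, plus 3 trailing edges -> 8
print(ucla, lndd)
```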
So given the I3 and the scores we made all along, where are we with our look at the data in previous parts? We could make you go to another post or page, the way the computer hardware review sites do, but we'll be nice:
| Test          | Pks | Slp | Bmp | Shldr | Edg   | Conn  | Nbrs  | I3 |
|---------------|-----|-----|-----|-------|-------|-------|-------|----|
| UCLA          | 12  | 1   | 1   | 2 2 0 | 0 1 0 | 0 0 0 | 0 0 0 | 19 |
| Ex 92 3-Jul   | 36  | 3   | 2   | 2 1 2 | 0 1 0 | 2 2 0 | 3 1 1 | 56 |
| Ex 88 13-Jul  | 33  | 5   | 2   | 2 2 1 | 0 0 0 | 2 2 0 | 3 3 1 | 56 |
| Ex 90 14-Jul  | 36  | 5   | 1   | 2 1 1 | 0 0 0 | 2 2 0 | 2 2 1 | 55 |
| Ex 86         | 47  | 5   | 3   | 2 2 1 | 0 0 0 | 2 2 0 | 2 2 1 | 69 |
| USADA 173     | 29  | 3   | 1   | 2 2 2 | 0 1 0 | 0 2 0 | 2 2 1 | 47 |
| USADA 349     | 27  | 3   | 1   | 1 2 1 | 0 1 0 | 2 2 0 | 2 2 1 | 44 |
| Ex 87 22-Jul  | 39  | 2   | 2   | 1 1 0 | 0 1 0 | 1 2 1 | 1 2 3 | 56 |
| Ex 84 23-Jul  | 32  | 1   | 3   | 1 1 0 | 0 1 0 | 2 2 1 | 2 2 3 | 51 |
| Ex 93 control | 22  | 1   | 2   | 1 1 1 | 1 1 0 | 0 2 1 | 0 2 3 | 38 |
| Ex 85 control | 35  | 2   | 3   | 1 1 1 | 1 2 0 | 0 2 1 | 1 2 2 | 54 |
| Ex 89 control | 36  | 2   | 2   | 1 1 1 | 1 1 0 | 1 2 1 | 1 4 3 | 57 |
| Shack fig3a   | 17  | 1   | 1   | 0 1 1 | 0 1 1 | 0 2 0 | 0 4 0 | 29 |
| Shack fig3b   | 19  | 2   | 1   | 0 0 2 | 0 1 0 | 0 2 0 | 0 1 0 | 27 |
This is bogus, you say. However, it follows thinking used in software engineering, in measures such as the McCabe complexity, the Halstead volume, or arguably function points. A chromatogram with an I3 of two is like a software subroutine that does nothing: useless, but absolutely correct. On the other hand, one with an I3 of 200 is like a 10-page software function with a McCabe of 2000 -- it might appear to be correct, but how do you really know without looking very closely indeed?
Bigger numbers mean more stuff.
More stuff means more opportunity for error. The more stuff you have, the more careful you need to be about checking assumptions and pre-conditions.
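If you've never met the McCabe metric, a crude stand-in (our own simplification, not the real tooling) is to count the branch points in a routine and add one. More branches means more paths that must be checked before the code can be trusted, which is exactly the sense in which a bigger I3 means more places a chromatogram can hide trouble.

```python
import re

# Crude stand-in for McCabe cyclomatic complexity: one plus the number of
# branch keywords. Real tools walk the syntax tree, but the point survives:
# more decision points, more paths to verify.
BRANCHES = re.compile(r"\b(if|elif|for|while|and|or|except|case)\b")

def rough_mccabe(source: str) -> int:
    return 1 + len(BRANCHES.findall(source))

does_nothing = "def nop():\n    pass\n"
branchy = (
    "def f(x):\n"
    "    if x and x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x -= 1\n"
    "    return x\n"
)
print(rough_mccabe(does_nothing), rough_mccabe(branchy))  # 1 vs 5
```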
In an earlier post, M made comments suggesting it is unfair or improper to make some of the comparisons made here. We disagree; as shown above, the methodology applies perfectly well to a straight-line background or a series of reference pulses. It is a measure of the potential for problems, not an assertion that there are problems.
M also suggested one reason it was unfair was that the chemistry in the F3 fraction is more difficult than that of the F2 fraction. It is true the F3 chemistry is more difficult, and we appreciate that admission from M. It raises the very question we'd like to ask.
How do you tell if the chemistry does the job properly?
One indicator is to look at the I3 of the resulting chromatograms.
Thanks to M's diligence, we found the Shackleton chromatograms that also reveal the 5bA and 5aA, so we do have fair, like-to-like comparisons. They appear to be much cleaner by I3 score than those produced by LNDD.
What did Shackleton do that LNDD didn't? This bears investigation.
When we started this series, we said that the preconditions for correctness in the integration that computes the numbers in a CIR result are:
- Clean, unambiguous baselines suggesting good chemical separation of the prepared samples. This is reflected in the count of the peaks in the chromatogram. Good chemistry gives fewer peaks to be concerned about, and fewer unknowns floating about.
- Significant (a debatable term) baseline (chromatographic) separation of peaks. We've demonstrated that co-elutes can cause unexpected skews of significant magnitude (see the separation-check sketch after this list).
- Absence of shoulders suggesting unidentified peaks. Where there are shoulders or tails, there may be unidentified co-elutes.
- Measurement of nearby peaks to consider their potential for influence. We may back away from this thought, but it seems like you ought to know the CIR of every adjacent peak in case it is co-eluting in some way.
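As a rough illustration of the separation precondition, here is a sketch of the kind of check one could run; the peak names, retention times, and widths are invented for illustration, not taken from any exhibit. It simply flags any peak whose nearest neighbor sits within one baseline peak-width.

```python
# Sketch: flag peaks whose nearest neighbor is within one baseline peak-width.
# Retention times and widths here are invented, not taken from any exhibit.

def flag_crowded_peaks(peaks):
    """peaks: list of (name, retention_time_s, baseline_width_s) tuples."""
    flagged = []
    for name, rt, width in peaks:
        for other, other_rt, _ in peaks:
            if other != name and abs(other_rt - rt) <= width:
                flagged.append((name, other))
    return flagged

example = [
    ("5bA", 1375.0, 18.0),      # hypothetical values
    ("unknown", 1388.0, 16.0),
    ("5aA", 1432.0, 18.0),
]
print(flag_crowded_peaks(example))
# [('5bA', 'unknown'), ('unknown', '5bA')]
```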
Maybe LNDD's chemistry isn't separating as well as it ought to, and needs to.
If there is lots of stuff around, it is going to go somewhere. A high I3 score makes it prudent to be sure the peaks being measured contain only what they are purported to contain.
As we demonstrated in "Integration for Idiots", the presence of unexpected material can invalidate any reported numeric results.
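The arithmetic behind that claim is simple mixing: the reported CIR of a peak is, roughly, the carbon-weighted average of everything integrated under it, so a modest amount of isotopically different material shifts the number. The sketch below uses invented values purely to show the scale of the effect.

```python
# Two-component mixing: the reported delta is approximately the carbon-weighted
# mean of the target analyte and anything co-eluting inside the integration
# window. All numbers here are invented for illustration.

def mixed_delta(delta_target, delta_contaminant, contaminant_fraction):
    """contaminant_fraction: share of the integrated carbon signal, 0 to 1."""
    return ((1 - contaminant_fraction) * delta_target
            + contaminant_fraction * delta_contaminant)

clean = mixed_delta(-24.0, -34.0, 0.0)    # -24.0 per mil, nothing hiding
dirty = mixed_delta(-24.0, -34.0, 0.15)   # a 15% co-elute at -34 per mil
print(clean, dirty, dirty - clean)        # the co-elute shifts it about -1.5 per mil
```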
We are thus left with some questions to seed further discussion.
- What does UCLA do to ensure purity of peaks?
- What did Shackleton do to ensure purity of peaks?
- What did LNDD do to ensure purity of peaks?
Feel free to chop apart individual assessments and argue whether certain pixels represent particular flaws, and whether they have particular numeric significance in a particular test. This doesn't much interest us. At a scientific level, either the protocol is flawed and there is flawed data being processed and reported, or it is good data and good results. At the moment, indications such as the I3 suggest the data may not be good. A good process will be able to demonstrate the data is good.
We have said for a long time, if we can get confirmation the data is clean and pure, we're prepared to accept the numeric conclusions at a scientific level.
If there is no validation the data is clean and pure enough to trust, there is a different, legal question whether the reported results are correct.
13 comments:
Bravo!
TBV, so we cannot conclude anything more from the chromatography than that some chromatographs are better than others? And that our examination of the chromatography should lead to an examination of the chemistry?
"Aye-yi-yi?" In my part of town, we'd say "oy-yoy-yoy!"
I despair at the thought that the chromatography leads us to the chemistry. There's been no substantive discussion (none that I've seen, in any event) of the chemistry used at LNDD. FL's legal team never addressed this topic, to my knowledge. We know that it took LNDD about a day and a half to do its chemistry, and that's about all we know about the chemistry. Moreover, even if we could examine LNDD's chemistry procedures and somehow conclude that these are good procedures, there's no way we can tell if the procedures were performed properly in any given case ... not without looking at the chromatographs. So our investigation here seems to be running in circles.
I don't think that kicking this discussion over to the chemistry is necessarily the best place to go. Can't we conclude anything more about the chromatography, other than your statement that a high I3 score probably means that there's lots of stuff around? For example, we never got to my questions about noise. Is noise "stuff"? Do you reduce or eliminate noise with better chemistry?
Also, the issue of peak separation seems to involve issues other than the presence of unwanted "stuff". How about when you have two overlapping peaks, and both of these peaks contain stuff you DO want to measure? Is this also a problem that's cured with better chemistry?
You mention the legal question, but without some form of objective and measurable criteria, there's nothing here for the lawyers to grab hold of. I'm back to thinking about statements made by Duckworth and Mr. Idiot that perhaps the ISL might require the labs to analyze the mass spectra data. I doubt at the moment that the ISL requires mass spectra data, but at least that's something specific and objective.
M would tell you that from the standpoint of the ISL, it doesn't matter whether LNDD's procedures are as good as UCLA's or Shackleton's, or whether it would be prudent to hesitate before accepting test results based on IRMS graphs with high I3 scores. It matters whether these test results are "good enough" under the ISL, or for that matter, whether there's any scientific standard out there that would invalidate these results. So far at least, there does not appear to be any problem with these results under the ISL, and no objective criteria that could be used to toss out these results.
I think the idiot's series leaves us with a more educated understanding of where we seemed to be at the outset: that there is "good" and "bad" chromatography, but that whether the chromatography is "good enough" is a matter of opinion where reasonable people can differ.
A couple of weeks ago, M and I were engaged in legal debate over the issue of IRMS peak identification. I received a number of questions asking why I had chosen to focus on peak identification, when the REAL scientific issues lie elsewhere. TBV, I think you and Ali have effectively demonstrated why I focused on something as mundane as peak identification. When it comes to peak identification, there are standards, making it possible to say that peak identification in a given case is or is not "good enough". But so far, I have no way to say that LNDD's chromatographs are not "good enough".
Larry,
From a legal, tactical perspective you may be correct. While I'm still interested in investigating the specificity requirements of the ISL, I can see why you wouldn't want to rely on that legally.
On the other hand, the series was filed under "science", and is, in fact, also trying to address the Truth value of the findings, not just the legal result.
The argument is that in determining Truth of the matter, the objective dirtiness of the LNDD chromatograms should be cause for concern to those who think they indicate doping. If there were evidence to support the purity of the peaks, there would be reason to be confident in the results.
It is not clear to me, even with the calculator available, that the co-elution of pure peaks would produce the measured results. It seems like only unidentified third peaks (like a second gunman on Dealey Plaza) explain all the results if there was no doping going on.
To this degree there was value in all these other B sample tests: it showed that the error, if any, really is systemic and unlikely to be caused by operator error on one or two tests. This makes the manual integration argument uninteresting from the perspective of Truth, in my opinion.
The recent revisitation of the cortisone, now properly identified as methylprednisolone and dexamethasone, is consistent with the search for what really happened.
I understand that what really happened may not play in the legal process, unfortunately.
TBV
Larry,
Just to be clear about the Mass Spectra, the GCMS exhibits do show partial mass spectra for all of the metabolite peaks, 5A, 5B, pregnane etc, and these mass spectra have not been challenged, and presumably confirm that those peaks contain the 5A, 5B etc.
What they haven't produced are the mass spectra showing that other substances weren't hidden under those 5A and 5B peaks. It's unclear to me whether the complete mass spectra were taken at the time, but apparently there are reports in the testimony(?) that they were destroyed or erased. I haven't confirmed this.
I think you understand this, but just wanted to put it out there again.
That is correct -- complete mass spectra were recorded, but were not saved onto the backup that was prepared prior to wiping the drive in February.
Why they didn't save $100 worth of drive containing data collected on these tests is a mystery to many.
TBV
TBV -
In response to your 7:35 AM post.
First, I'm very much interested in exploring the specificity requirements under the ISL. I'm not optimistic I can come up with an argument there that would win the day for FL, but that's never stopped me before!
(actually, I still think I have the winning argument on peak identification, though M has given me a much harder time there than I would have thought possible at the outset.)
You say that what "really" happened may not play in the legal process. I think that what "really" happened is central and critical to the legal process. You use the word "Truth" with a capital "T" to describe your ultimate goal. Mine too. You also mentioned earlier an analogy to the two-line software subroutine that may be absolutely correct and do nothing for us. My problem is when we're searching for Truth with a capital "T" and we end up with one of those subroutines.
The Truths that emerge from the idiots series are (and I'm oversimplifying) (1) good chromatography is important, and (2) based on the I3 measure, LNDD's results have a greater potential for problems than other results, such as UCLA and Shackleton. This is good stuff, and I'm grateful for the work you've done here.
(We have to acknowledge, in our search for Truth, that the I3 may not be the best measure of this potential. Without something approaching scientific peer review of what you and Ali have done here, we don't know if the I3 is scientifically valid. If we depart from Truth and venture towards opinion, I'm willing to accept that the I3 has Truth value. You've laid out the I3, you've made your determinations for all to see, and it makes sense to me.)
But where does this Truth take us? I would have suspected as a matter of logic that all chromatographs were not of equal quality, and that some chromatographs would be better than others. Except for the unlikely possibility that the FL chromatographs were the best ever produced, it was always logically possible (in theory at least) to compare the FL chromatographs to better chromatographs, as you have done, and to attempt to quantify the quality differences between the chromatographs.
We might feel that justice is not served in a case that has turned on something less than the best quality evidence. But there are always going to be comparative quality shortcomings in the evidence presented in a judicial proceeding. We let the police introduce polaroid photos of the crime scene; we don't require them to use Nikon cameras. We let the local crime lab testify about the fingerprint evidence; we don't require the FBI to send their top person to testify. And we don't think that Truth has suffered any, just because we have low resolution photographs and the local guy from the crime lab.
The Truth we need here is a Truth that gets to the question of how good is good enough.
The Truth we need goes back to those little changes in the graphs you made in the "Idiot's Guide to Integration", the ones that led to changes of a magnitude of 3 delta-delta points. That's the kind of Truth we need here, the kind of Truth that says the FL chromatographs contain problems of this kind of magnitude. Of course, I understand that there's no way to look at a real life chromatograph, pick out a point on the graph and say, "look there, that's an error of the magnitude of 3 delta-delta points." That would be a very nice piece of Truth, but we can't prove anything like that from looking at noise levels, and "shoulders", and peaks with poor separation.
But we have to try to say something more than, this chromatograph is better than that chromatograph. That's just not a Truth that gets us anywhere. That's a two-line subroutine kind of truth. Can we take this data anyplace past the "cause for concern" that you've expressed and I share, and get closer to the "not good enough" that we need to prove here, both as a matter of science and as a question of law?
Larry,
I appreciate what you're bringing, and accept that we don't have a meeting of minds on some points. What we've tried to do with the I3 is show what I'd call a nuanced rather than bright-line indicator.
To stretch the metaphor, the I3 is like the T/E screening test. It's trivial to score, and doesn't mean much by itself, but is reasonable cause to do other looking.
In the first series on integration, we learned that third peaks can cause trouble, and they can be hard to recognize.
In the second series, I hope to have gotten across the point that the LNDD results are dirty enough that it is difficult to tell, on their face, whether there are co- or tri-elutes. It would be good to have some other measure of purity and specificity than what we have now as evidence.
I'd also like to point out that, from the point of view of Truth, the extra B samples have, for me, put to bed the utility of closely examining the S17 results. Along with the other results, we have chromatography that is pretty consistently dirty enough that it is hard to tell what is going on, and that contains lots more "stuff" than the other examples.
I'll claim that whatever is wrong (or right) in the S17 tests is also wrong in the other B's, which factors out human integration and machine operation.
What remains in common are the sources of the samples (Landis), what he was taking (the TUEd methylmumble), the test sample chemistry and preparation, and the general technique once applied at two different machines.
As someone who debugs complicated, fault tolerant distributed systems, I quite often see things that "aren't supposed to happen", but have happened anyway. One of the first steps in the process of looking at such a problem is admitting that it did, in fact, occur, despite precautions that one expects were taken.
In the Landis case, we have assertions from USADA and LNDD they did the right thing, and there was nothing else present in the samples to cause the results.
But there appears to be no evidence of this. LNDD showed a notable lack of concern for specificity in the T/E test, failing to collect and examine the 3 diagnostic ions. I see little reason to think they are any different in approach to the IRMS testing. Their chromatograms are dirty enough to raise suspicions of other substances, and there is no indication they did anything to check.
This is why I'm vastly more interested in looking at that protocol problem than I am at trying to apply specific integration error models to the S17 B Sample F3.
Whether the protocol error, if any, violates the ISL remains a legal question, but it is certainly an open question for establishing the truth of the reported results.
Once again, I'll note the statistically curious fact that the LNDD reports three times as many steroid AAFs as the average WADA lab.
I have /really/ wanted to get information about their proficiency testing, but that is the sort of thing that is kept a very closely guarded "trade secret". It's also, in my mind, disingenuous to keep it hidden.
TBV
TBV -
I think we DO have a meeting of the minds, in terms of the state of the facts and what the facts mean. If we don't see eye-to-eye, it's because of the conclusions we want to draw from these facts.
You are saying that, from a scientific standpoint, these facts demand further investigation. I agree with this from a scientific standpoint as well. I'm not a scientist, but I don't understand how the top scientists in mass chromatography can live with the idea that you can have this kind of range in the I3 levels of different IRMS graphs measuring for the same substances under the same conditions. This sort of thing would keep me up nights, if I were a scientist. (I'm not a scientist, so I sleep, but not well.) I'd want to know, what is causing such a wide range of I3 values? I'd want to publish a paper telling labs throughout the world how to generate chromatographs like those at UCLA. I'd even want to find a lawyer to help me write a TD to be adopted by WADA, to enforce these UCLA-type standards. None of this would help FL, of course.
What do these facts demand from the legal standpoint of deciding how to resolve the FL case? I'm not sure we disagree here either. The law accepts mass chromatography as a valid form of evidence, so long as it is "good" mass chromatography. The standards we use to judge whether mass chromatography is "good" or "bad" are those contained in your I3 measure. We have no hard and fast standard of what is "good enough" to rely upon or "bad enough" to reject. At the moment, whether a test is "good enough" or "bad enough" seems to be up to the experts to decide, and reasonable experts can differ.
You seem to be indicating that the I3 test does not distinguish between "good" and "bad" chromatography. You argue that the I3 test distinguishes between chromatographs that do and do not require further explanation and investigation. From a legal standpoint, is there any real difference between a "bad" chromatograph, and a chromatograph with a high I3 value that requires further explanation? Perhaps not, especially if there IS no explanation for the high I3 value. So I continue to think that the I3 test has the potential to be relevant to the law as well as the science.
But here's the rub: you drew the analogy to the T/E tests, as a test we rely upon to see if further testing is required. Good analogy, I think. But there's a consensus on a 4:1 ratio as being the T/E test result that should cause us to do further testing. We don't have any indication of an I3 number that should trigger further testing.
Give me an I3 number that's the equivalent of the 4:1 T/E ratio, and we may have something.
A few minor details. Given the additional B tests, I agree that from the standpoint of Truth, we have to find an explanation for these results that is a point in common across all of the B tests. (From a legal standpoint, I can focus exclusively on S17.) I don't know if this rules out the human integration and machine operation -- both humans and machines can make the same kinds of mistakes over a wide range of tests. But I agree that we're better off looking at matters such as peak identification (sorry, could not resist) and noisy/dirty chromatography, that clearly were consistent factors in the evaluation of each graph.
I also write software for a living. I'm about the only person I know with two State Bar licenses and one Microsoft programming certification. But there you go. I also understand the business of software bugs that "are not supposed to happen."
Busy day here. Need to find time to write something about trade secrets.
Well, I am not really sure where this post should go, but this seems as good a place as any.
In simplest terms, TbV's and Ali's work has shown that there is some possibility that something, or somethings, other than the right things, have found their way into the peaks of interest. This post is about what that could be.
Exhibit 106 of Floyd's recent document dump gave us some information we were long interested in - the exact substances injected into Floyd's hip on July 8, 2006 (and May 5, 2006, by the way). They were two glucocorticoids called dexamethasone (hereafter dexa') and methylprednisolone (hereafter methyl'). Glucocorticoids (a.k.a. corticosteroids) are a class of non-sex-hormone-related steroids, but they are still chemically very similar to testosterone.
Both of these substances are banned by WADA, but were covered by a Therapeutic Use Exemption for Floyd. They are both purely exogenous; unlike testosterone, your body doesn't make any of this stuff.
Like testosterone, dexa' and methyl' are both metabolized by your body and broken down into other compounds. When you have them injected "intra-articularly" (into a joint), these metabolites can come out in your urine for a long time (weeks at least).
Dexa' comes out in your urine partly intact, and also as four different metabolites. Methyl' comes out partly as itself, and also as sixteen different metabolites.
So, the obvious question is, "Did any of these metabolites co-elute to any degree with the 5aA (or 5bA)?" In other words, "Do any of these metabolites show up as the little peaks, and shoulders, and possible hidden elutes in our peaks of interest?" And if you are an astute Floyd junkie, you will also wonder "If any of these metabolites did co-elute, could that have led to a strongly negative CIR?" (although as we have seen, the negativity could be increased by a partial co-elute even if the substance was not highly negative, depending on the starts and stops of integration).
So, “Did any of these metabolites co-elute to any degree with 5aA (or 5bA)?” The answer is “maybe.”
Unfortunately, the only way to tell for sure is to have the complete mass spectra data of the relevant peaks. That information is gone.
So, what are we left with? Well, what we would like to have is some evidence that at least one of those 20 metabolites has a habit of eluting at least somewhere near the metabolites of testosterone. The problem is that when something elutes in a GC/MS is not really predictable. In different chromatographic conditions, stuff elutes at different times. And it is particularly hard to predict the retention time when dealing with glucocorticoids like dexa’ and methyl’, because they are highly temperature sensitive, so small changes in the conditions will strongly affect the elution time.
I have not found a single chromatogram available on the internet that has dexa’ or methyl’ along with testosterone (or their metabolites) shown on the same chromatogram.
A further complication is that testing for glucocorticoids is usually done with liquid chromatography / mass spectrometry (LC/MS), rather than GC/MS. This is for several reasons. First, GC/MS (at least the way LNDD does it) involves certain steps to prepare the samples to be analyzed. One of the steps is acylation. I don't get the details of the chemistry, but the effect is that if the compound has any hydroxyl (oxygen and hydrogen together) groups, it ends up lowering the CIR of the compound. Many of the methyl' metabolites have numerous hydroxyl groups, so the resulting compounds could have seriously negative CIRs.
Another reason LC/MS is used is that glucocorticoids are hard to turn into gases (they have low volatility) and they are very temperature sensitive, making consistent results in GC/MS difficult to achieve. That superiority of LC/MS for analyzing glucocorticoids may be why they aren’t normally shown on the same chromatogram.
So, the summary of all that is that it does not really seem possible to show ahead of time whether these two compounds, dexa' and methyl', and their 20 metabolites, would elute with 5aA or anything else. Too much depends on the particular circumstances of the particular set-up of the particular GC/MS machine using a particular process.
Now, most of you reading this remember the great discussion between Larry and M about whether the demands of TD2003IDCR were met. The most generous take on all that is that LNDD did meet TD2003IDCR using a "relaxed relative retention time" standard. The 1% standard stated in the document would need to be relaxed to about 6%.
What that means is that any of these dexa' and methyl' metabolites that were within 6% of the peak of interest in the GC/MS have to be considered as possible co-elutes in the IRMS (there may be even more movement of substances than that between the GC/MS and the IRMS, but that's a conservative yet big enough window).
Remember we are talking two main compounds and 20 metabolites. It seems reasonable to conclude that one or more of them fall within 6% of the 5aA peak in the GC/MS. Looking at the chromatogram, there are some possibilities there. Any one of those could have negatively skewed the CIR in the IRMS.
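To make that 6% window concrete, here is a rough sketch of the kind of screen I have in mind. The retention times below are placeholders, since, as noted, nobody seems to have published the dexa'/methyl' metabolites alongside the testosterone metabolites under comparable conditions; real values would have to come from runs on LNDD's own set-up.

```python
# Sketch of a relative-retention-time screen: which candidates fall within a
# 6% window of the 5aA peak? All retention times below are placeholders, not
# measured values.

def within_rrt_window(rt_target, rt_candidate, tolerance=0.06):
    return abs(rt_candidate - rt_target) / rt_target <= tolerance

rt_5aA = 1430.0  # hypothetical retention time, in seconds
candidates = {
    "dexa metabolite A": 1395.0,    # placeholder
    "methyl metabolite B": 1510.0,  # placeholder
    "methyl metabolite C": 1750.0,  # placeholder
}
possible_coelutes = [name for name, rt in candidates.items()
                     if within_rrt_window(rt_5aA, rt)]
print(possible_coelutes)
# ['dexa metabolite A', 'methyl metabolite B']
```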
Of course, the complete mass spectra data could have made all of this moot, because it could have shown the purity of the peaks in the GC/MS and identified everything around them. So we would have known whether there were possible problems. But, alas, that information was erased.
In the end, this is all about specificity, the way LNDD assures the purity of its peaks. In short, they don't, because they don't have the mass spec data.
And with dexamethasone and methylprednisolone we have compounds that could have caused contamination / interference, and there seems to be no way to show they didn't, and no way to show they did.
So, my guess is that means a big fight over the meaning of the legal requirements of ISL 5.4.4.2.1, and perhaps other documents. The chemistry leads right back to the law. But I won’t go into that right now.
syi
Placement problem solved.
TBV
TBV,
Let me now comment on your I3 scale.
If what you want to show is the likelihood of error in the carbon measurements of the key metabolites (in the F3 that would be the 5A, 5B, and pregnane), then you need to examine the peak conditions around those metabolites.
Your measure is useless for that, since it measures peak conditions far away from those metabolites and such information may swamp the more specific information for the metabolite peaks. If I may point out one example, the peak conditions around the IS, 5A androstanol, are irrelevant, since that is only being used as a chromatographic standard for relative retention times, and no carbon ratios are calculated for it.
Secondly, what you use as your reference standard, the UCLA slide, has been cherry-picked by Landis, and one cannot know if it is representative of normal good chromatography. Moreover, as I pointed out, it refers to the F2 sample fraction, which in the Landis samples generally contains many fewer peaks (one of your metrics) than the F3 sample you use for comparison here. At a minimum, you should have compared the UCLA slide to the Landis F2 samples.
Finally, as to the Shackleton slide, we do not know if this is representative of normal chromatography in a production lab. Since this was a pioneering study, special efforts were undoubtedly used to get chromatograms suitable for a publishable study. Moreover, as to the metabolites in question, the Shackleton study shows an intervening peak between the 5A and 5B which, according to your theories, should have skewed the carbon measurements for the 5A and 5B. There is nothing in the portions of the study to which you have linked that indicates Shackleton thought this was a problem, nor any corrective or diagnostic measurements he took to account for it, e.g. the CIR of this peak.
TBV,
One other thing: we have testimony by Brenna going over a number of the F3 chromatograms and explaining that he doesn't think any of the problems you mention constitute a problem for proper measurement of the carbon ratios.
On the other hand, if I remember correctly, the only specific problem Meier points to in the Landis F3 is the small intervening peak between the 5A and 5B, which is rebutted by Brenna's testimony. I don't remember him pointing to any of the other problems that are included in your I3, but haven't checked this.
Also, can you refer me to where in the record it says that complete mass spectra were recorded and then destroyed? Thanks.
I don't recall Brenna ever really addressing co-elutes of unexpected substances. I think he was mostly discussing the 5bA/5aA adjacency as possibly interfering peaks. He did address a question dealing with a shoulder, but the answer in retrospect seems unconvincing because (a) it was kind of vague; (b) when Young tried to rely on it with WMA, the calculator popped out and Brenna's dismissal didn't come up sounding so good; (c) WMA's math wasn't challenged and the point wasn't re-addressed by Brenna in rebuttal; (d) we can duplicate WMA's math with the spreadsheet; (e) the spreadsheet shows effects of third peaks that Brenna didn't address at all.
I consider Brenna's testimony to be correct within the narrow confines of what he spoke to. Every place there is an omission or a failure to extend seems to match a place he did not want to go. We should be wary of extending his conclusions further than he went himself.
TBV