Friday, August 08, 2008

A look at LNDD data and positivity criteria

Update: we're aware of some data problems and are working on a revision of this post.

Inspired by the visualization that Donald Berry did in his Nature article, we'd like now to look again at the Landis data and positivity criteria.

We will look at the data presented in Exhibit 26, LNDD 433-436, plus data from the Landis B sample in USADA 352, as well as the reported results from Exhibits 86, 87, 89, 90, 92 and 93. The last group are the "alternate B samples", plus some "controls" provided by Dr. Aquilerra.

Note that in this discussion we completely ignore the issue of correct peak identification, which some have argued should have been the end of the story. We're now taking the reported values at face value, and looking at them.

The data presented here is available in an Excel spreadsheet in the archive for folks to do their own manipulation should they like.

Here is a picture with all the data from LNDD 433-436, plus all the Landis samples, in a form similar to that used by Berry. We've mapped the 5aA against the 5bA, and the Andro vs. the Etio, which differs from his chart.

Figure 1:
Reported IRMS CIR delta-deltas for all samples of interest.
Click for bigger.

Let's walk through this slowly to be sure we understand what is shown. The yellow vertical and horizontal lines represent the -3.00 delta unit limits, and the purple lines are the -3.8 practical limit because of LNDD's measurement uncertainty.

One of Berry's major points is questioning whether these limit lines are in the right place to achieve statistically valid conclusions. (There are others, to which we will return).

Solid red triangles represent 5aA and 5bA values reported by LNDD on samples they have deemed "negative". Outlined red triangles are the same values for those LNDD has deemed "positive".

Solid Red squares represent Andro and Etio deltas for "negative" samples, and outlined red squares are those for "positive" samples.

Blue triangles represent Landis 5aA and 5bA values, and Blue squares represent his Andro and Etio values.

We see in this picture that none of Landis' Andro and Etio values met the -3.8 criterion on any sample test; one exceeds the -3.0 criterion in one dimension.

On the 5aA and 5bA values, we see one of Landis' sample tests exceeds -3.8 on two criteria; this is the Stage 17 A sample. When the B of the same sample was run, only one of the values exceeded the -3.8 limit. On the "alternate B" samples tested, two exceed -3.8 on one measure.

Because the B sample is legally controlling, all of the Landis samples yield three tests that exceed the -3.8 limit on a single value. By "controlling" we mean the requirement that the A and B confirm each other: what counts is what they agree on, and here they agree on only one metabolite over the -3.8 limit.
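For concreteness, here is how the limit check reads in code — a minimal sketch (not LNDD's actual software), assuming only what's stated above: a -3.0 limit widened to -3.8 by LNDD's 0.8-unit measurement uncertainty. The sample values are invented for illustration.

```python
# Sketch of the positivity check described above: compare each of a
# sample's four delta-delta values against the -3.0 limit, or against
# -3.8 once the 0.8-unit measurement uncertainty is folded in.

LIMIT = -3.0
UNCERTAINTY = 0.8
PRACTICAL_LIMIT = LIMIT - UNCERTAINTY  # -3.8

def metabolites_over(deltas, limit):
    """Count delta-delta values more negative than the limit."""
    return sum(1 for d in deltas if d < limit)

sample = [-6.4, -2.7, -3.5, -2.2]  # hypothetical (5aA, 5bA, Andro, Etio)
print(metabolites_over(sample, PRACTICAL_LIMIT))  # 1: positive only under a single-metabolite rule
print(metabolites_over(sample, LIMIT))            # 2
```

The point the sketch makes is the one at issue: a sample like this is an AAF under a single-metabolite rule at -3.8, but not under any rule requiring two or more.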

Is a single value exceeding a -3.8 limit proof of doping?

We believe Berry questions this on two grounds: first, that the value of the limit hasn't been validated to a supportable level of confidence, and second, that a multi-dimensional analysis for reliability hasn't been done.

We see, in practice, that the LNDD is willing to declare a positive on a single measurement over that value, whether or not they have measured any other values. In the Landis case, CAS has accepted this from a legal point of view. Berry, among others, would question whether this is scientifically supportable. Others think CAS may have erred in the legal analysis, but there is no recourse for such a mistake.

Let us look at the LNDD provided data, and see what we can learn.

On LNDD 436, there are 27 reported "positives." We break them down in Table 1.

Table 1:
Reported positives and number of metabolites exceeding limits. (Columns: number of positive metabolites at -3.8; at -3.0; 5aA, 5bA at -3.8; 5aA, 5bA at -3.0.)

Of the 27 positives, 8 were 4 for 4 over the -3.8 threshold value, and 3 more were 3 for 4. A considerable number, 12, had only two positive metabolites. Four were declared positive on the basis of a single metabolite greater than the -3.8 limit, and in one case a positive was declared on a single value that exceeded only the -3.0 limit. By LNDD's own testimony on the proper use of measurement uncertainty, that one should not have been reported as a positive, yet here it is in the report. The average LNDD positive has 2.5 metabolites over -3.8.

The Landis A sample is one of the 12 declared positive on two metabolites on LNDD 436, but it is not controlling because it wasn't confirmed by the B. Adding the Stage 17 B sample and the alternate B's, Landis doubles the number of tests that LNDD has declared positive on the basis of a single metabolite.

Because the differences between the 5aA and 5bA measurements were raised as an issue by the Landis defense through the testimony of John Amory, we've also counted the positives involving those two metabolites in separate columns. Only two positives were declared by LNDD without either of the pair being positive; 8 are positive based on one of them being beyond -3.8, and 6 based on -3.0. There are 17 where both are positive beyond -3.8. In 20 of the 27 reported positives, both values exceeded the -3.0 value. On average, LNDD positives had 1.5 positives between the two 5xA metabolites.

How many metabolites should be positive to prove doping?

This is the second of the major points in Berry's opinion: we don't have enough data to know. We do know from studies conducted by the UCLA Olympic Laboratory and the Sydney, Australia lab that they believe there are false positives with fewer than two positive metabolites. We also know that WADA, as presented in the Landis case, disagrees, and is content to declare an AAF on a single metabolite. This is no small part of why Berry and Nature do not buy WADA's story.

Let's look at the LNDD data again. If the criteria were more stringent based on metabolites, where would that leave us? Let us confine ourselves to the bottom left part of Figure one, expanded here as Figure 2.

Figure 2:
The "positives" of Figure 1

If we believe the -3.8 delta value limit, and take the most restrictive view, the quadrant below and to the left of the purple lines contains high-confidence positives. This interpretation would deem inconclusive the Landis Stage 17 B sample, and all of the other Landis alternate B samples.

If we took a requirement that all four metabolites LNDD measures should exceed -3.8, then there are 8 total positives instead of 27. If we said three of four, then we end up with 11 instead of 27.

If we say two of four, then there are 23 of 27, and none of them is Landis.
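The what-if counts above can be recomputed mechanically from per-sample readings. Here is a sketch; the four samples below are invented stand-ins for the LNDD 436 data (the real values are in the linked spreadsheet).

```python
# Count positives under a "k of 4 metabolites beyond the limit" rule.

def positives_under_rule(samples, required, limit=-3.8):
    """Keep samples where at least `required` of the four metabolite
    delta-deltas are more negative than `limit`."""
    return [s for s in samples if sum(d < limit for d in s) >= required]

samples = [
    [-4.6, -4.1, -4.0, -3.9],  # 4 of 4 beyond -3.8
    [-4.2, -3.9, -3.9, -2.0],  # 3 of 4
    [-4.0, -3.9, -1.0, -0.5],  # 2 of 4
    [-6.4, -2.7, -3.5, -2.2],  # 1 of 4 (the single-metabolite pattern)
]
print(len(positives_under_rule(samples, 1)))  # 4: everything passes a 1-of-4 rule
print(len(positives_under_rule(samples, 3)))  # 2: only the first two survive 3-of-4
```

Tightening `required` from 1 toward 4 is exactly the move from 27 reported positives down to 23, 11, and 8.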

The values used to sanction Landis reflect the loosest possible interpretation of the criteria, and by definition the most likely to contain false positives.

We have noted before that the LNDD has a higher rate of reported steroid positives than other labs. They have 5% of the AAFs, but 10% of those for steroids. Why might that be? A particularly dirty group of tested athletes compared to other labs? Positivity criteria looser than other labs'? Differences in the testing methodology?

It is hard to believe LNDD tests dirtier athletes than anyone else. One avenue of clear concern would be the "single metabolite" criterion, which we have discussed. Having a 2 or 3 metabolite standard might account for much of the discrepancy.

Methodology Issues?

Quite a lot of the Landis case dealt with questions about the methodology, so let's look at what the data may say about that.

A point that Landis raised through the testimony of John Amory was the likelihood (or not) of 5aA and 5bA measurements being as different as seen by LNDD in some of the Landis samples. If we look again at Figure two, there aren't many points below and right of the Landis data, particularly the S17 samples. This reflects the differences between the reported 5aA and 5bA deltas.

Of all the positives reported by LNDD, the average difference between the 5xA measurements is 0.87. On the four Landis samples reported positive, it is 3.09. Amory testified this is biologically unlikely even if he was doping. USADA argued it is because of doping, and offered a single subject from a progress report by Schanzer in Cologne to make the point. We suspect Berry would be unimpressed by the statistical validity of that single data point.
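The spread statistic itself is simple arithmetic; here is a sketch, with invented pairs standing in for the spreadsheet data.

```python
# Mean absolute difference between 5aA and 5bA delta-deltas.

def mean_spread(pairs):
    """Average |5aA - 5bA| over a list of (5aA, 5bA) delta-delta pairs."""
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

typical = [(-4.6, -4.1), (-5.1, -3.6)]  # hypothetical values
print(round(mean_spread(typical), 2))   # 1.0
```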

Landis has two reported positives (the S17 A and B) where the difference exceeds 3.00; the only other positive reported by LNDD with a 5xA difference greater than 3.00 was the female E27.

In the very first filing on the case, the ADRB submission, Howard Jacobs suggested the difference was so great it was likely to have come from measurement error. WADA has argued it is because of how he was doping, and they have no need to or interest in finding out the details.


The opinion one forms on the Landis case depends a great deal on the prejudices one brings into the discussion, and on how closely one looks at the data. We believe that, looking at the data, a significant number of qualified scientists, as exemplified by Donald Berry, can come to quite reasonable doubt about the soundness of the "guilty" decision handed to Landis.

Under the WADA Code, this is irrelevant. They have secured their sanction against Landis, and have marched onward to Beijing.


whareagle said...


Someone ought to step up and subsidize a lawsuit through the US courts, as well as the International Courts, to bring this evidence forward and refute the foundation for the conviction. USAC? You embarrass me. You should have stepped up 2 to 4 years ago when all of this was happening, to YOUR athletes, and fought for their rights, instead of rolling over.

Now Landis is exhausted, emotionally and fiscally, and the damage done to him, his family, and the greater family of competitive and recreational cyclists in this country will take years, years, years to overcome.

And for those who continue to believe Floyd is guilty, let's hope to Gawd nothing like this ever happens to you.

LNDD scruud up. The ADA's covered for them, and the price is one man's life and reputation, as well as that of his teammates, the owner, and the sport.

The blood is on your hands, LNDD, and yours, USADA, and yours, WADA, and maybe we can throw in the French Federation as well. Don't lap it up like hounds from hell. Gag on it and work to bring some sense of respect and dignity back to your own ADA's and cycling federations.

Unknown said...

TBV, I'm having some trouble with your numbers. Is the following not right for all the numbers over 3.0?

(All numbers are actually negative, but it is easier to read this way)

July 13: 5aA = 4.62, 5bA = 4.09
July 18: 5aA = 5.06, 5bA = 3.56
July 20A: 5aA = 6.14, Andro = 3.99
July 20B: 5aA = 6.39, Andro = 3.51
July 22: 5aA = 4.80
July 23: 5aA = 4.96

So it's not the S17A sample that exceeds on both 5aA and 5bA, but the July 13 sample.

And, taking into account what you say about the July 20B sample being the controlling sample, then we have four others that exceed 3.8 on just 5aA.

So, isn't your graph missing July 13?

And the two blue triangles close to coordinate (6,4) are not 5aA and 5bA, but 5aA and Andro.

If we took a requirement that all four metabolites LNDD measures should exceed -3.8, then there are 8 total positives instead of 27. If we said three of four, then we end up with 11 instead of 27. If we say two of four, then there are 23 of 27, and none of them is Landis.

What about July 13?

Or am I missing something about how you are reading the tests?


Oh, and on a different note: what about the fact that the 5aA average for the negative tests is 1.45? How can 3.0/3.8 be the standard for all four different metabolites when the average for each metabolite is so different? Shouldn't there be a different standard (a different standard deviation x 2) for each metabolite? I know this set of data is not their control group, but I suspect their control group would show similar differences (after all, these were the NEGATIVE tests).

DBrower said...

Mike, did you pull down the spreadsheet and see if I made data entry errors? That's a significant risk with hand-entry.

My graphs aren't the same as Berry's, and I believe that I plotted the 5xA's against each other, different than what he did.


Unknown said...

I only have time to look at this in bits, but the chart of LNDD's positives from 2004-2006 on Ex. 26 0436 is fascinating.

Of the 27 IRMS positives, Landis S17 is the ONLY one in which the Andro and the 5aA, and only the Andro and 5aA, exceed 3.8. Clearly his results were unusual for positive tests.

Unknown said...

For all the LNDD positives in 2004 and 2005 (14 of them with full data) the difference between the 5aA and the 5bA CIR reading averages 0.74.

For all the LNDD positives in 2006 (9 of them) the difference between the 5aA and the 5bA CIR reading averages 1.70.

Too small a number of samples to be significant for sure, but did something change in 2006?


Larry said...

Mike, one thing that might have changed in 2006 is increased use of testosterone gels applied to the skin. I don't know if this actually took place. But USADA's argument is that testosterone applied to the skin will (or might) be metabolized primarily into 5aA. This is as close as they came to explaining the discrepancy in 5aA versus 5bA readings for Landis.

This is a strange business, one I am slowly trying to research: the issue of "what is up with 5aA"? Wherever we look, when a reading turns out odd, it's generally the 5aA. Look at the LNDD validation studies, the 5aA is measured appreciably more negative than anything else. Look at the Landis measurements, except for the two samples where the 5aA measured lowest, the 5aA measures higher than anything else. Look at the results for the EDF reprocessing, when the Landis team forced LNDD to use the automatic integration built into the Masslynx software: the blank urine (a negative control) tests positive for 5aA.

If you look at the historic negative tests (per TBV's spreadsheet), the 5aA measures more negative than any other metabolite, though not by as much as we might expect (about 0.3). But these "negatives" are unusual, because they're all negatives measured when the athlete failed the screen but passed the CIR test. If you look at the positive tests, again on average the 5aA is the most negative, this time by a factor of at least 0.9 over the next most negative measurement.

So again, I ask, what is up with 5aA?

One other question, I haven't been at this as long as some of you, has the question of sitosterolaemia ever been discussed here? Has anyone taken a look at the relatively ancient comment at Ken's great Environmental Chemistry site? The idea that Landis' mennonite background might make him genetically prone to elevated levels of plant-based 5a ... is this worth considering?

Thomas A. Fine said...

Oh good, another chance to talk about beer!

The fact that the 5xA metabolites are so divergent in Floyd and so not divergent in all the other positives is perhaps the most interesting thing in here to me. Why might this be?

There's the T-gel theory. But are we to believe that of all these positives, Floyd is the only one using T-gel? Then too, we have no data from that alleged study on T-gel, and as TBV says, it was only one subject anyway. Are there any divergent 5xA among the negative tests (cheaters who got away with it)?

Then, there's the possibility that Floyd is just weird - that his own organism does things differently than 9 out of 10 human organisms. This is a very real possibility, and it can't be discounted. Certainly in the WADA study on dietary influences, there were subjects that had unusual profiles, compared to other subjects (and of course one subject whose results weren't reported, and no explanation that I can find).

Then of course, my favorite, the beer theory. Beer might cause his initial positive. But to explain those other "positives" we have to reach farther.

Lab cheating? Has anyone looked for patterns in those "positives", like all one technician or all the same day or some such?

Or Floyd cracked a brew several times during the tour? It is possible.

Or perhaps things besides beer might be able to cause rapid d13C changes? Maybe vinegar (acetic acid utilization is an essential element of my beer theory)? Does Floyd like pickles? Or cholesterol in general? Cholesterol was excluded from WADA's study on dietary influences.

There's another aspect of the beer theory I have to mention too. Suppose we take Schanzer's claims about T-gel at face value. But then suppose my beer theory happens to be true? Did Schanzer make any attempt in that study on T-gel to account for possible outside influences on his results, like beer? Maybe his lone subject had an Amstel Light the night before, and tested positive for that.

It's unlikely that he checked for anything like that. My theory hasn't been tested, and there's no evidence yet that anyone out there is taking it seriously. On the other hand, Schanzer was also involved in the dietary study. If the thought would occur to anyone besides me, it might occur to him.

So in summary, those divergent values are a problem begging for an explanation. And so far, there are only two explanations on the table. One is from a respected professional in the field - but he admitted only a single subject, and where's the publication? The other is from an internet crackpot - but it's backed up by a stack of studies on the link between alcohol and testosterone. And besides, Nature just validated the statistical arguments made by that same crackpot.


Unknown said...

Just a quick warning Larry. I may be crazy, but I think TBV mislabeled his spreadsheet. The 5aA and 5bA labels should be switched. You can check to be sure.


Larry said...

Mike, you're right, but I caught that and the statistics I gave you are properly labelled. I think.

Tom -

Can you point me to where you've explained the beer theory? Does your beer theory explain how the C13 deficit might have overexpressed itself in 5aA?

Repeating myself, but is there anything to the sitosterolaemia argument I pointed to in my earlier post?

You've probably noticed that WADA, USADA and their friends love to talk about the unique and divergent nature of human biochemistry -- that is, when it suits them. This is how they explain why Landis only tested positive for one metabolite, even though this is (giving USADA the benefit of the doubt) highly unusual. And when I argued over at Science of Sports that doping is not an explanation for how one cyclist can ride the shorts off another cyclist if both cyclists are doping, I was told that certain athletes have the biochemistry to get more performance-enhancement than others from doping. But if you suggest that a person might have a biochemistry that could naturally produce delta-delta readings above -3.8 ... well, then you're a crackpot, it seems.

The t-gel theory is the first theory I've encountered to explain how the body would "know" which testosterone was light in C13 and how to apply that testosterone mostly to 5aA production. But yeah, if the t-gel theory is to be taken seriously, then athletes would be free to use all the t-gel they wanted when competing in the U.S.A., because UCLA (which requires three metabolites to test positive) would never be able to pin an AAF on an athlete using t-gel. That's at least a small hole poked in the t-gel theory.

The more I try to study this stuff, the more impressed I become that this is a very complex topic, and we may be talking about multiple factors that caused the Landis AAF. We know as a general matter that living things prefer C12 to C13, but that this preference is not uniform across all species, and is possibly not uniform across all biochemical processes within a given species, or even among all members of the same species. Using the LNDD data, it is obvious that 5aA is generally lighter in C12 than the other testosterone metabolites, regardless of where we look (historic positives, historic negatives and blank urine). I don't have other data from other labs to use as a point of comparison (do you?). But if LNDD is using a valid method, then it looks like human metabolism has a distinct preference for C12 when it metabolizes testosterone to create 5aA. Or, the beer.

If humans have a distinct preference for C12 when they create 5aA, why couldn't this preference be more distinct in some people than others? Could this be why Landis flunked the CIR test? However ... if Landis has a distinct and unusual biochemistry, wouldn't his legal team have caught this? Don't you figure that they tested Landis in 100 different ways to see if they could find something like this? I mean, don't you figure they gave him a couple of beers, put him on a stationary bicycle, then asked him to pee in a cup?

So ... I don't think the explanation can lie solely with Landis' organism. If there's an explanation, it must lie (at least in part) with the method used by the LNDD. That's something that Landis' team could NOT get access to. (I think this is TBV's argument.)

Larry said...


Another excellent post by you. Thank you for the careful and thoughtful analysis.

Everything that follows is IMHO, and I won't repeat my oft-stated disclaimer about my not being a scientist or statistician. (oops)

One difficulty with Dr. Berry's article is that he is making two very distinct points, and it is easy to confuse these two points, particularly if what you're trying to do is to apply Dr. Berry's conclusions to the Landis case. The first point is that using tests validated to a 95% level of confidence is problematic, particularly where (as with Landis, and with those now subjected to targeted testing) the same athlete is repeatedly tested. The second is that the WADA tests are themselves not tested for accuracy. These are both terrific points. But when we focus on the Landis case, I think that the second point is much more important than the first.

Let's discuss the first point first.

The first point made by Dr. Berry is, if I understand it, a point based solely on statistical factors. I believe that for his first point, Dr. Berry is taking the LNDD results at face value, and is simply considering what happens when a good lab performs a good test that has been validated to a 95% level of confidence, and applies the same test multiple times to the same athlete. From Dr. Berry's article, the 95% confidence level erodes to something much smaller, to something in which we can have no real confidence.
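That erosion is easy to see with a back-of-the-envelope sketch, assuming (as the simplest reading of Berry's point does) independent tests each with a 5% false-positive rate; the real tests may not be independent, so treat this as illustration only.

```python
# A clean athlete's chance of at least one false positive grows
# quickly with the number of tests, even at a 5% per-test rate.

def p_false_positive(n_tests, per_test_rate=0.05):
    """Chance a clean athlete fails at least one of n independent tests."""
    return 1 - (1 - per_test_rate) ** n_tests

print(round(p_false_positive(1), 3))  # 0.05
print(round(p_false_positive(8), 3))  # 0.337 -- eight tests over a Tour
```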

But in fairness to WADA (and yeah, it feels strange to mention "fairness" and "WADA" in the same sentence), there's a lot more going on in WADA testing than the performance of a single test. In a normal scenario, the lab never even GETS to the test that has the 95% confidence level unless the athlete fails an earlier screening test. If the athlete fails both the screening test and the main test, then statistically speaking we should have greater confidence than if he just failed the main test. There's also the matter that the main test is performed twice, on an "A" sample and a "B" sample. Again, if we're assuming that the tests are good tests performed by a good lab, the testing of the "B" sample provides additional confidence. (During the NPR interview, Dr. Berry was asked about how "B" sample testing affects his statistical analysis, but it was a multi-part question and Dr. Berry never provided an answer.)

I think that Dr. Berry is making a powerful point about the true measurement of confidence levels in doping testing, but given that he doesn't address the screening and the "B" test as potential additional sources of confidence, I'm not sure we can quantify the point he's making.

When it comes to the Landis case, Dr. Berry's first point is more difficult to apply. Yes, we know that Landis was tested multiple times during the 2006 Tour, which erodes confidence in the single S17 positive test. But to figure out the "P" in the Bayes Rule calculation (the one used to show so-called "prosecutor's fallacy"), we'd also have to take into account whatever value there might have been in the screening test (yes, the lab violated the rules by acquiring only a single diagnostic ion, but arguably there's still some level of additional confidence in measuring T/E based on a single diagnostic ion), plus the additional "B" tests (the positive tests increasing confidence, the negative tests decreasing confidence, with a likely net increase in confidence overall), and the negative impact on confidence from the statistically unusual nature of both the spread and the trends in the CIR measurements of the four diagnostic metabolites (the unusual nature of the spread is well illustrated by your charts). These factors cannot easily be quantified. I don't see a statistical conclusion that can be reached here.
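For readers who want the mechanics of the Bayes Rule calculation and the "prosecutor's fallacy" mentioned here, a miniature version follows. Every number in it is invented for illustration; the whole difficulty, as noted above, is that the real "P" and error rates can't be quantified.

```python
# P(doping | positive test) by Bayes' rule. A positive result from a
# fairly specific test still yields a modest posterior when the prior
# probability of doping is low.

def posterior(prior, sensitivity=0.95, false_pos=0.05):
    """Posterior probability of doping given one positive test."""
    p_positive = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_positive

print(round(posterior(0.05), 3))  # 0.5 -- a coin flip if only 5% of riders dope
```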

I understand your point about deriving greater statistical confidence from requiring multiple metabolites to test positive. It's my personal preference to argue for this requirement from a legal standpoint (I think this is what is required under a fair reading of the ISL) and a scientific standpoint (I think that the scientific explanation for exogenous testosterone expressing itself in a single metabolite relies on the slightest possible evidence and what seems to me to be the shakiest of scientific hypotheses). The statistical argument seems weaker to me, as I'm not sure what difference it would make from a statistical standpoint to require multiple metabolites to test positive. To get to the required 95% level of confidence, you're still going to have to derive a standard deviation based on measured results, and multiply it by two. If you do this using multiple metabolites, presumably you can derive this confidence level based on the metabolite that measures most accurately. LNDD's method was to calculate confidence based on the metabolite (naturally it was the 5aA) that measures the least accurately. Statistically speaking, I think you end up with the same 95% confidence level either way.
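The "standard deviation times two" derivation mentioned here can be sketched as follows; the negative-population values are made up, and a real derivation would use the lab's own reference data for each metabolite.

```python
# Per-metabolite decision limit = mean - 2*SD of a negative population.

from statistics import mean, stdev

def cutoff(negative_deltas):
    """Decision limit derived from negative-population delta-deltas
    (sample standard deviation, values expressed as negatives)."""
    return mean(negative_deltas) - 2 * stdev(negative_deltas)

neg_5aA = [-1.9, -1.2, -0.8, -1.6, -1.5]  # hypothetical negative-test 5aA deltas
print(round(cutoff(neg_5aA), 2))          # -2.24
```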

(When I can't sleep at night, sometimes I like to count up all of the unlikely things about the Landis case. What are the odds that Landis would dope during a race where he also provided his wattage statistics to the general public? What about the odds that the agency that accredited the French lab would have put the wrong margin of error on the accreditation document, and that no one would notice this for almost a year, after which time the agency would agree to a retroactive change in the accreditation? What are the odds that the lab's GC service guy would have changed the GC column on the critical piece of lab equipment used to convict Landis in order to do testing, then would have changed the name of the column in the machine's memory to do the testing, then would have remembered to replace the old column without also changing back the name of the column in the machine's memory ... with no one noticing this, either, for almost a year, after which time the service guy would "remember" what he had forgotten to do? What are the odds that a doping rider would conduct a wiki defense? Sorry, I know that none of this means anything statistically, but I've read that Landis' "bonk" on S16 should add statistical confidence to the LNDD results, so I feel entitled to make my own dumb arguments in reply. Forget I said anything. Besides, from what I've read, it wasn't a bonk, it was dehydration. Sorry for digressing.)

Ultimately for me, point 1 just bogs down altogether in application to the Landis case. I can't determine the "P" in the Bayes Rule calculation, so this ends up in a YGIAGAM situation, and we have enough of these situations in this case already.

Now, on to point 2.

Point two of Dr. Berry's analysis seems far more important to me, and far more relevant to the Landis case. It is MUCH more important -- so much more important that it dwarfs point 1, and makes me sorry that Dr. Berry buried the lede by mentioning point 1. In point 2, Dr. Berry effectively makes the point I tried (and probably failed) to make in my Curb Your Anticipation series, in that lab testing depends in the first instance on proper method validation. In Dr. Berry's words from the NPR interview:

WADA never tests its tests.

This is something that we have suspected here, but we also knew that WADA labs do not have to disclose their method validation studies to athletes, so we could not say anything for certain about how WADA labs validate their test methods. But Dr. Berry says that WADA and the WADA labs do NOT validate their test methods in an adequate way, and the highly respected publication "Nature" has thrown its full weight behind this conclusion. WADA and AFLD could of course prove Dr. Berry and "Nature" wrong by publishing their validation studies, or even by stating that they have performed validation studies that go beyond the blank urine testing reported at LNDD 0456. But WADA remains silent and AFLD remains silent. So, let's repeat:

WADA never tests its tests.

This is CRITICAL to our understanding of what is going on in doping testing. It is SO critical, in fact, that there's really no point in discussing 95% levels of confidence and prosecutor's fallacies. We have no reason to give the LNDD testing a 95% level of confidence -- maybe they test at a 99.99% level of confidence, or a 2% level of confidence. Who knows?

All AFLD can tell you is that its test is based on a CIR method favorably reported upon in peer-reviewed science, and that its test has been performed 30 times on blank urine with 95% confidence using a margin of error of +/- 0.8. In other words, AFLD did all of their testing on ONE sample, a sample they presumed to be negative for exogenous testosterone. They had no way to determine how close their measured results might be to the true measurements. And they never bothered to test the test against a positive sample.

This is the critical point that Dr. Berry is making. All other points made by Dr. Berry, in my humble opinion, fade to meaningless.

WADA never tests its tests.

Please, don't bury this headline.

Here on your forum, TBV, we've evidently given more careful consideration to the validity of the WADA tests than WADA does.

Thomas A. Fine said...

Hi Larry,

This DPF thread lays out my favorite studies in the first three posts.

In short, alcohol has been shown to give a short-term boost in testosterone, and specifically to 5aA.

As for individual variations in C-13 chemistry, look over
my review of WADA's study on diet. Female 2 certainly has something going on with 11-OH-Etiocholanolone that's different from all the other people.


Larry said...

Tom, I will review, and in the meantime could you please consider this sitosterolaemia business?

Thomas A. Fine said...

From what I've read (briefly) sitosterolaemia is a disease with serious consequences early on in life. I suppose it's possible that being an athlete might mask these issues (or just as easily accelerate them). But otherwise, Floyd would likely know if he had this.

To have teeth, you'd simply need to show that one of the plant sterols that is over-absorbed metabolises into the 5aAlpha.

Show that, and then show that Floyd has this disease, and you might have something. But it's hard to find the metabolites of arbitrary sterols just by googling around, and in many cases, these sterols aren't well-enough researched to know their metabolites.

Even at that, you have to show that these metabolites hit the urine at a different rate than everything else. I haven't read enough about sitosterolaemia to begin to address that. I've seen it described as storing too many sterols. If it's a long-term buildup, that would serve to even out changes, rather than make things more dynamic. But then that begs the question of whether this thing might tend to influence T metabolites or reference metabolites (if either).

So, just off the cuff, this bears further investigation. Then again, so do a dozen (or a hundred?) other possibilities.

Problems like this are why WADA sucks. They want to believe that our biology is linear, when it's complex at best, and sometimes chaotic. There's so many unknown factors, some are guaranteed to lead to false positives.


Larry said...

Tom, OK, understood. There seem to be some references in the literature to mild forms of sitosterolaemia, where the disease is undiagnosed and barely noticed, but I take your point.

OK, so in summary, it could be the beer, and it could not have been the bourbon. I am taking away the point that we can't prove it was the beer, nor can we rule it out. The lesson to be learned is that human biochemistry is exceedingly complicated, and it is entirely possible for a person to naturally produce the results measured by LNDD, without use of a banned substance. The science behind the CIR test may be good in general terms and on average, but cannot possibly be validly applied to all people under all circumstances.