Thursday, August 07, 2008

Irregular Report 6

Follow-up to yesterday's piece in Nature concerning the Landis case and possible inaccuracies in anti-doping tests continues with news and blogger reaction. If more appears, TBV will post it. The piece in Nature is getting very wide repeat coverage, presumably because of the Olympic run-up and the prestige of the journal.

The Nature article by Berry is now available in full, for free. Mandatory reading.

News
The Boulder Report comments on a few current cycling related stories, and thinks that those who say Floyd Landis will go to Rock Racing should peddle their "crazy" somewhere else. OK fine, but there does seem to be lots of crazy for sale out there.

The Peoria, Ill., Journal Star posts a story that reveals a seldom-seen perspective: that of a DCO, or "doping control officer."

MSNBC/AP reports on a sting by German ARD TV that has caught a Chinese doctor offering gene doping to a reporter posing as a swimming coach. Various parties are shocked, shocked at the availability and at the convincing nature of the "gotcha" video. (Tip from a reader.)

Blogs
St Louis Today's cycling blog "10 Speed" cites the Donald Berry story from yesterday, which maintains that anti-doping tests, such as those performed on Floyd Landis' Tour de France samples, may be inaccurate:

And people wonder how I can suggest that perhaps dethroned 2006 Tour de France champ Floyd Landis got Richard Kimbled.


Jae from Podium Cafe also comments on the Nature article from yesterday, which points out the inadequacies of some anti-doping tests.

Drug Monkey says "bravo" to Nature for the piece it posted yesterday on Dr. Donald Berry and his skepticism over some anti-doping efforts.

Strategus feels that, Nature being one of the leading journals in science, its piece on Dr. Donald Berry posted yesterday must be taken seriously.

The Information Cul-De-Sac (eightzero) fantasizes about what really happened in July of 2006 to Floyd Landis.

CNN's Dr Gupta blogs about the Nature story as well.

CyclingFansAnonymous discusses Paul Scott's leaving ACE to help Landis, referring to declarations made by Aquilera and Catlin from our archive.

An anonymous comment at CFA suggests that Landis has been on the Rock Racing payroll all year. We really doubt that, for a number of reasons.

And we doubt Landis is going to show up at Leadville to race this weekend.



24 comments:

whareagle said...

Okay, I know I got 'Lemonded' about a year ago, but Joe Lindsay, if you're reading this, you REALLY need to do some more homework before you go popping off. I wonder if a call to Strickland wouldn't be more appropriate, unless you're TRYING to be the next Zap.

A LOT of scientific minds think Floyd got hosed, and you're not doing the world or yourself any favors by spouting off your uninformed opinion about the case so frequently. Heck, Tyler lives in the same town as you, and as far as I've read, you never bothered to contact him and ASK for an audience; instead, you just snarked. Well, dude, smarter people than you think this whole doping-enforcement-debacle is exactly that, a debacle, so maybe a call to Arnie Baker's office is appropriate, or maybe a call to Lim's coaching practice is appropriate. Otherwise, your opinions continue to diminish the impact of your words.

Unknown said...

With regard to my hometown's Dave Luecking's Fugitive reference and the testosterone involved in the case, I'm greatly resisting the urge to say, "It wasn't me, it was the one-balled man," but I figured that would just start someone off on the wrong path that bogs down some forums.

Wait. Whoops.

strbuk said...

Nah Gary, we won't get bogged down here, never happen!


str :-)


http://dailymaine.blogspot.com/

Unknown said...

Okay, the Nature article is very important from the perspective of the WADA/anti-doping complex as a whole. But the matter is more complicated specifically for the Landis case.

Berry says that there is somewhere between an 8% and 34% chance that Landis would test positive on at least one of the eight days he was tested. He sort of implies that this means there is an 8%-34% chance Landis is innocent (obviously too high to "convict").

But with LNDD's one-metabolite criterion, Landis tested positive on five different days (with the CIR test, of course, not the T/E). And that would mean there was, from a purely statistical viewpoint, only some very tiny chance of having false positives on all five tests.
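
To put rough numbers on both halves of this, here's a minimal sketch in Python. Independence across tests and a flat 5% per-test false-positive rate are assumptions borrowed from Berry's hypothetical, not established properties of LNDD's method:

    # Berry's race-wide figure: with an assumed 95% per-test specificity,
    # the chance of at least one false positive across 8 independent tests.
    specificity = 0.95
    n_tests = 8
    p_at_least_one = 1 - specificity ** n_tests
    print(f"P(>=1 false positive in {n_tests} tests) = {p_at_least_one:.1%}")  # ~33.7%

    # The five-positives point: the chance that five particular tests are ALL
    # false positives, under the same assumed 5% false-positive rate.
    p_all_five = (1 - specificity) ** 5
    print(f"P(5 independent false positives) = {p_all_five:.1e}")  # ~3.1e-07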

However, it is more complicated than that, because (even with a one-metabolite criterion) Landis also tested negative on three days of the Tour. So what does that do to the statistics?

An important point that Berry makes is that the sensitivity (true-positive rate) and specificity (true-negative rate, I think) of the test as performed by LNDD aren't known. The appropriate tests "have not been performed or published."

I just might point out that the false-positive rate with a one-metabolite criterion is obviously going to be higher, maybe a lot higher, than with more demanding criteria (like at UCLA). If I remember right, LNDD never did produce any study offering evidence of the value of its one-metabolite criterion. The false-positive rate could be 50% for all we know.

It is interesting that he says in the article that if the lab couldn't get the T/E test right, "in my opinion, this should have invalidated the more involved follow-up testing..." Obviously, Campbell's position from the first hearing.

So, it seems to me that while the Berry article is extremely damning of anti-doping science in general, it doesn't actually say too much about Landis, because of the unfortunate reality of five positive tests.

Thomas A. Fine said...

Did people also read the Nature editorial linked to the article?

"Nature believes that accepting 'legal limits' of specific metabolites without such rigorous verification goes against the foundational standards of modern science, and results in an arbitrary test for which the rate of false positives and false negatives can never be known. By leaving these rates unknown, and by not publishing and opening to broader scientific scrutiny the methods by which testing labs engage in study, it is Nature's view that the anti-doping authorities have fostered a sporting culture of suspicion, secrecy and fear."

I can't tell you how emotionally satisfying it is to read that in Nature. If I didn't know better, I'd swear it was something I'd written myself.

Open standards, and known false positive and false negative rates, were my big theme on DPF for a long time.

So now, I am going to say it. "I told you so". Ha!

tom

Thomas A. Fine said...

Mike,

monkeying with probabilities is like monkeying with the space-time continuum. It's way too easy to create a paradox.

Floyd tested positive one time in eight. I'm not trying to nail down a semantic point for PR purposes; it's a statistical argument. Whatever the false-positive and false-negative rates might be that led Floyd to his lone false positive, those rates all apply to the tests as they were used. A later fishing expedition can't be tossed in, because statistically, there's no way to compare that actual positive with the later experiments. The later B-sample tests went through a completely different statistical mill.

tom

Unknown said...

Tom, I sort of see the distinction (between the statistical significance of the one test, vs. the five) but I really fail at statistics. Could you further elucidate?

syi

Larry said...

Mike -

I hope you saw my post from last night about the difference between method precision and method trueness. This is a critical distinction, and it helps explain the applicability of at least some of what's in the Nature article to the Landis case.

(before going further: this post contains a discussion of some concepts in statistics. I AM NOT a statistician. I may well have made mistakes here. If so, I hope that people who know more than I do will step in and correct me.)

Yes, the many positives in the Landis case make it unlikely that the Landis AAF resulted from poor lab method precision. The S17 A and B positives are NOT anomalous results, as you're pointing out. So, perhaps we can assume that LNDD's CIR method had reasonably good precision.

What's missing is any indication of the trueness of LNDD's lab method. This is the critical point, the point that Berry tried to make with the chart. In order for a lab method to be fit for purpose, and in order to properly determine the margin of error applicable to a given lab method, you need to determine both trueness and precision. Otherwise, all you've determined is that the method is consistent, but you haven't determined if the method is consistently wrong.

To get a good illustration of this, take a look at the LNDD's method validation study at LNDD 0456. This is the study that LNDD used to determine its margin of error for its CIR testing. The critical number for purposes of the study is the SD(0/00) shown for 5aA - 5bP. This is a measurement of a single standard deviation for LNDD's determination of the delta-delta of 5aA - 5bP for thirty different CIR tests performed on the LNDD blank urine collection. It came up with a standard deviation of 0.40. LNDD took this standard deviation, and multiplied it by 2 to get its stated margin of error of +/- 0.8. From all I understand about the WADA rules, LNDD's multiplication of a standard deviation by two is in accordance with ISL requirements.

The key thing to look at is LNDD's measurement of a standard deviation of 0.40. From a true idiot's perspective, a standard deviation is a way of measuring the dispersion of a set of values. If you've determined a standard deviation for a set of measured values, then about 68% of the measured values will fall within that standard deviation, and 32% will fall outside.

Go back to LNDD 0456. What LNDD has effectively determined is that 68% of its measurements of 5aA - 5bP fall into a range of +/- 0.4. This tells us that the measurements are reasonably consistent, but it says nothing about how these measurements relate to the true scientific value of what the lab is trying to measure. For the moment, assume that if the lab is doing tests on a negative urine sample, a sample from a donor or donors that have not used artificial testosterone, then the four delta-delta measurements should all come out close to zero. Indeed, this is what we see for the first three sets of delta-delta measurements. But the delta-delta measurements for 5aA - 5bP do NOT come out at around zero; they come out at a mean slightly higher than -1. If the correct measurement for this value is 0, then obviously we're dealing with a case where the margin of error is higher than +/- 0.8. I'm not a statistician, and I don't know how to compute a standard deviation to take trueness as well as precision into account. Maybe a correct calculation of margin of error would add the trueness error to the +/- 0.8 precision error, and we'd have a margin of error of +/- 1.8. Maybe we're supposed to add the trueness error to the 0.4 precision standard deviation, and multiply THAT by two, to get a margin of error of +/- 2.8. (My guess is the former and not the latter, but hopefully someone who actually understands statistics will jump in here.)
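
For what it's worth, here is a small sketch laying those candidate calculations side by side. The -1.0 bias and 0.40 standard deviation are the figures read off LNDD 0456 above; the quadrature rule in the last candidate is one common metrology convention for folding an uncorrected bias into an uncertainty budget, offered only as a further possibility, not as anything LNDD documented:

    import math

    bias = 1.0  # |observed mean of about -1, minus an assumed true value of 0|
    sd = 0.40   # single standard deviation from the 30-run study
    k = 2       # coverage factor LNDD used (2 x SD -> +/- 0.8)

    candidates = {
        "precision only (LNDD's figure)": k * sd,                          # +/- 0.80
        "bias + k*SD (first guess)":      bias + k * sd,                   # +/- 1.80
        "k*(bias + SD) (second guess)":   k * (bias + sd),                 # +/- 2.80
        "k*sqrt(bias^2 + SD^2)":          k * math.sqrt(bias**2 + sd**2),  # +/- 2.15
    }
    for name, moe in candidates.items():
        print(f"{name:32s} -> +/- {moe:.2f}")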

Putting statistics to one side for the moment, on the face of LNDD 0456, there's an obvious and even more damning indictment of LNDD's failure to consider method trueness. Notice that LNDD ran all of its margin of error testing on a single blank urine pool. In theory, the pool is a "negative control" -- it should represent typical urine from people who have not used any banned substances. LNDD gets credit for the fact that it ran its CIR test 30 times against this negative control, and not once did the negative control test positive for artificial testosterone. True, some of the tests came somewhat CLOSE to a positive finding of -3.0 -- by LNDD's analysis, 5% of the delta-delta readings for 5aA - 5bP would equal or exceed -1.84 -- but there were no false positives on this negative control.

But remarkably, this is where LNDD's accuracy testing ended. They ran their accuracy tests on a SINGLE NEGATIVE CONTROL. They never ran their tests against a positive control, a sample taken from a subject using artificial testosterone (or if they DID perform additional testing, this testing had no effect on their measured margin of error, which is based exclusively on the results shown at LNDD 0456).

We're told that the ADA lab tests would never pass muster at most kinds of testing labs, and here's proof of why this is the case. Imagine that a medical lab used a blood screening test for cancer that had only been tested against one person's blood, where the test determined (accurately) that the person did not have cancer. Would you think that this test had been proven accurate?

I think it is a travesty that LNDD was allowed to get away with this kind of testing.

For those who would say that LNDD is just using the WADA-approved CIR test, I'll end with my oft-quoted section of the WADA rules, ISL Section 5.4.4.1:

Standard methods are generally not available for Doping Control analyses. The Laboratory shall develop, validate, and document in-house methods for compounds present on the Prohibited List and for related substances. The methods shall be selected and validated so they are fit for the purpose.

(Tom, just saw your posts as I am posting this. I echo Mike and ask that you post something in greater detail, particularly addressing how "B" sample testing affects the analysis here. Also, please take a look at this post and tell me where I've screwed up.)

Thomas A. Fine said...

Sorry, I can't help myself. Look at that Nature editorial again. They couldn't have used much stronger language. For the most prestigious scientific publication on the planet (arguably) to make such a bold statement is worth noting.

They are not merely saying "we ought to do our tests right". Tell me if I'm reading too much in, but they are actually saying, "current practices are wrong". And it's not just this Berry guy saying it, it's the Nature editorial staff.

Imagine you're a WADA scientist reading this. It's not coming from some nutcase on the internet or some high-paid Floyd flunky. It's coming from Nature.

That's huge. Frankly, it seems too much of a statement to base on this one article by Berry, which makes me wonder what else they have up their sleeve. (Then again, I always like to appeal to the wilder side of people's imaginations.)

tom

Eightzero said...

Tom, I'm with you. Nature said this well. But unfortunately, they don't deal with the next logical question. They say "...it is Nature's view that the anti-doping authorities have fostered a sporting culture of suspicion, secrecy and fear."

So therefore what? So there is a culture of suspicion, secrecy and fear. It is the *injustice done* because of this culture that is the problem, and Nature offers no objective solution.

Everywhere we point this out we get shrugs while people point at cheaters. As long as we get our gladiatorial games, no one cares. Until the lions come for you.

Lots of "I told you so's" here at TBV and at Rant's place. Some solace that is to Floyd. I just wonder when they'll come for me. I see my sport of fencing had a doper excluded from the Italian team. Could I be next? I signed up for that "farcical system" myself.

But hey, LNDD has all that experience. And the CAS seems to like the idea of "the spirit of what was intended." That's the spirit! Rah IOC! Rah Olympic movement!

Remember when they asked Bill Johnson what it meant for him to win an Olympic (Silver) medal? He looked right at the camera and said, very candidly, "....millions."

Russ said...

Funny thing is, I believe Dr Berry is the guy that OMJ and others were quoting as THE man, and his book as THE reference, on calculating false positives and negatives, over on DPF in the early days.

80, the scientists generally leave the rest of the story up to the lawyers and politicians, that is unless it is about global warming or a hunt for life in outer space! :-)

For them to go, as TAF pointed out, to the, I'd say, EXTREME of linking in such editorial comments is strong stuff indeed.

Larry and Mike, I left a new post at the end of Monday's stuff.

As to Berry's 34%, remember that was granting (from out of the air) a 95% spec., and, lacking the validations from LNDD, except for his comments, he seemed to assume "industry standard" validated methods. I mean, his methods, I think, are not really suited to tests and machines that are poorly performed, operated, calibrated, etc. So I am trying to say that the 95% was generous.

Regards,
Russ


Larry said...

Russ, the 95% figure is not pulled out of the air. It comes from the WADA rules. ISL Rule 5.4.4.3.2 requires the lab to determine the expanded uncertainty of their lab methods to reflect a level of confidence of 95%. And WADA did not pull this 95% figure out of the air, either. My understanding is that 95% is at the low range of what would be deemed to be acceptable expanded uncertainty.

These concepts are discussed in somewhat more detail in my Curb Your Anticipation Series, at parts 7 and 8.

daniel m (a/k/a Rant) said...

Larry,

To add to your point, the two-sigma spread (two standard deviations) corresponds to a confidence level of 95 percent, if I recall. Meaning that if the testing was done correctly, there is a 95 percent probability that the true value lies within +/- two standard deviations of the actual, measured value.

It doesn't tell you what that true value is, however, just that you're in the ballpark.
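
For the record, the two-sigma rule of thumb checks out under a normality assumption (which is itself an assumption about the measurement errors); a quick sketch:

    import math

    def coverage(k):
        # fraction of a normal distribution lying within k standard deviations
        return math.erf(k / math.sqrt(2))

    print(f"+/- 1 sigma covers {coverage(1):.2%}")  # ~68.27%
    print(f"+/- 2 sigma covers {coverage(2):.2%}")  # ~95.45%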

Unknown said...

Tom, I share your joy over having a publication like Nature poke its finger in WADA's eye. What they write in the editorial, and what Berry writes, is very damning of the way things are done in WADAWorld. Excellent.

I still want to understand better what it means for Floyd's results, though.

Excellent long post Larry. You really do help me see more of what Berry was saying. And, wow, it is a fundamental challenge to WADAWorld.

And I do get the distinction between precision and trueness, and now I understand the point that LNDD's MOE is inexplicably related only to precision, not trueness. As you say, how can they get away with that? Does that get back to our fear that both arb panels put way too much weight on accreditation? Did COFRAC accreditation only affirm precision, not trueness? If so, how can that be?

syi

Larry said...

Rant, OK, I'm not a statistics guy, but I don't think you can use two standard deviations to get to a confidence level of 95%. Please take a look at NIST on Expanded Uncertainty. I think there's a distinction between describing the number of standard deviations that should be required to describe 95% of the dispersion of an EXISTING set of values, and the calculation of what would be required to predict with a level of 95% confidence where a FUTURE value should lie. You're a better man with statistics than I am, but I am told that you get to that 95% confidence level by taking the margin you calculate from a single standard deviation and multiplying it by two.
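
If I've read the NIST material right, the difference can be sketched like this. Normality, and LNDD's n = 30 runs with s = 0.40, are the assumptions here; corrections welcome:

    from scipy.stats import norm, t

    n, s = 30, 0.40
    z = norm.ppf(0.975)              # ~1.96: covers 95% of an existing normal population
    t_crit = t.ppf(0.975, df=n - 1)  # ~2.05: Student's t for 29 degrees of freedom

    # Interval describing the dispersion of the EXISTING 30 measurements:
    print(f"95% coverage of existing values:   +/- {z * s:.2f}")
    # Prediction interval for a single FUTURE measurement, which also carries
    # the uncertainty in the estimated mean (the sqrt(1 + 1/n) term):
    print(f"95% prediction for a future value: +/- {t_crit * s * (1 + 1/n) ** 0.5:.2f}")

Interestingly, on these numbers both come out close to LNDD's +/- 0.8, which, if I've done this right, just confirms that their figure captures precision and nothing else.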

Mike, if you read the CAS decision, you'll see that they relied a great deal more on the COFRAC accreditation than did the original AAA panel. And I think they were dead wrong to do so. Even if COFRAC had done a bang-up job, I think they had something like a day they could spend at LNDD, and they had to certify a few dozen procedures (in my spare time I plan to check to see how long they spent at LNDD and how many procedures got certified), so in all fairness how thorough a check could they have done? Plus, COFRAC does the ISO 17025 check, which of course is important, but how carefully could they possibly have checked the lab against the ISL? I think that COFRAC would check the LNDD once every 2 or 3 years, and as the LNDD is the only WADA lab in France, COFRAC obviously doesn't get much of an opportunity to work with the ISL.

The ultimate check of the LNDD would be performed by WADA, not COFRAC, and strangely enough, I don't remember the CAS decision even mentioning WADA accreditation.

My point is not only that CAS relied way too heavily on COFRAC accreditation, but that we're starting to get a picture that the CIR test was never properly validated by LNDD.

d-Bob, yes, we finally have a piece of the validation data used by LNDD. I think we have this data only because COFRAC accredited the LNDD's CIR test with a (probably erroneous) 20% margin of error, and the Landis team devoted a great deal of time and effort to use this factoid to prove that the lab's CIR test was never accredited. My guess is that USADA gave us this small piece of the validation puzzle to prove that LNDD's validation studies did truly point to a margin of error of +/- 0.8. I'm not sure that there's anything else in the new documents that relates to the LNDD's validation of its CIR tests; if you find anything, please let me know.

And d-Bob, when you talk about the Landis team not being able to get the lab's SOP, you are hitting one of my hot buttons, where I've been known to blow my legendary ;^) cool. I get the fact that the WADA arbitration process has been streamlined to try to avoid the lengthy "fishing expeditions" for information that we lawyers are justifiably famous for. The rule is set up so that the labs are supposed to provide a standard package of documents, and the athlete is supposed to rely on this standard package. It's not a horrible idea, as far as it goes: the athlete DOES get a certain amount of information without having to fight for it, and the lab is not required to photocopy every piece of paper it's ever generated in order to satisfy the natural (!) curiosity of us lawyers. To make things fairer, the arbitrators have the right to order the lab to turn over additional material. And in the history of man, no lawyer has ever been satisfied with the information he or she has received in discovery, and no lawyer has ever turned over documents to the other side that he or she thought the other side was entitled to. It's typical lawyer versus lawyer stuff, and every fair legal system tries to place fair limits on the discovery process.

What makes my blood boil is the combination of the following. First, the lab's SOP is NOT part of the standard WADA document package. Second, the system presumes the lab is right and puts the onus on the athlete to prove that the lab violated the ISL. Third, substantial portions of the ISL require the labs to adopt SOPs on such topics as chain of custody or GC peak identification, and to follow the SOPs they've adopted. OK then ... how can the athlete prove that the lab has departed from the ISL if the ISL standards are written into the SOPs and the athlete can't get hold of the SOPs?

This is the best (or worst) Catch-22 I've ever seen in my practice of law. The athlete is told that the only defense allowed under the rules is to prove that the lab departed from applicable ISL standards, and then the system allows (practically REQUIRES) the lab to hide the standards from the athlete. (deep breaths, Larry) It's one thing to place a limit on the athlete's ability to discover the FACTS necessary to prove that the lab violated the rules -- no one likes these kinds of limits, but limits are necessary (we can argue about where these limits are set, but we need limits). However (I can feel my blood pressure rising), THERE SHOULD BE NO LIMIT ON THE ATHLETE'S ACCESS TO THE RULES GOVERNING HIS CASE. If the ISL is dumb enough to REQUIRE labs to write the rules into the SOPs, then the athlete has to get the SOP to understand the rules.

What makes it worse (face flushing, heart pounding) is that the ADA HAS FULL ACCESS TO THESE RULES, and can dribble them out to us in a manner that best suits them.

wschart said...

I'm trying to wrap my mind around the question of confidence intervals, standard deviations, etc. as they apply here, and I am having some problems. I have had a course in statistics; the problem for me is that in that course, such things were discussed in terms of determining how likely the mean of a sample population represents the actual mean of the entire population. When pollsters report that 51% of the population support candidate X, ±5%, what is really meant is that there is 95% probability the population mean lies within 5% either way from 51%. So it could be as low as 46% or as high as 56%, and in truth it could even be higher or lower, although the probability is low for that.

What LNDD did to arrive at their ±0.8 figure is not quite the same situation. Now, I am not saying they did anything wrong, but I am not sure you can actually apply the same reasoning here. Their 30 tests are not really a sample population, but rather are the whole population, and the idea seems to be that they can project the confidence interval to future tests.

What is troubling me is that, according to Larry's figures, LNDD had a mean value of -1 for their tests when they should have had a value of 0. Unless Larry's figures are rounded off, the "true" value for LNDD's tests lies outside their confidence interval.
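
Spelling that worry out with the numbers at hand (the -1 mean is Larry's reading of LNDD 0456, and taking 0 as the true value is itself an assumption, as the next comment argues):

    mean, moe, true_value = -1.0, 0.8, 0.0  # LNDD's stated +/- 0.8 margin of error
    low, high = mean - moe, mean + moe
    print(f"interval: [{low}, {high}]  contains 0? {low <= true_value <= high}")
    # -> interval: [-1.8, -0.2]  contains 0? False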

Thomas A. Fine said...

The correct measurement is almost certainly not zero. This is actually one of my oldest arguments.

The theory is just that - all natural steroids come from the same source (us), so they should all have identical or very similar d13C values.

The reality, in two parts, is that 1. most chemical reactions show a slight preference or dislike for carbon-13, therefore the outputs of a chemical reaction are almost always different from the inputs; and 2. more pragmatically, the actual research that went into this test never found delta d13Cs that were zero.

And in fact, the research behind this test consistently showed that the testosterone metabolites were more depleted than the reference metabolites, making this test biased in favor of positive results.

Of course, the bigger problem is that the threshold value seems to have been pulled out of someone's ass, so any bias is kind of irrelevant.

tom

Larry said...

ws -

I hate to keep doing this, but I sacrificed something like two months worth of weekends (and considerable additional free time) to write that "Curb Your Anticipation" series PRECISELY so we'd have a basis for this kind of discussion. There is an idiot's discussion of the statistics in parts 7 and 8, as I'd decided that I didn't have the smarts to learn (let alone explain) how a WADA lab would establish its margin of error, but I DID cite to two outside sources that explain what's involved quite nicely. It all seems to come down to finding a standard deviation for the result you're interested in measuring and multiplying that standard deviation by two, to produce a margin of error that you can use with 95% confidence. If you have questions, this discussion is a good place to get answers (if I DO have to say so myself! ;^)

Again, if someone can explain the statistics better than I can, please step forward and do so, but I don't think the statistics here is the same statistics that one would use in your polling example. Your polling example is a problem involving sample size and probability. You're trying to determine a property of a large population by sampling (at random) a small fraction of the population -- the margin of error you're trying to determine there has to do with the confidence you can have that your sample is representative.

LNDD's determination of its margin of error is a related, but quite different, kind of calculation. LNDD is ultimately not trying to measure a property of a large population -- they're ultimately trying to determine a property of a single athlete sample. They don't have to worry about whether the sample is representative of anything. However, they DO have to worry about error. No measurement is perfect. If you set about measuring the same thing a number of times, statistically speaking, you're going to end up with different (hopefully, SLIGHTLY different) values each time. No method of measurement is perfectly precise, nor do we need methods of measurement to be perfectly precise. These methods just have to be good enough to allow us to do whatever we need them to do -- or in the parlance of the labs, the methods have to be "fit for purpose". I use this example a lot ... but if you're a major league baseball pitcher, and you throw a 95 MPH fast ball within 1% of where you aim it, you're going to make $20 million a year. If you're a NASA scientist and you shoot a rocket within 1% of where you aim it, you're going to be unemployed.

LNDD claims to have a +/- 0.8 margin of error, which I think the science people would say is pretty good. But the way they determined their margin of error is like an old story about a man walking through a forest, and he comes to a grove of trees with targets painted on them. Each target has an arrow shot right through the center. The guy continues walking, continues seeing more trees with arrows shot right through the center of targets. It's the most remarkable display of marksmanship the man has ever seen. At the edge of the forest, the man encounters a young boy practicing with bow and arrow, and the man confirms that the young boy is the marksman who's shot all the arrows he's seen in the forest. The man asks the boy how he's able to shoot with such amazing accuracy. "It's easy," said the boy. "First, I shoot the arrow. Second, I paint the target."

This old story is, in essence, all of the statistics you need to know to understand how LNDD determined its CIR margin of error of +/- 0.8.

Unknown said...

Larry,
In your response, the paragraph on what makes your blood boil is exactly the issue that's bothering me. To my eye, this effectively makes the athlete defenseless. Do you share this view?

I had mentioned that when we get audited by the FDA, they take your SOP and then check your documentation to see that you're following it. If I could say, "you're not getting the SOP," they would have nothing to evaluate me with (WOW, wouldn't THAT be cool?). This seems like the position the athlete is in. Where my analogy breaks down is that the FDA can then shut me down, because they have the big stick. Too bad Floyd and Suh couldn't do that!

DBrower said...

In fact, beyond the availability of SOPs, Mr. Young quite openly argued that it was not possible for an athlete to review anything that might cast doubt on the reliability of the lab. He ridiculed it as "accreditation by litigation", with the suggestion that there was something wrong with that premise. His position is that there is nothing an athlete can question.

Evidently this is the intent and concept of The Code, because the CAS Panel agreed that the appeal was without merit: You don't get to question anything substantive.

TBV

Larry said...

TBV, I understand why Mr. Young would not want to reopen the accreditation of a lab in a litigation involving one lab result. You don't want to draw general conclusions about the lab's general ability to operate effectively from a single case. dBob, I also understand why any legal system is going to place a limit on the ability of one party to a litigation to demand that the other party produce facts (though it's legitimate to debate where these limits should be set). TBV, I understand why you'd say that the athlete doesn't "get to question anything substantive", and dBob, I understand why you'd say that the athlete has nothing to evaluate, and while I agree with both of these statements to a certain extent, they are both debatable statements. There are decent arguments that can be made against these statements.

I am making a more narrow point, but I think it is an important point, and IMHO, I see no argument against this narrow point.

The point is that there's a clear distinction in justice between the facts of a case, and the rules governing the case. There are limits (both natural and practical) in our ability to uncover all of the facts. There are no corresponding limits on our ability to understand the rules. Any system of "justice" that imposes legal consequences based on the compliance of various parties with a system of rules, and then denies the parties (or worse, denies only one of the parties) access to these rules, is by definition an unjust system.

Let's use chain of custody as an example. The WADA rules say that Landis cannot attack his AAF based on a flawed chain of custody, unless Landis can prove that the lab's chain of custody violated the rules for chain of custody set forth in the ISL. It is one thing to say that the Landis team might not be able to get access to all of the FACTS regarding how the LNDD handled all of the Landis samples. We might object to having any of these facts denied to us, but we can acknowledge that there are certain limits on being able to draw a time line on where and when a sample was located. It is another thing to say that Landis is not permitted to know what RULES governed how the LNDD was supposed to handle chain of custody. I see no excuse, no justification, for Landis being denied access to these rules.

Lack of access to the rules renders meaningless (from a legal standpoint) any access or lack of access to the facts.

The ISL rules on chain of custody require every WADA lab to write up chain of custody procedures that comply with certain general standards set forth in the ISL, and then to comply with those procedures. The Landis team had access to the general standards in the ISL, but not to the procedures adopted by LNDD as required by the ISL. So in essence, this forced the Landis team to guess at what procedures were adopted by the LNDD, to argue that these LNDD procedures must have required LNDD to perform steps X, Y and Z in order to keep a chain of custody in compliance with the ISL standards. This is unfair and unjust in a number of obvious respects. First, it means that the lab does not have to comply with its actual chain of custody procedures, it only has to comply with what the lawyers might determine to be the absolute minimal procedures that a lab could have adopted consistently with the ISL standards. Second, it means that the lab can effectively violate certain general kinds of ISL standards, the kind of standards that might effectively be complied with by the lab's adopting a hundred different procedures (or combinations of procedures), since the athlete cannot possibly quantify and analyze all of the ways that the lab might have implemented these general standards. In essence, the only ISL standards binding on the lab are those that give the lab little or no discretion in the kinds of procedures that the lab must adopt. Third, the athlete never gets to consider whether the lab's procedures ACTUALLY COMPLY with the ISL standards.

The WADA rules place the athlete in the absurd position of having to guess at the rules applicable to the athlete's defense. The athlete must argue, if the rules required A, then the lab didn't do A, and if the rules required B, then the lab didn't do B, and if the rules required C, then the lab didn't do C either. To which the prosecuting ADA can say, "sorry, you guessed wrong, the lab wasn't actually required to do A, or B, or C." And if the athlete asks, what WAS the lab required to do, the ADA can reply, "sorry, we don't have to tell you." And of course, if the lab's procedures actually DID require it to do B, we'll never know, because the ADA wasn't required to tell us that, either.

In the past, I've called this a Catch-22. I don't know any other way to describe it. It's as good a catch as there is to tell an athlete that he can defend himself by proving that the lab violated its rules, but that the athlete is not allowed to know the rules that the lab might have violated that would give the athlete a defense.

Russ said...

Larry,
You may have missed my point, or my reading of Berry's point. So to use two quotes from Berry to achieve the required specificity :-) :

"The method used to establish the criterion for discriminating one group from another has not been published, and tests have not been performed to establish sensitivity and specificity. Without further validation in independent experiments, testing is subject to extreme biases. The LNDD lab disagrees with my interpretation."

"If he never doped and assuming a specificity of 95%, the probability of all 8 samples being labelled 'negative' is the eighth power of 0.95, or 0.66. Therefore, Landis's false-positive rate for the race as a whole would be about 34%."

Now, Berry may have used 95% sourced from WADA; he did not say. He did say there was no basis, and he also prefaced his calculation based on 95% with ASSUMING.

So I added my two cents' worth, calling for a derating of the 95% based on anomalies of testing and machine operation that are clearly not up to acceptable standards.

Thanks,

Larry said...

Russ, OK, understood.

Unknown said...

Larry,
Sorry if I seem a bit daft, but I want to understand what you are calling rules, and what you are calling facts. To me, the rule governing what we're talking about is:

The Laboratory shall develop, validate, and document in-house methods for compounds present on the Prohibited List and for related substances. The methods shall be selected and validated so they are fit for the purpose.

The method is captured in the SOP, and it is also the procedure (or, at least, it should be). So, I don't view the SOP as a rule, but more like a fact. I say that because it's inextricably linked to the test results (which are undoubtedly facts), and there's no way to determine if the test results are valid without knowing if they followed the procedure. So, not having the SOP makes the test results meaningless (I can certainly tell you that this is the FDA's viewpoint, and most lab directors working in regulated industries in the U.S. would say the same thing). So, my question is: do you consider the SOP to be a rule or a fact?

Sorry, if it seems that I'm focusing on minutia or semantics, but I want to be clear on the distinctions you're drawing, so I understand your position better.

Regards