Tuesday, October 16, 2007

Our Appeal Brief

We're not in the employ of Team Landis, but there are things we think ought to be in the appeal brief based on our understanding of the situation. Since it is clear we believe justice has not yet been done, we offer some suggestions to ensure that the science of this case is, for the first time, actually addressed and, again for the first time, decided fairly.

First, we need to be clear that the case rests entirely on the reliability of the IRMS tests, and we should not get caught up in most of the other effluvia. While the Award is based on the S17 tests only, we cannot, as the majority of the CAS-AAA Panel did, ignore the alternate B samples. In our opinion, they are as unreliable as the S17 test "results".

Second, logically, legally, and scientifically, the Majority's refusal to rule that there was an ISL violation on the peak identification in the IRMS results was and is incorrect, and it must be corrected for justice to occur. Once an ISL violation is established, the burden flips, and USADA would then be required to prove its case in some other way, to the comfortable satisfaction of the Panel, bearing in mind the seriousness of the allegation made. While that latter standard may not have been a relevant factor when analytic findings form the foundation of the accusation, in multiple CAS cases addressing non-analytic accusations, the seriousness of the allegation moves the burden of proof to a higher level than that which supports an adverse analytical finding.

Third, at least one clear, coherent theory about what happened, explaining why all of the results were incorrectly reported as positives, should be presented. We would like to see that so that some sort of justice and closure can be reached, and not just the issuing of a result.

We'll break these down further below.


It's the IRMS, stupid.

Initially, there were two basic scientific arguments disputed in this case. Ultimately, though, the case has simply been about the IRMS tests. As expected, the T/E tests were deemed unreliable over USADA's objection and need not be further addressed. That point, rather silently and without fanfare, has been conceded by USADA. Thus, everything that is not specifically about refuting all the IRMS tests is superfluous and will only confuse the issue. While it may have been useful to present many issues at the early stages, the key issues are now identified, and those which have no traction should be put aside.

The Majority award in the initial arbitration did not consider the alternate B samples, but we should not assume that the review via CAS will similarly ignore them. It is important that any arguments made against the S17 tests also apply to the alternate B results. It is important to demonstrate that errors shown in the S17 test methodology were highly likely to have been repeated in the other tests, as well.

There was an ISL Violation on peak identification.

We strongly believe that there was an ISL violation in the peak identification used in the S17 and other samples, and that the logic used by the Majority in its award is faulty. Many of the reasons are worked out in Seven Paragraphs, so we'll be brief here.

  1. The TD2003IDCR does apply.

  2. LNDD could and should have used a methodology that met TD2003IDCR, including, alternately, the use of an appropriate cal-mix in the IRMS; the use of more similar chromatographic conditions between the GCMS and the IRMS; and the use of a trailing anchor in the cal mixes to allow use of Kovats retention indices.

  3. LNDD's failure to use a conforming methodology does not excuse it from TD2003IDCR's applicability.

  4. LNDD did not identify in its SOP an alternate identification methodology, such as the "visual gestalt" method offered by Dr. Brenna. Saying that was what they did after the fact does not make it a documented methodology; using an undocumented methodology would itself be an ISL violation.

  5. LNDD did not offer arguments for why the looser criteria suggested in TD2003IDCR as acceptable in some circumstances should be applied.

  6. The Seven Paragraphs contradict each other, and so cannot present a valid argument.

  7. Brenna's testimony contradicts itself on identification, and should be discounted.

  8. A visual standard is no standard, demonstrated by examples from the chromatograms in the LDP. It is based on assumptions that are not true regarding peak ordering and proportionality of heights. These assumptions are not valid because the chromatographic conditions changed too much across the machines, including sensor type, pressure, and temperature.

A single metabolite positivity standard is not valid unless it is adequately supported.

If a WADA-approved laboratory (LNDD) is going to assert an adverse analytical finding upon a single-metabolite positivity standard, as is presumably permitted under the WADA Code, then logic, law, and fundamental fairness require that it run a scientifically sound validation study, because the single-metabolite finding is not otherwise fortified by one or more additional metabolite positives, as is preferred in virtually every other WADA-accredited laboratory. LNDD's "validation study" is a misnomer. It was inadequate to support an adverse finding through single-metabolite positivity criteria.

We at TBV have not examined the study done by LNDD in detail. Our understanding is that it shows that the test as performed by LNDD can detect doping in subjects known to have doped, but does not include sufficient control subjects to show that it does not also find non-doped subjects to be dopers. This is the reason other labs use multiple-metabolite standards: their validation studies show they are necessary.

We also accept that this argument may fall on deaf ears at CAS, since labs can do no wrong, what with their being accredited and therefore deemed trustworthy. It nevertheless needs to be made, so that any reasoning for accepting a single-metabolite standard can be on record and may be examined. It will be more evidence of the WADA Code accepting substandard science as truth.

There are explanations that should disrupt any comfortable satisfaction that the Panel may otherwise have that Landis doped.

These are bullet points only, and each needs clear illustration both visually and numerically.

  1. LNDD's chemistry for separation isn't good enough for the job it is asked to do. This is the main cause of the "poor chromatography" often mentioned. What this really means is that things aren't well separated, and there is far more interference and unaccounted-for noise in the chromatograms than trustworthy measurements allow. This is visually demonstrated by comparing chromatograms from good chemistry, as done at UCLA and Montreal, with those from LNDD. Presenting the chemical steps would be useful, explaining the matter at that level as well, assuming it's possible to get the chemistry. (That the chemistry details may not be available is a matter to complain about in due course.)

  2. As a result, there are plentiful unknown impurities in all the fractions, which are visually obvious.

  3. The GCMS only identifies the presence of the known in a peak, but does not indicate the absence of an unknown; thus the GCMS does not indicate anything about the inadequate separation in the chemistry.

  4. When these poorly separated samples are run through the IRMS, we do not know the purity of the compounds whose CIRs we are measuring. This might have been detected in the MS data that should have been collected and made available.

  5. The S17 MS data from the IRMS was destroyed before it could be looked at, and there is no evidence it ever was evaluated.

  6. The MS data from the IRMS for the other B samples has not been made available, and we do not know if it exists either.

  7. The CIR of the peaks in question is highly dependent on the purity of the contents of the sample in the peak. An example should show clearly, both mathematically and visually, the effect of an amount of an impurity at a certain value on the value of the assumed-to-be-pure peak. This should be tied to specific peaks in the S17 Landis F3's.

  8. The effect of non-linearity at the low end of the measurement should be shown with a similar example, both visually and mathematically, and tied to specific peaks in the S17 Landis F3's.

  9. The effect of a sloping baseline should be shown visually and mathematically, and tied to the S17 Landis F3's.

  10. The effect of manual marking of integration boundaries and background levels should be shown visually, mathematically, and tied to the Landis F3's.

  11. The results obtained with "automatic" integration during the reprocessing must be shown to be due to impurities, non-linearity, and sloping baselines, and not all from manual processing.

  12. Amory's unrefuted testimony about 5aA and 5aB supports the belief that something is amiss in the measurements.
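
Point 7's impurity arithmetic can be sketched with a simple linear isotope mass balance. This is an illustrative toy calculation; the delta values and impurity fraction below are invented for the example, not taken from the LDP:

```python
# Illustrative linear isotope mass balance: how a co-eluting impurity
# shifts the measured carbon isotope ratio (delta-13C, in per mil) of a
# peak assumed to be pure. All numbers are invented for illustration.

def mixed_delta(delta_analyte, delta_impurity, impurity_fraction):
    """Delta value measured for a peak whose carbon is a mix of analyte
    and impurity (carbon-weighted linear mixing approximation)."""
    f = impurity_fraction
    return (1 - f) * delta_analyte + f * delta_impurity

# Suppose the metabolite's true delta is -26.0 (no doping indicated) and a
# hypothetical unidentified co-elute at -34.0 contributes 20% of the carbon:
measured = mixed_delta(-26.0, -34.0, 0.20)
print(round(measured, 2))  # -27.6: the peak now looks more depleted than it is
```

Even a modest, invisible contribution from a more depleted co-elute moves the apparent value; that is why peak purity matters so much to the reported numbers.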

The preceding points illustrate the main argument as we understand it. There is a corroborating argument which may be made regarding linearity.

  1. It was suggested by Davis that the linearity of the machines drifted all over the place, and that this accounts for much of the inconsistency in what ought to be consistent results.

  2. The results that should be consistent and are not are mainly the Blanks, where values seem to be all over the place. (You can't argue with the Landis samples).

  3. It is further suggested that since the linearity changes over time, depending on timing, results may or may not be consistent for runs; the timing would depend on the period over which the linearity of the instrument oscillates from high to low and back again.

  4. It would be useful if it were possible to analyze the available data and offer estimates of the likely periods of non-linearity, mapped to known acquisition times and results of Blanks, then project the rates onto the Landis F3's.

  5. It would also be useful to consider what might have been run during the "gaps" that was not recorded, and why those runs might have been done.
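
The drift argument above can be illustrated with a toy simulation. The sinusoidal form, period, and amplitude here are pure assumptions chosen for illustration, not measured instrument behavior:

```python
# Hypothetical sketch of the linearity-drift argument: if the instrument's
# low-amplitude bias oscillates slowly over time, identical blanks acquired
# at different times will scatter even though the true value is constant.
# The period, amplitude, and acquisition times below are invented.
import math

def drift_bias(t_hours, period_hours=12.0, amplitude_permil=0.8):
    """Assumed sinusoidal low-end linearity bias, in per mil."""
    return amplitude_permil * math.sin(2 * math.pi * t_hours / period_hours)

true_blank_delta = -27.0
acquisition_times = [0.0, 3.0, 6.0, 9.0]  # hours into a hypothetical run
measured = [round(true_blank_delta + drift_bias(t), 2) for t in acquisition_times]
print(measured)  # [-27.0, -26.2, -27.0, -27.8]
```

Under this assumption, whether two runs agree depends on where in the drift cycle each one happened to fall, which is exactly the pattern of inconsistent Blanks described above.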

It is worth emphasizing there is no need to believe or present an explicit conspiracy; the causes of unreliable results include:

  • Failure to obtain adequate chemical separation;

  • Failure to properly identify peaks by using inadequate methods;

  • Failure to ensure peaks are not contaminated with co-elutes;

  • Accepting momentary linearity data from an unstable system, failing to understand the underlying problem;

  • Unconscious biases affecting manual operations.

Unfortunately, and to the detriment of the athlete and to the fairness of the proceedings, data identified as potentially exculpatory has consistently not been provided, or has been intentionally destroyed. This includes (but is not limited to) SOPs, chemistry, and Mass Spec data.

It is impossible to conclude, to a degree of comfortable satisfaction, that the tests were carried out in a reliable way that supports a finding that a doping offense occurred.

There is no other reliable evidence, based in science, from which it can be concluded, given the seriousness of the allegation, to comfortable satisfaction, that a doping offense occurred.


bostonlondontokyo said...

TBV - Very thorough run-down of the inconsistencies in the case. Allow me to play devil's advocate for a moment - it seems that the majority of the points that you raise are about 1) violations of procedure by LNDD, 2) destruction or waste of evidence that could have been re-tested, with different results, 3) lack of arguments about criteria for positive readings, 4) contradictions in the majority arbitrators' findings, 5) discounting of witness/source conclusions, based on contradiction, 6) visual standards (eye-balling it) are not reliable enough, 7) results from a lab could be biased because of the lack of security (or double-blindedness) in testing.

While these are all points that would probably give anyone pause, or at least wonder at how you can judge a case when the 'evidence' is confusing, at best, in terms of simple base issues (like how to test a sample and read the results)... Still, I wonder if the arguments all have a similar flavour. Questioning a process, a result, a method, a bias - these, while convincing in a non-legal setting, don't really hold up much except to say that the evidence shouldn't be admissible. Unfortunately, it's been pretty well established that this standard of American law is not a standard in the arbitration process.

What stands out to me is that there has never been any evidence submitted to explain that Landis did not dope. Yes, it's a high threshold, but wouldn't you agree that unless there is new and compelling evidence that can at least suggest that Landis did not dope, we're still looking at the issue as one of mis-interpretation, therefore, 'throw the whole thing out'? - I do not know what the CAS's motives are, or their agenda, or how they even review evidence. But I'm thinking that the arguments you've posed, though very interesting and certainly thoughtful, still skirt the issue of Landis' actual innocence or guilt. Do you think CAS would be swayed by the same data presented anew? Just because a lab has done shoddy work does not prove that its results are wrong.

I am being devil's advocate, but I also have some doubts as to how far this line of defence can be stretched. Do you think that there is existing evidence (which doesn't relate strictly to measurement of levels) that could be presented that would swing CAS, or is this going to always remain a science case?
(PS - thanks for the blog - I love coming here...)

Mike Solberg said...

BLT, TbV will probably respond also, but I'll just say two things:

1) The way the WADA Code works the issue of ISL violations is really the ONLY way to defend yourself. You're not able to challenge the underlying science, and the Code doesn't have room for outside exculpatory evidence. ISL violations are the only game in town.

2) In a sense, there is evidence that he did not dope. That is the implication of Amory's testimony about 5aA and 5aB "traveling together." Also, the Luteinizing Hormone levels do not indicate doping (as even Catlin admitted), and I am pretty sure I read there has been more evidence published recently that LH is a good marker for doping.

The only argument I would really like to see teased out that Floyd's people haven't touched is the possibility of contamination from Floyd's cortisone. Could its metabolites elute at the same time as 5aA? Would its CIR value be sufficiently negative that it could skew the results even in small amounts? Could the metabolites be in his urine on the relevant days? Etc.

The problem with that, though, is that it provides an explanation for the numbers if the numbers are right. Floyd's whole argument is that the numbers are wrong. I assume that is why he hasn't pursued this publicly.


Unknown said...

I agree with BLT. All of your arguments seem to be technical arguments and skirt the issue of whether Landis is innocent or guilty. Also, I do think the fact that four of the other B samples showed evidence of testosterone is telling. The argument that they messed up on all of these tests, when they knew they were under the microscope, is weak. Landis' expert was there during the procedure... no? I cannot remember, as I know they were originally allowed to be there, but might have been kicked out. If the Landis expert was there, is there any testimony from him about the procedure on the additional B samples?

Also, with the money spent, although it might not have been admissible at the hearing, for PR reasons you would think Landis would have hired the top CIA/FBI/etc. lie detector administrators to administer a test and released the results.

Do I think the lab was shoddy? Yes. Do I think that there might even be an argument the case should be dismissed because the lab is so bad and WADA failed to follow its own rules... yes. Do I think Landis is guilty.... YES.

DBrower said...

My opinion is that there are unknown substances in the peaks measured to obtain the reported results that have skewed the values. They might be cortisol metabolites, or might not.

Amory's argument strongly suggests to me that the results contain significant identification or measurement errors. Therefore, I don't see how I can conclude the reported numbers on any of the samples are the result of doping.

As I said in answer to MOI in another comment stream, I'd believe Landis' guilt was proven if a number of things had been done and presented as evidence. These things do not appear to have been done or evaluated, and in some cases have been destroyed, precluding definitive determination.

So I do not yet see how a conclusion he is guilty of what he is charged is sustained by any number of standards of proof.

Above, there is an argument, "it provides an explanation for the numbers if the numbers are right. Floyd's whole argument is that the numbers are wrong."

Not really; Landis is arguing the positive finding is incorrect, because he believes he didn't do what is charged, and doesn't believe he was spiked. He must legally show the tests are not likely to be correct based on ISL violations.

There is no way to "prove he didn't dope."

He has no access to laboratories that could duplicate LNDD's "methodology" to see what may have gone wrong -- LNDD hasn't given him access to certain details of the methodology. It is therefore not possible for him to provide hard proof of a cortisol theory.


Unknown said...

"So I do not yet see how a conclusion he is guilty of what he is charged is sustained by any number of standards of proof."

The above is a legal argument. Not a declaration that you think he did not dope. Any time you use "standard of proof" you are talking legal argument.

Floyd is in a bad position. You cannot prove a negative (so I have heard... e.g., you cannot prove god does not exist).

I guess I am not understanding why people are thinking the testing of the additional B's was bad. Is it more than, "LNDD messed up the other tests and thus we cannot trust these"? Are people stating those do not show testosterone? Are people stating the lab messed up the testing? Are people stating there are unknown substances in those also and that caused the higher values?

DBrower said...

The additional B sample testing was a waste of time. If the chemistry/procedure of the S17 test was correct, then it stands alone. If the chemistry/procedure of the other B tests is the same as the S17 test, they are either good or flawed in the same ways.

All the tests show testosterone -- it's naturally occurring. The dispute all along is whether they correctly identify illegally administered synthetic (non-human) testosterone as being present.

The lab problems are that (a) we don't know what the peaks were [the identification problem], (b) we don't know everything they contain [the co-elution problem, resolvable with mass-spec data], and (c) the measurements weren't done correctly [the manual integration and baseline correction issues, also related to near co-elutes].

This is not a simple yes/no test.

To say that the tests showed synthetic T is to assume a conclusion for which there is contradictory argument.

Let me offer an imperfect analogy.

Sugar, corn starch, and salt are all white powders, each with a certain density A, B, and C in common form. You are given a cup of white powder, determine its mass, and compute a density, D. If the powder were purely salt, corn starch, or sugar, a match of D to A, B, or C probably reveals which of those substances the powder is. But if you don't know what the powder contains, it's hard to say that because D equals B, it must be corn starch.

Similarly, if you assume a sample is only one thing, but it also contains something else, then computing its weight based on volume and known density would be incorrect.

In the IRMS test, we have peaks that we have not positively identified as being the claimed substances. If we are wrong, no measurements we take are correct at all. This is the identification problem: we might be looking at a cup of flour instead of sugar, corn starch, or salt.

If we are right, but assume that there is only what we want in the peak, and take measurements, we can still get numbers that are incorrect.

Finally, imagine a sample that contains multiple substances, but isn't well-mixed. The cup has a bottom layer of one thing and a top layer of something else. You pour some of it into a 1/2 cup container, and weigh it. What can you conclude about the total density of the original sample? This is the linearity problem, where selectively looking at wrong parts of the peak can give you an incorrect measurement. Especially if you assume the peak is pure, but really contains multiple substances.
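
The powder analogy can be put into a few lines of arithmetic. This is a toy sketch: the density values are invented round numbers for illustration, not real material properties.

```python
# Toy numbers for the white-powder analogy; densities are invented
# for illustration, not real material properties.
DENSITIES = {"salt": 1.2, "corn starch": 0.6, "sugar": 0.9}  # g/mL, hypothetical

def apparent_density(mass_g, volume_ml):
    """Density you would compute for a cup of unknown powder."""
    return mass_g / volume_ml

# A 240 mL cup weighing 216 g gives D = 0.9, an exact match for "sugar"...
print(apparent_density(216, 240))  # 0.9

# ...but a 50/50 (by volume) layered mix of salt and corn starch gives the
# same apparent density, so the match does not prove the cup is pure sugar.
mix = 0.5 * DENSITIES["salt"] + 0.5 * DENSITIES["corn starch"]
print(round(mix, 2))  # 0.9
```

A matching number is consistent with the pure substance, but it is equally consistent with a mixture; only independent knowledge of purity resolves which it is.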

The totality of this is that a casual glance at the data appears to support the LNDD conclusions, but careful looking shows a great many places where errors that can affect the result may have taken place, and nothing was done to rule out those possibilities of error. The ISL required them to do some of these steps -- identifying the peaks correctly, for one -- and they did none of them completely correctly.

How then can we trust their conclusion that the powder was corn starch (exogenous T) and not flour (cortisol metabolites)? They didn't do the things that would have resolved the question.


DBrower said...

To answer the other question, the only reason to test the B's was to avoid a "Landaluze" problem, where the same technician worked on the A and B samples and the results were thrown out because of that. USADA was afraid of that possibility, and rather than just investigate and ask, they chose to run the additional samples to provide "corroborative evidence" and to cause everybody an expensive trip to the lab.

Since there was no Landaluze problem, and the errors being alleged are systemic, there is no evidentiary value to the other tests, and they need not have been run, since they prove nothing one way or another.


C-Fiddy said...

Lonnie, Here's an analogy for you.
If you were driving 39 in a 40 mph zone and a police officer uses a radar gun to gauge your speed, there are many rules and procedures he/she must follow for that reading to be considered reliable enough to be used in a conviction. (Now I'm guessing here, but) they probably can't use a malfunctioning unit, they probably have to be certified that they know how to use it properly, and they have to offer to show you the reading they are using. All these rules and procedures are there to protect you from an officer with a bad attitude just saying, "I thought you were doing 41, so here's your ticket, you go tell your story to the judge". You could say over and over, "I was only doing 39", and it just becomes your word against theirs, and they can say you didn't give any evidence for your side of the case.
Now if you can show the officer didn't know how to use the equipment, or you can show that it was malfunctioning, or that the officer disregarded his training and procedure and just "eye-balled" the result, you may have evidence that would show why you shouldn't be convicted of something you know yourself to be untrue. Whether you prove it to be untrue or not is no longer the issue, and there will be a bunch of people who say, "I always see people speeding on that road, so I bet he's guilty. He's just using a technicality to get off."
I may be way off, but this is a simplified version of why I think Floyd decided to proceed as he has. If he knows he didn't dope, and believes he can show that they failed to use the proper tests and procedures, this is probably the only way he has to retain what is rightfully his. Now if he is guilty, and he knows he doped, as his mom said, that's between him and God, and like you say, that's a tough one too.

wschart said...

The speeding analogy is rather apt, as if you were to receive a speeding ticket there is really no way to prove you weren't speeding; the only real way you have to clear yourself is to cast enough doubt on the radar. Similarly, there is no way for Landis to prove he didn't dope. If there were some remaining urine from the S17 samples, it could be run through tests and could come up negative, but then you would be faced with the decision of which set of tests to accept. This would probably come back to a question of which lab was using the most reliable methods, which boils down to essentially the same argument as Landis &c made: the LNDD results cannot be relied on because of the various errors as testified to.

Note that this has little if anything to do with the idea that the ADA process is heavily stacked against the athlete. If the Landis case were to be heard in a US criminal court, with Landis being afforded all the rights that entails for a defendant, he still couldn't really prove he didn't dope, only that the evidence against him is not sufficient to support a guilty verdict.

As far as taking a "lie detector" test: there is no such thing as a lie detector; it's a polygraph, and what it really does is detect what are presumed to be signs of stress. The assumption is that lying produces stress; however, there are people who can "beat" a polygraph to some degree, and I would guess that someone under stress for reasons other than lying could produce a false positive result. To a large extent it depends on the skill of the operator, both in devising a suitable questioning strategy and in interpreting the results. Polygraph tests are commonly, if not universally, unacceptable in court for reasons like this. Hence, polygraph results could not have been used in the May hearing, nor can they be used at CAS. At best, they might have some PR value. But considering that there are those who think Landis is a bald-faced liar, they probably would think he was just able to beat the machine, while many others would not care. However, if the polygraph results were deemed even suspicious, the PR would be even worse than now. In short, Landis would have nothing to gain but a lot to lose.

Unknown said...

We have to forget about the B sample testing because there isn't an A sample to compare the results to.

The testing of the B samples was a smoking gun used by USADA. They actually played no part in the decision because I believe the arbs decided they couldn't be used to find a non-negative result. Therefore they don't have any bearing on Floyd's case.

Anyone still talking about the B samples -- how come there's barely been a peep out of USADA or WADA about those results?

DBrower said...

One final addition to the illuminating speeding analogy. Instead of 39, consider that you knew you were going 35 because you had your cruise control on and set it carefully. You know you weren't anywhere near the cited speed, so there is no question in your mind of this having been a momentary lead foot near the threshold.


Unknown said...

To all:

First, thanks for the explanation of the possible errors in the samples and how they can bring about bad results. I do see what you are stating. I may even agree with you. But this isn't about that. If I remember correctly, Amory stated that the deviations from the procedure affected the results while the other experts stated they did not. So, really, this case just comes down to a battle of experts and who you believe. Some state it showed exogenous testosterone, some say it didn't. So, it comes down to which experts you believe. In the case at hand, two of the three arbitrators chose to believe the experts that stated it showed exogenous testosterone over the one(s) that stated it did not. That is really the entire case. So, all of the self-proclaimed experts can argue until they are blue in the face that this was done wrong, we don't know what it showed, etc., but there are experts out there that will disagree and state it was a positive and definitively showed the presence of exogenous testosterone.

We cannot forget the additional B samples if we are trying to believe Landis. There is a difference between him winning his case and him not being a doper. He could easily be a doper and win his case for many reasons. So, to me the additional B samples are important - not so much as a part of the actual case of USADA against him, but towards the idea that he isn't a doper.

I understand what everybody is getting at with the speeding analogy, but more apt would be DNA testing as used in criminal cases, as it has very similar complications when evidence is found at the scene. It always comes down to the experts and which the trier of fact decides to believe.

As far as the polygraph (sorry for not using the correct term, I didn't know we had to be so precise in our language) having benefit, I think to the public at large it would. The VeloNews survey has 53 percent of people thinking the decision was correct. That is 53 percent of people who are following the case pretty closely. The sports public at large is laughing, because they have always thought he was guilty and this is just a formality. I bet, worldwide, the perception is that he is guilty by an overwhelming majority. So, Landis needs to get information out there that would make his argument not seem so "technical." A polygraph, administered by the five top people on five different occasions, would hold great weight with the public in general versus the science arguments he is giving in the case.

Finally, people need to stop making the analogy to the US court system. This system is about catching cheats. The cheats are far more advanced than the testers. So, the rules are always going to be in favor of WADA and the national agencies, as it should be. After all, Marion Jones took 160 drug tests while she was doping and every one came back negative.

Landis has a lot more to overcome than the S17 testing. He has to overcome: 1) the mindset that cyclists are dopers; 2) his team quickly dismissing him and showing no support; 3) the S17 tests and the experts at the lab and other experts stating they showed exogenous testosterone; 4) the additional B samples showing exogenous testosterone; 5) his own statements after testing positive (although not his fault, due to WADA not following their own rules), along with the other actions of his camp (although I give little credence to LeMond); 6) the fact that he bonked on stage 16 and killed on stage 17 (most casual sports fans even that day suspected doping, although, being an amateur at using power, I know his stage 17 power was nothing out of the ordinary for him).

Landis must overcome all of that to win in the court of the public. If he continues to attack the S17 test and even if he wins, he will always be perceived as a doper that got off on a technicality to a majority of people following the case and an overwhelming majority of casual sports fans world wide.

N.B.O.L. said...

One more for the speeding analogy. Suppose the cop also gave a speeding ticket to a parked car. That is basically what LNDD did, because their analysis showed one of the urine blanks to be positive in the same single 5a metabolite that they used to convict Floyd.

ct said...

Is there any place for circumstantial evidence in these hearings? It's been pointed out in this thread that there is no way to definitively prove doping at this point, since evidence has been destroyed and some steps in testing were either improperly documented, or not documented at all. If the scientific evidence in this case is inconclusive, shouldn’t circumstantial evidence be considered? How can a ruling be made when the scientific evidence cannot be relied upon? At that point, wouldn't CAS need to consider other factors in deciding whether or not to give Floyd the benefit of the doubt?
Because that's what this comes down to now... the science is in doubt, so a ruling can only be made on what is most probable, right?

I've never been clear on how many times Floyd was tested during the TDF. I think they always test the four guys in jerseys and the stage winner, correct?
So I assume Floyd was tested many times during the race (as he knew he would be). Is every sample that is collected actually tested? If so, there should be several negative test results on record. If that's the case, the "positive" test result should be considered an anomaly and therefore suspect. If the other tests had shown evidence of doping, it would have been well publicized (given the loose lips at LNDD).

Another thing I've never understood: what kind of doping practice might give one positive result among many negative test results? If Floyd is expected to explain why his test was positive, why aren't the accusers expected to put forth a plausible scenario of what they think Floyd did? Is it because there is no likely scenario where an athlete could be doping and only have one test out of many come back positive? (Seriously, I’m asking)

I've never been able to consider Floyd guilty because of the lack of other positive tests (from stages other than 17) and because of the lack of a plausible scenario of what Floyd could have done to cause only one positive test among many negative tests.

And what about the data from the power meter Floyd used in training and during the race (as documented in his book)? If that data was collected and stored, is there any way to use that data to help determine the likelihood of whether or not Floyd was doping?

Maybe my thinking is illogical given the framework of WADA rules. There's obviously a disparity between the WADA "justice" system and the American legal system so maybe reasonable doubt is irrelevant here. If CAS can only base their decision on the test results as documented, Floyd is probably wasting his time and effort. There’s gray area here, and I know people don't have a lot of patience for that. People want to know "did he?" or "didn't he?" and believing the lab result gives them the easy answer they're looking for. Hopefully CAS will be more willing to examine the gray areas instead of looking for any shred of data to latch on to as evidence that the lab was right so they don't have to acknowledge that science can be wrong if the work behind it is invalid.

Unknown said...

This has been a good thread. The best way to have handled this would have been to send the samples to another lab, which, I was told last year when I made the suggestion, was against WADA rules. Unfortunately, they then decided to test the additional B samples, which I believe is also against WADA rules.

1. For an AAF to be a violation, both samples have to show positive. So to have A samples showing negative and B samples positive supports Floyd's case – flexible rules?

2. To have the information leaked to the press for a test that is supposed to be blind is also not good – monetary consideration?

3. To have data mysteriously erased from the testing, data which should support the testing's accuracy – cover-up?

4. To have Floyd's observer barred from the lab during testing – something to hide?

5. To have a blank sample show up positive – sloppy work?

6. For the samples to be mislabeled – sloppy work?

7. For the lab not to use a known quantity of what they were looking for in the sample to calibrate the equipment – sloppy work?

8. To manually run the tests on a machine that, if used correctly, will run the tests automatically – keep doing it until we get it right?

9. For other labs to have a different threshold than LNDD for a positive result – let's find what works for us?

I could go on, at which point does this get ridiculous?

This is just a small fraction of the things that cause me to have misgivings about the lab work. I believe that most of this has been stipulated to by the prosecution (not number 2; no one has looked into that one). I know that most of those who think he is guilty could not come up with a similar list. I believe that WADA had every opportunity to seal the deal and chose to go the wrong way every time. It would have been simple for them to send the samples to another lab to verify the results. It would have been cheaper and faster.

Larry said...


Isn't this analysis a lot simpler than you're making it out to be? Why are we even discussing RRTs? Unless I'm mistaken, RRTs are not mentioned in TD2003IDCR.

It appears to me that TD2003IDCR requires an RT comparison of a GC/IRMS of the athlete's sample to a contemporaneous GC/IRMS of a reference sample. (Presumably you'd run the two samples on the same machine; I think that's a fair reading of the term "contemporaneous," and it wouldn't make any sense to compare RT results from different machines.) If LNDD did not make this comparison, then isn't that an ISL violation, pure and simple? Why say anything more?

Mike Solberg said...

Larry, not that this is a satisfying answer, but you have to remember the introductory paragraph of TD2003IDCR:
"The appropriate analytical characteristics must be documented for a particular compound. The laboratory must establish criteria for identification of a compound. Examples of acceptable criteria are..."

So, shockingly in my view, TD2003IDCR seems to be only advisory. What is given is only presented as "examples."

Wait...I wonder...TbV probably understands this better than any of us, but what if we construe the document differently? It says "Examples of acceptable criteria are...," and I have until now thought that the word "examples" applied to the three main headings of the document: "Chromatographic separation," "Mass Spectrometric Detection," and "Tandem Mass Spectrometric Detection."

In that reading the document is saying "Here, for example, are three possible ways to do this, but there are other ways..."

But what if the "Examples of acceptable criteria are..." line applies to the details within each heading?

In that case, it would be saying "Obviously, there are three ways to do this. If you do it with chromatographic separation, then examples of acceptable criteria are... If you do it with mass spectrometric detection, then examples of acceptable criteria are..."

If that is how this document should be read (and now that I think about it, that reading seems to fit the nature of the case better), then there is no doubt about an ISL violation, because LNDD did not use any of those three methods for their "identification criteria."


Larry said...

Mike, good point. We have a typical example of the lousy drafting of the WADA rules.

I don't think that TD2003IDCR is intended to be advisory. I think it is intended to set up some standards for identification of compounds, while at the same time allowing for the labs to develop new and better techniques. Naturally, WADA does not want its rules to impede advances in the science.

However, if a lab chooses to employ one of the identification methods specified in TD2003IDCR, then the lab would have to meet the minimum standards for that method set forth in TD2003IDCR. So if the LNDD chose to identify compounds utilizing chromatographic separation, then the LNDD would have to measure RTs within the specifications set forth in TD2003IDCR.

You could argue that the TD is broad enough to allow a lab like LNDD to develop alternate specifications for chromatographic separation. So, for example, LNDD might have developed its own criteria utilizing RRTs. We could then argue whether those criteria are acceptable, based on the science and on the parallel examples of acceptable criteria in the TD. But I don't think the TD can be interpreted to allow a lab to utilize a criterion specified in the TD in a manner less rigorous than the TD specifies. For example, I don't think LNDD would be permitted to use RTs but change the percentage used for comparison from 1% to 2%.

In any event, I'm not aware of any alternate criteria developed by LNDD for chromatographic separation.
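Larry's tolerance point is simple enough to sketch. Here is a minimal illustration, assuming purely for the sake of example a 1% relative tolerance as mentioned above (the actual TD2003IDCR criteria, and the retention-time values used here, should be checked against the real documents; both are hypothetical):

```python
def rt_matches(sample_rt: float, reference_rt: float, tol_pct: float = 1.0) -> bool:
    """Check whether a sample peak's retention time agrees with the
    reference peak's retention time within a relative percentage tolerance.

    The 1.0% default reflects the figure discussed in the comments above,
    not a verified quote of the technical document.
    """
    return abs(sample_rt - reference_rt) <= reference_rt * tol_pct / 100.0

# Hypothetical numbers: a sample peak at 14.85 min against a reference
# peak at 14.80 min differs by 0.05 min, well inside a 1% window (~0.15 min),
# while a peak at 15.20 min falls outside it.
print(rt_matches(14.85, 14.80))  # True
print(rt_matches(15.20, 14.80))  # False
```

The point of the sketch is only that the criterion is a fixed arithmetic test: loosening `tol_pct` from 1.0 to 2.0 changes which peaks "match," which is exactly the kind of relaxation Larry argues a lab should not be free to make on its own.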