Second, logically, legally, and scientifically, the Majority's refusal to rule that there was an ISL violation on the peak identification in the IRMS results was and is incorrect and must be corrected for justice to occur. Once an ISL violation is established, the burden flips, and USADA would then be required to prove its case in some other way, to the comfortable satisfaction of the Panel, bearing in mind the seriousness of the allegation made. While that latter standard may not be a relevant factor when analytical results form the foundation of the accusation, in multiple CAS cases addressing accusations based on non-analytical evidence, the seriousness of the allegation moves the burden of proof to a higher level than that which supports an adverse analytical finding.
Third, at least one clear, coherent theory of what happened, explaining why all of the results were incorrectly reported as positives, should be presented. We would like to see that so that some sort of justice and closure can be achieved, and not just the issuing of a result.
We'll break these down further below.
It's the IRMS, stupid.
Initially, there were two basic scientific arguments disputed in this case. Ultimately, though, the case has simply been about the IRMS tests. As expected, the T/E tests were deemed unreliable over USADA's objections and need not be addressed further. That point, rather silently and without fanfare, has been conceded by USADA. Thus, everything that is not specifically about refuting all the IRMS tests is superfluous and will only confuse the issue. While it may have been useful to present many issues at the early stages, the key issues are now identified, and those which have no traction should be put aside.
The Majority award in the initial arbitration did not consider the alternate B samples, but we should not assume that the review via CAS will similarly ignore them. It is important that any arguments made against the S17 tests also apply to the alternate B results. It is important to demonstrate that errors shown in the S17 test methodology were highly likely to have been repeated in the other tests, as well.
There was an ISL Violation on peak identification.
We strongly believe that there was an ISL violation in the peak identification used in the S17 and other samples, and that the logic used by the Majority in its award is faulty. Many of the reasons are worked out in Seven Paragraphs, so we'll be brief here.
- The TD2003IDCR does apply.
- LNDD could and should have used a methodology that met TD2003IDCR, including, alternatively: the use of an appropriate cal-mix in the IRMS; the use of more similar chromatographic conditions between the GCMS and the IRMS; and the use of a trailing anchor in the cal mixes to allow the use of Kovats retention indices.
- LNDD's failure to use a conforming methodology does not excuse it from the applicability of TD2003IDCR.
- LNDD did not identify in its SOP an alternate identification methodology, such as the "visual gestalt" method offered by Dr. Brenna. Saying that was what they did after the fact does not make it a documented methodology; using an undocumented methodology would itself be an ISL violation.
- LNDD did not offer arguments as to why the looser criteria suggested in TD2003IDCR as acceptable in some circumstances should be applied.
- The Seven Paragraphs contradict each other, and so cannot present a valid argument.
- Brenna's testimony contradicts itself on identification, and should be discounted.
- A visual standard is no standard, as demonstrated by examples from the chromatograms in the LDP. It is based on assumptions about peak ordering and proportionality of heights that are not true. These assumptions are not valid because the chromatographic conditions changed too much across the machines, including sensor type, pressure, and temperature.
A single metabolite positivity standard is not valid unless it is adequately supported.
If a WADA-approved laboratory (LNDD) is going to assert an adverse analytical finding upon a single-metabolite positivity standard, as is presumably permitted under the WADA Code, then logic, law, and fundamental fairness require that it run a scientifically sound validation study, because the single-metabolite finding is not otherwise fortified by one or more additional metabolite positives, as is preferred in virtually every other WADA-accredited laboratory. LNDD's "validation study" is a misnomer. It was inadequate to support an adverse finding through single-metabolite positivity criteria.
We also accept that this argument may fall on deaf ears at CAS, since labs can do no wrong, what with their being accredited and therefore deemed trustworthy. It nevertheless needs to be made, so that any reasoning for accepting a single-metabolite standard will be on record and may be examined. It will be more evidence of the WADA Code accepting substandard science as truth.
There are explanations that should disrupt any comfortable satisfaction that the Panel may otherwise have that Landis doped.
These are bullet points only, and each needs clear illustration both visually and numerically.
- LNDD's chemistry for separation isn't good enough for the job it is asked to do. This is the main cause of the "poor chromatography" often mentioned. What this really means is that things aren't well separated, and there is far more interference and unaccounted-for noise in the chromatograms than is compatible with trustworthy measurement. This is visually demonstrated by comparing chromatograms from good chemistry, as done at UCLA and Montreal, with those from LNDD. Presenting the chemical steps would be useful, explaining the matter at that level as well, assuming it is possible to get the chemistry. (That the chemistry details may not be available is a matter to complain about in due course.)
- As a result, there are plentiful unknown impurities in all the fractions, which are visually obvious.
- The GCMS only identifies the presence of the known in a peak, but does not indicate the absence of an unknown; thus the GCMS does not indicate anything about the inadequate separation in the chemistry.
- When these poorly separated samples are run through the IRMS, we do not know the purity of the compounds whose CIRs we are measuring. This might have been detected in the MS data that should have been collected and made available.
- The S17 MS data from the IRMS was destroyed before it could be looked at, and there is no evidence it ever was evaluated.
- The MS data from the IRMS for the other B samples has not been made available, and we do not know if it exists either.
- The CIR of a peak in question is highly dependent on the purity of the contents of the sample in the peak. An example should show clearly, both mathematically and visually, the effect of a given amount of an impurity with a given CIR on the measured value of the assumed-to-be-pure peak. This should be tied to specific peaks in the S17 Landis F3's.
- The effect of non-linearity at the low end of the measurement should be shown with a similar example, both visually and mathematically, and tied to specific peaks in the S17 Landis F3's.
- The effect of a sloping baseline should be shown visually and mathematically, and tied to the S17 Landis F3's.
- The effect of manual marking of integration boundaries and background levels should be shown visually, mathematically, and tied to the Landis F3's.
- The results obtained with "automatic" integration during the reprocessing must be shown to be due to impurities, non-linearity, and sloping baselines, and not all from manual processing.
- Amory's unrefuted testimony about 5aA and 5aB supports the belief that something is amiss in the measurements.
- It was suggested by Davis that the linearity of the machines drifted all over the place, and that this accounts for much of the inconsistency in what ought to be consistent results.
- The results that should be consistent and are not are mainly the Blanks, where values seem to be all over the place. (You can't argue with the Landis samples).
- It is further suggested that, since the linearity changes over time, results may or may not be consistent across runs, depending on timing; the timing would depend on the period over which the linearity of the instrument oscillates from high to low and back again.
- It would be useful if it were possible to analyze the available data and offer estimates of the likely periods of non-linearity, mapped to known acquisition times and results of Blanks, then project the rates onto the Landis F3's.
- It would also be useful to consider what might have been run during the "gaps" that was not recorded, and why those runs might have been done.
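As an illustration of the impurity point above, a simple linear mass balance shows how a co-eluting impurity shifts the measured CIR of a peak assumed to be pure. This is a sketch only: the function name `measured_delta`, the δ13C values, and the impurity fractions are hypothetical illustrations, not figures from the LDP, and the model ignores minor differences in carbon content between compounds.

```python
# Illustrative sketch (hypothetical numbers throughout): the effect of a
# co-eluting impurity on the measured carbon isotope ratio (delta 13C,
# in per mil) of a peak that is assumed to be pure.

def measured_delta(delta_analyte, delta_impurity, impurity_fraction):
    """delta 13C of a mixed peak: a linear mass balance weighted by the
    fraction of the peak's carbon contributed by each component."""
    f = impurity_fraction
    return (1.0 - f) * delta_analyte + f * delta_impurity

# Assumed values: pure analyte at -27.0 per mil, impurity at -34.0 per mil.
for f in (0.0, 0.05, 0.10, 0.20):
    d = measured_delta(-27.0, -34.0, f)
    print(f"impurity fraction {f:.2f}: measured delta13C = {d:.2f} per mil")
```

With these assumed numbers, a 10% co-elute at −34‰ pulls the measured δ13C of a −27.0‰ analyte down to −27.7‰. A shift of that size is large enough to matter when the reported difference between metabolite and reference sits near a decision threshold of a few per mil.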
In summary, LNDD's failures include:

- Failure to obtain adequate chemical separation;
- Failure to properly identify peaks by using inadequate methods;
- Failure to ensure peaks are not contaminated with co-elutes;
- Accepting momentary linearity data from an unstable system, failing to understand the underlying problem;
- Unconscious biases affecting manual operations.
Unfortunately, and to the detriment of the athlete and the fairness of the proceedings, data identified as potentially exculpatory has consistently not been provided, or has been intentionally destroyed. This includes (but is not limited to) SOPs, chemistry, and mass spec data.
It is impossible to conclude, to a degree of comfortable satisfaction, that the tests were carried out in a reliable way that supports a finding that a doping offense occurred.
There is no other reliable evidence, based in science, from which it can be concluded, given the seriousness of the allegation, to comfortable satisfaction, that a doping offense occurred.