And What It Tells Us About The CERA Anti-Doping Cases
We've seen a lot of discussion over the last few days on the anti-doping testing being performed at this year's Tour de France (TdF) to detect a new drug, a third-generation form of EPO called CERA. Up until a few days ago, most of us had no idea that there WAS a third-generation form of EPO, let alone that it might be used by cyclists as a performance-enhancing drug (PED). Many experts who were well aware of CERA and its potential as a PED were unaware that any anti-doping lab claimed to have the ability to detect this drug. After all, there had been no published discussion of any such test. The World Anti-Doping Agency (WADA) had published no rules on how such a test should be conducted or interpreted. To my knowledge, there has not even been a discussion in the scientific literature on the detection of CERA.
Then all of a sudden, we had last week's explosion of news: the adverse analytical findings (AAFs) announced by the French anti-doping lab (AFLD) against cyclists allegedly using CERA, together with the revelation that the AFLD had in its arsenal a previously undisclosed test for the detection of CERA.
The TdF news concerning the detection of CERA has raised a lot of questions, including legal questions. How is it that AFLD can use a "secret" test, one that is not referenced in any of the WADA rules? How can we know that the lab's test is valid? If there are no WADA rules for the test, how can the lab determine that the results of the test are sufficient to prove an AAF?
I'm not going to try to answer all of these questions.
But I do think it would be valuable at this point to look at the CAS decision in the Tyler Hamilton case. The Hamilton decision provides guidance on how the anti-doping authorities (ADAs) may try to prove an AAF in these CERA cases.
(For those of you who'd like to read the Hamilton decision, you can find it here.)
The Hamilton case arose during the 2004 Vuelta a España (Tour of Spain). Hamilton won a stage in that race, and underwent a specific kind of blood test (called an HBT test, for homologous blood transfusion) performed by the WADA-accredited lab in Switzerland. According to the Swiss lab, the HBT test revealed that Hamilton had undergone a homologous blood transfusion in violation of the WADA rules. A homologous blood transfusion is a transfusion of someone else's blood into the athlete's system (as opposed to an autologous transfusion, which is a transfusion of the athlete's own blood). Such a transfusion, when used to improve an athlete's performance, is commonly called "blood doping".
The main issue in the Hamilton case was this: the HBT test was a new test at the time it was used in the Hamilton case. When the Swiss lab performed this test, it did not have specific accreditation to do so -- not from its ISO 17025 inspector, and not from WADA. Without such accreditation, was the test valid?
Let's detour for a moment and discuss the issue of lab method accreditation. The primary set of WADA rules governing labs is WADA's International Standard for Laboratories (ISL). Incorporated within the ISL is ISO 17025 (sometimes called ISO/IEC 17025), issued by the International Organization for Standardization. ISO 17025 provides operating rules governing testing labs worldwide; the ISL provides specific rules for the field of doping control. The ISL provides that all WADA laboratories are required to be accredited by a national accreditation body and periodically audited according to ISO 17025 (see ISL Rule 4.1.1 and the related accreditation provisions), and that all WADA lab methods and procedures must eventually be included in the scope of these periodic audits. See ISL Rule 4.2.2. In addition, all WADA labs are separately accredited by WADA. See ISL Rule 4.1.
We have considered these rules in our discussion of the Landis case. The lab methods in the Landis case were all accredited under ISO 17025 and the ISL (whether these methods were PROPERLY accredited is an open question in my mind). Since these methods were accredited, the lab in the Landis case (LNDD) received an important benefit under WADA Code 3.2.1: the lab was presumed to have conducted its analysis in accordance with the ISL. This is a powerful presumption that is difficult for an athlete to overcome, and had much to do with the decision made against Landis.
In the Hamilton case, it was clear that blood doping was a violation of the WADA Code. However, the WADA Code did not address how a lab was to prove that an athlete had blood doped. Moreover, the HBT test had not been accredited at the time it was performed in the Hamilton case. So, the validity of the HBT test was probably the key issue in the Hamilton case.
The panel noted that under WADA Code Section 3.2, the "facts relating to anti-doping violations may be established by any reliable means." From this, the panel concluded that it is not necessary for WADA to approve a lab method before the method can be used to prove an AAF.
The panel also ruled that a WADA lab CAN use an unaccredited test method to prove an AAF, so long as the lab can prove two things. First, the lab must prove that the unaccredited test method was conducted "in accordance with the scientific community's practices and procedures." Second, the lab must prove that it "satisfied itself as to the validity of the [unaccredited] method before using it." If the lab can satisfy this two-pronged burden of proof, then (according to the Hamilton decision) the lab gets the benefit of the presumption under WADA Code 3.2.1. If the lab cannot satisfy this burden, then the lab method in question cannot be used, and the AAF against the athlete must be dismissed.
The reasoning in the Hamilton case was based on the panel's assumption that sometimes WADA labs must use unaccredited test methods. New forms of doping arise all the time, but the formal lab accreditation process is relatively slow (the method at issue in the Hamilton case was not formally validated until more than a year after the lab's finding of the Hamilton AAF). If labs are going to detect new PEDs, they may have to do so with new (and thus unaccredited) test methods. But since accreditation is an important step in making sure that test methods are "fit for purpose", the panel reasoned that the validity of unaccredited test methods must be defended by the lab and ultimately ruled upon by the arbitration panel.
There is, of course, an argument against the rule in the Hamilton case, which is that review of a lab method in arbitration is no substitute for ISO and ISL accreditation. It is unlikely that a few arbitrators, meeting in a location distant from the lab, can review a lab method in a manner comparable to experts in ISO and ISL requirements, who are present at the lab itself.
The ultimate question in cases like the Hamilton case is: can we give up the confidence that comes with formal lab method accreditation, in exchange for the ability to use a new lab method to catch doping that would otherwise go undetected? This is not an easy question to answer. In essence, the panel tried to balance two competing interests: effective anti-doping testing versus the benefits of formal method accreditation. Whether the Hamilton decision struck the right balance is a matter for debate.
Turning back to the Hamilton decision: the CAS panel in the Hamilton case had no difficulty finding that the lab method in question was sufficiently reliable to support the AAF against Hamilton. The CAS panel based this finding on the following:
1. The HBT test was performed using a machine called a "flow cytometer", which has been used for a long time to analyze blood characteristics. So while the TEST was new, much of the technology involved in the test was established and well-accepted.
2. The panel in Hamilton found that the HBT test was similar to tests in common use to precisely match a donor's blood to a recipient's blood. In most cases, a patient can receive a blood transfusion based only on major blood type (A, B, O and Rh(D)), but in some cases (such as bone marrow transplants) it is necessary to match minor blood markers as well. The panel noted that flow cytometry is commonly used for this purpose.
3. The panel noted that the HBT test was based on research work supported by financial grants from WADA and USADA, and that the results of the research had been reported in peer-reviewed scientific publications. Moreover, the test in question was tentatively approved at a scientific meeting held prior to the 2004 Olympic Games to determine the drug testing that would be performed at these Games.
4. The HBT test was reviewed and "validated" at three different WADA labs prior to its use in the Hamilton case.
5. WADA had adopted "positivity criteria" for the HBT test prior to the 2004 Olympic Games, addressing what the HBT test would have to show in order for a particular test result to prove an AAF for blood doping.
6. The Swiss lab's HBT test method WAS eventually accredited by WADA and the ISO inspector.
Hamilton's legal team argued against the validity of the HBT test, on grounds that are familiar to us from the Landis case. Hamilton argued (among other things) that there had been inadequate control studies to support the HBT method, that there had been no proper study of false positives, and that no measure of uncertainty had been determined for this lab method. The CAS panel was not persuaded by these arguments, and (as you probably know) upheld the AAF against Hamilton. At least the panel did not require Hamilton to pay USADA's costs.
What can the Hamilton case tell us about the validity of the AFLD's method to test for CERA, and about the likelihood that the CAS will uphold AAFs based on this method? Not very much! We know nothing about the AFLD's method for CERA testing or how it might have been validated prior to this year's Tour de France. But the Hamilton case is a good guide to what the AFLD will need to do to prove its cases against Beltran, Duenas, Ricco and any others accused of using CERA in this year's Tour. The AFLD may not have to do everything that the Swiss lab did in the Hamilton case, but the AFLD will at least have to show that its CERA test is scientifically accepted and that it took proper steps to validate the test. This is, of course, a lot more than the LNDD had to do in the Landis case.