Thursday, January 31, 2008

Larry's Curb Your Anticipation, Part 13: Misapplication

Up to the Introduction; back to part 12; on to part 14.

By Commenter Larry


At the same time, there are ways in which method validation rules can be misapplied to the assessment of results. The majority of the AAA Panel seems to have made mistakes of this sort.


  • Can the criteria for method validation be taken outside of the context of method validation, and used to judge the validity of a test result? The answer to this question seems to be a qualified “yes”. For example, in the FL case, FL argued that the LNDD violated ISL Rule 5.4.4.2.1 by failing to properly generate chromatograms that avoided matrix interference. During the arbitration hearing, both sides called expert witnesses to testify to the quality and reliability of these chromatograms. Predictably, the witnesses for USADA stated that these chromatograms were reliable, and the witnesses for FL disagreed. The majority of arbitrators ruled that these chromatograms “perhaps could have been better”, but ultimately found that the chromatograms were “fit for the purpose” and that the chromatography “unquestionably indicates the presence of exogenous testosterone in [FL’s] “A” and “B” samples.” In reaching this ruling, the majority arbitrators made the following statement, a statement that (in my opinion) completely misconstrues the nature, purpose and intent of the ISL:


In applying the language of the ISL what is required is that the “method should avoid interference.” The language is not mandatory. Had the drafters intended that matrix interference be avoided it would require wording such as “shall” or “must”. For this Panel to accept the submissions of the Respondent that matrix interference must be avoided would be a misconstruction of ISL 5.4.4.2.1. Dr. Ayotte confirms this statement in noting that a laboratory does not violate Article 5.4.4.2.1 of the ISL just because it produces a chromatogram that contains matrix interference. Therefore, even where matrix interference has occurred in the Stage 17 chromatograms it would not amount to a violation of the ISL. (FL decision paragraph 240.)


The majority’s reasoning here illustrates the danger inherent in reading rules out of context. The majority managed to interpret ISL Rule 5.4.4.2.1 to exclude any requirement to consider matrix interference – a reading that is possible only by failing to read the entire rule! (As we’ve seen, this rule includes a requirement that lab methods must achieve specificity, and the concept of “specificity” includes within it the need to avoid matrix interference.) But the majority’s reasoning is faulty on a more fundamental level, as the majority never considered ISL Rule 5.4.4.2.1 as applying to the lab’s method validation effort. In the context of method validation, a statement that a method “should avoid” matrix interference takes on a different meaning than the one found by the majority arbitrators. In the context of method validation, this statement would be read in light of the overall goal of method validation: to make a reasonable effort (within the constraints imposed by the cost and time required for the effort) to determine that a method is “fit for purpose”. In this context, the “should avoid interference” language in ISL Rule 5.4.4.2.1 might be interpreted to mean “do the best you can to avoid interference” or “make a reasonable effort to avoid interference”, consistent with the overall principle that a method cannot possibly be “fit for purpose” if it cannot provide us with a reasonable assurance that the method is measuring what we want it to measure.


So, let’s return to the question posed at the beginning of this bullet point: can the criteria for method validation be taken outside of the context of method validation, and used to judge the validity of a test result? The answer is “yes”, at least with respect to some of these criteria, but we need to be careful in how we do this. To be certain, if we want to assess the validity of a particular test result, we would want to consider criteria such as accuracy, uncertainty, specificity and interference. But we should avoid applying the specific language of ISL method validation rules embodying these criteria to our assessment of a particular test result. Instead, we should look to other rules within the ISL and ISO 17025 – rules that were written for the purpose of assessing a method result rather than the method itself – to provide us with the necessary guidelines for determining the validity of a particular method result.


Up to the Introduction; back to part 12; on to part 14.




Larry's Curb Your Anticipation, Part 12:
You should be able to argue...

Up to the Introduction; back to part 11; on to part 13.

By Commenter Larry

Now, some bullets about things you probably can argue, even if they may not be well received.



  • Can we use method validation criteria as criteria for the evaluation of the validity of a particular test result? For example, can we look at the difference between the original results of the FL stage 17 testing and the results from the subsequent EDF reprocessing, and say that this difference indicates a problem of repeatability or intermediate precision? This is a tricky question to answer. From our discussion above, we understand that criteria such as repeatability and intermediate precision are method validation criteria. Simply because a method produces a bad result does not mean that the method is invalid or should be reevaluated. However, we also understand that method validation is an ongoing process, and that a lab’s quality control efforts may indicate the need to revalidate a method. So it is important (perhaps even required under the ISL and ISO 17025) for a lab to review its test method results against the various method validation criteria, to see if revalidation of the method is necessary or advisable.


  • However, the failure of a particular lab result to comply with one or more method validation criteria does not necessarily mean that the result is invalid and should be thrown out. There is a difference between the criteria for evaluating test methods and the criteria for evaluating test results. To see this difference, all we need to do is to consider some of the method validation criteria discussed above. For example, the criterion of traceability addresses the test method only – either the method is tied to a known standard or it is not. The criterion of intermediate precision can be used to judge a test result only if the test is repeated at a different time or with different equipment or a different lab technician, but repeated testing of the same sample is not the norm.


  • Can the criteria for method validation be used by an athlete to prove an ISL departure? Absolutely. This was the basis for the argument made by Tyler Hamilton (discussed above) in his anti-doping case. It is an ISL departure if an AAF is based on a method that was not properly validated. However, we should recognize that it is a difficult proposition for an athlete to prove that a test method was not properly validated. Any such proof requires the athlete to challenge the science surrounding the test method, a tall order under the best of circumstances. This task is made even more difficult by the fact that the WADA rules presume the validity of all WADA lab methods, and that the documentation of the lab’s method validation will not be included in the lab document package provided to the athlete. This difficulty may be the reason why the FL team seemed to avoid many of the criteria discussed above (in particular, specificity) in the case it made before the arbitrators last year.



Up to the Introduction; back to part 11; on to part 13.



What Are They Thinking?

Bill Hue

Comments from Steve Johnson, CEO of USAC, in today's round-up caught my eye. They diametrically contrast with USADA counsel Bill Bock's careful and measured denial that USADA discloses, or participates in disclosing, the identity of a cyclist under investigation to race organizers or anyone else prior to an actual adjudication that the cyclist violated the WADA Code or USADA protocol.



Mr. Bock may have determined that such acts expose the entity to legal action in a US court (a subject I previously commented upon, concluding that neither the corporate entity nor the individual may have immunity from liability for such activities) where the acts occurred in the US and may have caused injury to an individual or entity protected by US law. Or he may have determined that the risks simply call for greater prudence. Either way, USADA now distances itself from any disclosure prior to adjudication that may impact a cyclist's ability to earn a living, or impact a team's sponsorships or even its very existence. Mr. Johnson might have missed the memo.

This is the interesting part of Mr. Johnson's comment:

"If USADA doesn't want to share information with AEG, that is their prerogative, but until someone tells me specifically I cannot share that information I'm happy to help the Amgen Tour of California organizers."

Mr. Johnson may be ill informed about his obligations under law and the legal consequences of breaches thereof, and he also may not be aware that he may not necessarily have immunity, either personally or as a corporate entity, from a claim such as tortious interference with contract when he deviates from the requirements of the WADA Code or USADA protocol.

Johnson acknowledges that USADA has some concerns on these issues of which he is aware, and that USADA therefore refuses to disclose its investigations of specific cyclists to race organizers, something with which Johnson apparently feels comfortable.

Johnson must know that professional cyclists are employed to actually ride in races. He must be aware that individuals and teams rely upon corporate sponsorships as additional income in exchange for riding in races in which sponsors' logos appear, and that a sponsor uses team and rider popularity among consumers to sell its product. If riders do not ride or teams do not race, their relationships with sponsors might be affected negatively or even terminated, causing riders and teams monetary damages. If a cyclist can't race, his/her relationship with a team might terminate or his/her salary might be reduced, causing the rider monetary damage.

When a person or entity acts outside the WADA or UCI or USADA Code or Protocol relating to cyclists, causing them damage in the process, there may be legal relief (in a US Civil Court because the act is outside the WADA Code or other "private" protocol) available to the cyclist or to the team.


Further, if one is not immune from liability, then one may be personally responsible should a judge determine a duty was owed and a jury determine that said duty was breached and damage occurred as a result.

Here are the elements of tortious interference with contract:

1) A valid existing contract;
2) That defendant had knowledge of;
3) That defendant intended to induce breach of;
4) That the contract was in fact breached or performance was rendered more difficult;
5) Causation; and
6) Actual damage.

Sometimes discretion is the better part of valor, and usually discretion is preferable to unnecessary bravado. Sometimes the comments made by individuals on both sides of the issue boggle the mind. Johnson's comments certainly boggle mine.


Larry's Curb Your Anticipation, Part 11:
Methods v. Application

Up to the Introduction; back to part 10; on to part 12.

By Commenter Larry

We're going to start winding down, making some observations about the things we've discussed in previous parts with application to the Landis case. These aren't long, and each could easily become its own extended discussion, but TBV cut it off, so here they are as bullet points, a few at a time.



Let’s make some brief observations:


  • One of our goals in this article is to place the ISL rules into a proper context. Here on TBV, we’ve frequently discussed ISL criteria such as specificity and interference, but we have not done so in the context of method validation. We need to understand that under the ISL, these criteria are method validation criteria. They speak to the fitness for purpose of a lab’s methods. So, we must be careful how we use these criteria. If we say, for example, that the chromatograms in the FL case lack the specificity required under ISL 5.4.4.2.1, we’re not saying that the lab failed to perform its test method correctly. We’re saying that there’s something wrong with the method itself. We’re saying that the method needs to be reevaluated, revised, or possibly thrown out and replaced with a better method.


  • Let’s use two examples to illustrate this point. Both Duck and Mr. Idiot have argued that LNDD’s testing for exogenous testosterone should include analysis of complete mass spectrum data. This is an argument that goes to specificity; in effect, Duck and Mr. Idiot are questioning whether any method for testing for exogenous testosterone could have been properly validated if the method does not use mass spectrum data to determine specificity. In contrast, Ali has pointed out the wide variation between the results found by LNDD during its initial testing of the S17 sample, and the results reached later upon reanalysis of the S17 electronic data files (EDFs). Ali’s point here does not relate directly to method validation – Ali is not expressly questioning LNDD’s methods, but instead is pointing out an apparent failure of the lab to produce good results with its methods. The data pointed to by Ali may or may not indicate that LNDD’s method is flawed; the problem may lie elsewhere (for example, in LNDD’s sampling procedures, or with the training of LNDD’s personnel, or with LNDD’s equipment).




Up to the Introduction; back to part 10; on to part 12.



Thursday Roundup

News
The CyclingNews this morning says that reports of Rock Racing's roster for the ToC may have been premature. According to Sean Weide of Elevation Sports & Entertainment / Rock Racing:

"Per the race regulations for the Tour of California, a list of riders was submitted to the race organizers. The official Rock Racing roster for the event will be released by AEG after it is reviewed and all riders are cleared to start."

In an update CN says that Alberto Contador is willing to go to Italy to testify in CONI's resurrected OP investigation.

According to The VeloNews, the assertion that team rosters for the Tour of California have been submitted to USADA for approval is incorrect. The team rosters are being vetted by USA Cycling and the UCI, not by USADA. According to USADA chief counsel Bill Bock, there has been a fundamental misunderstanding:

"We don't have any communications of that sort with AEG or any race organization," Bock said. "There is some misinformation out there that somehow we have information about who is going to participate [in the Tour of California]. We don't have that information, and we don't give out confirmations about investigations or positive test results until after a case has proceeded through our internal process. We are not providing information to any race organizers, in any sport, concerning USADA testing or investigations."


But even USADA wonders how USA Cycling and the UCI will have, and share, the kind of information that may disqualify a cyclist from a team's roster. The ramifications of athlete information confidentiality don't seem to bother Steve Johnson, CEO of USAC:

"USADA protocol hasn't changed, what has changed is our position," Johnson said. "It used to be that we would communicate information only with the athlete. But we felt that it was our responsibility in this case when the Amgen Tour of California organizers asked us to help implement this much more selective policy. We were happy to help, and we discussed it with both the UCI and USADA. If USADA doesn't want to share information with AEG, that is their prerogative, but until someone tells me specifically I cannot share that information I'm happy to help the Amgen Tour of California organizers."

What in the world does that mean?

There are some delicate machinations going on, with some people who may be "shingling past the edge of the roof."*

Steve Johnson's position on behalf of USA Cycling seems tenuous regarding potential defamation.


Blogs
Brad Keyes speaks of feelings shared with a friend:
He even shares the view with me that, while we like Floyd Landis and think he’d be cool to party with, he’s surely a doper.

Rant has been wondering about a couple of things: one of them is the deadline for the filing of the Landis appeal briefs, which is today, and all that has been lost by everyone in this entire process. Rant is also starting to think we may not know who the "John Doe" is in the lawsuit filed against USADA.

Wizbang uses Landis as a referent for their slam of John McCain. They seem to be Huckabee fans.



*coinage from John Amory MD in testimony at the Landis AAA hearing, see Transcript, PDF page 1385, line 21


Wednesday, January 30, 2008

What a Bunch of Black and Grey Text

Yikes, it's been page after page of words and more words lately. Let's bring some color back with this reminder of two years ago at the Tour of California.



Floyd in the Golden Fleece, 2006,
with apologies to a photographer unknown to us.



If anyone is interested in getting together Sunday in Palo Alto after the Prologue, let's work something out.


It Can't Happen, Especially Not Here, Dept.

Eight-zero passes on the following news article in the Seattle Times, reminding us of some warnings we've heard from Bill Hue.

King County judges block breath-test evidence; hundreds of DUI cases could be affected

The Associated Press

A panel of King County District Court judges says there were so many ethical lapses and scientific inaccuracies at the Washington State Toxicology Lab that breath tests should not be admitted as evidence in drunken driving cases.

The ruling is expected to affect hundreds of cases, if not more.

The three-judge panel said problems at the lab were so systemic, and the results of breath tests so compromised, that they would not be helpful to a judge considering whether someone drove under the influence.

Previously, Snohomish County barred breath-test evidence in more than three dozen cases, while Skagit County judges have ruled that although conduct at the toxicology lab was troubling, the breath-test results can be admitted at trial.



Eight-zero suggests the ruling is a fine example for some people at CAS:

Here's an example of judges (not arbitrators) actually doing judicial work. If the CAS is looking for a methodology to follow, here's a good model.

We concur.


Larry's Curb Your Anticipation, Part 10:
Specificity/Selectivity meaning in the ISL

Up to the Introduction; back to part 9; on to part 11.

By Commenter Larry


We left our discussion in part 9 on the varying language in the ISL about specificity/selectivity, wondering how you can tell what really applies. We take note of an interesting distinction proposed by Mr. Idiot, but aren't sure it would be accepted.



I think that the ISL rules governing specificity are best understood in the context of our discussion of method validation. In this context, it is clear that selectivity must play a role in method validation, notwithstanding any permissive language we might find in ISL rules 5.4.4.2.1 and 5.4.4.2.2. It would make no sense to devote great effort to determining the accuracy of a test method without devoting a roughly equivalent effort to determining that the method is measuring what it is supposed to measure. Given what we understand about “fitness for purpose”, it would be impossible to argue that a test method lacking in selectivity could possibly be “fit for purpose.”


At the same time, it may be unreasonable to expect that WADA lab method validation should ensure complete selectivity. As we’ve stated repeatedly, the ISL is based on the assumption that standard methods are not available for doping testing, and that WADA labs will be required to develop their own test methods in house. It’s probably unrealistic to expect new and novel test methods to achieve perfect selectivity (notwithstanding the importance to the athletes being tested that the labs avoid making any mistakes). In addition, given that a urine sample is a complex matrix that might contain any number of unexpected substances, it might be impossible to prove that a lab method was capable of avoiding all possible interferences. Finally, we should recall that method validation is a finite effort, designed to determine fitness for purpose, but limited in important respects by factors of time and cost. Given all this, it seems most reasonable to define “specificity” for purposes of the ISL as a high degree of selectivity, but probably not as 100% selectivity.


But what of the difference highlighted above between the standards for avoiding (or limiting) matrix interference in ISL rules 5.4.4.2.1 and 5.4.4.2.2? Here on TBV, Mr. Idiot has advanced a good explanation for the difference in these standards. ISL 5.4.4.2.2 addresses the requirement to limit interference in testing for threshold substances, where the lab is required to measure the amount of the substance present in the standard. In contrast, ISL 5.4.4.2.1 governs the requirement to avoid interference in testing for non-threshold substances, where the lab is only required to determine the presence of the substance. Arguably, the standards for methods that need to measure the amount of a substance must be more exacting than the standards for methods that need only to detect the presence of a substance.


But I would caution against making too much of the difference in language between the standards in ISL rule 5.4.4.2.1 and 5.4.4.2.2. Recall what we’ve said so far about method validation. The goal of method validation is to ensure that a lab method is fit for purpose; a lab method that does not achieve good specificity (whether specificity is determined by avoiding interference or by limiting interference) is not fit for purpose. Moreover, given the unpredictability inherent in matrix interference (at least when dealing with a complex matrix such as urine), we can also expect that a lab will use the best means available to combat interferences, regardless of whether the lab is testing for a threshold or a non-threshold substance. At the same time, we expect labs to perform only a limited amount of method validation, so that method validation can be completed within a reasonable time and at a reasonable cost. In other words, we can predict that the main limitation on determining selectivity under the ISL will not be the standards set forth under ISL 5.4.4.2.1 and 5.4.4.2.2, but instead will be the limited effort required of labs under the ISL and ISO 17025 in performing method validation.


So, I think that the best way to understand the ISL rules on specificity is the way we’ve described above: under the ISL, specificity is a high degree of selectivity, but probably not perfect selectivity.




Up to the Introduction; back to part 9; on to part 11.


Larry's Curb Your Anticipation, Part 9:
Specificity/Selectivity

Up to the Introduction; back to part 8; on to part 10.

By Commenter Larry


We left part 8 discussing the ISO take on uncertainty, and the criteria that fall in that category. We now turn to the second of the two key parts in determining fitness for purpose, specificity/selectivity.


  • Confirmation of Identity

A concept related to “selectivity” is “specificity”. “Specificity” is a desired state of selectivity. “Specificity” is one of those terms that is defined differently depending on where you look: specificity is either 100% selectivity (see Eurachem Guide paragraph 6.13) or a high degree of selectivity (see http://en.wikipedia.org/wiki/Selectivity).


As we’ve seen in earlier discussions on TBV, “specificity” is part of the ISL criteria for method validation (see ISL Rules 5.4.4.2.1 and 5.4.4.2.2). As per usual with rules written by WADA, it’s not clear exactly what the ISL means when it refers to specificity. The ISL contains separate rules for the validation of methods for threshold substances (ISL rule 5.4.4.2.2) and non-threshold substances (ISL rule 5.4.4.2.1). Both of these rules require validation of all WADA lab methods, and list “specificity” as an “[e]xample of factors relevant to determining if the method is fit for the purpose.” Given that specificity is described as an example of a criterion relevant to method validation, it’s not clear whether specificity is a required or an optional criterion. Later, rules 5.4.4.2.1 and 5.4.4.2.2 contain a bullet point for “Specificity” stating the following:


  • Specificity. The ability of the assay to detect only the substance of interest must be determined and documented. The assay must be able to discriminate between compounds of closely related structures.


The specificity bullet point makes it sound as if specificity is both a required criterion for method validation under the ISL, and that the ISL defines specificity as 100% selectivity. However, to confuse matters further, ISL rules 5.4.4.2.1 and 5.4.4.2.2 contain a second bullet point for “Matrix interferences”, and there are differences in the way each rule describes “Matrix interferences”. The bullet point for matrix interferences under ISL rule 5.4.4.2.1 (for non-threshold substances) reads as follows:


  • Matrix interferences. The method should avoid interference in the detection of Prohibited Substances or their Metabolites or Markers by components of the sample matrix.


And the bullet point for matrix interferences under ISL rule 5.4.4.2.2 (for threshold substances) reads as follows:


  • Matrix interferences. The method must limit interference in the measurement of the amount of Prohibited Substances or their Metabolites or Markers by components of the sample matrix.


In short, the ISL language governing specificity and matrix interference wanders all over the place: permissive language (specificity and matrix interferences are merely examples of criteria for method validation), mandatory language (specificity must be determined and documented), and language somewhere in-between (some methods should avoid matrix interference; other methods must limit matrix interference). How, then, should we understand the ISL rules regarding specificity?




Up to the Introduction; back to part 8; on to part 10.


Larry's Curb Your Anticipation, Part 8:
Further Uncertainty.

Up to the Introduction; back to part 7; on to part 9.

By Commenter Larry

We left part 7 in the middle of talking about the ISO idea of uncertainty, one of two key criteria for fitness for purpose. We continue now, still discussing factors in uncertainty.


“Precision” is itself a measurement of two other criteria, “Repeatability” and “Reproducibility”. “Repeatability” is a method’s precision where the method is performed on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time. (Eurachem Guide paragraph A21). “Reproducibility” is a method’s precision where the method is performed on identical test items in different laboratories with different operators using different equipment. (Eurachem Guide paragraph A22). To complicate matters slightly, the ISL requires that method validation for threshold substances consider a criterion called “Intermediate Precision”. (See ISL Rule 5.4.4.3.2.1.) “Intermediate Precision” is the variation in results observed when one or more factors, such as time, equipment and operator, are varied within a laboratory. See http://www.measurementuncertainty.org/mu/guide/analytical.html. In other words, “intermediate precision” is a criterion that falls somewhere in-between repeatability and reproducibility.
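To make the distinction concrete, here is a minimal sketch (the delta-13C values are invented for illustration and are not taken from the Landis record): repeatability and intermediate precision can both be expressed as the standard deviation of replicate measurements, differing only in which conditions are held constant between replicates.

```python
from statistics import stdev

# Hypothetical delta-13C measurements (per mil) of one control sample.
# Repeatability conditions: same operator, same instrument, same day.
same_day = [-3.51, -3.48, -3.53, -3.50]

# Intermediate precision conditions: same lab, but measurements
# spread across different days and operators.
across_days = [-3.51, -3.62, -3.41, -3.55]

# The standard deviation under each set of conditions is the
# corresponding precision estimate; intermediate precision is
# normally the larger of the two, since more sources of variation
# are allowed to operate.
print(f"repeatability sd:          {stdev(same_day):.3f}")
print(f"intermediate precision sd: {stdev(across_days):.3f}")
```

With these invented numbers, the day-to-day scatter is several times the within-day scatter, which is the kind of gap a lab's quality control review would flag.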


(Interestingly, the ISL rules briefly refer to repeatability, see ISL Rule 5.4.4.3.2.1, but never to reproducibility. This omission may reflect WADA’s relative lack of concern with achieving consistent results among its various accredited labs. One further point: we can see that when Ali points to the variation between the LNDD S17 test results and the results achieved later upon the EDF re-analysis, he is pointing to a potential problem with the “intermediate precision” of LNDD’s test methods.)


Method “accuracy” is measured differently, depending on whether the method purpose is quantitative (as it would be for WADA threshold substances) or qualitative (as it would be for WADA non-threshold substances). (The Eurachem Guide says that this distinction applies to measurement of precision, see Eurachem Guide paragraph 6.37, but it would seem to apply equally to measurement of trueness.) If the method purpose is quantitative, “accuracy” is measured by looking at the amount by which the test results differ from each other and from the reference value. If the method purpose is qualitative, “accuracy” is measured based on the percentage of the time that the test generates a false positive result or a false negative result. In either case, the method’s “purpose” should define the required test accuracy.
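The qualitative case can be sketched in a few lines. Assume a hypothetical validation run (the counts below are invented, not from any WADA lab) in which samples of known status are run through a presence/absence method, and we tally how often the method calls a true negative "positive" and a true positive "negative":

```python
# Each pair is (true state, method's call); counts are invented.
results = ([("neg", "neg")] * 95 + [("neg", "pos")] * 5
         + [("pos", "pos")] * 90 + [("pos", "neg")] * 10)

negatives = [r for r in results if r[0] == "neg"]
positives = [r for r in results if r[0] == "pos"]

# Qualitative "accuracy" in the sense discussed above: the rates at
# which the method produces false positives and false negatives.
false_positive_rate = sum(1 for t, c in negatives if c == "pos") / len(negatives)
false_negative_rate = sum(1 for t, c in positives if c == "neg") / len(positives)

print(false_positive_rate)  # 0.05
print(false_negative_rate)  # 0.1
```

A fitness-for-purpose judgment would then ask whether rates like these are acceptable given the method's purpose; as noted below, the ISL itself sets no maximum false positive or false negative percentage.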


With the above discussion in hand, let’s look at how ISO 17025 and the ISL address the question of uncertainty. ISO 17025 Rule 5.4.6.2 addresses uncertainty in a general way, requiring testing laboratories to estimate uncertainty, or where such an estimate is impossible, to at least attempt to identify all of the components of uncertainty. The ISL requirements are similarly vague: ISL 5.4.4.3 notes the distinction we’ve already discussed between quantitative and qualitative uncertainty, and makes reference to concepts we’ve discussed above, such as repeatability, precision and bias. For quantitative methods, the ISL establishes a maximum uncertainty: “the expanded uncertainty using a coverage factor, k, to reflect a level of confidence of 95%.” ISL Rule 5.4.4.3.2.2. There is no corresponding maximum uncertainty for WADA lab qualitative methods – the ISL does not establish any maximum false positive or false negative percentages. However, it is clear from the ISL that all confirmation procedures – whether for threshold substances or non-threshold substances – must meet applicable uncertainty requirements. (ISL Rule 5.2.4.3)


(I should mention that the term “expanded uncertainty” used in ISL Rule 5.4.4.3.2.2 has a special meaning: it is a measure of uncertainty that defines a range within which we can expect to find a particular measurement result. The math for determining expanded uncertainty is beyond what I want to cover here. Anyone interested in learning more about “expanded uncertainty” can look here: http://physics.nist.gov/cuu/Uncertainty/coverage.html.)
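As a rough illustration of the convention (the component values below are invented and are emphatically not a reconstruction of LNDD's +/- 0.8 figure), the NIST/GUM approach combines the independent standard uncertainty components in quadrature and then multiplies by a coverage factor k, where k = 2 corresponds to roughly 95% confidence for a normal distribution:

```python
import math

# Invented per-mil standard uncertainty components (e.g. one for
# calibration, one for repeatability). Combined standard
# uncertainty u_c is the root-sum-of-squares of the components.
components = [0.2, 0.3]
u_c = math.sqrt(sum(c**2 for c in components))

# Expanded uncertainty U = k * u_c; k = 2 gives a level of
# confidence of approximately 95%.
k = 2
U = k * u_c
print(f"expanded uncertainty: +/- {U:.2f}")
```

The point of the sketch is only the shape of the calculation: the "95%" in ISL Rule 5.4.4.3.2.2 enters through the choice of coverage factor, not through the individual components.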


I’ll conclude my discussion of uncertainty by stating the obvious: uncertainty is an enormously complicated topic, and my discussion here barely begins to explain this topic. There are important concepts that I have not touched upon, such as “measurement uncertainty” and “method uncertainty”. We have not covered enough information to look at how LNDD came up with its stated +/- 0.8 uncertainty for its CIR testing, nor have we figured out how this stated uncertainty relates to the ISL or to method validation in general.


We cannot become experts in “uncertainty” in the course of a single article like this one. Instead, what I’ve tried to do here is to introduce this topic, and more importantly, to place this topic in an overall context. “Uncertainty” (and related concepts such as bias, traceability, precision and repeatability), are method validation concepts that speak to whether a method is fit for purpose. We’ll return to a longer discussion of the importance of this context before this article is finished.





Up to the Introduction; back to part 7; on to part 9.




Wednesday Roundup

News
The CyclingNews says it's official: Bjorn Leukemans has appealed his two-year suspension for testosterone use. In an update, the CyclingNews reports on Astana's introduction to North America, and Johan Bruyneel's assertion that, from what he has heard, the ASO will deal fairly with the selection of teams for the TdF. There are also stories of conflict over the "special race calendar" and the possible unfairness of recent late-night doping controls for Lampre.

The VeloNews reports that Rock Racing has named its squad for the upcoming Tour of California, and the rumored plaintiff in the lawsuit filed against USADA, Kayle Leogrande, is listed among the riders. The announcement was made in Malibu on Sunday, which VeloNews did not attend after being "disinvited":


Five riders guaranteed starting slots are former ProTour riders Victor Hugo Peña, Oscar Sevilla, Tyler Hamilton, Santiago Botero and Freddie Rodriguez. The sixth rider guaranteed a start is Leogrande. The 2006 national elite criterium champion made headlines over the weekend after several sources told VeloNews and the Associated Press that he is the "John Doe" suing USADA for testing his B sample from a 2007 Superweek event after his A sample tested negative.

[...]
By naming Leogrande to the California squad, Ball could be defying Tour of California organizer AEG, which last week announced a round of anti-doping initiatives for the 2008 event and proclaimed that it would seek to bar any rider with an open anti-doping case from competing. Sevilla, Botero and Hamilton have also been named in connection with the ongoing Operación Puerto case in Spain. However, none of those riders is currently under a publicly disclosed formal investigation.

USA Cycling, the UCI, and USADA appear to be able to determine the makeup of the squads participating in the race:

USA Cycling CEO Steve Johnson said Tuesday that the review of rosters won't happen until at least next week. "I haven't seen any official rosters," he said. "The teams will submit their long teams to the race organizers, and the organizers will submit that to us. We will then query both the UCI and USADA as to what they consider to be open doping cases, answer yes or no, and will communicate back to the race organizer the status of anybody in question."


Stay tuned.

The NY Times "Freakonomics" blog posts a guest entry by Bicycling's Joe Lindsey on why the legalization of doping won't work. Joe is replying to the suggestion, posted a couple of days ago, that we just let everyone dope.

Blogs
Rant advises three good reads and comments on them. First on the reading list is Larry's "curb your anticipation" series on TBV, Rant next suggests reading one of the latest entries in the "idiots series", and finally he recommends reading the WADAwatch analysis of the AFLD's recently released Landis decision. Have a box of bon bons at hand and enjoy.

CaliRado Cyclist posts an "interview" with retired WADA prez Dick Pound. Hilarity ensues, unless of course it really happened, in which case it's just scary. Illustrations are included.

M-M-My Pomona tells why not to get involved with down-on-their-luck cyclists. We missed that advice, and look what happened to us.

Bourbon and Pork has a story from the 2003 tour, with a hopefully helping hand for Landis, with pictures.


Tuesday, January 29, 2008

Larry's Curb Your Anticipation, Part 7: Uncertainty

Up to the Introduction; back to part 6; on to part 8.

Hey folks, we're HALFWAY there!

By Commenter Larry


Criteria for Fitness for Purpose

There are a number of criteria a lab can use to determine the “fitness” of a particular method. Two of these criteria – uncertainty and selectivity/specificity – are arguably the most important, so we’ll limit our discussion of method validation to these two criteria – and to the myriad of sub-criteria related to these two principal criteria. It's going to take two posts to talk about uncertainty. We won't get to selectivity/specificity until part 9.





  • Uncertainty. In simplest terms, “uncertainty” is a measure of the “accuracy” of a lab method. The “accuracy” of a lab method consists of two components: “trueness” (the closeness of a single result to the true value) and “precision” (how close multiple measurements made by the same method are to one another). Eurachem Guide 6.30.


Obviously, “trueness” is a tricky concept: how can you tell if a new method is providing a “true” result? According to the Eurachem Guide, a lab determines the “trueness” of a method by comparing method results against a known “reference value”. The lab has two techniques available to determine a “reference value”: (1) the lab can utilize a characterized (or reference) material, where the value the method is supposed to measure is already known, or (2) the lab can compare the results of its test method against a different method that has already been validated and approved for “trueness”. See Eurachem Guide paragraph 6.31.


The concept of “trueness” has a few related concepts that are referred to in the ISL and in ISO 17025. One of these concepts is “bias”. “Bias” is the difference between the expectation of the test results and an accepted reference value. See Eurachem Guide paragraph A2. There are, in turn, two types of bias: “method bias” and “laboratory bias”. “Method bias” is bias inherent in the method; “laboratory bias” is the additional bias peculiar to the laboratory and its interpretation of the method. See Eurachem Guide paragraph 6.35. I personally do not find “bias” to be a useful concept, but it is referred to in the ISL (see ISL 5.4.4.3.2.1), so I thought I should mention it here.
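As a concrete illustration (hypothetical numbers, not data from any WADA lab), bias is just the signed difference between the average of a method's results and the accepted reference value:

```python
import statistics

# Hypothetical replicate results from analyzing a certified reference
# material whose accepted value is 100.0 ng/mL.
reference_value = 100.0
results = [98.2, 99.1, 97.8, 98.5, 98.9]

bias = statistics.mean(results) - reference_value
# A negative bias means the method tends to read low against the reference.
```

Separating "method bias" from "laboratory bias" would require running the same method at multiple labs and comparing each lab's bias against the overall average — which is part of why the concept is hard to apply to a single lab in isolation.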


A second related concept mentioned in ISO 17025 and the ISL is “traceability”. “Traceability” refers to the ability of a test method to relate to a known standard. See Eurachem Guide paragraph A30. In the context of our discussion here, “traceability” means that the “trueness” of a method has been measured by testing the method against a standard that is well-accepted in the scientific community – for example, the method would be “traceable” if it could be tested on a reference material that is widely accepted in the field of doping control. Traceability merits its own section in ISO 17025 (5.6), but I think traceability is best understood as a property of “trueness”.


(Interestingly, the ISL by its terms seems to preclude any meaningful validation of the “trueness” of a lab method. WADA labs cannot determine “trueness” by utilizing a known reference material, since according to the ISL, “[f]ew of the available reference drug and drug Metabolite(s) are traceable to national or international standards.” ISL 5.4.6.1. And given WADA’s proclamation (discussed above) that standard methods are not available for doping control, it would be impossible for a WADA lab to determine the “trueness” of a method by comparing the method to a second, already validated, method. My guess is that WADA labs validate the “trueness” of their methods by utilizing whatever reference materials they can find, but I have no way to know for certain that this is what they do.)


As stated above, method “accuracy” is a function of both method “trueness” and method “precision”. “Precision” is a measure of the closeness of method results when the method is repeated under the same conditions. See Eurachem Guide paragraph 15.1.


“Precision” is itself a measurement of two other criteria, “Repeatability” and “Reproducibility”. “Repeatability” is a method’s precision where the method is performed on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time. (Eurachem Guide paragraph A21). “Reproducibility” is a method’s precision where the method is performed on identical test items in different laboratories with different operators using different equipment. (Eurachem Guide paragraph A22). To complicate matters slightly, the ISL requires that method validation for threshold substances consider a criterion called “Intermediate Precision”. (See ISL Rule 5.4.4.3.2.1.) “Intermediate Precision” is the variation in results observed when one or more factors, such as time, equipment and operator, are varied within a laboratory. See http://www.measurementuncertainty.org/mu/guide/analytical.html. In other words, “intermediate precision” is a criterion that falls somewhere in-between repeatability and reproducibility.
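The distinction can be sketched numerically (illustrative numbers only): repeatability is the spread of results under identical conditions, while intermediate precision is the typically larger spread seen when factors like day and operator vary within the same lab:

```python
import statistics

# Hypothetical replicate analyses of a single sample.
same_run = [4.31, 4.28, 4.35, 4.30]      # same operator, same day, same equipment: repeatability
varied_runs = [4.31, 4.12, 4.47, 4.25]   # different days/operators in one lab: intermediate precision

s_repeatability = statistics.stdev(same_run)
s_intermediate = statistics.stdev(varied_runs)
# Normally s_intermediate >= s_repeatability, because more sources of
# variation are allowed to contribute to the spread.
```

Reproducibility would be estimated the same way, but from results produced at different laboratories — which is exactly the kind of comparison the S17/EDF re-analysis discussion turns on.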


(Interestingly, the ISL rules briefly refer to repeatability, see ISL Rule 5.4.4.3.2.1, but never to reproducibility. This omission may reflect WADA’s relative lack of concern with achieving consistent results among its various accredited labs. One further point: we can see that when Ali points to the variation between the LNDD S17 test results and the results achieved later upon the EDF re-analysis, he is pointing to a potential problem with the “intermediate precision” of LNDD’s test methods.)


Method “accuracy” is measured differently, depending on whether the method purpose is quantitative (as it would be for WADA threshold substances) or qualitative (as it would be for WADA non-threshold substances). (The Eurachem Guide says that this distinction applies to measurement of precision, see Eurachem Guide paragraph 6.37, but it would seem to apply equally to measurement of trueness.) If the method purpose is quantitative, “accuracy” is measured by looking at the amount that the test results differ from each other and from the reference value. If the method purpose is qualitative, “accuracy” is measured based on the percentage of the time that the test generates a false positive result or a false negative result. In either case, the method’s “purpose” should define the required test accuracy.
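For a qualitative method, this kind of "accuracy" measurement reduces to counting errors against samples whose true status is already known. A hedged sketch with invented validation data:

```python
def error_rates(truth, reported):
    """False positive and false negative rates for a qualitative method.
    `truth` and `reported` are parallel lists of booleans:
    True = substance actually present / test reported positive."""
    false_pos = sum(1 for t, r in zip(truth, reported) if not t and r)
    false_neg = sum(1 for t, r in zip(truth, reported) if t and not r)
    return false_pos / truth.count(False), false_neg / truth.count(True)

# Invented data: 8 validation samples of known status.
truth    = [True, True, True, True, False, False, False, False]
reported = [True, True, True, False, False, False, True, False]
fp_rate, fn_rate = error_rates(truth, reported)
```

The method's stated "purpose" would then say what error rates are acceptable — which, as noted above, the ISL never does for qualitative methods.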





Up to the Introduction; back to part 6; on to part 8.





Larry's Curb Your Anticipation, Part 6:
On Fitness For Purpose

Up to the Introduction; back to part 5; on to part 7

By Commenter Larry

Fitness For Purpose

As should be obvious, in order to determine whether a lab method is “fit for purpose”, the lab must determine the method’s intended purpose, or in the words of the Eurachem Guide, “the performance requirements that a method must have to be suitable for solving the analytical problem.” The process of method validation can be understood as a cycle of method development and evaluation that continues until the method is found to be capable of meeting the defined performance requirement. See Eurachem Guide paragraph 6.9.




ISO 17025 is written for laboratories doing work for clients, and as you would expect, ISO 17025 is primarily addressed to the responsibilities that labs must assume when they do work for clients. However, when it comes to the “fitness for purpose” standard, the client bears some responsibility for defining the purpose of the method to be selected or developed by the lab. It is up to the client in the first instance to define the requirements of the method they want the lab to perform. As stated in the Eurachem Guide, “ideally the laboratory should first agree with the customer [on] an analytical requirement which defines the performance requirements that a method must have to be suitable for solving the analytical problem.” See Eurachem Guide paragraph 6.9. Of course, most lab customers are not sophisticated enough to come up with any such definition. The Eurachem Guide acknowledges that most customers “define their requirements in terms of cost and/or time and rarely know how well methods need to perform.” Eurachem Guide paragraph 6.10.


But WADA is not a typical lab customer – WADA is a leading authority in the field of doping control, and it contracts with a number of labs worldwide to perform doping tests. Consequently, WADA bears a significant responsibility for defining the purpose of these tests, and WADA has at least made an effort in the ISL to meet this responsibility. Under the ISL, each test developed by a WADA lab must be capable of measuring a specified minimum amount of a particular prohibited substance. This specified minimum amount is sometimes called a “limit of detection”. See ISL Rule 5.4.4.1.3. (For a discussion of “limit of detection”, see Eurachem Guide paragraph 6.20.) If the test is to detect a “non-threshold substance” (a substance that can be the basis for an AAF if found in the athlete’s system in any amount), then the test must be able to identify the substance at the limit of detection. See ISL Rule 5.4.4.1.1. If the test is supposed to detect a “threshold substance” (a substance that must be present in an athlete’s system in more than a specified amount in order for the lab to find an AAF), then the “purpose” of the test is more demanding: the test must be capable of identifying both the substance and the amount of the substance with an acceptable “uncertainty”. See ISL Rule 5.4.4.1.2. (The issue of “uncertainty” is important, and we’ll address it shortly.)
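To illustrate how a threshold, a measured value, and an "acceptable uncertainty" might interact, here is a hedged sketch of one plausible decision rule. This is an assumption made for illustration, not language taken from the ISL: report a result as over the threshold only if the exceedance cannot be explained by measurement uncertainty alone:

```python
def exceeds_threshold(measured, threshold, expanded_uncertainty):
    """One plausible decision rule (illustrative, not the ISL's wording):
    the measurement counts as over the threshold only if it exceeds the
    threshold by more than its expanded uncertainty."""
    return measured - expanded_uncertainty > threshold

# Hypothetical numbers loosely styled after a T/E-ratio threshold of 4.0:
over = exceeds_threshold(measured=6.0, threshold=4.0, expanded_uncertainty=0.8)        # clearly over
borderline = exceeds_threshold(measured=4.5, threshold=4.0, expanded_uncertainty=0.8)  # not provably over
```

Whatever rule a lab actually uses, the point stands: for threshold substances the stated purpose must specify not just the threshold but the uncertainty with which the quantity is measured.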


(Here at TBV, we’ve debated the fitness of methods in use at WADA labs, but we have not considered as carefully the stated purpose for these methods. Assuming for the moment that there are deficiencies in the way that these labs perform their doping tests, we should consider whether the fault may lie in part with WADA’s failure to articulate a clear and definitive purpose for these tests.)


Once a lab determines the purpose of a particular method and develops a method to meet this purpose, the lab can then consider whether the method is “fit” for this purpose. This analysis is not limited to looking at the method in a vacuum. The determination of “fitness for purpose” turns on how the method performs when used at a particular lab, by the analysts employed at that lab, using the equipment and facilities available at that lab. See Eurachem Guide paragraph 6.9. A method that might work perfectly well at a state-of-the-art lab might not be “fit for purpose” at a less sophisticated lab.


The effort involved in method validation will vary. ISO 17025 rule 5.4.5.2 states vaguely that “[t]he validation shall be as extensive as is necessary to meet the needs of the given application.” The Eurachem Guide is a bit more helpful. The Eurachem Guide states that “[c]haracterisation of method performance is an expensive process and inevitably it may be constrained by time and cost considerations,” and that the lab must strike “the balance between time and costs constraints and the need to validate the method.” How does the lab manage to strike such a critical balance? The Eurachem Guide states only that “the laboratory should do the best it can within the constraints imposed.” (Eurachem Guide paragraphs 6.6 and 6.7.) So, we should require that the WADA labs do a reasonable job of method validation, but “perfect” validation is probably beyond what we have a right to expect from any lab.



Up to the Introduction; back to part 5; on to part 7


Larry's Curb Your Anticipation, Part 5:
Test Development and Validation

Up to the Introduction; Back to part 4; On to part 6.

By Commenter Larry

Let’s get started looking at the ISO laboratory rules that apply, finishing here with "Fitness for Purpose".


Rules Governing Test Method Development and Validation


Our first category, test method development and validation, is probably our most complicated category. It is one of the most detailed categories under ISO 17025, and the longest single section in the ISL. Arguably, this is also the most important single category applicable to anti-doping testing. Why should this be the case?





The answer is set forth in ISL Rule 5.4.4.1:

Standard methods are generally not available for Doping Control analyses. The Laboratory shall develop, validate, and document in-house methods for compounds present on the Prohibited List and for related substances. The methods shall be selected and validated so they are fit for the purpose.


ISL Rule 5.4.4.1 points out a critically important feature of anti-doping testing: standard methods are generally not available for Doping Control analyses. Consequently, the ISL requires each WADA lab to come up with its own tests “in house”. Given the importance of these tests, it makes sense that a substantial portion of the ISL addresses the process of developing and validating these tests.


We can divide our discussion under this heading into two parts: method development and method validation. We will examine each of these concepts in turn. In the process, we’ll look at a number of the terms we’ve used in our discussions here at TBV: terms such as “specificity”, and “traceability”, and “uncertainty”. I will do my best to define these terms, but you should note that these terms can have different meanings in different contexts, and that some of these terms do not have universally accepted definitions. I will try to define these terms in a way I think is consistent with the ISL and ISO 17025, and I will always provide a citation to support the definition I have chosen, but you may well be able to find alternative definitions for some of these terms.

  1. Test Method Development


ISO 17025 assumes that a lab will select existing methods whenever possible, so the ISO 17025 standard has comparatively little to say about method development. ISO 17025 Rule 5.4.4 contains general requirements for new test procedures: each new test procedure must set forth an appropriate scope, parameters, pre-test checks, a method of recording observations, and interestingly, “criteria and/or requirements for approval/rejection.” (What does ISO 17025 Rule 5.4.4 mean when it refers to “approval/rejection”? Does this rule require that a test method contain criteria for determining when a test has been performed correctly and its results can be approved, and when the results are invalid and must be rejected? ISO 17025 does not say. What criteria did LNDD use for “approval/rejection” under ISO 17025 Rule 5.4.4? It would be interesting to know the answers to these questions.)


While the ISL has rules addressed to “Selection of Methods”, see ISL Rule 5.4.4.1, I think that these rules fit better into our discussion of Test Method Validation below.


  2. Test Method Validation


Method validation rules are arguably the most important rules in the ISL. I will argue that most of the lab testing criteria we discuss here on TBV are method validation rules. However, “method validation” has a specific meaning in the world of lab testing. Thanks to Russ, we have a very good discussion of method validation available to us in the form of a Eurachem Guide on “Fitness for Purpose”. See http://www.eurachem.org/guides/valid.pdf. I will rely heavily on this Eurachem Guide in the analysis that follows.


ISO 17025 defines “method validation” as “the confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled.” (ISO 17025 Rule 5.4.5.1.) A shorter definition of method validation, more to the point, is “the process of verifying that a method is fit for purpose, i.e. for use for solving a particular analytical problem.” (See Eurachem Guide Annex A paragraph A33.2)


Method validation is closely connected with method development: according to the Eurachem guide, it is often not possible to determine exactly where method development finishes and validation begins. (Eurachem Guide 3.1) However, method validation is an ongoing process that does not end after the method has been developed. According to paragraph 5.1 of the Eurachem Guide:


A method should be validated when it is necessary to verify that its performance parameters are adequate for use for a particular analytical problem. For example:

  • new method developed for particular problem;

  • established method revised to incorporate improvements or extended to a new problem;

  • when quality control indicates an established method is changing with time;

  • established method used in a different laboratory, or with different analysts or different instrumentation;

  • to demonstrate the equivalence between two methods, e.g. a new method and a standard.


In other words, method validation should take place not only as a new method is adopted, but also at any time the method is changed or is employed in a different way, or where the lab’s quality control efforts indicate that the method should be re-examined.


Central to method validation is the concept of “fitness for purpose” – the determination that the method under consideration has performance capabilities consistent with what the application requires. If a method is “fit for purpose”, then it is trustworthy – it provides the “right answer to the analytical part of the problem.” (See Eurachem Guide Section 4.3) So, implicit in method validation is a determination by the lab that the method is good enough, that it is worth performing, that on an absolute basis the test will meet its designed purpose.



Up to the Introduction; Back to part 4; On to part 6.


Larry's Curb Your Anticipation, Part 4:
What are the key bits of 17025

Up to the Introduction; Back to part 3; On to part 5.

By Commenter Larry


The Structure of ISO 17025

There are two main groups of requirements under ISO/IEC 17025 - Management Requirements and Technical Requirements. Management Requirements are primarily related to the operation and effectiveness of the quality management system within the laboratory. Technical Requirements address the competence of staff, methodology and test/calibration equipment, but also address issues critical to lab testing, such as method validation, traceability and sampling. See http://en.wikipedia.org/wiki/ISO_17025. In order to understand how ISO 17025 regulates lab testing, we’ll need to look at both the Management Requirements and the Technical Requirements.




There are fifteen Management Requirements and ten Technical Requirements under ISO 17025. The Management Requirements are:

  • 4.1 Organization and management

  • 4.2 Quality system

  • 4.3 Document control

  • 4.4 Request, tender and contract review

  • 4.5 Sub-contracting of tests and calibrations

  • 4.6 Purchasing services and supplies

  • 4.7 Service to the client

  • 4.8 Complaints

  • 4.9 Control of nonconforming testing and/or calibration work

  • 4.10 Improvement

  • 4.11 Corrective action

  • 4.12 Preventive action

  • 4.13 Records

  • 4.14 Internal audits

  • 4.15 Management reviews


The Technical Requirements are:

  • 5.1 General

  • 5.2 Personnel

  • 5.3 Accommodation and environmental conditions

  • 5.4 Test and calibration methods including sampling

  • 5.5 Equipment

  • 5.6 Measurement traceability

  • 5.7 Sampling

  • 5.8 Handling of test and calibration items

  • 5.9 Assuring the quality of test and calibration results

  • 5.10 Reporting the results


In this article, we’re interested only in those ISO 17025 rules (as supplemented by the ISL) that deal with lab testing, and that might serve as the grounds for an “ISL departure” under the WADA rules. So we’re not interested in a number of the ISO 17025 requirements, such as how the lab does its contracting or how the lab hires its personnel. Also, you’ll probably note that the ISO 17025 requirements are not laid out in a logical order for reviewing how a lab performs its tests. Some of the steps we’d expect the lab to perform at the end of the testing process, such as corrective action, are listed before steps that the lab would need to take at the beginning of the testing process, such as test validation and sampling.


So for our purposes, I’d like to reorganize the ISO 17025 Requirements in a different way, so that we can focus just on those requirements dealing with lab testing, and so that we can consider those requirements in a more logical order. The order I propose to use is a sort of “chronological” order, with steps listed in the order that the lab must perform them in order to reach a valid result.


I propose to look at the ISO 17025 and ISL rules in the following order:

  • Rules governing test method development and validation (ISO Rule 5.4; ISL Rule 5.4.4)

  • Rules governing “quality control” (ISO Rule 5.9)

  • Rules governing sampling and other steps taken prior to actual testing (ISO Rules 5.7 and 5.8)

  • Rules governing how a lab must perform its tests

  • Rules governing how a lab must interpret the tests it performs

  • Rules governing the lab’s internal review of its work product (ISO Rules 4.9 and 4.11)


As should be apparent, we’re looking at only a handful of ISO Rules. Nevertheless, the task ahead of us is daunting. The analysis of these six categories of rules will take us through a number of difficult and complex topics. We’ll only be able to examine the first of these six categories in this article.


Up to the Introduction; Back to part 3; On to part 5.



Tuesday Roundup

News
The CyclingNews this morning provides two items of interest. The first is Christian Prudhomme's assertion that, since Astana's past doping problems have damaged the image of the TdF, he cannot assure the team, and thus defending Tour de France champ Alberto Contador, of an invitation to participate in this year's race. CONI's reopening of the OP can of worms surely didn't help. Also, Frankie Andreu speaks out on Rock Racing and the Kayle Leogrande lawsuit filed against USADA. Andreu asserts that RR knew of the impending actions USADA might take against Leogrande and felt RR did not react appropriately:

When asked how much this specific instance of the Rock & Republic sponsored team affected his decision to part ways with the team in December, Andreu responded that it was more the reaction by the team owners – or more specifically the lack of reaction – that gave him a sour taste. "There were a combination of things [in my decision,] but the non-reaction by Rock & Republic was certainly part of that. I knew [the investigation] was in the development. I didn't know about the lawsuit, but I knew there were rumors of the test."


Also in the CyclingNews, ACE, the Agency for Cycling Ethics, has announced a new anti-doping program for smaller teams with fewer funds, and the first team to implement the program is BMC:

The new program announced Tuesday is called the "Blood Passport Program" and offers "slightly more blood testing at a small cost," according to the company's press release. "ACE's Blood Passport Program includes an average of 15 random collections per rider per year, both in and out of competition, and provides longitudinal analysis of biological markers, including testing for hemoglobin, hematocrit, mean corpuscular volume, reticulocyte count, and off score (stimulation index). As a result, this program provides for blood testing and analysis equal to or greater in frequency than the UCI's 2008 program."


The Village Voice gives "Bigger, Stronger, Faster ", the documentary about steroids in which Floyd Landis is interviewed, a thumbs up.

Blogs
Cyclelicio.us correctly notes the "told you so" we'll claim if a Slipstream rider shows up as a false positive after being clear on the team tests. We predicted trouble like that when we thought about resolving disputes between Team/ACE testing, "Passport" readings, and WADA doping control tests. It'll be ugly.

Bugs and Cranks snarks that Roger Clemens should go bowling with Floyd Landis since they are both guilty and have been convicted in the court of public opinion. That doesn't sound fair given Clemens' shoulder, arm and wrist strength. How about billiards?

Dink and Flika, who appear to be Fox News fans, seem to have a running joke in giving "Floydie" awards to people who make comments on their site.


Monday, January 28, 2008

Larry's Curb Your Anticipation, Part 3:
ISO 17025

Up to the Introduction; Back to part 2; On to part 4.

By commenter Larry

ISO 17025: An Introduction

The main standard used by testing laboratories world-wide is ISO 17025 (sometimes called ISO/IEC 17025), issued by the International Organization for Standardization. ISO 17025 provides rules for how labs should operate in order to produce consistently valid results. ISO 17025 is also the basis for lab accreditation by bodies such as the American Association for Lab Accreditation.



The complete text of ISO 17025 is available at http://www.usocpressbox.org/usoc/pressbox.nsf/ac7bf642f496016a87256d0d006a340c/1b39aafc829ba275852572fc000de8ee/$FILE/023.PDF. The best general discussion I’ve seen of ISO 17025 is at http://www.labcompliance.com/tutorial/iso17025/default.aspx?sm=d_e.

(The version of ISO 17025 referred to in the ISL is the 1999 version, but at the time of the testing at issue in the FL arbitration, the 2005 version of ISO 17025 was in effect. USADA posted both the 1999 and 2005 versions of ISO 17025 as exhibits in the FL case, so it is not clear which version of ISO 17025 should be applied to the FL case. I have decided to use the 2005 version of ISO 17025 in this analysis. I do not believe that there are material differences between the two versions of ISO 17025 that would affect my analysis here.)

ISO 17025 applies one set of rules to be followed by all kinds of testing laboratories – not just anti-doping laboratories, or medical laboratories, or even laboratories that perform testing on human samples. For this reason, the standards in ISO 17025 are universal and general in nature. However, ISO 17025 recognizes that more specific requirements may be needed in order to ensure that specialty labs are competent to perform their jobs. For this reason, Annex B of ISO 17025 provides guidelines that certain specialty labs must use to supplement the general requirements of ISO 17025. In particular, Annex B.4 of ISO 17025 provides as follows:

[I]t may be necessary to develop a separate document of applications to supplement [ISO 17025] for specific types or groups of tests … Such a document should provide only the necessary supplementary information, while maintaining [ISO 17025] as the governing document through reference.

By its terms, Article 5 of the ISL is a supplementary document under Annex B.4 of ISO 17025, for the field of doping control. As required by ISO 17025, the ISL recognizes that any aspect of testing or management not addressed in the ISL is to be governed by ISO 17025. See ISL Rule 5.1. Moreover, all WADA laboratories are required to be accredited by a national accreditation body and periodically audited according to ISO 17025 (see ISL Rules 4.1.1 and 6.4.7.2), and all WADA lab methods and procedures must eventually be included in the scope of these periodic ISO audits. See ISL Rule 4.2.2.

The principle that ISO 17025 is applicable to anti-doping testing is expressly recognized in the decision of the majority arbitrators in the FL case, who ruled that “violations of the ISO 17025 … can be violations of the ISL for purposes of rebutting the initial presumption favouring the Lab that an AAF has been established.” See http://ia341243.us.archive.org/0/items/Floyd_Landis_USADA_Case_Decision_Documents/UsadaAndLandis-FinalAward20-09-07.pdf paragraph 156-157.

In some of my prior posts, I’ve noted that ISL Rule 7.1 contains language that appears to limit the application of ISO 17025 to anti-doping cases. The applicable language of ISL Rule 7.1 is set forth in full below:

The Laboratory is not required to provide any documentation not specifically included in the Laboratory Documentation Package. Therefore, the Laboratory is not required to support an Adverse Analytical Finding by producing, either to the Testing Authority or in response to discovery requests related to the hearing, standard operating procedures, general quality management documents (e.g., ISO compliance documents) or any other documents not specifically required by Technical Document on Laboratory Documentation Packages. References in the International Standard for Laboratories to ISO requirements are for general quality control purposes only and have no applicability to any adjudication of any specific Adverse Analytical Finding. (emphasis added)


Read broadly, the language in italics above would indicate that a lab’s violation of applicable ISO 17025 standards does not give an athlete a potential defense to an AAF found by the lab. However, this interpretation of ISL Rule 7.1 is contradicted by ISL Rule 5.1, by the express language of the FL decision, and by the fact that specific ISO 17025 requirements are cited more than a dozen times in the ISL, in ways that make it clear that ISO 17025 is an integral part of WADA drug testing. (See, for example, ISL Rules 4.1.1, 4.2.1, 4.2.2, 4.3.1, 5.2.6.6, 5.3.1.1, 5.3.2.1, 5.3.3, 5.3.4, 5.3.7.1, 5.3.7.3.4, 5.3.8, 5.3.10, 5.3.11, 5.3.13.1, 5.3.14.1, 5.4.1, 5.4.5.2, 6.2.1, 6.3.1, and 6.4.7.) Consequently, the italicized language in Rule 7.1 should be read as limited to the matters addressed in that rule. In other words, ISL Rule 7.1 states that ISO 17025 requirements do not determine which documents a lab must include in its Laboratory Documentation Package (LDP), but ISO 17025 does apply to the other aspects of the lab’s work.

Let’s summarize. To understand the ISL, we must also understand ISO 17025. ISO 17025 is the master document; the ISL supplies more specific rules that supplement the general rules in ISO 17025. The two documents are complementary and must be read together to understand the rules governing anti-doping testing in cycling. Where the ISL is silent, we must look to ISO 17025 for the governing rules. And because the ISL was written to “follow the format of the ISO 17025 document” (see ISL Rule 5.1), we should read the ISL rules within the context of ISO 17025 and within the structure set forth in ISO 17025.

Up to the Introduction; Back to part 2; On to part 4.


Larry's Curb Your Anticipation, Part 2: The ISL

Up to the Introduction; Back to part 1; On to part 3.


By commenter Larry

In part 1, we set up the legal framework for doping testing in cycling, so let’s turn our focus to the rules governing how WADA labs perform drug testing. The applicable rules are the WADA ISL (International Standard for Laboratories), and the ISO 17025 standard. Here in part 2, we'll introduce the ISL, and in part 3, ISO 17025.

[MORE]


The ISL: An Introduction

The document at the heart of the WADA Rules governing lab testing is the ISL. You can find the ISL at http://www.wada-ama.org/rtecontent/document/lab_aug_04.pdf. (Note that the version of the ISL applicable to the FL case is version 4.0. The current version of the ISL, version 5.0, came into effect on January 1, 2008.) The ISL also incorporates all WADA technical documents (TDs), including TD2003LCOC (governing chain of custody issues) and TD2003IDCR (governing identification of substances in chromatographic testing). You can find all of these TDs listed and available for download at http://www.wada-ama.org/en/dynamic.ch2?pageCategory.id=372.

Let’s quickly revisit how the WADA Rules incorporate the ISL. Once a WADA lab has proved an AAF, the accused athlete has the burden of proving a “departure” by the lab from the ISL. If the athlete demonstrates such a departure, then the prosecuting ADA has the burden of proving that such departure did not cause the AAF or the factual basis for the anti-doping rule violation. (WADA Rule paragraph 3.2.2.)

From the above description, you might assume that the ISL consists of a list of rules that a lab must follow in testing an athlete’s sample. Not so. The ISL is considerably more complicated, and more difficult to interpret.

The ISL contains two kinds of rules (see ISL Rule 1.0 for this summary):

  • Rules Governing the Accreditation Procedure. These rules set forth the process that a lab must follow in order to receive WADA accreditation. They do not directly address how a lab performs its testing. For example, ISL Rule 4.1.2 requires that a lab seeking accreditation receive a letter of support from a public authority responsible for a national anti-doping program (such as USADA). I believe that all rules under ISL Articles 4 and 6 fall under this category of accreditation procedure rules.


Some rules governing the accreditation procedure are of interest to us here at TBV. For example, ISL Rule 4.2.4 requires a WADA lab to analyze a minimum of 1500 doping samples a year, or else risk the loss or suspension of its accreditation. This gives us a picture of the minimum volume of work at a WADA lab. ISL Rule 4.2.2 allows a WADA lab to add or modify its methods without the approval of the organization that provided its accreditation, with the understanding that the method must be validated the next time the lab is audited. This lets us know that the methods used at LNDD are supposed to be reviewed by an accreditation body.


Since our primary focus here is to understand lab testing, and specifically to identify the ISL rules that can serve as the grounds for an ISL “departure” under the WADA Rules, I think we can ignore all ISL rules that govern the lab accreditation process.


  • Operating standards for laboratory performance. These are the substantive rules that a lab must follow in order to prove an AAF. These rules are contained in ISL Article 5, as well as in the TDs. Consequently, our primary focus in this article will be on the ISL rules found in Article 5.


The first rule set forth in ISL Article 5 may be the single most important rule in the ISL.

ISL Rule 5.1 states that Article 5 of the ISL is a supplementary document under Annex B.4 of ISO 17025, for the field of doping control. I will argue that the key to understanding the ISL is set forth in this Rule 5.1. But to understand ISL Rule 5.1, we must first answer the questions: what is ISO 17025? And why is ISO 17025 referred to in the ISL?


(We should note that ISL Rule 5.1 refers to ISO 9001 as well as to ISO 17025. We will not have time in this article to consider ISO 9001, but we will probably want to examine ISO 9001 in a later article.)

Up to the Introduction; Back to part 1; On to part 3.
