We left part 7 in the middle of talking about the ISO idea of uncertainty, one of two key criteria for fitness for purpose. We continue now, still discussing factors in uncertainty.
“Precision” is itself assessed through two other criteria, “repeatability” and “reproducibility”. “Repeatability” is the precision of a method performed on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time. (Eurachem Guide paragraph A21.) “Reproducibility” is the precision of a method performed on identical test items in different laboratories, with different operators using different equipment. (Eurachem Guide paragraph A22.) To complicate matters slightly, the ISL requires that method validation for threshold substances consider a criterion called “intermediate precision”. (See ISL Rule 126.96.36.199.2.1.) “Intermediate precision” is the variation in results observed when one or more factors, such as time, equipment and operator, are varied within a single laboratory. See http://www.measurementuncertainty.org/mu/guide/analytical.html. In other words, “intermediate precision” is a criterion that falls somewhere in between repeatability and reproducibility.
(Interestingly, the ISL rules briefly refer to repeatability, see ISL Rule 188.8.131.52.2.1, but never to reproducibility. This omission may reflect WADA’s relative lack of concern with achieving consistent results among its various accredited labs. One further point: we can see that when Ali points to the variation between the LNDD S17 test results and the results achieved later upon the EDF re-analysis, he is pointing to a potential problem with the “intermediate precision” of LNDD’s test methods.)
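To make the three precision measures a bit more concrete, here is a minimal sketch in Python. All of the measurement values are invented for illustration; a formal precision study would partition within-lab and between-lab variance rather than simply pooling results, but the basic idea is visible even in this simplified form.

```python
import statistics

# Hypothetical replicate measurements of one test item (units arbitrary).
# Repeatability conditions: same lab, same operator, same equipment,
# short intervals of time.
same_lab_runs = [4.9, 5.1, 5.0, 5.2, 4.8]

# Reproducibility conditions: the same item measured in different labs
# with different operators and equipment.
lab_a = [4.9, 5.1, 5.0]
lab_b = [5.4, 5.6, 5.5]
lab_c = [4.6, 4.7, 4.5]

# Repeatability standard deviation: spread of the within-lab replicates.
s_repeat = statistics.stdev(same_lab_runs)

# Reproducibility standard deviation: spread of the pooled cross-lab
# results (a simplification of how a formal collaborative study works).
s_reprod = statistics.stdev(lab_a + lab_b + lab_c)

print(f"repeatability s:   {s_repeat:.3f}")
print(f"reproducibility s: {s_reprod:.3f}")
```

As expected, the reproducibility figure comes out larger than the repeatability figure, because varying the lab, operator and equipment introduces additional sources of variation. “Intermediate precision” would sit between the two: one lab, but with time, operator or equipment varied.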
Method “accuracy” is measured differently, depending on whether the method purpose is quantitative (as it would be for WADA threshold substances) or qualitative (as it would be for WADA non-threshold substances). (The Eurachem Guide says that this distinction applies to measurement of precision, see Eurachem Guide paragraph 6.37, but it would seem to apply equally to measurement of trueness.) If the method purpose is quantitative, “accuracy” is measured by looking at the amount that the test results differ from each other and from the reference value. If the method purpose is qualitative, “accuracy” is measured based on the percentage of the time that the test generates a false positive result or a false negative result. In either case, the method’s “purpose” should define the required test accuracy.
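The two ways of measuring accuracy can be sketched side by side. The numbers below are hypothetical; the point is only the shape of the two calculations: a quantitative method is judged against a reference value, while a qualitative method is judged by its error rates.

```python
# Quantitative method (threshold substance): compare replicate results
# against a known reference value.
reference_value = 5.0                      # hypothetical certified value
results = [5.2, 5.1, 5.3, 5.2, 5.4]        # hypothetical test results

mean_result = sum(results) / len(results)
bias = mean_result - reference_value       # trueness component
spread = max(results) - min(results)       # crude precision component

# Qualitative method (non-threshold substance): count classification
# errors over samples whose true status is known.
known_positives_flagged = [True, True, True, False, True]  # one miss
known_negatives_flagged = [False, False, True, False]      # one false alarm

false_negative_rate = known_positives_flagged.count(False) / len(known_positives_flagged)
false_positive_rate = known_negatives_flagged.count(True) / len(known_negatives_flagged)

print(f"bias: {bias:+.2f}, spread: {spread:.2f}")
print(f"false negative rate: {false_negative_rate:.0%}")
print(f"false positive rate: {false_positive_rate:.0%}")
```

In a real validation, the method’s stated purpose would set the acceptable limits for each of these figures: how much bias and spread a quantitative method may show, or what false positive and false negative rates a qualitative method may tolerate.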
With the above discussion in hand, let’s look at how ISO 17025 and the ISL address the question of uncertainty. ISO 17025 Rule 184.108.40.206 addresses uncertainty in a general way, requiring testing laboratories to estimate uncertainty, or where such an estimate is impossible, to at least attempt to identify all of the components of uncertainty. The ISL requirements are similarly vague: ISL 220.127.116.11 notes the distinction we’ve already discussed between quantitative and qualitative uncertainty, and makes reference to concepts we’ve discussed above, such as repeatability, precision and bias. For quantitative methods, the ISL establishes a maximum uncertainty: “the expanded uncertainty using a coverage factor, k, to reflect a level of confidence of 95%.” ISL Rule 18.104.22.168.2.2. There is no corresponding maximum uncertainty for WADA lab qualitative methods – the ISL does not establish any maximum false positive or false negative percentages. However, it is clear from the ISL that all confirmation procedures – whether for threshold substances or non-threshold substances – must meet applicable uncertainty requirements. (ISL Rule 22.214.171.124)
(I should mention that the term “expanded uncertainty” used in ISL Rule 126.96.36.199.2.2 has a special meaning: it is a measure of uncertainty that defines a range within which we can expect to find a particular measurement result. The math for determining expanded uncertainty is beyond what I want to cover here. Anyone interested in learning more about “expanded uncertainty” can look here: http://physics.nist.gov/cuu/Uncertainty/coverage.html.)
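While the full math is out of scope, the basic mechanics of “expanded uncertainty” are simple enough to sketch. The component values below are invented; a real uncertainty budget is built following the GUM approach summarized at the NIST page linked above.

```python
import math

# Hypothetical standard uncertainties from separate sources
# (e.g. calibration, sample preparation, instrument noise).
u_components = [0.3, 0.2, 0.1]

# Combined standard uncertainty: root-sum-of-squares of the
# (assumed independent) components.
u_combined = math.sqrt(sum(u ** 2 for u in u_components))

# Expanded uncertainty: multiply by a coverage factor k.
# k = 2 is the conventional choice for roughly 95% coverage,
# assuming an approximately normal distribution of results.
k = 2
U_expanded = k * u_combined

print(f"combined standard uncertainty: {u_combined:.3f}")
print(f"expanded uncertainty (k={k}):  {U_expanded:.3f}")
```

A result would then be reported as the measured value plus or minus the expanded uncertainty, defining the range within which the true value is expected to lie at the stated level of confidence.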
I’ll conclude my discussion of uncertainty by stating the obvious: uncertainty is an enormously complicated topic, and my discussion here barely scratches its surface. There are important concepts that I have not touched upon, such as “measurement uncertainty” and “method uncertainty”. Nor have we covered enough to see how LNDD came up with its stated +/- 0.8 uncertainty for its CIR testing, or how that stated uncertainty relates to the ISL or to method validation in general.
We cannot become experts in “uncertainty” in the course of a single article like this one. Instead, what I’ve tried to do here is to introduce this topic and, more importantly, to place it in an overall context. “Uncertainty” and related concepts such as bias, traceability, precision and repeatability are method validation concepts that speak to whether a method is fit for purpose. We’ll return to a longer discussion of the importance of this context before this article is finished.
Up to the Introduction; back to part 7; on to part 9.