Sunday, December 28, 2008

The Winnowing: Klaas Faber

Klaas Faber is a widely published author on statistics for the accreditation of chemometric measurements. He has followed the Landis case since it became public. Below he cites the Commentary in Nature by Donald Berry. (Our posts referencing Berry)

[Back to the Introduction]


Through WADA's TD2003LDOC, the laboratories have arranged that they do not need to hand over the full data. In contrast, the Netherlands Forensic Institute, for example, is obliged to give the defense everything it needs, because that is arranged by law, a law that holds for governmental bodies (i.e., not for a lab that is paid by sport organizations and/or anti-doping agencies). For that reason, I was never able to have a good look at Floyd's data, and neither was the defense. During the hearings before the CAS, for example, the laboratory's analyst was not able to reproduce the numbers that led to the blue 'dot' in Figure 1 of Berry's Commentary in Nature! Nevertheless, the lab was able to meet the required burden of proof ('...to the comfortable satisfaction of the hearing body...') on all occasions.


[Figure 1 from Berry's Nature Commentary; image not reproduced here. Caption below.]


"Plots show the distribution of 167 samples of the metabolites etiocholanone and 5 beta-androstanediol (a, b), and androsterone and 5 alpha-androstanediol (c, d). Panels b and d show samples the French national anti-doping laboratory (LNDD) designate to be 'positive' (red crosses) or 'negative' (green dots); the values from Landis's second sample from stage 17 is shown as a blue dot. Axes display delta notation, expressing isotopic composition of a sample relative to a reference compound." (source: Nature)


A final comment on WADA's TD2003LDOC. Note the sentence "Quantitative Data or ratio data and uncertainty estimation, if applicable." First, ratio data make a statistical treatment very complicated. I note this because anti-doping researchers do not use much statistics and are therefore not troubled by it. Note, however, that it is often not even necessary to work with ratios, because there is always an internal standard. The reasons for working with ratios are historical, not technical. Second, uncertainty estimation is always applicable. The people who drew up this document probably do not know that, owing to their lack of statistical insight. How does one change such a practice once it is fixed in the documents?
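To make the first point concrete, here is a minimal sketch with made-up numbers (none of them from any laboratory file) of why ratios are statistically awkward: the distribution of a ratio of two noisy signals is skewed, and the usual first-order error-propagation formula only approximates its spread, more poorly as the denominator gets noisier.

```python
import numpy as np

# Minimal sketch with invented numbers: the statistical awkwardness of a
# ratio of two noisy measurements. Nothing here comes from real anti-doping data.
rng = np.random.default_rng(42)
n = 100_000

a = rng.normal(loc=100.0, scale=5.0, size=n)  # analyte signal, 5% RSD
b = rng.normal(loc=20.0, scale=4.0, size=n)   # reference signal, 20% RSD
r = a / b                                     # the ratio that gets reported

# First-order (delta-method) relative standard deviation of the ratio:
# (sd_r/mean_r)^2 ~ (sd_a/mean_a)^2 + (sd_b/mean_b)^2
approx_rsd = np.sqrt((5.0 / 100.0) ** 2 + (4.0 / 20.0) ** 2)

print(f"Monte Carlo RSD of ratio: {r.std() / r.mean():.3f}")
print(f"First-order formula     : {approx_rsd:.3f}")
print(f"Mean vs median of ratio : {r.mean():.3f} vs {np.median(r):.3f}")
```

The simulated spread exceeds what the textbook formula predicts, and the mean drifts away from the median: exactly the kind of complication that disappears when one models the internal-standard signals directly instead of their ratio.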

There was a meeting of the Council of Europe Anti-Doping Convention technical advisory group on 11 July 2006, with the minutes issued on 31 July. On p. 2 (3rd paragraph) you can read that researchers themselves had doubts about the IRMS method: "Moreover, given that reservations have been expressed on the validity of the IRMS method, scientific background for its use would also be appreciated." Compare the dates of that document with the date of stage 17 (20 July 2006). Rather cynical.

T-DO (2006) 29, page 2

At the moment I am setting up a think tank that involves anti-doping experts but also, for example, a forensic statistician. Currently, there is not a single anti-doping test for which the risk of a false positive is known. In forensic statistics, people have been trying for the past twenty years to let Bayes determine the weight of the evidence (the likelihood ratio: the likelihood of the evidence given guilt versus the likelihood of the evidence given innocence). It appears to be successful, at least here in the Netherlands; I don't know about other countries. What helps is that we don't have a jury: here, one only needs to train judges and prosecutors in how to interpret Bayes correctly.
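To illustrate the reasoning Faber describes, here is a minimal sketch with hypothetical numbers, assuming, contrary to the present situation, that a test's sensitivity and false-positive rate were actually known:

```python
# Hypothetical sketch of the likelihood-ratio reasoning described above.
# None of these rates come from any real anti-doping test; the point is
# that without a known false-positive rate the calculation cannot be done.

def posterior_prob_doping(prior: float, sensitivity: float, fp_rate: float) -> float:
    """P(doping | positive test), via Bayes' rule in odds form."""
    likelihood_ratio = sensitivity / fp_rate      # P(+ | doped) / P(+ | clean)
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Suppose 5% of tested riders dope and the test catches 90% of them.
print(posterior_prob_doping(prior=0.05, sensitivity=0.90, fp_rate=0.01))  # ~0.83
print(posterior_prob_doping(prior=0.05, sensitivity=0.90, fp_rate=0.05))  # ~0.49
```

With a 1% false-positive rate a positive test still leaves real doubt, and at 5% it is close to a coin flip; this is why an unknown false-positive risk makes the weight of the evidence literally incomputable.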


- Klaas


2 comments:

Mike Solberg said...

This makes me think of all the technical discussion here and at DPF. We often had a hint of it, but I don't think we fully realized just how much we were like people on a cruise ship in rough seas wondering why a glass of water wouldn't stay calm. Everything in the dining room around us was perfectly still and calm (and in equilibrium with each other) but the whole system was tettering.

SwimMikeSwim.com

syi

Mike Solberg said...

Eh, make that teetering.