Friday, October 13, 2006

Verification

A comment to the Friday roundup asks pointedly:

So does anyone have an explanation for the different numbers on pages 13 and 16, or is this just a KoolAid drinking blog?

Trust and DON'T Verify?
Taking such complaints seriously, I put the two slides together and get:

slide/trial    t value    e value    reference
13 #1          61.37      5.2        USADA_092
13 #2          172.23     17.59      USADA_212
16 #1          49.7       11.1       USADA_057
16 #2          61.37      5.2        USADA_092

This looks correct to me -- the only common values are the ones from USADA_092, and they appear to be the same in both slides. The differing values come from different pages because the slides are comparing different things. I've gone back to the PDF of the lab report, and the values seem correctly transcribed.
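
For anyone who wants to repeat the cross-check mechanically, here is a minimal Python sketch. The tuples are simply the values transcribed in the table above; nothing here reads the PDFs themselves.

```python
# Values transcribed from the two slides; each row is
# (slide, trial, t_value, e_value, reference).
rows = [
    (13, 1, 61.37, 5.2, "USADA_092"),
    (13, 2, 172.23, 17.59, "USADA_212"),
    (16, 1, 49.7, 11.1, "USADA_057"),
    (16, 2, 61.37, 5.2, "USADA_092"),
]

# Collect the (t, e, reference) triples for each slide and intersect them.
slide13 = {r[2:] for r in rows if r[0] == 13}
slide16 = {r[2:] for r in rows if r[0] == 16}
common = slide13 & slide16

print(common)  # {(61.37, 5.2, 'USADA_092')} -- the only row the slides share
```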

I guess both commenters missed the change in source data from USADA_212 to USADA_057. Note that each slide lists the lowest LDP page number as its first row.


6 comments:

Anonymous said...

I downloaded the source docs, but I don't think I have the knowledge to make sense of them, so I have not bothered to unzip them yet. If you say everything is okay, then I must have a misunderstanding of what the presentation is saying.

When they say test #1 and test #2, I am assuming they mean the A sample test and the B sample test. Their point being that the measured values vary considerably between the two samples. I would think any sort of testing that cannot produce repeatable results is worthless.

But I am now getting the feeling that test #1 and test #2 refer to tests done on just one sample. Is that right? The tests cannot even produce the same results on the same sample?

--BD

DBrower said...

My understanding is that each sample is divided into multiple sub-samples, which I think they call "aliquots" or sometimes "vials". These are then run through tests individually. In the examples given, they are comparing two trials taken from the A sample and getting very different results. This suggests either (a) a non-repeatable test; (b) samples contaminated in some way; or (c) a test for which the absolute value is not critical, but for which the ratio between the two values is the key value.

The last one (c) seems screwy unless the values are coming from different methodologies. There is some confusion here because different tests are used for "screening" than for "confirmation", and it may not be unreasonable to get 4.9 for the screen and 11.4 for the confirmation using a different machine.
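
To put a rough number on how far apart those two trials are, here is a quick back-of-the-envelope calculation in Python, using the slide 13 t values from the table above:

```python
# Two trials from the same A sample (slide 13 t values).
t1, t2 = 61.37, 172.23

# Relative spread between the trials, as a percentage of their mean.
mean = (t1 + t2) / 2
spread_pct = abs(t1 - t2) / mean * 100

print(f"mean={mean:.2f}, spread={spread_pct:.1f}%")  # spread is roughly 95%
```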

TBV

DBrower said...

Also, if you go to the TBV page on the lab report, you can locate a single-file PDF that is easier to deal with than the zip file.

TBV

Anonymous said...

This presentation does not look as strong as I first thought. If they could show that the same tests on each of the two samples--knowing that both samples are really the same--produced wildly different results, that would be obvious to anyone. It would clearly show that the lab screwed up or that the test itself lacks precision.

But different results from different tests on the same sample requires a deeper understanding of how the tests are done and what they mean. Something that may look significant to a lay person may not seem like anything to an expert.

It would be interesting to go through the A and B tests and compare the results of each step.

--BD

Cheryl from Maryland said...

I discussed the contaminated sample issue with my spouse, an attorney. Legally, this may not be as solid an argument as it appears.

The arbitrators could easily consider the contamination NOT germane. Think of the contamination as the lack of a search warrant -- a judge could consider the evidence gathered as the "fruit of the poisoned tree" and exclude it, but a judge could also determine that the police [i.e. the lab] were acting in good faith.

The key importance of the contamination is if it contributes to an explanation of the adverse findings, something TBV pointed out in his commentary.

Burt Friggin' Hoovis said...

I'm not sure that the "fruit of the poisoned tree" idea is a good analogy here. Actually, I think that the sample degradation data may be the strongest defense the Landis camp has. If there really is an NMT ("not more than") 5% threshold that was exceeded, then the test is invalid, whether the lab ignored it or not.
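
If the criterion really is that simple, the check itself is trivial. A minimal Python sketch, assuming (purely hypothetically) that "NMT 5%" means the relative difference between two measurements of the same quantity must not exceed 5 percent:

```python
# Hypothetical check of an "NMT 5%" (not more than 5 percent) criterion.
# This is a sketch of the idea only; the lab documents may define the
# comparison differently.
NMT_THRESHOLD = 5.0  # percent

def within_nmt(a: float, b: float, threshold_pct: float = NMT_THRESHOLD) -> bool:
    """Return True if the relative difference between a and b is within the threshold."""
    return abs(a - b) / max(abs(a), abs(b)) * 100 <= threshold_pct

# Illustrative values: the second pair reuses the slide 13 e values.
print(within_nmt(5.2, 5.4))    # True: about a 3.7% difference
print(within_nmt(5.2, 17.59))  # False: far beyond 5%
```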