Wednesday, November 07, 2007

Idiots look at Data, Part I. What is good?

UPDATED:

Having explored the causes of error in "Integration for Idiots", we presented the argument that crappily collected data yields crappy results that cannot be relied upon.

We take as axiomatic that when there are significantly overlapped peaks of unknown composition, no reliable CIR (carbon isotope ratio) can be computed by common methods. There are three qualifications in that statement.

  1. Significant overlaps.
  2. Unknown composition.
  3. Common methods.
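To make the axiom concrete, here is a back-of-the-envelope sketch of ours, with made-up numbers that come from no lab document: the carbon isotope ratio reported for a merged peak is, to a good approximation, the area-weighted average of the ratios of everything eluting under it, so a hidden co-eluting peak drags the reported value toward its own.

```python
# Back-of-the-envelope illustration with made-up numbers (not from any lab document):
# the delta-13C measured over a merged peak is, approximately, the area-weighted
# average of the delta-13C values of everything eluting in that window.

def mixed_delta(areas, deltas):
    """Area-weighted average delta-13C of co-eluting components."""
    total = sum(areas)
    return sum(a * d for a, d in zip(areas, deltas)) / total

# Hypothetical target metabolite: area 100, true delta-13C of -27 per mil.
# Hypothetical occluded co-eluting peak: area 15, delta-13C of -32 per mil.
measured = mixed_delta([100.0, 15.0], [-27.0, -32.0])
print(round(measured, 2))  # -27.65: the hidden peak shifts the result by ~0.65 per mil
```

On those hypothetical numbers, an occluded peak a sixth the size of the target and a few per mil different is enough to move the reported value by more than half a per mil, which is exactly why a peak of unknown composition cannot yield a reliable CIR by common methods.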

There is, perhaps, debate to be had about what constitutes a significant overlap, but fortunately, we need not address that here. It should be inarguable that when a third peak is contained fully within the region occupied by two other peaks, the overlap is significant. See figure 17 in Part IV for an example about which there should be no doubt the overlap is significant.

We include common methods only for completeness. There is speculation that some methods being researched by Brenna (which is what his $1.3 million is for) will be able to resolve some of these cases by working in multiple dimensions at the same time. These methods are not widely deployed, and are certainly not in use at the LNDD.

We will also point out that, by definition, the complete occlusion of a third peak, as shown in both figures 15 and 17 of Part IV, makes it a peak of unknown composition. It has not even been identified as a peak, much less had its composition determined.

It is scientifically invalid to report CIR results for an impure peak. Whether doing so is invalid per the ISL is a different question we will leave aside for legal minds to consider.

Later in this series we will look at all of the Landis F3 chromatograms, since those are the ones at issue in the case. Before we do, we'll identify the things to look for:
  • Clean, unambiguous baselines suggesting good chemical separation of the prepared samples.
  • Significant (a debatable term) baseline (chromatographic) separation of peaks.
  • Absence of shoulders suggesting unidentified peaks.
  • Measurement of nearby peaks to consider their potential for influence.
We are looking for hints of the sorts of interference shown in "Integration for Idiots". If none are present, then we may be inclined to think the chemical and chromatographic separation is good, and that we are unlikely to have unidentified co-eluting peaks and interference. But if we do find hints of interference, we are right to be skeptical of the quality of the data and to want more evidence of proper identification and purity. (A rough sketch of how such checks might be quantified follows.)
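For those who want something more concrete than eyeballing, here is one rough way the first two checks above are commonly quantified. This is our own sketch with hypothetical numbers, not a description of how LNDD or UCLA actually processes data: the standard resolution formula for adjacent peaks, and the depth of the valley between them.

```python
# A rough, quantitative version of the checks listed above (our sketch, not any
# lab's procedure). Retention times and widths are in seconds; all numbers are
# hypothetical.

def resolution(rt1, rt2, w1, w2):
    """Classic chromatographic resolution, Rs = 2*(t2 - t1) / (w1 + w2), where
    w1 and w2 are the peak widths at base. Rs >= 1.5 is the usual rule of thumb
    for 'baseline separation'."""
    return 2.0 * abs(rt2 - rt1) / (w1 + w2)

def valley_fraction(valley_height, smaller_peak_height):
    """Depth of the trace between two peaks, relative to the smaller peak's height.
    Near 0 means the signal returns to baseline between them; values well above 0
    suggest merged peaks or a shoulder hiding in the gap."""
    return valley_height / smaller_peak_height

# Hypothetical pair of adjacent peaks:
print(round(resolution(1240.0, 1275.0, 20.0, 22.0), 2))  # 1.67 -> baseline separated
print(round(valley_fraction(2.0, 80.0), 3))              # 0.025 -> clean valley
```

Nothing in the chromatograms we will examine comes labeled with these numbers; the point is only that "good separation" and "no hidden shoulder" are things one can check, not just assert.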

[Updated Digression: Earlier versions of this post must have caused guffaws and spilled coffee over at the offices of our friends at HRO, story at Idiots R Us.]

Before we look at the Landis chromatograms, it seems useful to look at some that are good and unambiguous.

The one I'd like to show is in GDC 1362 (and there are probably more useful ones around it), but I've had a hard time getting a really good copy.

(It seems to be something of a hot potato. USADA fought very hard to keep it from being shown when Suh attempted to introduce it during cross-examination of Catlin. I suspect continuing redaction issues have discouraged Landis from releasing it in original form. It was the cause of a Matt Barnett as William Novak moment, when he went out of his way later in the hearing to effectively identify the athlete whose test this was. Apparently, proving the redaction wasn't great, and showing that USADA will burn those who don't play ball, was more important to USADA than maintaining the confidentiality of the athlete in question.)

Fortunately, commenter M found a reasonable one in WMA's slideshow, on slide 40, though the resolution isn't great.

Even so, it is not difficult to spot the difference between this and the chromatograms produced by the LNDD.



Figure 1: UCLA Chromatogram from WMA's slide 40.


So, it does appear possible to generate pretty clean chromatograms in a testosterone IRMS analysis. It is not the fundamental science of the method that produces garbage. Let's look closer at the payload part of the plot.


Figure 2: Zoom in on Figure 1.


What we don't see in this chromatogram is all of the junk at the beginning that is typical of an LNDD result. Nor do we see anything that hints at an adjacent peak, a shoulder, or a background that might be confused with a hidden peak, with the possible exception of the first true peak at around 1200 s, which might be bleeding towards the one that follows. There are exactly the four target peaks, plus one pulse that looks like a reference.

Clean looking data.

These look like our pure, theoretical examples.

If there is support for the claim that they are pure peaks, this is a very reliable-looking case. We compliment the scientists and lab technicians at UCLA for producing excellent results, even if Don only wants to call them "pretty good". We understand the IRMS chromatograms produced by Ayotte's group in Montreal are of about the same quality.

We can see why USADA didn't much like this being shown. Without this, you might think that what LNDD produces is typical.

In Part II, we'll start looking at the Landis chromatograms.


18 comments:

Mike Solberg said...

Ali, TbV,

Do you mean to tell me that those are the same types of graphs, including the same range of data, as the LNDD graphs? There is no "noise" at all! And none of that stuff at the beginning where LNDD had to guess which one was the SI / androstan? Those can't really be the same type of graphs. UCLA must have cleaned them up somehow before offering them in whatever case this was.

If they really are the same, what about the process accounts for the difference? More stable vacuum? More effective separation tube (whatever that thing is called)? Lower resolution settings of some kind (which would "edit" out background noise)?

syi

m said...

WRT the UCLA chromatograph.

Is this the same one at the end of the Meier slide show at slide 40?

Perhaps Campbell can accuse Landis (and you) of cherry picking a clean UCLA chromatograph. :-)

This is the same UCLA lab that Hintzlik has lambasted for its poor practices?

Meier in his article says that overlapping peaks are sometimes unavoidable. I'm sure we could find a clean LNDD one too.

Interestingly, the Shackleton paper, which I believe pioneered the IRMS method used here, also shows a pretty messy chromatograph at figure 3, with shoulders and possible overlapping peaks. Yet he thought his measurements were accurate enough.

http://ia351412.us.archive.org/1/items/Floyd_Landis_Case_Documents_14/GDC01101-GDC01110.pdf


And not to telegraph where your analysis is eventually going, but slides 13, 14, and 36-40 seem to make the case that a missing preceding peak before the 5A negatively biased its measured carbon ratio. If I recall correctly this was in the Meier testimony, and this is what Brenna was referring to when he testified to the contrary: that if the peak existed, it would bias the 5A in a positive direction, not a negative one.

m said...

Re: the Shackleton figure 3,

not sure how to post the complete url, since it was truncated.

I'll split it up for now.

http://ia351412.us.archive.org/

1/items/Floyd_Landis_Case_Documents_14

/GDC01101-GDC01110.pdf

DBrower said...

My understanding is that UCLA does much better chemical separation of fractions. It appears to make a difference.

Catlin testified they have no magic noise reduction software, and that he didn't know how LNDD did it. Search the transcript of his testimony for 'noise' - it is a very crafty part where he appears to praise them, but can be read to be saying we can't and they don't know how to either, but they are reporting numbers anyway...

TBC

Mike Solberg said...

Trust But Confirm?
Trust But Contain?
Trust But Criticize?
Trust But Challenge?
Trust But Choke?

syi

Larry said...

Um ... um ... the sound you just heard is the sound of my jaw dropping on the floor.

THAT is what a chromatograph is supposed to look like?

Really?

A live sample, not some kind of mix cal?

And you can achieve these kinds of results routinely, and not just for a sample taken from, say, a monk who drinks nothing but distilled water and has just completed a 30-day fast?

Are you sure?

This isn't merely some kind of miracle that they can pull off over at UCLA? Any good lab can do this?

OK. Then one final question.

Really?

DBrower said...

TB can't type so good on a phone!

DBrower said...

let me use your incredulity as a lever, if i may.

They were represented as being that good; this is unverified, and the cherry picking argument was not made by usafa at the hearing. They seemed to not want it in, an to run away as fast as they could. They did not as catlin if was a common or uncommon quality result from ucla. Would they have liked the answer?

TBV (right this time)

DBrower said...

but some other things not;

Usada not usafa; and not an; ask not as

TBV

m said...

TBV,

Well, take a look at slide 40, also from UCLA, and Shackleton figure 3.

Larry said...

M, can you post a cite to slide 40?

m said...

Larry,

I downloaded the Meier slide show from this site. It's slide #40. LOL!

If you look at slides 13, 14, and 35-41, I think that is what Brenna was talking about.

DBrower said...

M,

Good catch -- Now that I'm back from the carpool stuff, I can see that I only had half the slide, the half that is just the nice clean reference pulses. If I'd realized the nice graphs were in WMA's set, I'd have used it and wouldn't have spent time digging through the video looking for what I could find.

I will update and correct it after dinner.

thanks!

TBV

DBrower said...

Fixed. I know I'm feeling better having gotten the real picture, even at low res.

TBV

Larry said...

Figure 2 still looks extremely clean to me. It's not as insanely clean as the pictures you showed earlier, but it's still way cleaner than anything I've seen before.

I think that all of the questions we asked before are still valid (perhaps using a tone of voice indicating less astonishment and incredulity). Is this typical of what a good lab can achieve with a live sample from a normal adult? Or is this more of a "gold standard" that we can shoot for but cannot always expect to achieve in the real world?

While the UCLA graph looks a lot nicer than the LNDD graph, I don't see how the baseline separation of 5bA and 5aA at UCLA is any better than it is at LNDD. The UCLA graph appears to have better resolution, as it shows a curve between 5bA and 5aA while LNDD shows something more like a staircase between 5bA and 5aA. But I don't see separation between 5bA and 5aA at UCLA - the end of 5bA at UCLA curves right into the beginning of 5aA. Shouldn't we be concerned about that?

m said...

TBV,

Another question. I assume this slide 40 came from the lab documentation pack of one of Barnett's other doping clients who had a test by UCLA.

I seem to recall someone claiming that UCLA only tests for two metabolites, the andro and etio, and requires both to be positive.

So are you sure slide 40 shows 5A and 5B, and if so, why would UCLA be testing for 5A and 5B?

m said...

Larry,

Re separation between the 5A and 5B, also look at the Shackleton figure 3, which shows similar problems.

DBrower said...

I think you mean Jacobs, not Barnett(!).

I also don't remember if UCLA tested more than two, but it certainly required positives on more than one by its own validation study.

I don't know if these are 5bA and 5aA, since I don't have any more of the pack. This is offered less for that specific point than for its general quality.

If it's not possible to get decent separation of the 5bA and 5aA at any lab with any chemistry, column, and ramp, maybe they aren't a good pair to be testing for? But that point may not be useful to argue at this juncture.

It is possible to say that it is demonstrably possible to do the chemistry and chromatography and get samples without a zillion unwanted peaks.

TBV