Journal of Chromatography A, 2003
The purpose of this brief review is to assess another of the papers that Dr Brenna co-authored, in order to determine whether anything of relevance to the Landis case is present. It appears in the GDC exhibits, at the end of GDC 1151 and filling GDC 1161.
The purpose of this paper is to investigate the impact of quantization noise on the measured o/oo value for IRMS. Both the traditional summation method for integration and the curve-fitting method are explored to see which performs better in the presence of quantization noise. Curve fitting is not really relevant to the Landis case because it was not employed, so we'll cut to the chase and say that curve fitting generally performed better than the traditional summation method. LNDD doesn't do that.
However, our interest in the article is piqued by a highlighted paragraph on page 274, GDC 1155:
The reproducibility of the summation background correction depends in part on the two points that anchor the background line under the peak; imprecision in the measurement of either point multiplies through the entire length of the background segment connecting the points. In the presence of a simple linear background, a background line is easily drawn between any two points on either side of the peak, as shown in Figure 2a. Chemical noise due to column bleed or contaminant peaks may cause inaccuracy in defining the background, but such noise is usually correlated in all three traces.
We're not sure which party introduced the exhibit, or marked the paragraph. It could have been Landis, talking about the effect of inaccurate background subtraction, or USADA, saying the three traces (presumably 44, 45 and 46) can usually correlate background noise from contaminant peaks.
Reading the paper as it was intended, rather than how it might apply to Landis, we first get an explanation of quantization noise. For those not familiar with it, quantization noise occurs when an analogue, continuous parameter is converted into a digital, discrete parameter by the process of analogue-to-digital conversion (ADC). This process effectively converts the signal present at the detector into an n-bit binary number, where n equals the number of bits in your converter. For example, if you have a 5-bit converter, you have 32 possible values that your signal can take, ranging from 0 to 31 (in binary, 00000 to 11111). If the value of the analogue input happens to fall between two of those 32 binary values when it is sampled, it will be converted to one or the other, so its true value is lost (e.g. an input of 2.5 will be converted to either 2 or 3). This process of constraining a parameter to discrete values is called quantization, and because it distorts the true signal, the resulting error is called noise.
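To make that concrete, here's a minimal sketch (ours, not the paper's) of what an n-bit converter does to a signal; the `quantize` function and its names are our own illustration:

```python
# Hypothetical illustration (not from the paper): what an n-bit ADC
# does to an analogue signal, and the quantization noise left behind.
import numpy as np

def quantize(signal, n_bits, full_scale=1.0):
    """Round an analogue signal onto the 2**n_bits discrete ADC codes."""
    levels = 2 ** n_bits
    codes = np.clip(np.round(signal / full_scale * (levels - 1)), 0, levels - 1)
    return codes / (levels - 1) * full_scale  # back to signal units

analogue = np.linspace(0.0, 1.0, 1000)
digital = quantize(analogue, n_bits=5)   # only 32 possible output values
noise = analogue - digital               # never larger than half a step
```

The error never exceeds half of one ADC step, which is why adding bits (smaller steps) shrinks the noise.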
Experimentally, the authors injected CO2 samples of known composition into a GC/C/IRMS system at varying injection sizes. The results were recorded using a custom 24-bit ADC sampling every 0.1 seconds. This 24-bit record was then reprocessed to simulate the results one would obtain with 12-bit, 14-bit and 16-bit ADCs.
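That reprocessing step can be sketched like this (our reconstruction of the idea, not the authors' code): keep the same full-scale range but discard the low-order bits.

```python
# Our reconstruction of the idea (not the authors' code): simulate a
# coarser ADC from 24-bit integer codes by discarding low-order bits.
import numpy as np

def requantize(codes_24bit, target_bits):
    """Simulate a target_bits ADC from a 24-bit record."""
    shift = 24 - target_bits
    # Truncate the low bits, then shift back so the simulated record
    # stays in 24-bit units for direct comparison.
    return (codes_24bit >> shift) << shift

t = np.arange(0.0, 60.0, 0.1)   # 0.1 s sampling interval, as in the paper
# A synthetic chromatographic peak on a small background, in 24-bit codes
codes = (2e6 * np.exp(-((t - 30.0) ** 2) / 8.0) + 5e4).astype(np.int64)
sim_12bit = requantize(codes, 12)
```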
As the figures are introduced, we're given some examples of the behaviour pointed out in the paragraph cited in the case:
(caption in original)
The investigation is concerned with the critical process of accurate background removal; here, the effect of the subtle errors caused by quantization noise on identifying the true background level to be removed. The measured o/oo values are found to be a function of both the number of bits in the ADC and the CO2 injection size. CO2 injection sizes ranged from a minimum of 0.1 nmol to a maximum of approximately 11 nmol. Results are expressed as the standard deviation (SD) of the observed measurements:
- The SD of the 12-bit ADC measurement error ranged from >> 10 o/oo at the minimum CO2 injection level to ~2 o/oo at the maximum CO2 injection level.
- The SD of the 14-bit ADC measurement error ranged from > 10 o/oo at the minimum CO2 injection level to <>
- The SD of the 16-bit ADC measurement error ranged from ~ 6 o/oo at the minimum CO2 injection level to <>
- The SD of the 24-bit ADC measurement error ranged from ~ 4 o/oo at the minimum CO2 injection level to <>
The paper then develops equations for estimating the potential error based on the CO2 injection level and the number of bits in the ADC. The conclusion was that at the lowest (12-bit) resolution, an error as low as 1 o/oo was unachievable even with the maximum CO2 injection level.
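We won't reproduce the paper's equations, but the textbook starting point for this kind of estimate is worth a sketch (ours, purely illustrative): rounding to a step of size q leaves an error roughly uniform over plus or minus q/2, whose standard deviation is q divided by the square root of 12, so each extra bit roughly halves both the step and the noise.

```python
# Illustrative only -- not the paper's equations. The classic model:
# rounding error is uniform over [-q/2, q/2], with SD = q / sqrt(12),
# where q is the size of one ADC step.
import math

def quantization_sd(full_scale, n_bits):
    """SD of the rounding error for an n-bit ADC spanning full_scale."""
    q = full_scale / (2 ** n_bits - 1)  # size of one ADC step
    return q / math.sqrt(12)

sd_12 = quantization_sd(1.0, 12)
sd_16 = quantization_sd(1.0, 16)   # roughly 16x smaller than the 12-bit SD
```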
Unfortunately, we idiots tried but failed to find out how many bits the IsoPrime has in its ADC, so we can't draw a direct comparison. However, it is clear from Dr Brenna that significant errors are possible due to this effect. Perhaps he should also have addressed the impact of quantization error in determining the peak maxima, and how that would affect any correction to the m45/m44 time lag.
So where does that leave us? The effect of quantization noise appears to be significant, but we may assume that it forms part of LNDD's claimed accuracy of +/- 0.8 o/oo. We are being generous here, in assuming that their system achieves a high CO2 level and that their ADC has more than 12 bits. If either assumption fails, all bets are off.
It's important to recognise that this +/- 0.8 o/oo is a characteristic of the system. It includes contributions from the chromatographic efficiency of the system, the sensitivity of the detectors, chemical and electrical noise, the effects of digitizing the detector signals (quantization noise), and the subsequent impact of that when it comes to removing background and calculating peak areas. Unless you go out and change part of the system for something better, you can't improve on this. So, the true value for the peak will be within +/- 0.8 o/oo of the measured value (if our assumptions about LNDD CO2 levels and ADC bit size hold true).
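For what it's worth, independent error sources like those combine roughly in quadrature (root-sum-square); the numbers below are made up purely to illustrate the arithmetic, not LNDD's actual budget:

```python
# Made-up contributions (in o/oo) purely to illustrate how independent
# error sources combine in quadrature -- NOT LNDD's actual budget.
import math

contributions = {
    "chromatography":         0.5,
    "detector noise":         0.4,
    "quantization":           0.3,
    "background/integration": 0.3,
}
combined = math.sqrt(sum(v ** 2 for v in contributions.values()))
# The total is less than the plain sum, but bigger than any one term
```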
That's the basic accuracy of the system. You've got your raw digitized data and the best you can hope to do with it is +/- 0.8 o/oo. Sounds pretty good, doesn't it?
LNDD close the book here and apply this tolerance to all their results.
Are there situations that may further reduce the accuracy? We idiots are left scratching our heads. Wasn't it USADA expert witness Dr Brenna who published papers confirming that many such situations exist and could result in significant additional error? Didn't team Landis expert Dr Meier-Augenstein also testify to that?
But LNDD appears to be saying that even if they have a situation with overlapping peaks, it will have no impact on the accuracy of their measurement.
They don't stroke their chin and say "Hmmm ... maybe +/- 1.2 o/oo for this case ..." or "Ouch! Big overlap, that's got to hurt ... maybe +/- 3 o/oo for this one."
No, they claim that they're always within +/- 0.8 o/oo of the true value. It doesn't matter if they're 10%, 20%, 30% overlapping, they'll just drop a perpendicular down to background and ... +/- 0.8 o/oo.
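To see why overlap matters, here's a toy demonstration (our construction, not taken from the paper or from LNDD's software): two overlapping Gaussian peaks, with the first integrated by dropping a perpendicular to the background at the valley between them, versus that peak's true area.

```python
# Toy demonstration (ours): perpendicular-drop integration of the
# first of two overlapping Gaussian peaks, versus its true area.
import numpy as np

dt = 0.01
t = np.arange(0.0, 40.0, dt)
peak_a = 1.0 * np.exp(-((t - 18.0) ** 2) / (2 * 1.5 ** 2))  # main peak
peak_b = 0.3 * np.exp(-((t - 24.0) ** 2) / (2 * 1.5 ** 2))  # overlapping neighbour
total = peak_a + peak_b

# Cut at the valley (local minimum) between the two peak maxima
i_a, i_b = int(np.argmax(peak_a)), int(np.argmax(peak_b))
valley = i_a + int(np.argmin(total[i_a:i_b]))

area_true = peak_a.sum() * dt          # true area of peak A alone
area_perp = total[:valley].sum() * dt  # perpendicular-drop estimate
bias = (area_perp - area_true) / area_true
```

Even at this modest overlap the perpendicular-drop area differs measurably from the true area, because the neighbour's tail under the cut is counted in while the main peak's tail beyond it is lost; the bias grows quickly as the peaks merge.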
On one side of the fence, we have Dr Brenna, Dr Meier-Augenstein and the Idiots, who have all claimed and demonstrated that significant additional error is possible when you do not have good, clean, baseline-separated peaks. On the other side of the fence, we have LNDD, who take no cognizance of this irrefutable fact. And we're the idiots? You decide!
In comments below there's some discussion of errors observed in the paper, with reference to the figure above.