Monday, November 26, 2007

Idiots look at [Brenna 2003]: Quantization and Background

Analysis of Quantization Error in High Precision Continuous Flow Isotope Ratio Mass Spectrometry

by Gavin L Sacks, Christopher J Wolyniak and J Thomas Brenna
Journal of Chromatography A, 2003

Review by Ali and TBV

The purpose of this brief review is to assess another of the papers that Dr Brenna co-authored, in order to determine whether anything of relevance to the Landis case is present. It is in the GDC exhibits, at the end of GDC 1151 and filling GDC 1161.

The purpose of the paper is to investigate the impact of quantization noise on the measured o/oo value in IRMS. Both the traditional summation method of integration and the curve-fitting method were explored to see which performed better in the presence of quantization noise. Curve fitting is not really relevant to the Landis case because it was not employed, so we'll cut to the chase and say that curve fitting generally performed better than the traditional summation method. LNDD doesn't do that.

However, our interest in the article is piqued by a highlighted paragraph on page 274, GDC 1155:
The reproducibility of the summation background correction depends in part on the two points that anchor the background line under the peak; imprecision in the measurement of either point multiplies through the entire length of the background segment connecting the points. In the presence of a simple linear background, a background line is easily drawn between any two points on either side of the peak, as shown in Figure 2a. Chemical noise due to column bleed or contaminant peaks may cause inaccuracy in defining the background, but such noise is usually correlated in all three traces.

(emphasis added)


We're not sure which party introduced the exhibit, or marked the paragraph. It could have been Landis, talking about the effect of inaccurate background subtraction, or USADA, saying the three traces (presumably 44, 45 and 46) can usually correlate background noise from contaminant peaks.

Reading the paper as it was intended, rather than how it might apply to Landis, we first get an explanation of quantization noise. For those not familiar with it, quantization noise arises when an analogue, continuous parameter is converted into a digital, discrete parameter by the process of analogue-to-digital conversion (ADC). This process effectively converts the signal present at the detector into an n-bit binary number, where n equals the number of bits in your converter. For example, if you have a 5-bit converter, your signal can take one of 32 possible values, ranging from 0 to 31 (in binary, 00000 to 11111). If the value of the analogue input happens to fall between two of those 32 values when it is sampled, it will be converted to one or the other, so its true value is lost (e.g. an input of 2.5 will be converted to either 2 or 3). This process of constraining a parameter to discrete values is called quantization, and because the resulting distortion corrupts the true signal, it is treated as noise.
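To make the idea concrete, here's a minimal sketch (our own illustration, not code from the paper) of an ideal n-bit converter snapping an input to its nearest representable level:

```python
# Our own illustration (not from the paper): an ideal n-bit ADC maps an
# analogue value onto one of 2**n discrete levels, losing the difference
# between the input and the nearest level -- the quantization error.

def quantize(value, n_bits, full_scale=1.0):
    """Snap value in [0, full_scale] to the nearest of 2**n_bits levels."""
    levels = 2 ** n_bits
    step = full_scale / (levels - 1)        # spacing between adjacent codes
    code = round(value / step)              # nearest representable code
    code = max(0, min(levels - 1, code))    # clip to the converter's range
    return code * step                      # convert back to signal units

signal = 0.0803                             # an arbitrary analogue input
for bits in (5, 12, 16):
    q = quantize(signal, bits)
    print(bits, q, abs(q - signal))         # the error shrinks as bits grow
```

Note that each extra bit doubles the number of levels, which is why the paper's 12-bit and 24-bit simulations behave so differently.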

Experimentally, the authors injected CO2 samples of known composition into a GC/C/IRMS system at varying injection sizes. The results were recorded using a custom 24-bit ADC sampling every 0.1 seconds. This 24-bit measurement was then reprocessed to simulate the results one would obtain with 12-bit, 14-bit and 16-bit ADCs.
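The reprocessing step can be mimicked by discarding the low-order bits of each 24-bit sample. This is our sketch of the general technique; the authors' actual processing may differ:

```python
# Sketch of the general down-sampling idea (assumed technique -- the
# authors' exact reprocessing is not shown in our excerpt): simulate a
# lower-resolution ADC by zeroing the low-order bits of a 24-bit sample.

def requantize_24bit(sample_24, target_bits):
    """Truncate a raw 24-bit integer sample to target_bits of resolution."""
    shift = 24 - target_bits
    return (sample_24 >> shift) << shift    # discard the fine-grained bits

raw = 9_123_457                             # a hypothetical 24-bit code (< 2**24)
for bits in (16, 14, 12):
    print(bits, requantize_24bit(raw, bits))
```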


Figure 1: Shows quantization errors with various sampling resolutions.
(TBV caption)



As the figures are introduced, we're given some examples, including the situation described in the paragraph highlighted in the case exhibit:


Figure 2: Simulated chromatographic peaks in the presence of a linearly rising background (a) without and (b) with quantization error. In the presence of quantization error, the true background may fall anywhere within the arrows. Without quantization error, background subtraction is easily and accurately achieved by connecting points on either side of the peak.
(caption in original)
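For the curious, the summation method's background correction amounts to drawing a straight line between two anchor points on either side of the peak, subtracting it, then summing what remains. A minimal sketch, with invented data:

```python
# Minimal sketch (our own, with invented numbers) of summation-style
# background correction: join two anchor points on either side of the
# peak with a straight line, subtract it, then sum what remains.

def subtract_linear_background(y, left, right):
    """Subtract the straight line joining y[left] and y[right] from y."""
    slope = (y[right] - y[left]) / (right - left)
    return [yi - (y[left] + slope * (i - left)) for i, yi in enumerate(y)]

trace = [10, 11, 12, 40, 80, 42, 14, 15, 16]   # a peak on a rising background
corrected = subtract_linear_background(trace, 1, 7)
peak_area = sum(corrected[1:8])                # summation over the peak window
print(peak_area)
```

An error in either anchor point tilts the whole subtracted line, which is exactly the sensitivity the highlighted paragraph describes.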

The investigation is concerned with the critical process of accurate background removal, here the effect of the subtle errors caused by quantization noise on identifying the true value of the background to be removed. The measured o/oo values are found to be a function of both the number of bits in the ADC and the CO2 injection size. CO2 injection sizes ranged from a minimum of 0.1 nmol to a maximum of approx 11 nmol. Results are expressed as the standard deviation (SD) of the observed measurements:

  • The SD of the 12-bit ADC measurement error ranged from >> 10 o/oo at the minimum CO2 injection level to ~2 o/oo at the maximum CO2 injection level.
  • The SD of the 14-bit ADC measurement error ranged from > 10 o/oo at the minimum CO2 injection level to <>
  • The SD of the 16-bit ADC measurement error ranged from ~ 6 o/oo at the minimum CO2 injection level to <>
  • The SD of the 24-bit ADC measurement error ranged from ~ 4 o/oo at the minimum CO2 injection level to <>
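As a rough illustration of why fewer bits means a larger SD, here is a small Monte Carlo sketch entirely of our own construction (invented peak shape, ratio and scaling; not the paper's data or code): quantize a simulated m44/m45 peak pair at several ADC bit depths and measure the scatter of the resulting delta values across repeated "injections".

```python
# Monte Carlo sketch, entirely our own construction (invented peak shape,
# ratio and scaling -- not the paper's data or code): quantize a simulated
# m44/m45 peak pair at several ADC bit depths and measure the scatter of
# the resulting delta values across repeated "injections".
import math
import random

def measure_delta(n_bits, rng, amplitude=0.05, n_points=200):
    """Per-mil ratio error for one simulated, quantized peak pair."""
    step = 1.0 / 2 ** n_bits                 # ADC code width (full scale = 1.0)
    ratio_true = 0.0112                      # roughly natural 13C/12C
    center = n_points / 2 + rng.random()     # random sub-sample retention shift
    num = den = 0.0
    for i in range(n_points):
        major = amplitude * math.exp(-((i - center) / 20.0) ** 2)  # m/z 44
        minor = major * ratio_true                                 # m/z 45
        den += round(major / step) * step    # quantized m/z 44 sample
        num += round(minor / step) * step    # quantized m/z 45 sample
    return (num / den / ratio_true - 1) * 1000.0

def delta_sd(n_bits, trials=300):
    """Standard deviation (o/oo) of the error over repeated injections."""
    rng = random.Random(42)
    d = [measure_delta(n_bits, rng) for _ in range(trials)]
    mean = sum(d) / trials
    return (sum((x - mean) ** 2 for x in d) / trials) ** 0.5

for bits in (12, 16, 24):
    print(bits, round(delta_sd(bits), 2))    # SD falls as resolution rises
```

The absolute numbers this produces mean nothing; the point is the trend, which matches the paper's: the coarser the converter, the larger the scatter in the measured ratio.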

The paper then develops equations for estimating the potential error based on the CO2 injection level and the number of bits in the ADC. The conclusion was that at the lowest (12-bit) resolution, an error as low as 1 o/oo was unachievable, even at the maximum CO2 injection level.

Unfortunately, we idiots tried but failed to find out how many bits the IsoPrime has in its ADC, so we can't draw a direct comparison. However, it is clear from Dr Brenna that significant errors are possible due to this effect. Perhaps he should also have addressed the impact of quantization error in determining the peak maxima, and how that would affect any correction to the m45, m44 time lag.

So where does that leave us? The effect of quantization noise appears to be significant, but we may assume that it forms part of LNDD's claimed accuracy of +/- 0.8 o/oo. We think this is generous by assuming that their system achieves a high CO2 level and that their ADC is definitely greater than 12-bits. If either of these were not the case, then all bets are off.

It's important to recognise that this +/- 0.8 o/oo is a characteristic of the system. It includes contributions from the chromatographic efficiency of the system, the sensitivity of the detectors, chemical and electrical noise, the effects of digitizing the detector signals (quantization noise) and the subsequent impact of that when it comes to removing background and calculating peak areas. Unless you go out and change part of the system for something better, you can't improve on this. So, the true value for the peak will be within +/- 0.8 o/oo of the measured value (if our assumptions about LNDD CO2 levels and ADC bit size hold true).

That's the basic accuracy of the system. You've got your raw digitized data and the best you can hope to do with it is +/- 0.8 o/oo. Sounds pretty good, doesn't it?

LNDD close the book here and apply this tolerance to all their results.

Are there situations that may further reduce the accuracy? We idiots are left scratching our heads. Wasn't it USADA expert witness Dr Brenna who published papers confirming that many such situations exist and could result in significant additional error? Didn't team Landis expert Dr Meier-Augenstein also testify to that?

But LNDD appears to be saying that even if they have a situation with overlapping peaks, it will have no impact on the accuracy of their measurement.

They don't stroke their chin and say "Hmmm ... maybe +/- 1.2 o/oo for this case ..." or "Ouch! Big overlap, that's got to hurt ... maybe +/- 3 o/oo for this one".

No, they claim that they're always within +/- 0.8 o/oo of the true value. It doesn't matter if they're 10%, 20%, 30% overlapping, they'll just drop a perpendicular down to background and ... +/- 0.8 o/oo.

On one side of the fence, we have Dr Brenna, Dr Meier-Augenstein and the Idiots, who have all claimed and demonstrated that significant additional error is possible when you do not have good clean baseline-separated peaks. On the other side of the fence, we have LNDD, who take no cognizance of this irrefutable fact. And we're the idiots? You decide!

UPDATE/ADDENDUM:


Figure 7a from GDC 1162


In comments below there's some discussion of errors observed in the paper, with reference to the figure above.

31 comments:

wschart said...

OK, I'll confess that I don't always understand everything discussed here about the science of chromatography. However, I do understand the concept of error. If the +/- 0.8 is the error of the system itself, this ignores the possible impact of the operator. The stated error figure assumes that everything was done absolutely correctly; any deviation from that would increase the error. And we know that LNDD did deviate from correct operation of the system in several ways, so claiming this error figure applies to the Landis data is not warranted.


Ali said...

wschart,

This exercise is aimed at highlighting the errors inherent in the data processing of peaks which suffer interference from either other peaks or non-linear background (same difference really). The chromatography is the mechanism which gives rise to these conditions. Understanding how that works isn't really essential. We're looking at the results of the chromatography.

LNDD, or any other lab, that claims a constant tolerance on its measurements with no regard to these other interference factors is just wrong. Dr Brenna has proved that. Dr Meier-Augenstein insists that to be the case. In our own small way, our band of happy idiots has confirmed their findings. We've even provided the tool for cynics to satisfy their own curiosity (or to debunk the tool if they wish).

In my opinion, in the presence of any interference with a peak, it is simply absurd for LNDD to apply a fixed tolerance to their results. Absurd and unscientific! It has been proven to be wrong.

Larry said...

Ali -

Maybe I missed this point in your analysis ... but if it's wrong to apply a fixed error tolerance to chromatography results, is there a way to apply a variable error tolerance?

Ali said...

Larry,

If by that you mean something related to peak size, degree of overlap and quantization noise, then Brenna has addressed all of those factors. Equations have been developed to try and model these situations. The only problem is that this research was done under ideal lab conditions with pure substances, so the results are obviously going to be optimal. Having said that, the degree of error was still quite scary.

It's worth remembering that we're talking about a scale of parts per thousand (o/oo). Small errors in the assessment of the already very small m45 plot (C13) have big effects.
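To put numbers on that: delta values are per-mil deviations of a measured ratio from a reference (the conventional 13C/12C reference ratio for VPDB is approximately 0.0112372), so a fraction-of-a-percent error in the m45/m44 ratio is already several delta units. A sketch using the standard definition, with a hypothetical measured ratio:

```python
# Sketch of the delta scale (standard definition; the reference ratio for
# VPDB is approximately 0.0112372). A fraction-of-a-percent error in the
# measured 13C/12C ratio already amounts to several per-mil (delta) units.

R_VPDB = 0.0112372   # conventional 13C/12C reference ratio

def delta13C(r_measured):
    """Express a 13C/12C ratio as per-mil deviation from the standard."""
    return (r_measured / R_VPDB - 1) * 1000.0

r = 0.0110                                   # a hypothetical measured ratio
shift = delta13C(r * 1.003) - delta13C(r)    # effect of a +0.3% ratio error
print(round(shift, 2))                       # ~3 o/oo from a 0.3% error
```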

The results appear very variable and that's not just based on the Idiot's series. Remember that Dr Brenna and Dr Meier-Augenstein disagreed on the effect of overlapping peaks. I could safely say that they are both acknowledged experts in the field so it must indicate a high degree of unpredictability in this process.

There may be some scope for developing "expert system" software to aid analysis ?

The bottom line is repeated again and again in research papers ... There is no substitute for good chromatography - clean baseline separated peaks. None. You cannot recover what has been lost by poor chromatography.

Ali

Mike Solberg said...

Ali, or anyone else, can you please remind me of what the meaning and significance of the peak height is in the GC/MS and in the GC/C/IRMS. What exactly is peak height measuring?

syi

Larry said...

Ali -

I think this is part of the continuing discussion between the scientists (in charge of the chromatography) and the lawyers (in charge of writing the rules).

Yes, there's no substitute for good chromatography, but there's also no definition of what is good chromatography and what is "good enough". We're not even sure how good the chromatography can be under real world conditions, even with state of the art techniques. We've seen a number of chromatograms with different "I3" ratings, but we're not sure what caused the differences in the chromatography of these chromatograms.

At a much earlier stage in the discussions here, I raised the question whether we ought to rely on the science of chromatography to make findings of fact in legal cases (civil or criminal). Since then, I've learned a lot more about the science. Obviously, this science is heavily relied upon in many areas; it is a mature science. But I'm still not sure that the science should be relied upon to decide legal cases where a person's civil or criminal liability must be decided.

My most recent contribution here was to introduce the concept of YGIAGAM to our discussions. I can't claim that I INVENTED this concept; I can only claim credit for having injected the concept into our discussions at an opportune time. But given the fact that YGIAGAM seems to be the standard governing so many of our science discussions here ... how can we rely on chromatography to determine guilt or innocence at law?

Larry said...

SYI, what I think Duckworth has told me about peak heights is as follows:

1. We're not really interested in peak height, we're interested in the volume described by each peak. However, peak height is probably not a bad indicator of peak volume in most cases.

2. With regard to GC/MS peaks for any substance, there is usually a rough correspondence of peak height and area to the amount of the substance in a sample. However, this is not always going to be the case. I'm not sure what the variables are here, but the charge of a given ion is one such variable. If a given ion has a greater charge (more electrons knocked off by the ionizer), it's going to have a disproportionally bigger peak.

3. For the IRMS ... I am trying to remember whether the metabolites in question are first ionized and then vaporized, or whether it's the reverse. I think it's ionization first, then vaporization. Duckworth has told us that IRMS peak heights correspond to the amount of carbon in a given metabolite. So an amount of a large molecule with a lot of carbon will create a disproportionally greater peak than the same amount of a different molecule with less carbon in it.

I'm taking most of this description from Duckworth's post at 3:07 PM at ID Legal Continuing.

M has cited a wikipedia article to the effect that peak heights (or volumes, not sure) within a single total ion chromatogram are accurate indicators RELATIVE TO EACH OTHER of the amount of stuff in the peak. Sorry, I could not find M's post, and wikipedia contains a confusing array of chromatography articles, so I can't give you a cite here. I don't know how to reconcile the wikipedia article to Duckworth's comments.

In any event, I've seen nothing on wikipedia or elsewhere to indicate that peak heights from a GC/MS can be reliably compared to peak heights on an IRMS.

I'd like to know more about this area, too.

Mike Solberg said...

Thanks Larry.

btw, it's duckstrap.

syi

m said...

Larry and Swim

Wikipedia for "gas chromatography".

http://en.wikipedia.org/wiki/Gas_chromatography#Data_reduction_and_analysis


"Data reduction and analysis

Qualitative analysis:

Generally chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis). This provides a spectrum of peaks for a sample representing the analytes present in a sample eluting from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. In most modern applications however the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks.

Quantitative analysis:

The area under a peak is proportional to the amount of analyte present. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. "


I'm also not sure Ali has the .8 error thing right.

He needs to refer us to the record where they discuss this. Otherwise we have no idea what is encompassed in that error figure. Typically an error figure is based on a one or two standard deviation spread. We don't know what the LNDD error encompasses.

I do seem to recall that OMJ was comfortable ignoring the .8 error allowance.

In any case, even a greater than .8 delta error allowance is not going to change a 6 delta doping finding, unless you can show a monumental error, which Ali can't do, because he can't quantify all his discussion of potential error.

Larry said...

M, thanks for the post. As far as the .8 error thing goes, it's all SWAG to me. There are too many variables to allow for reasonable quantification of possible error tolerances based on various conditions (IMHO). I think Ali is right when he suggests that in the absence of good chromatography, all bets are off. Unfortunately, in the absence of any kind of standard for what is "good" or "good enough", we're stuck with dueling expert opinions, or YGIAGAM.

On the business of GC/MS peak heights, I note that the wikipedia article failed to say anything about what happens when you're dealing with ions having different positive charges. My guess is that the scientists probably know the ion charges for the ions that are characteristic for the substances they test for, and that they can make adjustments for this. Otherwise, as we've noted in other conversations, how could a lab reliably measure T/E ratios?

Probably the more difficult question has to do with comparing MS and IRMS peak heights -- a question that I can't find addressed in wikipedia or anywhere other than Duckstrap's comments. The question seems to come down to the carbon content of the ions created and vaporized by the GC/IRMS. I seem to remember that testosterone has 19 carbon atoms, but I don't know the carbon content of the various testosterone metabolites of interest to us. Even if we do know the carbon content of these metabolites, we'd also have to consider the carbon content of the other unidentified peaks on the GC/MS and GC/IRMS.

If you compare peak heights between the GC/MS and GC/IRMS chromatograms, you'll see that they don't match up perfectly. Clearly, there ARE factors at work that produce different peak heights on the two kinds of graphs. I'm not sure how significant these factors might be, but they ARE present.

SYI, at least I got the "Duck" part right. I have trouble spelling "chromatography", so we're lucky I didn't refer to him as some other kind of water fowl.

Ali said...

m,

You said: "...In any case, even a greater than .8 delta error allowance is not going to change a 6 delta doping finding, unless you can show a monumental error, which Ali can't do, because he can't quantify all his discussion of potential error."

Wouldn't that depend on how much greater than 0.8 delta the actual error was? If the actual error was +/- 3 delta, your 6 delta doping positive would turn into a doping negative (3 away from your reference is not a positive).

If you don't believe me that errors as big as 3 delta are not only easily achieved, but are easily exceeded, reread Brenna's papers (that's Brenna's papers, not Ali's papers).

Using a 16-bit analogue to digital converter and a middle of the range CO2 injection level, he observed a spread in the results spanning about 4 delta units. That was the same sample, injected at the same volume, repeatedly.

That's just looking at the effect of quantization noise. Now add in the effect of interference between peaks and selecting your integration limits and identifying your background and ...

Still think it's hard to exceed +/- 0.8 ?

Ali

m said...

Ali,

I looked at the Brenna paper, and can't find the data you claim.

Please quote the language with a cite.

Brenna speaks of standard deviation errors of .3% to 1%.

It doesn't appear that this Brenna article was referred to in the testimony. This suggests that it is of little probative value and that the conclusions you are attempting to draw (as a non scientist) are wrong.

Mike Solberg said...

Ali, you probably know this stuff hands down, but your talk of uncertainty made me think of this part of ISO 17025:

"5.4.6 Estimation of uncertainty of measurement
5.4.6.1 A calibration laboratory, or a testing laboratory performing its own calibrations, shall have and shall apply a procedure to estimate the uncertainty of measurement for all calibrations and types of calibrations.
5.4.6.2 Testing laboratories shall have and shall apply procedures for estimating uncertainty of
measurement. In certain cases the nature of the test method may preclude rigorous, metrologically and statistically valid, calculation of uncertainty of measurement. In these cases the laboratory shall at least
attempt to identify all the components of uncertainty and make a reasonable estimation, and shall ensure that the form of reporting of the result does not give a wrong impression of the uncertainty. Reasonable estimation shall be based on knowledge of the performance of the method and on the measurement scope and shall
make use of, for example, previous experience and validation data.
NOTE 1 The degree of rigor needed in an estimation of uncertainty of measurement depends on factors such as:
-- the requirements of the test method;
-- the requirements of the customer;
-- the existence of narrow limits on which decisions on conformity to a specification are based.
NOTE 2 In those cases where a well-recognized test method specifies limits to the values of the major sources of
uncertainty of measurement and specifies the form of presentation of calculated results, the laboratory is considered to
have satisfied this clause by following the test method and reporting instructions (see 5.10).
5.4.6.3 When estimating the uncertainty of measurement, all uncertainty components which are of
importance in the given situation shall be taken into account using appropriate methods of analysis.
NOTE 1 Sources contributing to the uncertainty include, but are not necessarily limited to, the reference standards and
reference materials used, methods and equipment used, environmental conditions, properties and condition of the item
being tested or calibrated, and the operator.

I would love to see how LNDD did all that!

syi

Ali said...

m,

I was referring to Figure 7 on GDC01162. The text describes how that data was generated. I drew no conclusions on this hearing exhibit. I just reported what it said. If you think it's wrong, take it up with Dr Brenna.

The 0.3 to 1.0 delta values were benchmarks they predefined, against which they would assess their results. Less than 0.3 delta represents very good and greater than 1.0 represents not so good.

I'm not sure what you mean by "a non scientist", although I understand the implication is that I'm not qualified to form opinions on these matters. That's a fair question (or rather it would have been a fair question if you'd asked it and not just assumed it to be the case). I can confirm that I am qualified to form valid opinions on these matters based on both my academic qualifications and professional experience (much of which is directly related to what we're currently discussing).

Ali

Ali said...

syi,

Now that raises some questions, especially:

"5.4.6.3 When estimating the uncertainty of measurement, all uncertainty components which are of
importance in the given situation shall be taken into account using appropriate methods of analysis"

I read a report recently which highlighted the fact that when IRMS labs assess their own accuracy, they frequently do so by analysing internal standards of pure composition. In other words, clean flat baselines and single discrete peaks. No interference. How does that comply with 5.4.6.3? Are they including "all uncertainty components which are of importance"?

I wonder whether LNDD include noisy sloping backgrounds and overlapping peaks of unknown composition when they include "all uncertainty components which are of importance" ?

Or maybe they just run a pure sample, generating a nice clean background and peak and completely ignore all the factors normally found on typical chromatograms like Floyd's ?

Ali

m said...

Ali,

What are your relevant training, qualifications and experience?

I don't read Brenna to claim that quantification standard deviation of error in the range of 4 deltas are "easy to achieve" as you claim.

Looking at figures 4 and 8, which are easier to read, the amount of the standard deviation of error depends on the volume of carbon injected. Standard deviations of error of 1% and .6% are achievable with appropriate carbon volumes. Do you claim that they were not achieved in this case? If they were not, why didn't Landis use this as evidence to prove his case?

Ali said...

m,

You said:
"I don't read Brenna to claim that quantification standard deviation of error in the range of 4 deltas are "easy to achieve" as you claim."

Neither do I. That's why I never said that (not sure what you mean by quantification ?). I said that errors greater than 3 delta units are easy to achieve, as Brenna claims. That's errors, not the standard deviation of those errors. This is where statistics can become confusing for those not familiar with it.

Looking at Figure 8 (as it's easier for you to read), it presents three plots. These are the standard deviation of: the theoretical model; the summation method; the curve fitting method. You'll note that the summation method points are all above the theoretical model curve. These are the results of the practical experiments using the same method used by LNDD. They are also the standard deviation, not the spread of the observed results.
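The distinction between the spread of a set of results and their standard deviation can be shown with a toy example (the numbers below are invented, not Brenna's data):

```python
# Invented numbers, purely to illustrate spread vs standard deviation:
# the spread (max - min) of a set of results is always at least as large
# as its SD, and for bell-shaped scatter is typically several times larger.
import statistics

results = [-2.1, -0.4, 0.0, 0.3, 0.5, 1.9]   # hypothetical delta errors, o/oo
spread = max(results) - min(results)          # a "spread" of about 4 units
sd = statistics.pstdev(results)               # the standard deviation
print(spread, round(sd, 2))
```

On these numbers the spread is more than three times the SD, which is why quoting one as the other matters.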

I have no idea why the Landis team didn't attack this from the basic accuracy of measurement angle. In retrospect, I would have.

As for me, I can post my CV if it really interests you, but I wouldn't do it just on a whim. If you feel it essential for you to take what I say seriously, then so be it.

m said...

Ali,

All you have to do is describe your training and experience. I don't know why you are being so mysterious. Otherwise, you are right, I do doubt your statements.

Here's another reason why I don't trust you. You spoke about a "spread" of 3 deltas, when what we would need is a standard deviation of 3 deltas since the point estimate is centered at 6 deltas. You knew that, but spoke about a "spread" anyway. I think you were trying to deceive.

You also don't know what carbon injection levels were used by LNDD. Clearly there are many carbon injection levels which would result in standard deviations of error at or below 1%.

Finally, you don't know whether LNDD used a 16 bit or 24 bit digitizer. If they had used a 24 bit digitizer there would have been almost no error.

Again, Landis didn't make this argument. Why? Because he knew it was bogus.

Ali said...

m,

Take a few deep breaths and calm down.

You're completely missing the point. Show me where I claimed to know what injection levels LNDD used ? Where did I say that they had a 16 bit ADC ? In fact, I explicitly stated that I didn't know. However, seeing as you ask, I can say that I'd be surprised if they had a 24-bit ADC.

You also seem to be getting confused about what I said. Let me recap. You said that you would need a "monumental" error to recover from a -6 delta result. I said if the true error on your reading was +/- 3 delta, a -6 delta would be regarded as not positive.

Am I wrong? I then pointed out the spread of error observed by Brenna in his quantization paper. That spread was from a relatively small sample. I picked my words carefully so that I did not mislead: "...he observed a spread in the results spanning about 4 delta units". I did not claim that as a standard deviation.

This is only one source of error. Others are described in Brenna's curve fitting paper (interference between peaks).

The idiot's series simply confirms what Brenna stated. Brenna and WM-A can argue over the direction of the error, which may be significant, but not to this particular discussion. They both agree that ADDITIONAL ERROR occurs when you have interference between peaks.

So when LNDD assess their accuracy as +/- 0.8 delta, is that with a discrete clean peak sitting on a nice flat background? If it is, then they cannot apply the same accuracy to situations where there is interference between peaks, because they are not accounting for the additional error which one observes under those conditions. In fact, it would be a violation of the rules syi posted: "5.4.6.3 When estimating the uncertainty of measurement, all uncertainty components which are of importance in the given situation shall be taken into account ..."

Capiche ?

We're talking about data processing, aren't we? Measuring values, trying to determine the true nature of the peaks and understanding the sources of error. With that in mind, and because you insist:

I have a degree in Electronic Engineering and an MSc in Digital Systems Engineering, which, on a technicality, would make me a "master of science" :-). My work has been in the defence sector (military stuff). I worked 4 years on the design of digital control systems for jet engines, specifically involved in analysing the sources of error in the feedback signals (transducers, ADC, noise, how errors combine when they're processed, etc). It's important, because inaccuracies can upset your control system and cause engine thrust to start oscillating (bad news, even if you're on the ground).

Following that, I have > 5 years working on the design of radar. Radar is all about data processing. It's about recovering your tiny target return (peak) from a sea of background noise. The technology used in the data processing aspect of IRMS doesn't even come close, although the concept is similar.

Maybe I'm just kidding myself, but I think I'm qualified to at least have an opinion. You may disagree.

Ali

m said...

Ali,

I made the claim that any quantization error was not going to be large enough to change the 6 delta positive finding.

You said this in reply:

"If you don't believe me that errors as big as 3 delta are not only easily achieved, but are easily exceeded, reread Brenna's papers.."

"Using a 16-bit analogue to digital converter and a middle of the range CO2 injection level, he observed a spread in the results spanning about 4 delta units."

"EASY TO ACHIEVE". The clear implication is that it was "easy to achieve" in the Landis case. Brenna's paper does not support that claim.

Moreover, you spoke of a "spread" of 4 deltas (equivalent to a standard deviation of 2 deltas) and referred me to figure 7, when you knew that you should have spoken about a standard deviation of 3 or 4 deltas (spread of 6 or 8 deltas, figures 4 and 8). Such larger spreads were even more unlikely than the spread of 4 deltas. You thought I didn't know enough stats, but you clearly knew. I have to conclude you were being misleading.

I don't have a dispute with your summary of Brenna's paper. But when readers like SWIM and Larry misread your summary in accord with their sympathies, you egg them on with misleading statements.

Your summary basically says that any quantization error was likely accounted for as part of the .8 error allowance used by the lab. That is, there likely was NO ADDED ERROR in the Landis case by quantization above the .8 allowance.

Yet much of the comments and discussion appear to read your summary of quantization to introduce more possibilities of error above the .8 error allowance used by LNDD.

But your claims of greater error are really based on overlapping peaks and poor chromatography (you do make that clear). But those issues have already been argued to death. I'm going to side with Brenna over Meier-Augenstein, especially since you concede that Brenna was correct wrt his 1994 article.

And thanks for explaining your expertise.

Ali said...

m,

You said "I made the claim that any quantization error was not going to be large enough to change the 6 delta positive finding."

Really? Is that how it went? I certainly don't remember the "quantization error" qualifier. I went back and I didn't see it either. At that stage, I'd conservatively lumped the quantization error in with the +/- 0.8 error. That was explicit. Now you're implying that I'd said it could be responsible for +/- several delta swings in the results? Don't confuse what I reported from Dr Brenna's paper with the opinions I stated myself.

Perhaps fewer lawyer games and a bit more honest discussion may be the order of the day here.

Ali

m said...

Ali,

You are being disingenuous here and making arguments about a peripheral issue.

This is what I said to Larry:

"In any case, even a greater than .8 delta error allowance is not going to change a 6 delta doping finding, unless you can show a monumental error, which Ali can't do, because he can't quantify all his discussion of potential error."

I did not use the word "quantization", but that's implied here because you are reviewing Brenna's quantization article.

More importantly, that's how you appeared to understand it, because your reply specifically referred to figure 7, and a 4 delta QUANTIZATION error spread in the Brenna quantization paper.

Again, this is what you said:

"If you don't believe me that errors as big as 3 delta are not only easily achieved, but are easily exceeded, reread Brenna's papers.."

"Using a 16-bit analogue to digital converter and a middle of the range CO2 injection level, he observed a spread in the results spanning about 4 delta units."

Again YOU use the words "EASILY ACHIEVED" and "EASILY EXCEEDED". I didn't put any words in your mouth. And again, your characterizations were not supported by the Brenna paper, and as I pointed out should have referred to the standard deviation of error not the spread.

I don't like to reply to these weaselly types of arguments, so I am going to drop it. Your summary of Brenna's paper was valuable. Sorry I can't say the same for the following discussion.

Ali said...

To anyone reading these comments, all I ask is that you read the thread from start to finish.

Then form your own conclusions.

Ali

Larry said...

M, I don't think I've misread anything posted here in accordance with my sympathies.

You know me pretty well by now. I think you know that I listen carefully to everything you say, and that I respect your opinions. When I think you're right, I've jumped in to support you. When I think you've proven me wrong, and it's happened more than once, I've admitted to it. I've disagreed with you, but at one time or another I think I've disagreed with everyone on this forum.

It's a fact of life here: if the Dr. Brennas of the world are going to disagree with the Doktor M-As, then we lesser mortals have to get used to the fact that the discussions of the science are going to feature disagreements, and that some of them may be quite sharp.

FWIW, I'm not convinced that the possible errors in chromatography can be quantified with any accuracy. Maybe that means that I don't agree with either you or Ali! There seem to me to be way, way too many variables to know for certain what is going to happen when this peak interferes with that one, or when this peak co-elutes with that one, or when the noise background slopes. I agree with Ali that under the right circumstances, the errors could swamp the lab's built-in margin for error. On the other hand, I don't see a way to know that we've exceeded the margin for error by looking at the test results, and I don't think anyone has proved that LNDD's +/- 0.8 margin for error is "wrong" and that a different margin is "right".

TBV and Ali's "I3" index is very interesting, but in the absence of some agreed-upon standard of what is "good" and what is "good enough", we're stuck in a situation where the acceptability of chromatography in any given case is the province of the experts, who are going to disagree. Leading to YGIAGAM, or the classic "battle of the experts."

So at the end of the day, unless we're going to toss all of our chromatography equipment out of the window (and believe me, sometimes I'm tempted!), the alternative is to set up the best rules we can for the labs to follow, and then verify that they've followed those rules in a given case.

Hopefully this will ease your mind that I've been too easily influenced by anything I've read here.

I'm curious what you have to say about the latest revelations about LNDD's column switch.

Lowryder said...

Has anyone here ever used a truly OLD GC?

The whole area under the peaks discussion has made me a bit nostalgic.

In the old days, when the detector was still attached to a stylus that had a pen that actually drew the peaks out, the paper onto which the peaks were drawn was specially formulated to have a constant density.

Rather than using calculus to find the area under the peaks you would simply CUT the peaks out of the paper, with scissors, and then weigh them. You wanted to do a baseline correction? You'd just cut off a fraction of the curve.

Sometimes I'm not convinced that digital data is that much better than the old way.

AJ

m said...

Larry,

What Ali was talking about with respect to this Brenna article was quantization error, not errors resulting from chromatography or other sources.

What he winds up concluding in his summary is that the .8 LNDD error margin probably took into account their estimates of quantization error plus estimates of these other sources of error. He doesn't really claim in his summary that quantization error here was large or should cause us to increase the .8 margin. I think you misunderstood him to be claiming this.

Only in the comments to me does he suggest that quantization errors could be large (3 or 4 deltas) and "easy to achieve".

***************************

As to the column stuff, I think I might have noticed that when I reviewed the exhibits but didn't focus on it at the time or make much of it. I did notice that they used a "17" in the IRMS and if I recall Shackleton used a "17" column also. I'm not sure if I noticed the difference in the GCMS. My memory could be faulty here.

The question is whether different columns make much of a scientific difference. I'll await SWIM's work on this. :-)

Remember the GC temperature ramps were slightly different. Most of us didn't have a heart attack over that. And the resulting chromatograms matched (from my perspective).

However, if this violated the SOP then this might raise a legal problem, I haven't researched that issue like you have.

m said...

lowryder,

Thanks for that memory. I always suspected good science was as much art as ....

Makes my day.

Ali said...

m,

You're so full of it. Enjoy !

Ali

Ali said...

m,

I'll summarise all the key points I feel are relevant to this case and fire them off to TBV, to be forwarded to the Landis camp. They may be crap, but at least they'll be assessed for their worth by people who matter.

Ali

Lowryder said...

Ali,

I wasn't intending to disparage your work at all, previously. Many scientists had GCs before they really had decent computers, which is quite remarkable when you think about it. They didn't have the ability to process the numerical calculations fast enough on their slide rules, which is why they went to the (expensive) constant-density paper.

I've tried to keep up here at TbV, but I may have missed the relevant discussion (for the record, I used to post as BannaOj before the login requirements were added; I now have this Google account, which is why I'm posting again).

But I digress. From your computer background, I was wondering if one of these papers actually discussed the impact of different formulas for calculating the area under the curves, in addition to the difference in baselines. I mean, are they using a Taylor series polynomial, a Simpson's rule variant, or what? That has to add to the inherent measurement error. They made us suffer through calculating those errors long ago and far away when I was in engineering school.

AJ

Ali said...

AJ,

The basic method they use is ... well, basic. Having identified the peak area they want to integrate, and following removal of the background, they simply sum the value of each sample over the integration period. Because they are dividing one peak area by another to work out the C13 ratio, they don't even need to take account of time. I would call this crude numerical integration, although it's perfectly adequate for baseline-separated peaks (assuming your sampling frequency and ADC bit size are adequate for the peak you want to integrate).
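That summation method can be sketched in a few lines of Python. This is purely illustrative, not LNDD's actual code: the anchor indices, peak shapes and background levels are all invented for the sketch:

```python
import numpy as np

# Sketch of the summation method: draw a straight baseline between one
# anchor sample on each side of the peak, subtract it, and sum what is
# left. Time drops out because the same window is used for both traces.
# All numbers below are invented for illustration.

def summed_area(trace, left, right):
    """Sum trace[left:right+1] above a linear baseline anchored at
    the two endpoint samples."""
    baseline = np.linspace(trace[left], trace[right], right - left + 1)
    return float(np.sum(trace[left:right + 1] - baseline))

t = np.linspace(-5, 5, 200)
major = 100.0 * np.exp(-t ** 2 / 2) + 2.0            # mass-44 + flat background
minor = 0.0112 * 100.0 * np.exp(-t ** 2 / 2) + 0.5   # mass-45 + flat background

ratio = summed_area(minor, 20, 180) / summed_area(major, 20, 180)
```

Note how the whole correction hangs on the two anchor samples: an error in either endpoint propagates along the entire baseline segment under the peak, which is exactly what the highlighted paragraph in the paper warns about.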

Brenna has proposed curve-fitting techniques, whereby they have a predefined function (e.g. an exponentially modified Gaussian). They vary the parameters of this function until it closely resembles the IRMS peak. They don't say what they do after that, but the obvious way would again be to just use numerical integration on the results of that fitting function, evaluated at each point in time. I guess you could derive an analytical solution by integrating your fitting function, but it's easier for a computer to use numerical integration and the results are generally pretty good.
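For concreteness, here is a sketch of the second half of that idea: evaluating an exponentially modified Gaussian (EMG) at each sample time and integrating it numerically. The parameter-estimation (fitting) step is omitted, and the parameter values are simply assumed for illustration:

```python
import math

# Sketch of the "integrate the fitted curve numerically" step.
# The fit itself is omitted; the parameters below are assumed,
# not estimated from any real chromatogram.

def emg(t, area, mu, sigma, tau):
    """Exponentially modified Gaussian with total area `area`."""
    arg = (sigma / tau - (t - mu) / sigma) / math.sqrt(2.0)
    return (area / (2.0 * tau)) * math.exp(
        sigma ** 2 / (2.0 * tau ** 2) - (t - mu) / tau
    ) * math.erfc(arg)

# "Fitted" parameters (assumed for the sketch)
area, mu, sigma, tau = 50.0, 2.0, 0.2, 0.5

# Crude numerical integration over a window wide enough to hold the peak
dt = 0.01
total = sum(emg(mu + k * dt, area, mu, sigma, tau)
            for k in range(-500, 1500)) * dt
```

Summing the fitted curve recovers its area; in a real application the parameters would come from a least-squares match to the measured peak before this step.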

Ali