In the previous entry I made a comment about the reference to Measurement Uncertainty in the CRC Press book Quality Assurance in the Pathology Laboratory, edited by Maciej Bogusz. I noted that the author's sole argument for laboratories to calculate their measurement uncertainty (more appropriately, uncertainty of measurement) was that it is expressed as a requirement in the standard ISO 17025. (It is also included in ISO 15189.)
I mentioned that, in my opinion, there is NO worse reason for doing something than doing it solely because it is a requirement in a standard or because an accreditation body said to do it.
Here are some GOOD reasons why laboratories should perform certain processes and procedures:
1: We created a policy to which we are committed.
2: It is a legal requirement in the places where we work. Adherence reduces the risk of liability.
3: It is a customer requirement and expectation.
4: It creates a better, safer work environment.
5: It corrects an error and reduces the risk of recurrence.
6: Adherence enhances our financial health.
The concept of uncertainty of measurement is an extension of the technical philosophy that no measurement is absolute. Factors including the precision and stability of equipment, the consistency and quality of reagents, and the competency, skill, knowledge, and reproducibility of the operator can all influence the result of a measurement. All measurements should have some form of error bars around them.
I have absolutely no problem with that concept. It grew out of studies in the physical sciences, where the most minor of errors can result in a rocket aimed at the moon hitting Mars instead, where scatter within test results may get interpreted as cold fusion, or where the tiniest of deviations can result in huge alterations in the interpretation of collisions in an atomic accelerator.
In all these situations, where the study and analysis can be under complete control, it is appropriate to define the UM and take it into account during interpretation. I get it, I understand it, I believe it.
But here's a news flash: medical laboratories are not closed-system research centers. Most of the life of a clinical sample is far beyond our control, and for all intents and purposes is likely to remain that way. The patient and the collection site are almost always at a distance from the laboratory, and there are too many variables that can affect the sample in ways we cannot control. There is the technique of collection, and the stability of the container and its contents (including specific additives). There is the temperature at the collection site, the duration of transport, the temperature during transport, and the amount of agitation along the way. There is the amount of time the sample sits on a workbench before it is accessioned.
And those are just the ones that we know about. How about all the factors we don't know?
Generally we have an idea or impression about the uncertainty and impact of these variables, but we have no way to calculate it. Metrologists understand this, but their answer is: that's OK, we will just develop a list of all the variables we can think of and make an ESTIMATE or GUESSTIMATE of their value and their impact. This is called the Uncertainty Budget. So what we have done is taken a tool that was designed to calculate precise error-bar values and ended up instead with a best guess that may or may not be close, may or may not be valid, may or may not be reproducible. But we have a Number, and we can now tell the accreditation team that we have a number (good), and then we go and tell our customer, whether a patient or a surgeon or a cardiologist, that we have a number (bad). What we don't tell them is that we have no idea how much confidence we have in that value. We just give them the value.
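To make the arithmetic concrete, here is a minimal sketch of how a typical uncertainty budget gets combined. The component names and values below are hypothetical; the point is that the root-sum-of-squares math is rigorous while the inputs are guesses.

```python
import math

# Hypothetical uncertainty budget for a single analyte, expressed as
# relative standard uncertainties (%). These values are the kind of
# estimates and guesstimates the budget approach demands; none of the
# pre-analytical ones is actually measured.
budget = {
    "analyzer imprecision":        1.5,  # from repeated QC runs (measurable)
    "calibrator value assignment": 1.0,  # taken from the certificate
    "reagent lot variation":       2.0,  # estimate
    "collection and transport":    3.0,  # pure guess
}

# Combine the components by root-sum-of-squares to get the combined
# standard uncertainty, then apply a coverage factor (k = 2, roughly
# 95% confidence) to get the expanded uncertainty.
combined = math.sqrt(sum(u ** 2 for u in budget.values()))
expanded = 2 * combined

print(f"Combined standard uncertainty: {combined:.1f}%")
print(f"Expanded uncertainty (k=2):    {expanded:.1f}%")
# The arithmetic is exact; the inputs are not. The output inherits
# whatever confidence we had in the guesses.
```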
That is what I call both misleading and DUMB.
So what should we do? Well, we can do some things with confidence. We can test our analyzer repeatedly, plot the range of values that we get when we test a Certified Reference Material, and calculate the analyzer's range. Laboratories have done that for years and reported on test trueness, precision, and bias. If we interpret quality control tests against that range, we can say with confidence that the equipment has a certain allowable error range, and we can say with confidence that if we measured a sample concentration as 6.2 umol/L, the true value likely lies somewhere between 6.12 and 6.28.
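For contrast, here is a minimal sketch of the kind of calculation a laboratory can actually stand behind: repeated measurements of a Certified Reference Material, a standard deviation, and an approximate range around a reported result. The replicate values are invented for illustration only.

```python
import statistics

# Hypothetical replicate results (umol/L) from repeated runs of a
# Certified Reference Material on one analyzer.
replicates = [6.18, 6.22, 6.19, 6.24, 6.17, 6.21, 6.20, 6.23, 6.19, 6.22]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # analyzer imprecision
cv = 100 * sd / mean                # coefficient of variation (%)

# An approximate 95% range for a single measured result, using +/- 2 SD.
measured = 6.2
low, high = measured - 2 * sd, measured + 2 * sd

print(f"Mean {mean:.2f} umol/L, SD {sd:.3f}, CV {cv:.1f}%")
print(f"A result of {measured} umol/L likely reflects a true value "
      f"between {low:.2f} and {high:.2f}")
```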
But that is not generating a value for uncertainty of measurement.
Here is what is so annoying about this. Anyone who has worked in a laboratory understands that calculating precision and bias is an important aspect of being a laboratorian. Anyone who has worked in a laboratory also understands that making estimates or guesstimates for variables for which we have no basis for the estimate or guess is a fool's game. So why do we end up with standards containing requirements that make no sense to the laboratorian?
It's because dumb things happen around the standards development, crafting, and negotiation table. And once something gets into the standard, regardless of how inappropriate or wrong it is, it is almost impossible to get folks to acknowledge the error and actually fix it.