The problems with research
It’s always interesting when things come together. Over the last few days I was at an international meeting where the potential value of a Best Practice Guideline document for Researchers was raised, because of a general experience that too many research dollars are wasted on non-reproducible and irrelevant studies. An interesting proposal, I thought, but perhaps a tad overstated.
Then I get to the airport, pick up the Economist (March 15-21, 2014), turn to the Science and Technology section, and there is an article called Metaphysicians, which mentions John Ioannidis (Why Most Published Research Findings Are False). I have discussed this author and this paper before [see: http://www.medicallaboratoryquality.com/2012/12/quality-and-medical-research.html] because of his interest in doing “research on research”. According to the article, a scourge of modern science is the “waste of effort”. In 2010 “$200 BILLION (85% of total expenditure on medical research) was squandered on studies that were flawed in their design, redundant, never published, or poorly reported”. Assuming those numbers are true, that would certainly confirm there is a clear need for help.
According to Ioannidis, too many researchers staggeringly over-interpret statistical significance in studies that are far too small. Further, they lack insight into proper study design and manifest “publication bias”, where positive data get written up for presentation and negative data get ignored, or worse.
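To make the small-study point concrete, here is a minimal simulation sketch (my own illustration, not from Ioannidis' paper; the true effect size, the sample sizes, and the numpy/scipy toolchain are assumptions chosen for the demonstration). With a modest true effect, small studies rarely reach significance, and the ones that do report effects far larger than the truth:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2              # modest real difference between two groups
for n in (10, 50, 250):        # per-group sample sizes
    published = []
    for _ in range(10_000):    # simulate many independent studies
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_effect, 1.0, n)
        t, p = stats.ttest_ind(b, a)
        if p < 0.05:           # only "significant" results get written up
            published.append(b.mean() - a.mean())
    print(f"n={n:4d}  power={len(published) / 10_000:.2f}  "
          f"mean published effect={np.mean(published):.2f}  (truth: {true_effect})")

The small-n rows show both low power and badly inflated “published” effects, which is one mechanism by which a literature built on small studies misleads.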
Over the years we have observed that graduate students working in research laboratories seemingly lack knowledge of, interest in, and respect for running essential common equipment such as autoclaves. Commonly the problem is that they are required to use the equipment but have never received appropriate training. In essence they are just pushing buttons. (I suspect that in the minds of their Principal Investigators, autoclaves are mundane compared to DNA analysis.)
Furthermore, common procedures of quality control and quality assurance are often incomplete or inadequate, and generally not understood.
What they do not seem to understand is that in the absence of basic Quality principles, NO research can or should be trusted.
Apparently Ioannidis is doing something about it. He has opened a center for meta-research at Stanford, the Meta-Research Innovation Center, with the even more appealing acronym METRICS. That will help to define the problems and perhaps develop some answers.
Here are some questions that I would like to see addressed:
A. Would increased training in laboratory Quality result in fewer non-reproducible studies and greater value per dollar spent?
B. Could accreditation of research laboratories result in fewer non-reproducible studies and greater value per dollar spent?
C. Would introduction of proficiency testing into research laboratories result in fewer non-reproducible studies and greater value per dollar spent?
I was intrigued by the notion of a Best Practices in Research Guideline; however, I also understand that in the current environment such a document would be pursued only by those few laboratories that already recognize the concepts of standardization and continual improvement. The vast majority would, I suspect, never become aware of its existence, or never purchase it, or never read it, or read it but never consider it relevant or appropriate to their laboratory.
I know this sounds a tad cynical, but for how long have we taken that approach in the health sciences? “It is not our problem; we are too smart. If it is a guideline we can ignore it, and if it is a regulation we can obscure it.”
This may well be the time when far more aggressive research oversight becomes a reality. If a highly qualified author writes something in Accreditation and Quality Assurance (an excellent journal), its impact is strictly limited. But if articles end up in the Economist, that is a different story. Business folks read the Economist, as do lawyers, and politicians, and university presidents, and many of the general public.
Sooner
or later, the right (wrong?) person is going to start putting 2 and 2 together;
wasting $200,000,000,000 is an insult to the public purse. And then the regulations start happening,
and some people lose their jobs, and some people end up in jail.
Even if we spent $2B on setting up stringent oversight, that is one percent of the $200B reportedly wasted, so we would be way ahead of the game. And the impact on jobs would be negligible, because for every research laboratory that was shut down, some of its staff would likely move into consultancy or oversight. Seems like a win.
"researchers staggeringly over interpret statistical significance in studies that are far too small" - this is a serious problem, and for a very different reason: making the studies "bigger", i.e. increase sample size, means more money spent on any one study and, if budgets were not increased, less studies. This exactly is the reason so many studies are so poor (in more than one sense of the word). It means less PH.D.s, less postgrad studies, less papers, less money for the big publishers and a lot more less. As for training lab personnel better, I don't think it would matter that much. It is the bad design of studies, which cannot produce good results anyhow, that cause the problem. Perfect execution of a flawed study design still won't result in scientific progress, it might even veil the ineptitude further. As an aside, an anecdote: I was talking to a young chemical process engineer who had been hired by a company that does research for big pharmaceutical companies. He told me how they tried to optimize production parameters for new pharmaceuticals. From the way he told his story I had a strong feeling, they were going about this quite clumsily. So I asked if he had ever heard of "Design of Experiments". His sulky retort: "Of course we design our experiments". I didn't pursue this further. But I have another one. Ionnidis claims that 85% of research are worthless. May I surmise, it's even more. Let me explain: a psychology student told me about a study for a Ph.D. she participated in. They were asked to fill in questionnaires and the later statistical evaluation would be the core of the thesis. Now she was in about her third year of (psychology) studies. When she handed in her questionnaire, the Ph.D. candidate asked her if she couldn't instead of calling herself university student put "high school dropout" on the form. Because otherwise his cohort would not be random enough ... If she did (and if not she, then the Ph.D. himself), no Ionnidis would ever notice. I suspect, from his remaining 15% you can distract another 50%. And from the remaining 7,5% - which are actually contributing to progress?