So I got an opportunity to
participate in a small but important workshop the other day on developing a
large regional laboratory quality strategy and presented on the value and importance
of developing Quality Indicators. This
was not a new topic for me, having first put on QI workshops and created a QI
worksheet more than a decade ago.
To put QIs in perspective, Mark Brown, who published Keeping Score: Using the Right Metrics to Drive World Class Performance in 1996, wrote “Many organizations spend thousands of hours collecting and interpreting data. However many of these hours are nothing more than wasted time because they analyze the wrong measurements, leading to inaccurate decision making.”
At the same time, Philip Crosby wrote in his Reflections, “Quality Measurement is effective only when it is done in a manner that produces information that people can understand and use.”
Both observations were true 20 years ago, and sadly, as we visit medical laboratories, they are still true today. Folks faithfully monitor “Turn-Around Times” and contamination rates. They make their graphs and pat themselves on the back for a job well done. But the results are far from understandable and usable. Their customers don’t know, or indeed care, because they aren’t involved in any part of the process.
And in the meantime, medical laboratories, which have never been particularly open to public engagement, are quietly losing ground to expanding Point-of-Care suppliers.
Of the five ways that laboratories assess performance (Accreditation, Proficiency Testing, Internal Audits, Quality Indicators, and Customer Service), Quality Indicators can be the most focused, elegant, trackable, and telling (and available for public awareness and engagement), so it makes sense to focus energy on getting them right.
When I put my workshop materials together way back when, I proposed there were (and still are) seven critical criteria that Quality Indicators must meet in order to have any chance of being successful. Leave one (any one) out, and you can pretty much guarantee failure.
OBJECTIVE: Know what you want to measure and why. Be precise and specific.
METHOD: Indicators are by their nature things or events that can be measured (counted, timed, or weighed). And more specifically, your QIs need to be measured by you. If you don’t know how you are going to capture the information, then don’t start.
LIMITS: Before you start collecting, know what your level of acceptance is and what is a critical level of error (a small illustrative sketch follows these seven criteria). Get input from your customers. And take Risk into account. Telling even one person they have HIV/AIDS when they do not is a BIG problem. I understand that others may see this differently, but comparing your results against another organization (benchmarking) rarely works: they aren’t you and you aren’t them, and too many variables get in the way.
INTERPRETATION: When you gather your information, does it
tell you, and others, something about your Quality? If it does not, then it is hard to call it a
Quality Indicator.
LIMITATIONS: No measure is absolute and perfect. That’s why we have Measurement Uncertainty. (MU is also not absolute or perfect.) If you don’t appreciate that variables can affect your indicator, you may go down the wrong rabbit hole.
PRESENTATION: If you can’t express your results in an easy-to-comprehend manner, then it is going to be tough to have the impact needed to engage the people you need to engage. Maybe it is a graph, maybe a picture, maybe a sentence, but definitely not a report.
ACTION PLAN: If everything is pointing in the right direction, do you have a plan for what happens next? More importantly, when everything is pointing south is the wrong time to be thinking about what to do. Have your plan in place before you start.
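For those who track their indicators with a short script or a spreadsheet, here is a minimal sketch of what several of these criteria can look like in practice, using turnaround time as the example. Everything in it is an illustrative assumption, not a recommendation: the 60-minute target, the acceptance and critical levels, and the data are made up. The point is only that the objective, the method of counting, and the limits are all decided before the first result is collected.

# A minimal, hypothetical sketch of a turnaround-time (TAT) Quality Indicator.
# Every name and number below is an illustrative assumption, not a prescribed method.

tat_minutes = [42, 55, 38, 61, 47, 90, 44, 52, 49, 120]  # measured TAT per specimen (made-up data)

TARGET_MINUTES = 60      # OBJECTIVE: result reported within 60 minutes
ACCEPTANCE_LEVEL = 0.90  # LIMITS: at least 90% of specimens within target
CRITICAL_LEVEL = 0.75    # LIMITS: critical level of error, agreed in advance

within_target = sum(1 for t in tat_minutes if t <= TARGET_MINUTES)
proportion = within_target / len(tat_minutes)

# PRESENTATION: one plain sentence, not a report.
print(f"{within_target} of {len(tat_minutes)} specimens ({proportion:.0%}) met the {TARGET_MINUTES}-minute target.")

# ACTION PLAN: decided before collection started.
if proportion < CRITICAL_LEVEL:
    print("Below the critical level: trigger the action plan now.")
elif proportion < ACCEPTANCE_LEVEL:
    print("Below the acceptance level: investigate before it gets worse.")
else:
    print("Within acceptance: keep monitoring.")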
If you take the seven initials OMLILPA and fiddle, you end up with LAMP-OIL, a rather perfect anagram.
Done well, Quality Indicators can shed a LAMP-OIL bright light on Quality Performance.