Canadians are so darned polite. We like to stand in lines and wait patiently. Our first word of greeting is not "Hello!"; it is "Sorry!". And we rarely complain (at least not out loud, and rarely if ever directly). And we never sue. So why am I writing a posting about complaints?
Part comes from some recent discussions with I-TECH about customer satisfaction, and part comes from a very interesting article on Medscape Today (http://www.medscape.com/viewarticle/725001) entitled "Malpractice Dangers in Patient Complaints: Best Ways to Deal With Complaints" by Lee J. Johnson Esq., a lawyer from New York.
Before continuing, I might as well state the obvious. There are many ways in which Canadians and Americans are similar. Our attitudes towards litigation are NOT one of them.
The article is instructive in a number of ways.
Many people complain indirectly. It is not uncommon for people to be unaware that others are not happy with the service they have been provided; sometimes the first time you learn about the unhappiness is when it is too late. In Canada the result is angry patients, grumbling physicians, and frustrations that spill out in all sorts of indirect ways. Sometimes it manifests as snide comments in semi-public situations, other times as a lack of support at hospital management meetings. Rarely is it a good thing. In the US, the risk of legal escalation is always present.
The article talks about a few things that can reduce some of the crisis from complaints: developing a culture of patient safety; training staff on how to address complaints; setting up ways to uncover patient complaints, such as a suggestion box and patient-satisfaction surveys; and, importantly, creating a procedure for handling patient complaints.
This is all good stuff and is by-and-large consistent with ISO 9001:2008 and ISO 15189:2007.
But where the article does not go far enough is this: after you set up the suggestion box, the satisfaction survey, and the procedure, these have to be monitored and assessed, and the results used as part of continual improvement. Otherwise the problems continue to occur, and the liability risk is not controlled.
If complaints monitoring is not incorporated into the continual improvement process, in the same way as internal audits, quality indicators, KPIs, and reported incidents, then the process of continual improvement is incomplete, and the problems that lead to complaints never go away.
Seems intuitively obvious.
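To make the monitoring concrete, here is a minimal sketch (not from the article; the log entries and category names are hypothetical) of turning a complaint log into a simple quality indicator that a continual-improvement review could act on:

```python
from collections import Counter

# Hypothetical complaint log: (month, category) pairs.
complaints = [
    ("2010-05", "turnaround time"), ("2010-05", "specimen handling"),
    ("2010-06", "turnaround time"), ("2010-06", "turnaround time"),
    ("2010-07", "reporting"),       ("2010-07", "turnaround time"),
]

# Tally complaints by category to find the recurring problems
# that a continual-improvement cycle should target first.
by_category = Counter(category for _, category in complaints)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

Even something this crude, reviewed quarterly alongside internal audit findings, is a step beyond a suggestion box that nobody opens.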
m
PS: In most provinces in Canada, this is the beginning of the August 1st long weekend. Enjoy the break.
A discussion site for folks interested in improving the quality of medical laboratories. Most posts will be the thoughts and vents of a long-time player in medical laboratory quality, from many perspectives: complex and basic laboratories, developed and developing countries, research and new knowledge.
Friday, July 30, 2010
Tuesday, July 27, 2010
International Training in Quality - next chapter.
This week I had an opportunity to work with the University of Washington International Training and Education Center for Health (I-TECH) and a great group of participants from China, Kenya, Thailand, and the US. One might wonder about the mix of nationalities, but it worked well. Fortunately their English was a lot better than my non-existent Mandarin, Swahili, and Thai. And where there were potentially going to be challenges, there were people who could translate.
The basis of discussion was around the Quality Management training CD produced last year by the World Health Organization. It was the perfect dovetail with the UBC Program Office for Laboratory Quality Management, and I trust the beginning of a sustained collaborative relationship.
The full program is a two-week training and mentoring exercise on a variety of relevant topics. I had the opportunity to talk about three favorite topics (Customer Satisfaction, Laboratory Safety, and Laboratory Accreditation) on two of the days. I will await the participant survey results, but from my perspective the program went well.
What distinguished this session was not so much the faculty, or the collaboration, or the setting, or even the individual participants (all were important), but rather the high degree of interaction, dialogue, and discussion. It was a good thing that scheduling was considered loosely. Folks had a lot to say, most of which was generally accepted, some of which might be considered controversial, but all of it became fodder for healthy discussion.
For Quality, all discussion is a good thing.
The more open, the more transparent, the better.
International collaborations are a triple win, good for training programs, good for program trainees, and good for their home institutions.
m
PS: For folks who are interested, my presentations will be available on www.POLQM.ca next week.
Sunday, July 25, 2010
Costs of Poor Quality - Revisited
Some (many?) of you may be members of the American Society for Clinical Pathology (ASCP). For those of you who are, I strongly encourage you to read the article in the July 2010 edition of Lab Medicine by Elbireer, Gable, and Brooks Jackson entitled "Cost of Quality at a Clinical Laboratory in a Resource-Limited Country".
The article describes a study performed in a Kampala laboratory in 2007 in which the authors took the traditional Juran-developed classification of quality costs (Prevention, Appraisal, Internal Failure, External Failure) and analyzed the laboratory's experience over 6 months.
The results are interesting in a number of ways. The Appraisal costs are astronomic (22% of budget) compared to Prevention (8%) and Failure (2.5%), and the proportions are way out of line with every other industry that has looked at the issue. But remember that this is a small laboratory in a resource-limited country supported by international grants. (Note: do NOT show these numbers to your hospital administrators. They already believe that you are spending an arm and a leg on proficiency testing and accreditation!)
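As a sketch of what such an analysis involves, the fragment below applies the Juran categories to a hypothetical budget; the dollar figures are invented, chosen only so the proportions echo the ones quoted above:

```python
# Illustrative cost-of-quality breakdown using the Juran categories.
# All dollar figures are hypothetical; only the rough proportions
# mirror those quoted from the Kampala study (22% Appraisal,
# 8% Prevention, 2.5% Failure).
budget = 1_000_000  # hypothetical annual laboratory budget

costs = {
    "prevention": 80_000,        # training, document control, QC design
    "appraisal": 220_000,        # proficiency testing, accreditation, QC runs
    "internal_failure": 15_000,  # repeats, discarded runs
    "external_failure": 10_000,  # amended reports, clinician callbacks
}

for category, cost in costs.items():
    print(f"{category}: {cost / budget:.1%} of budget")
```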
The discussion part of the paper puts all this in context.
What is important about this study is that it has been done and published in a journal that is widely available and widely read, and all laboratorians should be thinking about doing something similar in their own institution.
I can tell you that when this is done in the developed-country context, the costs of poor quality will be a lot closer to the traditional 5-7 times the prevention-appraisal costs, and that is your opportunity to shine, because once the costs are established you can start working at reducing them.
My own experience tells me that the traditional Juran model is probably not the best model to use in the medical laboratory. First, since pathologists are paid more than technologists, failure costs that consume pathologists' time appear more costly than those that only consume technologist time, which makes no sense on a quality or time-consumption basis. Second, the model does not address issues like clinician time and inconvenience, or patient time and inconvenience.
But I digress, and importantly at this point all of that is irrelevant.
What is relevant is that every laboratory should be looking at its costs of poor quality (CPQ) and developing strategies to lower its failure costs.
So congratulations to the authors on a really interesting and useful manuscript.

m

PS: If you are not a member of ASCP, borrow somebody's copy. I don't think Lab Medicine is indexed in Medline, but you can find the article on the ASCP website for a SMALL charge.

PPS: Don't be intimidated because two of the three authors have an MBA. There is no hard arithmetic here.

more later
Friday, July 23, 2010
3D - International Training in Quality
All the rage in movies these days is 3D. Movies like Avatar and Alice in Wonderland attract big crowds and make tons of money.
Well, we have our own version of 3D: a different kind of glitzy, and maybe not as lucrative, but pretty satisfying in its own special way. It's our three dimensions of international training in medical laboratory quality.
1. International training in Proficiency Testing.
The ILAC Proficiency Testing Consultative Group has done two surveys of PT providers. There are a number of constituencies, but I will focus on two. First, there are a substantial number of PT providers that enjoy providing education about PT: some of it about how PT can work well with accreditation bodies, some of it about pretty specialized topics, and some of it more general.
CMPT is a good and typical example of this. We see education as part of our university mandate. Rather than send our fresh simulated samples around the world, we would rather teach other groups how to make our types of samples and how to use them for high quality programs.
Then there is another group, many (but certainly not all) from developing countries, that need and want assistance, training, and mentoring, but can't find a partner or can't find the money. Their goal is to provide local PT for their own laboratories.
The challenge is how to put these two groups together to find a solution. We have been pretty successful over the years; you can visit www.CMPT.ca and see the pictures from our training sessions. We have worked by word-of-mouth, through our website, and now through this blog.
This may be part of our solution, but is not the bigger answer.
Clearly some sort of inventory site where both groups could get together would be helpful. The problem is that we don't have the resources for something like this.
Any ideas?
2. On-line UBC Certificate course in Laboratory Quality Management.
This course is very successful, and we are pretty confident in our client satisfaction and our impact on medical laboratory quality improvement. Visit www.POLQM.ca.
Registration for 2011 will start in September.
3. International Quality Management training sessions in concert with other events. During our last three CMPT PT training programs we have introduced supplemental Quality Management lectures and dialogue. And now we have embarked on a Quality Management Workshop to be held in Vancouver in the next year. These sessions can have partners that provide funding support for deserving international delegations that would otherwise not be able to attend. I have some in mind now.
If there is any group out there that would like to contribute, let us know.
More on this later.
I had a department head mentor who talked regularly about the university mandate of Education, Research, and Outreach, another way of describing 3D activity. That's what we think, that's what we believe, and that is what we do.
So it's not as glitzy as Avatar, but it is still pretty darn good.
m
Wednesday, July 21, 2010
Uncertainty of Measurement
Advance notice. This is a rant. I know that some folks don't appreciate rants. That is OK, but this is still a rant.
In science there are two kinds of measurement: hard measurement and soft. Hard measurement is what is needed in astrophysics and nuclear chemistry and engineering, where exquisitely small variation has huge impact. Consider the impact of calculating a flight to Mars and being off by 1%, or building a bridge and missing a critical weight by a few pounds. Soft measurement covers most of what we do in the medical laboratory. Our values need accuracy and precision and absence of bias, but few of our numbers meet the same demands as in astrophysics. (Maybe blood alcohol levels do, because of the legal complications that surround 0.08%.)
Our values are more about positive and negative, or high and low, or trends. In most situations, if a two-point differentiation is difficult to interpret, then a three-point one can be created. Our values are more about contextual interpretation and clinical relevancy. We can tolerate a certain inherent insensitivity and non-specificity because we have processes and safeguards, including treatment regimens, that can compensate.
In the 1990s international organizations introduced the concepts of measurement uncertainty (see the Guide to the Expression of Uncertainty in Measurement), which was, and is, of tremendous benefit in the hard measurement disciplines because it creates a method of considering all factors that may influence a measurement and addressing them collectively.
But then discussion started when folks started to think about Uncertainty of Measurement (UM) in the context of the softer measurement areas. Think about a sample in the medical laboratory and all the steps that can have an impact on determining a value. Consider precision, bias, maintenance, timing sequences and sensitivity of equipment. Consider in addition quality, concentration, solubility, and dating of reagents.
How about how the sample was collected, or the integrity of the tube and its contents (such as EDTA or serum separator gel)? How about trace or heavy haemolysis? How about drugs and medications? How about fasting? How about transport time, temperature, and atmosphere? And what about personnel techniques and focus? Yes, yes, and yes.
Can we take all these things into consideration when we do our testing? Of course we should. Some of them are actually calculable, and some are not. Some of them are, at best, crude guesstimates. And what about all the elements that Donald Rumsfeld would call "unknown unknowns"; should one put in a crude guesstimate for these as well?
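For the components that are genuinely calculable, the GUM combines independent standard uncertainties by root-sum-of-squares. A minimal sketch, with purely illustrative component values (the component names and numbers are mine, not from any real assay):

```python
import math

# Hypothetical standard uncertainties for independent components of a
# measurement (all in the same units as the measurand). Illustrative only.
components = {
    "instrument precision": 0.8,
    "calibrator bias": 0.5,
    "reagent lot variation": 0.4,
}

# GUM combined standard uncertainty for independent inputs:
# u_c = sqrt(sum of squares of the component uncertainties).
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (roughly 95% coverage).
U = 2 * u_c
print(f"combined u_c = {u_c:.2f}, expanded U (k=2) = {U:.2f}")
```

The mechanics are straightforward when the inputs are known; the rant below is about what happens when the inputs are guesswork.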
So why the rant? When accreditation bodies decide to make these sorts of uncertainty-of-measurement calculations, based on crude guesstimates, an accreditation requirement, it diminishes the notion of quality. It reverts to quality by dictate, rather than quality by principle. It diminishes the credibility of quality laboratorians, and it reinforces that quality is all about (or worse, only about) doing what accreditation bodies say, even when it makes no sense. This does not make laboratories better.
It makes laboratories worse.
m
PS: There will be no more rants, at least not today.
For additional reading on UM consider:
Kirkup L, Frenkel B. An Introduction to Uncertainty in Measurement. Cambridge: Cambridge University Press, 2006.
Sunday, July 18, 2010
Proficiency Testing - in all its guises.
I really enjoy being involved with medical laboratory proficiency testing. It satisfies so many roles.
First and foremost, it is a primary quality partner. (Quality partners are those groups that provide the supports a laboratory depends upon to improve its quality: standards development bodies, accreditation bodies, proficiency testing bodies, equipment and supplies providers, education bodies, professional organizations, and the public.) Every quarter there is another set of samples that allows the laboratory to challenge its systems and its procedures, to prove to itself that it can achieve a correct answer. Probably the worst thing that ever happened to proficiency testing was the intervention of "authorities" that jump in with all sorts of nonsense rules when an error occurs. All this accomplishes is promoting the "gaming away" of a very useful quality tool.
Proficiency testing education is really about continuing education and continual improvement. Our middle- and smaller-sized laboratories tell us that our critiques are their single most important source of continuing education material. Not so for the bigger laboratories, but then they are the ones with enough money to buy books, buy guidelines, send folks to conferences, send folks for training, and on and on. So they are not our first-priority education target.
But there is another side to PT that most laboratories and laboratorians don't see, or if they do, it is less obvious. Standards Development bodies, like ISO and WHO, raise the point that the proficiency testing samples should look like and act like real samples, otherwise they don't really measure proficiency. That seems so obvious, until the laboratory receives vials of lyophilized powder. There is a huge research and development component to PT. How can you create samples that look like clinical samples, but are stable enough to transport over distances and still be acceptable when they get to the laboratory? We spend a lot of time at that, as do many other programs.
This summer adds one more component that is very satisfying. Two months ago we had personnel in from a Middle East proficiency testing program learning how we make our samples and, more importantly, how we go through the R&D. Tomorrow we have another group from southern Africa.
So there is a big PT community in which we as Quality Laboratorians can play.
Memo to laboratorians: odds are that most of you will not be engaged in starting up a PT program. But everyone can have the opportunity to get engaged at a committee level, at a critique-writing level, at a commentary level. Don't limit your Quality activities to doing accreditation visits to other labs.
Get involved.
At least that is what I think!
m
Wednesday, July 14, 2010
Costs of Poor Quality
In our course we emphasize that Quality Managers ignore the economics of Quality at their peril.
Quality, as far as most enterprises (especially hospitals and laboratories) see it, is a necessary but unbalanced cost centre: money out with no financial return. Many have written on the subject, and many have talked about looking at the costs of Prevention and Appraisal (as input costs) and the costs of finding and addressing Internal and External Failure (as output). The problem is that CEOs and accountants only get to see the input costs (quality control, safety equipment, proficiency testing, accreditation, quality salaries); they never see the savings from reduced error.
Big Mistake.
We have been looking at some of those failure (output) costs as measured in time. Preliminary data show that the average error takes less than a minute to create but can cost over 90 minutes to fix. Our preliminary average is 116 minutes, and it is likely to go up rather than down. When the smoke clears, I anticipate that the mean will be much closer to 200 minutes, which means that if, by virtue of a quality system, you prevent 3 errors a day, you save approximately the time of one person every day. More on this later.
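The arithmetic above can be sketched as follows; the working-day length is my assumption, and the other figures are the preliminary estimates quoted in the paragraph:

```python
# Back-of-envelope failure-cost arithmetic from the post's figures.
minutes_to_fix = 200                 # anticipated mean time to fix one error
errors_prevented_per_day = 3
working_minutes_per_day = 7.5 * 60   # one person's working day (assumed)

minutes_saved = minutes_to_fix * errors_prevented_per_day
people_saved = minutes_saved / working_minutes_per_day
print(f"{minutes_saved} minutes saved per day ~ {people_saved:.1f} people")
```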
For a more systematic approach, for those of you who have access to the journal ISO Focus, there is a very readable article entitled "The ISO Methodology - assessing the economic benefits of standards" (Gurundino and Hilb, ISO Focus, June 2010) which provides an approach to assessing impact. For those without access as ISO members, they recommend contacting ISO (weissinger@iso.org) to get access to the Resources section; you have to be from an academic or research centre, or a company.
Here is a valuable number for your pocket: "...the impact from standards ranges from 0.15% to 3.0% of turnover." This may seem like small potatoes, but for a tertiary care medical laboratory (assuming roughly $100 million in annual turnover) that comes to between $150,000 and $3,000,000 per annum. If that is true, those are the kind of numbers that guarantee a quality team salary for a long time.
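For the curious, the dollar range follows directly from the quoted percentages applied to an assumed $100 million annual turnover (my assumption, chosen to match the figures above):

```python
# The quoted range (0.15% to 3.0% of turnover), applied to an assumed
# $100 million annual turnover.
turnover = 100_000_000
low, high = 0.0015, 0.030

print(f"${turnover * low:,.0f} to ${turnover * high:,.0f} per annum")
```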
m
Monday, July 12, 2010
Internal Quality
A colleague at work raised an interesting concept: if the Quality Team talks the talk of quality, then it should walk the walk. I can buy into that all day long. Her challenge was that if we monitor the organization's Quality Management implementation, and we monitor specific indicators for quality and customer satisfaction, then it is appropriate for the Quality Team to do the same for its own department. The Quality Team should develop its own internal Balanced Scorecard. Now that seems like an interesting idea, but with certain challenges.
At CMPT we have a Scorecard metric that we publish each year in our CMPT Annual Report (visit www.cmpt.ca). In essence, we monitor the numbers of compliments, complaints, contracts, and consultations and presentations. Each is gathered, weighted (lost contracts and complaints weigh much heavier than new or retained contracts and compliments), and summed into the assessment score. See the figure.
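The weight-and-sum idea can be sketched like this. The event categories are the ones named above; the weight values themselves are hypothetical, since CMPT's actual weights are not given here.

```python
# A minimal sketch of a weighted composite scorecard.
# Weights are hypothetical; negatives penalize unfavourable events,
# with complaints and lost contracts weighing much heavier.
WEIGHTS = {
    "compliment": 1,
    "consultation_or_presentation": 1,
    "new_contract": 2,
    "retained_contract": 2,
    "complaint": -4,
    "lost_contract": -4,
}

def composite_score(counts: dict) -> int:
    """Sum each category's count multiplied by its weight."""
    return sum(WEIGHTS[event] * n for event, n in counts.items())

# Example year of (invented) counts:
counts = {"compliment": 12, "complaint": 2, "new_contract": 3,
          "retained_contract": 20, "lost_contract": 1,
          "consultation_or_presentation": 8}
print(composite_score(counts))  # → 54
```

Tracking this single number year over year makes trends visible even if the individual weights are debatable.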
Malcolm Baldrige may not consider this a perfect Balanced Scorecard, but it works for us.
But to get back to the notion that a division within a company should monitor its own scorecard: it is appealing because it reminds us that our customers are the users of our services, and we need to be able to show them that we do what we say and say what we do. So the question is not whether we should monitor, but which components we should incorporate and follow.
Again, more on this topic later.
PS, our CMPT composite score will be developed for 2009 - 2010 next month. If there is an interest it will be available in the annual report on the website in October.
m
Sunday, July 4, 2010
Culture of Quality - Part 2
As we were preparing for our ISO 15189 accreditation, we started to see some glimpses of a developing quality culture that extended beyond the quality professionals and manifested in an interested cohort of about 60 or so employees in a variety of positions: some sample collectors in patient service centres, some sample receivers in accessioning, some bench technologists in a variety of disciplines, and some (few) in the management group. Indeed, it was interesting to me that, as a group, the non-management staff were far more interested in and committed to the notion of quality achievement.
We fostered this group in part with information, but I suspect our most successful tool was a series of discussion groups with free pizza. There was no doubt that they came for the pizza, but they stayed for the discussion.
The seminar series came to an end a few months ago, and we can already see the slippage. I am betting that we can get it back with a return-to-discussion in September (fingers crossed).
For those interested, there was a brilliant interview in the September 2008 issue of the journal QUALITY AND PARTICIPATION, published by the American Society for Quality. It is an interview with Steve Gerhardt, who talked about a very successful development of a Culture of Quality in one of the divisions of Ford (APAO).
It highlights some of my own observations (oh yes, and those of W. Edwards Deming too!):
Quality Programs beget Quality Culture.
Quality Programs by themselves are insufficient.
Laboratory workers have to have a sense of ownership of their procedures.
Knowledge sharing promotes Culture.
Command and Control can be damaging to growth of a Quality Culture.
A few questions come to mind: is this stuff real, or just the wishful thinking of a quality-junkie? And even if it is real, does it matter to the organization?
More later.
m