The central message in Quality is: monitor your customers and continually progress towards improvement. I wish I could say that was an absolute truth in the arena of education and teaching, but in my observation the best I can say is: not so much. I don't think this is entirely a consequence of a lack of interest in setting a quality agenda for teaching; it is also a lack of investigation, follow-through and innovation.
It should also be clear that if your goal is only customer satisfaction, then it is fair to say that you are stuck in the 1980s. The more appropriate goal is improvement that goes beyond satisfaction, now more commonly known as "customer delight".
I have raised this before. Customer delight follows the model described by Kano, which talks about providing a service that goes beyond the normal expectation, beyond satisfaction, and creates a feeling of exceptional appreciation.
To be fair, it is difficult to measure satisfaction if the sole tool is the traditional student satisfaction survey. Surveys are at best marginal for credibly measuring satisfaction. I created "Noble's Rules" as a way to increase their potential. But even with the "Rules", surveys have nothing to offer when it comes to "customer delight".
When educators discovered surveys, either on-paper or on-line, they seemed like the perfect tool. You create a bunch of questions, students answer them, and you count the responses. If one teacher gets 7 As and 3 Bs and another gets 5 As and 5 Bs, the first must be better.
The problem is that most students soon learn there is little in the surveys for them. It is a little game from which they quickly develop survey fatigue and boredom. They all too quickly become robotic in their answers and far too predictable to be reliable. Most students rank teachers on a 5-point scale with As or Bs most of the time, mainly because it is fast and easy. Put down something else and you trigger a set of follow-up questions. Too much work and not worth the effort.
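To make that counting concrete, here is a minimal sketch (in Python) of the kind of raw tally the survey approach relies on; the teacher labels and grade counts are purely illustrative, echoing the 7 As and 3 Bs example above.

```python
# Naive survey tally: the kind of raw counting that makes one teacher look
# "better" than another. Teacher names and grade counts are purely illustrative.
from collections import Counter

responses = {
    "Teacher 1": ["A"] * 7 + ["B"] * 3,
    "Teacher 2": ["A"] * 5 + ["B"] * 5,
}

for teacher, grades in responses.items():
    tally = Counter(grades)
    print(teacher, dict(tally))

# Teacher 1 {'A': 7, 'B': 3}
# Teacher 2 {'A': 5, 'B': 5}
# If most students answer A or B reflexively, the comparison says very
# little about actual teaching quality.
```

The tally itself is trivial; the weakness is in the robotic inputs, not the arithmetic.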
Then there are the others, those who love being outliers, who feel empowered and throw in a few Cs and Ds. Today we might call this the "twittering" of student surveys: the power of outliers when protected by anonymity.
If we really want to gather information,
we need more objective, independent measures to determine if we are making
progress.
So let me tell you about a supplemental measurement tool that seems to be working for us as a way of seeing whether our audience likes what we are doing.
In our certificate course for medical laboratory quality management, we do a lot of year-over-year update and revision. Since few people (if any) take our course year over year, few are aware of how much the course changes over time.
But when they finish the
course they communicate with their organization manager or employer and tell them about what they
learned. If they had a terrible
experience, the message would likely be that the course was a waste of
time.
But what we are seeing is that
organizations send us more and more candidates year over year. This is happening in multiple provinces in
Canada, and in a number of foreign countries.
Over the past 5 years our repeat business has not only continued, but many participants have started registering earlier. For example, this year registrations started to come in early in September, with many coming from organizations that have sent people to us before.
We see this as the benefit of shared information. Participant A has a positive experience and informs a colleague, who then registers early and becomes Participant B; or perhaps they inform their employer, who is then inclined to send more workers to increase the pool of Quality-trained persons. Ultimately it is a Quadruple win: Participant A, Participant B, the employer, and us.
So while we track individual opinions through satisfaction surveys, we can also track structural opinions by looking at where participants come from, and whether they are likely coming by referral.
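For those who like to see the mechanics, here is a minimal sketch of how that structural tracking could work, assuming nothing more than a list of registration records with hypothetical organization, year, and registration-date fields; the organizations, years, and dates below are invented for illustration.

```python
# A minimal sketch of tracking "structural" signals: repeat organizations
# year over year, and whether registrations are arriving earlier.
# All organizations, years, and dates are hypothetical.
from collections import defaultdict
from datetime import date

registrations = [
    {"organization": "Lab A", "year": 2015, "registered": date(2015, 11, 2)},
    {"organization": "Lab A", "year": 2016, "registered": date(2016, 10, 14)},
    {"organization": "Lab B", "year": 2016, "registered": date(2016, 10, 28)},
    {"organization": "Lab A", "year": 2017, "registered": date(2017, 9, 6)},
    {"organization": "Lab B", "year": 2017, "registered": date(2017, 9, 21)},
]

# 1. Repeat business: organizations that send candidates in more than one year.
years_by_org = defaultdict(set)
for r in registrations:
    years_by_org[r["organization"]].add(r["year"])

repeat_orgs = [org for org, years in years_by_org.items() if len(years) > 1]
print("Repeat organizations:", sorted(repeat_orgs))

# 2. Earlier registration: the earliest registration date seen in each year.
earliest_by_year = {}
for r in registrations:
    y = r["year"]
    if y not in earliest_by_year or r["registered"] < earliest_by_year[y]:
        earliest_by_year[y] = r["registered"]

for y in sorted(earliest_by_year):
    print(y, "earliest registration:", earliest_by_year[y].isoformat())
```

Neither count asks anyone how they feel; it simply records behaviour: who comes back, who registers early, and, by extension, who was likely referred.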
So here is my message:
· (A) If you feel compelled to use student satisfaction surveys, be very skeptical of the information you gather.
· (B) If you feel you have no choice, at least improve your surveys with Noble's Rules.
· (C) Better yet, find another indicator that is less subjective than satisfaction surveys, more independent, more measurable, and more focused on structural issues such as referrals.
(See Noble’s Rule (8)).