Even committed Qualitologists can make mistakes. Darn.
This is the 10th season of our virtual
classroom on-line course, and one might think that we have the process down
pat. Each year we review the information
from the previous year, retain what is still relevant and appropriate, and make
revisions where needed.
Every year we make a few “big changes”. This year they are a new Module on modern tools for Quality and an additional Quiz. These “big” things take a little more time and attention and care. And that is where I messed up. (Again, darn!)
Quiz 1 is completed at the end of Module 1. It is an on-line auto-graded multiple choice exercise
of 10 questions that should take no more than 30 minutes. We allot an hour.
When Quiz 1 was completed, the grades were surprisingly and disappointingly low. Considering the pre-selection process we use to ensure we have the right participants, they should have done better. Once we confirmed that the problem was not a technical error in the computer grading, a connection error, or a transcription error, we had to go back further, to when I wrote and set the questions and responses.
On one question, I had defined a wrong answer choice as the correct response, so everyone who answered correctly was flagged as wrong. This was annoying because the right answer was obvious and apparent. I had just messed up.
Additionally, two other questions were so subtle, so nuanced, that even I, who had set the questions, could barely figure them out. No wonder we were getting some very unhappy messages.
I remember clearly sitting in my office and setting the questions. I remember the process being slow and arduous. The quiz muse was staying away and I was struggling. Then I had a rush of ideas and got the questions completed in a hurry. Clearly the rush was not necessarily lucid.
Message to self: Hurrying is a bad thing.
A decision was made that we could not let the quiz stand and that, if possible, we would fix it without inconveniencing the participants. This was done by adjusting the auto-marking software and re-evaluating the responses (a rough sketch of that kind of re-scoring follows below). The marks climbed to exactly the level we anticipated. Then we informed the participants of the error that had taken place, described the remediation process, and told them to recheck their results. Then I had a discussion with the coordinator on how I would try to avoid making the same mistake of pushing to finish. I have created a check-list that all quiz and examination writers will need to go through before questions can be submitted for final posting.
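For readers who want to picture what the re-marking involved, here is a minimal sketch in Python of re-scoring stored multiple-choice responses against a corrected answer key. The key, the participant data, and the function name are all hypothetical; the post does not describe the actual auto-marking software, so this only illustrates the idea.

```python
# Hypothetical corrected answer key: question number -> correct choice.
CORRECTED_KEY = {
    1: "B", 2: "D", 3: "A", 4: "C", 5: "B",
    6: "A", 7: "D", 8: "C", 9: "B", 10: "A",
}

def rescore(responses, key=CORRECTED_KEY):
    """Count how many of a participant's answers match the corrected key."""
    return sum(1 for question, answer in responses.items() if key.get(question) == answer)

# Hypothetical stored responses for two participants.
participants = {
    "P01": {1: "B", 2: "D", 3: "A", 4: "C", 5: "B",
            6: "A", 7: "D", 8: "C", 9: "B", 10: "A"},
    "P02": {1: "B", 2: "C", 3: "A", 4: "C", 5: "B",
            6: "A", 7: "D", 8: "B", 9: "B", 10: "A"},
}

# Re-evaluate every stored response set against the corrected key.
for participant, answers in participants.items():
    print(participant, rescore(answers), "of", len(CORRECTED_KEY))
```

Because the original responses are already stored, re-running them against a corrected key fixes the grades without asking anyone to retake the quiz, which is what lets the remediation avoid inconveniencing the participants.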
Four lessons learned:
1. Setting quiz questions without deliberate and sufficient re-check and confirmation time is an unnecessary risk-taking procedure that increases potential error production.
2. Remediation and Correction take a lot more time than error causation.
3. Quiz errors hurt program credibility.
4. This simple hurry-up slip has been a TEEM loss event. The gross cost to us has been 6 hours of emails and discussions and IT labor shared by 3 people, plus we have had to create the check-list and revise our procedure manual. And then we need to do the preventive thing and re-check Quiz 2 and the final examination to make sure that we (I) didn't mess them up as well. And to that we have to add my frustration and a whole bunch of participant unhappiness.
Still, having a detection, remediation, correction, and prevention process that can and does pick up and analyze (study) mistakes and amend (act on) them “expeditiously” makes the point that Quality works.
Hooray.
Hi, about 13 years ago I taught stats at BCIT in the Operations Management program. I used the textbook, workbook, and answer keys provided to me but I quickly learned to start my corrections with the smartest students, as their work would sometimes be superior to the answer key. Your mitigation idea is valid, and actually you could engage your top students by making them pre-test the work first for the benefit of other students.
Hello Daniel
In all my Quality Assessment studies we use the "top tier" as an internal reference group, making sure that the identities within the group are kept private from everyone, including the members of the top tier. If that group gets something wrong, then I have to start looking more closely. In our proficiency testing program, if we have less than 80 percent agreement in that group, the challenge is deemed ungradeable (a minimal sketch of that rule follows below).
By the way, I'm still enjoying A QualitEvolution [ http://qualitevolution.blogspot.ca/ ]
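To make that 80 percent rule concrete, here is a minimal sketch, assuming each member of the reference group submits a single result per challenge. The function name, data layout, and example values are illustrative assumptions; only the 80 percent agreement cut-off comes from the comment above.

```python
from collections import Counter

AGREEMENT_THRESHOLD = 0.80  # the 80 percent cut-off described in the comment

def challenge_is_gradeable(top_tier_results):
    """Return True if the most common result reaches 80 percent agreement
    within the top-tier reference group; otherwise the challenge is ungradeable."""
    if not top_tier_results:
        return False
    most_common_count = Counter(top_tier_results).most_common(1)[0][1]
    return most_common_count / len(top_tier_results) >= AGREEMENT_THRESHOLD

# Hypothetical examples: 7 of 10 agreeing falls below the cut-off, 9 of 10 passes.
print(challenge_is_gradeable(["A"] * 7 + ["B"] * 3))  # False -> ungradeable
print(challenge_is_gradeable(["A"] * 9 + ["B"] * 1))  # True  -> gradeable
```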