Measuring a Successful Conference
Peter Drucker, the famed management and leadership author, is often quoted as saying “what is measured is improved.” The concept clearly resonates, because many variations of it exist, including “what you don’t measure, you can’t manage; and what you can’t manage, you cannot improve.”
For the last two and a half days (October 28-30, 2015) we have held our 5th Program Office for Laboratory Quality Management Conference. It is one of our major ongoing activities, important to our program mission and to our role in laboratory leadership. If there is an activity we need to measure, this would be one.
So how can we measure conference success?
Attendance and revenue are two obvious measures, but each has its own inherent weaknesses. Satisfaction surveys are also a tool of certain value. But let me argue for some other measurements that we considered.
Total attendance in relation to expected attendance.
Our original plan was to reach a total of 100 people, including sponsors, presenters, and attendees. We missed our target attendance by 15 percent, which was a disappointment. One of our target groups (a local health authority) reduced their participation by 25 people. We made up our audience with more people from across Canada and with international visitors, so the impact of the local authority’s absence on our conference was diminished.
I rate our attendance goal as 4 out of 5.
Diversity of audience.
By every measure we met our goal of diversity. We had folks from almost all provinces in Canada, and people from Oman, South Africa, India, and the United States. We had folks from the public sector and, importantly, from the community health laboratories. We had laboratory decision makers, leaders, consultants, students, and international health experts.
I am rating our diversity as 5 out of 5.
Participation of audience.
We can measure participation in two ways: first, through active discussion during round-table sessions, and second, through attendance at the last session in relation to the first.
At the end of each presentation session there was a round-table where all the presenters and the moderator had an open discussion on the theme and then opened the session to audience participation. Every open session ran the full length of its planned time, and every one had to be respectfully stopped to stay on the conference schedule. That reflects full engagement.
And the attendance of the last session (Friday at 4:00 PM) was 90 percent of the attendance of the first session (Wednesday at 7:00 PM), which suggests that folks did not get bored and drift off.
I am rating participation as 4.5 out of 5 (0.5 off for the 10 percent drop).
Compliments/Complaints ratio.
Thinking in terms of ambiance, hotel experience, food and entertainment, quality of discussion, and audience expectations, there are many opportunities for comment. Over the full conference I received 8 unsolicited compliments and no complaints. This is separate from the satisfaction survey, for which the responses have not yet been counted.
I am rating the compliments/complaints ratio as 5 out of 5.
Follow-through opportunities.
Since the conference we have had three invitations for new shared activities and two new invitations for presentations. We consider this a measure of success and interest. I rate follow-through as 4.5 out of 5.
So overall I am rating our success as 23 out of 25, a rating that I am quite happy with. We attracted a diverse, interested, engaged audience that clearly enjoyed the meeting. Once we get the survey results back, we will find out whether they felt they gained new knowledge.
Clearly we have learned that we cannot and should not depend on the local health authority for participation. For our next session we will focus our energy more aggressively on our areas of success abroad; we have found that we can do that. If locals want to participate they will, or not; spending a lot of time encouraging them to participate is not productive.
I will continue to write on the conference for the next few days.
M