While I'm not much of a fan of Level I evaluation, I do think it can offer insight into the effectiveness of our programs if we ask the right questions and pay attention to the answers. I'm dealing with metrics fans right now who want to ask smile-sheet questions like, "On a scale of 1 to 6, did you find the training useful?" What am I supposed to do with knowing that people rated the training an average of 5.9 in 'usefulness'? Or worse, a 2.6?
Here's an evaluation Kassy LaBorie and I did yesterday in the wrap-up to our online "Games Synchronous Trainers Play" session. (See citation at the bottom of this post.) It tells us much more than the typical "smile sheet."
What can I tell from this? That we emphasized the right things; that our points were clear; that we met our objectives; that we have provided people with tools (games) they feel they can integrate into their own synchronous programs. Next go-round we may emphasize even more the need to incorporate games as they relate to content, not just as filler. I also see that we may have given the wrong impression about something: there's a comment in the lower-right quadrant about self-paced learning, which we didn't discuss at all and certainly weren't casting aspersions on. (Heck, I'd rather access the worst self-paced program than most lecture-based 'webinars' any day!)
And what else does it tell us? Well, for those who believe that the online experience suffers due to lack of eye contact and body language, look at this screen again: are people interested and engaged? Do I really need traditional "eye contact" to tell me that?
If you must undertake Level I evaluation, try to find something that will give you more meaningful information than "4.5" ratings with no explanation. And pay attention to the feedback!
What other ideas do you have for evaluating at this level?
Evaluation activity submitted by Michele St. Pierre; adapted from an activity in Pike & Solem's 50 Creative Training Closers (Pfeiffer, 1998).