Malcolm Gladwell's new collection of essays, What the Dog Saw, includes a piece on the Challenger explosion. Essentially, he asserts, most failures of this magnitude can't be traced to a single mistake or one bad decision-maker. Sure, hindsight being what it is, things could have been done differently -- but there are several things, sometimes occurring in an important chronological order or pattern, that all need to happen. In other words, the problem is the result of a system failure.
And therein lies the central problem with the traditional (think 4-level) means of "evaluating" training. There are 1001 things (let's call them 'variables') standing between a freshly-trained worker and successful performance, from bad tools to a bad hard drive to, yes, a bad supervisor. Attempting to isolate the worker from the rest of the system in which he or she works invalidates the evaluation by removing context and circumstance -- and if the desired performance still isn't there, this approach to evaluation doesn't tell us how to fix it.
If you've been led to believe there's only one approach to evaluating training, try Googling around for Stufflebeam, Brinkerhoff, Stake, and Scriven. And there are others, so keep Googlin'. Perhaps something else would better meet your needs in informing both your formative and summative evaluation processes.
Or maybe you're already using something else? If not the 4 (or 5)-level taxonomy, what are you using to figure out whether training is really "working"?
3 comments:
I so agree with you on this. And I'd go further.
It's not just a question of trying to see the 1001 things looking back, but also of seeing them during planning.
I hear people talk of (sorry, I'm about to insert an ill-conceived gross simplification here) qualified IDs designing scalable learning solutions with a minimalist approach to cognitive load (tsk, tsk, let's keep the chit-chat and the 'war stories' to a minimum, shall we?) to scientifically maximise learning (end gross simplification) and I wonder which world of work they work in.
If we attempt to isolate the training from the rest of the system in which learners work, it invalidates the evaluation by removing context and circumstance.
Many evaluative models which claim rigour are, in fact, reductionist. But they seem inevitable as long as we have a training department or learning professionals who aren't a part of the enterprise. Training itself is often a reductionist activity.
Hmmm. A tangential semi-rant?
Let me finish with an observation about workplace learning in the UK. Managers are complicit in the use of simplistic models. Given a choice between a discrete, easy-to-measure training intervention versus something more akin to learning, they most often go for the former. It's easier for all concerned.
Thanks for sharing the link, but unfortunately it seems to be down... Does anybody have a mirror or another source? Please reply to my post if you do!
I would appreciate it if a staff member here at bozarthzone.blogspot.com could post it.
Thanks,
Peter
Sorry Anon, what link?
Jane