While the Kirkpatrick taxonomy is something of a sacred cow in training circles—and much credit goes to Donald Kirkpatrick for being the first to attempt to apply intentional evaluation to workplace training efforts—it is not the only approach. Several critics find the Kirkpatrick taxonomy seriously flawed, starting with its being largely atheoretical and ascientific (hence, 'taxonomy', not 'model' or 'theory'). For one thing, the taxonomy invites evaluating everything after the fact, focusing too heavily on end results while gathering little data that will help inform training program improvement efforts. (Discovering after training that customer service complaints have not decreased only tells us that the customer service training program didn’t “work”; it tells us little about how to improve it.)
Too, the linear causality implied within the taxonomy (for instance, the assumption that passing a test at level 2 will result in improved performance on the job at level 3) masks the reality of transfer of training efforts into measurable results. There are many factors that enable or hinder the transfer of training to on-the-job behavior change, including support from supervisors, rewards for improved performance, culture of the work unit, issues with procedures and paperwork, and political concerns. Learners work within a system, and the Kirkpatrick taxonomy essentially attempts to isolate training efforts from the systems, context, and culture in which the learner operates.
In the interest of fairness I would like to add that Kirkpatrick himself has pointed out some of the problems with the taxonomy, and suggested that in seeking to apply it the training field has perhaps put the cart before the horse. He advises working backwards through his four levels as a design strategy rather than an evaluation strategy; that is: What business results are you after? What on-the-job behavior/performance change will this require? How can we be confident that learners, sent back to the work site, are equipped to perform as desired? And finally: How can we deliver the instruction in a way that is appealing and engaging?
An alternative approach to evaluation was developed by Daniel Stufflebeam. His CIPP model, originally covering Context-Input-Process-Product/Impact, and later extended to include Sustainability, Effectiveness, and Transportability, provides a different take on the evaluation of training. Western Michigan University has an extensive overview of the application of the model, complete with tools, and a good online bibliography of literature on the Stufflebeam model. Short story: this one is more about improving what you're doing than proving what you did.
More life beyond Kirkpatrick: Will Thalheimer endorses Brinkerhoff's Success Case evaluation method and commends him for advocating that learning professionals play a more “courageous” role in their organizations.
Enough already, Jane! More later on alternatives to the Kirkpatrick taxonomy. Yes, there are more.
(Some comments adapted from the 'evaluation' chapter in my book, From Analysis to Evaluation: Tools, Tips, and Techniques for Trainers. Pfeiffer, 2008.)