...So lately I've experienced a new phenomenon: trainers and instructional designers knowingly including invalid, untested, or discredited tools or theories in training because it's "what people have heard of." To wit 1: a certain four-letter personality-type assessment that has no construct validity, no predictive value, and boasts a body of "research" for which the instrument's publisher has provided the grant money on the condition that the grantee's research "promote the use of" the instrument. To wit 2: a certain TAXONOMY of training evaluation (not a "model," not a "theory") that everyone has heard of, hardly anyone uses, and that has been shown time and time again to be flawed and basically useless.
Both times the designers insisted on leaving the stuff in because "It's what the learners have heard of."
As practitioners, isn't it our responsibility to help people discover things they maybe haven't heard of?