Don Clark's Performance Juxtaposition site has a page on Kirkpatrick's Four Level Evaluation Model that describes the model quite well. Kirkpatrick's original article can be read here, and Kirkpatrick Partners (the official site) summarizes the levels this way (my comments about how each level is assessed are in parens):
- Level 1: Reaction - To what degree participants react favorably to the training (smile sheets)
- Level 2: Learning - To what degree participants acquire the intended knowledge, skills, attitudes, confidence and commitment based on their participation in a training event (end-of-course test/project)
- Level 3: Behavior - To what degree participants apply what they learned during training when they are back on the job (follow-up survey/interviews about how trainee transferred training to the job)
- Level 4: Results - To what degree targeted outcomes occur as a result of the training event and subsequent reinforcement (follow-up analysis to determine how trainee's organization benefited from trainee being trained)
We trainers like to see all top marks on our smile sheets (the surveys that assess Level 1), but most of us realize that only means the attendees liked us well enough not to hurt our feelings. High scores on smile sheets can indicate good food, long breaks, or a great sense of humor.
Trainers also like to see students actually learn something. I used to do training for systems integrators and programmers, so I really liked to see them build a working web app; that let me assess what they learned (Level 2). High scores on an end-of-training assessment can indicate either an easy test or real learning, and even in the latter case they don't guarantee that the learning helped trainees in their jobs.
What we really need to assess is how well trainees transfer what they learned in training to their jobs (Level 3), and ultimately how that translates into better results for their organizations (Level 4). Following up with trainees and their supervisors isn't easy, but every organization should commit to it for critical training efforts. This sort of analysis benefits the organization receiving the training (did we get what we needed?) as well as the training organization (are we really being effective?).
BTW: Clark's Big Dog, Little Dog blog and his Twitter feed @iOPT are worthwhile reads for anyone in the training field.