Monday, March 2, 2015

Evaluating Results of Training Programs

Coming into this selective, I had an inkling that I wanted to explore simulation training for procedures in internal medicine and the outcomes used to evaluate it. There were a few avenues I could have explored: improvements in trainees’ proficiency after training, satisfaction with the programs, or better patient outcomes.

Dr. Cavalcanti introduced us to an interesting approach to evaluating and stratifying outcomes called the Kirkpatrick Model during a meeting where he, Joanne, and I discussed our research interests for the selective.

The Kirkpatrick Model for evaluating training programs defines four levels of outcomes, in ascending order of impact.

Level one outcomes examine the trainees’ reaction to the training program. Did they enjoy it? Newer iterations of the model also consider whether participants actively engage with the program, and whether they find it relevant to their day-to-day work.

Level two outcomes evaluate whether participants in a program acquire the skills that were taught during it. A participant’s confidence in the skills and intention to use the skills can fall under the umbrella of level two as well.

Level three outcomes look at whether participants’ behaviour changes, and whether they apply the skills they learned during the training program “on the job”.

Level four outcomes look at the program’s results on a broader scale, beyond whether the trainees apply the knowledge. Did the intended impacts of the training program happen? In medicine this would include metrics like patient outcomes. An example that I’ll explore in a later post on this blog is Dr. Jeffrey H. Barsuk’s study on the rates of central venous catheter-related infections before and after the implementation of a simulation-training program.
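As an aside, one way to make this stratification concrete is to sketch it out in code. The short Python snippet below is only an illustrative sketch of my own: the four levels follow the model, but every outcome measure listed is a hypothetical example I made up, not a measure drawn from any particular study.

# Illustrative sketch: tagging example outcome measures by Kirkpatrick level.
# The measure names below are hypothetical, not from any real evaluation.
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    REACTION = 1   # Level 1: satisfaction, engagement, perceived relevance
    LEARNING = 2   # Level 2: skills/knowledge acquired, confidence, intent to use
    BEHAVIOUR = 3  # Level 3: skills applied "on the job"
    RESULTS = 4    # Level 4: broader impact, e.g. patient outcomes

example_outcomes = {
    "Post-session satisfaction survey": KirkpatrickLevel.REACTION,
    "OSCE station score after training": KirkpatrickLevel.LEARNING,
    "Observed procedure technique on the ward": KirkpatrickLevel.BEHAVIOUR,
    "Central line infection rate after program rollout": KirkpatrickLevel.RESULTS,
}

if __name__ == "__main__":
    # Print the measures in ascending order of impact.
    for measure, level in sorted(example_outcomes.items(), key=lambda kv: kv[1]):
        print(f"Level {level.value} ({level.name.title()}): {measure}")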


In my experience in medical school, I am regularly exposed only to evaluation of level one and two outcomes. Our evaluation forms after any seminars, lecture series, rotations, or other training sessions focus mostly on our satisfaction with the experience and any feedback (positive or negative) that we can offer. These forms also occasionally ask whether we feel more confident in our knowledge or skills after the event. When it comes to level two outcomes, I would place our examinations and OSCEs in that category, and I’m sure some analysis and number crunching goes on in the background that the students don’t necessarily see.

More information on the Kirkpatrick Model can be found here: http://www.kirkpatrickpartners.com/OurPhilosophy/TheKirkpatrickModel/tabid/302/Default.aspx

-SR 
