Wednesday, March 4, 2015

Simulation: Improving Outcomes for Patients?


As discussed in my last post, outcomes for training programs can be stratified into four different levels based on the Kirkpatrick Model. I started to search for papers that demonstrated an impact on patient outcomes (Level 4 in the Kirkpatrick Model), and with some guidance from Dr. Cavalcanti, I was able to find a couple.

Use of Simulation-Based Education to Reduce Catheter-Related Bloodstream Infections - Barsuk et al. 2009

In my last post I mentioned a study by Barsuk et al. from 2009 that demonstrated a change in rates of catheter-related bloodstream infections after a simulation-based curriculum in central line insertion was instituted. The study compared rates of catheter-related bloodstream infections before and after a simulation program was put in place. The program itself consisted of a pre-test, a five-hour session with a videotaped lecture, a three-hour training session with an ultrasound machine and practice on a simulator, and a post-test that the trainees were required to pass. Before the simulation program was started, the rates of infection were 3.2 per 1000 catheter days in the medical ICU and 4.86 per 1000 catheter days in the surgical ICU. After the simulation training program was instituted, the rate of infections in the medical ICU dropped to 0.5 per 1000 catheter days, but remained at 5.26 per 1000 catheter days in the surgical ICU.
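As a quick aside, the way these rates are calculated is straightforward (this is the standard metric, not something spelled out in the paper itself):

\[
\text{rate per 1000 catheter days} = \frac{\text{number of infections}}{\text{total catheter days}} \times 1000
\]

So, as a purely hypothetical illustration (the papers report only the rates, not the underlying counts), 8 infections over 2500 catheter days would work out to a rate of 3.2 per 1000 catheter days.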

Performance of Medical Residents in Sterile Techniques During Central Vein Catheterization - Khouli et al. 2011

This paper was similar to the Barsuk study; however, in this study the resident participants were randomized to receive either simulation-based training plus video training or video training alone. The study examined rates of catheter-related bloodstream infections in ICU patients as well as the trainees’ sterile technique. The study found that the rate of infections in the medical ICU decreased from 3.4 infections per 1000 catheter days to 1.0 infection per 1000 catheter days. However, in the surgical ICU the rate of infections remained unchanged at 3.4 infections per 1000 catheter days. Sterile technique improved in the intervention group as well.

These results showed that simulation training had benefits not only for the behaviour of the trainees (Level 3 in the Kirkpatrick Model) but also for patient outcomes (Level 4).

As I've mentioned before, I have always found the simulation training we participated in during medical school incredibly valuable. We have had experience with various simulated cardiac and airway emergencies during our anesthesia rotation, airway emergencies and anaphylaxis during our emergency medicine rotation, and heart sounds during our medicine rotation. The scenarios would respond realistically to our treatment decisions, and it was the first time I felt like we the students were the ones in charge and that our decisions held weight. For example, during our anesthesia rotation, we were able to run an ACLS algorithm and watch as the bedside monitor reflected the patient's condition. The advanced technology helped make the sessions feel more “real” and definitely helped nudge my confidence in my ability to handle these situations up from “none” to “a little”. It’s definitely something I’d look forward to in my residency training.

These thoughts and opinions are at an individual level though, which is why it was interesting to read a couple of studies showing the positive impact that simulation training had on broader, “population level” patient outcomes in the ICU.

-SR

Monday, March 2, 2015

Evaluating Results of Training Programs

Coming into this selective, I had an inkling that I wanted to explore simulation training for procedures in internal medicine and the outcomes used to evaluate it. There were a few avenues that I could have explored: improvements in trainees’ proficiency after training, satisfaction with the programs, or better patient outcomes.

Dr. Cavalcanti introduced us to an interesting approach to evaluating and stratifying outcomes called the Kirkpatrick Model during a meeting where he, Joanne, and I discussed our research interests for the selective.

The Kirkpatrick Model for evaluating training programs creates four levels of outcomes in ascending order of impact on the trainee.

Level one outcomes examine the reaction of the trainees to the training program. Did they enjoy it? Newer iterations of the model also consider whether participants actively engage in the program and whether the trainees find it relevant to their day-to-day work.

Level two outcomes evaluate whether participants in a program acquire the skills that were taught during it. A participant’s confidence in the skills and intention to use the skills can fall under the umbrella of level two as well.

Level three outcomes look at whether participants’ behaviour changes, and whether they apply the skills they learned during the training program “on the job”.

Level four outcomes look at the program’s impact on a broader scale, beyond whether or not the trainees apply the knowledge. Did the intended impacts of the training program happen? In medicine this would include metrics like patient outcomes. An example that I’ll explore in a later post on this blog is Dr. Jeffrey H. Barsuk’s study on the rates of central venous catheter-related infections before and after the implementation of a simulation training program.


In terms of my experience in medical school, I would have to say that I am regularly exposed only to evaluation of level one and two outcomes. Our evaluation forms after any seminars, lecture series, rotations, or other training sessions focus mostly on our satisfaction with the experience and any feedback (positive or negative) that we can offer. These forms also occasionally ask whether we feel more confident in our knowledge or skills after the event. When it comes to level two outcomes, I feel like I might place our examinations and OSCEs into that category, and I’m sure some analysis and number crunching goes on in the background that the students don’t necessarily see.

More information on the Kirkpatrick Model can be found here: http://www.kirkpatrickpartners.com/OurPhilosophy/TheKirkpatrickModel/tabid/302/Default.aspx

-SR 

The "Perfect" Mentoring Program

By now, one can probably infer from my previous posts my particular interest in the area of mentorship during medical training. In an earlier post, I discussed the premise for establishing a mentoring program and explained its effects on students. However, behind each success is an extensive effort by faculty and curriculum leaders pushing for its establishment. Not all medical schools have formal mentoring programs, and even within the repertoire of existing models, evidence has not shown there to be a standardized approach to creating the “perfect mentoring program”. There is also a lack of defined outcomes for studying program effectiveness, which limits the applicability of models and further program development.

Three main models of mentoring are currently used in medical schools:
  1. Individual mentoring: this is the traditional paired approach where each mentee is matched to a mentor
  2. Group mentoring: here, one mentor meets simultaneously with many mentees at different levels of training. Junior members often benefit considerably from the interaction with their more advanced peers. This approach is used in the Internal Medicine residency program in Calgary; and despite being assigned to mentor groups instead of choosing them, students have for the most part formed meaningful relationships with their upper-year peers
  3. Telementoring and distance mentorship: this is an evolving model of mentorship that usually takes place over email or personal messaging, often developed after students who took part in traditional mentoring programs have relocated to other geographic locations
The traditional individual mentoring model is still used most frequently across Canada and provides the most flexibility in goal-setting and frequency of meetings between mentor and mentee. Group mentoring has been gaining momentum, especially in settings where the members share similar interests and career aspirations. For example, the Women in Emergency Medicine Mentoring Program developed by Indiana University School of Medicine employs a group structure using three processes: vertical mentoring (senior faculty to junior faculty and residents), peer mentoring, and role modeling. Scheduled sessions are held under the voluntary direction of a female mentor, usually every other month, in a relaxed environment that is welcoming to families. There are also organized workshops and annual meetings for greater networking opportunities. The program has been shown to have high satisfaction rates and retention of mentor involvement.

When creating a new mentoring program, it’s important to consider potential problems, including a lack of time and commitment from either party, overdependence of the mentee on the mentor, creating “clones” of the mentor, and inconsistent experiences between students in the case of informal mentorships. To address these potential problems, we can consider adopting strategies from those who have collaborated to discuss them. “The Mentoring Toolbox” was an annual workshop conducted by the Pediatric Academic Societies with involvement from over 100 faculty attendees, each representing various types of mentoring programs. Together they developed a guide to help with mentoring program design, and the results were published in The Journal of Pediatrics. Although the guide grew out of pediatrics, the core principles can certainly be applied in other medical specialties as well.

First and foremost, four essential components should be addressed: formal vs informal structure, mandatory vs voluntary participation, assignment vs flexibility of mentor selection, and availability of rewards for participating mentors. The following conclusions were made at the workshops regarding the best possible strategy:
  1. The program should be formally structured with explicit expectations and goals to produce a standardized experience and hold participants accountable. Formalization also shows institutional support which would facilitate formal mentor training.
  2. The program should be voluntary for mentors but mandatory for mentees to promote professional development in all students.
  3. Mentees should be allowed to explore multiple mentors and identify new mentors even if assigned a different mentor initially. This would allow a better chance of a meaningful relationship, promote autonomy, and increase commitment and satisfaction.
  4. Tangible rewards should be available to mentors, perhaps in the form of awards and recognition in the formal promotional process. In addition, time used to participate in the mentoring should be compensated through a reduction in expectations of clinical productivity, and subsidized by the school and/or hospital.
Designing a mentoring program requires involved and motivated players at various levels of the medical education system. It is a rather large investment, both financially and in terms of effort. Until there are large-scale studies that objectively examine the vitality of mentoring programs after their establishment and produce results showing significant benefit to the school, I do not see them becoming a spontaneous addition to every medical school.

-JJ

Sunday, March 1, 2015

Mentorship: Building Perspective and Foundations

In one of my previous reflections, I alluded to the role of mentorship in medicine by addressing Dr. HPK’s intriguing style of teaching. Teaching alone, however, does not equate to mentoring; a teacher has greater knowledge than a student, but a mentor can provide greater perspective. While knowledge is important and forms the foundation of our clinical engagement during medical school, perspective is what separates a student from a seasoned physician. Perspective is gained through years of experience, both positive and negative; through critical reflection on one’s strengths, vulnerabilities and limits; and through formation of relationships with colleagues and patients. During this selective, my colleague and I have been regularly exposed to Dr. HPK’s perspective on medicine. The holistic approach he takes with his patients is a clear indication of how much interest he takes in their lives and his genuine investment in their health. His discussions cover anything from the patient’s ancestry to their last vacation; and there always seems to be a focus on family pets. By approaching the patient as a story ready to unwind, you begin to understand how they have been affected by the disease, how they are coping, whether they are in control or feel out of control, and what they ultimately want to achieve. We’ve always been taught to treat patients holistically, but I don’t think I had truly learned what holistic medicine is until this rotation, and it’s all thanks to this new perspective on patient care.

There is no formal mentorship program here at Toronto. Instead, students are encouraged to explore non-academic areas of medicine with their preceptors, which often occurs unintentionally after working during rotations with preceptors with whom they share similar interests, career aspirations or personal/religious values. Other students may develop “mentorship”-like relationships with their longitudinal research supervisors, especially if the student hopes to enter the specialty of the supervisor. All of these opportunities operate on an informal basis, usually initiated from the student’s end. The longevity of these relationships ranges anywhere from a single rotation to the formation of life-long friendships. The closest thing we have to a formal mentorship would be our portfolio curriculum, which runs throughout clerkship and is centred around student reflection. Unlike mentorships, however, portfolio sessions are more structured and focus on the student experience rather than the dialogue between student and mentor.

Studies have shown that successful mentorships are experienced as a “free zone”, a neutral place where students can bring up concerns they normally don’t discuss with their preceptors. This facilitates a space free from judgement and assessment. Students are more likely to inquire about the process of becoming a doctor, with emphasis on professional development, ongoing perceptions of the health care system, and personal uncertainties. Students may seek validation, comfort, new perspectives or more; as such, mentors are most often perceived as counsellors, providers of ideas and role models. One study in Munich described several benefits of a mentorship program, including using it to establish a feedback loop that identifies patterns of student concerns, which can then inform overall medical curriculum changes.

For a mentorship program to be successful, explicit goals should be established: does the student want to focus on academic or personal growth, or both? These expectations should be clearly discussed at the beginning and allowed to evolve over time as the mentorship matures. Both parties should be motivated and actively involved in the mentorship; as with all relationships, there need to be balanced contributions and personal drive from both ends in order to make it meaningful. The mentee should be forthcoming and honest about his/her needs, and the mentor should be aware of and respect the potentially personal nature of some of these needs. Disengagement from either side may lead to disintegration of the mentorship, devolving it into generic periodic “check-ins” on academic progress, which defeats the purpose of a formal mentorship program.

-JJ

Saturday, February 28, 2015

Adventures in Ultrasound

Ultrasound has always been a bit of a black box for me. Aside from a handful of radiology lectures in preclerkship, our exposure has been limited to periodic opportunities during clerkship rotations. Given the increasing use of ultrasound for various applications at the bedside, these skills will be important to have in the future. 

This past Thursday we had the opportunity to spend an hour with Dr. Cavalcanti in the simulation lab at Toronto Western learning how to use the ultrasound machine, utilize ultrasound guidance for paracentesis, and identify anatomical structures in the abdomen.


We started with how to use the machine itself, a simple place to start, but something that I had not been taught explicitly yet. We learned how to change probes, change the type of exam, and modify the depth and gain. We then moved on to using the machine to detect pockets of fluid in a model of an ascitic abdomen. I have seen ultrasound used to do this on a real patient, and the model offered a better-than-expected simulation of the abdomen. We then used a fairly advanced ultrasound simulation model to identify fluid in Morison’s pouch and in the splenorenal recess. The simulation was able to render both the ultrasound image and a 3-D animated image of the anatomy in real time, which is a fantastic way to teach trainees how to visualize the structures they’re seeing on the ultrasound machine. Clinical teaching about ascites was interspersed throughout the session.

I did a PubMed search about ultrasound teaching and simulation after the session and came across an interesting tidbit. Coincidentally, a study was recently published in Medical Education that examined whether training in pairs was non-inferior to training individually. In “The effect of dyad versus individual simulation-based ultrasound training on skills transfer”, thirty learners were randomized to receive training on transvaginal ultrasound simulators either individually or taking turns in pairs. Participants were final-year medical students who completed a pre-test, training, and a post-test. They were then evaluated performing an ultrasound examination of the uterus, lateral pelvic wall, and pouch of Douglas. In the end the results showed that training in pairs was non-inferior to training as an individual, which could make training in the future more time-efficient and cheaper. On an individual level, I didn't feel that taking turns with my colleague during the training session on Thursday had any tangible drawback: it was a great session using fantastic technology.

-SR


Thursday, February 26, 2015

Tea Steeping vs. iDocs- A Learner’s Perspective


After a discussion about competency-based programs, I took the opportunity to read “A tea-steeping or i-Doc model for medical education?” by Dr. Brian Hodges (http://www.ncbi.nlm.nih.gov/pubmed/20736582). The article explores two different paradigms in medical education, and how they could be reconciled and applied to the current needs of medical education.

The first example that Dr. Hodges discusses is the traditional “tea steeping” model that is fairly familiar to those of us currently in medical school. The “tea steeping” model is the predominant one currently, and refers to the fixed duration of medical training (three or four years), after which it is assumed that a learner will have become a competent practitioner. In other words, the tea is the student and the hot water is the school. This model is firmly entrenched, and a complete departure from it would be quite difficult. Changes have been made to it over time, such as modifying the curriculum or admissions requirements, or lengthening the time of training.

Issues with the “tea steeping” model include the evaluation of trainee performance (often done at the end of rotations or courses rather than through continuous assessment) and a disconnect between the basic sciences and clinical training. However, at the same time Dr. Hodges states that this model may be better for developing habits of mind such as cognitive flexibility and tolerance of ambiguity.

The second model of competence development that Dr. Hodges describes is an outcomes-based one. He describes it as the “iDoc” model, drawing parallels between manufacturing iPods and manufacturing trainees who have a certain set of competencies (in fact, he describes how some of the language surrounding outcomes-based training reflects that used in manufacturing). In this model, students progress when they demonstrate competency in certain areas. For example, the orthopedic surgery residency program at the University of Toronto ensures that residents gain mastery of skills in modules such as basic fractures, complex trauma, and pediatric orthopedics. While this model could make training programs more efficient, the logistical issues (organizing rotations, preceptors, and dealing with variable lengths of training) would likely preclude full implementation.

As a medical student (for a few more months), I generally agree with the benefits and drawbacks of each model. There are times when I reflect on how much time is left until July 1st and begin to feel somewhat worried about skills I have not mastered or presentations I have not yet seen. As a future resident, I would feel reassured to be in a program where I had to demonstrate clear competency in certain important domains before being considered “fully trained”. However, at the same time, I wonder if my mastery of a skill would suffer with time after completing the “module” in which it was taught. I can appreciate that the logistics would be difficult to manage, and the variable length of training would make it difficult to plan my career around. In terms of the “tea-steeping” model, I have appreciated the fact that I was immersed in a four-year journey of learning. I was able to take this time to develop new ways of thinking and approaching problems, and didn’t feel pressured to take on the next module so I could move forward.

Like Dr. Hodges, I feel that integrating outcome-based training into the four-year curriculum would be a good approach. In some ways I can see this happening already, with our observed histories and physicals in our family medicine rotation (FM-CEX) and the mandatory patient encounters that we have to log for each rotation (though these are not evaluated). I believe that a formalized system for continuous evaluation and feedback during clinical work on each rotation (not just halfway through and at the end) would be a natural next step, and one that I would like to see implemented.


-SR