Tuesday, March 31, 2015

Do Student Response Systems Actually Improve Learning?

Student Response Systems provide many advantages to a lecturer. They improve participation in class; they allow instant feedback from students regarding their knowledge; they facilitate discussions among students. However, in my last post, I alluded to the most important question: do they improve learning and, more practically, performance? At first, most of the literature I found seemed to support that notion. For example, many universities that employed clickers found increases in class averages of up to a full letter grade. I had accepted this for the time being, although I realized there were perhaps too many variables to control for. Things seemed to make sense in my brain. But then I was introduced to an interesting concept from another paper.

Anthis (2011) argued that many of the previous papers supporting improved grades with clicker use suffered from confounding variables. For example, many courses that employed clickers also ran weekly activities that reviewed questions and course materials. The author hypothesized that perhaps it wasn’t the use of clickers that improved learning but rather the use of clicker questions. Presenting questions throughout lectures could prompt students to become aware of their own deficits and engage their “metacognitive skills”, resulting in a greater motivation to study. The study went on to conduct two experiments. The first analyzed the use of clickers in an Infant and Child Development course, where one group of students responded to random questions throughout lectures using clickers and another group responded to questions by raising their hands. Controlling for initial GPAs, the average scores actually significantly favored the group without clickers. In the second experiment, the methods were the same except that the course was a Lifespan Development course and, instead of one group using clickers for the entire semester, the groups switched halfway through. The result was that scores on each exam were not significantly different.

The results of the study were interesting to me for two reasons: one, they differed from those of all the other studies I had read, and two, the paper proposed an interesting concept. I had never heard of metacognition. From what I read, it essentially means cognition about cognition: it allows humans to regulate their behavior and improve performance by understanding how they go about learning things. Types of metacognition include person knowledge (understanding one’s own capabilities), task knowledge (understanding the nature of one’s task) and strategic knowledge (understanding what strategies exist and how to use them to improve learning). In essence, Anthis argues that clicker questions improve students’ person knowledge, thereby motivating them to study, and that clickers themselves do not directly improve learning.

Despite seeing the value in the metacognition concept, I have two concerns with the paper. First, although Anthis raises an interesting point regarding clicker questions, it is important to note that the study uses clickers for their bare minimum value. While SRSs engage students in comfortably answering questions anonymously, one great value is that they also provide feedback to the lecturer about the knowledge in the class and create an opportunity for discussion and for clarifying concepts. Sure, questions on pieces of paper can also engage students and prompt them to independently reduce their own knowledge gaps, but SRSs confer the greater advantage of addressing those deficits then and there. Interestingly, when clickers were used to their full extent in Mayer 2009, groups using clickers significantly outperformed groups that used paper questions or had no questions. In that experiment, discussions to clarify concepts took place in both groups that were given questions. The one difference was that discussion in the paper group took place at the end of class, whereas discussion in the clicker group occurred immediately following each question. One could theoretically argue that the more immediate discussion led to better retention of the knowledge. While this may be true, it also highlights an important advantage of electronic SRSs over traditional paper-based/hand-raising methods: clickers allow for greater efficiency and flexibility in collecting data to assess for gaps in knowledge. This brings me to my second concern with the study. Even if clickers merely function as a means of improving education through motivation, a purpose that can also be served through traditional means, technology is such a far more efficient and practical medium for doing so that it may still hold more value than the author gives it.


Mayer 2009 represents one of the first studies comparing SRSs with traditional methods of posing questions and initiating discussions. Of course, it cannot be said to have eliminated all confounding variables, and its results may not be completely generalizable to other classroom settings. However, it does represent the type of research that is needed to justify the introduction of SRSs into curricula. I hope to see more of this research in the future and, hopefully, wider use of SRSs in medicine and radiology.

-DW

The Art of Medicine: On Being More than an Encyclopedia

In the last several weeks I feel like I've jumped down the rabbit hole of medical education theory and research, and it's been an incredible learning experience. So much of what I have been focusing on, however, is the teaching of medical knowledge, assessment of clinical performance, and diagnostic reasoning. While important, and certainly a necessary component of learning during medical school, as we all know, a good doctor should have many skills beyond extensive medical knowledge.

A huge shift has occurred in the last few decades in terms of the expected competencies of graduating physicians. I remember having the CanMEDS roles drilled into my head in first year of med school (I even came up with an acronym so as not to forget them: CAMSPEC). I remember thinking that many of these qualities were self-evident; why do they need to be emphasized so strongly when there is so much more to learn in medical school? As I've progressed through medical school, however, I've been reminded time and time again that my patients remember me as a good doctor not because I make the perfect diagnosis, right away, all of the time, but because I care for them, I advocate on their behalf, and I take the time to speak with them and explain what's going on.

On my CaRMS interview tour I was asked about my most impactful experience during clerkship. I told the story of a young teenage patient I took care of for several weeks with an eating disorder. I remember feeling pretty helpless in terms of medical management those weeks; after all, my patient was simply in hospital to eat and gain weight, but I still felt like I wanted to be 'doing something' as their doctor. So, I made a point of going to have a conversation every day, speaking not just about symptoms but about painting, novels, their dog, what was going on with school, and even the weather. Gradually they opened up and told me things they were struggling with and I was able to advocate for more time spent outside in the sunshine - a small thing, but something that would really help. I didn't feel like I was doing much, but I was given a letter at the end of our time together, thanking me for my kindness and saying that I had been the only person they felt comfortable talking to in hospital and that I had made a huge difference.

I was floored.

To think that the simple kindness of having a conversation, listening, and being present could make such a huge difference to a patient was not something I had expected.

Where did that ability come from, though? Have I always been that kind of person? One who knows when to listen and to be kind? Maybe, but I do think that I've become much better at it over the course of medical school. Perhaps it's just that, now that I've had these experiences, I'm much more aware of the influence individuals can have on each other, especially those in a position of power or privilege such as we hold in medicine, and I exert my abilities in a more mindful manner.

On the other hand, I think medical education has evolved to include the teaching, or at least the cultivation, of competencies beyond medical knowledge. Beyond being Medical Experts, we are trained as Collaborators, Advocates, Managers, Scholars, Professionals, and Communicators; four years later, I truly believe that my skill in these domains has evolved, but I can't figure out how. Was it explicitly taught? Or did I absorb it from observing other physicians and their interactions with patients? I can certainly remember many instances of being impressed by the kindness and patience numerous doctors have demonstrated with their patients; equally, I can remember poor examples I've wished to avoid.

Beyond kindness, some of the physicians I've met, like Dr. HPK, truly connect with their patients. He takes the time to ask questions about their life stories, their histories, and their families, and through it all links it to his own experiences (which are extensive) so that he can hold a conversation in an area of interest to the patient. Many times I noticed him asking questions about me as well. At first I attributed this to curiosity, but over time it's become too much of a pattern to be as simple as that. He brings these personal anecdotes up with patients too, thereby connecting not only himself to the patient, but his learners as well. And that connection, demonstrated daily by how much his patients show their appreciation, is what makes Dr HPK a genius at not just the science, but the art of medicine.

I'd like to think that my attempts at connecting with my patient with an eating disorder were a rudimentary version of Dr HPK's style (of my own devising). Now that I've spent several weeks in his clinic, I think I've learned even more about how the 'grey areas of medicine' can be served by compassion, attention, and connection. There may not always be a cure, but as I've learned, cultivating these abilities can still make all the difference in the world.

~LG

Sunday, March 29, 2015

Socrative

In looking for a Student Response System (SRS) to use for my presentation, I considered a few. In my last post, I highlighted some of the positive attributes of SRSs, such as increasing participation and engaging students, which directly or indirectly resulted in improved grades in a few studies. However, those studies also identified some potential barriers to their usage. The main system I had used in the past was “clickers”, and the majority of research has focused on their use. Essentially, a clicker is a device that takes a real-time poll from students and displays the results on a monitor. The main limitation to their usage is the need for equipment and software and their associated costs; new clickers generally cost around $50 each. With the increased popularity of smartphones and access to the internet, cloud-based response systems are more versatile and more accessible. Another SRS I considered was Poll Everywhere. It is easy to use and requires no equipment other than a smartphone. However, as my topic relates to radiology and incorporates many images into the presentation, I found that Poll Everywhere didn’t have the imaging features I wanted. The Student Response System I decided on for my presentation is called Socrative.

I first learned about Socrative during my radiology elective at Western University. The first thing I noticed about the program was its simple yet engaging nature. Students need nothing more than to download the app on their smartphones (which takes around 5 seconds). No account is needed and the interface is very user-friendly. For presenters, an account is easily created, with an equally user-friendly interface for making quizzes or polls. Furthermore, there are many formats one can use, including a quick poll for on-the-fly questions and premade quizzes for testing application after a lecture. These quizzes can incorporate images (a feature essential to my presentation) that can be displayed and zoomed in on the students’ smartphones for better clarity. Finally, the answers (true/false, multiple choice, open answer) are collected and displayed on the monitor, similar to other SRSs.

In deciding on Socrative, I was interested in exploring what literature exists on its use. Socrative was released in 2012 and, since then, there have been a number of studies and feasibility projects, the majority focusing on student perceptions. For example, Dervan 2013 used Socrative in a Contemporary Sports Management class in Dublin. Afterwards, he distributed a survey to identify student perceptions of its ease of use, whether and how it helped their learning, interest in future use, and perceived disadvantages. 96% of students thought Socrative was easy or very easy to use, 92% thought it improved engagement, 77% wanted to use it more or significantly more the following semester, and 58% saw no disadvantages. With regards to how Socrative helped their learning, the overwhelming majority of students thought the SRS made lectures more interactive and introduced some fun into the learning environment. Of particular note, it gives students who are generally shy a chance to participate anonymously without fear of getting questions wrong. Students also thought Socrative introduced some competition and highlighted gaps in their knowledge, providing some motivation to learn. The perceived disadvantages were the potential for students not to take the interactions seriously, inappropriate comments, and possible technological failure. In my opinion, technology remains the greatest barrier, as some students may not have data and/or access to the internet. With the increased availability of WiFi at educational centers, this should become less of an issue.


Overall, I am very much looking forward to trying Socrative in my presentation. I think it will make the presentation more engaging and keep students interested in the material (radiology may not be the most interesting to some!). At the same time, a big risk with incorporating technology into lectures is using it for the sake of using it. In those cases, much time may be wasted playing around with the technology. Thus, I still plan on using traditional methods of hand-raising and audience polling for simple questions. Knowing why one is using an SRS is very important, as highlighted by many experts. For myself, I am using an SRS primarily because I think engaging the audience in an esoteric topic such as radiology is critical to maintaining attention and interest. Ultimately, I hope studies will be done to show that using Socrative to maintain interest translates into improved learning.

-DW

Friday, March 27, 2015

The good, the bad, and the ugly

Would you rather know if somebody was really good at something or really bad?

Yesterday, noon rounds were presented by a guest speaker from the University of Calgary on a topic specific to medical education. Dr Ma was presenting her research on assessment tools for clinical competence with procedural skills in internal medicine. She quite eloquently laid out the rationale and the problem she was attempting to answer. I was most impressed by the experimental design of her research, which allowed her to demonstrate some fascinating results.

One of the particular themes she spoke on was how many of the checklist-oriented assessment tools we have are grounded in attempting to assess Competence. That might sound obvious, but what she pointed out was that the majority of them do not have a foolproof method to detect Incompetence. When global assessments of students' performance were compared to the checklist assessments, experts deemed some of the students who had passed according to a checklist to be incompetent at the particular skill being assessed. Many of the errors that led an expert to judge a student incompetent related to major patient safety issues: breaking sterility, taking too many attempts, or failing to prepare effectively. This led them to think that perhaps it would be a far worthier goal to assess incompetence by developing a checklist that enumerates unacceptable errors. Thus, one would pass by not making errors, as opposed to by demonstrating competence.
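As a purely hypothetical sketch (my own illustration, not taken from Dr Ma's work or any validated tool), the logic of such an error-based checklist is easy to picture: a student passes by committing none of the enumerated unacceptable errors, rather than by accumulating points for steps performed well.

```python
# Hypothetical error-based ("incompetence-detecting") checklist.
# The error names below are illustrative, not from any validated instrument.
CRITICAL_ERRORS = {
    "broke sterility",
    "too many attempts",
    "failed to prepare effectively",
}

def assess(observed_events):
    """Fail the student if any critical error was observed; otherwise pass."""
    committed = sorted(CRITICAL_ERRORS.intersection(observed_events))
    return ("fail", committed) if committed else ("pass", [])
```

Note the contrast with a conventional checklist, which would sum up completed steps against a passing threshold; here a single safety-critical error is enough to fail.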

At first glance, I like this approach, but I think that it could only be used in certain contexts; if the goal is assessment of competence at an end-of-session examination, you want to know they can do it, not just that they aren't making mistakes. Perhaps the best method would combine these two methods simultaneously - but as Dr Ma pointed out, those experiments haven't been done yet, so we'll just have to wait and see.

As I turned my attention towards assessment in medical education in general, I found myself reflecting on how the choice of evaluation method has changed throughout our four years of medical school. We have often had large-scale MCQ exams with occasional short-answer sections mixed in. As we progress, we also have OSCEs, which are designed to assess our clinical competence in addition to our knowledge. I've often wondered how multiple-choice exams can accurately determine knowledge acquisition; I've always found them frustrating because I can't write an explanation of why I selected the answer I did, which you can do in short-answer questions. However, it turns out that there is a lot of good evidence that results from major MCQ examinations can predict future clinical competence and performance. In fact, a paper published in JAMA in 2007 demonstrated that test scores can even predict the number of complaints to the college a physician might trigger.

Despite their utility, it is clear that an MCQ exam cannot be used to determine competence in performing a procedure, and that performance-based assessments must also be incorporated into the assessment structure of medical education. Direct observation of students with real or standardized patients seems to be the gold standard, as reflected in the prevalence of OSCEs in undergraduate medical education. However, it's important to note that, as Swanson et al point out, despite our best attempts at high-fidelity simulation, learners behave differently in these environments than they would in real life.

It is clear that designing assessment tools is not as straightforward as one might think. I'm reminded of my experiences in survey design and how rigorous the validation methods must be to ensure the tools are used effectively. I am equally reminded of my frustration when, as a grad student, I had to mark lab reports and develop my own rubric for evaluation. Dr Ma's presentation was illuminating both for the strategies one could employ to determine the effectiveness of evaluation tools and as a reminder that, going forward, perhaps I should not always trust the tool I'm given to evaluate my future students, and that a simple pass/fail approach can sometimes be just enough.

~LG

Thursday, March 26, 2015

Thinking about Thinking Fast and Slow

Today I led a teaching session for the 3rd and 4th year clerks within the Internal Medicine Department. My teaching topic was "How do we think: Approach to Differential Diagnosis". My objective in picking this topic was, by going through cases, to demonstrate that there are many different ways clinicians develop differential diagnoses, depending on familiarity with the case (or learner expertise), the urgency of the situation, and individual preference. Each has its pros and cons and can be equally useful, depending on the context. I had hoped that by going through this exercise, the students would become more aware of their individual thought processes (thinking about how they think) and expand their toolboxes so that, in future, they might have a greater variety of approaches to use.

I walked through cases from the simple (a 45yo man with fatigue and microcytic anemia) to the more complex (a 5yo with failure to thrive, oral ulcers, and joint pain), picking and choosing an approach for each. I started with the most basic 'Deadly and Common' approach, most useful for those with experience in a familiar and urgent situation. We then spent a significant portion of time walking through mnemonics. We discussed how they can be straightforward and automatic (such as TAILS for microcytic anemia), or a tool for developing a broader differential when we are less familiar or our initial attempt has failed (i.e. the systems/etiology-oriented VITAMIN CDE). We also discussed approaches such as the physiologic, which can become more schematized as experience grows (such as the approach to an AKI), and the anatomic (the approach to abdominal pain). The latter was used more as a demonstration of the subconscious process by which we come up with the questions for a focused history. Hybrid matrix approaches, such as the one published in Sacher and Detsky's paper, were also discussed.

I enjoyed the exercise of talking about how we think with my colleagues, but equally enjoyable were their questions afterwards. They echoed so closely my own questions as to whether one approach was better than another, whether there was danger in automatic pattern recognition or heuristic thinking (like using TAILS), and whether I had discovered in my research evidence of how learners move from System 2 type thinking to System 1 thinking with greater expertise. I was particularly gratified by one individual's comment that he enjoyed the exercise of becoming more aware of what he was doing and reflecting on it, rather than 'just doing it'.

These comments prompted me to read Diagnostic Reasoning: Where We've Been, Where We're Going by S.M. Monteiro and G. Norman (aside: it seems like I'm reading ALL of Dr Norman's papers; he's very prolific). In their article, they contrast the perspective made popular by Kahneman's best-selling book Thinking, Fast and Slow, from which System 1 and System 2 thinking come, with a more psychologically derived theory that medical diagnosis is largely an exercise in categorization and memory retrieval.

In Thinking, Fast and Slow, Kahneman describes a 'default-interventionist' model in which System 1 (Fast) thinking, which relies on quick, heuristic processing, is the default mode, while System 2 (Slow) thinking is a more logical, deductive process requiring an increased cognitive load (see previous post). System 2 is, according to his book, preferable, as in theory it is less prone to the cognitive biases traditionally associated with heuristic thinking, but it comes less naturally to us; he encourages readers to develop strategies to 'slow down' their thinking, which in theory would result in more accurate reasoning. As Monteiro and Norman point out, however, this rests on flawed assumptions: that our brain is even capable of 'choosing' to slow down its thinking (most of us do it without thinking; that IS the point, after all), and that we can only use one system at a time. Several studies have also shown that the assumption that slower thinking will result in more accurate solutions is false.

Further, the cognitive bias to which System 1 thinking is inherently susceptible may not actually be a bad thing. As Monteiro and Norman discuss, directed history-taking to confirm a working diagnosis may look like 'confirmation bias' when the initial hypothesis was correct, but it is actually probably the most efficient way to come to a diagnosis; certainly, if the answer did not match the working diagnosis, I think most physicians would take a step back to re-evaluate their thinking. I think of this 'backward reasoning' approach as one of the most rapid examples of the Scientific Method I've ever seen. They additionally point out that System 2 thinking may be just as susceptible to these 'cognitive biases' as System 1 thinking; in this manner, they cast doubt on the idea that System 2 thinking is superior to System 1.

Their contrasting perspective discusses a 'parallel-competitive' model, which describes dual processes operating simultaneously in the context of memory models of categorization and recognition. Categorization relies on prior knowledge of learned cases or lived experiences, to which we are constantly comparing the current clinical presentation to find which fits best. This operates simultaneously with Recognition itself, which allows us to see patterns and thereby compare them to our encoded memories. This theory is a perfect example of why Mixed Learning (as discussed in my previous post) can be so effective; by encoding a variety of presentations simultaneously, we learn to differentiate between them better than by memorizing each independently.

Relating this all back to my presentation today, it's interesting to see that I was encouraging learners to engage in more System 2 type thinking both as an exercise in becoming aware of what they were doing in an automatic System 1 process, and as a way of approaching a problem that is more unfamiliar, or where a previous attempt has failed. I honestly think that a heuristic pattern-recognition approach to diagnosis and a more thorough, broad-based approach are both useful, and that we should actively engage our brains in both ways so as to be able to use each tool as appropriate. I don't think we would be very efficient as doctors if we relied exclusively on System 2 thinking, especially in emergent situations. However, I also find significant value in thinking about a more parallel-processing model. I personally have always thought of my memory as a library: there are many bookcases, on each is a set of shelves, on each shelf is a set of books, and in each book there is information. Recognition serves me well as a way of knowing that I have the book in my library, but categorization allows me to find it when I need it. By continuing to see patients, I strengthen my book-retrieval system and become more adept not just at diagnosis, but at remembering the proper management of a condition.

Certainly, going forward, it's been very valuable to think about how learners at different stages of training think about things differently. I'm more mindful now of when different approaches can be used and for whom they are most useful. I've always found it easiest to teach when I'm aware of what I'm doing subconsciously, rather than just doing it; as we're all going to be teachers someday, it behooves us to think about how we think more often.

~LG

Tuesday, March 24, 2015

Improving Teaching of Undergraduate Radiology

In one of my previous posts, I mentioned the gap that exists between the high potential of radiology teaching at the medical school level and the existing curriculum. Multiple national surveys suggest that students prefer small-group or one-on-one styles of learning, the so-called Socratic method, over lecture-based didactic teaching; however, most of the medical imaging education delivered prior to residency comes from the classroom or from non-radiologists during clinical rotations. I also identified some barriers to curriculum development from radiologists and other departments. Unfortunately, many of these barriers, such as lack of time, funding and perceived value, will require major overhauls of programs and beliefs. At the same time, more research on pilot programs using the new styles of teaching needs to be done to demonstrate their value. All of this will require time. Given that the traditional form of teaching, i.e. in a classroom, isn’t going to change any time soon, it may be just as valuable in the meantime to look for ways to improve on it rather than to start a revolution.

One thing is clear: the current method of teaching radiology in medical school is not effective. At the University of Toronto, radiology is introduced during first and second year through 4 or 5 afternoons of lectures. Attendance is not taken and the subject is not tested. Tying in my first blog post, there was minimal attention from the class and even less motivation to learn. Simply put, there was little to no engagement in learning. But who can blame them? Reflecting after having done 3 months of radiology electives, I’ve realized that radiology is an esoteric topic compared with the rest of medicine. Much of it is unintuitive, and even the nomenclature used in the field requires several years of training to master. Unsurprisingly, there was very little intrinsic motivation to learn. Combined with the lack of extrinsic motivation, this creates a poor learning environment indeed. Ironically, we couldn’t even objectively measure what we learned since there was no exam, although anecdotally I can recall several instances where the deficiencies were apparent.

So what changes could be made? An obvious and relatively simple answer would be to start administering exams. I recall that histology, which shares many characteristics with radiology, such as a high degree of specialization, was equally mundane at the time. However, because there was a (relatively difficult) exam at the end of the week, we all fervently studied for it, and there are still many histology concepts I remember. Whether my knowledge retention is due to the studying or to other factors is unclear. The other issue with providing education of any kind is that, ideally, students learn from internal motivation rather than because they fear failing an exam. Although not definitive, many studies suggest that intrinsic motivation results in better long-term learning and that extrinsic motivation may actually undermine intrinsic motivation. Therefore, strategies to improve teaching should also aim to increase genuine interest in the subject. Since the material generally stays the same, increasing interest really boils down to the delivery.

In exploring ways to improve delivery, I was reminded of the idea of using technology, specifically Student Response Systems (SRS), in presentations. Essentially, they use devices such as clickers that take real-time input from the class and either summarize the results in graph form or display them directly on a projector. From what I can recall, in all lectures where I was given a clicker, I felt more engaged and retained the information better. But what does the literature say? Caldwell in 2007 showed that the use of clickers in a mathematics course (controlled for instructor, semester, curriculum and the approximate number of students) increased the frequency of A’s, decreased the frequency of D’s and F’s, and reduced the rate of withdrawal. Heaslip in 2012 similarly found that use of an SRS increased the class average in most of his modules by 8%. While it certainly isn’t an exact science, SRSs do seem to be an effective way to engage students in lecture material.
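The real-time tallying an SRS performs is simple to picture. As a purely illustrative sketch (my own, not reflecting how any actual clicker software is built), counting a stream of responses and rendering the kind of bar chart a lecturer might project could look like this:

```python
from collections import Counter

def tally_responses(responses):
    """Count each answer choice from a stream of student responses."""
    return Counter(responses)

def display_bar_chart(counts, width=20):
    """Render a simple text bar chart, as an SRS might project to the class."""
    total = sum(counts.values()) or 1
    lines = []
    for choice in sorted(counts):
        n = counts[choice]
        bar = "#" * round(width * n / total)
        lines.append(f"{choice}: {bar} {n} ({100 * n / total:.0f}%)")
    return "\n".join(lines)

# Example: ten students answer a multiple-choice question in real time
responses = ["A", "C", "A", "B", "A", "C", "A", "D", "A", "C"]
print(display_bar_chart(tally_responses(responses)))
```

The instant, anonymous aggregation is the whole point: the lecturer sees at a glance that half the class chose A and can open a discussion immediately, rather than collecting paper answers for review after class.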


For my presentation, I would definitely like to try using an SRS to increase participation and interest. In particular, I feel that teaching radiology, a more unfamiliar subject, would greatly benefit from its usage. Fundamentally, I feel SRSs introduce some motivation into the classroom, whether through novelty, competition or fun. I would like to give it a shot, and I do have a specific one in mind…

-DW

Learning from Cognitive Load Theory

So I will admit that I have been much behind in my writing about my learning here, but in my defense, it's because I have become absolutely lost in the fascinating literature of medical education. I tend to be the kind of person who reads as much as I possibly can and must fully understand it as a whole before I am able to synthesize and summarize for other people. Harking back to my previous post, I guess that makes me an Analyst more than a Wholist... that 'Deep before Broad' concept...

This is a bit counter-intuitive given that my most recent learning topic has been Cognitive Load Theory and the teaching styles that have been refined to allow medical students to learn most effectively. Cognitive Load Theory (summarized very effectively by van Merrienboer & Sweller) was developed to reflect the finding that human brains are only capable of maintaining a small, finite number of elements in their working memory (roughly seven, plus or minus two). Clearly we would not have been able to evolve to our current level of sophistication if humans could not become more efficient: thus, learning. By encoding these elements together into a 'schema' in our long-term memory, we are able to reduce what were seven or so separate elements into a single element which can be recalled at will. Interestingly, no limit has been identified for holding information recalled from long-term memory in working memory. This essentially means that our brains are limitless, provided we learn each new concept in discrete packages and incorporate it into our long-term memory and appropriate schemas.
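To make the chunking idea concrete, here is a toy sketch; the capacity of seven and the example 'schemas' are my own illustrative inventions, not anything from the paper:

```python
# Toy model of working memory: a fixed capacity of about seven elements.
# Chunking (schema formation) lets the same raw input fit as fewer elements.

WORKING_MEMORY_CAPACITY = 7  # "seven, plus or minus two"

def fits_in_working_memory(elements):
    """Return True if the number of discrete elements is within capacity."""
    return len(elements) <= WORKING_MEMORY_CAPACITY

# Twelve raw letters exceed capacity...
raw = list("CBCFBIRCMPHD")
print(fits_in_working_memory(raw))  # False

# ...but the same letters chunked into four familiar acronyms fit easily.
chunked = ["CBC", "FBI", "RCMP", "HD"]
print(fits_in_working_memory(chunked))  # True
```

Twelve unrelated letters blow past the limit, but the same letters parsed into four familiar acronyms fit comfortably; that re-encoding is essentially what a schema buys you.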

Applying Cognitive Load Theory to lecture and teaching design, we can easily see that it would be fruitless to deliver an hour-long lecture that simply spouts facts at a rate that greatly exceeds your brain's working memory. It's interesting, because this is actually how I felt at the very beginning of medical school. When we started learning anatomy, I felt like I didn't even have the language to understand the words (superior or posterior? myocardium or pericardium? artery or vein? It was all the same to me). It took hearing the words over and over, reviewing my notes at the end of the day, and seeing it in person in the anatomy lab for my brain to develop a schema. It took a week, but eventually I could get through a lecture and actually understand what was being said rather than frantically copying out notes.

Now that I've read more about it, I wonder if there isn't a more efficient or effective way to teach anatomy. With regards to working memory, there are several ways the load can be affected: by the actual information/learning elements being presented (intrinsic load), by the manner in which it is taught (extrinsic load), and by how much learning actually happens - i.e. how hard your brain is working to process the new information and build schemas (germane load). Using the anatomy lecture analogy, the intrinsic load (terminology and content) was huge; and, given that it was new, the germane load was almost too much for me to consciously absorb or understand anything until some of the information had been encoded into a schema (specifically, the vocabulary). Neither of these factors is particularly adjustable, but the extrinsic load (the way it was presented) could in theory have been adjusted to reduce the total stress on working memory.

There are many strategies for reducing extrinsic load. Some examples are:

  1. Goal Free Principle: Replaces conventional goal-directed tasks with goal-free tasks. For example, rather than choosing the 'best diagnosis' (which requires reasoning and judgement), students are asked to provide a list of as many diagnoses as possible.
  2. Worked Example Principle: This relies on the fact that it is easier to study and learn from a fully solved problem in front of you before trying to work through a problem on your own.
  3. Completion Principle: Breaks down a complex task into manageable parts (i.e. somebody gets you started, and then you complete the task).
  4. Modality Principle: Splitting information across modalities rather than presenting it through a single one; for example, providing verbal instructions alongside a visual diagram rather than written instructions alone.
More than reducing extrinsic load though, I've learned that it's about optimization of working memory. For example, if the concepts are simple and intrinsic load is low, your working memory is likely capable of handling a high extrinsic load, as you have more room to work with. However, that would be highly inefficient. Ideally, we would want the intrinsic load to be as high as we can manage, with extrinsic load minimized. Ultimately, material must eventually be presented in its full complexity to be fully understood. From this arises the principle of teaching from the ground up: simple concepts first, with complexity gradually increasing. A great example of this is allowing students to practice first with paper-based cases, then proceeding to high-fidelity simulation environments prior to working with a real patient.

There are also strategies for optimizing germane load, or how we process the information. Several studies suggest our brains learn more effectively when they are challenged or forced to work. In that sense, intrinsic load is intentionally made more complex to increase the germane load and optimize the amount of working memory being used in a given learning situation. There are some specific examples of how teaching strategies can embrace this concept:
  1. Variability of Task Situations: This is the idea that multiple examples (varied demographics, comorbidities, etc.) of a particular clinical scenario, like solving an acid-base problem, allow us to identify the 'key points' more easily by seeing them across the various presentations. van Merrienboer describes this perfectly: "[it] encourages learners to construct cognitive schemas because it increases the probability that similar features can be identified and that relevant features can be distinguished from irrelevant ones."
  2. Contextual Interference: Also known as mixed teaching vs block teaching. This principle is based on the idea that it is easier to identify the pathognomonic findings of a particular diagnosis when they are contrasted and interspersed with other diagnoses. As Hatala et al. demonstrated in their ECG teaching study, seeing a left bundle branch block and a right bundle branch block one after another lets us see why each is what it is, rather than just memorizing what a left bundle branch block looks like by seeing five in a row.
Another concept is that of distributed learning which, while not directly mentioned in the context of Cognitive Load, resembles a larger-scale application of contextual interference. As Raman et al. demonstrated in their paper, long-term retention of nutritional information was better after four one-hour sessions distributed over four weeks than after a single four-hour block. From my own personal experience, I think this also has to do with a student's inability to concentrate for that length of time without taking mental breaks. However, it is true that when the topic is constantly switching, it does make it easier to pay attention (similar to the mixed teaching mentioned above).

Upon further reflection, my experience of excessive cognitive load while learning anatomy was likely inevitable. Perhaps the stress could have been lessened by mixing anatomy lessons in with other topics, but, as I recall, this was also done - we had histology lectures, embryology lectures, and radiology as well as anatomy lab time mixed in. What is often forgotten is that learners are not all coming into a given learning situation with the same baseline amount of knowledge. What felt like a steep learning curve for me, having no anatomical vocabulary, would have been straightforward for a student coming from a background in physiology or human kinetics. While it would have been nice to have had a longer adjustment time for myself, this would have been too slow for the other learners in the class, and would perhaps have had the opposite effect on their learning.

This is the final point made by van Merrienboer and Sweller in their paper: that of Expertise Reversal. As they explain, "This effect is an interaction between several basic cognitive load effects and level of expertise. The effect is demonstrated when instructional methods that work well for novice learners have no effects or even adverse effects when learners acquire more knowledge." This concept outlines how an attempt to minimize extrinsic load (i.e. by using worked examples) may become redundant as a student's schemas evolve and their capacity for intrinsic load increases.

My overall conclusions are that:
  • University of Toronto designed a pretty darn good medical school curriculum, though there is always room for improvement.
  • Cognitive Load Theory can teach us a lot about effective teaching strategies and should be taken into account when designing not only curricula but individual teaching sessions
  • Going forward, I am certainly going to be incorporating some of these strategies into teaching medical students. My first hands-on try at this will be on Thursday; let's see how it goes!
~LG

Saturday, March 21, 2015

Evidence-Based Medicine and Radiology

As I was exploring educational topics relating to radiology, I came across an interesting concept that I hadn’t been exposed to before. It was the idea of applying evidence-based practice to radiology. My initial reaction was: oh it’s just the application of research to everyday practice. And essentially that’s what it entails. But what does that mean specifically and for radiologists?

In order to understand evidence-based radiology (EBR), one has to first understand evidence-based medicine (EBM). Essentially, people have been practicing EBM for generations, but it wasn't until the 1990s that Dr. Gordon Guyatt and his group at McMaster University formally recognized and defined the approach. EBM is the systematic method of finding, critically appraising and applying up-to-date research to make clinical and policy decisions. At the end of the day, it is another tool we can use to guide our daily practice and potentially health care policies.

There are two important differences between EBM and conventional practice. The first is the rigour of EBM. Good controlled trials nowadays use explicit criteria and standardized methods in obtaining and assessing the strength of their results. For years the scientific community has adhered to these values so that methods and results are transparent and reproducible. The second is that EBM enables practitioners to develop their own strategies when faced with the "grey area of medicine". It gives them the knowledge and tools to assess the available literature and make choices based on solid reasoning.

Broadly speaking, EBM is important any time one experiences a personal “knowledge gap”. While traditionally in these situations one would turn to their mentors and just imitate their approach, with EBM one can now critically assess these decisions with the existing literature. Resources such as StatDx and UpToDate are perfect for these occasions, providing great summaries with citations to the original studies. I guess I didn’t realize during my electives that when residents used these resources, they were essentially practicing EBM and EBR. I think growing up in this day and age with the wealth of high-quality information available within a few mouse clicks, we often take the process for granted. When I hear colleagues say that they aren’t interested in research, it feels almost disrespectful since our lives are made that much easier due to the research of those before us. I think the least we can do is to make some sort of scientific contribution for those that come after us.

So how does this apply to radiology? One can think of EBR as covering three main domains: diagnostics, screening and intervention. With regards to diagnostics, classical findings and signs pointing to certain pathologies were previously taught by mentors. The concern here is that not all diseases present with classical findings. With the practice of EBR, there is now data taking these considerations into account. We can now characterize signs found on imaging with a certain sensitivity and specificity, and positive and negative predictive values, all of which influence our post-test probability. Like the rest of medicine, radiology is not 100% certain, and EBR reflects that fact while still providing useful information. Perhaps even more important to radiology is the use of EBR in screening. Historical studies have often been riddled with bias, particularly lead-time bias, which resulted in false representations of a test's effectiveness. In addition, with the increase in the availability of CT scans, radiation dose is now a realistic concern. Since screening applies to entire populations, EBR must inform the decision to screen asymptomatic patients, taking into consideration the pre-test probability, the accuracy of the exam, the absolute risk reduction and the potential harms. Finally, the relatively new field of interventional radiology (IR) necessitates the application of EBR. In any field that introduces new procedures and treatments, it is imperative that they are rigorously tested for effectiveness, especially when they incur high costs, as IR does.
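To show how sensitivity and specificity actually move a post-test probability, here is a minimal sketch of the likelihood-ratio arithmetic. The numbers (a hypothetical imaging sign with 90% sensitivity and 80% specificity, and a 25% pre-test probability) are invented purely for illustration:

```python
# Bayesian updating of a pre-test probability using likelihood ratios.

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Return the post-test probability after a positive or negative finding."""
    odds = pretest / (1 - pretest)            # probability -> odds
    if positive:
        lr = sensitivity / (1 - specificity)  # LR+ for a positive finding
    else:
        lr = (1 - sensitivity) / specificity  # LR- for a negative finding
    post_odds = odds * lr                     # update the odds
    return post_odds / (1 + post_odds)        # odds -> probability

p = post_test_probability(0.25, 0.90, 0.80, positive=True)
print(round(p, 2))  # 0.6
```

With these made-up numbers, a positive finding moves the probability of disease from 25% to 60%; running the same machinery with positive=False shows how a negative finding lowers it instead.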


As a student who worked in several research laboratories in the past, I was exposed to the scientific method at a young age and understood its value. However it wasn’t until medical school that I learned of clinical trials and EBM. I always thought medicine was more black and white. As I progressed through my career, I gained a better understanding of the intersection between science and medicine and the importance of continuing research as a clinician. Thus, I hope that in the future, I will get the opportunity to apply my past skills in research as an academic radiologist and contribute to the pool of existing knowledge. 

-DW

Thursday, March 19, 2015

An Untapped Resource

Next year, I will be beginning Radiology Residency at Western University. My decision to pursue radiology came rather late in medical school. Although we were introduced to the specialty in first year, our experience with the everyday functions of radiologists was minimal. It wasn't until I decided to do a few electives that I truly understood radiology and made my decision. I enjoy radiology for three main reasons: the knowledge, the practicality and the academic nature, including research and teaching. While my electives confirmed my first two reasons, they fell rather short in supporting my third.

Radiology is a field of immense knowledge. To be a radiologist means to be a master of anatomy from head to toe and to know the pathology of virtually every organ, at all ages, along with the associated clinical presentations. Often, one even has to know the treatment options in order to guide the clinician or surgeon. The point is, with so much knowledge, one would think the specialty would be saturated with teaching, subsequently generating great interest among medical students. Sadly, as I learned, this is not the case.

Throughout my electives, I realized that given the heavy workloads in medical imaging, most radiologists hardly have time to teach residents, let alone medical students. Near the start, I often found myself being passed from staff to staff since they did not want the burden of teaching. When I did sit with a staff, often hours would pass by before a word was spoken to me. As I soon discovered, if I wanted to gain value out of my electives, I would need to be proactive, especially since my future depended on it. Therefore, I would use the first few days of my elective to seek out the keener teachers and then try to spend as much remaining time with them as possible. Although this system worked for me, I could see many issues with it. First, I potentially missed out on a lot of teaching opportunities with other “undiscovered” staff by working with a select few. Second, and more importantly, I only actively sought out my teaching because I wanted to pursue radiology. Many of my colleagues who were interested in other fields but chose a radiology elective did not make the effort and subsequently gained very little from the rotation. On the flip side, some students select radiology blocks precisely because they know there is very little responsibility and that preceptors don’t care if they show up or not. Given the importance of imaging currently, which will only grow in the future, there is a glaring deficiency with respect to radiology medical education at the medical school level.

Of course, as a future student of radiology, I cannot simply blame everything on radiologists. There are legitimate barriers to teaching from their perspective. As I experienced, radiologists often have work lists with studies in the hundreds. The faster they work, the quicker patients get their results and the more efficiently the health care system runs. At the same time, there is only a finite amount of time during medical school, and sometimes there is resistance from other departments in implementing radiology education. Despite these barriers, it is imperative that all schools engage in effective education surrounding medical imaging. Radiology departments often argue that it is unnecessary to generate interest because there is an abundance of applicants each year anyway. However, evidence, both scientific and anecdotal, shows that students are still choosing radiology as a specialty because of income and lifestyle considerations rather than genuine interest. Casting moral judgement aside, radiology residencies undoubtedly want the best and most devoted students. Arguably even more important, teaching radiology to non-radiology-bound students can have a profound effect on the health care system. Radiologists often complain when imaging requisitions are filled out poorly or not enough clinical information is given. Considering that many community physicians have very little imaging knowledge, this shouldn't be surprising. Clarifying requisitions and revising incorrect studies represent a huge inefficiency and often introduce unnecessary radiation.


The points I mention above, and many others, represent my motivation in learning about medical education. I think there is great academic potential in radiology and it is a shame that departments are resistant to exploring it. My dream is to develop a radiology curriculum that exposes the field to first-year medical students in a practical and realistic way through effective didactic and Socratic teaching. My goal is to introduce radiology both conceptually and practically, to attract genuinely interested students and to instill important radiological concepts in all students. I guess you could say enrolling in this selective was my first step.

-DW

Counter-Intuitive Learning Styles

I've always thought of myself as a visual learner. Mostly, because I feel like I don't understand and process information as well when I'm listening to it as when I have something to look at at the same time. For example, I've never been able to listen to audiobooks without getting completely distracted by what's going on around me, nor do I do well in lectures or presentations where there is no visual component. The more I think about it though, Learning Styles are not as straightforward as Visual vs Auditory... after all, I also remember things much better when I write them down, connecting it to my Kinesthetic traits as well as my Visual side, and I'm perfectly capable of remembering things when I'm taught at the bedside despite there being no written or pictorial component. As my colleague recently alluded to, this is much more 'Contextual' learning - I associate my memories with the patient's story, their environment, even the events that happen around the time of the teaching experience.

As I've been spending a fair amount of time immersing myself in the Medical Education literature, my biggest motivation has been trying to understand how you would even come up with a Medical School Curriculum. Why would you choose small group settings over didactic teaching? Where did PBL even come from? What is the evidence for any of these and why do they work? I soon realized that much of what has been done in the past has very little basis in evidence and has been thrown together more out of 'Common Sense' than any sort of previous experience.

A great paper by Geoff Norman entitled Fifty years of medical education research: Waves of Migration gave me a very succinct overview of the history of medical education. He has conveniently separated it into the successes and failures of the implementation of both Common Sense and Evidence Based education strategies in the spheres of Learning, Clinical Problem-Solving/Reasoning, and Assessment. Today I want to spend a bit more time in the Learning area, but stay tuned for posts about the other areas later in the week.

The biggest Common Sense intervention in the area of Learning was that teaching techniques should be directed at addressing the different Learning Styles that students have, 'matching' the teaching to the style so as to optimize the learning for every individual. This makes sense on the surface; in fact, having grown up thinking of myself as a Visual Learner and of my active brother as a Kinesthetic learner who didn't do as well in a classroom as I did, I often wondered why teaching didn't take advantage of students' differences. To my shock yesterday, I learned that there has been almost zero evidence that targeting the teaching strategy to a self-identified Learning Style makes any difference whatsoever.

As I've looked into the matter further, I've noted several papers that have attempted RCTs to test this theory. While most of these studies involved web-based learning exercises, the findings could certainly be extrapolated to other environments. Notably, two studies by Cook et al. in 2005 and 2007 could identify no difference between learners 'matched' and 'mismatched' to their particular Learning Styles (which were determined at that moment in time by a series of standardized questionnaires). In a similar study, Massa and Mayer were likewise fruitless in their search for a significant difference.

About the only Learning Style with a moderate amount of evidence as being worth targeting is the 'Wholist-Analyst' spectrum identified by Cook in his 2005 paper. In his review, he identified four 'axes' of learning styles, one of which is this wholist-analyst approach. In this case, a 'Wholist' will benefit more from a generalized overview of a topic to provide context, as well as social interaction, before diving into more detail ("Broad before Deep"). Conversely, 'Analysts' prefer to have the details provided from the beginning, finding the connections as they go, and then to have a summary at the end which provides the context ("Deep before Broad"). In his analysis, Cook noted that tailoring teaching in this way did produce a significant difference in aptitude post-intervention.

When I think about my own experience in the medical school curriculum, I feel like most lecturers do provide an approach that is conducive to both wholists and analysts; that is, "Broad before Deep before Broad". Most lectures give an overview or a case example before diving into a topic, then summarize at the end. As well, having material taught in several different ways in a given week (i.e. didactically, then reviewed in seminars, then worked through in a PBL case) allowed for the social interaction wholists crave, but the self-directed depth that analysts prefer.

And so, back to learning styles. A discussion with Dr Cavalcanti last week also reminded me that, despite having comfort in a particular learning style (visual in my case), to have succeeded in school up to this point, most individuals will have had to learn in many different contexts and in many different styles. We have each come up with our own techniques to 'translate' the way something is being taught into a form more in line with the way we understand. For example, I read lips at an oral presentation - it helps me better understand what somebody is saying. I also take handwritten notes, as it keeps my mind from wandering when there are fewer visual cues around. Perhaps this, then, is the reason the Common Sense approach has failed - in order to succeed we have to be adaptable, and so both matched and mismatched interventions will be equally understandable.

~LG

Monday, March 16, 2015

The Role for Contextualization

“What’s Saskatchewan famous for?”
“Potash?”
“Yes. Infamous. But what else.”
“Grain.”
“OK. But Ontario isn’t. So… Ontario is a-grain…”
“Oh! Agranulocytosis”.

And that is how we figured out why Dr. HPK asked about throat pain and infection in a patient on hyperthyroidism medication.

By now, we’ve all become somewhat accustomed to Dr. HPK’s method of teaching and can usually figure out if he’s trying to give us a hint. At the same time, I’m still amazed every time he links a real world figure or location or concept with medical knowledge. I feel like I’ve learned more about medicine and the world, from bullous pemphigoid to the Galapagos Islands, in the last week than in a month of electives. But what makes Dr. HPK’s teaching so effective?

I think there are two reasons. One, he keeps us engaged and motivated. With Dr. HPK, you never know what he'll ask next. Perhaps he'll ask who invented the printing press. Perhaps it'll be the year of the First World War. Or maybe he'll just straight up ask for the definition of fever of unknown origin. It's never predictable, but he always has a point, and it keeps you interested and wanting to find out where he's going with his questioning. At the same time, he forces you to think laterally and not with the narrow mindset often reinforced through didactic learning. Thus, he encourages us to learn by helping us maintain attention, a topic that is complicated and under active research, although its importance in learning is undisputed, both scientifically and intuitively.

The second reason I think his methods are so effective is that they create a solid context with which to associate the medical information. Context-based learning is a pedagogical methodology that essentially centers on the belief that setting is pivotal to education, not just the cues within the text, but everything concrete around you as well, including your physical state, the room in which you study and the social interactions you engage in while learning. One classic example is a 1969 experiment by Goodwin and his team, who showed that people who learned memory tasks while intoxicated recalled the information better if they were intoxicated again than if they were sober. While this is better described as state-dependent learning, Dr. HPK's teaching provides multiple social and intellectual cues with which to associate our medical knowledge and to "retrace our steps" in retrieving that information if we forget it.


My personal preference for learning aligns very well with Dr. HPK's method. While I prefer Socratic to didactic teaching, I absolutely love getting "pimped". It forces me to think, and in committing to a hypothetical answer I create a context that either reinforces the information or reveals what I have forgotten. Also, I learn best when there is some element of stress; I think that is also related to the social environment. Interestingly, most studies show that increased anxiety is associated with worse memory retrieval. Of course, Dr. HPK also minimizes the stress with his personality, but one inevitably feels some pressure when put on the spot. I think I am just lucky that it motivates rather than discourages me.

-DW

On Being a Physicist in Medicine

I have had a roundabout route to becoming a physician; having first done an undergraduate degree in biochemistry, I then switched departments and did my Master's in Physics before deciding to apply to medical school. During my Master's I was exploring a very niche area of science: Experimental Cellular Biophysics. A basic synopsis of my research is that I spent hours in a dark room poking fluorescent cells trying to figure out how they responded to being poked.

The motivation, you see, was that biologists and physicists imagined a cell in very different ways. When trying to explain phenomena, the physicists tended to imagine the cell as a ball filled with liquid and based all their equations on this highly simplified version of a cell to try to explain how it would respond to being poked. The biologists, on the other hand, understood the complexity and interconnectedness of the various cellular components, but were unable to apply any of the physics necessary to describe the mechanical processes that were occurring. The beauty of our lab was that we had students with varied backgrounds; not just biologists and physicists, but also engineers and biochemists. We were in a unique position to tackle this murky area of science.

My work, published in the Journal of Cell Science in 2012 (linked here for those interested), highlights some of the particular challenges of attempting to understand complex systems. It was known in the literature that cells were capable of responding to force, but it was something of a black box phenomenon; nobody really understood all the components involved or how it worked in the short term, even if the long-term results were predictable. We designed several experiments to explore this short-term response of cells to externally applied forces, and were able to demonstrate the dependence of the response on multiple aspects of the cytoskeleton, as well as generating a 'map' of how strain was distributed throughout the cell. I remember looking at the results of my strain experiment and being completely baffled: they were TOTALLY DIFFERENT from what we had anticipated. With my supervisor's help, we discovered that, while individual points had apparently random values for strain, there was a predictable 50% increase in the variance of strain across the whole cell when a force was applied. To translate, that means there was a whole lot more motion going on, and, while unpredictable on a local scale, the large-scale changes were more predictable.

The inevitable next question was then: "Why?" We didn't have a great answer, but like any good scientists, we came up with a hypothesis that was as grounded in my experience with complex biochemical interactions as it was in the simple mechanics of a physical system. I still can't tell you if our hypothesis was right, but these experiments certainly did help us understand many of the components involved in a cell's response to force. Eventually, this work might even be applied in medicine, whether to building organs out of stem cells or to targeting cancer cells by their distinct mechanical properties.

I was reminded of my work today while reading through Geoff Norman's article Chaos, complexity and complicatedness: lessons from rocket science. Having read through several of the articles to which it makes reference (especially It's NOT Rocket Science by Glenn Regehr), I found myself agreeing with Norman on several of his points. The debate, to summarize, is whether we would be better off abandoning attempts at understanding interventions in medical education through reductionist scientific methods (seeking simplicity) and instead embracing a theory grounded in complexity and chaos theory to describe the multi-dimensional systems in which medical education is produced and implemented. Arguments in favour point out that much of the literature is simply incomparable: what works in its own microcosm is often neither applicable nor widely generalizable to other medical schools or systems. Thus, we would do better not even to try to generalize. I, however, find myself siding far more with Norman in his caution that this would be a dangerous undertaking.

I particularly understand the point he is trying to make in cautioning against the use of chaos and complexity theories to describe the state of medical education, as those using the terminology do not seem to truly appreciate the physics or the basic tenets on which these theories are grounded. He also argues that there have been several well-designed 'simplistic' experimental interventions in medical education that we risk disregarding by embracing chaos. He summarizes: "the issue at hand is whether this lack of predictability at the individual level represents an ultimate failure of classical scientific methods, or simply a psychosocial 'uncertainty principle' reflecting an ultimate limit on knowability."

I feel like many parallels can be drawn between work in interdisciplinary science, such as my Master's work, and fields such as medical education. In my lab, it would not have been possible to come up with a generalized theory to explain our particular results without the knowledge base of both physics and biochemistry. What's more, we were operating on a complex system in which we could not see all the moving parts, but we COULD see the overall result as a whole, both long term and short term. In medical education, it seems to me there is much the same divide. Much of the understanding of complex educational principles lies with the cognitive scientists, psychologists, and social science experts; however, it is the physicians, with backgrounds in biology, physics, and the reductionist scientific method, who are attempting to implement the best education curriculum possible, and they find themselves at an impasse. The physicists who attempted to model a cell as a simple ball of fluid were wrong to ignore the complexity of the cell, but they were not wrong to continue using linear-model physics to try to understand the system. Similarly, I think that in medical education we should absolutely continue to apply an approach grounded in linear-model reductionist principles while acknowledging the complexity of the system.

Perhaps it isn't so much about asking "Does this work?" as about asking "Why?" and "How?" And even if we can't demonstrate an exact, predictable response at the individual level, that doesn't mean an understanding could not be found on a larger scale, one that could then be generalized and applied to other situations. The answer to what works in medical education lies as much in the details and the failures as in the successes. We should be asking our colleagues who are experts in education science for their input, and collaborating by using qualitative methods in conjunction with quantitative ones to develop the best possible medical education interventions. I'm a firm believer in collaborating with people of varied backgrounds and building an interdisciplinary team to find creative solutions to the problem; it's time for medical education to "think outside the box".

~LG

Thursday, March 12, 2015

A Reason to Teach

Ever since I was a kid, I’ve been interested in teaching. I’ve done countless mentorship programs, ranging from after-school tutoring to disability workshops to teaching chess on my own time, with kids from ages 5 to 18. I enjoy teaching not only because I like sharing knowledge and skills with others but also because it gives me a chance to connect with my students. I’ve found throughout my experiences that the most effective way to get students to respect you is to show them that you are not superior to them as a person and can connect with them at their level. As I am quite a silly person by nature, this came rather naturally to me.

The advantage of connecting with the student is twofold. First, they are more likely to listen to you if they respect you. Second, and perhaps even more rewarding as a teacher, it gives students a motivation to learn. I’ve always found that, in general, teaching becomes easier as students progress into higher education, but successfully teaching the younger population is more rewarding. Reflecting back, I think this has a lot to do with motivation. While older kids have a better grasp of why education is important, for their future and for their success, these concepts are more abstract for young children to comprehend. As a result, most children have no inherent desire to learn, which makes it all the more satisfying to see their eyes light up when they finally “get it”. By connecting with your students, you give them an additional motivation to learn. I’ve often found that when I have a teacher I like, I will work harder to impress them, since I value their opinion of me more. Similarly, one could argue that teachers with more influence should also generate more motivation to learn, which is definitely true in medical school.

The bottom line is that learning is governed by motivation, and a truly good teacher should instill in the student an inherent motivation to achieve their best. Dr. Cavalcanti cited a study done previously with monkeys that attempted to understand how they learn in a social setting. The experimenters kept several monkeys in a cage and taught one monkey how to access a piece of fruit in a box through a certain mechanism. Then they observed the monkeys to see how they learned. Essentially, they found that eventually all the monkeys gained access to the box. However, it wasn’t that the original monkey taught them the mechanism. Rather, the original monkey showed them that there was a fruit present, and the rest of the group figured it out on their own. While we could hardly defend the primate as a superior mentor, it is interesting to note that it was the one that provided the motivation.

In registering for this selective, I wanted to gain some additional insight into what it means to be a good medical teacher. Medical students encounter a variety of settings in which they learn, and it would be interesting to see what effective techniques are used. Thus far, I’ve been to several rounds and have been taught by two different tutors, Dr. Panisko and Dr. Ho Ping Kong. All of these experiences have been fantastic, and throughout the rest of the selective I will try to tease out the methods they use not only to teach but also to motivate, and hopefully apply them in my field of radiology. I definitely feel there is a lot of untapped potential for teaching in that domain, but that will be discussed in a future post. More to come…

-DW

Wednesday, March 11, 2015

Thinking Outside the Box

I arrived on Monday morning to begin my new Selective with CEEP in Ambulatory Internal Medicine and Medical Education...having really no idea what I was getting myself into. I discovered (to my happy surprise) that I would be spending a large part of my time with Dr HPK in his clinic and learning from his unique teaching style.

My first day was wonderful. It felt like my brain was executing constant mental acrobatics and forming all sorts of new connections. This kind of lateral thinking has always mirrored the way I think, and it felt like coming home. Today, in Medical Grand Rounds, Dr Silver treated us to a set of Rebus Puzzles before beginning his talk. These again reminded me of my childhood adventures in lateral thinking exercises, and so I've chosen to reflect on that today.

I was "diagnosed" with being gifted at a young age. All I knew at that time was that it meant I thought differently than many other children and learned maybe a little bit faster. I also knew that it was the reason I got to skip a grade and why I got bored pretty easily at school. When I started attending the Program for Gifted Learners (PGL) in Grade 4, my scholastic life got decidedly more interesting.

At PGL we started every day with a "Problem of the Day", which was written up on the chalkboard, and we were left to sit and figure out the solution for as long as it took us. These problems could be number-based, logic puzzles, or sometimes as simple as a play on words. Sometimes, when we finished early, we got sheets of the Rebus puzzles (linked above) to let our brains exercise even more. The rest of the day was spent on various projects: from learning to write HTML, build websites, and use Corel Draw, to working on fundraising campaigns to ban landmine use in developing countries (in which I learned to use a button-making machine. How useful.) The day finished with more discussion-based group problem solving: sometimes we were given a Sherlock Holmes-like crime and were allowed to ask yes-or-no questions to get to the answer. Needless to say, I loved my time spent with the other students in this classroom, but I don't think I truly appreciated what it was teaching me until years later.

Fast-forward fifteen years and terms like 'Thinking Outside the Box' and 'Lateral Thinking' are common buzzwords that you hear all the time. Quick googling has taught me that these terms are relatively recent: 'lateral thinking' was first coined by psychologist and inventor Edward de Bono in 1967 and, interestingly, the term 'thinking outside the box' seems to have come a bit later; the first published references are in the 1970s. "The box" referenced in this term actually comes from the famous Nine Dots puzzle, which was published in Sam Loyd's Cyclopedia of 5000 Puzzles, Tricks, and Conundrums (With Answers) in 1914. In this puzzle, one is asked to connect all 9 dots placed in a 3x3 grid using only 4 straight lines, without lifting the pen. The solution can only be achieved by drawing lines beyond the borders of the box, thus 'thinking outside the box'.
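For the curious, one of the classic solutions can even be checked mechanically. The sketch below is illustrative only: the dots are placed on an integer grid of my choosing, and the path shown is just one of the well-known solutions, with two of its turning points lying outside the 3x3 square.

```python
# Dots of the Nine Dots puzzle, placed on an integer grid (0..2, 0..2).
DOTS = [(x, y) for x in range(3) for y in range(3)]

# One classic 4-line solution, drawn without lifting the pen:
# up the left edge and past it, diagonally down, back along the
# bottom row, then diagonally up through the centre.
PATH = [(0, 0), (0, 3), (3, 0), (0, 0), (2, 2)]
SEGMENTS = list(zip(PATH, PATH[1:]))

def on_segment(p, a, b):
    """True if point p lies on the closed segment a-b (integer-exact)."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if cross != 0:
        return False  # p is not collinear with a and b
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

covered = all(any(on_segment(d, a, b) for a, b in SEGMENTS) for d in DOTS)
outside = [p for p in PATH if not (0 <= p[0] <= 2 and 0 <= p[1] <= 2)]
print(covered)   # True: all nine dots are hit by the four lines
print(outside)   # [(0, 3), (3, 0)]: the turns lie outside the box
```

The check makes the moral concrete: the path only works because two of its corners sit beyond the grid, which is exactly where "outside the box" comes from.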

So what does this have to do with Medicine?

Much of what I value about medicine is the diagnostic challenge and puzzle solving that we face with patients on a daily basis. Certainly in Dr HPK's clinic, we see some of the most varied and intricate diagnostic challenges that exist in medicine and are reminded to think beyond the obvious diagnosis. More than that, however, I have been left feeling in the last couple of days that Dr HPK's tendency to use clues and word association to teach and help problem-solve is very much in line with how I process information, and harkens back to the lateral thinking I was taught at a very young age in PGL. I never realized how useful those brain exercises were, or how much it has become second nature for me to think outside the box, using every bit of knowledge I have learned (medical or otherwise), thinking in circles rather than a straight line.

And so, I've realized that this selective is perfect for me. I can't imagine a better way to spend the last month of medical school than reinforcing my knowledge by learning from a legend in the field of medicine and 'exercising the little grey cells', as Hercule Poirot would say. I'm looking forward to chronicling my journey down the rabbit hole of medical education and thinking about how we think, learn, and remember, especially in the context of medical knowledge.


~LG

Sunday, March 8, 2015

Reflections on the CEEP Selective

I signed up for the CEEP selective because I was interested in learning about medical education and taking part in simulation learning. I had previously enjoyed teaching students junior to me while on clinical rotations, and I wanted to explore how I could do this more effectively. On the first day, I was not too sure what I was supposed to do with the readings and research for the medical education part of the selective. I knew we had clinics to attend and papers we should read, but not much else beyond that. Looking back, everything came together quite nicely.

One of the biggest takeaways from this selective for me was an introduction to the world of medical education and medical education research. I was a fresh slate coming in, and I had no idea where to start exploring this discipline. The papers we were given to read at the beginning of the selective revealed a discipline rooted in research and theory, and apparently ripe for debate as well. The open nature of the selective allowed us to explore, on our own, an area of medical education that interested us. I learned about journals such as Academic Medicine and Medical Education, and even just browsing their tables of contents and reading titles and abstracts was illuminating. There was so much research happening in some really interesting areas: simulation, technology, and equity in medical education. In the end, I must say that I really enjoyed the open nature of this selective and how it allowed us to define our own goals and learning objectives.

In terms of my more concrete goals, I am glad to say that I was able to reach them during the three weeks of this selective. We got an introduction to teaching around case presentations through the SNAPPS and One-Minute Preceptor frameworks. As someone who is going to be partially responsible for teaching students next year, I really wanted to find a strategy for approaching this new part of my job, and these frameworks provided it. We were also able to read papers about the frameworks and their effectiveness. While these frameworks were designed for the ambulatory setting, going into my PGY-1 year in Internal Medicine, this is a strategy I am going to adapt for the wards. It was also a great opportunity to put together a presentation about SNAPPS for our fellow selective students and discuss teaching with them. This presentation must have been somewhat infectious, as right afterwards one of our colleagues gave us a presentation on CPAP and BiPAP.

I also got the opportunity to try some advanced simulation exercises. Harvey, the cardiopulmonary simulator, was a regular feature of my selective. The model, in conjunction with instruction from experienced clinicians, made for a fantastic learning experience. We also got to hone our skills on a paracentesis simulator and an ultrasound simulator that was, for lack of a better term, super cool. We were able to outline what we wanted to learn prior to starting, customizing the session and making it more satisfying in my opinion. These were some of the activities I was looking forward to the most. Simulation was the focus of my MedEd readings throughout the selective. Through these readings, I learned that simulation is not only an effective way to teach skills but may also result in improved patient outcomes.

The other large part of the selective was ambulatory clinics with Dr. Ho Ping Kong. These half-day clinics (and one full-day clinic) became something I looked forward to every week. They were like nothing I had ever experienced before. I was always on my toes as Dr. HPK quizzed us, drawing on knowledge from all spheres of life: history, geography, the arts, and yes, even medicine. This wasn’t done in an intimidating fashion; Dr. HPK would keep us at ease with his easygoing manner and great sense of humor. The ambulatory learning experience was just as valuable as inpatient learning. We would see patients with a variety of conditions, and we’d be able to identify teaching points in terms of the physical exam, treatment plans, and topics to read about later (something Dr. HPK would emphasize). The biggest takeaway for me was Dr. HPK’s relationship with his patients. He had a great rapport with all the patients we saw and knew them well. He seemed able to develop that rapport almost instantly with the new patients we saw. I think it came down to his communication: he was clear, friendly, and managed expectations well.


While I wasn’t 100% sure what to expect coming into this selective, I was happy to meet my objectives, and then some, by the end of the three weeks. I got an introduction to the world of medical education research, learned how to teach well, got to use some really cool simulation technology, and had an unforgettable experience in Dr. HPK’s clinic. I can’t wait to return to the Western and/or Dr. HPK’s clinic as a resident in the future!

-SR