Student Response
Systems provide many advantages to a lecturer. They improve participation in
the class; they allow instant feedback from students regarding their knowledge;
they facilitate discussions among students. However, in my last post, I alluded
to the most important question: do they improve learning and, more practically, performance?
At first, most of the literature I found seemed to support that notion. For
example, many universities have employed clickers and found increases in class
averages of up to a full letter grade. I accepted this for the time being,
although I realized there were perhaps too many variables to control for.
Things seemed to make sense in my brain. But then I was introduced to an
interesting concept from another paper.
Anthis (2011)
argued that many of the previous papers reporting improved grades with clicker
use suffered from confounding variables. For example, many courses that
employed clickers also incorporated weekly activities that reviewed questions
and course material. The author hypothesized that perhaps it wasn't the use of
clickers that improved learning but rather the use of clicker questions.
Presenting questions throughout lectures could prompt students to become aware
of their own deficits and engage their "metacognitive skills", resulting in
greater motivation to study. The study went on to conduct two experiments. The
first analyzed the use of clickers in an Infant and Child Development course,
where one group of students responded to random questions throughout lectures
using clickers and another group responded to the same questions by raising
their hands. After controlling for initial GPA, average scores actually
significantly favored the group without clickers. In the second experiment, the
methods were the same except that the course was a Lifespan Development course
and, instead of one group using clickers for the entire semester, the groups
switched halfway through. The result was that scores on each exam were not
significantly different between groups.
The results
of the study were interesting to me for two reasons: first, they differed from
those of all the other studies I had read, and second, the paper proposed an
interesting concept. I had never heard of metacognition. From what I read, it
essentially means cognition about cognition: it allows humans to regulate
their behavior and improve performance by understanding how they go about
learning things. Types of metacognition include person knowledge (understanding
one's own capabilities), task knowledge (understanding the nature of one's
task) and strategic knowledge (understanding what strategies exist and how to
use those strategies to improve learning). Presumably, Anthis argues that
clicker questions improve students' person knowledge, thereby motivating them
to study, and that clickers themselves do not directly improve learning.
Despite seeing
the value in the metacognition concept, I raise two concerns with the paper.
First, although Anthis raises an interesting point regarding clicker questions,
it is important to note that the study used clickers at only their bare minimum value.
While SRSs let students comfortably answer questions anonymously, a great part
of their value is that they also give the lecturer feedback about the state of
knowledge in the class and provide an opportunity to discuss and clarify
concepts. Sure, questions on pieces of paper can also engage students and
prompt them to independently reduce their own knowledge gaps, but SRSs confer
the greater advantage of addressing those deficits there and then.
Interestingly, when clickers were used to their full extent in Mayer (2009),
groups using clickers significantly outperformed groups that used paper
questions or had no questions. In this experiment, both groups that were given
questions held discussions to clarify concepts. The one difference was that
discussion in the paper group took place at the end of class, whereas
discussion in the clicker group occurred immediately after each question. One
could theoretically argue that the more immediate discussion led to better
retention of the knowledge. While this may be true, it also highlights an
important advantage of electronic SRSs over traditional paper-based/hand-raising
methods: clickers allow for more efficient and flexible data collection when
assessing gaps in knowledge. This brings me to my second concern with the
study. Even if clickers merely function as a means of improving learning
through motivation, a purpose that can also be served by traditional means,
technology is such a far more efficient and practical medium for doing so that
it may still hold more value than the author gives it credit for.
Mayer (2009)
represents one of the first studies comparing SRSs with traditional methods of
posing questions and initiating discussions. Of course, it cannot be said to
have eliminated all confounding variables, and its results may not be
completely generalizable to other classroom settings. However, it does
represent the type of research that is needed to justify the introduction of
SRSs into curricula. I hope to see more of this research in the future, and
hopefully a wider use of SRSs in medicine and radiology.
-DW