One of the great intellectual pleasures is to hear an idea that not only seems right, but that strikes you as so obvious (now that you've heard it) that you're in disbelief no one has made the point before. I tasted that pleasure this week, courtesy of a paper by Walter Boot and colleagues (2013). The paper concerned the adequacy of control groups in intervention studies--interventions like (but not limited to) "brain games" meant to improve cognition, and the playing of video games, thought to improve certain aspects of perception and attention.
To appreciate the point made in this paper, consider what a control group is supposed to be and do. It is supposed to be a group of subjects as similar to the experimental group as possible, except for the critical variable under study.
The performance of the control group is to be compared to the performance of the experimental group, which should allow an assessment of the impact of the critical variable on the outcome measure.
Now consider video gaming or brain training. Subjects in an experiment might very well guess the suspected relationship between the critical variable and the outcome; they have an expectation as to what is likely to happen. If they do, then there might be a placebo effect--people perform better on the outcome test simply because they expect that the training will help, just as some people feel less pain when given a placebo that they believe is an analgesic.
The standard way to deal with that problem is to use an "active control." That means that the control group doesn't do nothing--they do something, but it's something that the experimenter does not believe will affect the outcome variable. So in some experiments testing the impact of action video games on attention and perception, the active control plays slow-paced video games like Tetris or Sims.
The purpose of the active control is to make expectations equivalent in the two groups. Boot et al.'s simple and valid point is that it probably doesn't do that. People don't believe playing Sims will improve attention.
The experimenters gathered some data on this point. They had subjects watch a brief video demonstrating what an action video game was like or what the active control game was like. Then they showed them videos of the measures of attention and perception that are often used in these experiments. And they asked subjects "if you played the video game a lot, do you think it would influence how well you would do on those other tasks?"
And sure enough, people think that action video games will help on measures of attention and perception. Importantly, they don't think that they would have an impact on a measure like story recall. And subjects who saw the game Tetris were less likely to think it would help the perception measures, but were more likely to say it would help with mental rotation.
In other words, subjects see the underlying similarities between games and the outcome measures, and they figure that higher similarity between them means a greater likelihood of transfer.
As the authors note, this problem is not limited to the video gaming literature; the need for an active control that deals with subject expectations also applies to the brain training literature.
More broadly, it applies to studies of classroom interventions. Many of these studies don't use active controls at all. The control is business-as-usual.
In that case, I suspect you have double the problem. You not only have the placebo effect affecting students, you also have one set of teachers asked to do something new, and another set teaching as they typically do. It seems at least plausible that the former will be extra reflective on their practice--they would almost have to be--and that alone might lead to improved student performance.
It's hard to say how big these placebo effects might be, but this is something to watch for when you read research in the future.
Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problems with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.
Daphne Bavelier and Richard Davidson have a Comment in Nature today on the potential for video games to "do you good." The authors note that video gaming has been linked to obesity, aggressiveness, and antisocial behavior, but there is a burgeoning literature showing that some cognitive benefits accrue from gaming. Even though the data on these benefits are not 100% consistent (as I noted here), I'm with Bavelier & Davidson in their general orientation: so many people spend so much time gaming, we would be fools not to consider ways that games might be turned to purposes of personal and societal benefit. Could games help to make people smarter, or more empathic, or more cooperative?

The authors suggest three developments are necessary.
- Game designers and neuroscientists must collaborate to determine which game components "foster brain plasticity." (I believe they really mean "changes behavior.")
- Neuroscientists ought to collaborate more closely with game designers. Presumably, the first step will not get off the ground if this doesn't happen.
- There needs to be translational game research, and a path to market. We expect that some research on the positive effects of gaming (and clinical trials) will be done in academic circles. This work must get to market if it is to have an impact, and there is no blazed trail by which that travel can take place.
This is all fine, as far as it goes, but it ignores two glaring problems, both subsets of their first point.

First, we have to bear in mind that Bavelier & Davidson's enthusiasm for the impact of gaming comes from experiments with people who already liked gaming; you compare gamers with non-gamers and find some cognitive edge for the former. Getting people to play games is no easy matter, because designing good games is hard. This idea of harnessing interest in gaming for personal benefit is old stuff in education. Researchers have been at it for twenty years, and one of the key lessons they've learned is that it's hard to build a game that students really like and from which they also learn (as I've noted in reviews here).

Second, Bavelier & Davidson are a bit too quick to assume that measured improvements to basic cognitive processes will transfer to more complex processes. They cite a study in which playing a game improved mental rotation performance. Then they point out that mental rotation is important in fields like navigation and research chemistry. But one of the great puzzles (and frustrations) of attempts to improve working memory has been the lack of transfer; even when working memory is improved by training, you don't see a corresponding improvement in tasks that are highly correlated with working memory (e.g., reasoning).

In sum, I'm with Bavelier & Davidson in that I think this line of research is well worth pursuing. But I'm less sanguine than they are, because I think their point #1--getting the games to work--is going to be a lot tougher than they seem to anticipate.

Bavelier, D., & Davidson, R. J. (2013). Brain training: Games to do you good. Nature, 494, 425-426.
What people learn (or don't) from games is such a vibrant research area that we can expect fairly frequent literature reviews. It's been about a year since the last one, so I guess we're due. The last time I blogged on this topic, Cedar Riener remarked that it's sort of silly to frame the question as "does gaming work?" It depends on the game. The category is so broad it can include a huge variety of experiences for students.
If there were NO games from which kids seemed to learn anything, you probably still ought not to conclude "kids can't learn from games." To do so would be to conclude that the distributions of learning for all possible games and all possible teaching look something like this.
But this pattern of data seems highly unlikely. It seems much more probable that the distributions overlap more, and that whether kids learn more from gaming or traditional teaching is a function of the qualities of each.
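To make the overlap point concrete, here is a minimal simulation; all numbers are hypothetical and chosen only for illustration. Two heavily overlapping outcome distributions are drawn, with games slightly better on average, yet a large share of head-to-head comparisons still favor traditional teaching.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical outcome distributions (test scores), chosen only to
# illustrate overlap: games slightly better on average, same spread.
def simulate(mean, sd, n=1000):
    return [random.gauss(mean, sd) for _ in range(n)]

games = simulate(mean=52, sd=10)
traditional = simulate(mean=50, sd=10)

# Pair each simulated "game" outcome with a "traditional" outcome:
# despite the higher game average, many pairs favor traditional teaching.
better_game_share = sum(g > t for g, t in zip(games, traditional)) / len(games)
```

With these made-up parameters, only a bit more than half of the pairwise comparisons favor the game, which is the sense in which "do games work?" depends on which game and which lesson you happen to compare.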
So what's the point of a meta-analysis that poses the question "do kids learn more from gaming or traditional teaching?"
I think of these reviews not as letting us know whether kids can learn from games, but as an overview of where we are--just how effective are the serious games offered to students?
The latest meta-analysis (Wouters et al., 2013) includes data from 56 studies and examined learning outcomes (77 effect sizes), retention (17 effect sizes), and motivation (31 effect sizes).

The headline result featured in the abstract is "games work!" Games are reported to be superior to conventional instruction in terms of learning (d = 0.29) and retention (d = 0.36) but, somewhat surprisingly, not motivation.
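Cohen's d, the effect-size measure these reviews report, is simply the difference between the group means divided by the pooled standard deviation. A minimal sketch with made-up scores (not the meta-analysis data):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances (n-1)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores: a gaming group vs. a conventional-instruction group
gaming = [78, 82, 75, 88, 80, 77]
conventional = [74, 79, 72, 81, 76, 73]
d = cohens_d(gaming, conventional)
```

By convention, d around 0.2 is considered small and 0.5 medium, so the learning and retention effects reported here are modest.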
The authors examined a large set of moderator variables, and this is where things get interesting. Here are a few of the findings:
- Students learn more when playing games in groups than playing alone.
- Peer-reviewed studies showed larger effects than others. (This analysis is meant to address the bias not to publish null results. . . but the interpretation in this case was clouded by small N's.)
- Age of student had no impact.
But two of the most interesting moderators significantly modify the big conclusions.
First, gaming showed no advantage over conventional instruction when the experiment used random assignment. When non-random assignment was used, gaming showed a robust advantage. So it's possible (or even likely) that games in these studies were more effective only when they interacted with some factor in the gamer that is self-selected (or selected by the experimenter or teacher). And we don't know yet what that factor is.
Second, the researchers say that gaming showed an advantage over "conventional instruction," but follow-up analyses show that gaming showed no advantage over what they called "passive instruction"--that is, teacher talk or reading a textbook. All of the advantage accrued when games were compared to "active instruction," described as "methods that explicitly prompt learners to learning activities (e.g., exercises, hypertext training)." So gaming (in this data set) is not really better than conventional instruction; it's better than one type of instruction (which, in the US, is probably the less often encountered type).
So yeah, I think the question in this review is ill-posed. What we really want to know is how do we structure better games? That requires much more fine-grained experiments on the gaming experience, not blunt variables. This will be painstaking work.
Still, you've got to start somewhere, and this article offers a useful snapshot of where we are.

EDIT 5:00 a.m. EST, 2/11/13: In the original post I failed to make explicit another important conclusion--there may be caveats on when and how the games examined are superior to conventional instruction, but they were almost never worse. That is not an unreasonable bar, and as a group the games tested pass it.
Wouters, P, van Nimwegen, C, van Oostendorp, H., & van der Spek, E. G. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology.
Advance online publication. doi: 10.1037/a0031311
Is technology changing how students learn--that is, the workings of the brain? An article in today's New York Times reports that most teachers think the answer is "yes," and that this development is not positive. The article reports the results of two surveys of teachers, one conducted by the Pew Internet Project and the other by Common Sense Media. Both report that teachers believe that students' use of digital technology adversely affects their attention spans and makes them less likely to stick with challenging tasks.
In interviews, many teachers report feeling that they have to work harder than they used to in order to keep students engaged.
As the article notes, there have not been any long-term studies showing whether student attention span has been affected by digital media. Still, many psychologists are skeptical that digital media will fundamentally change human cognition. Steven Pinker has written, "Electronic media aren't going to revamp the brain's mechanisms of information processing." I made the same argument here. The basic architecture is likely to be relatively fixed, and in the absence of extreme deprivation, will develop fairly predictably. Sure, it is shaped by experience, but those changes just tune what's already there--they might change the dimensions of the rooms without altering the fundamental floor plan, so to speak.

Does that view conflict with teachers' impressions? Not necessarily.

When we talk about a student's attention span, I suspect we're really talking about a particular type of attention. It's not their overall ability to pay attention: kids today can, I think, get lost for hours in a movie or a book or a game just as readily as their parents did. Rather, the seemingly shorter attention span is their ability to maintain attention on a task that is not very interesting to them.
But even within that situation, I suspect that there are two factors at work: one is the raw capacity to direct one's attention. The second is the willingness
to do so. I doubt that technology affects the first, but I'm ready to believe that it affects the second. Directing attention--forcing yourself to think about something you'd rather not think about--is effortful, even mildly aversive. Why would you do it? There are lots of possible reasons. Among them would be previous experiences leading you to believe that such sustained attention leads to a payoff. In other words, if you've grown up in circumstances where very little effort usually led to something stimulating and interesting, then you likely have an expectation that that's the nature of the world: I do just a little something, and I get a big payoff. (And the payoff is probably immediate.)

The process by which children learn to expect a lot of cool stuff to happen based on minimal effort may start early. When a toddler is given a toy that puts on a dazzling display of light and sound when a button is pushed, we might be teaching him this lesson. In contrast, the toddler who gets a set of blocks has to put a heck of a lot more effort (and sustained attention) into getting the toy to do something interesting--building a tower, for example, that she can send crashing down.

It's hard for me to believe that something as fundamental to cognition as the ability to pay attention can be moved around a whole lot. It's much easier for me to accept that one's beliefs--beliefs about what is worthy of my attention, beliefs about how much effort I should devote to tasks--can be moved around, because beliefs are a product of experience. I actually think that much of what I've written here was implicit in some of the teachers' comments--the emphasis on immediacy, for example--but it's worth making it explicit.
A notable feature of most action video games is that one must pay attention to more than one thing simultaneously. For example, in a first-person shooting game
like the one depicted below, one must move to navigate the terrain while avoiding hazards and seeking out beneficial objects. At the same time, the player might switch among different weapons or tools. Thus, one might think that extended practice on such games would lead to the development of a general skill in allocating attention among multiple tasks.
That's a logical conclusion, but two recent papers offer conflicting data as to whether it's the case.
In one (Donohue, James, Eslick & Mitroff, 2012)
the authors compared 19 college-aged students who were avid gamers to 26 students with no gaming experience. Subjects completed three tasks: a simulated driving game, an image search task (finding simple objects in a complex drawing), and a multiple-object-tracking task. In the last, a number of black circles appear on a white screen. Four of the circles flash for two seconds, and then all of the circles move randomly; at the end of 12 seconds the subject must identify which of the circles flashed. Subjects performed each task twice: once on its own, and once with a distracting task (answering trivia questions) performed simultaneously. The question was whether the performance of the experienced gamers would be less disrupted by the attention-demanding trivia task. These researchers found it was not, as shown in the figure below. The bars with dotted lines show the gamers' performance.
Everyone performed worse in the dual-task condition (i.e., when answering trivia questions), but the cost to performance was the same for the gamers as for the non-gamers. Extensive gaming experience didn't lead to a general skill in sharing attention.

But a different group of researchers found just the opposite. Strobach, Frensch & Schubert (2012) used much simpler tasks to compare 10 gamers and 10 non-gamers. They used simple reaction time tasks; the subject sat before a computer and listened over headphones for a tone. When it sounded, the subject was to push a button as fast as possible. A second task used a visual signal on the screen instead of a tone. In the attention-demanding dual-task version, either an auditory or a visual signal might appear, with different responses for each. In this experiment, gamers responded faster than non-gamers overall, but most important, their performance suffered less in the dual-task situation.

The authors didn't leave it at that. They recognized that the experimental paradigm they used has a significant drawback: they can't attribute the better attention-sharing skills to gaming, because the study is correlational. For example, it may be that some people just happen to be better at sharing attention, and these people are drawn to gaming because this skill makes them better at it. To attribute causality to gaming, they needed to conduct an experiment. So the experimenters turned some "regular folk" into gamers by having them play an action game (Medal of Honor) for 15 hours. Control subjects played a puzzle game (Tetris) for 15 hours. Subjects improved their dual-task performance after playing the action game. The puzzle game did not have that effect.
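In both studies the key quantity is the dual-task cost--how much performance drops when a second task is added--compared across groups. A toy illustration with hypothetical numbers (not either paper's data):

```python
# Hypothetical accuracy scores (proportion correct) -- illustrative only,
# not data from Donohue et al. or Strobach et al.
gamers = {"single": 0.90, "dual": 0.75}
non_gamers = {"single": 0.86, "dual": 0.71}

def dual_task_cost(scores):
    """How much performance drops when a second task is added."""
    return scores["single"] - scores["dual"]

gamer_cost = dual_task_cost(gamers)
non_gamer_cost = dual_task_cost(non_gamers)

# Equal costs (as here) mean no general attention-sharing advantage for
# gamers -- the pattern in the first study. A smaller cost for gamers
# would be the pattern in the second.
```

Note that the comparison is an interaction: gamers can be better overall (higher scores in both conditions, as here) while still paying the same dual-task cost.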
So what is the difference between the two studies? It's really hard to say. It's tempting to place more weight on the study that found a difference between gamers and non-gamers. Scientists generally figure that if you unwittingly make a mistake in the design or execution of a study, that mistake is most likely to lead to null results. In other words, when you don't see an effect (as in the first study), it might be because there really is no effect, or it might just be that something went wrong.

But then again, the first study has more of what scientists call ecological validity--the tasks used in the laboratory look more like the attention-demanding tasks we care about outside of the laboratory (e.g., trying to answer a passenger's question while driving). It may be that both studies are right: gaming leads to an advantage in attention-sharing that is measurable with very simple tasks, but that is washed out and indiscernible in more complex tasks.

The conclusion, then, is a little disheartening. When it comes to the impact of action gaming on attention sharing, it's probably too early to draw a conclusion. Science is hard.
Donohue, S. E., James, B., Eslick, A. N. & Mitroff, S. R. (2012). Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics, 74,
Strobach, T., Frensch, P. A., & Schubert, T. (2012). Video game practice optimizes executive control skills in dual-task and task switching situations. Acta Psychologica, 140,
A new review takes on the question "Does video gaming improve academic achievement?"

To cut to the chase, the authors conclude that the evidence for benefit is slim: there is some reason to think that video games can boost learning in history, language acquisition, and physical education (in the case of exergames), but no evidence that gaming improves math and science. It's notable that the authors excluded simulations from the analysis--simulations might prove particularly effective for science and math--but the authors wanted to examine gaming in particular.

Lest the reader get the impression that the authors might have started this review with the intention of trashing gaming, the authors describe themselves as "both educators and gamers (not necessarily in that order)" and even manage to throw a gamer's inside joke into the article's title: "Our princess is in another castle." (If this doesn't ring a bell, an explanation is here.) And they did try to cast a wide net to capture positive effects of gaming. They did not limit their analysis to randomized controlled trials, but included qualitative research as well.
They considered outcome measures not just of improved content knowledge (history, math, etc.) but also claims that gaming might build teams or collaborative skills, or that gaming could build motivation to do other schoolwork. Still, the most notable thing about the review is the paucity of studies: just 39 went into the review, even though educational gaming has been around for a generation.

Making generalizations about the educational value of gaming is difficult because games are never played the same way twice; there's inherent noise in the experimental treatment. That makes the need for systematicity in the experimental literature all the more important. Yet the existing studies monitor different player activities, assess different learning outcomes, and, of course, test different games with different features.

The authors draw this rather wistful conclusion: "The inconclusive nature of game-based learning research seems to only hint at the value of games as educational tools." I agree. Although there's limited evidence for the usefulness of gaming, it's far too early to conclude that gaming can't be of educational value. But for researchers to prove that--and more important, to identify the features of gaming that promote learning and maintain the gaming experience--will take a significant shift in the research effort, away from piecemeal "do kids learn from this game?" studies toward a more systematic and, yes, reductive analysis of gaming.

Young, M. F. et al. (2012). Our princess is in another castle: A review of trends in serious gaming for education. Review of Educational Research, 82, 61-89.