One of the great intellectual pleasures is to hear an idea that not only seems right, but that strikes you as so terribly obvious (now that you've heard it) that you're in disbelief no one has made the point before. I tasted that pleasure this week, courtesy of a paper by Walter Boot and colleagues (2013). The paper concerned the adequacy of control groups in intervention studies--interventions like (but not limited to) "brain games" meant to improve cognition, and the playing of video games, thought to improve certain aspects of perception and attention.
To appreciate the point made in this paper, consider what a control group is supposed to be and do. It is supposed to be a group of subjects as similar to the experimental group as possible, except for the critical variable under study.
The performance of the control group is to be compared to the performance of the experimental group, which should allow an assessment of the impact of the critical variable on the outcome measure.
Now consider video gaming or brain training. Subjects in an experiment might very well guess the suspected relationship between the critical variable and the outcome. They have an expectation as to what is likely to happen. If they do, then there might be a placebo effect--people perform better on the outcome test simply because they expect that the training will help, just as some people feel less pain when given a placebo that they believe is an analgesic.
The standard way to deal with that problem is to use an "active control." That means that the control group doesn't do nothing--they do something, but it's something that the experimenter does not believe will affect the outcome variable. So in some experiments testing the impact of action video games on attention and perception, the active control plays slow-paced video games like Tetris or Sims.
The active control is supposed to make expectations equivalent in the two groups. Boot et al.'s simple and valid point is that it probably doesn't do that. People don't believe playing Sims will improve attention.
The experimenters gathered some data on this point. They had subjects watch a brief video demonstrating what an action video game was like or what the active control game was like. Then they showed them videos of the measures of attention and perception that are often used in these experiments. And they asked subjects "if you played the video game a lot, do you think it would influence how well you would do on those other tasks?"
And sure enough, people think that action video games will help on measures of attention and perception. Importantly, they don't think that they would have an impact on a measure like story recall. And subjects who saw the game Tetris were less likely to think it would help the perception measures, but were more likely to say it would help with mental rotation.
In other words, subjects see the underlying similarities between games and the outcome measures, and they figure that higher similarity between them means a greater likelihood of transfer.
As the authors note, this problem is not limited to the video gaming literature; the need for an active control that deals with subject expectations also applies to the brain training literature.
More broadly, it applies to studies of classroom interventions. Many of these studies don't use active controls at all. The control is business-as-usual.
In that case, I suspect you have double the problem. You not only have the placebo effect affecting students, you also have one set of teachers asked to do something new, and another set teaching as they typically do. It seems at least plausible that the former will be extra reflective on their practice--they would almost have to be--and that alone might lead to improved student performance.
It's hard to say how big these placebo effects might be, but this is something to watch for when you read research in the future.
Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problems with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.
Most teachers think that students today have a problem paying attention. They seem impatient, easily bored. I've argued that I think it's unlikely that they are incapable of paying attention, but rather that they are quick to deem things not worth the effort.
We might wonder if patience would come more easily to a student who had had the experience of sustaining attention in the face of boredom, and then later finding that patience was rewarded. Arguably, digital immigrants were more likely to have learned this lesson. There were fewer sources of distraction and entertainment, and so we were a bit more likely to hang in there with something a little dull.
I remember on several occasions when I was perhaps ten, being sick at home, watching movies on television that seemed too serious for me—but I watched them because there were only three other TV channels. And I often discovered that these movies (which I would have rejected in favor of game shows) were actually quite interesting.
Students today have so many options that being mildly bored can be successfully avoided most of the time.
If this analysis has any truth to it, how can digital natives learn that patience sometimes brings a reward? Jennifer Roberts, a professor of the History of Art and Architecture at Harvard, has a suggestion. She gave a fantastic talk on the subject at a conference hosted by the Harvard Initiative on Learning and Teaching (more here).
Roberts asks her students to select a painting from a Boston museum, on which they are to write an in-depth research paper.
Then the student must go to the museum and study the painting. For three hours.
The duration, “meant to seem excessive” in Roberts’ words, is, of course, part of the point. The goal is that the student think “Okay, I’ve seen about all I’m going to see in this painting.” But because they must continue looking, they see more. And more. And more. Patience is rewarded.
Roberts gave an example from her own experience. As part of a book she was writing on the 18th-century American painter John Singleton Copley, she studied at length the painting A Boy with a Flying Squirrel. Although she is, obviously, an extremely experienced observer of art, Roberts noted that it was many minutes before she noticed that the shape of the white ruff on the squirrel matches the shape of the boy's ear, and is echoed again in the fold of the curtain over his left shoulder.
If we are concerned that students today are too quick to allow their attention to be yanked to the brightest object (or to willfully redirect it once their very low threshold of boredom is surpassed), we need to consider ways that we can bring home to them the potential reward of sustained attention.
They need to feel the pleasure of discovering that something you thought you had figured out actually has layers that you had not appreciated. That may not be the 21st-century skill of greatest importance, but it may be the one in shortest supply.
A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain & Chamberlain, 2012), and it seems to lead not just to better memory but deeper memory that supports transfer (e.g., McDaniel et al., 2013; Rohrer et al., 2010).
("Retrieval practice" is less catchy than the initial name--the testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or for some other reason, and because "testing effect" led some observers to throw up their hands and say "do we really need more tests?")

Now researchers (Szpunar, Khan, & Schacter, 2013) have reported that testing is a potentially powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.

In this experiment, subjects were asked to learn from a 21-minute video lecture on statistics. They were told that the lecture would be divided into four parts, separated by breaks. During each break they would perform math problems for a minute, and then would either do more math problems for two more minutes (the "untested" group), be quizzed for two minutes on the material they had just learned (the "tested" group), or review by seeing the quiz questions with the answers provided (the "restudy" group).

Subjects were told that whether or not they were quizzed would be randomly determined for each segment; in fact, the same thing happened for an individual subject after each segment, except that everyone was tested after the fourth segment. So note that all subjects had reason to think that they might be tested at any time. There were a few interesting findings.
First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off--the tested subjects outperformed the restudy and the untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve-wracking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And in fact performed better: tested = 90%, restudy = 76%, untested = 68%.)
We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the benefit to attention of testing will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start of an answer to a difficult problem.
Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology, published online Feb. 25.
Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 233-239.
Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, published online April 1, 2013. doi:10.1073/pnas.122176411
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.
What are the cognitive consequences of sleep deprivation?
It seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning: sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though surprisingly, that effect seems to be smaller or absent in young children).
Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive or inattentive. Although this sounds like ADHD, lab studies of attention show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods of time when at home or at school and may be less motivated to do so in any event.
Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)
Although these effects are reasonably well established, the cognitive costs of sleep deprivation are less widespread and statistically smaller than I would have guessed. That may be because they are difficult to test experimentally. You have two choices, both with drawbacks:
1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.
2) you can do a proper experiment by having students sleep less than they usually would, and see if their cognitive performance goes down as a consequence. But it's unethical to deprive students of significant sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here--we think it's months or years of milder deprivation.
So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. To allay that concern, can anything be done to get teens to sleep more?
Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)
Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that they ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student who is gaming at midnight and tells you "I'm playing games because I'm not sleepy" could be mistaken. It could be that he's not sleepy because he's playing games. Good cues would be a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.
So yes, this seems to be a case where good ol' common sense jibes with data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.
Is technology changing how students learn, that is, the workings of the brain? An article in today's New York Times reports that most teachers think the answer is "yes," and that this development is not positive. The article reports the results of two surveys of teachers, one conducted by the Pew Internet Project and the other by Common Sense Media. Both report that teachers believe that students' use of digital technology adversely affects their attention spans and makes them less likely to stick with challenging tasks.
In interviews, many teachers report feeling that they have to work harder than they used to in order to keep students engaged.
As the article notes, there have not been any long-term studies that show whether student attention span has been affected by digital media. Still, a lot of psychologists are actually skeptical that digital media are likely to fundamentally change human cognition. Steven Pinker has written, "Electronic media aren't going to revamp the brain's mechanisms of information processing." I made the same argument here.
The basic architecture is likely to be relatively fixed, and in the absence of extreme deprivation, will develop fairly predictably. Sure, it is shaped by experience, but those changes just tune what's already there to experience--they might change the dimensions of the rooms without altering the fundamental floor plan, so to speak.

Does that view conflict with teachers' impressions? Not necessarily.

When we talk about a student's attention span, I suspect we're really talking about a particular type of attention. It's not their overall ability to pay attention: kids today can, I think, get lost for hours in a movie or a book or a game just as readily as their parents did. Rather, the seemingly shorter attention span is their ability to maintain attention on a task that is not very interesting to them.
But even within that situation, I suspect that there are two factors at work: one is the raw capacity to direct one's attention; the second is the willingness to do so. I doubt that technology affects the first, but I'm ready to believe that it affects the second.

Directing attention--forcing yourself to think about something you'd rather not think about--is effortful, even mildly aversive. Why would you do it? There are lots of possible reasons. Among them would be previous experiences leading you to believe that such sustained attention leads to a payoff. In other words, if you've grown up in circumstances where very little effort usually led to something stimulating and interesting, then you likely have an expectation that that's the nature of the world: I do just a little something, and I get a big payoff. (And the payoff is probably immediate.)

The process by which children learn to expect a lot of cool stuff to happen based on minimal effort may start early. When a toddler is given a toy that puts on a dazzling display of light and sound when a button is pushed, we might be teaching him this lesson. In contrast, the toddler who gets a set of blocks has to put a heck of a lot more effort (and sustained attention) into getting the toy to do something interesting--build a tower, for example, that she can send crashing down.

It's hard for me to believe that something as fundamental to cognition as the ability to pay attention can be moved around a whole lot. It's much easier for me to accept that one's beliefs--beliefs about what is worthy of my attention, beliefs about how much effort I should devote to tasks--can be moved around, because beliefs are a product of experience. I actually think that much of what I've written here was implicit in some of the teachers' comments--the emphasis on immediacy, for example--but it's worth making it explicit.
Psychologists have not had anything nice to say about multitasking. Trying to do two things at once degrades performance in virtually all circumstances. The exception seems to be listening to music while performing other tasks, but that seems to be true only for some people, some of the time. (I review this literature here.)

This pattern of performance is especially troubling, given that multitasking--especially media multitasking--is becoming more prevalent, especially among younger people. But there's no evidence that doing a lot of media multitasking makes you better at it. In one study, researchers (Ophir, Nass & Wagner, 2009) found that college students who reported more habitual multitasking were actually less skillful on standard laboratory tasks that require shifting or switching attention.

Why would they be worse? One possibility is that they are biased to spread attention broadly. That's a poor strategy when you're confronted with two tasks that have different or even conflicting requirements. But that bias would make you more likely to multitask, even if multitasking is not very effective. Whether multitasking creates that bias, or whether that bias exists for other reasons and prompts people to multitask, is not known.
Either way, if heavy multitaskers have a bias to spread attention broadly, that bias should be helpful in tasks where two different streams of information are mutually supportive. A new study (Lui & Wong, 2012) tests that prediction. The researchers used the pip-and-pop task.
Subjects view a display like this one:
The subject's task is to find, as quickly as possible, the single horizontal or vertical line amidst the oblique lines, and to press a button identifying it as horizontal or vertical.
All of the lines alternate colors (red and green), but do so asynchronously. The interesting feature of this task is that every time the target changes color, there is an auditory signal--the pip. The pip doesn't tell you where the target is, what color it is, or whether it's horizontal or vertical. It simply coincides with the color change of the target.
Subjects are not told that the pips have anything to do with the visual search task, nor that they should pay attention to the pips.
But people who integrate the visual information with the auditory information report that the target seems to pop out of the display. They feel that they don't need to search laboriously; they just see it.
The researchers compared subjects' speed and accuracy in finding the target with and without the auditory signal, and found that accuracy (but not speed) correlated with subjects' self-reported frequency of multitasking, as shown below. Not a huge effect, but reliable.
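A correlation like this is just a relationship between paired measurements: each subject contributes a self-reported multitasking score and an accuracy benefit. For readers who want to see how such a relationship is quantified, here is a minimal sketch of a Pearson correlation in plain Python; the numbers are made up for illustration, not taken from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data for six subjects: a media-multitasking index and
# the accuracy benefit (pip-present minus pip-absent) on the search task.
mmi = [1.5, 2.0, 3.1, 4.2, 5.0, 6.3]
accuracy_benefit = [0.02, 0.01, 0.05, 0.06, 0.08, 0.09]
print(pearson_r(mmi, accuracy_benefit))  # a strong positive correlation
```

With real data one would also report a significance test and, ideally, a confidence interval; the coefficient alone says nothing about whether the sample size makes the relationship reliable.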
Most laboratory tests of multitasking use tasks that are uncorrelated, so spreading attention among them hurts performance. In this case, the information in the visual and auditory streams is mutually reinforcing, so spreading attention helps.
Does this have any bearing on the types of tasks people do outside of the lab?
Information in two different real-world tasks is presumably uncorrelated. When two different streams of information are mutually reinforcing, it's usually by design--the audio and visual portions of a movie, for example. In such cases the synchronization is so good that people make few errors.
One way that multitaskers might have an advantage in real-world tasks is in the detection of unexpected signals. For example, if you're biased to monitor sounds even as you're writing a document, you might be more likely to perceive an auditory signal that an email has arrived in a noisy office environment. Or even to perceive a police siren or lights while driving. Such predictions have not, to my knowledge, been tested.
Lui, K. F. H., & Wong, A. C.-N. (2012). Does media multitasking always hurt? A positive correlation between multitasking and multisensory integration. Psychonomic Bulletin & Review, 19, 647-653.
Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106, 15583-15587.
A notable feature of most action video games is that one must pay attention to more than one thing simultaneously. For example, in a first-person shooting game like the one depicted below, one must navigate the terrain while avoiding hazards and seeking out beneficial objects. At the same time, the player might switch among different weapons or tools. Thus, one might think that extended practice on such games would lead to the development of a general skill in allocating attention among multiple tasks.
That's a logical conclusion, but two recent papers offer conflicting data as to whether it's the case.
In one (Donohue, James, Eslick & Mitroff, 2012), the authors compared 19 college-aged students who were avid gamers to 26 students with no gaming experience. Subjects completed three tasks: a simulated driving game, an image search task (finding simple objects in a complex drawing), and a multiple-object-tracking task. In the last of these, a number of black circles appear on a white screen. Four of the circles flash for two seconds, and then all of the circles move randomly; at the end of 12 seconds the subject must identify which of the circles flashed. Subjects performed each task twice: once on its own, and once with a distracting task (answering trivia questions) performed simultaneously. The question was whether the performance of the experienced gamers would be less disrupted by the attention-demanding trivia task. These researchers found it was not, as shown in the figure below.
The bars with dotted lines show the gamers' performance.
Everyone performed worse in the dual-task condition (i.e., when answering trivia questions), but the cost to performance was the same for the gamers as for the non-gamers. Extensive gaming experience didn't lead to a general skill in sharing attention.

But a different group of researchers found just the opposite. Strobach, Frensch & Schubert (2012) used much simpler tasks to compare 10 gamers and 10 non-gamers. They used simple reaction time tasks: the subject sat before a computer and listened over headphones for a tone; when it sounded, the subject was to push a button as fast as possible. A second task used a visual signal on the screen instead of a tone. In the attention-demanding dual-task version, either an auditory or a visual signal might appear, with a different response required for each. In this experiment, gamers responded faster than non-gamers overall, but most important, their performance suffered less in the dual-task situation.

The authors didn't leave it at that. They recognized that the experimental paradigm they used has a significant drawback: they can't attribute the better attention-sharing skills to gaming, because the study is correlational. For example, it may be that some people just happen to be better at sharing attention, and these people are drawn to gaming because this skill makes them better at it.

To attribute causality to gaming, they needed to conduct an experiment. So the experimenters turned some "regular folk" into gamers by having them play an action game (Medal of Honor) for 15 hours. Control subjects played a puzzle game (Tetris) for 15 hours. Subjects improved their dual-task performance after playing the action game. The puzzle game did not have that effect.
So what accounts for the difference between the two studies? It's really hard to say. It's tempting to place more weight on the study that found a difference between gamers and non-gamers. Scientists generally figure that if you unwittingly make a mistake in the design or execution of a study, that's most likely to lead to null results. In other words, when you don't see an effect (as in the first study), it might be because there really is no effect, or it might just be that something went wrong.

But then again, the first study has more of what scientists call ecological validity--the tasks used in the laboratory look more like the attention-demanding tasks we care about outside of the laboratory (e.g., trying to answer a passenger's question while driving). It may be that both studies are right: gaming leads to an advantage in attention-sharing that is measurable with very simple tasks, but that is washed out and indiscernible in more complex tasks.

The conclusion, then, is a little disheartening. When it comes to the impact of action gaming on attention-sharing, it's probably too early to draw a conclusion. Science is hard.
Donohue, S. E., James, B., Eslick, A. N. & Mitroff, S. R. (2012). Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics, 74,
Strobach, T., Frensch, P. A., & Schubert, T. (2012). Video game practice optimizes executive control skills in dual-task and task switching situations. Acta Psychologica, 140,
It's not often that an initiative prompts grave concern in some and ridicule in others. The Gates Foundation managed it. The Foundation has funded a couple of projects to investigate the feasibility of developing a passive measure of student engagement, using galvanic skin response (GSR). The ridicule comes from an assumption that it won't work.

GSR basically measures how sweaty you are. Two leads are placed on the skin. One emits a very mild charge; the other measures the charge. The more sweat on your skin, the better it conducts the charge, so the better the second lead will pick it up.

Who cares how sweaty your skin is? Sweat--as well as heart rate, respiration rate, and a host of other physiological signs controlled by the peripheral nervous system--varies with your emotional state.

Can you tell whether a student is paying attention from these data? It's at least plausible that it could be made to work.
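The measurement principle described above is nothing more than Ohm's law: apply a known, tiny voltage across the two leads, measure the current that flows, and the ratio gives skin conductance (sweatier skin conducts more). A toy sketch with hypothetical readings--real GSR hardware, electrode placement, and calibration are considerably more involved:

```python
def skin_conductance_microsiemens(applied_volts, measured_amps):
    """Conductance G = I / V (siemens), scaled to microsiemens,
    the unit conventionally used for skin conductance."""
    return (measured_amps / applied_volts) * 1e6

# Hypothetical readings: 0.5 V applied across the two leads,
# 2.5 microamps of current measured -> 5.0 microsiemens.
print(skin_conductance_microsiemens(0.5, 2.5e-6))
```

The hard part, of course, is not computing conductance but interpreting it: the research question is whether changes in this signal can be mapped reliably onto states like engagement, which is exactly what is in dispute.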
There has long been controversy over how separable different emotional states are on the basis of these sorts of metrics. It strikes me as a tough problem, and we're clearly not there yet, but the idea is far from kooky; indeed, the people who have been arguing it's possible have been making some progress--this lab group says they've successfully distinguished engagement, relaxation, and stress. (Admittedly, they gathered a lot more data than just GSR, and one measure they collected was EEG, a measure of the central, not peripheral, nervous system.)

The grave concern springs from the possible use to which the device would be put.
A Gates Foundation spokeswoman says the plan is that a teacher would be able to tell, in real time, whether students are paying attention in class. (Earlier, the Foundation website indicated that the grant was part of a program meant to evaluate teachers, but that was apparently an error.)

Some have objected that such measurement would be insulting to teachers. After all, can't teachers tell when their students are engaged, or bored, or frustrated? I'm sure some can, but not all of them. And it's a good bet that beginning teachers can't make these judgments as accurately as their more experienced colleagues--and beginners are just the ones who need this feedback. Presumably the information provided by the system would be redundant for teachers who can read engagement in their students' faces and body language, and these teachers will simply ignore it. I would hope that classroom use would be optional--GSR bracelets would enter classrooms only if teachers requested them.

Of greater concern to me are the rights of the students. Passive reading of physiological data without consent feels like an invasion of privacy. Parental consent ought to be obligatory.
Then too, what about HIPAA? What is the procedure if a system that measures heartbeat detects an irregularity?

These two concerns--the effect on teachers and the effect on students--strike me as serious, and people with more experience than I have in ethics and in the law will need to think them through with great care. But I still think the project is a terrific idea, for two reasons, neither of which has received much attention in all the uproar.

First, even if the devices were never used in classrooms, researchers could put them to good use. I sat in at a meeting a few years ago of researchers considering a grant submission (not to the Gates Foundation) on this precise idea--using peripheral nervous system data as an on-line measure of engagement. (The science involved here is not really in my area of expertise, and I had no idea why I was asked to be at the meeting, but that seems to be true of about two-thirds of the meetings I attend.) Our thought was that the device would be used by researchers, not teachers and administrators. Researchers would love a good measure of engagement because the proponents of new materials or methods so often claim "increased engagement" as a benefit. But how are researchers supposed to know whether or not the claim is true?
Teacher or student judgments of engagement are subject to faulty memory and to well-known biases.

Second, I see potentially great value for parents and teachers of kids with disabilities. For example, have a look at these two pictures.
This is my daughter Esprit. She's 9 years old, and she has Edward's syndrome. As a consequence, she has a host of cognitive and physical challenges: she cannot speak, for example, and she has limited motor control and poor muscle tone (she can't sit up unaided). Esprit can never tell me that she's engaged, either with words or signs. But I'm comfortable concluding that she is engaged at moments like the one captured in the top photo--she's turning the book over in her hands and staring at it intently. In the photo at the bottom, even I, her dad, am unsure of what's on her mind. (She looks sleepy, but isn't--ptosis, or drooping upper eyelids, is part of the profile.) If Esprit wore this expression while gazing toward a video, for example, I wouldn't be sure whether she was engaged by the video or was spacing out.

Are there moments when I would slap a bracelet on her if I thought it could measure whether or not she was engaged? You bet your sweet bippy there are. I'm not the first to think of using physiologic data to measure engagement in people with disabilities that make it hard for them to make their interests known.
In this article
, researchers sought to reduce the communication barriers that exclude children with disabilities from social activities; the kids might be present, but because of their difficulties describing or showing their thoughts, they cannot fully participate in the group. Researchers reported some success in distinguishing engaged from disengaged states of mind using measures of blood volume pulse, GSR, skin temperature, and respiration in nine young adults with muscular dystrophy or cerebral palsy. I respect
the concerns of those who see the potential for abuse in the passive measurement of physiological data. At the same time, I see the potential for real benefit in such a system, wisely deployed. When we see the potential for abuse, let's quash that possibility, but let's not let it blind us to the possibility of the good that might be done. And finally, because Esprit didn't look very cute in the pictures above, I end with this picture.
Should kids be allowed to chew gum in class? If a student said "but it helps me concentrate. . ." should we be convinced? If it provides a boost, it's short-lived. It's pretty well established that a burst of glucose provides a brief cognitive boost (see review here), so the question is whether the chewing itself provides any edge over and above that--that is, whether a benefit would be observed even with sugar-free gum.
One study (Wilkinson et al., 2002
) compared gum-chewing to no-chewing (and to "sham chewing," in which subjects were asked to pretend to chew gum, which seems awkward). Subjects performed about a dozen tasks, including tests of vigilance (i.e., sustained attention) and of short-term and long-term memory.
Researchers reported some positive effect of gum-chewing for four of the tests. It's a little hard to tell from the brief write-up, but it appears that the investigators didn't correct their statistics for the multiple tests.
This throw-everything-at-the-wall-and-see-what-sticks approach may be characteristic of this research. Another study (Smith, 2010
) took the same approach and concluded that there were some positive effects of gum chewing on some of the tasks, especially feelings of alertness. (This study did not use sugar-free gum, so it's hard to tell whether the effect is due to the chewing or the glucose.)
A more recent study (Kozlov, Hughes & Jones, 2012
), using a more standard short-term memory paradigm, found no benefit of gum chewing.
What are we to make of this grab-bag of results? (And please note this blog does not offer an exhaustive review.)
A recent paper (Onyper, Carr, Farrar & Floyd, 2011
) offers a plausible resolution. They suggest that the act of mastication offers a brief--perhaps ten- or twenty-minute--boost to cognitive function due to increased arousal. So we might see a benefit (or not) of gum chewing depending on the timing of the chewing relative to the timing of the cognitive tasks.
The upshot: teachers might allow or disallow gum chewing in their classrooms for a variety of reasons, but there is not much evidence that it confers a significant cognitive advantage. EDIT: Someone emailed to ask whether kids with ADHD benefit. The one study I know of reported a cost to vigilance with gum-chewing for kids with ADHD.
Kozlov, M. D., Hughes, R. W., & Jones, D. M. (2012). Gummed-up memory: Chewing gum impairs short-term recall. Quarterly Journal of Experimental Psychology, 65, 501-513.
Onyper, S. V., Carr, T. L., Farrar, J. S., & Floyd, B. R. (2011). Cognitive advantages of chewing gum. Now you see them, now you don't. Appetite, 57, 321-328.
Smith, A. (2010). Effects of chewing gum on cognitive function, mood and physiology in stressed and unstressed volunteers. Nutritional Neuroscience, 13, 7-16.
Wilkinson, L., Scholey, A., & Wesnes, K. (2002). Chewing gum selectively improves aspects of memory in healthy volunteers. Appetite, 38, 235-236.
Today's New York Times has an article
speculating that when you read on an ereader or tablet, your attention is likely to be diverted to other applications.
If you hit a dull patch in the book, can you resist the pull of YouTube, Twitter, or your email? Even if you're engaged in the book, Google may beckon to clarify a point ("Essex? Where's that?") and the next thing you know, 25 minutes have elapsed in surfing. Perhaps interesting, perhaps productive, but not what you sat down intending to do. Many people I've spoken with have the impression that this sort of distraction is predictable, and that it is a greater problem when reading on a tablet computer, even compared to reading a print book with a computer nearby. The data on this question are still thin, but I do know of one relevant study (Woody et al., 2011). Nearly 300 college students took part, each reading a chapter from an introductory psychology textbook in one of five formats: print textbook, printed text pages, printed manuscript in MS Word, electronic pdf file, or electronic textbook. Some students read in a laboratory, some at home, and everyone took a quiz on the chapter material after reading it. The results showed that media format did not affect quiz grades. But students who read the electronic versions were more likely to respond to instant messages and email while reading, and more likely to use social networking sites (Facebook/Myspace) while reading. It's only one experiment, but this feels like an instance where the intuitions of the majority will end up according with the data. Whether the extra level of distraction is really a problem remains to be seen, and it may well be that users (or software designers) come up with strategies to solve the problem, if it proves significant.
Woody, W. D., Daniel, D. B., & Stewart, J. M. (2011). Students' preferences and performance using e-textbooks and print textbooks. In F. Columbus (Ed.), Computers in Education. New York: Nova Publishing.