Daniel Willingham--Science & Education
Hypothesis non fingo

Can you multitask on a treadmill?

5/2/2016

 
One of my graduate school mentors noted that if he was walking when presented with a really difficult cognitive challenge, he would stop, as though walking drew, however slightly, on his attention. Rousseau, in contrast, claimed “I can only meditate when I am walking.”
The advent of the treadmill desk makes the question of walking and cognition more urgent. Okay, there may be health benefits, but if walking is not fully automatic, it siphons away some of your thinking capacity; it demands multitasking, so why put one in the workplace? 
Researchers have caught up to business trends, and a couple of recent studies indicate that dedicated office walkers can relax—treadmills don’t seem to compromise cognition. Probably.

In one, researchers administered well-normed measures of working memory and executive function (digit span forwards and backwards, digit-symbol coding, letter-number substitution, and others) to 45 college undergraduates. Each completed the tasks while sitting, standing, and walking (in random order), with 1 week elapsing between sessions. Participants could set the walking speed as they preferred, between 1 and 3 km/hour. Performance on the working memory/executive function tasks was statistically indistinguishable in the three conditions.

That study had people walk (or not) and measured the impact on working memory. Another approach is for researchers to tax working memory (or not) and observe the impact on walking. Other researchers used that method, having subjects either just walk (again, at a self-selected pace), or walk while performing a working memory task, or walk while reading. Researchers recorded several aspects of gait, focusing on variability. Again, they found no evidence of interference.

The neurophysiology of walking would seem consistent with these results. Humans have central pattern generators in the spinal cord—neural circuits that, even in the absence of input from the brain, can generate patterns of flexion and extension in muscles that look like walking. Thus, if the spinal cord can handle walking on its own, it’s easy to see why walking is not compromised when the brain is doing something else.

But central pattern generators produce fairly crude motor output; locomotion (like all movement) requires close monitoring of perceptual feedback (from vision, from balance), which is used to fine-tune walking movements (for a review, see Clark, 2015). We notice the need for perceptual information when we walk on ice, and the need for motor tuning when we pick our way across a rocky beach, but the fine-tuning goes on less obtrusively in everyday situations. To get a feel for it, find yourself a nice long hallway, pick a spot about 50 feet away, and walk towards it with your eyes closed. If walking were a completely automatic program that could run without input from the brain, this would be no problem, yet most sighted people feel uneasy just a few steps in.

So if walking actually can’t run with total automaticity, why does treadmill walking show no attentional cost?

It may be that there is a cost, but it’s so small that it’s not detected in these experiments. And that may mean it’s not worth worrying about. Follow-up experiments with greater statistical power to detect small effects would be needed to address that possibility. Three other caveats are worth considering before we all buy treadmill desks.

First, the studies to date have been of relatively brief duration—less than an hour. It’s possible that subjects can, with some effort of concentration, walk without cognitive cost for a short period of time, but that a few hours would reveal a deficit; tiredness might make walking a little sloppy, and thus more attention-demanding.

Second, at least one study has shown that a movement task (key tapping) was compromised when people walked (Oblinger et al., 2011). That’s not an effect of attention, but of trying to do two motor actions at once, like rubbing your stomach while patting your head. Hence, although office activities like typing or data entry have not been tested on treadmills, I’d be willing to bet they would be compromised.

Third, my graduate mentor and Rousseau may have been talking about different types of thought. My mentor referred to answering a question, whereas Rousseau may have meant more creative thought. Walking may not be helpful when the environment presents pressing problems in need of timely answers. But a meandering gait may promote meandering thought, which in turn promotes creativity. The latter has not been tested on office treadmills either.

What people know about the cost of multitasking

3/3/2014

 
Researchers emphasize there are very few circumstances in which you can do two things at once without cost (relative to doing each on its own). Yet some drivers sneak a look at their phone while on the road, and some students have the television playing while they complete an assignment.

Why? One possibility is that they don't understand the cost of multitasking very well. A new study (Finley, Benjamin, & McCarley, 2014) investigated that possibility.

Subjects initially practiced a tracking task: a small target moved erratically on a computer screen and the subject was to try to keep a mouse cursor atop it.

Interleaved with practice on the tracking task, subjects practiced a standard auditory N-back task: they heard a series of digits (one every 2.4 seconds) and were asked to say whether the digits matched the one spoken 2 digits earlier (or in other versions of the task, 1 digit or 3 digits earlier).
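To make the N-back rule concrete, here is a minimal sketch of the match judgment (my own illustration in Python, not the authors' materials):

```python
def nback_matches(digits, n=2):
    """For each digit from position n onward, judge whether it
    matches the digit presented n items earlier (True/False)."""
    return [digits[i] == digits[i - n] for i in range(n, len(digits))]

# A short stream of spoken digits; in the 2-back version the correct
# judgments are: 3==3 yes, 7==7 yes, 1!=3 no, 7==7 yes, 2!=1 no, 2!=7 no
stream = [3, 7, 3, 7, 1, 7, 2, 2]
print(nback_matches(stream, n=2))  # [True, True, False, True, False, False]
```

Setting n to 1 or 3 gives the easier and harder versions of the task mentioned above.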

After a total of 3 phases of practice for each task, subjects were told that they would try to do both tasks at the same time. They were told to prioritize the tracking task; just as a driver must keep the car in the lane, they should do their best to keep the cursor near the target, while still doing as well as they could on the N-back task.

Then subjects got feedback on their performance on the three phases of the tracking task (expressed as the percent of time they had the cursor on the target), and they were asked to predict their performance on the tracking task when simultaneously doing the N-back task.

The results showed a significant drop in tracking performance when subjects had to do the N-back task at the same time. What did subjects predict?

Subjects did predict a decrement. What they could not do was predict the size.

The graph shows the correlation between the predicted decrement in tracking performance and the actual decrement.

[Figure: The diagonal shows perfect prediction.]
Subjects were not just wildly guessing. Their predicted performance in the dual task situation was related to their performance in the single-task situation, as shown here:
[Figure: Dual-task performance as a function of single-task performance.]
So to make the judgment "how much will it hurt my tracking performance to add a second task?" subjects take their single-task tracking performance and subtract something... but the "something" is not accurate.

The analogy to typical dual-task situations is imperfect, though. In the experiment, subjects had never performed the two tasks simultaneously and were asked to guess at their performance when they did. When a student decides to watch television while completing an assignment, he very likely has completed those tasks in a dual-task situation before.

This means he has two ways of predicting his performance: one would be guessing at the dual-task cost, and this experiment shows that although subjects know there is some cost, they are terrible at predicting its size.

The second way students could predict what will happen if they multitask while working is based on their memory of similar situations. But the feedback students get in this situation is unclear. First, the feedback is significantly delayed relative to when the work is completed. Second, every assignment varies (and so do TV programs), so the student might attribute bad performance to one of those variables (although I don't know of any study showing no cost of background television).

But there is another interpretation of students' choice to multitask: they know their performance will suffer, they know they don't know how much it will suffer, and they don't care.


Reference:
Finley, J. R., Benjamin, A. S., & McCarley, J. S. (2014). Metacognition of multitasking: How well do we predict the costs of divided attention? Journal of Experimental Psychology: Applied, in press.

Out of Control: Fundamental Flaw in Claims about Brain-Training

7/15/2013

 
One of the great intellectual pleasures is to hear an idea that not only seems right, but that strikes you as so terribly obvious (now that you've heard it) you're in disbelief that no one has ever made the point before.

I tasted that pleasure this week, courtesy of a paper by Walter Boot and colleagues (2013).

The paper concerned the adequacy of control groups in intervention studies--interventions like (but not limited to) "brain games" meant to improve cognition, and the playing of video games, thought to improve certain aspects of perception and attention.
[Figure: Control group]
To appreciate the point made in this paper, consider what a control group is supposed to be and do. It is supposed to be a group of subjects as similar to the experimental group as possible, except for the critical variable under study.

The performance of the control group is to be compared to the performance of the experimental group, which should allow an assessment of the impact of the critical variable on the outcome measure.

Now consider video gaming or brain training. Subjects in an experiment might very well guess the suspected relationship between the critical variable and the outcome; they have an expectation as to what is likely to happen. If they do, there might be a placebo effect—people perform better on the outcome test simply because they expect the training to help, just as some people feel less pain when given a placebo that they believe is an analgesic.

[Figure: Active control group]
The standard way to deal with that problem is to use an "active control." That means the control group doesn't do nothing—they do something, but something the experimenter does not believe will affect the outcome variable. So in some experiments testing the impact of action video games on attention and perception, the active control plays a slow-paced video game like Tetris or Sims.

The purpose of the active control is that it is supposed to make expectations equivalent in the two groups. Boot et al.'s simple and valid point is that it probably doesn't do that. People don't believe playing Sims will improve attention.

The experimenters gathered some data on this point. They had subjects watch a brief video demonstrating what an action video game was like or what the active control game was like. Then they showed them videos of the measures of attention and perception that are often used in these experiments. And they asked subjects "if you played the video game a lot, do you think it would influence how well you would do on those other tasks?"

[Figure: Out of control group]
And sure enough, people think that action video games will help on measures of attention and perception. Importantly, they don't think that they would have an impact on a measure like story recall. And subjects who saw the game Tetris were less likely to think it would help the perception measures, but were more likely to say it would help with mental rotation.

In other words, subjects see the underlying similarities between games and the outcome measures, and they figure that higher similarity between them means a greater likelihood of transfer.

As the authors note, this problem is not limited to the video gaming literature; the need for an active control that deals with subject expectations also applies to the brain training literature.

More broadly, it applies to studies of classroom interventions. Many of these studies don't use active controls at all. The control is business-as-usual.

In that case, I suspect you have double the problem. You not only have the placebo effect affecting students, you also have one set of teachers asked to do something new, and another set teaching as they typically do. It seems at least plausible that the former will be extra reflective on their practice--they would almost have to be--and that alone might lead to improved student performance.

It's hard to say how big these placebo effects might be, but this is something to watch for when you read research in the future.

Reference

Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.

The 21st century skill students really lack.

5/13/2013

 
Most teachers think that students today have a problem paying attention. They seem impatient, easily bored.

I’ve argued that I think it’s unlikely that they are incapable of paying attention, but rather that they are quick to deem things not worth the effort.

We might wonder if patience would not come easier to a student who had had the experience of sustaining attention in the face of boredom, and then later finding that patience was rewarded. Arguably, digital immigrants were more likely to have learned this lesson. There were fewer sources of distraction and entertainment, and so we were a bit more likely to hang in there with something a little dull.

I remember on several occasions when I was perhaps ten, being sick at home, watching movies on television that seemed too serious for me—but I watched them because there were only three other TV channels. And I often discovered that these movies (which I would have rejected in favor of game shows) were actually quite interesting.

Students today have so many options that being mildly bored can be successfully avoided most of the time.

If this analysis has any truth to it, how can digital natives learn that patience sometimes brings a reward?

Jennifer Roberts, a professor of the History of Art and Architecture at Harvard, has a suggestion.

She gave a fantastic talk on the subject at a conference hosted by the Harvard Initiative on Learning and Teaching (more here).

Roberts asks her students to select a painting from a Boston museum, on which they are to write an in-depth research paper.

Then the student must go to the museum and study the painting. For three hours.

The duration, “meant to seem excessive” in Roberts’ words, is, of course, part of the point. The goal is that the student think “Okay, I’ve seen about all I’m going to see in this painting.” But because they must continue looking, they see more. And more. And more. Patience is rewarded.

Roberts gave an example from her own experience. As part of a book she was writing on 18th century American painter John Singleton Copley, she studied at length the painting A Boy With a Flying Squirrel. Although she is, obviously, an extremely experienced observer of art, Roberts noted that it was many minutes before she noticed that the shape of the white ruff on the squirrel matches the shape of the boy’s ear, and is echoed again in the fold of the curtain over his left shoulder.

If we are concerned that students today are too quick to allow their attention to be yanked to the brightest object (or to willfully redirect it once their very low threshold of boredom is surpassed), we need to consider ways that we can bring home to them the potential reward of sustained attention.

They need to feel the pleasure of discovering that something you thought you had figured out actually has layers that you had not appreciated.

That may not be the 21st century skill of greatest importance, but it may be the one in shortest supply.


Testing helps maintain attention, reduce stress in online learning

4/8/2013

 
A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain & Chamberlain, 2012), and it seems to lead not just to better memory but to deeper memory that supports transfer (e.g., McDaniel et al., 2013; Rohrer et al., 2010).

("Retrieval practice" is less catchy than the initial name--testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or some other reason and because "testing effect" led some observers to throw up their hands and say "do we really need more tests?")

Now researchers (Szpunar, Khan, & Schacter, 2013) have reported testing as a potentially powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.

In this experiment, subjects were asked to learn from a 21-minute video lecture on statistics. They were told that the lecture would be divided into 4 parts, separated by breaks. During each break they would perform math problems for a minute, and then would either do more math problems for two more minutes (the "untested" group), be quizzed for two minutes on the material they had just learned (the "tested" group), or review by seeing the questions with the answers provided (the "restudy" group).

Subjects were told that whether or not they were quizzed would be randomly determined for each segment; in fact, each subject got the same treatment after every segment, except that everyone was tested after the fourth segment.

So note that all subjects had reason to think that they might be tested at any time.

There were a few interesting findings. First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off--the tested subjects outperformed the restudy and the untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve wracking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And in fact performed better: tested = 90%, restudy = 76%, nontested = 68%).

We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the benefit of testing to attention will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start on a difficult problem.

References

Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24,  437-448.

McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology. Published online Feb. 25

Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36, 233-239.

Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, published online April 1, 2013, doi:10.1073/pnas.122176411

How to Get Students to Sleep More

12/12/2012

 
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.

What are the cognitive consequences of sleep deprivation?

It seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning—sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though, surprisingly, that effect seems to be smaller or absent in young children).

Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive or inattentive. Although this sounds like ADHD, lab studies of attention show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods of time when at home or at school and may be less motivated to do so in any event.

Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)

Although these effects are reasonably well established, the cognitive cost of sleep deprivation is less widespread and statistically smaller than I would have guessed. That may be because such effects are difficult to test experimentally. You have two choices, both with drawbacks:

1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.

2) you can do a proper experiment by having students sleep less than they usually would, and see if their cognitive performance goes down as a consequence. But it's unethical to deprive students of significant sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here—we think it's months or years of milder deprivation.

So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. To allay that concern, can anything be done to get teens to sleep more?

Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)

Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that they ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student gaming at midnight who tells you "I'm playing games because I'm not sleepy" could be mistaken: it could be that he's not sleepy because he's playing games. Good cues would be a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.

So yes, this seems to be a case where good ol' common sense jibes with data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.

Is technology changing how students learn?

11/1/2012

 
Is technology changing how students learn, that is, the workings of the brain?

An article in today's New York Times reports that most teachers think the answer is "yes," and this development is not positive.

The article reports the results of two surveys of teachers, one conducted by the Pew Internet Project, and the other by Common Sense Media. Both report that teachers believe that students' use of digital technology adversely affects their attention spans and makes them less likely to stick with challenging tasks.

In interviews, many teachers report feeling that they have to work harder than they used to in order to keep students engaged.
As the article notes, there have not been any long-term studies that show whether student attention span has been affected by digital media.

Still, a lot of psychologists are skeptical that digital media are likely to change the fundamentals of human cognition.

Steven Pinker has written "Electronic media aren't going to revamp the brain's mechanisms of information processing." I made the same argument here.

The basic architecture is likely to be relatively fixed and, in the absence of extreme deprivation, will develop fairly predictably. Sure, it is shaped by experience, but those changes just tune what's already there to experience—they might change the dimensions of the rooms without altering the fundamental floor plan, so to speak.

Does that view conflict with teachers' impressions? Not necessarily.

When we talk about a student's attention span, I suspect we're really talking about a particular type of attention. It's not their overall ability to pay attention: kids today can, I think, get lost for hours in a movie or a book or a game just as readily as their parents did. Rather, the seemingly shorter attention span is their ability to maintain attention on a task that is not very interesting to them.

But even within that situation, I suspect that there are two factors at work: one is the raw capacity to direct one's attention. The second is the willingness to do so.

I doubt that technology affects the first, but I'm ready to believe that it affects the second.

Directing attention--forcing yourself to think about something you'd rather not think about--is effortful, even mildly aversive. Why would you do it? There are lots of possible reasons. Among them would be previous experiences leading you to believe that such sustained attention leads to a payoff.

In other words, if you've grown up in circumstances where very little effort usually led to something stimulating and interesting, then you likely expect that that's the nature of the world: I do just a little something, and I get a big payoff. (And the payoff is probably immediate.)

The process by which children learn to expect a lot of cool stuff to happen based on minimal effort may start early.

When a toddler is given a toy that puts on a dazzling display of light and sound when a button is pushed, we might be teaching him this lesson.
In contrast, the toddler who gets a set of blocks has to put a heck of a lot more effort (and sustained attention) into getting the toy to do something interesting--build a tower, for example, that she can send crashing down.

It's hard for me to believe that something as fundamental to cognition as the ability to pay attention can be moved around a whole lot. It's much easier to accept that one's beliefs—beliefs about what is worthy of my attention, beliefs about how much effort I should dispense to tasks—can be moved around, because beliefs are a product of experience.

I actually think that much of what I've written here was implicit in some of the teachers' comments--the emphasis on immediacy, for example--but it's worth making it explicit.

At last, a spot of good news for multitaskers

7/16/2012

 
Psychologists have not had anything nice to say about multitasking. Trying to do two things at once degrades performance in virtually all circumstances. The exception seems to be listening to music while performing other tasks, but that seems to be true only for some people, some of the time. (I review this literature here.)

This pattern of performance is especially troubling given that multitasking—media multitasking in particular—is becoming more prevalent, notably among younger people.

But there's no evidence that doing a lot of media multitasking makes you better at it. In one study, researchers (Ophir, Nass & Wagner, 2009) found that college students who reported more habitual multitasking were actually less skillful in standard laboratory tasks that require shifting or switching attention.

Why would they be worse? One possibility is that they are biased to spread attention broadly. That's a poor strategy when you're confronted with two tasks that have different or even conflicting requirements. But that bias would make you more likely to multitask, even if it's not very effective.

Whether multitasking creates that bias or whether that bias exists for other reasons and prompts people to multitask is not known.

Either way, if heavy multitaskers have a bias to spread attention broadly, that bias should be helpful in tasks where two different streams of information are mutually supportive.

A new study (Lui & Wong, 2012) tests that prediction.

The researchers used the "pip and pop" task, in which subjects view a display filled with oblique line segments.
The subject's task is to find, as quickly as possible, the single horizontal or vertical line amidst the oblique lines, and to press a button identifying it as horizontal or vertical.

All of the lines alternate colors (red and green), but do so asynchronously. The interesting feature of this task is that every time the target changes color, there is an auditory signal—the pip. The pip doesn't tell you where the target is or whether it's horizontal or vertical; it simply coincides with the target's color change.

Subjects are not told that the pips have anything to do with the visual search task, nor that they should pay attention to the pips.

But people who integrate the visual information with the auditory signal report that the target seems to pop out of the display. They feel like they don't need to search laboriously; they just see it.

The researchers compared subjects' speed and accuracy in finding the target when the auditory signal was present, and found that accuracy (but not speed) correlated with subjects' self-reported frequency of multitasking. Not a huge effect, but reliable.

Most laboratory tests of multitasking use tasks that are uncorrelated, so spreading attention among them hurts performance. In this case, the information provided by the visual and auditory streams is mutually reinforcing, so spreading attention helps.

Does this have any bearing on the types of tasks people do outside of the lab?

Information in two different tasks is presumably uncorrelated. When two different streams of information are mutually reinforcing, it's by design—the audio and visual portions of a movie, for example. In such cases the synchronization is so good that people make few errors.

One way that multitaskers might have an advantage in real-world tasks is in the detection of unexpected signals. For example, if you're biased to monitor sounds even as you're writing a document, you might be more likely to perceive an auditory signal that an email has arrived in a noisy office environment. Or even to perceive a police siren or lights while driving. Such predictions have not, to my knowledge, been tested.


Lui, K. F. H., & Wong, A. C.-N. (2012). Does media multitasking always hurt? A positive correlation between multitasking and multisensory integration. Psychonomic Bulletin & Review, 19, 647-653.

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106, 15583-15587.

Gaming and attention-sharing--no easy answers

7/10/2012

 
A notable feature of most action video games is that one must pay attention to more than one thing simultaneously. For example, in a first-person shooting game like the one depicted below, one must move to navigate the terrain while avoiding hazards and seeking out beneficial objects. At the same time, the player might switch among different weapons or tools. Thus, one might think that extended practice on such games would lead to the development of a general skill in allocating attention among multiple tasks.
Picture
That's a logical conclusion, but two recent papers offer conflicting data as to whether it's the case.

In one (Donohue, James, Eslick & Mitroff, 2012), the authors compared 19 college-aged students who were avid gamers to students with no gaming experience (N = 26). Subjects completed three tasks: a simulated driving game, an image search task (finding simple objects in a complex drawing), and a multiple-object-tracking task. In the tracking task, a number of black circles appear on a white screen. Four of the circles flash for two seconds, and then all of the circles move randomly. At the end of 12 seconds the subject must identify which of the circles had flashed.

Subjects performed all three tasks twice: once on their own, and once with a distracting task (answering trivia questions) performed simultaneously. The question was whether the performance of the experienced gamers would be less disrupted by the attention-demanding trivia task.

These researchers found that it was not, as shown in the figure below (click for larger image).
Picture
The bars with dotted lines show the gamers' performance. Everyone performed worse in the dual-task condition (i.e., when answering trivia questions), but the cost to performance was the same for the gamers as for the non-gamers. Extensive gaming experience didn't lead to a general skill in sharing attention.

But a different group of researchers found just the opposite.

Strobach, Frensch & Schubert (2012) used much simpler tasks to compare 10 gamers and 10 non-gamers. They used simple reaction time tasks; the subject sat before a computer and listened over headphones for a tone. When it sounded, the subject was to push a button as fast as possible. A second task used a visual signal on the screen instead of a tone. In the attention-demanding dual-task version, either an auditory or a visual signal might appear, with a different response required for each.

In this experiment, gamers responded faster than non-gamers overall, but most important, their performance suffered less in the dual-task situation.

The authors didn't leave it at that. They recognized that the experimental paradigm they used has a significant drawback; they can't attribute the better attention-sharing skills to gaming, because the study is correlational. For example, it may be that some people just happen to be better at sharing attention, and these people are drawn to gaming because this skill makes them better at it.

To attribute causality to gaming, they needed to conduct an experiment. So the experimenters turned some "regular folk" into gamers by having them play an action game (Medal of Honor) for 15 hours. Control subjects played a puzzle game (Tetris) for 15 hours. Subjects improved their dual-task performance after playing the action game; the puzzle game did not have that effect.

So what is the difference between the two studies?

It's really hard to say. It's tempting to place more weight on the study that found the difference between gamers and non-gamers. Scientists generally figure that if you unwittingly make a mistake in the design or execution of a study, that's most likely to lead to null results. In other words, when you don't see an effect (as in the first study) it might be because there really is no effect, or it might just be that something went wrong.

But then again, the first study has more of what scientists call ecological validity--the tasks used in the laboratory look more like the attention-demanding tasks we care about outside of the laboratory (e.g., trying to answer a passenger's question while driving).

It may be that both studies are right. Gaming leads to an advantage in attention-sharing that is measurable with very simple tasks, but that is washed out and indiscernible in more complex tasks.

The conclusion, then, is a little disheartening. When it comes to the impact of action gaming on attention sharing, it's probably too early to draw a conclusion.

Science is hard.

Donohue, S. E., James, B., Eslick, A. N., & Mitroff, S. R. (2012). Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics, 74, 803-809.

Strobach, T., Frensch, P. A., & Schubert, T. (2012). Video game practice optimizes executive control skills in dual-task and task switching situations. Acta Psychologica, 140, 13-24.

The Gates Foundation's "engagement bracelets"

6/26/2012

 
It's not often that an initiative prompts grave concern in some and ridicule in others. The Gates Foundation managed it.

The Foundation has funded a couple of projects to investigate the feasibility of developing a passive measure of student engagement, using galvanic skin response (GSR).

The ridicule comes from an assumption that it won't work.

GSR basically measures how sweaty you are. Two leads are placed on the skin. One emits a very, very mild charge; the other measures it. The more sweat on your skin, the better your skin conducts the charge, and so the more charge the second lead picks up.

Who cares how sweaty your skin is?

Sweat--as well as heart rate, respiration rate, and a host of other physiological signs controlled by the peripheral nervous system--varies with your emotional state.

Can you tell whether a student is paying attention from these data? 

It's at least plausible that it could be made to work. There has long been controversy over how separable different emotional states are, based on these sorts of metrics. It strikes me as a tough problem, and we're clearly not there yet, but the idea is far from kooky. Indeed, the people who have been arguing it's possible have been making some progress--this lab group says they've successfully distinguished engagement, relaxation, and stress. (Admittedly, they gathered a lot more data than just GSR, and one measure they collected was EEG, a measure of the central, not peripheral, nervous system.)

The grave concern springs from the possible use to which the device would be put.

A Gates Foundation spokeswoman says the plan is that a teacher would be able to tell, in real time, whether students are paying attention in class. (Earlier the Foundation website indicated that the grant was part of a program meant to evaluate teachers, but that was apparently an error.)

Some have objected that such measurement would be insulting to teachers. After all, can't teachers tell when their students are engaged, or bored, or frustrated, etc.?

I'm sure some can, but not all of them. And it's a good bet that beginning teachers can't make these judgements as accurately as their more experienced colleagues--and beginners are just the ones who need this feedback. Presumably the information provided by the system would be redundant for teachers who can already read it from their students' faces and body language, and these teachers will simply ignore it.

I would hope that classroom use would be optional--GSR bracelets would enter classrooms only if teachers requested them.

Of greater concern to me are the rights of the students. Passive reading of physiological data without consent feels like an invasion of privacy. Parental consent ought to be obligatory. Then too, what about HIPAA? What is the procedure if a system that measures heartbeat detects an irregularity?

These two concerns--the effect on teachers and the effect on students--strike me as serious, and people with more experience than I have in ethics and in the law will need to think them through with great care.

But I still think the project is a terrific idea, for two reasons, neither of which has received much attention in all the uproar.

First, even if the devices were never used in classrooms, researchers could put them to good use.

I sat in on a meeting a few years ago of researchers considering a grant submission (not to the Gates Foundation) on this precise idea--using peripheral nervous system data as an on-line measure of engagement. (The science involved here is not really in my area of expertise, and I had no idea why I was asked to be at the meeting, but that seems to be true of about two-thirds of the meetings I attend.) Our thought was that the device would be used by researchers, not teachers and administrators.

Researchers would love a good measure of engagement because the proponents of new materials or methods so often claim "increased engagement" as a benefit. But how are researchers supposed to know whether or not the claim is true? Teacher or student judgements of engagement are subject to memory loss and to well-known biases.

In addition, I see potentially great value for parents and teachers of kids with disabilities. For example, have a look at these two pictures.
Picture
This is my daughter Esprit. She's 9 years old, and she has Edwards syndrome. As a consequence, she has a host of cognitive and physical challenges; e.g., she cannot speak, and she has limited motor control and poor muscle tone (she can't sit up unaided).

Esprit can never tell me that she's engaged either with words or signs. But I'm comfortable concluding that she is engaged at moments like that captured in the top photo--she's turning the book over in her hands and staring at it intently.

In the photo at the bottom, even I, her dad, am unsure of what's on her mind. (She looks sleepy, but isn't--ptosis, or drooping upper eyelids, is part of the profile.) If Esprit wore this expression while gazing toward a video, for example, I wouldn't be sure whether she was engaged by the video or was spacing out.

Are there moments that I would slap a bracelet on her if I thought it could measure whether or not she was engaged?

You bet your sweet bippy there are. 

I'm not the first to think of using physiologic data to measure engagement in people with disabilities that make it hard for them to make their interests known. In this article, researchers sought to reduce the communication barriers that exclude children with disabilities from social activities; the kids might be present, but because of their difficulties describing or showing their thoughts, they cannot fully participate in the group. The researchers reported some success in distinguishing engaged from disengaged states of mind from measures of blood volume pulse, GSR, skin temperature, and respiration in nine young adults with muscular dystrophy or cerebral palsy.

I respect the concerns of those who see the potential for abuse in the passive measurement of physiological data. At the same time, I see the potential for real benefit in such a system, wisely deployed.

When we see the potential for abuse, let's quash that possibility, but let's not let it blind us to the possibility of the good that might be done.

And finally, because Esprit didn't look very cute in the pictures above, I end with this picture.

Picture