Daniel Willingham--Science & Education
Hypothesis non fingo

Bad news for brain training

6/22/2016

 
Improving a specific skill is not hard. Or at least knowing what to do (practice) is not hard, even if actually doing it is not so easy. But improving very general skills, the sort that underlie many of the tasks we take on, has proven much more difficult. The grail among these general skills is working memory, as it's thought to be a crucial component of (if not nearly synonymous with) fluid intelligence. Brain training programs that promise wide-ranging cognitive improvements usually offer tasks meant to exercise working memory, and so increase its capacity and/or efficiency.

Claims of scientific support (e.g., here) have been controversial (see here), in part because many of the studies, even ones claiming "gold standard" methodologies, have not been conducted in the ideal way. The controversy usually arises after the fact: researchers claim that brain training works, and critics point out flaws in the study design.

A new study has examined more directly the possibility that brain training gains are due to placebo effects, and it indicates that's likely. 

Cyrus Foroughi and his colleagues at George Mason University set out to test the possibility that knowing you are in a study that purportedly improves intelligence will affect your performance on the relevant tests. The independent variable in their study was the method of recruitment via an advertising flyer: either you knew you were signing up for brain training or you didn't. 
The flyer at left might attract a different sort of participant than the one at right. Or participants may not differ, except that some have been led to expect a particular outcome of the experiment.

All participants went through the same experimental procedure. They took two standard fluid intelligence tests. Then they participated in one hour of working memory training, the oft-used N-back task. The final outcome measures--the fluid intelligence tests--took place the following day. 
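For readers unfamiliar with it, the N-back task presents a stream of items one at a time, and the subject responds whenever the current item matches the item N positions back. A minimal sketch of how such a task is scored (a generic letter-stream version for illustration; the study's exact stimuli and procedure may have differed):

```python
# Toy illustration of N-back scoring; the letter stream and scoring rule
# are a generic version of the task, not the study's exact procedure.

def n_back_targets(stream, n):
    """Positions where the current item matches the item n positions back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

def score(stream, n, responses):
    """Hits and false alarms, given the set of positions the subject flagged."""
    targets = set(n_back_targets(stream, n))
    hits = len(targets & responses)
    false_alarms = len(responses - targets)
    return hits, false_alarms

# Example: a 2-back over a short letter stream.
stream = "ABABCAC"
# 2-back matches occur at indices 2 ('A'), 3 ('B'), and 6 ('C').
hits, fas = score(stream, 2, responses={2, 6, 4})
```

Training programs typically make the task adaptive, raising N as performance improves; the point here is only what counts as a target.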

Even advocates of brain training would agree that a single hour of practice is not enough to produce any measurable effect. Yet subjects who thought brain training would make them smarter improved. Control subjects did not. 
It's well known that scores on IQ tests are sensitive to incentives; people do a little better if they are paid, for example. People in the placebo group might try harder on the second IQ test because they know how the experiment is "supposed" to come out and they unconsciously try to comply. The belief that training works might either have been planted by the flyer, or the flyer might have acted as a screening device, luring people who believed brain training works while failing to attract people who didn't.

Most published experiments on brain training have not reported whether subjects were recruited in a way that made the purpose plain. Foroughi and his colleagues contacted the researchers behind 19 published studies that were included in a meta-analysis and found that in 17 of them, subjects knew the purpose of the study.

It should be noted that this new experiment does not show conclusively that brain training cannot work. It shows that placebo effects appear to be very easy to obtain in this type of research. I dare say the problem is even more dramatic than critics had appreciated, and more than ever, the onus is on advocates of brain training to show their methods work.

A possible limit to the retrieval practice effect on memory

6/13/2016

 
Readers of this blog likely know about the retrieval practice effect: probing memory for learned information is better for cementing that information into memory than continued study is. (Whether or not it's familiar, be sure to bookmark www.retrievalpractice.org/.)

If I asked "what are the consequences of retrieval practice for memory?" the question sounds silly...I just said it makes memory better. "Better" sounds like it means stronger, more robust, easier to get out of memory, more likely to last.

A recent study (Sutterer & Awh, 2016) separated two aspects of memory improvement: likelihood of retrieving the memory at the right time and precision of the memory.

Previous studies of the retrieval practice effect have confounded those aspects of memory by using binary stimuli; if the to-be-remembered item is "la montre" when I see the word "watch," there's no opportunity to show that the memory has become more precise...I either remember it or I don't. Maybe the memory representation did get more precise, and that made it more distinctive in memory and therefore easier to find...or maybe retrieval practice didn't make my memory for "la montre" more precise at all, but made the memory more accessible.

Instead of binary stimuli, Sutterer & Awh used stimuli that differed on a continuous scale. The stimuli were silhouettes of familiar objects in different colors. Subjects were to associate the color with the object.  
Everyone began with a study session. Then some subjects got another study session, whereas others were tested on their memory for the colors: they saw a white outline of the object and had to select a color from a color wheel.
Note that in the restudy condition (like the retrieval condition), subjects had to make a response on the color wheel. This ensures (1) that we know people can use the wheel, so if we don't see an effect on precision, we can be confident the lack of effect is not due to awkwardness with the wheel; and (2) that if we do see a retrieval practice effect, it's not because the retrieval group made a response and the restudy group didn't. Everyone makes a response, but only the retrieval practice group probes memory.
In the final session everyone's memory for the colors was tested. 
You can imagine two types of responses on this final test. First, the subject says to herself "Dinosaur, eh? I cannot remember what color went with the dinosaur at all. I'm going to have to guess." Second, the subject says to herself "Dinosaur. Right, that was sort of a blue-green. Let me try to pick the right color on the wheel here...."

Of course we can't see what the subject is thinking...we only know they pick something on the color wheel. But it's possible to fit a mathematical model to the responses, assuming that the distribution of responses will be uniform for guesses (that is, they are guessing randomly) and normal (i.e., bell-shaped) for correct responses. The observed responses are a mix of these distributions. 

The modelling doesn't allow us to say whether any one individual response is a guess or not, but using all the data we can characterize the two distributions. 

Retrieval practice might either (1) increase the proportion of responses that are correct (and decrease the number of guesses) OR increase the precision of correct responses (i.e., make the distribution of correct responses tighter around the target value). Or both. 
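The mixture-model logic above can be sketched in code. In this toy version (all numbers are made up for illustration; the study's actual model and fitting method may differ), response errors in degrees are simulated as a mixture of a normal distribution around the target (correct memories) and a uniform distribution over the wheel (guesses), and the two parameters — proportion remembered and precision — are recovered by maximum likelihood:

```python
# Sketch of the uniform + normal mixture analysis described above.
# The parameter values and the crude grid-search fit are illustrative,
# not the study's actual procedure.
import math
import random

random.seed(1)

P_TRUE = 0.6        # true proportion of trials where the color is remembered
SIGMA_TRUE = 20.0   # true precision (standard deviation, in degrees)

# Simulate response errors on a -180..180 degree color wheel: remembered
# responses cluster normally around the target; guesses are uniform.
errors = []
for _ in range(2000):
    if random.random() < P_TRUE:
        errors.append(random.gauss(0.0, SIGMA_TRUE))
    else:
        errors.append(random.uniform(-180.0, 180.0))

def log_likelihood(p, sigma):
    """Log-likelihood of the errors under a p*Normal + (1-p)*Uniform mixture."""
    ll = 0.0
    for e in errors:
        norm = math.exp(-e * e / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        ll += math.log(p * norm + (1 - p) / 360.0)
    return ll

# Crude grid search in place of a proper optimizer: try candidate values
# of p and sigma, keep the pair that makes the data most likely.
best = max(
    ((p / 100, s, log_likelihood(p / 100, s))
     for p in range(5, 100, 5)
     for s in range(5, 65, 5)),
    key=lambda t: t[2],
)
p_hat, sigma_hat = best[0], best[1]
```

In these terms, the first possible effect of retrieval practice is a higher fitted p (fewer guesses), and the second is a smaller fitted sigma (a tighter distribution of correct responses).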

The results showed that retrieval practice had no impact on memory precision at all; it made correct memories more accessible. You were more likely to remember that the kiwi was sort of an orange color, but no closer to the exact shade of orange. A follow-up experiment showed the same results when the test occurred after a 24-hour delay.

What's the implication for classroom practice? Nothing, as yet. This is just one study. But there are things that teachers want students to know that we might think of as continuous (like a color) rather than discrete (like a color label). One example might be aspects of "depth" of vocabulary (Perfetti, 2007) or the relationship of numerosity and space (Hubbard et al., 2005).

It makes some intuitive sense that retrieval practice can make it easier to get to a memory that's already in there, but can't further fine-tune the memory to make it more precise.

Media multitasking and cognition in teens--new data

6/7/2016

 
Several reports in the last few years have shown a relationship between cognitive function and the frequency of media multitasking (e.g., Cain et al, 2014; Minear et al, 2013; Ophir et al, 2009). Studies don’t always directly replicate each measure, but overall people who report habitual media multitasking tend to be less proficient in the use of working memory, and more impulsive. 

One drawback of this work is that it has tested only college students, although they are obviously not alone in multitasking. As technology prices drop, younger children have increased access to MP3 players, smartphones, gaming platforms, etc., and with increased access comes increased use. Is the relationship between multitasking habits and cognition different for younger children?

In a recent study, Matthew Cain and colleagues (in press) tested 73 subjects (average age = 14.4 years) on a battery of cognitive and personality tests. They largely replicated previous work, and added some interesting extensions.

First, they found that frequency of media multitasking was negatively associated with working memory span measures. They did not find a relationship with the ability to filter information in working memory, which contrasts with some previous work (Ophir et al, 2009).  Cain et al speculated that Ophir et al may have used a filtering task that was also demanding of span.

Second, Cain et al wanted a measure with more real-world validity, so they correlated media multitasking scores with student scores on the state tests for math and English (for Massachusetts, where the work was conducted). A significant negative correlation was observed. Scores were not related to standard lab measures of math and English ability, however. The researchers suggested that the state test scores might have been affected because test administration takes much longer, so scores would be more likely to suffer if students were prone to distraction. In other words, students might have been equally proficient in math and English, but less able to maintain attention for a long duration.

Third, researchers wanted to show that media multitasking was not related to cognition across the board, so they administered some tests of cognitive ability they thought were unlikely to be related, and indeed found that manual dexterity, and the unconscious learning of a complex category, were not related to media multitasking.

Fourth, the researchers tested whether media multitasking was related to personality characteristics and beliefs that are known to be related to academic achievement. They observed no relationship with grit or conscientiousness, but did see a robust negative relationship with growth mindset. The authors expected to see relationships with all three; why is not clear to me. If you think that most teens know that multitasking while reading or studying is not conducive to good performance, you might predict a negative correlation with grit and conscientiousness, but it's not clear that most teens are aware of the relationship. The observed relationship with growth mindset (a belief about the nature of intelligence) puzzles me, and the authors say little about it.

Perhaps the most important aspect of these results is the age of the children tested. The results show that the relationship between media multitasking and cognition that we see in college students is observable by middle adolescence. That marginally increases the probability that the direction of causality is cognition prompting media multitasking behavior, not multitasking prompting changes in cognition. The authors are appropriately cautious in discussing causality but the results are a bit easier to account for with that explanation, the idea being that younger children have had less multitasking experience, and so there has been less opportunity for multitasking to influence cognition—yet the relationship is present.

One point about this study does make me uneasy. The median self-reported estimate of media use was 18 hours/day. A good number of subjects offered weekly estimates of media use that amounted to more than 24 hours per day. The authors note that these seem like overestimates, and so they suggest that these raw figures are not really interpretable.
But they go on to take the subject reports of frequency of media multitasking at face value. For each medium, subjects were asked how often they engaged with that medium while doing something else, choosing among four categories (Never, a little of the time, some of the time, most of the time). Why trust these estimates, if their estimates of the number of hours spent with media are not trustworthy? Both are retrospective memory reports, and the judgment of media multitasking has not been validated, as far as I can tell.
That validation ought to be high on the priority list for researchers in this area. It would be a terrible shame if all of these effects that we are tying to media multitasking actually ought to be tied to inaccuracy and bias in memory reports.
