Daniel Willingham--Science & Education
Hypothesis non fingo

Are screens "digital heroin"?

8/29/2016

 

A piece appeared in the New York Post on August 27 with the headline "It's digital heroin: How screens turn kids into psychotic junkies." 

Even allowing for the fact that authors don't write headlines, this article is hyperbolic and poorly argued. I said as much on Twitter and my Facebook page, and several people asked me to elaborate. So....

First, to say "Recent brain imaging research is showing that they [games] affect the brain’s frontal cortex — which controls executive functioning, including impulse control — in exactly the same way that cocaine does," is transparently false. Engaging in a behavior cannot affect your brain in exactly the same way a psychoactive drug does. Saying it does is a scare tactic.

Lots of activities give people pleasure and it's sensible that these activities show some physiological similarities, given the similarity in experience. But if you want to suggest that the analogy (games and cocaine both change dopamine levels, therefore they are similar in other ways) extends to other characteristics, you need direct evidence for those other characteristics. Absent that, it's as though you buy a pet bunny and I say "My God, bunnies have four legs. Don't you realize TIGERS have four legs? You've brought a tiny tiger into your home!"

On the addiction question: The American Psychiatric Association considered including Internet Addiction in DSM-5 and elected not to, though the diagnosis remains under study. Research is ongoing, and technology changes quickly, so it wouldn't make sense to close the book on the issue.

To qualify as an addiction, more is required than that the person likes the activity a lot and does it a lot. Addictions are usually characterized by:

• Tolerance--an increased amount of the behavior becomes necessary
• Withdrawal symptoms if the behavior is stopped
• The person wants to quit but can't
• Lots of time spent on the behavior
• Engaging in the behavior even though it's counterproductive

The last two of these seem a good fit for kids who spend a huge amount of time on gaming or other digital technologies. The others fit less obviously.

Let me be plain: I have plenty of concerns about both the content of children's digital media and the amount of time they spend with it. I've written about this issue elsewhere, and my own children still at home (ages 13, 11, and 9) face stringent restrictions on both.

But thinking through a complex issue like the social, emotional, cognitive, and motivational consequences of omnipresent screens in daily life requires clear-headed thinking, not melodramatic claims based on thin analogies.

Why Do Intervention Effects Fade?

8/21/2016

 
Nothing is more familiar to those who follow the literature on early educational interventions: a program meant to boost children’s reading or math or school readiness works wonderfully, helping children who started preK at a disadvantage achieve at levels comparable to other kids…but follow-up studies in later years show that the boost was not long-lasting. The results faded.

The most common explanation (and the one that I had always assumed was right) centers on the content these children were taught after the intervention ended. Instruction must continue to challenge these children, to extend their accomplishments. If teachers emphasize more basic material, naturally we’ll observe fadeout.

I’ve sometimes used this metaphor: early intervention is not like setting the trajectory of a rocket, a one-time event that, if you get it right, you needn’t think about again. It’s more like extra fuel in the booster rocket; it gets kids to the right altitude early on, but you’ve got to ensure that they have the same fuel in their rocket that other kids do after the intervention.

A recent article by Drew Bailey and colleagues (2016) casts doubt on this explanation. They call it the Constraining Content hypothesis, and set forth a competing explanation they call the Pre-Existing Differences hypothesis.

The Pre-Existing Differences idea goes like this: you identified a bunch of students who were either behind or at risk of falling behind. You intervened. At the end of the year, they are no longer behind. Fine, but you didn’t select students for the intervention randomly. You picked them because they were behind, and at least some of the reasons they were behind will still be present at the end of the intervention.

Maybe their home environment does not support mathematics achievement, for any of a large number of reasons. Maybe these children’s beliefs about mathematics and expectations of themselves differ. Maybe their working memory capacity and/or general intelligence differ. Whatever the reasons children start preK behind, is there any good reason to suppose those factors have magically disappeared by the end of the intervention? Or that they will stop affecting math achievement going forward?

Here’s how Bailey and colleagues compared the Constraining Content and the Pre-Existing Differences hypotheses. They used a preK math intervention that is known to work (Building Blocks). They measured math ability at the start of preK, at the end of preK (after the intervention), and at the end of kindergarten. You’d expect to see better scores for the kids getting the intervention (compared to controls) at the end of preK, but then a diminution of that advantage at the end of kindergarten (classic fadeout), and that’s what you see. Here are the results for the overall treatment and control groups.
[Figure: math scores for the treatment and control groups at the start of preK, the end of preK, and the end of kindergarten]
Here’s the interesting part of the experiment. The researchers identified control students (those randomly assigned not to receive the intervention) who nevertheless scored well at the end of preK: even though they had not received the intervention during the prior year, their scores were comparable to those of kids who did.

All children were to receive the same instruction in kindergarten. So if the Constraining Content hypothesis is right, the two groups should show comparable learning. But the Pre-Existing Differences hypothesis makes a different prediction. The control kids who nevertheless scored as well as the intervention kids had something going for them during the preK year: lots of support at home, lots of math smarts, whatever. Those factors will still contribute in kindergarten, so these control kids will score better than the intervention kids at the end of kindergarten.
[Figure: scores for matched high-scoring control and intervention children at the end of preK and the end of kindergarten]
It makes sense that kids who managed to score well at the end of preK without actually experiencing the intervention scored better on the pre-test.

And crucially, those out-of-school factors are still present at follow-up. Even though the two groups experienced the same instruction during kindergarten and began the year with comparable math knowledge, by the end of the year the control kids are doing better.

The researchers had another way to compare the Constraining Content and the Pre-Existing Differences hypotheses. They paired students by score: one control and one intervention kid who scored comparably on the post-test. They sorted these pairs into a higher- and a lower-achieving group, then looked at the follow-up scores of each. The Constraining Content hypothesis predicts that fadeout will be worse for the higher-scoring kids, who are most affected by the insufficiently challenging content, while the lower-scoring kids should be catching up, because for them the instruction is challenging. But the data showed equivalent gains in the high- and low-scoring pairs.

We need a new metaphor. Intervention for at-risk students is not like resetting the trajectory of the rocket, but neither is it just extra fuel in the booster to reach altitude, after which you need only ensure these kids have the same fuel as everyone else. If they are to keep pace with their peers, they continue to need extra fuel in the rocket after the intervention ends.
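The logic of the matched-control comparison can be made concrete with a toy simulation. All the numbers below are invented for illustration (they come from no actual study): each child gets a persistent "out-of-school factors" score, half get a one-time preK boost, and later growth depends on the persistent factor rather than the boost.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

ability = rng.normal(size=n)      # persistent out-of-school factors
treated = rng.random(n) < 0.5     # random assignment to the intervention
BOOST = 0.8                       # invented one-time effect of the preK program

# End-of-preK score: ability, plus the boost for treated kids, plus noise.
post = ability + BOOST * treated + rng.normal(scale=0.5, size=n)

# End-of-kindergarten score: everyone gets the same instruction, only part
# of the preK knowledge carries forward, and new learning is driven by the
# same persistent factors.
follow = 0.5 * post + ability + 1.0 + rng.normal(scale=0.5, size=n)

# Classic group-level fadeout: the treatment advantage shrinks.
post_gap = post[treated].mean() - post[~treated].mean()
follow_gap = follow[treated].mean() - follow[~treated].mean()

# The matched comparison: control kids who scored as well as treated kids
# at the end of preK must have stronger persistent factors, so they pull
# ahead by the end of kindergarten even with identical instruction.
band = (post > 0.6) & (post < 1.0)
matched_gap = follow[band & ~treated].mean() - follow[band & treated].mean()

print(f"post gap {post_gap:.2f}, follow-up gap {follow_gap:.2f}, "
      f"matched controls ahead by {matched_gap:.2f}")
```

Nothing here was hidden or subtle: fadeout and the matched controls pulling ahead both fall straight out of a persistent factor that the intervention never touched, which is exactly the Pre-Existing Differences claim.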

We Like Reductive Explanations, Especially Brainy Ones

8/16/2016

 
Do you remember the “seductive allure” experiments? Those are the ones showing that people find explanations of psychological phenomena more satisfying if they include neuroscientific details, even if those details are irrelevant. (See here, here, and here.)

Emily Hopkins and her colleagues at Penn noted that there is more than one possible explanation for the effect. It may be that people hold neuroscience in special esteem, or that they like the physicality of neuroscience (in contrast with the seeming intangibility of behavioral explanation), or perhaps it’s the reductiveness that holds appeal. Hopkins and her group focused on this last possibility. They presented subjects with good and bad explanations for phenomena in six different sciences and asked them to rate the quality of the explanations from -3 to 3. Some of the explanations were horizontal and some were reductive, according to this hierarchy of sciences (a reductive explanation appeals to the science one level down in the hierarchy; a horizontal one stays at the same level).
[Figure: the hierarchy of sciences used to classify explanations as reductive or horizontal]
Here are examples of good and bad explanations, reductive and horizontal, from biology.
[Figure: sample good and bad, reductive and horizontal explanations from biology]
Subjects rated good explanations as better than bad ones, but they also rated reductive explanations more positively than horizontal explanations (M = 1.26 vs. 1.04). This effect was somewhat larger when the reductive information was neuroscientific (purportedly explaining psychology) than for other pairs. Still, when each pair was evaluated separately, participants gave higher ratings for the reductive explanation in five of six sciences.

The researchers gathered some other data about participants that cast an interesting light on these findings. They found that those who had taken more science courses at the college level were better at discriminating good from bad explanations. That was not the case for participants who had taken more college-level philosophy courses (although these participants scored better on a logical syllogisms task).

Researchers also asked participants questions about their perceptions of these sciences. Questions concerned each field’s scientific rigor, its social prestige, and how much more an expert knows than a novice. The graph shows averages for each science, with the three questions combined into a single measure.
[Figure: average ratings of rigor, prestige, and expert-novice knowledge gap for each science]
These ratings offer a possible explanation for why reductive explanations are especially appealing in the case of psychology/neuroscience: people don’t think much of psychology, but they hold neuroscience in esteem.

Although the effect is strongest for psychology, it is helpful to know that the “seductive allure” effect is not restricted to brains. It seems there is some expectation that part of how the sciences explain our world is to break things into ever smaller pieces. When that’s part of an explanation, it sounds like science is doing what it is supposed to do.



    Purpose

The goal of this blog is to provide pointers to scientific findings, applicable to education, that I think ought to receive more attention.

