Daniel Willingham--Science & Education
Hypothesis non fingo

How useful are manipulatives in mathematics?

1/28/2013

 
How much help is provided to a teacher and student by the use of manipulatives--that is, concrete objects meant to help illustrate a mathematical idea?

My sense is that most teachers and parents think that manipulatives help a lot. I could not locate any really representative data on this point, but the smaller scale studies I've seen support the impression that they are used frequently. In one study of two districts the average elementary school teacher reported using manipulatives nearly every day (Uribe-Florez & Wilkins, 2010).
Do manipulatives help kids learn? A recent meta-analysis (Carbonneau, Marley, & Selig, 2012) offers a complicated picture. The short answer is "on average, manipulatives help. . . a little." But the more complete answer is that how much they help depends on (1) what outcome you measure and (2) how the manipulatives are used in instruction.

The authors analyzed the results of 55 studies that compared instruction with or without manipulatives. The overall effect size was d = .37--typically designated a "moderate" effect.
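
(For readers who want the statistic unpacked: an effect size like this is, in its standard Cohen's d form, just the difference between the treatment and control group means expressed in pooled standard-deviation units. This is a rough sketch of the usual definition, not the exact aggregation the meta-analysis performed:

\[
d = \frac{\bar{M}_{\text{manipulatives}} - \bar{M}_{\text{control}}}{SD_{\text{pooled}}}
\]

So d = .37 means that, averaged across studies, students taught with manipulatives scored about a third of a standard deviation higher than comparison students.)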

But there were big differences depending on the content being taught: for example, the effect for fractions was considerably larger (d = .69) than the effect for arithmetic (d = .27) or algebra (d = .21).
More surprising to me, the effect was largest when the outcome of the experiment focused on retention (d = .59), and was relatively small for transfer (d = .13).

What are we to make of these results? I think we have to be terribly cautious about any firm take-aways. That's obvious from the complexity of the results (and I've only hinted at the number of interactions).

It seems self-evident that one source of variation is the quality of the manipulative. Some just may not do that great a job of representing what they are supposed to represent. Others may be so flashy and interesting that they draw attention to peripheral features at the expense of the features that are supposed to be salient.

It also seems obvious that manipulatives can be more or less useful depending on how effectively they are used. For example, some fine-grained experimental work indicates that the effectiveness of using a pan balance as an analogy for balancing equations depends on fairly subtle features of what students' attention is drawn to, and when (Richland et al., 2007).

My hunch is that at least one important source of variability (and one that's seldom measured in these studies) is the quality and quantity of relevant knowledge students have when the manipulative is introduced. For example, we might expect that a student with a good grasp of numerosity would be in a better position to appreciate a manipulative meant to illustrate place value than a student whose grasp is tenuous. Why?

David Uttal and his associates (Uttal et al., 2009) emphasized this factor when they pointed out that the purpose of a manipulative is to help students understand an abstraction. But a manipulative is itself an abstraction—it's not the thing-to-be-learned, it's a representation of that thing—or rather, a feature of the manipulative is analogous to a feature of the thing-to-be-learned. So the student must simultaneously keep in mind the status of the manipulative as a concrete object and as a representation of something more abstract. The challenge is that keeping this dual status in mind and coordinating the two can place a significant load on working memory. This challenge is potentially easier to meet for students who firmly understand the concepts undergirding the new idea.

I’m generally a fan of meta-analyses. I think they offer a principled way to get a systematic big picture of a broad research literature. But the question “do manipulatives help?” may be too broad. It seems too difficult to develop an answer that won’t be mostly caveats.

So what's the take-away message? (1) Manipulatives typically help a little, but the range of effects (from hurting a little to helping a lot) is huge; (2) researchers have some ideas as to why manipulatives work or don't work. . . but not in a way that offers much help in classroom application.

This is an instance where a teacher’s experience is a better guide.

References

Carbonneau, K. J., Marley, S. C., & Selig, J. P. (in press). A meta-analysis of the efficacy of teaching mathematics with concrete manipulatives. Journal of Educational Psychology. Advance online publication.

Richland, L. E., Zur, O., & Holyoak, K. J. (2007). Cognitive supports for analogies in the mathematics classroom. Science, 316, 1128–1129.

Uribe‐Flórez, L. J., & Wilkins, J. L. (2010). Elementary school teachers' manipulative use. School Science and Mathematics, 110, 363-371.

Uttal, D. H., O'Doherty, K., Newland, R., Hand, L. L., & DeLoache, J. (2009). Dual representation and the linking of concrete and symbolic representations. Child Development Perspectives, 3, 156–159.




The "human touch" in computer-based instruction.

9/12/2012

 
That a good relationship between teacher and student matters is no surprise. More surprising is that the "human touch" is so powerful it can improve even computer-based learning.

In a series of ingenious yet simple experiments, Rich Mayer and Scott DaPra showed that students learn better from an onscreen slide show when it is accompanied by an onscreen avatar that uses social cues.
Eighty-eight college students watched a 4-minute PowerPoint slide show that explained how a solar cell converts sunlight to electricity. It consisted of 11 slides and a voice-over explanation.

Some subjects saw an avatar that used a full complement of social cues (gesturing, changing posture, facial expression, changes in eye gaze, and lip movements synchronized to speech) meant to direct student attention to relevant features of the slide show.

Other subjects saw an avatar that maintained the same posture, maintained eye gaze straight ahead, and did not move (except for lip movements synchronized to speech).

A third group saw no avatar at all, but just saw the slides and listened to the narration.

All subjects were later tested with fact-based recall questions and with transfer questions (e.g., "How could you increase the electrical output of a solar cell?") meant to test subjects' ability to apply their knowledge to new situations.

There was no difference among the three groups on the retention test, but there was a sizable advantage (d = .90) for the high-embodiment subjects on the transfer test. (The low-embodiment and no-avatar groups did not differ.)

A second experiment showed that the effect was only obtained when a human voice was used; the avatar did not boost learning when synchronized to a machine voice.

The experimenters emphasized the social aspect of the learning situation: students process the slideshow differently because the avatar is "human enough" that they treat the interaction the way they would an interaction with a real person. This interpretation seems especially plausible in light of the second experiment; all of the more cognitive cues (e.g., the shifts in the avatar's eye gaze prompting shifts in the learner's attention) were still present in the machine-voice condition, yet there was no advantage to learners.

There is something special about learning from another person. Surprisingly, that other person can be an avatar.

Mayer, R. E., & DaPra, C. S. (2012). An embodiment effect in computer-based learning with animated pedagogical agents. Journal of Experimental Psychology: Applied, 18, 239–252.

Book review: Textbook publishers are the problem

5/14/2012

 
In Tyranny of the Textbook, Beverlee Jobrack offers many observations that you've heard before. Standards alone won't improve achievement. Testing alone won't improve achievement. Technology alone won't improve achievement. What makes the book worth reading is not Jobrack's thoughts on these topics, because they are, frankly, fairly ordinary. But her thoughts on the textbook industry make the book well worth your time.

The kernel of her argument has three pieces:

(1) Textbook development: Textbooks are developed based on tradition and on competitors' products. No one in the publishing industry worries about whether the materials are effective. As Jobrack notes, publishers are for-profit enterprises. They need decision-makers to adopt their textbooks. Decision-makers do not base adoptions on effectiveness—or at least, publishers believe that they do not.

(2) Textbook adoption: What factors drive adoptions? To the extent that teachers have any input, it comes from teacher leaders, who already teach well. They have an existing set of lesson plans that work, so they are not interested in a textbook that would necessitate rewriting all of those lesson plans. As a result, new textbooks tend to be conservative. Further, just three publishing companies account for 75% of the market, so most of the books look the same. Consequently, relatively trivial features have an outsize influence on adoption decisions.

Trivial features like the cover design. Like the font size. Like whether the important features are clearly labeled or a bit more difficult to find.

Content matters to adoptions, according to Jobrack, only insofar as the publishers ensure that all of the state standards are "covered." But she goes on to point out that little or no attention is paid to ordering and presenting this content in a way that ensures that students learn. Again, effectiveness of learning is simply not on the publishers' radar screen.

(3) Why textbooks matter: Jobrack argues that textbooks are hugely important because they constitute a de facto curriculum. Beginning teachers are overwhelmed by the prospect of writing lesson plans, and so depend heavily on instructional materials provided by publishers.

Is Jobrack right about all this? She ought to know whereof she speaks. She was promoted through the editorial ranks until she was the editorial director of SRA/McGraw-Hill. Still, we should bear in mind that these are mostly Jobrack’s impressions, not a systematic study of publishing business practices.

I admit that I’m probably more ready to believe Jobrack on publishing because her description so often matches my own experience. Like the beginning teachers she describes, when I first started teaching cognitive psychology, I relied heavily on published materials.  I laid out four textbooks on my desk, used the sequence of topics they all shared, and cobbled together lectures by stealing the best stuff from each.

I saw the conservatism Jobrack describes much later, when I prepared to write my cognitive textbook and told my editor that I wanted to do something really different from what was currently on the market. Her response: "Okay, but don't make it more than about 20% different or you'll never get any adoptions."

A point Jobrack makes indirectly, but which strikes me as more important than she realizes, is the role of measurement. Jobrack notes that publishers would be motivated to make textbooks effective if that drove the market. Well, in order to know whether textbooks are effective, we—teachers, administrators, parents, researchers, policymakers—need to agree on what we mean by "effective" and on a way to measure it. The textbook problem brings fresh urgency to this issue.

Whether Jobrack is right or not, I hope this book will prompt greater discussion about textbooks, and greater scrutiny of adoption processes.
