How much help is provided to a teacher and student by the use of manipulatives--that is, concrete objects meant to help illustrate a mathematical idea?
My sense is that most teachers and parents think that manipulatives help a lot. I could not locate any really representative data on this point, but the smaller-scale studies I've seen support the impression that they are used frequently. In one study of two districts, the average elementary school teacher reported using manipulatives nearly every day (Uribe-Flórez & Wilkins, 2010).
Do manipulatives help kids learn? A recent meta-analysis (Carbonneau, Marley, & Selig, in press) offers a complicated picture. The short answer is "on average, manipulatives help. . . a little." But the more complete answer is that how much they help depends on (1) what outcome you measure and (2) how the manipulatives are used in instruction.
The authors analyzed the results of 55 studies that compared instruction with or without manipulatives. The overall effect size was d = .37, typically designated a "moderate" effect. But there were big differences depending on the content being taught: for example, the effect for fractions was considerably larger (d = .69) than the effect for arithmetic (d = .27) or algebra (d = .). More surprising to me, the effect was largest when the outcome of the experiment focused on retention (d = .59), and was relatively small for transfer (d = .).
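For readers unfamiliar with the statistic: these effect sizes are Cohen's d, the difference between group means divided by the pooled standard deviation. A minimal sketch, using made-up classroom numbers (not data from the meta-analysis):

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2) / (
        n_treat + n_ctrl - 2
    )
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical classrooms: a 3.7-point advantage against a 10-point spread
print(round(cohens_d(78.0, 74.3, 10.0, 10.0, 30, 30), 2))  # 0.37
```

So d = .37 means the average student taught with manipulatives scored about a third of a standard deviation above the average control student.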
What are we to make of these results? I think we have to be terribly cautious about any firm take-aways. That's obvious from the complexity of the results (and I've only hinted at the number of interactions).
It seems self-evident that one source of variation is the quality of the manipulative. Some just may not do that great a job of representing what they are supposed to represent. Others may be so flashy and interesting that they draw attention to peripheral features at the expense of the features that are supposed to be salient.
It also seems obvious that manipulatives can be more or less useful depending on how effectively they are used. For example, some fine-grained experimental work indicates that the effectiveness of using a pan balance as an analogy for balancing equations depends on fairly subtle choices about what to draw students’ attention to, and when (Richland et al., 2007).
My hunch is that at least one important source of variability (and one that's seldom measured in these studies) is the quality and quantity of relevant knowledge students have when the manipulative is introduced. For example, we might expect that the student with a good grasp of numerosity would be in a better position to appreciate a manipulative meant to illustrate place value than the student whose grasp is tenuous. Why?
David Uttal and his associates (Uttal et al., 2009) emphasized this factor when they pointed out that the purpose of a manipulative is to help students understand an abstraction. But a manipulative itself is an abstraction—it’s not the thing-to-be-learned, it’s a representation of that thing—or rather, a feature of the manipulative is analogous to a feature of the thing-to-be-learned. So the student must simultaneously keep in mind the status of the manipulative as a concrete object and as a representation of something more abstract. The challenge is that keeping this dual status in mind, and coordinating the two representations, can be a significant load on working memory. This challenge is potentially easier to meet for those students who firmly understand the concepts undergirding the new idea.
I’m generally a fan of meta-analyses. I think they offer a principled way to get a systematic big picture of a broad research literature. But the question “do manipulatives help?” may be too broad. It seems too difficult to develop an answer that won’t be mostly caveats.
So what’s the take-away message? (1) Manipulatives typically help a little, but the range of effects (from hurts a little to helps a lot) is huge; (2) researchers have some ideas as to why manipulatives work or don’t work. . . but not in a way that offers much help in classroom application.
This is an instance where a teacher’s experience is a better guide.
Carbonneau, K. J., Marley, S. C., & Selig, J. P. (in press). A meta-analysis of the efficacy of teaching mathematics with concrete manipulatives. Journal of Educational Psychology. Advance online publication.
Richland, L. E., Zur, O., & Holyoak, K. J. (2007). Cognitive supports for analogies in the mathematics classroom. Science, 316, 1128–1129.
Uribe‐Flórez, L. J., & Wilkins, J. L. (2010). Elementary school teachers' manipulative use. School Science and Mathematics, 110, 363-371.
Uttal, D. H., O’Doherty, K., Newland, R., Hand, L. L., & DeLoache, J. (2009). Dual representation and the linking of concrete and symbolic representations. Child Development Perspectives, 3, 156–159.
An experiment is a question which science poses to Nature, and a measurement is the recording of nature’s answer. --Max Planck
You can't do science without measurement. That blunt fact might give pause when people emphasize non-cognitive factors in student success and in efforts to boost student success.
"Non-cognitive factors" is a misleading but entrenched catch-all term for factors such as motivation, grit, self-regulation, social skills. . . in short, mental constructs that we think contribute to student success, but that don't contribute directly to the sorts of academic outcomes we measure, in the way that, say, vocabulary or working memory do.
Non-cognitive factors have become hip. (Honestly, if I hear about the Marshmallow Study just one more time, I'm going to become seriously dysregulated.) And there are plenty of data to show that researchers are on to something important. But are they on to anything that educators are likely to be able to use in the next few years? Or are we going to be defeated by the measurement problem?
There is a problem, there's little doubt. A term like "self-regulation" is used in different senses: the ability to maintain attention in the face of distraction, the inhibition of learned or automatic responses, or the squelching of emotional responses. The relation among these senses is not clear. Further, each might be measured by self-ratings, teacher ratings, or various behavioral tasks.
But surprisingly enough, different measures do correlate, indicating that there is a shared core construct (Sitzmann & Ely, 2011). And Angela Duckworth (Duckworth & Quinn, 2009) has made headway in developing a standard measure of grit (distinguished from self-control by its emphasis on the pursuit of a long-term goal).
So the measurement problem in non-cognitive factors shouldn't be overstated. We're not at ground-zero on the problem. At the same time, we're far from agreed-upon measures. Just how big a problem is that?
It depends on what you want to do.
If you want to do science, it's not a problem at all. It's the normal situation. That may seem odd: how can we study self-regulation if we don't have a clear idea of what it is? Crisp definitions of constructs and taxonomies of how they relate are not prerequisites for doing science. They are the outcome of doing science. We fumble along with provisional definitions and refine them as we go along.
The problem of measurement seems more troubling for education interventions.
Suppose I'm trying to improve student achievement by increasing students' resilience in the face of failure. My intervention is to have preschool teachers model a resilient attitude toward failure and to talk about failure as a learning experience. Don't I need to be able to measure student resilience in order to evaluate whether my intervention works?
Ideally, yes, but the lack of such a measure may not be an experimental deal-breaker.
My real interest is in student outcomes like grades, attendance, dropout, completion of assignments, class participation, and so on. There is no reason not to measure these as my outcome variables. The disadvantage is that many factors surely contribute to each outcome, not just resilience. So there will be more noise in my outcome measure, and consequently I'll be more likely to conclude that my intervention does nothing when in fact it's helping.
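A toy simulation makes the trade-off concrete (all numbers here are hypothetical): with the same true benefit of an intervention, a noisier outcome measure makes that benefit harder to detect.

```python
import math
import random
import statistics

random.seed(1)

def detection_rate(true_effect, noise_sd, n=50, trials=1000):
    """Fraction of simulated studies in which a two-sample t statistic
    exceeds ~2 (roughly p < .05) for a treatment/control comparison."""
    hits = 0
    for _ in range(trials):
        treatment = [random.gauss(true_effect, noise_sd) for _ in range(n)]
        control = [random.gauss(0.0, noise_sd) for _ in range(n)]
        se = math.sqrt(
            statistics.variance(treatment) / n + statistics.variance(control) / n
        )
        t = (statistics.mean(treatment) - statistics.mean(control)) / se
        if t > 2.0:
            hits += 1
    return hits / trials

# Same true benefit; only the irrelevant variation in the outcome differs
print(detection_rate(true_effect=0.5, noise_sd=1.0))  # detected most of the time
print(detection_rate(true_effect=0.5, noise_sd=3.0))  # often missed
```

The second condition is the situation I'd face measuring grades instead of resilience: the effect is still there, but many studies would wrongly conclude the intervention did nothing.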
The advantage is that I'm measuring the outcome I actually care about. Indeed, there would not be much point in crowing about my ability to improve my psychometrically sound measure of resilience if such improvement meant nothing to education.
There is a history of this approach in education. It was certainly possible to develop and test reading instruction programs before we understood and could measure important aspects of reading such as phonemic awareness.
In fact, our understanding of pre-literacy skills has been shaped not only by basic research, but by the success and failure of preschool interventions. The relationship between basic science and practical applications runs both ways.
So although the measurement problem is a troubling obstacle, it's neither atypical nor final.
Duckworth, A. L., & Quinn, P. D. (2009). Development and validation of the Short Grit Scale (GRIT–S). Journal of Personality Assessment, 91, 166-174.
Sitzmann, T., & Ely, K. (2011). A meta-analysis of self-regulated learning in work-related training and educational attainment: What we know and where we need to go. Psychological Bulletin, 137, 421-442.
A fundamental insight of the last two decades is that motivation is strongly influenced by beliefs about ability and achievement. If you believe that achievement is mostly a product of ability, then you are likely to believe that people with a lot of natural ability achieve a lot without having to work very hard.
A recent paper (Smith, Lewis, Hawthorne, & Hodges, in press) examines whether such beliefs might account for sex differences in participation in STEM fields.
In Experiment 1, the researchers examined how much effort graduate students in STEM fields perceived that they exerted, relative to their peers. The results showed that for women, perceived effort was inversely associated with sense of belonging. That is, many women seemed to say to themselves, "This is so hard for me; I must not really belong in graduate school." That perception was, in turn, associated with decreased motivation. These associations were not observed in men.
In Experiment 2, the researchers created a fictitious field (Eco-psychology) and distributed a professional-looking brochure for a graduate program in Eco-psychology to introductory psychology students. The graduate program was subtly portrayed as either male-dominated or gender-neutral. Students were asked a number of questions about it, including how interested they were in the program and how difficult they thought they would find it, compared to "the average student." When the program was portrayed as male-dominated, women thought that they would find the program harder, and they were less interested in learning more about it.
Experiment 3 used an elaborate ruse in which subjects believed they were interacting via webcam with a professor from the Eco-psychology program at the University of Colorado, Boulder. The key manipulation was that the "professor" provided feedback about the subject's likely success in the program (which in this experiment was always portrayed as male-dominated). The feeling of alienation observed in Experiment 2 was observed again, but feedback from the professor could undo it: if the professor merely made effort seem normal by commenting that everyone in the program had to work hard, the gender effect disappeared.
This study mirrors some conceptually similar studies of college freshmen from historically underrepresented groups. For example, in Walton & Cohen (2011), students heard a simple message from upperclassmen emphasizing that everyone feels disoriented and concerned about whether they can really do the work when they first get to college, but that things get better. These brief messages not only made students feel better, they had an impact on students' grades. (There was no effect of the intervention on White students.)
In the larger picture, these findings should remind us of the powerful impact of beliefs on motivation.
References
Smith, J. L., Lewis, K. L., Hawthorne, L., & Hodges, S. D. (in press). When trying hard isn’t natural: Women’s belonging with and motivation for male-dominated STEM fields as a function of effort expenditure concerns. Personality and Social Psychology Bulletin.
Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science.
I like Wikipedia. I like it enough that I have donated during its fund drives, and not simply under the mistaken impression that doing so would make the plaintive face of founder Jimmy Wales disappear from my browser.
Wikipedia is sometimes held up as a great victory for crowdsourcing, although as Jaron Lanier has wryly observed, it would have been strange indeed to have predicted in the 1980s that the digital revolution was coming, and that its crowning achievement would be a copy of something that already existed: the encyclopedia. That's a bit too cynical in my view, but more important, it leapfrogs an important question: is Wikipedia a good encyclopedia?
For matters related to education, my tentative answer is "no." For some time now I've noticed that articles in Wikipedia get things wrong, even allowing for the fact that some topics in education are controversial. So in a not-at-all-scientific test, I looked up a few topics that came to mind.
Reading education in the United States: The third paragraph reads:
There is some debate as to whether print recognition requires the ability to perceive printed text and translate it into spoken language, or rather to translate printed text directly into meaningful symbolic models and relationships. The existence of speed reading, and its typically high comprehension rate would suggest that the translation into verbal form as an intermediate to understanding is not a prerequisite for effective reading comprehension. This aspect of reading is the crux of much of the reading debate.
There is a large literature using many different methods to assess whether sound plays a role in the decoding of experienced readers, and ample evidence that it does. For example, people are slower to read tongue-twisters than control text (McCutchen & Perfetti, 1982). Whether sound is necessary to access meaning, or is a byproduct of that process, is more controversial. There is also pretty good evidence that speed reading can't really work, owing to limitations in the speed of eye movements (Radach, 2004).
Next I looked at mathematics education. The section of most interest is "Research," and it's a grab-bag of assertions, most or all of which seem to be taken from the website of the National Council of Teachers of Mathematics. As such, the list is incomplete: there is no mention of the huge literatures on (1) math facts (e.g., Orrantia et al., 2010) or (2) spatial representations in mathematics (e.g., Newcombe, 2010). The conclusions are also, at times, sketchily drawn ("the importance of conceptual understanding": well, sure) and, on occasion, controversial ("the usefulness of homework": a lot depends on the details).
Then I looked at the entry on learning styles: You probably could predict the contents of this entry. A long recounting of various learning styles models, followed by a "criticisms" section. Actually, this Wikipedia entry was better than I thought it would be, because I expected the criticism section to be shorter than it is. Still, if you know nothing about the topic, you'd likely conclude "there's controversy" rather than "there's no supporting evidence" (Riener & Willingham, 2010).
Finally, I looked at the entry on constructivism (learning theory). This was a pretty stringent test, I'll admit, because it's a difficult topic.
The first section lists constructivists, and the list includes Herb Simon, which can only be called bizarre, given that he co-authored criticisms of constructivism (Anderson, Reder, & Simon, 2000).
The rest of the article is a bit of a mish-mash. It differentiates social constructivism (the idea that learning is inherently social) from cognitive constructivism (the idea that learners make meaning) only late in the article, though most authors consider the distinction basic. It mentions situated learning in passing and fails to identify it as an influential third strain in constructivist thought. A couple of sections on peripheral topics ("Role Category Questionnaire," "Person-centered messages") have been added, it would appear, by enthusiasts.
Of the four passages I examined I wouldn't give better than a C- to any of them. They are, to varying degrees, disorganized, incomplete, and inaccurate.
Others have been interested in the reliability of Wikipedia, so much so that there is a Wikipedia entry devoted to the topic.
Two positive results are worthy of note. First, site vandalism is usually quickly repaired. (For example, in the history of the entry for psychologist William K. Estes, one finds that someone wrote "William Estes is a martian that goes around the worl eating pizza his best freind is gondi.") The speedy repair of vandalism is testimony to the facts that most people want Wikipedia to succeed, and that the website makes it easy to make small changes.
Second, Wikipedia articles seem to fare well for accuracy compared to traditional edited encyclopedias. Here's where education may differ from other topics. The studies that I have seen compared articles on pretty arcane topics--the sort of thing that no one has an opinion on other than a handful of experts. Who is going to edit the entry on Photorefractive Keratectomy? But lots of people have opinions about the teaching of reading--and there are lots of bogus "sources" they can cite, a fact I emphasized to the point of reader exhaustion in my most recent book.
Now, I looked through only four entries. Perhaps others are better. If you think so, let me know. But for the time being, I'll be warning students in my Spring Educational Psychology course not to trust Wikipedia as a source.
Anderson, J. R., Reder, L. M., & Simon, H. A. (2000). Applications and misapplications of cognitive psychology to mathematics instruction. Texas Education Review.
McCutchen, D., & Perfetti, C. A. (1982). The visual tongue-twister effect: Phonological activation in silent reading. Journal of Verbal Learning and Verbal Behavior.
Newcombe, N. S. (2010). Picture this. American Educator.
Orrantia, J., Rodríguez, L., & Vicente, S. (2010). Automatic activation of addition facts in arithmetic word problems. The Quarterly Journal of Experimental Psychology.
Radach, R. (2004). Eye movements and information processing during reading (Vol. 16, No. 1-2). Psychology Press.
Riener, C., & Willingham, D. (2010). The myth of learning styles. Change: The Magazine of Higher Learning.