A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain, & Chamberlain, 2012), and it seems to lead not just to better memory but to deeper memory that supports transfer (e.g., McDaniel et al., 2013; Rohrer et al., 2010).
("Retrieval practice" is less catchy than the initial name--the testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or for some other reason, and because "testing effect" led some observers to throw up their hands and say "do we really need more tests?")

Now researchers (Szpunar, Khan, & Schacter, 2013) have reported that testing may be a powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.

In this experiment, subjects were asked to learn from a 21-minute video lecture on statistics. They were also told that the lecture would be divided into four parts, separated by breaks. During each break they would perform math problems for a minute, and then would either do more math problems for two more minutes ("untested" group), be quizzed for two minutes on the material they had just learned ("tested" group), or review by seeing the questions with the answers provided ("restudy" group).

Subjects were told that whether or not they were quizzed would be randomly determined for each segment; in fact, each subject got the same treatment after every segment, except that everyone was tested after the fourth segment. So note that all subjects had reason to think that they might be tested at any time.

There were a few interesting findings.
First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off--the tested subjects outperformed the restudy and the untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve-racking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And they in fact performed better: tested = 90%, restudy = 76%, untested = 68%.)
We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the benefit of testing to attention will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start on an answer to a difficult problem.
Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology. Advance online publication.
Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36, 233-239.
Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences. Advance online publication. doi:10.1073/pnas.122176411
What people learn (or don't) from games is such a vibrant research area that we can expect fairly frequent literature reviews. It's been about a year since the last one, so I guess we're due. The last time I blogged on this topic, Cedar Riener remarked that it's sort of silly to frame the question as "does gaming work?" It depends on the game. The category is so broad it can include a huge variety of experiences for students.
If there were NO games from which kids seemed to learn anything, you probably still ought not to conclude that "kids can't learn from games." To do so would be to conclude that the distributions of learning for all possible games and all possible teaching would look something like this.
But this pattern of data seems highly unlikely. It seems much more probable that the distributions overlap more, and that whether kids learn more from gaming or traditional teaching is a function of the qualities of each.
So what's the point of a meta-analysis that poses the question "do kids learn more from gaming or traditional teaching?"
I think of these reviews not as letting us know whether kids can learn from games, but as an overview of where we are--just how effective are the serious games offered to students?
The latest meta-analysis (Wouters et al., 2013) includes data from 56 studies and examined learning outcomes (77 effect sizes), retention (17 effect sizes), and motivation (31 effect sizes).

The headline result featured in the abstract is "games work!" Games are reported to be superior to conventional instruction in terms of learning (d = 0.29) and retention (d = 0.36) but, somewhat surprisingly, not motivation.
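For readers unfamiliar with the d statistic: it is Cohen's standardized mean difference, the gap between two group means divided by their pooled standard deviation. Here is a minimal sketch of the computation, using made-up scores (not data from the study):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / n_a, sum(group_b) / n_b
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical test scores for a gaming group and a comparison group
gaming = [78, 82, 75, 90, 85, 80]
comparison = [74, 79, 72, 85, 80, 76]
print(round(cohens_d(gaming, comparison), 2))
```

By the usual convention, d near 0.2 counts as small and 0.5 as medium, so the learning and retention effects reported in the meta-analysis are in the small-to-medium range.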
The authors examined a large set of moderator variables and this is where things get interesting. Here are a few of these findings:
- Students learn more when playing games in groups than playing alone.
- Peer-reviewed studies showed larger effects than others. (This analysis is meant to address the bias not to publish null results. . . but the interpretation in this case was clouded by small N's.)
- Age of student had no impact.
But two of the most interesting moderators significantly modify the big conclusions.
First, gaming showed no advantage over conventional instruction when the experiment used random assignment. When non-random assignment was used, gaming showed a robust advantage. So it's possible (or even likely) that games in these studies were more effective only when they interacted with some factor in the gamer that is self-selected (or selected by the experimenter or teacher). And we don't know yet what that factor is.
Second, the researchers say that gaming showed an advantage over "conventional instruction," but follow-up analyses show that gaming showed no advantage over what they called "passive instruction"--that is, teacher talk or reading a textbook. All of the advantage accrued when games were compared to "active instruction," described as "methods that explicitly prompt learners to learning activities (e.g., exercises, hypertext training)." So gaming (in this data set) is not really better than conventional instruction; it's better than one type of instruction (which in the US is probably less often encountered).
So yeah, I think the question in this review is ill-posed. What we really want to know is how to structure better games. That requires much more fine-grained experiments on the gaming experience, not blunt variables. This will be painstaking work.
Still, you've got to start somewhere, and this article offers a useful snapshot of where we are.

EDIT 5:00 a.m. EST 2/11/13: In the original post I failed to make explicit another important conclusion--there may be caveats on when and how the games examined are superior to conventional instruction, but they were almost never worse. This is not an unreasonable bar, and as a group the games tested pass it.
Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. G. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology. Advance online publication. doi:10.1037/a0031311
Michael Gove, Secretary of Education in Great Britain, certainly has a flair for oratory. In his most recent speech, he accused his political opponents of favoring "Downton Abbey-style" education (meaning one that perpetuates class differences), he evoked a 13-year-old servant girl reading Keats, and he cited as inspirations the late British reality TV star Jade Goody (best known for being ignorant) and the Marxist writer and political theorist Antonio Gramsci.

Predictably, press coverage in Britain has focused on these details. (So, of course, have the tweets.) The Financial Times and the Telegraph pointed to Gove's political challenge to Labour. The Guardian led with the Goody and Gramsci angle. But these points of color distract from the real aim. The fulcrum of the speech is the argument that a knowledge-based curriculum is essential to bring greater educational opportunity to disadvantaged children. (The BBC got half the story right.)

The logic is simple: 1) Knowledge is crucial to support cognitive processes (e.g., Carnine & Carnine, 2004; Hasselbring, 1988; Willingham, 2006). 2) Children who grow up in disadvantaged circumstances have fewer opportunities to learn important background knowledge at home (Walker et al., 1994), and they come to school with less knowledge, which has an impact on their ability to learn new information at school (Grissmer et al., 2010) and likely leads to a negative feedback cycle whereby they fall farther and farther behind (Stanovich, 1986).

Gove is right. And he's right to argue for a knowledge-based curriculum.
The curriculum is the factor most likely to ameliorate achievement gaps between advantaged and disadvantaged students, because a good fraction of that difference is fueled by differences in cultural capital in the home--differences that schools must try to make up. (Indeed, a knowledge-based curriculum is a critical component of KIPP and other "no excuses" schools in the US.)
I'm not writing to defend all education policies undertaken by the current British government--I'm not knowledgeable enough about those policies to defend or attack them. But I find the response from Stephen Twigg (Labour's shadow education secretary) disquieting, because he seems to have missed Gove's point. "Instead of lecturing others, he should listen to business leaders, entrepreneurs, headteachers and parents who think his plans are backward looking and narrow. We need to get young people ready for a challenging and competitive world of work, not just dwell on the past."
(As quoted in the Financial Times.)

It's easy to scoff at a knowledge-based curriculum as backward-looking. Memorization of math facts when we have calculators? Knowledge in the age of Google?
But if you mistake advocacy for a knowledge-based curriculum as wistful nostalgia for a better time, or as "old-fashioned," you just don't get it. Surprising though it may seem, you can't just Google everything. You actually need to have knowledge in your head to think well.
So a knowledge-based curriculum is the best way to get young people "ready for the world of work."
Mr. Gove is rare, if not unique, among high-level education policymakers in understanding the scientific point he made in yesterday's speech. You may agree or disagree with the policies Mr. Gove sees as the logical consequence of that scientific point, but education policies that clearly contradict it are unlikely to help close the achievement gap between wealthy and poor.

References
Carnine, L., & Carnine, D. (2004). The interaction of reading skills and science content knowledge when teaching struggling secondary students. Reading & Writing Quarterly.
Grissmer, D., Grimm, K. J., Aiyer, S. M., Murrah, W. M., & Steele, J. S. (2010). Fine motor skills and early comprehension of the world: Two new school readiness indicators. Developmental Psychology.
Hasselbring, T. S. (1988). Developing math automaticity in learning handicapped children: The role of computerized drill and practice. Focus on Exceptional Children.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly.
Walker, D., Greenwood, C., Hart, B., & Carta, J. (1994). Prediction of school outcomes based on early language production and socioeconomic factors. Child Development.
Willingham, D. T. (2006). How knowledge helps. American Educator.
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.
What are the cognitive consequences of sleep deprivation?
Sleep deprivation seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning: sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though, surprisingly, that effect seems to be smaller or absent in young children).
Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive or inattentive. Although this sounds like ADHD, lab studies of attention show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods of time when at home or at school and may be less motivated to do so in any event.
Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)
Although these effects are reasonably well established, the cognitive costs of sleep deprivation are less widespread and statistically smaller than I would have guessed. That may be because they are difficult to test experimentally. You have two choices, both with drawbacks:
1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.
2) you can do a proper experiment by having students sleep less than they usually would, and see if their cognitive performance goes down as a consequence. But it's unethical to deprive students of significant sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here--we think it's months or years of milder deprivation.
So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. To allay that concern, can anything be done to get teens to sleep more?
Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)
Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that they ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student who is gaming at midnight and tells you "I'm playing games because I'm not sleepy" could be mistaken. It could be that he's not sleepy because he's playing games. Good cues would be a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.
So yes, this seems to be a case where good ol' common sense jibes with the data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.
The British Columbia education system would seem to be doing an excellent job. Although very recent data are not available, performance by BC 15-year-olds on the 2006 PISA showed them lagging just one country in science (Finland), two countries in reading (Finland and Korea), and five in math (Taipei, Finland, Hong Kong, Korea, and fellow Canadian province Quebec). Meanwhile, in 2007, no one scored better than BC fourth graders on the PIRLS reading assessment. (Eight countries or provinces scored about the same; 36 scored lower. Test data summarized here.)

Despite this record of success, BC is not satisfied, and is gearing up to change the curriculum.

There's one sense in which this plan is clearly needed: there are too many objectives. The document describing learning objectives for the fourth grade runs 21 pages and includes scores of items. No one can cover all that in a year, so the document ought to be tightened.

Another stated objective in the document describing the proposed change is to offer teachers more flexibility, so that they can better tune education to individual students. Whether that's a good idea is, in my view, a judgment call. The BC Ministry of Education contends that the current curriculum is too prescriptive. It may be, but it's being taught (and learned) at very high levels of proficiency, at least as measured by international comparison tests that most observers think are pretty reasonable. Change the curriculum, and that level of performance will likely drop. But other benefits may accrue, such as better performance in academic areas not measured by the tests among students with strong interest in those areas, and greater student satisfaction.

My real concern is that the plan doesn't make very clear what the expected benefit is, nor how we'll know it when we see it. At least in the overview document, the benefit is described as "increased opportunities to gain the essential learning and life skills necessary to live and work successfully in a complex, interconnected, and rapidly changing world. Students will focus on acquiring skills to help them use knowledge critically and creatively, to solve problems ethically and collaboratively, and to make the decisions necessary to succeed in our increasingly globalized world."

Oddly enough, I thought that excellent preparation in reading, math, and science was just the ticket to help you use knowledge critically and creatively. And then I saw this statement: "In today's technology-enabled world, students have virtually instant access to a limitless amount of information. The greater value of education for every student is not in learning the information but in learning the skills they need to successfully find, consume, think about and apply it in their lives."
A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here). Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, researchers have understandably wanted to know whether there is a way to boost executive function in kids.

Tools of the Mind (Bodrova & Leong, 2007) looked promising. It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)

A few studies of relatively modest scale (but not trivial--100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al., 2008; Diamond et al., 2007). But now some much larger-scale followup studies (800-2,000 kids) have yielded discouraging results. These studies were reported at a symposium this spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice. Researchers at the symposium reported the results of three studies. Tools of the Mind did not have an impact in any of the three.

What should we make of these discouraging results? It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training. Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind. Now we are left with the dispiriting conclusion "More study is needed."
Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S. (2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299-313.
Bodrova, E., & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education (2nd ed.). New York: Merrill.
Diamond, A., & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959-964.
Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318.
Making a change to education that seems like a clear improvement is never easy. Or almost never.
Judith Harackiewicz and her colleagues have recently reported an intervention that is inexpensive, simple, and leads high school students to take more STEM courses.
The intervention had three parts, administered over 15 months, when students were in the 10th and 11th grades. In October of 10th grade, researchers mailed a brochure to each household titled Making Connections: Helping Your Teen Find Value in School. It described the connections between math, science, and daily life, and included ideas about how to discuss this topic with students.
In January of 11th grade a second brochure was sent. It covered similar ideas, but with different examples. Parents also received a letter that included the address of a password-protected website devised by researchers, which provided more information about STEM and daily life, as well as STEM careers.
In Spring of 11th grade, parents were asked to complete an online questionnaire about the website.
There were a total of 188 students in the study: half received this intervention, and the control group did not.
Students in the intervention group took more STEM courses during their last two years of high school (8.31 semesters) than control students did (7.50 semesters).
This difference turned out to be entirely due to differences in elective, advanced courses, as shown in the figure below.
An important caveat about this study: all of the subjects are participating in the Wisconsin Study of Families and Work. That longitudinal study began in 1990, when the mothers were in their fifth month of pregnancy.
The first brochure that researchers sent to subjects included a letter thanking them for their ongoing participation in the longer study. Hence, subjects could reasonably conclude that the present study was part of the longer study.
That's worth bearing in mind because ordinary parents might not be so ready to read brochures mailed to them by strangers, nor to visit suggested websites.
But that's not a fatal flaw of the research. It just means that we can't necessarily count on random parents reading the materials with the same care.
To me, the effect is still remarkable. To put it in perspective, researchers also measured the effect of parental education on taking STEM courses. As many other researchers have found, the kids of better-educated parents took more STEM courses. But the effect of the intervention was nearly as large as the effect of parental education!
Clearly, further work is necessary but this is an awfully promising start.
Harackiewicz, J. M., Rozek, C. S., Hulleman, C. S., & Hyde, J. S. (in press). Helping parents to motivate adolescents in mathematics and science: An experimental test of a utility-value intervention. Psychological Science.
Steven Levitt, of Freakonomics fame, has unwittingly provided an example of how science applied to education can go wrong.

On his blog, Levitt cites a study he and three colleagues published (as an NBER working paper). The researchers rewarded kids for trying hard on an exam. As Levitt notes, the goal of previous research has been to get kids to learn more. That wasn't the goal here. It was simply to get kids to try harder on the exam itself, to really show everything that they knew.

Among the findings: (1) it worked--offering kids a payoff for good performance prompted better test scores; (2) it was more effective if, instead of offering a payoff for good performance, researchers gave kids the payoff straight away and threatened to take it away if the student didn't get a good score (an instance of a well-known and robust effect called loss aversion); (3) children prefer different rewards at different ages. As Levitt puts it, "With young kids, it is a lot cheaper to bribe them with trinkets like trophies and whoopee cushions, but cash is the only thing that works for the older students."

There are a lot of issues one could take up here, but I want to focus on Levitt's surprise that people don't like this plan. He writes, "It is remarkable how offended people get when you pay students for doing well – so many negative emails and comments."

Levitt's surprise gets at a central issue in the application of science to education. Scientists are in the business of describing (and thereby enabling predictions of) the Natural world. One such set of phenomena concerns when students put forth effort and when they don't. Education is not a scientific enterprise. The purpose is not to describe the world, but to change it, to make it more similar to some ideal that we envision. (I wrote about this distinction at some length in my new book. I also discussed it in this brief video.)

Thus science is ideally value-neutral. Yes, scientists seldom live up to that ideal; they have a point of view that shapes how they interpret data, generate theories, etc., but neutrality is an agreed-upon goal, and lack of neutrality is a valid criticism of how someone does science. Education, in contrast, must entail values, because it entails selecting goals. We want to change the world--we want kids to learn things: facts, skills, values. Well, which ones? There's no better or worse answer to this question from a scientific point of view.

A scientist may know something useful to educators and policymakers once the educational goal is defined; i.e., the scientist offers information about the Natural world that can make it easier to move toward the stated goal. (For example, if the goal is that kids be able to count to 100 and to understand numbers by the end of preschool, the scientist may offer insights into how children come to understand cardinality.) What scientists cannot do is use science to evaluate the wisdom of stated goals.

And now we come to people's hostility to Levitt's idea of rewards for academic work.
I'm guessing most people don't like the idea of rewards for the same reason I don't. I want my kids to see learning as a process that brings its own reward. I want my kids to see effort as a reflection of their character, to believe that they should give their all to any task that is their responsibility, even if the task doesn't interest them.

There is, of course, a large, well-known research literature on the effect of extrinsic rewards on motivation. Readers of this blog are probably already familiar with it--if so, skip the next paragraph.

The problem is one of attribution. When we observe other people act, we speculate on their motives. If I see two people gardening--one paid and the other unpaid--I'm likely to assume that one gardens because he's paid and the other because he enjoys gardening. It turns out that we make these attributions about our own behavior as well. If my child tries her hardest on a test, she's likely to think "I'm the kind of kid who always does her best, even on tasks she doesn't care for." If you pay her for her performance, she'll think "I'm the kind of kid who tries hard when she's paid." This research began in the 1970s and has held up very well. Kids work harder for rewards . . . until the rewards stop. Then they engage in the task even less than they did before the rewards started. I summarized some of this work here.
In the technical paper, Levitt cites some of the reviews of this research but downplays the threat, pointing out that when motivation is low to start with, there's not much danger of rewards lowering it further. That's true, and I've made a similar argument: cash rewards might be used as a last-ditch effort for a child who has largely given up on school. But that would dictate using rewards only with kids who were not motivated to start with, not in a blanket fashion as was done in Levitt's study. And I can't see concluding that elementary school kids were so unmotivated that they were otherwise impossible to reach. In addressing the threat to student motivation with research, Levitt is approaching the issue in the right way (even if I think he's incorrect in how he does so).

But on the blog (in contrast to the technical paper), Levitt addresses the threat in the wrong way. He skips the scientific argument and simply belittles the idea that parents might object to someone paying their child for academic work. He writes:

"Perhaps the critics are right and the reason I'm so messed up is that my parents paid me $25 for every A that I got in junior high and high school. One thing is certain: since my only sources of income were those grade-related bribes and the money I could win off my friends playing poker, I tried a lot harder in high school than I would have without the cash incentives. Many middle-class families pay kids for grades, so why is it so controversial for other people to pay them?"

I think Levitt is getting "so many negative emails and comments" because he's got scientific data that serve one type of goal (get kids to try hard on exams), the application of which conflicts with another goal (encourage kids to see academic work as its own reward). So he scoffs at the latter.

I see this blog entry as an object lesson for scientists. We offer something valuable--information about the Natural world--but we hold no special status in deciding what to do with that information (i.e., setting goals). In my opinion, Levitt's blog entry shows he has a tin ear for the possibility that others do not share his goals for education. If scientists are oblivious to or dismissive of those goals, they can expect not just angry emails; they can expect to be ignored.
When I first saw yesterday's New York Times op-ed
, I mistook it for a joke. The title, "Is algebra necessary?" had the ring of Thurber's classic essay "Is sex necessary?" a send-up of psychological sex manuals of the 1920s. Unfortunately, the author, Andrew Hacker, poses the question in earnest, and draws the conclusion that algebra should not be required of all students. His arguments:
- A lot of students find math really hard, and that prompts them to give up on school altogether. Think of what these otherwise terrific students might have achieved if math hadn't chased them away from school.
- The math that's taught in school doesn't relate well to the mathematical reasoning people need outside of school.

His proposed solution is the teaching of quantitative skills that students can use, rather than a bunch of abstract formulas, and a better understanding of "where numbers actually come from and what they actually convey," e.g., how the consumer price index is calculated. For most careers, Hacker believes that specialized training in the math necessary for that particular job will do the trick.

What's wrong with this vision? The inability to cope with math is not the main reason that students drop out of high school. Yes, a low grade in math predicts dropping out, but no more so than a low grade in English. Furthermore, behavioral factors like motivation, self-regulation, and social control (Casillas, Robbins, Allen & Kuo, 2012), as well as a feeling of connectedness and engagement at school (Archambault et al, 2009), are as important as GPA to dropping out. So it's misleading to depict math as the chief villain in America's high dropout rate.

What of the other argument, that formal math mostly doesn't apply outside of the classroom anyway?
The difficulty students have in applying math to everyday problems they encounter is not particular to math. Transfer is hard. New learning tends to cling to the examples used to explain the concept. That's as true of literary forms, scientific method, and techniques of historical analysis as it is of mathematical formulas.
The problem is that if you try to meet this challenge by teaching the specific skills that people need, you had better be confident that you're going to cover all of those skills, because if you teach students the significance of the Consumer Price Index, they are not going to know how to teach themselves the significance of projected inflation rates on their investment in CDs. Their practical knowledge will be specific to what you teach them, and it won't transfer.
The best bet for knowledge that can apply to new situations is an abstract understanding--seeing that apparently different problems have a similar underlying structure. And the best bet for students to gain this abstract understanding is to teach it explicitly. (For a discussion of this point as it applies to math education in particular, see Anderson, Reder, & Simon, 1996).
But the explicit teaching of abstractions is not enough. You also need practice in putting the abstractions into concrete situations. Hacker overlooks the need for practice, even for the everyday math he wants students to know. One of the important side benefits of higher math is that it makes you more proficient at the math you learned earlier, because those earlier topics are embedded in the new material (e.g., Bahrick & Hall, 1991). So I think there are excellent reasons to doubt that Hacker's solution to the transfer problem will work out as he expects.

What of the contention that math doesn't do most people much good anyway? Economic data directly contradict that suggestion. Economists have shown that cognitive skills--especially math and science--are robust predictors of individual income, of a country's economic growth, and of the distribution of income within a country (e.g., Hanushek & Kimko, 2000; Hanushek & Woessmann, 2008). Why would cognitive skills (as measured by international benchmark tests) predict economic growth? Economic productivity does not spring solely from the creativity of engineers and inventors. The well-educated worker is more likely to (1) see the potential for applying an innovation in a new context, and (2) understand the explanation of an innovation that someone else has spotted.
In other words, Hacker overlooks the possibility that the mathematics learned in school, even if seldom applied directly, makes students better able to learn new quantitative skills. The on-the-job training in mathematics that Hacker envisions will go a whole lot better with an employee who gained a solid footing in math at school. Finally, there is the question of income distribution: countries with a better-educated populace show smaller income disparity, and suggesting that not everyone needs to learn math raises the question of who will learn it. Who will learn higher math in Hacker's ideal world? He's not clear on this point. He says he's against tracking, but notes that MIT and Cal Tech clearly need their students to be proficient in math. Does this mean that everyone gets the same vocational-type math education, and only some of those going on to college will get access to higher math? If that were actually implemented, how long would it be before private vendors offered after-school courses in formal mathematics to give kids an edge for entrance to MIT? Private courses that cost money, and to which the poor will not have access.
There are not many people who are satisfied with the mathematical competence of the average US student. We need to do better. Promising ideas include devoting more time to mathematics in the early grades, more exposure to pre-mathematical concepts in preschool, and perhaps specialized math instructors beginning in the earlier grades. Hacker's suggestions sound like surrender.
Anderson, J. R., Reder, L. M., & Simon, H. A. (1996). Situated learning and education. Educational Researcher, 25.

Archambault, I., Janosz, M., Fallu, J.-S., & Pagani, L. S. (2009). Student engagement and its relationship with early high school dropout. Journal of Adolescence, 32, 651-670.

Bahrick, H. P. & Hall, L. K. (1991). Lifetime maintenance of high school mathematics content. Journal of Experimental Psychology: General, 120, 20-33.

Hanushek, E. A. & Kimko, D. D. (2000). Schooling, labor-force quality, and the growth of nations. The American Economic Review, 90.

Hanushek, E. A. & Woessmann, L. (2008). The role of cognitive skills in economic development. Journal of Economic Literature, 46.
Important study on the impact of education on women's attitudes and beliefs: Mocan & Cannonier (2012) took advantage of a naturally occurring "experiment" in Sierra Leone. The country suffered a devastating, decade-long civil war during the 1990s, which destroyed much of its infrastructure, including schools. In 2001, Sierra Leone instituted a program offering free primary education; attendance was compulsory. This policy provided significant opportunities for girls who were young enough for primary school, but none for older children. Further, the resources to implement the program were not equivalent in all districts of the country.
The authors exploited these quirks in the program's accessibility to compare girls who participated with those who did not. (The researchers controlled for other variables such as religion, ethnic background, residence in an urban area, and wealth.)
The outcome of interest was empowerment, which the researchers defined as "having the knowledge along with the power and the strength to make the right decisions regarding one's own well-being." The outcome measures came from a 2008 study (the Sierra Leone Demographic and Health Survey), which summarized interviews with over 10,000 individuals.
The findings: Better educated women were more likely to believe
- that a woman is justified in refusing sex with her husband if she knows he has a sexually transmitted disease
- that a husband beating his wife is wrong
- that female genital mutilation is wrong

Better educated women were also more likely to endorse these behaviors:
- having fewer children
- using contraception
- getting tested for AIDS

One of the oddest findings in these data is also one of the most important to understanding the changes in attitudes: they are not due to changes in literacy. The researchers drew that conclusion because the increase in education had no impact on literacy, likely because the quality of instruction in the schools was very low. The best guess is that the impact of schooling on attitudes operated through social avenues.
Mocan, N. H. & Cannonier, C. (2012) Empowering women through education: Evidence from Sierra Leone. NBER working paper 18016.