Spring, 2013 at the Harvard Initiative for Learning and Teaching.
There are some studies in psychology where you pretty much know what the results will be before you collect the data. But you gotta do 'em to be sure you're right.
One example is a recent study (Sana et al., 2013) on the effects of laptop multitasking on classroom learning. (Thanks to Twitter users @rboulle and @CyniqueDeGauche for tipping me off to this study.)
The authors had college-aged subjects come into a laboratory to listen to a 45-minute lecture on meteorology, meant to simulate the sort of experience they would have in a college classroom. Half of the subjects were given a list of secondary tasks to perform, meant to represent the sort of thing that a bored student might investigate during a part of the lecture that seemed slow. For example, one question was "What is on Channel 3 tonight at 10 p.m.?" All the questions were designed to be answerable with a simple search using websites that virtually all students are familiar with (Google, YouTube, and the like).
The number of questions--twelve--seemed pretty high to me. The authors said that pilot testing indicated students could answer all twelve in about 15 minutes. Thus, students would be multitasking for one third of the 45-minute lecture. The researchers argued that other data indicate this percentage estimate is not unreasonable, although it makes me want to cry.
A forty-item comprehension test administered 20 minutes after the lecture showed a cost to multitasking.
Experiment 2 examined what happens when you are not multitasking yourself, but someone near you is doing so. Again, you kind of know what's going to happen. Motion in your peripheral vision is distracting, a phenomenon that web page designers have capitalized on for years, much to our annoyance.
And sure enough, a peer multitasking in your view is distracting.
There is a fundamental tension here, and I don't know how to resolve it. On the one hand, I like it when students have their laptops in class. Many of them are more comfortable taking notes this way than longhand. In the middle of a lecture I might ask someone to look something up that I don't know off the top of my head.
On the other hand, the potential for distraction is terrible. I've walked into the back of the classrooms of many of my colleagues and seen that perhaps 50% of the students are on the Web.
Students think that they can snap attention back to class "when it gets interesting again." I don't have much confidence they can. Student judgments of their own learning are often not that well calibrated, and that seems to be especially true of multitasking. They think it's cost free.
Tellingly, the researchers asked subjects in Experiment 2 to rate whether they were distracted by other people multitasking and whether other people multitasking affected their own (the observers') learning. The average answers? "Somewhat distracting," and it "barely" hindered learning.
What can be done?
Some educators simply ban laptops. Some banish laptop users to the back rows. I don't like either of these solutions much because they impose a penalty on anyone who wants to use a laptop.
I asked our IT group if the Wi-Fi could be turned on and off in my classroom. Nope.
Some argue that students are learning how to manage distraction, although there's not much evidence that students are learning this lesson. Certainly, I don't know of anyone actively teaching them this lesson.
Got ideas? I'd love to hear them.
How should textbooks be designed? A new paper by Jennifer Kaminski and Vladimir Sloutsky shows that there can be real subtlety to the answer.
The researchers examined early elementary materials meant to teach kids how to read graphs. They were specifically interested in comparing boring, monochromatic, abstract bar graphs with colorful, fun graphs that use a graphic. (Please excuse the black & white reproduction.)
We all know that textbook publishers are eager to make books more visually appealing. And in this case, what's the harm? The graph with the objects seems like a natural scaffold to learn the concept.
Kaminski & Sloutsky found that some children shown the graph with embedded objects adopted a counting strategy to read it, even when they were taught to focus on the bar height and the axis. The authors surmise that the counting routine is so well learned that when the child is presented with a vivid graphic with salient objects to count, it's simply very easy to go down that mental path. And of course the child does read that graph correctly.
The problem is not just that the child hasn't learned a good strategy to read the graph, or is distracted--the child has learned a bad strategy. So when kids who adopted the counting strategy see graphs like this . . .
. . . some of them count the stripes or count the dots to "read" the graph.
The effect fades as kids get older--first graders are better than kindergarteners in ignoring extraneous information when reading graphs.
On the one hand you could see this as small potatoes--kids will get over it, they will learn how to read graphs. But on the other hand, why knowingly put a stumbling block in front of kids trying to learn math? And more important, how many other small stumbling blocks are there that we don't know about?
A blog posting over at Schools Matter @ The Chalk Face has gathered a lot of interest--78 comments, many of them outraged.
The New York State Education Dept. has a website that is meant to help teachers prepare for the Common Core Standards. Author Chris Cerrone posted a bit of a 1st grade curriculum module on early civilizations. Here it is:
Cerrone asked primary grade educators to weigh in: "what do you think of the vocabulary contained in this unit of study?"
The responses in the 78 comments were nearly uniformly negative. As you might expect from that volume of commentary, the criticisms were wide-ranging, many of them directed more generally at standardized testing and at the idea of the CCSS themselves.
But a lot of the commentary concerned cognitive development, and I want to focus there. This comment was typical.
There is an important idea at the heart of this criticism: developmental stages. This commenter specifically invokes Piaget, but you don't have to be a Piagetian to think that stages are a good way to think about children's thinking. Stage theories hold that children's thinking is relatively stable, but then undergoes a big shift in a relatively brief time (say, a few months) whereupon it stabilizes again.
So lessons would be developmentally inappropriate if they demanded a type of thinking that the child was simply incapable of, given his developmental stage.
I have argued in some detail that stage theories have two major problems: first, data from the last twenty years or so make development look like it's continuous, rather than occurring in discrete stages. Second, children's cognition is fairly variable day to day, even when the same child tries the same task.
I have argued elsewhere that taking a psychological finding and using it to draw strong conclusions about instruction--including what children are, in principle, ready for--is fraught with problems. How much more true is that when using a psychological theory rather than an experimental finding?
So if Piaget will not be our guide as to what 1st graders are ready for, what should be?
The experience of early elementary educators, of course. Some of the people commenting on the blog posting are or were first grade teachers, and almost unanimously, they thought this material was inappropriate for first graders. (Some thought kids shouldn't be learning about other religions at this age; no argument there, as that's a matter of one's values. I'm only talking about what kids can cognitively handle.)
But if we adopt a proof-of-the-pudding-is-in-the-eating criterion, lessons on ancient civilizations are fine because they are in use and children are learning. The material shown above is part of the Core Knowledge sequence, around for more than a decade and used by over a thousand schools. (NB: I'm on the Board of the Core Knowledge Foundation.)
And Core Knowledge is not alone. Another curriculum has had first-graders learn about ancient civilizations not for a decade, but for about a century: Montessori. (NB again: my children experienced these lessons at their school, and my wife teaches them--she's an early elementary Montessori teacher.)
Montessori schools teach the same "Five Great Lessons" at the beginning of first, second, and third grades: the coming of the universe and the earth, the coming of life, the coming of human beings, the story of writing, and the story of numbers.
Naturally, these lessons are presented in ways that make sense to young children, but they are far from devoid of content. Montessori educators see them as the foundation and the wellspring of interest for everything to come: biology, geology, mathematics, reading, writing, chemistry and so on.
If it seems impossible or highly unlikely to you that 6 year olds could really get anything out of such lessons, I'll ask you to consider this. Our understanding of any new concept is always incomplete.
For example, how do children learn that some people they hear about (Peter Pan) are made up and never lived, whereas others (the Pharaohs) were real? Not by an inevitable process of neurological maturation that makes their brain "ready" for this information, whereupon they master it quickly. They learn it bit by bit, in fits and starts, sometimes seeming to get it, other times not.
And you can't always wait until children are "ready." Think about mathematics. Children are born understanding numerosity, but they understand it on a logarithmic scale--the difference between five and ten is larger than the difference between 70 and 75. To understand elementary mathematics they must learn to think of numbers on a linear scale. In this case, teachers have to undo Nature. And if you wait until the child is "developmentally ready" to understand numbers this way, you'll never teach them mathematics. It will never happen.
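The linear-versus-logarithmic point can be made concrete with a little arithmetic (a sketch of the idea, not anything from a particular study):

```python
import math

# On a linear scale, both pairs of numbers differ by exactly 5.
# On a logarithmic scale, the gap depends on the ratio, not the difference.
gap_5_to_10 = math.log(10) - math.log(5)    # log(10/5) = log 2, about 0.69
gap_70_to_75 = math.log(75) - math.log(70)  # log(75/70), about 0.07

# To a mind working in logs, 5 vs. 10 is roughly ten times the gap of
# 70 vs. 75 -- which is why the linear number line has to be taught.
print(gap_5_to_10 > gap_70_to_75)  # True
```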
In sum, I don't think developmental psychology is a good guide to what children should learn; it provides some help in thinking about how children learn. The best guide to "what" is what children know now, and where you want their learning to head.
I read a lot of blogs. I only comment when I think I have something to add (which is rare, even on my own blog) but I read a lot of them.
Today, I offer a plea and a suggestion for making education blogs less boring, specifically on the subject of standardized testing.
I begin with two Propositions about human behavior.
On Proposition 1: Standardized tests typically gain validity by showing that scores are associated with some outcome you care about. You seldom care about the items on the test specifically. You care about what they signify. Sometimes tests have face validity, meaning test items look like they test what they are meant to test—a purported history test asks questions about history, for example. Often they don’t, but the test is still valid. A well-constructed vocabulary test can give you a pretty good idea of someone’s IQ, for example.
Just as body temperature is a reliable, partial indicator of certain types of disease, a test score is a reliable, partial indicator of certain types of school outcomes. But in most circumstances your primary goal is not a normal body temperature; it’s that the body is healthy, in which case body temperature will be normal as a natural consequence of the healthy state.
If you attach stakes to the outcome, you can’t be surprised if some people treat the test as something different than that. They focus on getting body temperature to 98.6, whatever the health of the patient. That’s Proposition 1 at work. If a school board lets an administrator know that test scores had better go up or she can start looking for another job. . . well, what would you do in those circumstances? So you get test-prep frenzy. These are social consequences of tests, as typically used.
On Proposition 2: Some form of assessment is necessary. Without it, you have no idea how things are going. You won’t find many defenders of No Child Left Behind, but one thing we should remember is that the required testing did expose a number of schools—mostly ones serving disadvantaged children—where students were performing very poorly. And assessments have to be meaningful, i.e., reliable and valid. Portfolio assessments, for example, sound nice, but there are terrible problems with reliability and validity. It’s very difficult to get them to do what they are meant to do.
So here’s my plea. Admit that both Proposition 1 and Proposition 2 are true, and apply to testing children in schools.
People who are angry about the unintended social consequences of standardized testing have a legitimate point. They are not all apologists for lazy teachers or advocates of the status quo. Calling for high-stakes testing while taking no account of these social consequences, offering no solution to the problem . . . that's boring.
People who insist on standardized assessments have a legitimate point. They are not all corporate stooges and teacher-haters. Deriding “bubble sheet” testing while offering no viable alternative method of assessment . . . that's boring.
Naturally, the real goal is not to entertain me with more interesting blog posts. The goal is to move the conversation forward. The landscape will likely change consequentially in the next two years. This is the time to have substantive conversations.
Part of the fun and ongoing fascination of science is "the effect that ought not to work, yet does."
The impact of values affirmation on academic performance is such an effect.
Values affirmation "undoes" the effect of stereotype threat (also called identity threat). Stereotype threat occurs when a person is concerned about confirming a negative stereotype about his or her group. In other words, a boy is so consumed with thinking "Everyone expects me to do poorly on this test because I'm African-American" that his performance actually is compromised (see Walton & Spencer, 2009, for a review).
One way to combat stereotype threat is to give the student better resources to deal with the threat--make the student feel more confident, more able to control the things that matter in his or her life.
That's where values affirmation comes in.
In this procedure, students are provided a list of values (e.g., relationships with family members, being good at art) and are asked to pick three that are most important to them and to write about why they are so important. In the control condition, students pick three values they imagine might be important to someone else.
Randomized control trials show that this brief intervention boosts school grades (e.g., Cohen et al., 2006).
One theory is that values affirmation gives students a greater sense of belonging, of being more connected to other people.
(The importance of social connection is an emerging theme in other research areas. For example, you may have heard about the studies showing that people are less anxious when anticipating a painful electric shock if they are holding the hand of a friend or loved one.)
A new study (Shnabel et al., 2013) directly tested the idea that writing about social belonging might be a vital element in making values affirmation work.
In Experiment 1 they tested 169 Black and 186 White seventh graders in a correlational study. The students did the values-affirmation writing exercise, as described above. The dependent measure was change in GPA (pre- vs. post-intervention). The experimenters found that writing about social belonging in the writing assignment was associated with a greater increase in GPA for Black students (but not for White students, indicating that the effect is due to a reduction in stereotype threat).
In Experiment 2, they used an experimental design, testing 62 male and 55 female college undergraduates on a standardized math test. Some were specifically told to write about social belonging and others were given standard affirmation writing instructions. Female students in the former group outscored those in the latter group. (And there was no effect for male students.)
The brevity of the intervention relative to the apparent duration of the effect still surprises me. But this new study gives some insight into why it works in the first place.
Cohen, G. L., Garcia, J., Apfel, N., & Master, A. (2006). Reducing the racial achievement gap: A social-psychological intervention. Science, 313, 1307-1310.
Shnabel, N., Purdie-Vaughns, V., Cook, J. E., Garcia, J., & Cohen, G. L. (2013). Demystifying values-affirmation interventions: Writing about social belonging is a key to buffering against identity threat. Personality and Social Psychology Bulletin.
Walton, G. M., & Spencer, S. J. (2009). Latent ability: Grades and test scores systematically underestimate the intellectual ability of negatively stereotyped students. Psychological Science, 20, 1132-1139.
One of the great intellectual pleasures is to hear an idea that not only seems right, but that strikes you as so terribly obvious (now that you've heard it) you're in disbelief that no one has ever made the point before.
I tasted that pleasure this week, courtesy of a paper by Walter Boot and colleagues (2013).
The paper concerned the adequacy of control groups in intervention studies--interventions like (but not limited to) "brain games" meant to improve cognition, and the playing of video games, thought to improve certain aspects of perception and attention.
To appreciate the point made in this paper, consider what a control group is supposed to be and do. It is supposed to be a group of subjects as similar to the experimental group as possible, except for the critical variable under study.
The performance of the control group is to be compared to the performance of the experimental group, which should allow an assessment of the impact of the critical variable on the outcome measure.
Now consider video gaming or brain training. Subjects in an experiment might very well guess the suspected relationship between the critical variable and the outcome. They have an expectation as to what is likely to happen. If they do, then there might be a placebo effect--people perform better on the outcome test simply because they expect that the training will help, just as some people feel less pain when given a placebo that they believe is an analgesic.
The standard way to deal with that problem is to use an "active control." That means that the control group doesn't do nothing--they do something, but it's something that the experimenter does not believe will affect the outcome variable. So in some experiments testing the impact of action video games on attention and perception, the active control plays slow-paced video games like Tetris or The Sims.
The purpose of the active control is that it is supposed to make expectations equivalent in the two groups. Boot et al.'s simple and valid point is that it probably doesn't do that. People don't believe playing Sims will improve attention.
The experimenters gathered some data on this point. They had subjects watch a brief video demonstrating what an action video game was like or what the active control game was like. Then they showed them videos of the measures of attention and perception that are often used in these experiments. And they asked subjects "if you played the video game a lot, do you think it would influence how well you would do on those other tasks?"
And sure enough, people think that action video games will help on measures of attention and perception. Importantly, they don't think that they would have an impact on a measure like story recall. And subjects who saw the game Tetris were less likely to think it would help the perception measures, but were more likely to say it would help with mental rotation.
In other words, subjects see the underlying similarities between games and the outcome measures, and they figure that higher similarity between them means a greater likelihood of transfer.
As the authors note, this problem is not limited to the video gaming literature; the need for an active control that deals with subject expectations also applies to the brain training literature.
More broadly, it applies to studies of classroom interventions. Many of these studies don't use active controls at all. The control is business-as-usual.
In that case, I suspect you have double the problem. You not only have the placebo effect affecting students, you also have one set of teachers asked to do something new, and another set teaching as they typically do. It seems at least plausible that the former will be extra reflective on their practice--they would almost have to be--and that alone might lead to improved student performance.
It's hard to say how big these placebo effects might be, but this is something to watch for when you read research in the future.
Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problems with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.
Readers of this blog probably know about "the testing effect," later rechristened "retrieval practice." It refers to the fact that trying to remember something can actually help cement things in memory more effectively than further study.
A prototypical experiment looks like this: in Phase 1, all subjects study some material. In Phase 2, one group studies the material again while the other group takes a practice test on it. In Phase 3, everyone takes a final test.
The critical comparison is the test in Phase 3 of the experiment; those who take a test during Phase 2 do better than those who study more. There are lots of experiments replicating the effect and ruling out alternative explanations (e.g., motivation; see Agarwal, Bain & Chamberlain, 2012 for a review).
A consistent finding is that the benefit to memory is larger if the test is harder. But of course if the test is harder, then people might be more likely to make mistakes on the test in Phase 2. And if you make mistakes, perhaps you will later remember those incorrect responses.
But data show that, even if you get the answer wrong during Phase 2 you'll still see a testing benefit so long as you get corrective feedback. (Kornell, Hays & Bjork, 2009).
A tentative interpretation is that you get the benefit because the right answer is lurking in the background of your memory and is somewhat strengthened, even though you didn't produce it.
So that implies that the testing effect won't work if you simply don't know the answer at all. Suppose, for example, that I present you with an English vocabulary word you don't know and either (1) provide a definition that you read, (2) ask you to make up a definition, or (3) ask you to choose from among a couple of candidate definitions. In conditions 2 and 3 you obviously must simply guess. (And if you get it wrong I'll give you corrective feedback.) Will we see a testing effect?
That's what Rosalind Potts & David Shanks set out to find, and across four experiments the evidence is quite consistent. Yes, there is a testing effect. Subjects better remember the new definitions of English words when they first guess at what the meaning is--no matter how wild the guess.
Guessing by picking from amongst meanings provided by the experimenter provides no advantage over simply reading the definition. So there is something about the generation in particular that seems crucial.
What's behind this effect? Potts & Shanks think it might be attention. They suggest that you might pay more attention to the definition the experimenter provides when you've generated your own guess because you're more invested in the problem. Selecting one of the experimenter-provided definitions is too easy to provide this feeling of investment.
This account is speculation, obviously, and the authors don't pretend it's anything else. I wish that they were equally circumspect in their guess at the prospects for applying this finding in the classroom. Sure, it's an important piece of the overall puzzle, but I can't agree that "this line of research is relevant to any real world situation where novel information is to be learned, for example when learning concepts in science, economics, politics, philosophy, literary theory, or art."
The authors in fact cite two other studies that found no advantage for generating over reading, but Potts & Shanks think they have an account for what made those studies not very realistic (relative to classrooms) and what makes their conditions more realistic. They may yet be proven right, but college students in a lab studying word definitions is still a far cry from "any real world situation where novel information is to be learned."
The today-the-classroom-tomorrow-the-world rhetoric is over the top, but it's an interesting finding that may, indeed, prove applicable in the future.
Agarwal, P. K., Bain, P. M. & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
Kornell, N., Hays, M. J., & Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 989-998.
Potts, R., & Shanks, D. R. (2013, July 1). The Benefit of Generating Errors During Learning. Journal of Experimental Psychology: General. Advance online publication. doi:10.1037/a0033194
James Paul Gee, a professor at Arizona State University, is known as a pioneer in thinking about the educational uses of gaming. His book, What Video Games Have to Teach Us About Learning and Literacy, is considered a landmark in the field.
Thus his new book, The Anti-Education Era: Creating Smarter Students through Digital Learning, is bound to attract interest.
Unfortunately, the book ultimately disappoints. Chief among the problems is that—despite the subtitle--there is very little solid advice here about how to change education.
In fact, the first 150 pages scarcely mention education at all. It is a laundry list—16 chapters in all—of the weaknesses of human cognition. This is territory that has been well covered in other popular books by Chabris & Simons, Ariely, Schacter, Kahneman, and others.
I can’t really fault Gee for not doing as creditable a job in describing human cognition as these authors. It is, after all, their bread and butter, not Gee’s. But the presentation is slow-paced and there are some errors. For example, Gee gets the definition of grit wrong (p. 202). He flatly states that we think well only when we care about what we are doing (p. 12), but the relationship between motivation and performance depends on the complexity of the task and the expertise of the performer.
It’s only the last 60 pages of the book that address ways that digital technologies might come to our aid in addressing the frailties of human cognition. Here Gee is on his home turf, but it’s too well-trod: getting people to work together, ensuring that people feel safe, and so on.
The problem is not that people need to be persuaded that these are good ideas. The problem is that we have evidence in hand that they don’t always work. That means that we need a more nuanced understanding about the conditions under which these ideas work. Gee half recognizes this need, and on occasion warns that solutions will not be simple. But he never takes the next step and outlines the complexities for us.
For example Gee retells (via Jonah Lehrer) the story of a building at MIT that housed professors from a wide variety of disciplines, with a concomitant flowering of intellectual cross-fertilization. Gee quotes (with approval, I guess) Lehrer: “The lesson of Building 20 is that when the composition of the group is right—enough people with different perspectives running into one another in unpredictable ways—the group dynamic will take care of itself.”
As an academic who has been doing interdisciplinary work for 20 years, I would counter: “Like hell it does.”
Virtually every school of education is housed in a building with people trained in different disciplines, and interdisciplinary work remains rare. For reasons I won’t get into here (and much to the despair of university administrators), interdisciplinary work is hard.
So despite the title, educators will find little of interest here.
Common sense strikes back
Gee had better hope he does not meet up with Tom Bennett in a dark alley. Bennett is a British teacher who has been in the classroom since 2003, and has written for the Times Educational Supplement since 2009. (If you’re a reader outside the UK, you may not know that this is a very widely read weekly.)
Bennett’s fourth book, just out, is titled Teacher Proof: Why Research in Education Doesn’t Always Mean What it Claims, and What You Can Do about It.
The book comprises three sections: in the first, Bennett provides an overview of education research. In the second he evaluates some education theories, and in the third he suggests a better way forward.
As I read Teacher Proof, I kept thinking “This is one pissed off teacher.” The language is not at all bitter—in fact, it’s frequently quite funny, and Bennett is a marvelous writer—but you can tell that he feels cheated.
Cheated of his time, sitting in professional development sessions that advise an experienced teacher to change his practice based on an evidence-free theory.
Cheated of the respect he is due, as researchers with no classroom experience presume to tell him his job, and blame him (or his students) if their magic beans don’t grow a beanstalk.
Cheated of the opportunity to devote all of his attention to his students, given that researchers are not simply failing to help him do his job, but are actively getting in his way, to the extent that their cockamamie ideas infect districts and schools.
So what does this angry teacher have to say?
The first third of the book contrasts science and social science. The upshot, as Bennett describes it, is that the social sciences aspire to the precision of the “hard” sciences but can’t get there. They are nevertheless full of pretensions, “walking around in mother’s heels and pearls,” as Bennett says, pretending to be a more mature version of themselves.
There’s not much nuance in this view. As Bennett describes it, education research is not just badly done science, it is pretty much impossible-to-do-well science, given the nature of the subject matter.
This section of the book struck me as odd, both because it didn’t match my impression of the author’s view, based on his other writings, and because it conflicts with the second section of the book.
This section offers a merciless, overdue, and often funny skewering of speculative ideas in education: multiple intelligences, Brain Gym, group work, emotional intelligence, 21st century skills, technology in education, learning styles, learning through games. Bennett has an unerring eye for the two key problems in these fads: in some cases, the proposed “solutions” are pure theory, sprouting from bad (or absent) science (e.g., learning styles, Brain Gym); others are perfectly sensible ideas transmogrified into terrible practice when people become too dogmatic about their application (group learning, technology).
Bennett ends each chapter with a calm, pragmatic take, e.g., “yes, I use technology a lot. Here’s where I find it useful.” As he says early on, “Experience trumps theory every time.”
But here’s where I think the second section of the book conflicts with the first. Bennett’s consistent criticism of these ideas is that there is no evidence to back them up. To me, this indicates that Bennett doesn’t think that social science research is impossible—he’s just fed up with social science research that’s done badly. In the third section of the book Bennett tells us what different actors in the education world ought to do. It is the briefest section by far--less than ten pages—and the brevity matches the tone of the advice: “Look, a lot of this really isn’t that complicated, gang.”
To the “what people should do” list, I’d add another directive: schools of education should raise their standards for what constitutes education research. Bennett is right—too much of it is second-rate.
There is an ugly system of self-interest that has produced the terrible research (and in turn, the need for Bennett’s book). Professors want to publish in peer-reviewed journals because that brings prestige. So publishers create “peer-reviewed” journals that have very low standards because journals bring them money. Institutional libraries buy these terrible journals (keeping them in business) because faculty say that they are needed so that faculty and students can keep up with the latest research. And universities are reluctant to blow the whistle on the whole charade because schools of education—second-rate or not—bring tuition dollars.
Teacher Proof is a worthy read. There have been scattered criticisms of the theories that Bennett takes on, but they have seldom been collected in one place in such readable prose, and seldom (if ever) written with a teacher's eye for the details of practice.
Teacher Proof is also a timely read. In the UK, impatience with the influence that shoddy science has had on teaching practice is mounting. Teachers are sick of being told what to do, with phantom “research” used as the excuse. Would that the same happened in the US! Teacher Proof may help.
Does music training improve other academic skills?
One sometimes hears the inclusion of music in the curriculum justified by the claim that it improves mathematics or reading.
I’ve never cared for this justification because I think students should study music for its own sake, whether or not it boosts other skills. And it seems a chancy argument; if it turns out that music doesn’t help other academic work, does that mean it should be dumped?
Setting that argument aside, it’s certainly of interest from a cognitive point of view to know whether musical training has an impact on reading or math. There are a good number of correlational studies showing a positive effect, but few experimental data.
Now a new experimental study (Rautenberg, in press) shows that music training does have some positive effect on reading.
A total of 159 German 1st graders participated. The music training lasted 8 months and focused on three areas: rhythmic skills, tonal/melodic skills, and auditory discrimination of timbre and sound intensity. There were two control groups: one received no training; the other was an active control that received training in art.
The results were fairly robust, as shown in the graph of single-word reading accuracy at the beginning and end of the year.
What’s behind the benefit? Language has a musical aspect to it, referred to as prosody. And indeed, children’s ability to appreciate the rhythmic aspect of speech is correlated with the ease with which they learn to read, even when controlling for phonemic awareness. In German (and in English), certain letter combinations signal certain stress patterns, so there is a signal in the written language that children can learn. The idea is that children are less likely to learn the association between certain written letter patterns and their corresponding rhythms in speech if they don’t perceive the rhythms of speech very well.
That’s the argument. More fine-grained analyses of the data partially support it.
The argument predicts that it’s rhythm that’s important, not tonality, and the data do show significant correlations of reading with ability in the former, but not the latter.
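As an aside, the statistic behind claims like these—a correlation between two skill measures while "controlling for" a third—is a partial correlation. Here is a minimal sketch of the idea: regress the third variable out of both measures and correlate the residuals. All names and numbers below are made up for illustration; this is not the study's data or analysis code.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both.

    Illustrates "correlated even when controlling for" a third variable,
    as in the rhythm-reading finding described above.
    """
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))

    def residual(v):
        # Fit v = slope * z + intercept, return what the fit can't explain.
        slope, intercept = np.polyfit(z, v, 1)
        return v - (slope * z + intercept)

    return np.corrcoef(residual(x), residual(y))[0, 1]

# Synthetic scores for 8 hypothetical children (not real data):
rhythm   = [3, 5, 2, 6, 4, 7, 1, 5]            # rhythm-skill score
reading  = [40, 55, 30, 60, 45, 70, 25, 50]    # word-reading accuracy
phonemic = [10, 12, 9, 13, 11, 14, 8, 12]      # phonemic awareness

print(round(partial_corr(rhythm, reading, phonemic), 2))
```

If the partial correlation stays high, rhythm skill predicts reading beyond what phonemic awareness already accounts for—which is the shape of the correlational evidence the argument leans on.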
The argument further predicts that the training ought to reduce a particular type of error: one in which a child reads the phonetic sounds correctly but gets the rhythm wrong, segmenting the word into syllables incorrectly or accenting the wrong syllable. This prediction was not supported.
All in all, this study seems to be an important addition--although certainly not a conclusive one--to the argument that some types of music training aid children's learning to read, at least in certain languages.
Rautenberg, I. (in press). The effects of musical training on the decoding skills of German-speaking primary school children. Journal of Research in Reading.