I'm very proud that an excerpt from When Can You Trust the Experts? is the cover story of American Educator.
This is a negative finding, so I'll keep it brief.
How do kids acquire new vocabulary? This process is poorly understood.
An influential theory has been that the phonological loop in working memory provides essential support. The phonological loop is like a little tape loop lasting perhaps two seconds; it allows you to keep active a sound you hear.
The idea is that a new unfamiliar word can be placed on the loop for practice and to keep it around while the surrounding context helps you figure out the meaning.
If so, you'd predict that the larger the capacity of the phonological loop, and the greater the fidelity with which it "records," the better children will be able to learn new vocabulary.
The efficacy of the phonological loop is measured by having kids repeat nonsense words. Initially they are short--tozzy--but they increase in length to pose greater challenge to the phonological loop--liddynappish.
Several studies have shown correlations between phonological loop capacity and vocabulary size in children (for a review, see Melby-Lervåg & Lervåg, 2012).
The problem: it could be that having a big vocabulary makes the phonological loop test easier, because it makes it more likely that some of the nonsense words remind you of a word you already know. (And so you have the semantics of that word helping you remember the to-be-remembered word.) Indeed, even proponents of the hypothesis argue that's what happens when kids get older.
What you really need is a study that measures phonological loop capacity at time 1, and finds that it predicts vocabulary size at time 2. There is one such study (Gathercole et al., 1992), but it used a statistical analysis (cross-lagged correlation) that is now considered less than ideal.
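The logic of a cross-lagged analysis can be sketched in a few lines of Python. This is an illustrative toy, not the analysis from either paper: the data and the `pearson`/`cross_lagged` helpers are invented for demonstration.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def cross_lagged(skill_t1, vocab_t1, skill_t2, vocab_t2):
    """Return the two cross-lagged correlations.

    If phonological-loop skill drives later vocabulary, the first
    value (skill at time 1 with vocabulary at time 2) should be
    reliably larger than the second (the reverse lag).
    """
    return pearson(skill_t1, vocab_t2), pearson(vocab_t1, skill_t2)

# Toy data for four children, measured at two time points.
skill_t1 = [1, 2, 3, 4]
vocab_t1 = [1, 3, 2, 4]
skill_t2 = [2, 1, 4, 3]
vocab_t2 = [2, 4, 6, 8]

forward, backward = cross_lagged(skill_t1, vocab_t1, skill_t2, vocab_t2)
```

One reason the technique has fallen out of favor is that a difference between the two lagged correlations can arise simply because the two measures differ in stability or reliability, not because one causes the other, which is part of why later work turned to model-based approaches.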
A new study (Melby-Lervåg et al., in press) used probably the best methodology of any used to date. It was a longitudinal study that tested nonword repetition ability and vocabulary once each year between the ages of 3 and 7.
They used a different statistical technique--simplex models--to assess causal relationships. They found that both nonword repetition and vocabulary show growth, both show stability across children, and both are moderately correlated, but there was no evidence that one influenced the growth of the other over time.
The group then reanalyzed the Gathercole et al (1992) data and found the same pattern.
This is one depressing paper. Something we thought we knew--the phonological loop contributes to vocabulary learning--may well be wrong.
If anyone is working on a remediation program for young children that centers on improving the working of the phonological loop, it's probably time to rethink that idea.
Gathercole, S. E., Willis, C., Emslie, H., & Baddeley, A. (1992). Phonological memory and vocabulary development during the early school years: A longitudinal study. Developmental Psychology, 28, 887–898.
Melby-Lervåg, M., & Lervåg, A. (2012). Oral language skills moderate nonword repetition skills in children with dyslexia: A meta-analysis of the role of nonword repetition skills in dyslexia. Scientific Studies of Reading, 16, 1–34.
Melby-Lervåg, M., Lervåg, A., Lyster, S.-A. H., Klem, M., Hagtvet, B., & Hulme, C. (in press). Nonword-repetition ability does not appear to be a causal influence on children's vocabulary development. Psychological Science.
The importance of a good relationship between teacher and student is no surprise. More surprising is that the "human touch" is so powerful it can improve computer-based learning.
In a series of ingenious yet simple experiments, Rich Mayer and Scott DaPra showed that students learn better from an onscreen slide show when it is accompanied by an onscreen avatar that uses social cues.
Eighty-eight college students watched a 4-minute PowerPoint slide show that explained how a solar cell converts sunlight to electricity. It consisted of 11 slides and a voice-over explanation.
Some subjects saw an avatar that used a full complement of social cues (gesturing, changing posture, facial expressions, changes in eye gaze, and lip movements synchronized to speech), all meant to direct student attention to relevant features of the slide show.
Other subjects saw an avatar that maintained the same posture, maintained eye gaze straight ahead, and did not move (except for lip movements synchronized to speech).
A third group saw no avatar at all, but just saw the slides and listened to the narration.
All subjects were later tested with fact-based recall questions and transfer questions (e.g., "How could you increase the electrical output of a solar cell?") meant to test subjects' ability to apply their knowledge to new situations.
There was no difference among the three groups on the retention test, but there was a sizable advantage (d = .90) for the high embodiment subjects on the transfer test. (The low-embodiment and no-avatar groups did not differ.)
A second experiment showed that the effect was only obtained when a human voice was used; the avatar did not boost learning when synchronized to a machine voice.
The experimenters emphasized the social aspect of the learning situation: students process the slideshow differently because the avatar is "human enough" to prompt the kind of social interaction learners would have with a real person. This interpretation seems especially plausible in light of the second experiment; all of the more cognitive cues (e.g., the shifts in the avatar's eye gaze prompting shifts in learners' attention) were still present in the machine-voice condition, yet there was no advantage to learners.
There is something special about learning from another person. Surprisingly, that other person can be an avatar.
Mayer, R. E. & DaPra, C. S. (2012). An embodiment effect in computer-based learning with animated pedagogical agents. Journal of Experimental Psychology: Applied, 18, 239-252.
Someone needs to tell Glen Whitney that algebra doesn't matter.
Poor, deluded Whitney has seen the negative attitude that most Americans have about mathematics--it's boring, it's confusing, it's unrelated to everyday life--and concluded that Americans need a mathematical awakening.
To prompt it, he's spearheading the creation of a Math Museum in New York City, the only one of its kind in North America. (There had been a small math museum on Long Island, the Goudreau Museum; it closed in 2006.)
Whitney reports that he loved math in high school and college, but didn't think he was likely to make it as a pure researcher. He went to work for a hedge fund, creating statistical models for trading. When the Goudreau Museum closed, he organized a group to explore opening a math museum that would be more ambitious.
A rendering of the plan is shown below.
The plan is for exhibits similar to those seen in science museums--plenty of interaction and movement on the part of visitors, and a focus on the fact that mathematics is all around us.
So much so that Whitney currently gives math walking tours in New York City. As he notes in a recent interview in Nature, math is in "the algorithms used to control traffic lights, the mathematical issues involved in keeping the subway running, the symmetry of the mouldings on the sides of buildings and the unusual geometry that gives gingko trees their distinctive shape."
A traveling exhibition, Math Midway, has been making the rounds of science museums around the country, whetting appetites for the grand opening (December 15, 2012).
The most popular exhibit is a tricycle with square wheels that can be ridden smoothly on a track of inverted curves calculated to keep the trike's axles level. In the photo below it's ridden by Joel Klein (former New York City schools chancellor and current leader of News Corporation's education venture).
Whitney says that the beauty of the tricycle exhibit is that it gives people the sense that math can make the impossible seem possible.
Next impossible challenge: persuade people who think that math is mostly irrelevant and should be dropped from public schooling for most kids that they are wrong.
The Math Museum looks like a long step toward making that goal seem possible.
More at MoMath.org.
Anyone who has spent much time in classrooms has the sense that just a couple of disorderly kids can really disrupt learning for everyone. These kids distract the other students, and the teacher must allocate a disproportionate amount of attention to them to keep them on task.
Obvious though this point seems, there have been surprisingly few studies of just how high a cost disruptive kids exact on the learning of others.
Lori Skibbe and her colleagues have just published an interesting study on the subject.
Skibbe measured self-regulation in 445 1st graders, using the standard head-toes-knees-shoulders (HTKS) task. In this task, children must first follow the instructor's directions ("Touch your toes. Now touch your shoulders."). In a second phase, they were instructed to do the opposite of what the instructor said: when told to touch their toes, they were to touch their head, for example. This is a well-known measure of self-regulation in children this age (e.g., Ponitz et al., 2008).
Researchers also evaluated the growth over the first grade year in children's literacy skills, using two subtests from the Woodcock-Johnson: Passage Comprehension and Picture Vocabulary.
We would guess that children's growth in literacy would be related to their self-regulation skill (as measured by their HTKS score). What Skibbe et al showed is that the class average HTKS score also predicts how much an individual child will learn, even after you statistically account for that child's HTKS score. (Researchers also accounted for the school-wide percentage of kids qualifying for free or reduced lunch, as academic growth might covary with self-regulation due to SES differences.)
Thus it would seem that kids who have trouble inhibiting impulses don't just get distracted from their work; when they get distracted from their work they likely engage in behaviors that distract other kids too.
Skibbe then replicated this finding with a second cohort of 633 children in 68 classrooms.
The effects were sizable both for comprehension (d = .35 for cohort 1 and .31 for cohort 2) and for vocabulary (d = .24 for cohort 1 and .16 for cohort 2). To provide some perspective, the effect on comprehension is close to the effect that an effective principal makes to kids' learning (d = .36) according to Hattie's 2009 meta-analysis.
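For readers unfamiliar with d, Cohen's d is simply the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch; the two groups below are invented numbers for illustration, not Skibbe's data.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Invented scores for two small groups.
high_group = [2, 4, 6]
low_group = [1, 3, 5]
effect = cohens_d(high_group, low_group)
```

On this scale, a d of .35 means the two groups' averages differ by about a third of a standard deviation.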
So a calm classroom makes for a better learning environment. Who didn't know that?
Well, I might have guessed that the effect was present, but I wouldn't have guessed it is as large as it is.
To me, this finding also brings to mind the likely importance of peer self-regulation in older grades. Skibbe et al measured self-regulation at first grade, when most teachers still have ready tools to deal with disruptive behavior: most children (but not all, certainly) are ready to yield to the teacher's authority.
That's less often true in middle or high school. What tools do teachers have for older kids? What can be done when kids compromise not only their own education, but those of their classmates?
This strikes me as a terribly difficult problem, and one for which I am without ideas. But it seems like a vital problem to address. Skibbe's work tells me that the effects of disruptive peers may be worse than we would have guessed.
Hattie, J. A. C. (2009). Visible Learning. London: Routledge.
Ponitz, C. C., McClelland, M. M., Jewkes, A. M., Connor, C. M., Farris, C. L., & Morrison, F. J. (2008). Touch your toes! Developing a direct measure of behavioral regulation in early childhood. Early Childhood Research Quarterly, 23, 141–158.
Skibbe, L. E., Phillips, B. M., Day, S. L., Brophy-Herb, H. E., & Connor, C. M. (2012). Children's early literacy growth in relation to classmates' self-regulation. Journal of Educational Psychology, 104, 541-553.
The Common Core standards for English Language Arts call for a significant dose of non-fiction reading in support of reading comprehension, a finding I've discussed before. That requirement has led to some puzzlement (and occasional indignation). Can't kids gain knowledge of the world from fiction as well? Information about science, history, technology, civics, geography, etc.?
The answer is “they can and they do.” But there is an important caveat on this conclusion. Beth Marsh and her colleagues offer an excellent summary of this research in a new article published in Educational Psychology Review.
The advantage of fiction is that the narrative can engage students, transport them into the story. The fear is that readers will assume that information in fiction is true, whereas fiction may well contain inaccuracies. We don’t expect fiction to be vetted for accuracy the way a non-fiction source would be. (Certainly Hollywood movies are notorious for playing fast-and-loose with the truth.)
Isn’t it possible, then, that these inaccuracies would be later remembered by subjects as true?
Yes. In her experiments Marsh uses short stories that refer to facts about the world. The facts are either accurate (it happened on the largest ocean, the Pacific), inaccurate (it happened on the largest ocean, the Atlantic), or, in a control condition, absent (it happened on the largest ocean).
Later, subjects take a general information test that includes a probe of the target information (Which is the largest ocean on earth?). The question is whether reading the accurate or inaccurate information influences subjects' response to the question (compared to the control condition).
As shown in the figure, seeing the correct information makes it more likely you’ll get the answer correct on the test (left panel) and less likely you’ll get it wrong (right panel). Reading the misleading information makes it less likely you’ll get it correct (left panel) and more likely you’ll get it wrong (right panel).
Thus, students are influenced by inaccurate information (at least for the duration of the experiment, as long as a week) and prior knowledge is not protective. In other words, the misleading information has an impact even for stuff that most of the students knew before the experiment started.
Even more alarming, a general warning “there may be misinformation here” was not effective (Marsh & Fazio, 2006). It may be that readers are caught up in the narrative and don’t worry overmuch about evaluating each bit of factual content they come across.
The good news is that a specific warning telling subjects exactly which bit of information cannot be trusted is very effective in preventing subjects from absorbing the inaccuracy into their beliefs (Butler et al 2009).
And “absorbing” is the right word: typically, readers later report that they “knew” the inaccurate information before the start of the experiment.
So, can fictional sources be used to help students learn new knowledge about the world? Yes, but teachers must be aware that the inaccuracies may be learned as well, and ideally they will inoculate students against inaccuracies with specific warnings.
Butler, A. C., Zaromb, F., Lyle, K. B., & Roediger, H. L., III. (2009). Using popular films to enhance classroom learning: The good, the bad, and the interesting. Psychological Science, 20, 1161–1168.
Marsh, E. J., Butler, A. C. & Umanath, S. (2012) Using fictional sources in the classroom: Applications from cognitive psychology. Educational Psychology Review, 24, 449-469.
Marsh, E. J., & Fazio, L. K. (2006). Learning errors from fiction: Difficulties in reducing reliance on fictional stories. Memory & Cognition, 34, 1140–1149.
A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here). Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, understandably researchers have wanted to know whether there is a way to boost executive function in kids.
Tools of the Mind (Bodrova & Leong, 2007) looked promising. It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)
A few studies of relatively modest scale (but not trivial--100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al, 2008; Diamond et al, 2007). But now some much larger scale followup studies (800-2000 kids) have yielded discouraging results.
These studies were reported at a symposium this Spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice.
Researchers at the symposium reported the results of three studies. Tools of the Mind did not have an impact in any of the three.
What should we make of these discouraging results?
It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training.
Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind. Now we are left with the dispiriting conclusion "More study is needed."
Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S.(2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299–313.
Bodrova, E., & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education (2nd ed.). New York: Merrill.
Diamond, A. & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959-964.
Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318, 1387-1388.
In an op-ed piece in August 19th's New York Times, Bronwen Hruska tells of her experiences with her son, Will, between the 3rd and 5th grade. Will was misdiagnosed with ADHD.
Hruska and her husband were initially approached by Will's teacher, who thought his behavior indicated ADHD. Though they were doubtful, they took him to a psychiatrist, who said that Will did indeed have ADHD and prescribed stimulant medication. Will took the medication for two years but stopped when he concluded that Adderall is dangerous. Now that he's a happy high school sophomore, there is not much reason to think that the medication was ever necessary.
How did this happen?
The title of the piece--"Raising the Ritalin Generation"--provides a clue to the author's conclusion. Hruska suggests that our society is sick. Teachers are too quick to suggest medication for kids. Schools "want no part" of average kids; they expect kids to be exceptional, extraordinary. And we, as a society, are teaching kids that average is not good enough, and that if you're only average you should take a pill.
But there's an important piece missing from this picture--parents.
From what's written, it sure does sound like Will was misdiagnosed. But I can't help but wonder why his parents didn't know it at the time.
ADHD diagnosis requires that symptoms be present in at least two settings. So it's not enough that Will shows troubling symptoms in school: he would also need to show them at home, in social settings, or in some other context for him to be diagnosed. There's no indication of a problem outside of school.
It's also notable that the mere presence of symptoms is not enough: the symptoms must be clinically significant; in other words, they must obstruct the child's ability to function well in that setting. And Hruska maintains that Will seems like a typical kid to her.
This is where Hruska loses me. Why would she accept the diagnosis if symptoms were observed in just one context, and if she believed there was limited evidence that the symptoms were clinically significant in that context? Why wouldn't she challenge the physician who diagnosed him?
I'm led to wonder if she knew the diagnostic criteria. They aren't hard to find: Google "adhd diagnosis." The first link is the CDC site, which offers a reader-friendly version of the DSM-IV criteria.
Are our kids pill-happy? Are we raising a Ritalin generation? If so, the solution is not to lay all of the blame on schools and society or even on physicians who make mistakes, and to portray parents as powerless victims. The solution is for parents to make better use of the wealth of scientific information available to us, and to ask questions when a doctor or other authority makes claims that fly in the face of our experience.
Making a change to education that seems like a clear improvement is never easy. Or almost never.
Judith Harackiewicz and her colleagues have recently reported an intervention that is inexpensive, simple, and leads high school students to take more STEM courses.
The intervention had three parts, administered over 15 months when students were in the 10th and 11th grades. In October of 10th grade, researchers mailed a brochure to each household titled Making Connections: Helping Your Teen Find Value in School. It described the connections between math, science, and daily life, and included ideas about how to discuss this topic with students.
In January of 11th grade a second brochure was sent. It covered similar ideas, but with different examples. Parents also received a letter that included the address of a password-protected website devised by researchers, which provided more information about STEM and daily life, as well as STEM careers.
In Spring of 11th grade, parents were asked to complete an online questionnaire about the website.
There were a total of 188 students in the study: half received this intervention, and the control group did not.
Students in the intervention group took more STEM courses during their last two years of high school (8.31 semesters) than control students (7.50 semesters).
This difference turned out to be entirely due to differences in elective, advanced courses, as shown in the figure below.
An important caveat about this study: all of the subjects are participating in the Wisconsin Study of Families and Work, which began in 1990, when the mothers were in their fifth month of pregnancy.
The first brochure that researchers sent to subjects included a letter thanking them for their ongoing participation in the longer study. Hence, subjects could reasonably conclude that the present study was part of the longer study.
That's worth bearing in mind because ordinary parents might not be so ready to read brochures mailed to them by strangers, nor to visit suggested websites.
But that's not a fatal flaw of the research. It just means that we can't necessarily count on random parents reading the materials with the same care.
To me, the effect is still remarkable. To put it in perspective, researchers also measured the effect of parental education on taking STEM courses. As many other researchers have found, the kids of better-educated parents took more STEM courses. But the effect of the intervention was nearly as large as the effect of parental education!
Clearly, further work is necessary but this is an awfully promising start.
Harackiewicz, J. M., Rozek, C. S., Hulleman, C. S., & Hyde, J. S. (in press). Helping parents to motivate adolescents in mathematics and science: An experimental test of a utility-value intervention. Psychological Science.
Steven Levitt, of Freakonomics fame, has unwittingly provided an example of how science applied to education can go wrong.
On his blog, Levitt cites a study he and three colleagues published (as an NBER working paper). The researchers rewarded kids for trying hard on an exam. As Levitt notes, the goal of previous research has been to get kids to learn more. That wasn't the goal here. It was simply to get kids to try harder on the exam itself, to really show everything that they knew.
Among the findings: (1) it worked. Offering kids a payoff for good performance prompted better test scores; (2) it was more effective if, instead of offering a payoff for good performance, researchers gave them the payoff straight away and threatened to take it away if the student didn't get a good score (an instance of a well-known and robust effect called loss aversion); (3) children prefer different rewards at different ages. As Levitt puts it "With young kids, it is a lot cheaper to bribe them with trinkets like trophies and whoopee cushions, but cash is the only thing that works for the older students."
There are a lot of issues one could take up here, but I want to focus on Levitt's surprise that people don't like this plan. He writes "It is remarkable how offended people get when you pay students for doing well – so many negative emails and comments." Levitt's surprise gets at a central issue in the application of science to education.
Scientists are in the business of describing (and thereby enabling predictions of) the Natural world. One such set of phenomena concerns when students put forth effort and when they don't.
Education is not a scientific enterprise. The purpose is not to describe the world, but to change it, to make it more similar to some ideal that we envision. (I wrote about this distinction at some length in my new book. I also discussed it in this brief video.)
Thus science is ideally value-neutral. Yes, scientists seldom live up to that ideal; they have a point of view that shapes how they interpret data, generate theories, etc., but neutrality is an agreed-upon goal, and lack of neutrality is a valid criticism of how someone does science.
Education, in contrast, must entail values, because it entails selecting goals. We want to change the world--we want kids to learn things--facts, skills, values. Well, which ones? There's no better or worse answer to this question from a scientific point of view.
A scientist may know something useful to educators and policymakers, once the educational goal is defined; i.e., the scientist offers information about the Natural world that can make it easier to move towards the stated goal. (For example, if the goal is that kids be able to count to 100 and to understand numbers by the end of preschool, the scientist may offer insights into how children come to understand cardinality.)
What scientists cannot do is use science to evaluate the wisdom of stated goals.
And now we come to people's hostility to Levitt's idea of rewards for academic work.
I'm guessing most people don't like the idea of rewards for the same reason I don't. I want my kids to see learning as a process that brings its own reward. I want my kids to see effort as a reflection of their character, to believe that they should give their all to any task that is their responsibility, even if the task doesn't interest them.
There is, of course, a large, well-known research literature on the effect of extrinsic rewards on motivation. Readers of this blog are probably already familiar with it--if so, skip the next paragraph.
The problem is one of attribution. When we observe other people act, we speculate on their motives. If I see two people gardening, one paid and the other unpaid, I'm likely to assume that one gardens because he's paid and the other because he enjoys gardening. It turns out that we make these attributions about our own behavior as well. If my child tries her hardest on a test, she's likely to think "I'm the kind of kid who always does her best, even on tasks she doesn't care for." If you pay her for her performance, she'll think "I'm the kind of kid who tries hard when she's paid." This research began in the 1970s and has held up very well. Kids work harder for rewards... until the rewards stop. Then they engage in the task even less than they did before the rewards started. I summarized some of this work here.
In the technical paper, Levitt cites some of the reviews of this research but downplays the threat, pointing out that when motivation is low to start with, there's not much danger of rewards lowering it further. That's true, and I've made a similar argument: cash rewards might be used as a last-ditch effort for a child who has largely given up on school. But that would dictate using rewards only with kids who were not motivated to start with, not in a blanket fashion as was done in Levitt's study. And I can't see concluding that elementary school kids were so unmotivated that they were otherwise impossible to reach.
In addressing the threat to student motivation with research, Levitt is approaching the issue in the right way (even if I think he's incorrect in how he does so).
But on the blog (in contrast to the technical paper), Levitt addresses the threat in the wrong way. He skips the scientific argument and simply belittles the idea that parents might object to someone paying their child for academic work. He writes:
Perhaps the critics are right and the reason I’m so messed up is that my parents paid me $25 for every A that I got in junior high and high school. One thing is certain: since my only sources of income were those grade-related bribes and the money I could win off my friends playing poker, I tried a lot harder in high school than I would have without the cash incentives. Many middle-class families pay kids for grades, so why is it so controversial for other people to pay them?
I think Levitt is getting "so many negative emails and comments" because he's got scientific data that serve one type of goal (getting kids to try hard on exams), the application of which conflicts with another goal (encouraging kids to see academic work as its own reward). So he scoffs at the latter.
I see this blog entry as an object lesson for scientists. We offer something valuable--information about the Natural world--but we hold no status in deciding what to do with that information (i.e., setting goals).
In my opinion, Levitt's blog entry shows he has a tin ear for the possibility that others do not share his goals for education. If scientists are oblivious to or dismissive of those goals, they can expect not just angry emails; they can expect to be ignored.
The goal of this blog is to provide pointers to scientific findings, applicable to education, that I think ought to receive more attention.