The Common Core standards for English Language Arts call for a significant dose of non-fiction reading, in support of reading comprehension, a finding I’ve discussed before. That requirement has led to some puzzlement (and occasional indignation). Can’t kids gain knowledge of the world from fiction as well? Information about science, history, technology, civics, geography, and so on?
The answer is “they can and they do.” But there is an important caveat to this conclusion. Beth Marsh and her colleagues offer an excellent summary of this research in a new article published in Educational Psychology Review.
The advantage of fiction is that the narrative can engage students, transport them into the story. The fear is that readers will assume that information in fiction is true, whereas fiction may well contain inaccuracies. We don’t expect fiction to be vetted for accuracy the way a non-fiction source would be. (Certainly Hollywood movies are notorious for playing fast-and-loose with the truth.)
Isn’t it possible, then, that these inaccuracies would be later remembered by subjects as true?
Yes. In her experiments Marsh uses short stories that refer to facts about the world. The facts are either accurate (“it happened on the largest ocean, the Pacific”), inaccurate (“it happened on the largest ocean, the Atlantic”), or, in a control condition, absent (“it happened on the largest ocean”). Later, subjects take a general information test that includes a probe of the target information (“Which is the largest ocean on earth?”). The question is whether reading the accurate or inaccurate information influences subjects’ responses to the question, compared with the control condition.
Left panel: Correct answers on final test, given the type of information in the story: correct, neutral (i.e. control) or misleading. Right panel: Incorrect answers on final test, given the type of information in the story.
As shown in the figure, seeing the correct information makes it more likely you’ll get the answer correct on the test (left panel) and less likely you’ll get it wrong (right panel). Reading the misleading information makes it less likely you’ll get it correct (left panel) and more likely you’ll get it wrong (right panel).
Thus, students are influenced by inaccurate information (at least for the duration of the experiment, which can be as long as a week), and prior knowledge is not protective. In other words, the misleading information has an impact even on facts that most of the students knew before the experiment started.
Even more alarming, a general warning “there may be misinformation here” was not effective (Marsh & Fazio, 2006). It may be that readers are caught up in the narrative and don’t worry overmuch about evaluating each bit of factual content they come across.
The good news is that a specific warning telling subjects exactly which bit of information cannot be trusted is very effective in preventing subjects from absorbing the inaccuracy into their beliefs (Butler et al 2009).
And “absorbing” is the right word: typically, readers later report that they “knew” the inaccurate information before the start of the experiment.
So, can fictional sources be used to help students learn new knowledge about the world? Yes, but teachers must be aware that the inaccuracies may be learned as well, and ideally they will inoculate students against inaccuracies with specific warnings.
Butler, A. C., Zaromb, F., Lyle, K. B., & Roediger, H. L., III. (2009). Using popular films to enhance classroom learning: The good, the bad, and the interesting. Psychological Science, 20, 1161–1168.
Marsh, E. J., Butler, A. C., & Umanath, S. (2012). Using fictional sources in the classroom: Applications from cognitive psychology. Educational Psychology Review, 24, 449–469.
Marsh, E. J., & Fazio, L. K. (2006). Learning errors from fiction: Difficulties in reducing reliance on fictional stories. Memory & Cognition, 34, 1140–1149.
A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here). Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, researchers have understandably wanted to know whether there is a way to boost executive function in kids.

Tools of the Mind (Bodrova & Leong, 2007) looked promising.
It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)

A few studies of relatively modest scale (but not trivial--100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al, 2008; Diamond et al, 2007). But now some much larger-scale follow-up studies (800-2,000 kids) have yielded discouraging results.

These studies were reported at a symposium this spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice. Researchers at the symposium reported the results of three studies. Tools of the Mind did not have an impact in any of the three.

What should we make of these discouraging results? It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training. Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind. Now we are left with the dispiriting conclusion "More study is needed."
Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S. (2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299–313.

Bodrova, E., & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education (2nd ed.). New York: Merrill.

Diamond, A., & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959–964.

Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318.
In an op-ed piece in the August 19th New York Times, Bronwen Hruska tells of her experiences with her son, Will, between the 3rd and 5th grade. Will was misdiagnosed with ADHD.

Hruska and her husband were initially approached by Will's teacher, who thought his behavior indicated ADHD. Though they were doubtful, they took him to a psychiatrist, who said that Will did indeed have ADHD and prescribed stimulant medication. Will took the medication for two years but stopped when he concluded that Adderall is dangerous. He is now a happy high school sophomore, and there is not much reason to think that the medication was ever necessary.
How did this happen?
The title of the piece--"Raising the Ritalin Generation"--provides a clue to the author's conclusion. Hruska suggests that our society is sick. Teachers are too quick to suggest medication for kids. Schools "want no part" of average kids; they expect kids to be exceptional, extraordinary. And we, as a society, are teaching kids that average is not good enough, and that if you're only average you should take a pill.
But there's an important piece missing from this picture--parents.
From what's written, it sure does sound like Will was misdiagnosed. But I can't help but wonder why his parents didn't know it at the time.
ADHD diagnosis requires that symptoms be present in at least two settings. So it's not enough that Will shows troubling symptoms in school: he would also need to show them at home, in social settings, or in some other context for him to be diagnosed. There's no indication of a problem outside of school.
It's also notable that the mere presence of symptoms is not enough: the symptoms must be clinically significant; in other words, they must obstruct the child's ability to function well in that setting. And Hruska maintains that Will seems like a typical kid to her.
This is where Hruska loses me. Why would she accept the diagnosis if symptoms were observed in just one context, and if she believed there was limited evidence that the symptoms were clinically significant in that context? Why wouldn't she challenge the physician who diagnosed him?
I'm led to wonder if she knew the diagnostic criteria. They aren't hard to find. Google "adhd diagnosis." The first link is the CDC site that offers a reader-friendly version of the DSM-IV criteria.
Are our kids pill-happy? Are we raising a Ritalin generation? If so, the solution is not to lay all of the blame on schools and society or even on physicians who make mistakes, and to portray parents as powerless victims. The solution is for parents to make better use of the wealth of scientific information available to us, and to ask questions when a doctor or other authority makes claims that fly in the face of our experience.
Making a change to education that seems like a clear improvement is never easy. Or almost never.
Judith Harackiewicz and her colleagues have recently reported an intervention that is inexpensive, simple, and leads high school students to take more STEM courses.
The intervention had three parts, administered over 15 months when students were in the 10th and 11th grades. In October of 10th grade researchers mailed a brochure to each household titled Making Connections: Helping Your Teen Find Value in School. It described the connections between math, science, and daily life, and included ideas about how to discuss this topic with students.
In January of 11th grade a second brochure was sent. It covered similar ideas, but with different examples. Parents also received a letter that included the address of a password-protected website devised by researchers, which provided more information about STEM and daily life, as well as STEM careers.
In the spring of 11th grade, parents were asked to complete an online questionnaire about the website.
There were a total of 188 students in the study: half received this intervention, and the control group did not.
Students in the intervention group took more STEM courses during their last two years of high school (8.31 semesters) than control students (7.50 semesters).
This difference turned out to be entirely due to differences in elective, advanced courses, as shown in the figure below.
An important caveat about this study: all of the subjects are participating in the Wisconsin Study of Families and Work, which began in 1990, when the mothers were in their fifth month of pregnancy.
The first brochure that researchers sent to subjects included a letter thanking them for their ongoing participation in the longer study. Hence, subjects could reasonably conclude that the present study was part of the longer study.
That's worth bearing in mind because ordinary parents might not be so ready to read brochures mailed to them by strangers, nor to visit suggested websites.
But that's not a fatal flaw of the research. It just means that we can't necessarily count on random parents reading the materials with the same care.
To me, the effect is still remarkable. To put it in perspective, researchers also measured the effect of parental education on taking STEM courses. As many other researchers have found, the kids of better-educated parents took more STEM courses. But the effect of the intervention was nearly as large as the effect of parental education!
Clearly, further work is necessary but this is an awfully promising start.
Harackiewicz, J. M., Rozek, C. S., Hulleman, C. S., & Hyde, J. S. (in press). Helping parents to motivate adolescents in mathematics and science: An experimental test of a utility-value intervention. Psychological Science.
Steven Levitt, of Freakonomics fame, has unwittingly provided an example of how science applied to education can go wrong.

On his blog, Levitt cites a study he and three colleagues published (as an NBER working paper). The researchers rewarded kids for trying hard on an exam. As Levitt notes, the goal of previous research has been to get kids to learn more. That wasn't the goal here. It was simply to get kids to try harder on the exam itself, to really show everything that they knew.

Among the findings: (1) it worked. Offering kids a payoff for good performance prompted better test scores; (2) it was more effective if, instead of offering a payoff for good performance, researchers gave them the payoff straight away and threatened to take it away if the student didn't get a good score (an instance of a well-known and robust effect called loss aversion); (3) children prefer different rewards at different ages. As Levitt puts it, "With young kids, it is a lot cheaper to bribe them with trinkets like trophies and whoopee cushions, but cash is the only thing that works for the older students."

There are a lot of issues one could take up here, but I want to focus on Levitt's surprise that people don't like this plan. He writes, "It is remarkable how offended people get when you pay students for doing well – so many negative emails and comments."

Levitt's surprise gets at a central issue in the application of science to education. Scientists are in the business of describing (and thereby enabling predictions of) the natural world. One such set of phenomena concerns when students put forth effort and when they don't. Education is not a scientific enterprise. Its purpose is not to describe the world, but to change it, to make it more similar to some ideal that we envision. (I wrote about this distinction at some length in my new book. I also discussed it in this brief video.)

Thus science is ideally value-neutral. Yes, scientists seldom live up to that ideal; they have a point of view that shapes how they interpret data, generate theories, etc., but neutrality is an agreed-upon goal, and lack of neutrality is a valid criticism of how someone does science. Education, in contrast, must entail values, because it entails selecting goals. We want to change the world--we want kids to learn things: facts, skills, values. Well, which ones? There's no better or worse answer to this question from a scientific point of view.

A scientist may know something useful to educators and policymakers once the educational goal is defined; i.e., the scientist offers information about the natural world that can make it easier to move toward the stated goal. (For example, if the goal is that kids be able to count to 100 and to understand numbers by the end of preschool, the scientist may offer insights into how children come to understand cardinality.) What scientists cannot do is use science to evaluate the wisdom of stated goals.

And now we come to people's hostility to Levitt's idea of rewards for academic work.
I'm guessing most people don't like the idea of rewards for the same reason I don't. I want my kids to see learning as a process that brings its own reward. I want my kids to see effort as a reflection of their character, to believe that they should give their all to any task that is their responsibility, even if the task doesn't interest them.

There is, of course, a large, well-known research literature on the effect of extrinsic rewards on motivation. Readers of this blog are probably already familiar with it--if so, skip the next paragraph.

The problem is one of attribution. When we observe other people act, we speculate on their motives. If I see two people gardening--one paid and the other unpaid--I'm likely to assume that one gardens because he's paid and the other because he enjoys gardening. It turns out that we make these attributions about our own behavior as well. If my child tries her hardest on a test, she's likely to think "I'm the kind of kid who always does her best, even on tasks she doesn't care for." If you pay her for her performance, she'll think "I'm the kind of kid who tries hard when she's paid." This research began in the 1970s and has held up very well. Kids work harder for rewards . . . until the rewards stop. Then they engage in the task even less than they did before the rewards started. I summarized some of this work here.
In the technical paper, Levitt cites some of the reviews of this research but downplays the threat, pointing out that when motivation is low to start with, there's not much danger of rewards lowering it further. That's true, and I've made a similar argument: cash rewards might be used as a last-ditch effort for a child who has largely given up on school. But that would dictate using rewards only with kids who were not motivated to start, not in a blanket fashion as was done in Levitt's study. And I can't see concluding that elementary school kids were so unmotivated that they were otherwise impossible to reach.

In addressing the threat to student motivation with research, Levitt is approaching the issue in the right way (even if I think he's incorrect in how he does so). But on the blog (in contrast to the technical paper), Levitt addresses the threat in the wrong way. He skips the scientific argument and simply belittles the idea that parents might object to someone paying their child for academic work. He writes:

"Perhaps the critics are right and the reason I'm so messed up is that my parents paid me $25 for every A that I got in junior high and high school. One thing is certain: since my only sources of income were those grade-related bribes and the money I could win off my friends playing poker, I tried a lot harder in high school than I would have without the cash incentives. Many middle-class families pay kids for grades, so why is it so controversial for other people to pay them?"

I think Levitt is getting "so many negative emails and comments" because he's got scientific data that serve one type of goal (get kids to try hard on exams), the application of which conflicts with another goal (encourage kids to see academic work as its own reward). So he scoffs at the latter. I see this blog entry as an object lesson for scientists.
We offer something valuable--information about the natural world--but we hold no special status in deciding what to do with that information (i.e., in setting goals). In my opinion, Levitt's blog entry shows he has a tin ear for the possibility that others do not share his goals for education. If scientists are oblivious to or dismissive of those goals, they can expect not just angry emails but to be ignored.