I get so many questions about learning styles that I added an FAQ to my website. You can find it here.
David Daniel and I have a letter in the latest issue of Science. It's behind a paywall, so I thought I'd provide a summary of the substance.
David and I note that there is, in some quarters, a rush to replace conventional paper textbooks with electronic textbooks. It is especially noteworthy that members of the Obama administration are eager to speed this transition. (See this report.)
On the face of it, this transition is obvious: most people seem to like reading on their Nook, Kindle or iPad--certainly sales of the devices and of ebooks are booming. And electronic textbooks offer obvious advantages that traditional textbooks don't, most notably easy updates, and embedded features such as hyperlinks, video, and collaboration software.
But David and I urged more caution.
We should note that there are not many studies out there regarding the use of electronic textbooks, but those that exist show mixed results. A consistent finding is that, given the choice, students prefer traditional textbooks. That's true regardless of their experience with ebooks, so it's not because students are unfamiliar with them (Woody, Daniel & Baker, 2010). Further, some data indicate that reading electronic textbooks, although it leads to comparable comprehension, takes longer (e.g., Dillon, 1992; Woody et al., 2010).
Why don't students like electronic textbooks if they like ebooks? The two differ. Ebooks typically have a narrative structure, they are usually pretty easy to read, and we read them for pleasure. Textbooks, in contrast, have a hierarchical structure, the material is difficult and unfamiliar, and we read them for learning and retention. Students likely interact with textbooks differently than books they read for pleasure.
That may be why the data for electronic books are more promising for early grades. Elementary reading books tend to have a narrative structure, and students are not asked to study from the books as older kids are.
Further, many publishers are not showing a lot of foresight in how they integrate video and other features in the electronic textbooks. A decade of research (much of it by Rich Mayer and his collaborators and students) shows that multimedia learning is more complex than one would think. Videos, illustrative simulations, hyperlinked definitions--all these can aid comprehension OR hurt comprehension, depending on sometimes subtle differences in how they are placed in the text, the specifics of the visuals, the individual abilities of readers, and so on.
None of this is to say that electronic textbooks are a bad thing, or indeed to deny that they ought to replace traditional textbooks. But two points ought to be kept in mind.
(1) The great success of ebooks--which are simply traditional books ported to another format--may not translate to electronic textbooks. Textbooks have different content and different structure, and they are read for different purposes.
(2) Electronic textbooks stand a much higher chance of success if publishers will exploit the rich research literature on multimedia learning, but most are not doing so.
For these two reasons, it's too early to plant the flag and shout "Hurrah!" for electronic textbooks.
A. Dillon, Ergonomics 35, 1297 (1992).
W. D. Woody, D. B. Daniel, C. Baker, Comput. Educ. 55, 945 (2010).
A new review takes on the question "Does video gaming improve academic achievement?"
To cut to the chase, the authors conclude that the evidence for benefit is slim: there is some reason to think that video games can boost learning in history, language acquisition, and physical education (in the case of exergames), but no evidence that gaming improves math and science.
It's notable that the authors excluded simulations from the analysis--simulations might prove particularly effective for science and math. But the authors wanted to examine gaming in particular.
Lest the reader get the impression that the authors might have started this review with the intention of trashing gaming, the authors describe themselves as "both educators and gamers (not necessarily in that order)" and even manage to throw a gamer's inside joke in the article's title: "Our princess is in another castle." (If this doesn't ring a bell, an explanation is here.)
And they did try to cast a wide net to capture positive effects of gaming. They did not limit their analysis to randomized controlled trials, but included qualitative research as well. They considered outcome measures not just of improved content knowledge (history, math, etc.) but also claims that gaming might build teams or collaborative skills, or that gaming could build motivation to do other schoolwork. Still, the most notable thing about the review is the paucity of studies: just 39 went into the review, even though educational gaming has been around for a generation.
Making generalizations about the educational value of gaming is difficult because games are never played the same way twice. There's inherent noise in the experimental treatment. That makes the need for systematicity in the experimental literature all the more important. Yet the existing studies monitor different player activities, assess different learning outcomes, and, of course, test different games with different features.
The authors draw this rather wistful conclusion: “The inconclusive nature of game-based learning research seems to only hint at the value of games as educational tools.”
I agree. Although there's limited evidence for the usefulness of gaming, it's far too early to conclude that gaming can't be of educational value. But for researchers to prove that--and more important, to identify the features of gaming that promote learning and maintain the gaming experience--will take a significant shift in the research effort, away from a piecemeal "do kids learn from this game?" to a more systematic, and yes, reductive analysis of gaming.
Young, M. F. et al. (2012). Our princess is in another castle: A review of trends in serious gaming for education. Review of Educational Research, 82, 61-89.
Every year as the AERA convention approaches, Rick Hess writes a column poking fun at some silly-sounding titles in the program. Hess's point seems to be "Is any of this really going to help kids learn better?" (That's my summary, not his.)
I respect Hess, but I think he misses the more interesting point here. Hess's real beef, I suggest, is not with the AERA, but with schools of education, and with all education researchers.
Putting researchers from very different disciplines--history, critical theory, economics, psychology, etc.--in one school because they all study "education" sounds like a good idea. The problem is that it doesn't lead to a beautiful flowering of interdisciplinary research.
Why? Because these researchers start with different assumptions. They set different goals for education. They have different standards of evidence. They even have different senses of what it means to "know" something. So mostly they don't conduct interdisciplinary research. Mostly they ignore one another.
No, the Foucault crowd is not going to improve science education in the next ten years. The wheel on which the Humanities turns revolves much more slowly and less visibly than the cycle of the sciences. I admit I only dimly understand what they are up to, but I nevertheless believe they have a contribution to make.
But the fault lies not just with schools of education for sticking these varied researchers in one building.
A perhaps more significant problem is that there is little sense among education researchers that their particular training leads to expertise well suited to addressing certain problems and ill-suited to other problems. I think that education researchers would be smart to stake out their territory: "We have methods that will help solve these problems."
Too often we forget our limitations. (I named this blog "Science and Education" to remind myself that, although I'll be tempted, I should not start mouthing off about policy, but should leave that to people like Rick who understand it much more deeply.) When the charter school affiliated with Stanford was in trouble a year or two ago, how many education researchers lacked an opinion? And how many of those opinions were really well informed?
Education research would look less silly if all of us made clear what we were up to, and stuck to it.
Every year I teach an introductory course on cognitive psychology to about 350 students. Every year I ask my students how they study and I find that they are much like students at Purdue, Washington University in St. Louis, UCLA, and Kent State--universities at which surveys have been conducted on student study strategies.
Like students at those schools, my students tend to take notes in class, color the readings with a highlighter, and later reread the notes and the highlighted bits of the text.
This table shows the results of a study by Jeff Karpicke and his colleagues (2009) on the study strategies of students at Washington University in St. Louis. (For other studies leading to similar conclusions, see Hartwig & Dunlosky, 2012; Kornell & Bjork, 2007.)
Rereading is a terribly ineffective strategy. The best strategy--by far--is self-testing, which ranked 9th in popularity out of the 11 strategies in this study. Self-testing leads to better memory even compared to concept mapping (Karpicke & Blunt, 2011).
The table shows that students rarely self-test as a learning strategy. Other data show that they more often self-test as a check to be sure that they have studied enough.
There is much discussion these days of how much time students ought to spend studying text for later tests of factual recall. Whatever your answer to this question, if students are going to do it, we might as well give them the tools to do a good job.
This article on the website of the American Psychological Association is a good start.
Hartwig, M. K. & Dunlosky, J. (2012). Study strategies of college students: Are self-testing and scheduling related to achievement? Psychonomic Bulletin & Review, 19, 126-134.
Karpicke, J. D. & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331, 772-775.
Karpicke, J. D., Butler, A. C., & Roediger, H. L., III. (2009). Metacognitive strategies in student learning: Do students practice retrieval when they study on their own? Memory, 17, 471–479.
Kornell, N., & Bjork, R. A. (2007). The promise and perils of self regulated study. Psychonomic Bulletin & Review, 14, 219–224.
One of the most troubling problems concerns the promotion or retention of low-achieving kids. It doesn't seem sensible to promote a child to the next grade if he's terribly far behind. But if he is asked to repeat a grade, isn't there a high likelihood that he will conclude he's not cut out for school?
Until recently, comparisons of kids who were promoted and kids who were retained indicated that retention didn't seem to help academic achievement, and in fact likely hurt. So the best practice seemed to be to promote kids to the next grade, but to try to provide extra academic support for them to handle the work.
But new studies indicate that academic outcomes for kids who are retained may be better than was previously thought, although still not what we would hope.
A meta-analysis by Chiharu Allen and colleagues indicates that the apparent effect of retention on achievement varies depending on the particulars of the research.
Two factors were especially important. The first was the extent to which researchers controlled for possible differences between retained and promoted students: better studies ensured that groups were matched on many characteristics, whereas worse studies just used a generic "low achiever" control group. The second was the comparison group: some studies compared retained students to their age-matched cohort--who were now a year ahead in school--while other studies compared retained students to a grade-matched cohort or to the grade-matched norms of a standardized test.
Which comparison is more appropriate is, to some extent, a value judgment, but personally I can't see the logic in evaluating a kid's ability to do 4th grade work (relative to other 4th graders) when he's still in 3rd grade.
The authors reported three main findings:
1) Studies with poor controls indicated negative academic outcomes for retained students.
2) Studies with better controls indicated no effect, positive or negative, of retention versus promotion.
3) When compared to students in the same grade, retained children show a short-term boost to academic achievement, but that advantage dissipates over the following years. The authors speculate that students' academic self-efficacy increases in that first year, but that students later come to believe they are not academically capable.
This pattern--a one-year boost followed by loss--was replicated in a recently published study (Moser, West, & Hughes, in press).
The question of whether it's best to promote or retain low-achieving students is still open. But better research methodology is providing a clearer picture of the outcomes for these students. One hopes that better information will lead to better ideas for intervention.
Allen, C. S., Chen, Q., Willson, V. L., & Hughes, J. N. (2009). Quality of research design moderates effects of grade retention on achievement: A meta-analytic, multi-level analysis. Educational Evaluation and Policy Analysis, 31, 480-499.
Moser, S. E., West, S. G. & Hughes, J. N. (in press). Trajectories of math and reading achievement in low-achieving children in elementary school: Effects of early and later retention in grade. Journal of Educational Psychology.
As I’ve emphasized numerous times, reading comprehension depends on knowing at least something about the subject matter of the text one is reading. Adults read mostly for information, and so they must acquire a broad base of background knowledge to be competent readers. Yet much of the material students read in school is narrative fiction. Previous analyses (e.g., Venezky, 2000) of elementary reading materials have shown that most reading instructional time goes to stories.
There are a few plausible contributors to this trend. First, there's a likely underestimation of the importance of background knowledge to reading. If you think of reading as a pure skill that can be applied equally well to any content, then the content with which one trains is irrelevant. Second, some seem to believe that students need to read about things that are familiar to them, as illustrated by this opinion piece in Education Week--an opinion that is hard to square with the enthusiasm kids show in learning about ancient Egypt, dinosaurs, the natural world around them, etc. Third, basal readers may emphasize fiction because it is easier to create fiction that is non-controversial and likely to anger no one on a school board or the PTA. Diane Ravitch’s book, The Language Police, documented the extent to which education publishers are frightened by controversy.
A new study (Moss, 2008) of basal readers used in California indicates that things might be getting a little better on this front, but we are still not where we ought to be. Barbara Moss analyzed the two most recently adopted basal programs in California to determine the percentage of selections in grades one through six that are devoted to different types of prose.
Here are the results:
She also reported the percentage of pages--important because selections obviously vary in length and depth. A teacher might spend two lessons on a half-page poem, for example.
There is much more to the paper, but these tables present the most important conclusion. These two basal reading programs offer more non-fiction than many have in the past: For example, Moss and Newton (2002) reported the figure was about 20% non-fiction for the programs they examined. There is fair variability in the numbers in different studies, as individual programs do vary. Still, the overall figure for non-fiction is never all that high.
The author compares the observed percentage to that recommended by the NAEP: about 50% in fourth grade, increasing to 55% in eighth grade and 70% in twelfth grade. I’m not crazy about setting our standards by the demands of a standardized test, even one as good as the NAEP. That’s the tail wagging the dog. Rather, we should construct our tests to reflect our educational goals.
In the case of reading, most adults read texts that are mostly informational. To enable comprehension, schooling should provide a wide foundation in the sort of knowledge that these texts demand. That knowledge need not come exclusively from reading—indeed, in early grades, it cannot. But given that most instructional time in early grades goes to reading, it’s important that the reading content support the goal of building background knowledge.
Moss, B. (2008). The information text gap: The mismatch between non-narrative text types in basal readers and 2009 NAEP recommended guidelines. Journal of Literacy Research, 40, 201-219.
Moss, B., & Newton, E. (2002). An examination of the informational text genre in basal readers. Reading Psychology, 23, 1–13.
Venezky, R. L. (2000). The origins of the present-day chasm between adult literacy needs and school literacy instruction. Scientific Studies of Reading, 4, 19–39.
To what extent can we trust that experimental results from a psychology laboratory will be observed outside of the laboratory?
This question is especially pertinent in education. It's difficult to conduct research in classrooms. It's hard to get the permission of the administration, it's hard to persuade teachers to change their practice to an experimental practice which may or may not help students--and the ethics of the request ought to be carefully considered. The researcher must make sure that the intervention is being implemented equivalently across classrooms, and across schools. And so on. Research in the laboratory is, by comparison, easy.
But it's usually assumed that you give something up for this ease, namely, that the research lacks what is usually called ecological validity. Simply put, students may not behave in the laboratory as they would in a more natural setting (often called "the field").
A recent study sought to test the severity of this problem.
The researchers combed through the psychological literature, seeking meta-analytic studies that included a comparison of findings from the laboratory and findings from the field. For example, studies have examined the spacing effect--the boost to memory from distributing practice in time--both in the laboratory and in classrooms. Do you observe the advantage in both settings? Is it equally large in both settings?
The authors identified 217 such comparisons.
Each dot represents one meta-analytic comparison (so each dot really summarizes a number of studies).
What this graph shows is a fairly high correlation between lab and field experiments: .639.
If it worked in the lab, it generally worked in the field: only 30 times out of 215 did an effect reverse--for example, a procedure that reliably helped in the lab turned out to reliably hurt in the field (or vice versa).
The correlation did vary by field. It was strongest in Industrial/Organizational Psychology: there, the correlation was a whopping .89. In social psychology it was a more modest, but far from trivial .53.
And what of education? Only seven meta-analyses went into the correlation, so it should be interpreted with some restraint, but the figure was quite close to that observed in the overall dataset: the correlation was .71.
So what's the upshot? Certainly, it's wise to be cautious about interpreting laboratory effects as applicable to the classroom. And I'm suspicious that the effects for which data were available were not random: in other words, there are data available for effects that researchers suspected would likely work well in the field and in the classroom.
Still, this paper is a good reminder that we should not dismiss lab findings out of hand because the lab is "not the real world." These results can replicate well in the classroom.
Mitchell, G. (2012). Revisiting truth or triviality: The external validity of research in the psychological laboratory. Perspectives on Psychological Science, 7, 109-117.
The data are unequivocal: kids from wealthy families do better in school than kids from poor families. It's observable across ages, on all sorts of different measures, and (to varying degrees) in every country.
A piece I wrote for the American Educator on this phenomenon is just out. You can read it here. A very brief summary follows.
A great deal of research from the last ten years can be summarized in two broad theories.
Family Investment theories offer the intuitive idea that wealthier parents have more resources to invest in their kids, and kids, naturally enough, benefit. Financial resources can go to enrichment experiences in the summer, more books in the home, a tutor if one is needed, better access to health care, and so on.
Wealthier parents are also likely to be higher in human capital--that is, they know more stuff. Wealthier parents speak more often to their children, and with a richer vocabulary, with more complex syntax, and in a way that elicits ideas from the child. Wealthier parents are also more likely to read to their children and to buy toys that teach letters and the names of shapes and colors.
Finally, wealthier parents are more likely to be rich in social capital--that is, they are socially connected to other people who have financial, human, or social capital.
The second family of theories on this phenomenon is Stress theory. Stress theories apply particularly to low-income families, and suggest that poverty leads to systemic stress--stress caused by crowding, by crime-ridden neighborhoods, by food uncertainty, and other factors. This stress, in turn, leads to emotional problems in parents, which leads to ineffective parenting strategies. Stress also leads directly to brain changes in children. Both of these factors lead to emotional and cognitive disadvantage for kids. The theory is summarized in the figure.
The article elaborates on these theories in more detail and I provide citations there.
I close with this paragraph:
The research literature on the impact of socioeconomic status on children's learning is sobering, and it's easy to see why an individual teacher might feel helpless in the face of these effects. Teachers should not be alone in confronting the impact of poverty on children's learning. One hopes that advances in our understanding of the terrible consequences of poverty for the mind and brain will spur policymakers to serious action. But still, teachers should not despair. All children can learn, whatever their backgrounds, and whatever challenges they face.
Last week I deplored the lack of time devoted to science in early elementary grades. Well, if kids aren't spending time on science, what are they doing?
They are spending a great deal of time on English Language Arts: according to the papers I cited, it takes up 62% of classroom time for first-graders and 47% for third-graders.
The irony is that, by failing to include more time for science, history, geography, civics, etc., we are very likely hurting reading comprehension. Why? Because reading depends so heavily on prior knowledge.
Every passage that you read omits information. For example, consider this simple passage: "Dan was so embarrassed. He went to the concert and forgot to turn off his phone." The author has omitted much information: the phone rang, the ringing was audible to others, and the phone rang at a time when others were enjoying the music. All of this omitted information must be brought to the text by the reader. Otherwise the passage will be puzzling, or only partly understood. (I made a video explaining this phenomenon. You can see it here.)
Once kids are fluent decoders, much of the difference among readers is not due to whether you're a "good reader" or "bad reader" (meaning you have good or bad reading skills). Much of the difference among readers is due to how wide a range of knowledge they have. If you hand me a reading test and the text is on a subject I happen to know a bit about, I'll do better than if it happens to be on a subject I know nothing about.
Two predictions fall out of this hypothesis.
First, if you take some "bad readers" and give them a text on a subject they know something about, they should suddenly read well, or at least much better.
Several studies show that that is the case. In one that I've cited before, Recht & Leslie (1988) tested "good" and "poor" readers (as identified by a reading test) on their comprehension of a passage about baseball. Some kids knew a lot about baseball, some not so much.
I've copied Table 1 from their paper: The numbers ("quantity" and "quality") are two different measures of comprehension--in each case, larger numbers mean better comprehension. I've circled data from the two critical groups: Lower left = "poor" readers who know a lot about baseball. Upper right = "good" readers who don't know about baseball.
Here's a second prediction. There should be a correlation between world knowledge and reading comprehension. The more stuff you know about the world, the more likely it is that you'll know at least a bit about whatever passage you happen to hit.
The Recht & Leslie paper is well known, but this finding, less so. It's from a paper by Anne Cunningham & Keith Stanovich that was actually addressing a different question.
The researchers administered a number of different measures to 11th graders, including the comprehension subtest of the Nelson-Denny Reading Test and three measures of general cultural knowledge. The first was a 45-item cultural literacy test (e.g., In what part of the body does the infection called pneumonia occur? What is the term for selling domestic merchandise abroad?). The second used 20 items from the NAEP history and literature tests (e.g., "Which mythical Greek hero demonstrated his bravery during his long journey homeward after the Trojan War?"). In the third, the researchers provided a list of 48 names, 24 of which represented famous figures from history, the arts, sciences, etc. Students were to pick out the famous names and ignore the non-famous names.
The red circle shows the remarkably high correlation between reading comprehension and the measures of cultural knowledge.
This association may not seem so remarkable. Wouldn't one predict that smart kids are good readers, and smart kids also know a lot of stuff? So all we've done is measure intelligence in two indirect ways.
The correlations shown above are actually partial correlations. Researchers administered a standard non-verbal intelligence test (Raven's progressive matrices) and statistically controlled for the effect of intelligence. The correlations I've circled reflect that statistical control.
In sum, once kids can decode fluently, reading comprehension depends heavily on knowledge. By failing to provide a solid grounding in basic subjects we inadvertently hobble children's ability in reading comprehension.
As I have put it elsewhere, Teaching Content IS Teaching Reading.
Cunningham, A. E. & Stanovich, K. E. (1997). Early reading acquisition and its relation to reading experience and ability 10 years later. Developmental Psychology, 33, 934-945.
Recht, D. R. & Leslie, L. (1988). Effect of prior knowledge on good and poor readers' memory of text. Journal of Educational Psychology, 80, 16-20.
The goal of this blog is to provide pointers to scientific findings that are applicable to education that I think ought to receive more attention.