I have a new article (with Sian Beilock) on math anxiety in the new American Educator. Free download here:
This post first appeared at RealClearEducation.com on May 6, 2014
As someone who spends most of their time thinking about the application of scientific findings to education, I encounter plenty of opinions about the scientific status of such efforts. Almost always, a comparison is made between the rigor of “hard” sciences and education. What varies is the accompanying judgment: derision for education researchers or despondency about the difficulty of the enterprise. In a recent article, physicist Carl Wieman (2014) offers a different perspective on the issue, suggesting that the difference between research in education and in physics is smaller than you might guess.
In education, Wieman is best known for refining and popularizing techniques that help college students engage more actively in large lecture courses. He started his career as a physicist, producing work that culminated in a Nobel Prize, so he has some credentials for talking about the work of “hard” scientists.
Wieman begins by making clear what he takes to be the outcome of good science: predictive power. Can you use the results of your research to predict with some accuracy what will happen in a new situation? A common mistake is to believe that in education one ought to be able to predict outcomes for individual students; not necessarily so, any more than a physicist must be able to predict the behavior of each atom. Prediction in aggregate—a liter of gas or a school of children—is still an advance.
Wieman’s other points follow from his strong emphasis on prediction.
First, it follows that “rigorous” methods are any that contribute to better prediction. You can’t declare in the abstract that randomized controlled trials are better than qualitative research; they provide different types of information in a larger effort to enable prediction.
Second, the emphasis on prediction frames the way one thinks about the messiness of research. Education research is often portrayed as inherently messy because there are so many variables at play. Physics research, in contrast, is portrayed as better controlled and more precise because there are many fewer variables that matter. Wieman argues this view is a misconception.
Physics seems tidy because you’ve probably only studied physics in textbooks, where everything is worked out: in other words, where no extraneous variables are discussed as possibly mattering. When the work that you study was first being conducted, it was plenty messy: false leads were pursued, ideas that (in retrospect) are self-contradictory were taken seriously, and so on. The same is true today: the frontiers of physics research are messy. “Messy” means that you don’t have a very good idea of which variables are important in gaining the predictive power that characterizes good science.
Third, Wieman suggests that bad research is the same in physics and education. Research is bad when the researcher has failed to account for factors that, based on prior research, he or she should have known to include. There is plenty of bad research in the hard sciences. People aren’t stupid; it’s just that science is hard.
I agree with Wieman that differences in “hardness” are mostly illusory. (That’s why I’ve been putting the term in quotation marks.) The fundamentals of the scientific method don’t differ much wherever they are applied. I also agree that people (usually people uninvolved in research) are too quick to conclude that, compared to other fields, a higher proportion of education research is low-quality. Come to a meeting of the Society for Neuroscience and I’ll show you plenty of studies that were poorly conceived, poorly controlled, or simply wheel-spinning, and that will be ignored.
Wieman does ignore a difference between physics and education that I take to have important consequences: physics (and other basic sciences) strives to describe the world as it is, and so strives to be value-neutral. Education is an applied science; it is in the business of changing the world, making it more like it ought to be. As such, education is inevitably saturated with values.
Education policy would, I think, benefit from a greater focus on the true differences between education and other varieties of research, and a reduced focus on the phantom differences in rigor.
Wieman, C. E. (2014). The similarities between research in education and research in the hard sciences. Educational Researcher, 43, 12-14.
This article first appeared at RealClearEducation.com on April 29, 2014.
Do children learn to read by translating letters into sound, or by perceiving the spelling of the word? The answer has an indirect bearing on teaching; it would presumably be best to instruct kids in a way consonant with how most perform the task. The last fifteen years have seen an increasing consensus among researchers: children initially learn via the letter-sound translation mechanism. As they gain reading practice, they acquire the spelling mechanism as well, although the letter-sound translation method continues to make a contribution to reading. Now a new study of 284 French children in grades 1 through 5 offers support for this model (Ziegler et al., 2014).
From the children’s perspective, the experimental task was simple. They sat before a computer screen. An asterisk appeared at the center for one second, and then a string of letters replaced the asterisk (I’ll call this the “response letter string”). Children were to push one button if the response letter string formed a word, and another if it did not.
What the kids were not told was that another letter string actually appeared between the asterisk’s disappearance and the appearance of the response letter string. This letter string (called the prime) appeared for just .07 seconds, so the children didn’t consciously see it—if anything, they might have thought the screen flickered.
Even though you’re unaware of it, the prime can influence your response. If the prime is “MOP” and then the response letter string is also “MOP,” you’re faster to verify that “MOP” is indeed a word, compared to how fast you respond if the prime were a non-matching word, say, “DOG.” Even though you were unaware of the prime, you read it, and so you’re a bit faster to read it a second time.
But what if the prime were “MAWP”? Would you still be faster to verify that “MOP” is a word when it appears? If you think that people read via letter-sound translation, the answer ought to be yes. When “MAWP” appears, you read it and generate the right sound, and if sound is the basis of reading, you should get the advantage when “MOP” appears.
Using a comparable method, the researchers tested whether kids read via spelling. They used a prime with nearly the same spelling as the response letter string: for example, “TALBE” followed by “TABLE.” Other work has shown that readers are pretty resistant to spelling errors like this one, where letters are off by just one position (McCusker et al., 1981). So if you’re using the spelling of a word to identify it, we can expect that you’ll be faster to verify that “TABLE” is a word if the prime was “TALBE,” compared to a prime like “CAIRH.”
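To make the logic concrete, here is a toy sketch of how a priming effect is typically quantified. The reaction times below are invented for illustration; they are not data from the study.

```python
# Toy illustration of a masked-priming effect (invented reaction times, in ms).
rt_after_control_prime = 640   # "TABLE" preceded by an unrelated prime like "CAIRH"
rt_after_related_prime = 608   # "TABLE" preceded by the transposed-letter prime "TALBE"

# A positive difference means the related prime sped up word recognition,
# evidence that readers used the word's approximate spelling to identify it.
priming_effect = rt_after_control_prime - rt_after_related_prime
print(priming_effect)  # 32
```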
So take a moment and guess. Do first graders read mostly by sound or by spelling? How about fifth graders?
The data indicated that first graders read by sound. With each successive year, kids showed more and more evidence of using the spelling of words in their reading. BUT there was no diminution of the influence of sound. Experienced readers use both the sound and the spelling mechanisms.
This result fits with the following view of reading: most kids will learn to read by learning to sound out words. With practice over the course of months and years, they develop an increasing number of (and increasingly robust) mental representations that allow them to identify words by their appearance, i.e., by their spelling. These representations form as a consequence of reading practice and don’t require any special instruction. This general view accords with other behavioral data showing that methods of reading instruction that emphasize phonics have an edge over other methods.
McCusker, L. X., Gough, P. B. & Bias, R. G. (1981). Word recognition inside out and outside in. Journal of Experimental Psychology: Human Perception and Performance, 7, 538-551.
Ziegler, J. C., Bertrand, D., Lété, B., & Grainger, J. (2014). Orthographic and phonological contributions to reading development: Tracking developmental trajectories using masked priming. Developmental Psychology, 50, 1026-1036.
This article first appeared on the RealClearEducation website on April 22, 2014.
Please note: A few people have told me that they have trouble finding my column on RealClearEducation.com. If you would like an email notification when I post a new column, please just email me, and I'll ping you when I post.
You often hear it said that small children are sponges, that they are constantly learning. This sentiment is sometimes expressed in a way that makes it sound like the particulars don’t matter that much; as long as there is a lot to be learned in the environment, the child will learn it. A new study shows that for one core type of learning, it’s more complicated. Kids don’t learn important information that’s right in front of them unless an adult is actively teaching them.
The core type of learning is categorization. Understanding that objects can be categorized is essential for kids’ thinking. Kids constantly encounter novel objects; for example, each apple they see is an apple they’ve never encountered before. The child cannot experiment with each new object to figure out its properties; she must benefit from her prior experience with other apples, so that she can know, for example, that this object, since it’s an apple, must be edible.
But how can a child tell which properties of the apple are incidental (e.g., it has a long stem) and which properties are true of all apples (it’s edible)? The child must ignore many incidental properties, and hold on to the important properties that are true of all apples.
Previous research shows that by age three or four, children are sensitive to linguistic cues about this matter. They appropriately attach significance to the difference between “This apple has a long stem” and “Apples are good for eating.” “This apple” signifies that the information provided applies only to this particular apple; “Apples” indicates a generalization about all apples.
A recent study (Butler & Markman, 2014) examined whether there are cues outside of language that guide children in resolving this problem. The researchers tested the possibility that children are sensitive to adults teaching them; if an adult deliberately highlights a property for the child’s benefit, that presumably is a property of some importance, and one that is characteristic of all objects of this sort.
Children (aged 4-5) were shown a novel object and were told that it was a “spoodle.” There was a brief test to be sure that the child got the name right, and then another, irrelevant task. The experimenter began to clean up the materials from this other task, and this is when the special property of the spoodle came into play: spoodles are magnetic.
In the pedagogical condition, the experimenter said “Look, watch this” and used the spoodle to pick up paperclips. In the intentional condition, the experimenter used the spoodle to pick up paperclips, but did not request the child’s attention or make eye contact. In the accidental condition, the experimenter feigned accidentally dropping the spoodle on the clips. In all of the conditions, the experimenter held the spoodle with the paper clips clinging to it and said “wow!”
Next, the child was presented with 16 objects and was asked to say which were spoodles. Half were identical to the original spoodle, and half were another color. In addition, half of each color were magnetic and half were not.
So the question is which property kids think makes an object a spoodle: appearance (i.e., color) or function (i.e., magnetism). The data are shown here:
These children were clearly quite sensitive to the non-verbal teaching. Recall that in the intentional condition, the adult used the spoodle’s function purposefully. The child could easily have inferred that magnetism is an important property of spoodles. But the children didn’t. Appearance made a spoodle a spoodle for these kids.
Yet when the adult did the exact same thing but also made plain to the child that her actions were for the child’s benefit, that she was teaching, the child understood that magnetism held special significance for spoodle-hood.
I think this study has an interesting implication for differences in kids’ preparedness for schooling associated with their home environment. We tend to focus on differences in the richness of experiences available to kids. That’s important, but this experiment provides a concrete example of how small differences in parenting may have important consequences for children’s learning. “Little sponges” don’t learn certain types of information, even in a rich environment. They have to be taught.
Butler, L. P., & Markman, E. M. (2014). Preschoolers use pedagogical cues to guide radical reorganization of category knowledge. Cognition, 130, 116-127.
This piece first appeared on RealClearEducation.com on April 16, 2014.
If you would like an email notification when I post a new column at RealClear, send me an email, and I'll add you to my list.
A recent article in the Washington Post sounds a warning klaxon for our ability to read deeply. You’ve probably heard this argument elsewhere, made most forcefully by Nick Carr in The Shallows: frequent users of the Web (i.e., most of us) are so in the habit of skittering from page to page, scanning for juicy bits of information but not really reading, that they have lost the ability to sit down and read prose from start to finish. I think the suggestion is probably wrong.
The first thing to make clear is that anyone who comments on this issue (including me) is guessing. There are simply no data that address it directly. We might predict, for example, that scores on standardized reading tests would have dropped in the last fifteen years or so (they haven’t), but such data are hardly definitive, as reading comprehension test scores are a product of many factors.
The Post article cites studies comparing reading on paper versus reading on screens, but that won’t address the issue, which concerns the long-term consequences of a particular type of reading. The Post also incorrectly says that paper is superior. Most studies indicate no difference between screens and paper for pleasure reading. For textbook reading, students take longer to read on screens, although comprehension is about the same (Daniel & Willingham, 2012).
The article, like all the pieces I’ve seen on this topic, is short on data and long on individuals’ impressions. For example, teachers aver that students can no longer read long novels. Well, if we’re swapping stories, I (and most of my classmates) had a hard time with Faulkner and Joyce back in the early ’80s, when I was an English major.
The truth is probably that the brain is simply not adaptable enough for such a radical change. Yes, the brain changes as a consequence of experience, but there are likely limits to this change, a point made by both Steve Pinker and Roger Schank when commenting on this issue. If our ability to deploy attention or to comprehend language were to undergo substantial change, the consequences would cascade through the entire cognitive system, and so the brain is probably too conservative for large-scale change.
For example, there’s a lot of overlap between the processes of reading and the processes used for understanding spoken language—processes that assign syntactic roles to words. Do we see any evidence that people are having a harder time understanding spoken language? Or does the problem lie in the mental processes that build understanding of larger blocks of language, as when we’re comprehending a story? If so, habitual Web users should have a hard time understanding complex narratives not just when they read, but in television and movies. No one should have been able to follow The Sopranos, with its complicated, interweaving plotlines.
A more plausible possibility is that we’re not less capable of reading complex prose, but less willing to put in the work. Our criterion for concluding “this is boring. This is not paying off” has been lowered because the Web makes it so easy to find something else to read, watch, or listen to. (I explore the possibility in some detail in my upcoming book, Raising a Reader in an Age of Distraction.) If I’m right, there’s good news and bad news. The good news is that our brains are not being deep-fried by the Web; we can still read deeply and think carefully. The bad news is that we don’t want to.
Daniel, D. B. & Willingham, D. T. (2012). Electronic textbooks: Why the rush? Science, 335, 1569-1571.
This blog posting first appeared on RealClearEducation on April 8, 2014.
The 2012 results for the brand-new PISA problem-solving test were released last week. News in various countries predictably focused on how well local students fared, whether they were American, British, Israeli, or Malaysian. The topic that should have been of greater interest was what the test actually measures.
How do we know that a test measures what it purports to measure? There are a few ways to approach this problem.
One is when the content of the test seems, on the face of it, to represent what you’re trying to test. For example, math tests should require the solution of mathematical problems. History tests should require that test-takers display knowledge of history and the ability to use that knowledge as historians do.
Things get trickier when you’re trying to measure a more abstract cognitive ability like intelligence. In contrast to math, where we can at least hope to specify what constitutes the body of knowledge and skills of the field, intelligence is not domain-specific. So we must devise other ways to validate the test. For example, we might say that people who score well on our test show their intelligence in other commonly accepted ways, like doing well in school and on the job.
Another strategy is to define what the construct means—“here’s my definition of intelligence”—and then make a case for why your test items measure that construct as you’ve defined it.
So what approach does PISA take to problem-solving? It uses a combined strategy that ought to prompt serious reflection in education policymakers.
There is no attempt to tie performance on the test to everyday measures of problem-solving. (At least, none has been offered so far, but there is more detail on the construction of the test to come, in an as-yet-unpublished technical report.)
From the report accompanying the scores, it appears that the problem-solving test was motivated by a combination of the other two methods.
First, the OECD describes a conception of problem solving—what they think the mental processes look like. That includes the following processes:
· Exploring and understanding
· Representing and formulating
· Planning and executing
· Monitoring and reflecting
So we are to trust that the test measures problem solving ability because these are the constituent processes of problem solving, and we are to take it that the test authors could devise test items that tap these cognitive processes.
Now, this candidate taxonomy of processes that go into problem-solving seems reasonable at a glance, but I wouldn’t say that scientists are certain it’s right, or even that it’s the consensus best guess. Other researchers have suggested that different dimensions of problem solving are important—for example, well-defined problems vs. ill-defined problems. So pinning the validity of the PISA test on this particular taxonomy reflects a particular view of problem-solving.
But the OECD uses a second argument as well. They take an abstract cognitive process—problem solving—and vastly restrict its sweep by essentially saying “sure, it’s broad, but there is only a limited set of ways we really care about how it’s implemented, so we just test those.”
That’s the strategy adopted by the National Assessment of Adult Literacy. Reading comprehension, like problem solving, is a cognitive process, and, like problem solving, it is intimately intertwined with domain knowledge. We’re better at reading about topics we already know something about. Likewise, we’re better at solving problems in domains we know something about. So in addition to requiring (as best they could) very little background knowledge for the test items, the designers of the NAAL wrote questions that they could argue reflect the kind of reading people must do for basic citizenship: reading a government-issued pamphlet about how to vote, reading a bus schedule, reading the instructions on prescription medicine.
The PISA problem-solving test does something similar. The authors sought to present problems that students might really encounter, like figuring out how to work a new MP3 player, finding the quickest route on a map, or figuring out how to buy a subway ticket from an automated kiosk.
So with this justification, we don’t need to make a strong case that we really understand problem-solving at a psychological level at all. We just say “this is the kind of problem solving that people do, so we measured how well students do it.”
This justification makes me nervous because the universe of possible activities we might agree represent “problem solving” seems so broad, much broader than what we would call activities for “citizenship reading.” A “problem” is usually defined as a situation in which you have a goal and you lack a ready process in memory that you’ve used before to solve the problem or one similar to it. That covers a lot of territory. So how do we know that the test fairly represents this territory?
The taxonomy is supposed to help with that problem. “Here’s the type of stuff that goes into problem solving, and look, we’ve got some problems for each type of stuff.” But I’ve already said that psychologists don’t have a firm enough grasp of problem-solving to advance a taxonomy with much confidence.
So the PISA 2012 problem-solving test is surely measuring something, and what it’s measuring is probably close to something I’d comfortably call “problem-solving.” But beyond that, I’m not sure what to say about it.
I probably shouldn’t get overwrought just yet—as I’ve mentioned, there is a technical report yet to come that will, I hope, leave all of us with a better idea of just what a score on this test means. Gaining that better idea will entail some hard work for education policymakers. The authors of the test have adopted a particular view of problem solving—that’s the taxonomy—and they have adopted a particular type of assessment—novel problems couched in everyday experiences. Education policymakers in each country must determine whether that view of problem solving syncs with theirs, and whether the type of assessment is suitable for their educational goals.
The way that people conceive of the other PISA subjects (math, science, and reading) is almost surely more uniform than the way they conceive of problem-solving. Likewise, the goals for assessing those subjects are also more uniform. Thus, the problem of interpreting the problem-solving PISA scores is formidable compared to interpreting other scores. So no one should despair or rejoice over their country’s performance just yet.
This post first appeared at RealClearEducation on April 1, 2014.
Our scientific understanding is always evolving, changing. Thus, one of the ongoing puzzles in education research is how confident one must be in a set of findings before one concludes it ought to be the basis of educational practice. If the data show that X is true, but X seems really peculiar, do we assume X is probably true, or do we assume that we just don't understand things very well yet? A new study provides something of an object lesson in this problem; in this case "X" was "parents teaching reading at home doesn't help much after kindergarten."
Here's the background on that counterintuitive finding. The work was inspired by the home literacy model (Senechal & LeFevre, 2002). It posits two dimensions of home literacy experience: formal experiences are those in which the parent focuses the child's attention on print, for example by teaching letters of the alphabet, or pointing out that two words look the same, or that we read from left to right.
Informal experiences are those for which print is present, but is not the focus of attention; reading aloud to one's child would be an example. Children usually look at pictures, not print, during a read-aloud.
Previous research from this research team, and others, has shown that formal and informal experiences have different effects. Formal experiences are associated with early literacy skills like knowing letters, and later, with word reading. Informal experiences, in contrast, are associated with growth in vocabulary and general knowledge.
But data supporting the home literacy model have usually been concurrent, not predictive, and have been limited to preschool, kindergarten, and early 1st grade. That is, the research shows an association between the relevant factors measured at the same time, as opposed to showing that home factors at, say, kindergarten predict growth in reading outcomes in 1st grade and beyond. That's peculiar.
There are at least two possible reasons. One is that the home literacy environment does have an impact on literacy growth, but researchers have been looking for the effect in 1st grade - just at the time that school instruction is so heavy. So perhaps the impact of home literacy environment on literacy growth is overwhelmed by the effect of school instruction. A second possible reason is that the home literacy environment may change as a consequence of how the parents perceive their child is doing in school.
A new study (Senechal & LeFevre, 2014) used a clever design to examine both possibilities. Subjects were 84 children in Quebec who spoke English at home but for whom the language of instruction at school was French. So researchers could test progress in English and thereby examine the impact of the home literacy environment independent of schooling. The researchers measured various aspects of children's literacy -- reading and oral language -- from kindergarten until spring of second grade. In addition, they used a number of measures to characterize the children's formal and informal literacy experiences at home.
The results provided strong support for the Home Literacy Model. Formal literacy activities at home were linked not only to performance in reading English but also, in contrast to prior work, to growth in English reading from kindergarten to 1st grade. Thus, there is some support for the idea that previous studies failed to observe the relationship because the experiences at school overwhelmed any effect that home experiences might have had.
But that can't be the whole story, because the relationship was no longer observed in 2nd grade. This is where parental responsiveness comes in. English instruction, one hour daily, began in 2nd grade, and so parents began to get feedback from schools about their child's English reading at that time.
Researchers found that the degree to which parents taught their children English at home was positively associated with student outcomes in kindergarten and 1st grade. But there was a negative association in 2nd grade. A straightforward interpretation is that many parents engaged in some English teaching at home during kindergarten and 1st grade, and the more of it they did, the better for their kids. Then in 2nd grade, parents get feedback from the school about their child's reading in English. If their child is doing well, parents ease off on the teaching at home. If their child is doing poorly, they do more of it. Indeed, researchers found that most parents -- 76 percent -- changed their formal literacy practices in response to their child's reading performance in 2nd grade. So you end up with a negative correlation between parental instruction and child performance in 2nd grade. The kids who are doing the worst in reading are the ones whose parents are teaching them the most.
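To see how this kind of parental responsiveness can flip the sign of the correlation, here is a toy simulation. This is my own sketch, not the authors' analysis; all the numbers are made up, and it simply assumes that struggling readers receive the most home teaching even though that teaching helps a little.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Children's English reading skill at the start of 2nd grade (arbitrary units).
skill_start = rng.normal(0.0, 1.0, n)

# Responsive parents: the worse the child reads, the more formal teaching at home.
home_teaching = -0.7 * skill_start + rng.normal(0.0, 0.5, n)

# Suppose home teaching genuinely helps a little (positive causal effect).
skill_end = skill_start + 0.2 * home_teaching + rng.normal(0.0, 0.3, n)

# The observed correlation is negative even though teaching helps,
# because the weakest readers get the most teaching.
print(np.corrcoef(home_teaching, skill_end)[0, 1])
```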
The impact of informal literacy activities like read-alouds did not change; they were consistently linked to growth in vocabulary and other measures of oral language from kindergarten through second grade.
It should be noted that the parents in this study had greater than average education - more than half had a university degree. It's a good bet, then, that the baseline home literacy environment was atypically high and that these parents may have been more responsive to their child's literacy outcomes than others would have been. We should not generalize these findings broadly.
Still, in this case, "X" turned out to be explicable and sensible. Parents' teaching of literacy at home had appeared not to help children's literacy only because other variables had gone uncontrolled. This study doesn't solve the broader problem - we never know if our understanding of an issue is incomplete to the point of inaccuracy - but that's one issue on which we are at least closer to the truth.
Senechal, M., & LeFevre, J. (2002). Parental involvement in the development of children’s reading skill: A 5-year longitudinal study. Child Development, 73, 445–460.
Senechal, M., & LeFevre, J. (2014). Continuity and change in the home literacy environment as predictors of growth in vocabulary and reading. Child Development.
This piece first appeared on RealClearEducation.com on March 26.
How do you know whether a book is at the right level of difficulty for a particular child? Or, when thinking about learning standards for a state or district, how do we make a judgment about the text difficulty that, say, a sixth-grader ought to be able to handle?
It would seem obvious that an experienced teacher would use her judgment to make such decisions. But naturally such judgments will vary from individual to individual. Hence the apparent need for something more objective. Readability formulas are intended as just such a solution. You plug some characteristics of a text into a formula and it combines them into a number, a point on a reading difficulty scale. Sounds like an easy way to set grade-level standards and to pick appropriate texts for kids.
Of course, we’d like to know that the numbers generated are meaningful, that they really reflect “difficulty.”
Educators are often uneasy with readability formulas; the text characteristics are things like “words per sentence,” and “word frequency” (i.e., how many rare words are in the text). These seem far removed from the comprehension processes that would actually make a text more appropriate for third grade rather than fourth.
To put it another way, there’s more to reading than simple properties of words and sentences. There’s building meaning across sentences, and connecting meaning of whole paragraphs into arguments, and into themes. Readability formulas represent a gamble. The gamble is that the word- and sentence-level metrics will be highly correlated with the other, more important characteristics.
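To make the idea concrete, here is a minimal sketch of one widely used formula of this kind, the Flesch-Kincaid grade level, which combines average sentence length with average syllables per word. The syllable counter below is a crude approximation, and the sketch is offered only to illustrate how word- and sentence-level features get combined into a single number, not as a description of the metrics evaluated in the study discussed below.

```python
import re

def flesch_kincaid_grade(text):
    """Rough Flesch-Kincaid grade level: combines average sentence length
    and average syllables per word into a single grade-level number."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Crude syllable estimate: count groups of vowels in each word (at least 1).
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

print(flesch_kincaid_grade("The cat sat on the mat. It was a sunny day."))
```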
It’s not a crazy gamble, but a new study (Begeny & Greene, 2014) offers discouraging data to those who have been banking on it.
The authors evaluated 9 metrics, summarized in this table:
The dependent measure was student oral reading fluency, which boils down to the number of words correctly read per minute. Oral fluency is sometimes used as a convenient proxy for overall reading skill. Although it obviously depends heavily on decoding fluency, there is also a contribution from higher-level meaning processing; if you are understanding what you’re reading, that primes expectations as you read, which makes reading more fluent.
In this experiment, second, third, fourth, and fifth graders each read six passages taken from the DIBELS test: two from below their grade level, two at it, and two above it.
Previous research has shown that the various readability formulas actually disagree about grade levels (e.g., Ardoin et al., 2005). In this experiment, oral reading fluency was to referee the disagreement. Suppose that according to PSK, passage A is appropriate for second graders and passage B is appropriate for third graders. Meanwhile, Spache says both are third-grade passages. If oral reading fluency is better for passage A than passage B, that supports the PSK. (“Better” was not judged in absolute terms alone; the comparison took account of the standard error of the mean.)
The researchers used an analytic scheme to evaluate how good a job each metric did of predicting the patterns of student oral reading fluency. Each prediction was treated as binary: the grade-level assignment predicted that there either should or should not be a difference in oral reading fluency, and the question was whether a difference was observed. Chance performance, therefore, would be 50%. The data are summarized in the table.
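Here is a minimal sketch of that scoring logic. It is my own illustration with made-up numbers; the margin parameter stands in for the standard-error criterion the researchers used.

```python
def prediction_correct(grade_a, grade_b, fluency_a, fluency_b, margin):
    """Score one passage pair: did the formula's grade-level assignment
    correctly predict whether oral reading fluency would differ?"""
    predicted_difference = grade_a != grade_b
    # "Reliable" difference: the fluency gap exceeds the error margin.
    observed_difference = abs(fluency_a - fluency_b) > margin
    return predicted_difference == observed_difference

# Example: a formula calls passage A 2nd grade and passage B 3rd grade;
# fluency was 92 vs. 81 words correct per minute, with a 6-wcpm margin.
print(prediction_correct(2, 3, 92, 81, margin=6))  # True: difference predicted and observed

# Example: another formula calls both passages 3rd grade, but fluency still differed.
print(prediction_correct(3, 3, 92, 81, margin=6))  # False: no difference predicted, but one observed
```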
All of the readability formulas were more accurate for higher ability than lower ability students. But only one—the Dale-Chall—was consistently above chance.
So (excepting the Dale-Chall), this study offers no evidence that standard readability formulas provide reliable information for teachers as they select appropriate texts for their students. As always, one study is not definitive, least of all for a broad and complex issue. This work ought to be replicated with other students, and with outcome measures other than fluency. Still, it contributes to what is, overall, a discouraging picture.
Ardoin, S. P., Suldo, S. M., Witt, J., Aldrich, S., & McDonald, E. (2005). Accuracy of readability estimates’ predictions of CBM performance. School Psychology Quarterly, 20, 1 – 22.
Begeny, J. C., & Greene, D. J. (2014). Can readability formulas be used to successfully gauge difficulty of reading materials? Psychology in the Schools, 51(2), 198-215.
Because I teach in higher education but spend a lot of time thinking about K-12 education, the differences between the two naturally stand out to me. Perhaps most striking is the extent to which college students are responsible for their own education. Not only do they pick their classes and major (with few restrictions), they are fully responsible for regulating their own study time, and for showing up to class. At most colleges no one is aware that a student is experiencing academic trouble until things get pretty bad.
Students arrive at college with different levels of preparation to handle these responsibilities. Unsurprisingly, family background makes a difference. Students who are the first in their families to attend college (first-generation students) earn lower grades and drop out at higher rates than students with at least one parent who attended college (continuing-generation students), controlling for high school GPA (Pascarella et al., 2004). (Otherwise-successful charter schools are struggling with this social-class achievement gap.)
What fuels the gap? Partly the access that continuing-generation students have to advice from parents on how best to navigate college—access that first-generation students obviously lack. Colleges try to make up for this difference by offering programs to aid first-generation students: programs that offer advice on how to select a major, how to manage one’s time, and so on.
But first-generation students don’t take colleges up on their offers of help. They are less likely to take advantage of college services than continuing-generation students.
That may be because first-generation students are unsure whether or not they really belong at college, whether they can succeed.
Taking a cue from similar studies examining race (e.g., Walton & Cohen, 2011), a new study (Stephens et al., in press) sought to change how freshman students thought about their family backgrounds by exposing them to stories of successful upperclassmen, who described how their family backgrounds could be a source of challenges and of strength.
First-generation (N=66) and continuing-generation (N=81) freshman students attended a panel discussion by college seniors on college adjustment.
For half of the freshmen, the panelists’ answers were linked to their family backgrounds. For example, a first-generation panelist pointed out that his parents couldn’t provide much advice about selecting classes, so he learned that he had to rely on his advisor more than other students did. The continuing-generation panelists highlighted that they faced challenges as well: one mentioned that she had attended a small private school that offered a lot of one-on-one attention, and that she felt lost in large lecture classes.
The other half of the freshmen served as the control condition: they attended a different panel discussion in which challenges of college life and how to address them were discussed, but the answers were not directly linked to family background.
At the end of the year, this brief, one-time intervention had significantly reduced the social-class gap, as measured by cumulative GPA.
End-of-year surveys also indicated that the intervention had reduced student anxiety and led to better adjustment to college life. (Note: these outcomes were observed for both first-generation and continuing-generation students.)
Anytime you hear about a one-hour intervention that has such a profound and long-lasting effect, it’s natural to be suspicious. Certainly, we’d like to see this effect replicated, but there is at least a plausible explanation for the profound effect: the intervention provides a new way for students to think about difficulties. Instead of being evidence that they don’t really belong at college, setbacks become a normal part of college life, and one that can be addressed.
Pascarella, E., Pierson, C., Wolniak, G., & Terenzini, P. (2004). First-generation college students: Additional evidence on college experiences and outcomes. Journal of Higher Education, 75, 249–284.
Stephens, N. M., Hamedani, M. G., & Destin, M. (in press). Closing the social-class achievement gap: A difference-education intervention improves first-generation students’ academic performance and all students’ college transition. Psychological Science.
Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331, 1447-1451.
Note: This post first appeared at RealClearEducation on March 11, 2014.
One of the controversies of the Common Core State Standards (CCSS) concerns the difficulty of the content, especially for early elementary grades. Some critics have suggested that the standards are too difficult; first grade children are simply not ready to learn about Mesopotamian civilizations, for example. But a new experiment shows that first graders can understand a scientific topic usually reserved for older grades--natural selection.
Even before the CCSS, key ideas from some content areas were left to later grades, presumably because students wouldn’t understand them earlier. For example, evolution has usually been taught in high school, even though it’s a foundational idea in biology that, if students had it under their belts, would likely make learning other concepts easier. The latest standards from Achieve, the National Research Council, and AAAS all take that tack.
It may seem foolish to suggest that students could tackle evolutionary ideas earlier, given that they frequently don’t understand them now. High schoolers usually understand the general idea of adaptation, but they focus on individuals, rather than populations. For example, they think that an individual’s efforts over a lifetime are influential in shaping its fitness, rather than random variation making some animals more fit, and thus more likely to survive and reproduce.
But the history of developmental psychology shows that the age at which children can reach cognitive milestones depends in no small part on the cleverness of the methods used to measure their ability. Perhaps younger students could understand evolution under the right circumstances. A new study (Kelemen et al., 2014) indicates that’s so.
Researchers tested children aged 5 through 8. Kids heard a story about pilosas, fictional animals whose survival was threatened when their food source, insects, started to live below ground in deep, narrow tunnels. Pilosas have trunks which might be wide or narrow. The story went on to explain that in successive generations, trunks became less variable, as pilosas with narrow trunks survived and had young, whereas pilosas with wide trunks could not get enough to eat and did not reproduce.
Researchers tested comprehension of the story and children’s ability to generalize the biological principle to a new case. Children were tested immediately and after three months. Each test included ten questions (five open-ended, five closed-ended) that probed understanding of different aspects of natural selection, such as differential survival, differential reproduction, and the passing on of traits between generations.
Seven- and eight-year-old children showed good comprehension of the story, with nearly half showing an understanding of natural selection in one generation and 91% showing at least a partial understanding. Remarkably, three months later, this knowledge transferred more or less intact to a story about a new species.
A second experiment replicated the first AND added the idea of trait constancy within an individual: what you’re born with, you retain. This extra detail seemed to help, with still higher percentages of children showing complete understanding and transfer to a new case.
No one would claim that these children have a complete understanding of natural selection. But they got much farther along in their understanding than I think most would have guessed.
The authors speculate that children did so well because the explanation capitalized “on young children’s drive for coherent explanation, factual knowledge, and interest in trait function, along with their affinity for picture storybooks.”
They further speculate that explaining natural selection at a younger age may have worked out so well because the children were not yet old enough to have developed naïve theories of species change, ideas that would become entrenched and potentially make it more difficult to understand natural selection properly.
The practical implication of this result is obvious; students may be ready to learn concepts of evolution much earlier than most have thought. It also invites the question of whether we do students a disservice if we are too quick to dismiss content as “developmentally inappropriate.”
Kelemen, D., Emmons, N. A., Schillaci, R. S., Ganea, P. A., Lillard, A., Rottman, J., & Smith, H. (2014). Young children can be taught basic natural selection using a picture storybook intervention. Psychological Science.
The goal of this blog is to point to scientific findings applicable to education that I think ought to receive more attention.