I finally got around to reading Paul Tough's How Children Succeed. If you haven't read it yet, I recommend that you do.
You probably know by now the main message: what really counts for academic success is conscientiousness (or its close cousins: grit, character, and non-cognitive skills).
Tough intersperses explanations of the science behind these concepts with stories of students that he's met and followed. The stories add texture and clarity, and Tough is among a very small number of reporters who gets complex science right consistently. He takes you through attachment theory, the HPA axis, and executive control functions, all without losing his footing or making the reader's eyes glaze over.
Tough also devotes considerable space to a fascinating inside look at how charter school mavens have thought about self-control, how their thinking has changed over time, and how their views square with the science.
The only flaws I see in the book concern a couple of big-picture conclusions that Tough draws.
First, there's what Tough calls the cognitive hypothesis--that academic success is driven primarily (perhaps even solely) by cognitive skills. The book suggests that this premise may be in error. What really counts is self-control.
But of course, you do need cognitive skills for academic success. In fact, Tough describes in detail the story of a boy who is very gritty indeed when it comes to chess, and who scales great heights in that world. But he's not doing all that well in school, and a teacher who tries to tutor him is appalled by what he does not know.
Self-control predicts academic success because it makes you more likely to do the work to develop cognitive skills. I'm sure Tough understands this point, but a reader could easily miss it.
Second, Tough closes the book with some thoughts on education reform. This section, though brief, struck me as unnecessary and in fact ill-advised. The whole book is about individual children and what makes them tick. Jumping to another level of analysis--policy--can only make this speculation seem hasty.
But these problems are mere quibbles. If you have heard about "non-cognitive skills," "self-control," or "grit" and wonder whether there's anything to it, you'd be hard put to find a better summary than How Children Succeed.
Amanda Ripley's new book, The Smartest Kids in the World: And How They Got That Way, has garnered positive reviews in the Economist, the New York Times, USA Today, the Daily Beast, and US News and World Report. Is it really that good? It's pretty darn good. As the subtitle promises, Ripley sets out to tell the education success stories of three countries: Finland and South Korea (whose 15-year-olds score very high on the PISA test) and Poland (offered as an example of a country in transition, making significant progress). What's Ripley's answer to the subtitle? They got that way by engaging, from an early age, in rigorous work that poses significant cognitive challenge.
In other words, the open secret is the curriculum. Along the way to this conclusion, she dispenses with various explanations for US kids' mediocre performance on the science and math portions of PISA. I've made these arguments myself, so naturally I found them persuasive:
- Poverty is higher in the US. Not compared to Poland. And other countries with low poverty (e.g., Norway) don't end up with well-educated kids. The relevant statistic is how much worse poor kids do relative to rich kids within a country. The US fares poorly on this statistic.
- The US doesn't spend enough money on education. Actually, we outspend nearly everyone. But because of local funding we perversely shower money on schools attended by the wealthy and spend less on the schools attended by poor kids.
- The US has lots of immigrants and they score low. Other countries do a better job of educating kids who do not speak the native language.
- The kids in other countries who take PISA are the elite. Arguably true in Shanghai, but not in Korea or Finland, both of which boast higher graduation rates than the US.
- Why should we compare our kids to those of foreign countries? It's not a race. Because those other kids are showing what we could offer our own children, and are not.
What is the explanation? According to Ripley, there is a primary postulate running through the psyche of South Koreans, Finns, and Poles when it comes to education: an expectation that the work will be hard. Everything else is secondary. So anything that gets in the way, anything that compromises the work, will be downplayed or eliminated. Sports, for example. Kids do that on their own time, and it's not part of school culture.
Several consequences follow from this laser-like focus on academic rigor. For example, if schoolwork is challenging, kids are going to fail frequently. So failure is necessarily seen as a normal part of the learning process, an opportunity for learning rather than a cause of shame.
If the academic work is going to be difficult, teachers will have to be very carefully selected and well trained. And you'll do whatever is necessary to make that happen, even if it means, as in Finland, offering teachers significant financial support during their training.
So what is the primary postulate of American education?
Ripley doesn't say, and I'm not sure Americans are sufficiently unified to name one. But two assumptions strike me as candidates.
First, that learning is natural, "natural" meaning that a propensity to learn is innate, instinctive, and therefore inevitable. That, in turn, means that it should be easy. This assumption is pretty much the opposite of the one Ripley assigns to South Korea, Finland, and Poland.
Many Americans seem to think that it's not normal for schoolwork to be challenging enough that it takes persistence. In fact, if you have to try much harder than other kids, in our system you're a good candidate for a diagnosis and an IEP.
This expectation that things should be easy may explain our credulity for educational gimmicks, for that's what gimmicks do: they promise to make learning easy for everyone. Can't learn math? It's because your learning style hasn't been identified. Trouble with Spanish? This new app will make it fun and effortless.
The second assumption I often see is that "rigor" and "misery" are synonyms. Rigor means that you will be challenged. It means you may not succeed quickly. It means your cognitive resources will be stretched. It doesn't mean you are being punished, nor that you will be unhappy.
At the same time, I can't agree with the "play is all you need" crowd. Play can be cognitively enriching, but that doesn't mean that all play is cognitively enriching.
It's easy to create schoolwork that's rigorous but also a grind, likely to make kids hate school. Ripley offers South Korea as an example: children there are miserable, adults hate the system, and despite kids' excellent test scores, everyone sees the Korean system as dysfunctional.
It's much tougher to educate kids in a way that is challenging but engaging. That's Finland, according to Ripley. And she's here to remind us that most of what has been pointed to as responsible for the Finnish miracle is not. What's responsible is the rigor of the work kids have been asked to do.
Will Americans embrace this idea, and demand that our education system challenge our kids? Will they embrace it to the point that they follow this primary postulate wherever it may lead?
I think Ripley's right to suggest that it's essential. I think the odds that Americans will follow through are remote.
James Paul Gee, a professor at Arizona State, is known as a pioneer in thinking about the educational uses of gaming. His book, What Video Games Have to Teach Us About Learning and Literacy, is considered a landmark in the field.
Thus his new book, The Anti-education Era: Creating Smarter Students through Digital Learning, is bound to attract interest.
Unfortunately, the book ultimately disappoints. Chief among the problems is that--despite the subtitle--there is very little solid advice here about how to change education.
In fact, the first 150 pages scarcely mention education at all. They are a laundry list--16 chapters in all--of the weaknesses of human cognition. This is territory that has been well covered in other popular books by Chabris and Simons, Ariely, Schacter, Kahneman, and others.
I can’t really fault Gee for not doing as creditable a job of describing human cognition as these authors. It is, after all, their bread and butter, not Gee’s. But the presentation is slow-paced and there are some errors. For example, Gee gets the definition of grit wrong (p. 202). He flatly states that we think well only when we care about what we are doing (p. 12), but the relationship between motivation and performance depends on the complexity of the task and the expertise of the performer.
It’s only the last 60 pages of the book that address ways that digital technologies might come to our aid in addressing the frailties of human cognition. Here Gee is on his home turf, but the territory is well-trod: getting people to work together, ensuring that people feel safe, and so on.
The problem is not that people need to be persuaded that these are good ideas. The problem is that we have evidence in hand that they don’t always work. That means that we need a more nuanced understanding about the conditions under which these ideas work. Gee half recognizes this need, and on occasion warns that solutions will not be simple. But he never takes the next step and outlines the complexities for us.
For example, Gee retells (via Jonah Lehrer) the story of a building at MIT that housed professors from a wide variety of disciplines, with a concomitant flowering of intellectual cross-fertilization. Gee quotes (with approval, I guess) Lehrer: “The lesson of Building 20 is that when the composition of the group is right—enough people with different perspectives running into one another in unpredictable ways—the group dynamic will take care of itself.”
As an academic who has been doing interdisciplinary work for 20 years, I would counter: “Like hell it does.”
Virtually every school of education is housed in a building with people trained in different disciplines, and interdisciplinary work remains rare. For reasons I won’t get into here (and much to the despair of university administrators), interdisciplinary work is hard.
So despite the title, educators will find little of interest here.
Common sense strikes back
Gee had better hope he does not meet up with Tom Bennett in a dark alley. Bennett is a British teacher who has been in the classroom since 2003, and has written for the Times Educational Supplement since 2009. (If you’re a reader outside the UK, you may not know that this is a very widely read weekly.)
Bennett’s fourth book, just out, is titled Teacher Proof: Why Research in Education Doesn’t Always Mean What it Claims, and What You Can Do about It.
The book comprises three sections: in the first, Bennett provides an overview of education research; in the second, he evaluates some education theories; and in the third, he suggests a better way forward.
As I read Teacher Proof, I kept thinking “This is one pissed off teacher.” The language is not at all bitter—in fact, it’s frequently quite funny, and Bennett is a marvelous writer—but you can tell that he feels cheated.
Cheated of his time, sitting in professional development sessions that advise an experienced teacher to change his practice based on an evidence-free theory.
Cheated of the respect he is due, as researchers with no classroom experience presume to tell him his job, and blame him (or his students) if their magic beans don’t grow a beanstalk.
Cheated of the opportunity to devote all of his attention to his students, given that researchers are not simply failing to help him do his job, but are actively getting in his way, to the extent that their cockamamie ideas infect districts and schools.
So what does this angry teacher have to say?
The first third of the book contrasts science and social science. The upshot, as Bennett describes it, is that the social sciences aspire to the precision of the “hard” sciences but can’t get there. They are nevertheless full of pretensions, “walking around in mother’s heels and pearls,” as Bennett says, pretending to be a more mature version of themselves.
There’s not much nuance in this view. As Bennett describes it, education research is not just badly done science, it is pretty much impossible-to-do-well science, given the nature of the subject matter.
This section of the book struck me as odd, both because it didn’t match my impression of the author’s view, based on his other writings, and because it conflicts with the second section of the book.
This section offers a merciless, overdue, and often funny skewering of speculative ideas in education: multiple intelligences, Brain Gym, group work, emotional intelligence, 21st century skills, technology in education, learning styles, learning through games. Bennett has an unerring eye for the two key problems in these fads: in some cases, the proposed “solutions” are pure theory, sprouting from bad (or absent) science (e.g., learning styles, Brain Gym); in others, perfectly sensible ideas are transmogrified into terrible practice when people become too dogmatic about their application (group learning, technology).
Bennett ends each chapter with a calm, pragmatic take, e.g., “Yes, I use technology a lot. Here’s where I find it useful.” As he says early on, “Experience trumps theory every time.”
But here’s where I think the second section of the book conflicts with the first. Bennett’s consistent criticism of these ideas is that there is no evidence to back them up. To me, this indicates that Bennett doesn’t think that social science research is impossible—he’s just fed up with social science research that’s done badly.
In the third section of the book Bennett tells us what different actors in the education world ought to do. It is the briefest section by far—less than ten pages—and the brevity matches the tone of the advice: “Look, a lot of this really isn’t that complicated, gang.” Namely:
- Researchers need to take a good long look in the mirror.
- Media outlets need to be less gullible.
- And teachers should appear to comply with the district’s latest lunacy, but once the door closes, stick to the basics. Bennett lays out his version of the basics in eight spare points.
To the “what people should do” list, I’d add another directive: schools of education should raise their standards for what constitutes education research. Bennett is right—too much of it is second-rate.
There is an ugly system of self-interest that has produced the terrible research (and, in turn, the need for Bennett’s book). Professors want to publish in peer-reviewed journals because that brings prestige. So publishers create “peer-reviewed” journals with very low standards, because journals bring them money. Institutional libraries buy these terrible journals (keeping them in business) because faculty say they are needed so that faculty and students can keep up with the latest research. And universities are reluctant to blow the whistle on the whole charade because schools of education—second-rate or not—bring tuition dollars.
Teacher Proof is a worthy read. There have been scattered criticisms of the theories that Bennett takes on, but seldom collected in one place in such readable prose, and seldom (if ever) with a teacher's eye for the details of practice.
Teacher Proof is also a timely read. In the UK, impatience with the influence that shoddy science has had on teaching practice is mounting. Teachers are sick of being told what to do, with phantom “research” used as the excuse. Would that the same would happen in the US!
I think of two very broad education reform camps. One calls into question the basic arrangement of institutions involved in U.S. education, arguing that the contradictory priorities in the system almost guarantee mediocrity. The solution, therefore, cannot be a nibbling around the edges of reform, but wholesale change: for some reformers, that means a market solution with greater parental choice, often coupled with more stringent human resources policies. For others the solution is a complete change—via technology—of the way we think of “learning.”
The second group of reformers argues that the system of education institutions is mostly fine, and that factors external to the system are responsible for our woes (which are, in any case, exaggerated). Some point to social and economic factors, others to the incoherence in curriculum (cf the Common Core), and others to the very reform measures (especially standardized tests used to evaluate schools and teachers) instituted by the other group of reformers.
In his new book Improbable Scholars, UC Berkeley professor of education David Kirp offers an unusually readable account of what improved schooling would look like if you’re in the second camp. His explicit mission: to show that educational excellence is possible within the system as it exists now, even in districts that face enormous challenges. He makes a fair case, given the limitations of the method he employs.
Improbable Scholars follows in the tradition of numerous education books by recounting time that the author spent in a school or district. Kirp tells the story of Union City, NJ, a city like so many others in the US: it has a great manufacturing past (“Embroidery Capital of the United States”) but was unable to find a new economic identity when cheap imports undermined its industries. Now most of its residents live in poverty, and a large percentage are recent immigrants who speak little English.
But Union City schools are unlike most districts with this profile. Despite the demographics, Union City students score about average on state tests. Ninety percent graduate high school, and sixty percent go on to college.
How they do it is Kirp’s subject, and in one sense this book has the feel of many others. The account is told through stories. We meet Alina Bossbaly, a local legend of a third-grade teacher who is able to connect even with the most difficult children, and to make them feel a part of the classroom community, a process that has come to be known as “Bossbaly-izing” children.
We meet long-time Union City Mayor Brian Stack, strong supporter of education, savvy politico in a tough political town, and point man in the procurement of funding for the new $180 million high school.
Kirp is an academic, not a journalist, so although he’s an able writer, you’re not in the hands of a professional storyteller or fact-finder. But what you get from Kirp is a deeper analysis, a better-than-even tradeoff in this case.
So what is Kirp’s conclusion? He offers a list of key factors that he says must be in place for a district to thrive:
- District leaders put the needs of students ahead of those of staff
- They invest in quality preschool
- They insist that a rigorous curriculum is consistently implemented
- They make extensive use of data to diagnose problems
- They engender a culture of respect among the staff
- They value stability and avoid drama—they make a plan and stick with it for the long haul
- They never stop planning and reviewing the results of their plans.
When a district posts a remarkable record, it’s natural to ask “How did they do it?” The obvious problem is that you’re looking at a single district. Maybe the real key to Union City is the Mayor. Maybe it’s the fact that many of the students come from countries with a tradition of respect for authority.
Kirp makes a case that other unusually successful districts have the same set of factors in common. It’s no substitute for a quantitative analysis, but Kirp at least shows that he’s aware of the problem.
And to be clear, I read the book in this spirit, as something like an ethnographic study. Books like this offer detail and texture that larger-scale, more rigorous analyses lack. In so doing, they ought to serve as inspiration to more quantitatively oriented researchers, showing what they are missing and where to turn their sights next.
When it comes to criticizing methods he thinks are ineffective, Kirp is less sure-footed. He dismisses the notion that the relationship between school funding and student achievement is uncertain by noting that such suggestions leave administrators “shaking their heads.” But there is an extensive and complex literature on the impact of funding, and the proper conclusion is by no means as simple as Kirp would like us to believe.
Likewise, I’m rankled by Kirp’s assertion that “If you’re a teacher or principal whose job is on the line and you’re ordered to accomplish what seems unattainable, cheating is a predictable response.” This sounds an awful lot like a tacit pass for cheating educators.
The section of Improbable Scholars devoted to “what doesn’t work” left a bad taste in my mouth: it comes at the end of the book, and it is a mere five pages long.
If you’re curious about one vision of successful education that more or less maintains the status quo and actually gets into some detail, Improbable Scholars is a good choice.
I like Wikipedia. I like it enough that I have donated during their fund drives, and not simply under the mistaken impression that doing so would make the plaintive face of founder Jimmy Wales disappear from my browser. Wikipedia is sometimes held up as a great victory for crowdsourcing, although as Jaron Lanier has wryly observed, it would have been strange indeed to have predicted in the 1980s that the digital revolution was coming, and that the crowning achievement would be a copy of something that already existed--the encyclopedia. That's a bit too cynical in my view, but more important, it leapfrogs an important question: is Wikipedia a good encyclopedia?
For matters related to education, my tentative answer is "no." For some time now I've noticed that articles in Wikipedia get things wrong, even allowing for the fact that some topics in education are controversial. So in a not-at-all-scientific test, I looked up a few topics that came to mind.
Reading education in the United States: The third paragraph reads:
There is some debate as to whether print recognition requires the ability to perceive printed text and translate it into spoken language, or rather to translate printed text directly into meaningful symbolic models and relationships. The existence of speed reading, and its typically high comprehension rate would suggest that the translation into verbal form as an intermediate to understanding is not a prerequisite for effective reading comprehension. This aspect of reading is the crux of much of the reading debate.
There is a large literature using many different methods to assess whether sound plays a role in the decoding of experienced readers, and ample evidence that it does. For example, people are slower to read tongue-twisters than control text (McCutchen & Perfetti, 1982). Whether that phonological activation is necessary to access meaning or is a byproduct of that process is more controversial. There is also pretty good evidence that speed reading can't really work, due to limitations in the speed of eye movements (Radach, Kennedy, & Rayner, 2004).
Next I looked at mathematics education. The section of most interest is "Research," and it's a grab-bag of assertions, most or all of which seem to be taken from the website of the National Council of Teachers of Mathematics. As such, the list is incomplete: no mention of the huge literatures on (1) math facts (e.g., Orrantia et al., 2010) or (2) spatial representations in mathematics (Newcombe, 2010). The conclusions are also, at times, sketchily drawn ("the importance of conceptual understanding": well, sure) and, on occasion, controversial ("the usefulness of homework": a lot depends on the details).
Learning styles: You probably could predict the contents of this entry: a long recounting of various learning-styles models, followed by a "Criticisms" section. Actually, this Wikipedia entry was better than I thought it would be, because I expected the criticism section to be shorter than it is. Still, if you knew nothing about the topic, you'd likely conclude "there's controversy" rather than "there's no supporting evidence" (Riener & Willingham, 2010).
Finally, I looked at the entry on constructivism (learning theory). This was a pretty stringent test, I'll admit, because it's a difficult topic.
The first section lists constructivists, and the list includes Herb Simon, which can only be called bizarre, given that he co-authored criticisms of constructivism (Anderson, Reder, & Simon, 2000).
The rest of the article is a bit of a mish-mash. It differentiates social constructivism (the view that learning is inherently social) from cognitive constructivism (the view that learners make meaning) only late in the article, though most authors consider the distinction basic. It mentions situated learning in passing, and fails to identify it as an influential third strain in constructivist thought. A couple of sections on peripheral topics ("Role Category Questionnaire," "Person-centered messages") have been added, it would appear, by enthusiasts.
Of the four passages I examined I wouldn't give better than a C- to any of them. They are, to varying degrees, disorganized, incomplete, and inaccurate.
Others have been interested in the reliability of Wikipedia, so much so that there is a Wikipedia entry devoted to the topic.
Two positive results are worthy of note. First, site vandalism is usually quickly repaired. (For example, in the history of the entry for psychologist William K. Estes one finds that someone wrote "William Estes is a martian that goes around the worl eating pizza his best freind is gondi.") The speedy repair of vandalism is testimony to the facts that most people want Wikipedia to succeed, and that the website makes it easy to make small changes.
Second, Wikipedia articles seem to fare well for accuracy compared to traditional edited encyclopedias. Here's where education may differ from other topics. The studies that I have seen compared articles on pretty arcane topics--the sort of thing that no one has an opinion on other than a handful of experts. Who is going to edit the entry on photorefractive keratectomy? But lots of people have opinions about the teaching of reading--and there are lots of bogus "sources" they can cite, a fact I emphasized to the point of reader exhaustion in my most recent book.
Now, I looked through only four entries. Perhaps others are better. If you think so, let me know. But for the time being I'll be warning students in my spring Educational Psychology course not to trust Wikipedia as a source.
Anderson, J. R., Reder, L. M., & Simon, H. A. (2000). Applications and Misapplications of Cognitive Psychology to Mathematics Instruction. Texas Education Review
McCutchen, D., & Perfetti, C. A. (1982). The visual tongue-twister effect: Phonological activation in silent reading. Journal of Verbal Learning and Verbal Behavior
Newcombe, N. S. (2010). Picture This. American Educator
Orrantia, J., Rodríguez, L., & Vicente, S. (2010). Automatic activation of addition facts in arithmetic word problems. The Quarterly Journal of Experimental Psychology
Radach, R., Kennedy, A., & Rayner, K. (Eds.) (2004). Eye movements and information processing during reading (Vol. 16, No. 1-2). Psychology Press.
Riener, C., & Willingham, D. (2010). The myth of learning styles. Change: The Magazine of Higher Learning
Psychologists have long looked to Oxford University Press for top-flight works of original scholarship and useful synthesis volumes. Now Oxford is publishing a new series, Fundamentals of Cognition, designed to serve as very brief summaries of the state of a field, suitable for an undergraduate course or as the key reading in a beginning graduate course.
The first volume has been published: Fundamentals of Comparative Cognition by Sara Shettleworth, and if it’s any indication of the quality of future volumes, Oxford has done very well indeed.
In a mere 124 pages Shettleworth offers the reader a good (though necessarily hurried) look at comparative cognition: the field that asks what humans have in common with other creatures in how they think, and what makes humans unique.
As she reviews highlights of this complex literature, Shettleworth shows us some of the key principles of comparative cognition. For example, different species might use very different cognitive strategies to solve the same problem: to orient in space, species might use dead reckoning, vectors, landmarks, route-learning or cognitive maps.
Another example: because animals have different abilities than we do, humans may be insensitive to how they experience a problem. For example, because the visual systems of some birds and honeybees extend into the ultraviolet range, a scientist looking at a brightly colored flower or plumage may mistake what a bird or bee is responding to.
Another key principle, one that has frustrated many an undergraduate, is Lloyd Morgan’s Canon: boiled down, it means that one shouldn’t interpret animal behavior as reflecting more sophisticated cognition if simpler cognition will do. It’s natural to interpret an animal behavior as reflecting the cognitive processes humans would invoke in that situation. But the animal may be doing what humans do for very different reasons, or by different methods.
Most often, this “other mechanism” is simple association. Time and time again, Shettleworth points out that what looks like sophisticated communication, say, or empathy, is explainable by the operation of relatively simple associative models, and that more work is actually needed to persuade us that the claimed cognitive process is actually at work. Such reading leads to momentary frustration, but ultimate admiration for the care of the scientists.
So how exactly are other species different from humans? First, I should repeat that species are all different from one another, so the question that might interest us (as it interested Darwin) is whether humans are in any way unique. Shettleworth closes with a review of a few proposed answers—e.g., Mike Tomasello’s suggestion that humans alone cooperatively share intentions—but ultimately casts her vote with none.
This is a wonderful book for a reader with a bit of background in psychology, but make no mistake, it’s not popular reading. Shettleworth sets out to review the field, not to offer choice bits to tempt a reader who was not otherwise interested.
Should educators read this book? Direct applications to educational practice are unlikely to spring to mind, but educators who, as part of their practice, are deeply immersed in understanding human cognition and development will likely find it of value.
In Tyranny of the Textbook, Beverlee Jobrack offers many observations that you’ve heard before. Standards alone won’t improve achievement. Testing alone won’t improve achievement. Technology alone won’t improve achievement. What makes the book worth reading is not Jobrack’s thoughts on these topics, because they are, frankly, fairly ordinary. It is her thoughts on the textbook industry that make the book well worth your time.
The kernel of her argument has three pieces:
(1) Textbook development: Textbooks are developed based on tradition and based on competitors’ products. No one in the publishing industry worries about whether the materials are effective. As Jobrack notes, publishers are for-profit enterprises. They need decision-makers to adopt their textbooks. Decision-makers do not base adoptions on effectiveness—or at least, publishers believe that they do not.
(2) Textbook adoption: What factors drive adoptions? To the extent that teachers have any input, it will be teacher leaders, and they already teach well. They have an existing set of lesson plans that work well. So they are not interested in a textbook that would necessitate rewriting all of those lesson plans. So new textbooks tend to be conservative. Further, just three publishing companies account for 75% of the market. So most of the books look the same. Consequently, relatively trivial features have an outsize influence on adoption decisions.
Trivial features like the cover design. Like the font size. Like whether the important features are clearly labeled or a bit more difficult to find.
Content matters to adoptions, according to Jobrack, only insofar as publishers ensure that all of the state standards are "covered." But she goes on to point out that little or no attention is paid to ordering and presenting this content in a way that ensures students learn. Again, effectiveness of learning is simply not on publishers' radar screens.
(3) Why textbooks matter: Jobrack argues that textbooks are hugely important because they constitute a de facto curriculum. Beginning teachers are overwhelmed by the prospect of writing lesson plans, and so depend heavily on instructional materials provided by publishers.
Is Jobrack right about all this? She ought to know whereof she speaks. She was promoted through the editorial ranks until she was the editorial director of SRA/McGraw-Hill. Still, we should bear in mind that these are mostly Jobrack’s impressions, not a systematic study of publishing business practices.
I admit that I’m probably more ready to believe Jobrack on publishing because her description so often matches my own experience. Like the beginning teachers she describes, when I first started teaching cognitive psychology, I relied heavily on published materials. I laid out four textbooks on my desk, used the sequence of topics they all shared, and cobbled together lectures by stealing the best stuff from each.
I saw the conservatism Jobrack describes much later when I prepared to write my cognitive textbook and told my editor that I wanted to do something really different from what was currently on the market. Her response: "Okay, but don't make it more than about 20% different or you'll never get any adoptions."
A point Jobrack makes indirectly, but that strikes me as more important than she realizes, concerns the role of measurement. Jobrack notes that publishers would be motivated to make textbooks effective if effectiveness drove the market. Well, in order to know whether textbooks are effective, we—teachers, administrators, parents, researchers, policymakers—need to agree on what we mean by effective and on a way to measure it. The textbook problem brings fresh urgency to this issue.
Whether Jobrack is right or not, I hope this book will prompt greater discussion about textbooks, and greater scrutiny of adoption processes.
The insidious thing about tests is that they seem so straightforward. I write a bunch of questions. My students try to answer them. And so I find out who knows more and who knows less.
But if you have even a minimal knowledge of the field of psychometrics, you know that things are not so simple.
And if you lack that minimal knowledge, Howard Wainer would like a word with you.
Wainer is a psychometrician who spent many years at the Educational Testing Service and now works at the National Board of Medical Examiners. He describes himself as the kind of guy who shouts back at the television when he sees something to do with standardized testing that he regards as foolish. These one-way shouting matches occur with some regularity, and Wainer decided to record his thoughts more formally.
The result is an accessible book, Uneducated Guesses, explaining the source of his ire on 10 current topics in testing. They make for an interesting read for anyone with even minimal interest in the topic.
For example, consider making a standardized test like the SAT or ACT optional for college applicants, a practice that seems egalitarian and surely harmless. Officials at Bowdoin College have made the SAT optional since 1969. Wainer points out the drawback: useful information about the likelihood that students will succeed at Bowdoin is omitted. Here's the analysis.
Students who didn't submit SAT scores with their applications had nevertheless taken the test; they just didn't submit their scores. Wainer finds that, not surprisingly, students who chose not to submit their scores did worse than those who did, by about 120 points.
Figure taken from Wainer's blog
Wainer also finds that those who didn't submit their scores had worse GPAs in their freshman year, and by about the amount that one would predict, based on the lower scores.
So although one might reject the use of a standardized admissions test out of some conviction, if the job of admissions officers at Bowdoin is to predict how students will fare there, they are leaving useful information on the table.
The practice does bring a different sort of advantage to Bowdoin, however. The apparent average SAT score of their students increases, and average SAT score is one factor in the quality rankings offered by US News and World Report.
In another fascinating chapter, Wainer offers a for-dummies guide to equating tests. In a nutshell, the problem is that one sometimes wants to compare scores on tests that use different items—for example, different versions of the SAT. As Wainer points out, if the tests have some identical items, you can use performance on those items as “anchors” for the comparison. Even so, the solution is not straightforward, and Wainer deftly takes the reader through some of the issues.
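To make the anchor-item idea concrete, here is a toy sketch (not Wainer's own procedure) of one simple linking method: use the mean and standard deviation of the shared anchor items in each group to build a linear transformation that puts form-B scores onto form A's scale. The data are hypothetical, and real equating methods (Tucker, Levine, equipercentile) involve more machinery than this.

```python
import statistics

def mean_sigma_link(anchor_a, anchor_b):
    """Return a function mapping form-B scores onto form A's scale.

    The transformation is chosen so that the anchor items' score
    distribution (mean and SD) matches across the two groups.
    """
    mu_a, sd_a = statistics.mean(anchor_a), statistics.pstdev(anchor_a)
    mu_b, sd_b = statistics.mean(anchor_b), statistics.pstdev(anchor_b)
    slope = sd_a / sd_b
    intercept = mu_a - slope * mu_b
    return lambda score: slope * score + intercept

# Hypothetical anchor-item scores from the two test-taking groups:
anchor_a = [4, 5, 6, 7, 8]   # group that took form A
anchor_b = [2, 3, 4, 5, 6]   # group that took form B scored lower on the anchors

link = mean_sigma_link(anchor_a, anchor_b)
print(link(4))  # a form-B score of 4 corresponds to 6.0 on form A's scale
```

The key point the sketch illustrates is the one Wainer makes: the whole procedure leans on the anchor items being the same questions for both groups. Remove that overlap and the linear transformation has nothing to grab onto.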
But what if there is very little overlap on the tests?
Wainer offers this analogy. In 1998, the Princeton High School football team was undefeated. In the same year, the Philadelphia Eagles won just three games. If we imagine each as a test-taker, the high school team got a perfect score, whereas the Eagles got just three items right. But the “tests” each faced contained very different questions and so they are not comparable. If the two teams competed, there's not much doubt as to who would win.
The problem seems obvious when spelled out, yet one often hears calls for uses of tests that would entail such comparisons—for example, comparing how much kids learn in college, given that some major in music, some in civil engineering, and some in French.
And yes, the problem is the same when one contemplates comparing student learning in a high school science class and a high school English class as a way of evaluating their teachers. Wainer devotes a chapter to value-added measures. I won't go through his argument, but will merely telegraph it: he's not a fan.
In all, Uneducated Guesses is a fun read for policy wonks. The issues Wainer takes on are technical and controversial—they represent the intersection of an abstruse field of study and public policy. For that reason, the book can't be read as a definitive guide. But as a thoughtful starting point, the book is rare in its clarity and wisdom.