My Facebook feed today has lots of links to this article. The upshot: a new Pew study showing that Americans think that US 15-year-olds rank "near the bottom" on international science tests, whereas the truth is that they "rank in the middle among developed countries." I guess "the middle" covers a lot of terrain, but the way I look at the data, this assertion doesn't hold.
The international comparison in question is the 2009 PISA. Here are the rankings.
Most everyone would agree that it's not appropriate to compare scores of US kids to those of poorer countries with little infrastructure and funding to support education.
That's why the article specifies the ranking of the US among "developed countries," and by the author's reckoning, kids from 12 developed countries scored better, and kids from 9 developed countries scored worse. That would put US kids at the 41st percentile. The US is ranked 30th on the list. Just eyeballing it, it's hard to see how 17 of the countries scoring better could be considered "not developed." One measure of "developed" status would be the International Monetary Fund's definition of "advanced economies," which includes: Australia, Austria, Belgium, Canada, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hong Kong, Iceland, Ireland, Israel, Italy, Japan, Luxembourg, Malta, Netherlands, New Zealand, Norway, Portugal, San Marino, Singapore, Slovakia, Slovenia, South Korea, Spain, Sweden, Switzerland, Taiwan, United Kingdom, United States.
By this definition of "advanced," US kids are 23rd out of 32 countries, or at the 28th percentile.
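(To show my arithmetic, which is my own reconstruction of the percentile claims: the percentile here is just the share of countries scoring worse than the US.)

\[
\text{percentile} = 100 \times \frac{\text{countries scoring worse}}{\text{total countries}}, \qquad 100 \times \frac{9}{22} \approx 41, \qquad 100 \times \frac{9}{32} \approx 28
\]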
It's true that "near the bottom" is too grim an assessment. But I can't see a way to put the 2009 PISA data together such that American kids are scoring about average.
I think of two very broad education reform camps. One calls into question the basic arrangement of institutions involved in U.S. education, arguing that the contradictory priorities in the system almost guarantee mediocrity. The solution, therefore, cannot be a nibbling around the edges of reform, but wholesale change: for some reformers, that means a market solution with greater parental choice, often coupled with more stringent human resources policies. For others the solution is a complete change—via technology—of the way we think of “learning.”
The second group of reformers argues that the system of education institutions is mostly fine, and that factors external to the system are responsible for our woes (which are, in any case, exaggerated). Some point to social and economic factors, others to the incoherence in curriculum (cf the Common Core), and others to the very reform measures (especially standardized tests used to evaluate schools and teachers) instituted by the other group of reformers.
In his new book Improbable Scholars, UC Berkeley professor of education David Kirp offers an unusually readable account of what improved schooling would look like if you’re in the second camp. His explicit mission: to show that educational excellence is possible with the system as it exists now, even in districts that face enormous challenges. He makes a fair case, given the limitations of the method he employs.

Improbable Scholars follows in the tradition of numerous education books by recounting time that the author spent in a school or district. Kirp tells the story of Union City, NJ, a city like so many others in the US: it has a great manufacturing past (“Embroidery Capital of the United States”) but was unable to find a new economic identity when cheap imports undermined its industries. Now most of its residents live in poverty, and a large percentage are recent immigrants who speak little English.
But Union City schools are unlike most districts with this profile. Despite the demographics, Union City students score about average on state tests. Ninety percent graduate high school, and sixty percent go on to college.
How they do it is Kirp’s subject, and in one sense this book has the feel of many others. The account is told through stories. We meet Alina Bossbaly, a local legend of a third-grade teacher who is able to connect even with the most difficult children, and to make them feel a part of the classroom community, a process that has come to be known as “Bossbaly-izing” children.
We meet long-time Union City Mayor Brian Stack, strong supporter of education, savvy politico in a tough political town, and point man in the procurement of funding for the new 180 million dollar high school.
Kirp is an academic, not a journalist, so although he’s an able writer, you’re not in the hands of a professional storyteller or fact-finder. But what you get from Kirp is a deeper analysis, a better-than-even tradeoff in this case.
So what is Kirp’s conclusion? He offers a list of key factors that he says must be in place for a district to thrive:
- District leaders put the needs of students ahead of those of staff
- They invest in quality preschool
- They insist that a rigorous curriculum is consistently implemented
- They make extensive use of data to diagnose problems
- They engender a culture of respect among the staff
- They value stability and avoid drama—they make a plan and stick with it for the long haul
- They never stop planning and reviewing the results of their plans.
When a district posts a remarkable record, it’s natural to ask “how did they do it?” The obvious problem is that you’re looking at a single district. Maybe the real key to Union City is the Mayor. Maybe it’s the fact that many of the students come from countries with a tradition of respect for authority.
Kirp makes a case that other unusually successful districts have the same set of factors in common. It’s no substitute for a quantitative analysis, but Kirp at least shows that he’s aware of the problem.
And to be clear, I read the book in this wise, as something like an ethnographic study. Books like this offer detail and texture that larger-scale, more rigorous analyses lack. In so doing, they ought to serve as inspiration to more quantitatively oriented researchers, showing them what they are missing and where to turn their sights next.
When it comes to criticizing methods he thinks are ineffective, Kirp is less sure-footed. He dismisses the notion that the relationship between school funding and student achievement is uncertain by noting that such suggestions leave administrators “shaking their heads.” There is an extensive and complex literature on the impact of funding, and the proper conclusion is by no means as simple as Kirp would like us to believe.
Likewise, I’m rankled by Kirp’s assertion that “If you’re a teacher or principal whose job is on the line and you’re ordered to accomplish what seems unattainable, cheating is a predictable response.” This sounds an awful lot like a tacit pass for cheating educators.
The section of Improbable Scholars devoted to “what doesn’t work” left a bad taste in my mouth: it comes at the end of the book, and it is a mere five pages long.
If you’re curious about one vision of successful education that more or less maintains the status quo and actually gets into some detail, Improbable Scholars is a good choice.
What would you say if a major corporation took out a full-page ad in the New York Times to advertise a message that you thought was important and mainly agreed with, only to find that the text of the message was rife with misspellings, grammatical errors, and misused words?

That's the feeling I get from the new video all over my Facebook feed (over 7 million views in 4 days) titled "Dove Real Beauty Sketches." If you haven't seen it, here you go. (I summarize it below.)
In brief, a woman describes herself to a forensic sketch artist, who cannot see her. He draws her portrait based only on her description, and then draws her again based on the description of a stranger who just met her. The woman then sees both portraits and recognizes that she has been rather hard on herself in her self-description. (The process is shown for several women.)

The associated website calls this "a social experiment." But it's a terrible example of experimentation. We are invited to draw the conclusion that women see themselves as less attractive than others do. I don't know the self-perception literature well, but I'm pretty sure this conclusion is right. But this experiment is a terrible way to illustrate that.
- The artist should be blind to condition. He knows when he's basing the drawing on the description of the subject vs. the stranger, and so could unconsciously bias the result
- The descriptions are not based on perception, they are based on memory. If you want to claim that it's about how women see themselves, not how they remember themselves, then each person should do their best to describe the woman based on the same photograph
- At the end the sketch artist tells each woman the source of each sketch. What would have happened if he had asked her to say which looks more like her, and to say which she thought was based on her description? If women's perception is really distorted, then the woman should see the sketch based on her description as being more like her. An alternative hypothesis is that women more or less know what they look like, but talk about themselves in negative terms.
- The foregoing point raises another issue: social conformity. If the result is not due to perception but to people conforming to social norms, the difference in the sketches might be due to the women's reluctance to seem vain in their self-descriptions, and to the stranger feeling that he or she ought to describe the woman nicely.
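None of these fixes would be hard. As a purely hypothetical sketch of the forced-choice procedure from the third point (the function and file names here are mine, not anything Dove used), the presentation could be randomized so that neither the participant nor the person coding responses knows which sketch came from which description:

```python
import random

def prepare_trial(sketch_from_self: str, sketch_from_stranger: str) -> dict:
    """Label the two sketches A/B in random order, hiding their sources."""
    sources = [("self", sketch_from_self), ("stranger", sketch_from_stranger)]
    random.shuffle(sources)  # randomize which sketch becomes A vs. B
    labeled = dict(zip("AB", sources))
    return {
        # What the participant (and the session coder) actually sees:
        "show": {label: path for label, (_, path) in labeled.items()},
        # The source key, kept sealed until all responses are coded:
        "key": {label: source for label, (source, _) in labeled.items()},
    }

# The participant answers "which looks more like you, A or B?";
# only afterward is the key consulted to score the choice.
trial = prepare_trial("sketch_self_01.png", "sketch_stranger_01.png")
print(trial["show"])
```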
How important are these criticisms to the overall message of the video? Not very. The point of the video is that women shouldn't be so hard on themselves in judging their looks. It's a good message.

That's why I draw the analogy to grammar, punctuation, and spelling in a written message. If Dove had published a print ad full of grammatical and spelling errors, I expect someone would have called them out on it. Dove presents this as an experiment, but it's a terrible experiment.
It would not have been hard to do a video making the same point with a better experiment. Any graduate student of social psychology could have improved this ten-fold. I would have given the video 9/10 (subtracting one point for scientific sloppiness) if not for the statement made here in the video:
I should be more grateful of my natural beauty. It impacts the choices in the friends that we make, the jobs we apply for, how we treat our children, it impacts everything. It couldn’t be more critical to your happiness.
Well, I'd prefer a different message. Rather than "It couldn't be more critical to your happiness" and "be grateful for your natural beauty" I'd prefer a message amounting to "what you look like matters less than you think."
But I can't expect everything from a company selling beauty products. 8 out of 10, Dove.
A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain, & Chamberlain, 2012), and it seems to lead not just to better memory but deeper memory that supports transfer (e.g., McDaniel et al., 2013; Rohrer et al., 2010).
("Retrieval practice" is less catchy than the initial name--testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or some other reason and because "testing effect" led some observers to throw up their hands and say "do we really need more tests?")Now researchers (Szpunar, Khan, & Schacter, 2013) have reported testing as a potentially powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.In this experiment subjects were asked to learn from a 21 minute video lecture on statistics. They were also told that the lecture would be divided in 4 parts, separated by a break. During the break they would perform math problems for a minute, and then would either do more math problems for two more minutes ("untested group"), they would be quizzed for two minutes on the material they had just learned ("tested group"), or they would review by seeing questions with the answers provided ("restudy group.")Subjects were told that whether or not they were quizzed would be randomly determined
for each segment; in fact, the same thing happened for an individual subject after each segment except
that each was tested after the fourth segment.So note that all subjects had reason to think that they might be tested at any time. There were a few interesting findings.
First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off--the tested subjects outperformed the restudy and the untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve-wracking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And in fact they performed better: tested = 90%, restudy = 76%, untested = 68%.)
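To keep the three conditions straight, here's the design and the final-test results in schematic form (my own summary of the paper as described above, not the authors' materials):

```python
# Between-subjects conditions in Szpunar, Khan, & Schacter (2013),
# with accuracy on the final test covering all of the material.
conditions = {
    "tested":   {"break_activity": "2-min quiz on the segment just watched", "final_test": 0.90},
    "restudy":  {"break_activity": "2-min review of questions with answers", "final_test": 0.76},
    "untested": {"break_activity": "2 more minutes of math problems",        "final_test": 0.68},
}
for name, c in conditions.items():
    print(f"{name:>8}: {c['break_activity']} -> {c['final_test']:.0%}")
```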
We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the benefit to attention of testing will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start of an answer to a difficult problem.
Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology. Published online Feb. 25.

Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 233-239.

Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences. Published online April 1, 2013. doi:10.1073/pnas.122176411
Ben Goldacre is a British physician and academic, and is the author of Bad Science, an exposé of bad medical practice that is based on wrong-headed science. For the last decade he has written a terrific column by the same name for the Guardian.

Goldacre has recently turned his critical scientific eye to educational practices in Britain. He was asked by the British Department for Education to comment on the use of scientific data in education and on the current state of affairs in Britain. You can download the report here.
So what does Goldacre say? He offers an analogy of education to medicine; the former can benefit from the application of scientific methods, just as the latter has.

Goldacre touts the potential of randomised controlled trials (RCTs). You take a group of students and administer an intervention (a new instructional method for long division, say) to one group and not to another. Then you see how each group of students did. (A minimal sketch of that logic appears after the list below.) Goldacre also speculates on what institutions would need to do to make the British education system as a whole more research-minded. He names two significant changes:
- There would need to be an institution that communicates the findings of scientific research (similar to the American "What Works Clearinghouse").
- British teachers would need a better appreciation of scientific research, so that they would understand why a particular practice was touted as superior and could evaluate the evidence for the claim themselves.
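To make the RCT logic concrete, here is a minimal sketch (my own illustration with simulated numbers, not anything from Goldacre's report): randomly assign students to the new method or the old one, then compare average outcomes.

```python
import random
import statistics

random.seed(0)

# Randomly assign 100 students to treatment (new long-division method)
# or control (business as usual).
students = list(range(100))
random.shuffle(students)
treatment, control = set(students[:50]), set(students[50:])

# Simulated test scores: we assume, purely for illustration, that the
# new method adds about 5 points on average.
score = {s: random.gauss(70 + (5 if s in treatment else 0), 10) for s in students}

mean_t = statistics.mean(score[s] for s in treatment)
mean_c = statistics.mean(score[s] for s in control)
print(f"treatment mean = {mean_t:.1f}, control mean = {mean_c:.1f}")
```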
I'm a booster of science in education. As someone who has written shorter treatments of the role that scientific research might play in education, I'm very excited that Goldacre has made this thoughtful and spirited contribution. I offer no criticisms of what Goldacre suggests, but would like to add three points.

First,
I agree with Goldacre that randomized trials allow the strongest conclusions. But I don't think that we should emphasize RCTs to the exclusion of all other sources of data. After all, if we continue with Goldacre's analogy to medicine, I think he would agree that epidemiology has proven useful. As a matter of tactics, note that the What Works Clearinghouse emphasized RCTs to the near exclusion of all other types of evidence, and that came to be seen as a problem.
If you exclude other types of studies, the available data will likely be thin. RCTs are simply hard to pull off: they are expensive, and they require permission from lots of people.
Hence, the What Works Clearinghouse ended up being agnostic about many interventions--"no randomized controlled trials yet." Its impact has been minimal. Other sources of data can be useful: smaller-scale studies and, especially, basic scientific work that bears on the underpinnings of an intervention. We must also remember that each RCT--strictly interpreted--offers pretty narrow information: method A is better than method B (for these kids, as implemented by these teachers, etc.).
Allowing other sources of data into the picture potentially offers a richer interpretation. As a simple example, shouldn't laboratory studies showing the importance of phonemic awareness influence our interpretation of RCTs of preschool interventions that teach phonemic awareness skills?
Second, basic scientific knowledge gleaned from cognitive and developmental psychology (and other fields) can not only help us to interpret the results of randomized trials; that knowledge can be useful to teachers on its own. Just as a physician uses her knowledge of human physiology to diagnose a case, a teacher can use her knowledge of cognition to "diagnose" how best to teach a particular concept to a particular child.
I don't know about Britain, but this information is not taught in most American schools of education. I wrote a book about cognitive principles that might apply to education. The most common remark I hear from teachers is surprise (and often anger) that they were not taught these principles when they trained. Elsewhere I've suggested that we need not just a "what works" clearinghouse to evaluate interventions, but a "what's known" clearinghouse for basic scientific knowledge that might apply to education.

Third,
I'm uneasy about the medicine analogy. It too easily leads to the perception that science aims to prescribe what teachers must do, that science will identify one set of "best practices" which all must follow. Goldacre makes clear on the very first page of the report that this is NOT what he's suggesting, but we non-doctors tend to see medicine this way: I go to my doctor, she diagnoses what's wrong, and there is a standard way (established by scientific method) to treat the disease.
That perception may be in error, but I think it's common.
I've suggested a different analogy: architecture. When building a house, an architect must respect certain basic facts set out by science. Physics and materials science will loom large for the architect; for educators it might be psychology, sociology, and other fields. The rules represent limiting conditions, but so long as you stay within those boundaries there are lots of ways to get it right. Just as physics doesn't tell the architect what the house must look like, so too cognitive psychology doesn't tell teachers how they must teach.
RCTs play a different role. They provide proof that a standard solution to a common problem is useful. For example, architects routinely face the problem of ensuring that a wall doesn't collapse when a large window is placed in it, and there are standard solutions to this problem. Likewise, educators face common problems, and RCTs hold the promise of providing proven solutions. Just as the architect doesn't have to use any of the standard methods, the teacher needn't use a method proven by an RCT. But the architect needs to be sure that the wall stays up, and the teacher needs to be sure that the child learns.
I made one of my garage-band-quality videos on this topic.
There's more to this topic--what it will mean to train teachers to evaluate scientific evidence, the role of schools of education. Indeed, there's more in Goldacre's report, and I urge you to read it. Longer term, I urge you to consider why we wouldn't want better use of science in educational practice.
Illiteracy and its costs to individuals and to society have long been a focus of concern in public policy. A corresponding lack of ability in mathematics--innumeracy--has received increasing attention in the last few decades. The ability to use basic math is more and more important as modern-day society grows more complex.
Some children have a problem in learning to read that is disproportionate to any other academic challenge they face. Some children have a corresponding problem with math. For some reason, the ideas just don't come together for these students.
In a recent article, David Geary (2013) reviews evidence that one cause of the problem may be a fundamental deficit in the representation of numerosity.
Geary describes three possible sources of a problem in children's appreciation of number.
To appreciate where the problems may lie, you need to know about the approximate number system. All children (and members of many other species) are born with an ability to appreciate numerosity. The approximate number system does not support precise counting, but allows for comparison judgements of "more than" or "less than." For example, in the figure below you can tell at a glance (and without counting) which cloud contains more dots.
This ability--making the comparison without counting--is supported by the approximate number system. (Formal experiments control for things like the total amount of "dot material" in each field, and so on.) The ability depends not on the absolute difference in the number of dots, but on their ratio. Adults can discriminate ratios as low as 11:10. Infants can perform this task, but the ratio of the difference in dots must be much greater, closer to 2:1.
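In the ratio terms this literature typically uses (my gloss, not Geary's notation), two dot arrays are reliably discriminated only when the ratio of the larger to the smaller count exceeds a threshold that shrinks with age:

\[
\frac{n_{\text{larger}}}{n_{\text{smaller}}} \gtrsim r, \qquad r_{\text{adult}} \approx \frac{11}{10} = 1.1, \qquad r_{\text{infant}} \approx \frac{2}{1} = 2.0
\]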
Many researchers believe that this approximate number system is the scaffold for an understanding of the cardinal values of number.
So the first possible source of problems in mathematics may be that the approximate number system does not develop at a typical pace, leaving the child slow to develop the cognitive representations of quantity that can support mathematics.
A second possibility is that the approximate number system works just fine, but the problem lies in associating symbols (number names and Arabic numerals) with the quantities represented there. Geary speculates that regulating attention may be particularly important to this ability.
Finally, it is possible for children to appreciate the cardinal value of numbers and yet not understand the logical relationships among those numbers--to appreciate the structure as a whole. That's the third possible problem.
Geary suggests that there is at least suggestive evidence that each of these potential problems creates trouble for some students.
The analogy to dyslexia is irresistible, and not inappropriate. Math, like reading, is not a "natural" human activity. It is a cultural contrivance, and the cognitive apparatus to support it must be hijacked from mental systems meant to support other activities.
As such, it is fragile, meaning it lacks redundancy. If something goes wrong, the system as a whole functions very poorly. Thus, understanding how things might go wrong is essential to helping children who struggle early on.
Geary, D. C. (2013). Early foundations for mathematics learning and their relations to learning disabilities. Current Directions in Psychological Science, 22, 23-27.
We are in the midst of an effort to explore what the new technologies enabled by powerful computing and reliable long-distance connection will mean to higher education. (There is, of course, a parallel effort in K-12, but that’s another topic.)
A new entrant is poised to make a bid, and it’s worth some study. The Minerva Project was initiated by Ben Nelson, the man behind Snapfish (a photo website). His vision is of a university that offers a “uniquely rigorous and challenging university education.” (At a price, we might add, that is a relative bargain; reportedly, the target cost is something like half of what the Ivies charge.)
The idea is that classes will be delivered via video, and students will then engage in discussion and debate. Importantly, and in pointed contrast to MOOCs, class size will be limited to 25.
Because the university is virtual, students can live anywhere, but they will be encouraged to live in a different world city each semester, perhaps living together to gain some of the face-to-face interactions that many observers consider a significant advantage of bricks-and-mortar university life.
The curriculum will not rest on traditional academic subject divisions (some English, some math, some science) but rather on four essential skills: critical thinking, use of data, understanding complex systems, and effective communication. If that sounds squishy, be forewarned that the courses are planned to be demanding, and students who do not perform well will (gasp) fail the course.
Minerva is trying something quite different than other online higher ed options. Online universities exist, but their appeal has been their low price and low academic standards and, to a lesser extent, flexibility in scheduling. They claim to offer a degree that is comparable to a traditional degree, though few believe that they do.
It’s still not completely obvious how MOOCs (as implemented in Coursera and edX) will evolve, but no one thinks the long-term purpose is to give away courses. The interpretation I hear most often is that they will not seek to replace standard degrees, but will offer more of an a la carte education; you take, say, 12 engineering courses (earning certificates showing that you’ve done the work) in the hopes that the reputation of the participating institutions will be enough to persuade an employer that passing the courses means you’ve got the chops for a job—and you’ve paid just a tiny fraction of what a traditional engineering degree would have cost.
Minerva seeks a third way. It promises an elite education, comparable to the most selective schools in the US. They are gambling that this option will appeal to students who were qualified to attend a big-name university, but didn’t get in.
It’s a darn good bet that there are plenty of frustrated students who thought they had the record for admission to Big-Name U; these places reject 75% or more of their applicants.
It’s no accident that the Minerva website notes that admissions decisions will disregard “lineage, state or country of origin, athletic prowess, or ability to donate.” In other words, "if you fear that you will be jostled by affirmative action targets, athletic admissions, legacies, etc., apply here."
The question is whether these students will see Minerva as a viable alternative to traditional schools.
Naturally, that will depend on the quality of the courses and the curriculum. Minerva has not hired any faculty yet, but Nelson has some high-profile members on his board (Larry Summers, Bob Kerrey), and they just hired Steve Kosslyn as the founding Dean of the College. Kosslyn has been a Dean at Harvard and was most recently Director of the Center for Advanced Study in the Behavioral Sciences at Stanford. That they could lure someone that capable to devote all of his energies to Minerva bodes well for the project.
We may be seeing the first of a new wave of elite colleges. In the colonial era and following, this country saw elite colleges founded to train clergy (Harvard, Yale, Princeton). Better than a century later there was another spate, as wealthy industrialists founded new schools or heavily endowed existing ones (U of Chicago, Stanford, Vanderbilt).
There may well be room for a new model of elite higher education, and Minerva, as the first one out of the gate, holds a significant advantage.
Note: Thanks to Chris Chabris, whose Facebook post made me aware of Minerva.
Readers of this blog are probably aware of the research, pioneered by Carol Dweck, showing that certain types of praise--especially praise that focuses on who the child is, rather than what the child has done--can have counter-intuitive effects.
A new report (Brummelman et al., 2013) shows that the consequences of certain praise for kids with low self-esteem can be particularly destructive.

In Experiment 1, 357 Dutch-speaking parents (87% mothers) read brief descriptions of children, some with high self-esteem ("Lisa usually likes the kind of person she is") and some with low ("Sarah is often unhappy with herself"). Parents were asked to describe what they would say in response to something the child was described as having done (e.g., "she has just made a beautiful drawing"). Responses were coded as praising the child's personal qualities ("You're such a good drawer!"), praising the child's behaviors ("You did a good job drawing!"), other praise (e.g., "Beautiful!"), or no praise.
The figure shows an interaction--children with high self-esteem were less likely to receive person praise than children with low self-esteem. Children with high self-esteem were more likely to receive process praise.
Experiment 2 examined whether children with high or low self-esteem respond differently to person praise.
313 children (mean age about 10.5 yrs) completed a standard measure of self-esteem. Several days later at their school, they performed a computer task that (they were told) pitted them against an opponent from another school to see who had faster reactions. They were told that a webmaster would monitor the competitors' performance. (In fact, there was no competitor or webmaster; everything was controlled by the computer.)
After a practice round the webmaster gave either process praise ("wow, you did a great job!"), person praise ("wow, you're great!") or no praise to the subject.
Next, the subject played against the "opponent" and was told that he or she had won or lost.
Finally, subjects were asked to rate "how you feel, right now" by agreeing or disagreeing with adjectives like "ashamed," and "humiliated." (They had made similar ratings before the game.)
The graph shows difference scores, based on the two measures of shame (taken before and after the reaction-time game). As you would expect, the students who were told they won (the open symbols in the graph) didn't feel much shame. Students told they lost (closed symbols) felt more. In addition, the students receiving person praise felt more shame overall, but crucially, all of this effect is due to the students with low self-esteem. They are represented by the highest point on the graph, at the upper left. The effect size is pretty substantial--around d = .5.
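(For readers who don't think in effect sizes: Cohen's d scales the difference between group means by the pooled standard deviation, so d = .5 means the groups differ by about half a standard deviation. The standard formula, though the authors' exact computation may differ:)

\[
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]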
So it seems that the person praise makes the children with low self-esteem feel more invested in the game, more like they have something at stake. So when they lose, they feel more shame. The high-self-esteem students, in contrast, shrug off the loss even after the person praise, because they generally feel more secure about their abilities.

The message, coupled with the result from Experiment 1, is that adults are biased to do exactly the wrong thing: they try to "buck up" kids with low self-esteem by offering person praise ("you're a great kid!") when these children will actually suffer more after a failure if they have received this praise.

The interpretation hangs together, to my mind, but I'd like to see this effect replicated. In particular, the measure of shame seemed heavy-handed. As far as I can tell, students were not asked about other feelings, just those related to shame, so there is a real chance that demand characteristics played a role. (That is, students may have been reacting as they thought the experimenter expected them to, not necessarily as they felt.)

Still, an interesting, possibly important experiment.
Reference: Brummelman, E., Thomaes, S., Overbeek, G., Orobio de Castro, B., van den Hout, M. A., & Bushman, B. J. (2013). On feeding those hungry for praise: Person praise backfires in children with low self-esteem. Journal of Experimental Psychology: General. Advance online publication. doi:10.1037/a0031917
Daphne Bavelier and Richard Davidson have a Comment in Nature today on the potential for video games to "do you good." The authors note that video gaming has been linked to obesity, aggressiveness, and antisocial behavior, but there is a burgeoning literature showing that some cognitive benefits accrue from gaming. Even though the data on these benefits are not 100% consistent (as I noted here), I'm with Bavelier & Davidson in their general orientation: so many people spend so much time gaming, we would be fools not to consider ways that games might be turned to purposes of personal and societal benefit. Could games help to make people smarter, or more empathic, or more cooperative?

The authors suggest three developments are necessary:
- Game designers and neuroscientists must collaborate to determine which game components "foster brain plasticity." (I believe they really mean "changes behavior.")
- Neuroscientists ought to collaborate more closely with game designers. Presumably, the first step will not get off the ground if this doesn't happen.
- There needs to translational game research, and a path to market. We expect that some research advances (and clinical trials) of the positive effects of gaming will be made in academic circles. This work must get to market if it is to have an impact, and there is not a blazed trial by which this travel can take place.
This is all fine, as far as it goes, but it ignores two glaring problems, both subsets of their first point.

First, we have to bear in mind that Bavelier & Davidson's enthusiasm for the impact of gaming comes from experiments with people who already liked gaming; you compare gamers with non-gamers and find some cognitive edge for the former. Getting people to play games is no easy matter, because designing good games is hard. This idea of harnessing interest in gaming for personal benefit is old stuff in education. Researchers have been at it for twenty years, and one of the key lessons they've learned is that it's hard to build a game that students really like and from which they also learn (as I've noted in reviews here).

Second, Bavelier & Davidson are a bit too quick to assume that measured improvements to basic cognitive processes will transfer to more complex processes. They cite a study in which playing a game improved mental rotation performance. Then they point out that mental rotation is important in fields like navigation and research chemistry. But one of the great puzzles (and frustrations) of attempts to improve working memory has been the lack of transfer; even when working memory is improved by training, you don't see a corresponding improvement in tasks that are highly correlated with working memory (e.g., reasoning).

In sum, I'm with Bavelier & Davidson in that I think this line of research is well worth pursuing. But I'm less sanguine than they are, because I think their point #1--getting the games to work--is going to be a lot tougher than they seem to anticipate.

Bavelier, D., & Davidson, R. J. (2013). Brain training: Games to do you good. Nature, 494, 425-426.
A math teacher and Twitter friend from Scotland asked me about this figure.
I'm sure you've seen a figure like this. It is variously called the "learning pyramid," the "cone of learning," "the cone of experience," and others. It's often attributed to the National Training Laboratory, or to educator Edgar Dale.
You won't be surprised to learn that there are different versions out there, with different percentages and some minor variations in the ordering of activities. Certainly, some mental activities are better for learning than others. And the ordering offered here doesn't seem crazy. Most people who have taught agree that long-term contemplation of how to help others understand complicated ideas is a marvelous way to improve one's own understanding of those ideas--certainly better than just reading them--although the estimate of 10% retention of what one reads seems kind of low, doesn't it?
If you enter "cone of experience" in Google Scholar, the first page offers a few papers that critique the idea (e.g., this one and this one), but you'll also see papers that cite it as if it's reliable. It's not. So many variables affect memory retrieval that you can't assign specific percentages of recall without specifying many more of them:
- what material is recalled (gazing out the window of a car is an audiovisual experience just like watching an action movie, but your memory for these two audiovisual experiences will not be equivalent)
- the age of the subjects
- the delay between study and test (obviously, the percent recalled usually drops with delay)
- what subjects were instructed to do as they read, demonstrated, taught, etc. (you can boost memory considerably for a reading task by asking subjects to summarize as they read)
- how memory was tested (percent recalled is almost always much higher for recognition tests than for recall)
- what subjects know about the to-be-remembered material (if you already know something about the subject, memory will be much better)
This is just an off-the-top-of-my-head list of factors that affect memory retrieval. They not only make it clear that the percentages suggested by the cone can't be counted on, but that the ordering of the activities could shift, depending on the specifics.

The cone of learning may not be reliable, but that doesn't mean that memory researchers have nothing to offer educators. For example, a monograph published in January offers an extensive review of the experimental research on different study techniques. If you prefer something briefer, I'm ready to stand by the one-sentence summary I suggested in Why Don't Students Like School?: it's usually a good bet to try to think about material at study in the same way that you anticipate you will need to think about it later. And while I'm flacking my books, I'll mention that When Can You Trust the Experts? was written to help you evaluate the research basis of educational claims, cone-shaped or otherwise.