The BBC has a website which (based on the title) I gather is meant to help students prepare for the GCSE. The "science" area has a section on "Brain and Memory" which, unfortunately, contains several inaccuracies.
This figure appears on page 1. Some of the functions attributed to brain areas are roughly correct--motor cortex is shown more toward the front of the brain than sensory cortex, which is right, and vision is where it ought to be. But it's hard to tell what's what, because the brain is simply drawn wrong (see below), so localization is hard to pinpoint (e.g., there's no central sulcus, which would separate the motor and somatosensory areas).
Some functions--reading, for example--are not localized in one spot. "Speech" is misleading--do they mean producing speech? Or language overall? Either way, it's wrong.
Things don't improve on the next two pages, which cover some cognitive aspects of memory. I've annotated them.
This is not advanced material. This is stuff I teach in my Introduction to Cognition course.
Come on, BBC. The American election is hard enough on us. Don't add to our misery.
Printing Nicholson Baker’s article in yesterday’s Magazine was a terrible, terrible decision.
The decision deserves two “terribles” because it was a double mistake.
First, you published an article on a topic that entails conflicting priorities in setting goals for public good, policy constraints in achieving these goals, the science of learning, distribution of wealth, and doubtless other complexities that I’m too exhausted to identify and enumerate. The author of the article has no expertise on any of these matters. That he appears to believe his 28 days as a substitute teacher gives him much insight into schooling only makes him less credible. The most fundamental limitations of his experience—for example, that teachers might choose a lesson for the substitute because it is easy to teach, even if it’s less interesting for the students—seem to have escaped him.
The second “terrible” is, unsurprisingly, the content. The author commits the common education newcomer blunder: “The school that would have been perfect for me, would be perfect for everyone.” He cannot understand why high school must be so stifling and soulless. Part of the blame goes to curriculum, where otherwise interesting topics are made dull, but there’s no mistaking that the teachers who inflict this boring stuff on students deserve blame as well. Baker reminisces fondly about his own experience at an alternative high school, where students studied what they wished.
To be more specific, NYTimes editors, here’s a probably incomplete list of problems in Baker’s argument:
1. There is actually evidence regarding classroom instructional quality in this country (e.g., here). He might have made use of it. (It shows, by the way, that the emotional tone of classrooms is, on average, much more positive than he lets on. Instructional quality, however, is not much better than he describes.)
2. Baker is not the first to suppose that much greater freedom for students would lead to greater motivation and better outcomes. The lesson over the last hundred years seems to be that such schools are wonderful when they work, but reproducing the successes has proven more difficult than most observers would guess.
3. Some parents prefer a lot of structure. The private schools in my town do not all follow the lots-of-choice model, à la Waldorf, Montessori, or Reggio Emilia. More parents pay to send their children to highly structured, traditional schools.
4. There are good arguments in favor of a common curriculum.
While I have your attention, please don’t publish similarly one-note, blinkered pieces centering on ideas like these:
1) Technology is poised to revolutionize learning and schools.
2) Competition would solve all problems in American education.
3) American education is the best in the world and all challenges in educational outcomes are due to poverty.
4) Teachers are fools, and the teachers’ unions are organized crime syndicates dedicated to protecting them.
5) All of America’s problems in education can be traced to standardized tests and if teachers were simply allowed to teach as they wished, all would be well.
Research over the last 20 years has shown that at least some of the academic problems children show in school are not wholly academic; some difficulties are rooted in self-image. Students may be hobbled by their perception of themselves as not fitting an academic context.
What are we to make of this?
Hanselman and colleagues suggest that the intervention may be sensitive to various moderating variables, and presumably it will be difficult to identify and measure them all, casting doubt on the utility of the writing exercises in schools. (They were able to test the impact of some moderators that current theory would predict to be important--they seemed not to matter.)
For my part, I’ve always found it difficult to understand why the intervention worked in the first place. Why would a writing intervention influence sense of self months later? Sense of self is surely the product of many experiences over a long time, and there’s no reason to suspect that stereotype-threat situations would trigger a memory of the writing exercise and thus influence sense of self in that moment.
The writing intervention sounds similar to one I blogged about recently—having college freshmen watch videos of older students describing their experiences from freshman year, and then write about the videos. The important difference is that the target was not students’ sense of self (which I’m suggesting is robust and hard to change) but their sense of what college is like, which, I would expect, is less closely held and more easily influenced.
The door is not closed on the values affirmation intervention, but much work remains to be done if it is to prove useful in schools.
A piece appeared in the New York Post on August 27 with the headline "It's digital heroin: How screens turn kids into psychotic junkies."
Even allowing for the fact that authors don't write headlines, this article is hyperbolic and poorly argued. I said as much on Twitter and my Facebook page, and several people asked me to elaborate. So....
First, to say "Recent brain imaging research is showing that they [games] affect the brain’s frontal cortex — which controls executive functioning, including impulse control — in exactly the same way that cocaine does," is transparently false. Engaging in a behavior cannot affect your brain in exactly the same way a psychoactive drug does. Saying it does is a scare tactic.
Lots of activities give people pleasure and it's sensible that these activities show some physiological similarities, given the similarity in experience. But if you want to suggest that the analogy (games and cocaine both change dopamine levels, therefore they are similar in other ways) extends to other characteristics, you need direct evidence for those other characteristics. Absent that, it's as though you buy a pet bunny and I say "My God, bunnies have four legs. Don't you realize TIGERS have four legs? You've brought a tiny tiger into your home!"
On the addiction question: The American Psychiatric Association considered including Internet Addiction in the DSM-5 and elected not to, though the matter remains under study. Research is ongoing, and technology changes quickly, so it wouldn't make sense to close the book on the issue.
To qualify as an addiction, you need more than the fact that the person likes it a lot and does it a lot. Addictions are usually characterized by:
• Tolerance--increasing amounts of the behavior become necessary
• Withdrawal symptoms if the behavior is stopped
• The person wants to quit but can’t
• Lots of time spent on the behavior
• Engaging in the behavior even though it’s counterproductive
The last two of these seem a good fit for kids who spend a huge amount of time on gaming or other digital technologies. The others fit less obviously.
Let me be plain: I have plenty of concerns about both the content of what children do with digital technologies and the amount of time they spend doing it. I've written about this issue elsewhere, and my own children who are still at home (ages 13, 11, and 9) face stringent restrictions on both.
But thinking through a complex issue like the social, emotional, cognitive, and motivational consequences of omnipresent screens in daily life requires clear heads, not melodramatic claims based on thin analogies.
Nothing is more familiar to those who follow the literature on early educational interventions than this pattern: a program meant to boost children’s reading or math or school readiness works wonderfully, helping children who started preK at a disadvantage achieve at levels comparable to other kids…but follow-up studies in later years show that the boost was not long-lasting. The results faded.
The most common explanation (and the one that I had always assumed was right) centers on the content these children were taught after the intervention ended. Instruction must continue to challenge these children, to extend their accomplishments. If teachers emphasize more basic material, naturally we’ll observe fadeout.
I’ve sometimes used this metaphor: early intervention is not like setting the trajectory of a rocket, a one-time event that, if you get it right, you needn’t think about again. It’s more like extra fuel in the booster rocket; it gets kids to the right altitude early on, but you’ve got to ensure that they have the same fuel in their rocket that other kids do after the intervention.
A recent article by Drew Bailey and colleagues (2016) casts doubt on this explanation. They call it the Constraining Content hypothesis, and they set forth a competing explanation they call the Pre-Existing Differences hypothesis.
The Pre-Existing Differences hypothesis goes like this: you identified a bunch of students who were behind or at risk of falling behind. You intervened. At the end of the year, they are no longer behind. Fine, but you didn’t select students for the intervention randomly. You picked them because they were behind, and at least some of the reasons they were behind will still be present at the end of the intervention.
Maybe their home environment does not support mathematics achievement, for any of a large number of reasons. Maybe these children’s beliefs about mathematics and expectations of themselves differ. Maybe their working memory capacity and/or general intelligence differ. Whatever the reasons children start preK behind, is there any good reason to suppose those factors have magically disappeared by the end of the intervention? Or that they will stop affecting math achievement?
Here’s how Bailey and colleagues compared the Constraining Content and the Pre-Existing Differences hypotheses. They used a preK math intervention that is known to work (Building Blocks). They measured math ability at the start of preK, at the end of preK (after the intervention), and at the end of kindergarten. You’d expect to see better scores for the kids getting the intervention (compared to controls) at the end of preK, but then a diminution of that advantage at the end of kindergarten—classic fadeout—and that’s what you see. Here are the results for the overall treatment and control groups.
Here’s the interesting part of the experiment. The researchers focused on control students (those randomly assigned not to receive the intervention) who nevertheless scored well at the end of preK: even though they had not received the intervention during the prior year, their scores were comparable to those of kids who had.
All children were to receive the same instruction in kindergarten. So if the Constraining Content hypothesis is right, the two groups should show comparable learning. But the Pre-Existing Differences hypothesis makes a different prediction. The control kids who scored as well as the intervention kids had something going for them during the preK year—lots of support at home, lots of math smarts, whatever. Those factors will still contribute in kindergarten, so these control kids should score better than the intervention kids at the end of kindergarten.
It makes sense that kids who scored well at the end of preK without actually experiencing the intervention also scored better on the pre-test. And crucially, the out-of-school factors behind that head start are still present at followup: even though the two groups experienced the same instruction during kindergarten and began the year with comparable math knowledge, by the end of the year the control kids are doing better.
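To make that logic concrete, here’s a minimal simulation of my own (not the authors’ analysis), assuming a single stable out-of-school factor and a one-time intervention boost:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# One stable out-of-school factor (home support, math smarts, etc.)
factor = rng.normal(0, 1, n)
treated = rng.random(n) < 0.5          # random assignment to intervention
boost = 0.6                            # hypothetical one-time intervention effect

# End of preK: growth reflects the stable factor, plus the boost for treated kids
end_prek = factor + boost * treated + rng.normal(0, 0.5, n)
# Kindergarten: identical instruction; growth again reflects the stable factor
end_k = end_prek + factor + rng.normal(0, 0.5, n)

# Compare treated kids and control kids who scored the same at the end of preK
cutoff = end_prek[treated].mean()
matched = np.abs(end_prek - cutoff) < 0.05
ctrl, trt = matched & ~treated, matched & treated

print("stable factor:  controls", round(factor[ctrl].mean(), 2),
      "| treated", round(factor[trt].mean(), 2))
print("end-of-K score: controls", round(end_k[ctrl].mean(), 2),
      "| treated", round(end_k[trt].mean(), 2))
# Matched controls needed a stronger stable factor to reach the same preK score
# without the boost, and that factor keeps working in kindergarten, so they
# pull ahead -- exactly the Pre-Existing Differences prediction.
```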
The researchers had another way to compare the Constraining Content and the Pre-Existing Differences hypotheses. Students were paired by score—one control and one intervention kid who scored comparably on the post-test. The researchers then sorted these pairs into a higher- and a lower-achieving group and looked at the followup scores of each. The Constraining Content hypothesis predicts that fadeout will be worse for the higher-scoring kids, since they are the ones most affected by the not-very-challenging content, while the lower-scoring kids should be catching up, because for them the instruction is challenging. But the data showed equivalent gains in the high- and low-scoring pairs.
We need a new metaphor. Intervention for at-risk students is not resetting the trajectory of the rocket, but neither is it just extra fuel in the booster to get them to altitude, after which they need only the same fuel as other kids. If they are to keep pace with their peers, they will continue to need extra fuel after the intervention.
Do you remember the “seductive allure” experiments? Those are the ones showing that people find explanations of psychological phenomena more satisfying if they include neuroscientific details, even if those details are irrelevant. (See here, here, and here.)
Emily Hopkins and her colleagues at Penn noted that there is more than one possible explanation for the effect. It may be that people hold neuroscience in special esteem, or that they like the physicality of neuroscience (in contrast with the seeming intangibility of behavioral explanation), or perhaps it’s the reductiveness that holds appeal. Hopkins and her group focused on this last possibility. They presented subjects with good and bad explanations for phenomena in six different sciences and asked them to rate the quality of the explanations from -3 to 3. Some of the explanations were horizontal (staying at the level of the science in question) and some were reductive (appealing to the next, more fundamental science), according to this hierarchy of sciences.
Here are examples of good and bad explanations, reductive and horizontal, from biology.
Subjects rated good explanations as better than bad ones, but they also rated reductive explanations more positively than horizontal explanations (M = 1.26 vs. 1.04). This effect was somewhat larger when the reductive information was neuroscientific (purportedly explaining psychology) than for other pairs. Still, when each pair was evaluated separately, participants gave higher ratings for the reductive explanation in five of six sciences.
The researchers gathered some other data about participants that cast an interesting light on these findings. They found that those who had taken more science courses at the college level were better at discriminating good from bad explanations. That was not the case for participants who had taken more college-level philosophy courses (although these participants scored better on a logical syllogisms task).
Researchers also asked participants about their perceptions of these sciences. Questions concerned scientific rigor, social prestige, and the difference in knowledge between an expert and a novice. The graph shows averages for each science, with the three questions combined into a single measure.
These ratings offer a possible explanation for why reductive explanations are especially appealing in the case of psychology/neuroscience: people don’t think much of psychology, but they hold neuroscience in esteem.
Although the effect is strongest for psychology, it is helpful to know that the “seductive allure” effect is not restricted to brains. People seem to expect that part of how science explains our world is by breaking things into ever smaller pieces. When an explanation does that, it sounds like science doing what it is supposed to do.
I’ve been asked a lot whether listening to an audiobook is “cheating,” and I hate the question. I’ll describe why in a bit, but for now I’ll just change it to “does your mind do more or less the same thing when you listen to an audiobook and when you read print?”
The short answer is “mostly.”
An influential model of reading is the simple view (Gough & Tunmer, 1986), which claims that two fundamental processes contribute to reading: decoding and language processing. “Decoding” obviously refers to figuring out words from print. “Language processing” refers to the same mental processes you use for oral language. Reading, as an evolutionary late-comer, must piggy-back on mental processes that already existed, and spoken communication does much of the lending.
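Gough and Tunmer expressed the model as a literal equation: R = D × C, where R is reading comprehension, D is decoding, and C is language comprehension, each ranging from 0 (no skill) to 1 (perfection). The multiplication captures the claim that neither component alone suffices: if either is zero, reading comprehension is zero.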
So according to the simple view, listening to an audiobook is exactly like reading print, except that the latter requires decoding and the former doesn’t.
Is the simple view right?
Some predictions you’d derive from the simple view are supported. For example, you’d expect that much of the difference in reading proficiency in the early grades would be due to differences in decoding. In later grades, most children are pretty fluent decoders, so differences in reading proficiency would be due more to the processes that support comprehension. That prediction seems to be true (e.g., Tilstra et al, 2009).
Especially relevant to the question of audiobooks, you’d also predict that for typical adults (who decode fluently) listening comprehension and reading comprehension would be mostly the same thing. And experiments show very high correlations of scores on listening and reading comprehension tests in adults (Bell & Perfetti, 1994; Gernsbacher, Varner, & Faust, 1990).
The simple view is a useful way to think about the mental processes involved in reading, especially for texts that are more similar to spoken language, and that we read for purposes similar to those of listening. The simple view is less applicable when we put reading to other purposes, e.g., when students study a text for a quiz, or when we scan texts looking for a fact as part of a research project.
The simple view is also likely incomplete for certain types of texts. The written word is not always similar to speech, and in such cases prosody (changes in the pacing, pitch, and rhythm of speech) might aid comprehension. “I really enjoy your blog” can be either a sincere compliment or a sarcastic put-down—both look identical on the page, and prosody would communicate the difference in spoken language.
We do hear voices in our heads as we read...sometimes this effect can be notable, as when we know the sound of the purported author's voice (e.g., Kosslyn & Matt, 1977). With audiobooks, the listener doesn't need to supply the prosody--the person reading the book aloud does that work.
For difficult-to-understand texts, prosody can be a real aid to understanding. Shakespearean plays provide ready examples. When Juliet says “Wherefore art thou Romeo?” it’s common for students to think that “wherefore” means “where,” and Juliet (who in fact doesn't know Romeo is nearby at that moment) is wondering where Romeo is. "Wherefore" actually means “why” and she's wondering why he's called Romeo, and why names, which are arbitrary, could matter at all. An actress can communicate the intended meaning of “Wherefore art thou Romeo” through prosody, although the movie clip below doesn't offer a terrific example.
So listening to an audiobook may provide extra information that makes comprehension a little easier. Prosody might clarify the meaning of ambiguous words or help you assign syntactic roles to words.
But most of the time it doesn’t, because most of what you listen to is not that complicated. For most books, for most purposes, listening and reading are more or less the same thing.
So listening to an audiobook is not “cheating,” but let me tell you why I objected to phrasing the question that way. “Cheating” implies an unfair advantage, as though you are receiving a benefit while skirting some work. Why talk about reading as though it were work?
Listening to an audiobook might be considered cheating if the act of decoding were the point; audiobooks allow you to seem to have decoded without doing so. But if appreciating the language and the story is the point, it’s not. Comparing audiobooks to cheating is like meeting a friend at Disneyland and saying “you took a bus here? I drove myself, you big cheater.” The point is getting to and enjoying the destination. The point is not how you traveled.
Students from disadvantaged backgrounds and students who are the first in their families to attend college are at heightened risk of leaving school without a degree. It's not a matter of students coming to school inadequately prepared--something else is at work (e.g., Steele, 1997). And dropping out is not merely disheartening for these students; it can carry grave financial repercussions.
Most colleges and universities have programs meant to help students with the transition, and these programs often focus on practical skills like choosing classes and study strategies, on the assumption that navigating academic life is a significant part of the problem.
In the last few years researchers have focused on a quite different approach: offering students a "lay theory" of the college experience. A lay theory is a set of beliefs that are used to interpret one's experiences. In the case of college, two beliefs have been flagged as especially important: that the transition to college inevitably involves setbacks, and that these setbacks are temporary.
College always includes disappointments, both academic and social. A student fails a test, or a callous professor tells them that their writing is beyond hope. (My freshman year of college, an English professor wrote this as the entire comment on my exam essay: "No. D.") Students get lonely and have trouble making friends. For students who grew up in families where it was always assumed that they would attend college, such disappointments are dispiriting but not threatening. The student may even wonder if he or she belongs in college, but that doubt likely doesn't last. For a student who did not grow up in an environment where it was taken for granted that they would graduate college, that doubt may persist. They may think that they are not smart enough to succeed, that they are "not college material," or that their cultural background is not compatible with college.
Researchers have sought ways to instill a lay theory of college that would change that interpretation, focusing on two ideas: setbacks in college are common (and therefore should not be taken as a sign that you don't belong) and setbacks are temporary (so things will get better).
Researchers have had some success with this intervention in smaller experiments (Stephens et al, 2014; Walton & Cohen, 2011). Now a new study (Yeager et al, 2016) suggests that a simple, inexpensive intervention works at scale.
Before they matriculated at college, students participated in an activity taking just 30 minutes, administered over the Internet. They were told that it was to help them think about the transition to college, and that they would have the chance to share their experiences, perhaps helping future students.
There were three experimental conditions. The social belonging condition provided information showing that feeling out of place is common in the transition to college, but that most students go on to make friends and succeed academically. The growth mindset condition provided information showing that intelligence is malleable and that students can succeed with effort coupled with effective strategies. The third condition combined both. In each case, students were asked to write an essay about how the information they read might apply to them, as a way of cementing the information in memory and helping them imagine applying it to their own experience.
One experiment targeted the entering class of a large public university. As shown in the table below, the intervention improved retention. All three of the intervention conditions were equally effective.
Another experiment administered the intervention at a selective private university. The figure shows that disadvantaged students receiving the intervention earned higher GPAs in their first year of college.
Consistent with the theory, advantaged students don't benefit from the intervention; they already believe that they can succeed at college, and that they belong there, so their lay theory of college is likely already similar to the one described in the intervention.
Two things are noteworthy about this experiment. First, the reduction in the achievement gap is quite sizable, on the order of 30-40%. Second, this intervention was remarkably brief, and remarkably inexpensive. Obviously this work needs to be replicated and the interventions fine-tuned. (The growth mindset intervention didn't really work in Experiment 1.) But if this finding holds up, it must be counted as a huge success for social scientists, and for David Yeager and Greg Walton in particular.
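To unpack what a reduction "on the order of 30-40%" means, here's a toy calculation; every number in it is invented for illustration, not taken from the paper:

```python
# Hypothetical first-year GPAs, invented for illustration only
gpa_advantaged     = 3.30
gpa_disadv_control = 2.90   # disadvantaged students, no intervention
gpa_disadv_treated = 3.04   # disadvantaged students, with intervention

gap_before = gpa_advantaged - gpa_disadv_control   # 0.40 GPA points
gap_after  = gpa_advantaged - gpa_disadv_treated   # 0.26 GPA points
print(f"gap closed: {1 - gap_after / gap_before:.0%}")   # -> gap closed: 35%
```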
There's plenty of research on homework and the very brief version of the findings is probably well known to readers of this blog: homework has a modest effect on the academic achievement of older students, and no effect on younger students (Cooper et al, 2006).
In a way, this outcome seems odd. Practice is such an important part of certain types of skill, and much of homework is assigned for the purpose of practice. Why doesn't it help, or help more?
One explanation is that the homework assigned is not of very good quality, which could mean a lot of different things and, absent more specificity, sounds like a homework excuse. Another, better explanation is that practice doesn't do much unless there is rapid feedback, and that's usually absent at home.
A third explanation is quite different, suggesting that the problem may lie in measurement. Most studies of homework efficacy have used student self-report of how much time they spend on homework. Maybe those reports are inaccurate.
A new study indicates that this third explanation merits closer consideration.
The researchers (Rawson, Stahovich & Mayer, in press) examined homework performance among three classes of undergraduate engineering students taking their first statics course. The homework assigned was typical for this sort of course; the atypical feature was that students were asked to complete their homework with Smartpens. These function like regular ink pens, but when coupled with special paper, they record time-stamped pen strokes.
The researchers were able to gather objective measures of time spent on homework, as well as other performance metrics.
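As an aside on method: turning raw pen strokes into "time spent" takes a little care, because long pauses shouldn't count as work. Below is a minimal sketch of one reasonable approach; the data format and the 60-second idle threshold are my assumptions, not details of the researchers' actual pipeline.

```python
def time_on_task(stroke_times, idle_threshold=60.0):
    """Estimate active work time (in seconds) from stroke timestamps,
    treating any gap longer than idle_threshold as a break."""
    times = sorted(stroke_times)
    return sum(curr - prev
               for prev, curr in zip(times, times[1:])
               if curr - prev <= idle_threshold)

# Strokes at 0s, 5s, 12s, then an 11-minute break, then 700s and 705s:
print(time_on_task([0, 5, 12, 700, 705]))   # -> 17 (the break doesn't count)
```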
A few of these measures proved interesting. For example, students who completed a lot of homework within 24 hours of the due date tended to earn lower course grades.
But the really interesting finding was a significant correlation of course grade and time spent on homework as measured by the Smartpen (r = .44) in the face of NO correlation between course grade and time spent on homework as reported by the students (r = -.16).
The relationship between homework and course grades is not the news. This is a college course and no matter what the format, it's only going to meet a few hours each week, and students will be expected to do a great deal of work on their own.
The news is that students were poor at reporting their time spent on homework; 88% reported spending more time than the Smartpen showed they actually had. The correlation of actual and reported time ranged from r = .16 to r = .35 across the three cohorts.
In other words, with such a noisy measure of time spent on homework, there was little hope of observing a reliable relationship between homework and course outcomes. This finding ought to call into question much of the prior research on homework.
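To see how a noisy measure buries a real relationship, here's a minimal simulation of my own (not the study's data), in which grades genuinely depend on homework time but self-reports are inflated and noisy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

true_hours = rng.gamma(shape=4, scale=2, size=n)   # actual time on homework
grade = 0.5 * true_hours + rng.normal(0, 3, n)     # grades really do depend on time
# Self-reports: systematically inflated and very noisy
reported = true_hours * rng.uniform(1.0, 2.5, n) + rng.normal(0, 20, n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("grade vs. actual time:  ", round(corr(grade, true_hours), 2))  # sizable
print("grade vs. reported time:", round(corr(grade, reported), 2))    # much weaker
```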
Please don't take this blog posting as an enthusiastic endorsement of homework. For one thing, this literature seems pretty narrow in focusing solely on academic performance outcomes, given that many teachers and parents have other goals for homework, such as increased self-directedness. For another thing, even if it were shown that certain types of homework led to certain types of improvement in academic outcomes, that wouldn't mean every school and classroom ought to assign homework. That decision should be made in the context of broader goals.
But if teachers are going to assign homework, researchers should investigate its efficacy. This study should make us rethink how we interpret existing research in this area.
Improving a specific skill is not hard. Or at least knowing what to do (practice) is not hard, even if actually doing it is not so easy. But improving very general skills, the sort that underlie many of the tasks we take on, has proven much more difficult. The grail among these general skills is working memory, as it's thought to be a crucial component of (if not nearly synonymous with) fluid intelligence. Brain training programs that promise wide-ranging cognitive improvements usually offer tasks meant to exercise working memory, and so to increase its capacity and/or efficiency.
Claims of scientific support (e.g., here) have been controversial (see here), and part of the problem is that many of the studies, even ones claiming "gold standard" methodologies, have not been conducted in the ideal way. This controversy usually arises after the fact: researchers claim that brain training works, and critics point out flaws in the study design.
A new study has examined more directly the possibility that brain training gains are due to placebo effects, and it indicates that's likely.
Cyrus Foroughi and his colleagues at George Mason University set out to test the possibility that knowing you are in a study that purportedly improves intelligence will affect your performance on the relevant tests. The independent variable in their study was the method of recruitment via an advertising flyer: either you knew you were signing up for brain training or you didn't.
The flyer at left might attract a different sort of participant than the one at right. Or the participants may not differ, except that some have been led to expect a different outcome of the experiment.
All participants went through the same experimental procedure. They took two standard fluid intelligence tests. Then they participated in one hour of working memory training, the oft-used N-back task. The final outcome measures--the fluid intelligence tests--took place the following day.
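For readers unfamiliar with it, the N-back task presents a stream of items, and the participant must respond whenever the current item matches the one N steps back. Here's a minimal sketch of the task logic (my illustration, not the software used in the study):

```python
import random

def generate_nback_stream(length=20, n=2, letters="ABCD", match_rate=0.3):
    """Build a letter stream; each item after the first n is a planned
    n-back match with probability match_rate."""
    stream = [random.choice(letters) for _ in range(n)]
    for _ in range(length - n):
        if random.random() < match_rate:
            stream.append(stream[-n])        # repeat the item from n steps back
        else:
            stream.append(random.choice(letters))
    return stream

def targets(stream, n=2):
    """Indices where the correct response is 'match'."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

stream = generate_nback_stream()
print("stream: ", "".join(stream))
print("targets:", targets(stream))
```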
Even advocates of brain training would agree that a single hour of practice is not enough to produce any measurable effect. Yet subjects who thought brain training would make them smarter improved. Control subjects did not.
It's well known that scores on IQ tests are sensitive to incentives--people do a little better if they are paid, for example. People in the placebo group might try harder on the second IQ test because they know how the experiment is "supposed" to come out and they unconsciously try to comply. The belief that training works might either have been planted by the flyer, or the flyer might have acted as a screening device, attracting people who already believed brain training works and keeping away those who didn't.
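Here's a toy simulation (my construction, not Foroughi et al.'s data) of how expectation alone can mimic a training gain: both groups have identical ability and receive no real benefit from the hour of training, but the group recruited with the "brain training" flyer tries a little harder at post-test:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 25                                     # participants per group

ability = rng.normal(100, 15, (2, n))      # row 0: placebo group, row 1: control
pre = ability + rng.normal(0, 5, (2, n))   # pre-test fluid intelligence scores
# Hypothetical expectation effect: the placebo group tries harder at post-test
post = ability + np.array([[3.0], [0.0]]) + rng.normal(0, 5, (2, n))

print("placebo group gain:", round(float((post[0] - pre[0]).mean()), 2))  # ~ +3
print("control group gain:", round(float((post[1] - pre[1]).mean()), 2))  # ~ 0
```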
Most published brain-training experiments have not reported whether subjects were recruited in a way that made the purpose plain. Foroughi and his colleagues contacted the researchers behind 19 published studies included in a meta-analysis and found that in 17 of them, subjects knew the purpose of the study.
It should be noted that this new experiment does not show conclusively that brain training cannot work. It shows that placebo effects appear to be very easy to obtain in this type of research. I dare say it's an even more dramatic problem than critics had appreciated, and more than ever the onus is on advocates of brain training to show that their methods work.
The goal of this blog is to provide pointers to scientific findings applicable to education that I think ought to receive more attention.