A new survey of American reading habits was published earlier this week. Much of the news coverage led with the somewhat surprising finding that young people (ages 16-29), supposedly enamored of gaming and video content, reported that they read and use libraries. In fact, they reported doing so more than older people do.
New York Times blog: Young people frequent libraries, study says
Christian Science Monitor: Millennials: A rising generation of book lovers
NPR (Boston): Facebook generation is reading strong
Sexy stuff, but I think it's misleading.
One message is that young people are reading "a lot." What constitutes "a lot" is a judgment call, obviously, but in this study the data showed that 83% of 18-29-year-olds had read a book sometime in the previous year. That strikes me as a low bar for being considered "a reader." Other data show that Americans spend much more time watching television each day than they do reading. This chart is from the Bureau of Labor Statistics.
Those data include Americans of all ages. If we look at younger Americans, the picture looks more or less the same: not a lot of reading. The figure
below shows leisure time activities, separated by sex.
The second way in which the coverage of the Pew study was deceptive lay in the reported age difference. Yes, young people were more likely than older people to report having read a book in the past year, but that difference was very likely due to the fact that many of them were students, doing required reading.
The study did report these data separately, shown below.
By the sometime-in-the-last year measure, older and younger Americans are about the same, except insofar as they are required to read for work or school.
Likewise, the increased use of libraries by young respondents is likely mediated by their need to use libraries for schoolwork.
There have been many reports of American reading habits in the last fifty years, and especially in the last twenty. The overall picture is that reading dropped when television became widely available, and hasn't changed much since then.
Does going to school actually make you smarter (at least, as measured by standard cognitive ability tests)? Answering this question is harder than it would first appear because schooling is confounded with many other variables.
Yes, kids' cognitive abilities improve the longer they have been in school, but it's certainly plausible that better cognitive abilities make it more probable that you'll stay in school longer. And schooling is also confounded with age--kids who have been in school longer are also older and therefore have had more life experiences, and perhaps those experiences have prompted the increases in intelligence.
One strategy is to test everyone on their birthday. That way, everyone should have had the same opportunity for life experiences, but the student with a birthday in May has had four months more schooling than the child with the January birthday.
That solves some problems, but it entails other assumptions. For example, older children within a grade might experience fewer social problems.
A new paper (Carlsson, Dahl, & Rooth, 2012) takes a different approach to addressing this difficult problem.
The authors capitalized on the fact that every male in Sweden must take a battery of cognitive tests for military service. The testing occurs near his 18th birthday, but the precise date is assigned more or less randomly (constrained by logistical factors for the military testers). So the authors could statistically control for the time-of-year effect of the birthday and in addition investigate the effects of just a few days more (or less) of schooling. The researchers were able to access a database of all the males tested between 1980 and 1994.
Students took four tests. Two (one of word meanings and one of reading technical prose) tap crystallized intelligence (i.e., what you know). Two others (spatial reasoning and logic) tap fluid intelligence (i.e., reasoning that does not depend on particular knowledge).
The authors found that older students scored better on all four tests--no surprise there. What about students who were the same age, but who, because of the vagaries of the testing, happened to have had a few days more or fewer of schooling?
More schooling was associated with better performance, but only on the crystallized intelligence tests: an extra 10 days in school improved scores by about 1% of a standard deviation. Extra non-school days had no effect.
There was no measurable effect of school days on the fluid intelligence tests. This result might mean that these cognitive skills are unaffected by schooling, but it might also mean that the "dose" of schooling was too small to have an impact, or that the measure was insensitive to the effect that schooling has on fluid intelligence.
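To put that effect size in perspective, here is a back-of-the-envelope extrapolation (my own, not the paper's): if the per-day effect were roughly linear, and assuming a school year of about 180 instructional days, the reported 1% of a standard deviation per 10 school days scales up as follows.

```python
# Back-of-the-envelope extrapolation of the reported effect size.
# Assumptions (mine, not the paper's): the per-day effect is linear,
# and a school year has about 180 instructional days.
effect_per_10_days = 0.01      # 1% of a standard deviation per 10 school days
school_year_days = 180

effect_per_year_sd = effect_per_10_days * (school_year_days / 10)
print(f"Implied effect per school year: {effect_per_year_sd:.2f} SD")

# Expressed on an IQ-style scale (SD = 15), purely for intuition:
print(f"Roughly {effect_per_year_sd * 15:.1f} IQ-type points per year")
```

That works out to a bit under a fifth of a standard deviation per year of schooling, which is in line with the magnitudes usually discussed in this literature, though the linearity assumption is of course untested here.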
Carlsson, M., Dahl, G. B., & Rooth, D.-O. (2012). The effect of schooling on cognitive skills. NBER Working Paper No. 18484, October 2012.
Last June I posted a blog entry about training working memory, focusing on a study by Tom Redick and his colleagues, which concluded that training working memory might boost performance on whatever task was practiced, but it would not improve fluid intelligence. (Measures of fluid intelligence are highly correlated with measures of working memory, and improving intelligence would be most people's purpose in undergoing working memory training.)
I recently received an email from Martin Walker, of MindSparke.com, which offers brain training. Walker sent me a polite email arguing that the study is not ecologically valid: that is, the conclusions may be accurate for the conditions used in the study, but those conditions do not match the ones typically encountered outside the laboratory. Here's the critical text of his email, reprinted with his permission:
"There is a significant problem with the design of the study that invalidates all of the hard work of the researchers--training frequency. The paper states that the average participant completed his or her training in 46 days. This is an average frequency of about 3 sessions per week. In our experience this frequency is insufficient. The original Jaeggi study enforced a training frequency of 5 days per week. We recommend at least 4 or 5 days per week.
With the participants taking an average of 46 days to complete the training, the majority of the participants did not train with sufficient frequency to achieve transfer. The standard deviation was 13.7 days which indicates that about 80% of the trainees trained less frequently than necessary. What's more, the training load was further diluted by forcing each session to start at n=1 (for the first four sessions) or n=2, rather than starting where the trainee last left off."
I forwarded the email to Tom Redick
, who replied: "Your comment about the frequency of training was something that, if not in the final version of the manuscript, was questioned during the review process. Perhaps it would’ve been better to have all subjects complete all 20 training sessions (plus the mid-test transfer session) within a shorter prescribed amount of time, which would have led to the frequency of training sessions being increased per week. Logistically, having subjects from off-campus come participate complicated matters, but we did that in an effort to ensure that our sample of young adults was broader in cognitive ability than other cognitive training studies that I’ve seen. This was particularly important given that our funding came from the Office of Naval Research – having all high-ability 18-22 year old Georgia Tech students would not be particularly informative for the application of dual n-back training to enlisted recruits in the Army and Marines.
However, I don’t really know of literature that indicates the frequency of training sessions is a moderating factor of the efficacy of cognitive training, especially in regard to dual n-back training. If you know of studies that indicate 4-5 days per week is more effective than 2-3 days week, I’d be interested in looking at it.
As mentioned in our article, the Anguera et al. (2012) article that did not include the matrix reasoning data reported in the technical report by Seidler et al. (2010) did not find transfer from dual n-back training to either BOMAT or RAPM [Bochumer Matrices Test and Raven's Advanced Progressive Matrices, both measures of fluid intelligence], despite the fact that “Participants came into the lab 4–5 days per week (average = 4.5 days) for approximately 25 min of training per session” (Anguera et al., 2012), for a minimum of 22 training sessions. In addition, Chooi and Thompson (2012) administered dual n-back to participants for either 8 or 20 days, and “Participants trained once a day (for about 30 min), four days a week”. They found no transfer to a battery of gF and gC tests, including RAPM.
In our data, I correlated the amount of dual n-back practice gain (using the same method as Jaeggi et al.) during training and the number of days it took to finish all 20 practice sessions (and 1 mid-test session). I would never really trust a correlation of N = 24 subjects, but the correlation was r = -.05. I re-analyzed our data, looking only at those dual n-back and visual search training subjects that completed the 20 training and 1 mid-test session within 23-43 days, meaning they did an average of at least 3 sessions of training per week. For the 8 gF tasks (the only ones I analyzed), there was no hint of an interaction or pattern suggesting transfer from dual n-back."
So to boil Redick's response down to a sentence, he's pointing out that other studies have observed no impact on intelligence when using a training regimen closer to that advocated by Walker, and Redick finds no such effect in a follow-up analysis of his own data (although I'm betting he would acknowledge that the experiment was not designed to address this question, and so does not offer the most powerful means of addressing it).
So it does not seem that training frequency is crucial. A final note: Walker commented in another email that customers of MindSparke consistently feel that the training helps, and Redick remarked that subjects in his experiments have the same impression. It just doesn't bear out in performance.
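Incidentally, Walker's "about 80%" figure can be sanity-checked. This is my own calculation, not either correspondent's, and it assumes completion times were roughly normally distributed with the reported mean (46 days) and standard deviation (13.7 days), and that finishing 20 sessions at the recommended minimum of 4 sessions per week means finishing within about 35 days.

```python
import math

# Sanity check (my own, under a normality assumption) of Walker's claim
# that ~80% of trainees finished too slowly.
mean, sd = 46.0, 13.7   # reported completion time, in days
cutoff = 35.0           # 20 sessions / 4 sessions per week = 5 weeks

def normal_cdf(x, mu, sigma):
    """Normal CDF computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

share_too_slow = 1.0 - normal_cdf(cutoff, mean, sd)
print(f"Estimated share finishing in more than {cutoff:.0f} days: {share_too_slow:.0%}")
```

Under those assumptions the estimate lands close to 80%, so Walker's arithmetic checks out; the substantive question, as Redick notes, is whether training frequency actually moderates transfer.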
Psychologists have long looked to Oxford University Press for top-flight works of original scholarship and useful synthesis volumes. Now Oxford is publishing a new series, Fundamentals of Cognition, designed to serve as very brief summaries of the state of the field, suitable for an undergraduate course or as the key reading in a beginning graduate course.
The first volume has been published: Fundamentals of Comparative Cognition by Sara Shettleworth, and if it's any indication of the quality of future volumes, Oxford has done very well indeed.
In a mere 124 pages Shettleworth offers the reader a good (though necessarily hurried) look at comparative cognition: the field that asks what humans have in common with other creatures regarding how they think, and what makes humans unique.
As she reviews highlights of this complex literature, Shettleworth shows us some of the key principles of comparative cognition. For example, different species might use very different cognitive strategies to solve the same problem: to orient in space, species might use dead reckoning, vectors, landmarks, route-learning or cognitive maps.
Another example: because animals have different abilities than we do, humans may be insensitive to how they experience a problem. For instance, because the visual systems of some birds and honeybees extend into the ultraviolet range, a scientist looking at a brightly colored flower or plumage may mistake what a bird or bee responds to.
Another key principle that has frustrated many an undergraduate is Lloyd Morgan's Canon: boiled down, it means that one shouldn't interpret animal behavior as reflecting more sophisticated cognition if simpler cognition will do. It's natural to interpret an animal behavior as reflecting the cognitive processes humans would invoke in that situation. The animal may be doing what humans do, but for very different reasons or by very different methods.
Most often, this “other mechanism” is simple association. Time and time again, Shettleworth points out that what looks like sophisticated communication, say, or empathy, is explainable by the operation of relatively simple associative models, and that more work is actually needed to persuade us that the claimed cognitive process is actually at work. Such reading leads to momentary frustration, but ultimate admiration for the care of the scientists.
So how exactly are other species different from humans? First, I should repeat that species are all different from one another, and so the question that might interest us (as it interested Darwin) is whether humans are in any way unique. Shettleworth closes with a review of a few proposed answers--e.g., Mike Tomasello's suggestion that humans alone cooperatively share intentions--but ultimately casts her vote with none.
This is a wonderful book for a reader with a bit of background in psychology, but make no mistake, it’s not popular reading. Shettleworth sets out to review the field, not to offer choice bits to tempt a reader who was not otherwise interested.
Should educators read this book? Direct applications to educational practice are unlikely to spring to mind, but educators who, as part of their practice, are deeply immersed in understanding human cognition and development will likely find it of value.
Passions run high in education debates because the stakes are high. When passions run high, name-calling is usually not far behind. I appreciate a good taunt as much as the next person (unless the next person is this guy)
but when the taunts go too far, or when they constitute most of a blog post, the most valuable audience--those who disagree with you or who are unsure--stops listening.
I think it's fair to say that, in education policy, some of us have gone too far. People who disagree with us are depicted as not merely wrong, but evil.
This characterization is most noticeable in what is broadly called the reform movement.
People who advocate reforms such as merit pay, the use of value added models of teacher evaluation, charter schools, and vouchers are not merely labeled misguided because these reforms won't work. They are depicted as bad people who are unsympathetic to the difficulty of teaching and who are in the pockets of the rich.
Likewise, those who see value in teachers' unions, who are leery of current methods of teacher evaluation, or who think that vouchers threaten the neighborhood character of schools are not merely wrong: they are accused of looking out for the welfare of lousy teachers.
And of course both sides are accused of "not caring about kids."
Why am I bringing all this up on a blog called "Science and Education"? Because studies of ingroup and outgroup thinking show that people who disagree with us are seen as immoral.
A recent study (Leach, Ellemers, & Barreto, 2007) evaluated three dimensions of ingroup status: sociability, competence, and morality. The authors reported that we like the groups we belong to and think they are special chiefly because they are moral. The most important reason we deem our group superior to other groups is not that we are smarter or more likeable; it is that we are on the side of right.
Another comforting fiction: we think that we know what people on the other side of an issue would say, or how they would behave.
For example, one study from the 1990s (Robinson, Keltner, Ward, & Ross, 1995) investigated the reactions of liberals and conservatives to the Howard Beach incident, in which a young Black man was struck and killed by a car as he was running away from a group of White pursuers in the Howard Beach neighborhood of New York City. After reading a synopsis of the incident, subjects were asked a series of questions meant to probe what they thought about (1) who was responsible for the death, (2) the role of race in the incident, and (3) the severity of the criminal sentences for the White teens.
Subjects were also asked to judge how liberals and conservatives would answer these questions.
The findings showed two things: (1) we think that we are more logical and less influenced by ideology than others are; (2) we think that our group is less influenced by ideology than other groups are.
In sum, we think that people who agree with us are moral, and people who disagree with us, less so. Further, we think that we know how other people will interpret complicated situations: they will be driven more by ideology than by facts.
Of all the bloggers, pundits, reporters, researchers, etc. I know, I can think of two who I would say are mean-spirited--both of them unrelentingly vitriolic, I'm guessing in some wretched effort to resolve personal disappointments.
Of the remaining hundreds, all give every evidence of sincerity and of genuine passion for education.
So this is a call for fewer blog postings that, implicitly or explicitly, denigrate the other person's motives, or that offer a knowing nod with the claim "we all know what those people think."
It may be a natural bias, but it makes for a boring read.
Leach, C. W., Ellemers, N., & Barreto, M. (2007). Group virtue: The importance of morality (versus competence and sociability) in the positive evaluation of in-groups. Journal of Personality and Social Psychology.
Robinson, R. J., Keltner, D., Ward, A., & Ross, L. (1995). Actual versus assumed differences in construal: "Naive Realism" in intergroup perception and conflict. Journal of Personality and Social Psychology, 68, 404-417.