If the title of this blog struck you as brash, I came by it honestly: it's the title of a terrific new paper by three NYU researchers (Protzko, Aronson & Blair, 2013). The authors sought to review all interventions meant to boost intelligence, and they cast a wide net, seeking any intervention for typically developing children from birth to kindergarten age that used a standard IQ test as the outcome measure and that was evaluated in a randomized controlled trial (RCT).

A feature of the paper I especially like is that
none of the authors publish in the exact areas they review. Blair mostly studies self-regulation, and Aronson, gaps due to race, ethnicity or gender. (Protzko is a graduate student studying with Aronson.) So the paper is written by people with a lot of expertise, but who don't begin their review with a position they are trying to defend. They don't much care which way the data come out. So what did they find? The paper is well worth reading in its entirety--they review a lot in just 15 pages--but there are four marquee findings.
First, the authors conclude that infant formula supplemented with long-chain polyunsaturated fatty acids boosts intelligence by about 3.5 points, compared to formula without. They conclude that the same boost is observed if pregnant mothers receive the supplement. There are not sufficient data to conclude that other supplements--riboflavin, thiamine, niacin, zinc, and B-complex vitamins--have much impact, although the authors suggest (with extreme caution) that B-complex vitamins may prove helpful.
Second, interactive reading with a child raises IQ by about 6 points. The interactive aspect is key; interventions that simply encouraged reading or provided books had little impact. Effective interventions provided information about how to read to children: asking open-ended questions, answering questions children posed, following children's interests, and so on.
Third, the authors report that sending a child to preschool raises his or her IQ by a little more than 4 points. Preschools that include a specific language development component raise IQ scores by more than 7 points. There were not enough studies to differentiate what made some preschools more effective than others.
Fourth, the authors report on interventions that they describe as "intensive," meaning they involved more than preschool alone. The researchers sought to significantly alter the child's environment to make it more educationally enriching. All of these studies involved low-SES children (following the well-established finding that low-SES kids have lower IQs than their better-off counterparts due to differences in opportunity. I review that literature here.)
Such interventions led to a 4-point IQ gain, and a 7-point gain if the intervention included a center-based component. The authors note the interventions have too many features to enable them to pinpoint the cause, but they suggest that the data are consistent with the hypothesis that the cognitive complexity of the environment may be critical. They were able to confidently conclude (to their and my surprise) that earlier interventions helped no more than those starting later.

Those are the four interventions with the best track record. (Some others fared less well. Training working memory in young children "has yielded disappointing results.") The data are mostly unsurprising, but I still find the article a valuable contribution: a reliable, easy-to-understand review on an important topic. Even better, this looks like the beginning of what the authors hope will be a longer-term effort they are calling the Database on Raising Intelligence--a compendium of RCTs of interventions meant to boost IQ. That may not be everything we need to know about how to raise kids, but it's a darn important piece, and such a database will be a welcome tool.
An experiment is a question which science poses to Nature, and a measurement is the recording of nature’s answer. --Max Planck
You can't do science without measurement. That blunt fact might give us pause when people emphasize non-cognitive factors in student success and in efforts to boost it.
"Non-cognitive factors" is a misleading but entrenched catch-all term for factors such as motivation, grit, self-regulation, social skills. . . in short, mental constructs that we think contribute to student success, but that don't contribute directly to the sorts of academic outcomes we measure, in the way that, say, vocabulary or working memory do.
Non-cognitive factors have become hip. (Honestly, if I hear about the Marshmallow Study just one more time, I'm going to become seriously dysregulated.) And there are plenty of data to show that researchers are on to something important. But are they on to anything that educators are likely to be able to use in the next few years? Or are we going to be defeated by the measurement problem?

There is a problem, there's little doubt. A term like "self-regulation" is used in different senses: the ability to maintain attention in the face of distraction, the inhibition of learned or automatic responses, or the squelching of emotional responses. The relations among them are not clear. Further, these might be measured by self-ratings, teacher ratings, or various behavioral tasks.
But surprisingly enough, different measures do correlate, indicating that there is a shared core construct (Sitzmann & Ely, 2011). And Angela Duckworth (Duckworth & Quinn, 2009) has made headway in developing a standard measure of grit (distinguished from self-control by its emphasis on the pursuit of a long-term goal).
So the measurement problem in non-cognitive factors shouldn't be overstated. We're not at ground-zero on the problem. At the same time, we're far from agreed-upon measures. Just how big a problem is that?
It depends on what you want to do.
If you want to do science, it's not a problem at all. It's the normal situation. That may seem odd: how can we study self-regulation if we don't have a clear idea of what it is? Crisp definitions of constructs and taxonomies of how they relate are not prerequisites for doing science. They are the outcome of doing science. We fumble along with provisional definitions and refine them as we go along.
The problem of measurement seems more troubling for education interventions.
Suppose I'm trying to improve student achievement by increasing students' resilience in the face of failure. My intervention is to have preschool teachers model a resilient attitude toward failure and to talk about failure as a learning experience. Don't I need to be able to measure student resilience in order to evaluate whether my intervention works?
Ideally, yes, but that lack may not be an experimental deal-breaker.
My real interest is student outcomes like grades, attendance, dropout, completion of assignments, class participation and so on. There is no reason not to measure these as my outcome variables. The disadvantage is that there are surely many factors that contribute to each outcome, not just resilience. So there will be more noise in my outcome measure and consequently I'll be more likely to conclude that my intervention does nothing when in fact it's helping.
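This is a statistical power problem: the noisier the outcome measure, the more likely a real effect goes undetected. A quick simulation makes the point; the effect size, sample size, and noise levels below are made-up illustrative numbers, not figures from any study:

```python
import random
import statistics

def detection_rate(effect=0.3, noise_sd=1.0, n=100, trials=2000, seed=1):
    """Fraction of simulated experiments in which a real effect of the given
    size produces a treatment-control difference bigger than ~2 standard errors."""
    rng = random.Random(seed)
    se = (2 * noise_sd**2 / n) ** 0.5  # standard error of the mean difference
    detected = 0
    for _ in range(trials):
        treat = [effect + rng.gauss(0, noise_sd) for _ in range(n)]
        ctrl = [rng.gauss(0, noise_sd) for _ in range(n)]
        if statistics.mean(treat) - statistics.mean(ctrl) > 2 * se:
            detected += 1
    return detected / trials
```

With these numbers, the same true effect is detected far more often when `noise_sd` is 1.0 than when it is 2.0. Nothing about the intervention changed; only the noisiness of the outcome measure did.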
The advantage is that I'm measuring the outcome I actually care about. Indeed, there would not be much point in crowing about my ability to improve my psychometrically sound measure of resilience if such improvement meant nothing to education.
There is a history of this approach in education. It was certainly possible to develop and test reading instruction programs before we understood and could measure important aspects of reading such as phonemic awareness.
In fact, our understanding of pre-literacy skills has been shaped not only by basic research, but by the success and failure of preschool interventions. The relationship between basic science and practical applications runs both ways.
So although the measurement problem is a troubling obstacle, it's neither atypical nor final.
Duckworth, A. L., & Quinn, P. D. (2009). Development and validation of the Short Grit Scale (GRIT–S). Journal of Personality Assessment, 91, 166-174.
Sitzmann, T., & Ely, K. (2011). A meta-analysis of self-regulated learning in work-related training and educational attainment: What we know and where we need to go. Psychological Bulletin, 137, 421-442.
A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here). Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, researchers have understandably wanted to know whether there is a way to boost executive function in kids.

Tools of the Mind (Bodrova & Leong, 2007) looked promising. It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)

A few studies of relatively modest scale (but not trivial--100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al., 2008; Diamond et al., 2007). But now some much larger-scale follow-up studies (800-2,000 kids) have yielded discouraging results. These studies were reported at a symposium this spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice. Researchers at the symposium reported the results of three studies. Tools of the Mind did not have an impact in any of the three.

What should we make of these discouraging results? It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training. Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind.
Now we are left with the dispiriting conclusion "More study is needed."
Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S. (2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299-313.

Bodrova, E. & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education. Second edition. New York: Merrill.

Diamond, A. & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959-964.

Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318
Steven Levitt, of Freakonomics fame, has unwittingly provided an example of how science applied to education can go wrong. On his blog, Levitt cites a study
he and three colleagues published (as an NBER working paper
). The researchers rewarded kids for trying hard on an exam. As Levitt notes, the goal of previous research has been to get kids to learn more. That wasn't the goal here. It was simply to get kids to try harder on the exam itself, to really show everything that they knew.

Among the findings: (1) it worked. Offering kids a payoff for good performance
prompted better test scores; (2) it was more effective if, instead of offering a payoff for good performance, researchers gave them the payoff straight away and threatened to take it away
if the student didn't get a good score (an instance of a well-known and robust effect called loss aversion
); (3) children prefer different rewards at different ages. As Levitt puts it, "With young kids, it is a lot cheaper to bribe them with trinkets like trophies and whoopee cushions, but cash is the only thing that works for the older students."

There are a lot of issues one could take up here, but I want to focus on Levitt's surprise that people don't like this plan. He writes, "It is remarkable how offended people get when you pay students for doing well – so many negative emails and comments." Levitt's surprise gets at a central issue in the application of science to education. Scientists are in the business of describing (and thereby enabling predictions of) the Natural world. One such set of phenomena concerns when students put forth effort and when they don't. Education is not a scientific enterprise. The purpose is not to describe
the world, but to change it, to make it more similar to some ideal that we envision. (I wrote about this distinction at some length in my new book. I also discussed it in this brief video.)
Thus science is ideally value-neutral. Yes, scientists seldom live up to that ideal; they have a point of view that shapes how they interpret data, generate theories, and so on. But neutrality is an agreed-upon goal, and lack of neutrality is a valid criticism of how someone does science. Education, in contrast, must entail values, because it entails selecting goals. We want to change the world--we want kids to learn things: facts, skills, values. Well, which ones? There's no better or worse answer to this question from a scientific point of view.

A scientist may know something useful to educators and policymakers once the educational goal is defined; i.e., the scientist offers information about the Natural world that can make it easier to move toward the stated goal. (For example, if the goal is that kids be able to count to 100 and to understand numbers by the end of preschool, the scientist may offer insights into how children come to understand cardinality.) What scientists cannot do is use science to evaluate the wisdom of stated goals.

And now we come to people's hostility to Levitt's idea of rewards for academic work.
I'm guessing most people don't like the idea of rewards for the same reason I don't. I want my kids to see learning as a process that brings its own reward. I want my kids to see
effort as a reflection of their character, to believe that they should give their all to any task that is their responsibility, even if the task doesn't interest them.

There is, of course, a large, well-known research literature on the effect of extrinsic rewards on motivation. Readers of this blog are probably already familiar with it--if so, skip the next paragraph. The problem is one of attribution. When we observe other people act, we speculate on their motives. If I see two people gardening--one paid and the other unpaid--I'm likely to assume that one gardens because he's paid and the other because he enjoys gardening. It turns out that we make these attributions about our own behavior as well. If my child tries her hardest on a test she's likely to think "I'm the kind of kid who always does her best, even on tasks she doesn't care for." If you pay her for her performance she'll think "I'm the kind of kid who tries hard when she's paid." This research began in the 1970s and has held up very well. Kids work harder for rewards. . . until the rewards stop. Then they
engage in the task even less than they did before the rewards started. I summarized some of this work here.
In the technical paper, Levitt cites some of the reviews of this research but downplays the threat, pointing out that when motivation is low to start with, there's not much danger of rewards lowering it further. That's true, and I've made a similar argument: cash rewards might be used as a last-ditch effort for a child who has largely given up on school. But that would dictate using rewards only with kids who were not motivated to start, not in a blanket fashion as was done in Levitt's study. And I can't see concluding that elementary school kids were so unmotivated that they were otherwise impossible to reach.

In addressing the threat to student motivation with research, Levitt is approaching the issue in the right way (even if I think he's incorrect in how he does so). But on the blog (in contrast to the technical paper), Levitt addresses the threat in the wrong way. He skips the scientific argument and simply belittles the idea that parents might object to someone paying their child for academic work. He writes: "Perhaps the critics are right and the reason I'm so messed up is that my parents paid me $25 for every A that I got in junior high and high school. One thing is certain: since my only sources of income were those grade-related bribes and the money I could win off my friends playing poker, I tried a lot harder in high school than I would have without the cash incentives. Many middle-class families pay kids for grades, so why is it so controversial for other people to pay them?"

I think Levitt is getting "so many negative emails and comments" because he's got scientific data to serve one type of goal (get kids to try hard on exams), the application of which conflicts with another goal (encourage kids to see academic work as its own reward). So he scoffs at the latter. I see this blog entry as an object lesson for scientists.
We offer something valuable--information about the Natural world--but we hold no special status in deciding what to do with that information (i.e., setting goals). In my opinion Levitt's blog entry shows he has a tin ear for the possibility that others do not share his goals for education. If scientists are oblivious to or dismissive of those goals, they can expect not just angry emails; they can expect to be ignored.
An important study on the impact of education on women's attitudes and beliefs: Mocan & Cannonier (2012) took advantage of a naturally occurring "experiment" in Sierra Leone. The country suffered a devastating, decade-long civil war during the 1990s, which destroyed much of the country's infrastructure, including schools. In 2001, Sierra Leone instituted a program offering free primary education; attendance was compulsory. This policy provided significant opportunities for girls who were young enough for primary school, but none for older children. Further, resources to implement the program were not equivalent in all districts of the country.

The authors exploited these quirks in the program's accessibility to compare girls who participated with those who did not. (Researchers controlled for other variables such as religion, ethnic background, residence in an urban area, and wealth.)
The outcome of interest was empowerment
, which the researchers defined as "having the knowledge along with the power and the strength to make the right decisions regarding one's own well-being." The outcome measures came from
a 2008 study (the Sierra Leone Demographic and Health Survey) which summarized interviews with over 10,000 individuals.
The findings: Better-educated women were more likely to believe that:

- a woman is justified in refusing sex with her husband if she knows he has a sexually transmitted disease
- a husband beating his wife is wrong
- female genital mutilation is wrong

Better-educated women were also more likely to endorse these behaviors:

- having fewer children
- using contraception
- getting tested for AIDS

One of the oddest findings in these data is also one of the most important to understanding the changes in attitudes: they are not due to changes in literacy. The researchers drew that conclusion because an increase in education had no impact on literacy, likely because the quality of instruction in schools was very low. The best guess is that the impact of schooling on attitudes was through social avenues.
Mocan, N. H. & Cannonier, C. (2012) Empowering women through education: Evidence from Sierra Leone. NBER working paper 18016.
There is a great deal of attention paid to, and controversy about, the promise of training working memory to improve academic skills, a topic I wrote about here. But working memory is not the only cognitive process that might be a candidate for training. Spatial skills are a good predictor of success in science, mathematics, and engineering. Now, on the basis of a new meta-analysis (Uttal, Meadow, Tipton, Hand, Alden, Warren & Newcombe, in press), researchers claim that spatial skills are eminently trainable. In fact they claim a quite respectable average effect size of 0.47 (Hedges' g) after training (that's across 217 studies).
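For readers who haven't encountered the statistic, Hedges' g is a standardized mean difference (like Cohen's d: the treatment-control difference divided by the pooled standard deviation) with a correction for small-sample bias. A minimal sketch of the computation, with made-up IQ-style numbers as an example:

```python
import math

def hedges_g(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                   / (n_treat + n_ctrl - 2))
    d = (mean_treat - mean_ctrl) / sp            # Cohen's d
    df = n_treat + n_ctrl - 2
    correction = 1 - 3 / (4 * df - 1)            # bias-correction factor J
    return d * correction

# Illustrative: 50 trained vs. 50 control subjects, 5-point gain, SD of 15
# gives g of about 0.33 -- roughly a third of a standard deviation.
print(round(hedges_g(105, 100, 15, 15, 50, 50), 2))
```

So the 0.47 reported in the meta-analysis means trained groups outperformed controls by about half a standard deviation.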
Training tasks across these many studies included things like visualizing 2D and 3D objects in a CAD program, acrobatic sports training, and learning to use a laparoscope (an angled device used by surgeons). Outcome measures were equally varied, and included standard psychometric measures (like a paper-folding test
), tests that demanded imagining oneself in a landscape, and tests that required mentally rotating objects.
Even more impressive:
1) researchers found robust transfer to new tasks
2) researchers found little, if any, effect of delay between training and test--the skills don't seem to fade with time, at least for several weeks. (Only four studies included delays of greater than one month.)
This is a long, complex analysis and I won't try to do it justice in a brief blog post. But the marquee finding is big news. What we'd love to see is an intervention that is relatively brief, not terribly difficult to implement, reliably leads to improvement, and transfers to new academic tasks.
That's a tall order, but spatial skills may fill all the requirements.
The figure below (from the paper) is a conjecture: if spatial training were widely implemented, and if, once scaled up, it produced the average improvement we see in these studies, how many more people could be trained as engineers?
The paper is not publicly available, but there is a nice summary here
from the collaborative laboratory responsible for the work. I also recommend this excellent article from American Educator
on the relationship of spatial thinking to math and science, with suggestions for parents and teachers.
Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2012, June 4). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin. Advance online publication. doi: 10.1037/a0028446

Newcombe, N. S. (2010). Picture this: Increasing math and science learning by improving spatial thinking. American Educator, Summer.
Should kids be allowed to chew gum in class? If a student said "but it helps me concentrate. . ." should we be convinced? If it provides a boost, it's short-lived. It's pretty well established that a burst of glucose provides a brief cognitive boost (see review here), so the question is whether chewing gum in particular provides any edge over and above that, or whether a benefit would be observed when chewing sugar-free gum.
One study (Wilkinson et al., 2002
) compared gum-chewing to no-chewing (and "sham chewing," in which subjects were to pretend to chew gum, which seems awkward). Subjects performed about a dozen tasks, including tests of vigilance (i.e., sustaining attention) and of short-term and long-term memory.
Researchers reported some positive effect of gum-chewing for four of the tests. It's a little hard to tell from the brief write-up, but it appears that the investigators didn't correct their statistics for the multiple tests.
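To see why that matters: run enough uncorrected tests at the conventional .05 level and a "significant" result or two is nearly guaranteed by chance alone. A quick illustration (the 12 below is an arbitrary stand-in for "about a dozen tasks," assuming independent tests and no true effects):

```python
def familywise_error_rate(k, alpha=0.05):
    """Chance of at least one false positive across k independent tests,
    assuming no true effects exist."""
    return 1 - (1 - alpha) ** k

def bonferroni_threshold(k, alpha=0.05):
    """Per-test significance threshold that keeps the familywise rate near alpha."""
    return alpha / k

# With a dozen tests, the chance of a spurious "significant" result is ~46%.
print(round(familywise_error_rate(12), 2))
```

That's nearly a coin flip that something turns up by chance. The Bonferroni fix is simply to test each comparison at alpha/k instead of alpha.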
This throw-everything-at-the-wall-and-see-what-sticks approach may be characteristic of this research. Another study (Smith, 2010
) took that same approach and concluded that there were some positive effects of gum chewing for some of the tasks, especially for feelings of alertness. (This study did not use sugar-free gum so it's hard to tell whether the effect is due to the gum or the glucose.)
A more recent study (Kozlov, Hughes & Jones, 2012
) using a more standard short-term memory paradigm, found no benefit for gum chewing.
What are we to make of this grab-bag of results? (And please note this blog does not offer an exhaustive review.)
A recent paper (Onyper, Carr, Farrar & Floyd, 2011
) offers a plausible resolution. They suggest that the act of mastication offers a brief--perhaps ten or twenty minute--boost to cognitive function due to increased arousal. So we might see benefit (or not) to gum chewing depending on the timing of the chewing relative to the timing of cognitive tasks.
The upshot: teachers might allow or disallow gum chewing in their classrooms for a variety of reasons, but there is not much evidence that it confers a significant cognitive advantage.

EDIT: Someone emailed to ask if kids with ADHD benefit. The one study I know of reported a cost to vigilance with gum-chewing for kids with ADHD.
Kozlov, M. D., Hughes, R. W. & Jones, D. M. (2012). Gummed-up memory: Chewing gum impairs short-term recall. Quarterly Journal of Experimental Psychology, 65, 501-513.
Onyper, S. V., Carr, T. L, Farrar, J. S. & Floyd, B. R. (2011). Cognitive advantages of chewing gum. Now you see them now you don't. Appetite, 57, 321-328.
Smith, A. (2010). Effects of chewing gum on cognitive function, mood and physiology in stressed and unstressed volunteers. Nutritional Neuroscience, 13, 7-16.
Wilkinson, L., Scholey, A., & Wesnes, K. (2002). Chewing gum selectively improves aspects of memory in healthy volunteers. Appetite, 38, 235-236.
The New York Times Magazine has an article on working memory training and the possibility that it boosts one type of intelligence. I think the article is a bit--but only a bit--too optimistic in its presentation. The article correctly points out that a number of labs have replicated the basic finding: training with one or another working memory task leads to increases in standard measures of fluid intelligence, most notably Raven's Progressive Matrices.
Working memory is often trained with an N-back task, shown in the figure at left from the NY Times article. You're presented with a series of stimuli, e.g., you're hearing letters. You press a button if a stimulus is the same as the one before (N=1), or the time before last (N=2), or the time before that (N=3). You start with N=1, and N increases if you are successful. (Larger N makes the task harder.) To make it much harder, researchers can add a second stream of stimuli (e.g., the colored squares shown at left) and ask you to monitor BOTH streams in an N-back task. That is the training task that you are to practice. (And although the figure calls it a "game," it's missing one usual feature of a game: it's no fun at all.)

There are two categories of outcome measures taken after training. In a near-transfer task, subjects are given some other measure of working memory
to see if their capacity has increased. In a far-transfer
task, a task is administered that isn't itself a test of working memory, but of a process that we think depends on working memory capacity. All the excitement has been about far-transfer measures, namely that this training boosts intelligence, about which more in a moment. But it's actually pretty surprising and interesting that labs are reporting near-transfer. That's a novel finding, and contradicts a lot of work that's come before, showing that working memory training tends to benefit only the particular working memory task used during training, and doesn't even transfer to other working memory tasks.
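The N-back procedure described above can be sketched in a few lines of code; this is an illustrative toy for a single stream, not any lab's actual implementation:

```python
def n_back_targets(stream, n):
    """Indices at which the stimulus matches the one presented n steps back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

def score_responses(stream, n, responses):
    """Compare a subject's button presses (a set of indices) with the true
    targets; returns (hits, false_alarms)."""
    targets = set(n_back_targets(stream, n))
    hits = len(targets & responses)
    false_alarms = len(responses - targets)
    return hits, false_alarms

# Example: in the 2-back stream A, B, A, C, B, C, positions 2 and 5 are
# targets, because each repeats the letter from two steps earlier.
print(n_back_targets(['A', 'B', 'A', 'C', 'B', 'C'], 2))
```

In the adaptive version described in the article, n would simply be incremented whenever the subject's hit rate is high enough, which is what makes the task stay hard as the subject improves.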
The far-transfer claim has been that the working memory training boosts fluid intelligence. Fluid intelligence
is one's ability to reason, see patterns, and think logically, independent of specific experience. Crystallized intelligence, in contrast, is stuff that you know, knowledge that comes from prior experience. You can see why greater working memory capacity might lead to more fluid intelligence--you've got a greater workspace in which to manipulate ideas.

A standard measure of fluid intelligence is the Raven's Progressive Matrices task, in which you see a pattern of figures and you are to say which of several choices would complete the pattern, as shown below.
So, is this finding legit? Should you buy an N-back training program for your kids? I'd say the jury is still out.
The article quotes Randy Engle--a highly regarded working memory researcher--on the subject, and he can hardly conceal his scorn: “May I remind you of ‘cold fusion’?”
Engle--who is not
one of those scientists who has made a career out of criticizing others--has a lengthy review of the working memory training literature which you can read here.
Another recent review
(which was invited for the journal Brain & Cognition
) concluded "Sparse evidence coupled with lack of scientific rigor, however, leaves claims concerning the impact and duration of such brain training largely unsubstantiated. On the other hand, at least some scientific findings seem to support the effectiveness and sustainability of training for higher brain functions such as attention and working memory."
My own take is pretty close to that conclusion.
There are enough replications of this basic effect that it seems probable that something
is going on. The most telling criticism of this literature is that the outcome measure is often a single task. You can't use a single task like the Raven's
and then declare that fluid intelligence has increased because NO task is a pure measure of fluid intelligence. There are always going to be other factors that contribute to task performance.
The best measure of an abstract construct like "fluid intelligence" is one that uses several measures of what look to be quite different tasks, but which you have reason to think all call on fluid intelligence. Then you use statistical methods to look for shared variance among the tasks.
So what we'd really like to see is better performance after working memory training on several such tasks. The fact is that in many of these studies, researchers have tried to show transfer to more than one task, and the training transfers to one, but not the other. Here's a table from a 2010 review by Torkel Klingberg showing this pattern. (Click the image to see a larger version.)
This work is really just getting going, and the inconsistency of the findings means one of two things. Either the training regimens need to be refined, whereupon we'll see the transfer effects more consistently, or the benefits we've seen thus far were mostly artifactual, a consequence of uninteresting quirks in the designs of the studies or the tasks.
My guess is that the truth lies somewhere between these two--there's something here, but less than many people are hoping. But it's too early to say with much confidence.
Most colleges have strict policies about student plagiarism, often including stringent penalties for those who violate the rules. (At the University of Virginia, where I teach, the penalty is expulsion
.) Yet infractions occur. Why?
My own intuition has been that plagiarism is often due to oversight or panic. A student will fall behind and, with a deadline looming, get sloppy in the writing of a paper: a few sentences or even a paragraph make their way into the student's paper without attribution. In the rush to finish, the student forgets about it, or decides it doesn't matter. Thomas Dee and Brian Jacob had a different idea.

Some data (e.g., Power, 2009) indicate that even college students are not very knowledgeable about what constitutes plagiarism and how to avoid it, and so many instances of plagiarism may actually be accidental. Given the stiff penalties, why don't students bone up on the rules? Dee & Jacob point out that this may be an instance of rational ignorance. That is, it's logical for students not to try to obtain better information about
plagiarism; the cost of learning this information is relatively high because the rules seem complex, and the payoff seems small because the odds of punishment for plagiarism are low.
Dee and Jacob's idea: reduce plagiarism by reducing the cost of learning what constitutes plagiarism. Their experiment included 1,256 papers written by 573 students in 28 humanities and social-science courses during one semester at a selective liberal arts college.
Half of the students were required to complete a "short but detailed interactive tutorial on understanding and avoiding plagiarism." The student papers were analyzed with plagiarism-detection software. In the control group, plagiarism was observed in 3.3 percent of papers. (Almost every instance was a matter of using sentences without attribution.) Students who had completed the tutorial had a plagiarism rate of about 1.3 percent.

Thus, a relatively simple and quite inexpensive intervention may be highly effective in reducing at least one variety of plagiarism. Replicating this finding in other types of coursework--science and mathematics--would be important, as would replication at other institutions, including less selective colleges and high schools. Even with those limitations, this is a promising start.

This paper was just published as:

Dee, T. S., & Jacob, B. A. (2012). Rational ignorance in education: A field experiment in student plagiarism. Journal of Human Resources, 47, 397-434. (I've linked to the NBER publication above because it's freely downloadable.)
Power, L. G. (2009). University students' perceptions of plagiarism. Journal of Higher Education, 80.
I admit that until a few years ago, learning that a school asked its students to meditate prompted me to roll my eyes. It struck me as faddish, something meant to appeal to parents rather than to help students. But I'm not sneering anymore.
The last five or ten years have seen a burgeoning research literature on the cognitive benefits of mindfulness meditation, the style of meditation in which one focuses one's thoughts on the present moment and adopts an open, non-judgmental attitude towards thoughts and sensations.
Most practitioners engage in mindfulness meditation for its effects on overall feelings of well-being. From a cognitive point of view, the daily practice in the management and control of attention might yield benefits for students, because this sort of attentional control is positively associated with academic outcomes. (An article I wrote on the topic can be found here.)
What do the data on meditation and attentional control look like?
The truth is that it’s a bit early to tell. A recent review (Chiesa, Calati, & Serretti, 2011) concluded that meditation training did lead to improvements in controlled attention, but the authors warned that the effects were inconsistent.

The results might be inconsistent because the benefits to attention only accrue after significant practice--more practice than volunteers are willing to engage in for the sake of a study. But even among long-time meditators, the benefits to attention are inconsistent.
Another possibility is that meditation doesn’t make attentional control any more effective, but it does make it less taxing, which might be consistent with reports of improved well-being. There are electrophysiological data (e.g., Moore, Gruber, Derose & Malinowski, 2012) indicating that meditation training leads to changes in how the brain deals with attentional challenges, and that these changes reflect easier, smoother processing.
Most of the work has been done with adults, not kids. At least a few studies have used mindfulness meditation interventions with kids of middle-school age, and the authors of these studies claim that kids can learn the practice (e.g., Wall, 2005).
So at this point, the benefits of mindfulness meditation are not clear enough to claim that there is scientific backing for the practice in schools, if the hoped-for benefit is academic. But this is a research literature worth keeping on the radar.
Chiesa, A., Calati, R., & Serretti, A. (2011). Does mindfulness training improve cognitive abilities? A systematic review of neuropsychological findings. Clinical Psychology Review, 31,
Moore, A., Gruber, T., Derose, J., & Malinowski, P. (2012). Regular, brief mindfulness meditation practice improves electrophysiological markers of attentional control. Frontiers in Human Neuroscience, 6,
Wall, R. B. (2005). Tai chi and mindfulness-based stress reduction in a Boston Public middle school. Journal of Pediatric Health Care, 19,