Readers of this blog probably know about "the testing effect," later rechristened "retrieval practice." It refers to the fact that trying to remember something can actually help cement things in memory more effectively than further study.
A prototypical experiment looks like this (rows = subject groups; columns = phases of the experiment).
The critical comparison is the test in Phase 3: those who take a test during Phase 2 do better than those who study again. There are lots of experiments replicating the effect and ruling out alternative explanations such as motivation (see Agarwal, Bain & Chamberlain, 2012, for a review).
A consistent finding is that the benefit to memory is larger if the test is harder. But of course if the test is harder, then people might be more likely to make mistakes on the test in Phase 2. And if you make mistakes, perhaps you will later remember those incorrect responses.
But data show that, even if you get the answer wrong during Phase 2, you'll still see a testing benefit so long as you get corrective feedback (Kornell, Hays & Bjork, 2009).
A tentative interpretation is that you get the benefit because the right answer is lurking in the background of your memory and is somewhat strengthened, even though you didn't produce it.
So that implies that the testing effect won't work if you simply don't know the answer at all. Suppose, for example, that I present you with an English vocabulary word you don't know and either (1) provide a definition for you to read, (2) ask you to make up a definition, or (3) ask you to choose from among a couple of candidate definitions. In conditions 2 and 3 you obviously must simply guess. (And if you get it wrong, I'll give you corrective feedback.) Will we see a testing effect?
That's what Rosalind Potts & David Shanks set out to find, and across four experiments the evidence is quite consistent. Yes, there is a testing effect. Subjects better remember the new definitions of English words when they first guess at what the meaning is--no matter how wild the guess.
Guessing by picking from amongst meanings provided by the experimenter provides no advantage over simply reading the definition. So there is something about the generation in particular that seems crucial.
Results of four experiments in Potts & Shanks, performance on final test. Error bars = standard errors.
What's behind this effect? Potts & Shanks think it might be attention. They suggest that you might pay more attention to the definition the experimenter provides when you've generated your own guess because you're more invested in the problem. Selecting one of the experimenter-provided definitions is too easy to provide this feeling of investment.
This account is speculation, obviously, and the authors don't pretend it's anything else. I wish that they were equally circumspect in their guess at the prospects for applying this finding in the classroom. Sure, it's an important piece of the overall puzzle, but I can't agree that "this line of research is relevant to any real world situation where novel information is to be learned, for example when learning concepts in science, economics, politics, philosophy, literary theory, or art."
The authors in fact cite two other studies that found no advantage for generating over reading, but Potts & Shanks offer an account of what made those studies unrealistic (relative to classrooms) and what makes their own conditions more realistic. They may yet be proven right, but college students in a lab studying word definitions is still a far cry from "any real world situation where novel information is to be learned."
The today-the-classroom-tomorrow-the-world rhetoric is over the top, but it's an interesting finding that may, indeed, prove applicable in the future.
Agarwal, P. K., Bain, P. M. & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
Kornell, N., Hays, M. J., & Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 989-998.
Potts, R., & Shanks, D. R. (2013, July 1). The Benefit of Generating Errors During Learning. Journal of Experimental Psychology: General. Advance online publication. doi:10.1037/a0033194
Which of these learning situations strikes you as the most natural, the most authentic?
1) A child learns to play a video game by exploring it on his own.
2) A child learns to play a video game by watching a more experienced player.
3) A child learns to play a video game by being taught by a more experienced player.
In my experience a lot of people take the first of these scenarios to be the most natural type of learning: we explore on our own. The third scenario has its place, but direct instruction from someone seems a bit contrived compared to learning from our own experience.
I’ve never really agreed with this point of view, simply because I don’t much care about “naturalness” one way or the other. As long as learning is happening, I’m happy, and I think the value some people place on naturalness is a hangover from a bygone Romantic era, as I describe here.
Now a fascinating paper by Patrick Shafto and his colleagues (2012) (which is actually on a rather different topic) has implications that call into doubt the idea that exploratory learning is especially natural or authentic.
The paper focuses on a rather profound problem in human learning. Think of the vast difference in knowledge between a newborn and a three-year-old: language, the properties of physical objects, the norms of social relations, and so on. How could children learn so much, so rapidly?
As you're doubtless aware, from the 1920s through the 1960s, children were viewed by psychologists as relatively passive learners of their environment. More recently, infants and toddlers have been likened to scientists; they don't just observe the environment, they reason about what they observe.
But it's not obvious that reasoning will get the learning done. In language, for example, the information available to children seems ambiguous. If a child overhears an adult comment, "Huh, look at that dog," how is the child to know whether "dog" refers to the dog, to the dog's paws, to running (which the dog happens to be doing), to any object moving from left to right, to any multi-colored object, etc.?
Much of the research on this problem has focused on the idea that there must be innate assumptions or biases on the part of children that help them make sense of their observations. For example, children might assume that new words they hear are more likely to apply to nouns than to adjectives.
Many models using these principles have not attached much significance to the manner in which children encounter information. Information is information.
Shafto et al. point out why that's not true. They draw a distinction between three different cases with the following example. You’re in Paris, and want a good cup of coffee.
1) You walk into a cafe, order coffee, and hope for the best.
2) You see someone who you know lives in the neighborhood. You see her buying coffee at a particular cafe so you get yours there too.
3) You see someone you know lives in the neighborhood. You see her buying coffee at a particular cafe. She sees you observing her, looks at her cup, looks at you, and nods with a smile.
In the first case you acquire information on your own. There is no guiding principle behind this information acquisition; it is random, and learning where to find good coffee will be slow going with this method.
In the second scenario, we anticipate that the neighborhood denizen is more knowledgeable than we--she probably knows where to get good coffee. Finding good coffee ought to be much faster if we imitate someone more knowledgeable than we. At the same time, there could be other factors at work. For example, it's possible that she thinks the coffee in that cafe is terrible, but it's never crowded and she's in a rush that morning.
In the third scenario, that's highly unlikely. The woman is not only knowledgeable, she communicates with us; she knows what we want to know, and she can tell us that the critical feature we care about is present. Unlike scenario #2, the knowledgeable person is adjusting her actions to maximize our learning.
More generally, Shafto et al. suggest that these cases represent three fundamentally different learning opportunities: learning from physical evidence, learning from the observation of goal-directed action, and learning from communication.
Shafto et al. argue that although some learning theories assume that children acquire information at random, that's likely false much of the time. Kids are surrounded by people more knowledgeable than they are. They can see, so to speak, where the more knowledgeable people get their coffee.
Further, adults and older peers often adjust their behavior to make it easier for children to draw the right conclusion. Language is notable in its ambiguity ("dog" might refer to the object, its properties, its actions), but more knowledgeable others often take into account what the child knows and speak so as to maximize what the child can learn. If an adult asked "What's that?" I might say, "It's Westphalian ham on brioche." If a toddler asked, I'd say, "It's a sandwich."
One implication is that the problem I described (how do kids learn so much, so fast?) may not be quite as formidable as it first seemed, because the environment is not random; it has a higher proportion of highly instructive information. (The real point of the Shafto et al. paper is to introduce a Bayesian framework for integrating these three types of learning scenarios into models of learning.)
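To give a feel for why the three scenarios differ, here is a toy sketch of Bayesian updating over the coffee example. This is not Shafto et al.'s actual model; the likelihood values are made-up assumptions chosen only to illustrate that an observed choice is more diagnostic when the chooser is knowledgeable, and more diagnostic still when she is deliberately communicating.

```python
# Toy illustration (NOT Shafto et al.'s model): how much does seeing
# someone choose cafe 0 tell us about which cafe has good coffee?

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

def posterior(prior, likelihood):
    # Bayes' rule: P(hypothesis | data) is proportional to
    # P(data | hypothesis) * P(hypothesis)
    return normalize([p * l for p, l in zip(prior, likelihood)])

# Prior: each of three cafes is equally likely to be the good one.
prior = [1/3, 1/3, 1/3]

# Hypothetical likelihoods of the observed choice (cafe 0) under each
# hypothesis about which cafe is good:
random_walker = [1/3, 1/3, 1/3]     # choice carries no information
knowledgeable = [0.8, 0.1, 0.1]     # local probably picks the good cafe
communicative = [0.95, 0.025, 0.025]  # she chooses in order to inform us

print(posterior(prior, random_walker))   # stays uniform
print(posterior(prior, knowledgeable))   # shifts toward cafe 0
print(posterior(prior, communicative))   # shifts even more strongly
```

The point of the sketch: identical physical evidence (a person entering a cafe) supports very different inferences depending on the learner's assumptions about why the evidence was generated.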
The second implication is this: when a more knowledgeable person not only provides information but tunes the communication to the knowledge of the learner, that is, in an important sense, teaching.
So whatever value you attach to “naturalness,” bear in mind that much of what children learn in their early years may not be the product of unaided exploration of their environment, but may instead be the consequence of teaching. Teaching might be considered a quite natural state of affairs.

EDIT: Thanks to Pat Shafto, who pointed out a paper (Csibra & Gergely) that draws out some of the "naturalness" implications regarding social communication.

Reference: Shafto, P., Goodman, N. D., & Frank, M. C. (2012). Learning from others: The consequences of psychological reasoning for human learning. Perspectives on Psychological Science, 7.
A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain & Chamberlain, 2012), and it seems to lead not just to better memory but to deeper memory that supports transfer (e.g., McDaniel et al., 2013; Rohrer et al., 2010).
("Retrieval practice" is less catchy than the initial name, the testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or for some other reason, and because "testing effect" led some observers to throw up their hands and say, "Do we really need more tests?")

Now researchers (Szpunar, Khan, & Schacter, 2013) have reported that testing may be a powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.

In this experiment, subjects were asked to learn from a 21-minute video lecture on statistics. They were told that the lecture would be divided into four parts, separated by breaks. During each break they would perform math problems for a minute, and then would either do more math problems for two more minutes ("untested group"), be quizzed for two minutes on the material they had just learned ("tested group"), or review by seeing the questions with the answers provided ("restudy group").

Subjects were told that whether or not they were quizzed would be randomly determined for each segment; in fact, each subject received the same treatment after every segment, except that all subjects were tested after the fourth segment. So note that all subjects had reason to think they might be tested at any time. There were a few interesting findings.
First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off--the tested subjects outperformed the restudy and the untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve wracking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And in fact performed better: tested = 90%, restudy = 76%, nontested = 68%).
We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the benefit to attention will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start of an answer to a difficult problem.
Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology. Published online Feb. 25
Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36, 233-239.
Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, published online April 1, 2013. doi:10.1073/pnas.122176411
A math teacher and Twitter friend from Scotland asked me about this figure.
I'm sure you've seen a figure like this. It is variously called the "learning pyramid," the "cone of learning," the "cone of experience," and other names. It's often attributed to the National Training Laboratories or to educator Edgar Dale.
You won't be surprised to learn that there are different versions out there, with different percentages and some minor variations in the ordering of activities.

Certainly, some mental activities are better for learning than others, and the ordering offered here doesn't seem crazy. Most people who have taught agree that long-term contemplation of how to help others understand complicated ideas is a marvelous way to improve one's own understanding of those ideas--certainly better than just reading them--although the estimate of 10% retention of what one reads seems kind of low, doesn't it?
If you enter "cone of experience" in Google Scholar, the first page offers a few papers that critique the idea (e.g., this one and this one), but you'll also see papers that cite it as if it's reliable. It's not. So many variables affect memory retrieval that you can't assign specific percentages of recall without specifying many more of them:
- what material is recalled (gazing out the window of a car is an audiovisual experience just like watching an action movie, but your memory for these two audiovisual experiences will not be equivalent)
- the age of the subjects
- the delay between study and test (obviously, the percent recalled usually drops with delay)
- what were subjects instructed to do as they read, demonstrated, taught, etc. (you can boost memory considerably for a reading task by asking subjects to summarize as they read)
- how was memory tested (percent recalled is almost always much higher for recognition tests than recall).
- what subjects know about the to-be-remembered material (if you already know something about the subject, memory will be much better).
This is just an off-the-top-of-my-head list of factors that affect memory retrieval. They make it clear not only that the percentages suggested by the cone can't be counted on, but that the ordering of the activities could shift, depending on the specifics.

The cone of learning may not be reliable, but that doesn't mean that memory researchers have nothing to offer educators. For example, a monograph published in January offers an extensive review of the experimental research on different study techniques. If you prefer something briefer, I'm ready to stand by the one-sentence summary I suggested in Why Don't Students Like School?: It's usually a good bet to try to think about material at study in the same way that you anticipate you will need to think about it later.

And while I'm flacking my books, I'll mention that When Can You Trust the Experts? was written to help you evaluate the research basis of educational claims, cone-shaped or otherwise.
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.
What are the cognitive consequences of sleep deprivation?
It seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning: sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though, surprisingly, that effect seems to be smaller or absent in young children).
Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive or inattentive. Although this sounds like ADHD, lab studies of attention show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods of time when at home or at school and may be less motivated to do so in any event.
Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)
Although these effects are reasonably well established, the cognitive costs of sleep deprivation are less widespread and statistically smaller than I would have guessed. That may be because these effects are difficult to test experimentally. You have two choices, both with drawbacks:
1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.
2) you can do a proper experiment by having students sleep less than they usually would and see whether their cognitive performance drops as a consequence. But it's unethical to deprive students of significant sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here; we think it's months or years of milder deprivation.
So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. With that concern in mind, can anything be done to get teens to sleep more?
Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)
Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that they ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student who is gaming at midnight and tells you "I'm playing games because I'm not sleepy" could be mistaken: it could be that he's not sleepy because he's playing games. Good cues would come from a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.
So yes, this seems to be a case where good ol' common sense jibes with data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.
Ulric “Dick” Neisser has passed away at age 83.
Neisser is sometimes called the father of cognitive psychology due to a book he published in 1967, titled Cognitive Psychology.
The field was already well under way by that date, but Cognitive Psychology did much to make the theoretical foundations and the experimental framework explicit. That served both to define the field and to help train new students. (It's less often mentioned that Neisser repudiated this framework in a 1976 book, Cognition and Reality, in which he adopted a more Gibsonian view of perception.)
Neisser was not just a theoretician but a gifted experimentalist. Among other important findings, he conducted an experiment showing that people focusing on a complex video scene failed to notice a woman with an open umbrella traversing the scene, anticipating Simons & Chabris's now-famous gorilla video. In memory research, Neisser did important work showing that “flashbulb” memories, although held with great confidence, are not terribly accurate.

Neisser spent most of his career at Cornell, and died in Ithaca.