Daniel Willingham--Science & Education
Hypothesis non fingo

On the Definition of Learning....

6/26/2017

 
There was a brief, lively thread on Twitter over the weekend concerning the definition of learning. To tip my hand here at the outset: I think this debate, on Twitter and elsewhere, is a good example of the injunction that scientists ought not to worry overmuch about definitions. That might seem backwards: how can you study learning if you aren't even clear about what learning is? But from another perspective, don't we expect that a good definition of learning might be the result of research, rather than a prerequisite?

The Twitter thread began when Old Andrew asked whether John Hattie's definition (shown below) was not "really terrible."
[image: Hattie's definition of learning]
I'll first consider this definition (and one or two others) as our instincts would dictate they be considered. Then I'll suggest that's a bad way to think about definitions, and offer an alternative. 

Hattie's definition has two undesirable features. First, it entails a goal (transfer) and therefore implies that anything that doesn't entail the goal is not learning. This would be... weird. As Dylan Wiliam pointed out, it seems to imply that memorizing one's social security number is not an example of learning.

The second concern with Hattie's definition is that it entails a particular theoretical viewpoint: learning is first shallow, and later deep. It seems odd to include a theoretical perspective in a definition. Learning is the thing to be accounted for, and ought to be independent of any particular theory. If I'm trying to account for frog physiology, I'm trying to account for the frog and its properties, which have a reality independent of my theory.

The same issue applies to Kirschner, Sweller, and Clark's definition: "Learning is a change in long-term memory." The definition is fine in the context of particular theories that specify what long-term memory is and how it changes. Absent that, it invites the questions "what is long-term memory?" and "what prompts it to change?" My definition of learning then has no reality independent of the theory, and my description of the thing to be explained changes with the theory.

It's also worth noting that Kirschner et al.'s definition does not specify that the change in long-term memory must be long-lasting. So does that mean that a change lasting a few hours (as observed in repetition or semantic priming) qualifies? Nor does their definition specify that the change must lead to positive consequences. Does a change in long-term memory that results from Alzheimer's disease qualify as learning? How about a temporary change that's a consequence of transcranial magnetic stimulation?

I think interest in defining learning has always been low, and always for the same reason: it's a circular game. You offer a definition of learning; I come up with a counter-example that fits your definition but doesn't sit well with most people's intuitions about what "learning" means; you revise your definition; I pick on it again; and so on. That's what I've done in the last few paragraphs, and it's not obvious what's gained.

The fading of positivism in the 1950s reduced the perceived urgency (and for most, the perceived possibility) of precise definitions. The last well-regarded definition of learning was probably Greg Kimble's in his revision of Hilgard & Marquis’s Conditioning and Learning, written in 1961: “Learning is a relatively permanent change in a behavioral potentiality that occurs as a result of reinforced practice,” a formulation with its own problems.

Any residual interest in defining learning really faded in the 1980s when the scope of learning phenomena in humans was understood to be larger than anticipated, and even the project of delineating categories of learning turned out to be much more complicated than researchers had hoped. (My own take (with Kelly Goedert) on that categorization problem is here, published in 2001, about five years after people lost interest in the issue.)

I think the current status of "learning" is that it's defined (usually narrowly) in the context of specific theories or in the context of specific goals or projects. I think Kirschner et al. were offering a definition in the context of their theory. I think Hattie was offering a definition of learning for his vision of the purpose of schooling. I can't speak for these authors, but I suspect neither hoped to devise a definition that would serve a broader purpose, i.e., a definition that claims reality independent of any particular theory, set of goals, or assumptions. (Edit, 6-28-17: I heard from John Hattie, and he affirmed that he was not proposing a definition of learning for all theories/all contexts, but rather was talking about a useful way to think about learning in a school context.)

This is as it should be, and neither definition should be confused with an attempt at an all-purpose, all-perspectives, this-is-the-frog definition.

Discovery learning at the tribe level

5/9/2016

 
Humans show remarkable cultural diversity. Different groups (let's call them tribes) use different technologies, economic organizations, and political organizations; they hold different religious beliefs; and so on. Explanations of this diversity typically fall into one of two categories:
  1. Humans inhabit almost every corner of the globe. Diversity of tribes’ behavior is a product of environmental diversity and the fact that humans are so good at problem-solving. Different environments prompt different behaviors, but even when environments are similar people are so ingenious they come up with different solutions to the same environmental challenges.
  2. Diversity of behavior is due to cultural traditions. Sure, there are some environmental constraints on what I do—people living in the desert won’t fish—but diversity is so high because small changes are preserved with fidelity. People keep doing what the tribe has always done.
 
The first account predicts that local environmental conditions will determine tribes' behaviors. Two predictions may be drawn from the second account: (1) tribes that live spatially near one another will be more behaviorally similar; and (2) behaviors will persist across generations.
 
A recent study sought to test both predictions using an enormous dataset of 172 tribes in western North America.  The dataset records 297 behavioral variables (e.g., what people eat, their religious practices, family organization, and so on) and 133 variables concerning the environment (available flora & fauna, characteristics of soil, altitude, precipitation, etc.). All data represent practices and conditions at the time the tribe first encountered Europeans.
 
Spatial distance between tribes is simple enough to measure. The researchers used language phylogeny as a proxy for cultural phylogeny. Analysis of the similarity of languages yields an "evolutionary tree" of languages, so the distance between any two languages on the tree can be measured with a "most recent common ancestor" approach.
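As a sketch of how a "most recent common ancestor" distance works, here is a toy example. The tree and language names below are hypothetical placeholders; the actual study used real language phylogenies for western North America.

```python
# Toy sketch of a "most recent common ancestor" (MRCA) distance on a
# language tree. Tree and names are hypothetical, for illustration only.

# Each language points to its parent in the tree; None marks the root.
parent = {
    "proto": None,
    "branch_a": "proto", "branch_b": "proto",
    "lang_1": "branch_a", "lang_2": "branch_a",
    "lang_3": "branch_b",
}

def ancestors(lang):
    """Chain from a language up to the root, inclusive."""
    chain = []
    while lang is not None:
        chain.append(lang)
        lang = parent[lang]
    return chain

def mrca(a, b):
    """Most recent common ancestor of two languages."""
    seen = set(ancestors(a))
    return next(node for node in ancestors(b) if node in seen)

def phylo_distance(a, b):
    """Tree edges separating a and b, via their common ancestor."""
    m = mrca(a, b)
    return ancestors(a).index(m) + ancestors(b).index(m)

print(phylo_distance("lang_1", "lang_2"))  # sibling languages: 2
print(phylo_distance("lang_1", "lang_3"))  # related only via the root: 4
```

A smaller distance means a more recent common ancestor, i.e., closer cultural relatedness; the question is whether this distance predicts behavioral similarity better than spatial distance or ecological similarity does.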
 
The question of interest is which of three variables predicts whether two tribes show similar behaviors. If behavior is mostly a matter of smart individuals adapting to the local ecology, then tribes inhabiting similar terrain should behave similarly. But if learning is mostly social, then tribes that are physically and/or culturally close should be more likely to behave similarly.
 
The results showed that cultural history and ecology both affect everything…but cultural history generally has the stronger effect. Within cultural history, phylogeny mattered more than spatial distance.
 
This analysis is about group behaviors, things that most people in a tribe do. But the result showing the importance of social learning may hold a lesson for those of us who think about the education of individuals. It’s easy to be a little dazzled by the brilliance of the human mind, and to see most of cognitive development as the intrepid mind of the individual, exploring the environment like a little scientist.
 
That’s certainly the emphasis we get from many psychologists. Bandura’s social learning duly noted, the towering figure of Piaget puts the child’s individual discoveries at the center of learning.
 
When it comes to schooling, I sometimes sense a similar reverence for learning that is the product of an individual mind at work, over the mere copying of someone else’s solution. It’s true that you only get true invention/innovation from original thought. But it’s a whole lot quicker and more reliable to copy what others have done. That is probably why social learning seems to be the workhorse of cultural learning.

"Active learning" in college STEM courses--meta-analysis

6/20/2014

 
This column was originally published at RealClearEducation.com on May 20, 2014.

When you think of a college class, what image comes to mind? Probably a professor droning about economics, or biology, or something, in an auditorium with several hundred students. If you focus on the students in your mind’s eye, you’re probably imagining them looking bored and, if you’ve been in a college lecture hall recently, your image would include students shopping online and chatting with friends via social media while the oblivious professor lectures on. What could improve the learning and engagement of these students? According to a recent literature review, the results of which were reported by Science, Wired, PBS, and others, damn near anything.

Scott Freeman and his associates (Freeman et al., 2014) conducted a meta-analysis of 225 studies of college instruction that compared "traditional lecturing" with "active learning" in STEM courses. (STEM is an acronym for science, technology, engineering, and math.) Student performance on exams was about half a standard deviation higher in the active learning classes. Students in the traditional lecture classes were 1.5 times as likely to fail as students in the active learning classes.
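As a rough illustration of what those two summary statistics mean, here's a sketch with made-up numbers. The means, standard deviations, and failure rates below are hypothetical, chosen only to reproduce an effect of the reported size; they are not Freeman et al.'s data.

```python
import math

# Hypothetical exam scores (not Freeman et al.'s data)
n_lec, lec_mean, lec_sd = 100, 70.0, 10.0   # traditional lecture
n_act, act_mean, act_sd = 100, 75.0, 10.0   # active learning

# Cohen's d: difference in means over the pooled standard deviation
pooled_sd = math.sqrt(((n_lec - 1) * lec_sd**2 + (n_act - 1) * act_sd**2)
                      / (n_lec + n_act - 2))
cohens_d = (act_mean - lec_mean) / pooled_sd   # "half a standard deviation"

# Risk ratio of failing: P(fail | lecture) / P(fail | active learning)
fail_lecture, fail_active = 0.33, 0.22   # hypothetical failure rates
risk_ratio = fail_lecture / fail_active  # "1.5 times as likely to fail"

print(round(cohens_d, 2), round(risk_ratio, 2))
```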

Previous studies of college course interventions have been criticized on methodological grounds. For example, classes would experience either traditional lecture or active learning, but no effort would be made to evaluate whether the students were equivalently prepared when they started the class. Freeman et al. categorized the studies in their meta-analysis by methodological rigor and reported that the size of the benefit did not differ between studies of high and low quality.

That’s encouraging. What’s surprising is the breadth of the activities covered by the term “active learning” and how little we know about their differential effectiveness and why they work. According to the article, active learning “included approaches as diverse as occasional group problem-solving, worksheets or tutorials completed during class, use of personal response systems with or without peer instruction, and studio or workshop course designs.” The authors do not report on differential effectiveness of these methods.

In other words, in most of the studies summarized in the meta-analysis professors were still doing a whole lot of lecturing, but every now and then they would do something else. The “something else” ostensibly made students think about the course material, digest it in some way, generate a response. The authors certainly believe that that’s the source of the improvement, citing Piaget and Vygotsky as learning theorists who “challenge the traditional, instructor-focused, ‘teaching by telling’ approach.”

I'm ready to believe that that aspect of the activity was important (although not because of theory advanced by Piaget and Vygotsky nearly a century ago). But it would have been useful to evaluate the impact of an active control group: that is, a comparison class in which the professor is asked to do something new that does not entail active learning (e.g., asking the professor to show more videos). That's important because interventions typically prompt a change for the better. John Hattie estimates that interventions boost student learning by 0.3 standard deviations, on average.

The exact figures are not reported, but it appears that for most studies the lecture condition was business-as-usual, the thing that typically happens. An active control is important to guard against the possibility that students improve because the professor is energized by doing something different, or holds higher expectations for students because she expects the “something different” to prompt improvement. It’s also possible that asking the professor to make a change in her teaching actually improves her lectures because she reorganizes them to incorporate the change.

It may seem captious to harp on the “why.” To be clear, I think that focusing on making students mentally active while they learn is a wonderful idea, and an equally wonderful idea is giving instructors rules of thumb and classroom techniques that make it likely that students will think. But knowing the source of the improvement will allow individual instructors to tailor methods to their own teaching, rather than following instructions without knowing why they help. It will also help the field collectively move to greater improvement.

Perhaps the best news is that the effectiveness of college instruction is on people’s minds. This past winter I visited a prominent research university, and an old friend told me “I’ve been here twenty-five years, and I don’t think I heard undergraduate teaching mentioned more than twice. In the last two years, that’s all anybody talks about, all over campus.”

Amen.

References

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1319030111

Hattie, J. (2013). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Better studying = less studying. Wait, what?

7/8/2013

 
Readers of this blog probably know about "the testing effect," later rechristened "retrieval practice." It refers to the fact that trying to remember something can actually help cement things in memory more effectively than further study.

A prototypical experiment looks like this (rows = subject groups; columns = phases of the experiment).
[image: table of the experimental design]
The critical comparison is the test in Phase 3 of the experiment: those who take a test during Phase 2 do better than those who study more. There are lots of experiments replicating the effect and ruling out alternative explanations such as motivation (see Agarwal, Bain & Chamberlain, 2012, for a review).

A consistent finding is that the benefit to memory is larger if the test is harder. But of course if the test is harder, then people might be more likely to make mistakes on the test in Phase 2. And if you make mistakes, perhaps you will later remember those incorrect responses.

But data show that even if you get the answer wrong during Phase 2, you'll still see a testing benefit, so long as you get corrective feedback (Kornell, Hays & Bjork, 2009).

A tentative interpretation is that you get the benefit because the right answer is lurking in the background of your memory and is somewhat strengthened, even though you didn't produce it.

So that implies that the testing effect won't work if you simply don't know the answer at all. Suppose, for example, that I present you with an English vocabulary word you don't know and either (1) provide a definition that you read (2) ask you to make up a definition or (3) ask you to choose from among a couple of candidate definitions. In conditions 2 & 3 you obviously must simply guess. (And if you get it wrong I'll give you corrective feedback.) Will we see a testing effect?

That's what Rosalind Potts & David Shanks set out to find, and across four experiments the evidence is quite consistent. Yes, there is a testing effect. Subjects better remember the new definitions of English words when they first guess at what the meaning is--no matter how wild the guess.

Guessing by picking from amongst meanings provided by the experimenter provides no advantage over simply reading the definition. So there is something about generation in particular that seems crucial.
[figure] Results of four experiments in Potts & Shanks: performance on the final test. Error bars = standard errors.
What's behind this effect? Potts & Shanks think it might be attention. They suggest that you might pay more attention to the definition the experimenter provides when you've generated your own guess because you're more invested in the problem. Selecting one of the experimenter-provided definitions is too easy to provide this feeling of investment.

This account is speculation, obviously, and the authors don't pretend it's anything else. I wish that they were equally circumspect in their guess at the prospects for applying this finding in the classroom. Sure, it's an important piece of the overall puzzle, but I can't agree that "this line of research is relevant to any real world situation where novel information is to be learned, for example when learning concepts in science, economics, politics, philosophy, literary theory, or art."

The authors in fact cite two other studies that found no advantage for generating over reading, but Potts & Shanks offer an account of what made those studies less realistic (relative to classrooms) and what makes their own conditions more realistic. They may yet be proven right, but college students in a lab studying word definitions is still a far cry from "any real world situation where novel information is to be learned."

The today-the-classroom-tomorrow-the-world rhetoric is over the top, but it's an interesting finding that may, indeed, prove applicable in the future.

References:

Agarwal, P. K., Bain, P. M. & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24,  437-448.

Kornell, N., Hays, M. J., & Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 989-998.

Potts, R., & Shanks, D. R. (2013, July 1). The Benefit of Generating Errors During Learning. Journal of Experimental Psychology: General. Advance online publication. doi:10.1037/a0033194


What type of learning is most natural?

6/17/2013

 
Which of these learning situations strikes you as the most natural, the most authentic?

1) A child learns to play a video game by exploring it on his own.
2) A child learns to play a video game by watching a more experienced player.
3) A child learns to play a video game by being taught by a more experienced player.

In my experience, a lot of people take the first of these scenarios to be the most natural type of learning: we explore on our own. The third scenario has its place, but direct instruction from someone else seems a bit contrived by comparison.
I’ve never really agreed with this point of view, simply because I don’t much care about “naturalness” one way or the other. As long as learning is happening, I’m happy, and I think the value some people place on naturalness is a hangover from a bygone Romantic era, as I describe here.

Now a fascinating paper by Patrick Shafto and his colleagues (2012) (that’s actually on a rather different topic) leads to implications that call into doubt the idea that exploratory learning is especially natural or authentic.

The paper focuses on a rather profound problem in human learning. Think of the vast difference in knowledge between a newborn and a three-year-old: language, properties of physical objects, norms of social relations, and so on. How could children learn so much, so rapidly?

As you're doubtless aware, from the 1920s through the 1960s, children were viewed by psychologists as relatively passive learners of their environment. More recently, infants and toddlers have been likened to scientists; they don't just observe the environment, they reason about what they observe.

But it's not obvious that reasoning will get the learning done. In language, for example, the information available to the child seems ambiguous. If a child overhears an adult comment "huh, look at that dog," how is the child to know whether "dog" refers to the dog, to the paws of the dog, to running (which the dog happens to be doing), to any object moving from left to right, to any multi-colored object, etc.?

Much of the research on this problem has focused on the idea that there must be innate assumptions or biases on the part of children that help them make sense of their observations. For example, children might assume that new words they hear are more likely to name whole objects than properties of objects.

Many models using these principles have not attached much significance to the manner in which children encounter information. Information is information.

Shafto et al. point out why that's not true. They draw a distinction between three different cases with the following example. You’re in Paris, and want a good cup of coffee.

1) You walk into a cafe, order coffee, and hope for the best.
2) You see someone who you know lives in the neighborhood. You see her buying coffee at a particular cafe so you get yours there too.
3) You see someone you know lives in the neighborhood. You see her buying coffee at a particular cafe. She sees you observing her, looks at her cup, looks at you, and nods with a smile.

In the first case you acquire information on your own. There is no guiding principle behind this information acquisition; it is random, and learning where to find good coffee will be slow going with this method.

In the second scenario, we anticipate that the neighborhood denizen is more knowledgeable than we--she probably knows where to get good coffee. Finding good coffee ought to be much faster if we imitate someone more knowledgeable than we. At the same time, there could be other factors at work. For example, it's possible that she thinks the coffee in that cafe is terrible, but it's never crowded and she's in a rush that morning.

In the third scenario, that's highly unlikely. The woman is not only knowledgeable, she communicates with us; she knows what we want to know and she can tell us that the critical feature we care about is present. Unlike scenario #2,  the knowledgeable person is adjusting her actions to maximize our learning. 

More generally, Shafto et al. suggest that these cases represent three fundamentally different learning opportunities: learning from physical evidence, learning from the observation of goal-directed action, and learning from communication.

Shafto et al argue that although some learning theories assume that children acquire information at random, that's likely false much of the time. Kids are surrounded by people more knowledgeable than they. They can see, so to speak, where more knowledgeable people get their coffee.

Further, adults and older peers often adjust their behavior to make it easier for children to draw the right conclusion. Language is notable in its ambiguity: "dog" might refer to the object, its properties, or its actions. But more knowledgeable others often take into account what the child knows, and speak so as to maximize what the child can learn. If an adult asked "what's that?" I might say "It's Westphalian ham on brioche." If a toddler asked, I'd say "It's a sandwich."

One implication is that the problem I described (how do kids learn so much, so fast?) may not be quite as formidable as it first seemed, because the environment is not random: it has a higher proportion of highly instructive information. (The real point of the Shafto et al. paper is to introduce a Bayesian framework for integrating these three types of learning scenarios into models of learning.)
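To give that framework some flavor, here is a deliberately simplified Bayesian toy (not Shafto et al.'s actual model): a learner updates beliefs about which of three cafes serves good coffee, and the three evidence types differ only in how diagnostic we assume the observed choice to be. The probabilities are made up for illustration.

```python
# Simplified Bayesian toy, not Shafto et al.'s model. A learner infers
# which of three cafes serves good coffee, after evidence pointing at
# cafe 0. The three evidence types differ only in assumed diagnosticity.

def normalize(ps):
    total = sum(ps)
    return [p / total for p in ps]

def update(prior, observed, p_if_good, p_if_not):
    # Bayes' rule: posterior proportional to likelihood * prior
    likelihood = [p_if_good if i == observed else p_if_not
                  for i in range(len(prior))]
    return normalize([l * p for l, p in zip(likelihood, prior)])

prior = [1 / 3, 1 / 3, 1 / 3]

# (1) Physical evidence: your own random pick of cafe 0 is equally likely
# under every hypothesis, so the posterior stays flat.
physical = update(prior, 0, 0.5, 0.5)

# (2) Goal-directed action: a local buys at cafe 0; probably, but not
# certainly, because the coffee is good (she might just be in a rush).
action = update(prior, 0, 0.8, 0.2)

# (3) Communication: she deliberately signals to you, a near-certain cue.
communication = update(prior, 0, 0.99, 0.01)

print(physical, action, communication)
```

The point of the sketch is the ordering: the same observation moves the learner's beliefs not at all, somewhat, or almost completely, depending on whether it's random physical evidence, goal-directed action, or deliberate communication.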

The second implication is this: when a more knowledgeable person not only provides information but tunes the communication to the knowledge of the learner, that is, in an important sense, teaching.

So whatever value you attach to “naturalness,” bear in mind that much of what children learn in their early years of life may not be the product of unaided exploration of their environment, but may instead be the consequence of teaching. Teaching might be considered a quite natural state of affairs.

EDIT: Thanks to Pat Shafto who pointed out a paper (Csibra & Gergely) that draws out some of the "naturalness" implications re: social communication. 

Reference
Shafto, P., Goodman, N. D., & Frank, M. C. (2012). Learning from others: The consequences of psychological reasoning for human learning. Perspectives on Psychological Science, 7, 341-351.

Testing helps maintain attention, reduce stress in online learning

4/8/2013

 
A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain & Chamberlain, 2012) and it seems to lead not just to better memory but deeper memory that supports transfer (e.g., McDaniel et al, 2013; Rohrer et al, 2010). 

("Retrieval practice" is less catchy than the initial name--testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or some other reason and because "testing effect" led some observers to throw up their hands and say "do we really need more tests?")

Now researchers (Szpunar, Khan, & Schacter, 2013) have reported testing as a potentially powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.

In this experiment, subjects were asked to learn from a 21-minute video lecture on statistics. They were also told that the lecture would be divided into four parts, separated by breaks. During each break they would perform math problems for a minute, and then would either do more math problems for two more minutes (the "untested" group), be quizzed for two minutes on the material they had just learned (the "tested" group), or review by seeing the quiz questions with the answers provided (the "restudy" group).

Subjects were told that whether or not they were quizzed would be randomly determined for each segment; in fact, the same thing happened for a given subject after each segment, except that all subjects were tested after the fourth segment.

So note that all subjects had reason to think that they might be tested at any time.

There were a few interesting findings. First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off: the tested subjects outperformed the restudy and untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve wracking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And in fact performed better: tested = 90%, restudy = 76%, nontested = 68%).

We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the attentional benefit of testing will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start on a difficult problem.

References

Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24,  437-448.

McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology. Published online Feb. 25

Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36, 233-239.

Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, published online April 1, 2013. doi: 10.1073/pnas.122176411

Cone of learning or cone of shame?

2/25/2013

 
A math teacher and Twitter friend from Scotland asked me about this figure.
I'm sure you've seen a figure like this. It is variously called the "learning pyramid," the "cone of learning," "the cone of experience," and others. It's often attributed to the National Training Laboratory, or to educator Edgar Dale.

You won't be surprised to learn that there are different versions out there, with different percentages and some minor variations in the ordering of activities.

Certainly, some mental activities are better for learning than others. And the ordering offered here doesn't seem crazy. Most people who have taught agree that long-term contemplation of how to help others understand complicated ideas is a marvelous way to improve one's own understanding of those ideas--certainly better than just reading them--although the estimate of 10% retention of what one reads seems kind of low, doesn't it?

If you enter "cone of experience" in Google scholar the first page offers a few papers that critique the idea, e.g., this one and this one, but you'll also see papers that cite it as if it's reliable.

It's not.

So many variables affect memory retrieval that you can't assign specific percentages of recall without specifying many more of them:
  • what material is recalled (gazing out the window of a car is an audiovisual experience just like watching an action movie, but your memory for these two audiovisual experiences will not be equivalent)
  • the age of the subjects
  • the delay between study and test (obviously, the percent recalled usually drops with delay)
  • what subjects were instructed to do as they read, demonstrated, taught, etc. (you can boost memory considerably for a reading task by asking subjects to summarize as they read)
  • how memory was tested (percent recalled is almost always much higher for recognition tests than for recall)
  • what subjects know about the to-be-remembered material (if you already know something about the subject, memory will be much better)
This is just an off-the-top-of-my-head list of factors that affect memory retrieval. These factors not only make it clear that the percentages suggested by the cone can't be counted on, but also that the ordering of the activities could shift, depending on the specifics.

The cone of learning may not be reliable, but that doesn't mean that memory researchers have nothing to offer educators. For example, a monograph published in January offers an extensive review of the experimental research on different study techniques. If you prefer something briefer, I'm ready to stand by the one-sentence summary I suggested in Why Don't Students Like School?: it's usually a good bet to try to think about material at study in the same way that you anticipate you will need to think about it later.

And while I'm flacking my books, I'll mention that When Can You Trust the Experts? was written to help you evaluate the research basis of educational claims, cone-shaped or otherwise.

How to Get Students to Sleep More

12/12/2012

 
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.

What are the cognitive consequences of sleep deprivation?

It seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning: sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though surprisingly, that effect seems to be smaller or absent in young children).

Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive or inattentive. Although this sounds like ADHD, lab studies of attention show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods of time when at home or at school and may be less motivated to do so in any event.

Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)

Although these effects are reasonably well established, the cognitive cost of sleep deprivation is less widespread and statistically smaller than I would have guessed. That may be because such effects are difficult to test experimentally. You have two choices, both with drawbacks:

1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.

2) you can do a proper experiment by having students sleep less than they usually would, and see if their cognitive performance goes down as a consequence. But it's unethical to deprive students of significant sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here--we think it's months or years of milder deprivation.
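The correlational approach in option 1 boils down to computing an association between hours of sleep and some cognitive or academic measure. Here is a minimal sketch; the data are invented for illustration, and a real study would control for the confounds listed above.

```python
# Sketch of the correlational approach: correlate self-reported nightly
# sleep with a cognitive measure. All data below are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented sample: average hours of sleep per night, and a test score (0-100)
sleep_hours = [6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0]
test_scores = [68, 70, 74, 73, 78, 80, 82]

r = pearson_r(sleep_hours, test_scores)
print(f"r = {r:.2f}")
```

Even a strong positive r here would be exactly the "usual problem": it shows an association, not that more sleep causes better scores.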

So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. Given that concern, can anything be done to get teens to sleep more?

Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)

Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that he or she ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student gaming at midnight who tells you "I'm playing games because I'm not sleepy" could be mistaken. It could be that he's not sleepy because he's playing games. Good cues would be a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.

So yes, this seems to be a case where good ol' common sense jibes with data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.

Ulric Neisser has died

2/26/2012

 
Ulric “Dick” Neisser has passed away at age 83.

Neisser is sometimes called the father of cognitive psychology due to a book he published in 1967, titled Cognitive Psychology. The field was already well under way by that date, but Cognitive Psychology did much to make the theoretical foundations and the experimental framework explicit. That served both to define the field, and to help train new students. (It's less often mentioned that Neisser repudiated this framework in a 1976 book, Cognition and Reality, in which he adopted a more Gibsonian view of perception.)

Neisser was not just a theoretician, but a gifted experimentalist. Among other important findings, he conducted an experiment showing that people focusing on a complex video scene failed to notice a woman with an open umbrella traverse the scene, anticipating Simons & Chabris's now-famous gorilla video. In memory research, Neisser did important work showing that "flashbulb" memories, although held with great confidence, are not terribly accurate.

Neisser spent most of his career at Cornell, and died in Ithaca.
