Daniel Willingham--Science & Education
Hypothesis non fingo

How to make edu-blogging less boring

7/30/2013

 
I read a lot of blogs. I only comment when I think I have something to add (which is rare, even on my own blog), but I read a lot of them.

Today, I offer a plea and a suggestion for making education blogs less boring, specifically on the subject of standardized testing.

I begin with two propositions about human behavior:
  • Proposition 1: If you provide incentives for X, people are more likely to do what they think will help them get X. They may even attempt to get X through means that are counterproductive.
  • Proposition 2: If we use procedure Z to change Y in order to make it more like Y’, we need to measure Y in order to know whether procedure Z is working. We have to be able to differentiate Y and Y’.

A lot of blog posts on the subject of testing are boring because authors pretend that one of these propositions is false or irrelevant.

On Proposition 1: Standardized tests typically gain validity by showing that scores are associated with some outcome you care about. You seldom care about the items on the test specifically. You care about what they signify. Sometimes tests have face validity, meaning test items look like they test what they are meant to test—a purported history test asks questions about history, for example. Often they don’t, but the test is still valid. A well-constructed vocabulary test can give you a pretty good idea of someone’s IQ, for example.

Just as body temperature is a reliable, partial indicator of certain types of disease, a test score is a reliable, partial indicator of certain types of school outcomes. But in most circumstances your primary goal is not a normal body temperature; it’s that the body is healthy, in which case body temperature will be normal as a natural consequence of the healthy state.
[Image caption: Bloggers ignoring basic propositions about human behavior? What's up with that?]
If you attach stakes to the outcome, you can’t be surprised if some people treat the test as something different from that. They focus on getting body temperature to 98.6, whatever the health of the patient. That’s Proposition 1 at work. If a school board lets an administrator know that test scores had better go up or she can start looking for another job. . . well, what would you do in those circumstances? So you get test-prep frenzy. These are social consequences of tests as typically used.

On Proposition 2: Some form of assessment is necessary. Without it, you have no idea how things are going. You won’t find many defenders of No Child Left Behind, but one thing we should remember is that the required testing did expose a number of schools—mostly ones serving disadvantaged children—where students were performing very poorly. And assessments have to be meaningful, i.e., reliable and valid. Portfolio assessments, for example, sound nice, but there are terrible problems with reliability and validity. It’s very difficult to get them to do what they are meant to do.

So here’s my plea. Admit that both Proposition 1 and Proposition 2 are true, and that both apply to testing children in schools.

People who are angry about the unintended social consequences of standardized testing have a legitimate point. They are not all apologists for lazy teachers or advocates of the status quo. Calling for high-stakes testing while taking no account of these social consequences, offering no solution to the problem . . . that's boring.

People who insist on standardized assessments have a legitimate point. They are not all corporate stooges and teacher-haters. Deriding “bubble sheet” testing while offering no viable alternative method of assessment . . . that's boring.

Naturally, the real goal is not to entertain me with more interesting blog posts. The goal is to move the conversation forward. The landscape will likely change consequentially in the next two years. This is the time to have substantive conversations.
Douglas Hainline
7/30/2013 12:13:09 am

This is very well put, especially the example used to illustrate the principles.

It's sad that such obvious common sense has to be insisted on, but it does.

There is a problem here, which deserves further discussion, namely, if we give children proper tests, and these tests show that they are doing badly, then it would seem to follow that there will be bad consequences for their teachers and those who supervise them. Therefore, 'teaching to the test', and cheating (directly and via the various ways that one can manipulate statistics) would seem to be an unavoidable consequence of real testing.

How can we get around this?

Matthew Levey
7/30/2013 01:16:40 am

You are, as ever, thoughtful and reasonable.

I am concerned that media coverage, of education and in general, focuses on conflict. Participants are encouraged to heighten the sense of conflict to gain further publicity. Thus the situation you describe.

It's a sad and vicious circle but I think the vast majority of bloggers just reflect what is happening in the broader public environment.

Jose Vilson
7/30/2013 02:41:50 am

Right. One of the big things I find interesting is the argument that teachers against high-stakes testing are against assessment overall. That's a ridiculous argument, and one most of my colleagues should be quick to squash. How else will we know what students know if we don't assess? The problems lie in a) what actually gets assessed, b) what type of assessment gets prioritized and c) what we use the assessment for.

You've made the arguments rather clear here, and I appreciate that. We just need to make sure that we keep our eye on the ball here. Most teachers I know didn't have a problem with testing, but once it became high-stakes, it's an animal all its own. We need to find an adequate balance here.

John Hetts
7/30/2013 04:00:45 am

Fantastic and thought-provoking piece! Here's hoping it gets wide readership because I too would like to see those changes in edu-blogging - very useful framework for the average reader/edu-blogger and the apropos insertion of the SNL sketch was pitch perfect (nearly induced a spit-take with the morning dose of caffeine).

Two things I might add are that:

1) The focus/emphasis on standardized testing has led to the assumption by some/many that methods of measurement that appear face-valid are actually valid indicators of the outcomes we care about in education. Rather than being treated as a method of understanding with certain flaws (as with any measurement), these tests can too often be treated/represented as an unfettered/direct view into students' brains to view their ability, rather than a reconstruction of a construct viewed through a particular lens. This can lead to meaningful distortions in the conversation about what they might and might not tell us about student learning and achievement.

2) Some take that a step further and have come to see/treat performance on the test as the actual outcome of education rather than as a proxy for the outcomes we're trying to reach, failing to recognize that performance on standardized testing, while a potentially meaningful predictor of some kinds of performance, is not actual performance. (Though some of this is a practical result of the incentive structure you flag.)

The consequence is that edu-bloggers and others may ignore/dramatically underweight the importance of other predictors of the outcomes we care about, even ones that in many cases are more powerful, including very obvious ones.

It's worth noting that standardized test incentives distort students' behavior as well - admission into advanced math and English tracks in mid-to-late elementary school as well as GATE, honors, AP, and IB programs often relies on earlier standardized testing, many scholarships rely on standardized tests as a large component of eligibility, and admissions to four-year colleges and placement at community colleges are highly predicated on standardized assessment. As a result, it clearly and meaningfully distorts the learning activity of many, many students toward maximizing test performance. The business models of a number of national companies and an enormous number of small businesses are entirely based on taking advantage of this distortion and the mismatch between the outcomes education seeks to influence and the outcomes that we use to measure the influence and allocate available rewards for good performance.

Thanks again for a great piece and a fantastic blog.

Dan Willingham
8/1/2013 03:31:46 am

and on your pt. #1 above, I think there is sometimes lip-service paid to the importance of other outcomes, but high test scores are equated, one-for-one, with "a good school."

Peter Ford
7/30/2013 04:03:09 am

I want to amplify what Jose Vilson said: when tests got into the hands of the edu-bureaucrats and pols they became the monster many professional, dedicated educators challenge.
I've rarely seen a 'high-stakes' test tell me something about my students my own assessments didn't. As a math teacher I don't like bubble tests because too many students guess (even though 'good' guessing requires some knowledge & skill). There are alternatives, but as always there are trade-offs: money and time. For example, if an ATM can read the handwriting on a deposited check, we have the technology for students to take tests with short answers in their own handwriting. Spending the money, time, and finding the willingness to do that is part of the future conversation.

Dan Willingham
8/1/2013 03:33:54 am

Peter I have less faith than you do in machine-scoring of prose. Handwriting recognition is one thing. . . knowing what students are actually trying to say is something else.

Scott McLeod
7/30/2013 08:42:06 am

Dan, I'm going to leave you a comment here because the power of blogs (and, indeed, social media writ large) is that they're read-write, interactive spaces, not read-only. I know that bloggers always, always appreciate the comments that they get so please feel free to pat yourself on the back every time you leave a comment for someone else!

I'm going to concur with José as well. I know few educators who are against testing and 'accountability' as long as they are done in ways that are reasonable and fair. Unfortunately, much of what has happened over the past dozen years has been anything but. When politicians and ideologically-driven advocacy groups show that they are completely dismissive of research, data, evidence, societal ramifications, educational consequences, reality, etc. of their policies and proposals, then the scorn, outcry, and "boring" attacks are well-deserved. Great harm is being done to students, educators, and communities in the name of 'accountability' (or, in some places, in the name of political talking points and/or cronyism). Personally I'm glad that many are using their voices to speak out against the injustices that are occurring.

I'll also note that we have plenty of alternatives that have been offered, over and over again, to counteract our current over-reliance on - and unfounded belief in - the 'magic' of bubble sheet test scores. Such alternatives include portfolios, embedded assessments, essays, performance assessments, public exhibitions, greater use of formative assessments (in the sense of Black & Wiliam, not benchmark testing) instead of summative assessments, and so on. There also have been ongoing pleas to look at the assessment practices of high-performing countries, which often do things very differently than we do. We know how to do assessment better than low-level, fixed-response items. We just don't want to pay for it...

Roger Sweeny
7/31/2013 05:18:14 am

You speak of "unfounded belief in - the 'magic' of bubble sheet test scores" and suggest that Americans "look at the assessment practices of high-performing countries."

How are you determining what are "high-performing countries?" I hope not with scores on things like PISA or TIMSS. As I recall, they are "bubble sheet tests."

I'm also interested to know how you would respond to Dan's statement in the original post, "Portfolio assessments, for example, sound nice, but there are terrible problems with reliability and validity. It’s very difficult to get them to do what they are meant to do."

Scott McLeod
7/31/2013 10:30:04 am

Hi Roger,

I used the term 'high-performing countries' because that's the term that policymakers use. I agree that those countries usually are determined by performance on bubble tests. Yet their success on those international-level bubble tests often does not equate to internal overuse of bubble tests. Sometimes it does, but often not. The USA seems to be the most zealous on this front...

I'm not an expert on the psychometric properties and validation of more open-ended assessments such as portfolios, exhibitions, and performances and am appreciative of the difficulties of inter-rater reliability. Perhaps the goal itself is part of the problem? In other words, should we even be trying to turn displays of mastery that emphasize uniqueness and divergence into convergent ratings and scores? What score should we give the Mona Lisa? And what would the 'objective' rating criteria be?

Roger Sweeny
7/31/2013 11:48:30 am

Any organization has to decide what it is trying to achieve, and then come up with ways to determine whether it has succeeded.

One of the problems of K-12 is that there is no general agreement on just what success is. What is the purpose of K-12?

Is the purpose to develop a certain level of factual knowledge and academic skills ("the standards") in most of the students? Then, it fails--badly.

Is the purpose to identify the 20% who will do well in college and professional school? Then, it does a pretty good job.

Is the purpose to provide most every 5-18 year old with 180 days a year of high quality day care? Again, in that case, it does a pretty good job.

Dan Willingham
8/1/2013 03:43:59 am

Scott
I don't think money is the problem. These alternatives are not, to my knowledge, reliable or valid, with the exception of essays.

Darin Schmidt
8/3/2013 03:12:48 am

As an arts educator, I have found the combination of high-quality rubrics and training for inter-rater reliability to be very effective. I think the problem of grading writing is similar in many ways to the problem of grading art.

jean sanders
8/22/2013 09:56:27 pm

Reliable and valid; those are the important words. When we have teachers read essays we look for inter-rater reliability. With portfolios it is difficult (and costly) to get inter-rater reliability. There are also consequential validity and predictive validity. A big problem with the current hype is the claim that the tests show a child to be "job-ready"; this the test cannot do, so it has no predictive validity about the future job.... so it is a lie to parents. It places the blame on a child/parent in an environment where there aren't any jobs (they are going outside the country, etc.). We need to stop the lies.

Ava Arsaga
7/30/2013 11:11:16 am

Say everybody agrees, and admits that both Proposition 1 and Proposition 2 are true...what question do we need to ask to move the conversation beyond the boring blog battles? What question moves the conversation forward?

Peter Ford
7/30/2013 07:01:58 pm

Ava, to move the conversation forward I believe we must move back to why we have these tests at all.
When a 9th grader from my class shows up in a high school, the counselor/teachers look at their transcript and see 'Algebra 1,' and let's say a grade of 'B+.' Far too many high schools have gotten far too many B+ students in Algebra 1 who couldn't multiply fluently past their 6's, much less solve a quadratic equation by factoring or completing the square. To 'assure' students were learning we now have objective tests, which for better or worse end up driving what teachers instruct.
If there could be some fidelity in what teachers teach I believe there would be no need for testing. If the local high schools trusted what I was doing in my classroom they would recognize that B+ as legitimate, and place my students accordingly.
The private schools my students attend seem to focus more on placement tests, transcripts, recommendations and interviews; I don't think they even ask for California Standards Test (CST) scores in their applications. While I believe tests are helpful illuminators on a school, accountability belongs in the hands of the parents, as long as those parents have viable options for educating their children.

Dan Willingham
8/1/2013 03:49:21 am

Peter
you note "If the local high school trusted what I was doing in my classroom they would recognize that B+ as legitimate. . ."
but this is one of the key reasons that standardized testing for placement began. There is no universal metric for what a B+ means. A school across the state may have students who come into the same math class much less well prepared, so it's almost inevitable that that teacher's B+ will mean something different than yours. (The SAT was created because admissions officers at selective colleges would never admit students from small-town high schools the officer had never heard of. . . so you had massive bias toward elite prep schools. The SAT was supposed to provide a way to compare kids from all high schools.)

jean sanders
8/22/2013 10:01:33 pm

Dan: another reason is that tests were prepared to "screen people out," like the gentleman's agreement that kept Jewish people out of privileged places (I have to revisit that article in the Atlantic). SAT tests are too closely tied to "IQ," which is a construct that can't be measured so easily as one might suppose. Read Scott Barry Kaufman's book Ungifted. He also points out the tremendous importance of the GPA as an indicator (over and above the test) and, in particular for women, the GPA can be a strong indicator of future work in college ....

Roger Sweeny
7/31/2013 05:06:31 am

There is something deeper lurking here. Teachers already give high stakes grades. The grade a teacher gives you determines whether you go on to the next grade, get into a "good" college, and so on.

After a number of years of school, many students have realized that there are easier ways of passing than putting in the effort to understand at any deep level what the teacher is talking about. This is especially true if they have no intrinsic desire to understand what the school is trying to teach them. Short term memory is a lot easier--if you know what will be on the test. As a high school teacher, I constantly felt pressure from my students, "tell us what you want us to tell you; don't give us this fuzzy, 'I want you to understand ...'"

Teachers have ways of bringing up the class average: Tell students before the test, "You should know X, Y, and Z" because you have seen the test and know X, Y, and Z are on it. "Review" the day before the test, emphasizing what you know will be on the test, even to the point of, "Why would A not be a right answer?" when A is one of the choices in a multiple choice question.

Early in my teaching career, I was shocked when I tried to talk about things the class had done earlier in the year only to find that most of them couldn't remember. Over the years, I've had a number of conversations, "Sometimes I feel like you guys memorize things for a test and then forget them a month later." "A month? Try the next day."

It is extraordinarily difficult to design assessment instruments that measure long-term understanding rather than short-term memorization. Perhaps that is why we try not to think of what kids are actually getting out of school.

For a classic cynical take:
http://bestofcalvinandhobbes.com/wp-content/uploads/2012/05/education.png

Dan Willingham
8/1/2013 03:50:49 am

Roger lots of interesting work being done on this problem and how to address it. . .some of the best by Katherine Rawson at Kent State.

crazedmummy
8/3/2013 11:18:22 am

At our school, students are moved on regardless of what they know or learn, whether they are awarded a passing or failing grade. Obviously, our school does poorly on the standardized tests. The administrators seem to be unable to follow the chain of reasoning that links these two pieces of news. Better to say our kids just "test badly" (yeah, because they don't actually know anything). For schools such as ours, where the 3.5 GPA students have a 14 on the ACT, the standardized tests are the only items smacking of truth for students.
Since our state has course requirements for graduation, senior year for many, many students is spent on E2020 guess-and-check courses, where the teacher in charge of the room tells kids which questions they need to re-guess before submitting their quizzes and tests.
The good teachers of course have all students who pass, who do not need to take E2020: the good E2020 teachers get everybody past all the courses they need, even if they need 4 years of math in a semester (and remember, these are the kids who are bad at math). A miracle occurs.
The "bad" teachers are honest reporters: possibly one day the administration will cotton on and allow kids to assemble part A and let the glue dry before going on to part B. As long as the system is run by people who neither like nor respect mathematics, our kids will continue to be let down by the system.

Mike G
8/1/2013 09:27:30 am

Good post. As usual.

Dan, here's my question. For the sake of argument, let me put aside your Prop 2.

I want to ask about Prop 1, that "people are more likely to do what they think will help them get X" (even if counterproductive).

Prop 1 isn't even a cautionary tale if it's "net positive," right?

I.e., incentives cause some bad behavior (reading teachers who jettison actual reading in favor of strategies, which they wrongly think will drive gains).

And some good behavior (reading teachers who read with struggling kids after school because of test-related incentives, which perhaps they had not done much of in previous years).

Is it your experience/belief that the "net" reaction to high-stakes testing is more of the former (counterproductive behavior, whether bad instruction or outright cheating), or more of the latter (extra, reasonably good attention on kids who are struggling)?

I.e., do we have to even broach the value of Prop 2 (How are we doing/trending?) if Prop 1 is a net positive?

Roger Sweeny
8/1/2013 10:25:57 am

You can't know if Prop1 is a net positive unless you have a way to measure results. That's Prop 2.

Dan Willingham
8/2/2013 02:55:13 am

Mike
I think so, yes, overall negative, though I admit I don't know of really systematic data on this point, so I might very well be wrong. My impression was that the testing flap in wealthy districts was more that the tests themselves were a time-consuming nuisance, but there was less craziness over test prep because teachers felt confident that kids would pass. In districts serving disadvantaged kids there was more often significant time devoted to test prep.

david
8/2/2013 02:27:29 am

It was indeed a well-researched and thoughtful discussion. Found it quite helpful. I completely second you in saying that a vocabulary test is a good way to identify one's IQ level, as I have tried that with some people I know.

John Thompson
8/19/2013 10:29:46 am

Dan,

I'm not saying your post was boring, but we who oppose high-stakes tests make the same caveats so often that we are on automatic pilot when we do it. I think you have a false equivalency here, but it is important to follow conventions and repeat our disclaimers constantly, and so it is good you made that same point that I wish we would all make frequently.


The most interesting comment, I think, was Mike G's. It prompted a very good response by Roger Sweeny (who I disagree with on about everything). And it should be read in the context of his answer on PISA, TIMSS, etc.

Roger, I'd say that no, it is not human nature that makes #1 come out negative. In our historical, cultural, economic, and political world, #1 always does more harm. Different cultures with different histories can have different answers. And #2, without stakes, is good for providing the evidence for that discussion. And it's proved #1 in our society.

That raises points for a really interesting discussion. Wouldn't you agree that all testing, #1 and #2, are political processes that are born of history?

Mike G. and Dan, I'd answer your questions by noting how much more the proponents of #2 violate the principles that were mostly expressed in this post and the comments.

Roger Sweeny
8/19/2013 12:09:48 pm

Of course, "all testing ... are political processes that are born of history."

One of the things that bothers me about those "who oppose high stakes tests" is their selective blindness. Every test that a teacher gives has a certain "stake" --and taken together they are very, very, very high stakes. They would raise different moral questions if they accurately assessed things like "critical thinking." But in general, they don't. They are tests that reward "memorize and forget" in the context of academic subjects.

Not everyone passes. Unlike race or IQ, employers are allowed to, even encouraged to, discriminate on the basis of whether a job applicant has degrees.

Right now we have a system that rewards people who "memorize and forget" the knowledge contained in academic courses, courses that are full-bodied or watered-down versions of what a college student majoring in a particular discipline would take. Not surprisingly, people from lower-income backgrounds generally don't do as well as people from higher-income backgrounds.

John Thompson
8/20/2013 04:42:06 am

Again, we agree. Although I don't think many advocates of high-stakes testing would say "of course" regarding the political and historical roots of tests and responses to them.

During my school's best years, before high-stakes testing and NCLB-incentivized choice drove us to the bottom of the state, we had great success weaning second semester seniors off of grades and extrinsic motivations. As long as we ALL worked steady and worked smart, we discussed, there was no reason for grading. In my best classes, inner city kids learned legitimate college prep standards for mastery until graduation, and we didn't need grades.

Then, my alumni continually returned and taught those real-life lessons to subsequent classes. We had a tradition.

After NCLB testing drove all but memorize-and-forget instruction out of almost all classes, our alumni had invariably dropped out of college, and the advice they gave to the students was not to be like them and run up debt by trying four-year colleges.

Now, almost all seniors do nothing but graduation exam test prep and attend EOI bootcamps, for passing the freshman and sophomore tests that they failed but need for graduation. For better and for worse, when Common Core hits, all this will come crashing down.

Roger Sweeny
8/20/2013 05:54:05 am

I'm sorry to hear that.

Roger Sweeny
8/21/2013 01:05:36 am

I just ran into this on Joanne Jacobs' blog. She's quoting a description of some parent focus groups:

"Standards, assessment, and evaluation don’t make sense to parents as separate concepts: to the extent they think about these things at all, it’s just stuff that they assume you do to manage sensibly. (Set goals, measure them, talk about how well you did, and then fix stuff that didn’t work.) Not only would it not make sense to parents to suggest not doing these things, parents are incredulous when they think that any of it isn’t already common practice.

"Most intriguing, 'standards' don’t even make sense to parents as an idea unless you measure them. I wished we’d videoed those moments in the conversations: if you suggested having standards but no common tests, parents got mad. They literally pushed chairs back from the table or threw pens down to make their point: 'You can’t say you have a standard if you don’t also measure it.'”

I don't think machine readable tests are a good way to "measure it" but I think the parents are absolutely correct that, "You can't say you have a standard if you don't also measure it."
http://www.joannejacobs.com/2013/08/parents-set-goals-measure-fix/#comments

Hugh
8/27/2013 07:24:36 am

There's another logic condition that policy wonks completely miss, as far as I'm concerned. It is this: confusing a necessary condition and a sufficient condition can be fatal. I.e., given that high test scores are necessary for a good education but not sufficient (many scholars would disagree, but WTF), acting as if high test scores are sufficient could guarantee failure in achieving the goal of a good education. By analogy, having good wind is necessary to be a good soccer player, but if a coach sees it as sufficient, s/he will virtually guarantee a bad soccer team by ignoring the other necessary factors. That's where we are in education policy, yet no one points this out...

jean sanders
8/27/2013 07:44:54 am

Hugh: you are right; actually Christopher Jencks has pointed it out but no one listens.... He is looking at Massachusetts data and Texas data (along with what he calls value added) and he says the test scores alone are insufficient. One thing he would also want to measure is how students treat each other in their classrooms. You can find him on video at the SREE organization website; I am thinking of joining as a member of SREE and attending their conference. There are others like C. Jencks, but who is to hear? When you have a computer with space you have to build a profit center by selling tests, scoring, etc...

