To Build Non-Cognitive Skills, We Need To Be Able To Measure Them
The second of two pieces I’m posting today on the hot topic of “non-cognitive skills” comes from Dan Willingham, cognitive scientist at UVA. In a post on his blog, Dan takes a hard look at what we still don’t know about these capacities:
“You can’t do science without measurement. That blunt fact might give pause when people emphasize non-cognitive factors in student success and in efforts to boost student success.
‘Non-cognitive factors’ is a misleading but entrenched catch-all term for factors such as motivation, grit, self-regulation, social skills . . . in short, mental constructs that we think contribute to student success, but that don’t contribute directly to the sorts of academic outcomes we measure, in the way that, say, vocabulary or working memory do.
Are [the promoters of non-cognitive skills] on to anything that educators are likely to be able to use in the next few years? Or are we going to be defeated by the measurement problem?
Suppose I’m trying to improve student achievement by increasing students’ resilience in the face of failure. My intervention is to have preschool teachers model a resilient attitude toward failure and to talk about failure as a learning experience. Don’t I need to be able to measure student resilience in order to evaluate whether my intervention works?
Ideally, yes, but that lack may not be an experimental deal-breaker.
My real interest is in student outcomes like grades, attendance, dropout rates, completion of assignments, class participation and so on. There is no reason not to measure these as my outcome variables. The disadvantage is that there are surely many factors that contribute to each outcome, not just resilience. So there will be more noise in my outcome measure, and consequently I’ll be more likely to conclude that my intervention does nothing when in fact it’s helping.
The advantage is that I’m measuring the outcome I actually care about. Indeed, there would not be much point in crowing about my ability to improve my psychometrically sound measure of resilience if such improvement meant nothing to education.
There is a history of this approach in education. It was certainly possible to develop and test reading instruction programs before we understood and could measure important aspects of reading such as phonemic awareness.
In fact, our understanding of pre-literacy skills has been shaped not only by basic research, but by the success and failure of preschool interventions. The relationship between basic science and practical applications runs both ways.”
We can move forward, Dan concludes, by refining these concepts in the lab, and also by designing and implementing scientifically informed programs and seeing whether they work.
(In his post, Dan also remarks: “Honestly, if I hear about the Marshmallow Study just one more time, I’m going to become seriously dysregulated.” Hear, hear. Let’s declare a moratorium on all discussions of marshmallows for at least six months . . . )