Is the Scientific Method Becoming Less . . . Scientific?

In my ongoing effort to better understand how we reconcile the creative tension between subjective and objective measures of the world — including our thus far elusive search for a better way of tracking how people learn — I took note of a recent New Yorker article that cast light on some emerging problems with the ostensible foundation of all objective research: the scientific method.

In the article, author Jonah Lehrer highlights a score of multiyear studies — ranging from the pharmaceutical to the psychological — in which core data changed dramatically over time. Drugs once hailed as breakthroughs showed a steep decline in effectiveness. Groundbreaking insights about memory and language turned out not to be so replicable after all. And a newly named phenomenon — the “decline effect” — cast doubt on the purely objective foundation of modern science itself.

Without recounting the article in its entirety, there are several insights that have great relevance to those of us seeking a better way of helping children learn:

  • In the scientific community, publication bias has been revealed as a very real danger (in one study, 97% of psychology studies confirmed their hypotheses, meaning researchers were either extraordinarily lucky or publishing only the outcomes of successful experiments). The lesson seems clear: if we’re not careful, our well-intentioned search for answers may lead us to overvalue the data that tell us what we want to hear. In the education community, how does this insight affect our own efforts, which place heavy emphasis on accountability and measurement, yet do so by glossing over a core issue — the individual learning process — that is notoriously mercurial, nonlinear, and discrete?
  • In the scientific community, a growing chorus of voices is worried about the current obsession with “replicability,” which, as one scientist put it, “distracts from the real problem, which is faulty design.” In the education community, are we doing something similar — is our obsession with replicability leading us to embrace “miracle cures” long before we have even fully diagnosed the problem we are trying to address?
  • In the scientific community, Lehrer writes, the “decline effect” is so gnawing “because it reminds us how difficult it is to prove anything.” If these sorts of challenges are confronting the scientific community, how will we in the education community respond? To what extent are we willing to acknowledge that weights and measures are both important — and insufficient? And to what extent are we willing to admit that when the reports are finished and the PowerPoint presentations conclude, we still have to choose what we believe?

Is Michelle Rhee Putting Students First?

Like everyone else who works in education for a living, I read that Michelle Rhee is launching a new national advocacy organization, Students First. And after checking out the site and hearing how she articulates its purpose, I see some reasons to feel hopeful — and many more reasons to feel deeply concerned.

First, the good news: It’s hard to argue with Rhee’s four “we believe” statements for the organization. Who doesn’t believe all children deserve great teachers? Who would argue with the idea that students should not need luck to get a good education? Why not start allocating public dollars where they can make the biggest difference? And who would deny the need for more parental involvement and increased efforts to engage the entire community? So let’s all hop on the Rhee express, right? Well, maybe.


Why We Measure Things

To conclude my recent bender on the “data craziness” plaguing our national education reform efforts — and, once again, in an effort to highlight a more thoughtful approach that resists either extreme (“all data all the time” or “no data none of the time”) — I want to share, courtesy of my friend Lisa Kensler, this wonderful 1999 (read: pre-NCLB!) article by Meg Wheatley.

See what you think, and please share your thoughts and reactions.

“Data Craziness” (aka The Other Education: Part Deux)

Earlier this week, I responded to a recent column by David Brooks of the New York Times, who constructed an artificial divide between our “formal education” (aka school) — which he indifferently described as linear, objective and ordinary — and our “emotional curriculum” (aka life) — which he approvingly described as nonlinear, subjective and transformational.

In fairness to Brooks, he’s hardly alone in this misconception — in fact, it’s probably inaccurate to call it a misconception, since this is how it works for too many of us: formal schooling is what you endure, and informal schooling is what helps you discover what really matters to you, what your personal strengths and weaknesses are, and so on. But just because that’s the way things have been doesn’t mean that’s the way they should continue to be — a particularly relevant point for folks like Brooks, who are supposed to help light a better path, and for reform-minded cities like Washington, DC, where I now live. And yesterday I read something that gives me hope that our city may be slowly adjusting its course toward a more fruitful strategy for school improvement.

The occasion was a radio appearance by interim schools chancellor Kaya Henderson, a former deputy to Michelle Rhee and a person who, depending on whom you ask, is either a constructive bridge between the Rhee era and the Gray administration or a destructive reminder of the past four years. In the interview, Henderson artfully addressed the source of this artificial divide between formal and informal schooling and suggested, to me at least, a nuanced understanding of what needs to happen going forward — in short, exactly what I want to hear from the top education official of my city.

“I think we’ve gotten something wrong,” she began. “Previously there was no measure of student achievement. We just sent kids to school and hoped for the best. And then the standards and accountability movement came along and said what doesn’t get measured doesn’t get done, so we have to test. And I think testing is incredibly important. But I also think that we have to help people understand that tests are a benchmark, not the goal. The goal is to educate children. And I think the swing of the pendulum from absolutely no accountability to what I might call data craziness is starting to hurt.”

Henderson conceded that, currently, test scores remain the most objective available indicator of academic growth across the school system. “But I feel like we have to make people understand that test scores are not the only thing happening in our classrooms,” she said.

Imagine if more of our education policies were constructed to address this vital insight? Imagine if more of our public leaders urged us all to end our obsession with either extreme of the pendulum’s swing — and charted a course to let that pendulum settle in the middle, where we value both measures and meaning, and where our schools are incentivized to create environments that nurture the academic, emotional and spiritual lives of our children (and communities)? And imagine if the Gray administration, under Kaya Henderson’s leadership, set out to establish three conditions that are not being met today:

  1. To measure all things worth measuring in the context of providing children the most meaningful education possible (aka Brooks’s “emotional curriculum”).
  2. To ensure we know how to measure what we set out to measure.
  3. To attach no more importance to measurable things than we attach to equally or more valuable things that elude our instruments.

I like what I’m hearing.