The Science of the Human Brain

It is the most complex living system in the known universe, built of hundreds of billions of cells, each as complicated as a city.

It is the primary author of the deeply personal story we tell ourselves about who we are and why we are here.

And it never, ever, shows us the world as it truly is — only as we need it to be.

This is the conundrum of the human brain, which is why understanding its peculiar science is a prerequisite for our ability to imagine, and then build, a better world.

Consider this: Human beings — or, for that matter, any other life form on earth — see only what has helped them survive in the past. The way we make sense of the present is still being shaped by the innovations, fears and assumptions our ancestors made hundreds, thousands, even millions of years ago. And yet our success as a species has occurred not in spite of our inability to see reality, but because of it.

As neuroscientist Beau Lotto puts it, human beings didn’t evolve to see the world objectively; they evolved to “not die.”  And all living things have managed to “not die” thus far by developing different perceptual overlays on the same planetary backdrop. Bats developed echolocation. Cephalopods learned to change colors. Birds got GPS. And we developed a wet computer with a parallel processor — otherwise known as a bi-hemispheric brain.

Why did our brains evolve that way, with a singular blueprint stamped out twice? No one knows for sure, although psychiatrist Iain McGilchrist believes it happened to help us “attend to the world in two completely different ways, and in so doing to bring two different worlds into being,” and two different ways of surviving life’s slings and arrows.

“In the one,” he explains, “we experience — the live, complex, embodied, world of individual, always unique beings, forever in flux, a net of interdependencies, forming and reforming wholes, a world in which we are deeply connected. In the other we ‘experience’ our experience in a special way: a ‘re-presented’ version of it, containing now static, separable, bounded, but essentially fragmented entities, grouped into classes, on which predictions can be based.

“These are not different ways of thinking about the world,” McGilchrist claims. “They are different ways of being in the world. If the left hemisphere is the hemisphere of ‘what’, the right hemisphere, with its preoccupation with context, the relational aspects of experience, emotion and the nuances of expression, could be said to be the hemisphere of ‘how.’”

Who we perceive ourselves to be is a mashup of these different perceptual overlays — ostensibly equal parts logic and emotion. And yet the reality of the modern world, says McGilchrist, is that our way of seeing has gotten out of balance.

For a number of reasons, we have become left-hemisphere heavy.

“My thesis is that for us as human beings there are two fundamentally opposed realities, two different modes of existence; that each is of ultimate importance in bringing about the recognizably human world; and that their difference is rooted in the bihemispheric structure of the brain. It follows,” he asserts, “that the hemispheres need to cooperate, but I believe they are in fact involved in a sort of power struggle, and that this explains many aspects of contemporary Western culture.”

Is it possible to bring ourselves back in balance? Harvard neuroscientist Jill Bolte Taylor would say yes — and she would know. In 1996, a blood vessel burst in the left half of her brain. “In the course of four hours,” she explains, “I watched my brain completely deteriorate in its ability to process all information.”

Because Taylor had spent her adult life studying the brain’s intricate architecture, she was uniquely suited to observe how the stroke was affecting her. What she discovered has huge implications for how all of us think about who (and why) we are in the world.

But first, a refresher course:

On one side of our brain is the right hemisphere, designed to help us remember things as they relate to one another. “To the right mind,” Taylor says, “no time exists other than the present moment, and each moment is vibrant with sensation. By design, our right mind is spontaneous, carefree, and imaginative. It allows our artistic juices to flow free without inhibition or judgment.” And it gives us the sense that we are indistinguishable, infinite, and interconnected.

By contrast, our left hemisphere organizes the world for us in a linear and methodical way. “Here,” says Taylor, “the concept of time is understood as either past, present or future, and our left brain thrives on details. Here, we use words to describe, categorize, define, and communicate about everything we see.” And here, we find the part of ourselves that feels most distinct — the part that proclaims, ‘I am,’ apart from the world.

Because Taylor’s stroke occurred in the left side of her brain, her right hemisphere was left unchecked by its usual counterbalance. As a result, she experienced a drastically different way of seeing the world (and her role in it). “I no longer perceived myself as a single, a solid, an entity with boundaries that separated me from the entities around me. Everything around us, about us, among us, within us, and between us is made up of atoms and molecules vibrating in space. Although the ego center of our language center prefers defining our self as individual and solid, most of us are aware that we are made up of trillions of cells, gallons of water, and ultimately everything about us exists in a constant and dynamic state of activity.

“My left hemisphere had been trained to perceive myself as a solid,” she explained, “separate from others. Now, released from that restrictive circuitry, my right hemisphere relished in its attachment to the eternal flow. I was no longer isolated and alone. My soul was as big as the universe and frolicked with glee in a boundless sea.”

For Taylor, the lesson is not that one hemisphere is better than the other; indeed, there is a reason our brain evolved this way — which is, simply, to integrate two contradictory yet complementary ways of seeing the world: one holistic and boundless, the other segmented and bound.

Rather, the lesson comes when we accept, as Beau Lotto puts it, that “the world out there is really just our three-dimensional screen. Our receptors take the meaningless information they receive; then our brain, through interacting with the world, encodes the historical meaning of that information, and projects our subjective versions of color, shape and distance onto things.

“Meaning is a plastic entity,” he reminds us, “much like the physical nature of the brain, which we shape and reshape through perceptual experiences. It is critical to understand that the meaning of the thing is not the same as the thing itself.”

Believing is seeing, more than seeing is believing. And so we will never change the story of how we learn and live unless we become aware of the way our brains are always trying to ensure that we “not die” — by providing us with a coherent story about our lives, a story that our brain works around the clock to create.

A story that is never, ever, accurate.

To see differently, we must learn to see seeing differently. “The more aware I remain about what my brain is saying and how those thoughts feel inside my body,” Taylor tells us, “the more I own my power in choosing what I want to spend my time thinking about and how I want to feel. If I want to retain my inner peace, I must be willing to consistently and persistently tend the garden of my mind moment by moment, and be willing to make the decision a thousand times a day.”

The clearer we are about the science of the human brain, in other words, the greater the chance we can appreciate the art of individual identity.

It is the most wondrous thing we have discovered in the universe, and it is us.

A Year of Wonder: The Neuroscience of Empathy

By announcing last month that I wanted 2016 to be a year of wonder, I put friendly pressure on myself to pursue all the big questions that occurred to me. We’ll see how well I’m able to sustain the energy over the course of the rest of the year, but my first riddle was this: ‘If empathy is what makes us distinctly human, what do we know about the neuroscience of empathy itself?’

If a person wishes to wonder deeply about the world, which ingredient is more important – the person, or the world?

Until recently, our answer was clearly the latter.

For the great majority of our time on this planet, human beings have viewed the world almost entirely through the prism of “we,” not “me.” As foragers, we lived in unquestioning obedience to the unknowable marvels of the natural world. And in the earliest civilizations, we lived to serve the needs of our Gods in Heaven – and then, later on, their hand-chosen emissaries on Earth.

In these long chapters of the human story – which together make up more than 93% of our history as a species – our ancestors were most likely to find comfort, and a sense of identity, through their ability to fit usefully and invisibly into a larger community.

To stand out from the crowd was undesirable, since, in reality, doing so could mean ostracism or death.

To walk in someone else’s shoes was unnecessary, since, in effect, everyone wore the same shoes.

And to wonder about the world was to focus one’s gaze outward, or upward.

Over time, however, the human gaze has shifted. Beginning with the rise of the great religions, continuing through the citizen revolutions in France and the Americas, and running right up to and through the age of social media and the Selfie Stick, we humans have begun to increasingly look inward – and to find an equally endless source of awe and wonder as we do.

At the same time, a wave of new discoveries in fields ranging from neuroscience to psychology has taught us that our need to wonder is more than just a desire to daydream; it is the way we deepen our empathic capacity to connect with our fellow creatures.

“What do we human beings do all day long?” asks neuroscientist Marco Iacoboni. “We read the world, especially the people we encounter.” And according to Iacoboni and his colleagues, we do so by relying on “mirror neurons” – a special subset of the more than 100 billion neurons always busily at work in the most complex structure in the known universe: the human brain.

They’re called mirror neurons because observing the behavior of someone else – from eating a peanut, to yawning, to experiencing sudden pain – can trigger the same brain activity in the observer as in the observed. “Our brains are capable of mirroring the deepest aspects of the minds of others at the fine-grained level of a single brain cell,” Iacoboni explains. “This is utterly remarkable. Equally remarkable is the effortlessness of this simulation. We do not have to draw complex inferences or run complicated algorithms.

“When we look at others, we find both them and ourselves.”

Similarly, a growing chorus of researchers has begun to suggest empathy is a foundational building block in our process of developing social cognition. “The brain is a social organ, made to be in relationship,” explains psychiatrist Daniel Siegel. “What happens between brains has a great deal to do with what happens within each individual brain . . . [And] the physical architecture of the brain changes according to where we direct our attention and what we practice doing.”

And yet, as far as words go, empathy is a new one – it didn’t even appear until the early 20th century. It comes from the English translation of the German word Einfühlung, which was used to describe the relationship between a work of art and its subject; it was later expanded to include interactions between people.

Those interactions, according to social theorist Jeremy Rifkin, are what give rise to a deeper human capacity for making sense of the world. “Empathic consciousness starts with awe,” he contends. “When we empathize with another, we are bearing witness to the strange incredible life force that is in us and that connects us to all other living beings.

“It is awe that inspires all human imagination. Without awe, we would be without wonder and without wonder we would have no way to exercise imagination and would therefore be unable to imagine another’s life ‘as if’ it were our own.”

In other words, we have slowly flipped the paradigm of human understanding: strictly speaking, it is not the world that makes us wonder; it is our wondering that makes the world. Or, even more specifically, as the Chilean biologist-philosophers Francisco Varela and Humberto Maturana point out, “the world everyone sees is not the world but a world, which we bring forth with others.”

This epiphany is changing more than just our understanding of the brain. In recent years, scientists in fields ranging from biology to ecology have revised the very metaphors they use to describe their work – from hierarchies to networks – and begun to realize, as physicist Fritjof Capra says, “that partnership – the tendency to associate, establish links, and maintain symbiotic relationships – is one of the hallmarks of life.”

The downside of all this navel-gazing? A heightened risk of narcissism, consumerism, and reality television.

The upside? A steadily increasing empathic capacity, anchored in our development of a shared sense of vulnerability, and a paradoxical desire to seek “universal intimacy” with the world.

“We are learning,” Rifkin writes, “against all of the prevailing wisdom, that human nature is not to seek autonomy – to become an island to oneself – but, rather, to seek companionship, affection, and intimacy. We have been sending out radio communications to the far reaches of the cosmos in the hope of finding some form of intelligent and caring life, only to discover that what we were desperately seeking already exists among us here on Earth.”



Big Bird Can Close the Achievement Gap? Not So Fast . . .

Don’t get me wrong: I love Big Bird as much as the next guy. But when people start talking about how Sesame Street is just as effective at closing the achievement gap as preschool, I start to worry that we’re becoming enamored with a seductively simple characterization of a deeply complex problem.

To wit: this article, in which we are told the “new findings offer comforting news for parents who put their children in front of public TV every day.” Or this radio story, in which the reporter claims that the show’s heavy dosage of reading and math can yield long-term academic benefits that “close the achievement gap.”

The lure of a set of findings like this is pretty clear: plug your kids into an educational TV show, help them learn their letters and numbers, and voila! No more class inequality. And actually, when one considers how we measure the achievement gap — via the reading and math test scores of schoolchildren — the temptation to draw a direct line of cause and effect is understandable.
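It’s worth being concrete about what “measuring the achievement gap” usually amounts to: a standardized difference between two groups’ average test scores, often expressed as Cohen’s d. Here’s a minimal sketch in Python — the group names and scores are invented for illustration, not drawn from any real dataset:

```python
# A minimal sketch of how an "achievement gap" is typically quantified:
# the standardized difference (Cohen's d) between two groups' mean test
# scores. The group labels and scores below are made up for illustration.
from statistics import mean, stdev

def cohens_d(scores_a: list[float], scores_b: list[float]) -> float:
    """Standardized mean difference, using a pooled standard deviation."""
    n_a, n_b = len(scores_a), len(scores_b)
    var_a, var_b = stdev(scores_a) ** 2, stdev(scores_b) ** 2
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean(scores_a) - mean(scores_b)) / pooled_sd

# Hypothetical reading scores for two groups of students:
higher_income = [520.0, 540.0, 560.0, 580.0]
lower_income = [460.0, 480.0, 500.0, 520.0]
print(round(cohens_d(higher_income, lower_income), 2))
```

The point of standardizing is that a gap of “60 points” means nothing without knowing how spread out scores are; d expresses the gap in units of that spread, which is also why a single number like this flattens everything else going on in children’s lives.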

The problem, of course, is that school is about a lot more than literacy and numeracy — and the problems that beset poor children run a lot deeper than the 30 million word gap.

Consider the research around ACE Scores — or the number of adverse childhood experiences young people have — and how much those scores shape a child’s readiness to learn and develop (you can take a short quiz to get your own score here).
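For context, the ACE score itself is strikingly simple arithmetic: the standard questionnaire covers ten categories of adverse childhood experience, and a person’s score is just the number of categories they answer “yes” to, from 0 to 10. A rough sketch — the category labels below are paraphrased for illustration, not the official survey wording:

```python
# The standard ACE questionnaire covers ten categories of adverse
# childhood experiences; the score is simply how many a person reports.
# These category labels are paraphrased, not the official survey wording.
ACE_CATEGORIES = [
    "emotional abuse", "physical abuse", "sexual abuse",
    "emotional neglect", "physical neglect",
    "parental separation or divorce", "domestic violence in the home",
    "household substance abuse", "household mental illness",
    "incarcerated household member",
]

def ace_score(answers: dict[str, bool]) -> int:
    """Count the adverse-experience categories answered 'yes' (0-10)."""
    return sum(1 for category in ACE_CATEGORIES if answers.get(category))

# A child reporting two categories has an ACE score of 2.
print(ace_score({"parental separation or divorce": True,
                 "household substance abuse": True}))  # 2
```

The simplicity is the point: a crude count of exposures turns out to predict learning and health outcomes far better than our intuitions would suggest.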

Dr. Pamela Cantor has spent a lot of time thinking about ACE Scores. She founded Turnaround for Children to help schools become more attuned to the cognitive, social and emotional needs of kids, and she’s one of many out there who are urging us to understand what the latest research in child development is telling us about what we need to be doing in schools. As she puts it, “The profound impact of extreme stress on a child’s developing brain can have huge implications for the way children learn, the design of classrooms, the preparation of teachers and school leaders, and what is measured as part of the school improvement effort as a whole.”

So while reduced exposure to the building blocks of reading and math is a problem (and one that Sesame Street can clearly help address), the biggest problem for at-risk children — the root cause — has to do with their social and emotional health.

The good news is that although these deficits are profound, they are also predictable, since they stem directly from the effects that stress and trauma have on young people’s brains and bodies. “These stresses impact the development of the brain centers involved in learning,” Cantor explains. “It is because these challenges are knowable and predictable that it is possible to design an intervention to address them. Collectively, they represent a pattern of risk—risk to student development, risk to classroom instruction, and risk to school-wide culture—each of which is capable of derailing academic achievement.”

That means the best way to help poor and at-risk children become prepared for school is by ensuring that high-poverty schools have universal practices and supports that specifically address the impact of adverse childhood experiences. So while I love and appreciate the benefits of Sesame Street, I’d feel better if instead of suggesting that every child needs more time with Grover in the morning, we suggested that every child’s ACE Score needs more weight in determining how schools allocate resources to support their students’ holistic development.

I hope that doesn’t make me seem like Oscar the Grouch.

The social origins of intelligence

There’s a fascinating new study in which researchers examined the injuries and aptitudes of Vietnam War veterans who suffered penetrating head wounds. Among their findings? That “the ability to establish social relationships and navigate the social world is not secondary to a more general cognitive capacity for intellectual function, but that it may be the other way around. Intelligence may originate from the central role of relationships in human life and therefore may be tied to social and emotional capacities.”

Let me repeat that: cognitive intelligence is not separate from social intelligence. In fact, our capacity to deepen the former is dependent on our ability to be deeply grounded in the latter.

For anyone who has spent time as an educator, we’ve always intuitively known this to be true. As the saying goes, unmet social needs lead to unmet academic needs. Or, put more simply, the three most important words in teaching and learning are Relationships, Relationships, Relationships.

And yet the recent flood of cognitive research that confirms this intuitive truth is striking, especially when one considers how slowly it has made its way into the minds of our nation’s policymakers. Indeed, as lead researcher Aron Barbey put it, “the evidence suggests that there’s an integrated information-processing architecture in the brain, that social problem solving depends upon mechanisms that are engaged for general intelligence and emotional intelligence. This is consistent with the idea that intelligence depends to a large extent on social and emotional abilities, and we should think about intelligence in an integrated fashion rather than making a clear distinction between cognition and emotion and social processing.

“This makes sense,” Barbey continues, “because our lives are fundamentally social. We direct most of our efforts to understanding others and resolving social conflict. And our study suggests that the architecture of intelligence in the brain may be fundamentally social, too.”

So what would the next generation of education policies need to look like in order to be aligned with the emerging consensus about how the brain works, and how people learn?

They would need to start incentivizing the conditions that support holistic child development and growth, and stop disproportionately weighting literacy and numeracy.

They would need to start crafting policies in concert with other departments, from health to housing to labor, as a way to systemically support our country’s poorest families.

They would need to ensure that teacher preparation and evaluation programs are grounded in the latest neuroscience, not our traditional notions of what teaching looks like and requires.

What else?

The Neuroscience of Democracy

In the ideal educational future, is there a single design principle that matters most in establishing the optimal learning environment for children?

That seems like a pretty important question to consider. And if you were to go by today’s leading reform strategies, you might conclude that the answer is, variably, greater accountability, better use of data, more strategic use of technology, or more personalization (all good things, by the way). Yet for my money, the design principle that matters most is the one modern reform efforts care about the least – the extent to which schools are creating true laboratories of democratic practice.


Want to Get Smarter? Be More Childlike.

Interesting piece on NPR this morning in which Shankar Vedantam reviews some of the recent research in neuroscience. You can listen to it here, and you should, because it highlights something simple and significant — that the best way to keep learning over one’s life is to keep hold of the boundless inquiry that characterizes early childhood.

“Using mathematical techniques that allow researchers to disentangle the effects of genetic and environmental influences on individuals,” Vedantam reports, researchers “noticed that kids who had higher IQs to begin with seemed to have an extended period in adolescence during which they retained the ability to learn at a rapid pace, just like much younger children.

“I found that twins that had a higher IQ were showing a more childlike pattern of influence during adolescence,” said one of the researchers, Penn State’s Angela Brant.
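For the curious, the classic version of those “mathematical techniques” is Falconer’s formula, which estimates how heritable a trait is from how much more alike identical (MZ) twins are than fraternal (DZ) twins. A toy sketch — this is the textbook method, not necessarily the exact model Brant and her colleagues used, and the correlations below are invented:

```python
# An illustration of how twin studies separate genetic from environmental
# influence. Falconer's formula partitions trait variance using the
# observed correlations of identical (MZ) vs. fraternal (DZ) twins.
# This is the classic textbook method, not necessarily the exact model
# used in the study quoted above; the correlations here are invented.

def falconer(r_mz: float, r_dz: float) -> dict[str, float]:
    """Partition variance into heritability (A), shared environment (C),
    and non-shared environment (E)."""
    a = 2 * (r_mz - r_dz)  # additive genetic variance: MZ twins share
                           # twice as many segregating genes as DZ twins
    c = r_mz - a           # shared (family) environment
    e = 1 - r_mz           # everything unique to each twin, plus noise
    return {"A": a, "C": c, "E": e}

# Invented example: MZ twins correlate 0.80 on a trait, DZ twins 0.50.
print(falconer(0.80, 0.50))  # A ~ 0.6, C ~ 0.2, E ~ 0.2
```

The three components sum to 1 by construction, which is what lets researchers talk about the “pattern of influence” — how much of the variation in a trait tracks genes versus environment — shifting with age.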

If that’s true, it would make sense to structure learning environments for children that are proactively designed to unleash each young person’s inherent sense of wonder and curiosity. And yet, here in DC and elsewhere across the country, we are doing the opposite. It’s true — too many young people are arriving in school with extreme deficits when it comes to literacy and numeracy. And it’s true — those things matter. But the best way to help all children thrive is not by making Kindergarten resemble a 10th grade honors class; it’s by making that 10th grade honors class more like Kindergarten.

That’s something educators have known for a long time. Now they have the research to back it up.

The Science of Learning (and of School Reform)

Here’s a strange but illustrative little animated short based on a clip from a David Brooks speech, in which he lays bare one of the false assumptions about the brain that has led us down the wrong path for generations.

As regular readers of this blog know, I’ve had my issues with David Brooks in the past — mostly because he’s so RIGHT half the time, yet he can’t seem to connect all the dots of his own emerging understanding of the extent to which we are, truly, social animals, and the extent to which that understanding should completely change how we think about schooling, and school reform.

If you’re interested in going a little deeper than 36 seconds into the science of the brain, and of school reform itself, I’d recommend reading this and this. In my mind, the implications of all this research are clear: We need to stop obsessing over what kids know, and start obsessing over who they are. We need to strike the right balance between the art and the science of teaching and learning. And we need to define the ultimate end goal of public education as an essential set of life skills – and the content we teach as the means of acquiring those skills – not vice versa.