Future of Conflict #2: Moral Tribes
This week I read Moral Tribes by Harvard philosopher and neuroscientist Josh Greene. The book claims to be about “Emotion, Reason, and the Gap Between Us and Them”, but pretty soon its true nature as a Hard Sell for Utilitarian Moral Philosophy is revealed!!
For better or worse, I was already pretty sold on utilitarianism (which Josh tries to re-brand as “deep pragmatism”), but if you’re not, he has some great arguments in there.
Our main concern, however, is building a Beloved Community out of a bunch of factions that disagree on assumptions, goals, and strategies. So instead of focusing on the moral philosophy parts of the book (which is most of it), I’m going to focus on the parts of Josh’s research that can help us in the Beloved Community project.
Here are the three takeaways I’d like to share with you:
- Tragedy of the Commons vs. Tragedy of Morality
- Automatic vs. Manual-mode in the Moral Brain
- Stop Talking About Rights!
Tragedy of the Commons vs. Tragedy of Morality
The Tragedy of the Commons is a well-known social science problem re-posed during the Cold War by Garrett Hardin. The basic idea is that a public good (like a common grazing pasture) will inevitably get jacked (overgrazing) unless a government imposes a solution (quotas) or the market imposes one (privatization).
Of course, this theory has been experimentally debunked by Elinor Ostrom and colleagues, who researched tens of thousands of systems where the commons are successfully managed by small-groups-of-people-who-cared, without resorting to authoritarian control or privatization. I think the Beloved Community can learn a lot from Ostrom’s findings about what makes commons work or fail, and I’ll be getting into her book, Governing the Commons, later this year.
But Josh’s point is that the Tragedy of the Commons is basically about selfishness vs cooperation, and that all of our moral emotions have evolved to favor cooperation over selfishness.
Empathy, familial love, anger, social disgust, friendship, minimal decency, gratitude, vengefulness, romantic love, honor, shame, guilt, loyalty, humility, awe, judgmentalism, gossip, self-consciousness, embarrassment, tribalism, and righteous indignation: These are all familiar features of human nature, and all socially competent humans have a working understanding of what they are and what they do… All of this psychological machinery is perfectly designed to promote cooperation among otherwise selfish individuals…
I generally see emotions as information and input to my thought processes. Josh (and all of his published research) tends to see them as commands.
While the evolution of these behaviors was good (essential?) for getting along in small groups, it turns out that each group having its own moral code is bad (unhelpful) for getting along in a diverse multicultural society.
This is what Josh calls the “Tragedy of Commonsense Morality”: every group’s moral code — which works so well within its own moral tribe — turns out to clash with the others’ and isn’t so great at helping everyone get along in Earth’s pluralistic democracies.
This, according to Josh, is “The central tragedy of modern life”.
The Three Hindrances
There are three main ways that group-level moral thinking hinders public-policy decision-making, and they’re among the key things I’ll retain from this book (with the intention of bringing awareness to my own behaviors):
- Ethnocentrism
- “Local” values (usually religion-adjacent)
- Genuine differences in values
Ethnocentrism is a fancy word for “We are better or more valuable than you are”, and is an inherent part of any pre-pluralistic morality. It’s just a feature of our past that we all have to recognize and deal with. It’s pretty obviously not “true”, but we all feel it to some degree or another.
By “local” values, Josh means claims that won’t stand up to independent verification but are still prized by the believers. Anything related to Gods, sex, death, and eating habits usually falls into this category.
The third hindrance — genuine differences in values — is the only one where rational problem-solving can really help us, and in my mind is the “opportunity space” for what members of different moral tribes can learn from one another.
Unfortunately, we can’t get to this discussion while the first two hindrances are online, and they tend to be persistent:
In the end, there may be no argument that can stop tribal loyalists from heeding their tribal calls. No argument will convince Senator Santorum and Dr. Laura that their religious convictions, untranslated into secular terms, are unfit bases for public policy. At most, we can urge moderation, reminding tribal loyalists that they are not acting on “common sense,” but rather imposing their tribe’s account of moral truth onto others who do not hear what they hear or see what they see.
It’s not fully hopeless though, and the most important thing I’m going to write today (below) is about what can convert tribal loyalists into open-minded deliberators.
The key takeaway here is that the same moral tools that got us into pluralistic societies are not helping us thrive in them.
Automatic vs. Manual-mode in the Moral Brain
The second key idea I want to share is based on the experimental research that Josh and his colleagues have been doing. If you’re familiar with Dan Kahneman and Thinking, Fast and Slow, this will all sound eerily familiar.
Apparently, we have two different parts of our brains that are used in moral evaluation and decision-making. The first is the part responsible for the (cooperation-inducing) emotions we discussed in the last section. He calls this “automatic mode”. The second is less personal, slower, and allows for conscious application of decision rules. He calls this part “manual mode”.
Surprise, surprise! Josh’s argument is that the emotions that promote cooperation evolved to solve the Tragedy of the Commons. Automatic mode forms the basis of morality in different cultures all over the world. However, when people from different cultures (moral tribes) get together and have to make decisions, their “common-sense morality” isn’t very effective or helpful.
In order to handle those discussions — The Tragedy of Morality — we need to use the other part of our brain: the slower and less emotional one. The manual mode.
This of course requires a part of the brain that decides which mode to use at any given time. Josh and others have found all these parts of the brain — the bits for automatic mode, the bits for manual mode, and the bit for deciding which to use — using clever neuroimaging experiments.
Josh summarizes:
Thus, we have two kinds of moral problems and two kinds of moral thinking. And now we can answer our question: The key to using our moral brains wisely is to match the right kind of thinking with the right kind of problem… Thus, when the problem is Me versus Us (or Me versus You), we should trust our moral gut reactions, also known as conscience: Don’t lie or steal, even when your manual mode thinks it can justify it. Cheat on neither your taxes nor your spouse. Don’t “borrow” money from the office cash drawer. Don’t badmouth the competition. Don’t park in handicapped spots. Don’t drink and drive. And do express your contempt for people who do such things. When it’s Me versus Us, trust your automatic settings. (The moral ones, not the greedy ones!)
The key takeaway here parallels the last section’s: the automatic machinery is important for the Tragedy of the Commons and useless for dealing with other cultures and moral tribes. When it’s “Us vs. Them”, we have to switch to the manual settings.
Stop Talking About Rights
The main thrust of the book is that Utilitarian Moral Philosophy is going to solve all our problems:
[We] should put [our] tribal ideologies aside, figure out which way of life works best, and then live that way
Rather than focus on any specific ideology (or local morality), we should just optimize for improving the quality of human experience, or something like that. A lot of the book goes into the nuances of this argument, but attacking or defending Utilitarianism is not what we’re here to do.
What is relevant is his claim that it will be much easier to come to agreement if we stop talking about Rights so much.
The Right to Marriage. The Right to Life. The Right to Guns. The Right to Speech. The Right to Choose.
In most of our political discussions, we end up using Rights-based language when we cannot prove the validity of our claims. Rights are a way to end a conversation, not start one.
Imagine Rights are a sort of minor God for your personal religion. Referring to them will have no impact on people of another religion.
As Barack Obama once said (quite bravely for a politician, I have to admit):
Democracy demands that the religiously motivated translate their concerns into universal, rather than religion-specific, values. It requires that their proposals be subject to argument, and amenable to reason. I may be opposed to abortion for religious reasons, but if I seek to pass a law banning the practice, I cannot simply point to the teachings of my church or invoke God’s will. I have to explain why abortion violates some principle that is accessible to people of all faiths, including those with no faith at all.
So rather than referring to the woman’s Right to Choose or the fetus’s Right to Life, effective pluralistic discussion has to appeal to something everyone can get down with. Josh calls this a “common currency of value”.
What is that common currency? I’ll get there in a sec.
He then flips his attack on Rights into a very limited defense, explicitly endorsing the use of Rights to close a conversation on a subject where the issue is “settled”. Of course, every wacko is going to have their own sense of which issues are “settled” and which are not.
Civil Rights, Genocide, and Apartheid are “settled” issues unless you’re talking about the Occupied Territories, to pick an incredibly contentious example…
The key takeaway here is that in order to make sense to other tribes, we have to phrase our arguments in terms of something they acknowledge and respect (the common currency of value).
Practical Application in Your Own Life
Now let’s get real. What’s the common currency of value?
We all care about experience, both our own and others’. We all want to be happy. None of us wants to suffer. And our concern for happiness and suffering lies behind nearly everything else that we value, though to see this requires some reflection.
So far so good.
We can take this kernel of personal value and turn it into a moral value by valuing it impartially, thus injecting the essence of the Golden Rule: Your happiness and your suffering matter no more, and no less, than anyone else’s.
I almost laughed out loud when I read this. I mean, it would be so great if we could actually do this.
Arthur C. Clarke said:
Any sufficiently advanced technology is indistinguishable from magic.
My corollary:
Any sufficiently advanced morality is indistinguishable from enlightenment.
And yet, these kinds of leaps are what the evolution of consciousness is made of!
My takeaway is that any common currency of value has to be hard-earned on an individual basis. Nobody is going to discard partiality and ethnocentrism overnight. Even Obama’s injunction — to frame our desires in terms of common interest — is hard enough to do, and a much lower bar than impartiality.
But there are some awesome, short practical tips we can end with (thank you, Josh) that apply to any contentious discussion:
Technique 1: Use controversy as a signpost
Whenever there is controversy, consciously downshift from automatic to manual mode. Check your assumptions, your use of Rights language, and how much you think you know. Then go directly to Technique 2.
Technique 2: Understand how it works
Experimental research shows that when people are asked to explain a complex proposal (like single-payer healthcare or abortion policy) before supporting or attacking it, they realize how little they actually know and subsequently moderate their opinions.
If you’re headed towards an argument and realize you need to shift to manual mode, one effective way of doing that is to ask both yourself and the other person to explain your respective understandings of the policy in question, and then identify the holes (not errors) in those understandings. This can have the effect of tempering both of your opinions.
Technique 3: Watch out for “Rights” language
Rights and duties are the manual mode’s attempt to translate elusive feelings into more object-like things that it can understand and manipulate.
Use “Rights” when you want to end a discussion, not make a point. This might be the most practical piece of advice I gleaned from the whole book.
Similarly, when someone brings up a “Right”, inquire as to how they would explain the benefits of their position to someone who doesn’t share their belief in that “Right”.
Technique 4: Common Currency of Value
I believe a common currency of value has to be established in every conversation. Tedious as hell, I know. But ultimately, it’s the kind of slowing down and getting deep that builds trust and relationship. Going up the chain of “Why?” transforms an argument about a position into a conversation about a specific human’s humanity.
Step 1 is to get to some shared value that resonates with both of you.
Step 2 is to ask about the impartiality claim I was so dubious of above. For example, if the value is “happiness”, we can ask:
Do you think your happiness and your suffering matter no more, and no less, than anyone else’s?
If both people agree, you can then jointly evaluate the policy’s implications against the common currency of value you have taken the time to establish.
Okay.
Three Insights. Four Techniques. A love letter to Utilitarian Moral Philosophy. Here’s a link to the blog post with all the quotes.
If this is missing something in order to be truly helpful, let me know. I’m trying to find the balance between concision and relevance and I know it’s an iterative process.
Together
~ Ankur