Hugo Mercier and Dan Sperber are the authors of “The Enigma of Reason,” a new book from Harvard University Press. Their arguments about human reasoning have potentially profound implications for how we understand the ways human beings think and argue, and for the social sciences. I interviewed Mercier about the book.
HF: So, many people think of reasoning as a faculty for achieving better knowledge and making better decisions. You disagree. Why is the standard account of reasoning implausible?
HM: By and large, reasoning doesn’t fulfill this function very well. In many experiments — and countless real-life examples — reasoning does not drive people towards better knowledge or decisions. If people start out with the wrong intuitive idea, and then start reasoning, it rarely does them any good. They’re stuck on their initial wrong idea.
What makes reasoning fail is even more damning. Reasoning fails because it has a so-called ‘myside bias.’ This is what psychologists often call confirmation bias — people mostly reason to find arguments showing that whatever they were already thinking is a good idea. Given this bias, it’s not surprising that people typically get stuck on their initial idea.
More or less everybody takes the existence of the myside bias for granted. Few readers will be surprised that it exists. And yet it should be deeply puzzling. Objectively, a reasoning mechanism that aims at sounder knowledge and better decisions should focus on reasons why we might be wrong and reasons why other options than our initial hunch might be correct. Such a mechanism should also critically evaluate whether the reasons supporting our initial hunch are strong. But reasoning does the opposite. It mostly looks for reasons that support our initial hunches and deems even weak, superficial reasons to be sufficient.
So we have a complete mismatch between, on the one hand, what reasoning does and how it works and, on the other hand, what it is supposed to do and how it is supposed to work.
HF: So why did the capacity to reason evolve among human beings?
HM: We suggest that the capacity to reason evolved because it serves two main functions:
The first is to help people solve disagreements. Compared to other primates, humans cooperate a lot, and they evolved abilities to communicate in order to make cooperation more efficient. However, communication is a risky business: There’s always a risk that one might be lied to, manipulated or cheated. Hence, we carefully evaluate what people tell us. Indeed, we even tend to be overly cautious, rejecting messages that don’t fit well with our preconceptions.
Reasoning would have evolved in part to help us overcome these limitations and to make communication more powerful. Thanks to reasoning, we can try to convince others of things they would never have accepted purely on trust. And those who receive the arguments benefit by being given a much better way of deciding whether they should change their mind or not.
The second function is related but still distinct: It is to exchange justifications. Another consequence of human cooperativeness is that we care a lot about whether other people are competent and moral: We constantly evaluate others to see who would make the best cooperators. Unfortunately, evaluating others is tricky, since it can be very difficult to understand why people do the things they do. If you see your colleague George being rude to a waiter, do you infer that he’s generally rude, or that the waiter somehow deserved his treatment? In this situation, you have an interest in assessing George accurately, and George has an interest in being seen positively. If George can’t explain his behavior, it will be very difficult for you to know how to interpret it, and you might be inclined to be uncharitable. But if George can give you a good reason for his rudeness, then you’re both better off: You judge him more accurately, and he maintains his reputation.
If we couldn’t attempt to justify our behavior to others and convince them when they disagree with us, our social lives would be immensely poorer and more complicated.
HF: So, if reasoning is mostly about finding arguments for whatever we were thinking in the first place, how can it be useful?
HM: Because this is only one aspect of reasoning: the production of reasons and arguments. Reasoning has another aspect, which comes into play when we evaluate other people’s arguments. When we do this, we are, on the whole, both objective and demanding. We are demanding in that we require the arguments to be strong before changing our minds — this makes obvious sense. But we are also objective: If we encounter a good argument that challenges our beliefs, we will take it into account. In most cases, we will change our mind — even if only by a little.
This might come as a surprise to those who have heard of phenomena like the “backfire effect,” under which people react to contrary arguments by becoming even more entrenched in their views. In fact, backfire effects seem to be extremely rare. In most cases, people change their minds — sometimes a little bit, sometimes completely — when exposed to challenging but strong arguments.
When we consider these two aspects of reasoning together, it is obvious why it is useful. Reasoning allows people who disagree to exchange arguments with each other, so they are in a better position to figure out who’s right. Thanks to reasoning, both those who offer arguments (and, hence, are more likely to get their message across) — and those who receive arguments (and, hence, are more likely to change their mind for the better) — stand to win. Without reasoning, disagreements would be immensely harder to resolve.
HF: Despite reason’s flaws, your book argues that, “in the right interactive context,” it “works.” How can group interaction harness reason for beneficial ends?
HM: Reasoning should work best when a small number of people (fewer than six, say) who disagree about a particular point but share some overarching goal engage in discussion.
Group size matters for two reasons. Larger groups are less conducive to efficient argumentation because the normal back and forth of discussion breaks down when you have more than about five people talking together. You’ll see that at dinner parties: Four or five people can have a conversation, but larger groups either split into smaller ones, or end up in a succession of short “speeches.” On the other hand, smaller groups will necessarily encompass fewer ideas and points of view, lowering both the odds of disagreement and the richness of the discussion.
Disagreement is crucial because if people all agree and yet exchange arguments on a given topic, arguments supporting the consensus will pile up, and the group members are likely to become even more entrenched in their acceptance of the consensus view.
Finally, there has to be some commonality of interest among the group members. You’re not going to convince your fellow poker player to fold when she has a straight flush. However, it’s often relatively easy to find such a commonality of interest. For example, we all stand to gain from having more accurate beliefs.
This article is one in a series supported by the MacArthur Foundation’s Research Network on Opening Governance. Neither the MacArthur Foundation nor the network is responsible for its specific content.
Read more here: http://www.washingtonpost.com/blogs/monkey-cage/wp/2017/07/12/most-of-what-you-think-you-know-about-human-reasoning-is-wrong-heres-why/ — by Henry Farrell. Originally posted on http://www.washingtonpost.com/blogs/monkey-cage