Tribalism: How to temper it

“Wild animals, lacking imagination, almost never do disastrously stupid things out of false perceptions of the world about them. But humans create artificial disasters for themselves when their ideology makes them unable to perceive where their own self-interest lies.”

E. T. Jaynes


The Eternally Divided World

The world is currently divided, though it always has been, and likely always will be. Because new issues keep arising, however, we tend to think of the current divisions as a new breed. In some ways this is true, but in others, it is not.

For as long as there are problems, there will be different opinions on how best to approach or solve them. Once opinion is united, a problem ceases to be perceived as one, even if it still produces objectively negative outcomes. A problem, almost by definition, is something we cannot agree on how to solve; disputation is a given. Therefore, regardless of our desires for utopia, problems will likely be a permanent feature of a human-inhabited world — for example, see this essay for a conceptual overview of how irreconcilable value-systems can emerge.

To be clear, though, this divergence of opinions regarding how to solve a problem brings many benefits. But it also produces issues of its own.

At the time of writing, a variety of concerns plague us. We have differences of opinion regarding the threat of a novel coronavirus and the subsequent support for, or criticism of, government-enforced lockdowns. There is political unrest relating to police, violence, forms of protest and the treatment of minority groups. The decades-old feud over science and its limitations is being fanned by discussion of a vaccine and “Big Pharma.” Questions regarding the vaccine’s speed of creation, its safety and even its necessity are producing wildly divergent thoughts about what best practice in this context would be. This is to name but a few.

We all have fairly strong opinions on these topics. We must ask ourselves, though, how warranted are they?

To say these are complex issues is an understatement. Though, in my opinion, to say nothing more is also a cop-out. Under-simplifications can be as destructive as over-simplifications. Acting too slowly can cause as much harm as acting too swiftly. While the world is multifaceted beyond our comprehension, effectively navigating it will require more than two groups of people: those who supposedly fail to recognise nuance, and those who do nothing but notice the absence of nuanced thought in others.

(While I mock the near-incessant appeal to nuance amongst some thinkers, I do so out of personal jest; this is a group I undoubtedly fall into.)

Decisions need to be made; there is no escaping that. If no change is instigated, then a decision has still been made, opting for the status quo in that domain — at least for the meantime. Ultimately, decisions occur whether we want them to or not. This is a product of time passing, and we cannot stop it. Some opportunities, however, provide us with an ability to maximise the utility created by our decisions. These powerful, scalable decisions we must take very seriously.

Given all this — a complex world, with complex issues that necessitate decisions — we may want to deeply consider how we act. Life is notoriously unfair; we might want to stack the deck in our favour as much as possible. 

How may we do this?

By being as informed as possible, given the time and resources available.

How may we become as informed as possible? 

Being as informed as possible requires leveraging a variety of information networks — the antithesis of tribalistic behaviour, which is all too evident at present.

Let me ask you this: If you were pro-vaccination or opposed to it prior to the appearance of COVID-19, and you are of the same opinion when it comes to a coronavirus vaccine, what behaviour of yours could you point to that would demonstrate to someone that you are informed on this particular issue?

I think this is an important question to ask yourself.

If you cannot point to an engagement with alternative viewpoints upon the surfacing of COVID-19, then what distinguishes you from someone who is simply applying an old model of the world to a new situation? Do you suffer the condition of the old dog, incapable of new tricks?

This is not to say that as each new issue arises you should just flip a coin and determine your view, as if issues were isolated occurrences. Undoubtedly, they are linked. The perceived safety of a coronavirus vaccine should be informed by all the previous evidence — which also applies to our previous beliefs; what we thought once should inform what we think now, but not limit it. However, to conclude that a COVID vaccine is safe, or that a global conspiracy is afoot, based only on your previous model of the world is not informed, intelligent thinking. Moving closer to the truth requires consistently updating our priors. Improvement requires change.

The most effective way to do this is to engage, respectfully, with others whose beliefs diverge from your own. Too often we become increasingly sure of our conceptions when we have only analysed one strand of evidence.

In many cases, the way in which we form a view is akin to asking ourselves: Is there food in the house? We then check the fridge to find nothing and become somewhat sure there is no food in the house. We then proceed to check the fridge again, becoming even more sure. This process is then repeated until we are certain there is no food in the house; even though not once did we look in the pantry, fruit bowl or freezer.

You simply cannot quantify the truth-value of your evidence without comparing it to other sources. How indicative an empty fridge is of an empty house can only be determined by checking other locations.
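To make the fridge intuition concrete, here is a minimal sketch of the underlying probability, with every number invented purely for illustration. It shows why re-inspecting the same source should not raise confidence, however many times we do it.

```python
# A minimal sketch (hypothetical numbers) of the fridge example:
# updating a belief with Bayes' rule on a single observation.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5                # P(no food in the house), before looking
p_empty_if_no_food = 0.99  # assumed likelihood, for illustration
p_empty_if_food = 0.60     # food is often in the pantry, not the fridge

belief = update(prior, p_empty_if_no_food, p_empty_if_food)
print(f"After checking the fridge once:   {belief:.2f}")   # ~0.62

# Naively treating every re-check of the SAME fridge as fresh,
# independent evidence inflates confidence far past what it warrants:
for _ in range(4):
    belief = update(belief, p_empty_if_no_food, p_empty_if_food)
print(f"After 'checking' four more times: {belief:.2f}")   # ~0.92

# The fridge's contents haven't changed, so the extra looks carry no
# new information; the honest posterior stays at ~0.62. Only a
# genuinely independent source (pantry, freezer) can move it further.
```

The same arithmetic applies to re-reading opinion pieces from the one outlet you already trust: correlated evidence counted as independent is precisely how we end up far more confident than we should be.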

In the 160 years since John Stuart Mill published On Liberty, few, if any, have been able to phrase this point more perfectly. Mill said, “He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion.”

He who knows only his own side of the case knows little of that.

This is true irrespective of which side of which issue you fall on. Your tribe can be the scientific, the woke, the conservative or the progressive; it does not matter. Consistently re-engaging with confirmatory information sources inflates how confident one is relative to how confident one should be. It is only through successfully engaging an adversary that actual confidence should rise. Your ideas and beliefs do not become battle-hardened by sitting in the safety of the barracks, telling stories of fictional glory amongst comrades.

Let’s move away from the battle metaphors, however. While ideas can, and should, be put to war — humans should not.


A Field of Fallibility

All humans make mistakes — your opponents; your allies; you. To hate others for their fallibility is to do nothing more than amplify your own subjective biases.

Now, this is not to say that we all need to (do the impossible and) shed our subjective biases because anything we subjectively experience is wrong. This is not what I am saying at all — only that our subjective interpretations should be kept in check. To be fair, many of our subjective leanings towards anger or jealousy are well-ish reasoned and serve an evolutionary purpose. A good portion of them, however, blind us, and we would do well to remove them.

Let’s consider the key factors here. You and your tribe hold a certain opinion. Another tribe — the outgroup — holds a different one. If we accept that a single best, yet currently unknown solution to the issue does exist, then one of you is bound to be closer to it than the other. Given this, at least one of you is wrong in a relative sense.

Alternatively, both you and your opposition may be so far from a satisfactory conclusion that both of you can be considered wrong from a practical perspective. In optimal circumstances, only one of you is in error; it is entirely possible, though, that both of you are.

This should always be kept in mind. Your opponents are human and prone to mistakes. You are too. In any instance, it may be you who is setting yourself up to star on the wrong side of history.

When we boil it down, if we lead with hatred, how advantageous is that really likely to be? Of all the things that positively influence human welfare, are being wrong or increased levels of hostility on that list? That’s what we seriously need to consider. Not only are you possibly wrong — which is not useful — but you have also aggravated tribal tensions. Talk about doubling down on your mistakes. Making errors and enemies does not seem like a probable path to a profitable future.

At this point, I would like to go over the ways in which we all succumb to poor thinking. Sure, some of us are a little more susceptible than others, but we are all at risk. An article that I refer to often, titled The Science of Anti-Science Thinking, summarises this well. The authors find that many of the mechanisms that bring our fallibility to fruition fall into three categories.


1. Shortcuts:

Generally speaking, the brain looks to take the easiest route to solving a problem that still produces a satisfactory outcome. One of the common ways we achieve this is via heuristics: practical ways of approximating a solution to a problem.

The paper states:

“We use heuristics because they frequently work quite well. If a computer malfunctions, users can spend months learning about its various electronic components and how they are connected—or they can ask a computer technician. If a child develops a serious health problem, parents can study the medical literature or consult a physician.”

These heuristics are adaptive because they allow us to simplify complex situations into easier ones. We can then make a decision based on an easier problem. This tendency has become part of our operating system because, in evolutionary times, it was on average better to make a fast and potentially wrong decision than to attempt a slow but precise one. The examples above illustrate the heuristic of deferring to expert opinion. Another would be going with the group consensus. Both may be useful for approximating a good answer with relative ease, but they are no guarantee.

While these heuristics have likely always been far from perfect, due to the increasing complexity of modern life the results they produce are falling further and further short of the ideal. As the tidal wave of information and change continues to crash down on modern humans, the need to simplify increases. And the more we simplify an increasingly complex world, the more unsatisfactory our outcomes will be.

I said above that the brain attempts to take the shortest route to a satisfactory outcome. The issue, however, is that we don’t know ahead of time what will produce a satisfactory outcome. Very often our mental laziness results in taking a shortcut that is too short and doesn’t produce a viable solution to our problem.


2. Cognitive biases:

Even if we give ourselves more thinking time for a problem, this is no guarantee we have escaped incorrect conclusions. Heuristics provide an opportunity to make fast-and-intuitive wrong decisions. Cognitive biases, however, allow us to arrive at erroneous conclusions in a slower and more deliberate manner. If we analyse things in a skewed manner, as cognitive biases lead us to, then more processing is not an automatic failsafe against error.

In order to arrive at a rational conclusion, we must first decide to pay attention to a problem and consider the alternative solutions. This takes us beyond heuristics, where a rule is immediately and automatically applied. The next step, however, is to pay adequate attention to all factors and appropriately weight all pieces of evidence. This is where biases get in the way. Biases lead our already-held beliefs to influence how we form new ones. What we believe changes how and what we perceive. When starting from different initial beliefs, different people can draw different conclusions from the same evidence, as the sketch below illustrates.
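Here is a minimal, invented-numbers simulation of that dynamic: two readers are shown the same perfectly balanced evidence, but each weights confirming items more heavily than disconfirming ones, so they end up polarised.

```python
# Illustrative only: how biased weighting of the SAME mixed evidence
# drives two readers with different priors further apart.

def to_odds(p):
    return p / (1 - p)

def to_prob(odds):
    return odds / (1 + odds)

def biased_likelihood_ratio(belief, supports_h):
    """Evidence that agrees with the reader's current lean gets full
    weight; disagreeing evidence is scrutinised and discounted."""
    leans_h = belief > 0.5
    agrees = supports_h == leans_h
    strength = 2.0 if agrees else 1.2   # discounted when it disagrees
    return strength if supports_h else 1.0 / strength

def read_evidence(prior, evidence):
    belief = prior
    for supports_h in evidence:
        odds = to_odds(belief) * biased_likelihood_ratio(belief, supports_h)
        belief = to_prob(odds)
    return belief

# Perfectly balanced evidence: five items for hypothesis H, five against.
evidence = [True, False] * 5

for prior in (0.7, 0.3):
    print(f"prior {prior:.2f} -> posterior {read_evidence(prior, evidence):.2f}")
# prior 0.70 -> posterior 0.97 (even more sure H is true)
# prior 0.30 -> posterior 0.03 (even more sure H is false)
```

An unbiased reader, applying the same weight to every item, would end exactly where they began; the asymmetric weighting alone manufactures the divergence.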

The paper illustrates the same tendency with research on attitudes to gun control:

“If the initial hasty inspection of the data tended to favour the anti-gun-control group’s expectations, members would generally look no further, content with finding results that supported their particular bias. If the results contradicted the beliefs of the gun advocates, they would scrutinize the details of the study until they discovered the numbers that suggested the opposite conclusion.”

We all do this.

If you think this is only a tendency of anti-vaxxers or pro-choicers or capitalists or environmental activists or whoever you disagree with, then you would be wrong. Unless, of course, you are the first human to function without a human brain and all the epistemological bugs it contains. All of us pay more attention to information sources we agree with and exaggerate the errors found in ones we don’t; we all plan and make predictions about the future as if things will turn out how we want them to; when presented with evidence we cannot refute, we do not update our beliefs to the extent that we should and remain irrationally anchored to previous beliefs; and, we want to believe the world is just, so we explain why horrible injustices were deserved, in some way, by victims. These are just some of the cognitive biases that have been shown to inhabit all of us.

Again, some suffer more than others, but this is not a categorical defect only affecting a subpopulation. These flaws are standard-issue.


3. Social goals:

If a tendency towards rules of thumb rather than actual analysis of information, as well as the interference of prior beliefs on processing when we do analyse information, weren’t enough, we still have social goals as a hindrance to rational thought and behaviour. These are the means by which our ideas and actions are subconsciously altered so that we can be viewed more favourably by others.

All humans — you and I included — have this tendency towards social conformity and status management.

“… disagreeing with a group to which you belong is associated with increased activity in the amygdala, an area that turns on in response to different types of stress. Holding an opinion different from other group members, even a correct one, hurts emotionally. It therefore comes as no surprise that people are often reluctant to provide evidence counter to what the rest of their group believe. Social pressures can also influence how we process new information. Group consensus may encourage us to take recourse in heuristics or to cling tightly to an opinion, all of which can interfere with objective thinking.”

We often think we are validated in holding an opinion because others around us also share that opinion. Because of the social conformity bias, however, the fact that others share the same opinion may actually be evidence to the contrary. Finding others who share our opinion may only be evidence of our irrationality, not rationality. As Walter Lippmann said, “Where all men think alike, no one thinks very much.”


Discussion > Mutual-Destruction Debate

I wanted to outline the above for two reasons.

Firstly, it should shed some light on your own shortcomings. Understanding the above may not make it specifically clear where and when you will fall short of rationality in the future, but it gives you clues as to how such a result can come about. Understanding failure-modes is one major component of the progress puzzle.

The second reason I outline the above is that it gives clues as to how we can better deal with those we disagree with. Before we move on to that, though, we should quickly remind ourselves why we should interact with those who don’t share our opinion. When we do, we should have two fundamental goals: to acquire and to alter. We want to acquire any valuable information our counterpart may possess that we don’t, and also help them become better informed, altering their opinion. To do both of these things, we must be careful to attend to emotions and avoid triggering basal human fear responses.

The goal here is to be both well reasoned and reasonable. Being well reasoned helps with being correct, but unless you are also reasonable, you are never going to persuade people for the better and propagate correctness. Alternatively, if you are only reasonable, without any internal need for strong reasoning, then you are entirely at the whim of the social environment. Being only well reasoned can make you rude, while being only reasonable makes you a people-pleaser. Our goal is to be both, which allows us to be helpful.

Books such as Dale Carnegie’s How To Win Friends and Influence People demonstrate an understanding of this. Nor is this a new book — it was written in 1936 — and Carnegie was not the first to address the idea of reaching human logic via emotional avenues. Unfortunately, though, we seldom remember to bring this knowledge to mind when dealing with an ideological adversary.

Carnegie wrote the following:

“When dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion, creatures bristling with prejudices and motivated by pride and vanity.”

He also understood that merely attacking others does little to help ourselves, let alone them:

“Instead of condemning people, let’s try to understand them. Let’s try to figure out why they do what they do. That’s a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness.”

Here we catch a glimpse of something important: it doesn’t matter whether our ambitions are selfish or pro-social, we are still likely to be better served by treating those we disagree with respectfully. Even if we are motivated by a purely self-interested desire for knowledge, we can only learn what our opponents have to teach if we engage them in a civil manner. Alternatively, if we hope to steer them towards the truth for the pro-social benefits that will ensue, then it is also essential that we address them as humans of equal value to ourselves.

By being critical and disparaging we only increase the likelihood of poor outcomes. Stress leads all of us to dichotomise and turn situations of varying colours into ones of black and white. A passage from the brilliant book Language In Thought & Action by S.I. Hayakawa speaks to this point:

“In terms of a single desire, there are only two values, roughly speaking: things that gratify or things that frustrate that desire. If we are starving, there are only two kinds of things in the world so far as we are concerned at the moment: edible things and inedible things. If we are in danger, there are the things that we fear and the things that may help and protect us. At such basic levels of existence, in our absorption in self-defence or food-seeking, there are, in terms of those limited desires, only two categories possible. Life at such levels can be folded neatly down the middle, with all good on one side, all bad on the other, and everything is accounted for, because things that are irrelevant to our interests escape our notice altogether.”

Hayakawa refers to this as the two-valued orientation. Effective navigation of reality and complex situations, however, requires a multi-valued orientation — an ability to interpret the world with a full spectrum of colours; not just black and white or good and bad.

You likely know this already, though. Most of us inherently think that we see the complexities of the world and that others don’t — that is why we argue with our enemies. The important lesson we must take from Hayakawa applies not just to our own perception of the world, but to how our treatment of enemies alters their perception of it.

Through the mistreatment of other humans, by berating or insulting them, we create stress responses within them. This reduces their perception from one capable of a multi-valued orientation down to a two-valued one. This is a concern if we are insulting them for their ideological subscriptions, because in that case we are doing a poor job of changing those subscriptions. By inducing stress, we increase the propensity for herd-like behaviour, and we turn those who can see some of the cracks within their own ideology into more dedicated followers of it. Like an intellectual antelope, we are all willing to explore the plains of ideas, but only in the absence of predators. Introduce a lion into the mix and we huddle together with few distinguishing features. By attacking a group, the need for loyalty and dedication to that group is amplified within its members. You create greater amounts of groupism by trying to destroy it.

This was one of the central tenets of this paper by David Lahti and Bret Weinstein titled The Better Angels of our Nature: Group Stability and the Evolution of Moral Tension. Lahti and Weinstein suggest that members of a group adhere more strongly to the group’s own moral framework the more the stability of the group is challenged. Additionally, the paper points out that in order for a member to leave a group, they must have somewhere to go — we hominids are communal creatures, after all. In my analysis, both these findings support the idea of treating the out-group with reduced hostility. Not only does an all-out assault likely strengthen the enemy group’s resolve, it also creates the perception that your own group is not hospitable and, thus, not likely to accept new members. This brings us to a crucial point. You defeat bad ideas by acquiring converts, not by attacking the people whose minds adhere to them.

The paper states:

“One exception to the trend of stability-dependent cooperation follows from the fact that humans, although highly dependent on their groups, are not absolutely so. When stability is so low that the group might be doomed to dissolution, group members may consider the benefits of leaving the group to be greater than the benefits of serving it. Moreover, as individuals cease striving for the group when they believe that the cause is lost, they will be accelerating the collapse of the group, both by their withdrawal of aid and by the effects of that withdrawal on the assessments of others. This consideration indicates a threshold effect, with a sharp decline in cooperation and, thus, self-fulfilling group dissolution, once hopelessness of group persistence begins to spread. The existence of this tendency, however, depends on a perceived probability of successfully integrating into new groups following past group failures. Where there is no such hope, individuals would be expected to go down with the ship, continuing to employ the only strategy with any apparent chance of success.”

For this reason, I implore you to deeply consider how you interact with others who maintain a grip on ideas that differ from your own. If you truly despise an idea and think the world would be significantly improved by its eradication, then you will care enough to control your temper and treat others with respect. It won’t be easy — you are a fallible human, after all — but getting angry and resentful is taking the easy shortcut route to nowhere.

There are always two ways to get what you want when other parties with differing goals exist: conflict or cooperation. One is easier, though wasteful, and creates long-term hardship. The other requires more initial effort but is, ultimately, much more profitable for all. The best way to stop having enemies is to find allies. As Carnegie said, “A man convinced against his will, is of the same opinion still.”


Conclusion

In an ideal world, we would only think what we know. But, as we are all too aware, this is not an ideal world. It is because we think that we think we know. This is a very misleading and potentially dangerous phenomenon. Because something is in our minds, we believe it should be there; but this is only because we can’t step far enough back from our own thoughts to look at the bigger picture. This is why we need to be suspicious of the conclusions we arrive at.

Fortunately, by recognising our own fallibility we can begin to implement mechanisms which aid our efforts to overcome it. One which I have encouraged you to consider today is the respectful engagement of those who disagree with you. Far too often we succumb to sharing memes and screenshots with friends of the same opinion, and over time, our representation of what the out-group believes becomes so warped that we think no sane human could believe it. At this point, we become even less likely to engage with and understand them. We need to catch these feedback loops, and the airtight echo-chambers they create, before they cost us significantly.

Because you cannot hold up a mirror to your own biases — you barely even know where to look for them — you need the assistance of others. And not others standing in the identical ideological location; you need someone across the aisle or room to hold the mirror so that you can assess yourself properly. You can perform the same function for them, should you both opt for cooperative behaviour. By appropriately dealing with those you disagree with, you can both move closer to the truth.

With all this said, I simply ask that you keep the following in mind: Your contentions may be wrong — you are inescapably (for now) human. Given that, none of us has stable ground for presuming we are correct and hating others who don’t share our opinion. I suggest you hate being wrong, not others because you think they are wrong. And even if they are wrong, what good does hating them do?

Next step… If this topic is of interest to you — and making it this far in the article suggests that it is — then I would recommend checking out Dr. Mike Israetel’s lecture “Arguing To Convince” on YouTube. It provides an excellent overview of this topic.

