Unpredictable patterns #14: Changing minds, building tribes
From KOFs to soldier minds and tribes, surprises and the real reason behind the tech lash.
Dear reader,
It is Easter break here in Sweden, and I am spending it in the mountains. It is a great time to recharge, think and read. I am currently reading up on the future of financial services, money and banking, and am finding it quite extraordinary. The way we organize value in our societies is much more fundamental to who we are than I think I appreciated, and the possible future scenarios seem more numerous than at almost any earlier time. That is intriguing.
This week’s note is about how we change minds - our own and others’ - and suggests that we may need to look more carefully at the models we have of how opinions form in society.
Understanding opinions
A society evolves in a web of different opinions, and no company or organization can operate free of those opinions - which makes understanding them, and figuring out how they change, a key challenge for any sufficiently successful company.
Most theories of how opinions change, however, are surprisingly simple.
Let’s look at the simplest possible toy theory, one that figures in the background of many discussions about opinion research: in this model, society is made up of individuals who hold propositional opinions - x is good, y is important, z is harmful to society - and these individuals change their opinions if they are persuaded to do so through new facts or emotional manipulation.
The latter - that opinions are held for emotional rather than rational reasons - is sometimes pointed to as a problem or a challenge, and a general wish for people to be more rational is not rare amongst those who work with opinion research or formation.
This toy model suggests, then, that if we want to change opinions in a society we need to ensure that individuals are faced with new facts or made to feel differently about the subject that we are interested in. The way that this is usually accomplished is through public debate or marketing campaigns.
This model can be built out by suggesting that not all of society’s members have equal influence over the opinions held in a society, but some are key opinion formers - KOFs - and their opinions in turn shape other people’s opinions.
Who these KOFs are varies across different theories - some think that KOFs are the people who are most visible and prominent in society: politicians, artists, authors, thought leaders and famous business leaders. Others think that there are far more KOFs and that they are better described as ”grass tops” - people with a certain standing in their local communities, citizens of some renown.
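If we wanted to make the toy model concrete, a few lines of code would suffice. Here is a minimal sketch - every name and number below is made up for illustration, not taken from any actual opinion research: individuals hold a yes/no opinion on a proposition, each exposure to a message persuades with some fixed probability, and a KOF is simply someone whose message reaches more people.

```python
import random

# A minimal sketch of the toy model - every number here is made up.
random.seed(42)

POPULATION = 1000
P_PERSUADED = 0.1  # chance that a single exposure persuades one person

# Everyone starts out unconvinced of the proposition "x is good".
opinions = [False] * POPULATION

def campaign(opinions, reach):
    """One round of messaging for x: each reached person may be persuaded."""
    for i in random.sample(range(len(opinions)), reach):
        if random.random() < P_PERSUADED:
            opinions[i] = True

campaign(opinions, reach=10)   # an ordinary citizen's reach
campaign(opinions, reach=500)  # a key opinion former's reach

print(sum(opinions), "of", POPULATION, "now believe that x is good")
```

The whole model rests on two assumptions: that opinions live in individuals, and that they change through exposure to messages.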
There are several problems with this toy model, however.
The first is the idea that individuals hold opinions. Opinions are much more likely to be a part of a collective identity, where you buy into a network of different opinions because people around you tend to have these opinions. It is a question of cognitive economics - you will not have the time to examine all the opinions you are expected to have (and this is a peculiarity of modernity, that we are supposed to have opinions about most things) - so you will adopt clusters of opinions.
These clusters will often not even be logically coherent or consistent - but they will connect you to an in-group that you want to belong to. And you will hold them no less tightly for that!
What this also means is that opinions are not propositions - they are perhaps better described as implicit dialogues, sets of questions and answers - and often not the questions you think.

Take the tech lash. In the toy model the analysis would be something like this: the tech lash is a series of propositions believed to be true about technology and technology companies, and making progress on changing the tech lash is about shifting or changing those beliefs.
In our alternative model, the analysis could look like this instead: a lot of people are asking themselves questions around the size and influence of large companies, and as a consequence of these questions they are increasingly skeptical about technology companies - as the industry that has been most successful in the last decade, and where the perceived size and success breed scrutiny and suspicion.
Note that in the second analysis people do not hold beliefs about technology companies, they have questions about corporate influence and size. These are really different scenarios. In the first we want to convince people that technology is good and technology companies are doing the right thing, in the second we want to ensure that we respond to the legitimate questions about corporate power and influence.
A good exercise, if we are interested in using the second model, is to list the questions that we think might be underpinning the provisional answers people have - the opinions (and this is what opinions really are: provisional answers to socially shared questions).
This is true not just for political opinions, but equally for a consumer buying a car or choosing a bank. What is the question people are asking, and how can you answer that question effectively and in a way that makes it possible to choose to believe something new?
Six degrees of belief
There is also a massive difference between different kinds of opinions. Some are, at best, loosely held and some are core to a person’s identity.

The often demonized RAND thinker Herman Kahn (Kahn was fond of arguing the unthinkable and published a handbook for thermonuclear war - a how-to for what most people just saw as the end of humanity) suggested that people believe things in six degrees of belief:
”We argue that it is useful to distinguish at least six degrees of belief in a theory, a proposition, position, or a policy: atheism, agnosticism, skepticism, deism, Scotch verdict, and acceptance. For most purposes we find in these six degrees and some associated nuances a satisfactorily rich and precise vocabulary. Our usage of these words will be slightly idiosyncratic but it is close enough to the normal literary meaning so that readers who are not cognizant of the definitions that follow can still comprehend our use of these terms.”
Paul Aligica. The Essential Herman Kahn: In Defense of Thinking (Kindle Locations 2237-2240). Kindle Edition.
Kahn took his degrees of belief from theology and law - arguing that these age-old disciplines actually had a much better understanding of how we collectively believe than any modern psychological theory - and so opened up a different set of models, the theological and the legal, for us to use when we think about opinions.
His degrees are interesting in that they suggest very little rational analysis on the part of the individual - when he describes the strongest degree of belief it is not rational conviction but, tellingly, acceptance that he uses to describe how we believe when we believe strongly. And equally, when we do not hold a belief we are not open to argument - we actively believe something not to be true, just like the atheist.
This mental model is very helpful, since it allows us to see the problem as a problem of conversion and evangelism, not one of rational argument and logic. It also allows us to better understand the ideological qualities of opinions, and the challenge they pose. If you are dealing with an agnostic, in Kahn’s terminology, you have to take that person through skepticism, deism and Scotch verdict, and finally to acceptance, if you want to change their mind.
A word about what Kahn calls “Scotch verdict” may be in order. In old Scottish courts, Kahn notes, the binary verdict of guilty or not guilty had a third alternative - not proven - which essentially meant that the court thought the person guilty, but that the high bar of the legal proceeding had not been reached. This is a damning position to be in, and it is not unlikely that a lot of public opinion resides in this category - ”there may not be proof, but it seems right” - and this is a very hard form of opinion to shake someone out of.
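It can help to think of Kahn’s degrees as an ordered scale on which a persuasion attempt moves a person at most one step at a time. This is a toy formalization of my own - nothing Kahn proposed - but it makes the arithmetic of conversion vivid:

```python
from enum import IntEnum

# Kahn's six degrees of belief as an ordered scale - a toy formalization.
class Belief(IntEnum):
    ATHEISM = 0         # actively believes the proposition is false
    AGNOSTICISM = 1
    SKEPTICISM = 2
    DEISM = 3
    SCOTCH_VERDICT = 4  # "not proven - but it seems right"
    ACCEPTANCE = 5      # believes, by acceptance rather than rational proof

def persuade(current: Belief, succeeded: bool) -> Belief:
    """A persuasion attempt moves someone at most one degree at a time."""
    if not succeeded:
        return current
    return Belief(min(current + 1, Belief.ACCEPTANCE))

# Converting an atheist takes five separate successful conversions,
# not one decisive argument.
state = Belief.ATHEISM
for _ in range(5):
    state = persuade(state, succeeded=True)
print(state.name)  # ACCEPTANCE
```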
The second point we can extract from Kahn’s six degrees is that opinion is belief, not rational belief, and so when we try to shift opinions we need to ensure that we also work with the mechanisms that change belief.
Most important of these is the element of surprise.
Surprise can be described in many different ways. One of the simplest descriptions suggests that we are surprised when our expectations are blown up and we suddenly need to update our understanding of a situation. This is key: this is the moment when we really are open to shifting beliefs, and unless we are in this mental state it will be very hard to move someone from one degree of belief to another.
Surprise seems so incredibly simplistic that we hesitate to use the word, but it is an accurate description of what you want to do here - and here is why: when someone is surprised, they experience a cognitive dissonance between the information they are given and the views they hold to be true - but without the negative reaction. They have not been attacked; they have discovered - hopefully by themselves - that the new information they are hearing is not even roughly consistent with what they already know. This, in turn, forces them to re-adjust their web of beliefs, and this is where new information may actually change someone’s mind.
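If we wanted to make this slightly more precise, we could borrow the information-theoretic notion of surprisal - this is my gloss, not anything from the literature discussed here: a message only opens the door to belief change when it is improbable enough under what the listener already expects.

```python
import math

# Surprisal measures how improbable an observation is under current
# expectations. The threshold below is an illustrative assumption.
def surprisal(probability: float) -> float:
    return -math.log2(probability)

OPENNESS_THRESHOLD = 3.0  # bits - made up for illustration

def open_to_update(expected_probability: float) -> bool:
    """A message can shift beliefs only if it is surprising enough."""
    return surprisal(expected_probability) > OPENNESS_THRESHOLD

# A message the listener expected to hear bounces off; one they assigned
# almost no probability to forces a re-adjustment of the web of beliefs.
print(open_to_update(0.9))   # False - expected, changes nothing
print(open_to_update(0.05))  # True  - surprising, opens the mind
```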
Let’s look at an interesting example.
In 2015, after a Muslim couple carried out the terrorist shooting in San Bernardino, California, then-president Barack Obama wanted to address rising hate speech against Muslims - worried that the attack would ignite hate crimes against Muslims across the country. Evan Soltas and Seth Stephens-Davidowitz noted, in an op-ed in The New York Times, that his first attempt to do so fell flat. They measured this by looking at Google searches, monitoring how searches like ’I hate muslims’ and ’are all muslims terrorists’ grew after the speech in which Obama asked for tolerance and respect for different religions. The appeal to reason, and to the facts of the case, met the belief immune system and hardened the views of those who thought all Muslims were would-be terrorists.
Obama then changed tack. Instead of giving facts about Islam as a religion, and pointing to its peaceful doctrine and its majority of non-violent believers, he said the following: ”Muslim Americans are our friends and our neighbors, our co-workers, our sports heroes and yes, they are our men and women in uniform, who are willing to die in defense of our country.” Legendary athlete Shaquille O’Neal was one of the people identified as a Muslim, and now the searches changed from hate to curiosity - including curiosity about Muslims serving in the US military and about famous athletes who were Muslims.
Obama had surprised, rather than lectured, the audience - and now they needed to understand better who these soldiers and athletes were.
The simplified model in which surprise leads us to update our beliefs is far from perfect, but it is a starting point for thinking differently about which stories we choose and how we tell them. It also helps us check our messaging with a simple question: is there anything surprising about what we say here?
It is also useful to map out the expected - what do you think people expect you to say and do? - and then seek ways of upending those expectations.
‘Tis hard work!
It is hard to change someone’s mind. Once they believe something, even facts don’t seem to do the job, as Elizabeth Kolbert notes in a much-shared article in The New Yorker. Kolbert cites the work of, among others, Jack and Sara Gorman - who suggest that one reason it is so hard is that our reason did not evolve to find out the truth - it evolved for entirely different purposes. Kolbert:
”In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)
The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.”
Which, then, are those other purposes the mind evolved for? The writer and rationality advocate Julia Galef has suggested that reasoning typically serves one of two purposes: to find new things (this she calls the scout mind) or to increase the cohesion of the group (this she calls the soldier mind). Neither of these minds seeks objective truth - they seek novelty or identity.

That is a sobering thought for anyone involved in opinion formation or in trying to change public opinion. Whether the aim is to correct the views of people who resist vaccines, or to persuade a majority that we need to invest more in science, the task is not to convince with facts - but to surprise someone in a way that speaks to either their soldier mind or their scout mind, and so to offer identity or novelty.
This is a far cry from the toy model that we started with.
We are no fools
There is another observation worth pausing on here: there is a great tension between this view and another, more popular one - the view that technology companies can, with great ease, manipulate us into believing anything, even things that are bad for us.
We called this the ”the internet made me do it” theory in an earlier note - and that theory is in direct conflict with the observations we have made here about how hard it really is to change people’s views and opinions. It cannot both be the case that it is hard to change opinions and that it is easy for tech companies to do so.
And if it were easy for tech companies, one would have to note - tongue in cheek - that they are not availing themselves of that power especially well. If tech companies could change our views on a whim, why would they not make the tech lash and all the criticism of tech companies go away?
Hugo Mercier, a cognitive scientist, has suggested in his book Not Born Yesterday that we are not easy to fool - indeed, he argues, if we had been easy to fool we would have been taken advantage of and selected against in evolution.

Now, the challenge with this argument is that it starts from the assumption that not believing the facts is the same as being fooled. Unfortunately, that is not the case. Not believing facts can, as we have seen, instead open up novelty or strengthen identity - and this is the challenge.
We are not easy to fool, but it is easy to make us believe things that strengthen the group - it is easy to make us believe what the tribe believes.
And this is the thing with the tech lash: it has become a tribal belief. The scholars and others who now work in that tradition form a tribe of Kahnian atheists when it comes to technology, looking for new ways of strengthening the group identity they belong to.
This is not in any way a derogatory description, by the way. All of us hold a lot of our beliefs in tribal ways, because we belong to a certain group - but the task of changing such a belief becomes radically different from the task of changing beliefs in our toy model, since every tribal belief system has an immune system that is really hard to get past.
It is helpful at this point to start thinking about comparable tribes, and to try to categorize the problem of changing these beliefs by analogy with other groups. The clearest example here may well be the real atheists - Richard Dawkins, Christopher Hitchens, Daniel Dennett, Sam Harris - and their tribe, the millions of people who follow them and believe them.
Changing the views of the people evangelizing the tech lash is probably a challenge of the same caliber as trying to get Dennett and Dawkins just to open up to agnosticism - let alone getting them to skepticism or deism. It is not just unlikely - it seems almost impossible.
What the Church has done instead is to try to pick off individual critics, like Dawkins, and brand him as too simplistic and not respectful of people’s beliefs (a branding he has happily leaned into), making it possible to at least chip away at the edges of the atheist camp.
Opinion formation here is best described as a battle of tribes: a competition between different organizations of the knowledge we have, in groups with higher or lower cohesion. In fact, group cohesion ends up being a key metric for understanding which views will dominate in a society. Not which beliefs are true, but which groups are most tightly knit in believing certain beliefs.
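One way to convince yourself that cohesion rather than truth decides the outcome is to play with a classic DeGroot-style averaging model, in which everyone repeatedly adopts a weighted average of the opinions of those who influence them. In the sketch below (all weights are made up), a tight tribe that mostly listens to itself pulls the eventual consensus almost all the way to its own position:

```python
import numpy as np

# DeGroot-style opinion dynamics: each round, every agent adopts a weighted
# average of the opinions of those who influence them. Weights are made up.
# Agents 0-2 form a tight tribe (high mutual weights, little outside
# listening); agents 3-5 are loosely connected and listen to everyone.
W = np.array([
    [0.80, 0.09, 0.09, 0.02, 0.00, 0.00],  # tribe member: mostly the tribe
    [0.09, 0.80, 0.09, 0.00, 0.02, 0.00],
    [0.09, 0.09, 0.80, 0.00, 0.00, 0.02],
    [0.15, 0.15, 0.15, 0.25, 0.15, 0.15],  # loose member: listens broadly
    [0.15, 0.15, 0.15, 0.15, 0.25, 0.15],
    [0.15, 0.15, 0.15, 0.15, 0.15, 0.25],
])

# The tribe starts at opinion 1.0, the loose group at 0.0.
x = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

for _ in range(200):  # iterate to (approximate) consensus
    x = W @ x

print(np.round(x, 2))  # consensus lands close to the cohesive tribe's view
```

The tribe wins not because it is right, but because it is stubborn and listened to - cohesion is leverage.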

Silicon Valley used to be a tightly knit group - organized around the likes of Stewart Brand or John Perry Barlow - but it lost its cohesion, and even the early cypherpunks and copyright warriors have now dissolved, as the commercial success of the tech industry has thinned out the ideology behind the work and the visions. Can a technology company have a vision these days? It seems harder!
This, then, gives us an entirely different model of the rise of the tech lash: it was not about new opinions or new ideologies arising in conflict with the old - no, first the old ones had to dissolve and the old tribes fall apart. The rise of the tech lash started with the fall of cyberspace, which left a vacuum in society in all questions about the ideology of technology and progress - a vacuum that was soon filled by technology skepticism, a fast-forming tribe that sensed the weakness of the crumbling tribe of early cyberspace visionaries.
The real challenge, then, may be not what the tech lash has led people to believe so much as the lack of any beliefs on the other side to organize a new tribe around - it is not what the critics believe, it is the lack of anything for the sympathizers to believe in.
If this is true, the problem is not changing the views of the tech lash tribe, but finding the basis and the views that can form an alternative tribe - and this requires understanding closely how corporations, tribes, commercial success and beliefs are intertwined.
So what…
What does this mean, and how can it be operationalized? I think there are a few things here, but the most important is to spend more time on the problem of opinion making and opinion shaping. We need to do far more than list a set of key opinion formers and hope that the toy model described at the beginning of this note is true. Even just analyzing the opinions you are interested in as tribal - and understanding the structure of the tribe you are arguing with - is important.
Aside from that, there are a few really simple things you can try if you are interested in experimenting:
Add the surprise test to anything you want to communicate, or to any campaign you are planning around policy or political issues. How is it surprising? What in your message opens up that moment of cognitive dissonance that helps people find new views?
Explore soldier / scout perspectives when communicating: how does this help a group get closer together, or what new opportunities are you opening up? Naturally, all communication should be evidence-based, but how is it packaged so that it speaks to the soldier and scout minds? This is another easy test to attach to campaign reviews.
Spend as much time on identifying your own tribe as on your opponents. A lot of time is spent in organizations identifying and understanding critics and detractors. If you start to spend your resources on understanding why people sympathize with your positions, and on strengthening your own tribe instead of attacking your opponents, you are likely to build the basis of a soldier mindset that can really change the game. My guess would be that organizations today spend 80% of their attention on opponents and only 20% on allies. If they reversed that, they would probably be better off.
Understanding opinion change as a competition among tribes is a good mental model for exploring new options.
On the blog this week
The blog this week has been Easter quiet. Two posts might still be of some interest.
A new episode of Regulate Tech on building policy teams.
A small meditation on B-sides and how they created a unique artistic space.
And as always, thank you for reading and let me know if you have any ideas, pushback or thoughts to share! And if there is anyone else you think could be interested in reading these notes do let me know!
Take care and speak soon,
Nicklas