Unpredictable Patterns #41: Our horizons of ignorance
On limits of knowledge, charting the distribution of ignorance and 4 ways ignorance can be turned to your advantage
Dear reader!
We are in the grind now - the months of October and November are when everything has to happen, it seems. It is not unpleasant, however; there is value to this feeling of engagement and of being absorbed in one's work. I am lucky, privileged, to have a job and side projects that I find deeply rewarding - so for me work is meaningful, and having the pleasure of sending this little note out adds to it. Anything you can do to add folks to the newsletter is deeply appreciated! This week we will dig into how ignorance works a bit more and see what can be learned there!
P.S. If you are Swedish, some personal news - Fredrik Stjernberg and I are about to publish a book on the history, art and philosophy of questions. You can find more information and order it here. I will be speaking more about this shortly. D.S.
Ignorance
How often do you say "I don't know"? And if you are honest, should it be more often? If so, you are in good company - most of us do not want to admit that we do not know something, and so we try to be helpful: we suggest solutions, guess at answers or simply make things up so as not to seem ignorant.
Ignorance is associated with stupidity and evil - and in politics the ignorant are often the ones that do not hold the same things to be true as I do. Ignorance evokes emotions and anger, it suggests inferiority and stubbornness.
"If people only knew…" or "if people were only better educated…" are both complaints that imply that the problem with the world is a lack of knowledge on the part of the many, and that it can only be cured with education. Ignorance enters political analysis as a means to explain what elites do not want to understand - so, for example, it has become standard to blame both Brexit and the election of Trump on ignorance. The calls for a common baseline of facts suggest that if we just knew the same things, we would vote the same way - solve problems the same way.
Did education matter in the election of Trump? This is complicated. There are some studies that suggest that education levels did indeed drive election patterns in the 2016 election, as evidenced by for example 538’s analysis - but 538 themselves note that education and cultural hegemony may be related, and that education levels reflect long term economic well-being.
But more importantly: we need to be careful when we say that the less educated are more ignorant than the educated. And here we find one of the first puzzling things about ignorance: we need to differentiate between individuals who realize that there are things they do not know, and those who think they know a lot. In the worst-case scenario this means that the educated may be more ignorant, but less aware of their ignorance, since their educational attainment may well suggest to them that there are few things they do not know. The less educated, on the other hand, may realize that they do not know - but not care either way, because they prefer to go by intuition or feeling.
We need to differentiate between ignorance we are aware of and unconscious ignorance. This is why Rumsfeld’s first berated and then oft-quoted question about the “unknown unknowns” is so interesting - it is a meta-realization about the structure of ignorance. Rumsfeld’s categories reflect an understanding of the fact that the limits of our knowledge can only be partly known.
And from this follows a key observation in the grammar of ignorance: ignorance about ignorance is often confused with knowledge.

A corollary of this is the Socratic insight from the Oracle at Delphi: wisdom may be knowing how little we know, clearly delineating the contours of the vast ignorance that surrounds us and making decisions on the basis of the small island of knowledge we have cobbled together in the sea of uncertainty.
When we study ignorance we need to make a distinction between ignorance of facts and absolute ignorance. The first is simply a case where we do not know something yet, and a simple question can often help us find the answer. I don't know, for example, who was president of the US in 1886 - but a quick search reveals that it was Grover Cleveland. The second kind of ignorance is much more interesting, and it is about what we cannot know.
The second puzzle about ignorance is that it grows with our knowledge. Philosopher Karl Popper phrased it well:
”The more we learn about the world, and the deeper our learning, the more conscious, specific, and articulate will be our knowledge of what we do not know, our knowledge of our ignorance.”
What we cannot and will not know
Some of what we do not know we can find out. Other things cannot be found out at all - there are limits to our reason. Understanding these limits is crucial for understanding the world as we know it. In Noson Yanofsky’s The Outer Limits of Reason the author suggests a number of such boundaries of our reasoning that are interesting to explore more closely.
Among the many different limits that Yanofsky explores we find logical limits and complexity limits. The logical limits of the world are limits that have to do with contradiction. The simplest example is the following:
This sentence is false.
Is the sentence true or false? If it is false, then it is true. If it is true, then it is false. Can we know which it is? No - we have something here that is either nonsensical or needs to be translated into a different form, a different type-theoretical analysis - but the contradiction that this simple sentence generates is a small black hole of ignorance in our language. Not a concerning one - we do not need to get upset by it - but when the mathematician Kurt Gödel found a similar self-reference at the heart of mathematics, it radically expanded the ignorance around us and rendered the project of putting mathematics on a sound footing moot.
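The liar's failure mode can be shown mechanically. Here is a toy sketch in Python (purely illustrative) that tries both possible truth values for the sentence and finds that neither is consistent:

```python
def liar_is_consistent(value: bool) -> bool:
    # "This sentence is false": the sentence is true exactly when it is false,
    # so a consistent assignment would have to satisfy value == (not value).
    return value == (not value)

# Exhaustively check the only two candidate truth values.
assignments = {value: liar_is_consistent(value) for value in (True, False)}
print(assignments)  # → {True: False, False: False}: no consistent assignment
```

Neither True nor False works - which is exactly the small black hole of ignorance the sentence opens up.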
Small inconsistencies grow into fields of ignorance.
But this kind of ignorance is less interesting than the kind that flows from complexity, and there are several kinds of ignorance that emerge from sufficiently complex systems. Mathematician Marcus du Sautoy has explored some of this in his book What We Cannot Know: Explorations at the Edge of Knowledge. Du Sautoy suggests that there are chaotic systems that cannot be predicted and whose development we will never know - but there may be patterns in that unpredictability if we look at enough systems:
”As the French historian Fernand Braudel explained in a lecture on history he gave to his fellow inmates in a German prison camp near Lübeck during the Second World War: ‘An incredible number of dice, always rolling, dominate and determine each individual existence.’ Although each individual die is unpredictable, there are still patterns that emerge in the long-range behaviour of many throws of the dice. In Braudel’s view this is what makes the study of history possible. ‘History is indeed “a poor little conjectural science” when it selects individuals as its objects … but much more rational in its procedure and results when it examines groups and repetitions.’”
du Sautoy, Marcus. What We Cannot Know: Explorations at the Edge of Knowledge (pp. 54-55). HarperCollins Publishers. Kindle Edition.
Those who remember their science fiction will see where Isaac Asimov got his ideas for psychohistory. Braudel's notion is almost exactly that of Hari Seldon - the mathematician-hero of the Foundation series.
Knowledge has resolution - at some levels of resolution we can know, at others we cannot. We can predict human populations, but not individuals; collections of particles, but not individual electrons. Ignorance creeps closer to us from above as well as from below in the hierarchy of the universe.
Then, in addition to what we cannot know, there is what we choose not to know. A simple example: there are today affordable ways to sequence our DNA and find out if we have genetic predispositions for some diseases. The quality of these tests can be debated, but they do give some insight into our genetics. So, one question that often comes up in discussions is whether people would do this and then find out if they have, say, a genetic predisposition for Alzheimer's disease. Many if not most of my friends say no - they would prefer not to know, believing that the knowledge would be crippling.

This is just one instance of a case where we need to decide which of the things we can find out we should find out, and as medical diagnostics get better and better this issue will only become more pronounced. Elected ignorance is an ethical choice, and as I have argued elsewhere I think this means that we need an ethics of ignorance, too.
Ignorance and time
Ignorance can be absolute - mathematical or physical - but far more often it is related to time. The time it takes to find something out is simply longer than the time in which you need to find it out for the information to be valuable. This simple restriction on knowledge - that we need time to know something - sets up a horizon of ignorance that we have to deal with. The horizon of ignorance is simply the time within which you can find valuable things out. Beyond the horizon there may be knowledge, but not valuable or actionable knowledge.
In the case of actionable knowledge the example is simple: if you are asking yourself whether you should buy stock in company X before its quarterly report, you would like to know what that report says. You will know - but the point at which you know lies beyond the point at which you need to buy the stock, so you effectively have to act under ignorance.
In the case of valuable knowledge the situation is slightly different. Say you are trying to figure out whether you should bring an umbrella to work in case it rains this afternoon - but you have to catch the bus to work in 3 minutes, and checking the weather - finding your phone, googling the forecast, making an informed guess as to whether the prediction is correct and then deciding whether to bring the umbrella - will take you 5 minutes or more. Then the time it takes to find the information you need is longer than the time you have, and again you are effectively ignorant.
There are a lot of cases where the time required to build knowledge is longer than the time in which you have to act. In some cases you can build partial knowledge, and then use that - in others you just need to accept that you are acting under ignorance (and perhaps invest the time you have to mitigate bad outcomes).
This is the basis of a particular problem in decision theory - namely, when you should make a decision. Should you gather more information or decide on the information you have? To determine that you need a theory of the ignorance horizon - of what kinds of information you will never get in time. And consciously pushing that horizon out, by having scenarios in place that allow you more time to make decisions, is one important remedy for this problem.
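One way to make the ignorance horizon concrete is to attach an explicit time cost to each piece of information a decision depends on. A minimal sketch in Python - the names and numbers are hypothetical, chosen only to mirror the stock example above:

```python
from dataclasses import dataclass


@dataclass
class InformationRequest:
    """A piece of information a decision depends on (illustrative)."""
    name: str
    time_to_obtain_hours: float


def within_horizon(request: InformationRequest, decision_deadline_hours: float) -> bool:
    # Information is actionable only if it can arrive before the decision is due.
    return request.time_to_obtain_hours <= decision_deadline_hours


# The report arrives in 72 hours, but the buy/sell decision is due in 24.
quarterly_report = InformationRequest("quarterly report", time_to_obtain_hours=72)
print(within_horizon(quarterly_report, decision_deadline_hours=24))  # → False
```

Anything that comes back False is, for this decision, beyond the horizon - you either push the deadline out or accept that you are acting under ignorance.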
There is what we know, what we can reasonably predict and the scenarios that give us a range of the possible - and beyond that we have ignorance, on a scale from the certain through the probable and the possible to the uncertain.
Jeff Bezos famously notes that most decisions - at least the reversible ones - need to be made on 70% of the information you wish you had. If you wait for 90% you will become too slow. It may even be the case that the route from 70 to 90 is beyond the ignorance horizon and that you will never get to 90% at all!
Consciously thinking about the horizon of ignorance - how to push it, but also respect it in our decision making - helps us analyze decisions in better detail. If a decision consistently hinges on information beyond the horizon of ignorance we may want to think hard about how to push it out - or if that is not possible, find ways of hedging that decision.
All intelligence gathering should attach a time price to the intelligence that someone is asking for, routinely making it clear if that intelligence can be collected within the allotted decision time.
The distribution of ignorance
William Gibson famously said that the future is already here - it is just unevenly distributed. This is a good reminder that knowledge and behaviors are spatial things: they need to be understood as distributed across places. And so, of course, is ignorance. There will be, in any organization, patterns of ignorance and knowledge. Examining these patterns and understanding who knows what and when - and where there is articulated ignorance as well as implicit ignorance not consciously recognized - is an important task for anyone interested in improving organizational decision making.
In one model we can think of the organization as a network, and note that the edges of the network have one kind of ignorance and the center another. Specifically - because of the time it takes to make decisions and transfer information - the ignorance at the center about the periphery may lead either to decisions made too late or to decisions made without the requisite knowledge.
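This network view can be made concrete with a toy model: measure how many hops information must travel from the periphery to the center. A sketch in Python - the organization and its shape are invented purely for illustration:

```python
from collections import deque


def hops_from(source, graph):
    """Breadth-first search: minimum number of hops from source to every node."""
    distances = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in distances:
                distances[neighbour] = distances[node] + 1
                queue.append(neighbour)
    return distances


# A toy hub-and-spoke organization: HQ at the center, field units at the edge.
org = {
    "HQ": ["team_a", "team_b"],
    "team_a": ["HQ", "field_1"],
    "team_b": ["HQ", "field_2"],
    "field_1": ["team_a"],
    "field_2": ["team_b"],
}
print(hops_from("HQ", org))  # field units are 2 hops from the center
```

If each hop costs time, the hop count is a crude proxy for how stale the center's picture of the periphery is - and therefore for where its ignorance pools.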
This is why all organizations struggle with how to delegate decision making to such an extent that you maximize the quality of the decision, while respecting the limits of ignorance and knowledge.
One particular pathology to be wary of is the tendency of ignorance to pool in the center and at the top of organizations. There are multiple reasons why this phenomenon emerges. The most pernicious one is that no one wants to be the bringer of bad news.
Perhaps the most iconic image of this is the bunker scene from the movie Downfall, where an oblivious Hitler plans to issue orders and finds that the army is all but dissolved. His lieutenants - deathly afraid - finally tell him the truth and he erupts in the oft-parodied explosion of impotent anger that is the beginning of the end.
But it does not have to be just that, it can equally be the case of an organization that slowly splinters into silos, where some problems are seen as just pertaining to the silo itself and not to the wider organization. The silos thus keep the information to themselves until they start to realize that their problem is contagious.
Another version of this is the center keeping information from the periphery, having decided that the periphery is just an executing arm of the organization and does not have to be included in decision making.
The design of the distribution of ignorance here is important - and there are many different models to choose from. An interesting model for a thought experiment is octopus cognition, where cognition is decentralized into the different parts of the organism to a much greater degree than in, say, primates. As Peter Godfrey-Smith, a philosopher who studies octopus cognition, has put it:
”Further, in an octopus, it is not clear where the brain itself begins and ends. The octopus is suffused with nervousness; the body is not a separate thing that is controlled by the brain or nervous system. The usual debate is between those who see the brain as an all-powerful CEO and those who emphasize the intelligence stored in the body itself. But the octopus lives outside both the usual pictures.”
The cephalopod distribution of ignorance is much less chunky than the primate one, and as we evolve our organizations we may want to think about what the right balance here is.
An important point here is that ignorance can be collective, in the sense that even if I know X and you know Y, knowing both X and Y together may completely change the situation. If that knowledge is never combined, the resulting collective ignorance - even though the individuals have the required knowledge - will still be an organizational weakness.
Shallow knowledge, deep ignorance
Let's get back to not knowing. What does it mean not to know something? Take a very simple example: do you know how a fridge works? You do know - in some sense - that it is a technology that moves heat, creating a small cold environment, and that it requires energy to do so… You know things about the fridge - but could you build one? Could you repair it? Have you ever repaired one yourself?
Our knowledge is tiered - and beyond a very shallow surface knowledge of the world most of us have vast depths of ignorance.
Realizing this is a good start for becoming comfortable with not knowing, and seeking to understand things beyond that shallow surface. It requires practicing a special kind of curiosity, a curiosity that consciously seeks the next layer, rather than just resting content bobbing at the surface of knowledge.
Now, you may argue, there is no need for you to know how a fridge works - since that is something that people who work with fridges know. We have to allow for a certain social distribution of ignorance - in fact, that is just another name for the division of labor, is it not? This is right - in a sense - but being curious about the underlying principles of things is slightly different from becoming an expert on fridges.
There are, in fact, many different models of refrigerators - but one of the most common, the compression system, relies on a simple effect that we all know: a liquid absorbs heat when it evaporates. If you have not experienced this yourself, it is easy to do: just put a drop or two of reasonably pure alcohol on your hand, and as it evaporates you will feel a distinct cold on your skin.
Household refrigerators are built around this principle of evaporation, and then need to solve the problem of how to have something constantly evaporating in a closed system - imagine a constantly pouring bottle of alcohol that somehow conducts cold into a closed environment. This is accomplished by compressing the vapor back into a liquid with a compressor, so that the same refrigerant can evaporate again and again.
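The thermodynamic ceiling of this cycle is easy to compute. The ideal (Carnot) coefficient of performance for a refrigerator is COP = T_cold / (T_hot − T_cold), with temperatures in kelvin; real vapor-compression fridges reach only a fraction of this limit. A quick sketch in Python:

```python
def carnot_cop_refrigeration(t_cold_c: float, t_hot_c: float) -> float:
    """Ideal (Carnot) coefficient of performance for a refrigerator.

    COP = T_cold / (T_hot - T_cold), temperatures in kelvin. Real
    vapor-compression refrigerators achieve only a fraction of this limit.
    """
    t_cold_k = t_cold_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return t_cold_k / (t_hot_k - t_cold_k)


# Fridge interior at 4 °C in a 22 °C kitchen:
print(round(carnot_cop_refrigeration(4, 22), 1))  # → 15.4
```

Note the intuition the formula encodes: the smaller the temperature gap the cycle has to bridge, the cheaper it is to move the heat.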
The principle is using two different states of matter and the energy transitions between them - something that represents an interesting overall mental model to play around with.
Now, we should not overstate this - it is hardly good advice to spend a lot of time understanding all of your household utensils more deeply. But it is good advice to understand the phenomena you are interested in more deeply - to actually seek out the details. We have a natural tendency to seek generalizations, and that is how we manage the large swaths of knowledge in our modern world, but we should combine that with an understanding of the details wherever we really want to push our horizon of ignorance back.
Are there areas in our shallow knowledge that pay off more than others in terms of diving deeper? I don’t know. I will note that Elon Musk and Larry Page have both voiced the view that they think that the fundamentals of physics may be such an area, where more detailed knowledge may just reveal the outlines of the possible and impossible more clearly, and suggest new ideas. I think that can be right, but I equally think that logic and philosophy are such generative areas, where digging deeper into concept formation, laws of thought and systems of logic may be helpful across the board. But you will have your own ideas!
I also think that it is useful to dig deeper into the humanities. Learning an instrument, painting, taking up dancing or a martial art will allow you to explore details that give you a lens into other areas as well.
We could even argue that the image of knowledge as a sea with a depth of ignorance is wrong. It could be much more like a sphere, where the underlying principles are common and allow us, once we hit the core principles, to explore the rest. The way we visualize knowledge and ignorance matters, and this is an almost occult view where the fundamentals allow us to generate the whole - from the center of the sphere we can know the surface whole.
Ignorance decoupled from knowledge
We have an uneasy relationship with ignorance in our information society. It feels as if we should know more, just because there is more information - but the rate at which we can turn data into information and then understand and interpret it as knowledge is not growing as fast as we may wish. That last step - from information to knowledge - is still limited by human capacity. This may change, of course, and we may end up in a very different world with artificial intelligence.
In fact, one of the great revolutions that artificial intelligence could bring would be the decoupling of knowledge and understanding, or competence and comprehension - to use philosopher Daniel Dennett’s terms. Our systems may be able to do a lot of things, predict systems and make decisions, on the basis of systems that are opaque to our understanding.
Polish science fiction writer Stanislaw Lem suggested in his 1960s book Summa technologiae that we may even farm information and knowledge just as we farm plants, not understanding the process that produces them fully but enjoying the fruits of our labors.
This future is one in which ignorance - human ignorance - is not the opposite of knowledge, and where understanding and explanation have been decoupled from knowledge as well. It brings home the fact that we have to distinguish between systemic knowledge - what a system knows - and what an individual knows. We made this point in relation to the refrigerator, but here we end up with a slightly different situation.
In the example of the refrigerator, it was a division of labor between humans - some know how a refrigerator works, and as long as someone does, we are fine not knowing. Now imagine that no humans know how they work, but that the robots and artificial intelligences know and can produce them. (In what sense do they know, you may ask - and that is a good question. We would say that they know if they can produce, and improve, the products - as a first hypothesis.)
What happens to a civilization in which human ignorance grows as the sum total of the civilization's knowledge grows really fast? Not only in relative terms - it is not just that the system knows more - but in absolute terms too: we would know less and less as specialization becomes unnecessary, because software systems can perform specialized tasks much better. And what if this holds not just for the mundane production of refrigerators, but for science? It is far from inconceivable that computers could develop both mathematics and physics in ways that seem to work and, to the best of our knowledge, are correct, but where we cannot verify and explain all the details.
Such a civilization, one where we know in some sense of that word, but are unable to explain, would almost qualify as its own novel form of ignorance - a weird, competent ignorance.
So what
Ignorance is a fun subject, because it has deep theoretical aspects, and at the same time it is so simple: it is about when we should say that we do not know something. And this, again, is becoming more and more important. A few things suggest themselves.
First, the debate about misinformation and fake news lacks a dimension here - and that is what to do with the vast set of cases where we really do not know, or where there are not even any recognized strategies for knowing (we do not know how we could find something out). Today the tendency is to remove content that we think is false, but then we are forced to reinstate it when it turns out that we confused the false with what we do not know. The quintessential case, cited everywhere now, is the question of whether the virus causing Covid-19 was human-engineered or not. That was dismissed as clearly false and is now back as unknown - with retractions from fact checkers. What if we had a higher tolerance for saying that we just do not know, and perhaps focused on what our strategies of knowledge would be? This is akin to honoring the famous Alexander question, as framed by, amongst others, Russell Alexander: what information would make us change our minds? If we cannot even answer that question, then we really do not know what we are talking about.

Imagine a search engine with a dial for epistemic certainty: dial it all the way back and you get, essentially, physics; turn it up and you get chemistry, biology, historical facts in recent times, all the way to recent claims about how the economy reacts to pandemic stimuli. In addition to page rank you could imagine having something like a degree of certainty. Controversial? For sure - but having no such rating could equally be read as saying that the fact of evolution is on a par with trickle-down economics. And, for sure, we can say that we should not impose that on citizens - they should make up their own minds - but that misses the point. They can still make up their own minds as to whether they believe those degrees of certainty - and if the degree of certainty is set conservatively low, you may actually drive people to find out more.
Stuart Firestein makes this case in Ignorance: How It Drives Science - noting that knowing that we do not know triggers curiosity and exploration. Not always, but in enough cases for it to move science and society forward - and he recommends looking at what was known 10, 15 and 25 years ago to see how quickly things change. This leads to another idea: maybe it would be a good idea to state when we think we found something out - how old our information is. As Sam Arbesman has noted, even facts have a half-life.
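Arbesman's half-life metaphor can be taken literally as an exponential decay model. A sketch in Python - the 45-year half-life is a hypothetical figure, chosen only to illustrate the shape of the curve:

```python
def fraction_still_valid(years_elapsed: float, half_life_years: float) -> float:
    """Exponential decay model of a 'half-life of facts' (illustrative)."""
    return 0.5 ** (years_elapsed / half_life_years)


# With a hypothetical 45-year half-life for a field's facts:
for years in (10, 25, 45):
    print(years, round(fraction_still_valid(years, half_life_years=45), 2))
```

Dating our information, as suggested above, is just a way of knowing where on this decay curve a given claim sits.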
Second, mapping the distribution of ignorance in your organization, and in society overall, about your issues may be an interesting exercise. In tech policy specifically there has been a lot of noise made about politicians not understanding tech - well, how could that ignorance be distributed differently? And what ignorance about the processes of politics is distributed across the tech industry? What are the horizons of ignorance about social issues produced by an engineering mindset? And is the problem really the ignorance of politicians? Or is it the wider ignorance in our polities? And where do we need more than shallow knowledge? Mapping ignorance across an organization in time is also important - when did the organization as a whole come to understand what it understands today? A history of the ignorance of your organization is a great way to see how it changes. When did tech companies understand that privacy was not just a European concern? They all understand it well today, but did they understand it 5 years ago? 10 years ago? How did they come to realize this?
Third, explore the horizons of ignorance you are operating under today. Are there ways of pushing them back? Better intelligence gathering? Scenarios? Are you seeing all the way from the certain into the possible? Or is your horizon just within the certain?
Fourth, surprise is a surefire way to recognize ignorance - and building structures that generate surprise is key to building a creative organization. What routines and processes have you built into your organization that routinely surprise you? This notion of routine surprises may seem weird, but it is just about making sure that the organization does not build walls against ignorance out of fear of being seen as incompetent. Ignorance and incompetence are not synonyms - in fact, the opposite holds: the extreme of competence is knowing where you are ignorant. Think about Warren Buffett's oft-quoted advice about knowing your edge. That is not just where you are sharp - it is where your competence ends! Surprise is a way to constantly explore those edges of your competence and, if done right, perhaps extend them.
As always, thank you for reading. Let me know thoughts, reactions or ideas for other subjects as well!
Nicklas