Unpredictable Patterns #85: Learning and power
How should societies regulate learning? What is learning, and why is it key to our future? How does learning work in evolution? And why is learning the new fitness?
Dear reader,
It is back-to-school season, so what subject could be more fitting than learning? In this note we will focus on learning as a key concept in understanding not just society, but also how technology is changing our political reality. We will explore how learning works and why it is so important, as well as look into a few specific questions about how we, as a society, could think about regulating learning rather than technology. Enjoy!
Knowledge is not power
You have probably heard that knowledge is power - and that is almost right. If I know more than you, I can probably find better strategies and make better decisions - even if luck still plays a role. Someone who knows a lot about poker still loses from time to time, but they are likely to win in the long run - if the initial conditions are neutral. But the reason it is not quite right is that knowledge is the result of a process - learning - and the real power is actually in being able to learn. When we say that knowledge is power, we are really evaluating how much we have learned and how we can use what we have learned in different ways.
The ability to learn - to acquire knowledge - is what really matters.
A corollary of this is that learning challenges the social structure and power hierarchies that operate in a society. If you are where you are because of what you have learned, and I start learning it too -- then our relative power will start to change. Once I know as much as you do, we are equals and I can demand to be treated in the same way you are, and enjoy the same privileges you do. Learning is a socially disruptive force that can change age-old social structures in a generation.

This is not news, of course. History is full of people who have argued that we should ensure that people do not learn too much, and who question whether farmers really need an education. Teaching people, they might say, will encourage them to seek social mobility and change, and that is a dangerous thing. Organised religion has been a strong force for rationing learning in different ways, and it was only with the Enlightenment that knowledge became a key element in thinking about human rights and human freedom.
The Enlightenment's "dare to know" is still not quite right - it should be "dare to learn", because that is even more disruptive. Knowing is an equilibrium in which we assemble a model of the world and try to make it consistent and useful. Learning is the continual revision of that model, with an interest in improvement rather than accomplishment.
In the early Internet days, we had this idea that we should promote access to knowledge, and that this would solve a lot of social problems and inspire a new enlightenment. Battle-lines were drawn between those who would rather monetise knowledge and preserve and strengthen copyright regimes, and those who suggested that we should rethink copyright entirely and focus on access rather than upholding artificial scarcity. In hindsight, that debate was slightly off too: we should have focused not just on access to knowledge, but on the overall process of making it our own - of learning.
Access to learning is much harder to deliver than access to knowledge.
We see this today: we have moved into a world where there is no scarcity of knowledge anymore. You can download all of MIT's curriculum, and massive open online course providers like Coursera and Udacity offer access to enormous amounts of knowledge, open to anyone with an internet connection. But this is only the first part of the revolution we need. We now need to situate this knowledge in a learning context, and ensure that it can be used in such a way that we really can integrate it into our own understanding of the world.
We need knowledge to learn, and learning to understand - and only when we understand can we act.
Now, a word on education. I tend to shy away from talking about education, because I think that it creates the wrong image in people's minds. When we speak of education, we often think of school, and that may keep us from seeing the ways in which learning needs to change.
The institutional framework of learning - education - has been stuck in the same models for a very long time, and what we need to do is to re-think it from new initial conditions. The key conditions that we should start from are the following:
First, there is no longer a scarcity of information. There are massive amounts of information available to anyone who wants to try to access it, and our challenge is rather to sift the signal from the noise, assess authority and create shared models of reality.
Second, the organisation of learning should be designed to respond to demand. We need to understand the demand structure for learning much better than we do today. When and why do we need to learn? Everyone agrees that it is not enough to learn only in school, and we talk - but do little - about "lifelong learning", yet that framing does not quite get it right either. We need to figure out how to design learning around a complex, and often individual, demand profile.
Third, learning remains a communal and collaborative endeavour. Just as our powers of discovery have grown while our powers of deliberation remain at pre-Internet levels, we have increased access to knowledge without providing learning mechanisms. Experimenting with social forms of learning is key to creating value: information discovery is a lonely thing, but learning needs to be a social endeavour. The studies on MOOC completion show this clearly: people who learn in groups finish courses, people who go at it alone do not. And teachers matter - a lot!
Fourth, we need to move to acquisition-agnostic certification. This is really a no-brainer, and an old model. The Confucian civil service examination model in imperial China allowed anyone to acquire knowledge anywhere and then sit for the exams. If successful, they could get a great job and a career. No one cared where they had learned. We need to build really strong institutions for such talent discovery, or risk getting stuck in social status structures that instead put a premium on your ability to stay in a certain geographical location for five years.
Fifth, we need to chunk learning differently. Micro-learning and smaller components will help people compose their own unique capabilities - and learn what they need on demand. But this does not mean that we should try to speed learning up. There is something very interesting here about the speed with which we learn. Learning requires time, and we will run up against a human limit if we try to cram things into people: what you learn fast you forget fast. There is an interesting angle on this: we should try to ensure that learning compounds over time. The real value of learning is when you learn something that then allows you to do domain transfer - apply your knowledge to accelerate learning in a robust way in another domain. This means we should also prioritise learning to learn well, and help people explore their learning styles.
There is a certain urgency to this as well -- and as you may know if you have followed these notes, the one thing that worries me is that our society is becoming increasingly complex.
I have said this before, but it is a model worth having in the back of our heads as we discuss learning: we buy our progress at the price of complexity, and we cannot slow down without complexity slowly eroding the status quo - this is the Faustian deal of progress, and one that I think we are right to accept. But this requires that we learn effectively in order to manage our complexity and continue crafting progress.
It is a version of the Simonian trade-off: with a wealth of information comes a poverty of attention, and a need to allocate it efficiently. What I believe is that we can generalise that trade-off: with a wealth of complexity comes a poverty of knowledge, and a need to learn effectively.
The social and technical design of learning is key to our continued progress.
Regulating nouns and verbs
So how does technology enter into this? The obvious answer is that with artificial intelligence we have created the ability to learn more effectively - and this is clear even from the naming of the technology: machine learning (and it is interesting that it was when we went from the noun of "intelligence" to the verb "learning" that we started to see real progress in this field). So, when we discuss the wider impact of technology on learning, we can see that we are at the cusp of a major societal shift, where we might be able to draw on machines to help us learn and address the complexity we have created for ourselves.
At the same time, this technology is very powerful, and so needs to be regulated - and there is almost universal agreement about this now (pace a few John Perry Barlow hard cases). But there is also a deep uncertainty about how to think about that regulation. The EU has started to build a model that focuses on risk, and in that model it is following an emerging regulatory tradition in the bloc, at the heart of which lie different versions of the precautionary principle. Their model is clean, and if the regulation survives negotiation in a fairly consistent shape, it will be a helpful stopgap measure to ensure that decision makers feel comfortable with not banning the technology (as genetically modified foods were banned).
At the same time, there is something missing, and it is, again, the focus on the noun that I think creates a problem. We want to regulate AI, and so we focus on the technology and then try to define it, rather than think about what the function of the technology is. There is an interesting point here about regulatory models, and how to choose between them and understand them. Different regulatory models open up different opportunities.
The first model is the noun-based regulatory model. In this model we define all the things that we want to regulate and then go on to try to regulate how they interact. Data protection legislation has evolved as noun-based regulation and focuses on data. All use of data is then regulated, and a set of entities with different duties and rights is modelled within the set of things we have set out to regulate.
The second model is the verb-based regulatory model. Here we regulate a practice, and think about how that practice should be limited or shaped. Such legislation is heavily indexed on the functions we are trying to regulate - what is the practice actually for, and how can it be designed so that it is not harmful in any way? Free speech regulation is more like this.
These are ideal types, for sure, but the idea is that you have a choice: you can regulate things/nouns or functions/verbs. And if you apply this to the field of artificial intelligence, you immediately see that the alternative to regulating AI is regulating learning.
This opens up a whole set of very important and intriguing discussions about the regulation of this technology. As a society, how do we want to regulate learning? And on what basis can we make regulatory choices about learning? The questions we face if we take this perspective are also much more salient than questions about the quality and nature of datasets - let's look at a few examples:
Are there things that we do not want to learn as a society?
Are there things that we do not think the state should learn about its citizens?
Are there things that should be learned only under strict supervision?
Are there things that we can learn, but if we learn them we need to ask before we use the knowledge?
Are there things we must learn about each other?
How do we think about liability for learning?
Is there a difference between learning about different domains, and if so why?
Are there things the military should not learn?
Are there things we can learn only if we also ensure that we forget them after a certain time?
And asking ethical questions also becomes more interesting. We know that we have a scarcity of attention, and, as a society, we should focus our learning capacity on things that can benefit us all and be used in good ways. So should we spend time on learning whether a certain set of facial features correlates with a certain kind of sexuality? Probably not the most responsible use of our scarce learning efforts, right?
Focusing on the regulation and ethics of learning also forces us to move beyond the particular technology we use to understand how we think about the use and abuse of learning, and when learning something - or not learning it - can be really harmful. The problem with noun-regulation is that it skews the perspective towards the risks - the benefits are not easy to see when you stare at a thing. But with verb-regulation you can immediately see that a practice can have a positive side, a real set of important benefits.
In a sense, then, we are at the cusp of negotiating the Enlightenment's "sapere aude" - dare to know - in a social context, since we have greatly increased, and will continue to increase, our ability to learn a lot of things -- and the tension between the medieval fear of higher things and our will to know will have to be re-examined in the light of the complexity of our challenges: climate change, migration, inequality, discrimination, polarisation, poverty...there is no shortage of these.
The project of understanding, regulating and debating how we learn as a society is, I think, perhaps one of the most important projects we can undertake. It has sub-parts that have to do with how science learns, and difficult questions about medicine and insurance; it contains competition and data protection components as well as questions about military capabilities - but if we approach it through the lens of learning, we have a greater chance of getting it right than if we approach it as a question about regulating technology.
Understanding how we learn is also key to understanding how power is distributed - learning is not possible for everyone, and this is both a reminder and a call to action: when we can learn equally, we are truly equal, and power is equitably distributed. Here too, technology can be a great equaliser - by ensuring that learning is more broadly available, accessible to different learning styles and reflected in a fair and equal way across different strata in society.
Both liberty and equality rest on a foundation of citizens' learning.
On a personal level, I think learning is also key to living a good life. How we learn varies - some learn new sports or climb new mountains, others take a course in wine tasting or learn to play a new video game. All kinds of learning are good, and we should think about learning as a new kind of fitness.
That there are gyms on every corner but not libraries is something I hope will change, but I am also acutely conscious that bookish learning is not the only thing we need. Indeed, the gyms are places of learning as well. I have set out to get in shape, and have been struck by how my personal trainer has helped me understand my body, how I can push boundaries and what it takes to build strength and health. I used to go to the gym and jump on an exercise bike for 30 minutes and then leave - and while better than nothing, it was the gym equivalent of just reading the blurb on a book and pretending to have understood it.
Finding ways to learn and develop is not easy - especially in a society that seems to constantly accelerate -- but it is crucial. I write these newsletters as a way to force myself to learn about different subjects, and the feedback I get is fantastic for me - it helps me find new threads to pull on. I also try to read a fair bit, but what really works is the combination: reading and then writing is the best thing for me when it comes to learning - which again makes the point that learning is a highly individual practice.
As we move into the autumn, and school starts for real, we could also grasp the opportunity and set ourselves learning goals, with others, to experience the joy of having learned something new.
Learning, adaptation and evolutionary search
If we want to generalise, and understand the function of learning in society, we quickly find that it is not hard to come up with interesting analogies: the pace of learning seems to be determinative of commercial as well as personal success, and if you had to invest in the stock market on a single datapoint, the pace of (relative) learning of the different companies you are looking at would be a really helpful input. But why is this?
My first approximation of a mental model here was simple: learning is adaptation, but in social systems rather than biological systems - what we are asking when we ask who learns fastest is simply who adapts best. This model held me hostage for many years, but it had one shortcoming: I had to construct a parallel social evolution alongside the natural one, and even if that may have been a minor detail, it irked me. Then, quite recently, I stumbled on a paper that gave me a much, much better model. It is a 1987 paper by Geoff Hinton (the inventor of much of the theory that underpins today's machine learning) and Steven Nowlan (a computer scientist with interests across biology and systems science). The paper, "How Learning Can Guide Evolution", is short but provides a much more compact and powerful mental model to play around with.
What Hinton and Nowlan suggest is simply that adaptations can be learned, and even if they are not preserved and updated in the genotype (this was Lamarck's theory), they can still guide evolutionary search - and do so quite efficiently. Here is their hypothesis:
"In difficult evolutionary searches which require many possibilities to be tested in order to discover a complex coadaptation, we demonstrate that each learning trial can be almost as helpful to the evolutionary search as the production and evaluation of a whole new organism. This greatly increases the efficiency of evolution because a learning trial is much faster and requires much less expenditure of energy than the production of a whole organism"
If you ever wanted a way to express the value of learning, well, there it is: a learning trial can be almost as efficient in shaping evolutionary search as the production and evaluation of a whole organism. This is worth just pausing and reflecting on for a bit. Evolution developed learning as a way to shape search through the evolutionary space and make it much more efficient -- and to let us find adaptations faster. Hinton and Nowlan are quite modest in the conclusions they draw, but if we feel like exploring this in a much more expansive fashion, we quickly end up with a much deeper respect for the value of learning.
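To make the mechanism concrete, here is a minimal Python sketch of the kind of simulation Hinton and Nowlan describe: organisms whose genes can be fixed right, fixed wrong, or left plastic and learnable, where finding the good configuration early in a lifetime of learning trials boosts fitness. The parameter values and names below are my own reading of the paper's setup, so treat it as an illustration of the idea rather than a reproduction of their experiment.

```python
import random

# Hinton & Nowlan-style simulation (illustrative values, roughly the paper's
# scale): 20 loci, the "good" genotype is all 1s, and each organism gets a
# budget of learning trials in which it guesses its plastic loci.
L, TRIALS, POP, GENS = 20, 1000, 1000, 50

def random_genome():
    # Each locus starts fixed at 0, fixed at 1, or plastic ('?') and learnable;
    # roughly half the loci are plastic in the initial population.
    return random.choices(['0', '1', '?'], weights=[1, 1, 2], k=L)

def fitness(genome):
    # A fixed wrong allele can never be learned away, so fitness stays at baseline.
    if '0' in genome:
        return 1.0
    plastic = [i for i, a in enumerate(genome) if a == '?']
    # Learning: on each trial, guess every plastic locus; success means all guesses are 1.
    for trial in range(TRIALS):
        if all(random.random() < 0.5 for _ in plastic):
            # The earlier the organism learns the solution, the higher its fitness.
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, L)
    return a[:point] + b[point:]

population = [random_genome() for _ in range(POP)]
for gen in range(GENS):
    scores = [fitness(g) for g in population]
    plastic_share = sum(g.count('?') for g in population) / (POP * L)
    print(f"gen {gen:02d}  mean fitness {sum(scores) / POP:5.2f}  plastic loci {plastic_share:.0%}")
    # Fitness-proportional selection of parents, then recombination.
    parents = random.choices(population, weights=scores, k=2 * POP)
    population = [crossover(parents[2 * i], parents[2 * i + 1]) for i in range(POP)]
```

It takes a little while to run in pure Python, but over a few runs you will typically see mean fitness climb while the share of plastic loci shrinks: lucky learners reveal where the good genotype lies, and selection then gradually hard-wires what was at first merely learnable - which is exactly the guiding effect the paper describes.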
Learning is not just the new fitness, it is evolutionary fitness too.
And if we add models in which our learning actually is preserved in culture and can be transmitted over generations - not in the genotype, but in a memotype of sorts (building on Dawkins's meme idea) - we find even more interesting models to play with. The sum total value of our acquired scientific knowledge, then, can be translated into how many organisms evolution would have had to produce to find it, and we end up with two Darwinian trees - one with learning and one without - the second incredibly inefficient and vast, the first ours.
That suggests a metaphysical question - under what selection pressures does a Darwinian tree evolve? Hinton and Nowlan suggest one option: energy consumption. Energy consumption is, as complexity pioneer Geoffrey West has shown, an absolute selection pressure on all things. This insight also allows us to speculate about how learning and energy are related: learning could be seen as an energy-conserving way for life to evolve. If we learn more efficiently, we spend less energy and so become more energy-efficient from an evolutionary perspective.
The trivial analogy here is how an organisation spends resources - it seems undoubtedly true that the organisation that learns, and so finds adaptations and shapes its evolutionary search space effectively, expends fewer resources than the organisation that never learns at all. But it raises an interesting question: in building learning systems that consume energy, what does that Hinton-Nowlan trade-off look like? If learning in biological evolution can, in extreme cases, be nearly as efficient as producing and evaluating a whole organism, is the same true for what we can learn with artificial intelligence today?
One model for thinking about progress in AI with different sorts of machine learning is to ask how much it would have cost evolution to produce organisms with the capabilities we can now produce in machines. If we then look at achieving artificial general intelligence, we could compare it with the sum total search cost evolution incurred for human beings - but that would not be quite right. If we do achieve artificial general intelligence, it will be a discovery within evolutionary search space, guided by the many learned adaptations that have evolved over time, and it is one that arguably could impact the structuring of evolutionary search space in even more profound ways than learning itself.
As we play around with learning this way - this is in no way a rigorous theory - we start to see why it is such an interesting and foundational concept for our understanding of the world. Simply asking how the object of your studies learns is a question that will reveal a lot. An example of this is Stewart Brand's brilliant book about how buildings learn - a book that shows clearly that the evolutionary search space of non-biological artefacts is also structured by learning in different ways.
If we zoom out and allow ourselves the privilege of grand theories of everything, learning seems a good candidate to play with too. Law? The structuring of ethical learning within a society. Science? Learning that structures our long term evolutionary search space. Economics? The structured learning of how to allocate scarce resources. Art? Learning that shapes our internal evolutionary search spaces as human beings. Philosophy? The love not of knowledge, but learning.
But as with all grand theories, we should not get too caught up in it. Applying it as a lens to find new perspectives is fine; confusing it with an ultimate truth is a fallacy.
A fallacy we must learn to avoid.
So what?
There are a number of possible ways to operationalise our interest in learning. We have mentioned some of them in earlier notes as well, but here is a review:
Develop a model of how your organisation and your industry learns - assess the pace of learning, the sources and how the learning shapes the evolutionary search space for the organisation and industry both.
Structure your own learning by building in cues and accountability structures. Learn with others, and develop learning habits -- learning is the new fitness.
Develop a learning audit - look critically, if you dare, at how much you have wasted in resources by not learning well enough and assess what savings you could produce if you learned better.
Try to encourage the shift from noun-regulation to verb-regulation in AI. Look at the ethics and regulation of learning as key areas to explore, together with the polity at large.
And these are just some of the things you can do. There is much more to learn and it is a great, great thing.
Thank you for reading,
Nicklas