As we enter 2025, there are good reasons to think about what is happening with artificial intelligence as a technology and an architecture, and how that will affect its rollout in broader social contexts. One of the things it's probably worth paying more attention to is the question of what kind of entities we use in the analysis of the technology. The reason this matters is that once we answer the question of what we think there is, once we set out an ontology1 for the particular subject we're interested in, so much else follows: accountability, regulation, economic valuation, and so forth.
The ontology really is where it all starts.
If we don't get the ontology right, we risk making a lot of analytical mistakes down the road, and our ontology also limits the kinds of opportunities and risks we can detect. To some degree, getting the ontology right is a prerequisite for any further progress on the question of how we should think about technology in society. This is not just true for artificial intelligence but for all kinds of technologies. But you could argue that the more complex the technologies become, the more important it is to get the ontology right. And correspondingly, unfortunately, I think it's also true that the more complex the technology becomes, the harder it is to get the ontology right.
This is a natural thing because a very complex technology will afford many different descriptions. You can describe it in many different ways—you can say it's a little bit like this, it's a little bit like that. It's open to so many analogies, so many possible ways of framing it, or so many possible ways of thinking about it in terms of metaphors that you end up having a hard time figuring out exactly what it is you want to choose.
The kind of change that I want to explore here is one that might seem obvious but I think has deeper repercussions than we usually allow for. It is the fact that we're moving from a world in which it makes sense to talk about an individual monolithic model to a world in which we need to talk about a network of different components of artificial intelligence. We're also moving from a world in which it's natural to think that a model has capabilities—that capabilities are singular, attached to and emerge from the model—to one in which we have to think about combinatorial capabilities.
There is a whole set of different models working together that can have different kinds of capabilities. So when we ask what kind of capability a model provides, there are two answers to that question. The first is what the model can do on its own, in isolation, and that is the least interesting answer. The much more interesting question is what this model can do if we add it to the sum total of the different models out there.
This becomes even more true when we think about agentic AI because agentic AI is explicitly built to tap into this network of different resources that can do different things. In a sense, you can think about this as the development of a network of potential action. The entire economy, the entire society is provided with this technology that articulates a network of potential actions that can be taken and combined in different ways.
One of the best ways of thinking of this as a sort of general problem is probably the alphabet model.2 That means that you can think about the kinds of capabilities that you add with a model as the kinds of new words that you can form when you add a letter to the alphabet.
Think about the sum total of artificial intelligence technologies that we have access to today as a set of letters, and you can form different words with those letters. Each new artificial intelligence model you add can, if it's distinct enough3, represent the addition of a new letter and thus the addition of new words that can be formed—new things that can be expressed by the system.
In this case, though, it's not expression, like words, it's more actions: an alphabet where the individual letters enable different kinds of actions. The word-action analogy is perhaps the most fitting one, and I think it's actually helpful as we start to think about this from a broader policy perspective as well.
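To make the intuition concrete, here is a minimal sketch; the word list and letter sets are invented placeholders rather than any real inventory of capabilities, but they show how adding one letter changes what can be expressed, or, by analogy, what can be done.

```python
# A toy illustration of the alphabet idea: treat capabilities as letters
# and ask which "words" (combinations of actions) become formable when
# one more letter is added. All names here are placeholders.

def formable(words, letters):
    """Return the words that can be spelled using only the given letters."""
    return {w for w in words if set(w) <= letters}

words = {"act", "cat", "tact", "tab", "bat", "stab", "cast"}
base = set("act")        # the letters (capabilities) already in the network
extended = base | {"b"}  # the network after adding one new, distinct model

print(sorted(formable(words, base)))                              # ['act', 'cat', 'tact']
print(sorted(formable(words, extended) - formable(words, base)))  # ['bat', 'tab']
```

The point is not the toy itself but that the interesting quantity is the difference between the two sets: what becomes possible once the new letter is there.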
This to me means that we have to rethink a lot of the different regulatory and policy questions that we have been thinking about so far. I think that is going to be a challenge, but I also think it's going to be incredibly interesting because we now have to think about this combinatorial set of different innovations, of different explorations4, that we can engage in.
How should we think, then, about the targets of regulation if we believe that we're moving from a world in which it makes sense to treat a single model as the object of regulation to a world in which it is actually more interesting to think about the sum total network of possible actions that can be taken? It seems that we have to figure out how to adapt.
When you think about the targets of regulation, I think there are several different models that can be applied here that are interesting to think through, and we're not going to see a simple answer to the question; we're going to see several different answers. One of the first is the trade-off between simplicity and complexity. It will without a doubt be simpler to target a single model with regulation, and that simplicity is tempting. But that simplicity comes at a cost, and the cost is that the kind of regulatory action you can take when you look at an individual model will not necessarily cover the different regulatory interests that are expressed by the networks of different models.
So if you just go for simplicity, you will probably miss out on a lot of the questions that we think are reasonable to consider from a regulatory perspective. Now on the other hand, if you go for the most complex possible description of reality, you're going to end up in a situation where you're almost paralyzed by the immense challenge of trying to figure out where to regulate.
So you have to find a balance between simplicity and complexity, and that means that we have to think a little bit differently not just about the targets of regulation but also about the modes of regulation. The modes of regulation will need to be updated in different ways. There may well be different kinds of heuristics that we can bring to bear, and the legal system has developed different kinds of heuristics over time that are quite interesting to think through.
One of them is the notion of the least-cost avoider—who, given a reasonable amount of time and insight, could have avoided the potential harm by spending the least?5 That idea of the least-cost avoider, I think, is quite a powerful one6, and it's a heuristic that has evolved in the legal system over a period of time. What we may need to do now is to find other kinds of heuristics like this that reflect the new complexity of the regulatory environment we're thinking through, and then figure out where on the different levels of regulation we want to find our target.
The idea of different levels allows us to think about where regulation sits best. You can imagine a world in which you say look, there are at least four different levels of regulation that we should be interested in:
First is the object level—that's the model. That's where most of the regulation today actually happens. You regulate the object, the model itself.
Then you can imagine regulating the process, where the process here would be the use of different models. So you regulate the use of different models; you don't necessarily care about the models themselves, but you say look, if you use a model to, for example, give legal advice or to practice medicine, here are the kinds of rules that apply, no matter what models you use or what network you tap or anything like that. What actually matters is how you use the models.
Then you can imagine regulating on the level of the project, and that is interesting because it is something that I think is not always well explored in legal philosophy. The idea of regulating projects can feel very stifling, and it can feel like over-regulation by default, but there are certain kinds of projects that we might not want to allow. When we regulate projects, we regulate not the use but the intent of the use: what it is that you're trying to accomplish with the use of the technology.
You can see some examples of this in the European AI Act, where I think the prohibition on social scoring with artificial intelligence is actually the outlawing not so much of a technological use as of a social project: these are the kinds of projects we do not condone and do not want to allow. And I think that's another way of approaching regulation, another kind of level: what kinds of projects do we want to restrict?
Finally, you can imagine regulating the system as a whole—the totality of the technology and of the use and the intent of the use and the models themselves. Regulating the sum total system of course is terribly hard because that means that you end up trying to figure out exactly how to get traction on what is an entangled set of different actors, technologies, and networks.
Another possible way to approach this would be to think about it through something like Lessig's framework. Lawrence Lessig famously argued in his book "Code and Other Laws of Cyberspace" that there are four different things that regulate. One of them is code: how you write the code, the software. Another is markets: the economics of the thing, which is how a lot of things are in fact regulated. Then we have norms, and norms are really important because, from a regulatory perspective, more actions in our society are probably regulated by norms than by anything else. And then finally, of course, law, which also has a regulatory function.7
So between these—code, markets, norms, and law—you have to figure out what the target of regulation is, where do you actually choose to regulate. And one of the things you see if you apply Lessig to artificial intelligence is that to a large degree, the regulation of the model is the regulation of code, and that is something where we've sort of ended up, I think, because of how we think about technology overall.
The way we came to artificial intelligence, and I think this path dependency is probably important, is through the Internet, which means that a lot of the artificial intelligence policy we think through is actually very close to Internet policy. And that is probably wrong. The Internet is a very different kind of technology from artificial intelligence. The Internet is a connecting technology, whereas artificial intelligence, to simplify enormously, is a kind of capability technology: it allows us to act in different ways. That's not quite right either, because you could argue that the Internet also allows you to act in different ways. So connection versus capability is a simplification, but there's something there that I think is worth exploring.
So the idea that you come to artificial intelligence through the Internet probably means that you need to rethink your approach, because artificial intelligence is very different from the Internet, and regulating it or thinking about the regulation of that technology in the same way is going to lead you astray.
How should we change our metaphors then? One of the things that seems obvious when you look at the evolution of technologies is that we need to move from mechanical metaphors to biological ones. The idea that technology is a machine has been with us for hundreds of years, and we think about the machine as the quintessential technology. But the more complex technology becomes, the more embedded in society, and the more it mimics biological processes like intelligence, the more sense it makes to look at biological metaphors as well.
So perhaps rather than thinking that we should regulate the development and deployment of technology, we might want to think about how we influence the evolution and growth of technology. It's a very different perspective and seems harder, but it actually means that you're less the engineer and more the gardener—you're less the person who is sort of deconstructing a machine and more the person who is tending to an ecosystem and trying to make sure that it stays healthy. That challenge will be harder, there is no doubt about that, but maybe that's the better metaphor, or at least the complementary metaphor that we should use more often as we think these things through. This does lead to the interesting question of how you regulate biological systems, and I think that's an area of research that we need to think more about.
Taking all of this into account then, I think it's useful to think about a couple of different questions that we'll have to explore more in the coming, say, five years or so. The questions that I am interested in, and I think are absolutely essential to get right, are the following:
First, we need to figure out evaluations for combinatorial capabilities—how can we evaluate the ability of a system or a network to allow for more potential actions? That is, we need to figure out what it means to add a letter to the alphabet and what new kinds of words can be formed with the addition of that letter. I think those evaluations will be hard, and we haven't made enough progress on those kinds of evaluations yet.
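What such an evaluation could look like is still open, but one hedged sketch, with entirely invented components, actions, and tasks, is to score a candidate model by the tasks a network can newly complete once the candidate is added, rather than by what the candidate does on its own:

```python
# A sketch of a combinatorial evaluation: score a component by the tasks
# that become solvable when it joins the network, not by its solo score.
# Components, actions, and tasks are invented placeholders.

def solvable(tasks, components):
    available = set().union(*components.values())
    return {name for name, needs in tasks.items() if needs <= available}

tasks = {
    "draft_contract": {"generate_text", "cite_law"},
    "file_claim":     {"generate_text", "fill_form", "submit_api"},
    "summarise_case": {"generate_text"},
}
network = {
    "llm":        {"generate_text"},
    "form_agent": {"fill_form", "submit_api"},
}
candidate = {"legal_retriever": {"cite_law"}}

before = solvable(tasks, network)
after = solvable(tasks, {**network, **candidate})
print(sorted(after - before))  # ['draft_contract'], the marginal capability
```

The retriever looks unimpressive in isolation, it only cites law, but it is the addition that turns contract drafting from impossible into possible for the network as a whole.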
Second, I think that we need to move from a mindset of thinking that we will get to a regulatory outcome by design and instead think that we need to get to regulatory outcomes by adaptation. We used to see, for example, in the privacy space the idea of privacy by design: systems should be designed from the start to incorporate confidentiality, security, secrecy, all of those different things. The use of personal data should be minimized and only allowed for certain purposes specified beforehand, and you had this idea that almost every regulatory target could be designed into the technology.
I think that's not entirely wrong for artificial intelligence either, but it is less right than it was before. If we instead start to think of regulatory targets as being accomplished by adaptation, by the long-term adaptation of a particular technology to the network and to the society that technology is embedded in, that's going to be the better metaphor for us over time. Privacy by adaptation is different from privacy by design. Privacy by adaptation means that you see how these systems work, you figure out over time what you would like them to do, and you adapt the functionality of the systems so that they reflect, at a given point in time, the kinds of trade-offs and the kind of balance that we believe is right when it comes to privacy.
The third question I think we need to figure out is whether the answer to the network is in the network. This is a paraphrase of an idea that came up in the copyright debates around the Internet, where the question was "is the answer to the machine in the machine?" In that debate, it essentially meant that if we're worried about the proliferation of copying on the Internet or in networks, then what we need to do is build technology that makes copying harder.
From this idea of "the answer to the machine is in the machine" came things like the electronic copyright management systems—huge monolithic systems that were supposed to manage content across enormous regions. In Europe, for example, the system IMPRIMATUR was launched as a way to manage content on the Internet everywhere. But we also saw the development of more modular solutions like digital rights management or different kinds of watermarking that allowed for the tracking of copyrighted content.
The question that we have to ask now is whether the answer to the network, that is, the answer to the question of what kinds of capabilities we want to allow, lies not so much in the model itself or in regulation but in designing the network so that it allows for certain uses and disallows others. That's a hard problem because, as we said before, how do you design biological ecosystems so that they accomplish certain regulatory targets? That's largely an unsolved question, but it's an interesting one to explore.
Fourth, we need to think about whether complexity itself might actually be feasible as a regulatory target. How much complexity should you be allowed to add to an existing system? If you think about this as a combinatorial problem, you could say that if you add certain kinds of letters to the alphabet, you can express many, many more words than if you add others. Adding, for example, a 'B' is going to be very different from adding an 'S', especially in the English language.8
And so, looking at it this way, you can say that adding a 'B' is less consequential than adding an 'S'. But what you're looking at is then not the size of the model, which is what we currently look at, nor the amount of compute the model consumed, or any of those factors; you would look at the kind of combinatorial capability that the model adds to the network. Trying to assess that will naturally be really, really hard, but it might also be worthwhile, because it would answer the question of what is now possible to do with the network of capabilities.
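As a rough illustration of why the two letters differ, here is a small sketch; the word list and letter sets are made up for the example, and a real assessment would of course need something far richer than a toy lexicon.

```python
# A toy measure of a letter's marginal productivity: how many new words
# it unlocks given the letters already available. By analogy, a model's
# marginal combinatorial capability given the network already in place.

def newly_formable(words, base_letters, new_letter):
    base = set(base_letters)
    extended = base | {new_letter}
    return [w for w in words if set(w) <= extended and not set(w) <= base]

words = ["cat", "cats", "rat", "rats", "tar", "tars", "star", "bat", "crab"]
base = "catr"  # what the network can already "spell"

print(newly_formable(words, base, "s"))  # ['cats', 'rats', 'tars', 'star']
print(newly_formable(words, base, "b"))  # ['bat', 'crab']
```

On this toy lexicon the 'S' unlocks twice as much as the 'B', which is the shape of the argument: the regulatory question attaches to the marginal expressive power of the addition, not to the addition's own size.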
All of this seems to suggest that our focus on models increasingly needs to be at least complemented by thinking through a more combinatorial and ecosystem-like approach.
This concept is used here as a sort of short-hand for the question of what kinds of things there are — something we should disagree about far more often than we do. For a fuller review of the concept have a look at the excellent Wikipedia page here: https://en.wikipedia.org/wiki/Ontology
See e.g. Hausmann, Ricardo here: https://www.ineteconomics.org/uploads/papers/hausmann-ricardo-berlin-paper.pdf
To make the metaphor a bit more tortured, a lot of models are just like the same letter in a different font — no extra word production capability is added.
Another way to phrase this, of course, is to think about it as each new model opening a new horizon of exploration across new subjects and domains.
This is a simplistic version of the concept as explored by Calabresi in The Costs of Accidents (1970).
Although it has weaknesses for sure; see, e.g., https://www.law.gmu.edu/pubs/papers/04_27
The idea is found in an early paper of his as well: Lessig, L., "Code Is Law," Harvard Magazine (2000).
The productive nature of different letters is one of the principles underpinning the scoring in Scrabble, for example.