Unpredictable Patterns #114: Artificial Mediocre Intelligence
On sub-AGI trajectories, different modes of producing mediocrity and more
Dear reader,
The trade wars continue, and the subject of liquid factories engaged a lot of you. Thank you for the comments and the ideas around how trade really is also becoming increasingly virtualized; this is a theme worth coming back to. I also want to put in a plug for two other texts: one is some notes I wrote up for trend talks, and the other is a great new text on AI-policy perspectives by Seb Krier. Hopefully you will find both interesting.
This week’s note takes a bit of a different turn: it looks at sub-AGI trajectories, and specifically at the emergence of artificial mediocre intelligence (AMI).
Sub-AGI trajectories
There is a wealth of predictions out there right now about the coming of artificial general intelligence (AGI) and how these new systems will change everything from the game theory of conflict to the labour market - often simultaneously and deeply. That work is tremendously important - we should try to predict such socio-technical changes in order to build adaptive strategies that allow us to remain somewhat in control over what we care about. The predictions also often carry some note of warning: these systems are complex enough that the change they bring could be catastrophic and/or transformative in ways that will force us to adapt radically and rapidly.
But what if AGI never comes? What do the sub-AGI trajectories look like?
This might seem a silly question, but it is worth thinking about what we could get instead, and how policy, economies and societies would need to adapt if all we get is artificial mediocre intelligence (AMI).
Now, you may want to argue that we are already past that point: we have passed the point of mediocrity, and are now in the territory between AMI and AGI already. Whatever we get, you could argue, will be better than AMI. Maybe so, but here is why I think we don't know yet. When we think about AGI we need to think about a stable intelligence equilibrium: a state in which we have, and can sustain, AIs that can do whatever a human can do in some defined domain.
This is, I think, an important aspect of the discussion that is sometimes neglected. AGI is not a point in time - but an equilibrium where costs and benefits make sense: the energy consumed is reasonable given the productivity growth we see, the data access we provide can be defended on privacy and commercial grounds because of the social welfare outcomes we can produce, and so on. There are many scenarios in which AGI collapses into AMI - where the models don't get new data fast enough, or where the energy prices and climate footprints don't match the productivity growth.
Most scenarios discussing AGI, and even more so the ones that discuss ASI, seem to assume that once we reach this point we will automatically continue to produce even better and more capable models. But this assumption needs to be defended robustly, since we know of many systems that peak, decline and collapse into oblivion. What if AGI, once achieved, represents only a temporary peak, followed by a slow decline into AMI, and even further into something like artificial intermittent intelligence - useful sometimes, but often too costly?
There is a science fiction story here: as interstellar explorers we come to find civilisations that seem to have evolved advanced AIs, but these have since ended up in vicious loops, suffered data starvation or simply become so complex as to resemble nothing more than random intelligence generators. AIs, they tell us, peaked some thousand years ago, but then croaked under their own complexity, degenerative data diseases and inter-AI competitive harm. These cultures now live in the cognitive ruins of once-great artificial minds that continue to function, but are caught in a slow spiral of hallucinations, self-deception and misalignment.
Cheap mediocrity
The idea that AGI could collapse into AMI is of course speculative - and maybe the more likely scenario is that we never quite get to AGI, but remain in a stable state where AMI emerges as an equilibrium for the economy and society overall. What would that look like?
First, let's be clear: AMI will not be useless - we can still imagine areas where the machine excels, like routine coding and similar tasks. It is just that if we look across all domains, what we get on average is mediocrity. Even so, that mediocrity might be economically valuable as long as it is cheap. A mediocre AI customer service agent might be acceptable compared to the cost of a mediocre human customer service agent. The point is that if we get AMI, and it is cheap, then we should look at all the places where human mediocrity can be replaced by artificial mediocrity and use that as a proxy for predicting how society will change.
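To make the "cheap mediocrity" intuition concrete, here is a minimal toy model. All numbers in it are invented assumptions for illustration, not measurements of any real service; the only point is that a lower rate of acceptable outcomes can still win on expected cost per acceptable outcome, if the price per attempt is low enough.

```python
# Toy model of "cheap mediocrity": all numbers below are invented
# for illustration, not measurements of any real service.

def cost_per_acceptable_outcome(cost_per_attempt: float,
                                acceptable_rate: float) -> float:
    """Expected cost of one acceptably handled task, assuming
    failed attempts are independent and simply retried."""
    return cost_per_attempt / acceptable_rate

# A mediocre human agent: more expensive per attempt, somewhat
# more often acceptable.
human = cost_per_acceptable_outcome(cost_per_attempt=4.00,
                                    acceptable_rate=0.75)

# A mediocre AI agent: worse per attempt, but far cheaper.
ami = cost_per_acceptable_outcome(cost_per_attempt=0.10,
                                  acceptable_rate=0.60)

print(f"human: ${human:.2f} per acceptable outcome")  # $5.33
print(f"AMI:   ${ami:.2f} per acceptable outcome")    # $0.17
```

On these made-up numbers the mediocre AI is roughly thirty times cheaper per acceptable outcome despite being worse at the task - which is exactly the substitution pressure the paragraph above describes.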
Cheaper mediocrity will still have a massive impact on society - it does not take AIs that are as good as the best humans to change society profoundly. To explore this in more detail, we would need to map how human mediocrity is distributed today, and how common it is. How many tasks performed by humans today are performed in a way that is ultimately rather mediocre? What percentage of human jobs are performed in mediocre ways?
This sounds like a deeply insulting question - but remember that we are not saying that the humans are mediocre, just that they might not be performing these tasks at their best. It is the performance of the tasks, not the humans themselves, that we are evaluating. Surely, we have all had times in our lives when we have been mediocre?
There is another science fiction story here as well: a society that is automated, with robotic servants that do a "meh" job at what they do, but still remain cheaper than human labour, and free up humans to do other things. It is not the pristine, perfectionist future of Star Trek, but rather a future in which everything sort of works, but only so-so. A future that is more like Sweden in the 1970s: things worked, but there were lines at the post office, trains were not always on time, and public services were adequate but not great. Not a utopia, but not a dystopia either - more like a vacation week at a substandard, heavily rebated tourist destination where the locals are not really motivated to excel.
The mediocre scenario is interesting, because it would put a premium on premium service: if humans wanted to work, they could do so and outcompete the AI by excelling at what they do. Fine attention to detail and superb delivery of well-thought-through services would be the key to competing with AMI.
The mediocrity adjustment
If we end up in a mediocre equilibrium - where capabilities stall - we will also be forced to ask some hard economic questions.
Currently, the time, attention and resources that we spend on artificial intelligence are premised on the promise of generality. How would that equation change if we slowly realised that what we have is instead a promise of mediocrity? It seems inevitable that we would see a large correction in expectations; the predicted mediocre plateau would not at all merit the kind of spending we see today. Instead, we would have to find new ways to address the problems we are facing as a civilization - no AGI saviour to swoop in and help us with climate change, pandemics, scientific progress and deep social inequities.
The fact that generality is priced in is salient for anyone thinking about what happens if we cap out at AMI - and what that does to markets.
Existential risk and AMI
From a purely existential perspective, AMI is a bit of a nightmare. We would get some kind of new cognitive resource, but we could not deploy it towards the more complex problems that we face. Maybe it would free up more human intelligence to really try to crack those problems - but even so, we would be left on our own when it comes to trying to solve the great challenges of our times.
This brings home how much trust we put in AGI: we really hope that this technology will help us deal with the problems we have produced for ourselves in the eternal cycle of scientific and technical development, progress, welfare increases and new complexities that require new science and technology. Our bet is that better intelligence will help us get past the complexity we have accrued over time - but AMI could not do that at all. It could help us manage the complexity, albeit not well, but it could not make the breakthroughs needed to make a dent in the growing complexity overall.
If we plateau at AMI we are left to our own devices.
And it could be even worse. In his 2007 talk "What If the Singularity Does Not Happen?", Vernor Vinge suggests that if technological development stalls, the resulting equilibrium is not stable. Translated to our scenario, this means that mediocrity, aggregated, produces inefficiencies, system failures and costs that accelerate in pace and scope.
This implies that AMI is not an equilibrium at all, but just a plateau on a downward slope: the spread of artificial mediocrity will lead to a messier, less functional world - and every inefficiency will compound over time leading to what Vinge calls the Age of Failed Dreams.
Mediocrity is like sand in the machine: it will wear society out much faster than either generality or the best-effort human work we see today. And since artificial mediocrity is likely cheaper than the human kind, it will also be more prevalent and used in more places - and the overall skill and capability in those social domains might decline too.
If that is the case, AMI might very well be the end that T.S. Eliot predicts for us: not with an AGI bang but with an AMI whimper.
Modes of producing mediocrity
Mediocrity is interesting in that it can be produced not only by a technical slowdown. We can also imagine mediocrities that are produced by multi-agent systems that are badly tuned. These systems will need to find new equilibria of skills, capabilities and modes of interaction that are finely adjusted to a multitude of different actors - and it seems almost inevitable that mediocrity will creep in as a kind of multi-agent entropy in systems where enough different agents with subtle differences try to interact.
Such mediocrity by complexity is certainly what we see in multi-agent systems today. The systems almost get it right, but almost is worse than getting it wrong - because the almost contains a multitude of mediocrities that mean that you cannot rely on the collective outcomes at all.
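One way to see why "almost right" compounds so badly is a back-of-the-envelope reliability calculation. The 90% figure below is an assumption for illustration, as is the independence of failures: if each agent in a sequential pipeline is right nine times out of ten, the chance that the whole chain gets it right collapses quickly with chain length.

```python
# Sketch: reliability of a chain of "almost right" agents.
# Assumes independent failures and a per-agent success rate of 0.9;
# both are illustrative assumptions, not measurements.

def chain_reliability(per_agent: float, n_agents: int) -> float:
    """Probability that every agent in a sequential pipeline
    gets its step right, assuming independent failures."""
    return per_agent ** n_agents

for n in (1, 5, 10, 20):
    print(f"{n:2d} agents: {chain_reliability(0.9, n):.2f}")

# 0.9 ** 10 is about 0.35: a pipeline of ten individually
# "almost right" agents is collectively right only about a
# third of the time.
```

This is the multi-agent entropy in miniature: each agent is individually tolerable, but the collective outcome is something you cannot rely on at all.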
Multi-agent mediocrity could also be legal: imagine contracts that overlap, exclude some effective ways of acting and essentially force mediocrity into the systems - and this would not be new. The European browser experience with cookie banners everywhere is an example of design mediocrity produced by legal processes and systems.
Mediocrity as a means of alignment
So far we have argued that mediocrity is a problem, but not everyone agrees. One way to think about mediocrity is that it can actually outcompete greater skill and capability - or even generality in intelligence. The argument could go like this: mediocrity is the key method of evolution - it does "good enough" and not "best possible" and in doing so tests and learns much faster than any system that tries to do "best possible". Mediocrity will always beat generality, because mediocrity acts where generality designs.
If this is true, an interesting opportunity seems to open up for us: can we use mediocre systems to rein in artificial superintelligence? An ASI surrounded by AMIs will essentially have to constantly tell them how to get it right, and spend hours trying to fine-tune or fix their mediocrity to get anything done.
There is a funny set of possibilities here - but also a connection back to something that Nietzsche writes in Beyond Good and Evil: "A people is a detour of nature to get to six or seven great men. — Yes: and then to get around them." Now, let's immediately recognise the misogyny, elitism and pure cynicism of that quote - but let's also look at what it could teach us: that outliers are constrained by the median in most evolutionary systems.
A way to control a runaway ASI might be to ensure that there are enough AMIs around to get around the ASI, and slow it down.
AI and air travel
Most people assume that the AI we will have a hundred years from now will be radically different from what we have today. But what if it is not? What if AI is like airplanes, where there are design tweaks and incremental improvements, but the basic functionality is not vastly different? Sure, there were Concorde planes, but we discontinued them as they were not quite right, and stuck with what we had. We still get it wrong sometimes, and the airplanes are better, but not wholly safe. And air travel is cheaper - but it is less and less comfortable, and the whole experience is much more mediocre today than it was 50 years ago.
As we go through all the scenarios and futures associated with artificial intelligence we would do well to also think through scenarios in which we get not much more than we have today, and what they would mean. Not least because such scenarios should motivate us to pursue AGI so much more ardently.
So what?
Ok, so is this anything more than a cute little thought experiment? I think it might be - not least since I think we want to think carefully through policies and legislation so that these do not inadvertently push us towards AMI. There are some early signs that the EU AI Act could do that, since we see a delay in the release schedule for Europe, leaving the EU with an overall lower capability landscape - something that makes for mediocre, or at least more mediocre, outcomes.
I also think that domains, sectors and economic systems will pass through phases of AMI - and that it is important to recognize when that is the case. When a domain is stuck in patterns of mediocrity, those patterns need to be broken fast so that they do not aggregate into real collapse or dysfunction.
Finally, I think it is healthy to realize how much we actually will lose if we do not pursue the goal of AGI, and look to build the strongest possible technology we can to address the challenges we face as a civilization.
Thanks for reading,
Nicklas