Unpredictable Patterns #121: Constraints and futures
The geography and economy of constraints on AI development - spending time on the terrain
Dear reader,
Last week’s note on liability was timely, it turns out, as there is a lot of movement on that front. This week we turn to a more general model for geopolitical futures: looking at constraints. This week also marks a first: the launch of The Future Habit podcast, which David Skelton and I are producing as a companion to our coming book. We have given the book and the podcast their own Substack, as is appropriate, and would love to have you subscribe if you are at all interested in futures studies, scenarios and the craft of working with the future. We promise interesting, and more importantly, useful content! The first episode of the podcast is an interview with Sohail Inayatullah - one of the most influential futurists and thinkers in our field. Listen in, subscribe, and tell your friends!
TL;DR
🎙️ The Future Habit podcast launches this week—first episode features futurist Sohail Inayatullah.
🧭 We explore a constraint-based model for thinking about AI’s geopolitical futures: focus on where growth stalls, not just where it accelerates.
⚙️ Competing in AI means identifying, shifting, and absorbing key constraints—compute, data, energy, talent, and imagination.
Thinking about constraints
Constraint-based analysis focuses on the limitations of any development, and seeks to figure out the economics, geography and politics that affect those limitations at the first, second and third order. The idea is simple: rather than predicting growth, you predict the inflection points at which growth will stall. This gives you a road map that policy makers can then use to think through their options along different dimensions.
Currently, as evidenced in Mary Meeker’s reborn trends report, all the curves are pointing up and to the right for capacity and use, and down and to the right for cost -- and at breakneck speeds. Reading the report almost gives one vertigo, and is a good reminder that we are living through a more intensive period of change than we may realize. Modeling constraints allows us to think about the stability and sustainability of those curves.
Examining constraints is, in essence, exploring the composition and nature of the Pareto frontier for a technology - a bit like a military staff spending time really understanding the terrain before planning its operations.
There are different kinds of constraints. The first, and most obvious, class is constraints on the inputs to the process. Such input constraints regulate the production of the technology, and the speed with which we can produce it. There are different proposed inputs for artificial intelligence, but most observers agree that compute, algorithmic efficiency and data are a good start.
Compute is needed both for training the models and for using them - so it constrains both production and use - but we will treat it in our inputs class first. As we analyze the second order inputs to compute, energy emerges as the key factor limiting the availability of compute.
Algorithmic efficiency determines how much we can get out of existing compute, and data determines the overall capabilities we get from the use of our compute. The second order inputs of algorithmic efficiency mostly come from the second class of constraints we discuss below: the pace of technical and scientific progress.
For data the second order inputs are both the access to and production of data, where we usually focus on the former. The availability of data in turn is dependent on data protection frameworks as well as copyright limitations.
Another, more general, class of constraints, then, is the pace of technical and scientific progress in the fields related to the other constraints. Here the key constraints are talent, research organization and funding. Overall progress determines the limits of the specific constraints: if we make a breakthrough in energy efficiency for chips, the primary Pareto frontier moves.
Talent is needed to train, operate, innovate and develop the technology. We can quibble about whether talent belongs in the primary constraints category (without it there will be no AI), and there is reasonable disagreement here, but by placing talent in the more general category of constraints on overall progress, we also highlight that we need broad categories of talent, beyond computer science, to ensure that we can keep growing a technology. The key second order inputs on talent are education and high-qualification migration (which can be negative).
Research organization is a key input for progress, since it determines the balance between exploration and exploitation in research, and so allows us to analyze the effect of mining the existing normal science problems, versus breaking paradigms. Second order inputs to research organization are things like institutional architectures (innovation and inertia). For funding, we care both about size and sources - since a mix of public / private inputs here is likely to beat state only organization.
There are also third order constraints here, but as we move backwards in our causal model, the generality of the constraints increases, which makes them harder to work with - but they can still be assessed. A few such foundational constraints include political will and capacity, economic growth, regulation and social resilience.
This brings us to the other set of constraints - of which perhaps the most important is absorption capacity. The production of a technology matters very little if a society lacks the ability to absorb the progress. The absorption rate is determined by things like usability, cost, imagination and understanding.
That usability matters seems clear: technology revolutions become revolutions because the technology is used. One way to chart the future of AI is to imagine a path through different user interfaces: from chatbot to agent to … (ambient presence, robots, cars, holos, wearables). An observation here is that the more complex a technology is, the more important the user interface becomes — cognitive technologies need to have cognitive fit to succeed, and this means that they need to be integrated into what Wittgenstein called “a form of life”. Cognition is a weave of actions, habits and patterns in our environment, and fitting AI in is a question of understanding this weave deeply.
Cost is self-evident: the costlier the technology the harder for a society to adopt it broadly, but cost is always relative to utility. What we get out of the technology will matter too — we are willing to pay increasing amounts for something that proves very useful.
Imagination and understanding may seem fuzzy, but I think they are incredibly important. We often speak of the asymmetries between the organizations that make AI and the governments, users or buyers of the technology, and the first asymmetry we converge on is the asymmetry of knowledge. Now, this is an important factor - but it is not an asymmetry at all. It is a symmetry of ignorance - where companies know as little about government, social and economic realities and culture as the government and public knows about the technology. Realizing this could lead companies to think about this problem differently.
Where I think there is a real asymmetry, however, is in imagination. The ability to build a shared imagination for a technology is a pre-requisite for use and diffusion. When we can imagine the good future that a technology can bring, we adopt and start using it. The Internet provided a future of connection and information, the ability to connect with social networks and access all kinds of information. The image of the Internet was always the network - we imagined it as a web across the world. What is the image of AI? How do we find a shared image that we believe accurately represents the change that now is imminent? This is a challenge that I think we need to spend much more time on, especially as the current image is something like a blue, abstract humanoid embedded in calculations.
How should we imagine a future where cognition is abundant?
Summing up our model of constraints, then, it looks something like this.
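One way to make the layered model concrete is to sketch it as a nested data structure. This is purely illustrative - the names and groupings below are a paraphrase of the categories in the essay, not the author's actual artifact:

```python
# A sketch of the constraint model as nested data. All names and
# groupings are illustrative, drawn from the essay's categories.
CONSTRAINT_MODEL = {
    "input_constraints": {
        "compute": {"second_order": ["energy"]},
        "algorithmic_efficiency": {"second_order": ["scientific_progress"]},
        "data": {"second_order": ["data_access", "data_production",
                                  "data_protection_law", "copyright"]},
    },
    "progress_constraints": {
        "talent": {"second_order": ["education", "high_qualification_migration"]},
        "research_organization": {"second_order": ["institutional_architecture"]},
        "funding": {"second_order": ["size", "public_private_mix"]},
    },
    "absorption_constraints": {
        "usability": {}, "cost": {}, "imagination": {}, "understanding": {},
    },
    "third_order": ["political_will", "economic_growth",
                    "regulation", "social_resilience"],
}

def list_constraints(model):
    """Flatten the model to a list of first-order constraint names."""
    return [name for group in ("input_constraints",
                               "progress_constraints",
                               "absorption_constraints")
            for name in model[group]]
```

Writing the model down this way makes the causal layering explicit: each first-order constraint carries its own second-order inputs, while the most general, third-order constraints sit at the bottom of the stack.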
Next, we put this model to work.
Modeling constraints
With this model in hand, we can now start exploring different domains, to get a sense for the state of play. We can, for example, explore the geography of constraints - and discuss how they impact future development trajectories. Here is a clunky first approach to the geography of constraints - looking at the severity of the constraints on different geopolitical players. It is based on an early assessment, and there are contentious points here (I go back and forth on whether China is data-constrained, for example) - but the point of the example is rather to suggest a model.
If we want to, we can now also imagine combinations of countries that make geopolitical sense - like the Middle East (energy) + the US (almost all other resources). The full artifact is here, if you want to tweak or explore.
We can also build a simulator where we can compare different countries and explore how we think their relative development will look, depending on how we weight the constraints in the model. This is a quick and dirty Streamlit script doing exactly that for the US and Europe - it allows us to model and discuss, in a nice way, how to weight different constraints.
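At its core, such a simulator reduces to a weighted average over constraint severity scores. Here is a minimal sketch of that kind of comparison - the scores and weights below are illustrative guesses for the sake of the example, not the numbers in the actual script:

```python
# Minimal sketch of a weighted constraint comparison, in the spirit of
# the simulator described above. Severity scores (0 = severely
# constrained, 1 = unconstrained) and the weights are illustrative
# assumptions, not the author's actual numbers.
CONSTRAINTS = ["compute", "energy", "data", "talent", "capital", "absorption"]

SEVERITY = {
    "US":     {"compute": 0.9, "energy": 0.6, "data": 0.8,
               "talent": 0.9, "capital": 0.9, "absorption": 0.7},
    "Europe": {"compute": 0.4, "energy": 0.5, "data": 0.6,
               "talent": 0.7, "capital": 0.5, "absorption": 0.6},
}

def relative_strength(player, weights):
    """Weighted average of a player's severity scores."""
    total = sum(weights.values())
    return sum(weights[c] * SEVERITY[player][c] for c in CONSTRAINTS) / total

# Equal weights as a baseline; a Streamlit-style UI would expose
# sliders here so the weighting itself becomes the discussion.
weights = {c: 1.0 for c in CONSTRAINTS}
for player in SEVERITY:
    print(player, round(relative_strength(player, weights), 2))
```

The interesting analytical move is not the arithmetic but the argument over the weights: making them explicit sliders forces a conversation about which constraints we actually believe matter most.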
This in turn allows for relative strength comparisons:
And so on. Modeling constraints is a coarse-grained method, of course, but it is a first approximation for anyone interested in geopolitical scenario work or simulations. The simulation model can also be tweaked, but currently it compares a number of different outcomes:
If you want to play around with this - coarse-grained! - simulator you will find it here.
There are also other ways of modeling the constraints, of course, including from an economic perspective - for example, looking at different regions against the Pareto frontier of AI capabilities (link here).
Or supply chain risk:
You can surely come up with other interesting ways of developing and deepening the model, and these are just some first sketches to allow us to explore the constraints more interactively.
These explorations of the economics and geography of constraints give us a sense of the geopolitical dynamics involved, but there is also another possible approach to the constraints - and that is to look at the question of when the constraints become absolute, or severely restricting. That is what we will look at next.
Limits
Our coarse-grained model of constraints has allowed us to explore different questions about geopolitics and economic dynamics, but we could equally be curious about what the constraints say in more absolute terms. What does this mean? The key here, I think, is essentially asking when we run out of things.
Here are a few questions that I think are interesting to explore:
When will the US energy supply become a significant constraint on further development or use of AI?
When will we run out of data?
When will we hit a computational wall in terms of algorithmic efficiency?
Now, these kinds of questions come in at least two flavors: the first is when we ask the question given today’s trends, and the second is when we ask what would happen if we really prioritized AI development and worked to push the constraints away as much as possible. On the spectrum between status quo and full commitment several different positions exist, and which country takes which position is naturally going to be key to understanding how the balance of capabilities shifts.
But even just formulating these questions allows us to ask some interesting things about the elasticity of the constraints: producing more energy and data centers requires time, resources and political will, and the pace at which this can be accomplished is different from the pace at which we can progress foundational data science, or increase access to and production of data.
An elasticity analysis seems to suggest that of the primary constraints, the most elastic - the one that can be produced fastest - is data. Not only because governments can choose to relax copyright laws and reform data protection law - or not even primarily for that reason - but because we can, as we have pointed out repeatedly here, deploy sensor networks and start collecting data at scale and speed in different domains of society: medicine, climate, transport and energy. We also have billions of sensor-rich phones - allowing us to collect data in ways that are distributed, rich and targeted.
What does a best effort approximation look like if we look at the speed with which we can shift the constraints? If we start from a historical baseline - and project, we end up with something like this:
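The arithmetic behind such a projection is simple: assume each constraint loosens at some annual rate, pick a ceiling, and solve for when the ceiling binds. The sketch below does exactly that - but note that every rate and every ceiling here is an invented placeholder for illustration, not the data behind the chart:

```python
import math

# Sketch of a constraint-shift projection: given an illustrative annual
# growth rate for each input and a hypothetical ceiling (expressed as a
# multiple of today's level), estimate the years of headroom left.
# All numbers are invented for illustration only.
GROWTH = {"energy": 0.03, "compute": 0.35, "data": 0.25}     # annual rates
HEADROOM = {"energy": 2.0, "compute": 100.0, "data": 20.0}   # ceiling / today

def years_until_ceiling(rate, headroom):
    """Solve (1 + rate)**t = headroom for t."""
    return math.log(headroom) / math.log(1.0 + rate)

for name in GROWTH:
    t = years_until_ceiling(GROWTH[name], HEADROOM[name])
    print(f"{name}: roughly {t:.0f} years of headroom")
```

Even toy numbers make the point in the text: a slow-growing input like energy can bind sooner than a fast-growing one, despite having a much lower ceiling - which is why the historical-baseline projection can mislead if the walls are not modeled explicitly.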
But this ignores some of the walls we know we are running up against - like energy shortages, and it vastly underestimates - I think - the data growth that we could unlock if we wanted to.
The reason this is interesting is that it suggests that any policy maker who is interested in AI-development can figure out where they get the most return on investment on different horizons. Energy networks - especially in Europe - need to be reinvented, not just renovated: we need microgrids, a different and much more decentralized network as well as a return to nuclear power (yes, looking at you Germany) — but that will take time. If Europe wants to build a position in the AI-race, its best bet might actually be data - and here I think it seems obvious that this is something that cities should be asked to think through and do. National, or worse, European projects to collect data will be too slow and clunky.
Competing in AI is, after all, competing on the constraints.
In closing
The study of constraints, of limits and of horizons is a different way of approaching forecasts and analysis. It is the exploration of what in biology is sometimes referred to as canalization—the tendency for development to follow particular pathways despite variation in conditions, creating predictable patterns even amid apparent chaos.
Military strategists understand this intuitively through terrain analysis. They know that landscape doesn't just influence troop deployment—it compels it. Mountains create chokepoints, rivers dictate crossing points, and valleys channel movement along predictable avenues of approach. Smart commanders don't fight the terrain; they read it and use its constraints to their advantage.
The same principle applies to technological and geopolitical development. Energy infrastructure, talent pipelines, and regulatory frameworks are the terrain features of our strategic landscape. They create chokepoints where development must pass, key terrain that confers advantages to whoever controls it, and avenues of approach that seem open until you realize everyone else is using them too.
This perspective shifts our analytical focus from predicting specific outcomes to mapping the channels themselves. We can't predict exactly what happens next, but we can map where the rivers are likely to flow. Understanding these constraint-carved pathways may be our best tool for navigating an uncertain future—not by fighting the terrain, but by reading it better than our competitors.
Thank you for reading!
Nicklas