Unpredictable Patterns #105: AIs with an infinity of senses
Sensors, intelligence, Umwelt, communication and the rise of massively multi-modal artificial intelligence in geopolitical competition
Dear reader!
Lots of good suggestions came in on the last note, on public health as soft power and why AI really matters there (and why the EU should pay attention!). You were rightly skeptical of the cost calculus, pointing out that DeepSeek seems to imply that compute costs might never rival drug costs in healthcare, and that might be right. I will still maintain that it is an interesting scenario to think about, as inference costs and healthcare costs will at some point be at least loosely coupled!
This week’s note is about something completely different: what happens when we build AI with an infinity of senses.
Sensors - natural and artificial
What is a sensor? In the crudest possible terms, it is something that detects or measures a property in the world and indicates, responds to, or records it in some way.
A thermometer and a barometer are both sensors, but it is not crazy to call the eye or the ear a sensor as well. Sensors produce sense data: data about the world that can then be interpreted or built on in different ways. Early analytical philosophy considered sense data a fundamental building block of our understanding of the world, and in the works of G.E. Moore, Bertrand Russell, A.J. Ayer and others the term was used to think about how we perceive the world.
Russell’s definition, in The Problems of Philosophy (1912), was simple:
Let us give the name of 'sense-data' to the things that are immediately known in sensation: such things as colours, sounds, smells, hardnesses, roughnesses, and so on.
This perceptual space - our sensorium - is defined by the sensors, because they give us the sense data that we have access to.
This all should remind us of Jakob von Uexküll's work on Umwelt, particularly his A Foray into the Worlds of Animals and Humans (1934), in which he provides a crucial biological foundation for understanding how different organisms inhabit distinct perceptual worlds based on their specific sensory capabilities. His famous example of the tick, which responds to just three sensory triggers (butyric acid, temperature, and mechanical pressure), demonstrates how even a seemingly simple sensorium can create a complete and functional world for an organism. The tick's world is not impoverished compared to ours; it is precisely tuned to its needs and survival requirements. This connects directly to Russell's notion of sense-data but suggests a more profound biological contextualization: sense-data isn't just the building block of human knowledge, but rather each species' unique set of sense-data constructs its specific reality.
This biological perspective enriches our understanding of sensors and sense-data by suggesting that what counts as a "sensor" is deeply tied to an organism's evolutionary history and ecological niche. Von Uexküll's concept of the "functional cycle" (Funktionskreis) shows how sensors don't just passively receive information but are part of a dynamic loop where the organism's actions and perceptions are intimately connected. A dog's nose isn't just a more sensitive version of our nose - it creates a fundamentally different world of meaning, where scent forms complex spatial and temporal landscapes that we can barely imagine. This suggests that when we think about artificial sensors and expanded sensoria, we need to consider not just the raw capabilities of the sensors but how they might integrate into meaningful functional cycles that create new forms of Umwelt.
Now, if we think about the sensorium that we inhabit - and the sensors that we have - we could argue that evolution has been extraordinarily parsimonious: it has given us, as Aristotle teaches, five senses to understand a world that is enormously complex.1
One way to think about that complexity is to imagine how many senses there could, theoretically, be. The answer to that question is simple: as many properties as can be measured or detected in the world. So, how many properties are there in the world? It is quite common to note that the set of “signals” or “properties” available in nature is enormous compared with the handful of sensory modalities that our species (and indeed, most animals) actually employ.
In other words, evolution appears to have “picked only a few” of a vast menu of possible information channels. For example, the electromagnetic spectrum extends from radio waves to gamma rays—a range of wavelengths spanning more than ten orders of magnitude—yet human vision is limited to roughly 400–700 nanometers (a range of about 0.1% or less of the full spectrum). Similarly, while there are many chemical features in the world (and many molecules exist), our olfactory and gustatory systems are sensitive only to a relatively narrow range of compounds and concentrations; and in audition, we can hear only frequencies from about 20 Hz to 20 kHz, whereas many animals can detect sounds well below or above that range.
Technological, artificial sensors have vastly expanded our “sensory” reach far beyond the narrow bandwidth of our biological senses. Human vision, for instance, captures a tiny fraction (on the order of 0.1% or less) of the electromagnetic spectrum, while modern devices can detect everything from ultraviolet to infrared; the artificial sensory space provided by today’s technology is many times larger. In many cases, specialized sensors can monitor a broad range of wavelengths, frequencies, and chemical concentrations that were previously inaccessible to us.
For example, an infrared camera can capture thermal data invisible to our eyes, and electronic noses can detect thousands of volatile compounds compared to our limited olfactory range. This massive expansion of detectable phenomena has enabled a sensory space that is easily an order of magnitude—or more—wider than the natural one.
Moreover, when we integrate various artificial sensors into a unified system, the “information bandwidth” can expand further. Today’s devices combine inputs from high-resolution cameras, multi-channel microphones, chemical sensors, and even haptic feedback devices, effectively building a multimodal sensory network that outstrips our biological senses by capturing a vastly richer array of data.
Quantitatively, while a human might be limited to a few dozen distinct sensory channels, modern sensor arrays can collect hundreds to thousands of independent data streams simultaneously. Thus, if one were to assign an order of magnitude to this expansion, it is not uncommon for artificial sensory systems to offer at least 10 to 100 times more information than our innate sensory modalities alone, depending on the domain and application.
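As a toy illustration of what "integrating many independent data streams" means in practice, here is a minimal multimodal buffer; all channel names and readings below are invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    channel: str      # e.g. "thermal_ir", "voc_ppb" -- invented channel names
    timestamp: float  # seconds
    value: float

@dataclass
class Sensorium:
    """A minimal multimodal buffer: one time-ordered stream per channel."""
    streams: dict = field(default_factory=dict)

    def ingest(self, reading: SensorReading) -> None:
        self.streams.setdefault(reading.channel, []).append(reading)

    def snapshot(self, t: float, window: float = 1.0) -> dict:
        """Latest value per channel within `window` seconds before time t --
        the fused cross-modal view a downstream model would consume."""
        view = {}
        for channel, readings in self.streams.items():
            recent = [r for r in readings if t - window <= r.timestamp <= t]
            if recent:
                view[channel] = recent[-1].value
        return view

s = Sensorium()
s.ingest(SensorReading("thermal_ir", 0.2, 310.5))
s.ingest(SensorReading("voc_ppb", 0.7, 12.0))
s.ingest(SensorReading("thermal_ir", 0.9, 311.0))
print(s.snapshot(1.0))  # {'thermal_ir': 311.0, 'voc_ppb': 12.0}
```

Scaling this from two invented channels to hundreds is mechanically trivial; the hard part, as the rest of this note argues, is making the fused view *meaningful*.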
The evolution of sensors
It is worthwhile to just look at the development of sensors for a bit, as it is quite an undervalued field, and yet one of the most fascinating technology trees out there!
As we noted, the evolution of sensor technology began with simple devices like the thermometer. Early thermometers, based on the physical properties of liquids expanding and contracting with temperature, provided a fundamental way to measure thermal energy. These analog instruments not only laid the groundwork for quantifying our environment but also inspired the principle that physical phenomena could be translated into measurable signals—a concept that became central to sensor development.
Modern sensors have since evolved far, and fast, beyond these initial devices. Leveraging advances in electronics, microfabrication, and digital signal processing, modern sensors such as MEMS (micro-electromechanical systems), optical, and chemical sensors now measure a vast array of parameters including pressure, motion, chemical composition, and more. This transformation has enabled real-time data acquisition in fields ranging from industrial automation and environmental monitoring to medical diagnostics and consumer electronics. The miniaturization and integration of these sensors into compact, low-power devices have dramatically expanded our ability to monitor and interact with the environment, providing a richer, more detailed understanding of complex systems, and expanding sensory space at a breakneck speed.
Another field is biosensors. Biosensors represent one of the most dynamic and rapidly evolving areas in sensor technology, combining biological components with physicochemical detectors. At their core, biosensors use biological elements – such as enzymes, antibodies, nucleic acids, or whole cells – as recognition elements that interact specifically with target molecules. These interactions are then converted into measurable signals through transducers, which can be electrochemical, optical, piezoelectric, or thermal in nature. Recent advances have led to remarkable developments in areas like continuous glucose monitoring for diabetes management, rapid pathogen detection in food safety, and real-time environmental toxin monitoring.
The field is particularly exciting due to emerging capabilities in CRISPR-based biosensing, which offers unprecedented specificity in molecular detection, and the development of "living sensors" using engineered bacteria or cells. One of the most promising frontiers is the development of wearable biosensors that can continuously monitor various biomarkers in sweat, tears, or interstitial fluid, potentially revolutionizing personalized medicine and preventive healthcare. These advances are being driven by improvements in nanotechnology, materials science, and synthetic biology, allowing for increasingly sensitive, specific, and miniaturized biosensor platforms.
The latest frontier in sensor technology is defined by nano and quantum sensors, which push the limits of detection and precision even further. Nano sensors2 exploit the unique properties of nanomaterials—such as high surface-to-volume ratios and tunable electronic characteristics—to detect minute concentrations of chemical or biological agents at the molecular level. Quantum sensors3, by harnessing phenomena like quantum entanglement, tunneling, and superposition, offer unprecedented sensitivity, potentially measuring changes at the level of single atoms or photons. Together, these advanced sensor technologies are expanding our sensory reach into domains previously considered inaccessible, enabling breakthroughs in early disease detection, environmental analysis, and beyond, and heralding a new era where our capacity to sense the world is limited only by the boundaries of quantum mechanics and nanotechnology.
While Moore’s law famously described the exponential increase in the number of transistors on a chip, a similar—but less formally defined—trend can be observed in the expansion of the “sensorial space” available through technology. Early sensors like the thermometer provided one basic channel of information about a single physical parameter, but over time we have developed increasingly sophisticated sensors that capture a far wider spectrum of environmental data. Modern sensors, ranging from MEMS devices and optical sensors to advanced chemical and acoustic detectors, allow us to tap into wavelengths and phenomena (ultraviolet, infrared, subatomic events, minute chemical concentrations) that our biological senses cannot perceive.
In effect, technology has multiplied the types and volume of sensory input far beyond our innate capabilities, adding channels of information across all domains.
This evolution is not only about miniaturization or increased resolution but also about the diversification of sensory modalities. Whereas humans - as we noted - naturally rely on a handful of senses, technological advances have introduced entirely new “senses” into our repertoire, such as electronic noses, quantum sensors for detecting minute changes in magnetic or electric fields, and even sensors that can sample information directly at the nanoscale. If we were to estimate the expansion quantitatively, one might argue that with every major technological breakthrough, the number of distinct environmental information channels accessible to us doubles or triples over a decade - and these breakthroughs happen in parallel, making the speed of expansion compound.
This rapid diversification—driven by advances in materials science, nanotechnology, and quantum mechanics—has expanded our sensorium by potentially one to two orders of magnitude relative to our biological baseline.
We could term this phenomenon, or “law”, the “Sensorium Expansion Principle”. Such a law would encapsulate the idea that the effective sensory space—measured in independent data channels or modalities—is growing exponentially as technology advances, much like Moore’s law did for computational power. Although the rate may be less predictable and standardized than Moore’s original observation, the trend reflects a dramatic shift in how we can perceive our environment: as technology augments and even transcends our natural senses, our capacity to capture and process information about the world expands into realms once thought unreachable.
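If one wanted to state the “Sensorium Expansion Principle” in the same form as Moore’s law, a minimal parameterization might look like the sketch below. The growth rate r is entirely hypothetical, chosen only to match the doubling-to-tripling-per-decade guess above:

```python
def sensorium_channels(c0: float, r: float, years: float) -> float:
    """Independent information channels after `years`, assuming the
    channel count multiplies by r every decade (hypothetical rate)."""
    return c0 * r ** (years / 10)

# E.g. starting from 10 channels, doubling vs. tripling per decade:
print(sensorium_channels(10, 2, 30))  # -> 80.0
print(sensorium_channels(10, 3, 30))  # -> 270.0 (up to float rounding)
```

The point of writing it down is not the numbers but the shape: like Moore’s law, it is an empirical compounding claim, not a law of nature.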
This overwhelming sensorium has until now been tapped only in silos or narrow use cases, but we now have a chance to apply entirely new tools to understand and explore it in more detail; with artificial intelligence, we may make the entirety of the sensorium accessible and process it in ways that will give AI access to the world in a radically different way.
Intelligence and sense parsimony
We noted that evolution has been parsimonious in providing us with senses, and our bio-sensorium is probably optimized for survival: we sense what matters to our fitness, and not other things. Within this sensorium we have developed agency; when that agency is looped in on itself to structure itself recursively it becomes cognition, and when that cognition is looped in on itself it becomes consciousness. Evolution's recursive strategies for building responses to more and more complex challenges - including mind - are thus maximized on the basis of the sensorium, as von Uexküll showed.
This means that mind is limited by sense (duh!), and so what our minds can be is ultimately determined by our sensorium.
One question that then quickly arises is whether there is a limit that works both ways here: is it possible to imagine a mind that is built on an almost infinite sensorium? Or is there a limit to how large a sensorium a mind can entertain? And if so - are we close to breaking that limit with artificial intelligence? Or does intelligence always have to exist within a limited sensorium?
That is: can we build an AI system that is trained on a vast sensorium, and if we can, can we communicate with it? How large an Umwelt can we give an AI? How large an Umwelt can support intelligence?
The evolutionary parsimony of our sensorium reflects a fundamental truth about biological optimization: sensing is really metabolically expensive. Each sensory capability requires energy to maintain and process, and evolution has effectively stripped away anything not directly contributing to fitness. This principle extends beyond just the basic five senses to include our entire range of perception - from our ability to sense balance through the vestibular system to our capacity to detect changes in blood oxygen levels. The fact that we can't sense magnetic fields like some birds, or detect electrical signals like electric eels, isn't a limitation but rather an optimization - these capabilities simply weren't worth the biological investment for our evolutionary path.
This optimization principle raises fascinating questions about artificial intelligence and its potential sensorium. Unlike biological systems, AI isn't constrained by metabolic costs in the same way (though remember energy cost!)4 - it could theoretically integrate a vast array of sensors, from quantum states to cosmic radiation, from subtle chemical gradients to gravitational waves. However, this theoretical possibility bumps up against a different kind of limitation: the challenge of meaningful integration. Just as our consciousness emerged from the recursive processing of our limited sensorium, an AI's "consciousness" or intelligence would need to meaningfully integrate its sensory inputs into a coherent model of reality. The question then becomes not just how many sensors we can attach to an AI, but how many senses can be meaningfully integrated into a coherent information-processing system.
This concept of meaningful integration may point to a possible theoretical limit on sensorium size that might apply to any form of intelligence, artificial or biological. This limit might not be based on the number or type of sensors, but rather on the system's capacity to create useful relationships between different types of sensory information. Consider how our brain creates cross-modal associations - how we can "taste" colors or "feel" sounds in synesthesia. These associations suggest that consciousness requires not just the ability to process individual sensory streams, but to create meaningful relationships between them.
As the number of sensor types increases, the number of potential relationships between them increases exponentially, potentially creating a computational ceiling that even advanced AI systems might not be able to breach.
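The combinatorics behind this ceiling are easy to make concrete: pairwise cross-modal associations grow quadratically with the number of modalities, while associations over arbitrary combinations of senses grow exponentially. A small sketch:

```python
from math import comb

def pairwise_relations(n: int) -> int:
    """Number of distinct cross-modal pairs among n senses: n choose 2."""
    return comb(n, 2)

def subset_relations(n: int) -> int:
    """Number of non-trivial sense combinations (subsets of size >= 2):
    2^n minus the n singletons and the empty set."""
    return 2 ** n - n - 1

for n in (5, 50, 500):
    print(n, pairwise_relations(n), subset_relations(n))
```

At five senses there are 10 pairs and 26 combinations; at fifty sensor types there are already over a quadrillion combinations. Any integrating system has to prune almost all of them.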
This brings us to a crucial distinction between sensing and knowing. An AI might have access to vast amounts of sensor data - from radio telescopes to particle accelerators - but that doesn't necessarily translate into a broader or more comprehensive understanding of reality. Just as a human expert in a specific field might have access to specialized instruments but still operates within the framework of human consciousness, an AI might need to compress or translate its expanded sensorium into more manageable frameworks to achieve anything we would recognize as intelligence or consciousness. This suggests that while we might be able to give an AI access to an enormous sensorium, its actual Umwelt - its lived experience and understanding of the world - might need to be more constrained to support coherent intelligence.
What we could see emerging, then, is the dynamic instantiation of different Umwelts to deal with different kinds of problems. A mixture of Umwelts, just as we have a Mixture of Experts, perhaps (but not quite): an AI that instantiates an Umwelt for the solving or exploration of a problem, perhaps with random dimensions for deep scientific exploration, and then works through the problem in that Umwelt.
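One could caricature this mixture-of-Umwelts idea as a routing step that instantiates a task-specific sensor subset before any processing happens. In the sketch below, the channel names and the routing rule are invented for illustration only:

```python
import random

ALL_CHANNELS = [
    "visible", "thermal_ir", "magnetometry", "voc_chemistry",
    "seismic", "rf_spectrum", "gravimetry",  # invented channel names
]

def instantiate_umwelt(task_channels: list,
                       n_random: int = 2,
                       seed=None) -> list:
    """Build a task-specific Umwelt: the channels the task is known to
    need, plus a few randomly drawn extra dimensions - echoing the idea
    of 'random dimensions for deep scientific exploration'."""
    rng = random.Random(seed)
    extras = [c for c in ALL_CHANNELS if c not in task_channels]
    return task_channels + rng.sample(extras, min(n_random, len(extras)))

umwelt = instantiate_umwelt(["visible", "thermal_ir"], seed=0)
print(umwelt)  # two task channels plus two randomly drawn extras
```

The analogy to Mixture of Experts is loose, as the text says: here the routing selects *inputs* rather than subnetworks, which is precisely what makes it an Umwelt rather than an expert.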
Communicating across Umwelts
Another way to explore the idea of an AI with a vast Umwelt is to think about communicating with it.
Our sensorium is deeply intertwined with our language. Language emerges as a tool to represent, categorize, and share these sensory experiences, effectively encoding what we see, hear, feel, taste, and smell into words and symbols. In many cultures, the lexicon reflects the relative importance of different senses; for example, societies that rely heavily on visual information tend to have rich vocabularies for color and spatial relationships, while others might have more nuanced terms for smells or tactile sensations. This intimate coupling means that the evolution of our language has been shaped by our biological sensory limitations, with our cognitive categories and metaphors built around the kinds of stimuli our sensors are tuned to detect.
Now, imagine an intelligence with a sensorium 100 times larger than ours—one that perceives vast ranges of electromagnetic wavelengths, ultra-fine chemical gradients, and minute vibrational cues far beyond human capability. Such an expanded sensorium would likely lead to a radically different internal experience of the world. The concepts and categories that form the basis of language would be profoundly altered, as that intelligence would have perceptual data that are completely outside our natural experiential range. Its language might include distinctions for phenomena that are utterly alien to human perception. Consequently, even if this being could learn our language, the gap in experiential reference—the raw data that give meaning to words—could render our linguistic expressions imprecise or impoverished relative to its sensory reality and Umwelt.
Could we really understand or communicate with such a being at all?
This is almost the question that Thomas Nagel asks in his paper “What Is It Like to Be a Bat?”, where he notes that the experience of a bat is different from our own (if we simplify):5
We must consider whether any method will permit us to extrapolate to the inner life of the bat from our own case, and if not, what alternative methods there may be for understanding the notion.
The experience of an AI with a vastly larger Umwelt would make the bat-case seem trivial in comparison. The question of communication with such an AI becomes particularly intriguing, then. The AI's basic concepts and categories might be fundamentally alien to us, built on sensory experiences we can't access.
We can, however, communicate in some ways even with bats - it is just that it is very noisy and rudimentary. The key might lie in finding areas of overlap between our Umwelten, or in developing new frameworks that can bridge the gap between different ways of sensing and understanding reality. This could lead to new forms of human-AI communication that go beyond our traditional linguistic and conceptual frameworks, potentially enriching both human and artificial intelligence in the process.
((There might be zones of sense, just as Vinge imagined zones of thought.))
So we end up with two intriguing questions here: how large an Umwelt can support intelligence, and can we communicate at all with an AI that has a vastly larger Umwelt?
Umwelt as a competitive dimension
These are not just idle musings, however. The reality is that we are already capable of building vast sensor networks, giving AI systems real-time access to them, and training not just what we today call “multi-modal systems” but massively multi-modal systems built on a stunning array of different sensors. Such systems will be able to explore very different parts of the possible sensorium, and find insights and build knowledge about the world based on pattern recognition across domains of that sensorium that are inaccessible to us (this is a version of the black box problem: the opaque-sensorium version).
It is also possible that these systems will advance new capabilities in science and technology in ways that will be useful, even if not entirely understandable. If context is a competitive dimension for models, then certainly Umwelt is too.
But this competition is not just for companies, of course - but also for nation states. The first to build these new sensor arrays and connect them will also be the first to build a new, technological sensorium attached to the quickly advancing artificial intelligence technologies. Investing in sensors, then, might be equally interesting as investing in compute. That means that the sensorium or Umwelt might have geopolitical implications as well.
Here, the rapid technological expansion of our sensorium is not merely a theoretical curiosity—it is redefining the very nature of intelligence and strategic capability.
Imagine a military AI system that seamlessly integrates satellite imagery, signals intelligence, social media data, atmospheric conditions, electromagnetic signatures, seismic activity, and thousands of other data streams. This “hyper-sensorium” would enable the system to detect subtle patterns and correlations across immense and varied data spaces, revealing insights into troop movements, economic activities, and environmental changes that remain invisible to traditional, human-limited analysis.
This unprecedented capability establishes a significant strategic asymmetry. Nations or organizations capable of building and controlling these advanced sensor networks would obtain a fundamentally different, deeper understanding of reality compared to those that rely solely on conventional sensors.
The ability to integrate and analyze data from previously isolated sensory domains creates entirely new categories of knowledge. For instance, an AI system that concurrently examines atmospheric chemistry, ocean currents, and global economic trends could uncover novel environmental-economic relationships that human scientists have never considered.
The race to develop such integrated sensor systems is quickly emerging as a critical element of both national security and economic competition. A country deploying a vast network of advanced sensors—such as quantum, bio or nano sensors—could detect underground facilities or covert military equipment in ways that traditional methods cannot match.
More importantly, these capabilities would yield insights into fundamental physical processes and real-time events, offering advantages that extend well beyond mere data accumulation and into the realm of actionable intelligence, creating new strategic options.
This shift in emphasis—from building larger models and faster computers to expanding the artificial sensorium—could, in one scenario, reframe the future of AI development. The next frontier is not solely about algorithmic improvements but about creating, deploying, and integrating sophisticated sensor networks that redefine how data is acquired and interpreted. In this light, the key competitive advantage may lie in who can harness the full potential of this expanded sensor array to achieve a richer, more actionable understanding of the world.
This is a hyper-charged version of the idea that data infrastructures are key to developing the next generation AI.
The geopolitical implications of this sensor revolution could be fascinating. Imagine nations having to consider a concept of “sensory sovereignty”: the ability to control and protect the sources of their data and to counteract adversaries’ sensor networks. This emerging field of “sensory competition” could lead to new arms races, where the battle is fought over who can best sense, understand, and respond to global developments.
In parallel, maybe the debate on AI safety must be broadened to encompass the underlying sensory infrastructure that drives these systems. Robust investment in sensor technology, data integration, and governance will be essential to ensure that these powerful new capabilities are used and deployed responsibly.
And yes, as you might have guessed, I think the EU should consider this very carefully and invest in sensor networks, as I believe we have a relative competitive advantage in this space. Companies like EnOcean (no affiliation) and others have driven standardization in the field of low-energy sensors, and if Europe leaned even further into this rapidly emerging space, there is arguably a significant opportunity to lead internationally!
If we want to.
Thanks for reading,
Nicklas
It should be noted here that there is a live debate about how many senses we actually have, and the numbers range from 5 to over 50. See for example: Wilson, K.A., 2021. How Many Senses? Multisensory Perception Beyond the Five Senses. - available at https://philarchive.org/archive/WILMP-8
See eg Riu, J., Maroto, A. and Rius, F.X., 2006. Nanosensors in environmental analysis. Talanta, 69(2), pp.288-301.
See for an introduction to the concepts Degen, C.L., Reinhard, F. and Cappellaro, P., 2017. Quantum sensing. Reviews of modern physics, 89(3), p.035002. See also https://www.nature.com/articles/d41586-023-01663-0
Sensors are becoming more energy efficient, however! See https://www.azom.com/news.aspx?newsID=63034
Nagel, T., 1980. What is it like to be a bat?. In The language and thought series (pp. 159-168). Harvard University Press.