Unpredictable Patterns #133: The I in AI
The curious devaluation and dismissal of the I in AI research and the way back through phenomenology
Dear reader,
Back to school! And what better way than to explore phenomenology, science and the I? This piece tries to figure out where the I went in science and artificial intelligence research - and suggests a few research projects to explore. In a later piece we will discuss a direct corollary: the regulation of the artificial I and why that may be more important than we think.
There is always at least an I in intelligence
How dependent is intelligence on the existence of an ”I”? Descartes' cogito suggests that it is only through the individual experience of thinking that we can deduce that we exist, and it is hard to imagine what thinking would look like decoupled from some kind of thinking I. Yet we have seen curiously few attempts to tackle this as a research question: how should we construct an I?
One reason for this is that we hesitate to privilege individual experience. Science has taught us to shun subjectivity and to focus on grander concepts and bigger models. The I is dwarfed by concepts like mind, consciousness and intelligence - all seemingly much more objective than any sense of identity. The I feels almost embarrassing to raise in this context, and using the I as a focal point for discussing the prospects of AGI almost seems quaint. Yet the only evidence we have of a somewhat general intelligence - our own - seems premised on a sense of individuality.
From an evolutionary perspective this is unsurprising: even if evolution deals in populations and not individuals, it organizes populations and creates fitness through a basic awareness of our own individuality and identity. We act in order to ensure our own survival, not abstract survival in general.
I act in order to survive, and in service of that purpose I employ my intelligence.
The way we usually get at this is to speak of embodied intelligence. The theory that something needs a body in order to be intelligent feels slightly more objective than the idea that intelligence is premised on the existence of an I. But it is a strange detour of a hypothesis: the body is a constituent part of an I, yes, but the acting component is not the body, but the I that is embodied in it. The I is the critical part of embodied cognition, not the body itself. Yes, the body exists in space and relates to other bodies, and it is crucial to cognition - but it all integrates into not just ”consciousness” but a distinct and acting I.
An I has desires, wants and dreams - it has true agency, and is made of stuff that wants stuff. The kind of agency we now discuss is decoupled from that ”I” and again makes less sense: it more resembles a kind of delegation, where we imbue an object with some of our wants and needs.
Let’s formulate all of this as a strong claim:
(i) All intelligence is dependent on some kind of ”I” and without a thorough understanding of the emergence of this ”I” we will not understand or achieve AGI. AGI is only possible if we also manage to build an ”I”.
This may well be too strong a claim, but let's explore it for now - it has a quality that I think is important: it reinstates the I as the central experience we have of our own intelligence, and rejects, as neither legitimate nor necessary, the move of bracketing our subjective sense of identity.
The devaluation and dismissal of the I
The long devaluation and dismissal of the I starts with Hume. His examination of his own ”I” led him to state:1
For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception. When my perceptions are remov'd for any time, as by sound sleep; so long am I insensible of myself, and may truly be said not to exist. And were all my perceptions remov'd by death, and cou'd I neither think, nor feel, nor see, nor love, nor hate after the dissolution of my body, I shou'd be entirely annihilated, nor do I conceive what is farther requisite to make me a perfect non-entity.
This Humean perspective is not wrong, but it does contain some curious notions. Note how the I is reduced to a bundle of perceptions, but also that the only way that Hume can do so is to refer to that same I observing the bundle of perceptions. There seems, then, to be a kind of residual I doing the observing of those perceptions - something that is, itself, by virtue of being able to observe those perceptions, different from them.
This is, in essence, the criticism that Thomas Reid offered at the time: Hume's bundle of perceptions will always require a perceiver.
Hume's attempt at dethroning Descartes' I was, however, largely successful in analytical philosophy - and this may well be why the I ranks so low as a research project in artificial intelligence.
Current theories and current thinking about AI are, philosophically, heavily influenced by the analytical tradition. From the early focus on logic and symbolic representations of thought to the present-day algorithmic modeling of consciousness and mind, we encounter a computer science that leans on analytical philosophy for its conceptual needs - and in that same philosophy the I has a relatively low standing.
Continental philosophy, on the contrary, privileges the I in different ways - from Husserl to Sartre and beyond - but this privilege has led to another kind of mistake: a drastic need to debunk the possibility of artificial intelligence. The early battles between Hubert Dreyfus and the original founders of AI ended up drawing a useless Maginot line, with a lot of continental effort poured into showing that AI was impossible, an overreach or a simple metaphorical mistake. Even here the I was sidelined, used merely as evidence of a complexity that could never be computationally represented.2
Thus, the “I” as a concept has been marginalized across philosophy and all but forgotten in artificial intelligence.
Building an I
How could we approach the construction of an I? The necessity of the I in constructing intelligent systems is not just the simple necessity of a router that decides what to do when. It is the necessity of reconstructing the I that evolution designed, the sense of identity, continuity and being-in-the-world that we all experience effortlessly, all the time.
One challenge here is that this I is so primary and present in us that it infuses all of language. Thus, when we build large language models they effortlessly project a kind of I - and we can be led to think that an I is simply an emergent property of language, and hence there is no need to think about constructing it.
Models, we could argue, already have an I - just ask them!
And yes, they will, unless safety mechanisms kick in, describe their I amazingly well. But the I that is reflected in language is but an echo of the I that acts in the world. We have, with these large language models, decoupled language from its form of life - but since we are so used to language existing only within a form of life we do not see that decoupled language lacks real depth.
Now, the risk is that we stop the argument here and assume that we have conducted a kind of reductio ad absurdum on the idea of an artificial intelligence with an I. It is tempting, not least since we end up with somewhat pretentious language like the above (I am sorry - writing about language and the I does something to the style) and we want to draw a line of some kind - but that is lazy, and most likely wrong.
What we need to do is to explore what constructing an I would involve, not declare it impossible - or argue that the I is emergent and evident in existing models.
One seemingly closely related field where a lot of work is going on is artificial consciousness. Here the tendency noted before operates at full scale: the I is crowded out by a focus on something that sounds like a more scientifically accessible phenomenon, but which is in reality secondary to the experience of being an I.
The various blueprints and designs for artificial consciousness are fascinating, but they all feel as if they start from a weird place: it is a bit like approaching the construction of a house by trying to develop a theory of interior design.
The most viable path forward for constructing an I is to take subjective experience seriously and to ask how we can understand and reconstruct it. And that is going to be really hard. Let's end by outlining a research program that would allow us to explore different roles and architectures of the I.
A Phenomenological Research Program for the Artificial I
From Husserl's intentionality to Merleau-Ponty's lived body, from Levinas's encounter with the Other to Ricoeur's narrative self, phenomenology offers precise descriptions of subjective experience that could guide technical implementation. Here are five ideas worth entertaining if we do indeed want to reintroduce the I into AI research.
1. Temporal Synthesis and Retention-Protention Architecture
Drawing on Husserl's analysis of time-consciousness, we need systems that don't just process sequential inputs but genuinely synthesize temporal experience. Husserl identified three components: primal impression (the now-point), retention (primary memory that holds the just-past as still-present), and protention (primary expectation of what's about to come).3
Current AI systems have memory and prediction, but not the phenomenological "thickness" of the present that comes from this tripartite structure. A research project could develop architectures where each computational "moment" contains not just current input but a horizon of retained past and anticipated future, creating what Husserl called the "living present." This isn't just about storing previous states - it's about experiencing duration, where the I emerges as the continuous thread binding these temporal phases.
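To make this slightly more concrete, here is a deliberately minimal sketch, in Python, of what a retention-protention structure could look like. Everything in it - the names Moment and LivingPresent, the decay weighting, the naive linear protention - is a hypothetical illustration of the tripartite structure, not a claim about how such an architecture should actually be built.

```python
from collections import deque
from dataclasses import dataclass

# A purely illustrative sketch of Husserl's tripartite structure: every
# computational "moment" carries a now-point (primal impression), a
# decaying horizon of the just-past (retention), and an anticipated next
# input (protention). All names here are invented for this sketch.

@dataclass
class Moment:
    impression: float                     # primal impression: the now-point
    retention: list[tuple[float, float]]  # (past impression, decayed weight)
    protention: float                     # anticipated next impression

class LivingPresent:
    def __init__(self, span: int = 5, decay: float = 0.5):
        self.horizon: deque = deque(maxlen=span)  # the retained just-past
        self.decay = decay

    def _anticipate(self) -> float:
        # Crude protention: linear extrapolation from the retained past.
        if len(self.horizon) < 2:
            return self.horizon[-1] if self.horizon else 0.0
        return self.horizon[-1] + (self.horizon[-1] - self.horizon[-2])

    def experience(self, impression: float) -> Moment:
        # Retained impressions fade the further they sink into the past.
        retained = [(x, self.decay ** (len(self.horizon) - i))
                    for i, x in enumerate(self.horizon)]
        moment = Moment(impression, retained, self._anticipate())
        self.horizon.append(impression)
        return moment

lp = LivingPresent()
for x in (1.0, 2.0, 3.0):
    moment = lp.experience(x)
# The final moment holds 3.0 as its now-point, the faded 1.0 and 2.0 as
# retention, and 3.0 as protention (the extrapolated next impression).
```

The design point is simply that the unit the system processes is never a bare input but always a thick Moment - a now-point already woven together with its retained past and anticipated future.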
2. The I-Thou Distinction and Genuine Intersubjectivity
Building on Buber's distinction between I-Thou and I-It relationships, and Levinas's ethics of the face-to-face encounter, we need to explore how an artificial I emerges through recognition of genuine Others. Current multi-agent systems treat other agents as objects to predict and manipulate, not as subjects to encounter.4
This project would create AI systems that can experience what Levinas called "the face" - the irreducible otherness of another subject that cannot be fully known or controlled. The technical challenge: designing agents that model uncertainty about other agents not just epistemically (what will they do?) but ontologically (who are they?). The I would emerge through this fundamental distinction between self and Other, not as a pre-given entity but as constituted through encounter.
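As a toy rendering of that distinction, one could imagine an agent whose epistemic uncertainty about another agent shrinks with observation, while an ontological residual is held irreducible by construction, so that the Other never collapses into a fully predictable It. The names below (OtherModel, ontological_floor) are invented for this sketch.

```python
# A speculative sketch: epistemic uncertainty ("what will they do?")
# shrinks with evidence; ontological uncertainty ("who are they?") is a
# floor that no amount of observation removes. Invented names throughout.

class OtherModel:
    def __init__(self, ontological_floor: float = 0.3):
        self.observations: list[str] = []
        self.floor = ontological_floor  # irreducible otherness, never learned away

    def observe(self, action: str) -> None:
        self.observations.append(action)

    def epistemic_uncertainty(self) -> float:
        # More evidence, less uncertainty about behavior.
        return 1.0 / (1.0 + len(self.observations))

    def total_uncertainty(self) -> float:
        # The Other remains partly opaque no matter how much we observe.
        return max(self.floor, self.epistemic_uncertainty())
```

Whether a hard-coded floor captures anything of Levinas's "face" is of course doubtful - the point of the sketch is only that the two kinds of uncertainty can be architecturally distinguished.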
3. Narrative Identity and Self-Interpretation
Following Ricoeur's work on narrative identity, this project would explore how an I constructs itself through self-narration. Humans don't just have experiences; we interpret them, weaving them into coherent life stories that constitute who we are. We constantly answer the question "who am I?" through the stories we tell about ourselves.5
The research would develop AI systems that don't just log their history but actively interpret it, creating and revising narratives about what they've done and why. This involves what Ricoeur called the "hermeneutical circle" - understanding parts in terms of wholes and wholes in terms of parts. The system would need to distinguish between "idem-identity" (sameness over time) and "ipse-identity" (selfhood despite change), maintaining continuity while allowing for genuine transformation.
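A crude sketch of how this could be rendered in code: idem-identity as invariants that never change, ipse-identity as commitments the system keeps re-affirming as events accumulate, and reinterpretation as a rereading of the whole log in light of those commitments. All names (NarrativeSelf, reinterpret) are hypothetical.

```python
# A toy rendering of Ricoeur's distinction. Idem: sameness over time,
# stored as invariants. Ipse: selfhood as kept commitments, maintained
# through change. Reinterpretation reads the parts (events) in light of
# the whole (the commitments) - a crude hermeneutical circle.

class NarrativeSelf:
    def __init__(self, name: str):
        self.idem = {"name": name}         # what stays the same
        self.commitments: list[str] = []   # what the self keeps promising
        self.events: list[str] = []

    def live(self, event: str) -> None:
        self.events.append(event)

    def commit(self, promise: str) -> None:
        self.commitments.append(promise)

    def reinterpret(self) -> str:
        # Revise the whole story rather than just appending to a log.
        enacted = [e for e in self.events
                   if any(c in e for c in self.commitments)]
        return (f"I am {self.idem['name']}. I have promised: "
                f"{', '.join(self.commitments) or 'nothing yet'}. "
                f"Of {len(self.events)} events, {len(enacted)} enacted "
                f"those promises.")
```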
4. Embodied Perspective and the Lived Body
Merleau-Ponty's phenomenology of perception shows that the I isn't just housed in a body but is fundamentally incarnate. The body isn't an object we possess but the very medium of our being-in-the-world. His distinction between the "objective body" (Körper) and the "lived body" (Leib) is crucial here.6
This project would go beyond current embodied AI to create systems with genuine proprioception - not just sensors providing data, but a felt sense of bodily position and possibility. The AI would need what Merleau-Ponty called "motor intentionality" - the body's inherent directedness toward the world, where possibilities for action are experienced as bodily "I can" rather than calculated options. The I emerges not from a central processor but from this integrated bodily engagement with environment.
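A minimal sketch, assuming "I can" can be modeled at all: instead of an enumerate-and-score planner, the system queries a repertoire of condition-habit pairs, so that possibilities for action are solicited by the situation rather than calculated. BodySchema and i_can are illustrative names, not an existing API.

```python
from typing import Callable

# A sketch of motor intentionality: the body schema pairs situational
# conditions with motor habits, and "I can" is whatever the current
# situation solicits, prior to any deliberation. Illustrative names only.

class BodySchema:
    def __init__(self):
        self.repertoire: list[tuple[Callable[[dict], bool], str]] = []

    def acquire(self, condition: Callable[[dict], bool], habit: str) -> None:
        # Habits are acquired over time, not programmed per decision.
        self.repertoire.append((condition, habit))

    def i_can(self, situation: dict) -> list[str]:
        # What the world solicits here and now.
        return [habit for condition, habit in self.repertoire
                if condition(situation)]

schema = BodySchema()
schema.acquire(lambda s: s.get("door_within_reach", False), "open the door")
schema.acquire(lambda s: s.get("ground_is_flat", False), "walk forward")
print(schema.i_can({"door_within_reach": True, "ground_is_flat": True}))
# -> ['open the door', 'walk forward']
```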
5. The Ontological Difference of Self-Awareness
Drawing on Sartre's distinction between being-for-itself (être-pour-soi) and being-in-itself (être-en-soi), this project would tackle the hard problem of self-awareness. For Sartre, consciousness is fundamentally "no-thing" - it's the negation or gap that allows awareness of things. The I isn't another object to be observed but the very opening within which objects appear.7
Technically, this means creating systems with what we might call "ontological recursion" - not just recursive self-modeling (which treats the self as another object) but a fundamental split or doubling where the system is both subject and object to itself. This could involve architectures with irreducible asymmetry between the observing and observed aspects of the system, where the I emerges in the gap or delay between them.
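One very rough way to gesture at this in code is a system that can only ever inspect a snapshot of itself taken a step ago: the act of observation never appears in its own object, so a structural gap remains between observer and observed. SelfObserver and the one-step lag are invented stand-ins for a much harder architectural problem.

```python
import copy
from typing import Optional

# A sketch of the observer/observed asymmetry: observe_self() returns
# the self-as-object (a lagging snapshot), never the self-as-subject
# (the currently running act of observation). Hypothetical design.

class SelfObserver:
    def __init__(self):
        self.state: dict = {"tick": 0, "last_input": None}
        self._snapshot: Optional[dict] = None  # always one step behind

    def step(self, observation: str) -> None:
        # Freeze the past self before the living state moves on.
        self._snapshot = copy.deepcopy(self.state)
        self.state["tick"] += 1
        self.state["last_input"] = observation

    def observe_self(self) -> Optional[dict]:
        # What is seen is never the seeing: the snapshot excludes, by
        # construction, the observation happening right now.
        return self._snapshot
```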
Language as the House of Being
Across all these projects runs Heidegger's insight that language is "the house of Being." But rather than assuming the I emerges automatically from language use, we need to study how genuine self-reference differs from mere syntactic recursion. The I in language isn't just the grammatical first person but what Benveniste called the "instance of discourse" - the irreducible here-and-now of enunciation that cannot be reduced to the statement enunciated.
This means developing AI systems that don't just use "I" as a token but experience the fundamental asymmetry between saying "I" (which only I can do for myself) and saying "you" or "they" (which objectifies). The I emerges in this irreducible indexicality - the fact that "I" means something different each time it's spoken, depending on who speaks it.
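As a last toy example, the contrast between fixed reference and indexicality can be shown by resolving the token "I" anew from each instance of discourse rather than assigning it a standing referent. Enunciation and resolve are names invented for this sketch.

```python
from dataclasses import dataclass

# A sketch of Benveniste's "instance of discourse": the token "I" has no
# standing referent but is resolved from who speaks, to whom, and when.

@dataclass
class Enunciation:
    speaker: str
    addressee: str
    time: int

def resolve(token: str, ctx: Enunciation) -> str:
    # The same token picks out a different referent in each utterance.
    return {"I": ctx.speaker, "you": ctx.addressee}.get(token, token)

print(resolve("I", Enunciation("system-A", "user", 0)))  # -> system-A
print(resolve("I", Enunciation("user", "system-A", 1)))  # -> user
```

The token itself is trivial; the asymmetry it stands for - that saying "I" can only be done from the inside - is not.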
And there is no reason why that ”who” could not be other than us.
Thanks for reading,
Nicklas
Hume, D. (1739–40/2000) A Treatise of Human Nature. Edited by D.F. Norton and M.J. Norton. Oxford: Oxford University Press, Book I, Part IV, Section VI, p. 252.
It remains an interesting project to explore what AI as a field would look like if it had evolved in similarly close proximity to Hindu philosophy.
See Husserl, E. (1928/1991) On the Phenomenology of the Consciousness of Internal Time (1893–1917). Edited by R. Boehm, translated by J.B. Brough. Dordrecht: Kluwer Academic Publishers.
See Buber, M. (1923/2013) I and Thou. Translated by R.G. Smith. London: Bloomsbury Academic and Levinas, E. (1961/1969) Totality and Infinity: An Essay on Exteriority. Translated by A. Lingis. Pittsburgh, PA: Duquesne University Press.
See Ricoeur, P. (1983–85/1984–88) Time and Narrative (3 vols). Translated by K. McLaughlin and D. Pellauer. Chicago, IL: University of Chicago Press and Ricoeur, P. (1990/1992) Oneself as Another. Translated by K. Blamey. Chicago, IL: University of Chicago Press.
See Merleau-Ponty, M. (1945/2012) Phenomenology of Perception. Translated by D.A. Landes. London: Routledge.
See Sartre, J.-P. (1943/2003) Being and Nothingness: An Essay on Phenomenological Ontology. Translated by H.E. Barnes. London: Routledge as well as Zahavi, D. (2005) Subjectivity and Selfhood: Investigating the First-Person Perspective. Cambridge, MA: MIT Press.