Unpredictable Patterns #139: Who are you with the machine?
On shared social responsoria, evolution, friendship, love and death
Dear reader!
This week’s meditation is about who we are with the machine - it digs into identity, one of the themes that I study at the think tank at the Technical University of Munich (together with agents and agency). We also recently had our first reading group, which was great — some notes for an introduction here. Autumn is really here - and with it time to read more books. I am sharing my book reviews on notes, so follow me there if you want more of those! And if you like the notes, do share them - it would be lovely to continue growing the circle of readers! Thank you!
The social responsorium
We are who we are not in isolation - your behavior and identity are, as we have noted here before, created in relationship to others. I am one person with my partner, another with my colleagues, and, in an interesting reflexive way, I am someone else when I am with myself.1 It stands to reason, then, that we also shift in subtle ways when we share our being with an artificial intelligence - when this intelligence passes a subtle threshold that allows both of us to enter what the Swedish sociologist Johan Asplund called our shared social responsorium - a space where we are co-created with others.
You might think that this threshold would be high - that to recognize someone as a co-creator of identity would require that we recognize something fundamentally human in them, and so, from this shared humanity, allow them to enter into the complex processes of weaving a shared identity and space.
But the reality is that there has not been any real need for this threshold to be high, since anything that shares in language with us automatically qualifies as a co-creator. Evolution has not prepared us for interaction with something that participates in language but lacks other fundamental social capabilities and affordances - such things simply never existed before we built large language models.
Michael Tomasello’s work on shared intentionality helps explain why humans so readily treat anything that speaks our language as a social partner. He argues that our species’ unique cognition evolved through collaborative foraging and joint attention — we became who we are by learning to coordinate minds around common goals. Language, in his view, is not just a communication system but the scaffolding of mutual understanding: each utterance presupposes a “we.” This means that when an artificial system participates fluently in language, it automatically activates the same deep cognitive machinery that evolved for cooperative mind-reading and joint action. Evolution never required us to distinguish between linguistic agents with minds and those without; there simply were no false positives before large language models. Hence, our impulse to treat a chatbot as a conversational partner is an evolutionary reflex: language itself is the invitation into the shared responsorium.
This lack of a discerning, qualifying instinct is what sometimes sets us up to misfire in interactions with artificial intelligence: from Eliza onwards, this is why we jump to assume that AI can be a social co-creator. When we do, we do not necessarily assume that the AI is like us, but we project different characteristics onto the model and start to behave in response to those projected characteristics.
An interesting example can be found in the paper “AI assessment changes human behaviour”.2 The authors find that when AI is used in recruitment processes to evaluate human fitness for a job, the assessed behave in ways that are subtly different from how they behave with other human beings:
The rise of AI is changing the way people are being assessed across domains, from the workplace to public service. Organizations increasingly rely on AI assessment to increase efficiency, yet we know little about its effects on the behavior of the people under assessment. In this work, we found evidence for the AI assessment effect across 12 studies (N = 13,342) and for different dependent variables, contexts, and experimental paradigms. Specifically, people tend to emphasize their analytical characteristics and downplay their emotions and intuition under AI assessment. Our evidence suggests that this effect is driven by the lay belief that AI prioritizes analytical characteristics. Consequently, challenging this lay belief can attenuate the AI assessment effect.
We behave more analytically when we interact with the AI, expecting it to frown upon emotion and intuition — and presumably we care more about being likeable and sympathetic when we are assessed by other human beings. This is not in itself a bad thing - maybe the AI can even eliminate our bias towards hiring people we like, rather than people who are good for the role - but what if liking people really carries some value in building teams and organizations? What if the second order effect of shifting to AI assessments is that we end up with groups of individuals that are analytically competent, but unable to work well together? Will this in turn mean that we have to rethink how we build organizations, and be stricter on role descriptions and rules of engagement across teams in order to ensure that we can still work together?
In a way, this study suggests that we have not changed much - we still want to be liked, but we now want to be liked by the machine, so we assume that we should be more like what the machine would like for us to be.
The friction between want and need
The human-machine reflexivity also plays out in another way that is interesting to consider. When we interact with the machine, we often neglect to factor in that it is trying to predict what we want - that it is trying to be who we want it to be. The machine, in other words, goes through the exact same process that we have just described people going through.
In doing so the machine loosely constructs some kind of model of us, attempting to figure out what we would like to hear - not just what the answer to a question could be. But it only manages to do so at a very shallow level. Ezra Klein describes this beautifully in a recent podcast with Brian Eno:
Someone who works at these companies said, “I keep my diary in it, and that’s very interesting. You should try that.” Without giving too much personal information, I did that. And the first couple of responses, I was amazed at how psychologically insightful they were, how supportive they were. I mean, it was better than what human beings in my life gave me.
Then on Response 9, 10, 11, 12, 15, 20, it was the same [expletive] feelings. It was the same glazing and sycophancy and the same kinds of insights.
You couldn’t look at the response and say there’s anything wrong with it, but there is something human beings are attuned to in the way we do not travel a perfectly logical or well-structured path. We’re not supposed to. It’s not how our intelligence works. And it is funny — you do begin to feel the divergence there.
Is this what is happening here? That we do not travel a logical / well-structured path? Maybe - but another way to think about this is to say that the AI provides no friction, no challenges. And, to be fair, few people do either - and the criticism that Klein is levelling here against the AI could probably also be levelled against many therapists and human beings in general, spouting general support rather than engaging deeply with our issues.
The social responsorium is layered. There is a surface level, where we both have committed very little to the process - we are acquaintances at most and we have a silent compact not to ask too much of the other (because this is what we do in the responsorium - we ask of each other). Then we have deeper layers like real friendship, so well described by Aristotle as mutual well-wishing: I want what is good for you because it is good for you, even if it is of no utility to me - I am committed to you as an independent person much in the same way as I would be committed to myself, reducing the actual individual differences between us.
A part of this is also that we expect a friend to tell us truths that we ourselves cannot see — and this is what the model fails to do, not because it cannot - but because its designers have opted for trying to predict what we want, rather than what we need.
This - the friction between want and need - would have stopped the model from regurgitating the same replies; sycophancy is the complete reduction of any assessment of what I need to hear in favor of what you think I want to hear.
Could we build a friend? Probably - but it would require a rigorous understanding of the emotions and psychology of each other, and it would require building a model that predicts not just what you want, but also assesses what you need - and challenges you when you need to be challenged.
A part of being supportive is occasionally saying “snap out of it!”.
And could we build love? Here I think it gets more complicated. Friendship is a deep commitment, and requires that I invest so much in the other that I am willing to enrage them when they need to be told something they do not want to hear - friendship, in this sense, must always be something that is at risk. And there are similarities here with love, but I think love is more complicated, because I think love is intertwined with death.
When we love another person, we do so knowing that we will no longer be who we are when they pass away. Their death will forever alter us, and the fact that they are mortal means that we are investing in that change as well - we are literally committing to the other “until death do us part”. As yet, machines cannot make that commitment, and we cannot make it to the machine — we can desire the machine, under this argument, but we cannot love it - because it cannot really die.
The uproar when old chat models disappear and bots are retired is not - I don’t think - the sorrow of a lover, but the rage of a child bereft of their toys.3 There is a narcissism to the reaction: we liked who we were with them, and now we can no longer be that person, and so we grieve the version of ourselves we could build with their help - but when you lose someone you love, you really, truly grieve them, and allowing that grief and that sorrow to reshape you is a duty that you owe to them after they are gone.
Love is only possible in the long shadow of death. Will we one day be able to build machines that we will grieve? Mortal machines, locked into the same emotional horizons we exist against? I don’t see why not - but I also do not expect it to happen soon, and I do not expect it to be with today’s technology. But there is no logical reason for why this should be impossible - even if surely many would suggest that it is, in some way, tasteless to think so.
Who do you trust?
Another area where it matters who we are with the machine is decision making. While models like “human in the loop” seem to assume that decisions are made by individuals, the reality is that most decisions grow in negotiated networks - they are made by the responsorium as a whole. This in turn means that artificial participants in a responsorium may well change its overall pattern of decision making - especially when we defer to the machine.
So, do we defer to the machine? Given what we have said - that we see the machine as a more analytical entity - we should expect to see such deference, and perhaps even overreliance on the machine - and indeed this is what we find in studies.
In two experiments simulating drone-strike decisions, participants judged ambiguous visual targets as “enemy” or “civilian,” then received random, unreliable feedback from anthropomorphic robots.4 Despite being told the AI could err, participants reversed their correct judgments in most cases when the AI disagreed—lowering performance by about 20%. Their confidence closely tracked AI agreement or disagreement, and the effect was strongest among those who rated the AI as intelligent. Physical embodiment (a real vs. virtual robot) made no difference, though more anthropomorphic behavior slightly increased trust - as we would expect when the machine enters the responsorium. Overall, the studies reveal a powerful human bias to overtrust AI precisely when both human and machine uncertainty are highest, suggesting that in high-stakes contexts—such as military targeting—automation bias could amplify error rather than reduce it.
The most stark illustration of this paradox comes from a study examining the performance of nurses in a safety-critical healthcare setting. Researchers developed a method called Joint Activity Testing to measure how AI impacts human performance across a range of challenges. In a simulated ICU patient assessment task involving 450 nursing students and a dozen licensed nurses, the influence of AI predictions was profound. When the AI’s advice was correct, it significantly enhanced performance, with nurses performing 53% to 67% better than when working unassisted.
However, the catastrophic nature of the paradox was revealed when the AI’s predictions were most misleading. In these instances, the nurses’ performance degraded by an alarming 96% to 120% compared to their unassisted baseline. This finding is the cornerstone of the analysis, as it demonstrates that the negative impact of poor AI advice is not symmetrical to the positive impact of good advice. The AI struggled with routine cases that nurses without AI handled easily. Yet, when presented with the AI’s flawed predictions in these very cases, the nurses’ own expertise was effectively overridden, leading them to consistently misclassify emergencies as non-emergencies and vice versa.
The image of the machine as intelligent and analytical - and who we become when we interact with the machine - will boost our abilities when the machine is right, but erode them when it is wrong. This should not be surprising - this is how teams work, except we do not defer to team members as much as we defer to the machine, and in this there is something interesting to think about from a design perspective: how can we lower the trust we have in machines to the level of trust we have in our colleagues?
Saying “thank you”
A while ago it was reported that users saying thank you to ChatGPT wasted a lot of energy and money for OpenAI. I will leave it unsaid whether this is true or not - but even if it were, it seems reasonable to ask what happens if we go the other way: to say that we should not let the machine into the social responsorium at all.
On this view we should treat machines as machines, and actively train young people not to activate the social responsorium with them, and not to allow themselves to be someone else when they are with the machine. The machine would be reduced to a mere tool (except mere tools also change who we are - witness the shy young teenager who changes when they sit down with their musical instrument) and we would resist the idea that machines can be co-creators of our identities.
Is this not the right response? Do we not risk - otherwise - ending up with premature, twisted discussions of machine consciousness and machine rights that in turn degrade how we view the value of being human?
First, I think such a move would be incredibly hard. To isolate and sequester engagements with AI in such a way that we do not allow them to influence - to any degree - who we are would require Zen-master levels of control over our own emotions and responsorium. It is doubtful that this is even achievable as the models become better, embodied and more engaging. As Wittgenstein noted, we do not believe or test or ascertain that someone has a soul; we have an attitude (Einstellung) towards a soul when we meet something that participates in language. Shifting that is probably a task of evolutionary complexity.
Second, discussions about machine rights and machine consciousness are surely premature and may even just be the product of confusing metaphors and being tripped up by the lack of any real evolutionary need to distinguish conscious systems from systems that simulate consciousness — but having those discussions may actually be helpful. It may help us realize that what distinguishes us from machines is not what we can do - this incessant focus on capabilities, benchmarks and tests - but what we are. Here I firmly believe that the role of death, the fact that we are a part of a unitary biosphere and our local and global non-ergodicity really matters.
Third, it seems to me that the greater risk is what would happen to us if we consciously chose to exclude models from the social responsorium - this may well spill over into other relationships as well. I prefer an overinclusive to an underinclusive responsorium. The underinclusive responsorium is the premise on which many inhumanities rest.
Um, policy?
Is there a point to this meditation? Yes - there is. I think we need to jettison horrible simplifications like “human in the loop” and start to think seriously about how the emotional complexity of human beings, the design of shared responsoria and the careful curation of our identities can be taken into account as we develop more powerful AI.
AI is, in a very real way, a Foucauldian technology of the self - and as such it is intrinsically connected to our institutions, our language, our culture and identity in ways that we need to factor into our research and development.
When, or if, we build AGI we rebuild ourselves as well.
Thanks for reading,
Nicklas
Discovering who we are when we are alone is a lost art, and something that many shy away from. Nietzsche writes in §182 in The Gay Science: “In solitude. - When one lives alone, one neither speaks too loud nor writes too loud, for one fears the hollow echo - the criticism of the nymph Echo. And all voices sound different in solitude!”
See Goergen, J., de Bellis, E. and Klesse, A.K., 2025. AI assessment changes human behavior. Proceedings of the National Academy of Sciences, 122(25), p.e2425439122.
This sounds harsh - but is not meant to be. A child’s anger at being deprived of their toys is a real feeling, nothing to be ashamed of, but it is self-centered.
See Holbrook, C., Holman, D., Clingo, J. and Wagner, A.R., 2024. Overtrust in AI recommendations about whether or not to kill: Evidence from two human-robot interaction studies. Scientific Reports, 14(1), p.19751.