Unpredictable Patterns #117: Agency-enhancing technologies
Persuasion, relational autonomy, choice architectures and artificial intelligence
Dear reader!
Thank you for the comments on last week's musings on alignment. I do think there is value in thinking about the evolution of values, and the functions values perform and are selected for. At the same time I have sympathy for those of you who argue that functionalist explanations seem to miss something fundamental about the role values play in aspirations, in charting who we want to be over time. But maybe, my Darwinian reductionist whispers, these aspirations themselves have great fitness value?
This week we dive into something more concrete, but still timely: persuasion!
Persuasive artifacts
Every morning when I wake up, my watch suggests a workout. A run, intervals, or something else. Sometimes it suggests that I deserve a rest day, and I embrace that. If my very sporty watch thinks I need to take a day off working out, surely I deserve to?
This pattern—subtle persuasion woven into the fabric of digital experiences—has become so ubiquitous that we rarely question it. The digital landscape bristles with hooks designed to capture attention, shape decisions, and form habits. We find ourselves nudged toward particular choices, not always through reasoned argument, but through carefully crafted cues embedded in our digital environments.
This is not a new phenomenon, but rather the latest chapter in the long history of persuasion, amplified by technologies that operate at unprecedented scale and granularity.
What is interesting is not persuasion itself—humans have always tried to influence one another—but rather how these new forms of technological persuasion are becoming increasingly cognitive. Research on technological affordances has revealed how design choices can systematically shape behavior patterns over time.1 Technologies moderate habit formation, performance, and change by creating possibilities for action. Many dimensions of these technologies remain "hidden" to the user, yet they help form technological habits. Now, however, they are also becoming explicit, in the form of artificial agents.
The research on affordances highlights how technologies don't simply offer neutral possibilities—they systematically guide behavior in particular directions. What appears as frictionless design often conceals a careful orchestration of cues that gradually shape habit patterns. We may believe we're making independent choices, but the design of digital environments increasingly tilts the field toward behaviors that serve private interests while reshaping our decisional landscape in ways we rarely notice.2
The technological infrastructure becomes a kind of psychological infrastructure, a choice architecture, quietly reshaping how we perceive options and make choices, all without triggering our awareness that this restructuring is occurring.3
It goes almost without saying that all of this will become even more interesting as we start using artificial agents and intelligent assistants to make decisions.4
Persuasion, Influence, and Time
To understand this landscape more clearly, we need a conceptual framework that distinguishes between different forms of technological impact on human choice architectures. I propose we differentiate between two fundamental dynamics: persuasion and influence.
Persuasion operates at the level of discrete events or choices. When technology persuades, it aims to direct a specific decision in a particular moment: click this button, buy this product, watch this video. Persuasion technologies frame individual choice points to make certain options more salient or appealing, creating a momentary tilt toward a particular decision. The persuasive moment is bounded in time—it concerns a single decision junction.5
Influence, by contrast, operates at the level of ongoing processes or dispositions. When technology influences, it shapes long-term patterns and habitual responses that persist across multiple contexts. Rather than targeting a single decision point, influence technologies shape the background conditions against which all decisions are made. They cultivate dispositions, create habits, and gradually restructure how we perceive the world and our place in it.
This distinction between persuasion and influence echoes Wittgenstein's profound insight regarding the difference between pain and sorrow. In his philosophical examinations, Wittgenstein noted that pain has what he called "echte Dauer" (genuine duration)—it genuinely continues or persists through time rather than being a series of discrete sensations. Sorrow, however, is something more profound—it is read, as Wittgenstein might say, in the weave of life. It doesn't merely persist across time but becomes integrated into how we perceive and engage with the world, coloring all experience. Similarly, persuasion targets discrete moments of choice, while influence becomes woven into the very fabric of how we understand and approach decisions.
The distinction matters because it reveals different mechanisms of agency erosion. Persuasive technologies threaten agency by shaping individual choices through immediate cues and framing effects, often bypassing deliberative processes. Influential technologies threaten agency in a subtler way—by gradually reshaping our habits, associations, and perceptual frameworks, they alter not just what we choose but how we understand choice itself.
This dynamic creates what we could call the double bind of digital agency: we may preserve our sense of agency in individual moments of choice while losing it at the level of the processes that shape those choices. Like a chess player who “freely” decides each move while playing an opening they didn't choose, we may experience the sensation of choice without its substance.
One recent technical term for technologies designed to exploit these psychological dynamics is "dark patterns"—interface designs that benefit system owners by covertly manipulating users. But focusing exclusively on explicitly deceptive design misses the broader point. Even technologies designed with benevolent intentions can erode agency if they fail to respect the boundary between supporting human choice and subtly reshaping it. The question is not simply whether technologies aim to help or harm users, but whether they preserve the conditions necessary for meaningful agency.
There are some who worry that artificial intelligence will make us stupid, or indeed that search engines or the Internet will. But there is another worry here that might be more interesting to explore: that technology will make us more pliable.
From Privacy to Agency
For the past four decades, privacy has dominated discussions of digital rights and ethics. We've built legal frameworks, advocacy organizations, and technical systems aimed at giving individuals control over their personal data. Privacy-enhancing technologies (PETs) have become an established category of tools aimed at minimizing data collection, anonymizing identity, and giving users greater control over their digital footprints.6
While privacy remains important, I now think that we are witnessing a subtle but significant shift in what we should be discussing. The central challenge is increasingly becoming not privacy but agency—the capacity to act as an effective agent on one's own behalf, according to one's own goals and values. As we rely more heavily on algorithmic systems that mediate our choices, agency rather than privacy emerges as the fundamental value at stake in our technological future.
It is not that privacy is unimportant, in any way; it is rather that privacy is a means to an end that we can now discuss directly: agency and autonomy.
Consider that people routinely trade privacy for convenience or other benefits when the exchange is transparent and perceived as fair. What they resist more fundamentally is the feeling of being manipulated, steered, or gradually reshaped by forces they neither understand nor consent to. They want to remain the authors of their own lives, even as technology assumes an increasingly integral role in daily decisions. The question is not just whether systems know things about us, but whether they use that knowledge to gradually remake us according to their own logic.
This shift from privacy to agency reflects a deeper understanding of how technology shapes human behavior. Privacy protections address information asymmetries, but agency protections address power asymmetries. No amount of data protection can guarantee agency if the systems we interact with are designed to systematically influence behavior in ways that bypass deliberative capacities or reshape habitual responses without our awareness or meaningful consent.
The terminology of privacy-enhancing technologies (PETs) gave us a useful framework for thinking about tools that protect personal data. I propose we now need a parallel framework of agency-enhancing technologies (AETs)—tools specifically designed to preserve and amplify human agency within digital environments increasingly shaped by algorithmic systems. Just as PETs address privacy concerns without sacrificing functionality, AETs would address agency concerns without sacrificing technological advancement.
This is not merely a theoretical proposal. There is growing evidence that agency has become a central concern for users of digital systems. A 2023 Pew Research Center study of experts found deep concern about human agency in algorithmic systems, with one expert noting that, on the one hand, better information technologies and better data have improved and will continue to improve human decision-making, while on the other, black-box systems and non-transparent AI can whittle away at human agency without us even knowing it is happening. This sentiment reflects a growing awareness that protecting agency requires more than just transparency—it requires actively designing systems that enhance rather than diminish our capacity to act as effective agents.7
The concept of agency-enhancing technologies offers a direction of exploration. Rather than simply trying to minimize the harms of algorithmic influence—an important but insufficient approach—we can actively design technologies that amplify human agency, making us more rather than less capable of steering our own course through digital environments. This means creating systems that make us better at recognizing manipulation, more effective at aligning technology with our true preferences, and more capable of reshaping technological environments rather than simply being shaped by them.
And no, AI did not make you do it
Before we continue mapping out agency-enhancing technologies, we must confront a pervasive myth—that humans are fundamentally gullible creatures, helpless in the face of sophisticated manipulation. This narrative, prevalent in both popular culture and academic discourse, portrays us as puppets easily manipulated by technological strings. It suggests a kind of abdication of responsibility: "The algorithm made me do it."
But this view is neither accurate nor helpful. As cognitive scientist Hugo Mercier demonstrates in his book Not Born Yesterday, humans have evolved sophisticated mechanisms of skepticism and discernment. Mercier argues that we are "rational and skeptical beings" and shows through careful research that claims to the contrary have little empirical support. We are not born yesterday—we come equipped with cognitive tools that help us navigate a world of potential deception.
Mercier's research reveals that people are remarkably good at discerning who is trustworthy and tend to resist being convinced of things they don't already believe. This skepticism makes evolutionary sense: communication could only evolve as a beneficial adaptation if both senders and receivers of information gained from the exchange. If humans were truly as gullible as often portrayed, language and social cooperation could never have developed to the extent they have.
Even phenomena that seem to demonstrate human gullibility—cult membership, conspiracy theory belief, susceptibility to propaganda—turn out to be more complex when examined closely. Contrary to common belief, studies on propaganda exposure have shown that repeated exposure to propaganda often fails to change deeply held beliefs. What appears as successful manipulation is often a case of selective attention—people embracing messages that align with beliefs they already hold.8
This evolutionary perspective on human skepticism has important implications for how we think about agency and technology. Rather than seeing ourselves as helpless victims of algorithmic manipulation, we should recognize and strengthen our innate capacity for critical evaluation. The "AI made me do it" excuse rings hollow when we understand that our cognitive architecture evolved specifically to resist undue influence.
Agency-enhancing technologies should not be designed on the premise that humans are fundamentally vulnerable and in need of protection from their own gullibility. Rather, they should be built to support and amplify our existing capacities for discernment. They should work with our evolved psychological mechanisms rather than attempting to bypass or override them.
Understanding our natural resistance to manipulation also reinforces the importance of personal responsibility in digital environments. While systems can certainly be designed to influence behavior, the final decisions remain our own. We may be nudged, but we are not controlled. Acknowledging this reality is essential for maintaining a healthy relationship with technology—one where we neither abdicate responsibility nor overestimate vulnerability.
The goal of agency-enhancing technologies, then, is not to protect us from influence (an impossible task in any social environment), but rather to ensure that influence operates in a context where our evolved capacities for evaluation and choice can function effectively. The aim is not the absence of influence but the presence of conditions that allow for meaningful agency within networks of influence.
With this more nuanced understanding of human agency and influence, we can turn to the specific design principles that might guide the development of agency-enhancing technologies.
Frameworks for Agency-Enhancing Technologies
What would agency-enhancing technologies look like in practice? How might we design digital systems that amplify rather than diminish human agency?
First, we need to recognize a fundamental truth about human autonomy that current technology design largely ignores: autonomy is not a state but a trajectory. Even systems designed to support autonomy at a specific moment can inadvertently constrain future growth if they permanently encode our current preferences, beliefs, and values. True autonomy requires not just the freedom to choose now, but the freedom to become someone who chooses differently in the future.
This understanding of autonomy as developmental rather than static has real implications for how we design technology. It suggests that systems should not only respect our current choices but preserve our capacity to evolve beyond them. Technologies that seem to support autonomy in the moment may still undermine it over time if they lock in patterns that stunt personal growth and development. Just as a tree needs both support and space to grow, human autonomy requires both structure and openness to new possibilities.
With this developmental view of autonomy in mind, we can better distinguish how agency enhancement operates differently in the domains of persuasion and influence. For persuasive contexts—where the concern is with immediate choice architecture—agency enhancement might involve creating multiple agent advisors rather than a single authoritative source. Just as we might seek a second medical opinion for important health decisions, we should have access to multiple algorithmic perspectives for significant digital choices. This plurality of algorithmic advice would make the contingency of any single perspective more visible, preventing us from treating algorithmic recommendations as neutral or objective.
For influence contexts—where the concern is with habit formation over time—agency enhancement might involve habit discovery mechanisms that make behavioral patterns visible. Most of us have little awareness of the habits we form through digital interactions. Technologies that visualize these patterns would allow us to see the gradual grooves being worn into our behavioral landscapes, making the invisible currents of influence perceptible and thus more amenable to deliberate choice.
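To make this concrete, here is a minimal sketch of what a habit discovery mechanism might look like, assuming nothing more than a log of timestamped interaction events; the event names, time slots and threshold are my own illustrative assumptions, not a description of any existing product:

```python
from collections import Counter
from datetime import datetime

# Illustrative interaction log: (timestamp, action) pairs, e.g. exported from a phone or watch.
log = [
    (datetime(2025, 3, 3, 7, 41), "open_feed"),
    (datetime(2025, 3, 3, 7, 43), "watch_video"),
    (datetime(2025, 3, 4, 7, 39), "open_feed"),
    (datetime(2025, 3, 4, 7, 44), "watch_video"),
    (datetime(2025, 3, 5, 7, 40), "open_feed"),
]

def discover_habits(events, min_share=0.2):
    """Surface (hour-of-day, action) pairs that recur often enough to look like habits."""
    slots = Counter((ts.hour, action) for ts, action in events)
    total = len(events)
    return [
        {"hour": hour, "action": action, "share": count / total}
        for (hour, action), count in slots.most_common()
        if count / total >= min_share
    ]

for habit in discover_habits(log):
    print(f"Around {habit['hour']:02d}:00 you tend to {habit['action']} "
          f"({habit['share']:.0%} of logged activity)")
```

Even something this simple turns invisible grooves into something we can look at and question; the point is the visibility, not the sophistication of the analysis.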
These examples point toward a more general principle: agency-enhancing technologies make the mechanisms of influence visible and contestable. They transform the background conditions of choice from an invisible infrastructure into a perceptible environment that can be evaluated, adjusted, or rejected. They help us see not just the choices we make but how the choice architecture itself shapes what options appear available, salient, or desirable.
More specifically, agency-enhancing technologies might include:
Autonomy-preserving architectures that explicitly limit when and how automation can occur, ensuring that AI augments rather than replaces human capabilities. These systems would be designed to maintain balance between efficiency and control, helping to transform our interaction with technology toward a more intuitive, responsive relationship with our devices. And they may not be “human-in-the-loop” systems, but rather “human-in-the-mix”, varying the role and site of human judgment in the system so that exercising it never becomes routine.
Decaying choice architectures that gradually fade or reset over time, preventing the permanent entrenchment of algorithmic influence. Just as human-to-human persuasion naturally decays—salespeople don't follow us around forever—digital persuasion could be designed with temporal boundaries. Systems that indefinitely remember our past behaviors create a kind of algorithmic lock-in that makes it increasingly difficult to break patterns or explore new possibilities. By designing architectures that deliberately weaken over time, we create space for renewed agency and fresh decisions unencumbered by the accumulated weight of past choices.
This principle of decay is essential precisely because autonomy is a trajectory, not a fixed state. Even when I make a fully autonomous choice today, that choice should not constrain who I might become tomorrow. Current systems essentially crystallize our past decisions, creating a kind of AI-enabled determinism where our future options become increasingly constrained by our history. Decaying choice architectures acknowledge that personal growth often requires escaping the gravitational pull of past preferences and behaviors. They create algorithmic environments that breathe with us, expanding and contracting to accommodate not just who we are but who we might become.
Concretely: maybe advisory assistants and agents need a short lifespan, or an increasing probability of failure and “death” over time?
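As a toy sketch of what such a decaying, mortal advisor could look like, consider the following; the half-life, hazard rate and scoring interface are illustrative assumptions rather than a proposal for any particular system:

```python
import math
import random

class DecayingAdvisor:
    """A toy advisory agent whose memory of us fades and whose lifespan is finite."""

    def __init__(self, half_life_days=30.0, hazard_per_day=0.01):
        self.half_life_days = half_life_days  # how quickly old observations lose weight
        self.hazard_per_day = hazard_per_day  # daily rate at which the advisor may "die"
        self.age_days = 0.0
        self.alive = True
        self.memory = []  # list of (age_when_observed, observation)

    def observe(self, observation):
        self.memory.append((self.age_days, observation))

    def tick(self, days=1.0):
        """Advance time; with a small probability the advisor 'dies' and must be replaced."""
        self.age_days += days
        if random.random() < 1.0 - math.exp(-self.hazard_per_day * days):
            self.alive = False

    def weight(self, age_when_observed):
        """Exponential decay: an observation half_life_days old counts half as much."""
        elapsed = self.age_days - age_when_observed
        return 0.5 ** (elapsed / self.half_life_days)

    def recommend(self, candidates, score):
        """Rank candidates using only decay-weighted memory; None once the advisor has 'died'."""
        if not self.alive or not self.memory:
            return None
        return max(candidates, key=lambda c: sum(
            self.weight(age) * score(c, obs) for age, obs in self.memory))
```

The important property is that the accumulated weight of past choices thins out on its own, and that at some point we are forced to start over rather than inherit an ever-heavier profile.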
Agency transparency tools that help users critically examine the digital environments they inhabit, understand the algorithms that shape those environments, and recognize their impact on autonomy. Rather than treating algorithmic systems as black boxes, these tools would open them to inspection and evaluation.
Contestability mechanisms that allow users to challenge algorithmic decisions or recommendations. Just as legal systems provide avenues for appealing decisions, digital systems should include built-in mechanisms for contesting outcomes or influencing the processes that generate them.
Judgment preserving designs. One of the most obvious ideas here is that no one should have a single assistant or agent that they work with. A judgment preserving infrastructure might force us to choose between two different views, ensuring that we always work with at least two, maybe more, artificial advisors.
Nothing stops us from having a virtual inner cabinet of advisors and an ever-present devil's advocate that we work with. Designing such a choice architecture is much more interesting than relying on a single agent.
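A minimal sketch of such a cabinet, with toy advisors standing in for independently sourced models or services (the names and the disagreement logic are illustrative assumptions), might look like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Advice:
    advisor: str
    recommendation: str
    rationale: str

# Toy advisors standing in for separately sourced models or services.
def cautious_advisor(question: str) -> Advice:
    return Advice("cautious", "wait", f"More information is needed before deciding: {question}")

def ambitious_advisor(question: str) -> Advice:
    return Advice("ambitious", "act now", f"The opportunity cost of waiting is high: {question}")

def devils_advocate(question: str, opinions: List[Advice]) -> Advice:
    # Argue against whatever the other advisors converged on.
    majority = max({o.recommendation for o in opinions},
                   key=lambda r: sum(o.recommendation == r for o in opinions))
    return Advice("devil's advocate", f"not '{majority}'",
                  "Here is the strongest case against the prevailing view.")

def consult_cabinet(question: str) -> List[Advice]:
    """Always return at least two independent views plus a dissent, never a single answer."""
    opinions = [cautious_advisor(question), ambitious_advisor(question)]
    opinions.append(devils_advocate(question, opinions))
    return opinions

for advice in consult_cabinet("Should I accept this job offer?"):
    print(f"[{advice.advisor}] {advice.recommendation}: {advice.rationale}")
```

The point is structural: the interface never collapses into a single voice, so the user remains the one doing the final weighing.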
Collaborative intelligence frameworks that promote partnership between human and machine rather than replacement or subordination. These frameworks would leverage the complementary strengths of human and artificial intelligence, creating systems where each enhances the capabilities of the other rather than one dominating the other.
Agency recovery systems that help users reclaim autonomy when it has been diminished. These might include tools for breaking habitual patterns, mechanisms for resetting algorithmic profiles, or interfaces that gradually increase user control as capabilities develop.
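As a last, deliberately trivial sketch: an agency recovery mechanism could be as simple as a profile reset that wipes inferred behavioral data while keeping only the settings a user has explicitly chosen (the key names below are assumptions for illustration):

```python
def reset_profile(profile, explicit_keys=("language", "accessibility", "do_not_disturb")):
    """Drop everything the system inferred about the user; keep only explicit choices."""
    return {key: value for key, value in profile.items() if key in explicit_keys}

before = {
    "language": "sv",
    "accessibility": "large_text",
    "inferred_interests": ["running", "chess"],
    "engagement_score": 0.87,
}
print(reset_profile(before))  # {'language': 'sv', 'accessibility': 'large_text'}
```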
The design of agency-enhancing technologies requires a multidisciplinary approach that combines technical expertise with psychological insight and ethical reflection. It demands that we move beyond thinking of technology design as merely functional and recognize it as fundamentally ethical—shaping not just what we can do but who we can become.
Moreover, developing effective agency-enhancing technologies requires acknowledging the tension between personalization and autonomy. Truly personalized systems adapt to our preferences, but truly autonomous systems allow us to adapt them to our preferences. This distinction suggests that agency enhancement isn't about minimizing risks from algorithmic influence but rather about ensuring that humans become ever more powerful authors of their decisions—capable of steering, redirecting, or even abandoning technological systems when they no longer serve human purposes.
Autonomy and community
The shift from privacy to agency as the central concern of digital life represents an evolution in how we understand the relationship between humans and technology. Rather than simply asking what technologies know about us, we must increasingly ask how they help us reshape ourselves—and whether that reshaping aligns with our considered values and goals.
Agency-enhancing technologies offer a path forward that neither rejects technological advancement nor surrenders human autonomy to algorithmic determination. They represent a middle way that leverages the power of algorithms while preserving the conditions necessary for meaningful human choice and development.
This approach demands that we move beyond the binary thinking that has characterized much discussion of AI and automation. The question is not whether algorithms will replace humans or whether humans will control algorithms. Rather, it is how we design technological systems that enable humans and algorithms to complement one another in ways that preserve and enhance human agency.
A key starting point here is the recognition that autonomy is fundamentally developmental—it is not a static state to be preserved but a dynamic trajectory to be nurtured. Our technologies must not only respect who we are now but leave space for who we might become. They must allow for the full pursuit of human potential rather than subtly channeling us along predetermined paths.
The analogy with privacy-enhancing technologies provides a useful template. Just as PETs emerged from recognition that privacy could be strengthened through technical innovation, agency-enhancing technologies acknowledge that agency doesn't simply persist automatically in digital environments—it requires deliberate design choices that augment it.
But we must be careful not to fall into a trap of atomistic individualism in our vision of enhanced agency. True autonomy is not achieved in isolation from others, as if the ideally autonomous person were one who made decisions in a social vacuum, free from all influence. Such a conception would lead not to genuine autonomy but to a kind of atomistic individualism—a disconnection from the very social fabric that gives meaning to our choices. The individualistic understanding of autonomy that has dominated clinical practice and research portrays people as independent, self-interested and rational gain-maximizing decision-makers, but this overlooks how our identities and capacities are shaped through relationships.
Instead, we should recognize that autonomy is fundamentally relational. Relational autonomy is an ethical concept that links to a variety of ethical approaches and rejects dichotomous thinking in favor of finding common ground between seemingly divergent perspectives. Our capacity for self-determination doesn't develop or express itself in isolation, but through meaningful relationships with others. Autonomy isn't about freedom from all influence, but rather about the capacity to discern which influences to embrace and which to resist—to choose whom to listen to, whom to trust, and whom to align ourselves with.
Autonomy is not possible without community. If there were only a single person left in the universe, to use Simone Weil’s thought experiment, they would not enjoy any meaningful autonomy.
What would a future of relational agency-enhancement look like? It would be one where technology not only preserves our autonomy from manipulation but actively enhances our ability to make choices free from undue social pressure and conformity demands. Where current social media platforms often amplify peer pressure and groupthink, agency-enhancing platforms might create spaces where people can explore ideas and form judgments with greater independence, free from the subtle tyranny of social validation metrics.
Agency-enhancing technologies would help us navigate the complex interplay between individual choice and collective responsibility, not by pretending these tensions don't exist, but by giving us better tools to address them consciously.
Imagine tools that help us recognize when we're making choices primarily to please others or to conform, not by assuming such motivations are always problematic, but by making them visible so we can evaluate them. Imagine social platforms designed not to maximize engagement but to foster genuine connection—where algorithms serve not as hidden manipulators but as transparent facilitators of meaningful human interaction. As robots and AI become increasingly integrated into our lives, we need ethical approaches that recognize how our "properties, actions, and being are already connected with non-human entities" in complex relational networks.
In this vision, technology wouldn't just protect us from manipulation—it would actively expand the horizon of our choices by helping us overcome the limitations of our social situatedness without denying its importance. It would recognize that autonomy isn't about removing ourselves from the social web of meaning but about navigating it more consciously and deliberately.
In the end, the goal isn't to be free from all influence—an impossible and ultimately undesirable aim—but to consciously choose which influences to embrace, which relationships to nurture, and which values to pursue. Technology at its best can make us more rather than less capable of this quintessentially human activity. That is the promise of agency-enhancing technologies, properly conceived as tools not for atomistic isolation but for relational growth, democratic resilience and human development.
So what?
Let’s assume this is right. What does it mean, concretely, for policy makers today? How should they factor this into their planning for AI? There are simple answers here that I think are wrong: one example would be to say that we will ban “persuasive technologies”. This is impracticable, as it would mean banning things like scales (as they persuade us to work out). It is also wrong to say that agency or autonomy should be treated as a synonym of privacy and protected in the same way. Agency and autonomy are ongoing negotiations, and it seems better to think about this challenge as one that mirrors the one we have in contract law, when we try to figure out whether a contract has been legitimately concluded. We consider the relative power of the parties, and we consider intent and any coercion (because persuasion can easily harden into coercion if applied in certain ways). So, maybe the better way to approach this is to say that we want to set up the collective negotiation of individual autonomy in as good a way as possible. This would entail things like:
Specifically thinking about when the state can persuade. How do we limit the persuasive powers of the state?
Exploring “persuasion” as a service and how it can be used and provided overall. We will want to buy some kinds of persuasion, and be protected from others.
Drawing hard red lines around persuasion, so as to be able to identify when it crosses those lines into coercion.
Continuing to explore the limits of persuasion overall — Mercier’s work is an important reminder that persuasion is not a simple process to which we are passively subjected.
Incentivizing research into agency-enhancing technologies, and making these technologies key for some kinds of services.
We will live with stuff that wants stuff, shapes what we want and allows us to want more and differently. These technologies of desire are going to be, in some ways, more influential than any technologies of intellect.
Thanks for reading,
Nicklas
See Norman, D.A., 1999. Affordance, conventions, and design. Interactions, 6(3), pp.38–43.
See Winner, L., 1980. Do Artifacts Have Politics? Daedalus, 109(1), pp.121–136.
The term “choice architecture” stems from Thaler, R.H. and Sunstein, C.R., 2008. Nudge: Improving decisions about health, wealth, and happiness. New Haven: Yale University Press.
Here as in many other cases, this work from former colleagues at DeepMind is essential reading Gabriel, I., Manzini, A., Keeling, G., Hendricks, L.A., Rieser, V., Iqbal, H., Tomašev, N., Ktena, S.I., Kenton, Z., Rodriguez, M., El-Sayed, S., Brown, S., Akbulut, C., Trask, A., Hughes, E., Bergman, A.S., Shelby, R., Marchal, N., Griffin, C., Mateos-Garcia, J., Weidinger, L., Street, W., Lange, B., Ingerman, A., Lentz, A., Enger, R., Barakat, A., Krakovna, V., Siy, J.O., Kurth-Nelson, Z., McCroskery, A., Bolina, V., Law, H., Shanahan, M., Alberts, L., Balle, B., de Haas, S., Ibitoye, Y., Dafoe, A., Goldberg, B., Krier, S., Reese, A., Witherspoon, S., Hawkins, W., Rauh, M., Wallace, D., Franklin, M., Goldstein, J.A., Lehman, J., Klenk, M., Vallor, S., Biles, C., Morris, M.R., King, H., Agüera y Arcas, B., Isaac, W. and Manyika, J., 2024. The Ethics of Advanced AI Assistants. arXiv preprint arXiv:2404.16244
I am not exactly using this term in the sense that it is used in Fogg, B.J., 1998. Persuasive computers: perspectives and research directions. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '98), pp.225–232, but the idea is certainly Fogg’s.
See Solove, D.J., 2008. Understanding Privacy. Cambridge, MA: Harvard University Press, and Agre, P.E. and Rotenberg, M. (eds.), 1997. Technology and Privacy: The New Landscape. Cambridge, MA: MIT Press. Agre was also a pioneer in understanding the role that agency would play: Agre, P.E. and Rosenschein, S.J. (eds.), 1996. Computational Theories of Interaction and Agency. Cambridge, MA: MIT Press.
See https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/
There is a broad and complex literature here; see, for example, Zaller, J., 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press, and the references in Mercier.