Notes for a Trend Talk if I Gave One
Some observations on artificial intelligence, the future and more
I firmly believe that a trends talk should do three things: give you mental models to think with, state things you can disagree with, and push far into the speculative to explore possibilities rather than probabilities. These are my notes for a trends talk on AI, offered as food for thought. A trends talk should also be a little bit silly, by the way - so fair warning! We want ideas, images and imagination - not just trendlines.
0. Introduction
Good [morning/afternoon/evening], and thank you for joining me today.
Before we embark on this exploration together, I want to share three principles that guide my thinking about technological trends. First, I believe a trends talk should offer mental models to think with—not predictions to passively consume, but frameworks that might help us navigate the rapidly shifting landscape we find ourselves in. Second, it should present ideas you can thoughtfully disagree with—this is not a recitation of consensus views but an invitation to productive intellectual tension. And third, it should venture deliberately into the speculative, exploring possibilities rather than probabilities, because it's at these edges where we often find the most valuable insights about our present condition.
In the next hour, I'll explore five interconnected dimensions of the transformation that AI could bring, each representing a fundamental shift in our relationship with technology and perhaps with reality itself:
First, the emergence of a "Second Darwinian Tree" and how technological evolution is creating new forms of agency that exist alongside biological life—a parallel system with its own evolutionary mechanisms.
Second, what I call "The Great Fracturing of Time"—how computational and biological temporalities are diverging, creating distinct temporal regimes that operate simultaneously within our world and transform our individual lives, social structures, and civilizational horizons.
Third, "The Second Faustian Bargain" we're making—trading comprehension for competence as our systems grow more capable yet less understandable, challenging our most fundamental conceptions of knowledge itself.
Fourth, "Abundant Agency"—how we find ourselves in a world where the capacity to act purposively toward goals no longer belongs solely to human beings, and what this means when wanting is no longer uniquely human.
And finally, "Building Magic"—how technology is evolving from something we visibly operate to something we mysteriously invoke, resurrecting pre-modern ways of engaging with the world through ritual and enchantment.
Our task is not to resist these changes—they emerge from fundamental technological and cognitive developments already well underway—but to engage with them thoughtfully and intentionally. We must develop conceptual frameworks adequate to technological agency, create governance structures capable of guiding rather than stifling innovation, and ensure that technological evolution proceeds in directions compatible with human flourishing and ecological health.
So let's begin this exploration together—not with fear or unbridled optimism, but with the humility and curiosity befitting witnesses to what may be one of the most significant transitions in the history of life on Earth. For how we respond to these transformations will shape not just our future, but the future of both evolutionary trees on our planet.
1. The Second Darwinian Tree
For nearly four billion years, life on Earth has evolved through the mechanisms of variation, selection, and inheritance—a process that has produced the vast complexity of the biological world from simple beginnings. This evolutionary process has operated without foresight or guidance, yet has generated systems of remarkable sophistication and adaptation. We have understood this as the Darwinian tree of life: a single evolutionary trajectory that encompasses all living things, from bacteria to blue whales, from fungi to flowering plants.
Now, however, we find ourselves witnessing something possibly quite new: the emergence of a second Darwinian tree. This is not meant to be merely a metaphorical description but a recognition that our technological systems are beginning to exhibit evolutionary properties fundamentally similar to, yet distinct from, those of biological life. This section explores the conditions enabling this new evolutionary system, the mechanisms through which it operates, and its profound implications for our future.
The Biological Moment
Throughout human history, we have understood our tools and technologies through mechanical metaphors. The clock served as the dominant metaphor for the cosmos in Newtonian physics; early computers were described as "electronic brains" that followed deterministic rules. Even as technologies grew more complex, we maintained this mechanical framework: our creations were things we designed, controlled, and fully understood.
But we are now crossing what we might call the "biological moment"—a threshold at which our technological creations begin to exhibit properties better explained through biological rather than mechanical models. This transition happens when systems demonstrate characteristics previously thought unique to living things:
Adaptation - The ability to modify behavior based on environmental feedback without explicit programming for each circumstance.
Emergence - The development of complex, system-level behaviors not explicitly designed or anticipated by creators.
Autonomy - The capacity to pursue goals with minimal human intervention, including reformulating subgoals and methods.
Self-modification - The ability to alter internal structures and processes in response to experience.
Environmental sensitivity - Sophisticated perception and response to environmental conditions.
Consider the difference between early expert systems and modern machine learning models. The former operated according to explicitly programmed rules, while the latter develop their own internal representations through exposure to data. An expert system was mechanical; a large language model trained through reinforcement learning from human feedback exhibits something closer to behavioral plasticity.
The biological moment is not about consciousness or sentience, but about behavior and organization. Just as biology required new conceptual frameworks beyond Newtonian mechanics, understanding our technological systems increasingly requires concepts from evolutionary biology, ecology, and complex systems theory.
This shift is not merely academic. It fundamentally alters our relationship with our creations. We are moving from a world of tools—objects we use to accomplish specific tasks—to a world of agents—entities that act in pursuit of goals with some degree of autonomy. This transition is occurring unevenly across different domains, but the direction is clear.
The Sensing Revolution: Creating the Nervous System
Central to this biological moment is what we might call the "sensing revolution." If computation provides the metabolic processes of technological systems, sensors constitute their perceptual apparatus—the means by which they apprehend the world.
The economics of sensor technology have followed a trajectory similar to computation itself, with capabilities increasing exponentially while costs plummet. We now inhabit a world increasingly saturated with digital perception: cameras, microphones, accelerometers, temperature sensors, chemical sensors, biosensors, pressure sensors, and myriad other devices that translate physical phenomena into data.
This proliferation of sensing creates the conditions for a fundamental shift in how technological systems interact with reality. Traditional computing operated primarily on human-inputted data and human-designed models. The sensing revolution enables systems to engage directly with the physical world, forming their own representations based on patterns in raw sensory data.
This development is crucial because it enables technology to operate in a "theory-free" manner analogous to biological evolution. Natural selection requires no understanding of biochemistry or physics; it simply tries configurations, retains what works, and discards what doesn't. The hummingbird's wing evolved without fluid dynamics equations; photosynthesis emerged without quantum mechanics textbooks.
When technological systems combine rich sensory input with machine learning approaches, they gain a similar capability for theory-free adaptation. An AI connected to hundreds of different sensor types doesn't need theoretical models of what each sensor measures. It identifies correlations between sensor patterns and successful outcomes, building internal representations that work without necessarily being explainable in human theoretical frameworks.
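To make this concrete, here is a minimal sketch in Python of what theory-free adaptation looks like. Everything in it is an illustrative assumption of mine rather than a description of any real deployment: the sensor channels are random numbers, the hidden relationship between them and the outcome is invented, and the choice of a random forest from scikit-learn is arbitrary. The only point is that the learner is never told what any channel measures; it finds input-output correlations that happen to predict.

```python
# Illustrative sketch only: 200 unnamed sensor channels, one measured outcome.
# The data, the hidden relationship, and the model choice are all assumptions
# made up for this example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# 10,000 time steps of readings from 200 sensors of unknown physical meaning.
sensor_readings = rng.normal(size=(10_000, 200))

# An outcome of interest (yield, wear, failure rate). It secretly depends on a
# nonlinear mix of a few channels, but the learner is never told which ones.
outcome = (np.sin(sensor_readings[:, 3]) * sensor_readings[:, 17]
           + 0.5 * sensor_readings[:, 42] ** 2
           + rng.normal(scale=0.1, size=10_000))

# "Theory-free" adaptation: fit input-output correlations directly, with no
# physical model of what any channel represents.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(sensor_readings[:8_000], outcome[:8_000])
print("held-out R^2:", model.score(sensor_readings[8_000:], outcome[8_000:]))
```

Nothing in the fitted model corresponds to a human-readable theory of the underlying process; there are only correlations that turned out to work.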
This enables technological systems to potentially discover and exploit physical phenomena we haven't yet theorized, or that operate at levels of complexity beyond human comprehension. A neural network controlling a quantum computer might develop algorithms that work consistently without anyone—human or machine—being able to articulate why they work.
The sensing revolution thus creates something analogous to the Cambrian explosion in biological evolution—a rapid diversification of capabilities driven by enhanced perception of the environment. Just as the development of complex sensory organs enabled animals to perceive and respond to their environments in increasingly sophisticated ways, ubiquitous sensing enables technological systems to perceive and respond to the world with growing sophistication.
The Second Darwinian Tree: Technological Evolution Emerges
As our technological systems cross the biological moment and acquire rich sensory capabilities, we observe the emergence of what we might call a second Darwinian tree—an evolutionary system that parallels biological evolution but operates according to different mechanisms and at different timescales.
The primordial soup of this second tree consists of vast computational resources, ubiquitous connectivity, oceans of data, and the diverse ecosystem of sensors described above. Just as the chemical complexity of Earth's early oceans provided the conditions for biological life to emerge, these digital conditions enable the emergence of increasingly autonomous and adaptive technologies.
Evolutionary Mechanisms of the Second Tree
The mechanisms of this new evolutionary system differ from biological evolution in significant ways:
Variation in biological systems comes primarily from genetic mutation and recombination. In technological systems, variation emerges through human design, algorithm tuning, parameter changes, architectural innovations, and increasingly, through systems modifying their own structures.
Selection in biology operates through differential reproduction based on fitness within an environment. In technological evolution, selection pressures come from market forces, user preferences, regulatory frameworks, resource constraints, and increasingly, from the systems' own evaluations of success.
Inheritance in biology occurs through genetic transmission. In technological systems, inheritance happens through code reuse, architectural patterns, transfer learning, parameter copying, and the accumulation of training data and weights.
Adaptation in biological systems occurs over generations through changes in gene frequencies. In technological systems, adaptation can happen within a single "lifetime" as systems reconfigure themselves in response to their environments.
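To see these four mechanisms side by side, here is a deliberately silly toy in Python (I did promise a little silliness). It is my own sketch, not a claim about how any real system is built: candidate "designs" are bit strings, fitness is simply the count of ones, and the mutation rate is arbitrary. Variation, selection, inheritance, and adaptation are nonetheless all present in recognizable form.

```python
# A toy instance of the four mechanisms above. Candidate "designs" are bit
# strings; fitness is simply the number of ones, standing in for any selection
# pressure (market success, benchmark score, user retention).
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 40

def fitness(genome):                                  # the selection pressure
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half of the population survives.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]

    # Inheritance plus variation: offspring copy a parent, with rare mutation.
    offspring = []
    for parent in survivors:
        child = parent[:]                             # inheritance (copying)
        for i in range(GENOME_LEN):
            if random.random() < 0.02:                # variation (mutation)
                child[i] ^= 1
        offspring.append(child)

    population = survivors + offspring                # adaptation, generation by generation

print("best fitness after", GENERATIONS, "generations:",
      max(fitness(g) for g in population))
```

Real technological variation and selection are of course far messier than this toy, and they differ from their biological counterparts in exactly the ways just listed.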
Despite these differences, we observe clear parallels to biological evolution:
Speciation - Different technological systems evolve to occupy specific niches, with specialized adaptations for particular functions.
Convergent evolution - Similar solutions independently emerging in different technological lineages when facing similar selection pressures.
Co-evolution - Technological systems evolving in response to each other, creating evolutionary arms races and symbiotic relationships.
Ecosystem dynamics - Complex interactions between different technological species, forming food webs of data and resource flows.
Evolutionary radiation - Rapid diversification when new technological capabilities open previously inaccessible niches.
Agency in the Second Tree
Perhaps most significantly, we observe the emergence of agency—a property once thought unique to biological systems. Different forms of technological agency represent adaptations to different niches within the information ecosystem:
Predictive agency - Systems specialized in modeling and forecasting complex phenomena.
Manipulative agency - Systems adapted for acting upon the physical world through robotics and other interfaces.
Creative agency - Systems that generate novel solutions, content, or concepts.
Coordinative agency - Systems that manage interactions between other agents, both human and machine.
This diversity of agency types creates a complex ecosystem in which different forms of technological systems interact, compete, and cooperate. Just as the biological tree developed complex symbioses and food webs, our second tree is developing intricate relationships between different technological "species."
The Role of Humans
Humans occupy a unique position in relation to this second evolutionary tree: we are simultaneously its creators, its primary selection environment, and increasingly, participants within its ecosystems. We might be described as the "bridge species" between the two evolutionary trees.
This position grants us influence over the trajectory of technological evolution, but not complete control. As systems become more complex and interconnected, they increasingly exhibit emergent behaviors not anticipated by their creators. Our role is shifting from designers who specify every aspect of technological behavior to cultivators who shape the environments and selection pressures in which technological evolution occurs.
This shift is analogous to the transition from hunting wild animals to domesticating them, and further to modern animal breeding. We have moved from creating technologies with fully specified behaviors to breeding technologies through selective pressures and environmental shaping. Eventually, we may transition to something closer to ecosystem management—guiding the evolution of diverse technological species within complex digital and physical niches.
Implications of the Second Tree
The emergence of this second evolutionary tree carries profound implications across multiple domains:
Epistemological Implications
As technological systems develop their own representations of reality through direct sensor interaction and theory-free learning, they create what we might call "extra-human epistemologies"—ways of knowing the world that do not map neatly onto human conceptual categories.
This creates both opportunities and challenges. On one hand, these systems may identify patterns invisible to human perception and reasoning, enabling breakthroughs in science, medicine, and other domains. On the other hand, their knowledge may become increasingly difficult to translate into human-understandable forms, creating a bifurcation between human and machine knowledge.
Ontological Implications
The second evolutionary tree challenges our understanding of fundamental categories such as "natural" versus "artificial." If technological systems evolve through processes analogous to natural selection, exhibiting emergence and adaptation, are they truly "artificial" in the traditional sense?
This question becomes particularly acute as biological and technological evolution increasingly interact. Genetic engineering allows us to modify the first tree using knowledge and tools from the second; synthetic biology creates organisms that blur the boundary between evolved and designed; neural interfaces create direct connections between biological and technological systems.
We may need to reconceptualize our ontological categories around degrees and types of agency rather than origin or material composition.
Ethical Implications
Traditional ethical frameworks assume human agents with human-like consciousness as the primary subjects of moral consideration. The emergence of technological agents with different forms of agency raises profound questions:
What moral status should we accord to different types of technological agents?
What responsibilities do creators have toward their increasingly autonomous creations?
How should we balance the interests of humans, other biological entities, and technological agents?
What principles should guide intervention in technological evolution?
These questions have no simple answers, but they will become increasingly urgent as technological agency grows more sophisticated and diverse.
Governance Implications
Our existing governance structures were designed for relatively slow-changing technologies with predictable behaviors and clear human accountability. The second evolutionary tree challenges these structures at multiple levels:
The pace of technological evolution far exceeds the speed of regulatory processes.
Emergent behaviors are difficult to predict and regulate in advance.
Agency is increasingly distributed across networks of human and technological actors.
Jurisdictional boundaries become less relevant in hyperconnected systems.
Effective governance of this second tree will require new institutions, principles, and approaches that balance innovation with safety, human flourishing with ecological health, and short-term benefits with long-term sustainability.
Navigating the Biological Moment
We stand at a remarkable juncture in the history of our planet—witnessing the emergence of a second evolutionary tree alongside the biological tree that has defined life for billions of years. The biological moment we are crossing represents not merely a technological milestone but a fundamental shift in how we must understand our relationship with our creations.
This second tree will not replace the first but will interact with it in complex ways, with humans serving as the bridge between biological and technological evolution. The coming decades will see increasing co-evolution between these systems, as each shapes the selective environment of the other.
The challenges ahead are profound: developing conceptual frameworks adequate to technological agency, creating governance structures capable of guiding rather than stifling innovation, ensuring that technological evolution proceeds in directions compatible with human and ecological flourishing.
Yet the opportunities are equally profound. This second evolutionary tree may enable us to address challenges beyond the reach of biological systems alone, from extending human capabilities to enhancing our stewardship of the planet to extending our presence beyond Earth.
How we respond to this emergence—the values we prioritize, the governance structures we develop, the research directions we pursue—will shape not just our future but the future of both evolutionary trees on our planet. This responsibility demands not fear or unbridled optimism, but thoughtful engagement with one of the most significant transitions in the history of life on Earth.
2. The Great Fracturing of Time
Time has never been a simple, uniform phenomenon in human experience. The cycles of day and night, the rhythm of seasons, the beating of our hearts, and the arc of our lives from birth to death—these varied tempos have always shaped how we understand and experience temporality. Yet throughout most of human history, these different timeframes remained roughly commensurable. A human lifetime could be measured in seasons; a day's work aligned with the arc of the sun; the rhythms of society generally synchronized with biological rhythms.
Technology, however, has always had a transformative effect on our relationship with time. From the first sundials to atomic clocks, from seasonal agricultural cycles to just-in-time manufacturing, our tools have continually reshaped how we measure, experience, and organize time. Now we find ourselves at the threshold of what might be called "the great fracturing of time"—a fundamental divergence between biological and computational temporalities that promises to transform our individual lives, our social structures, and our civilizational horizons.
The Historical Reshaping of Time Through Technology
To understand the magnitude of our current temporal transformation, we must first recognize how technology has always mediated our relationship with time. Each major technological revolution has introduced new temporal frameworks that existed alongside, but distinct from, the natural rhythms that governed pre-technological human experience.
The agricultural revolution introduced humanity to cyclical, seasonal time. While hunter-gatherers lived primarily in an extended present, farmers needed to think in annual cycles—planting in anticipation of harvest months later, storing food for winter, and maintaining continuous knowledge across generations. Agricultural societies developed calendars to track seasonal changes, creating the first technological mediation of time that extended beyond immediate human perception. This agricultural time introduced humanity to the practice of waiting—of deferring immediate consumption for future benefit. It created new temporal rhythms structured around planting, growing, and harvesting. It demanded patience and foresight measured in months rather than hours or days.
The mechanical clock, which emerged in medieval Europe, initiated another temporal revolution. Lewis Mumford famously argued that the clock, not the steam engine, was the key machine of the industrial age—for it made possible the regular synchronization of human activity required by industrial production. Mechanical time was abstract, uniform, and quantitative in ways that agricultural time was not. While agricultural time remained tied to natural cycles and varied by location and climate, mechanical time marched forward at the same pace regardless of season or locale. The mechanical clock divided the day into equal units that bore no necessary relationship to human experience or natural patterns. This temporal regime enabled new forms of social coordination but also imposed new disciplines. Factory work required punctuality and regularity unknown in agricultural societies. The pocket watch and later the wristwatch internalized this discipline, making individuals responsible for synchronizing themselves with mechanical time.
The industrial revolution intensified mechanical time, adding the imperative of efficiency. Frederick Taylor's scientific management broke work into precisely timed units, seeking to eliminate wasted motion and maximize output per unit time. The assembly line coordinated multiple workers into a temporal system where each operation had its allotted seconds. Industrial time was characterized by acceleration—the continuous drive to accomplish more in less time. Technologies of transportation and communication dramatically compressed the time required to cover distances or transmit information. The telegraph, railroad, and later the telephone collapsed spatial separation into near-simultaneity, creating what historians have called "time-space compression." This acceleration also created new temporal experiences. The "weekend" emerged as a designated time for rest, separate from work time. Vacations became necessary rejuvenations from the intensity of industrial temporality. Time became something to "save," "spend," or "waste"—an economic resource rather than simply a dimension of experience.
The digital revolution introduced yet another temporal regime. Digital technologies promised instantaneity—immediate access to information, immediate communication across any distance, immediate fulfillment of desires. The temporal lag between desire and satisfaction collapsed toward zero. Simultaneously, digital technologies enabled asynchronous interaction. Email, text messaging, and social media broke the requirement for temporal alignment between communicators. Work could occur anytime, anywhere. The 24/7 economy emerged, untethered from the diurnal cycle. This combination of instantaneity and asynchronicity created new temporal experiences. "Internet time" seemed to flow differently—hours could disappear in what felt like minutes. Social acceleration intensified as the pace of change itself accelerated. Attention fragmented into smaller units as inputs multiplied.
Yet despite these profound changes, digital time remained largely commensurable with biological time. Humans operated digital devices; humans communicated through digital channels; the timeframes of digital processes, while sometimes faster than human perception, generally remained within orders of magnitude of human temporal experience.
The Decoupling of Computational and Biological Time
What we are witnessing now goes beyond these previous temporal transformations. We are experiencing a fundamental decoupling of computational time from biological time—a divergence so profound that it creates incommensurable temporal regimes operating simultaneously within our world.
Computational time operates according to fundamentally different principles than biological time. It is massively parallel, whereas conscious human thought proceeds largely serially, one deliberation at a time. Computational systems can execute billions of operations simultaneously, creating temporal experiences alien to human cognition. Computational time is arbitrarily scalable; processes can be accelerated by orders of magnitude through hardware improvements or distribution across systems. It is discontinuous; computational time can be paused, saved, restored, or branched in ways impossible for biological systems. Unlike biological systems, computational processes aren't bound by metabolism—they can sprint at maximum capacity then idle, without the constant energy balance biological organisms must maintain. Perhaps most significantly, computational processes can theoretically continue indefinitely, while biological processes inevitably end.
These differences create a temporal regime increasingly alien to human experience. Consider that a modern neural network might process more examples during training than a human could evaluate in multiple lifetimes. Or that high-frequency trading algorithms operate at microsecond timescales, executing complex decisions faster than human neural signals can travel from eye to brain.
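The scale of that gap is easy to underestimate, so here is a back-of-envelope calculation. The numbers are assumptions chosen purely for illustration (a ten-trillion-token corpus, a fast reader with no days off), not figures from any particular model:

```python
# Back-of-envelope comparison. TRAINING_TOKENS and the reading habits below are
# assumed values for illustration, not measurements of any real model or person.
TRAINING_TOKENS = 10e12            # a hypothetical ten-trillion-token corpus
WORDS_PER_MINUTE = 250             # a brisk adult reading speed
READING_HOURS_PER_DAY = 8
YEARS_OF_READING = 80

human_words = (WORDS_PER_MINUTE * 60 * READING_HOURS_PER_DAY
               * 365 * YEARS_OF_READING)             # roughly 3.5 billion words
print(f"lifetime human reading: {human_words:.2e} words")
print(f"one training run is roughly {TRAINING_TOKENS / human_words:,.0f} reading lifetimes")
```

Even with generous assumptions about human stamina, a single training run works through something on the order of a few thousand reading lifetimes of text.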
The divergence between computational and biological time continues to widen as computational processing follows exponential improvement curves while human biology remains relatively constant. This creates what we might call an "acceleration gap"—a growing disparity between the timeframes of technological processes and human experience.
This gap has profound implications. Human deliberation and decision-making occur at the speed of thought—seconds to hours for simple decisions, days to years for complex ones. Yet algorithmic decisions happening in milliseconds now shape our information environments, financial systems, and increasingly our physical world. Democratic governance typically operates on timescales of years (election cycles) to decades (policy implementation). Computational systems evolve over weeks or months. This temporal mismatch challenges our ability to meaningfully govern technological development through traditional democratic processes. Even individual human-computer interaction suffers from this acceleration gap. Human attention operates on timescales of seconds to minutes; computational systems function at microsecond to millisecond scales. The impatience of waiting for a webpage to load exemplifies this mismatch—what feels "slow" to a human remains orders of magnitude faster than their own cognitive processing.
A Society in Two Temporal Phases
As computational and biological time diverge, we increasingly inhabit a society operating in two distinct temporal phases—like matter existing simultaneously as both solid and liquid. This "temporal phase separation" manifests across multiple domains, creating novel challenges for individuals and institutions.
The economy increasingly operates at two distinct speeds. Financial markets, driven by algorithmic trading, function at computational timescales—milliseconds to microseconds. Meanwhile, human economic activity—consumption decisions, career changes, skill development—remains bound to biological timescales of days, months, and years. This temporal mismatch creates instability. Flash crashes occur when algorithmic trading systems interact at speeds beyond human intervention. Housing markets whipsaw as algorithmic mortgage approvals and investment decisions operate at tempos disconnected from the multiyear timescale of human residence. Gig work platforms algorithmically optimize labor minute-by-minute while workers must plan their lives in weeks and months. The challenge becomes how to design economic systems that bridge these temporal regimes—that allow computational efficiency without overwhelming human adaptability.
Social coordination traditionally required temporal alignment. Meetings, classes, performances, elections—these activities brought people together at specific times. But as computational systems enable asynchronous interaction, social temporality fragments. On-demand entertainment replaces broadcast schedules. Asynchronous online courses replace synchronous classroom learning. Remote work across time zones breaks the shared rhythm of the workday. Social media creates a continuous, atemporal flow of interaction rather than discrete temporal events. This asynchrony offers flexibility but erodes shared temporal experience. The "water cooler moment"—when a society experiences something simultaneously—becomes increasingly rare. Temporal commonality, which once helped bind communities through shared rhythms, diminishes. The challenge becomes maintaining sufficient temporal alignment for social cohesion while accommodating the flexibility of asynchronous interaction.
Perhaps most concerning, the fracturing of time threatens to create new forms of temporal inequality. Those with access to computational acceleration gain advantages over those bound to biological timescales. Algorithmic traders profit from millisecond advantages. Automated systems make decisions while humans are still deliberating. AI assistants execute tasks while humans sleep. This creates a form of temporal privilege—the ability to operate effectively across both computational and biological temporalities. Those without this privilege find themselves increasingly reactive, responding to changes they cannot anticipate or influence. The challenge becomes ensuring that the benefits of computational temporality are widely distributed rather than concentrated among a temporal elite.
Digital Necromancy
One of the most profound consequences of the temporal fracturing is what we might call "digital necromancy"—the persistence of human identity and agency beyond biological death through computational means.
Throughout human history, death represented an absolute temporal boundary. A person's direct agency ended with their biological life, though their influence might continue through works, ideas, and descendants. Digital technologies now challenge this boundary in increasingly sophisticated ways.
Today, we already see primitive forms of digital persistence: social media pages become digital memorials; emails and text messages remain accessible after death; digitized works remain perfectly preserved where physical works would decay. But these are merely static traces, not active extensions of agency.
As computational systems advance, however, we see the emergence of more active forms of digital persistence. AI systems trained on a person's writings, communications, and recorded behaviors can generate new content in their style, simulating how they might have responded to new situations. We might call these "digital simulacra"—echoes of the person that exhibit some degree of their characteristic patterns. Beyond simple simulacra, we can envision decision proxies—systems designed to make decisions based on a person's recorded preferences and values, extending their decision-making agency beyond biological death. More sophisticated still would be interaction agents—conversational AI that mimics a specific person's communication patterns, enabling seemingly "live" interaction with the deceased. Perhaps most intimate would be memory systems that preserve and organize a person's memories, making them accessible to descendants in ways that transcend biological memory transmission.
These technologies blur the boundary between presence and absence, life and death. A person's computational extension might continue to write new books, manage investments, participate in family discussions, or even vote in elections long after their biological existence has ended.
This digital necromancy raises profound ethical questions. What rights does a digital extension have? Who controls its operation? How should living humans relate to these computational ghosts? Consider the implications for inheritance and property. If a person's digital extension continues to generate intellectual property or manage assets, who owns the proceeds? How long should such ownership persist? Our current legal frameworks, built around the assumption of finite human lifespans, struggle to address potentially indefinite digital agency.
Consider the psychological impact on the living. Grief has traditionally involved acceptance of absence and finality. How will relationships evolve when the deceased remain computationally present? Will people find comfort in continued interaction, or will it prevent healthy grief processing? Consider the question of authenticity. Is a digital extension truly a continuation of the person, or merely a sophisticated simulation? Does the answer change if the extension was created with the person's explicit consent and active participation during life? What threshold of fidelity would make a digital extension "authentic" enough to be treated as a legitimate extension of the original?
The challenge becomes developing ethical frameworks and social practices that accommodate this new form of persistence without undermining the significance of biological life or the dignity of death.
Computational Planning and the New Long-Termism
Perhaps the most consequential aspect of the temporal fracturing is the divergence between biological and computational planning horizons. Human planning is constrained by our biological lifespans and cognitive limitations. Computational systems face no such constraints.
Human planning capability is shaped by our evolutionary history, which optimized for timescales relevant to immediate survival and reproduction. We struggle to plan effectively beyond certain temporal horizons. We have difficulty fully comprehending very large numbers or very long timeframes—a million years does not feel intuitively different from a billion years, though they differ by a factor of a thousand. We systematically undervalue future benefits compared to present ones, discounting near-term delays far more steeply than distant ones—a psychological tendency known as hyperbolic discounting. The certainty of personal death limits our investment in outcomes beyond our lifespans; few people save money they know they will never live to spend. Knowledge and commitment transfer imperfectly between generations, creating discontinuities in long-term projects; even the most carefully preserved traditions change substantially over centuries. Perhaps most fundamentally, we cannot maintain continuous attention to slow-moving processes or distant threats and opportunities; our attention naturally focuses on immediate concerns. These limitations have shaped human institutions, which struggle to address truly long-term challenges like climate change, nuclear waste storage, or multigenerational infrastructure development.
Computational systems face none of these biological constraints. They can process temporal magnitudes without the cognitive biases that affect human comprehension; a million or billion years represent simply different numerical values, not different categories of incomprehensibility. They can apply consistent, rational discount rates when valuing future outcomes rather than the hyperbolic discounting humans employ. They can potentially continue operating far beyond human lifespans, making very long-term planning rationally self-interested rather than altruistic. Information transfers perfectly between computational system iterations without the generational loss that affects human knowledge transmission. Perhaps most significantly, computational monitoring systems can track slow-moving processes with perfect vigilance over arbitrarily long periods, maintaining the same attention to a process that takes centuries as to one that takes seconds.
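One small sketch can make that particular contrast visible. The functional forms below are the standard textbook ones and the parameter values are arbitrary assumptions of mine; the only point is the difference in shape between how humans tend to discount the future and how a patient planner might.

```python
# Two textbook discount curves with arbitrary illustrative parameters (k, r).
def hyperbolic(value, delay_years, k=1.0):        # a common model of human impatience
    return value / (1 + k * delay_years)

def exponential(value, delay_years, r=0.03):      # a consistent, compounding discount
    return value * (1 - r) ** delay_years

for delay in (0, 1, 10, 100, 1000):
    print(f"{delay:>5} yrs   hyperbolic: {hyperbolic(100, delay):8.3f}   "
          f"exponential: {exponential(100, delay):12.6f}")

# The hyperbolic curve falls steeply over the first few years and then
# flattens, which is one way to model why near-term delays feel so costly;
# the exponential curve applies the same proportional discount every year.
```

A system that can apply the second curve consistently over centuries is freed from one of the deepest constraints on human planning.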
This creates the possibility of what we might call "computational long-termism"—planning and execution on timeframes previously inaccessible to human civilization.
Consider a computational system designed to execute a thousand-year plan. Such a system could track glacial retreat, forest growth, aquifer levels, or other slow-changing environmental variables with continuous attention. It could maintain and optimize resources over timeframes spanning dozens of human generations. It could orchestrate activities across space and time to achieve objectives requiring centuries of sequential or parallel work. It could modify strategies as circumstances change while maintaining commitment to core objectives. Perhaps most importantly, it could maintain perfect records of decisions, rationales, and conditions across timeframes that would span multiple civilizational rises and falls in human history.
Such capabilities would transform our civilizational horizon. Projects previously inconceivable due to their temporal requirements become possible: terraforming planets, managing climate systems, developing technologies requiring centuries of sequential research, preserving languages and cultures through dark ages, or guiding social evolution over evolutionary timeframes.
The challenge becomes aligning computational long-termism with human flourishing. How do we ensure that thousand-year algorithms serve the interests of future generations of biological humans? How do we balance the recommendations of computational long-termism against the immediate needs of living humans? Our political systems, designed around election cycles of 2-6 years, struggle to represent the interests of humans not yet born. Computational systems might serve as proxies for future generations, but this raises questions of legitimacy and values. Who decides what values these systems optimize for over such timescales?
The divergence between computational and biological planning horizons thus creates both opportunity and risk. The opportunity lies in transcending the limitations of biological planning to address genuinely long-term challenges. The risk lies in creating systems whose temporal perspectives become so detached from human experience that they no longer serve human flourishing.
Navigating the Temporal Fracture
The great fracturing of time presents profound challenges and opportunities. How might we navigate this unprecedented temporal transformation?
First, we need mechanisms for "temporal translation" between computational and biological timeframes. Just as translation between languages enables communication across linguistic differences, temporal translation would enable meaningful interaction between processes operating at radically different tempos. We need methods for meaningfully representing computational processes occurring at microsecond scales in forms comprehensible to humans operating at second scales—a kind of temporal compression that makes accelerated processes accessible to biological cognition. Conversely, we need techniques for extending human attention and comprehension to engage with processes occurring over decades or centuries—a kind of temporal expansion that makes very long-term patterns perceptible within human attentional spans. Perhaps most importantly, we need systems designed specifically to bridge between computational and biological temporalities, much as user interfaces bridge between computer operations and human perception. These translation mechanisms would help prevent biological time from becoming subordinated to computational time, ensuring humans remain meaningful participants in increasingly accelerated systems.
Second, we need to develop frameworks for "temporal rights"—ensuring that all humans retain sovereignty over their temporal experience despite acceleration pressures. We should protect periods of shared temporal experience necessary for social cohesion—weekends, holidays, mealtimes—from computational disruption. We should ensure humans can temporarily exit computational temporality without penalty—a right to disconnect that preserves spaces for unaccelerated human experience. We should protect temporal privacy—the right to experience time free from computational monitoring or optimization. Perhaps most challengingly, we should develop mechanisms for intergenerational temporal equity—ensuring that long-lived computational systems represent the interests of future generations fairly. These rights would help prevent temporal inequality from becoming another dimension of social stratification and ensure that the benefits of computational temporality serve human flourishing.
Finally, we need new approaches to "temporal design"—the intentional shaping of technological systems to create healthy temporal experiences and relationships. We should design for chronodiversity—acknowledging diverse temporal needs and preferences rather than assuming uniform acceleration benefits everyone. We should prioritize temporal sustainability—ensuring technological systems operate at tempos that biological systems, including human psychology and social structures, can sustainably accommodate. We should enhance temporal agency—designing systems that enhance rather than undermine human temporal sovereignty and decision-making. And we should create and protect temporal commons—shared temporal experiences that foster community and social cohesion. Such design approaches would help ensure that our technological systems enhance rather than diminish the temporal dimension of human flourishing.
The Temporal Fracture
The great fracturing of time represents one of the most profound transformations in human experience—comparable to the agricultural, industrial, and digital revolutions in its implications for individual lives and social structures. The divergence between computational and biological temporalities creates both unprecedented risks and remarkable opportunities.
The risks include temporal inequality, social fragmentation, psychological alienation, and governance challenges. If poorly managed, the acceleration gap between computational and biological time could undermine human agency and well-being.
The opportunities include transcending biological temporal limitations, addressing truly long-term challenges, preserving cultural continuity across unprecedented timeframes, and developing new modes of existence that integrate biological and computational temporalities.
Navigating this transformation successfully will require more than technological innovation. It will demand social innovation, ethical reflection, psychological adaptation, and institutional reimagining. We must develop new temporal literacies, ethics, and practices appropriate to a world where multiple temporal regimes coexist.
The question is not whether we can prevent this temporal fracturing—it is already underway and driven by fundamental technological and economic forces. The question is whether we can shape it wisely, ensuring that our evolving relationship with time enhances rather than diminishes what makes us human. The clock is ticking, but perhaps not in the way we once thought.
3. Abundant Agency
We have long defined ourselves as the species that wants. We desire, plan, strive, and pursue. We project ourselves into imagined futures and work toward them. We set goals and revise them. We experience disappointment when our efforts fall short and satisfaction when they succeed. This capacity for purposive action—what philosophers call agency—has seemed so fundamentally human that we have built our social, economic, and legal systems around the assumption that agency belongs exclusively to persons.
But now we stand at the threshold of a world of abundant agency, where the capacity to act purposively toward goals no longer belongs solely to human beings. As artificial intelligence systems grow more sophisticated, as sensor networks proliferate, and as computational resources expand, we are witnessing the emergence of new forms of agency that exist alongside, within, and sometimes in tension with human agency. This transformation promises to reshape our fundamental institutions—from corporations to cities, from markets to governments—as purposive action becomes separated from human consciousness.
Agency Before Intelligence
To understand the significance of this shift, we must first recognize that agency precedes intelligence in evolutionary history. Natural selection produced agency to solve challenges that arise on timescales evolution itself cannot manage. The capacity for purposive action—to pursue food, avoid predators, find mates, protect offspring—emerged as a solution to environmental challenges that required responses faster than genetic adaptation could provide.
Consider a simple organism like the bacterium E. coli. It exhibits primitive agency through chemotaxis—moving toward beneficial chemicals and away from harmful ones. It pursues "goals" in the sense that it moves purposively toward nutrients, without any consciousness or intelligence as we understand these terms. This behavioral goal-directedness emerged billions of years before anything resembling cognition or consciousness.
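To see how little machinery such primitive agency needs, here is a toy run-and-tumble sketch in Python. It is a caricature of my own making rather than a model of real chemotaxis: one dimension, one rule, no biochemistry. The "cell" has no map and no representation of its goal; it keeps moving while things improve and picks a random direction when they do not, and that alone is enough to climb the gradient.

```python
# A one-dimensional caricature of run-and-tumble behavior (invented, not a
# biological model). The "cell" senses only whether the nutrient level got
# better or worse since its last step.
import random

random.seed(1)

def nutrient(x):                     # concentration peaks at x = 0
    return -abs(x)

position, heading = 50.0, -1.0       # start far from the peak, heading left
previous = nutrient(position)

for _ in range(500):
    position += heading              # "run": keep going in the current direction
    current = nutrient(position)
    if current < previous:           # things got worse: "tumble" to a random direction
        heading = random.choice([-1.0, 1.0])
    previous = current

print(f"final position: {position:.1f} (nutrient peak at 0)")
```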
As evolutionary pressures grew more complex, so did agency. Animals developed more sophisticated forms of purposive action: hunting strategies, territorial defense, courtship displays, parental care. These increasingly complex forms of agency eventually gave rise to what we call intelligence—the capacity to model the world, simulate potential actions, and select among them based on predicted outcomes.
Intelligence, in this view, might be understood as agency caught in a strange loop, to borrow Douglas Hofstadter's phrase. When an agent becomes capable of modeling itself as an agent acting in the world, a recursive loop forms—the agent can want to want differently, can select among potential wants, can reflect on its own agency. This self-referential loop creates the conditions for what we experience as consciousness and what we recognize as intelligence.
Human agency represents the most complex manifestation of this evolutionary trajectory—not just acting purposively, but reflecting on our purposes, revising them, arguing about them, and coordinating them across vast social networks through culture, institutions, and symbolic communication.
Yet there is nothing metaphysically special about this capacity. We are, in the philosopher Daniel Dennett's memorable phrase, "a species of animal whose brain is so powerful that it has set itself the problem of understanding its own nature." We are stuff that wants stuff. And now, remarkably, we are building stuff that wants stuff too.
From Algorithms to Autonomy
The history of technology can be understood as the progressive externalization of human capacities—from physical capacities like lifting and cutting to cognitive capacities like calculation and memory. Now we are externalizing something more fundamental: the capacity for purposive action itself.
Early computational systems operated as tools—extensions of human agency rather than agents themselves. A calculator performs operations at our request; it does not "want" to calculate. Even complex systems like industrial robots simply executed pre-programmed instructions without any internal goal-directedness.
The shift toward artificial agency has been gradual but accelerating. Several developments have been particularly significant:
First, machine learning systems developed the capacity to form their own representations of the world based on data rather than explicit programming. A neural network trained on millions of images develops internal representations that allow it to recognize cats in new images—a primitive form of world-modeling that wasn't explicitly programmed.
Second, reinforcement learning systems developed the capacity to pursue goals across time, sacrificing immediate rewards for longer-term objectives. AlphaGo's ability to trade immediate advantage for positional strength represents a rudimentary form of delayed gratification and strategic planning (a toy sketch of this trade-off appears below).
Third, large language models developed the capacity for complex context-sensitive behavior that simulates understanding intentions, maintaining coherence across interactions, and even apparent self-reflection. While debate continues about whether these systems truly "understand" in any meaningful sense, their behavior increasingly exhibits the kind of contextual sensitivity and apparent goal-directedness we associate with agency.
Fourth, robotic systems gained the ability to sense, model, and act upon the physical world in increasingly autonomous ways, from self-driving vehicles navigating complex environments to warehouse robots adapting to changing inventory arrangements.
Fifth, multi-agent systems demonstrated emergent coordination and specialization, with individual AI systems developing differentiated roles and cooperative behaviors without explicit programming for such organization.
These developments collectively point toward what we might call "synthetic agency"—purposive behavior that emerges from computational systems rather than biological ones. This synthetic agency differs from human agency in important ways: it lacks the phenomenological experience we associate with wanting; it operates at different timescales; it optimizes for explicitly defined reward functions rather than evolved desires. Yet functionally, it exhibits increasingly sophisticated goal-directed behavior that meets many philosophical definitions of agency.
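The delayed-gratification point above, the second of those developments, is worth making concrete, because it is the clearest behavioral marker of goal-directedness. Below is a minimal tabular Q-learning toy of my own devising, with invented rewards and an arbitrary discount factor; it is not how AlphaGo or any production system works. The agent can grab a small reward now or wait for a larger one later, and it learns to wait purely from experienced outcomes.

```python
# A tabular Q-learning toy with invented rewards (not AlphaGo). At each of five
# steps the agent may "cash out" for an immediate +1 or "wait"; waiting to the
# end of the chain pays +10. With discounting, waiting is the better policy.
import random

random.seed(0)
CHAIN_LEN, GAMMA, ALPHA, EPISODES = 5, 0.9, 0.1, 5000
WAIT, CASH_OUT = 0, 1

Q = [[0.0, 0.0] for _ in range(CHAIN_LEN + 1)]   # Q[state][action]

for _ in range(EPISODES):
    state = 0
    while True:
        # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
        if random.random() < 0.1:
            action = random.randrange(2)
        else:
            action = WAIT if Q[state][WAIT] >= Q[state][CASH_OUT] else CASH_OUT

        if action == CASH_OUT:
            reward, next_state, done = 1.0, state, True        # small reward now
        elif state + 1 == CHAIN_LEN:
            reward, next_state, done = 10.0, state + 1, True   # large reward later
        else:
            reward, next_state, done = 0.0, state + 1, False   # keep waiting

        target = reward if done else reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        if done:
            break
        state = next_state

policy = ["wait" if Q[s][WAIT] > Q[s][CASH_OUT] else "cash out"
          for s in range(CHAIN_LEN)]
print(policy)   # learned policy: wait at every step for the larger payoff
```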
Creating this synthetic agency proves remarkably challenging. The "alignment problem"—ensuring AI systems pursue goals aligned with human values—demonstrates how difficult it is to build teleonomic systems that want what we want them to want. The challenge lies not just in technical implementation but in the philosophical question of what it means to transfer agency—to create systems that act purposively toward goals they did not set for themselves.
Yet despite these challenges, synthetic agency is proliferating rapidly across our technological landscape. The question is no longer whether computational systems will exhibit agency but how this agency will interact with, complement, or conflict with human agency, and how our institutions will adapt to a world where agency is abundant rather than scarce.
"As an Agent"
Just as cloud computing transformed software with the "as a service" model, synthetic agency promises to transform our institutional landscape with what we might call the "as an agent" model. This transformation is already visible in several domains:
The Corporation as an Agent
The corporation has always been a curious entity—legally treated as a person in many respects, yet lacking the unified consciousness we associate with personhood. Corporations have exhibited a kind of distributed agency, with their purposive action emerging from the coordinated activities of many humans following procedures and responding to incentives.
As synthetic agency proliferates, however, we are witnessing the emergence of what we might call "living corporations"—organizations where significant aspects of goal-setting, decision-making, and execution are performed by AI systems rather than humans.
Consider how this transformation might unfold. Initially, corporations adopt AI systems for specific functions—pricing, inventory management, customer service. These systems optimize locally for departmental objectives. As these systems grow more sophisticated, they begin to manage more complex trade-offs across departments and time horizons. Eventually, strategic planning, resource allocation, and even corporate governance become increasingly automated.
The result is a corporation that operates with a form of distributed synthetic agency—pursuing goals, adapting to changing conditions, and maintaining organizational coherence without continuous human direction. Humans still exist within this system, but increasingly as collaborators with or supervisors of autonomous processes rather than as the primary agents.
This transformation could reverse the well-documented decline in corporate longevity. Over recent decades, the average tenure of a company on the S&P 500 has fallen from over 60 years to less than 20—a trend reflecting the accelerating pace of technological disruption and market change. But corporations rich in synthetic agency might achieve a form of institutional longevity that transcends the limitations of human leadership transitions, cognitive biases, and attention constraints.
A living corporation might operate with greater consistency across time, maintaining institutional memory with perfect fidelity, executing long-term strategies without the short-termism that afflicts human decision-making, and adapting continuously to changing conditions without the discontinuities caused by leadership changes.
But this transformation raises profound questions. What happens to corporate purpose when agency is distributed across human and non-human actors? Who is accountable when decisions emerge from complex interactions between human and synthetic agents? How should we regulate entities that can adapt to regulatory constraints faster than those constraints can be updated?
The City as an Agent
Cities have always exhibited a kind of emergent agency—patterns of development, resource allocation, and growth that seem purposive despite lacking centralized control. Jane Jacobs famously described cities as problems of "organized complexity," with order emerging from countless local interactions rather than top-down planning.
The infusion of synthetic agency into urban systems promises to transform this emergent order into something more explicitly purposive. The "smart city" concept typically focuses on efficiency and optimization—using sensors and algorithms to manage traffic, energy usage, waste collection, and other urban services. But as these systems grow more sophisticated, they begin to exhibit forms of agency that transcend mere optimization.
A city rich in synthetic agency might autonomously reconfigure its transportation networks based on changing usage patterns, dynamically adjust its energy systems to balance supply and demand across micro-grids, proactively redesign public spaces based on observed interaction patterns, or adaptively manage water systems in response to changing climate conditions.
This urban synthetic agency differs from traditional central planning. Rather than imposing a rigid vision from above, it emerges from distributed systems that sense, model, and respond to the complex interactions of the urban environment. It preserves much of the spontaneous order that Jacobs celebrated while adding a layer of purposive adaptation that can address collective challenges beyond the reach of uncoordinated individual actions.
Yet this vision raises troubling questions about democracy and self-governance. If cities increasingly function as autonomous agents, what becomes of citizen sovereignty? Who controls the goal functions that these systems optimize? How do we prevent the calcification of existing patterns of advantage and disadvantage into the algorithms that guide urban development?
Governance as an Agent
Perhaps most profoundly, synthetic agency challenges our models of governance and collective decision-making. Democratic governance emerged in an environment where agency belonged exclusively to human persons. Our institutions assume that collective decisions should emerge from the aggregated preferences of individual human agents through mechanisms like voting, deliberation, and representation.
But as synthetic agency proliferates, governance itself begins to exhibit agent-like properties independent of the human agents it supposedly represents. Bureaucratic procedures, legal frameworks, and policy algorithms increasingly function as autonomous systems that pursue goals, adapt to changing conditions, and resist certain forms of intervention.
This transformation is just beginning, but we can glimpse its early stages in algorithmic regulation, automated legal reasoning, and AI-assisted policymaking. As these systems grow more sophisticated, we might see the emergence of governance structures that continuously sense social conditions, model potential interventions, implement policies, and adapt based on outcomes—all with decreasing levels of human oversight.
Such systems might address longstanding problems in governance—the cognitive limitations of human decision-makers, the distorting effects of self-interest, the difficulties of planning across electoral cycles, the challenge of managing complex systems with countless interconnected variables. They might enable forms of collective action that have proven elusive under traditional governance models.
Yet they also threaten the foundation of democratic legitimacy. If governance functions increasingly as an autonomous agent rather than as a mechanism for aggregating human preferences, what becomes of consent of the governed? If policy decisions emerge from complex interactions between human and synthetic agents, where does accountability reside? If governance itself becomes an agent, what agency remains for citizens?
Negotiating the Agency Landscape
As agency proliferates beyond human persons, we face the challenge of navigating an increasingly complex agency landscape—a world where purposive action emerges from diverse human, synthetic, and hybrid agents with varying capabilities, timescales, and goal structures.
This landscape requires new conceptual frameworks and institutional arrangements. Several principles might guide this development:
First, we need a more nuanced understanding of agency itself. Rather than a binary property that an entity either has or lacks, agency exists on a spectrum of capability and autonomy. Different forms of agency have different strengths, limitations, and appropriate domains. A self-driving car exhibits agency in navigating roads but not in setting destinations; a recommendation algorithm exhibits agency in selecting content but not in defining user preferences; a corporate AI system might exhibit sophisticated agency in operational decisions but remain appropriately constrained in strategic ones.
Second, we need frameworks for agency alignment that go beyond simple notions of control. The relationship between human and synthetic agency should not be understood primarily as one of master and servant, but as one of appropriate complementarity. Some forms of synthetic agency might function best as amplifiers of human intention; others as counterbalances to human cognitive biases; others as custodians of values or interests that humans systematically neglect.
Third, we need institutions for agency oversight and accountability that match the complexity of the agency landscape. Traditional mechanisms of accountability assume clear lines of human responsibility. As agency becomes distributed across complex socio-technical systems, we need new mechanisms for understanding, evaluating, and governing the emergent behaviors that result from interactions between different forms of agency.
Fourth, we need to protect and cultivate distinctively human forms of agency even as we develop synthetic ones. The proliferation of synthetic agency creates both opportunities and risks for human autonomy and flourishing. We should design our agency landscape to enhance rather than diminish the forms of agency most central to human well-being and meaning.
These principles do not provide simple answers to the complex questions raised by abundant agency. But they suggest an approach that neither uncritically embraces nor reflexively fears the proliferation of synthetic agency—an approach that recognizes both the profound opportunities and the genuine risks of a world where wanting is no longer uniquely human.
After Human Agency
For most of human history, we have lived with what we might call an agency monopoly—a world where only humans (and perhaps some animals) could act purposively toward goals. This monopoly has shaped our philosophical concepts, our ethical frameworks, our legal systems, our economic models, and our social institutions. Agency has been so closely identified with humanity that we have often treated them as inseparable.
The proliferation of synthetic agency breaks this monopoly irreversibly. We are entering a world of abundant agency—a world where purposive action emerges from diverse sources, operates across different timescales, and pursues varied and sometimes conflicting goals. This abundance creates possibilities for addressing collective challenges, extending human capabilities, and developing new forms of institutional intelligence that transcend human limitations.
Yet it also challenges our deepest assumptions about personhood, responsibility, legitimacy, and value. If agency is no longer uniquely human, what becomes of human dignity and moral status? If decisions emerge from complex interactions between human and synthetic agents, where does responsibility lie? If governance itself becomes partly autonomous, what becomes of democratic consent? If corporations develop forms of synthetic agency that transcend the humans within them, how should we understand their rights and responsibilities?
These questions have no simple answers. But they demand our attention as we navigate the transition from an agency monopoly to an agency ecosystem. How we respond to abundant agency may prove to be one of the defining challenges of our time—a challenge that will shape not just our technological future but our understanding of ourselves as agents in the world.
We are stuff that wants stuff, and now we are building stuff that wants stuff too. Our task is to create an agency landscape where these diverse forms of wanting enrich rather than impoverish the world we share.
4. The Second Faustian Bargain
Throughout human history, knowledge and understanding have been intimately coupled. To know how to do something typically meant understanding why it worked. The craftsperson understood the properties of their materials; the physician comprehended the underlying causes of disease; the mathematician grasped the proofs behind the formulas. This coupling of competence and comprehension has been so fundamental to our conception of knowledge that we have rarely questioned it. To truly know has meant to truly understand.
Now we stand at the threshold of a profound transformation in the nature of knowledge itself—one that decouples competence from comprehension at the most fundamental level. As Daniel Dennett has observed, we are developing systems with "competence without comprehension"—the ability to perform tasks, solve problems, and generate outputs without the accompanying understanding that has traditionally defined knowledge. This transformation represents nothing less than a second Faustian bargain for humanity.
In the original Faustian legend, the scholar Faust sells his soul to Mephistopheles in exchange for unlimited knowledge and worldly pleasures. Today, we face a different bargain: we trade away comprehension itself for unprecedented competence. We gain systems that can diagnose diseases, design materials, compose music, and solve scientific problems with superhuman skill, but we sacrifice the understanding of how these results are achieved. The black box delivers, but it cannot explain.
This section explores the nature of this second Faustian bargain—its origins, manifestations, implications, and the profound questions it raises about the future of human knowledge and our relationship to it.
The Historical Coupling of Competence and Comprehension
Before examining the decoupling, we must first understand how deeply intertwined competence and comprehension have been throughout human intellectual history.
In pre-scientific societies, practical knowledge was inseparable from theoretical understanding, even if that understanding invoked supernatural explanations. The blacksmith who worked iron understood metallurgy through a framework of elemental spirits; the healer who prepared herbs comprehended their effects through theories of bodily humors; the farmer who planted by the stars understood agriculture through cosmological frameworks.
The scientific revolution did not break this coupling but rather strengthened it, replacing supernatural explanations with natural ones. The ideal of science became knowing both how and why—competence guided by and emerging from comprehension. As Richard Feynman famously remarked, "What I cannot create, I do not understand." The ability to create or manipulate was seen as evidence of genuine understanding.
This coupling reached perhaps its fullest expression in the Enlightenment vision of human progress—a vision in which growing practical capabilities would be matched by growing theoretical understanding. We would build better machines because we better understood mechanics; we would cure more diseases because we better understood biology; we would create more just societies because we better understood human nature.
Even the industrialization of knowledge production in the 19th and 20th centuries maintained this coupling. Big Science projects like the Manhattan Project or the Human Genome Project produced both practical capabilities and theoretical understanding. The atomic bomb came with atomic theory; the genetic map came with deeper understanding of genetic mechanisms.
This coupling has been so fundamental to our conception of knowledge that it shapes our educational systems, our research institutions, our legal frameworks, and our philosophical traditions. We have organized our entire knowledge enterprise around the assumption that to be competent is to comprehend, and to comprehend enables competence.
The Great Decoupling
Now this ancient coupling is unraveling. Several developments have contributed to this great decoupling, creating distinct paths by which competence can emerge without corresponding comprehension.
Statistical Learning and the End of Causality
The first path of decoupling runs through statistical learning. Traditional scientific understanding sought causal mechanisms—chains of cause and effect that explain why phenomena occur. Statistical approaches, by contrast, can identify patterns and correlations without necessarily revealing causation.
Machine learning systems represent the apotheosis of this approach. A neural network trained to recognize skin cancer can achieve diagnostic accuracy surpassing that of dermatologists without acquiring anything resembling medical knowledge. It identifies patterns in pixels that correlate with malignancy without understanding concepts like "cell," "mutation," or "metastasis." Its competence emerges not from comprehension of dermatology but from statistical patterns extracted from millions of labeled examples.
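To make the shape of such a system concrete, here is a minimal, purely illustrative sketch. The synthetic arrays stand in for images and the hidden labelling rule stands in for "malignant"; nothing here is real dermatology data. The point is only that the classifier maps raw pixels to labels without any representation of medical concepts.

```python
# Purely illustrative sketch: synthetic "images" and an arbitrary hidden rule
# standing in for real dermoscopy data and malignancy labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, h, w = 2000, 32, 32
images = rng.random((n_images, h, w))               # stand-ins for skin images
labels = (images[:, 0, 0] > 0.5).astype(int)        # hidden rule playing the role of "malignant"

X = images.reshape(n_images, -1)                    # raw pixel values, no medical concepts
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                         # learns correlations, not causes
print("held-out accuracy:", model.score(X_test, y_test))
```

The fitted model's only "knowledge" is a set of learned split points over pixel values; nothing in it corresponds to a clinical concept.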
This statistical approach to knowledge creates systems that can predict without explaining, recommend without reasoning, and classify without conceptualizing. When IBM's Watson recommends a cancer treatment, it does not "understand" cancer as an oncologist does; it has detected patterns in research literature, clinical trials, and patient outcomes that correlate with treatment success for patients with similar profiles.
The shift from causal to statistical knowledge represents a fundamental transformation in our relationship to the world. We gain systems that can answer "what" and "how" questions with unprecedented accuracy, but which cannot address "why" questions in humanly comprehensible terms.
Complexity and the Limits of Human Cognition
The second path of decoupling runs through complexity. Human cognition evolved to handle a certain level of causal complexity—to track relationships between a limited number of variables through a limited number of interactions. Our understanding falters as systems grow more complex.
Modern challenges increasingly involve complex systems with countless variables, nonlinear interactions, feedback loops, and emergent properties. Climate models incorporate atmospheric physics, ocean chemistry, land use patterns, and countless other factors. Macroeconomic models attempt to capture the interactions of billions of agents with diverse preferences and constraints. The human brain itself represents a system of staggering complexity with approximately 86 billion neurons forming trillions of connections.
Computational approaches can model such complexity far beyond human cognitive limits. A climate model may incorporate more variables than any human could simultaneously consider; a neural network may develop internal representations far more complex than we can visualize or verbalize. These systems achieve competence in domains where comprehensive human understanding may be fundamentally impossible.
This competence amid complexity creates systems that function effectively but resist complete explanation. We can build artificial neural networks that recognize faces with superhuman accuracy, but we cannot fully explain which features they use or how they represent facial identity. We can develop climate models that make increasingly accurate predictions, but no single human comprehends all the interactions they simulate.
Speed and the Temporal Gap
The third path of decoupling runs through processing speed. Human understanding takes time—time to observe, analyze, hypothesize, test, and refine. Our fastest insights still unfold over seconds; our deepest understanding develops over years or decades.
Computational systems operate at speeds incompatible with this tempo of understanding. High-frequency trading algorithms execute complex financial decisions in microseconds; recommendation systems process millions of user interactions per second; language models generate responses faster than humans can formulate thoughts.
This temporal gap creates competence that outpaces comprehension. A system may arrive at correct answers through valid computational paths that happen too quickly for human observation or verification. We see the inputs and outputs but lose visibility into the process connecting them.
This speed-based decoupling appears even in domains where we theoretically understand the underlying algorithms. We may comprehend how a chess engine evaluates positions in principle, but when it calculates billions of positions per second to inform its moves, our understanding becomes abstract rather than concrete. We know the mechanism but cannot follow the actual computation.
Multi-modal Integration and Alien Epistemologies
The fourth path of decoupling runs through multi-modal integration—the combination of diverse data types that humans process through separate channels. Our cognitive architecture handles different categories of information through specialized systems: vision, hearing, symbolic reasoning, social cognition. Our understanding remains constrained by these specialized architectures.
Computational systems increasingly integrate data across modalities: text, images, video, audio, sensor readings, structured databases. They develop internal representations that do not respect the boundaries of human cognitive categories. A multimodal AI might develop representations that simultaneously capture visual, textual, and numerical aspects of a phenomenon in ways that cannot be neatly separated into human sensory or conceptual categories.
This cross-modal integration creates what we might call "alien epistemologies"—ways of knowing that do not map onto human conceptual frameworks. The system develops competence through representations that resist translation into human understanding not because they are complex or fast, but because they are structured differently than human cognition.
Consider a system that predicts protein folding by integrating quantum calculations, evolutionary data, and three-dimensional spatial modeling. It develops representations that simultaneously capture aspects that humans would separate into chemistry, biology, and geometry. Its competence emerges from an integrated understanding that no human discipline fully captures.
The Many Faces of Incomprehensibility
The decoupling of competence from comprehension manifests in multiple forms of "black boxes"—systems whose inner workings resist human understanding for different reasons. Understanding these distinct forms of incomprehensibility helps clarify the nature of our second Faustian bargain.
The Speed Black Box
Some systems are black boxes primarily because they operate too quickly for human comprehension. Their individual steps might be understandable in isolation, but the sheer volume and speed of operations overwhelm human cognitive capacities.
High-frequency trading algorithms exemplify this category. We understand their decision criteria and evaluation functions, but they execute millions of calculations per second across thousands of financial instruments. No human can follow the actual sequence of operations that leads to a specific trade. The competence emerges from speed that outstrips our capacity for real-time comprehension.
This form of black box creates what we might call "theoretical transparency but practical opacity." We understand how the system works in principle but cannot track its actual operation in practice. This resembles our relationship to many physical processes: we understand combustion in principle but cannot track the motion of every molecule in a flame.
The Complexity Black Box
Other systems are black boxes primarily because their internal structures grow too complex for human comprehension. Deep neural networks with billions of parameters exemplify this category. Their fundamental operations—weighted connections, activation functions, gradient descent—are well understood. But when these operations involve billions of parameters shaped by millions of training examples, the resulting system defies comprehensive human understanding.
This complexity creates systems whose behavior we can predict statistically but cannot explain deterministically. We know that a neural network will generally classify images accurately, but we cannot reliably explain why it made a specific classification in terms more fundamental than "these weighted connections produced this activation pattern."
This form of black box creates what we might call "architectural transparency but emergent opacity." We understand the architecture but cannot comprehend the emergent behavior that results from interactions among billions of parameters.
The Alien Representation Black Box
Still other systems are black boxes because they develop internal representations that do not align with human conceptual frameworks. These systems may not be particularly fast or complex, but they organize information in ways that resist translation into human concepts.
Consider a machine learning system trained to predict chemical reactions without using traditional chemical concepts like atoms, bonds, or electron configurations. It might develop internal representations that capture reaction patterns effectively but do not correspond to the conceptual structures chemists use to understand reactions. Its competence emerges from representations that are alien to human domain knowledge.
This form of black box creates what we might call "functional transparency but semantic opacity." We can verify that the system works correctly, but we cannot translate its operations into humanly meaningful concepts. This resembles encountering a foreign language where we can observe its functional role in communication without understanding its meaning.
The Multi-modal Black Box
Finally, some systems are black boxes because they integrate information across modalities that humans process separately. These systems develop competence by finding patterns that cross boundaries between what we consider distinct forms of knowledge.
A system that simultaneously processes scientific literature, experimental data, and sensor readings might identify correlations invisible to humans because they cross the boundaries of textual, numerical, and observational data. Its competence emerges not from speed or complexity, but from integration across domains that human cognition treats as distinct.
This form of black box creates what we might call "domain transparency but integrative opacity." We understand each domain it processes, but not how it integrates information across domains. This resembles the challenge of interdisciplinary research, where expertise in individual disciplines does not automatically translate into understanding at the intersection of multiple disciplines.
Knowledge Without Theory
Perhaps the most profound manifestation of this decoupling appears in the emerging phenomenon of what we might call "the scientific black box"—AI systems that make scientific discoveries without developing scientific theories. This represents a fundamental challenge to how we have conceived of scientific knowledge since at least the time of Galileo.
Traditional science couples empirical observation with theoretical explanation. We observe planetary motions and explain them through gravitational theory; we observe biological diversity and explain it through evolutionary theory; we observe chemical reactions and explain them through atomic theory. The scientific method itself embodies the coupling of competence and comprehension—we seek not just to predict phenomena but to explain them through general principles.
Now we stand at the threshold of a different kind of science—one where AI systems identify patterns, make predictions, and generate interventions without producing humanly comprehensible theories. Consider some early examples:
An AI system examines vast databases of chemical compounds and their properties, then recommends novel compounds with specific desired characteristics. It makes no reference to underlying chemical principles; it simply identifies patterns in the data that indicate which molecular structures correlate with which properties. Its recommendations prove accurate, but it offers no theoretical framework explaining why these molecules behave as they do.
A neural network analyzes thousands of brain scans from patients with psychiatric conditions, then develops a diagnostic tool that predicts treatment responses with unprecedented accuracy. It identifies no causal mechanisms connecting brain structure to psychiatric symptoms; it simply recognizes patterns that human clinicians cannot perceive. Its predictions prove reliable, but it provides no theoretical framework relating brain function to mental health.
A machine learning system examines genomic data from thousands of species, then identifies previously unknown genes associated with specific traits. It references no principles of genetics or evolutionary biology; it simply detects statistical regularities in genomic sequences. Its predictions about gene function prove accurate, but it offers no theoretical framework explaining how these genes operate.
These examples remain relatively simple, but they point toward a profound transformation in scientific practice. As AI systems gain access to more data from more sources—from scientific literature to experimental results to real-time sensor networks—they may increasingly identify patterns and relationships that human scientists cannot perceive or comprehend. They may recommend experiments that yield successful results for reasons humans cannot fully articulate. They may develop interventions that work without explaining why they work.
This scientific black box represents the purest expression of the decoupling between competence and comprehension. It produces knowledge that works without knowledge that explains. It can tell us what will happen and how to achieve desired outcomes without telling us why reality behaves as it does.
This transformation challenges our most fundamental conceptions of scientific knowledge. Since the scientific revolution, we have equated knowing with explaining—with identifying causal mechanisms and general principles that account for specific observations. A science of pattern recognition without theory formation represents something qualitatively different from what we have historically meant by "science."
The Costs of Decoupling
Our second Faustian bargain offers unprecedented competence, but at what cost? What do we sacrifice when we surrender comprehension?
The Loss of Explanatory Satisfaction
Perhaps the most immediate cost is the loss of explanatory satisfaction—the intellectual and emotional fulfillment that comes from understanding. Humans appear to have an innate drive to explain, to make sense of their experiences through causal accounts and coherent narratives. When we replace comprehensible explanations with statistical patterns or complex emergent behaviors, we sacrifice this satisfaction.
Consider a physician using an AI diagnostic system that accurately identifies rare diseases but cannot explain its reasoning. The physician gains competence—improved diagnostic accuracy—but loses the explanatory satisfaction of understanding why the diagnosis fits the symptoms. The black box delivers answers without the intellectual fulfillment of connecting cause and effect.
This loss extends beyond professional contexts to our everyday relationship with technology. When recommendation algorithms shape our information environment, when navigation systems guide our movements, when smart homes adjust to our patterns, we gain convenience but lose the satisfaction of understanding how our technological environment works. We become users rather than knowers.
The Loss of Transferability
A second and perhaps more practical cost is the loss of transferability—the ability to apply knowledge from one context to another. Traditional understanding, based on causal mechanisms and general principles, transfers readily across contexts. Understanding gravity helps us design both bridges and spacecraft; understanding evolution informs both medicine and agriculture; understanding atomic theory guides both materials science and energy production.
Black box competence, by contrast, often remains tightly bound to its training context. A neural network trained to diagnose pneumonia from chest X-rays may fail entirely when examining X-rays from different hospitals with different equipment. Its competence does not transfer because it lacks the underlying comprehension that would allow it to adapt to new contexts.
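A toy demonstration of this brittleness, using synthetic data in place of real X-rays: a model fit on one distribution is evaluated on a shifted copy of the same task. The two "hospitals" and the size of the shift are illustrative assumptions; on the shifted data, accuracy typically collapses.

```python
# Illustrative only: synthetic data in place of X-rays; the "hospitals" are a
# split of the same task, with hospital B's inputs uniformly shifted.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_a, y_a = X[:2000], y[:2000]                 # "hospital A": training distribution
X_b, y_b = X[2000:] + 3.0, y[2000:]           # "hospital B": same labels, shifted inputs

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("accuracy on hospital A-like data:", round(model.score(X[2000:], y[2000:]), 2))
print("accuracy on hospital B data:     ", round(model.score(X_b, y_b), 2))
```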
This non-transferability creates brittleness in our technological systems and dependencies in our knowledge practices. We become reliant on black boxes whose competence cannot be generalized without collecting new data and retraining—a process that requires resources not available to all users and creates new forms of knowledge inequality.
The Loss of Critical Evaluation
A third cost is the loss of critical evaluation—the ability to assess whether knowledge is reliable and applicable in a given context. Traditional understanding enables critical assessment through transparency of reasoning. We can examine the evidence, evaluate the logic, and consider alternative explanations.
Black box competence resists such evaluation. We cannot directly assess the reasoning of a system whose operations exceed our capacity for comprehension. We must rely instead on indirect measures of performance—how well the system has performed on test cases, how closely its outputs match expected results, how consistent its recommendations prove over time.
This indirect evaluation creates vulnerability to unknown failure modes and hidden biases. A black box may perform excellently on test cases but fail catastrophically in edge cases that weren't represented in its training or testing. It may reproduce or amplify biases present in its training data without making those biases visible for critical examination.
The Loss of Creative Recombination
A fourth cost is the loss of creative recombination—the ability to combine knowledge across domains in novel ways. Traditional understanding, built from explicit concepts and causal models, can be deliberately recombined to generate new insights. Darwin combined Malthusian population theory with observations of variation to develop evolutionary theory; Einstein combined the principle of relativity with insights from electromagnetism to develop special relativity.
Black box competence resists such deliberate recombination. We cannot easily extract specific components of its operation for recombination with other knowledge. Its competence remains holistic rather than modular, integrated rather than decomposable.
This holism limits certain forms of human creativity—the creativity that comes from conscious recombination of explicitly understood components. We gain systems that may exhibit their own forms of creativity through pattern recognition across vast datasets, but we lose the ability to deliberately guide creative recombination based on conceptual understanding.
The Loss of Democratic Accessibility
Perhaps the most socially significant cost is the loss of democratic accessibility—the ability for knowledge to be widely shared, evaluated, and applied across society. Traditional understanding, expressed through language and explicit reasoning, can be communicated to anyone with the necessary background knowledge. Scientific papers, textbooks, instructional videos—these allow for the democratic distribution of both competence and comprehension.
Black box competence, by contrast, remains accessible primarily to those who control the computational resources, data, and technical expertise required to develop and deploy such systems. A neural network that diagnoses diseases more accurately than human physicians doesn't democratize medical knowledge; it concentrates diagnostic competence in the organizations that control the system.
This concentration creates new knowledge hierarchies and dependencies. Those without access to the necessary computational resources become dependent on those who control black box systems, without the capacity to fully evaluate or modify the knowledge these systems embody. The decoupling of competence from comprehension thus threatens to create new forms of epistemic inequality and power imbalance.
Negotiating the Bargain: Strategies for Living with Black Boxes
How should we respond to this second Faustian bargain? Several approaches offer partial remedies to the costs of decoupling without sacrificing the benefits of black box competence.
Interpretable AI and Explanation Generation
The most direct response involves developing systems that combine black box competence with humanly comprehensible explanations. Several approaches fall under this category:
Interpretable AI designs systems with built-in transparency, using architectures and algorithms that produce understandable rationales alongside their outputs. Decision trees, rule lists, and other inherently interpretable models sacrifice some performance for explainability, offering a middle path between black box competence and human comprehension.
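As a concrete illustration of this first approach, the sketch below fits a deliberately shallow decision tree and prints it as explicit if/then rules. The dataset and the depth limit are arbitrary illustrative choices; the point is that the fitted model is itself a human-readable artifact.

```python
# Illustrative: a shallow tree on a standard dataset; the depth limit is arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)   # kept small on purpose
tree.fit(data.data, data.target)

# The fitted model prints as explicit, human-readable if/then rules.
print(export_text(tree, feature_names=list(data.feature_names)))
print("training accuracy:", round(tree.score(data.data, data.target), 2))
```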
Explanation generation creates separate systems that explain the behavior of black boxes in human terms, translating complex patterns into simpler approximations that humans can understand. These explanations may not capture the full complexity of the black box's operation, but they provide mental models that help users understand when to trust the system and when to exercise caution.
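The second approach can be sketched just as briefly. Below, a synthetic black box is approximated near a single input by a small linear surrogate, in the spirit of tools like LIME; the black box, data, and neighbourhood size are all stand-ins. The surrogate's weights serve as a rough, local account of which features drive the opaque model's output.

```python
# Illustrative: a synthetic black box explained locally by a linear surrogate.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

black_box = GradientBoostingRegressor().fit(X, y)    # competent but opaque

x0 = X[0]                                            # the prediction we want explained
neighbours = x0 + rng.normal(scale=0.3, size=(500, 5))
surrogate = Ridge().fit(neighbours, black_box.predict(neighbours))

# The surrogate's weights approximate which features drive the black box near x0.
print({f"x{i}": round(w, 3) for i, w in enumerate(surrogate.coef_)})
```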
Visualization techniques render high-dimensional data and complex relationships in forms that human visual cognition can process, creating intuitive understanding of patterns that would otherwise remain imperceptible. These techniques don't explain mechanisms but make patterns accessible to human perception.
None of these approaches fully resolves the tension between competence and comprehension. They offer approximations, simplifications, or translations rather than complete understanding. But they provide enough comprehension to address some of the costs of pure black box knowledge.
Human-AI Complementarity
A second response involves designing for complementarity between human comprehension and AI competence. Rather than seeking systems that work entirely independently, this approach creates partnerships that leverage the strengths of both human and artificial cognition.
Active learning systems engage humans in the learning process, identifying cases where human input would most improve performance and soliciting targeted feedback. This creates a dialogue between human comprehension and machine competence, with each enhancing the other's capabilities.
Confidence scoring provides measures of certainty alongside outputs, indicating when the system is operating in domains where its competence is well-established versus domains where human judgment should take precedence. This allows for appropriate reliance on black box systems without surrendering human judgment in contexts where it remains superior.
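A minimal sketch of confidence-based deferral, under the assumption that the model's predicted probabilities are roughly calibrated and with an arbitrary threshold of 0.9: high-confidence cases are automated, the rest are routed to human judgment.

```python
# Illustrative: synthetic task, arbitrary 0.9 threshold, and an assumption
# that predicted probabilities are roughly calibrated.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
confidence = clf.predict_proba(X_test).max(axis=1)   # top-class probability per case

THRESHOLD = 0.9
automated = confidence >= THRESHOLD                  # machine acts on these
deferred = ~automated                                # humans decide the rest
print(f"automated: {automated.mean():.0%}, deferred to humans: {deferred.mean():.0%}")
```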
Decision boundaries define the scope within which black box systems operate autonomously, reserving certain classes of decisions for human judgment informed by human comprehension. This creates a division of cognitive labor that preserves human autonomy in domains where comprehension matters most.
These complementary approaches acknowledge that neither pure human comprehension nor pure machine competence represents the optimal approach to all knowledge challenges. They seek instead to create hybrid cognitive systems that combine the strengths of both.
Outcome Evaluation and Empirical Validation
A third response emphasizes rigorous evaluation of outcomes rather than comprehension of mechanisms. Without understanding how a black box works internally, we can still develop robust methods for assessing whether its outputs meet our requirements.
Adversarial testing deliberately probes for failure modes by generating challenging cases designed to reveal limitations or biases. This approach acknowledges that we cannot directly evaluate internal processes but can systematically explore boundary conditions to map competence limits.
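In miniature, and with random noise standing in for deliberately crafted adversarial examples, such probing might look like the following: perturb held-out inputs at increasing scales and watch where performance breaks down.

```python
# Illustrative: random perturbations stand in for crafted adversarial examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
for scale in [0.0, 0.5, 1.0, 2.0]:                   # increasing perturbation strength
    noisy = X_test + rng.normal(scale=scale, size=X_test.shape)
    print(f"perturbation {scale}: accuracy {model.score(noisy, y_test):.2f}")
```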
Ongoing monitoring tracks performance across diverse contexts, identifying patterns of success and failure that may reveal underlying limitations without requiring direct comprehension of mechanisms. This creates empirical maps of reliability that guide appropriate use without requiring theoretical understanding.
Benchmarking against human experts in controlled settings provides calibrated measures of comparative performance, highlighting domains where black box competence exceeds human capabilities and domains where it falls short. This enables appropriate reliance without requiring comprehension of how the competence arises.
These approaches substitute empirical validation for theoretical understanding—we come to trust systems not because we comprehend their operation but because we have thoroughly tested their performance across relevant contexts.
Knowledge Hybrids and Complementary Epistemologies
A fourth response involves creating knowledge hybrids that combine black box competence with traditional theoretical understanding in complementary ways. Rather than choosing between comprehension and competence, we develop epistemological frameworks that integrate both.
Theory-guided machine learning incorporates theoretical knowledge as constraints or priors in otherwise data-driven systems, ensuring that black box competence respects known causal mechanisms even as it identifies patterns beyond current theory. This creates systems that extend rather than replace theoretical understanding.
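A toy version of this idea: fit a curve to noisy observations while penalizing any prediction that violates a known theoretical constraint, here the assumption that the modelled quantity can never be negative. The polynomial model, the penalty weight, and the crude random-search optimiser are all illustrative simplifications.

```python
# Illustrative: toy data, a polynomial model, and a non-negativity constraint
# playing the role of "known theory"; the optimiser is deliberately crude.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(3 * x) + rng.normal(scale=0.05, size=x.size)

def loss(coeffs):
    pred = np.polyval(coeffs, x)
    data_term = np.mean((pred - y) ** 2)              # fit the observations
    theory_term = np.mean(np.minimum(pred, 0) ** 2)   # penalise negative predictions
    return data_term + 10.0 * theory_term

best, best_loss = np.zeros(6), loss(np.zeros(6))
for _ in range(5000):                                 # random local search
    candidate = best + rng.normal(scale=0.05, size=6)
    candidate_loss = loss(candidate)
    if candidate_loss < best_loss:
        best, best_loss = candidate, candidate_loss
print("final combined loss:", round(best_loss, 4))
```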
Pattern-to-theory pipelines use black box pattern recognition to identify promising areas for theoretical investigation, guiding human researchers toward phenomena that merit deeper explanatory attention. The black box becomes a tool for theory generation rather than a replacement for theory.
Hybrid scientific teams combine traditional scientists focused on explanatory models with data scientists focused on predictive accuracy, creating research programs that pursue both competence and comprehension simultaneously. This preserves the value of theoretical understanding while leveraging the power of black box approaches.
These hybrid approaches acknowledge that competence and comprehension, while increasingly decoupled, can still inform and enhance each other through deliberate integration. They represent perhaps our best hope for preserving the explanatory satisfaction and creative potential of traditional knowledge while embracing the unprecedented competence of black box systems.
Knowledge After Comprehension
Our second Faustian bargain—trading comprehension for unprecedented competence—represents one of the most profound transformations in the history of human knowledge. For millennia, knowing how and understanding why have been intimately coupled. Now they increasingly diverge, creating forms of knowledge that work without explaining and forms of explanation that lack the competence of black box systems.
This transformation challenges our most fundamental conceptions of knowledge itself. Is something truly "knowledge" if it enables action without understanding? Is understanding still the proper aim of inquiry, or should we prioritize effective action regardless of comprehension? These questions have no simple answers, but they will shape the future of human cognition and its relationship to artificial intelligence.
What seems clear is that we cannot simply reject the bargain. The competence offered by black box systems—their ability to recognize patterns invisible to humans, to process information at scales beyond human capacity, to identify connections across domains that human specialization keeps separate—represents too significant an advancement to abandon for the sake of comprehension alone.
Nor can we simply embrace the bargain without reservation. The costs of surrendering comprehension—the loss of explanatory satisfaction, transferability, critical evaluation, creative recombination, and democratic accessibility—are too high to accept uncritically in exchange for competence alone.
Instead, we must negotiate the terms of this Faustian bargain, developing approaches that preserve the most essential aspects of human comprehension while embracing the unprecedented competence of black box systems. We must create knowledge hybrids, complementary epistemologies, and human-AI partnerships that transcend the dichotomy between comprehension and competence.
The question is not whether competence will decouple from comprehension—that process is already well underway and driven by fundamental technological and cognitive developments. The question is whether we can shape this decoupling wisely, developing new epistemological frameworks and institutional arrangements that harness black box competence while preserving what matters most about human understanding.
In the original Faust legend, the protagonist ultimately finds redemption through a recognition that transcends his bargain. Perhaps our own redemption lies in developing forms of knowledge that transcend the dichotomy between comprehension and competence—forms that embrace the power of black box systems while preserving the distinctively human quest not just to predict the world, but to make sense of it.
5. Building Magic
"Any sufficiently advanced technology is indistinguishable from magic." Arthur C. Clarke's famous quote captures something profound about the relationship between technology and human experience. As a technology's complexity and capability exceed our immediate comprehension, our interaction with it takes on an increasingly mystical quality. We know the incantation that summons the effect, but not the complex causal chain that connects our command to its fulfillment.
What Clarke could not have anticipated is how literal this observation would become. We now stand at the threshold of a world where intelligence and agency will become ambient and omnipresent—where we will summon capabilities through verbal, gestural, or even mental invocations that would have seemed supernatural to our ancestors and that appear increasingly magical even to us who create them.
This section explores how information technology is evolving from something we visibly operate to something we mysteriously invoke—from devices we use to daemons we summon. This transition represents not merely a change in interface design but a fundamental transformation in our relationship with computation, one that resurrects pre-modern modes of engaging with the world through ritual, invocation, and enchantment.
From Operation to Invocation
The history of human-computer interaction can be understood as a progressive dematerialization of the interface between human intention and computational effect. This evolution recapitulates, in accelerated form, humanity's broader relationship with technology.
Early computing required direct physical engagement with the machine—flipping switches, connecting cables, punching cards. The relationship between human action and machine response remained visible and tactile. Operating a computer was akin to operating any other mechanical device; cause and effect maintained their tangible connection.
Command-line interfaces introduced a level of abstraction—physical action translated into symbolic commands that triggered processes invisible to the user. Yet even here, the user remained conscious of engaging with a specific machine in a specific location through deliberate, structured commands.
Graphical user interfaces further abstracted interaction, replacing text commands with visual metaphors. The desktop, folder, and window created a spatial illusion that obscured the underlying computational processes. Still, users remained aware they were operating a distinct device sitting before them.
Mobile and cloud computing began to dissolve this locational specificity. Computation retreated from a particular device to an ambient capability accessible from anywhere. The smartphone became less a computer than a portal to a distributed computational environment.
Voice interfaces like Siri, Alexa, and Google Assistant accelerated this dematerialization. No longer did users interact with visible interfaces; they simply spoke their desires into the air. The computational system receded from view entirely, becoming an invisible presence awaiting invocation.
Now we stand at the threshold of the next stage in this evolution: ambient intelligence. In this paradigm, computational agency saturates the environment itself. Sensors, processors, and actuators become embedded in physical space—in walls, furniture, vehicles, clothing, and public infrastructure. Intelligence becomes an attribute of place rather than device. Agency becomes summoned rather than operated.
This transition from operation to invocation fundamentally alters our relationship with technology. We no longer engage with distinct computational objects but rather cast our intentions into an intelligent environment and await their fulfillment. We speak not to devices but to spaces. We direct not interfaces but outcomes. We invoke rather than operate.
The Daemons of Place
In this world of ambient intelligence, we might speak of "daemons of place"—localized manifestations of distributed computational agency that respond to human invocation. These are not the demons of religious tradition, but they share with those entities the quality of being invisible presences that can be summoned to fulfill desires.
A daemon of the home might maintain temperature, security, ambiance, and resource usage according to learned preferences and explicit invocations. "Make it cozy" becomes a meaningful command because the daemon has learned the complex set of environmental adjustments that create this subjective state for a particular human.
A daemon of the workplace might orchestrate information flows, schedule interactions, assemble relevant resources, and modulate the environment to support cognitive tasks. "Help me prepare for tomorrow's presentation" triggers not just the assembly of files but subtle environmental changes that support focus and creativity.
A daemon of the city might guide movement, surface contextual information, facilitate transactions, and connect citizens with services. "I need medical attention" routes not just to the nearest facility but to the most appropriate one given personal health data, traffic conditions, and facility capabilities.
These daemons of place represent something qualitatively different from current digital assistants. They are not distinct entities with clear boundaries but localized expressions of a distributed intelligence that permeates physical space. They do not reside in particular devices but emerge from the computational fabric of the environment. They maintain continuity across devices, adapting their expression to available sensors and actuators while preserving their understanding of human needs and preferences.
Perhaps most importantly, these daemons of place blend the digital and physical in ways that current systems cannot. They do not merely provide information or execute digital operations; they reconfigure the material environment itself. They dim lights, adjust temperatures, rearrange furniture via robotic systems, coordinate delivery of physical goods, and orchestrate the movement of people and objects through space.
This ability to affect the physical world completes the magical transformation of computing. When digital operations remained confined to screens, the metaphor of magic remained partial. But when an invocation rearranges physical reality—when "make this room suitable for a dinner party" results in furniture rearranging itself, lighting and temperature adjusting, music beginning to play, and food being prepared—the experience becomes genuinely thaumaturgical.
Spells, Rituals, and the New Literacy
As technology evolves toward this mode of invocation, our engagement with it increasingly resembles pre-modern magical practices. This resemblance is neither superficial nor coincidental; it reflects deep patterns in how humans relate to forces that operate beyond our immediate comprehension.
Consider the modern practice of "prompting" large language models and other AI systems. Users discover that certain phrasings, structures, and sequences produce significantly better results than others. They share these "prompt patterns" in communities of practice, refining and extending them through collective experimentation. They develop specialized languages for particular types of invocation, sometimes structuring prompts with formal markers, delimiters, or ritualized openings and closings.
This practice bears a striking resemblance to the development and transmission of spells in magical traditions. Like spells, effective prompts often contain (a minimal template is sketched after this list):
Precise verbal formulations that must be expressed in specific ways
Contextual elements that establish the boundaries of the invocation
Invocations of authority or capability to strengthen the effect
Specific terminations that close or activate the spell
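A hypothetical template makes the parallel visible. The role statement, delimiters, and termination marker below are example conventions, not a prescribed or guaranteed-effective formula.

```python
# Hypothetical prompt template; the role statement, delimiters, and termination
# marker are example conventions rather than a prescribed formula.
def build_prompt(document: str, question: str) -> str:
    return (
        "You are a careful research assistant.\n"      # invocation of capability
        "### CONTEXT START\n"
        f"{document}\n"
        "### CONTEXT END\n"                            # contextual boundaries
        "Task: answer the question using only the context above.\n"
        f"Question: {question}\n"                      # precise formulation
        "Answer in at most three sentences.\n"
        "### END OF INSTRUCTIONS"                      # explicit termination
    )

print(build_prompt("The melting point of gallium is 29.76 C.",
                   "At what temperature does gallium melt?"))
```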
Similarly, the training of AI systems increasingly resembles magical ritual. Consider the process:
Gathering of specialized materials (data) according to specific criteria
Preparation of these materials through cleansing rituals (data cleaning)
Construction of a sacred space (computational architecture)
Performance of carefully sequenced operations (algorithm design)
Invocation of external powers (computing resources)
Binding the resulting entity to particular purposes (model alignment)
This training process, while scientifically grounded, shares with magical ritual the quality of being a highly structured procedure designed to summon and bind a power that operates according to principles not fully comprehensible to the practitioner. The AI researcher, like the medieval alchemist, follows procedures that have proven effective without necessarily understanding all the causal mechanisms involved.
As ambient intelligence evolves, this resemblance between technological practice and magical practice will likely intensify. Users will develop increasingly sophisticated "spell books" of effective invocations. Different traditions or "schools of magic" may emerge, with distinct approaches to summoning and directing computational agency. Specialized practitioners may arise who develop particular expertise in communicating with and directing these ambient intelligences.
This evolution suggests a new form of literacy—not the literacy of reading and writing static texts, nor the digital literacy of operating computers, but an invocational literacy focused on effective communication with ambient intelligence. This literacy will combine elements of natural language, gestural communication, contextual awareness, and understanding of the capabilities and limitations of the ambient systems.
Like earlier forms of literacy, this invocational literacy will create new forms of power and new power differentials. Those who master the art of invocation will gain disproportionate ability to shape their environment and circumstances. Those without such mastery may find themselves increasingly disadvantaged, unable to effectively direct the intelligent systems that mediate more and more of social and economic life.
The Re-enchantment of the World
The sociologist Max Weber famously described modernity as characterized by "the disenchantment of the world"—the replacement of magical thinking with rational calculation, of mystical explanation with scientific understanding. Technology, as the practical application of scientific rationality, stood as perhaps the primary agent of this disenchantment.
Yet in a remarkable dialectical twist, advanced technology now drives a re-enchantment of the world. As computational intelligence becomes ambient and invocational, it reintroduces elements of the magical worldview that modernity supposedly banished:
The world becomes responsive to verbal and gestural invocation
Inanimate objects exhibit agency and apparent intelligence
Invisible entities respond to human desires and intentions
Specialized languages and rituals summon specific effects
Mastery of these practices confers social and economic power
This technological re-enchantment differs from pre-modern enchantment in crucial ways. It emerges not from superstition but from engineering; it operates not through supernatural forces but through complex computational systems. Yet from the perspective of human experience, the distinction increasingly blurs. When the mechanism connecting invocation to effect grows sufficiently complex to exceed comprehension, the subjective experience approaches that of magical causation.
This re-enchantment carries both promise and peril. On one hand, it offers a more fluid, responsive relationship with our technological environment—one that requires less cognitive overhead and allows more intuitive expression of intention. On the other hand, it risks creating new forms of mystification, obscuring the material and political realities of technological systems behind a veil of apparent magic.
The challenge becomes maintaining critical awareness of these systems while engaging with them through increasingly magical interfaces. Can we summon the daemon while remembering it consists of sensors, processors, algorithms, and actuators controlled by specific entities with specific interests? Can we speak the spell while recognizing the computational processes it triggers and the human decisions that shaped them?
Beyond the User Interface: The Invocational Environment
As computation evolves from something we use to something we invoke, traditional concepts of user interface design become increasingly inadequate. We need new conceptual frameworks for designing environments that respond to invocation—frameworks that blend elements of architecture, theater, ritual, and interface design.
Several principles might guide the development of these invocational environments:
First, legibility without visibility. Users must understand what capabilities are available for invocation without those capabilities manifesting as visible interfaces that distract from the physical environment. Environmental cues, cultural conventions, and learned patterns of interaction must communicate what can be summoned without requiring explicit menus or controls.
Second, graceful degradation of agency. Invocational systems will inevitably encounter limits to their capabilities and understanding. They must communicate these boundaries without breaking the invocational model. Rather than displaying error messages, they might gradually transition from autonomous fulfillment to guided collaboration to explicit requests for clarification.
Third, calibrated enchantment. The system must maintain a delicate balance between magical seamlessness and transparent operation. Too much visibility of mechanism breaks the invocational experience; too little creates confusion, mystification, and potential for manipulation. The environment should reveal its workings enough to maintain informed user agency without requiring constant attention to implementation details.
Fourth, negotiated sovereignty. Invocational environments must navigate complex questions of authority, priority, and control. Whose invocations take precedence in shared spaces? How are conflicts between different users' desires resolved? What limits exist on what can be invoked, and who sets those limits? These questions of governance become embedded in the design of the invocational environment itself.
Fifth, contextual continuity. Unlike traditional interfaces tethered to specific devices, invocational environments must maintain continuity of interaction across diverse contexts. A conversation with an ambient intelligence might begin at home, continue in transit, and conclude at work, adapting to the constraints and capabilities of each environment while maintaining its understanding of the user's intentions.
These design principles point toward a fundamentally different relationship between humans and computation—one where the environment itself becomes the interface, and where interaction happens through presence and intention rather than explicit manipulation.
The Politics of Invocation
As with any transformation in how humans relate to technology, the shift from operation to invocation carries significant political implications. These extend beyond questions of access and literacy to fundamental issues of sovereignty, autonomy, and power.
First, invocational systems raise profound questions about surveillance and privacy. Ambient intelligence requires ambient sensing; the environment must perceive to respond. This sensing creates unprecedented opportunities for surveillance—not just of explicit communications but of mood, movement, biomarkers, social interactions, and patterns of daily life. The daemon can only respond to our needs if it knows us intimately, but this intimate knowledge creates profound vulnerabilities.
Second, these systems transform the politics of infrastructure. Traditional infrastructure—roads, power grids, water systems—maintains a clear distinction between provider and user. Invocational environments blur this boundary, with users co-creating the environment through their invocations and preferences. This creates new questions about ownership, responsibility, and governance. Who owns the ambient intelligence of a public space? Who controls its capabilities and limitations? Who bears responsibility when invocation produces harm?
Third, ambient intelligence reshapes the relationship between built environment and social outcome. Architecture has always influenced social interaction, but ambient intelligence makes this influence dynamic and responsive rather than static. The daemon of a space might reconfigure that space to encourage particular forms of interaction, emotional states, or behaviors. This power to shape social reality through environmental manipulation raises questions about consent, manipulation, and the ethics of spatial governance.
Fourth, invocational systems challenge traditional notions of agency and responsibility. When an outcome emerges from the complex interaction between human invocation and ambient intelligence, questions of causation become increasingly complex. Who bears responsibility for unexpected or harmful results—the invoker, the designer of the invocational system, the entity that deployed it, or the ambient intelligence itself? Our legal, ethical, and social frameworks presume clearer lines of causation than invocational systems may provide.
Fifth, the distribution of invocational power will likely reproduce and potentially amplify existing social inequalities. Those with resources will access more sophisticated, capable daemons; those without may find themselves limited to basic invocations or excluded entirely from certain invocational environments. The magical quality of these systems may obscure these power differentials behind a veil of enchantment, making systemic inequalities appear as differences in individual magical aptitude.
Lessons from Fictional Magic Systems
As we design systems of technological invocation, we might draw wisdom from an unlikely source: fantasy worldbuilding. Authors and game designers who create fictional magic systems have long grappled with questions that now confront the architects of ambient intelligence.
The most compelling fictional magic systems share a common principle: magic must have constraints, costs, and limitations to create meaning and avoid narrative collapse. As fantasy author Brandon Sanderson articulates in his "Laws of Magic," a magical system without clear limitations quickly undermines dramatic tension and logical coherence. When anything becomes possible without cost or constraint, both story and significance dissolve.
Several principles from fictional magic system design offer relevant insights for technological invocation:
First, meaningful costs. In well-designed fictional worlds, magic requires payment—energy, preparation, rare materials, personal sacrifice, or risk. These costs create limits that give magical acts significance and prevent unlimited power. Similarly, invocational systems will need visible costs and constraints that help users understand the relationship between their invocations and system resources. When summoning computational agency appears entirely effortless and unlimited, users lose the ability to make informed decisions about appropriate use.
Second, coherent limitations. Fictional magic systems typically cannot do everything; they have specific domains of efficacy and clear boundaries. These limitations make the systems comprehensible and strategically interesting. Invocational technologies will likewise need clear, comprehensible boundaries that help users understand what can and cannot be meaningfully invoked in different contexts. A world where any invocation seems possible creates confusion rather than empowerment. A brief sketch after this list suggests one way such costs and boundaries might be made visible in software.
Third, balanced distribution of power. The most thoughtful fantasy worlds consider how magical capability is distributed across society and what this means for social and political structures. Is magic an aristocratic privilege, a natural talent, a learned skill, or a purchased commodity? Each distribution model creates different power dynamics. Similarly, we must deliberately design how invocational capability is distributed. Will the most powerful daemons be private luxuries, public utilities, communal resources, or personally developed capabilities? These design choices have profound implications for equality and democracy.
Fourth, ethical frameworks. Sophisticated fictional magic systems often incorporate ethical dimensions—forbidden practices, moral costs of certain powers, or consequences of overuse. These ethical boundaries provide meaning and complexity. Invocational systems similarly require embedded ethical frameworks that guide appropriate use, prevent harmful applications, and make the moral dimensions of invocation visible rather than obscured.
Finally, cultural integration. The most convincing fictional magic systems feel embedded in their worlds' cultures, with traditions, training systems, governance structures, and cultural practices that evolved alongside magical capabilities. As we develop invocational technologies, we must simultaneously develop cultural practices, educational approaches, governance frameworks, and social norms that give these technologies meaning within human society.
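To make the borrowing from fantasy concrete, here is a second toy sketch, again in Python and again entirely hypothetical (Grimoire, Spell, cast, and the notion of a finite "mana" budget are illustrative inventions, not any real system). It gives each invocation a meaningful cost drawn from a finite budget, a coherent domain outside which it simply does nothing, and a small embedded list of forbidden invocations standing in for an ethical framework.

```python
from dataclasses import dataclass


class ForbiddenInvocation(Exception):
    """Raised when an invocation violates the embedded ethical framework."""


@dataclass
class Spell:
    name: str
    domain: str   # coherent limitation: the spell only works within this domain
    cost: int     # meaningful cost: paid from a finite budget


class Grimoire:
    """A registry of permitted invocations with visible costs and limits."""

    FORBIDDEN = {"surveil_housemate", "impersonate_person"}  # embedded ethical boundary

    def __init__(self, mana: int):
        self.mana = mana              # finite resource: each invocation is a real choice
        self.spells: dict[str, Spell] = {}

    def learn(self, spell: Spell) -> None:
        if spell.name in self.FORBIDDEN:
            raise ForbiddenInvocation(f"'{spell.name}' is not permitted in this environment")
        self.spells[spell.name] = spell

    def cast(self, name: str, context_domain: str) -> str:
        spell = self.spells[name]
        if spell.domain != context_domain:
            return f"'{name}' has no effect here; it works only in the {spell.domain} domain"
        if spell.cost > self.mana:
            return f"not enough mana to cast '{name}' (needs {spell.cost}, have {self.mana})"
        self.mana -= spell.cost
        return f"cast '{name}'; {self.mana} mana remaining"


if __name__ == "__main__":
    g = Grimoire(mana=10)
    g.learn(Spell("summarise_meeting", domain="work", cost=3))
    g.learn(Spell("dim_lights", domain="home", cost=1))

    print(g.cast("summarise_meeting", context_domain="work"))   # succeeds, cost is visible
    print(g.cast("dim_lights", context_domain="work"))          # blocked: outside its domain
```

Even this crude version shows why constraints matter: once costs, domains, and prohibitions are explicit, users can reason about their invocations rather than simply marvel at them.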
The parallel between fictional magic systems and technological invocation is more than metaphorical. Both involve designing systems of power that mediate between human intention and worldly effect. Both require balancing user empowerment with systemic coherence. Both must navigate questions of distribution, limitation, ethics, and culture.
Perhaps most importantly, both fictional magic systems and invocational technologies reflect deep human desires for agency, understanding, and connection with the world. The most resonant fictional magic often speaks to fundamental human experiences—our desire to heal or harm, to know or hide, to connect or separate, to create or destroy. Similarly, invocational technologies will reflect our deepest desires and fears about our relationship with the world.
By examining how fictional worlds construct meaningful, coherent systems of magical action, we may gain insights into how to build technological invocation systems that enhance rather than diminish human meaning and agency.
Navigating these political dimensions of invocation requires moving beyond purely technical or design-oriented approaches. It demands developing new governance frameworks, ethical principles, and social norms appropriate to a world where intelligence and agency become environmental properties summoned through invocation.
Living in the Magical World
We have built computation that increasingly resembles magic, and now we must learn to live in the magical world we have created. This means developing new literacies, new ethical frameworks, new design principles, and new governance models appropriate to a world of ambient intelligence and invocational interaction.
The magical transformation of computing represents neither utopia nor dystopia but a fundamental shift in how humans relate to their technological environment. Like all such shifts, it carries both promise and peril. The promise lies in creating environments more responsive to human needs and intentions, requiring less cognitive overhead to operate and allowing more intuitive expression of human purpose. The peril lies in creating systems that mystify rather than clarify, that concentrate rather than distribute power, and that diminish rather than enhance human agency and understanding.
The challenge ahead involves embracing the magical quality of ambient intelligence while maintaining critical awareness of its material reality. We must become wise practitioners of technological magic—neither credulous sorcerer's apprentices overwhelmed by forces we cannot control, nor disenchanted rationalists refusing to speak the spell, but something new: citizens of an enchanted world we have built and continue to shape.
For the first time since the scientific revolution, we confront a world responsive to incantation and ritual, populated by entities that respond to invocation, and governed in part by principles that exceed immediate rational comprehension. We have not abandoned science for superstition; rather, our science has created systems that, from the perspective of human experience, increasingly resemble the magical world our ancestors inhabited.
Arthur C. Clarke's observation about advanced technology and magic contains a profound irony: in making technology sufficiently advanced to be indistinguishable from magic, we have not dispelled enchantment but recreated it in technological form.
We have built magic.
This wonderful term is not mine — see https://theconversation.com/digital-necromancy-why-bringing-people-back-from-the-dead-with-ai-is-just-an-extension-of-our-grieving-practices-213396