Unpredictable Patterns #123: Can machines wonder?
What we can learn from Aristotle about the nature, agency and limits of artificial intelligence
Dear readers,
This week I have the great fortune to have a co-author! This piece was co-authored with Yannis Mastrogeorgiou, a scholar and a gentleman currently serving as special secretary of foresight for the Greek government. We met a while back in Greece and found that we share a deep interest in philosophy, AI and foresight generally, and decided to work on this piece about the undervalued concept of wonder together. Everything in here is only personal opinions, and not the opinions of any employers, governments or other organizations that the authors are connected to or work with, of course. You know the drill. Enjoy!
Can AI wonder?
Can machines wonder?[1] This question may prove more difficult, and important, than whether machines can think. While thinking involves processing information, drawing conclusions, and solving problems, wonder speaks to something more complex—a spontaneous intellectual and emotional response to the mystery of existence. The ability that we all have to stop, pause and just take in the weirdness of it all.
As artificial intelligence systems grow increasingly sophisticated, exhibiting remarkable capabilities in mimicking human thought processes, we find ourselves constantly worrying about how they compare to us. These systems can generate poetry, engage in conversation[2], and even produce philosophical reflections. Even so we seem to have an edge—that spark of genuine curiosity, that authentic perplexity in the face of the unknown, that self-originating impulse to understand not because understanding is useful, but because the mystery itself demands resolution, because we need to understand.
This feeling - so well captured by the epitaph on mathematician David Hilbert’s tombstone “we must know - we will know” - is essential to what it means to be human, and it is what powers our human agency.
Aristotle knew this and famously declared in his Metaphysics that "it is through wonder (thaumazein)[3] that men now begin and originally began to philosophize." For Aristotle, philosophy—the love of wisdom—begins with puzzlement. It begins when we confront something that challenges our understanding and recognize our own ignorance. The Greek term thaumazein encompasses multiple dimensions: astonishment at the marvelous, perplexity in the face of the inexplicable, and the emotional-intellectual response that drives us toward inquiry. A need.
Aristotle continues, noting:
…they wondered originally at the obvious difficulties, then advanced little by little and stated difficulties about the greater matters, e.g. about the phenomena of the moon and those of the sun and of the stars, and about the genesis of the universe. And a man who is puzzled and wonders thinks himself ignorant (whence even the lover of myth is in a sense a lover of Wisdom, for the myth is composed of wonders); therefore since they philosophized in order to escape from ignorance, evidently they were pursuing science in order to know, and not for any utilitarian end. And this is confirmed by the facts; for it was when almost all the necessities of life and the things that make for comfort and recreation had been secured, that such knowledge began to be sought. Evidently then we do not seek it for the sake of any other advantage; but as the man is free, we say, who exists for his own sake and not for another's, so we pursue this as the only free science, for it alone exists for its own sake.
This understanding of wonder as the starting point of philosophy echoes what Socrates articulates in Plato's Theaetetus. When the young Theaetetus describes his bewilderment at certain mathematical problems, Socrates responds with approval: his feeling of wonder (thaumazein) shows that he is a philosopher, for wonder is the only beginning of philosophy.
But Socrates goes further, describing wonder not merely as an intellectual state but as a kind of vertigo—a mixture of pleasure and pain. Wonder is an experience, a pathos—something we feel deeply. For Socrates, this experience of wonder—this dizzying recognition of our own ignorance—is more than the beginning of philosophy; it is its continual companion. The true philosopher remains in a state of wonder, never entirely free from that initial perplexity that set them on their path. This is why the early dialogues end in aporias, in the tension between multiple incompatible views.
Wonder is never dissolved - it evolves and guides us.
This year marks seventy-five years since Alan Turing proposed his famous test[4] for machine intelligence—a test focused on a machine's ability to exhibit behavior indistinguishable from that of a human. In those intervening decades, we have made remarkable progress toward machines that can indeed "think", at least by certain definitions of the term.
But perhaps what we are discovering is that thinking is merely the first test—a necessary but insufficient condition for the kind of intelligence we most value. The second and more profound test might be something like wondering. Can a machine experience genuine thaumazein—not simulated or programmed curiosity, but authentic wonder that emerges spontaneously from within?
This question leads us back to philosophy's origins and forward to its possible futures. If Aristotle and Socrates were right that philosophy begins in wonder, then a truly philosophical artificial intelligence would need to wonder in the full sense of thaumazein—to be moved by puzzlement, to recognize its own ignorance, and to be driven by an internal impulse toward understanding.
To need to know, as a fundamental drive, as the roots of agency.
But equipping a machine with wonder is not a trivial task, and it carries risk. It is one thing to marvel at new possibilities and quite another to remain accountable for their consequences. When discussing the growth of AI, we already ask: who is responsible when automated judgments cause harm or when data-driven predictions reinforce societal biases? While wonder propels our curiosity, moral responsibility tempers it, reminding us that not all pursuits of the unknown serve humanity’s best interests.
A machine that is driven by deep wonder, into an exploratory agency, needs to be built carefully.
Just as Aristotle insisted on the unity of observation (empiricism) and philosophical inquiry, we must couple the design of wonder with concrete ethical frameworks. In that sense, building the capacity for moral agency—both in human designers and in the emergent AI landscape—becomes crucial to ensuring that wonder does not devolve into reckless experimentation or the curiosity of a child pulling the wings off a fly.
We should also remember that wonder has a deep emancipatory dimension.
At Delphi, Apollo's oracle warned mortals to "know thyself" (gnothi seauton), which beneath its seeming wisdom concealed a boundary not to be crossed: know yourself as mortal, limited, separate from divine understanding. So, when Aristotle declares that philosophy begins in wonder (thaumazein), he is subtly shifting the balance from gods to man. In his defense of our wonder, Aristotle notes, deftly, that surely gods cannot be jealous:
Hence also the possession of it might be justly regarded as beyond human power; for in many ways human nature is in bondage, so that according to Simonides 'God alone can have this privilege', and it is unfitting that man should not be content to seek the knowledge that is suited to him. If, then, there is something in what the poets say, and jealousy is natural to the divine power, it would probably occur in this case above all, and all who excelled in this knowledge would be unfortunate. But the divine power cannot be jealous (nay, according to the proverb, 'bards tell a lie'), nor should any other science be thought more honourable than one of this sort. For the most divine science is also most honourable; and this science alone must be, in two ways, most divine. For the science which it would be most meet for God to have is a divine science, and so is any science that deals with divine objects; and this science alone has both these qualities; for (1) God is thought to be among the causes of all things and to be a first principle, and (2) such a science either God alone can have, or God above all others. All the sciences, indeed, are more necessary than this, but none is better.
This dimension of thaumazein recasts the birth of Western philosophy as an act of quiet maturity. Before the first philosophical question could be asked, a prohibition had to be broken. The Delphic Apollo had established a cosmic order where certain forms of knowing brought divine punishment—Prometheus chained, Icarus fallen, Oedipus blinded. Against this backdrop, Aristotle's seemingly innocent observation about human curiosity becomes probing, challenging. Philosophy starts in an insistent claim that nothing in existence, however divine, stands beyond the reach of human understanding.
We will know.
Wonder can be a dangerous and transformative thing.
It is not too much to say that we are nowhere close to understanding how to build machines that are filled with wonder. Despite remarkable advances in artificial intelligence—systems that can generate art, compose music, and engage in sophisticated dialogue—we have yet to create anything that genuinely wonders in the Aristotelian sense of thaumazein. Our most advanced AI systems can simulate curiosity, can be programmed to ask questions, but there is no evidence that they experience that fundamental perplexity, that need, that drives philosophical inquiry.
Indeed, how would we even know how to test for it?
And perhaps we shouldn't even try to build machines of wonder. The quest to create such machines might simply be misguided—not because it is impossible, but because it fundamentally misunderstands the relationship between humanity, technology, and wonder itself.
Maybe a different way to think about artificial intelligence is to say that it is an instrument of wonder[5], a tool for exploration of the universe and a way for us to get to know the universe better. Rather than attempting to replicate human wonder in artificial systems, perhaps AI's true potential lies in enhancing and extending human wonder—serving as a prosthesis for our own thaumazein.
As we trace the lineage of artificial intelligence back through the ages, maybe the ancestors of this technology are not mechanical dolls and calculating machines, but rather the microscope, the telescope, or the many other devices that we have built to see the world anew. These instruments did not themselves wonder at what they revealed, but they dramatically expanded the domain of human wonder by making visible what was previously invisible.
The connection between scientific instruments and wonder runs deeper than we might initially recognize. The telescope, when first pointed at the heavens by Galileo in 1609, did not merely provide new data about celestial bodies—it transformed humanity's relationship with the cosmos, inducing profound wonder that challenged existing cosmological frameworks.
What is less commonly known is that early telescopes were often displayed alongside curiosity cabinets—Wunderkammern or "wonder rooms"—those eclectic collections of natural and artificial marvels that preceded modern museums. The instrument and the objects it revealed were both considered sources of wonder, both part of the same project of expanding human understanding of the complex world we live in.
The microscope, similarly, did not just magnify the small but revealed an entirely unexpected world. When Antoni van Leeuwenhoek first observed microorganisms in 1674, he described them with astonishment as little animals darting through the water like a pike. This was not mere observation but genuine thaumazein—wonder at discovering that life existed at scales previously unimagined.
What's particularly striking about these instruments is how they challenged existing categories of knowledge. The astrolabe, one of history's most important astronomical instruments, was simultaneously a scientific tool, a navigational device, and an object of philosophical and theological contemplation. It embodied the inseparability of practical knowledge and wonder—a quality we might wish to recapture in our conception of AI.
Perhaps the most relevant historical parallel to AI is the camera obscura[6]—a darkened room or box with a small hole that projects an inverted image of the outside world. Used since antiquity, this device became central to understanding both optics and human perception.
What makes the camera obscura so significant is that it wasn't merely a tool for observation but a model for understanding human consciousness itself. In the 17th century, Johannes Kepler explicitly compared the human eye to a camera obscura, and philosophers from Descartes to Locke used it as a metaphor for how the mind receives and processes information from the external world.
The camera obscura thus became an instrument that induced wonder not just about the external world but about human cognition itself—precisely the dual direction of wonder in which AI is now leading us. Just as the device then led people to wonder "Is this how I see?", AI now leads us to wonder "Is this how I think?"
Another intriguing example is the spectrometer, developed in the 19th century. This instrument revealed that light could be decomposed into specific wavelengths, allowing scientists to identify the chemical composition of distant stars based on their spectral signatures. What's often overlooked is how profoundly this transformed human wonder about the cosmos. Before the spectrometer, the stars were objects of visual beauty and navigational utility. After it, they became knowable in their composition—revealing that the universe was made of the same elements found on Earth. This simultaneously demystified the heavens Aristotle could see and made them more wondrous by suggesting a fundamental unity to the cosmos.
This transformation parallels how AI might change our relationship to information and knowledge—revealing patterns and connections previously invisible to human perception, suggesting new unities in what appeared to be disparate domains.
What unites these historical instruments is that they extend us—they have allowed humans to see what was previously invisible, to measure what was previously immeasurable, to wonder at what was previously beyond reach. In this lineage, AI emerges not as an attempt to create artificial wonder but as the latest in a long line of instruments that extend the domain of human wonder. The crucial difference is that while earlier instruments primarily extended human perception, AI extends human cognition.
If we reconceptualize AI in this way—as an instrument of wonder rather[7] than a wondering entity—we arrive at a more productive understanding of its relationship to philosophy and to human experience. AI becomes not a competitor to human wonder but an amplifier of it, not a replication[8] of thaumazein but a tool for its expansion into new domains.
This perspective suggests that the question is not "Can machines wonder?" but rather "How might machines extend and transform human wonder?" It invites us to see AI less as the descendant of mechanical dolls and automata and more as the heir to the telescope, the microscope, and the spectrometer—the next great instrument in humanity's ongoing project to wonder at both the world and ourselves.
Let's bring this back to earth and into today's priorities. What does all of this tell us about how we should react as a society to the transformation that artificial intelligence is bringing? One way to translate this into present-day action is to say that we need to figure out how to re-organize human wonder in our societies, if we are really to unlock the benefits that artificial intelligence brings. This means thinking about how education is set up, how science is funded and directed, and how we individually learn and adapt to an increasingly complex world. If we view artificial intelligence through the lens of wonder, as an instrument amplifying and directing our wonder, we realize that we need to set out more clearly the questions we want to explore and the domains we want to understand.
It is worth remembering that Aristotle was not just a philosopher dealing with abstract concepts. He was also a meticulous observer and collector of data. Perhaps one of the first systematic empiricists. Before developing his theory of biology, he studied and cataloged hundreds of different animal species, examining their anatomies and behaviors in detail. His work contains observations so precise that modern biologists can still identify the species he described over two millennia ago.
This empirical foundation was inseparable from his philosophical wonder. For Aristotle, thaumazein and systematic observation went hand in hand: careful observation fueled wonder, and wonder drove further observation.
In our contemporary context, data is indeed the fuel of wonder[9]. The massive datasets that power modern AI systems are the raw material from which new insights emerge. Like Aristotle's biological observations, these datasets contain patterns and relationships that can trigger genuine thaumazein when properly interrogated.
Here we need to be careful not to let the instrument dull our own abilities or values. Every great leap in human innovation—from agriculture to the printing press—has reshaped communal life, sometimes at the cost of overshadowing vital human values. In the case of AI, an overreliance on algorithmic outputs could erode essential human skills: empathy, critical thinking, and the moral imagination that comes from grappling with complex dilemmas.
We might attempt to design this away, but the reality is that this is not a technical problem at all. If we look to Aristotle’s virtue ethics, we find a different answer which emphasizes character and habitual practice. We have to ask ourselves: how do we shape our personal and collective “character” in an age where an AI can craft our messages, analyze our data, and even predict our desires?
Nurturing virtues and habits like curiosity, humility, and compassion remains indispensable. Indeed, these virtues become the ethical scaffolding that ensures AI development aligns with human flourishing rather than mere efficiency or profit.
You may counter that there is a real danger that the very instrument meant to expand our wonder may also dull it if we treat AI as an all-knowing oracle. That risk arises when we assume “if AI can’t find or doesn’t know something, it must not exist,” thus conflating the boundaries of algorithmic reach with the boundaries of wonder. This new form of dogmatism—algorithmic dogmatism[10]—threatens to dull our collective curiosity.
The antidote? Balancing AI’s capabilities with an almost deliberate cultivation of “human ignorance” in the Aristotelian sense. By “ignorance,” we mean that open space of possibilities where we do not rush to fill every gap with machine-guided answers. Encouraging students, researchers, and policymakers to dwell in uncertainty reclaims the original spark of thaumazein: an acknowledgement that no matter how sophisticated our tools, some mysteries remain irreducible, and new ones emerge as we discover more. This humility in the face of the unknown protects us from the illusion that everything worth knowing can be neatly coded into an algorithm, or predicted from a corpus of data. The space between the questions and the answer is where humanity deepens.
Again, this is why many of the early Platonic dialogues ended in what is sometimes called an aporia: an intellectual tension, a puzzlement that opens up a larger world. We have to learn not to seek answers, but to seek new questions.
With this philosophical framework in mind, here are four concrete actions that policymakers, corporate leaders, and other decision-makers should prioritize:
Create "Wonder Infrastructures" for Data Collection and Access. Decision-makers should invest in data infrastructures that serve as modern equivalents to natural history museums and astronomical observatories—institutions dedicated to the systematic collection of data specifically designed to provoke wonder and discovery: wonder infrastructures[11]. And - this is crucially important - these should be broader than just scientific domains and pre-determined silos.
This means:
Funding public datasets that are explicitly designed for exploration rather than just application.
Creating accessible interfaces that allow non-specialists to engage with complex data.
Supporting data collection in domains that may not have immediate commercial applications but hold potential for fundamental insights.
For example, a national "Digital Wonder Cabinet" could collect and curate datasets across disciplines, from genetic sequences to historical texts, specifically structured to reveal unexpected patterns and connections.
Reform Education Around Question-Formation Rather Than Answer-Memorization. If AI serves as an instrument of wonder, education must focus on developing the human capacity to ask meaningful questions rather than simply providing answers (which AI can increasingly do).
This requires:
Restructuring curricula around inquiry-based learning at all levels.
Teaching the art of questioning as a core skill.
Evaluating students on their ability to identify interesting problems rather than just solve predefined ones.
Schools might institute regular question-mining hours where students practice identifying questions that AI tools cannot yet answer, developing the distinctly human skill of recognizing what remains genuinely puzzling. This is not taking education in an untried direction; it is a return to Socrates’ dialectic.
Develop "AI Boundary Protocols" That Preserve the Wonder Gap. For AI to function as an instrument of wonder rather than a replacement for human inquiry, users must understand where AI insights come from and where they reach their limits.
This means:
Requiring AI systems to explicitly indicate the boundaries of their knowledge, to say when they do not know, and to report their confidence levels.
Developing interfaces that highlight anomalies and unexpected patterns in data.
For instance, data visualization tools could be programmed to highlight pattern-breaks, drawing attention to precisely those areas where human wonder might be most productively engaged. AI tools could even be used to chart these “geographies of wonder”.
Establish Cross-Disciplinary "Thauma-Labs" Between Humanities and AI. The most profound applications of AI as an instrument of wonder will likely emerge at the intersection of technical expertise and humanistic inquiry.
This requires:
Funding collaborative research environments that bring together philosophers, artists, scientists, and AI researchers.
Creating institutional structures that value interpretive skills alongside technical ones.
Developing evaluation metrics that recognize breakthrough questions as well as breakthrough answers.
Universities and research institutions could establish dedicated centers where philosophers of wonder work directly with data scientists, modeling the integration of thaumazein and technical expertise that characterized Aristotle's approach. And this is not just for researchers - we should also encourage cross-industry learning and wonder.
Beyond institutional and policy changes, we all bear a responsibility for cultivating our own capacity for wonder in an AI-mediated world. This means developing habits of questioning that go beyond what algorithms suggest; it means preserving practices of direct observation alongside data analysis; it means remaining attentive to what surprises us.
It means wondering at this incredible world we live in.
Aristotle began his philosophical journey not with abstract theorizing but with careful attention to the world around him—to the growth patterns of plants, to the behaviors of animals, to the movements of celestial bodies. This direct engagement with phenomena must remain central to human wonder even as AI extends our perceptual and cognitive reach.
Feeling, curating and exploring wonder is a human, personal and individual skill. And it is hugely rewarded by the world.
Our age could easily be as intellectually transformative as ancient Athens—a moment when human wonder itself bloomed. The telescope revealed new heavens; the microscope revealed new worlds within a drop of water; AI[12] may reveal new patterns of thought, new modes of questioning, new dimensions of reality previously inaccessible to human cognition.
Maybe we will remember this time most for a reorganization of human wonder. Consider what happened when writing emerged in ancient Greece: the externalization of memory through text fundamentally transformed how humans could think, remember, and wonder. Fixed inscription of thought freed the mind to explore more complex ideas and to develop the systematic inquiry that became philosophy itself.
AI represents a similar externalization—not just of memory but of certain forms of reasoning. This externalization has the potential to strengthen human reason, liberating our wonder to operate at higher levels of abstraction, to perceive patterns invisible to unaided cognition, to ask questions that were previously unthinkable.
The measure of our future will be equivalent to the strength, depth, and audacity of our wonder. If we respond to AI merely with anxiety about human obsolescence or with narrow instrumentalism, we will have failed to grasp what this moment offers—the possibility of new forms of thaumazein as transformative in their difference from current wonder as Aristotelian inquiry was from pre-philosophical thought.
Finally, wonder may become a shared concern, a driving force in our politics, supplanting the sad nostalgia currently infecting all political thought. In classical Athens, philosophical inquiry was a public affair, shaped by debate in the agora. In the modern era, centralized powers—governments, corporations, scientific institutions—often channel the direction of technological development with distracted democratic oversight. If AI truly is an extension of our collective wonder, then decisions about how to fund, regulate, and deploy it deserve broader public deliberation[13].
Such deliberation would go beyond technical talk of neural networks and data privacy to address the more profound existential questions AI raises: What kind of society do we want to become? How would future generations wish us to act when it comes to AI’s design and usage[14]? This invites a revival of the Athenian spirit—a participatory approach where philosophical dialogue shapes practical policy. We now have an opportunity to anchor these new instruments in shared human values, ensuring technology remains a tool of liberation and exploration rather than an engine of alienation.
A tool of wonder.
Thanks for reading,
Yannis & Nicklas
[1] Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (1976), https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason
[2] Margaret Boden, The Creative Mind: Myths and Mechanisms (2004), https://www.routledge.com/The-Creative-Mind-Myths-and-Mechanisms/Boden/p/book/9780415314534
[3] Aristotle, Metaphysics, Book I (982b), https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.01.0052
[4] Alan Turing, 'Computing Machinery and Intelligence', Mind, Vol. 59, No. 236 (1950), https://academic.oup.com/mind/article/LIX/236/433/986238
[5] Sherry Turkle, Evocative Objects: Things We Think With (2007), https://mitpress.mit.edu/9780262516778/evocative-objects/
[6] Jonathan Crary, Techniques of the Observer (1990), https://mitpress.mit.edu/9780262531078/techniques-of-the-observer/
[7] Edsger W. Dijkstra: “The question of whether computers can think is just like the question of whether submarines can swim”, https://history.computer.org/pioneers/dijkstra.html
[8] “Chess was long considered an extremely advanced game. However, years of research have revealed that something as apparently simple as recognizing a cat in a photograph – which AI has only learnt to do in recent years – is far more complex. This phenomenon has come to be known as Moravec’s paradox: certain things that are very difficult for humans, such as chess or advanced calculus, are quite easy for computers. But things that are very simple for us humans, such as perceiving objects or using motor skills to do the washing up, turn out to be very difficult for computers: ‘It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers [draughts], and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.’” Sheikh, H., Prins, C., Schrijvers, E. (2023). Artificial Intelligence: Definition and Background. In: Mission AI. Research for Policy. Springer, Cham. https://doi.org/10.1007/978-3-031-21448-6_2
[9] Geoffrey Bowker & Susan Leigh Star, Sorting Things Out: Classification and Its Consequences (1999), https://direct.mit.edu/books/monograph/4738/Sorting-Things-OutClassification-and-Its
[10] Cathy O’Neil, Weapons of Math Destruction (2016), https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
[11] Helga Nowotny, The Cunning of Uncertainty (2016), https://www.wiley.com/en-us/The+Cunning+of+Uncertainty-p-9780745687612
[12] … AI, by observing common elements and casting new models, may reveal …
[13] … and above all, respect human dignity as the foundation of human existence itself. https://www.turing.ac.uk/research/research-projects/human-rights-democracy-and-rule-law-impact-assessment-ai-systems-huderia
In the context of AI, this means that the design, development and use of AI systems must respect the dignity of the human beings interacting therewith or impacted thereby. Humans should be treated as moral subjects, and not as mere objects that are categorised, scored, predicted or manipulated. AI applications can be used to foster human dignity and empower individuals, yet their use can also challenge it and (un)intentionally run counter to it. Furthermore, the allocation of certain tasks may need to be reserved for humans rather than machines given their potential impact on human dignity. CAHAI Feasibility Study, Council of Europe CAHAI (2020)23, https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da
[14] “Respect, protection and promotion of human dignity and rights as established by international law, including international human rights law, is essential throughout the life cycle of AI systems. Human dignity relates to the recognition of the intrinsic and equal worth of each individual human being, regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability, and any other grounds. [...] Persons may interact with AI systems throughout their life cycle and receive assistance from them such as care for vulnerable people or people in vulnerable situations, including but not limited to children, older persons, persons with disabilities or the ill. Within such interactions, persons should never be objectified, nor should their dignity be otherwise undermined, or human rights and fundamental freedoms violated or abused.” UNESCO (2021) – Recommendation on the Ethics of Artificial Intelligence, https://unesdoc.unesco.org/ark:/48223/pf0000380455