Unpredictable Patterns #106: After reading and writing
How agents, the decoupling of text and memory, and new forms of knowledge might bring about a post-textual world.
Dear reader,
Thanks for the comments - and some good examples from science fiction of massively multi-modal AIs (the standard sci-fi AI that detects your heart rate and health indicators is the key example). This week’s note explores something else - and something much more speculative: the end of text. Special thanks to Vint Cerf for chatting this through with me in an interview that will be published on AI-policy perspectives soon. And I would be remiss not to recommend Seb Krier’s excellent recent piece on the economy of agents as a more general introduction to, and longer-term exploration of, agents than what you will find here. Related notes in the series are #89 on the future of language and #88 on memory and forgetting.
The hypothesis
This note explores the hypothesis that writing and reading - long thought integral to human civilization - may disappear.1 The argument for this hypothesis can be broken down into several steps, and we will explore each of them more closely. Here are what I think are the core steps in the argument.
We have reached peak text. The amount of text produced, the noisy nature of that text and the overall complexity of assessing it is eroding the value of text as an information carrier.
Writing and memory are being de-coupled. Whereas writing was historically revolutionary for both information storage and transmission, its importance for storage and memory has now declined massively, as we can store all kinds of data, not just text.
We are entering the post-web phase of the Internet. The human-readable web may well have been a parenthesis in the evolution of the Internet. The agent-affordance web is likely to be as effective as, or more effective than, the human-readable web. And since the majority of our writing and reading is now digital, the fading away of the web will be the fading away of text overall.
As a long term consequence, then, writing and reading may fade away. Just as the appendix was once crucial for digesting cellulose in our ancestors but now serves a much reduced function, text might remain but in a diminished role as newer technologies take over its primary functions.
Let’s explore these one by one - and at the end think about how this might change society at large.
Peak text
Text has reached its breaking point as an information technology. Each day we produce more written words than the entire nineteenth century did, yet this abundance has not made us more informed or better at communicating. Instead, the flood of text has made finding, understanding, and using information harder. When everything is written down and stored, nothing stands out, and the signal drowns in noise.
The failure of text manifests most clearly in our current information spaces. Take the process Cory Doctorow calls "enshittification" – where platforms like Twitter, Reddit, or Facebook become increasingly useless as they fill with spam, ads, and AI-generated content.2 But this degradation isn't just about bad actors or corporate greed. It reveals a deeper truth: text itself has become too easy to produce and too hard to filter. When AI can generate endless pages of plausible-looking content, and every human thought gets preserved in digital form, the written word loses its power to convey meaning reliably, and the Text - the sum of all text - is infected with noise.
This crisis of text parallels earlier technological transitions. Just as oral cultures hit their limits when human societies grew too large and complex for memory and speech alone, written culture now strains under the weight of its own success. We've built systems that can generate, copy, and store more text than any human can process. The typical office worker now spends hours daily just reading and writing emails – not because this improves their work, but because text-based communication has become a ritual confused with productive work.
If you look closely, the signs of text's soft decline appear everywhere. Students increasingly turn to video explanations over written textbooks.3 Voice interfaces replace written commands. Visual communication through images, videos, and augmented reality grows more prevalent. These aren't just shifts in preference – they reflect the fundamental limitations of text as a technology for storing and transmitting human knowledge. Text requires too much cognitive overhead, demands too much active attention, and lacks the immediacy needed for many types of information.
New technologies already reduce our dependence on written words. AI systems will increasingly mediate our interaction with information, presenting it in forms that match our needs rather than forcing us to wade through pages of text - as software, visualizations or through voice. Augmented reality will embed information directly in our environment. And new forms of recording and sharing experience will emerge that capture and convey human knowledge more efficiently than strings of written symbols.
Maybe the question isn't whether text will be displaced as our primary information technology, but how quickly and by what.
New memory palaces: the decoupling of writing and memory
In 2024 Brill published a special issue on artificial intelligence and memory. In the introduction, the editors, Sarah Gensburger and Frédéric Clavert, captured the trend:
Twenty years later, however, artificial intelligence (ai), and more specifically generative ai, seems to have taken a technological leap forward as a new kind of infrastructure of memory. This computer-generated memory of a new nature could appear as the future of collective memory.
While none of the articles explicitly discusses the possibility that writing could disappear, there are several that point out that the sites of memory are changing, and changing radically:4
These memory palaces are matrices of memorial action: they structure the regeneration of memories. Large text and image models can be considered as massive externalised and collective memory palaces. The data they are trained on undergoes a restructuring, resulting in a dense and interconnected latent space. Each element within this space is connected to multiple other elements through complex relationships. Just as in an ancient memory palace, navigating through this latent space can reactivate memories, ideas, or concepts for its users. The dissemination of such massive memory spaces through platforms and applications is a major event in the history of culture: it represents a paradigm shift in the way humanity curates, accesses, and interacts with its collective memory.
Most articles in the issue are critical of this trend, as they envision that this will mean that our memories will become more malleable, and that big technology companies will be in charge of memory - a reasonable point to raise.
But the overall conclusion here is interesting: the way we remember will change, and this is a return to the memory palaces of antiquity (now externalized in model weights and databases).5 Text then fades in importance, since it is no longer needed for the external preservation of information. Writing and remembering become decoupled as practices (except to the degree that we write in order to remember - something that might be undervalued here).
The post-web phase of the Internet
Tyler Cowen has made the argument that marketers - or perhaps all of us - should write for AIs now.6 This comment contains a key observation that is worth articulating: the way we curate and present content of different kinds is increasingly going to be agent-mediated. As this happens, what used to be meta-tags, invisible to the human reader, becomes the most important part of a web resource.
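What might "writing for the AIs" look like in practice? Here is a minimal sketch: structured, machine-readable metadata attached to a page, which an agent can consume without ever rendering the human-facing article. The vocabulary borrows loosely from schema.org JSON-LD, but the agentSummary field and the specific layout are my own illustrative assumptions, not an existing standard.

```python
import json

# A sketch of agent-facing page metadata, loosely modeled on schema.org
# JSON-LD. The "agentSummary" field is hypothetical: an example of content
# written for machine readers rather than human ones.
page_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "After reading and writing",
    "about": ["agents", "memory", "post-textual society"],
    # Hypothetical field: a compact, machine-oriented digest of the
    # argument, intended for agents that never render the page itself.
    "agentSummary": (
        "Hypothesis: text may be a parenthesis in the history of "
        "knowledge; agents, new memory substrates and an agent-native "
        "web could displace reading and writing."
    ),
}

print(json.dumps(page_metadata, indent=2))
```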
Now, the vision of autonomous digital agents navigating information spaces isn't new. In 1988, well before the World Wide Web existed, Vinton Cerf and Robert Kahn published a prescient paper titled "The Digital Library Project, Volume 1: The World of Knowbots."7 They envisioned Knowledge Robots (Knowbots) – autonomous software agents that would move through digital networks, finding and managing information independently. These Knowbots wouldn't just passively retrieve data; they would actively seek out information, interact with other Knowbots, and perform complex tasks on behalf of their users.
One thing that is really remarkable about Cerf and Kahn's vision is how it anticipated a fundamentally different model of digital interaction than what the web would become. They weren't imagining agents that would click through human-readable pages or fill out forms. Instead, they foresaw a network architecture where machine agents could directly access and manipulate information, communicate with other agents, and navigate digital spaces in ways native to their capabilities.
Now, thirty-five years later, as we grapple with the limitations of having AI agents navigate a web designed for human consumption, Cerf and Kahn's Knowbots concept seems more relevant than ever. One way to think about this is to play forward the evolution of what is sometimes called "computer use" - where an agent learns to mimic what a human does with a browser, and then independently does the same thing. It seems obvious to me that this is an inefficient way of building agents: they need to click through all kinds of pages that are optimized for human consumption, when an agent could bypass all of that and get directly to the result it needs.
This inefficiency reveals a deeper truth about our current web architecture: we're asking AI agents to navigate a world built for human perception and interaction. Imagine asking a robot to use a doorknob designed for human hands when it could simply pass through a sensor-activated door. That's essentially what we're doing when we have AI agents navigate websites designed for human browsing – making them click buttons, fill forms, and parse human-readable text when they could be directly accessing and processing the underlying data.8
The current approach to web automation often relies on what we might call "digital puppetry" – AI agents pulling the strings of web browsers, mimicking human actions. The browser itself becomes an unnecessary intermediary, a pseudo-skeuomorphic holdover from human-centric web design.
This brings us to a crucial question about the future of web architecture: why would we maintain the fiction of human-like interaction for AI agents? The Uniform Resource Identifier (URI) system, brilliant as it was for human navigation, might need to evolve into something more sophisticated for machine-to-machine communication. Instead of identifying pages designed for human consumption, URIs could point to structured data endpoints and action interfaces specifically designed for AI agents.
Consider a simple task like checking flight prices (for some reason this has become the canonical example of agentic AI). A human needs to navigate through a website's interface – clicking dropdowns, selecting dates, interpreting results displayed in a visually pleasing format. An AI agent doesn't need any of this. It could simply send a direct query to multiple data endpoints with the required parameters and receive structured data in response. The current practice of having AI agents simulate human browsing behavior to accomplish this task is like forcing a submarine to follow surface shipping lanes – it's an artificial constraint that serves no practical purpose.
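The contrast is easy to sketch in code. The first half below lists the steps a "computer use" agent has to puppet through a human interface; the second queries a structured endpoint directly. Everything here is hypothetical - the endpoint URL, its parameters and its response shape are assumptions for illustration, not a real airline API.

```python
import requests  # third-party HTTP library: pip install requests

# The "digital puppetry" path: the agent mimics a human in a browser.
PUPPETRY_STEPS = [
    "open https://flights.example.com in a headless browser",
    "dismiss the cookie banner",
    "type 'ARN' into the 'From' field, 'SFO' into the 'To' field",
    "open the date picker and click the departure date",
    "press 'Search' and wait for the results page to render",
    "parse prices out of HTML that was laid out for human eyes",
]

def check_fares_directly(origin: str, destination: str, date: str) -> list[dict]:
    """The agent-native path: one structured query, structured data back.

    The endpoint, parameters and response format are hypothetical - a
    sketch of what an agent-facing interface could look like.
    """
    response = requests.get(
        "https://flights.example.com/agent/v1/fares",  # hypothetical endpoint
        params={"origin": origin, "destination": destination, "date": date},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["fares"]  # e.g. [{"carrier": "...", "price": 412}]
```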
What might this new architecture look like? Instead of traditional URIs pointing to web pages, we might develop an enhanced addressing system that directs agents to resource interfaces designed for machine consumption. These wouldn't just be APIs as we know them today, but rather a new layer of the web specifically engineered for machine-to-machine interaction.9 This layer would include not just data access points but also clear definitions of what actions can be performed, what inputs are required, and what outputs can be expected.
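One way to picture that layer is a manifest published at each resource: instead of a page to render, it declares which actions exist, what inputs they take, and what outputs they return. The format below is invented for illustration - real candidates range from OpenAPI descriptions to the agent-to-agent standards mentioned in the notes.

```python
# A hypothetical "agent manifest" for a machine-facing resource: it
# declares actions, inputs and outputs rather than pages. The format
# is invented for illustration, not an existing standard.
AGENT_MANIFEST = {
    "resource": "https://flights.example.com/agent/v1",
    "actions": {
        "search_fares": {
            "description": "Find fares between two airports on a date.",
            "inputs": {
                "origin": {"type": "string", "format": "IATA airport code"},
                "destination": {"type": "string", "format": "IATA airport code"},
                "date": {"type": "string", "format": "ISO 8601 date"},
            },
            "outputs": {"fares": {"type": "array", "items": "Fare"}},
        },
        "hold_booking": {
            "description": "Place a 24-hour hold on a quoted fare.",
            "inputs": {"fare_id": {"type": "string"}},
            "outputs": {
                "hold_id": {"type": "string"},
                "expires_at": {"type": "string", "format": "ISO 8601 datetime"},
            },
        },
    },
}
```

The design choice worth noticing: the manifest describes affordances, not presentation - exactly the information a human-oriented page buries in layout, labels and visual hierarchy.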
The implications of this shift extend beyond mere efficiency. By creating a parallel web architecture optimized for AI agents, we could possibly reduce the computational resources currently wasted on having machines interpret human-oriented interfaces. This would not only make AI agents more efficient but also more reliable, as they would no longer need to deal with the unpredictability of human-oriented web design changes or tweaks designed to communicate something to a human reader.
This doesn't have to mean abandoning the human-readable web – rather, it means acknowledging that we need different interfaces for different types of users.
We might even see this dual-layer architecture as a stepping stone to something more sophisticated. As AI agents become more capable, the distinction between human-oriented and machine-oriented web resources might blur, leading to new forms of interfaces that can adapt their presentation and interaction models based on whether they're dealing with a human or different kinds of agents.
This evolution might reflect a broader truth about technological progress: sometimes the most effective path forward isn't to make machines better at using tools designed for humans, but to create new tools that better suit the machines' native capabilities - an airplane doesn’t fly by working out how a human could possibly fly, but by finding new designs.
As this evolution speeds up, “reading the web” might well become less and less interesting to coming generations - and writing might also fade in importance.
The return of the dialogue?
If all these key forces interact, we may face a world in which text becomes a vestigial technology. But wait: can reading and writing really disappear? There are a few interesting things to observe about this question - and the first is the obvious, almost visceral reaction that it induces in most of us: we believe that if reading and writing were to disappear, civilization would collapse, or at least lose something essential.
We would become less human, less alive!
I feel this too, for sure. Reading and writing are core to how I understand myself and the world I live in. I love writing long-hand in Moleskine journals, I love finding new, good pens and sitting down to write out thoughts and ideas — but that is habit. It is how I have learned to think. I think through writing, and without writing, I worry that there would be no thought!
But then I also realize that for a large portion of human history we did not think through writing.
For roughly 300,000 years, Homo sapiens communicated primarily through vocalizations, gestures, and facial expressions. These methods, deeply embedded in our evolutionary heritage, allowed our ancestors to coordinate hunting, maintain social bonds, warn of dangers, and pass down cultural knowledge through oral traditions. This pattern of predominantly verbal and non-verbal communication persisted across continents and cultures for most of human history.
The development of writing systems emerged relatively recently, around 5,000 years ago, first in Mesopotamia and independently in other regions. This revolutionary technology allowed humans to store information outside their minds for the first time, enabling the accumulation of knowledge across generations with unprecedented accuracy. However, for most of this period, literacy remained the privilege of a tiny elite—usually religious and political leaders, along with their scribes.
Mass literacy is an extraordinarily recent phenomenon, emerging only in the last 200 years. Even by 1800, global literacy rates were below 12%. The ability to read and write became widespread only with the advent of compulsory education, the printing press, and industrialization. This means that for about 99.9% of human existence, our species relied almost entirely on spoken communication, making our current text-based world a radical departure from our cognitive heritage.
And thinking can also be dialogical - I think when I speak to others, or even when I dictate to myself (as I have now started to do in exploring new workflows). In fact, Socrates was convinced that writing robbed us of thinking. From his perspective, these new dialogical interfaces might be preferable and much more interesting than the oceans of text we have produced. His retelling of the myth of Thamus and Theuth is worth remembering here, and quoting in full:10
Soc. At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Phaedr. Yes, Socrates, you can easily invent tales of Egypt, or of any other country.
Soc. There was a tradition in the temple of Dodona that oaks first gave prophetic utterances. The men of old, unlike in their simplicity to young philosophy, deemed that if they heard the truth even from 'oak or rock,' it was enough for them; whereas you seem to consider not whether a thing is or is not true, but who the speaker is and from what country the tale comes.
Phaedr. I acknowledge the justice of your rebuke; and I think that the Theban is right in his view about letters.
Soc. He would be a very simple person, and quite a stranger to the oracles of Thamus or Ammon, who should leave in writing or receive in writing any art under the idea that the written word would be intelligible or certain; or who deemed that writing was at all better than knowledge and recollection of the same matters?
Phaedr. That is most true.
Soc. I cannot help feeling, Phaedrus, that writing is unfortunately like painting; for the creations of the painter have the attitude of life, and yet if you ask them a question they preserve a solemn silence. And the same may be said of speeches. You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves. (my emphasis)
The new agentic web breaks this silence, literally, with voice interfaces. We can now put questions to the paintings, and return to the dialogue as the site of memory, knowledge and intelligence.
No longer are we then locked into the semblance of truth and the silence of pictures.
A short aside on post-textual societies
In The Archaeology of Knowledge (1969), Foucault uses the term "archive" not to describe a physical repository of documents, but rather as the system that governs what can be said, preserved, and reactivated within a culture. It's not just a collection of texts or artifacts, but the entire epistemological framework.
The archive determines what can be said.
This conception of the archive is intrinsically tied to writing as a technology. The archive presupposes inscription, documentation, and the ability to fix statements in time. It operates through what Foucault calls "monuments" – the concrete manifestations of discourse that can be analyzed, classified, and arranged. The archive provides both the condition of possibility for knowledge and its mode of organization.
But what comes after the archive in an era where writing might no longer be the primary technology of knowledge preservation and transmission? The database might seem like the obvious successor (and this was my earlier suggestion in #88), but this may be too simple a progression. The database, while different in structure from the traditional archive, still operates within the logic of inscription and retrieval – it's still fundamentally archival in nature, even if its organization and access methods differ.
A more radical possibility emerges when we consider systems of knowledge organization that transcend both writing and traditional database structures. In an AI-mediated world, we might see the emergence of what we could call a "dialogical knowledge field" – a system where knowledge exists not as stored inscriptions (whether physical or digital) but as patterns of potential actualization in conversation.
This post-archival system would have several distinctive characteristics:
Non-Inscriptional Persistence: Rather than being stored as discrete units of information (documents, database records, etc.), knowledge would exist as patterns of relationships that can be dynamically reconstructed in conversation. Think of how a hologram contains information in interference patterns rather than point-by-point storage.
Contextual Emergence: Unlike the archive, where statements exist as fixed monuments, or databases, where records have defined structures, knowledge in this system would emerge differently based on the context of inquiry - the questions we are asking! The same underlying patterns could generate different manifestations depending on the perspective and needs of the dialogue partner (a toy sketch of this contrast follows the list below).
Dynamic Temporality: Foucault's archive operates through what he calls the "historical a priori" – the conditions that determine what can be said at a particular historical moment. A post-archival system might operate through what we could call a "reconfigurable a priori," where the conditions of knowledge are themselves constantly shifting and evolving, as we ask new questions.
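To make the contrast tangible, here is a toy sketch - emphatically a cartoon, not a design. It contrasts archival lookup, which returns the same fixed record every time, with a field in which each query actualizes a different weighted blend of latent patterns. The vectors, concept names and softmax weighting are all illustrative assumptions.

```python
import numpy as np

# Archival knowledge: fixed inscriptions, retrieved verbatim.
ARCHIVE = {"doc-001": "the fixed text of a stored document"}

def archival_lookup(key: str) -> str:
    # The archive returns the same monument every time it is consulted.
    return ARCHIVE[key]

# A toy "dialogical knowledge field": concepts live as vectors in a
# latent space (random here; learned embeddings in a real system).
rng = np.random.default_rng(0)
LATENT = {c: rng.normal(size=8) for c in
          ["memory", "writing", "dialogue", "archive"]}

def contextual_emergence(query_vec: np.ndarray, temperature: float = 0.5) -> dict:
    # Knowledge "emerges" as a context-weighted blend of latent patterns:
    # a different question (query vector) actualizes a different answer.
    concepts = list(LATENT)
    sims = np.array([LATENT[c] @ query_vec for c in concepts])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    return {c: round(float(w), 3) for c, w in zip(concepts, weights)}

# Two different questions actualize two different configurations:
print(contextual_emergence(LATENT["dialogue"]))
print(contextual_emergence(LATENT["memory"] + LATENT["writing"]))
```

The point of the toy is only the shape of the difference: the archive's answer is invariant, while the field's answer is a function of the question.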
The implications of such a system would be quite interesting. The very nature of knowledge would change from something that is stored and retrieved to something that is continuously generated and regenerated in dialogue. The distinction between memory and creation would blur, as would the line between past and present knowledge.
This post-archival system would also transform our relationship with time. The archive, as Foucault notes, creates a particular temporality through its organization of statements and their relations. A post-archival system might create new forms of temporality where past, present, and future knowledge exist in a more dynamic relationship.
Think about how this might change scholarly practice. Instead of researchers consulting archives or databases to construct new knowledge, they might engage with AI systems that can generate novel perspectives and insights by recombining and reinterpreting patterns of knowledge in real-time. The very notion of "primary sources" might become obsolete, replaced by dynamically generated manifestations of knowledge patterns.
This transformation would also affect how we understand authority and authenticity. In the archival system, authority often derives from the authenticity of documents and how they have been preserved - their provenance. In a post-archival system, authority might emerge from the coherence and utility of the patterns generated, rather than their historical authenticity.
This, then, could mark the emergence of a new kind of Socratic dialogue, where the rules of the dialectic govern the valuing of knowledge.
The relationship between individual and collective knowledge would also be transformed. The archive, even in its Foucauldian sense, maintains a distinction between personal and collective knowledge. A post-archival system might blur this distinction, creating new forms of shared cognition that transcend individual memory while maintaining personal perspective.
As we develop the technical capabilities for such systems, we need to understand their epistemological implications.
So what - and the counter-argument
The hypothesis we set out to explore here was simple: that writing and reading text might be a short parenthesis in the history of knowledge. The argument for the hypothesis was then explored in detail, and some consequences sketched out (with some help from our friendly neighbourhood Foucault).
But why should we care about this? Does it matter if reading and writing disappear? If we bracket the cultural horror this thought invokes in most of us, I think it does. If we are approaching the end of text, the second- and third-order implications are almost impossible to state. Contemplating such fundamental shifts in our civilizational technology gives us a tool for imagining the kinds of complexity that AI and AGI could create, and that makes it worthwhile to entertain these kinds of thought experiments.
How would we want to ensure democracy, freedom and human dignity in a world without text?
Now, all this said, I also want to be clear and state that this is a speculative hypothesis. There are plenty of arguments against it - and to end, I think it is useful to lay out the counter-argument as well (if for no other reason than that I want to keep writing and reading!).
Here goes.
Text will persist because its resilience lies precisely in its limitations and simplicity. Unlike complex multimodal or AI-mediated communication, text requires minimal technological infrastructure, can be preserved across millennia, and remains readable regardless of technological evolution. The very qualities that make text seem outdated – its static nature, its demand for active engagement, its technological simplicity – are what make it irreplaceable. While AI-mediated dialogue might offer deeper interaction, it introduces critical vulnerabilities: dependence on complex infrastructure, potential for manipulation or censorship through algorithmic attacks, and the risk of knowledge loss through technological obsolescence or failure.
Text is future-proof.
Furthermore, text's role in human cognition goes far deeper than mere information storage or transmission. The act of writing, particularly in fields like mathematics, philosophy, and scientific reasoning, is fundamentally intertwined with how we develop and structure complex thoughts. We are writing bodies and that is how we think. The deliberate, sequential nature of writing forces a discipline of thought that dialogue, no matter how sophisticated, cannot replicate.
This is evidenced by how even in our highly digital age, breakthrough thinking in fields from theoretical physics to constitutional law still relies heavily on written exposition. The "thinking through writing" process is not merely a learned habit but reflects the architectural constraints of human cognition – our limited working memory and need for external scaffolding to build complex arguments.
The feeling of writing is what it feels like to think as a human. Plato did write, after all!
The suggested agent-mediated web, rather than replacing text, is more likely to become an additional layer atop a text-based foundation. Just as the development of mathematics didn't eliminate written language but rather added a new symbolic layer for specific purposes, AI-mediated communication will likely complement rather than replace text. This is because text serves as a crucial "common denominator" format – it can be easily created, copied, verified, and transformed across different systems and contexts. It is also a great compression format. Even in a world of sophisticated AI agents, there will need to be some form of stable, human-readable record of knowledge that can survive technological transitions and serve as a ground truth for these systems. Text, in its fundamental simplicity and durability, remains uniquely suited for this role.
Plus there is just something about that notebook, the cup of coffee, the feeling of pen against paper.
But maybe I am just old.
Thanks for reading!
Nicklas
It is probably good to state that this is a hypothesis - and not a prediction. The reason I want to explore it is that I think it could be an example of a profound and revolutionary change that technology could drive. If the hypothesis is credible, it changes a lot of things!
See Doctorow, C., 2023. The ‘enshittification’ of TikTok: or how, exactly, platforms die. Wired [online].
Although the evidence for what they prefer is still mixed — compare Ghilay, Y., 2021. Text-Based Video: The Effectiveness of Learning Math in Higher Education Through Videos and Texts. Journal of Education and Learning, 10(3), p. 55. https://doi.org/10.5539/JEL.V10N3P55 with Woodham, L., Ellaway, R., Round, J., Vaughan, S., Poulton, T. and Zary, N., 2015. Medical Student and Tutor Perceptions of Video Versus Text in an Interactive Online Virtual Patient for Problem-Based Learning: A Pilot Study. Journal of Medical Internet Research, 17. https://doi.org/10.2196/jmir.3922.
See Schuh, J., 2024. AI As Artificial Memory: A Global Reconfiguration of Our Collective Memory Practices?. Memory Studies Review, 1(aop), pp. 1-25. An excellent, if very big-tech-critical, article.
The mandatory reading on memory palaces is Frances Yates’s excellent examination in Yates, F.A., 2013. The Art of Memory. Routledge - but Spence, J.D., 1985. The Memory Palace of Matteo Ricci. Penguin is also a good read.
In an interview with Dwarkesh Patel in October 2024, Cowen stated: “No one is writing or recording for the AIs very much. But if you believe even a modest version of this progress - like, I’m modest in what I believe relative to you and many of you - you should be doing this. You’re an idiot if you’re not writing for the AIs. They’re a big part of your audience, and their purchasing power, we’ll see, but over time it will accumulate.”
Kahn, R.E. and Cerf, V.G., 1988. The Digital Library Project, Volume 1: The World of Knowbots (DRAFT). Corporation for National Research Initiatives. Available at: https://www.cnri.reston.va.us/kahn-cerf-88.pdf
One interesting example of this is that we force an agent to stay within a language - to choose a language on a website. This may actually reduce the agent’s efficiency, as different languages can be effective at conveying different things - and AIs might prefer intermediate versions. See e.g. Johnson, M., Schuster, M., Le, Q.V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G. and Hughes, M., 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5, pp. 339-351.
Several initiatives are already working on developing enhanced architectures for machine-to-machine (M2M) interaction that go beyond traditional web APIs. Some examples include:
oneM2M: This global standards initiative is developing specifications for a framework to support M2M and IoT applications across various sectors. oneM2M aims to create a common M2M service layer that can be embedded in various hardware and software to enable seamless machine-to-machine communications.
Cloud Consumption Interface: VMware's Cloud Consumption Interface offers a cloud experience for Supervisor IaaS services, providing a consistent API and CLI for all cloud and IaaS operations. This interface allows machines to interact with cloud resources using Kubernetes-style APIs, enabling more efficient M2M communication in cloud environments.
Resource-Oriented Architecture (ROA): This approach extends REST principles to create a more flexible and interconnected system for M2M interactions. ROA uses URLs as global identifiers for various resources, including documents, data, services, and abstract concepts, allowing machines to interact with these resources using consistent methods.
M2M Authentication Systems: Advanced M2M authentication mechanisms are being developed to ensure secure communication between devices. These systems use technologies like Public Key Infrastructure (PKI) and lightweight cryptography to enable secure M2M interactions in resource-constrained environments.
These initiatives are working towards creating a new layer of the web that is specifically designed for M2M communication, moving beyond traditional APIs to enable more efficient and intelligent interactions between machines. They are all likely to evolve into A2A, or agent-to-agent, standards.
Phaedrus 274b–278d