Making the Inevitable Obvious
Artificial Intelligences, So Far
I wrote this short memo in November 2024, at the invitation of Wired Mid-East for their year-end issue. I think it still holds up nine months later, and represents where we are on this astounding journey.
There are three points I find helpful when thinking about AIs so far:
The first is that we have to talk about AIs, plural. There is no monolithic singular AI that runs the world. Instead there are already multiple varieties of AI, and each of them has multiple models with their own traits, quirks, and strengths. For instance, there are multiple LLM models, trained on slightly different texts, which yield different answers to queries. Then there are non-LLM AI models – like the ones driving cars – that have very different uses besides answering questions. As we continue to develop more advanced models of AI, they will have even more varieties of cognition inside them. Our own brains are in fact a society of different kinds of cognition – such as memory, deduction, pattern recognition – only some of which have been artificially synthesized. Eventually, commercial AIs will be complicated systems consisting of dozens of different types of artificial intelligence modes, and each of them will exhibit its own personality and be useful for certain chores. Besides these dominant consumer models there will be hundreds of other species of AI, engineered for very specialized tasks, like driving a car or diagnosing medical issues. We don’t have a monolithic approach to regulating, financing, or overseeing machines. There is no Machine. Rather we manage our machines differently, dealing with airplanes, toasters, X-rays, iPhones, and rockets with programs appropriate to each machine. Ditto for AIs.
And none of these species of AI – not one – will think like a human. All of them produce alien intelligences. Even as they approach consciousness, they will be alien, almost as if they were artificial alien beings. They think in a different way, and might come up with solutions a human would never arrive at. The fact that they don’t think like humans is their chief benefit. There are wicked problems in science and business that may require us to first invent a type of AI that, together with humans, can solve what humans alone cannot. In this way AIs can go beyond humans, just as whale intelligence is beyond humans. Intelligence is not a ladder, with steps along one dimension; it is multidimensional, a radiation. The space of possible intelligences is vast, with human intelligence occupying a tiny spot at the edge of this galaxy of possible minds. Every other possible mind is alien, and we have begun the very long process of populating this space with thousands of other species of possible minds.
The second thing to keep in mind about AIs is that their ability to answer questions is probably the least important thing about them. Getting answers is how we will use them at first, but their real power is in something we call spatial intelligence – their ability to simulate, render, generate, and manage the 3D world. It is a genuine superpower to be able to reason intellectually and to think abstractly – which some AIs are beginning to do – but far more powerful is the ability to act in reality, to get things done and make things happen in the physical world. Most meaningful tasks we want done require multiple steps, and multiple kinds of intelligences, to complete. To oversee the multiple modes of action, and different modes of thinking, we have invented agents. An AI agent needs to master common sense to navigate through the real world, to be able to anticipate what will actually happen. It has to know that there is cause and effect, that things don’t disappear just because you can’t see them, and that two objects cannot occupy the same place at the same time, and so on. AIs have to be able to understand a volumetric world in three dimensions. Something similar is needed for augmented reality. The AIs have to be able to render a virtual world digitally to overlay the real world using smart glasses, so that we see both the actual world and a perfect digital twin. To render that merged world in real time as we move around wearing our glasses, the system needs massive amounts of cheap, ubiquitous spatial intelligence. Without ubiquitous spatial AI, there is no metaverse.
We have the first glimpses of spatial intelligence in the AIs that can generate video clips from a text prompt or from found images. In laboratories we have the first examples of AIs that can generate volumetric 3D worlds from video input. We are almost at the point that one person can produce a 3D virtual world. Creating a video game or movie now becomes a solo job, one that required thousands of people before.
Just as LLMs were trained on billions of pieces of text and language, some of these new AIs are being trained on billions of data points in physics and chemistry. For instance, the billion hours of video from Tesla cars driving around are training AIs on not just the laws of traffic, but the laws of physics, how moving objects behave. As these spatial models improve, they also learn how forces can cascade, and what is needed to accomplish real tasks. Any kind of humanoid robot will need this kind of spatial intelligence to survive more than a few hours. So in addition to training AI models to get far better at abstract reasoning in the intellectual realm, the frontier AI models are rapidly progressing at improving their spatial intelligence, which will have far more use and far more consequence than answering questions.
The third thing to keep in mind about AIs is that you are not late. You have time; we have time. While the frontier of AI seems to be accelerating fast, adoption is slow. Despite hundreds of billions of dollars invested into AI in the last few years, only the chip maker Nvidia and the data centers are making real profits. Some AI companies have nice revenues, but they are not pricing their services at real costs. It is far more expensive to answer a question with an LLM than with the AIs that Google has used for years. As we ask the AIs to do more complicated tasks, their cost will rise. Most people will certainly pay for most of their AIs, while free versions will be available. This slows adoption.
In addition, organizations can’t simply import AIs as if they were just hiring additional people. Workflows and even the shape of the organizations need to change to fit AIs. Something similar happened as organizations electrified a century ago. One could not introduce electric motors, telegraphs, lights, and telephones into a company without changing the architecture of the space as well as the design of the hierarchy. Motors and telephones produced skyscraper offices and corporations. To bring AIs into companies will demand a similar redesign of roles and spaces. We know that AI has penetrated smaller companies first because they are far more agile in morphing their shape. As we introduce AIs into our private lives, this too will necessitate a redesign of many of our habits, and all this takes time. Even if there were not a single further advance in AI from today, it would take 5 to 10 years to fully incorporate the AIs we already have into our organizations and lives.
There’s a lot of hype about AI these days, and among those who hype AI the most are the doomers – because they promote the most extreme fantasy version of AI. They believe the hype. A lot of the urgency for dealing with AI comes from the doomers, who claim 1) that the intelligence of AI can escalate instantly, and 2) that we should regulate based on harms we can imagine rather than harms that are real. Despite what the doomers proclaim, we have time because there has been no exponential increase in artificial intelligence. The increase in intelligence has been very slow, in part because we don’t have good measurements for human intelligence, and no metrics for extra-human intelligence. But the primary reason for the slow rate is that the only exponential in AI is in its input – it takes exponentially more training data, and exponentially more compute, to make just a modest improvement in reasoning. The artificial intelligences are not compounding anywhere near exponentially. We have time.
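To make that input/output asymmetry concrete, here is a toy numeric sketch. It assumes a Chinchilla-style power law in which model loss falls only as a small power of training compute; the exponent and constants below are invented for illustration, not measured values.

```python
# Toy illustration of "exponential input, modest improvement".
# Assumption: loss follows a power law in compute, loss = compute ** -ALPHA.
# ALPHA is a made-up exponent chosen only to show the shape of the curve.

ALPHA = 0.05

def loss(compute: float) -> float:
    """Hypothetical model loss under an assumed power law in training compute."""
    return compute ** -ALPHA

for exponent in range(1, 6):
    c = 10 ** exponent
    print(f"compute 10^{exponent}: loss {loss(c):.3f}")

# Each 10x increase in compute shrinks loss by the same fixed factor (~11% here),
# so the input grows exponentially while each gain in capability stays modest.
```

Under any curve like this, the inputs compound while the outputs merely creep – the opposite of an intelligence explosion.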
Lastly, our concern about the rise of AIs should be in proportion to their actual harms versus their actual benefits. So far as I have been able to determine, the total number of people who have lost their jobs to AI as of 2024 is just several hundred employees, out of billions. They were mostly language translators and a few (but not all) help-desk operators. This will change in the future, but if we are evidence based, the data so far is that the real harms of AI are almost nil, while the imagined harms are astronomical. If we base our policies for AIs on the reasonable facts that they are varied and heterogeneous, that their benefits are more than answering questions, and that so far we have no evidence of massive job displacement, then we have time to accommodate their unprecedented power into our society.
The scientists who invented the current crop of LLMs were trying to make language translation software. They were completely surprised that bits of reasoning also emerged from the translation algorithms. This emergent intelligence was a beautiful unintended byproduct that also scaled up magically. We honestly have no idea what intelligence is, so as we make more of it and more varieties of it, there will inevitably be more surprises like this. But based on the evidence of what we have made so far, this is what we know.
An Audience of One

Today, AI tools lower the energetic costs of creating something. They make it easier to start and easier to finish. AIs can do a lot of the hard work needed in making something real.
I find little joy in having AIs do everything when I am being creative, but I do get enjoyment in co-creating with them. Co-creation feels like real creation. My role still takes effort and significantly determines the quality, style and nature of what is created. I typically spend 30 minutes to an hour co-generating an image in an engine like Midjourney. I have spent hours with AI co-writing an essay. All the stuff I like the best requires my personal attention and involvement as a co-creator.
My hypothesis is that in the near future, the bulk of creative content generated by humans – with the assistance of AI – will have an audience of one. Most art generated each day will be consumed only by its human co-creator. Very little completed art will be shared widely with others – although a small percentage of it will be.
If most art created in the future is not shared, why is it made? It will be made chiefly for the pleasure of making it. In other words, the majority of all the creative work in the future will be made primarily for the joy and thrill of co-creation.
Right now roughly 50 million images are generated each day by AIs such as Midjourney, Google, and Adobe. Vanishingly few of these 50 million images per day are ever shared beyond the creator. Still image creation today already predominantly has an audience of one.
A large portion of these still images are preliminary: a sketch, a first draft, a doodle, a memo, a phrase, not meant to be shared. But even among those creations that are completed, very few are shared – because they were made for the pleasure of making them. You can generate an endless stream of beauty for the same reason you take a stroll through a garden, or hike into the mountains – in the hope you’ll catch a moment of beauty. You might try to share what you find, but it is not why you went. You went to co-create it. I think of a walk in a garden, or a hike in the high mountains – a hike that is not necessary for transportation – as an act of co-creation. Together with nature, we are co-creating the moments of beauty we might find. Most of the beauty in the world is never seen by anyone. When we encounter these glimpses of a vista, or an exquisite way something is backlit, we are an audience of one. The joy is in discovering it; sharing is an afterthought.
We have some traditional analogs of an audience of one in journals, sketchbooks, diaries, and logbooks. The creations in these forms are not intended to be shared beyond the creator, and in some cases, that limited audience is what makes them powerful. They bring a type of protective solitude to the creative act, and that power will also be part of an AI-based Audience of One. These kinds of private art act as a generative platform for bigger things. Reckoned in volume, a bona fide artist may create far more material in private than is ever seen in public. However, if you ask artists why they fill notebooks and sketchbooks and journals, they will not say it is because this creation is inferior, but because they love to create it, because they enjoy it.
For those who view art primarily as an act of communication, this art for an audience of one – traditionally found in journals and sketchbooks – still serves as communication, but to the self. In a curious way, AI can elevate self-communication, because its co-creativity enlarges the canvas and deepens the details of this communication to yourself. It is an enlarged self-inquiry.
In an AI-enhanced world, the realm of journals, sketchbooks, diaries and other private forms is expanded. Instead of compiling simple notes, doodles, fast impressions, small observations and other acts that can be done quickly, our journals, sketchbooks, and diaries will include fully rendered paintings, entire novels, feature-length movies, and immersive worlds.
These new creations will shift the time asymmetry long associated with creation. Until now an author would toil for years on a book that could be consumed in a day; a painter sweated for months over a painting viewed in seconds; a million work hours would be put into a movie that is watched in two hours. Henceforth, it may be quicker to generate a movie than to watch one; quicker to co-create a historical novel than to read one; faster to co-make a video game than to play it. This shifts the center of gravity away from consumption toward generation, in a good way.
I don’t believe that total viewing hours in a society will ever exceed total creation hours, but AI-based co-creation can help redress that imbalance. It makes entering into the creation mode much easier – without the need for an audience to justify the effort. From now on, the default destiny for most art will be an audience of one, and it will abide in the memory of those who generate it. While some of this co-generated work might find a larger audience, and some very tiny fraction of it might even become a popular hit, its chief value will be in the direct, naked pleasure of co-making it.
Weekly Links, 06/27/2025
- Dark Matter = The weight of information. That’s one of the speculations in an offbeat but creative alternative theory of quantum physics that is not entirely a crackpot idea. Way out there. The radical idea that space-time remembers could upend cosmology
Weekly Links, 05/23/2025
- We are going to hear more about Prompt Theory. I, for one, fully believe in Prompt Theory. Prompt Theory (Made with Veo 3) – AI-generated characters refuse to believe they were AI-generated
Weekly Links, 04/25/2025
- Something I knew nothing about: the degree to which job interviewees are trying to cheat with AI, and the difficulties that creates for making good hires. Tech hiring: is this an inflection point?
- Excellent article on why humanoid robots are slow in coming and why they may take a lot longer to arrive in your home. Robot Dexterity Still Seems Hard
- There are a number of medical technologies that are feasible in the short term but lack sufficient funding to make them happen. I can’t vouch for this list of good tech that could but probably won’t happen in 5 years, but it is a good place to start. 10 technologies that won’t exist in 5 years
Epizone AI: Outside the Code Stack
Thesis: The missing element in forecasting the future of AI is to understand that AI needs culture just as humans need culture.
One of the most significant scientific insights into understanding our own humanity was the relatively recent realization that we are the product of more than just the evolution of genes. While we are genetic descendants of some ape-like creatures in the past, we modern humans are also molded each generation by a shared learning that is passed along by a different mechanism outside of biology. Commonly called “culture”, this human-created environment forms much of what we consider best about our species. Culture is so prevalent in our lives, especially our modern urban lives, that it is invisible and hard to recognize. But without human culture to support us, we humans would be unrecognizable.
A solo, naked human trying to survive in the prehistoric wilderness, without the benefit of the skills and knowledge gained by other humans, would rarely be able to learn fast enough to survive on their own. Very few humans by themselves would be able to discover the secrets of making fire, or the benefits of cooking food, or the medicines found in plants, or learn all the behaviors of the animals they hunt, let alone the additional education needed for planting crops, knapping stone points, sewing, and fishing.
Humanity is chiefly a social endeavor. Because we invented language – the most social thing ever – we have been able not only to coordinate and collaborate in the present, but also to pass knowledge and know-how along from generation to generation. This is often pictured as an evolution running parallel to the fundamental natural-selection evolution of our bodies. Inside the long biological evolution happening in our cells, learning is transmitted through our genes. Anything good we learn as a species is conveyed through inheritable DNA. And that is where learning ends for most natural creatures.
But we humans launched an extended evolution that transmits good things outside of the code of DNA, embedded in the culture conveyed in families, clans, and human society as a whole. From the very beginning this culture has contained laws, norms, morals, best practices, personal education, world views, knowledge of the world, learnable survival skills, altruism, and a pool of hard-won knowledge about reality. While individual societies have died out, human culture as a whole has continued to expand, deepen, grow, and prosper, so that every generation benefits from this accumulation.
Our newest invention – artificial intelligence – is usually viewed in genetic terms. The binary code of AI is copied, deployed, and improved upon. New models are bred from the code of former leading models – inheriting their abilities – and then distributed to users. One of the first significant uses for this AI is in facilitating the art of coding, and in particular helping programmers to code new and better AIs. So this DNA-like code experiences compounding improvement as it spreads into human society. We can trace the traits and abilities of AI by following its inheritance in code.
However, this genetic version of AI has been limited in its influence on humans so far. While the frontier of AI research runs fast, its adoption and diffusion run slow. Despite some unexpected abilities, AI so far has not penetrated very deep into society. By 2025 it has disrupted our collective attention, but it has not disrupted our economy, or jobs, or our daily lives (with very few exceptions).
I propose that AI will not disrupt human daily life until it also migrates from a genetic-ish, code-based substrate to a widespread, heterodox, culture-like platform. AI needs to have its own culture in order to evolve faster, just as humans did. It cannot remain just a thread of improving software/hardware functions; it must become an embedded ecosystem of entities that adapt, learn, and improve outside of the code stack. This AI epizone will enable its cultural evolution, just as human culture did for humans.
Civilization began as songs, stories, ballads around a campfire, and institutions like grandparents and shamans conveyed very important qualities not carried in our genes. Later, religions and schools carried more. Then we invented writing, reading, texts and pictures to substitute for reflexes. When we invented books, libraries, courts, calendars, and math, we moved a huge amount of our inheritance to this collaborative, distributed platform of culture that was not owned by anyone.
AI civilization requires a similar epizone running outside the tech stack. It begins with humans using AI every day, and an emerging skill set of AI collaboration taught by the AI whisperers. There will be alignment protocols, and schools for shaping the moralities of AIs. There will be shamans and doctors to monitor and nurture the mental health of the AIs. There needs to be corporate best practices for internal AIs, and review committees overseeing their roles. New institutions for reviewing, hiring, and recommending various species of AI. Associations of AIs that work best together. Whole departments are needed to train AIs for certain roles and applications, as some kinds of training will take time (and cannot just be downloaded). The AIs themselves will evolve AI-only interlinguals, which need mechanisms to preserve and archive them. There’ll be ecosystems of AIs co-dependent on each other. AIs that police other AIs. The AIs need libraries of content and intermediate weights, latent spaces, and petabytes of data that need to be remembered rather than re-invented. There are the human agents who have to manage the purchase and maintenance of this AI epizone, at local, national, and global levels. This is a civilization of AIs.
A solo, naked AI won’t do much on its own. AIs need a wide epizone to truly have consequence. They need to be surrounded by and embedded in an AI culture, just as humans need culture to thrive.
Stewart Brand devised a beautiful analogy for understanding civilizational traits. He explains that the functions of the world can be ranked by their pace layers, each of which depends on the layers below it. Running the fastest is the fashion layer, which fluctuates daily. Not far behind it in speed is the tech layer, which includes the tech of AI. It changes by the week. Below that (and dependent on it) is the infrastructure layer, which moves slower, and even slower below that is culture, which crawls in comparison. (At the lowest, slowest level is nature, glacial in its speed.) All these layers work at the same time, and upon each other, and many complex things span multiple layers. Artificial intelligence also works at several layers. Its code base improves at internet speed, but its absorption and deployment run at the cultural level. In order for AI to be truly implemented, it must be captured by human culture. That will take time, perhaps decades, because that is the pace of culture. No matter how quick the tech runs, the AI culture will run slower.
That is good news in many respects, because part of what the AI epizone does is incorporate and integrate the inheritable improvements in the tech stack and put them into the slower domain of AI culture. That gives us time to adapt to the coming complex changes. But to prepare for the full consequences of these AIs, we must give our attention to the emerging epizone of AIs outside the code stack.

Weekly Links, 03/21/2025
- This article “Fertility on Demand” by @RuxandraTeslo is a fantastic report on one way to increase the fertility rate by artificially extending reproductive age. Fertility on demand
Best Thing Since Sliced Bread?
The other day I was slicing a big loaf of dark Italian bread from a bakery; it is a pleasure to carve thick hunks of hearty bread to ready them for the toaster. While I was happily slicing the loaf, the all-American phrase “the best thing since sliced bread” popped into my head. So I started wondering, what was the problem that pre-sliced bread solved? Why was sliced bread so great?
Shouldn’t the phrase be “the best thing since penicillin”, or something like that?
What is so great about this thing we now take for granted? My thoughts cascaded down a sequence of notions about sliced bread. It is one of those ubiquitous things we don’t think about.
- Maybe the bread they are talking about is fluffy white Wonder bread that crushes really easily. That might be hard to slice, and so getting white bread pre-sliced is nice.
- Maybe the bread they are talking about was not as tender as it is today, and it was actually tough to slice very thin for a sandwich. Buying pre-sliced saved embarrassment, and so in that respect it was a wonder.
- Maybe it is hard to automate bread slicing, and while not that much of a selling point, maybe it took some technical innovation to make it happen. Otto Frederick Rohwedder, an American inventor, developed the first successful bread slicing machine in 1928, but it took some years for the invention to trickle into bakeries around the country.
- Maybe this was a marketing ploy by commercial bread bakers, to sell a feature that becomes a necessity.
- Maybe this phrase has always been said ironically. Maybe from the beginning everyone knew that sliced bread was a nothing burger, and the phrase was meant to indicate that the new thing was no big deal. Only later did the original meaning lapse and it became un-ironic.
- Maybe it is still ironic, and I am the last person to realize that it is not to be taken as an indicator of goodness.
Turns out I am not the first to wonder about this. The phrase’s origins lie — no surprise — in marketing the first commercial sliced bread in the 1930s. It was touted in ads as the best new innovation in baking. The innovation was not slices per se, but uniform slices. During WWII in the US, sliced bread was briefly banned in 1943 to conserve the extra paper used to wrap sliced loaves, freeing it for the war effort, but the ban was rescinded after two months because so many people complained of missing the convenience of sliced bread — at a time when bread was more central to our diets. With the introduction of a mass-manufactured white bread like Wonder Bread, the phrase became part of its marketing hype.
I think the right answer is the fourth guess — it’s a marketing ploy for an invention that turns a luxury into a necessity. I can’t imagine any serious list of our best inventions that would include sliced bread, although it is handy, and is not going away.
That leads me to wonder: what invention today, the object of our infatuation, will be the sliced bread of the future?
Instagram? Drones? Tide pods, Ozempic?
This is the best thing since Ozempic!
Public Intelligence
Imagine, 50 years from now, a Public Intelligence: a distributed, open-source, non-commercial artificial intelligence, operated like the internet, and available to the whole world. This public AI would be a federated system, not owned by any one entity, but powered by millions of participants to create an aggregate intelligence beyond what one host could offer. Public intelligence could be thought of as an inter-intelligence, an AI composed of other AIs, in the way that the internet is a network of networks. This AI of AIs would be open and permissionless: any AI can join, and its joining would add to the global intelligence. It would be transnational, and wholly global – an AI commons. Like the internet, it would run on protocols that enable interoperability and standards. Public intelligence would be paid for by usage locally, just as you pay for your internet access, storage, or hosting. Local generators of intelligence, or contributors of data, would operate for profit, but in order to get the maximum public intelligence, you would need to share your work in this public non-commercial system.
For an ordinary citizen, the AI commons of public intelligence would be an always-on resource that would deliver as much intelligence as they required, or are willing to pay for. Minimal amounts would be almost free. Maximum amounts would be gated and priced accordingly. AI of many varieties will be available from your own personal devices, whether a phone, glasses, a vehicle, or a bot in your house. Fantastic professional intelligence can also be bought from specialty AI providers, like Anthropic and DeepSeek. But public intelligence offers all these plus planetary-scale knowledge and a super intelligence that works at huge scales.
Algos within public intelligence route hard questions one way and easy questions another, so most citizens deal with the public intelligence through a single interface. While public intelligence is composed of thousands of varieties of AI, and each of those comprises an ecosystem of cognitions, to the user they appear as a single entity, a public intelligence. A good metaphor for the technical face of this aggregated AI commons is to imagine it as a rainforest, crowded with thousands of species, all co-dependent on each other, some species consuming what the others produce, all of them essential for the productivity of the forest.
Public intelligence is a rainforest of thousands of species of AI, and in summation it becomes – like our forests and oceans – a public commons, a public utility at a global scale.
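As a sketch of how that difficulty-based routing might work behind the single interface, here is a toy dispatcher. Every name in it is hypothetical; no such public protocol or API exists yet.

```python
# Hypothetical sketch: one interface routing queries across member AIs.
# The member models, costs, and difficulty heuristic are all invented.

from dataclasses import dataclass
from typing import Callable

@dataclass
class MemberAI:
    name: str
    cost: float                   # price per query, arbitrary units
    answer: Callable[[str], str]  # the member model's query function

def difficulty(query: str) -> float:
    """Crude stand-in for a learned difficulty estimator."""
    return min(1.0, len(query.split()) / 50)

def route(query: str, members: list[MemberAI]) -> str:
    """Send easy queries to the cheapest member, hard ones to the most capable."""
    ranked = sorted(members, key=lambda m: m.cost)
    chosen = ranked[-1] if difficulty(query) > 0.5 else ranked[0]
    return chosen.answer(query)

# Usage: two toy members behind the one public interface.
local = MemberAI("local-small", 0.01, lambda q: f"[local] {q}")
swarm = MemberAI("federated-large", 1.00, lambda q: f"[swarm] {q}")
print(route("What time is it?", [local, swarm]))  # easy -> cheap local model
```

The design point is that the commons can remain a single entity to the citizen while staying a rainforest of models underneath.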
At the moment, the training material for the artificial intelligences we have is haphazard, opaque, and partial. So far, as of 2025, LLMs have been trained on a very small and very peculiar set of writings, far from either the best, or the entirety, of what we know. For archaic legal reasons, much of the best training material has not been used. Ideally, the public intelligence would be trained on ALL the books, journals, and documents of the world, in all languages, in order to create for the public good the best AIs we can make for all.
As the public intelligence grows, it will continue to benefit from having access to new information and new knowledge, including very specific, and local information. This is one way its federated nature works. If I can share with the public intelligence what I learn that is truly new, the public intelligence gains from my participation, and in aggregate gains from billions of other users as they contribute.
A chief characteristic of public intelligence is that it is global, or perhaps I should say, planetary. It is not only accessible by the public globally, it is also trained on a globally diverse set of training materials in all languages, and it is planetary in its dimensions. For instance, this AI commons integrates environmental sensing data – such as weather, water, air traffic – from around the world, and from the cloak of satellites circling the planet. Billions of moisture sensors in farmland, tide flows in wetlands, air quality sensors in cities, rain gauges in backyards, and trillions of other environmental sensors feed rivers of data into the public intelligence, creating a sort of planetary cognition grid.
Public intelligence would encompass big thoughts about what is happening planet-wide, as well as millions of smaller thoughts on what is happening in niche areas, fed by specific information and data, such as DNA sampling of sewage water to monitor the health of cities.
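To picture one drop in those rivers of data, here is a minimal sketch of how a single local sensor reading might be packaged for the commons. The record schema and field names are hypothetical.

```python
# Hypothetical sketch: packaging one environmental reading for the commons.
# The schema is invented; a real system would need identity, signing, and privacy.

import json
import time

def reading(sensor_id: str, kind: str, value: float, unit: str) -> str:
    """Bundle a single measurement into a shareable, timestamped record."""
    return json.dumps({
        "sensor": sensor_id,
        "kind": kind,          # e.g. "rainfall", "soil_moisture", "sewage_dna"
        "value": value,
        "unit": unit,
        "ts": time.time(),     # epoch seconds
    })

print(reading("backyard-rain-42", "rainfall", 3.2, "mm/h"))
```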
There is no public intelligence right now. Currently OpenAI is not a public intelligence; there is very little open about it beyond its name. Other models in 2025 that are classified as open source, such as Meta’s and DeepSeek’s, lean in the right direction, but are open only to very narrow degrees. There have been several initiatives to create a public intelligence, such as Eleuther.ai and LAION, but there is no real progress or articulated vision to date.
The NSF (in the US) is presently funding an initiative to coordinate international collaboration on networked AI. This NSF AI Institute for Future Edge Networks and Distributed Intelligence is primarily concerned with trying to solve hard technical problems such as 6G and 7G wireless distributed communication.
Diagram from NSF AI Institute for Future Edge Networks and Distributed Intelligence
Among these collaborators is a program at Carnegie Mellon University focused on distributed AI. They call this system AI Fusion, and say “AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices.” The program imagines this fusion as an emerging platform that enables distributed artificial intelligence to run on many devices, in order to be more scalable, more flexible, and more active in redirecting itself when needed, or even in finding the data it needs instead of waiting to be given it. But in none of these research agendas is the mandate of a public resource, open source, or an intelligence commons more than a marginal concern.
Sketch from AI Fusion
A sequence of steps will be needed to make a public intelligence:
- We need technical breakthroughs in “Sparse Activation Routing,” enabling efficient distribution of computation across heterogeneous devices from smartphones to data centers. We need algos for dynamic resource allocation, automated model verification, and enhanced distributed security protocols. And we need breakthroughs in collective knowledge synthesis, enabling the public intelligence to identify and resolve contradictions across domains automatically.
- We need to release a Public Intelligence Protocol, establishing standards for secure model sharing, training, and interoperability, and establish a large-scale federated learning testbed connecting 50+ global universities, demonstrating the feasibility of training complex models without centralizing data (a minimal sketch of that federated step appears after this list). A crucial technology is continuous-learning protocols, which enable models to safely update in real time based on global usage patterns while preserving privacy.
- We need to pioneer national policies in small hi-tech countries such as Estonia, Finland, and New Zealand that explicitly support public intelligence infrastructure as a digital public good, giving this commons a place to be prototyped.
- An essential development would be the first legal framework for an AI commons, creating a new class of digital infrastructure with specific governance and access rights. This would go hand in hand with two other needed elements: “Differential Privacy at Scale” techniques, allowing sensitive data to be used for training while providing mathematical guarantees against privacy breaches. And “Community Intelligence Trusts,” allowing local communities to maintain specialized knowledge and capabilities within the broader ecosystem.
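As one way to picture the federated-learning and privacy pieces together, here is a minimal sketch of federated averaging, with Gaussian noise standing in for differential privacy. It is illustrative only; a real protocol would add secure aggregation, gradient clipping, and formal privacy accounting.

```python
# Minimal sketch of federated averaging with a differential-privacy flavor.
# All constants are invented for illustration.

import random

def local_update(weights: list[float], data_bias: float) -> list[float]:
    """Each participant nudges the shared weights toward its own local data."""
    return [w + 0.1 * (data_bias - w) for w in weights]

def dp_noise(scale: float = 0.01) -> float:
    """Gaussian noise added before sharing, a stand-in for differential privacy."""
    return random.gauss(0.0, scale)

def federated_round(global_w: list[float], participants: list[float]) -> list[float]:
    """One round: every site trains locally, then only noised updates are averaged."""
    updates = []
    for bias in participants:
        local_w = local_update(global_w, bias)
        updates.append([w + dp_noise() for w in local_w])  # raw data never leaves
    return [sum(ws) / len(ws) for ws in zip(*updates)]

weights = [0.0, 0.0]
for _ in range(20):
    weights = federated_round(weights, participants=[0.8, 1.0, 1.2])
print(weights)  # drifts toward the participants' average (~1.0) without pooling data
```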
There is a very natural tendency for AI to become centralized by a near monopoly, and probably a corporate monopoly. Intelligence is a networked good. The more it is used, the more it can learn. The more it learns, the smarter it gets. The smarter it gets, the more it is used. Ad infinitum. A really good AI can swell very fast as it is used and gets better. All these dynamics push AI to become centralized, a winner-take-all. The alternative to public intelligence is a corporate or a national intelligence. If we don’t empower public intelligence, then we have no choice but to empower non-public intelligences.
The aim of public intelligence is to make AI a global commons, a public good for the maximum number of people. Political will to make this happen is crucial, but equally essential are the technical means – brilliant innovations that we don’t yet have and that are not obvious. To urge those innovations along, it is helpful to have an image to inspire us.
The image is this: A Public Intelligence owned by everyone, composed of billions of local AIs, needing no permission to join and use, powered and paid for by users, trained on all the books and texts of humankind, operating at the scale of the planet, and maintained by common agreement.
The Unpredicted
It is odd that science fiction did not predict the internet. There are no vintage science fiction movies about the world wide web, nor movies that showed the online web as part of the future. We expected picture phones and online encyclopedias, but not the internet. As a society we missed it. Given how pervasive the internet later became, this omission is remarkable.
On the other hand, there have been hundreds of science fiction stories and movies predicting artificial intelligence. And in nearly every single one of them, AI is a disaster. They are all cautionary tales. Either the robots take over, or they cause the end of the world, or their super intelligence overwhelms our humanity, and we are toast.
This ubiquitous dystopia of our future with AI is one reason why there is general angst among the public about this new technology. The angst was there even before the tech arrived. The public is slightly fearful and wary of AI based not on any experience with it, but because this is the only picture of it they have ever seen. Call up an image of a smart robot and you get the Terminator or its ilk. There are no examples of super AI working out for good. We literally can’t imagine it.
Another factor in this contrast between predicting AI and not predicting the internet is that some technologies are just easier to imagine. In 1963 the legendary science fiction author Arthur C. Clarke created a chart listing existing technologies that had not been anticipated widely, in comparison to other technologies that had a long career in our imaginations.
Clarke called these the Expected and the Unexpected, and published the chart in his book Profiles of the Future.
Clarke does not attempt to explain why some inventions are expected while others are not, other than to note that many of the expected inventions have been anticipated since ancient times. In fact their reality – immortality, invisibility, levitation – would have been called magic in the past.
Artificial beings – robots, AI – are in the Expected category. They have been so long anticipated that no other technology or invention has been as widely or thoroughly anticipated before it arrived. What invention might even be second to AI in terms of anticipation? Flying machines may have been longer desired, but there was relatively little thought put into imagining what their consequences might be. Whereas from the start of the machine age, humans have not only expected intelligent machines, but have expected significant social ramifications from them as well. We’ve spent a full century contemplating what robots and AI would do when they arrived. And, sorry to say, most of our predictions are worrisome.
So as AI is beginning to finally hatch, it is not being as fully embraced as, say, the internet was. There are attempts to regulate it before it is operational, in the hopes of reducing its expected harms. This premature regulation is unlikely to work because we simply don’t know what harms AI and robots will really do, even though we can imagine quite a lot of them.
This lopsided worry, derived from being Over Expected, may be a one-time thing unique to AI, or it may become a regular pattern for tech into the future, where we spend centuries brewing, stewing, scheming, and rehearsing for an invention long before it arrives. That would be good if we also rehearsed for the benefits as well as harms. We’ve spent a century trying to imagine what might go wrong with AI. Let’s spend the next decade imagining what might go right with AI.
Even better, what are we not expecting that is almost upon us? Let’s reconsider the unexpecteds.