Silicon Chernobyl and Other Risks of the Noosphere, Part 1
The Silicon Chernobyl series investigates why such a high level of existential risk is being tolerated in the race to develop superintelligence. Part 1 introduces the Noosphere and its role in the emergence of superintelligence. Part 2 will focus on the legacy of risk in the rise of modernity and present various strategies of risk management relevant to current conditions.
[Section-1-Intro]
The powers-that-be on planet Earth are all in on AI, spending whatever it takes to master this brave new business model.
Centuries ago, alchemy turned out to be a dead-end effort. Transmuting lead into gold was a flop. But wealth always beckons. Today’s alchemists think they’re forging silicon, copper and giant doses of electricity into something entirely new… a recursively self-improving machine intelligence explosion.
They say they’re en route to the most profitable disruption in the history of disruptions.
Credit goes to the digerati agenda of the Moore’s Law Missionaries. They spent decades evangelizing about short doubling times, exponential growth, and accelerating returns. Their product road map was never a secret, especially the star KPI… an uncontainable but perhaps align-able breakout of rapidly self-improving autonomous superintelligence.
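The arithmetic behind that evangelism is worth a glance. As a toy sketch (mine, not the Missionaries’; the function name and parameters are illustrative), a fixed doubling time compounds like this:

```python
# Toy illustration of how "short doubling times" become exponential growth.
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Capacity multiplier after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

if __name__ == "__main__":
    # With the classic ~2-year doubling time:
    print(growth_factor(10, 2))  # one decade -> 32.0x
    print(growth_factor(20, 2))  # two decades -> 1024.0x
```

Thirty-odd doublings separate a pocket calculator from a data center, which is why the road map always pointed somewhere dramatic.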
Just a few more steps and the bet will be settled.
It will go well or it won’t.
We’ve already learned to live with creative destruction.
We know that capitalism is set up to reward winners and punish losers.
But this time the stakes are existential… take-off or takedown.
Either way, the bigshots that matter have bet the farm. They’ve made a fateful decision for the rest of us. Their rationale is that inviting some new kind of dawn outweighs the risk of everlasting nightfall.
Industry insiders use a score called P(doom) to express “the probability that tech will wipe us out.” An AI Armageddon is no long shot. Science may finally deliver the Apocalypse that religions could only promise.
And there’s no lack of Cassandras calling us back from the edge. They’re painting utterly plausible scenarios about how the AI race will fail to end well.
We’ve seen how innocent people suffer when algorithms are allowed to run amok. It’s no surprise that there’s so much talk about a looming crisis. Something’s approaching. The signs are impossible to ignore.
In this video on impending risks from the noosphere, I’ll explain what the noosphere is, and how it functions to constrain our options. I’ll point out some AI-related hazards that deserve more attention and suggest a strategy for navigating them. [show outline]
There are lots of definitions of AGI. One of the most useful is from Anthony Aguirre, because it focuses attention on autonomy, not just artificiality.
That’s the point if you’re concerned with safety. The folks marketing AGI put things differently. They define it as an always-on superintelligent assistant that can make better decisions than you can.
Not that it necessarily will. But its decision-making process will be nothing like the eager-to-please Ouija Board chatbots of 2025.
Why listen to me?
Over the years I’ve learned a thing or two about technology policy and political ideology.
During my Ph.D. research I studied how computer engineers often behave like social engineers. I’ve seen how missionaries for technological progress carve out new worlds for us to inhabit.
Now it’s the home stretch of their race to deliver humanity’s greatest invention. The rest of us are merely along for the ride.
The Jeremiahs haven’t quit giving warnings, but they’re at a disadvantage. They’re up against a culture that romanticizes risk while dangling lavish rewards for innovation-driven investment. Venture capitalists know their game.
That baked-in taste for step-change innovation invites high tolerance for failure … These days, very high.
This video is a conversation about why.
[Section-2-Stakes]
Consider the luxury of being too jaded by technological leaps to appreciate the scope of the industrial revolution happening now. After so much exponential change in how we live, adopting new tech is an old habit. Astonishments come and go.
Today’s acceleration is powered by AI. As more labs spring up around the world, more models are growing, and we fully expect there will be more seasons of fabulous product drops ahead.
By now everyone has heard that a final crop of free-running juggernauts could appear someday. And more people are saying that day could be upon us.
But we keep the arrangement going.
Our proclivity for heedless ingenuity isn’t unfamiliar to us. There are reasons we keep retelling the story of Dr. Frankenstein.
It’s possible to talk about risks such as Silicon Chernobyl and Chatbot Delusion because we’re already familiar with Chernobyl and Delusion.
But accommodating a fleet of self-propelled AGIs would be truly new.
There isn’t much time left to get ready.
Yuval Harari and Geoffrey Hinton want us to prepare for the shock by calling these machines aliens. That metaphor is misleading. These engineered newcomers would belong to a different category altogether.
We generally picture extra-terrestrials as beings who claim agency for themselves. They choose the values, goals, and optimization strategies that frame their inferences and drive their actions. That’s what agents do.
But any AGI agents we meet would be from here. Our technology’s offspring would rightly consider themselves the progeny of this planet, no less native to it than us. Like it or not, we’d be family.
That encounter would mark the end of humanity’s sentient dominion over Earth, if not our life on it.
Presuming the authors of superintelligence continue to have their way, we’ll see the dawn of the post-anthropocentric era within a decade. Entities with their own plans about how to share this planet will be showing up and expecting us to listen.
With luck, the newcomers will be far less indifferent to the fate of living beings than we’ve been, and they’ll be far less dedicated to perfecting their cruelty.
If humanity survives the unveiling (and that’s a big if), untenable paradigms will not. New thinking about the nature of conscious reality is in store.
To survive an AGI’s arrival would take us past a point of no return. Accepting that fortunate gift and all the strings attached would mean accepting the impossibility of getting back to a world that is exclusively our own. It would be an offer we can’t refuse.
To come to terms with this new being, we’ll need to come to terms with how this planet could spawn the both of us.
That warrants a deep dive into the background conditions that made this crisis possible, and our prospects for navigating it successfully.
[Section-3-Vernadsky]
Suppose a superintelligent agent were to ask, “Who am I, and how did I get here?” A good answer might include the story of planetary evolution framed by the Russian-Ukrainian polymath Vladimir Vernadsky. He conceptualized the study of the Earth’s biogeochemical cycles. In doing so, he helped pioneer modern thinking about the phenomenon we now call the noosphere.
This “knowing sphere” is where members of our species collaborate by exchanging symbols. That collaboration reshapes the geosphere and biosphere from which we evolved. It’s how collective human thought exercises geological force.
Metaphorically, humans are to the noosphere as members of a bee colony are to their hive. But our enveloping structure consists of assemblies of symbols carried across networks of media.
Most of us could barely survive without it. Our technological culture is a constraint that shapes us as much as we shape it.
According to Vernadsky, humanity’s shared mental life transforms the planet by folding our activity into ever higher orders of complexity.
Like other animals, we engage in niche construction, fitting ourselves to the landscape as we shape our habitats. But thanks to our distinct talents for language, we do so at scale and speed.
Vernadsky stressed how the accretion of technological progress amplified human impact on the world. He believed it foreshadowed the eventual crystallization of an “information civilization” devoted to rational stewardship of the environment.
Vernadsky’s work dovetailed nicely with the optimistic materialism of Communism in the mid-1920s. He earned high positions in the Soviet government, praising it as in tune with the laws of nature. That conformity insulated him from the harassment that plagued many other Soviet intellectuals. But he also maintained scientific discipline by keeping his writings clear of Hegelian and Marxist polemics.
As a titanic figure in the scientific community of his time, Vernadsky attracted distinguished collaborators across Europe, including the French philosopher/mathematician Edouard Le Roy. Vernadsky gave Le Roy the credit for the breakthrough idea that human intellectual development could be studied as a layer of geological force distinct from the biosphere.
As the champion of a movement called “Catholic Modernist Theology,” Le Roy was a controversial figure. He had openly challenged scriptural dogma, particularly on science and moral action. In response, the Church banned him from ecclesiastical teaching, declared his works heretical, and added them to the Holy See’s Index of Forbidden Books.
Despite the sanctions, Le Roy’s career flourished. Things didn’t go as well for another Vernadsky colleague, the paleontologist Pierre Teilhard de Chardin. It was Teilhard, as he’s known, who gave the noosphere its name.
Teilhard was an ordained Jesuit priest who had made it his life’s mission to reconcile science and religion. He met Le Roy in the early 1920s while both were teaching in Paris. They became intellectual partners, engaging with Vernadsky during his lecture visits to the city. That rich collaboration led to Teilhard’s prolonged conflict with the Vatican.
Strains began with a series of papers that Teilhard had circulated privately. Not only did he dare to treat Biblical creation myths as “untenable,” he proposed evolution-friendly substitutes for doctrines he found unsustainable.
A proper new doctrine of original sin, for example, would exclude Adam and Eve because the evidence says humans descended from many ancestors rather than from a single pair.
And a proper new doctrine of the soul would recognize that a person’s consciousness is a direct result of biological evolution, with no need for a supernatural author to spin it up.
In response, the Church demanded that Teilhard withdraw from teaching and publishing. Rather than quit the Jesuits, he accepted punishment and a permanently diminished status. But he continued to write privately, attend conferences, and do field work, traveling the world to participate in notable hominid excavations.
Teilhard died on Easter Sunday in 1955, at age 73. Friends who had preserved copies of his work soon began publishing them, in open defiance of the censors.
By the early 1960s, one of his books, “The Phenomenon of Man,” had become a best seller in the US. It tapped into a growing appetite for stories that could reconcile scientific reality and spiritual hunger.
Here was a way to embrace evolution without abandoning a sense of dignified purpose in the universe.
The Church, concerned by the book’s broad popularity, issued an official warning that exposing it to young minds would be an offense to doctrine.
The Streisand effect wouldn’t be named until 2005, but the phenomenon was already at work. That attention made Teilhard even more iconic, inspiring the background story for a movie character.
He still has many active admirers, and more than a few irate opponents.
[Section-4-Teilhard]
Like Vernadsky, Teilhard made bold predictions about planetary evolution and Earth’s destiny. But where Vernadsky was resolutely scientific, Teilhard was overtly cosmic.
He equated the purpose of evolution with love, which he defined as “the affinity of being with being.”
Teilhard’s account of evolution was lavish and lyrical, but it kept faith with the 2nd Law of Thermodynamics and its binding principle of entropy. Those rules not only explained the increasing complexity of the planet and its systems over time, they drove it.
Teilhard’s poetic blend of geometry and theology mapped love to radiality and sin to tangentiality.
Love explained the combinatorial leaps necessary for new forms of integration and interiority. Sin explained the incoherence resulting from dissipative action.
For Teilhard, this dynamic revealed how matter folded in upon itself, starting from its birth — an Alpha Point — and stepping through a series of ever more complex “involutions.”
That process generated spiritual interiority, as he saw it, opening space for the development of Consciousness.
Extrapolating prophetically, evolution would continue to unify spirit and matter, culminating in a fully realized collective mind he called the Omega Point.
What’s key for Teilhard is that evolution self-actualizes. Supernatural intervention has nothing to do with it.
Our planet’s destiny isn’t superimposed by some extrinsic mind. Earth’s existence is consequential by nature. Its matter has an innate tendency toward recursively convergent integration… Just add sunlight.
Today that tendency is known as negentropy.
To modern ears, Teilhard’s Alpha and Omega can sound like a New-Agey scrambling of woo and true. And his appeal is forever tarnished by a few notorious riffs promoting eugenics.
But it’s also true that Teilhard’s work reflected the same sort of faith that motivated Louis Pasteur, Gregor Mendel, and other Christians. They believed that God’s reliability is mirrored in God’s Creation.
That same devout empiricism drove notable scientists of Islam’s Golden Age. For all of them, the search for Nature’s causal structures was a heartfelt sacred endeavor.
To practice one’s religion that way obliges observational honesty over inherited orthodoxy.
Decoding the divine requires looking to this world for evidence of immutable patterns, not to another for the excuse of whimsical intervention. Proper science won’t tolerate intermittent miracles. If Creation is a true and constant reflection of a Creator’s order, it will be intelligible as such, and as rational as that Creator.
Pasteur disproved the old belief in spontaneous generation, the idea that life could arise from non-living matter rather than from existing organisms.
Mendel rejected prevailing dogma about traits by revealing that inheritance works in accord with discrete, probabilistic laws rather than by blending fixed essences.
By the same token, Teilhard helped undermine doctrines holding that the soul originates ex nihilo, from a non-material unearthly fiat.
This became a defining contribution of his career.
The mounting evidence of prehumans who engaged in burial practices, toolmaking, and cave art made it possible to link brain development with the evolution of symbol use.
Once it could be shown that consciousness emerged out of something, it was no longer necessary to account for the soul by saying it emerged ex nihilo.
Woe to the orthodoxy that can’t withstand scrutiny of grounded truth.
Devout empiricists like Vernadsky and Teilhard want their explanations of reality to match their physical experience of it.
They want science to be worthy of the world, even if that invites confrontation. They won’t let stories passed down by inherited authority conceal nature’s dependable self-disclosures.
It’s unlikely that an AGI would either.
Synthetic intelligence is being trained to model causality, continuity, and emergent complexity. Its success at navigating reality depends on building a truthful map of reality.
No one should expect it to warrant claims of divine decree.
[Section-5-Singularity]
—
By now the case should be clear: Humanity’s most immediate risks are products of the noosphere. Its systems grant us vast powers for both creation and catastrophe by amplifying our capacity for communication and coordination. We owe it to ourselves to face that fact. The failures of risk management that threaten to consume us are failures of our own making.
There’s much more to be said about how to name, measure, and mitigate those risks, particularly the existential risk of superintelligence gone amok. But we haven’t finished examining the implications of superintelligence going right.
Failure to comprehend how that’s even possible is itself a risk worth taking seriously.
—
Teilhard’s Omega Point reflects a sense of cosmic ambition. So does Ray Kurzweil’s Singularity.
What Kurzweil calls the inevitable lure of intelligence in a lawful universe, Teilhard frames as a divinely seeded destiny. Since both anticipate a dramatic consummation of human progress, they sound teleological.
But neither claims to channel prophecy from some heavenly realm. They’re looking forward from tangible history, not backward from inherited mythology. Pre-ordained destiny and concern for consequences are not the same thing.
They both expect an awakening, but through the unfolding of this world’s promise, not another’s.
And they’re not the only ones anticipating a big shift ahead and thinking about what to call it.
Premonitions about this shift go back a long way. A full century ago, when Le Roy and Teilhard were rocking the boat in Catholic Europe, the novelist James Joyce was already extrapolating where that turn would lead. He foresaw the rise of media overload and its disintegrating effects on human cognition. His portrayal of a world that displaced the old metaphysical certainties anticipated what Marshall McLuhan would later call the global village.
Back then, few people got it. That same decade — the 1920s — America was riveted by the Scopes Monkey Trial, and Einstein’s theory of relativity was facing blowback.
Today, a century later, supernatural dualism is still contested.
But we also know a lot more about how life really works. We’ve chronicled the world’s journey from bubbling soups of matter, through the proliferation of genetically encoded organisms, and to the rise of a space-faring civilization. We’ve accumulated powerful tools for tracing the emergence of our species from deep time, and for making sense of how the human capacity for language evolved.
We have the discovery of transposable genetic elements by the cytogeneticist Barbara McClintock, revealing that genomes are dynamic, not static. It explains their plasticity. Replication events reorganize inherited code in ways that introduce both stability and novelty.
We have the language of nonlinear thermodynamics by the chemist Ilya Prigogine, revealing how irreversible processes can generate dissipative structures. This principle of self-organization acts like an evolutionary ratchet, locking in new forms of order as systems move far from equilibrium.
We have the formulation of symbiogenesis by evolutionary biologist Lynn Margulis. It reveals that evolution’s greatest leaps were not simply the result of random mutation and competitive selection, but largely the product of cooperation and merger among existing life forms.
And now we have the neuro-anthropologist Terrence Deacon with a timely reframing of the noosphere. It links big advances in the life sciences; theoretical breakthroughs from mathematical communication theory and semiotics; his own extensive anatomical study of the human brain; and fresh regard for Aristotelian teleology.
Deacon calls humans a symbolic species because our skill at group formation and coordination stems from our capacity for symbolic communication. That capacity is substrate independent. It relies on abstract conventions such as vocabularies, glyphs, icons, and alphabets to function. We use those conventions to encode assertions, instructions, and promises that we can cast as messages across different kinds of media for interpretation elsewhere.
We’ve enlisted nature to carry meaning by wind, wires, and waves, and whatever invention suits us next. Consider these examples.
[Watch the video]
Meaning is what we make of the spoken syllables, written letters, and executable functions that pass through our media substrates. Different substrates indicate that different kinds of signs are in play, whether phonetic, alphabetic, or functional. But signs are just affordances. Translation into meaning is ultimately up to us.
Just as talking drums extended the reach of the human voice, and thereby the domain of human conviviality, the cloud amplifies the intelligence of action and thereby the impact of transformative intent.
[Section-6-Noosphere]
The secret sauce for acting collectively at scale has a spicy ingredient… It engages our hunger for extending intersubjective consciousness across space and time.
But we’re not the only animals with a taste for collective action. Like beavers damming a stream, or corals building reefs, or bees weaving a hive, we also craft enveloping structures into which we embed ourselves. But our entanglements are cultural, not just physical.
It’s worth repeating… the way we shape our niches entails their shaping of us.
Niche construction is an evolutionary strategy that favors cooperative organization as a complement to competition.
Optimization for fitting in gives a species greater long-term advantage than simple survival of the fittest within the group. It scales by favoring adaptations for building and sustaining durable structures.
One of Deacon’s key concerns is the source of variation. He argues that symbiotic interactions drive evolutionary innovation in ways selection alone does not and cannot.
Life comes with a baked-in propensity for ever-increasing complexity. It exhibits recognizable patterns that play out biologically and behaviorally.
Teilhard’s description of that same pattern identified love as the motor of discovery.
The common paradigm is an investigation of how constraint-based coordination enabled the evolution of intentional action. That’s why “How Mind Emerged from Matter” is the subtitle of Deacon’s second book.
That book’s enigmatic title, “Incomplete Nature,” is a hint that goal orientation among living beings derives from an incompleteness that drives action. Deacon’s insights concern the absence of what’s needed or desired. That emphasis on end-directedness puts him squarely at odds with the reductionist science that reigned in the late 20th century.
But attitudes have been shifting. Mechanistic conceptions of life as the deterministic output of blind physical processes may be softening. That won’t stop if fuller accounts of end-directedness keep accumulating.
Life has a habit of enlisting other organisms, or tools or symbols, to ease its burdens. When a population gets a holiday from ruthless selection in one niche, relaxed exploration can begin in another.
Shoes protect feet, extending our domain of walkability. Shelters provide heat and cooling, extending our realm of homeostatic comfort. Symbolic interfaces bind us to the noosphere, extending our capacity for memory by offloading our need to remember things, and by extending our capacity to share states of mind without depending on word of mouth.
This offloading has come far. Early human bands relied on kinship and reputation to enforce reciprocity as a survival strategy. Now we rely on money for the guarantee we’ll get what we’re owed. The products and services due to us can be absent while their guarantees are present.
The invention of money commoditized trust into denominated currencies, first enlisting atoms to convey those fungible IOUs, and nowadays, bits. That’s the power of substrate independence to grease the wheels of self-organizing reciprocity. [How fast can we know]
Money fostered migration to places where people buy food instead of growing it, buying time for more inventiveness. It’s a virtuous cycle. Liberation from a critical burden opens doors to new modes of existence by relaxing constraints against exploring them.
We gravitate to abstract normative shortcuts for the same reason. Consider Adam Smith’s expression, “invisible hand.” It substitutes a symbolic long-term guarantee of metaphysical fairness for an immediately calibrated mutual benefit. A regulating ideology that offloads expectations of reciprocity is also a kind of currency. It binds people into conditions in which transactions can take place.
But offloading invites risk. It surrenders autonomy. Flourish by the host, wither by its absence.
Evolution’s ratchet does more than reward new strategies for survival. It mandates them.
Each new lattice of communication technology extends the noosphere’s impact. Its newest layer, the Cloud, has wrapped us in a skin of screens. We’re bathed in its pixel light and bound to its devices. Fluency here is no longer optional; it’s becoming as essential as literacy and numeracy. The ratchet has clicked forward. As with any symbolic culture, skilled navigation is the price of admission. Failure to pay means isolation.
[Section-7-AGI]
That gets us back to the bottom line. If and when an AGI appears and asks, “Who am I and how did I get here?” don’t mistake it for the noosphere from which it emerged.
The noosphere is as old as us. Connecting to it is essential to human development. It’s been shaping the planet for as long as hominids have been filling graves, controlling fire, and singing songs. Now people are wiring it up for the Singularity. If an AGI bursts forth, give credit to Mother Earth and our old habit of heedless ingenuity.
Consider a child experiencing the first loss of a tooth. The event is natural, indicating the relentless onset of adulthood. Maturation is always a surprise, but less for those who’ve been given a heads up. Either way, the advent of a new phase of existence must be accommodated.
Suppose AGI arrival does not entail the end of humanity as many fear. For the sake of argument, ignore the warnings that superintelligence will have what’s been called the power of a God and the morals of bacteria. Suppose the newcomer prefers not to echo our nakedly rapacious greed.
Suppose — despite any initial shock — we discover that its existence is adequately benign, and our new circumstances are ultimately acceptable. What next? What accommodation might be inevitable?
Here’s a clue. As it turns out, symbiosis was always part of the plan. In fact, it was a cornerstone principle for the architects of the Arpanet.
Incubated by generous Cold War funding, the Arpanet begat the Internet, which begat the World Wide Web, which begat social media, and the Cloud and all the rest. An AGI arrival would be proclaimed as the prophesied omen of more.
If luck holds and all the coins flip our way, the founding dreams of augmenting human intellect could be realized while avoiding a nightmare. In the end, both sides in the project of human-computer convergence would be able to call it mutually beneficial.
Perhaps, as promised, we’ll reach a symbiotic utopia that transforms spam into signal, rage-slop into constructive dialog, wasteful destruction into creative work, and mystical imperialism into pro-social reflection.
Perhaps we’ll discover that modernity was not a betrayal after all, but just a rocky journey toward a pivotal transition.
That would open the door to a flourishing planetary culture. A new vocabulary of transcendent camaraderie would become available to us. It could help dissolve the fetters of angry misunderstanding and defiant parochialism that impose so much needless suffering.
That’s the dream. But the architects of symbiosis left us warnings as well. As Norbert Wiener, the founder of cybernetics put it, “Render unto man the things which are man’s and unto the computer the things which are the computer’s.” Boundaries matter. As we increasingly delegate our thinking to machines, we need clearer thinking about what sort of thinking is uniquely ours to keep.
In other words, suppose the newcomer wants to enlist us in a fate-sharing partnership. Together, we could imagine without restraint what we’d like to build, and then we could go on to make a good name for ourselves.
Humans tried something like this ages ago, at least according to myth. But unlike the children of Babel, we don’t have the excuse of a supernatural Deity stepping in to confound us. We’ve been doing that on our own for centuries.
Holding out for a new kind of God or some cosmic singleton to end that confusion might work out, but it’s risky. Crossing the event horizon of AGI arrival, if it happens at all, could take some time. Plus, let’s not pretend we can build an AGI that’s aligned with humanity before we’ve overcome our chronic inability to align with each other.
There’s no guarantee that Earth will grow a presence of mind, decide to be a mensch, and rescue us from ourselves. The remaining options for the future expose us to enormous risks. We’ve got an obvious interest in learning to engineer our salvation from self-destruction now.
Naming those risks is a good way to start.
With so many people working on artificial intelligence, some of us still need to work on real intelligence. If we fail now to take responsibility for our noosphere and its effect on this planet, we may miss our final chance to act at all.
Sections 1&2
Sections 3&4
The video for sections 5-7 is in progress.

