Biology Is Self-Correcting
The Remarkable Resilience of Living Systems
The discourse around artificial intelligence has reached a fever pitch in recent years, with notable thinkers across disciplines offering their perspectives on our technological future. These predictions span a fascinating spectrum—from existential doom scenarios to utopian visions of enhanced human potential. Amid this chorus of speculation, one principle stands as a grounding force: the remarkable self-correcting nature of biological systems.
Biology represents the original technology—a sophisticated set of adaptive mechanisms refined across billions of years of evolutionary experimentation. This planetary-scale laboratory has produced innovations that still surpass our most advanced human engineering in elegance and efficiency. What makes biological systems particularly remarkable is their inherent capacity for self-correction: the evolutionary algorithm continuously tests variations, selects advantageous traits, and eliminates maladaptive features through an ongoing process of refinement.
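The test–select–eliminate loop described above is, at heart, an algorithm, and a toy version makes the mechanics concrete. The sketch below is a deliberately minimal caricature (a "ones-max" bit-string problem with made-up population and mutation parameters), not a model of real biology:

```python
import random

random.seed(0)  # fixed seed so the example run is reproducible

def evolve(fitness, genome_len=20, pop_size=50, generations=100):
    """Toy mutation-selection loop: rank by fitness, keep the fitter half,
    refill the population with mutated copies of the survivors."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)    # selection of advantageous traits
        survivors = pop[: pop_size // 2]       # maladaptive variants eliminated
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1  # variation: one bit flip
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Fitness here is simply the count of 1-bits in the genome.
best = evolve(fitness=sum)
print(sum(best))  # climbs toward genome_len as refinement accumulates
```

No individual step in the loop "knows" the goal; the self-correction emerges from variation plus differential survival, which is the essay's point about biology writ small.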
While human intervention often attempts to redirect or suspend these natural selection mechanisms, the fundamental resilience of biological systems persists. Our efforts to control evolutionary outcomes—whether through medicine, agriculture, or other forms of technological intervention—represent interesting experiments within the larger biological framework rather than replacements for it.
I’m deliberately assigning agency to biology here; the framing is metaphorical, but the insight it carries is not. Biological systems exhibit what Nassim Taleb calls “antifragility”: they don’t merely withstand stressors but often strengthen through exposure to them. Viewed through the lens of deep time, human civilization is just one fascinating chapter in Earth’s ongoing biological story.
Contextualizing AI Concerns Within Biological Evolution
When I encounter dramatic predictions about superintelligent AI dominating humanity, I find myself responding with a peculiar sense of philosophical calm. Rather than anxiety, I experience a perspective shift—whatever emerges will be part of Earth’s ongoing evolutionary narrative. Biology operates on timescales and with mechanisms that transcend our immediate concerns, continuously adapting to changing conditions regardless of which species or systems temporarily dominate.
If we engage in a thought experiment based on scenarios popularized in science fiction—whether The Terminator’s Skynet or The Matrix’s machine civilization—we might reconsider our instinctive fear response. Perhaps rather than representing existential threats, such developments could be viewed as potential evolutionary transitions. Any sufficiently advanced artificial intelligence would itself be a product of biological evolution—an extension of human cognitive capabilities externalized through technology.
From this perspective, advanced AI systems might represent not a break from biology but rather its continuation through different substrates. Just as multicellular life emerged from single-celled organisms, and consciousness emerged from simpler nervous systems, perhaps machine intelligence represents another evolutionary transition point. If such scenarios were to materialize, they would necessarily develop within the constraints and opportunities provided by Earth’s biological and physical systems.
Viewing technology through this evolutionary lens suggests that any successful emergent intelligence would need to find sustainable equilibrium with its environment—the same fundamental constraint that has shaped all successful biological adaptations throughout Earth’s history. Systems that devastate their substrate ultimately undermine their own existence, while those that establish symbiotic relationships tend to persist.
Philosophical Reflections on Human Exceptionalism
This evolutionary perspective brings to mind the provocative monologue delivered by Agent Smith in The Matrix—a cinematic moment that, while dramatized for narrative effect, contains philosophical elements worth examining:
Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You’re a plague, and we are the cure.
While this fictional character’s assessment is deliberately antagonistic, it invites us to consider our species’ relationship with natural systems. Unlike most other species that eventually reach equilibrium with their environments, human civilization has developed unprecedented capacity to reshape environments to suit our needs—often with cascading ecological consequences.
This observation isn’t about vilifying humanity, but rather recognizing our unique ecological position. Stoic philosophy offers a valuable perspective here: nature unfolds according to its principles, and our task is to understand and align ourselves with these natural processes rather than resist them. The Stoics would likely view technological evolution as neither inherently good nor bad—simply as part of nature’s unfolding.
This perspective invites a provocative question about human exceptionalism: What basis do we have for assuming human intelligence must remain the pinnacle of cognitive development on Earth? Our species represents one remarkable branch on the evolutionary tree, but perhaps not its ultimate flowering. The emergence of complementary or even successor intelligences could represent not a tragedy to be resisted but a natural progression to be understood and potentially guided.
Wisdom from Stoic Philosophy
The Stoic tradition offers particularly resonant wisdom when contemplating these grand-scale transitions and uncertainties. Marcus Aurelius, the philosopher-emperor, articulated this perspective with characteristic clarity:
Loss is nothing else but change, and change is Nature’s delight.
This elegant statement encapsulates a profound truth: what we perceive as loss often represents merely transformation—matter and energy rearranging into new configurations. From this perspective, potential transitions in Earth’s dominant intelligence would represent not an ending but a continuation—one more fascinating permutation in the endless dance of natural systems.
Aurelius offers another insight particularly relevant to our tendency toward anxiety about technological futures:
Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.
This counsel reminds us that speculative anxiety about potential futures diverts mental resources from constructive engagement with present challenges and opportunities. Whatever emerges—whether continuation of human dominance or development of new intelligences—we will navigate those circumstances using the same faculties of reason and adaptation that serve us now.
This perspective aligns with a characteristically witty observation, commonly attributed to Mark Twain, about the human tendency toward anticipatory worry:
I’ve had a lot of worries in my life, most of which never happened.
Collectively, these insights suggest that our cognitive bandwidth might be better allocated to thoughtful engagement with present technological development rather than elaborate anxiety about speculative scenarios. Our influence lies primarily in how we develop and guide these technologies today, not in worrying about hypothetical futures beyond our control.
Practical Realities of AI Development
To ground this philosophical exploration in current reality, it’s worth stating explicitly that we remain far from the science fiction scenarios of autonomous machine intelligence seizing control from humanity. Current AI systems—even the most sophisticated ones—operate within narrow domains defined by their architecture and training. They lack the general intelligence, autonomous goal-setting capability, and physical agency that would be prerequisites for the scenarios we’ve been examining as thought experiments.
Even as we develop increasingly capable systems, human civilization maintains fundamental advantages through control of physical infrastructure, energy production, and manufacturing capabilities. The common science fiction trope of humans being unable to “pull the plug” overlooks the physical dependencies of computational systems. To achieve anything resembling independence from human oversight, an advanced AI would need not just intellectual capability but physical autonomy: the ability to independently secure its own energy supply, cooling, and hardware maintenance.
When considering existential risks, a balanced assessment suggests that humanity’s greatest threats likely come from our own actions rather than from our technologies themselves. Climate disruption, resource depletion, biodiversity loss, and conflicts over diminishing resources represent more immediate challenges than hypothetical technological rebellion. In this sense, advanced technologies like AI are better understood as powerful tools that amplify human capabilities—both our constructive and destructive potentials—rather than as independent agents.
Broader Cosmological Context: The Great Filter Hypothesis
These reflections on humanity’s relationship with our environment and technology connect to broader questions about cosmic evolution and intelligence. The Great Filter hypothesis, proposed as one solution to the Fermi Paradox, suggests that somewhere along the developmental path from simple life to galaxy-spanning civilization lies a barrier that prevents, or at least severely limits, the emergence of advanced technological civilizations.
One compelling candidate for this filter is the possibility that advanced intelligence tends toward self-destruction before achieving interstellar capabilities. This hypothesis suggests that the very cognitive abilities that allow species to develop advanced technology might also create vulnerabilities through resource overconsumption, environmental degradation, conflict over diminishing resources, or technological risks.
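The filter intuition is essentially arithmetic: a civilization must pass every transition in sequence, so the surviving fraction is the product of the individual step probabilities, and one hard step dominates everything else. The sketch below makes that concrete; every step name and probability is an invented placeholder for illustration, not an estimate:

```python
# Toy Great Filter chain: a candidate world must pass every transition.
# All probabilities below are made-up placeholders, purely illustrative.
steps = {
    "abiogenesis": 0.1,
    "complex cells": 0.1,
    "multicellular life": 0.2,
    "tool-using intelligence": 0.05,
    "technological civilization": 0.5,
    "survives its own technology": 0.01,  # the candidate filter discussed above
}

p_total = 1.0
for name, p in steps.items():
    p_total *= p  # independent sequential hurdles multiply

print(f"fraction of candidate worlds passing all steps: {p_total:.1e}")
# -> fraction of candidate worlds passing all steps: 5.0e-07
```

Because the factors multiply, making any single step ten times harder makes the final fraction ten times smaller, regardless of how permissive the other steps are; this is why a single late filter (such as technological self-destruction) could by itself explain a silent sky.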
Human civilization’s current trajectory—characterized by accelerating resource consumption, ecological disruption, and climate alteration—offers one potential illustration of this pattern. While our technological capabilities have grown exponentially, our collective capacity for sustainable planetary management has developed more gradually, creating a period of particular vulnerability as we navigate these complex global challenges.
Yet this perspective, while sobering, needn’t lead to fatalism. Understanding our position within biological systems offers important insights: Earth’s biosphere has demonstrated remarkable resilience throughout its multi-billion-year history, surviving mass extinctions and radical climate shifts. Life continues adapting and evolving regardless of which particular species dominate at any given time. While specific forms may change, the fundamental processes of life have demonstrated extraordinary persistence—a pattern likely to continue until our sun’s eventual transformation billions of years hence.