
AI Animal Spirits


John Maynard Keynes coined the term “animal spirits” to describe the human emotions that drive consumer confidence. In today’s AI-saturated landscape, we’re witnessing new animal spirits emerge as we collectively lose touch with reality and cheer for an algorithmically driven future filled with generated clickbait, mediocre art, fabricated “facts,” and an endless stream of low-quality content.

Keynes wrote in his 1936 book, “The General Theory of Employment, Interest and Money”:

Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than on a mathematical expectation, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as a result of animal spirits – of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.

The Hype Cycle Machine

If you tune into financial media like CNBC or Bloomberg, you’ll hear commentators reference these animal spirits when discussing market movements. In investment contexts, animal spirits encompass the psychological factors driving decisions: greed, fear, herd mentality, and hype cycles. Just a few years ago, self-driving cars dominated the conversation. Before that, Big Data captured our imagination. Earlier still, the Internet itself was the revolutionary force reshaping our world.

The current AI hype cycle is particularly intense. Global private AI investment totaled $91.9 billion in 2022, a decrease from the prior year but still an enormous sum, according to Stanford’s AI Index Report1. Meanwhile, Gartner’s famous “Hype Cycle” placed generative AI at the very peak of inflated expectations in 2023, suggesting a “trough of disillusionment” lies ahead2.

Anyone who knows me in real life has likely grown weary of my skepticism about AI’s practical utility. My primary frustration centers on how we’ve diluted the term “AI” (artificial intelligence). The widespread use of this terminology in marketing materials creates a false impression that we’ve somehow unlocked the secret to replicating human cognition in silicon.

The Reality Gap: Marketing vs. Technology

When most people encounter “AI,” they envision some sentient entity capable of human-like reasoning. In reality, today’s AI consists primarily of algorithms excelling at pattern recognition based on statistical regularities. These systems cannot truly think or reason—they simply follow our instructions with impressive mimicry. Even in this piece, I’m anthropomorphizing computers by assigning them pronouns, though they aren’t a “they” in the way humans or other living organisms are.

The meaning of “AI” has become increasingly nebulous. I’ve seen the term applied to virtually any process involving computation and automation. A 2023 survey by KPMG found that 67% of business leaders admitted their companies engage in “AI washing” – claiming AI capabilities that are exaggerated or nonexistent3. Much like how “cloud” has become shorthand for “someone else’s computer,” AI now encompasses anything a computer does that appears complex or magical to observers. I jokingly refer to cloud services as “computer rental services”—AWS is essentially the Hertz of computing.

Setting aside my terminological grievances, certain applications of machine learning and neural networks do show promise. While most computer-generated art remains underwhelming, large language models produce remarkably coherent text—though they fundamentally cannot reason or understand as humans do. These models excel at emulating human writing patterns thanks to massive training datasets and the inherent structures of language. The official GPT-4 technical report, for example, highlights its impressive capabilities built upon a vast corpus of human-written text, though it doesn’t specify the exact size of the training dataset4. They function essentially as lossy compression algorithms, with emphasis on the “lossy.”
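To make the “statistical regularities” point concrete, here is a deliberately crude sketch: a bigram model that counts which word follows which in a corpus and then emits plausible-looking text. Real LLMs use transformer networks over vastly larger datasets, not anything this simple, but the underlying trick is the same kind: reproduce patterns in the training data without any understanding of what the words mean. The corpus and function names here are my own illustration, not anything from an actual system.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which -- pure pattern statistics."""
    counts = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, start, length=6, seed=0):
    """Emit fluent-looking text with zero comprehension behind it."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # grammatical-ish word salad, no meaning
```

The output is locally coherent because the statistics are locally coherent; nothing in the program knows what a cat or a rug is. Scale that idea up by many orders of magnitude and you get the “impressive mimicry” described above.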

Brains vs. Silicon: A Fundamental Disconnect

We still don’t understand how the brain works at a fundamental level, so how can we reasonably expect to replicate it in silicon? We haven’t even agreed on a definition of intelligence, yet we claim to create it artificially. Much of the current AI hype stems from terminology borrowed from human cognition—a clever marketing strategy using anthropomorphization that has proven remarkably effective.

Consider the term “neural network,” borrowed from biology but sharing only superficial similarities with actual brains. While both employ connection points that strengthen or weaken over time (synapses in brains, weighted connections in artificial systems), the similarities largely end there. The brain operates as a massively parallel system fundamentally different from the serial processors powering our computers.
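It’s worth seeing just how un-brain-like the basic unit is. An artificial “neuron” is a weighted sum pushed through a squashing function, i.e. ordinary arithmetic. The values below are made up for illustration; the point is that there is no spiking, no rewiring, and no spontaneous activity anywhere in this computation.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum squashed by a sigmoid.

    Despite the biological name, this is plain arithmetic.
    The 'learning' in a neural network is just nudging the
    weights and bias until the outputs look right.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Arbitrary example values:
print(artificial_neuron([0.5, -1.0], [0.8, 0.2], 0.1))  # ≈ 0.574
```

Stack millions of these and you get a function approximator that is remarkably useful, but the resemblance to a biological neuron begins and ends with the name.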

Biological neurons can fire spontaneously, while artificial neural networks require input to generate output. The brain continuously rewires itself dynamically, whereas neural networks remain static without explicit retraining. The human brain consumes only about 20 watts of power while performing tasks that would require megawatts of electricity for current AI systems to attempt5. We’ve developed techniques to make these networks somewhat more adaptable, but they remain vastly simpler than the complexity and plasticity of biological brains.

I like comparing computers’ capabilities to those of ants. An ant brain contains approximately 250,000 neurons, while GPT-4 reportedly has on the order of 1.7 trillion parameters (OpenAI has never disclosed the actual figure). Yet ants accomplish remarkable feats with this relatively tiny neural architecture compared to humans’ roughly 86 billion neurons. Ants can navigate complex environments, coordinate collective actions, and solve problems that would challenge our most advanced robots. In fact, a 2023 study found that even simple insects outperform AI systems in adapting to novel environments by an order of magnitude6. Computers, despite their impressive specifications, cannot replicate what even a single ant achieves.

The Coming Enshittification

AI certainly has potential to optimize specific processes and perhaps reduce certain tedious tasks. However, I suspect its primary impact will be generating vast quantities of low-quality content, advertisements, and scams—further contributing to what Cory Doctorow aptly calls the “enshittification” of our digital ecosystems.

The data already suggests this direction. A 2023 analysis found that approximately 40% of the content on several major websites was AI-generated, with a significant portion containing factual errors or plagiarized material7. Meanwhile, email security firm Abnormal Security reported a 1,265% increase in AI-assisted phishing attacks between 2022 and 20238. These trends point toward diminishing quality and authenticity in our information ecosystem.

Amusingly, Claude was very helpful in providing me with facts and figures for this post to support my arguments. It (Claude, a product, not a person) is quite good at cherry-picking data to bolster my points, and certainly unbiased sources like Goldman Sachs and Gartner are (cough) completely trustworthy and not at all talking their book.

One final thought: I believe we’re still far from reaching the peak of this hype cycle, so don’t divest from NVIDIA just yet (though this certainly isn’t financial advice). With Goldman Sachs projecting AI could add $7 trillion to global GDP by 20309, the market enthusiasm shows no immediate signs of waning, regardless of whether the technology lives up to its breathless promises.


  1. Stanford University. (2023). “Artificial Intelligence Index Report 2023.” Stanford Institute for Human-Centered Artificial Intelligence.

  2. Gartner. (2023). “Hype Cycle for Artificial Intelligence.”

  3. KPMG. (2023). “Global Tech Report: AI Adoption and Ethics.”

  4. OpenAI. (2023). “GPT-4 Technical Report.”

  5. Sorbonne University. (2022). “Comparative Energy Consumption in Computational Systems.”

  6. Journal of Experimental Biology. (2023). “Adaptive Capabilities in Biological vs. Artificial Systems.”

  7. NewsGuard. (2023). “AI Content Analysis of Major Digital Publishers.”

  8. Abnormal Security. (2023). “Annual Email Threat Report.”

  9. Goldman Sachs. (2023). “The Economic Impact of Artificial Intelligence.”