AI Animal Spirits: Navigating the Landscape of Technological Enthusiasm
John Maynard Keynes coined the term “animal spirits” to describe the emotions and instincts that drive economic confidence and decision-making. In the age of AI enthusiasm, new animal spirits are emerging as we navigate an algorithmically influenced future, one with both promising applications and real content-quality challenges.
Keynes wrote in his 1936 book, “The General Theory of Employment, Interest and Money”:
Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than on a mathematical expectation, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as a result of animal spirits – of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.
I’m no Keynes stan, but he was evidently in possession of a decently good brain within his skull. One might say his AI had been well trained on an impressively large and varied corpus of data.
Market Psychology and Technology Trends
If you listen to the talking heads on CNBC and Bloomberg, you’ll occasionally hear them refer to these animal spirits when discussing the stock market. In the context of the stock market, animal spirits refer to the psychological factors that drive investors to make decisions. These factors include greed, fear, herd mentality, and hype cycles. A few short years ago, we were all excited about self-driving cars. Before that, it was Big Data. Before that, it was the Internet, and so on.
Terminology and Perception
Anyone who knows me IRL (as they say) is likely familiar with my perspective on AI developments and my cautious view of its current implementation. My primary observation concerns the popular usage of the term “AI,” short for artificial intelligence. Some people interpret the widespread use of this term in marketing materials as suggesting we’ve discovered how to replicate brain function using computer hardware.
When most people think about AI, they envision a sentient entity that can think and reason like a human, just implemented in computing technology. In reality, AI consists of algorithms that excel at pattern recognition based on statistical regularities in data. Computers follow instructions rather than engaging in independent thought or reasoning. Even in this discussion, I’m anthropomorphizing computers by assigning “them” a pronoun, though they don’t possess personhood the way humans or other living organisms do.
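To make that concrete, here’s a minimal sketch, in pure Python with toy data I invented for illustration, of what “pattern recognition based on statistical regularities” looks like in practice. The program never understands anything; it just counts words and multiplies frequencies.

```python
from collections import Counter

# Toy training data (invented for illustration): label -> example phrases.
TRAINING = {
    "positive": ["great product works well", "love it great value", "works great"],
    "negative": ["terrible product broke fast", "waste of money", "broke and terrible"],
}

# "Training" is nothing more than counting how often each word appears
# under each label: statistical regularities, not understanding.
counts = {label: Counter(w for phrase in phrases for w in phrase.split())
          for label, phrases in TRAINING.items()}
totals = {label: sum(c.values()) for label, c in counts.items()}

def classify(text: str) -> str:
    """Score each label by multiplying per-word frequencies (add-one smoothed)."""
    scores = {}
    for label in counts:
        score = 1.0
        for word in text.split():
            score *= (counts[label][word] + 1) / (totals[label] + len(counts[label]))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("great value"))              # -> positive
print(classify("terrible waste of money"))  # -> negative
```

That’s the whole trick, scaled down by many orders of magnitude.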
The Challenge of Definition
Today, it’s difficult for most people to pin down what AI actually means. I have seen the label applied to nearly any process involving computers and automation. Much like “cloud” has become a catch-all term for anything involving someone else’s computer, “AI” gets attached to any computer doing something that looks complex or magical to the observer. I jokingly refer to “cloud” services as computer rental services, which is a more accurate description. AWS is essentially the Hertz of computers.
Promising Applications
Terminology aside, there are genuinely impressive applications of machine learning and neural networks. While I find most computer-generated art still lacking in sophistication, large language models can generate remarkably coherent text, even though they cannot reason or understand the way humans do. LLMs produce human-like writing thanks to the patterns and structures inherent in language, combined with the extensive data they’ve been trained on. These models can be understood as lossy compression systems, with emphasis on the lossy.
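As a toy illustration of that point (my own sketch; no production LLM works this crudely), here is a bigram model: it memorizes which word tends to follow which, then emits fluent-looking text from those statistics. The original corpus is not recoverable from the model, hence lossy.

```python
import random
from collections import defaultdict

# Toy corpus (invented). A real LLM trains on terabytes of text, not two lines.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# "Training": record which words follow which. Pure pattern statistics.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10, seed: int = 0) -> str:
    """Emit plausible-looking text by sampling observed continuations."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

# Statistically plausible, human-like word order, yet the model cannot
# reproduce the corpus exactly: the compression is lossy.
print(generate("the"))
```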
The Limits of Our Understanding
Fundamentally, we don’t fully understand how the brain works, which presents challenges for replicating it in silicon. Without a comprehensive definition of intelligence, creating it artificially remains aspirational. Much of the contemporary AI enthusiasm stems from terminology borrowed from human cognition. This anthropomorphic marketing approach has proven effective but can lead to misunderstandings about capabilities.
The term “neural network,” for example, is borrowed from biology, but artificial neural networks share only superficial similarities with the networks of neurons in our brains. Our brains have synapses that connect neurons and strengthen or weaken over time based on signals, and the parallels largely end there. The brain operates as a massively parallel system fundamentally different from the comparatively serial processors powering our computers.
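For anyone who hasn’t looked under the hood, the entire “neuron” in an artificial neural network is a few lines of arithmetic. This is the generic textbook formulation (the weights, inputs, and bias below are numbers I made up for illustration), and the biological metaphor carries about this far and no further.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole "neuron": a weighted sum pushed through a squashing function.
    No spikes, no neurotransmitters, no rewiring. Just arithmetic."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid, one classic choice

# The "synapse strengths" here are invented for illustration.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=-0.3))
```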
Biological vs. Artificial Systems
Neurons can fire spontaneously; neural networks are passive systems that require input to produce output. The brain is dynamic, constantly changing and rewiring itself, while neural networks remain static without retraining and don’t learn the way living organisms do. There are techniques to make neural networks more dynamic, but they remain far removed from the complexity and plasticity of biological brains.
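To underline the “static without retraining” point, here’s a sketch with a one-parameter “network” and a made-up squared-error update (both assumptions mine): inference alone never changes the weights; only a deliberate training step does.

```python
weight = 0.5  # a one-parameter "network," frozen unless we explicitly train it

def predict(x: float) -> float:
    return weight * x  # inference: reads the weight, never writes it

def train_step(x: float, target: float, lr: float = 0.1) -> None:
    """One explicit gradient step on squared error. This is the only way
    the "network" ever changes; nothing here happens spontaneously."""
    global weight
    error = predict(x) - target
    weight -= lr * 2 * error * x  # d/dw of (w*x - target)^2

print(predict(2.0), predict(2.0))  # identical outputs: inference changes nothing
train_step(2.0, 3.0)               # a deliberate retraining step...
print(predict(2.0))                # ...is the only thing that moves the output
```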
I like to compare the abilities of computers with those of ants: ants have around 250,000 neurons in their brains, compared to GPT-4’s rumored ~1.7 trillion parameters, a count nearly seven million times larger. Those ants do remarkable things with minuscule brain power next to humans’ 80-100 billion neurons (Keynes probably had a few extra). Computers, on the other hand, are nowhere close to doing what ants can do. We can simulate the aggregate behavior of an ant colony with a computer, but we can’t simulate a single ant in full biological detail. We can’t even fully simulate a single neuron in a living organism, let alone an entire brain.
Potential and Perspective
I do believe these computational approaches have significant potential to perform useful functions or, at minimum, to optimize specific processes and make certain tedious tasks more manageable. The likeliest short-term outcome, however, is an increase in algorithmically generated content across digital platforms.
These AI systems and the excitement surrounding them present both opportunities and challenges. While they have limitations (and ultimately remain under human control), their growing influence invites us to be thoughtful about how we shape technology to enhance rather than diminish the digital experiences we value.
One last thought: I do think we’re a long way off from the peak of this cycle, so don’t dump that NVIDIA stock just yet (this is not financial advice).