AI Animal Spirits
John Maynard Keynes popularized the term “animal spirits” to describe the spontaneous human emotions that drive economic decisions. In the age of AI hype, we see new animal spirits emerge as we lose our connection with reality and all cheer for a new algorithmically driven future of generated clickbait, crappy art, incorrect “facts”, and a flood of low-quality content.
Keynes wrote in his 1936 book, “The General Theory of Employment, Interest and Money”:
Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than on a mathematical expectation, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as a result of animal spirits – of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.
I’m no Keynes stan, but he was evidently in possession of a decently good brain within his skull. One might say his AI was well trained on an impressively large and varied corpus of data.
If you listen to the talking heads on CNBC and Bloomberg, you’ll occasionally hear them invoke these animal spirits when discussing the stock market, where the term describes the psychological factors that drive investors’ decisions: greed, fear, herd mentality, and hype cycles. A few short years ago, we were all excited about self-driving cars. Before that, it was Big Data. Before that, it was the Internet, and so on.
Anyone who knows me IRL (as they say) is likely tired of hearing me rant about the AI hype and how skeptical I am of its usefulness. My biggest complaint is the colloquial usage of the term “AI”, short for artificial intelligence. Some folks believe its widespread use in marketing material implies we’ve unlocked the secret to emulating brains with computer chips.
When most people think about AI, I suspect they envision some sentient being that can think and reason like a human, except made of computers. In reality, AI is just a bunch of algorithms that are good at recognizing the statistical patterns that exist everywhere. Computers cannot think or reason; they only follow our instructions. Even in my writing here, I’m anthropomorphizing computers by assigning “them” a pronoun, though they are not a “they” in the sense that we humans or any other living organisms are.
Today, it’s difficult for most people to understand what AI means. I have seen the term applied to nearly any process that involves computers and automation. Much like how “cloud” has become a catch-all term for anything involving someone else’s computer, AI seems to be applied to anything involving a computer doing something that looks complex or magical to the observer. I jokingly refer to “cloud” services as computer rental services, which is a more accurate description. AWS is essentially the Hertz of computers.
Ignoring my beef with the jargon, some neat applications of machine learning and neural networks exist. I find most computer-generated art underwhelming, but LLMs can generate remarkably coherent text, although they certainly cannot reason or understand the way humans do. LLMs are able to emulate human writing simply thanks to the patterns and structures that exist in language, plus the massive amount of data they’ve been trained on. A good way to think about these models is as lossy compression, with emphasis on the lossy.
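To make the “patterns in language” point concrete, here’s a toy bigram model in Python. It’s nothing like a real transformer-based LLM in scale or architecture (the corpus and everything else here is made up for illustration), but it’s the same core trick: predict the next token from statistical patterns in the training data.

```python
import random
from collections import defaultdict

# A toy bigram model: the same "predict the next token from
# statistical patterns" idea as an LLM, shrunk to a few lines.
corpus = (
    "animal spirits drive the market and animal spirits drive the hype"
).split()

# Count which word follows which: a lossy "compression" of the corpus.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a plausible-looking sequence from the learned patterns."""
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

print(generate("animal"))  # e.g. "animal spirits drive the hype"
```

The bigram table is a lossy record of the corpus: the model can emit plausible-looking sequences, but it has no idea what any of the words mean. Scale the same idea up by many orders of magnitude and you get something in the neighborhood of an LLM.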
Fundamentally, we don’t understand how the brain works, so how can we expect to replicate such a thing in silicon? We can’t even define what intelligence is, so how can we expect to create it artificially? Much of the modern AI hype has come about because the creators of these algorithms chose terminology borrowed from the vocabulary of human intelligence. It’s anthropomorphization as a marketing gimmick, and it’s working.
The term “neural network”, for example, is borrowed from the brains of living organisms, but in practice, it has little in common with the biological neural networks in our heads beyond the superficial. Our brains have synapses that connect neurons, and these synapses can strengthen or weaken over time based on the signals they receive. This is somewhat similar to how the weights in an artificial neural network are adjusted during training, but the similarities end there. The brain is a massively parallel system that operates on a completely different level than even the most parallel processors that power our computers.
Neurons can fire spontaneously, but neural networks are passive systems requiring input to produce output. The brain is dynamic, constantly changing and rewiring itself, whereas neural networks are static and unchanging without a retraining process, and they don’t learn the way living organisms do. We can use some tricks to make neural networks somewhat more dynamic, but they are still a far cry from the complexity and plasticity of our brains.
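To see how thin the analogy really is, here’s a complete artificial “neuron” in a few lines of Python, a minimal sketch of my own rather than any framework’s actual implementation. The weight “strengthens or weakens” only inside the training loop; after that, the thing is a frozen mathematical function.

```python
import math

# A single artificial "neuron": a weighted sum squashed by a function.
# Weights loosely stand in for synaptic strengths; the biological
# analogy stops roughly here.
def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# "Training": nudge the weight up or down to reduce error on one
# example. This is the sense in which connections strengthen or weaken.
w, b, lr = 0.5, 0.0, 0.1
x, target = 1.0, 1.0
for _ in range(100):
    y = sigmoid(w * x + b)
    error = y - target
    # Gradient descent step: the weight only changes during training.
    w -= lr * error * y * (1 - y) * x
    b -= lr * error * y * (1 - y)

# "Inference": the weights are now frozen. The network is a fixed,
# passive function -- same input in, same output out. No rewiring,
# no spontaneous firing, no learning, until someone retrains it.
print(sigmoid(w * 1.0 + b))  # deterministic forever after training
```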
I like to compare the abilities of computers with those of ants: ants have around 250,000 neurons in their brains, compared to GPT-4’s reported 1.7 trillion parameters (an imperfect comparison, since a parameter is closer to a synapse than a neuron, but the point stands). Those ants can do remarkable things with such minuscule brain power compared to humans’ 80-100 billion neurons (Keynes probably had a few extra). Computers, on the other hand, are nowhere close to being able to do what ants can do. We can simulate the behavior of an ant colony with a computer, but we can’t simulate the behavior of a single ant. We can’t even fully simulate the behavior of a single neuron in a living organism, let alone an entire brain.
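For a sense of scale, here’s the back-of-envelope arithmetic (GPT-4’s parameter count is a widely circulated estimate, not an official figure):

```python
# Rough numbers from the comparison above. The units don't even match
# (a parameter is closer to a synapse than a neuron), which only makes
# the gap more striking.
ant_neurons = 250_000
human_neurons = 86_000_000_000        # roughly 80-100 billion
gpt4_params = 1_700_000_000_000       # reported estimate, unconfirmed

print(f"GPT-4 params per ant neuron:   {gpt4_params / ant_neurons:,.0f}")
print(f"GPT-4 params per human neuron: {gpt4_params / human_neurons:,.1f}")
# ~6,800,000 parameters per ant neuron, ~19.8 per human neuron --
# and the ant still out-navigates the model in the physical world.
```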
I do think AI (in the parlance of our time) has a lot of potential to do useful things or, at the very least, optimize specific processes and perhaps make some drudgery less traumatizing. By and large, however, I feel like we’re just going to get a flood of low-quality content, ads, and scams on the Internet and elsewhere.
These AIs and their animal spirits probably won’t be our undoing (they aren’t that capable, and we can always turn the computers off), but they will likely accelerate the general enshittification of everything we’ve all been enjoying for the past few decades.
One last thought: I do think we’re a long way off from the peak of this cycle, so don’t dump that NVIDIA stock just yet (this is not financial advice).