Why We Love Predictions
Hamburg, Germany
This article is also available in German.
Around New Year’s, my feed filled up with videos about the future of AI, podcasts released their annual previews, and blogs lined up one forecast after another. I consumed all of it, even though I know most of these predictions won’t come true, and at some point I asked myself why I even do this.
To understand that, you have to go way back[1]. On the African savanna, whoever noticed that dangerous wasps always appeared after certain rainfalls survived more often than someone who missed that pattern[2], and over millions of years, evolution hardwired this drive for pattern recognition so deeply into our brains that we can’t help but look for connections everywhere. The problem is that our inner pattern machine was built for a world where true randomness was rare, while the world of AI is the exact opposite.
Nassim Taleb calls this the narrative fallacy[3], and the term is apt, because even when facts don’t connect at all, our minds spin them into a coherent story by simplifying, condensing, and sweeping inconvenient details under the rug. We’re so good at it that we systematically underestimate how much chance and plain luck are at play. When analysts today explain why ChatGPT had to conquer the market or why certain AI startups failed, it sounds completely logical, even though almost no one saw it that way beforehand.
Nowhere is this blind spot more visible than in AI predictions, because Ray Kurzweil claimed in 2005 that computers would reach human-level intelligence by 2029[4], Elon Musk announced in 2024 that AI would be smarter than any individual human the following year[5], and Dario Amodei said he was convinced AI would write ninety percent of code within three to six months[6]. These predictions either didn’t come true or remain contested at best, and yet we all click on the next video as if the previous failures were long forgotten.
“No online database will replace your daily newspaper.”
— Clifford Stoll, Newsweek, 1995. Seventeen years later, Newsweek discontinued its print edition and went fully online.
The crazy part is that they actually are forgotten[7][8], because our memory plays a trick on us that psychologists call hindsight bias. When a new language model appears and you previously said it would change everything, you later remember only the parts that were right, while your memory unconsciously adapts to what actually happened. The feeling develops that it had to happen this way, and in the end you’re firmly convinced you knew it all along, which is why you’re far too confident again when the next AI prediction comes around.
But that only explains half of it, because Ernest Becker described a deeper reason in “The Denial of Death”[9]. The knowledge of our own mortality creates an existential anxiety that we somehow need to manage, and one of the most important strategies is convincing ourselves that we live in an orderly world where things unfold predictably. Predictions give us a sense of control, and that’s calming, even if that control is an illusion.
Psychologist Ellen Langer demonstrated this phenomenon in an elegant experiment[10] by asking subjects to predict thirty coin tosses and manipulating the feedback so that everyone ended up guessing exactly half correctly, precisely the result you’d expect from pure chance. Yet participants whose successes happened to come early significantly overestimated their overall performance[11], and forty percent even believed by the end that they could improve their hit rate through practice in this purely random game.
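The trick of the design is easy to picture as a small simulation. The sketch below is a minimal illustration in Python, assuming only what the description above gives us (thirty trials, exactly half scored as hits, hits either front-loaded or back-loaded); the function and condition names are my own, not Langer’s.

```python
TRIALS = 30
HITS = TRIALS // 2  # everyone ends at exactly chance level: 15 of 30

def rigged_feedback(early_successes: bool) -> list[bool]:
    """Return a predetermined win/loss sequence with exactly HITS wins.

    The subject's actual guesses are irrelevant: the experimenter fixes
    the feedback in advance. The only thing that differs between the two
    conditions is where the wins are placed (front-loaded vs. back-loaded).
    """
    wins, losses = [True] * HITS, [False] * (TRIALS - HITS)
    return wins + losses if early_successes else losses + wins

early = rigged_feedback(early_successes=True)   # winning streak first
late = rigged_feedback(early_successes=False)   # losing streak first

# Objective performance is identical in both conditions ...
assert sum(early) == sum(late) == HITS

# ... yet subjects who saw the early streak rated their ability higher.
print("early condition, first 10 trials:", early[:10])
print("late condition,  first 10 trials:", late[:10])
```

The point the sketch makes explicit is that the ordering of the hits is the only degree of freedom, so any difference in how participants rated themselves can’t come from the data, only from the illusion of control.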
On top of that, many people fundamentally struggle to tolerate uncertainty, not because they lack courage, but because their nervous system triggers an unpleasant alarm state in ambiguous situations[12]. When someone then tells us what AI will have changed in five years and which jobs will still exist, it feels reassuring, even though we secretly suspect that no one can really know. Predictions aren’t an intellectual weakness; they fulfill a deep psychological need.
While thinking about this, I noticed how paradoxical it all really is[13], because precisely in times of rapid change, when predictions should become less reliable, they become especially popular. When ChatGPT appeared and suddenly no one could say what things would look like in six months, predictions exploded, and the more uncertain things became, the greater the need for someone to explain what comes next.
Perhaps that explains why AI predictions work so well, not because anyone has a special crystal ball, but because the alternative – living with genuine uncertainty – is harder to bear than following a prediction that probably won’t come true anyway.
[1] Timothy D. Wilson & Daniel T. Gilbert, Affective Forecasting, Harvard University
[2] PMC, Superior pattern processing is the essence of the evolved human brain, 2014
[3] Farnam Street, The Narrative Fallacy, from Nassim Taleb’s “The Black Swan”
[4] Ray Kurzweil, The Singularity Is Near, Viking Press, 2005
[6] Dario Amodei, CEO Speaker Series, Council on Foreign Relations, March 2025
[7] The Decision Lab, Hindsight Bias
[8] Neal J. Roese & Kathleen D. Vohs, Hindsight Bias, Perspectives on Psychological Science, 2012
[9] Wikipedia, Terror Management Theory
[10] The Decision Lab, Illusion of Control
[11] Wharton School, Illusion of Control Research
[12] ScienceDirect, Intolerance of uncertainty: dimensionality and associations, Journal of Anxiety Disorders, 2007
[13] UC Press, The Illusion of Control, Global Perspectives, 2024