“What was the happiest day of your life?”
The documentary How to Live Forever asks that innocent question of a centenarian, who offers an amazing response.
“Armistice Day,” she says, referring to the 1918 agreement that ended World War I.
“Why?” the producer asks.
“Because we knew there would be no more wars ever again,” she says.
An appealing fiction is:
Something you want to be true.
Something backed up by data, observation, or reasonable common sense.
But something whose data only goes one level deep on a topic with many layers of complexity.
The first two check all the boxes needed to deeply believe in something. Mix a little data with something you want to be true and you grab it with both hands, vowing to never let it out of your sight.
But a little data doesn’t get to the bottom of how most stuff works. Most stuff is complicated.
Biologist Bret Weinstein once described the difference between intelligence and wisdom:
Intelligence is mental horsepower that lets you quickly calculate answers to problems.
Wisdom is weighing, and tying together, answers from multiple problems – many of them unrelated and counterintuitive – in a way that brings you closer to figuring out how people behave in a complicated world.
Appealing fictions happen when you nail intelligence but stop short of wisdom. If you are intelligent and can calculate answers, you’ll stop considering other possibilities and draw conclusions as soon as you hit something you want to be true. Like confidently concluding that no one who lived through the destruction of one bloody war would ever consider doing it again.
The only thing more dangerous than something you made up is something you think you discovered and have evidence for, but is still wrong.
Daniel Kahneman once gave an example. He described working with an Israeli Air Force captain who needed a better way to train pilots. The captain scoffed at Kahneman’s recommendation of providing pilots with positive reinforcement, explaining:
On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver. The next time they try the same maneuver they usually do worse. On the other hand, I have often screamed into a cadet’s earphone for bad execution, and in general he does better on his next try. So please don’t tell us that reward works and punishment does not, because the opposite is the case.
Kahneman dug deeper and found out what was actually happening:
The instructor was right— but he was also completely wrong! His observation was astute and correct: occasions on which he praised a performance were likely to be followed by a disappointing performance, and punishments were typically followed by an improvement. But the inference he had drawn about the efficacy of reward and punishment was completely off the mark. What he had observed is known as regression to the mean, [which meant a pilot performing well was likely to perform worse on his next attempt].
The captain wanted improvement, stopped digging when he found data that supported improvement, and created his own appealing fiction.
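Regression to the mean is easy to see in a toy simulation (a hypothetical model with made-up numbers, not Kahneman's data): give every cadet a fixed skill, make each attempt equal skill plus random luck, and apply no feedback at all. The cadets with the best first attempts will, on average, do worse the second time, and the worst will do better, exactly the pattern the captain attributed to praise and screaming.

```python
import random

random.seed(0)

# Hypothetical model: each cadet has a fixed skill; each attempt = skill + luck.
# Praise and punishment have NO effect here, by construction.
N = 10_000
skills = [random.gauss(0, 1) for _ in range(N)]
first = [s + random.gauss(0, 1) for s in skills]
second = [s + random.gauss(0, 1) for s in skills]

# "Praised" cadets: best 10% of first attempts. "Screamed at": worst 10%.
ranked = sorted(range(N), key=lambda i: first[i])
worst, best = ranked[:N // 10], ranked[-N // 10:]

def avg(idx, xs):
    return sum(xs[i] for i in idx) / len(idx)

print(f"praised:     first {avg(best, first):+.2f} -> second {avg(best, second):+.2f}")
print(f"screamed at: first {avg(worst, first):+.2f} -> second {avg(worst, second):+.2f}")
```

Because luck doesn't repeat, an extreme first attempt is mostly luck, and the second attempt drifts back toward the cadet's true skill regardless of what anyone shouted.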
John Hussman has spent the last decade betting against the stock market, with a mutual fund down more than 50% to show for it.
What’s struck me about Hussman is that he is not an end-of-the-world doomer. He’s clearly intelligent. He backs up his bearishness with mounds of data and charts, most of which are compelling at first glance even to people who’ve been bullish. Hussman has many critics. None claim he doesn’t back his positions up with facts.
But the pseudonymous blogger Jesse Livermore dug through Hussman’s data, tore it apart layer by layer, and showed that most of Hussman’s arguments are based on random anomalies that cancel each other out.
If you want the market to fall because you’ve positioned your portfolio that way, you might stop digging as soon as you hit data that validates your views. And you’ll feel great about your decision because you have data backing it up – you’re not winging it. It takes something different – assuming you’re still wrong, digging layers deeper, and incorporating the lessons of different fields – to find out what’s really going on.
This also helps explain Coke’s debacle with New Coke in 1985.
Coke desperately wanted an answer to Pepsi’s encroachment. In testing and focus groups, the new Coke formula was slam-dunk superior. People said it tasted better. They liked it better than both old Coke and Pepsi. There’s your data. Done.
But New Coke failed spectacularly because a more important truth sat a layer deeper: The power of brands is more about familiarity than quality. People didn’t want a better-tasting Coke. They wanted a familiar drink wherever they went. New Coke ruined that, and customers protested. This was baffling to Coke marketers who knew New Coke tasted better, and had the data to prove it. But that quick conclusion was an appealing fiction – one truth deep in a problem with layers of complexity.
Charlie Munger once spoke about this problem:
The human mind is a lot like the human egg, and the human egg has a shut-off device. When one sperm gets in, it shuts down so the next one can’t get in.
He then talked about why Charles Darwin was such a great scientist, despite, by most accounts, lacking superior intelligence:
Darwin tried to disconfirm his ideas as soon as he got them. He quickly put down in his notebook anything that disconfirmed a much-loved idea. He especially sought out such things. If you keep doing that over time, you get to be a perfectly marvelous thinker instead of one more klutz repeatedly demonstrating first-conclusion bias.
You might read this and think, “I’m open-minded. I can do that.” But open-mindedness is usually viewed as an acceptance that other people might be right, rather than an active process of discovering where you’re wrong. Those might seem like the same thing, but they’re not. Being open to the possibility of others being right is passive. Real open-mindedness is like Darwin: Trying as hard as you can to disconfirm your own ideas, even when you want them to be right.
It is so hard to do. But it’s the only antidote to appealing fictions. And the irony is that you’re more likely to be right if you’re constantly trying to prove yourself wrong.