The Psychology of Prediction

This report describes 12 common flaws, errors, and misadventures that occur in people’s heads when predictions are made.


1. The distinction between “wrong” vs. “early” has less to do with analytics than the social ability to prevent listeners from giving up on you.

Say it’s 2003 and you predict the economy is going to collapse under the weight of a housing bubble.

In hindsight, you got that right.

But it’s 2003. So those who listened to your prediction have to wait four years for it to come true. And I guarantee you, most would not have waited. They would have given up and walked away long before housing tanked. Those who did stick with your prediction have to account for the opportunity cost of being four years early – both the financial cost and the social cost of looking wrong. The two can easily swamp the eventual benefit of being right.

Chronicling the 1990s bull market that never seemed to end, Maggie Mahar writes in her book Bull!:

The fact that the bull market lasted so long presented problems even for the most skeptical reporters. “You can only say that price/earnings ratios are too high so many times,” reflected a business writer at The New York Times. “Eventually, you lose credibility.”

Weil agreed: “There was widespread thinking among skeptical financial writers—this can’t go on—but it has. What are we supposed to do about it? How many times can you say it? The problem is, if you’re a daily newspaper, you have to come up with something different to say every day.”

Moreover, “in a public marketplace, if you write a story that doesn’t resonate with the marketplace—you have to question the story,” said The Wall Street Journal’s Kansas. “Reporters can get hesitant about their own convictions.”

Some people preemptively realize this. I remember watching CNBC in March 2009, when the S&P 500 bottomed out 60% below its previous high. Host David Faber noted that every trader he talked to knew a big market rally was coming. “So, how are you invested?” Faber asked them. “In cash,” the traders told him. Faber said it was because they couldn’t afford to have another down month. They were confident a turn was coming – and they were right – but being even a month early was too much risk. They couldn’t stand going to their bosses or their investors and explaining why they lost money again.

In Excel, the difference between wrong and early isn’t that big a deal. In Word, it’s enormous.

2. Credibility is not impartial: Your willingness to believe a prediction is influenced by how much you need that prediction to be true.

If you tell me you’ve found a way to double your money in a week, I’m not going to believe you by default.

But if my family were starving and I owed someone money next month that I didn’t have, I would listen. And I would probably believe whatever crazy prediction you made, because I’d desperately want and need it to be right.

Chronicling the Great Plague of London, Daniel Defoe wrote in 1722:

The people were more addicted to prophecies and astrological conjurations, dreams, and old wives’ tales than ever they were before or since … almanacs frighted them terribly … the posts of houses and corners of streets were plastered over with doctors’ bills and papers of ignorant fellows, quacking and inviting the people to come to them for remedies, which was generally set off with such flourishes as these: ‘Infallible preventive pills against the plague.’ ‘Neverfailing preservatives against the infection.’ ‘Sovereign cordials against the corruption of the air.’

The plague killed a quarter of Londoners in 18 months. You’ll believe anything when the stakes are that high.

It’s crazy to think you can impartially judge a prediction if the outcome of that prediction will impact your wellbeing. This is especially true if you need, rather than merely want, a specific outcome.

The majority of lottery tickets are purchased by the lowest-income Americans. Why? I have a theory: the lowest-income Americans overestimate their odds of winning because, when you feel trapped in poverty-stricken stagnation, you desperately need to believe you can buy a ticket out of your situation just to maintain a functioning level of optimism.

A lot of decisions are statistically wrong but support the incentives of the person making them – a good thing to remember when analyzing the predictions you use to justify your own actions.

3. History is the study of surprising events. Prediction is using historical data to forecast what events will happen next.

Do you see the irony?

Historical data is a good guide to the future. But the most important events in historical data are the big outliers, the record-breaking events. They are what move the needle. We use those outliers to guide our views of things like worst-case scenarios. But those record-setting events, when they occurred, had no precedent. So the forecaster who assumes the worst (and best) events of the past will match the worst (and best) events of the future is not following history; they’re accidentally assuming that the history of unprecedented events doesn’t apply to the future.

Nassim Taleb writes:

In Pharaonic Egypt … scribes tracked the high-water mark of the Nile and used it as an estimate for a future worst-case scenario. The same can be seen in the Fukushima nuclear reactor, which experienced a catastrophic failure in 2011 when a tsunami struck. It had been built to withstand the worst past historical earthquake, with the builders not imagining much worse – and not thinking that the worst past event had to be a surprise, as it had no precedent.

This is not a failure of analysis; it’s a failure of imagination. Realizing the future might not look anything like the past – indeed, that phrase may as well be a synonym for “history” – is a special kind of skill that the analytical forecasting community doesn’t generally hold in high regard.
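To see how quickly “no precedent” shows up, here’s a minimal simulation. It is my own sketch, not Taleb’s: the Pareto distribution and every parameter are assumed stand-ins for real flood data. It asks how often a record built from 100 years of history gets broken over the next 50:

```python
# A toy model of "the record had no precedent": draw yearly flood heights
# from a heavy-tailed (Pareto) distribution, then ask how often the next
# 50 years produce something bigger than the previous 100-year record.
# All parameters here are assumptions for illustration.
import random

random.seed(42)

def record_is_broken(history_years=100, future_years=50, alpha=2.5):
    history = [random.paretovariate(alpha) for _ in range(history_years)]
    future = [random.paretovariate(alpha) for _ in range(future_years)]
    return max(future) > max(history)

trials = 100_000
broken = sum(record_is_broken() for _ in range(trials))
print(f"Historical record broken in {broken / trials:.0%} of futures")
```

For independent draws the answer doesn’t even depend on the distribution: the chance the record falls is future_years / (history_years + future_years), one in three here. A worst case built from the historical maximum is a number with a one-in-three chance of being obsolete within 50 years.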

Daniel Kahneman was once asked how we should respond when we make an analysis mistake. He said:

Whenever we are surprised by something, even if we admit that we made a mistake, we say, ‘Oh I’ll never make that mistake again.’ But, in fact, what you should learn when you make a mistake because you did not anticipate something is that the world is difficult to anticipate. That’s the correct lesson to learn from surprises: that the world is surprising.

That last line should be written on forecasters’ walls: The correct lesson to learn from surprises is that the world is surprising.

4. Predictions are easiest to make when patterns are strong and have been around for a long time – which is often when those patterns are about to expire.

Investor Jesse Livermore once wrote about the well-meaning use of big, old data sets when doing analysis. You gain credibility if you use more data, because it seems more robust and less susceptible to noise. But recent data can be more relevant:

Some investors like to poo-poo this emphasis on recency. They interpret it to be a kind of arrogant and dismissive trashing of the sacred market wisdoms that our investor ancestors carved out for us, through their experiences. But, hyperbole aside, there’s a sound basis for emphasizing recent performance over antiquated performance in the evaluation of data. Recent performance is more likely to be an accurate guide to future performance, because it is more likely to have arisen out of causal conditions that are still there in the system, as opposed to conditions that have since fallen away.

Statements like “this trend has been around for 20 years” make forecasters feel good because they’re the data equivalent of safety in numbers. But if you believe in reversion to the mean – and most forecasters do – such statements should give you pause. Social trends become obvious when they’re ubiquitous and old, both of which encourage the trends to become exploited, priced in, and cliché – which is when they stop being trends.
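Livermore’s point fits in a few lines of code. This is a toy sketch, not his analysis: the regime shift, the return numbers, and the decay rate are all assumed for illustration. A long full-sample average keeps reflecting conditions that have fallen away; a recency-weighted average tracks the conditions still in the system:

```python
# A minimal sketch: 20 years of one regime (8% average returns) followed
# by 10 years of another (2%). The full-sample mean still "remembers" the
# old regime; a recency-weighted mean adapts to the one still in place.
import random

random.seed(0)

returns = [random.gauss(0.08, 0.02) for _ in range(20)] + \
          [random.gauss(0.02, 0.02) for _ in range(10)]

def recency_weighted_mean(xs, decay=0.8):
    # Older observations get geometrically smaller weights.
    weights = [decay ** age for age in range(len(xs) - 1, -1, -1)]
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

print(f"Full-sample mean:      {sum(returns) / len(returns):.1%}")   # ~6%
print(f"Recency-weighted mean: {recency_weighted_mean(returns):.1%}")  # ~2%
```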

There’s a common phrase, usually used mockingly: “It’s different this time.” If you need to rebut someone who’s predicting the future won’t perfectly mirror the past, say, “Oh, so you think it’s different this time?” and drop the mic. It comes from John Templeton’s view that “the four most dangerous words in investing are ‘it’s different this time.’” Templeton, though, admitted that it is different at least 20% of the time. The world changes. Of course it does. And those changes are what matter most over time. As Michael Batnick put it: “The twelve most dangerous words in investing are, ‘The four most dangerous words in investing are, “it’s different this time.”’”

5. Prediction is about probability and putting the odds of success in your favor. But observers mostly judge you in binary terms, right or wrong.

Nate Silver predicts presidential elections.

Here’s a headline from 2012, when he predicted the winner:

“Nate Silver Predicts Election Outcome, Becomes Nerdy Chuck Norris”

And one from 2016, when he didn’t:

“How Nate Silver Failed To Predict Trump”

This might seem fair, but it’s not. Nate Silver doesn’t predict winners and losers; he weighs probabilities. That’s how all good statistics are done. It’s not black or white – on election day 2016 he gave Trump a 28.6% chance of winning. If we had a hundred elections, he’d expect Clinton to win about 71 of them and Trump the other 29.

But we don’t have hundreds of elections. We have one. And people judge the success of his predictions by the outcome of that one in isolation. The best way to judge Silver’s skill is to see how his probabilities match reality over many, many election cycles. But in national elections that’s not feasible for a media that has to report on how his predictions fared today. They only report right vs. wrong. So they were too complimentary in 2012 and too harsh in 2016.
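A quick simulation makes this concrete. It’s my own sketch, not Silver’s model; the only input is his published 28.6% figure. A perfectly calibrated forecaster who assigns a 28.6% chance will watch that outcome happen in roughly 29 of every 100 trials:

```python
# A sketch of the calibration point: a forecaster can state the odds
# exactly right and still see the "upset" happen a meaningful fraction
# of the time. The repeat count is imagined; real life gives us one run.
import random

random.seed(2016)

stated_odds = 0.286           # the Trump probability on election day
simulated_elections = 10_000  # hypothetical repeats of the same forecast

upsets = sum(random.random() < stated_odds for _ in range(simulated_elections))
print(f"Less-likely outcome occurred in {upsets / simulated_elections:.1%} of runs")
# One election can't distinguish a good probability estimate from a bad
# one; only frequencies across many forecasts can.
```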

Most things in statistics work this way. The market for unwavering, rock-solid opinions is bigger than the market for weighted probabilities. That’s true for predicting recessions, market moves, company performance, politics, industry trends, you name it. Confidence is easier to grasp than nuanced odds, and many analysts are happy to oblige.

6. Past predictions that end up on the unfortunate side of probabilistic odds might cause hesitancy to make more predictions in the future.

Nate Silver can predict Hillary Clinton has a 71% chance of winning, watch Trump actually win, and not lose much faith in his skills – even if others do. If you know how statistics work you know everything is a game of odds, not certainties. By definition sometimes you’ll make a prediction when the odds are in your favor and still watch reality come down on the other side.

Ed Thorp, an investor and former successful blackjack card counter, writes in his book that the best card-counting method provides a mere 2% edge over the house. Which means you’ll spend a lot of time losing. His road to blackjack success was paved with agony:

I lost steadily, and after four hours I was behind $1,700 and discouraged. Of course, I knew that just as the house can lose in the short run even though it has the advantage in a game, so a card counter can fall behind and this can last for hours or, sometimes, even days. Persisting, I waited for the deck to become favorable just one more time.

Not everyone can keep the faith in this situation. Thorp once enlisted a partner, Manny, who was fascinated by the counting system but couldn’t stand the long bouts of losing. “Manny became in turns frantic, disgusted, excited, and finally close to giving up on me as his secret weapon.”

Losing faith during the inevitable losses that accompany sound probabilistic predictions can cause people to quit predicting even when they’re technically good at it. Making probabilistic bets is hard enough, and rare enough. Maintaining confidence through the losses adds a whole different level of skill.
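Thorp’s grind is easy to reproduce. Here’s a toy bankroll simulation, with the bet size, session length, and flat-betting structure all assumed for illustration rather than taken from his book, showing how often a genuine 2% edge still ends a long session in the red:

```python
# A toy simulation of a small edge: a 2% edge means a 51% chance of
# winning each even-money bet. Track each session's final profit.
import random

random.seed(1961)

def play_session(hands=1000, edge=0.02, bet=10):
    p_win = (1 + edge) / 2            # 51% win rate per hand
    bankroll = 0
    for _ in range(hands):
        bankroll += bet if random.random() < p_win else -bet
    return bankroll

profits = [play_session() for _ in range(10_000)]
losing = sum(p < 0 for p in profits) / len(profits)
print(f"1,000-hand sessions ending at a loss: {losing:.0%}")
```

Roughly a quarter of 1,000-hand sessions end in the red despite the positive edge. That’s the gap Manny fell into: the system was right and the losses were real at the same time.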

Years ago, investor Mohnish Pabrai lost big on an investment in a troubled lender called Delta Financial. Soon after, he did an interview with SmartMoney magazine:

SmartMoney: How do you deal with Delta Financial, professionally and personally?

Pabrai: Investing is a game of probability. Sometimes when you make favorable bets, you still lose them. Even a blue chip could go to zero tomorrow. With Delta Financial, the company didn’t have enough financial strength – that was probably a mistake on my part. I think of it as a favorable starting blackjack hand where unfavorable cards showed up afterward.

SmartMoney: If you could do it over, would you have done the same thing?

Pabrai: It was a good bet.

It was a good bet. That is a smart, but very hard, thing to say after a losing bet.

7. Predictions are often calibrated to promote or preserve your reputation and career goals.

What would happen if everyone making a prediction in your industry had to stay anonymous?

Surely it would affect you, because you wouldn’t know if a source is credible.

But it might also affect the person making a forecast.

Big systems are messy, and people’s personal beliefs don’t always perfectly match the incentives and strategies of their profession. It is normal to work for a company or an industry that you are occasionally, if only temporarily, bearish on. But career-wise it can be suicide to ever make that feeling known. A CEO might be scared out of their mind worrying about how their company is going to dig out of a problem. But morale will plunge and key people will leave if they advertise that view. So they project optimism and faith, even if it’s not their most honest prediction. It’s normal to do this. I think we all do it to some degree. But we should acknowledge what it is: predictions influenced by our own attempt to maintain credibility.

The opposite is just as powerful: people making wild predictions they don’t believe in because doing so brings a spotlight of attention. You don’t get on TV, invited to industry conferences, or offered big book deals for predicting average outcomes. Pundits get paid for sitting three standard deviations away from sane analysts. Take away that incentive and you’d find that many extremists – even respected ones – are merely opportunists.

8. Enough effort goes into an initial forecast that updating your views when new information becomes available can trigger the sunk-cost fallacy and cause you to be right or wrong for the wrong reason.

Say it’s 2005 and you’re a retail analyst. Your job is to predict how much Amazon will be worth in 10 years. You forecast economic growth, consumer purchasing power, e-commerce trends, Amazon’s margins, market share, etc. to calculate the company’s future earnings power.

Then AWS comes along.

It changes everything. AWS not only accounts for the majority of Amazon’s current profits; it also created a stable capital base that allows the retail side of Amazon to take risks without teetering on the edge of insolvency.

But you’re a retail analyst. You don’t do cloud computing, and whatever your forecast was in 2005 it didn’t include AWS because it didn’t yet exist.

So what do you do? Do you even pay attention to AWS? If you do, do you upend your original forecast, throw it all out, and start over? You spent a lot of time on the forecast. You don’t want to tell everyone who followed the original prediction that everything you said then turned out to be irrelevant. Sunk costs hurt.

Maybe you just ignore it. Your original Amazon forecast was bullish. That was the right call – Amazon stock is up 44x since 2005. Everyone praises you for the brilliant prediction, even though you never predicted AWS, which arguably was the biggest driver of returns. You were right for the wrong reason.

The story could have been reversed. In a hypothetical world you could have nailed the retail forecast but never foreseen some crazy acquisition that eventually led to the company’s collapse. Everyone would say your original forecast was wrong, and they’d also be right for the wrong reason.

When sunk costs make changing forecasts hard and lead to outcomes that can be right or wrong for the wrong reason, both the person making the prediction and those receiving it can be fooled.

9. Predicting the behavior of other people relies on understanding their motivations, incentives, social norms and how all those things change. That can be difficult if you are not a member of that group and have a different set of life experiences.

A decade after World War I the League of Nations declared “aggressive war” an international crime. A year later the Kellogg–Briand Pact got 61 countries to renounce war as an instrument of national policy. “Deeply sensible of their solemn duty to promote the welfare of mankind … the peaceful and friendly relations now existing between their peoples may be perpetuated,” read the latter.

Among the signatories of both documents were Germany and Japan, who would commit some of the most aggressive warfare in history within a handful of years.

It’s hard to predict what’s going to happen next if you don’t fully understand the cultural motivations and influences of people whose experiences and goals are different than your own.

On a lighter level, part of why predicting something like the economy is difficult is that you have private-school-educated economists earning seven figures trying to understand the spending behavior of a population, 78% of whom live paycheck to paycheck. Ask me to predict the financial motivations of someone exactly like me, and I’d be pretty good at it. Add in someone who has seen the world through a completely different lens than I have, and they might look at risk, reward, and goals in a way I don’t understand and therefore would never predict. Long-term investors might underestimate the odds of bubbles because they can’t understand why anyone would pay 100x revenue for a crappy company. But the people buying that crappy company aren’t long-term investors; they’re basically momentum traders. What seems crazy to you makes perfect sense to them.

You’ll hear in behavioral finance that humans are often irrational. This is true, but incomplete. A lot of times people just look irrational in the eyes of others who have different motivations and goals than their own.

10. Some predictions are intellectual stimulation and don’t need to be acted upon even if they’re right.

I like reading the blog Calculated Risk because I think its author is the smartest economic analyst around, with the best chance of predicting when the next recession will come.

But when he does, I won’t do anything about it.

I won’t change my investments. I won’t save more money. I won’t alter my spending. Nothing will change based on a prediction I think is right, even if it turns out to be right.

I care about the forecast of a smart person predicting recessions because I think it’s neat. I find it fascinating. Recessions aren’t games; they hurt people, and I may become one of those people. But I think it’s fine to pay attention to a forecast merely because you’re interested in how the world works without feeling the need to take any action.

That view isn’t shared by everyone. Investing is often portrayed as black or white, active or passive. Either act upon predictions or don’t pay any attention to them. I don’t think it has to be that way. Trying to figure out how the world works and learn a little bit about how people behave, respond to risk, and act around uncertainty has hidden benefits even if you don’t take any immediate action with what you learn.

11. If you refuse to make predictions because you know how hard they are, you may become suspicious of everyone else’s predictions, even when they have insight and skills you don’t.

If you predict who’s going to be president in the year 2045, I won’t believe you, because there’s no way anyone could know, and you don’t possess special powers that I lack.

I’m firm on that.

But if you predict what the unemployment rate will be next year I might listen, because even though I don’t have enough knowledge to make that prediction I know others do.

That’s an obvious distinction. But people’s relationship with predictions can become so zealous that they refuse to make it. This is especially true in finance, where a (good) trend toward passive investing has created a cult-like group dismissive of all financial prediction. I get why it happened: a generation of investors, burned by both their own predictions and those of pundits and advisors, threw in the towel. To avoid cognitive dissonance, it’s easier to treat predictions as black and white: either they can be made or they can’t.

But assuming no one can predict anything is almost as dangerous as assuming anyone can predict everything.

The difference between predicting people’s future behavior (really hard) and analyzing how parts of the world are likely to change in a way that might influence their behavior (hard but doable) is important. We have a pretty good idea about the demographic shifts that will take place over the next decade. That’s worth paying attention to. But we don’t know how society will change or adapt because of those demographic shifts. Separating the two and accepting that you should pay attention to some forecasts and not others is a vital calibration.

One way to think about this: It’s not predicting X that’s dangerous. It’s predicting that because of X, Y will happen, that gets people into trouble.

12. Effort put into a prediction may increase confidence more than accuracy.

There are stories of Tiger Woods hitting 1,000 balls at the range without a break. And of Jason Williams practicing dribbling for hours on end without ever shooting a ball.

That’s how you become an expert. That’s how you get amazing results.

At least in some fields. In fields with stable variables, like golf, where the rules and objectives don’t change, the correlation between effort and skill is obvious. But it breaks down in fields where outcomes are overwhelmingly tied to one or two tail events that change over time.

Finance is one of those fields.

You can spend years creating the most advanced economic model on earth, but the big events are often caused by one or two variables that make all the difference. It doesn’t matter how complex your economic model was in 2007. It was useless if it didn’t foresee the financial system freezing up.

Whenever a system is driven by tail events, effort doesn’t necessarily correlate with outcomes because you either captured the tails or you didn’t.
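Here’s a small illustration of that claim. The numbers are synthetic and assumed, not a real market series: strip out a handful of tail days and the whole outcome changes.

```python
# Synthetic illustration of a tail-driven system. Most days are small
# noise; a few rare crash days carry most of the outcome.
import random

random.seed(7)

days = 2520  # roughly ten years of trading days
returns = [random.gauss(-0.15, 0.05) if random.random() < 0.002
           else random.gauss(0.0005, 0.01)
           for _ in range(days)]

def compound(rs):
    total = 1.0
    for r in rs:
        total *= 1 + r
    return total

# Remove only the five most extreme days and compare outcomes.
extreme = set(sorted(range(days), key=lambda i: abs(returns[i]))[-5:])
trimmed = [r for i, r in enumerate(returns) if i not in extreme]

print(f"Full decade:              {compound(returns):.2f}x")
print(f"Minus its 5 biggest days: {compound(trimmed):.2f}x")
```

Five days out of roughly 2,500 separate a good decade from a mediocre one. Effort spent modeling the other 2,515 doesn’t help if the model misses those five.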

But effort correlates with outcomes in so many fields that it’s hard to accept situations where it doesn’t. So it’s natural to assume that effort put into a forecast should increase its accuracy. That can bring the worst of both worlds: high confidence in a model with low, or no, foresight.


Fifteen billion people were born in the 19th and 20th centuries. It’s a staggering figure. I’ll give Daniel Kahneman the last word:

It is hard to think of the history of the twentieth century, including its large social movements, without bringing in the role of Hitler, Stalin, and Mao Zedong. But there was a moment in time, just before an egg was fertilized, when there was a fifty-fifty chance that the embryo that became Hitler could have been a female. Compounding the three events, there was a probability of one-eighth of a twentieth century without any of the three great villains and it is impossible to argue that history would have been roughly the same in their absence. The fertilization of these three eggs had momentous consequences, and it makes a joke of the idea that long-term developments are predictable.

More on this topic:

The psychology of money

Rational vs. reasonable

No one is crazy

Fool me three times and I give up