“I know what the stats say but…”: The illusion of validity in basketball fandom

The book “Thinking, Fast and Slow” by Daniel Kahneman is fascinating reading; it’s sort of a “greatest hits” of the cognitive fallacies that he and various colleagues (the most famous of whom is Amos Tversky) documented through clever experimentation. I’ve often thought that the way we as fans think about basketball falls prey to each of these fallacies, and I plan to write a series on my website, The NBA Geek, about the various fallacies and how they apply to thinking about basketball. But it occurs to me that one article won’t be enough to cover all the ways the illusion of validity affects us (yes, all of us, even me).

It’s pretty geeky to have a “favorite scientist”, but Nobel Prize winner Daniel Kahneman is probably it for me. I think the greatest takeaway from Kahneman’s work is that we simply cannot trust ourselves when it comes to decision-making (or judgment-making) in complex situations. And another, perhaps more important, takeaway is that knowing you can’t trust yourself won’t actually protect you from making those mistakes anyway. A great illustration of this comes from this New York Times article. Kahneman and his colleagues in the military had designed a program to evaluate officer candidates and predict who would succeed:

We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.

We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.

I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.

I had discovered my first cognitive fallacy.

In other words, even when faced with the sure knowledge that the evaluations they were making were useless as predictions, they remained absolutely stone-cold certain that what they were seeing revealed the true nature of the men they were evaluating:

I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.

It’s that last bit that is telling. I fall prey to this myself constantly. As my readers know, I am a pretty avid believer that WP48 (Wins Produced per 48 minutes) tells us far more about basketball performance than the naked eye ever could. Yet I couldn’t possibly count the number of times I have watched a basketball game and thought “Player X was AMAZING!”, only to check the box score and discover he was terrible-to-average: committing lots of turnovers (which my mind glossed over), missing lots of shots (which weren’t as important as those three THUNDEROUS DUNKS, SURELY), or grabbing no rebounds.

As basketball fans (or basketball analysts; I like to give myself fancy titles to lend more validity to my statements), we are convinced that we are experts in this field, that what we see on the basketball court has meaning, regardless of what the data says. It is why coaches are so reluctant to give up on highly drafted players: they see things in on-court performances that convince them the player is capable of so much more than what the box score tells them. It is why fans of certain players get outraged whenever we post an article showing that those players are overrated.

We want what we see to have meaning, to fit into a narrative. Players who look spectacular when scoring, well, they must be great players! Look at that athleticism! Everybody knows he’s a great player! Meanwhile, players who score lots of boring put-backs, or who rarely shoot off the dribble and only take shots when they are open and passed to, well, they slip beneath our notice, and somehow the buckets they score don’t count the full two points in our cognitive registry.

The illusion of validity is why I get deeply suspicious whenever a fan, sportswriter, coach, or GM says anything to the effect of “the numbers don’t tell the whole story”. This is, in fact, true, but what the person saying it usually means is “I don’t care what the numbers say, because I am convinced that what I have seen is correct,” which, thanks to this illusion, is almost never true. If I argue that the data says a player isn’t good, and someone points out, “Yes, but if you watch the games you will notice that this year he is only shooting threes from the slot, and rarely from the corner, where he used to excel,” then that person is pointing out a hole in the data that’s worth investigating. If the argument is along the lines of “anyone who’s watching him can clearly see he’s much better than that,” then I’m certain the illusion of validity is doing its dirty work.

Moral of the story: you can’t trust yourself. And if you find yourself saying, “I know, but this time I’m sure!” then you really can’t trust yourself.

-Patrick