Clarifying the recent hot hand research: What if it’s impossible to find what you’re looking for?

Unlikely this image ever appears in an academic paper…

What follows is a response from Dr. Jeremy Arkes and Dr. Dan Stone to Jeremy Britton’s epic three-part series on the WEA:

Here are Dan Stone’s paper and slides

Thanks very much to Jeremy Britton for giving sports economists some nice press the other day.  He did a good job of summarizing and constructively commenting on much of our work.

We have to admit though that we were disappointed that, in contrast with others at the conference, Jeremy didn’t seem too convinced by our work — separate papers that happen to be on the same topic — the statistical analysis of the hot hand in basketball.  (Britton described Stone’s work as “seems more a quibble” and Arkes’ work as “seems to rest on the assumption that the hot hand has to exist”).  So we appreciate the powers that be at the Wages of Wins giving us the chance to reply. 

(Editor Dre’s Note: Jeremy did want to point out that his comments were based purely on the presentations. Jeremy makes no claim to the Sports Economics expertise needed to comment on the content at an expert level. Additionally, I do enjoy being referred to as ‘powers that be’, for future posters…)

Quick background: all fans and players believe players sometimes get hot, and have believed this for a long time.  So they believe the hot hand exists.  Everyone knows this.  But statistical analysis has failed to find much evidence for it.  So psychologists and behavioral economists have concluded “the ‘hot hand’ is just a figment of the imagination,” consistent with a vast body of research showing people tend to see patterns in noise.  This conclusion has been popular among sports researchers, since it lets them claim that a widely held belief is wrong because people misread patterns in noise.

We make the case that the researchers may be the ones misreading the patterns (of research, in this case).  Absence of evidence can be evidence of absence, but that evidence can be strong or weak.  One well-known reason for it being weak is small sample size.  Our work identifies new reasons—neglected by most of the previous literature in this area—why absence of evidence for the hot hand could be very weak evidence of absence.

In our research, we essentially say, suppose players do get hot sometimes, in a few pretty simple and plausibly general ways.  What would the data then look like?  What would the analysis show?  And we found that there’s a really good chance the analysis would show almost nothing – i.e. no evidence or very weak evidence of hotness – for any sample size.  This is because the data do not directly reflect how hot, or not, players really are.  The data are a bunch of 0s and 1s (misses or makes).  But, ‘hotness’ is probably best measured, as simply as possible, as a number between 0 and 1—the probability of making the next shot (holding constant location of shot, defensive intensity, etc).

In some situations, using 0s and 1s as approximations for probabilities doesn’t cause a problem.  The 0s and 1s average out and all is fine.  But what we show is that in the context of bball shooting data, the approximations cause a big problem—they make it really hard to detect shooting trends.
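To make this concrete, here is a minimal simulation sketch of the point, with all parameter values chosen purely for illustration (a 45% baseline shooter who jumps to 60% in a latent “hot” state that arrives and fades via a simple two-state Markov chain): even though the true hotness boost is 15 percentage points, the standard test run on the observed 0s and 1s—comparing the make rate after a make to the make rate after a miss—recovers only a tiny fraction of it.

```python
import random

def simulate_season(n_shots=1000, p_normal=0.45, p_hot=0.60,
                    p_enter_hot=0.05, p_exit_hot=0.20, rng=None):
    """Simulate shots from a player with a latent hot state.

    The player's true make probability jumps 15 points when 'hot',
    but the analyst only observes the 0/1 makes and misses.
    (All parameter values here are illustrative assumptions.)
    """
    rng = rng or random.Random()
    hot = False
    shots = []
    for _ in range(n_shots):
        # Latent hotness evolves as a simple two-state Markov chain.
        if hot:
            if rng.random() < p_exit_hot:
                hot = False
        elif rng.random() < p_enter_hot:
            hot = True
        p = p_hot if hot else p_normal
        shots.append(1 if rng.random() < p else 0)
    return shots

def conditional_gap(shots):
    """P(make | previous make) - P(make | previous miss):
    a standard hot-hand test statistic on the observed 0/1 data."""
    after_make = [s for prev, s in zip(shots, shots[1:]) if prev == 1]
    after_miss = [s for prev, s in zip(shots, shots[1:]) if prev == 0]
    return (sum(after_make) / len(after_make)
            - sum(after_miss) / len(after_miss))

rng = random.Random(7)
gaps = [conditional_gap(simulate_season(rng=rng)) for _ in range(200)]
avg_gap = sum(gaps) / len(gaps)
# The true hot-state boost is 0.15, but the average observed gap is far
# smaller, because a single made shot is a very noisy signal of whether
# the player is actually in the hot state.
print(f"true boost: 0.15, average observed gap: {avg_gap:.3f}")
```

With these assumed parameters the player is hot only about a fifth of the time, and one make barely updates the odds of being hot, so the observed gap is on the order of a single percentage point—consistent with the argument above that a null result on 0/1 data is weak evidence against a real hot hand.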

So the absence of evidence here really doesn’t tell us too much.  And since most of the research in this area doesn’t recognize the problem caused by these approximations, most of the research that finds an absence of evidence doesn’t tell us too much either.  Meanwhile, the research that does find evidence (e.g., Arkes’ 2010 paper, and Yaari and Eisenmann, 2011) may tell us much more than we realized—given our two current studies, Arkes’ original finding of a modest hot hand effect of 3-to-5 percentage points (for free throws) suggests a much larger effect in reality.

We are not denying the existence of the ‘hot hand fallacy’ – the tendency to infer hotness too quickly (after say, just 2 or 3 made shots in a row).  In fact, we are certain that fans and players misperceive many instances of players hitting a few shots in a row as being hot or teams winning a few games in a row as having momentum.  But, not all instances. And just because statisticians don’t have the data to identify the hot hand, that doesn’t mean players don’t.  They have lots of information analysts don’t–how they felt when shooting, whether the shot was a swish or lucky, the difficulty of the shot, etc.

Our overall conclusion – based on the intuition, experience and judgment of millions of bball fans/players (that, of course, we only have a sense of), what’s been found and not found in the data (from bball and other sports), and our recent theoretical analysis—is that behavioral scientists have been too quick to conclude that there is no hot hand in bball, and in fact it’s likely that players do occasionally get hot, to varying degrees.  We think this is kind of a fun, feel-good story – that the masses were right after all!  Human judgment prevails!  Now we just need to be careful not to become overconfident.

– Jeremy Arkes and Dan Stone
