Six Cutting-Edge Pieces of Sports Research: The WEA, Part 2

Continuing from yesterday’s introductory coverage of the 2012 WEA Conference, What Are Sports Economists Like?, this post examines some of the work I saw presented by various researchers including David Berri.

How the Conference Works: Present, Critique, Discuss.

The format of the conference is fascinating. Work is presented in two-hour sessions composed of four research papers each. Each researcher gives a 15-minute presentation, followed by an eight-minute critique from a selected fellow researcher. Sessions end with open group discussion.

Presentations all used slide projection, though I am told that in the old days they simply held court with verbal arguments. Some finished gracefully on time, while others rushed through dense arguments with lots and lots of slides. Dave made a game-time adjustment to one of his by adopting Guy Kawasaki’s 10/20/30 rule of PowerPoint, which I thought worked really well.

Note: I would urge other presenters to consider this approach for any audience in the future. Presenting less data forces more careful editing beforehand of the information you present, which typically leads to better comprehension during a presentation and better recall of information afterward.

As an armchair economist, I was in over my head. To gain some footing, I adopted Dave’s guidelines to help listen with a discriminating ear and give my notes structure. Here is what I found helpful to consider:

Guidelines for Evaluating Models

Is there a hypothesis? Does this paper explain how one thing causes another? Does the model specify reasonable variables that could correlate to each other and be capable of explaining its hypothesis? Is the paper interesting and meaningful?
Statistical and Economic Significance: Do your independent variables show any significant deviation from the mean? If so, you may have found a variable that explains something. Moving beyond statistics to economics, does the variable explain anything about financial value?
Robustness of Results: Does the research anticipate critiques by ensuring the results are not an artifact of one type of research tool, but rather results that appear when viewed from a variety of relevant analytic angles? (I heard this discussed as “testing a variety of specifications.”)
Explanatory Power: How much correlation exists between your dependent variable (what you are trying to explain) and your independent variables (what you think does the explaining)? You will see this written as variable X explaining 55% of the variation in variable Y. (A short sketch of this calculation follows this list.)
Simplicity of the Model: When choosing between models that attempt to explain the same things, it can be useful to ask how complicated they are compared to one another. If one model makes fewer assumptions and uses fewer variables, but has the same explanatory power, it is a simpler model to employ (and perhaps more capable of lending itself to a compelling theory).
Generalizability of Findings: Few research papers make the effort to forecast beyond the sample they studied, but when they do this can be a powerful thing. For instance, if research in NBA basketball is robust enough to suggest a broad theory of decision-making applicable to, say, the operations of government or retail business, then the research may have tapped into a valuable general insight about people. That is a very cool thing.
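
To make “explanatory power” concrete for fellow armchair readers, here is a minimal sketch in Python, using invented data, of fitting a one-variable model and computing R-squared, the statistic behind statements like “X explains 55% of the variation in Y”:

```python
# A minimal sketch of "explanatory power": fit a one-variable linear model
# and report the share of variance in Y explained by X (R-squared).
# The data here is made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                        # independent variable (the explainer)
y = 0.7 * x + rng.normal(scale=0.6, size=200)   # dependent variable (what we explain)

slope, intercept = np.polyfit(x, y, 1)          # ordinary least squares fit
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)               # variance left unexplained
ss_tot = np.sum((y - np.mean(y)) ** 2)          # total variance in Y
r_squared = 1 - ss_res / ss_tot

print(f"X explains about {r_squared:.0%} of the variation in Y")
```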

My Impressions of the Six Papers I Saw

Let me share my impressions. It is worth noting that I saw only nine out of 70 presentations. I am commenting on six of them, and I am focusing on the presentations of the work as an outsider, not on the academic research papers themselves.

1. Race and The NBA Draft by David Berri, Steve Walters, and Jennifer VanGilder

It was fun to see Dave in action presenting this work. He was animated and lively, getting the whole room to perk up. This research starts by asking whether a player’s race has an unfair effect on draft selection, quickly answers ‘no,’ and then offers strong evidence that several other factors do. The coup de grace is that most of these factors do not help teams win–in other words, NBA teams typically draft the wrong players first.

The approach Berri, Walters, and VanGilder took to measuring race–they sampled RGB (red, green, blue) color values from NBA photo headshots to assign each player a “colorism” score–seemed novel (though Berri noted it was an established research technique). It took a contentious and seemingly qualitative trait and made it measurable. However, the subject of race did not really matter in the findings.
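
For the curious, here is a rough sketch of how a score like that could be computed from a headshot. This is my own illustration, not the authors’ actual procedure; the file name, patch coordinates, and lightness formula are all assumptions:

```python
# A rough sketch (not the researchers' actual method) of deriving a "colorism"
# score from a headshot: sample a patch of skin pixels and average their
# lightness. The file path and patch box below are hypothetical.
from PIL import Image
import numpy as np

def colorism_score(path, box=(80, 60, 120, 100)):
    """Return mean lightness (0-255) of an RGB patch; lower means darker skin tone."""
    img = Image.open(path).convert("RGB")
    patch = np.asarray(img.crop(box), dtype=float)   # (height, width, 3) array of R, G, B
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b         # perceived-lightness approximation
    return luma.mean()

# score = colorism_score("player_headshot.jpg")      # hypothetical image file
```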

In the discussion that followed many people urged Berri to look deeper for evidence of racial bias, for instance by considering the race of the decision-makers in contrast to the players. It’s true that absence of evidence is not evidence of absence, but this is where experience and intuition should inform judgment about whether to pursue this part further. Otherwise wouldn’t research risk falling prey to confirmation bias?

Since the thrust of the article had nothing to do with evidence of racial bias in draft position, I’d love to see Berri and Walters refocus it on the factors they found that do influence draft position. Because it builds on his previous work, that would make Berri’s research on draft position that much more robust.


2. Sequential Judgment Effects in NBA Officiating: An Analysis of Referee Bias in Make-Up Call Situations by Paul Gift

Paul offered one of the most energetic presentations on a great topic: make-up calls. I think this research wields a double-edged sword of ‘fairness’: 1) Do referees attempt to “make up” for calls they know were bad? 2) Should they attempt to restore balance like this or does this introduce another, worse form of unfairness (i.e. a second bad call that is willful)?

The most interesting thing to me that came out of this paper involved how to position it for people. Gift admitted he used the word “bias” to make the paper “more provocative” instead of using the more precise and fair phrase “increased scrutiny.” However, when he offers a mouthful of a title beginning with the phrase “sequential judgment effects” it’s clear he’s speaking to peers, not the general public.

What I appreciated in Gift’s work was how elaborate his model was, yet he never took a step without offering reasons why. His energy and examples made a dense argument something people could follow. The paper also offered significant evidence of makeup calls in NBA officiating, in spite of league policy to the contrary, but no evidence of makeup non-calls.


3. Put Me In The Game Coach: The Effect of All-Star Status On Playing Time by Joshua Price with Richard Patterson

Josh (brother of Joe Price) was amazing. He even took the time to answer my layman’s questions about economic significance on a walk one day during the conference. The purpose of his and Patterson’s research seems to be to provide evidence that teams value reputation over winning, which appears to defy self-interest and violate the goal of the game (to win).

Josh’s evidence shows a rise in star players’ minutes up until the all-star lineup is announced, followed by a sharp drop. While minutes continue a steady decline afterward (as teams apparently “save” their best for the playoffs), the focus of Josh’s paper is that sudden post-announcement drop as evidence that teams chase the short-term gain of landing their best players on the all-star team to give the franchise a boost.

Much of the critique discussant Dan Stone and others offered was directed toward “getting sharper results” by controlling for players in televised games, looking at blowout games, etc. Josh ended with several provocative questions, but I would love to see him expand these to focus on this idea of “loss leaders” in basketball (i.e., unproductive “stars” who are heavily merchandised by the team for gains that are never realized in terms of revenue).


4. Measurement Error and The Hot Hand by Dan Stone

This was one of two pieces of research about the “hot hand” in basketball. Its purpose is to create doubt within the economics community about widely accepted findings that throw a cold, statistical splash of water on a treasured belief of fans and players alike–that sometimes you get hot and can do no wrong putting the ball in the basket.

Stone took this furthest in his conclusion by asserting that luminaries in behavioral economics such as Daniel Kahneman and Richard Thaler–nonbelievers in the “hot hand”–are themselves guilty of a bias toward proof that average people get things wrong. To me this raised the question of whether Stone wasn’t biased toward pulling a similar gotcha on them, though he clearly respects and appreciates their work.

His presentation was extremely captivating in the beginning and in its conclusion. Like Fox Mulder felt with UFOs, I want to believe in the “hot hand.” Evidence of its existence would be thrilling confirmation of something ‘we all know.’ However, the core of Stone’s argument was incredibly complicated, dense, and difficult to follow. While he cited evidence of measurement error as an “extreme bias”–calling into question all previous research on the ‘hot hand’–this seemed more a quibble than a slam dunk case against it.

One thing I did find compelling was Stone’s mention of research in bowling and archery, where outcomes are more precise and demonstrate strong evidence for a ‘hot hand.’ I would love to know more about the quality of those findings.

Dan kindly offered links to his research paper as well as presentation slides.


5. The Payoff to Consistency in Performance — The Case of the NBA by Christian Deutscher

This research asks whether consistent performers are rewarded with bigger paychecks than inconsistent ones. The undercurrent to this work seems to ask whether management fairly rewards consistent performers without unfairly rewarding inconsistent ones. Christian’s introduction of the topic reminded me of one of my favorite quotes:

“The consistent work enhanced my act. I learned a lesson: It was easy to be great. Every entertainer has a night when everything is clicking. These nights are accidental and statistical: Like lucky cards in poker, you can count on them occurring over time. What was hard was to be good, consistently good, night after night, no matter what the abominable circumstances.” –Steve Martin, Born Standing Up

Deutscher’s findings indicated that consistent performance on offense does correlate with higher pay (hooray for fairness and good judgment), but his seems to be an early work in progress. He ended with an open question to further his work: do players anticipate this and strive for consistency?

What I thought I noticed during his presentation–and something others called out in discussion afterward–was that Deutscher opted to reinvent the wheel, so to speak, when specifying the measures he used. Since he is not an expert specifically on the NBA, it would seem to make sense for him to adopt the models and measures Berri and others have already created when moving forward. It would provide a better foundation for the work.
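
To make that critique concrete, here is one simple way “consistency” could be specified, sketched with invented data. I do not know Deutscher’s actual measure, so treat the coefficient of variation and the toy salary relationship below purely as illustration:

```python
# A minimal sketch, not Deutscher's actual specification: measure each player's
# scoring consistency as the coefficient of variation (CV) of game-by-game
# points, then correlate it with salary. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(7)
n_players, n_games = 100, 82

# Simulated points per game for each player (Poisson around a player-specific average)
points = rng.poisson(lam=rng.uniform(5, 25, n_players)[:, None], size=(n_players, n_games))
cv = points.std(axis=1) / points.mean(axis=1)        # lower CV = more consistent scorer

# Toy salary: rewards average scoring, penalizes inconsistency, plus noise
salary = 0.4e6 * points.mean(axis=1) - 2e6 * cv + rng.normal(0, 1e6, n_players)

print("correlation between inconsistency (CV) and salary:",
      round(np.corrcoef(cv, salary)[0, 1], 2))
```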


6. Misses in ‘Hot Hand’ Research by Jeremy Arkes

Similar to Dan Stone’s research, this one seeks to show that cold, hard analysis can sometimes happily reveal that conventional wisdom around magical things like the ‘hot hand’ is true.

While I found Arkes a very entertaining and enjoyable presenter–thorough in his explanation of the research and relatable in the stories he used as examples–the actual story he is telling did not seem terribly compelling on the face of it and felt at times like splitting hairs.

Arkes is not presenting evidence that the ‘hot hand’ exists in basketball; rather, he runs simulated models to show that the statistical tools other researchers use to try to measure the ‘hot hand’ are flawed and likely to radically understate potential positive evidence. His own research from 2010 demonstrated a small but significant effect even under the same conditions he argues would understate it.
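
Here is a toy version of that kind of simulation, entirely my own sketch rather than Arkes’s actual setup: bake a genuine ‘hot hand’ boost into simulated shooting sequences, then measure it with a naive conditional-frequency estimator and watch the estimate come out smaller than the truth. All parameters are made up:

```python
# A hedged sketch of the idea: simulate shooters whose make probability rises
# by BOOST after a made shot (a true hot hand), then estimate the boost as
# P(make | previous make) - P(make | previous miss). With short sequences the
# naive estimator tends to understate the true effect.
import numpy as np

rng = np.random.default_rng(42)
BASE_P, BOOST, N_SHOTS, N_PLAYERS = 0.45, 0.10, 20, 10000

estimates = []
for _ in range(N_PLAYERS):
    shots = np.empty(N_SHOTS, dtype=int)
    shots[0] = rng.random() < BASE_P
    for t in range(1, N_SHOTS):
        p = BASE_P + (BOOST if shots[t - 1] else 0.0)   # true hot-hand effect
        shots[t] = rng.random() < p
    prev, curr = shots[:-1], shots[1:]
    after_make, after_miss = curr[prev == 1], curr[prev == 0]
    if len(after_make) and len(after_miss):
        estimates.append(after_make.mean() - after_miss.mean())

print(f"true boost: {BOOST:.2f}, average estimated boost: {np.mean(estimates):.3f}")
```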

That, in itself, was fascinating to hear as an outsider. However, it is procedural and seems to rest on the assumption that the ‘hot hand’ has to exist if only we can find the right tools to measure it. My intuition is that this is what Occam’s Razor is for–to prevent us from falling in love with too many unfounded assumptions! (All that said, I told Arkes I still “want to believe” in the hot hand, and that’s true.)


Coming up next, in the third and last post of my WEA Conference coverage, I will share four of the big themes that emerged from the conference about NBA Insiders and Outsiders, as well as some advice for presenters about following in Alfred Marshall’s and Wages of Wins’ footsteps to reach as broad an audience as possible without dumbing things down.
