Too Much Background on my Latest for Time.com

My latest for Time.com — What NBA Referees Can Teach Us About Overcoming Prejudices — discusses the research on implicit bias and NBA referees.  This research — initially conducted by Joe Price and Justin Wolfers (and later with Devin Pope) — indicates that NBA referees exhibited implicit bias.  But when this research was publicized, the evidence of implicit bias vanished.

At Time.com I noted that this is an important story.  Implicit bias appears to be fairly widespread. And the study of NBA referees shows that it can occur even in a work environment that is both integrated and heavily scrutinized.  However, this research also shows that if people are aware of the bias, it can be eliminated.  All in all, this research is good news for everyone. But it is especially good news for the NBA.  We have learned that the NBA had an issue in the past.  Today, though, that problem appears to be gone. Despite this, the NBA recently indicated — as I note at Time.com — that the original research by Price and Wolfers is “not valid”.  And this conclusion is based on research conducted by consultants paid by the NBA.

As I note in the Comment Policy (which I recently updated), I am now primarily focused on writing for other outlets (like Time.com).  So this forum is only being used to point people to this work, and maybe to offer a few more thoughts on what is being said.  For the most part, that means posts here are rather short.  But this particular story at Time.com has a larger back-story.  So this post will go on a bit (it also reflects the fact that our semester has ended, so I have a bit more time to write).

More on the NBA Referee Story

Let’s begin with a bit more on the consultants the NBA hired to refute the Price-Wolfers work.  Before I start, let me emphasize that this is not a discussion of consultants vs. academics.  I have known consultants who do very good work (in fact, I have done some consulting!).  But the work of consultants does not often go through the same peer review process faced by academic researchers.  And in this case, that seems like a problem.

Price and Wolfers — in a subsequent paper published in Contemporary Economic Policy — note a number of problems with the work by the consultants hired by the NBA. And the following — at least for me — is the most “impressive” (and yes, that is sarcasm).

As Price and Wolfers note…

“The first point that emerges from these tables is that many of the regression models that the NBA runs in their analysis are redundant. For example, models 5-8 are literally identical, with the only difference being which group is used as the omitted category. The same is true for models 1 and 3 and models 2 and 4. In each case the coefficients on the player or referee race dummy variable are the same but with the opposite sign.”

In a study like this, a dummy variable is used that can be coded 1 = white, 0 = black.  Or it can be coded 1 = black, 0 = white.  If you have taken even one econometrics class, you know that it makes no difference which way you define this dummy variable.  As Price and Wolfers note, the coefficient will be “the same but with the opposite sign”.  But the consultants sold two models that differed only by how this dummy variable was defined.  And they called these “different” models.  Again, though, these models are not “different.”
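To see this point concretely, here is a minimal sketch in Python.  The data are simulated (not the actual referee data), but the algebra is the same: recoding the dummy only flips the sign of its coefficient.

```python
# Minimal sketch with simulated data: recoding a dummy variable
# (1 = white vs. 1 = black) only flips the sign of its coefficient.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
white = rng.integers(0, 2, n)                    # 1 = white, 0 = black
fouls = 2.0 + 0.3 * white + rng.normal(0, 1, n)  # made-up foul rates

def dummy_coefficient(dummy, y):
    """Coefficient on the dummy in a regression of y on [constant, dummy]."""
    X = np.column_stack([np.ones(len(dummy)), dummy])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(dummy_coefficient(white, fouls))      # some estimate near 0.3
print(dummy_coefficient(1 - white, fouls))  # same magnitude, opposite sign
```

Run this and the two printed coefficients are identical except for the sign.  That is why counting those as two “different” models is hard to defend.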

As noted, this is not the only problem with the consultants’ work.  And yet, the NBA insists their consultants’ story trumps the research published in academic journals.

The Economic Impact of Sports

Furthermore, this is hardly a unique story.   A couple of weeks ago I was interviewed by Jeff Mosier of the Dallas Morning News.  The subject of the discussion was the economic impact of hosting a bowl game.  My position was quite simple:  Academic studies have demonstrated that there isn’t much economic impact from sports.

In contrast — as you can see in the article — a consultant with experience selling economic impact studies was also interviewed.  This is not uncommon.  Economic impact studies from consultants often find significant impact from sports.  And these impact studies are used to justify public subsidies for sports.

A recent example is the proposed DC United stadium deal.  The New York Times recently ran an article detailing the public subsidies supporting this stadium.  And as Aaron Gordon of Vice Sports argued, the Times article is best described as a “trainwreck”.

The Times article argues that stadiums can revitalize neighborhoods.  Here is how Gordon reacts to this claim:

This, more than anything else in the entire article, suggests Mr. Eugene L. Meyer has never read a lick of research on the subject. Here’s a quote from a meta-study (a study of studies) done by two economists, Brad Humphreys and Dennis Coates:

“We find near unanimity in the conclusion that stadiums, arenas and sports franchises have no consistent, positive impact on jobs, income, and tax revenues.”

Economists don’t agree on anything—I have literally heard two economists argue about the best way to wait in line while waiting in line—but they agree on this. Meyer’s assertion, that baseball stadiums have helped spur development in inner cities, is a myth. A total and complete myth, one that the academic community has known for over two decades. Given the amount of information now available at our disposal with a simple Google search, asserting otherwise is a lie and, for a journalist, ethically questionable.

So just as we saw with the study of referees and race, paid consultants tell a story that contradicts the academic research.  And despite what the published research makes clear, the consultants are able to continue to get paid to tell these contradictory stories.

Measuring Player Performance in the NBA

And I have one more example.  This particular example is probably only interesting to me.  But every once in a while I still see this “research” being referenced.  So I thought I would toss out some more thoughts on it (and yes, regular readers of my work will note that much of this has been said before).

The Wages of Wins appeared in 2006, and within its pages you can find the details of the Wins Produced model for player performance.  This simple model takes what would seem to be the obvious approach.  Previous models — like Player Efficiency Rating or Win Shares — were constructed by inventing the weights attached to the individual statistics.  An alternative to this approach is simply to use standard regression analysis (the same sort of regression analysis used in the previous two stories) to create a model that empirically ascertains the value — in terms of wins — of the box score statistics tracked by the NBA.  The specification of the models employed takes some work (and a few papers were published with different specifications).  But once one knows the value of the individual statistics, one can ascertain how many wins each individual player produces.
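For readers who want the flavor of that approach, here is a stylized sketch.  The data, the reduced set of statistics, and the weights are all synthetic stand-ins; this is not the actual Wins Produced specification, just the regression idea behind it.

```python
# Stylized sketch of the regression idea: estimate the win value of box
# score statistics at the team level, then credit players accordingly.
# Data, statistics, and weights here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
stats = ["pts", "reb", "stl", "tov"]              # illustrative subset
n_teams = 300                                     # hypothetical team-seasons
box = rng.normal(0, 100, size=(n_teams, len(stats)))
true_w = np.array([0.03, 0.03, 0.03, -0.03])      # made-up win values
wins = 41 + box @ true_w + rng.normal(0, 3, n_teams)

# Regress team wins on box score aggregates to recover the weights.
X = np.column_stack([np.ones(n_teams), box])
beta, *_ = np.linalg.lstsq(X, wins, rcond=None)
weights = dict(zip(stats, beta[1:]))

def wins_produced(player_totals):
    """Credit a player with wins using the estimated weights (a simplification)."""
    return sum(weights[s] * player_totals[s] for s in stats)

print(wins_produced({"pts": 1500, "reb": 500, "stl": 100, "tov": 200}))
```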

Soon after this book appeared, two NBA consultants — who were selling productivity analysis to an NBA team — wrote an article (that was never published and, I think, is no longer available online) comparing Wins Produced to other performance models.  Like the aforementioned consultants hired by the NBA to refute Price-Wolfers, the consultants attacking Wins Produced offered work with some real problems.  Let me see if I can explain one of the bigger problems (there was more than one).

There is an obvious approach one would take if you wished to see which of two models — like the Player Efficiency Rating (PER) and Wins Produced — is better.  One could simply ask: “Which model does a better job of explaining wins?”
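Here is a sketch of that comparison, with two synthetic metrics standing in for actual PER and Wins Produced numbers:

```python
# Compare two player metrics, aggregated to the team level, by how much of
# the variation in team wins each explains.  Both metrics are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 300
wins = rng.normal(41, 12, n)             # hypothetical team wins
metric_a = wins + rng.normal(0, 3, n)    # a metric that tracks wins closely
metric_b = wins + rng.normal(0, 10, n)   # a noisier metric

def r_squared(x, y):
    """R^2 from a simple regression of y on x."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(r_squared(metric_a, wins))  # closer to 1: explains wins better
print(r_squared(metric_b, wins))  # lower
```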

The consultants, though, decided they didn’t like this approach.  Wins Produced was designed to explain wins.  PER was not.  So focusing on explanatory power clearly puts PER at a huge disadvantage.  And if you really want the story to be that Wins Produced is not better (or is even worse), you need a different approach.  And the approach the consultants took was to force each model to completely explain current outcomes.  By taking this approach, all models had the same explanatory power as Wins Produced.

Let me just re-state the approach taken.  Rather than figure out which model did a better job of explaining team outcomes, the authors forced all models to have the same explanatory power.  And then they declared that, from that perspective, all models were equal in their ability to explain current outcomes.  From that point, they then tried to forecast the future with Wins Produced and their distorted new models (the word “distorted” is appropriate since they were not forecasting with PER, but with a PER model that was forced to explain team outcomes).

Allow me to illustrate the silliness of this approach.  Studies over several decades have demonstrated that NBA salaries are related to box score statistics, other player characteristics (like player age), and team characteristics (like market size and team wins).  But imagine I thought that all that mattered was the zip code of the athlete’s hometown.  Following the approach of these two consultants, I wouldn’t simply see how these two models explained salaries.  Instead, I would force each model to have the same explanatory power and then use my distorted zip code model to forecast the future.
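To make the zip code analogy concrete, here is a toy illustration (entirely my construction, not the consultants’ actual procedure) of what “forcing” a model to explain outcomes does:

```python
# Toy illustration: absorb each team's residual into an adjustment term and
# even a meaningless predictor "explains" this season perfectly.  Any apparent
# forecasting power then comes from the adjustment, not from the model.
import numpy as np

rng = np.random.default_rng(3)
n = 30
skill = rng.normal(0, 10, n)                   # persistent team quality
wins_now = 41 + skill + rng.normal(0, 4, n)    # this season
wins_next = 41 + skill + rng.normal(0, 4, n)   # next season

zip_model = rng.normal(41, 10, n)              # "zip code" model: pure noise
adjust = wins_now - zip_model                  # forced fit to current outcomes
forced = zip_model + adjust                    # equals wins_now exactly

print(np.corrcoef(zip_model, wins_next)[0, 1])  # near zero: the model knows nothing
print(np.corrcoef(forced, wins_next)[0, 1])     # high, but only because of the fudge
```

The “forced” model predicts next season only because the adjustment term smuggles in this season’s actual wins.  The zip codes contribute nothing, which is exactly the problem with comparing models this way.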

Yes, this approach is frankly odd (to say the least).  In my career I have published dozens of academic papers.  No referee has ever asked for this sort of approach.  And I have also reviewed countless papers.  Again, I have never seen anyone do this.

And it was not the only problem with the paper.  Every section of the paper — including an obviously misspecified NBA salary model that ignored the aforementioned decades of research (and actually contradicted the thesis of the consultants’ paper) — had glaring errors suggesting the consultants did not have a clear idea how to empirically answer any of the questions they asked.

Just as Price and Wolfers responded to the NBA’s consultants, the problems with the consultants’ work attacking Wins Produced were noted in Stumbling on Wins and in an academic article I wrote with JC Bradbury.  In addition, we noted that models like PER do not really explain wins very well.  We even noted how well PER plus the defensive variables used in Wins Produced explain wins (again, not that well).  Yes, one could argue that Wins Produced explains more because it includes more variables.  But even with the additional variables, PER does not do as good a job of explaining wins (and if you understand the problems with PER, you can understand why this is the case).

Nevertheless, I still occasionally see people reference the story told by these consultants (again, work that was never published and that was addressed in subsequent academic work that actually was).

The Basic Story

Again, that last story is probably not interesting to many people beyond myself.  But this last tale is essentially the same as the first two stories told.  Consultants make claims that academics clearly refute.  This, though, does not end the debate.  The consultants — and/or those who remember the consultants’ work — simply proceed as if nothing was ever said.  And we see that this actually works (at least, for some people).

All of this reminds me of the following quote from programmer Alberto Brandolini:

“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”

I should emphasize again: I know consultants who do very good work.  So I don’t want to paint this entirely as a consultant vs. academic debate.  That being said… there does seem to be a problem when people ignore the fact that the claims of consultants have been addressed in the academic literature.

Of course, there is a simple explanation for this outcome.  Most people never read academic articles.  So no matter how many times academics refute the consultants in peer reviewed research, the consultants just proceed as if nothing has ever been said, because few people read and understand what the academics have published.

At this point I would like to tell you that there is a simple solution to this problem.  But really, I am not sure that there is.  There is nothing that prevents the NBA from declaring the Price-Wolfers research “invalid”.  And nothing stops consultants from saying that sports have a large economic impact or that models like PER are really just as good at explaining wins as Wins Produced (and yes, one of those stories is not nearly as important as the other two!).

All the academic can do is simply note what the empirical evidence seems to say.  If people choose to hear other stories… well, such is life.  Again, there is probably not enough energy in the world to stop this from happening.  And if you devote your energy to trying to correct everyone who is getting it “wrong”, well, maybe you are not allocating your time very well.

And yes, the length of this post illustrates that I don’t always get this last point.  But you see, classes have ended.  And I really didn’t have much else to do this weekend…

– DJ