# John Hollinger, Dean Oliver, and Some Other People Comment on Plus/Minus

The March 8th issue of ESPN the Magazine includes an article by John Hollinger on the subject of plus/minus.  In “Fuzzy Math: Plus/Minus Tell a Story, Though Not the Whole One”, Hollinger details the problems with the latest addition to the standard box score.  Unfortunately I was unable to find an on-line version of the article, so let me try to summarize the issues Hollinger raises.

• The first critique comes from Dean Oliver (author of Basketball on Paper and currently the stats person with the Denver Nuggets).  Oliver is quoted saying, “It’s (the plus/minus measure) noisy, uncertain and kind of a black box – you have a hard time understanding why it’s coming out the way it is.”
• Hollinger also notes the plus/minus stat doesn’t help us compare players across teams.
• On a related point, the stat also doesn’t take into account substitution patterns.

Much of what Hollinger says in this article was originally stated in an article he first posted at ESPN.com in 2005 (insider access required). That article also noted that a player’s teammates impacted his plus/minus.

Fans of this approach, though, might argue that all that’s needed is adjusted plus/minus.  This approach (originally developed by Wayne Winston and Jeff Sagarin) employs regression analysis to control for a player’s teammates.  Theoretically, adjusted plus/minus should answer Hollinger’s critiques (though, as I will note in a moment, it does not answer Oliver’s criticisms).
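To make the regression idea concrete, here is a minimal sketch in Python.  The stint data and player names are entirely made up, and this is not the actual Winston/Sagarin implementation; it only shows the basic structure: each stretch of play with a fixed lineup becomes a row, each player a column, and the coefficients estimate each player’s impact after controlling for who else was on the floor.

```python
import numpy as np

# Toy stint data: each row is one stretch of play with a fixed lineup.
# Columns are players: +1 if on the court for the "home" side, -1 for
# the "away" side, 0 if on the bench. (All values here are invented.)
players = ["A", "B", "C", "D"]
X = np.array([
    [ 1,  1, -1, -1],
    [ 1, -1,  1, -1],
    [-1,  1,  1, -1],
    [ 1,  1, -1, -1],
], dtype=float)

# Point differential per 100 possessions during each stint (invented).
y = np.array([4.0, -2.0, 1.0, 5.0])

# Least-squares fit: each coefficient is a player's estimated impact
# on the point differential, holding the other players constant.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(players, coef):
    print(f"{name}: {c:+.2f}")
```

Real versions of this regression use thousands of stints and hundreds of players, which is precisely where the noise and large standard errors discussed below come from.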

When we look at the adjusted plus/minus numbers, though, it doesn’t look like Hollinger’s issues have gone away.  Consider the case of Darius Songaila.

Last season Songaila posted a -0.076 WP48 [Wins Produced per 48 minutes] with the Washington Wizards.  This result is not unusual.  Songaila posted WP48 marks in the negative range in 2005-06, 2006-07, and 2007-08 (he posted a 0.056 mark – his career best – as a rookie at the age of 25 in 2003-04).

Adjusted plus/minus, though, told a very different story.  According to basketballvalue.com, Songaila was the third best player on the Washington Wizards in 2008-09.  Again, adjusted plus/minus is supposed to control for a player’s teammates.  So when the New Orleans Hornets acquired Songaila, they probably expected to see a positive adjusted plus/minus as well.  But currently Songaila is posting the lowest adjusted plus/minus in New Orleans.   So with completely different teammates, Songaila – according to adjusted plus/minus – is a very different player.

This is odd, though, since Songaila’s WP48 with the Hornets is still in the negative range.  In other words, his box score numbers are not very different despite the fact that his teammates have all changed.

This result is not just confined to Songaila.  JC Bradbury and I (in a forthcoming article in the Journal of Sports Economics) report that only 7% of a player’s adjusted plus/minus is explained by what the player did the previous season (oddly enough, unadjusted plus/minus has a stronger, albeit still relatively weak, correlation).  In other words, the correlation coefficient for adjusted plus/minus from season to season is below 0.30.  And when we look at players who switch teams, as Songaila did, we fail to find a statistically significant relationship.  In contrast, any measure based on the box score (PER, Wages of Wins measures, NBA Efficiency, Win Shares, etc.) will have a correlation coefficient of at least 0.65, and often these marks are above 0.80.  And that correlation remains strong even when a player changes teams.
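As a quick check on the arithmetic connecting these two statements: “percent of variance explained” is the square of the correlation coefficient, so 7% explained corresponds to a correlation of about 0.26 (below 0.30, as stated), while a box-score correlation of 0.65 explains roughly 42% of the variance.

```python
import math

# Share of variance explained (R^2) implies a correlation r = sqrt(R^2).
apm_r2 = 0.07          # adjusted plus/minus, per the result cited above
box_r = 0.65           # low end cited for box-score-based measures

apm_r = math.sqrt(apm_r2)
box_r2 = box_r ** 2

print(f"adjusted +/-: r = {apm_r:.2f}, variance explained = {apm_r2:.0%}")
print(f"box score   : r = {box_r:.2f}, variance explained = {box_r2:.0%}")
```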

What does this mean for decision-makers?  Decisions are about the future.  Unfortunately, because plus/minus is so inconsistent across time, it doesn’t appear this measure can be relied upon to make decisions about the future.

It’s important to note that inconsistency is not the only problem with this measure.  The standard errors associated with this measure – even when multiple years are added – tend to be so large that for many players the results are statistically insignificant (Bradbury and I make this point in our article as well).
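The significance point can be illustrated with the usual rule of thumb: an estimate within roughly two standard errors of zero is statistically indistinguishable from zero at about the 95% level.  The numbers below are purely hypothetical, chosen only to show how a seemingly large coefficient can still be insignificant.

```python
# Rule of thumb: an estimate is statistically indistinguishable from
# zero (at ~95% confidence) when it lies within two standard errors.
# These values are hypothetical, not from any actual player.
estimate, std_err = 2.5, 3.0   # adjusted +/- coefficient and its SE

t_stat = estimate / std_err
significant = abs(t_stat) > 2.0
print(f"t = {t_stat:.2f}, significant at ~95%? {significant}")
```

With a standard error larger than the estimate itself, this hypothetical player’s adjusted plus/minus tells us essentially nothing, which is the problem Bradbury and I document for many real players.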

Even if the problems of inconsistency and the standard errors could be solved, the critique from Dean remains.  As Dean notes, this measure is essentially a “black box.”  A decision-maker has no idea why a specific result is obtained.  So it’s hard to know what the results mean.

One can state this last critique as follows: what plus/minus shows is a correlation.  When a specific player is on the court, a team tends to play well or poorly.  But it doesn’t show causation.  And therefore, it’s hard for a decision-maker to know what this really means.

Of course, all of this doesn’t stop decision-makers from using this information.  And as Avery Johnson details, the Golden State Warriors’ upset of the Dallas Mavericks in 2007 can be partially attributed to Johnson following the dictates of plus/minus analysis.

Let me close with three more observations.

• One senses that people might be able to tell a story about why Songaila’s plus/minus numbers have changed.  Such stories, though, are also a problem.  Analysis should begin with a story, and then this story should be tested.  We should try to avoid looking at a test and then making up a story.
• We should note that adjusted plus/minus analysts have fully acknowledged Dean’s observation that this measure has “noise.”  Unfortunately, when specific players are analyzed this observation seems to vanish.  In other words, we never seem to see someone argue that a player’s current adjusted plus/minus is just “noise.”  But if there is “noise” in the model, some of these results have to also be “noise.”
• The fact that some teams have turned to such measures confirms what has been argued about traditional approaches to player evaluation.  Teams are turning to these measures because the traditional approaches do not appear to work.

So although adjusted plus/minus has problems, it is understandable that teams are turning to this measure.  One suspects, though, that the problems (detailed by Hollinger, Oliver, Bradbury, and me) are simply not well understood by everyone.

– DJ

Our research on the NBA was summarized HERE.

The Technical Notes at wagesofwins.com provide substantially more information on the published research behind Wins Produced and Win Score.

Wins Produced, Win Score, and PAWSmin are also discussed in the following posts:

Simple Models of Player Performance

Wins Produced vs. Win Score

What Wins Produced Says and What It Does Not Say

Introducing PAWSmin — and a Defense of Box Score Statistics

Finally, A Guide to Evaluating Models contains useful hints on how to interpret and evaluate statistical models.