# Introducing PAWSmin – and a defense of box score statistics

A few days ago I introduced PAWS, or Position Adjusted Win Score. This is not actually a new metric, just a new name for Win Score adjusted for position played. At the time, I also noted PAWSPERM, or PAWS per minute. As noted on Friday, PAWS-PERM also spells PAW-SPERM, hence the obvious need for a new name.

After soliciting suggestions I have settled on PAWSmin – a candidate not actually suggested by anyone.

Now that we have a name (which I don’t think spells anything stupid or offensive), let me spend a few moments discussing the story this metric tells us about the value of box score statistics in the NBA.

## A Start – the Win Score formula

The story begins with Win Score, the simple metric we introduced in The Wages of Wins:

Win Score = PTS + REB + STL + ½*BLK + ½*AST – FGA – ½*FTA – TO – ½*PF

The analysis reported in The Wages of Wins indicates that points, rebounds, steals, field goal attempts, and turnovers have basically the same impact – in absolute terms – on team wins. Blocked shots, assists, free throw attempts, and personal fouls have less of an impact, so the value of these latter four statistics is set at ½, a simplification that does not alter the accuracy of our analysis. To illustrate, there is a 0.99 correlation between Win Score per minute (with the simplified values) and the per 48 minute evaluation of a player (with each statistic weighted exactly by its value in terms of wins).
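For readers who like to compute this themselves, the formula can be sketched in a few lines of Python (the function name and the stat line below are mine, purely for illustration):

```python
def win_score(pts, reb, stl, blk, ast, fga, fta, to, pf):
    """Win Score: points, rebounds, steals, field goal attempts, and
    turnovers carry full weight; blocks, assists, free throw attempts,
    and personal fouls carry half weight."""
    return (pts + reb + stl + 0.5 * blk + 0.5 * ast
            - fga - 0.5 * fta - to - 0.5 * pf)

# A hypothetical stat line (illustrative numbers, not a real box score):
print(win_score(pts=20, reb=5, stl=2, blk=1, ast=7, fga=15, fta=6, to=3, pf=2))  # 9.0
```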

## Calculating PAWS

Position matters in evaluating basketball players. Centers and point guards do not do the same things on a basketball court. Consequently, if you wish to compare players you must adjust for position played. The per-minute position averages for Win Score (from the years 1993-94 to 2004-05) are as follows:

• Centers: 0.225
• Power Forwards: 0.215
• Small Forwards: 0.152
• Shooting Guards: 0.128
• Point Guards: 0.132

To calculate PAWS, you subtract from a player’s Win Score the Win Score an average player at his position would have produced in these minutes. For example, after 56 games Chris Paul has a 274 Win Score. An average point guard would have a 188.9 Win Score in the 1,430 minutes Paul played. So Paul’s PAWS (which is hard to say) – calculated by subtracting 188.9 from 274 – is 85.1.
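Using the position averages above, the Chris Paul example can be reproduced with a short sketch (the helper name is mine; the small difference from the 85.1 reported above comes from rounding the positional averages to three decimals):

```python
# Per-minute Win Score averages by position, 1993-94 through 2004-05
POSITION_AVG = {"C": 0.225, "PF": 0.215, "SF": 0.152, "SG": 0.128, "PG": 0.132}

def paws(win_score_total, minutes, position):
    """PAWS: a player's Win Score minus the Win Score an average player
    at his position would have produced in the same minutes."""
    return win_score_total - POSITION_AVG[position] * minutes

# Chris Paul through 56 games: 274 Win Score in 1,430 minutes at point guard
print(round(paws(274, 1430, "PG"), 2))  # 85.24
```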

## Calculating PAWSmin

Although totals can tell us something, per-minute values are perhaps more important. Per-minute PAWS, or PAWSmin, is calculated by subtracting the average per-minute Win Score at a player’s position from a player’s per-minute Win Score. For example, Chris Paul has a per-minute Win Score of 0.192. Since he is a point guard, we subtract 0.132 to arrive at Paul’s PAWSmin of 0.060.
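The same calculation in code (again, the helper name is my own). Note that 274 ÷ 1,430 is 0.1916 before rounding, so the unrounded PAWSmin is 0.0596:

```python
def pawsmin(win_score_total, minutes, position_avg):
    """PAWSmin: per-minute Win Score minus the per-minute positional average."""
    return win_score_total / minutes - position_avg

# Chris Paul: 274 Win Score in 1,430 minutes as a point guard (average 0.132)
print(round(pawsmin(274, 1430, 0.132), 4))  # 0.0596
```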

## The importance of the team adjustment

PAWSmin requires a bit of effort, but not quite the effort needed to calculate Wins Produced and Wins Produced per 48 minutes [WP48]. To calculate Wins Produced we need to note the following:

• A player’s statistics, valued in terms of the impact these statistics have on wins
• The average performance at a player’s position
• The value of team statistics, an adjustment that allows us to account for the quality of a team’s defense and the pace at which the team plays

We find that Wins Produced explains about 95% of a team’s wins – in other words, it sums quite closely to the actual number of wins a team achieves. This is not surprising, since Wins Produced is derived from a team’s offensive and defensive efficiency, two metrics that also explain about 95% of team wins.

The accuracy of Wins Produced has led a few to suspect that it’s all in the team adjustment. In reality, though, the team adjustment has virtually no impact on the evaluation of players offered by Wins Produced and WP48.

To illustrate this point, consider the correlation between WP48 and PAWSmin. Remember, WP48 has a team adjustment; PAWSmin does not. If the team adjustment were driving the story, the evaluation with the team adjustment would differ sharply from the evaluation without it. But when we look at WP48 and PAWSmin we find that these two metrics have a 0.994 correlation. In other words, these metrics are virtually identical.

Furthermore, you can actually use PAWSmin to project WP48. For those interested, the equation is as follows:

WP48 = 0.104 + 1.621*PAWSmin

With this equation you can project WP48 and Wins Produced without ever touching a team adjustment. The equation explains 99% of the variation in WP48, which tells us once again that PAWSmin and WP48 are saying the same thing.
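Plugging Chris Paul's unrounded PAWSmin of 0.0596 into the equation (a quick sketch; the function name is mine) projects a WP48 of about 0.20, roughly double the intercept of 0.104 that an average player – one with a PAWSmin of zero – would project to:

```python
def project_wp48(pawsmin_value):
    """Project WP48 from PAWSmin using the regression reported above."""
    return 0.104 + 1.621 * pawsmin_value

# Chris Paul's unrounded PAWSmin of 0.0596 projects to roughly 0.20
print(round(project_wp48(0.0596), 3))  # 0.201
```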

This analysis clearly tells us that the team adjustment is not why Wins Produced is so accurate. Wins Produced is accurate because the box score statistics tracked for players have been accurately connected and valued in terms of wins.

## The Lessons Learned

So what have we learned from our analysis of WP48 and PAWSmin? A few bullet points will hopefully drive the lessons home.

• Wins Produced – which is based on the box score statistics tracked by the NBA – explains 95% of team wins.
• This accuracy is driven by correctly valuing each statistic in terms of its impact on team wins.
• Since PAWSmin – which does not have a team adjustment – has a 0.994 correlation with WP48, it is clear the team adjustment is not what drives this story.

## The value of box score statistics

All of this leads me to what may seem – in some circles – to be a bold statement. I think the box score statistics tracked for players in the NBA are more valuable than the box score statistics tracked for baseball players (which I don’t think anyone has suggested we abandon). As we note in The Wages of Wins, basketball players are more consistent than baseball or football players across time. To illustrate, there is about a 0.6 correlation between a baseball player’s OPS in the current season and what the player did last season. For basketball players, though, Win Score per minute has a 0.8 correlation from season to season. In other words, the box score statistics in the NBA have greater predictive power than the box score statistics tracked in baseball.

And ultimately this is a big part of how we evaluate models. In judging the value of a model we consider

• whether the model explains what it purports to explain.
• the ability of the model to predict the future.
• whether the model is simple enough for decision-makers to utilize.

Explanatory and predictive power would seem obvious, although often people develop models and then fail to tell us whether or not the model can actually explain or predict anything. The last point is also very important. You cannot help decision-makers if you are the only one who can actually explain and utilize your model. If that is the case, then decision-makers have to trust that you did the analysis correctly. This is a leap of faith people making million-dollar decisions may not wish to make.

In the end, other people have to be able to utilize your work. So simplicity is as important as accuracy in evaluating a model. As I noted a few days ago, other people are starting to use Win Score, a trend that highlights the simplicity of this approach. Hopefully this is a trend that will continue. Certainly I don’t want to be the only one analyzing basketball with Win Score, PAWS, and PAWSmin.

– DJ