The Second Rookies Trump the First

Most awards in the NBA are determined by sportswriters.  The lone exceptions are the All-Defensive and All-Rookie teams; both are chosen by NBA coaches.

The All-Rookie selection process appears fairly simple.  This year there were 64 players who began their NBA careers.  From this pool of talent, each coach selects five players for the All-Rookie First Team and then five more for the Second Team (coaches cannot vote for players from their own team).  The five rookies who receive the most voting points (2 points for a first-team vote, 1 point for a second-team vote) – regardless of position played – are placed on the All-Rookie First Team.  The next five in voting points make up the Second Team.
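The tallying rule can be sketched in a few lines of Python.  The ballots below are invented purely for illustration (individual coaches' ballots are not part of this discussion); only the 2-point/1-point scoring rule comes from the description above.

```python
from collections import defaultdict

# Hypothetical ballots: each coach names a five-man first team
# and a five-man second team.
ballots = [
    {"first": ["Horford", "Durant", "Scola", "Thornton", "Green"],
     "second": ["Moon", "Navarro", "Young", "Stuckey", "Landry"]},
    {"first": ["Horford", "Durant", "Scola", "Green", "Moon"],
     "second": ["Thornton", "Navarro", "Young", "Stuckey", "Landry"]},
]

points = defaultdict(int)
for ballot in ballots:
    for player in ballot["first"]:
        points[player] += 2   # a first-team vote is worth 2 points
    for player in ballot["second"]:
        points[player] += 1   # a second-team vote is worth 1 point

# Rank by voting points: the top five form the First Team,
# the next five the Second Team -- regardless of position.
ranked = sorted(points, key=points.get, reverse=True)
first_team, second_team = ranked[:5], ranked[5:10]
```

With real ballots from all the voting coaches, this tally produces exactly the voting-point totals listed below.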

Second Team is Tops

One would expect – since the coaches are making the selections – that the First Team is “better” than the Second Team.  At least you might expect this if you didn’t consider Wins Produced per 48 minutes [WP48].  Here is the All-Rookie First Team in 2008, with Voting Points and WP48 reported for each player.

Al Horford: 58 voting points, 0.170 WP48

Kevin Durant: 57 voting points, 0.012 WP48

Luis Scola: 53 voting points, 0.124 WP48

Al Thornton: 48 voting points, -0.081 WP48

Jeff Green: 43 voting points, -0.082 WP48

And here is the Second Team:

Jamario Moon: 38 voting points, 0.196 WP48

Juan Carlos Navarro: 24 voting points, 0.013 WP48

Thaddeus Young: 23 voting points, 0.099 WP48

Rodney Stuckey: 22 voting points, 0.069 WP48

Carl Landry: 18 voting points, 0.258 WP48

The average First Team player posted a WP48 of 0.028.  The Second Team players, though, averaged a 0.127 WP48.  Yes, the Second Team All-Rookie squad consists of players who are collectively more productive than the members of the First Team.
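These two averages are easy to verify from the per-player WP48 figures listed above (player order follows the lists):

```python
# WP48 figures for the 2008 All-Rookie teams, as listed above.
first_team = [0.170, 0.012, 0.124, -0.081, -0.082]   # Horford through Green
second_team = [0.196, 0.013, 0.099, 0.069, 0.258]    # Moon through Landry

avg_first = sum(first_team) / len(first_team)     # about 0.0286, reported as 0.028
avg_second = sum(second_team) / len(second_team)  # 0.127
```

The Second Team's average works out to more than four times the First Team's.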

A Quick Look Back and an Explanation

When I saw this result I wondered whether this had ever happened before.  Table One reports the average WP48 of the All-Rookie First and Second Teams back to 1995. 

Table One: Comparing All-Rookie First and Second Teams

As the table indicates, in most years – as we would expect – the First Team is better.  But in 1997, 2004, and 2007 (as well as 2008), the Second Team came out on top.  And in 2008 the Second Team had its greatest advantage ever (if "ever" begins in 1995).

So how does this happen?  The answer is detailed in The Wages of Wins.  The coaches are not evaluating rookies in terms of Wins Produced or WP48. We can do a much better job of explaining the coaches' voting with a simple model like NBA Efficiency.  We can do even better if we just look at points scored per game.  In other words – just as we see when we look at the player evaluations of general managers and the media – scoring dominates the player rankings of coaches.

The All-Rookie story in The Wages of Wins was told with some simple regression analysis.  We can see the same story, though, just looking at the 2007-08 rookies.

Table Two: The Rookies of 2007-08

Table Two reports the Wins Produced, NBA Efficiency, and scoring of the 64 rookies who debuted in 2007-08. The players are ranked first in terms of All-Rookie voting points and then in terms of points scored.  Of the top ten rookies in voting points, only five finished in the top ten in Wins Produced.  Eight of these ten, though, finished in the top ten in both NBA Efficiency and scoring.  As we see when we look at past votes, scoring dominated the selection of the 2007-08 team.

Ranking All Rookies with Wins Produced and WP48

How would our story change if we focused solely on Wins Produced?  According to this metric the All-Rookie teams would be as follows:

First Team:

Al Horford: 9.0 Wins Produced

Jamario Moon: 8.8 Wins Produced

Luis Scola: 5.2 Wins Produced

Joakim Noah: 4.9 Wins Produced

Carl Landry: 3.8 Wins Produced

Second Team:

Thaddeus Young: 3.2 Wins Produced

Ramon Sessions: 2.2 Wins Produced

Sean Williams: 1.9 Wins Produced

Nick Fazekas: 1.6 Wins Produced

Arron Afflalo: 1.6 Wins Produced

The first team is dominated by big men.  This is primarily because the rookie guards in 2007-08 were almost all below average.  Only Ramon Sessions and Coby Karl posted WP48 marks that were above average, and neither played 500 minutes this past season.

If we focus strictly on rookies who played at least 500 minutes – and rank these rookies in terms of WP48 – then the All-Rookie teams would be as follows:

First Team:

Carl Landry: 0.258

Jamario Moon: 0.196

Al Horford: 0.170

Joakim Noah: 0.154

Luis Scola: 0.124

Second Team:

Thaddeus Young: 0.099

Julian Wright: 0.086

Arron Afflalo: 0.079

Sean Williams: 0.070

Rodney Stuckey: 0.069

Of all rookies who played at least 500 minutes, only five posted above-average marks. The best guards played for the Pistons (Afflalo and Stuckey), and each fell short of the 0.100 threshold that marks an average player.  The generally poor play of this rookie class is why Fazekas and Sessions can rank among the rookie leaders in Wins Produced without playing much.
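The 500-minute cutoff and WP48 ranking can be sketched as follows.  The WP48 figures are those cited above; the minutes totals are hypothetical placeholders, since they are not listed in this post.

```python
# (player, minutes, WP48); minutes here are illustrative stand-ins.
rookies = [
    ("Carl Landry",     700, 0.258),
    ("Jamario Moon",   2300, 0.196),
    ("Al Horford",     2500, 0.170),
    ("Joakim Noah",    1500, 0.154),
    ("Luis Scola",     2000, 0.124),
    ("Thaddeus Young", 1600, 0.099),
    ("Ramon Sessions",  264, 0.270),  # above average, but under 500 minutes
]

# Since Wins Produced = WP48 * minutes / 48, a high-WP48 player with few
# minutes (like Sessions) can still crack the Wins Produced leader board
# among a weak rookie class -- but he misses the 500-minute cutoff here.
eligible = [(name, wp48) for name, minutes, wp48 in rookies if minutes >= 500]
eligible.sort(key=lambda row: row[1], reverse=True)
print(eligible[:5])  # Landry, Moon, Horford, Noah, Scola
```

Run over all 64 rookies, the same filter-and-sort yields the two teams listed above.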

It’s important to remember that rookies tend to play badly.  From 1993-94 to 2006-07, the average rookie posted a WP48 of 0.047.  Second-year players have an average mark of 0.076. It’s not until the third year that players, on average, approach the NBA average level of productivity.

When we consider what rookies do on average, our list of above average players gets larger.  In addition to each player listed on the WP48 All-Rookie first and second teams we can add Mike Conley (WP48 of 0.054) and Jared Dudley (WP48 of 0.053).

Even with the lower threshold, though, Kevin Durant, Al Thornton, and Jeff Green were below average performers.  Yes, each showed some ability to score.  But none showed that they could score efficiently. 

Despite such performances, each player has now been told that they are “good.”  So although we can expect rookies to get better, how much better will players get when a “bad” performance is mislabeled? And in the case of Durant and Green, if these players do not improve, will this ultimately dim the enthusiasm people in Oklahoma City have for NBA basketball?

– DJ

Our research on the NBA was summarized HERE.

The Technical Notes provide substantially more information on the published research behind Wins Produced and Win Score.

Wins Produced, Win Score, and PAWSmin are also discussed in the following posts:

Simple Models of Player Performance

Wins Produced vs. Win Score

What Wins Produced Says and What It Does Not Say

Introducing PAWSmin — and a Defense of Box Score Statistics

Finally, A Guide to Evaluating Models contains useful hints on how to interpret and evaluate statistical models.
