A Comment on the NBA Draft and Some Cutting and Pasting

Not sure everyone is aware of this, but DraftExpress reports Win Score per 40 minutes for every player in the NCAA.  You will not see PAWS (Position Adjusted Win Score), but positions are provided.  Therefore, if you know the average Win Score at each position, it's easy to make comparisons across positions.

As I noted previously, I have calculated the average college Win Score per 40 minutes [WS40] at each position from 1995 to 2008 (the data set covers all players who were drafted in these fourteen seasons and who played at least 500 minutes in their final college season).

Here are these averages:

Centers: 12.30

Power Forwards: 12.48

Small Forwards: 9.92

Shooting Guards: 8.43

Point Guards: 7.30

The numbers essentially follow what we see in the NBA.  Big men – because they rebound in greater numbers and tend not to turn the ball over – post higher Win Scores.  Smaller players are the opposite and post lower Win Scores.  Because positions in basketball are complements in production (economic talk for the idea that teams appear to need all positions to produce wins), it makes sense to evaluate a player relative to what we generally see from a player’s position.
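The position-adjusted comparison described above can be sketched in a few lines of code. This is only an illustration of the idea (subtracting the positional average from a player's WS40), not the exact PAWS formula used in the published research; the function name and the sample player are hypothetical.

```python
# Average WS40 for drafted players, 1995-2008, from the figures above.
POSITION_AVG_WS40 = {
    "C": 12.30,   # Centers
    "PF": 12.48,  # Power Forwards
    "SF": 9.92,   # Small Forwards
    "SG": 8.43,   # Shooting Guards
    "PG": 7.30,   # Point Guards
}

def adjusted_ws40(ws40, position):
    """Player's WS40 minus the average WS40 at his position.

    A positive value means the player is above average for his
    position; a negative value means below average.
    """
    return ws40 - POSITION_AVG_WS40[position]

# Hypothetical example: a point guard posting a 10.0 WS40 is
# 2.7 above the positional average of 7.30.
print(round(adjusted_ws40(10.0, "PG"), 2))  # 2.7
```

Because big men post higher raw Win Scores, an adjustment like this is what lets you compare, say, a center to a point guard on equal footing.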

Sometime before the draft I hope to post an analysis of this year's prospects.  In the meantime, everyone can use these reference points to do some analysis of their own.  Please keep in mind, though, that college numbers are not a perfect predictor of NBA productivity.  Yes, there is a relationship.  But players who are above average in college can be below average in the NBA.  And players who are below average in college can become above average in the Association.  The tendency is for players to hold to form, but there are no guarantees and there are certainly exceptions.

And Now For Something Else…

A few weeks ago Julian Sanchez offered the following comment on the climate change debate.  What Sanchez had to say was then linked to by Crooked Timber, Brad DeLong, and JC Bradbury.  At the time I meant to follow suit, but then I never got around to it.  Well, better late than never.  Hopefully everyone will find this as interesting as I (and others) did.

Sometimes, of course, the arguments are such that the specialists can develop and summarize them to the point that an intelligent layman can evaluate them. But often—and I feel pretty sure here—that’s just not the case. Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic. A specialist would surely see through it, but in an argument between us, the lay observer wouldn’t necessarily be able to tell which of us really had the better case on the basis of the arguments alone—at least not without putting in the time to become something of a specialist himself. Actually, I have a plausible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience.

Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. Certain bad arguments work the same way—skim online debates between biologists and earnest ID aficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight. (If it seems too obvious, perhaps paradoxically, we’ll tend to assume everyone on the other side thought of it themselves and had some good reason to reject it.) The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong. So the setup is “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely.”

If we don’t sometimes defer to the expert consensus, we’ll systematically tend to go wrong in the face of one-way-hash arguments, at least in our own necessarily limited domains of knowledge. Indeed, in such cases, trying to evaluate the arguments on their merits will tend to lead to an erroneous conclusion more often than simply trying to gauge the credibility of the various disputants. The problem, of course, is gauging your own competence level well enough to know when to assess arguments and when to assess arguers. Thanks to the perverse phenomenon psychologists have dubbed the Dunning-Kruger effect, those who are least competent tend to have the most wildly inflated estimates of their own knowledge and competence. They don’t know enough to know that they don’t know, as it were.
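The cryptographic asymmetry Sanchez leans on can be seen in a toy sketch: multiplying two primes is a single operation, while recovering them from the product by trial division takes thousands of steps even for small inputs (real cryptographic moduli make it infeasible). This is my illustration, not anything from Sanchez's post, and trial division stands in for far more sophisticated factoring methods.

```python
def multiply(p, q):
    # Fast direction: one multiplication.
    return p * q

def factor(n):
    """Slow direction: recover a factor of n by trial division.

    Takes on the order of sqrt(n) steps in the worst case -- vastly
    more work than the single multiplication that produced n.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

n = multiply(10007, 10009)  # instant
print(factor(n))            # thousands of trial divisions: (10007, 10009)
```

The rhetorical point maps on directly: the "talking point" is the cheap multiplication, and the rebuttal is the expensive factoring.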

Again, I thought that was pretty interesting.  One last note… the second book is almost complete.  When it is, I will go back to writing posts that don’t involve me cutting and pasting.

- DJ

The WoW Journal Comments Policy

Our research on the NBA was summarized HERE.

The Technical Notes at wagesofwins.com provide substantially more information on the published research behind Wins Produced and Win Score.

Wins Produced, Win Score, and PAWSmin are also discussed in the following posts:

Simple Models of Player Performance

Wins Produced vs. Win Score

What Wins Produced Says and What It Does Not Say

Introducing PAWSmin — and a Defense of Box Score Statistics

Finally, A Guide to Evaluating Models contains useful hints on how to interpret and evaluate statistical models.
