In 2007, Reid Brignac was a sure thing. He lacked only elite defensive skills, and prospect hounds such as BP’s own Kevin Goldstein praised his bat with the most effusive verbiage. With an average ranking of 25th across a collection of top prospect lists, the only question about Brignac was at which position he’d be posting an All-Star batting line.

As any recent student of baseball will know, this was not to last. The next year, his ranking fell to 32nd, then to 89th the year after that. Brignac’s offense collapsed and never recovered. A quick perusal of Brignac’s player comments reads like the arc of a cannonball: soaring upward toward stardom, lingering for a moment as a high-end, blue-chip prospect, and then suddenly crashing downward into irrelevance. After amassing an unimpressive -0.8 WARP in lifetime value, Brignac was last seen signing a minor-league contract with the Marlins.

One could contrast Brignac’s parabolic path with the meteoric rise of fellow Tampa Bay uber-prospect Evan Longoria. After going from college draftee to High-A to Double-A in less than a year, Longoria reached Triple-A in his second year and was ready for the majors shortly thereafter. He spent so little time in the minors that he merited only two BP comments before he was named Rookie of the Year.

A fairer comparison might be Andrew McCutchen, another bat drafted out of high school. Instead of a rise and fall, McCutchen’s path was a slow and steady ascent toward stardom: his average prospect rankings went 65, 20, 18, and then 27.

The varied paths of prospects good and bad raise a broader question: how well do prospect trajectories reflect players’ ultimate destinies? It’s easy to build narratives around a player’s growth or stagnation, ascribing inevitable success or failure on that basis. To establish the truth of the matter, we need a more general, less anecdotal look.

Using data from Chris St. John, we can look at the relationship between prospect trajectory and lifetime WARP. As a reminder, St. John’s huge collection of prospect data covers more than 20 years, aggregating multiple lists per year. From this data, I average the ranks across lists for each year to get an overall picture of the consensus on each prospect.

We also need to operationalize the notion of a prospect trajectory. What does it mean for a prospect to ride an upward trend, or to slide downhill? As a first approximation, I compare a player’s averaged prospect rank in their first year on the lists with their averaged rank in their last year. That difference in position can then be compared to the player’s lifetime WARP.

To separate the risers from the fallers, I grouped prospects by whether they gained 10 or more ranks, stayed within 10 ranks of their initial position, or lost 10 or more ranks.
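As a rough sketch of that bookkeeping, the consensus rank, the trajectory, and the three groups might be computed along these lines. The file names and column names here are assumptions for illustration only, not the actual layout of St. John’s dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: one row per (player, year, list) with that list's rank,
# plus each player's lifetime WARP.
rankings = pd.read_csv("prospect_rankings.csv")   # columns: player, year, list, rank
warp = pd.read_csv("lifetime_warp.csv")           # columns: player, lifetime_warp

# Average the ranks across lists within each player-year to get a consensus rank.
consensus = rankings.groupby(["player", "year"], as_index=False)["rank"].mean()

# Trajectory: consensus rank in the player's first year on the lists minus the
# consensus rank in their last year. Positive = the player climbed the rankings.
consensus = consensus.sort_values("year")
first = consensus.groupby("player")["rank"].first()
last = consensus.groupby("player")["rank"].last()
df = pd.DataFrame({"rank_gain": first - last, "final_rank": last}).reset_index()
df = df.merge(warp, on="player")

# Risers gained 10 or more ranks, fallers lost 10 or more, everyone else held.
df["group"] = np.select(
    [df["rank_gain"] >= 10, df["rank_gain"] <= -10],
    ["gained 10+ ranks", "dropped 10+ ranks"],
    default="stayed within 10 ranks",
)

print(df.groupby("group")["lifetime_warp"].mean())
```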

Trajectory                   Average lifetime WARP
Dropped 10 or more ranks     4.48
Stayed within 10 ranks       10.62
Gained 10 or more ranks      15.00

The results are neatly linear. Prospects who fell by 10 or more ranks from their first year to their last offer the worst results, at 4.48 WARP. Those who stayed within 10 ranks of their initial appearance on the lists gather an intermediate 10.62 WARP. Finally, those who climbed do the best, producing a full 15 WARP on average, not a bad major-league career.

Instead of placing the prospects in discrete categories, we can also examine the full breadth of trajectories. Seeking to establish a relationship between trajectories and WARP, I plot here the difference in rankings against the lifetime WARP of each player, with a best-fit curve (LOESS) in red.
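For readers who want to reproduce this kind of plot, here is a minimal sketch that reuses the hypothetical df built above. The smoothing fraction is an arbitrary choice for the sketch, not a parameter taken from the article.

```python
import matplotlib.pyplot as plt
import statsmodels.api as sm

# df from the earlier sketch: one row per player, with rank_gain
# (first-year consensus rank minus last-year rank) and lifetime_warp.
x = df["rank_gain"].to_numpy()
y = df["lifetime_warp"].to_numpy()

# LOESS/LOWESS fit; frac controls the width of the local smoothing window.
smoothed = sm.nonparametric.lowess(y, x, frac=0.5)

plt.scatter(x, y, alpha=0.3, s=10)
plt.plot(smoothed[:, 0], smoothed[:, 1], color="red")
plt.xlabel("Change in consensus rank (first year minus last year)")
plt.ylabel("Lifetime WARP")
plt.show()
```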

There’s a surprisingly smooth and continuous curve which links prospect trajectory to future performance. Guys who ascend the ladder of prospectdom, who crawl all the way from the bottom to the top, have an expected lifetime WARP of more than 10. Meanwhile, those who manage to chart the reverse course, from the top to the bottom, are doomed to lifetime production around or below replacement level.

It’s worth noting that the curve asymptotes (flattens out) at both ends, albeit for different reasons. At the lower end, we are seeing the effect of survivor bias: bad players cease to accumulate further negative WARP because, by definition, freely available replacement-level players can outperform them, and so they stop receiving playing time.

On the other side of the curve, risers on the prospect rankings tend to cap out at 12 or so WARP. That seems to be a product of the fact that those prospects, despite their incredible upward paths, must have started from humble beginnings. Players who eventually arrive at Hall of Fame-type careers almost invariably come from the top of the prospect rankings, and so have no room to rise.

The relationship between prospect rank difference and lifetime WARP is not perfect, and it explains relatively little of the variance (about 10 percent) in lifetime WARP. But 10 percent is a considerable amount, given the overall difficulty in projecting prospects. If we pull this information together with the handful of factors I identified in previous work (things like age, average rankings, position, etc.), we arrive at an updated projection that gets as far as explaining 20 percent of the variation in lifetime production.
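One hedged way to quantify “how much of the variation is explained” is to compare the R-squared of a regression fit with and without the trajectory term. The feature names below (avg_rank, first_age, position) are stand-ins for the factors from the earlier work, not its actual variables, and they are assumed to have been merged into the hypothetical df from the first sketch.

```python
import statsmodels.formula.api as smf

# Assumes df has been augmented with the static factors: avg_rank (average
# ranking across years), first_age (age when first ranked), and position.
static_model = smf.ols(
    "lifetime_warp ~ avg_rank + first_age + C(position)", data=df
).fit()
full_model = smf.ols(
    "lifetime_warp ~ avg_rank + first_age + C(position) + rank_gain", data=df
).fit()

# The combined model in the article explains about 20 percent of the variation.
print(f"static factors only: R^2 = {static_model.rsquared:.3f}")
print(f"with trajectory:     R^2 = {full_model.rsquared:.3f}")
```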

The above finding also illustrates that trajectory is partially redundant with other factors related to lifetime WARP. Using only static information (age, average ranking, and so on), about 18 percent of the variation can be explained. So even though trajectory explains 10 percent of the variation on its own, its impact is dramatically diminished when combined with other factors (the technical term for this overlap is collinearity). Still, if we limit ourselves to the trajectories of prospects with similar final rankings (within the top 20), trajectory remains important[1]:

Trajectory                   Average lifetime WARP
Dropped 10 or more ranks     18.56
Stayed within 10 ranks       20.21
Gained 10 or more ranks      24.96

The above table shows that trajectory matters even for prospects who ultimately attain nearly the same rank. Those who rose to reach their rank tend to achieve better outcomes than those who held steady, who in turn do better than those who dropped.
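Continuing the hypothetical sketch from earlier, this restricted comparison is a short follow-up; df, final_rank, and group are the assumed names built above.

```python
# Repeat the group comparison, restricted to prospects whose final-year
# consensus rank landed inside the top 20.
top20 = df[df["final_rank"] <= 20]
print(top20.groupby("group")["lifetime_warp"].mean())
# Sanity check that the three groups end up at comparable final rankings.
print(top20.groupby("group")["final_rank"].mean())
```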

Overall, this information suggests we ought to pay a little more attention to the direction of a prospect’s rank, and not only their single-year outcome. When we see the same names ascending the lists, year upon year, that’s an affirmative signal for their future potential. Conversely, the first hint of decline from a top prospect ought to be troubling.

This advice applies more strongly to prospects coming from high school than from college. As the Brignac/Longoria story above illustrates, college bats tend to arrive in the minors with more polish and spend fewer years inhabiting the rankings. Those coming straight from high school, on the other hand, are likely to spend several years being evaluated, and their upward or downward migration through the prospect rankings carries more significance. We may never be able to foresee a decline like Brignac’s before it begins, but perhaps we can better identify it as it happens by keeping a careful eye on a prospect’s past as well as their present.



[1] The average prospect rankings of these players are very similar: 16.667 for the fallers, 10.45 for those who stagnated, and 10.8 for those who increased their rankings.

hotstatrat
3/18
I'm not a statistician, but shouldn't there be a control group here? How does a guy ranked in the 80s and 90s who has experienced a decline in his ranking compare to same aged & experienced players who have maintained their ranking or recently moved into the top 100? There is a bias in just comparing players moving up or down the rankings, because it is more likely the player moving up will have a higher ranking at that end point than someone moving down. We don't care about the starting points, because at that point in time we can't know if the player is going to move up or down.
nada012
3/18
That's what I was going for in the last table. All those groups have similar average rankings, but the guys that rose up to their rankings did significantly better than the guys that stayed (you can think of them as the "control").

Also, if trajectory wasn't important then it shouldn't improve the accuracy of the cross-validated model. But it did (marginally), suggesting that how a prospect traverses the lists is meaningful above and beyond their final position.
hotstatrat
3/20
The flaw I think you have with that control group is going by "average" ranking rather than "last" ranking or "third" ranking or "Xth" ranking. The third ranking is the scout's view based on the third year of evidence since reaching the rankings. If one player trended up to the same ranking as another player trended down to that ranking, they will have different averages. However, that's the fair comparison, because that is what we are measuring - the scout's rankings. When judging players, we don't compare players based on their average ranking, just their most recent.
nada012
3/20
OK, here it is with final ranking instead of average ranking (by the way, the reason I used the average over years is that it is more predictive of future WARP than the final ranking):

Final rankings within top 40, whose trajectory was up (improving their rankings the more time they spent on lists): median WARP 17.2, mean 20.96
Within top 40, trajectory down (going down the list the more time they spent): median 5.9, mean 15.09
Stayed the same: median 8.52, mean 20.09.

And the average final rankings are 20 for trajectory up, 26 for trajectory down, and 22 for trajectory staying the same, which is not enough of a difference to explain the lifetime WARP variation that you see.
hotstatrat
3/20
Interesting, thanks.
markpadden
3/27
"And the average final rankings are 20 for trajectory up, 26 for trajectory down, and 22 for trajectory staying the same, which is not enough of a difference to explain the lifetime WARP variation that you see."

Well, that depends on what the actual distribution looks like. If the "risers" group had an abnormal number of top 10 guys (and 31-40 guys), the WARP would have been higher, even though the avg. and median rank wouldn't have shown it. That's because WARP does not decline linearly when you go from #1 to #40.

The proper test here is to compare expected WARP (for given final rank, age and position) to actual WARP for ~five bins of ranking trajectories [lots of ways to define this]. If the effect is real, it will be obvious.
orioles
3/18
What if you did this--separate "bins" of players ranked 1-10, 11-20, etc, for every year and from every list that you're considering in your sample. Determine the average career WAR for a typical player in that bin. Then ask the question, does the player's bin the previous year (year n-1) affect his total WAR, either positively or negatively? I.e., if a player is ranked in the 30s this year, would his expected career WAR be different if he were in the 10s last year vs in the 60s?

In a perfect world, it should not, but I wouldn't be surprised if there is a relationship akin to the one you're getting at here--that on average a #35 prospect moving down actually has less future value than a #36 prospect moving up, for example.

This reminds me of Silver's chapter on credit ratings in Signal & Noise--an entity's likely future credit rating shouldn't be influenced by its rating trend, but it most certainly is.
ericmvan
3/21
Two of your chief findings contradict each other. If the final ranking of the fallers should have been lower, and that of the risers should have been higher, then their average rankings should be less predictive of the outcome, not more.

My guess is that the superiority of average rankings is being driven by players with U- and inverted-U-shaped trajectories, which you're ignoring. I'm especially thinking of guys like Trot Nixon, whose BA rankings went 13-46-39-x-x-99. In retrospect, a former elite prospect who has just hit .310 / .400 / .513 in AAA while playing 65 or 70 defense in RF should have been ranked much higher than 99. Nixon was actually a big riser *at the end*, but you have him as a faller.

You can solve this riddle by using more sophisticated trajectory measures. You can divide the straight risers and fallers from the more complex trajectories. And you can look at slope, correlation, and the endpoint of the trendline through all the rankings.
nada012
3/21
"If the final ranking of the fallers should have been lower, and that of the risers should have been higher, then their average rankings should be less predictive of the outcome, not more."

This doesn't make any sense to me; could you expand? I'm happy to take your criticism seriously, but I can't understand what this sentence means. I would note that I took into account both final and average ranking, and in both cases, the trajectory of the ranking was significant. So, pretty much no matter how you slice it, trajectory impacts the lifetime WARP of a prospect.

I would also note that I did look at the endpoint and slope. Both proved significant for improving predictions of lifetime WARP. It's hard for me to understand why the correlation would be meaningful, but I will try it.