December 9, 2013 | Baseball Therapy
What Happened to the Complete Game?
In 2013, Adam Wainwright led Major League Baseball by pitching five complete games. In 2012, Justin Verlander was much more of an ironman and pitched six. A mere 30 years ago, in 1983, six complete games would have landed Verlander in a tie for 42nd place with such notables as Storm Davis, Bob Forsch, Jim Gott, Ken Schrom, and Bruce Hurst. Even 20 years ago, six complete games would have been good for a tie with David Cone for 15th place in MLB. What happened to finishing what you started?

Last week, we saw that starting pitchers really have seen a reduction in their workload over time. Since 1950, there has been a steady downward trend in the number of batters that pitchers have faced, the number of outs they’ve recorded, and the number of pitches that they’ve thrown. Indeed, the percentage of games in which the starter records at least 27 outs has fallen from 30 percent in 1950 to two percent in 2012.

What happened to the complete game? Well, for one, it’s hard to get through nine innings in 100 pitches, or even 110, and as we saw last week, managers have reined in their starters over time. But perhaps there’s another reason why managers have felt more and more comfortable turning to the bullpen in the seventh inning. Let’s see if we can figure it out.

Warning! Gory Mathematical Details Ahead!

I tried modeling how this decision has played out over time. I located all cases in which a starter had lasted all the way through the fifth inning (recorded 15 outs). To try to isolate cases in which we can surmise that the manager knew the starter was faltering, but still left him in, I looked for all cases in which the sixth inning (whether the starter completed it or not) was his final act that day. I figured out how many runs (on average) the starter surrendered in those sixth innings. I then found games in which the starter exited after exactly five innings and found out how many runs (on average) the relievers gave up in the sixth inning. For the results that I’m about to show, I considered only games in which the score was still within three runs (in either direction). It actually doesn’t end up making much difference in the overall conclusions when you take that filter off.

Before you are allowed to see the results, you must memorize the following paragraph. The results that follow do show what actually happened, but the decision as to whether a reliever came into the game was NOT made at random. Managers probably let their better starters go an extra inning and their back-of-the-rotation guys go to the locker room. Managers with good bullpens were probably more likely to pull the plug, and those with a bullpen from Hades probably thought twice about it.

Here are the results, by year, going into the sixth inning:

We see that over time, there are peaks and valleys in the number of runs that relievers (the blue line) give up when they go out there, but the numbers fluctuated within a fairly small range, between 0.3 and 0.7 runs. However, when a starter was allowed to go out there in 1950, he was likely to give up more than a run and a half! But over time, the gap between what starters did in their sixth and final inning and what relievers did in their first inning of work (the sixth of the game) began to narrow. By 2010, the two lines were touching.

Here’s the same graph modeling the same basic decision, only this time, the game is headed into the seventh inning. We see the same basic pattern at first, but by the time we get to the late 1970s and early 1980s, the lines cross.
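To make that comparison concrete, here is a minimal sketch of the sixth-inning version in Python/pandas. It assumes a flattened, one-row-per-team-game table; the file name and column names are hypothetical stand-ins for illustration, not the actual data or code behind the graphs.

```python
import pandas as pd

# Hypothetical team-game table (all names are assumptions, not the actual dataset):
#   year                 - season
#   starter_outs         - outs recorded by the starting pitcher
#   faced_batter_in_6th  - True if the starter pitched to anyone in the 6th
#   starter_runs_in_6th  - runs charged to the starter in the 6th inning
#   bullpen_runs_in_6th  - runs charged to relievers in the 6th inning
#   margin_entering_6th  - score differential (team minus opponent) entering the 6th
games = pd.read_csv("team_games.csv")

# Keep only close games, as in the article: within three runs in either direction.
close = games[games["margin_entering_6th"].abs() <= 3]

# "Stay with the starter": he finished the 5th (>= 15 outs), pitched in the 6th,
# and recorded no outs beyond the 18th, so the 6th was his last inning
# (ignoring the rare starter who faces a 7th-inning batter without retiring anyone).
stayed = close[
    (close["starter_outs"] >= 15)
    & (close["starter_outs"] <= 18)
    & close["faced_batter_in_6th"]
]

# "Go to the bullpen": the starter exited after exactly five innings.
pulled = close[(close["starter_outs"] == 15) & ~close["faced_batter_in_6th"]]

# Average 6th-inning runs allowed under each choice, by season; plotting these
# two columns by year reproduces the shape of the first graph described above.
by_year = pd.DataFrame({
    "starter_6th": stayed.groupby("year")["starter_runs_in_6th"].mean(),
    "bullpen_6th": pulled.groupby("year")["bullpen_runs_in_6th"].mean(),
})
print(by_year)
```

The seventh- and eighth-inning versions are the same query with the outs thresholds shifted ahead by three outs per inning.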
And for the decision headed into the eighth inning, we see the same pattern again.

All three graphs have the same message. In the 1950s and 1960s, managers were much more likely to leave the starter in for an extra inning than to go to the bullpen. In fact, if we graph the number of cases where a starter is left in for another inning vs. the number of times a reliever is brought in (I’m showing the graph for the decision going into the eighth inning—they all basically have the same shape over time), the majority of cases favor “one more inning” in the 1950s, ’60s, and into the ’70s. Somewhere in the ’80s, the trend turns downward and accelerates.

Managers developed quicker hooks in the ’80s. Probably not coincidentally, this was also the time that the results they got from leaving the starter in vs. bringing in a reliever began to come into line. A theory: over time, managers realized that sending just any old pitcher out there for another inning when he was tired produced worse results than going to the bullpen. It wasn’t always the right call to go to the pen, but managers became better at picking their spots for when a starter should be pushed and when he should be restrained. Eventually, managers realized that going with a tired starter—especially a tired, bad starter—when a fresh reliever was available was counter-productive.

For the curious, the decision going into the ninth inning looks like this. For most of the last six decades, starters have been about equal to their relieving counterparts in the ninth inning, and lately, they’ve been better. We do need to account for the fact that those starters who are pitching in the ninth inning may not be particularly tired, while in the previous innings that we’ve studied, we’ve selected those who have a marker that at least suggests fatigue. Additionally, by the ninth inning, the manager has an eight-inning sample of how the pitcher has been performing that day to consider.

The Benefits of a Bunch of Relievers

So, yes. There was a time when the complete game was much more common than it is now. I’d argue that the reason for the decline isn’t that today’s pitchers are wimps. It’s that it has always been foolish to send a tired pitcher out to the mound when there’s a better option available. It just took baseball a few decades to figure this out.

To put some sort of number on it: in the 1960s, the gap between the starter and reliever lines floated between about .2 and .3 runs. Let’s call it a quarter of a run, just to have a nice easy number to work with. By the 1980s, the two groups were performing about equally, and that convergence coincided with the development of a quicker hook. The “cost” of the quicker hook was that teams began carrying pitching staffs of 11 and 12 pitchers, rather than 9 or 10. The extra bodies were needed because the bullpen was going to be taxed more with a quicker hook.

Let’s assume that if teams went back to the days of pushing their starters harder—either to prove how manly they are or because they want to convert some of those reliever roster spots into something more useful—we would see the same sort of discrepancy re-appear. More tired starters would be sent out to work an extra inning, at the cost of a quarter of a run each time. In the 1960s, managers seemed to choose “send him out for one more” at a rate that was 25 percentage points higher than it is now.
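That stay-or-pull rate falls out of the same kind of hypothetical table used in the earlier sketch. Here is what the tally for the eighth-inning decision might look like, again with assumed column names rather than the real data.

```python
# Close games entering the 8th (column name is an assumption, as above).
close_8 = games[games["margin_entering_8th"].abs() <= 3]

# "One more inning": finished the 7th (>= 21 outs), pitched in the 8th,
# and the 8th was his last inning (no outs beyond the 24th).
stayed_8 = close_8[
    (close_8["starter_outs"] >= 21)
    & (close_8["starter_outs"] <= 24)
    & close_8["faced_batter_in_8th"]
]

# "Go to the bullpen": the starter exited after exactly seven innings.
pulled_8 = close_8[(close_8["starter_outs"] == 21) & ~close_8["faced_batter_in_8th"]]

# Share of these decisions that went "one more inning," by season -- the quantity
# graphed for the eighth-inning decision in the article.
stay_rate = (
    stayed_8.groupby("year").size()
    / (stayed_8.groupby("year").size() + pulled_8.groupby("year").size())
)
print(stay_rate)
```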
Let’s say that the modern manager pulls a tired pitcher in 25 percent more of his games (about 40.5 games) than his 1965 counterpart would have, because with the expanded bullpens of this era, he can. In doing so, he saves his team a quarter of a run each time. The modern bullpen thus saves (using some admittedly slapdash math) roughly 10 runs a season over the way things used to be done. There might be other savings that come in the form of not running up pitch counts, and thus preventing injuries, but we haven’t gotten there yet.

One common critique of the modern bullpen is that those extra relievers consume roster spots that could be used to keep extra hitters on the bench for use in platoons, or for defensive specialists, pinch hitters, or designated pinch runners. Those things may very well have value, but so do the extra relievers. A few months ago, I looked at the value of some other uses of roster spots that could be facilitated through a player who played multiple positions. Finding someone a good platoon partner would add perhaps 150-200 plate appearances in which the batting team gets a handedness advantage it would not otherwise have, and perhaps three or four additional on-base events (perhaps two or three runs). A defensive specialist replacing a really bad defender might save a team .04 runs per inning, but get to play only 80-100 innings over the course of a year as a defensive replacement (three or four runs). Having a good “10th man” on the bench who would divert plate appearances away from the really awful-hitting (but he can play short!) utility infielder is worth a couple of extra runs per year. BP’s own Sam Miller has also shown that being able to carry a designated pinch runner (of the Billy Hamilton variety) would be worth about a tenth of a win in the space of a month, so call it five or six runs over the course of a year. And that’s if you have Billy Hamilton.

Let’s pretend that a team went back to “the old days” when pitchers were pushed, and could liberate two roster spots from fringy relievers and re-purpose them for position players. The team would lose about 10 runs of value from having to push a tired starter out there, plus probably put its starters at greater risk of injury. The math behind some of these estimates involves the occasional assumption/best guess, and in specific circumstances, the effects might be bigger (or smaller), so your exact mileage may vary. However, looking at all of these alternate uses for a roster spot, even if you found two “best-case scenario” guys, the value that they would have to replace is roughly 10 runs (on average), plus whatever benefit comes from managing pitch counts better. Most of these extra batters are worth an upgrade of three to five runs over the course of a season in ideal circumstances. There are probably specific cases where a team could make up those 10 runs and add some profit. But maybe you can also see that the quick hook and the big bullpen are actually a perfectly reasonable strategy, and it’s not surprising that evolutionary pressures have moved the game toward that roster construction over time. It may not make for an aesthetically pleasing game, but it’s perfectly reasonable from the point of view of maximizing a team’s chances to win a game.
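For anyone who wants the slapdash math collected in one place, here it is as a short script. Every figure in it is one of the rough estimates quoted above, so treat it as a recap of the reasoning rather than a new calculation.

```python
# Back-of-the-envelope version of the roster math above (all values are the
# article's rough estimates, not precise measurements).
games_per_season = 162
extra_quick_hooks = 0.25 * games_per_season   # ~40.5 more early hooks than in 1965
runs_saved_per_hook = 0.25                    # the ~quarter-run starter/reliever gap
bullpen_value = extra_quick_hooks * runs_saved_per_hook
print(f"runs saved by the quick hook: ~{bullpen_value:.1f}")  # ~10.1 per season

# Rough per-season value of alternate uses of a roster spot, per the article:
alternatives = {
    "platoon partner": 2.5,         # two to three runs
    "defensive specialist": 3.5,    # three or four runs
    "good 10th man": 2.0,           # a couple of runs
    "designated pinch runner": 5.5, # five or six runs, and only with a Billy Hamilton
}

# Even the two best alternatives together only roughly match the ~10 runs the
# quick hook is worth, before counting any pitch-count or injury benefit.
best_two = sum(sorted(alternatives.values(), reverse=True)[:2])
print(f"best two alternatives: ~{best_two:.1f}")  # ~9.0 per season
```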
Russell A. Carleton is an author of Baseball Prospectus. Follow @pizzacutter4
12 comments have been left for this article.
Did you consider a chaining effect from heavy-workload starters - that relievers will be kept fresher and thus more effective over the course of a season with a 240-inning guy in the rotation rather than a 180-inning one? Would there be a way to isolate this effect? Maybe reliever effectiveness by month (acknowledging that September call-ups would skew things)?
Thanks and keep up the great work
In my head, yes, although it seems that over time, the historical trend has dealt with this by having a couple extra guys in the bullpen, so as not to overtax the relief corps.
But isn't there an issue that those extra guys are not very good (sub-replacement on most teams, I'd imagine)?
Also, considering leverage, if your 8th-inning guy is tired/less effective in a September pennant race, then wouldn't that have a significant effect if lesser relievers move up the 'pecking order'? And wouldn't this effect be exacerbated if the better relievers are tired by the post-season?