Baseball ProGUESTus: Why Having a Quick Hook Helps
November 18, 2011

Believe it or not, most of our writers didn't enter the world sporting an @baseballprospectus.com address; with a few exceptions, they started out somewhere else. In an effort to up your reading pleasure while tipping our caps to some of the most illuminating work being done elsewhere on the internet, we'll be yielding the stage once a week to the best and brightest baseball writers, researchers, and thinkers from outside of the BP umbrella. If you'd like to nominate a guest contributor (including yourself), please drop us a line.

Mitchel Lichtman, or MGL, has been doing sabermetric research and writing for over 20 years. He is one of the authors of The Book: Playing the Percentages in Baseball, and co-hosts The Book blog, www.insidethebook.com. He consulted for the St. Louis Cardinals from 2004 to 2006, as well as for other major-league teams. He holds a B.A. from Cornell University and a J.D. from the University of Nevada Boyd School of Law. Most of the time these days you can find him on the golf course.

In Game 6 of the World Series, Texas scored a go-ahead run in the top of the fifth inning. With two outs and the bases loaded, Colby Lewis, Texas' starter, was allowed to bat for himself. He struck out to end the frame and pitched 1 1/3 subsequent innings. St. Louis eventually won the game in 11 innings.

In Game 1, with runners on first and third, two outs, and a 2-2 tie in the bottom of the sixth, Chris Carpenter was due up for the Cardinals. This time, La Russa pinch-hit with Allen Craig, who singled in the go-ahead run, and St. Louis went on to win the game, 3-2.

In Game 1 of the NLDS, with Arizona losing 1-0, Kirk Gibson, the D-Backs' manager, sent his starter, Ian Kennedy, to the plate in the top of the sixth inning. Kennedy went on to pitch another 1 2/3 innings, allowed one more run, and took the loss for the Snakes.

Finally, in Game 3 of the STL/PHI series, with two outs in the bottom of the sixth inning of a scoreless game and runners on first and second, Jaime Garcia batted for himself and struck out, and the Phillies went on to win the game, 3-2. Garcia pitched only one more inning, giving up a single, a double, and a two-run homer (after an ill-fated IBB to Carlos Ruiz).

What all these scenarios have in common should be obvious: in a close game in the middle or late innings, with the starting pitcher due to bat, the manager must decide whether to pinch-hit for him. In all but one of the above cases, the pitcher was allowed to hit. In no case did the pitcher remain in the game for more than two innings after his PA.

How often does a similar situation arise? In NL games from 2005-2010, the starting pitcher was due to bat in the fifth or sixth inning or later, after completing at least five innings on the mound (thus becoming eligible for a win), with a leverage index (LI) greater than 1.5 (based only on the score, inning, bases, and outs, using a 9.0 rpg environment), a total of 2,687 times. That is 448 per year, or 28 per team per season (once every six games or so). On 43.6 percent of those occasions, the pitcher batted for himself. The average number of subsequent innings pitched was 1.42. Amazingly, 5.5 percent of the time the starter who was just allowed to hit was taken out before retiring another batter! An additional 10.0 percent of the time, he pitched less than an inning.

Not surprisingly, one of the biggest determining factors as to whether the starting pitcher is allowed to hit is his runs allowed thus far.
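To make those inclusion criteria concrete, here is a minimal sketch of the filter, assuming a hypothetical play-by-play table with invented column names; it is not the query or data source actually used for this article.

```python
import pandas as pd

# Toy play-by-play rows, one per plate appearance in which the starting
# pitcher's lineup slot was due up. Column names are invented for illustration.
pa = pd.DataFrame([
    {"year": 2008, "innings_completed": 6, "leverage_index": 2.1,
     "pinch_hit": False, "subsequent_ip": 1.3},
    {"year": 2009, "innings_completed": 7, "leverage_index": 1.8,
     "pinch_hit": True, "subsequent_ip": 0.0},
    {"year": 2010, "innings_completed": 5, "leverage_index": 0.9,
     "pinch_hit": False, "subsequent_ip": 2.0},
])

# The article's inclusion criteria: the starter has completed at least five
# innings (win-eligible) and the leverage index of the spot exceeds 1.5.
spots = pa[(pa["innings_completed"] >= 5) & (pa["leverage_index"] > 1.5)]

# Over actual 2005-2010 NL play-by-play this filter matches 2,687 PAs; the
# starter batted 43.6 percent of the time and then averaged 1.42 more innings.
starter_batted = spots[~spots["pinch_hit"]]
print(len(spots),
      len(starter_batted) / len(spots),
      starter_batted["subsequent_ip"].mean())
```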
Here is a chart detailing the relationship between runs allowed and how often a pitcher bats for himself (and how many subsequent innings he pitches) in high-leverage situations:

Pitcher has completed at least five innings and the LI > 1.5 (NL games '05-'10)
Using these criteria for inclusion (the starter has pitched at least five innings and the LI is greater than 1.5), here are some other breakdowns of the numbers:
It is surprising to me that the home starter bats significantly more often than his visiting counterpart, since in any given inning, the home pitcher has pitched one more inning than the road pitcher. Part of the difference has to do with the fact that the home team tends to be winning more often and that the home starter tends to be pitching better than the road starter. Home teams are also more likely to sacrifice when down by a run. There are probably other logical reasons.
On the other hand, it is not surprising that starters bat more often when their team is winning (and they have likely been pitching well). However, keep in mind that all of these data come from high-leverage situations. So while the average leverage is higher when the losing team is batting, managers likely underappreciate the importance of the AB when their team is ahead. For example, in the bottom of the seventh inning, if the home team is losing by a run, the LI with no outs and no one on base is 1.57; with a runner on first and no outs, it is 2.56. With a one-run lead, the LI is only .83 to lead off the inning, but it is 2.57 with the bases loaded and two outs, and at least 1.50 with a runner on second. Even with a two-run lead, the LI with the bases loaded and two outs is 1.58.
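For readers who like to see the rule spelled out, here is a minimal sketch of the simple "win-eligible and high-leverage" test used throughout this article, with the handful of LI values quoted above hard-coded for illustration; a real implementation would look them up from a full leverage table.

```python
# LI values quoted above for the bottom of the seventh with the home team
# batting, keyed by (score situation, runners, outs). Illustration only.
QUOTED_LI = {
    ("down 1", "bases empty", 0): 1.57,
    ("down 1", "runner on first", 0): 2.56,
    ("up 1", "bases empty", 0): 0.83,
    ("up 1", "bases loaded", 2): 2.57,
    ("up 2", "bases loaded", 2): 1.58,
}

def pinch_hit_candidate(li: float, innings_completed: int) -> bool:
    """The article's simple rule: the starter has finished at least five
    innings and the leverage index of the spot exceeds 1.5."""
    return innings_completed >= 5 and li > 1.5

# A home starter batting in the bottom of the seventh has completed 7 innings.
for state, li in QUOTED_LI.items():
    print(state, li, pinch_hit_candidate(li, innings_completed=7))
```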
As you can see, the earlier in a game it is, the less likely the manager is to yank his starter, even when the leverage is high. What is a little surprising is that even when a starter bats in the fifth or sixth inning, he still doesn’t stay in the game for very long (on the average). Finally, let’s take a quick look at which teams over the last six years allowed their pitchers to bat (with the LI > 1.5) the least and the most after throwing at least five frames (2010 is also listed):
I don’t know that there is a whole lot that we can infer from the above numbers. Most teams have had several managers over the last six years. If we look only at the 2010 numbers, the samples are small, around 28 opportunities per team on the average. In addition, good starting staffs would tend to hit for themselves and deep bullpens might portend shorter outings by the starters. Park factors and a team’s offensive talent would likely come into play as well. In general, there doesn’t appear to be much variation among managers. There is less than a 10 percent absolute difference between the top and bottom teams. Let’s try and list what criteria and information managers presumably use in order to facilitate these decisions.
What if we ignored the bullet points above (and anything else a manager might use to facilitate his decision) and simply went by one simple rule, the starting pitcher's ego (and possibly bank account) be darned? What if we instructed every major-league manager that he is never to let his starter come to bat when the LI is greater than 1.5 and he has already pitched at least five innings? "Heresy," you say—especially if that starter is Roy Halladay or Tim Lincecum (or any other workhorse ace).

First, let's look at what kind of pitchers we're talking about. Remember that 44 percent of the time certain pitchers are allowed to bat, and 56 percent of the time a pinch hitter is called upon. Of course, it's not just the identity and talent of the pitcher that dictates the decision.

By definition, the average pitcher in the NL during this time frame ('05-'10), or in any league and year, has a .260 TAv against. We'll call these ".260 pitchers" to denote their talent/performance. However, when a starter is on the mound, his TAv against is .265; for a reliever, it is .250. (All pitcher batting is removed, as are IBB and SH.) Remember that. It is a very important piece of data. Pound for pound, the average reliever is a better pitcher than the average starter.

So why don't relievers start (and starters relieve)? Because if relievers had to start a game and throw five or more innings, they would do considerably worse (to the tune of around one run per nine innings, or 26 points in TAv). Why is that? Because relievers get the benefit of (typically) facing each batter only once per game, they generally pitch when it is colder (later in night games, which are most games), they can throw harder with fewer pitches in their repertoire, and they are less likely to get fatigued.

This is one feather in the "replace your starter as soon as possible, especially when he comes to bat in a high-leverage situation" cap. The other feather, of course, is the fact that you can replace him with a much, much better hitter. We'll get to that feather in a minute.

Now, just because the average reliever is "better" than the average starter, that doesn't mean that the average reliever is better than the average starter who bats for himself in the sixth or seventh inning of a close ballgame. Let's take a look at the talent of those starters who are allowed to bat, and those who were sent to the showers.
Surprisingly, on the average, only slightly better pitchers were allowed to bat. Charlie Manuel is just about as likely to let Joe Blanton or Kyle Kendrick bat for themselves as he is Cliff Lee or Roy Halladay. So what else drives the manager's decision?

For pitchers who bat for themselves, prior to their PA, their TAv against is .221. For pitchers who are removed for a pinch hitter, it is .243. (Why are those numbers so low if the average pitcher is .260? Both groups are self-selected—they don't include pitchers who were taken out of the game on defense. In other words, if you survived either to hit for yourself or to be taken out for a pinch hitter after pitching at least five innings, you were, on the average, pitching well.)

So pitchers who are allowed to hit and remain in the game are indeed pitching extremely well, while those who are removed are not pitching nearly as well (although still pitching well, as noted above). The 22-point difference in TAv represents around 0.8 runs per nine innings. Also, starters who hit for themselves averaged facing 21.7 batters (excluding IBB, SH, and INT)—the equivalent of throwing around 83 pitches (at 3.8 pitches per PA). Those who were pinch-hit for faced 26.1 batters, or around 100 pitches.

Even though the former group has faced fewer batters and thrown fewer pitches, and thus is more likely to have something left in the tank, I think it is reasonable to presume that managers also think these pitchers are likely to continue pitching well, based on the fact that they have already pitched spectacularly well, at least according to the numbers. A .221 TAv is equivalent to an ERA of roughly 2.20 in a league that averages 4.00.

The $64,000 question is, "Did they continue to pitch well?" According to The Book (of which yours truly is one of the authors), when a starting pitcher starts off a game like gangbusters (retiring the first nine batters in order), it has little predictive value. The exact quote from The Book is, "You can't tell if a pitcher is on based solely on the results of the first nine batters he faces." On the other hand, we also found that when a starter is getting lots of outs late in a game, he tends to have very good subsequent performance.

Also, in this thread on The Book blog, I presented some research indicating that pitchers who were "on" and were allowed to pitch the seventh, eighth, and ninth innings did not pitch particularly well in the seventh and eighth innings. (The ninth inning, as you'll see if you read the entire post, is a special circumstance, and one cannot just look at starter performance as measured by TAv or wOBA against.) Here is the money quote from that blog entry:
In any case, it is easy enough to see how these starters pitched after they were allowed to bat. Remember that, on average, the starter pitched another 1.42 innings after his stint at the plate. What was the average TAv against over these 1.42 innings, again after removing all pitcher batting? Remember, these were .258 pitchers for the season who had pitched to the tune of .221 so far in the game. They were also facing the lineup for the third time, on the average; if there were no carryover effect, we would actually expect them to pitch worse than their normal .258, all other things—like the park, weather, and opposing batters—being equal.

So how did they pitch? (We're going to exclude the ninth inning for the reasons stated above.) They pitched to the tune of a .251 TAv against (after adjusting for the opponent batter pool)—better than we expected, but quite a bit worse than their .221 prior to being allowed to hit. Coincidentally, .251 is almost exactly the same as the average reliever, who is a .250 pitcher. So leaving your "hot" starter in the game yields no advantage over replacing him with an average reliever, unless he is a considerably above-average pitcher.

What about when the starter was taken out of the game for a pinch hitter? How did the relievers pitch in the very next inning? They allowed a TAv of .243, which is around what you would expect from a late-inning reliever.

So let's recap the last few paragraphs. When a manager allows his starter, who is an overall .258 pitcher but has thus far pitched at a .221 level, to bat in a high-leverage situation (LI > 1.5) after he has pitched at least five full innings, the starter pitches at a .251 level (TAv) for the next 1.42 innings, on the average. When the starter is removed in the same situation, relievers pitch at a .243 clip for at least the next inning.

Clearly, there are some starters who are good enough to post a sub-.243 TAv later in the game, but remember that we are asking the question, "What if we were to remove all starters when they have completed at least five innings on the mound and they are due to bat in a situation where the LI is greater than 1.5?" Whatever our answer is, we can perhaps leverage that one-size-fits-all strategy by letting some pitchers (aces) bat for themselves and continue to pitch. Of course, you are then giving up the value of using a pinch hitter, which is almost the whole point of the strategy. In other words, a pitcher would have to pitch a heck of a lot better than .243 or .250 in order to justify allowing him to bat.

This brings us to our next, vitally important question: How much do we gain when we replace our pitcher with a pinch hitter when the leverage is high? A simple way of computing an answer is to figure the average run value difference between a pitcher and a pinch hitter batting, and to multiply that result by the average LI. The average LI in our "starting pitcher coming to bat" situation is 2.29. When the pitcher ends up batting, it is 2.13, and when a pinch hitter is used, it is 2.41. So while managers are certainly using leverage to make their decision, they are letting their starters bat in some pretty high-impact situations (the average is 2.13, so there are many situations that are considerably higher than that).

How can we approximate the average gain in run expectancy (RE) from pinch-hitting for our pitcher? First, let's establish the hitting level of the pitchers who are allowed to bat. The average pitcher who is sent to the plate has a career OPS of .397.
The average pitcher who is lifted has a career OPS of .372. Starting pitchers in general are at .383, so one of the criteria for the "hit/don't hit" decision appears to be the hitting prowess of the pitcher. The approximate "line" for a .397 (OPS) hitting pitcher, per 500 PA (no SH or IBB), is:
The average pinch hitter in the NL from ’05-’10 had this line:
Now we need to know how often each bases/outs situation comes up in our late-inning, high-leverage situation when the pitcher bats. The average numbers of base runners and outs when a manager allows his starter to bat are 1.35 and .85, respectively. The distribution looks like this:
If you are wondering why some common situations occur so infrequently, like no one on and one or two outs, it is because it is rare for this to be a high-leverage situation (LI > 1.5), regardless of the score and inning. Keep in mind that the LI I used is based only on the bases, outs, inning, and score, and not on the position in the lineup or on team or pitcher talent. It is a somewhat generic LI.

The only thing left to do, in order to estimate the average gain from pinch-hitting for the starter, is to compute the difference in RE between our average pitcher and an average pinch hitter for each of the 24 bases/outs states, using the "batting lines" above, and multiply that difference by the respective frequency of occurrence. Here is an example that will make that last sentence a lot clearer: bases loaded and two outs, typically a very high-leverage situation, occurred 5.5 percent of the time that a pitcher batted for himself. Using the stat line for our above-average-hitting pitchers and a "mini-Markov" calculation, we get a resultant RE of .596. In the same situation with a pinch hitter at the plate (using his stat line above), we get an RE of .837. So our pinch-hitter gain is .241 runs. Multiplying this by .055, the frequency with which this bases/outs state occurred in our sample, we get .0133.

We do this for all of the bases/outs states and add all of the resultant numbers (like the .0133 above) together. This gives us our average gain from pinch-hitting for the starting pitcher in these high-leverage situations in the late innings. Here is the same chart as above, with the pinch-hitting gain in runs (RE of pinch hitter minus RE of pitcher batting) added for each bases/outs state:
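The aggregation over that chart is just a frequency-weighted sum of the per-state gains. Here is a minimal sketch, populated only with the one worked example from the text; the remaining bases/outs states would be filled in from the full chart and the same mini-Markov RE calculation, which is not reproduced here.

```python
# Each entry: (frequency of the bases/outs state when the starter bats,
#              RE with the pitcher batting, RE with a pinch hitter batting).
STATES = {
    ("bases loaded", 2): (0.055, 0.596, 0.837),  # pinch-hit gain: 0.241 runs
    # ("runner on second", 2): (freq, re_pitcher, re_pinch_hitter), ...
}

def average_pinch_hit_gain(states):
    """Frequency-weighted average gain in run expectancy from pinch-hitting."""
    return sum(freq * (re_ph - re_pit) for freq, re_pit, re_ph in states.values())

# The single worked example contributes about 0.055 * 0.241, or roughly 0.0133
# runs; with all 24 states included, the article arrives at the overall average
# gain reported below.
print(average_pinch_hit_gain(STATES))
```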
Multiplying the gain in each cell by the frequency of that cell, and then adding everything up, we get an average gain of .151 runs when a pinch hitter bats for the starting pitcher when the LI is greater than 1.5 and the starter has pitched at least five innings. Remember that the average LI when the pitcher bats for himself is 2.13. If we multiply the average gain by the average leverage, we get .322 runs. That is the approximate average effective gain in runs, which corresponds to an approximate gain of .032 wins. (Sacrifice bunts by the pitcher with no outs can mitigate the loss in RE that results from not pinch-hitting.)

Also remember that each team averages around 28 of these decisions per season, and the starter ends up batting in around 12 of the 28. Multiplying 12 by .032 wins yields a gain of .384 wins per season per team by virtue of this simple strategy.

If one were to argue that such a strategy might tax a bullpen or hurt the confidence or ego of a team's starting pitchers, remember that the average subsequent IP whenever a starter is not removed for a pinch hitter is only 1.42. That means that we would be transferring a total of 12 * 1.42, or 17 IP, from the starters to the bullpen, an average of around three fewer innings per starter and perhaps one or two more innings per reliever. This hardly seems like a crisis.

In addition, a manager can leverage this strategy, and thus invoke it less often (and increase the overall gain in wins), by balancing the gain from pinch-hitting against the true talent of the starter. For example, ace starters are around .1 runs per inning better than an average starter. If a manager expects his ace to pitch another two innings on the average (say it is only the fifth or sixth inning, he has been pitching well, and he has thrown only 75 or 80 pitches), we might expect that starter to gain around .2 runs times, perhaps, an average LI of 1.5 while pitching (these decisions typically come in close games), for a total of .3 runs. If the gain from pinch-hitting (including leverage, of course) is less than .3 runs, then the manager can stick with his ace starter.

Similarly, if an average or worse starter, regardless of how he has been pitching thus far, is due to bat in a situation where the gain from a pinch hitter is large, it is a clear case of sending the pitcher to the showers, after congratulating him on a job—albeit an abbreviated one—well done. There is also no law that precludes a manager from invoking this strategy in the fourth or fifth inning (or even earlier!), before the starter has pitched at least five frames, especially if he is pitching poorly, such that the manager is not likely to receive much flak for his early hook. By leveraging or expanding our general strategy, a team can add a half win per season, maybe more.

Let's go through the 2010 season and see which teams/managers could have gained the most and least wins from invoking our general strategy of removing every starter for a pinch hitter after he has pitched at least five innings and is due to bat in a high-leverage (LI > 1.5) situation. For each occurrence, we'll use the actual potential gain in runs, based on the bases/outs state, times the actual leverage, in order to figure the total loss (in missed opportunities) for the season. In other words, I'm simply adding up all the potential losses from not pinch-hitting. I am also going to add in the potential loss or gain from extending the starting pitcher.
If he is a below-average starter, there is additional loss to the team, since I am assuming that he could have been replaced by a better pitcher—a league-average reliever (.250 TAv against). If the starter is better than a league-average reliever, then the loss from allowing him to bat will be mitigated by the difference between his seasonal TAv against (actually six points better—I'll assume he is "on" that day) and that of the average reliever, multiplied by the number of expected innings after the PA (two more if the PA occurs in the fifth or sixth inning and one more if the PA occurs in the seventh or later). In other words, this exercise will reward those managers who leverage or mitigate their decisions by the talent of the starting pitcher, and further penalize those who allow below-average starters to come to the plate.

I'll add two more entries in the last two columns: one, "leveraged wins gained," which allows excellent starters to remain in the game (if the pinch-hitting gain is less than the difference between the starter's expected runs allowed and an average reliever's expected runs allowed over the estimated number of subsequent IP), and two, the same "leveraged wins gained," but expanding the criteria to allow a starter to be removed from the game after pitching only four innings (rather than five).

Before looking at this chart, perhaps you can guess which teams/managers might be the best or worst at allowing their starting pitchers to bat for themselves in the middle and late innings when the game is on the line.
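As a side note, the leveraged rule described above reduces to a simple comparison. Here is a minimal sketch using only the round numbers from the ace example above; the function and its inputs are invented for illustration and are not the article's actual accounting.

```python
def should_pinch_hit(ph_gain_runs: float,
                     ph_leverage: float,
                     starter_edge_per_inning: float,
                     expected_subsequent_innings: float,
                     pitching_leverage: float = 1.5) -> bool:
    """Pinch-hit unless the starter's expected pitching advantage over an
    average reliever (leveraged) exceeds the leveraged pinch-hitting gain."""
    keep_starter_value = (starter_edge_per_inning
                          * expected_subsequent_innings
                          * pitching_leverage)
    return ph_gain_runs * ph_leverage > keep_starter_value

# Ace example from the text: ~0.1 runs per inning better than average, expected
# to pitch two more innings at an LI of ~1.5, i.e. worth ~0.3 runs. The average
# pinch-hitting gain (~0.151 runs) at the average LI when the starter bats
# (~2.13) is worth ~0.32 runs, so even this spot narrowly favors the pinch hitter.
print(should_pinch_hit(0.151, 2.13, 0.10, 2.0))  # True: 0.322 > 0.30
```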
As you can see, leveraging our “five-inning” strategy by allowing the best starters to remain in the game (sometimes) reduces the number of pinch-hitting appearances by more than 15 percent, while the net gain in wins is slightly improved. If you look at the last entries in the third column, you’ll see that allowing starters to be removed after pitching only four innings greatly increases the effectiveness of our strategy. Five teams could have added more than a win in expectancy, and one loveable loser, more than a win and a half! (The downside to this expanded strategy includes a heavier workload for the bullpen, depriving your starters of an occasional win, and “burning” more pinch hitters early in the game.) The last chart is our expanded, leveraged strategy, for each team, 2005-2010:
15 comments have been left for this article.
A shortcoming of this otherwise interesting analysis is that it presumes that the innings not pitched by starters as a result of the quick hook would be spread evenly around the bullpen. This is clearly not the case. The closer wouldn't get more innings (except maybe as an indirect result of having a couple more late, tight leads to hold), nor would the 8th-inning guy, nor would the LOOGY(s). The load would consequently fall disproportionately on one or two "long guys" in the pen. That is undesirable for two reasons. One, the long guys usually aren't very good. Two, one of the main reasons long guys are long guys (the other one being their stuff) is that they don't have the resiliency to tolerate a starter's workload. As a Cardinals fan, I would not have been happy to see Mitchell Boggs or Kyle McClellan take on fifteen or twenty more innings in 2011 than they actually pitched -- McClellan was overextended as it was.
Interesting analysis, though, and worth some exploration. And who would have thought that St. Louis would have suffered second most severely in the NL, over the last several years, as a result of Tony La Russa NOT messing with his bullpen?
MGL did his calculations based on an average reliever replacing the 1.4ish innings the starter might have pitched had he been left in to bat for himself. Boggs and McClellan are almost certainly near the league average.
The point, though, isn't just the run prevention (or lack of it) by the relievers; it's also wear and tear. When I read the part about "What all these scenarios have in common should be obvious", my immediate reaction was "they were all in the post season, when you don't have to worry about overextending the bullpen any more." This is not the case during the regular season.
McClellan, to take one, had a very substantial first-half/second-half split in 2011. He threw too many innings as it was. Piling on more innings in the second half of the season, when he became the long guy after starting for the first half, would not have ended well. And it wasn't going to be the LOOGYs absorbing those extra innings, whether the calculation talks about "an average reliever" or not.
You are just pinpointing an example that makes your argument. There are plenty of longmen on teams that hardly ever get a chance to pitch.
Not really. If you go through the NL rosters for 2011, you will find that very few teams had real "long guys," in the sense of relievers with IP/game approaching 2, at all -- St. Louis is practically the only exception, with Boggs, Lynn, and McClellan during the part of the year when he relieved. Most teams didn't have that luxury. The overwhelming majority of NL relievers had IP/G averages of 1 or thereabouts. When you consider that few teams carried an acceptable 11th (or 12th) man who pitched significantly fewer than the 50-ish innings a typical reliever pitched, it is not at all obvious that pitchers can be found to absorb the added load that MGL proposes. St. Louis would actually have been better equipped for this than most -- and I STILL wouldn't want McClellan or Boggs to up their innings count.
... lending more credence to the postulation that the 8th inning set-up guy and/or the closer might be better utilized in the high-leverage situations early in the game...