July 22, 2013 | Baseball Therapy
The High-Pitch-Count Hangover
A week ago, Tim Lincecum pitched a no-hitter against the San Diego Padres, striking out 13, walking four, and throwing—gulp!—148 pitches. He also drew a walk at the plate and scored a run. I'm sure recording the last out is a moment he'll remember for the rest of his life, just as it was for Johan Santana, who last year pitched the first no-hitter in Mets history in a comparatively efficient 134 pitches.

Generally, pitchers don't go more than 100 pitches in a game, but this was a special occasion. I used to use the same logic when I wanted to stay up late as a kid. The thing is that once you use the "special occasion" excuse and find out how much fun it is to stay up until midnight, it becomes easy to think of every occasion as special. There's a re-run of that one episode of Deep Space Nine that was so cool? (The baseball one!) That's special and worth staying up the extra hour. The next day, you feel a little groggier, but you get through, and it's not like anything really bad happened. Right?

I have to imagine that a manager who has a pitcher nearing the 100-pitch threshold, but who really has good stuff that night, finds himself in the same basic position. Should he let the pitcher stay up late and face one more batter, or walk out to the mound with a glass of warm milk and tuck the pitcher in for the night?

Here at BP, the idea of pitcher abuse and extreme pitch counts has been previously discussed by Rany Jazayerli and Keith Woolner, but it's been more than a decade since their work. Let's revisit the issue of pitch counts and the effects that a 140-pitch marathon might have on a pitcher and his performance the next time he goes out to the mound. But first...

Warning! Gory Mathematical Details Ahead!

As per usual, I controlled for general batter and pitcher quality through the log-odds method and used only plate appearances that involved a pitcher who faced at least 250 hitters in that year against a batter who also had at least 250 plate appearances.
I controlled for whether the pitcher had the handedness advantage, and entered his pitch count for the current game prior to the individual plate appearance (i.e., Smith has thrown 37 pitches so far). I entered the pitch count from the previous game as our predictor of interest. I looked at how all of these variables did at predicting the seven basic outcomes of a plate appearance (strikeout, walk, hit-by-pitch, single, extra-base hit, home run, and out in play).

Pitch count from the previous game had a significant predictive effect on singles (p = .082, please spare me the lecture), home runs (.057), and outs in play (hooray, .015!). All three effects were bad news for the pitcher. There is a carry-over effect from one start to the next.

How bad is it? Let's assume that our pitcher is league average for 2012 and is facing a league-average batter, and compare what would happen if his previous outing had been 100 pitches vs. 110 pitches (and, for fun, 140 pitches).
We see that extending a pitcher to 110 pitches in his previous start, compared to a 100-pitch outing, shaves a few hundredths of a percent off each of those outcome rates in his next start. To put that into some workable context, let's say that a manager routinely pushed all five of his league-average starters to 110 pitches, and another routinely stopped at 100. Figuring that a team's starters face about 4,000 batters per year, the first manager's team might be expected to give up roughly an extra single and an extra home run, while losing about four outs. (Yes, I know that doesn't add up. If we looked at the other events, there would probably be tiny fractions of those changing hands.)

All told, we're talking about roughly three or four runs for the team all season as the penalty for routinely pushing pitchers to 110 pitches, rather than 100. That's not zero. If you round a little bit, you can say the words "half a win" and not feel like a liar. Then again, if a manager went to 110 half the time with his pitchers (and how many do that even half the time?), the penalty would be "a run or two." Over an individual game, the effect is very small, and it would be overwhelmed by randomness anyway. There's a signal in that noise, but it's not as interesting a signal as people seem to believe.

Now, regularly pushing pitchers to 140 is a different story. A team would give up seven or eight singles, five extra home runs, and get 18 fewer outs in play (again, doesn't add up... I know). That makes the carry-over penalty over 4,000 plate appearances around 15-20 runs for the season. It's a bad strategy if done constantly, but then there is no manager anymore who does this constantly.

I ran a couple of supplemental analyses (research speak for "I was playing around with the dataset") to check a couple other possible effects. I added an interaction term to the regression between pitch count from the last time out and pitch count up to this point in the game.
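The run totals above can be spot-checked with standard linear-weights run values. The exact values below are my assumption, not the article's (roughly 0.47 runs per single, 1.40 per home run, and 0.27 runs per out a pitcher fails to record), but they reproduce the article's arithmetic:

```python
# Back-of-the-envelope check of the seasonal run penalties, using
# approximate linear-weights run values (assumed, not from the article).
RUN_VALUES = {"single": 0.47, "home_run": 1.40, "out_lost": 0.27}

def carryover_runs(singles, home_runs, outs_lost):
    """Extra runs allowed over a season from the carry-over effect."""
    return (singles * RUN_VALUES["single"]
            + home_runs * RUN_VALUES["home_run"]
            + outs_lost * RUN_VALUES["out_lost"])

# Routinely going 110 instead of 100: ~1 single, ~1 HR, ~4 outs lost
print(carryover_runs(1, 1, 4))      # roughly 3 runs

# Routinely going 140: ~7-8 singles, ~5 HR, ~18 outs lost
print(carryover_runs(7.5, 5, 18))   # roughly 15 runs
```

With these weights the 110-pitch scenario lands right at the "three or four runs" figure, and the 140-pitch scenario lands at the low end of the 15-20 run range.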
Maybe a guy coming off a 120-pitch outing tires more quickly than a guy coming off a 100-pitch outing. That interaction term never got close to significance.

I also looked at whether the number of pitches from two outings ago made a difference by adding that into the regression. (I looked at cases in which both the immediately previous start and the one before it came on standard four-day rest.) Pitch count from two starts ago did not seem to have any additional effect. There is a carry-over effect on performance from one start to the next, but it doesn't appear to persist much past that.

I also tested a quadratic model (I entered pitch count from last time, squared) to account for the possibility that at the extreme edges of pitch counts, the effects might compound with each additional toss toward home. This didn't seem to fit the data, however.

How Long is Too Long?

Finally, the guys who are allowed to go 120 pitches, for example, are (somewhat by definition) the guys whom the manager believes can handle 120 pitches in one game and come back on regular rest and still be effective (in other words, not Erik Bedard). Assuming that managers have some clue about what they're doing, we need to be careful in interpreting these results. Pushing any random pitcher to 120, perhaps one who's not built for it, might (repeat, might) actually have much more catastrophic effects in his next start than these results suggest. Then again, for those of you playing fantasy baseball, if a pitcher does have a 120-pitch outing, history shows that it will not affect him too greatly his next time out.

These results look only at a performance hangover effect from throwing a lot of pitches in one start. The risk of injury is another issue altogether.
One could make a case that allowing a really good starter to work a little overtime in the seventh inning of a tight game when the bullpen is tired or not that good to begin with is actually worth the price to be paid in his next start. However, we know that throwing a lot of pitches is hazardous to a starter's health, and it does little good to get an extra inning out of him now if you lose him for two months down the road. I guess I'll have to do that injury study next.

Finally, there's the fact that Lincecum was chasing a no-hitter, and if Bochy had pulled him out, Lincecum would have spent the rest of his life wondering "what if." Might that have damaged his ego so much that it would have affected him through the rest of the season? Bochy may have been fully aware that letting Lincecum throw another 20 pitches would affect him, but believed that the alternative was worse. Part of the problem is that potential no-hitters don't come along very often, so it's hard to run a study on what has happened throughout history.
Russell A. Carleton is an author of Baseball Prospectus. Follow @pizzacutter4
Pizza, I'm wondering if this analysis runs into the same difficulty as JC Bradbury's analysis when he worked on this. The game in which a pitcher throws 110 or 140 pitches is generally a well-pitched game. So, that pitcher will have (on average) underperformed his seasonal averages in all other games that year including the game immediately after. In other words, if you did this same analysis on the game *before* the 110 pitch or 140 pitch game (or any other randomly sampled start from the season selected as being not the 110 pitch game in question) would you have found the same thing?
It's a variation on the "punishment illusion". Lincecum went 140 pitches _because_ he was throwing a no-hitter, and it's not likely that his next game will be as good. (Where do you go from there but down?)
I had considered that. I figured that the effect is ameliorated by the fact that my baseline for performance is his average stats for that year (although as you point out, this includes his likely awesome performance, which will skew the results). If anything, if some of the decrease in performance is due to a regression-to-the-mean bias, the small effect that I found just got smaller.
Russell, wait a minute. If a pitcher in the high pitch count game gives up 1 run less per 9 (which is probably conservative), then in the next game, his average rpg, even if there is no effect from the previous game, is going to be around 1/30 rpg less than his seasonal average, right?
You are finding an effect of roughly the same amount! So where is there a residual effect? Is my math wrong?
Slight correction on your math. Let's say he's a 4.00 RA/9 pitcher usually, but throws a shutout (so, 4 runs per 9 less than usual). Figure that his starts aren't usually 9 innings, but he's a 180 IP guy seasonally (for ease of calculation). We'd expect him to give up 80 runs over 180 innings. Taking this masterful (but long) start out, we assume 80 runs in 171 innings, which means that he's something like a 4.21 RA/9 pitcher. I don't know that we can make those sort of static state assumptions in real life, but the point is well-taken.
I believe that the argument you're making is that even the small effect I found might be even smaller, which I am happy to support.
Yes, except that if the difference was that a pitcher is really .21 runs/game worse in other games than all games (diff between 4 and 4.21), that would be 20+ runs per season in the "all team all season" hypothetical and actually change the sign of the effect, not just reduce it, right?
With MGL's more modest 1 r/g lower in long outings and your innings model, we get maybe 5 runs per season in the all team all season hypothetical, which changes the sign of the 110-pitch effect and cuts into the 140-pitch effect significantly (and 1 r/g is probably an understatement for 140-pitch outings). So, the adjustments might well be big enough to change part of the take-home message. Given the size of the uncertainty here, it seems unclear whether the sign is positive or negative in either case. Then again, perhaps it doesn't make sense to worry about the sign if the big take-home is just that the effect, whatever it is, is a small one.