
January 21, 2015

Prospectus Feature

Quantifying the Wobbly Chair

by Andrew Hopen

Last fall, the Diamondbacks, Cubs, and Red Sox all finished last in their respective divisions. The Diamondbacks dismissed manager Kirk Gibson in what was widely seen as an appropriate move given the franchise’s decline and Gibson’s grittiness-bordering-on-violence. The Cubs fired manager Rick Renteria, not because of performance but because Joe Maddon became available. Public reaction was one of uncomfortable sympathy; nobody was out for Renteria’s head, but c’mon, it’s Joe Freaking Maddon. The Red Sox retained John Farrell, whose team severely underperformed expectations. Surely he benefitted here from a wildly successful 2013.

Point being, keeping or dismissing a manager is a complicated decision, in which on-field results have to be weighed against history, context, and intangibles like leadership and respect. But of the tangible results, which types truly matter, and how much does each shade the picture? I aimed to build a model to answer that question.

The Model
For my data, I included all seasons from 1996 (the first full season of the Wild Card Era) through 2013, using information I could find within or derive from the Lahman database. This includes things like win percentage, playoff appearances, year-to-year improvement, and awards won. I opted to include every opening day manager (i.e., no interim guys, whose fates are often pre-determined) and used my data to predict whether or not each would return as manager of the same franchise the following year. I chose to fit a decision-tree model with boosting. (For those interested, the final tuning parameters chosen by repeated cross-validation were: shrinkage = 0.01, 350 trees, and interaction depth = 3.) I excluded the two expansion-team managers because they messed up variables that relied on previous seasons, and because I felt they deserved unique categorization but were too few for the model to distinguish. I also excluded 1999 Astros manager Larry Dierker, whose health forced a mid-season hiatus, resulting in two separate 1999 stints in the Lahman database.
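For those who want to follow along in R, the fitting step looks roughly like the sketch below, built on the caret and gbm packages. The data frame mgr, its training split mgr_train, the outcome returned, and the predictor column names are all illustrative stand-ins rather than my actual table; only the resampling scheme (repeated cross-validation) and the final tuning values come from the description above.

library(caret)
library(gbm)

set.seed(1996)

# Repeated 10-fold cross-validation for choosing the tuning parameters.
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5)

# Candidate values; the grid includes the winning combination
# (shrinkage = 0.01, 350 trees, interaction depth = 3).
grid <- expand.grid(
  n.trees           = c(150, 350, 500),
  interaction.depth = 1:3,
  shrinkage         = c(0.01, 0.1),
  n.minobsinnode    = 10
)

# `returned` is a two-level factor ("No"/"Yes"): did the opening day manager
# come back for the same franchise the following year? The predictor names
# below are placeholders for the variables listed in the table further down.
fit <- train(
  returned ~ win_pct + win_pct_improve + years_with_team + age +
             career_games + playoffs_2yr + div_rank + new_manager,
  data      = mgr_train,
  method    = "gbm",
  trControl = ctrl,
  tuneGrid  = grid,
  verbose   = FALSE
)

fit$bestTune  # chosen shrinkage, number of trees, and interaction depth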

Across various re-samplings and variable sets, test-set predictions were about 86-87 percent accurate (reaching as high as 91 percent in one instance), compared to a base return rate of 76.5 percent. For the final model, however, I removed the variables “# of Games,” “Wins,” and “Losses,” not because they weren’t helpful, but because they were too helpful. Using either “# of Games” or a combination of “Wins” and “Losses,” the model can predict without fail that managers dismissed mid-season won’t return the next year. That is good for prediction accuracy, but the result is that other, more interesting variables were drowned out. The final model still predicts about 84 percent accurately and is far less simplistic.
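The accuracy comparison itself is simple: score the model’s held-out predictions against the naive “everybody comes back” baseline. A minimal sketch, assuming a test split mgr_test held out from the same hypothetical data frame:

# Class predictions for the held-out manager-seasons.
pred <- predict(fit, newdata = mgr_test)

# Model accuracy vs. the base return rate (i.e., always predicting a return).
test_acc  <- mean(pred == mgr_test$returned)
base_rate <- mean(mgr_test$returned == "Yes")
c(model = test_acc, baseline = base_rate)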

The gbm R package spits out a nifty summary of the percent influence of each variable. For clarification, the “2-year Playoffs” variable indicates which of the previous and current seasons ended in a playoff berth; the options, in chronological order, are “Both,” “Neither,” “First,” and “Second.” The 2014 Red Sox would be “First,” since they made the playoffs in 2013 but not 2014. Here are the results (a sketch of how these numbers can be pulled from the fitted model follows the table):

Variable                         Relative Influence
Win%                                         26.129
Win% Improvement                             20.735
Years with Team                              12.243
Age                                          11.140
Career Games Managed                          7.411
2-year Playoffs                               7.167
Divisional Ranking                            6.349
New                                           4.621
Won wild card                                 1.950
Playoff Appearances w/ Team                   1.337
Career Playoff Appearances                    0.597
Division Champ                                0.282
Award (BBWAA or TSN)                          0.039
Rookie                                        0.000
Made Playoffs                                 0.000
League Champ                                  0.000
World Series Champ                            0.000
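As promised, here is a rough sketch of where those numbers come from and how the two-year playoff factor can be coded. The playoff indicator columns are hypothetical names, and note that with caret’s formula interface the factor levels show up as separate dummy columns in the influence summary.

# Relative influence of each predictor, from the gbm model inside the caret fit.
infl <- summary(fit$finalModel, plotit = FALSE)
infl[order(-infl$rel.inf), ]

# The "2-year Playoffs" factor: which of the previous and current seasons ended
# in a playoff berth, in chronological order ("First" = previous season only).
mgr$playoffs_2yr <- factor(
  ifelse(mgr$playoff_prev & mgr$playoff_curr, "Both",
  ifelse(mgr$playoff_prev,                    "First",
  ifelse(mgr$playoff_curr,                    "Second", "Neither")))
)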

It’s important to note that many of these variables are related to each other, which can have a big effect on their levels of influence. For example, making the playoffs doesn’t actually have zero influence on a manager’s job security, but other variables like “Win %” and “Divisional Ranking” were more helpful and subsumed the influence of “Made Playoffs.” Similarly, winning the World Series has “zero influence” only in the sense that by the time you’re there, you’ve already solidified your position, and other variables catch that. Also, since the model looks at all kinds of departures and not just firings, variables like “age” or “years with team” gain importance, as they can indicate whether a manager is likely to retire.

Examining the variables shows a few interesting results. For example, you’re apparently better off having a .400 winning percentage than a .450 one. I’d venture to guess that the .400 teams were known to be terrible from the start, whereas the manager takes some flak when his preseason dark horse wins only 73 games. Also, winning the wild card appears to be negatively correlated with returning. This is probably not a real effect; it’s much more likely that the model is using the wild card variable to separate strong finishers from teams that lost their division lead down the stretch, or something along those lines.
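One way to examine these relationships is gbm’s partial-dependence plot, which draws a variable’s marginal effect on the prediction. A quick sketch, again using an illustrative column name for winning percentage:

# Partial dependence of the predicted return probability on winning percentage.
plot(fit$finalModel, i.var = "win_pct",
     n.trees = fit$bestTune$n.trees, type = "response")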

Okay, that’s enough explanation. It’s time to announce the Managerial Model Awards, given only to the most notable managerial seasons of the wild card era.
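The percentages quoted below are the model’s predicted return probabilities; one way to surface candidates like these is to score every manager-season and sort, as in the sketch below (the manager and year columns are illustrative):

# Predicted probability of returning, for every opening day manager-season.
probs <- predict(fit, newdata = mgr, type = "prob")[, "Yes"]

awards <- data.frame(manager  = mgr$manager,
                     year     = mgr$year,
                     p_return = probs,
                     returned = mgr$returned)

head(awards[order(-awards$p_return), ])            # most slam-dunk returns
head(awards[order(awards$p_return), ])             # most slam-dunk departures
head(awards[order(abs(awards$p_return - 0.5)), ])  # flip-a-coin candidates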

Most Slam-Dunk Return
Ron Roenicke, 2011 Brewers
The Brewers improved by almost 20 games in Roenicke’s first season, winning the NL Central and advancing to the NLCS, riding big seasons from Prince Fielder and Ryan Braun as well as a decent showing from one-year rental Zack Greinke.

Runners-Up: Ron Gardenhire (2002 Twins), John Farrell (2013 Red Sox)

Most Slam-Dunk Departure
Phil Garner, 2002 Tigers
After a sub-.500 first year and a 66-win sophomore campaign, Phil Garner was afforded no patience in 2002, fired after an 0-6 start. Even though this version of the model doesn’t take total games into account, letting Garner go was still an incredibly obvious call. Also of note: second runner-up Frank Robinson had the lowest predicted return of any full-season manager in every single version of the model.

Runners-Up: Davey Lopes (2002 Brewers), Frank Robinson (2006 Nationals)

Most Surprising Return
Tony Peña, 2004 Royals
Peña was brought on in 2002 as a replacement, and in 2003 won 83 games to give Royals fans their first glimpse of hope since the early ’90s. However, 2004 saw a return to the cellar with an AL-worst 58 wins. Though he did return for 2005, Peña was relieved of duty after just 33 games.

Runners-Up: Frank Robinson (2004 Expos), Bud Black (2011 Padres)

Most Surprising Departure
Tim Johnson, 1998 Blue Jays
Johnson was actually the 13th best bet to return according to the model, and is the perfect example of its inherent limitations. In his first year with Toronto, Johnson’s Blue Jays improved by 12 games from 1997. As the story goes, he was able to motivate his team using tales of his heroism in Vietnam. Problem was, the stories were untrue, and the outrage that followed this admission led to Johnson’s firing shortly before the 1999 season.

Runners-Up: Davey Johnson (1997 Orioles), Larry Dierker (2001 Astros)

Flip a Coin
Jack McKeon, 2005 Marlins (49.9% chance, Did not return)
McKeon was a Florida hero after turning around the 2003 squad in time to win the World Series, but advanced age coupled with two straight years of 83-79 third-place finishes made McKeon an iffy proposition. He also lacked an elite track record, with 2003 being his only playoff appearance in a 16-year career. Of course, McKeon was brought back in 2011 at age 80, but he was unable to once again whip the struggling Marlins into a contender.

Runners-Up: Tony Muser (1999 Royals, 50.3% chance, Returned), Jim Tracy (2005 Dodgers, 49.7% chance, Did not return)

Big thanks to Russell A. Carleton and Rob McQuown for their invaluable advice on this piece.
