
I’m about to commit a cardinal sin of writing. I’m going to spend the first part of this article telling you all the reasons why you shouldn’t believe the second part of this article. I’m going to lay out a pretty good case (if I do say so myself) that I’m an idiot and that I shouldn’t be allowed anywhere near a database. And then I’m going to do it anyway.

A couple of weeks ago, I looked into the issue of organizations teaching their minor leaguers to play multiple positions. In general, I found that they don’t tend to start until the kids reach about 22 years old, but as players age through the minors, they tend to be more likely to wear more than one glove. It’s hard to know whether this is teams figuring that players are finally “ready for it” developmentally, or whether it’s that players who are still around the minors after a certain age (i.e., they aren’t good enough to have yet made it) had better learn more than one trick if they are ever going to have major-league value.

It’s hard to know anything by looking at minor-league stats. They’re eternally noisy. The minor-league season is shorter than the major-league one, so the sample sizes are smaller. The parks (especially in the Pacific Coast League) tend to play a little more extreme. And players aren’t always… themselves. Because the games don’t actually count (shhhhh…), teams will sometimes tell prospects to practice a skill that they aren’t good at. The power hitter will be told to work on his line drive stroke. He could hit 30 HR, but he needs to learn to tone it down when the situation calls for it. And he might be facing a pitching prospect who had his slider taken away from him. To make things worse, when a player gets really good at hitting in Triple-A, he is removed from the sample. Basically, minor-league baseball violates every rule of sound research methodology out there. I know. I used to teach research methods.

There are plenty of cases where the guy who hit .300/.400/.500 at Triple-A ends up hitting .154/.179/.203 in his brief stay in the majors, before being sent back to Triple-A to be Superman again. And then there’s the guy who has a pedestrian line at Triple-A, but becomes a surprise regular in the bigs. Sometimes, the numbers do lie. It’s one of the things that makes research on player development so hard to do. The data streams that we’re dealing with are dirty and frankly (and with deepest apologies to the people who have given it the ol’ college try) no one’s come up with a good way to clean them up. I’m fond of saying that 95 percent of Sabermetrics is accounting for bias in the data set. The remaining 5 percent is long division. We’re only about 50 percent there on this one.

Since minor-league stats have so many problems and snags that make them unreliable to work with, we’re going to do the only logical thing that we can. We’re going to ignore all of those issues. Damn the torpedoes! When I looked at minor-league multi-positionality, I wondered whether it was true that (as the old saw goes) teams don’t like to move guys around the field because learning a new position will take valuable brain power away from their development as hitters.

Ummm… small problem. This runs straight into a fresh batch of torpedoes. To get big enough sample sizes on whether a player has been affected by a position switch (or becoming a dual citizen at both second and third base), we’re going to need stats on the order of a full season (even a minor-league season), and something funny happens after a minor-league season. A lot of the good players tend to take another step up the ladder, where, in theory, the competition is tougher. On top of that, as we mentioned before, the fact that a team introduces a second position into the player’s life might itself be an indicator that they’re concerned about his bat. Or perhaps they only pick the guys to go to “learn to play left field” camp based on the belief that those are the guys who can handle it without losing anything from their hitting development. Maybe both are at work.

And of course, while we can observe what players do on the field, it’s not like teams publish the notes from their player development meetings. (Hey, team people… help a guy out?)

This is the worst idea I’ve ever come up with (non-high-school division), and I’m going to do it anyway. We’re going to do the best we can to get around some of these issues, but there will be plenty of “yeahbuts.”

I’ve long lamented the lack of research on player development, and I think a lot of it comes from the litany of “yeahbuts” that come along with it. Yes, it’s a dirty, dirty data set and there’s a lot more noise in there than we’re used to. It means that even if I find something interesting, I can’t go around proclaiming that I’ve found some great (or minor) truth about baseball. I’ve found something in a horribly biased data set that may or may not have another reasonable explanation. And I get that, but it means that an entire area of the game is being ignored, and it’s kind of an important area.

Damn the torpedoes! Full steam!

Warning! Gory Mathematical Details Ahead!

I gathered seasonal minor-league batting lines for 2010-2015, including data on positions played during that season. For the purposes of these analyses, I went with simple strikeout and walk rates (K/PA and BB/PA) as outcomes. These aren’t the only stats worth looking at, but they are the ones that are going to be most resistant to crazy park effects, and if there’s going to be some sort of change, we’ll probably see something there. I used pairs of seasons from the same player in which he logged more than 250 PA in both seasons. The nice thing about strikeouts and walks is that we know that—at least at the MLB level—these stats stabilize at PA levels well below 250, so we can feel confident that a player’s K and BB rates in a given year were a true reflection of his talent that year.
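
If you want to picture the setup, here’s a minimal sketch of the season-pairing step. The file name and column names (player_id, year, pa, so, bb) are stand-ins for illustration, not the actual schema I worked from.

```python
import pandas as pd

# Rough sketch of the pairing step; file and column names are stand-ins.
batting = pd.read_csv("minor_league_batting_2010_2015.csv")

# Strikeout and walk rates per plate appearance
batting["k_rate"] = batting["so"] / batting["pa"]
batting["bb_rate"] = batting["bb"] / batting["pa"]

# Keep only player-seasons with more than 250 PA
qualified = batting[batting["pa"] > 250]

# Pair each season (year x) with the same player's previous season (year x-1)
prev = qualified.copy()
prev["year"] = prev["year"] + 1
pairs = qualified.merge(prev, on=["player_id", "year"], suffixes=("", "_prev"))
```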

I tried a few different definitions of when a player was working in a new position. For one, I looked at whether he had gone from only appearing at one position for 25 or more games in the previous season to appearing in two (or more) positions for 25 games or more (each) in the following season. Another was that I looked at hitters who switched their primary position (the one that they played most often) from year to year. Another, more extreme, version of that was the guy who had a primary position in one year that he did not play at all the previous year.
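
For the curious, here’s roughly how those three definitions would be coded, assuming a hypothetical games-by-position table (one row per player, season, and position) rather than whatever your database actually looks like.

```python
import pandas as pd

# Hypothetical positions table: one row per player-season-position,
# with a "games" column. Names are stand-ins.
positions = pd.read_csv("minor_league_positions_2010_2015.csv")

def position_summary(df, min_games=25):
    """Per player-season: positions at the games cutoff, primary position,
    and the full set of positions played."""
    return (
        df.groupby(["player_id", "year"])
          .apply(lambda g: pd.Series({
              "n_pos_25g": int((g["games"] >= min_games).sum()),
              "primary_pos": g.loc[g["games"].idxmax(), "position"],
              "all_pos": set(g["position"]),
          }))
          .reset_index()
    )

summ = position_summary(positions)
prev = summ.copy()
prev["year"] += 1
both = summ.merge(prev, on=["player_id", "year"], suffixes=("", "_prev"))

# Definition 1: one position at 25+ games last year, two or more this year
both["new_second_spot"] = (both["n_pos_25g_prev"] == 1) & (both["n_pos_25g"] >= 2)

# Definition 2: primary position changed from year to year
both["primary_switch"] = both["primary_pos"] != both["primary_pos_prev"]

# Definition 3: new primary position he did not play at all the previous year
both["cold_switch"] = [
    pos not in prev_set
    for pos, prev_set in zip(both["primary_pos"], both["all_pos_prev"])
]
```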

All of the analyses were set up in the same way. I used a mixed-design ANOVA, with time (season x and season x-1) crossed with whatever factor indicated whether our minor leaguer was (or was not) dabbling in a new spot on the field. I also limited the sample in different ways. For example, I ran the analyses only with players who repeated the same level from year to year. Then with those who didn’t. I played around with the age filter a few different ways. I left out the kids in Rookie ball.
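
The text above doesn’t name the software, so here’s a sketch of the same test in Python using pingouin’s mixed_anova as a stand-in, built on the hypothetical frames from the earlier sketches (pairs, both, and the new_second_spot flag).

```python
import pandas as pd
import pingouin as pg

# Attach one of the switch flags to the paired batting lines.
# Repeat for the other definitions and for bb_rate.
analysis = pairs.merge(
    both[["player_id", "year", "new_second_spot"]], on=["player_id", "year"]
)
analysis["switched"] = analysis["new_second_spot"].map({True: "switch", False: "stay"})

# Each season pair is the unit of analysis, so give it its own subject id.
analysis["pair_id"] = analysis["player_id"].astype(str) + "_" + analysis["year"].astype(str)

# Long form: one row per pair per season; season is the within-subject factor.
long = pd.melt(
    analysis,
    id_vars=["pair_id", "switched"],
    value_vars=["k_rate_prev", "k_rate"],
    var_name="season",
    value_name="k",
)

# Mixed-design ANOVA: season (within) crossed with switched (between).
# The season * switched interaction is the term that would show a switch effect.
aov = pg.mixed_anova(
    data=long, dv="k", within="season", subject="pair_id", between="switched"
)
print(aov)
```

The sample restrictions (repeating the same level, the age filters, dropping Rookie ball) are just row filters on the paired data before reshaping.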

Let me keep this short. Nothing happened. I ended up with a large pile of non-significant findings. In no case did the “trying a new position” variable make a significant difference. There were a few close calls (as in p-values between .1 and .2), but nothing that crossed the line. There is little evidence that players who are learning a new position see much of an effect in the batter’s box.

Now, these analyses are blind to which position switches are being attempted. For example, a team might be asking a player to switch from left field to right field, where the skills are largely the same, or from third base to center field, which calls for a very different skill set. They don’t code for whether the new position is easier (and thus takes some pressure off of the player so that he can focus on his hitting) or more demanding. It’s possible that all of that makes a difference.

But right now, we don’t have evidence that position-switching makes much of a difference for a player as he continues his growth and development in the minor leagues.

Yeah, But…
The overly simplistic, perhaps overly optimistic interpretation of these findings is that teams can simply move players around the field as they wish and should only be limited by a player’s ability to actually play the position. After all, if there’s no offensive price to be paid, why not get all your guys a few reps here and there and increase their value that way? It’s not quite that simple. That might be true in the aggregate, and we have evidence that supports that position here, but evidence that supports a position and evidence that really proves a position are two very different things. Another—equally valid at this point—explanation is that the sample of players who switched positions was selected very specifically. The people who selected them probably figured that they were the kind of players who would be okay, at that point in their careers, with learning some new skills without having it drain their resources to develop as hitters. Maybe what we’re seeing here is that the 30 minor-league directors in MLB all did a decent job at picking out who was a good bet.

Still, at the same time, we don’t see any evidence (in the aggregate anyway) of multi-positionality leading to a collapse of all offensive skill. That’s a strawman, but it’s at least true. It suggests that multi-positionality can be done and is being done right now with no ill effects. And maybe there’s a case to be made that it could be pushed more aggressively. I can’t swear to that, but the evidence that we have here is at least encouraging.

And yeah, I just wrote a bunch of words that ends with “I may or may not be right about that…” and that’s okay.
