Smart Baseball


by Keith Law


  Using these numbers, which do vary slightly each year and would look very different if we were to consider a different baseball environment, like a high school game or the summer collegiate league on Cape Cod, we can estimate the break-even success rate for stolen base attempts that should be the starting point for any manager’s decision on whether to attempt to steal.

  Continuing with the example above, we’d look at the expected value of a success compared to the expected value of a failure, and in this somewhat simplified scenario,* a baserunner must succeed in at least 71 percent of his attempts to steal second base with zero outs for the move to have a positive expected value. If the baserunner starts at second base with zero outs, the breakeven rate for stealing third is even higher, 81 percent, because the runner was already in scoring position and could have scored from second with the right combination of two field outs.
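The break-even arithmetic can be sketched in a few lines. The run-expectancy values below are illustrative figures in the range of a typical modern MLB table, not the book's exact numbers, but they reproduce the roughly 71 percent figure:

```python
# Break-even success rate for a stolen-base attempt, derived from a
# run-expectancy table. The RE values below are illustrative
# approximations from a typical modern MLB environment.
RE = {
    ("1st", 0): 0.86,    # runner on first, zero outs
    ("2nd", 0): 1.10,    # runner on second, zero outs
    ("empty", 1): 0.27,  # bases empty, one out (runner caught stealing)
}

def break_even(re_before, re_success, re_fail):
    """Success rate p at which p*re_success + (1-p)*re_fail = re_before."""
    return (re_before - re_fail) / (re_success - re_fail)

p = break_even(RE[("1st", 0)], RE[("2nd", 0)], RE[("empty", 1)])
print(f"Break-even rate, stealing second with zero outs: {p:.0%}")  # ~71%
```

The same function, fed the run-expectancy states for a runner on second attempting to steal third, yields the higher break-even rate the text describes.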

  These examples are a bit oversimplified because other things can happen on stolen base attempts, especially fielder errors such as errant throws, which can advance the runner an additional base. The expected value of a specific stolen base attempt also depends on factors like the speed of the runner and the batters at bat and on deck; you are more willing to risk the stolen base attempt if your eighth- and ninth-place hitters are coming up than you are with Bryce Harper at the plate. Indeed, if you’re attempting a steal with Harper at the plate, you’re an idiot, because it doesn’t matter where the runner is standing when the batter puts the ball in the bleachers.

  Compare this somewhat rigorous look at the expected value of a stolen base attempt with the traditional approach, which largely hinges on truisms like “don’t make the first or third out at third base” or “a base stealer on first means more fastballs for the hitter.” The book Baseball Between the Numbers, written by multiple Baseball Prospectus writers, including Nate Silver, found that the latter wasn’t true—a good base stealer on first actually reduced the performance of the hitter at the plate.

  There’s good anecdotal evidence that major-league teams are catching on to this idea of gauging stolen base opportunities by the runner’s success rate and by the base-out situation. Since 2000, only one MLB player has stolen at least 20 bases in a season with a success rate under 67 percent: Luis Castillo, whose comically bad 21-for-40 performance in 2003 works out to 52.5 percent. Matt Kemp was 19 for 34 in 2010; Ian Kinsler was 15 for 26 in 2013; but such seasons are becoming increasingly rare. In 2015, the worst base stealer by percentage was probably Jace Peterson, 12 for 22, the only player to fall under 67 percent in at least 20 attempts. Only D. J. LeMahieu did so in 2014, going 10 for 20 on the bases, an especially baffling statistic given that he played for the Colorado Rockies, whose hitter-friendly ballpark makes losing baserunners even more costly. Why send the runner when a routine flyball could end up a home run?

  However, this calculation can change along with the game itself. The value of an additional base increases in a lower run-scoring environment; if home runs were to tumble again, then it would make more sense to try to grab an extra base that would allow the runner to score on a single or double. A major-league run expectancy table wouldn’t apply to amateur games where tin bats lead to higher scoring and poorly struck balls can still fall in for hits, and the entire equation varies when you get further from the majors and fielders don’t field as well.

  While the stolen base as a statistic isn’t bad, nor is it a bad tactic, judging players by their stolen base totals presents us with two problems. One is the imbalance between a stolen base and a time caught stealing I mentioned above: If I tell you a player stole 100 bases, is that good? Well, if he was caught ten times, yes, it’s great. If he was caught 40 times, it’s probably fine. If he was caught 90 times, what the hell is his manager doing?
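To put rough numbers on that thought experiment, here is a sketch using commonly cited linear-weight approximations (about +0.2 runs per steal and about -0.45 runs per caught stealing; both coefficients are assumptions for illustration, not figures from the text):

```python
# Net run value of a season's stolen-base line, using assumed
# linear-weight approximations: ~+0.2 runs per SB, ~-0.45 per CS.
SB_RUNS, CS_RUNS = 0.2, -0.45

def net_runs(sb, cs):
    """Approximate net run value of a stolen-base ledger."""
    return sb * SB_RUNS + cs * CS_RUNS

for cs in (10, 40, 90):
    print(f"100 SB, {cs} CS: {net_runs(100, cs):+.1f} runs")
```

Under these assumed weights, 100 steals against 10 times caught comes out strongly positive, against 40 roughly a wash, and against 90 a substantial net loss, which tracks the intuition in the paragraph above.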

  Rickey Henderson and Tim Raines were two of the greatest players of the 1980s, and both were known for their prolific base stealing. Rickey is better remembered today, and was elected to the Hall of Fame in his first year of eligibility, specifically because of how often he stole. But Raines was the better percentage base stealer, which isn’t as sexy as setting the single-season and career stolen base records but may mean Raines’s legs were more valuable than Rickey’s.

  Before those two came on the scene, only two players in MLB history had stolen 800 or more bases: Ty Cobb, who stole 897 bases and held the career record until 1977, and Lou Brock, who broke it and ended up with 938 stolen bases. Henderson destroyed Brock’s mark, finishing an amazing twenty-five-year career with 1,406 stolen bases, leaving Raines, whose total of 808 remains fourth on the all-time list, in his shadow.

  But Raines has one advantage over Rickey and Brock: he was caught stealing far less frequently, less than half as many times as either player.

  Since caught stealing became an official stat in 1920, only one player has stolen at least 200 bases with a higher success rate than Raines—Carlos Beltran, still active as of this writing, at 86.3 percent (311 steals, only 49 times CS).

  In practical terms, all of those extra times caught stealing for Brock and Rickey negate many of the additional bases they stole when compared to Raines. Brock stole 130 more bases than Raines, but was caught 161 more times, an obvious loss of value even when comparing two different offensive eras. (Brock is also in the Hall of Fame, like Rickey; in January 2017, Raines was elected to the Hall of Fame in his final year on the ballot.)

  Henderson stole 598 more bases but was caught 189 more times, a rate of 76 percent just on the excess. The most glaring difference in their base-stealing success rates comes on straight steals of third base; Rickey was much more likely to attempt to steal third base than Raines was, but was also caught there more frequently, with an 81 percent success rate in 244 attempts to steal third, compared to Raines’s 42-for-43 career performance in steals of third.

  Beyond the fact that we’ve been viewing individual base-stealing accomplishments the wrong way, there is a more drastic, and more substantial, flaw in the traditional view of base stealing.

  The more math-inclined among you may also have noticed something about the stolen base from the charts in the previous sections: Stealing a base isn’t as valuable as we—all of us, really—once thought it was. Its excitement exceeds its worth. Advancing a runner from first base to second base, into what is colloquially known as “scoring position,” is worth just under a quarter of a run with zero outs, and only about .15 runs with one out or two outs. In other words, you need four successful steals of second base with zero outs to add up to one run of value, or one additional expected run scored for the offense. A stolen base is worth much less than a single or walk, for example—barely over half—and that’s before we even consider the cost of a failed attempt. Stolen bases have their place, but they’re no way to build an offense.
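A quick sketch of that arithmetic, using the chapter's value of just under a quarter-run for a steal of second with zero outs; the value of a single here (about 0.45 runs) is an assumed linear-weight approximation, not a figure from the text:

```python
# The chapter's figure: a steal of second with zero outs adds just
# under 0.25 expected runs. The run value of a single (~0.45) is an
# assumed linear-weight approximation for comparison.
sb_value = 0.24
single_value = 0.45  # assumption, for illustration only

print(f"Steals of second (0 outs) needed to equal one run: {1 / sb_value:.1f}")
print(f"A steal is worth about {sb_value / single_value:.0%} of a single")
```

Roughly four steals per run, and a steal worth barely over half a single, before any cost of a failed attempt enters the picture.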

  The tables above show only aggregate figures covering all situations for all teams over the course of the year. If Jon Lester, who has trouble throwing to first base and can’t hold a runner on, is pitching, you’ll be more inclined to send your runners. If Bryce Harper is at the plate, as I mentioned above, your first base coach should break out the epoxy. Think of “run expectancy” less as a mathematical concept and more as a baseball one: Do you think you’ve got a good chance to score here? Is the guy at the plate a good bet to hit a home run or a double, something that would make the stolen base totally unnecessary? Or are you at the bottom of the order, where getting two hits to score a runner from first is an unlikely outcome, so the value of a successful steal is probably higher? How scarce are runs in this ballpark? And, perhaps most important of all, who’s running?

  The data tell us that steals are good when successful and toxic to an offense when unsuccessful. The hardest thing for an offense to do in baseball, regardless of the era, is to put a man on base; deleting him with an ill-advised stolen base attempt is foolhardy. There is game-changing speed, of course, but it’s rare, and players with speed aren’t automatically good base stealers—nor is sending a runner the right
move in every situation. Managers shouldn’t drop the stolen base entirely, but they should be more judicious in its use. The value of a steal or of “team speed” has risen to mythical proportions, but myths, as we shall soon see, can still overshadow rational thinking in the way the game is played on the field.

  6

  Fielding Percentage:

  The Absolute Worst Way to Measure Defense

  Ozzie Smith was known as the Wizard of Oz for his incredible work on defense and his habit of doing a backflip whenever he ran out to his position at the start of a game. At the game’s most challenging defensive position, Smith established a new standard for excellence. He was rewarded by fans with fifteen All-Star Game appearances and by coaches with thirteen Gold Glove Awards. (Gold Gloves don’t always go to the best fielders, but Smith happens to have been worthy.) His unlikely NL Championship Series home run off Dodgers reliever Tom Niedenfuer helped the Cardinals win the National League pennant in 1985, one of three trips to the World Series they’d make with Smith at shortstop.

  Smith played 21,785.2 innings at shortstop in the major leagues, over 19 seasons. He had 4,249 putouts, 8,375 assists, made 281 “errors,” and was part of 1,590 double plays. Baseball-Reference values his defensive contributions at +239 runs, the most of any shortstop in MLB history.

  Omar Vizquel played 22,960.2 innings at shortstop in the major leagues over 24 seasons; his 2,709 games at the position are the most in MLB history. He had 4,102 putouts at short, 7,676 assists, made 183 “errors,” and was part of 1,734 double plays. Vizquel’s Hall of Fame case—he’ll appear on the ballot for the first time after the 2017 season—is a widely debated one, and his advocates often present him as a defender comparable to Smith.

  So Smith played 130 fewer games’ worth of innings at shortstop than Vizquel did, but recorded 147 more putouts and 699 more assists. To put it another way, Smith made 100 more plays per season than Vizquel did.
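That rate claim can be checked from the totals quoted above; this sketch assumes a "season" of 162 nine-inning games, which is an approximation rather than the book's exact basis:

```python
# Plays-per-season comparison from the career shortstop totals quoted
# above. A "season" is assumed here to be 162 nine-inning games.
smith = {"po": 4249, "a": 8375, "innings": 21785.2}
vizquel = {"po": 4102, "a": 7676, "innings": 22960.2}

def plays_per_season(f, season_innings=162 * 9):
    """Putouts plus assists, prorated to a full season of innings."""
    return (f["po"] + f["a"]) / f["innings"] * season_innings

diff = plays_per_season(smith) - plays_per_season(vizquel)
print(f"Smith made about {diff:.0f} more plays per season")
```

The difference comes out to roughly a hundred plays per season, consistent with the claim in the text.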

  Omar Vizquel was a nice player, a good defender, but Omar Vizquel was no Ozzie Smith.

  Advanced metrics agree with this. Baseball-Reference uses TotalZone to measure defense from before the advent of good play-by-play data, which power today’s advanced metrics like DRS and UZR; TotalZone values Vizquel’s defensive contributions at +134 runs, but Smith’s at +239 runs, the third best by any player at any position in MLB history.*

  Yet if you looked only at the statistic that, until the last decade, was seen by nearly everybody as the most reliable measure of defensive prowess, fielding percentage, you’d have favored Vizquel, whose .9847 fielding percentage bests Smith’s .9782 and ranks as the second best among shortstops with at least 500 games at the position (behind only the still-active Troy Tulowitzki). Indeed, Smith ranks just 15th among shortstops all-time in fielding percentage, behind such luminaries as Stephen Drew and Larry Bowa.

  Measuring defense has long been the most difficult problem for anyone, from general managers to independent analysts, within baseball, because the numbers available to us for most of baseball history just weren’t very good. As a result, fielding percentage, one of the only stats on defense we’ve had, rose to the top as a method of evaluating a player’s abilities in the field. Fielding percentage is a simple stat; if you know the player’s total chances in the field, meaning putouts plus assists plus errors, and his error total by itself, you can calculate it: divide his errors by his total chances, and subtract that number from one. You’ll get something in the range of .95 to 1.00 for most big leaguers, although occasionally an awful defender comes along and manages to come in below that.
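As a sketch, the formula just described, applied to the career shortstop totals quoted earlier for Smith and Vizquel:

```python
# Fielding percentage as defined above: one minus errors divided by
# total chances, where total chances = putouts + assists + errors.
def fielding_pct(putouts, assists, errors):
    chances = putouts + assists + errors
    return 1 - errors / chances

smith = fielding_pct(4249, 8375, 281)    # -> ~.9782
vizquel = fielding_pct(4102, 7676, 183)  # -> ~.9847
print(f"Smith: {smith:.4f}  Vizquel: {vizquel:.4f}")
```

The formula reproduces the figures cited in this chapter, and it also makes the flaw visible: nothing in it rewards the extra plays Smith made; it only punishes the errors he was charged with while making them.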

  The problem here is that you’d get an equally good measure of a player’s fielding abilities if you rolled a pair of dice. Fielding percentage doesn’t impart any useful information whatsoever. Pulling a number at random would be just as valuable to us. Hey, Andrelton Simmons is . . . (sound of dice rolling) . . . an 11! That’s great, right? What’s at the root of these problems and why has fielding percentage held such a prominent position in how people evaluate defenses? It all starts with an error.

  One of the biggest problems with fielding percentage is that it’s calculated based on a player’s errors, and errors are highly subjective. So much so that the rules guiding these subjective calls have changed numerous times since they were first incorporated in 1878. In 1887, the National League adopted a rule from the American Association that stated, “[a]n error shall be given for each misplay which allows the striker or baserunner to make one or more bases, when perfect play would have insured his being put out,” while also removing the earlier stipulation that a pitcher should be charged with an error when issuing a base on balls. (Yes, for four seasons, a walk was counted as an error for the pitcher.)

  In 1888, the National League introduced the first rule specifying when a run should count as “earned,” saying a run that scored “unaided by errors before chances have been offered to retire the side” would be earned, implying then that other runs would be unearned. This rule, however, was deleted in 1898, only to reappear in slightly different form in 1917.

  In 1914, the guidelines for errors were expanded, introducing the first of many wrinkles that would reduce the statistic’s usefulness for measuring defense or teasing out pitcher performance. If a catcher or infielder attempts to complete a double play but fails, he should not be charged with an error . . . unless the throw is so wild that one or more runners advance an extra base as a result. So a pitcher can induce what would be an inning-ending double play, only to have it muffed by a fielder, perhaps allowing one or more runs to score that would not have scored without the non-error error.

  Similarly, an error committed during the course of turning a double play may have little to do with the fielder’s range or even his defensive ability overall, especially prior to 2016, when MLB put into place a new rule that discouraged the “takeout” slides that runners would use to try to break up double play attempts. In October 2015, Dodgers second baseman Chase Utley slid at Mets shortstop Ruben Tejada to prevent Tejada from making an accurate throw to first; a potential double play turned into a fielder’s choice with no outs recorded, with four runs eventually charged to pitcher Noah Syndergaard when that double play would have ended the inning with no runners crossing the plate. Oh, and Utley broke Tejada’s leg in the process, which became the impetus for MLB’s rule change.

  Further refinements to the error rule followed in 1920, 1931, and 1950, followed by the rule change in 1951 that awarded an error to any fielder who commits interference or obstruction and thus allows a batter to reach base or a runner to advance one or more bases. Here again we see the use of the error bucket as a sort of trash can for miscellaneous events—if a fielder commits interference, it doesn’t tell us anything about his defensive skills, specifically his ability to convert balls hit to him into outs. It’s just noise.

  Then we get to 1967, when the rulebook finally admits the flaw in the error stat that I think everyone knew was there: “Mental mistakes or misjudgments are not to be scored as errors unless specifically covered in the rules.” The fielder who breaks the wrong way on a routine flyball and doesn’t recover in time to make the play, allowing that ball to fall in for a hit, is not credited with an error; in fact, he’s not credited (or debited) with anything at all, even though his mistake hurt the pitcher and the team. Official scorers aren’t qualified to make this kind of judgment anyway, but the absence of such miscues from the error bucket destroys the stat’s utility.

  These changes and refinements have given us an error rule that, while long, still leaves the official scorer substantial discretion in the judgment of whether a play should be scored as a hit or an error.

  9.12 Errors

  An error is a statistic charged against a fielder whose action has assisted the team on offense, as set forth in this Rule 9.12 (Rule 10.12).

  (a) The official scorer shall charge an error against any fielder:

  (1) whose misplay (fumble, muff or wild throw) prolongs the time at bat of a batter, prolongs the presence on the bases of a runner or permits a runner to advance one or more bases, unless, in the judgment of the official scorer, such fielder deliberately permits a foul fly to fall safe with a runner on third base before two are out in order that the runner on third shall not score after the catch;

  Rule 9.12(a)(1) Comment: Slow handling of the ball that does not involve mechanical misplay shall not be construed as an error. For example, the official scorer shall not charge a fielder with an error if such fielder fields a ground ball cleanly but does not throw to first base in time to retire the batter.

  So, let’s get into what this is really saying: a groundball hit directly to the shortstop, shortstop fields it cleanly but can’t get the ball out of his glove to make a throw, runner reaches first base. How is this anything but an “error”? If we assume that the pitcher was partially responsible for inducing a groundball to shortstop that was playable, why would the pitcher be charged with a hit and, if the runner ends up scoring, an earned run?

  It is not necessary that the fielder touch the ball to be charged with an error. If a ground ball goes through a fielder’s legs or a fly ball falls untouched and, in the scorer’s judgment, the fielder could have handled the ball with ordinary effort, the official scorer shall charge such fielder with an error.

  Here’s the key language: “in the scorer’s judgment.” The scorer isn’t a trained scout or evaluator, and may face pressure from players or coaches who call the press box to complain about scoring decisions. With the error entirely subjective, lacking any consistency across games or ballparks, the scorer’s contributions here tell us nothing of value about player defense.

  For example, the official scorer shall charge an infielder with an error when a ground ball passes to either side of such infielder if, in the official scorer’s judgment, a fielder at that position making ordinary effort would have fielded such ground ball and retired a runner.

 
