Smart Baseball


by Keith Law

So here we sit, nearly fifty years after Jerome Holtzman fabricated the save rule and no one thought to contest it or point out that it was useless or as brain-dead a stat as the game-winning RBI (RIP), with a game that has changed substantially because of one writer’s actions. The idea of the Proven Closer™ did not exist before Holtzman’s Folly, but it has become a huge factor in how teams build their bullpens as well as how managers deploy their relievers, creating an assembly-line mentality of one-inning-and-done relievers that wastes roster spots and probably hasn’t improved any team’s chances of winning.

  In the last five full seasons, 2011 to 2015, only four full-time relievers have even reached 90 innings in a season, with 96 the highest. The last reliever to reach 100 innings in a single season was Scott Proctor in 2006, but he did it in 83 games, and was thus still mostly a one-inning guy. Even into the 1990s, the 100-inning reliever was still somewhat common. Mariano Rivera, who later became the greatest one-inning closer in the game’s history, threw 107 innings in 1996 as a two-inning setup man to John Wetteland, and then threw another 1,109 innings over the rest of his career without ever suffering a major arm injury. Keith Foulke threw 105 innings in 67 games for the White Sox in 1999, then spent the next five years working mostly as a closer for three teams before losing effectiveness. While I don’t think we know if relievers throwing back-to-back 100-inning seasons are at risk of injury or decline, or if those 100 innings are less stressful to the arm if they come in 60 appearances or 70, the current paradigm of reliever usage isn’t doing anything to keep pitchers healthy or to help teams manage their bullpens more effectively.

  Teams have believed for decades that they needed a Proven Closer™ to be contenders, even though certain clubs—the Oakland A’s in the late 1990s and early 2000s—would simply trade one Proven Closer™, install a new reliever in the role the next year, and trade him once he was similarly “proven” or just let him walk when he became too expensive.

  Billy Taylor had one career save going into 1996, became the team’s part-time closer at age thirty-four that year, and then was their full-time closer for the next three seasons. He was out of baseball twenty-four months after Oakland traded him to the Mets and never registered another save. Jason Isringhausen was a failed prospect as a starter due to injuries; Oakland acquired him in the Taylor trade, made him a closer, and lost him to free agency after 2001. They traded for Billy Koch from Toronto,* where he was already an established closer, and after he had a career year with the A’s they traded him to the White Sox for Keith Foulke, who had lost his job as Chicago’s closer the previous year. Koch pitched only two more years in the majors, with a 5.12 ERA for two teams. Oakland took Foulke—can you see the pattern here?—and let him reestablish himself as a closer for them, losing him to free agency after the season; he signed a four-year deal with Boston, helping them win the World Series in 2004, but started his decline the following season.

  Few other teams caught on to the tactic at the time, or they simply didn’t want to risk it themselves, even though the A’s demonstrated that closers are made, not born. Most relievers capable of handling the eighth inning can handle the ninth, which means a team facing budget constraints can usually gamble on a cheap closer rather than overpaying for established production that may not even continue.

  One reason teams don’t wish to take the risk that Oakland took in that period is that they don’t view the ninth inning as just another three outs. While we’ve seen many long-standing tenets of conventional baseball wisdom fall over the past ten years, one that we haven’t entirely lost is the idea that the closer’s job is harder than that of the pitchers who came before him. Yet consider this common situation: A team with a one-run lead enters the eighth inning, with the opponent’s three best hitters due up. The team with the lead won’t call on its closer, because that’s not a save situation (as defined by the arbitrary rules listed above), so it will call on another reliever, known as a setup man, even though that pitcher probably isn’t the best option. It also means the closer will pitch in the ninth, by which point he’ll either face an easier slate of hitters with a one-run lead, or the lead will already be gone because the setup man gave up the tying run. So which reliever here had the tougher job, the setup man or the closer? And was using the team’s second-best reliever for the hardest three outs really the right call, just because some long-dead sportswriter said the ninth inning was super-special?

  Despite these less-rational, tradition-bound tendencies, we may have seen a turning point in reliever usage in the 2016 postseason, where several managers, notably Cleveland skipper Terry Francona, began using their ace relievers in nonsave situations, often much earlier in the game than they would have in the regular season, because such tactics maximized their chances of winning. It remains to be seen whether any team will adapt this usage to the regular season, where teams have fewer days off than they do in October, but I’m betting that some team or teams will try out a new bullpen paradigm in the wake of the 2016 playoffs.

  Cleveland acquired Andrew Miller from the Yankees in a massive trade-deadline deal that cost them two of their top prospects. Miller had been the Yankees’ full-time closer in 2015, racking up 36 saves, and then was a part-time closer for New York in 2016, with 12 saves before the trade, as Aroldis Chapman (who was himself traded to the Cubs) became the Yankees’ primary option in save situations. Francona chose to use Miller earlier in games from the moment Miller came to Cleveland, retaining Cody Allen—a good pitcher but inferior to Miller in every way—in the ninth inning. Miller finished only one game out of the ten in which he appeared in October, in game three of the ALCS against Toronto, which was the only game in which he threw a pitch in the ninth inning all month.

  But Miller also worked more within each game than a closer usually does or than Miller himself typically did: he recorded at least four outs in every playoff appearance, something he did just eleven times in the entire regular season. Miller came into his first playoff game in 2016 in the fifth inning, replacing the starter, Trevor Bauer, with a one-run lead, and entered game three of the World Series in the fifth inning with the score tied—both situations that, under the traditional, save-centric bullpen model, would be verboten for a team’s closer.

  The save just isn’t necessary. It tells us nothing we couldn’t already glean from the box score, and gives people the illusion of meaning by its mere existence, which has contributed to overspecialized relief usage and a perverse system where teams often reserve their best relievers for the ninth inning even if those aren’t the toughest outs to get. It deserves its own plot in the stat graveyard, along with the pitcher win, the RBI, and one of the most useless stats baseball has ever seen, fielding percentage.

  5

  Stolen Bases:

  Crime Only Pays If You Never Get Caught

  Everybody loves the stolen base. It’s the most exciting two seconds in baseball, because it’s so visible and involves the actions of three or four different players at once. It’s as if all of the surrounding play stops for a matter of two seconds while we wait to see if the throw gets to the base in time for the fielder to catch it cleanly and tag the runner out. If there’s anything I miss about the way baseball was played when I was a kid, scouring box scores and waiting impatiently for the next Saturday afternoon Game of the Week on NBC, it’s the prevalence of the stolen base in the 1980s, which disappeared when home runs surged the following decade.

  When Hunter S. Thompson quoted the late Oakland Raiders owner Al Davis as saying, “Speed kills. You can’t teach speed. Everything else in the game can be taught, but speed is a gift from God,” it was meant to be in praise of the value of speed in football. In baseball, however, “speed kills” cuts both ways: You can run yourself out of a big inning even more easily than you can run yourself into one, because the cost of making an out on the bases is so much bigger than the benefit of moving a runner ninety feet closer to scoring. Even today, we still see managers fail to understand the very basic calculus of the stolen base.

  This is not to say that the stolen base itself is bad, or that crediting hitters with adding value through base stealing is wrong; if anything, teams have spent resources trying to value speed on the bases more accurately during this ongoing analytics revolution. But stolen bases have a cost, and ignoring that cost means they’re often overvalued. Baseball loves its fast players, and sometimes that means players like Joey Gathright or John Moses reach the big leagues because they can run, even if they can’t do anything else. (Moses is the worst player, by total value, of all MLB players who stole 100 bases in the majors; in an eleven-year career, he managed just a .313 on-base percentage and .333 slugging percentage, and while he stole 101 bases, he was caught 57 times.)

  Speed can help an offense, but it can short-circuit a rally, and a good manager must understand how to use steals judiciously. Making an out on the bases is costly. You have to know where the break-even point is to decide how often to steal and to determine whether a player actually helped your team score more runs with his base stealing.

  The stolen base itself has long been part of baseball; it was first counted as a statistic in 1886, and the next season two players, including eventual Hall of Famer John Montgomery Ward, reached 100 stolen bases. (These totals have been adjusted for the 1898 rule change that gave us the stolen base as we count it today.)

  Since 1901, the first year of the American League, stolen bases have fallen out of vogue and then come back in again, as you can see from the chart below:

  Stolen bases peaked in 1911, with more than 210 steals per team in each league during the height of the “dead ball” era, before home runs became so prevalent or central to the game. (The eight American League teams of 1911 hit 198 home runs in total. Five American League teams hit at least that many by themselves in 2015.) Stolen bases steadily dropped from there through the 1920s, when Babe Ruth’s power revolutionized offenses, and stayed low into the 1960s, when Maury Wills emerged as a force on the base paths for the Dodgers, stealing 104 bases by himself in 1962—more than any entire major-league team stole that year. Wills was followed by Lou Brock, who, after the Cubs traded him to the Cardinals in a deal that Cubs fans would likely rather I didn’t mention, took over the annual stolen base crown from Wills in 1966. Brock led the NL in steals eight times in nine years, culminating in his then-record 118 stolen bases in 1974 (along with a less-noted yet still league-leading 33 times caught stealing, at the time the most since the end of the dead-ball era). After pitching dominance in the late 1960s destroyed offense around the game, prompting MLB to lower the pitcher’s mound in 1969, teams turned to “small ball” to try to “manufacture” more runs (today they’d just outsource it, arguing it’s not where the real value-add lies), and stolen bases entered a twenty-five-year renaissance that only ended when home runs started their surge in 1993.

  My personal coming-of-age as a baseball fan was during that renaissance’s high period, the go-go 1980s, when Rickey Henderson, nicknamed the Man of Steal, broke Brock’s record with 130 stolen bases in 1982, a record that still stands today along with the 42 times he was caught that season. (He even missed 13 games that year, so he attempted 172 steals in 149 games.) Vince Coleman later cracked 100 steals as well, topping out at 110 in 1985. In the eight full seasons from 1982 to 1989, there were a total of 49 seasons of at least 50 stolen bases by 26 players; 15 seasons of at least 75 steals by five different players, including one by my cousin Rudy Law*; and 5 of at least 100 steals, all by Henderson or Coleman. If you liked stolen bases, this was an amazing time to watch baseball.

  There were some truly atrocious base-stealing seasons in the 1980s, too. Steve Sax stole 56 bases in 1983 but was caught 30 times, a total surpassed in that decade only by Henderson. Gerald Young stole 65 bases and was caught 27 times in 1988; Omar Moreno stole 60 bases and was caught 26 times in 1982. One of the worst base-stealing seasons in history came from current broadcaster Harold Reynolds, who, in 1988, stole 35 bases but was caught 29 times.

  When offensive levels spiked suddenly in 1993, the value of the stolen base started to drop; even if managers didn’t do the math, it was obvious that advancing a runner one base didn’t matter as much if the batter at the plate was going to hit 40 home runs that year. Since Rickey stole 93 bases and Coleman 81 in 1988, no player has reached 80 steals in a season. Only one player has topped 75 steals in a season since the offensive surge—commonly, if somewhat spuriously, called the “steroid era”—started in 1993: Jose Reyes in 2007 with 78. Only five other players have even reached 70, with Kenny Lofton doing it twice. Even as offensive levels have dropped in the last few seasons, however, stolen base attempts haven’t bounced back.

  Much of this significant change in the way teams manage their baserunning has come from an increased understanding of the cost of losing a baserunner to an out. Take these two real-player seasons, with the names removed, both of which occurred since the 2011 season:

  The two players had roughly the same number of plate appearances in those seasons. Knowing no other information about them, which player would you say was the better offensive performer?

  Player D has the higher batting average, but Player J makes up for that gap by drawing 22 more walks than Player D, so their OBPs are close to dead even. Their slugging percentages are extremely similar as well. If you see those stats and assume, fairly, that their performances with the bat are about on par, then shouldn’t Player D get the edge for a few more steals?

  Let’s take this even one step further with another player who had similar rate stats:

  In reality, Player R had more than 100 more PA than the other two players did, but we’re going to ignore that for the purposes of this exercise. If you assume that all three had about the same playing time, how would you rank their seasons at the plate?

  The correct answer here is that you can’t. Never mind getting it exactly right—you’d need a lot more information for that—but you can’t even ballpark it, because that last column can’t exist by itself. A successful stolen base is a net positive for the team, because it makes it easier for that player to score in a subsequent hitter’s at bat; that is, it increases the probability that the runner will score. Nothing in baseball exists in a vacuum, but stolen bases are particularly guilty of telling half a good story:

  That next-to-last column, times the runner was caught stealing, is far more important than the column to its left—as much as three times more important, depending on the year and how exactly you want to measure their relative values. (The range of estimates of their values isn’t very wide.) So in this specific case, Player D, Dee Gordon in 2015, stole six more bases than Player J, Jacoby Ellsbury in 2013, but made 16 more outs on the bases to do so, a net negative for his team. Player R, Jose Reyes in 2007, stole 26 more bases than Ellsbury, but made 17 more outs on the bases to do so, which is also a net negative, although a somewhat less obvious one at first glance. So it turns out that even with the rate stats all fairly close, Ellsbury was the most valuable offensive performer of the three, even overcoming Gordon’s advantage in hits and batting average, or Reyes’s big playing-time lead.
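  To make that weighting concrete, here is a minimal sketch in Python. The per-event run values (roughly +0.2 runs for a steal, roughly 0.45 runs lost for a caught stealing) are commonly cited league-average figures rather than anything from this chapter, and the caught-stealing totals come from public season stats, so treat both as assumptions to verify.

```python
# A rough sketch of the weighting described above: a caught stealing costs
# roughly twice (or more) what a successful steal gains. The run values and
# the SB/CS totals below are illustrative, not taken from this chapter.

def net_steal_runs(sb: int, cs: int,
                   run_value_sb: float = 0.2,
                   run_value_cs: float = 0.45) -> float:
    """Approximate runs added (or lost) by a player's base stealing."""
    return sb * run_value_sb - cs * run_value_cs

seasons = {
    "Player D (Dee Gordon, 2015)":      (58, 20),
    "Player J (Jacoby Ellsbury, 2013)": (52, 4),
    "Player R (Jose Reyes, 2007)":      (78, 21),
}

for name, (sb, cs) in seasons.items():
    print(f"{name}: {net_steal_runs(sb, cs):+.1f} runs from base stealing")
# Ellsbury comes out well ahead: fewer steals, but far fewer outs given away.
```

  Run it and Ellsbury’s season adds the most value on the bases despite the smallest steal total, which is exactly the point of the comparison above.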

  Stealing bases is great fun, and can absolutely help an offense, but it has to be done well to work. If you’re caught stealing more than about 25 percent of the time, as a player or as a team, you’re putting a dent in your run-scoring capacity. It’s better to try to steal less frequently but to be successful on a higher percentage of your attempts than to run wild without regard to the consequences of getting caught, as teams did in the 1980s. Most teams have caught on to this at some level, although we still see managers sending runners in a lot of situations where they’d be better off having the first base coach nail the runner’s foot to the bag.

  The hardest thing for any offense to do is to put a man on base; even the best hitters will still fail to reach base safely 60 percent of the time, and plenty of hitters will fail to do so about 70 percent of the time. Therefore, it stands to reason (and holds up statistically) that once you get a guy on base, the last thing you want is to lose him to an entirely preventable out like a failed stolen base attempt. What has only recently crept into mainstream baseball thinking is how to compare the cost of such a failure against the gain of a success.

  Statisticians have long known how to look at this kind of question via something known as the Run Expectancy Matrix, a cumbersome name for a simple concept. Given a specific base-out state, referring to runners on specific bases with a specific number of outs, how many runs should the team at the plate “expect” to score in the remainder of the inning? Here’s the matrix for 2015, provided by Baseball Prospectus in the Stats portion of their website:

  These expected run numbers are just the average number of runs scored from each base-out situation through the end of the inning, over the entire 2015 regular season.
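  If you wanted to build such a matrix yourself, the computation is exactly that averaging: for every plate appearance, record the base-out state and the runs the batting team scored from that point through the end of the inning, then average by state. Here is a rough sketch; the input shape and the sample rows are hypothetical, and real play-by-play data (Retrosheet, for example) takes more preparation than this.

```python
# Sketch of building a run expectancy matrix from play-by-play events.
# Each event is (base_state, outs, runs_rest_of_inning); the field layout and
# the tiny sample below are hypothetical, for illustration only.
from collections import defaultdict

def run_expectancy(events):
    """Average runs scored to the end of the inning for each base-out state."""
    totals = defaultdict(lambda: [0.0, 0])  # (base_state, outs) -> [run sum, count]
    for base_state, outs, runs in events:
        cell = totals[(base_state, outs)]
        cell[0] += runs
        cell[1] += 1
    return {state: run_sum / count for state, (run_sum, count) in totals.items()}

# '1--' means a runner on first only; '---' means bases empty.
sample = [("1--", 0, 1), ("1--", 0, 0), ("1--", 0, 2), ("---", 1, 0)]
print(run_expectancy(sample)[("1--", 0)])  # average runs after 'man on first, nobody out'
```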

  So, from looking at this table, you can see that when a team has a man on first and zero outs, they should expect to score 0.84 runs for the remainder of that inning. If the next batter gets a hit, and the runner advances to third base, that expectation rises from 0.84 to 1.67, because they’ve added a baserunner and have pushed the first one up two bases.

  In the case of a stolen base attempt, we are trying to compare two situations, the successful attempt and the failed one. The successful attempt with a man on first and zero outs takes us from 0.84 to 1.08, a gain of about a quarter of a run. To put it another way, every four successful steals of second with zero outs would be worth about an additional run to the offense.

  A failed attempt, however, drops the team from 0.84 to the expectation for one out and nobody on, an expected value of 0.26 runs. You can see right away that the cost of a time caught stealing is more than twice the gain of a successful steal. Therefore, if you’re Harold Reynolds in 1988, you’re killing your team by running often but getting caught nearly half the time.
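  Using only the three numbers quoted above, you can also back out the break-even success rate for this particular situation. The arithmetic below is a sketch of that calculation, not a formula from the book; the break-even point shifts with the base-out state, the run environment, and the year, but it generally lands in the 70 to 75 percent range, which is where the roughly 25 percent caught-stealing threshold mentioned earlier comes from.

```python
# Break-even math for stealing second with nobody out, using the 2015 run
# expectancy values quoted in the text above.
re_first_no_outs  = 0.84  # runner on first, 0 outs
re_second_no_outs = 1.08  # runner on second, 0 outs
re_empty_one_out  = 0.26  # bases empty, 1 out

gain = re_second_no_outs - re_first_no_outs  # runs gained by a successful steal (~0.24)
cost = re_first_no_outs - re_empty_one_out   # runs lost by getting caught (~0.58)

# Find the success rate p at which the attempt breaks even:
#   p * gain - (1 - p) * cost = 0  ->  p = cost / (gain + cost)
break_even = cost / (gain + cost)
print(f"gain {gain:.2f}, cost {cost:.2f}, break-even success rate {break_even:.0%}")
# Roughly 71%: get caught much more than a quarter of the time and you are costing your team runs.
```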

 
