
Smart Baseball


by Keith Law


  Paul Molitor       7625
  Rafael Palmeiro    7571
  Rabbit Maranville  7473
  Alex Rodriguez     7452
  Lou Brock          7355
  Ty Cobb            7245

  * “Outs” here just refers to outs made specifically by the player in his at bats, and is equal to AB + SF – H.
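The footnote's arithmetic can be sketched in a few lines of Python. The function below implements the AB + SF – H formula exactly as stated; the sample season line in the comment is invented for illustration, not any real player's totals:

```python
def outs_made(at_bats, sac_flies, hits):
    """Outs the batter made himself in his at bats: AB + SF - H."""
    return at_bats + sac_flies - hits

# Hypothetical season line: 600 AB, 5 sacrifice flies, 180 hits
print(outs_made(600, 5, 180))  # 425
```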

  Pete Rose is the Hit King, with more hits (4,256) than any other player in MLB history, having passed Ty Cobb’s previous record of 4,191 in September 1985. But to get there, Rose had to make more outs than any other hitter in MLB history, 2,600 more than Cobb did. (Granted, Cobb played a very different game, with his entire career in the dead-ball era—and prior to integration as well.) This doesn’t change the fact that Rose is MLB’s all-time leader in hits, but the magnitude of that accomplishment is diminished by how many extra outs he made to get there.

  The next nine players on the list are all in the Hall of Fame, followed by Omar Vizquel—whose poor case for the Hall of Fame I discussed in the chapter on fielding percentage—and certain Hall of Famer Derek Jeter. Rabbit Maranville is the only name that might be unfamiliar to some readers, and if you know him at all, it’s probably because so many folks have called him the worst player (or simply one of the worst players) in the Hall of Fame.

  Then there’s Finger-Pointin’ Raffy, Rafael Palmeiro, who testified before Congress that he’d never taken performance-enhancing drugs (PEDs), then tested positive for an anabolic steroid, stanozolol, a few days after recording his 3,000th hit. The Baseball Writers’ Association of America, a subset of whose members vote for the Hall of Fame, would likely have given Palmeiro the honor had he not failed this test, but instead he failed to reach the 5 percent threshold to remain on the ballot in his fourth year and did not appear on the ballot again.

  On merit, Palmeiro probably had a Hall of Fame–caliber career; Sports Illustrated writer and author Jay Jaffe’s work on Hall of Famers pegs Palmeiro’s production as about average for a Hall of Fame first baseman. He’s among the top 100 players of all time as measured by the total-value metric Wins Above Replacement, and his career total of 569 home runs would, in and of itself, have meant automatic enshrinement in an earlier era. I’m not going to argue that Palmeiro was not a Hall of Famer as a player, or that he wasn’t good, but you can see why his case was soft enough that the failed test was the death knell for his candidacy rather than just a point against him.

  Palmeiro produced, but at some cost. His career OBP of .371 ranks just 248th all-time, and he cracked his league’s top ten in OBP only twice, finishing ninth in the AL in 1999 and again in 2002. Palmeiro did damage when he didn’t make an out, but he made more outs than any other member of the 500 home run club except Hank Aaron (755 homers), Alex Rodriguez (696), Willie Mays (660), and Eddie Murray (504).

  “Steady Eddie” might be—or have been—Palmeiro’s most favorable comparison, another first baseman who was never a star but was an above-average big leaguer for a very long time, and, like Palmeiro, wasn’t particularly well liked by teammates. In fact, in the somewhat frivolous stat called a Similarity Score, developed by Bill James as a rough way to measure how similar the career stats of two players are, Murray is the second-most-similar player to Palmeiro, and Palmeiro is the most similar player to Murray. Murray played a bit longer, and if you adjust for the different offensive eras in which they played, with Palmeiro’s production worth a bit less because he played in a much higher run-scoring period, their offensive production comes out nearly even. If you believe that Hall of Fame credentials adhere to the transitive property, then if Murray’s in, Palmeiro belongs in, too. But since Palmeiro’s candidacy revolved heavily around the two big milestones—3,000 hits and 500 homers—it’s fair to look at the cost of that production, which in his case was higher than that of his peers.

  In the end, it’s not that making outs makes you a bad hitter or entirely negates the value you bring to your team. Looking at the names on that list, it would be hard to argue that any of them weren’t assets to their teams. But hits alone aren’t enough to judge the full scope of a hitter’s contribution.

  If you tell me I can only know one thing about a hitter, I want to know how often he gets on base—that is, how often he doesn’t make an out. It’s a simple number that encapsulates a couple of core skills, from the ability to make contact to plate discipline and pitch recognition, and has more predictive value going forward than batting average does. It also covers every time the batter came to the plate and did something related to hitting; the only things excluded are freak events like reaching on catcher’s interference or managerial mistakes like sacrifice bunts.

  Of course, it’s not a complete measure of what a hitter did at the plate. It treats a single and a home run as equivalent events, because in the eyes of on-base percentage, they are equivalent: The hitter reached base safely. OBP is essentially a binary stat—you did or you didn’t, period. To get more detail on the value a hitter produced, we need to look at other statistics, and eventually we’ll have to find a way to put these different pieces together into a measurement of the whole player.

  9

  The Power and the Glory:

  Slugging Percentage and OPS

  In 1975, Mike Schmidt, the twenty-five-year-old third baseman for the Philadelphia Phillies, posted a .249 batting average, which ranked him 50th out of 62 qualifying hitters in the National League that year. Schmidt could work a walk even at that young age, so his OBP was much better at .367, but still not even top 20 in the league. So how did he end up the second-most-valuable player, by modern metrics, in the NL that year?

  Schmidt didn’t hit for much average, but when he did hit, he hit it hard. He led the National League with 38 home runs, and finished in the top ten in doubles with 34. Add in three triples and he led the league in extra-base hits and was fourth in total bases. That gave him a slugging percentage, a crude but effective rate stat that rewards power, of .523, good for third in the NL, night and day compared to his standing in the other common rate stats.

  Schmidt is one of the greatest players in the game’s history and 1975 wasn’t even one of his best seasons, but he hit for so much power even in what was, for him, a “down” year that he was still among the league’s best position players. Of the stats you might find on the back of Schmidt’s baseball card, slugging percentage gives you the best explanation of why.

  Slugging percentage gets to the stuff on-base percentage doesn’t: power. It’s a brute-force approach to the question, just taking a player’s total bases and dividing that by his at bats.*

  Slugging percentage correlates with run-scoring better than straight batting average, albeit not quite as well as on-base percentage. This makes intuitive sense, as slugging percentage incorporates more information than just plain batting average does, but less than on-base percentage. Of course, combining the two in some form will give us even more information about a team’s underlying offensive performance . . . but that will have to wait for the moment.

  Slugging percentage is, in many ways, just smarter batting average. Whereas batting average treats all hits as having equal value, slugging percentage assigns each hit a whole-number value—the number of bases the hitter reached safely as a result of the hit. A single is worth one base, a double two, a triple three, and a home run four. Those numbers are not precise measures of those hits’ actual in-game values, but they are more accurate than simply assuming every hit is the same. We know that an infield single and a home run into the gloaming are not equal. To take one extreme, if absurd, example:
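The weighting scheme described above is easy to express directly. A minimal Python sketch, using the one/two/three/four base values the chapter lays out; the sample season line is invented for illustration:

```python
def slugging(singles, doubles, triples, homers, at_bats):
    """Total bases (1B + 2*2B + 3*3B + 4*HR) divided by at bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / at_bats

# Hypothetical 500-AB season: 100 singles, 20 doubles, 2 triples, 30 homers
print(round(slugging(100, 20, 2, 30, 500), 3))  # 0.532
```

Note that the same 152 hits spread differently across those four buckets would produce a very different slugging percentage while leaving batting average untouched, which is the whole point of the stat.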

  I assume it is fairly obvious which of these players was more valuable based on the information above, but batting average alone would argue in favor of Luis Castillo, or that the two were roughly equal in value, because of how much information it omits—in this case, 71 homers and 15 doubles of omission.

  Even in recent history, the tendency within and outside of the game has been to speak of hitters’ power by their home runs and RBI. I’ve discussed the RBI in a previous chapter, but to reiterate, the use of even a rough stat like slugging beats home runs and RBI for a few reasons.

  • RBI are context-dependent—the guys in front of you in the lineup have to get on base—while slugging has no such dependence on who else is in the lineup or how they perform.

  • Slugging percentage also includes doubles and triples, which are measures of power and to some extent speed, while home runs and RBI exclude those entirely. (Triples are an oddball occurrence; in the modern game, they’re relatively infrequent, and hitters who get a lot of them tend to have one or both of two factors working in their favor: they’re fast, or they play in ballparks that have deep outfields that favor triples.)

  • Slugging percentage is a rate statistic, whereas home runs and RBI are bulk or counting statistics. The more you play, the better those two stats get, while slugging percentage measures total bases per at bat, so it increases only with more production rather than more playing time.

  Team slugging percentage’s correlation to team run-scoring (again using the coefficient of correlation tool) is 0.846, or about 84.6 percent, from 1901 to the present. If we start in 1921, the beginning of the live-ball era, the correlation creeps up slightly to 86.5 percent. Measuring home runs alone, either in the aggregate or on a per-game basis, against runs scored produces correlations in the 49–58 percent range depending on time period. If you hit for more power, you score more runs, whether we’re talking doubles or triples or homers or even folding singles back into the equation. (Deleting singles and looking just at extra-base hits, a statistic known as isolated power that equals slugging percentage minus batting average, drops the correlation to runs/game to 65 percent.)
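The correlations quoted throughout this chapter are ordinary Pearson coefficients. For readers who want to replicate the approach, here is a self-contained sketch; the team slugging and runs-per-game figures below are invented toy numbers, not real league data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented team slugging percentages and runs per game
team_slg = [0.380, 0.400, 0.420, 0.445, 0.460]
team_rpg = [3.9, 4.2, 4.6, 4.9, 5.3]
print(pearson_r(team_slg, team_rpg))  # strongly positive, close to 1
```

Run on actual season-by-season team totals, this is the calculation behind the 0.846 figure in the text.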

  How telling are on-base percentage and slugging percentage? Looking at the top 100 offensive seasons in history according to Baseball-Reference,* all one hundred hitters posted at least a .410 OBP (Frank Robinson, 1966), and only one hitter posted a slugging percentage under .550 (Nap Lajoie, 1910, the middle of the dead-ball era). Willie Mays had the best season ever by a player with an OBP below .400, when he hit 52 homers with a .398 OBP in 1965 for the San Francisco Giants. Several dead-ball era hitters had all-time great seasons with slugging percentages below .500, but the best of those since 1920 came in 1988, when Wade Boggs hit .366/.476/.490, leading the league in batting average, OBP, and in doubles with 45.

  Boggs is an interesting case study in producing value via OBP without home-run power. Boggs slugged .500 just once in his eighteen-year Hall of Fame career, during the rabbit ball season of 1987, his only season with more than 11 home runs. He led the American League in OBP six times, had over 3,000 hits and 1,400 walks, and retired with the 14th-best career OBP of the modern era. Baseball-Reference ranks him 30th all-time among position players in offensive value produced, even with just 118 career homers and a .443 career slugging percentage.

  A big part of why slugging percentage has become so valuable to the modern game, and largely replaced batting average in its usefulness in the process, is that it’s a stat that actually reflects the evolution of modern baseball. In earlier eras of baseball, batting average was sufficient because there weren’t as many people hitting for power, but because the game itself has changed, the metrics we use to examine it have to change as well.

  Today’s game would be unrecognizable for a time traveler from 1890 or 1915, not just because of technology or the presence of nonwhite players (although those are rather significant upgrades), but because the way the game itself is played is so different from how it was played a century or more ago. The original spirit of the game was that hitters were there to put the ball in play; striking out was disdained, and home runs didn’t come into vogue until Babe Ruth became a full-time hitter at the end of the 1910s and started out-homering entire teams, beginning the first New York Yankee dynasty.

  On average, in the dead-ball era, which ended around 1920, each team would hit about a homer every five games, which seems hard to fathom if you’re under the age of about forty and have never seen anything but the current high-offense era of the sport. Home run totals gradually crept up into the early 1960s, with a brief dip during World War II, then experienced a drought in the late 1960s around some rule changes (including the disastrous decision to lower the mound in 1969) and a wave of higher-velocity starters like Bob Gibson and Nolan Ryan. MLB crossed the one homer per team per game margin first in 1987’s rabbit ball year, crossed it again in the strike year of 1994, and has been at one or more in sixteen of the twenty full seasons since then, peaking at 1.17—seven homers per six team-games—in 2000. While the shape of offensive performance has changed many times in the game’s history, the home run is not going anywhere; predictions, for example, that home run rates would drop with improved testing for PEDs have not borne out in reality, as we’re below the 2000 peak but still well above any point in MLB history from prior to 1994.

  As home runs have increased, however, so have strikeouts, which stands to reason as swinging harder to try to maximize power (making harder contact, which we now measure via the ball’s “exit velocity” off the bat) also means swinging and missing more frequently. Strikeouts per team per game didn’t even reach four until 1952—that is, four strikeouts per side in a game, so eight total given that it’s hard to play a game without two teams—but crossed five before the decade was out. In 1994, when MLB crossed the one homer per team-game barrier and stayed there for fifteen years, strikeouts also spiked, crossing the six per team-game mark that same season, then hitting seven in 2010. As I write this in mid-2016, MLB is on pace for an all-time high of more than eight strikeouts per team-game—sixteen per game total, more than double where the rate was in the dead-ball era.

  The rise in power has come with more strikeouts, which again makes intuitive sense, but the more subtle effect is that more strikeouts make hitting for power much more important, too. If you’re putting fewer balls in play overall, you need to accomplish more when you do put the ball in play. Batting averages and on-base percentages haven’t changed much over the last 115 years, but slugging percentage has. Home runs are at historical highs, and doubles have returned to their 1930ish peak level. (The 1930 season itself was an early rabbit ball year, as run-scoring jumped across the game; three of the seven cases in history of a batter knocking in 170 or more runs occurred that year, and Hack Wilson’s 56 home runs in 1930 stood as the NL record until 1997.) Hitters are bigger and stronger than ever, and pitchers are throwing harder than ever, which has resulted in harder contact and more swings and misses. It’s still our game, but it’s not the game our great-grandparents knew, and that makes evaluating hitters strictly on batting average even more foolish than it was back then. If you aren’t hitting for power as a team in 2016 and beyond, you’re not going to score enough runs to compete: in the 232 complete league-seasons from 1901 through 2015, 139 of the teams that led their respective leagues in slugging percentage also led their leagues in runs scored.

  Slugging percentage, by itself, only tells a portion of the story; it’s leaving out lots of information, including a hitter’s ability to draw a walk, and it doesn’t value events perfectly. But as a quick look at a hitter’s power output, it’s useful. The top ten players in MLB history by career slugging percentage include seven inner-circle Hall of Famers, along with Barry Bonds, Mark McGwire, and Manny Ramirez. The next ten include four more Hall of Famers, eventual members Albert Pujols and Miguel Cabrera, and Mike Trout, who is threatening to rewrite the record books by the time he’s done. Hank Aaron ranks 22nd, right behind Frank Thomas. If you hit for power, you’ll have a high slugging percentage, and if you do that you’re probably among the all-time greats.

  So, if OBP is good, and slugging percentage is good, and each stat includes important information the other one doesn’t, why don’t we just add them together and get one glorious stat that covers everything! This stat exists, and has entered the mainstream in the last decade, under the name of OPS, for on-base plus slugging. It’s an ugly mess underneath the hood, and I do not use it in my writing or any kind of evaluation of individual players . . . but at a team level it actually works rather well at the one thing that matters: predicting runs scored.

  OPS itself is just bad math. If you think back to fourth grade, you’ll remember that you can’t add two fractions with unlike denominators, yet that’s exactly what OPS does. On-base percentage is just the number of times a hitter reached base safely divided by most of his plate appearances. Slugging percentage is a hitter’s total bases divided by his at bats, a denominator that in nearly all cases will be smaller than the denominator of OBP because it doesn’t include walks, sacrifice flies, or times hit by pitch. If you do this in elementary school, you get back a paper covered with red ink, and maybe a dunce cap. If you do it in baseball writing, you get a gold star.
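The unlike-denominators complaint is easy to make concrete in code. The sketch below uses the standard OBP denominator (AB + BB + HBP + SF) and the SLG denominator (AB alone); the batter's line is invented for illustration:

```python
def obp(hits, walks, hbp, at_bats, sac_flies):
    """Times on base divided by AB + BB + HBP + SF."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slg(total_bases, at_bats):
    """Total bases divided by at bats alone."""
    return total_bases / at_bats

# Hypothetical line: 150 H, 70 BB, 5 HBP, 500 AB, 5 SF, 250 total bases
on_base = obp(150, 70, 5, 500, 5)   # 225 / 580
power = slg(250, 500)               # 250 / 500
ops = on_base + power               # adds fractions over 580 and over 500
print(round(on_base, 3), round(power, 3), round(ops, 3))
```

The two fractions being summed have denominators of 580 and 500 plate-appearance events respectively, which is exactly the fourth-grade error the text describes: the result is a number, but not a rate of anything in particular.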

  The problem with OPS’s formula—if it even rises to the level of a formula, rather than a mash-up the way a toddler smushes two lumps of Play-Doh together and calls it a present for Mommy—is more than just an academic one. The two components have vastly different scales, so adding them together underweights one component, OBP, and overweights the other, slugging.

  Imagine two players, each with an .800 OPS in the same total number of plate appearances. Player A has a .300 OBP and a .500 slugging percentage. Player B has a .400 OBP and .400 slugging percentage. Which player was more valuable at the plate?

  Before I answer that—and it may be obvious already—consider why the scaling issue of the two components of OPS is a problem. The range of on-base percentages for full-time hitters in MLB in 2016 ran from .256 (Wilson Ramos, Washington Nationals) to .460 (Bryce Harper, also Washington Nationals). Of 141 qualifying hitters, meaning hitters who had 3.1 plate appearances per game their teams played, 115 fell between .300 and .400, about 82 percent of the total.

 
