Smart Baseball


by Keith Law


  Once again, we’re relying on the scorer’s judgment . . . and in practical terms, this just never happens. If the fielder doesn’t touch the ball, he will almost certainly not be charged with an error, even when an average defender at that position would normally make that play. But the problem here again is that you are asking an untrained eye to discern what a typical fielder would do with that particular ball in play. Even experienced scouts may disagree on that question, so asking a single person to make that call is futile, and the sum of such judgment calls renders the statistic unreliable.

  The comment here continues in the same vein, followed by five rather straightforward descriptions of errors, until we get to:

  (7) whose throw takes an unnatural bounce, touches a base or the pitcher’s plate, or touches a runner, a fielder or an umpire, thereby permitting any runner to advance; or

  Rule 9.12(a)(7) Comment: The official scorer shall apply this rule even when it appears to be an injustice to a fielder whose throw was accurate. For example, the official scorer shall charge an error to an outfielder whose accurate throw to second base hits the base and caroms back into the outfield, thereby permitting a runner or runners to advance, because every base advanced by a runner must be accounted for.

  What the hell is an “unnatural bounce”? Are we accusing the fielder of injecting the ball with steroids or horse tranquilizers? And if the fielder in question made an “accurate” throw, then why are we recording something that would count against him in defensive measures? By the way, if the outfielder’s throw hits second base, it wasn’t an accurate throw. That’s just not how the game works.

  The rule is riddled with references to “the scorer’s judgment,” but the scorer’s judgment isn’t worth squat—and the problem is exacerbated by the lack of any second opinion on these judgments. Since a typical infielder might only have 15–20 errors in a full season, adding or subtracting one due to a scorer’s judgment can make a significant difference in how we perceive the player at the end of that season.

  Furthermore, an error in the scorebook isn’t always an error on the field, while errors on the field often show up as hits (but not errors) in the scorebook. If we think of errors as defensive mistakes, we get very different results from recorded errors, and can easily see how they mislead us as to fielding prowess.

  Consider the case of poor Jose Valentin, the Puerto Rican shortstop who was frequently bounced to other positions because his employers thought his fielding at short was subpar. Valentin led the American League in errors committed twice, in 1996 while with the Milwaukee Brewers and in 2000 while with the Chicago White Sox, yet was among the best-fielding shortstops in the AL in both seasons. In 1996, he led AL shortstops with 37 errors, 15 more than the next-highest total (22 by Derek Jeter), but also fielded more balls in play than all but two shortstops in the AL, finishing fourth in the league in assists and third in double plays. Jeter played 80 more innings at short than Valentin but fielded 30 fewer chances, so TotalZone rated Valentin’s defense as 26 runs better than Jeter’s on the season—the opposite of the conclusion you might get from their error totals or fielding percentages.

  In 2000, Valentin committed 36 errors—12 more than the second-highest total, again from Jeter—but fielded more balls than any shortstop in the AL but Miguel Tejada, even though Valentin only played in 141 games and had fewer innings at the position than the other shortstops who finished in the top five in total chances. Again, Valentin played less than Jeter, about 66 innings’ worth, but handled 110 more chances in the field, more than making up for the handful of extra errors, so that the difference between their TotalZone values was 33 runs in Valentin’s favor.

  There are two possible explanations for Valentin’s particular brand of defense:

  1. Valentin tended to make more errors than the normal shortstop on routine plays, but made up for it by fielding balls that the normal shortstop never touches.

  2. Valentin tended to make more errors than the normal shortstop because he fielded balls that the normal shortstop never touches, earning “errors” when he personally would have been better served leaving the ball alone entirely.

  This is the perverse incentive of the error rule: the player concerned with his own statistics is better served by avoiding difficult plays where he might commit an “error” than by trying to make more plays for his team and risking harm to his basic fielding stats.

  Flawed as errors are, they aren’t even the worst thing about fielding percentage.

  The fundamental problem with fielding percentage is its omission of plays not made. Defense in baseball is not a question of avoiding mistakes, but a matter of converting balls hit in play into outs as often as possible. This is why teams now shift or reposition fielders to maximize their chances of recording outs, even though doing so would increase the chances of someone committing an error. (You can’t mishandle a ball you never touch.)

  Fielding percentage takes the wrong approach from the start: it only considers balls the fielders actually handle, without factoring in the vast number of balls put into play that might have been fielded but weren’t. Scouts understand intuitively that defense includes range, and will grade a player with greater range—that is, the ability to field balls over a wider physical area on the field—higher than one with less. Yet fielding percentage doesn’t even pretend to address that part of defense. If a fielder never touched a ball, in the eyes of fielding percentage that play simply never happened. It is the “see no evil” of baseball stats, and pretending that it measures defense in any way, shape, or form is willful ignorance.

  When I was a kid, collecting baseball cards and getting much of my baseball “insight” from Baseball Digest, I was impressed by the error-free ways of Angels outfielder Brian Downing, a converted catcher who didn’t commit an error as a full-time outfielder in 1982 or 1984, resulting in fielding percentages of 1.000 for those seasons, and committed just one in 1983. He made only 7 errors in his tenure as an outfielder and retired with a fielding percentage of .995, which as of this writing is the third best in MLB history, after Darin Erstad (another Angel) and current Yankees outfielder Jacoby Ellsbury.

  Yet by one very simple measure of range, called Range Factor, Downing ranks only 281st among MLB outfielders with a career mark of 2.174. Range Factor takes the fielder’s total chances in the field, putouts plus assists, and normalizes it per 9 innings the way we calculate ERA:

  Range Factor = (Putouts + Assists) * 9 / Innings Played
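
  To make the contrast with fielding percentage concrete, here is a minimal Python sketch of both calculations. The two stat lines are invented for illustration and belong to no real player; the fielding-percentage formula, putouts plus assists divided by total chances (putouts plus assists plus errors), is the standard one.

```python
# Minimal sketch comparing fielding percentage and Range Factor.
# Both stat lines below are invented for illustration only.

def fielding_percentage(putouts: int, assists: int, errors: int) -> float:
    """(PO + A) / (PO + A + E): share of handled chances converted cleanly."""
    return (putouts + assists) / (putouts + assists + errors)

def range_factor(putouts: int, assists: int, innings: float) -> float:
    """(PO + A) * 9 / innings: successful plays per nine innings in the field."""
    return (putouts + assists) * 9 / innings

# A sure-handed outfielder with little range vs. a rangier one who makes a few errors.
fielders = {
    "low range, no errors":    {"putouts": 250, "assists": 5,  "errors": 0, "innings": 1300.0},
    "high range, some errors": {"putouts": 330, "assists": 10, "errors": 8, "innings": 1300.0},
}

for name, f in fielders.items():
    fp = fielding_percentage(f["putouts"], f["assists"], f["errors"])
    rf = range_factor(f["putouts"], f["assists"], f["innings"])
    print(f"{name}: fielding pct {fp:.3f}, range factor {rf:.2f}")
```

  In this made-up comparison, the error-free fielder posts the prettier fielding percentage while recording about 85 fewer outs over the same innings, which is Downing’s story in miniature.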

  Range Factor is hardly a perfect measure of defense, but it does tell us one thing that fielding percentage can’t: how many times per game the fielder actually, you know, fielded something cleanly. The top of the historical Range Factor leaderboards is stacked with center fielders, who by assignment have more ground to cover, and typically very fast ones who’d run down more balls in play with pure speed.

  Even when we limit Range Factor to left fielders, where Downing played most of his outfield innings, he drops from 3rd to 27th, again showing that Downing’s low error totals were probably a reflection of his limited range: He made plays on balls hit right at him, but didn’t cover much ground and thus didn’t make many difficult plays that would have increased the chances of him committing errors.

  Range Factor and similar statistics that are based on putouts and assists address one problem with fielding percentage—the emphasis on plays made cleanly without considering how many plays were made at all—but they don’t do enough to get at the question of how many plays weren’t made. Take the example of Willie Wilson, left fielder for the Kansas City Royals in the late 1970s and 1980s, and one of the fastest players of his era. Wilson made a lot of plays, so his Range Factor and similar stats are high, but that still gives us an incomplete picture of how good a fielder he was.

  Wilson sits atop the all-time Range Factor leaderboards for left fielders, making about one more play per 27 innings than any other left fielder as far back as Baseball-Reference has data (roughly 1954). That’s about 54 more outs recorded per season than the next-best left fielder, mostly putouts since left fielders don’t record many assists. Wilson was one of the fastest players in major-league history and played quite a bit in center, so it’s no surprise at all that he had tremendous range in left. But Wilson had some inverse help from his pitching staff, too.

  The Royals won the American League pennant in 1980, the first pennant in the franchise’s history, losing the World Series to the Philadelphia Phillies that fall. But one of their big weaknesses was the pitching staff’s inability to miss many bats; they were 13th out of 14 AL teams in strikeouts recorded in 1980, and then finished dead last in the league in strikeouts the next three seasons. Fewer strikeouts means more balls put into play, so it’s not a surprise that several members of those great Royals teams of the late 1970s and early 1980s have some of the highest career Range Factors/9 for their positions, including George Brett (6th among third basemen) and Frank White (11th among second basemen). Even Amos Otis, the center fielder who was probably more fast than he was good as a fielder, ranks 28th for his position, which is by definition filled with good defenders and fast guys.

  Indeed, the flaw common to fielding percentage and Range Factor is a lack of context. A fielder behind a high-strikeout pitching staff won’t get as many opportunities as a fielder on the 1982 Royals did. An infielder behind a pitching staff full of sinkerballers like Kevin Brown or Brandon Webb will get disproportionately more opportunities than an infielder on another team, but an outfielder on that same squad will get fewer—and many of the balls he does handle will be groundballs that are already base hits by the time he gets to them. Fielders in ballparks with a lot of foul ground, such as the atrocity where the Oakland A’s currently play their home games, will get more opportunities to field foul pop-ups, which are relatively easy to field and result in very few errors if misplayed.

  Meanwhile, all plays made (or not made) aren’t created equal, even though fielding percentage and Range Factor treat them that way. The most obvious example is the play where an outfielder reaches over the wall and catches a ball that would otherwise have been a home run. This play saves one run by definition, and possibly more if there were men on base; on average that play is going to be worth more than a run saved if we try to place a generic value on all catches that bring back home runs. That play is still recorded as one putout, the same as a routine pop fly to the second baseman, even though the savings in run prevention are obviously different. This idea, that each play made or not made has a different value, drives more advanced defensive metrics like DRS (Defensive Runs Saved) and UZR (Ultimate Zone Rating), which I’ll discuss in a later chapter, as they have strengths and weaknesses of their own, and have only become possible with the more detailed play-by-play data that have become available in the last twenty years.

  In the meantime, forget fielding percentage. You’ll certainly hear it less in 2017 than you would have five or ten years ago, but it still crops up as a weak measure of defense for people who aren’t aware of the changes in how MLB teams think about fielding. Using fielding percentage to try to tell us who’s a good fielder is like using the prices on a menu to tell us if the food’s any good: if there’s something useful in this information, it’s going to be swamped by all the bullshit.

  Which brings us back to where we began: despite anything else you’ve heard, Ozzie Smith is the greatest defensive shortstop at least of the modern era of baseball and likely in the sport’s history, especially given how players today are much faster, stronger, and more athletic than their counterparts of the early twentieth century. Smith’s defensive prowess plus his offensive contributions—in an era when shortstops didn’t hit at all, Smith posted solid OBPs and was a high-percentage base stealer—made him a no-brainer for the Hall of Fame. Vizquel can’t come close to Smith’s value on defense, and was the inferior hitter as well, hitting .272/.336/.352* playing in a higher-offense era, while Smith hit .262/.337/.328 in the low-offense 1980s and was substantially more valuable on the bases.

  Fielding percentage needs to die, and if it takes the error with it, I won’t be sorry. Instead, we need to shift our minds around defense to thinking about plays made versus plays not made, while also considering whether those plays should be made at all.

  The play made/not made distinction is clear and, in most cases, entirely objective. The other distinction, whether the play “should have” been made or would have been made by an average fielder at that position, is a more difficult one, especially today in an age where teams position their fielders differently for each opposing batter, but we can work with an objective basis by looking at all batted balls hit to the same spot on the field and seeing how frequently they were turned into outs. The same method can look at how much such plays were worth—an outfielder who catches a ball hit to the gap that would typically fall in for a double or a triple has helped his team more than the outfielder who comes in and robs a hitter of a weakly hit single—and value defensive contributions not by number of plays, but in terms of bases or runs saved.
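
  A deliberately stripped-down Python sketch of that approach follows; it is not the actual method behind DRS, UZR, or any other published metric, and the zones, run values, and sample plays are all invented. The shape of the calculation is the point: establish how often a ball hit to a given spot becomes an out, then credit or debit the fielder against that baseline, weighted by what the play was worth.

```python
# Stripped-down sketch of a zone-based defensive evaluation.
# Zones, run values, and the sample plays are invented; this is not DRS or UZR.
from collections import defaultdict

# Each batted ball: (zone it was hit to, responsible fielder, 1 if turned into an out)
plays = [
    ("deep gap", "Fielder A", 1),
    ("deep gap", "Fielder B", 0),
    ("deep gap", "Fielder B", 0),
    ("shallow liner", "Fielder A", 1),
    ("shallow liner", "Fielder B", 1),
]

# Hypothetical run value of the hit each catch prevents, by zone.
run_value = {"deep gap": 0.75, "shallow liner": 0.45}

# Step 1: league-wide conversion rate per zone -- how often that ball becomes an out.
outs = defaultdict(int)
balls = defaultdict(int)
for zone, _, out in plays:
    balls[zone] += 1
    outs[zone] += out
league_rate = {zone: outs[zone] / balls[zone] for zone in balls}

# Step 2: credit each fielder relative to that baseline, weighted by run value,
# so a catch in a rarely converted zone counts for more than a routine play.
runs_saved = defaultdict(float)
for zone, fielder, out in plays:
    runs_saved[fielder] += (out - league_rate[zone]) * run_value[zone]

for fielder, runs in sorted(runs_saved.items()):
    print(f"{fielder}: {runs:+.2f} runs versus average")
```

  Real systems use far finer zones, batted-ball characteristics, and empirically derived run values, but the accounting is the same: plays made and plays not made are both on the books, each weighted by its value.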

  Fielding percentage and the underlying putout-assist-error framework can’t do this. The “play should have been made, but wasn’t” box is ignored unless the fielder committed an error; if he never touched the ball, it’s as if it never happened. The fielder who makes a play that isn’t typically made doesn’t get any more credit than the fielder who fields a ball hit directly at him. We have plenty of data now to do better.

  Fielding percentage may be awful, but there’s some value in simply knowing how many plays a fielder made in total. In the end, though, that number only gives us a portion of the picture and doesn’t tell us enough about what it’s leaving out. It’s comparable to stolen bases—unless it’s accompanied by the number of times the player was caught stealing, a raw stolen base total isn’t that meaningful.

  The world of traditional stats is full of numbers like these, ambiguous totals and ratios that look like they tell us something, but in fact tell us almost nothing at all.

  7

  Bulfinch’s Baseball Mythology:

  Clutch Hitters, Lineup Protection, and Other Things That Don’t Exist

  Baseball’s long history and onetime status as the national pastime have led to countless stories, legends, and myths about all aspects of the game. If you grew up any kind of baseball fan, you heard at some point about Babe Ruth calling his shot in the 1932 World Series (he didn’t) or about Negro Leagues star Josh Gibson hitting a 580-foot homer at Yankee Stadium (physically impossible). Baseball just seems to attract this sort of malarkey, and it’s not limited to stories like a kid begging Shoeless Joe Jackson to “say it ain’t so, Joe” after several White Sox players were tried for throwing the 1919 World Series.

  Some baseball myths pertain to the playing of the game itself and affect the way broadcasters, writers, and fans discuss the game, and can still even play a role in team decisions on players. The rise of advanced statistics, and sometimes just the curiosity of people with some basic coding and math skills, allow us to examine these and determine whether the conventional wisdom holds any water.

  As the great sabermetrician Carl Sagan said, extraordinary claims require extraordinary evidence, and it turns out many myths about baseball don’t stand up to rational scrutiny.

  There is no such thing as a “clutch hitter.”

  This is anathema to many baseball fans, and to narrative-starved sportswriters, but the truth of the matter is that good hitters are good hitters regardless of the situation. A good “clutch hitter” is just a good hitter. If you can hit, you can hit with men on base, with two strikes, with two outs, with runners in scoring position, with the score tied, whatever the case may be. The idea of the hitter who can elevate his game in these “clutch” situations, loosely defined as at bats taking place late in games with the score close, is a myth.

  There are clutch hits, of course. The walk-off home run is, by definition, a clutch hit: it wins the game, perhaps breaking a tie or even bringing the home team from behind to win. We can discuss that hit itself as being clutch, perhaps even saying the whole at bat was clutch, although at some point it becomes unclear whether we’re talking about baseball or a car with a manual transmission. But the idea that a certain hitter is somehow better in these close and late situations—or even that some hitters are demonstrably worse in said situations—is not based in fact, nor has it withstood dozens of attempts to verify it. There are clutch hits in clutch situations, but there are no “clutch hitters.”

  The mere idea of this player who can summon something extra from within himself like the hero of some ancient Greek epic certainly predates my fandom, and the first attempt to prove or disprove such players’ existence of which I’m aware came in 1977, when I was just four years old. Dick Cramer wrote a seminal piece in SABR’s Baseball Research Journal asking the simple question in its title: “Do Clutch Hitters Exist?” Cramer used a statistic called Player Win Average, a precursor to today’s Win Probability Added, which assigned points to hitters based on each event in their seasons and whether those events (hits, outs, walks, etc.) increased or decreased their teams’ chances to win each of those games. He looked at data from 1969 and 1970—bear in mind, this was essentially the era of the abacus when it came to data analysis—and found the hitters who were supposedly “clutch” in one year were no more likely to be so in the other year than players who weren’t “clutch.” Cramer concluded that the supposed clutch ability was merely random variation, and that “[g]ood hitters are good hitters and weak hitters are weak hitters regardless of the game situation.”
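
  Cramer’s test reduces to a simple question: does a hitter’s clutch-minus-overall gap in one season predict his gap the next? Below is a toy Python sketch of that year-to-year check, using simulated hitters rather than Cramer’s data, showing what the answer looks like when no underlying skill exists.

```python
# Toy version of the year-to-year clutch test -- simulated hitters, not Cramer's data.
# Each hitter's "clutch gap" is his close-and-late performance minus his overall
# performance; if clutch ability were real, the gap should repeat across seasons.
import random
import statistics

random.seed(1977)

def season_clutch_gap(true_talent: float, noise: float = 0.030) -> float:
    """One season's observed gap: underlying talent plus small-sample noise."""
    return true_talent + random.gauss(0, noise)

# Assume no real clutch talent (true_talent = 0 for everyone), only noise.
num_hitters = 300
year1 = [season_clutch_gap(0.0) for _ in range(num_hitters)]
year2 = [season_clutch_gap(0.0) for _ in range(num_hitters)]

# Requires Python 3.10+ for statistics.correlation (Pearson's r).
r = statistics.correlation(year1, year2)
print(f"Year-to-year correlation of clutch gaps: {r:+.3f}")
```

  With no skill built into the simulation, the correlation hovers near zero, which is roughly the pattern Cramer found in the real numbers.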

  Subsequent studies have fared no better, although some analysts, including Bill James (who initially supported Cramer’s work), have gone from nonbelievers in the clutch hitter to agnostics in the interim. Tom Ruane ran an extensive study using play-by-play data from Retrosheet from 1960 to 2004 and found no confirmation that the clutch hitter is real, with his results looking very much like what a random distribution of player results would produce. Absence of evidence is not necessarily evidence of absence—that is, just because you didn’t find it doesn’t prove it’s not there—but, as Ruane says in his conclusion, “One could argue that the forces at work here, if they exist, must be awfully weak to so closely mimic random noise, and if they are really that inconsequential perhaps we could assume they don’t exist without much loss of accuracy.”

 
