The Effect of the Designated Hitter on National League and American League Pitchers
Chris Lewis
In 1973, Major League Baseball introduced the designated hitter (DH) to the Junior
Circuit, the American League. The idea behind the DH was to add offense to the game by
allowing a player to hit for the pitcher’s spot in the batting order. Generally, pitchers’
performance at the plate is well below average, as they focus much more of their attention and
training on getting hitters out as opposed to getting hits themselves. Therefore, it makes sense
to have a legitimate hitter take at bats in place of a pitcher. Furthermore, the designated hitter
changes the ways that American and National League managers make decisions within the
game. The designated hitter allows American League managers to construct a more complete
lineup instead of trying to bury the pitcher in either the eighth or ninth spot, as most National
League teams do. Additionally, National League managers are, at times, forced to use a pinch
hitter for their pitchers early in the game in an effort to capitalize on a scoring opportunity.
Bullpen pitchers are rarely allowed to hit, which can hinder their ability to pitch multiple
innings. American League managers do not have to consider these nuances in their game. Thus,
it is clear that the designated hitter greatly impacts the way American League and National
League games are managed.
Now, we’ll explore the effect that the DH has on pitchers. For context, the advanced
hitting numbers from FanGraphs for the designated hitter and the pitcher averaged over the
past five years are shown below:
Table 1 (DH): Per Season Averages for the DH Over 5 Years (2015-2019)
Table 2 (P): Per Season Averages for the P Over 5 Years (2015-2019)
As you can see, the designated hitter greatly outhits the pitcher. Compared to other
position players, the average season for a designated hitter in the American League is slightly
above average (106 wRC+, where 100 is league average; 6.27 wRAA, where zero is average). In
contrast, the average season for a pitcher in the National League is well below league average,
with a negative wRC+ and a wRAA of -40. (For reference, FanGraphs pairs ranges of wRAA with descriptive adjectives; its scale stops at -20, which it characterizes as "awful.")
Therefore, the question becomes: what impact does this have on the pitchers? In other words,
how does facing an above average hitter once every nine at bats affect a pitcher in comparison
to essentially getting a free out each time through the order?
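For readers unfamiliar with wRAA, it converts a hitter's wOBA gap over the league into runs. A minimal sketch of the FanGraphs formula, with illustrative inputs (not the table values above):

```python
def wraa(woba: float, lg_woba: float, woba_scale: float, pa: int) -> float:
    """Weighted Runs Above Average: runs contributed relative to a
    league-average hitter over the same number of plate appearances."""
    return (woba - lg_woba) / woba_scale * pa

# Illustrative: a .340 wOBA designated hitter over 550 PA, in a .320 wOBA
# league with a wOBA scale of 1.15 (the scale varies slightly by season).
print(round(wraa(0.340, 0.320, 1.15, 550), 1))  # 9.6
```

A league-average hitter (wOBA equal to the league's) scores exactly zero, which is why the pitcher's roughly -40 wRAA seasons are so striking.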
Intuitively, you would think that National League pitchers perform better than
American League pitchers in large part because they face easier lineups on a more regular basis.
And you would be correct. Here are the ERA, xFIP, SIERA, and WAR statistics from FanGraphs
for starting pitchers over the past five years across all games. Note that I chose starting pitchers because they are the most likely to face a fellow pitcher at the plate in the National League, as opposed to relievers, who often face pinch hitters.
Figures 3.1-3.4: ERA, xFIP, SIERA, and WAR for AL and NL starting pitchers by season (2015-2019)
As you can see, National League starting pitchers grade out better than American
League starters in nearly every year and with nearly every kind of metric. In each year,
American League starting pitchers have a higher ERA, which measures the earned runs a pitcher
allows averaged over nine innings. An expected metric like xFIP captures a pitcher’s expected
performance without accounting for the performance of his defense. Still, the National League
performs better. SIERA is a FanGraphs metric that builds off of xFIP by incorporating expected
results of balls in play. While the margin decreases, the National League has a lower SIERA in
each year. WAR is the only metric in which the American League edges out the Senior Circuit,
but only in 2015 and 2016, and by a fairly slim margin. This raises another question: are
National League starting pitchers actually better than those in the American League, or do they
just perform better because they face weaker lineups?
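The two headline metrics above can be sketched directly from their published formulas. ERA is earned runs scaled to nine innings; xFIP follows the FanGraphs formula, swapping a pitcher's actual home runs for the number expected from his fly balls at the league-average HR/FB rate. The constant and all sample inputs below are illustrative:

```python
def era(earned_runs: int, innings: float) -> float:
    """Earned run average: earned runs scaled to a nine-inning game."""
    return 9 * earned_runs / innings

def xfip(fly_balls: int, lg_hr_per_fb: float, bb: int, hbp: int,
         k: int, innings: float, constant: float = 3.10) -> float:
    """xFIP (FanGraphs): like FIP, but replaces actual home runs allowed
    with the total expected from the pitcher's fly balls at the
    league-average HR/FB rate. The constant (~3.10) varies by season."""
    expected_hr = fly_balls * lg_hr_per_fb
    return (13 * expected_hr + 3 * (bb + hbp) - 2 * k) / innings + constant

# Illustrative season line: 72 ER, 150 FB, 50 BB, 5 HBP, 200 K in 180 IP,
# with a 13% league HR/FB rate.
print(era(72, 180.0))                                    # 3.6
print(round(xfip(150, 0.13, 50, 5, 200, 180.0), 2))      # 3.2
```

Because xFIP strips out defense and home-run luck, a gap between a pitcher's ERA and xFIP is usually read as fortune (good or bad) rather than skill.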
My initial thought process was to explore how National League starting pitchers play
against the American League and vice versa. I compared these interleague numbers to how
each group of starting pitchers performs against its own league. I averaged the ERA and
xFIP in each of these situations over the past five seasons, and the results are as follows:
Figures 4.1-4.2: ERA and xFIP for AL and NL starters against each league, averaged over 2015-2019
Interestingly enough, National League starters perform better against both the
American League and National League. The reason for their comparative success in interleague
play can be, in part, attributed to other factors, such as a much smaller sample size (roughly
10,000 fewer innings pitched per season) and the hitters’ lack of familiarity with the pitchers.
Perhaps the most revealing takeaway from this data is that both American League and National
League starters perform better against the National League and worse against the American
League. The xFIP for Junior Circuit starters increases by over 0.2 runs when facing the American
League (4.16 against the National League; 4.38 against the American League). For the National
League, it increases by almost 0.1 runs. An interesting point to note is that, when facing a
pitcher once every nine at bats, the xFIPs and ERAs of both leagues’ starting pitchers are nearly
identical. While we can expect that these numbers for AL starters will increase with a larger
sample size, it is still fairly revealing that, over an average of 1615 innings pitched per season,
American League starters’ performance is nearly identical to that of NL starters against the same
competition. This led me to believe there is some truth to the theory that National League starters perform better because they face weaker lineups. However, Senior Circuit hurlers actually outperform their counterparts against the American League (AL starters' ERA is 0.19 higher and their xFIP 0.14 higher against the same competition), which is the opposite of what you would expect. Without a clear answer, I wanted to dive
deeper into some more advanced data to get a better feel for the true performance of the
starters.
First, I analyzed the batted ball data from Baseball Savant of starters in each league. I
chose to look at launch angle, exit velocity, hard-hit percentage (batted balls over 95 mph),
barrels per plate appearance (percent of plate appearances that result in a “barrel,” which
Statcast defines as a certain exit velocity paired with a launch angle that results in a very high
slugging percentage), home runs allowed, and average home run distance.
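The first two of those batted-ball metrics can be sketched as follows. The hard-hit cutoff (95 mph) matches the definition above; the barrel rule here is a simplified stand-in for Statcast's actual lookup, which starts at 98 mph with a 26-30 degree launch-angle window and widens the window as exit velocity climbs. All sample batted balls are hypothetical:

```python
def hard_hit_pct(batted_balls):
    """Share of batted balls with exit velocity of at least 95 mph.
    batted_balls: list of (exit_velocity_mph, launch_angle_deg) tuples."""
    hard = sum(1 for ev, _ in batted_balls if ev >= 95.0)
    return 100 * hard / len(batted_balls)

def is_barrel(ev: float, la: float) -> bool:
    """Simplified barrel check: 98+ mph, with a launch-angle window that
    starts at 26-30 degrees and widens ~2.5 degrees per extra mph
    (an approximation of Statcast's published rule)."""
    if ev < 98.0:
        return False
    spread = min((ev - 98.0) * 2.5, 24.0)  # cap the widening
    return (26.0 - spread) <= la <= (30.0 + spread)

# Four hypothetical batted balls; two are 95+ mph.
bb = [(101.2, 28.0), (88.0, 12.0), (97.5, 20.0), (92.3, 35.0)]
print(hard_hit_pct(bb))  # 50.0
```

The per-plate-appearance version in the figures simply divides barrel count by total plate appearances rather than batted balls.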
Looking at the first set of data on exit velocity and launch angle, it is clear that American
League starters allow a higher launch angle on balls in play but roughly the same exit velocity
(plus or minus a few tenths of a mile per hour) as their interleague counterparts. Both metrics
are crucial in determining the outcome of a batted ball, but the differences between the two
leagues’ starters are not significant enough to determine whether one is a better quality group
than the other.
Figure 5 and Table 3: Average launch angle and exit velocity allowed by AL and NL starters
The next important set to look at is hard hit percentage and barrel per plate appearance
percentage. These dive deeper into the exit velocity and launch angle statistics to determine
how much quality contact pitchers give up.
Figures 6.1-6.2: Hard-hit percentage and barrels per plate appearance allowed by AL and NL starters
In each year except for 2019, American League starters gave up a higher percentage of hard-hit balls than the National League, but only by about 0.2 percentage points. Interestingly enough, 2019 was
the year with either the biggest or second biggest gap between the two leagues in their ERA,
xFIP, WAR, and SIERA numbers, each favoring the Senior Circuit. In terms of barrel per plate
appearance percentage, National League pitchers came out on top each year. The margin was
relatively close; the largest difference came in 2017, when AL starters allowed barrels on 0.73 percentage points more of their plate appearances. That was also the year in which the two leagues had the
smallest gap in their respective SIERA (only a 0.06 difference). Even though American League
starters gave up more barrels per plate appearance, they should have had essentially the same
quality of performance. In 2015, the difference in barrel percentage was only 0.05 percentage points. The
average WAR for both leagues was basically equal, but all other metrics I analyzed slightly
favored the National League. Thus, in 2015, National League starters, on average, gave up the
same amount of high quality contact per plate appearance and were equally valuable, but they
still got better results. There is an argument that the differences in hard-hit percentage and barrel percentage are negligible. In that case, NL starters give up the same quality of contact as AL starters, even without facing the DH in the majority of their games, yet still get better results. This not only speaks to the notion that NL starters benefit from facing a fellow pitcher
once every nine at bats, but also to the quality of AL and NL lineups as a whole.
To account for ballpark effects, I looked at each league's average home runs allowed per nine innings (HR/9) and the average distance of the home runs they give up.
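HR/9 is the simplest of these rate stats; for completeness, a one-line sketch with illustrative numbers:

```python
def hr_per_nine(home_runs: int, innings: float) -> float:
    """Home runs allowed, scaled to a nine-inning game."""
    return 9 * home_runs / innings

# Illustrative: a staff that allows 24 HR over 180 IP.
print(hr_per_nine(24, 180.0))  # 1.2
```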
Figures 7.1-7.2: HR/9 and average home run distance allowed by AL and NL starters
Not surprisingly, the American League gives up more home runs per nine innings than
the NL. The interesting part about these two sets of data is that the average home run distance
is greater in the NL in every year except for 2017. In the other four years, the difference was roughly two feet. This suggests one of two things. The first: the NL simply gives up higher quality contact, and therefore the home runs it allows travel further. The second: both leagues give up roughly the same quality of contact, but home runs do not have to travel as far to leave AL parks. Since we already know that the American League, if anything, allows hard contact at a slightly higher rate than the National League, the first explanation does not hold, which points to the second: ballpark dimensions.
The final area I analyzed expands upon hard hit percentage and barrel percentage and
looks at how this contact translates to results for pitchers. I wanted to compare the expected
stats for pitchers in each league, which would give me a better gauge of the quality of each league's pitchers, not just their outcomes. These metrics isolate the results that the pitcher can control and focus on the quality of contact allowed. The metrics I used were expected slugging percentage
(xSLG) and expected weighted on base average (xwOBA). I decided to go team by team for
these metrics. The American League is in blue, National League in green, the respective league
averages in yellow, and the Major League average in red.
Figure 8.1: Team xSLG allowed (AL average: 0.420, NL average: 0.409, MLB average: 0.412)
Figure 8.2: Team xwOBA allowed (AL average: 0.320, NL average: 0.315, MLB average: 0.318)
As you can see, the National League runs away with the expected statistics. The Dodgers
have the lowest average xSLG and xwOBA of any team over the past five years (0.366 and
0.286). The Nationals are second in xwOBA (0.292) and a point behind the Astros for second in
xSLG (0.377 for the ‘Stros). The average xSLG for the American League over the past five
seasons is the same as that for the Pirates and the Reds. Neither team is thought of as having a
great rotation, or even an average one, except for the Reds in the last year or two. The AL average is also only 0.001 better than the Colorado Rockies' mark. Is pitching to a DH similar to the Coors effect? The National League average xSLG is two points higher than the Boston Red Sox's, and
three points below the Major League Average. The Red Sox, who made the playoffs three times
in the past half-decade and won it all in 2018, hold the fifth best xSLG in the American League.
Is the average NL rotation a top five AL rotation?
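Under the hood, xwOBA applies wOBA's linear-weight run values to the expected outcome probabilities that Statcast assigns each batted ball from its exit velocity and launch angle (real walks and strikeouts enter at their full weights). A simplified sketch; the weights below are approximate season values and the probabilities are hypothetical:

```python
# Approximate wOBA run values; the real weights vary slightly by season.
WEIGHTS = {"bb": 0.69, "1b": 0.88, "2b": 1.25, "3b": 1.58, "hr": 2.03}

def xwoba(events, pa: int) -> float:
    """events: one dict per plate appearance mapping outcome -> expected
    probability (Statcast derives these from exit velocity and launch
    angle; here they are hypothetical). Returns the weighted average."""
    total = 0.0
    for probs in events:
        total += sum(WEIGHTS[o] * p for o, p in probs.items())
    return total / pa

# Two hypothetical plate appearances: a walk, then a well-struck fly ball
# with a 10% single, 15% double, and 30% home run expectation.
pas = [{"bb": 1.0}, {"1b": 0.10, "2b": 0.15, "hr": 0.30}]
print(round(xwoba(pas, 2), 3))  # 0.787
```

Because the probabilities come from contact quality rather than what actually happened, two staffs with identical xwOBA allowed are giving up equivalent contact even if their actual results differ.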
In terms of xwOBA, the average mark for the American League is 0.320, which is only
0.002 points above the Major League average. Starters from the Atlanta Braves and the San
Diego Padres share the same xwOBA, and the Phillies, Pirates, and Brewers are only 0.002 or 0.003 above that number. For context, those five teams are roughly 0.007 points above the NL average (0.315). Staying with the Senior Circuit, its average xwOBA is, again, most comparable to the Red Sox (0.311; the next closest teams are the Yankees at 0.310 and the Blue Jays at
0.322). The Sox rank fifth in xwOBA in the AL as well. Essentially, this means that, based on xSLG
and xwOBA, the average National League rotation would be a top five rotation in the AL and the
average American League rotation would rank below average in the NL.
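The "top five rotation" claim is just a ranking exercise: insert one league's average mark into the other league's team list and count how many sit below it. A sketch; only the Yankees (0.310) and Red Sox (0.311) figures come from the text above, and the remaining AL values are placeholders for illustration:

```python
def rank_among(value: float, team_values) -> int:
    """1-based rank a staff would hold if its mark were inserted into a
    list of team marks (lower xwOBA allowed is better)."""
    return 1 + sum(1 for v in team_values if v < value)

# Hypothetical AL team xwOBAs allowed; 0.310 and 0.311 are from the text,
# the rest are placeholders.
al_xwoba = [0.305, 0.308, 0.310, 0.311, 0.317, 0.319, 0.320, 0.321,
            0.322, 0.323, 0.324, 0.326, 0.328, 0.330, 0.332]
print(rank_among(0.315, al_xwoba))  # 5
```

With these placeholder values the NL average (0.315) slots in fifth, matching the "top five AL rotation" reading of the data.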
If we simply look at xSLG and xwOBA, it is fairly clear that NL rotations are superior to
their interleague counterparts, even though the margins are not huge. However, when we take
into account the barrel percentages and hard hit percentages, we see that the quality of
contact that NL and AL starters give up is very similar, but still favors the NL by one or two
percentage points. Taking into account the ballpark effect and average home run distance, we
can conclude that NL starters would give up more home runs if they pitched in AL parks, solely
based on the dimensions. Additionally, looking back at the first set of graphs (pg. 3) with ERA
and SIERA, the difference in ERA in each year decreases when we account for quality of contact
and evaluate the pitcher based on factors he can control.
Thus, based on the results of my analysis, it is reasonable to conclude that National
League starters would not perform as well in the American League. This is not to say that if Jacob deGrom were pitching for the Yankees instead of the Mets he would be average, or that if Gerrit Cole pitched in the NL West he would pitch to a 1.00 ERA. But, as teams become more
focused on predicted future performance, it is important to look at NL starters’ numbers in a
broader context and consider the effect that the designated hitter has on the game.
Data courtesy of Baseball Savant and FanGraphs.