While I was listening to 670 The Score in Chicago, one of the personalities went on a misguided excursion into advanced metrics by stating that how NFL teams perform on opening drives is a measure of quarterback ability. The hairs on the back of my neck stood up immediately as I recognized that cause and effect were being reversed, so I did the research. Here are the top 10 QBs in scoring on opening drives for 2012:
Using the renowned Josh Freeman as an example, he led Tampa Bay on all 16 of their opening drives, on which the Bucs scored 7 touchdowns and 4 field goals--11 scores and 61 points. If the assumption is that a good team/quarterback should score a touchdown on every opening drive, Freeman and the Bucs scored 54.5% of the points possible. No drives ended on fumbles (which wouldn't have been Freeman's fault) and one ended on a missed field goal (also not Freeman's fault).
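For clarity, here's that arithmetic as a quick Python sketch (it assumes a touchdown plus extra point--7 points--is the maximum available on each drive):

```python
# Opening-drive scoring math for the 2012 Bucs, per the numbers above.
drives = 16
touchdowns, field_goals = 7, 4

points_scored = touchdowns * 7 + field_goals * 3    # 49 + 12 = 61
points_possible = drives * 7                        # 112 if every drive ends in a TD
print(f"Scores: {touchdowns + field_goals} of {drives} drives")
print(f"Points: {points_scored}/{points_possible} "
      f"({points_scored / points_possible:.1%})")   # 54.5%
```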
Take a look at those quarterbacks--some are very good and some are...Josh Freeman? Sam Bradford? Just as importantly, who ISN'T on that list--Aaron Rodgers, Colin Kaepernick and others. Viewing team success in this manner is precisely BACKWARDS--it suggests that success in this category LEADS to team success, instead of recognizing that good quarterbacks (Matt Ryan, Russell Wilson, Tom Brady, etc.) will have success on opening drives as well as other drives. This treats an effect as a cause and ignores a very basic premise of sports:
WHEN a team scores is typically less important than HOW OFTEN they do
I communicated this to the show and stated this was among the dumbest uses of statistics I'd ever seen, to which their producer asked for my top 10 dumb statistics, and this blog post was thus born. This particular "stat" (and I use that term loosely, since no one actually measures it) is first on a list of dumb sports stats presented in no particular order:
1. Opening drive success as a facet of quarterback evaluation
2. 1st inning runs in baseball
On the same show about three years ago, someone mentioned a little factoid about the Twins: they were leading baseball in run differential after the 1st inning. The Twins were still decent in 2010, good enough to get swept by the Yankees in the ALDS. At that time I hadn't yet subscribed to Baseball-Reference's (B-R) Play Index feature ($36 a year--you'll feel like Saul on the road to Damascus when you first use it), so it took me some time to amass that data. It's much easier now:
Sure enough, there are the Twins, and the Cardinals that year weren't bad either but missed the playoffs. The Marlins, however, finished the year 80-82, and they're going to look back fondly on 2010 as a HIGH POINT unless they make significant organizational changes.
This is similar to my first example: confusing the results of success with its cause. It happens when people attempt to inject some element of "momentum" or, even worse, "the will to win" into sports, ignoring that at the professional level ALL players have that will. No player goes into any game intending to give a subpar effort--it might happen, but except in the rarest of occasions it won't be intentional, and it certainly won't last long. Crediting a team for how it starts also implies, conversely, that the OTHER team doesn't have what it takes to compete.
None of this is true. What I really would have liked to hear was how teams that outscore their opponents by the most in the 5th inning fared. I'm sure most people would shrug their shoulders and say "That's just stupid"--and it is. It's just as stupid for any other inning, including the 1st.
3. Pitching wins in baseball
I beat this dead horse in two posts that can be read here and here. If you haven't read them, they're among the most-read posts on this blog; I'll summarize them very succinctly: NO OTHER TEAM SPORT breaks out wins for individual players, with the possible exception of hockey goalies--and how often do people discuss a goalie's win-loss record? WINNING is important in baseball, but to place the focus of the win unduly on the pitcher, who has little control over his team's offense or defense, is an unfair standard. Felix Hernandez won the 2010 Cy Young with a very pedestrian 13-12 record, suggesting that voters are improving their ability to evaluate pitchers, but it takes time. Wait and see how the NL Cy Young voting turns out this year--Clayton Kershaw should win it in a landslide, but I won't be surprised if Francisco Liriano does (read why I think that here).
The next two are statistics that aren't measured as well as they should be, or aren't measured at all--I consider their omission to be stupid.
4. Football counting stats
I can easily look up passing, rushing and receiving stats, but what I can't see is how many long passes a quarterback completed, how many long rushes a running back had or how many long receptions a receiver made. Not all pass completions are the same--a pass that goes for -1 yard isn't the same as a 65-yard completion. There are places to find this more in-depth information for a fee (I prefer Pro Football Focus)--this is PFF data for Peyton Manning in 2012:
There are other ways to obtain this data--for example, you can download play-by-play data from this site ($15--it used to be FREE!), which I did for a couple of years. The resulting databases I created in Excel were so unwieldy and fat they were borderline useless, but they did allow me to carve up runs, passes and receptions by length, and to compare and see whether numbers like Manning's were great, average or below average (take a wild guess).
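As an illustration of that carving-up, here's a minimal Python sketch that buckets plays by distance--the yardage values are made up, and a real version would read them from the downloaded play-by-play file:

```python
from collections import Counter

# Hypothetical per-play yardage gains; real data would come from the
# play-by-play download, one row per play.
pass_plays = [-1, 4, 7, 12, 23, 65, 8, 31, 2, 19]

def bucket(yards: int) -> str:
    """Group a play's yardage gain into coarse distance buckets."""
    if yards < 0:
        return "loss"
    if yards < 10:
        return "0-9"
    if yards < 20:
        return "10-19"
    return "20+"

print(Counter(bucket(y) for y in pass_plays))
# Counter({'0-9': 4, '20+': 3, '10-19': 2, 'loss': 1})
```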
Pro Football Reference (PFR) uses this same play-by-play data and allows for this kind of analysis once you learn how to use it--it's how I amassed the data for the first useless stat. For example, this table shows the most pass completions of 20 yards or more in 2012:
This is a start, but it doesn't tell me how many attempts of 20 yards or more were made. Leave aside the fact that the numbers from PFF and PFR don't match up--that's not the point. It's difficult to find plays by length for the NFL, and until those numbers become more prevalent (it is happening, with a huge push from fantasy football), football counting stats are at best incomplete markers of success.
5. Hockey fights
Hockey statistics in general are difficult to work with. The formatting at both Hockey Reference and NHL.com doesn't allow for my preferred data-grabbing techniques (for example, I can get a whole year's worth of baseball data at B-R with a couple clicks and some data manipulation techniques) and the data itself is displayed in such a manner as to be borderline useless. Having said that, I've developed a 2007-2013 hockey database that has:
1. Every goal (and when it occurred)
2. Every assist (and when it occurred)
3. Every penalty (and when it occurred)
As a result, I know when every hockey fight occurred, what the score was and what the final outcome of the game was--in other words, I can almost measure the impact a fight had on a game (I say almost--when I write this up closer to the opening of the NHL season I'll explain it). I can also measure the incidence of fighting in hockey:
The incidence of fights per game drops significantly in the postseason as compared to the regular season, but a stratification of penalties by type is almost impossible to find. Not all hockey penalties are the same, and it took parsing game box score data to create this table.
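To give a sense of what that database makes possible, here's a minimal sketch of a fight query. The record layout and field names here are my own invention for illustration, not the actual schema:

```python
# Hypothetical penalty records: type of penalty, when it occurred and the
# score differential (from the penalized team's perspective) at the time.
penalties = [
    {"game_id": 1, "type": "Fighting", "period": 1, "score_diff": 0},
    {"game_id": 1, "type": "Hooking",  "period": 2, "score_diff": -1},
    {"game_id": 2, "type": "Fighting", "period": 3, "score_diff": -3},
]

for p in penalties:
    if p["type"] == "Fighting":
        print(f"Game {p['game_id']}, period {p['period']}, "
              f"score differential at the time: {p['score_diff']:+d}")
```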
There are sites that compile hockey fight data, like this one, but the important facts are still left out--when the fight occurred and the effect it had (or didn't have) on the game. Hockey also makes it difficult to tell who really started a fight--for example, there were 696 fights in the 2012-13 regular season (down significantly from previous years) but only 53 instigator penalties. I don't know enough about the sport to say whether game misconduct penalties are another marker for who started a fight, but it would be nice if the NHL penalized only the person who started the fight, for two reasons:
1. Impact could definitely be measured--right now players on both teams are penalized, making it impossible to measure "momentum" or a change in game outcome.
2. Fighting would be cut down dramatically, since teams would no longer be playing 4-on-4--the instigator's team would be short-handed for 5 minutes.
It shouldn't be so difficult to measure fights and other penalties, and since the penalties are broken out in hockey box scores, why not tabulate them for easy reference?
Returning to baseball
6. Winning percent when leading going into the 9th inning
Some time ago I heard something along the lines of "That team is 95-2 when entering the 9th inning with a lead," implying the team had both a lockdown closer who knew how to finish games AND the killer instinct to finish off opponents. There's just one problem--ALL teams in baseball are successful when entering the 9th with a lead. This is 2013 data through Wednesday, August 21st:
Yes, you're reading that correctly--even Houston and Miami manage to win just about every game when they enter the 9th with the lead. Using Milwaukee as an example:
1. Even though they're 46-0 when entering the 9th with the lead, who for one moment considers them a good team?
2. How can this number be used as an indicator of performance when a bad team can be the best in baseball at it? A good statistic has either predictive value or evaluative power, and this particular number has neither--it's "predictive" only in the sense that EVERY team is almost guaranteed to win in this situation, and there's nothing a team can do to put itself in that position more often. It's not evaluative, because every team does well in this situation. It's an empty stat somebody stumbled upon while doing research for the umpteenth article on Mariano Rivera (who, by the way, is pretty good!) and didn't take the time to place in the proper context. (A sketch of how this record is tallied follows.)
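Here's that tally in sketch form, using made-up game lines of (lead entering the 9th, game won?):

```python
# Each tuple: (run differential entering the 9th, whether the game was won).
games = [(3, True), (-2, False), (1, True), (0, False), (5, True), (2, True)]

with_lead = [won for lead, won in games if lead > 0]
wins = sum(with_lead)
print(f"{wins}-{len(with_lead) - wins} when leading entering the 9th")
# 4-0 here -- and as the table shows, real teams convert at nearly that rate.
```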
Back to football for the next two:
7. Football penalties by individual player
This is another omission stat, one I wish I could see. Aggregate penalty yards by team are readily found, but even the PFR Play Index feature doesn't allow a search for penalties, and that's a shame. Who commits the most penalties by position? Which individual has the most penalties? How many DUMB penalties (those that hand an opponent a first down) occur? The list of questions is endless, and none of it is broken out anywhere. I have it using the aforementioned football database, and PFF has some information, but not all penalties are equal--an offsides on the second play of the game is totally different from a holding penalty with 15 seconds to go on the opponent's 3-yard line. These are the "leaders" in penalties by position (even PFF doesn't give penalty yards):
Why are penalties not as readily displayed as a player's completions, rushes, catches, sacks, tackles or interceptions? They're just as important, and mistakes are measured in other sports--errors in baseball and turnovers in basketball. The relative paucity of penalties could be the reason these numbers aren't readily available, but in this day and age they should be.
8. Red Zone Efficiency
Believe it or not, this DOES have value and can be measured using the PFR Play Index feature, though not, to my mind, accurately. In 2012 the Bears had 132 plays in the red zone, which should bring up several questions:
1. How does that rank in the NFL? Is that a lot of plays, not many, what? They were 21st.
2. How many TIMES were they in the red zone? In theory they could have scored 132 times, but the number of plays won't tell us that. Some text manipulation shows that the Bears were inside the opponent's 20-yard line 41 times (see the counting sketch after this list).
3. How often did they score in the different parts of the red zone? It's a 20-yard stretch of the field, and yards 0-5 are much different from 15-20.
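Here's the trip-counting logic mentioned in question 2, as a minimal Python sketch. The field layout is hypothetical, with the yard line measured as distance from the opponent's goal:

```python
# Hypothetical play-by-play rows: (drive_id, yard line at the snap).
plays = [
    (1, 45), (1, 18), (1, 9),   # drive 1 reaches the red zone
    (2, 60), (2, 35),           # drive 2 never does
    (3, 15), (3, 3),            # drive 3 reaches it too
]

# A "trip" is a distinct drive with at least one snap inside the 20.
red_zone_trips = {drive for drive, yardline in plays if yardline <= 20}
print(f"{len(red_zone_trips)} red zone trips")  # 2
```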
This can be investigated using play-by-play data, but this is a case of an incidence stat being used when we really need a counting stat. Using the data in the table below, who did better?
1. The Bears scored on 40 of their 41 trips to the red zone (24 TDs and 16 FGs)
2. The Falcons scored on "only" 93% of their red zone opportunities, less than the Bears' 98%
Some analysts will say "The Bears scored on 98% of their red zone incursions." They would be correct, but the Bears scored only 204 points on those trips, while New England scored on "only" 93% of theirs and put up 316 points in the process. It's the NUMBER of times a team is in the red zone that matters most--this data is for every NFL team in 2012:
Just to be clear, this data can be difficult to quantify, which makes some of the numbers above slightly confusing--for example, the data shows the Bears making 41 trips but coming away with 24 TDs, 16 FGs and 2 missed FGs, which adds up to 42. I have no clue how that can be, so accept this as a way to frame the issue rather than a PhD dissertation. The table on the left is offensive red zone efficiency and the table on the right shows how defenses performed.
The real story of red zone efficiency isn't the one heard in pregame shows because the red zone is like the 9th inning--once teams get there, for the most part they score. Two things matter:
1. How MANY times they get there--it's difficult to score if a team can't get close to the goal. There were 316 TDs on plays longer than 20 yards last year--out of a total of 1,164 offensive touchdowns.
2. How many TOUCHDOWNS were scored vs. field goals--that's the ratio that deserves attention.
And it's back to baseball for the final two, the first of which is related to the red zone discussion and is...
9. RBI
"You can't drive in a runner who isn't on base." Yes, that's true despite the fact a hitter can drive himself in with a home run, but the general sentiment is true and conveniently ignored when the RBI is discussed. There's plenty of data that clearly shows that RBI are a matter of opportunity as much as ability. I wrote about this some time back but will continue to shout it as loud as I can:
With few exceptions, ALL hitters have the same rate of success in driving in runners--the difference is in the number of opportunities they have to drive runners in.
It takes a couple clicks to find this table at B-R, which makes it abundantly clear that RBI leaders are those who come to the plate with the most base runners on. The columns to look at are under "BaseRunners" on the right-hand side: the first is the number of base runners on when the player batted, the second the number that scored, and the third the percent of base runners driven in--a figure that is shockingly consistent from player to player. Miguel Cabrera leads the majors in RBI and is 8th in base runners on when he comes to bat. Granted, his 23% of base runners driven in IS outstanding given the MLB average of 14% (other standouts include Allen Craig and Paul Goldschmidt). The RBI emphasizes the outcome without checking how often the player was in a position to drive in a run.
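The conversion-rate column is simple to reproduce. Here's a sketch with made-up totals shaped like the B-R table (23% is Cabrera territory, 14% roughly league average):

```python
# Hypothetical hitters: base runners on during their at bats, and how many
# of those runners they drove in.
hitters = {
    "Hitter A": {"runners_on": 480, "driven_in": 110},
    "Hitter B": {"runners_on": 320, "driven_in": 45},
}

for name, h in hitters.items():
    rate = h["driven_in"] / h["runners_on"]
    print(f"{name}: {rate:.0%} of base runners driven in")
# Hitter A: 23% (elite), Hitter B: 14% (about average)
```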
10. Manager Pythagorean Wins
I'll admit that when the producer asked for my 10 most useless stats, I struggled once I got to 9 and 10--but I still hear this one all the time. To review, Pythagorean Wins (PW) are a Bill James creation you can read about here, but at heart it's a very simple concept:
Teams that score more runs than their opponents win more games
Blew your mind there, didn't I? It's become a fairly standard baseball metric and is very useful. The problem stems from the fact that when there's a difference of 5 or 6 games between expected and actual wins, people state it must be due to the manager and that PW can therefore be used to evaluate manager effectiveness.
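For reference, the standard formula sets expected winning percentage to runs scored squared over the sum of runs scored squared and runs allowed squared (refinements use an exponent closer to 1.83). A quick sketch:

```python
def pythagorean_wins(runs_scored: float, runs_allowed: float,
                     games: int = 162, exponent: float = 2.0) -> float:
    """Bill James' Pythagorean expectation, converted to expected wins."""
    pct = runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)
    return pct * games

# Made-up season totals: 750 scored, 650 allowed.
print(round(pythagorean_wins(750, 650), 1))  # ~92.5 expected wins
```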
Yeah, I wrote about this too, but I've reached some different conclusions since. The first is something I've known for quite some time--there CAN be differences between expected and actual wins, on both the positive and negative side, but they balance out over time. Even someone like Connie Mack, who managed in the majors for over 50 years, ended up with a minuscule career difference in PW, because every year in which actual wins exceeded PW was counterbalanced by another in which they fell short. There MIGHT be an effect in a given year, but over a career, not so much.
Three factors can explain variances between actual and expected wins, two of which I came to very recently:
1. How well teams do in low-run games. The PW formula expects teams to lose games in which they score only 1 or 2 runs, so when teams win those games, variances appear (see the sketch after this list).
2. Walks have an effect. Cee Angi (@CeeAngi) wrote a post recently suggesting that walks can help explain differences between wins and PW, and they certainly can: they allow runners to reach base and advance, and there is a definite relationship between total bases and runs scored.
3. I do my Mistake Index on a daily basis, and since most of these mistakes involve giving away bases at essentially no cost, they can also have an impact on the gap between PW and actual wins.
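Factor 1 in sketch form--a team's record when scoring 2 runs or fewer, from made-up (runs scored, runs allowed) game lines:

```python
# Hypothetical game lines: (runs scored, runs allowed).
games = [(2, 1), (1, 4), (0, 3), (2, 0), (6, 2), (1, 2)]

low_run = [(rs, ra) for rs, ra in games if rs <= 2]
wins = sum(rs > ra for rs, ra in low_run)
print(f"{wins}-{len(low_run) - wins} in games scoring 2 runs or fewer")
# 2-3 here; winning more of these than PW expects inflates actual wins.
```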
All of these affect run scoring, which begs a very simple question--what control does a manager REALLY have over any of them? Does he impact the runs scored? Even more, can he impact runs allowed? Can he prevent walks, errors, base running mistakes, balks and all the other things that occur daily in baseball? The answer is no, which makes giving him undue weight for team success a bad idea. A manager's primary responsibility is preparing his team to succeed and making in-game decisions as they come up. It's not like football or basketball, where the coach controls who is on the field, the tempo and the play calling. Managers ARE important, but not so important that all the difference between expected and actual wins should be attributed to them.
I love numbers--they give context and meaning to events we might not otherwise understand and can add flavor and nuance when used correctly, but they can be extremely detrimental when used improperly. There are two main take-home points:
1. Very rarely does one stat tell the complete story of anything
2. It is very easy to confuse cause with effect
Go back to what started this whole post--the Bears are what they are not because they do (or don't) do well on opening drives; they will (or won't) do well on those drives depending on whether they're a good team. Treating it otherwise puts the cart before the horse and obscures the more important point: the Bears have been offensively mediocre for quite some time. This is why statistics work much better as explanation than prediction--they can tell us WHY something happened much better than what WILL happen.