So after seeing betting odds sites go on about how things happen about as often as they say they do, I figured I'd go back and measure my own track record here: do things happen about as often as I predict they do? I'll be using only my official final forecasts from 2016, 2020, and 2022. Not my revised 2020 forecast that corrects for the errors I made that cycle (I'm owning up to my mistakes), and not my hypothetical 2004, 2008, or 2012 predictions (I did predict 2008 and 2012 at the time, but I didn't use as sophisticated a system as I do now, so even if I reported those results it would be apples and oranges).
With that said, here are the results of this analysis.
I predicted a grand total of 80 races across my Senate, gubernatorial, and presidential forecasts.
Of those, I predicted 62 races correctly. This translates to an overall correct prediction rate of 78%.
Of course, my predictions ranged in certainty from around 50% up to 98-99%. For the sake of gauging the accuracy of specific predictions, I'm going to split them up the way I shade my forecasts, into three groups: "tilts" (<1% margin, or 50-60% probability), "leans" (1.1-4% margin, or 61-84% probability), and "likelies" (4.1-8% margin, or 85-98% probability; I also included a couple of "safe" predictions here, those with above an 8% margin or 98% probability).
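To make the grouping concrete, here's a minimal sketch of how a race could be sorted into those bins by its predicted margin. This is just an illustration using the thresholds above, not my actual forecast code; the function name is made up.

```python
def shade(margin_pct: float) -> str:
    """Sort a race into a shading bucket by absolute predicted margin."""
    m = abs(margin_pct)
    if m < 1.0:
        return "tilt"    # roughly 50-60% win probability
    elif m <= 4.0:
        return "lean"    # roughly 61-84%
    elif m <= 8.0:
        return "likely"  # roughly 85-98%
    return "safe"        # above 98%

print(shade(0.4))  # tilt
print(shade(6.5))  # likely
```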
Of the tilts, I predicted 17 races, and got 12 right. This gives me a 71% correct prediction rate when I'd only expect around 55% or so.
Why did I get the tilts right more often than average? Sheer luck, honestly, although I do think the "wave" theory of elections has something to do with it. If you can correctly guess which side the "energy" is on, you can potentially get more predictions right than probability alone would indicate. For example, in 2016 there were 5 tilt races. I guessed 4 of them would go Trump and 1 Clinton; 3 went to Trump and 2 went to Clinton. The 2020 Senate races might have also distorted the results somewhat, since my "poll corrections" dragged the odds of some rather safe states down into the tilt category. Those states then went the direction they were supposed to, by much higher margins, making my predictions look more correct. So if anything, some states should never have been included there at all, and to that extent I overpredicted. Correcting for that, I would have only gotten 9 right, which gives me 60%, closer to what I would expect.
This is also what happens when you have a smaller sample size: a couple of outliers can throw off your entire average. But yeah, I predicted at least as well as I should have, and any apparent overperformance mostly comes from incorrectly labelling some races as "tilt" that never should have been.
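If you want to sanity-check the luck claim, here's a quick binomial calculation. It's just a throwaway sketch that assumes every tilt was a flat 55% call, which is cruder than the actual race-by-race probabilities.

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 17 tilt races at an assumed flat 55% win probability
print(f"{prob_at_least(12, 17, 0.55):.1%}")  # roughly 15%
```

Getting 12 or more right out of 17 is roughly a one-in-seven outcome, so it genuinely doesn't demand any explanation beyond luck.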
Of the lean races, I predicted 33 and only got 22 right. That's a success rate of 66%, when I would have expected something closer to 72%, given that the probability range of these predictions runs from 61% on the low end to 84% on the high end.
Why was I off more than I expected? Much of it is simply underperformance from how 2016 polling seemed to systematically underestimate Trump and the Republicans. I got Pennsylvania and Michigan wrong at the presidential level, and at the Senate level I only got 4 out of 7 correct. Wisconsin and Pennsylvania were two of the misses, and I also got New Hampshire wrong, which I expected to go Republican; that one was genuine polling error.
So to some extent I was just unlucky. The other place I performed below my expectations was the 2020 Senate forecast, where I got 1 of 3 races correct. I got Georgia's special election and North Carolina wrong. North Carolina I was just wrong on in general, and it shows how culling polling averages made me more off than I otherwise would have been. Georgia I chalk up to not knowing how to model that race properly.
Still, all things considered, I only missed a couple more races than I should have. At a 72% success rate I should have gotten around 24 of the 33 correct, and I got 22, so I was only off a little more than expected, and still fell within the expected probability range, just on the very low end of it. It might be interesting to throw in the 2004, 2008, and 2012 data just to get a larger sample size here.
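Running the same kind of back-of-the-envelope binomial check as above on the leans (again assuming a flat 72% call for every race, which is an oversimplification):

```python
from math import comb

def prob_at_most(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

# 33 lean races at an assumed flat 72% win probability
print(f"{prob_at_most(22, 33, 0.72):.1%}")  # roughly 30%
```

Getting 22 or fewer right is nearly a one-in-three outcome, which backs up the reading that this was mostly bad luck plus some self-inflicted 2020 damage.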
As far as likely races go, I predicted 30 and got 28 right. That's a success rate of 93%, against an expected rate of 91%, so this was right on target. For anyone curious which races I got wrong: Wisconsin in 2016, which I had at Clinton +6.5% with a 5% chance of flipping, and it flipped; and the Maine Senate race in 2020, which I expected to go Democratic by a 5-point margin with only an 11% chance of flipping, and it flipped, probably once again because my method of culling the polls was unreliable. Yeah, I screwed up a lot in 2020. I own up to that. That was on me, not on the polling averages themselves.
Still, I ended up being right where I expected to be.
As such, for close races, I actually predicted things more correctly than I should have. For distant ones, I was on par with probability. With those in between, that's where I made the most errors, in part due to bad luck, and in part due to my own incompetence in 2020. Yes, we get it, weighting averages is bad. Just let the data speak for itself next time.
So that's my actual track record, with actual predictions that I actually made on this blog at the time, with no hindsight. Am I pleased with myself? Well, I would hope to get "leans" a bit better in the future, but otherwise I don't think my predictions were far outside of expected parameters. We have a relatively small sample size, where even being off from the expected norm by 2-3 races can make me look better or worse than expected, and that actually does account for the entire divergence from expectations: I got 2 more "tilt" races right than I expected, and 2-3 more "lean" races wrong. Much of this is due to my own screw-ups in 2020. Again, don't weight the polling data and I'll be fine.
Adding in 2004, 2008, and 2012 data
So, adding in the recently made retroactive 2004, 2008, and 2012 predictions, which use the same methodology as my current one, I get a larger sample size.
I get 26 tilts, 18 of which I got right. This leads to a 69% success rate, higher than the 55% or so that I'd expect.
I get 53 leans, of which I got 39 right. This leads to a 74% success rate, which is slightly above the 72% that I'd expect. It really was mostly bad luck after all.
I get 50 likelies, of which I got 48 right. This is a 96% success rate, higher than the 91% that I'd expect.
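Putting those combined-sample numbers side by side (a throwaway sketch; the counts and expected rates are just the ones quoted above):

```python
# (bucket, races, correct, expected success rate) from the combined sample
buckets = [
    ("tilt",   26, 18, 0.55),
    ("lean",   53, 39, 0.72),
    ("likely", 50, 48, 0.91),
]

print(f"{'bucket':<8}{'observed':>10}{'expected':>10}")
for name, n, right, p in buckets:
    print(f"{name:<8}{right / n:>10.0%}{p:>10.0%}")
```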
What does this mean in practice? Well, it means my tilt predictions are far more accurate than I would expect; they perform roughly in line with what I'd expect from lean predictions.
Lean predictions are generally dead on with the expected probability.
With likely predictions, I once again tend to overestimate the odds of an upset. This could mean that my margin of error is too generous, and perhaps a 3-point MOE would be more in line with the actual data. Still, I'm kind of reluctant to change things this election cycle.
Estimating my 2024 success rate
Currently in 2024, I have 3 tilt predictions, 4 leans, and 11 likelies. Of those, I would expect to get 1 tilt wrong, 1 lean wrong, and probably get all of the likely ones correct. If anything, this could be good news for Biden, as the ones I'm most likely to get wrong, and that I'm least confident in, are my rust belt predictions. However, I'd have to be wrong on two tilts and one lean for Biden to win there. I could just as easily be wrong in the other direction, with NE2 going Trump, as well as Minnesota, Virginia, or Maine. But such is probability. We don't know where the polling is off, and we won't know until election day. It gives us an idea, and based on my track record a reasonably good idea, but it's not 100% accurate. Still, it doesn't pretend to be; those probabilities exist for a reason, and they're not all expected to be correct. But the fact that I get things as or more correct than expected seems to indicate that I'm either doing something right, or should tighten my margin of error somewhat.
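For what it's worth, the expected number of misses follows directly from the bucket rates. Here's a quick sketch using the same expected success rates as above (55%, 72%, and 91%), treating every race in a bucket as an identical call, which it isn't:

```python
# 2024 calls per bucket, with the expected success rates quoted earlier
calls_2024 = [("tilt", 3, 0.55), ("lean", 4, 0.72), ("likely", 11, 0.91)]

expected_misses = sum(n * (1 - p) for _, n, p in calls_2024)
print(f"expected misses: {expected_misses:.1f}")  # about 3.5
```

That lands a bit above my gut count of two, mostly because eleven calls at 91% still carry about one expected miss between them.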
Either way, I hope the people who insist the polls are "wrong" realize that they are themselves wrong. Polls have a pretty reasonable track record, and things happen roughly as often as expected. I'm mostly happy with the current state of my predictions.