So... I decided to ask ChatGPT a few questions in light of Section 2 of the VRA getting struck down, like how to win the House. The results were kinda obvious: a strong showing of Democratic turnout wins the election. I asked them what they think our odds of winning the House are, and they seemed pretty skeptical. If anything, they seemed more inclined to believe the Senate would flip first, which was insane to me. Because, as we know, the House BARELY went Republican in 2024: 220-215. And I expect an 8-point shift in the polling or so based on what I see so far. Meanwhile, the Senate requires flipping rather hostile territory. We're talking states where the environments were R+13 in 2024, where even with an 8-point shift we're theoretically still talking R+5. We're starting to see polls showing some of those states CAN crack and shift in favor of Dems, but I'm still skeptical, and despite my own model having the Dems ahead, I really suspect the data might just be wrong. It's just too big of a shift, and the data is weak and possibly contradicts other priors of mine.
But the House? That's a shoo-in.
So... I asked ChatGPT why they think the House is harder, and gave my case for a 235-200 outcome based on current data. They said that not all districts shift uniformly: an 8-point national shift might mean only 3-5 points in actual swing districts but 12 points in safe districts. They have a point, and admittedly, my model doesn't take that into consideration.
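The non-uniform swing idea can be sketched with per-district "elasticity" weights. All the numbers here are illustrative assumptions, not values from my model or from ChatGPT:

```python
# Illustrative only: apply a national shift non-uniformly by district elasticity.
# An elasticity of 0.5 means a district moves half as far as the national environment.
national_shift = 8.0  # hypothetical D-ward national shift, in points

districts = {
    "swing district": 0.5,  # assumed: competitive seats move less than the nation
    "safe district": 1.5,   # assumed: safe seats can move more
}

for name, elasticity in districts.items():
    local_shift = national_shift * elasticity
    print(f"{name}: {local_shift:+.1f} points")  # e.g. "swing district: +4.0 points"
```

A uniform-swing model is the special case where every elasticity is 1.0, which is basically what my current model assumes.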
However, after listening to what they had to say, and their suggestions of grading different districts differently and focusing on the swingiest ones, I went back to the Cook PVI data that I drew from. And... here's the thing.
They have 192 safe D districts, 11 likely D, and 14 lean D, plus 16 tossups. Assuming a Dem-heavy environment, I doubt anything lean D will flip. So that's 217 districts out of the gate, one short of a majority. Then the tossups: all the data in my models suggests they'd flip. Even if I left out the 2 lean Rs, which is how I get to 235 seats, I'd still get 233. Of course, ChatGPT thinks not all of the tossups will go D, maybe only 70-90%. Okay, so let's assume 80%: that's about 13 of the 16, which gives us 230. I pointed that out, and they said to keep in mind the lean Rs still might flip, so now I'm back up to 231-232.
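A quick sanity check of the seat arithmetic above (note that 80% of the 16 tossups rounds to about 13):

```python
# Seat-count arithmetic from the Cook ratings cited above.
safe_d, likely_d, lean_d = 192, 11, 14
tossups = 16
lean_r_pickups = 2  # the two lean-R seats that might still flip

baseline = safe_d + likely_d + lean_d           # 217, one short of the 218 majority
all_tossups = baseline + tossups                # 233 if every tossup goes D
eighty_pct = baseline + round(0.8 * tossups)    # ~230 if only 80% of tossups flip

print(baseline, all_tossups, eighty_pct, eighty_pct + lean_r_pickups)
```

Either way, every scenario here lands comfortably above 218, which is the real point.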
At which point, I have to ask... is it worth adding all of this complexity to my model? That's the problem I noticed with complex models in 2020 and 2024: they account for more variables, but they're not a ton more accurate than my simplistic ones. When you add like 5 extra variables, where these WILL go blue, but these won't, but these other ones will, and you end up with a result that's 2-4 seats off of my own projection, does that matter much?
It's a lot like the AK-47 vs. M16 philosophies of warfare. The M16 is a complicated machine. It's more precise and accurate, but it has more moving parts and can break down more often. When it was introduced in Vietnam, soldiers didn't even have cleaning kits, and it broke down regularly despite how good of a firearm it is.
Meanwhile, the AK-47 was a bit less accurate, but it was "good enough". It was built with the philosophy of keeping things simple and not breaking down. You can argue that an "M16" type model will get a slightly more accurate result, but does it matter when my own model is like an AK-47, rugged and "good enough"? I kind of like the simplicity of my current models. Adding more complexity just means more moving parts and more things that can go wrong. And for what, a slight difference from my original projection?

As I said in 2024: a model that is, for example, perpetually at 50-50 four months out from an election because it's accounting for so many things that haven't happened yet is the equivalent of not having a model at all, because you're not really predicting anything. You're just throwing your hands up and saying, "I don't know, anything can happen." Sure, anything CAN happen, but polling data the way I used it got better results than all of Allan Lichtman's keys and all of 538's fancy models that left us perpetually in the dark at 50%. So... idk.
I'm probably not gonna stick with my current iteration of my House model all the way to election day. The landscape is changing too much, and hopefully by October or so there will be actual polling giving me a better picture of what's going on. But I'm probably not gonna overcomplicate things either.
Tbqh, given how ChatGPT started out with Republicans winning the House only to end at a prediction just slightly less favorable than my original one, it can't be trusted anyway. It's kinda like that meme I'm seeing lately: "yeah, this is a good idea," and then later, "oh yeah, sorry, I was wrong, would you like to see the data on why?" It started out at "Republicans will probably win," then shifted to "Democrats will win but the results will be narrow," then to "Democrats will win, but I'm predicting like 4 fewer seats than you are." That's a huge goalpost move. Goes to show how accurate AI is at this sorta thing.
Regardless, I do admit they have a point about the relative inelasticity of some districts, and my model does have a weak spot there. I do admittedly oversimplify things. But when an oversimplified model still gets us 90% of the way there, that kinda tells me it's "good enough", ya know? Kinda like an AK-47 is "good enough" compared to an M16.