deleted by creator
On the other hand, when someone says "60-40" they get fucking dragged when the 40 happens, like it's not an extremely likely outcome.
20% is not even a very unlikely outcome. It's about the same as the odds of flipping a coin twice and getting the same side both times.
that's actually still 50/50. you'd have to specify getting "heads" both times for it to be 25%
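The coin-flip arithmetic in the two comments above can be checked by enumerating all four equally likely two-flip sequences (a quick illustrative sketch, not anyone's actual model):

```python
from itertools import product

# Enumerate all 2^2 equally likely two-flip sequences: HH, HT, TH, TT.
seqs = list(product(["H", "T"], repeat=2))

# "Same side both times" covers HH and TT.
same_side = sum(a == b for a, b in seqs) / len(seqs)

# "Heads both times" covers only HH.
both_heads = sum(s == ("H", "H") for s in seqs) / len(seqs)

print(same_side)   # 0.5  -> same side both times really is 50/50
print(both_heads)  # 0.25 -> specifying heads gives 25%
```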
There's also only one election; you can't run it multiple times to collect data and see how closely your prediction matched up across 100 elections.
On top of that, there have only been about 60 presidential elections, right? And there haven't been many that are similar to the system of today. So you have, maybe, 8 or 10 actual historical elections to base your predictions on. And how many of those have the fine-grained data available that we have now to determine likely voters?
All they're doing is guessing and putting a patina of statistical aesthetics on it - that's why Diggler could do as well as or better than Nate Silver.
Sounds like some application of approximate Bayesian inference could be relevant
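For what that might look like, here's a minimal sketch of approximate Bayesian computation (ABC) by rejection sampling, estimating a win probability from poll results. All the numbers (10 polls, 7 leads) are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical data: candidate led in 7 of 10 polls.
observed_wins = 7
n_polls = 10

accepted = []
for _ in range(100_000):
    # Prior: win probability uniform on [0, 1].
    p = random.random()
    # Simulate 10 polls under this candidate p.
    simulated = sum(random.random() < p for _ in range(n_polls))
    # Rejection step: keep p only if the simulation reproduces the data.
    if simulated == observed_wins:
        accepted.append(p)

# The accepted samples approximate the posterior; analytically it is
# Beta(8, 4), whose mean is 8/12 ≈ 0.667.
print(sum(accepted) / len(accepted))
```

With an exact-match acceptance rule and a conjugate prior the answer is known in closed form, which makes this a handy sanity check before trying ABC on anything harder.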
I agree with your sentiment here, but one way that you could check is by looking at the entire history of probabilities given by someone like Nate Silver, e.g.:
[content warning: math]
P(Obama wins 2008) * P(Obama wins 2012) * P(Trump wins 2016)
and comparing them to simple baseline predictors, e.g.
1/2 * 1/2 * 1/2.
I couldn't find what probability 538 assigned in 2008, but looking at 2012 and 2016, Nate Silver gave the actual outcomes 80% and 28% probability respectively, a joint probability of 0.8 × 0.28 = 0.224... which is a bit worse than the uniformly random baseline's 0.25 😆
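That comparison can be sketched in a few lines. Since the 2008 figure is missing, this only uses the 2012 and 2016 numbers quoted above:

```python
# Probabilities 538 reportedly assigned to the actual outcomes.
model_probs = [0.80, 0.28]   # Obama 2012, Trump 2016
baseline_probs = [0.5, 0.5]  # uniformly random coin-flip baseline

def joint(probs):
    """Joint probability of all outcomes, assuming independence."""
    out = 1.0
    for p in probs:
        out *= p
    return out

print(joint(model_probs))     # 0.224
print(joint(baseline_probs))  # 0.25 -> the coin flip edges out the model here
```

Two elections is obviously far too small a sample to conclude anything; the point is just that the comparison is computable, i.e. the forecasts are falsifiable in aggregate.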
Doing this kind of modelling on a presidential election is just braindead. He starts with all these priors that don't mean anything anymore, things like the incumbency advantage, and then bakes in dumb lib analysis of the electorate, and the result is pretty much the same thing that every other pundit tells you. https://hexbear.net/post/32883/comment/255304
oh yeah agreed 100%, my point is just that it's not non-falsifiable :)
The way stochastic prediction works, if they are repeatedly wrong it's a clear sign they should not be trusted.
There are ways to measure how accurate a system of probability prediction is across multiple samples. If the dem-rep predictions are 90-10, 80-20, 70-30, and 60-40, then that's a total of 300-100 across 4 elections, so there should've been three dem wins and one rep win during that period. If it was 4-0 or 2-2, those would both be equally wrong, despite the prediction always showing dems favored.
538's methodology is constantly being "refined," so the prediction system you'd be measuring would really be Nate Silver's ass. But you could still measure it.
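The check described in the comment above (total wins implied by the stated probabilities versus what actually happened) can be sketched like this, with the four forecasts from the example and made-up outcomes:

```python
# Dem win probabilities from the four hypothetical forecasts above.
dem_probs = [0.9, 0.8, 0.7, 0.6]

# 1 = dem won, 0 = rep won; invented results for illustration.
outcomes = [1, 1, 1, 0]

# Summing the probabilities gives the expected number of dem wins:
# 0.9 + 0.8 + 0.7 + 0.6 = 3.0, i.e. "300-100 across 4 elections".
expected_dem_wins = sum(dem_probs)
actual_dem_wins = sum(outcomes)

# A crudely calibrated forecaster has these close together;
# 4-0 or 2-2 would each miss the expectation by a full win.
print(expected_dem_wins, actual_dem_wins)
```

This is only the coarsest calibration check; with more elections you could also bucket forecasts by stated probability (all the "70%" calls together, etc.) and compare each bucket's hit rate, which is the standard way forecasters are graded.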