12 Comments
Mar 8, 2023 · Liked by Maxim Lott

Excellent article. I kind of love that bettors can perform as well as a professional in the business of statistics. I'll probably be checking out both sites for some electoral news soon, so thank you for writing this!

Nov 28, 2022 · Liked by Maxim Lott

This is a very interesting article. A few questions:

(1) With respect to your first chart ("Bettors vs. Reality"), do you have the R-squared value? My statistics days are way behind me, but I think that's the goodness-of-fit measure -- please let me know if that's wrong. Just eyeballing the scatterplot, the R-squared value looks extremely high to me. Also (and this question may not make sense statistically), do you have the R-squared value just for the 20%-80% range where the predictions are most accurate?

(2) Intuitively, the Brier score makes sense to me (I didn't work through the math), but does it vary much from just comparing the R-squared scores to one another? For your second panel ("538 vs. Reality" and "Bettors vs. Reality"), do you have the R-squared scores for the two regressions? And the R-squared scores in the 20%-80% range for each?

(3) I'm usually particularly interested in the very close races. Do you happen to have a comparison of the bettors and 538 in those races, say in the 45% to 55% range or some similar mid-range?

Thanks for doing all this work. I find it fascinating.
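
A minimal sketch of how both measures could be computed from the underlying races, assuming a table with one predicted win probability and one 0/1 outcome per race (the numbers and column names below are hypothetical). One note relevant to question (2): an R-squared taken directly against the 0/1 outcomes is just a rescaling of the Brier score (R-squared = 1 - Brier / variance of the outcomes), so on the same set of races the two measures rank forecasters identically; the R-squared of the binned regression shown in the charts is a different quantity.

```python
import numpy as np

# Hypothetical data: one row per race.
# prob = predicted win probability, outcome = 1 if that side won, else 0.
prob = np.array([0.10, 0.35, 0.55, 0.72, 0.90, 0.48])
outcome = np.array([0, 0, 1, 1, 1, 0])

def brier_score(p, y):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return np.mean((p - y) ** 2)

def r_squared(p, y):
    """Coefficient of determination of the probabilities against the 0/1 outcomes
    (higher is better). Equals 1 - Brier / variance(outcomes), so it carries the
    same ranking information as the Brier score on a fixed set of races."""
    return 1 - np.mean((p - y) ** 2) / np.var(y)

print("Brier:", brier_score(prob, outcome))
print("R^2:  ", r_squared(prob, outcome))

# Questions (1) and (3): restrict to a competitive band before scoring.
band = (prob >= 0.20) & (prob <= 0.80)
print("Brier, 20-80% band:", brier_score(prob[band], outcome[band]))
print("R^2,   20-80% band:", r_squared(prob[band], outcome[band]))
```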

This is a great piece! I'm especially intrigued by the difference in 538's Brier scores depending on whether a Republican or a Democrat won. It seems like whatever secret sauce is giving them much more accurate predictions when Democrats win should be copied over?

I would guess that people making bets in prediction markets consider 538's forecast when deciding (whereas 538's forecast, I assume, doesn't take prediction markets into account), so the prediction markets' success could in theory be derived from 538's success. Something that might be worth looking at is how often prediction markets agree with 538. I'd also be curious what the Brier scores look like for only the cases where 538 and the betting markets disagree, which might tell us more about whether prediction markets are accurate independently of 538.
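
One way to run that comparison, assuming a table that has both forecasts and the outcome for each race (the rows, the column names, and the 10-point definition of "disagree" below are all hypothetical choices):

```python
import pandas as pd

# Hypothetical table: one row per race, both forecasts plus the 0/1 outcome.
df = pd.DataFrame({
    "p_market": [0.62, 0.30, 0.85, 0.51, 0.10],
    "p_538":    [0.55, 0.45, 0.88, 0.70, 0.08],
    "won":      [1,    0,    1,    0,    0],
})

brier = lambda p, y: ((p - y) ** 2).mean()

# "Disagree" is defined here, arbitrarily, as forecasts more than 10 points apart.
sub = df[(df["p_market"] - df["p_538"]).abs() > 0.10]

print("races where they disagree:", len(sub))
print("market Brier on those races:", brier(sub["p_market"], sub["won"]))
print("538 Brier on those races:   ", brier(sub["p_538"], sub["won"]))
```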

The trading price of a prediction market fluctuates in the days and weeks before an election. How did you condense that price history into a single number? Did you just use the trading price 24 hours before the polls closed?

If so, that prediction-market snapshot has an advantage over the forecasters, who are averaging in time-lagged poll data. And your analysis would not imply that a prediction market's price is reasonable in the weeks before an election, just that its last-day price (which you used in this analysis) is reasonable.
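
For concreteness, the snapshot rule described in this comment, taking the last traded price as of a fixed cutoff before polls close, could look something like this (hypothetical price history; the article's actual cutoff rule may differ):

```python
import pandas as pd

# Hypothetical price history for one contract (prices are implied probabilities).
prices = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2022-11-06 09:00", "2022-11-07 18:30",
        "2022-11-08 06:00", "2022-11-08 19:55",
    ]),
    "price": [0.58, 0.61, 0.64, 0.71],
})

# Take the last traded price as of 24 hours before polls close.
polls_close = pd.Timestamp("2022-11-08 20:00")
cutoff = polls_close - pd.Timedelta(hours=24)
snapshot = prices.loc[prices["timestamp"] <= cutoff, "price"].iloc[-1]
print(snapshot)  # 0.61 -- the price as of the cutoff, not the final pre-close tick
```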

These results didn't gel with my memory of the 2020 election, so I had a quick look at your data to tease out what was going on. It seems the vast majority of the market's outperformance in 2020 came during the primaries (0.018 vs. 0.077) rather than the general (0.052 vs. 0.051). If we ignore primaries, 538 outperformed the markets in every election.
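
A split like that can be reproduced by scoring each race type separately, assuming the data carries a primary/general label (the rows and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical rows tagged by race type ("primary" or "general").
df = pd.DataFrame({
    "race_type": ["primary", "primary", "general", "general"],
    "p_market":  [0.90, 0.15, 0.60, 0.45],
    "p_538":     [0.70, 0.35, 0.65, 0.40],
    "won":       [1,    0,    1,    0],
})

brier = lambda p, y: ((p - y) ** 2).mean()

# Score each forecaster separately within each race type.
for race_type, g in df.groupby("race_type"):
    print(race_type,
          "| market:", round(brier(g["p_market"], g["won"]), 3),
          "| 538:",    round(brier(g["p_538"], g["won"]), 3))
```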

Konrad -

Thank you, but there is also value in familiarity and simplicity. When I was trying cases, I loved to use R-squared values because they made it simple for the judge to understand how little of the variance the other side's model was able to explain.

If there's a way to use Brier for the same thing, that would work too.
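
One standard bridge is the Brier skill score: 1 minus the ratio of the forecast's Brier score to that of a naive base-rate forecast. Computed against 0/1 outcomes, it reads much like R-squared, as the share of outcome variance the forecast explains beyond the baseline. A minimal sketch with made-up numbers:

```python
import numpy as np

def brier_skill_score(p, y):
    """1 minus (forecast Brier / base-rate Brier). Reads like R-squared:
    the share of outcome variance explained beyond a naive baseline."""
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    bs_forecast = np.mean((p - y) ** 2)
    bs_baseline = np.mean((y.mean() - y) ** 2)  # always forecast the base rate
    return 1 - bs_forecast / bs_baseline

# Hypothetical example: prints roughly 0.7, i.e. the forecast "explains"
# about 70% of what the base-rate forecast leaves unexplained.
print(brier_skill_score([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0]))
```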