enjeyw | 1 month ago
In general prediction markets can’t be “correct” or “incorrect” - for instance if a prediction market says there’s a 60% chance of an event occurring, and it doesn’t occur, was the market right or wrong? Well it’s hard to say - certainly the market said the event was more likely to occur than not, but only just, and who knows? Maybe the event _only just_ occurred, and very nearly didn’t!
So generally we say a prediction market is “correct” if it is “well calibrated”, which is to say that if we took all the events that the market said had a 60% chance of occurring, then approximately 60% of these events occurred (with the same holding true for all other percentages).
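A minimal sketch of that calibration check, assuming binned probabilities and binary outcomes (the function name and binning scheme here are illustrative, not from any real market's API):

```python
from collections import defaultdict

def calibration_table(predictions, outcomes, bins=10):
    """Group forecasts into probability bins and compare each bin's
    average forecast with the observed frequency of the event."""
    grouped = defaultdict(list)
    for p, y in zip(predictions, outcomes):
        # Bin index: e.g. a 0.63 forecast lands in bin 6 when bins=10.
        grouped[min(int(p * bins), bins - 1)].append((p, y))
    table = {}
    for b, pairs in sorted(grouped.items()):
        avg_forecast = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(y for _, y in pairs) / len(pairs)
        table[b] = (avg_forecast, observed, len(pairs))
    return table

# A well-calibrated market: ~60% of the "60%" events actually occur.
preds = [0.6] * 10
outs  = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(calibration_table(preds, outs))  # bin 6: forecast ≈ 0.6, observed ≈ 0.6
```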
On this note, an interesting phenomenon that used to occur was “favorite-longshot bias”, where markets would consistently overestimate the likelihood of longshot events occurring - so events that the market predicted would occur 10% of the time would only occur 5% of the time. What’s fascinating is that once people realized that this bias existed, they began to exploit it by betting against longshots, which had the effect of moving the market and removing the bias, making the markets well calibrated. It’s a pretty neat example of the efficient market hypothesis in action!
BrenBarn|1 month ago
pseudo0|1 month ago
thaumasiotes|1 month ago
For most events like this, you'd want to see the market spike to 0% or 100% as the deadline approached. And in particular for an event that happens, you want to see the spike to 100% before it happens. Remaining at 60% until after the fact is wrong because the occurrence of the event becomes more certain as it gets closer.
Being "well-calibrated" as you describe is a very bad quality metric in the sense that two sets of predictions can achieve the same calibration profile while differing markedly in quality. The farther the predictions are from 50%, the better they are, but your calibration metric doesn't take this into account.
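That point can be illustrated with the Brier score, a proper scoring rule that penalizes both miscalibration and lack of sharpness. The two hypothetical forecasters below (made-up numbers, purely a sketch) are equally well calibrated, but the one whose forecasts sit farther from 50% scores much better:

```python
def brier(preds, outs):
    """Mean squared error between forecast probability and outcome (0/1).
    Lower is better; rewards both calibration and sharpness."""
    return sum((p - y) ** 2 for p, y in zip(preds, outs)) / len(preds)

# Forecaster A: hedges at 50% on 20 events, 10 of which occur.
a_preds = [0.5] * 20
a_outs  = [1] * 10 + [0] * 10           # perfectly calibrated at 50%

# Forecaster B: says 90% on ten events (9 occur) and 10% on ten (1 occurs).
b_preds = [0.9] * 10 + [0.1] * 10
b_outs  = [1] * 9 + [0] + [1] + [0] * 9  # also perfectly calibrated

print(brier(a_preds, a_outs))  # 0.25
print(brier(b_preds, b_outs))  # ≈ 0.09 -- sharper forecasts, better score
```

Both forecasters hit their stated frequencies exactly, so a calibration plot can't tell them apart; the Brier score can.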
jjmarr|1 month ago
It seems unlikely since Nobels aren't awarded posthumously.
pseudo0|1 month ago
Anything under 3%/year of time until decision is going to have pretty limited predictive value within that range. Anything starting above that range will end up hitting that floor rather than going to zero because of the difficulty of finding a counterparty.
rich_sasha|1 month ago
I'd weigh the accuracy by how much money is at stake...
Even then, a "perfect" prediction market need not be accurate, if people use it for hedging. If some low probability event is really bad for me, I may pay over odds (pushing the implied probability up) to get paid if it happens. The equilibrium probability may be efficient and reasonable, yet still biased.
avadodin|1 month ago
I'm not sure the same(any) rules apply.
vintermann|1 month ago
Who's to say a dead person can't have done the most to "promote peace conferences" as mentioned in Nobel's will? These days, I'd say dead people make a larger net contribution to peace than most politicians.
hobofan|1 month ago
pinkmuffinere|1 month ago
kurtis_reed|1 month ago
enjeyw|1 month ago
If you want to be pedantic about it and select one metric, markets are evaluated on their Brier score or some other proper scoring rule, not accuracy.
However, I prefer calibration as a high level way to explain prediction market performance to people, as it’s more intuitive.
stingraycharles|1 month ago
Edit: just found the answer myself: “accuracy measures the percentage of correct predictions out of total predictions, while calibration assesses whether a prediction market's assigned probabilities align with the actual observed frequency of those outcomes”
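A toy example of that distinction, reusing the 60% case from upthread (numbers are illustrative):

```python
def accuracy(preds, outs, threshold=0.5):
    """Fraction of forecasts landing on the right side of 50%:
    a forecast above the threshold 'predicts' the event occurs."""
    correct = sum((p > threshold) == bool(y) for p, y in zip(preds, outs))
    return correct / len(preds)

# Ten events the market priced at 60%; six of them occurred.
preds = [0.6] * 10
outs  = [1] * 6 + [0] * 4

# Accuracy: all ten forecasts classify as "will occur", but only six did.
print(accuracy(preds, outs))  # 0.6

# Calibration: the 60% bucket's observed frequency is 6/10 = 0.6,
# matching the forecast exactly -- perfectly calibrated despite
# four "wrong" binary calls.
```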