top | item 7515328


edgarallenbro | 12 years ago

>was given access to a website that listed dozens of carefully worded questions on events of interest to the intelligence community, along with a place for her to enter her numerical estimate of their likelihood.

If you need to know more, try reading the article? It says that they're not yes/no questions.


waterlesscloud|12 years ago

I'm participating in this project, as are at least a couple other HNers (surprising to no one, huh?).

I won't go into details about it because it's not my place to do so at this point (maybe after).

But I will confirm that for every response you give, you're required to enter a percentage estimate of likelihood. For example, you'd enter 90% or 72.212% or whatever on whichever question you're responding to. So there's a potential mechanism for further ranking of participants beyond the binary. The voting mechanism itself is more complicated, but again, I'll leave discussion for when it's over.

notahacker|12 years ago

The interesting thing about this is that it raises the [theoretical, in the absence of information about the weighting/ranking system] possibility that professional intelligence analysts' relative underperformance against the measure is less due to misidentifying high/low-probability events more often than amateur "superforecasters", and more due to a systematic error of overconfidence in the evidence they have when weighting their estimates - e.g. the amateurs may be more likely to pick 50% on events where there genuinely isn't enough information to forecast, and less likely to assign single-digit probabilities to events that no available evidence suggests are likely but which nevertheless happen. In other words, it sounds highly plausible that if you asked simple binary questions about expected outcomes, both groups would give almost identical answers and usually be correct, but the professionals would be more confident when both groups are wrong.

If this is the case, then it's reasonable to assume the CIA's statisticians would have done the analysis and know that's the reason these "superforecasters" are better: doubt

I guess the reverse is also possible: professional intelligence analysts systematically tend toward overcaution and pick numbers near the middle of the range, either out of a desire not to look silly or because they're more aware of the policy implications. But subjectively I'd assign that a lower probability.

cpeterso|12 years ago

That's very interesting! How did you become involved in this project?

lostcolony|12 years ago

Whether or not the events happened is a yes/no, though. It's not "I believe this is 60% likely to happen" / "Why yes! It was exactly 60%!"; it is instead "This person said it was 60% likely to happen, and it happened. That means this person was right, to a certain degree of right" - and we don't actually know what that degree means.

It may be that they're weighting it by confidence level (so saying 0% chance on something that happens counts heavily against them, while saying 49% counts against them less), but it still comes down to yes/no outcomes, in that given enough people and a random distribution of answers, you would expect some subset of people to always be right (though the 'amount' of right varies: one person said 51% chance of it happening, another said 100%; both got it right).
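The confidence-weighted scoring described above is exactly what a proper scoring rule does. The thread doesn't say which rule the project actually uses; as a hypothetical sketch, here is the Brier score (mean squared error between stated probabilities and 0/1 outcomes, lower is better), showing that an overconfident miss costs far more than a hedged 49% miss:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities (0.0-1.0)
    and binary outcomes (1 = event happened, 0 = it didn't)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasters scoring the same four events:
outcomes = [1, 0, 1, 1]

confident_right = [0.90, 0.10, 0.95, 0.90]  # confident and correct
hedger          = [0.51, 0.49, 0.51, 0.51]  # barely past 50% every time
confident_wrong = [0.90, 0.10, 0.95, 0.05]  # confident, misses the last event badly

for name, f in [("confident_right", confident_right),
                ("hedger", hedger),
                ("confident_wrong", confident_wrong)]:
    print(name, brier_score(f, outcomes))
```

All three "got it right" on most questions in the binary sense, but the scores separate them: the calibrated confident forecaster scores near zero, the perpetual hedger sits around 0.24, and one badly overconfident miss is enough to drag an otherwise strong forecaster down to roughly the hedger's level.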