
Philip Tetlock and expert predictions



al.khwarizmi
15th September 2012, 03:29
en.wikipedia.org/wiki/Philip_E._Tetlock


His Expert Political Judgment: How Good Is It? How Can We Know? (2005) describes a twenty-year study in which 284 experts in many fields, including government officials, professors, journalists, and others, and holding many opinions, from Marxists to free-marketeers, were asked to make 28,000 predictions[1][2] about the future. The study found that they were only slightly more accurate than chance, and worse than basic computer algorithms.

I believe that one of the major reasons behind this gap, besides having bad or too few models of the world, is that "Marxists" and "free-marketeers" make predictions based on pure wishful thinking, whereas "basic computer algorithms" do not have this bias.

Thoughts?

ÑóẊîöʼn
15th September 2012, 14:15
Even "basic computer algorithms" are going to contain built-in assumptions. My guess is that the assumptions made for the algorithms are of a much more fundamental/generalised nature than the assumptions made by that set of Marxists and free marketeers.

al.khwarizmi
16th September 2012, 03:10
Even "basic computer algorithms" are going to contain built-in assumptions. My guess is that the assumptions made for the algorithms are of a much more fundamental/generalised nature than the assumptions made by that set of Marxists and free marketeers.

Of course. And "wishful thinking" generally doesn't enter into those assumptions. That's why computers get it right.

Kotze
16th September 2012, 13:10
Very interesting. How did you come across that person? The research is related to what James Surowiecki wrote about in The Wisdom of Crowds.

There's a bunch of videos with Tetlock on YouTube. He uses the distinction between hedgehogs and foxes. Hedgehogs are the big-idea people who try to explain everything through one organizing principle. Foxes have broad interests and a meandering style of arguing: on the one hand, on the other hand, on the third hand... Foxes are better at making predictions than hedgehogs, but make for very boring pundits.

Aggregating predictions makes them more reliable. This improvement is particularly strong with hedgehogs, but aggregated hedgehog predictions still don't beat aggregated fox predictions.
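The aggregation effect can be sketched in a few lines. This is a toy illustration, not Tetlock's data: the forecasters, probabilities, and outcomes below are made up, and the Brier score (mean squared error of probability forecasts, which Tetlock does use) is the scoring rule.

```python
# Sketch: averaging several forecasters' probability estimates tends to
# score better (lower Brier score) than the typical individual forecaster.
# All numbers below are invented for illustration.

def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Three hypothetical forecasters, each giving P(event) for five events.
forecasters = [
    [0.9, 0.2, 0.8, 0.3, 0.6],
    [0.4, 0.1, 0.9, 0.7, 0.8],
    [0.7, 0.5, 0.6, 0.2, 0.9],
]
outcomes = [1, 0, 1, 0, 1]  # what actually happened

individual = [brier(f, outcomes) for f in forecasters]
# Aggregate forecast: simple average of the three estimates per event.
aggregate = [sum(col) / len(col) for col in zip(*forecasters)]

print("mean individual Brier:", sum(individual) / len(individual))
print("aggregate Brier:     ", brier(aggregate, outcomes))
```

Because squared error is convex in the forecast, the averaged forecast can never score worse than the average individual forecaster, though it can still lose to the single best one.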

Tetlock talks about the problems of getting input that can be used for aggregation and testing, and about just how slippery language is in general, and the language of experts especially. He gives the example of an expert's prediction that weapons of mass destruction would be found in Iraq. When they didn't show up, what did the expert say? "You just wait."

When professional western Marxist academics talk about the upcoming crisis/crash/apocalypse/Rob Schneider remake of capitalism, what do they mean by that? Do they ever bother to describe it in a way that lets you check for yourself whether it's happening? That could potentially show them wrong, and who wants to risk losing face? Here's an example of a prediction that can be checked: Will the median age of people who die in Germany in 2020 be more than 4 years lower than it was in 2010? (My prediction is nope.) We hardly ever talk like that, but we should.
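The point of a checkable prediction like that is that it can be resolved mechanically once the data is in. A minimal sketch (the median ages below are placeholders, not real German mortality figures):

```python
# Resolving the kind of checkable prediction described above.
# The ages passed in are hypothetical, not real mortality data.

def prediction_holds(median_2010, median_2020, max_drop=4):
    """True if the later median age at death did NOT fall more than
    max_drop years below the earlier value (the 'nope' prediction wins)."""
    return median_2010 - median_2020 <= max_drop

print(prediction_holds(80.0, 81.5))  # placeholder inputs
```

No face-saving wiggle room: given the two numbers, the prediction is either right or wrong.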

I'm not sure I understand what you mean by computers getting it right. Yes, a simple prediction algorithm expecting more of the same did very well, but Tetlock doesn't propose to replace anybody with a computer (based on the stuff on YouTube; I haven't read his books). What Tetlock suggests is rather a combination of appeals to individuals to get into some good habits (like regularly asking yourself what would make your opinion about something change) and aggregating opinions via prediction markets or the inexplicably named Delphi method (several rounds of polling and commenting on predictions where your input isn't linked to your name).
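The mechanics of a Delphi-style process can be sketched as repeated anonymous rounds where each forecaster sees the group median and revises toward it. The update rule and the `pull` factor below are illustrative assumptions, not a specification of the actual method:

```python
# Toy Delphi-style aggregation: after each round, every forecaster moves
# part-way toward the group median. The 0.5 "pull" factor is an assumption
# made up for this sketch, not part of the real Delphi protocol.
from statistics import median

def delphi(estimates, rounds=3, pull=0.5):
    est = list(estimates)
    for _ in range(rounds):
        m = median(est)
        est = [e + pull * (m - e) for e in est]
    return est

print(delphi([0.1, 0.5, 0.9]))  # estimates converge toward the median
```

Each round shrinks the spread of opinions while keeping the median fixed, which is the intuition behind iterated anonymous polling: outliers get pulled in without anyone losing face by name.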