Superforecasting: The Art and Science of Prediction

by Philip E. Tetlock and Dan Gardner

Number of pages: 352

Publisher: Broadway Books

BBB Library: Psychology and Strengths

ISBN: 978-0804136716

About the Authors

Philip E. Tetlock: Philip E. Tetlock (born 1954) is a Canadian-American political scientist and writer, and a professor at the University of Pennsylvania.


Dan Gardner: Dan Gardner is a Canadian journalist and the New York Times best-selling author of books including Risk and Future Babble.


Editorial Review

Philip E. Tetlock and his research and life partner Barbara Mellers launched the Good Judgment Project and invited volunteers to sign up and forecast the future. Big as it was, the Good Judgment Project (GJP) was only part of a much larger research effort sponsored by the Intelligence Advanced Research Projects Activity (IARPA). IARPA is an agency within the intelligence community that reports to the Director of National Intelligence, and its job is to support daring research that promises to make American intelligence better at what it does. A big part of what American intelligence does is forecasting global political and economic trends.

Book Reviews

"The prescriptions in “Superforecasting” should offer us all an opportunity to understand and react more intelligently to the confusing world around us." — The New York Times

"I think that this is a fantastic book. It contains many more insights, touching on topics such as how to combine and manage teams of forecasters, and comes highly recommended." — Bond Vigilantes

"Even if the hoped-for revolution never arrives, the techniques and habits of mind set out in this book are a gift to anyone who has to think about what the future might bring. In other words, to everyone."— The Economist


Wisdom to Share

By one rough estimate, the United States has twenty thousand intelligence analysts assessing everything. This forecasting is critical to national security.

Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, gathering information, and updating beliefs.

System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. It must treat the available evidence as reliable and sufficient.

The key is doubt.

When faced with a hard question, we often surreptitiously replace it with an easy one.

Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better.
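To make the golf analogy concrete, here is a minimal sketch of the binary Brier score: the average squared distance between the probabilities you assigned and what actually happened. (The function name and sample numbers are illustrative, not from the book; the book's multi-category variant ranges from 0 to 2, while this common binary form ranges from 0 to 1.)

```python
def brier_score(forecasts, outcomes):
    """Mean squared distance between forecast probabilities and outcomes.

    forecasts: probabilities assigned to the event happening (0.0 to 1.0)
    outcomes: what actually happened (1 if the event occurred, else 0)
    Lower is better: 0.0 is perfect; always saying 50% scores 0.25.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident forecaster who turns out to be right scores near zero...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
# ...while the same confidence applied to wrong calls is punished heavily.
print(brier_score([0.9, 0.8, 0.1], [0, 0, 1]))  # about 0.75
```

Squaring is what makes the score reward well-calibrated confidence: being 90% sure and wrong costs far more than hedging at 60% and being wrong.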

When we make estimates, we tend to start with some number and adjust. The number we start with is called the anchor.

Coming up with an outside view, an inside view, and a synthesis of the two isn’t the end. It’s a good beginning.

There are many different ways to obtain new perspectives. What do other forecasters think? What outside and inside views have they come up with? What are experts saying?

Reality is infinitely complex.

Beliefs are hypotheses to be tested, not treasures to be protected.

Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience.