## Notes
### Superforecasting
#### Ch. 10
We have learned a lot about superforecasters, from their lives to their test scores to their work habits. Taking stock, we can now sketch a rough composite portrait of the modal superforecaster.
In philosophic outlook, they tend to be:
CAUTIOUS: Nothing is certain
HUMBLE: Reality is infinitely complex
NONDETERMINISTIC: What happens is not meant to be and does not have to happen
In their abilities and thinking styles, they tend to be:
ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges
REFLECTIVE: Introspective and self-critical
NUMERATE: Comfortable with numbers
In their methods of forecasting, they tend to be:
PRAGMATIC: Not wedded to any idea or agenda
ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views
DRAGONFLY-EYED: Value diverse views and synthesize them into their own
PROBABILISTIC: Judge using many grades of maybe
THOUGHTFUL UPDATERS: When facts change, they change their minds
GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking thinking for cognitive and emotional biases
In their work ethic, they tend to have:
A GROWTH MINDSET: Believe it’s possible to get better
GRIT: Determined to keep at it however long it takes
I paint with a broad brush here. Not every attribute is equally important. The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.
And not every superforecaster has every attribute. There are many paths to success and many ways to compensate for a deficit in one area with strength in another.
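The "perpetual beta" and "thoughtful updater" traits above are about incremental belief revision: moving a probability in measured steps as evidence arrives. As a concrete illustration (not from the book; the coup scenario and all numbers are invented for the example), here is a minimal Bayesian-update sketch in Python:

```python
# Hypothetical sketch: Bayesian belief updating, the kind of measured
# revision "thoughtful updaters" practice. All numbers are invented.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis H after seeing evidence E,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start at 30% that a coup occurs this year; a credible report arrives
# that is twice as likely if a coup is coming as if it is not.
belief = 0.30
belief = update(belief, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(f"updated belief: {belief:.2f}")  # ~0.46: a measured shift, not a lurch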
#### Ch. 11
{{On Taleb's critique:}}
What matters can’t be forecast and what can be forecast doesn’t matter. Believing otherwise lulls us into a false sense of security.
If black swans must be inconceivable before they happen, a rare species of event suddenly becomes a lot rarer. But Taleb also offers a more modest definition of a black swan as a “highly improbable consequential event.” These are not hard to find in history. And as Taleb and I explored in our joint paper, this is where the truth in his critique can be found.
Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—“there will be conflicts”—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems.
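The butterfly-dynamics point is easy to see concretely. Below is a minimal sketch, not from the book, using the logistic map (a standard toy model of chaos); the function name and the particular numbers are illustrative only. Two trajectories starting one part in a million apart become uncorrelated within a few dozen steps, which is the mechanism behind the forecasting horizon described above.

```python
# Hypothetical sketch: sensitive dependence on initial conditions,
# shown with the logistic map x -> r * x * (1 - x) at r = 4 (chaotic regime).

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # a "butterfly wing" of a difference

for t in (0, 10, 25, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}  (gap {abs(a[t] - b[t]):.6f})")
# Early steps track closely; by step ~25-50 the gap is as large as the
# state space itself, so forecasts beyond that horizon carry no signal.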
[...]
If you have to plan for a future beyond the forecasting horizon, plan for surprise. That means, as Danzig advises, planning for adaptability and resilience. Imagine a scenario in which reality gives you a smack in the ear and consider how you would respond. Then assume reality will give you a kick in the shin and think about dealing with that. “Plans are useless,” Eisenhower said about preparing for battle, “but planning is indispensable.”11 Taleb has taken this argument further and called for critical systems—like international banking and nuclear weapons—to be made “antifragile,” meaning they are not only resilient to shocks but strengthened by them. In principle, I agree. But a point often overlooked is that preparing for surprises—whether we are shooting for resilience or antifragility—is costly. We have to set priorities, which puts us back in the forecasting business.
Tokyo is earthquake central, so expensive engineering standards make sense. But in regions less prone to big quakes, particularly in poorer countries, the same standards make less sense. These sorts of probability estimates are at the heart of all long-term planning, but they are rarely as explicit as those in earthquake preparation. For decades, the United States had a policy of maintaining the capacity to fight two wars simultaneously. But why not three? Or four? Why not prepare for an alien invasion while we are at it? The answers hinge on probabilities. The two-war doctrine was based on a judgment that the likelihood of the military having to fight two wars simultaneously was high enough to justify the huge expense—but the same was not true of a three-war, four-war, or alien-invasion future. Judgments like these are unavoidable, and if it sometimes looks like we've avoided them in long-term planning, that is only because we have swept them under the rug. That's worrisome. Probability judgments should be explicit so we can consider whether they are as accurate as they can be. And if they are nothing but a guess, because that's the best we can do, we should say so. Knowing what we don't know is better than thinking we know what we don't.
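The "answers hinge on probabilities" point can be made explicit with a crude expected-value test: prepare when the probability of the event times the loss avoided exceeds the cost of preparing. This is a sketch with invented numbers, not figures from the book or from actual defense budgets:

```python
# Hypothetical sketch: making the implicit probability judgment in
# preparedness spending explicit. All probabilities, losses, and costs
# below are invented for illustration.

def worth_preparing(p_event: float, loss_if_unprepared: float, cost_of_preparing: float) -> bool:
    """Crude expected-value test: prepare if p * loss > cost."""
    return p_event * loss_if_unprepared > cost_of_preparing

# scenario: (annual probability, loss if unprepared, cost of preparing), in dollars
scenarios = {
    "two simultaneous wars": (0.05, 5_000e9, 100e9),
    "three simultaneous wars": (0.001, 8_000e9, 300e9),
    "alien invasion": (1e-9, 1e15, 500e9),
}
for name, (p, loss, cost) in scenarios.items():
    print(f"{name:>24}: prepare = {worth_preparing(p, loss, cost)}")
# Only the two-war case clears the bar here, which is the structure of the
# judgment behind the two-war doctrine: the numbers are sweepable under the
# rug, but the decision depends on them either way.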
[...]
All three of us see history this way. Counterfactuals highlight how radically open the possibilities once were and how easily our best-laid plans can be blown away by flapping butterfly wings. Immersion in what-if history can give us a visceral feeling for Taleb's vision of radical indeterminacy. Savoring how history could have generated an infinite array of alternative outcomes, and could now generate a similar array of alternative futures, is like contemplating the one hundred billion known stars in our galaxy and the one hundred billion known galaxies. It instills profound humility.16 Kahneman, Taleb, and I agree on that much. But I also believe that humility should not obscure the fact that people can, with considerable effort, make accurate forecasts about at least some developments that really do matter. To be sure, in the big scheme of things, human foresight is puny, but it is nothing to sniff at when you live on that puny human scale.