What Superforecasters Know That You Don’t
A Futures Thinking Perspective
👋 Hello friends,
Thank you for joining this week’s edition of Brainwaves. I’m Drew Jackson, and today we’re exploring:
Solutions to Cognitive Limitations
Key Question: Given the many limitations to our predictive capacities, what strategies can we employ to mitigate them?
Thesis: Insights from superforecasters and systems thinking methodologies can help address the limitations in our predictive abilities. While we will never be able to predict perfectly, there are domains where we can improve our performance. Other strategies presented in Tenets #7, #8, and #9 will provide further solutions.
Before we begin: Brainwaves arrives in your inbox every other Wednesday, exploring venture capital, economics, space, energy, intellectual property, philosophy, and beyond. I write as a curious explorer rather than an expert, and I value your insights and perspectives on each subject.
Time to Read: 49 minutes.
Let’s dive in!
A frog was hopping along the shore of a river looking for a place to cross. He came upon a scorpion sitting on the shore. “Hello, friend frog,” said the scorpion. “It appears you are looking to cross the river. I too want to cross. Would you mind carrying me?”
The frog was taken aback. “Why, if I let you on my back to cross the river, you’d sting me and I would die. I don’t think I’ll do that.”
The scorpion immediately replied, “There is no logic to your concern. If I sting you and you die, I will surely die as well, since I can’t swim. I wouldn’t need a ride if I could swim.”
The frog thought a moment and then said, “Your logic makes sense. Why would you kill me if it would result in your death? Come along and climb on my back and we’ll cross this river.”
The scorpion climbed on the frog’s back and off they went to cross the river.
About halfway across the river, the scorpion raised its tail and stung the frog. The frog was both astounded and disconsolate. “Why did you sting me? Now I will die and you will surely drown and die also.”
The scorpion replied, “I can’t help it. It’s who I am. It’s in my nature.”
- Lev Nitoburg, The German Quarter, 1993
The future actively shapes our lives. Historically, the way humans have thought about and approached the future has been flawed. Futures Thinking is a modern discipline that rethinks how we conceptualize and engage with what lies ahead.
Rather than trying to predict specific future events, Futures Thinking encourages a shift in how we conceptualize the future itself—drawing on diverse cultural perspectives, foundational world characteristics, deep modern literature reviews, and recognizing that our present actions and narratives significantly influence future outcomes. Since most major life decisions are essentially bets on the future, adopting this framework could transform how we approach education, careers, relationships, and other essential aspects of life.
Today, our discussion revolves around how our world is set up and how these underlying characteristics shape everything that goes on in the world, specifically focusing on Futures Thinking Tenet #6: Diverse perspectives, critical thinking, systems thinking, and humility help navigate complexity and mitigate cognitive limitations.
PREDICTIONS ARE CONSTANTS IN OUR LIVES - STRATEGIES TO ADDRESS OUR LIMITATIONS - DEVELOPING A NEW TOOLKIT FOR FUTURES THINKING
Philip Tetlock, discussed later, wrote “Leaders must decide, and to do that they must make and use forecasts.”
Whether we like it or not, predictions and forecasts are used in almost every aspect of our lives, from checking the weather app to seeing when an upcoming delivery is scheduled to autocorrecting in your texting app. The applications are seemingly endless.
In the conclusion of Tenet #5, I summarized the core issue very succinctly: We simply cannot predict accurately.
This is due to a wide range of cognitive limitations, most of which are detailed in the table below.
On the surface, this is an incredibly unfortunate combination of circumstances: we constantly make predictions throughout our lives, yet we cannot predict with any meaningful accuracy. How should we live, given this information?
All hope is not lost: a handful of strategies can address, in some capacity or another, each of the limitations on our predictive abilities. To find them, we can draw upon modern scientific research, tacit knowledge, anecdotal evidence, and ancient wisdom, assembling a set of perceptual tools and knowledge sources that broaden our predictive capabilities. These mitigation strategies are listed in the table below, along with the cognitive limitations they address.
You may have read this far and concluded that, armed with these perceptual tools and knowledge sources, you can correct the errors in your predictive ways and now predict everything with 100% accuracy.

To be explicitly clear: if anyone, myself included, tries to convince you of this, they are trying to sell you snake oil.

These mitigating factors will help you predict better than you would have without them; however, they still aren’t perfect. They are better than nothing, and some are surprisingly good, but none confer perfect, omniscient predictive abilities.
Let’s dive into each one!
BLINDFOLDED DART THROWING MONKEYS - FORESIGHT IS REAL - START WITH THE OUTSIDE VIEW
Our story begins in the 1970s, with Princeton University economist Burton Malkiel. At this point, Malkiel was in his forties, having served in the army, worked in investment banking, and spent close to a decade teaching at Princeton. In 1973, he published his classic finance book A Random Walk Down Wall Street.
Best known now for its random walk hypothesis and efficient-market leanings, the book is crucial for our discussion today because of a small anecdote buried within its pages, which reads along the lines of “a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts.”
Fast forward around three decades to the psychologist Philip Tetlock. In his landmark 2005 study and subsequent research, Tetlock put these claims to the test, seeking to determine whether these so-called experts could actually predict with meaningful accuracy beyond random chance.
The results? In Superforecasting: The Art and Science of Prediction, written with Dan Gardner, Tetlock explains the findings:
There were two statistically distinguishable groups of experts. The first failed to do better than random guessing, and in their longer-range forecasts even managed to lose to the chimp. The second group beat the chimp, though not by a wide margin, and they still had plenty of reason to be humble. Indeed, they only barely beat simple algorithms like “always predict no change” or “predict the recent rate of change.” Still, however modest their foresight was, they had some.
The first group was quickly dismissed as truly worse than blind monkeys, but the second, and much more enticing, group became the star of the show, eventually dubbed “superforecasters.”
Initially, the researchers were unsure whether the results reflected genuine skill or random chance (i.e., luck). The participants in the study were volunteers, ordinary people like you or me, including pipe installers, filmmakers, professors, mathematicians, farmers, and more.
Over the next couple of years, the researchers found that the superforecasters’ abilities held up phenomenally well. After year 1, individuals who placed in this elite category were grouped into teams with fellow superforecasters. What happened? Their scores improved even further, widening the gap over their merely average peers.
The results were clear-cut each year. Teams of ordinary forecasters beat the wisdom of the crowd by about 10%. Prediction markets beat ordinary teams by about 20%. And superteams beat prediction markets by 15% to 30%.
To be clear, these superforecasters weren’t infallible; many were still subject to luck, occasionally having a bad year of ordinary results. But, the significant result of this longitudinal study was the following: “We can conclude that the superforecasters were not just lucky. Mostly, their results reflected skill.”
With this conclusion in hand, Tetlock and Gardner set out to understand why these individuals consistently outperformed. Over the next decade, they synthesized vast quantities of predictive data, qualitative survey results, conversational anecdotes, and other sources to produce a near-comprehensive portrait of this elite group.
And what they found was hopeful for the everyday person: foresight isn’t a mysterious genetic gift bestowed at birth, it’s the product of particular ways of thinking, of gathering information, and of updating beliefs. “These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.”
To identify specific characteristics to emulate, they found that the regular volunteer forecaster scored higher on intelligence and knowledge tests than about 70% of the population. Superforecasters scored higher than about 80%. They’re smart, but not outlandishly intelligent.
The first characteristic that sets superforecasters apart is their inclusion of a wide variety of perspectives.
Tetlock and Gardner use the analogy of a dragonfly to illustrate how superforecasters incorporate a wide range of viewpoints into their thinking. Like humans, dragonflies have two eyes; unlike ours, however, each is a bulging sphere whose surface is covered by as many as 30,000 individual lenses.
Information from each of these thousands of unique perspectives flows into the dragonfly’s brain, where it is synthesized into a centralized vision. This process enables dragonflies to see in almost every direction simultaneously, with the clarity and precision needed to track flying insects at high speeds.
Two aspects of the dragonfly analogy help explain why superforecasters are as good as they are.

First, superforecasters start by gathering as many perspectives as possible from as many sources as possible. Second, they aggregate all of those perspectives into a single, actionable viewpoint.
One strategy superforecasters employ is beginning with the outside view. The outside view, as discussed by Daniel Kahneman in his book Thinking, Fast and Slow, is similar to the statistical base rate. The base rate is the fundamental probability of an event or characteristic occurring in a population, representing the overall frequency without any specific information.
It’s the population-level rate of occurrence, without factoring in any information unique to the problem at hand. For instance, to forecast the question “how many white women will be involved in car crashes this year?”, you would start with the base rate: the percentage of people, on average, involved in a crash each year.
The outside view is typically abstract, bare, and doesn’t lend itself so readily to storytelling. This is often why inexperienced forecasters (which is most of us) are drawn to and begin with the inside view, as it’s usually concrete and filled with engaging detail about the specifics of the forecast in question.
Why do superforecasters begin with the outside view? Whatever view you start with acts as the anchor for the rest of your thought process, the classic psychological phenomenon of anchoring: when we make estimates, we tend to start with one number or thought and then adjust from it, and that starting point is called the anchor.
Forecasters who begin with the inside view risk being swayed by a number that may have little or no meaning. In contrast, starting with the outside view will provide a meaningful anchor.
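To make this concrete, one crude way to operationalize “anchor on the outside view, then adjust” is a weighted blend of the base rate and the case-specific estimate. This is a hypothetical sketch, not a formula from Tetlock; the base rate, inside-view number, and weight below are all invented for illustration:

```python
def anchored_forecast(base_rate: float, inside_view: float,
                      inside_weight: float = 0.3) -> float:
    """Anchor on the outside view (the base rate), then adjust a limited
    distance toward the inside view. The weight is a judgment call: how
    far the case-specific story is allowed to pull you off the anchor."""
    return base_rate + inside_weight * (inside_view - base_rate)

# Outside view: suppose ~5% of comparable cases succeed (invented figure).
# Inside view: the vivid specifics make success feel 60% likely.
estimate = anchored_forecast(base_rate=0.05, inside_view=0.60)
print(f"{estimate:.3f}")  # → 0.215
```

A weight near 0 stays glued to the base rate; a weight near 1 reproduces the inside-view-first mistake described above.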
So, superforecasters start with the outside view, then add details from the inside view, but they don’t stop there. They gather “as much information from as many sources as they could.”
In his 1996 book, The Art of the Long View, Peter Schwartz discusses how they employed this practice at Shell to great success:
At Shell, Pierre had come to believe that if you wanted to see the future you could not go to conventional sources of information. Everyone else would know them as well and thus you would have no unique advantage. You had to seek out truly unusual people who had their finger on the pulse of change, who could see significant but surprising forces for change. These people would be found in very different walks of life, all over the world.
While identifying and courting a wide variety of perspectives is valuable, aggregation is arguably the more critical part of the equation. This is where teams of superforecasters (dubbed “superteams”) demonstrated the power of consolidating information.
Superteams leveraged the “wisdom of the crowd.” Helpful information is often scattered, with many people holding scraps of relevant information. In Francis Galton’s original experiment demonstrating the phenomenon, in which a fairground crowd guessed the weight of an ox, hundreds of people contributed valid information, creating a collective pool far greater than any of them could have possessed alone. When averaged, positive errors canceled out negative errors, yielding a near-perfect estimate of the actual value.
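The error-canceling claim can be checked in a few lines of simulation. The numbers below are purely illustrative (the true value echoes Galton’s famous ox-weighing contest): each guesser is noisy, but positive and negative errors largely cancel in the average.

```python
import random

random.seed(42)

TRUE_VALUE = 1198   # illustrative target, echoing Galton's ox contest
N_GUESSERS = 800

# Each guesser perceives the truth plus their own idiosyncratic error.
guesses = [TRUE_VALUE + random.gauss(0, 100) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd error:              {abs(crowd_estimate - TRUE_VALUE):.1f}")
print(f"average individual error: {avg_individual_error:.1f}")
# The crowd's error comes out a small fraction of the typical individual's.
```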
Similarly, superforecasters and superteams participated in the same exercise. By drawing on many sources of data, viewpoints, and perspectives, they were able to aggregate them into a more holistic perspective on the matter at hand. In Tetlock and Gardner’s words:
They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate. They may be individuals working alone, but what they do is, in principle, no different from what Galton’s crowd did. They integrate perspectives and the information contained within them. The only real difference is that the process occurs within one skull.
Even when they all looked at the same evidence, they were unlikely to reach precisely the same conclusion. This variation, driven by their different backgrounds, provided far more value than unanimity would have.
Granted, stepping outside ourselves and really getting a differentiated view of reality is a struggle. Whether by virtue of temperament or habit or conscious effort, superforecasters tend to engage in the hard work of consulting other perspectives.
In the 21st century, it’s becoming increasingly challenging to encounter truly diverse perspectives. Byrne Hobart and Tobias Huber discuss this in their book, Boom: Bubbles and the End of Stagnation, specifically how technology is collapsing the multifaceted human experience into a one-dimensional stream, flattening individual differences.
It’s a bleak view: “While our culture fetishizes novelty and diversity on the surface, the universalizing and totalizing nature of technology erases radical difference on a deeper level, only to preserve distinctions without real differences.”
However, as we’ve seen, diverse perspectives are critical to any foresight abilities we wish to foster, let alone their impact on the rest of our lives. To quote Joshua Stehr, a climate science nerd with a love for futurism, “If we are to be better prepared for our future, we need to expand who and what we’re taking seriously.”
Similarly, he discusses how it’s almost a moral obligation for us to challenge views of the future and propose alternatives. If we don’t, these visions of the future can become self-fulfilling prophecies. As he writes, “alternative futures exist, they need amplifying.”
This is the foundation for the NORMALS Studio New Future Archetypes thought framework. In their introductory article, they discuss how they hope to foster novel views of the future—to “shape something new”—by using creative thinking to break free from inherited constraints.
Arguably the most impactful phrase they employ is the following: “The diversity of futures we hold shapes our capacity to respond to complex and shifting realities.” That’s a compelling case for why diverse perspectives are essential.
Long story short, no matter what realm of literature or popular thought you draw from, the underlying message about the strength of diverse perspectives runs throughout. Jonathan Haidt, a renowned social psychologist, writes in his book The Happiness Hypothesis, “The wise are able to see things from others’ points of view, appreciate shades of gray, and then choose or advise a course of action that works out best for everyone in the long run.”
And finally, the investor and entrepreneur Peter Thiel, writing in his book Zero To One, provides a fitting viewpoint to cap off this discussion:
Only by seeing our world anew, as fresh and strange as it was to the ancients who saw it first, can we both re-create it and preserve it for the future.
ELEMENTARY, MY DEAR WATSON - THE NUMBER OF PIANO TUNERS IN CHICAGO - THE UNTHINKABLE HAITIAN REVOLUTION
Appearing in the late 1880s, Sherlock Holmes quickly became a widespread hit, arguably becoming the best-known fictional detective. The Guinness World Records lists him as the most portrayed human literary character in film and television history.
Created by Arthur Conan Doyle, Sherlock Holmes, as described by his friend Dr. Watson, is known for his proficiency with observation, deduction, forensic science, and logical reasoning, which he employs when investigating all manner of cases.
In 1924, Doyle wrote the shortest Sherlock Holmes story, presented below:
Watson had been watching his companion intently ever since he had sat down at the breakfast table. Holmes happened to look up and catch his eye. “Well, Watson, what are you thinking about?” he asked.
“About you.”
“Me?”
“Yes, Holmes. I was thinking how superficial are these tricks of yours, and how wonderful it is that the public should continue to show interest in them.”
“I quite agree,” said Holmes. “In fact, I have a recollection that I have myself made a similar remark.”
“Your methods,” said Watson severely, “are really easily acquired.”
“No doubt,” Holmes answered with a smile. “Perhaps you will yourself give an example of this method of reasoning.”
“With pleasure,” said Watson. “I am able to say that you were greatly preoccupied when you got up this morning.”
“Excellent!” said Holmes. “How could you possibly know that?”
“Because you are usually a very tidy man and yet you have forgotten to shave.”
“Dear me! How very clever!” said Holmes. “I had no idea, Watson, that you were so apt a pupil. Has your eagle eye detected anything more?”
“Yes, Holmes. You have a client named Barlow, and you have not been successful with his case.”
“Dear me, how could you know that?”
“I saw the name outside his envelope. When you opened it you gave a groan and thrust it into your pocket with a frown on your face.”
“Admirable! You are indeed observant. Any other points?”
“I fear, Holmes, that you have taken to financial speculation.”
“How could you tell that, Watson?”
“You opened the paper, turned to the financial page, and gave a loud exclamation of interest.”
“Well, that is very clever of you, Watson. Any more?”
“Yes, Holmes, you have put on your black coat, instead of your dressing gown, which proves that you are expecting some important visitor at once.”
“Anything more?”
“I have no doubt that I could find other points, Holmes, but I only give you these few, in order to show you that there are other people in the world who can be as clever as you.”
“And some not so clever,” said Holmes. “I admit that they are few, but I am afraid, my dear Watson, that I must count you among them.”
“What do you mean, Holmes?”
“Well, my dear fellow, I fear your deductions have not been so happy as I should have wished.”
“You mean that I was mistaken.”
“Just a little that way, I fear. Let us take the points in their order: I did not shave because I have sent my razor to be sharpened. I put on my coat because I have, worse luck, an early meeting with my dentist. His name is Barlow, and the letter was to confirm the appointment. The cricket page is beside the financial one, and I turned to it to find if Surrey was holding its own against Kent. But go on, Watson, go on! It’s a very superficial trick, and no doubt you will soon acquire it.”
Another crucial trait of superforecasters is their ability to think critically, and Sherlock Holmes is an excellent exemplar of this characteristic. “Critical thinking” is a term commonly used in conversation and technical literature, but it has slowly begun to lose its meaning. Formally defined, critical thinking is the process of analyzing available facts, evidence, observations, and arguments to reach sound conclusions or informed choices.
Enrico Fermi, an Italian American physicist, renowned for being the creator of the world’s first artificial nuclear reactor and a member of the Manhattan Project, provides a methodology to deploy critical thinking in the realm of forecasting.
A Fermi problem is usually a back-of-the-envelope estimation problem that involves making justified guesses about quantities. I’ve seen them most often deployed in interview situations to test analytical reasoning and critical thinking skills. A commonly cited example is below.
Let’s begin by trying to answer this question: How many piano tuners are there in Chicago?
How do we begin concocting a reasonable answer? Most people would think about it for a second, then offer a best guess at an overall number, with the whole process taking less than 10 seconds. How did they come to this answer? When asked, most would shrug and say something along the lines of “it seems about right.”
Fermi knew people could do better. The key was breaking the question down into subquestions. For our question, we can break it into four key subquestions that roll up into a single calculation:
How many pianos are in Chicago?
How often are pianos tuned each year?
How long does it take to tune a piano?
How many hours a year does the average piano tuner work?
With the first three, we can calculate the total amount of piano-tuning work in Chicago. We can divide by the last to get a reasonable estimate of how many piano tuners there are in Chicago.
However, we don’t have any of that information. We’ve simply split the question into four, so you may think we’ve created meaningless work. Not necessarily. What Fermi understood is that by breaking the question down, we can better separate the knowable from the unknowable. So, guessing isn’t eliminated from our process; however, we’ve broken it down into manageable chunks.
The net result tends to yield a more accurate estimate than whatever number popped out of the black box when we first read the question.
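The four-step decomposition above turns directly into arithmetic. Every input below is a rough, invented guess; the point of a Fermi estimate is the decomposition, not the particular numbers:

```python
# Fermi estimate: piano tuners in Chicago. All inputs are rough guesses.
chicago_population = 2_500_000
people_per_household = 2.5
share_of_households_with_piano = 0.02   # ~1 in 50 households (guess)
tunings_per_piano_per_year = 1          # guess
hours_per_tuning = 2                    # including travel time (guess)
tuner_hours_per_year = 40 * 50          # a full-time working year

# Subquestions 1-3: total piano-tuning work demanded in Chicago each year.
pianos = chicago_population / people_per_household * share_of_households_with_piano
total_tuning_hours = pianos * tunings_per_piano_per_year * hours_per_tuning

# Subquestion 4: divide demand by one tuner's annual capacity.
estimated_tuners = total_tuning_hours / tuner_hours_per_year
print(round(estimated_tuners))  # → 20, an order-of-magnitude answer
```

Compare that to a raw gut guess: the decomposition forces each hidden assumption into the open, where it can be challenged and improved.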
If you don’t believe me, try it yourself on the question: How many tennis balls fit on an airplane? Start with a best-guess estimate, break the question down as Fermi would, compare the two answers, then Google it. Which was better?
Across the board, superforecasters tended to engage in and enjoy hard mental slogs. They have strong preferences for variety and intellectual curiosity. A brilliant puzzle solver may have the raw material necessary for forecasting, but if they don’t constantly question fundamental beliefs about the world, they will be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.
Superforecasters are incredibly precise in their forecasts, often debating differences of a single percentage point. Barbara Mellers, a professor of psychology at the University of Pennsylvania, has shown that greater forecast granularity predicts greater accuracy: the average forecaster who predicts in 10% increments (20%, 30%, 40%) is less accurate than the finer-grained forecaster who uses 5% increments (25%, 30%, 35%), who in turn is less accurate than the forecaster who uses 1% increments (20%, 21%, 22%).
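One way to see why granularity matters: rounding a well-calibrated forecast to coarse increments throws away information, which shows up directly in the Brier score (the mean squared error of probability forecasts, the accuracy measure used in Tetlock’s tournaments). The simulation below is a toy illustration of that rounding penalty, not Mellers’ actual analysis:

```python
import random

random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def round_to(p, increment):
    """Round a probability to the nearest multiple of `increment`."""
    return round(p / increment) * increment

# Events with known true probabilities, judged by a perfectly calibrated
# forecaster who reports those probabilities before rounding.
true_probs = [random.random() for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

fine = [round_to(p, 0.01) for p in true_probs]    # 1% granularity
coarse = [round_to(p, 0.10) for p in true_probs]  # 10% granularity

print(f"Brier at 1% granularity:  {brier(fine, outcomes):.4f}")
print(f"Brier at 10% granularity: {brier(coarse, outcomes):.4f}")  # higher = worse
```

The coarse forecaster pays a small but systematic accuracy tax purely for rounding.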
As we’ve seen, critical thinking is essential to our ability to forecast, as well as to other aspects of our lives. Broadening our scope to the entire Futures Thinking framework, author and practitioner Peter Schwartz provides some valuable color:
To operate in an uncertain world, people needed to be able to reperceive—to question their assumptions about the way the world works, so that they could see the world more clearly.
Marilee Adams, author of the popular book Change Your Questions, Change Your Life, offers the subsequent piece of the puzzle:
Without an ability to question and critically evaluate what we take in, the cumulative impact of these experiences fuels our anxiety, uncertainty, discord, and effectiveness as leaders and as human beings.
Combining these into a single thought stream: given the uncertainty in the world, we need to think critically to see the world more clearly. If we can’t think critically, the world will feel more uncertain and unpredictable.
That’s a powerful thought. It seems that critical thinking addresses many of the issues we face. And it does, in some ways. However, it isn’t a perfect solution.
Between 1791 and 1804, enslaved people in Haiti did what was impossible. They overthrew a colonial regime, established the first free Black republic, and even built a Black Sans-Souci – complete with running water and magnificent gardens.
What makes this revolution particularly interesting is not just that it happened at all, but how it exceeded everyone’s imagination – even while it was taking place. Even those enacting it couldn’t fully formulate its claims in advance.
It was just too radical.
In a 2025 article, one of his Permutations, Simon Hoher dives into what he calls “unthinkable futures.” Such moments, like the Haitian Revolution described above, are “unthinkable,” yet they shape our future.
Unthinkability, in Hoher’s definition, refers to the gap between what we can imagine and what might become possible. To incorporate terminology from this section, Hoher is essentially saying that critical thinking (and thinking and imagining in general) can only take you so far. Some things are truly beyond our grasp.
In other words, in the realm of “thinkable futures,” critical thinking helps us sort out the various cognitive limitations we face. However, when we venture into the realm of “unthinkable futures,” any benefit provided by critical thinking is lost.
AIRPORT LOUNGE PROVIDES INTEGRAL INSIGHTS - REFUTING MARCUS AURELIUS - A FURTHER GAP IN OUR EDUCATION
Recently, I was stuck in the Minneapolis-St. Paul Airport on a layover with no entertainment and no hope for a productive upcoming flight. I went searching through every bookstore in the airport and, with the help of Gemini, stumbled upon Adam Grant’s 2021 book, Think Again: The Power of Knowing What You Don’t Know.
Over the next 3 hours, I dove deep into the thought process behind what he calls “rethinking.” At that moment, the ideas were intriguing and resonated decently, but I didn’t truly understand the extent to which Grant’s methodologies could apply to Futures Thinking. A week later, when I read Superforecasting and discovered the following, everything came together.
Tetlock begins his account by referencing Daniel Kahneman’s work in Thinking, Fast and Slow, where he defines the two systems that make up our inner workings: “System 1” and “System 2.”
System 1 is our automatic system, running fast and constantly in the background. Conversely, System 2 is our slower, more deliberate, and methodical system, which requires conscious attention and effort.
Hence the book’s title: System 1 enables us to think fast, and System 2 enables us to think slow. When we think, System 1 comes first. To quote Tetlock, “If a question is asked and you instantly know the answer, it sprang from System 1.” From there, System 2 gets involved, mainly to dissect the answer System 1 came up with and see whether it holds up to scrutiny and evidence.
System 2 requires a conscious effort to intervene. Thinking a problem through with System 2 demands sustained focus and takes far longer than the snap judgment we get from System 1. As such, it’s normal human behavior to rely on our strong hunches, the outputs of System 1: if an answer feels right, we assume it is.

However, as Kahneman showed, System 1 is designed to jump to conclusions from little evidence. It’s a part of human DNA; we’re too quick to make up our minds and too slow to change them.
In most cases, this is beneficial. As Tetlock writes, “Indeed, it is the propulsive force behind all human efforts to comprehend reality. The problem is that we move too fast from confusion and uncertainty to a clear and confident conclusion without spending any time in between.”
Tetlock’s research has shown that this effect has a measurable impact on our forecasting abilities. Superforecasters, unlike normal humans and even other forecasters, are more likely to think twice, thrice, or more about things.
For superforecasters, forecasts aren’t made and then locked away until they come to pass. Instead, they are framed as judgments based on the available information present at that moment. If material information becomes available at a later date, superforecasters take the opportunity to reconsider (rethink) their forecasts.
And this is one of the main drivers of forecast accuracy. A forecast that is updated based on new information is more likely to be closer to the real value than a previous one that isn’t so informed.
To begin with, superforecasters’ initial forecasts were at least 50% more accurate than those of regular forecasters. Their constant rethinking and updating of their forecasts pushed that premium even higher.
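Belief updating of this kind is often formalized with Bayes’ rule, which Tetlock and Gardner also discuss. Here is a minimal sketch with invented numbers: a forecast starts at 30%, then a new piece of evidence arrives that is twice as likely to appear if the event is coming.

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.30  # initial forecast (invented)
# New report: assumed 80% likely to appear if the event happens, 40% if not.
belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(f"{belief:.2f}")  # → 0.46
```

The size of the revision is governed by how diagnostic the evidence is; superforecasters’ frequent, small updates correspond to likelihood ratios only slightly different from 1.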
Superforecasters employ rethinking at each phase in the process. As I previewed, superforecasters begin with the outside view(s), then add in elements from the inside view(s). This process of aggregation and interpretation requires many rethinking cycles to produce the initial forecast. Then, when additional information becomes available, they begin the entire process again, engaging in further rounds of rethinking.
As I’ve described, rethinking is critical to our ability to forecast accurately and is especially useful for refining past forecasts. However, rethinking isn’t just beneficial in this niche area; in fact, as Grant will show, it’s vital in every part of our lives. We’ve known this anecdotally for a very long time as a society, but haven’t had much data to prove it until recently. For instance, in his Meditations, Marcus Aurelius states:
If anyone can refute me—show me I’m making a mistake or looking at things from the wrong perspective—I’ll gladly change. It’s the truth I’m after, and the truth never harmed anyone. What harms us is to persist in self-deceit and ignorance.
Similar to Tetlock, Grant begins by discussing how our ability to rethink is hindered by our “cognitive laziness.” We often prefer the ease of holding onto old views (whether those are snap judgments from System 1 or even older views) over the difficulty of grappling with new ones. We don’t just hesitate to rethink our answers; we hesitate at the very idea of rethinking. He highlights a unique rationale that fits perfectly into the overall Futures Thinking framework:
Yet there are also deeper forces behind our resistance to rethinking. Questioning ourselves makes the world more unpredictable. It requires us to admit that the facts may have changed, that what was once right may now be wrong. Reconsidering something we believe deeply can threaten our identities, making it feel as if we’re losing a part of ourselves.
To paraphrase, rethinking brings to the forefront three ideas that we are uncomfortable addressing: 1) that we are uncertain about our answers, 2) that the world is unpredictable, and 3) that we potentially didn’t have the “right” answer the first time. That’s a scary admission, one that we all tend to shy away from.
As such, most of us tend to stick with our knowledge and opinions (what psychologists refer to as seizing and freezing). As we’ve seen in Tenet #4, we favor the comfort of certainty over the discomfort of doubt. This makes sense in a stable, unchanging world, where we get rewarded for having conviction behind our ideas. However, as we’ve seen throughout Tenet #1, the world we live in is rapidly changing, so our tendency to remain fixed causes more harm than good.
This leads to two main biases: confirmation bias (we see what we expect to see) and desirability bias (we see what we want to see). Rethinking provides the solution. Contrary to what you might be thinking, the solution is not to decelerate our thinking; it’s to accelerate our rethinking.
Dissecting our problem further, knowledge often closes our minds to what we don’t know. As Grant writes, “We all have blind spots in our knowledge and opinions. The bad news is that they can leave us blind to our blindness, which gives us false confidence in our judgment and prevents us from rethinking.” We attach to these beliefs and hold on to them longer than we should.
To address this issue and many others, we need to employ the process of rethinking. Why? Grant provides a perfect big picture analogy, which cuts to the heart of Tenet #6:
In driver’s training, we were taught to identify our visual blind spots and eliminate them with the help of mirrors and sensors. In life, since our minds don’t come equipped with those tools, we need to learn to recognize our cognitive blind spots and revise our thinking accordingly.
Rethinking is the solution, working hand-in-hand with critical thinking, diverse perspectives, awareness, and humility to confidently address many of the cognitive limitations we face.
In his explanation of the benefits of rethinking, Grant references Tetlock’s work, citing “The single most important driver of forecasters’ success was how often they updated their beliefs. The best forecasters went through more rethinking cycles. They had the confident humility to doubt their judgments and the curiosity to discover new information that led them to revise their predictions.”
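Grant and Tetlock describe these belief updates qualitatively. One standard way to sketch a single rethinking cycle quantitatively is Bayes’ rule: start from a prior probability and revise it in proportion to how strongly the new evidence favors the hypothesis. This is a minimal illustration, not Tetlock’s actual procedure, and every number in it is invented:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise the probability of a hypothesis after seeing new evidence.

    prior: probability held before the evidence arrived
    p_evidence_if_true / p_evidence_if_false: how likely this evidence
    would be if the hypothesis were true / false.
    """
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# A forecast at 30% meets evidence that is twice as likely if the
# hypothesis is true. The belief moves up, but not to certainty.
updated = bayes_update(0.30, 0.8, 0.4)
print(round(updated, 3))  # 0.462
```

Each new piece of evidence triggers another pass through the same rule, which mirrors the quote above: frequent, measured updates move a forecast toward the evidence without lurching to false certainty.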
Superforecasters are eager and willing to think again, seeing their opinions more as best-guess hunches than as concrete truths. They constantly seek new information and better evidence, especially evidence that goes against their beliefs.
Bringing in the ideas discussed above, a key portion of rethinking is your network. Developing a challenge network whose goal is to point out our blind spots and weaknesses can push us into rethinking cycles through their diverse perspectives. Grant writes, “We learn more from people who challenge our thought processes than those who affirm our conclusions.”
One of my earliest Futures Thinking articles was titled “Addressing A Critical Flaw in Our Education.” In it, I discussed how education systems primarily teach linear thinking, even though the world is predominantly exponential.
Similarly, rethinking is another critical gap in our educational systems. The strong emphasis on imparting knowledge and building confidence in kids means many teachers don’t do enough to encourage students to doubt, face uncertainty, or recognize unpredictability.
Instead, we should be embracing the movement to encourage kids to think like fact-checkers. Grant details three main guidelines of the theory: 1) interrogate information instead of simply consuming it, 2) reject rank and popularity as a proxy for reliability, and 3) understand that the sender of information is often not its source.
This gap isn’t limited to elementary education. Lectures, one of the primary teaching forms in colleges across the world, aren’t designed to accommodate a dialogue. Instead, they turn students into passive recipients of information rather than active thinkers engaged in critical and reflective cycles.
As valuable as rethinking is, we don’t do it enough. The world’s complexities demand continuous adaptation and rethinking to thrive; however, we often are stuck in our initial thoughts and beliefs. Given how broad and complex the world is, one key question here is what specifically we should be rethinking.
As we’ve discussed, we should be quick to rethink assumptions, intuitions, and opinions we’ve taken for granted. However, it’s less clear how we should approach more profound knowledge, core beliefs, and sacred values.
We shouldn’t recklessly abandon these foundational concepts of our lives, but we should continually re-evaluate where we stand. This isn’t an argument to rashly rethink the foundations of our being; however, we shouldn’t do the opposite and simply accept everything as truth. The ideal answer is somewhere in the middle.
Credit Sloww
LESSONS FROM THE CRACKED POT - HOW TO “KNOW THYSELF” - CULTIVATING ACTIVE OPEN-MINDEDNESS
A popular Indian folktale, called “The Cracked Pot”, reads as follows:
Long ago there lived a man whose job it was to haul water from the stream uphill to his master’s house many times each day. To do this work, the water bearer had two large pots that hung from each end of a pole he carried across the back of his neck, balanced over the top of his shoulders. The two pots were identical, but only one of them was perfect – the other one had a small crack in it, so with every trip up the hill the cracked pot lost nearly half of its water while the perfect pot delivered a full portion.
The perfect pot was proud of its accomplishments, and loved to brag about them. He also loved to point out to the cracked pot how flawed it was…that no matter how hard the water bearer worked, the cracked pot only ever managed to deliver a half portion of water to the master’s house. The cracked pot felt ashamed of his imperfection and was miserable that he could accomplish only half of what he had been made to do.
One day the cracked pot spoke to the water bearer when they had stopped by the stream. “I am ashamed of myself, and I must apologize to you for my flaw…for my inability to carry all the water you need me to carry. You work so hard and I fail to give you the full value for your effort.”
The water bearer listened and looked upon the pot with compassion and said, “As we return to the master’s house this time, I want you to pay attention to the beautiful flowers growing along the path.” And indeed, as they went back up the hill, the cracked pot did take notice of the sun warming the beautiful wild flowers on the side of the path, and felt cheered…somewhat. But at the end of the trail, the pot again felt miserable and apologized once more for having lost half its water along the way.
The water bearer said to the pot, “I’m afraid you do not understand what I was trying to show you. Did you not notice that there were flowers only on your side of your path, and not on the other pot’s side? That’s because I planted flower seeds on your side of the path, and every day while we walk back from the stream, you’ve watered them. I have known about your crack for some time and could have crafted a new pot. But because of your flaw, we have been able…together…to grow beautiful flowers and with them bless many tables. Without you being just the way you are, where would we have found such beauty?”
The most integral tools we can use to counter our cognitive limitations are awareness and humility. These two characteristics go hand-in-hand. First, we need to recognize that we have a limitation in the first place; second, we need the humility to implement a solution and change our behavior.
Christophe André, in his book Looking at Mindfulness: Twenty-Five Paintings to Change the Way You Live, writes, “We must decide to open our mind’s door to all that lies beyond it, rather than hiding away in one of our inner fortresses, such as rumination, reflection, certainty, or expectation.”
As I’ve shown, humans have ingrained tendencies to favor certainty and absolutism. We dislike doubt and uncertainty. We love when the world is either black or white, and despise when there are many shades of gray in the middle. Andre provides a glimpse of the solution, one which the ancients have been preaching for millennia: awareness.
Awareness is a central concept of Buddhism. In the spiritual tradition, the teachings emphasize that true awareness is not tied to a specific object, but is a direct, nonconceptual knowing of the present moment. Through practices such as mindfulness and meditation, practitioners can observe their thoughts and emotions without being caught by them.
Socrates’s famous phrase, “Know thyself,” is a direct call for self-awareness. He believed that understanding one’s own nature, virtues, and vices is the key to wisdom and a virtuous life. Plato, building on the ideas of Socrates, employed the classic “Allegory of the Cave” to illustrate this issue. The prisoners in the cave, who were only aware of the shadows, represented a state of profound ignorance. The journey out of the cave to see the sun represented ascension to a higher level of awareness and knowledge.
Cultivating awareness enables us to consider broader perspectives and integrate them into our thought processes. It’s the first crucial step in our process. The second is cultivating and acting on our humility.
In his book, Grant explains how intellectual humility—knowing what we don’t know—is the first part of the rethinking cycle. We discussed the idea of knowing what we don’t know in Tenet #4, but Grant broadens the discussion:
Recognizing our shortcomings opens the door to doubt. As we question our current understanding, we become curious about what information we’re missing. That search leads us to new discoveries, which in turn maintain our humility by reinforcing how much we still have to learn. If knowledge is power, knowing what we don’t know is wisdom.
The rethinking process favors humility over pride, doubt over certainty, and curiosity over closure. Our goal should be to cultivate humility grounded in confidence, which is the key to overcoming many cognitive limitations. Confident humility means having faith in our capabilities while appreciating that we may not have the right solution or even be addressing the right problem.
As you can see in the table above, the ideal situation is to believe in ourselves while being uncertain about the tools that we possess. Luckily, confident humility can be taught. In one experiment, when students read an article about the benefits of admitting what they don’t know rather than being certain, their odds of seeking help when confronted with complex issues went from 65% to 85%.
However, as we advance from novice to amateur and beyond, this progress can break the rethinking cycle. As we gain experience, we lose some of our humility. We enter an overconfidence cycle that prevents us from doubting what we know and from being curious about what we don’t.
Luckily, a dose of complexity can disrupt overconfidence cycles and spur rethinking, bringing us back on track. Grant writes, “It gives us more humility about our knowledge and more doubts about our opinions, and it can make us curious enough to discover information we were lacking.”
Tetlock found that superforecasters often have a spirit of humility—a feeling that the complexity of reality is staggering and our ability to comprehend it is limited. It includes the maturity to admit that it’s impossible ever to have all the answers.
Tetlock and Grant marry the two ideas together and show how they’re highly applicable to superforecasters in their discussion of active open-mindedness, an essential aspect of superforecasters.
Coined by psychologist Jonathan Baron in 1993, actively open-minded thinking is characterized by a willingness to seek out and reflect on contrary evidence, with an openness to changing one’s mind in light of new evidence.
Active open-mindedness requires metacognitive awareness, which is the ability to think about your own thinking. It means being conscious of your mental processes, biases, and the tendency to favor information that confirms what you already believe. It involves recognizing that your initial beliefs may be flawed or incomplete and that your gut reactions aren’t always accurate. This self-awareness allows you to step back from your own mental shortcuts and deliberately seek out alternative perspectives.
The practice of active open-mindedness is an exercise in intellectual humility. As I’ve highlighted, this is the recognition that your own knowledge is limited and that you are fallible. It means accepting the possibility of being wrong without feeling threatened or defensive. Instead of seeing a change of mind as a sign of weakness, an intellectually humble person sees it as an opportunity for growth and further accuracy.
Active open-mindedness helps participants combat overconfidence, confronting the limits of their knowledge and considering other possibilities. Similarly, it reduces reliance on confirmation bias by deliberately seeking information that challenges initial hypotheses, leading to a more complete and accurate understanding.
Furthermore, and most importantly, Tetlock’s research found a correlation between a team’s active open-mindedness and the accuracy of its forecasts. Similarly, a 2023 study by Haran, Ritov, and Mellers, published by Cambridge University Press, tested forecasting ability while varying how much control participants had over the amount of information they could collect before estimating. Of the factors measured, only actively open-minded thinking statistically predicted performance.
Credit Harvard Business School
THE BEST ELECTION FORECASTER - FATE MINDSET CORRELATES WITH PREDICTIONS - THE CHARACTERISTICS OF SUPERFORECASTERS
Back in the 1990s, Jean-Pierre had a hobby of collecting the predictions that pundits made on the news and scoring his own forecasts against them. Eventually, he started competing in forecasting tournaments—international contests hosted by Good Judgment, where people try to predict the future.
It’s a daunting task; there’s an old saying that historians can’t even predict the past. A typical tournament draws thousands of entrants from around the world to anticipate big political, economic, and technological events. The questions are time-bound, with measurable, specific results. Will the current president of Iran still be in office in six months? Which soccer team will win the next World Cup? In the following year, will an individual or a company face criminal charges for an accident involving a self-driving vehicle?
Participants don’t just answer yes or no; they have to give their odds. It’s a systematic way of testing whether they know what they don’t know. They get scored months later on accuracy and calibration, earning points not just for giving the right answer, but also for having the right level of conviction. The best forecasters have confidence in their predictions that come true and doubt in their predictions that prove false.
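For context, tournaments like the ones Tetlock studied commonly score forecasts with the Brier score: the squared gap between each stated probability and what actually happened. Below is a minimal sketch of the binary form (real tournaments typically score across all answer options and across time), with invented example numbers:

```python
def brier_score(forecasts, outcomes):
    # Binary Brier score: mean squared gap between each stated probability
    # and the outcome (1 if the event happened, 0 if it didn't).
    # 0.0 is a perfect score; lower is better.
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Calibrated conviction beats hedging on the same three questions:
confident = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
hedger = brier_score([0.5, 0.5, 0.5], [1, 1, 0])
print(round(confident, 3), round(hedger, 3))  # 0.02 0.25
```

A perpetual 50% hedger earns a mediocre 0.25 on every question, which is why the scoring rewards the right level of conviction rather than fence-sitting.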
On November 18, 2015, Jean-Pierre registered a prediction that stunned his opponents. A day earlier, a new question had popped up in an open forecasting tournament: in July 2016, who would win the U.S. Republican presidential primary? The options were Jeb Bush, Ben Carson, Ted Cruz, Carly Fiorina, Marco Rubio, Donald Trump, and none of the above. With eight months to go before the Republican National Convention, Trump was largely seen as a joke. His odds of becoming the Republican nominee were only 6 percent, according to Nate Silver, the celebrated statistician behind the website FiveThirtyEight. When Jean-Pierre peered into his crystal ball, though, he decided Trump had a 68 percent chance of winning.
Jean-Pierre didn’t just excel in predicting the results of American events. His Brexit forecasts hovered in the 50 percent range when most of his competitors thought the referendum had little chance of passing. He successfully predicted that the incumbent would lose a presidential election in Senegal, even though the base rates of reelection were extremely high and other forecasters were expecting a decisive win. And he had, in fact, pegged Trump as the favorite long before pundits and pollsters even considered him a viable contender. “It’s striking,” Jean-Pierre wrote early on, back in 2015, that so many forecasters are “still in denial about his chances.” Based on his performance, Jean-Pierre might be the world’s best election forecaster.
When you read about their abilities in action (as written by Adam Grant), Tetlock’s superforecasters almost seem superhuman. I wouldn’t go that far, but in many ways, they’re among the best at what they do.
We’ve seen how they curate the broadest possible spectrum of information when analyzing a new problem, drawing on a wide variety of inside and outside perspectives. Similarly, these practitioners continuously engage in critical thinking exercises, interrogating the data to identify weak points and gaps, and ultimately molding it into a cohesive viewpoint.
When approaching their analyses, they consider the full range of potential solutions, consistently rethinking and refining their perspective. Critical factors underlying their process are a broad awareness of the world, the humility to recognize that their mindset may be incomplete, and the willingness to change it.
These factors are core principles that enable superforecasters to achieve statistically significant improvements in their forecasting abilities.
To be clear, they aren’t doing anything that the average person, with enough effort, couldn’t also do. For some, this framework comes easily; for others, it’s the product of a lifelong journey.
As such, if we invest time in improving in these areas, we too can predict better and respond with greater ease to the fundamental characteristics of the world (complexity, nonlinearity, impermanence, etc.), its uncertainties, the inherent disruptions that lead to failure and collapse, and its unpredictability.
I’ve harped on the benefits of superforecasters over the last 35 pages because I believe superforecasting is a crucial modern development in the science of Futures Thinking. In addition to everything above, here are a few additional tidbits about superforecasters.
On a scale from 1 to 9, where 1 is the total rejection of it-was-meant-to-be thinking, and 9 is a complete embrace of it, the mean score of US adults is in the middle. Undergraduate students at the University of Pennsylvania scored slightly lower. Regular forecasters were a little lower than that. And superforecasters got the lowest scores of all, confidently on the rejection-of-fate side.
Another central feature of superforecasters is their comfort and confidence with numbers. Most are capable of putting them to practical use in complex models.
Tetlock provides a summary of the characteristics of superforecasters:
In a philosophic outlook, they tend to be:
Cautious: nothing is certain
Humble: reality is infinitely complex
Nondeterministic: what happens is not meant to be and does not have to happen
In their abilities and thinking styles, they tend to be:
Actively open-minded: beliefs are hypotheses to be tested, not treasures to be protected
Intelligent and knowledgeable, with a “need for cognition”: intellectually curious, enjoy puzzles and mental challenges
Reflective: introspective and self-critical
Numerate: comfortable with numbers
In their methods of forecasting they tend to be:
Pragmatic: not wedded to any idea or agenda
Analytical: capable of stepping back from the tip-of-your-nose perspective and considering other views
Dragonfly-eyed: value diverse views and synthesize them into their own
Probabilistic: judge using many grades of maybe
Thoughtful updaters: when facts change, they change their minds
Good intuitive psychologists: aware of the value of checking thinking for cognitive and emotional biases
In their work ethic, they tend to have:
A growth mindset: believe it’s possible to get better
Grit: determined to keep at it however long it takes
This framework for the ideal characteristics of superforecasters, as discussed extensively above, isn’t foolproof. Superforecasters often tackle problems in roughly the same way; however, there are minor differences.
Overall, the methodology holds: 1) Delineate as sharply as possible between known and unknown data, 2) Approach the problem first with an outside, overarching perspective, then add back in specific nuances from the uniqueness of the issue, 3) Compare and contrast other views, paying special attention to those which contradict your thinking, 4) Aggregate all of these into a singular vision of the problem, and 5) Express your answer as precisely as possible without claiming more precision than you actually have.
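As a toy illustration of steps 2 through 5 (this is not Tetlock’s actual procedure, and every number here is invented), here is how an outside-view anchor, an inside-view adjustment, and the aggregation of contrasting views might combine into one precise probability:

```python
# Step 2: anchor on the outside view (a base rate for similar events),
# then adjust for the specifics of this case (the inside view).
base_rate = 0.30
inside_adjustment = 0.15   # case-specific evidence nudges us upward
my_estimate = base_rate + inside_adjustment

# Steps 3-4: compare against contrasting views and aggregate them
# into a singular vision of the problem (a simple mean here).
peer_estimates = [0.40, 0.55, 0.35]
aggregate = (my_estimate + sum(peer_estimates)) / (1 + len(peer_estimates))

# Step 5: state the answer as a precise probability.
print(round(aggregate, 2))  # 0.44
```

Real superforecasters weight these ingredients far more carefully, but the skeleton is the same: outside view first, inside view second, rivals aggregated, answer stated in fine-grained odds.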
Through this methodology, you can significantly improve your forecasting abilities, mitigate the effects of cognitive biases on your thinking, and proactively address fundamental characteristics of the world.
Throughout Tenet #6, our pursuit has been to accomplish just that: minimize or eliminate the effect of cognitive limitations on our thought processes. The principles of superforecasting provide many solutions; however, they are not exhaustive. An additional resource, or thought genre, provides supplemental value critical to our narrative: the realm of systems thinking.
Credit Citizens for Global Solutions
SYSTEMS THINKING PROVIDES A SOLUTION TO COGNITIVE LIMITATIONS - ADDRESSING BOUNDARIES AND BOUNDED RATIONALITY - INVITING OTHERS TO CHALLENGE OUR ASSUMPTIONS
In her best-selling 2008 book, Thinking in Systems, Donella Meadows explores how systems thinkers address the inherent uncertainty of systems. In her words, “If you can’t understand, predict, and control, what is there to do?”
This question cuts to the heart of our discussion today: finding ways to address the cognitive limitations present in our predictive problems and, from a more 10,000-foot view, shaping our responses to all of the volatility, disorder, failures, collapses, Black Swan events, uncertainties, and unpredictability present in life.
We’ve seen above how the discovery of superforecasters provides a guide for how we can approach predicting better, with meaningful increases in measured performance. Systems thinking offers a further answer for facing the uncertainties present in life.
As it’s defined on Wikipedia, systems thinking is a “way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts.”
We’ve discussed previously in Tenet #1 how the world is a complex system (technically a system of systems), with many inherent interconnections, nonlinearities, randomness, and feedback loops. These give rise to continuously changing variables and emergent phenomena.
Given these properties of the world, how have systems thinkers developed a proactive approach?
First, before intervening to disturb the system, systems thinkers watch how it behaves. As we’ll discuss in Tenet #7, this is a way to address the premature introduction of fragilities into the system.
Begin by learning the system’s history. This focus on the system’s behavior forces you to focus on facts without being blinded by your own beliefs, misconceptions, or other cognitive limitations. The history of the system directs us toward a dynamic, rather than static, analysis of the world, dissecting not only which elements are in the system but also how they are interconnected.
This isn’t always an intuitive viewpoint. As we’ve discussed, our minds tend to think in terms of single causes producing single effects. In contrast, systems thinking engages with multiple causes, multiple effects, and emergent phenomena throughout them.
Another key portion of systems thinking is the analysis of boundaries. As we discussed in Tenet #4, one of our natural responses to the uncertainty in the world is to simplify it into manageable portions (though we’ve seen that these don’t reflect the world as it truly is).
To understand complex systems, we have to simplify, which means we have to draw boundaries. Where should we draw them? There is no single, legitimate boundary to draw around a system. In Meadows’ words, “We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we’ve artificially created them.”
The world’s interconnectedness shows that there are no separate systems; the world is a continuum. We draw boundaries for analysis and discussion; however, the natural human tendency is to make them too small. As such, we leave out valuable information from our scope, creating cognitive limitations to our thinking and predictive abilities. Meadows writes:
Ideally, we would have the mental flexibility to find the appropriate boundary for thinking about each new problem. We get attached to the boundaries our minds happen to be accustomed to… It’s a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose.
Adam Smith and other influential economists developed the theory of rational economic agents, which has since been widely adopted as a basis for many economic actions and theories. It discusses two fundamental principles of human behavior: 1) humans act with perfect optimality on complete information, and 2) when many humans do this, their actions add up to the best possible outcome for everybody.
However, many economists and psychologists have shown, among them Daniel Kahneman and Dan Ariely, that humans more often make decisions based on the theory of bounded rationality. If you’re unfamiliar with the term, it refers to the idea that individuals’ decision-making is limited by factors such as incomplete information, time constraints, and cognitive limitations, leading them to make “good-enough” rather than “optimal” decisions.
This understanding doesn’t excuse narrow-minded behavior; rather, it explains why the behavior arises. Within the bounds of what a person in one part of the system can see and know, that behavior is reasonable.
How can we adapt to this new understanding? Going full circle, the solution comes from stepping outside the limited information available from any single place in the system (the “inside view”) and gaining an overview of the entire system (the “outside view”). As Meadows writes, “From a wider perspective, information flows, goals, incentives, and disincentives can be restructured so that separate, bounded, rational actions do add up to results that everyone desires.”
The bounded rationality of each actor in a system is determined by the information, incentives, disincentives, goals, stresses, and constraints impinging on that actor. One way to address this is through diverse perspectives and the integration of more information. Another way is much more abrupt: redesigning the system itself to improve the information, incentives, disincentives, goals, stresses, and constraints that affect actors within the system.
The most crucial contribution of systems thinking to our present discourse is that it encourages models to be open-sourced. Donella Meadows speaks in depth on this topic in her book:
Get your model out there where it can be viewed. Invite others to challenge your assumptions and add their own. Instead of becoming a champion for one possible explanation or hypothesis or model, collect as many as possible. Consider all of them to be plausible until you find some evidence that causes you to rule one out.
As the world continues to change rapidly, systems thinking can help us manage, adapt, and recognize the wide range of choices we face. We can’t navigate well in an interconnected, feedback-dominated world unless we take our eyes off short-term immediacies and look for long-term behavior and structure; unless we are aware of false boundaries and bounded rationality; unless we take into account limiting factors and nonlinearities.
While systems thinking can help us understand many things we didn’t before, it can’t help us know everything.
Credit Reddit
SIMPLE SOLUTIONS TO PREDICT BETTER - THERE ARE SOME REALMS WE WILL NEVER BE ABLE TO PREDICT - PROACTIVE PREDICTION VS. REACTIVE RESPONSE
We embarked on a journey to address cognitive limitations with a large table detailing each limitation impacting our predictive abilities, as identified in Tenet #5, and their proposed solutions.
Hopefully, you’ve seen that there are simple steps we can take today to significantly address many of these limitations, minimizing their effect on our lives. The two leading solutions are the characteristics of superforecasters (diverse perspectives, critical thinking, rethinking, awareness, and humility) and the methodology of systems thinkers.
These frameworks genuinely assist our forecasting. Unfortunately, they don’t solve every last predictive issue we face. There are still some fundamental aspects of the world that we simply cannot predict better than random chance.
Tetlock details some of these in the latter half of his book. As he showed throughout his studies, and as other researchers have similarly concluded, “human cognitive systems will never be able to forecast turning points in the lives of individuals or nations several years into the future.” In other words, our forecasting accuracy degrades exponentially the further into the future we wish to project. In his work, Tetlock found the accuracy of expert predictions declined toward random chance around 5 years into the future.
Tetlock also reflects on how his methodologies supplement his colleague Nassim Taleb’s work in discussing Black Swan events. For these highly improbable events, which occur so infrequently, it would take decades, centuries, or millennia to aggregate enough data and forecasts to accurately determine whether people could correctly predict them with statistical significance.
As such, the current results tell us nothing about how good superforecasters are at spotting these Black Swan events; we simply don’t know. Tetlock writes further:
We may have no evidence that superforecasters can foresee events like those of September 11, 2001, but we do have a warehouse of evidence that they can forecast questions like: Will the United States threaten military action if the Taliban don’t hand over Osama bin Laden? Will the Taliban comply? Will bin Laden flee Afghanistan prior to the invasion? To the extent that such forecasts can anticipate the consequences of events like 9/11, and these consequences make a black swan what it is, we can forecast black swans.
Given these bits of commentary along with everything discussed throughout Tenets #4, #5, and now #6, we can fully flesh out the table of circumstances where we can and can’t predict, as well as highlight the gray area in between.
Undoubtedly, there are many areas in which we can and do predict with relative accuracy. Those are often near-term events governed by linear, known variables, with little noise.
Having said that, these domains are often the areas that, colloquially, don’t move the needle much in our lives. Yes, there are instances where a butterfly flapping its wings creates a tornado across the world, but that’s also the product of many other variables.
As you can see above, the gray area and the area where we can’t predict better than random chance are much more life-altering, with widespread consequences. This is worrying: we don’t have the capabilities, and probably won’t ever have them, to predict many of these events.
Given that, what should we do?
Luckily, that’s the subject of our next portion of Tenets, #7, #8, and #9. We’ll start by diving deeper into the other ways our lives (whether through our own doing or a natural characteristic of the world) are fragile and how we can recognize those aspects, beginning with Tenet #7:
Fragile systems amplify their own vulnerabilities, increasing the risk of failure and worsening our cognitive limitations.
That’s all for today. I’ll be back in your inbox on Saturday with The Saturday Morning Newsletter.
Thanks for reading,
Drew Jackson
Stay Connected
Website: brainwaves.me
Twitter: @brainwavesdotme
Email: brainwaves.me@gmail.com
Thank you for reading the Brainwaves newsletter. Please ask your friends, colleagues, and family members to sign up.
Brainwaves is a passion project educating everyone on critical topics that influence our future, key insights into the world today, and a glimpse into the past from a forward-looking lens.
To view previous editions of Brainwaves, go here.
Want to sponsor a post or advertise with us? Reach out to us via email.
Disclaimer: The views expressed here are my personal opinions and do not represent any current or former employers. This content is for informational and educational purposes only, not financial advice. Investments carry risks—please conduct thorough research and consult financial professionals before making investment decisions. Any sponsorships or endorsements are clearly disclosed and do not influence editorial content.