Thinking in Bets (How To Improve Decision Making)

Issue #6 of Rinaldo's weekly newsletter

Hello friends,

Last year I read another book about decision making. I think I take too long to make important decisions and often find myself in analysis paralysis mode. Therefore, I wanted to revisit my notes, quotes and clippings from this book and share my findings with you. What follows is content from Thinking in Bets by Annie Duke.

Hope it helps.

Thinking in bets starts with recognizing that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.  

Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable. When we say, “I should have known that would happen,” or, “I should have seen it coming,” we are succumbing to hindsight bias.

We link results with decisions even though it is easy to point out indisputable examples where decisions and results aren’t perfectly correlated.

We are discouraged from saying “I don’t know” or “I’m not sure.” We regard those expressions as vague, unhelpful, and even evasive. But getting comfortable with “I’m not sure” is a vital step to being a better decision-maker. We have to make peace with not knowing. Embracing “I’m not sure” is difficult. We are trained in school that saying “I don’t know” is a bad thing. Not knowing in school is considered a failure of learning. Write “I don’t know” as an answer on a test and your answer will be marked wrong.

What makes a decision great is not that it has a great outcome. A great decision is the result of a good process, and that process must include an attempt to accurately represent our own state of knowledge. That state of knowledge, in turn, is some variation of “I’m not sure.”

Acknowledging uncertainty is the first step in executing on our goal to get closer to what is objectively true. To do this, we need to stop treating “I don’t know” and “I’m not sure” like strings of dirty words.

What good poker players and good decision-makers have in common is their comfort with the world being an uncertain and unpredictable place. They understand that they can almost never know exactly how something will turn out. They embrace that uncertainty and, instead of focusing on being sure, they try to figure out how unsure they are, making their best guess at the chances that different outcomes will occur. The accuracy of those guesses will depend on how much information they have and how experienced they are at making such guesses. This is part of the basis of all bets.

That’s true in any business. Start-ups have very low chances of succeeding but they try nonetheless, attempting to find the best strategy to achieve the big win, even though none of the strategies is highly likely to create success for the company. This is still worthwhile because the payoff can be so large.

There are many reasons why wrapping our arms around uncertainty and giving it a big hug will help us become better decision-makers. Here are two of them. First, “I’m not sure” is simply a more accurate representation of the world. Second, and related, when we accept that we can’t be sure, we are less likely to fall into the trap of black-and-white thinking.

The secret is to make peace with walking around in a world where we recognize that we are not sure and that’s okay. As we learn more about how our brains operate, we recognize that we don’t perceive the world objectively. But our goal should be to try.

When we think in advance about the chances of alternative outcomes and make a decision based on those chances, it doesn’t automatically make us wrong when things don’t work out. It just means that one event in a set of possible futures occurred.

Decisions are bets on the future, and they aren’t “right” or “wrong” based on whether they turn out well on any particular iteration. An unwanted result doesn’t make our decision wrong if we thought about the alternatives and probabilities in advance and allocated our resources accordingly.

When we think probabilistically, we are less likely to use adverse results alone as proof that we made a decision error, because we recognize the possibility that the decision might have been good but luck and/or incomplete information (and a sample size of one) intervened. Maybe we made the best decision from a set of unappealing choices, none of which were likely to turn out well. Maybe we committed our resources on a long shot because the payout more than compensated for the risk, but the long shot didn’t come in this time. Maybe we made the best choice based on the available information, but decisive information was hidden and we could not have known about it. Maybe we chose a path with a very high likelihood of success and got unlucky. Maybe there were other choices that might have been better and the one we made wasn’t wrong or right but somewhere in between. The second-best choice isn’t wrong. By definition, it is more right (or less wrong) than the third-best or fourth-best choice. It is like the scale at the doctor’s office: there are a lot more possibilities than just the extremes of obesity or anorexia. For most of our decisions, there will be a lot of space between unequivocal “right” and “wrong.”

When we move away from a world where there are only two opposing and discrete boxes that decisions can be put in—right or wrong—we start living in the continuum between the extremes. Making better decisions stops being about wrong or right and starts being about calibrating among all the shades of grey.

Redefining wrong allows us to let go of all the anguish that comes from getting a bad result. But it also means we must redefine “right.” If we aren’t wrong just because things didn’t work out, then we aren’t right just because things turned out well.

Being right feels really good. “I was right,” “I knew it,” “I told you so”—those are all things that we say, and they all feel very good to us. Should we be willing to give up the good feeling of “right” to get rid of the anguish of “wrong”? Yes.

First, the world is a pretty random place. The influence of luck makes it impossible to predict exactly how things will turn out, and all the hidden information makes it even worse. If we don’t change our mindset, we’re going to have to deal with being wrong a lot. It’s built into the equation.

Losses in general feel about two times as bad as wins feel good. So winning $100 at blackjack feels as good to us as losing $50 feels bad to us. Because being right feels like winning and being wrong feels like losing, that means we need two favorable results for every one unfavorable result just to break even emotionally. Why not live a smoother existence, without the swings, especially when the losses affect us more intensely than the wins?
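If it helps to see the arithmetic behind that emotional break-even point, here is a minimal sketch assuming the rough 2x loss-aversion multiplier described above. The function name and the idea of “felt units” are my own illustrative inventions, not anything from the book.

```python
# Minimal sketch of the loss-aversion arithmetic described above.
# Assumption: losses weigh roughly twice as much as equivalent wins.

LOSS_AVERSION = 2.0  # illustrative multiplier; "losses feel ~2x as bad"

def emotional_impact(outcome_dollars: float) -> float:
    """Return a rough 'felt' value: wins count at face value, losses count double."""
    if outcome_dollars >= 0:
        return outcome_dollars
    return LOSS_AVERSION * outcome_dollars  # losses hurt about twice as much

# Winning $100 feels about as good as losing $50 feels bad:
print(emotional_impact(100))   # +100 "felt" units
print(emotional_impact(-50))   # -100 "felt" units

# With equal-sized wins and losses, you need about two wins per loss
# just to break even emotionally:
outcomes = [+100, +100, -100]  # two wins, one loss of the same size
print(sum(emotional_impact(x) for x in outcomes))  # 0.0 -> emotional break-even
```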

One of the reasons we don’t naturally think of decisions as bets is because we get hung up on the zero-sum nature of the betting that occurs in the gambling world; betting against somebody else (or the casino), where the gains and losses are symmetrical. One person wins, the other loses, and the net between the two adds to zero. Betting includes, but is not limited to, those situations. In most of our decisions, we are not betting against another person. Rather, we are betting against all the future versions of ourselves that we are not choosing. We are constantly deciding among alternative futures:

Whenever we make a choice, we are betting on a potential future. We are betting that the future version of us that results from the decisions we make will be better off.

Ignoring the risk and uncertainty in every decision might make us feel better in the short run, but the cost to the quality of our decision-making can be immense.

If we can find ways to become more comfortable with uncertainty, we can see the world more accurately and be better for it.

As with many of our irrationalities, how we form beliefs was shaped by the evolutionary push toward efficiency rather than accuracy.

We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

This irrational, circular information-processing pattern is called motivated reasoning. 

Fake news isn’t meant to change minds. As we know, beliefs are hard to change. The potency of fake news is that it entrenches beliefs its intended audience already has, and then amplifies them. The Internet is a playground for motivated reasoning. It provides the promise of access to a greater diversity of information sources and opinions than we’ve ever had available, yet we gravitate toward sources that confirm our beliefs, that agree with us. Every flavor is out there, but we tend to stick with our favorite.

The Internet, which gives us access to a diversity of viewpoints with unimaginable ease, in fact speeds our retreat into a confirmatory bubble. No matter our political orientation, none of us is immune.

We just want to think well of ourselves and feel that the narrative of our life story is a positive one. Being wrong doesn’t fit into that narrative. If we think of beliefs as only 100% right or 100% wrong, when confronting new information that might contradict our belief, we have only two options: (a) make the massive shift in our opinion of ourselves from 100% right to 100% wrong, or (b) ignore or discredit the new information. It feels bad to be wrong, so we choose (b). Information that disagrees with us is an assault on our self-narrative. We’ll work hard to swat that threat away. On the flip side, when additional information agrees with us, we effortlessly embrace it.

How we form beliefs, and our inflexibility about changing our beliefs, has serious consequences because we bet on those beliefs. Every bet we make in our lives depends on our beliefs.

Surprisingly, being smart can actually make bias worse: the smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view.

We can train ourselves to view the world through the lens of “Wanna bet?” Once we start doing that, we are more likely to recognize that there is always a degree of uncertainty, that we are generally less sure than we thought we were, that practically nothing is black and white, 0% or 100%. And that’s a pretty good philosophy for living.

When we express our beliefs (to others or just to ourselves as part of our internal decision-making dialogue), they don’t generally come with qualifications. What if, in addition to expressing what we believe, we also rated our level of confidence about the accuracy of our belief on a scale of zero to ten? Instead of thinking of confidence as all-or-nothing (“I’m confident” or “I’m not confident”), our expression of confidence would then capture all the shades of grey in between. Zero would mean we are certain a belief is not true. Ten would mean we are certain that our belief is true. A zero-to-ten scale translates directly to percentages: if you think the belief rates a three, that means you are 30% sure the belief is accurate.

Forcing ourselves to express how sure we are of our beliefs brings to plain sight the probabilistic nature of those beliefs, that what we believe is almost never 100% or 0% accurate but, rather, somewhere in between.
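As a small illustration (mine, not the book’s), here is a hypothetical Belief structure that pairs a statement with a zero-to-ten rating and translates it into a percentage. The class, field names, and example statement are invented purely for the sketch.

```python
# Minimal sketch of expressing a belief with a confidence rating instead of
# stating it as a flat fact. Names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Belief:
    statement: str
    rating: int  # 0 = certain it's false, 10 = certain it's true

    @property
    def percent_sure(self) -> int:
        # The zero-to-ten scale translates directly to percentages:
        # a rating of 3 means "about 30% sure this is accurate".
        return self.rating * 10

    def express(self) -> str:
        return f"{self.statement} (I'm about {self.percent_sure}% on this)"

b = Belief("This launch will slip past Q3", rating=7)
print(b.express())  # "This launch will slip past Q3 (I'm about 70% on this)"
```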

We can also express how confident we are by thinking about the number of plausible alternatives and declaring that range.

Incorporating uncertainty in the way we think about what we believe creates open-mindedness, moving us closer to a more objective stance toward information that disagrees with us. We are less likely to succumb to motivated reasoning since it feels better to make small adjustments in degrees of certainty instead of having to grossly downgrade from “right” to “wrong.”

When confronted with new evidence, it is a very different narrative to say, “I was 58% but now I’m 46%.” That doesn’t feel nearly as bad as “I thought I was right but now I’m wrong.” Our narrative of being a knowledgeable, educated, intelligent person who holds quality opinions isn’t compromised when we use new information to calibrate our beliefs, compared with having to make a full-on reversal. This shifts us away from treating information that disagrees with us as a threat, as something we have to defend against, making us better able to truthseek.
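The book doesn’t frame it this way, but those small adjustments are essentially Bayesian updates: nudge a prior probability in light of new evidence instead of flipping from “right” to “wrong.” A minimal sketch, with made-up likelihoods chosen only so the numbers echo the 58%-to-46% example:

```python
# Minimal Bayes-rule sketch of calibrating a belief rather than reversing it.
# All numbers are illustrative assumptions.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(belief | evidence) given a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

prior = 0.58                      # "I was 58%..."
posterior = bayes_update(
    prior,
    p_evidence_if_true=0.5,       # evidence is moderately likely if I'm right
    p_evidence_if_false=0.8,      # but more likely if I'm wrong
)
print(round(posterior, 2))        # ~0.46 -> "...now I'm 46%"
```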

There is no sin in finding out there is evidence that contradicts what we believe. The only sin is in not using that evidence as objectively as possible to refine that belief going forward.

We assume that if we don’t come off as 100% confident, others will value our opinions less. The opposite is usually true. If one person expresses a belief as absolutely true, and someone else expresses a belief by saying, “I believe this to be true, and I’m 80% on it,” who are you more likely to believe?

The fact that the person is expressing their confidence as less than 100% signals that they are trying to get at the truth, that they have considered the quantity and quality of their information with thoughtfulness and self-awareness. And thoughtful and self-aware people are more believable.

We can’t just “absorb” experiences and expect to learn. As novelist and philosopher Aldous Huxley recognized, “Experience is not what happens to a man; it is what a man does with what happens to him.” There is a big difference between getting experience and becoming an expert. That difference lies in the ability to identify when the outcomes of our decisions have something to teach us and what that lesson might be.

When the future unfolds into a set of outcomes, we are faced with another decision: why did things happen the way they did?

The more evidence we get from experience, the less uncertainty we have about our beliefs and choices. Actively using outcomes to examine our beliefs and bets closes the feedback loop, reducing uncertainty. This is the heavy lifting of how we learn.

The challenge is that any single outcome can happen for multiple reasons. The unfolding future is a big data dump that we have to sort and interpret. And the world doesn’t connect the dots for us between outcomes and causes.

We are good at identifying the “-ER” goals we want to pursue (better, smarter, richer, healthier, whatever). But we fall short in achieving our “-ER” because of the difficulty in executing all the little decisions along the way to our goals. The bets we make on when and how to close the feedback loop are part of the execution, all those in-the-moment decisions about whether something is a learning opportunity. To reach our long-term goals, we have to improve at sorting out when the unfolding future has something to teach us, when to close the feedback loop. And the first step to doing this well is in recognizing that things sometimes happen because of the other form of uncertainty: luck.

The way our lives turn out is the result of two things: the influence of skill and the influence of luck. For the purposes of this discussion, any outcome that is the result of our decision-making is in the skill category. If making the same decision again would predictably result in the same outcome, or if changing the decision would predictably result in a different outcome, then the outcome following that decision was due to skill. The quality of our decision-making was the main influence over how things turned out. If, however, an outcome occurs because of things that we can’t control (like the actions of others, the weather, or our genes), the result would be due to luck. If our decisions didn’t have much impact on the way things turned out, then luck would be the main influence.

Chalk up an outcome to skill, and we take credit for the result. Chalk up an outcome to luck, and it wasn’t in our control. For any outcome, we are faced with this initial sorting decision. That decision is a bet on whether the outcome belongs in the “luck” bucket or the “skill” bucket.

If this all doesn’t seem difficult enough, outcomes are rarely all skill or all luck. Even when we make the most egregious mistakes and get appropriately negative outcomes, luck plays a role.

The psychologist Fritz Heider said we study our outcomes like scientists, but like “naïve scientists.” When we figure out why something happened, we look for a plausible reason, but one that also fits our wishes. Heider said, “It is usually a reason that flatters us, puts us in a good light, and it is imbued with an added potency by the attribution.”

Blaming the bulk of our bad outcomes on luck means we miss opportunities to examine our decisions to see where we can do better. Taking credit for the good stuff means we will often reinforce decisions that shouldn’t be reinforced and miss opportunities to see where we could have done better.

Maybe we could stop clinging to ego, giving up on that need to have a positive narrative of our lives. Maybe we could still drive a positive narrative but, instead of updating through credit and blame, we could get off on striving to be more objective and open-minded in assessing the influence of luck and skill on our outcomes.

Habits operate in a neurological loop consisting of three parts: the cue, the routine, and the reward. A habit could involve eating cookies: the cue might be hunger, the routine going to the pantry and grabbing a cookie, and the reward a sugar high. Or, in poker, the cue might be winning a hand, the routine taking credit for it, the reward a boost to our ego. Charles Duhigg, in The Power of Habit, offers the golden rule of habit change—that the best way to deal with a habit is to respect the habit loop: “To change a habit, you must keep the old cue, and deliver the old reward, but insert a new routine.”

We can work to change the bell we ring, substituting what makes us salivate. We can work to get the reward of feeling good about ourselves from being a good credit-giver, a good mistake-admitter, a good finder-of-mistakes-in-good-outcomes, a good learner, and (as a result) a good decision-maker. Instead of feeling bad when we have to admit a mistake, what if the bad feeling came from the thought that we might be missing a learning opportunity just to avoid blame?

Certainly, in exchange for losing the fear of taking blame for bad outcomes, you also lose the unadulterated high of claiming good outcomes were 100% skill. That’s a trade you should take. Remember, losing feels about twice as bad as winning feels good; being wrong feels about twice as bad as being right feels good. We are in a better place when we don’t have to live at the edges. Euphoria or misery, with no choices in between, is not a very self-compassionate way to live.

To be sure, thinking in bets is not a miracle cure. Thinking in bets won’t make self-serving bias disappear or motivated reasoning vanish into thin air. But it will make those things better. And a little bit better is all we need to transform our lives. If we field just a few extra outcomes more accurately, if we catch just a few extra learning opportunities, it will make a huge difference in what we learn, when we learn, and how much we learn.

Thinking in bets corrects your course. And even a small correction will get you more safely to your destination.

Forming or joining a group where the focus is on thinking in bets means modifying the usual social contract. It means agreeing to be open-minded to those who disagree with us, giving credit where it’s due, and taking responsibility where it’s appropriate, even (and especially) when it makes us uncomfortable.

Being in a group can improve our decision quality by exploring alternatives and recognizing where our thinking might be biased, but a group can also exacerbate our tendency to confirm what we already believe.

Being in an environment where the challenge of a bet is always looming works to reduce motivated reasoning. Such an environment changes the frame through which we view disconfirming information, reinforcing the frame change that our truthseeking group rewards. Evidence that might contradict a belief we hold is no longer viewed through as hurtful a frame. Rather, it is viewed as helpful because it can improve our chances of making a better bet. And winning a bet triggers a reinforcing positive update.

Although the Internet and the breadth of multimedia news outlets provide us with limitless access to diverse opinions, they also give us an unprecedented opportunity to descend into a bubble, getting our information from sources we know will share our view of the world. We often don’t even realize when we are in the echo chamber ourselves, because we’re so in love with our own ideas that it all just sounds sensible and right. In political discourse, virtually everyone, even those familiar with groupthink, will assert, “I’m in the rational group exchanging ideas and thinking these things through. The people on the other side, though, are in an echo chamber.”

Be a data sharer. That’s what experts do. In fact, that’s one of the reasons experts become experts. They understand that sharing data is the best way to move toward accuracy because it extracts the highest-fidelity insight from your listeners.

We are naturally reluctant to share information that could encourage others to find fault in our decision-making. My group made this easier by making me feel good about committing myself to improvement. When I shared details that cast me in what I perceived to be a bad light, I got a positive self-image update from the approval of players I respected. In my consulting, I’ve encouraged companies to make sure they don’t define “winning” solely by results or by providing a self-enhancing narrative. If part of corporate success consists of providing the most accurate, objective, and detailed evaluation of what’s going on, employees will compete to win on those terms. That will reward better habits of mind. Agree to be a data sharer and reward others in your decision group for telling more of the story.

Another way to disentangle the message from the messenger is to imagine the message coming from a source we value much more or much less. If we hear an account from someone we like, imagine if someone we didn’t like told us the same story, and vice versa.

This can be incorporated into an exploratory group’s work, asking each other, “How would we feel about this if we heard it from a much different source?” We can take this process of vetting information in the group further, initially and intentionally omitting where or whom we heard the idea from. Leading off our story by identifying the messenger could interfere with the group’s commitment to universalism, biasing them to agree with or discredit the message depending on their opinion of the messenger. So leave the source out to start, giving the group the maximum opportunity to form an impression without shooting (or celebrating) the message based on their opinion of the messenger (separate from the expertise and credibility of the messenger).

Telling someone how a story ends encourages them to be resulters, to interpret the details to fit that outcome.

If the group is blind to the outcome, it produces a higher-fidelity evaluation of decision quality.

The best way to do this is to deconstruct decisions before an outcome is known.

After the outcome, make it a habit when seeking advice to give the details without revealing the outcome.

Another way a group can de-bias members is to reward them for skill in debating opposing points of view and finding merit in opposing positions.

This is one of the reasons it’s good for a group to have at least three members, two to disagree and one to referee.

Skepticism is about approaching the world by asking why things might not be true rather than why they are true. It’s a recognition that, while there is an objective truth, not everything we believe about the world is true. Thinking in bets embodies skepticism by encouraging us to examine what we do and don’t know and what our level of confidence is in our beliefs and predictions. This moves us closer to what is objectively true.

If someone expresses a belief or prediction that doesn’t sound well calibrated and we have relevant information, try to say “and,” as in, “I agree with you that [insert specific concepts and ideas we agree with], AND . . .” After “and,” add the additional information. In the same exchange, if we said, “I agree with you that [insert specific concepts and ideas you agree with], BUT . . . ,” that challenge puts people on the defensive. “And” is an offer to contribute. “But” is a denial and repudiation of what came before.

We can think of this broadly as an attempt to avoid the language of “no.” In the performance art of improvisation, the first rule is that when someone starts a scene, you should respond with “yes, and . . .” “Yes” means you are accepting the construct of the situation. “And” means you are adding to it. That’s an excellent guideline in any situation in which you want to encourage exploratory thought. The important thing is to try to find areas of agreement to maintain the spirit of partnership in seeking the truth. In expressing potentially contradictory or dissenting information, our language ideally minimizes the element of disagreement.

Ask for a temporary agreement to engage in truthseeking. If someone is off-loading emotion to us, we can ask them if they are just looking to vent or if they are looking for advice. If they aren’t looking for advice, that’s fine. The rules of engagement have been made clear. Sometimes, people just want to vent.

We’re not perfectly rational when we ponder the past or the future and engage deliberative mind, but we are more likely to make choices consistent with our long-term goals when we can get out of the moment and engage our past- and future-selves.

“Every 10-10-10 process starts with a question. . . . [W]hat are the consequences of each of my options in ten minutes? In ten months? In ten years?” This set of questions triggers mental time travel that cues that accountability conversation “How would I feel today if I had made this decision ten minutes ago? Ten months ago? Ten years ago?”

Moving regret in front of a decision has numerous benefits. First, obviously, it can influence us to make a better decision. Second, it helps us treat ourselves (regardless of the actual decision) more compassionately after the fact. We can anticipate and prepare for negative outcomes. By planning ahead, we can devise a plan to respond to a negative outcome instead of just reacting to it.

Signs of the illusion of certainty: “I know,” “I’m sure,” “I knew it,” “It always happens this way,” “I’m certain of it,” “You’re 100% wrong,” “You have no idea what you’re talking about,” “There’s no way that’s true,” “0%” or “100%” or their equivalents, and other terms signaling that we’re presuming things are more certain than we know they are. This also includes stating things as absolutes, like “best” or “worst” and “always” or “never.”

Overconfidence: similar terms to the illusion of certainty.

Irrational outcome fielding: “I can’t believe how unlucky I got,” or the reverse, if we have some default phrase for credit taking, like “I’m at the absolute top of my game” or “I planned it perfectly.” This includes conclusions of luck, skill, blame, or credit. It includes equivalent terms for irrationally fielding the outcomes of others, like, “They totally had that coming,” “They brought it on themselves,” and “Why do they always get so lucky?”

Any kind of moaning or complaining about bad luck just to off-load it, with no real point to the story other than to get sympathy. (An exception would be when we’re in a truthseeking group and we make explicit that we’re taking a momentary break to vent.)

Generalized characterizations of people meant to dismiss their ideas: insulting, pejorative characterizations of others, like “idiot” or, in poker, “donkey.” Or any phrase that starts by characterizing someone as “another typical ________.”

“Wrong” is a conclusion, not a rationale. And it’s not a particularly accurate conclusion since, as we know, nearly nothing is 100% or 0%. Any words or thoughts denying the existence of uncertainty should be a signal that we are heading toward a poorly calibrated decision.

This is by no means a complete list, but it provides a flavor of the kinds of statements and thinking that should trigger vigilance on our part. Once we recognize that we should watch out for particular words, phrases, and thoughts, when we find ourselves saying or thinking those things, we are breaking a contract, a commitment to truthseeking. These terms are signals that we’re succumbing to bias.

For us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like. Any decision can result in a set of possible outcomes. Thinking about what futures are contained in that set (which we do by putting memories together in a novel way to imagine how things might turn out) helps us figure out which decisions to make. Figure out the possibilities, then take a stab at the probabilities.

If we’re worried about guessing, we’re already guessing.

By at least trying to assign probabilities, we will naturally move away from the default of 0% or 100%.
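To make “figure out the possibilities, then take a stab at the probabilities” concrete, here is a minimal expected-value sketch over a few imagined futures. The scenarios, probabilities, and payoffs are invented purely for illustration and are not from the book.

```python
# Minimal sketch: enumerate possible futures for a decision, guess their
# probabilities, and compute an expected value. Every number here is an
# illustrative assumption.

scenarios = [
    # (description,      probability,   payoff)
    ("big win",                0.10,   500_000),
    ("modest success",         0.35,   100_000),
    ("break even",             0.30,         0),
    ("failure",                0.25,  -150_000),
]

total_probability = sum(p for _, p, _ in scenarios)
assert abs(total_probability - 1.0) < 1e-9, "probabilities should cover all futures"

expected_value = sum(p * payoff for _, p, payoff in scenarios)
print(f"Expected value: {expected_value:,.0f}")  # 47,500 under these guesses
```

Even rough guesses like these pull us off the 0%-or-100% default and force the set of possible futures into plain view.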

The more expert the player, the further into the future they plan.

In addition to increasing decision quality, scouting various futures has numerous additional benefits. First, scenario planning reminds us that the future is inherently uncertain. By making that explicit in our decision-making process, we have a more realistic view of the world. Second, we are better prepared for how we are going to respond to different outcomes that might result from our initial decision.

When it comes to advance thinking, standing at the end and looking backward is much more effective than looking forward from the beginning.

Researchers “found that prospective hindsight—imagining that an event has already occurred—increases the ability to correctly identify reasons for future outcomes by 30%.”

Imagining a successful future and backcasting from there is a useful time-travel exercise for identifying necessary steps for reaching our goals. Working backward helps even more when we give ourselves the freedom to imagine an unfavorable future.

When we set a weight-loss goal and put a plan to reach that goal in place, a premortem will reveal how we felt obligated to eat cake when it was somebody’s birthday, how hard it was to resist the bagels and cookies in the conference room, and how hard it was to find time for the gym or how easy it was to find excuses to procrastinate going.

Research consistently finds that people who imagine obstacles in the way of reaching their goals are more likely to achieve success, a process psychologist Gabriele Oettingen has called “mental contrasting.”

We’re all outcome junkies, but the more we wean ourselves from that addiction, the happier we’ll be. None of us is guaranteed a favorable outcome.

Life, like poker, is one long game, and there are going to be a lot of losses, even after making the best possible bets. We are going to do better, and be happier, if we start by recognizing that we’ll never be sure of the future. That changes our task from trying to be right every time, an impossible job, to navigating our way through the uncertainty by calibrating our beliefs to move toward, little by little, a more accurate and objective representation of the world. With strategic foresight and perspective, that’s manageable work. If we keep learning and calibrating, we might even get good at it.

Have a great week!

-Rinaldo Ugrina