How to Decide: The Art of Choosing Well
My previous post was confessional and narrative: a look at some of the beliefs a younger me held without much inspection. I want to think deeply about how we evaluate beliefs, but first we need to talk about decisions, and about how to make the best possible ones.
This is not a simple problem. In some sense, it’s all about making good decisions! Much of education and training in any field is designed to support making good decisions, and the quality of those decisions may say everything about our life experience. This is as true for individuals as it is for companies, for cultures, and for entire civilizations.
Sometimes it’s easy to make the best decision: maybe we only have a few options and it’s obvious what the right answer is. Maybe all decisions except one are ludicrously bad, or maybe there’s no possibility that’s much better than another. Pick one and deal with the consequences.
But those cases are in the minority. Usually, it’s much harder to decide between alternatives, and one of the real dangers is making decisions without awareness of our process. Maybe we’ve left much on the table by making mistakes we never even imagined.
Let’s think more carefully. Every good decision probably needs at least these three pieces:
- “I have these pieces of information…”
- “These are the possible actions I could take…”
- “Here’s how I’ll evaluate the result…”
The first two require some brainstorming and maybe some creativity, but the last one is the slippery part. How can we really evaluate the likely outcomes of our decisions?
Here’s a contrived example where decision making is complicated because we’re balancing many factors. Imagine choosing a career path as a young person: one path leads to you making millions of dollars a year but being a bitter, lonely person; a second leads to you hovering on the edge of poverty but leading a very happy life; and a third might lead to you struggling with a number of personal and financial issues and dying early, but curing cancer in the process. Given a “god’s eye” view of the future, which should you choose?
Much decision making is poorly considered and happens below the surface. People probably don’t spend enough time on this part of the question. What’s the best decision? That question, without some rigor in process and defining goals, may not help much. I’m constantly reminding people that you get what you optimize for—in other words, you’ll probably get the solution you’re driving toward, so make sure it’s the one you want.
Let’s talk about some standard models for decision making. Expected value is a classic, for good reason. The key idea is that we:
- assemble all the possible outcomes of the decision,
- look at the payoff or loss from each outcome,
- and then multiply each one by the probability of it happening.
If, for instance, you had a game where you made $1.10 each time a fair coin came up heads and you lost $1.00 each time it came up tails, you want to play that game all day, every day. The more you play, the more you’ll probably make. (In this case, the math is easy because the probabilities are 50/50, but this framework can easily accommodate more outcomes with more nuanced probabilities.)
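To make the arithmetic concrete, here's a minimal sketch in Python. The game and its payouts come straight from the example above; everything else is just illustration:

```python
import random

def expected_value(outcomes):
    """Sum of payoff * probability across all possible outcomes."""
    return sum(payoff * probability for payoff, probability in outcomes)

# The coin game above: win $1.10 on heads, lose $1.00 on tails, fair coin.
coin_game = [(1.10, 0.5), (-1.00, 0.5)]
print(expected_value(coin_game))  # ~0.05: about a nickel per flip, on average

# A quick simulation: the long-run average winnings converge toward that figure.
flips = 100_000
winnings = sum(1.10 if random.random() < 0.5 else -1.00 for _ in range(flips))
print(winnings / flips)  # hovers around 0.05
```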
This is an important tool for me in my day job. It really shines in financial markets, especially where we can define and control risk and be pretty certain that we can play a game over and over. It’s also useful for poker, for calculating insurance, for product risk, and in many other areas.
It’s not perfect, though, and it’s not always the right tool. What if we can’t neatly collapse the question into scenarios with defined payoffs? What if there’s just too much we don’t know?
What if we can only play the game once? What if there’s an “absorbing barrier” that forces us to stop playing? Bankruptcy is a good example: you can easily lose all your money playing a “winning”, positive-expected-value game if your capital is limited or even small surprises come along.
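Here's a rough sketch of that failure mode, reusing the same coin game with a small, deliberately made-up bankroll. Even though each flip has positive expected value, a large share of runs hit the barrier and never get to enjoy the long run:

```python
import random

def goes_bust(bankroll, rounds=10_000):
    """Play the +$1.10 / -$1.00 coin game until we can't cover a loss or the session ends."""
    for _ in range(rounds):
        if bankroll < 1.00:   # the absorbing barrier: we can't absorb the next loss
            return True
        bankroll += 1.10 if random.random() < 0.5 else -1.00
    return False

trials = 10_000
ruined = sum(goes_bust(5.00) for _ in range(trials))
print(ruined / trials)  # a large share of runs go broke, despite the positive EV per flip
```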
I’ve argued before that flying commercial is a good bet, at least in expected value terms. But you might well counter something like “yeah but if it goes badly, I’ll die as a flaming industrial lawn dart”, and you would have a point—the negative value of that payoff (death) might make it unacceptable for some people no matter how improbable.
We’ve been talking about risk here, and risk is one thing. It’s something we can quantify and understand. Even for a one-time event, we could imagine running the experiment in a bazillion parallel universes and being able to say something about most of the outcomes. But what about absolute uncertainty? What if we don’t know how bad things could be and really can’t assign probabilities to anything? What if we don’t know much of anything?
We have tools to think about this. Frank Knight pioneered a concept in 1921 that we now call “Knightian uncertainty”. This framework calls for us, first of all, to be intellectually honest. Don’t call it “risk” if we’re staring wide-eyed into the abyss of utter unknowing. Don’t assign bullshit probabilities if neither they nor the payoffs can be quantified. Admit the uncertainty. Just say you don’t know and can’t know.
Then decide how to deal with it. Remember, we’re really dealing with edge of the map, “here be dragons” kinds of scenarios: nuclear war, AGI wiping out humanity, cryptocurrency outcomes, COVID-19 and novel pandemics. (Finance bros need this too—how do you value a startup in a pioneering industry? What are you going to pay for that company?)
There’s no one-size-fits-all answer in these cases, but here are a few tools that might help decision making under deep uncertainty:
- Use minimax-style strategies, which try to minimize the worst-case loss. Anti-fragile solutions also live here. (There's a minimal sketch of this idea right after the list.)
- Use rules of thumb like “don’t bet more than you can afford to lose”; “avoid irreversible decisions”; and maybe favor conservative, time-tested solutions.
- Just wait. Don’t make a decision if you don’t have to. Don’t play the game if you don’t have to. (Of course, sometimes this option is not available.)
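For the minimax idea, here's a minimal sketch with entirely invented strategies and payoffs. When we can't assign probabilities, we can still score each option by its worst case and pick the one whose worst case is least bad (the payoff-side twin of minimizing the maximum loss):

```python
# Hypothetical strategies and payoffs, invented purely for illustration.
# We can't (or won't) assign probabilities to the scenarios.
payoffs = {
    "aggressive":   {"boom": 100, "muddle": 20, "collapse": -90},
    "balanced":     {"boom": 50,  "muddle": 15, "collapse": -20},
    "conservative": {"boom": 20,  "muddle": 10, "collapse": -5},
}

# Maximin: judge each action only by its worst-case payoff,
# then pick the action whose worst case is least bad.
worst_case = {action: min(scenario.values()) for action, scenario in payoffs.items()}
choice = max(worst_case, key=worst_case.get)

print(worst_case)  # {'aggressive': -90, 'balanced': -20, 'conservative': -5}
print(choice)      # conservative
```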
The message of Knightian uncertainty is a powerful one: sometimes we just don’t know and we can’t know. There are times when we really must bow before the unknowing and admit limitations. But for rational decision makers this is worst-case territory. Before we end up here, we should exhaust other useful tools.
Decisions rest on beliefs. One of the most powerful tools for evaluating beliefs comes to us by a fascinating and twisted path. The Reverend Thomas Bayes (d. 1761) was an English Nonconformist minister who enjoyed mathematics in his spare time. He published almost nothing during his lifetime, but a friend, Richard Price, published his paper, An Essay towards solving a Problem in the Doctrine of Chances, after his death in 1763.
Bayes gave us a way to update our beliefs based on incoming evidence—a precise, powerful, mathematical refinement of rational thought. As a good example: a classic low-level interview question might be something like “I have a fair coin. Let’s say I flip it and it comes up heads twice in a row. What is the probability of getting three heads in a row? [Wait for answer.] So what is the probability of the coin coming up heads on my next flip, which will be the third in the sequence?” If you answer anything but 50%, you’re going to have an enjoyable and relaxing interview and never hear from the firm again because they just learned you have pudding inside your head.
But a more interesting extension of the question might be something like: “Ok, so now fast forward. We’ve flipped the coin 20 times and it’s come up heads 18 of those. What’s the probability of seeing heads on the next flip? What are you thinking?” What you should be thinking now is, “that’s really unlikely to be a fair coin. You said it was a fair coin. It’s far more likely that you are a lying weasel of an interviewer.”
And this is exactly what Bayesian inference does: we take an existing belief (a prior) and update it as new information comes in. There’s formal math around this, and it leads to some precise and sometimes wildly counterintuitive answers in the right environments.
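As a sketch of how that update works for the loaded-coin scenario above: assume, purely for illustration, that the only alternative to a fair coin is a trick coin that lands heads 90% of the time, and that we start out mostly trusting the interviewer.

```python
from math import comb

def likelihood(p_heads, heads, flips):
    """Probability of seeing exactly `heads` heads in `flips` flips, given a bias of p_heads."""
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# Prior: we mostly trust the interviewer, so 95% on "fair coin" and 5% on the
# hypothetical trick coin that lands heads 90% of the time (numbers invented).
prior_fair, prior_trick = 0.95, 0.05

# Evidence: 18 heads in 20 flips.
like_fair  = likelihood(0.5, 18, 20)
like_trick = likelihood(0.9, 18, 20)

# Bayes' theorem: posterior is proportional to prior times likelihood.
evidence = prior_fair * like_fair + prior_trick * like_trick
posterior_fair = prior_fair * like_fair / evidence
print(posterior_fair)  # around 1%: the "fair coin" story collapses under the evidence

# Belief about the next flip, averaged over what we now think about the coin.
p_next_heads = posterior_fair * 0.5 + (1 - posterior_fair) * 0.9
print(p_next_heads)    # much closer to 0.9 than to 0.5
```

Swap in a different prior or a different trick-coin bias and the numbers move, but the machinery is the same: prior, evidence, update.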
But it’s also not perfect. In an ironic twist, Price published Bayes’ paper in support of religious belief and to defend the reality of miracles, as a refutation of Hume. Hume was saying, “don’t believe something really unlikely, no matter who says it happened.” Price was saying, “something really unlikely can become probable, given enough and strong enough evidence,” and he used Bayes’ math to walk that razor’s edge.
The final irony is that Bayes’ theorem now sits at the heart of questions about what rational belief means and when it’s justified. Modern thinkers are far more likely to use it to make blazing bonfires out of theistic assertions.
Speaking of razors, you’ve probably seen Occam’s Razor invoked in many discussions. We’ll cover this again in the future, but you should just know that the original version of Occam’s Razor has been so dulled by modern thinkers that it’s nearly useless.
You probably know it as something like “the simplest explanation is usually the right explanation.” Well, the original was quite a bit different. The scholastic summary is: Entia non sunt multiplicanda praeter necessitatem (entities should not be multiplied beyond necessity), which raises a few questions: What’s an entity? What’s necessary? How do we know? At any rate, it’s not as useful as we might assume, because reality can be wonderfully weird.
And with that, I think we’re almost there. Shelves could be filled with the books written on decision making, and maybe I should write another one to add to that shelf—this is a nuanced topic, and there’s more to be said.
There are many other good frameworks, and some are “more good” than others. Red team/blue team analysis demands that you actively argue against your chosen path. Scenario planning has you sit down and think around the corners, covering as many conceivable scenarios as possible. Premortem analysis can be both fun and scary: pretend your chosen decision has failed spectacularly, then do a “pre-” post-mortem on the outcome.
There’s something to be said, in some cases, for favoring reversibility and taking decisions that can be undone. There also might be something to be said for taking the most robust choice—the one that leads to good outcomes across the largest number of scenarios, even if it doesn’t get you the best possible outcome in any of them.
Last, with all of these possibilities, there’s a real chance of paralysis by analysis: failing simply because you never act. Frameworks like time boxing, which demands you make a decision within a certain timeframe, or satisficing, which skips the search for the best decision and instead takes the first one that meets your criteria, are solutions to this kind of paralysis. But, as I pointed out earlier, you get to choose which problem you solve, and the solution is unlikely to solve everything.
This post has been a bit of a detour, but I hope it’s one you’ve found both interesting and useful. Decision making is wonderfully concrete because it demands action; we move from the abstract and often ill-defined inner experience to actually doing something. For all of us, this is highly relevant, and it’s always easier to understand things we can touch or see.
But what I’m really interested in, what lies more directly in our path, is much harder to grasp—much deeper, and much more terrifying to confront. Decisions are an outward and visible reflection of an invisible, inner reality. Our decisions are built on a foundation that is so often unchallenged and uninvestigated. Decisions rest on beliefs.
And that’s what we really need to get to. In my next post, we’ll dig into beliefs: how we form them, how we hold them, how we evaluate them, and when and if we change them.