The anatomy of the residual: Why are we hardwired for the wrong turn?

03.03.2026 • 8 min read

There is a specific, cold sensation that settles in the pit of the stomach at the exact moment a mistake is realised. It is the sound of a glass shattering on a tiled floor before you have even consciously acknowledged it slipped from your hand. It is the silence that follows a sent email when you notice, too late, the missing attachment or the unintended recipient. In that microscopic sliver of time, the world splits into two: the reality you intended to inhabit, and the one you have just accidentally created.

We spend a remarkable amount of our lives trying to insulate ourselves against this sensation. We build checklists, we set alarms, we install fail-safes. Yet, errors remain the most consistent ghost in our machinery. We treat mistakes as anomalies—glitches that shouldn’t have happened—but when you look at the sheer frequency of them, you begin to wonder if the error is not the exception, but the fundamental price we pay for existing in a complex environment.

The question that has been bothering me lately isn't just why we make mistakes, but why they are so fundamentally invisible to us until the very moment they become irreversible. If a mistake is truly “avoidable”, as we often claim in the post-mortem of a disaster, why did it feel like the most logical, or even the only, course of action at the moment of execution?

The mirage of the avoidable

When we look back at a mistake, we are victims of a cognitive distortion that makes the past look far more predictable than it actually was. We see a straight line of causality leading to the failure. This distortion is known as hindsight bias: we judge our past selves with information that only our future selves possess.

An error, by definition, is only an error in retrospect. At the moment of commission, the act is almost always perceived as “correct” by the person performing it. If I turn left when I should have turned right, I didn’t do it because I wanted to be lost; I did it because, at the precise junction, my internal map made left feel like the right choice.

To understand errors, we have to strip away the moral weight we attach to them. We often conflate mistakes with carelessness or even stupidity, but many of our most significant errors are committed by the most diligent people. As James Reason explored in his seminal work on the Swiss Cheese Model of accidents, a catastrophe is rarely the fault of a single individual. Instead, it is the result of small, systemic weaknesses across multiple layers that happen to align at the wrong moment. With this in mind, the error isn’t an isolated event; it is the moment a hazard finally finds a clear path through the gaps in a system we assumed was impenetrable.

If we accept that most errors are permitted by systems that already contain these holes, then the concept of the avoidable error starts to look like a delusion. To avoid a mistake, you must first be able to identify the alignment of the holes. But if your internal model of the world tells you that the layers are solid, there is no trigger for avoidance.

The efficiency trap: Errors as the price of speed

One of the most compelling reasons for the existence of errors is the sheer computational cost of being perfect. To be truly error-free would require an infinite amount of time to process every possible variable before taking a single step in any direction. We would be paralysed by the burden of verification.

Instead, our brains operate on a system of satisficing, a term coined by Herbert Simon in 1956 to describe searching through alternatives until an acceptability threshold is met. We don’t look for the perfect solution; we look for the first solution that seems like it will work. It is a remarkably efficient way to live.
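
To make the mechanism concrete, here is a minimal sketch of satisficing in Python. The route names, scores, and acceptability threshold are invented for illustration; the point is only the shape of the search.

```python
# A minimal sketch of satisficing versus optimising.
# Route names, scores, and the threshold are purely illustrative.

def satisfice(options, score, threshold):
    """Return the first option whose score clears the threshold."""
    for option in options:
        if score(option) >= threshold:
            return option  # good enough: stop searching
    return None  # nothing acceptable was found

def optimise(options, score):
    """Return the best option, at the cost of scoring every one."""
    return max(options, key=score)

routes = ["ring road", "motorway", "back streets", "scenic route"]
speed = {"ring road": 0.7, "motorway": 0.9, "back streets": 0.5, "scenic route": 0.3}

print(satisfice(routes, speed.get, threshold=0.6))  # ring road: the first acceptable option
print(optimise(routes, speed.get))                  # motorway: the best, but all four were scored
```

The satisficer stops after a single evaluation; the optimiser pays for all four. Scale that difference up to the thousands of decisions we make every day and the trade becomes unavoidable.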

However, this efficiency relies on heuristics, the mental shortcuts that allow us to make decisions quickly. These shortcuts are usually right, which is why we trust them. But they carry a “residual”: a percentage of cases where the shortcut leads us astray. When we talk about a sub-optimal choice, we are often looking at a moment where a usually reliable heuristic encountered a situation outside the range it was built for.

In this sense, a mistake isn’t necessarily a failure of logic; it’s a statistical inevitability. If you perform a task one thousand times using a shortcut that is 99.9% accurate, you should expect to fail exactly once. We call that failure a mistake, but from a systems perspective, it’s just the cost of the 999 times you succeeded without having to spend hours over-analysing the process.
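
The arithmetic deserves to be spelled out, because “expect to fail once” hides a subtlety. A quick sketch, using the accuracy and trial count from the example above:

```python
# The statistical cost of a 99.9%-accurate shortcut used 1,000 times.
accuracy = 0.999
trials = 1000

expected_failures = trials * (1 - accuracy)  # the average number of failures
p_at_least_one = 1 - accuracy ** trials      # the chance of failing at least once

print(f"expected failures: {expected_failures:.2f}")     # 1.00
print(f"P(at least one failure): {p_at_least_one:.2f}")  # 0.63
```

So one failure is the expectation, not a certainty: there is still roughly a one-in-three chance of completing all thousand repetitions unscathed, which is precisely how near misses stay invisible.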

The problem of identification: The blind spot in the theatre of the mind

Why can’t we see a mistake coming? This is perhaps the most frustrating aspect of human error. Even when we are looking directly at the variables, we often fail to register the warning signs.

Psychologists often refer to cognitive tunnelling, a state where our focus becomes so narrow that we lose all peripheral awareness. When we are deeply engaged in solving a specific problem, our brain deprioritises anything it deems irrelevant. The tragedy is that this “irrelevant” information often contains the very data point that would show us we are making a mistake.

Think of a person trying to force a key into a lock. As their frustration grows, they focus harder and harder on the mechanical act of pushing and turning. They become tunnelled into the task. They fail to notice that they are holding the key to their office, not their home. The information, in this case the unfamiliar shape of the key, is right there in their hand, but their brain has filtered it out to focus on the problem of the stubborn lock.

The identification failure is compounded by the illusion of validity. We have a profound, often unfounded, confidence in our own perceptions. We assume that if we are looking at something, we are seeing it as it truly is. We forget that our eyes and ears are not cameras or microphones; they are instruments that provide raw data which our brain then heavily edits and interprets based on what it expects to see.

When reality deviates from our prediction (when the lock doesn’t turn, for example), we don’t immediately think that the prediction was wrong. Instead, we think that the lock must be broken. We externalise the error because our internal theatre of the mind is designed to maintain a consistent, coherent narrative in which we are the competent protagonists.

The moral luck of the wrong turn

There is a disturbing element of luck in how we categorise errors. Two people can make the exact same mistake: say, forgetting to check their blind spot while driving. For the first person, nothing happens; the road is empty, and they continue their journey, never realising they made an error. For the second person, a cyclist happens to be in that blind spot.

In the eyes of the law, the second person has committed a grievous error, while the first person is a safe driver. But from a cognitive and procedural standpoint, the error was identical. This is the philosophical problem of moral luck, as explored by Thomas Nagel and Bernard Williams. We tend to judge the gravity of a mistake based on its outcome, rather than the process that led to it.

This outcome bias prevents us from truly understanding the nature of errors. We obsess over the mistakes that lead to disasters while ignoring the millions of near misses that happen every day. By only studying the errors that end in tragedy, we are looking at a skewed data set. To understand why we make mistakes, we have to look at the moments where everything went wrong but, by sheer luck, nothing happened. These are the silent errors, the ones that suggest our systems—both internal and external—are far more fragile than we care to admit.

The topography of a mistake: Why the environment matters

We often treat errors as a personal failing, a lapse of character or attention. But if we shift our perspective, we can see that errors are often a response to a poorly designed environment. There is a topography to our world that can either nudge us toward the correct action or pull us toward the wrong one.

Consider the Norman door, named for the designer Don Norman: a door designed so poorly that you pull when you should push. Is it a mistake when you pull the handle and the door doesn’t budge? Technically, yes. You have performed an incorrect action. But the fault doesn’t lie in your brain; it lies in the design of the handle, which signals “pull” to every human instinct.

Our lives are full of these intellectual doors. We work in digital environments where the delete button sits right next to the save button. We operate in social structures where the expected behaviour is at odds with our biological needs (e.g., demanding peak focus in the drowsy hour after lunch).

When we ask “why do mistakes happen?”, we should look less at the individual and more at the interface between the individual and the world. Many errors are simply the path of least resistance in a landscape that has been shaped incorrectly. A mistake, in many cases, is a natural response to an unnatural environment.

The residual of being human

If we could somehow eliminate all sub-optimal choices, all lapses in identification, and all environmental frictions, what would be left? We would be left with a world of perfect, robotic precision (and we would likely find it intolerable).

There is a fundamental noise in human existence. In statistics, the residual is the difference between the observed value and the predicted value. It is the part of the data that the model cannot explain. Error is the human residual. It is the unpredictable, messy variance that arises from being a biological entity trying to navigate a logical world.
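
For readers who want the notation: if $y_i$ is an observed value and $\hat{y}_i$ is the value the model predicted, the residual is simply

$$e_i = y_i - \hat{y}_i$$

the slice of each observation that no amount of modelling will account for.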

Perhaps we should stop viewing errors as something to be solved and start looking at them as a calibration tool. A world without mistakes is a world without learning. If we never turned left when we should have turned right, we would never discover the side streets, the unexpected views, or the limitations of our own maps.

The sting of a mistake, that sinking feeling in the stomach, is a biological signal that our model of the world has just been updated. It is the friction of reality rubbing against our expectations. Without that friction, we would be gliding through a vacuum, never knowing where we end and the rest of the world begins.

The persistence of the ghost

As I sit here, reflecting on the nature of these wrong turns, I am aware that even this essay is likely to contain its own residuals. There will be a typo I missed, a nuance I failed to capture, or a sub-optimal phrasing that will only become apparent to me after I click “publish”.

We cannot escape the ghost in the machine. We can only change our relationship with it. Instead of seeing errors as a sign of failure, we can choose to see them as evidence of our engagement with a world that is too big and too complex to ever be fully mastered.

The avoidable mistake is a myth we tell ourselves to maintain the illusion of control. The reality is far more humbling: we are all walking through a fog, guided by flickering lanterns of intuition and habit. Most of the time, we find our way. But when we don’t—when the glass breaks or the wrong turn is taken—it isn’t because we have failed to be human. It’s because we have succeeded.
