The Noise Threshold: Why Process Won’t Save Us
Field Notes on Human Inconsistency in the Age of AI
Consider this prediction: by 2030, 50% of operational decisions will be made directly by AI, with most remaining decisions being AI-assisted. In my previous essay on operational decision-making, I explored what this shift might look like in practice. What I didn’t fully explore was why such a shift becomes inevitable—not merely possible, but organizationally necessary.
The conventional narrative focuses on AI capability. Can AI understand context? Can it handle exceptions? Can it match human judgment? These questions, while valid, miss a more fundamental issue.
The case for AI in operational decisions isn’t primarily about AI’s brilliance. It’s about human inconsistency.
The Hidden Cost of Noise
When we discuss errors in human decision-making, we almost always mean bias—systematic patterns of misjudgment that push decisions in predictable directions. Decades of behavioral research have made us fluent in these patterns: anchoring, confirmation bias, overconfidence. Organizations invest heavily in training programs and decision frameworks designed to counteract them.
Yet bias is not the most costly limitation on human performance. The greater problem is noise: the random variability in human judgment that causes different people to reach different conclusions from identical information, or the same person to reach different conclusions on different days. This distinction—between the systematic errors we obsess over and the random variance we ignore—was central to Daniel Kahneman’s later work, and it reframes how we should think about human judgment in operational contexts.
Unlike bias, noise has no pattern. It simply adds variance to every decision, invisibly, at scale.
Consider a thought experiment. Take any operational decision made repeatedly across your organization—pricing adjustments, risk assessments, resource allocations, approval determinations. Now imagine tracking not just outcomes, but the decisions themselves: what different people decided, or what the same person decided when presented with equivalent situations at different times. The variation would likely be substantial, and largely unexplained by any systematic factor.
This isn’t a failure of competence. It’s a feature of how human cognition operates. We are influenced by recent experiences, current mood, cognitive load, the order in which information was presented, and countless other factors we neither notice nor control.
The result is organizational entropy—decisions that drift without intention, inconsistencies that compound over time, and outcomes that vary for reasons no one can articulate.
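To make the thought experiment concrete, here is a deliberately toy simulation of a noise audit: fifty judges score the identical case, each with a small stable personal lean plus some day-to-day wobble. Every parameter is an illustrative assumption, not field data.

```python
# Toy noise audit: many judges score the identical case.
# All spreads and scales below are illustrative assumptions.
import random
import statistics

random.seed(42)

TRUE_SCORE = 62.0  # hypothetical "correct" risk score for one case
N_JUDGES = 50

def judge(case_score: float) -> float:
    """One judgment = truth + a stable personal lean + day-to-day wobble."""
    level_noise = random.gauss(0, 6)     # judge's stable lean (level noise)
    occasion_noise = random.gauss(0, 4)  # mood, fatigue, recency (occasion noise)
    return case_score + level_noise + occasion_noise

scores = [judge(TRUE_SCORE) for _ in range(N_JUDGES)]

bias = statistics.mean(scores) - TRUE_SCORE
noise = statistics.stdev(scores)

print(f"average judgment bias: {bias:+.1f}")
print(f"std dev across judges (noise): {noise:.1f}")
print(f"range: {min(scores):.1f} to {max(scores):.1f}")
```

On numbers like these, the average judgment looks almost unbiased while individual decisions routinely land ten or more points apart. That is the signature of noise: invisible in aggregates, expensive one decision at a time.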
The Governance Response
Organizations have long recognized the dangers of unchecked individual judgment, even if they haven’t framed it explicitly as a noise problem. The response has been governance: approval hierarchies, decision frameworks, standardized criteria, documented processes, committee reviews.
The entire apparatus of decision governance exists, in essence, to make humans behave more like algorithms.
This approach—what Kahneman calls decision hygiene—has genuine value. Structured processes reduce variance. Checklists ensure relevant factors aren’t overlooked. Multiple reviewers catch individual errors. The problem isn’t that governance doesn’t work—it’s that governance has inherent limitations, and we may be approaching them.
Process adds time. Every approval layer, every standardized review, every committee deliberation creates latency between situation and response. In operational contexts where speed matters, organizations face an uncomfortable tradeoff: they can have consistency or velocity, but increasing one typically decreases the other.
Process also has diminishing returns. Beyond a certain point, additional structure doesn’t further reduce noise—it simply adds friction. And process itself introduces new sources of variability: how strictly are guidelines interpreted? How do reviewers weight competing criteria? The very mechanisms designed to create consistency become new venues for inconsistency.
An Unstable Equilibrium
This creates what I would characterize as an unstable equilibrium. Organizations are investing heavily in decision governance—frameworks, training, oversight structures—to extract more consistency from human judgment. But if the purpose of this investment is to make humans more algorithm-like, and if actual algorithms can be algorithms more effectively than humans can simulate them, the logic of the current approach contains its own expiration date.
Think of it as a bridge. Process and governance serve as the bridge between purely human decision-making and something more systematic. Bridges are valuable, but they are not destinations. Organizations don’t invest in bridges to stay on them indefinitely; they invest in bridges to reach the other side.
The question worth asking: is your investment in decision governance building infrastructure for the long term, or scaffolding for a transition already underway?
Every dollar spent on decision governance is implicitly a bet that humans-plus-process will remain superior to purpose-built systems for some meaningful period. That bet may be correct today. It becomes less certain as AI capability advances and operational expectations shift.
Reframing the Question
This shifts how we should think about AI adoption in operational contexts. The relevant question isn’t “Is AI good enough?”—a framing that positions AI as needing to prove itself against human judgment.
The more honest question is “How much inconsistency can we afford?”
This isn’t an argument that AI judgment is superior in all dimensions. Kahneman himself noted uncertainty about AI’s capacity for evaluative judgment—determining what outcomes are worth pursuing, not merely optimizing toward given targets. That question of values remains distinctly human. But for operational decisions where objectives are clear and consistency matters, the noise threshold argument holds.
Every organization has a noise tolerance, whether explicit or not. Below that threshold, the variance in human decisions is accepted as a cost of doing business—perhaps not even recognized as a cost at all. Above that threshold, the inconsistency becomes visible, problematic, and eventually intolerable.
What changes over time is not just AI capability, but noise tolerance. As decision volumes increase, as operational tempo accelerates, as competitive environments tighten margins for error, the threshold for acceptable variance drops. Simultaneously, as organizations gain experience with AI-assisted and AI-driven decisions, they develop new expectations for consistency that human judgment cannot match.
Viewed through this lens, the 50% prediction isn’t primarily a statement about AI reaching some capability threshold. It’s a statement about organizations reaching a noise threshold—a point where the inconsistency inherent in human operational judgment becomes an unacceptable competitive liability.
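One rough way to operationalize the noise threshold: treat noise as a variance term with a cost attached, and compare the annual bill against the cost of a more systematic alternative. Every figure in the sketch below is a hypothetical placeholder; the shape of the comparison is the point, not the numbers.

```python
# Back-of-envelope noise-threshold check.
# Every figure is a hypothetical placeholder; substitute your own estimates.

decisions_per_year = 200_000   # volume of one repeated operational decision
noise_sd = 0.10                # std dev of decision error attributable to noise
cost_per_sq_error = 500.0      # dollar cost of one unit of squared error

# Under squared-error costs, expected cost per decision scales with variance.
annual_noise_cost = decisions_per_year * cost_per_sq_error * noise_sd ** 2

# Hypothetical all-in annual cost of an AI system for the same decisions.
annual_system_cost = 750_000.0

print(f"annual cost of human noise: ${annual_noise_cost:,.0f}")
print(f"annual cost of AI system:   ${annual_system_cost:,.0f}")
if annual_noise_cost > annual_system_cost:
    print("noise threshold crossed: inconsistency costs more than the alternative")
```

The interesting property is dynamic: decision volume tends to rise and system costs tend to fall, so a comparison that favors humans-plus-process today can flip without any change in AI capability.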
Looking Forward
The path forward requires honest assessment. Where in your organization does decision variance create silent costs—not dramatic failures, but quiet inefficiencies, unexplained inconsistencies, outcomes that depend more on who decided than what was decided? These are the domains where the noise threshold will be crossed first.
The organizations that navigate this transition successfully won’t be those that resist it longest in the name of human judgment, nor those that adopt AI fastest without understanding why. They will be organizations that recognize the true nature of the shift: not a replacement of human capability, but an escape from human inconsistency.
The bridge of process and governance served us well. It may be time to consider what lies on the other side.