We Did Not Know This Was a Decision
Anil Nair · 3 min read
The next AI governance failure will probably not begin with a rogue model. It will begin with a system event that nobody thought to govern.
A claim is prioritised. A customer is segmented. A case is routed because a score crossed a threshold. Inside the system, these look like operational steps. No one calls them decisions because no final letter has been sent and no human has clicked approve. The NIST AI Risk Management Framework is useful here because it treats risk as a property of the whole system lifecycle, not only the final model output.
In practice, the meaningful choice may already have happened.
The Decision Happens Before The Label
Most organisations look for decisions at the obvious points: Approve, Reject, Deny, Escalate, or Close. Modern automated systems rarely wait for those labels. They shape the path earlier through scoring, ranking, routing, thresholding, and interface defaults.
That creates the hidden boundary problem. A fraud score can route a claim into a denial queue. A document classification can delay payment. A customer segment can change price, priority, or access. A pre-loaded template can make override slow and costly. The CMA’s foundation models work makes a related point in market terms: defaults and interface design can shape user behaviour long before a formal choice is made.
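A minimal sketch makes the pattern concrete. The threshold, queue names, and `route_claim` step below are illustrative, not a real system's API: the point is that a routine routing function, not any Approve or Deny action, is where the path narrows.

```python
from dataclasses import dataclass, field

FRAUD_THRESHOLD = 0.8  # a tuning value, rarely reviewed as a "decision"

@dataclass
class Claim:
    claim_id: str
    fraud_score: float
    queue: str = "standard_review"
    history: list = field(default_factory=list)

def route_claim(claim: Claim) -> Claim:
    """An 'operational' routing step. No one here says Approve or Deny,
    yet crossing the threshold puts the claim on the denial path."""
    if claim.fraud_score >= FRAUD_THRESHOLD:
        claim.queue = "denial_review"  # the default outcome now leans to denial
        claim.history.append(("threshold_crossed", claim.fraud_score))
    return claim

claim = route_claim(Claim("C-1042", fraud_score=0.87))
print(claim.queue)  # denial_review
```

Nothing in this function looks like a decision field, which is exactly why it escapes label-based governance.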
When Preparation Becomes Decision Work
Engineers often describe these steps as preparatory tasks. The model produces a signal. The rules engine applies a threshold. The orchestrator chooses the next workflow. That language keeps accountability at the final step.
Our insurance oversight case study shows why that framing breaks down. A model flagged a motor claim as potentially fraudulent. A reviewer later approved the denial. The real decision boundary had already moved upstream. The fraud model produced a score of 0.87. A threshold rule converted that score into a denial recommendation. The workflow routed the claim with the denial path already dominant. The interface hid the model reasoning, uncertainty, comparable claims, and alternative outcomes.
The system made denial easier and faster before the reviewer acted. Preparation had become decision work because it materially narrowed the human path. The firm was watching the approval step while in fact the control point was earlier.
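The interface step in the case study can be sketched the same way. The fields and sign-off rule below are hypothetical, but they show how a display layer can strip uncertainty and comparables so that the pre-loaded denial is the path of least resistance.

```python
# Hypothetical sketch: the full model output versus what the reviewer sees.
full_output = {
    "claim_id": "C-1042",
    "fraud_score": 0.87,
    "uncertainty": 0.22,  # a wide interval: the model is far from sure
    "comparable_claims": ["C-0991 (paid)", "C-1010 (paid)"],
    "recommendation": "deny",
}

def build_review_screen(output: dict) -> dict:
    """What the reviewer actually sees. Dropping uncertainty and
    comparable claims makes the pre-loaded denial hard to challenge."""
    return {
        "claim_id": output["claim_id"],
        "recommendation": output["recommendation"],  # pre-selected default
        "override_requires": "written justification + supervisor sign-off",
    }

screen = build_review_screen(full_output)
assert "uncertainty" not in screen  # the doubt never reaches the human
```

The reviewer's later approval is real, but the information that would have supported a different outcome was filtered out one step earlier.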
Boundary Detection Is A Technical Discipline
AI governance often begins too late because it waits for named decision fields. A modern control system needs to detect boundaries from system behaviour. The useful signals are technical: a score crosses a threshold, a route changes, a queue priority shifts, a default outcome is generated, or the human option is hidden, delayed, or made costly.
Those are the moments that need control. This work belongs closer to observability than policy writing. The system needs to detect where an event changes the state of a case, narrows a person’s options, or commits the organisation to a hard-to-reverse path.
Without that detection layer, human oversight arrives after the system has already done the important work.
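Such a detection layer can be sketched as a filter over an event stream. The event names and fields below are illustrative assumptions, not a real observability schema:

```python
# A minimal sketch of boundary detection over a case's event log.
# Event types and fields are illustrative, not a real API.
BOUNDARY_SIGNALS = {
    "threshold_crossed",
    "route_changed",
    "priority_changed",
    "default_outcome_set",
    "human_option_suppressed",
}

def detect_boundaries(events: list[dict]) -> list[dict]:
    """Flag events that change case state or narrow the human path.
    These, not the final approval click, are the control points."""
    return [e for e in events if e["type"] in BOUNDARY_SIGNALS]

events = [
    {"type": "document_ingested", "case": "C-1042"},
    {"type": "threshold_crossed", "case": "C-1042", "score": 0.87},
    {"type": "route_changed", "case": "C-1042", "to": "denial_review"},
    {"type": "reviewer_assigned", "case": "C-1042"},
]
boundaries = detect_boundaries(events)
# Only the threshold crossing and the route change are flagged.
```

In practice the signal set would be derived from the system's own telemetry, but even this crude filter surfaces the two events that committed the claim to the denial path.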
You Cannot Gate What You Have Not Identified
The Decision Control System depends on knowing where a decision boundary sits. A Decision Gate can hold a high-risk outcome before release. A Human Review Session can show the review context. But both only work if the gate is placed at the right point.
If governance only watches the approval button, it will miss the threshold rule. If it only watches the final denial, it will miss the routing decision. If it only watches the human action, it will miss the interface design.
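Placing the gate at the detected boundary, rather than at the final label, can be sketched as follows. The gate logic and risk rule here are illustrative assumptions, not the Decision Control System's actual interface:

```python
# Hedged sketch: a Decision Gate triggered by a boundary event,
# not by the final Approve/Deny field. Names are illustrative.
HELD, RELEASED = "held", "released"

def decision_gate(event: dict) -> str:
    """Hold the case the moment the system commits it to a
    hard-to-reverse path: here, when routing crosses into denial."""
    high_risk = (
        event["type"] == "route_changed"
        and event.get("to") == "denial_review"
    )
    return HELD if high_risk else RELEASED

# The route change is held for human review; routine events pass through.
assert decision_gate({"type": "route_changed", "to": "denial_review"}) == HELD
assert decision_gate({"type": "reviewer_assigned"}) == RELEASED
```

A gate wired to the approval button would fire after the routing, thresholding, and interface defaults had already done their work; a gate wired to the boundary event fires before.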
The next generation of AI controls will need to move from label-based governance to logic-based governance. The question is no longer only “who approved this?” A second question matters just as much: where did the system first make this outcome likely?
That is where the decision happened.
If your system treats routing, thresholding, and interface defaults as routine operations, you may be missing the real decision boundary. Read the insurance oversight case study and the Decision Control System specification.