Article — AI Decision Control
The question that exposes most AI governance programs
Last week I asked a team deploying AI in a regulated environment a simple question: show me every case last quarter where a human overrode a recommendation the system presented as high confidence.
The answer was immediate: “Yes. We log overrides.”
When we tried to run the query, it turned out no such query existed.
Override events were logged. Model outputs were stored. But the query — high-confidence overrides, bound to the confidence that was actually presented — required pulling from multiple systems, recomputing scores under current settings, and approximating thresholds that had since changed.
It took two days.
The result was assembled, not retrieved. That is not evidence. That is reconstruction.
This is not a logging problem
The confidence score in the logs is not the confidence score that was presented. Confidence is produced at render time, through threshold logic, across versions. Those components change, so the score becomes a function, not a value. When you ask for what was presented at time T, you are not retrieving it. You are attempting to recreate it.
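A minimal sketch of why replaying a logged score is recreation, not retrieval. All names and thresholds here are illustrative assumptions, not any particular system's schema: the log stores the raw score, but the label the human saw depended on rendering config that has since changed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RenderConfig:
    high_threshold: float   # scores at or above this were shown as "high"
    suppress_below: float   # scores below this were not shown at all

def presented_confidence(raw_score: float, cfg: RenderConfig) -> Optional[str]:
    """The confidence label a reviewer actually saw, given a render config."""
    if raw_score < cfg.suppress_below:
        return None  # suppressed: nothing was presented
    return "high" if raw_score >= cfg.high_threshold else "medium"

# Config in force at decision time vs. config in force today.
config_at_decision = RenderConfig(high_threshold=0.80, suppress_below=0.30)
config_today = RenderConfig(high_threshold=0.90, suppress_below=0.30)

raw_score = 0.85  # this is the value the log actually stores

# Replaying the same logged score under today's settings gives a
# different answer than what was presented at time T.
print(presented_confidence(raw_score, config_at_decision))  # high
print(presented_confidence(raw_score, config_today))        # medium
```

The raw score never changed; only the function that turned it into a presented label did. That is the sense in which the score "becomes a function, not a value."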
The same applies to decision context
The model output is stored. What was presented is a composition:
- score
- label
- threshold logic
- suppression rules
- UI state
Assembled at render time. When components change, the composition cannot be retrieved. It can only be reconstructed. And reconstruction is not evidence.
What changes when the system is built differently
When the same question is asked in a system with decision control:
The query runs. It returns immediately. Confidence as presented. Threshold at time of decision. Override action. Responsible party.
No reconstruction. No stitching. No approximation.
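When each decision record already carries the presented confidence, threshold version, action, and actor, the auditor's question collapses to a single filter over one log. A hedged sketch; the field names are the illustrative ones assumed above, not a real system's API:

```python
def high_confidence_overrides(records):
    """Decisions where a human rejected a recommendation shown as high confidence."""
    return [
        r for r in records
        if r["presented_label"] == "high" and r["action"] == "override"
    ]

# Two illustrative records from the append-only decision log.
log = [
    {"decision_id": "d1", "presented_label": "high", "action": "override",
     "threshold_version": "v3", "actor": "reviewer_17"},
    {"decision_id": "d2", "presented_label": "medium", "action": "accept",
     "threshold_version": "v3", "actor": "reviewer_04"},
]

for r in high_confidence_overrides(log):
    print(r["decision_id"], r["threshold_version"], r["actor"])
```

No second system is consulted and no score is recomputed, which is exactly the difference between retrieving evidence and assembling a reconstruction.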
The difference is not accuracy. The difference is not performance. The difference is whether the system produces evidence or requires reconstruction.
This is the gap most AI governance work does not reach
Compliance can be demonstrated without control. An audit can pass. Documentation can be complete. And the system can still be unable to answer what actually happened at the moment of decision.
Article 14 of the EU AI Act assumes human oversight exists. Article 26(5) assumes it can be demonstrated. The next question is not whether you defined oversight. It is whether your system can prove it was operational at the moment of decision.
Most systems cannot answer that.
One question worth asking your team today
If someone asked you to show where humans rejected high-confidence recommendations last quarter — would your system answer directly? Or would you need to reconstruct it?
If you want to run this question against your own system, I’m happy to do that with you. info@giggleaiinnovation.com — Thokozile Phiri, Founder, Giggle AI Innovation