AI Decision Control for Healthcare AI
Healthcare AI systems that pass audits still make decisions nobody owns. We close the decision control gap — making every AI-driven decision in production visible, owned, and interruptible.
Not sure if this applies to your system? Send a message first.
01 — The Problem
Most healthcare AI organisations have invested heavily in compliance. Audit trails, model cards, validation studies, regulatory submissions. They can demonstrate their systems were built correctly.
What they cannot demonstrate is control over the decisions those systems are making right now — in production, on live patient pathways, in real time.
The gap between compliance and control is where clinical and regulatory risk accumulates. Quietly. Systematically. Until it doesn't.
No one owns the decision
AI recommendations pass through clinical workflows with no assigned accountability. When outcomes diverge, ownership is contested. Investigations go nowhere.
No mechanism to interrupt
Systems operating in production have no defined override pathway. Human oversight exists on paper. In practice, the AI operates without effective control points.
Decisions are invisible at the point of consequence
Logging captures outputs. It does not capture the decision — who was in the loop, what context applied, what authority was exercised.
EU AI Act changes the exposure
Article 14 requires demonstrable human oversight — not nominal oversight. Organisations that cannot show control, not merely compliance, face material regulatory risk.
02 — What We Do
We install a structured control layer over the AI decisions your organisation makes in production. We do not replace your systems. We give those systems the architecture they are currently missing.
The absence of that architecture is not a compliance gap. It is a control gap. The two are different problems with different solutions.
Decision Objects
Every consequential AI output is formalised as a discrete Decision Object: a defined artefact with inputs, rationale, context, and consequence. Not a log entry. A decision.
Ownership
Each Decision Object is assigned to a named human authority. Ownership means accountability is not distributed into ambiguity — it is explicit, documented, and auditable.
Control Points
Defined intervention gates at which a human authority can review, modify, or override an AI decision. Not aspirational oversight. Structural interruption capability.
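To make the three elements above concrete, here is a minimal sketch of what a Decision Object with an owner passing through a Control Point could look like. This is purely illustrative: the class names, fields, and the review callback are assumptions for the sketch, not the delivered implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class DecisionObject:
    """One consequential AI output, formalised as a decision (illustrative fields)."""
    decision_id: str
    inputs: dict[str, Any]      # what the model saw
    rationale: str              # why the system recommended this
    context: dict[str, Any]     # pathway, site, cohort
    consequence: str            # what happens if the decision stands
    owner: str                  # named human authority, never blank
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ControlPoint:
    """An intervention gate: a human authority can approve, modify, or override."""
    name: str
    review: Callable[[DecisionObject], str]  # returns "approve" | "modify" | "override"

    def apply(self, decision: DecisionObject) -> tuple[DecisionObject, str]:
        verdict = self.review(decision)
        # The gate records the authority exercised alongside the decision itself.
        decision.context["control_point"] = {
            "gate": self.name, "verdict": verdict, "owner": decision.owner,
        }
        return decision, verdict

# Usage: a triage recommendation passing through a clinician-owned gate.
d = DecisionObject(
    decision_id="triage-0042",
    inputs={"model": "triage-v3", "score": 0.91},
    rationale="High-acuity pattern match",
    context={"pathway": "ED triage"},
    consequence="Patient escalated to priority queue",
    owner="Dr. A. Clinician",
)
gate = ControlPoint(name="pre-escalation review", review=lambda dec: "approve")
d, verdict = gate.apply(d)
print(verdict)  # → approve
```

The point of the sketch: the decision, its owner, and the gate verdict travel together as one auditable artefact, rather than being scattered across logs.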
A healthcare AI system with decision control does not merely produce outputs.
It produces decisions — with visibility, ownership, and the structural capacity for human control.
That is what regulators mean by oversight. That is what boards should be demanding.
03 — How It Works
Each phase is scoped, fixed in time, and ends with a defined output — not an ongoing engagement.
Exposure Scan
A structured diagnostic of your current AI deployment to identify where decisions are being made without ownership, visibility, or control. We map your live decision surface — the gap between what your documentation says and what your systems are doing.
No prior preparation required. No system changes. An independent read of your actual control state.
3–5 days

AI Decision Control Diagnostic
A deep engagement with your AI decision infrastructure. We work with your CTO, Head of AI, and clinical leadership to define the Decision Registry, assign ownership, establish Control Points, and produce your Decision Control Index — a scored, auditable picture of your control state.
Fixed scope. No open-ended retainer. Pricing discussed on the Guided Assessment call.
2–4 weeks

Installation
The control layer is put in place. Decision Objects are formalised. Ownership is assigned. Control Points are operational. Your organisation moves from nominal oversight to structural control — documented, defensible, and ready for regulatory scrutiny.
This is not a handover document. It is a working system.
Timeline defined at Diagnostic close

Ready to establish your control state?
The Guided Assessment call takes 45 minutes. No preparation required.
Not sure if this applies? Send a message first.
04 — What You Leave With
Every engagement ends with operational infrastructure — artefacts that are live, assigned, and functioning. Not recommendations waiting to be prioritised. Not a roadmap for a future quarter.
Decision Registry
A complete, structured record of every consequential AI decision in your production environment — defined, categorised, and documented to regulatory standard.
Ownership Assignment
Named human authority attached to each decision category. Clear accountability that survives personnel changes, audits, and incident investigations.
Control Points
Defined, operational override and intervention gates. Human oversight that is structural, not aspirational. Article 14-ready from day one.
Decision Control Index
A scored, auditable assessment of your organisation's control state across all active AI decision surfaces. A baseline. A benchmark. A board-level instrument.
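As a sketch of what a scored control state could look like: the actual Decision Control Index methodology is not described on this page, so the dimensions, weights, and surface names below are assumptions chosen to mirror the three themes above (visibility, ownership, interruptibility).

```python
# Hypothetical scoring: each active AI decision surface is rated 0-1 on three
# control dimensions. Dimension names and weights are illustrative only.
WEIGHTS = {"visibility": 0.3, "ownership": 0.4, "interruptibility": 0.3}

surfaces = {
    "imaging-triage":    {"visibility": 1.0, "ownership": 1.0, "interruptibility": 0.5},
    "discharge-support": {"visibility": 0.5, "ownership": 0.0, "interruptibility": 0.0},
}

def surface_score(ratings: dict[str, float]) -> float:
    """Weighted control score for one decision surface."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

# The index aggregates per-surface scores into one board-level number.
index = sum(surface_score(r) for r in surfaces.values()) / len(surfaces)
for name, ratings in surfaces.items():
    print(f"{name}: {surface_score(ratings):.2f}")
print(f"Decision Control Index: {index:.2f}")
```

Even in this toy form, the value is the breakdown: a single weak surface (here, an unowned, non-interruptible discharge-support pathway) is visible rather than averaged away in narrative reporting.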
05 — Who This Is For
This is not a general AI governance engagement. It is a precision instrument for organisations where AI decisions carry clinical and regulatory consequence.
SaMD & Clinical AI
Software as a Medical Device where AI output influences clinical decision-making. Imaging, triage, diagnostic support, treatment recommendation systems operating in regulated environments.
EU AI Act High-Risk
Organisations operating AI systems classified as high-risk under Annex III. Where Article 14 compliance is mandatory and human oversight must be demonstrable — not asserted.
Regulated Health Systems
Health systems, hospital groups, and digital health organisations deploying AI in care pathways where clinical and regulatory accountability cannot be delegated to a vendor.
06 — Regulatory Alignment
Not interpreted from guidance. Not extrapolated from general compliance practice. We address the specific control requirements that healthcare AI organisations face under current and incoming regulation.
Human Oversight
Article 14 requires that high-risk AI systems are designed and deployed to allow natural persons to effectively oversee them during the period of use. Our Control Points architecture directly operationalises this requirement — creating structural human oversight capability, not policy-level documentation of intent.
User Intervention Expectations
GSPR 14.2(d) expects that SaMD supports user intervention in automated processes. Our control layer generates structured evidence that intervention pathways are defined, assigned, and documented — supporting the demonstration of user override capability expected during conformity assessment. Organisations should seek independent legal review of specific compliance claims.
Ongoing Decision Monitoring
Our Decision Registry provides the structured decision data necessary for meaningful post-market surveillance. Rather than aggregating system outputs, organisations hold a discrete record of AI decisions — with context, ownership, and consequence — that satisfies the intent of PMS requirements and withstands notified body scrutiny.
Request a Guided Assessment
A Guided Assessment establishes whether this is the right engagement for your organisation — and what your current control exposure looks like.
I work directly with teams running this diagnostic. If you're deploying AI in a regulated healthcare environment and want to understand where decision control may be breaking down, reach out directly.
— Thokozile Phiri, Founder

No preparation required. No commitment beyond the conversation.
Questions
What is the AI Decision Control Diagnostic?
The AI Decision Control Diagnostic is an engagement by Giggle AI Innovation that installs a control layer over AI-driven decisions in production, making decisions visible, owned, and interruptible. It is not a self-serve dashboard — it is delivered through a guided assessment, diagnostic, and installation process.

What is AI decision control?
AI decision control is the ability to identify AI-driven decisions in production, assign clear ownership to each, define intervention points, and ensure decisions can be reviewed, escalated, or overridden when needed. It is distinct from AI compliance, which documents how a system was built — not how its decisions are governed once deployed.

Is this a software product?
No. This is not a self-serve SaaS platform. It is an operator-led engagement delivered in three phases: an Exposure Scan, an AI Decision Control Diagnostic, and a control layer Installation. It is designed for regulated healthcare AI organisations, not for general-purpose AI monitoring.

Who do you work with?
We work first with healthcare AI organisations — developers and operators of SaMD, clinical decision support systems, imaging AI, and other regulated AI systems operating under EU AI Act and MDR requirements. Engagements are led by CTOs, Heads of AI, clinical AI leads, and regulatory affairs teams.