AI Decision Control for Healthcare AI

Your AI is compliant.
That is not the same
as controlled.

Healthcare AI systems that pass audits still make decisions nobody owns. We close the decision control gap — making every AI-driven decision in production visible, owned, and interruptible.

Not sure if this applies to your system? Send a message first.

01 — The Problem

Compliance proves you documented.
It does not prove you control.

Most healthcare AI organisations have invested heavily in compliance. Audit trails, model cards, validation studies, regulatory submissions. They can demonstrate their systems were built correctly.

What they cannot demonstrate is control over the decisions those systems are making right now — in production, on live patient pathways, in real time.

The gap between compliance and control is where clinical and regulatory risk accumulates. Quietly. Systematically. Until it doesn't.

No one owns the decision

AI recommendations pass through clinical workflows with no assigned accountability. When outcomes diverge, ownership is contested. Investigations go nowhere.

No mechanism to interrupt

Systems operating in production have no defined override pathway. Human oversight exists on paper. In practice, the AI operates without effective control points.

Decisions are invisible at the point of consequence

Logging captures outputs. It does not capture the decision — who was in the loop, what context applied, what authority was exercised.

EU AI Act changes the exposure

Article 14 requires demonstrable human oversight — not nominal oversight. Organisations that cannot show control, not merely compliance, face material regulatory risk.

02 — What We Do

Not a tool.
Not a framework.
A Decision Operating System.

We install a structured control layer over the AI decisions your organisation makes in production. We do not replace your systems. We give those systems the architecture they are currently missing.

The absence of that architecture is not a compliance gap. It is a control gap. The two are different problems with different solutions.

Not a self-serve dashboard
Not a compliance checklist
Not an audit exercise
01

Decision Objects

Every consequential AI output is formalised as a discrete Decision Object: a defined artefact with inputs, rationale, context, and consequence. Not a log entry. A decision.

02

Ownership

Each Decision Object is assigned to a named human authority. Ownership means accountability is not distributed into ambiguity — it is explicit, documented, and auditable.

03

Control Points

Defined intervention gates at which a human authority can review, modify, or override an AI decision. Not aspirational oversight. Structural interruption capability.
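To make the three components above concrete, here is a minimal sketch of what a Decision Object with an owner and an operational control point could look like in code. All names (`DecisionObject`, `Disposition`, `override`) are hypothetical illustrations, not the engagement's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Disposition(Enum):
    PENDING = "pending"        # awaiting review at a control point
    APPROVED = "approved"      # human authority confirmed the AI output
    OVERRIDDEN = "overridden"  # human authority substituted their own decision


@dataclass
class DecisionObject:
    """A consequential AI output formalised as a discrete, owned decision."""
    decision_id: str
    inputs: dict        # data the model acted on
    rationale: str      # why the model produced this output
    consequence: str    # what happens downstream if it stands
    owner: str          # named human authority, never a team alias
    disposition: Disposition = Disposition.PENDING
    history: list = field(default_factory=list)

    def override(self, authority: str, replacement: str, reason: str) -> None:
        """A control point in action: a named human interrupts the AI decision."""
        self.history.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "by": authority,
            "replaced_with": replacement,
            "reason": reason,
        })
        self.disposition = Disposition.OVERRIDDEN
```

The point of the sketch is the shape, not the fields: every decision carries its context, a named owner, and a structural path to interruption that leaves an auditable trace.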

The Principle
A healthcare AI system with decision control does not merely produce outputs.
It produces decisions — with visibility, ownership, and the structural capacity for human control.
That is what regulators mean by oversight. That is what boards should be demanding.

03 — How It Works

Three phases.
One outcome: control.

Each phase is scoped, fixed in time, and ends with a defined output — not an ongoing engagement.

Phase One

Exposure Scan

A structured diagnostic of your current AI deployment to identify where decisions are being made without ownership, visibility, or control. We map your live decision surface — the gap between what your documentation says and what your systems are doing.

No prior preparation required. No system changes. An independent read of your actual control state.

3–5 days
Phase Two

AI Decision Control Diagnostic

A deep engagement with your AI decision infrastructure. We work with your CTO, Head of AI, and clinical leadership to define the Decision Registry, assign ownership, establish Control Points, and produce your Decision Control Index — a scored, auditable picture of your control state.

Fixed scope. No open-ended retainer. Pricing discussed on the Guided Assessment call.

2–4 weeks
Phase Three

Installation

The control layer is put in place. Decision Objects are formalised. Ownership is assigned. Control Points are operational. Your organisation moves from nominal oversight to structural control — documented, defensible, and ready for regulatory scrutiny.

This is not a handover document. It is a working system.

Timeline defined at Diagnostic close

Ready to establish your control state?

The Guided Assessment call takes 45 minutes. No preparation required.

04 — What You Leave With

A working control system.
Not a report.

Every engagement ends with operational infrastructure — artefacts that are live, assigned, and functioning. Not recommendations waiting to be prioritised. Not a roadmap for a future quarter.

Decision Registry

A complete, structured record of every consequential AI decision in your production environment — defined, categorised, and documented to regulatory standard.

Ownership Assignment

Named human authority attached to each decision category. Clear accountability that survives personnel changes, audits, and incident investigations.

Control Points

Defined, operational override and intervention gates. Human oversight that is structural, not aspirational. Article 14-ready from day one.

Decision Control Index

A scored, auditable assessment of your organisation's control state across all active AI decision surfaces. A baseline. A benchmark. A board-level instrument.
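As an illustration only: the real Decision Control Index methodology is defined during the Diagnostic, but the shape of such a score can be sketched as the fraction of decision surfaces that are simultaneously visible, owned, and interruptible. The function and field names below are hypothetical:

```python
def control_index(decisions: list[dict]) -> float:
    """Fraction of decision surfaces that are visible, owned, AND interruptible.

    Each dict represents one decision surface; a surface only counts as
    controlled when all three conditions hold at once.
    """
    if not decisions:
        return 0.0
    controlled = sum(
        1 for d in decisions
        if d.get("logged") and d.get("owner") and d.get("override_path")
    )
    return round(controlled / len(decisions), 2)
```

A surface that is logged but has no named owner, or an owner but no override pathway, scores zero: partial control is the gap the index is built to expose.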

Not this
Slide deck recommendations
Gap analysis report
Roadmap document
Best-practice framework
Vendor assessment

05 — Who This Is For

Built first for
regulated healthcare AI.

This is not a general AI governance engagement. It is a precision instrument for organisations where AI decisions carry clinical and regulatory consequence.

SaMD & Clinical AI

Software as a Medical Device where AI output influences clinical decision-making. Imaging, triage, diagnostic support, treatment recommendation systems operating in regulated environments.

EU AI Act High-Risk

Organisations operating AI systems classified as high-risk under Annex III. Where Article 14 compliance is mandatory and human oversight must be demonstrable — not asserted.

Regulated Health Systems

Health systems, hospital groups, and digital health organisations deploying AI in care pathways where clinical and regulatory accountability cannot be delegated to a vendor.

This engagement is led by
CTO  •  Head of AI  •  Head of Product (AI)  •  Chief Medical Officer  •  Clinical AI Lead  •  Head of Regulatory Affairs  •  Chief Risk Officer  •  General Counsel

06 — Regulatory Alignment

Built around what regulators are actually requiring.

Not interpreted from guidance. Not extrapolated from general compliance practice. We address the specific control requirements that healthcare AI organisations face under current and incoming regulation.

EU AI Act
Article 14

Human Oversight

Article 14 requires that high-risk AI systems are designed and deployed to allow natural persons to effectively oversee them during the period of use. Our Control Points architecture directly operationalises this requirement — creating structural human oversight capability, not policy-level documentation of intent.

MDR
GSPR 14.2(d)

User Intervention Expectations

GSPR 14.2(d) expects that SaMD supports user intervention in automated processes. Our control layer generates structured evidence that intervention pathways are defined, assigned, and documented — supporting the demonstration of user override capability expected during conformity assessment. Organisations should seek independent legal review of specific compliance claims.

Post-Market
Surveillance

Ongoing Decision Monitoring

Our Decision Registry provides the structured decision data necessary for meaningful post-market surveillance. Rather than aggregating system outputs, organisations hold a discrete record of AI decisions — with context, ownership, and consequence — built to support the intent of PMS requirements and to stand up to notified body scrutiny.

Request a Guided Assessment

Compliance does not
guarantee control.
We create it.

A Guided Assessment establishes whether this is the right engagement for your organisation — and what your current control exposure looks like.

I work directly with teams running this diagnostic. If you're deploying AI in a regulated healthcare environment and want to understand where decision control may be breaking down, reach out directly.

— Thokozile Phiri, Founder

No preparation required. No commitment beyond the conversation.

AI Decision Control Diagnostic  •  2–4 weeks  •  Fixed scope  •  No retainer

Questions

What is the AI Decision Control Diagnostic?

The AI Decision Control Diagnostic is an engagement by Giggle AI Innovation that installs a control layer over AI-driven decisions in production, making decisions visible, owned, and interruptible. It is not a self-serve dashboard — it is delivered through a guided assessment, diagnostic, and installation process.

What is AI decision control?

AI decision control is the ability to identify AI-driven decisions in production, assign clear ownership to each, define intervention points, and ensure decisions can be reviewed, escalated, or overridden when needed. It is distinct from AI compliance, which documents how a system was built — not how its decisions are governed once deployed.

Is this a SaaS platform?

No. This is not a self-serve SaaS platform. It is an operator-led engagement delivered in three phases: an Exposure Scan, an AI Decision Control Diagnostic, and a control layer Installation. It is designed for regulated healthcare AI organisations, not for general-purpose AI monitoring.

Who is this for?

We work first with healthcare AI organisations — developers and operators of SaMD, clinical decision support systems, imaging AI, and other regulated AI systems operating under EU AI Act and MDR requirements. Engagements are led by CTOs, Heads of AI, clinical AI leads, and regulatory affairs teams.

From the Founder

Article — AI Decision Control

The question that exposes most AI governance programs

One question most AI teams cannot answer without reconstruction — and what that reveals about the gap between compliance and control.

Read the full article