Governing the AI stack: why healthcare must look beyond individual tools

23 March 2026

Artificial intelligence is rapidly becoming part of everyday healthcare. Tools now assist with diagnosis, documentation, triage and clinical decision-making, and adoption across the health sector is accelerating.

This is both exciting and necessary. Healthcare systems face workforce pressures, rising demand and increasing complexity. Used well, AI can support clinicians and improve patient care in meaningful ways.

But as I watch AI enter clinical workflows, I believe we are focusing on the wrong question.

Most discussions centre on whether an individual AI tool is safe, accurate or effective. These are important questions. But they can obscure a larger issue: what happens when multiple tools operate together inside a healthcare system.

I call this the challenge of the AI stack.

The problem we are not yet governing

Today, most AI tools used in healthcare are validated individually. They are tested in isolation to demonstrate accuracy or performance, rather than assessed as components of an interacting stack.

But clinical care is delivered by teams inside complex systems with established assurance and accountability structures.

As a clinician, I am registered and credentialed, and my colleagues are held to the same standard. The equipment and systems we use are validated against Australian standards and routinely calibrated. We run simulation exercises to test performance under pressure, and when things go wrong we investigate rigorously to identify the point of failure. That level of whole-of-system assurance is largely absent when multiple AI tools are introduced into a workflow without being validated together.

In practice, AI rarely operates alone. Documentation tools, clinical decision support, coding, triage and patient-facing applications can run in parallel and exchange information via the electronic medical record and other integrations.

When AI is layered into these workflows, the relevant unit of risk is no longer a single product, but the combined system. Interactions can produce behaviours that no individual tool was designed or tested to create.

This is the gap: we validate tools, but we do not routinely validate the stack they form in real care settings.

When tools are not validated together, errors can propagate. A flawed output can enter the patient record, travel with a referral to a specialist, shape a report, and flow into administrative processes such as coding, billing, payer review and insurance decisions. Once embedded, it can be repeated, amplified or treated as ground truth by downstream systems and people.

When multiple AI-enabled systems interact, cascading effects can be difficult to detect, reproduce and audit, especially when outputs are copied, summarised or transformed from one tool to the next.

These are not simply technical issues. They are governance challenges.

Three perspectives on the shift

In my work across healthcare systems, I see this shift through three lenses: as a patient, a clinician and a healthcare leader. These perspectives are real, and they are unfolding now.

The first is the patient. More patients are arriving with AI-generated diagnoses or explanations. The issue is not necessarily that the technology is wrong, but that patients may be committed to a conclusion before the clinical conversation begins. When that assumption is not surfaced and explored, trust can erode and the therapeutic relationship is compromised. The reverse is also concerning: evidence suggests that when patients present a potential diagnosis, clinicians themselves can anchor to it.

The second is the clinician. If a clinical decision is influenced by AI, we must be able to audit how a recommendation was produced. Yet many tools remain opaque. When reasoning cannot be verified, accountability becomes unclear. I have seen this firsthand: in one case, an audit of the medical record concluded that a clinician had used an AI scribe they were not authorised to use. When questioned, the clinician denied it, attributing the change in documentation quality to reflective practice. The truth was that we could not tell either way. Without a verifiable audit trail, governance becomes guesswork.

The third is the healthcare leader. As leaders, we understand the risks, but we also know we need to adopt AI; not adopting it is not a neutral position. We keep hearing about standards, governance and guardrails, yet the reality is that multiple tools, and even stacked AI, are already in use. We cannot wait for perfect frameworks. We need practical governance solutions that work today.

Together, these perspectives highlight the central tension: moving fast enough to realise AI’s benefits, while safeguarding patients, clinicians and organisations.

A practical governance bridge

Healthcare does not yet have a comprehensive framework for governing the AI stack. Regulation is evolving and evidence is still emerging.

However, there are practical steps we can take now. I describe this as Solution Governance: a pragmatic bridge that helps organisations mitigate risk today, even if it is not perfect.

First, declare the use of AI. Organisations, clinicians and patients should know when AI tools are being used in care delivery. Transparency builds trust and supports accountability.

Second, evidence or explain. Use of AI should be evidence-based for the specific clinical context; where evidence is limited, that uncertainty should be explicitly acknowledged and justified as a known risk.

Third, build contingency. Every AI-enabled workflow should have a clear fallback plan if the tool fails or is unavailable, and staff should know what that contingency is so they can step in safely and confidently.

These steps are not perfect, but they are practical, and they are better than doing nothing whilst governance frameworks catch up with the technology.

Governing the system, not just the tools

AI will continue to shape everyday healthcare. The question is no longer whether we adopt these technologies, but whether we are prepared to govern them responsibly.

For healthcare leaders, the task ahead is clear: move beyond assessing tools in isolation and start governing the systems those tools create together.

When AI operates as a stack, the stack itself becomes part of care delivery: it shapes decisions, documentation and downstream outcomes.

Like every other component of healthcare, it must be transparent, auditable and designed with patient safety at its centre.

AI in healthcare must be governed as an interconnected system rather than as a set of individual tools: risks emerge from interactions across the AI stack, and managing them requires practical, system-level oversight and accountability.

Dr Katrina Sanders is Aspen Medical's Chief Medical Officer.
