AI-enabled technologies are now a permanent and growing part of digital health.
From patient-facing tools to systems that influence clinical decision-making, AI is moving rapidly from experimentation into everyday use across health systems.
While ambition and innovation in this space are high, one challenge is becoming increasingly clear: assurance mechanisms have not scaled at the same pace as AI adoption.
Health systems, commissioners, regulators, and suppliers are navigating a complex landscape of emerging regulation, overlapping standards, and evolving expectations - often without shared, operational ways to assess risk, governance, and safety in practice.
In response, ORCHA has established the Digital Health AI Advisory Group.
Over the past year, ORCHA has seen a significant increase in AI-enabled digital health products entering assessment pathways. These products vary widely in how AI is used, from low-risk automation and personalisation to systems that influence diagnosis, treatment, or clinical prioritisation.
What has become clear is that treating “AI” as a single category is no longer viable.
At the same time:
High-level AI principles are widely agreed, but difficult to apply consistently
Health systems face fragmented, duplicative, or opaque assessment approaches
Innovators lack clarity on what “good” looks like for governance and evidence
Regulators and policymakers need assurance models that are practical, not theoretical
The Digital Health AI Advisory Group has been formed to help address these challenges, not by creating new rules, but by supporting the translation of existing expectations into shared, assessable practice for digital health.
The Digital Health AI Advisory Group exists to advise and inform the development of ORCHA’s AI Assurance Module, helping ensure it is:
Proportionate to real-world risk
Grounded in clinical and system context
Aligned with existing and emerging regulation
Practical for innovators to engage with
Credible and defensible for health systems to adopt
The Advisory Group will support ORCHA across several key areas: proportionate risk classification, governance, regulatory alignment, and the communication of assurance outcomes.
A core principle underpinning this work is that risk is contextual.
AI-related risk depends on how a system is used, what function it performs, and the impact it may have on users and care pathways - not simply on the presence of AI itself.
The group will provide input on approaches that distinguish between:
Different AI techniques
Functional roles within digital health products
Clinical and non-clinical contexts of use
This supports proportionate assurance and helps avoid both over- and under-scrutiny.
Many of the most significant challenges associated with AI in health are not purely technical. They relate to governance: ownership, oversight, change management, monitoring, and accountability over time.
The Advisory Group will help inform how governance expectations within the AI Assurance Module are:
Clearly articulated
Assessable in practice
Appropriate to the level of risk and use case
Aligned with how health systems and suppliers operate
ORCHA’s approach is not to invent new standards, but to align with and operationalise existing regulatory and policy frameworks relevant to AI and digital health.
The Advisory Group will provide insight on:
Mapping regulatory principles to practical assessment criteria
Identifying areas where guidance lacks operational clarity
Ensuring the framework can adapt as regulation and policy continue to evolve
A key aim of the AI Assurance Module is to support informed use, rather than binary pass/fail judgements.
The Advisory Group will help shape how assurance outcomes are communicated, with a focus on clarity around:
What has been assessed
In what context
At what level of depth
And where limitations or boundaries remain
This transparency is essential for building confidence among health systems, suppliers, clinicians, and patients.
The Digital Health AI Advisory Group forms part of a wider, phased programme of work at ORCHA, alongside beta testing, academic collaboration, and engagement with policy bodies, health systems, and industry partners.
This recognises that AI assurance is not static. As technologies, use cases, and expectations change, assurance approaches must evolve - grounded in shared understanding and real-world application.
The establishment of the Digital Health AI Advisory Group marks an important step in supporting safe, responsible adoption of AI across digital health.
By convening diverse expertise and focusing on practical, proportionate assurance, ORCHA aims to help create an ecosystem where:
Innovators have clearer pathways to trust
Health systems can adopt AI with confidence
And patients and clinicians benefit from transparent safeguards
ORCHA will continue to share insights, milestones, and opportunities to engage as this work progresses.