And a registry tick isn't going to save us.
I've been genuinely troubled by something I keep hearing from trusts, and even from regulatory experts, about the NHSE AVT Registry: the assumption, apparently widespread, that selecting a product from that list either bypasses local governance or means the product has received some form of national approval.
It hasn't. Every compliance check on that registry is self-certified. No independent oversight. No external validation. Not even a requirement beyond Class I medical device status, which is itself self-certified.
And yet the assumption persists. And that assumption is dangerous.
The checklist has become the assurance. And that's where we go wrong.
We've all watched the story unfold: a major AI deployment halted days before go-live. Not because the technology failed. Not because regulation failed.
Because assurance failed.
And this is becoming a pattern.
As digital health accelerates, with AI moving fastest of all, we're seeing the unchallenged rise of what I'd call checklist governance:
- Is it on a registry? ✓
- Has the manufacturer declared it compliant? ✓
- Do the claims meet minimum stated requirements? ✓
Box ticked. Move on.
The problem isn't that registries exist. They're a step in the right direction. The problem is what happens when a signal gets mistaken for a verdict. When a list becomes a catalogue of pre-approved solutions. When trusts assume the hard work of validation has been done by someone else.
It hasn't. The responsibility still sits locally. And local teams, working with incomplete information, are being set up to fail.
This model doesn't scale. Not even close.
This approach might just about hold up in a system with a handful of technologies. Digital health is not that system.
We're talking thousands of products. Hundreds of AI tools. New entrants every single week. Deployed across one of the largest employers on the planet.
Expecting every Trust, every ICB, every local governance team to independently validate every solution from first principles is not governance. It's a fantasy. And the consequences of that fantasy are already showing up:
- Duplication of effort eating into already stretched capacity.
- Inconsistent standards creating variable and unpredictable risk.
- Late-stage interventions and deployment delays.
- Near-misses that, so far, we've been fortunate to catch.
The system is not set up for this. And it is going to get worse before it gets better, unless we change the model.
The answer isn't more lists. It's real assurance.
Independent. Centralised. Consistent.
Assurance that goes beyond what a manufacturer says about themselves. That asks the questions a self-certification form was never designed to answer:
- How does this AI model actually behave in real-world clinical settings?
- What happens at the edge cases?
- Where are the failure points, and what controls exist to catch them?
- What independent, reproducible evidence actually supports the claims being made?
This isn't about replacing local governance. It's about giving local teams something solid to build on. Instead of starting from zero every time, they start from a foundation of independently validated information and focus their expertise where it genuinely adds value: local context, integration, and implementation.
That's governance that actually works.
The cost of getting this wrong is too high to ignore.
Patient safety is the first risk, and the most important. Unvalidated technologies reaching patients because the system assumed someone else had checked them.
But the costs run further. Reputational damage when deployments fail publicly. Wasted investment in tools that can't scale or can't be trusted. Clinicians and patients losing faith in digital health at exactly the moment it has the most to offer.
And perhaps most damaging of all: we slow down the adoption of genuinely transformative technology, not because innovation isn't ready, but because the system isn't confident enough to trust it.
We make the problem worse by failing to build the infrastructure that would solve it.
This is fixable. But the window to fix it is narrowing.
AI in health is not waiting for governance to catch up. The question has moved on from whether we adopt these technologies to how we adopt them: safely, consistently, and at a scale that actually serves patients.
The shift we need is not complicated to describe. From self-declaration to independent validation. From fragmented local duplication to shared, centralised standards. From lists to assurance.
Because in digital health, trust isn't built on what's listed.
It's built on what's been proven.
What's your experience of governance keeping pace with AI adoption?
I'd genuinely like to hear from people working on this, whether you think the current model is holding up or whether you're seeing the same cracks I am. Reach out to me at jordan.hawkings@orchahealth.com.