The trust gap is the real barrier to AI adoption in European health, not the regulation.
The EU AI Act timeline just got longer. The Omnibus agreement has pushed the highest-risk obligations back: high-risk AI systems to December 2027, and AI embedded in regulated products to August 2028.
The headlines will frame this as good news for the industry. More time. Less pressure. Room to breathe.
But spend any time talking to the people responsible for procuring, deploying and governing AI in hospitals, health systems and payer organisations across Europe, and you quickly realise the deadline was never really the problem. The problem is trust. And a regulatory delay does nothing to fix that.
What we are actually seeing
Significant budgets for digital health and AI exist across Europe. The clinical need is well documented. The technology is maturing rapidly. And yet adoption remains stubbornly slow — pilots stall, and procurement decisions that should take months take years.
The reason, again and again, is the same. The people on the other side of the procurement table — Chief Medical Officers, Clinical Safety Officers, Information Governance leads, Board members who will be personally accountable if something goes wrong — cannot get a satisfactory answer to the question they are all quietly asking: How do I know this is safe?
That is not an unreasonable question. It is exactly the right question. And it is not being asked because those people are obstructive or risk-averse. It is being asked because fewer than 20% of digital health tools on the open market meet baseline safety and clinical standards. The people responsible for deploying AI to patients and clinicians have every reason to be cautious.
The regulation was supposed to help with this
One of the genuine promises of the EU AI Act was a common language for safety and trustworthiness in AI — a framework that developers and deployers could both point to. Documented risk management. A technical file. Evidence of accuracy and robustness. Human oversight mechanisms. Post-market monitoring. For health and care, where the stakes are highest, this kind of structured assurance matters enormously.
The delay does not invalidate that framework. It removes the external forcing function that was pushing organisations to engage with it.
That is where the real risk lies. Not in the regulatory penalty, significant though fines of up to €35 million are, but in the possibility that the delay becomes a reason to shelve the whole question until 2027. To carry on as before. To leave the trust gap unaddressed.
What the best organisations are already doing
The health AI developers and deployers moving fastest across European markets are not waiting for 2027. They are building the evidence now. Conducting AI use-case inventories. Mapping data flows. Putting in place the governance structures that responsible deployment requires, not because a regulator has told them to, but because their customers are asking for it. Being able to answer that question clearly is a genuine commercial and clinical differentiator.
The same is true on the deployer side. The hospitals, insurers and health systems making confident AI procurement decisions have developed the internal capability to ask the right questions of developers, and to evaluate the answers. They are not waiting for Brussels to tell them what good looks like.
Where we go from here
The EU AI Act delay is, in one sense, a gift. Sixteen months of additional runway to do the work properly rather than scrambling to meet an arbitrary deadline.
But the trust gap slowing AI adoption across European health will not close by itself in those sixteen months. It closes when developers can demonstrate, clearly and consistently, that their AI is safe, governed, and built to the standard that clinical environments demand.
The regulation will eventually land. The penalties will eventually apply. But long before that, the market will reward the organisations that took trust seriously while everyone else was enjoying the extension.
The delay is not the story. The trust gap is the story. And closing it is the work that matters.
ORCHA has spent 11 years assessing digital health and AI technologies across global health systems. We sit on the EU Task Force for Digital Health and the IEEE Health App Accreditation Group, working to build the frameworks that make safe digital health deployment possible at scale.
If you'd like to be the first to hear about our work in health AI assurance, register your interest. If you're interested in partnering with us as this work progresses, contact us.