Real-world-data enabled assessment for health regulatory decision-making

REALM Academy Workshop 1: Multi-Stakeholder Insights for Evaluating Medical AI

5 November 2025, Brussels, Belgium

How should we evaluate AI medical devices when evidence standards are evolving and stakeholders have competing priorities? This question guided REALM Academy's first in-person workshop, which brought together more than 25 European experts from 9 countries: regulators, HTA bodies, notified bodies, medical device manufacturers, and researchers.

The workshop combined expert input with case-based learning to surface real tensions in AI medical device evaluation. Prof. Tom Melvin (University of Galway) opened with an overview of the current regulatory landscape, focusing on how the new EU AI Act interacts with existing medical device regulations (MDR/IVDR). Most AI medical devices will be assessed by notified bodies under both frameworks, each with its own risk classification system. The first guidance on this interplay was published in June 2025, making this genuinely new territory for evaluators.


Participants then evaluated two fictional but realistic AI medical device cases: a lung nodule detection system (Class IIa) and a breast cancer treatment planning tool (Class IIb). Both cases included deliberately incomplete data to mirror the gaps and ambiguities evaluators face in practice. This approach surfaced important divergences in how different stakeholders assess the same technology.

  • Evidence thresholds vary by stakeholder role. What constitutes "sufficient evidence" differs fundamentally across stakeholder groups. Regulatory approval, HTA reimbursement, and manufacturer feasibility each require different types and levels of evidence. These are not disagreements to resolve but structural tensions that evaluation frameworks must navigate.
  • Adaptive learning governance remains uncharted territory. The case featuring continuous post-market algorithm updates sparked the most intense discussion. How do we enable innovation through adaptive learning while maintaining patient safety and regulatory oversight? No clear framework exists yet, making this a confirmed priority for REALM's ongoing work.
  • Diverse challenges persist. When asked about the biggest barriers to bringing AI medical devices to patients, participants identified concerns spanning trust, evaluation standards, explainability, interoperability, cost, data access, and validation processes. No single bottleneck emerged; instead, participants described a complex web of interdependent challenges.

Full analysis of evaluation patterns across stakeholder groups will be published in 2026. The insights from this workshop are already informing REALM's tool development priorities and will shape the design of future REALM Academy workshops.

REALM Academy Workshop 2 (16 January 2026, Brussels) will focus on patient perspectives on AI medical device evaluation. Workshop 3 (Spring 2026, Brussels) will demonstrate the practical REALM tools developed in response to stakeholder needs. Role-specific online training modules for regulators, HTA bodies, and developers will launch throughout 2026. Learn more about REALM Academy and register for the upcoming events at the link.