Understand when AI falls under medical device regulation in the EU, UK and US, including MDR, FDA pathways and MHRA approaches.




Artificial intelligence is no longer confined to experimental healthcare projects. It is now embedded in imaging diagnostics, clinical decision support tools, workflow automation, and patient monitoring software. As these technologies move from pilot environments into routine clinical use, one question consistently surfaces across development and regulatory teams:
At what point does AI fall under medical device regulation?
The answer depends heavily on jurisdiction. The European Union, the United Kingdom, and the United States each regulate AI-enabled medical devices differently, shaped by distinct legal frameworks, regulatory cultures, and risk management philosophies.
For manufacturers, quality leaders, and regulatory professionals, understanding these differences is no longer optional. Regulatory classification affects development strategy, documentation depth, approval timelines, and ultimately market access.
The use of AI in regulated medical products has expanded rapidly over the past decade. By mid-2025, the U.S. Food and Drug Administration had authorised more than 1,200 AI-enabled medical devices, with the majority cleared in the last five years alone.
Radiology remains the most common application area, followed by cardiovascular monitoring, pathology, neurology, and clinical workflow optimisation. These domains benefit from structured datasets and repeatable decision logic, making them well suited to machine-learning models.
Regulatory frameworks, however, have not evolved at the same pace. Rather than introducing entirely new regimes, most authorities have adapted existing medical device legislation, supplemented by guidance documents and pilot programmes. This has created a regulatory environment that rewards early planning and penalises assumptions about how AI will be treated.
Across the EU, UK, and US, regulation is driven by intended use, not by technical complexity.
AI software is regulated as a medical device when it is intended to diagnose, prevent, monitor, predict, treat, or alleviate disease or injury. This applies whether the software operates independently as Software as a Medical Device (SaMD) or is embedded within a physical product as Software in a Medical Device (SiMD).
The boundary becomes less clear with clinical decision support tools. Software that organises or displays information without influencing clinical judgment may fall outside medical device regulation. In contrast, software that prioritises findings, recommends actions, or influences diagnostic or treatment decisions is typically regulated.
In practice, regulators examine how outputs are presented, the level of clinician reliance, and whether independent verification is realistically possible.
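To make that boundary concrete, here is a purely illustrative Python sketch encoding the kinds of questions described above. The profile fields and the three-way outcome are our own simplification for illustration, not regulatory criteria; real qualification decisions rest on guidance documents and legal review, not a checklist.

```python
# Purely illustrative: a simplified encoding of the qualification
# questions discussed above. Not regulatory criteria.
from dataclasses import dataclass


@dataclass
class SoftwareProfile:
    medical_intended_purpose: bool       # diagnose, prevent, monitor, predict, treat, alleviate
    influences_clinical_decisions: bool  # prioritises findings or recommends actions
    independently_verifiable: bool       # clinician can realistically check the basis of outputs


def qualification_signal(p: SoftwareProfile) -> str:
    """Rough heuristic mirroring the boundary described in the text."""
    if not p.medical_intended_purpose:
        return "likely outside medical device regulation"
    if p.influences_clinical_decisions and not p.independently_verifiable:
        return "typically regulated as a medical device"
    if p.influences_clinical_decisions:
        return "borderline: depends on jurisdiction and how outputs are presented"
    return "may fall outside regulation if it only organises or displays information"


# Example: a tool that prioritises suspected findings for radiologist review
print(qualification_signal(SoftwareProfile(True, True, True)))
```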
Under the EU Medical Device Regulation, applicable since May 2021, AI-enabled software is regulated using the same framework as other medical devices. Classification depends on intended purpose and clinical risk, with many diagnostic AI tools falling into Class IIa or IIb, requiring Notified Body assessment.
Manufacturers must demonstrate safety, performance, and clinical benefit through technical documentation, clinical evaluation, quality management systems, and post-market surveillance. While MDR is technology-neutral, AI introduces challenges around validation, transparency, and lifecycle change management.
The EU AI Act, adopted in 2024, introduces a horizontal regulatory framework for AI systems. Most AI-enabled medical devices are classified as high-risk under this regulation, because they are devices, or safety components of devices, that already require third-party conformity assessment under MDR or IVDR.
High-risk classification triggers additional obligations, including requirements for data governance, robustness, human oversight, transparency, and lifecycle monitoring of AI models.
Although the AI Act entered into force in 2024, its obligations are phased. For AI systems already regulated as medical devices, full compliance is expected from August 2027. This extended timeline reflects the current lack of harmonised standards and the complexity of aligning AI-specific obligations with existing MDR requirements.
Ongoing legislative adjustments are expected through 2026 as implementation guidance and standards mature.
The U.S. FDA regulates AI-enabled medical devices through existing classification pathways rather than standalone AI legislation.
AI-enabled devices are classified as Class I, II, or III and reach the market through established pathways, including 510(k) clearance, De Novo classification, or Premarket Approval (PMA), depending on risk.
Most AI devices cleared to date have followed the 510(k) pathway, relying on substantial equivalence to predicate devices. Novel applications without suitable predicates typically follow the De Novo route, while only a small number require PMA.
One of the FDA’s most significant developments is the Predetermined Change Control Plan (PCCP), designed to address AI systems that evolve over time.
A PCCP allows manufacturers to define, in advance, the types of post-market changes an AI system may undergo, along with the methods used to control risk and verify performance. When authorised, approved changes can be implemented without new submissions, provided they remain within the defined scope.
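As a loose illustration of the idea, the sketch below shows how a manufacturer's internal release tooling might gate a retrained model against a pre-specified change envelope. The envelope fields, thresholds, and function names are hypothetical; an actual PCCP is a regulatory document agreed with the FDA, not code.

```python
# Hypothetical sketch: gate a retrained model against a PCCP-style,
# pre-specified change envelope. All names and thresholds are invented.
PCCP_ENVELOPE = {
    "allowed_change_types": {"retraining_on_new_data"},  # pre-defined change types
    "min_sensitivity": 0.92,                             # pre-specified acceptance criteria
    "min_specificity": 0.88,
    "max_auc_drop_vs_baseline": 0.02,
}


def change_within_pccp(change_type: str, metrics: dict, baseline_auc: float) -> bool:
    """True only if the proposed change stays inside the pre-authorised scope."""
    if change_type not in PCCP_ENVELOPE["allowed_change_types"]:
        return False  # out-of-scope changes would need a new submission
    return (
        metrics["sensitivity"] >= PCCP_ENVELOPE["min_sensitivity"]
        and metrics["specificity"] >= PCCP_ENVELOPE["min_specificity"]
        and baseline_auc - metrics["auc"] <= PCCP_ENVELOPE["max_auc_drop_vs_baseline"]
    )


# Example: a retrained model evaluated on a locked verification dataset
ok = change_within_pccp(
    "retraining_on_new_data",
    {"sensitivity": 0.94, "specificity": 0.90, "auc": 0.95},
    baseline_auc=0.96,
)
print("within PCCP scope" if ok else "escalate: new submission required")
```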
In 2025, the FDA issued draft guidance on lifecycle management for AI-enabled medical device software. The guidance emphasises a Total Product Life Cycle approach, covering development, validation, deployment, and post-market monitoring.
Key expectations include transparency of model behaviour, performance metrics linked to clinical claims, bias assessment, and post-market performance tracking.
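One way to picture post-market performance tracking is a rolling-window monitor over confirmed clinical outcomes. The sketch below is our own illustration of that pattern; the window size, the choice of sensitivity as the metric, and the alert floor are assumptions, not values taken from the guidance.

```python
# Illustrative post-market monitor: rolling sensitivity over confirmed
# cases, flagged when it falls below a pre-defined floor. Window size
# and floor are invented for illustration.
from collections import deque


class RollingSensitivityMonitor:
    def __init__(self, window: int = 500, floor: float = 0.90):
        self.outcomes = deque(maxlen=window)  # (predicted_positive, truly_positive)
        self.floor = floor

    def record(self, predicted_positive: bool, truly_positive: bool) -> None:
        self.outcomes.append((predicted_positive, truly_positive))

    def sensitivity(self) -> float | None:
        hits = [pred for pred, truth in self.outcomes if truth]
        if not hits:
            return None  # no confirmed positives in the window yet
        return sum(hits) / len(hits)

    def drift_alert(self) -> bool:
        s = self.sensitivity()
        return s is not None and s < self.floor


# Example: feed confirmed outcomes as they arrive from clinical follow-up
monitor = RollingSensitivityMonitor(window=200, floor=0.90)
monitor.record(predicted_positive=True, truly_positive=True)
monitor.record(predicted_positive=False, truly_positive=True)  # a missed case
print(monitor.sensitivity(), monitor.drift_alert())
```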
Following Brexit, the UK regulates AI-enabled medical devices under the UK Medical Devices Regulations 2002, overseen by the MHRA.
The MHRA’s Software and AI as a Medical Device Change Programme addresses qualification, classification, and post-market oversight challenges specific to AI. The programme focuses on explainability, adaptivity, and lifecycle management rather than introducing separate AI legislation.
In 2025, the MHRA announced reforms enabling greater reliance on approvals from trusted regulators such as the FDA, Health Canada, and Australia’s TGA. This applies to Software as a Medical Device, including AI-based systems, and reduces duplication for manufacturers pursuing UK market access.
The MHRA’s AI Airlock pilot introduced a regulatory sandbox for AI medical devices, allowing regulators and developers to test regulatory approaches collaboratively. Insights from this initiative continue to inform UK policy development.
While all three jurisdictions apply risk-based regulation, the practical implications differ:

- In the EU, most diagnostic AI requires Notified Body assessment under MDR, with additional AI Act obligations phasing in through August 2027.
- In the US, the FDA applies established premarket pathways, with PCCPs allowing pre-authorised model changes without new submissions.
- In the UK, the MHRA builds on the 2002 regulations, with growing reliance on approvals from trusted international regulators and sandbox-informed guidance.
These differences influence documentation depth, development planning, and market sequencing decisions.
For organisations developing AI-enabled medical devices, several themes are consistent across jurisdictions:

- Intended use, not technical sophistication, determines whether software is regulated.
- Lifecycle change management must be planned before a model reaches the market.
- Transparency and explainability of model behaviour are expected, not optional.
- Post-market surveillance and ongoing performance monitoring carry as much weight as premarket evidence.
Manufacturers that integrate regulatory thinking into early design decisions are better positioned to scale across markets without rework.
AI regulation in medical devices is no longer theoretical. It is active, enforceable, and increasingly interconnected across regions.
Organisations that succeed will be those that view regulatory compliance as part of responsible AI development rather than a post-hoc hurdle. As frameworks mature, trust, transparency, and governance will become as important as technical performance.
Learnova's Navigating AI-Enabled Medical Devices Masterclass provides practical training on regulatory and quality expectations across the EU, UK, and US.
Led by Leon Doorn, CEO and Co-Founder of MedQAIR, the programme focuses on regulatory strategy, technical documentation, AI-specific risk management, and post-market oversight.
Dates: April 22nd & 23rd, 2026
Format: Virtual, two half-day sessions

