March 8, 2026
A New Kind of Approval Problem in Medtech
Part 1 of the series “The New Rules of Medtech” by Sandra Heeg, Editor‑in‑Chief, MedTech Media Europe
Artificial intelligence is already embedded in hundreds of certified medical devices, particularly in radiology, cardiology and digital pathology. Most of these systems are regulated as Software as a Medical Device (SaMD) under the EU Medical Device Regulation (MDR).
Yet AI-based devices raise a regulatory question that traditional medical device frameworks were not originally designed to answer:
What happens when a medical device changes after approval?
Unlike conventional medical devices, AI-based systems rely on software models that may require updates, retraining or recalibration over time. As a result, regulatory approval may no longer represent the end of the regulatory lifecycle. For many manufacturers, the real regulatory challenge begins once the device is already deployed in clinical practice.
Locked vs. Adaptive AI Systems
A key distinction in current regulatory discussions is the difference between locked AI systems and adaptive AI systems.
Most AI-based medical devices currently approved by regulators use locked algorithms. This means the algorithm does not automatically learn from new clinical data once deployed. Any modification to the model must be implemented through a controlled software update released by the manufacturer.
Adaptive AI systems, by contrast, are designed to update their behaviour as they receive new data in real‑world use. While such systems could theoretically improve continuously, they also raise major regulatory challenges because the behaviour of the system may evolve after certification.
For this reason, most AI-based devices currently on the market rely on locked models combined with controlled update cycles and strict version management.
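For illustration, strict version management for a locked model can come down to something very simple: the certified release is recorded together with a cryptographic hash of its frozen weights, and any deployed copy must match that hash exactly. The following is a minimal sketch; the field names and structure are assumptions for this article, not terms from the MDR or any standard.

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class ModelRelease:
    """One certified, locked model version and its provenance."""
    version: str              # e.g. "2.1.0"
    weights_sha256: str       # hash of the frozen model weights
    training_data_id: str     # identifier of the fixed training snapshot
    validation_metrics: dict  # results of the validation study

def verify_deployed_weights(weights: bytes, release: ModelRelease) -> bool:
    """A locked model must match the certified release byte for byte."""
    return hashlib.sha256(weights).hexdigest() == release.weights_sha256
```

Under this scheme, any change to the model, however small, produces a new hash and therefore a new, explicitly released version, which is exactly the property a locked algorithm is meant to guarantee.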
How AI Software Is Regulated Under the MDR
Under the EU Medical Device Regulation (MDR), most AI-based software falls under Rule 11, which governs standalone medical software.
Depending on how the software influences clinical decisions, devices may be classified as Class IIa, IIb or III. Compared with the previous framework under the Medical Device Directive (MDD), Rule 11 has moved many medical software products into higher risk classes.
Manufacturers must therefore undergo conformity assessment with a notified body and demonstrate compliance with key standards such as IEC 62304 (software lifecycle processes) and ISO 14971 (risk management).
When Software Updates Become a Regulatory Issue
Under MDR, manufacturers must determine whether a modification to a certified device qualifies as a “significant change.” If so, the modification may require renewed conformity assessment by a notified body.
Guidance for interpreting such changes is provided in MDCG 2020‑3, which outlines how significant changes should be evaluated under MDR.
For software-based devices, modifications to algorithm logic or decision mechanisms can potentially qualify as significant changes depending on their impact on safety, clinical performance or intended use.
Example: Updating an AI Radiology System
Consider an AI-based radiology tool designed to detect lung nodules in CT scans.
The algorithm is trained using large datasets and receives CE certification after demonstrating high diagnostic performance during validation studies.
Once deployed across hospitals, however, the manufacturer may collect additional real-world data and retrain the model to improve detection performance.
At this point several regulatory questions arise:
• Does the retrained model remain within the scope of the original certification?
• Does the update represent a significant change under MDR?
• Would the update require renewed conformity assessment?
In practice, these decisions depend on whether the modification affects the device’s intended use, risk profile or clinical performance.
Post‑Market Surveillance and Real‑World Monitoring
Even devices built on locked algorithms require continuous monitoring once deployed.
Under MDR, manufacturers must implement Post‑Market Surveillance (PMS) and Post‑Market Clinical Follow‑Up (PMCF) processes to monitor real‑world performance.
For AI-based devices, this monitoring may include:
• Tracking model performance in real‑world datasets
• Detecting data drift
• Monitoring false positives and false negatives
These processes introduce a form of lifecycle oversight that goes beyond traditional device regulation.
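The monitoring tasks listed above can be sketched in a few lines of code: compare real-world sensitivity and specificity against the floors demonstrated in the validation study, and flag a crude drift signal when the distribution of model output scores shifts. The thresholds below are illustrative assumptions, not regulatory values.

```python
from statistics import mean

# Illustrative thresholds; a real PMS plan would define these per device.
MIN_SENSITIVITY = 0.90
MIN_SPECIFICITY = 0.85
MAX_MEAN_SHIFT = 0.10   # allowed shift in mean prediction score

def performance_ok(tp: int, fp: int, tn: int, fn: int) -> bool:
    """Check real-world sensitivity/specificity against validated floors."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity >= MIN_SENSITIVITY and specificity >= MIN_SPECIFICITY

def drift_detected(baseline_scores, live_scores) -> bool:
    """Crude data-drift signal: shift in the mean model output score."""
    return abs(mean(live_scores) - mean(baseline_scores)) > MAX_MEAN_SHIFT
```

In practice, manufacturers use more sophisticated statistics, but the principle is the same: define acceptable bounds up front, measure continuously, and treat an out-of-bounds result as a trigger for investigation.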
How the FDA Is Approaching AI Updates
Regulators in the United States have developed an alternative approach to managing algorithm updates.
The US Food and Drug Administration (FDA) has introduced the concept of a Predetermined Change Control Plan (PCCP). Under this model, manufacturers define in advance which types of algorithm changes may occur after approval and how those changes will be validated.
If updates remain within the predefined change protocol, they may be implemented without requiring a completely new regulatory submission.
This approach effectively regulates the process of change rather than every individual update.
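Conceptually, a predetermined change protocol can be thought of as an envelope agreed in advance: which change types are permitted, and which performance bounds a retrained model must stay within. The sketch below illustrates that idea only; the change-type names and acceptance rule are assumptions for this article, not FDA-defined terms.

```python
# Hypothetical pre-agreed envelope for post-approval changes.
APPROVED_CHANGE_TYPES = {"retraining_same_indication"}
MAX_ALLOWED_DEGRADATION = 0.0   # no metric may fall below its baseline

def within_change_protocol(change_type: str,
                           baseline: dict,
                           candidate: dict) -> bool:
    """True if an update stays inside the predefined change envelope."""
    if change_type not in APPROVED_CHANGE_TYPES:
        return False   # e.g. a new intended use needs a fresh submission
    return all(candidate[m] >= baseline[m] - MAX_ALLOWED_DEGRADATION
               for m in baseline)
```

An update that passes such a check could be released under the pre-agreed plan; one that fails, or that falls outside the approved change types, would go back through the regulator.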
Why Adaptive AI May Become Relevant
Today most certified AI medical devices rely on locked algorithms. However, several technological developments could increase interest in adaptive systems in the future.
These include growing clinical datasets, connected devices and real‑time monitoring systems that generate continuous streams of patient data.
Adaptive models could potentially respond to changes in patient populations, imaging equipment or clinical environments more quickly than static models.
However, deploying such systems safely would require robust governance mechanisms, including algorithm change protocols, continuous validation procedures and strong post‑market monitoring.
A Shift Toward Lifecycle Governance
The regulatory debate around AI reflects a broader shift in the medical device industry.
As devices become increasingly software-driven, regulatory oversight may gradually move from a single approval event toward continuous lifecycle management.
For manufacturers, this means regulatory strategy may increasingly involve managing updates, monitoring real‑world performance and maintaining robust algorithm governance processes.
In this sense, artificial intelligence may not only change how medical devices function.
It may also change how they are regulated.
Author: Sandra Heeg
© 2026 Sandra Heeg. All rights reserved. Copyright retained by the author. Publication on medtechmediaeurope.com is permitted by the author and may be withdrawn at any time.