Risk-Based Monitoring for AI-Enabled Medical Devices
  • Brooke Haddock
    Data Discern Bridges
  • Isaac R. Rodriguez-Chavez
    4Biosolutions Consulting
Artificial intelligence (AI) is dramatically reshaping the landscape of medical device development and clinical trial management. AI-enabled medical devices promise breakthroughs—from early diagnosis to personalized therapies—yet carry unique risks that existing regulatory frameworks have struggled to fully address. As the name implies, these devices carry two sets of components that must be considered throughout clinical investigations: the AI component, with AI-specific risks, and the device component, with medical device-specific risks. Against this backdrop, successfully implementing risk-based monitoring in clinical investigations of AI-enabled medical devices remains a complex but essential task for sponsors, regulators, and industry alike.

The Growing Challenge of Clinical Trial Monitoring for AI-Enabled Medical Devices

Clinical trials are the crucible where AI’s theoretical benefits are tested in real-world healthcare settings. But AI components—with their real-time data analysis, adaptive learning algorithms, and frequent software updates—introduce challenges that traditional approaches to clinical trial monitoring of medical devices are not equipped to handle. Early identification and management of risks specific to AI components are vital to ensuring patient safety, data integrity, and trial validity. To meet this need, recent regulatory guidance issued by the European Medical Device Coordination Group (MDCG), in conjunction with the European Artificial Intelligence Board, along with complementary frameworks from the US Food and Drug Administration (FDA), provides essential direction for aligning risk-based monitoring with the growing use of AI in medical devices.

As the regulatory environment evolves, sponsors and clinical trial managers must understand and apply these intersecting frameworks to maintain compliance while fostering innovation. This article unpacks the latest landscape of risk-based monitoring for AI-based medical devices, clarifies how overlapping regulations interact, and highlights practical considerations essential for successful AI-enabled medical device clinical studies.

Emerging Global Frameworks on Risk-Based Monitoring for AI-Enabled Medical Devices

In 2025, the MDCG issued detailed guidance entitled “Interplay between the Medical Devices Regulation (MDR) & In Vitro Diagnostic Medical Devices Regulation (IVDR) and the Artificial Intelligence Act (AIA),” developed jointly with the European Artificial Intelligence Board (MDCG, 2025). This document elucidates the complex regulatory architecture governing AI components embedded within regulated medical devices and in vitro diagnostics (IVDs) across the European Union. Central to the guidance is the classification of AI component systems based on risk profiles—a process critical for tailoring monitoring and oversight activities appropriately.

The MDCG confirms that AI embedded within devices falling under MDR/IVDR classifications automatically inherits the device’s risk categorization, thereby defining whether the AI is “high risk.” This categorization triggers enhanced regulatory scrutiny along with specific risk management and monitoring stipulations. For example, AI-powered continuous glucose monitors (CGMs), which predict hypoglycemic episodes by analyzing glucose metrics and alert patients in real time, qualify as high-risk devices under MDR/IVDR due to their safety-critical function. Existing data indicate that such AI algorithms have attained accuracy rates of up to 98.5% in predicting hypoglycemic events, illustrating not only the promise for improved patient outcomes but also the critical need for ongoing monitoring to mitigate potential erroneous alerts or bias.
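The kind of ongoing performance review described above can be made concrete with simple metrics computed from adjudicated trial data. The sketch below is purely illustrative: the function name and confusion-matrix counts are hypothetical and are not drawn from the cited study, although the invented counts happen to yield an overall accuracy of 98.5% to echo the figure reported in the text.

```python
# Hypothetical sketch: summarizing alert performance for an AI-based
# hypoglycemia predictor during a trial. The counts below are invented
# for illustration, not taken from any published study.

def alert_metrics(tp, fp, tn, fn):
    """Return accuracy, sensitivity, and false-alarm rate from
    confusion-matrix counts of predicted vs. confirmed hypoglycemic events."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,       # overall agreement
        "sensitivity": tp / (tp + fn),       # missed events are safety-critical
        "false_alarm_rate": fp / (fp + tn),  # erroneous alerts erode trust
    }

m = alert_metrics(tp=197, fp=4, tn=788, fn=11)
print({k: round(v, 3) for k, v in m.items()})
```

Tracking sensitivity and false-alarm rate separately, rather than accuracy alone, matters because a high overall accuracy can mask missed events or a stream of erroneous alerts—exactly the failure modes the guidance asks monitors to watch for.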

The MDCG guidance closely aligns with the FDA’s established risk-based monitoring principles released in 2023, which emphasize addressing specific critical risks in clinical investigations through oversight by natural persons and ongoing active management within sponsor monitoring plans. Both the MDCG and the FDA advocate strategic design choices for both components, the medical device and the AI system. These design choices enable human operators to retain supervision over AI component functions, ensuring that AI output remains interpretable and subject to intervention when necessary. Both bodies also recommend iterative and stratified risk assessments to adjust monitoring intensity in response to emerging data or identified risks during trials (see Table 1 for comparison).

Approach                        MDCG   FDA
Informed Design Choices         Yes    Yes
Iterative Risk Assessment       Yes    Yes
Stratified Risk Assessment      Yes    Yes
Tailored Monitoring Approach    Yes    Yes

Table 1. Comparison of Risk-Based Monitoring Approaches for AI-Enabled Medical Devices by MDCG and FDA

A notable aspect emphasized in MDCG and AIA requirements is the rigorous management of data governance. The performance and safety of AI component systems depend fundamentally on high-quality, reliable data sets used for training, validation, and independent testing of AI algorithms. The AIA stipulates that data sets must possess appropriate statistical characteristics and be scrutinized carefully for biases that could negatively impact patient health and fundamental rights or introduce discriminatory outcomes (MDCG, 2025). Ensuring data integrity includes verifying representativeness across demographics and clinical contexts to prevent systemic errors or health disparities in AI outputs that affect medical device safety or efficacy.
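A representativeness check of the sort the AIA’s data-governance expectations suggest can be prototyped in a few lines. This is a minimal sketch under stated assumptions: the strata, counts, reference proportions, and 5% tolerance are all hypothetical choices, and a real assessment would use formal statistical tests rather than a fixed tolerance.

```python
# Illustrative sketch of a simple representativeness check for an AI
# training data set. Strata, counts, and the 5% tolerance are hypothetical.

def representativeness_gaps(train_counts, reference_props, tolerance=0.05):
    """Flag demographic strata whose share of the training set deviates
    from the reference population by more than `tolerance`."""
    total = sum(train_counts.values())
    gaps = {}
    for stratum, ref in reference_props.items():
        observed = train_counts.get(stratum, 0) / total
        if abs(observed - ref) > tolerance:
            gaps[stratum] = {"observed": round(observed, 3), "expected": ref}
    return gaps

train = {"18-40": 520, "41-65": 360, "65+": 120}         # training-set counts
reference = {"18-40": 0.35, "41-65": 0.40, "65+": 0.25}  # target population
print(representativeness_gaps(train, reference))
```

In this invented example the check would flag the under-sampled 65+ stratum—precisely the kind of gap that, left unaddressed, can translate into degraded device performance for an entire patient subgroup.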

Supporting Details: Complexity and Context in AI Regulation and Monitoring

The growing sophistication of AI applications in medical devices presents regulatory and operational challenges that earlier frameworks such as the MDR and IVDR did not anticipate; those regulations broadly addressed software as a device component but lacked specific provisions for AI’s unique risks. The AIA fills this regulatory void by focusing explicitly on hazards specific to the AI component that could potentially affect physical health, safety, and the protection of fundamental rights.

The FDA’s 2023 guidance complements European requirements by outlining a risk-based monitoring approach tailored to clinical investigations involving investigational devices, including AI-enabled ones. Notably, FDA guidance insists that sponsors must proactively identify critical risks, including AI component risks, formally document them, and actively track these risks throughout trial conduct—practices that mirror the expectations recently articulated by MDCG and the AIA.

In practical terms, sponsors must adapt traditional monitoring strategies for the AI component’s specific demands:

  • Real-time performance monitoring of AI algorithms to detect deviations or failures promptly during the trial.
  • Verification of the robustness and completeness of AI training and validation data sets to reduce bias.
  • Management of iterative software updates during ongoing trials without compromising data integrity or patient safety.
  • Identification and mitigation of emergent signals indicating AI-related safety concerns or adverse events.
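The first bullet above—real-time performance monitoring to detect deviations promptly—can be sketched as a rolling-window check against adjudicated ground truth. This is a hypothetical illustration: the class name, window size, accuracy floor, and data are assumptions, and an actual trial would define escalation thresholds in the monitoring plan.

```python
# Minimal sketch, assuming a trial team reviews a sliding window of
# adjudicated AI predictions and escalates when performance degrades.
# Window size, accuracy floor, and data are hypothetical.

from collections import deque

class RollingPerformanceMonitor:
    """Track agreement between AI outputs and adjudicated ground truth
    over a sliding window; signal when accuracy drops below a floor."""

    def __init__(self, window=100, floor=0.90):
        self.results = deque(maxlen=window)
        self.floor = floor

    def record(self, prediction, truth):
        self.results.append(prediction == truth)

    def needs_escalation(self):
        if len(self.results) < self.results.maxlen:
            return False  # wait for a full window before judging
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.floor

monitor = RollingPerformanceMonitor(window=10, floor=0.9)
for pred, truth in [(1, 1)] * 8 + [(1, 0)] * 2:  # two disagreements in ten
    monitor.record(pred, truth)
print(monitor.needs_escalation())  # window accuracy 0.8 is below the floor
```

A sliding window is a deliberate design choice here: it surfaces recent drift—for example, after a software update—that a cumulative average over the whole trial would dilute.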

The case of AI-powered continuous glucose monitoring illustrates the application of these principles. The high algorithmic accuracy reported in 2021 indicates favorable outcomes; however, ongoing vigilance is needed to assure consistent performance across diverse patient populations and real-world conditions. These real-world monitoring efforts are vital to capturing variations that might not be evident in initial testing phases.

Trial sponsors face added challenges establishing the governance structure for the AI component of AI-enabled medical devices because both the medical device components and the AI components must be integrated into clinical trial protocols and oversight mechanisms. Detailed documentation and transparency about data usage, AI algorithm development, and monitoring procedures are critical to meeting regulatory expectations and preserving public trust.

Why This Matters to Patients, Industry, and Regulators

The intersection of risk-based monitoring for AI-enabled medical device clinical trials has profound implications for multiple stakeholders:

Patients place increasing trust in AI-enabled tools guiding diagnosis and treatment decisions; risk-based monitoring tailored to include medical device risks and AI’s risk characteristics is essential to maintain this trust by ensuring safe, reliable, and equitable medical care.

Healthcare innovators and sponsors confront a complex regulatory landscape that demands both agility and comprehensive oversight. Balancing potentially rapid AI-component updates with thorough risk management is a high-stakes endeavor to keep pace with technological advances and regulatory compliance.

Regulators face the pressing task of harmonizing AI-related frameworks for AI-enabled medical devices across jurisdictions, fostering the clarity and consistency needed to enable smoother international clinical development programs.

A failure to adequately integrate AI-specific risk considerations into clinical trial monitoring of AI-enabled medical devices can damage patient safety, erode regulatory credibility, and slow market access for promising technologies. Conversely, robust and agile risk-based monitoring drives safer clinical translation of innovative AI-enabled medical devices, ultimately benefiting patients globally by accelerating access to cutting-edge, reliable health technologies.

Embracing the Future of AI Monitoring in Clinical Trials

Looking forward, harmonization and collaboration among regulatory authorities remain crucial for an efficient and effective global ecosystem for AI-enabled medical device evaluations. Continued alignment between the European MDCG, the AIA frameworks, the FDA, and other international bodies can reduce duplicative efforts and streamline sponsors’ compliance with both medical device and AI-specific standards.

Industry stakeholders investing in AI-enabled medical devices should prioritize adaptable data governance systems that incorporate AI performance metrics, bias detection, and continuous monitoring protocols within their traditional medical device clinical trial designs. Equipping monitoring teams with AI-specific expertise and fostering transparent communication about AI component-related risks will be increasingly important.

Beyond regulators and industry, patient involvement and advocacy will play a vital role in shaping monitoring standards for the AI components of AI-enabled medical devices. Incorporating patient perspectives ensures that AI-driven therapeutic innovations meet community needs while upholding equity and trust.

Over the coming years, these concerted efforts will cultivate a regulatory and operational environment that not only mitigates AI component risks in medical devices but also harnesses AI’s transformative potential in healthcare. By proactively integrating risk-based monitoring tailored to AI-enabled medical devices, stakeholders can boldly accelerate innovation while safeguarding the patients at the heart of clinical research.