The EU AI Regulation: Transforming Clinical Research
James Riddle
Advarra
In an era where artificial intelligence (AI) is revolutionizing healthcare and clinical research, regulatory bodies are racing to keep pace. The European Union’s Artificial Intelligence Regulation 2024/1689 represents a watershed moment in AI regulation, with potentially far-reaching implications for the global clinical research community. What will be the regulation’s impact on clinical trials, and how might stakeholders navigate this new regulatory landscape?

The artificial intelligence (AI) revolution is getting attention from everyone, including regulators. The European Parliament recently took a significant regulatory step in the oversight of AI technologies by adopting the AI Regulation. This landmark legislation seeks to establish a comprehensive framework for the development and deployment of AI, helping ensure its safety, ethical use, and transparency for European Union (EU) residents. The AI Regulation’s introduction comes at a critical juncture, as the integration of AI in clinical trials accelerates, raising questions about data integrity, patient safety, and regulatory compliance.

The EU AI Regulation is applicable to all industries. However, there are potentially unique implications specifically for clinical trials, where researchers increasingly use AI for tasks like medical image analysis, natural language processing for endpoint analysis, and generating/analyzing data for synthetic control arms. Non-EU entities in the clinical research community should be familiar with the AI Regulation and how it impacts their business. Understanding related efforts currently underway at the US Food and Drug Administration (FDA) is also key to ensuring compliance when including AI in the clinical trial setting.

An Overview of the AI Regulation

The EU AI Regulation entered into force in August 2024, with compliance dates phasing in through full applicability in August 2026. The regulation uses four risk levels to categorize AI applications: unacceptable, high, limited, and minimal. This risk-based approach applies to all industries and aims to balance innovation with safety, ensuring that AI applications with the potential to significantly impact human health and safety are subject to stringent oversight.

AI in benign gaming apps and language generators is an example of systems that might be considered “limited” or “minimal” risk. These applications must still meet specific standards to ensure ethical use, but they face fewer regulatory requirements. “High-risk” systems must comply with strict regulatory requirements regarding transparency, data governance, registration with the central competent authorities, and human oversight. Unacceptable-risk AI systems are banned entirely by the regulation.

High-Risk AI-powered Systems: Key Requirements in the EU AI Regulation
Many AI-based systems used in contemporary clinical trials may be considered “high risk” under the AI Regulation. This includes technology like drug discovery software, study feasibility solutions, and patient recruitment tools. Consider these key requirements for “high risk” AI systems in the context of clinical trials (for an exhaustive list of requirements, refer to the full AI Regulation):

  • Transparency and explainability
  • Data governance
  • Human oversight
  • Accuracy and reliability
  • Ethical considerations
  • Continuous monitoring
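As an illustration only, a sponsor might track evidence against these requirements in a simple checklist. The requirement names below paraphrase the list above; the data structure, field names, and evidence references are hypothetical, not drawn from the regulation itself:

```python
from dataclasses import dataclass, field

# Requirement names paraphrase the article's list of "high-risk"
# obligations; everything else here is an illustrative sketch,
# not a structure prescribed by the AI Regulation.
HIGH_RISK_REQUIREMENTS = [
    "transparency_and_explainability",
    "data_governance",
    "human_oversight",
    "accuracy_and_reliability",
    "ethical_considerations",
    "continuous_monitoring",
]

@dataclass
class AISystemChecklist:
    system_name: str
    evidence: dict = field(default_factory=dict)  # requirement -> document reference

    def record(self, requirement: str, doc_ref: str) -> None:
        """Attach a (hypothetical) evidence document to one requirement."""
        if requirement not in HIGH_RISK_REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        self.evidence[requirement] = doc_ref

    def gaps(self) -> list:
        """Requirements with no recorded evidence yet."""
        return [r for r in HIGH_RISK_REQUIREMENTS if r not in self.evidence]

# Example: a hypothetical patient-recruitment model with two items documented
checklist = AISystemChecklist("patient-recruitment-model")
checklist.record("data_governance", "SOP-112 rev3")
checklist.record("human_oversight", "WI-045")
print(checklist.gaps())  # the four requirements still lacking evidence
```

A checklist like this is only an internal bookkeeping aid; what counts as sufficient evidence for each requirement is a legal and regulatory question, not a software one.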

These requirements underscore the EU’s commitment to ensuring that AI systems that may be used in clinical research are not only effective but also trustworthy and aligned with ethical standards.

Potential Impact on Clinical Research

Software vendors, sponsors, CROs, and clinical sites are all increasingly using AI components in their processes, programs, and systems. According to an FDA analysis, regulatory submissions involving AI increased approximately twofold from 2017 to 2020, and 2021 saw a 10-fold increase compared to 2020, although the agency notes this probably “only represents a fraction of [AI’s] increasingly widespread use in drug discovery.”

Here are three key areas where AI can significantly impact clinical research that fall under the purview of the AI Regulation:

  • Medical Image and Medical History Analysis
  • Synthetic Control Arms
  • Identifying Patients

Each of these areas presents unique challenges and opportunities under the new regulatory framework, requiring stakeholders to carefully balance innovation with compliance.

Impact of AI Regulation on Companies Outside the EU

The EU AI Regulation, not unlike the EU General Data Protection Regulation (GDPR), extends its reach to entities outside the EU and the broader European Economic Area. Its implications are potentially significant for any company doing business within the EU, particularly those marketing AI-powered clinical trial products and services in the region. This extraterritorial reach underscores the global impact of the AI Regulation, necessitating a proactive approach from international stakeholders.

Non-EU companies must comply with the AI Regulation if their AI systems are used (or will be used) in the EU market. Non-EU-based organizations conducting clinical trials in the EU should consider taking the following steps:

  • Understand the regulatory landscape
  • Appoint an EU representative
  • Modify products and services for compliance
  • Prepare clinical trial stakeholders for AI Regulation compliance

Sponsors, CROs, and others doing business in the EU market should consider the following actions as they develop compliance plans:

  • Conduct an inventory and compliance assessment
  • Optimize data governance protocols
  • Improve transparency and explainability
  • Enhance human oversight
  • Institute ethical and legal training on AI
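The first of these steps, an inventory and compliance assessment, amounts to cataloging every AI component in use and assigning it a provisional risk tier. The sketch below illustrates that triage under stated assumptions: the tier assigned to each system type is a hypothetical example for demonstration, not a legal classification under the regulation, and anything not yet mapped is flagged for review:

```python
# The regulation's four risk tiers, plus a bucket for systems that
# have not yet been assessed.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Illustrative tier assignments only. The three "high" entries echo
# the article's examples of likely high-risk systems; the others are
# assumptions, NOT legal determinations.
EXAMPLE_CLASSIFICATION = {
    "patient_recruitment_tool": "high",
    "study_feasibility_solution": "high",
    "drug_discovery_software": "high",
    "internal_chatbot": "limited",   # assumption
    "spam_filter": "minimal",        # assumption
}

def triage(inventory):
    """Group an inventory of AI system types by assumed risk tier;
    anything unmapped is flagged for manual legal review."""
    result = {tier: [] for tier in RISK_TIERS}
    result["needs_review"] = []
    for system in inventory:
        tier = EXAMPLE_CLASSIFICATION.get(system)
        (result[tier] if tier else result["needs_review"]).append(system)
    return result

report = triage(["patient_recruitment_tool", "spam_filter", "novel_imaging_ai"])
print(report["high"])          # systems to prioritize for compliance work
print(report["needs_review"])  # systems requiring a legal classification
```

The point of the sketch is the process, not the mapping: the output of such a triage is a worklist for counsel and quality teams, with the high-risk group driving the data governance, transparency, and oversight actions listed above.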

US Regulatory Perspectives on AI Use in Clinical Trials

While the FDA has not yet published final guidance on using AI-enabled systems in clinical research, the agency expects to do so soon. The FDA’s approach, while distinct from the EU’s, reflects a growing global consensus on the need for robust AI regulation in healthcare and clinical research.

The FDA saw 118 AI engagements in clinical research in 2021, a trend that continues to rise, according to remarks made in a September 9, 2024, webinar by Dr. M. Khair ElZarrad, director of the Office of Medical Policy at the Center for Drug Evaluation and Research (CDER).

The FDA’s evolving stance on AI in clinical trials highlights the dynamic nature of this regulatory landscape, emphasizing the need for ongoing dialogue between regulators and industry stakeholders.

Future Considerations for AI in Clinical Trials

The European Parliament’s adoption of the AI Regulation represents a pivotal moment in AI technology oversight for all industries, with particular implications for high-stakes fields like clinical research. It’s likely this is just the beginning of AI regulation; even companies not involved in EU business should still take notice and consider the regulation’s impact, as it may foreshadow future domestic policies.

The EU AI Regulation’s emphasis on transparency, data governance, and human oversight aims to ensure the safe and ethical use of AI, ultimately fostering greater trust and reliability in AI-driven clinical research.

As we look to the future, the AI Regulation serves as a blueprint for responsible AI integration in clinical trials. Stakeholders must remain vigilant, adapting to evolving regulations while harnessing AI’s potential to revolutionize patient care and drug development. The coming years will likely see a harmonization of global AI regulations, presenting both challenges and opportunities for the clinical research community. By embracing these changes proactively, the industry can ensure that AI becomes a trusted, integral part of the clinical trial ecosystem, ultimately accelerating the development of life-saving therapies while safeguarding patient rights and safety.

To learn more about the EU AI Regulation, plan to attend DIA Europe 2025.