Around the Globe: Europe
EU Acts on the Promise of Artificial Intelligence for Medicinal Products
David Isom and Monica Mihedji
Pfizer, Inc.
Regulators are advancing information technology and data modernization initiatives to drive efficiencies and to become more responsive to industry use of digital technologies in drug development. These include digital health technologies to support decentralized trials, cloud-based platforms to expand capacity and enable external collaboration, and artificial intelligence (AI) technologies, including machine learning (ML), to transform the review of high-volume data, including real-world evidence programs. A key milestone came in July 2023, when the European Medicines Agency (EMA) released a reflection paper sharing its views on the use of AI in the regulation of medicines.

The EMA recently published for public consultation a draft reflection paper on the use of AI across the medicinal product lifecycle, to support the safe and effective development, use, and regulation of AI for human and veterinary medicines. The paper reflects on principles relevant to the application of AI and ML at every step of a medicine's lifecycle, from drug discovery to the post-authorization setting. The reflection paper was co-developed by the joint Heads of Medicines Agencies (HMA)/EMA Big Data Steering Group (BDSG), the Committee for Medicinal Products for Human Use (CHMP) and its Methodology Working Party, and the Committee for Veterinary Medicinal Products (CVMP), and is part of the BDSG's initiatives to develop the European Medicines Regulatory Network's capability in data-driven regulation.

AI tools are in common use at companies such as Google and Netflix and have also been applied in the pharmaceutical industry, particularly to automate internal processes. However, the term AI can mean many different things. The EMA defines AI as systems that display intelligent behavior by analyzing their environment and taking actions, with some degree of autonomy, to achieve specific goals. The Food and Drug Administration (FDA) describes AI as a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions. The accelerating adoption of AI, in particular the rapid introduction of generative AI (GenAI) tools built on large language models (LLMs), has generated excitement about the potential for AI to drive innovation in drug research and development, and with it heightened concerns about the potential for misuse.

GenAI goes further than traditional AI in that it not only recognizes patterns and predicts what to expect next, but also generates new content based on the data it was trained on. The EMA reflection paper notes that generative language models are prone to producing plausible but erroneous output; quality review mechanisms therefore need to be in place to ensure that all model-generated text is both factually and syntactically correct before submission for regulatory review. Concerns like these have prompted calls for additional regulatory guardrails that promote the promise of AI while managing the risk of erroneous output.
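As a concrete, purely hypothetical illustration of such a quality review mechanism, the minimal Python sketch below gates model-generated text behind automated checks and an explicit human sign-off before it can be marked ready for submission. The EMA reflection paper does not prescribe any implementation; every class name, field, and rule here is an assumption for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """A piece of model-generated text awaiting quality review.

    All fields are hypothetical; a real system would track far more
    metadata (model version, prompt, source documents, reviewer identity).
    """
    text: str
    cited_sources: list[str] = field(default_factory=list)
    human_approved: bool = False

def automated_checks(draft: Draft) -> list[str]:
    """Return a list of problems found; an empty list means checks passed.

    These rules are illustrative stand-ins for real verification, such as
    cross-checking generated claims against source documents.
    """
    problems = []
    if not draft.text.strip():
        problems.append("empty output")
    if not draft.cited_sources:
        problems.append("no traceable sources for generated claims")
    return problems

def ready_for_submission(draft: Draft) -> bool:
    """A draft is submittable only if automated checks pass AND a
    qualified human reviewer has explicitly signed off."""
    return not automated_checks(draft) and draft.human_approved

if __name__ == "__main__":
    draft = Draft(text="The study met its primary endpoint.")
    print(automated_checks(draft))      # flags the missing sources
    draft.cited_sources.append("CSR section 10.1")
    draft.human_approved = True         # recorded after human review
    print(ready_for_submission(draft))  # True only once both gates pass
```

The key design point, consistent with the paper's emphasis on human oversight, is that automated checks alone never suffice: the human approval flag is a separate, mandatory gate.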

Beyond the EMA reflection paper, European AI policy was already advancing, most notably with the European Commission's draft regulation on AI, known as the "AI Act," which is anticipated to be the first and most comprehensive AI legislation in the world when it is finalized in 2024. The AI Act is a legally binding framework with significant fines for noncompliance, and it is positioned to regulate AI systems across all industrial sectors, including healthcare. Significant impact is foreseen for pharmaceutical companies manufacturing AI-enabled medical devices in the diagnostic space, because the current draft classifies such tools as high risk.

Given these opportunities and challenges, it is prudent to evaluate the existing frameworks for the use of AI in drug development. Going forward, these frameworks should be fit for purpose and flexible enough to accommodate the rapidly evolving AI landscape. They should promote innovation in drug research and development and ground AI practices in responsible AI principles for patients, customers, colleagues, and society. We believe strong principles for responsible AI include 1) striving to design AI systems that empower humans, promote fairness and equity, and avoid bias; 2) respecting individuals' privacy and ensuring transparency about the use and limitations of AI systems, including maintaining human control over AI; and 3) taking ownership of and accountability for the use of AI systems, including meeting legal, regulatory, and sustainability standards.

The rapid rise of ever more powerful AI introduces significant new opportunities to transform drug research and development. Key to realizing the full promise of AI is ongoing industry engagement with regulators globally to advance best practices and fit-for-purpose frameworks for regulatory oversight that are transparent and flexible. This includes advancing responsible AI principles and defining patient-centric, risk-based guardrails that foster innovation while ensuring the utmost data and privacy protections for patients and compliance with existing health authority guidance.