Artificial intelligence (AI) and, more recently, generative AI (GenAI) are profoundly transforming the life sciences sector. While offering unprecedented opportunities to enhance drug discovery, accelerate clinical trials, and improve patient outcomes, these technologies also introduce complex ethical, legal, regulatory, and governance challenges. The pace of technological change often outstrips the development of regulatory and governance frameworks, necessitating a proactive approach that aligns innovation with the responsible use of AI.
AI Applications and Use Cases in the Life Sciences: US FDA Considerations
AI technologies are revolutionizing the life sciences by enhancing efficiency, reducing costs, and improving patient outcomes across the value chain. These applications draw on a variety of technologies, including robotic process automation (RPA), machine learning (ML), deep learning (DL), natural language processing (NLP), GenAI, computer vision, predictive analytics, and expert systems. Formal definitions are provided on the DIA website.
After issuing its January 2025 draft guidance on using AI to support regulatory decisions on drugs and biologics, FDA issued a discussion paper in February (download) on AI applications that are emerging across the life sciences drug development lifecycle. These were further explored in Harnessing AI and Automation in Regulatory: Insights from the Recent DIA Regulatory Information Management (RIM) Intelligent Automation Survey, presented at the DIA Global Annual Meeting 2025, and include:
- Drug Discovery & Nonclinical: Drug discovery, target identification, compound screening and design, pharmacokinetics (PK)/pharmacodynamics (PD) modeling, toxicity studies, and predictive modeling.
- Clinical Development: Patient recruitment and selection, dose optimization, patient adherence monitoring, site selection, data collection and analysis, clinical endpoint assessment, auto-generating starter documents, and auto-classifying/organizing Trial Master File (TMF) documents.
- Regulatory: Auto-classifying/organizing documents, auto-generating starter documents, extracting IDMP (Identification of Medicinal Products) and other data, performing document quality control (QC) tasks, submission preparation, and assembling affiliate/distributor document packages (a minimal classification sketch follows this list).
- Safety/Pharmacovigilance: Case processing, case evaluation, and submission.
- Manufacturing & Quality: Process design optimization, advanced process control, smart monitoring and maintenance, trend monitoring, and complaints management.
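To make the document auto-classification use cases above more concrete, the following is a minimal, hypothetical sketch in Python using scikit-learn. The artifact labels, text snippets, and the TMF routing they imply are invented for illustration only and do not represent any official TMF or regulatory taxonomy.

```python
# Minimal sketch: auto-classifying document text into hypothetical TMF/regulatory
# artifact types with a TF-IDF + logistic regression pipeline (scikit-learn).
# Labels and example snippets are illustrative, not an official taxonomy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (document text snippet, artifact type)
documents = [
    "Signed informed consent form for subject enrolled at site 101",
    "Protocol amendment describing revised dosing schedule",
    "Certificate of analysis for drug product batch release",
    "Investigator brochure summarizing nonclinical toxicity findings",
]
labels = ["consent", "protocol", "quality", "investigator_brochure"]

# Fit a simple text-classification pipeline on the toy corpus.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(documents, labels)

# Classify a new, unseen document so it can be routed to the matching TMF section.
new_doc = ["Amendment to the study protocol adding an exploratory endpoint"]
print(classifier.predict(new_doc))  # likely ['protocol'] given this toy data
```

In practice, such a classifier would be trained on a much larger, representative document corpus and validated before being used to organize TMF or submission content.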
AI Ethics, Principles, and Regulations: A Global Overview
The rapid evolution of AI has prompted governments and health authorities worldwide to develop ethical principles and regulatory frameworks to ensure its safe, effective, and responsible use. While approaches vary across jurisdictions, a common thread of core ethical considerations is emerging. Below is an overview of the WHO's ethical AI principles and of regulatory guidances from a number of health authorities around the globe.
WHO Ethical AI Principles and Guidelines
The World Health Organization (WHO) has been a significant voice in establishing ethical and regulatory guidelines for AI in healthcare, issuing several guidances since 2021. Its principles often serve as a foundation for national health authorities.
- Protect Human Autonomy: Ensure that humans remain in full control of healthcare systems and medical decisions. Users should understand the context of use of AI applications.
- Promote Human Well-being, Human Safety, and Public Interest: AI applications must satisfy regulatory requirements for safety, accuracy, and efficacy, ensuring no harm to humans.
- Ensure Transparency, Explainability, and Intelligibility: AI technologies must be transparent and understandable to all stakeholders (developers, professionals, patients, regulators). Algorithms and outputs must be explainable, and systems must be tested thoroughly with independent oversight.
- Foster Responsibility and Accountability: Clear accountability mechanisms are needed for AI design, development, deployment, and use, with appropriate legal redress.
- Ensure Inclusiveness and Equity: AI must be designed to be free of bias and available for the widest possible, appropriate, equitable use and access, irrespective of demographic characteristics.
- Promote AI that is Responsive and Sustainable: AI technologies should be continuously assessed during actual use and consistent with the broader promotion of healthcare systems, environment, and workplace sustainability.
The WHO emphasizes that these ethical principles apply to various stakeholders within and across organizations, from developers of foundational models to application developers to end users and governments, highlighting the need for widespread training and awareness. Ethical considerations, such as addressing bias in data collection and ensuring privacy by design, are crucial throughout the AI development lifecycle.
FDA Guidances (US)
The US Food and Drug Administration (FDA) has been actively developing guidances for AI, particularly for medical devices and, more recently, for drugs and biological products.
- Good Machine Learning Practice (GMLP) for Medical Device Development:
Jointly issued with Health Canada and MHRA (download), these principles emphasize:
- Leveraging multidisciplinary expertise throughout the total product lifecycle.
- Implementing good software engineering and security practices.
- Ensuring clinical study participants and data sets are representative of the intended patient population to manage bias.
- Maintaining independence between training and test data sets (a brief patient-level splitting sketch follows this list).
- Focusing on the performance of the Human-AI Team, emphasizing human factors and interpretability.
- Continuous monitoring of deployed models for performance and managing retraining risks.
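To illustrate the independence principle, here is a minimal sketch, assuming a toy tabular dataset with an illustrative patient_id column, of splitting data at the patient level with scikit-learn so that no patient contributes records to both the training and the test set.

```python
# Minimal sketch of keeping training and test data independent by splitting at the
# patient level. Column names ("patient_id", "feature", "outcome") are illustrative.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Toy dataset: several records per patient.
records = pd.DataFrame({
    "patient_id": ["P1", "P1", "P2", "P2", "P3", "P3", "P4", "P4"],
    "feature":    [0.2, 0.3, 0.8, 0.7, 0.5, 0.4, 0.9, 1.0],
    "outcome":    [0, 0, 1, 1, 0, 0, 1, 1],
})

# GroupShuffleSplit keeps all records from one patient entirely in either the
# training set or the test set, never both.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, test_idx = next(splitter.split(records, groups=records["patient_id"]))

train, test = records.iloc[train_idx], records.iloc[test_idx]
assert set(train["patient_id"]).isdisjoint(set(test["patient_id"]))
print("Train patients:", sorted(set(train["patient_id"])))
print("Test patients:", sorted(set(test["patient_id"])))
```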
- Draft Guidance on Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products:
This draft guidance (download) outlines a seven-step process (a structured sketch follows the list):
- Define the Question of Interest (QOI): Clearly state the problem the AI model addresses.
- Define the Context of Use (COU): Specify how the AI model will be used.
- Assess the AI Model Risk: Evaluate risk by mapping model influence and decision consequences.
- Develop Credibility Plan: Outline how the AI model’s output credibility will be established. Describe the model along with data used and development, training, and evaluation processes.
- Execute the Plan: Carry out the credibility plan, keeping the context of use and the question of interest in view.
- Document Results: Record credibility assessment findings and deviations.
- Evaluate Adequacy: Determine whether the AI model is suitable for the COU.
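One practical way to operationalize these steps is to capture each assessment as a structured record. The sketch below is a hypothetical Python representation; the field names and example values are illustrative assumptions, not a schema prescribed by the draft guidance.

```python
# Minimal sketch: recording the seven-step credibility assessment as a structured
# object. Field names and example values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class CredibilityAssessment:
    question_of_interest: str          # Step 1: problem the AI model addresses
    context_of_use: str                # Step 2: how the model output will be used
    model_influence: str               # Step 3: weight of the model in the decision
    decision_consequence: str          # Step 3: impact if the decision is wrong
    credibility_plan: list[str] = field(default_factory=list)    # Step 4
    execution_notes: list[str] = field(default_factory=list)     # Step 5
    documented_results: list[str] = field(default_factory=list)  # Step 6
    adequate_for_cou: bool = False     # Step 7: set True if judged suitable for the COU

assessment = CredibilityAssessment(
    question_of_interest="Can the model predict which participants are at risk of dropout?",
    context_of_use="Flag at-risk participants for site follow-up; no automated action",
    model_influence="low",
    decision_consequence="moderate",
    credibility_plan=["Describe training data provenance", "Hold out an independent test set"],
)
print(assessment)
```

Capturing the assessment in a structured form like this makes it easier to document results and deviations (Step 6) consistently across models and submissions.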
European Guidances (EU & UK)
Europe has taken a significant step with the EU AI Act, complemented by specific guidances from the EMA and MHRA.
- The EU Artificial Intelligence Act is a legally binding, horizontal regulation applicable across all industries, balancing innovation with ethical and safety considerations. It employs a risk-based approach (a simple encoding sketch follows the tiers):
- Unacceptable Risk: Prohibited AI systems (e.g., social scoring).
- High Risk: Strict compliance obligations, pre-market conformity assessments, and CE marking (e.g., AI in medical devices, critical infrastructure). AI used in medical devices or clinical decision support is automatically classified as high risk, with requirements for data quality, transparency, human oversight, and robustness.
- Limited Risk: Transparency obligations (e.g., chatbots).
- Minimal Risk: No specific regulations.
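For teams triaging an AI portfolio against these tiers, a simple internal encoding can be a useful starting point. The sketch below is an illustrative simplification: the tier names follow the Act, but the obligation summaries and the example use-case mappings are assumptions for illustration, not legal guidance.

```python
# Minimal sketch: encoding the EU AI Act's four risk tiers for internal portfolio
# triage. Obligation summaries and use-case mappings are simplified assumptions.
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited (e.g., social scoring)"
    HIGH = "pre-market conformity assessment, CE marking, human oversight"
    LIMITED = "transparency obligations (e.g., disclose chatbot use)"
    MINIMAL = "no specific obligations"

# Illustrative internal mapping of use cases to tiers for triage purposes.
use_case_tiers = {
    "clinical decision support": AIActRiskTier.HIGH,
    "AI component of a medical device": AIActRiskTier.HIGH,
    "patient-facing FAQ chatbot": AIActRiskTier.LIMITED,
    "internal meeting-notes summarizer": AIActRiskTier.MINIMAL,
}

for use_case, tier in use_case_tiers.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```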
- EMA’s Reflection Paper on the use of AI in medicinal products highlights AI’s transformative potential while addressing regulatory challenges across the medicinal product lifecycle. It emphasizes robust validation, data integrity, and ethical AI use, complementing the broader EU AI Act with domain-specific guidance.
- MHRA’s Position Paper “A Pro-Innovation Approach to AI Regulation” balances innovation with patient safety, promoting a principle-based, decentralized approach. Unlike the EU AI Act, it leverages existing regulatory frameworks and focuses on contextual risk assessment rather than rigid classifications. It supports regulatory sandboxes for experimentation and emphasizes transparency, explainability, and post-market surveillance for AI tools in medical devices and drug development.
Asian Guidances
Asian countries are also developing their own frameworks, reflecting diverse regulatory philosophies; the overview below is a survey and is not exhaustive.
- ASEAN Guidance on AI Governance and Ethics is a nonbinding, regional framework emphasizing trust, interoperability, and responsible AI adoption across Southeast Asia. It promotes human-centricity, transparency, and contextual risk management, with a three-tier model of human involvement (human-in-the-loop, human-over-the-loop, human-out-of-the-loop) tailored to risk levels. It provides practical tools like risk assessment templates.
- China’s Interim Administrative Measures for Generative AI Services is the country’s first legally binding regulation specifically for GenAI. It balances innovation with tight state control over content and data, requiring outputs to align with “Core Socialist Values” and prohibiting harmful content. It mandates security assessments, real-name registration, and labeling of synthetic content.
- Singapore’s Model AI Governance Framework is a voluntary, principle-based, industry-friendly framework emphasizing responsibility, transparency, and trust. It provides practical guidance on internal governance, risk management, operations, and communication. Singapore also developed “AI Verify,” a testing toolkit to assess AI systems against ethical principles.
Business Impact of AI Ethical Principles, Regulations, and Guidances
It is important for healthcare and life sciences companies to adhere to principles such as fairness, transparency, explainability, accountability, safety, reliability, human-centricity, privacy, and security. The integration of AI ethics and regulations has profound implications for companies, affecting innovation, risk, efficiency, and market competitiveness. We (the authors) believe that alignment with global AI ethical principles and regulatory frameworks will have the following business impacts, highlighting the balance between innovation and compliance:
- Accelerated Innovation: Responsible AI fosters innovation in drug discovery and personalized medicine while building trust among stakeholders. Clarity on compliance enables faster, responsible development and deployment of AI solutions.
- Enhanced Patient Outcomes: Fair and transparent AI improves diagnosis and treatment, reducing healthcare disparities.
- Increased Competitiveness: Alignment with global standards allows companies to compete internationally and supports cross-border collaboration.
- Greater Regulatory Compliance: Alignment with global ethical standards facilitates smoother approvals.
- Better Risk Mitigation: Identifying risks upfront and planning around them reduces legal and ethical risks, minimizing reputational damage from noncompliance.
- Higher Cost Implications: Initial compliance investments may be high, but long-term savings arise from reduced risks and improved efficiency.
- Increased Operational Efficiency: Streamlined processes have the potential to enhance data integrity and improve overall operational efficiency.
- Increased AI Adoption: Demonstrably ethical AI builds trust among patients, clinicians, and regulators, driving broader adoption of AI that benefits society.