Artificial Intelligence (AI) is quickly becoming a major component of the healthcare industry (broadly defined), with the emergence of generative AI and sophisticated predictive models making significant inroads across various domains. At this point, it is crucial to carefully consider and plan for the responsible use of AI in healthcare, as it presents both numerous potential benefits and diverse risks to society.
The global AI in healthcare market was valued at approximately USD 19.27 billion in 2023 and is projected to grow at a compound annual growth rate of 38.5% from 2024 to 2030. This expansion is largely driven by the healthcare sector’s growing demand for improved efficiency, accuracy, and patient outcomes. According to a Premier, Inc. survey of 752 healthcare business decision-makers in November 2023, 81.5% said that AI was part of their organization’s strategic direction.
However, as AI continues to penetrate the healthcare landscape, ensuring its responsible use is critical. The National Academy of Medicine finds that with the rapid advancement of AI technologies, it is vital that healthcare stakeholders quickly adapt, learn, and agree on the safeguards needed for the ethical use of AI in health, healthcare, and biomedical science. Responsible and trustworthy AI involves the ethical, transparent, and fair application of AI technologies, ensuring they benefit all patients while minimizing risks and biases. A Premier, Inc. study revealed that fewer than half of respondents (42.2%) expressed at least moderate confidence in AI’s ability to provide “accurate, safe, and actionable clinical diagnoses,” suggesting that while AI can be a valuable tool, human clinicians and oversight remain essential. The swift expansion of AI in these fields not only heightens existing risks but also introduces new challenges across healthcare. Although AI outputs may seem objective because they are driven by data, they are built on data sets and models influenced by human values, which can reflect implicit or explicit societal biases. These biases can disproportionately harm underrepresented groups. If such risks are not properly recognized and mitigated, AI in healthcare risks perpetuating existing disparities and creating new ones.
Responsible AI is essential for maintaining trust, protecting patient privacy, and ensuring equitable healthcare solutions, and it can only be achieved when all stakeholders collaborate and uphold these practices. A DIAmond session at the DIA 2024 Global Annual Meeting explored how to use AI in healthcare responsibly.
DIA 2024 DIAmond Session
The DIAmond session Navigating the Trusted, Responsible, and Ethical Horizon of Artificial Intelligence: Uniting Healthcare Perspectives delivered an overview of AI trends in healthcare, key drivers of limited adoption, current ethical issues and dilemmas, and the need for enhanced ethics. Panelists examined what “responsible AI” means within various stakeholder groups and ethical frameworks and discussed challenges in practice, the need to accelerate innovation responsibly, and strategies to accomplish these goals. The panel concentrated on enhancing existing ethical frameworks, assessing responsible AI through a patient-centric lens, advancing patient advocacy, and adopting a holistic perspective.
Solution Room Workshop
The multistakeholder Solution Room workshop that followed the DIAmond session was designed as a multidisciplinary, neutral, collaborative space to spark impactful discussions on critical issues in healthcare. The dialogue focused specifically on cross-sector partnerships for responsible AI use and strategies to mitigate AI risk.
When discussing responsible AI, it is critical that stakeholders have clarity and alignment around terms to reduce miscommunication and enhance collaboration. A structured framework for evaluating responsible AI enables consistent assessment, accountability, and confidence in the use of AI across the healthcare spectrum. To accomplish these objectives, Solution Room participants used the OECD AI Principles, which provide a framework for developing AI in a way that respects human rights and aligns with democratic values while fostering innovation. Participants first translated the OECD principles into implementable practices, then discussed potential challenges to and opportunities resulting from such practices.
Implementing Trustworthy AI Principles in Healthcare
Principle 1: Inclusive growth, sustainable development and well-being
Trustworthy AI is critical in promoting global growth, prosperity, and development: enhancing human abilities, fostering creativity, increasing inclusion, reducing inequalities, and protecting the environment. Yet AI systems can also perpetuate biases, causing particular harm to vulnerable populations. One commonly used algorithm, which impacts millions of patients, exhibited a significant racial bias: Black patients assigned the same risk score as White patients were in fact much sicker. The bias stemmed from the algorithm using healthcare costs as a stand-in for health needs; because less is spent on Black patients with the same level of need, the algorithm underestimated how sick they were. Correcting this issue would raise the proportion of Black patients identified for additional care from 17.7% to 46.5%. It is crucial to use AI to empower all members of society and to minimize biases. Building trust in responsible AI is accomplished through multidisciplinary collaboration and continuous public dialogue.
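The mechanism behind this kind of proxy bias can be demonstrated in a few lines of code. The sketch below is a minimal illustration in Python on synthetic data (not the published study’s data or code): when two groups have the same distribution of health need but one historically incurs lower costs for the same need, ranking patients by cost under-selects that group, while ranking by need does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: two groups with identical underlying health
# need, but group B historically incurs lower costs for the same need.
n = 100_000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)                  # true health need, same for both groups
cost = need * np.where(group == 1, 0.6, 1.0)   # spending gap for group B

def share_of_group_b_flagged(score, top_frac=0.03):
    """Share of group B among patients in the top `top_frac` of a risk score."""
    cutoff = np.quantile(score, 1 - top_frac)
    return (group[score >= cutoff] == 1).mean()

# Ranking by the cost proxy under-selects group B; ranking by the
# intended target (need) restores roughly proportional selection.
print("flagged share, cost proxy:", round(share_of_group_b_flagged(cost), 3))
print("flagged share, true need :", round(share_of_group_b_flagged(need), 3))
```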
Practices on How to Implement AI in Healthcare Following This Principle
- Representative Data: Use data that represents diverse populations to avoid bias in development and evaluation.
- Environmental and Financial Sustainability: Reduce energy use in AI processes and ensure financial sustainability.
- Health Literacy and Educational Programs: Enhance health literacy through AI education, training, and awareness campaigns with patient input to promote public trust and understanding.
- Resource Distribution: Distribute resources for AI integration equitably across regions so that all people and countries benefit.
Principle 2: Human-centered values and fairness
AI systems should include safeguards to ensure a fair society, respecting the rule of law, human rights, and democratic values such as nondiscrimination, equality, freedom, and privacy. AI development should align with human-centered values, ensuring that safeguards protect human rights and promote fairness. This alignment builds public trust, supports AI’s role in reducing discrimination, and underscores the need for human oversight to manage risks. Tools such as human rights impact assessments, ethical codes, and certifications should be used to promote fairness and human-centered values.
Practices on How to Implement AI in Healthcare Following This Principle
- Representation and Diversity: Ensure that AI systems represent diverse populations, including marginalized groups most affected by previous lack of inclusion in healthcare, to recognize and mitigate bias.
- Common Terminology and Standards: Develop standards and common language for AI use in healthcare alongside patients.
- Accountability and Rules: Clarify rules and ensure accountability in data use and AI implementation for various AI actors.
Principle 3: Transparency and explainability
AI actors should provide clear, relevant information about AI capabilities, limits, data sources, processes, and decision logic (without revealing proprietary information, and balancing accuracy, performance, privacy, and cost). This transparency allows individuals to understand and consent to AI outcomes, know when they are interacting with AI, and challenge AI results.
Practices on How to Implement AI in Healthcare Following This Principle
- Data Context: Provide context for AI inputs and outputs to ensure proper interpretation.
- Explainable AI: When possible, choose transparent, explainable AI models over black-box models, whose decisions cannot be easily understood or explained and are therefore harder to trust, and disclose the data sources and methods used in AI systems (see the sketch after this list).
- National and Global Standards: Work towards national and global legislation and standards for AI transparency, such as the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
- Risk Management: Implement processes for checks, balances, and risk management in AI systems.
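As a hedged illustration of the explainability point above, the following Python sketch (the clinical feature names and data are hypothetical) contrasts an inspectable model with a black box: a logistic regression exposes one coefficient per named input that a clinician or patient can review and challenge, which a deep ensemble generally does not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical tabular data with named clinical features (synthetic values).
feature_names = ["age", "systolic_bp", "hba1c", "prior_admissions"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient maps to a named input, so the basis of a risk prediction
# can be shown to, and contested by, the people it affects.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
```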
Principle 4: Robustness, security and safety
AI systems must be robust, secure, and safe throughout their lifecycles, with continual risk assessment and management to support transparency and accountability. They should function properly under normal, misuse, and adverse conditions, with mechanisms to override, repair, or deactivate them, while maintaining information integrity. AI actors should use a risk-management approach to identify and mitigate foreseeable misuse and unintended risks throughout the lifecycle, while keeping records of data characteristics to understand outcomes and improve trust. For example, the FDA guidance on Artificial Intelligence and Machine Learning in Software as a Medical Device requires ongoing post-market surveillance to address potential misuse and unanticipated risks that may emerge once the device is on the market.
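To make the post-market surveillance idea concrete, here is a minimal sketch (illustrative Python; the rolling window, metric, and alert threshold are assumptions, not requirements drawn from the FDA guidance) of comparing a deployed model’s recent performance against its validation baseline and flagging degradation for human review.

```python
from collections import deque

class PostMarketMonitor:
    """Track rolling accuracy of a deployed model against a validation baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy   # accuracy measured before deployment
        self.tolerance = tolerance          # assumed alert margin, set per risk level
        self.outcomes = deque(maxlen=window)

    def record(self, predicted: int, confirmed: int) -> None:
        """Log one prediction once the real-world outcome is confirmed."""
        self.outcomes.append(int(predicted == confirmed))

    def degraded(self) -> bool:
        """True once a full window shows accuracy below baseline minus tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.baseline - self.tolerance

# In production, each confirmed case would be recorded as it resolves;
# a degradation alert triggers human review, repair, or deactivation.
monitor = PostMarketMonitor(baseline_accuracy=0.91)
```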
Practices on How to Implement AI in Healthcare Following This Principle
- Security and Privacy: Define and implement robust security and privacy policies.
- Collaboration: Enable safe collaboration while protecting patient data.
- Training and Validation: Train AI models with diverse data, ensure ethical considerations in clinical trials, and continuously evaluate and validate AI systems to avoid bias.
- Oversight: Implement oversight mechanisms based on the risk level of AI applications.
Principle 5: Accountability
Organizations and individuals involved in AI must ensure their systems function correctly, can explain decisions and actions, and take corrective measures when needed. AI actors should maintain traceability of data sets, processes, and decisions so they can analyze outputs and respond to inquiries through documentation and audits. They should use a risk-management approach at each AI lifecycle phase and adopt responsible practices to address risks related to bias, human rights, safety, security, privacy, and labor and intellectual property rights.
Practices on How to Implement AI in Healthcare Following This Principle
- Human Oversight: Include human oversight to identify and correct errors.
- Governance Programs: Establish AI governance programs with accountability policies and acknowledge the limitations of AI systems.
- Patient Advocacy: Involve patient advocacy groups in AI certification and data privacy efforts.
- Guidance and Feedback: Implement guidelines and a feedback process to improve AI systems continuously.
Trustworthy AI Challenges and Opportunities in Healthcare
Workshop participants then discussed challenges that occur in each step of the healthcare lifecycle, followed by mechanisms in the existing ecosystem available to manage these issues and opportunities for collaboration and next steps in operationalizing responsible AI.
Difficulties in the Application of Responsible AI Principles
- Overestimation of AI: Overestimating AI’s capabilities and expecting it to solve everything.
- Siloes: Lack of coordination between different stakeholder groups, both within and between organizations.
- Diverse Perspectives: Varying stakeholder interests and expectations.
- Rapid Change: Difficulty keeping up with the fast pace of AI development.
- Knowledge and Training: Insufficient understanding, talent development, and specific guidelines for AI.
- Data Standards: Need for large volumes of high-quality, labeled, and representative data.
- Trust and Awareness: Gaps in patient awareness, trust, and meaningful evaluation of AI for patients.
- Protection and Governance: Data protection issues, differing laws between countries, and lack of interoperability and global AI strategy.
Current Mechanisms to Manage Challenges
- Clarity and Accountability: Define roles, responsibilities, and accountability clearly for all AI actors.
- Ongoing Monitoring and Audits: Implement continuous monitoring and audits, and use pilot projects and sandboxes to ensure risk assessment and accountability. For example, under the EU AI Act, providers of AI in the “high-risk” category must establish a risk-management system, implement data governance, provide technical documentation showing compliance, design the system for record-keeping, provide instructions for use, allow for human oversight, and create a quality management system. The EU AI Act also proposes regulatory sandboxes to provide a controlled environment in which to develop, test, and validate AI systems.
- Utilize Existing Standards: Adhere to existing standards for data quality, provide context, and follow agile guidance. Apply ethical frameworks and governance procedures, such as the EU AI Act and the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
- Sharing Wins and Lessons Learned: Use collaborative, pre-competitive forums such as DIA to share lessons learned and positive use cases.
Opportunities for Collaboration
- Engagement of Regulators: Continuous collaboration with global regulators to standardize guidance and share development use cases.
- Data Sharing: Facilitate secure data sharing to allow for more representative and curated data for model development and validation.
- Privacy Regulations: Address complexities in global data privacy regulations through forums such as the Roundtable of G7 Data Protection and Privacy Authorities.
- Participatory AI: Involve patients and all other stakeholders in AI development, patient-focused outcomes research, and community oversight mechanisms.
- Cross-functional Models: Create new models for collaboration across functions.
Next Steps for Progress
- Stakeholder Collaboration: Foster collaboration among all AI stakeholders for success. Form working groups and professional AI communities to establish best practices and share information on wins and losses. Focus on collaboration, not competition.
- Patient Involvement: Provide funding and education for patient involvement in AI governance and development.
- Next Generation Involvement: Engage younger generations in AI development in the healthcare space.
DIA’s Role in Promoting Responsible AI in Healthcare
The need for resource sharing through collaboration, and for every stakeholder’s voice to be heard, was a clear and recurring theme throughout the workshop. DIA’s next step is to form a pre-competitive public-private consortium that continues these conversations with subject matter experts from industry, regulatory agencies, policy, technology, patient communities, and academia, allowing for resource sharing and transparency about successes and pain points in a neutral setting.

Responsible AI Solution Room participants: Ella Balasa, Balasa Consulting; Greg Ball, ASAP Process; Andrew Bate, GSK; Rune Bergendorff, Implement Consulting Group; Brooke Casselberry, Epista Life Science; Alison Cave, Medicines and Healthcare products Regulatory Agency (MHRA), UK; Jonathan Chainey, Roche; Ethan Chen, US FDA; Karla Childers, Johnson & Johnson; Deborah Collyar, Patient Advocates In Research; Dave deBronkart, e-patient Dave; Christina Defilippo Mack, IQVIA; Martin Hodosi, Kearney; Stacy Hurt, Parexel; Barbara Lassoff, ProductLife Group; Rosanna Lim, Kearney; Nicole Mahoney, Novartis; Sridevi Nagarajan, AstraZeneca; Raviv Pryluk, PhaseVTrials; Rose Purcell, Takeda; Erika Rufino, Johnson & Johnson; Cary Smithson, Cencora PharmaLex; Elizabeth Somers, Merck; Ling Su, Shenyang Pharmaceutical University, Yeehong Business School; Joanne Sullivan, ProductLife Group; Phil Tregunno, MHRA; Sarah Vaughan, MHRA; Christine Von Raesfeld, People with Empathy; Ramona Walls, Critical Path Institute; and Reem Yunis, Vaultree.
DIA and the authors thank all the above for their contributions to this field.