Kearney Healthcare and Life Sciences
Artificial intelligence (AI) has already begun transforming the healthcare industry, with applications ranging from diagnostics to drug discovery and personalized medicine. The global healthcare AI market, valued at $19.27 billion in 2023, is expected to grow at an annual rate of 38.5% through 2030, according to Grand View Research.
At DIA’s 2024 Global Annual Meeting, healthcare stakeholders talked openly about the need to balance AI-driven innovation with the ethical challenges AI presents. As the saying goes, we must build the airplane while flying it.
The industry has confronted ethics and safety challenges before: public safety scandals such as the thalidomide crisis taught us the need for rigorous oversight. Now, as AI accelerates, a balanced approach is essential, one that keeps the necessary oversight in place without letting overcaution delay critical advances. In other words, ethical issues such as bias, patient privacy, and transparency must be addressed head-on without stalling progress.
The Urgency of AI Adoption in Healthcare
Despite AI’s vast potential, the healthcare sector has been slower to adopt these technologies compared with industries such as financial services, which receive significantly higher AI investment.
By 2025, healthcare is forecast to make up only 3.5% of AI investments across seven global industry sectors (transportation, consumer, government, financial services, industrial, IT, and healthcare), according to PitchBook’s 2023 Artificial Intelligence & Machine Learning Overview. This discrepancy reflects the risk-averse nature of the healthcare sector. Regulatory uncertainties, patient safety concerns, and the inherent complexity of healthcare data are among the factors holding back more widespread AI implementation.
At the same time, we are seeing growing demand for AI-driven healthcare solutions from both patients and providers. Patients, empowered by their own research and digital tools, are more informed than ever. They approach their physicians with specific questions based on AI-generated health information, pushing the healthcare system to adapt.
Patient advocacy groups such as The Light Collective are leading the charge to shape AI policy around patients’ concerns through projects such as the Patient AI Rights Initiative. This shift toward patient empowerment is forcing the industry to confront the ethical and operational challenges AI presents.
Navigating the Ethical Challenges
Key ethical challenges slowing AI innovation and investment in healthcare include a heightened risk of data bias, a fragmented regulatory landscape, and low public trust.
Data Bias
The ethical challenges of AI in healthcare are complex and multifaceted. One of the most pressing issues is data bias. AI systems depend on vast amounts of data to make predictions, but when that data is incomplete or unrepresentative, the results can be biased.
This is particularly concerning in healthcare. Although bias in healthcare data is not new, AI can amplify it, deepening health inequities across demographic groups. For instance, training AI models on data sets that underrepresent minorities could result in less accurate diagnoses or treatment recommendations for those populations. Likewise, AI algorithms trained on data from certain regions or countries may not transfer to settings where patient populations and treatment strategies differ.
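To make the risk concrete, here is a minimal sketch (in Python, with entirely hypothetical data and model outputs) of one common first step in detecting this kind of bias: a subgroup performance audit that compares a model's accuracy across demographic groups.

```python
# A minimal sketch of a subgroup performance audit. The groups, labels,
# and predictions below are hypothetical illustrations, not real data.
from collections import defaultdict

# Hypothetical records: (demographic_group, true_label, model_prediction)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy = {accuracy:.2f} (n = {total[group]})")

# A large accuracy gap between groups (here, group_a outperforms group_b)
# is a signal that the training data may underrepresent one population.
```

In practice, such audits would use held-out clinical data and richer metrics (for example, false-negative rates per group), but even this simple comparison illustrates how unrepresentative training data surfaces as unequal performance.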
Thankfully, the industry is now well aware of bias in healthcare data and the added risks AI brings. However, the publicity around this risk, amplified by some tech giants’ missteps, makes firms nervous to approach it, even constructively. Few want to risk patient harm, or the kind of scrutiny that many large language models have drawn in recent months. Google, for example, had to apologize after its Gemini tool generated images of racially diverse Nazis.
Regulations
The fragmented regulatory landscape presents another challenge. Different regions have adopted varied approaches to AI governance. For example, the European Union passed the AI Act, a regulatory framework that prioritizes transparency and accountability, while the US has so far taken a more hands-off approach. This patchwork of regulations creates uncertainty for AI developers, innovators, and healthcare providers, making it difficult to implement AI solutions at scale while ensuring compliance across borders.
Trust
Public trust in AI also remains low, particularly in health contexts. According to a 2024 KFF Health Misinformation Tracking Poll, over half of adults do not trust AI-generated health information; even among AI users, most are skeptical about the accuracy of AI-driven medical advice. This mistrust is a significant barrier to the widespread adoption of AI in healthcare.
The pharmaceutical industry has grappled with trust problems for decades and is loath to take steps that risk eroding trust further. AI development, by contrast, tends to follow the ethos that innovation means moving fast and potentially breaking things. That clashes with what society expects of the pharmaceutical industry and healthcare systems: full precision and explicability at all times.
To meet this challenge, we must create systems that allow for continuous adaptation and learning as we implement AI in healthcare settings (again, building while flying). We must ensure that we manage risks without grounding innovation.
Recommendations for Responsible AI Implementation
Addressing these challenges requires a multipronged approach:
Build on existing frameworks.
Healthcare stakeholders should build on existing ethical frameworks rather than starting from scratch. Mature R&D organizations typically already have robust systems in place: quality and compliance functions that monitor process deviations and apply validation frameworks to new technologies to safeguard patient privacy and ensure ethical practices.
These frameworks, however, must be adapted to address the unique aspects of AI. For example, while existing bioethics frameworks cover data privacy, they might not adequately address the issue of data bias inherent in AI models.
Get everyone collaborating.
Wide collaboration among stakeholders is also crucial.
Regulators, R&D teams, tech companies, healthcare providers, and patient advocacy groups must come together to develop a unified approach to responsible use and development of AI. This collaborative ecosystem should work to create shared standards and best practices that can be applied across the board. By doing so, we can ensure that AI systems are developed in ways that prioritize patient safety and equitable healthcare access.
Use regulatory sandboxes.
Regulatory frameworks must also become more flexible to keep pace with the rapid evolution of AI technology. One promising approach is the creation of “regulatory sandboxes,” which allow companies to test new AI technologies in a controlled environment. These sandboxes provide a space for innovation while maintaining oversight from regulatory bodies, thus ensuring that new AI solutions are both safe and effective before they are widely adopted.
Regulatory sandboxes balance the need for innovation with the responsibility to protect patient health and privacy. These test environments will enable all parties to evaluate AI solutions from a risk-benefit perspective for patients and establish appropriate regulatory guidelines to empower innovation moving forward.
Support AI literacy for patients and give them agency.
Another essential component of responsible AI is patient agency. This is different from “involving” patients in their own care. Giving patients agency means understanding and embracing that they will not wait for the industry or regulators to grant them permission to act. Patients will arm themselves with as much knowledge as they can, wherever they find it. As an industry, our goal should be to harness this drive and make patients an integral part of innovation.
To do that, patients must be involved from the outset—not only to provide real-world data to improve AI models but also to ensure that the technologies developed align with their needs and values.
By educating patients on how to interpret and use AI-generated health information, we can empower them to ask the right questions when discussing their health with providers. This will foster more transparent and trusted healthcare systems.
The Path Forward
As we look to the future, the responsible implementation of AI in healthcare should be a collaborative effort that requires input from all stakeholders—patients, providers, innovators, regulators, and technology and pharmaceutical companies alike.
By building on existing ethical frameworks, fostering cross-sector collaboration, and developing flexible regulatory environments, we can unlock AI’s potential to transform healthcare while safeguarding patient trust and minimizing risks.
The stakes are high, but with the right approach, AI can deliver on its promise to revolutionize healthcare for the better.