Imagine a physician and a patient sitting quietly together in an examination room. The physician’s eyes are focused on a computer screen as she speaks in brief sentences about elevated A1C levels and the challenges of managing blood sugar through lifestyle changes and medication. The patient nods along silently and anxiously while holding an incomprehensible sheet of lab results and struggling to process her Type 2 diabetes diagnosis.
Encounters like these form the foundation of a deepening crisis in American healthcare. One study found that trust in physicians and hospitals has plummeted from 71.5% in April 2020 to 40.1% in January 2024—an erosion partly tied to the COVID-19 pandemic. Meanwhile, 60% of Americans grade the healthcare system C or worse, and 70% express a desire for stronger relationships with their healthcare providers (HCPs).
This erosion of trust occurs as advances in artificial intelligence (AI) are changing how we view healthcare and look for information about our conditions and treatment options. Despite concerns surrounding data biases and potential errors, generative AI tools can help rebuild trust in medical establishments and strengthen the patient-provider relationship—if providers are committed to using these tools ethically and responsibly.
Building Better Clinical Relationships
Clinicians are in a tough situation: stretched thin, they find that maintaining a high quality of care has become increasingly challenging, both logistically and psychologically.
Many are turning to AI to help. A recent survey found that as many as 76% of physicians have begun incorporating large language models (LLMs) into their clinical decisions, with 40%-60% using LLMs to assess drug interactions and to support diagnosis, literature searches, and treatment planning.
Using AI in clinical settings offers many benefits. AI tools can handle documentation and treatment planning, freeing clinicians to focus on patient care. AI-powered ambient clinical intelligence can also transcribe patient encounters in real time, allowing physicians to have more meaningful patient conversations.
Increasing the Patient’s Understanding
The moments after a medical appointment often bring more questions than answers. Patients struggle to recall their physician’s explanation, understand their diagnosis, or make sense of their treatment instructions.
Clear communication is vital to strengthening the patient-provider relationship and, ultimately, to good patient care. AI can convert medical text from an eleventh-grade reading level to a sixth-grade reading level (the accepted standard for health literacy), giving patients a clearer understanding of their diagnosis and treatment.
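As a rough sketch of how such simplification might be wired into a patient-communication workflow, the Python example below sends clinical text to an LLM and checks the result against a readability score. It assumes access to OpenAI's chat completions API and the open-source textstat library; the model name, prompt wording, and grade threshold are illustrative choices, not endorsements of any particular product.

```python
# A minimal sketch, assuming access to OpenAI's chat completions API and the
# textstat readability library (pip install openai textstat). The model name,
# prompt wording, and grade-level threshold below are illustrative only.
from openai import OpenAI
import textstat

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_for_patient(clinical_text: str, max_grade: float = 6.0) -> str:
    """Rewrite clinical text at roughly a sixth-grade reading level."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable LLM could be swapped in
        messages=[
            {"role": "system",
             "content": "Rewrite the following medical text at a sixth-grade "
                        "reading level. Keep all medical facts unchanged."},
            {"role": "user", "content": clinical_text},
        ],
    )
    simplified = response.choices[0].message.content
    # Verify the rewrite actually meets the target readability before use.
    grade = textstat.flesch_kincaid_grade(simplified)
    if grade > max_grade:
        raise ValueError(f"Output still reads at grade {grade:.1f}; needs review.")
    return simplified
```

Even with a check like this, any simplified text would still need clinician review before it reaches a patient, consistent with the oversight practices discussed below.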
One emergency room doctor tried unsuccessfully to explain to an elderly patient’s children why the treatments they were suggesting would worsen their mother’s condition, so he turned to ChatGPT. “As I recited ChatGPT’s words, their agitated expressions immediately melted into calm agreeability,” he wrote.
Confusion and frustration are magnified when physicians and patients don’t speak the same language. Language barriers have been shown to result in more frequent adverse events, reduced access to health information, and diminished care satisfaction. Beyond basic translation, AI-powered services can be trained to understand cultural nuances and medical terminology across different dialects—and they’re only getting stronger.
AI can also help overcome fundamental barriers to access. Specialized medical chatbots, including one for cancer patients, may offer on-demand, cost-effective preliminary diagnostic guidance and health information to patients who lack immediate access to care. They can also alert patients when their condition requires in-person medical attention.
AI can therefore put knowledge directly in patients’ hands, delivering customized content about conditions, treatments, and preventive care. Patients can arrive at appointments with a greater understanding of their illnesses, and physicians can verify diagnoses or find common ground with them.
Detailed treatment explanations enable more informed healthcare decisions and give patients the sense that their doctor is there for them.
Ensuring Safety and Privacy Is Crucial
However, AI needs considerable human oversight and rigorous safeguards to be effective in healthcare settings. Clinicians who wish to use AI to rebuild and maintain patient trust must address privacy concerns and ensure the quality of both the outputs and the underlying data sources.
AI implementation must be systematic and thoughtful. More than 200 guidelines exist globally to direct appropriate AI use in healthcare settings, including some established by the US FDA. Providers recognize that AI, and LLMs in particular, still requires human oversight: 97% report consistently vetting LLM outputs before clinical application.
Any clinical AI tool must comply with the most stringent patient data protection requirements, including HIPAA. Clinicians may also wish to obtain patient consent before using AI in order to maintain transparency. Deloitte found that 80% of patients want to know how their providers use AI in delivering care.
Once a physician begins using AI, its outputs must be reviewed diligently to verify their accuracy. Errors must be tracked to improve the models. All staff members on a clinical team must undergo training to understand AI’s capabilities and limitations.
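As one illustration of what that review-and-tracking discipline could look like in practice, the sketch below logs each AI output alongside the reviewing clinician's verdict so error rates can be measured over time. The log schema and verdict labels are hypothetical examples, and a real deployment would need HIPAA-compliant storage for any record containing patient information.

```python
# A minimal sketch of an AI-output audit log using only the standard library.
# The schema (fields, verdict labels) is a hypothetical example, not a standard.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_review_log.csv")
FIELDS = ["timestamp", "tool", "output_summary", "reviewer", "verdict", "notes"]

def log_review(tool: str, output_summary: str, reviewer: str,
               verdict: str, notes: str = "") -> None:
    """Append one clinician review of an AI output to the audit log."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "output_summary": output_summary,
            "reviewer": reviewer,
            "verdict": verdict,  # e.g. "accepted", "corrected", "rejected"
            "notes": notes,
        })

def error_rate() -> float:
    """Share of logged outputs the clinician had to correct or reject."""
    with LOG_PATH.open() as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return 0.0
    flagged = sum(r["verdict"] != "accepted" for r in rows)
    return flagged / len(rows)
```

A rising error rate for a given tool is a signal to retrain staff, tighten prompts, or reconsider the model, which is exactly the feedback loop the guidelines above call for.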
Most importantly, the focus must remain on augmenting, rather than replacing, human medical expertise. Like any other tool, AI is a resource that should help HCPs be more efficient, leaving them more time for meaningful and empathetic patient interactions. Providers must maintain the essential human elements of medical care to give patients what they need and want and to preserve the heart of the patient-provider relationship.
Embracing a Future with AI
Consider again the physician and her newly diagnosed diabetes patient in that examination room. AI now offers tools to transcribe their conversation, explain complex lab results in clear terms, and provide the patient with understandable information about diabetes management. The physician spends less time documenting and more time answering questions. The patient leaves with confidence in her treatment plan and renewed trust in her provider’s care.
As healthcare systems implement AI tools thoughtfully and securely, they create opportunities for stronger connections between clinicians and patients, restoring trust in medical care and improving health outcomes. How and when these tools are used will depend largely on each patient’s preferences, each healthcare provider’s workflow, and the care setting. Whatever the application, models built on trustworthy, diverse data sets and subject to constant validation and improvement will be critical to ensuring the best AI outcomes.
To learn more about AI in healthcare, plan to attend our DIA 2025 Global Annual Meeting.