Special Section: Pharmacovigilance

Advanced Analytics Drives Innovation in Pharmacovigilance Risk Management Systems
Francois Audibert
Vitrana

Mariette Boerstoel
AstraZeneca

Courtney Granville
DIA

Jeremy Jokinen
Bristol Myers Squibb Company

A rapid acceleration of automation sees artificial intelligence (AI) driving efficiency across the safety continuum. Although cost, unfamiliarity, and hesitancy to explore new ways of working were (and to some extent remain) barriers to rapid integration of AI, the demands of increasing case volumes and the consequent need to reduce human workload are shifting priorities and may promote further adoption of AI. This comes with many benefits, including reduction in errors and overreporting, and improvement in timelines and transparency. It is also enabling a shift in workforce needs and new ways of working. Finally, evidence generation in the COVID era has shifted mindsets and could stimulate a future in which post-approval safety assessment shifts from exploratory to confirmatory.

How is AI Driving Innovation in Pharmacovigilance Today?

In addition to case processing, AI is now being put to work in pharmacovigilance (PV) to identify adverse events from real-world data (RWD) sources such as social media, literature, and call center records, to manage high-risk populations, and for signal detection, tracking, and analysis. Specific use cases demonstrating the value of AI include:

  • Error reduction
  • Increased consistency
  • Management and streamlining of data from diverse sources
  • Structuring data for use in signal detection
  • Extracting data using natural language processing (NLP)
  • Extracting historical information.

For example: When a user reads a text containing medical information, its construction, wording, and sequencing, along with the writer’s literacy, can influence the reader’s understanding of its main points. Readers typically retain only certain aspects of the text presented, which can introduce error further downstream in the process. Using an AI agent to organize the text into meaningful elements, and presenting this extraction to the reader for review, encourages the reader to examine the text in its entirety: even if the extraction is imperfect, the user will investigate and supply the missing information. This approach can ensure that all of the text is read, all meaningful elements are viewed, and data is captured close to its source, reducing the chance of error due to misidentification or misinterpretation further down the processing path.
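The article does not describe a specific implementation, but the idea of extracting meaningful elements for reviewer verification can be sketched with a toy, rule-based extractor. The field names and patterns below are illustrative only, not a real PV vocabulary; a production system would use a trained NLP model rather than regular expressions.

```python
import re

# Toy rule-based extraction: pull candidate elements from a free-text
# adverse-event narrative so a reviewer can verify each field against
# the source. Patterns and field names are hypothetical.
PATTERNS = {
    "age": re.compile(r"(\d{1,3})[- ]year[- ]old"),
    "date": re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
    "dose": re.compile(r"\b(\d+(?:\.\d+)?\s?mg)\b", re.IGNORECASE),
}

def extract_elements(text: str) -> dict:
    """Return the first match per field (None if absent), so the
    reviewer can confirm or complete the extraction against the text."""
    result = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        result[field] = match.group(1) if match else None
    return result

narrative = ("A 64-year-old patient started 20 mg daily on 2021-03-15 "
             "and reported dizziness two days later.")
print(extract_elements(narrative))
# {'age': '64', 'date': '2021-03-15', 'dose': '20 mg'}
```

Presenting fields alongside the source text, rather than replacing it, is what prompts the reviewer to read the whole narrative and fill any gaps the extractor missed.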

By introducing an AI-based processing step, whatever role that step plays, an organization develops a consistent treatment of the information it receives. For example, if an AI agent, before a quality control (QC) task, evaluates a received item against similar items that have already been through QC, the agent can give the user conducting quality review a consistent evaluation of the item (even when the match is close but not perfect) before the user even sees it.
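One minimal way to picture such a pre-QC step is to score each incoming item against previously reviewed items and surface the closest precedent. The sketch below is a stand-in only: it uses Jaccard similarity over word tokens, where a real system would use a trained similarity model, and all item texts are invented.

```python
# Hypothetical pre-QC scoring: compare an incoming item to items that
# have already passed quality control, so the reviewer starts from a
# consistent, precedent-based evaluation.
def jaccard(a: set, b: set) -> float:
    """Overlap of two token sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pre_qc_score(item: str, reviewed: list[str]) -> tuple[str, float]:
    """Return the closest previously QC'd item and its similarity."""
    tokens = set(item.lower().split())
    scored = [(jaccard(tokens, set(r.lower().split())), r) for r in reviewed]
    score, best = max(scored)
    return best, round(score, 2)

history = ["headache after first dose",
           "rash after vaccination",
           "nausea on day three"]
print(pre_qc_score("Headache after second dose", history))
# ('headache after first dose', 0.6)
```

Because every item is scored the same way, two reviewers looking at similar items start from the same evaluation, which is the consistency benefit the text describes.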

Collecting data from various sources is typically done in different locations, by different teams, and sometimes with different tools. AI can help consolidate this collection and processing into a single system while presenting the initial data in a uniform form for user review. If AI is used to systematically extract key characteristics of the original data, users can consistently address items in priority order. This approach allows organizations to develop thorough data processing and management, and it accelerates the transformation of these disparate sources into comparable data sets, helping to achieve organizational and economic efficiencies.
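The consolidation and prioritization described above can be illustrated with a small sketch: reports from two hypothetical intake channels are mapped into one shared schema, then ordered so that serious, older items surface first. Channel names, field names, and the priority rule are all assumptions for illustration.

```python
from datetime import date

# Illustrative consolidation: map reports from disparate intake
# channels (names and fields are hypothetical) into a single schema.
def normalize(source: str, raw: dict) -> dict:
    if source == "call_center":
        return {"event": raw["complaint"], "serious": raw["escalated"],
                "received": raw["call_date"]}
    if source == "literature":
        return {"event": raw["finding"], "serious": raw["is_serious"],
                "received": raw["pub_date"]}
    raise ValueError(f"unknown source: {source}")

def prioritize(records: list[dict]) -> list[dict]:
    # Serious items first, then oldest first within each group.
    return sorted(records, key=lambda r: (not r["serious"], r["received"]))

records = [
    normalize("call_center", {"complaint": "dizziness", "escalated": False,
                              "call_date": date(2021, 3, 1)}),
    normalize("literature", {"finding": "hepatotoxicity", "is_serious": True,
                             "pub_date": date(2021, 4, 2)}),
]
print([r["event"] for r in prioritize(records)])
# ['hepatotoxicity', 'dizziness']
```

Once every channel lands in the same schema, a single priority rule can be applied consistently across all of them, which is what turns disparate sources into comparable data sets.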

What Will Operationalize Innovation Across the Pharmacovigilance Continuum?

As the shift to these advanced approaches continues, there is a growing need for education and change management to help PV professionals adapt to new ways of identifying and managing safety data and to support safety monitoring.

First, new skills will be required to meet the demands of AI-driven approaches, and sharing lessons learned from initial experiences will drive our path forward. Development of tools should include input from end users to assure ease of operation and their engagement in the shift from data entry to data quality assessment. Selecting specific expected outcomes (e.g., successfully converting a specific form into a user-friendly set of fields for the user to review) that provide meaningful business value, and achieving them within a specific timeframe (i.e., in slices of the overall project called “sprints”), will drive adoption of these AI capabilities by PV professionals. Sprints could be used to stagger integration, allowing measurement of improvements and impacts during integration, flexible delivery and refinement, and sustained engagement of users throughout the process.

Second, use of AI across the PV spectrum is a shift in the landscape that is not well understood by many, if not most, people and organizations. Thus, education to enhance skills, build awareness, and set expectations around the utility of AI in this context is imperative to assure successful integration of automated approaches.

Finally, successful change management is required to guide end users to the adaptation, optimization, and standardization of new processes.

What Does the Future of PV Look Like, Given the Rapid Insights Coming from Integration of AI?

Many clinical trials have been halted, suspended, or continued with modified data collection due to COVID-19, resulting in incomplete data sets and data gaps that must be addressed to obtain an adequate safety profile. Despite this, we must assure that safety is maintained; benefit/risk assessment is a regulatory need globally. This challenge brings with it an opportunity to think critically about what data is needed to demonstrate safety and how we can use lessons imparted out of necessity to drive a new era in safety assessment.

Safety assessment has always been a surveillance activity and, unlike efficacy, a safety determination comes from lack of signal. However, some of the approaches that are used in making an efficacy determination may also help inform safety assessments, and AI could be leveraged to do so.

For example, when data are missing in an efficacy determination, an exercise is completed that probabilistically imputes values for the gaps under stated assumptions and models the outcomes to determine how the efficacy conclusions change based on those assumptions.

Could an approach like that used for assessing efficacy be applied to safety? As with efficacy assessments, the approach begins with an inventory of the data and a question: is it sufficient to characterize the safety profile? Knowledge gaps are then assessed to determine whether to address them through additional study or evidence gathering, or whether the data are sufficient despite the gaps to make reasonable assumptions and test the sensitivity of the conclusions to those gaps.
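As a concrete sketch of that sensitivity exercise applied to safety, missing subject outcomes can be probabilistically imputed under a range of assumed event rates to see how the overall rate estimate moves. All numbers below are illustrative, and the simple Monte Carlo fill-in stands in for whatever formal imputation model a real analysis would use.

```python
import random

def imputed_event_rate(observed: list[int], n_missing: int,
                       assumed_rate: float, n_sims: int = 10_000,
                       seed: int = 0) -> float:
    """Mean overall event rate after Monte Carlo imputation: each
    missing outcome becomes an event with probability assumed_rate."""
    rng = random.Random(seed)
    n_total = len(observed) + n_missing
    events = sum(observed)
    total = 0.0
    for _ in range(n_sims):
        imputed = sum(rng.random() < assumed_rate for _ in range(n_missing))
        total += (events + imputed) / n_total
    return total / n_sims

observed = [1] * 4 + [0] * 76        # 4 events among 80 observed subjects
for assumed in (0.05, 0.10, 0.20):   # sensitivity over assumed rates
    rate = imputed_event_rate(observed, 20, assumed)
    print(f"assumed rate {assumed:.2f} -> overall rate {rate:.3f}")
```

If the safety conclusion holds across the whole range of plausible assumptions, the gaps may be tolerable; if it flips, that points to where additional study or evidence gathering is needed.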

What Would Safety Look Like if our Mindset Changed to See Safety as an Evidence-Generating R&D Activity Rather than as an Exploratory Exercise?

In this model, we would generate an evidence plan that proactively specifies expectations, what data collection is required, what analyses will be conducted, and what subjects are needed to derive a safety determination. Such a plan pre-specifies unanswered questions, data sources, and analytics to address them. As a living document, the plan is updated with new questions to be addressed and answered as additional insights are gained.

How might AI positively impact this approach? The availability of advanced analytics allows us to consider the many sources of information that could be used for safety assessment as part of generating that evidence. These include population literature, pre-clinical data, data from post-authorization safety studies, clinical data, electronic health records and insurance claims data, observational studies, post-market literature, patient support programs, digital health data, and surveillance databases.

Applying AI to data from these and other sources can inform our immediate need to respond to gaps and can move us toward planning for safety evidence generation that identifies conditions for safe use in addition to identifying conditions where use is unsafe.