Around the Globe

Trustworthy AI:
EC High-Level Expert Group Issues Guidelines
Thomas Kühler
Sanofi R&D
Digital is everywhere and is transforming our lives in profound ways; the pharmaceutical sector is no exception. This article discusses two aspects of this transformation from the European point of view: first, the application of Artificial Intelligence (AI) throughout the drug development and lifecycle continuum; second, digital therapeutics.

AI systems will impact the pharmaceutical enterprise, the healthcare sector, and patient interaction in ways that we cannot yet imagine. Indeed, AI technologies are already being used, for instance, in digital therapeutics to provide personalized targeted treatments (see more below); in decision support systems for the analysis of (big) data; and by epidemiologists for early detection of disease outbreaks (pandemics).

In 2018, the European Commission created the High-Level Expert Group (HLEG) on AI, which recently issued a set of guidelines for trustworthy AI. It is important to have an appreciation of the contents of these guidelines; after all, a music recommendation proposed by an AI algorithm does not raise the same ethical concerns as an AI system proposing a medical treatment.

According to the HLEG on AI, trustworthy AI needs to be human-centric. That is, it must serve the common good of humanity, with the goal of improving human welfare and freedom, and it must build on trust, similar to the trust established by the aviation industry or achieved by the management of food safety. The HLEG proposes three components that must be considered for any AI system being developed:

  1. It should be lawful, complying with all applicable laws and regulations, which users should rightfully be able to take for granted;
  2. It should be ethical, ensuring adherence to ethical principles and values, which means it should not be used to unduly shape and influence our behavior through unfair manipulation, deception, herding, or conditioning; and
  3. It should be robust, both from a technical and social perspective, since even with good intentions, AI systems can cause unintentional harm.

The third component is of particular interest. Indeed, the HLEG discusses the challenge of weighing the benefits and risks of “black box” AI algorithms: especially those of a non-deterministic nature, where small changes in data input can result in substantial changes in outcomes, where reproducible behavior cannot be guaranteed, and where human control is almost entirely relinquished.

How does one strike the right balance between AI utility (maximizing the algorithm value-add) and AI explainability (documenting an auditable trace)? Society has not yet articulated a clear answer to this question. The Commission’s HLEG on AI does not go that far in its guideline, either. Rather, it states: “This overview is neither meant to be comprehensive or exhaustive, nor mandatory. Rather, its aim is to offer a list of suggested methods that may help to implement Trustworthy AI.”

Nonetheless, they do present a rather extensive Trustworthy AI Assessment List which they invite stakeholders to pilot in practice and to provide feedback regarding its feasibility, completeness, and relevance. Based on the feedback, a revised version of the Trustworthy AI assessment list will be proposed in 2020. Stay tuned.

Digital Therapeutics

Digital therapeutics, which may or may not be driven by AI algorithms, are software-generated therapeutic interventions that prevent, manage, or treat medical disorders or diseases. They are designed to give patients more personal control over their own care. They resemble consumer well-being apps, but with one distinguishing difference: they aim to deliver clinical outcomes. They can be used in stand-alone mode or alongside medications, medical devices, or other interventions. While this may sound futuristic, many such products (downloadable apps) are currently in development. In fact, FDA approved the first digital therapeutic as a medicine earlier this year, thereby validating the concept.

While the FDA regulates drugs, devices, diagnostics, and now digital therapeutics, the situation in the EU is not as straightforward. Product quality and design, clinical validation, patient utilization, and regulatory oversight of digital therapeutics mandate the involvement of a second authority, a Notified Body (of which there are several to choose from in the EU), in addition to the EMA. Digital therapeutics are regulated under the Medical Device Regulation (and possibly the In Vitro Diagnostics Regulation), currently being implemented. Hence, the pace at which regulations and guidelines are developed lags behind the speed at which this new class of therapeutics is developed.

While the appetite to adopt digital therapeutics to date remains unaligned across stakeholders (including pharmaceutical companies, providers, payers, and regulators), the sector is working hard to advance, adapt, and manage in this fast-moving area. This field is very much still taking shape. Stay tuned.