Proceedings: DIA Europe 2018
Exploring the Use of Artificial Intelligence
Trust in Technology, or Trust in Each Other?
Raleigh E. Malik
DIA Senior Scientist
Patrick Brady
VP and Head of Regulatory Policy and Intelligence,
Bayer
Simon Brown
Global Head,
Learning Centre of Expertise and Novartis Universities
Detlef Hold
Global Strategy Lead Knowledge Cycling,
PD Faster Filing Program,
Genentech Inc. (A Member of the Roche Group)
The DIAmond session Exploring the Use of Artificial Intelligence at the 2018 DIA Europe Meeting examined the concept of human trust both in technology and in each other. The diverse panel of stakeholders discussed current perceptions of artificial intelligence (AI) applications and considered how AI will impact professionals working in therapeutic development. Here are a few key takeaways:
- Society is still at the beginning of fully harnessing the capabilities of AI.
- Regulatory authorities recognize that AI will impact pharmaceutical regulatory decision-making but acknowledge that, in general, regulators have not begun to specifically address it.
- Trust in technology will result from an understanding of accountability and transparency.
- Advances in data collection and analysis are driving pharmaceutical professionals to consider how best to evolve their skills to meet the increase in knowledge acquisition.
AI Today
Experts agree that data scientists are still developing and advancing AI technology. However, society remains far from what is currently depicted in science fiction films, with robots replacing humans in the workforce. Patrick Brady, VP of Regulatory Policy and Intelligence for Pharma and Consumer Health at Bayer and Chair of the session, noted that progress has been made in what he referred to as “narrow AI,” or the use of AI approaches, such as natural language processing or machine learning, to address discrete use cases. Novartis, for example, applied machine learning to help curate multiple training courses from across 14 learning management systems (LMS) within the company. The idea was that an algorithm could be trained to review the current courses across all LMSs and create new, simplified course catalogs faster than employees could do the same task. Although this approach was logical, the company discovered that it still took a considerable amount of time for employees to train the algorithm and review the curated materials for accuracy.
Trust is the Key to Adoption
The scenario above exemplifies the importance of trust in the successful adoption of AI approaches. Companies cannot confidently implement and accept new technological methods if the output is not accurate. Furthermore, concerns related to accountability and transparency should be addressed before applying AI to a business challenge.
Accountability and Transparency
Who is accountable and responsible for the outcome when AI is used? Is it the programmer, or is it the sponsor? And who is responsible for the credibility and integrity of the data? Thomas Senderovitz, Director General at the Danish Medicines Agency, noted that regulators inherently mistrust data; to gain the trust of regulators and other stakeholders, sponsors and programmers will need to provide sufficient context to make the data credible.
Panelists agreed that sponsors and programmers should be transparent with each other and with external stakeholders when determining how the data will be collected, stored, and used. Openness about the AI methodology, including the question being addressed and the context in which the data are applied, helps prevent confusion and possible disagreements related to the output and its application.
As a possible solution, it was suggested that during the planning phase, the sponsor collaborate with those involved to map everyone responsible and accountable for each phase of the project. Communicating accountabilities up front prevents ambiguity and builds trust through transparency among stakeholders.
The panelists of this DIAmond session were:
Elena Bonfiglioli, Regional Business Leader, Health and Life Sciences, Microsoft
Patrick Brady, VP and Head of Regulatory Policy and Intelligence, Bayer
Simon Brown, Global Head, Learning Centre of Expertise and Novartis Universities, Novartis
Detlef Hold, Global Strategy Lead Knowledge Cycling, Genentech
Thomas Senderovitz, Director General, Danish Medicines Agency
An Evolution of Skills
With the emergence of ‘Big Data,’ professionals must consider how to evolve their skills to manage the new magnitude of available data and analyses. Artificial intelligence, whether machine learning or natural language processing, appears to be one method for effectively capturing and analyzing these data. But what skills will pharmaceutical professionals need in order to stay relevant? Some predict that programming and analytics, such as analyzing unstructured text, will become common skills. Professionals will need to learn how to better manage their cognitive load to extract the right information from sources and avoid feeling overwhelmed by misinformation and noise. Computational thinking, the ability to translate vast amounts of data into abstract concepts, will become a key skill. Furthermore, both social media literacy and human-machine design in drug development will be as important as technical skills. It was recommended that programmers and engineers be required to take ethics training to prepare for a future in which AI more readily augments the work of humans.
So where do we go from here? We keep innovating. We keep collaborating. And we keep communicating openly. Transparency will lead to trust, and trust will lead to adoption and progress.