Bristol Myers Squibb
Large language models (LLMs) have been used in drug discovery for some time now, but many in the life sciences industry are just becoming aware of Generative AI, a buzzword that has become increasingly pervasive in society. Frequently and inaccurately equated with OpenAI, the company behind ChatGPT among numerous tools in the expansive AI domain, this technology garners a wide array of responses: lauded as a transformative innovation at one end of the spectrum and decried as a potential societal threat at the other. This article explores the transformative potential of the LLMs underpinning Generative AI, their possible impacts on drug development, the regulatory hurdles they face, and the pressing need for a balanced approach to their utilization. Reaping the maximum benefit of these tools demands a judicious blend of innovation, regulatory compliance, and education, alongside active management of their risks: the proliferation of racial bias in healthcare, safety concerns arising from AI-enabled decision support, and the erosion of public trust in our healthcare institutions if data is used in training sets without proper consent.
Regulation
Regulatory implementation of LLMs faces several challenges, including difficulties with traceability, explainability, data privacy, and consent for the use of data in training models. To fully realize the potential benefits of LLMs, stakeholders across the healthcare ecosystem must collaborate with regulatory bodies to establish rules that mitigate risk without stifling innovation. Until clear regulations emerge, we need to be smart about setting our own policies, holding ourselves accountable to the ethical principles and values of our organizations. We need policies and standards that respect privacy and keep our data safe: revisiting data governance with respect to the types of data we will use to train our AI models, and committing to ethical methods such as adversarial debiasing to reduce model bias. This may mean restricting the use of open-source tools within our organizations as we find our footing. It may also require new technical environments and governance guardrails to mitigate risk and maximize value. While these policies will be individualized to each organization, the promotion of trust and transparency must be a common core.