Special Section: AI in Clinical Research


Introduction: Defining Intelligence Shaped by Subjective Perceptions
Sridevi Nagarajan
AstraZeneca
Defining intelligence, whether natural or artificial, remains a realm shaped by subjective perceptions. Intelligence is often measured by the yardstick of reasoning, with mathematics serving as the benchmark for such logical processes. While deep learning artificial intelligence (AI) language models excel at pattern recognition, they currently struggle to reliably solve even eighth-grade mathematical problems. Two quotes worth considering from a human utility standpoint are George Box’s “All models are wrong, but some are useful” and Lord Kelvin’s “To measure is to know. If you cannot measure it, you cannot improve it.”

Modern AI relies on encoding vast amounts of knowledge into numerical parameters, forming a complex, opaque web of interconnections. In the domain of health research and treatment delivery, a multifaceted ecosystem combines routine tasks with intelligent decision-making, further complicated by a plethora of disparate or functionally overlapping digital tools. The advent of Generative AI (Gen AI) introduces a new dimension to this landscape.

Against this backdrop, the three articles featured in this December Global Forum Special Section shed light on operational challenges in clinical trials and propose actionable steps for navigating this evolving terrain. The common thread across these articles is the role of AI, with an emphasis on measurable value attribution and on establishing a foundation for growth: capabilities crucial for the contemporary life sciences, clinical research, and therapeutic product development industries.

Sridevi Nagarajan and Michael Meighu (CGI) serve as Co-Chairs of DIA’s AI in Healthcare Community.