CEO Roundtable on Cancer Project Data Sphere
Artificial intelligence (AI) has emerged as a promising, if not transformative, force in drug development, demonstrating significant technical capabilities across various domains, including target identification, in silico modeling, biomarker discovery, digital pathology, and clinical trial optimization. The synergy between machine learning and high-dimensional biomedical data has fueled growing optimism about AI’s potential to accelerate and enhance the therapeutic development pipeline.
In this perspective, I propose two interdependent imperatives for realizing AI’s full potential in therapeutic development.
First, the emerging “TechBio” sector and the broader AI-focused technology community should adopt rigorous clinical validation frameworks, prioritizing real-world performance and prospective clinical evidence over mere algorithmic novelty. This approach is crucial for building trust, securing regulatory acceptance, and achieving reimbursement.
Second, regulators have the opportunity to modernize their internal digital infrastructure, thereby facilitating more agile innovation pathways and scalable oversight mechanisms. This modernization will not only streamline reviews but also create new opportunities for advancing AI-enabled technologies.
As a case study in regulatory innovation, I examine the US FDA’s Information Exchange and Data Transformation (INFORMED) initiative. INFORMED’s incubator model serves as a compelling template for embedding innovation within regulatory bodies. I argue that a pragmatic paradigm shift—encompassing both workflow modernization and mindset realignment—is essential to unlock the potential of existing AI tools that remain underutilized in today’s clinical and regulatory environments.
AI in Drug Development: Unlocking New Possibilities
AI is increasingly deployed to address key drug development challenges: multiomic data integration, structural biology, pharmacokinetic prediction, and toxicity modeling. Deep learning has enabled high-dimensional representations of disease and treatment response, promising more precise therapeutic development. Applications such as digital pathology, imaging-based tumor response assessment, and trial design optimization suggest a growing maturity.
In early discovery, machine learning algorithms can identify novel targets by integrating diverse molecular data sets, predicting protein structures, and modeling compound-target interactions. These approaches have the potential to drastically reduce the time and cost of identifying promising therapeutic candidates. In clinical development, AI tools can optimize trial designs, predict patient responses, and monitor safety signals across large data sets.
Despite promising applications, several factors impede the translation of AI innovations into clinical practice and regulatory workflows. For example:
- Most AI tools are developed and benchmarked on curated data sets under idealized conditions. These controlled environments rarely reflect the operational variability, data heterogeneity, and complex outcome definitions encountered in real-world clinical trials. The gap between development and deployment contexts creates performance discrepancies that can undermine confidence in AI systems.
- AI development often occurs in isolation from the clinical and regulatory ecosystems where these tools must ultimately function. This disconnect can result in solutions that may achieve impressive technical benchmarks but fail to integrate with existing workflows or address the practical constraints of clinical decision-making and regulatory review. Consequently, a fundamental challenge is not technological capability but rather the absence of frameworks that bridge the gap between algorithmic development and clinical implementation. Addressing this challenge requires coordinated efforts from researchers, clinicians, industry sponsors, and regulatory agencies.
The Clinical Imperative: Validating AI in Real-World Contexts
Prospective Evaluation as the Missing Link
Despite the proliferation of peer-reviewed publications describing AI systems in drug development, the number of tools that have undergone prospective evaluation in clinical trials remains vanishingly small. Retrospective benchmarking on static data sets is an inadequate substitute for validation under conditions that reflect the true deployment environment: real-time decision-making, diverse patient populations, and evolving standards of care.
Prospective validation is essential for several reasons:
- First, it assesses how AI systems perform when making forward-looking predictions rather than identifying patterns in historical data, addressing potential issues of data leakage or overfitting.
- Second, it evaluates performance in the context of actual clinical workflows, revealing integration challenges that may not be apparent in controlled settings.
- Third, it measures impact on clinical decision-making and patient outcomes, providing evidence of real-world utility beyond technical performance metrics.
The field of oncology illustrates this validation gap. While numerous studies have demonstrated that AI algorithms can detect cancer with accuracy comparable to expert radiologists and pathologists in controlled evaluations, far fewer have assessed performance in routine clinical practice across diverse healthcare settings and patient populations. This lack of prospective validation creates uncertainty about how these systems will perform when deployed at scale, limiting their potential to transform diagnostic processes and improve patient outcomes.
The Critical Requirement of Randomized Controlled Trials
The need for rigorous validation through randomized controlled trials (RCTs) presents a significant hurdle for technology developers who typically excel in rapid innovation environments. Nonetheless, AI-powered healthcare solutions that promise clinical benefit must meet the same evidence standards as the therapeutic interventions they aim to enhance or replace.
This validation framework serves to protect patients, ensure efficient resource allocation, and build essential trust among stakeholders. The requirement for formal RCTs scales with the novelty of the claim: The more transformative or disruptive an AI solution purports to be for clinical practice or patient outcomes, the more comprehensive the validation studies must become to justify its integration into healthcare systems.
Analogous to the drug development process, it is essential for most AI models to undergo prospective RCTs to validate their safety and clinical benefit for patients. The FDA requires prospective trials for most therapeutic agents, and a similar standard should be applied to AI systems that impact clinical decisions or directly affect patient outcomes. Without rigorous validation, even technically advanced AI systems are unlikely to gain widespread adoption or reimbursement. Comprehensive clinical evidence is critical for regulatory approval, inclusion in clinical guidelines, and fostering trust among oncologists.
There is a perception that traditional RCTs are impractical for AI models due to factors like rapid technological evolution, integration challenges with existing workflows, and the lack of dedicated funding mechanisms.
However, this view must be challenged. Adaptive trial designs that allow for continuous model updates while preserving statistical rigor, digitized workflows for more efficient data collection and analysis, and pragmatic trial designs all represent viable approaches for evaluating AI technologies in clinical settings. The technology and investment communities should embrace these approaches if they aspire to create solutions with lasting clinical impact.
Reimbursement and Real-World Adoption
Beyond regulatory approval, which focuses on patient safety and clinical benefit, the commercial success of AI tools in drug development depends on demonstrating value to payers and healthcare systems.
Payers increasingly demand evidence of clinical utility, cost-effectiveness, and improvement over existing alternatives. Without reimbursement pathways, even technically sound and regulatory-approved AI solutions may face commercial failure.
AI developers should therefore consider incorporating validation studies that generate economic and clinical utility evidence alongside traditional efficacy and safety data. These studies should measure outcomes that show statistically significant and clinically meaningful impact on patients, such as improved patient selection efficiency, reduced adverse events, or enhanced treatment response rates.
Real-world adoption requires addressing implementation factors beyond performance metrics. User experience, workflow integration, training requirements, and interoperability with existing systems all influence whether an AI tool will be successfully incorporated into clinical practice. Developers must consider these factors from the outset, designing systems that adapt to and enhance established workflows, enabling a phased approach to modernization. This holistic approach to validation and implementation creates a foundation for both regulatory acceptance and successful clinical adoption.
The Regulatory Imperative: INFORMED as a Blueprint for Innovation
The INFORMED Initiative: An Incubator for Regulatory Science
While innovators developing AI tools need to improve their clinical evidence generation capacity, regulatory frameworks can also benefit from new capabilities to accommodate AI-enabled technologies. The Information Exchange and Data Transformation (INFORMED) initiative, which operated at the US FDA from 2015 to 2019, represented a novel approach to driving regulatory innovation. INFORMED functioned as a multidisciplinary incubator for deploying advanced analytics across regulatory functions, including pre-market review and post-market surveillance.
INFORMED was established on the premise that traditional regulatory structures were increasingly inadequate for addressing the complexity of modern biomedical data and AI-enabled innovation. Rather than attempting to modify existing frameworks incrementally, INFORMED created a dedicated space for experimentation and rapid prototyping—an organizational construct that enabled innovation to occur alongside established regulatory processes.
The initiative adopted entrepreneurial strategies commonly used in the private sector but rarely seen in regulatory agencies: rapid iteration, cross-functional collaboration, and direct engagement with external stakeholders. This approach allowed INFORMED to function as a sandbox for ideation and technical resource sharing, empowering project teams with tools needed to develop novel data science solutions.
INFORMED’s organizational model offers several lessons for regulatory innovation:
- First, it demonstrated the value of creating protected spaces for experimentation within regulatory agencies. By operating somewhat independently and horizontally across traditional organizational structures, INFORMED could pursue higher-risk, higher-reward projects without disrupting essential regulatory functions.
- Second, it highlighted the importance of multidisciplinary teams that integrate clinical, technical, and regulatory expertise. INFORMED drew together clinicians, data scientists, and regulatory experts, creating a convergence of perspectives that enabled novel approaches to longstanding challenges.
- Third, it showed how external partnerships can accelerate internal innovation. INFORMED actively engaged with academic institutions, technology companies, and industry sponsors, creating a dynamic exchange of ideas and resources that enhanced its capabilities.
- Most importantly, INFORMED demonstrated that targeted innovation initiatives can catalyze broader institutional change. While operating for a relatively short period, INFORMED initiated several projects that continued to develop after the initiative itself had concluded, illustrating how incubator models can seed longer-term transformation in regulatory processes and mindsets.
Digital IND Safety Reporting: A Case Study in Regulatory Transformation
Among INFORMED’s many innovations, the digital transformation of Investigational New Drug (IND) safety reporting stands out as a particularly instructive case study. This project addressed a critical inefficiency in the drug development process: the submission and review of safety reports for investigational products.
The existing system for reporting serious and unexpected suspected adverse reactions was predominantly paper-based, with sponsors submitting reports to the FDA and participating investigators within 7 or 15 days depending on the type of event. In 2016, FDA’s drug review divisions received approximately 50,000 reports annually, primarily as PDF files or on paper, creating significant challenges for safety signal detection and tracking.
A foundational audit revealed that only 14% of expedited safety reports submitted to the FDA were informative. The vast majority lacked clinical relevance and potentially obscured meaningful safety signals, highlighting a critical opportunity to improve both efficiency and safety through digital transformation. An April 2016 INFORMED survey of medical officers in the FDA’s Office of Hematology and Oncology Products reinforced this conclusion: reviewers spent a median of 10% of their time (average, 16%) reviewing expedited pre-market safety reports, with some spending as much as 55% of their time on this task. This substantial commitment of highly specialized expertise to largely administrative work represented a significant inefficiency in the regulatory process.

Based on these findings, INFORMED estimated that hundreds of full-time equivalent hours per month could be saved by implementing a digital safety reporting framework. This would not only increase efficiency but would also allow medical reviewers to focus their expertise on meaningful safety signals rather than on processing uninformative reports.
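The scale of the inefficiency can be illustrated with a back-of-envelope calculation using the figures cited above. The reviewer headcount and hours-per-month values below are illustrative assumptions, not FDA data; only the report volume, informative fraction, and median time share come from the audit and survey.

```python
# Back-of-envelope estimate of the reviewer time tied up in expedited
# safety reports. Reviewer headcount and working hours are hypothetical.
reports_per_year = 50_000      # approximate annual report volume (2016)
informative_fraction = 0.14    # share of reports found informative in the audit

uninformative = reports_per_year * (1 - informative_fraction)
print(f"Uninformative reports per year: {uninformative:,.0f}")

# Assume, purely for illustration, 50 medical reviewers working ~160 hours
# per month, each spending the surveyed median of 10% of their time on
# expedited report review.
reviewers = 50
hours_per_month = 160
median_time_share = 0.10

hours_spent = reviewers * hours_per_month * median_time_share
print(f"Reviewer-hours per month on expedited reports: {hours_spent:,.0f}")
```

Even under these conservative assumptions, the monthly burden lands in the hundreds of hours, consistent with INFORMED's estimate of the savings a digital framework could recover.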
INFORMED initiated a pilot project to develop a digital framework for the electronic submission of IND safety reports. This framework transformed unstructured safety data into structured formats that could be analyzed using advanced computational methods. The pilot demonstrated both technical feasibility and substantial potential benefits, showing how digitization could enable visualization, analysis, and tracking capabilities that were impossible in the paper-based system.

The digital safety reporting pilot’s journey from concept to implementation illustrates both the promise and challenges of regulatory innovation. While the technical proof-of-concept was rapidly established, full implementation required navigating complex organizational, policy, and stakeholder considerations. Eight years after the initial pilot, in April 2024, the FDA released formal guidance mandating electronic submission of structured safety data, demonstrating the long timeline often required for regulatory transformation.
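The core of the transformation is converting free-text PDF content into machine-readable records that support filtering, aggregation, and signal tracking. The sketch below illustrates the idea with hypothetical field names; it does not reproduce the FDA's actual submission schema.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class SafetyReport:
    """Illustrative structured IND safety report record.

    Field names are hypothetical, chosen only to show how narrative
    report content becomes discrete, queryable fields once digitized.
    """
    ind_number: str
    event_term: str            # e.g., a MedDRA-style preferred term
    onset_date: date
    serious: bool
    unexpected: bool
    suspected_causality: bool

    def is_expedited(self) -> bool:
        # Expedited reporting covers serious, unexpected,
        # suspected adverse reactions.
        return self.serious and self.unexpected and self.suspected_causality

report = SafetyReport(
    ind_number="123456",
    event_term="Hepatotoxicity",
    onset_date=date(2016, 4, 1),
    serious=True,
    unexpected=True,
    suspected_causality=True,
)
# Structured records can be serialized, pooled, and analyzed at scale —
# capabilities a PDF-based workflow cannot offer.
print(json.dumps(asdict(report), default=str, indent=2))
```

Once reports exist in this form, the triage that consumed reviewer time in the paper-based system (is this report expedited? informative? duplicative?) becomes a computable query rather than a manual read.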
Current Regulatory Approaches and Future Needs
The INFORMED initiative exemplifies the FDA’s willingness to innovate internally, yet broader regulatory challenges remain for AI-enabled technologies.
Despite significant progress, regulatory frameworks have not kept pace with the rapid evolution of AI in healthcare. This gap is evident in the current pathways for AI-enabled medical devices. The FDA has authorized almost 1,000 AI-enabled medical devices, yet the majority have been cleared through the 510(k) pathway, which has lower evidentiary standards than the more rigorous Premarket Approval (PMA) process.
While the 510(k) pathway has facilitated market entry for many AI technologies, it has not necessarily translated to widespread clinical adoption or reimbursement. This situation creates a paradox: regulatory clearance without clinical implementation. For AI to realize its full potential in drug development and clinical care, the field must move toward more robust validation through RCTs, which are essential for generating the evidence needed for reimbursement, inclusion in clinical guidelines, and broad acceptance by healthcare providers.
The experience with INFORMED in the context of the current landscape of AI regulation highlights a crucial insight: Regulatory innovation must balance facilitating technological advancement with ensuring appropriate evidence standards. As AI technologies become increasingly complex and autonomous, regulatory frameworks must evolve to address unique challenges while maintaining their fundamental commitment to safety and efficacy.
Toward a New Paradigm for AI in Drug Development
Integrating Validation into Development Workflows
For AI to fulfill its promise in drug development, validation must become an integral component of development workflows rather than an afterthought. This integration requires new approaches that balance innovation with evidentiary rigor, building on lessons from both the clinical imperative for validation and the regulatory opportunity for modernization.
Staged validation frameworks that align with drug development phases represent one promising approach. Early-stage validation may focus on analytical performance and face validity, while later stages address clinical utility and impact on decision-making. This parallelism allows AI development to proceed alongside therapeutic development, generating appropriate evidence at each stage.
“Digital twins” or simulation environments can enable testing of AI systems before deployment in actual clinical settings. These controlled environments can identify performance issues and integration challenges without risking patient safety or compromising trial integrity. Continuous performance monitoring should become standard practice for AI systems in drug development. Unlike traditional analytics, AI systems may drift or degrade over time as data patterns evolve. Ongoing monitoring with feedback mechanisms ensures sustained performance and enables refinement based on real-world experience.
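One minimal form of such monitoring is tracking performance over a rolling window of adjudicated cases and flagging when it falls below a predefined revalidation threshold. The sketch below illustrates this pattern; the window size and threshold are illustrative choices, and a real deployment would tie the alert to the change-management and revalidation processes discussed later.

```python
from collections import deque

class PerformanceMonitor:
    """Minimal sketch of continuous performance monitoring for a deployed
    model: track accuracy over a rolling window of adjudicated predictions
    and flag when it drops below a revalidation threshold. Parameters here
    are illustrative, not recommendations."""

    def __init__(self, window: int = 200, threshold: float = 0.85):
        self.outcomes = deque(maxlen=window)  # True = correct prediction
        self.threshold = threshold

    def record(self, prediction, truth) -> None:
        self.outcomes.append(prediction == truth)

    def accuracy(self) -> float:
        if not self.outcomes:
            return float("nan")
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self) -> bool:
        # Alert only once the window is full, to avoid noisy early readings.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy() < self.threshold)

# Toy usage: four adjudicated cases, two of them wrong.
monitor = PerformanceMonitor(window=4, threshold=0.75)
for pred, truth in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    monitor.record(pred, truth)
print(monitor.accuracy(), monitor.needs_review())
```

The same structure generalizes to other metrics (calibration, sensitivity in a protected subgroup) and to the predefined regulatory-review thresholds described in the post-market monitoring discussion below.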
Industry-wide standards for AI validation would further accelerate progress by establishing common metrics, methodologies, and reporting requirements. These standards would provide clarity for developers, streamline regulatory review, and enhance comparability across systems. Through thoughtful integration of validation into development workflows, AI technologies can generate the evidence needed for both regulatory approval and clinical adoption.
The Technology Investment Community’s Responsibility
A critical yet often overlooked aspect of advancing AI in healthcare is the role of the technology investment community in fostering rigorous validation practices.
Venture capital and private equity firms traditionally prioritize rapid development cycles, scalability, and market entry over the deliberate pace of clinical validation. However, this mindset must evolve for AI to achieve meaningful clinical impact in drug development.
Forward-thinking investors should recognize that robust clinical validation represents a strategic advantage rather than merely a regulatory hurdle. AI companies with strong evidence of clinical utility are more likely to achieve sustainable adoption, secure reimbursement, and generate long-term returns. Furthermore, as healthcare systems increasingly demand evidence of value, companies that invest in rigorous validation will have a competitive edge in the marketplace.
Investment firms should therefore recalibrate their expectations regarding development timelines and validation requirements for AI in healthcare. This might include:
- Allocating dedicated funding for clinical validation studies, including RCTs when appropriate
- Extending investment time horizons to accommodate the deliberate pace of clinical evidence generation
- Developing expertise in regulatory science and clinical evidence requirements to better evaluate AI companies
- Partnering with academic institutions and healthcare systems to facilitate validation studies
By embracing these approaches, the investment community can play a pivotal role in bridging the gap between technological promise and clinical impact in AI-enabled drug development.
Redesigning Regulatory Frameworks for Learning Systems
The experience with INFORMED and the current challenges in AI regulation highlight the need for redesigned regulatory frameworks that can accommodate learning systems.
Traditional regulatory frameworks were designed for static medical products with largely fixed characteristics. AI systems, particularly those that learn and adapt over time, challenge these frameworks by introducing the possibility of performance changes after approval. This fundamental mismatch requires reimagining regulatory approaches.
Regulatory frameworks for AI in drug development should incorporate several key elements:
- Initial qualification standards need to balance the need for evidence with the recognition that AI systems will continue to evolve. These standards should focus on core performance characteristics, validation methodologies, and risk management approaches rather than specific algorithmic details that may change over time.
- Post-market monitoring requirements should be tailored to the unique characteristics of AI systems, including performance drift, data-set shift, and evolutionary learning. These requirements might include periodic revalidation, performance reporting, and predefined thresholds for additional regulatory review.
- Change management protocols should establish clear guidelines for when modifications to AI systems require regulatory notification or review. These protocols would distinguish between minor refinements that maintain performance within established parameters and significant changes that warrant additional scrutiny.
- Regulatory sandboxes or experimental spaces, building on the INFORMED model, could provide controlled environments for testing innovative approaches before full regulatory implementation. These sandboxes would allow for collaborative learning among regulators, developers, and clinical users while maintaining appropriate safeguards.
By redesigning regulatory frameworks to accommodate the unique characteristics of AI systems, regulators can facilitate innovation while ensuring appropriate oversight. This approach recognizes that regulatory modernization is not merely about accelerating approvals, but also about creating frameworks that align with the technical realities of AI while maintaining rigorous standards for safety and efficacy.
Building Shared Digital Infrastructure
The digital safety reporting framework pioneered by INFORMED demonstrates the value of shared infrastructure for enabling AI applications in drug development.
Similar infrastructure investments are needed across the drug development ecosystem to support both clinical validation and regulatory modernization. Standardized data formats and exchange mechanisms would facilitate interoperability among systems and organizations, reducing the friction of data sharing while maintaining appropriate privacy and security. These standards should accommodate diverse data types, including clinical, genomic, imaging, and real-world data.
Federated learning approaches could enable collaborative model development without centralizing sensitive data, addressing privacy concerns while leveraging the statistical power of large, diverse data sets. These approaches are particularly valuable for rare diseases or specialized therapeutic areas where individual organizations may have limited data.
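The core mechanic is that each site trains locally and shares only model parameters, which a coordinator combines weighted by site sample counts. The sketch below shows one round of this averaging in the style of federated averaging (FedAvg); the site weights and cohort sizes are invented, and production systems layer on secure aggregation and differential-privacy protections.

```python
def federated_average(local_weights, sample_counts):
    """Combine locally trained model weight vectors, weighting each site
    by its sample count, so raw patient data never leaves the site.
    A minimal single-round sketch of FedAvg-style aggregation."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(weights[i] * n for weights, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three hypothetical sites with different cohort sizes; each contributes
# its locally fitted two-parameter model.
site_weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
site_counts = [100, 300, 600]

global_weights = federated_average(site_weights, site_counts)
print(global_weights)  # ≈ [0.5, 0.7] — larger cohorts pull the average harder
```

This weighting is what lets small rare-disease sites participate meaningfully: their data still contributes, but a single small, unrepresentative cohort cannot dominate the shared model.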
Shared evaluation frameworks and benchmark data sets would provide common ground for assessing AI performance, enabling meaningful comparisons across systems and accelerating the identification of best practices. These resources would be particularly valuable for emerging applications where evaluation methodologies are still evolving.
By building shared digital infrastructure, the biomedical community can create a foundation for both rigorous validation and efficient regulation of AI technologies. This infrastructure would not only support individual AI applications but would also enable the ecosystem-level coordination needed to realize AI’s potential in drug development.
Conclusion
The maturation of AI in drug development depends not on algorithmic innovation alone, but also on the sociotechnical systems that surround and enable its use.
- Clinical validation is essential for adoption.
- Regulatory modernization is essential for scalability and safety.
These dual imperatives are interdependent: Robust validation generates evidence that enables regulatory confidence, while modernized regulatory frameworks create pathways for validated technologies to reach patients.
Technology innovators and investors need to recognize that generating robust clinical evidence is not merely a regulatory obligation but a strategic imperative for achieving meaningful adoption and scale. Without credible evidence of clinical utility, even the most technologically sophisticated AI systems will struggle to gain traction in healthcare settings. The investment community should therefore recalibrate its expectations regarding development timelines and validation requirements, recognizing that rigorous validation enhances rather than undermines long-term returns.
The INFORMED initiative demonstrated the potential of novel organizational models to drive regulatory innovation. Its approach to the digital transformation of IND safety reporting illustrates how targeted interventions can address specific inefficiencies while laying the groundwork for broader system changes. This model suggests that incubator-like structures within regulatory agencies can accelerate innovation while maintaining alignment with core regulatory missions.
The journey from INFORMED’s digital safety pilot to formal FDA guidance eight years later highlights both the promise and challenges of regulatory innovation. While incubator models can successfully generate and validate novel approaches, sustained institutional commitment is necessary to fully implement these innovations at scale.
Similarly, the current landscape of AI regulation, with its reliance on the 510(k) pathway rather than more rigorous evidence standards, illustrates the ongoing challenge of balancing innovation facilitation with appropriate oversight.
Today’s dynamics can be an opportunity to systematize new approaches to innovation by creating permanent mechanisms for regulatory agencies to adapt to technological change while maintaining their essential oversight functions. By developing structured processes for identifying challenges, piloting solutions, and scaling successful innovations, agencies can create more responsive regulatory environments that keep pace with scientific advances.
The future of AI in therapeutics will not be determined by technical capability alone. It will be shaped by whether we are willing to build the regulatory, clinical, and institutional architecture needed to support it. INFORMED offered a glimpse of what this architecture might look like; the challenge now is to develop more systematic approaches to regulatory innovation that can build on this foundation while advancing rigorous clinical validation. By addressing these dual imperatives, we can unlock the full potential of AI to transform drug development and improve patient outcomes.