The Problem: Fragmented Oversight for a Transformational Technology
The application of artificial intelligence (AI) is no longer a futuristic concept in healthcare. It is actively transforming how we discover, develop, and deliver therapies. Generative AI (GenAI), large language models, and agentic AI systems are already embedded in drug discovery, clinical decision support, and regulatory operations. These systems are dynamic, learning-based, and often autonomous, marking a significant departure from traditional drug and device development. Yet the US Food and Drug Administration (FDA) continues to operate within a regulatory model designed for a different era. This mismatch may limit regulatory transparency, slow innovation, and hamper the response to increasingly complex AI tools that do not fit cleanly into legacy categories.
Why It Matters: A New Class of Risk and Opportunity
Regulating AI in healthcare is not about tweaking the current system; it is about responding to a paradigm shift. AI tools operate across the therapeutic lifecycle, from drug discovery to real-world monitoring, and are not limited to static or predefined endpoints. The emergence of GenAI amplifies this challenge: these systems can generate new hypotheses, design molecular structures, and interact with regulators or patients autonomously. In this context, the risks stem not only from GenAI itself but also from failing to adapt regulatory expectations to a new model of science. Applying 20th-century rules to 21st-century systems risks slowing innovation, creating uncertainty for developers, and ultimately limiting patient access to life-saving technologies.
Limits of the Current FDA Approach
The FDA regulates products accounting for almost a fifth of the US economy, through large centers that span human therapeutics, food, tobacco, and veterinary medicine. This expansive remit contributes to the FDA’s fragmented approach to AI regulation. For example, within the FDA, the center that has advanced furthest on AI-related regulations and guidelines is the Center for Devices and Radiological Health (CDRH), highlighted by the launch of the Digital Health Advisory Committee (DHAC), which explicitly considers the impact of GenAI. However, recent draft guidance from the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER) on AI in drug development omits any reference to drug discovery applications or regulatory operations and does not build on CDRH’s efforts. Additionally, the FDA’s recent launch of the ELSA (Evaluation of AI in Life Sciences Applications) initiative, while promising, appears reactive and limited in scope.
A constrained and outdated definition of AI may be the root of this fragmentation. The FDA continues to view AI through narrow, precedent-based lenses, which makes it difficult to address novel AI-enabled ecosystems that have no historical precedent. The concept of “risk-based regulation” is central to the FDA’s approach, but current definitions of “risk” are not fully equipped to evaluate self-evolving, generative, or context-sensitive AI systems. The “Move 37 conundrum,” a reference to the unexpected move played by DeepMind’s AlphaGo in its 2016 match against Go champion Lee Sedol, exemplifies this: when an AI makes a decision no human has seen before, how should a regulator respond? Traditional tools offer little guidance.
War Footing Mindset: Lessons from Asymmetric Engagement
AI represents a strategic, fast-moving challenge. As with asymmetric warfare, where adversaries are unknown and tactics are unpredictable, regulators must adopt a war footing mindset. This requires new tools, new talent, and new rules of engagement. Conventional regulatory timelines and siloed expertise are not suited for responding to complex, evolving challenges (or opportunities). Creative, flexible, and anticipatory approaches will be essential.
Case for a New, Standalone Agency to Regulate AI-Enabled Ecosystems for Human Therapeutics
Rather than retrofitting the FDA, akin to modifying an internal combustion engine car to run on battery power, it is time to build a regulatory agency purpose-built for the AI era. Just as electric vehicle (EV) pioneers revolutionized vehicle design by building EVs from the ground up, we should apply the same logic to regulatory science. This new agency, the AI for Human Health Administration (AHA), would:
- Oversee entire AI-enabled ecosystems (not just individual products), including the systems, processes, and platforms through which AI-enabled human therapeutics are created, developed, approved, and launched.
- Be staffed by technologists, data scientists, systems engineers, and bioethicists, complementing clinical expertise with computational and ethical depth.
- Apply lifecycle oversight, with emphasis on post-approval monitoring, continuous validation, and adaptive regulation of the system, process, platform, or product.
- Employ regulatory sandboxes, simulation environments, and shared testing frameworks to foster safe experimentation.
This agency should be housed within the Department of Health and Human Services (HHS), ensuring alignment with national health priorities. The AHA would complement, not replace, the FDA: operating alongside it under HHS, the AHA would oversee AI-enabled ecosystems for human therapeutics while the FDA continues to regulate traditional products. The two agencies would share technical assessments, harmonize standards, and coordinate review processes to balance innovation, patient safety, and regulatory consistency.
Advantages of Acting Now
By acting now, the US can:
- Reaffirm its role as a global leader in regulating the AI-enabled ecosystem for human therapeutics.
- Create clearer pathways for innovation while reinforcing safety, accountability, and ethical standards.
- Build institutional capacity for regulating technologies that evolve too rapidly for legacy frameworks.
- Leverage momentum from White House AI initiatives, which promote regulatory sandboxes, cross-sector collaboration, and risk-based oversight.
The White House AI Policy: Enabling Momentum
The White House’s AI Action Plan outlines over 90 concrete federal actions to advance AI governance. Its emphasis on innovation-enabling regulation, public-private collaboration, and global leadership sets the stage for a new agency. Congress now has a rare opportunity to capitalize on this momentum. Former FDA Commissioners Robert Califf and Scott Gottlieb have echoed this need, calling for legislative frameworks that reflect AI’s transformative potential.
Who Should Lead and Govern?
This new agency should report to HHS and work closely with other federal agencies, such as the FDA, the National Institutes of Health (NIH), the National Science Foundation (NSF), the National Institute of Standards and Technology (NIST), and the Office of Science and Technology Policy (OSTP), while maintaining operational independence. Leadership must include AI scientists, ethicists, data scientists, medical professionals, regulators, and policy professionals, with a mandate to govern AI systems, processes, and platforms, not merely products. Its charter should include adaptive regulation, public engagement, and collaboration with global counterparts, but it must be rooted in a US-led vision for the safe, equitable, and effective use of AI in medicine.
Conclusion: A Pragmatic Path Forward
Creating this new agency, the AHA, is not about disruption for disruption’s sake; the purpose is to modernize the governance of a once-in-a-generation technology that defies legacy systems. The rapid evolution of AI in healthcare leaves no time for incremental fixes, and attempts to regulate GenAI and AI-enabled ecosystems with 20th-century structures will not succeed. Building a new, dedicated agency within HHS is a prudent, proactive step toward ensuring responsible innovation in AI-enabled human therapeutics. The risk of inaction is too high!