Leveraging AI in Regulatory Decision-Making: FDA Issues Draft Guidance

The US Food and Drug Administration (FDA) kicked off the year with a major milestone: the release of its highly anticipated first draft guidance on the use of AI in drug and biological product development.

Titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” this guidance focuses on AI models supporting regulatory decisions about drug safety, effectiveness and quality. Central to the draft is a proposed risk-based credibility assessment framework to guide clinical trial sponsors in planning, gathering, organizing, and documenting the necessary information to demonstrate that their AI models are credible.

Here’s what you need to know.

Purpose of the Draft Guidance

FDA draft guidances are preliminary versions of official guidance documents published for public review and comment. Although not legally binding, they offer stakeholders insight into the agency’s initial thinking and provide an opportunity to shape the final guidance. For this draft, the FDA seeks input on how well it aligns with industry practices. Stakeholders have 90 days to submit their feedback.

The Growing Importance of AI in Drug Development

In recent years, AI has transformed drug development across the entire pipeline. Increasingly, companies are using AI-driven approaches to generate data that support regulatory decisions on drug safety, effectiveness and quality.

According to the FDA, the agency has received over 500 submissions with AI components since 2016, covering applications such as:

  • Predictive Modeling: AI supports pharmacokinetic and exposure-response analyses and reduces reliance on animal studies.
  • Data Integration and Analysis: AI synthesizes data from diverse sources, such as clinical trials and registries, to improve disease understanding, develop clinical endpoints, and assess outcomes.
  • Manufacturing Optimization: AI enhances quality control and streamlines manufacturing processes.

While AI’s value is evident, adoption also introduces new and unique challenges, including:

  • Data Variability: Variability can introduce bias and raise questions about the reliability of AI-driven results.
  • Model Complexity: Complexity makes it hard to understand how AI arrives at its conclusions and calls for more transparency and explainability.
  • Accuracy Uncertainty: Uncertainty about a model’s accuracy makes its output challenging to interpret, explain, or quantify.
  • Changing Model Performance: The introduction of new data can affect model performance and make ongoing maintenance a necessity.

Against this backdrop of opportunities and challenges, the FDA’s draft guidance was developed to ensure that the models used by drug developers are credible and can be trusted to do what they are meant to do.

It consists of three parts: a proposed framework to assess risk, a discussion of the importance of life cycle management of AI models, and a list of ways sponsors can engage with the FDA on issues related to AI model development.

Part A: A 7-Step Framework to Assess AI Credibility

AI models used to support regulatory decision-making need to be credible, as drug developers rely on them to generate information for evaluating the safety, effectiveness and quality of the drugs they are developing. To assess the credibility of these models, the FDA proposes a 7-step process, referred to as the “Risk-Based Credibility Assessment Framework.” The steps are listed below, followed by a short illustrative sketch:

  1. Define the Question of Interest: Identify the specific question, decision, or concern the AI model addresses.
  2. Specify the Context of Use (COU): Clearly define the model’s role, scope, and application.
  3. Assess AI Model Risk: Drawing on subject matter expertise, assess the model risk, i.e., the possibility that the AI model could lead to an incorrect decision and, with that, to an adverse outcome. The draft frames this risk as a combination of the model’s influence (how much weight its output carries in the decision) and the consequence of that decision.
  4. Develop a Credibility Plan: Outline the methods employed to establish the credibility of the AI model, e.g., a description of the model and its development process, the rationale for choosing it, and information about how the datasets were developed and the model was trained.
  5. Execute the Plan: Implement the outlined credibility activities.
  6. Document Results: Maintain a detailed report of the validation process, including deviations and outcomes.
  7. Assess Adequacy: Evaluate the AI model’s performance relative to its intended use.
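
To make these steps more tangible, here is a minimal sketch of how a sponsor might capture steps one through four as a structured record. It is purely illustrative: the field names and the risk-tier mapping are our own assumptions, not terminology or logic defined in the draft guidance.

```python
from dataclasses import dataclass, field

@dataclass
class CredibilityAssessment:
    """Hypothetical record of steps 1-4 of the proposed framework."""
    question_of_interest: str   # Step 1: the decision the AI model informs
    context_of_use: str         # Step 2: the model's role and scope
    model_influence: str        # Step 3: weight of the model's output in the decision
    decision_consequence: str   # Step 3: severity of an incorrect decision
    credibility_plan: list[str] = field(default_factory=list)  # Step 4

    def model_risk(self) -> str:
        """Combine influence and consequence into a rough risk tier.

        The draft guidance discusses risk qualitatively; this simple
        mapping is an assumption made for illustration only.
        """
        if self.model_influence == "high" and self.decision_consequence == "high":
            return "high"
        if self.model_influence == "high" or self.decision_consequence == "high":
            return "medium"
        return "low"

# Hypothetical example: an AI model that prioritizes, but does not replace,
# human review of post-market safety reports.
assessment = CredibilityAssessment(
    question_of_interest="Which safety reports need expedited review?",
    context_of_use="Triage aid; every report is still reviewed by a human.",
    model_influence="low",        # other evidence dominates the decision
    decision_consequence="high",  # a missed safety signal would be serious
    credibility_plan=[
        "describe the model and its development process",
        "document how the training data were assembled",
        "report validation results on held-out reports",
    ],
)
print(assessment.model_risk())  # -> "medium"
```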

The agency provides two hypothetical examples that illustrate steps one through three, helping the reader formulate the question of interest, define the COU, and assess a model’s risk.

Part B: Life Cycle Maintenance of AI Models

Part B of the draft guidance addresses the challenges associated with keeping the AI model fit for use over its life cycle.

AI models can be highly sensitive to variations and changes in inputs and therefore require ongoing monitoring – or life cycle maintenance – to ensure that they remain fit for use and perform consistently throughout their life cycle.

The FDA recommends that sponsors develop detailed life cycle maintenance plans – covering model performance metrics, a risk-based frequency for monitoring model performance, and triggers for model retesting – and keep these plans available for review.
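
As a hypothetical illustration of what such a plan might contain, the snippet below encodes metrics, monitoring frequency, and retesting triggers as a simple configuration. Every metric name, threshold, and trigger here is invented for illustration; the draft guidance does not prescribe a format or specific values.

```python
# Hypothetical life cycle maintenance plan for an AI model.
# All names, thresholds, and triggers are illustrative assumptions.
maintenance_plan = {
    "performance_metrics": {
        "auroc": {"baseline": 0.91, "alert_below": 0.85},
        "calibration_error": {"baseline": 0.04, "alert_above": 0.10},
    },
    # Risk-based frequency: higher-risk models would be checked more often.
    "monitoring_frequency": "monthly",
    # Events that would prompt retesting (and possibly retraining).
    "retesting_triggers": [
        "a performance metric crosses its alert threshold",
        "input data drift beyond the documented training distribution",
        "a change in the model's context of use",
    ],
}

def needs_retesting(current: dict) -> bool:
    """Compare current metric values against the plan's alert thresholds."""
    metrics = maintenance_plan["performance_metrics"]
    return (
        current["auroc"] < metrics["auroc"]["alert_below"]
        or current["calibration_error"] > metrics["calibration_error"]["alert_above"]
    )

print(needs_retesting({"auroc": 0.83, "calibration_error": 0.05}))  # -> True
```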

Part C: Engage Early with the Agency

In the final section of the draft guidance, the FDA strongly encourages sponsors to engage early through formal and informal channels. Whether through pre-IND meetings or other engagement options such as the Emerging Drug Safety Technology Program (EDSTP), the Real-World Evidence (RWE) Program, or the Complex Innovative Trial Design Meeting Program, the agency emphasizes that early dialogue can streamline development and address challenges proactively.

Conclusion

The FDA’s draft guidance on AI in regulatory decision-making represents a significant step toward integrating emerging technologies into healthcare. It aligns with global trends, such as the European Medicines Agency’s Reflection Paper on AI in the medicinal product lifecycle (published September 2024). Together, these efforts demonstrate a commitment by regulators to shape the future of AI in drug development.

For pharmaceutical and biotech companies, the guidance provides both a framework for establishing the credibility of AI-based solutions and an opportunity to influence regulatory standards through public comment.