
Who Decides? Navigating AI Governance in Clinical Research


Andrea Bastek

Vice President, Market Strategy

When a clinical research site wants to implement a new AI tool, or use AI for a new use case, who actually approves the decision?

The answer should be simple. But when 17 clinical research professionals gathered recently to discuss AI governance, they shared a common frustration: no one really knows (yet). The group agreed that the process is complex, cross-departmental and evolving.

While nearly 70% of clinical research professionals are exploring or piloting AI, only 12% are consistently using it today [REF SOI]. That gap is not surprising in our risk-averse industry, but it's important to note that governance plays a key role in progressing from exploring to piloting to consistent use. Uncertainty about governance best practices will remain a challenge that limits the rate of AI adoption in clinical research.

The Cross-Functional Puzzle

Much like traditional software purchases, approval for AI implementation typically requires input from multiple departments. The group identified these typical stakeholders:

  • Functional team
  • Executive leadership
  • Legal
  • Compliance 
  • IT / Security
  • Finance
  • Data science or data governance
  • Vendor review committee
  • IRB/Ethics
  • Regulatory bodies

Unlike traditional software purchases, AI adoption comes with less clear requirements, much broader use cases, and more uncertainties, and there is no alignment on which governance questions to ask or which metrics meaningfully assess performance and validate a tool. The result: there is no one-size-fits-all process, and there is often no established governance body or decision-making authority to coordinate the review. In fact, because of the uncertainty and perceived risk, often no one WANTS the ultimate decision-making authority. And yet AI adoption is inevitable, so organizations must actively work to establish a process for AI review in spite of the complexity.

When Current Regulations Don’t Fit

Part of the governance challenge stems from a significant gap in regulatory guidance that applies to the average researcher’s intended use of AI. For computerized systems in clinical research, regulatory guidance establishes clear recommendations and provides a framework for the use of technology. In the realm of AI, however, available guidance from bodies like the FDA and the EU AI Act primarily addresses high-risk applications rather than the practical efficiency tools used in routine research activities.

Most clinical researchers are implementing AI in operational use cases, like communications, document creation and review, workflow optimization, and administrative tasks. These applications fall into a grey area under available regulatory frameworks. The FDA’s draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, applies to AI that impacts “patient safety, drug quality, or the reliability of results from a nonclinical or clinical study.” Operational efficiency uses may only indirectly, or not at all, affect these areas and are generally outside its scope. Similarly, the EU AI Act Compliance Checker typically identifies obligations related to “AI literacy” (Article 4) and “Transparency for Synthetic Content” (Article 50, Point 2), which set baseline requirements but do not provide meaningful governance direction for these operational uses.

Given this gap, organizations end up either over-regulating simple tools or struggling to find any applicable guidance at all. As one working group member noted, “It’s challenging to apply existing regulations to our operational AI use cases when the guidance just doesn’t address what we’re actually doing.”

The Trust Question: Why AI Governance Is Different

Beyond regulatory challenges, AI presents a fundamental trust issue: its decision-making processes can be variable and opaque. In clinical research, where accuracy and reliability are paramount, this creates understandable hesitation.

How do you validate a system whose logic you can’t fully observe? How do you define success metrics and actually measure accuracy, or hallucination rates? How do you test for the worst possible behavior?

The working group discussed practical approaches: maintaining audit trails of AI outputs, requiring human-in-the-loop oversight for critical decisions, and establishing clear accountability for AI-generated results. For IRB submissions involving AI, researchers need to provide documentation of their AI usage, including outputs and decision points.

Building Your Framework When None Exists

There’s no perfect framework, complete regulatory guidance or established industry standard for AI governance in clinical research today. That is a challenge but also an opportunity: organizations can build governance approaches that actually fit their needs. The working group explored different ideas and resources as a starting point:

  • Champion the need for an approval process specific to clinical trials, which might differ from that needed for clinical care
  • Use the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) as a foundation, which provides general principles applicable across organizations rather than prescriptive rules for specific use cases
  • Create a decision tree to determine which stakeholders should be involved in reviewing different types of AI implementations

Consider key questions to assess risk as part of the decision-making process including:

  • Does the AI have access to or process PHI or PII data?
  • Is this for research study use or operational use?
  • What level of human oversight is built in?
  • What are the consequences of AI errors?
  • Who is responsible for system security and validation (internal team, vendor, etc.)?
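A decision tree built from these screening questions could route each proposed AI use case to the right reviewers. The sketch below is purely hypothetical: the routing rules, thresholds, and stakeholder assignments are illustrative assumptions, and each organization would need to define its own.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Answers to the screening questions above (illustrative fields)."""
    handles_phi_or_pii: bool   # access to PHI/PII data?
    research_study_use: bool   # research study use (vs. purely operational)?
    human_in_the_loop: bool    # is human oversight built in?
    error_consequence: str     # "low", "moderate", or "high"
    vendor_managed: bool       # vendor responsible for security/validation?

def triage(uc: AIUseCase) -> list[str]:
    """Map screening answers to the stakeholder groups that likely need
    to review. Routing here is hypothetical, not an industry standard."""
    reviewers = {"Functional team"}  # the requesting team always reviews
    if uc.handles_phi_or_pii:
        reviewers |= {"Legal", "Compliance", "IT / Security"}
    if uc.research_study_use:
        reviewers |= {"IRB/Ethics", "Data science or data governance"}
    if uc.error_consequence == "high" or not uc.human_in_the_loop:
        reviewers.add("Executive leadership")
    if uc.vendor_managed:
        reviewers.add("Vendor review committee")
    return sorted(reviewers)

# Example: an operational tool that touches PII but keeps a human in the loop.
example = AIUseCase(
    handles_phi_or_pii=True,
    research_study_use=False,
    human_in_the_loop=True,
    error_consequence="low",
    vendor_managed=False,
)
print(triage(example))
```

Even a simple rule set like this makes the review path explicit, which is the point of the decision-tree suggestion above: no stakeholder is surprised, and no use case skips review by default.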

Key Takeaways

Clear governance structures will accelerate AI implementation. Research sites that establish decision trees, track KPIs, and implement change management strategies will be better positioned to leverage AI responsibly.

Build for learning, not perfection. Early AI experiments aren’t just about the technology, they’re about discovering where processes, data, quality, governance and integrations must improve to support broader usage.


This conversation was powered by The League. The League’s AI Working Group brings together clinical research professionals to explore practical challenges and solutions in AI implementation. Interested in joining the conversation? Learn more and apply here.
