Catherine Gregor
Chief Clinical Trial Officer, Florence Healthcare
When anyone talks about artificial intelligence in clinical research, it’s tempting for our industry to read the fine print and then default to the safest possible interpretation. While that instinct is understandable, it’s also a mistake.
Between the FDA’s recent public statements on AI use and its broader guidance on AI- and machine learning-enabled technologies, one message comes through clearly: the agency is preparing for more AI in healthcare and expects the clinical trial ecosystem to do the same.
The most concerning response I see today is not irresponsible AI use. It’s the growing impulse to prohibit AI altogether.
Over the past year, we’ve watched a wave of blanket “no-AI” clauses creep into CDAs and clinical trial agreements. Florence’s own conversations with sites and sponsors mirror what’s being discussed publicly: “No AI” language is suddenly popping up in contracts, leaving research teams unsure whether they can even use basic productivity tools without breaching terms. A recent webinar on “navigating the ‘no AI’ clause minefield” described a surge of these clauses in CDAs and CTAs, particularly from large sponsors reacting defensively to emerging regulations rather than thoughtfully to actual risk.
At the same time, actual AI adoption in clinical trial operations remains modest, especially at the site level. A recent Tufts survey of nearly 80 companies found that as of late 2024, only 11% reported fully implementing AI/ML to enable and support clinical trial activities, with another 22% only partially implementing such tools. Broader healthcare data show a similar pattern: one large analysis of more than 119,000 firms found AI use in healthcare at just 8.3% of organizations in 2025, significantly lower than in other industries.
The result is a paradox. The AI-in-clinical-trials market is growing rapidly, with billions in projected value and high double-digit growth driven by investment in recruitment, trial design, and monitoring. Yet the people closest to patients, the sites and study teams, are often contractually or culturally discouraged from using AI to reduce low-value work. The outcome isn’t safer trials; it’s slower, more manual, and more error-prone trials, which is the opposite of what patients need.
What the FDA Is Actually Saying About AI
The FDA’s position on AI is not ambiguous. It is publicly available and consistently emphasizes a risk-based approach to AI adoption.
Key resources worth reading carefully include:
- FDA Press Release: “FDA Takes Steps to Advance Responsible AI Use”
- FDA Discussion Paper: Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device
- FDA Guidance on Computer Software Assurance
- FDA Risk-Based Approach to Digital Health Technologies
Across these materials, the message is not “don’t use AI.” It is:
- Define the intended use – regulations are applied based on the intended use of an application. An AI tool used to organize or flag data poses a very different risk than one used to drive protocol or clinical decisions.
- Validate for that use – make sure the AI tool performs as expected for its specific purpose, no more, no less.
- Apply human oversight – a human-in-the-loop remains essential for oversight.
- Scale controls to risk – proportionality matters. Low-risk, administrative tools warrant lighter controls, while higher-risk applications require stronger safeguards.
This framework should feel very familiar to anyone involved in clinical trials because it mirrors how we already work: assess risk, control for safety, and operate within existing regulatory guidelines. Responsible AI adoption doesn’t require reinventing compliance; it simply requires applying the principles we already know, consistently and thoughtfully.
Time Waste Is a Patient Harm We Don’t Talk About Enough
In clinical research, inefficiency is often dismissed as an operational inconvenience. It is not.
Every unnecessary manual step extends trial timelines, increases burden, and delays access. AI, when used responsibly, addresses one of our biggest industry failures: wasted time.
Automating low-risk, high-volume administrative tasks doesn’t replace human judgment; it protects it, allowing skilled professionals to focus on patients, oversight, and decision-making. If we truly want to improve the patient experience in trials, reducing time waste is one of the most powerful levers we have.
What Clinical Research Sites Need to Do
Sites are often on the receiving end of decisions they didn’t make, but that doesn’t mean they’re powerless.
Sites should:
- Ask sponsors and CROs what tools are being used and why
- Push for technologies that reduce duplicative data entry and document chasing
- Be wary of contracts that prohibit tools clearly intended to reduce site burden
- Advocate for transparency, not prohibition
What Sponsors Need to Do
Sponsors set the tone for how innovation is adopted.
Sponsors should:
- Move away from blanket “no-AI” contractual language
- Require vendors to explain intended use, validation, and controls
- Differentiate between operational AI tools and higher-risk applications
- Align internal legal, compliance, and clinical teams early, before default bans appear in contracts
A Final Thought
AI is not the enemy of compliance. Inefficiency is.
In clinical trials, inefficiency often hides behind familiar processes like manual reconciliation, redundant reviews, endless email chains, and systems that don’t talk to one another. These practices feel “safe” because they’re familiar, but they quietly drain time, resources, and attention from the work that actually matters.
For patients waiting on new therapies, time is not an abstraction or a metric on a dashboard. It’s measured in progression of disease, in missed eligibility windows, in delayed hope. Every avoidable delay in a trial’s lifecycle has human consequences.
The FDA understands this. Its guidance on AI reflects an agency preparing for a future where technology is used to strengthen oversight, not bypass it. It is creating a regulatory environment where tools are evaluated based on their intended use and risk, and where human accountability remains non-negotiable. This is not a signal to slow down. It’s an invitation to modernize responsibly.
Progress in clinical trials has never come from standing still. If we care about patients, we have an obligation to move forward—carefully, transparently, and with purpose—but to move forward nonetheless.