Who Regulates the Regulators in an AI-Assisted World?

Sahar Zahid, MD, Graduate Student, Arizona State University

As more clinical research professionals come to grips with the day-to-day, warts-and-all reality behind the glittering productivity promises of artificial intelligence (AI), one thing has become clear: AI is not just about reviewing clinical trial data and regulatory paperwork; it is also about reviewing the reviewers themselves. 

As noted by Sahar Zahid, MD, a global clinical research and regulatory science professional and graduate student at Arizona State University who will present a poster on this topic at the upcoming ACRP 2026 conference in Orlando, “The regulators have long acted as the gatekeepers of ethical oversight and public health, where they were trusted by professionals in the field to answer the toughest questions. The AI revolution and, with that, the embedding of algorithms within regulatory workflows, however, has added a unique layer of complicated questions.” 

For the U.S. Food and Drug Administration (FDA), “efficiency” has long been a priority goal, adds Zahid, who serves as Secretary of the ACRP Chicagoland Chapter. But as regulatory frameworks become increasingly dependent on inexperienced algorithms, she warns, an important question emerges: Does increased efficiency come at the cost of accountability? 

Further, “A lot of desensitization has happened against the dystopian narrative shaped by science fiction productions about AI,” Zahid says. “Today, AI has not just become a household tool, but also a regulatory assistant. With the launch of ELSA and CDRH-GPT by the FDA, the agency has delivered on its promise regarding efficient AI integration in its operations. However, the FDA has also repeatedly emphasized that these tools are not meant to replace the human reviewer—rather, they are meant to support a system that’s under immense pressure in terms of costs, safety, and efficiency by improving consistency and managing scale.” 

Yet alongside this optimism, several concerns about errors in the operations and reliability of ELSA have surfaced, Zahid notes, judging by comments from professionals who have been using it within the agency. “Such concerns highlight the familiar challenges faced by even the most sophisticated AI models, like their tendency to hallucinate, risk of bias, and model drift,” she says. “On the one hand, operational errors of this sort are definitely not unique to regulatory AI. On the other, they take on heightened significance when embedded within oversight institutions, especially considering the significant safety and public trust at stake here.” 

All of this raises an uncomfortable but important question, Zahid says; namely, “Who evaluates and reviews the process, training data, design assumptions, and performance boundaries of the AI tools the regulator uses internally?” 

Also, in the interest of ensuring the transparency of its approval processes, Zahid observes, the FDA has been publishing reviews, data summaries, regulatory rationales, and other relevant records of products it has approved for public scrutiny. “Why not ensure the same level of openness and transparency when embracing a revolutionary new model in its own operations?” she asks. 

Noting that AI is more than just a traditional device that may cause harm through a single instance of failure or malfunction, Zahid says the harm here surfaces as a pattern of subtle errors, misclassifications, or unchecked reliance that only becomes visible over time. “This may never trigger a formal device-malfunction event, yet still shape outcomes in meaningful ways,” she explains. “This is why the strategy for post-deployment monitoring of a regulatory AI assistant is at least as important as the pre-deployment disclosures. These systems require continuous monitoring, feedback loops, and clearly defined points for human judgment, not just guidelines.”

According to Zahid, the question that should be asked isn’t whether AI belongs in the regulatory arena—it’s whether existing accountability structures are keeping pace with the rapidly advancing technology. “I’m not saying we need to reject AI, but that we need to ask better questions about how we govern it,” she concludes. 

Abstracts for Zahid’s poster on “FDA Meets AI: Who’s Regulating the Regulators?” and other posters scheduled for presentation at ACRP 2026 are available on the conference website. 

Submitted by Sahar Zahid, MD, and edited by Gary Cramer