Consumers of popular culture in the 1960s and 1970s preferred their artificial intelligence (AI) to come with overtones of danger, if not outright world domination, judging by small- and large-screen science fiction artifacts ranging from episodes of the original Star Trek series to movies like 2001: A Space Odyssey and Colossus: The Forbin Project. All these decades later, amidst a burst of new applications of AI to tasks large and small—including in the realm of clinical research—the questions behind most people’s concerns about the technology, whether rooted in mild curiosity or serious opposition, may seem downright prosaic compared to those raised in earlier times.
“Common questions I hear are ‘What constitutes responsible oversight of AI?,’ ‘What should I be cautious of when asked to use AI?,’ ‘Is AI going to take my job away?,’ and ‘Does AI really work as advertised?’” says David Vulcano, LCSW, MBA, CIP, RAC, FACRP, Vice President, Clinical Research Compliance and Integrity, at HCA Healthcare and author of a recent ACRP White Paper on Responsible Oversight of Artificial Intelligence for Clinical Research Professionals. “These are all good questions, and they apply as much to the clinical research enterprise—with its multitude of large- and small-scale tasks to which AI can be applied—as to even more obviously tech-heavy industries.”
Indeed, other concerns about AI involve knowing when it makes sense to use AI versus automation and how the two approaches differ, says Lisa Moneymaker, Head of Strategic Customer Engagement for Medidata Solutions, who adds, “I would also like people to be thinking about whether the U.S. Food and Drug Administration allows for AI in clinical data analysis/clinical decision making, what is the quality of the data used to train the models proposed, and how are we controlling for historic representation challenges in the underlying data?”
To tackle these and other questions swirling around this timely topic, Vulcano and Moneymaker will be joined by Noelle Gaskill, MBA, ACRP-CP, Vice President and General Manager of Time Network for Tempus, and Pamela Tenaerts, MD, MBA, Chief Medical Officer for Medable, to deliver a Signature Series presentation on “AI in Clinical Research” at the ACRP 2025 conference in New Orleans, La., in April.
“We will demystify AI’s role in clinical studies, address your concerns about job security and data quality, and provide practical insights on responsible implementation in your jobs,” says Tenaerts. “Whether you’re AI-curious or AI-anxious, you’ll gain actionable knowledge to use AI responsibly in your job while maintaining a professional edge.”
Saying that they intend to offer “a scenic tour of the world of AI,” the panelists will start the journey with an introduction to the basics of what AI is and how it is being implemented in drug and device development. They will then steer their audience toward critical talking points on the responsible use of AI and how clinical research professionals can navigate this journey together as the enterprise moves to embrace its many applications.
“With careful preparation and execution, there are many ways in which those in our industry, including our volunteer study participants, can benefit from AI now and in the future,” says Vulcano. “Professionals involved with study teams at every level should be part of the process of exploring how and when those benefits can be delivered across all the different settings in which clinical trials are conducted.”
Edited by Gary Cramer