Pre-Screening Reimagined: AI, Script Engineering, and the Future of Clinical Trials

Clinical Researcher—December 2025 (Volume 39, Issue 6)

PEER REVIEWED

Milan Sheth, MS; Justin Brathwaite, MBA

 

Artificial intelligence (AI) is rapidly reshaping clinical research, with some of its most impactful applications emerging in patient pre-screening and recruitment. By analyzing electronic health records (EHRs) at scale, AI can match potential trial participants to complex eligibility criteria in a fraction of the time required for manual review. This automation alleviates one of the most labor-intensive tasks for research coordinators and investigators, reducing administrative burden while accelerating enrollment timelines.

Importantly, these efficiencies do not diminish the human role in trials; instead, they expand capacity for coordinators to focus on high-value activities such as patient communication, trust-building, and long-term retention strategies. Beyond these efficiencies, this paper introduces the concept of script engineering—the design of reusable AI prompts and eligibility templates—as a novel approach to standardizing pre-screening across sites.

By pairing automation with consistency and compliance, script engineering positions AI not only as a workflow enhancer, but also as a framework for reproducibility and equity in trial enrollment.

Challenges in Patient Pre-Screening and the Emerging Role of AI

Patient pre-screening is often described as the first step to clinical trial enrollment, but in practice it is anything but simple.{1} What seems like a straightforward eligibility check is, in reality, a demanding process shaped by administrative burdens, tight recruitment timelines, and resource constraints.{2} Investigators and site staff must sift through complex medical histories, reconcile increasingly intricate eligibility criteria, and compete with other sites for the same limited pool of patients—all while working under uncompensated conditions that strain budgets and staff capacity.{3,4}

For clinical research coordinators (CRCs) and principal investigators already balancing multiple trials and routine clinical care, manually matching patients to trials is not only a time-consuming process, but also an error-prone one.{4} AI offers a promising way forward. When applied thoughtfully, AI has the potential to streamline pre-screening, reduce administrative load, and improve both efficiency and equity in trial enrollment.

The rest of this article explores how AI can be integrated into the pre-screening process, highlighting both regulatory considerations and practical strategies for site teams.

The Current State of Pre-Screening

While systemic pressures shape the pre-screening landscape, the most acute burden often falls on CRCs. These frontline staff spend countless hours on manual chart reviews, combing through dense and often unstructured EHRs to extract datapoints and patient histories that match complex inclusion and exclusion criteria.{5} This painstaking work can take hours, severely limiting the number of candidates they can evaluate and introducing opportunities for error.{5,6}

Inevitably, this inefficiency contributes to high screen failure rates, with ineligibility remaining the most common reason patients do not advance beyond screening.{7} The impact also extends to patients themselves. Many endure extensive in-person screenings—such as blood draws, imaging, and costly diagnostic tests—only to learn they are ineligible.{8} The end result for patients is lost time, financial strain, and emotional frustration; likewise, research sites expend valuable resources on would-be participants who were never promising candidates to begin with.

AI in Action: Site-Level Benefits

At the site level, the advantages of AI are both tangible and immediate, with the potential to transform the daily workflows of clinical trial professionals. A single patient review can consume nearly two hours for a CRC, only to reveal that the individual is ruled out by a single exclusion criterion buried deep in the record. AI-powered tools—such as EHR alerts—can streamline this process by automatically flagging patients who meet key trial criteria, reducing time lost to manual chart mining. When combined with trial-matching platforms and feasibility dashboards, these tools can transform what is now an onerous task into a faster and more accurate workflow.
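To ground this in practice, the following is a minimal sketch in Python of the kind of rule-based check an EHR alert might run against structured fields such as age, diagnosis codes, and lab values. The field names and thresholds are illustrative assumptions, not drawn from any particular protocol or EHR system.

def flag_for_prescreening(patient: dict) -> tuple[bool, list[str]]:
    """Return (potentially_eligible, reasons) for one structured patient record."""
    reasons = []
    if not (18 <= patient.get("age", 0) <= 75):                      # inclusion: age range
        reasons.append("age outside 18-75")
    if not any(c.startswith("C50") for c in patient.get("icd10_codes", [])):
        reasons.append("no qualifying diagnosis code")               # inclusion: diagnosis
    if patient.get("alt_u_per_l", 0) > 3 * 40:                       # exclusion: ALT > 3x ULN (ULN ~40 U/L)
        reasons.append("ALT above 3x upper limit of normal")
    return (len(reasons) == 0, reasons)

print(flag_for_prescreening({"age": 62, "icd10_codes": ["C50.911"], "alt_u_per_l": 35}))
# (True, [])

A flag like this does not replace coordinator review; it simply surfaces candidates worth a closer look.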

Automating these processes also creates space for study coordinators to focus on what patients notice most: direct communication and support. Meaningful interactions—explaining complex procedures in plain language, addressing concerns about side effects, or simply providing reassurance—are often shortened as coordinators juggle multiple trials, documentation deadlines, and EHR navigation. AI-driven efficiencies have the potential to redirect more of that time back toward the patient.

This shift is not trivial, as patient experience is directly tied to retention. Studies consistently show that patients are more likely to remain in a study when their concerns are addressed, their questions are answered, and they feel supported by the research team.{9,10} Conversely, patients may withdraw after consent because of unanswered questions or overwhelming visit schedules that were never fully explained.

By alleviating repetitive administrative tasks, AI enables coordinators to reclaim bandwidth for these crucial conversations. In doing so, it strengthens the human connection—trust, reassurance, and partnership—that underpins both recruitment and long-term retention in clinical trials. These site-level benefits underscore the importance of standardization, a gap that script engineering seeks to address.

The Role of Script Engineering

Coordinator View—Reusable Prompts and Templates for Eligibility Queries

From the coordinator perspective, one of the most time-consuming parts of pre-screening is repeatedly searching through EHRs for the same pieces of information: prior therapies, lab thresholds, comorbidities, and diagnostic results. Script engineering—designing reusable AI prompts or templates—has the potential to transform this process.

Instead of manually re-creating search queries for each patient, standardized scripts can be applied across multiple records to quickly extract relevant data against protocol criteria. For example, a reusable prompt might automatically identify prior lines of therapy, flag abnormal liver enzymes, or summarize recent cardiac evaluations. This approach reduces variability between coordinators, minimizes overlooked details, and ensures a consistent, audit-ready process.
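As an illustration of what such a reusable script might look like, here is a minimal sketch in Python, assuming chart text has already been de-identified and the site has access to a vetted language-model service. The criteria listed and the run_llm() helper are hypothetical placeholders, not a prescribed implementation.

ELIGIBILITY_SCRIPT = """You are assisting with clinical trial pre-screening.
From the de-identified chart text below, report:
1. Prior lines of systemic therapy (regimen names and dates, if stated).
2. Most recent ALT and AST values, and whether either exceeds {uln_multiple}x the upper limit of normal.
3. Most recent cardiac evaluation (echocardiogram or ECG) and its date.
Answer "not documented" for anything the chart does not state.

Chart text:
{chart_text}"""

def build_prompt(chart_text: str, uln_multiple: int = 3) -> str:
    """Fill the standardized script with one patient's de-identified chart text."""
    return ELIGIBILITY_SCRIPT.format(chart_text=chart_text, uln_multiple=uln_multiple)

# The same script is applied to every record, so the output structure stays consistent:
# for chart in deidentified_charts:
#     summary = run_llm(build_prompt(chart))   # run_llm() stands in for whichever vetted AI service a site uses

Because every coordinator runs the same script, the extracted summaries share one structure, which is what makes the downstream review consistent and audit-ready.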

Structured checklists and templates are already known to improve efficiency; when applied through AI scripting, these methods could amplify those gains while maintaining accuracy.

Site/System View—Scaling Consistency and Compliance

At the site level, script engineering offers benefits beyond efficiency: it standardizes how eligibility is assessed across staff, trials, and even institutions. Trial sponsors and regulatory bodies often raise concerns about consistency in pre-screening practices, particularly when inclusion/exclusion criteria are complex or evolving with amendments.{1} With reusable AI scripts, sites can implement uniform logic for eligibility queries, reducing the risk of subjective interpretation or human error.

Moreover, these scripts could be paired with interactive dashboards that provide real-time feasibility tracking, ensuring that every patient evaluation is logged, reproducible, and transparent for monitoring. For leadership, this creates both operational savings and compliance assurance; for coordinators, it translates into more time for patient-facing interactions rather than repetitive chart mining. Equally important, these efficiencies lay the foundation for a more patient-centered approach to pre-screening.
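One way to make every evaluation logged, reproducible, and transparent is to record a structured entry each time a script is run. The sketch below, in Python, is a minimal illustration under that assumption; the field names, identifiers, and file format are hypothetical rather than a prescribed standard.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class EligibilityEvaluation:
    patient_ref: str        # coded, de-identified patient reference
    protocol_id: str
    script_version: str     # which reusable eligibility script produced the output
    outcome: str            # e.g., "potentially eligible" or "excluded"
    reasons: list
    reviewed_by: str        # coordinator who verified the AI output
    timestamp: str

def log_evaluation(entry: EligibilityEvaluation, path: str = "prescreen_audit.jsonl") -> None:
    """Append one evaluation to a JSON-lines audit trail for monitoring."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_evaluation(EligibilityEvaluation(
    patient_ref="SITE01-0042", protocol_id="ABC-123", script_version="v1.2",
    outcome="potentially eligible", reasons=[], reviewed_by="CRC-07",
    timestamp=datetime.now(timezone.utc).isoformat(),
))

Capturing the script version alongside each outcome is what allows monitors to reconstruct exactly which logic was applied to which patient and when.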

Patient-Centric Benefits and Ethical Considerations

AI-enabled pre-screening provides a more patient-centered approach. By analyzing large volumes of de-identified health data, AI can flag individuals who are more likely to meet eligibility requirements before they enter the clinic.{11} This precision ensures that patients invited to screening are better matched from the outset, reducing the burden of unnecessary procedures and making the research experience less discouraging.

Beyond efficiency, AI shows promise for advancing inclusivity and diversity in clinical research. Historically, certain populations have been underrepresented in trials, limiting the generalizability of results and perpetuating inequities in access to innovative therapies. By incorporating demographic data and social determinants of health, AI can assist sites in identifying and recruiting patients from a broader range of backgrounds.{12}

This not only strengthens trial design, it also ensures that new therapies are evaluated across populations that reflect the real-world patients they are intended to serve. To achieve these outcomes, however, adoption must be supported by thoughtful training and governance frameworks that build trust in AI among site staff.

Adoption and Change Management

Coordinator View—Building AI Literacy

CRCs often experience firsthand how new systems can either streamline their responsibilities or create additional complexity. To ensure that AI serves as a support rather than a burden, coordinators need to become “AI-literate.” This does not require deep technical expertise, but instead the ability to use AI tools with confidence—recognizing when to trust an AI-generated match and when to question it.

Much like their adaptation to EHRs, sponsor portals, and regulatory databases, coordinators will benefit from structured training to incorporate AI prompts, eligibility scripts, and automated dashboards into daily workflows. Without such a foundation, even the most advanced tools risk being underutilized or mistrusted.

System View—Frameworks for Safe Adoption

From both the site and sponsor perspective, adoption of AI requires more than enthusiasm—it requires guardrails. Compliance, safety, and data privacy must remain at the forefront.

Coordinators need assurance that the systems in use have been vetted for regulatory alignment and that outputs are in audit-ready condition. One of the fastest ways for a new process to fail is when staff doubt its compliance or worry about regulatory pushback. Clear frameworks provide frontline teams with the reassurance needed to trust and integrate these tools, rather than reverting to manual methods.

Shared Return on Investment: Demonstrating Value

For frontline staff, the real measure of AI lies in what it gives back: fewer failed screens, less time consumed by repetitive chart reviews, and more energy for face-to-face conversations with patients. Burnout is a pressing concern, and every hour saved on administrative work is an hour that can be redirected toward guiding patients through complex trials.

While leadership may define return on investment (ROI) in terms of faster trial start-up or cleaner feasibility metrics, coordinators often view ROI more personally—as leaving the clinic feeling they had the time to answer patient questions rather than being buried in paperwork. Striking this balance is where adoption will ultimately succeed or fail.

The Role of AI in Regulatory and Study Start-Up

Beyond patient pre-screening and site-level efficiencies, AI is also reshaping the earliest stages of trial planning, including feasibility assessments and site selection. Using predictive analytics and natural language processing (NLP), AI platforms can analyze large and diverse datasets such as EHRs, real-world evidence, and historical trial performance metrics.{13} This allows sponsors to predict recruitment rates with greater accuracy and identify sites with demonstrated strengths in patient enrollment and retention.
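As a simplified illustration of the predictive piece, the sketch below fits a basic regression of historical enrollment rates on a few site-level features. The features, the numbers, and the use of scikit-learn are assumptions made for demonstration only; real feasibility models draw on far richer and more varied data.

import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [trials previously run, median activation time (days), eligible patients seen per month]
X = np.array([
    [12,  90, 40],
    [ 3, 150, 10],
    [25,  60, 80],
    [ 8, 120, 25],
])
y = np.array([4.5, 0.8, 9.0, 2.0])   # historical enrollment rate (patients/month)

model = LinearRegression().fit(X, y)
candidate_site = np.array([[15, 75, 55]])
print(f"Predicted enrollment: {model.predict(candidate_site)[0]:.1f} patients/month")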

Unlike traditional methods that depend heavily on site questionnaires and self-reporting, AI creates data-driven profiles of research sites, offering a more comprehensive and objective view.{14} In oncology, for example, where eligibility is often tied to genetic or biomarker-driven criteria, AI has been used to pinpoint optimal trial sites and investigators in a fraction of the time required by conventional approaches.{15,16} In one case, a biopharmaceutical company leveraged AI to identify more than 20 trial sites and 45 high-value investigators for a rare genetic disease study within weeks, a process that previously took months.{16}

Insights from Industry Experience

Still, technology alone does not resolve the structural challenges that slow trial activation and patient recruitment. Clinical trials continue to grow in complexity, and research sites face mounting operational pressures, from staffing shortages to administrative bottlenecks, that AI cannot erase overnight.

In response, contract research organizations (CROs) and sponsors have increasingly shifted to a more site-centric model, investing in tools and services designed to reduce burden and improve retention among site staff.{17} AI has an important role to play here as well, not only in site identification, but also in reducing the day-to-day administrative demands that contribute to high turnover.

At the same time, over-reliance on AI for feasibility may unintentionally reinforce existing inequities. Algorithms may prioritize established, high-volume research centers while overlooking newer or smaller sites, many of which already struggle to compete for trials. To avoid this, practical applications of AI should aim to enhance site capacity rather than simply narrow site selection.

For example, AI can support recruitment directly by identifying eligible patients more efficiently, helping sites demonstrate capability and success. On the operational side, AI can streamline site outreach by maintaining accurate investigator contact databases, addressing a long-standing inefficiency that often costs CROs weeks of time.

Ultimately, the value of AI in regulatory and start-up lies not just in speeding processes for sponsors, but in strengthening the partnerships between CROs, sites, and investigators. When implemented thoughtfully, AI can reduce barriers to activation, improve communication, and enable sites of all sizes to focus on the shared goal: recruiting patients efficiently and generating high-quality data that meet regulatory standards.

AI to Accelerate Site Activation

AI is also being applied to the regulatory documentation that often slows activation. Platforms can auto-draft informed consent forms, as well as prepare submissions for institutional review boards (IRBs) or ethics committees (ECs).{18} Beyond drafting, these systems track submissions, manage version histories, and organize feedback, reducing the administrative bottlenecks that frequently delay start-up.{18}

By streamlining both patient identification and document workflows, AI enables sites and sponsors to shorten activation timelines while maintaining consistency, accuracy, and compliance.

Regulatory and Ethical Considerations

As AI becomes more deeply embedded in clinical development, regulatory and ethical oversight will be central to its responsible adoption. Data privacy is one of the foremost concerns. AI systems must comply with established frameworks such as the Health Insurance Portability and Accountability Act in the United States and the General Data Protection Regulation in the European Union, ensuring that any data used are de-identified or anonymized and supported by robust security measures, audit trails, and governance protocols.{19,20}
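In practical terms, direct identifiers should be stripped or coded before any chart data reach an AI service. The sketch below shows the basic idea in Python; the identifier list is illustrative and falls well short of a complete HIPAA Safe Harbor or GDPR anonymization implementation.

DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "address", "phone", "email", "dob"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k.lower() not in DIRECT_IDENTIFIERS}

raw = {"name": "Jane Doe", "mrn": "12345", "age": 62, "icd10_codes": ["C50.911"]}
print(deidentify(raw))   # {'age': 62, 'icd10_codes': ['C50.911']}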

Global regulators, including the U.S. Food and Drug Administration and the European Medicines Agency, are already shaping guidance for AI in clinical research. Their risk-based frameworks emphasize that oversight should scale with the level of influence an AI tool exerts on patient safety and trial outcomes.{21,22} In practice, this means that transparency, reproducibility, and thorough documentation of model development, validation, and ongoing monitoring will be non-negotiable expectations.

Ethical considerations are equally pressing. Algorithmic bias poses a real risk: if AI models are not trained on diverse and representative datasets, they may inadvertently disadvantage specific demographic groups, reinforcing existing inequities in clinical trial access.{23} Safeguards must therefore extend beyond technical validation.

Ongoing human oversight, coupled with review by IRBs and ECs, will remain essential to ensuring fairness, protecting patients, and upholding the integrity of research across every stage of a trial. Addressing these ethical and regulatory expectations is only part of the equation—practical barriers to workflow integration and trust at the site level also demand attention.

Challenges and Barriers

Coordinator View—Workflow Integration, Accuracy, and Trust

At the site level, one of the greatest challenges in adopting AI is workflow integration. Even well-intentioned tools can create friction when they fail to align with daily practices or when staff feel they are duplicating work. For AI to succeed, it must fit seamlessly into existing platforms such as EHRs, clinical trial management systems, and sponsor portals; otherwise, it risks being sidelined.

Accuracy is equally critical—if an AI tool consistently flags ineligible patients or misses eligible ones, trust quickly erodes. Coordinators already manage multiple trials and patient visits, leaving little capacity for systems that add more noise than clarity. Building trust will require consistent accuracy, transparent outputs, and opportunities for coordinators to validate and provide feedback on AI-driven recommendations.

Industry/System View—Collaboration and Alignment

At the industry level, AI adoption is not a single-site issue but a collective one. Sponsors, CROs, regulators, and research sites must align to ensure that implementation is both responsible and consistent. Without collaboration, the field risks fragmented adoption: some sites may fully embrace AI while others remain hesitant, leading to uneven standards across studies.

Regulators have begun introducing risk-based frameworks, but more clarity is still needed on how AI-driven pre-screening, feasibility, and documentation will be monitored and audited. Likewise, CROs and sponsors must work closely with sites to ensure AI is introduced not as a top-down mandate, but as a shared solution to long-standing operational bottlenecks.

Conclusion

AI should be embraced as an augmentative partner in clinical research—not a replacement for the human expertise that anchors trial success. Its greatest value lies in automating the repetitive, resource-intensive tasks that consume site capacity, from parsing EHRs to mapping patients against increasingly complex eligibility criteria.{24}

By delegating these functions to AI, coordinators and investigators can redirect their energy to higher-value work: applying clinical judgment, engaging with patients, and managing trials strategically. This redistribution of effort not only reduces administrative burden and human error, it also accelerates patient identification, enrollment, and retention.

However, these benefits will only be realized through thoughtful integration. A proactive approach is required—one that builds trust through transparency, safeguards patient privacy with robust governance, and actively addresses the risks of bias and inequity. Equally important, AI adoption must be shaped through collaboration among sites, sponsors, CROs, and regulators, ensuring tools are practical, compliant, and aligned with real-world workflows.

Among the most promising approaches is script engineering, which offers a reproducible and scalable framework for standardizing eligibility assessments while reinforcing both compliance and equity. By embedding reusable prompts and templates into pre-screening, script engineering transforms AI from a general efficiency tool into a structured method for consistency, audit-readiness, and fairness across sites.

When deployed responsibly, AI has the potential to reshape feasibility assessments, streamline site activation, and modernize regulatory processes, creating a more efficient and equitable research ecosystem. Importantly, it does not supplant coordinators or investigators—it empowers them.

By giving back time for the human-centered work of communication, empathy, and patient advocacy, AI strengthens the very relationships that keep patients enrolled and trials moving forward. In this way, AI is not just a technological upgrade, but a practical pathway to reducing long-standing inefficiencies and delivering life-changing therapies to patients more quickly and more fairly.

References

  1. Cotter S. 2022. Best practices in pre-screening includes use of technology. Advarra. https://www.advarra.com/blog/best-practices-in-pre-screening-includes-use-of-technology/
  2. McLaren M. 2025. Unlocking new efficiencies in patient recruitment and site collaboration: What are the challenges and benefits when considering digital components within study protocols, and how can sponsors leverage collaborations with sites to improve recruitment? Clinical Trials Arena. https://www.clinicaltrialsarena.com/sponsored/unlocking-new-efficiencies-in-patient-recruitment-and-site-collaboration/
  3. Penberthy LT, Dahman BA, Petkov VI, DeShazo JP. 2012. Effort required in eligibility screening for clinical trials. Journal of Oncology Practice 8(6):365–70. https://doi.org/10.1200/JOP.2012.000646
  4. Kirn DR, Grill JD, Aisen P, Ernstrom K, Gale S, Heidebrink J, Jicha G, Jimenez-Maggiora G, Johnson L, Peskind E, McCann K, Shaffer E, Sultzer D, Wang S, Sperling R, Raman R. 2023. Centralizing prescreening data collection to inform data-driven approaches to clinical trial recruitment. Alzheimer’s Research & Therapy 15(88). https://doi.org/10.1186/s13195-023-01235-4
  5. Ni Y, Bermudez M, Kennebeck S, Liddy-Hicks S, Dexheimer J. 2019. A real-time automated patient screening system for clinical trials eligibility in an emergency department: Design and evaluation. JMIR Preprints. https://preprints.jmir.org/preprint/14185
  6. Ni Y, Kennebeck S, Dexheimer JW, McAneney CM, Tang H, Lingren T, Li Q, Zhai H, Solti I. 2015. Automated clinical trial eligibility prescreening: Increasing the efficiency of patient identification for clinical trials in the emergency department. Journal of the American Medical Informatics Association 22(1):166–78. https://doi.org/10.1136/amiajnl-2014-002887
  7. Trially AI. AI vs. humans: Inside the JAMA study that proves it’s time to ditch manual prescreening. https://www.trially.ai/blog/ai-vs-humans-inside-the-journal-of-the-american-medical-association-study-that-proves-it-is-time-to-ditch-manual-screening
  8. Caston NE, Lalor F, Wall J, Sussell J, Patel S, Williams CP, Azuero A, Arend R, Liang MI, Rocque GB. 2023. Ineligible, unaware, or uninterested? Associations between underrepresented patient populations and retention in the pathway to cancer clinical trial enrollment. JCO Oncology Practice 18(11):e1853–65. https://ascopubs.org/doi/pdfdirect/10.1200/OP.22.00359
  9. Gray S. 2022. The business case for patient experience. Clinical Researcher 36(5). https://acrpnet.org/2022/10/18/the-business-case-for-patient-experience/
  10. Boyd P, Sternke EA, Tite DJ, Morgan K. 2024. “There was no opportunity to express good or bad”: Perspectives from patient focus groups on patient experience in clinical trials. Journal of Patient Experience 11. https://doi.org/10.1177/23743735241237684
  11. National Institutes of Health (NIH). 2024. NIH-developed AI algorithm matches potential volunteers to clinical trials. https://www.nih.gov/news-events/news-releases/nih-developed-ai-algorithm-matches-potential-volunteers-clinical-trials/
  12. Lu X, Yang C, Liang L, Hu G, Zhong Z, Jiang Z. 2024. Artificial intelligence for optimizing recruitment and retention in clinical trials: A scoping review. Journal of the American Medical Informatics Association 31(11):2749–59. https://doi.org/10.1093/jamia/ocae243
  13. Harrer S, Shah P, Antony B, Hu J. 2019. Artificial intelligence for clinical trial design. Trends in Pharmacological Sciences 40(8):577–91. https://doi.org/10.1016/j.tips.2019.05.005
  14. Mihic A, Viswa CA, Agrawal G, Yew H, Webster K. 2025. Unlocking peak operational performance in clinical development with artificial intelligence. McKinsey & Company. https://www.mckinsey.com/industries/life-sciences/our-insights/unlocking-peak-operational-performance-in-clinical-development-with-artificial-intelligence/
  15. Kehl KL, Mazor T, Trukhanov P, Lindsay J, Galvin MR, Farhat KS, McClure E, Giordano A, Gandhi L, Schrag D, Hassett MJ, Cerami E. 2024. Identifying oncology clinical trial candidates using artificial intelligence predictions of treatment change: A pilot implementation study. JCO Precision Oncology 8:e2300507. https://doi.org/10.1200/PO.23.00507
  16. Telisina, LLC. How Telisina used AI to identify ideal clinical trial sites. https://telisina.com/case-studies/using-ai-to-redefine-clinical-trial-site-selection/
  17. Advarra. 2025. The ROI of site-centric training and support [White paper]. https://info.advarra.com/roi-site-centric-training-support-wp.html
  18. Shi Q, Luzuriaga K, Allison JJ, Oztekin A, Faro JM, Lee JL, Hafer N, McManus M, Zai AH. 2025. Transforming informed consent generation using large language models: Mixed methods study. JMIR Formative Research 9:e68139. https://doi.org/10.2196/68139
  19. Ramirez F. 2025. HIPAA and AI: Navigating compliance in the age of artificial intelligence. HIPAA Vault. https://www.hipaavault.com/resources/hipaa-and-ai-navigating-compliance-in-the-age-of-artificial-intelligence/
  20. Ward G, Hermsen P. 2025. The intersection of GDPR & AI: Navigating data protection when adopting AI. Perforce. https://www.perforce.com/blog/pdx/gdpr-ai
  21. U.S. Food and Drug Administration (FDA). 2025. FDA proposes framework to advance credibility of AI models used for drug and biological product submissions [Press release]. https://www.fda.gov/news-events/press-announcements/fda-proposes-framework-advance-credibility-ai-models-used-drug-and-biological-product-submissions
  22. Mulryne J, Roussanov A, Sideri E. 2024. EMA adopts reflection paper on the use of artificial intelligence (AI). BioSlice Blog. https://www.biosliceblog.com/2024/10/ema-adopts-reflection-paper-on-the-use-of-artificial-intelligence-ai
  23. Cross JL, Choma MA, Onofrey JA. 2024. Bias in medical AI: Implications for clinical decision-making. PLOS Digital Health 3(11):e0000651. https://doi.org/10.1371/journal.pdig.0000651
  24. Beck JT, Rammage M, Jackson GP, Preininger AM, Dankwa-Mullan I, Roebuck MC, Torres A, Holtzen H, Coverdill SE, Williamson MP, Chau Q, Rhee K, Vinegra M. 2020. Artificial intelligence tool for optimizing eligibility screening for clinical trials in a large community cancer center. JCO Clinical Cancer Informatics 4:50–9. https://doi.org/10.1200/CCI.19.00079

Milan Sheth

Milan Sheth, MS, is a Clinical Research Coordinator specializing in oncology trials at Houston Methodist Neal Cancer Center.

Justin Brathwaite

Justin Scott Brathwaite, MBA, is a PhD student in Clinical Research at the University of Jamestown and a Senior Site Readiness and Regulatory Startup Specialist at Fortrea, a contract research organization.