Clinical Researcher—August 2025 (Volume 39, Issue 4)
CERTIFICATION
Rob O’Connor, MS, CCRA, ACRP-CP, FACRP, 2025 Chair of The Academy of Clinical Research Professionals
ACRP Certification has been, and remains, the most reputable credentialing program in clinical research. Since 1992, more than 43,000 professionals and their employers have come to trust ACRP Certification as the mark of excellence in clinical research. However, few know and understand the cost and effort that go into creating and maintaining the quality of our certification exams. From the volunteers who draft exam questions (items), to the exam committee members who edit and revise them, to the psychometricians at the testing agencies who help us develop the exams and validate the questions, the effort involved in creating and administering credentialed exams is immense.

With artificial intelligence (AI) now integrated into many of the processes and workflows we touch on a day-to-day basis, it is imperative that ACRP and The Academy of Clinical Research Professionals (The Academy) investigate the use and utility of incorporating AI into the exam development process. With that in mind, ACRP and The Academy are undertaking a study in collaboration with our testing agency and its AI platform partner to evaluate the effectiveness of AI-assisted item writing for the Certified Clinical Research Coordinator (CCRC®) certification exam.
The purpose of this study is to determine whether AI-generated multiple-choice questions are equivalent in quality to those written by subject matter experts (SMEs) from our bank of trained item writers, based on four key metrics:
- Accuracy (factual correctness)
- Consistency (alignment with exam domains and correct answers)
- Representativeness (coverage of exam content categories)
- Perspicuity (clarity and ease of understanding)
The study design includes the following elements:
- AI-generated items will be reviewed and edited by SMEs.
- A blinded comparison will be conducted between AI-generated and human-written items.
- Time and cost data will be collected to evaluate efficiency gains.
- The study will include approximately 100 AI-generated items, 100 human-written items, and 10 to 15 blinded reviewers.
Additionally, the study will assess the potential time and cost efficiencies of integrating AI into the item development workflow, evaluating whether AI can shorten the development timeline, streamline the development loop, and decrease the per-question cost of producing high-quality items.
The intent is not to replace the human elements of exam creation and design entirely, but to apply the available tools to some of the more tedious tasks associated with the process. The limitations of AI will be examined to determine whether the technology may be appropriate only with caveats, such as constraints on question complexity (recall vs. application vs. analysis) or a tendency to oversimplify the material being tested. In this new world of AI tools, the hope is that we can speed up the process, decrease the cost of developing exam content, and maintain or improve the quality of our exams going forward.
Once formally launched, the study is expected to be completed within six to nine months, from AI configuration to final data analysis. Results will be shared with the membership once available.
**ACRP**