Impact of a Risk-Based, Study-Specific Training Program on Research Coordinator Competency

Clinical Researcher—June 2020 (Volume 34, Issue 6)

PEER REVIEWED

Jessica Fritter, MACPR; Melissa Metheney, BSN, CCRC; Sally Jo Zuspan, RN, MSN

 

Clinical research coordinators (CRCs) are on the front lines of clinical research and play an integral role in human subjects’ protection and protocol adherence. Despite this critical role, many CRCs report inadequate training for the roles to which they were assigned.{1}

The Pediatric Emergency Care Applied Research Network (PECARN) is the only federally funded pediatric emergency research network in the United States. The network was established by the Health Resources & Services Administration, Maternal Child Health Bureau, Emergency Medical Services for Children program in 2001. It currently comprises 18 clinical centers (Hospital Emergency Department Affiliates) and nine Emergency Medical Services Agencies.{2}

There are approximately 80 CRCs across PECARN sites who contribute to PECARN research studies. Our recent work concluded that many PECARN CRCs feel less than competent to perform their jobs adequately after their institutional onboarding process.{3} Despite local institutional onboarding programs that include shadowing, web training, simulation, and online courses, most CRCs did not report feeling confident to conduct clinical research.

From this prior work, we suggested that there is a need for CRC core competency training and education in clinical research. Recent regulatory changes in Good Clinical Practice (GCP) Guidance (ICH E6(R2) from the International Council for Harmonization) recommend that a risk assessment process should be used to identify key study activities that pose a risk to patient safety, data integrity, or regulatory compliance.{4} High- and moderate-risk study activities should have a risk mitigation plan and a method for evaluation of these risks throughout the trial. We studied whether a targeted, competency-based training program focused on moderate to high risks would result in high levels of competency and performance in CRCs.

We conducted our study while PECARN implemented the Traumatic Injury Clinical Trial Evaluating Tranexamic Acid (TXA) in Children (TIC-TOC) study.{5} The TIC-TOC study is a multicenter, randomized, double-blinded, placebo-controlled trial collecting preliminary data on the safety of TXA in severely injured children and the feasibility of conducting a large definitive trial.

Our study-specific, competency-based training program combined both the Joint Task Force Competency Domains (JTFCDs) and the ICH E6(R2) risk assessment process into a training program for PECARN CRCs.{1,4} We evaluated perceived competency of CRCs in the PECARN based on the JTFCDs.

Our objective was to determine whether a targeted, competency-based training program focused on moderate- to high-risk aspects of a specific trial would result in both perception of competency among CRCs and actual performance competency on required study activities. We hypothesized that a CRC competency-based training program targeting high- and moderate-risk protocol activities would result in CRCs reporting that they felt competent to perform study activities as well as demonstrate their competency in performance of key study tasks.

Methods

We designed a risk-based, study-specific competency training program, including a study training plan and simulation activity. The study team at the PECARN Data Coordinating Center (DCC) and the TIC-TOC study lead investigators completed a risk assessment of the trial protocol, based on the ICH E6(R2) guidelines, prior to study implementation.

The study team identified risks to subject safety and data integrity. Once these were defined, they then evaluated the risks for probability of occurrence, impact to the study data or subject safety, and likelihood of detection at the DCC. We identified several moderate- to high-risk activities in this trial, including administration of study drug in a chaotic Emergency Department environment, limited time windows to complete study procedures, three study arms with mg/kg dosing, enrollment of children with or without parents present, and time-sensitive eligibility criteria.
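As a rough illustration of how such a risk evaluation can be operationalized, the sketch below scores each study activity on probability, impact, and detectability, then classifies it. The 1-to-3 scales, the multiplicative score, and the thresholds are illustrative assumptions for this sketch, not the actual tool used by the DCC.

```python
# Illustrative risk-scoring sketch. The scales and thresholds are
# assumptions for illustration, not the PECARN DCC's actual tool.

def classify_risk(probability: int, impact: int, detectability: int) -> str:
    """Classify a study activity from three 1-3 ratings.

    probability   -- likelihood the error occurs (1=low, 3=high)
    impact        -- effect on subject safety or data integrity (1=low, 3=high)
    detectability -- 1=easily detected at the DCC, 3=hard to detect
    """
    score = probability * impact * detectability  # ranges 1..27
    if score >= 12:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"

# Hypothetical ratings echoing risks named in the text
risks = {
    "study drug dosing (mg/kg, three arms)": classify_risk(2, 3, 2),  # "high"
    "time-sensitive eligibility window":     classify_risk(3, 3, 1),  # "moderate"
    "routine demographic data entry":        classify_risk(1, 1, 1),  # "low"
}
```

Activities landing in the moderate or high band would then receive a mitigation plan and targeted training, per the ICH E6(R2) approach described above.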

We then developed a training plan (see Appendix A) that included a staff training checklist incorporating competency domains and the key study risks. Due to the complex nature of the TIC-TOC study procedures, we also devised a simulation activity in which CRCs could demonstrate competency of study skills inside their own Emergency Department.

A simulation activity is a common training exercise in medical settings in which a patient scenario is created and participants must manage decision-making, treatment, and assessment. Teams may use either a verbal outline of interventions (often known as a table top activity) or fully enacted role-play using patient mannequins and real medical interventions. The choice of table top or fully enacted simulation activity was selected by each site based on its standard approach to training simulations.

Simulations or mock trauma scenarios are a common training method in Emergency Departments, and all sites practiced trauma simulations routinely. Simulations can be a useful tool in training staff in research.{6} It took approximately four hours for CRCs to complete the study checklist and between 45 and 90 minutes to complete the simulation.

We also developed two surveys to evaluate the self-perceived competence of CRCs after site initiation training (competency survey) and after a study-specific simulation scenario. We piloted the competency survey among independent clinical research staff, including project managers and data analysts, for face validity. Under 45 CFR 46.101 of the Code of Federal Regulations, the Nationwide Children’s Hospital Institutional Review Board found the study to be exempt from the need for further review.

The in-person CRC training session for the TIC-TOC trial covered the study protocol, enrollment activities, and basic research competencies such as regulatory, ethics, and human subject safety. We also included mock scenarios during the in-person training session that highlighted moderate- and high-risk key procedures that could pose a risk to subject safety or data integrity based on the risk assessment. The mock scenarios allowed CRCs to practice key procedures in a low-stress training environment as a preparation for the simulations that would be held at their respective sites.

High- or moderate-risk trial procedures are those judged to be complex, to vary from standard of care, or to require administration within a strict timeline to avoid protocol deviation. For example, the TIC-TOC study protocol required drug administration within a specific time window from the time of injury. A miscalculation in this time window might not allow enough time for randomization and drug administration.
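The time-window arithmetic behind this risk can be sketched as follows. The 3-hour window and the 30-minute randomization/preparation buffer are hypothetical values chosen for illustration; the article does not state the protocol's actual limits.

```python
from datetime import datetime, timedelta

# Hypothetical eligibility-window check. The window and buffer below are
# illustrative assumptions, not the TIC-TOC protocol's actual values.
WINDOW = timedelta(hours=3)              # assumed limit from time of injury
PREP_BUFFER = timedelta(minutes=30)      # assumed time to randomize and prepare drug

def can_randomize(injury_time: datetime, now: datetime) -> bool:
    """True if enough of the window remains to randomize and administer drug."""
    return now - injury_time <= WINDOW - PREP_BUFFER

injury = datetime(2018, 4, 1, 14, 0)
can_randomize(injury, datetime(2018, 4, 1, 16, 15))  # 2 h 15 m elapsed -> True
can_randomize(injury, datetime(2018, 4, 1, 16, 45))  # 2 h 45 m elapsed -> False
```

The point of the buffer is the one the text makes: a CRC who counts only the raw window, forgetting the time randomization and preparation consume, can create a protocol deviation.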

The study population was 20 emergency medicine CRCs from four hospitals in PECARN participating in the TIC-TOC trial. Respondents were recruited by e-mail and completed the competency survey using a REDCap survey tool after completing the study training. Once the survey tool was completed, CRCs were required to complete the study-specific simulation activity (described below) in their respective Emergency Departments.

After the simulation activity, participants completed a post-simulation survey to evaluate their perceived competence after the simulation exercise. The trainings, surveys, and simulations were administered prior to site enrollment, in April 2018. We did not use a “pre-post” survey design in this study for timing and logistical reasons. We designed this study to demonstrate perceived and actual competency, but were unable to evaluate these specific items prior to the training implementation.

Description of Survey Tools

We administered the competency survey to the CRCs at the completion of the study training. The survey collected demographic information and perceived competence in areas relevant to the TIC-TOC trial. CRCs scored their perceived competency on a Likert scale from “not at all competent” to “very competent.” The competency survey mapped study procedures to each competency domain (see Table 1). Results were analyzed to measure CRCs’ level of perceived competence after completing the risk-based, study-specific competency training session.

Table 1: Competency and Survey Pathway

Scientific Concepts and Research Design
· Explaining Phase II trial
· Identifying key data required for outcome measures

Ethical and Participant Safety Considerations
· Informed consent
· Human subject protections

Investigational Products Development and Regulation
· Study drug dosage
· Identifying adverse events/serious adverse events (AEs/SAEs)

Clinical Study Operations (GCPs)
· Inclusion/exclusion criteria
· Collection procedures
· Randomization process
· ICH E6(R2) GCP guidelines

Study and Site Management
· Site-specific workflow

Data Management and Informatics
· Electronic data capture systems

Leadership and Professionalism
· Leadership and professionalism

Communications and Teamwork
· Communicating key information to site personnel (i.e., physicians, nurses, pharmacists, etc.)


Description of Simulations

After we surveyed the CRCs, each site held a study-specific simulation, conducted either in the Emergency Department trauma resuscitation area or as a table top exercise with trauma team members. A clinical research moderator led the simulation at each Emergency Department using a script and a list of specific tasks and procedures identified by the risk assessment and the JTFCD competencies (see Appendix B).

Participants were presented with patient-specific scenarios that might occur during an enrollment. Participants were evaluated on a pass/fail basis based on their performance of key study activities using a standardized tool that assessed competence in study-specific skills.

Each participant was required to successfully complete the simulation activity with a passing score. Passing was defined as completing all five sections of the simulation activity accurately according to the protocol. Participants were allowed three attempts to successfully pass the simulation activity. The study activities included screening and eligibility, informed consent, study drug administration/randomization, baseline activities/sample collection, and follow-up and AE/SAE reporting.
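The pass/fail rule described above (all five sections completed accurately, with up to three attempts allowed) can be expressed as a short sketch. The section names come from the text; the data structures are assumed for illustration.

```python
# Sketch of the simulation scoring rule described in the text: a pass
# requires all five sections completed accurately per the protocol,
# with up to three attempts allowed. Data structures are assumed.
SECTIONS = [
    "screening and eligibility",
    "informed consent",
    "study drug administration/randomization",
    "baseline activities/sample collection",
    "follow-up and AE/SAE reporting",
]

def attempt_passes(results: dict) -> bool:
    """results maps each section name to True (accurate) or False."""
    return all(results.get(section, False) for section in SECTIONS)

def simulation_outcome(attempts: list) -> str:
    """attempts: a list of up to three per-attempt result dicts."""
    for i, results in enumerate(attempts[:3], start=1):
        if attempt_passes(results):
            return f"pass (attempt {i})"
    return "fail"
```

For example, a CRC who misses the informed consent section on the first attempt but completes all five sections on the second would be recorded as passing on attempt two.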

The data analyzed included the results from two surveys evaluating perceived competence among participating CRCs in the TIC-TOC study after completion of the staff training checklist and the simulation, and the CRCs’ results (pass/fail) from the study-specific simulation.

Results

There were 20 survey participants with varying backgrounds (see Table 2).

Table 2: Competency Survey Demographics  
Job Title n (%)
Enroller 0 (0%)
Research Assistant 5 (25%)
Research Coordinator 13 (65%)
Research Associate 0 (0%)
Research Manager 2 (10%)
Other 0 (0%)
Years of Experience in Clinical Research
<1 year 5 (25%)
1 to <2 years 5 (25%)
2 to <3 years 4 (20%)
3 years or more 6 (30%)
Clinical Research Certifications Obtained
Certified Clinical Research Associate (CCRA) 3 (15%)
Certified Clinical Research Professional (CCRP) 1 (5%)
Certified Clinical Research Coordinator (CCRC) 3 (15%)
Other 13 (65%)
Highest Level of Education
High School Diploma or GED 0 (0%)
Associate Degree 0 (0%)
Bachelor’s Degree 16 (80%)
Master’s Degree 4 (20%)
Doctoral Degree 0 (0%)

 

After the training, more than 80% of CRCs reported feeling “very competent” in informed consent, GCP, and leadership and professionalism. Most CRCs reported being “very competent” in the definition of the trial, study drug dosing, inclusion/exclusion criteria, workflow, electronic data capture, and communication. About half of the CRCs reported being “very competent” in sample processing, randomization, and defining the study outcome; most of the remainder felt “somewhat competent” in these areas, and few felt “slightly competent” or “not at all competent” (see Table 3). CRCs reported varying levels of competence in understanding and reporting safety and AE/SAE issues in the trial, with 50% feeling “very competent,” 30% “somewhat competent,” and 20% “slightly competent.”

Table 3: Competency Survey Results

Scale: 1=Not at all competent; 2=Slightly competent; 3=Somewhat competent; 4=Very competent

Scientific Concepts and Research Design 3.5 (0.46)
After reading the protocol, how competent do you feel in explaining the definition of a Phase II randomized, double-blinded, placebo-controlled trial? 3.65 (0.49)
After reading the protocol, how competent do you feel in identifying the key data elements required for the primary outcome measure of the trial: the total amount of blood products transfused in the initial 48 hours? 3.35 (0.59)
Ethical and Participant Safety Considerations 3.83 (0.28)
Regarding the site-specific informed consent document, how competent do you feel in describing all eight required elements of informed consent to prospective participants in the trial? 3.75 (0.55)
Regarding your site’s specific informed consent document, how competent do you feel in selecting an appropriate location where you will discuss informed consent with the family? 3.75 (0.55)
Regarding Protection of Human Subjects, how competent do you feel in understanding protection of human subject’s guidelines from required training? (This may include CITI training or other site-specific systems.) 4 (0)
Investigational Products Development and Regulation 3.45 (0.67)
How competent do you feel in determining the appropriate dose of study drug to give to the participant? 3.6 (0.68)
How competent do you feel in understanding how to identify and report AE/SAEs and other participant safety issues? 3.3 (0.8)
Clinical Study Operations (GCPs) 3.61 (0.43)
How competent do you feel in applying the inclusion and exclusion criteria to evaluate subject eligibility? 3.7 (0.47)
How competent do you feel about collection procedures including sample processing, sample storage, tube priority, storage, and shipping of study samples? 3.55 (0.6)
How competent do you feel in understanding the randomization process and what to do if the Use Next Box is not available? 3.35 (0.93)
Regarding Good Clinical Practice (GCP), how competent do you feel in understanding the ICH E6(R2) GCP guidelines around conducting clinical trials? 3.85 (0.37)
Study and Site Management 3.65 (0.59)
How competent do you feel with your site-specific work flow and carrying it out to complete enrollment of participants in compliance with the protocol? 3.65 (0.59)
Data Management and Informatics 3.68 (0.54)
Regarding electronic data capture (EDC) systems, how competent do you feel in utilizing OpenClinica and REDCap? 3.75 (0.44)
In regards to EDC, how competent do you feel in utilizing Query Manager? 3.6 (0.82)
Leadership and Professionalism 3.95 (0.22)
How competent do you feel in your leadership and professionalism skills? 3.95 (0.22)
Communications and Teamwork 3.75 (0.44)
In regards to Communication, how competent do you feel in communicating key information to all site personnel involved in the study (i.e., Emergency Department clinicians, nurses, pharmacists)? 3.75 (0.44)
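The “mean (SD)” entries in Table 3 summarize responses on the four-point scale above. As a sketch of how one such entry arises, the example below computes the mean and sample standard deviation from an illustrative response set (not the study’s raw data).

```python
from statistics import mean, stdev

# Illustrative computation of a Table 3 "mean (SD)" entry from the
# 4-point scale (1=not at all competent ... 4=very competent).
# The response set below is hypothetical, not the study's raw data.
responses = [4] * 15 + [3] * 5  # 15 "very competent", 5 "somewhat competent"

m, sd = mean(responses), stdev(responses)
print(f"{m:.2f} ({sd:.2f})")  # -> 3.75 (0.44)
```

With 20 respondents, 15 answering “very competent” and 5 “somewhat competent” would produce an entry of 3.75 (0.44), matching the format used throughout the table.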

 

Fourteen of the 20 CRCs completed the simulation activity and its survey, and all participants passed in fewer than three attempts. In the simulation survey, 64% of CRCs reported feeling “very competent” in screening for eligible patients, 86% in informed consent, 64% in study drug administration/randomization, 50% in baseline activities/sample collection, and 57% in follow-up and AE/SAE tracking (see Table 4).

Table 4: Simulation Survey Results

Not at All Competent Slightly Competent Somewhat Competent Very Competent
How competent did you feel you could screen for eligible patients? 0% (0) 0% (0) 36% (5) 64% (9)
How competent did you feel going through the informed consent process? 0% (0) 0% (0) 14% (2) 86% (12)
How competent did you feel with study drug administration and randomization? 0% (0) 0% (0) 36% (5) 64% (9)
How competent did you feel with sample collection? 0% (0) 0% (0) 50% (7) 50% (7)
How competent did you feel with AE/SAE tracking? 0% (0) 14% (2) 29% (4) 57% (8)

 

Discussion

We devised a risk-based, competency-focused training program combining the JTFCDs and the ICH E6(R2) risk assessment process into a study-specific program for PECARN CRCs. We combined these two approaches to address our previous finding that institutional onboarding processes did not adequately prepare PECARN CRCs to perform their jobs effectively. The risk assessment process helped identify moderate- to high-risk study procedures that could potentially impact study data or patient safety in a PECARN clinical trial. The staff training checklist helped direct the CRCs to the key risk areas prior to the training, and required them to identify potential problems in integrating the key study procedures at their own site.

We designed the training session to emphasize the moderate- and high-risk procedures using both didactic lectures and mock scenarios. We felt “hands-on” scenarios combined with the lectures would help to instill confidence in the CRCs. Eligibility determination, obtaining parental permission, study drug administration, and sample collection were all determined to have higher than standard risk and elevated complexity, and were therefore integrated into the Staff Training Checklist and the study training session.

Finally, we implemented a study-specific simulation activity at each site that required CRCs to demonstrate competency in performing the moderate- to high-risk procedures as well as the activities in the JTFCDs. Importantly, the simulation activity required CRCs to demonstrate competence by successfully performing both standard skills representing the JTFCDs as well as the key procedures identified in the risk assessment.

Our results suggest that this focused approach helped CRCs feel competent in the high- to moderate-risk areas of the trial as well as in the standard areas of research. The risk-based approach combined with the JTFCDs resulted in a highly focused training session designed to increase CRC perceived competence as well as demonstrate competence in a study simulation activity. We suggest that this sort of measure is a critical piece of determining competence in perceived and actual performance, and recommend that other research programs adopt similar approaches.

While many CRCs felt “very competent” on most of the skills, there were CRCs who indicated they felt only “somewhat competent” on key study activities. It is difficult to distinguish whether those who felt “somewhat competent” were more modest in their self-evaluation, or whether that categorization reflects a perceived shortfall in knowledge.

CRCs’ perceived competency varied across the different areas of the survey. For example, 70% or more of CRCs perceived they were competent in informed consent, the ICH E6(R2) GCP guidelines, and the study drug administration in the competency survey, but fewer CRCs selected “very competent” for AE/SAE reporting and sample collection. This disparity could have been related to the amount of training time devoted to each topic, the topic’s complexity, or the baseline knowledge of each CRC.

We acknowledge that there are areas in which our training may have fallen short, and we will address the areas with lower perceived competence in our next training. We also noticed differences in perceived competency between the two surveys. Seventy percent of CRCs indicated they were “very competent” in determining the “appropriate study dose” in the competency survey, but only 64% indicated they were very competent in “study drug administration and randomization” in the simulation survey.

While we cannot make any statistical comparison nor conclusion between these two groups, we suggest the difference may be because an individual’s perception of competence does not always match their performance of a specific task. The variation in the wording of the questions could have contributed to this difference, or the time period in which the survey was administered may have impacted these responses.

Despite these differences, we are encouraged by the fact that most participants ranked their competence in the upper two categories (“very competent” and “somewhat competent”). Each CRC successfully passed the simulation activity by demonstrating competency in key study activities. This suggests that perception of competence is not an adequate predictor of performance, or that despite demonstrating competency in performing the task, CRCs may have lingering doubt about their own individual perceived competency, and thus the scores may never match the performance.

Limitations

Competence evaluations rely on self-report of the participants and are subjective. We did not know what the levels of perceived competency were before the CRCs completed the study training. The study simulation activities were conducted as was customary in each institution’s Emergency Department and were not standardized. While we provided a study script and a standardized competency check-off form, the actual simulation activity may have varied among sites, and this could have affected results.

Another limitation is the difference between the number of participants in the competency survey (20) and the simulation survey (14); because the groups are not identical, it is difficult to compare competency across the two surveys. We also realize that the options on the competency survey and the simulation survey (“not at all,” “slightly,” “somewhat,” and “very” competent) were not defined, and this could have resulted in different interpretations by the participants.

Conclusion

CRCs successfully demonstrated key study skills and reported feeling competent in key study activities after completing a risk-based, competency-focused training program for a randomized clinical trial of severely injured children. A risk-based training program that incorporates JTFCDs may lead to better performance of study procedures in a clinical trial. It will be beneficial to follow up with the participants to see if, after having enrolled study patients, they feel as though the training helped sustain their confidence.

References

  1. Sonstein SA, Seltzer J, Li R, Jones CT, Silva H, Daemen E. 2014. Moving from compliance to competency: a harmonized core competency framework for the clinical research professional. Clinical Researcher 28(3):17–23. doi:10.14524/CR-14-00002R1.1. https://acrpnet.org/crjune2014/
  2. The Pediatric Emergency Care Applied Research Network (PECARN): rationale, development, and first steps. Acad Emerg Med 2003;10(6):661–8. https://pubmed.ncbi.nlm.nih.gov/12782530/
  3. Saunders J, Pimenta K, Zuspan S, Berent R, Herzog N, Jones C, Kline J, Kocher K, Robinson V, Thomas B, Stanley RM. 2017. Inclusion of the Joint Task Force Competency Domains in onboarding programs for clinical research coordinators: variations, enablers, and barriers. Clinical Researcher 31(6). https://acrpnet.org/2017/12/12/inclusion-joint-task-force-competency-domains-onboarding-crcs/
  4. E6(R2) Good Clinical Practice: Integrated Addendum to ICH E6(R1)—Guidance for Industry. 2018. https://www.fda.gov/media/93884/download
  5. Nishijima DK, VanBuren J, Hewes HA, Myers SR, Stanley RM, Adelson PD, et al. 2018. Traumatic injury clinical trial evaluating tranexamic acid in children (TIC-TOC): study protocol for a pilot randomized controlled trial. Trials 19(1):593.
  6. Fisher KE, Martin M, McClain A, Stanley R, Saunders J, Lo C, et al. Nurse-driven simulations to prepare and educate for a clinical trial. Clin Simul Nurs 28:35–8.

Acknowledgements

Research reported in this publication was supported by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health under Award Number R34HL135214. PECARN is supported by the Health Resources and Services Administration (HRSA) of the U.S. Department of Health and Human Services (HHS), in the Maternal and Child Health Bureau (MCHB), under the Emergency Medical Services for Children (EMSC) program through the following cooperative agreements: DCC-University of Utah, GLEMSCRN-Nationwide Children’s Hospital, HOMERUN-Cincinnati Children’s Hospital Medical Center, PEMNEWS-Columbia University Medical Center, PRIME-University of California at Davis Medical Center, CHaMP node-State University of New York at Buffalo, WPEMR-Seattle Children’s Hospital, and SPARC-Rhode Island Hospital/Hasbro Children’s Hospital. This information or content and conclusions are those of the authors and should not be construed as the official position or policy of, nor should any endorsements be inferred by HRSA, HHS, or the U.S. Government.

Additional support was provided by the Resource-Related Cooperative Agreement, National Center for Advancing Translational Science, 1U24TR001597 Utah Trial Innovation Center.

Jessica Fritter, MACPR, is a Clinical Research Administration Manager with Clinical Research Services at Nationwide Children’s Hospital, an Instructor for the Master of Clinical Research program at The Ohio State University, and lead author for this article.

Melissa Metheney, BSN, CCRC, is Program Director with the Data Coordinating Center at the University of Utah.

Sally Jo Zuspan, RN, MSN, is Director of Research with the Data Coordinating Center at the University of Utah and senior author for this article.