Establishing the Link Between Trial Complexity and Coordinator Capacity

Clinical Researcher—February 2020 (Volume 34, Issue 2)

PEER REVIEWED

Alexa Richie, DHSc; Dale Gamble, MHSc; Andrea Tavlarides, PhD; Kate Strok, CCRC, CCRA; Carol Griffin

 

The workforce of the clinical research enterprise continues to change, and the demand for experienced professionals at the site, sponsor, and contract research organization (CRO) levels continues to increase. At a national level, there remains a shortage of qualified professionals to fill both study coordinator and study monitor roles. This trend will continue as the appetite for clinical research at the site and sponsor levels expands at an exponential rate. At the site level, meaningful assessment of workloads and understanding the capacity of teams are necessary to enhance job satisfaction, retain key talent, maintain high performance, and reduce turnover.

In an earlier article introducing this topic,{1} the authors described their experience and process in developing a tool to assess the complexity of a clinical trial in a uniform way across any specialty and study type. Briefly, the first iteration of the tool comprised 21 unique elements, each with a possible score of 0–3 points, where 0 = least complex and 3 = most complex (see Figure 1).

Figure 1: Example of Original Complexity Tool

Complexity Tool

Study Element | No Effort (0 points) | Minimal (1 point) | Moderate (2 points) | Maximum (3 points)

Active Scoring Elements

PI expertise and experience with clinical research | N/A | Physician has been lead PI on several trials and has a clear understanding of a PI's responsibilities | Physician has been Sub-I on a study(ies) and has enrolled and followed patients on a clinical trial | Physician has minimal research experience and/or requires an increased level of engagement
Study recruitment | N/A | Development of flyers or adding to LCD screens | Community outreach | Specialized recruitment efforts will be required
Target enrollment | 0 | <20 | 20–100 | >100
Inclusion/exclusion criteria | N/A | 1–10 inclusion/exclusion criteria | 11–20 inclusion/exclusion criteria | >21 inclusion/exclusion criteria
Informed consent process (initial) | No informed consent | 1–10 pages | 11–19 pages | >20 pages
Screening procedures for eligibility (post consent) | 0 | 1–5 | 6–10 | >10
Screening visit (length) | N/A | <4 hours | 4–8 hours | Over 8 hours
Randomization/baseline cycle 1 procedures | 0 | 1–5 | 6–10 | >10
Baseline visit/randomization (length) | N/A | <4 hours | 4–8 hours | Over 8 hours
Personnel required other than the research team, feasibility of the study | N/A | Involves only the research team | Involves moderate number of different medical disciplines and staff | Involves high number of different medical disciplines and staff, requires more effort and coordination
Procedures needed after baseline/randomization to end of treatment (outside of procedure/drug) | 0 | 1–10 | 11–20 | >21

For example, the tool includes items scored on elements such as recruitment strategies, principal investigator (PI) experience, number of screening procedures, number of visits, number of departments involved, frequency of monitoring, and activities at follow-up. As an example, a study involving only one department would be assigned a score of 1 on the relevant element, while a study involving multiple departments across the hospital would score a 3. The total possible score across all items is 63 points.

Additional elements of the complexity tool relate to the overall study design, team engagement, target accrual, consenting processes, length of study, monitoring elements, billing requirements, and if there are any associated ancillary studies.
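For readers who want to see the arithmetic, the following minimal sketch (in Python, not part of the authors' tooling) shows how per-element scores could be tallied against the 63-point maximum; the element names and scores shown are hypothetical.

```python
# Illustrative only: tallying the original, unweighted Complexity Tool,
# in which each of the 21 elements is scored 0-3 (0 = least complex,
# 3 = most complex). Element names and scores here are hypothetical.

element_scores = {
    "pi_experience": 1,        # experienced lead PI = 1 point
    "study_recruitment": 2,    # community outreach = 2 points
    "target_enrollment": 2,    # 20-100 subjects = 2 points
    "inclusion_exclusion": 3,  # more than 20 criteria = 3 points
    # ...scores for the remaining elements of the 21-item tool
}

NUM_ELEMENTS = 21
MAX_TOTAL = 3 * NUM_ELEMENTS  # 63 possible points, as in the original tool

total = sum(element_scores.values())
print(f"Unweighted complexity score: {total} of {MAX_TOTAL} points")
```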

From there, the research leadership team at Mayo Clinic in Florida was able to develop a standard based upon natural breaks in the bell curve of the scores. These breaks indicated what would be considered a high-, moderate-, or low-complexity trial design for each clinical research unit.

Development of Version 2 of the Complexity Tool

Through its implementation, the research leadership team quickly identified a key area of the Complexity Tool that could be improved: all scored elements were given equal weight, even though many items had a stronger impact than others on the complexity of a study. For example, the amount of data collection and the requirements for reporting serious adverse events had a greater impact on coordinator effort than internal billing requirements or the length of a study subject's visit. Therefore, the 21 elements were reviewed and those felt to have a high impact on complexity of effort were weighted (see Figure 2).

Figure 2: Example of Weighted Complexity Tool

Scores were weighted by a multiplier ranging from 1.2 to 1.7 across all 21 items. Less complex or less time-consuming items were multiplied by 1.2 (e.g., type of study recruitment). The most complex and time-consuming items were multiplied by 1.7 (e.g., adverse event reporting).

From these weighted scores, the total possible score changed from 63 to a balanced and more relatable score of 100 points. This also allowed for a more intuitive breakdown of the high-, moderate-, and low-complexity categories across a 100-point spread.
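As a hedged illustration of the weighting step, the sketch below applies hypothetical per-element multipliers; the article specifies only that the multipliers range from 1.2 to 1.7 and that the weighted maximum works out to 100 points.

```python
# Illustrative only: applying weights to the raw 0-3 element scores.
# The specific weights below are assumptions; the published tool states
# only that multipliers range from 1.2 (least complex items) to 1.7
# (most complex items) and that the weighted maximum totals 100 points.

element_weights = {
    "study_recruitment": 1.2,        # less time-consuming item
    "adverse_event_reporting": 1.7,  # most time-consuming item
    "target_enrollment": 1.6,        # hypothetical intermediate weight
    # ...weights for the remaining elements
}

def weighted_score(element_scores):
    """Multiply each raw 0-3 score by its element weight and sum."""
    return sum(score * element_weights.get(name, 1.0)
               for name, score in element_scores.items())

# With weights averaging roughly 1.59 across all 21 items, the maximum
# possible weighted score (every element scored 3) is about 100 points.
```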

Studies that were open to enrollment, as well as new studies going forward, were assessed with the new weighted complexity score. This model has been implemented and sustained for the last two years.

Correlating Trial Complexity with Coordinator Capacity

Once a final version of the Complexity Tool was in place, the research leadership team at Mayo Clinic Florida aimed to use the complexity score as a baseline determinant of a clinical research coordinator's (CRC's) capacity. Various disease teams were reviewed; leadership chose examples of teams that appeared understaffed, adequately staffed, and overstaffed, to see if the "gut feeling" from the management team held true when the new scoring was applied.

As a small test of change, the leadership team selected three teams (disease pods) within the Cancer Clinical Research Office (CCRO), based on their initial assumption that one disease pod was understaffed (gastrointestinal [GI] cancer), one was adequately staffed (breast cancer), and a third had capacity to take on additional studies (leukemia). These three disease pods' scores were reviewed and a composite score per group was determined (see Figure 3). That score was then divided by the allocated CRC full-time equivalents (FTEs). After comparing the scores for the three sample disease pods, the leadership team identified a predictive score of 350 points as a potential target capacity score for a CRC.

Figure 3: Scores by Disease Pod within the CCRO
Metric | Breast Cancer Team* | GI Cancer Team** | Leukemia Team***
Total Team Score | 1,197 | 938 | 789
FTEs in the Team | 3 | 2 | 3
Score per FTE | 399 | 469 | 263
*Disease pod was adequately staffed for the workload

**Disease pod was slightly understaffed

***Disease pod was overstaffed or had capacity to take on additional studies
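The score-per-FTE values in Figure 3 follow from simple division; the short sketch below reproduces that arithmetic and is illustrative only.

```python
# Reproducing the Figure 3 arithmetic: each disease pod's composite
# complexity score divided by its allocated CRC FTEs.

pods = {
    "Breast Cancer": {"total_score": 1197, "crc_fte": 3},
    "GI Cancer":     {"total_score": 938,  "crc_fte": 2},
    "Leukemia":      {"total_score": 789,  "crc_fte": 3},
}

for name, pod in pods.items():
    per_fte = pod["total_score"] / pod["crc_fte"]
    print(f"{name}: {per_fte:.0f} points per CRC FTE")

# Prints 399, 469, and 263, respectively -- the comparison that led the
# team to propose roughly 350 points as an initial CRC target capacity.
```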

From there, the remaining disease pods within the CCRO were scored. The leadership team then completed a stakeholder analysis and reviewed the metrics with the CRCs, data coordinators, and PIs to gauge their understanding of the workload and what they felt an ideal workload would be. Through these discussions, the team determined that the ideal workload for a CRC within the CCRO was a score of 375–400 points. Once a target workload score per coordinator was established, research leadership further engaged the PI community on campus to review the needs and existing resources of each disease pod.

Over the last three years, clinical trial activity on the Mayo Clinic Florida campus has tripled in volume and complexity. With finite space to add new staff, assessing the capacity of the existing team, reallocating resources, and having meaningful discussions about closing non-recruiting studies have received increased attention.

Through the use of the Complexity Tool and the creation of a "CRC Standard," a maximum score per disease pod could be determined based upon its allotted FTEs. For example, if one assumes the maximum complexity score per CRC is 400 and GI cancer has two FTEs of coordinator support, the pod's maximum score would be 800. When investigators were interested in opening new trials, the current pod score was reviewed to determine if there was capacity within the team to take on another study.

With the Complexity Tool, individual study scores generally ranged from 10 to 100. If there was adequate capacity (e.g., a disease pod score of 600), the study was able to open without further review. If limited capacity was available (e.g., a disease pod score of 790), research leadership, in partnership with clinical department practice chairs, would review the disease pod's portfolio of existing and in-development studies to determine if there were underperforming studies that could be closed, or competing studies that would prohibit the proposed study. If neither was the case, the number of additional FTEs required would be reviewed.
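The capacity check described above can be summarized as a simple rule. The following sketch assumes the 400-point-per-CRC standard and is illustrative rather than a depiction of the unit's actual workflow.

```python
# Illustrative capacity check, assuming the CCRO standard of 400
# complexity points per CRC FTE described above.

MAX_SCORE_PER_CRC = 400

def can_open_study(current_pod_score, crc_fte, new_study_score):
    """Return True if the pod can absorb the proposed study within its cap."""
    pod_capacity = crc_fte * MAX_SCORE_PER_CRC  # e.g., 2 FTEs -> 800 points
    return current_pod_score + new_study_score <= pod_capacity

print(can_open_study(600, 2, 75))  # True: capacity remains, study can open
print(can_open_study(790, 2, 75))  # False: leadership would instead review
# the portfolio for underperforming or competing studies, or assess FTE needs
```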

Before posting for a new hire, research leadership would review other disease pods that had capacity to determine if coverage could be attained within the clinical research unit. The research leadership team is currently implementing a model whereby teams that are at or near capacity, but that cannot financially support an additional full FTE, will be able to share a "floater CRC" resource with other teams. As portfolios grow, the existing floater CRC would become dedicated to a specific team when the need arose.

Linking Capacity to Budgeted Effort

The next step was to determine if the weighted complexity score could serve as a predictive measure of how much coordinator effort should be budgeted for a clinical trial. Research leadership retrospectively reviewed a sample of studies within the CCRO, comparing the amount of effort the coordinator originally indicated was needed to complete study tasks against the complexity score calculated using the 100-point weighted scale.

Complexity scores for this subset of studies ranged from 25 to 81 and were categorized into three ranges: 25–45, 46–65, and 66–85. The average percentage of effort per subject (without taking into account the number of visits) was 11%, 28%, and 40%, respectively (see Figure 4). Scores above 85 points were not evaluated in the retrospective review, as no studies scored that high.

Figure 4: Predictive Measure
Complexity Score (out of 100 possible points) | Complexity Level | Average % Effort for CRC
25–45 points | Low | 11%
46–65 points | Moderate | 28%
66–85 points | High | 40%

When reviewing the amount of effort spent by the coordinator on the trials, rule sets were established based upon the complexity score. For example, in the CCRO, every study that had a complexity score greater than 55 utilized a minimum of 35% of coordinator time, with the majority of these studies being Phase I. By understanding the minimum amount of effort required for a trial, based upon the complexity score, we are now in a better position to develop more accurate study budgets and have precedent to draw upon to assist in the negotiation of per-patient amounts with trial sponsors.
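One way such rule sets could be encoded for budgeting purposes is sketched below; the thresholds come from Figure 4 and the greater-than-55 observation above, while the function itself is an assumption about how those rules might be applied.

```python
# Illustrative budgeting rule set. The effort percentages and thresholds
# come from Figure 4 and the CCRO observation that studies scoring above
# 55 used at least 35% of a coordinator's time; the function itself is a
# hypothetical encoding of those rules.

def estimated_crc_effort(complexity_score):
    """Return an estimated CRC effort percentage for a given complexity score."""
    if complexity_score < 25 or complexity_score > 85:
        raise ValueError("The retrospective review covered scores of 25-85 only")
    if complexity_score <= 45:
        effort = 11.0   # low complexity
    elif complexity_score <= 65:
        effort = 28.0   # moderate complexity
    else:
        effort = 40.0   # high complexity
    # Rule set: any study scoring above 55 used at least 35% coordinator time.
    if complexity_score > 55:
        effort = max(effort, 35.0)
    return effort

print(estimated_crc_effort(60))  # 35.0 rather than 28.0, per the >55 rule
```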

Current State

Through this review, rule sets based upon the complexity score are being established that will provide a more solid foundation for estimating CRC time. The leadership team is in the process of creating a mechanism through which feasibility can easily be determined based upon the negotiability of a proposed budget. This will also allow for proactive conversations with PIs about studies that may require financial supplementation in order to support the FTEs needed to open them, and will create a standard that could be expanded to other roles, such as data coordinators.

Reference

  1. Richie A, Gamble D, Tavlarides A, Griffin C. 2019. Trial complexity and coordinator capacity: the development of a complexity tool. Clinical Researcher 33(5):17–23. https://acrpnet.org/2019/05/14/trial-complexity-and-coordinator-capacity-the-development-of-a-complexity-tool/

Alexa Richie, DHSc, is a Research Operations Manager at Mayo Clinic Florida.

Dale Gamble, MHSc, is a Program Manager at Mayo Clinic Florida.

Andrea Tavlarides, PhD, is a Research Supervisor at Mayo Clinic Florida.

Kate Strok, CCRC, CCRA, is a Senior Research Protocol Specialist.

Carol Griffin is a Research Operations Administrator at Mayo Clinic Florida.