Assessing the Operational Complexity of a Clinical Trial: The Experience of the National Institute of Mental Health

Clinical Researcher—March 2020 (Volume 34, Issue 3)

PEER REVIEWED

Sharon L. Smith, DNP; Galia Siegel, PhD; Ashley Kennedy, PhD

 

In recent years, the National Institutes of Health (NIH) has prioritized strengthening the stewardship of clinical trials.{1,2} The intent of these reforms is to improve the management and oversight of clinical trials research, increase transparency in the research endeavor, improve the efficiency and quality of scientific research, strengthen scientific rigor and reproducibility, and provide study outcomes to the scientific community and the public in a timely manner.

As one of the initiatives, each NIH institute and center enhanced procedures for assessing and managing the risks presented by funded clinical trials research. The National Institute of Mental Health (NIMH) identified operational complexity as a key component of clinical trial risk assessment.

The Clinical Trials Operations Branch in the Office of Clinical Research at the NIMH developed a framework for assessing the operational complexity of clinical trials based on potential operational challenges presented in the planned research. The purpose of this paper is to disseminate the initial framework for an operational assessment that emerged as the outcome of this effort. Note that this assessment occurs independently of scientific review and applies only to clinical trials that receive funding.

Operational Assessment Working Definitions

Clinical trial operations refer to the broad range of trial implementation activities involved in the execution of a clinical trial from study start-up to close-out. Prioritizing ethical conduct, participant safety, and data integrity, operations focus on the conduct of a clinical trial in accordance with a study protocol approved by an institutional review board (IRB), the tenets of Good Clinical Practice (GCP), and International Council for Harmonization guidelines.

Clinical trial operations include procedures that support participant safety, protocol compliance, data quality, efficient study completion, data sharing, and timely publication and dissemination of results.

Assessment of operational complexity refers to a process of identifying aspects of a clinical trial that may be difficult to implement according to the timeline or procedures outlined in the grant application, thereby increasing the possibility that the trial encounters challenges to successful completion. The goal of the assessment is to evaluate these operational aspects of the trial in conjunction with the study team’s resources, capacity, and plans for managing them.

The operational assessment is conducted pre-award for all clinical trials and, for a select subset of studies, continues over the life cycle of the project in order to make recommendations that support the timely and successful completion of clinical trials.

Operational Assessment Elements

The data utilized for the NIMH operational assessment include a detailed description of the study design, participant recruitment, enrollment and retention, study procedures/interventions, regulatory oversight, and data collection, coordination, and management. The operational assessment elements discussed below highlight potential operational challenges, and examples of resources and procedures that may help mitigate them are offered.

This brief discussion does not represent a comprehensive list of operationally relevant issues in clinical trials, but is meant to illustrate the approach developed by NIMH to identify issues of interest to operational functioning. A graphic tool, such as that in Table 1, may be useful when performing an operational assessment.

Table 1: Operational Assessment

Columns: Operational Element | Description of Complexity | Proposed Mitigation/Management Strategies and Recommendations
(The second and third columns are completed for each element during the assessment.)

Study Design
  - Size of trial/enrollment and retention plans
  - Eligibility criteria/participant characteristics
  - Randomization and/or blinding
  - Demands of trial participation (e.g., intervention delivery, follow-up completion)

Regulatory Oversight
  - Number and type of regulatory bodies involved (e.g., U.S. Food and Drug Administration [FDA], single or multiple IRBs, data safety monitoring board [DSMB])
  - Number of sites
  - Types of sites (e.g., foreign, tribal nations)
  - Vulnerable population oversight

Data Collection, Coordination, and/or Management
  - Data management plan, collection, tracking, storage, and quality assurance
  - Quantity, quality, and type of data collected
  - Fidelity and consistency of data collection
  - Data coordinating center factors

Other
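
To illustrate how the worksheet in Table 1 might be put into practice, the same checklist can be captured as a simple structured record that a study team completes for each element. The Python sketch below is a hypothetical illustration only, with placeholder entries; NIMH does not prescribe a particular format or tool.

```python
# Hypothetical representation of the Table 1 worksheet: one entry per
# operational element, with free-text fields for the complexity description
# and the proposed mitigation/management strategies.
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    category: str                     # e.g., "Study Design", "Regulatory Oversight"
    element: str                      # e.g., "Randomization and/or blinding"
    complexity_description: str = ""  # completed during the operational review
    mitigation_strategies: list[str] = field(default_factory=list)

assessment = [
    AssessmentItem("Study Design", "Size of trial/enrollment and retention plans"),
    AssessmentItem("Study Design", "Randomization and/or blinding"),
    AssessmentItem("Regulatory Oversight", "Number and type of regulatory bodies involved"),
    AssessmentItem("Data Collection, Coordination, and/or Management",
                   "Data management plan, collection, tracking, storage, and quality assurance"),
]

# During the review, the team documents complexity and mitigation for each element.
assessment[1].complexity_description = "Blinded outcome raters across three sites"
assessment[1].mitigation_strategies.append("Central unblinded statistician; rater recalibration plan")

print("Elements not yet assessed:",
      [item.element for item in assessment if not item.complexity_description])
```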

Study Design

Study designs vary greatly and can present challenges related to numerous aspects of the trial. The operational assessment requires a review of the key questions the study was developed to answer, the trial design, and the study procedures and interventions. The assessment considers the size of the trial, participant characteristics, the demands of trial participation and/or the demands of executing the trial intervention(s), and planned follow-up assessments, among other components of the trial procedures.

Challenges with enrollment and retention of study participants are a common occurrence in clinical research. The operational assessment considers how difficult it will be for a study team to enroll participants into the study. Are eligibility criteria broad and participants expected to be easily found in the setting where the study is taking place? Alternatively, are there extensive and specific inclusion/exclusion criteria that few potential participants will meet? Another point to consider is the target sample size. Will it be feasible to fulfill the planned enrollment targets in the proposed timeframe given the participant population?

In addition to successfully enrolling eligible individuals in a study, a trial relies on retaining enough participants through study completion to have the statistical power to answer the proposed research questions. Numerous factors contribute to study dropout and follow-up completion rates, some controlled by the study team and others not (e.g., a population that is less clinically stable than expected). Consideration of what is being asked of participants, in terms of the frequency and burden of procedures, is necessary to assess whether individuals will be willing to enroll and remain engaged for the duration of a study.
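
As a simple illustration of this feasibility arithmetic (the figures below are hypothetical placeholders, not values from any NIMH study), a team can back-calculate how many participants must be enrolled, and at what monthly rate, to end the study with the number of completers required by the power analysis:

```python
# Hypothetical feasibility arithmetic: how many participants must be enrolled,
# and at what monthly rate, to finish with enough completers?
import math

def required_enrollment(target_completers: int, expected_dropout_rate: float) -> int:
    """Inflate the completer target to account for expected attrition."""
    return math.ceil(target_completers / (1 - expected_dropout_rate))

# Assumed planning figures -- placeholders for a team's own estimates.
completers_needed = 200      # sample size from the power analysis
dropout_rate = 0.20          # 20% expected attrition before the final follow-up
accrual_period_months = 24   # enrollment window in the proposed timeline

n_to_enroll = required_enrollment(completers_needed, dropout_rate)
per_month = n_to_enroll / accrual_period_months

print(f"Enroll at least {n_to_enroll} participants "
      f"(about {per_month:.1f} per month over {accrual_period_months} months).")
```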

Another aspect of study design included in the operational assessment is randomization and masking of treatment conditions, specifically the potential threats to the randomization scheme and to maintaining the blind. Numerous factors can impact randomization, such as unbalanced stratification across treatment arms and inconsistent enrollment patterns across time and sites. An operational assessment asks whether a study has planned an ongoing schedule to review randomization balance to identify potential problems over the course of the study.
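
The review itself can be simple. The sketch below, with made-up assignment data and an illustrative imbalance threshold, tabulates randomized assignments by site and arm and flags any site whose arms have drifted out of balance beyond the chosen tolerance:

```python
# Minimal sketch of a periodic randomization-balance review.
# Assignment records and the imbalance threshold are hypothetical.
from collections import Counter
from typing import Iterable, List, Tuple

def flag_imbalanced_sites(assignments: Iterable[Tuple[str, str]],
                          max_imbalance: int = 4) -> List[Tuple[str, dict]]:
    """Return sites where the count difference between arms exceeds max_imbalance."""
    counts: dict = {}
    for site, arm in assignments:
        counts.setdefault(site, Counter())[arm] += 1
    return [(site, dict(arm_counts)) for site, arm_counts in counts.items()
            if max(arm_counts.values()) - min(arm_counts.values()) > max_imbalance]

# Made-up example: site B has drifted out of balance.
example = [("A", "drug"), ("A", "placebo")] * 10 + [("B", "drug")] * 12 + [("B", "placebo")] * 5
print(flag_imbalanced_sites(example))   # [('B', {'drug': 12, 'placebo': 5})]
```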

Some studies have straightforward blinding schemes in which only one staff member (e.g., the statistician) is unblinded to treatment condition and outcome data. Others may have more complicated masking in which some study staff are blinded to both the study condition and outcome data, while other study staff are not. The operational assessment notes whether procedures are in place to protect the blind, including training for study staff and periodic checks to confirm those procedures are working. Procedures should also include documentation identifying under what circumstances the blind should be broken, and who on the team will be unblinded if that event occurs.

The specifics of intervention delivery and follow-up completion represent another area of the operational review. Consideration needs to be given to how challenging the intervention and follow-up will be to deliver as per protocol, and what might interfere with successful implementation. This includes factors described above, such as the frequency and burden of procedures, as well as who on the study team can conduct certain procedures and the impact on scientific integrity and safety when those procedures cannot be delivered as described in the protocol.

For studies involving a pharmacological product, additional operational challenges can arise. In early-phase research, there may be constraints on where or how much of the product can be obtained. The regulatory process can also impact drug supply and expiration, which can directly affect study viability.

Studies that require higher levels of precision and specificity in their intervention design may present more operational challenges, especially in multisite studies requiring cross-site harmonization. Study teams need a plan to ensure adequate operational oversight across all study sites, such as dedicated staff or a coordinating center, for tracking protocol fidelity and data quality and harmonization over the course of the study.

Regulatory Oversight

The number of sites involved in the conduct of a study can significantly impact the regulatory demands. Consideration needs to be given to whether the study will operate under a single IRB review or whether multiple IRBs are permissible or required. Both the U.S. Department of Health and Human Services’ Revised Common Rule (45 CFR 46 in the Code of Federal Regulations){3} and the NIH’s Single IRB Policy for Multisite Research{4} include requirements for streamlining the IRB review process for multisite research.

The number of regulatory bodies (e.g., IRBs, ethics committees, Ministries of Health, data safety monitoring boards) that have oversight over the safety and conduct of the study needs to be considered and tracked. An operational assessment reviews how a study team plans to track these activities and the associated timelines to stay abreast of the regulatory review process.

Foreign sites and tribal nations, which are exempt from these policies, may have local laws and regulations that influence the regulatory context of running a study. Foreign sites may require a study to be reviewed by a Ministry of Health and/or multiple ethical bodies at a local level.

Based on the number of regulatory bodies and anticipated timing of their reviews, study teams can develop a timeline to plan the most efficient and orderly way to seek and maintain needed approvals. Factors to consider include: 1) frequency of regulatory body meetings, 2) prerequisites to initiating the IRB review process, and 3) varying documentation requirements of different oversight and governmental bodies.

For studies required to submit to the FDA or a comparable entity outside the United States, has the study team considered the time needed for back-and-forth communication and/or wait times, and built this into the study timeline? Additional regulatory protections are required for some populations (e.g., pregnant women, human fetuses, neonates, prisoners). Study staff need training and experience to address the regulatory, logistical, and clinical challenges of working with those specific populations.

An operational assessment also reviews how study teams are planning to track all the documentation and regulatory approvals for the trial. A study team might utilize a regulatory matrix to document and track the dates of reviews and approvals from relevant regulatory bodies for each version of a document.
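
A minimal sketch of such a matrix follows. The documents, bodies, and dates are hypothetical, and in practice most teams maintain this as a spreadsheet rather than code, but the structure is the same: one row per document version per regulatory body, with submission and approval dates, from which pending approvals can be flagged.

```python
# Illustrative regulatory matrix: one row per document version per regulatory body.
# Documents, bodies, and dates are hypothetical placeholders.
import csv
from datetime import date

matrix = [
    {"document": "Protocol",     "version": "2.0", "body": "Single IRB",
     "submitted": date(2019, 3, 1),  "approved": date(2019, 4, 15)},
    {"document": "Consent form", "version": "2.0", "body": "Single IRB",
     "submitted": date(2019, 3, 1),  "approved": None},   # still under review
    {"document": "Protocol",     "version": "2.0", "body": "DSMB",
     "submitted": date(2019, 4, 20), "approved": None},
]

# Flag any document version not yet approved by a listed body.
for row in matrix:
    if row["approved"] is None:
        print(f'PENDING: {row["document"]} v{row["version"]} at {row["body"]} '
              f'(submitted {row["submitted"]})')

# The same rows can be exported to the team's tracking spreadsheet.
with open("regulatory_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=matrix[0].keys())
    writer.writeheader()
    writer.writerows(matrix)
```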

Ensuring all regulatory approvals are in place at the onset of the study and at continuing review is crucial. Are procedures established to ensure all staff across the various sites are using the most updated version of study documents, and that all regulatory bodies have the same version of each study document at any given point in time? Is version control implemented to ensure synchrony in documents across all sites and regulatory bodies?

Data Collection, Coordination, and/or Management

A final aspect of the operational evaluation relates to data collection, coordination, and management. The relevant information includes how study data will be collected and stored; the quantity, quality, and type of data being collected; and, in multisite trials, the fidelity and consistency of data collection and the capacities of the data coordinating center. An assessment of challenges, together with ongoing review, is advantageous so that study teams can implement strategies to improve the quality, reproducibility, reliability, and validity of study outcome data. Operational issues may arise at any point in the process, from database design through data collection, entry, validation, and reporting.

The complexity of the data collection, coordination, and management effort is influenced by the sources, type, volume, storage, transfer, and communication of data. Related factors include the processes for protecting confidentiality of participants and study data, the training of study staff, the reliability of assessments, and the quality assurance/quality improvement processes related to the entry, monitoring, and auditing of the study data.

Most clinical research is based on a combination of data sources and/or measurements. Each source of data presents challenges to the operational complexity of the overall study. An assessment of the sources of data in a study includes careful attention to what, how, and from whom data are collected.

There are unique concerns when relying on self-reported data; data from electronic medical records housed in one or more systems; external sources such as state or vital records; paper-and-pencil versus electronic data capture (EDC) sources; and social media, mobile devices, and other digital or imaging formats. What systems does a study team have in place to assess the completeness, verifiability, reliability, and validity of each data source?

Additional operational issues to consider include the number and schedule of assessments, the challenges to collecting the assessment and outcome data, how narrow the time frame for data collection is, and the likelihood that participants will be hard to reach or become lost to follow-up.
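
As an illustration of how a narrow collection window can be monitored (a hypothetical sketch with made-up dates, not a prescribed tool), a team might routinely flag assessments completed outside the protocol-allowed window, or missed entirely:

```python
# Hypothetical visit-window check: flag assessments collected outside the
# protocol-allowed window around each scheduled visit, or missed entirely.
from datetime import date

WINDOW_DAYS = 7  # illustrative window of +/- 7 days around the target date

visits = [
    # (participant_id, visit_name, target_date, actual_date or None if missed)
    ("P001", "Week 12", date(2020, 1, 10), date(2020, 1, 12)),
    ("P002", "Week 12", date(2020, 1, 14), date(2020, 1, 30)),
    ("P003", "Week 12", date(2020, 1, 20), None),
]

for pid, visit, target, actual in visits:
    if actual is None:
        print(f"{pid} {visit}: missed visit -- possible loss to follow-up")
    elif abs((actual - target).days) > WINDOW_DAYS:
        days_out = abs((actual - target).days) - WINDOW_DAYS
        print(f"{pid} {visit}: collected {days_out} days outside the allowed window")
```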

The processes and schedule for retrieving assessment data from electronic sources, as well as the particulars of the data storage and management systems, must also be considered, as they contribute to the integrity of the data. Many software tools and programs are available for data management. There are standards for EDC in the Code of Federal Regulations for the pharmaceutical industry that are also recommended as GCP in other settings. These standards include controls for security provisions such as individual log-in, timestamps, attribution, audit trails, and system validations.

There is a significant difference in data security between a 21 CFR Part 11–compliant database (e.g., REDCap) and a noncompliant spreadsheet (e.g., Excel). Studies with datasets in formats that are not readily verifiable, reliable, and attributable may find it challenging to produce a complete dataset at the end of the study.
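
To make the difference concrete, the sketch below shows the kind of attribution and audit-trail record that a Part 11-style system maintains for every data change and that a plain spreadsheet typically does not. The field names and values are illustrative assumptions, and the code is a discussion aid rather than a validated or compliant implementation.

```python
# Illustrative audit-trail entry for a single data change, capturing the
# individual attribution, timestamp, and before/after values that Part 11-style
# controls call for. A discussion sketch only -- not a validated, compliant system.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class AuditEntry:
    record_id: str      # which participant record was touched
    field_name: str     # which data field changed
    old_value: str
    new_value: str
    changed_by: str     # individual log-in responsible for the change
    reason: str         # reason for the change, typically required for corrections
    changed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: List[AuditEntry] = []

def record_change(entry: AuditEntry) -> None:
    """Append-only log: entries are added, never edited or deleted."""
    audit_log.append(entry)

record_change(AuditEntry("P001", "phq9_total", old_value="12", new_value="14",
                         changed_by="coordinator_01",
                         reason="Transcription error corrected against source document"))
print(audit_log[0])
```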

Additionally, studies may rely on previously obtained data, data obtained from external systems, or data entered into multiple data systems. These layers add operational complexity, as the integration of these data is needed to finalize the study dataset.

If the study is conducted at multiple sites, study teams need to assess how data management and reporting are harmonized. Is there one integrated database for all study data or multiple databases? Is there a coordinating site or an identified data coordinating center (DCC)? In cases where a DCC is used, has the study team considered the budget, infrastructure, staffing, and experience needed to handle the regulatory oversight for the study?

Studies that have many sites benefit from a clear plan for data harmonization. These issues are best identified before the study starts, so that they can be addressed and minimized to assure fidelity, consistency, and compliance.

Finally, the operational assessment considers whether there is a data management plan in place prior to the start of the study. Such a plan provides guidelines for database design, data entry and tracking, quality assurance/improvement, serious adverse event identification, discrepancy management, data transfer/extraction, and database locking. This may mitigate data collection, coordination, and management issues that can arise during the conduct of the study and afterward.

Conclusion

The primary goal of conducting operational assessments of clinical trials is to think through—pre-award and throughout the duration of the study—how challenging a study’s design, regulatory requirements, and data collection and management will be to implement and maintain as per protocol. A comprehensive operational review allows NIMH staff and study teams to make more informed decisions about whether a team has the staffing, resources, and procedures in place to run a trial successfully from the outset. Thus, by reviewing factors that contribute to operational complexity during the study planning process and lifecycle of the trial, NIMH is better positioned to enhance its stewardship of the clinical trials it supports.

References

  1. Hudson KL, Lauer MS, Collins FS. 2016. Toward a new era of trust and transparency in clinical trials. JAMA 316(13):1353–4.
  2. Lauer MS, Wolinetz C. 2016. Building better clinical trials through stewardship and transparency. https://nexus.od.nih.gov/all/2016/09/16/clinical-trials-stewardship-and-transparency/
  3. Federal Policy for the Protection of Human Subjects. 45 CFR 46. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html
  4. National Institutes of Health. 2019. Single IRB policy for multi-site research. https://grants.nih.gov/policy/humansubjects/single-irb-policy-multi-site-research.htm

Sharon L. Smith, DNP, (smiths@mail.nih.gov) is a Clinical Trials Program Coordinator for the National Institute of Mental Health (NIMH), part of the National Institutes of Health (NIH), in Bethesda, Md.

Galia Siegel, PhD, is a Clinical Trials Program Coordinator with the NIMH.

Ashley Kennedy, PhD, is a Clinical Trials Program Coordinator with the NIMH.