Clinical Researcher—December 2019 (Volume 33, Issue 10)
DATA-TECH CONNECT
Steve Young
The clinical research industry has gone through an important transformation over the past 15 to 20 years, from a largely paper-based paradigm to one in which electronic systems are regularly—and increasingly—leveraged for all aspects of clinical trial planning, execution, and management. Internet-based electronic data capture (EDC) systems have replaced paper case report forms, manual double data entry processes, and faxing of paper queries.
Further, the use of direct source data capture methods is steadily increasing and replacing the EDC-based data transcription paradigm. This includes investigator-led and patient self-assessments entered directly on laptops or tablet devices, patient diary information captured on hand-held mobile devices (iPhones, etc.), and wearable sensors that automatically record and transmit various health-related measurements (glucose levels, heart rate, etc.).
The benefits of these eClinical technologies are significant, including more efficient and reliable capture of a broader array of patient data than was previously possible. They also give the various stakeholders much quicker access to data for trial monitoring and oversight activities. Centralized risk monitoring in particular benefits from more timely access to study data, supporting proactive detection and mitigation of emerging issues in study conduct, including issues that may affect patient safety and/or the reliability of trial results.
Centralized statistical monitoring of clinical patient data (also presented in Section 5.18.3 of ICH E6(R2) from the International Council for Harmonization), along with key risk indicators (KRIs) and quality tolerance limits (QTLs), has proven to be very effective at identifying study conduct-related issues that traditional site monitoring and data management review methods fail to find. Issues discovered range from malfunctioning measurement devices to site training gaps and sloppiness, and even to intentional misconduct, including fabrication of patient data. These issues, often detected initially as statistically unusual patterns or trends in the data, reflect misapplied operational processes that may result in the generation of incorrect or otherwise unreliable clinical data.
Following the Evidence
While statistical monitoring of patient data is very effective, audit trail information and other operational data (e.g., EDC query data, protocol deviations, site issue logs) from these eClinical systems can also be extremely powerful in exposing study conduct issues that may be affecting data reliability and integrity. Much of the data collected from these eClinical devices is critical study data, supporting key efficacy and safety evaluations, and thus the appropriate use and functioning of the devices are critically important to the operational success of the study.
Audit trail data in particular offers greater insight into the who, how, and when of clinical data generation and processing. When assessed effectively via KRIs, QTL parameters, or other statistical monitoring, these data can alert us to unexpected patterns of usage and behavior that may represent a problem. The following examples represent just a handful of the KRIs that study teams have implemented to leverage audit trail data in identifying risks and issues:
- Visit-to-eCRF Entry Cycle Time—One of the most common “standard” KRIs leveraged across studies, this monitors how promptly sites transcribe relevant patient data into the study’s EDC system. Long delays may have significant negative implications for the reliability of the EDC data, as well as for the study team’s ability to proactively monitor risks at the site.
- Mean Duration of Assessments—The increasing use of tablet devices for direct entry of patient assessments at the site (i.e., electronic clinical outcomes assessment [eCOA]) enables the use of audit trail time stamps to assess behaviors such as the average time taken to complete each assessment. An investigator/assessor or patient who takes unusually long to complete required assessments, or who completes them implausibly quickly, may signal improper administration of the assessment or even problems with the eCOA technology. Such a KRI was recently used in a dermatology study to help uncover significant misconduct at one of the investigative sites: the average duration of a key patient efficacy assessment was around three minutes at that site, while the average across all other sites in the study was closer to 15 minutes. The very short average duration at this site was judged to be clinically unreasonable and highly suspect.
- Assessment Time-of-Day—eCOA and electronic patient-reported outcomes (ePRO) audit time stamps can also be used to reveal unusual and suspicious patterns in time-of-day usage by patients or sites. One real example involves a case of ePRO fraud, in which a site failed to provision the required ePRO devices to its 15 patients. To cover up the mistake, the site coordinator fabricated daily diary entries for each of the patients. The misconduct was first detected by a centralized statistical monitoring test, which discovered that more than 70% of this site’s daily diary entries were being logged in the 6 p.m. hour locally, while the time-of-day distribution of diary entries was much broader at all other sites in the study.
- Rater Change Rate—The reliability of many patient assessments depends to some extent on having a single, consistent person or “rater” conduct the assessment across patient visits for each patient. A site with a high incidence of rater changes may raise concerns about the interpretability of its assessment results. While the names/identification of raters might be captured as part of the clinical data entry, this KRI can also be implemented using the user ID data stored with the assessment audit trails. (A brief sketch of how KRIs like these might be computed from audit trail data follows this list.)
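To make the mechanics of these KRIs more concrete, the following sketch shows how a few of them might be computed from a flat audit trail extract using Python and pandas. It is illustrative only: the table layout and every column name (site_id, subject_id, visit_date, ecrf_entry_ts, assess_start_ts, assess_end_ts, rater_id) are assumptions made for the example rather than the schema of any particular EDC or eCOA product, and a production implementation would be built against the actual audit trail exports of the systems in use.

```python
# Illustrative sketch: a few audit-trail-based KRIs computed with pandas.
# All column names below are assumptions, not a real EDC/eCOA schema.
import pandas as pd


def visit_to_ecrf_cycle_time(df: pd.DataFrame) -> pd.Series:
    """Median days from the patient visit to eCRF data entry, per site."""
    delay_days = (df["ecrf_entry_ts"] - df["visit_date"]).dt.total_seconds() / 86400
    return delay_days.groupby(df["site_id"]).median()


def mean_assessment_duration(df: pd.DataFrame) -> pd.Series:
    """Mean minutes spent completing an assessment, per site."""
    minutes = (df["assess_end_ts"] - df["assess_start_ts"]).dt.total_seconds() / 60
    return minutes.groupby(df["site_id"]).mean()


def peak_hour_concentration(df: pd.DataFrame) -> pd.Series:
    """Share of a site's entries falling in its single busiest hour of the day.
    A value near 1.0 mirrors the '70% in the 6 p.m. hour' pattern described above."""
    hour = df["assess_start_ts"].dt.hour.rename("hour")
    counts = df.groupby(["site_id", hour]).size()
    return counts.groupby(level="site_id").apply(lambda s: s.max() / s.sum())


def rater_change_rate(df: pd.DataFrame) -> pd.Series:
    """Fraction of follow-up visits where the rater (audit trail user ID) changed, per site."""
    df = df.sort_values(["site_id", "subject_id", "visit_date"])
    changed = df.groupby(["site_id", "subject_id"])["rater_id"].transform(
        lambda s: s != s.shift()
    )
    follow_up = df.duplicated(subset=["site_id", "subject_id"])  # excludes each subject's first visit
    changed = changed & follow_up
    return changed.groupby(df["site_id"]).sum() / follow_up.groupby(df["site_id"]).sum()


# Tiny hypothetical extract to exercise the functions.
audit = pd.DataFrame({
    "site_id": ["S01", "S01", "S02", "S02"],
    "subject_id": ["P1", "P1", "P2", "P2"],
    "visit_date": pd.to_datetime(["2019-06-01", "2019-07-01", "2019-06-03", "2019-07-03"]),
    "ecrf_entry_ts": pd.to_datetime(["2019-06-02 10:00", "2019-07-15 09:00",
                                     "2019-06-03 18:20", "2019-07-03 18:25"]),
    "assess_start_ts": pd.to_datetime(["2019-06-01 09:00", "2019-07-01 14:05",
                                       "2019-06-03 18:00", "2019-07-03 18:06"]),
    "assess_end_ts": pd.to_datetime(["2019-06-01 09:15", "2019-07-01 14:18",
                                     "2019-06-03 18:03", "2019-07-03 18:09"]),
    "rater_id": ["u_smith", "u_smith", "u_lee", "u_chan"],
})

print(visit_to_ecrf_cycle_time(audit))   # S01 shows long entry delays; S02 enters same day
print(mean_assessment_duration(audit))   # ~14 min at S01 vs. ~3 min at S02
print(peak_hour_concentration(audit))    # S02 entries all cluster in the 6 p.m. hour
print(rater_change_rate(audit))          # the rater changed between visits at S02
```

In practice, metrics like these would feed the study's KRI dashboards and be compared across sites or against predefined thresholds, so that outlying sites are flagged for follow-up.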
Conclusion
These examples reinforce a clear additional benefit of mobile technologies and direct source data capture: they enable more effective operational risk monitoring and quality oversight. When these capabilities are combined with an effective risk planning process that considers the risks associated with the technologies themselves, we can look forward to better quality outcomes in this new risk-based quality management paradigm.
Steve Young is Chief Scientific Officer for CluePoints, an information technology and services provider with offices in Belgium and King of Prussia, Pa.