Clinical Researcher—January 2019 (Volume 33, Issue 1)
PEER REVIEWED
Takoda H. Roland, CCRP, CCRA, CNA
The clinical research organization (CRO) I work for doesn’t provide source documentation for studies, so every site we work with does things a bit differently. The variations in source data formats and quality create inconsistencies in data capture and increase the monitoring burden by forcing the clinical research associate (CRA) for any given multisite study to learn how each site operates.
Even for a small Phase III trial with only about 50 research sites in the U.S. and Canada, the burden on a monitor of conducting visits and verifying data collected according to myriad site-specific standard operating procedures can be overwhelming. Phase III studies are sophisticated operations and are increasing in complexity,{1} so the chances that 50 different sites perfectly capture all the data we need are close to zero. In addition to complexities of the study itself, these 50 sites may have been trained by 10 or more different CRAs on a protocol that has been amended multiple times before the study even started, so we must focus on capturing just the minimum data we need to have the test product approved.
This is not a criticism of my employer. I have worked for one small CRO and two of the largest CROs, and they are all essentially the same. Not providing source documentation to sites for consistent data capture is an industry standard practice.
This is a criticism of the industry. I have identified two reasons why CROs don't provide source documents to sites:
- It means less work for the CRO.
- It addresses the issue of liability—if there is a deficiency in a site’s source data, it is the site’s fault instead of the CRO’s.
The Case for Standardized Source
I believe a CRO is hired for its expertise in clinical trials, and should be responsible for providing adequate source documentation to ensure the study goes as smoothly as possible by promoting consistency in data capture. Adopting and providing standardized source to sites reduces the workload not only on research sites, but also on the CRO’s own staff of CRAs.
By having to familiarize myself with each site’s method of source data capture, I not only eat into my limited time onsite, but also am less likely to recognize any trends across my sites, since they all capture the data differently. Noticing trends is a crucial part of clinical research monitoring, and is vital to patient safety and good data quality.
The Case for Technology in Patient Consent
The consent form lists the possible benefits of the study, the side effects of the treatment, and any alternative treatments. It also informs patients that they can withdraw at any point and provides contact information in case they have questions.
The consent form is usually around 20 pages long, but can vary depending on the complexity of the study. A patient must be consented before any study procedures are performed.{2} The consent process is a critical element of any legitimate research study, and ensuring proper consent is a top priority for a CRA. Proper consent protects patients by keeping them informed of risks and alternatives.
When I monitor the consent process, I check that each page of the most recently approved consent form is present in the patient’s chart and that the signature date matches the patient’s first visit date. Hopefully, the site jots down a note with some details about the consent conversation with the patient, or at least uses a checklist that hits the bare minimum points.
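Reduced to its mechanical parts, that verification amounts to a handful of comparisons. The sketch below is only an illustration of that logic; the consent version, page count, and chart fields are assumptions invented for the example, and it shows how little a clean result actually proves about the conversation that took place.

```python
from datetime import date

# A hypothetical reduction of the paper consent check to its mechanical parts.
# The consent version, page count, and chart fields are invented for the example.
CURRENT_CONSENT_VERSION = "v4.0"
CURRENT_CONSENT_PAGES = 20

def consent_check(chart):
    """Return review findings from a paper chart, or a note that nothing was found."""
    findings = []
    if chart["consent_version"] != CURRENT_CONSENT_VERSION:
        findings.append("Outdated consent version on file.")
    if chart["consent_pages_on_file"] < CURRENT_CONSENT_PAGES:
        findings.append("Consent form incomplete: pages missing.")
    if chart["consent_signed"] > chart["first_visit"]:
        findings.append("Consent signed after the first study visit.")
    return findings or ["No findings, but this alone cannot confirm a proper consent conversation."]

print(consent_check({
    "consent_version": "v4.0",
    "consent_pages_on_file": 20,
    "consent_signed": date(2019, 1, 2),
    "first_visit": date(2019, 1, 2),
}))
```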
I can never truly know by simply checking a signature and a note if the proper consent process happened; there is no way to know if the dates on the signature are accurate. Even if the date is accurate, I have no way of knowing if the consent process happened before any other study procedures that day. As for the note about the consent conversation with the patient, it does not take a monitor long to notice that sites use standard language. I suspect many sites have become much more proficient in documenting a proper consent process than actually performing proper consent.
I can ask site staff to clarify their consent process if I am having trouble believing the accuracy of the standard language consent notes, but if they say none of the subjects had any questions, I don’t have any evidence to the contrary. I am forced to accept the industry standards for documenting consent.{2}
India recently addressed the consent issue by requiring the consent process to be captured on video.{3} Filming consent is much more effective at ensuring the process was followed, since a monitor can easily re-watch the entire process. However, the revised video consent process has been met with resistance. Some doctors involved in Indian clinical trials argue that the requirement to be videotaped makes patients less likely to participate, hurting enrollment.
As an industry skeptic, I believe the push-back from doctors and the decrease in enrollment rates due to video consent have a different source: fraud.
It is much more difficult to fabricate patient data, and entire patients, when video consent is required. The truth likely lies somewhere between the doctors' explanation and mine, but ample evidence exists across the globe proving that some patients are fabricated.{4,5} As a monitor, I have seen patient fabrication firsthand, and I suspect there have been instances I missed.
Industry leaders may argue that a video consent process carries the potential of unblinding patient data or increasing the time of monitoring. To those experts I pose the following questions:
- Does adaptive and remote monitoring not address the issue of taking too much time to monitor the full consent process?
- Are you willing to risk patient safety, rights, and well-being by not having a complete video consent process in the interest of saving time/money and expediting enrollment?
Currently there are no video consent requirements in the U.S. or for U.S. Food and Drug Administration (FDA) submission, so I am forced to accept paper documentation at face value.
The Case for BYOD
Clinical trial managers now routinely use eDiary software that is provided to patients to complete some assessments, though it does not track dosing or side effects. Both dosing compliance and side effects are essential data, so not capturing them as accurately as possible in real time can be problematic.
As a CRA, I have witnessed these problems firsthand. During an HIV study I worked on, I noticed at a patient's most recent visit that she had returned almost all of her study medication unused. There were 30 days between study visits, and the patient returned 28 pills. Proper dosing is once a day, so there had been obvious dosing noncompliance, with no way to determine exactly when the patient stopped dosing properly. The site reported that the subject stopped dosing two days after her previous visit, and that the site was not aware until she came in for her most recent visit 30 days later.
If the study recorded patient dosing electronically, the system could have been set up to automatically notify the site of dosing noncompliance, and the site could have followed up with the patient in real time. Noncompliant dosing is particularly dangerous in HIV studies, as it can cause the patient to develop resistance to the treatment and potentially to future treatments.{6}
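As a rough illustration of what that real-time check could look like, here is a minimal sketch. It is not any vendor's actual system; the two-day threshold, the diary data structure, and the alert step are all assumptions made for the example.

```python
from datetime import date

# Hypothetical illustration of a real-time dosing check: if the eDiary shows no
# dose logged for more than a set number of days, the site is alerted immediately
# instead of discovering the gap at the next scheduled visit.
MISSED_DOSE_ALERT_THRESHOLD = 2  # days without a logged dose before alerting

def days_since_last_dose(dose_dates, today):
    """Return the number of days since the most recent logged dose, or None if no doses."""
    if not dose_dates:
        return None
    return (today - max(dose_dates)).days

def notify_site_if_noncompliant(patient_id, dose_dates, today):
    gap = days_since_last_dose(dose_dates, today)
    # A real system would push these alerts to the site's task queue or send an email.
    if gap is None:
        print(f"ALERT: patient {patient_id} has no diary entries at all; follow up now.")
    elif gap > MISSED_DOSE_ALERT_THRESHOLD:
        print(f"ALERT: patient {patient_id} has not logged a dose in {gap} days; follow up now.")

# Example: a patient who stopped dosing two days after her previous visit.
logged_doses = [date(2019, 1, 2), date(2019, 1, 3)]
notify_site_if_noncompliant("Subject-014", logged_doses, today=date(2019, 1, 8))
```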
I read on to determine why the patient had stopped taking her medicine. At her Day 30 visit, the patient reported that 28 days earlier she had felt the medicine was making her nauseous. This side effect isn't uncommon in the study, but it can require some follow-up. In this instance, it required a lot of follow-up.
The site performs a pregnancy test at each visit, and this patient's most recent test was positive. Her nausea was not due to the medicine; it was caused by her pregnancy. Now the site has a pregnant woman at risk of developing resistance to HIV treatment for both herself and her unborn child. All of this could have been avoided if the eDiary had reported dosing and side effects to the site in real time. The patient, site, and CRO would have been aware of the problem within about three days, and the patient could have continued treatment.
At this point, many sites and CROs are familiar with some kind of electronic clinical outcome assessment (eCOA) device, but current solutions present their own challenges. Supplying a large number of sites with enough diaries in a study with unpredictable enrollment can result in supply shortages. There is an adage in clinical trials that 80% of study enrollment will come from 20% of the sites on the study. With such a discrepancy in enrollment between sites, it can be difficult to forecast accurately enough to ensure an adequate supply of both product and devices. Study supply shortages delay enrollment and greatly increase the costs of the study.{7}
The tactic of “bring your own device” (BYOD) mitigates the problems associated with supplying sites with an eCOA device. Critics of BYOD will argue that many patients in clinical trials are economically disadvantaged and are unlikely to own the smartphone necessary for BYOD. However, data suggest 50% of U.S. adults making less than $30,000 per year own a smartphone.{8} Further, critics of eCOA argue that older patients have difficulty using smart devices, but research shows 73% of U.S. adults aged 50 to 64 own a smartphone.{8}
BYOD should further mitigate concerns with patients being unable to correctly capture eCOA by allowing them to use the devices they are already familiar with. Not having to carry two smart devices also improves the chance of patients remembering to complete their assessments as required.
However, BYOD is not without its own challenges. Any BYOD application needs to have a tested and proven user interface to ensure a diverse patient population will be able to complete all required assessments. While data suggests most patients do have access to the required smartphones for BYOD, it is crucial to not exclude patients who do not own a personal smartphone. The best clinical trial management system should incorporate both BYOD and sponsor-supplied diaries to ensure all potential patients can enroll.
While eCOA does not yet have the capability to send real-time alerts, early adoption of this technology is a step in the right direction. A temporary workaround could be text-message reminders sent directly to patients, prompting them to take their medication and fill out their diaries. Patients can reply to these messages if they are experiencing any side effects, such as nausea, which can then prompt follow-up calls or visits.
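A minimal sketch of that workaround appears below. The send_sms function stands in for whatever messaging gateway a study team might use, and the keyword list and phone number are illustrative assumptions, not a recommendation for how side effects should be triaged.

```python
# Hypothetical sketch of the text-message workaround described above.
SIDE_EFFECT_KEYWORDS = {"nausea", "nauseous", "vomiting", "rash", "dizzy", "dizziness"}

def send_sms(phone_number, message):
    print(f"SMS to {phone_number}: {message}")  # placeholder for a real gateway call

def send_daily_reminder(phone_number):
    send_sms(phone_number, "Reminder: please take today's study medication and "
                           "complete your diary. Reply if you notice any side effects.")

def handle_reply(phone_number, reply_text):
    """Flag replies that mention a possible side effect so the site can follow up."""
    words = set(reply_text.lower().split())
    if words & SIDE_EFFECT_KEYWORDS:
        print(f"Possible side effect reported by {phone_number}: '{reply_text}'. "
              "Notify the site to arrange a follow-up call or visit.")

send_daily_reminder("+1-555-0100")
handle_reply("+1-555-0100", "Took my pill but feeling nausea again")
```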
The Case for eSource
During a typical monitoring visit, I spend the bulk of my day going through every datapoint the site has collected for each subject. I verify that the information is complete, accurate, makes logical sense, and was properly entered into the electronic data capture (EDC) system. When the EDC is properly set up prior to the study start, this process can be as simple as checking that the numbers on the page match what's in the EDC system.
However, it is seldom this easy. In my experience, sites rarely have a fully functional EDC with good data validation and system queries in place prior to study start. Due to poor study foresight and tight timelines, deficient study management systems get implemented, and the CRA is left to work with the site to mitigate the errors that result. Errors are compounded by the fact that sites often enter data into the EDC several days after patient visits occur. It is not uncommon for a site to miss crucial study datapoints at the beginning of enrollment.
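For readers less familiar with EDC terminology, the sketch below illustrates the kind of edit check and auto-generated query a well-built system would run at entry time. The field names and ranges are invented for the example; real studies define these rules in a data validation plan.

```python
# Hypothetical example of entry-time edit checks in an EDC. Field names and
# ranges are invented for the illustration.
EDIT_CHECKS = {
    "systolic_bp": (70, 250),    # mmHg
    "weight_kg":   (30, 250),
    "visit_year":  (2018, 2020),
}

def validate_record(record):
    """Return a list of auto-generated queries for missing or out-of-range values."""
    queries = []
    for field, (low, high) in EDIT_CHECKS.items():
        value = record.get(field)
        if value is None:
            queries.append(f"{field}: value missing; please complete or confirm not done.")
        elif not low <= value <= high:
            queries.append(f"{field}: value {value} is outside the expected range {low}-{high}.")
    return queries

# The classic "wrong year in a date" mistake is caught at entry, not weeks later onsite.
print(validate_record({"systolic_bp": 128, "weight_kg": 82, "visit_year": 2009}))
```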
Onsite monitoring of early study data is crucial to ensure research sites are capturing all required data. Industry standards tend to require a visit within the first two weeks of enrollment. Unpredictable enrollment and a large site load can make it difficult for a CRA to meet this critical requirement. With delayed EDC entry and required onsite monitoring, it can be anywhere from several days to months before a research site is even aware of a data deficiency, which could potentially affect all of a site's patients up to that point.
The worst part is that this problem can be easily avoided.
A relatively new solution has arrived on the clinical trials scene: eSource. eSource comes in many forms, and there currently isn't a one-size-fits-all solution; each clinical trial presents unique problems that require unique solutions. Recently, more complete eSource systems have emerged, with new systems seeking to eliminate issues caused by delays in CRA monitoring (and the costs those delays incur for both sites and CROs).
The site I am at on any given day may not utilize eSource, so I will be paging through multiple patients’ visit binders all day. Each error I find—from simple mistakes like using the wrong year in a date to larger issues such as missing data—needs to be addressed by the site staff while I am physically onsite. However, the site staff have a regular workload while I am onsite—one that will be continuously interrupted throughout the day as I find new issues to be tackled. This creates tensions that often lead to poor relations between site staff and their CRAs, thus reducing the CRAs’ ability to serve effectively.
A good eSource enters timestamps automatically for each datapoint, so the staff don’t even have to enter a date as the audit log captures the required information. Timestamps that include user signatures go a step further. With unique user accounts, every datapoint is traceable back to its originator; site staff do not have to waste any time signing and dating, and can instead focus on performing patient visits as quickly as possible. Expediting patient visits is critical as the industry moves toward more patient-centric trials.
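To make the audit-trail idea concrete, here is a minimal sketch of a datapoint record that is stamped with the entry time and the authenticated user automatically. The structure and field names are assumptions for illustration, not a description of any particular eSource product.

```python
from datetime import datetime, timezone

# Hypothetical sketch of an automatic audit trail: every datapoint is stamped with
# the entry time and the authenticated user account, so staff never sign or date
# anything by hand.
audit_log = []

def record_datapoint(subject_id, field, value, user_id):
    """Store a datapoint along with who entered it and when."""
    entry = {
        "subject": subject_id,
        "field": field,
        "value": value,
        "entered_by": user_id,                                   # unique user account
        "entered_at": datetime.now(timezone.utc).isoformat(),    # automatic timestamp
    }
    audit_log.append(entry)
    return entry

record_datapoint("Subject-014", "systolic_bp", 128, "coordinator.jsmith")
for entry in audit_log:
    print(entry)
```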
Effective eSource reduces the workload on sites by decreasing visit time and transcription errors, thus freeing up site overhead to take on additional studies. As I sit at the (typically small) desk in the makeshift office I am given during visits at most sites, poring over pages of data to ensure the site does not have any transcription errors, it occurs to me that eSource renders this entire process obsolete.
The greatest beneficiary of eSource is the CRO. Too often, a monitor's time is spent fixing transcription errors; when eSource is properly implemented, those errors do not exist, because the data are pulled directly into the EDC. This also dramatically reduces the workload for sites by eliminating data entry and the quality control/quality assurance steps that data entry requires.
Source data validation can easily account for more than 80% of a monitor’s time. After eliminating the need for source data validation onsite by making the source electronic, the monitor can focus on the larger issues of site enrollment, performance, and patient compliance, which can all be overlooked with a high source validation workload.
When I am at a site that does not utilize eSource, I will have to page through hundreds of pages of source documents to ensure nothing is missing or incomplete. As I scramble to ensure I checked the bare minimum amount of data before I need to rush off to catch my flight, only to do it all again tomorrow in another city, I am struck with this thought: I love being a CRA, but the role as it exists today is obsolete.
Conclusion
If the industry adopts completely electronic systems, 95% of the issues I currently address won't exist. Say goodbye to transcription errors and to follow-up calls for missing documentation; most monitoring issues should be eliminated. The future of monitoring will be to focus on educating sites on the latest available systems developed to reduce workload, improve patient safety, and increase patient engagement.
To the sites and CROs that haven’t started to look at eSource, my advice is simple: Start today.
I believe in 10 years clinical trials will be completely paperless. Complete eSystems will eliminate the existing inefficiencies. Average study length will decrease, and study workload for sites and CROs will drastically decrease. Data safety and trends will be tracked in real time using advanced analytics, making investigative trials as safe as possible for our patients.
I believe the latest technology will significantly reduce the cost of bringing new treatments to market, and it is my sincerest hope that the savings generated by more efficient clinical trials will be passed on to the people who truly matter in clinical trials: patients.
References
- Getz K. 2010. Rising clinical trial complexity continues to vex drug developers. ACRP Wire May 13.
- U.S. Food and Drug Administration. 21 CFR Part 50 – Protection of Human Subjects.
- Central Drugs Standard Control Organization (India). 2014. Guidance for audio-visual recording of informed consent process in clinical trial. www.cdsco.nic.in/writereaddata/Guidance_for_AV%20Recording_09.January.14.pdf
- Radio Free Asia. 2016. Chinese clinical trials data 80 percent fabricated: government. September 27.
- Patel M. 2017. Misconduct in clinical research in India: perception of clinical research professional in India. J Clin Res Bioeth 8:303. doi: 10.4172/2155-9627.1000303
- Smith RJ. 2006. Adherence to antiretroviral HIV drugs: how many doses can you miss before resistance emerges? Proceedings of the Royal Society B: Biological Sciences March 7.
- Alsumidaie M. 2017. Non-adherence: a direct influence on clinical trial duration and cost. Applied Clinical Trials April 24.
- Pew Research Center. 2018. Mobile Fact Sheet (February). Pew Research Center Internet & Technology. pewinternet.org/fact-sheet/mobile/
Takoda H. Roland, CCRP, CCRA, CNA, (takodaroland@gmail.com) is the owner of Philadelphia Pharmaceutical Research, a clinical project manager with Five Eleven Pharma, and a remote CRA II with PRA Health Sciences.