Clinical Researcher—June 2018 (Volume 32, Issue 6)
ON THE JOB
Once a clinical trial has reached the point of patient recruitment, the prime source of data generation and collection becomes the network of investigator sites engaged in the trial. Whether the sites are based at medical clinics, hospitals, or academic medical centers, or are independent trial management sites, together they contribute vital trial data which are otherwise unavailable.
Running an investigator site is an exercise in administration: registering for clinical trials that are a good match, completing feasibility studies, following the tenets of good clinical practice, overseeing staff training and development, and keeping up to date on clinical regulatory and data privacy requirements—all before a single patient has been added to a trial and before a single data point has been created.
Technology can be seen as an additional burden. Although it is often presented as a set of tools and methods to make work more efficient, dealing with multiple trial sponsors and contract research organizations (CROs) means sites normally have to operate a baffling array of systems, many of which don’t integrate tightly with in-house systems. With incompatible systems and multiple processes remaining manual, double data capture, delays, and inconsistencies become commonplace.
Any processes with handoffs and manual interfaces are generally points where data tracking and information access are poorest. This creates challenges further down the line for insight gathering and analysis.
It Gets Worse
Where teams need to communicate, e-mail seems to be the default. This works because, well, who doesn’t have e-mail? Yet, for all its ease of use and wide distribution, e-mail can be its own burden. With long conversation threads covering multiple topics, it’s hard to see what is being asked and what has been answered.
Even when dealing with a single sponsor (or third-party distribution partner/CRO), multiple systems may be in use. Separate systems have been developed over many years for ordering new supplies, reporting Time Out of Environment and temperature excursions, managing investigational product returns and destruction requests, reporting adverse clinical events, and more—all with inconsistent interfaces, different username/password requirements, and different standard operating procedures. All place a burden on investigator sites—particularly those smaller sites with fewer specialist staff to deal with technical issues.
In the near future, sites will also have data sources that sit with the patient, in the form of apps that request regular input, smart devices such as smartphones, and custom medical wearables—all generating data and likely all incompatible with one another.
What’s the Solution?
With so many sponsors and CROs active at the same time—not to mention the rapidly increasing pace of clinical research—it’s unlikely that any single system will be available to sites for managing all these data anytime soon. That doesn’t mean that sites have to accept the status quo.
Multiple sources are here to stay, but smart software solutions can be designed to allow additional feeds, new partners, and new functionality to be incorporated with less strain on administrative procedures. By designing for open interconnections, more operations can be accomplished in one system. Allowing site systems to connect and automatically reformat and transfer data means manual steps can be removed, operations can be sped up, and more flexibility can be offered for the future.
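As a minimal sketch of what "automatically reformat and transfer data" might look like in practice, the adapter pattern below normalises two hypothetical incompatible sponsor feeds into one common record format before handing them to a single site system. All names here (SiteRecord, SupplyFeedAdapter, TemperatureFeedAdapter, and the field layouts) are illustrative assumptions, not real products or formats.

```python
from dataclasses import dataclass

@dataclass
class SiteRecord:
    """Hypothetical common format a site's single system of record accepts."""
    source: str
    record_type: str
    payload: dict

class SupplyFeedAdapter:
    """Converts one sponsor's (assumed) supply-order format into the common record."""
    def normalise(self, raw: dict) -> SiteRecord:
        return SiteRecord(
            source="sponsor-a",
            record_type="supply_order",
            payload={"item": raw["ItemCode"], "qty": raw["Quantity"]},
        )

class TemperatureFeedAdapter:
    """Converts an (assumed) temperature-excursion report into the common record."""
    def normalise(self, raw: dict) -> SiteRecord:
        return SiteRecord(
            source="sponsor-b",
            record_type="temp_excursion",
            payload={"device": raw["device_id"], "celsius": raw["reading"]},
        )

def ingest(adapter, raw: dict) -> SiteRecord:
    """Normalise an incoming raw record; the manual re-keying step disappears."""
    record = adapter.normalise(raw)
    # ...forward `record` to the site's single system of record...
    return record
```

Adding a new sponsor feed then means writing one more small adapter, rather than teaching staff another interface or re-keying data by hand.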
Managing multiple incompatible software systems is a challenge, but there is hope with new software models. Designing software to manage complexity and plan for flexibility is achievable, mainly by abstracting connections from the core of the software. By adopting an architecture that anticipates multiple incompatible data connections, the core software can continue to develop and grow, whilst managing many data sources, including sources not yet known.
Abstraction really is the way to go, and it can be thought of in both a micro and a macro sense. In the micro sense, within a single software application, abstraction can be used to separate different elements so that changes to small parts of the application can be made without impacting the whole.
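A minimal sketch of that micro-level abstraction: the core application talks only to an abstract DataSource, so supporting a new incompatible feed means adding one new subclass while the core logic never changes. The class names and record shapes here are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class DataSource(ABC):
    """Abstract connection point; the core never sees a concrete feed."""
    @abstractmethod
    def fetch(self) -> list:
        """Return records already converted to the site's common format."""

class CsvExportSource(DataSource):
    """Wraps a (hypothetical) flat-file export from one system."""
    def __init__(self, rows: list):
        self.rows = rows
    def fetch(self) -> list:
        return [{"type": "visit", **row} for row in self.rows]

class ApiSource(DataSource):
    """Wraps a (hypothetical) API client from another system."""
    def __init__(self, client):
        self.client = client
    def fetch(self) -> list:
        return self.client.get_records()

def daily_record_count(sources: list) -> int:
    # Core logic: counts records without knowing where any of them came from.
    return sum(len(source.fetch()) for source in sources)
```

Swapping a feed in or out touches only the subclass list passed to the core function, which is exactly the "change a small part without impacting the whole" property described above.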
At a macro level, one may abstract tasks to different tools. Myriad reporting and presentation tools are available on the market, such as Tableau and Power BI (to name two), which connect to many different data sources. Adding a new data source? No problem—make a small adjustment to the dashboard tool and everything ticks on as normal. Removing a data source? Same thing.
Generally, using off-the-shelf tools and applications for common tasks (dashboards, reports, and alerts, for example) is a great idea. It reduces the amount of custom code that must be maintained and the number of systems that users need to log into each day, just for starters.
For smaller sites where the in-house information technology (IT) provision may be minimal (or non-existent), anything that reduces IT burden is a good thing! Designing operations around an abstract model allows smaller site operators to do a lot of their own configuration and use external specialists for discrete projects. This keeps costs down and yields flexibility as the site manages what the site wants to manage and outsources the burden of more complicated tools.
Most critically, because of abstraction, sites are not at the mercy of a single provider to look after a software behemoth. Abstraction fosters an environment in which small tools and applications targeting simple tasks cooperate with one another to ease the site IT burden.
With better analytics and more of the clinical operations and data management requirements covered by fewer systems, the training burden is reduced and information is free to flow faster and more accurately—keeping both sponsors and CROs happy.