Life sciences and health care organizations are increasingly seeking to partner, exchange and license real-world data (RWD). Life sciences organizations use it for real-world studies to evaluate the effectiveness and safety of their products. Health systems use RWD for benchmarking, quality measurement and care pathway optimization. Payers can assess total cost of care and the real-world value of interventions. More and more, all of these organizations are realizing that de-identified RWD is essential to improving care and quality, and advancing the development of new medical treatments.
As groups seeking RWD expand their search for data, there are more organizations offering it than ever before. As an organization speaking to potential data partners, how do you judge whether data is “useful”? What makes some data more valuable than other data?
Many organizations that generate data presume it has value. Often, their own assessment of data assets can be likened to contestants in a talent show. The people performing presume they have talent, but the judging audience often feels otherwise. This is a great lens through which to view the health care data ecosystem.
Having met with many groups engaged in RWD partnerships, we’ve considered a framework for data evaluation. Whether you are an organization seeking to make data accessible to others or a group seeking data for research, there are some measures of data value you can apply to understand data “usefulness.”
Let’s examine a real-life data evaluation example. A pharmaceutical company with products treating depression may seek data that captures patients’ reported mood or expressed emotional state, response to medication and possible side effects to understand product effectiveness. With COVID-19 impacting our lives, mental health is gaining more attention. Social isolation and uncertainty about the future have led many people to seek out support for depression and anxiety. Many organizations have developed applications enabling telephonic therapy. These apps often have weekly questionnaires where people can record their emotions and experiences. They may also capture data from telephonic therapy sessions (date, time, session goals). For this example, let’s assume the appropriate patient consents, de-identification and data use rights are in place to allow external party research.
When we evaluate data value, we are really asking: What can the data do? What questions can the data answer? Another way to phrase it: Is the data fit for purpose? Companies evaluating data are trying to ascertain whether the data contains enough information to answer defined questions of value. Pharma companies researching depression may want to know how similar or different the patient’s reported experience is from their clinical encounters. Why would this matter? The physician’s summary of the patient’s mood is an interpretation of the patient’s statements and the physician’s own objective observation of the patient’s expressed emotional state or affect. This clinical interpretation may differ from the patient’s subjective experience. Additionally, some patients may feel reticent about sharing their true state of mind. Lastly, with in-person clinical visits hampered by COVID-19, remote telehealth may offer the only available recent insight into patients’ health.
With “useful” data, a comparison between the patient’s reported experience (via an app) and their clinical outcomes captured within electronic health record (EHR) data while taking a depression drug can be performed. It may help researchers understand how a patient’s subjective experience on therapy differs from the physician’s reported patient status. Understanding this difference can help life sciences companies create more precise and appropriately defined patient cohorts to measure the real-world effectiveness of their therapeutics. But how can one tell the data is good enough to do such a comparison?
It’s important to know whether the data comprises the right content to answer this question. How can we build confidence in an app’s ability to measure patient mood or emotional state and yield reliable data? Use of a regularly self-administered survey is a good start. A survey instrument administered at a regular interval is a valuable indicator that you may be able to reliably measure the patient’s reported experiences over time.
Using a validated instrument increases the likelihood that the app data can yield reliable insight because validated instruments have already been tested and correlated with some independent clinical measure (such as vital signs, lab values or physician-administered assessment tools). Many apps offering mental health support lack evidence-based content.2 The Patient Health Questionnaire (PHQ-9) is one of the validated instruments that measures depression severity.3 An app using this validated screening tool is an indicator that the data collected may help assess whether the patient’s depression symptoms are improving.
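To make this concrete, PHQ-9 scoring is simple enough to sketch: nine items, each answered on a 0–3 scale, summed to a total of 0–27 that maps to standard published severity bands. Below is a minimal illustration of how an app backend might derive a severity label from a completed questionnaire (the function and field layout are hypothetical, not drawn from any real product):

```python
# PHQ-9 total score: sum of nine items, each scored 0-3 (total range 0-27).
# Severity bands follow the standard published cut points.
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_severity(item_scores):
    """Return (total score, severity label) for a list of nine 0-3 item scores."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires exactly nine items, each scored 0-3")
    total = sum(item_scores)
    for low, high, label in SEVERITY_BANDS:
        if low <= total <= high:
            return total, label

print(phq9_severity([1, 2, 1, 2, 1, 1, 0, 1, 1]))  # (10, 'moderate')
```

Because the instrument is scored the same way every time, totals captured at a regular interval form the trajectory a researcher would need to measure whether symptoms are improving.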
Having insight into sessions conducted telephonically is also valuable. How often were therapeutic sessions held, on what dates and for how long? Were these sessions conducted by a licensed therapist using an evidence-based approach?5 Across patients, did sessions include similar therapeutic content, format and questions? This consistency of approach supports comparison of patients at a given point in time in their therapeutic program (including drug therapy). If a validated survey instrument was used and telephonic therapy sessions also had standardized frequency, duration and format, then the data from the app is more likely to offer reliable insight into a patient’s mental health trajectory.
Data breadth, population size and representativeness
If the data contains the right content, then population size and representativeness become important. You need enough patients to observe improvement, and enough patients within each subgroup of interest (organized by gender, age, severity of depression, co-morbidities, treatments taken, etc.). It may require tens of thousands, hundreds of thousands or millions of patients to ensure you can detect changes across groups. Lastly, consider data usability and quality. Is the data complete, reliably collected and internally consistent? Many companies lack the talent and systems to standardize massive amounts of data in a consumable format.
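The “enough patients” question can be made quantitative with a standard power calculation. As an illustrative sketch (the response rates below are made up), the normal-approximation formula for comparing two proportions shows why detecting even a modest difference between subgroups quickly demands large cohorts:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Patients needed per group to detect a difference between proportions
    p1 and p2 with a two-sided two-proportion z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detect a 50% vs. 55% improvement rate between two subgroups.
print(n_per_group(0.50, 0.55))  # 1562 patients per subgroup
```

Multiply a figure like this by the number of subgroups being compared, and by the fraction of the source population that actually qualifies for each cohort, and the need for tens or hundreds of thousands of patients arrives quickly.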
Even with all this exploration, how can we know for certain that the data is capable of yielding valid insight? A few options come to mind.
A provider group could pilot the app within their own population where a preexisting history of patients under their care already exists. They could compare a patient’s app data to their EHR characteristics “at baseline.” They could further compare if the patient’s mental health trajectory in the app data matches what is captured in the EHR.
Alternatively, researchers could select a group of matched patients from data generated by the app and compare it to a different, but matching, group within the EHR data receiving similar therapeutic interventions.
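One way to sketch that matched comparison: pair each app patient with an EHR patient who shares the same baseline profile, then compare the average change in outcomes across the matched pairs. The matching key, field names and data below are all hypothetical, chosen only to illustrate the exercise:

```python
from collections import defaultdict

def match_cohorts(app_patients, ehr_patients):
    """Exact-match app patients to EHR patients on a shared baseline key.
    Patients are dicts; each EHR patient is used at most once."""
    key = lambda p: (p["gender"], p["age_band"], p["baseline_severity"])
    pool = defaultdict(list)
    for p in ehr_patients:
        pool[key(p)].append(p)
    pairs = []
    for p in app_patients:
        if pool[key(p)]:
            pairs.append((p, pool[key(p)].pop()))
    return pairs

def mean_change_gap(pairs):
    """Average difference in score change (app minus EHR) across matched pairs."""
    return sum(a["score_change"] - e["score_change"] for a, e in pairs) / len(pairs)

# Hypothetical patients: negative score_change means symptoms improved.
app = [{"gender": "F", "age_band": "30-39", "baseline_severity": "moderate",
        "score_change": -6},
       {"gender": "M", "age_band": "40-49", "baseline_severity": "mild",
        "score_change": -3}]
ehr = [{"gender": "F", "age_band": "30-39", "baseline_severity": "moderate",
        "score_change": -5},
       {"gender": "M", "age_band": "40-49", "baseline_severity": "mild",
        "score_change": -4}]

pairs = match_cohorts(app, ehr)
print(len(pairs), mean_change_gap(pairs))  # 2 matched pairs, gap 0.0
```

A real study would likely use propensity-score matching rather than an exact key, but the exercise is the same: establish comparable baselines first, then compare outcomes.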
A third option is to use the results from a completed study of patients who had similar characteristics and types of treatment to evaluate whether the patient trajectory and outcomes were also similar. It’s likely that none of these methods is perfect, but these types of exercises could add another layer of confidence that the app and its data are reliable. The expression “do your homework” has special meaning when it comes to health care data. The due diligence takes real effort, but the potential value is worth the investment.
1 Martinengo L, Van Galen L, Lum E, et al. Suicide prevention and depression apps’ suicide risk assessment and management: a systematic assessment of adherence to clinical guidelines. BMC Medicine. 17, 231 (2019). https://doi.org/10.1186/s12916-019-1461-z.
2 Wasil AR, Venturo-Conerly KE, Shingleton RM, et al. A review of popular smartphone apps for depression and anxiety: Assessing the inclusion of evidence-based content. Behaviour Research and Therapy, Volume 123. December 2019. https://www.sciencedirect.com/science/article/pii/S0005796719301846#!
3 Zimmerman M. Using the 9-item patient health questionnaire to screen for and monitor depression. JAMA Insights. October 18, 2019. https://jamanetwork.com/journals/jama/article-abstract/2753532.
5 Fischer J, Mendez DM. Increasing the use of evidence-based practices in counseling: CBT as a supervision modality in private practice mental health. The Journal of Counselor Preparation and Supervision, Vol. 12(4). 2019. https://repository.wcsu.edu/jcps/vol12/iss4/4.