Introduction

More than ever, medicine now aims to tailor, adjust, and personalize healthcare to individuals’ and populations’ specific characteristics and needs—predictively, preventively, participatorily, and dynamically—while continuously improving and learning from data both “big” and “small.” Today, these data are increasingly captured from data sources both old (such as electronic medical records, EMR) and new (including smartphones, sensors, and smart devices). Combining artificial intelligence (AI) with augmented human intelligence, these new analytical approaches enable “deep learning health systems” that reach far beyond the clinic, extending research, education, and even care into the built environment and people’s homes.

The volume of biomedical research is increasing rapidly. Some of it is driven by the availability and analysis of big data—the focus of this collection. Yet only a tiny fraction of research ever translates into routine clinical care. An analysis by the USA’s Institute of Medicine (now the National Academy of Medicine) noted that it takes 17 years for 14% of research findings to move into clinical practice [1]. As Westfall et al. note, many factors can affect implementation—several of which involve the use of data. Ever more data are generated in medicine, making big data approaches previously used in fields such as physics and astronomy increasingly relevant to healthcare.

Data, while necessary, are insufficient to inform medical practice. Data must be transformed before they can be useful. A commonly used framework is the “data, information, knowledge, and wisdom” (DIKW) hierarchy. References to this hierarchy date back to the late 1980s, in the works of Zeleny [2] and Ackoff [3]. The first reference to it in the context of medicine was in the discipline of nursing informatics [4]. This framework was recently revisited by Damman [5], who proposed modifying it to “data, information, evidence, and knowledge” (DIEK) to reflect the importance of evidence. In this framework, “knowledge” denotes evidence that is relevant, robust, repeatable, and reproducible. Whichever conceptual framework is preferred, it is evident that data must be transformed to be useful. Despite predictions of the value that big data analytics holds for healthcare [6], medicine has lagged behind other industries in applying big data to realize that value. Lee and Yoon [7] identify several limitations that affect the use of big data in the medical setting, including the inherent “messiness” of data collected as part of clinical care, missing values, high dimensionality, the inability to identify bias or confounding, and the observational nature of the data, which decreases the ability to infer causality.

The Beyond Big Data to New Biomedical and Health Data Science article collection published in BMC Medicine focuses on providing examples of how big data-driven approaches might ultimately improve healthcare provision and health outcomes. In addition, the collection’s articles address data complexity, challenges facing this type of research, and other enablers and barriers.

At the heart of precision health

New data sources are enabling striking progress. For example, a smartphone microphone in a bedroom can now listen for the unique gasping sounds, called agonal breathing, that occur when the heart stops beating [8]. These are an audible biomarker of cardiac arrest: a brainstem reflex that arises in the setting of severe hypoxia. An AI algorithm can differentiate them from other types of breathing, with the potential to trigger a call for early cardiopulmonary resuscitation (CPR).
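To make the idea concrete, the sketch below shows how such a detector might be structured: short audio clips are summarized as mel-frequency cepstral coefficient (MFCC) features and fed to an off-the-shelf classifier. This is only an illustrative pipeline on assumed inputs (random noise stands in for real bedroom audio and the labels are hypothetical); it is not the model described in [8].

```python
# Illustrative sketch only: MFCC features + a generic classifier.
# Random noise stands in for real audio; labels are hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def featurize(clip, sr=16000):
    """Summarize a 1-second clip as the mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000).astype(np.float32) for _ in range(40)]
labels = np.array([i % 2 for i in range(40)])  # 1 = agonal breathing (hypothetical)

X = np.stack([featurize(c) for c in clips])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:2]))  # toy prediction on the training clips
```

A deployed detector would, of course, be trained on curated recordings of agonal breathing versus confounders (snoring, apnea) and validated prospectively; the structure above only conveys the feature-extraction-plus-classification shape of the problem.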

In this article collection, a Debate by Hekler et al. [9] helpfully presents a complementary “small data” paradigm built around the N-of-1 unit (i.e., a single person, clinic, hospital, healthcare system, community, or city). The authors argue that such “small data” not only complement big data in advancing personalized medicine but are also valuable in their own right.

Next, Mackey et al. [10] explore the role of blockchain in use cases such as precision health, the drug supply chain, and clinical trials. The authors highlight that, beyond the benefits of a distributed, immutable, transparent, and higher-trust system, the unique advantages of the much-hyped blockchain for healthcare processes must be assessed against existing technologies. They argue that the need to share data throughout the ecosystem is what makes blockchain a viable application for healthcare. Healthcare blockchain is, however, not yet “fit for purpose,” because it lacks technical data standards and regulatory policies, among other things. The authors propose a design framework and a set of blockchain principles to help advance the field.
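As a rough illustration of the “immutable” property at the heart of these arguments, the minimal sketch below chains records with SHA-256 hashes so that any retroactive edit becomes detectable. The record fields are hypothetical, and a real healthcare blockchain adds distribution, consensus, and access control, none of which are modeled here.

```python
# Minimal hash-chain sketch of blockchain tamper-evidence (hypothetical records).
import hashlib, json, time

def make_block(record, prev_hash):
    """Bundle a record with the previous block's hash, then hash the bundle."""
    block = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"event": "drug shipment logged"}, prev_hash="0" * 64)]
chain.append(make_block({"event": "trial consent recorded"}, chain[-1]["hash"]))
print(verify(chain))                      # True
chain[0]["record"]["event"] = "tampered"  # a retroactive edit...
print(verify(chain))                      # ...is detected: False
```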

Huang et al. [11] provide a timely reminder that cutting-edge advances in precision health, mHealth, and the use of apps to empower people with diabetes to self-manage their health and disease cannot be achieved without building on the sound foundations of evidence-based medicine and following best practices and guidelines. New advances in digital health need quality standards, quality and safety assurance mechanisms, and—at times—even regulation to (counterintuitively for some) speed their adoption.

Implementation science and genomic medicine

Implementation science is the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice and, hence, to improve the quality and effectiveness of health services and care [12]. The implementation of new findings in genetics and genomics is subject to the same limitations noted in the introduction, although these are magnified because genomic information is used to define ever smaller subgroups of patients—ultimately down to the level of the individual.

The development of implementation science methods and the incorporation of frameworks such as RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) [13], the Consolidated Framework for Implementation Research (CFIR), and others [14] have led to great progress in understanding what is needed to move important research findings into clinical settings. Increasingly, funding agencies explicitly require the study of implementation, as evidenced by the USA’s National Institutes of Health identifying dissemination and implementation science as a research priority [15].

Despite the importance of implementing new findings, research funding remains disproportionately allocated to discovery rather than translation. For instance, in a 2007 analysis of the genomic translation research continuum, Khoury et al. noted that less than 3% of research publications presented results of T2 research (assessing the value of a genomic application for health practice, leading to the development of evidence-based guidelines), with a much smaller proportion devoted to T3 research (moving evidence-based guidelines into health practice through delivery, dissemination, and diffusion research) or T4 research (evaluating the “real-world” health outcomes of a genomic application in practice) [16]. A similar pattern has been observed in other areas of biomedical research and, although there has been some improvement, most publications still describe discovery research. To address this issue, one major funder of genetic and genomic research, the National Human Genome Research Institute, explicitly includes implementation research in its strategic plan [17].

In this collection, the paper by Namjou et al. [18], from the Electronic Medical Records and Genomics (eMERGE) Network, emphasizes both discovery and implementation. Namjou and colleagues describe a genome-wide association study (GWAS) of non-alcoholic fatty liver disease (NAFLD). What makes this paper exemplary for implementation is the use of natural language processing (NLP) of actual EMR clinical notes to develop a much richer phenotype for discovery than in the typical GWAS, which depends heavily on diagnosis codes—a known limitation of these types of studies [19]. eMERGE has been a leader in the development of standardized phenotypes that can be used across EMR systems with high sensitivity and specificity [20]; these phenotypes are available for general use at PheKB.org [21]. The study replicated the known association of NAFLD severity with the PNPLA3 gene cluster and identified two novel associations: one with NAFLD (near IL17RA) and another with NAFLD progression to fibrosis (near ZFP90-CDH1). The study also includes a phenome-wide association study (PheWAS). In contrast to a GWAS, in which a phenotype is tested in cases and controls to identify the genetic loci associated with it, a PheWAS tests a known genetic locus in carriers and non-carriers across all phenotypes contained in a health record to discover disease associations with the genetic marker [22]. Using the PNPLA3 gene cluster locus, the PheWAS identified a novel negative association with gout. This study exemplifies how analysis of the big data held in EMR systems can facilitate discovery relevant to real-world disease and provides an avenue for discovery, dissemination, and implementation.
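For readers less familiar with the design, the sketch below illustrates the core PheWAS loop on synthetic data: carrier status at a single locus is tested against each phenotype in turn with logistic regression, with a Bonferroni threshold applied across phenotypes. The data frame and phenotype columns are hypothetical simplifications; real analyses adjust for covariates such as age, sex, and ancestry and use curated phenotype definitions like those at PheKB.org.

```python
# Minimal PheWAS sketch on synthetic data: one known locus, many phenotypes.
# Column names are hypothetical; covariates are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({"carrier": rng.binomial(1, 0.3, n)})   # locus of interest
phenotypes = [f"pheno_{i}" for i in range(5)]             # stand-in phecodes
for p in phenotypes:
    df[p] = rng.binomial(1, 0.1, n)                       # case / non-case flags

results = []
for p in phenotypes:
    X = sm.add_constant(df[["carrier"]].astype(float))
    fit = sm.Logit(df[p], X).fit(disp=0)                  # one test per phenotype
    results.append((p, fit.params["carrier"], fit.pvalues["carrier"]))

out = pd.DataFrame(results, columns=["phenotype", "log_odds", "p"])
out["significant"] = out["p"] < 0.05 / len(phenotypes)    # Bonferroni correction
print(out)
```

A negative fitted log-odds for a phenotype would correspond to a protective (negative) association of the kind Namjou et al. report for gout.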

Increasing the validity of risk prediction models derived from electronic health record data

The drive towards so-called P4 medicine—that is, medicine that is “predictive, preventive, personalized and participatory” [23]—supported by the increasing availability of EMR-derived clinical cohorts, has led to a proliferation of risk prediction models. Given the very high global disease burden of ischemic heart disease and stroke [24, 25], it is unsurprising that the development of cardiovascular risk prediction models has been a major focus of research. In a related vein, there has been a policy drive to embed such models into routine clinical care.

In the UK, the National Institute for Health and Care Excellence (NICE) currently recommends use of the QRISK2 cardiovascular disease risk algorithm [26]. Using the internationally respected Clinical Practice Research Datalink (CPRD), which links primary care, secondary care, and mortality data, Pate and colleagues [27] constructed a cohort of 3.79 million patients and tracked risk scores over a 10-year period. They compared the QRISK2 and QRISK3 algorithms after incorporating additional data on secular trends, geographical variation, and the approach to imputing missing data, and found that these additional variables resulted in substantial variation in risk across models. The authors concluded that modeling decisions can have a major impact on risk estimates, particularly decisions about secular trends, which can be accounted for relatively easily in the modeling process.
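The secular-trend point can be illustrated with a toy survival model: below, synthetic cohort data with a baseline hazard that declines over calendar time are fitted with and without calendar year of cohort entry as a covariate (here using the lifelines library), and the fitted coefficients differ accordingly. All variables and effect sizes are invented for illustration; QRISK itself is a far richer model with many more predictors.

```python
# Toy illustration of a secular trend in a Cox model (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(55, 8, n),
    "entry_year": rng.integers(1998, 2016, n).astype(float),  # secular trend proxy
})
# Simulate event times whose baseline hazard declines over calendar time.
hazard = 0.01 * np.exp(0.04 * (df["age"] - 55) - 0.03 * (df["entry_year"] - 1998))
df["T"] = rng.exponential(1.0 / hazard)
df["E"] = (df["T"] < 10.0).astype(int)   # event observed within 10 years
df["T"] = df["T"].clip(upper=10.0)       # administrative censoring at 10 years

with_trend = CoxPHFitter().fit(df, duration_col="T", event_col="E")
no_trend = CoxPHFitter().fit(df.drop(columns="entry_year"),
                             duration_col="T", event_col="E")
print(with_trend.summary["coef"])  # recovers both age and calendar-time effects
print(no_trend.summary["coef"])    # the trend is silently absorbed elsewhere
```

Omitting the calendar-time covariate leaves the declining incidence unmodeled, so risks estimated from older data are projected onto current patients, which is one mechanism behind the variation Pate and colleagues describe.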

Big data, shared data, good data?

While modern technology allows the collection and analysis of data at ever greater scales, the potential for benefit from widespread sharing of data remains hampered by human conventions such as interdisciplinary politics, funding mechanisms, institutional policies, and perverse incentives for career researchers [28], among other research challenges [29]. From the public perspective, there are also potential concerns around fairness, ethics, information governance, and the entry of commercial industries into some health systems. Although patients might reasonably assume that medical research professionals routinely and freely share data with fellow academic researchers (and perhaps even industry) on a global scale, they would likely be surprised to hear that most of us do not [30].

Sharing clinical trial data is becoming increasingly commonplace—championed by initiatives such as AllTrials, and demanded by calls from the National Academy of Medicine, the World Health Organization, and the Nordic Trial Alliance [31]—though it is the oft-criticized commercial sponsors that share more data than their academic counterparts [32]. The landscape of data sharing in practice remains fragmented: a recent review of the practices of top biomedical journals revealed a split between journals with no formal policy, those that require sharing on request, and those that require full data availability with no restriction [33].

In this collection, Waithira and colleagues [34] argue for clear institution-level policies on data sharing, particularly in low- and middle-income countries. Formal procedures around issues such as cost recovery are especially important given the lower resource availability in such settings and the potential for inequity: in the authors’ experience, most requests to access data from low- and middle-income countries come from higher-income countries. While the case for data sharing in support of replication, secondary post hoc analyses, and meta-analyses is clear, sharing must not further disadvantage those in the poorest institutions in order to further the careers of their peers in richer countries.

Ethical considerations around big data sets are also the focus of this collection’s Opinion from Nebeker and Torous [35], who outline ways in which the rapidly evolving technology landscape presents new and volatile challenges. Ethical frameworks and procedures developed half a century ago for controlled experiments in universities and hospitals struggle when faced with the real-time analysis, productization, and monetization of the incalculable “data exhaust” we produce each day with our digital devices. The authors highlight a newer framework that balances risks and benefits (as is standard) but also elevates the growing considerations of privacy, data management, access, and usability. The piece serves as a call to action to develop a new, digitally minded ethical infrastructure to address these challenges before the pace of developments in AI, the scale of the “big tech” companies, and the influx of new stakeholders from countries without a robust history of medical ethics overwhelm our ability to maintain the key principles of justice, beneficence, and respect for persons.

Conclusions

The United Nations recently reported that, for the first time, half of humanity is connected to the Internet [36], with major growth in Africa and economically developing countries. Such vast growth in data and connectivity offers great opportunities to gather data, test interventions, and hone care pathways on timescales once thought impossible. Yet, in moving towards an always-online and all-digital culture, we risk forgoing the hard-fought lessons of traditional research. All too often, human bias, generalizability, conflicts of interest, politics, and prejudice still lurk behind the 1s and 0s and the deus ex machina of artificial intelligences that promise to render our complex challenges simple. While much work remains to be done, we are cautiously optimistic that we might soon be past the “peak of inflated expectations” and the “trough of disillusionment” of the so-called “hype cycle” for big data [37]. As this pervasive mega-trend touches off a variety of new technologies and approaches, the foundational work on validity, data sharing, generalizability, and ethical principles described in this special issue will continue to resonate for decades to come.