Citation: Kieran TJ, Maines TR, Belser JA (2024) Data alchemy, from lab to insight: Transforming in vivo experiments into data science gold. PLoS Pathog 20(8): e1012460. https://doi.org/10.1371/journal.ppat.1012460
Editor: Felicia Goodrum, University of Arizona, UNITED STATES OF AMERICA
Published: August 29, 2024
This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Funding: This research was funded by the Centers for Disease Control and Prevention. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Meta-analyses of laboratory-generated data have the potential to improve experimental protocols and offer meaningful understanding of complex biological processes. However, while development of genotype-to-phenotype statistical and predictive studies is growing, in vivo-generated data are generally employed to validate genotypic models; they are not often included as features in the analyses themselves and are rarely used on their own, separate from genotypic data. Moreover, the difficulties of normalizing, cleaning, and transforming these data before analysis are multifaceted, in part due to the challenge of aggregating sufficiently large data sets for practical use. Here, we highlight important considerations and best practices when translating data from viral pathogen in vivo studies for use in data science applications, notably statistical analyses and machine learning (ML) approaches, which we use as an illustrative example. The topics covered are applicable when studying multifactorial disease processes (such as viral pathogenicity and transmissibility), independent of the specific data analyses or programming language employed.
Data transformation mastery: Bridging lab notes to analysis-ready inputs
When conducting an in vivo experiment, researchers will typically collect a diverse array of qualitative observations and quantitative measurements. Therefore, choosing data that are most relevant for aggregation and tidying is a crucial first step (Fig 1). Care must be taken when combining data from multiple studies to determine which data points are most consistently collected between experiments (especially if different research staff are conducting the work). This may require excluding specific parameters from analysis (such as lethargy or animal activity level) that may be more vulnerable to laboratorian bias, depending on the specific standardized assessment employed. To reduce experimental confounders, studies intended for aggregation should be conducted under as uniform or standard conditions as possible (with these inclusion criteria explicitly stated within the analysis) [1–3]. As any research scientist will attest, in vivo-generated data are highly heterogeneous, particularly when using outbred species. Variability may be present in baseline (pre-inoculation) animal age, weight, temperature, activity level, blood chemistry, and innate immune response parameters, among others. Inoculation (e.g., infectious dose) and post-inoculation (e.g., specimen collection) variability can also be present. As most studies assessing viral pathogenicity report changes relative to baseline, normalizing raw data to reflect a linear or percentage-based deviation from baseline will typically yield aggregate data with less standard error and greater uniformity, and represents a best practice in the field [4]. Normalization can typically occur before or after aggregation.
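As a minimal sketch of the percentage-based normalization described above, the snippet below converts serial measurements into percent deviation from a pre-inoculation baseline; the ferret weight values are hypothetical and used purely for illustration.

```python
def pct_change_from_baseline(values, baseline):
    """Express serial measurements as percent deviation from a pre-inoculation baseline."""
    return [round(100.0 * (v - baseline) / baseline, 1) for v in values]

# Hypothetical daily ferret weights (grams), days 0-4 post-inoculation
weights = [1020, 980, 940, 910, 955]
print(pct_change_from_baseline(weights, baseline=weights[0]))
# → [0.0, -3.9, -7.8, -10.8, -6.4]
```

Applying the same transformation to every animal before aggregation places heterogeneous baselines (e.g., animals of different starting weights) on a common scale.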
Prior to formal analyses, diverse experimental data must be aggregated and organized into tidy data files (see Fig 2 for examples of different data types that may be sourced for this purpose). Next, additional data transformation steps will likely take place, concurrent with initial exploratory analyses to identify and refine key parameters (see Fig 2 for examples of different considerations that can modulate data transformation outcomes). Once these steps (outlined in green) are completed, hypothesis-driven research questions and model development can take place (in isolation or in tandem), such as generation of simple statistical models (outlined in blue) and ML models (outlined in purple). Establishing ML models may necessitate additional experimental considerations to be optimized prior to training/testing/refining of models. Best practices involve validating models with new, independent data, and the use of cross-validation methods to ensure accurate predictive outcomes not influenced by data noise.
It is frequently desirable to contextualize in vivo-derived outcomes with genotypic data [5–8]; however, these data must be similarly curated before further analysis, with reliable consensus sequence data available for aggregation and use (Fig 2). Will full-length genetic sequences be assessed, or will specific molecular residues that are known to affect the tested variable be sufficient [9]? Molecular residues are often compensatory in nature; will researchers build new data set columns with anticipated phenotypic outcomes from constellations of specific amino acids at key positions (like predicted receptor binding preference or length of an accessory protein)? If laboratory-generated data will be included, have researchers ensured reproducibility of aggregated experiments performed over time [10], with oversight for potential dual-use research of concern? Considering the scope of information that can be obtained from in vivo, in vitro, and molecular analyses, selecting input data for subsequent processing represents a challenging endeavor.
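One way to build the derived data set columns described above is a simple lookup keyed on residues at positions of interest. In the sketch below, the residue pairs and phenotype labels are hypothetical placeholders, not validated molecular markers; a real analysis would substitute curated, pathogen-specific constellations.

```python
# Hypothetical mapping from residue constellations at two key positions
# to a predicted phenotype label; real analyses require validated markers.
PHENOTYPE_LOOKUP = {
    ("D", "D"): "human-type receptor binding",
    ("E", "G"): "avian-type receptor binding",
}

def predicted_phenotype(residues, lookup=PHENOTYPE_LOOKUP):
    """Return a derived phenotype label, or flag the constellation for manual review."""
    return lookup.get(tuple(residues), "unclassified")

print(predicted_phenotype(["D", "D"]))   # → human-type receptor binding
print(predicted_phenotype(["E", "D"]))   # → unclassified
```

Returning an explicit "unclassified" label, rather than silently guessing, keeps novel constellations visible so they can be reviewed by domain experts before modeling.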
In vivo-generated data can encapsulate a wide range of serially collected and/or discrete (stand-alone) specimens, observations, and experimental outcomes. Results from in vivo experimentation are frequently contextualized with a diversity of pathogen sequence-based information and laboratory-based assays. Examples of data types within these groupings are shown on the left-hand side of this figure. Depending on the data type, there are a range of options available for distilling complex laboratory-based readouts into discrete values, which are necessary for many data science applications; these decisions can meaningfully impact the conclusions drawn from the work. Examples of how complex data can be tidied for this purpose for each data type are shown on the right-hand side of this figure. AUC, area under the curve; RBS, predicted receptor binding preference; PA, predicted polymerase activity. Data types and analysis considerations are representative only and do not encapsulate all potential parameters employed in data science applications employing in vivo data. Image generated entirely by CDC illustrators by hand.
Distilling complexity: Turning serial data into single values
In vivo pathogen experiments typically yield both serially collected and discrete viral titer and clinical data (Fig 2). Serially collected data (especially linked measurements over time) can provide valuable information about pathogen biology, but normalizing, converting, and condensing serially collected data into the discrete values employed in many standard statistical and predictive applications can represent a substantial challenge, and a choice that may greatly influence the resulting analyses (Fig 1). For example, serially collected infectious virus titers from the nasal passages of a virus-inoculated animal can be reported as a mean peak titer, a discrete titer representing one sampling time point, an area-under-the-curve summary measure, or a measure of infection progression between multiple time points, among others [11,12]. Distilling these serial data into discrete values may necessitate a great deal of exploratory analyses to ascertain data structure and variable relationships, while concurrently ensuring that associations identified in the data are biologically relevant and in alignment with known behaviors of the model pathogen(s) tested.
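Two of the summary measures named above, peak titer and area under the curve, can be sketched in a few lines; the sampling days and log-scale titer values below are hypothetical.

```python
def peak_titer(titers):
    """Highest observed titer across serial sampling points."""
    return max(titers)

def titer_auc(days, titers):
    """Trapezoidal area under the titer curve across the sampling days."""
    return sum((titers[i] + titers[i + 1]) / 2 * (days[i + 1] - days[i])
               for i in range(len(days) - 1))

# Hypothetical nasal wash titers (log10 units) collected on days 1, 3, 5, and 7
days = [1, 3, 5, 7]
titers = [4.5, 6.2, 5.1, 2.0]
print(peak_titer(titers))               # → 6.2
print(round(titer_auc(days, titers), 1))  # → 29.1
```

Note that the AUC depends on the sampling schedule: the same underlying infection sampled on different days yields different trapezoidal sums, which is one reason uniform collection schedules matter when aggregating across studies.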
Furthermore, it may be prudent to consider whether selected numerical summary parameters, whose varying degrees of biological effect can generate spurious noise in a model, should be transformed to other data types prior to analysis (Fig 2). Is it more sensible to convert maximum weight loss from a continuous to a categorical variable, and if so, how many weight loss categories are appropriate? Is it more appropriate to report viral detection in a tissue over (or under) a predetermined cutoff rather than using a quantitative titer measurement? Which criteria determine whether a binary report of a phenotypic outcome is optimal, and if so, which cutoff should define it? These decisions may result in greater statistical strength and/or predictive power, but often cannot or will not be made until exploratory analyses are completed. However, researchers should never eliminate raw data; rather, they should add these modifications and transformations to the data set, with accompanying code to reproduce them.
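The two transformation questions posed above, binning a continuous variable and reporting a binary detection call, can be sketched as follows; the category cutoffs and limit of detection are hypothetical values chosen for illustration only.

```python
def weight_loss_category(max_pct_loss):
    """Bin maximum percent weight loss into ordered categories (cutoffs hypothetical)."""
    for cutoff, label in [(5, "mild"), (10, "moderate"), (15, "severe")]:
        if max_pct_loss < cutoff:
            return label
    return "critical"

def tissue_positive(titer, limit_of_detection=1.5):
    """Binary call for viral detection in a tissue (log10 cutoff is hypothetical)."""
    return titer >= limit_of_detection

print(weight_loss_category(12.3))   # → severe
print(tissue_positive(0.9))         # → False
```

Keeping such transformations in code, alongside the untouched raw columns, makes the binning choices explicit and reproducible when the cutoffs are inevitably revisited after exploratory analysis.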
Unveiling data: Scales/model choice driven by data
It is crucial to decide whether in vivo source data are best represented at the level of individual animals or as a mean/median of multiple animals (inoculated with the same virus, treated with the same agent, etc.); this may not be finalized until late in the study (Fig 1). Aggregating in vivo data at the level of viral inoculation or treatment condition may result in a smaller sample size, but may increase consistency between like groups, with fewer outliers. While it is reasonable to link data points collected from the same animal over time, it might not be possible to link data collected from multiple animals within the same observation for analyses (like ML) that require one row of data per observation (e.g., avoiding pairing per-animal necropsy data with serially collected data from different animals, so as not to imply a linkage between viral titers obtained from different individual animals). This may necessitate mixed rows in the data set encapsulating in vivo-derived data at both the per-animal and per-virus level, alongside molecular data and any additional in vitro-based parameters (which may exhibit some variability across scales).
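The per-animal versus per-virus trade-off described above can be illustrated by collapsing hypothetical per-animal records into one row per inoculation group:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-animal records for two inoculation groups
records = [
    {"virus": "virus_A", "animal": 1, "max_wt_loss": 8.2},
    {"virus": "virus_A", "animal": 2, "max_wt_loss": 10.1},
    {"virus": "virus_B", "animal": 3, "max_wt_loss": 3.4},
    {"virus": "virus_B", "animal": 4, "max_wt_loss": 4.0},
]

# Collapse to one row per virus: smaller n, but fewer animal-level outliers
by_virus = defaultdict(list)
for rec in records:
    by_virus[rec["virus"]].append(rec["max_wt_loss"])
group_rows = {virus: round(mean(vals), 2) for virus, vals in by_virus.items()}
print(group_rows)   # → {'virus_A': 9.15, 'virus_B': 3.7}
```

Retaining both representations in the data set (per-animal rows for within-group variability, per-virus rows for cross-virus comparisons) preserves the option to analyze at either scale.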
Concurrent with these analyses, elucidating the underlying relationships present in data across scales will ultimately contribute towards improved analysis approaches. Running statistical correlations, linear regression models, and other assessments can inform whether the relationships to be explored are linear [11,13–15]. For ML applications, understanding whether predicted outcomes should be framed as categorical classification, numerical regression, or clustering, and whether the approach should be supervised or unsupervised, represents a crucial decision that will greatly impact the suitability of the ML algorithm(s) employed and the results obtained [16,17]. On a positive note, it is likely that these formative analyses on source data will provide valuable information towards subsequent understanding of disease processes.
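One common exploratory check for linearity is to compare Pearson (linear association) against Spearman (monotonic association) correlations; a large gap between them hints the relationship is monotonic but not linear. Below is a self-contained sketch with hypothetical dose and titer values showing a saturating response; the rank helper assumes no tied values.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient: sensitive to linear association."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    """Spearman correlation: Pearson computed on ranks (assumes no ties)."""
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

# Hypothetical: dose vs titer with a saturating (monotonic but nonlinear) shape
dose = [1, 2, 3, 4, 5]
titer = [1.0, 3.5, 4.5, 4.9, 5.0]
print(round(pearson(dose, titer), 2), round(spearman(dose, titer), 2))
# → 0.89 1.0
```

In practice, library implementations (e.g., from SciPy) with proper tie handling and p-values would replace these toy functions, but the diagnostic logic is the same.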
Unlocking the code: Seamlessly integrating the data science recipe
As exploratory results are obtained, it is critical to apply a strong mix of domain expertise (and common sense) when determining biological relevance for downstream analyses like multivariate statistical evaluations and/or feature selection in ML [5]. Different data types obtained during laboratory experimentation may govern the choice of ML algorithm(s) employed [16]. Determining which features, or variables, to use in ML is also dependent on data availability. In vivo studies often have features with missing data (such as those resulting from an animal reaching an unscheduled humane endpoint, or from serially collected weight/temperature observations recorded on different schedules post-inoculation). Determining whether such a feature should be dropped or have values imputed usually comes down to the amount of missingness and how biologically important the feature may be. Many methods for data imputation exist, each with its own considerations [17].
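The drop-or-impute decision described above can be made explicit in code. The sketch below uses simple mean imputation and a hypothetical 30% missingness threshold; real analyses would weigh the feature's biological importance and may prefer more sophisticated imputation methods.

```python
def impute_or_drop(values, max_missing_frac=0.3):
    """Mean-impute missing (None) values, or signal that the feature should be
    dropped when missingness exceeds a (hypothetical) threshold."""
    observed = [v for v in values if v is not None]
    if 1 - len(observed) / len(values) > max_missing_frac:
        return None  # too much missing data: candidate for dropping
    m = sum(observed) / len(observed)
    return [v if v is not None else round(m, 2) for v in values]

print(impute_or_drop([7.1, None, 6.5, 6.9]))    # → [7.1, 6.83, 6.5, 6.9]
print(impute_or_drop([7.1, None, None, None]))  # → None
```

Whatever method is chosen, recording which values were imputed (e.g., in a companion indicator column) keeps the distinction between measured and filled-in data visible downstream.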
From raw input to reliable conclusions: Interpreting and validating results
Regardless of the analyses conducted, it is quite possible that in vivo source data will not always be associated with high statistical correlations and/or high-performance metrics using ML algorithms. This should not be considered a failure. Studies conducted in vivo (including but not limited to those assessing pathogenicity, tropism, and transmissibility) are conducted specifically because molecular, in vitro, and/or ML approaches are insufficient to fully predict these multifactorial outcomes. There is great utility in understanding the relative contribution of specific features from in vivo experimentation towards disease outcomes, even if the highest statistical correlation or performance metric itself is not as striking, or the confidence intervals of an association are wider than desired [10,11,18,19].
For ML studies, it is crucial to confirm any high-performance metrics obtained, ideally with externally generated independent testing data, to ensure model overfitting is not taking place (Fig 1) [13,16]. However, this can pose a substantial challenge when employing in vivo data in training data sets, due to a paucity of publicly available data sets for testing. The potential for substantial heterogeneity across different laboratories performing similar research, and the likelihood that selected features required for model use are not available in other published studies, must be considered. Researchers can employ techniques like cross-validation, or combining internal and external data in both the training and testing data sets, to improve model robustness. However, care must be taken to ensure such combinations are practical and appropriate. It is likely that researchers will need to comb the literature to build suitable data sets from scratch for evaluation and validation purposes [5,19], despite this being a time-intensive effort.
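The k-fold cross-validation technique mentioned above partitions the available samples so every observation serves in a test set exactly once. Below is a minimal index-splitting sketch (contiguous folds, no shuffling or stratification, which production ML libraries would add):

```python
def kfold_splits(n_samples, k):
    """Partition sample indices into k contiguous folds and return a
    (train, test) pair of index lists for each cross-validation round."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        splits.append((train, test))
        start += size
    return splits

for train, test in kfold_splits(6, 3):
    print(test)
# prints [0, 1], then [2, 3], then [4, 5]
```

For in vivo data, the grouping decisions discussed earlier matter here too: all rows derived from the same animal (or the same inoculation group) should land in the same fold, or the model will be tested on data it has effectively already seen.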
Spreading the data science wealth: An alchemist’s guide
The diversity of data science analyses possible with data aggregated from in vivo experimentation likely exceeds what any one research group can identify on their own. Publicly releasing aggregated data sets for use by other researchers represents a best practice for data sharing in general, and supports the 3 R's of animal research by seeking to glean additional insight from preexisting data without employing additional animals. Preparing these data for external dissemination can be a challenge, but compiling the data in a digital (e.g., CSV) tidy format (i.e., with one observation per row and one variable per column), with associated metadata and code documenting how the data were collected, compiled, and modified, and depositing them in a public database is important for transparency, reproducibility, and scientific advancement. Additionally, data sets can be described in detail in peer-reviewed journals that publish Data Notes or Data Descriptors, which can be cross-referenced with and cited alongside the primary manuscripts, encouraging reuse and providing complete information about the creation, benefits, and limitations of the data (as an example, see [20]).
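Writing the tidy format described above (one observation per row, one variable per column) is straightforward with the standard library; the column names and values below are hypothetical, and a companion metadata file would document units, collection schedules, and transformations.

```python
import csv
import io

# Tidy layout: one observation (animal-day) per row, one variable per column.
# Column names and values are hypothetical.
fieldnames = ["virus", "animal", "day", "pct_weight_change", "nasal_titer"]
rows = [
    {"virus": "virus_A", "animal": 1, "day": 1, "pct_weight_change": -3.9, "nasal_titer": 4.5},
    {"virus": "virus_A", "animal": 1, "day": 3, "pct_weight_change": -7.8, "nasal_titer": 6.2},
]

buffer = io.StringIO()  # in-memory stand-in for a file destined for a public repository
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

Because each row is self-describing, other groups can filter, regroup, or join these data to their own without reverse-engineering a bespoke spreadsheet layout.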
Conclusions
In vivo studies provide critical information that cannot be obtained from in vitro and/or molecular analyses alone. However, retrospective analyses employing in vivo-generated data are rare [5,11,18], with few viral pathogen studies incorporating in vivo data in ML algorithms [13,19]. While we focus on ML as an illustrative example, these principles apply to other statistical methodologies not discussed here. As discussed throughout this article, these analysis workflows involve numerous decisions that necessitate the input and domain expertise of both data scientists and research staff. The most rigorous and valuable studies will ultimately result from close collaboration between groups who perform in vivo research and groups who perform meta-analyses of this work. While it can be time consuming, the benefits of data aggregation and sharing can be transformative, and increased efforts by researchers to generate and share these data resources are warranted.
Disclaimer
The findings and conclusions are those of the authors and do not necessarily reflect the official position of the Agency for Toxic Substances and Disease Registry (ATSDR)/the Centers for Disease Control and Prevention (CDC).
References
- 1. Belser JA, Lau EHY, Barclay W, Barr IG, Chen H, Fouchier RAM, et al. Robustness of the Ferret Model for Influenza Risk Assessment Studies: a Cross-Laboratory Exercise. MBio. 2022;13(4):e0117422. Epub 20220711. pmid:35862762; PubMed Central PMCID: PMC9426434.
- 2. Huang SS, Banner D, Fang Y, Ng DC, Kanagasabai T, Kelvin DJ, et al. Comparative analyses of pandemic H1N1 and seasonal H1N1, H3N2, and influenza B infections depict distinct clinical pictures in ferrets. PLoS ONE. 2011;6(11):e27512. Epub 20111116. pmid:22110664; PubMed Central PMCID: PMC3217968.
- 3. Buhnerkempe MG, Gostic K, Park M, Ahsan P, Belser JA, Lloyd-Smith JO. Mapping influenza transmission in the ferret model to transmission in humans. Elife. 2015:4. Epub 20150902. pmid:26329460; PubMed Central PMCID: PMC4586390.
- 4. Belser JA, Kieran TJ, Mitchell ZA, Sun X, Mayfield K, Tumpey TM, et al. Key considerations to improve the normalization, interpretation and reproducibility of morbidity data in mammalian models of viral disease. Dis Model Mech. 2024;17(3). Epub 20240305. pmid:38440823; PubMed Central PMCID: PMC10941659.
- 5. Lycett SJ, Ward MJ, Lewis FI, Poon AF, Kosakovsky Pond SL, Brown AJ. Detection of mammalian virulence determinants in highly pathogenic avian influenza H5N1 viruses: multivariate analysis of published data. J Virol. 2009;83(19):9901–9910. Epub 20090722. pmid:19625397; PubMed Central PMCID: PMC2748028.
- 6. Sun Y, Zhang K, Qi H, Zhang H, Zhang S, Bi Y, et al. Computational predicting the human infectivity of H7N9 influenza viruses isolated from avian hosts. Transbound Emerg Dis. 2021;68(2):846–856. Epub 20200808. pmid:32706427; PubMed Central PMCID: PMC8246913.
- 7. Peng Y, Zhu W, Feng Z, Zhu Z, Zhang Z, Chen Y, et al. Identification of genome-wide nucleotide sites associated with mammalian virulence in influenza A viruses. Biosafety and Health. 2020;2(1):32–38.
- 8. Zeller MA, Gauger PC, Arendsee ZW, Souza CK, Vincent AL, Anderson TK. Machine Learning Prediction and Experimental Validation of Antigenic Drift in H3 Influenza A Viruses in Swine. mSphere. 2021;6(2). Epub 20210317. pmid:33731472; PubMed Central PMCID: PMC8546707.
- 9. Borkenhagen LK, Allen MW, Runstadler JA. Influenza virus genotype to phenotype predictions through machine learning: a systematic review. Emerg Microbes Infect. 2021;10(1):1896–1907. pmid:34498543; PubMed Central PMCID: PMC8462836.
- 10. Creager HM, Kieran TJ, Zeng H, Sun X, Pulit-Penaloza JA, Holmes KE, et al. Utility of Human In Vitro Data in Risk Assessments of Influenza A Virus Using the Ferret Model. J Virol. 2023;97(1):e0153622. Epub 20230105. pmid:36602361; PubMed Central PMCID: PMC9888249.
- 11. Kieran TJ, Sun X, Maines TR, Beauchemin CAA, Belser JA. Exploring associations between viral titer measurements and disease outcomes in ferrets inoculated with 125 contemporary influenza A viruses. J Virol. 2024;98(2):e0166123. Epub 20240119. pmid:38240592; PubMed Central PMCID: PMC10878272.
- 12. Danzy S, Lowen AC, Steel J. A quantitative approach to assess influenza A virus fitness and transmission in guinea pigs. J Virol. 2021;95(11). Epub 20210317. pmid:33731462; PubMed Central PMCID: PMC8139685.
- 13. Jhutty SS, Boehme JD, Jeron A, Volckmar J, Schultz K, Schreiber J, et al. Predicting Influenza A Virus Infection in the Lung from Hematological Data with Machine Learning. mSystems. 2022;7(6):e0045922. Epub 20221108. pmid:36346236; PubMed Central PMCID: PMC9765554.
- 14. van den Brand JM, Stittelaar KJ, van Amerongen G, Reperant L, de Waal L, Osterhaus AD, et al. Comparison of temporal and spatial dynamics of seasonal H3N2, pandemic H1N1 and highly pathogenic avian influenza H5N1 virus infections in ferrets. PLoS ONE. 2012;7(8):e42343. Epub 20120808. pmid:22905124; PubMed Central PMCID: PMC3414522.
- 15. Chokkakula S, Oh S, Choi WS, Kim CI, Jeong JH, Kim BK, et al. Mammalian adaptation risk in HPAI H5N8: a comprehensive model bridging experimental data with mathematical insights. Emerg Microbes Infect. 2024;13(1):2339949. Epub 20240416. pmid:38572657; PubMed Central PMCID: PMC11022924.
- 16. Greener JG, Kandathil SM, Moffat L, Jones DT. A guide to machine learning for biologists. Nat Rev Mol Cell Biol. 2022;23(1):40–55. Epub 20210913. pmid:34518686.
- 17. Goodswen SJ, Barratt JLN, Kennedy PJ, Kaufer A, Calarco L, Ellis JT. Machine learning and applications in microbiology. FEMS Microbiol Rev. 2021;45(5). pmid:33724378; PubMed Central PMCID: PMC8498514.
- 18. Stark GV, Long JP, Ortiz DI, Gainey M, Carper BA, Feng J, et al. Clinical profiles associated with influenza disease in the ferret model. PLoS ONE. 2013;8(3):e58337. Epub 20130305. pmid:23472182; PubMed Central PMCID: PMC3589361.
- 19. Kieran TJ, Sun X, Maines TR, Belser JA. Machine learning approaches for influenza A virus risk assessment identifies predictive correlates using ferret model in vivo data. Commun Biol. 2024;7:927. pmid:39090358
- 20. Kieran TJ, Sun X, Creager HM, Tumpey TM, Maines TR, Belser JA. An aggregated dataset of serial morbidity and titer measurements from influenza A virus-infected ferrets. Sci Data. 2024;11(1):510. Epub 20240517. pmid:38760422; PubMed Central PMCID: PMC11101425.