Prediction of Hematopoietic Stem Cell Transplantation-Related Mortality: Lessons Learned from the In-Silico Approach. A European Society for Blood and Marrow Transplantation Acute Leukemia Working Party Data Mining Study

Models for prediction of allogeneic hematopoietic stem cell transplantation (HSCT) related mortality only partially account for transplant risk. Improving predictive accuracy requires an understanding of the factors that limit prediction, such as the statistical methodology used, the number and quality of features collected, or simply the population size. Using an in-silico approach (i.e., iterative computerized simulations) based on machine learning (ML) algorithms, we set out to analyze these factors. A cohort of 25,923 adult acute leukemia patients from the European Society for Blood and Marrow Transplantation (EBMT) registry was analyzed. The predictive objective was non-relapse mortality (NRM) 100 days following HSCT. Thousands of prediction models were developed under varying conditions: increasing sample size, specific subpopulations, and an increasing number of variables, which were selected and ranked by separate feature selection algorithms. Depending on the algorithm, predictive performance plateaued at a population size of 6,611–8,814 patients, reaching a maximal area under the receiver operating characteristic curve (AUC) of 0.67. AUCs of models developed on specific subpopulations ranged from 0.59, for patients in second complete remission, to 0.67, for patients receiving reduced-intensity conditioning. Only 3–5 variables were necessary to achieve near-maximal AUCs. The top 3 ranking variables, shared by all algorithms, were disease stage, donor type, and conditioning regimen. Our findings empirically demonstrate that, with regard to NRM prediction, few variables “carry the weight” and that traditional HSCT data has been “worn out”. “Breaking through” the predictive boundaries will likely require additional types of inputs.


Introduction
Allogeneic hematopoietic stem cell transplantation (HSCT) is a potentially curative procedure for selected patients with hematological malignancies. Transplant-associated morbidity and mortality remain substantial, making the decision of whom, how, and when to transplant of great importance [1].
The European Group for Blood and Marrow Transplantation (EBMT) score, initially developed for prediction of allogeneic HSCT outcomes in chronic myeloid leukemia and later validated for other diagnoses, pioneered the field of prognostic modeling in HSCT [2,3]. Since its release almost two decades ago, additional scores have been developed. These have been validated, but do not fully account for transplantation risk in acute leukemia [4–9].
Performance limiting factors of HSCT prediction models might be attributed to inherent procedural uncertainty, the statistical methodology used, or the number and quality of features collected. Using an in-silico approach (i.e., iterative computerized simulations), based on machine learning (ML) algorithms, we set out to explore these factors in order to improve future acute leukemia HSCT outcome prediction models.
ML is a field within artificial intelligence. Its underlying paradigm does not start with a predefined model; rather, it lets the data create the model by detecting underlying patterns. This approach thus avoids pre-assumptions regarding model types and variable interactions, and may offer additional knowledge that has eluded detection by standard statistical methods. ML algorithms have been applied in various "big data" scenarios, such as financial markets, complex physical systems, marketing, advertising, robotics, meteorology, and biology. They are tools of the data mining approach for knowledge discovery in large datasets [10,11]. Recently, we developed the EBMT-Alternating Decision Tree (ADT) ML-based prediction model for mortality at 100 days following allogeneic HSCT in acute leukemia [9,12], thereby demonstrating the feasibility of the data mining approach in HSCT.

Study population
This was a retrospective, data mining, supervised learning study, based on data reported to the Acute Leukemia Working Party (ALWP) registry of the EBMT. The EBMT is a voluntary group of more than 500 centers, required to report all consecutive HSCT and follow-ups annually in a standardized manner. The study was approved by the ALWP board. Written informed consent was given by participants for their clinical records to be used in EBMT retrospective studies.
Inclusion criteria encompassed first allogeneic transplants from HLA-matched sibling and unrelated donors (≥8/10), performed from 2005 to 2013, using peripheral blood stem cells or bone marrow as cell source, in adults (age ≥18 years) diagnosed with de-novo acute leukemia. Haploidentical and cord blood transplants were not included.
A total of 26,266 patients from 326 European centers were initially analyzed. Patients lost to follow-up before day 100 post HSCT were discarded from analysis (n = 343, 1.3%). Twenty-two variables describing recipient, donor, and procedural characteristics were considered. Variables were defined according to EBMT criteria (Table 1 and Appendix A in S1 File) [13].

Study objectives
Study objectives included development of multiple prediction models for NRM 100 days post allogeneic HSCT, while estimating the effects of algorithm type, population size, specific subpopulations, and number of variables incorporated on the models' predictive performance. Day 100 NRM was defined as death without previous relapse/progression before day 100.

Study design
Prediction models for day 100 NRM were developed using six ML algorithms (WEKA v. 3-7-11, New Zealand). Through an in-silico approach, algorithms were iteratively exposed to an increasing population size, varying subpopulations, or an increasing number of ranked variables selected by a separate feature selection algorithm (Fig 1). For each iteration, a prediction model was trained and tested through 10-fold cross-validation. This process was repeated 5 times, each time randomly sampling the experimental dataset (see below). Performance was evaluated according to the area under the receiver operating characteristic curve (AUC) [14,15].
Tuning of the algorithms' parameters (Table A in S1 File) and the feature selection process, explained below, were conducted on an optimization dataset (n = 3,888, 15%), whereas development of the various day 100 NRM prediction models was done on the experimental dataset (n = 22,035, 85%). Samples were randomly allocated to each dataset from the original dataset.
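The repeated cross-validation scheme described above can be sketched as follows. The study used WEKA; the scikit-learn code below, with synthetic stand-in data, is our illustrative analog, not the authors' pipeline, and logistic regression stands in for any of the six algorithms.

```python
# Illustrative sketch of the evaluation harness: 10-fold cross-validation,
# repeated 5 times with reshuffled data, scored by AUC. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def repeated_cv_auc(model, X, y, repeats=5, folds=10, seed=0):
    """Mean AUC over `repeats` rounds of `folds`-fold cross-validation,
    each round reshuffling the data before splitting."""
    aucs = []
    for r in range(repeats):
        cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed + r)
        aucs.append(cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean())
    return float(np.mean(aucs))

# Synthetic stand-in for the registry data (22 variables, ~9% event rate).
X, y = make_classification(n_samples=2000, n_features=22, weights=[0.91],
                           random_state=0)
print(round(repeated_cv_auc(LogisticRegression(max_iter=1000), X, y), 2))
```

The same harness can wrap any classifier, which is what allows the six algorithms to be compared on equal footing.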

Feature selection
Feature selection is the process of ranking variables and identifying irrelevant and redundant information. The reduction of dimensionality offers a number of benefits, such as enabling algorithms to operate faster and more effectively, improving classification accuracy, improving data visualization, and enhancing understanding of the derived classification models [23]. Using a classifier-based feature selection algorithm, applied to the optimization dataset for each of the 6 previously described ML classification algorithms, variables were ranked according to their importance for prediction of day 100 NRM (Appendix C in S1 File).
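One simple form of classifier-based ranking scores each variable by the cross-validated AUC of a model trained on that variable alone. The sketch below illustrates this idea on synthetic data; the exact WEKA selection algorithm used in the study may differ, so treat this as a minimal stand-in.

```python
# Minimal classifier-based feature ranking: score each variable by the
# cross-validated AUC of a single-feature classifier (illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_features(X, y, names, folds=10):
    """Return variable names sorted best-first by single-feature CV AUC."""
    scores = {}
    for j, name in enumerate(names):
        auc = cross_val_score(LogisticRegression(), X[:, [j]], y,
                              cv=folds, scoring="roc_auc").mean()
        scores[name] = auc
    return sorted(scores, key=scores.get, reverse=True)

X, y = make_classification(n_samples=1000, n_features=5, n_informative=2,
                           random_state=0)
names = [f"var{j}" for j in range(5)]
print(rank_features(X, y, names))
```

Running the selection once per classification algorithm, as the study did, yields one ranking per algorithm, which is why rankings can then be compared across algorithms.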

Patient characteristics
Characteristics of the 25,923 analyzed patients are listed in Table 1. The majority had Acute Myeloid Leukemia (AML) (71.8%), were in first complete remission (CR1) (62.5%), and received myeloablative conditioning (MAC) (66.2%). Grafts from matched sibling donors were used in 47.6% of patients. Graft source was mainly peripheral blood (84.1%). NRM and overall mortality prevalence at day 100 were 9.2% (n = 2,387) and 12.7% (n = 3,280), respectively. A further 9.8% (n = 2,539) of patients relapsed before day 100 and were consequently classified as non-NRM at day 100. The parameter optimization and experimental datasets were similar in terms of baseline characteristics (Table B in S1 File).

Sample size effect on prediction
Day 100 NRM prediction models were developed with 6 ML algorithms on an expanding patient population (110–22,035 patients) sampled from the experimental dataset. When models were developed on all available patients, AUCs ranged from 0.64 for the MLP algorithm to 0.67 for the LR and AdaBoost algorithms (Fig 2 and Table C in S1 File). Depending on the algorithm, predictive performance plateaued at a sample size of 6,611–8,814 patients.
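The sample-size experiment amounts to tracing a learning curve: fit on progressively larger random samples and record cross-validated AUC until it flattens. A hedged sketch on synthetic data (the sample sizes and event rate here are illustrative, not the study's):

```python
# Learning-curve sketch: cross-validated AUC on expanding random samples.
# Synthetic data; in the study this was run per algorithm on registry data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20000, n_features=22, weights=[0.9],
                           random_state=0)

curve = {}
for n in (250, 1000, 4000, 8000, 20000):
    idx = rng.choice(len(y), size=n, replace=False)
    curve[n] = cross_val_score(LogisticRegression(max_iter=1000),
                               X[idx], y[idx], cv=10,
                               scoring="roc_auc").mean()
# AUC typically rises with n and flattens once the model is data-saturated.
print({n: round(a, 2) for n, a in curve.items()})
```

The plateau point of such a curve is what motivated the ~6,000-patient figure reported above.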

Sub-population effect on prediction
Prediction models were developed for specific subsets of patients, and performance was compared with that of models developed on the whole population.

Variable importance
When the feature selection process was applied to the optimization set, disease stage, donor type, and conditioning were consistently the 3 top-ranking variables across all day 100 NRM prediction models (Fig 3). The mean rankings of time from diagnosis to transplant, recipient age, and diagnosis were 4–6, respectively; however, their standard deviations were considerable, as their importance varied between algorithms. To assess the relationship between model performance and the number of variables incorporated, the ranked variables were serially introduced to the 6 ML algorithms, applied on the experimental dataset. Starting from the top-ranking variables and gradually adding variables of lower ranking, prediction models for day 100 NRM were iteratively constructed (Fig 1). Maximal predictive performance ranged from 0.65–0.67, with LR and MLP achieving their optimal AUC with only 6 variables (Fig 4). When provided with only the 3 top-ranking variables, all models achieved an AUC of 0.64.
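The serial-introduction experiment can be sketched as building models on the top-k ranked variables for k = 1..p and watching AUC saturate. The ranking step below (absolute logistic coefficients) is a cheap stand-in for the study's separate feature selection algorithm; data and numbers are synthetic.

```python
# Sketch of serially introducing ranked variables: model the top-k features
# for k = 1..p and record cross-validated AUC (synthetic, illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, n_features=10, n_informative=3,
                           random_state=0)
# Stand-in ranking: order features by absolute logistic coefficient.
coef = np.abs(LogisticRegression(max_iter=1000).fit(X, y).coef_[0])
ranked = np.argsort(coef)[::-1]

aucs = []
for k in range(1, len(ranked) + 1):
    cols = ranked[:k]
    aucs.append(cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, cols], y, cv=10,
                                scoring="roc_auc").mean())
# With only 3 informative features, AUC typically plateaus after the top few,
# mirroring the saturation with 3-5 variables reported in the study.
print([round(a, 2) for a in aucs])
```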

Discussion
Eligibility of patients with acute leukemia for allogeneic HSCT is based on a risk-benefit assessment of relapse risk versus NRM risk [24]. Risk scores for transplant-associated mortality have been developed based on retrospective registry data. Here, a large HSCT registry was explored while automating the prediction model development process, creating thousands of models depending on the questions asked. We show that for day 100 NRM prediction, various models developed on the basis of 6 popular ML algorithms reach approximately the same performance. With the data commonly collected, saturation of predictive performance requires very few variables, but large datasets.
The nature of the association between predictors and response, the data's quality and dimensionality (i.e., number of variables analyzed), and the number of events per outcome all affect the sample size necessary for generation of a robust and generalizable prediction model [15]. Hence, predetermination of sample size is a matter of empirical testing rather than a standardized calculation. Using repetitive computerized simulations, we demonstrate that, with marginal differences between algorithms, approximately 6,000 patients were needed to achieve maximal or near-maximal predictive performance. Defining a strict cutoff for modeling studies would be erroneous, as data features differ between cohorts. However, a rather solid assumption based on the presented "real world" results is that thousands of patients are needed when aiming to develop and validate models for similar problems. Iterative development of prediction models for specific subpopulations drew attention to the different disease stage groups. Performance was lowest for the CR2 group, with an AUC ranging from 0.53–0.58. Lower performance, though to a lesser extent, was also noted for the other disease stage groups. Disease stage is highly predictive of day 100 NRM; thus, it is not surprising that when it was excluded from the pool of variables considered for prediction, performance declined.
Prospects for cure are higher for patients in CR1 compared with other disease stages. Hence, estimation of NRM risk is of special interest in this group, as non-transplant alternatives exist [25–27]. Versluis et al. have addressed such a population receiving reduced-intensity conditioning. Considered separately, the Hematopoietic Cell Transplantation-Specific Comorbidity Index (HCT-CI) and the EBMT score were not predictive of NRM, corroborating the challenge we encountered. A new score, integrating features of the comorbidity index and the EBMT score, was constructed, achieving an AUC of 0.68 [8].
It should be noted that most algorithms reached an AUC of 0.65 using only 3–5 variables. Adding more variables led to a modest improvement of marginal clinical significance. The top 3 ranking variables, shared by all algorithms, were disease stage, donor type, and conditioning. Transplanters will not be surprised by these determinants, which have been validated repeatedly [28,29]. The predictive weight attributed to other features varied considerably between models, leading at best to a modest increment in predictive accuracy. Traditional HSCT prognostic studies rely on a collection of variables similar to the one presented; thus, effective prediction of individualized NRM is unlikely to improve substantially. Incorporation of the HCT-CI score holds promise; however, even when applied separately or in combination with other features, the comorbidity index reaches a maximal AUC of 0.7 [4,7,8,30–32]. In other words, contemporary prognostic models are suitable for risk stratification rather than outcome prediction. The discovery of additional prognostic markers, the incorporation of electronic medical records into routine clinical use, and the addition of biological and genetic data to the information gathered on leukemia patients offer great opportunities for model improvement [33,34]. Mortality following transplantation is likely the result of a complex network of interactions and non-linear associations. Hence, the Occam's razor concept, whereby the simplest solution is the best, might not hold for prediction of transplantation outcomes. Exploiting the abundance of data now available on transplant patients could potentially improve the applicability of prediction models. Novel modeling techniques such as ML [35,36], enabling non-parsimonious incorporation of a high number of variables, are warranted. These methods could potentially improve accuracy, though interpretability might be lost.
The EBMT-ADT prediction model marked the entrance of the data mining methodology into HSCT prognostic research [9,22]. The aim of the ADT study was development of a prediction model for overall mortality at 100 days following allogeneic HSCT in acute leukemia patients. Though using a data mining methodology, the perspective of the current study was not prediction per se, but rather an analysis of the predictive modeling process and its boundaries, focusing on NRM at day 100 as the objective. Thousands of prediction models, with varying algorithms, were developed and evaluated in order to discover elements that could improve future models. The in-silico experimental system allowed us to dissect the conditions under which the models were developed and the corresponding performance, thus providing methodological and clinical insights regarding sample size, modeling technique, and variable importance.
The study carries several limitations. First, it is a retrospective analysis susceptible to data selection and measurement biases; however, the registry analyzed reflects real-world data, hence conveying contemporary practice. Second, a few variables suffered from a large proportion of missing values. That being said, ML algorithms allow prediction of the outcome of interest without strong assumptions regarding distribution and missingness. In addition, we show that when discarding variables with more than 15% missing values, prediction does not improve (Table D in S1 File). Third, we focus on short-term data (day 100 NRM) rather than long-term mortality. We believe the high rate of day 100 NRM (9.2%) makes it a valid objective. Moreover, prediction of long-term outcomes might be expected to yield lower performance, as more parameters come into play; hence, the concepts presented should be applicable to modeling distant outcomes. Fourth, we treat prediction of day 100 NRM as a simple classification task, disregarding the time-to-event effect. However, given the large sample size, disregarding censored data (1.3%) is unlikely to impact performance.

Conclusion
The in-silico approach is a novel experimental system, utilizing machine learning algorithms, for empirical estimation of prediction boundaries in HSCT. Several clinical and methodological lessons were learned from the suggested approach. Large registry studies involving thousands of patients are necessary for development of robust prediction models, as the performance of different algorithms converged when sampling more than 6,000 patients. In addition, an exhaustive search for variable importance reveals that few variables "carry the weight" with regard to predictive influence. Potential biases of the presented approach include data quality issues and the selection of a short-term rather than a long-term outcome. Overall, it appears that when using traditional HSCT data, a point of predictive saturation has been reached. Improving performance will likely require additional types of input, such as genetic, clonal, and biologic factors.
Supporting Information S1 File. Appendix A in S1 File: Variables' Definitions. Appendix B in S1 File: Machine Learning Algorithms. Appendix C in S1 File: Feature Selection. Table A in S1 File: Algorithms' parameters. Table B in S1 File: Comparison between variables in the optimization and experimental datasets. Table C in S1 File: Predictive performance of day 100 NRM prediction models with increasing sample size. Table D in S1 File: Predictive performance of day 100 NRM prediction models discarding variables with prevalent missing values. (DOCX)