Abstract
Background and objectives
This study aims to develop a machine learning-based approach to predict adherence to gamified cognitive training using a variety of baseline measures (demographic, attitudinal, and cognitive abilities) as well as game performance data. We aimed to: (1) identify the cognitive games with the strongest adherence prediction and their key performance indicators; (2) compare baseline characteristics and game performance indicators for adherence prediction, and (3) test ensemble models that use baseline characteristics and game performance data to predict adherence over ten weeks.
Research design and method
Using machine learning algorithms including logistic regression, ridge regression, support vector machines, classification trees, and random forests, we predicted adherence from weeks 3 to 12. Predictors included game performance metrics in the first two weeks and baseline measures. These models’ robustness and generalizability were tested through five-fold cross-validation.
Results
The findings indicated that game performance measures were superior to baseline characteristics in predicting adherence. Notably, the games “Supply Run,” “Ante Up,” and “Sentry Duty” emerged as significant adherence predictors. Key performance indicators included the highest level achieved, total game sessions played, and overall gameplay proportion. A notable finding was the negative correlation between initial high achievement levels and sustained adherence, suggesting that maintaining a balanced difficulty level is crucial for long-term engagement. Conversely, a positive correlation between the number of sessions played and adherence highlighted the importance of early active involvement.
Discussion and implications
The insights from this research inform just-in-time strategies to promote adherence to cognitive training programs, catering to the needs and abilities of the aging population. It also underscores the potential of tailored, gamified interventions to foster long-term adherence to cognitive training.
Citation: Pang Y, Singh A, Chakraborty S, Charness N, Boot WR, He Z (2024) Predicting adherence to gamified cognitive training using early phase game performance data: Towards a just-in-time adherence promotion strategy. PLoS ONE 19(10): e0311279. https://doi.org/10.1371/journal.pone.0311279
Editor: Stefano Triberti, Universita Telematica Pegaso, ITALY
Received: March 15, 2024; Accepted: September 17, 2024; Published: October 2, 2024
Copyright: © 2024 Pang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The source code and data used for data analysis and model implementation can be accessed at the following GitHub repository: https://github.com/YuanyingPang/APPT_Game_Performance_Analysis.
Funding: This work was supported by the National Institute on Aging grant R01AG064529. This study was also partially supported by University of Florida-Florida State University Clinical and Translational Science Award funded by National Center for Advancing Translational Sciences under Award Number ULITR001427. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Worldwide, there were 727 million people 65 years of age or older in 2020 [1]. By 2050, this figure will more than double to over 1.5 billion people, indicating that the world is going through a significant demographic change [1]. As people age, they are more likely to experience chronic diseases, which can have significant impacts on their quality of life and their ability to live independently. Mild cognitive impairment (MCI), Alzheimer’s disease (AD), and related dementias are three of the most common chronic illnesses that lead to memory loss and deterioration of cognitive skills [2]. Even for older adults without MCI, AD, or dementia, cognitive abilities may decline with age [3]. Thus, there is a growing need for care services to support older adults, including medical care, long-term care, and social services. At present, the treatment of memory loss is mainly divided into pharmacological therapy and non-pharmacological therapy. While medications like cholinesterase inhibitors and memantine may help delay cognitive decline in some individuals with Alzheimer’s disease, they are not effective for everyone [4, 5]. Additionally, even for those who do experience benefits, the effects may be modest and short-lived [5]. As expected, these treatments have been associated with side effects, including gastrointestinal symptoms and dizziness, which have often led to treatment discontinuation [4, 5]. Recently approved drugs for early-stage dementia such as donanemab-azbt and lecanemab-irmb target the amyloid plaques that are considered a biomarker of MCI and Alzheimer’s disease [6, 7]. Although successful at clearing amyloid, these drugs have limited success in slowing cognitive decline and are associated with severe side effects such as brain swelling [6, 7].
Non-pharmacological interventions represent another approach to delaying memory loss in older adults [8]. Gamified cognitive training, a form of non-pharmacological intervention, has gained popularity over the last few years [9]. For older people, gamified cognitive training has several potential benefits: it offers a more engaging and motivating setting, can increase adherence and motivation, and may increase effectiveness [10, 11]. Gamified cognitive training can also provide a more customized and enjoyable experience, which could increase its overall effect [11]. Nevertheless, some researchers found that gamified cognitive training yielded disparate outcomes with respect to enhancing memory, and more studies are needed to explore the differing effects of different games [12].
Many articles indicate that gamified environments can help older adults engage with long-term online interventions [13, 14]. However, researchers have also suggested that training should be personalized to maximize adherence so that its effects can emerge [13]. The objective of the “Adherence Promotion with Person-Centered Technology” (APPT) project is to develop an adaptive reminder system to promote older adults’ adherence to mobile-based gamified cognitive training, with the ultimate goal of promoting early detection and prevention of age-related cognitive decline [15]. In previous studies, we developed machine learning models to predict participants’ adherence at different levels (overall and weekly) from a variety of baseline measures (demographic, attitudinal, and cognitive ability variables) collected in a previous cognitive training intervention, as well as deep learning models to predict the following week’s adherence using variables derived from the previous week’s training data [16]. That work revealed that both individual differences and previous intervention interactions provide valuable information for predicting adherence, and these insights can provide initial hints regarding who to target with adherence support strategies and when to provide support [16]. However, overall adherence throughout the training phase was predicted using baseline measures alone, with only moderate accuracy (AUROC = 0.71 for the best-performing machine learning model). Another study by the APPT team used multivariate time series data from the game training logs consisting of time-dependent variables (i.e., daily play duration, number of sessions, maximum level reached, and number of games performed) to assess whether participants would meet the minimum adherence criteria on day N+1 given their previous N-day continuous play pattern.
These two studies represent the first steps toward the development of an innovative AI-based reminder system to promote adherence to mobile-based cognitive training programs [17].
In this paper, we investigated the predictive power of participants’ performance on different cognitive games during the initial two weeks of training and their subsequent 10-week adherence to the computerized cognitive training program. This approach helps in identifying consistent adherence patterns, which is crucial for developing robust, just-in-time intervention strategies. While daily predictions might offer more granular insights, the 10-week average offers a broader perspective that informs sustained engagement and practical intervention planning. Further, the development of a just-in-time adherence support system that capitalizes on both short-term (day-to-day) and long-term (overall adherence over the remainder of the intervention) predictions would be more effective than a system that relies solely on short-term predictions. Specifically, the reminder system can be fine-tuned based on the combined forecasts. For example, if the long-term model predicts a high risk of dropout and the short-term model indicates a high probability of non-engagement over the next few days, this might present a more serious concern compared to scenarios where the short-term model predicts an adherence lapse but the long-term model predicts overall high adherence for the remainder of the trial. In such cases, the system might deploy more engaging, context-aware reminders that address specific barriers to long-term adherence to reduce the risk of attrition while also supporting short-term reengagement. We hypothesize that performance on different cognitive games during the initial two weeks of cognitive training is predictive of the subsequent 10-week adherence. Additionally, we conduct a comparative analysis between the predictive capabilities of baseline factors and game performance metrics. We also examine the performance of ensemble models that incorporate both baseline factors and game performance data in predicting adherence. 
Furthermore, we investigate which specific games possess stronger predictive power regarding adherence and identify key performance indicators within these games that drive this predictive ability. A study by Turunen et al. used various cognitive, demographic, lifestyle, and health-related variables to predict older adults’ adherence to computer-based cognitive training [18]. Their analysis included both bivariate and multivariate techniques, utilizing a Zero-Inflated Negative Binomial (ZINB) model to predict adherence. In contrast, our study incorporates not only these baseline predictors but also game performance data. This comprehensive analysis provides more robust and reasonable results, which can be used to personalize cognitive training programs and improve adherence among older adults. The insights obtained from this research could aid in tailoring adherence promotion strategies to support individuals who may need extra support to maintain adherence to cognitive training. Moreover, the identification of the most influential factors associated with adherence can facilitate the design of personalized and effective cognitive training programs that cater to the unique needs and abilities of the target population.
Materials and methods
The cognitive training trial
The dataset was obtained from a prior clinical trial conducted on the Mind Frontiers mobile-based cognitive training game suite, designed to explore how different message framings (positive-framed and negative-framed) about brain health influence adherence to a technology-based cognitive intervention and to identify individual differences that predict adherence [19]. Florida State University Institutional Review Board approved the study protocol (IRB #: 2017.20622) and informed consent form. The recruitment period of this study was between July 1, 2017, and March 1, 2018. All the recruited participants provided written consent with their signatures. No minors were included in the study. The trial recruited 118 older adults living in the community, with a mean age of 72.6 years and a standard deviation of 5.54. Among these older adults, 78 were female (66%), 38 were male (32%), and 2 were of unknown gender (2%) (see Table 1). Participants were randomly assigned to one of three groups: no message, positive-framed messages, or negative-framed messages (see Table 1). Those in the positive-framing group received messages emphasizing the benefits of engaging with cognitive training (e.g., “Regular mental challenge can have a positive impact on the brain”), while those in the negative-framing group received messages highlighting the risks of not engaging in such training (e.g., “Infrequent mental challenge can have a negative impact on the brain”) [19]. Participants were instructed to engage in the cognitive training program comprising seven separate games (see Table 2) for five days a week for 45 minutes each session. The collected data includes the levels achieved in the games, ranging from 1 to 58, as well as five possible outcomes: Defeat, Stalemate, Victory, Abort, and Not Yet Finished. Two phases make up the dataset used in this research [19]. Participants in Phase 1 were required to adhere to a strict timetable for 12 weeks, playing for 45 minutes a day, 5 days a week.
In Phase 2, the same participants were asked to play as frequently as they desired over an unstructured 6-week period.
Defining adherence and exploring predictive factors
As adherence cannot be determined in the absence of a prescribed schedule, we only examined the data gathered during the structured phase (Phase 1). Based on daily and weekly training interactions, quantitative and categorical weekly adherence at the minimal and full levels were established for the structured phase. Weekly adherence at the minimal level was defined as the number of days in a week on which a participant met the minimum daily requirement (>= 10 minutes) divided by five; weekly adherence at the full level was defined as the number of days in a week on which a participant met the full daily requirement (>= 36 minutes, representing 80 percent of 45 minutes) divided by five. The subsequent 10-week adherence at the minimal and full levels is the average of the corresponding weekly adherence values over the entire 10 weeks. Following the definition of minimal and full-level adherence, participants were divided into two groups using the median value of each measure.
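The adherence definitions above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the project's actual code; the function name and input format are assumptions.

```python
import numpy as np

def weekly_adherence(daily_minutes, minimal_cutoff=10, full_cutoff=36):
    """Weekly adherence from the five prescribed days' training minutes.

    minimal_cutoff and full_cutoff follow the paper's definitions:
    >= 10 minutes for minimal, >= 36 minutes (80% of 45 min) for full.
    """
    days = np.asarray(daily_minutes)
    minimal = np.sum(days >= minimal_cutoff) / 5.0
    full = np.sum(days >= full_cutoff) / 5.0
    return minimal, full

# A participant who trained 45, 40, 12, 0, and 50 minutes over the week:
m, f = weekly_adherence([45, 40, 12, 0, 50])
# m = 4/5 = 0.8 (four days >= 10 min); f = 3/5 = 0.6 (three days >= 36 min)

# 10-week adherence is then the mean of the ten weekly values, and
# participants are binarized at the median of that mean.
ten_week = np.mean([weekly_adherence(w)[0] for w in [[45, 40, 12, 0, 50]] * 10])
```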
We collected comprehensive game data from every participant throughout their engagement with these cognitive training games, encompassing the number of sessions played, the outcome of each play, and the level achieved in each play. Fig 1 offers a preliminary summary of the mean frequency at which individuals engaged in each of the seven games over the initial two-week and full twelve-week periods, and illustrates the proportion of the various outcomes observed over the same intervals. Differences in play counts may be due to differences in game duration, which affects the overall number of plays; play frequency alone is therefore insufficient for a direct comparison. Analyzing the win-loss ratios provides valuable information on the performance patterns within each game. By comparing the win-loss percentages of each game during the first two weeks with game performance across the whole twelve-week span, discernible patterns become evident. Notably, the game “Trader Jack” shows a greater percentage of stalemates than the other games, a pattern that persists during both the initial two weeks and the full twelve-week period. In contrast, the victory rates for games like “Riding Shotgun” and “Ante Up” are higher in the first two weeks than over the twelve-week timeframe, suggesting a variation in performance over time. S2 Fig. displays an area chart illustrating the fluctuations in the number of results per game per week over the 12 weeks: the x-axis spans weeks one to twelve, the y-axis shows the total number of game outcomes each week, and each hue symbolizes a specific game outcome. The results consistently align with the overall win-loss ratios.
Average sessions of different games in previous two weeks (a) and in whole trial (b).
We developed machine learning models for each game. From the game performance data, we created predictors including: the number of sessions played for each of the 7 games in the first two weeks; the proportion of each of the 7 games played, in terms of the number of sessions, in the first two weeks; the proportion of each outcome for each game over all sessions of that game in the first two weeks; the highest level reached in the initial two weeks; and the number of days a participant took to reach the median level of each game in the first two weeks. The original data variables and the calculation methods for these categories of variables are shown in Table 3 below. In defining the variable “number of days reaching middle level group,” we categorized participants’ maximum levels from the various games over the initial two weeks into three distinct level groups, low, middle, and high, using the 25th and 75th percentiles as thresholds. Specifically, levels falling within the 0-25th percentile were classified as the “low level group”, those within the 25th-75th percentile as the “middle level group”, and levels exceeding the 75th percentile as the “high level group”. “Number of days reaching middle level group” represents the number of days within the two-week span that a participant took to reach this middle level group. If a participant did not achieve the middle level group within these two weeks, this variable was set to 15 days.
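The session-, proportion-, and level-based predictors above can be derived from a session log with a few pandas group-bys. The sketch below uses a hypothetical log schema (column names `participant`, `game`, `outcome`, `level` are illustrative, not the project's actual schema):

```python
import pandas as pd

# Hypothetical first-two-weeks session log for two participants
log = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2],
    "game": ["Supply Run", "Supply Run", "Ante Up", "Sentry Duty",
             "Supply Run", "Ante Up"],
    "outcome": ["Victory", "Defeat", "Victory", "Stalemate",
                "Victory", "Victory"],
    "level": [3, 4, 2, 1, 6, 2],
})

# Number of sessions per game, and each game's share of all sessions
sessions = log.groupby(["participant", "game"]).size().unstack(fill_value=0)
proportions = sessions.div(sessions.sum(axis=1), axis=0)

# Highest level reached per game in the first two weeks
max_level = log.groupby(["participant", "game"])["level"].max().unstack()

# Proportion of each outcome within each (participant, game) pair
outcome_prop = (log.groupby(["participant", "game"])["outcome"]
                   .value_counts(normalize=True))
```

The "number of days reaching middle level group" predictor would additionally need a date column to count days until the 25th-75th percentile band is first reached.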
Prior to the beginning of this program, a survey was administered to gather participants’ baseline information, encompassing data on demographics, attitudes, and cognitive abilities. For a comprehensive overview of attitude and cognitive scores for all participants, please refer to S1 Table in the Support Information. The measurements of attitudinal and cognitive assessments comprise composite scores that assess technical proficiency, self-efficacy, subjective cognition, perceived benefits, objective reasoning, objective processing speed, objective memory instant recall, and objective memory delayed recall. These composite scores are computed using numerous z-scores, as shown in Table 4.
To assess the relationship between both continuous and categorical variables and the adherence outcomes, t-tests were conducted for continuous variables to determine whether their mean values differed significantly between participants who met the adherence threshold and those who did not. Additionally, Chi-square tests were applied to categorical variables to evaluate whether significant associations existed between the categorical variables and adherence. These analyses helped identify key predictors associated with adherence among both continuous and categorical variables. We chose those measures that showed statistical significance (alpha = 0.1) for the prediction models, and we ran separate analyses for minimal-level and full-level adherence to choose different sets of predictors. To mitigate the potential reduction in validity due to the small sample size in the “Unknown” gender category, we consolidated the gender variable into two groups: “Female” and “Male and Unknown.” This adjustment allowed us to apply the Chi-square test and is also consistent with our previous research [16]. The p-values for each profile factor are displayed in Table 5. For the prediction of 10-week adherence at the minimal level, a total of five z-score variables and three composite score variables were selected as predictors. The predictors of 10-week adherence at the full level were the participants’ text message reminder type, six z-score variables, and three composite score variables.
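The univariate screening described above can be sketched with SciPy. This is an illustrative re-implementation under assumed data structures, not the project's code; the function name and demo columns are hypothetical:

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_predictors(df, outcome_col, alpha=0.1):
    """Keep variables significantly associated with the binary adherence
    outcome at alpha = 0.1: t-test for continuous columns, Chi-square
    for categorical ones, mirroring the selection described above."""
    selected = []
    y = df[outcome_col]
    for col in df.columns.drop(outcome_col):
        if pd.api.types.is_numeric_dtype(df[col]):
            a, b = df.loc[y == 1, col], df.loc[y == 0, col]
            _, p = stats.ttest_ind(a, b, nan_policy="omit")
        else:
            table = pd.crosstab(df[col], y)
            _, p, _, _ = stats.chi2_contingency(table)
        if p < alpha:
            selected.append(col)
    return selected

# Synthetic demo: one strongly associated z-score, one noise variable,
# and one perfectly associated categorical variable
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
demo = pd.DataFrame({
    "recall_z": y * 2.0 + rng.normal(size=100),
    "noise_z": rng.normal(size=100),
    "site": np.where(y == 1, "A", "B"),
    "adherent": y,
})
selected = screen_predictors(demo, "adherent")
```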
Predicting adherence
This study is structured around three primary objectives: 1) To determine the cognitive games that exhibit the most robust predictive strength for adherence and to pinpoint the key performance indicators within these games that contribute to this predictive capacity. 2) To perform a comparative analysis to assess the relative predictive abilities of various profile factors versus game performance metrics in forecasting adherence. 3) To evaluate the effectiveness of ensemble models that integrate both profile factors and game performance data in predicting adherence over a subsequent ten-week period.
We employed diverse classification methods, including logistic regression, ridge regression, support vector machines, classification trees, and random forests, to predict the adherence from Week 3 to 12. The models were developed using two different sets of predictors. The first set included performance metrics from participants in seven cognitive games, with separate machine learning models for each game’s specific performance features. The second set of variables included baseline features, where z-scores and composite scores were used separately to create distinct models. This allowed for a comprehensive evaluation of the predictive power of the profile factors. Data normalization procedures were implemented to standardize the game performance features, ensuring comparability across various metrics. Additionally, a five-fold cross-validation approach was employed to assess the classification outcomes, enhancing the validity and reliability of the predictive models.
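The modeling setup described above (five classifier families, standardized features, five-fold cross-validation) can be sketched with scikit-learn. The data here is a synthetic stand-in, not the trial data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; in the study, X holds one game's performance features
# (or the baseline measures) and y the binarized 10-week adherence.
X, y = make_classification(n_samples=118, n_features=8, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "ridge": RidgeClassifier(),
    "svm": SVC(probability=True),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}

accs = {}
for name, model in models.items():
    # Scaling inside the pipeline keeps normalization within each CV fold
    pipe = make_pipeline(StandardScaler(), model)
    accs[name] = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
```

Fitting the scaler inside the cross-validation pipeline avoids leaking test-fold statistics into training, which matters at this sample size.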
Ensemble modeling is a machine learning technique that combines many models, known as estimators, to create predictions by allowing the models to vote for the predicted class. There are two categories of voting classifiers based on the method of voting: soft and hard voting [35], also called unanimous and majority voting [36]. In hard voting, the final class prediction is decided through a majority vote among the estimator models. In soft voting, each estimator calculates the probability of each class, and these probabilities are combined to make the final class prediction [35]. Previous research has demonstrated the benefits of hybrid ensemble classifiers and voting-based ensembles in improving prediction accuracy and model robustness [36, 37]. Employing a soft voting ensemble technique, the game performance models (“Supply Run”, “Ante Up”, and “Sentry Duty”) that exhibited higher predictability of adherence were combined with the baseline model (composite score model) that demonstrated higher predictability of adherence to forecast minimal and full adherence. This homogeneous ensemble employs logistic regression as the base learner and aggregates predictions from the individual models through a soft voting mechanism to enhance prediction accuracy and reduce the variance and bias seen in individual models.
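Because each base learner in this ensemble is trained on its own feature block (one per game, plus the composite scores), soft voting amounts to averaging the class probabilities across the per-block logistic regressions. A minimal sketch with synthetic stand-in data (the block slicing is illustrative, not the actual feature layout):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in feature matrix; in the study the four blocks are the
# "Supply Run", "Ante Up", and "Sentry Duty" performance features
# plus the baseline composite scores.
X, y = make_classification(n_samples=118, n_features=12, random_state=0)
blocks = [X[:, 0:3], X[:, 3:6], X[:, 6:9], X[:, 9:12]]

# One logistic regression per block: the four base learners
base_models = [LogisticRegression(max_iter=1000).fit(Xb, y) for Xb in blocks]

# Soft voting: average the class-1 probabilities across base models,
# then threshold at 0.5 for the final class prediction
probs = np.mean([m.predict_proba(Xb)[:, 1]
                 for m, Xb in zip(base_models, blocks)], axis=0)
pred = (probs >= 0.5).astype(int)
```

scikit-learn's `VotingClassifier(voting="soft")` implements the same averaging when all estimators share one feature matrix; the manual form above accommodates per-model feature blocks.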
We utilized accuracy and Area Under the Receiver Operating Characteristic (AUROC) to evaluate the classification models and visualized them with line and radar charts. The AUROC is a widely used metric for evaluating the performance of classification models, particularly in binary classification tasks [38]. To elucidate which independent variables derived from game performance most effectively predict adherence, we employed SHAP (SHapley Additive exPlanations) values within the logistic regression framework for each game. SHAP values, based on game theory’s Shapley values, offer a rigorous and consistent approach to measuring the contribution of each feature to the predictive model [39]. The source code and data used for data analysis and model implementation can be accessed at the following GitHub repository: https://github.com/YuanyingPang/APPT_Game_Performance_Analysis.
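For a linear model such as logistic regression, SHAP values have a closed form (the form the `shap` library's `LinearExplainer` uses under a feature-independence assumption): feature j's contribution to sample i is its coefficient times the feature's deviation from the sample mean, on the log-odds scale. A sketch with synthetic stand-in data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=118, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Closed-form linear SHAP: coef_j * (x_ij - mean(x_j)) is feature j's
# contribution to sample i's log-odds relative to the average sample.
shap_values = model.coef_[0] * (X - X.mean(axis=0))

# Global importance: rank features by mean absolute SHAP value
importance = np.abs(shap_values).mean(axis=0)
ranking = np.argsort(importance)[::-1]
```

A useful sanity check is additivity: each row of `shap_values` sums to that sample's log-odds minus the log-odds of the mean sample.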
Results
Fig 2 illustrates a line chart where the x-axis represents the various machine learning models. Each data point on the chart corresponds to the combined average accuracy or AUROC score. The blue line represents the mean accuracy/AUROC score of the machine learning models applied to the seven cognitive games to predict adherence. For example, the starting point on the blue line in the right panel of Fig 2(A) is the average accuracy achieved by applying seven logistic regression models to their corresponding games. The orange line represents the accuracy/AUROC values obtained from the models with the two types of baseline predictors: composite scores and z-scores. Based on Fig 2, we found that, on average, features obtained from game performance are more reliable in predicting adherence than baseline features. Moreover, the average accuracy and AUROC values of all machine learning models that used game performance features were higher than 0.7 when predicting minimal adherence. All models except the Decision Tree model achieved average accuracy and AUROC scores above 0.7 when using game performance measures to predict full adherence. These metrics indicate a moderately strong ability of the models to classify participants based on their game performance in the previous two weeks. Accuracy, representing the proportion of correctly predicted instances, suggests that our models can reliably identify adherence behaviors. The AUROC value, which measures the model’s ability to distinguish between adherent and non-adherent participants, further supports the models’ discriminative power. We also noted that the logistic regression and ridge regression models regularly outperformed the other models, which supports our choice of logistic regression as the base learner for the subsequent ensemble modeling.
Line Graph Illustrating the Average Predictive Performance of Different Models for Minimal (a) and Full Adherence (b); Note: The blue line represents average model accuracy using game performance predictors. The orange line indicates accuracy/AUROC from models with baseline predictors: composite score and z-scores.
To identify the cognitive game with the most robust predictive power for adherence, we visualized the performance metrics, as illustrated in Fig 3. In these radar charts, each axis represents a different machine learning model, and the distinct colors correspond to different games. The radar charts elucidate the differential predictive capabilities of each game across various models, highlighting their unique strengths in adherence forecasting. When synthesizing the predictive outcomes for both minimal and full adherence for each game, the aggregated accuracy rate consistently exceeds the 70% threshold. Notably, “Supply Run”, “Ante Up”, and “Sentry Duty” excelled in terms of predictive validity related to adherence levels. When using the performance of these three games in the first two weeks to predict the following 10-week adherence, both AUROC and AUPRC values were above 0.8, indicating that they were highly effective in identifying adherents or non-adherents. These games were thus strategically chosen to underpin the ensemble modeling process, aiming to enhance the accuracy of adherence predictions by integrating them with baseline feature data.
Comparative Analysis of Classification Models Using Game Performance Metrics to Predict Levels of Adherence: (a) Minimal Adherence, and (b) Full Adherence.
Table 6 presents the detailed performance metrics of the ensemble model in comparison to the four logistic regression base models. The ensemble model, which synthesizes the predictive outputs from these base models, attained an AUROC of 0.86 and an AUPRC of 0.85 for predicting minimal adherence. For predicting full adherence, the ensemble model achieved an AUROC of 0.86 and an AUPRC of 0.86. These results signify an improvement of more than 0.03 on both metrics relative to the most proficient individual base model.
By calculating the SHAP value for each independent variable, we were able to ascertain its relative importance within the model. These values were then ranked for each variable across the logistic regression models corresponding to the seven games. Subsequently, we computed the average ranking of importance for each variable to derive a final hierarchy of variable importance, which is delineated in Table 7.
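The rank-averaging step can be sketched with pandas. The per-game ranks below are hypothetical placeholders (the real values come from the seven per-game logistic regression models), shown only to illustrate the aggregation:

```python
import pandas as pd

# Hypothetical per-game importance ranks (1 = most important) for three
# variables shared across the per-game models; real ranks come from the
# SHAP analysis of each game's logistic regression.
ranks = pd.DataFrame({
    "Supply Run":  {"max_level": 1, "n_sessions": 2, "play_share": 3},
    "Ante Up":     {"max_level": 2, "n_sessions": 1, "play_share": 3},
    "Sentry Duty": {"max_level": 1, "n_sessions": 3, "play_share": 2},
})

# Final hierarchy: average each variable's rank across the game models,
# then sort ascending (lower mean rank = more important overall)
final_order = ranks.mean(axis=1).sort_values()
```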
The aggregated SHAP value rankings reveal a consistent pattern across the models, indicating that the most salient predictors for predicting both minimal and full adherence are the same. Specifically, the variables that consistently emerged as the most influential are the highest level attained by the participants within the initial two weeks, the number of game sessions they played and the percentage of overall gameplay instances. These findings underscore the significance of early engagement and sustained interaction as key determinants of adherence.
To further investigate the relationship between individual game performance and subsequent 10-week adherence, we utilized the summary plots in S1 Fig. to demonstrate the effects of different game performances. The SHAP summary plot is a highly efficient method for evaluating the significance and effects of features within our predictive model. Each data point on the summary plot represents the SHAP value assigned to a particular feature for a particular instance. For example, a feature characterized by mainly blue dots on the right side of the plot suggests that lower values of this variable enhance the probability of a positive outcome. We provide concise visual representations of the three most influential game performance metrics, determined by their average SHAP values. We observed a negative correlation between participants’ maximum level over the first two weeks and their likelihood of remaining engaged with the game. There was a positive correlation between the number of games played during the initial two weeks and the likelihood of continued adherence to cognitive training. Moreover, the percentage of different games played during the initial two weeks influenced adherence differently. In the context of the game “Supply Run,” there is a positive correlation between the percentage of games played and the likelihood of participants meeting both the minimal and full adherence standards. In the game “Riding Shotgun”, a higher percentage correlated with a greater likelihood of achieving the full adherence level. For the remaining games, we observed a negative correlation between the percentage of the game played in the previous two weeks and the probability of achieving either minimal or full adherence.
Discussion
Our research investigates the factors influencing overall adherence to cognitive training, incorporating baseline and game performance predictors. Our findings reveal that age and gender do not significantly affect adherence, which is consistent with prior research [18, 40]. Additionally, some researchers have suggested that memory function may impact adherence [18], which is aligned with our results. We found that delayed and immediate recall significantly affect overall adherence. However, some scholars argue that cognitive capability does not reliably predict commitment, persistence, or compliance with cognitive training [40]. Their study, however, only measured working memory and fluid reasoning as cognitive capabilities. Our study found that the composite score for objective reasoning also did not significantly affect adherence. Compared to the previously published study [16] using the same dataset, we found that by adding the game play data in the first two weeks, we can significantly improve the prediction accuracy of overall adherence even though the overall adherence is for 10 weeks, as opposed to 12 weeks used in the previous study [16].
Although personality predictors have some predictive ability regarding adherence, our primary finding is that game performance effectively predicts adherence to gamified cognitive training programs. The findings provide strong evidence that performance in specific cognitive games, namely “Supply Run,” “Ante Up,” and “Sentry Duty,” has significant predictive ability for adherence levels. Performance measures such as the maximum level achieved, the total number of game sessions played, and the proportion of overall gameplay were identified as important indicators of adherence. Mean accuracy and AUROC values consistently exceeding the 0.7 threshold across all machine learning models highlight the strength and reliability of game performance variables in predicting both minimal and full adherence. The radar maps clearly display the predictive strengths of each game, emphasizing their distinct contributions to adherence prediction. These strengths may stem from differences in game design. Previous research by Boot et al. found that game enjoyment and perceived challenge were related to motivation and adherence [41], while Hu et al. highlighted the importance of enjoyment in encouraging sustained play in cognitive assessment games [42].
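The cross-validated AUROC evaluation referred to above follows a standard pattern. The sketch below shows the general shape of a five-fold AUROC estimate with scikit-learn; the features and labels are synthetic stand-ins, not our study data, and the random forest is just one of the model families we compared.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
# Hypothetical stand-ins for early game-performance features and adherence labels
X = rng.normal(size=(150, 5))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=150) > 0).astype(int)

# Five-fold stratified cross-validation scored by area under the ROC curve
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auroc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                        cv=cv, scoring="roc_auc")
mean_auroc = auroc.mean()
```

Reporting the mean (and spread) of `auroc` across folds, rather than a single train/test split, is what supports claims about a model consistently exceeding a threshold such as 0.7.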
Although we did not measure participants’ feelings about these games, we inferred some information from their early-stage game performance. The relationships between game performance and adherence rates are particularly noteworthy. The strong correlation between the number of sessions played and adherence suggests that early involvement is indicative of long-term adherence. The games in which participants played more sessions may offer greater enjoyment, leading to higher adherence. Conversely, the negative association between the highest level attained at the beginning of the program and continued adherence implies that maintaining an appropriate level of difficulty is crucial for long-term adherence. Some studies have used Dynamic Difficulty Adjustment techniques to maintain difficulty levels and improve player engagement [43]. Future research could explore the use of such techniques to enhance adherence to cognitive training among older adults.
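Dynamic Difficulty Adjustment can be as simple as a threshold rule on the player’s recent success rate. The function below is an illustrative sketch of that idea only; the thresholds and step size are arbitrary assumptions, not values from [43] or from the games in our study.

```python
def adjust_difficulty(level, recent_outcomes, low=0.3, high=0.7, step=1):
    """Nudge the difficulty level so the recent success rate stays in [low, high].

    recent_outcomes is a list of 1 (win) / 0 (loss) for the last few sessions.
    A plain threshold rule; real DDA systems use richer player models.
    """
    if not recent_outcomes:
        return level  # no data yet, leave difficulty unchanged
    rate = sum(recent_outcomes) / len(recent_outcomes)
    if rate > high:                  # winning too often -> make it harder
        return level + step
    if rate < low:                   # losing too often -> make it easier
        return max(1, level - step)  # never drop below level 1
    return level                     # success rate in the target band
```

In a training app, the last ten or so session outcomes would be tracked per game and the level re-evaluated after each session, keeping the challenge near the band where engagement is thought to be highest.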
Furthermore, the combined ensemble model that incorporates both baseline characteristics and the first two weeks’ game performance data demonstrated superior performance compared to the individual logistic regression base models. This suggests that a comprehensive strategy considering both individual characteristics and game-related data yields a more accurate prediction of adherence. The findings have important implications for the development of tailored cognitive training programs. By identifying early indicators of potential lapses, interventions can be adjusted to boost user involvement and enhance adherence rates.
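For concreteness, one way to combine the two sources of predictors is a soft-voting ensemble of two logistic regression base models — one trained on baseline measures, one on early game performance — whose predicted probabilities are averaged. The sketch below uses synthetic placeholder features and should be read as an illustration of the ensemble idea, not a reproduction of our exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200
X_base = rng.normal(size=(n, 4))   # hypothetical baseline measures
X_game = rng.normal(size=(n, 3))   # hypothetical first-two-week game metrics
# Synthetic adherence labels influenced by both feature groups (illustrative)
y = (X_base[:, 0] + X_game[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# One logistic regression base model per feature group
m_base = LogisticRegression().fit(X_base, y)
m_game = LogisticRegression().fit(X_game, y)

# Soft vote: average the base models' predicted adherence probabilities
p = (m_base.predict_proba(X_base)[:, 1] + m_game.predict_proba(X_game)[:, 1]) / 2
y_hat = (p >= 0.5).astype(int)
```

Because each base model sees a different feature group, the averaged probability lets whichever source is more informative for a given participant pull the final prediction, which is one plausible reason such ensembles outperform either base model alone.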
A few limitations should be noted to interpret the study accurately. First, the sample size is relatively small, and there were significantly more females than males in the study, which may limit the generalizability of the results. Nonetheless, we believe that the framework of using the first two weeks of play data and baseline measures to predict adherence in the remaining weeks can generalize to other domains where a gamified intervention is used for treatment or behavioral change. Second, gameplay data such as play time may be inaccurate, because participants may have set the tablet aside while the game was still running. We recommend that future studies use more advanced tracking technologies and improved user interfaces to reduce errors in data capture.
Conclusions
The findings of this study provide a significant contribution to the field of cognitive training by showing that game performance data can accurately predict user adherence. This predictive capability allows for the identification of individuals who are at risk and may need extra support to stay engaged in cognitive training programs. Although the study has uncovered encouraging results, it has certain limitations. We did not investigate the impact of various game settings on adherence to particular games, which is a potential avenue for future research. Furthermore, the study’s focus on older adults may limit the applicability of the findings to other demographic groups. Ultimately, this research establishes a basis for the creation of adaptive, individualized cognitive training programs. By leveraging the predictive power of early game performance, we can potentially optimize the effectiveness of these programs, thereby promoting cognitive well-being in older adults. Future research should focus on improving the accuracy of the prediction models and broadening their range of applications so that cognitive training remains a valuable and advantageous tool for a broader population.
Supporting information
S1 Fig. Evaluation of game performance effects using SHAP value summary plots: (a) minimal adherence; (b) full adherence.
https://doi.org/10.1371/journal.pone.0311279.s001
(PDF)
S2 Fig. Weekly fluctuations in game outcomes over 12 weeks.
https://doi.org/10.1371/journal.pone.0311279.s002
(PDF)
S1 Table. Descriptive statistics of attitude and cognitive scores for all participants.
https://doi.org/10.1371/journal.pone.0311279.s003
(PDF)
References
- 1. Kamiya Y., Lai N. M. S., and Schmid K., “World population ageing 2020 highlights,” United Nations, 2020.
- 2. Kelley B. J. and Petersen R. C., “Alzheimer’s Disease and Mild Cognitive Impairment,” Neurol. Clin., vol. 25, no. 3, pp. 577–609, Aug. 2007, pmid:17659182
- 3. Mather M., “Aging and cognition,” WIREs Cogn. Sci., vol. 1, no. 3, pp. 346–362, 2010, pmid:26271375
- 4. Areosa S. A., Sherriff F., and McShane R., “Memantine for dementia,” Cochrane Database Syst. Rev., no. 2, p. CD003154, Apr. 2005, pmid:15846650
- 5. Birks J. S., “Cholinesterase inhibitors for Alzheimer’s disease,” Cochrane Database Syst. Rev., no. 1, 2006, pmid:16437532
- 6. Center for Drug Evaluation and Research, “FDA approves treatment for adults with Alzheimer’s disease,” FDA, Jul. 2024, Accessed: Jul. 09, 2024. [Online]. Available: https://www.fda.gov/drugs/news-events-human-drugs/fda-approves-treatment-adults-alzheimers-disease
- 7. van Dyck C. H. et al., “Lecanemab in Early Alzheimer’s Disease,” N. Engl. J. Med., vol. 388, no. 1, pp. 9–21, Jan. 2023, pmid:36449413
- 8. Acevedo A. and Loewenstein D. A., “Nonpharmacological Cognitive Interventions in Aging and Dementia,” J. Geriatr. Psychiatry Neurol., vol. 20, no. 4, pp. 239–249, Dec. 2007, pmid:18004010
- 9. Chaldogeridis A. and Tsiatsos T., “Gamification Techniques and Best Practices in Computerized Working Memory Training: A Systematic Literature Review,” Appl. Sci., vol. 12, no. 19, p. 9785, 2022.
- 10. Lumsden J., Edwards E. A., Lawrence N. S., Coyle D., and Munafò M. R., “Gamification of Cognitive Assessment and Cognitive Training: A Systematic Review of Applications and Efficacy,” JMIR Serious Games, vol. 4, no. 2, p. e5888, Jul. 2016, pmid:27421244
- 11. Ballesteros S. et al., “Brain training with non-action video games enhances aspects of cognition in older adults: a randomized controlled trial,” Front. Aging Neurosci., vol. 6, 2014, Accessed: Apr. 05, 2023. [Online]. Available: https://www.frontiersin.org/articles/10.3389/fnagi.2014.00277 pmid:25352805
- 12. Simons D. J. et al., “Do ‘Brain-Training’ Programs Work?,” Psychol. Sci. Public Interest, vol. 17, no. 3, pp. 103–186, Oct. 2016, pmid:27697851
- 13. Blocker K. A., Wright T. J., and Boot W. R., “Gaming preferences of aging generations,” Gerontechnology, vol. 12, no. 3, pp. 174–184, Jun. 2014, pmid:29033699
- 14. Scase M, Marandure B, Hancox J, Kreiner K, Hanke S, and Kropf J, “Development of and Adherence to a Computer-Based Gamified Environment Designed to Promote Health and Wellbeing in Older People with Mild Cognitive Impairment.,” Stud Health Technol Inf., vol. 236, pp. 348–355, 2017. pmid:28508817
- 15. He Z. et al., “New opportunities for the early detection and treatment of cognitive decline: adherence challenges and the promise of smart and person-centered technologies,” BMC Digit. Health, vol. 1, no. 1, p. 7, Feb. 2023,
- 16. He Z. et al., “A Machine-Learning Based Approach for Predicting Older Adults’ Adherence to Technology-Based Cognitive Training,” Inf. Process. Manag., vol. 59, no. 5, p. 103034, Sep. 2022, pmid:35909793
- 17. Singh A. et al., “Deep learning-based predictions of older adults’ adherence to cognitive training to support training efficacy,” Front. Psychol., vol. 13, 2022, Accessed: Apr. 05, 2023. [Online]. Available: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.980778 pmid:36467206
- 18. Turunen M et al., “Computer-based cognitive training for older adults: Determinants of adherence.,” PLoS One, vol. 14, no. 7, p. e0219541, 2019, pmid:31291337
- 19. Harrell E. R., Roque N. A., Boot W. R., and Charness N., “Investigating message framing to improve adherence to technology-based cognitive interventions,” Psychol. Aging, vol. 36, pp. 974–982, 2021, pmid:34460281
- 20. Boot W. R. et al., “Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors,” The Gerontologist, vol. 55, no. 3, pp. 404–411, Jun. 2015, pmid:24107443
- 21. Roque N. A. and Boot W. R., “A New Tool for Assessing Mobile Device Proficiency in Older Adults: The Mobile Device Proficiency Questionnaire,” J. Appl. Gerontol. Off. J. South. Gerontol. Soc., vol. 37, no. 2, pp. 131–156, Feb. 2018, pmid:27255686
- 22. Schwarzer R. and Jerusalem M., “General Self-Efficacy Scale,” 1995,
- 23. Lawton M. P. and Brody E. M., “Assessment of Older People: Self-Maintaining and Instrumental Activities of Daily Living,” Nurs. Res., vol. 19, no. 3, p. 278, Jun. 1970.
- 24. Sullivan M. J., Edgley K., and Dehoux E., “A survey of multiple sclerosis: I. Perceived cognitive problems and compensatory strategy use,” Can. J. Rehabil., vol. 4, no. 2, pp. 99–105, 1990.
- 25. Berry J. M., West R. L., and Dennehey D. M., “Reliability and validity of the Memory Self-Efficacy Questionnaire,” Dev. Psychol., vol. 25, no. 5, pp. 701–713, 1989,
- 26. Harrell E. R., Kmetz B., and Boot W. R., “Is Cognitive Training Worth It? Exploring Individuals’ Willingness to Engage in Cognitive Training,” J. Cogn. Enhanc., vol. 3, no. 4, pp. 405–415, Dec. 2019, pmid:31773088
- 27. Rabipour S. and Davidson P. S. R., “Do you believe in brain training? A questionnaire about expectations of computerised cognitive training,” Behav. Brain Res., vol. 295, pp. 64–70, Dec. 2015, pmid:25591472
- 28. Arthur W. and Day D. V., “Development of a Short form for the Raven Advanced Progressive Matrices Test,” Educ. Psychol. Meas., vol. 54, no. 2, pp. 394–403, Jun. 1994,
- 29. Raven J., “Standard progressive matrices. Raven Manual Section 3.” Oxford: Oxford Psychologists, 1992.
- 30. Ekstrom R. B. and Harman H. H., Manual for Kit of Factor-Referenced Cognitive Tests. Educational Testing Service, 1976.
- 31. Edwards J. D. et al., “The useful field of view test: Normative data for older adults,” Arch. Clin. Neuropsychol., vol. 21, no. 4, pp. 275–286, May 2006, pmid:16704918
- 32. Wechsler D., “Wechsler Adult Intelligence Scale—Third Edition,” 1997,
- 33. Schmidt M., Rey Auditory Verbal Learning Test: A Handbook, vol. 17. Los Angeles, CA: Western Psychological Services, 1996.
- 34. Brandt J., “Hopkins Verbal Learning Test,” 1991,
- 35. Kumari S., Kumar D., and Mittal M., “An ensemble approach for classification and prediction of diabetes mellitus using soft voting classifier,” Int. J. Cogn. Comput. Eng., vol. 2, pp. 40–46, Jun. 2021,
- 36. Gandhi I. and Pandey M., “Hybrid Ensemble of classifiers using voting,” in 2015 International Conference on Green Computing and Internet of Things (ICGCIoT), Oct. 2015, pp. 399–404. https://doi.org/10.1109/ICGCIoT.2015.7380496
- 37. Devvrit, Cheng M., Hsieh C.-J., and Dhillon I., “Voting based ensemble improves robustness of defensive models,” Nov. 27, 2020, arXiv: arXiv:2011.14031.
- 38. Bradley A. P., “The use of the area under the ROC curve in the evaluation of machine learning algorithms,” Pattern Recognit., vol. 30, no. 7, pp. 1145–1159, 1997.
- 39. Lundberg S. and Lee S.-I., “A Unified Approach to Interpreting Model Predictions,” arXiv.org. Accessed: Mar. 03, 2024. [Online]. Available: https://arxiv.org/abs/1705.07874v2
- 40. Tullo D., Feng Y., Pahor A., Cote J. M., Seitz A. R., and Jaeggi S. M., “Investigating the Role of Individual Differences in Adherence to Cognitive Training,” J. Cogn., vol. 6, no. 1, p. 48, 2023, pmid:37636013
- 41. Boot W. R., Souders D., Charness N., Blocker K., Roque N., and Vitale T., “The Gamification of Cognitive Training: Older Adults’ Perceptions of and Attitudes Toward Digital Game-Based Interventions,” presented at Human Aspects of IT for the Aged Population. Design for Aging: Second International Conference, ITAP 2016, Part of HCI International 2016, Toronto, ON, Canada: Springer International Publishing, Jul. 2016, pp. 290–300. Accessed: Jun. 16, 2024. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-319-39943-0_28
- 42. Hu Y. Z., Urakami J., Wei H, Vomberg L. H, and Chignell M, “Longitudinal analysis of sustained performance on gamified cognitive assessment tasks,” Appl. Neuropsychol. Adult, vol. 0, no. 0, pp. 1–25, 2022, pmid:35196467
- 43. Kristan D., Bessa P., Costa R., and de Carvalho C. V., “Creating Competitive Opponents for Serious Games through Dynamic Difficulty Adjustment,” Information, vol. 11, no. 3, p. 156, 2020.