
The current landscape and future of tablet-based cognitive assessments for children in low-resourced settings

  • Megan S. McHenry,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Validation, Writing – original draft, Writing – review & editing

    msuhl@iu.edu

    Affiliation Department of Pediatrics, Indiana University School of Medicine, Indianapolis, United States of America

  • Debarati Mukherjee,

    Roles Conceptualization, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Indian Institute of Public Health—Bengaluru, Life Course Epidemiology, Bengaluru, Karnataka, India

  • Supriya Bhavnani,

    Roles Conceptualization, Data curation, Methodology, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Child Development Group, Sangath, India

  • Amir Kirolos,

    Roles Conceptualization, Data curation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Women and Children’s Health, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom

  • Joe D. Piper,

    Roles Conceptualization, Data curation, Resources, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Blizard Institute, Queen Mary University of London, London, United Kingdom

  • Maria M. Crespo-Llado,

    Roles Conceptualization, Data curation, Resources, Writing – review & editing

    Affiliation Department of Women and Children’s Health, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom

  • Melissa J. Gladstone

    Roles Conceptualization, Formal analysis, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Women and Children’s Health, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom

Abstract

Interest in measuring cognition in children in low-resourced settings has increased in recent years, but options for cognitive assessments are limited. Researchers are faced with challenges when using existing assessments in these settings, such as trained workforce shortages, less relevant testing stimuli, limitations of proprietary assessments, and inadequate parental knowledge of cognitive milestones. Tablet-based direct child assessments are emerging as a practical solution to these challenges, but evidence of their validity and utility in cross-cultural settings is limited. In this overview, we introduce key concepts of this field while exploring the current landscape of tablet-based assessments for low-resourced settings. We also make recommendations for future directions of this relatively novel field. We conclude that tablet-based assessments are an emerging and promising method of assessing cognition in young children. Further awareness and dissemination of validated tablet-based assessments may increase capacity for child development research and clinical practice in low-resourced settings.

Author summary

Tools that measure cognitive skills are key to identifying children in need of critical interventions to reach their full potential. However, there are barriers inherent in the application of traditional tools, such as the need for trained professionals, the relevance of the testing items, and the time needed to complete the evaluations. These barriers disproportionately impact populations living in low-resourced settings. Emerging research indicates that tablet computers can readily administer essential cognitive testing across low-resourced settings and overcome many of the barriers associated with traditional tools. However, no resources are available that succinctly review the key considerations and available options for tablet-based cognitive assessments. Therefore, in this review, we summarize the known tablet-based cognitive assessments used in low-resourced settings and evaluate the many factors important for applying computerized cognitive assessments across different cultural contexts. We also offer recommendations for the future development of tablet-based assessments, which may help bridge the gap in access to cognitive testing within low-resourced settings. Overall, we determine that tablet-based assessments are a promising solution in these settings, and further insight into and awareness of these tools may increase their utility.

1. Introduction

As global focus shifts from ensuring children survive to enabling them to thrive, millions of children living in low-resourced settings are found to be at risk of not attaining their full developmental potential [1]. This is due to their disproportionately high exposure to risk factors for poor development during critical periods in childhood [2], including malnutrition, iron-deficiency anemia, infectious diseases, exposure to violence, exposure to toxins, extreme poverty, maternal depression, and inadequate cognitive stimulation [3]. While interventions and programs are being implemented to address these risk factors [4,5], we need to understand how best to optimize neurocognitive outcomes within the context of scarce resources.

Healthy child development comprises a complex interplay of rapid physiological, psychological, and physical changes in response to early environmental experiences, with lasting effects on multiple domains of development, including cognition. Cognitive development in the preschool years, through which children develop the skills to acquire, assimilate, and apply knowledge, has been demonstrated to be predictive of later outcomes such as IQ and academic achievement [6,7]. Executive function, a key component of cognitive development, is our ability to temporarily manipulate information mentally (working memory), generate different solutions to a problem (cognitive flexibility), and maintain impulse control (inhibition). Executive function enables us to plan, focus attention, and set and achieve goals [8], and it predicts math, reading, and science achievement in school [9–11]. Deficits in cognitive ability and executive function have long-term impacts across the life-course, starting with poor educational attainment and resulting in loss of adult income, thus impacting the livelihood of families, communities, and countries [4]. Therefore, high-quality measures of cognitive development, including executive functioning, are essential to identify children at risk of not attaining their full developmental potential, triage them into timely interventions, and, finally, monitor the impact of those interventions on developmental trajectories within clinical, research, or community-based settings.

Many tools have been used to measure cognition and, in some cases, executive function in low-resourced settings. Assessing cognitive ability, and in particular executive function, in children within these settings may be impeded by a number of challenges. These include using tools with appropriate cultural and language adaptations, training a workforce for high-quality administration, and utilizing tools with adequate reliability and validity, all while maintaining affordability and scalability [12,13].

Currently, many neurocognitive assessments depend on observations of child behavior conducted by child development and neuropsychology specialists, who undergo substantial training to become proficient in high-quality and consistent administration, scoring, and interpretation of assessment tools. Although capacity building and infrastructure in psychological testing have improved in recent years, many countries still have few trained workers qualified to implement these tools [14]. Examples of tools used for measuring cognition are the Kaufman-ABC or the Wechsler Preschool and Primary Scale of Intelligence (WPPSI) [15,16] and, in younger children, the Bayley Scales for Infant and Toddler Development [17]. Executive function has been measured with tools including, but not limited to, the NEuroPSYchological Assessment (NEPSY) [18,19], the knock-tap tests [20], the Spin the Pots task [21], and parent report measures such as the Behavior Rating Inventory of Executive Function (BRIEF) [22]. These all vary in reliability and validity [13]. Most of these tools have been developed and validated within high-resourced settings, although some attempts have been made to develop versions for other settings. This can present further challenges, as most items in neuropsychological assessment tools are contextually related to the settings where the tools were developed and validated, therefore requiring lengthy and expensive adaptation processes for cross-cultural use. This has been attempted in some settings, e.g., Kilifi, Kenya, where a battery was made for an African context; however, difficulties remain in the capacity to scale up training, application, and measurement [23]. Furthermore, tools created in high-resourced contexts are often proprietary, and the cost to administer them in low-resourced settings can be prohibitive. There is, thus, a paucity of contextually relevant, scalable cognitive assessment tools developed, validated, and used within low-resourced settings [24].
We define low-resourced settings as those with a population facing health inequities that negatively impact child development, typically in low- and middle-income countries.

Advances in technology, including the emergence of tablet computers, could be leveraged to aid in the scalable evaluation of cognition, and particularly executive function, across a wide range of settings. Tablet computers are readily available, inexpensive, portable, and functional without internet access. Paper-based childhood developmental assessment tools are increasingly being deployed on tablet computers using basic open data kits or similar programs. This includes the Malawi Developmental Assessment Tool (MDAT) [25] and the Global Scales of Early Development (GSED) [26]. While this standardizes administration and scoring by minimizing errors, these tools either rely on parent report, assuming parental knowledge of developmental milestones, or on behavioral observations of children by non-specialists, which itself is resource-intensive and only partially addresses the workforce challenge.

In recent years, computerized tasks on tablet computers have increasingly been used to evaluate child performance directly. This has the potential to overcome many current challenges of measuring cognition in children globally. These tools permit the administration and scoring of a broad array of tasks that measure specific domains of cognitive functioning with minimal potential for error, while also having the potential to be gamified [27], increasing children’s interest in these cognitive tasks. Furthermore, with cameras (to capture images and videos during task performance), accelerometers and gyroscopes (to estimate motion and force on the screen) [19,20], timers (to assess latency in responses) [28], and microphones (to capture audio), developers can tap into a wide range of child responses to assess their cognitive abilities [29]. Such nuanced variables are not feasibly evaluated with traditional pen-and-paper administration of cognitive assessment tools [30,31]. Because the cognitive tasks are administered by the tablet computer rather than through skilled observation of child behavior, administration and scoring require very little workforce support, enabling non-specialists with minimal training to administer tasks. While smartphone technology has similar strengths, the smaller screen size presents some limitations and, at this point, only a few cognitive tools are under development for smartphone use. In sum, tablet computers provide opportunities to perform high-quality cognitive assessments of children in a manner that is easily scalable and practical in low-resourced settings.

While this technology appears to be the future of cognitive testing, no known summary of these tools exists. This overview aims to summarize the current landscape of tablet-based cognitive assessment tools used in low-resourced settings that directly measure child performance. We pose considerations for use of these tools with the existing state of the evidence and make recommendations for ways in which this novel field can move forward. This overview is for those who may want to use tools for programmatic and research evaluation and therefore specifically targets information on the feasibility and validity of the tools in their current format, when used in low-resource settings.

2. Methodology for compiling tools

We identified tablet-based cognitive assessment tools by performing a general scoping literature search as well as by reaching out to international content experts working in the field. We aimed to identify peer-reviewed articles providing information on tablet-based cognitive assessment tools from electronic databases that included PubMed and Google Scholar, identifying any publications from January 2000 to September 2021. The search terms used included “cognition,” “executive function,” “pediatrics,” “tablet-based assessments,” and “children.” Inclusion criteria required that the assessment measured cognition, was used in children under 18 years of age, was used in low-resourced settings, and was administered on a tablet computer. Tools were excluded if they were only used in well-resourced settings or performed only on smartphones or laptop/desktop computers. Much of the information gathered required direct interviews with the developers and was not available through peer-reviewed articles. We therefore supplemented our literature search through general search engines, such as Google and Bing, and sought topic experts (i.e., those who lead and consult on projects measuring cognition in low-resourced settings, but do not themselves develop tools) to identify further tools. We emailed each application developer requesting an interview and further information about any other known tablet-based cognitive assessment tools used in low-resourced settings. Through our review of published papers, websites, and interviews with developers, we collected data on the domains of cognition and executive function measured, time for training and administration, country of use and present adaptations, psychometric properties of each tool (as published to date), and feasibility of use.

3. Current landscape of tablet-based cognitive assessment tools for children

Numerous commercial and non-commercial tablet-based cognitive assessment tools have been developed in recent years. From our search, we identified 16 tools and described the characteristics of each in Table 1 and Fig 1. Most tools were developed in North America, Europe, or Australia and initially used the English language, which was subsequently translated into other languages. Two tools had significant development work performed in India (DEEP and START), which included formative work in the community and iterative development of the user interface through testing with non-specialist administrators and the target age range of children [32]. One tool, the Educational Neuroscience App-Based Learning Environment (ENABLE), has been further developed in Brazil. From interviews with the tool developers, the primary users of these tools to date are researchers, with administration being performed by research staff or community health workers. However, some tools, such as the Early Years Toolbox and NIH Toolbox, have also been used by educators and clinicians.

Table 1. Characteristics of tablet-based cognitive assessments.

https://doi.org/10.1371/journal.pdig.0000196.t001

Fig 1. Synthesis across tablet-based cognitive assessments.

The names of the tools are listed on the Y-axis, with non-commercial tools listed first, followed by commercial tools. The X-axis shows the range of ages, in years, of the intended population for the tool’s use. The numbers in parentheses next to the tool names indicate the number of languages into which the tool has been translated. The numbers within the bands indicate the number of minutes (min) required to complete each test, with the number of tests listed in square brackets. The color of the bands indicates the amount of training required to administer the tool, with blue indicating <4 h of required training, yellow an estimated 4–12 h, and pink an estimated >12 h.

https://doi.org/10.1371/journal.pdig.0000196.g001

While nearly all commercial tools have age ranges that extend into adulthood, the non-commercial tools tend to have narrower ranges, likely due to the target study population for whom the initial tool was developed. Most assessment tools contain batteries that consist of individual tests, and the length of the assessment is modifiable based on the number of tests selected. For example, CANTAB and Cogstate each have over 16 tests that can be administered, depending on the needs of the user [33,34]. Most tools contain tests that each evaluate specific domains of cognition, often focusing on memory, attention, visual-spatial, and inhibition tasks (Table 2). Babyscreen and the Minnesota Executive Function Scale (MEFS)/EFgo test the youngest age range of children (as young as 18 months of age), but each administers only one test measuring multiple sub-domains of cognition [35,36]. The MEFS/EFgo tool administers a test similar to the Dimensional Change Card Sort task, aiming to measure inhibition, cognitive flexibility, and memory [36]. This approach is linked to the challenges of tapping into specific, differentiated dimensions of cognitive tasks at these early ages [37].

Table 2. Domains measured by tablet-based cognitive assessments.

https://doi.org/10.1371/journal.pdig.0000196.t002

4. Considerations for identifying the tool(s) of choice

While the concepts and aims of these tablet-based cognitive assessment tools are often similar, their strengths and limitations differ, and we highlight these for potential users to consider when identifying an appropriate tool for their setting. Some limitations are inherent to the way tools are funded and designed. Non-commercial tools are often developed by academicians with grant funding and are therefore typically dependent on this funding, or on other program fees, to maintain and update the applications alongside the tablet operating system. While these tools are often open-source, and hence costs are generally low, researchers often must directly contact developers to arrange an agreement for tool use. While this benefits smaller projects with budgetary constraints, the lack of dedicated support can cause delays in implementation or in any adjustments required.

Specific tools also have their own strengths and limitations. The Early Years Toolbox is the only tool that is freely downloadable and has instructions on administration on its website, a great strength in its accessibility. However, the “open access” nature of the tool means that knowledge of the full extent of its locations of use and the degree of success of cross-cultural administration is limited; this can only be gleaned from published articles and reports of its use, which may be subject to publication bias. A summary of the strengths and limitations of each tool is included in Table 3. Associated psychometrics and additional information about these tools are detailed in the S1 File.

Table 3. General strengths and limitations of tablet-based cognitive assessments.

https://doi.org/10.1371/journal.pdig.0000196.t003

We identified a number of factors we consider important when choosing a tool for measuring cognition in early childhood in a low-resourced setting. These include (a) ease of using technology to measure cognition in early childhood; (b) validity in varying cultural contexts; (c) ease of adaptability for different settings; (d) overcoming workforce challenges; and (e) accessibility.

Use of technology to assess cognition in early childhood

There is still debate as to whether young children can be assessed appropriately and reliably using technology in the early years. Within this review, most tablet-based tools start testing at 3 years and older. In high-resourced settings, evidence suggests that children as young as 2 years may appropriately engage with digital technologies [38,39]. However, at these early ages, the foundations of executive functioning and cognition are still being laid, adding to the challenge of isolating those constructs for testing [40]. Measurement of cognition in children under 3 years of age may, therefore, still be best performed with in-person or parent-report assessment tools [41] or with neurophysiological tests or imaging that map the neural correlates underlying cognitive functions [42]. Many tools, especially non-commercial ones, are gamified, featuring appealing and narrative graphics for children. While this helps overcome the challenge of engaging children in assessment tools, the bright and interesting images may inadvertently lead to overestimation within the domains of attention and impulsivity [27].

Validity of tools for varying socioeconomic and cultural settings

An important first step in choosing a cognitive tool is to critically review its measurement properties (such as validity and reliability). The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guidelines provide a framework to guide this [43] and support the use of both qualitative (acceptability, face, and content validity) and quantitative (construct, cultural, and structural validity) aspects of validity alongside reliability (Fig 2). While nearly all the tablet-based cognitive tests included in this review (14/16) have some preliminary or published validity data (S1 File), the detail and rigor of this psychometric testing differ among tools. Some tools have attempted validation against a “gold standard.” However, this is complex, particularly in international low-resourced settings, since “gold standard” assessment tools are typically developed and validated in high-resourced western settings.

Fig 2. Assessing the measurement properties of a tool.

Domains of psychometric properties adapted from the following publication: Mokkink LB, Terwee CB, Knol DL, Stratford PW, Alonso J, Patrick DL, et al. The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: A clarification of its content. BMC Medical Research Methodology. 2010;10(1):22.

https://doi.org/10.1371/journal.pdig.0000196.g002

Evaluating the validity of tools can be particularly challenging within settings where children have limited daily exposure to technology or to the items contained within the assessment tool. Tools are often developed for children with high exposure to technology, meaning that limited exposure may affect a child’s ability to interact with the tool’s content in the same meaningful way. However, evidence is emerging that the lack of prior exposure to smart devices may not impact the accuracy with which tablet-based tools can measure cognition [28]. Indeed, a “rights-based perspective” would argue that technology coupled with culturally neutral content can benefit children globally. However, our review has identified rigorous psychometric testing as a gap in this field, which warrants additional consideration as tablet-based tools are disseminated widely.

Notably, caution should be taken when interpreting assessment scores for children who differ significantly, whether by language, age, or culture, from the original normative population of the assessment tool. Certain aspects of cognition are more adaptive and highly prioritized in some settings than in others, and thus the normative range of scores is likely to differ across settings. Issues will therefore arise when one depends on normative population data to determine cutoffs for deficits in cognition or to function as a comparison or control group.

Ease of adaptability of tools for different settings

The first generation of tablet-based assessment tools was programmed with a single language in mind, with no ability to adapt for different contexts or languages. More recent tools have built-in functionality to allow for easy adaptability between contexts and languages, as long as the appropriate adaptation methodologies are used. Some tools require very little language for their use. For example, Developmental Assessment on an E-Platform (DEEP) and Babyscreen have little to no language incorporated into the application, making linguistic adaptation an easy task [32,35]. Other tools make greater use of language, resulting in time-intensive adaptation processes involving forward and backward translations and cognitive interviewing to ensure face validity. This process takes multiple cycles and may require full re-programming of the test by the assessment tool developers, as was the case for a recent cultural and linguistic adaptation of the NIH Toolbox for Kenya [44]. Even within a single country, certain images may need to be adjusted to accommodate the broadest range of cultures and contexts, selecting the images most familiar across cultures within the region.

Overcoming workforce challenges

A significant advantage of tablet-based cognitive assessment tools is their ability to simultaneously administer, score, and record results, either within the tablet computer itself or on a cloud server. By removing the requirement for a psychologist or highly trained individual to administer the tool, these assessment tools can be administered by non-specialists on a scale far beyond traditional psychological tests. Most of the non-commercial (8/11) and commercial (4/5) tools require only a few hours of training for test administration, allowing for assessors from a broad array of educational and training backgrounds. Nearly all of the tests do not require an assessor’s observations as part of the scoring. One of the few exceptions is the NIH Toolbox’s List Sorting task, which measures working memory and requires an assessor to input whether a verbally repeated series of words is correct or incorrect [45]. With minimal input from assessors, these tablet-administered tools reduce the risk of bias and human error and make great strides in overcoming workforce challenges in low-resourced settings.

Accessibility

While a few assessment tools are intended for commercial use, most have been created within academic settings and are freely available with open-source code (Table 1). Developers of some tools, e.g., the BENCI and Early Years Toolbox, stipulated at inception that their tools remain free of cost so that they can be easily utilized in low-resourced settings. The few academic, non-commercial tools that require some funding tie their costs directly to the time required to support the developer in adapting the code for a new setting and to other factors, such as server maintenance.

Some commercial tools have a published fee for their use. This can range from a few hundred dollars for a subscription to many thousands of dollars when paying for tests per participant. Costs may vary depending on the number of tests and administrations included within the assessment battery. While the costs associated with commercial assessment tools may deter many researchers in low-resourced settings, they ensure that the tools are adequately maintained, with up-to-date information technology infrastructure and developers on staff to troubleshoot any data-related issues.

5. Future directions for tablet-based pediatric cognitive assessment tools

This review aimed to summarize the current landscape of tablet-based assessment tools that measure cognition in children, particularly those used in low-resourced settings, as a potential solution to poor healthcare infrastructure and workforce-related barriers. We identified 16 tools that ranged across the full spectrum of possibilities: from open-source to proprietary, and from those in the early stages of piloting in one region to those with extensive validation in multiple countries. As this novel area of digital pediatric cognitive assessment tools emerges and is pushed up the global mHealth agenda [29,46], it is critical for users to consider a tool’s psychometric properties, such as validity and reliability, before integrating it into clinical practice, research, or public health developmental surveillance systems. The COSMIN checklist provides useful guidance, not only for evaluating the validity and utility of a novel digital assessment tool, but also for developers to keep in mind while planning validation studies or when describing the strengths and limitations of their tools to potential users [43].

In the absence of a true “gold standard,” tool developers should aim to generate local normative scores for these novel tools in the target populations, instead of benchmarking them against the available “gold standards.” The developers of the ENABLE and DEEP tools hope to develop a cloud database from their users, so that global sample “norms” can become available in open-source platforms (verbal communication from N. Pitchford and S. Bhavnani, respectively, February 2021).

Additionally, while this review focuses on tablet-based assessment tools, we are keenly aware that smartphones represent the ultimate game changer in terms of achieving scale. The use of tablet-based assessment tools still requires a trained individual to go into the community or household for administration, whereas a smartphone-based assessment tool could potentially be downloaded onto a phone and then self-administered or administered by a parent within the home. Lead investigators of the NIH Toolbox are involved with the development of a suite of self-administered smartphone-based tests, the Mobile Toolbox, which they hope will be available for public dissemination in 2023 [47]. Given the ubiquity of smartphones globally (over 5 billion subscribers, 70% of whom reside in low-resourced settings) and cellular networks connecting 85% of the world’s population [29], we believe the potential for these cognitive assessment tools to scale will vastly improve with smartphone administration. Tablet-based tools were the focus of this review because a number of well-studied and validated options are currently available for use in low-resourced settings. Tablets also form a useful “bridge” device: less expensive and less fragile than laptop computers, with a larger screen than mobile phones. It is clear, however, that smartphones are more accessible and cheaper. Presently, less information is available about applications on these devices for measuring cognition in children, although smartphone-based cognitive assessments for other populations, such as adults with dementia, have recently been developed [48,49]. Given the fast-moving nature of this field, this review will likely need updating to include a focus on smartphone-based tools once data on their validity become available.
Over 80% of World Health Organization (WHO) member states use at least one mHealth initiative operationalized through smartphones [29], with features such as videos and decision support systems that have proven useful in improving maternal and child health in a variety of low-resourced settings [50]. Therefore, the integration of cognitive assessments into smartphones may be the next leap towards optimizing child development at scale across global settings.

The digital administration of these tools also makes it possible to sync data collected from different modalities, such as eye tracking and electroencephalography, to provide a deeper, more integrated level of evaluation. With advances in technology, further spurred by substantial increases in use of telemedicine during the Coronavirus Disease 2019 (COVID-19) pandemic, these assessment tools may facilitate the possibility of virtual, parent-led, in-home well-baby checks in the future [51].

A forum to house the available digital tools, regularly updated to reflect the latest progress and associated data, would be valuable to stakeholders with an interest in early childhood development. Such an effort has been initiated by the World Bank [52]. As an example of this approach, we aim to consolidate the information gleaned from this review within the IMPACT Measures Tool database [53]. This online, open-source database is built on a research-driven scoring system that allows early childhood and parenting measures to be compared across 4 categories (i.e., cost, usability, cultural relevance, and technical merit) [54]. Given the current landscape and new demands in low-resourced settings, we believe the IMPACT database would benefit from the addition of digital measures, but other digital databases, as they emerge, should also be considered.

This review has some limitations. Because we did not perform a formal systematic review, it is possible that we have missed some existing literature on tools currently being used in low-resourced settings outside the network of investigators with whom we connected. Further, given the dynamic and fast-paced nature of this emerging field, new and updated tools are rapidly being added to the existing pool. Some have not yet been used in low-resourced settings and therefore will not have been included in this review. Despite these limitations, we believe that this review of existing tablet-based cognitive assessment tools adds great value, since the current landscape of available tools and possible future directions have not previously been summarized in this field.

Brief considerations for developers of digital cognitive evaluation tools

While this review is primarily targeted at users of tablet-based tools measuring cognition, we provide the following recommendations for tool developers. A primary consideration in the development of new tools should be their scalability. To benefit the large number of children who are faltering in cognitive development within low-resourced settings, these tools must be able to be adopted at scale within clinical practice and health systems. The mHealth Assessment and Planning for Scale (MAPS) toolkit published by the WHO provides a useful guide for iteratively assessing a tool’s readiness for scale-up, as well as strategies to address common barriers inherent in the pathway to scaling up [54]. Similarly, “Beyond Scale,” launched by the Digital Impact Alliance, is a free online course that highlights key challenges and solutions in scaling up mHealth solutions [55].

While understanding the importance of scalability, we also strongly recommend that tool developers closely collaborate with stakeholders from low-resourced settings—children and their families, researchers, health system staff, and policy makers—across all phases of development to ensure that the tool is designed optimally for its intended contexts, is affordable for scale-up, and can be integrated within the health and educational sectors [56]. In partnership with key stakeholders, developers should also consider local laws on data privacy and security. Most of the tools described in this review collect only de-identified study IDs, with password-protected storage on secure servers and varying levels of encryption and password keys. Ensuring the security of data within the tablets and cloud storage is an essential feature of this technology.

Finally, cognitive tools only have the potential for meaningful impact when they are disseminated and utilized. To ensure that information is readily available to others, we recognize that the addition of digital assessment tools to online repositories of child development measures is essential [53,57]. These efforts will help researchers and practitioners make informed choices when selecting evaluation tools for their programs in this fast-moving and dynamic field.

Conclusions

Tablet-based cognitive assessment tools may finally overcome the barriers of inadequate health systems that lead to poor measurement of child development outcomes in low-resourced settings. Data derived from these tools can then provide the foundation for drafting contextually relevant policies and practices—from the sub-local to the global—to optimize the developmental potential of all children globally.

Supporting information

S1 File. This supplement contains 2 appendices.

Appendix A is a table outlining the psychometric measurement properties of the tablet-based cognitive tools included in this review. Appendix B is another table that describes the additional neurodevelopmental domains evaluated by tablet-based assessments, with contact information of the developers.

https://doi.org/10.1371/journal.pdig.0000196.s001

(DOCX)

Acknowledgments

We would like to thank all of the developers of these tablet-based applications, who willingly answered numerous questions as we strived to understand their tool’s function and purpose. Many of these individuals created these tablet-based tools to improve the manner in which cognition is assessed around the world. We appreciate their commitment and dedication to finding effective ways of measuring pediatric cognition globally.

References

  1. Lu C, Black MM, Richter LM. Risk of poor development in young children in low-income and middle-income countries: an estimation and analysis at the global, regional, and country level. Lancet Glob Health. 2016;4(12):e916–e922. pmid:27717632
  2. Nelson CA 3rd, Gabard-Durnam LJ. Early Adversity and Critical Periods: Neurodevelopmental Consequences of Violating the Expectable Environment. Trends Neurosci. 2020;43(3):133–143. pmid:32101708
  3. Walker SP, Wachs TD, Meeks Gardner J, Lozoff B, Wasserman GA, Pollitt E, et al. Child development: risk factors for adverse outcomes in developing countries. Lancet. 2007;369(9556):145–157. pmid:17223478
  4. Richter LM, Daelmans B, Lombardi J, Heymann J, Boo FL, Behrman JR, et al. Investing in the foundation of sustainable development: pathways to scale up for early childhood development. Lancet. 2017;389(10064):103–118. pmid:27717610
  5. Black MM, Hurley KM. Early child development programmes: further evidence for action. Lancet Glob Health. 2016;4(8):e505–e506. pmid:27443769
  6. Anzman-Frasca S, Francis LA, Birch LL. Inhibitory Control is Associated with Psychosocial, Cognitive, and Weight Outcomes in a Longitudinal Sample of Girls. Transl Issues Psychol Sci. 2015;1(3):203–216. pmid:26417610
  7. Van der Ven SH, Kroesbergen EH, Boom J, Leseman PP. The development of executive functions and early mathematics: A dynamic relationship. Br J Educ Psychol. 2012;82(1):100–19. pmid:22429060
  8. Harvard University. Executive Function and Self Regulation 2020. Available from: https://developingchild.harvard.edu/science/key-concepts/executive-function/.
  9. Bull R, Scerif G. Executive functioning as a predictor of children’s mathematics ability: Inhibition, switching, and working memory. Dev Neuropsychol. 2001;19(3):273–293. pmid:11758669
  10. St Clair-Thompson HL, Gathercole SE. Executive functions and achievements in school: Shifting, updating, inhibition, and working memory. Q J Exp Psychol. 2006;59(4):745–759. pmid:16707360
  11. Cortés Pascual A, Moyano Muñoz N, Quílez RA. The Relationship Between Executive Functions and Academic Performance in Primary Education: Review and Meta-Analysis. Front Psychol. 2019;10(1582). pmid:31354585
  12. Boivin MJ, Giordani B, editors. Neuropsychology of children in Africa: Perspectives on risk and resilience. New York, NY, US: Springer Science + Business Media; 2013. p. 347.
  13. Semrud-Clikeman M, Romero RAA, Prado EL, Shapiro EG, Bangirana P, John CC. Selecting measures for the neurodevelopmental assessment of children in low- and middle-income countries. Child Neuropsychol. 2017;23(7):761–802. pmid:27609060
  14. Bruckner TA, Scheffler RM, Shen G, Yoon J, Chisholm D, Morris J, et al. The mental health workforce gap in low- and middle-income countries: a needs-based approach. Bull World Health Organ. 2011;89(3):184–194. pmid:21379414
  15. Hamadani JD, Tofail F, Nermell B, Gardner R, Shiraji S, Bottai M, et al. Critical windows of exposure for arsenic-associated impairment of cognitive function in pre-school girls and boys: a population-based cohort study. Int J Epidemiol. 2011;40(6):1593–1604. pmid:22158669
  16. Fernald LCH, Prado E, Kariger P, Raikes A. A Toolkit for Measuring Early Childhood Development in Low and Middle-Income Countries. Washington, D.C.; 2017.
  17. Bayley N. Bayley Scales of Infant and Toddler Development. Pearson; 2005.
  18. Dalen K, Jellestad FK, Kamaloodien K. The translation of the NEPSY-II to Afrikaans, some ethical reflections. Cogniție, Creier, Comportament / Cognition, Brain, Behavior. 2007;XI:609–20.
  19. Korkman M. NEPSY. A developmental neuropsychological assessment. Test materials and manual. 1998.
  20. Nampijja M, Apule B, Lule S, Akurut H, Muhangi L, Elliott AM, et al. Adaptation of Western measures of cognition for assessing 5-year-old semi-urban Ugandan children. Br J Educ Psychol. 2010;80(Pt 1):15–30. pmid:19594989
  21. Hughes C, Ensor R. Executive function and theory of mind in 2 year olds: a family affair? Dev Neuropsychol. 2005;28(2):645–668. pmid:16144431
  22. Gioia GA, Isquith PK, Guy SC, Kenworthy L. Test review: Behavior Rating Inventory of Executive Function. Child Neuropsychol. 2000;6(3):235–238.
  23. Holding PA, Taylor HG, Kazungu SD, Mkala T, Gona J, Mwamuye B, et al. Assessing cognitive outcomes in a rural African population: development of a neuropsychological battery in Kilifi District, Kenya. J Int Neuropsychol Soc. 2004;10(2):246–260.
  24. Boggs D, Milner KM, Chandna J, Black M, Cavallera V, Dua T, et al. Rating early child development outcome measurement tools for routine health programme use. Arch Dis Child. 2019;104(Suppl 1):S22. pmid:30885963
  25. Gladstone M, Lancaster GA, Umar E, Nyirenda M, Kayira E, van den Broek NR, et al. The Malawi Developmental Assessment Tool (MDAT): The Creation, Validation, and Reliability of a Tool to Assess Child Development in Rural African Settings. PLoS Med. 2010;7(5):e1000273. pmid:20520849
  26. Black M, Bromley K, Cavallera V, Cuartas J, Dua T, Eekhout I, et al. The Global Scale for Early Development (GSED): Bernard van Leer Foundation. 2019 [updated 2019 Jun 18; cited 2021 May 11]. Available from: https://earlychildhoodmatters.online/2019/the-global-scale-for-early-development-gsed/.
  27. Lumsden J, Edwards EA, Lawrence NS, Coyle D, Munafò MR. Gamification of Cognitive Assessment and Cognitive Training: A Systematic Review of Applications and Efficacy. JMIR Serious Games. 2016;4(2):e11. pmid:27421244
  28. Mukherjee D, Bhavnani S, Swaminathan A, Verma D, Parameshwaran D, Divan G, et al. Proof of Concept of a Gamified DEvelopmental Assessment on an E-Platform (DEEP) Tool to Measure Cognitive Development in Rural Indian Preschool Children. Front Psychol. 2020;11:1202. pmid:32587551
  29. World Health Organization. mHealth: New horizons for health through mobile technologies: second global survey on eHealth. Switzerland; 2011.
  30. Millar L, McConnachie A, Minnis H, Wilson P, Thompson L, Anzulewicz A, et al. Phase 3 diagnostic evaluation of a smart tablet serious game to identify autism in 760 children 3–5 years old in Sweden and the United Kingdom. BMJ Open. 2019;9(7):e026226. pmid:31315858
  31. Anzulewicz A, Sobota K, Delafield-Butt JT. Toward the Autism Motor Signature: Gesture patterns during smart tablet gameplay identify children with autism. Sci Rep. 2016;6(1):31107. pmid:27553971
  32. Bhavnani S, Mukherjee D, Dasgupta J, Verma D, Parameshwaran D, Divan G, et al. Development, feasibility and acceptability of a gamified cognitive DEvelopmental assessment on an E-Platform (DEEP) in rural Indian pre-schoolers—a pilot study. Glob Health Action. 2019;12(1):1548005. pmid:31154989
  33. Bangirana P, Sikorskii A, Giordani B, Nakasujja N, Boivin MJ. Validation of the CogState battery for rapid neurocognitive assessment in Ugandan school age children. Child Adolesc Psychiatry Ment Health. 2015;9:38. pmid:26279675
  34. Syväoja HJ, Tammelin TH, Ahonen T, Räsänen P, Tolvanen A, Kankaanpää A, et al. Internal consistency and stability of the CANTAB neuropsychological test battery in children. Psychol Assess. 2015;27(2):698–709. pmid:25528164
  35. Twomey DM, Ahearne C, Hennessy E, Wrigley C, De Haan M, Marlow N, et al. Concurrent validity of a touchscreen application to detect early cognitive delay. Arch Dis Child. 2020. archdischild-2019-318262. pmid:32948515
  36. Meuwissen AS, editor. The psychometrics of the Minnesota Executive Function Scale. Society for Research in Child Development; 2017; Austin, Texas.
  37. Best JR, Miller PH. A developmental perspective on executive function. Child Dev. 2010;81(6):1641–1660. pmid:21077853
  38. McPake J, Plowman L, Stephen C. Pre-school children creating and communicating with digital technologies in the home. Br J Educ Technol. 2013;44(3):421–431.
  39. Nacher V, Jaen J, Navarro E, Catala A, González P. Multi-touch gestures for pre-kindergarten children. Int J Hum Comput Stud. 2015;73:37–51.
  40. Aylward GP, Aylward BS. The Changing Yardstick in Measurement of Cognitive Abilities in Infancy. J Dev Behav Pediatr. 2011;32(6):465–468. pmid:21555956
  41. Bradley-Johnson S, Johnson CM. Infant and toddler cognitive assessment. Psychoeducational assessment of preschool children, 4th ed. Mahwah, NJ, US: Lawrence Erlbaum Associates Publishers; 2007. 325–57.
  42. Jones EJH, Goodwin A, Orekhova E, Charman T, Dawson G, Webb SJ, et al. Infant EEG theta modulation predicts childhood intelligence. Sci Rep. 2020;10(1):11232. pmid:32641754
  43. Mokkink LB, Terwee CB, Knol DL, Stratford PW, Alonso J, Patrick DL, et al. The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: A clarification of its content. BMC Med Res Methodol. 2010;10(1):22. pmid:20298572
  44. Duffey MM, Ayuku D, Ayodo G, Abuonji E, Nyalumbe M, Giella AK, et al. Translation and Cultural Adaptation of NIH Toolbox Cognitive Tests into Swahili and Dholuo Languages for Use in Children in Western Kenya. J Int Neuropsychol Soc. 2021;1–10. pmid:34027848
  45. Weintraub S, Bauer PJ, Zelazo PD, Wallner-Allen K, Dikmen SS, Heaton RK, et al. I. NIH Toolbox Cognition Battery (CB): Introduction and Pediatric Data. Monogr Soc Res Child Dev. 2013;78(4):1–15. pmid:23952199
  46. Barkman C, Weinehall L. Policymakers and mHealth: roles and expectations, with observations from Ethiopia, Ghana and Sweden. Glob Health Action. 2017;10(sup3):1337356. pmid:28838303
  47. Sage Bionetworks. MobileToolbox: MobileToolbox is a set of tools that allow you to embed self-administered cognitive tests into your research study. Sage Bionetworks. 2021 [cited 2021 Sept 13]. Available from: https://www.mobiletoolbox.org/.
  48. Paddick SM, Yoseph M, Gray WK, Andrea D, Barber R, Colgan A, et al. Effectiveness of App-Based Cognitive Screening for Dementia by Lay Health Workers in Low Resource Settings. A Validation and Feasibility Study in Rural Tanzania. J Geriatr Psychiatry Neurol. 2021;34(6):613–621.
  49. Katz MJ, Wang C, Nester CO, Derby CA, Zimmerman ME, Lipton RB, et al. T-MoCA: A valid phone screen for cognitive impairment in diverse community samples. Alzheimers Dement. 2021;13(1):e12144. pmid:33598528
  50. Agarwal S, Perry HB, Long L-A, Labrique AB. Evidence on feasibility and effective use of mHealth strategies by frontline health workers in developing countries: systematic review. Trop Med Int Health. 2015;20(8):1003–1014. pmid:25881735
  51. Hall CM, Bierman KL. Technology-assisted Interventions for Parents of Young Children: Emerging Practices, Current Research, and Future Directions. Early Child Res Q. 2015;33:21–32. pmid:27773964
  52. Measuring Child Development: A Toolkit for Doing It Right [Internet]. The World Bank. 2017 [cited 2021 Feb 9]. Available from: https://www.worldbank.org/en/programs/sief-trust-fund/publication/a-toolkit-for-measuring-early-child-development-in-low-and-middle-income-countries.
  53. IMPACT Measures Tool: Scoring Guidebook [press release]. PsyArXiv, 21 May 2021.
  54. World Health Organization. The MAPS toolkit: mHealth assessment and planning for scale. Geneva, Switzerland: World Health Organization; 2015.
  55. Digital Impact Alliance. Beyond Scale: How to Make Your Digital Development Program Sustainable. Washington D.C.: Digital Impact Alliance; 2017 [updated 2017 Dec 1; cited 2021 Feb 9]. Available from: https://digitalimpactalliance.org/research/beyond-scale-how-to-make-your-digital-development-program-sustainable/.
  56. Petersen C, Adams S, DeMuro P. mHealth: Don’t Forget All the Stakeholders in the Business Case. Medicine 2.0. 2015;4:e4. pmid:26720310
  57. EC PRISM. IMPACT Measures Tool: EC PRISM. [cited 2021 May 18]. Available from: https://impact.ecprism.org/measures.