
The relationship between internet-gaming experience and executive functions measured by virtual environment compared with conventional laboratory multitasks

  • Yong-Quan Chen,

    Roles Data curation, Formal analysis

    Affiliation Department of Psychology, National Cheng Kung University, Tainan, Taiwan

  • Shulan Hsieh

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Department of Psychology, National Cheng Kung University, Tainan, Taiwan, Institute of Allied Health Sciences, National Cheng Kung University, Tainan, Taiwan



The aim of this study was to investigate whether individuals with frequent internet-gaming (IG) experience exhibited better or worse multitasking ability than those with infrequent IG experience. Multitasking ability was measured using a virtual environment multitask, the Edinburgh Virtual Errands Test (EVET), and conventional laboratory multitasks, namely the dual-task and task-switching paradigms. Seventy-two young healthy college students participated in this study. They were split into two groups based on the time spent playing online games, as evaluated using the Internet Use Questionnaire. Each participant performed the EVET, dual-task, and task-switching paradigms on a computer. The results showed that the frequent IG group performed better on the EVET than the infrequent IG group, but the two groups’ performance on the dual-task and task-switching paradigms did not differ significantly. These results suggest that the frequent IG group exhibited better multitasking efficacy when measured using a more ecologically valid task, but not when measured using conventional laboratory multitasking tasks. The differences in the subcomponents of executive function measured by these task paradigms are discussed. The current results show the importance of the task effect when evaluating frequent internet gamers’ multitasking ability.


This era of digital technology has brought about revolutionary changes to human civilization. With the development of digital technology, people tend to perform multiple tasks at the same time more often to make their lives more productive. In this study, we are particularly interested in exploring whether more frequent internet gaming (IG) is associated with higher or lower efficacy in multitasking ability. Additionally, we also compare the multitasking ability measured using a naturalistic-based task (i.e., in a virtual environment) with that measured using conventional laboratory executive function tasks.

There is no consensus in previous research on these questions. For example, Dong et al. [1] examined participants with internet addiction, as measured using the internet addiction test (IAT) [2], and observed that they had lower cognitive flexibility according to the conventional laboratory-based Stroop task [3]. Several other studies have also observed poorer cognitive function, especially cognitive control, associated with frequent internet use [4–10]. However, Boot et al. [11] observed that video game experts outperformed non-gamers in many cognitive tasks, including tracking moving objects at greater speeds, performing more accurately in a visual short-term memory test, switching between tasks more quickly, and making decisions about rotated objects more quickly and accurately. Several other studies have also observed positive effects of internet use on various cognitive functions, including visual and visuospatial attention, dual tasking, inhibition, planning, working memory, episodic verbal memory, visual and verbal reasoning, and fluid intelligence [8, 12–25].

Previous studies have mostly used conventional laboratory tasks, whose representativeness of an individual’s ability to accomplish real-life activities has been challenged. For example, in the memory research domain, evidence has shown that the choice between conventional laboratory tasks and naturalistic tasks in real or virtual environments affects the evaluation of an individual’s cognitive efficacy. Conventional laboratory prospective memory tasks, for instance, assess memory for future intentions: participants are asked to perform an ongoing task and occasionally respond to prospective memory cues by performing another expected task, which is a form of dual task [26, 27].

Researchers have observed that older adults are impaired in conventional prospective memory tasks, but they seem to outperform younger adults on naturalistic prospective memory tasks, such as remembering to make phone calls or to take medications, even when the participants are asked not to use external aids [28]. Along these lines, we speculated that previous studies investigating the issue of whether frequent internet use is associated with higher or lower cognitive efficacy might have overlooked the effect of conventional laboratory task paradigms compared to naturalistic paradigms. Therefore, we compared multitasking abilities between two groups of participants with frequent or infrequent internet-gaming experience using conventional laboratory and virtual environment task paradigms.

In a laboratory setting, researchers have developed representative tasks to measure multitasking abilities, such as the dual-task [29–31] and task-switching [32–34] paradigms. Logie and colleagues [44] developed the Edinburgh Virtual Errands Test (EVET) to mimic everyday multitasking activities in a virtual environment. In the conventional laboratory dual-task paradigm, participants are usually required to perform two tasks in parallel, whereas in the EVET, participants perform a series of tasks in a particular order, interleaving or switching from one task to another as each is completed [35]. In the conventional laboratory task-switching paradigm, rapid task switching is required, whereas tasks in the EVET unfold over much longer time scales in which rapid and accurate responses are less crucial, and most of the tasks have a clear end point [35].

Performance on the EVET has been suggested to be a good predictor of problems with planning and “intentionality” (i.e., goal-directed behavior) in everyday life [36]. Studies have also shown that performance in the EVET’s virtual environment reflects task performance in a real-life environment [37]. The EVET involves a wide range of cognitive functions acting in concert, including retrospective and prospective memory, working memory, planning, and implementation of a plan to achieve multiple goals, such as completing multiple errands as instructed. In contrast, the majority of laboratory tasks focus on individual cognitive functions in relative isolation and give less attention to how various cognitive abilities operate simultaneously [35].

However, coordinating and strategically deploying several different cognitive functions is essential in everyday life, especially in a digital technology environment. In other words, everyday multitasking differs from concurrent dual-task demands [29–31] and from the microstructure of rapid switching between laboratory tasks [32–34]. Instead, it involves several subtasks with different requirements, and its outcome depends on how individuals schedule their attempts at these subtasks [35]. The conventional laboratory dual-task and task-switching paradigms address only specific components of the cognitive system [35]: respectively, the core (or “primary-order”) subcomponents of working memory and cognitive flexibility in Diamond’s executive function model [38].

Therefore, we explored the different aspects of cognitive processing that may be involved in conventional laboratory versus virtual environment task paradigms. We hypothesized that frequent internet gamers might not necessarily perform similarly on a conventional laboratory task and a virtual environment task. Additionally, the frequent IG group might not necessarily perform more poorly than the infrequent IG group when evaluated using a more naturalistic task such as the EVET.

Materials and methods

This study protocol was approved by the Human Research Ethics Committee of National Cheng Kung University, Tainan, Taiwan, R.O.C., to protect the participants’ rights in accordance with the Declaration of Helsinki and the university’s research regulations. All participants signed an informed consent form before participating in the experiments.


Seventy-eight young healthy college students, recruited via advertisements on a bulletin board system (BBS) and on Facebook, participated in this study. Two of them did not complete the questionnaires properly and four did not complete the experimental tasks; hence, data from 72 participants (36 females, 36 males) were analyzed. The sample size was chosen based on prior research that analyzed the lowest and highest 25% of media-use scores from 92 participants, resulting in 23 participants per group [12]; the two groups’ switch costs reached statistical significance at p = 0.02. To reach a power of 0.8 at p < 0.05, the estimated sample size was at least 18 subjects per group. The remaining participants were right-handed and 20 to 30 years old (mean age, 23.43; SD, 2.21); individuals in this age range are considered digital natives. No history of neurological or psychiatric disorders, cardiovascular diseases, or any form of addiction (e.g., alcohol, drug, internet) was detected in the self-reports. No participant was at risk of depression or anxiety disorder, as evaluated using the Beck Depression Inventory II (scores < 13 [39–41]) and the Beck Anxiety Inventory (scores < 7 [42, 43]). Each participant received an honorarium of NT$350 (US $12) for completing the study.

Experimental instruments

Virtual environment multitasking task: Edinburgh Virtual Errands Test.

This study used the EVET [44–46] to evaluate participants’ multitasking ability in a virtual naturalistic setting. The EVET requires participants to complete eight errands efficiently within 8 minutes while navigating through a simulated environment on a computer. The environment consisted of a four-story building with a set of stairs and five rooms along the left and right ends of each floor, surrounding a central elevator.

Each participant was provided with instructions as follows before receiving the experimental test: “Please imagine that you are a student and are assigned to do a list of errands for your teacher. The errands are listed in a particular order, but you can vary the order at any time as you wish. However, you are also told not to enter any of those rooms unless the rooms are on the list. You have eight minutes to complete these assignments. Please complete all the assignments as soon as you can, and complete as many as possible.”

An example of the errand list is as follows:

  • Pick up the brown package in T4 and take it to G6
  • Pick up the newspaper in G3 and take it to the Desk in S4
  • Obtain the keycard in F9 and unlock G6 (via G5)
  • Meet the person in S10 before 3:00 min
  • Obtain the stair-code from the notice board in G8 and unlock the stairwell
  • Turn on Cinema S7 at 5:30 min
  • Turn off the Lift G Floor
  • Sort the red and blue binders in room S2. Sort as many binders as you can.

G = ground floor; F = first floor; S = second floor; T = third floor

In the EVET experiment, participants were given 2 minutes to study their errand list, followed by free recall, then another 5 minutes of further study, and a test of cued recall. Participants were then asked to plan the order in which they should perform the errands to maximize task-completion efficiency. Next, they were asked to verbally recall the errand list and building rules until they could recall 100% of the list. Participants then performed the EVET (i.e., navigating through the simulated environment) for 8 minutes (neither the errand list nor the plan was present during the navigation test). Afterwards, they were asked to recall the errands they had attempted or failed to complete. Participants were cued about any errands they had omitted, in addition to the errands they recalled correctly in this recount task. Finally, participants were given the alternative set of errands (Set A or Set B; see Fig 1) and were asked to plan the order of errands (e.g., the 35 participants who performed Set A were subsequently asked to plan Set B, and vice versa), which provided another measure of planning without performing the EVET a second time.

Fig 1.

A. Pipeline of Edinburgh Virtual Errands Test (EVET) experiment. Participants familiarized themselves with the building plan and task errands and performed the EVET. B. Task rules and two sets of errands.

A general “EVET score” was calculated based on participants’ overall performance, accounting for completed errands and incorrect actions. Points were added for each errand completed; bonus points were awarded based on the number of folders sorted and the time discrepancy for timed errands; and points were deducted for picking up incorrect objects, entering rooms not on the errand list, and breaking the building rules. Bonus and penalty points were awarded or deducted on a five-point scale (0–4) using cutoffs derived from the frequency distribution of raw scores among the participant sample in Logie et al. (2011) [44]. The minimum possible score was −12 and the maximum was 20. The EVET subscores were calculated as follows:
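As a rough illustration, the composite rule above can be sketched as follows; note that the per-errand weight and the number of bonus/penalty categories here are hypothetical placeholders, since the exact weights follow Logie et al. (2011) [44]:

```python
def evet_score(errands_completed, bonus_points, penalty_points,
               points_per_errand=2):
    """Composite EVET score sketch: points per completed errand, plus
    bonus points (each scored 0-4, e.g. folders sorted, timing accuracy),
    minus penalty points (each scored 0-4, e.g. wrong objects, wrong
    rooms, rule breaks). points_per_errand=2 is a hypothetical weight,
    not the published one."""
    return (points_per_errand * errands_completed
            + sum(bonus_points) - sum(penalty_points))
```

For example, five completed errands with bonus scores of 2 and 1 and one penalty score of 3 would yield 2 × 5 + 3 − 3 = 10 under this hypothetical weighting.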

Travel time: the total time (in seconds) during which the participant was navigating within the virtual building, minus the time spent entering rooms. This subscore indexes the efficiency of navigation while implementing the errands.

Learn: the sum of the free and cued recall scores. Before participants performed the EVET test, they were requested to recall (free and cued recall) the errands. If they could recall a key point, they received 1 point. The maximum number of points for the errands was 42.

Recount: After the EVET test, participants were immediately asked to recall any violations of the rules that occurred during the EVET test. If participants could recall a key word, they received 1 point. The maximum possible score was 28.

Remember: the sum of the free and cued recall scores. After participants performed the EVET test, they were requested to recall (free and cued recall) the errands. If they could recall one key point they received 1 point toward their score. The maximum number of points was 42 for the errands.

Pretest and posttest plans: the plan scores before and after the EVET test. Each plan was scored against the average plan of the three participants with the highest EVET scores.

Plan follow: the EVET pretest plan was compared with the actual completion order in the EVET test. Each adjacent pair of errands completed in the same relative order as planned earned 1 point. For example, if the planned order was ABC and the actual completion order was BCA, C was still completed after B, so the participant received 1 point.
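The plan-follow scoring can be read as counting adjacent pairs of the planned order that are preserved in the actual completion order. A minimal sketch of that reading (our reconstruction; the original scoring code is not published with the text):

```python
def plan_follow_score(plan, actual):
    """Count adjacent errand pairs from the planned order that also occur
    adjacently, in the same order, in the actual completion sequence.
    Errands are represented by single characters or list elements."""
    plan_pairs = set(zip(plan, plan[1:]))
    actual_pairs = set(zip(actual, actual[1:]))
    return len(plan_pairs & actual_pairs)
```

With the paper’s example, plan ABC and actual order BCA share only the adjacent pair B→C, yielding 1 point.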

Conventional laboratory multitasking task

Dual task.

The dual task was adapted from the Cognition Laboratory Experiments website, designed by John H. Krantz. In this version of the dual-task experiment, the primary task was a visual tracking task in which participants tried to keep a dot inside a rectangular box, similar to following a road. During this tracking task, letters (or sounds) might appear, and if a target letter or a target sound appeared, participants had to respond to it by clicking the mouse (i.e., a dual-task condition in which the secondary task could be visual or auditory).

During the single-task condition, if participants successfully maintained the dot inside the rectangular box, the box turned cyan and no tracking error was recorded (Fig 2). Participants were encouraged to keep the box cyan by maintaining the dot inside it. The dot’s direction of motion could change by an angle ranging from 0 to 360 degrees at each update: the smaller the angle variation, the more direct the dot’s movement and the easier it was to follow; the larger the angle variation, the more random the movement and the more difficult it was to follow. The speed of the dot was eight pixels per update, and its diameter was four pixels. In the dual-task condition (visual–visual or visual–auditory), while participants were tracking the dot, a letter (a visual secondary task) or a sound (an auditory secondary task) might appear (Fig 2). If a target letter (“X”) or target sound (1000 Hz) appeared (for a duration of 150 ms, with a stimulus-onset asynchrony of 500 ms), participants had to respond by clicking the left mouse button; no response was needed for non-target letters or sounds. There were a total of two single-task and two dual-task blocks (one with a visual secondary task, one with an auditory secondary task), with 15 trials per block.
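One update of the tracked dot might be sketched as below, assuming the heading is perturbed by a uniform random angle within the block’s angle-variation range (the function and parameter names are ours; Krantz’s original implementation is not reproduced in the text):

```python
import math
import random

def update_dot(x, y, heading_deg, max_turn_deg, speed=8):
    """One tracking-task update: the dot's heading changes by a random
    angle within +/- max_turn_deg (a smaller range gives a more direct,
    easier-to-follow path), then the dot advances `speed` pixels (eight
    per update in the experiment) along the new heading."""
    heading_deg += random.uniform(-max_turn_deg, max_turn_deg)
    rad = math.radians(heading_deg)
    return x + speed * math.cos(rad), y + speed * math.sin(rad), heading_deg
```

With max_turn_deg = 0 the dot moves in a straight line; with max_turn_deg = 180 its direction is effectively random at each update.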

Fig 2. An example of a dual task.

A: A single-task condition: the primary tracking task when the dot was outside of the box. B: A dual-task condition: the primary tracking task with the secondary visual (letter “X”) task. C: A dual-task condition: the primary tracking task with the secondary auditory (sound 1000 Hz) task.

The dual-task cost for the primary tracking task was measured by calculating the average distance in pixels between the box and the dot (the smaller the distance, the better the performance) in the single-task and dual-task conditions, and then subtracting the single-task value from the dual-task value. There were two types of dual-task cost: a visual dual-task cost (with the letter task as the secondary task) and an auditory dual-task cost (with the auditory task as the secondary task).
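The cost computation above amounts to a difference of mean tracking errors; a minimal sketch (trial-level error lists are an assumed data layout):

```python
from statistics import mean

def dual_task_cost(single_trial_errors, dual_trial_errors):
    """Tracking error is the box-to-dot distance in pixels per trial
    (smaller is better). The dual-task cost is the mean error under
    dual-task conditions minus the mean error under single-task
    conditions; a positive cost means tracking worsened under load."""
    return mean(dual_trial_errors) - mean(single_trial_errors)
```

Computed once with the visual secondary task’s trials and once with the auditory secondary task’s trials, this yields the two costs entered into the group comparisons.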

Task-switching paradigm.

The task-switching paradigm was adapted from the alternating-runs procedure designed by Rogers and Monsell (1995; Experiment 1) [34]. Four consonants (G, K, M, R) and four vowels (A, E, I, U) served as stimuli for the letter task, and four odd numbers (3, 5, 7, 9) and four even numbers (2, 4, 6, 8) served as stimuli for the digit task. In the alternating-run (mixed-task) blocks, participants alternated between runs of two trials presented in one of four quadrants (i.e., the spatial version; see [34]). Stimuli appearing in the upper two positions indicated Task A (the letter task: consonant vs. vowel judgment), whereas stimuli in the lower two positions indicated Task B (the digit task: odd vs. even judgment; Fig 3). Participants were instructed to respond to consonants or odd numbers by pressing a key with their right finger, and to vowels or even numbers by pressing a key with their left finger (stimulus–response mapping was counterbalanced across participants).

Fig 3. An example of task switching.

Stimuli appearing in the upper two positions indicate Task A (the letter task: consonant vs. vowel judgment), whereas stimuli in the lower two positions indicate Task B (the digit task: odd vs. even judgment). Participants were instructed to respond to consonants or odd numbers by pressing a key with their left finger, and to vowels or even numbers by pressing a key with their right finger (stimulus–response mapping was counterbalanced across participants).

Rogers and Monsell (1995) [34] computed switch costs by subtracting the average reaction time (RT) or accuracy of the repeat trials (the second trial in the Task A–Task A and Task B–Task B sequences) from that of the corresponding switch trials (the second trial in the Task A–Task B and Task B–Task A sequences). In this way, switch costs (also known as local switch costs) can be measured without contamination from unwanted differences in factors such as memory load, effort, or arousal (i.e., mixing costs). In the pure (single)-task blocks, by contrast, participants performed the same task (either the letter task or the digit task) across all quadrant positions. The mixing cost can be measured by subtracting the average RT or accuracy of the repeat trials in the pure-task blocks from that of the repeat trials in the mixed-task blocks. There were a total of two single-task blocks (one for the letter task, one for the digit task) and two alternating-run task blocks, with 65 trials per block.
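The two cost definitions above, applied to per-trial RTs, can be sketched as (trial-level RT lists are an assumed data layout):

```python
from statistics import mean

def switch_cost(switch_trials, repeat_trials_mixed):
    """Local switch cost: mean RT on switch trials minus mean RT on
    repeat trials, both taken from the mixed (alternating-run) blocks,
    so memory load and arousal are matched between the two trial types."""
    return mean(switch_trials) - mean(repeat_trials_mixed)

def mixing_cost(repeat_trials_mixed, repeat_trials_pure):
    """Mixing cost: mean RT on repeat trials in mixed blocks minus mean
    RT on repeat trials in pure single-task blocks, isolating the cost
    of holding two task sets ready."""
    return mean(repeat_trials_mixed) - mean(repeat_trials_pure)
```

The same two subtractions can be applied to per-trial accuracies to obtain the accuracy-based costs reported in Table 4.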


Each participant first provided written informed consent and was then given the BDI-II, the BAI, Chen’s Internet Addiction Scale (CIAS [47]), and the Internet Use Questionnaire [48] to complete. The Internet Use Questionnaire was used to evaluate the amount of experience each participant had playing online games. The participants were split into groups based on the following questions: Do you play any online game (on a tablet, computer, or mobile phone) involving interaction with other players? How many hours do you spend playing such online games? Do you play any online game alone (without interacting with other players)? How many hours?

After the questionnaires were completed, each participant performed the EVET, dual-task, and task-switching paradigms on the computer. All of the experiments were completed in an average of about 2.5 hours.

Statistical analysis

All statistical analyses were performed using R software (R x64 3.4.0), and two-tailed Welch’s unequal variances t-tests were used. The significance level was set at 0.05 (uncorrected). Effect sizes and power were computed using a power analysis package in R. Spearman’s ρ correlation analysis was used to explore the associations among EVET scores, dual-task cost, switch cost, mixing cost, time spent using the internet and playing online games, and CIAS scores.
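For reference, the Welch statistic used throughout can be computed from the group means, sample variances, and sizes; a self-contained sketch (the study itself used R, so this Python version is illustrative only, and the p-value would subsequently be read from a t distribution with the returned degrees of freedom):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variances t statistic and the Welch-Satterthwaite
    degrees of freedom for two independent samples a and b."""
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)   # sample variances
    se2 = v1 / n1 + v2 / n2             # squared standard error of the difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

Unlike Student’s t-test, this does not assume equal group variances, which is why it is appropriate for comparing the frequent and infrequent IG groups without first testing homogeneity of variance.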


The minimal data set for Tables 1–5 presented in the Results section can be found in S1 File.

Table 1. Classification of frequent vs. infrequent internet-gaming (IG) experience groups based on hours spent playing each online game type (with others or alone), as reported in the Internet Use Questionnaire (Lin, 2011).

These groups’ demographic information and their Chen Internet Addiction Scale (CIAS) scores were also compared.

Table 2. Statistical tests between the EVET total score and sub-test scores between Set A and Set B.

Table 3. Statistical tests for the EVET total score and subtest scores between the groups of frequent internet-gaming (IG) experience and infrequent IG experience.

Table 4. Statistical results for dual task performance (dual-task cost) and task switching (switch cost; mixing cost) between the groups of frequent internet-gaming experience and infrequent internet-gaming experience.

Table 5. Correlation matrix among EVET, dual-task cost, switch cost, mixing cost, time spent on online game and Chen Internet Addiction Scale (CIAS) score.

The Internet Use Questionnaire

There were 36 participants in the frequent IG experience group (playing online games interacting with others: 8.72 ± 7.10 hours; playing online games alone: 3.97 ± 5.33 hours) and 36 participants in the infrequent IG experience group (playing online games interacting with others: 0.06 ± 0.23 hours; playing online games alone: 0.54 ± 1.41 hours). Note that there were originally 38 participants in the frequent IG group and 40 in the infrequent IG group, as mentioned in the Participants section, but six were removed from the data analyses because of incomplete questionnaires or experiments. The frequent IG group showed a significantly higher number of playing hours than the infrequent IG group (interacting with others: t = 7.32, p = 1.47e-08; alone: t = 3.74, p < .001; Table 1). The two groups’ demographic information is provided in Table 1.

Chen Internet Addiction Scale score

The mean CIAS score was 58.17 ± 11.17 for the frequent IG group and 53.28 ± 13.22 for the infrequent IG group. No significant difference was found between the two groups’ CIAS scores (p = .09; Table 1).

EVET performance

Set A vs. Set B.

The EVET scores ranged from 0 to 19 in our sample. Participants who performed Set A (mean, 9.17; SD, 4.93) scored lower than those who performed Set B (mean, 11.64; SD, 5.37), but this difference was only marginally significant (t = −2.03, p = 0.054; Table 2). Because the total scores of the two EVET sets did not differ significantly, we merged the data from the two sets in the subsequent analyses.

Frequent vs. infrequent internet-gaming groups.

The frequent IG experience group showed significantly higher total EVET scores (t = 5.13, p = 2.45e-06; effect size: 1.21; power: .99), shorter EVET travel times (t = −3.47, p = .001; effect size: 0.82; power: .93), and higher EVET recount (t = 3.30, p = .0002; effect size: 0.78; power: 0.90), EVET pretest plan (t = 3.38, p = .001; effect size: 0.80; power: 0.92), and EVET plan-follow (t = 4.77, p = 1.07e-05; effect size: 1.12; power: 0.99) scores than the infrequent IG experience group (Table 3). These results suggest that the frequent IG group exhibited overall better EVET performance than the infrequent IG group. Note that within the frequent IG group, the type of gaming platform (tablet, computer, or mobile phone) was taken into account when comparing EVET performance with the infrequent IG group: 28 of the 36 participants played computer games, whereas 8 of the 36 played mobile phone or tablet games. These two subgroups did not differ significantly in their EVET, dual-task, or task-switching performance.

Dual-task performance

Overall dual-task performance was not different between the two groups, including single-task tracking and dual-task tracking performance (Table 4). Their dual-task costs for the visual–visual dual-task condition and for the visual–auditory dual-task condition were not significantly different (visual–visual: p = .06; visual–auditory: p = .20; Table 4).

Task-switching performance

The two groups did not differ from each other in overall task-switching performance, including the reaction time (RT) in the pure block condition, repeat, and switch trials in the mixed-task block conditions (Table 4). Their switch costs did not differ significantly from each other in terms of RT (p = .41) and accuracy (p = .99), and their mixing costs also did not differ significantly from each other in terms of RT (p = .16) and accuracy (p = .08; Table 4).

EVET performance related to dual-task cost, switch cost, mixing cost, and internet gaming time

To summarize, only the EVET scores differed significantly between the frequent and infrequent IG experience groups; dual-task cost, switch cost, and mixing cost did not. We further examined whether these variables were correlated with each other. Table 5 shows the correlation matrix among EVET, dual-task cost, switch cost, mixing cost, time spent on online games, and CIAS score.


The aim of this study was to investigate whether individuals with frequent IG experience exhibited better or worse multitasking ability than those with infrequent IG experience, using virtual environment and conventional laboratory tasks. The idea of comparing the EVET with dual-task or task-switching paradigms may not be novel [35, 44], but it had not been explored in young, healthy populations of frequent and infrequent internet gamers. The results showed that participants in the frequent IG group performed better than those in the infrequent IG group on the EVET. However, the frequent IG group’s performance on the dual task (visual–visual; visual–auditory) and task switching (switch cost; mixing cost) did not differ significantly from that of the infrequent IG group.

The results suggest that the frequent IG group exhibited better multitasking efficacy when measured using a more naturalistic task, but not when measured using conventional laboratory multitasking tasks. To the authors’ knowledge, these findings have not been reported before in a single study. The current results therefore show the importance of the task effect when evaluating frequent internet gamers’ cognitive ability. This might also explain the discrepant findings on whether frequent internet use is associated with better multitasking ability. A similar phenomenon can be seen in prospective memory research, where naturalistic and laboratory-based prospective memory tasks were shown to be differentially sensitive in reflecting memory efficacy [26–28], as well as in the neuropsychological research domain, where more ecologically valid multitasking tests (such as the Multiple Errands Test [36] or the Virtual Errands Test [37]) were more sensitive in evaluating executive dysfunction in brain-damaged patients. The frequent internet gamers examined here did not necessarily have internet addiction disorder or internet gaming disorder according to the Chen Internet Addiction Scale (CIAS). Nine participants had CIAS scores greater than 68; when we excluded these nine participants from the analyses, the results remained the same as reported. Therefore, the better multitasking ability observed in the frequent IG group was not associated with internet addiction. The current findings are limited to individuals who play internet games in a healthy manner and do not represent those with addictive behavior.

A possible explanation for the different results between task paradigms is that a virtual naturalistic task is superficially similar to internet gaming scenarios. To test this hypothesis, we divided internet games into two types: those involving a gaming scenario different from the EVET (e.g., a shooting game or a multiplayer online battle arena) and those involving a scenario similar to the EVET (e.g., spatial navigation in a virtual environment). If the EVET were more sensitive in evaluating frequent versus infrequent internet gamers’ multitasking ability simply because of superficial task features, then frequent internet gamers who play games that differ from the EVET would exhibit worse EVET scores than those who play games more similar to it. Conversely, if internet gamers’ better multitasking ability in the EVET were not merely the result of superficial task similarity, the EVET scores of these two subgroups should not differ. We observed no difference between the subgroups.

Another more critical possibility from a theoretical point of view is that compared to the dual-task and task-switching paradigms, the EVET measures more cognitive components, including planning and execution, sustained attention, prospective memory, verbal and visuospatial working memory, task switching, processing speed, cognitive bottlenecks, and sequential control [35, 44]. In the EVET, participants perform a series of tasks in a particular order and by interleaving or switching from one to the other when each of the tasks is completed [35, 44]. In addition, an individual needs to memorize an errand list and preplan the order of the multiple errands to complete all tasks within 8 minutes while navigating around a virtual environment. Therefore, the current findings may suggest that the experience of playing internet games is more closely related to the ability to coordinate and strategically deploy several different cognitive functions, but not directly related to a single cognitive function, as often measured by a conventional laboratory task.

This phenomenon may be associated with the era of digital technology and the social changes that have fueled the rise of multitasking in increasingly complex scenarios. When playing online games, players must not only follow the rules but also frequently switch their attention back and forth between tasks; that is, they must multitask. These task demands may not be fully captured by conventional laboratory task paradigms, but they are captured by the more naturalistic EVET. This could also partly explain the discrepancies in previous findings regarding whether internet use is positively or negatively associated with cognitive functions [1–25]. The results may partly depend on whether multiple cognitive functions are measured in concert or in isolation.

However, further consideration is needed before fully advocating the use of the virtual naturalistic EVET. According to Diamond’s executive function model [38], executive function has three core subcomponents (inhibitory control, working memory, and cognitive flexibility) and three higher-order subcomponents (reasoning, problem-solving, and planning). Based on this model, the EVET appears to tap more of the higher-order subcomponents, which resemble aspects of fluid intelligence, including the abilities to reason, solve problems, and see patterns or relationships among items [49]. These higher-order subcomponents are not measured by the dual-task or task-switching paradigms, which instead measure the core (or “primary-order”) subcomponents of working memory and cognitive flexibility, respectively [38]. The current findings may therefore also suggest that playing internet games is more closely related to higher-order executive functions (reasoning, problem-solving, and planning) than to primary-order executive functions. However, this study did not measure all of the core and higher-order subcomponents with a comprehensive set of laboratory-based tasks, and it is possible that other conventional laboratory tasks that directly measure the higher-order subcomponents could also differentiate frequent from infrequent internet gamers. Future research is needed to clarify this issue.

If the EVET can sufficiently evaluate an individual’s multitasking ability in relation to different degrees of internet gaming experience (see [50]), applying it in clinical assessments for intervention programs may be more convenient than combining multiple conventional laboratory tasks. Video games have grown in popularity and availability over the past several decades and have become even more accessible through handheld devices such as mobile phones and tablets, which give users virtually unlimited access. Moreover, there is growing interest in developing video games to treat organic brain diseases in elderly patients [51, 52] or attention deficit hyperactivity disorder (ADHD) [53]. Therefore, choosing a concise and sensitive task to evaluate the outcomes of intervention programs is important for clinical research.

Assessments and outcome measures play important roles in determining the effectiveness of interventions. The current findings do not demonstrate a causal link between frequent internet gaming and multitasking performance. However, they may provide useful information for researchers choosing a sensitive task paradigm to evaluate the effects of internet gaming intervention programs on an individual’s multitasking ability.



We thank De-Cyuan Liu and Yun-Hsuan Chang for help with data collection. We thank the Mind Research and Imaging Center (MRIC) at NCKU for consultation and instrument availability. This work was supported by the grants MOST 104-2420-H-006-004-MY2 and MOST 106-2420-H-006-005-MY2 from the Ministry of Science and Technology of Taiwan to SH. The Edinburgh Virtual Errands Task program was provided by Steven Trawley, Matthew Logie, and Robert Logie.


  1. Dong G, Lin X, Zhou H, Lu Q. Cognitive flexibility in internet addicts: fMRI evidence from difficult-to-easy and easy-to-difficult switching situations. Addict Behav. 2014;39(3):677–83. pmid:24368005
  2. Stroop JR. Studies of interference in serial verbal reactions. J Exp Psychol. 1935;18(6):643.
  3. Young K. Internet addiction: diagnosis and treatment considerations. J Contemp Psychother. 2009;39(4):241–6.
  4. Chan PA, Rabinowitz T. A cross-sectional analysis of video games and attention deficit hyperactivity disorder symptoms in adolescents. Ann Gen Psychiatry. 2006;5(1):16.
  5. Kirsh SJ, Olczak PV, Mounts JR. Violent video games induce an affect processing bias. Media Psychol. 2005;7(3):239–50.
  6. Mathews VP, Kronenberger WG, Wang Y, Lurito JT, Lowe MJ, Dunn DW. Media violence exposure and frontal lobe activation measured by functional magnetic resonance imaging in aggressive and nonaggressive adolescents. J Comput Assist Tomogr. 2005;29(3):287–92. pmid:15891492
  7. Ophir E, Nass C, Wagner AD. Cognitive control in media multitaskers. Proc Natl Acad Sci U S A. 2009;106(37):15583–7. pmid:19706386
  8. Sun D-L, Chen Z-J, Ma N, Zhang X-C, Fu X-M, Zhang D-R. Decision-making and prepotent response inhibition functions in excessive internet users. CNS Spectr. 2009;14(2):75–81. pmid:19238122
  9. Zhou Y, Wu Y, Yang L, Fu L, He K, Wang S, et al. The impact of transportation control measures on emission reductions during the 2008 Olympic Games in Beijing, China. Atmos Environ. 2010;44(3):285–93.
  10. Zimmerman FJ, Christakis DA. Associations between content types of early media exposure and subsequent attentional problems. Pediatrics. 2007;120(5):986–92. pmid:17974735
  11. Boot WR, Kramer AF, Simons DJ, Fabiani M, Gratton G. The effects of video game playing on attention, memory, and executive control. Acta Psychol (Amst). 2008;129(3):387–98.
  12. Alzahabi R, Becker MW. The association between media multitasking, task-switching, and dual-task performance. J Exp Psychol Hum Percept Perform. 2013;39(5):1485. pmid:23398256
  13. Andrews G, Murphy K. Does video game playing improve executive functioning? In: Vanchevsky MA, editor. Front in Cog Sci. New York: Nova Science Publishers; 2006. p. 145–61.
  14. Chisholm JD, Hickey C, Theeuwes J, Kingstone A. Reduced attentional capture in action video game players. Atten Percept Psychophys. 2010;72(3):667–71. pmid:20348573
  15. Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, et al. Video game training enhances cognitive control in older adults. Nature. 2013;501(7465):97. pmid:24005416
  16. Green CS, Bavelier D. Action video game modifies visual selective attention. Nature. 2003;423(6939):534. pmid:12774121
  17. Johnson GM. Verbal and visual reasoning in relation to patterns of Internet use. Internet Res. 2008;18(4):382–92.
  18. Johnson GM. Cognitive processing differences between frequent and infrequent Internet users. Comput Human Behav. 2008;24(5):2094–106.
  19. Karle JW, Watter S, Shedden JM. Task switching in video game players: Benefits of selective attention but not resistance to proactive interference. Acta Psychol (Amst). 2010;134(1):70–8.
  20. Kearney P. Cognitive callisthenics: Do FPS computer games enhance the player’s cognitive abilities? In: Proceedings of DiGRA 2005: Changing Views: Worlds in Play; 2005.
  21. Ko C-H, Hsiao S, Liu G-C, Yen J-Y, Yang M-J, Yen C-F. The characteristics of decision making, potential to take risks, and personality of college students with Internet addiction. Psychiatry Res. 2010;175(1):121–5.
  22. Mishra J, Zinni M, Bavelier D, Hillyard SA. Neural basis of superior performance of action videogame players in an attention-demanding task. J Neurosci. 2011;31(3):992–8. pmid:21248123
  23. Riesenhuber M. An action video game modifies visual processing. Trends Neurosci. 2004;27(2):72–4. pmid:15106651
  24. Tun PA, Lachman ME. The association between computer use and cognition across adulthood: use it so you won’t lose it? Psychol Aging. 2010;25(3):560. pmid:20677884
  25. Unsworth N, Redick TS, McMillan BD, Hambrick DZ, Kane MJ, Engle RW. Is playing video games related to cognitive abilities? Psychol Sci. 2015;26(6):759–74. pmid:25896420
  26. Cherry KE, Martin RC, Simmons-D’Gerolamo SS, Pinkston JB, Griffing A, Drew Gouvier W. Prospective remembering in younger and older adults: Role of the prospective cue. Memory. 2001;9(3):177–93. pmid:11469312
  27. West R, Covell E. Effects of aging on event-related neural activity related to prospective memory. Neuroreport. 2001;12(13):2855–8. pmid:11588590
  28. Henry JD, MacLeod MS, Phillips LH, Crawford JR. A meta-analytic review of prospective memory and aging. Psychol Aging. 2004;19(1):27. pmid:15065929
  29. Kahneman D. Attention and effort. Englewood Cliffs, NJ: Prentice-Hall; 1973.
  30. Navon D, Gopher D. On the economy of the human-processing system. Psychol Rev. 1979;86(3):214.
  31. Wickens CD. Processing resources and attention. In: Multiple-task performance. 1991. p. 3–34.
  32. Allport A, Styles E, Hsieh S. Shifting intentional set: Exploring the dynamic control of tasks. In: Umiltà C, Moscovitch M, editors. Attention and performance 15: Conscious and nonconscious information processing. Cambridge, MA: MIT Press; 1994. p. 421–52.
  33. Meiran N. Reconfiguration of processing mode prior to task performance. J Exp Psychol Learn Mem Cogn. 1996;22(6):1423.
  34. Rogers RD, Monsell S. Costs of a predictable switch between simple cognitive tasks. J Exp Psychol Gen. 1995;124(2):207.
  35. Logie RH, Law A, Trawley S, Nissan J. Multitasking, working memory and remembering intentions. Psychol Belg. 2010;50(3–4):309–26.
  36. Burgess PW, Alderman N, Evans J, Emslie H, Wilson BA. The ecological validity of tests of executive function. J Int Neuropsychol Soc. 1998;4(6):547–58. pmid:10050359
  37. McGeorge P, Phillips LH, Crawford JR, Garden SE, Sala SD, Milne AB, et al. Using virtual environments in the assessment of executive dysfunction. Presence: Teleoperators Virtual Environ. 2001;10(4):375–83.
  38. Diamond A. Executive functions. Annu Rev Psychol. 2013;64:135–68. pmid:23020641
  39. Beck A, Ward C, Mendelson M, Mock J, Erbaugh J. An inventory for measuring depression. Arch Gen Psychiatry. 1961;4:561–71. pmid:13688369
  40. Beck AT, Steer RA, Brown GK. Beck Depression Inventory-II. San Antonio. 1996;78(2):490–8.
  41. Lu M-L. Reliability and validity of the Chinese version of the Beck Depression Inventory-II. Taiwanese J Psychiatry. 2002;16:301–10.
  42. Che H-H, Lu M-L, Chen H-C, Lee Y-J. Validation of the Chinese version of the Beck Anxiety Inventory. Formosan J Med. 2006;10(4):447–54.
  43. Steer RA, Beck AT. Beck Anxiety Inventory. In: Zalaquett CP, Wood RJ, editors. Evaluating stress: A book of resources. Lanham, MD: Scarecrow Education; 1997. p. 23–40.
  44. Logie RH, Trawley S, Law A. Multitasking: Multiple, domain-specific cognitive functions in a virtual environment. Mem Cognit. 2011;39(8):1561–74. pmid:21691876
  45. Trawley SL, Law AS, Logie RH. Event-based prospective remembering in a virtual world. Q J Exp Psychol. 2011;64(11):2181–93.
  46. Trawley SL, Law AS, Logie MR, Logie RH. Desktop virtual reality in psychological research: A case study using the Source 3D game engine. EVET Technical Report; 2013.
  47. Chen S-H, Weng L-J, Su Y-J, Wu H-M, Yang P-F. Development of a Chinese Internet addiction scale and its psychometric study. Chin J Psychol. 2003.
  48. Lin M-P. The impact of personality and cognitive factors on the internet addiction among college students in Taiwan: One-year follow-up. Unpublished doctoral dissertation, National Cheng Kung University, Taiwan; 2011.
  49. Ferrer E, O’Hare ED, Bunge SA. Fluid reasoning and the developing brain. Front Neurosci. 2009;3:3.
  50. Chang YH, Liu DC, Chen YQ, Hsieh S. The Relationship between Online Game Experience and Multitasking Ability in a Virtual Environment. Appl Cogn Psychol. 2017;31(6):653–61.
  51. Hall AK, Chavarria E, Maneeratana V, Chaney BH, Bernhardt JM. Health benefits of digital videogames for older adults: a systematic review of the literature. Games Health J. 2012;1(6):402–10. pmid:26192056
  52. Nacke L, Drachen A, Kuikkaniemi K, Niesenhaus J, Korhonen HJ, Hoogen WM, et al. Playability and player experience research. In: Proceedings of DiGRA 2009: Breaking New Ground: Innovation in Games, Play, Practice and Theory; 2009.
  53. Strahler Rivero T, Herrera Nuñez LM, Uehara Pires E, Amodeo Bueno OF. ADHD rehabilitation through video gaming: a systematic review using PRISMA guidelines of the current findings and the associated risk of bias. Front Psychiatry. 2015;6:151. pmid:26557098