
Reward-associated distractors can harm cognitive performance

  • Dorottya Rusz ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft

    d.rusz@psych.ru.nl

    Affiliation Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands

  • Erik Bijleveld,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands

  • Michiel A. J. Kompier

    Roles Conceptualization, Funding acquisition, Investigation, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands

Abstract

When people carry out cognitive tasks, they sometimes suffer from distractions, that is, drops in performance that occur close in time to task-irrelevant stimuli. In this research, we examine how the pursuit of rewards contributes to distractions. In two experiments, participants performed a math task (in which they could earn monetary rewards vs. not) while they were exposed to task-irrelevant stimuli (that were previously associated with monetary rewards vs. not). In Experiment 1, irrelevant cues that were previously associated with rewards (vs. not) impaired performance. In Experiment 2, this effect was only replicated when these reward-associated distractors appeared relatively early during task performance. While the results were thus somewhat mixed, they generally support the idea that reward associations can augment the negative effect of distractors on performance.

Introduction

Distractions, which we define as performance decrements that occur closely after the onset of a task-irrelevant stimulus, are believed to impair concentration and thwart people’s productivity [1–3]. For instance, interruptions from colleagues harm work productivity [4], using one’s laptop or smartphone during lectures is related to worse academic outcomes [5,6], and using one’s smartphone while driving can have fatal consequences [7]. Although these negative consequences are well established, the underlying cognitive and attentional mechanisms of distraction are not yet entirely clear.

In the past, distractions have mostly been seen as originating from a stimulus-driven (i.e., bottom-up) attentional mechanism. That is, stimuli that are physically salient (e.g., because of their abrupt onsets [8] or distinctive colors [9]) are more likely to attract attention, even when these stimuli are irrelevant to the task at hand. This attentional mechanism can explain, for example, why a blinking smartphone screen (with its abrupt onset and distinctive color) draws attention away from a lecture or from driving a car. Recent research, however, shows that physical salience alone may not fully explain distractions. There is rapidly growing evidence that the extent to which task-irrelevant cues grab attention also depends on how much value people associate with those cues [10–14]. In these studies, participants first learned to associate certain stimulus features (i.e., color) with the delivery of valuable rewards (i.e., earning money). Later, in a test phase, they performed a visual search task while the previously reward-associated cues reappeared as distractors that needed to be ignored. These studies repeatedly found that participants’ attention was captured by previously rewarded stimuli, even though these stimuli were completely irrelevant to the task at hand.

While the effect of reward-associated distractors is well established in attentional and visual search tasks (cf. [15–17]), fewer studies have investigated how reward-associated distractors affect other cognitive processes [18,19]. Because real-life tasks (e.g., taking an exam, writing a paper) often involve a large set of cognitive control operations (e.g., maintaining and updating goal-relevant information) beyond visual attention, it is important to investigate whether the impact of reward-related distractors generalizes across different cognitive operations [20–22]. If it does, this would suggest that reward-driven distractions have important implications for real-life settings, such as work, education, and driving, in which optimal performance requires central executive resources [23]. The first aim of this study, therefore, is to expand the existing literature and investigate whether the negative effect of reward-related distractors (i.e., reward-driven distraction) extends to cognitive control operations.

The second aim of this study is to test whether different motivational states influence this reward-driven distraction effect. If the extent to which people get distracted depends on how much value they associate with distractors, it should also matter how much value they associate with the current task. Specifically, people are expected to try to optimize performance in a task (i.e., to exploit) as long as this task yields more valuable outcomes than its potential alternatives [24,25]. In line with this idea, Müller and colleagues [26] found that monetary incentives can reduce the impact of distractors and help the maintenance of task-relevant information, leading to better performance. However, when the outcome value of the task decreases, people become less motivated and tend to search for (i.e., to explore) alternative behaviors that could provide higher value, eventually leading to distraction from the primary task [25]. Based on this reasoning, we tested whether distraction by reward-related cues is especially strong when the task does not yield any valuable outcomes; in other words, we predicted that reward-driven distraction is most pronounced when people are not motivated to pursue the current task.

To test these ideas, we developed a new experimental task, building on previous research [10,14]. In short, in the first part of the task, participants learned to associate different colors with monetary (vs. no monetary) rewards. In a second part, they solved math problems while the previously reward-associated colors reappeared, this time as stimuli that had to be ignored. To manipulate participants’ motivational states, some of the math problems were incentivized with monetary rewards. Below, we introduce our experimental task in detail, lay out our specific predictions, and present results from two experiments.

The experimental paradigm

Reward learning phase

In this task, we adopted a well-established reward learning and testing procedure (e.g., [10,11,14]). In the learning phase, each trial consisted of four stimulus pairs: a letter and a digit presented in close proximity (see Fig 1). Participants’ task was to indicate whether the letters (e.g., W, X, Y, Z) appeared in the correct alphabetical order. One of these letters was always colored either red or blue. Although participants were told that they could earn money based on correct responses, their reward also depended on the colored letter in the sequence. That is, a red letter always predicted earning a high reward (+8 eurocents), whereas a blue letter always predicted no reward (+0 eurocents) at the end of the trial (colors were counterbalanced across participants). We expected that, via repeated exposure (150 trials), participants would learn to associate rewards with these colors, and that these colors, in turn, would gain attentional processing priority, a mechanism that has been repeatedly demonstrated in previous research [11,27–30]. In other words, through repeated pairing with reward, these colors would become more salient and would therefore attract attention more than other stimuli.

Fig 1. Sequence of events in the training phase.

(A) An example of a no reward trial, where the red colored letter “X” predicted no reward. (B) An example of a high reward trial, where the blue colored letter “Y” predicted high reward (8 cents).

https://doi.org/10.1371/journal.pone.0205091.g001

Reward-driven distraction phase

Our main objective was to examine whether these reward-associated cues harm performance in a complex task. For this purpose, we chose a math task that requires a broad set of cognitive functions that people use at work and in education [31]. In this phase, participants again saw sequences of four stimulus pairs: a digit and a letter presented in close proximity (see Fig 2). This time, they had to add up the digits and report their sum. Importantly, one of the letters in the sequence was presented in the previously reward-associated (vs. no-reward-associated) color. These colored letters were now task-irrelevant, so they needed to be ignored.

Fig 2. Sequence of events in the test phase.

Examples of high or no task value trials with (A) a distractor (e.g., the red-colored “X”) that was previously associated with no reward and (B) a distractor (e.g., the blue-colored “Y”) that was previously associated with high reward.

https://doi.org/10.1371/journal.pone.0205091.g002

In general, we expected that colored letters that were associated with high (vs. no) rewards would impair performance. To understand this performance decrement, we have to zoom in on the exact procedure of a trial. First of all, trials were not self-paced: the digits were presented in a limited time window (700 ms per digit), so participants had to perform the mental additions rather quickly. This was especially demanding during the presentation of the second and third stimulus pairs, in which participants had to (a) keep mental representations of the targets active (i.e., maintain the sum of the previous digits in working memory), while (b) updating this mental representation with new target information (i.e., the next digit in the sequence). Reward-associated distractors appeared during these stimulus pairs. Importantly, as working memory prioritizes the processing of reward-related information [32,33], we expected that previously reward-associated distractors would be prioritized in working memory over the target digits. Consequently, there would be less capacity available to encode the target digits, which would weaken their mental representations and make it more difficult to update the representation with the subsequent digit, especially given the limited time. If mental representations indeed became weaker because of reward-associated distractors, participants would simply be unable to complete the mental operation within the allotted time, resulting in an incorrect response. Therefore, we operationalized performance as the percentage of accurate responses on the math task.

We also tested whether reward-driven distraction would be especially strong when people are not motivated to perform the task. To test this possibility, we manipulated participants’ motivational states in the test phase using a monetary reward procedure (e.g., [34–36]). That is, before a trial started, participants were told that they could earn 20 eurocents for a correct response. We expected that promising monetary rewards would induce a high-motivation state, which has been shown to boost the cognitive resources and effort devoted to a task [37–39]. In turn, we expected that this high motivational state would shield mental representations of goal-relevant information from distraction [26,40]. In sum, we expected that high motivational states would suppress reward-driven distraction.

Hypotheses

In line with decades of research (e.g., [41–44]), we hypothesize that people are more accurate in the math task when they can earn monetary rewards (Hypothesis 1). Second, and more importantly, we hypothesize that people are less accurate on the math task when they are exposed to distractors that were previously associated with high (vs. no) rewards (Hypothesis 2). Finally, we hypothesize that people are less accurate when they are exposed to distractors that were previously associated with high (vs. no) rewards, especially when their current task does not yield rewarding outcomes (Hypothesis 3).

Exploring reward-driven distraction

In addition to testing our hypotheses, we also explored two aspects of our paradigm. First, we explored whether the timing of reward-associated distractors mattered, that is, whether disruptions in performance were stronger when the previously reward-associated distractors appeared early (i.e., during the second stimulus pair) vs. late (i.e., during the third stimulus pair). Because people actively monitor the time flow of events and update their expectancies about future events [45–47], the timing of distractors may well affect reward-driven distractions.

Second, we explored whether reward-driven distraction influenced performance stability. That is, on top of traditional performance measures (i.e., response times and accuracy), we computed performance variability. Previous research implies that high motivational states lead to more stable performance (i.e., fewer fluctuations in performance; e.g., [40,48]). Based on this idea, it is plausible that increased motivation does not just have a general effect on accuracy, but also reduces the frequency of distractions and thus improves performance stability.

Experiment 1

Method

Participants and design.

This research has been approved by the Ethics Committee of the Social Science Faculty (ECSW2017-0805-50).

Forty-seven students from Radboud University participated in the current study. Students could participate if they (a) had slept at least 6 hours during the night before the experiment, (b) were not colorblind, and (c) were native Dutch speakers. After data collection, 3 participants nevertheless reported having slept less than 6 hours, so they were excluded from the final analysis. Moreover, following similar prior studies (e.g., [27–30]), we excluded 9 participants who performed below 60% accuracy; this ensured that the final sample consisted only of participants who were capable of performing the task. As such, the final sample consisted of thirty-five students (26 females and 9 males; mean age = 22.3 years, SD = 3.7). Participants received compensation in the form of a gift voucher based on their performance (ranging from 7.5 to 12.5 €). The study used a 2 (task value: low vs. high) × 2 (distractor value: low vs. high) within-subjects design.

Procedure.

Participants were seated in a cubicle in front of a computer. First, they signed a consent form and filled out a questionnaire assessing demographics (age, sex), hours of sleep during the previous night, and their need for money on a 1 (not at all) to 7 (very much) scale (“To what extent are you in need of money at the moment?”). Afterwards, they carried out the task (see below). Finally, they reported on a 1 (not at all) to 7 (very much) scale how motivated they were and how demanding and difficult they found the task (for descriptive statistics, see Table 1). The experiment took 40 minutes to finish. Upon completion of the experiment, participants were given the money they had earned during the task.

Table 1. Descriptive statistics of subjective measures separately for Experiment 1 and Experiment 2.

https://doi.org/10.1371/journal.pone.0205091.t001

Task.

Stimuli

The task was designed with E-prime 2.0. Our stimuli were made up of letters and numbers presented in font size 24 in the middle of a monitor screen with a resolution of 1920x1080 pixels.

Training phase

Participants first saw a fixation cross, then four sequential displays, each presenting a number and a letter (e.g., 8W, X5, 9Y, and Z7; see Fig 1). In this phase, the letters were the targets, and participants were instructed to report whether they appeared in the correct alphabetical order (e.g., W, X, Y, Z). They responded by pressing “Q” for correct and “P” for incorrect sequences. On half of the trials (n = 75), one of the letters had a different color; this colored letter could appear on either the second or the third sub-trial. If this letter was blue (or red, counterbalanced across participants), participants could earn a monetary reward (8 cents). If it was red (or blue, counterbalanced), participants could earn no monetary reward. On low-value trials (e.g., red), responses were followed by visual feedback indicating “Good” or “False”. High-reward trials (e.g., blue) were additionally followed by reward feedback (+8 cents) and the total amount that had been earned during the task so far. Participants were not informed about the reward contingency beforehand. There was a 500 ms break between trials. In total, participants completed 4 practice trials and one block of 150 training trials.
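The response rule of a training trial can be sketched as follows. This is an illustrative reconstruction in Python; the helper name is ours, not from the authors’ E-prime implementation.

```python
def correct_training_key(letters):
    """Return the correct key for one training trial: "Q" if the
    four letters appear in correct alphabetical order, "P" otherwise."""
    return "Q" if list(letters) == sorted(letters) else "P"

# Example from the text: W, X, Y, Z is in alphabetical order -> "Q".
```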

Test phase

After the training phase, participants directly started the test phase. First, they received instructions and then immediately began the math task. Participants first saw a fixation cross. Then, they saw the monetary reward that they could earn by responding correctly on that trial (see Fig 2). On half of the trials, participants could earn money (up to 20 cents); on the other half, they could not (0 cents). Subsequently, participants saw four displays, as in the training phase, each showing a number and a letter (e.g., 8W, X5, 9Y, and Z7). In this part, the numbers were the targets, and participants were instructed to report whether the sum of the presented numbers (e.g., 8 + 5 + 9 + 7 = 29) was higher or lower than the number presented in the next display (e.g., 28). They responded by pressing “Q” for smaller and “P” for larger sums (29 is larger than 28, so the correct response here would be “P”). On no task value trials, responses were followed by visual feedback indicating “Good” or “False”. High task value trials were additionally followed by reward feedback (e.g., +16 cents) and the total amount that had been earned during the task so far (e.g., 4.54 €). The amount that could be won on a given trial decreased with time [36], so fast responses were encouraged. There was a 500 ms break between trials. Identical to the training phase, on half of the trials one letter was always red, and on the other half one letter was always blue (again, these colored letters could only appear on either the second or the third sub-trial). The letters now served as task-irrelevant stimuli, previously associated with monetary (vs. no monetary) rewards. In total, participants completed 10 practice trials and one block of 64 test trials (16 trials per condition).
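The scoring rule for a single test-phase trial can be sketched in the same illustrative way (again, the function name and structure are ours, not the authors’ E-prime code):

```python
def correct_key(digits, comparison):
    """Return the correct response key for one math trial:
    participants sum the four digits and press "Q" if the sum is
    smaller than the comparison number, "P" if it is larger."""
    return "P" if sum(digits) > comparison else "Q"

# Worked example from the text: 8 + 5 + 9 + 7 = 29 vs. 28 -> "P".
```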

Results

Data treatment and performance measures

Following prior work (e.g., [12,49,50]), responses that were three standard deviations faster or slower than the participant’s mean, as well as responses faster than 300 ms (which were considered guesses), were deleted; this resulted in the exclusion of 5% of trials. For each condition, we computed three performance measures. First, our main performance measure was accuracy, the percentage of correct responses on the math task. Second, although we did not expect an effect of task value or distractor value on participants’ speed, we computed the mean response time (RTM) to explore average response speed. Third, we explored performance variability: we computed the RT coefficient of variation (RTCV) to assess relative speed variability on the math task, based on suggestions from prior work [40,51]. Neither RTM nor RTCV was influenced by the task and distractor value manipulations (all ps > .05; see descriptive statistics in Table 2, S1 Appendix for RT analyses, and Table 3 for variability analyses).
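A minimal sketch of this data treatment, assuming per-participant response times in milliseconds (the function names and example values are illustrative, not taken from the authors’ analysis scripts):

```python
import statistics

def trim_rts(rts, fast_cutoff_ms=300, sd_bound=3):
    """Drop guesses (< 300 ms) and responses more than 3 SDs
    from the participant's own mean, as described above."""
    mean = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return [rt for rt in rts
            if rt >= fast_cutoff_ms and abs(rt - mean) <= sd_bound * sd]

def rt_cv(rts):
    """RT coefficient of variation (RTCV): the SD of response
    times divided by their mean, a relative variability measure."""
    return statistics.stdev(rts) / statistics.mean(rts)
```

In practice, these per-condition RTCV values would then be entered into the same kind of within-subjects analysis as accuracy.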

Table 2. Descriptive statistics of outcome measures separately for Experiment 1 and Experiment 2.

https://doi.org/10.1371/journal.pone.0205091.t002

Table 3. Results of the experimental effects on accuracy and performance variability both in Experiment 1 and Experiment 2.

https://doi.org/10.1371/journal.pone.0205091.t003

Confirmatory analyses

To test Hypotheses 1, 2, and 3, we performed a GLM analysis with task value (high vs. no) and distractor value (high vs. no) as within-subjects independent variables and accuracy scores as the dependent variable (see Fig 3A). Effect sizes were calculated following Lakens [52]. In line with Hypothesis 1, the main effect of task value was significant, F(1, 34) = 6.92, p = .013, η2 = .03, indicating that participants were more accurate when they could earn money (vs. no money; see Table 2 for descriptive statistics). The main effect of distractor value was also significant, F(1, 34) = 11.66, p = .002, η2 = .03, showing that people were less accurate when they were exposed to a high (vs. no) value distractor (in line with Hypothesis 2). The Task value × Distractor value interaction was not significant, F(1, 34) = .79, p = .380, η2 = .003, thus showing no support for Hypothesis 3.
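The 2 × 2 within-subjects analysis above can be sketched as a repeated-measures ANOVA in Python. This is an illustrative reconstruction on fabricated accuracy scores (the real data are on OSF); the variable names, effect sizes, and noise model are our own.

```python
import random

import pandas as pd
from statsmodels.stats.anova import AnovaRM

random.seed(1)
rows = []
for subject in range(10):
    for task_value in ("no", "high"):
        for distractor in ("no", "high"):
            # Fabricated accuracy (%): a bonus for high task value,
            # a penalty for high-value distractors, plus noise.
            acc = (80.0
                   + (5.0 if task_value == "high" else 0.0)
                   - (3.0 if distractor == "high" else 0.0)
                   + random.gauss(0, 2))
            rows.append({"subject": subject,
                         "task_value": task_value,
                         "distractor_value": distractor,
                         "accuracy": acc})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["task_value", "distractor_value"]).fit()
print(res.anova_table)  # F value, df, and p for each effect
```

With real data, accuracy would first be aggregated to one score per participant per cell, exactly as AnovaRM expects.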

Fig 3. Results of Experiment 1.

(A) Accuracy scores for no (gray bars) vs. high (black bars) value distractor trials both in no vs. high task value conditions. (B) Mean accuracy scores by distractor value (high vs. low) on all trials (Overall), on trials where the distractor appeared early, and on trials where the distractor appeared late. Error bars reflect standard errors.

https://doi.org/10.1371/journal.pone.0205091.g003

We further explored whether (a) participants’ need for money (reported in S2 Appendix) and (b) early vs. late distractor appearance affected the results. We corrected for multiple testing by applying Pocock’s boundary [53] for 4 sequential analyses (i.e., the same GLM run 4 times: people low in need for money, people high in need for money, early trials, and late trials), lowering the alpha level from 0.05 to 0.0182, a procedure suggested by [54].
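In code, this correction is simply a stricter per-test threshold. A minimal sketch (the helper name is ours; the 0.0182 value is the Pocock-corrected alpha for 4 looks stated above):

```python
# Pocock-corrected per-test alpha for 4 sequential analyses at an
# overall alpha of .05, as given in the text.
POCOCK_ALPHA_4 = 0.0182

def significant(p_value, alpha=POCOCK_ALPHA_4):
    """Does this p-value pass the Pocock-corrected threshold?"""
    return p_value < alpha
```

For instance, the early-trial distractor effect in Experiment 1 (p = .018) just passes this corrected threshold, while a conventional p = .03 would not.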

Exploratory analysis: Distractor timing

We examined whether the timing of the (high vs. no reward) distractor in the sequence moderated the effect of distractor value. To test this, we performed the same GLM analysis as above, but now also included distractor timing (early vs. late) as an additional within-subjects predictor. We specifically examined the Distractor timing × Distractor value interaction, which was not significant, F(1, 34) = .98, p = .329, η2 < .001 (see Fig 3B). Thus, we found no clear evidence that high (vs. no) value distractors affected performance differently depending on whether they appeared early or late.

For consistency with Experiment 2 (see below), we further explored the data and ran our original GLM, with particular interest in the main effect of distractor value, separately for trials in which the distractor appeared early (i.e., in the second stimulus screen) vs. late (i.e., in the third stimulus screen). On early-distractor trials, participants were less accurate in the high-reward (M = 76%, SD = 16%) than in the no-reward distractor (M = 81%, SD = 16%) condition, F(1, 34) = 6.21, p = .018, η2 = .025. On late-distractor trials, the main effect of distractor value was not significant, F(1, 34) = 3.30, p = .078, η2 < .001.

Discussion

The results of Experiment 1 provide initial evidence for a motivational perspective on distraction. In line with Hypothesis 1, we found that people were more accurate when they could earn money on the task. This is consistent with the idea that motivation (e.g., from monetary incentives) boosts the cognitive control processes that lead to better performance on cognitive tasks [38].

In line with Hypothesis 2, we found that people were less accurate when they were exposed to irrelevant cues that were previously associated with high (vs. no) reward. This finding supports the ideas that (a) distraction may be reward-driven [15,17] and (b) reward-associated distractors interfere with the active, ongoing maintenance of task-relevant information and thus impair cognitive performance. We wanted to better understand when these reward-associated distractors harm task performance the most, so we further explored whether the timing of distractors moderated this reward-driven distraction effect, but found no evidence for this possibility.

Finally, contrary to Hypothesis 3, we found that participants were not more strongly distracted by high-reward distractors when they were in a low (vs. high) motivational state. To investigate whether the results were replicable, we conducted another experiment with an independent sample. Before the start of data collection, we preregistered Experiment 2, a direct replication of Experiment 1, at the Open Science Framework (https://osf.io/y74kx/). Specifically, we pre-registered our hypotheses, the planned sample size, and the analysis plan. As it is important to distinguish between analyses that were planned in advance and those that were not [55], we present preregistered analyses as confirmatory and non-preregistered analyses as exploratory.

Experiment 2

Method

The design, procedure, and task were identical to Experiment 1. The preregistration, experimental materials, and data can be found on OSF (https://osf.io/y74kx/). We conducted a power analysis with Glimmpse [56], which suggested that a sample size of N = 54 should be sufficient to detect all effects of interest with power = .90. Because of our rather strict exclusion criteria (see below), we wanted to be on the safe side and therefore recruited seventy-three students from Radboud University. The same a priori exclusion criteria were applied as in Experiment 1. One participant reported having slept less than 6 hours during the night before the experiment and five participants performed below 60% accuracy, so they were excluded from the final analysis. The final sample consisted of sixty-six students (50 females and 16 males; mean age = 23.4, SD = 5.4). For descriptive statistics of the subjective measures, see Table 1. Participants received monetary compensation in the form of a gift voucher based on performance (ranging from 7.5 to 12.5 €).

Results

Responses that were three standard deviations faster or slower than the participant’s mean and responses faster than 300 ms (which were considered guesses) were deleted, which resulted in the exclusion of 2% of trials. As pre-registered, to test Hypotheses 1, 2, and 3, we performed a GLM analysis with task value (high vs. no) and distractor value (high vs. no) as within-subject independent variables, and accuracy scores (percentage of correct responses) as the dependent variable (see Fig 4A).

Fig 4. Results of Experiment 2.

(A) Accuracy scores for no (gray bars) vs. high (black bars) value distractor trials both in no vs. high task value conditions. (B) Mean accuracy scores by distractor value (high vs. no) on all trials (Overall), on trials where the distractor appeared early, and on trials where the distractor appeared late. Error bars reflect standard errors.

https://doi.org/10.1371/journal.pone.0205091.g004

Confirmatory analyses.

In line with Hypothesis 1, and replicating the results of Experiment 1, the main effect of task value was significant, F(1, 65) = 8.65, p = .005, η2 = .001, indicating that participants were more accurate when they could earn money (vs. no money; see Table 2 for descriptive statistics). Unlike in Experiment 1, the main effect of distractor value was not significant, F(1, 65) = .13, p = .721, η2 < .001, showing no support for Hypothesis 2. As in Experiment 1, the interaction effect was also not significant, F(1, 65) = .22, p = .644, η2 < .001 (i.e., no support for Hypothesis 3).

Exploratory analyses of participants’ response times and need for money are reported in S1 Appendix and S2 Appendix. As in Experiment 1, we corrected for multiple testing by applying Pocock’s boundary [53] for 4 sequential analyses (i.e., the same GLM run 4 times: people low in need for money, people high in need for money, early trials, and late trials), lowering the alpha level from 0.05 to 0.0182.

Exploratory analysis: Distractor timing.

We ran our original GLM, but now added distractor timing as a factor. The Distractor timing × Distractor value interaction was significant, F(1, 65) = 4.52, p = .037, η2 = .011, suggesting that the timing of the distractor moderated the effect of distractor value. Follow-up analyses revealed that the main effect of distractor value was significant neither on early-distractor trials, F(1, 65) = 3.47, p = .067, η2 < .001, nor on late-distractor trials, F(1, 65) = 1.34, p = .251, η2 < .001.

In short, in Experiment 2, while the Timing × Distractor value interaction was significant, we found no main effect of distractor value in either the early or the late timing trials. By contrast, in Experiment 1, the Timing × Distractor value interaction was not significant, but the impact of early distractors seemed stronger than that of late distractors. To provide the most reliable effect size estimates we can offer at this point, we re-ran the analysis on the pooled data from both experiments, a procedure suggested by Lakens [57]. We explored the Distractor timing × Distractor value interaction, which was significant, F(1, 100) = 4.21, p = .043, η2 = .006. Inspection of Fig 5B suggests that the effect of distractor value was largest for early distractors (η2 = .013) compared to late distractors (η2 < .001). So, considering both studies together, we found some support for the possibility that early high-reward distractors had a stronger impact than late high-reward distractors. We note that this finding should be interpreted with caution, as these analyses were not pre-registered and the effect size was small.

Fig 5. Pooled data from Experiment 1 and Experiment 2.

(A) Accuracy scores for no (gray bars) vs. high (black bars) value distractor trials both in no vs. high task value conditions. (B) Mean accuracy scores by distractor value (high vs. low) on all trials (Overall), on trials where the distractor appeared early, and on trials where the distractor appeared late. Error bars reflect standard errors.

https://doi.org/10.1371/journal.pone.0205091.g005

Discussion

The purpose of Experiment 2 was to replicate the results of Experiment 1. As in Experiment 1, people were more accurate when they could earn money on the task (Hypothesis 1). Unexpectedly, however, we did not replicate the negative effect of reward-associated distractors on performance (Hypothesis 2); that is, people were not less accurate when they were exposed to distractors that carried high (vs. no) value. As this null effect was somewhat surprising given the strength of the effect in Experiment 1 (η2 = .03), and given that the task was identical, we again explored whether early vs. late distractors had different effects on accuracy, but now in the pooled sample (N = 101). We found that the impact of reward-associated distractors was more pronounced when they appeared early (vs. late) in the math sequence. Although this analysis was not pre-registered (and should thus be interpreted with caution), this finding suggests that the effect of distractor value may be more pronounced in the early stages of task performance (for interpretations, see General Discussion). In Experiment 2, as in Experiment 1, we did not find that people’s current motivational state affected the impact of high-value distractors; thus, the findings provide no support for Hypothesis 3.

General discussion

In this research, we had two major aims: (a) to test whether reward-associated distractors harm cognitive control processes and (b) to test whether this reward-driven distraction effect can be eliminated by high motivational states. Two identical experiments yielded strong evidence for the positive effect of monetary incentives on cognitive performance (Hypothesis 1), some evidence that reward-associated distractors disrupt cognitive control processes (Hypothesis 2), and no evidence that high motivational states (i.e., promising monetary rewards) reduce reward-driven distraction (Hypothesis 3). We will discuss each of these findings in detail below.

In line with Hypothesis 1, both experiments showed that people were more accurate in solving mental addition problems when they could earn money. This finding is in line with the well-established idea that monetary incentives boost cognitive processes (cf. [37,38,41]), particularly the active maintenance of information in working memory [58,59].

In contrast, evidence for reward-driven distraction (Hypothesis 2) was somewhat mixed across studies. In Experiment 1, we found direct support for this hypothesis: people were more distracted by high (vs. no) reward-associated distractors, independently of whether distractors appeared early vs. late during task performance. However, in Experiment 2, support for Hypothesis 2 was less clear: the timing of reward-associated distractors moderated the distraction effect. When we explored this idea further in a combined analysis of both studies, in order to obtain the most reliable effect size [52], we found that early distractors were indeed more harmful than late distractors (η2 = .013 vs. η2 = .000; see below for interpretations). Yet, as the latter finding resulted from a non-pre-registered analysis, it should be interpreted with some caution. In sum, results from two studies provide preliminary support for the idea (a) that irrelevant but rewarding cues may disrupt cognitive control processes and (b) that this effect may be stronger when distractors appear early in a sequence of cognitive control operations. This finding extends prior research by showing that reward-associated distractors not only slow down visual search [11,13], but likely also interrupt more complex cognitive control operations (i.e., maintaining and updating task-relevant information). These conclusions are consistent with a growing literature suggesting that distractions may stem from a reward-driven mechanism [15,39].

In this study, reward-driven distraction seemed to be influenced by the timing of the distractor in the math sequence. More specifically, we expected that high (vs. no) reward-associated distractors would gain priority in working memory over goal-relevant information [32,33], which would weaken the mental representations of targets and thus result in incorrect responses. Unexpectedly, this effect seemed strongest when high reward-associated distractors appeared early (vs. late) in the math sequence. This finding could be explained by conditional probability monitoring [45–47,60,61], the phenomenon that people continuously monitor the flow of events and update their expectancies about upcoming events; these expectancies, in turn, affect how they deal with future, unexpected events. Applied to our paradigm, it seems likely that participants learned that each trial contained a distractor. Participants may also have learned that when the distractor did not appear early (i.e., in the 2nd stimulus pair), it had to appear late (i.e., in the 3rd stimulus pair). As a result, when the distractor appeared late, participants had the opportunity to prepare for it, which helped them shield goal-relevant information from the reward-associated distractor. Conversely, such preparation was not possible when the distractor appeared early. As pooled data from both experiments were in line with this explanation, it would be interesting to test this idea in a confirmatory manner. Such confirmatory work would shed more light on the circumstances under which reward-associated distractors disrupt cognitive control processes; conditional probability monitoring may well be part of the explanation.
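The conditional-probability logic above can be made concrete with a toy calculation. This is only an illustrative sketch, under the assumption (matching the paradigm as described) that every trial contains exactly one distractor, appearing in the 2nd or 3rd stimulus pair with equal probability:

```python
from fractions import Fraction

# Assumption (illustrative): each trial contains exactly one distractor,
# placed in the 2nd ("early") or 3rd ("late") stimulus pair with equal odds.
p_early = Fraction(1, 2)   # P(distractor in 2nd pair)
p_late = 1 - p_early       # P(distractor in 3rd pair)

# Before the trial unfolds, both positions are equally likely.
print(p_early, p_late)     # 1/2 1/2

# Once the 2nd pair has passed without a distractor, the late position is
# certain: P(late | not early) = P(late) / P(not early) = (1/2) / (1/2) = 1.
p_late_given_not_early = p_late / (1 - p_early)
print(p_late_given_not_early)  # 1
```

On this account, a late distractor is fully predictable by the time it appears, giving participants a moment to prepare, whereas an early distractor arrives while its position is still uncertain.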

Although participants performed better when they could earn monetary rewards on the task, both experiments showed no evidence for Hypothesis 3, i.e., the prediction that reward-driven distraction would be strongest when participants are not motivated to perform the task. This is somewhat surprising, given that several contemporary models of motivation and task performance suggest that people’s performance is determined by a comparison between the value of the outcomes of the present task and the value of alternatives [62–64]. Also, we expected that higher motivational states would help protect mental representations of goal-relevant information [26,40] from reward-associated distracting information. In contrast to these ideas, our findings suggest that task-irrelevant stimuli (that are associated with rewards) may impact performance independently of whether people are currently motivated to perform well. Future research is necessary to better understand whether, and if so, under what conditions, rewards for the current task can shield people from distractions. It may have been confusing for participants that both the value of distractors and the value of the task were manipulated with monetary rewards. To circumvent this issue, future studies could apply simple “try harder” instructions, which have been shown to be effective in inducing stable performance that shields against the impact of distractors [40].

Adopting a reward-driven perspective on distraction [14] has implications for practice. First, optimal performance at work and school is known to rely on central executive resources [23], which we found to be disrupted by task-irrelevant stimuli associated with rewards. Smartphones may be an instance of such stimuli: at the least, smartphones are pervasive sources of distraction [65] that interfere with work [66] and study [5]. Assuming that smartphones have rewarding properties [67], this way of thinking about smartphones, i.e., as reward-related distractors, may support new models of smartphone-related behavior (e.g., smartphone addiction could be conceptualized, and treated, as a condition similar to Gambling Disorder).

Strengths and limitations

Throughout this project, we aimed to work in an open and transparent way, in line with recent discussions in psychology [55,68,69]. Specifically, we preregistered the second experiment and tried to directly replicate our results, aiming to actively avoid drawing false conclusions that would eventually distort the literature on the topic [70].

However, in this replication attempt (Experiment 2), we found evidence for reward-driven distraction only when the distractor appeared early in the math sequence. Thus, we have to be careful with drawing strong conclusions. Although it is plausible that out of multiple studies testing the same hypotheses, some tests turn out to be non-significant [57], the inconsistency between studies is surprising, as the effect of rewarding distractors on attentional capture has been well established by prior work [15]. We should mention, however, that our experiments differ from most prior experiments [11,14] in this field in two major ways. Below, we address these two aspects and provide methodological suggestions.

First, unlike previous experiments, we manipulated participants’ current motivational state in the test phase (to test Hypotheses 1 and 3). Specifically, in half of the trials, people could earn money for performing well. Possibly, these performance incentives affected people’s motivational state throughout the task, on all trials, in a sustained way (e.g., [51]). Very speculatively, such sustained changes in motivational state may have reduced the potency of high-value distractors. In future research, it may be promising to solely examine the effect of rewarding distractors, independently of current motivational states.

Second, an important difference between the present research and previous experiments [14] in this field concerns the spatial positions of target and distractor. Whereas previous studies used search tasks, in which stimuli are typically located far apart, we used a task in which target and distractor appeared close to each other, in the center of the screen. Importantly, this central part of the visual field is processed most efficiently [71]. This enhanced processing efficiency may explain why our task was less sensitive to the effects of high-value distractors. To further investigate this possibility, future studies may use a task in which target and distractor are located farther apart, as in visual search paradigms.

Concluding remarks

The present research provides some first steps in investigating when and how rewarding irrelevant cues disrupt executive control processes. We found that people sometimes perform worse on a math task when they are exposed to a stimulus that was previously rewarded (vs. not rewarded), especially when this stimulus appears early during task performance. This effect was not moderated by people’s current motivational states. Our studies join a growing body of literature [11,14,19,59] suggesting that it may be fruitful to think of distractions from a reward-driven perspective.

Supporting information

S1 Appendix. Exploratory analysis on participants’ response times both in Experiment 1 and Experiment 2.

https://doi.org/10.1371/journal.pone.0205091.s001

(DOCX)

S2 Appendix. Exploratory analysis on participants’ need for money both in Experiment 1 and Experiment 2.

https://doi.org/10.1371/journal.pone.0205091.s002

(DOCX)

References

  1. Gazzaley A, Rosen L. The distracted mind: Ancient brains in a high-tech world. Cambridge: MIT Press; 2016.
  2. Carr N. The shallows: What the Internet is doing to our brains. New York: Norton & Company; 2011.
  3. Hassan R. The age of distraction: Reading, writing, and politics in a high-speed networked economy. New Brunswick: Transaction Publishers; 2011.
  4. Sykes ER. Interruptions in the workplace: A case study to reduce their effects. Int J Inf Manage. 2011;31: 385–394.
  5. Sana F, Weston T, Cepeda NJ. Laptop multitasking hinders classroom learning for both users and nearby peers. Comput Educ. 2013;62: 24–31.
  6. Samaha M, Hawi NS. Relationships among smartphone addiction, stress, academic performance, and satisfaction with life. Comput Human Behav. 2016;57: 321–325.
  7. Caird JK, Johnston KA, Willness CR, Asbridge M, Steel P. A meta-analysis of the effects of texting on driving. Accid Anal Prev. 2014;71: 311–318. pmid:24983189
  8. Yantis S, Jonides J. Abrupt visual onsets and selective attention: evidence from visual search. J Exp Psychol Percept Perform. 1984;10: 601–621.
  9. Theeuwes J. Perceptual selectivity for color and form. Percept Psychophys. 1992;51: 599–606. pmid:1620571
  10. Le Pelley M, Pearson D, Griffiths O, Beesley T. When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. J Exp Psychol Gen. 2015;144: 158–171. pmid:25420117
  11. Theeuwes J, Belopolsky AV. Reward grabs the eye: Oculomotor capture by rewarding stimuli. Vision Res. 2012;74: 80–85. pmid:22902641
  12. Bourgeois A, Neveu R, Bayle DJ, Vuilleumier P. How does reward compete with goal-directed and stimulus-driven shifts of attention? Cogn Emot. 2015;31: 1–10. pmid:26403682
  13. Hickey C, van Zoest W. Reward-associated stimuli capture the eyes in spite of strategic attentional set. Vision Res. 2013;92: 67–74. pmid:24084197
  14. Anderson BA, Laurent PA, Yantis S. Value-driven attentional capture. Proc Natl Acad Sci. 2011;108: 10367–10371. pmid:21646524
  15. Anderson BA. The attention habit: How reward learning shapes attentional selection. Ann N Y Acad Sci. 2016;1369: 24–39. pmid:26595376
  16. Le Pelley M, Mitchell CJ, Beesley T, George DN, Wills AJ. Attention and associative learning in humans: An integrative review. Psychol Bull. 2016;142: 1111–1140. pmid:27504933
  17. Failing M, Theeuwes J. Selection history: How reward modulates selectivity of visual attention. Psychon Bull Rev. 2017: 1–25.
  18. Infanti E, Hickey C, Turatto M. Reward associations impact both iconic and visual working memory. Vision Res. 2015;107: 22–29. pmid:25481632
  19. Krebs RM, Boehler CN, Woldorff MG. The influence of reward associations on conflict processing in the Stroop task. Cognition. 2010;117: 341–347. pmid:20864094
  20. Anderson BA, Laurent PA, Yantis S. Generalization of value-based attentional priority. Vis cogn. 2012;20: 37–41. pmid:24294102
  21. Hickey C, Kaiser D, Peelen MV. Reward guides attention to object categories in real-world scenes. J Exp Psychol Gen. 2015;144: 264–273. pmid:25559653
  22. Lee J, Shomstein S. Reward-based transfer from bottom-up to top-down search tasks. Psychol Sci. 2014;25: 466–475. pmid:24335604
  23. Banich MT. Executive function: The search for an integrated account. Curr Dir Psychol Sci. 2009;18: 89–94.
  24. Cohen JD, McClure SM, Yu AJ. Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philos Trans R Soc Lond B Biol Sci. 2007;362: 933–942. pmid:17395573
  25. Aston-Jones G, Cohen JD. An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annu Rev Neurosci. 2005;28: 403–450. pmid:16022602
  26. Müller J, Dreisbach G, Goschke T, Hensch T, Lesch K-P, Brocke B. Dopamine and cognitive control: the prospect of monetary gains influences the balance between flexibility and stability in a set-shifting paradigm. Eur J Neurosci. 2007;26: 3661–3668. pmid:18088285
  27. Anderson BA, Laurent PA, Yantis S. Value-driven attentional priority signals in human basal ganglia and visual cortex. Brain Res. 2014;1587: 88–96. pmid:25171805
  28. Failing M, Nissens T, Pearson D, Le Pelley ME, Theeuwes J. Oculomotor capture by stimuli that signal the availability of reward. J Neurophysiol. 2015;114: 2316–2327. pmid:26289464
  29. Maclean MH, Giesbrecht B. Neural evidence reveals the rapid effects of reward history on selective attention. Brain Res. 2015;1606: 86–94. pmid:25701717
  30. Hickey C, Chelazzi L, Theeuwes J. Reward changes salience in human vision via the anterior cingulate. J Neurosci. 2010;30: 11096–11103. pmid:20720117
  31. DeStefano D, LeFevre J. The role of working memory in mental arithmetic. Eur J Cogn Psychol. 2004;16: 353–386.
  32. Klink PC, Jeurissen D, Theeuwes J, Denys D, Roelfsema PR. Working memory accuracy for multiple targets is driven by reward expectation and stimulus contrast with different time-courses. Sci Rep. 2017;7. pmid:28831072
  33. Gong M, Li S. Learned reward association improves visual working memory. J Exp Psychol Hum Percept Perform. 2014;40: 841–856. pmid:24392741
  34. Bijleveld E, Custers R, Aarts H. Adaptive reward pursuit: how effort requirements affect unconscious reward responses and conscious reward decisions. J Exp Psychol Gen. 2012;141: 728–742. pmid:22468672
  35. Bijleveld E, Custers R, Aarts H. The unconscious eye opener: Pupil dilation reveals strategic recruitment of resources upon presentation of subliminal reward cues. Psychol Sci. 2009;20: 1313–1315. pmid:19788532
  36. Bijleveld E, Custers R, Aarts H. Once the money is in sight: Distinctive effects of conscious and unconscious rewards on task performance. J Exp Soc Psychol. 2011;47: 865–869.
  37. Pessoa L. How do emotion and motivation direct executive control? Trends Cogn Sci. 2009;13: 160–166. pmid:19285913
  38. Pessoa L, Engelmann JB. Embedding reward signals into perception and cognition. Front Neurosci. 2010;4: 1–8.
  39. Botvinick M, Braver T. Motivation and cognitive control: From behavior to neural mechanism. Annu Rev Psychol. 2015;66: 83–113. pmid:25251491
  40. Steinborn MB, Langner R, Huestegge L. Mobilizing cognition for speeded action: try-harder instructions promote motivated readiness in the constant-foreperiod paradigm. Psychol Res. 2017;81: 1135–1151. pmid:27650820
  41. Zedelius CM, Veling H, Custers R, Bijleveld E, Chiew KS, Aarts H. A new perspective on human reward research: How consciously and unconsciously perceived reward information influences performance. Cogn Affect Behav Neurosci. 2014;14: 493–508. pmid:24399682
  42. Bijleveld E, Custers R, Aarts H. Human reward pursuit: From rudimentary to higher-level functions. Curr Dir Psychol Sci. 2012;21: 194–199.
  43. Garbers Y, Konradt U. The effect of financial incentives on performance: A quantitative review of individual and team-based financial incentives. J Occup Organ Psychol. 2014;87: 102–137.
  44. Liljeholm M, O’Doherty JP. Anything you can do, you can do better: neural substrates of incentive-based performance enhancement. PLoS Biol. 2012;10: e1001272. pmid:22363210
  45. Langner R, Steinborn MB, Chatterjee A, Sturm W, Willmes K. Mental fatigue and temporal preparation in simple reaction-time performance. Acta Psychol (Amst). 2010;133: 64–72. pmid:19878913
  46. Steinborn MB, Langner R. Distraction by irrelevant sound during foreperiods selectively impairs temporal preparation. Acta Psychol (Amst). 2011;136: 405–418. pmid:21333960
  47. Miller J, Schröter H. Online response preparation in a rapid serial visual search task. J Exp Psychol Hum Percept Perform. 2002;28: 1364–1390. pmid:12542133
  48. Sanders A. Elements of human performance: reaction processes and attention in human skill. Lawrence Erlbaum Associates; 1998.
  49. Asgeirsson AG, Kristjánsson Á. Random reward priming is task-contingent: the robustness of the 1-trial reward priming effect. Front Psychol. 2014;5: 309. pmid:24782808
  50. Anderson BA, Yantis S. Value-driven attentional and oculomotor capture during goal-directed, unconstrained viewing. Atten Percept Psychophys. 2012;74: 1644–1653. pmid:22810561
  51. Flehmig HC, Steinborn M, Langner R, Scholz A, Westhoff K. Assessing intraindividual variability in sustained attention: Reliability, relation to speed and accuracy, and practice effects. Psychol Sci. 2007;49: 132.
  52. Lakens D. Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Front Psychol. 2013;4. pmid:24324449
  53. Pocock SJ. Group sequential methods in the design and analysis of clinical trials. Biometrika. 1977;64: 191–199.
  54. Lakens D, Etz AJ. Too true to be bad: When sets of studies with significant and nonsignificant findings are probably true. Soc Psychol Personal Sci. 2017; 1–7. pmid:29276574
  55. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie N, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1: 1–9.
  56. Guo Y, Pandis N. Sample-size calculation for repeated-measures and longitudinal studies. Am J Orthod Dentofac Orthop. 2015;147: 146–149. pmid:25533082
  57. Lakens D. Performing high-powered studies efficiently with sequential analyses. Eur J Soc Psychol. 2014;44: 701–710.
  58. Heitz RP, Schrock JC, Payne TW, Engle RW. Effects of incentive on working memory capacity: Behavioral and pupillometric data. Psychophysiology. 2008;45: 119–129. pmid:17910734
  59. Zedelius CM, Veling H, Aarts H. Boosting or choking—How conscious and unconscious reward processing modulate the active maintenance of goal-relevant information. Conscious Cogn. 2011;20: 355–362. pmid:20510630
  60. Los SA, Kruijne W, Meeter M. Hazard versus history: Temporal preparation is driven by past experience. J Exp Psychol Hum Percept Perform. 2017;43: 78–88. pmid:27808547
  61. Langner R, Steinborn MB, Eickhoff SB, Huestegge L. When specific action biases meet nonspecific preparation: Event repetition modulates the variable-foreperiod effect. J Exp Psychol Hum Percept Perform. 2018. pmid:29975096
  62. Kurzban R, Duckworth A, Kable JW, Myers J. An opportunity cost model of subjective effort and task performance. Behav Brain Sci. 2013;36: 661–679. pmid:24304775
  63. Kool W, Botvinick M. A labor/leisure tradeoff in cognitive control. J Exp Psychol Gen. 2014;143: 131–141. pmid:23230991
  64. Inzlicht M, Schmeichel BJ, Macrae CN. Why self-control seems (but may not be) limited. Trends Cogn Sci. 2014;18: 127–133. pmid:24439530
  65. Wilmer HH, Chein JM. Mobile technology habits: patterns of association among device usage, intertemporal preference, impulse control, and reward sensitivity. Psychon Bull Rev. 2016;23: 1607–1614. pmid:26980462
  66. Jett QR, George JM. Work interrupted: A closer look at the role of interruptions in organizational life. Acad Manag Rev. 2003;28: 494–507.
  67. Oulasvirta A, Rattenbury T, Ma L, Raita E. Habits make smartphone use more pervasive. Pers Ubiquitous Comput. 2012;16: 105–114.
  68. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349: aac4716. pmid:26315443
  69. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015;348: 1422–1425. pmid:26113702
  70. Chambers CD. Registered Reports: A new publishing initiative at Cortex. Cortex. 2013;49: 609–610.
  71. Carrasco M. Visual attention: The past 25 years. Vision Res. 2011;51: 1484–1525. pmid:21549742