
Vehicle avoidance: The hierarchy of visual attention towards animals, plants, and vehicles

  • Chihiro Kioka,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Sustainable System Sciences, Osaka Prefecture University, Sakai, Japan

  • Kunihito Tobita

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Writing – original draft, Writing – review & editing

    tobita@omu.ac.jp

    Affiliation Department of Psychology, Osaka Metropolitan University, Sakai, Japan

Abstract

The biophilia hypothesis posits that humans have an innate affinity for nature, with natural landscapes effortlessly capturing their attention, and a tendency to seek nature. The animate monitoring hypothesis suggests that humans have evolved to quickly detect and respond to animals for survival. The plant awareness disparity hypothesis argues that people notice plants less than animals due to perceptual biases and preferences. Based on these hypotheses, it was predicted that visual attention would favour animals, plants, and manufactured objects, in that order. This study investigated the hierarchy of visual attention towards animals (birds, mammals, and humans), plants (fruit), and manufactured objects (vehicles) using a dot-probe task framework. The findings revealed no significant differences in reaction time or attentional bias for animal or plant stimuli. In contrast, perceptual processing was inhibited when viewing a vehicle and attentional avoidance occurred, resulting in slower reactions than to animals or plants. These findings offer partial support for the proposed hierarchy of visual attention, suggesting that while natural stimuli such as animals and plants receive comparable attention, some manufactured objects may elicit perceptual avoidance.

Introduction

Frequent visits to green spaces are associated with a lower risk of attention deficit hyperactivity disorder [1], Alzheimer’s disease [2], depression, and hypertension [3]. These examples illustrate the physical and mental health benefits people can experience from spending time in various natural environments. The biophilia hypothesis posits that humans possess a high innate affinity for nature, which is a trait modern humans share with our ancestors [4]. By gravitating towards nature, individuals may envision a future in which they can thrive [5], optimise their abilities, and enhance their overall well-being [6].

Evidence supporting this preference for nature can also be observed in visual attention bias. In a previous study using the dot-probe task, probes replacing natural landscapes dominated by vegetation were detected more quickly than those replacing urban landscapes [7]. Schiebel et al. [8] further confirmed participants’ inclination to approach nature and avoid urban environments using the dot-probe task, implicit association test, and approach-avoidance task. Natural landscapes capture more human attention than urban landscapes, effortlessly guiding one’s focus towards nature [9].

The animate monitoring hypothesis (AMH) posits that, within the realm of nature, animate objects possess an attentional advantage [10]. The group-foraging nature of early humans meant that rapid and frequent monitoring of both humans and animals was essential because their situations could change quickly. The AMH has been validated through a range of tasks. For example, in the change-detection task [10], changes involving humans and animals were found to be more frequently and swiftly detected than changes related to vehicles, buildings, plants, or tools, and were less prone to being overlooked. Furthermore, in the visual search task [11], animals were detected more rapidly than inanimate objects, while in the attentional blink task [12], the period during which new stimuli went undetected after an initial detection was shorter for animals than for non-animals. In addition, a comparison of detection performance between artificial intelligence (AI) and humans revealed that while AI and humans performed equally well in detecting cars, humans outperformed AI in detecting humans [13]. These findings suggest that humans excel at recognising object categories of evolutionary significance. As noted above, accumulating evidence supports the AMH.

In contrast, the concept of plant awareness disparity (PAD) [14], formerly known as plant blindness, has been gaining wide recognition. Wandersee and Schussler [15] raised concerns that the United States displayed less interest in plants than animals, thereby negatively affecting the public’s scientific literacy. They initiated a campaign to enhance the public’s understanding of plants, attributing PAD not only to a preference for animals but also to visual perception mechanisms that make plants more likely to be overlooked. In practice, participants have been found to recall more animals than plants in recall tasks [16,17], and more readily detect animals compared with plants in the attentional blink task [18]. These studies underscore that PAD is reflected not only in people’s knowledge and interests but also in their behaviour.

Considering the three theories of biophilia, AMH, and PAD, visual attention likely prioritises animals, followed by plants and then inanimate objects. Previous studies have reported findings consistent with this hypothesis. Jackson and Calvillo [19] found that in a visual search task, humans and animals, body parts and fruits, and tools and vehicles were detected most quickly, in that order. They also suggested that evolutionary relatedness, which indicates the extent to which humans were exposed to relevant stimuli throughout the evolutionary process, may significantly affect visual information processing. Although differences in attentional tendencies among the three groups were noted, a comprehensive understanding of the underlying mechanisms is still needed. One perspective holds that people may respond faster to the probe because the target stimulus captures attention at its location. Conversely, the difficulty of diverting attention from an alerting target may delay responses to other stimuli [20]. Thus, the AMH can be interpreted in two ways: animal-related stimuli elicit faster responses because they inherently demand attention, or responses to non-animal stimuli may be delayed because attention remains preoccupied with the alerting animal target.

The purpose of this study was to confirm the hierarchy in visual attention towards animals, plants, and manufactured objects. This study further sought to elucidate whether this hierarchy is driven by facilitated attention or difficulty in disengaging. To achieve these aims, we used a dot-probe task capable of distinguishing between facilitated attention and attentional disengagement difficulty. Specifically, we tested three hypotheses: (1) responses to presented stimuli are faster in the order of animals, plants, and manufactured objects, (2) attentional facilitation occurs towards animals and plants, and (3) attentional facilitation is stronger for animals than for plants. If these hypotheses are supported, it would demonstrate that responses to animals are faster owing to the attraction of visual attention, and that the attracting power is more robust for animals than for plants.

Experiment 1

Methods

Participants.

The Graduate School of Sustainable System Sciences Ethics Committee at Osaka Metropolitan University approved all procedures (2022 (1) – 36). The experimental protocol adhered to the latest version of the Declaration of Helsinki. Informed consent was obtained through the participant recruitment screen on the crowdsourcing platform. Participants indicated their understanding of the experiment’s details displayed on the screen and their willingness to participate by checking a designated box. The participants received financial compensation for their involvement, and all surveys were conducted online. The recruitment period for this experiment was from May 14 to May 23, 2023.

An a priori power analysis conducted using the R package Superpower [21] determined that a sample size of 47 was needed to obtain a power of over .80 (see S1 Appendix for details). To account for potential data exclusions, 80 participants were recruited. Among the 80 participants engaged through CrowdWorks (https://crowdworks.jp), three were excluded because of errors in saving their experimental data. After further excluding two who failed the instructional manipulation check and one who self-reported that their experimental data should not be included in the study, data from 74 participants (27 female, 46 male, 1 non-response; Mage = 41.11 years, SD = 8.97, range = 22–58 years) were analysed.

Stimuli.

Object categorisation involves distinct cognitive neural processes that are contingent on an object’s conceptual level [21,22]. Birds were categorised as animals, fruits as plants [23], and vehicles and tools as manufactured objects [10,19], aligning with the superordinate categorisation of Rosch et al. [24] to equalise category hierarchies. Birds, fruit, and vehicles were designated target stimuli, whereas tools were considered neutral stimuli because the previous study reported slower reaction times towards tools than the other three categories [10,19]. The stimuli for birds were selected from the class Aves in the Linnaean taxonomy [25]. The Standard Tables of Food Composition in Japan were used to select fruits [26]. Land vehicles were chosen from class 12 (Vehicles; apparatus for locomotion by land, air, or water) defined in the Nice classification of the World Intellectual Property Organization [27]. Tools were subjectively selected by the authors, ensuring no overlap with other categories. Sixteen images, all depicting tools, were used in the practice trials. The main trials included 112 images in total: 64 tools, 16 birds, 16 fruits, and 16 vehicles. There was no overlap between the images used in the practice and main trials. All stimuli were static images obtained from Pixabay (https://pixabay.com/) and were resized to 260 x 260 px.

The effect of colour information on object recognition was stronger for colour diagnostic objects (objects, such as fire engines and lemons, that appear in a consistent colour) than for non-colour diagnostic objects (objects, such as cars and hammers, that do not consistently appear in a characteristic colour) [28,29]. The influence of colour-based information on neural activity during object processing was found to occur for fruits but not for animals or tools [23]. Moreover, because colour captures attention merely by being present on the screen, regardless of stimulus category [30–33], line drawings that excluded the effects of colour were used in the experiments.

As an index of the objective visual complexity of the stimuli, the compressed image file size was used (see Supplementary Materials S1 Table). The JPEG compressed file size correlates with subjective evaluations of visual complexity, indicating that images with larger file sizes are more complex [34–37]. An analysis of variance was conducted to examine the effect of stimulus category on visual complexity, yielding significant results (see Supplementary Materials S2 Table). Post hoc t-tests revealed that fruits were significantly more complex than birds, whereas there were no significant differences between vehicles and fruits, or vehicles and birds. Additionally, birds, fruits, and vehicles were found to be more complex than tools. For the post hoc t-tests, p-values were adjusted using the Benjamini and Hochberg method to control the false discovery rate.
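The complexity index can be sketched in a few lines. This is an illustration of the idea rather than the authors' script: the study measured the JPEG file size of each stimulus, whereas here zlib compression of raw pixel bytes stands in for JPEG so the example remains self-contained.

```python
import random
import zlib

def complexity_proxy(pixel_bytes: bytes) -> int:
    """Compressed size in bytes: larger values indicate a more
    visually complex image. The study used JPEG file size; zlib
    compression is a rough stand-in here."""
    return len(zlib.compress(pixel_bytes, level=9))

# A uniform (simple) 260 x 260 "image" compresses far better
# than a noisy (complex) one of the same dimensions.
random.seed(0)
flat = bytes(260 * 260)
noisy = bytes(random.randrange(256) for _ in range(260 * 260))
assert complexity_proxy(flat) < complexity_proxy(noisy)
```

The same comparison logic applies whichever compressor is used, since both exploit redundancy that simple images have and complex images lack.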

Procedure.

This experiment used the version of the dot-probe task developed by Koster et al. [20], which incorporates neutral trials as a baseline for reaction time and enables differentiation between facilitated attention and attentional disengagement difficulty. The experimental programme was constructed using PsychoPy ver. 2022.1.2 [38] and hosted on Pavlovia.org (https://pavlovia.org/). The total duration of the experiment was approximately 20 minutes.

Fig 1 illustrates the sequence of a single trial. A blank screen was presented for 1000 ms, followed by a fixation point at the centre of the screen for 1000 ms. Two images were simultaneously displayed side-by-side following the disappearance of the fixation point. Stimulus onset asynchrony (SOA) was set at 100 ms and 500 ms with identical stimulus durations to assess automatic initial attention allocation and controlled attention allocation [39]. After the images disappeared, the probe appeared in one of the two positions where the images had been presented and remained visible until the participant responded. Participants were instructed to press the ‘F’ key as quickly as possible using their index finger if the probe appeared on the left side of the screen or the ‘J’ key if it appeared on the right.

The images were paired and featured either one target and one neutral stimulus or two neutral stimuli. Pairings of target and neutral stimuli were presented, and trials were considered congruent if the probe appeared in the position of the target stimulus. Trials with the probe on the opposite side were deemed incongruent. Each block comprised 56 trials arranged in a 7 (trial type: bird congruent/incongruent, fruit congruent/incongruent, vehicle congruent/incongruent, neutral) × 2 (stimulus position: left, right) × 2 (probe position: left, right) × 2 (SOA: 100, 500) design. Each image was presented once within a block and the order and pairing combinations were randomised.
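The 7 × 2 × 2 × 2 block structure described above can be generated mechanically. A minimal sketch (the trial-type labels and dictionary keys are illustrative, not taken from the authors' code):

```python
import itertools
import random

TRIAL_TYPES = [
    "bird-congruent", "bird-incongruent",
    "fruit-congruent", "fruit-incongruent",
    "vehicle-congruent", "vehicle-incongruent",
    "neutral",
]

def make_block(seed=None):
    """Build one 56-trial block: 7 trial types x 2 stimulus positions
    x 2 probe positions x 2 SOAs, shuffled into random order."""
    trials = [
        {"type": t, "stim_pos": sp, "probe_pos": pp, "soa_ms": soa}
        for t, sp, pp, soa in itertools.product(
            TRIAL_TYPES, ("left", "right"), ("left", "right"), (100, 500))
    ]
    random.Random(seed).shuffle(trials)
    return trials

block = make_block(seed=1)
assert len(block) == 56
```

Each factor combination appears exactly once per block, matching the crossed design; image-to-trial assignment would be randomised separately.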

Participants completed eight practice trials, followed by four blocks, totalling 224 trials. Between blocks, the participants were allowed an arbitrary rest period. An instructional manipulation check was administered after the second block to identify participants who had not earnestly engaged in the experiment [40,41]. At this point, participants were shown the following instruction: ‘Last, it is vital to our study that we only include responses from participants who devoted their full attention to this study. Otherwise, years of effort (the researchers’ efforts and other participants’ time) could be wasted. You will receive credit for this study no matter how you answer the next item. We appreciate your honesty!’ Participants were then asked to express their preference with a simple ‘yes’ or ‘no’ response to the following question: ‘In your honest opinion, should we use your data in our analyses for this study?’ This question was designed to efficiently assess participants’ attentiveness to the experiment [42].

Statistical analysis.

The attentional bias index (ABI), attentional facilitation index (AFI), and disengagement index (DI) were computed based on reaction times [43,44]. The ABI was defined as the reaction time in the incongruent trials minus the reaction time in the congruent trials. A positive ABI value indicated that attention was directed towards the target stimulus (vigilance), whereas a negative value signified that attention was turned away from the target stimulus (avoidance). The AFI was calculated by subtracting the reaction time in the congruent trials from the reaction time in the neutral trials. A positive AFI indicated that attention was facilitated towards the target stimulus, whereas a negative AFI suggested inhibited processing of the target stimulus. The DI was obtained based on reaction time in the neutral trials minus the reaction time in the incongruent trials, with a negative DI indicating difficulty disengaging attention from the target stimulus [45].
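The three indices reduce to simple differences of mean reaction times. A minimal sketch with hypothetical RT values (the numbers below are illustrative, not data from the study):

```python
def attention_indices(rt_congruent, rt_incongruent, rt_neutral):
    """Compute ABI, AFI, and DI from mean reaction times in ms.
    ABI > 0: vigilance towards the target; ABI < 0: avoidance.
    AFI > 0: facilitated attention; AFI < 0: inhibited processing.
    DI < 0: difficulty disengaging from the target."""
    abi = rt_incongruent - rt_congruent
    afi = rt_neutral - rt_congruent
    di = rt_neutral - rt_incongruent
    return abi, afi, di

# Hypothetical means: slower congruent trials yield a negative ABI
# (avoidance) and a negative AFI (inhibited processing of the target).
abi, afi, di = attention_indices(480.0, 470.0, 472.0)
assert (abi, afi, di) == (-10.0, -8.0, 2.0)
```

Note that ABI = AFI − DI by construction, so the two component indices jointly decompose the overall bias.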

A within-subjects three-factor analysis of variance (ANOVA) (factors: category, congruency, and SOA) was applied to the reaction times. For the ABI, AFI, and DI, a one-sample t-test was conducted with the null hypothesis set to zero. Additionally, a within-subjects two-factor ANOVA (factors: category and SOA) was used to analyse each indicator. Mendoza’s multi-sample sphericity test [46] was used before the ANOVA, and degrees of freedom were adjusted using Greenhouse–Geisser’s ε [47] for factors violating the sphericity assumption. For one-sample and post hoc t-tests, p-values were adjusted using the false discovery rate method proposed by Benjamini and Hochberg [48]. Analyses were conducted using R software version 4.1.3 [49] and the ‘anovakun’ function version 4.8.9 [50] for ANOVAs. The significance level was set at .05 for all analyses.
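The Benjamini–Hochberg adjustment used for the t-tests can be sketched as follows. This mirrors the standard step-up procedure (the behaviour of R's `p.adjust(method = "BH")`), not the authors' own analysis code:

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg step-up FDR adjustment of p-values."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    # of the adjusted values.
    for offset, i in enumerate(reversed(order)):
        rank = n - offset  # 1-based rank in ascending order
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# The smallest p-value is scaled most; ties in adjusted values can
# arise from the monotonicity constraint.
assert [round(p, 4) for p in bh_adjust([0.01, 0.04, 0.03])] == [0.03, 0.04, 0.04]
```

Rejecting hypotheses whose adjusted p-value falls below .05 then controls the false discovery rate at 5%.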

Split-half reliabilities for reaction time, ABI, AFI, and DI were obtained using the splithalf package [51] for R. Reliability estimates were based on 5000 random splits and corrected with the Spearman–Brown formula [52,53].
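The permutation-based split-half procedure can be sketched in plain Python. This is an illustrative reimplementation of the idea (random trial splits, Pearson correlation of per-participant half means, Spearman–Brown correction), not the splithalf R package itself, and it uses far fewer splits than the 5000 reported:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def split_half_reliability(rts_per_participant, n_splits=100, seed=0):
    """Mean Spearman-Brown-corrected half-half correlation over
    random splits. rts_per_participant: one RT list per participant,
    all of equal length (trials within a single condition)."""
    rng = random.Random(seed)
    n_trials = len(rts_per_participant[0])
    estimates = []
    for _ in range(n_splits):
        idx = list(range(n_trials))
        rng.shuffle(idx)
        half_a, half_b = idx[: n_trials // 2], idx[n_trials // 2:]
        a = [sum(p[i] for i in half_a) / len(half_a) for p in rts_per_participant]
        b = [sum(p[i] for i in half_b) / len(half_b) for p in rts_per_participant]
        r = pearson(a, b)
        estimates.append(2 * r / (1 + r))  # Spearman-Brown correction
    return sum(estimates) / n_splits

# Perfectly consistent synthetic participants give reliability ~1.
data = [[400 + 10 * k] * 8 for k in range(6)]
assert abs(split_half_reliability(data, n_splits=20) - 1.0) < 1e-6
```

The Spearman–Brown step corrects for the fact that each half contains only half the trials of the full measure.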

Results

Data preparation.

No participant exhibited a correct response rate below 80% (M = 99.7%, range = 98.2–100%) [54]. The number of incorrect trials for each participant ranged from 0 to 4 (M = 0.65, SD = 0.99) and the incorrect trials were subsequently excluded from the analysis. Trials with reaction times shorter than 200 ms or longer than 2000 ms were also excluded (M = 0.01, SD = 0.12, range = 0–1) [20]. Values more than three standard deviations from the mean in each condition for each participant were identified as outliers and excluded. The number of outliers per participant varied from 0 to 5 (M = 1.39, SD = 1.26), and overall, 104 outliers were excluded from the analysis. Based on these procedures, 0.92% of the total trials were excluded from the overall dataset.
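The exclusion rules can be expressed as a short filter. A minimal sketch, assuming (as the text implies) that the ±3 SD criterion is applied within each condition after the range filter:

```python
from statistics import mean, stdev

def clean_rts(rts, low=200, high=2000, z_cut=3.0):
    """Apply the exclusion rules to one participant's correct-trial
    RTs (ms) within a single condition: drop RTs outside [low, high],
    then drop values more than z_cut SDs from the condition mean."""
    in_range = [rt for rt in rts if low <= rt <= high]
    if len(in_range) < 2:
        return in_range
    m, sd = mean(in_range), stdev(in_range)
    return [rt for rt in in_range if abs(rt - m) <= z_cut * sd]

# 150 ms and 2500 ms fail the range check; the remainder fall well
# within 3 SDs of their condition mean and survive.
assert clean_rts([150, 450, 460, 470, 2500]) == [450, 460, 470]
```

Incorrect trials would be removed before this filter, matching the order of steps reported above.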

Reaction time.

Table 1 shows descriptive statistics for reaction times at 100 ms and 500 ms SOA. S3 Table provides the detailed results of the ANOVAs performed on reaction times. A significant main effect of SOA was observed, indicating that reaction times were slower in the 100 ms SOA condition (M = 479.6 ms, 95% CI [468.8, 490.4], SD = 115.6) compared to the 500 ms condition (M = 449.5 ms, 95% CI [439.3, 459.6], SD = 109.1; F(1, 73) = 165.26, p < .001, ηp2 = .694). Furthermore, a significant main effect of category was identified, although it was qualified by the interaction between category and congruency. However, the main effect of congruency and the other interaction effects were not significant. For congruent trials, reaction times were slower for vehicles (M = 472.3 ms, 95% CI [453.7, 490.9], SD = 114.5) than for birds (M = 461.7 ms, 95% CI [443.1, 480.4], SD = 115.0; t(147) = −6.24, p < .001, dz = −.092) or fruits (M = 462.0 ms, 95% CI [443.9, 480.0], SD = 111.2; t(147) = 5.15, p < .001, dz = .091), with no significant difference between birds and fruits (t(147) = −0.13, p = .896, dz = −.002). Reaction times for vehicles were slower in congruent trials (M = 472.3 ms, 95% CI [453.7, 490.9], SD = 114.5) than in incongruent trials (M = 463.3 ms, 95% CI [445.1, 481.5], SD = 112.1; t(147) = 4.83, p < .001, dz = .079). The simple effects of category in incongruent trials and of congruency for birds and fruits were not significant.

Table 1. Descriptive statistics for reaction times (ms) in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.t001

The permutation-based split-half reliability for reaction time was very high across all categories, regardless of SOA or congruency (Spearman–Brown reliability = 0.98, 95% CI [0.95, 0.99], see Supplementary Materials S4 Table for details).

Attention bias index.

Fig 2 presents the results of the ABI, AFI, and DI calculations. The detailed results of the one-sample t-test performed on each indicator are shown in S5 Table, and the detailed results of the ANOVA are shown in S6 Table. The one-sample t-test results revealed negative ABI values for vehicles in the 100 ms and 500 ms SOA conditions. No significant ABIs were observed for either birds or fruits. An ANOVA demonstrated a significant main effect of category, indicating that the ABI was smaller for vehicles (M = −9.0 ms, 95% CI [−12.7, −5.3], SD = 22.7) than for birds (M = 1.4 ms, 95% CI [−2.1, 5.0], SD = 21.8; t(147) = 4.43, p < .001, dz = .469) or fruits (M = 2.8 ms, 95% CI [−0.2, 5.9], SD = 18.9; t(147) = −4.47, p < .001, dz = −.568). However, no significant differences were found between birds and fruits (t(147) = −0.57, p = .569, dz = −.069). The main effects of SOA and the interaction between category and SOA were not significant. These results suggest that participants’ attention was drawn towards neutral stimuli rather than vehicles. However, it remains unclear whether this was due to attention facilitation in the neutral condition or avoidance of vehicles. This will be further investigated in the subsequent analyses of AFI and DI.

Fig 2. Raincloud plots of the attentional bias index (ABI, left), attentional facilitation index (AFI, centre), and disengagement index (DI, right) in Experiment 1.

The results in the 100 ms SOA condition are in the top row, and the results in the 500 ms SOA condition are in the bottom row. Black dots in the cloud indicate mean values and error bars indicate 95% confidence intervals. SOA = stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0330475.g002

The outcomes of a one-sample t-test showed a negative AFI for vehicles in both the 100 ms and 500 ms SOA conditions, whereas no significant AFI was observed for birds or fruits. An ANOVA indicated a significant main effect of category; however, the main effect of SOA and the interaction between category and SOA were not significant. Post hoc comparisons revealed that vehicle processing (M = −8.5 ms, 95% CI [−12.4, −4.6], SD = 24.1) was more inhibited than bird (M = 2.1 ms, 95% CI [−1.8, 5.9], SD = 23.8; t(147) = 6.24, p < .001, dz = .441) or fruit processing (M = 1.8 ms, 95% CI [−2.8, 6.4], SD = 28.4; t(147) = −5.15, p < .001, dz = −.389), with no significant differences between birds and fruits (t(147) = 0.13, p = .896, dz = .009).

DI was not subjected to an ANOVA because a one-sample t-test did not yield significant values for either condition. The Spearman–Brown reliability for ABI, AFI, and DI ranged from −0.52 to 0.23, indicating low internal consistency (see Supplementary Materials S7 Table for details).

Discussion

In Experiment 1, the dot-probe task was used to investigate attentional bias towards birds, fruits, and vehicles. In congruent trials, reaction times were longer for vehicles than for birds or fruits. Based on the ABI and AFI values, this discrepancy arose from the inhibition of vehicle processing and attentional avoidance away from vehicles. These findings were consistent across both the 100 ms and 500 ms SOA conditions, indicating that attentional avoidance occurred in automatic and controlled attention allocation.

In contrast, no discernible attentional bias was observed for birds or fruits, rendering the present results incongruent with both the AMH and PAD. Loucks et al. [55] noted that most AMH studies primarily employed mammals as stimuli, suggesting that animals bearing more remarkable similarities to humans are preferentially processed. Given that birds are represented in the brain at greater distances from humans than mammals [56], the AMH and PAD may not have been supported because of the absence of preferential processing for birds. Consequently, Experiment 2 was conducted using the same methodology but with mammals instead of birds to further explore this phenomenon.

Experiment 2

Methods

The procedures and statistical analyses employed in this experiment were identical to those in Experiment 1, except that mammals replaced birds as stimuli. The stimuli for mammals were selected from the class Mammalia in the Linnaean taxonomy [25]. A one-way ANOVA on the objective visual complexity indicated a significant effect of category. Post hoc t-tests showed that mammals, fruits, and vehicles were more complex than tools, while no significant differences were observed among mammals, fruits, and vehicles (see Supplementary Materials S8 and S9 Tables for details).

Participants.

The Graduate School of Sustainable System Sciences Ethics Committee at Osaka Metropolitan University approved all procedures (2023 (1) – 14). The experimental protocol adhered to the latest version of the Declaration of Helsinki. Informed consent was obtained through the participant recruitment screen on the crowdsourcing platform. Participants indicated their understanding of the experiment’s details displayed on the screen and their willingness to participate by checking a designated box. The participants received financial compensation for their involvement, and all surveys were conducted online. Participants were recruited between November 8 and November 21, 2023.

Eighty participants took part in the experiment via CrowdWorks. Four participants who made errors in storing the experimental data and one who failed the instructional manipulation check were excluded from the analysis. None of the participants self-reported that their experimental data should not be included in the analysis. Ultimately, data from 75 participants (38 female and 37 male; Mage = 44.39 years, SD = 9.45, range = 24–67 years) were included in the analysis.

Results

Data preparation.

All participants achieved a correct response rate exceeding 80% (M = 99.7%, range = 95.5–100%). The number of incorrect responses for each participant ranged from 0 to 10 (M = 0.77, SD = 1.55). Similarly, the number of trials with reaction times faster than 200 ms or slower than 2000 ms ranged from 0 to 5 for each participant (M = 0.12, SD = 0.61). In addition, for each condition and participant, between 0 and 6 trials (M = 1.45, SD = 1.51) were more than 3 standard deviations from the mean, totalling 109 trials. Based on these procedures, 1.05% of the total dataset was excluded from analysis.

Reaction time.

Table 2 shows the descriptive statistics for reaction times in each condition. Detailed results of the ANOVA for the reaction times are shown in S10 Table. A significant main effect was observed for SOA, with 100 ms trials (M = 441.0 ms, 95% CI [435.4, 446.6], SD = 60.4) exhibiting slower reaction times than 500 ms trials (M = 412.2 ms, 95% CI [407.0, 417.4], SD = 56.0; F(1, 74) = 209.51, p < .001, ηp2 = .739). The main effects of category and congruency were also significant but influenced by the interaction between category and congruency. No other interactions were significant. In congruent trials, vehicles (M = 434.2 ms, 95% CI [424.7, 443.8], SD = 59.3) elicited slower responses than mammals (M = 426.6 ms, 95% CI [416.5, 436.7], SD = 62.3; t(149) = −4.27, p < .001, dz = −.124) or fruits (M = 426.6 ms, 95% CI [416.9, 436.3], SD = 59.9; t(149) = 5.78, p < .001, dz = .128), with no significant differences between mammals and fruits (t(149) = 0.00, p = 1.000, dz = .000). Response times for vehicles were slower in congruent trials (M = 434.2 ms, 95% CI [424.7, 443.8], SD = 59.3) than in incongruent trials (M = 423.9 ms, 95% CI [414.6, 433.1], SD = 57.3; t(149) = −8.03, p < .001, dz = −.177). A similar trend was observed for mammals (MIncongruent = 423.5 ms, 95% CI [413.4, 433.6], SD = 62.6; MCongruent = 426.6 ms, 95% CI [416.5, 436.7], SD = 62.3). The simple effect of category on incongruent trials and that of congruency on fruit trials were not significant.

Table 2. Descriptive statistics for reaction times (ms) in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.t002

The Spearman–Brown reliability for reaction time ranged from 0.93 to 0.96 across all categories, demonstrating high reliability regardless of SOA or congruency (see Supplementary Materials S11 Table for details).

Attention bias index.

Fig 3 shows the ABI, AFI, and DI distributions. Detailed results of the one-sample t-test and ANOVA are presented in S12 and S13 Tables. Results from one-sample t-tests for ABI indicated a negative bias in the vehicle 100 ms and 500 ms SOA conditions, as well as in the mammal 500 ms SOA condition. No significant ABI values were found for the other conditions. An ANOVA revealed a significant main effect of category; however, the main effect of SOA and interaction between category and SOA were not significant. Multiple comparisons showed that vehicles (M = −10.4 ms, 95% CI [−12.9, −7.8], SD = 15.8) exhibited a smaller ABI than mammals (M = −3.1 ms, 95% CI [−5.7, −0.5], SD = 15.9; t(149) = 3.95, p < .001, dz = .459) or fruits (M = −1.8 ms, 95% CI [−4.4, 0.9], SD = 16.5; t(149) = −4.66, p < .001, dz = −.532), with no significant differences between mammals and fruits (t(149) = −0.74, p = .460, dz = −.082).

Fig 3. Raincloud plot of the attentional bias index (ABI), attentional facilitation index (AFI), and disengagement index (DI) in Experiment 2.

The results in the 100 ms SOA condition are in the top row, and the results in the 500 ms SOA condition are in the bottom row. Black dots in the cloud indicate mean values and error bars indicate 95% confidence intervals. SOA = stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0330475.g003

One-sample t-tests for the AFI showed negative values only for vehicles, in both the 100 ms and 500 ms SOA conditions. Fruits and mammals did not exhibit significant AFI values. An ANOVA indicated that the main effect of SOA and the interaction between category and SOA were not significant; only the main effect of category was significant. Vehicles (M = −8.5 ms, 95% CI [−11.1, −5.8], SD = 16.4) displayed a smaller AFI compared to mammals (M = −0.8 ms, 95% CI [−5.2, 3.6], SD = 27.2; t(149) = 4.27, p < .001, dz = .315) and fruits (M = −0.8 ms, 95% CI [−3.4, 1.7], SD = 15.7; t(149) = −5.78, p < .001, dz = −.476), with no significant differences between mammals and fruits (t(149) = 0.00, p = 1.000, dz = .000).

Significantly positive one-sample t-test results for DI were observed only for mammals in the 500 ms SOA condition, with no significant values in the other trials. An ANOVA was not conducted for DI [45], as the index is interpretable only when negative. The permutation-based split-half reliability for ABI, AFI, and DI ranged from −0.46 to 0.65, suggesting insufficient reliability (see Supplementary Materials S14 Table for details).

Discussion

Overall, Experiment 2 yielded results similar to those of Experiment 1. Attention was directed away from vehicles, as indicated by the ABI, which was attributed to the inhibition of vehicle-related processing, as reflected in the AFI. Additionally, vehicles led to delayed responses in the congruent trials compared to mammals and fruits. The results for mammals and fruits mirrored those obtained for birds and fruits in Experiment 1, demonstrating the absence of significant attentional bias. The negative attentional bias identified for mammals in the 500 ms SOA contradicts the AMH. According to the AMH, humans elicit the strongest attentional advantage [10]. Therefore, we conducted a third experiment using images of humans to test whether an attentional bias would emerge.

Experiment 3

Methods

This experiment followed the same procedures and statistical analyses as in Experiments 1 and 2, except that human images were used as stimuli instead of birds or mammals. One-way ANOVA revealed a significant effect of category on objective visual complexity. Post-hoc comparisons indicated that humans, fruits, and vehicles were more complex than tools, with no significant differences among the former three (see Supplementary Materials, S15 and S16 Tables).

Participants.

All procedures were approved by the Graduate School of Sustainable System Sciences Ethics Committee of Osaka Metropolitan University (2023 (1)–14) and complied with the Declaration of Helsinki. Participants provided informed consent online via a checkbox on the crowdsourcing platform. They received monetary compensation, and all procedures were conducted online between July 23 and July 24, 2025.

Eighty individuals participated via CrowdWorks. Four were excluded: two due to data storage issues and two due to failing the instructional check. None of the participants requested data exclusion. The final sample included 76 participants (31 female and 45 male; Mage = 43.05 years, SD = 8.50, range = 23–63 years).

Results

Data preparation.

All participants had over 80% correct responses (M = 99.7%, range = 96.9–100%). The errors per participant ranged from 0 to 7 (M = 0.63, SD = 1.23). Trials with reaction times below 200 ms or above 2000 ms ranged from 0 to 4 per participant (M = 0.20, SD = 0.69). Trials exceeding ±3 SDs from the mean per condition ranged from 0 to 6 (M = 1.87, SD = 1.58). Across all exclusion criteria, a total of 205 trials were removed, accounting for 1.20% of the data.
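The two reaction-time exclusion rules above can be sketched as a small filter. This is an illustrative Python sketch under assumed data structures (a list of dicts with hypothetical `rt` and `condition` keys), not the analysis code used in the study:

```python
from statistics import mean, stdev

def clean_trials(trials, rt_min=200, rt_max=2000, sd_cut=3.0):
    """Apply the exclusion rules described in the text:
    (1) drop trials with RTs below 200 ms or above 2000 ms;
    (2) drop trials beyond +/-3 SDs from the mean of their condition."""
    # Rule 1: absolute RT cut-offs
    kept = [t for t in trials if rt_min <= t["rt"] <= rt_max]
    # Rule 2: +/-3 SD cut-off, computed within each condition
    by_cond = {}
    for t in kept:
        by_cond.setdefault(t["condition"], []).append(t["rt"])
    cleaned = []
    for t in kept:
        rts = by_cond[t["condition"]]
        if len(rts) < 2:
            cleaned.append(t)  # cannot estimate an SD from a single trial
            continue
        m, s = mean(rts), stdev(rts)
        if s == 0 or abs(t["rt"] - m) <= sd_cut * s:
            cleaned.append(t)
    return cleaned
```

Applied to a condition containing RTs of 100 ms and 5000 ms plus an in-range outlier, the filter removes all three while retaining the typical trials.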

Reaction time.

Descriptive statistics are presented in Table 3, and detailed ANOVA results in S17 Table. A main effect of SOA was found, with responses slower at 100 ms (M = 448.8 ms, 95% CI [442.9, 454.7], SD = 64.2) than at 500 ms (M = 424.2 ms, 95% CI [418.2, 430.2], SD = 65.5; F(1, 75) = 94.63, p < .001, ηp2 = .558). Significant main effects of category and congruency were also observed, though these were modulated by their interaction. In congruent trials, responses to vehicles (M = 444.3 ms, 95% CI [433.8, 454.9], SD = 65.8) were slower than to humans (M = 433.7 ms, 95% CI [423.0, 444.3], SD = 66.5; t(151) = −5.59, p < .001, dz = −.161) and fruits (M = 436.1 ms, 95% CI [425.3, 446.9], SD = 67.4; t(151) = 4.46, p < .001, dz = .123), with no difference between the latter two. For vehicles, reaction times were slower in congruent than in incongruent trials (t(151) = −5.65, p < .001, dz = −.140). No significant effects were found for category in incongruent trials or for congruency in the human and fruit conditions. Split-half reliability for reaction time was high across conditions (rs = .95–.96; see S18 Table).

Table 3. Descriptive statistics for reaction times (ms) in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.t003

Attention bias index.

Fig 4 displays the ABI, AFI, and DI distributions. One-sample t-tests (S19 Table) revealed negative ABI values for vehicles in both SOA conditions; the other conditions showed no significant biases. ANOVA results (S20 Table) showed a significant effect of category, with vehicles (M = −9.1 ms, 95% CI [−12.3, −5.9], SD = 15.8) exhibiting smaller ABI than humans (M = −0.1 ms, 95% CI [−3.8, 3.6], SD = 23.0; t(151) = 3.89, p < .001, dz = .419) and fruits (M = 0.0 ms, 95% CI [−3.3, 3.4], SD = 20.8; t(151) = −3.94, p < .001, dz = −.449). No difference emerged between humans and fruits, and no main effect of SOA or interaction was found.

Fig 4. Raincloud plot of the attentional bias index (ABI), attentional facilitation index (AFI), and disengagement index (DI) in Experiment 3.

The results in the 100 ms SOA condition are in the top row, and the results in the 500 ms SOA condition are in the bottom row. Black dots in the cloud indicate mean values and error bars indicate 95% confidence intervals. SOA = stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0330475.g004

AFI was significantly negative only for vehicles, with no effect observed for humans or fruits. ANOVA again revealed only a category effect: vehicles (M = −9.0 ms, 95% CI [−12.3, −5.8], SD = 20.1) showed smaller AFI than both humans (M = 1.6 ms, 95% CI [−2.5, 5.7], SD = 25.8; t(151) = 5.59, p < .001, dz = .454) and fruits (M = −0.9 ms, 95% CI [−4.8, 3.1], SD = 24.6; t(151) = −4.46, p < .001, dz = −.361). No significant difference was found between humans and fruits.

DI values were non-significant across conditions, and no ANOVA was conducted. Reliability estimates for ABI, AFI, and DI were low (rs = −0.20 to 0.47; see S21 Table).
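For reference, the three indices are differences between mean reaction times in the congruent, incongruent, and baseline (neutral) trial types. The exact formulas are not restated in this section, so the sketch below adopts sign conventions consistent with how the indices are interpreted in the text (negative ABI as avoidance, negative AFI as inhibited engagement, negative DI as disengagement difficulty; cf. Koster et al. [20]); the incongruent and neutral inputs are back-calculated illustrations, not reported values.

```python
def bias_indices(rt_congruent, rt_incongruent, rt_neutral):
    """Dot-probe attentional indices; sign conventions chosen to match
    the interpretations used in the text (they may be flipped relative
    to other papers)."""
    abi = rt_incongruent - rt_congruent  # < 0: attentional avoidance
    afi = rt_neutral - rt_congruent      # < 0: inhibited engagement
    di = rt_neutral - rt_incongruent     # < 0: difficulty disengaging
    return abi, afi, di

# Vehicles in Experiment 3: the congruent mean of 444.3 ms is reported;
# the incongruent (435.2) and neutral (435.3) means are hypothetical
# values consistent with the reported ABI of -9.1 ms and AFI of -9.0 ms.
abi, afi, di = bias_indices(444.3, 435.2, 435.3)
```

With these inputs the sketch reproduces the sign pattern reported for vehicles: negative ABI and AFI, with DI near zero.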

Discussion

Experiment 3 replicated the findings of Experiments 1 and 2, showing delayed responses and attentional avoidance of vehicles. Even when humans were used as stimuli, no attentional bias towards them was observed, mirroring the results previously found for birds and mammals.

General discussion

The current study posited that attention would be preferentially directed towards animals, plants, and manufactured objects, in that order. Attentional bias was investigated through a dot-probe task. In Experiment 1, which included birds, fruits, vehicles, and tools as stimuli, no attentional bias towards birds or fruits was detected. Instead, attentional avoidance and slower responses were evident when vehicles were involved. Experiments 2 and 3 used mammals and humans as stimuli, respectively, categories that have been suggested to possess an attentional advantage over birds. However, the results were nearly identical to those of Experiment 1. Consequently, the three hypotheses of the present study were not supported, although the findings offer valuable insights into attentional inclinations towards animate and inanimate objects.

The absence of a difference in attentional bias between flora, fauna, and tools contradicts the findings of previous studies [8,57], in which participants tended to focus more on nature than on cities. This discrepancy could be attributed to the stimuli used. In the current study, mammals and fruits were presented to participants to align the hierarchy of conceptual categories across stimuli [24]. However, natural landscapes, in contrast to urban landscapes, are characterised by their richness in colour and fractality. Fractality encompasses entire landscapes rather than individual objects [58], and several studies on biophilia have utilised landscapes containing multiple elements as experimental stimuli [59,60]. The high-level processing that elicits mental images of the natural environment is efficient for perceptual recovery [61]; therefore, stimuli with limited natural properties that extract single elements from a landscape may lack the restorative qualities and attractiveness necessary to capture attention. Even with extracted elements, the results might have differed if, for example, flowers, which are known for their psychologically restorative effects [62], had been shown. Nevertheless, the present study used fruits in accordance with previous attention research. Caution should therefore be exercised when interpreting the relationship between biophilia and attention, and future experiments employing landscapes that meet the natural requirements associated with biophilia are warranted.

The lack of significant reaction-time differences between animals and either fruits or tools contradicts the AMH. Studies employing change detection [10], visual search [11], and attentional blink tasks [12] have supported the AMH; however, investigations using flicker [63] and multiple object tracking tasks [64] have failed to demonstrate an attentional advantage for animals. Differential attentional patterns across experimental tasks have also been observed in research on threat perception [65] and food [66]. This variability may be attributed to the multifaceted nature of assessing human visual and cognitive functions, including attention, memory, decision-making, reward processing, and spatial vision [67]. The fact that the AMH did not manifest in the dot-probe task involving humans or mammals lends credence to the notion that the animal advantage does not primarily stem from attention, but may instead be attributable to other factors, such as the ease of short-term memory retention and encoding [68].

The present results demonstrated no discernible differences between animals and fruits, implying that PAD is not primarily rooted in variations in attentional bias towards animals and plants. This outcome aligns with the findings of a study that used the one-back task with line drawings and grayscale images [23]. What factors besides attention could then explain PAD? Colour cannot account for it: reaction times were faster for fruits than for animals and tools in a one-back task using colour images, and neural activity comparisons between colour and grayscale images revealed distinct neural responses to fruits but no differences for animals or tools [23]. Moreover, PAD was observed in a visual search task even with monochrome stimuli [19]. These findings suggest that colour confers an advantage on fruits and that other factors play a more prominent role in shaping PAD. Regarding familiarity, mammals and birds, which are frequently featured in publications [69,70], are more familiar to most people than plants. Because novel stimuli are more likely to capture attention than familiar ones [71,72], lower familiarity should, if anything, favour plants in attentional allocation; familiarity is therefore unlikely to explain PAD. Considering that PAD was previously observed in a recognition task based on colour images [16,17], it could plausibly arise from attitudinal, interest, encoding, or retention factors.

In all three experiments, responses to vehicles were slower than those to animals and fruits. This finding partially aligns with prior research employing change detection [10] and visual search tasks [19], wherein vehicles were detected more slowly than animals but faster than other manufactured objects. Reaction time showed high internal consistency, and the raw reaction times [73] and initial attentional component measured in the 100 ms SOA condition [74] have demonstrated reasonable reliability. The observed delayed response to vehicles relative to other stimuli in the 100 ms SOA condition in all three experiments suggests a certain degree of reliability.

In the present study, encountering a vehicle inhibited perceptual processing and led to attentional avoidance, resulting in slower responses than encountering animals or plants. Daily life involves mild risks, such as road accidents, and healthy control participants have been found to deliberately redirect their attention away from stimuli associated with risks below a certain threshold [45]. This is done to prevent heightened anxiety from trivial fears [75,76] or to pursue goals efficiently [77]. Although the present study did not directly assess perceived threat, the delayed responses to vehicles may tentatively reflect processing inhibition and attentional avoidance influenced by such perceptions. Nonetheless, concerns have been raised regarding the interpretation of the bias index computed in the dot-probe task [78], as well as the low reliability and congruency of the results [79]. In the present study, ABI, AFI, and DI likewise showed low internal consistency. One reason is that taking the difference between two highly correlated measures reduces individual variability, and when individual variability is small, the reliability coefficient tends to be low [80]. Additionally, the experiment used a within-subjects design that aims to minimise individual differences in order to obtain robust results, which further contributed to low internal consistency. While such experimental paradigms may not be reliable as measures of individual differences, they remain applicable in studies focusing on within-subject differences [81,82]. To enhance the robustness and stability of the attentional suppression and delayed response effects for vehicles, future research should validate these results using event-related markers of spatial attention (e.g., the N2pc component) or eye-tracking indices.
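The reliability argument above can be illustrated with a small simulation: when two measures share most of their true-score variance, their difference cancels that shared variance and leaves mostly noise, so test-retest correlations collapse. This is an illustrative Python sketch with invented parameters, not an analysis of the present data:

```python
import random

def pearson(x, y):
    """Plain Pearson correlation over two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
n_subjects = 2000
# Each participant has one true overall speed shared by both trial types
# (large between-subject variance); observed RTs add measurement noise.
true_speed = [random.gauss(450, 50) for _ in range(n_subjects)]

def observe(noise_sd=15):
    return [v + random.gauss(0, noise_sd) for v in true_speed]

# Test-retest of a raw RT measure: the large true-score variance survives.
r_raw = pearson(observe(), observe())
# Test-retest of a difference score: the shared true speed cancels,
# leaving only noise, so the correlation collapses towards zero.
diff1 = [a - b for a, b in zip(observe(), observe())]
diff2 = [a - b for a, b in zip(observe(), observe())]
r_diff = pearson(diff1, diff2)
```

Under these assumed parameters, the raw measure correlates highly across "sessions" while the difference score does not, which parallels the high reliability observed here for raw reaction times and the low reliability of the bias indices.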

The results of the present study allow for an alternative interpretation. Of the 112 stimulus images used in the experiment, 64 were tools, more than in any other category. People tend to expect frequently encountered stimuli to reappear [83], and performance can improve when expected events occur [84,85]; the expectation of tools may therefore have accelerated responses to them. The absence of prioritised attention to animals and plants might thus reflect faster responses to tools driven by this expectation. Even if the expectation effect for tools were removed, the finding that responses to vehicles are slower than those to animals and plants would remain unchanged; however, avoidance of vehicles might not occur. This possibility should be tested with an experimental design that equates the number of stimuli across categories.

This study has a limitation that warrants acknowledgement. The visual features of the stimuli may have introduced bias into the results. Among visual features, colour was excluded by using line drawings. Statistical tests on visual complexity showed that vehicles were not more complex than animals or plants. However, factors such as power spectrum, luminance, contrast, and shape were not controlled. These low-level visual features do not affect the superiority of animals in the visual search task [11], but it is unclear whether they have an effect in the dot-probe task. Additionally, line drawings may have excluded other important features, such as fractality, that are inherent in natural photographs. These features could play a significant role in attentional processes, and their absence might have influenced our findings. Future experiments that control for these visual features using text stimuli, image statistics [86], or photographs could clarify whether the results of this study were driven by lower-level or higher-level processes.

Conclusion

This study aimed to elucidate the hierarchy in visual attention towards animals, plants, and manufactured objects, and to determine whether this hierarchy was due to facilitated attention or disengagement difficulty. Consistently across all three experiments, responses to vehicles were slower than those to animals and plants. Contrary to our hypotheses, no attentional biases for animals and plants were observed. Unexpectedly, vehicles inhibited perceptual processing and attention, resulting in slower responses. These findings suggest that attentional hierarchies may not be solely driven by evolutionary relevance of stimuli, and that certain man-made objects may actively suppress attention. These results may have been influenced by the imbalance in the number of stimuli, warranting further investigation.

Supporting information

S1 Appendix. R code used for the a priori power analysis.

https://doi.org/10.1371/journal.pone.0330475.s001

(DOCX)

S1 Table. The objective visual complexity of the stimuli in Experiment 1 based on the JPEG compressed file size.

https://doi.org/10.1371/journal.pone.0330475.s002

(DOCX)

S2 Table. Analysis of variance results for visual complexity of the stimuli in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.s003

(DOCX)

S3 Table. Analysis of variance results for reaction times in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.s004

(DOCX)

S4 Table. Spearman-Brown reliability for reaction times (ms) in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.s005

(DOCX)

S5 Table. Results of one-sample t-tests on attentional tendency indices in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.s006

(DOCX)

S6 Table. Results of analysis of variance for the attentional bias index and attentional facilitation index in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.s007

(DOCX)

S7 Table. Spearman-Brown reliability for ABI, AFI, and DI in Experiment 1.

https://doi.org/10.1371/journal.pone.0330475.s008

(DOCX)

S8 Table. The objective visual complexity of the stimuli in Experiment 2 based on the JPEG compressed file size.

https://doi.org/10.1371/journal.pone.0330475.s009

(DOCX)

S9 Table. Analysis of variance results for visual complexity of the stimuli in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.s010

(DOCX)

S10 Table. Analysis of variance results for reaction times in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.s011

(DOCX)

S11 Table. Spearman-Brown reliability for reaction times in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.s012

(DOCX)

S12 Table. Results of one-sample t-tests on attentional tendency indices in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.s013

(DOCX)

S13 Table. Results of analysis of variance for the attentional bias index and attentional facilitation index in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.s014

(DOCX)

S14 Table. Spearman-Brown reliability for ABI, AFI, and DI in Experiment 2.

https://doi.org/10.1371/journal.pone.0330475.s015

(DOCX)

S15 Table. The objective visual complexity of the stimuli in Experiment 3 based on the JPEG compressed file size.

https://doi.org/10.1371/journal.pone.0330475.s016

(DOCX)

S16 Table. Analysis of variance results for visual complexity of the stimuli in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.s017

(DOCX)

S17 Table. Analysis of variance results for reaction times in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.s018

(DOCX)

S18 Table. Spearman-Brown reliability for reaction times in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.s019

(DOCX)

S19 Table. Results of one-sample t-tests on attentional tendency indices in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.s020

(DOCX)

S20 Table. Results of analysis of variance for the attentional bias index and attentional facilitation index in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.s021

(DOCX)

S21 Table. Spearman-Brown reliability for ABI, AFI, and DI in Experiment 3.

https://doi.org/10.1371/journal.pone.0330475.s022

(DOCX)

Acknowledgments

During the preparation of this work, the authors used ChatGPT (OpenAI, GPT-4, accessed via ChatGPT Plus at https://chat.openai.com) to improve the readability and language of the manuscript. The authors reviewed and edited the AI-generated content as needed and take full responsibility for the final content of the publication.

References

  1. McCormick R. Does access to green space impact the mental well-being of children: a systematic review. J Pediatr Nurs. 2017;37:3–7. pmid:28882650
  2. Oudin A. Short review: air pollution, noise and lack of greenness as risk factors for Alzheimer’s disease - epidemiologic and experimental evidence. Neurochem Int. 2020;134:104646. pmid:31866324
  3. Shanahan DF, Bush R, Gaston KJ, Lin BB, Dean J, Barber E, et al. Health benefits from nature experiences depend on dose. Sci Rep. 2016;6:28551. pmid:27334040
  4. Wilson EO. Biophilia. Cambridge, MA: Harvard University Press; 1984.
  5. Kaplan R, Kaplan S, Brown T. Environmental preference: a comparison of four domains of predictors. Environ Behav. 1989;21(5):509–30.
  6. Kaplan R, Kaplan S. Well‐being, reasonableness, and the natural environment. Appl Psych Health Well-Being. 2011;3(3):304–21.
  7. Joye Y, Pals R, Steg L, Evans BL. New methods for assessing the fascinating nature of nature experiences. PLoS One. 2013;8(7):e65332. pmid:23922645
  8. Schiebel T, Gallinat J, Kühn S. Testing the Biophilia theory: automatic approach tendencies towards nature. J Environ Psychol. 2022;79:101725.
  9. Kaplan S. The restorative benefits of nature: toward an integrative framework. J Environ Psychol. 1995;15(3):169–82.
  10. New J, Cosmides L, Tooby J. Category-specific attention for animals reflects ancestral priorities, not expertise. Proc Natl Acad Sci U S A. 2007;104(42):16598–603. pmid:17909181
  11. He C, Cheung OS. Category selectivity for animals and man-made objects: beyond low- and mid-level visual features. J Vis. 2019;19(12):22. pmid:31648308
  12. Lindh D, Sligte IG, Assecondi S, Shapiro KL, Charest I. Conscious perception of natural images is constrained by category-related visual features. Nat Commun. 2019;10(1):4106. pmid:31511514
  13. Yang A, Liu G, Chen Y, Qi R, Zhang J, Hsiao J. Humans vs. AI in detecting vehicles and humans in driving scenarios. Proceedings of the Annual Meeting of the Cognitive Science Society. 2023. p. 45.
  14. Parsley KM. Plant awareness disparity: a case for renaming plant blindness. Plants People Planet. 2020;2(6):598–601.
  15. Wandersee JH, Schussler EE. Preventing plant blindness. Am Biol Teach. 1999;61(2):82–6.
  16. Schussler EE, Olzak LA. It’s not easy being green: student recall of plant and animal images. J Biol Educ. 2008;42(3):112–9.
  17. Zani G, Low J. Botanical priming helps overcome plant blindness on a memory task. J Environ Psychol. 2022;81:101808.
  18. Balas B, Momsen JL. Attention “blinks” differently for plants and animals. CBE Life Sci Educ. 2014;13(3):437–43. pmid:25185227
  19. Jackson RE, Calvillo DP. Evolutionary relevance facilitates visual information processing. Evol Psychol. 2013;11(5):1011–26. pmid:24184882
  20. Koster EHW, Crombez G, Verschuere B, De Houwer J. Selective attention to threat in the dot probe paradigm: differentiating vigilance and difficulty to disengage. Behav Res Ther. 2004;42(10):1183–92. pmid:15350857
  21. Rogers TT, Hocking J, Mechelli A, Patterson K, Price C. Fusiform activation to animals is driven by the process, not the stimulus. J Cogn Neurosci. 2005;17(3):434–45. pmid:15814003
  22. Tyler LK, Stamatakis EA, Bright P, Acres K, Abdallah S, Rodd JM, et al. Processing objects at different levels of specificity. J Cogn Neurosci. 2004;16(3):351–62. pmid:15072671
  23. Yao L, Fu Q, Liu CH. The roles of edge-based and surface-based information in the dynamic neural representation of objects. Neuroimage. 2023;283:120425. pmid:37890562
  24. Rosch E, Mervis CB, Gray WD, Johnson DM, Boyes-Braem P. Basic objects in natural categories. Cogn Psychol. 1976;8(3):382–439.
  25. Linné CV, Salvius L. Caroli Linnaei … Systema naturae per regna tria naturae: secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis. Holmiae: Impensis Direct. Laurentii Salvii; 1758.
  26. The Council for Science and Technology, Ministry of Education, Culture, Sports, Science and Technology, Japan. Standard tables of food composition in Japan - 2020 - (Eighth Revised Edition). 2020 [cited 2024 Jul 20]. Available from: https://www.mext.go.jp/a_menu/syokuhinseibun/mext_01110.html
  27. World Intellectual Property Organization. Nice Classification. 2024 [cited 2024 Jul 20]. Available from: https://nclpub.wipo.int/enfr/?class_number=12
  28. Bramão I, Reis A, Petersson KM, Faísca L. The role of color information on object recognition: a review and meta-analysis. Acta Psychol (Amst). 2011;138(1):244–53. pmid:21803315
  29. Tanaka J, Weiskopf D, Williams P. The role of color in high-level vision. Trends Cogn Sci. 2001;5(5):211–5. pmid:11323266
  30. Geyer T, Müller HJ, Krummenacher J. Expectancies modulate attentional capture by salient color singletons. Vis Res. 2008;48(11):1315–26. pmid:18407311
  31. Nordfang M, Dyrholm M, Bundesen C. Identifying bottom-up and top-down components of attentional weight by experimental analysis and computational modeling. J Exp Psychol Gen. 2013;142(2):510–35. pmid:22889161
  32. Theeuwes J. Cross-dimensional perceptual selectivity. Percept Psychophys. 1991;50(2):184–93. pmid:1945740
  33. Theeuwes J. Perceptual selectivity for color and form. Percept Psychophys. 1992;51(6):599–606. pmid:1620571
  34. Donderi DC, McFadden S. Compressed file length predicts search time and errors on visual displays. Displays. 2005;26(2):71–8.
  35. Forsythe A, Mulhern G, Sawey M. Confounds in pictorial sets: the role of complexity and familiarity in basic-level picture processing. Behav Res Methods. 2008;40(1):116–29. pmid:18411534
  36. Marin MM, Leder H. Effects of presentation duration on measures of complexity in affective environmental scenes and representational paintings. Acta Psychol (Amst). 2016;163:38–58. pmid:26595281
  37. Tuch AN, Bargas-Avila JA, Opwis K, Wilhelm FH. Visual complexity of websites: effects on users’ experience, physiology, performance, and memory. Int J Hum-Comput Stud. 2009;67(9):703–15.
  38. Peirce J, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, et al. PsychoPy2: experiments in behavior made easy. Behav Res Methods. 2019;51(1):195–203. pmid:30734206
  39. Cooper RM, Langton SRH. Attentional bias to angry faces using the dot-probe task? It depends when you look for it. Behav Res Ther. 2006;44(9):1321–9. pmid:16321361
  40. Miura A, Kobayashi T. Influence of satisficing on online survey responses. Kodo Keiryogaku. 2018;45(1):1–11.
  41. Oppenheimer DM, Meyvis T, Davidenko N. Instructional manipulation checks: detecting satisficing to increase statistical power. J Exp Soc Psychol. 2009;45(4):867–72.
  42. Meade AW, Craig SB. Identifying careless responses in survey data. Psychol Methods. 2012;17(3):437–55. pmid:22506584
  43. Gronchi G, Righi S, Pierguidi L, Giovannelli F, Murasecco I, Viggiano MP. Automatic and controlled attentional orienting in the elderly: a dual-process view of the positivity effect. Acta Psychol (Amst). 2018;185:229–34. pmid:29550693
  44. Righi S, Gronchi G, Ramat S, Gavazzi G, Cecchi F, Viggiano MP. Automatic and controlled attentional orienting toward emotional faces in patients with Parkinson’s disease. Cogn Affect Behav Neurosci. 2023;23(2):371–82. pmid:36759426
  45. Koster EHW, Verschuere B, Crombez G, Van Damme S. Time-course of attention for threatening pictures in high and low trait anxiety. Behav Res Ther. 2005;43(8):1087–98. pmid:15922291
  46. Mendoza JL. A significance test for multisample sphericity. Psychometrika. 1980;45(4):495–8.
  47. Greenhouse SW, Geisser S. On methods in the analysis of profile data. Psychometrika. 1959;24(2):95–112.
  48. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B: Stat Methodol. 1995;57(1):289–300.
  49. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2023.
  50. Iseki R. anovakun (version 4.8.9). 2024 [cited 2024 Feb 4]. Available from: http://riseki.php.xdomain.jp/
  51. Parsons S. splithalf: robust estimates of split half reliability. J Open Source Softw. 2021;6(60):3041.
  52. Brown W. Some experimental results in the correlation of mental abilities. Br J Psychol. 1910;3(3):296–322.
  53. Spearman C. Correlation calculated from faulty data. Br J Psychol. 1910;3(3):271–95.
  54. Nicolet-dit-Félix M, Gillioz C, Mortillaro M, Sander D, Fiori M. Emotional intelligence and attentional bias to emotional faces: evidence of hypersensitivity towards emotion information. Pers Ind Diff. 2023;201:111917.
  55. Loucks J, Reise B, Gahite R, Fleming S. Animate monitoring is not uniform: implications for the animate monitoring hypothesis. Front Psychol. 2023;14:1146248. pmid:37179895
  56. Sha L, Haxby JV, Abdi H, Guntupalli JS, Oosterhof NN, Halchenko YO, et al. The animacy continuum in the human ventral vision pathway. J Cogn Neurosci. 2015;27(4):665–78. pmid:25269114
  57. Valtchanov D, Ellard CG. Cognitive and affective responses to natural scenes: effects of low level visual properties on preference, cognitive load and eye-movements. J Environ Psychol. 2015;43:184–95.
  58. Taylor R. The potential of biophilic fractal designs to promote health and performance: a review of experiments and applications. Sustainability. 2021;13(2):823.
  59. Meidenbauer KL, Stenfors CUD, Young J, Layden EA, Schertz KE, Kardan O, et al. The gradual development of the preference for natural environments. J Environ Psychol. 2019;65:101328.
  60. Olivos-Jara P, Segura-Fernández R, Rubio-Pérez C, Felipe-García B. Biophilia and biophobia as emotional attribution to nature in children of 5 years old. Front Psychol. 2020;11:511. pmid:32265804
  61. Menzel C, Reese G. Seeing nature from low to high levels: mechanisms underlying the restorative effects of viewing nature images. J Environ Psychol. 2022;81:101804.
  62. Mochizuki-Kawai H, Matsuda I, Mochizuki S. Viewing a flower image provides automatic recovery effects after psychological stress. J Environ Psychol. 2020;70:101445.
  63. LaPointe MRP, Lupianez J, Milliken B. Context congruency effects in change detection: opposing effects on detection and identification. Visual Cogn. 2013;21(1):99–122.
  64. Hagen T, Espeseth T, Laeng B. Chasing animals with split attention: are animals prioritized in visual tracking? Iperception. 2018;9(5). pmid:30202509
  65. Cisler JM, Bacon AK, Williams NL. Phenomenological characteristics of attentional biases towards threat: a critical review. Cognit Ther Res. 2009;33(2):221–34. pmid:20622985
  66. Luo C, Qiao S, Zhuang X, Ma G. Dynamic attentional bias for pictorial and textual food cues in the visual search paradigm. Appetite. 2023;180:106318. pmid:36206971
  67. Eckstein MP. Visual search: a retrospective. J Vis. 2011;11(5):14. pmid:22209816
  68. Hagen T, Laeng B. Animals do not induce or reduce attentional blinking, but they are reported more accurately in a rapid serial visual presentation task. Iperception. 2017;8(5). pmid:29085619
  69. Clucas B, McHugh K, Caro T. Flagship species on covers of US conservation and nature magazines. Biodivers Conserv. 2008;17(6):1517–28.
  70. Prokop P, Masarovič R, Hajdúchová S, Ježová Z, Zvaríková M, Fedor P. Prioritisation of charismatic animals in major conservation journals measured by the altmetric attention score. Sustainability. 2022;14(24):17029.
  71. Johnston WA, Hawley KJ, Plewe SH, Elliott JM, DeWitt MJ. Attention capture by novel stimuli. J Exp Psychol Gen. 1990;119(4):397–411. pmid:2148574
  72. Maratos FA, Staples P. Attentional biases towards familiar and unfamiliar foods in children. The role of food neophobia. Appetite. 2015;91:220–5. pmid:25862982
  73. Waechter S, Nelson AL, Wright C, Hyatt A, Oakman J. Measuring attentional bias to threat: reliability of dot probe and eye movement indices. Cogn Ther Res. 2013;38(3):313–33.
  74. Chapman A, Devue C, Grimshaw GM. Fleeting reliability in the dot-probe task. Psychol Res. 2019;83(2):308–20. pmid:29159699
  75. Ellenbogen MA, Schwartzman AE, Stewart J, Walker C-D. Stress and selective attention: the interplay of mood, cortisol levels, and emotional information processing. Psychophysiology. 2002;39(6):723–32. pmid:12462500
  76. Krohne HW. The concept of coping modes: relating cognitive person variables to actual coping behavior. Adv Behav Res Ther. 1989;11(4):235–48.
  77. Mogg K, Bradley BP. A cognitive-motivational analysis of anxiety. Behav Res Ther. 1998;36(9):809–48. pmid:9701859
  78. Clarke PJF, Macleod C, Guastella AJ. Assessing the role of spatial engagement and disengagement of attention in anxiety-linked attentional bias: a critique of current paradigms and suggestions for future research directions. Anxiety Stress Coping. 2013;26(1):1–19. pmid:22136158
  79. Thigpen NN, Gruss LF, Garcia S, Herring DR, Keil A. What does the dot-probe task measure? A reverse correlation analysis of electrocortical activity. Psychophysiology. 2018;55(6):e13058. pmid:29314050
  80. Infantolino ZP, Luking KR, Sauder CL, Curtin JJ, Hajcak G. Robust is not necessarily reliable: from within-subjects fMRI contrasts to between-subjects comparisons. Neuroimage. 2018;173:146–52. pmid:29458188
  81. Dang J, King KM, Inzlicht M. Why are self-report and behavioral measures weakly correlated? Trends Cogn Sci. 2020;24(4):267–9. pmid:32160564
  82. Hedge C, Powell G, Sumner P. The reliability paradox: why robust cognitive tasks do not produce reliable individual differences. Behav Res Methods. 2018;50(3):1166–86. pmid:28726177
  83. Miller J, Anbar R. Expectancy and frequency effects on perceptual and motor systems in choice reaction time. Mem Cognit. 1981;9(6):631–41. pmid:7329244
  84. Gaschler R, Schwager S, Umbach VJ, Frensch PA, Schubert T. Expectation mismatch: differences between self-generated and cue-induced expectations. Neurosci Biobehav Rev. 2014;46 Pt 1:139–57. pmid:24971824
  85. Umbach VJ, Schwager S, Frensch PA, Gaschler R. Does explicit expectation really affect preparation? Front Psychol. 2012;3:378. pmid:23248606
  86. Oliva A, Torralba A. Modeling the shape of the scene: a holistic representation of the spatial envelope. Int J Comput Vis. 2001;42(3):145–75.