Orienting visual attention is closely linked to the oculomotor system. For example, a shift of attention is usually followed by a saccadic eye movement and can be revealed by microsaccades. Recently we reported a novel role of another type of eye movement, eye vergence, in orienting visual attention. Shifts in visuospatial attention are characterized by a modulation of the response to a selected target. However, unlike (micro)saccades, eye vergence movements carry no spatial information (except for depth) and are thus not specific to a particular visual location. To further understand the role of eye vergence in visual attention, we tested subjects with different perceptual styles. Perceptual style refers to the characteristic way individuals perceive environmental stimuli and is characterized by a spatial difference (local vs. global) in perceptual processing. We tested field-independent (local; FI) and field-dependent (global; FD) observers in a cue/no-cue task and a matching task. We found that FI observers responded faster and showed stronger modulation of eye vergence in both tasks than FD observers. The results suggest that eye vergence modulation may relate to the trade-off between the size of the spatial region covered by attention and the efficiency of sensory processing. Alternatively, vergence modulation may have a role in the switch in cortical state that prepares the visual system for new incoming sensory information. In conclusion, vergence eye movements may be added to the growing list of fixational eye movements with functions in visual perception. However, further studies are needed to elucidate their role.
Citation: Solé Puig M, Puigcerver L, Aznar-Casanova JA, Supèr H (2013) Difference in Visual Processing Assessed by Eye Vergence Movements. PLoS ONE 8(9): e72041. https://doi.org/10.1371/journal.pone.0072041
Editor: Kun Guo, University of Lincoln, United Kingdom
Received: April 11, 2013; Accepted: July 4, 2013; Published: September 19, 2013
Copyright: © 2013 Solé Puig et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by grants to HS (PSI2010-18139 & SAF2009-10367) from the Spanish Ministry of Education and Science (MICINN) and (2009-SGR-308) from the Catalan government (AGAUR). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Attention is a neural mechanism for selecting relevant sensory information for further visual processing. The neural circuits of attention are closely linked to the oculomotor system, e.g. [1]. A shift of attention is usually followed by a saccadic eye movement that shifts the eye gaze towards the attended location. Visual attention also relates to small fixational saccades that do not change the focus of eye gaze, where the direction of microsaccades may uncover the orientation of covert visual attention [2,3]. The function of fixational eye movements is not limited to attention but also includes the prevention of the loss of conscious vision [4], the improvement of visual acuity [5,6], the reduction of binocular disparity [7], and the adjustment of eye positions after a target saccade [8].
Recently we found an unexpected but clear relation between another type of eye movement, namely eye vergence, and visual attention [9]. Vergence refers to the simultaneous movement of both eyes in opposite directions to obtain single binocular vision. When the eyes rotate towards each other (convergence) the vergence angle increases, and when the eyes rotate away from each other (divergence) the angle becomes smaller (Figure 1). We observed that during steady gaze fixation visual stimuli modulate the angle of eye vergence: when orienting attention the eyes briefly converge to a nearer plane, i.e. the vergence angle increases after visual stimulation. This modulation in eye vergence is not a near-triad effect, is not related to pupil size, and is independent of the occurrence of microsaccades [9]. Instead, the increase in vergence angle correlates with bottom-up and top-down induced shifts in visuospatial attention [9]. For instance, the vergence angle modulates strongly after cueing the target location but not for uncued targets. Unlike microsaccades, however, eye vergence movements do not carry directional information about the target because of the nature of such movements.
The eyes focus on a single point in space. The angle of eye vergence relates to the distance of the focus point from the eyes. For a near point the vergence angle (α1) is larger than for a far point (α2). α represents the angle of eye vergence.
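The geometric relation sketched in Figure 1 can be made concrete: for a fixation point straight ahead at distance d, the vergence angle is α = 2·atan((IPD/2)/d), where IPD is the inter-pupillary distance. A minimal sketch (the IPD value of 6.3 cm is an illustrative assumption, not from the paper; 47 cm is the viewing distance used in the experiment):

```python
import math

def vergence_angle_deg(ipd_cm, distance_cm):
    """Vergence angle (degrees) for two eyes separated by ipd_cm
    fixating a point distance_cm straight ahead (symmetric geometry)."""
    return math.degrees(2 * math.atan((ipd_cm / 2) / distance_cm))

near = vergence_angle_deg(6.3, 47)    # experiment's viewing distance -> ~7.7 deg
far = vergence_angle_deg(6.3, 100)    # a farther point -> smaller angle, ~3.6 deg
```

This illustrates why the reported increases in vergence angle correspond to convergence towards a nearer plane.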
Therefore, to further understand the relation between vergence eye movements and spatial attention, we made use of people with different perceptual styles. Perceptual style refers to the characteristic way individuals perceive environmental stimuli and is characterized by a differentiation between local and global processing [10-13]. Individuals with a global perceptual style have difficulty ignoring the spatial context of a stimulus (field-dependent observers; FD), while individuals with a local style are less influenced by surrounding stimuli (field-independent observers; FI). Perceptual style is attributed to differences in brain organization [14] that may result in distinctive attentional and spatial abilities [15-18].
We found that eye vergence movements differ among people with different perceptual styles. As perceptual style is mainly characterized in terms of visuospatial processing, this finding may suggest that vergence is relevant to spatial attention [19,20]. Alternatively, modulation in vergence eye movements may have a role in the switch in cortical state [21].
In task 1 (Figure 2a), the average response time was significantly shorter in the FI group than in the FD group (mean±sem: 633.2±9.8 ms vs. 696.8±11.9 ms; t-test (687)=-4.13, p<0.001). Observers from both groups responded faster in the cue condition than in the no-cue condition (mean±sem; FI: 578±11 ms vs. 689±14 ms; FD: 655±15 ms vs. 746±18 ms; t-test (714)=-6.5; p<0.001; Figure 3a). Thus, on average, FI subjects were 78 ms and 57 ms faster in the cue and no-cue conditions, respectively. Regarding detection performance, FI and FD observers had a similar (χ2-test (1073)=2.92; p=0.09) percentage of correct responses (FI: 86.1%; FD: 81.6%).
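The group comparison above is an independent-samples t-test on reaction times. A minimal sketch of such a comparison on synthetic data (the group means are taken from the reported averages; the spread of 180 ms and the trial counts are our assumptions for illustration, not the paper's data):

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic reaction times (ms). Means follow the reported group averages;
# standard deviation and sample sizes are illustrative assumptions.
rng = np.random.default_rng(1)
rt_fi = rng.normal(633.2, 180, size=1000)   # field-independent group
rt_fd = rng.normal(696.8, 180, size=1000)   # field-dependent group

# Independent-samples t-test: FI faster than FD gives a negative t statistic.
t_stat, p_val = ttest_ind(rt_fi, rt_fd)
```

With a group difference of ~64 ms and this many trials, the test reliably rejects equal means, mirroring the significant difference reported above.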
A: The cue/no-cue task. B: The matching task. Time is from fixation onset.
Average reaction times from the cue/no-cue task (A) and from the matching task (B). Error bars are SEM.
In task 2 (Figure 2b), FI observers responded significantly faster (by 71 ms) than FD observers (mean±sem: 672±10 ms vs. 743±12 ms; t-test (481)=-4.62; p<0.001; Figure 3b). The percentage of correct responses was higher for FI observers (72.7%) than for FD observers (54.8%; χ2-test (746)=25.62; p<0.001).
The positions of both eyes were monitored simultaneously to compute the angle of eye vergence during fixation. As previously described, we found in the first task that the angle of eye vergence was larger in the cue condition than in the no-cue condition for both FI and FD subjects (FI: t-test (356)=10.03, p<0.01; FD: t-test (356)=5.76, p<0.01; Figure 4). Thus the eyes start to converge after the presentation of the cue stimulus. We then compared the strength of the modulation of the vergence angle between FI and FD subjects. In the cue condition, the angle of eye vergence was larger in FI subjects than in FD subjects (t-test (372)=4.84, p<0.01; Figure 4). No difference in vergence modulation between the groups was found in the no-cue condition (t-test (340)=1.67, p>0.05; Figure 4). We next analyzed the modulation of the vergence angle during the matching task. Here too the modulation was stronger in FI subjects than in FD subjects (t-test (480)=4.82, p<0.01; Figure 5). Thus FI subjects show stronger eye convergence than FD subjects. When FI and FD subjects viewed the same visual stimulation sequence but without any instructed task (control task), no difference in vergence modulation between FI and FD subjects was observed (t-test (200)=-0.0361, p>0.05; Figure 5).
A: Average modulation in the angle of eye vergence from all subjects in the cue (green) and no-cue (red) conditions for FI (continuous lines) and FD (dotted lines) subjects. Higher values of vergence angle represent convergence. Time is from cue/no-cue onset. B: Mean modulation in eye vergence for FI and FD subjects. Asterisks denote significant (p<0.01) differences. Error bars are SEM.
A: Average modulation across all subjects in the angle of eye vergence during the task (black) and control task (blue) for FI (continuous lines) and FD (dotted lines) subjects. Higher values of vergence angle represent convergence. Time is from the onset of the peripheral change. B: Mean modulation in eye vergence for FI and FD subjects. Asterisks denote significant (p<0.01) differences. Error bars are SEM.
To better understand the role of vergence eye movements in visuospatial attention, we tested people with different perceptual styles (FI and FD observers) in two paradigms that involve orienting attention, and analyzed the modulation of the angle of eye vergence. In both tasks FI observers responded faster than FD observers, and in the matching task they were also more accurate. We also found that both FI and FD subjects had stronger modulation in eye vergence in the cue condition than in the no-cue condition. Furthermore, in the cue condition of the first task and in the experimental condition of the second task, FI subjects showed stronger modulation of the vergence angle than FD subjects. Thus observers converge (not diverge) their eyes after cue/stimulus presentation, with FI subjects showing stronger convergence than FD subjects. The changes in vergence angle (0.1°-0.2°) seem compatible with the tolerance range of Panum's fusional area (15-30 arcmin). The absence of modulation in eye vergence during the control task indicates that the modulation does not merely reflect visual stimulation and argues for a perceptual origin of the vergence modulation in the experimental task. That is to say, visual stimulation alone is not sufficient to change the angle of eye vergence; it also requires the subject's attention.
A possible explanation for our results considers the distance of the peripheral target location, which is slightly further from the eyes than the central fixation point. After presentation of the cue, subjects might focus on the peripheral target to fuse both retinal images of the target while maintaining fixation at the central point. According to this idea, the eyes should diverge to a more distant plane and the angle of eye vergence should decrease after cue/stimulus presentation. However, we found an increase in vergence angle, which argues against a near-triad explanation. This conclusion is supported by our previous paper [9], where we placed targets at different eccentricities (7° and 14° from the fixation point) and found that the strength of the vergence modulation was similar for all targets.
Recently we proposed a new role of eye vergence movements in attention and found that modulation in eye vergence relates to shifts in visuospatial attention [9]. Vergence modulation may follow the shift in attention, or may just co-vary with or even cause orienting of attention. For instance, in our previous paper we provided evidence that eye vergence starts to modulate around the same time as subjects start to shift their attention [9]. Thus our current findings may be explained in terms of attentional differences between FI and FD subjects. The idea that perceptual style is ascribed to distinctive attentional abilities [15-17] agrees with this notion. Some studies suggest that FD subjects have greater difficulty in maintaining attention on a given aspect of information and in attending selectively to relevant cues, particularly in the presence of distracting elements [16,23]. Moreover, depending on their perceptual style, subjects seem to attend to different aspects of information: FD subjects tend to focus their attention on global aspects of the information to be processed, while FI subjects tend to focus on partial aspects, e.g. [24,25]. Also, FD subjects are less effective in using their attentional resources, resulting in poorer performance compared to FI observers [26-28]. In contrast, other studies found no difference in stimulus detection performance, which indicates a similar attentional ability for FI and FD observers.
The zoom lens model [19,20] predicts an adjustment of the attentional focus depending on task demands. This theory thus suggests a trade-off, due to limited processing capacity, between the size of the region covered by attention and the efficiency of sensory processing. Müller et al [30] found results in line with this physiological prediction: while the extent of activated retinotopic visual cortex increased with the size of the attended region, the level of neural activity in a given sub-region decreased. Therefore a possible, albeit speculative, explanation for the difference in eye vergence modulation between FI and FD observers is a different extent of activated visual cortex. The stronger increase in eye vergence modulation in the FI group could reflect a smaller attended region. This smaller attended region could explain the better performance of this group, which agrees with the notion that FI subjects are less influenced by the context of a stimulus and are more biased towards local stimuli.
An alternative explanation for the faster reaction times observed in FI subjects may relate to the speed or efficiency of stimulus processing [31]. The detection performance of FI and FD subjects in the first task was similar. Thus, we assume that the detection performance for the peripheral target in the second task was also similar for both groups. Therefore, the observed difference in reaction times to the central stimulus between the FI and FD groups in the second task is likely an outcome of dissimilar memory capacity, decision-making, and/or speed of stimulus processing. Accordingly, the observed difference in vergence modulation between FI and FD subjects before the presentation of the central target would relate to a difference in the speed or efficiency of stimulus processing and not to a difference in orienting attention. This idea may also explain the vergence modulation in the first task: the cue stimulus induces the eyes to converge, thereby preparing the visual system for upcoming sensory information. However, the speculated link between vergence modulation and enhanced stimulus processing would need to be spatially specific, i.e. to a single target; otherwise vergence modulation would be expected to occur in the no-cue condition as well. Thus, the faster reaction times observed in FI subjects may be explained by a superior preparatory phase. We speculate that the modulation in vergence eye movements before target onset could have a role in the switch in cortical state that has been observed in the visual cortex [21] to prepare the visual system for new incoming sensory information, leading to faster or more efficient stimulus processing.
In conclusion, it becomes clear that small fixational eye movements have various roles in visual perception. In this regard, eye vergence movements, besides depth perception, may have a role in visual attention. However, further studies should elucidate this relationship.
Materials and Methods
The study was approved by the Ethics committee of the Faculty of Psychology of the University of Barcelona in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. All observers gave written informed consent before participating.
To test vergence in relation to perceptual differences we selected eight FD and FI subjects (mean age: 21.88; STD: 3.48) at the extremes of the distribution of the scores of 157 participants tested with the 3rd section of the GEFT [32]. High scores on this test are indicative of a local processing style, while low scores indicate a bias towards global processing. As a criterion for extremity we used the percentage of correct responses (hits) to select the ~5% best or worst subjects of the population: to belong to the FD group, performance had to be below 50%, and to belong to the FI group performance had to exceed 80%. All participants were naïve to the purpose of the study and all had normal or corrected-to-normal vision. Participants received credit points or money for taking part in the experiment.
The stimuli were displayed on a Pentium-IV 3000 PC with a Philips Brilliance 22″ CRT screen. The display resolution was 24 pixels per degree (size: 1024 x 768 pixels or 27.6° x 20.7°). We used in-house C++ software to present the stimuli. The participants' position of gaze was monitored with a binocular EyeLink II eye-tracking system at 500 Hz (SR Research, Ontario, Canada). To compensate for head movements, we used individually molded bite bars (UHCOTECH Head Spot, University of Houston, Texas, USA).
Stimuli and procedure
Stimuli consisted of a fixation cross (5x5 pixels) surrounded by 8 peripheral bars (3x11 pixels) at an eccentricity of 7.5°. The stimuli were black on a grey background. Participants sat in a dimly lit room at 47 cm from the screen. Both experiments consisted of 4 sets of 32 trials per task. Before starting each task, some training trials were conducted.
Observers were required to fixate the central cross. 300 ms after fixation onset, the 8 peripheral bars appeared. After a further 1000 ms, a cue (a red line pointing to one of the peripheral bars, 3x13 pixels) or a no-cue stimulus (a red cross, 13x13 pixels) appeared for 100 ms at the central position (Figure 2a). The cue indicated the target with 100% validity. After an additional 1000 ms, one of the peripheral bars (the target) briefly (100 ms) changed its orientation (a tilt of 20° to the left or right). Participants had to report by button press, as quickly and accurately as possible, whether the target tilted to the left or to the right. No feedback was given.
Observers were required to fixate the central cross. After 300 ms, the 8 peripheral bars appeared, and after 1500 ms one of the peripheral bars (the peripheral target) briefly (50 ms) changed its orientation (20°). After an additional fixation period of 1500 ms, a tilted bar (20°) appeared for 50 ms at the position of the fixation cross (the central target). Participants responded with a button press (two-alternative choice) whether or not the orientations matched (Figure 2b). In an additional control experiment, subjects viewed the same visual stimulus sequence but were instructed to fixate the central cross without performing any task.
Eye data analysis
While subjects fixated the central cross, we calculated the angle of eye vergence as described in [9]. For the calculation of both eye gaze vectors we used the real distance from the screen to the observer and the actual inter-pupil distance. For each subject, the eye vergence data were normalized by dividing the raw data by the maximum value of the recorded samples from fixation onset to target onset. Only correct trials were analyzed. For the calculation of the mean eye vergence angle, we selected per subject a 100 ms window around the maximum peak modulation in the average vergence angle, i.e. 550 to 650 ms after the onset of the cue/no-cue stimulus (Task 1) or the change of the peripheral stimulus (Task 2).
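The normalization and peak-window averaging described above can be sketched as follows. This is our illustrative reconstruction of the described procedure, not the authors' code; the function name, parameters, and the assumption of a 500 Hz trace aligned to stimulus onset are ours:

```python
import numpy as np

def peak_window_mean(vergence, fs=500, win=(0.55, 0.65)):
    """Normalize a vergence trace by its maximum sample, then average a
    100 ms window around the reported peak modulation (550-650 ms after
    stimulus onset). The trace is assumed to start at stimulus onset."""
    v = np.asarray(vergence, dtype=float)
    v_norm = v / v.max()                      # divide by the maximum value
    i0, i1 = int(win[0] * fs), int(win[1] * fs)
    return v_norm, v_norm[i0:i1].mean()
```

Averaging over a fixed window around the peak gives one modulation value per subject, which is what the group comparisons above are computed on.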
Conceived and designed the experiments: MSP JAAC HS. Performed the experiments: MSP LP. Analyzed the data: MSP HS. Contributed reagents/materials/analysis tools: MSP HS. Wrote the manuscript: MSP HS.
- 1. Awh E, Armstrong KM, Moore T (2006) Visual and oculomotor selection: links, causes and implications for spatial attention. Trends Cogn Sci 10: 124-130. doi:https://doi.org/10.1016/j.tics.2006.01.001. PubMed: 16469523.
- 2. Hafed ZM, Clark JJ (2002) Microsaccades as an overt measure of covert attention shifts. Vision Res 42: 2533-2545. doi:https://doi.org/10.1016/S0042-6989(02)00263-8. PubMed: 12445847.
- 3. Engbert R, Kliegl R (2003) Microsaccades uncover the orientation of covert attention. Vision Res 43: 1035-1045. doi:https://doi.org/10.1016/S0042-6989(03)00084-1. PubMed: 12676246.
- 4. Martinez-Conde S, Macknik SL, Troncoso XG, Dyar TA (2006) Microsaccades counteract visual fading during fixation. Neuron 49: 297-305. doi:https://doi.org/10.1016/j.neuron.2005.11.033. PubMed: 16423702.
- 5. Rucci M, Desbordes G (2003) Contributions of fixational eye movements to the discrimination of briefly presented stimuli. J Vis 19: 852-864. PubMed: 14765967.
- 6. Ko HK, Poletti M, Rucci M (2010) Microsaccades precisely relocate gaze in a high visual acuity task. Nat Neurosci 13: 1549-1554. doi:https://doi.org/10.1038/nn.2663. PubMed: 21037583.
- 7. Engbert R, Kliegl R (2004) Microsaccades Keep the Eyes’ Balance During Fixation. Psychol Sci 15: 431–436. doi:https://doi.org/10.1111/j.0956-7976.2004.00697.x. PubMed: 15147499.
- 8. Pérez Zapata L, Aznar-Casanova JA, Supèr H (2013) Two stages of programming eye gaze shifts in 3-D space. Vision Res, 86C: 15-26. doi:https://doi.org/10.1016/j.visres.2013.04.005. PubMed: 23597580.
- 9. Solé Puig M, Pérez Zapata L, Aznar-Casanova JA, Supèr H (2013) A role of eye vergence in covert attention. PLOS ONE 8(1): e52955. doi:https://doi.org/10.1371/journal.pone.0052955. PubMed: 23382827.
- 10. Witkin HA, Goodenough DR (1981) Cognitive Styles: Essence and Origins (Psychological Issues Monograph No. 51). NY: International Universities Press, Inc.
- 11. Milne E, Szczerbinski M (2009) Global and local perceptual style, field-independence, and central coherence: An attempt at concept validation. Adv Cogn Psychol 5: 1-26. doi:https://doi.org/10.2478/v10053-008-0062-8. PubMed: 20523847.
- 12. Frith U (1989) Autism: Explaining the enigma. Oxford: Blackwell Scientific Publications.
- 13. Carroll JB (1993) Human cognitive abilities: A survey of factor-analytic studies. Cambridge: Cambridge University Press.
- 14. Tinajero C, Páramo MF, Cadaveira F, Rodriguez-Holguin S (1993) Field dependence-independence and brain organization: the confluence of two different ways of describing general forms of cognitive functioning? A theoretical review. Percept Mot Skills 77: 787-802. PubMed: 8284155.
- 15. Baillargeon R, Pascual-Leone J, Roncadin C (1998) Mental-attentional capacity: does cognitive style make a difference? J Exp Child Psychol 70(3): 143-166. doi:https://doi.org/10.1006/jecp.1998.2452. PubMed: 9742177.
- 16. Guisande MA, Páramo MF, Tinajero C, Almeida LS (2007) Field dependence-independence (FDI) cognitive style: an analysis of attentional functioning. Psicothema 19(4): 572-577. PubMed: 17959109.
- 17. López-Villalobos JA, Delgado J, Pérez I, Serrano I, Alberola S et al. (2010) Utility of Children’s Embedded Figures Tests in Attention Deficit Hyperactivity Disorder. Clínica Salud 21(1): 93-103. doi:https://doi.org/10.5093/cl2010v21n1a8.
- 18. Guillot A, Champely S, Batier C, Thiriet P, Collet C (2007) Relationship between spatial abilities, mental rotation and functional anatomy learning. Advances in Health Science and Educational Theory and Practice 12(4): 491-507. doi:https://doi.org/10.1007/s10459-006-9021-7. PubMed: 16847728.
- 19. LaBerge D (1983) Spatial extent of attention to letters and words. J Exp Psychol Hum Percept Perform 9: 371–379. PubMed: 6223977.
- 20. Eriksen CW, St James JD (1986) Visual attention within and around the field of focal attention: A zoom lens model. Percept Psychophys 40: 225–240. doi:https://doi.org/10.3758/BF03211502. PubMed: 3786090.
- 21. Supèr H, Van der Togt C, Spekreijse H, Lamme VAF (2003) Internal state of the monkey primary visual cortex predicts figure-ground perception. J Neurosci 23: 3407-3414. PubMed: 12716948.
- 22. Erkelens CJ, Collewijn H (1991) Control of vergence: gating among disparity inputs by voluntary target selection. Exp Brain Res 87(3): 671-678. PubMed: 1783036.
- 23. Amador-Campos JA, Kirchner-Nebot T (1999) Correlations among scores on measures of field dependence-independence cognitive style, cognitive ability, and sustained attention. Percept Mot Skills 88(1): 236-239. PubMed: 10214649.
- 24. Clark HT, Roof KD (1988) Field dependence and strategy use. Percept Mot Skills 66: 303-307.
- 25. Tsakanikos E (2006) Associative learning and perceptual style: Are associated events perceived analytically or as a whole? Pers Individ Dif 4: 579-586.
- 26. Burton JK, Moore DM, Holmes GA (1995) Hypermedia concepts and research: An overview. Comput Hum Behav 11: 345-369. doi:https://doi.org/10.1016/0747-5632(95)80004-R.
- 27. Goode PE, Goddard PH, Pascual-Leone J (2002) Event-related potentials index cognitive style differences during a serial-order recall task. Int J Psychophysiol 43: 123-140. doi:https://doi.org/10.1016/S0167-8760(01)00158-1. PubMed: 11809516.
- 28. Miyake A, Witzki A, Emerson M (2001) Field dependence-independence from a working memory perspective: A dual-task investigation of the Hidden Figures Test. Memory 9: 445-457. doi:https://doi.org/10.1080/09658210143000029.
- 29. Kirchner T, Forns M, Amador JA (1990) Relaciones entre las dimensiones de dependencia-independencia de campo, introversión-extroversión y tiempos de reacción. Anu Psicología 46: 53-63.
- 30. Müller NG, Bartelt OA, Donner TH, Villringer A, Brandt SA (2003) A physiological correlate of the "Zoom Lens" of visual attention. J Neurosci 23(9): 3561-3565. PubMed: 12736325.
- 31. Colombo J, Mitchell DW, Coldren JT, Freeseman LJ (1991) Individual differences in infant visual attention: are short lookers faster processors or feature processors? Child Dev 62(6): 1247-1257. doi:https://doi.org/10.2307/1130804. PubMed: 1786713.
- 32. Witkin HA, Olman PK, Raskin E, Kamp SA (1981) Test de Figuras Enmascaradas. Madrid: T.E.A.