Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability.
The speed of sight has fascinated scientists and philosophers for centuries. In the blink of an eye, observers can rapidly and effortlessly perform a variety of categorization tasks such as categorizing a scene as open, as natural, or as a beach. The past decade of work has shown that there exist systematic differences in behavioral responses across different categorization tasks: For instance, participants appear to be faster and more accurate at categorizing a scene as outdoor (i.e., superordinate level) compared to categorizing a scene as a beach (i.e., basic level). Here, we describe a computational model combined with human psychophysics experiments, which helps shed light on the underlying mechanisms. Using a large natural scene database, we trained machine-learning algorithms for different categorization tasks and showed that it is possible to derive confidence measures that accurately predict variations in participants’ behavioral responses across categorization tasks and stimulus sets. Using the computational model to sample stimuli for a human experiment, we demonstrated that it is possible to reverse the superordinate advantage, rendering observers’ superordinate categorization slower and less accurate than their basic-level categorization—effectively challenging previous interpretations of the phenomenon. The study further offers a vivid example of how computational models can help summarize and organize existing experimental data as well as plan and interpret new experiments.
Citation: Sofer I, Crouzet SM, Serre T (2015) Explaining the Timing of Natural Scene Understanding with a Computational Model of Perceptual Categorization. PLoS Comput Biol 11(9): e1004456. https://doi.org/10.1371/journal.pcbi.1004456
Editor: Wolfgang Einhäuser, Technische Universitat Chemnitz, GERMANY
Received: December 16, 2014; Accepted: July 19, 2015; Published: September 3, 2015
Copyright: © 2015 Sofer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This work was supported by the National Science Foundation (NSF) early career award [grant number IIS-1252951 to TS]. Additional support was provided by the Defense Advanced Research Projects Agency (DARPA) young faculty award [grant number YFA N66001-14-1-4037 to TS], the Office of Naval Research (ONR) grant [grant number N000141110743 to TS], the Brown Institute for Brain Sciences (BIBS), the Center for Vision Research (CVR), and the Center for Computation and Visualization (CCV). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Categorization is perhaps one of our most critical visual functions as it allowed our ancestors to distinguish friend from foe and the edible from the inedible. Observers can rapidly extract meaning from brief presentations of complex visual scenes—far exceeding the best existing engineered artificial systems.
Observers can reliably perform a variety of categorization tasks such as categorizing a scene as open, as outdoor, or as a beach. However, it has also been shown that there exist systematic differences in participants’ behavioral responses across categorization tasks. In particular, categorizing a scene as open or navigable (i.e., attribute level) necessitates shorter presentation times than categorizing a scene as a lake or a beach (i.e., basic level, see ). (Note that our definition of basic-levelness follows the common usage in vision science (see [5–13]) and reflects a logical rather than functional definition of the basic level.) Similarly, participants appear to be faster and more accurate when categorizing a scene as outdoor (i.e., superordinate level) compared to categorizing a scene at a basic level [7, 9, 11, 13]. A very recent study further suggests that subordinate scene categorization is less sensitive and slower than basic level categorization .
Beyond the categorization of natural scenes, there exist systematic differences in behavioral responses for object categories across taxonomic levels, with observers’ subordinate-level categorization (e.g., pigeons vs. other birds) being slower and less accurate than basic-level categorization (e.g., birds vs. non-birds, see ). Similarly, basic-level categorization (e.g., birds vs. dogs) has been shown to be slower than superordinate categorization (e.g., animals vs. non-animals, see ). Participants tend to be faster and more accurate at categorizing faces at the superordinate level (i.e., categorizing faces vs. non-faces) compared with categorizing faces at the familiarity level (famous vs. non-famous, see ). However, for both familiar faces and other individually-known familiar objects, categorization at the subordinate level is faster than at the basic level . Similarly, there exist systematic differences in behavioral responses for different social inference tasks : For instance, categorization at the level of intentionality is faster than categorization at the level of belief and personality.
Such systematic behavioral differences across categorization tasks are often taken as suggestive evidence for an underlying hierarchical organization of categorization processes with some categorization tasks taking precedence over others [5–13], but see also [18–20]. Overall, the past decade of research on visual categorization has produced a significant and rapidly increasing amount of data and, while systematic differences across categorization tasks have been well-characterized to date, little is known about the underlying mechanisms.
In this study, we describe a computational model to account for variations in participants’ behavioral responses (both accuracy and reaction time) across tasks and stimuli for the rapid categorization of natural scenes. Previous work has proceeded along two seemingly parallel paths (see [21, 22] for discussions) with a nearly exclusive focus on modeling either visual representations (see  for review) or categorization and decision-making (see  for review). Here, we implemented a single integrated paradigm that links perception with categorization processes.
Formally, visual categorization corresponds to the process of associating visual stimuli x_i to category labels y_i (i = 1, …, m) to form (x_i, y_i) exemplar-label pairs. Each x_i may be parametrized by a feature vector in an N-dimensional perceptual space . Fig 1A illustrates such a feature space for a hypothetical population of N = 2 feature detectors (in practice, we expect N to be much larger). Learning to categorize visual stimuli requires learning a categorization boundary that best represents the relation between input images x_i and their corresponding category labels y_i. Once a categorization boundary has been learned, the classification of a stimulus depends on its position relative to that boundary: One side of the categorization boundary will be associated with the target set of stimuli while the other side will be associated with the distractor set. An illustration of hypothetical decision boundaries corresponding to different taxonomic levels is shown in Fig 1B–1D. According to this computational framework, different categorization tasks correspond to different decision boundaries that carve up the same perceptual space, an idea that has motivated the development of most existing computational models of perceptual categorization (see  for review).
(A) Perceptual space: Visual features are first extracted from individual images, which can then be represented as datapoints in an N-dimensional space. (B–D) Categorization boundaries: The model assumes that different categorization tasks carve up the same perceptual space and correspond to different categorization boundaries (shown for hypothetical tasks: Superordinate level—‘natural’ vs. ‘man-made’ in (B), basic level—‘beach’ vs. ‘forest’ in (C) and scene attribute—‘easy’ vs. ‘hard’ to navigate in (D)).
We used a rudimentary visual representation based on the “gist” algorithm  but other visual representations are possible (see  for review; see also Discussion). We further used a large image database  to train and test machine learning classifiers (regularized logistic regression) and estimate the decision boundaries associated with many different scene categorization tasks. A task-dependent measure of perceptual discriminability can then be derived for a particular categorization task by considering the distance between individual stimuli and the categorization boundary (Fig 2A). The basic intuition for this measure is that, for a particular categorization task, images that are closer to the categorization boundary will be harder to categorize than those that are further away leading to behavioral responses that are slower and less accurate. Furthermore, these values can be aggregated to yield estimates of accuracy for arbitrary sets of target and distractor stimuli (Fig 2B).
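As a rough sketch of these two measures, the snippet below computes signed distances to a linear boundary and a d′-style separation between target and distractor distance distributions (toy data; the helper names and the pooled-variance normalization are our own illustrative choices, not the paper’s exact implementation):

```python
import numpy as np

def boundary_distance(X, w, b):
    """Signed distance from each feature vector (row of X) to the
    hyperplane w.x + b = 0; the sign gives the predicted category."""
    return (X @ w + b) / np.linalg.norm(w)

def set_discriminability(d_targets, d_distractors):
    """d'-style normalized separation between the two distance
    distributions (cf. Fig 2B)."""
    pooled_sd = np.sqrt(0.5 * (d_targets.var() + d_distractors.var()))
    return (d_targets.mean() - d_distractors.mean()) / pooled_sd

# Toy 2-D perceptual space with a known linear boundary.
rng = np.random.default_rng(0)
w, b = np.array([1.0, 1.0]), 0.0
targets = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(100, 2))
distractors = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(100, 2))
d_t = boundary_distance(targets, w, b)
d_d = boundary_distance(distractors, w, b)
print(set_discriminability(d_t, d_d))  # well-separated sets -> large value
```

On this toy example, images near the boundary get distances close to zero (hard to categorize), while the set-level measure grows with the separation between the two clouds.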
(A) For an individual image and a specific categorization task (e.g., task 1), discriminability values are derived from the model by considering the distance d1 between the image and the categorization boundary associated with task 1. Here we tested the hypothesis that for a given stimulus and task, discriminability values drive participants’ average categorization accuracy and reaction times. (B) Discriminability values can also be computed for arbitrary sets of target (green) and distractor (brown) images. The normalized distance between these two distributions will determine how easy or difficult the task, as a whole, will be for human participants.
The goal of the present study was to test the hypothesis that the perceptual discriminability of individual stimuli for a particular task is one of the main factors driving behavioral responses. While this hypothesis is built into many categorization models (see  for review), it had so far only been tested with simple artificial stimuli for which participants were trained to learn a new object category parametrized by two dimensions (e.g., ; but see also  for alternative models). However, this hypothesis has not yet been tested for well-learned, natural categories.
We first found that model-derived discriminability values predicted behavioral responses well for the different categorization tasks reported in two published studies [4, 11]. In addition, in experiment 1, we were further able to show that the model accurately predicted variations in accuracy and reaction time at the level of individual stimuli within the context of a scene categorization task. We then used the model to test the hypothesis that the so-called “superordinate advantage” [7, 9, 11, 13], whereby superordinate categorization is faster and more accurate than basic categorization, may reflect the greater perceptual discriminability of scenes at the superordinate vs. basic level. Consistent with this hypothesis, we first found that the model was consistent with the reported results of a published study on the superordinate advantage . In experiment 2, we further showed that it is possible to use model-derived discriminability values to sample stimuli and to effectively reverse the superordinate advantage, making participants’ superordinate categorization slower and less accurate than basic categorization, thus offering a possible perceptual explanation of the phenomenon.
Overall, our results provide a computational-level explanation for the systematic variations in rapid categorization behavioral responses across taxonomic levels, suggesting that these differences may simply reflect natural variations in perceptual discriminability. Our study thus challenges several existing theories of visual processing and offers a vivid example of how computational models can help summarize existing data as well as plan and interpret novel experiments.
The protocol was approved by Brown IRB [protocol #1002000135] and was carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki. All participants reported having normal or corrected-to-normal vision and gave written informed consent.
Images were converted to grayscale, cropped to a square, and then rescaled to 256×256 pixels. To minimize low-level brightness differences between targets and distractors, stimuli for each individual session were set to a constant mean brightness value (equal for all images in the corresponding session).
The visual representation used here, called “gist,” is relatively low-level [25, S1 Fig]. It was chosen for its simplicity in the absence of any strong evidence that a more complex visual representation would lead to significantly different model predictions (S2 Fig). Briefly, the image was first convolved with a bank of 32 Gabor filters (4 scales × 8 orientations). The resulting convolution maps were then averaged separately in individual cells on a 4 × 4 grid covering the whole image. This yielded a 512-dimensional feature vector. Matlab source code for the “gist” is available online (see  for details).
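For illustration, a minimal gist-like descriptor can be sketched as follows (the exact Gabor parameters of the original implementation differ; the kernel size, wavelengths, and bandwidth below are assumed values chosen for brevity):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(wavelength, theta, size=17, gamma=0.5):
    """Even-symmetric Gabor filter: cosine carrier under a Gaussian
    envelope, oriented at angle theta."""
    sigma = 0.5 * wavelength                       # assumed bandwidth
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gist_features(image, n_scales=4, n_orients=8, grid=4):
    """Convolve with a 4x8 Gabor bank, then average the rectified
    responses inside each cell of a grid x grid partition:
    4 scales * 8 orientations * 16 cells = 512 features."""
    feats = []
    for s in range(n_scales):
        wavelength = 4 * 2**s                      # assumed scale spacing
        for o in range(n_orients):
            theta = o * np.pi / n_orients
            kern = gabor_kernel(wavelength, theta)
            resp = np.abs(fftconvolve(image, kern, mode='same'))
            h, w = resp.shape
            for i in range(grid):
                for j in range(grid):
                    cell = resp[i*h//grid:(i+1)*h//grid,
                                j*w//grid:(j+1)*w//grid]
                    feats.append(cell.mean())
    return np.asarray(feats)

features = gist_features(np.random.rand(64, 64))
print(features.shape)  # (512,)
```

The 64×64 input is used only to keep the sketch fast; the same code applies unchanged to the 256×256 stimuli described above.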
Categorization boundaries were learned from natural scenes using a logistic regression classifier with L2 regularization. Software was implemented in Python using the scikit-learn  and the liblinear library . Comparable levels of accuracy and qualitatively similar patterns of results were obtained with other types of classifiers (e.g., SVMs) as well as more complex kernels (see S1 Text). Categorization tasks were modeled as binary categorization tasks except in two of the comparisons with published results [9, 11], which were modeled using a one-vs-all multi-class classification approach.
Classifiers were trained and tested using cross-validation techniques whereby images were split into disjoint training and test sets multiple times at random (with replacement). The number of images sampled had to be varied across experiments because we tried to use the maximal number of samples available while creating datasets containing a balanced number of positive and negative samples. Except when noted, training and test data were split using an 80–20% training-test split. All hyper-parameters were optimized on the training set using a 5-fold cross-validation procedure.
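The hyper-parameter selection step can be sketched as a 5-fold cross-validation over candidate penalty strengths, here using a ridge classifier as a stand-in for the regularized logistic loss (the function name and candidate values are hypothetical):

```python
import numpy as np

def cv_select_lambda(X, y, lambdas, k=5, seed=0):
    """Pick the L2 penalty by k-fold cross-validation on the training
    set; returns the lambda with the best mean validation accuracy."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    t = 2 * y - 1                                   # labels in {-1, +1}
    scores = []
    for lam in lambdas:
        accs = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            # Closed-form ridge solution on the training folds.
            A = X[trn].T @ X[trn] + lam * np.eye(X.shape[1])
            w = np.linalg.solve(A, X[trn].T @ t[trn])
            accs.append(np.mean((X[val] @ w > 0) == (t[val] > 0)))
        scores.append(np.mean(accs))
    return lambdas[int(np.argmax(scores))]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 1, (100, 20)), rng.normal(1, 1, (100, 20))])
y = np.repeat([0, 1], 100)
best = cv_select_lambda(X, y, [0.01, 1.0, 100.0])
print(best)
```

In the actual pipeline this selection would be run inside each 80–20% training split, so the test images never influence the chosen hyper-parameters.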
Discriminability values were estimated by computing the average (test) classification error for each image in the dataset over multiple splits of the training/test data. This measure is simpler to compute than estimating the average distance between an image and the categorization boundary across all training/test data splits and, in practice, we found these two measures to agree closely.
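The per-image measure can be sketched as follows, assuming a simple least-squares linear classifier in place of the paper’s regularized logistic regression (all names are illustrative):

```python
import numpy as np

def per_image_discriminability(X, y, n_splits=100, test_frac=0.2, seed=0):
    """Average test-set correctness per image over repeated random
    80/20 splits: each image's score is the fraction of splits in
    which it was classified correctly while held out."""
    rng = np.random.default_rng(seed)
    n = len(y)
    correct = np.zeros(n)
    counts = np.zeros(n)
    Xb = np.hstack([X, np.ones((n, 1))])       # append a bias column
    for _ in range(n_splits):
        idx = rng.permutation(n)
        n_test = int(test_frac * n)
        test, train = idx[:n_test], idx[n_test:]
        # Least-squares fit to +/-1 labels as a linear-classifier stand-in.
        w, *_ = np.linalg.lstsq(Xb[train], 2 * y[train] - 1, rcond=None)
        pred = (Xb[test] @ w > 0).astype(int)
        correct[test] += (pred == y[test])
        counts[test] += 1
    return correct / np.maximum(counts, 1)

# Toy data: two well-separated 2-D clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)
disc = per_image_discriminability(X, y)
print(disc.mean())  # high for points far from the boundary
```

Images sitting near the boundary are misclassified on some splits and receive intermediate scores, which is exactly the graded quantity used to bin stimuli below.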
Except when noted, the model accuracy was computed as the average rate of correct (test) classification over all random splits of the training/test data (N = 100, unless specified otherwise).
Apparatus and procedure.
Participants sat in a dimly lit room. They were instructed to sit with their back leaning against the chair so as to maintain a viewing distance of approximately 75 cm to the CRT monitor (800 × 600 pixels, refresh rate of 140 Hz). Stimulus presentation was controlled using Matlab and the PsychToolbox  on a Mac Pro. Behavioral responses were collected using two handheld thumb button switches connected to a response time box.
On each trial, the experiment ran as follows: On a black background (1) a fixation cross appeared for a variable time (1,100–1,600 ms); (2) a stimulus (10° × 10°) was presented for a single frame (7 ms). The order of image presentations was randomized. Participants were instructed to answer as fast and as accurately as possible by pressing the button in their strong hand if they saw a target, and the other button if they saw a distractor. Participants were forced to respond within 500 ms (a sound was played and a message displayed in the absence of a response past the response deadline). At the end of each block, participants received feedback about their accuracy. An illustration of the experimental paradigm is shown in Fig 3A.
(A) Overview of the experimental design: Each trial began with a fixation cross followed by the subsequent brief presentation of an image (7 ms). Participants were required to respond within 500 ms. (B) Representative scenes sampled at five distinct discriminability value levels for a natural vs. man-made categorization task. Note that the original stimuli used could not be shown for copyright reasons and were instead replaced by visually similar images found on Flickr under a Creative Commons license. (C) Average results across all participants: Accuracy (percentage of correct responses, blue) and mean reaction time (RT) for correct responses (red) as a function of discriminability values as predicted by the model. Curves correspond to a GLMM fit and error bars to the standard deviation of the mean. (D) Results for individual participants.
Sample size and stopping criterion.
Here, we applied a Bayesian analysis of results (see below). Thus, there was no need to predetermine a stopping rule or sample size, as the analysis does not depend on the researchers’ intentions .
Predicting behavioral categorization based on discriminability: Existing literature and experiment 1
Model initial validation.
We initially validated the model by postdicting behavioral categorization results reported in [4, 11]. Because of the relatively small size of the image dataset, the proportion of training images relative to test images had to be increased to reproduce the results described in  (96/4% training/test split). Because of the nature of the task in , the accuracy measure acci,j for discriminating between category i and j was computed as 1 minus the fraction of images from category i classified as category j and vice-versa.
Using the model to sample stimuli.
The model was trained to discriminate between ‘natural’ and ‘man-made’ scenes using the SUN database , which is currently the largest available scene database with nearly 400 basic level categories for a total of approximately 100,000 images. We selected the following man-made categories: ‘skyscraper,’ ‘highway,’ ‘street,’ ‘tower,’ ‘alley,’ ‘apartment building,’ and ‘amphitheater’ and the following natural categories: ‘beach,’ ‘desert,’ ‘cultivated field,’ ‘coral reef,’ ‘iceberg,’ ‘forest’ (by combining ‘broad leaf tree,’ ‘needle leaf tree,’ and ‘rain forest’), and ‘mountain’ (by combining ‘mountain,’ ‘snowy mountain,’ ‘coast,’ and ‘cliff’).
Decision values were estimated for each individual image using the procedure described in General methods. We binned stimuli according to their associated discriminability values (5 bins for each superordinate category: 0.2, 0.4, 0.6, 0.8, and 1.0) and sampled 96 images for each bin. Each sampled image was inspected manually. If the category label for the image was found to be ambiguous (e.g., a house in a prairie may yield some ambiguity in the corresponding class label because a house is man-made but a prairie is natural), a stimulus was re-sampled from the same bin. For each level, the resulting mean discriminability value for the chosen images was computed to ensure that it remained close to the target discriminability value.
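The binning-and-sampling step might look as follows (a hypothetical helper run on synthetic discriminability values; the manual screening for ambiguous labels is omitted):

```python
import numpy as np

def sample_stimuli(disc, targets=(0.2, 0.4, 0.6, 0.8, 1.0),
                   n_per_bin=96, seed=0):
    """Assign each image to the nearest target discriminability level,
    then draw n_per_bin images per level without replacement."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets)
    nearest = np.argmin(np.abs(disc[:, None] - targets[None, :]), axis=1)
    sampled = {}
    for k, t in enumerate(targets):
        pool = np.flatnonzero(nearest == k)        # images in this bin
        sampled[float(t)] = rng.choice(pool, size=n_per_bin, replace=False)
    return sampled

# Synthetic discriminability values for 5,000 candidate images.
disc = np.random.default_rng(2).uniform(0.1, 1.0, size=5000)
sets = sample_stimuli(disc)
for t, idx in sets.items():
    # sampled mean should stay close to the target level
    print(t, round(disc[idx].mean(), 2))
```

Checking the per-bin means against the target levels mirrors the sanity check described above; re-sampling a replacement image for an ambiguous one simply means drawing another index from the same pool.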
A total of 8 participants completed the experiment (6 males, 2 females; mean age 21 years, range 20–26; all right-handed). All participants reported having normal or corrected-to-normal vision and gave written informed consent.
The experiment followed the experimental design described in General methods. We used a within-subject factorial design: 2 categories (man-made vs natural) × 5 discriminability values (0.2, 0.4, 0.6, 0.8, and 1.0) derived from the model. Participants first viewed 20 natural and 20 man-made images randomly selected from the target and distractor image sets (4 from each discriminability value). Participants subsequently completed 16 blocks total. In each block, 6 images from each condition were presented in a random order leading to 60 images in each block and 960 trials in total.
Analysis of results.
Signal detection theory can be formulated as a special instance of a generalized linear model to estimate experimental effects on participants . It can also be extended to the population level using generalized linear mixed effect models , thus providing a very powerful and efficient estimation technique. In addition, mixed effect models are equivalent to Bayesian hierarchical models with an uninformative prior . Therefore, this analysis did not suffer from the common drawbacks associated with null hypothesis significance testing .
The response y of each participant was modeled as

y ∼ Bernoulli(probit⁻¹(β_bias + β_sens · x_sens + β_slope · x_slope)),

where probit⁻¹ denotes the cumulative distribution function of the standard normal distribution, β_bias the response bias of the participant, and β_sens the participant’s sensitivity for categorizing ‘natural’ vs. ‘man-made’ images at the middle discriminability value (discriminability value = 0.6). β_slope corresponds to the change in sensitivity associated with a change in discriminability value, and it is the parameter of interest for this analysis. For each trial, the participant’s response was set to 1 for ‘man-made’ responses and 0 for ‘natural’ responses. x_sens was set to 0.5 for man-made trials and −0.5 for natural trials. x_slope, which codes for the discriminability value, was set to (−2, −1, 0, 1, 2) for natural images with corresponding discriminability values (1, 0.8, 0.6, 0.4, 0.2). For the man-made category, a reversed coding scheme was used.
The RT at trial i for each subject was modeled in a similar manner,

RT_i = β₀ + β_slope · x_slope + ε_i,

with x_slope coded as (−2, −1, 0, 1, 2) for images with associated discriminability values (1, 0.8, 0.6, 0.4, 0.2). P-values and confidence intervals for all experiments and model analyses were estimated using n = 10,000 Monte Carlo samples. P-values refer to a two-tailed test.
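To make the coding scheme concrete, the snippet below simulates one participant and recovers the parameters with a fixed-effects probit fit. This is a simplification: the paper fit mixed-effects models pooled across participants, and the additive form of the linear predictor is our reading of the description.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit(y, X):
    """Maximum-likelihood probit regression (fixed effects only)."""
    def nll(beta):
        eta = X @ beta
        # Bernoulli log-likelihood with a probit link.
        return -(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta)).sum()
    return minimize(nll, np.zeros(X.shape[1]), method='BFGS').x

# Simulate one participant under the coding described above:
# x_sens = +0.5 (man-made) / -0.5 (natural); x_slope runs over
# (-2, -1, 0, 1, 2) and is reverse-coded between categories so that
# higher discriminability raises sensitivity on both sides.
rng = np.random.default_rng(3)
n = 4000
x_sens = rng.choice([0.5, -0.5], n)
level = rng.choice([-2, -1, 0, 1, 2], n)       # discriminability level
x_slope = level * np.sign(x_sens)
beta_true = np.array([0.1, 1.5, 0.3])          # bias, sensitivity, slope
X = np.column_stack([np.ones(n), x_sens, x_slope])
p = norm.cdf(X @ beta_true)                    # probit-inverse link
y = (rng.random(n) < p).astype(float)
beta_hat = fit_probit(y, X)
print(np.round(beta_hat, 2))
```

With a few thousand simulated trials, the fitted coefficients land close to the generating values, which is the logic behind estimating β_slope from roughly 960 trials per participant.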
Using the model to reverse the superordinate advantage: Existing literature and experiment 2
Model initial validation.
We initially validated the model by postdicting behavioral categorization results reported in  (experiment 1 and 2), using the procedure described in General Methods. The model was trained and tested using the 8-scene database  (which constitutes a superset of the manually sampled subset used in ).
Using the model to sample stimuli.
For this experiment, we used the computational model to create sets of natural and man-made stimuli. First, sets of stimuli were obtained by considering combinations of 3 basic categories from a larger set of natural (‘beach,’ ‘desert,’ ‘cultivated field,’ ‘coral reef,’ and ‘iceberg’) and man-made (‘skyscraper,’ ‘highway,’ ‘street,’ ‘tower,’ ‘alley,’ ‘apartment building,’ and ‘amphitheater’) categories. The computational model was then tested on all possible combinations of natural and man-made sets for categorization at the superordinate level (man-made / target vs. natural / distractor categorization) and at the basic level (forest / target vs. natural / distractor categorization) as done in . The same set of natural stimuli was used as distractor for both categorization tasks.
We subsequently chose two sets of natural and two sets of man-made stimuli to create two experimental conditions: a superordinate advantage condition, for which the model predicted high perceptual discriminability for superordinate-level categorization but low discriminability for basic-level categorization, and a basic advantage condition, for which the model predicted the opposite trend (low perceptual discriminability for superordinate-level categorization and high discriminability for basic-level categorization). For the superordinate advantage condition, this yielded ‘beach,’ ‘cultivated field,’ and ‘coral reef’ as natural categories and ‘alley,’ ‘street,’ and ‘skyscraper’ as man-made categories. For the basic advantage condition, the natural categories were ‘beach,’ ‘iceberg,’ and ‘desert,’ and the man-made categories were ‘alley,’ ‘amphitheater,’ and ‘highway.’
For each experimental condition, 168 images were randomly sampled from both target and distractor categories. All sampled images were inspected visually and images for which the associated class label was deemed ambiguous were replaced by a randomly sampled image. To generate predictions for individual tasks, we re-trained the classifiers using the cross-validation procedure described in General methods.
A total of 24 participants completed the experiment (8 males, 16 females; mean age 24 years, range 18–25; all right-handed). All participants reported having normal or corrected-to-normal vision and gave written informed consent.
The experiment started with a practice block for an unrelated rapid categorization task (animal vs. non-animal) to familiarize participants with the experimental paradigm. The experiment began after participants correctly categorized 75% of the images in a single practice block. In addition, participants were allowed to browse through the stimulus set used in the session before the main experiment to familiarize themselves with the task.
We used a mixed design: 2 conditions (superordinate advantage and basic advantage) × 3 target categories (forest, mountain, and man-made). Half of the participants were assigned to the superordinate advantage condition and half to the basic advantage condition. Three tasks were tested: one superordinate (man-made vs. natural) and two basic categorization tasks (forest vs. natural and mountain vs. natural). Each participant completed 18 blocks (6 blocks for each task, 56 stimuli per block, 336 stimuli per task for a total of 1,008 trials). The order of the blocks was counterbalanced across participants. Each target image appeared only once for the entire experiment while each distractor appeared 3 times (once for each task). In each block, targets and distractors appeared with an equal probability. The target category was indicated at the beginning of each block with a written instruction on the screen together with 16 random exemplar images (8 targets and 8 distractors).
Analysis of results.
The original experiment included three tasks: 1 superordinate and 2 basic-level (forest and mountain) categorization tasks. The superordinate and the forest categorization tasks were the main factors tested in the experiment, and the mountain task was introduced to collect additional data. However, we observed a pervasive influence of a speed-accuracy tradeoff (SAT) for the mountain task: Participants appeared to be using a different SAT criterion (they were either more accurate and slower or less accurate and faster) and behavioral responses for this task could not simply be compared to behavioral responses for the other two tasks. This result did not conflict with our main hypothesis that the superordinate advantage can be reversed and the task was simply excluded from further analysis.
The behavioral response y of each participant was modeled as

y ∼ Bernoulli(probit⁻¹(β_bias + β_sens · x_sens + β_cont · x_cont)),

where probit⁻¹, β_bias, and β_sens were defined as in experiment 1, and β_cont corresponded to the change in sensitivity between the superordinate and the basic-level categorization tasks. For this analysis, β_cont was the key parameter of interest. This formulation is similar to a two-factor ANOVA, where β_bias represents the first main effect, β_sens the second main effect, and β_cont the interaction.
As for experiment 1, for each trial, y was set to 1 if the participant pressed the target button, and 0 otherwise (non-response trials were omitted). x_sens was set to 0.5 for target trials and −0.5 for distractor trials. x_cont was set to 0.25 for superordinate/target trials and for basic/distractor trials, and it was set to −0.25 for superordinate/distractor trials and for basic/target trials. All parameters were set as random effects to allow them to vary for each individual participant. The same model was fitted to each condition separately, and from each, one can derive the sensitivity for the two tasks in that condition (by taking the difference of the probit arguments between target and distractor trials): d′_superordinate = β_sens + β_cont/2 and d′_basic = β_sens − β_cont/2.
The model used for RTs was similar, albeit simpler, since we only used correct trials. For each individual trial and subject, the RT was modeled as

RT = β₀ + β_bias · x_bias + β_cont · x_cont + ε,

where β₀ denotes the mean RT, β_bias the response bias, and β_cont the change in RT between the two tasks. x_bias was set to 0.5 for target trials and −0.5 for distractor trials. x_cont was set to 0.5 for the superordinate-level categorization task and to −0.5 for the basic-level categorization task. Monte Carlo samples (n = 10,000) were used to estimate p-values and confidence intervals for all experiments and analyses. P-values refer to a two-tailed test.
Predicting behavioral categorization based on discriminability: Existing literature and experiment 1
Model initial validation.
As an initial validation of the model, we considered two representative rapid scene categorization studies [4, 11] to compare the model’s predicted perceptual discriminability for different categorization tasks (across taxonomic levels) against human behavioral responses. For both studies, we trained the computational model using the stimulus sets from the original experiments and assessed the model’s discriminability for the same tasks. We then compared the model discriminability scores against human behavioral responses (as reported in the original studies).
In , the authors used a staircase procedure to estimate the presentation duration needed for participants to reach a fixed level of accuracy for fourteen distinct scene categorization tasks. These tasks were based on either scene attributes (concealment, depth, naturalness, navigability, openness, temperature, and transience) or basic-level category membership (desert, field, forest, lake, mountain, ocean, and river). We took participants’ presentation-time thresholds as reported in  and compared them to the model-predicted perceptual discriminability. As expected, we found them to be negatively correlated (Spearman correlation; r(12) = −0.57, p = 0.03; Fig 4A).
(A) Negative correlation between the model-predicted task discriminability and participants’ presentation-time threshold . (B) Positive correlation between the model-predicted task discriminability and participants’ sensitivity  (a small jitter was added to the display in (B) to improve visualization).
In , the authors looked at the rate at which participants classify two masked images, which belong to different categories, as belonging to the same category. This rate was used to define the perceptual similarity between any two categories. The authors tested all possible pairs of 15 categories, which resulted in 105 pairs of categories overall. We correlated the human sensitivity scores reported in  for individual tasks against the discriminability predicted by the model for the same tasks (Spearman correlation; r(103) = 0.64,p < 10−4; Fig 4B).
Overall, discriminability values derived from the computational model appeared sufficient to account for observed participants’ variations in behavioral responses for a relatively large and disparate number of tasks across experiments. Beyond this initial model validation, we will next show that it is possible to use the model to sample stimulus sets in order to systematically manipulate participants’ behavioral responses.
We assessed the accuracy and RTs of human participants using a rapid man-made vs. natural scene categorization paradigm. Images were sampled using discriminability values derived from the model. Sample images for each level of discriminability are shown in Fig 3B. On average, participants answered correctly on 83.0% of the trials (±2.4%). Trials for which participants failed to answer before the deadline were excluded from further analysis (5% of the total number of trials). The mean RT for correct responses was 372 ms (±7 ms), comparable to previously published results.
The model predicted a monotonic increase in accuracy and a corresponding monotonic decrease in reaction time as a function of the stimulus discriminability values on either side of the categorization boundary. We thus fitted one generalized linear mixed-effects model (GLMM) to behavioral responses to estimate the change in the rate of correct responses as a function of discriminability values and a separate GLMM to RTs (Methods). Decision values had a significant effect at the group level for both accuracy (βslope = 0.14, 95% confidence interval (CI) = [0.10, 0.17], p < 10−4) and RT (βslope = 3.92, CI = [2.85, 5.02], p < 10−4). Results are shown in Fig 3C. These group-level results also held for individual participants, as shown in Fig 3D (p < 10−3 for all participants).
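Ignoring the random participant effects, the accuracy analysis reduces to a logistic regression of trial correctness on the stimulus discriminability value. A minimal sketch on simulated trials (the generative slope and intercept below are arbitrary choices, not the paper’s estimates):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials = 2000

# Simulated trials: probability correct rises with distance to the boundary.
disc = rng.uniform(0.0, 4.0, n_trials)            # discriminability per trial
p_correct = 1.0 / (1.0 + np.exp(-(0.5 + 0.9 * disc)))
correct = rng.random(n_trials) < p_correct

# Nearly unregularized logistic fit; the coefficient plays the role of beta_slope.
fit = LogisticRegression(C=1e6).fit(disc.reshape(-1, 1), correct)
slope = fit.coef_[0, 0]
```

A full mixed-effects fit would add per-participant random intercepts and slopes (e.g., via lme4 in R).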
These results validate the model’s key hypothesis that, for a given categorization task, variations in behavioral responses across stimuli are accounted for by each stimulus’s predicted perceptual discriminability for that particular task. Could natural variations in task discriminability thus also account for systematic variations in behavioral responses found across categorization tasks—including differences reported across taxonomic levels as exemplified by the “superordinate advantage”?
Using the model to reverse the superordinate advantage: Existing literature and experiment 2
A re-drawing of Fig 4 with the addition of color labels to indicate the taxonomic levels of the different tasks used in [4, 11] makes it clear that behavioral differences between taxonomic levels (attribute vs. basic level in Fig 5A or basic vs. superordinate level in Fig 5B) can also be explained by differences in perceptual discriminability. That is, the perceptual discriminability, as postdicted by the model for the attribute and superordinate categorization tasks used in [4] and [11] respectively, tends to be higher than for the corresponding basic categorization tasks. In addition, the model correctly postdicted the presentation time threshold for the ‘forest’ (basic) category (which was categorized faster than most attributes) and for the ‘transience’ (attribute) category (which was comparable in speed to several basic-level categories)—two categories that would be considered outliers under a level-of-categorization interpretation.
(A) Attribute-level categories are labeled in blue and basic-level categories in red. (B) Basic-level categories are labeled in blue and superordinate-level categories in red.
These initial results suggest that the superordinate advantage could simply reflect natural variations in discriminability between different target and distractor sets. To explicitly test this hypothesis, we used data from [9] and found that the model postdicted a higher perceptual discriminability for their superordinate-level vs. basic-level categorization tasks, and a lower discriminability for categorization between two basic categories that belong to the same superordinate class (e.g., both natural) than for categorization between two basic categories that belong to different superordinate classes.
In [9], participants were tested on different categorization tasks using a backward masking paradigm. In a first experiment, one group of participants performed a superordinate-level categorization task while another group performed a basic-level categorization task. Consistent with participants’ behavioral responses [9], the model correctly postdicted a higher perceptual discriminability for superordinate vs. basic categorization, as measured by the difference in sensitivity (A′) between the two tasks (Human: M = 0.05±0.02; Model: M = 0.03±0.01) (see [9] for details). In a second experiment, participants had to discriminate between two basic categories that belonged either to the same or to different superordinate categories. Again consistent with participants’ behavioral responses [9], the model correctly postdicted a lower discriminability for categorization between two basic categories belonging to the same superordinate class (e.g., both natural) than between two basic categories belonging to different superordinate classes. This effect was measured as the difference in sensitivity between the “same” task and the “different” task (Human: M = 0.14±0.06; Model: M = 0.05±0.01). Next, we demonstrate the contribution of perceptual discriminability to the superordinate advantage more directly by showing that it is possible to sample stimuli based on model-derived discriminability values so as to reverse the superordinate advantage—rendering a superordinate categorization task harder for human participants than a basic-level categorization task.
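The sensitivity comparisons above rely on the nonparametric index A′, which can be computed directly from hit and false-alarm rates. A minimal implementation of one standard A′ formula (the exact variant used in the original analyses is assumed here, and the example rates are hypothetical):

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity A' (Grier-style formula); assumes
    hit_rate >= fa_rate, hit_rate > 0, and fa_rate < 1.
    Chance performance gives A' = 0.5; perfect performance gives 1.0."""
    h, f = hit_rate, fa_rate
    return 0.5 + (h - f) * (1.0 + h - f) / (4.0 * h * (1.0 - f))

# Hypothetical hit/false-alarm rates for a superordinate and a basic task;
# a positive difference corresponds to a superordinate advantage.
delta = a_prime(0.95, 0.10) - a_prime(0.90, 0.15)
```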
We sampled stimuli from the SUN database using model discriminability values so as to yield either (1) high discriminability for superordinate categorization but low discriminability for basic categorization, to try to replicate the superordinate advantage (“superordinate advantage” condition), or (2) low discriminability for superordinate categorization and high discriminability for basic categorization, to try to reverse the superordinate advantage (“basic advantage” condition; Fig 6A). In each condition, participants performed both a superordinate (man-made vs. natural) and a basic categorization task (forest vs. natural). The only difference between the two conditions was the set of target and distractor stimuli used, both sampled from the SUN image dataset as in Experiment 1 (Fig 6B).
(A) Model discriminability values were used to sample stimulus sets to yield a high discriminability for superordinate categorization and a low discriminability for basic categorization to try to replicate the superordinate advantage (“superordinate advantage” condition) as well as a low discriminability for superordinate categorization and a high discriminability for basic categorization to try to reverse the superordinate advantage (“basic advantage” condition). (B) Representative images used in the experiment. Note that the original stimuli could not be shown because of copyright issues. Instead, shown are visually similar images from Flickr with a Creative Commons license. (C) Experimental results: The model correctly predicted higher accuracy and lower mean RTs for the superordinate vs. basic categorization task in the superordinate advantage condition and the opposite trend in the basic advantage condition.
Both the man-made and the natural superordinate categories consisted of images from three basic categories (Fig 6B). However, different basic categories were chosen across conditions. This was done by running a large number of model simulations: for the superordinate task, we used all possible combinations of three man-made basic categories against all possible combinations of three natural basic categories (Fig 7), and we simulated the basic categorization task by categorizing the forest category against all combinations of three natural categories. For each condition, we then selected the category combinations that maximized the difference between the superordinate and the basic tasks (see Methods for details).
We created many different image datasets to train and test the model on both a basic level categorization task (forest vs. natural stimuli) and a superordinate categorization task (man-made vs. natural stimuli). This was done by considering all possible combinations of 3 basic categories from a larger set of natural categories and all possible combinations of 3 basic categories from a larger set of man-made categories. We computed discriminability values for all the corresponding categorization tasks and chose natural and man-made combination sets of stimuli to create 2 experimental conditions: (1) A superordinate advantage condition for which the model predicted high perceptual discriminability for superordinate-level categorization but low discriminability for basic-level categorization (blue line). The combination set included ‘beach,’ ‘cultivated field,’ and ‘coral reef’ for natural categories and ‘alley,’ ‘street,’ and ‘skyscraper’ for man-made stimuli. (2) A basic advantage condition for which the model predicted the opposite trend (low perceptual discriminability for superordinate-level categorization and high discriminability for basic-level categorization, red line). The combination set included: ‘beach,’ ‘iceberg,’ and ‘desert’ for the natural category while the man-made category included ‘alley,’ ‘amphitheater,’ and ‘highway.’
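The combinatorial search over category sets can be sketched as follows. Here each basic category is reduced to a single scalar feature with invented values (a gross simplification for illustration only; the real search trains gist-based classifiers on the SUN images and uses model discriminability):

```python
from itertools import combinations
import numpy as np

# Toy stand-in: each basic category summarized by one scalar feature value.
# These numbers are invented for illustration, not measured from SUN images.
natural = {"beach": 0.2, "cultivated field": 0.3, "coral reef": 0.1,
           "iceberg": 0.9, "desert": 0.8, "forest": 0.0}
man_made = {"alley": 1.2, "street": 1.4, "skyscraper": 1.5,
            "amphitheater": 0.9, "highway": 1.0}

def sup_disc(nat_set, man_set):
    """Superordinate task: separation between pooled man-made and natural sets."""
    return abs(np.mean([man_made[c] for c in man_set]) -
               np.mean([natural[c] for c in nat_set]))

def basic_disc(nat_set):
    """Basic task: separation between 'forest' and the sampled natural set."""
    return abs(np.mean([natural[c] for c in nat_set]) - natural["forest"])

pool = [c for c in natural if c != "forest"]
# Superordinate-advantage condition: maximize superordinate minus basic
# discriminability over all 3-category combinations on each side.
nat_best, man_best = max(
    ((n, m) for n in combinations(pool, 3) for m in combinations(man_made, 3)),
    key=lambda nm: sup_disc(nm[0], nm[1]) - basic_disc(nm[0]))
```

Minimizing the same objective instead would yield a basic-advantage condition.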
As in Experiment 1, we used a GLMM to analyze participants’ sensitivity and mean RT for correct responses (Fig 6C). In the superordinate advantage condition, the average sensitivity was 2.31 (±0.19) for the superordinate task and 1.44 (±0.12) for the basic task. The within-subject difference in sensitivity was large and significant (βcont = 0.87, CI = [0.64, 1.11], p < 10−4). Mean RTs were 356 ms (±6 ms) for the superordinate task and 364 ms (±6 ms) for the basic task. The within-subject difference in mean RT was significant as well (βcont = 8.36, CI = [0.11, 16.63], p = 0.050).
The opposite pattern was observed in the basic advantage condition. The average sensitivity was 1.85 (±0.13) in the superordinate task and 2.28 (±0.12) in the basic task. The within-subject difference in sensitivity was smaller than in the superordinate advantage condition but still highly significant (βcont = 0.43, CI = [0.24, 0.63], p < 10−4). The mean RT was 376 ms (±5 ms) for the superordinate task and 357 ms (±6 ms) for the basic task. The within-subject difference in mean RT was again large and significant (βcont = 19.14, CI = [12.01, 26.5], p < 10−4).
We have described an integrated paradigm that links perceptual processes with categorization processes. We used a large natural scene database to train and test machine learning classifiers in order to derive task-dependent perceptual discriminability values for individual images based on their distance to different categorization boundaries. We showed that the resulting model is consistent with a host of published results [4, 9, 11]. We also designed two experiments to demonstrate that it is possible to use the model to sample stimuli in order to manipulate participants’ behavioral responses (both accuracy and reaction times).
In experiment 1, we showed that sampling stimuli with increasing discriminability values (i.e., with increasing distance to the category boundary) yields behavioral responses that are increasingly fast and accurate. This suggests that the perceptual discriminability of individual stimuli for a particular task is one of the main factors driving behavioral responses.
A few recent studies have hinted at the contribution of perceptual discriminability to categorization using isolated objects, objects in clutter, and scenes [38, 39]. It has been shown that the perceptual dissimilarity between categories directly affects the speed of superordinate-level vs. basic-level categorization in pigeons [40]. Early work on scene and face processing already hinted at this contribution by showing, for instance, that the stimulus content across spatial scales affects scene categorization performance [41]. Subsequent work has also shown that manipulating the phase and amplitude spectra of an image affects behavioral responses during superordinate scene categorization [42, 43]. More recently, it has been shown that a low-level perceptual similarity measure based on stimulus contrast predicts the ease of categorization judgments for both artificial stimuli and natural scenes [44, 45]. Our study further demonstrates that it is possible to use modern machine learning tools and computer vision databases to predict human behavioral responses for many categorization tasks across taxonomic levels.
In experiment 2, we further showed that it is possible to use the model to sample stimuli so as to reverse the “superordinate advantage,” rendering participants’ superordinate categorization arbitrarily slower and less accurate than basic categorization. Previous work has shown that it is possible to manipulate level-of-categorization effects by controlling the similarity between face stimuli [46] and the typicality of objects [47]. Here, we used the model to sample stimuli based on computed discriminability values, making a superordinate categorization task harder than a basic-level categorization task simply by sampling the right stimuli.
Our results suggest that the superordinate advantage is at least in part driven by the perceptual discriminability of target and distractor stimulus sets. Simply put, superordinate-level categorization tasks tend to be easier than basic-level categorization tasks, leading to faster and more accurate behavioral responses. This is consistent with the somewhat higher accuracy of both connectionist models [48] and modern computer vision systems [49] for categorization at the superordinate vs. basic level, and with the fact that children learn to categorize natural object categories at the superordinate level first [48, 50].
Our results are consistent with the differentiation theory [51] and the Parallel Distributed Processing (PDP) theory [52] in that level-of-categorization effects as reported in multiple studies [5–7, 9, 11–13] arise not because of privileged processing at particular taxonomic levels but because of differences in perceptual discriminability across tasks. In addition, this perceptual explanation rules out an interpretation of level-of-categorization effects based on the “global-to-specific” theory of categorization, whereby categorization at more global (coarser) levels needs to be completed before categorization at more specific (finer) levels can begin. Under such a theory, one would expect a basic advantage over subordinate categorization (e.g., detection preceding identification [53]) as well as a superordinate and attribute advantage over basic and subordinate categorizations [4, 5, 11]. Our results demonstrate that observed differences in timing across categorization tasks do not necessarily reflect the fact that some categorization tasks take precedence over others (see also [19, 20]).
While our results point to perceptual discriminability as playing a fundamental role in level-of-categorization effects, additional memory-related factors such as typicality are likely to affect rapid categorization. More generally, a complete model should also take into account known semantic contributions to visual categorization. One proposal is that mental representations of categories across taxonomic levels occupy nodes in a semantic network [54]. The rapid perceptual categorization mechanisms studied here may determine which nodes get activated first, before activation spreads to other nodes, enabling the slower retrieval of information at other levels of categorization.
The present study also has implications for models of category learning and of the development of visual expertise. It is known that experts can override the default level of categorization found in novices with their own level of expertise (e.g., the subordinate level becomes faster for bird experts over-trained at the subordinate level, and the basic level becomes faster for Chinese-character experts over-trained at discriminating characters at the basic level, irrespective of font and writing style; see [22] for a review). One simple explanation consistent with our results is that practice on a task leads to long-term perceptual learning that increases the discriminability between targets and distractors, making participants faster and more accurate.
Despite its ability to account for behavioral responses, the proposed model remains relatively simple. We used a rudimentary visual representation based on the “gist” algorithm [25] and off-the-shelf machine learning classifiers (see [39] for a similar model used to explain the scene categorization advantage when scenes contain consistent vs. inconsistent objects). However, the fact that a relatively simple (V1-like) model of feature computation seems sufficient to account for behavioral responses does not necessarily imply that rapid scene categorization is based on low-level visual processing. We have tested alternative visual representations based on common features used in computer vision and found all these models to be relatively correlated. This could reflect a limitation inherent to the still limited size of natural image databases [55] as well as possible inherent biases, such as photographers selecting particular vantage points [56]. Note that such image bias is quite different from the “natural bias” reported here in terms of differences in perceptual discriminability across categorization tasks, which is likely to reflect physical properties of our visual environment rather than biases in the image dataset per se.
In addition, while the superordinate advantage has been described for classes of stimuli beyond scenes, such as animals or faces [5, 6], we have here only considered the relevance of the model for scene categorization. Using a similar framework for other types of classification would likely require more elaborate visual representations. In theory, it should be relatively straightforward to test additional perceptual representations—possibly reflecting higher-level visual processes (see [23] for a review).
A possible neural correlate for decision boundaries includes neurons with category-like tuning found throughout the cortex, such as within the ventral stream, the prefrontal cortex (PFC), and the parietal cortex [57], and/or attentional processes that differentially modulate individual feature dimensions according to their task diagnosticity [58]. Perceptual spaces in practice tend to be more flexible than assumed in the model, as novel features can be learned (i.e., the meaning of some dimensions may change and/or dimensions may be added as a result of learning and plasticity) and perceptual spaces can be reshaped by task history and other cognitive factors [59]. Alternative categorization algorithms to the proposed decision boundary have been described based on either the distance to category prototypes [60] or the distance to individual exemplars [61]. The proposed discriminability measure based on the distance between stimuli and decision boundaries could easily be extended to distances to exemplars or prototypes [62]. While a better model of the categorization process would be expected to improve the fit to behavioral data, it is unlikely to change any of our conclusions, since categorization models tend to produce similar behavioral predictions.
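The extension mentioned here is straightforward: replace the signed distance to a decision boundary with a distance contrast to category prototypes or to stored exemplars. A sketch on synthetic feature vectors (all values below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
targets = rng.normal(+1.0, 1.0, (50, 8))      # stored target-category exemplars
distractors = rng.normal(-1.0, 1.0, (50, 8))  # stored distractor exemplars
x = rng.normal(+1.0, 1.0, 8)                  # feature vector of a new image

# Prototype-based discriminability: distance contrast to the category means.
proto_t, proto_d = targets.mean(axis=0), distractors.mean(axis=0)
disc_proto = np.linalg.norm(x - proto_d) - np.linalg.norm(x - proto_t)

# Exemplar-based discriminability: contrast of mean distances to all exemplars.
disc_exem = (np.linalg.norm(x - distractors, axis=1).mean() -
             np.linalg.norm(x - targets, axis=1).mean())
```

Either quantity can be substituted for the boundary distance in the same correlation and GLMM analyses.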
Overall, our study provides a computational-level explanation for systematic variations found in behavioral responses for rapid categorization tasks across taxonomic levels, challenging several existing theories of visual processing and suggesting, instead, that observed differences in behavioral responses may simply reflect natural variations in perceptual discriminability.
S1 Text. Supplementary materials and methods including details on the comparison between different types of visual descriptors and classifiers.
S1 File. Supplementary file containing all image stimuli used and corresponding behavioral responses from human participants.
S1 Fig. Sketch of the gist visual representation used.
The response of a battery of filters at multiple orientations and spatial frequencies is first computed for an individual image. These filter responses are then spatially pooled to yield a 512-dimensional (gist) feature vector.
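A rough, self-contained approximation of such a descriptor (log-Gabor-style filtering in the Fourier domain, with energy pooled on a 4×4 grid; the filter parameters are arbitrary choices, not those of the original gist implementation):

```python
import numpy as np

def gist_like(img, n_scales=4, n_orient=8, grid=4):
    """Oriented band-pass energy pooled on a grid:
    n_scales * n_orient * grid**2 = 4 * 8 * 16 = 512 dimensions."""
    h, w = img.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    radius = np.hypot(fx, fy) + 1e-9          # avoid log(0) at the DC component
    angle = np.arctan2(fy, fx)
    F = np.fft.fft2(img - img.mean())
    feats = []
    for s in range(n_scales):
        f0 = 0.25 / (2 ** s)                  # center frequency of this scale
        radial = np.exp(-np.log(radius / f0) ** 2 / 0.5)
        for o in range(n_orient):
            theta = np.pi * o / n_orient
            d = np.angle(np.exp(1j * (angle - theta)))  # wrapped angular distance
            orient = np.exp(-d ** 2 / 0.5)
            resp = np.abs(np.fft.ifft2(F * radial * orient))
            # spatial pooling on a grid x grid lattice
            for row in np.array_split(resp, grid, axis=0):
                for block in np.array_split(row, grid, axis=1):
                    feats.append(block.mean())
    return np.asarray(feats)

features = gist_like(np.random.default_rng(4).standard_normal((64, 64)))
```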
S2 Fig. Correlation between visual representations.
Simple visual representations like the gist tend to be relatively correlated with more complex ones including state-of-the-art visual descriptors from computer vision (see text for detail). This is true when correlating both the predicted class labels for individual train-test splits (A) and discriminability values computed across all train-test splits (B).
The authors would like to thank Dr. Michelle Greene (Stanford) and Prof. Thomas Palmeri (Vanderbilt University) for their useful feedback on the manuscript. We would also like to thank Sahar Shahamatdar for her contribution during the early stages of this work. Earlier versions of this work appeared as Abstract #36.401 presented at the 13th annual meeting of the Vision Science Society, 2013 (Understanding the nature of the visual representations underlying rapid categorization tasks by Imri Sofer, Kwang Ryeol Lee, Pachaya Sailamul, Sebastien Crouzet & Thomas Serre) and Abstract #55.12 presented at the 14th annual meeting of the Vision Science Society, 2014 (A simple rapid categorization model accounts for variations in behavioral responses across rapid scene categorization tasks by Thomas Serre, Imri Sofer & Sebastien Crouzet).
Conceived and designed the experiments: IS SMC TS. Performed the experiments: IS. Analyzed the data: IS. Wrote the paper: IS SMC TS.
- 1. Biederman I (1972) Perceiving real-world scenes. Science 177: 77–80.
- 2. Fleuret F, Li T, Dubout C, Wampler EK, Yantis S, et al. (2011) Comparing machines and humans on a visual categorization test. Proc Natl Acad Sci 108: 17621–5. pmid:22006295
- 3. Tversky B, Hemenway K (1983) Categories of environmental scenes. Cogn Psychol 15(1): 121–149.
- 4. Greene MR, Oliva A (2009) The briefest of glances: the time course of natural scene understanding. Psychol Sci 20: 464–72. pmid:19399976
- 5. Grill-Spector K, Kanwisher N (2005) Visual recognition: as soon as you know it is there, you know what it is. Psychol Sci 16: 152–160. pmid:15686582
- 6. Barragan-Jason G, Lachat F, Barbeau EJ (2012) How Fast is Famous Face Recognition? Front Psychol 3: 454. pmid:23162503
- 7. Joubert OR, Rousselet GA, Fize D, Fabre-Thorpe M (2007) Processing scene context: fast categorization and object interference. Vision Res 47: 3286–97. pmid:17967472
- 8. Bowers JS, Jones KW (2008) Detecting objects is easier than categorizing them. Q J Exp Psychol 61: 552–7.
- 9. Loschky LC, Larson AM (2010) The natural/man-made distinction is made before basic-level distinctions in scene gist processing. Vis cogn 18: 513–536.
- 10. Mack ML, Palmeri TJ (2010) Decoupling object detection and categorization. J Exp Psychol Hum Percept Perform 36: 1067–79. pmid:20731505
- 11. Kadar I, Ben-Shahar O (2012) A perceptual paradigm and psychophysical evidence for hierarchy in scene gist processing. J Vis 12(13):16, 1–17. pmid:23255732
- 12. Malle BF, Holbrook J (2012) Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality. J Pers Soc Psychol 102: 661–84. pmid:22309029
- 13. Prass M, Grimsen C, König M, Fahle M (2013) Ultra rapid object categorization: effects of level, animacy and context. PLoS One 8: e68051. pmid:23840810
- 14. Gosselin F, Schyns P (2001) Why do we slip to the basic level? Computational constraints and their implementation. Psychol Rev 108(4): 735–58. pmid:11699115
- 15. Malcolm GL, Nuthmann A, Schyns PG (2014) Beyond gist: strategic and incremental information accumulation for scene categorization. Psychol Sci 25: 1087–97. pmid:24604146
- 16. Macé MJM, Joubert OR, Nespoulous JL, Fabre-Thorpe M (2009) The time-course of visual categorizations: you spot the animal faster than the bird. PLoS One 4: e5927. pmid:19536292
- 17. Tanaka JW (2001) The entry point of face recognition: evidence for face expertise. J Exp Psychol Gen 130: 534–43. pmid:11561926
- 18. D’Lauro C, Tanaka JW, Curran T (2008) The preferred level of face categorization depends on discriminability. Psychon Bull Rev 15: 623–629. pmid:18567265
- 19. Mack ML, Palmeri TJ (2011) The Timing of Visual Object Categorization. Front Psychol 2: 1–8.
- 20. Vanrullen R (2011) Four common conceptual fallacies in mapping the time course of recognition. Front Psychol 2: 365. pmid:22162973
- 21. Schyns P (1998) The development of features in object concepts. Behav Brain Sci 21: 1–54.
- 22. Richler JJ, Palmeri TJ (2014) Visual category learning. Wiley Interdiscip Rev Cogn Sci 5: 75–94.
- 23. Crouzet SM, Serre T (2011) What are the Visual Features Underlying Rapid Object Recognition? Front Psychol 2: 326.
- 24. Ashby FG (1992) Multidimensional models of perception and cognition. Hillsdale, New Jersey: Lawrence Erlbaum Associates, Inc.
- 25. Oliva A, Torralba A (2001) Modeling the shape of the scene: A holistic representation of the spatial envelope. Int J Comput Vis 42: 145–175.
- 26. Xiao J, Hays J, Ehinger K, Oliva A, Torralba A (2010) SUN database: Large-scale scene recognition from abbey to zoo. In: Comput. Vis. Pattern Recognit. pp. 3485–3492.
- 27. Maddox WT, Prinzmetal W, Ivry RB, Ashby FG (1994) A probabilistic multidimensional model of location information. Psychol Res 56: 66–77. pmid:8153245
- 28. Nosofsky R, Palmeri T (1997) Comparing exemplar-retrieval and decision-bound models of speed perceptual classification. Percept Psychophys 59 (7): 1027–1048 pmid:9360476
- 29. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, et al. (2011) Scikit-learn: Machine Learning in Python. J Mach Learn Res 12: 2825–2830.
- 30. Fan RE, Chang KW, Hsieh CJ, Wang XR, Lin CJ (2008) LIBLINEAR: A Library for Large Linear Classification. J Mach Learn Res 9: 1871–1874.
- 31. Brainard DH (1997) The Psychophysics Toolbox. Spat Vis. 10(4): 433–6. pmid:9176952
- 32. Li X, Liang Z, Kleiner M, Lu ZL (2010) RTbox: a device for highly accurate response time measurements. Behav Res Methods 42: 212–25. pmid:20160301
- 33. Kruschke JK (2010) Bayesian data analysis. Wiley Interdiscip Rev Cogn Sci 1: 658–676.
- 34. DeCarlo LT (1998) Signal detection theory and generalized linear models. Psychol Methods 3: 186–205.
- 35. Moscatelli A, Mezzetti M, Lacquaniti F (2012) Modeling psychophysical data at the population-level: the generalized linear mixed model. J Vis 12(11): 26, 1–17. pmid:23104819
- 36. Gelman A, Hill J (2007) Data analysis using regression and multilevel/hierarchical models. Cambridge: Cambridge University Press.
- 37. Mohan K, Arun SP (2012) Similarity relations in visual search predict rapid visual categorization. J Vis 12: 1–24.
- 38. Renninger LW, Malik J (2004) When is scene identification just texture recognition? Vision Res 44: 2301–11. pmid:15208015
- 39. Mack ML, Palmeri TJ (2010) Modeling categorization of scenes containing consistent versus inconsistent objects. J Vis 10(11): 1–11. pmid:20377288
- 40. Lazareva OF, Soto FA, Wasserman EA (2010) Effect of between-category similarity on basic level superiority in pigeons. Behav Processes 85: 236–245. pmid:20600696
- 41. Schyns P, Oliva A (1997) Flexible, diagnosticity-driven, rather than fixed, perceptually determined scale selection in scene and face recognition. Perception 26: 1027–1038. pmid:9509161
- 42. Gaspar CM, Rousselet GA (2009) How do amplitude spectra influence rapid animal detection? Vision Res 49: 3001–12. pmid:19818804
- 43. Joubert OR, Rousselet GA, Fabre-Thorpe M, Fize D (2009) Rapid visual categorization of natural scene contexts with equalized amplitude spectrum and increasing phase noise. J Vis 9(2): 1–16. pmid:19271872
- 44. Groen IIA, Ghebreab S, Lamme VAF, Scholte HS (2012) Spatially pooled contrast responses predict neural and perceptual similarity of naturalistic image categories. PLoS Comput Biol 8: e1002726. pmid:23093921
- 45. Groen IIA, Ghebreab S, Prins H, Lamme VAF, Scholte HS (2013) From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category. J Neurosci 33: 18814–24. pmid:24285888
- 46. D’Lauro C, Tanaka JW, Curran T (2008) The preferred level of face categorization depends on discriminability. Psychon Bull Rev 15: 623–629. pmid:18567265
- 47. Murphy GL, Brownell HH (1985) Category differentiation in object recognition: typicality constraints on the basic category advantage. J Exp Psychol Learn Mem Cogn 11: 70–84. pmid:3156953
- 48. Quinn PC, Johnson MH (2000) Global-Before-Basic Object Categorization in Connectionist Networks and 2-Month-Old Infants. Infancy 1: 31–46.
- 49. Deng J, Berg AC, Li K, Fei-Fei L (2010) What does classifying more than 10,000 image categories tell us? In: Proc. 11th Eur. Conf. Comput. Vis. Springer-Verlag, pp. 71–84.
- 50. Mandler JM, McDonough L (1993) Concept formation in infancy. Cogn Dev 8: 291–318.
- 51. Murphy G (2002) The big book of concepts. Cambridge, MA: MIT Press.
- 52. Rogers TT, Patterson K (2007) Object categorization: reversals and explanations of the basic-level advantage. J Exp Psychol Gen 136: 451–69. pmid:17696693
- 53. Liu J, Harris A, Kanwisher N (2002) Stages of processing in face perception: an MEG study. Nat Neurosci 5: 910–6. pmid:12195430
- 54. Jolicoeur P, Gluck MA, Kosslyn SM (1984) Pictures and names: Making the connection. Cogn Psychol 16(2): 243–275. pmid:6734136
- 55. Pinto N, Doukhan D, DiCarlo JJ, Cox DD (2009) A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Comput Biol 5: e1000579. pmid:19956750
- 56. Wichmann FA, Drewes J, Rosas P, Gegenfurtner KR (2010) Animal detection in natural scenes: Critical features revisited. J Vis 10: 1–27. pmid:20465326
- 57. Seger CA, Miller EK (2010) Category learning in the brain. Annu Rev Neurosci 33: 203–19. pmid:20572771
- 58. Çukur T, Nishimoto S, Huth AG, Gallant JL (2013) Attention during natural vision warps semantic representation across the human brain. Nat Neurosci 16: 763–70. pmid:23603707
- 59. Schyns PG (1997) Categories and percepts: a bi-directional framework for categorization. Trends Cogn Sci 1: 183–9. pmid:21223900
- 60. Posner MI, Keele SW (1968) On the genesis of abstract ideas. J Exp Psych 77(3): 353–363.
- 61. Nosofsky RM (1986) Attention, similarity, and the identification categorization relationship. J Exp Psychol Gen 115: 39–57. pmid:2937873
- 62. Jäkel F, Schölkopf B, Wichmann FA (2008) Generalization and similarity in exemplar models of categorization: Insights from machine learning. Psychon Bull Rev 15: 256–271. pmid:18488638