Fig 1.
Models of the mushroom bodies based on known neuroanatomy.
A Neuroanatomy: MB Mushroom Bodies; AL Antennal Lobe glomeruli (circles); ME & LO Medulla and Lobula optic neuropils. The relevant neural pathways are shown and labelled for comparison with the model. B Reduced model; neuron classes indicated at the right-hand side of the sub-figure. C Full model, showing the model connectivity and indicating the approximate relative numbers of each neuron type. Colour coding and labels are preserved throughout all the diagrams for clarity. Excitatory and inhibitory connections are indicated as in the figure legend. Key to neuron types: KC, Kenyon Cells; PCT, Protocerebral Tract neurons; IN, Input Neurons (olfactory or visual); EN, Extrinsic MB Neurons from the GO and NOGO subpopulations, where the subpopulation with the highest summed activity defines the behavioural choice in the experimental protocol (Fig 4).
Table 1.
Sufficiency table showing which learning mechanisms in the model are required for each result.
Fig 2.
The full and reduced versions of our model reproduce the transfer of sameness and difference learning.
A & B The average percentage of correct choices made by the model and real bees within blocks of ten trials as the task is learned (lines), along with the transfer of learning onto novel stimulus sets (bars). Both versions of the model reproduce the pattern of learning acquisition for DMTS (Full: N = 338, Reduced: N = 360) and DNMTS (Full & Reduced: N = 360) found when testing real bees (test for learning: P < 0.0001), along with the transfer of learning (P < 0.0001). For DMTS, Giurfa A & B are the data from Experiments 1 & 2 respectively from Giurfa et al. [8], and for DNMTS, Giurfa A & B are the data from Experiments 5 & 6 respectively from the same source. For an explanation of the initial offsets from chance for the model, please see the text for panel D. C The blockade of plasticity in the MB and PCT pathways shows that the PCT pathway is necessary and sufficient for sameness and difference learning in the full model. All non-overlapping SEM error bars are significantly different. D PCT pathway learning in the absence of associative learning leads to a preference for non-matching stimuli following pre-training, demonstrating that learning in the associative pathway changes the form of the sameness and difference acquisition curves. The equivalent offsets and error ranges for the first two blocks of Giurfa Experiments 1, 2, 5 & 6, along with the averages for DMTS and DNMTS for these blocks, are shown alongside the model data for comparison as overlapping grey boxes—overlapping boxes create darker regions, so the area of greatest darkness is the point where most of the error ranges overlap. E The average activity of the model KC neurons when presented with repeated stimuli.
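The per-block performance measure used in panels A & B can be sketched as follows. This is an illustrative reconstruction, not the published analysis code; the function name and the boolean encoding of choices are assumptions.

```python
def block_percent_correct(choices, block_size=10):
    """Average percentage of correct choices within consecutive blocks.

    choices: sequence of booleans, True meaning the model (or bee) made
    the correct choice on that trial. Returns one percentage per
    complete block of `block_size` trials, as plotted per block of ten.
    """
    percentages = []
    for start in range(0, len(choices) - block_size + 1, block_size):
        block = choices[start:start + block_size]
        percentages.append(100.0 * sum(block) / block_size)
    return percentages
```

Averaging these per-block percentages across many simulated bees (N = 338 or 360 above) gives the acquisition curves shown as lines.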
Fig 3.
The full model is capable of performing a range of conditioning tasks.
With modification of only the experimental protocol, our full model can successfully perform a range of conditioning tasks performed by both restrained bees (using the Proboscis Extension Reflex (PER) paradigm) and free-flying bees. Performance closely matches experimental data from real bees (e.g. A: [46], B: [47], C & D: [48]).
Table 2.
Model parameters; all parameters are in arbitrary units.
Fig 4.
Experimental protocol for the model.
The model bee is moved between a set of states which describe different locations in the Y-maze apparatus (A): at the entrance (B), in the central chamber facing the left arm (C), in the central chamber facing the right arm (D), in the left arm or in the right arm (E,F). When at the entrance or in the main chamber, the bee is presented with a sensory input corresponding to one of the test stimuli; GO selection leads the bee to enter the maze when at the entrance, and to enter an arm and experience a potential reward when facing that arm; NOGO leads the bee to delay entering the maze, or to choose another maze arm uniformly at random, respectively. We can then set the test stimuli presented to match the requirements of a given trial (e.g. stimulus A at the entrance, stimulus A when facing the left arm, and stimulus B when facing the right arm, for DMTS when rewarding the left arm, or DNMTS when rewarding the right arm).
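The state transitions described above can be sketched as a minimal finite state machine. This is a hedged illustration of the protocol logic only: the state names, the `decide` callback standing in for the GO/NOGO readout of the model, and the stimulus labels are all assumptions for clarity, not the authors' implementation.

```python
import random

def run_trial(decide, stimuli, rewarded_arm, max_steps=20):
    """Step a model bee through one trial of the Y-maze protocol.

    decide(stimulus) -> "GO" or "NOGO" (stands in for comparing summed
    activity of the GO and NOGO EN subpopulations).
    stimuli maps each location ("entrance", "left", "right") to the
    stimulus presented there. Returns (arm_chosen, was_rewarded).
    """
    state = "entrance"
    for _ in range(max_steps):
        if state == "entrance":
            # GO enters the maze; NOGO delays at the entrance.
            if decide(stimuli["entrance"]) == "GO":
                state = random.choice(["facing_left", "facing_right"])
        elif state == "facing_left":
            # GO commits to this arm; NOGO turns to the other arm
            # (chosen uniformly at random among the remaining arms,
            # which with two arms is simply the other one).
            if decide(stimuli["left"]) == "GO":
                return "left", rewarded_arm == "left"
            state = "facing_right"
        elif state == "facing_right":
            if decide(stimuli["right"]) == "GO":
                return "right", rewarded_arm == "right"
            state = "facing_left"
    return None, False  # bee never committed to an arm

# DMTS example: stimulus A at the entrance and in the left arm,
# stimulus B in the right arm, with the matching (left) arm rewarded.
arm, rewarded = run_trial(
    lambda s: "GO" if s == "A" else "NOGO",
    {"entrance": "A", "left": "A", "right": "B"},
    rewarded_arm="left",
)
```

A bee that has learned "sameness" (GO on the stimulus matching the sample) always ends in the rewarded left arm here, regardless of which arm it happens to face first.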