
Neural coding in the visual system of Drosophila melanogaster: How do small neural populations support visually guided behaviours?

Fig 4

How much shape information is preserved in the R2 population code?

ANNs were trained to estimate various properties of randomly generated 'blob' stimuli (n = 93,702) from one of four visual inputs: raw views (3 × 14 = 42 pixels; blue), R2 neurons (n = 28; red), R4d neurons (n = 14; green), or R2 and R4d neurons combined (n = 42; magenta). Panels A–D show results for networks trained on elevation and azimuth; panels E–H show results for orientation and size. For each type of visual input, a network was trained for 100 training cycles, and performance was averaged over blobs that were not part of the training set.

A and B: Elevation and azimuth of the test visual stimuli versus mean network output (n = 140,554). The dashed line indicates ideal performance (i.e. y = x); the thickness of the lines at each point shows the standard error. Possible values of elevation and azimuth were constrained by the size of the fruit-fly visual field (approx. 120° × 270°); within this range there were 22 possible values.

C and D: Average network performance (r.m.s. error) for networks trained to recover elevation (C) or azimuth (D), for each type of visual input (colour code as above; n(train) = 4,259, n(test) = 6,389). Standard errors are shown but are very small.

E and F: Network performance in recovering stimulus orientation and size. Orientation was constrained between 0° and 90°, to avoid the problem of aliasing, and varied over 22 levels.

G and H: Average network performance (r.m.s. error) for networks trained to recover orientation (G) or size (H), for each type of visual input (colour code as above). Standard errors are shown but are very small.
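The analysis above trains a decoder on population responses to held-out stimuli and scores it by r.m.s. error. A minimal sketch of that train/test pipeline is shown below, with strong simplifying assumptions: the paper's ANNs are replaced by a plain linear least-squares decoder, and the R2 responses are simulated (each of the 28 neurons responds linearly to a single stimulus property, e.g. elevation, plus noise). The tuning weights, noise scale, and discrete stimulus levels are all hypothetical placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample counts follow the caption; neuron count matches the R2 population.
n_train, n_test, n_neurons = 4259, 6389, 28

# Hypothetical stimulus property (e.g. elevation) drawn from 22 discrete levels
levels = np.linspace(-60.0, 60.0, 22)
y_train = rng.choice(levels, size=n_train)
y_test = rng.choice(levels, size=n_test)

# Simulated tuning: each neuron responds linearly to the stimulus, plus noise
weights = rng.normal(size=n_neurons)
X_train = np.outer(y_train, weights) + rng.normal(scale=5.0, size=(n_train, n_neurons))
X_test = np.outer(y_test, weights) + rng.normal(scale=5.0, size=(n_test, n_neurons))

# Fit the decoder on training blobs only (bias term appended), then
# evaluate on blobs that were never seen during training.
X1 = np.column_stack([X_train, np.ones(n_train)])
coef, *_ = np.linalg.lstsq(X1, y_train, rcond=None)
y_hat = np.column_stack([X_test, np.ones(n_test)]) @ coef

# Performance metric from the caption: root-mean-square error on the test set
rms_error = np.sqrt(np.mean((y_hat - y_test) ** 2))
print(f"held-out r.m.s. error: {rms_error:.2f} deg")
```

With informative tuning, the decoder's held-out r.m.s. error falls well below that of a trivial predictor that always outputs the mean stimulus value, which is the comparison the figure's bar panels (C, D, G, H) make across input types.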


doi: https://doi.org/10.1371/journal.pcbi.1005735.g004