Sequence learning recodes cortical representations instead of strengthening initial ones
Fig 7. Interference in sequence learning.
(A) Visual representation of two sequences as position-item associations (top) and the resulting frequency of associations (bottom), as defined by the associative sequence learning model. (B) Associative learning of the two sequences in panel A would boost the representations of four individual sequences, despite the statistical regularities being extracted from only two. See S4 Text for a worked example. (C) Histogram of the expected number of shared codes (item-position associations, x-axis) between a single 4-item sequence and all other possible 4-item sequences (n = 256, allowing repeats), measured as the proportion of sequences sharing the same number of codes (y-axis). (D) The same histogram for a two-item (bi-gram) chunk representation. (E) Interference between sequence representations in the item-position model. The x-axis shows how many sequences have been learned, and each line shows the proportion of other sequences affected by learning as a function of the number of shared codes; the lines correspond to the columns in panel C. The red line shows the proportion of sequences unaffected by learning. (F) Interference between sequence representations in the chunk model.
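The panel C histogram can be reproduced with a short enumeration. This is a minimal sketch, not the authors' analysis code; it assumes an alphabet of 4 items (so 4^4 = 256 possible 4-item sequences with repeats) and that an item-position code is shared whenever two sequences contain the same item at the same position.

```python
from itertools import product

# Assumptions (not from the source): 4-item alphabet, 4-position sequences,
# repeats allowed, giving 4**4 = 256 sequences in total.
ALPHABET = "ABCD"
REFERENCE = ("A", "B", "C", "D")  # by symmetry, any reference gives the same histogram

def shared_codes(seq_a, seq_b):
    """Count item-position associations common to both sequences."""
    return sum(a == b for a, b in zip(seq_a, seq_b))

counts = {k: 0 for k in range(5)}
for seq in product(ALPHABET, repeat=4):
    counts[shared_codes(REFERENCE, seq)] += 1

total = sum(counts.values())  # 256
proportions = {k: c / total for k, c in counts.items()}
# Analytically, counts[k] = C(4, k) * 3**(4 - k):
# 81, 108, 54, 12, 1 sequences sharing 0..4 codes respectively.
```

The panel D analysis would be analogous, with bi-gram chunk codes in place of item-position codes; its exact chunking scheme is detailed in the main text rather than here.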