
The importance of forgetting: Limiting memory improves recovery of topological characteristics from neural data

  • Samir Chowdhury ,

    Contributed equally to this work with: Samir Chowdhury, Bowen Dai, Facundo Mémoli

    Roles Data curation, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Mathematics, The Ohio State University, Columbus, Ohio, United States of America

  • Bowen Dai ,

    Contributed equally to this work with: Samir Chowdhury, Bowen Dai, Facundo Mémoli

    Roles Data curation, Software, Validation, Visualization

    Affiliation Department of Computer Science, Dartmouth College, Hanover, New Hampshire, United States of America

  • Facundo Mémoli

    Contributed equally to this work with: Samir Chowdhury, Bowen Dai, Facundo Mémoli

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    memoli@math.osu.edu

    Affiliations Department of Mathematics, The Ohio State University, Columbus, Ohio, United States of America, Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, United States of America

Abstract

We develop a line of work initiated by Curto and Itskov towards understanding the amount of information contained in the spike trains of hippocampal place cells via topological considerations. Previously, it was established that simply knowing which groups of place cells fire together in an animal’s hippocampus is sufficient to extract the global topology of the animal’s physical environment. We model a system where collections of place cells group and ungroup according to short-term plasticity rules. In particular, we obtain the surprising result that in experiments with spurious firing, the accuracy of the extracted topological information decreases with the persistence (beyond a certain regime) of the cell groups. This suggests that synaptic transience, or forgetting, is a mechanism by which the brain counteracts the effects of spurious place cell activity.

Introduction

The premise of our work is the commonly accepted theory that an animal’s awareness of its surroundings—the physical stimulus space—is encoded in the firing activity of place cells that are predominantly found in the CA1 and CA3 regions of its hippocampus [1]. Place cells are characterized by having firing patterns that are restricted to spatially localized regions called place fields [2]. Experimental results [3] suggest that the firing patterns or spike trains of place cells contribute spatial information that the brain uses to infer properties of the stimulus space. This has led to some interest in the following question: Without assuming place field information, what information can be extracted from only the spike trains of place cells? In this paper, we restrict the context of this question to rodent place cells. In [4], the authors used a mathematical shape analysis tool called homology to count the number of obstacles in an arena being explored by a rodent. This idea was further developed in [5, 6], where the authors used a time-series extension of homology called persistent homology (PH) [7–11] to identify bounds on the choices of parameters (e.g. number of place cells, sizes of place fields, firing rate) with respect to which homology could correctly recover topological features (i.e. connectivity/adjacency of locations) of the stimulus space from spike train data. Specifically, the authors simulated spiking activity of rodent place cells as the animal explored arenas having one or two obstacles (the topological features). The ground truth to be recovered was the number of obstacles in each arena. In [12], this study was expanded to utilize a newly developed method called directed network persistent homology [13], which significantly improved the 1-nearest neighbor classification error relative to that obtained via the methods used in [6]. The ground metric used for classification was the bottleneck distance [14], which is a natural metric for comparing persistent homology signatures.

Novelty of our work

The current thrust of our work is to incorporate the notion of forgetting information in the model for memory. The main tool that we are using for our analysis is zigzag persistent homology [15, 16], which allows the user to discard information (i.e. forget) in a systematic way while still building a persistent homology signature. Our current model for forgetting is very simple: for a specified length of time τ that is less than or equal to the experiment duration, we specify that the rodent remembers, at time t, only the information that it received in the window [t − τ, t]. We test this forgetting model for a range of 100 different values of τ, on a dataset consisting of 750 simulations produced from a variety of firing rate and place field size parameters.

Our contribution lies in studying the organization of the zigzag persistent homology signatures via the bottleneck distance, and in tracking the variation of this organization as a function of the memory parameter τ. We study the organizational structure by using the 1-nearest neighbor classifier (see Fig 1), which is a well-established and conceptually clear method for assigning classes to unlabeled data.

Fig 1. Our results at a glance: A plot of error rate against memory size τ.

The maximum τ is 5000. This represents the full duration of the experiment. Legend: numbers following “rf” and “s” are mean firing rates (in Hz) and place field sizes (in cm), respectively.

https://doi.org/10.1371/journal.pone.0202561.g001

Our results (see Fig 1) support the surprising concept that more memory is not always better. A plot of the 1-nearest neighbor classification error shows that as τ increases from 0, the error rate initially drops, then achieves a minimum before rising again. Thus it seems that to achieve the best possible result in “learning” an arena, the rodent needs a balance between remembering and forgetting information. This result, while nonintuitive, is in line with recent literature in neuroscience [17] where it has been proposed that forgetting is an important step in the learning process.

Our premise

We restrict our study to the place cell activity in a rodent as it explores a planar region containing some obstacles. We refer to this region as an environment or an arena.

When referring to the topology of the environment, we specifically refer to a mathematical shape descriptor called homology. Homology has different interpretations in each dimension; 0-dimensional homology of a space refers to the number of its connected components, 1-dimensional homology refers to the number of loops, and so on (see §S1 File). In this paper, we focus only on 1-dimensional homology. The environments we consider are square regions with obstacles that the rodent cannot pass through; the 1-dimensional homology of each such environment is the number of obstacles contained in it, and this is the number of interest.

Next we describe what we mean by the topology of the synaptic potentiation complex. By potentiation, we simply mean increased (above baseline) synaptic connectivity. At the biological level, one of the contributors to this effect in the CA1 region of the hippocampus has been characterized as changes to AMPA receptors in the postsynaptic membrane (e.g. through an increase in channel conductance or in the number of receptors) [18, 19]. These changes are continuous, but for the purposes of the mathematical model, we will simplify the neurobiological effects to discrete “increase/stay at baseline” events.

Suppose we are conducting an experiment where we track the activity of an ensemble S of place cells as an animal explores one of the regions described above. For any subset σ of S containing at least two cells, any time t (a time instance during the experiment) and a time interval τ, we define the following potentiation function: Aτ(σ, t) = 1 if the cells in σ cofired at some time in the window [t − τ, t], and Aτ(σ, t) = 0 otherwise. Here we write cofiring to mean that all the cells in σ fired above a threshold (see §Materials and methods for our threshold choice) in a window of two theta cycles (∼350ms). In [4], σ was referred to as a cell group. This potentiation function is motivated by the classic Hebbian “fire together, wire together” principle. However, if enough time passes during which cells that have “wired together” have not fired together again, then they “unwire”.

Using this potentiation function with supplied values of t and τ, one builds a dynamic simplicial complex (see §S1 File) with node set S and a simplex for each subset σ with activation 1. We call this dynamic simplicial complex the synaptic potentiation complex. More specifically, suppose that the experiment starts at time 0 and ends at time T, and that data points are recorded at times {0, 1, 2, …, T − 1, T}. Fix a value of τ. Then a simplicial complex can be built via the rule given above for each value of t ∈ {0, 1, 2, …, T}. Thus we obtain a sequence of simplicial complexes. The point to note here is that this sequence is a bona fide dynamic simplicial complex: simplices that are “old” (representing place cells that have not cofired recently) are dropped from the complex, even as new simplices are being added. The lifetime of a simplex is controlled by the τ parameter, which we interpret as the memory capacity of the animal. Our work can broadly be described as computing the topology of this dynamic complex for differing values of τ to study how memory affects the animal’s understanding of the topology of the environment.
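To make this construction concrete, the following minimal Python sketch builds the synaptic potentiation complex from a record of cofiring times (illustrative only; names such as cofire_times are placeholders and do not come from the released code).

```python
from itertools import combinations

def build_potentiation_complexes(cofire_times, T, tau):
    """Build the sequence K_0, ..., K_T of simplicial complexes.

    cofire_times maps a frozenset of cells (a candidate simplex) to the sorted
    list of times at which those cells cofired.  A simplex sigma is present in
    K_t iff A_tau(sigma, t) = 1, i.e. sigma cofired at some time in [t - tau, t].
    Isolated vertices of the ensemble S are omitted for brevity.
    """
    complexes = []
    for t in range(T + 1):
        K_t = set()
        for sigma, times in cofire_times.items():
            if any(t - tau <= s <= t for s in times):   # A_tau(sigma, t) = 1
                K_t.add(sigma)
                # close under faces so that K_t is a simplicial complex
                for k in range(1, len(sigma)):
                    for face in combinations(sigma, k):
                        K_t.add(frozenset(face))
        complexes.append(K_t)
    return complexes

# toy usage: the triple {0, 1, 2} cofires at time 3, the pair {0, 1} at time 10
example = {frozenset({0, 1, 2}): [3], frozenset({0, 1}): [10]}
print([len(K) for K in build_potentiation_complexes(example, T=15, tau=5)])
```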

The main mathematical intuition underlying our work is a result from algebraic topology called the Nerve Theorem [20], which states that if a space X can be decomposed into smaller subspaces {A1, …, An} satisfying some well-behaved intersection properties, then the topology of the space is equivalent to the topology of a simpler space called the nerve of the decomposition. The nerve is a simplicial complex built on the indexing set I = {1, …, n} as follows: a subset σ ⊆ I belongs to the nerve if and only if the intersection ⋂i∈σ Ai is nonempty. The crucial observation made by Curto and Itskov [4] is that the topology of the synaptic potentiation complex can be used to extract the topology of the environment, by virtue of the Nerve Theorem. This can be seen as follows (for simplicity, assume τ = T): suppose that we have a labeled collection of n place cells whose place fields A1, A2, …, An cover all of the accessible regions of an environment (so obstacles are not fully covered). Suppose also that all the place fields are convex. The Nerve Theorem then guarantees that the nerve simplicial complex associated to the place fields has the same topology as the environment. Assuming that cell groups that cofire correspond precisely to place field intersections, it then follows that the topology of the environment is the same as the topology of the potentiation complex.
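As a toy illustration of the nerve construction in this setting, the sketch below computes the nerve, up to 2-simplices, of a cover by equal-radius circular place fields; it tests nonempty intersections on a dense grid of sample points, which is an approximation rather than the exact geometric test.

```python
import itertools
import numpy as np

def nerve_of_disks(centers, radius, grid_step=1.0, max_dim=2):
    """Approximate nerve of a cover by equal-radius disks.

    A subset sigma of disk indices enters the nerve iff the disks indexed by
    sigma have a common point; here this is tested on a dense grid of sample
    points, which approximates the exact intersection test.
    """
    centers = np.asarray(centers, dtype=float)
    lo = centers.min(axis=0) - radius
    hi = centers.max(axis=0) + radius
    xs = np.arange(lo[0], hi[0], grid_step)
    ys = np.arange(lo[1], hi[1], grid_step)
    pts = np.array([(x, y) for x in xs for y in ys])
    # membership[i, p] is True iff sample point p lies in disk i
    membership = np.linalg.norm(pts[None, :, :] - centers[:, None, :], axis=2) <= radius
    nerve = []
    for k in range(1, max_dim + 2):  # simplices with k vertices, up to dimension max_dim
        for sigma in itertools.combinations(range(len(centers)), k):
            if np.any(np.all(membership[list(sigma)], axis=0)):
                nerve.append(sigma)
    return nerve

# three place fields: disks 0 and 1 overlap, disk 2 is far away
print(nerve_of_disks([(0, 0), (15, 0), (100, 100)], radius=10))
```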

Related literature and our contributions

Tracking topological changes via persistent homology.

In [6], a similar topological framework was used to understand the following question: how long does it take for the topology of an animal’s synaptic potentiation complex to achieve the topology of its environment? At the beginning of the experiment, it is assumed that the animal has just begun exploring the environment, and that new connections are being added to the potentiation complex. As the experiment runs and the animal explores the environment, the topology of the network changes. The authors of [6] used a relatively new computational technique—persistent homology—to track these changes and identify the first time that the topology of the potentiation complex matches the topology of the environment in a stable manner. This event was referred to as learning, and the subspace of parameter space (number of place fields, place field size, mean firing rate) in which learning occurred was identified and named the stable learning region. In [6], it was argued that persistent homology provides more information than its non-persistent counterpart (that we call flat homology).

We made a further advance on this idea of using persistent homology in [12], where we first encoded the place cell firing information into a directed, weighted network. The nodes in this network were the place cells, and the asymmetric weight between a pair (i, j) of cells was given by the relative frequency with which cell j fired in a brief time window after cell i had already fired. This particular encoding thus captured causality relations between the firing activity of the place cells. After producing these networks, we used a notion of directed network persistent homology developed in [13] to obtain persistence barcodes. We then compared these barcodes to each other via a natural metric called the bottleneck distance [14]. Using this distance, we computed the 1-nearest neighbor classification error rate and analyzed the hierarchical cluster structure via single linkage dendrograms. The objective of computing these error rates and dendrograms was to show that the network persistent homology method could better distinguish between the barcodes arising from different types of arenas than the PH method used in [6]. As we describe in §Materials and methods, our analysis methods in this paper follow those used in [12].

For complementary work on neural topological representations (using persistent homology and other techniques), see [2124].

Topological changes in the presence of decaying connections.

In [25], the authors considered rodent hippocampal firing models that are related to ours but differ in key aspects. They considered one square arena with one square hole in the middle, and one trajectory that fully explores the arena. In their model, only pairwise connections between neurons were considered. These pairwise links were added by a rule similar to the cofiring rule described above. The links were allowed to decay via two models: an exponential decay model, and a fixed lifetime model similar to ours. A simplicial complex was then obtained from these pairwise links by taking the associated clique complex. In this construction, a d-simplex for d ≥ 2 is added whenever all the edges between the vertices in the simplex are present. This is less expensive than constructing a nerve complex. However, the clique complex is generally different from the nerve complex, and so the guiding principle afforded by the Nerve Theorem does not apply in this setting.

The authors of [25] applied zigzag persistent homology to this dynamic clique complex for a range of decay rates and obtained persistence barcodes. The crux of their work was in showing that at least for the exponential decay model, the 0 and 1-dimensional homology vector spaces both have rank 1 for all times after some time threshold. In other words, they showed that the instantaneous 0 and 1-dimensional homology rank matches the 0 and 1-dimensional homology rank of the arena, after some time length.

Our approach: Zigzag persistent homology, bottleneck distances, and error rates.

The standard persistent homology setup as described above is unable to accept as input the fully dynamic simplicial complexes constructed via Aτ. The recourse is to consider the more powerful tool of zigzag persistent homology [15], which performs persistent homology computations on dynamic simplicial complexes (see Fig 2 for an illustration). Our chief goal is to enrich the existing literature by testing the intuitive hypothesis that the more “memory” the place cells have (i.e. the greater the value of τ), the more accurately the topology of the synaptic potentiation complex matches the topology of the environment.

Fig 2. An example of a dynamic simplicial complex and its 1-dimensional barcode (see §S1 File).

The simplicial complexes in frames 1 and 3 include into 2; 3 and 5 include into 4, and 5 includes into 6. Any dynamic simplicial complex is characterized by this property that for any two simplicial complexes occurring consecutively in the sequence, one is contained in the other (in some order).

https://doi.org/10.1371/journal.pone.0202561.g002

To this end, we chose a set of biologically plausible [6] parameters for mean firing rates ({12 Hz, 14 Hz, 16 Hz, 18 Hz, 20 Hz}) and place field radii ({14 cm, 15 cm, 16 cm}). For each of the 15 choices of firing rate-place field size pairs, we generated 10 simulations of a rodent’s trajectory around an arena with 0, 1, 2, 3 and 4 obstacles for a total of 15 × 10 × 5 = 750 datasets. For each dataset, we computed the synaptic potentiation function Aτ for 100 choices of τ. These choices were obtained as follows: the experiment duration (10 minutes) was divided into 5000 time bins, and the values of τ were in the range [50, 5000] in increments of 50. Thus we obtained 750 × 100 = 75000 dynamic simplicial complexes. Next we applied 1-dimensional zigzag persistent homology to these filtrations to obtain 75000 persistence barcodes (see §S1 File). Each of these persistence barcodes provides a visual representation of the time intervals during which the topology of the potentiation complex matches the topology of the environment. Some example barcodes are provided in Fig 3.

Fig 3. Arenas, place field centers, and barcodes.

Top row: the five types of arenas we considered. Each arena has dimensions 200 cm × 200 cm. Blue dots represent centers of place fields. Orange squares mark positions of circular obstacles of radius 25 cm. Bottom row: Persistence barcodes obtained from running simulations on the five different arenas with mean firing rate 20 Hz, place field size 16 cm, and τ = 2000. Notice that the number of long bars in each barcode is equal to the number of obstacles in the corresponding arena.

https://doi.org/10.1371/journal.pone.0202561.g003

Our next step was where we differed completely from approaches taken in [4, 6, 25], but remained similar in spirit to our prior work in [12] (see §Introduction). The output space of persistent homology (often called the barcode space) is equipped with a natural metric called the bottleneck distance [14]. For each parameter pair and each choice of τ, the pairwise bottleneck distance matrix between the corresponding 50 persistence barcodes determines how they are clustered together (see Fig 4). Intuitively, there should be five clusters, corresponding to each type of arena. We computed a 50 × 50 pairwise bottleneck distance matrix for each choice of τ and parameter-pair, and performed 1-nearest neighbor classification on each of these datasets. The corresponding error rate provides a measure of the separation of these clusters in barcode space. We plotted the error rate as a function of τ for each of the 15 parameter choices (see Fig 1). Our computational results (see Fig 1 and §Results) contradict the seemingly natural “more memory is better” hypothesis. Moreover, our results also indicate that the error rate achieves a global minimum at approximately τ = 2000, and then increases as τ increases. This suggests that forgetting connections—i.e. transience—plays an important role in learning the topology of the environment.

Fig 4. A bottleneck distance matrix.

This matrix was obtained from a simulation with mean firing rate 20 Hz, place field size 16 cm, and τ = 2000. Since the experiment duration is 5000, the maximum bar length in a barcode is ≤ 5000. The bottleneck distance between barcodes with a different number of long bars is approximately 5000 × 1/2 = 2500 (shown in yellow), since an unmatched bar must be matched to the diagonal at half its length.

https://doi.org/10.1371/journal.pone.0202561.g004

In the pipeline described above, it was not necessary to directly examine each of the computed barcodes. However, as we describe next, there is valuable information that can be obtained by direct inspection of individual barcodes. The key principle motivating the following discussion is that the long bars in a barcode correspond to meaningful topological features, whereas short bars correspond to topological noise [10]. Consider the persistence barcodes in Fig 3. These are example barcodes obtained from one of our simulations using mean firing rate 20 Hz, place field size 16 cm, and τ = 2000. Assume for now that a bar is “long” if its length is ≥ 4000. Then the number of long bars in each barcode is equal to the number of obstacles in the corresponding arena. This difference between the barcodes will be sharply reflected in the bottleneck distance, as can be seen in Fig 4. Thus there are meaningful topological characteristics at play behind the classification via bottleneck distances.

While analyzing the increase in error rate shown in Fig 1, we found that it was useful to plot the number of long bars in a barcode as a function of τ. The complication in this strategy is that for large numbers of simulations (which arise from a set of stochastic processes), it is not straightforward to pick a decision boundary for counting a bar as “long” or otherwise. We settled on L = 4000 as the decision boundary because for the barcodes we examined, it seemed that each obstacle was giving rise to an individual bar of length ≥ 4000.

Transience in the hippocampus.

Each of the models of [4] and [6] considered synaptic potentiation events and ignored depotentiation. So for each trial simulation in each of these models, all of the cofiring events that occurred in the duration of the trial were used to inform the topological inference.

In our interpretation of place cell cofiring as synaptic potentiation, using full knowledge of cofiring has the following intuitive meaning: the animal is assumed to have full memory of the cofiring events in its hippocampus. Thus increased synaptic potentiation is interpreted as memory. Biologically, this interpretation is well-documented [26, 27]. It is important to note that depressed synaptic connections also play a role in memory [28, 29], perhaps by maximizing signal-to-noise ratio [30], but for simplicity, we will associate memory with only increased synaptic potentiation, i.e. by a value of 1 for the Aτ(σ, t) function above. Given the interpretation of memory as increased synaptic potentiation, it follows that forgetting, or transience, involves destabilization of synaptic connections (e.g. via depotentiation or elimination of synaptic connections) [17].

The biological mechanisms responsible for natural forgetting have been documented in [17]; we summarize some of those here. For example, just as an increase in the number of AMPA receptors in the postsynaptic membrane is associated with increased memory, a decrease in this number via AMPA receptor endocytosis has recently been associated with forgetting [31]. Other work has shown that natural forgetting involves a protein called Rac [32, 33]—overexpression of Rac accelerates forgetting, and inhibition extends memory. On a longer timescale (thus less relevant to our work, but still interesting), another contributor to transience is hippocampal neurogenesis: the generation of neurons from stem cells in the dentate gyrus region of the hippocampal formation [34, 35]. It has recently been shown that this generation process is competitive, and that the adult-born neurons may remap preexisting connections in the hippocampus [36].

Transience and overfitting.

It has been proposed [17] that the biological motivation for transience is at least twofold:

  1. forgetting old memories allows the brain to remove outdated information, thus improving behavioral flexibility, and
  2. forgetting particular details and remembering only sparse representations of ideas or concepts helps the brain avoid “overfitting” when performing prediction tasks on new data points.

Overfitting is a term also used in the statistical learning literature to describe a model that has far too many parameters in relation to the amount of available data [37]. The problem with any such model is that while it may fit a particular dataset very well, it will often be very poor at fitting a new dataset, even if the new data follows the same distribution as the training data. In the artificial neural networks literature, numerous techniques have been developed to reduce overfitting. One technique, named “optimal brain damage”, involves training the network and then systematically setting some weights to zero [38]. Another technique, called dropout, involves randomly dropping neurons and their connections from the network during training [39]. In [17], parallels were drawn between these techniques used in artificial neural networks and transience events that occur in the brain, and it was suggested that transience is a natural process that helps the brain avoid overfitting.

Returning to our topological paradigm, we are faced with the following question: what does it mean to overfit a topology? A formal answer to this question is interesting in its own right, but we were more interested in the parallels between some of the techniques described above (such as zeroing out weights in an artificial neural network) and our definition of the synaptic potentiation complex (removing certain simplices instead of retaining all the activated simplices). In [17], it was suggested that transience is a mechanism which aids the brain in decision-making—the analogy with our computational findings is that transience aids in minimizing the error rate.

Results

In Fig 5, we show the main results that we obtained. In five of the eight plots, we fixed the firing rate and plotted the error rate vs τ for the three choices of place field size. In the remaining plots, we fixed the place field size and plotted the error rate vs τ for the five choices of firing rates. In each of these figures, we see that the error rate drops to a minimum at τ ≈ 2000.

Fig 5. Plots of error rates vs τ.

Top row: Plots where the mean firing rate is fixed at 12 Hz, 14 Hz, 16 Hz, 18 Hz, and 20 Hz, respectively. Blue, red, and orange lines correspond to place field sizes of 14 cm, 15 cm, and 16 cm, respectively. Bottom row: Plots where the place field sizes are fixed at 14 cm, 15 cm, and 16 cm, respectively. Blue, red, orange, violet, and green lines denote mean firing rates 12 Hz, 14 Hz, 16 Hz, 18 Hz, and 20 Hz, respectively. As described in §Results, for each fixed value of τ, the overall error rate tends to decrease as the mean firing rate and place field size increase.

https://doi.org/10.1371/journal.pone.0202561.g005

For each error rate computation, we have a 50 × 50 distance matrix of pairwise bottleneck distances between persistence barcodes obtained for each choice of τ and parameter pair. If there is no real clustering in the 50 points in barcode space, then the classification rule is expected to be correct 1/5 of the time, thus giving an error rate close to 4/5 = 0.8. If the dataset is tightly clustered into five well-separated regions, then the error rate is expected to drop to 0.

Stochastic firing increases error rate

Why does the error rate decrease and then increase again, instead of simply dropping as memory (τ) increases? There are numerous complications surrounding this question: a priori, the trajectory model, the place field distribution inside each arena, the choice of firing model, and the choice of parameters can all play a role in shaping the error rate plot.

In our initial experiments, we obtained this error rate phenomenon using a Poisson firing model and a random walk model for the rodent’s trajectory. As our first step in understanding the results, we repeated all our simulations with trajectories given by a modified billiards model, where the rodent followed a piecewise-linear trajectory and bounced off walls. Some modifications were applied to ensure that the rodent visited corners obscured by obstacles with sufficient frequency (details in §Materials and methods). The billiards model was chosen to ensure that the rodent would fully explore the environment with a reliable frequency. This prevented simulations where the rodent would explore the environment fully in one interval of time steps and then never again in the remaining time steps. The results in this paper are all obtained using the billiards model. However, switching from the random walk model to the billiards model did not alter the error rate phenomenon—again we observed that the error rate would decrease, and then increase again.

To fully simplify our model and protect against any procedural errors, we next replaced the Poisson firing model by a binary firing model where a place cell fired if and only if the rodent entered the corresponding place field (see §Materials and methods). We ran this simulation using mean firing rate 20 Hz and place field size 15 cm. Since there was no stochasticity in this model, we obtained 5 persistence barcodes for each choice of τ (as opposed to 50). In this model, there was no classification to be done and thus no error rate to calculate. However, we were still able to analyze the barcodes and verify that the zigzag persistence machinery was working correctly. In Fig 6 (left panel) we plotted the number of bars of length greater than L in each of the five barcodes as a function of τ. We chose L = 4000 because in our simulations using the billiards model, the rodent fully explored each environment within the first 1000 time steps. In this plot, we see that for τ ≈ 1600 and above, there is exactly one barcode having i bars of length ≥ L, for each i = 0, …, 4. This is expected behavior—as discussed above, the number of long bars in a barcode is a count of the number of obstacles in the environment that produced the barcode. In particular, the barcodes do not overcount or undercount the correct number of obstacles in the environment for any τ value greater than ≈ 1600.

Fig 6. Number of bars of length greater than L = 4000 in the barcodes obtained via the binary model (left) and the fuzzy binary model (right) as a function of τ.

The plot on the left panel has only five lines. The plot on the right panel has 50 lines, 10 for each color (corresponding to 10 simulations on each arena).

https://doi.org/10.1371/journal.pone.0202561.g006

Next we introduced some stochasticity into the binary firing model and considered the fuzzy binary firing model (details in §Materials and methods). In this model, a place cell fired deterministically if the rodent entered its place field, and stochastically if the rodent was within some bounded distance from the place field center. This fuzzy model is a compromise between the fully deterministic binary firing and fully stochastic Poisson firing models. We ran 10 simulations of this fuzzy model with mean firing rate 20 Hz and place field radius 15 cm. In Fig 6 (right panel), we plotted the number of bars of length greater than L = 4000 in the 50 barcodes as a function of τ. The important observation about this figure is that for large values of τ, some of the barcodes are overcounting the number of obstacles in the corresponding environment. Thus we expect the error rate to increase for large τ values. This suggests that stochastic firing causes the error rate to increase for large values of τ.

An explanation for the error rate dip

The most interesting aspect of our results is the dip in the error rate curves of Fig 1. There is an intuitive explanation for this dip, which we illustrate in Fig 7. Here we depict a 1-obstacle arena. The orange disks represent place fields centered around the obstacle, and the overlaid dynamic simplicial complex is the synaptic potentiation complex. Since a place cell fires with high probability when the rodent is in its place field, a pair or triple of place cells is likely to cofire when their place fields intersect. Thus we have drawn edges (in black) between the centers of pairs of intersecting place fields, and added faces (in grey) between triples of intersecting place fields.

Fig 7. The dynamic simplicial complex used in explaining the error rate dip (see §Results).

The two triangular loops in the left panel are examples of off-obstacle loops. Red lines in the middle panel represent spurious connections that are formed by stochastic activity. The loops containing these red lines are examples of on-obstacle loops.

https://doi.org/10.1371/journal.pone.0202561.g007

In addition to the events described above, both the Poisson firing model and the fuzzy binary firing model permit place cells to cofire even when their place fields do not intersect. Two of these instances have been marked by red lines in the middle figure. Now we turn to the question of counting loops in each of these simplicial complexes. There is one principal loop corresponding to an obstacle in each of the figures. In the left and middle figures, there are two additional loops in black and red, respectively.

The short black loops do not enclose any part of an obstacle, so we call them off-obstacle loops. As the rodent explores the environment, it will pass through these loops and activate more place fields that “fill in” these short loops. This is illustrated in the middle figure. The red loops, which we call on-obstacle loops, cannot be filled in this way, simply because the animal cannot pass through the obstacle. There are two plausible mechanisms by which the red loops can disappear:

  1. enough triples of place cells may cofire stochastically to fill in the loop, and
  2. the red edge forming the loop may simply be forgotten after some time.

Given that cofiring between cells having non-intersecting place fields is a rare event, the second method seems to be more appropriate for explaining how and when the red loops disappear.

Accepting the “forgetting a loop” explanation, it is suggestive to consider the relationship between τ and the on-obstacle loops. The greater the value of τ, the longer an on-obstacle loop remains in the simplicial complex. The longer an on-obstacle loop remains in the complex, the more it contributes (erroneously) to the persistence barcode. Thus having a large value of τ can indirectly increase the error rate.

At the other extreme, suppose that the value of τ is taken to be very small. In this situation, there may never be enough memory for the dynamic simplicial complex in Fig 7 to contain the principal loop. Thus very small values of τ filter out too much information for the dynamic complex to have any meaningful topological information.

Effect of parameters on error rate

We observe directly from Fig 5 that increasing the mean firing rate and the place field size causes the error rate to decrease. The physical intuition behind this decrease is as follows: increased firing rate and place field size both contribute to increased cofiring. Increased cofiring in turn causes off-obstacle loops (as described above in the discussion surrounding the error rate dip) to be filled in more quickly. Thus the (erroneous) contribution of these off-obstacle loops to the barcode is reduced. The bottleneck distance is then better able to distinguish between barcodes arising from different arenas, and the error rate consequently decreases.

In Fig 8, we plotted the single linkage dendrograms [37] depicting the clustering behavior in the persistence barcodes arising from the 15 different choices of mean firing rate-place field size parameter-pairs. Here τ is fixed at its experimentally determined optimal value of 2000. The dendrograms in Fig 8 suggest the hypothesis that there exist five natural clusters in the data, corresponding to the five types of arenas. We tested this hypothesis in two stages. First we tested whether there were five well-separated clusters, regardless of the labels of the points in each cluster. For this first test, we computed the p-values recorded in Table 1 (details in §Materials and methods). The (*)-marked values indicate that five well-separated clusters truly emerge as the mean firing rate and place field size both increase.

Fig 8. Single linkage dendrograms showing clustering of persistence barcodes obtained from 15 pairs of parameter choices.

The x-axis corresponds to bottleneck distance. The y-axis corresponds to labels of the arenas. 0-4 hole arenas correspond to blue, green, red, magenta, and orange lines, respectively. Top, middle, and bottom rows correspond to fixed place field sizes of 14 cm, 15 cm, and 16 cm, respectively. From left to right, the columns correspond to fixed firing rates of 12 Hz, 14 Hz, 16 Hz, 18 Hz, and 20 Hz, respectively. See the discussion on the effect of parameters on error rates in §Results and §Materials and methods.

https://doi.org/10.1371/journal.pone.0202561.g008

Next we refined our analysis and verified that the persistence barcodes were not just well-separated into five clusters, but also that these clusters were consistent with the correct (arena type) labels. This comprised the second test of our hypothesis, and was crucial to exclude the possibility that one cluster contained many points from different arena types. To this end, we computed the 1-nearest neighbor classification error rates (details in §Materials and methods) in Table 2 (also Figs 1 and 5).

Table 2. 1-nearest neighbor classification error rates at τ = 2000.

https://doi.org/10.1371/journal.pone.0202561.t002

By inspecting Tables 1 and 2 simultaneously, we can identify the following two types of interesting parameter pairs: those for which there exist five natural clusters, but with incorrect labels (marked with †), and those for which there exist five natural clusters with correct labels (marked with *). In particular, Tables 1 and 2 indicate that natural clusters with correct labels emerge as the mean firing rate and place field size both increase.

Materials and methods

All our data is available at https://web.gin.g-node.org/memoli/zz-memory. The following notation is used below:

  • numPF—Number of place fields
  • sizePF—Size (radius) of each place field
  • meanF—Mean firing rate
  • fireWin—Window over which cofiring occurs
  • fireTh—Firing threshold; cell is active if it fires above this quantity at any time

Arena generation

The family of arenas {A0, A1, A2, A3, A4} was generated as follows. Each arena Ai consisted of a 200 cm × 200 cm square with i obstacles of radius 25 cm. The obstacles were centered at coordinates (50 cm, 50 cm), (150 cm, 50 cm), (50 cm, 150 cm), and (150 cm, 150 cm), respectively. The arena generation code was supplied with parameters numPF and sizePF. Each arena was generated with numPF place fields with centers scattered uniformly at random, subject to the following rules: no place field center was placed within 25 cm of any obstacle, and any two place field centers were at least sizePF × 25 cm apart. The values used for sizePF were {14 cm, 15 cm, 16 cm}. The value of numPF was fixed at 150.
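A minimal sketch of this generation procedure is given below, assuming rejection sampling against the stated constraints; the separation threshold min_sep is a placeholder for the paper's separation rule, and the exclusion radius around obstacles is measured here from the obstacle centers.

```python
import numpy as np

def generate_arena(num_obstacles, numPF=150, min_sep=7.5, seed=0):
    """Scatter numPF place field centers in a 200 cm x 200 cm arena containing
    circular obstacles of radius 25 cm, rejecting candidate centers that are
    too close to an obstacle or to an already placed center.

    min_sep stands in for the paper's separation rule between centers, and the
    obstacle exclusion is measured here from the obstacle centers.
    """
    rng = np.random.default_rng(seed)
    obstacle_centers = np.array([(50.0, 50.0), (150.0, 50.0),
                                 (50.0, 150.0), (150.0, 150.0)])[:num_obstacles]
    centers = []
    while len(centers) < numPF:
        c = rng.uniform(0.0, 200.0, size=2)
        if num_obstacles and np.min(np.linalg.norm(obstacle_centers - c, axis=1)) < 25.0:
            continue
        if centers and np.min(np.linalg.norm(np.array(centers) - c, axis=1)) < min_sep:
            continue
        centers.append(c)
    return np.array(centers), obstacle_centers

pf_centers, obstacles = generate_arena(num_obstacles=2)
print(pf_centers.shape, obstacles.shape)   # (150, 2) (2, 2)
```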

Trajectory model

The trajectory was generated via a billiards model that we now discuss. The starting position of the trajectory was chosen uniformly at random from the square [50 cm, 150 cm] × [50 cm, 150 cm], i.e. a square at the center of the arena. This point was initialized with a random initial direction. The trajectory then followed this direction until it intersected with a wall of the arena or an obstacle. If the trajectory intersected with a wall, it bounced off at the angle of reflection, with an error of 5 degrees (chosen from a uniform distribution). If the trajectory intersected with an obstacle, then two points were recorded: the point of entry e and the projected point of exit f that would occur if the trajectory could pass in a straight line through the obstacle. Next, each point in the intersection of the trajectory and the obstacle was mapped to the nearest boundary point of the obstacle. Thus the trajectory would meet the obstacle at e, then follow the boundary of the obstacle until reaching f, and then proceed at the initial angle with which it had reached e.

The billiards model was chosen because it is known to be strongly mixing [40], i.e. after some short period of time, a trajectory following the billiards model will have visited all regions of the arena uniformly. In our setup, it was important for the rodent to fully explore the arena, and to do so with some approximate periodicity during the course of the experiment.

The trajectory consisted of 5000 steps. The simulation was modeled after an experiment lasting 10 minutes. Each step was assumed to take 600 s/5000 = 0.12 s. The speed of the rodent was fixed at 25 cm/s.
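The following simplified sketch reflects a trajectory off the square walls with a uniformly distributed angular error of up to 5 degrees (our reading of the reflection rule above); the boundary-following behavior at obstacles is omitted for brevity.

```python
import numpy as np

def billiard_trajectory(n_steps=5000, dt=0.12, speed=25.0, side=200.0, seed=0):
    """Piecewise-linear trajectory in a square arena that reflects off the
    walls, perturbing the reflection angle by a uniform error of up to 5
    degrees.  Obstacle handling (following the obstacle boundary) is omitted."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(50.0, 150.0, size=2)      # start near the center of the arena
    angle = rng.uniform(0.0, 2.0 * np.pi)
    traj = [pos.copy()]
    for _ in range(n_steps):
        pos = pos + speed * dt * np.array([np.cos(angle), np.sin(angle)])
        for axis in (0, 1):                      # reflect off each wall if crossed
            if pos[axis] < 0.0 or pos[axis] > side:
                pos[axis] = np.clip(pos[axis], 0.0, side)
                angle = (np.pi - angle) if axis == 0 else -angle
                angle += np.deg2rad(rng.uniform(-5.0, 5.0))   # reflection-angle error
        traj.append(pos.copy())
    return np.array(traj)

print(billiard_trajectory(n_steps=10).round(1))
```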

Firing models

Poisson.

For each arena-trajectory pair, a spike train was generated according to the firing model described next. At each time step t, a vector ft was sampled from a lognormal distribution with mean meanF and standard deviation 1.2 × meanF. Here ft is a numPF-dimensional vector of individual firing rate amplitudes, with one slot for each place field. We write rt to denote the position of the rodent at time t, and r to denote the list of place field centers. A vector λt of position-dependent firing rates was then calculated as follows: λt(i) = ft(i) exp(−‖rt − r(i)‖² / (2 · sizePF²)).

The firing model was assumed to be a Poisson process with mean given by the position-dependent rate vector λt and time interval dt = 600 s/5000 = 0.12 s. At each time step, the number of spikes produced by each place cell was given by st = poissrnd(λt · dt).

The following values of meanF were used: {0.12 Hz, 0.14 Hz, 0.16 Hz, 0.18 Hz, 0.20 Hz}. The ensemble of spike trains is called a raster.
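A sketch of the Poisson raster generation is given below. It assumes a Gaussian tuning profile for the position-dependent rates, as in [6], and a moment-matched lognormal for the amplitude vector ft; the released code may differ in such details.

```python
import numpy as np

def poisson_raster(traj, pf_centers, sizePF=15.0, meanF=20.0, dt=0.12, seed=0):
    """Spike counts (cells x time steps) under the Poisson model.

    traj:       (T, 2) array of rodent positions
    pf_centers: (N, 2) array of place field centers
    The amplitude vector f_t is drawn from a lognormal with mean meanF and
    standard deviation 1.2 * meanF (moment-matched parametrization), and the
    position-dependent rate uses a Gaussian tuning profile of width sizePF.
    """
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + 1.2 ** 2)              # lognormal shape from the target CV
    mu = np.log(meanF) - sigma2 / 2.0            # lognormal location from the target mean
    raster = np.zeros((len(pf_centers), len(traj)), dtype=int)
    for t, r_t in enumerate(traj):
        f_t = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=len(pf_centers))
        d2 = np.sum((pf_centers - r_t) ** 2, axis=1)
        rate = f_t * np.exp(-d2 / (2.0 * sizePF ** 2))   # position-dependent rates
        raster[:, t] = rng.poisson(rate * dt)
    return raster

traj = np.tile([100.0, 100.0], (50, 1))          # toy stationary trajectory
pf = np.array([[100.0, 100.0], [10.0, 10.0]])
print(poisson_raster(traj, pf).sum(axis=1))      # in-field cell spikes, far cell stays silent
```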

While the Poisson firing model is meant to simulate real data, we found that the results are easier to interpret after first considering some simpler firing models. We describe these models next.

Binary.

In the binary firing model, the raster was generated through a purely deterministic process. At each time step, a place cell generated a spike if the rodent’s trajectory intersected the corresponding place field. The number of spikes at each time step was given by meanF · dt.

Fuzzy binary.

The fuzzy binary firing model was a hybrid between the deterministic Binary and stochastic Poisson firing models. For each time instance t, we considered the collection A of place cells whose place fields intersected the rodent’s trajectory at t, and the collection B of place cells whose place field centers were within 2 · sizePF of the rodent’s trajectory. The cells in A fired deterministically according to the binary model described above. The cells in B activated with probability 0.2, and once activated, they generated meanF · dt spikes.
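A corresponding sketch of the fuzzy binary model follows; here the collection B is read as the ring of cells whose centers lie between sizePF and 2 · sizePF from the rodent, which is one possible reading of the description above.

```python
import numpy as np

def fuzzy_binary_raster(traj, pf_centers, sizePF=15.0, meanF=20.0, dt=0.12, seed=0):
    """Deterministic spiking inside a place field (collection A); spiking with
    probability 0.2 for cells whose centers lie within 2 * sizePF of the rodent
    but outside their place field (collection B, as read here)."""
    rng = np.random.default_rng(seed)
    spikes_when_active = int(round(meanF * dt))   # meanF * dt spikes per active step
    raster = np.zeros((len(pf_centers), len(traj)), dtype=int)
    for t, r_t in enumerate(traj):
        d = np.linalg.norm(pf_centers - r_t, axis=1)
        in_A = d <= sizePF
        in_B = (d > sizePF) & (d <= 2.0 * sizePF)
        fires = in_A | (in_B & (rng.random(len(d)) < 0.2))
        raster[fires, t] = spikes_when_active
    return raster

traj = np.array([[100.0, 100.0], [120.0, 100.0]])
pf = np.array([[100.0, 100.0], [130.0, 100.0]])
print(fuzzy_binary_raster(traj, pf))
```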

We will write Poisson, Binary, and FuzzyB to denote the Poisson, binary, and fuzzy binary firing models, respectively.

Memory model and the synaptic potentiation complex

The output of the preceding steps was an integer-valued matrix of dimensions 150 × 5000, with entry (i, j) giving the number of times cell i fired at time j. Next a list file was created which contained, for each time t ∈ {1, 2, …, 5000}, the index of each cell whose aggregate firing in a window of size fireWin starting at t exceeded the threshold fireTh.

From this list file, an event file containing addition and deletion events was generated. This is described next. At time 0, an empty simplicial complex K0 was initialized. At time t, for t > 0, the list file was scanned to find all (k + 1)-tuples of active cells, for k ∈ {0, 1, 2}. Such a (k + 1)-tuple (i.e. a k-simplex) was added to Kt if it was not already present in Kt−1. The deletion event was as follows: at each time t, the simplices that had appeared at time t − τ − 1 and had not appeared in the window [t − τ, t] were deleted from Kt.

The resulting object was the synaptic potentiation complex. The special property of this dynamic simplicial complex was that it allowed for removal of simplices, i.e. the deletion events described above. The idea was that as the rodent explored and learned the arena, new simplices were added to the potentiation complex. Conversely, as time passed and the rodent forgot regions of the arena it had not visited recently, the corresponding simplices in the potentiation complex were removed.
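The addition/deletion bookkeeping can be sketched as follows; the list file is represented here as a dictionary mapping each time step to the set of active simplices, and the sketch assumes this set is closed under taking faces (which holds when cofiring of a tuple implies cofiring of its subtuples), so deletions never strand a coface.

```python
def build_event_stream(active_simplices, T, tau):
    """active_simplices: dict mapping a time step t to the set of simplices
    (tuples of cell indices) whose cells cofired above threshold at t.
    Returns a list of (t, 'add'/'del', simplex) events defining the dynamic
    complex; assumes each per-time set is closed under taking faces."""
    last_seen = {}        # simplex -> most recent time it was active
    current = set()       # simplices currently present in K_t
    events = []
    for t in range(T + 1):
        for sigma in active_simplices.get(t, ()):
            last_seen[sigma] = t
            if sigma not in current:
                events.append((t, "add", sigma))
                current.add(sigma)
        # delete simplices that have not been active in the window [t - tau, t]
        for sigma in [s for s in current if last_seen[s] < t - tau]:
            events.append((t, "del", sigma))
            current.remove(sigma)
    return events

# toy example: the edge (0, 1) cofires at times 0 and 2, then is forgotten
print(build_event_stream({0: {(0, 1)}, 2: {(0, 1)}}, T=10, tau=3))
```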

This dynamic simplicial complex was then passed into Dionysus [41] for zigzag persistence computation, i.e. to compute the persistent homology of this at-times-growing, at-times-shrinking simplicial complex.
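For reference, a toy zigzag computation with the Dionysus Python bindings might look like the sketch below (assuming the Dionysus 2 interface, in which each simplex is paired with the list of times at which it alternately enters and leaves the complex); the original computations used the Dionysus version cited in [41] and may differ in interface.

```python
import dionysus as d

# Each simplex appears once, paired with the times at which it is alternately
# added to and removed from the dynamic complex (a toy schedule shown here).
simplices = [[0], [1], [2], [0, 1], [1, 2], [0, 2], [0, 1, 2]]
times = [[0], [0], [0], [1, 6], [1, 6], [1, 6], [2, 4]]

f = d.Filtration(simplices)
zz, dgms, cells = d.zigzag_homology_persistence(f, times)
for dim, dgm in enumerate(dgms):
    for p in dgm:
        print(dim, p.birth, p.death)   # the bars of the zigzag barcode, by dimension
```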

Relation between arenas and persistence barcodes

For each raster, the output from Dionysus was a persistence barcode. Each barcode was a collection of subintervals of [0, 5000]—the persistent intervals or bars. Among practitioners of persistent homology, the guiding principle is often [10] the following: short bars correspond to topological noise, and long bars correspond to meaningful topological features.

At a high level, the number of “long” bars in a barcode should reflect the number of obstacles in the arena from which the barcode was generated. Some examples are provided in Fig 3. However, there is no “correct” threshold L for counting a bar as long or otherwise; the choice of L is dependent on the user.

For a barcode B and an integer L ∈ [0, 5000], let LongBars(B, L) denote the number of persistent bars in B with length at least L. We computed LongBars(⋅, 4000) for each barcode that we obtained from performing the zigzag computation on rasters obtained via the Binary and FuzzyB models. Plots of this function against τ are provided in Fig 6.
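The LongBars count itself is elementary; a sketch:

```python
def long_bars(barcode, L=4000):
    """Count the bars in a barcode (a list of (birth, death) pairs) of length >= L."""
    return sum(1 for birth, death in barcode if death - birth >= L)

# toy barcode: one long bar and two short ones
print(long_bars([(100, 4800), (200, 400), (3000, 3100)]))   # -> 1
```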

1-nearest neighbor classification

For a fixed choice of firing rate-place field size parameters, we obtained 50 1-dimensional persistence barcodes and computed a 50 × 50 bottleneck distance matrix. Note that Dionysus provides a utility function for computing bottleneck distances. Next we computed the 1-nearest neighbor classification error rate over 1000 random choices of seed points. The seed points were chosen five at a time, one for each type of arena. After calculating error rates for all parameter pairs, the final result was a plot of error rates vs τ, as shown in Figs 1 and 5.
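A sketch of this error rate computation, given a square bottleneck distance matrix and the arena-type labels (the sampling scheme for seed points is our reading of the description above):

```python
import numpy as np

def one_nn_error_rate(D, labels, n_trials=1000, seed=0):
    """D: square bottleneck distance matrix; labels: arena type of each barcode.
    In each trial, one seed barcode per arena type is drawn at random, every
    remaining barcode is assigned the label of its nearest seed, and the
    misclassification rate is recorded.  The mean over trials is returned."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    errors = []
    for _ in range(n_trials):
        seeds = np.array([rng.choice(np.flatnonzero(labels == c)) for c in classes])
        rest = np.setdiff1d(np.arange(len(labels)), seeds)
        nearest = np.argmin(D[np.ix_(rest, seeds)], axis=1)
        errors.append(np.mean(labels[seeds[nearest]] != labels[rest]))
    return float(np.mean(errors))

# toy usage with an unstructured symmetric "distance" matrix: error rate near 0.8
rng = np.random.default_rng(1)
D = rng.random((50, 50)); D = (D + D.T) / 2.0; np.fill_diagonal(D, 0.0)
labels = np.repeat(np.arange(5), 10)
print(one_nn_error_rate(D, labels))
```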

Cluster structure via dendrograms and p-value tests

For each choice of parameter-pair, the 50 resulting 1-dimensional barcodes formed a finite metric space when endowed with the bottleneck distance. We examined the hierarchical clustering structure of this finite metric space by applying single linkage hierarchical clustering and visualizing the result as a dendrogram. Fig 8 contains the dendrograms for the 15 parameter-pair choices at τ = 2000. Tables 1 and 2 list dendrogram p-values and 1-nearest neighbor classification error rates for the 15 parameter-pair choices at τ = 2000. In order to compute p-values for the dendrograms in Fig 8, we used the following procedure.

The null model.

The 15 parameter-pair choices were listed as p1, p2, …, p15. For the 50 barcodes corresponding to each parameter-pair pi, we computed the minimum and maximum of: the number of bars (n), the bar start times (s), and the bar lengths (l), respectively. In this way for pi we obtained intervals [nmin, nmax], [smin, smax], and [lmin, lmax]. For each pi our null model consisted of three independent uniform distributions: one on [nmin, nmax] (number of bars), one on [smin, smax] (start time of a bar), and one on [lmin, lmax] (length of a bar).

The p-value test.

Next, for each pi, we computed 5000 trials where each trial consisted of a collection of 50 barcodes chosen from our null model. Thus we obtained a total of 15 × 5000 × 50 randomized barcodes. For each pi and each trial, we computed a 50 × 50 bottleneck distance matrix using only the top 15 longest bars from each barcode. Thus, for each pi, from these 5000 matrices, we obtained 5000 randomized single linkage dendrograms via Matlab’s linkage function.

For each dendrogram corresponding to a fixed parameter-pair, we obtained the cluster labels at the first linkage value where five or fewer clusters appeared. Using this clustering, we assigned silhouette coefficients [42] to each of the 50 underlying data points in the dendrogram. The ground metric for silhouette coefficient computation was taken to be the distance induced by the dendrogram, i.e. the merge heights between pairs of points [43]. The silhouette coefficient of the dendrogram was defined to be the average of these 50 coefficients. In general, the silhouette coefficient ranges from −1 to 1; a value close to 1 indicates meaningful clustering. The p-value for each of the observed dendrograms in Fig 8 was computed as the proportion (out of 5000) of randomized dendrograms with a silhouette coefficient greater than that of the observed dendrogram.
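A sketch of the silhouette-based p-value computation, assuming SciPy and scikit-learn utilities for single linkage, cophenetic (merge-height) distances, and silhouette scores; cutting the dendrogram with a maxclust criterion of 5 approximates "the first linkage value where five or fewer clusters appeared", and the generation of randomized barcodes and their bottleneck matrices is omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, cophenet
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def dendrogram_silhouette(D):
    """Average silhouette coefficient of the single linkage dendrogram of a
    square distance matrix D, using the merge-height (cophenetic) distances as
    the ground metric and the labels obtained by cutting at <= 5 clusters."""
    Z = linkage(squareform(D, checks=False), method="single")
    labels = fcluster(Z, t=5, criterion="maxclust")
    merge_heights = squareform(cophenet(Z))
    return silhouette_score(merge_heights, labels, metric="precomputed")

def silhouette_p_value(observed_D, null_Ds):
    """Proportion of null-model distance matrices whose dendrogram silhouette
    exceeds that of the observed dendrogram."""
    s_obs = dendrogram_silhouette(observed_D)
    s_null = np.array([dendrogram_silhouette(D) for D in null_Ds])
    return float(np.mean(s_null > s_obs))

# toy usage: a clustered observed matrix against unstructured null matrices
rng = np.random.default_rng(0)
blocks = np.repeat(np.arange(5), 10)
obs = rng.random((50, 50)) + 5.0 * (blocks[:, None] != blocks[None, :])
obs = (obs + obs.T) / 2.0; np.fill_diagonal(obs, 0.0)
nulls = []
for _ in range(20):
    N = rng.random((50, 50)); N = (N + N.T) / 2.0; np.fill_diagonal(N, 0.0)
    nulls.append(N)
print(silhouette_p_value(obs, nulls))   # small p-value for the clustered matrix
```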

Discussion

Our results suggest that transience, or forgetting, is an important mechanism for learning the topological characteristics of an environment. This is in line with recent perspectives appearing in the neuroscience literature [17] on the importance of forgetting for better decision-making.

From our viewpoint, a useful next step would be to repeat our simulations using different decay models, e.g. the exponential decay model used in [25], and observe the shape of the resulting error rate plot. We expect that the dip in the error rate would still be present, but perhaps at a value different from τ = 2000.

In the current work, we have modeled “forgetting” as a passive process, in line with the classical decay theory model of memories dissipating with time. An interesting direction would be to study a forgetting model incorporating interference theory, which suggests that forgetting occurs as new experiences hinder the stability of old memories. Yet another approach would be to incorporate active models of forgetting. Such models hypothesize that forgetting is carried out by biologically regulated mechanisms, e.g. through overexpression of Rac proteins or a decrease in the number of AMPA receptors, as we described earlier.

The ultimate goal would be to procure experimental data and test the hypothesis that we put forth in this paper regarding transience and topological learning. There has already been some interest in producing experiments to verify the predictions arising from algebraic topology, e.g. [22]. Of course, there are numerous difficulties attending the use of real data that we ignored for the purposes of simulation: a significant number of place fields may be non-convex, and the data may be highly noisy. Despite these obstructions, there appears to be no obvious drawback to using zigzag persistent homology and bottleneck distances on real data. The complexity of the calculations scales with the number of place cells, and the number of place cells that can be recorded for real data is well within the capabilities of the available zigzag software.

Supporting information

S1 File. Background on persistent homology and zigzag persistence.

https://doi.org/10.1371/journal.pone.0202561.s001

(PDF)

Acknowledgments

This work was supported by NSF grant IIS-1422400.

References

  1. Dayan P, Abbott LF. Theoretical neuroscience. vol. 10. Cambridge, MA: MIT Press; 2001.
  2. O’Keefe J, Dostrovsky J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain research. 1971;34(1):171–175. pmid:5124915
  3. Brown EN, Frank LM, Tang D, Quirk MC, Wilson MA. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. The Journal of Neuroscience. 1998;18(18):7411–7425. pmid:9736661
  4. Curto C, Itskov V. Cell groups reveal structure of stimulus space. PLoS Computational Biology. 2008;4(10). pmid:18974826
  5. Dabaghian Y, Mémoli F, Singh G, Frank L, Carlsson G. Topological stability of the hippocampal spatial map. In: Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience; 2009.
  6. Dabaghian Y, Mémoli F, Frank L, Carlsson G. A topological paradigm for hippocampal spatial map formation using persistent homology. PLoS Comput Biol. 2012;8(8). pmid:22912564
  7. Frosini P. Measuring shapes by size functions. In: Intelligent Robots and Computer Vision X: Algorithms and Techniques. International Society for Optics and Photonics; 1992. p. 122–133.
  8. Edelsbrunner H, Harer J. Persistent homology-a survey. Contemporary mathematics. 2008;453:257–282.
  9. Ghrist R. Barcodes: the persistent topology of data. Bulletin of the American Mathematical Society. 2008;45(1):61–75.
  10. Carlsson G. Topology and data. Bulletin of the American Mathematical Society. 2009;46(2):255–308.
  11. Edelsbrunner H, Morozov D. Persistent homology: theory and practice. 2014.
  12. Chowdhury S, Dai B, Mémoli F. Topology of stimulus space via directed network persistent homology. Cosyne Abstracts 2017. 2017.
  13. Chowdhury S, Mémoli F. A functorial Dowker theorem and persistent homology of asymmetric networks. Journal of Applied and Computational Topology. Forthcoming.
  14. Edelsbrunner H, Harer J. Computational topology: an introduction. American Mathematical Soc.; 2010.
  15. Carlsson G, De Silva V. Zigzag persistence. Foundations of Computational Mathematics. 2010;10(4):367–405.
  16. Carlsson G, De Silva V, Morozov D. Zigzag persistent homology and real-valued functions. In: Proceedings of the twenty-fifth annual Symposium on Computational Geometry. ACM; 2009. p. 247–256.
  17. Richards BA, Frankland PW. The Persistence and Transience of Memory. Neuron. 2017;94(6):1071–1084. pmid:28641107
  18. Benke TA, Luthi A, Isaac JT, Collingridge GL. Modulation of AMPA receptor unitary conductance by synaptic activity. Nature. 1998;393(6687):793. pmid:9655394
  19. Plant K, Pelkey KA, Bortolotto ZA, Morita D, Terashima A, McBain CJ, et al. Transient incorporation of native GluR2-lacking AMPA receptors during hippocampal long-term potentiation. Nature neuroscience. 2006;9(5):602. pmid:16582904
  20. Björner A. Topological methods. Handbook of combinatorics. 1995;2:1819–1872.
  21. Chen Z, Gomperts SN, Yamamoto J, Wilson MA. Neural representation of spatial topology in the rodent hippocampus. Neural computation. 2014;26(1):1–39. pmid:24102128
  22. Giusti C, Pastalkova E, Curto C, Itskov V. Clique topology reveals intrinsic geometric structure in neural correlations. Proceedings of the National Academy of Sciences. 2015;112(44):13455–13460.
  23. Spreemann G, Dunn B, Botnan MB, Baas NA. Using persistent homology to reveal hidden information in neural data. arXiv preprint arXiv:1510.06629. 2015.
  24. Curto C. What can topology tell us about the neural code? Bulletin of the American Mathematical Society. 2017;54(1):63–78.
  25. Babichev A, Morozov D, Dabaghian Y. Robust spatial memory maps encoded in networks with transient connections. arXiv preprint arXiv:1710.02623. 2017.
  26. Josselyn SA, Köhler S, Frankland PW. Finding the engram. Nature reviews Neuroscience. 2015;16(9):521. pmid:26289572
  27. Tonegawa S, Liu X, Ramirez S, Redondo R. Memory engram cells have come of age. Neuron. 2015;87(5):918–931. pmid:26335640
  28. Dudek SM, Bear MF. Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade. Proceedings of the National Academy of Sciences. 1992;89(10):4363–4367.
  29. Kemp A, Manahan-Vaughan D. Hippocampal long-term depression: master or minion in declarative memory processes? Trends in neurosciences. 2007;30(3):111–118. pmid:17234277
  30. Dayan P, Willshaw DJ. Optimising synaptic learning rules in linear associative memories. Biological cybernetics. 1991;65(4):253–265. pmid:1932282
  31. Dong Z, Han H, Li H, Bai Y, Wang W, Tu M, et al. Long-term potentiation decay and memory loss are mediated by AMPAR endocytosis. The Journal of clinical investigation. 2015;125(1):234. pmid:25437879
  32. Shuai Y, Lu B, Hu Y, Wang L, Sun K, Zhong Y. Forgetting is regulated through Rac activity in Drosophila. Cell. 2010;140(4):579–589. pmid:20178749
  33. Liu Y, Du S, Lv L, Lei B, Shi W, Tang Y, et al. Hippocampal Activation of Rac1 Regulates the Forgetting of Object Recognition Memory. Current Biology. 2016;26(17):2351–2357. pmid:27593377
  34. Nicolas T, Teng EM, Bushong EA, Aimone JB, Zhao C, Consiglio A, et al. Synapse formation on neurons born in the adult hippocampus. Nature neuroscience. 2007;10(6):727.
  35. Toni N, Laplagne DA, Zhao C, Lombardi G, Ribak CE, Gage FH, et al. Neurons born in the adult dentate gyrus form functional synapses with target cells. Nature neuroscience. 2008;11(8):901–907. pmid:18622400
  36. McAvoy KM, Scobie KN, Berger S, Russo C, Guo N, Decharatanachart P, et al. Modulating neuronal competition dynamics in the dentate gyrus to rejuvenate aging memory circuits. Neuron. 2016;91(6):1356–1373. pmid:27593178
  37. Friedman J, Hastie T, Tibshirani R. The elements of statistical learning. vol. 1. Springer series in statistics. Springer, Berlin; 2001.
  38. LeCun Y, Denker JS, Solla SA. Optimal brain damage. In: Advances in Neural Information Processing Systems; 1989.
  39. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research. 2014;15(1):1929–1958.
  40. Sinai YG. Dynamical systems with elastic reflections. Russian Mathematical Surveys. 1970;25(2):137–189.
  41. Morozov D. Dionysus. Software available at http://www.mrzv.org/software/dionysus. 2012.
  42. Rousseeuw PJ. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics. 1987;20:53–65.
  43. Jardine N, Sibson R. Mathematical Taxonomy. Wiley series in probability and mathematical statistics. Wiley; 1971. Available from: https://books.google.com/books?id=ka4KAQAAIAAJ.