Fig 1.
By analogy with a database management system, the brain's memory system lacks a comparably clear and complete signal-processing chain.
Fig 2.
A sample can be stored as a subgraph in a directed graph.
(a) Random activation of 30 nodes in the network. (b) The activated nodes propagate the stimulus through the network, forming a subgraph.
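The storage step of Fig 2 can be sketched in a few lines. This is a minimal reading, assuming a networkx DiGraph as the substrate and a single propagation hop per activated node; the weighted selection, occupancy, and state rules of Figs 4-6 are omitted, and store_sample and its parameters are illustrative names rather than the authors' API.

```python
import random
import networkx as nx

def store_sample(g, k=30, seed=None):
    """Randomly activate k nodes (Fig 2a) and let each activated node
    propagate the stimulus along one outgoing edge (Fig 2b); the union
    of the chosen edges is the subgraph that stores the sample."""
    rng = random.Random(seed)
    activated = rng.sample(sorted(g.nodes), k)
    edges = []
    for v in activated:
        successors = list(g.successors(v))
        if successors:  # propagate to one downstream node
            edges.append((v, rng.choice(successors)))
    return nx.DiGraph(edges)

g = nx.erdos_renyi_graph(500, 0.05, directed=True, seed=0)
sub = store_sample(g, seed=42)
print(sub.number_of_nodes(), sub.number_of_edges())
```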
Fig 3.
How to record activation traces within a single node.
(a) After a sample is stored, an activation trace is added to node vb's index table. (b) When node vb receives the same or similar input again, it reuses the previously recorded activation traces.
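A minimal sketch of the index table of Fig 3, assuming the table maps the set of upstream activators to the downstream node chosen when the trace was recorded; Node, record_trace, and recall are hypothetical names, and plain set overlap stands in for the paper's similarity rule.

```python
class Node:
    """A node whose index table records activation traces (Fig 3)."""

    def __init__(self, name):
        self.name = name
        # Maps an input pattern (the frozenset of upstream activators)
        # to the downstream node chosen when the trace was recorded.
        self.index_table = {}

    def record_trace(self, upstream, downstream):
        """Fig 3(a): after a sample is stored, add an activation trace."""
        self.index_table[frozenset(upstream)] = downstream

    def recall(self, upstream):
        """Fig 3(b): on the same or similar input, reuse a recorded
        trace; set overlap stands in for the paper's matching rule."""
        key = frozenset(upstream)
        if key in self.index_table:
            return self.index_table[key]
        best = max(self.index_table, key=lambda k: len(k & key), default=None)
        return self.index_table[best] if best and best & key else None

vb = Node("vb")
vb.record_trace({"va", "vd"}, "vc")
assert vb.recall({"va"}) == "vc"  # a similar input reuses the trace
```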
Fig 4.
(a) Node va propagates the stimulus to node vb. (b) After node vb is activated, it finds that other nodes have occupied node vc. (c) Node vb returns to the resting state because it cannot continue to propagate the stimulus. (d) Node va also returns to the resting state because it has failed to activate any downstream node.
Fig 5.
Node state transition graph.
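Figs 4 and 5 together suggest a small state machine. The sketch below is an interpretive reading of the captions rather than the paper's formal transition graph; the event labels are assumptions keyed to the panels they cite.

```python
from enum import Enum, auto

class NodeState(Enum):
    RESTING = auto()  # idle; may be activated by an incoming stimulus
    ACTIVE = auto()   # holds a stimulus and tries to propagate it
    DORMANT = auto()  # blocked; waits for occupied resources to free up

TRANSITIONS = {
    (NodeState.RESTING, "stimulus received"):       NodeState.ACTIVE,   # Fig 4(a,b)
    (NodeState.ACTIVE,  "downstream occupied"):     NodeState.RESTING,  # Fig 4(c)
    (NodeState.ACTIVE,  "no downstream activated"): NodeState.RESTING,  # Fig 4(d)
    (NodeState.ACTIVE,  "pathfinding blocked"):     NodeState.DORMANT,  # Fig 6
    (NodeState.DORMANT, "resources released"):      NodeState.ACTIVE,   # Fig 6(g)
}

def step(state, event):
    """Apply one transition; unknown (state, event) pairs are ignored."""
    return TRANSITIONS.get((state, event), state)
```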
Fig 6.
An example of the dynamic process of stimulus propagation in a directed graph.
(a) t0: The initial nodes {va,vb,vc,vd} are activated and propagate the stimulus according to a weighted random-selection algorithm. (b) t1: {ve,vf,vg,vi,vj} become active after receiving the stimulus from the initial nodes. (c) t2: {vh,vk} become active after receiving the stimulus from active nodes. (d) t3: The downstream nodes {vb,vc} of vh are all occupied, so the stimulus cannot be propagated; vh returns to the resting state. (e) t4: After vh returns to the resting state, vj also returns to the resting state through the avalanche effect. (f) t5: The subgraph iterates to a steady state and resource release begins: node va releases its occupancy of ve. (g) t6: After the resources are released, the dormant node vd restarts pathfinding and successfully activates vj. (h) t7: Node vh is activated after receiving the stimulus from vj, and passes the stimulus on to vb. The subgraph has iterated to a stable state.
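Two mechanisms named in the Fig 6 caption lend themselves to short sketches: the weighted random-selection step of panel (a) and the avalanche effect of panel (e). Both functions below are assumptions about plausible implementations, with graph taken to be a networkx-style DiGraph and weights a dict keyed by directed edge; neither is the authors' code.

```python
import random

def weighted_select(node, graph, weights, occupied, rng):
    """Fig 6(a): pick one unoccupied downstream node with probability
    proportional to its outgoing edge weight; None if all are occupied."""
    candidates = [u for u in graph.successors(node) if u not in occupied]
    if not candidates:
        return None
    w = [weights[(node, u)] for u in candidates]
    return rng.choices(candidates, weights=w, k=1)[0]

def avalanche(node, activated_by, active):
    """Fig 6(e): when `node` returns to the resting state, an upstream
    node left with no active downstream also returns to resting,
    recursively (activated_by maps each active node to its activator)."""
    active.discard(node)
    parent = activated_by.pop(node, None)
    if parent in active and parent not in activated_by.values():
        avalanche(parent, activated_by, active)
```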
Table 1.
Retrieval performance of networks of different scales after storing 1000 samples.
Fig 7.
The relationship of the number of edges to network capacity and subgraph structure.
Fig 8.
Storage examples under different levels of network connectivity.
(a) The sample-generated subgraph in an ER random graph with p = 0.07. (b) The sample-generated subgraph in an ER random graph with p = 0.04.
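The two connectivity settings of Fig 8 can be reproduced directly with networkx; n = 1000 is an assumed network size, not a value from the paper.

```python
import networkx as nx

n = 1000  # assumed network size
dense  = nx.erdos_renyi_graph(n, 0.07, directed=True, seed=1)  # Fig 8(a)
sparse = nx.erdos_renyi_graph(n, 0.04, directed=True, seed=1)  # Fig 8(b)
print(dense.number_of_edges(), sparse.number_of_edges())
```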
Fig 9.
Comparison of capacity between sparse and dense graphs.
(a) Results in a sparse graph. (b) Results in a dense graph.
Fig 10.
The impact of incomplete sample input on sample retrieval.
(a) Test results in a sparse graph. (b) Test results in a dense graph.
Fig 11.
The impact of noisy sample input on sample retrieval.
(a) Test results in a sparse graph. (b) Test results in a dense graph.
Fig 12.
The impact of sample input that is both incomplete and contains noise nodes on sample retrieval.
(a) Test results in a sparse graph. (b) Test results in a dense graph.
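The degraded inputs of Figs 10-12 and the two retrieval scores reported in later figures can be sketched as below. The precision/recall-style definitions of accuracy and completeness are assumptions inferred from the captions, not the paper's stated formulas, and degrade_sample is an illustrative name.

```python
import random

def degrade_sample(sample, all_nodes, drop_frac, noise_frac, rng):
    """Build a probe missing drop_frac of the sample's nodes and
    containing noise_frac extra nodes from outside it (Figs 10-12)."""
    kept = set(rng.sample(sorted(sample), round(len(sample) * (1 - drop_frac))))
    noise = set(rng.sample(sorted(all_nodes - sample),
                           round(len(sample) * noise_frac)))
    return kept | noise

def accuracy(retrieved, truth):
    """Fraction of retrieved nodes that belong to the stored sample."""
    return len(retrieved & truth) / len(retrieved) if retrieved else 0.0

def completeness(retrieved, truth):
    """Fraction of the stored sample that retrieval recovered."""
    return len(retrieved & truth) / len(truth)

rng = random.Random(0)
sample = {f"v{i}" for i in range(50)}
universe = {f"v{i}" for i in range(1000)}
probe = degrade_sample(sample, universe, drop_frac=0.2, noise_frac=0.1, rng=rng)
```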
Table 2.
Restoration schemes for the index table after damage.
Fig 13.
The impact of partial node damage on sample retrieval.
(a) The change in accuracy in a sparse graph. (b) The change in completeness in a sparse graph. (c) The change in accuracy in a dense graph. (d) The change in completeness in a dense graph.
Fig 14.
The impact of partial directed edge damage on sample retrieval.
(a) The change in accuracy in a sparse graph. (b) The change in completeness in a sparse graph. (c) The change in accuracy in a dense graph. (d) The change in completeness in a dense graph.
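A minimal sketch of the damage conditions of Figs 13 and 14, assuming damage means uniformly random removal of a fraction of nodes or directed edges from a networkx DiGraph; the function name and signature are mine.

```python
import random
import networkx as nx

def damage(g, frac, kind="edges", seed=None):
    """Return a copy of g with a fraction frac of its nodes (Fig 13) or
    directed edges (Fig 14) removed uniformly at random."""
    rng = random.Random(seed)
    h = g.copy()
    if kind == "nodes":
        victims = rng.sample(sorted(h.nodes), round(frac * h.number_of_nodes()))
        h.remove_nodes_from(victims)
    else:
        victims = rng.sample(sorted(h.edges), round(frac * h.number_of_edges()))
        h.remove_edges_from(victims)
    return h
```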
Fig 15.
Six classic network structures.
(a) ER random graph with p = 0.1. (b) Globally coupled network. (c) Nearest-neighbor coupled network with L = 1. (d) Star coupled network. (e) Kleinberg network. (f) Price network.
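All six structures of Fig 15 have standard networkx generators or close stand-ins: navigable_small_world_graph for Kleinberg's grid model and scale_free_graph as a directed preferential-attachment proxy for the Price network; n = 100 is an assumed size.

```python
import networkx as nx

n = 100  # assumed size; the paper's network scale may differ

nets = {
    "ER random (p = 0.1)":         nx.erdos_renyi_graph(n, 0.1),
    "globally coupled":            nx.complete_graph(n),
    "nearest-neighbor (L = 1)":    nx.circulant_graph(n, [1]),
    "star coupled":                nx.star_graph(n - 1),
    "Kleinberg (10 x 10 grid)":    nx.navigable_small_world_graph(10),
    "Price (scale-free stand-in)": nx.scale_free_graph(n),
}
for name, g in nets.items():
    print(f"{name:30s} |V| = {g.number_of_nodes():4d}  |E| = {g.number_of_edges()}")
```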
Table 3.
Comparison of classic network models.
Fig 16.
Network storage examples. (a) Storage example in a Kleinberg network. (b) Storage example in a nearest-neighbor coupled network.
Fig 17.
Comparison of the capacities of three models (sample node scale of 50).
Fig 18.
Comparison of the capacities of three models (sample node scale of 200).