Turing complete neural computation based on synaptic plasticity

In neural computation, the essential information is generally encoded into the neurons via their spiking configurations, activation values or (attractor) dynamics. The synapses and their associated plasticity mechanisms are, by contrast, mainly used to process this information and implement the crucial learning features. Here, we propose a novel Turing complete paradigm of neural computation where the essential information is encoded into discrete synaptic states, and the updating of this information is achieved via synaptic plasticity mechanisms. More specifically, we prove that any 2-counter machine, and hence any Turing machine, can be simulated by a rational-weighted recurrent neural network employing spike-timing-dependent plasticity (STDP) rules. The computational states and counter values of the machine are encoded into discrete synaptic strengths. The transitions between those synaptic weights are then achieved via STDP. These considerations show that a Turing complete synaptic-based paradigm of neural computation is theoretically possible and potentially exploitable. They support the idea that synapses are not only crucially involved in information processing and learning, but also in the encoding of essential information. This approach represents a paradigm shift in the field of neural computation.


Introduction
How does the brain compute? How do biological neural networks encode and process information? What are the computational capabilities of neural networks? Can neural networks implement abstract models of computation? Understanding the computational and dynamical capabilities of neural systems is a crucial issue with significant implications in computational and system neuroscience, artificial intelligence, machine learning, bio-inspired computing, robotics, but also theoretical computer science and philosophy.
In 1943, McCulloch and Pitts proposed the concept of an artificial neural network (ANN) as an interconnection of neuron-like logical units [1]. This computational model significantly contributed to the development of two research directions: (1) Neural Computation, which studies the processing and coding of information as well as the computational capabilities of various kinds of artificial and biological neural models; (2) Machine Learning, which concerns the development and utilization of neural network algorithms in Artificial Intelligence (AI). The proposed study lies within the first of these two approaches. In this context, the computational capabilities of diverse kinds of neural networks have been shown to range from the finite automaton degree [1][2][3] up to the Turing [4] or even to the super-Turing levels [5][6][7] (see [8] for a survey of complexity theoretic results). In short, Boolean recurrent neural networks are computationally equivalent to finite state automata; analog neural networks with rational synaptic weights are Turing complete; and analog neural nets with real synaptic weights, as well as evolving neural nets, are capable of super-Turing capabilities (cf. Table 1). These theoretical results have later been improved, motivated by the possibility of implementing finite state machines on electronic hardware (see for instance [9][10][11][12][13]). Around the same time, the computational power of spiking neural networks (instead of sigmoidal ones) has also been extensively studied [14,15]. More recently, the study of P systems (parallel abstract models of computation inspired by the membrane structure of biological cells) has become a highly active field of research [16][17][18].
Concerning the second direction, Turing himself brilliantly anticipated the two concepts of learning and training that would later become central to machine learning [36]. These ideas were realized with the introduction of the perceptron [37], which gave rise to the algorithmic conception of learning [38][39][40]. Despite some early limitation issues [41], the development of artificial neural networks has steadily progressed since then. Nowadays, artificial neural networks represent one of the most powerful classes of algorithms in machine learning, thanks to their highly efficient training capabilities. In particular, deep learning methods (multilayer neural networks that can learn in supervised and/or unsupervised manners) have achieved impressive results in numerous different areas (see [42] for a brilliant survey and the references therein).
These approaches share a common and certainly sensible conception of neural computation that could be qualified as a neuron-based computational framework. According to this conception, the essential information is encoded into the neurons, via their spiking configurations, activation values or (attractor) dynamics. The synapses and their associated plasticity mechanisms are, by contrast, essentially used to process this information and implement the crucial learning features. For instance, in the simulation of abstract machines by neural networks, the computational states of the machines are encoded into activation values or spiking patterns of neurons [8]. Similarly, in most if not all deep learning algorithms, the input, output and intermediate information is encoded into activation values of input, output and hidden (layers of) neurons, respectively [42]. But what if the synaptic states would also play a crucial role in the encoding of information? What if the role of the synapses would not only be confined to the processing of information and learning processes, as crucial as these features might be? In short, what about a synaptic-based computational framework?

Table 1. Computational power of various models of recurrent neural networks. FSA, TM and TM/poly(A) stand for finite state automata, Turing machines and Turing machines with polynomial advice (which are super-Turing), respectively. REG, P and P/poly are the complexity classes decided in polynomial time by these three models of computation. The results in the case of classical computation can be found in [1][2][3][4][5][6][7][19][20][21][22][23][24]. Results in alternative infinite computational frameworks have also been obtained [25][26][27][28][29][30][31][32][33][34][35].
In biology, the various mechanisms of synaptic plasticity provide "the basis for most models of learning, memory and development in neural circuits" [43]. Spike-timing-dependent plasticity (STDP) refers to the biological Hebbian-like learning process according to which the synapses' strengths are adjusted based on the relative timings of the presynaptic and postsynaptic spikes [38,44,45]. It is widely believed that STDP "underlies several learning and information storage processes in the brain, as well as the development and refinement of neuronal circuits during brain development" (see [46] and the references therein). In particular, fundamental neuronal structures like synfire chains [47][48][49][50][51] (pools of successive layers of neurons strongly connected from one stratum to the next by excitatory connections), synfire rings [52] (looping synfire chains) and polychronous groups [53] (groups of neurons capable of generating time-locked reproducible spike-timing patterns) have all been observed to emerge in self-organizing neural networks employing various STDP mechanisms [52][53][54][55]. On another level, regarding STDP mechanisms, it has been shown that synapses might change their strengths by jumping between discrete mechanistic states, rather than by simply moving up and down in a continuum of efficacy [56].
Based on these considerations, we propose a novel Turing complete synaptic-based paradigm of neural computation. In this framework, the essential information is encoded into discrete synaptic states instead of neuronal spiking patterns, activation values or dynamics. The updating of this information is then achieved via synaptic plasticity mechanisms. More specifically, we prove that any 2-counter machine, and hence any Turing machine, can be simulated by a rational-weighted recurrent neural network subjected to STDP. The computational states and counter values of the machine are encoded into discrete synaptic strengths. The transitions between those synaptic weights are achieved via STDP. These results show that a Turing complete synaptic-based paradigm of computation is theoretically possible and potentially exploitable. They support the idea that synapses are not only crucially involved in information processing and learning features, but also in the encoding of essential information in the brain. This approach represents a paradigm shift in the field of neural computation.
The possible impacts of these results are both practical and theoretical. In the field of neuromorphic computing, our synaptic-based paradigm of neural computation might lead to the realization of novel analog neuronal computers implemented on VLSI technologies. Regarding AI, our approach might lead to the development of new machine learning algorithms. On a conceptual level, the study of neuro-inspired paradigms of abstract computation might improve the understanding of both biological and artificial intelligences. These aspects are discussed in the conclusion.

Recurrent neural networks
A recurrent neural network N consists of Boolean input cells u_j and internal cells x_i (Boolean or analog), whose activation values are updated according to the equation

x_i(t + 1) = f( Σ_j a_ij(t) · x_j(t) + Σ_j b_ij(t) · u_j(t) + c_i(t) )     (1)

where a_ij(t), b_ij(t) ∈ Q are the rational weights of the synaptic connections from x_j to x_i and from u_j to x_i at time t, respectively, c_i(t) ∈ Q is the rational bias of cell x_i at time t, and f is either the hard-threshold activation function θ or the linear sigmoid activation function σ defined by

θ(x) = 1 if x ≥ 1 and θ(x) = 0 otherwise;     σ(x) = 0 if x < 0, σ(x) = x if 0 ≤ x ≤ 1, and σ(x) = 1 if x > 1.

A neuron is called Boolean or analog depending on whether its activation value is computed by the function θ or σ, respectively. For any Boolean input stream u = u(0)u(1)u(2)⋯, the computation of N over input u is the sequence of internal states N(u) = x(0)x(1)x(2)⋯, where x(0) = 0 and the components of x(t) are given by Eq (1), for each t > 0. A simple recurrent neural network is illustrated in Fig 1.

A spike-timing-dependent plasticity (STDP) rule modifies the synaptic weights a_ij(t) according to the spiking patterns of the presynaptic and postsynaptic cells x_j and x_i [45]. Here, we consider two STDP rules. The first one is a classical generalized Hebbian rule [38]. It allows the synaptic weights to vary across finitely many values comprised between two bounds a_min and a_max (0 < a_min < a_max < 1). The rule is given as follows:

a_ij(t + 1) = min(a_ij(t) + η, a_max)   if ⌊x_j(t − 1)⌋ = 1 and ⌊x_i(t)⌋ = 1
a_ij(t + 1) = max(a_ij(t) − η, a_min)   if ⌊x_i(t − 1)⌋ = 1 and ⌊x_j(t)⌋ = 1     (2)
a_ij(t + 1) = a_ij(t)                   otherwise

where ⌊x⌋ denotes the floor of x (the greatest integer less than or equal to x) and η > 0 is the learning rate. Accordingly, the synaptic weight a_ij(t) is incremented (resp. decremented) by η at time t + 1 if the presynaptic cell x_j spikes 1 time step before (resp. after) the postsynaptic cell x_i. The floor function is used to truncate the activation values of analog neurons to their integer part, if needed. The synaptic weights enabled by this rule are illustrated in Fig 2. In the sequel, this STDP rule will be used to encode the transitions between the finitely many computational states of the machine to be simulated. The second rule is an adaptation to our context of a classical Hebbian rule.
It allows the synaptic weights to vary across the infinitely many values of the sequence β = (r_n)_{n≥0}, where r_n = Σ_{i=1}^{n} 1/2^i and r_0 = 0. The rule is given as follows:

a_ij(t + 1) = (a_ij(t) + 1)/2   if x_j(t − 1) = 1 and x_i(t) = 1
a_ij(t + 1) = max(2 · a_ij(t) − 1, 0)   if x_i(t − 1) = 1 and x_j(t) = 1     (3)
a_ij(t + 1) = a_ij(t)   otherwise

As for the previous one, the synaptic weight a_ij(t) is incremented (resp. decremented) at time t + 1 if the presynaptic cell x_j spikes 1 time step before (resp. after) the postsynaptic cell x_i. But in this case, the synaptic weight varies across the infinitely many successive values of the sequence β. For instance, if a_ij(t) = 1/2 + 1/4 + 1/8 = 0.875 is incremented (resp. decremented) by the STDP rule, then a_ij(t + 1) = 1/2 + 1/4 + 1/8 + 1/16 = 0.9375 (resp. a_ij(t + 1) = 1/2 + 1/4 = 0.75). Here, the floor functions are removed, since this rule will only be applied to synaptic connections between Boolean neurons. The synaptic weights enabled by this rule are illustrated in Fig 2. In the sequel, this STDP rule will be used to encode the variations among the infinitely many possible counter values of the machine to be simulated.

Fig 1 (caption). Excitatory and inhibitory connections are represented as red and blue arrows, respectively. Cells u_1, u_2, x_1, x_2 are Boolean (activation function θ) whereas x_3 is analog (activation function σ). Over the Boolean input u = (1, 1)^T (1, 0)^T (0, 1)^T, the network's computation is N(u) = (0, 0, 0)^T ⋯

Finite state automata

A deterministic finite state automaton (FSA) is a tuple A = (Q, Σ, δ, q_0, F), where:
• Q = {q_0, . . ., q_{n−1}} is a finite set of computational states;
• Σ is an alphabet of input symbols;
• δ : Q × Σ → Q is a transition function;
• q_0 ∈ Q is the initial state;
• F ⊆ Q is the set of final states.
Each transition δ(q, a) = q′ signifies that if the automaton is in state q ∈ Q and reads input symbol a ∈ Σ, then it will move to state q′ ∈ Q. For any input w = a_0 a_1 ⋯ a_p ∈ Σ*, the computation of A over w is the finite sequence of states q_{i_0}, q_{i_1}, . . ., q_{i_{p+1}} such that q_{i_0} = q_0 and δ(q_{i_j}, a_j) = q_{i_{j+1}}, for j = 0, . . ., p. Such a computation is usually denoted as

q_{i_0} →(a_0) q_{i_1} →(a_1) ⋯ →(a_p) q_{i_{p+1}}.

Input w is said to be accepted (resp. rejected) by automaton A if the last state q_{i_{p+1}} of the computation belongs (resp. does not belong) to F.
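The automaton semantics above can be sketched in a few lines of Python (a toy illustration; the function and state names are ours, not from the paper):

```python
def run_automaton(delta, q0, finals, w):
    """Run a deterministic finite automaton over the input string w.
    delta maps (state, symbol) -> next state; the input is accepted
    iff the last state of the computation belongs to finals."""
    q = q0
    for a in w:
        q = delta[(q, a)]
    return q in finals

# Toy automaton accepting binary strings with an even number of 1's.
parity = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd", ("odd", "1"): "even",
}
```

For instance, `run_automaton(parity, "even", {"even"}, "101")` follows the computation even → odd → odd → even and accepts.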

Counter machines
A counter machine is a finite state automaton provided with additional counters [57]. The counters are used to store integers. They can be pushed (incremented by 1), popped (decremented by 1) or kept unchanged. At each step, the machine determines its next computational state according to its current input symbol, computational state and counters' states, i.e., whether the counters are zero or non-zero. Formally, a deterministic k-counter machine (CM) is a tuple C_k = (Q, Σ, C, O, δ, q_0, F), where:
• Q = {q_0, . . ., q_{n−1}} is a finite set of computational states;
• Σ is an alphabet of input symbols not containing the empty symbol ε (recall that the empty symbol satisfies εw = wε = w, for any string w ∈ Σ*);
• C = {⊥, ⊤} is the set of counter states, where ⊥ and ⊤ represent the zero and non-zero states, respectively;
• N is the set of counter values (it does not need to appear in the tuple C_k);
• O = {push, pop, −} is the set of counter operations;
• q_0 ∈ Q is the initial state;
• F ⊆ Q is the set of final states.
The value and state of counter j are denoted by c_j and c̄_j, respectively, for j = 1, . . ., k. (In the sequel, certain cells will also be denoted by c_j's and c̄_j's. Whether these notations designate counter values and states or specific cells will be clear from the context.) The "bar function" (c ↦ c̄) retrieves the counter's state from its value. It is naturally defined by c̄_j = ⊥ if c_j = 0 and c̄_j = ⊤ if c_j > 0. The value of counter j after application of operation o_j ∈ O is denoted by o_j(c_j). The counter operations influence the counter values in the following natural way:

push(c_j) = c_j + 1;     pop(c_j) = c_j − 1 if c_j > 0, and 0 otherwise;     −(c_j) = c_j.

Each transition δ(q, a, c̄_1, . . ., c̄_k) = (q′, o_1, . . ., o_k) signifies that if the machine is in state q ∈ Q, reads the regular or empty input symbol a ∈ Σ ∪ {ε} and has its k counters in states c̄_1, . . ., c̄_k ∈ C, then it will move to state q′ ∈ Q and perform the k counter operations o_1, . . ., o_k ∈ O. Depending on whether a ∈ Σ or a = ε, the corresponding transition is called a regular transition or an ε-transition, respectively. We assume that δ is a partial (rather than a total) function. Importantly, determinism is expressed by the fact that the machine can never face a choice between a regular and an ε-transition, i.e., for any q ∈ Q, any a ∈ Σ and any c̄_1, . . ., c̄_k ∈ C, if δ(q, a, c̄_1, . . ., c̄_k) is defined, then δ(q, ε, c̄_1, . . ., c̄_k) is undefined [57].
For any input w = a_0 a_1 ⋯ a_p ∈ Σ*, the computation of a k-counter machine C_k over input w can be described as follows. For each successive input symbol a_i ∈ Σ, before trying to process a_i, the machine first tests whether an ε-transition is possible. If this is the case, it performs this transition. Otherwise, it tests whether the regular transition associated with a_i is possible, and if so, performs it. The determinism condition ensures that a regular and an ε-transition are never possible at the same time. When no more transitions can be performed, the machine stops.
For any input w = a_0 a_1 ⋯ a_p ∈ Σ*, the computation of C_k over w is the unique finite or infinite sequence of states, symbols and counter values encountered by C_k while reading the successive bits of w, possibly interspersed with ε symbols. The formal definition involves the following notions.
An instantaneous description of C_k is a tuple (q, w, c_1, . . ., c_k) ∈ Q × Σ* × N^k. For any empty or non-empty symbol a_0 ∈ Σ ∪ {ε} and any w ∈ Σ*, the relation "⊢" over the set of instantaneous descriptions is defined by

(q, a_0 w, c_1, . . ., c_k) ⊢ (q′, w, o_1(c_1), . . ., o_k(c_k))   whenever   δ(q, a_0, c̄_1, . . ., c̄_k) = (q′, o_1, . . ., o_k).

Note that depending on whether a_0 = ε or a_0 ∈ Σ, the relation "⊢" is determined by an ε-transition or a regular transition, respectively. (Note also that when a_0 = ε, one has a_0 w = εw = w, and in this case, the relation "⊢" keeps w unchanged.) For any input w = a_0 a_1 ⋯ a_p ∈ Σ*, the determinism of C_k ensures that there is a unique finite or infinite sequence of instantaneous descriptions ((q_{n_i}, w_i, c_{1,i}, . . ., c_{k,i}))_{i=0}^{l}, with l ∈ N ∪ {∞}, such that (q_{n_0}, w_0, c_{1,0}, . . ., c_{k,0}) = (q_0, w, 0, . . ., 0) is the initial instantaneous description, and (q_{n_i}, w_i, c_{1,i}, . . ., c_{k,i}) ⊢ (q_{n_{i+1}}, w_{i+1}, c_{1,i+1}, . . ., c_{k,i+1}), for all i < l. Then, the computation of C_k over w, denoted by C_k(w), is the finite or infinite sequence

C_k(w) = ((q_{n_i}, a′_i, c_{1,i}, . . ., c_{k,i}))_{i=0}^{l}

where a′_i = ε if w_i = w_{i+1} (case of an ε-transition), and a′_i is the first bit of w_i otherwise (case of a regular transition), for all i < l. Note that the computation over w can take more than |w| = p + 1 steps, and can even be infinite, due to the use of ε-transitions. The input w ∈ Σ* is said to be accepted by C_k if the computation of the machine over w is finite, consumes all letters of w and stops in a state of F, i.e., if a′_l = ε and q_{n_l} ∈ F. It is rejected otherwise. The set of all inputs accepted by C_k is the language recognized by C_k.
It is known that 1-counter machines are strictly more powerful than finite state automata, and that k-counter machines are computationally equivalent to Turing machines (i.e., Turing complete), for any k ≥ 2 [57]. However, the class of k-counter machines that do not make use of ε-transitions is not Turing complete. For this reason, the simulation of ε-transitions by our neural networks will be essential for achieving Turing completeness.
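To make the semantics above concrete, here is a minimal Python sketch of a deterministic k-counter machine interpreter, with ε-transitions tried before regular ones as described; all names are ours, and a step cap guards against infinite ε-loops:

```python
def run_counter_machine(delta, k, q0, finals, w, max_steps=10_000):
    """Simulate a deterministic k-counter machine.
    delta maps (state, symbol, counter_states) -> (state', ops), where
    symbol is an input letter or "" (the empty symbol eps), counter_states
    is a tuple of booleans (True = non-zero), and each op is "push",
    "pop" or "-".  An eps-transition is tried before a regular one.
    Accepts iff the machine halts in a final state with all input read."""
    q, counters, i = q0, [0] * k, 0
    for _ in range(max_steps):
        bar = tuple(c > 0 for c in counters)
        if (q, "", bar) in delta:                      # eps-transition first
            q, ops = delta[(q, "", bar)]
        elif i < len(w) and (q, w[i], bar) in delta:   # then a regular one
            q, ops = delta[(q, w[i], bar)]
            i += 1
        else:                                          # no transition: halt
            break
        for j, op in enumerate(ops):
            if op == "push":
                counters[j] += 1
            elif op == "pop":
                counters[j] = max(counters[j] - 1, 0)
    return q in finals and i == len(w)

# Toy 1-counter machine recognizing { 0^n 1^n : n > 0 } (states are ours).
delta = {
    ("q0", "0", (False,)): ("q0", ("push",)),
    ("q0", "0", (True,)):  ("q0", ("push",)),
    ("q0", "1", (True,)):  ("q1", ("pop",)),
    ("q1", "1", (True,)):  ("q1", ("pop",)),
    ("q1", "",  (False,)): ("qf", ("-",)),
}
```

Note how the final ε-transition (q1, ε, ⊥) → qf fires exactly when the counter returns to zero, which a regular-transition-only machine could not detect at the right moment.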
A k-counter machine can also be represented as a directed graph, as illustrated in Fig 4. The 2-counter machine depicted there recognizes a language that is recursively enumerable but not context-free, i.e., it can be recognized by some Turing machine, yet by no pushdown automaton. Note that this 2-counter machine contains ε-transitions.

Results
We show that any k-counter machine can be simulated by a recurrent neural network composed of Boolean and analog neurons, and using the two STDP rules described by Eqs (2) and (3). In this computational paradigm, the states and counter values of the machine are encoded into specific synaptic weights of the network. The transitions between those states and counter values are reflected by an evolution of the corresponding synaptic weights. Since 2-counter machines are computationally equivalent to Turing machines, these results show that the proposed STDP-based recurrent neural networks are Turing complete.

Construction
We provide an algorithmic construction which takes the description of a k-counter machine C k as input and provides a recurrent neural network N that simulates C k as output. The network N is constructed by assembling several modules together: an input encoding module, an input transmission module, a state module, k counter modules and several detection modules. These modules are described in detail in the sequel. The global behaviour of N can be summarized as follows.
1. The computational state and k counter values of C k are encoded into specific synaptic weights belonging to the state module and counter modules of N , respectively.
2. At the beginning of the simulation, N receives its input stream via successive activations of its input cells belonging to the input encoding module. Meanwhile, this module encodes the whole input stream into a single rational number, and stores this number into the activation value of a sigmoid neuron.
3. Then, each time the so-called tic cell of the input encoding module is activated, N triggers the simulation of one computational step of C k .
a. First, it attempts to simulate an ε-transition of C_k by activating the cell u_ε of the input transmission module. If such a transition is possible in C_k, then N simulates it.
b. Otherwise, a signal is sent back to the input encoding module. This module then retrieves the last input bit a stored in its memory, and attempts to simulate the regular transition of C_k associated with a by activating the cell u_a of the input transmission module. If such a transition is possible in C_k, then N simulates it.
Fig 4 (caption). In other words, if the machine is in computational state q, reads input a and has counter states c̄_1, c̄_2, then it moves to computational state q′ and performs the counter operations o_1, o_2. This 2-counter machine recognizes the language {0^n 1^n 0^n : n > 0}, i.e., the sequences of bits beginning with a strictly positive number of 0's followed by the same number of 1's and followed by the same number of 0's again. https://doi.org/10.1371/journal.pone.0223451.g004

4. The network N simulates a transition of C_k as follows: first, it retrieves the current computational state and k counter values of C_k encoded into k + 1 synaptic weights by means of its detection modules. Based on this information, it sends specific signals to the state module and counter modules. These signals update specific synaptic weights of these modules in such a way as to encode the new computational state and counter values of C_k.
The general architecture of N is illustrated in Fig 5. The general functionalities of the modules are summarized in Table 2. The following sections are devoted to the detailed description of the modules, as well as to the proof of correctness of the construction.
Stack encoding. In the sequel, each binary input stream will be piled up into a "binary stack". In this way, the input stream can be stored by the network, and then processed bit by bit at successive time steps interspersed by constant intervals. The construction of the stack is achieved by "pushing" the successive incoming bits into it. The stack is encoded as a rational number stored in the activation value of one (or several) analog neurons. The pushing and popping stack operations can be simulated by simple analog neural circuits [4]. We now present these notions in detail.
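One standard way to realize such a rational stack encoding (in the spirit of the constructions of [4]) is the base-4 scheme sketched below; the helper names are ours, and exact rationals are used for clarity:

```python
from fractions import Fraction

# Base-4 encoding of a binary stack a1...ap (a1 = top) as the rational
# sum_i (2*a_i + 1) / 4^i: encodings lie in [1/4, 1/2) when the top bit
# is 0, and in [3/4, 1) when it is 1; the empty stack is encoded by 0.

EMPTY = Fraction(0)

def push(r, bit):
    """Push a bit on the stack: r -> r/4 + (2*bit + 1)/4."""
    return r / 4 + Fraction(2 * bit + 1, 4)

def top(r):
    """Read the top bit without modifying the stack."""
    return 1 if r >= Fraction(3, 4) else 0

def pop(r):
    """Remove the top bit: r -> 4*r - (2*top(r) + 1)."""
    return 4 * r - (2 * top(r) + 1)
```

Each of these maps is affine in r with saturation, which is why they can be realized by small circuits of analog neurons [4].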
Input encoding module. The input encoding module is used for two purposes: pile up the successive input bits into a stack, and implement a "tic mechanism" which triggers the simulation of one computational step of the counter machine by the network. These two processes are described in detail below. This module (the most intricate one) has been designed on the basis of the previous considerations about stack encoding, involving neural circuits that implement the "push", "top" and "pop" operations. It is composed of 31 cells in_0, in_1, end, tic, c_1, . . ., c_20, d_1, . . ., d_7, some of which are Boolean and others analog, as illustrated in Fig 6. It is connected to the input transmission module and the detection modules described below.
The three Boolean cells in_0, in_1 and end are input cells of the network. They are used to transmit the successive input bits to the network. The transmission of input 0 or 1 is represented by a spike of cell in_0 or in_1, respectively. At the end of the input stream, cell end spikes to indicate that all inputs have been processed.
The activity of this module, illustrated in Fig 7, can be described as follows. Suppose that the input stream a_1 ⋯ a_p is transmitted to the network. While the bits a_1, . . ., a_p are being received, the module builds the stack γ = a_1 ⋯ a_p, and stores its encoding r_γ into the activation value of an analog neuron. To achieve this, the module first pushes every incoming input a_i into a stack γ′ (first 'push' circuit in Fig 6). Since pushed elements are by definition added on the top of the stack, γ′ consists of the elements a_1, . . ., a_p in reverse order, i.e., γ′ = a_p ⋯ a_1. The encoding r_γ′ of stack γ′ is stored in cell c_1. Then, the module pops the elements of γ′ from top to bottom (first 'pop' circuit in Fig 6), and pushes them into another stack γ (second 'push' circuit in Fig 6). After completion of this process, γ consists of the elements a_1, . . ., a_p in the right order, i.e., γ = a_1 ⋯ a_p. The encoding r_γ of stack γ is stored in cell c_14.

Table 2. Modules composing the STDP-based recurrent neural network that simulates a k-counter machine.

INPUT ENCODING
• Store the successive input bits into a "stack".
• Implement a "tic mechanism" which triggers the simulation of one computational step of the machine.
The Boolean cell tic is also an input cell. Each activation of this cell triggers the simulation of one computational step of the counter machine by the network. When the tic cell spikes, it sends a signal to cell u_ε of the next input transmission module. The activation of u_ε attempts to launch the simulation of an ε-transition of the machine. If, according to the current computational and counter states of the machine, an ε-transition is possible, then the network simulates it via its other modules, and at the same time, sends an inhibitory signal to c_15. Otherwise, after some delay ('delays' circuit in Fig 6), cell c_15 is activated. This cell triggers a sub-circuit that pops the current stack γ (second 'pop' circuit in Fig 6) and transmits its top element a ∈ {0, 1} to cell u_a of the next input transmission module. Then, the activation of u_a launches the simulation of a regular transition of the machine associated with input symbol a, via the other modules of the network.
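The tic-triggered control flow (try an ε-transition first, otherwise pop the input stack and try the regular transition) can be sketched abstractly as follows; the toy machine and all names are ours:

```python
def tic_step(delta, q, counters, stack):
    """One simulated computational step, as triggered by the tic cell:
    try an eps-transition first; otherwise pop the next input symbol
    from the stack and try the corresponding regular transition.
    Returns (q, counters, stack, fired); fired is False when no
    transition applies, i.e. when the simulation stops."""
    bar = tuple(c > 0 for c in counters)
    if (q, "", bar) in delta:                  # eps-transition possible
        q, ops = delta[(q, "", bar)]
    elif stack and (q, stack[0], bar) in delta:
        q, ops = delta[(q, stack[0], bar)]
        stack = stack[1:]                      # consume the top input bit
    else:
        return q, counters, stack, False
    counters = [c + 1 if op == "push" else max(c - 1, 0) if op == "pop" else c
                for c, op in zip(counters, ops)]
    return q, counters, stack, True

# Toy machine: push on reading '0', then an eps-move pops and halts.
delta = {
    ("q0", "0", (False,)): ("q0", ("push",)),
    ("q0", "",  (True,)):  ("qf", ("pop",)),
}
```

Iterating `tic_step` until `fired` is False mirrors the repeated activations of the tic cell during a simulation.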
The module is composed of several sub-circuits that implement the top(), push() and pop() operations described previously, as shown in Fig 6. An input encoding module is denoted as input_encoding_module().
Input transmission module. The input transmission module is used to transmit to the network the successive input bits sent by the previous input encoding module. The module simply consists of 3 Boolean input cells u_0, u_1, u_ε followed by 3 layers of Boolean delay cells. It is connected to the input encoding module described above, and to the state module, counter modules and detection modules described below. The activation of cell u_0, u_1 or u_ε simulates the reading of input symbol 0, 1 or ε by the counter machine, respectively.

Fig 6 (caption). First of all, at successive time steps, cell in_0 or in_1 spikes depending on whether input 0 or 1 is received. Then, cell end spikes to indicate that all input bits have been processed. Meanwhile, the successive bits are pushed into a stack γ′ whose encoding is held by c_1 (first 'push' circuit). After all bits have been pushed, γ′ contains all input bits in reverse order. Subsequently, c_2, . . ., c_7 pop every element of γ′ (first 'pop' circuit). Cell c_8 or c_9 spikes iff the popped element is a 0 or a 1, respectively. Afterwards, cells c_10, c_11 push these elements back into a new stack, in order to build the reversed stack γ (second 'push' circuit). The encoding of γ is transferred to and held by c_12 and c_13 at alternating time steps ('copy' circuit), and then held by c_14 at every time step. After completion of this process, γ contains all input bits in the original order. Besides this, each time the tic cell spikes, it triggers the simulation of one computational step of the counter machine by the network. First, it attempts to simulate an ε-transition by activating cell u_ε of the next module. If this simulation step fails, cell c_15 is activated after some delay ('delays' circuit), which represents a signal telling that the top element of stack γ, instead of ε, has to be given as next input symbol. In this case, c_14, c_16 ⋯
Each time such a cell is activated, the information propagates along the delay cells of the corresponding row. An input transmission module is denoted as input_transmission_module().
Fig 7 (caption). In this simulation, the input stream 001101 and the "end of input" signal are transmitted via cells in_0, in_1, end at successive time steps 0, 1, 2, . . ., 7 (blue pattern). The successive input bits are first piled up in reverse order into a stack γ′ whose encoding is stored as the activation value of c_1, and then piled up again in the right order into a stack γ whose encoding is stored as the activation value of c_14. The activation values of c_1 and c_14 over time are represented by the orange and red curves in the upper graph, respectively. Then, the tic cell spikes every 15 time steps from t = 20 onwards (blue pattern). Each such spike triggers the sub-circuit that pops stack γ and outputs its top element, 0 or 1, by activating cell c_19 or c_20 10 time steps later, respectively. We see that the successive input bits, namely 0, 0, 1, 1, 0, 1, 0 (blue pattern), are correctly output by cells c_19 or c_20 (red pattern). https://doi.org/10.1371/journal.pone.0223451.g007

State module. In our model, the successive computational states of the counter machine are encoded as rational numbers, and stored as successive weights of a designated synapse w_s(t) (subscript s refers to 'state'). More precisely, the fact that the machine is in state q_k is encoded by the rational weight w_s(t) = a_min + k · η, for k = 0, . . ., n − 1, where a_min and η are parameters of the STDP rule given by Eq (2). Hence, the change in computational state of the machine is simulated by incrementing or decrementing w_s(t) in a controlled manner. This process is achieved by letting w_s(t) be subjected to the STDP rule of Eq (2), and by triggering specific spiking patterns of the presynaptic and postsynaptic cells of w_s(t).
The state module is designed to implement these features. It is composed of a Boolean presynaptic cell pre_s connected to an analog postsynaptic cell post_s by a synapse of weight w_s(t), as well as of 6(n − 1) Boolean cells c_1, . . ., c_{3(n−1)} and c̄_1, . . ., c̄_{3(n−1)} (for some n to be specified), as illustrated in Fig 9. The synaptic weight w_s(t) is subjected to the STDP rule of Eq (2), and has an initial value of w_s(0) = a_min. The architecture of the module ensures that the activation of cell c_{3k+1} or c̄_{3k+1} triggers successive specific spiking patterns of pre_s and post_s which, according to STDP (Eq (2)), increment or decrement w_s(t) by (n − 1 − k) · η, for any 0 ≤ k ≤ n − 2, respectively (for instance, if k = 0, then w_s(t) is incremented or decremented by (n − 1) · η, whereas if k = n − 2, then w_s(t) is only incremented or decremented by 1 · η). The module is linked to the input transmission module described above and to the detection modules described below.

Fig 9 (caption). The module is composed of the cells pre_s and post_s, whose synaptic weight w_s(t) is subjected to the STDP rule of Eq (2), as well as of 6(n − 1) Boolean cells c_1, . . ., c_{3(n−1)} and c̄_1, . . ., c̄_{3(n−1)}. The latter cells project onto pre_s and post_s via excitatory and inhibitory synapses. To increment (resp. decrement) the value of w_s(t) by (n − 1 − k) · η (where η is the learning rate of the STDP rule of Eq (2)), it suffices to activate the blue cell c_{3k+1} (resp. cell c̄_{3k+1}), where 0 ≤ k ≤ n − 2. https://doi.org/10.1371/journal.pone.0223451.g009
The activity of this module, illustrated in Fig 10, can be described as follows. Suppose that at time step t, one has w_s(t) = v and one wishes to increment (resp. decrement) w_s(t) by (n − 1 − k) · η, where η is the learning rate of the STDP rule of Eq (2) and 0 ≤ k ≤ n − 2. To achieve this, we activate the cell c_{3k+1} (resp. cell c̄_{3k+1}) (a blue cell of Fig 9). The activation of c_{3k+1} (resp. cell c̄_{3k+1}) launches a chain of activations of the next cells (red events in Fig 10), which, according to the connectivity of the module, induces n − 1 − k successive pairs of spikes of pre_s followed by post_s (resp. post_s followed by pre_s) (blue events in Fig 10). Thanks to the STDP rule of Eq (2), these spiking patterns increment (resp. decrement) n − 1 − k times the value of w_s(t) by an amount of η. A state module with 6(n − 1) + 2 cells is denoted as state_module(n − 1).
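A minimal sketch of the state-module encoding scheme, with illustrative values for a_min, a_max and η (ours, not taken from the text); the helper names are hypothetical:

```python
A_MIN, A_MAX, ETA = 0.25, 0.75, 0.05   # illustrative STDP parameters

def encode_state(k):
    """State q_k is encoded by the synaptic weight a_min + k * eta."""
    return A_MIN + k * ETA

def decode_state(w):
    """Recover the state index from the weight w_s(t)."""
    return round((w - A_MIN) / ETA)

def stdp_step(w, pre_before_post):
    """One bounded Hebbian update in the style of Eq (2):
    +eta or -eta, clipped to the interval [a_min, a_max]."""
    return min(w + ETA, A_MAX) if pre_before_post else max(w - ETA, A_MIN)

def goto_state(w, k_new):
    """Drive w_s to the encoding of q_{k_new} by repeated STDP steps,
    mimicking the chains of pre/post spike pairs of the state module."""
    while decode_state(w) < k_new:
        w = stdp_step(w, True)
    while decode_state(w) > k_new:
        w = stdp_step(w, False)
    return w
```

Each state transition of the simulated machine thus corresponds to a fixed, finite number of η-sized weight steps, exactly as produced by the module's spike-pair chains.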
Counter module. In our model, the successive counter values of the machine are encoded as rational numbers and stored as the successive weights of designated synapses w_{c_j}(t), for j = 1, ..., k (the subscript c_j refers to 'counter j'). More precisely, the fact that counter j has value n ≥ 0 at time t is encoded by the synaptic weight w_{c_j}(t) having the rational value r_n := Σ_{i=1}^{n} 1/2^i (with the convention that r_0 := 0). Then, the "push" (incrementing the counter by 1) and "pop" (decrementing the counter by 1) operations are simulated by incrementing or decrementing w_{c_j}(t) appropriately.
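The arithmetic of this encoding can be sketched as follows (in the network, these updates are realized by the STDP rule of Eq (3) acting on w_{c_j}(t); the closed forms below only illustrate the encoding itself). Since r_n = 1 − 1/2^n, a push maps r_n to r_{n+1} = (1 + r_n)/2 and a pop maps r_n to r_{n−1} = 2 · r_n − 1:

```python
from fractions import Fraction

def r(n):
    """Encoding of counter value n: r_n = sum_{i=1}^n 1/2^i = 1 - 1/2^n."""
    return Fraction(1) - Fraction(1, 2 ** n)

def push(w):
    """Increment the encoded counter: r_n -> r_{n+1} = (1 + r_n) / 2."""
    return (1 + w) / 2

def pop(w):
    """Decrement the encoded counter: r_n -> r_{n-1} = 2 * r_n - 1."""
    return 2 * w - 1

def is_zero(w):
    """The 'test' operation: the counter is zero iff w = r_0 = 0."""
    return w == 0

assert push(r(2)) == r(3) == Fraction(7, 8)
assert pop(r(3)) == r(2)
assert is_zero(r(0)) and not is_zero(r(1))
```

Note that all reachable weights are rational and bounded in [0, 1), so infinitely many counter values fit into infinitely many discrete synaptic levels.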
The k counter modules are designed to implement these features. Each counter module is composed of 12 Boolean cells push, pop, test, = 0, ≠ 0, pre_c, post_c, c_1, c_2, c_3, c_4, c_5, as illustrated in Fig 11. The module is connected to the input transmission module described above and to the detection modules described below.
The activity of this module, illustrated in Fig 12, can be described as follows. Each activation of the push (resp. pop) cell (blue events in Fig 12) propagates into the circuit and results 2 time steps later in successive spikes of the pre_c and post_c cells (resp. post_c and pre_c cells), which, thanks to the STDP rule of Eq (3), increment (resp. decrement) the value of w_c(t) (red curve in Fig 12). The activation of the test cell (blue events in Fig 12) results 4 time steps later in a spike of the Boolean cell '= 0' or '≠ 0' (red events in Fig 12), depending on whether w_c(t) = 0 or w_c(t) ≠ 0, respectively. During this process, the value of w_c(t) is first incremented (2 time steps later) and then decremented (2 time steps later again) back to its original value. In other words, the testing procedure induces a back-and-forth fluctuation of w_c(t) without finally modifying its initial value (this fluctuation is unfortunately unavoidable). A counter module is denoted as counter_module().
Detection modules. Detection modules are used to retrieve (or detect) the current computational and counter states of the machine being simulated. This information is then employed to simulate the next transition of the machine. More precisely, each input symbol a ∈ Σ ∪ {λ}, computational state q ∈ Q and tuple of counter states c̄_1, ..., c̄_k ∈ C of the machine is associated with a corresponding detection module. This module is activated if and only if the current input bit processed by the network is precisely a, the current synaptic weight w_s(t) corresponds to the encoding of the computational state q, and the current synaptic weights w_{c_1}(t), ..., w_{c_k}(t) are the encodings of counter values with corresponding counter states c̄_1, ..., c̄_k. Afterwards, the detection module sends suitable activations to the state and counter modules so as to simulate the next transition δ(q, a, c̄_1, ..., c̄_k) = (q′, o_1, ..., o_k) of the machine. Formally, a detection module detects whether the activation value of cell post_s of the state module is equal to a certain value v, together with the fact that k signals from the cells = 0 or ≠ 0 of the k counter modules are correctly received. The module is composed of 4 Boolean cells connected in a feedforward manner, as illustrated in Fig 13. It is connected to the input transmission module, the state module and the counter modules described above.
The activity of this module, illustrated in Fig 14, can be described as follows. Suppose that at time step t, cell c_1 is spiking and cell post_s has an activation value of v (with 0 ≤ v ≤ 1). Then, at time t + 1, both c_2 and c_3 spike (since they receive signals of intensity 1). At the next time step t + 2, two signals of intensity 1/(k + 2) are transmitted to c_4. Suppose that at this same time step, c_4 also receives k signals from the counter modules. Then, c_4 receives k + 2 signals of intensity 1/(k + 2), and hence spikes at time t + 3 (case 1 of Fig 14). By contrast, if at time step t, c_1 is spiking and post_s has an activation value of v′ > v (resp. v′ < v), then at time t + 1 only c_2 (resp. c_3) spikes. Hence, at time t + 2, c_4 receives fewer than k + 2 signals of intensity 1/(k + 2), and thus stays quiet (cases 3 and 4 of Fig 14). Consequently, the 'detection cell' c_4 (blue cell of Fig 13) spikes if and only if post_s has an exact activation value of v and c_4 receives exactly k signals from the counter modules.

The simulating network N is subjected to the STDP rules of Eqs (2) and (3). The network is obtained by a suitable assembling of the modules described above. The architecture of N is illustrated in Fig 5, and its detailed construction is given by Algorithm 1. In short, the network N is composed of 1 input encoding module (line 1), 1 input transmission module (line 2), 1 state module (line 3), k counter modules (lines 4-6) and at most |Q| · |Σ ∪ {λ}| · 2^k = 3n2^k detection modules (lines 7-11). The modules are connected together according to the patterns described in lines 12-47. This makes a total of O(n2^k) cells and O(nk2^k) synapses which, since the number of counters k is fixed, amounts to O(n) cells and O(n) synapses.
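The thresholding behavior of the detection cell c_4 can be sketched as a hard-threshold unit (a simplified rendering of the Boolean cell dynamics, under the signal intensities 1/(k + 2) described above; the function name is illustrative):

```python
def detection_cell(k, from_c2, from_c3, counter_signals):
    """c_4 receives signals of intensity 1/(k+2) and spikes iff their sum
    reaches 1, i.e. iff all k+2 afferents (2 from the state module, k from
    the counter modules) deliver a signal simultaneously."""
    intensity = 1.0 / (k + 2)
    total = intensity * (from_c2 + from_c3 + sum(counter_signals))
    return total >= 1.0 - 1e-9  # numerical tolerance for the float sum

# post_s matches the target value v: both c_2 and c_3 spike (case 1 of Fig 14)
assert detection_cell(2, True, True, [True, True])
# post_s too large: only c_2 spikes, so c_4 stays quiet (cases 3-4 of Fig 14)
assert not detection_cell(2, True, False, [True, True])
```

The equality test on the analog value v is thus realized purely by signal summation: any mismatch suppresses at least one of the k + 2 afferent signals and keeps c_4 below threshold.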

Turing completeness
We now prove that any k-counter machine is correctly simulated by its corresponding STDP-based RNN given by Algorithm 1. Since 2-counter machines are Turing complete, so is the class of STDP-based RNNs. Towards this purpose, the following definitions need to be introduced.
Let N be an STDP-based RNN. The input cells of N are the cells in_0, in_1, end, tic of the input encoding module (cf. Fig 6, four blue cells of the first layer). Thus, inputs of N are vectors in B^4 whose successive components represent the spiking configurations of cells in_0, in_1, end, and tic, respectively. In order to describe the input streams of N, we consider the following vectors of B^4: 0 := (1, 0, 0, 0), 1 := (0, 1, 0, 0), end := (0, 0, 1, 0), tic := (0, 0, 0, 1), and 0̄ := (0, 0, 0, 0). According to these notations, the input stream 0011 end 0̄ 0̄ tic corresponds to the successive spikes of cells in_0, in_0, in_1, in_1, end, followed by two time steps during which all cells are quiet, followed by a last spike of the cell tic.
For any binary input w = a_0 ··· a_p ∈ Σ*, let u_w ∈ (B^4)* be the corresponding input stream of N, where each bit a_i is transmitted via the vector ā_i := 0 if a_i = 0 and ā_i := 1 if a_i = 1, for i = 0, ..., p. In other words, the input stream u_w consists of successive spikes from cells in_0 and in_1 (inputs ā_0 ··· ā_p), followed by one spike from cell end (input end), followed by K′_{p+1} time steps during which nothing happens (inputs 0̄ ··· 0̄), followed by successive spikes from cell tic, interspersed by constant intervals of K time steps during which nothing happens (input blocks tic 0̄ ··· 0̄). The value of K′_{p+1} is chosen such that, at time step p + 2 + K′, the p + 1 successive bits of u_w are correctly stored into cell c_14 of the input encoding module. The value of K is chosen such that, after each spike of the tic cell, the updating of the state and counter modules can be achieved within K time steps. Taking K′_{p+1} ≥ 3(p + 1) + 4 and K ≥ 17 + 3(n − 1) (where n = |Q|) satisfies these requirements. For i ≥ 0, let t_i denote the time step of the (i + 1)-th spike of the tic cell, and let a″_i be the input symbol (possibly λ) processed by N between t_i and t_{i+1}. For instance, in Fig 15, the successive input bits processed by the network are displayed by the spiking patterns of the cells u_λ, u_0, u_1: one has a″_0 = λ (only u_λ spikes between t_0 and t_1), a″_1 = 0 (both u_λ and then u_0 spike between t_1 and t_2, but only u_0 leads to the activation of a detection module, even if this is not represented), a″_2 = 0 (u_0 spikes after u_λ between t_2 and t_3), a″_3 = 1 (u_1 spikes after u_λ between t_3 and t_4), etc. Now, for any input stream u_w, the computation of N over u_w is defined as the sequence

N(w) := ((w_s(t_i − 1), a″_i, w_{c_1}(t_i − 1), ..., w_{c_k}(t_i − 1)))_{0 ≤ i ≤ l_2}   (5)

In other words, the computation of N over u_w is the sequence of successive values of w_s(t), a″_i, w_{c_1}(t), ..., w_{c_k}(t), which are supposed to encode the successive states, input symbols and counter values of the machine to be simulated, respectively.
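The construction of u_w can be sketched as follows (a hypothetical helper using the bounds K′_{p+1} = 3(p + 1) + 4 and K = 17 + 3(n − 1) stated above, with the cell ordering (in_0, in_1, end, tic)):

```python
ZERO = (1, 0, 0, 0)   # bit 0: spike of in_0
ONE = (0, 1, 0, 0)    # bit 1: spike of in_1
END = (0, 0, 1, 0)    # end of input: spike of end
TIC = (0, 0, 0, 1)    # trigger of one computational step: spike of tic
QUIET = (0, 0, 0, 0)  # no input cell spikes

def input_stream(w, n_states, n_tics):
    """Build u_w for the binary string w, with n_tics computational steps."""
    k_prime = 3 * len(w) + 4           # K'_{p+1}: let the encoding settle
    k = 17 + 3 * (n_states - 1)        # K: time for one simulated transition
    stream = [ONE if a == '1' else ZERO for a in w]
    stream.append(END)
    stream += [QUIET] * k_prime
    for _ in range(n_tics):
        stream += [TIC] + [QUIET] * k  # one tic per simulated transition
    return stream

u = input_stream('0011', n_states=5, n_tics=2)
assert u[:5] == [ZERO, ZERO, ONE, ONE, END]
assert u[21] == TIC                    # first tic after K' = 16 quiet steps
```

For a machine with n = 5 states, this yields the 30-step rhythm (1 tic followed by K = 29 quiet steps) visible in the simulations of Figs 15 and 16.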
According to these considerations, we say that C_k is simulated in real time by N, or equivalently that N simulates C_k in real time, if and only if, for any input w ∈ Σ* with corresponding input stream u_w ∈ (B^4)*, the computations of C_k over w (Eq (4)) and of N over u_w (Eq (5)) satisfy the following conditions:

w_s(t_i − 1) = a_min + n_i · η   (6)
a″_i = a′_i   (7)
w_{c_j}(t_i − 1) = r_{c_{j,i}}, for j = 1, ..., k   (8)

for all i = 0, ..., l_1, which implicitly implies that l_2 ≥ l_1 (recall that r_0 := 0 and r_n := Σ_{i=1}^{n} 1/2^i, for all n > 0). In other words, C_k is simulated by N iff, on every input, the computation of C_k is perfectly reflected by that of N: the sequences of input symbols processed by C_k and N coincide (Condition (7)), and the successive computational states and counter values of C_k are properly encoded into the successive synaptic weights w_s(t), w_{c_1}(t), ..., w_{c_k}(t) of N, respectively (Conditions (6) and (8)). According to these considerations, each state n_i ∈ N and counter value c_{j,i} ∈ N of C_k is encoded by the synaptic value w_s(t_i − 1) = a_min + n_i · η ∈ Q and w_{c_j}(t_i − 1) = r_{c_{j,i}} ∈ Q, for j = 1, ..., k, respectively. The real-time aspect of the simulation is ensured by the fact that the successive time steps (t_i)_{i≥0} involved in the computation N(w) are separated by a constant number of time steps K > 0. This means that the transitions of C_k are simulated by N in a fixed amount of time.
We now show that, in this precise sense, any k-counter machine is correctly simulated by its corresponding STDP-based recurrent neural network.

Theorem 1. Let C_k be a k-counter machine and let N be the STDP-based RNN given by Algorithm 1 applied to C_k. Then C_k is simulated in real time by N.
Proof. Let w = a_0 ··· a_p ∈ Σ* be some input and u_w ∈ (B^4)* be its corresponding input stream. Consider the two computations of C_k on w (Eq (4)) and of N on u_w (Eq (5)), respectively. By construction, at time t_0 − 1, the synaptic weights w_s(t) and w_{c_1}(t), ..., w_{c_k}(t) encode the initial state n_0 and the initial counter values c_{1,0}, ..., c_{k,0} of C_k. Hence, Conditions (6) and (8) are satisfied for i = 0, i.e.,

w_s(t_0 − 1) = a_min + n_0 · η and w_{c_j}(t_0 − 1) = r_{c_{j,0}}, for j = 1, ..., k   (9)

We now prove Condition (7) for i = 0. Towards this purpose, the following observations are needed. By construction and according to the value of K′_{p+1}, at time t_0 − 1, cell c_14 of the input encoding module IN1 holds the encoding of the whole input w = a_0 ··· a_p (the latter being considered as a stack). The top element of this stack is a_0. Besides, according to Relations (9), the only detection modules that can possibly be activated between t_0 and t_1 are those associated with the state n_0 and counter states c̄_{1,0}, ..., c̄_{k,0}. Now, consider the symbol a′_0 ∈ Σ ∪ {λ}. Then either a′_0 ∈ Σ or a′_0 = λ. As a first case, suppose that a′_0 ∈ Σ. Since a′_0 ≠ λ and a′_0 is the first symbol processed by C_k during its computation over input w = a_0 ··· a_p (cf. Eq (4)), one necessarily has a′_0 = a_0. Thus, δ(n_0, a′_0, c̄_{1,0}, ..., c̄_{k,0}) = δ(n_0, a_0, c̄_{1,0}, ..., c̄_{k,0}), and the determinism of C_k ensures that δ(n_0, λ, c̄_{1,0}, ..., c̄_{k,0}) is undefined. According to Algorithm 1 (lines 7-11), the module DET(n_0, a_0, c̄_{1,0}, ..., c̄_{k,0}) is instantiated, whereas DET(n_0, λ, c̄_{1,0}, ..., c̄_{k,0}) is not. Hence, the dynamics of N between t_0 and t_1 goes as follows. At time t_0, the cell tic of IN1 sends a signal to u_λ of IN2 (Algorithm 1, line 14) which propagates to the detection modules associated with symbol λ (Algorithm 1, line 21). Since the module DET(n_0, λ, c̄_{1,0}, ..., c̄_{k,0}) does not exist, it can certainly not be activated, and thus, the cell c_15 of IN1 will not be inhibited in return (Algorithm 1, lines 24-26). The spike of c_15 will then trigger the sub-circuit of IN1 that pops the top element of the stack currently encoded in c_14, namely, the symbol a_0. This triggers the activation of c_19 or c_20 of IN1, depending on whether a_0 = 0 or a_0 = 1.
This activity then propagates to the cells u_{a_0} and next d_{3,a_0} of IN2 (Algorithm 1, lines 12-13). It propagates further to the detection modules of the form DET(·, a_0, ·, ..., ·), and in particular to DET(n_0, a_0, c̄_{1,0}, ..., c̄_{k,0}) (Algorithm 1, line 21). According to Relations (9), the cell c_4 of DET(n_0, a_0, c̄_{1,0}, ..., c̄_{k,0}), and of this module only, will be activated, since it is the only module of this form capable of detecting the current weight w_s(t) = a_min + η · n_0 as well as the current counter states c̄_{1,0}, ..., c̄_{k,0} (Algorithm 1, lines 22-23 and 40-46). This amounts to saying that the symbol a″_0 processed by N between t_0 and t_1 is equal to a_0. Therefore, a″_0 = a_0 = a′_0. This shows that in this case, Condition (7) holds for i = 0. The case a′_0 = λ is treated symmetrically, which completes the base case of the induction.
For the induction step, let m < l_1, and suppose that Conditions (6), (7) and (8) are satisfied for all i = 0, ..., m. We show that they also hold for i = m + 1. By the induction hypothesis (Condition (7)), a″_m = a′_m. The definition of a″_m ensures that the cell c_4 of one and only one detection module DET(q, a″_m, c̄_1, ..., c̄_k) is activated between time steps t_m and t_{m+1}, for some q ∈ Q and some c̄_1, ..., c̄_k ∈ C. But by the induction hypotheses (Conditions (6) and (8)), one has

w_s(t_m − 1) = a_min + n_m · η   (12)
w_{c_j}(t_m − 1) = r_{c_{j,m}}, for j = 1, ..., k   (13)

Hence, Relations (12) and (13) ensure that this activated detection module is precisely DET(n_m, a″_m, c̄_{1,m}, ..., c̄_{k,m}) (Algorithm 1, lines 27-31). Hence, the activation of this detection module between t_m and t_{m+1} induces subsequent spiking patterns of the state module which, by construction, increment (if n_{m+1} − n_m > 0) or decrement (if n_{m+1} − n_m < 0) the synaptic weight w_s(t) by |n_{m+1} − n_m| · η, and hence change it from its current value a_min + n_m · η (cf. Eq (12)) to the new value a_min + n_m · η + (n_{m+1} − n_m) · η = a_min + n_{m+1} · η. Note that each spiking pattern takes 3 time steps, and hence the updating of w_s(t) takes at most 3(n − 1) time steps, where n is the number of states of C_k (the longest update being when |n_{m+1} − n_m| = n − 1, which takes 3(n − 1) time steps). Therefore, at time t_{m+1} − 1, one has w_s(t_{m+1} − 1) = a_min + n_{m+1} · η. This shows that Condition (6) is satisfied for i = m + 1.
Similarly, by Relation (10), Conditions (7) and (8) are shown to hold for i = m + 1, which completes the induction and the proof.

As a consequence, a language L ⊆ Σ* is recognizable by some STDP-based RNN if and only if it is recursively enumerable. Indeed, if L is recognizable by some STDP-based RNN, then, the dynamics of such a network being computable, L is recognizable by some Turing machine M. Conversely, suppose that L is recognizable by some Turing machine M. Then L is also recognizable by some 2-counter machine C_2 [57]. By Theorem 1, L is recognizable by some STDP-based RNN N.

Simulations
We now illustrate the correctness of our construction by means of computer simulations.
First, let us recall that the 2-counter machine of Fig 4 recognizes the recursively enumerable (but neither context-free nor regular) language {0^n 1^n 0^n : n > 0}, i.e., the sequences of bits beginning with a strictly positive number of 0's, followed by the same number of 1's, and followed again by the same number of 0's. For instance, the inputs w_1 = 001100 and w_2 = 0011101 are respectively accepted and rejected by the machine. Based on the previous considerations, we implemented an STDP-based RNN simulating this 2-counter machine. The network contains 390 cells connected together according to the construction given by Algorithm 1. We also set a_min = η = 0.1 in the STDP rule of Eq (2). Two computations of this network, over an accepting and a rejecting input stream, are illustrated in Figs 15 and 16. These simulations illustrate the correctness of the construction described in Algorithm 1.
More specifically, the computation of the network over the input stream corresponding to the encoding of w_1 = 001100 is displayed in Fig 15. In this case, taking K = 17 + 3(5 − 1) = 29 suffices for the correctness of the simulation (since the largest possible state update, in terms of the states' indices, is a change from q_5 to q_1). The lower raster plot displays the spiking activities of some of the cells of the network belonging to the input encoding module (in_0, in_1, end, tic), the input transmission module (u_0, u_1, u_λ), the state module (pre_s, post_s) and the two counter modules (push, pop, test, pre_{c_k}, post_{c_k}, = 0, ≠ 0, for k = 1, 2). From time step t = 0 to t = 6, the encoding of the input stream 001100 is transmitted to the network via activations of cells in_0, in_1 and end (blue pattern). Between t = 6 and t = 30, the input pattern is encoded into activation values of sigmoid cells in the input encoding module, as illustrated in Fig 7. From t = 30 onwards, the tic cell is activated every 30 time steps in order to trigger the successive computational steps of the network. Each spike of the tic cell induces a subsequent spike of u_λ one time step later. At this moment, the network tries to simulate a λ-transition of the counter machine. If such a transition is possible, the network performs it: this is the case at time steps t = 31, 181. Otherwise, the input encoding module retrieves the next input bit to be processed and activates the corresponding cell u_0 or u_1 (blue pattern): this is the case at time steps t = 71, 101, 131, 161, 221, 251. In Fig 15 (cells u_0, u_1, u_λ), we can see that on this input stream, the network processes the sequence of input symbols λ0011λ00.
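For reference, the language itself can be decided with two counters in a single left-to-right pass; the following sketch is a plain-Python reconstruction of such a two-counter procedure (the exact transition table of the machine of Fig 4 may differ):

```python
def recognize(word):
    """Two-counter check of {0^n 1^n 0^n : n > 0} on a string of '0'/'1'."""
    c1 = c2 = 0
    phase = 0                   # 0: leading 0s, 1: middle 1s, 2: trailing 0s
    for a in word:
        if phase == 0:
            if a == '0':
                c1 += 1         # count the leading 0s
            elif c1 > 0:
                phase, c1, c2 = 1, c1 - 1, c2 + 1   # first 1 seen
            else:
                return False    # word starts with 1
        elif phase == 1:
            if a == '1':
                if c1 == 0:
                    return False        # more 1s than leading 0s
                c1, c2 = c1 - 1, c2 + 1
            else:
                if c1 != 0:
                    return False        # fewer 1s than leading 0s
                phase, c2 = 2, c2 - 1   # first trailing 0 seen
        else:
            if a != '0' or c2 == 0:
                return False
            c2 -= 1
    return phase == 2 and c1 == 0 and c2 == 0

assert recognize('001100') and not recognize('0011101')
```

Only increment, decrement and zero-test operations on c1 and c2 are used, which is precisely the repertoire that the counter modules of the network provide.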
Every time the network receives an input symbol (λ, 0 or 1), it simulates the transition of the counter machine associated with this input. The successive computational states of the machine are encoded into the successive values taken by w_s(t) (cf. Fig 15). Recall that state n and counter value x of C_k are encoded by the synaptic weights w_s(t) = a_min + n · η and w_c(t) = r_x in N, respectively. Accordingly, the successive values of (w_s(t), w_{c_1}(t), w_{c_2}(t)) displayed in Fig 15 correspond to the encodings of the successive states and counter values (q, c_1, c_2) of the counter machine. These are the correct computational states and counter values encountered by the machine along the computation of input w_1 = 001100 (cf. Fig 4). Therefore, the network simulates the counter machine correctly. The fact that the computations of the machine and the network terminate in state 4 and with w_s(t) = 0.5 = 0.1 + 4 · η, respectively, means that the inputs w_1 and u_{w_1} are accepted by both systems.
As another example, the computation of the network over the input stream corresponding to the encoding of w_2 = 0011101 is displayed in Fig 16 (cells u_0, u_1, u_λ). We see that on this input stream, the network processes the sequence of input symbols λ0011λ101. The successive synaptic weights (w_s(t), w_{c_1}(t), w_{c_2}(t)) at time steps t = 30k, for 1 ≤ k ≤ 10, are displayed in Fig 16. These are the correct computational states and counter values encountered by the machine working over input w_2 = 0011101 (cf. Fig 4). Therefore, the network simulates the counter machine correctly. The fact that the computations of the machine and the network terminate in state 1 and with w_s(t) = 0.2 = 0.1 + 1 · η, respectively, means that the inputs w_2 and u_{w_2} are rejected by both systems.

Discussion
We proposed a novel Turing complete paradigm of neural computation where the essential information is encoded into discrete synaptic levels rather than into spiking configurations, activation values or (attractor) dynamics of neurons. More specifically, we showed that any 2-counter machine (and thus any Turing machine) can be simulated by a recurrent neural network subjected to two kinds of spike-timing-dependent plasticity (STDP) mechanisms. The finitely many computational states and infinitely many counter values of the machine are encoded into finitely and infinitely many synaptic levels, respectively. The transitions between states and counter values are achieved via the two STDP rules. In short, the network operates as follows. First, the input stream is encoded and stored into the activation value of a specific analog neuron. Then, every time a tic input signal is received, the network tries to simulate a λ-transition of the machine. If such a transition is possible, the network simulates it. Otherwise, the network retrieves from its memory the next input bit to be processed, and simulates the regular transition associated with this input. These results have been illustrated by means of computer simulations: an STDP-based recurrent neural network simulating a specific 2-counter machine has been implemented and its dynamics analyzed.
We emphasize once again that the possibility to simulate λ-transitions is (unfortunately) necessary for the achievement of Turing completeness. Indeed, it is well known that the class of k-counter machines that do not make use of λ-transitions is not Turing complete, for any k > 0. For instance, the language L = {w#w : w ∈ {0, 1}*} (the strings of bits separated by a symbol # whose prefix and suffix are the same) is recursively enumerable, but cannot be recognized by a k-counter machine without λ-transitions. The input encoding module, as intricate as it is, ensures the implementation of this feature. It encodes and stores the incoming input stream so as to be able to subsequently intersperse the successive regular transitions (associated with regular input symbols) with λ-transitions (associated with λ symbols). By contrast, a k-counter machine without λ-transitions could be simulated by an STDP-based neural network working in an online fashion: the successive input symbols would be processed as they arrive, and a regular transition simulated for each successive symbol. In this case, an STDP-based neural net (as described in Fig 5) without input encoding module would suffice. One would just need to add sufficiently many delay layers to its input transmission module in order to have enough time to emulate each regular transition.
In the present context, the STDP-based RNNs are capable of simulating Turing machines working in the accepting mode (i.e., machines that provide accepting or rejecting decisions on their inputs by halting in an accepting or a rejecting state, respectively). But it would be possible to adapt the construction to simulate Turing machines working in the generative mode as well (i.e., machines that write the successive words of a language on their output tape, in an enumerative way). To this end, we would need to simulate the program and work tape of M by an STDP-based RNN N (as described in Theorem 1), and the output tape of M by an additional neural circuit N_out plugged into N. Broadly speaking, the simulation process could be achieved as follows:
• Every non-output move of M is simulated by the STDP-based RNN N in the usual way (cf. Theorem 1).
• Every time M generates a new word w = a_1 ··· a_n on its output tape, use the circuit N_out to build step by step the encoding r̄_w = Σ_{i=1}^{n} (2a_i + 1)/4^i ∈ [0, 1] of w and store this value in a designated neuron c (as described in the paragraph "Input encoding module").
• When M has finished generating w, use the circuit N_out to transfer the value r̄_w of c to another neuron c′, to set the activation value of c back to 0, and to output the successive bits of w by popping the stack r̄_w stored in c′ (again, as described in the paragraph "Input encoding module").
In this way, the STDP-based RNN N plugged into the circuit N_out could work as a language generator: it would output, bit by bit, the successive words of the language L generated by M. The implementation of the circuit N_out follows the lines of what is described in the paragraph "Input encoding module".
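The base-4 stack arithmetic underlying this scheme can be sketched as follows, assuming the same encoding r̄_w = Σ_{i=1}^{n} (2a_i + 1)/4^i as the input encoding module (the helper names are illustrative, not the paper's):

```python
from fractions import Fraction

def encode(word):
    """Stack encoding r_bar_w of a bit string, built by pushing bits from
    last to first: pushing bit a onto r yields (2*a + 1 + r) / 4."""
    r = Fraction(0)
    for a in reversed(word):
        r = (2 * int(a) + 1 + r) / 4
    return r

def pop_bit(r):
    """Read and remove the top bit: top = 1 iff r >= 3/4 (since a top bit of
    0 yields r in [1/4, 1/2] and a top bit of 1 yields r in [3/4, 1]);
    the remainder of the stack is 4*r - 2*top - 1."""
    top = 1 if r >= Fraction(3, 4) else 0
    return top, 4 * r - 2 * top - 1

# round trip: encode a word, then pop it back bit by bit
r = encode('101')
bits = []
while r != 0:
    b, r = pop_bit(r)
    bits.append(str(b))
assert ''.join(bits) == '101'
```

The pop operation is an affine map of the stored value, which is why a small analog circuit suffices to output the successive bits of w.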
Concerning the complexity issue, our model uses O(n) neurons and O(n) synapses to simulate a counter machine with n states. Moreover, the simulation works in real time, since every computational step of the counter machine can be simulated in a fixed amount of 17 + 3(n − 1) time steps (17 time steps to transmit the next input bit up to the end of the detection modules, and at most 3(n − 1) time steps to perform the state and counter updates). In the context of rational-weighted sigmoidal neural networks, the seminal result from Siegelmann and Sontag uses 886 Boolean and analog neurons to simulate a universal Turing machine [4]. Recent results show that Turing completeness can be achieved with a minimum of 3 analog neurons only, the other ones being Boolean [58]. As for spiking neural P systems, Turing universality can be achieved with 3 or 4 neurons only, but this comes at the price of exponential time and space overheads (see [59], Table 1). In our case, the complexity of Turing universality is expected to be investigated in detail in a future work.
Regarding synaptic-based computation, a somewhat related approach has already been pursued in the P system framework with the consideration of spiking neural P systems with rules on synapses [60]. In this case, synapses are considered as computational units triggering exchanges of spikes between neurons. The proposed model is shown to be Turing universal. It is claimed that "placing the spiking and forgetting rules on synapses proves to be a powerful feature, both simpler proofs and smaller universal systems are obtained in comparison with the case when the rules are placed in the neurons" [60]. In this context, however, the information remains encoded into the number of spikes held by the neurons, referred to as the "configuration" of the system. By contrast, in our framework, the essential information (the computational states and counter values) is encoded into discrete synaptic levels, and its updates are achieved via synaptic plasticity rules.
As already mentioned, it has been argued that in biological neural networks "synapses change their strength by jumping between discrete mechanistic states rather than by simply moving up and down in a continuum of efficacy" [56]. These considerations represent "a new paradigm for understanding the mechanistic underpinnings of synaptic plasticity, and perhaps also the roles of such plasticity in higher brain functions" [56]. In addition, "much work remains to be done to define and understand the mechanisms and roles these states play" [56]. In our framework, the computational states and counter values of the machine are encoded into discrete synaptic states. However, the input stream to be processed is still encoded into the activation value of a specific analog neuron. It would be interesting to develop a paradigm where this feature is also encoded into synapses. Moreover, it would be interesting to extend the proposed paradigm of computation to more biologically realistic STDP rules.
It is worth noting that synaptic-based and neuron-based computational paradigms are not opposite conceptions, but intertwined processes instead. Indeed, changes in synaptic states are achieved via the elicitation of specific neuronal spiking patterns (which modify the synaptic strengths via STDP). The main difference between these two conceptions is whether the essential information is encoded and memorized into synaptic states or into spiking configurations, activation values or (attractor) dynamics of neurons.
In biology, real brain circuits certainly do not operate by simulating abstract finite state machines, and with our work, we do not intend to argue in this sense. Rather, our intention is to show that a bio-inspired Turing complete paradigm of abstract neural computation, centered on the concept of synaptic plasticity, is not only theoretically possible but also potentially exploitable. The idea of representing and storing essential information into discrete synaptic levels is, we believe, novel and worthy of consideration. It represents a paradigm shift in the field of neural computation.
Finally, the impacts of the proposed approach are twofold. From a practical perspective, contemporary developments in neuromorphic computing provide the possibility to implement neurobiological architectures on very-large-scale integration (VLSI) systems, with the aim of mimicking neuronal circuits present in the nervous system [61,62]. The implementation of our model on VLSI technologies would lead to the realization of new kinds of analog neuronal computers. The computational and learning capabilities of these neural systems could then be studied directly from the hardware point of view, and the integrated circuits implementing our networks might be suitable for specific applications. Besides, from a Machine Learning (ML) perspective, just as the dynamics of biological neural nets inspired neuron-based learning algorithms, the STDP-based recurrent neural networks might eventually lead to the development of new ML algorithms.
From a theoretical point of view, we hope that the study of neuro-inspired paradigms of abstract computation might contribute to the understanding of both biological and artificial intelligence. We believe that, similarly to the foundational work of Turing, which played a crucial role in the practical realization of modern computers, further theoretical considerations about neural- and natural-based models of computation shall contribute to the emergence of novel computational technologies and, step by step, open the way to the next computational generation.
Supporting information
S1 Files. Python code. All Python scripts generating the results of the paper are provided in an attached zip folder files.zip. The description of the different files is given in Read_me.txt. (ZIP)