Abstract
Spiking neural P systems are a new candidate among spiking neural network models. By using neuron division and budding, such systems can generate exponential working space in a linear number of computational steps, thus providing a way to solve computationally hard problems in feasible (linear or polynomial) time with a "time-space trade-off" strategy. In this work, a new mechanism called neuron dissolution is introduced, by which redundant neurons produced during the computation can be removed. As applications, uniform solutions to two NP-hard problems, the SAT problem and the Subset Sum problem, are constructed in linear time, working in a deterministic way. The neuron dissolution strategy is used to eliminate invalid solutions, and all answers to these two problems are encoded as indices of output neurons. Our results improve those obtained in Science China Information Sciences, 2011, 1596–1607 by Pan et al.
Citation: Zhao Y, Liu X, Wang W (2016) Spiking Neural P Systems with Neuron Division and Dissolution. PLoS ONE 11(9): e0162882. https://doi.org/10.1371/journal.pone.0162882
Editor: Andrew Adamatzky, University of the West of England, UNITED KINGDOM
Received: May 5, 2016; Accepted: August 30, 2016; Published: September 14, 2016
Copyright: © 2016 Zhao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This research was supported by National Natural Science Foundation of China (http://www.nsfc.gov.cn/) 61472231 to XL, 61170038 to XL, 61402187 (not applicable) and 61502283 (not applicable). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Spiking neural P systems (SN P systems, for short) are a class of bio-inspired parallel computing models, initiated by Ionescu, Păun and Yokomori in 2006 [1], inspired by the way neurons process information and communicate with one another. An SN P system is built from a group of neurons (cells with a single membrane) that communicate by sending signals (spikes, represented by the object a) to neighboring neurons through synapses. Each neuron holds a certain number of spikes and rules, and the spikes evolve through the application of the rules. Since SN P systems were proposed, they have become a rapidly developing area of membrane computing [2–15].
Researchers have paid close attention to the computational efficiency of SN P systems, especially to deciding in feasible time whether an NP-complete problem instance has a solution or not [16–26]: if the instance has a solution, the output neuron emits a spike; otherwise, it emits nothing. In many situations, however, we need to find the solutions themselves. For instance, the register allocation problem is an application of the SAT problem: it aims to build a mapping between virtual registers and physical registers and to make rational use of the physical register resources. In this case, we must not only decide whether a good solution exists but also obtain that solution, so that the physical registers can be allocated accordingly. Moreover, many practical problems can be transformed into graph coloring problems, which are equivalent to SAT problems; to solve them, exact solutions are also essential.
For this purpose, neuron dissolution, a basic biological phenomenon that removes unnecessary neurons, is introduced into SN P systems [27, 28], and a new class of SN P systems, SN P systems with neuron division and dissolution (DDSN P systems, for short), is proposed in this work. In DDSN P systems, division rules generate exponential workspace (in terms of neurons) used to enumerate all possible results, one result per neuron, while dissolution rules remove the redundant neurons that hold wrong results. The neurons representing the possible results are designated as output neurons, and the output neurons holding invalid results are dissolved during the computation. When the computation halts, the remaining output neurons show all the correct results. As examples, uniform solutions to the SAT problem and the Subset Sum problem, working in a deterministic way, are constructed in this work.
The contributions of this work are threefold. 1. The computational space efficiency is improved. If redundant neurons were kept, they would occupy huge computational resources such as storage; the dissolution rule reduces the required computational space. 2. The system structure is clearer. If redundant neurons were kept, the SN P system would become complicated and the useful neurons would not stand out. With the neuron dissolution mechanism, redundant neurons are dissolved immediately, and each remaining neuron has a clear function. 3. Exact solutions to NP-complete problems can be obtained in linear time. Invalid solutions are eliminated during the computation by neuron dissolution, and at halting all solutions are encoded as the indices of specific output neurons, which provides more valuable information for applications. Uniform solutions to the SAT and Subset Sum problems are given as examples.
The paper is organized as follows. Section 1 defines SN P systems with neuron division and dissolution. Sections 2 and 3 present uniform linear-time solutions to the SAT problem and the Subset Sum problem, respectively, using the proposed systems. Conclusions are given in Section 4.
1 SN P Systems with Neuron Division and Dissolution
1.1 Background
Biological systems, such as cells, tissues, and human brains, exhibit deep computational intelligence. Biologically inspired computing, or bio-inspired computing for short, focuses on abstracting computing architectures from biological systems in order to construct computing models and algorithms. Membrane computing is a novel branch of bio-inspired computing, initiated by Gh. Păun in 2002, which seeks to discover new computational models from the study of biological cells, particularly of biological membranes [29, 30]. The resulting models are distributed and parallel bio-inspired computing devices, usually called P systems. Three classes of P systems are mainly investigated: cell-like P systems, tissue P systems, and neural-like P systems (also known as spiking neural P systems). P systems are powerful computing models: they can do what Turing machines can do, and can even solve computationally hard problems [31–37].
SN P systems, as a branch of membrane computing, mark a shift from the cell-like architecture to the neural-like architecture. The topological structure of an SN P system is a directed graph: neurons are placed at the vertices, and synapses act as the edges. Each neuron can hold a certain number of objects a (spikes) and a certain number of firing rules and forgetting rules. Through a firing rule, a neuron sends information to other neurons by emitting spikes to them; through a forgetting rule, a certain number of spikes are removed from a neuron. Both firing rules and forgetting rules have application conditions: a rule may be applied only if the number of spikes in the neuron belongs to the set of spike numbers determined by the rule's regular expression. At each time step, one applicable rule is nondeterministically chosen and applied in each neuron. That is to say, rules are applied sequentially from the view of each neuron, and in parallel from the view of the whole system.
Pan et al. introduced a novel way to solve the SAT problem in polynomial time by using neuron division and budding [25]: division and budding rules generate additional neurons as needed during the computation. Wang et al. proved that SN P systems with neuron division alone, without neuron budding, can also solve the SAT problem in polynomial time [26]. The biological motivation of neuron division and budding comes from the division of neural stem cells, which have the ability to proliferate and differentiate into neurons, astrocytes and oligodendrocytes, and can therefore supply massive numbers of tissue cells. In these SN P systems, neuron division rules and neuron budding rules model this biological phenomenon.
In neurons, there is another biological phenomenon, neuron apoptosis, which is closely related to neuron division and budding. Neuron apoptosis is a programmed neuron death governed by a series of gene-controlled activities, such as the activation, expression and regulation of genes. It is not self-damage under pathological conditions, but an active death process. When unnecessary or abnormal neurons appear during neural development or under the influence of certain factors, apoptosis removes these neurons in multicellular organisms, maintaining a stable internal environment and allowing better adaptation to the external environment. It plays an important role in the evolution of organisms, the stability of the internal environment, and the development of multiple systems.
Motivated by the above biological phenomena, the neuron apoptosis mechanism is introduced into SN P systems in the form of a neuron dissolution rule. In this way, redundant neurons can be eliminated immediately.
1.2 System description
A SN P system with neuron division and dissolution of degree m is a construct of the form

Π = (O, H, syn, n_{1}, …, n_{m}, R, in, out),

where:
 O = {a} represents the singleton alphabet where a is the spike;
 H represents the set of labels for neurons;
 syn ⊆ H × H represents a synapse dictionary (for each 1 ≤ i ≤ m, (i, i) ∉ syn);
 n_{i} ≥ 0 represents the spike numbers in neuron σ_{i} in the initial state (1 ≤ i ≤ m);
 R represents the set of all developmental rules of the following four forms
 firing rule [E/a^{c} → a^{p}; d]_{i}, where, i ∈ H, E is a regular expression over a, c ≥ 1, p ≥ 1, c ≥ p, d ≥ 0. If E = a^{c}, the firing rule is simply written as [a^{c} → a^{p}; d]_{i}. If d = 0, the firing rule is simply written as [E/a^{c} → a^{p}]_{i}. If E = a^{c} and d = 0, the firing rule is simply written as [a^{c} → a^{p}]_{i};
 forgetting rule [E/a^{s} → λ]_{i}, where, i ∈ H, E is a regular expression over a, s ≥ 1. If E = a^{s}, the forgetting rule is simply written as [a^{s} → λ]_{i};
 neuron division rule [E]_{i} → [ ]_{j} ∥ [ ]_{k}, where, i, j, k ∈ H, E is a regular expression over a;
 neuron dissolution rule [E]_{i} → δ, where, i ∈ H, E is a regular expression over a, object δ represents that neuron σ_{i} is dissolved;
 in, out ⊆ H represent the sets of labels of the input and output neurons of Π, respectively.
The synapse dictionary syn gives the initial structure of the system and guides how new synapses are established when new neurons are generated.
If neuron σ_{i} has h spikes, and a^{h} ∈ L(E), h ≥ c, the firing rule [E/a^{c} → a^{p}; d]_{i} can be applied. c spikes are consumed (h − c spikes remain in neuron σ_{i}.), and p spikes are emitted after d time units (steps). If d = 0, p spikes are emitted immediately; if d = 1, p spikes are emitted at the next step; if this firing rule is applied at step t and d ≥ 1, p spikes are emitted at step t + d. Neuron σ_{i} is closed at steps t, t + 1, t + 2, …, t + d − 1, which means no rule will be applied and no spike will be received in this period. At step t + d, neuron σ_{i} becomes open again, and can receive new spikes. Once these p spikes are emitted from neuron σ_{i}, they reach each neuron σ_{j} which has a synapse going from neuron σ_{i} to neuron σ_{j} and is open. The spikes sent to a closed neuron are lost.
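The applicability test and delayed emission of a firing rule can be sketched in Python. This is an illustrative sketch only, not the authors' implementation: the regular expression E is given directly as a Python regex over the alphabet {a}, and the function names are our own.

```python
import re

def try_fire(spikes, E, c, p, d):
    """Apply firing rule [E/a^c -> a^p; d] to a neuron holding `spikes` spikes.
    If a^spikes belongs to L(E) and spikes >= c, consume c spikes and schedule
    p spikes for emission after d steps; otherwise leave the neuron unchanged.
    Returns (remaining_spikes, emission_or_None)."""
    if re.fullmatch(E, "a" * spikes) and spikes >= c:
        return spikes - c, {"spikes": p, "delay": d}
    return spikes, None

# A rule shaped like [a(a^2)^+/a^2 -> a; 0] applied to a neuron with 5 spikes:
# the content a^5 matches a(aa)+, 2 spikes are consumed, 3 remain,
# and 1 spike is emitted immediately (delay 0).
left, emission = try_fire(5, "a(aa)+", 2, 1, 0)
```

With 4 spikes the content a^4 does not match a(aa)+, so the rule is not applicable and the neuron is left untouched.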
If neuron σ_{i} has h spikes, and a^{h} ∈ L(E), h ≥ s, the forgetting rule [E/a^{s} → λ]_{i} can be applied. s spikes are consumed immediately.
If (1) neuron σ_{i} has h spikes with a^{h} ∈ L(E), and (2) no synapse (i, j), (j, i), (i, k) or (k, i) exists in the system, the neuron division rule [E]_{i} → [ ]_{j} ∥ [ ]_{k} can be applied. All h spikes in neuron σ_{i} are consumed and neuron σ_{i} is divided into two neurons σ_{j} and σ_{k}, which contain no spikes at this moment. The labels of the two generated neurons may be equal or different, and each may also be equal to or different from the label of their father neuron σ_{i}. The newly generated neurons inherit the synapses of their father neuron σ_{i}: if there is a synapse (i, g) going from neuron σ_{i} to neuron σ_{g}, two synapses (j, g) and (k, g) are established after the division rule is applied; if there is a synapse (g, i) going from neuron σ_{g} to neuron σ_{i}, two synapses (g, j) and (g, k) are established. In addition to inherited synapses, the newly generated neurons also obtain the synapses provided by the synapse dictionary syn. Note that synapses not listed in the synapse dictionary syn may appear because of inheritance. Condition (2) avoids the situation where the start and the end of a synapse are the same neuron: for example, if synapse (i, j) existed in the system, the synapses (j, j) and (k, j) would appear after division, and (j, j) is not permitted.
A simple example, shown in Fig 1, illustrates how division rules are applied. Neuron σ_{3} holds one spike and two division rules. Consider the two conditions given in the previous paragraph. 1) Neuron σ_{3} has one spike a, and the regular expression of both division rules is exactly {a}, with a ∈ {a}; therefore both division rules meet condition (1). 2) For rule [a]_{3} → [ ]_{2} ∥ [ ]_{3}, the label 3 of the father neuron σ_{3} corresponds to i in the general form of the rule, and the labels 2 and 3 of the two new neurons σ_{2} and σ_{3} correspond to j and k. The synapses (i, j), (j, i), (i, k), (k, i), i.e., (3, 2), (2, 3), (3, 3), must not exist in the system. However, the synapse (2, 3) is in this system, so rule [a]_{3} → [ ]_{2} ∥ [ ]_{3} cannot be applied. Only rule [a]_{3} → [ ]_{3} ∥ [ ]_{4} meets both conditions. The spike a in neuron σ_{3} is consumed, neuron σ_{3} is divided into two neurons σ_{3} and σ_{4}, and two synapses (2, 3), (2, 4) going from neuron σ_{2} to the two new neurons are established, because there is a synapse going from neuron σ_{2} to the father neuron σ_{3} of the two new neurons (the inheritance of synapses). Because rules in this system are associated with the labels of neurons, the new neuron σ_{3} contains the same two rules. The system changes to Fig 2 after rule [a]_{3} → [ ]_{3} ∥ [ ]_{4} is applied.
If neuron σ_{i} has h spikes, and a^{h} ∈ L(E), the neuron dissolution rule [E]_{i} → δ can be applied. All h spikes in neuron σ_{i} are consumed and neuron σ_{i} is dissolved. All synapses going from/to neuron σ_{i} are dissolved, too.
A simple example shown in Fig 3 is used to show how dissolution rules are applied. Neuron σ_{1} has one spike a and the regular expression of the dissolution rule is exactly {a}, where a ∈ {a}. Then rule [a]_{1} → δ is applied and the system is changed to Fig 4 (Neuron σ_{1} is dissolved, and synapse (1, 2) connected with neuron σ_{1} is also dissolved.).
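The effect of a dissolution rule, removing a neuron together with every synapse touching it, can be sketched as follows. The data layout (a dict of spike counts and a set of synapse pairs) is our own assumption for illustration, not the paper's formalism.

```python
import re

def dissolve(neurons, syn, i, E):
    """Apply dissolution rule [E]_i -> delta: if the spike content of
    neuron i matches E, remove the neuron and all incident synapses."""
    if i in neurons and re.fullmatch(E, "a" * neurons[i]):
        del neurons[i]
        syn = {(u, v) for (u, v) in syn if u != i and v != i}
    return neurons, syn

# The Fig 3 scenario: neuron 1 holds one spike and has rule [a]_1 -> delta.
neurons = {1: 1, 2: 0}
syn = {(1, 2)}
neurons, syn = dissolve(neurons, syn, 1, "a")
# neuron 1 and the synapse (1, 2) are both removed, as in Fig 4
```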
At each step, if only one rule in neuron σ_{i} can be applied, this rule must be applied; if two or more rules in neuron σ_{i} can be applied, one of these rules is applied nondeterministically. Rules are applied in a sequential manner in each neuron and in parallel between neurons.
The configuration of the system is described by the synapse connections, the number of spikes in each neuron, and the state (open or closed) of each neuron. By applying rules, the system passes from one configuration to the next. A transition sequence starting from the initial configuration is called a computation, and a computation halts if it reaches a configuration in which all neurons are open and no rule can be applied.
SN P systems can be used to solve a decision problem (I_{X}, Θ_{X}), where I_{X} is a language over a finite alphabet whose elements are the instances and Θ_{X} is a total Boolean function over I_{X}, both in a semi-uniform way and in a uniform way. In the semi-uniform way, a specific SN P system is constructed for each instance of the decision problem, with the instance parameters embedded in the system. In the uniform way, one SN P system is constructed for all instances of the decision problem, and the parameters of each instance enter the system as input spikes. Uniform solutions are preferred because they depend only on the structure of the problem.
The input of a SN P system is a spike train a^{i1} ⋅ a^{i2} ⋅ … ⋅ a^{ir}, where r ≥ 1 and i_{j} ≥ 0 for each 1 ≤ j ≤ r, which means that i_{j} spikes enter the system through input neuron σ_{in} at step j. In particular, i_{j} = 0 means no spike enters the system at step j.
2 A Uniform Solution to SAT Problem
SAT (the satisfiability of conjunctive normal form expressions) is one of the most typical NP-complete problems. For a Boolean variable set X = {x_{1}, x_{2}, …, x_{n}}, a literal l_{i} is x_{i} or ¬x_{i} for 1 ≤ i ≤ n. A clause C_{i} is a disjunction of literals, C_{i} = l_{n1} ∨ l_{n2} ∨ … ∨ l_{nr}, 1 ≤ r ≤ n. A conjunctive normal form (CNF, for short) is a conjunction of clauses C_{1} ∧ C_{2} ∧ … ∧ C_{m}. An assignment is a mapping X → {0, 1} from each variable x_{i} to its value (value 1 represents true and value 0 represents false). For example, for X = {x_{1}, x_{2}, x_{3}}, consider the conjunctive normal form (x_{1} ∨ ¬x_{2}) ∧ (x_{1} ∨ x_{3}). Its two clauses are x_{1} ∨ ¬x_{2} and x_{1} ∨ x_{3}; the first contains the literals x_{1} and ¬x_{2}, and the second contains the literals x_{1} and x_{3}. If an assignment of x_{1}, x_{2}, …, x_{n} can be found that makes at least one literal true in each clause, and thus makes all m clauses true, the instance is satisfiable; otherwise it is unsatisfiable [38]. In the above example, with x_{1} = x_{2} = x_{3} = 1 the value of the conjunctive normal form is (1 ∨ 0) ∧ (1 ∨ 1) = 1 ∧ 1 = 1, so the instance is satisfiable.
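The worked example can be cross-checked by exhaustive search, which simply restates the definition of satisfiability (this is outside the P-system construction; the clause representation as (index, negated) pairs is our own).

```python
from itertools import product

# The example CNF (x1 v ~x2) ^ (x1 v x3): each clause is a list of
# (variable index, negated?) pairs.
cnf = [[(1, False), (2, True)], [(1, False), (3, False)]]

def satisfied(cnf, assign):
    """A literal (i, neg) is true under `assign` iff assign[i] != neg."""
    return all(any(assign[i] != neg for i, neg in clause) for clause in cnf)

# Enumerate all 2^3 assignments and keep the models.
models = [a for a in ({1: v1, 2: v2, 3: v3}
                      for v1, v2, v3 in product([True, False], repeat=3))
          if satisfied(cnf, a)]
# the instance is satisfiable; x1 = x2 = x3 = True is one of its models
```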
The formal definition of SAT problem is as follows.
Problem 1. NAME: SAT.
 INSTANCE: a set of clauses C = {C_{1}, C_{2}, …, C_{m}}, which is built on a Boolean variable set X = {x_{1}, x_{2}, …, x_{n}}.
 QUESTION: is there an assignment of Boolean variables x_{1}, x_{2}, …, x_{n} that can make the value of all clauses true?
SAT(n, m) denotes the set of all instances of the SAT problem with n variables and m clauses. In this section, a uniform solution working in a deterministic way is constructed as a DDSN P system, which solves every SAT(n, m) instance in linear time.
The instance parameters have to enter the SN P system, so the clauses need to be encoded as spikes. For each variable x_{j}, a clause contains either x_{j}, or ¬x_{j}, or neither of the two; different numbers of spikes distinguish these three situations: α_{i,j} = a if x_{j} occurs in C_{i}, α_{i,j} = a^{2} if ¬x_{j} occurs in C_{i}, and α_{i,j} = a^{0} (no spike) if neither occurs.
A clause C_{i} is then represented by α_{i,1} ⋅ α_{i,2} ⋅ … ⋅ α_{i,n}. For instance, the clause ¬x_{1}⋁x_{2}⋁x_{3} is represented by a^{2} ⋅ a ⋅ a.
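The clause-to-spike encoding described above can be sketched directly; the dict-based clause representation and function name here are our own illustration.

```python
def encode_clause(clause, n):
    """Encode one clause over n variables as a list of spike counts.
    `clause` maps a variable index j to False (x_j occurs positively)
    or True (~x_j occurs); absent variables contribute no spike."""
    blocks = []
    for j in range(1, n + 1):
        if j not in clause:
            blocks.append(0)          # neither x_j nor ~x_j: a^0
        else:
            blocks.append(2 if clause[j] else 1)  # ~x_j: a^2, x_j: a
    return blocks

# ~x1 v x2 v x3 over n = 3 variables -> a^2 . a . a
assert encode_clause({1: True, 2: False, 3: False}, 3) == [2, 1, 1]
```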
In order to generate the necessary workspace before computing, the prefix (a^{0} ⋅)^{2n} is added to the front of each spike train.
The formal definition of the DDSN P system for SAT(n, m) problems (shown in Fig 5) is as follows:

Π = (O, H, syn, {n_{i} | i ∈ H}, R, in, out),

where:
 O = {a};
 H = {0, 1, 2, 3, d} ⋃ {in_{xi}, Cx_{i}1, Cx_{i}0 | i = 1, 2, …, n} ⋃ {o_{t1 t2 … tk} | 1 ≤ k ≤ n, t_{1}, t_{2}, …, t_{k} = 0, 1};
 syn = {(3, 2), (2, 1), (1, 2), (1, 0), (3, d)}
⋃{(d, in_{xi}) | i = 1, 2, …, n} ⋃{(in_{xi}, Cx_{i}1), (in_{xi}, Cx_{i}0) | i = 1, 2, …, n}
⋃{(Cx_{1}1, o_{1}), (Cx_{i}1, o_{t1, …, t(i − 1)1}) | i = 2, 3, …, n}
⋃{(Cx_{1}0, o_{0}), (Cx_{i}0, o_{t1, …, t(i − 1)0}) | i = 2, 3, …, n};
 n_{0} = 1, n_{2} = 1, n_{3} = 1, n_{d} = 2m, and the number of spikes in all other neurons is zero;
 in = {σ_{inxi} | i = 1, 2, …, n}, out = {σ_{ot1, t2, …, tn} | t_{1}, t_{2}, …, t_{n} = 0, 1};
 firing rule:
[a → a]_{i}, i = 1, 2
[a → a; 2n − 1]_{3}
[a(a^{2})^{+}/a^{2} → a]_{d}
[a → a]_{inxi}, i = 1, 2, …, n
[a^{2} → a^{2}]_{inxi}, i = 1, 2, …, n
[a^{3} → a^{3}]_{inxi}, i = 1, 2, …, n
[a → a]_{Cxi1}, i = 1, 2, …, n
[a^{3} → a]_{Cxi1}, i = 1, 2, …, n
[a → a]_{Cxi0}, i = 1, 2, …, n
[a^{2} → a]_{Cxi0}, i = 1, 2, …, n
forgetting rule:
[a^{2} → λ]_{2}
[a^{2} → λ]_{Cxi1}, i = 1, 2, …, n
[a^{3} → λ]_{Cxi0}, i = 1, 2, …, n
[a → λ]_{ot1 t2…tn}, t_{1}, t_{2}, …, t_{n} = 0, 1
[a^{2} → λ]_{ot1 t2…tn}, t_{1}, t_{2}, …, t_{n} = 0, 1
…
[a^{n −1} → λ]_{ot1 t2…tn}, t_{1}, t_{2}, …, t_{n} = 0, 1
neuron division rule:
[a]_{0} → [ ]_{o1} ∥ [ ]_{o0}
[a]_{ot1} → [ ]_{ot11} ∥ [ ]_{ot10}, t_{1} = 0, 1
[a]_{ot1 t2} → [ ]_{ot1 t21} ∥ [ ]_{ot1 t20}, t_{1}, t_{2} = 0, 1
…
[a]_{ot1 t2…tn−1} → [ ]_{ot1 t2…tn−11} ∥ [ ]_{ot1 t2…tn−10}, t_{1}, t_{2}, …, t_{n−1} = 0, 1
neuron dissolution rule:
[a^{n}]_{ot1 t2…tn} → δ, t_{1}, t_{2}, …, t_{n} = 0, 1.
Computation starts when the spike trains enter the system through input neurons σ_{inx1}, σ_{inx2}, …, σ_{inxn}, respectively. Neuron σ_{0} and its child neurons need 2n steps to generate 2^{n} neurons (the workspace) that enumerate all assignments of the variables by applying neuron division rules (one neuron per assignment); this is why the prefix (a^{0} ⋅)^{2n} is added to the front of each spike train.
Generation Stage: At step one, neuron σ_{0} holds one spike, and the division rule [a]_{0} → [ ]_{o1} ∥ [ ]_{o0} is applied to generate neurons σ_{o1} and σ_{o0}, which means that an assignment of x_{1} has two choices: 1 or 0. Synapses (1, o_{1}) and (1, o_{0}) are established through the inheritance of synapse (1, 0), and synapses (Cx_{1}1, o_{1}) and (Cx_{1}0, o_{0}) are established through the synapse dictionary syn. Synapse (Cx_{1}1, o_{1}) establishes a channel between the input and the assignments with x_{1} = 1; synapse (Cx_{1}0, o_{0}) establishes a channel between the input and the assignments with x_{1} = 0. At the same time, auxiliary neuron σ_{2} holds one spike, rule [a → a]_{2} is applied and one spike is emitted to neuron σ_{1}; auxiliary neuron σ_{3} holds one spike, rule [a → a; 2n − 1]_{3} is applied and one spike will be emitted to neurons σ_{2} and σ_{d} at step 2n. The system after step one is shown in Fig 6.
At step two, neuron σ_{1} has one spike, the firing rule a → a is applied, and one spike is emitted to neurons σ_{2}, σ_{o1} and σ_{o0}.
At step three, each of neurons σ_{o1} and σ_{o0} holds one spike, and the division rule [a]_{ot1} → [ ]_{ot11} ∥ [ ]_{ot10} (t_{1} = 1, 0) is applied to generate neurons σ_{o11}, σ_{o10}, σ_{o01} and σ_{o00}, which means that an assignment of x_{1} and x_{2} has four choices: 11, 10, 01, 00. Synapses (1, o_{11}), (1, o_{10}), (1, o_{01}) and (1, o_{00}) are established through the inheritance of synapses (1, o_{1}) and (1, o_{0}); synapses (Cx_{1}1, o_{11}), (Cx_{1}1, o_{10}), (Cx_{1}0, o_{01}) and (Cx_{1}0, o_{00}) are established through the inheritance of synapses (Cx_{1}1, o_{1}) and (Cx_{1}0, o_{0}); synapses (Cx_{2}1, o_{11}), (Cx_{2}1, o_{01}), (Cx_{2}0, o_{10}) and (Cx_{2}0, o_{00}) are established through the synapse dictionary syn. Synapses (Cx_{1}1, o_{11}) and (Cx_{1}1, o_{10}) establish channels between the input and the assignments with x_{1} = 1; synapses (Cx_{1}0, o_{01}) and (Cx_{1}0, o_{00}) establish channels between the input and the assignments with x_{1} = 0; synapses (Cx_{2}1, o_{11}) and (Cx_{2}1, o_{01}) establish channels between the input and the assignments with x_{2} = 1; synapses (Cx_{2}0, o_{10}) and (Cx_{2}0, o_{00}) establish channels between the input and the assignments with x_{2} = 0. At the same time, auxiliary neuron σ_{2} holds one spike, rule [a → a]_{2} is applied and one spike is emitted to neuron σ_{1}. The system after step three is shown in Fig 7.
The process repeats in the same manner. At step 2n − 1, the 2^{n} neurons labeled o_{t1 t2…tn} (t_{1}, t_{2}, …, t_{n} = 0, 1) have been generated. The system after step 2n − 1 is shown in Fig 8.
At step 2n, each neuron σ_{ot1 t2…tn} receives one spike emitted from neuron σ_{1} which will be deleted at the next step by the forgetting rule [a → λ]_{ot1 t2…tn}. Neuron σ_{2} receives two spikes (One is emitted from neuron σ_{1}, and another one is emitted from σ_{3}), the forgetting rule a^{2} → λ is applied at step 2n + 1, and no spike will be emitted to neuron σ_{1} later. At the same time, neuron σ_{d} receives one spike emitted from σ_{3}. The system after step 2n is shown in Fig 9.
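The net effect of the generation stage, one output neuron per truth assignment, can be illustrated by enumerating the labels directly (outside the P-system formalism; the helper name is our own).

```python
from itertools import product

def output_labels(n):
    """Labels o_{t1 t2 ... tn} of the 2^n output neurons left by the
    generation stage: all binary strings of length n, prefixed with 'o'."""
    return ["o" + "".join(map(str, bits))
            for bits in product([1, 0], repeat=n)]

labels = output_labels(3)
# 2^3 = 8 labels: o111, o110, ..., o000
```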
Input Stage: At step 2n + 1, the first clause of the conjunctive normal form expression enters the system through input neurons σ_{inxi}, i = 1, 2, …, n. The literal in regard to x_{1} enters neuron σ_{inx1}; the literal in regard to x_{2} enters neuron σ_{inx2}; … the literal in regard to x_{n} enters neuron σ_{inxn}. At the same time, one spike is emitted to neuron σ_{inxi} from neuron σ_{d}.
At step 2n + 2, the spikes in neuron σ_{inxi} are replicated, and are emitted to neurons σ_{Cxi1} and σ_{Cxi0}.
At step 2n + 3, different rules are applied according to the number of spikes in neurons σ_{Cxi1} and σ_{Cxi0}.
For neuron σ_{Cxi1}:
 If one spike is in neuron σ_{Cxi1}, which means that neither x_{i} nor ¬x_{i} is in the clause, rule a → a is applied and one spike is emitted along every synapse leaving neuron σ_{Cxi1}. This indicates that x_{i} = 1 does not contribute to making the clause true.
 If two spikes are in neuron σ_{Cxi1}, which means that x_{i} is in the clause, rule a^{2} → λ is applied and the two spikes are deleted. This indicates that x_{i} = 1 contributes to making the clause true.
 If three spikes are in neuron σ_{Cxi1}, which means that ¬x_{i} is in the clause, rule a^{3} → a is applied and one spike is emitted along every synapse leaving neuron σ_{Cxi1}. This indicates that x_{i} = 1 does not contribute to making the clause true.
For neuron σ_{Cxi0}:
 If one spike is in neuron σ_{Cxi0}, which means that neither x_{i} nor ¬x_{i} is in the clause, rule a → a is applied and one spike is emitted along every synapse leaving neuron σ_{Cxi0}. This indicates that x_{i} = 0 does not contribute to making the clause true.
 If two spikes are in neuron σ_{Cxi0}, which means that x_{i} is in the clause, rule a^{2} → a is applied and one spike is emitted along every synapse leaving neuron σ_{Cxi0}. This indicates that x_{i} = 0 does not contribute to making the clause true.
 If three spikes are in neuron σ_{Cxi0}, which means that ¬x_{i} is in the clause, rule a^{3} → λ is applied and the three spikes are deleted. This indicates that x_{i} = 0 contributes to making the clause true.
Satisfiability Stage: Each neuron σ_{ot1 t2…tn} (t_{1}, t_{2}, …, t_{n} = 0, 1) receives zero or more spikes at step 2n + 3. If a neuron σ_{ot1 t2…tn} receives n spikes, i.e., none of the n variable positions contributes to making the clause true under this assignment, the dissolution rule [a^{n}]_{ot1 t2…tn} → δ is applied at step 2n + 4 to dissolve this neuron (the clause evaluates to false under this assignment, so the assignment cannot be an answer to the SAT instance). Otherwise, at least one literal is true under this assignment, and the assignment is kept to check the next clause.
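The dissolution criterion can be sketched as a spike count: an output neuron collects one spike per variable position that does not help the current clause, and dissolves exactly when the count reaches n. This is a sketch of the checking logic only, with our own naming, not the system itself.

```python
def spikes_received(assignment, clause, n):
    """Spikes received by output neuron o_{t1...tn} when one clause is
    checked. `assignment` is a tuple of n bits; `clause` maps a variable
    index j to True (~x_j occurs) or False (x_j occurs positively)."""
    count = 0
    for j in range(1, n + 1):
        if j not in clause:
            count += 1          # neither x_j nor ~x_j occurs: no help
        elif clause[j] == bool(assignment[j - 1]):
            count += 1          # the literal on x_j is false under t_j
    return count

# Clause x1 v ~x2 over n = 3: the assignment (0, 1, 0) falsifies it,
# so neuron o010 receives 3 = n spikes and would be dissolved.
assert spikes_received((0, 1, 0), {1: False, 2: True}, 3) == 3
```

Under a satisfying assignment the count stays below n, and the spikes that did arrive are cleaned up by the forgetting rules [a^{k} → λ] with k < n.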
Since a SAT instance has m clauses, the satisfiability checking stage lasts for m + 3 steps. If some neurons σ_{ot1 t2…tn} are still in the system at step 2n + m + 3, the labels of these neurons are all the solutions to the instance, i.e., the instance is satisfiable. Otherwise, the instance is unsatisfiable.
It follows that any SAT(n, m) instance can be solved in linear time, and all its solutions can be obtained from this system.
Table 1 compares the number of steps required by our solution with that of other solutions that use neuron division to solve NP-complete problems.
Consider the SAT(3, 3) instance (x_{1} ⋁ x_{2}) ⋀ (¬x_{2} ⋁ x_{3}) ⋀ (¬x_{1} ⋁ x_{2} ⋁ x_{3}), solved by the DDSN P system Π_{3,3}. After 12 computational steps, neurons σ_{o111}, σ_{o101} and σ_{o011} remain, showing that {x_{1} = true, x_{2} = true, x_{3} = true}, {x_{1} = true, x_{2} = false, x_{3} = true} and {x_{1} = false, x_{2} = true, x_{3} = true} are all the solutions to this instance.
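The surviving neuron labels of this instance can be cross-checked by exhaustive search over all 2^3 assignments (an independent check, outside the P-system model; the clause encoding is our own).

```python
from itertools import product

# (x1 v x2) ^ (~x2 v x3) ^ (~x1 v x2 v x3) as (index, negated) pairs.
cnf = [[(1, False), (2, False)],
       [(2, True), (3, False)],
       [(1, True), (2, False), (3, False)]]

def sat(assign):
    """A literal (i, neg) is true iff assign[i-1] != neg."""
    return all(any(assign[i - 1] != neg for i, neg in cl) for cl in cnf)

# Keep the labels o_{t1 t2 t3} of the assignments that satisfy the CNF.
survivors = ["o" + "".join(str(int(b)) for b in bits)
             for bits in product([True, False], repeat=3) if sat(bits)]
```

The brute-force search leaves exactly the labels o111, o101 and o011, matching the three neurons that remain in Π_{3,3}.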
The SN P system with neuron division and budding and the SN P system with neuron division need 21 steps and 26 steps, respectively, to decide that this instance has solutions, while our DDSN P system needs only 12 steps.
SAT problems of different sizes (1 ≤ n, m ≤ 50) are solved using the three systems in Table 1, and the computational steps of each system, computed with MATLAB R2014a, are shown in Figs 10, 11 and 12. As these figures show, the number of computational steps of the DDSN P system is stable and much smaller, especially for larger problem sizes.
3 A Uniform Solution to Subset Sum Problem
The Subset Sum problem is one of the most typical NP-complete problems. Its formal definition is as follows [38].
Problem 2. NAME: SUBSET SUM.
 INSTANCE: a set of positive integers X = {x_{1}, x_{2}, …, x_{n}} and a positive integer S.
 QUESTION: is there a subset B ⊆ X such that ∑_{b ∈ B} b = S?
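The definition can be restated as an exhaustive check that mirrors the 2^{n} neurons the system will generate: enumerate every subset of X and keep those summing to S. This is purely illustrative, outside the P-system model.

```python
from itertools import product

def subset_sum_solutions(X, S):
    """All subsets of X (as bit tuples, 1 = element included) whose
    elements sum to S, found by enumerating all 2^n membership vectors."""
    n = len(X)
    sols = []
    for bits in product([1, 0], repeat=n):
        if sum(x for x, b in zip(X, bits) if b) == S:
            sols.append(bits)
    return sols

# X = {3, 5, 2}, S = 5 has two solutions: {3, 2} and {5}.
sols = subset_sum_solutions([3, 5, 2], 5)
```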
Subset Sum(n) denotes the set of all instances of the Subset Sum problem with n integers. In this section, a uniform solution working in a deterministic way is constructed as a DDSN P system, which solves every Subset Sum(n) instance in linear time.
An integer is represented by the corresponding number of spikes. In order to generate the necessary workspace before computing, the prefix (a^{0} ⋅)^{2n} is added to the front of each spike train.
The formal definition of the DDSN P system for Subset Sum(n) (shown in Fig 13) is as follows:

Π = (O, H, syn, {n_{i} | i ∈ H}, R, in, out),

where:
 O = {a};
 H = {0, 1, 2, 3, 4, s} ⋃ {in_{i}, d_{i1}, d_{i2} | i = 1, 2, …, n} ⋃ {o_{t1 t2 … tk} | 1 ≤ k ≤ n, t_{1}, …, t_{k} = 0, 1};
 syn = {(3, 2), (2, 1), (1, 2), (1, 0), (4, s), (s, 0)}
⋃{(in_{i}, d_{i1}), (in_{i}, d_{i2}), (d_{i1}, 4) | i = 1, 2, …, n}
⋃{(d_{12}, o_{1})} ⋃ {(d_{i2}, o_{t1, …, t(i − 1)1}) | i = 2, 3, …, n};
 n_{0} = 1, n_{2} = 1, n_{3} = 1, and the number of spikes in all other neurons is zero;
 in = {σ_{ini} | i = 1, 2, …, n} ⋃ {σ_{s}}, out = {σ_{ot1, t2, …, tn} | t_{1}, t_{2}, …, t_{n} = 0, 1};
 firing rule:
[a → a]_{i}, i = 1, 2
[a → a; 2n − 1]_{3}
[a^{3}(a^{3})^{+}/a^{3} → a^{3}]_{ini}, i = 1, 2, …, n
[a^{3} → a]_{ini}, i = 1, 2, …, n
[a → a]_{di1}, i = 1, 2, …, n
[a^{3} → a^{3}]_{di2}, i = 1, 2, …, n
[a^{n} → a]_{4}
[a(a^{2})^{+}/a^{2} → a^{2}]_{s}
[a → a]_{s}
forgetting rule:
[a^{2} → λ]_{2}
[a^{3} → λ]_{di1}, i = 1, 2, …, n
[a → λ]_{di2}, i = 1, 2, …, n
[a^{2}(a^{3})^{+}/a^{5} → λ]_{ot1 t2…tn}, t_{1}, t_{2}, …, t_{n} = 0, 1
[a → λ]_{ot1 t2…tn}, t_{1}, t_{2}, …, t_{n} = 0, 1
neuron division rule:
[a]_{0} → [ ]_{o1} ∥ [ ]_{o0}
[a]_{ot1} → [ ]_{ot11} ∥ [ ]_{ot10}, t_{1} = 0, 1
[a]_{ot1 t2} → [ ]_{ot1 t21} ∥ [ ]_{ot1 t20}, t_{1}, t_{2} = 0, 1
…
[a]_{ot1 t2…tn−1} → [ ]_{ot1 t2…tn−11} ∥ [ ]_{ot1 t2…tn−10}, t_{1}, t_{2}, …, t_{n−1} = 0, 1
neuron dissolution rule:
[a(a^{3})^{+}]_{ot1 t2…tn} → δ, t_{1}, t_{2}, …, t_{n} = 0, 1
[a^{2}]_{ot1 t2…tn} → δ, t_{1}, t_{2}, …, t_{n} = 0, 1.
Computation starts when the spike trains enter the system through input neurons σ_{in1}, σ_{in2}, …, σ_{inn} and σ_{s}, respectively. Neuron σ_{0} and its child neurons need 2n steps to generate 2^{n} neurons (the workspace) that enumerate all subsets of x_{1}, x_{2}, …, x_{n} by applying neuron division rules (one neuron per subset); this is why the prefix (a^{0} ⋅)^{2n} is added to the front of each spike train.
Generation Stage: At step one, neuron σ_{0} holds one spike, and the division rule [a]_{0} → [ ]_{o1} ∥ [ ]_{o0} is applied to generate neurons σ_{o1} and σ_{o0}, which means that a subset has two choices regarding x_{1}: x_{1} is included in the subset (represented by 1) or x_{1} is not included (represented by 0). Synapses (1, o_{1}) and (1, o_{0}) are established through the inheritance of synapse (1, 0); synapses (s, o_{1}) and (s, o_{0}) are established through the inheritance of synapse (s, 0); the synapse (d_{12}, o_{1}) is established through the synapse dictionary syn. The synapse between neurons d_{12} and o_{1} establishes a channel between the input and the subsets containing x_{1}. At the same time, auxiliary neuron σ_{2} holds one spike, rule [a → a]_{2} is applied and one spike is emitted to neuron σ_{1}; auxiliary neuron σ_{3} holds one spike, rule [a → a; 2n − 1]_{3} is applied and one spike will be emitted to neuron σ_{2} at step 2n. The system after step one is shown in Fig 14.
At step two, neuron σ_{1} has one spike, the firing rule a → a is applied, and one spike is emitted to neurons σ_{2}, σ_{o1} and σ_{o0}.
At step three, each of neurons σ_{o1} and σ_{o0} has one spike, and the division rule [a]_{ot1} → [ ]_{ot11} ∥ [ ]_{ot10} (t_{1} = 0, 1) is applied to generate neurons σ_{o11}, σ_{o10}, σ_{o01} and σ_{o00}: with regard to x_{1} and x_{2}, a subset has four choices, both x_{1} and x_{2} are included (represented by 11), only x_{1} is included (represented by 10), only x_{2} is included (represented by 01), and neither is included (represented by 00). Synapses (1, o_{11}), (1, o_{10}), (1, o_{01}), (1, o_{00}), (s, o_{11}), (s, o_{10}), (s, o_{01}), (s, o_{00}), (d_{12}, o_{11}) and (d_{12}, o_{10}) are established through the inheritance of synapses (1, o_{1}), (1, o_{0}), (s, o_{1}), (s, o_{0}) and (d_{12}, o_{1}); synapses (d_{22}, o_{11}) and (d_{22}, o_{01}) are established through the synapse dictionary syn. Synapses (d_{12}, o_{11}) and (d_{12}, o_{10}) establish channels between the input and the subsets containing x_{1}; synapses (d_{22}, o_{11}) and (d_{22}, o_{01}) establish channels between the input and the subsets containing x_{2}. At the same time, auxiliary neuron σ_{2} has one spike, rule a → a is applied, and one spike is emitted to neuron σ_{1}. The system after step three is shown in Fig 15.
This process repeats. At step 2n − 1, the 2^{n} neurons labeled o_{t1 t2…tn} (t_{1}, t_{2}, …, t_{n} = 0, 1) have all been generated. The system after step 2n − 1 is shown in Fig 16.
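As an illustrative sketch (not the authors' implementation), the generation stage can be mirrored in a few lines of Python: each division step replaces every neuron label o_{t1…tk} with the two labels o_{t1…tk1} and o_{t1…tk0}, so n divisions, interleaved with the spike-propagation steps (2n system steps in total), yield the 2^n subset neurons:

```python
# Sketch of the generation stage: n division rounds double the
# workspace each time, enumerating every binary label t1...tn.

def generate_workspace(n):
    """Return the labels of the 2^n output neurons after n divisions."""
    labels = [""]  # neuron sigma_0 before any division
    for _ in range(n):
        # division rule [a]_{o t1...tk} -> [ ]_{o t1...tk 1} || [ ]_{o t1...tk 0}
        labels = [t + bit for t in labels for bit in ("1", "0")]
    return ["o" + t for t in labels]

print(generate_workspace(2))  # ['o11', 'o10', 'o01', 'o00']
```

Each label's bits indicate which x_{i} belong to the subset that the neuron represents.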
At step 2n, each neuron σ_{ot1 t2…tn} receives one spike emitted from neuron σ_{1}, which will be deleted at the next step by the forgetting rule [a → λ]_{ot1 t2…tn}. Neuron σ_{2} receives two spikes (one emitted from neuron σ_{1}, the other from neuron σ_{3}), the forgetting rule a^{2} → λ is applied at step 2n + 1, and no spike will be emitted to neuron σ_{1} afterwards. The system after step 2n is shown in Fig 17.
Input Stage: At step 2n + 1, x_{1}, x_{2}, …, x_{n} enter the system through input neurons σ_{ini}(i = 1, 2, …, n). 3x_{1} + 3 spikes (a^{3x1+3}) enter neuron σ_{in1}; 3x_{2} + 3 spikes (a^{3x2+3}) enter neuron σ_{in2};… 3x_{n} + 3 spikes (a^{3xn+3}) enter neuron σ_{inn}.
At step 2n + 2, the firing rule a^{3}(a^{3})^{+}/a^{3} → a^{3} is applied, and three spikes are replicated and emitted to neurons σ_{di1} and σ_{di2}. Spikes in neuron σ_{di1} are forgotten, and at step 2n + 3 the spikes in neuron σ_{di2} are emitted to those neurons σ_{ot1 t2…tn} with synapses going from neuron σ_{di2} to them (these neurons represent the subsets containing the integer x_{i}). This process repeats until only three spikes remain in neuron σ_{ini}.
At step 2n + x_{i} + 2, the firing rule a^{3} → a is applied, and one spike is replicated and emitted to neurons σ_{di1} and σ_{di2}.
At step 2n + x_{i} + 3, the spike in neuron σ_{di1} is emitted to neuron σ_{4}, indicating that all spikes in neuron σ_{ini} have been passed to the neurons σ_{ot1 t2…tn} with synapses going from neuron σ_{di2} to them; the spike in neuron σ_{di2} is forgotten. Up to this step, 3x_{i} spikes have been emitted to the neurons σ_{ot1 t2…tn} that represent the subsets containing the integer x_{i}.
When all input spikes in neurons σ_{ini} have been passed to neurons σ_{ot1 t2…tn} at step 2n + x_{max} + 3 (x_{max} denotes the maximum of the n integers), neuron σ_{4} has received n spikes, and one spike is emitted to neuron σ_{S} at step 2n + x_{max} + 4. Up to this step, the number of spikes in neuron σ_{ot1 t2…tn} is 3Σ, where Σ = t_{1}x_{1} + t_{2}x_{2} + … + t_{n}x_{n} is the sum of the subset the neuron represents.
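As a sanity check (a sketch, under the assumption from the input stage that each x_{i} contributes three spikes to every subset neuron containing it), the spike count a neuron holds after the input stage is three times its subset sum:

```python
def spikes_after_input(X, label):
    """Spike count in neuron o_{t1...tn}: 3 spikes per chosen x_i."""
    bits = label[1:]  # strip the leading 'o'
    return 3 * sum(x for x, t in zip(X, bits) if t == "1")

X = [1, 2, 3, 4]
print(spikes_after_input(X, "o0110"))  # 15 = 3 * (2 + 3)
```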
Checking Stage: At step 2n + x_{max} + 5, 2S + 1 spikes are in neuron σ_{S}, the firing rule a(a^{2})^{+}/a^{2} → a^{2} is applied, and two spikes are emitted to neurons σ_{ot1 t2…tn}. This process lasts for S rounds.
At step 2n + x_{max} + S + 4, only one spike remains in neuron σ_{S}, and this spike is emitted to neurons σ_{ot1 t2…tn}.
There are three rule execution situations in neurons σ_{ot1 t2…tn}.

Σ = S, where Σ = t_{1}x_{1} + … + t_{n}x_{n} is the sum of the subset, so 3S spikes are in neuron σ_{ot1 t2…tn} initially. Two spikes are emitted to this neuron from neuron σ_{S}, then the forgetting rule a^{2}(a^{3})^{+}/a^{5} → λ is applied with 5 spikes consumed, and the number of spikes decreases to 3(S − 1). This process repeats S times, after which all spikes in neuron σ_{ot1 t2…tn} are consumed. The last single spike emitted from neuron σ_{S} is then removed by the forgetting rule a → λ, and the neuron survives.
Σ < S, so 3Σ spikes are in neuron σ_{ot1 t2…tn} initially. As above, each round consumes 5 spikes via a^{2}(a^{3})^{+}/a^{5} → λ, decreasing the count to 3(Σ − 1), 3(Σ − 2), …. This process repeats Σ times, and all spikes in neuron σ_{ot1 t2…tn} are consumed. At the next round, two spikes are emitted to this neuron from neuron σ_{S}, and the neuron dissolution rule [a^{2}]_{ot1 t2…tn} → δ (t_{1}, t_{2}, …, t_{n} = 0, 1) is applied to dissolve this neuron.
Σ > S, so 3Σ spikes are in neuron σ_{ot1 t2…tn} initially. Each round consumes 5 spikes via a^{2}(a^{3})^{+}/a^{5} → λ. This process repeats S times, and 3(Σ − S) spikes remain. At the next step, the last spike in neuron σ_{S} is emitted to this neuron, and the dissolution rule [a(a^{3})^{+}]_{ot1 t2…tn} → δ (t_{1}, t_{2}, …, t_{n} = 0, 1) is applied to dissolve this neuron.
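The three situations above can be traced with a small simulation (a sketch, not the authors' code): starting from the neuron's spike count after the input stage, each of the S rounds adds the two spikes from neuron σ_{S} and, if possible, consumes five; the final single spike then decides between survival and dissolution:

```python
def check_neuron(spikes, S):
    """Simulate the checking stage for one neuron o_{t1...tn}."""
    for _ in range(S):
        spikes += 2              # two spikes arrive from sigma_S
        if spikes >= 5:          # a^2 (a^3)+ / a^5 -> lambda applies
            spikes -= 5
        else:                    # only the 2 fresh spikes: [a^2] -> delta
            return "dissolved"
    spikes += 1                  # the final single spike from sigma_S
    if spikes == 1:              # a -> lambda: neuron survives
        return "survives"
    return "dissolved"           # [a (a^3)+] -> delta

print(check_neuron(15, 5))  # survives:  subset sum 5 equals S = 5
print(check_neuron(9, 5))   # dissolved: subset sum 3 < 5
print(check_neuron(18, 5))  # dissolved: subset sum 6 > 5
```

Only the neurons whose initial count is exactly 3S (i.e., whose subset sums to S) survive all S rounds and the final spike.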
If some neurons σ_{ot1 t2…tn} remain in the system after step 2n + x_{max} + S + 5, the labels of these neurons encode all solutions to the Subset Sum problem instance.
It can be seen that any instance of the Subset Sum problem of size n can be solved in linear time, and all solutions can be read off from this system.
Table 2 compares the number of computational steps of our solution with other solutions that use nondeterminism to solve these NP-complete problems; there, k means that x_{1}, …, x_{n} and S can all be encoded as k-bit binary numbers.
The conventional methods use the nondeterminism of SN P systems to solve the Subset Sum problem: one computational process checks whether a randomly chosen combination of integers is a solution. Such SN P systems can only judge whether a particular subset is an answer; they cannot search the whole solution space to decide whether a Subset Sum instance has any solution at all. Even if all combinations were traversed artificially to decide whether an instance has solutions, 2^{n} − 1 computations would have to be processed; although the time complexity of each computation is constant, the overall time complexity would not be polynomial in n. The proposed DDSN P system solves the Subset Sum problem in linear time, which improves the computational efficiency.
Consider the Subset Sum problem instance with n = 4: X = {1, 2, 3, 4}, S = 5; the DDSN P system Π_{4} is used to solve it. After 22 computational steps, neurons σ_{o0110} and σ_{o1001} remain, which shows that {2, 3} and {1, 4} are the solutions to this instance. The methods proposed in [19, 20, 23] need 165 steps, 330 steps and 270 steps, respectively, to decide this instance.
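The worked instance can be replayed end to end with a short brute-force check (a sketch mirroring the system's behavior, not the system itself): enumerate all 2^n subset labels as in the generation stage and keep those whose sum equals S, i.e., the neurons that survive the checking stage:

```python
from itertools import product

# Instance from the example: X = {1, 2, 3, 4}, S = 5.
X, S = [1, 2, 3, 4], 5
surviving = [
    "o" + "".join(bits)
    for bits in product("10", repeat=len(X))
    if sum(x for x, t in zip(X, bits) if t == "1") == S
]
print(surviving)  # ['o1001', 'o0110']
```

The surviving labels o1001 and o0110 correspond to the subsets {1, 4} and {2, 3}, matching the neurons σ_{o1001} and σ_{o0110} that remain after 22 steps.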
A series of Subset Sum instances X = {1, 2, …, n}, S = 5 were solved using the five systems in Table 2, and the computational steps of these systems, computed with MATLAB R2014a, are shown in Fig 18. The DDSN P system requires far fewer steps, especially as the problem size grows.
4 Conclusions
The new mechanism called neuron dissolution is introduced into the framework of SN P systems in this work. By this mechanism, redundant neurons can be dissolved immediately, saving computational resources: more work can be done with the same resources, or the same work can be done with fewer resources. We also proved that this new variant of SN P systems can obtain all solutions to NP-complete problems (invalid solutions are eliminated by neuron dissolution), such as the SAT problem and the Subset Sum problem, in linear time, which broadens the application fields of SN P systems to problems such as register allocation.
This work provides a new approach to storing information in SN P systems, which can be used to store other kinds of information. The dissolution rule can be applied in many situations to decrease the space complexity of an SN P system. This variant of SN P systems can be used to solve other NP-complete problems and application problems. It is also an attractive direction to introduce other biological phenomena into SN P systems to reduce computational resources and enhance computational space efficiency.
Author Contributions
 Conceptualization: YZ XL.
 Formal analysis: YZ WW.
 Funding acquisition: XL.
 Methodology: YZ XL.
 Project administration: XL.
 Resources: WW.
 Software: YZ WW.
 Supervision: XL.
 Validation: YZ WW.
 Visualization: XL.
 Writing – original draft: YZ WW.
 Writing – review & editing: XL.
References
 1. Ionescu M, Păun G, Yokomori T. Spiking neural P systems. Fundamenta Informaticae. 2006 Aug;71(2):279–308.
 2. Cabarle F G C, Adorna H N, PérezJiménez M J, Song T. Spiking neural P systems with structural plasticity. Neural Computing and Applications. 2015 Feb;26(8):1905–1917.
 3. Cabarle F G C, Buño K C, Adorna H N. On the delays in spiking neural P systems. Symposium on Mathematical Aspects of Computer Science. 2012 Dec;12(12):25–29.
 4. Cabarle F G C, Buño K C, Adorna H N. Time after time: notes on delays in spiking neural P systems. Theory and Practice of Computation. 2012 Sep;7:82–92.
 5. Cabarle F G C, Adorna H N, PérezJiménez M J. Sequential spiking neural P systems with structural plasticity based on max/min spike number. Neural Computing and Applications. 2016 Jul;27(5):1337–1347.
 6. Song T, Pan L. Spiking neural P systems with rules on synapses working in maximum spikes consumption strategy. IEEE Transactions on Nanobioscience. 2015 Jan;14(1):38–44. pmid:25389243
 7. Song T, Pan L. Spiking neural P systems with rules on synapses working in maximum spiking strategy. IEEE Transactions on Nanobioscience. 2015 Jun;14(4):465–477.
 8. Song T, Zou Q, Zeng X, Liu X. Asynchronous spiking neural P systems with rules on synapses. Neurocomputing. 2015 Mar;151(1):1439–1445.
 9. Song T, Xu J, Pan L. On the universality and non-universality of spiking neural P systems with rules on synapses. IEEE Transactions on Nanobioscience. 2015 Dec;14(8):960–966. pmid:26625420
 10. Pan L, Zeng X, Zhang X, Jiang Y. Spiking neural P systems with weighted synapses. Neural Processing Letters. 2012 Feb;35(1):13–27.
 11. Tu M, Wang J, Peng H, Shi P. Application of adaptive fuzzy spiking neural P systems in fault diagnosis model of power systems. Chinese Journal of Electronics. 2014 Jan;23(1):87–92.
 12. Zhang G, Rong H, Neri F, PérezJiménez M J. An optimization spiking neural P system for approximately solving combinatorial optimization problems. International Journal of Neural Systems. 2014 Aug;24(05):1–16.
 13. Zhang X, Pan L, Păun A. On universality of axon P systems. IEEE Transactions on Neural Networks and Learning Systems. 2015 Nov;26(11):2816–2829. pmid:25680218
 14. Zeng X, Zhang X, Zhang J, Liu J. Simulating spiking neural P systems with circuits. Journal of Computational and Theoretical Nanoscience. 2015 Sep;12(9):2023–2026.
 15. Song T, Zheng P, Wong M L D, Wang X. Design of logic gates using spiking neural P systems with homogeneous neurons and astrocyteslike control. Information Sciences. 2016 Dec; 372: 380–391.
 16. Chen H, Ionescu M, Ishdorj T O. On the efficiency of spiking neural P systems. Fourth Brainstorming Week on Membrane Computing. 2006 Jan;1:195–206.
 17. Ishdorj T O, Leporati A. Uniform solutions to SAT and 3SAT by spiking neural P systems with precomputed resources. Natural Computing. 2008 Dec;7(4):519–534.
 18. Leporati A, GutiérrezNaranjo M A. Solving Subset Sum by spiking neural P systems with precomputed resources. Fundamenta Informaticae. 2008 Nov;87(1):61–77.
 19. Leporati A, Zandron C, Ferretti C, Mauri G. Solving numerical NPcomplete problems with spiking neural P systems. Membrane Computing. 2007 Jun;4860:336–352.
 20. Leporati A, Mauri G, Zandron C, Păun Gh, PérezJiménez M J. Uniform solutions to SAT and Subset Sum by spiking neural P systems. Natural Computing. 2009 Dec;8(4):681–702.
 21. Leporati A, Zandron C, Ferretti C, Mauri G. On the computational power of spiking neural P systems. International Journal of Unconventional Computing. 2009 Jan;5(5):459–473.
 22. Cabarle F G C, Hernandez N H S, Martínezdel Amor M A. Spiking Neural P Systems with Structural Plasticity: Attacking the Subset Sum Problem. Membrane Computing. 2015 Aug;9504:106–116.
 23. Song T, Luo L, He J, Chen Z, Zhang K. Solving subset sum problems by timefree spiking neural P systems. Applied Mathematics and Information Sciences. 2014 Jan;8(1):327–332.
 24. Song T, Zheng H, He J. Solving vertex cover problem by tissue P systems with cell division. Applied Mathematics and Information Science. 2014 Jan;8(1):333–337.
 25. Pan L, Păun Gh, PérezJiménez M J. Spiking neural P systems with neuron division and budding. Science China Information Sciences. 2011 Aug;54(8):1596–1607.
 26. Wang J, Hoogeboom H J, Pan L. Spiking neural P systems with neuron division. Membrane Computing. 2011 Aug;361–376.
 27. Galli R, Gritti A, Bonfanti L, Vescovi A L. Neural stem cells an overview. Circulation Research. 2003 May;92(6):598–608. pmid:12676811
 28. Brown D A, Yang N, Ray S D. Encyclopedia of Toxicology (Third Edition). Amsterdam: Elsevier; 2014.
 29. Păun Gh. Computing with membranes. Journal of Computer and System Sciences. 2000 Aug;61(1):108–143.
 30. Păun Gh, Rozenberg G, Salomaa A. The Oxford handbook of membrane computing. Oxford: Oxford University Press; 2010.
 31. Song T, Pan L. Spiking neural P systems with request rules. Neurocomputing. 2016 Jun;193(12):193–200.
 32. Wang X, Song T, Gong F, Zheng P. On the computational power of spiking neural P systems with selforganization. Scientific Reports. 2016 Jun;6:27624. pmid:27283843
 33. Song T, Wang X. Homogenous spiking neural P systems with inhibitory synapses. Neural Processing Letters. 2015 Aug;42(1):199–214.
 34. Song T, Liu X, Zeng X. Asynchronous spiking neural P systems with antispikes. Neural Processing Letters. 2015 Dec;42(3):633–647.
 35. DíazPernil D, Berciano A, PenaCantillana F, GutiérrezNaranjo M A. Segmenting images with gradientbased edge detection using membrane computing. Pattern Recognition Letters. 2013 Jun;34(8):846–855.
 36. Păun Gh, Păun R. Membrane computing and economics: numerical P systems. Fundamenta Informaticae. 2006 Sep;73(1–2):213–227.
 37. Pan L, Păun Gh, Song B. Flat maximal parallelism in P systems with promoters. Theoretical Computer Science. 2016 Apr;623(11):83–91.
 38. Garey M R, Johnson D S. Computers and intractability: a guide to the theory of NP-completeness. San Francisco: WH Freeman and Co.; 1979.