Humans share with animals, both vertebrates and invertebrates, the capacity to sense, already at birth, the number of items in their environment. The pervasiveness of this skill across the animal kingdom suggests that it should emerge in very simple populations of neurons. The current modelling literature, however, has struggled to provide a simple architecture carrying out this task: most proposals suggest that number sense emerges in multi-layered complex neural networks, typically requiring supervised learning, while simple accumulator models fail to predict Weber’s law, a common trait of human and animal numerosity processing. We present a simple quantum spin model with all-to-all connectivity, where numerosity is encoded in the spectrum after stimulation with a number of transient signals occurring in a random or orderly temporal sequence. We use a paradigmatic simulational approach borrowed from the theory and methods of open quantum systems out of equilibrium, as a possible way to describe information processing in neural systems. Our method captures many of the perceptual characteristics of numerosity in such systems. The frequency components of the magnetization spectra at harmonics of the system’s tunneling frequency increase in number with the number of stimuli presented. Amplitude decoding of each spectrum, performed with an ideal-observer model, reveals that the system follows Weber’s law. This contrasts with the well-known failure of linear-system or accumulator models to reproduce Weber’s law.
Citation: Yago Malo J, Cicchini GM, Morrone MC, Chiofalo ML (2023) Quantum spin models for numerosity perception. PLoS ONE 18(4): e0284610. https://doi.org/10.1371/journal.pone.0284610
Editor: Salvatore Lorenzo, Universita degli Studi di Palermo, ITALY
Received: December 23, 2022; Accepted: April 4, 2023; Published: April 25, 2023
Copyright: © 2023 Yago Malo et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data underlying the results presented in the study are available from: DOI 10.5281/zenodo.7692492.
Funding: JYM was supported by the European Social Fund REACT EU through the Italian national program PON 2014-2020, DM MUR 1062/2021. GMC was supported by Horizon 2020 European Research Council Advanced Grant GenPercept No. 832813 (to D.C.B.), Italian Ministry of Education PRIN2017 Grants 2017SBCPZY, and FLAG-ERA Joint Transnational Call 2019 Grant DOMINO. MCM was supported by GenPercept ERC-adv no. 832813 and PRIN 2017.
Competing interests: The authors have declared that no competing interests exist.
Humans and animals possess the ability to estimate how many objects are present in a given space or in a time interval, that is, their numerosity. Humans perform this task remarkably well, with an error rate of about 20% over a very large range of items, up to 200 [1–4].
Determining the number of elements in a set is traditionally associated with counting. However, it emerges as a perceptual function taking place even without attention and with higher precision than related perceptual functions (such as area and density estimation) [5, 6]. Several neurophysiological studies have demonstrated that neurons selective to numerosity do exist in the primate brain, and that their tuning is proportional to the preferred numerosity. Remarkably, it is known that the task-associated uncertainty increases linearly with the numerosity itself, a law named after Weber. Weber’s law holds regardless of the modality of presentation, i.e. whether numbers are delivered as distinct items in a single display or as a sequence of items in the visual, auditory or tactile modality, as well as in the same position or in different ones. All these observations point to Weber’s law as a hallmark of the central mechanism that estimates numerosity from different sensory formats. Indeed, Weber’s law fails only under particular circumstances [3, 8], such as when items are too close together to be segregated, in which case the mechanisms for estimation are based on the visual-texture grain and follow a square-root law. Interestingly, individual performance with sparse displays is predictive of the individual’s mathematical literacy, whereas performance with high-clutter displays is not. This suggests that the perceptual mechanisms for rapid estimation of sparse displays are core to mathematical knowledge, whereas those that enable perception in high-clutter displays are not.
It is also noteworthy that Weber’s law cannot be attained by the simplest model of numerosity perception, namely one that counts the number of items in a region of space (or events in a period of time). In fact, the statistics of the latter follow a Poisson distribution, implying that the variance scales with numerosity and hence that the standard deviation scales with its square root. This is a well-known problem in time perception, where a central assumption is a Poisson pacemaker: here, Weber’s law has been modelled only by resorting to ad-hoc assumptions; otherwise the model resembles the signature behaviour observed with cluttered presentations [10, 11]. Currently, the neural mechanisms allowing Weber’s law over such a large range of numerosities and conditions are not yet known.
Artificial Neural Networks have been demonstrated to be a valid approach to the simulation of many aspects of perception. Pioneering work by John Hopfield, employing statistical mechanics, demonstrated that it was possible to study the behaviour of unsupervised networks comprised of simple processing units, based on spin-glass models from physics. Interestingly, Stoianov and Zorzi, using a network of this kind, successfully modelled human processing of numerosity. The network, comprising one input layer and two hidden layers, and trained in the classification of two-dimensional stimuli containing various numbers of objects, spontaneously built a representation of numerosity, which could then be decoded by a simple linear classifier. If, in this model, the representations read out by the classifier were used to perform a numerosity comparison, the behaviour would comply with Weber’s law.
More recently, Nasr et al. have found that some units of a deep convolutional neural network (CNN), designed and trained for object recognition, could also perform number discrimination on static displays. Using 8 convolutional layers for local filtering and 5 pooling layers for aggregated response, this CNN exhibited numerosity recognition in around 10% of its units, with approximate Weber’s-law behaviour. Similar conclusions have been drawn with untrained networks, once again involving a limited number of units. While these results are encouraging, a number of questions arise; in particular, whether numerosity perception is a global property of such a neural network, since it is encoded only in a small percentage of a highly complex architecture. Another important problem involves the generalization to dynamic stimuli that unfold over space and time.
On the other hand, the last decade has seen the emergence of quantum models to tackle non-linear complex dynamics. This quantum-like paradigm is based on the idea that the mathematical toolbox of quantum mechanics can describe phenomena which do not need to be quantum. Here, we note that while the classical models introduced by Hopfield and our quantum mechanical model use similar spin Hamiltonians, the approaches are substantially different. The classical models focus on using adaptable statistical structures [19, 20], aimed at defining a flexible network connectivity suited for learning models to encode the input information. In contrast, our quantum spin model highlights the fact that minimal, symmetric, small-sized networks with pre-determined connectivity are enough to recover input information.
Examples of the use of quantum models include the study of the generation of persistent quantum-chaotic patterns at a microscopic scale and of the amplification of quantum effects to a macroscopic scale [16, 22]. Quantum versions of the IIT theory of consciousness [23, 24] and of information processing have also been proposed. General features have been explored by using dissipative quantum models of the brain, driven by entanglement, and the corresponding use of quantum field theory to model brain functional activity. Mathematical methods of quantum theory, especially quantum measurement theory, have been used to describe information processes in biosystems, and then applied to model combinations of cognitive effects and gene regulation. Quantum-inspired techniques in machine learning have also been introduced in psychology, leading to the field of so-called quantum cognition, hinging instead on the mathematical structure of quantum probability. This approach has been used to address cognitive phenomena, such as reinforcement learning during human decision-making, known to be hard to describe by means of classical probability theory. For example, when incompatible events are involved, incompatibility produces superposition states of uncertainty which eventually result in violations of classical probability laws. Along these lines, it has been highlighted how, to some extent, the brain could work as a not only classical but also quantum Bayesian-inference machine [32, 33]. Moreover, several proposals for the use of quantum spin models to describe neuronal activity have been discussed [12, 34].
Such approaches do not necessarily imply the emergence of microscopic quantum effects in the brain. This timely area of research in its own right [35–37] has recently been revisited on the basis of increasing experimental efforts [38, 39], also in view of proposed theories. This flourishing research field is developing in parallel with advancements in quantum biology, following tremendous progress in experimental methods to investigate energy-excitation transfer in light-harvesting complexes.
Here, we use quantum mechanics as a formal toolbox to describe the highly dynamical complexity underlying the encoding of numerosity perception. We present an example of a coarse-grained quantum model as a mathematical toolbox to map the information processing of a network of neurons. We find that, under sufficiently general conditions, a quantum network with a very simple architecture is capable of counting the number of events as a global property, even when these excitations are introduced randomly in space and/or in time, with no need for training, using standard agnostic decoding.
Our main goal is to design a successful model for numerosity perception as an intrinsic and very general tool for dynamical stimuli.
When dealing with a network of neurons and its mapping onto a quantum model, a rather natural idea is to associate with each neuron an on-off state of activity which, in the quantum domain, can be described in terms of a spin 1/2. Taking this as a starting point, our modelling is developed from a minimal set of essential traits and functions that the network should possess, guided by the observation that numerosity perception manifests as a global response and an inherent property of the network, robust with respect to randomness in space and time and to dissipation. In the first column of Table 1 we provide the list of the essential neuronal functions we consider relevant to address numerosity perception. The second column of Table 1 reports the mapping onto the corresponding objects or functions of the quantum model.
Quantum spin model
Based on the mapping in Table 1, we consider a 1D spin-1/2 chain, see Fig 1, with variable coupling ranging from simple nearest-neighbour (n.n.) to all-to-all (a.a.) connectivity. Excitations in the system, which we model as ↑-polarised spins, can be transferred to connected sites via the exchange coupling. Moreover, the system is subject to an energy offset whenever there are several excitations in near vicinity. The system Hamiltonian is then given by:

H = Σ_{i,j} Jij (σ⁺_i σ⁻_j + σ⁻_i σ⁺_j) + Σ_{i,j} Δij σᶻ_i σᶻ_j (1)

where σ^{x,y,z}_i correspond to the Pauli matrices on site i ∈ (1, M), σ±_i are the corresponding raising and lowering operators, and Jij are the local exchange amplitudes between connected spins. Here we consider Jij = J = 1 whenever two sites are linked and zero otherwise. This means Jij = 1 if |i − j| = 1 for nearest neighbours, and Jij = 1 ∀i, j for the all-to-all coupling. Δij represent the off-site energy offsets, defined as:

Δij = Δ0 exp[−(i − j)²/(2σ²)] (2)

with σ a constant that determines the width of the interaction potential between sites, and Δ0 the overall amplitude of such potential.
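As an illustrative sketch (not the authors' code; the function names and the exact Gaussian normalisation of Δij are our assumptions), the Hamiltonian of Eqs 1 and 2 can be built explicitly for small M with standard Pauli algebra:

```python
import numpy as np

# Single-spin operators: sigma^z, raising sigma^+, lowering sigma^-
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
I2 = np.eye(2, dtype=complex)

def site_op(op, i, M):
    """Embed a single-site operator at site i of an M-spin chain."""
    mats = [I2] * M
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(M, J=1.0, delta0=0.0, sigma=1.0, all_to_all=True):
    """H of Eq 1 with exchange Jij (n.n. or a.a.) and Gaussian offsets of Eq 2."""
    H = np.zeros((2 ** M, 2 ** M), dtype=complex)
    for i in range(M):
        for j in range(i + 1, M):
            Jij = J if (all_to_all or j - i == 1) else 0.0
            Dij = delta0 * np.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))
            H += Jij * (site_op(sp, i, M) @ site_op(sm, j, M)
                        + site_op(sm, i, M) @ site_op(sp, j, M))
            H += Dij * site_op(sz, i, M) @ site_op(sz, j, M)
    return H
```

For M = 7 the Hilbert-space dimension is 2⁷ = 128, small enough for exact evolution of the resulting dense matrix.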
(a) Diagram of the spin system with variable connectivity: we present the case of nearest-neighbour and all-to-all coupling; (b) Diagrammatic representation of the Hamiltonian and dissipative terms: each spin can propagate an excitation to its neighbours through the exchange term with amplitude J. Each spin experiences an energy offset given by the state of its neighbours; this energy shift follows a Gaussian profile centred on each spin with amplitude Δ0 (c). Finally, each spin can interact with different dissipative channels (d), leading to excitation losses and dephasing with a rate γi.
In the presence of coupling to an external environment, the dynamics of the system is described by its density operator ρ(t), whose evolution is governed by the Gorini–Kossakowski–Sudarshan–Lindblad (GKSL) equation:

dρ/dt = −i[H, ρ] + Σ_m γm (Jm ρ Jm† − ½{Jm† Jm, ρ}) (3)

where Jm represents the m-th dissipation channel and γm its corresponding dissipative rate. In our case, we consider two possible dissipative sources: excitation losses, corresponding here to spin flips with jump operator σ⁻_i; and dephasing, due to collisional processes limiting the quantum coherence of the coarse-grained system model, with jump operator σᶻ_i.
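The GKSL evolution of Eq 3 can be integrated directly for small systems. The sketch below (our own minimal implementation, with a fixed-step Runge–Kutta integrator) makes each term of the equation explicit:

```python
import numpy as np

def lindblad_rhs(rho, H, jumps, rates):
    """Right-hand side of Eq 3: unitary part plus one dissipator per channel."""
    drho = -1j * (H @ rho - rho @ H)
    for Jm, gm in zip(jumps, rates):
        JdJ = Jm.conj().T @ Jm
        drho += gm * (Jm @ rho @ Jm.conj().T - 0.5 * (JdJ @ rho + rho @ JdJ))
    return drho

def evolve_rk4(rho, H, jumps, rates, dt, steps):
    """Fixed-step fourth-order Runge-Kutta integration of the master equation."""
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, jumps, rates)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, jumps, rates)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, jumps, rates)
        k4 = lindblad_rhs(rho + dt * k3, H, jumps, rates)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho
```

For a single decaying spin (H = 0, jump σ⁻, γ = 1) this reproduces the exponential decay of the excited population, a useful sanity check before coupling M spins.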
Note that the Hamiltonian we have introduced in Eq 1 is a paradigmatic model for quantum magnetism, introduced initially by Heisenberg and first solved by Yang and Yang and by Baxter. The model has a rich phase diagram exhibiting (anti)ferromagnetism along both the axial and planar directions. Continuous developments in both numerical and analytical tools have kept the model relevant in both closed and dissipative scenarios to this day, including for the simulation of systems unrelated to quantum magnetism, as in our case.
In order to produce the numerical simulations, we take advantage of stochastic unravelling methods, in particular quantum trajectories, which allow us to reduce the numerical complexity of the computation at the cost of computing several trajectories. Details on the method can be found in the literature.
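A minimal version of the quantum-trajectory unravelling can be sketched as follows. This is a simplified first-order jump scheme of our own, not the authors' implementation; production codes use higher-order integrators and waiting-time sampling:

```python
import numpy as np

def trajectory(psi0, H, jumps, rates, dt, steps, rng):
    """First-order quantum-jump unravelling of Eq 3: deterministic evolution
    under H_eff = H - (i/2) sum_m gamma_m Jm^dag Jm, interrupted by jumps."""
    psi = psi0.astype(complex)
    Heff = H - 0.5j * sum(g * (Jm.conj().T @ Jm) for Jm, g in zip(jumps, rates))
    states = []
    for _ in range(steps):
        psi_new = psi - 1j * dt * (Heff @ psi)   # Euler step (sketch-level accuracy)
        norm2 = np.vdot(psi_new, psi_new).real
        if rng.random() > norm2:
            # jump: choose a channel with probability ~ gamma_m <Jm^dag Jm>
            w = np.array([g * np.vdot(psi, (Jm.conj().T @ Jm) @ psi).real
                          for Jm, g in zip(jumps, rates)])
            m = rng.choice(len(jumps), p=w / w.sum())
            psi = jumps[m] @ psi
        else:
            psi = psi_new
        psi = psi / np.linalg.norm(psi)
        states.append(psi.copy())
    return states
```

Averaging observables over many such trajectories recovers the GKSL density-matrix result, while each run stores only a state vector rather than a density matrix.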
We compute the time evolution of the system given by (3) for a series of relevant parameters, with the aim of building intuition on the expected dynamics in such a spin system.
In Fig 2, we present the evolution of the magnetization for an example system with M = 7 spins, where at the initial time t = 0 one of the spins is polarised in the ↑ direction while the rest remain in the resting state corresponding to the ↓ orientation. As time progresses, the excitation can travel along the system due to the exchange coupling. In the nearest-neighbour (n.n.) case, shown in Fig 2a–2d, the excitation travels in a well-defined cone before covering the whole system and then continues to travel back and forth as time evolves, with revivals in which the excitation returns to the original site after spreading across the whole system. The presence of the interaction Δ, Fig 2b, modulates the profile of the evolution but does not change the overall behaviour. This is true for a wide range of interaction parameters (Δ0, σ), as seen in Fig 2c where we present, as an example, a wider interaction profile. The addition of dissipation γl, Fig 2d, in the form of spontaneous spin decay, uniformly reduces the signal in space and time, as expected.
(a) Evolution of the local magnetisation for a system with M = 7 sites, J = 1, Δ0 = 0, γl = 0, starting from a single excitation (spin up) on the middle site at t = 0 and nearest-neighbour (n.n.) coupling. The dashed line highlights the magnetisation spreading in a light-cone manner; (b) same as (a) with interaction Δ0 ≠ 0; (c) same as (a) with a wider interaction profile; (d) same as (a) for γl = 0.1; (e)-(h) same as (a)-(d) for the all-to-all (a.a.) case. In the n.n. case, we can observe that the excitation propagates in a light cone until reaching the boundaries, and then oscillates back and forth. The addition of interaction leads to space modulations that build over time, changing the magnetisation quantitatively but without affecting the general behaviour. Inclusion of a spin decay rate γl ≠ 0 leads to the loss of the excitation over time. In contrast, the a.a. case does not display any excitation cone: the spin up evenly spreads to all the neighbours and bounces back and forth with a constant frequency; including interaction, we again observe the appearance of space modulation in the propagation profile.
On the other hand, the all-to-all (a.a.) coupling scenario illustrated in Fig 2e–2h is remarkably different, and the dynamics are much simpler. Here, the excitation is evenly spread into all the other sites—as they are all connected—returning to the initial site at a rate ∝ J, that is, the tunneling time. We can thus interpret all the initially ↓ sites as a single “supersite” to which the excitation is evenly transferred, before returning in this oscillatory manner. In the presence of interaction, Fig 2f and 2g, the rate of tunneling varies between the individual sites and, as a result, some non-uniform spreading is observed as time evolves. The role of dissipation is similar to the n.n. case, reducing the overall excitations in the system uniformly over time. Most of the highlighted features remain unchanged when we consider multiple excitations in the system. For the sake of completeness, we include several examples in the S1 Appendix, see Fig 7.
We have so far focused on the case of adding one single excitation to the system. However, the most interesting aspect of the model comes into play when we consider several spin flips along the time evolution. In particular, we observe that the power spectrum of the time signals contains information on the number of spin flips that took place during that evolution. In Fig 3, we consider the frequency spectrum of the magnetisation on a given site for a chosen time window of length 10 ≤ tJ ≤ 20. In the initial time interval tJ ≤ 10, excitations in the form of spin flips on sites i = 1, 3, 5, respectively, were added to the system at fixed times. Here, we observe that in the n.n. case the spectrum has little dependence on the number N of spin-↑ in the evolution: we find a set of narrow peaks at low frequencies that change slightly in width and amplitude, with no consistent dependence on N.
Amplitude spectrum of the magnetisation on site i = 5 as a function of time in a system with M = 7 sites, for a varying number of spin flips during the evolution, N = 1, 2, 3, displayed from left to right. In every panel we compare the results for nearest-neighbour and all-to-all coupling. The number of peaks and the overall behaviour of the spectrum show small differences in the n.n. case, in contrast to the a.a. case, where every time a spin flip connects to a new magnetisation sector, a new peak appears in the spectrum at a constant distance, showing the potential ability of the system to count. In both cases, the time domain used was 10 ≤ tJ ≤ 20, after the events occurring on sites i = 1, 3, 5, respectively, at fixed times in the initial window 0 ≤ tJ ≤ 10.
When the system is all-to-all coupled, instead, we observe the appearance of peaks at equidistant frequencies for every additional spin flip. The temporal evolution of the magnetisation contains information on the numerosity at each site: a global increase in the number of excitations does increase the number of peaks with low-frequency harmonic components. Note that, due to the up-down symmetry of the model, we can only distinguish up to N ≤ M/2 excitations, as a number N of spin-↑ higher than M/2 can be regarded as simply M − N spin-↓ particles.
We have presented here the case with no dissipation. However, the results are comparable if we consider any time window in which dissipation has not yet washed all of the excitations out of the system. We have also verified that the spectrum peaks do not depend on the specific time of, or interval between, the measurements, nor on the rotation angle, the location of the spin flips, or the length of the time signal, with the only variation being their relative amplitude; this makes them a robust feature. Details on these findings are in the Supporting information, see Figs 8 and 9 in S1 Appendix.
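The spectral read-out described above amounts to a discrete Fourier transform of the site magnetisation. As a sketch (with a synthetic harmonic signal standing in for the magnetisation trace), the equidistant peaks can be counted directly from the one-sided amplitude spectrum:

```python
import numpy as np

def amplitude_spectrum(signal, dt):
    """One-sided amplitude spectrum of a real, uniformly sampled time signal."""
    amps = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs, amps

# Synthetic stand-in for a magnetisation trace: N harmonics of a base frequency,
# mimicking the equidistant peaks that appear with N spin flips (a.a. case).
dt = 0.01
t = np.arange(0, 10, dt)
signal = sum(np.cos(2 * np.pi * k * t) for k in range(1, 4))  # N = 3 events
freqs, amps = amplitude_spectrum(signal, dt)
n_peaks = int(np.sum(amps > 0.25))  # each unit harmonic contributes amplitude 0.5
```

Counting bins above a threshold recovers N = 3 here; in the actual protocol the whole spectrum, rather than a peak count, is fed to the decoder.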
As we discussed, Fig 3 shows that a larger numerosity of inputs introduces lower-frequency components in the temporal evolution of the magnetization. Thus, to validate that the information for all numerosities is adequately encoded in the magnetization profile, we applied an ideal-observer model based on a supervised classifier. In Fig 4 we show an outline of our estimation protocol. In order to estimate numerosity in our system, we first record the local magnetisation, given by the expectation value of the observable σᶻ_i on every site as a function of time, whose profiles resemble the patterns in Fig 2. We record the magnetisation in a fixed time window after all the stimuli have been added to the system, e.g. 10 ≤ tJ ≤ 20 in Fig 3. This is displayed in Fig 4.1.
(1) The local time signal of the magnetisation is registered in a chosen time window, after the excitations have entered the system; (2) The spectrum of the signal is computed and then averaged over the number M of sites in the network; (3) Averaging over realisations with excitations at random times, locations, and random angles produces a spectrum template for each numerosity; (4) Sample runs are then compared with the templates, using agnostic decoding; (5) The estimated errors are compared for different numerosities to assert whether the system follows the predicted Weber’s law.
As we deliver stimuli randomly over space and time, we compute the spectrum from the local magnetisation at each site and then produce a space-averaged spectrum for every single trajectory.
In order to decode the information embedded in the power spectra, we analyze them through an ideal-observer approach that is agnostic to the signal having a quantum or classical origin. In this approach, the decoder assesses the similarity of an input spectrum to those in a library of spectra which have been previously learned, and which inform the decoder about the average behaviour of the system in response to each possible input. This method requires no knowledge of the internal processes that give rise to the specific states from a given input. It simply classifies the current internal state, comparing it to a library of previously learned power spectra, averaged over a fixed number of simulations of order 10² (200 in the case of our results), each associated with a given numerosity, and eventually decodes it as the numerosity which on average produces the most similar internal state. See Fig 4.3.
Once the system has learned the average behaviour for each numerosity, we consider an ideal decoder for the power spectrum of a new sample with a yet-to-be-decoded number of spin flips. The similarity is evaluated by correlating the spectrum of the new input signal with those in the library representing the average spectrum for each numerosity. The template associated with the highest correlation is then selected. All correlations are accepted and no threshold is used.
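The template-matching step itself is a few lines of code. In the sketch below the library is synthetic (harmonic spectra standing in for the trajectory-averaged templates); the correlation-and-argmax rule is the only substantive ingredient:

```python
import numpy as np

def decode_numerosity(sample_spectrum, templates):
    """Return the numerosity whose library template correlates best
    with the sample spectrum (ideal observer, no threshold)."""
    corrs = {n: np.corrcoef(sample_spectrum, tmpl)[0, 1]
             for n, tmpl in templates.items()}
    return max(corrs, key=corrs.get)

# Synthetic library: amplitude spectra with n equidistant harmonic peaks,
# standing in for the trajectory-averaged templates of the quantum model.
t = np.arange(0, 10, 0.01)

def template(n):
    sig = sum(np.cos(2 * np.pi * k * t) for k in range(1, n + 1))
    return np.abs(np.fft.rfft(sig))

library = {n: template(n) for n in (1, 2, 3, 4)}
```

A noisy sample built from the n = 3 template is still decoded as 3, since the correlation is dominated by the shared peak structure.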
In order to measure how the performance of the model varies as a function of the underlying numerosity, we run a virtual psychophysical experiment, as illustrated in Fig 4.4, in which two numerosities are compared: one as a base reference and one as a variable. Here, the ideal observer has to decide which of the two stimuli is more numerous, by applying the classifier illustrated above.
The probability for the decoder to classify the variable stimulus as the more numerous is a monotonic function of the variable stimulus’s numerosity, well approximated by a cumulative Gaussian function (the integral of a Gaussian), as described by the Probit model:

P(x) = ½ [1 + erf((x − μ)/(√2 σ))] (4)
The median of the Probit indicates the point of subjective equality (where the two stimuli are deemed to have the same numerosity). The slope of the curve indicates instead the noisiness of the classification. Human observers’ psychometric functions for numerosity classification are shallower for larger numerosities on a linear scale, while having the same width on a logarithmic scale: a hallmark of Weber’s law. Here we observe the same phenomenon in the output of the classifier (see Fig 5).
(a) Example magnetisation profile from one of the sampled trajectories for the case of M = 18 spins and N = 4 random spin flips, with Δ0 = 0, γl = 0. (b) Average magnetisation for the same system over a set of Nt = 200 trajectories. As the events have random location and intensity, there is no information in the averaged time signal. (c) Average of the amplitude spectrum of the magnetisation signals in the chosen time window: the spectra were computed over individual trajectories and sites, then space-averaged, and finally trajectory-averaged. Here we display only the positive frequencies. (d) Psychometric-function results for the quantum spin model with M = 18 and the same parameters as in the top panel, where N spin flips, random in location, time, and rotation angle, have occurred before registering the magnetisation. (e) Corresponding standard-deviation estimates from the fits of the psychometric functions in (d). Here, we compare the previous case, where each event is a spin flip of random amplitude (orange), with a case where the rotations were again random but their sum was constrained to equal 3π (blue). (f) Weber fraction associated with the results in (e).
In Fig 5 we present the results of the estimation protocol applied to the magnetisation signal for a system with M = 18 spins where N = 1–9 spin flips occurred with random amplitudes at random times and locations. In Fig 5a we present an example trajectory of the magnetisation for the case with N = 4, where we can observe how excitations of varying intensity appear in the system at specific times. In contrast, an average of the magnetisation over trajectories renders no spatial features, Fig 5b: the average magnetisation simply drifts toward overall larger values, as the stimuli are on average uniformly distributed. This highlights the importance of processing the individual trajectories and not their averages. From the individual time signals, we can compose the average spectrum presented in Fig 5c.
The error in numerosity estimation is obtained from the fits of the psychometric functions, as described in the Estimation Protocol section. The corresponding fits are displayed in Fig 5d, from smaller to larger numerosities. In Fig 5e, we provide the associated standard deviations of the estimator for the previous case, corresponding to spin flips of random amplitude (orange, random rotations (RR)), and compare them with the case where the spin flips had random rotations but their summed amplitude was constant and equal to 3π (blue, constant energy (CE)). This comparison is an important test, as many classical systems fail to decode numerosity when the overall excitation amplitude is kept constant. Both cases reproduce the predicted dependence and Weber’s law. The case with constant total amplitude, however, performs worse, due to the small average amplitude of each of the spin flips. Finally, the Weber fraction associated with the results is displayed in Fig 5f, where we observe the expected flat dependence. Given the system-size limitations, we were not able to provide signatures for N > M/2 = 9 elements. Note that the time window during which we store the time signal is not relevant, as demonstrated in the S1 Appendix, Fig 9, where we present the power spectrum for the same system with varying time windows and observe no qualitative differences in the spectra. In the S1 Appendix, we also justify the specific numerical values chosen to ensure consistency over this time window.
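The Weber fraction plotted in Fig 5f is simply the fitted threshold divided by the base numerosity; a flat profile as a function of N is the signature of Weber's law. A trivial helper (ours, for illustration):

```python
import numpy as np

def weber_fractions(numerosities, sigmas):
    """Weber fraction w = sigma / N for each base numerosity N.
    Under Weber's law, sigma is proportional to N and w is constant."""
    return np.asarray(sigmas, dtype=float) / np.asarray(numerosities, dtype=float)
```

For thresholds growing linearly with N the fractions collapse onto a single constant, as in the flat dependence reported above.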
In order to verify that the essential features of the behaviour in Fig 5d–5f lie in the evolution of the magnetization of the quantum system, we performed a series of validations of our decoding procedure, aimed at demonstrating that the decoder and the data fitting do not introduce significant noise components. This analysis is included in Fig 6.
(a) Weber fraction associated with the decoding of numerosity for the magnetisation signals with random-amplitude excitations (RR) in Fig 5, for a varying number of trajectories Nt = 3, 5, 10, 40 with which we construct our library templates. We observe that the decoding converges with a very limited number of samples. (b) Weber fraction associated with the frequency decoding of sinusoidal activation maps with frequency ranging over f = 1–18, both noiseless and with noise proportional to the inverse of the frequency. We show that the decoder does not add noise to the analysis by comparing the performance on both classical signals.
The first test, in Fig 6a, concerned the decoding performance when the libraries are built from a limited number of trajectories. For this, we consider the same data set as in Fig 5 in the case of random-amplitude excitations (RR), where we used Nt = 200 trajectories. We show that this result is largely independent of the number of trajectories used to construct the templates, stably displaying the traits of Weber’s law with as few as Nt = 5 trajectories. This indicates that our ideal decoder performs well with limited samples and that our results are not affected by the specific parameters of the decoder.
Secondly, we test whether the decoder can faithfully capture a pattern of noise present in the signal. To this end, we move away from the data of our quantum simulation and test the decoder on sinusoidal activities, computing the spectra from a simple sequence of activations for a noise-free version and for one corrupted by 1/f noise, proportional to the inverse of the sinusoid frequency. More specifically, we generated a set of 18 different sine gratings that differ in temporal frequency, with simultaneous activity in all the nodes. The spectra for each of the 18 frequencies contain only one peak and nicely cover the frequency range of the system. We use the decoder to classify the frequency of the sinusoid, in a similar fashion to classifying the number of events as we did with the quantum system. Fig 6b shows that in the absence of noise in the activity gratings, we obtain good performance with a very low noise floor. Instead, 1/f noise, which is more disruptive at low frequencies, renders an output with behaviour similar to shot noise, matching the noise characteristics of the signal. These two additional tests confirm that the decoder does not itself introduce specific noise patterns but simply reveals the intrinsic variability in the activity of the system being decoded.
In this work, we have used a quantum spin model to tackle an open problem in human sensory systems: how humans are capable of perceiving the numerosity of events. Our results have shown that the time evolution of a very simple fully-connected spin-1/2 Heisenberg model can encode the number of stimuli it has been exposed to. Moreover, we have shown that its performance is not driven by the overall input energy, which was normalised, with the system remaining able to encode numerosity also in the presence of high input-energy noise.
It is important to stress that here we have used quantum physics as a statistical tool for information processing, with no implication that quantum phenomena are present in the process of numerosity perception in our brain. We have presented a proof-of-principle model where, by adhering to the statistical rules of quantum mechanics rather than the classical ones, we are able to reproduce the behavior of complex networks with a minimal model that takes advantage of the properties of quantum systems. While we do not claim that quantum microscopic interactions in the brain are relevant for sensory processes, we suggest that the powerful mathematical structures of quantum models may be useful to simulate complex brain network dynamics.
From the perspective of quantum modeling, a wide range of extensions would be desirable. These include the use of larger networks by means of matrix product states, the use of heterogeneous network topologies, e.g. several fully-connected systems sparsely connected among themselves, or the use of non-Markovian open quantum systems allowing for information back-flow into our spin system. However, we believe that the current iteration of the model contains the essential ingredients for a successful representation of numerosity perception while adhering to minimal complexity.
Importantly, our model exceeds previous models in the field of numerosity perception in two important aspects. Firstly, to our knowledge, no current model exists of how numerosity can be sensed and integrated both in space and in time. Some models for the numerosity of simultaneously presented items do exist [13–15] and are successful to some degree. However, no models exist for temporal numerosity. This is a notable shortcoming of the current literature, as perception of temporal numerosity is strictly linked to spatial numerosity, and both aspects should be conceived as two facets of a single procedure for numerosity processing. Current models could possibly be upgraded to process items presented over time by introducing an ad hoc memory component. However, this is a specific solution that cannot be generalized. Instead, our model captures both by construction.
The second interesting point is that numerosity is encoded in the frequency of the magnetization arising after multiple spin flips, and only for the all-to-all connectivity. Thus, our model highlights that collective interactions, measured at all nodes and times, may be critical in order to measure the number of events unfolding over time; at least, this is the case for our quantum model. This approach shifts the focus away from isolated unit excitations and towards global network dynamics. Interestingly, global network dynamics measured as endogenous oscillations are an active area of research and demonstrate a strong potential role in selecting salient information. In this respect, our model offers a true paradigm shift, indicating that the frequency of the oscillations may encode the complexity of an incoming stimulus.
In addition, our model is not only capable of encoding numerosity, but also reproduces a critical hallmark of human sensory systems: Weber's law. While Weber's law is a ubiquitous trait of sensory systems, it is particularly challenging from a modelling perspective, since neural firing generally follows Poisson statistics. In our system, Weber's law holds both for randomly rotated systems and even for those where the overall energy is constant despite the change in the number of excitations. We also remark that our system contains no ad hoc noise sources: the noise in our model arises mainly from the randomness in space, time, and intensity of the local excitations. This randomness-driven noise, together with the all-to-all connectivity, appears to be what captures Weber's law.
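Why Poisson statistics are at odds with Weber's law can be illustrated with a toy ideal-observer simulation, unrelated to the quantum model's actual noise: internal estimates corrupted by Weber-type noise (standard deviation proportional to n) yield a constant Weber fraction, while Poisson-type noise (standard deviation proportional to √n) yields a fraction that shrinks with n. The noise models and the 75%-correct criterion below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def jnd(n, noise_std, trials=50_000):
    """Smallest increment dn at which an observer comparing two noisy
    internal estimates of n and n + dn is >= 75% correct (toy 2AFC)."""
    for dn in np.linspace(0.05 * n, 2.0 * n, 80):
        a = n + noise_std(n) * rng.standard_normal(trials)
        b = n + dn + noise_std(n + dn) * rng.standard_normal(trials)
        if np.mean(b > a) >= 0.75:
            return dn
    return np.nan

ns = [4, 8, 16]
weber_fracs = [jnd(n, lambda m: 0.2 * m) / n for n in ns]      # Weber noise
shot_fracs = [jnd(n, lambda m: np.sqrt(m)) / n for n in ns]    # Poisson-like

print(weber_fracs)  # roughly constant across n (Weber's law)
print(shot_fracs)   # decreases with n (violates Weber's law)
```

The contrast makes the modelling challenge explicit: a system whose variability grows only as √n, as plain accumulators do, cannot produce the constant Weber fraction observed in human and animal numerosity judgments.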
We paired the time evolution of the magnetizations with a specific decoding scheme borrowed from the analysis and modelling of brain activity. While this scheme is appealing for its simplicity and biological plausibility, simulating ideal-observer performance, we stress that it is not an integral part of the quantum model. It is a tool used to validate the quantum network model and to demonstrate both its performance and its adherence to Weber's law. Having demonstrated the capability of the network to encode numerosity, any other decoder extracting the correct numerosity may be used, the simplest being detection of the frequency of peak excitation. Our results also demonstrate that the Weber's-law behaviour is not a consequence of the supervised classifier decoder or of its noisiness, but lies in the magnetization profiles themselves.
The present work can be extended in several ways. For instance, human studies show that visual judgments of numerosity can incorporate biases from non-numerical features, such as the area, density and energy of the input [52, 53]. While the effects of area and density are often small, they can still be important test benches for models of numerosity. Interestingly, in a recent work, Testolin et al. demonstrated that deep neural networks that have not completed training simulate the cues induced by the non-numerical features of human perception. In the current quantum simulations we eliminate non-numerical features such as total energy by normalizing the input energy or by adding substantial energy noise. However, the two other important cues, i.e. area and density, could not be benchmarked adequately given the limit on the maximum number of excitations allowed (half the length of the network). To demonstrate how these non-numerical features influence the quantum model, one would require substantially larger networks, making use of approximate methods for their simulation, see e.g. matrix product state methods.
In addition, in the present work we discussed only two extreme quantum architectures, with either local or all-to-all connectivity. Here, the all-to-all connectivity, which is the one that encodes numerosity, is also the one closer to biological cortical connectivity. We note, however, that in the cortex the weights of the neuronal connections are not equal and can vary dynamically. As a result, it would be interesting to test, with a dedicated set of simulations, whether intermediate architectures can capture the input numerosity, and whether they are also sensitive to the modulation of non-numerical features in the numerosity encoding. In this manner, we could establish a comparison with previous results showing that neural networks are prone to the influence of non-numerical features.
All in all, our quantum spin model hinges on very general and minimal assumptions and, as highlighted in the S1 Appendix, remains robust under tuning of the system parameters while reproducing essential traits of numerosity perception. We have shown how, after appropriate agnostic decoding, the behaviour of global time-dependent signals conveys information about the number of events the system has experienced in the recent past. Refinements of the model can be introduced with sustainable effort, both to probe larger systems and to investigate phenomena other than numerosity perception. Along these lines, we believe that our model opens unexplored avenues for simulating and interpreting the behaviour of complex brain networks and offers a new framework for understanding information processing in such systems.
- 1. Dehaene S. The neural basis of the Weber—Fechner law: a logarithmic mental number line. Trends Cogn Sci. 2003;7:145. pmid:12691758
- 2. Gallistel CR, Gelman R. Non-verbal numerical cognition: from reals to integers. Trends in Cognitive Sciences. 2000;4(2):59–65. pmid:10652523
- 3. Anobile G, Turi M, Cicchini GM, Burr DC. Mechanisms for perception of numerosity or texture-density are governed by crowding-like effects. Journal of Vision. 2015;15(5):4–4. pmid:26067522
- 4. Testolin A, Dolfi S, Rochus M, Zorzi M. Visual sense of number vs. sense of magnitude in humans and machines. Sci Rep. 2020;10(1):1–13. pmid:32572067
- 5. Burr D, Ross J. A visual sense of number. Current Biol. 2008;18:425. pmid:18342507
- 6. Cicchini GM, Anobile G, Burr DC. Spontaneous perception of numerosity in humans. Nature Communications. 2016;7(1):12536. pmid:27555562
- 7. Nieder A. The neuronal code for number. Nature Reviews Neuroscience. 2016;17(6):366–382. pmid:27150407
- 8. Anobile G, Cicchini GM, Burr DC. Separate Mechanisms for Perception of Numerosity and Density. Psychological Science. 2014;25(1):265–270. pmid:24270462
- 9. Anobile G, Castaldi E, Turi M, Tinelli F, Burr DC. Numerosity but not texture-density discrimination correlates with math ability in children. Developmental Psychology. 2016;(52(8)):1206–1216. pmid:27455185
- 10. Gibbon J, Church RM. Sources of variance in an information processing theory of timing. In Roitblat H. L., Bever T. G., & Terrace H. S. (Eds.), Animal cognition. Hillsdale, NJ: Erlbaum.; 1984.
- 11. Staddon JE, Higa JJ. Time and memory: towards a pacemaker-free theory of interval timing. Journal of the experimental analysis of behavior. 1999;(71(2)):215–251. pmid:10220931
- 12. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA. 1982;79:2554–2558. pmid:6953413
- 13. Stoianov I, Zorzi M. Emergence of a 'visual number sense' in hierarchical generative models. Nat Neurosci. 2012;15(2):194–196. pmid:22231428
- 14. Nasr K, Viswanathan P, Nieder A. Number detectors spontaneously emerge in a deep neural network designed for visual object recognition. Sci Adv. 2019;5. pmid:31086820
- 15. Kim G, Jang J, Baek S, Song M, Paik SB. Visual number sense in untrained deep neural networks. Sci Adv. 2021;7:6127. pmid:33523851
- 16. Jedlicka P. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology? Frontiers in Molecular Neuroscience. 2017;10:366. pmid:29163041
- 17. Khrennikov A. Ubiquitous Quantum Structure. Springer; 2010.
- 18. Agliari E, Barra A, Galluzzi A, Tantari D, Tavani F. A walk in the statistical mechanical formulation of neural networks. Proceedings of the International Joint Conference on Computational Intelligence, 2014.
- 19. Zorzi M, Testolin A. An emergentist perspective on the origin of number sense. Philos Trans R Soc B. 2018;373(1740).
- 20. Ackley DH, Hinton GE, Sejnowski TJ. A learning algorithm for Boltzmann machines. Cogn Sci. 1985;9(1):147–169.
- 21. Testolin A, Zou WY, McClelland JL. Numerosity discrimination in deep neural networks: Initial competence, developmental refinement and experience statistics. Dev Sci. 2020;23(5). pmid:31977137
- 22. Satinover J. The Quantum Brain: The Search for Freedom and the next Generation of Man. John Wiley & Sons; 2001.
- 23. Tononi G, Koch C. Consciousness: here, there and everywhere? Phil Trans R Soc B. 2015;370:20140167. pmid:25823865
- 24. Barbosa LS, Marshall W, Albantakis L, Tononi G. Mechanism Integrated Information. Entropy. 2021;23:362. pmid:33803765
- 25. Zanardi P, Tomka M, Campos Venuti L. Towards Quantum Integrated Information Theory. arXiv:1806.01421. 2018.
- 26. Sabbadini SA, Vitiello G. Entanglement and Phase-Mediated Correlations in Quantum Field Theory. Application to Brain-Mind States. Appl Sci. 2019;9:3203.
- 27. Basieva I, Khrennikov A, Ozawa M. Quantum-like modeling in biology with open quantum systems and instruments. Biosystems. 2021;201:104328. pmid:33347968
- 28. Li JA, Dong D, Wei Z, Liu Y, Pan Y, Nori F, et al. Quantum reinforcement learning during human decision-making. Nature Human Behaviour. 2020;4:294–307. pmid:31959921
- 29. Busemeyer JR, Fakhari P, Kvam P. Neural implementation of operations used in quantum cognition. Prog Biophys Mol Biol. 2017;130:53–60. pmid:28487218
- 30. Wang Z, Solloway T, Shiffrin RM, Busemeyer JR. Context effects produced by question orders reveal quantum nature of human judgments. Proc Natl Acad Sci USA. 2014;111:9431–9436. pmid:24979797
- 31. Bruza PD, Wang Z, Busemeyer JR. Quantum cognition: a new theoretical approach to psychology. Trends in Cognitive Sciences. 2015;19:383. pmid:26058709
- 32. Knill DC, Pouget A. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci. 2004;27:712–719. pmid:15541511
- 33. Friston K. The free-energy principle: a unified brain theory? Nat Rev Neurosci. 2010;11:127–138. pmid:20068583
- 34. Schneidman E, Berry MJ, Segev R, Bialek W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature. 2006;440(7087):1007–1012. pmid:16625187
- 35. Adams B, Petruccione F. Quantum effects in the brain: A review. AVS Quantum Sci. 2020;2:022901.
- 36. Penrose R, Hameroff S, Kak S, editors. Cosmology Science Publishers, Cambridge; 2009.
- 37. de Barros JA, Suppes P. Quantum mechanics, interference, and the brain. Journal of Mathematical Psychology. 2009;53:306–313.
- 38. Saxena K, Singh P, Sahoo P, Sahu S, Ghosh S, Ray K, et al. Fractal, Scale Free Electromagnetic Resonance of a Single Brain Extracted Microtubule Nanowire, a Single Tubulin Protein and a Single Neuron. Fractal Fract. 2020;4:11.
- 39. Singh P, Saxena K, Sahoo P, Ghosh S, Bandyopadhyay A. Electrophysiology using coaxial atom probe array: live imaging reveals hidden circuits of a hippocampal neural network. J Neurophysiology. 2021;125:2107–2116. pmid:33881910
- 40. Hameroff S, Penrose R. Consciousness in the Universe: A review of Orchestrated-OR theory. Phys of Life Rev. 2014;11:39. pmid:24070914
- 41. Ball P. Physics of life: The dawn of quantum biology. Nature. 2011;474:272–274. pmid:21677723
- 42. Fleming GR, Scholes GD, Cheng YC. Quantum effects in biology. Procedia Chemistry. 2011;3:38–57.
- 43. Breuer HP, Petruccione F. The Theory of Open Quantum Systems. Oxford University Press; 2007.
- 44. Heisenberg W. Zur Theorie des Ferromagnetismus. Zeitschrift für Physik. 1928;49(9):619–636.
- 45. Yang CN, Yang CP. One-Dimensional Chain of Anisotropic Spin-Spin Interactions. I. Proof of Bethe’s Hypothesis for Ground State in a Finite System. Phys Rev. 1966;150:321–327.
- 46. Baxter RJ. One-dimensional anisotropic Heisenberg chain. Annals of Physics. 1972;70(2):323–337.
- 47. Daley AJ. Quantum trajectories and open many-body quantum systems. Advances in Physics. 2014;63(2):77–149.
- 48. Jazayeri M, Movshon JA. Optimal representation of sensory information by neural populations. Nat Neurosci. 2006;9(5):690–696. pmid:16617339
- 49. Aldrich JH, Nelson FD, Adler ES. Linear Probability, Logit, and Probit Models. Sage; 1984.
- 50. Schollwöck U. The density-matrix renormalization group in the age of matrix product states. Annals of Physics. 2011;326(1):96–192.
- 51. Anobile G, Arrighi R, Castaldi E, Burr DC. A Sensorimotor Numerosity System. Trends Cogn Sci. 2021;25(1):24–36. pmid:33221159
- 52. Leibovich T, Katzin N, Harel M, Henik A. From 'sense of number' to 'sense of magnitude'—The role of continuous magnitudes in numerical cognition. Behav Brain Sci. 2017;40:e164. pmid:27530053
- 53. Gebuis T, Reynvoet B. The interplay between nonsymbolic number and its continuous visual properties. J Exp Psychol Gen. 2012;141(4):642–648. pmid:22082115