Quantum theory of mass potentials

The probabilistic formalism of quantum mechanics is used to quantitatively link the global scale mass potential with the underlying electrical activity of excitable cells. Previous approaches implemented methods of classical physics to reconstruct the mass potential in terms of explicit physical models of the participating cells and the volume conductor. However, the multiplicity of cellular processes, with extremely intricate mixtures of deterministic and random factors, prevents the creation of consistent biophysical parameter sets. To avoid the uncertainty inherent in the physical attributes of cell ensembles, we undertake here a radical departure from the deterministic equations of classical physics, instead applying the probabilistic reasoning of quantum mechanics. Crucial steps include: (1) the relocation of the elementary bioelectric sources from the cellular to the molecular level; (2) the creation of microscale particle models in terms of a non-homogenous birth-and-death process. To link the microscale processes with macroscale potentials, time-frequency analysis was applied to estimate the empirical characteristic functions of component waveforms of the electroencephalogram (EEG), eye-blink electromyogram (EMG), and electrocardiogram (ECG). We describe universal models for the amplitude spectra and phase functions of functional components of mass potentials. The corresponding time domain relationships disclose the dynamics of mass potential components as limit distribution functions produced by specific microscale transients. The probabilistic laws governing the microscale machinery, founded on an empirical basis, are presented. Computer simulations of particle populations with time dependent transition probabilities reveal that hidden deterministic chaos underlies the development of the components of mass potentials. We label this kind of behaviour “transient deterministic chaos”.


Introduction
Physiological mass potentials produced as a result of the electrochemical activity of excitable cells are noninvasive, reliable, and objective markers of various psychophysiological functions. EEG, eye-blink EMG, and ECG, typical examples of mass potentials, have been used in a wide variety of research and clinical applications in humans to study basic stimulus processing, attentional factors, emotion, personality variables, dysfunction in clinical populations, etc. However, the interpretation of mass potentials rests mainly on an empirical understanding, and so an adequate theory of the underlying generation mechanisms would be of great value. Phenomenologically, the mass potential is a product of hierarchically organized physiological systems with multiple levels of organization, from molecular to cellular [1]. The attempts to create quantitative relationships between the global and cellular scales in electrophysiology are known as the forward and inverse problems, respectively, particularly in electroencephalography [2] and electrocardiography [3]. Thus far, these widely researched problems have been studied using methods of classical physics. The assumption common to all approaches is that elementary voltages produced in some way by the underlying cells are "building blocks", the linear summation of which creates the mass potential. This linear model means that a global scale potential contains parameters of all participating microscopic scale sources of electricity. This leads to intractably huge numbers of degrees of freedom and prevents a unique determination of the mass effect.

PLOS ONE | https://doi.org/10.1371/journal.pone.0198929 | July 5, 2018
The current study is the first, to our knowledge, to propose a radically different approach to the theory of mass potentials, based on the probabilistic formalism of quantum mechanics. A fundamental aspect of quantum mechanics that is not present in classical physics is the statistical nature of its assertions. In classical mechanics, the state of a system uniquely determines the values of all the physical quantities associated with it. In quantum mechanics, the state of a system defines the physical quantities only as random variables, i.e., it determines the laws of distributions obeyed by the physical quantities.
Statistical methods of quantum mechanics have been successfully applied to describe the macroscopic picture emerging in many-particle systems with a host of microscopic random effects. A classic example is Brownian motion, which portrays the macroscopic picture emerging from erratic movements of tiny particles suspended in a fluid [4]. Numerous physical applications include gases, fluids, semiconductors, plasma, electrons and ions in conductors, etc.
Each application necessitates development of a specific probabilistic model. Particularly significant revisions of the theory and computational methods are needed to deal with biological objects. A crucial step in this context is the relocation of the elementary charges underlying generation of mass potentials from the cellular level to the molecular level. Accordingly, the elementary sources of electricity in our theory are charged particles: the ions which cross the cell membrane in both directions. The size and stochastic behaviour of these entities place their attributes within the scope of quantum physics.
The probabilistic nature of ion transport was discovered by the patch-clamp technique, which provided a means of measuring ion currents through individual ion channels in the cellular membrane [5]. A fundamental finding has been that individual ion channels are essentially stochastic entities that open and close in a random way. A probabilistic interpretation of this evidence implies that the channel state may be treated as a memoryless random variable [6]. This means that the future state of the channel depends only on its present state, and not on how that state was reached. The remarkable consistency of these properties with assumptions of the Markov processes inspired the development of stochastic models of ion channels using continuous-time discrete-state Markov chains [7]. The major challenge is that Markov chains are hidden in the time course of observable processes.
Empirically based approaches to overcome this difficulty include a group of techniques known collectively as hidden Markov models [8][9][10]. A critical challenge is to reconcile stochastic mechanisms of single ion channels with deterministic behavior of observable processes. A new feature of our theory is that observable processes are components of mass potentials produced by huge numbers of charged particles acting on the microscopic scale. Thus, we deal not with the details of ion channel gating nor the chemical nature of ions but with the sizes of extracellular charged particle ensembles produced by trans-membrane transport processes.
This problem was addressed in part in the context of quantal transmitter turnover [11][12]. The Markov model was formulated in terms of the birth and death process (BDP). Having adopted this probabilistic framework, we consider a component waveform as a transient potential, and construct its microscale model using a BDP of a "transient" type. The time course of this non-homogenous BDP is defined by time dependent birth and death rates that are not known in advance.
Our goal was to identify these entities by using only experimentally determined components of various mass potentials (EEG, ECG, and eye-blink EMG) so that testable predictions of emergent global scale behaviour have no element of circularity.
To solve this problem we have developed an original methodology of time-frequency analysis using the similar basis function (SBF) algorithm. The methodology is presented in the Materials and Methods section (see Time-frequency analysis using the SBF algorithm).
In the same section we present an algorithm of numerical simulations which provide the means to compare empirically based solutions with the theory (see Algorithm of numerical simulations).
In the Results section we present the following outcomes of our study:
• A novel microscale model of ion transport created in terms of a non-homogenous BDP. On this basis we introduce an integral equation linking micro- and macro-scale processes.
• Empirical identification of universal models of the amplitude spectra and phase functions of half-waves from EEG, ECG, and eye-blink EMG records. On this basis we deduce the characteristic function and the half-wave function (HWF) as the universal frequency and time domain constituents of mass potentials.
• Introduction of nonlinear macroscale equations as tools for the description of the dynamics of mass potentials.
• Identification of the primary and secondary particle populations and formulation of the probabilistic rules that govern the transport of particles in these populations during resting and transient conditions.
• Numerical simulations of the microscale processes, which reveal a transient amalgamation of deterministic and stochastic trends, with this behaviour termed transient deterministic chaos.
• A statement of the self-similarity of HWFs, indicating the universality of the theory.
In the Discussion section we point out that the probabilistic formalism of our theory provides solutions to problems unresolved by previous deterministic models.

Ion transport in terms of nonhomogeneous BDP
Mass potentials represent electrical phenomena occurring in ensembles of multiple excitable cells immersed in an interstitial fluid. The elementary bioelectric sources acting at the microscopic scale are ions, both positively and negatively charged particles, which cross the cell membrane in both directions. The membrane serves as a separating material that divides the tissue into the extracellular and intracellular compartments. Due to their high resistance relative to the resistance of the extracellular space, the cell membranes are rather good electrical insulators [13]. This prevents the charged particles located inside the cells from producing measurable changes in potential differences between various spatial locations in the extracellular space. Based on this evidence we presume in our theory that potential fields in the extracellular space are produced by extracellular charges, i.e. the extracellular particle populations.
Since ion channels capture and release ions in a random way, the pertinent extracellular particle population develops as a stochastic process. It is generally accepted that Markov processes with discrete states in continuous time are adequate mathematical models for stochastic interpretation of ion-channel mechanisms [7].
A specific aspect of our approach is that we deal not with the details of ion channel gating but with extracellular particle populations produced by the ion transport processes. We consider the cell membrane as a border that separates extracellular particle populations from particles inside the cells. Through this border an ion enters or leaves the extracellular particle population. To describe these events we adapt the theoretical framework of the previous modelling studies of the short-term synaptic plasticity in which the BDP, an important class of Markov processes, has been settled as a tool for modelling particle populations of neurotransmitters [11][12].
Let the integer-valued time-dependent random variable X(t) (here and throughout the paper boldface letters denote random variables) measure at time t the size of the population of charged particles (ions acting as point charges) involved in the creation of an extracellular monopole. Following conventional notions we regard the number of particles x_i = X(t_i) as the state of the particle population at the time instant t_i. The chances of inter-state transitions are evaluated by the transition probabilities expressed in terms of the birth and death rates.
We implement the main assumption of the Markov process that, during a sufficiently small element of time Δ, the probability of a change of X(t) by more than one particle is negligibly small:

P[|X(t + Δ) − X(t)| = k] ≈ 0 for k > 1, (1)

where P denotes probability and k is an integer. Therefore, the particle system can change its state only through transitions to the nearest neighbours. An increase of the population size by a unit represents birth, X(t+Δ) = X(t)+1, whereas a decrease by a unit represents death, X(t+Δ) = X(t)−1. Thus, a particle moving out of a cell constitutes a 'death' for the inside of the cell and a 'birth' for the extracellular compartment. Wide classes of BDPs with constant transition probabilities deal with stationary processes. However, the complex dynamics of global scale potentials indicates transient changes in the behavior of the underlying particle systems. Thus we employ a BDP of a "transient" type. As the most suitable mathematical tool we use the nonhomogeneous BDP, in which the birth and death rates may be any specified functions of the time t [14]. The probabilities of population changes for the nonhomogeneous BDP are:

P[X(t + Δ) = X(t) + 1] = λ(t)·Δ, (2)
P[X(t + Δ) = X(t) − 1] = μ(t)·Δ, (3)

where λ(t) and μ(t) are the birth and death rates, respectively. Let us choose Δ in compliance with Eq (1) and represent the time evolution of X(t) as a succession of discrete states x_i. The permitted states of the particle population at the time t_{i+1} are: x_{i+1} = x_i + 1 (birth), x_{i+1} = x_i (unchanged size), or x_{i+1} = x_i − 1 (death). The probabilities of the corresponding inter-state transitions are:

P[x_{i+1} = x_i + 1] = p̄(i), P[x_{i+1} = x_i − 1] = p̱(i),

where p̄(i) and p̱(i) denote the time dependent transition probabilities for birth and death, respectively. From Eqs (2) and (3) we get:

p̄(i) = λ(t_i)·Δ, (4)
p̱(i) = μ(t_i)·Δ. (5)

These equations provide the means for a step-by-step evaluation of the time evolution of the particle population. However, the functions λ(t) and μ(t) are not known in advance. Our goal is to estimate these functions on an empirical basis.
Thus, we need to link the changes of the particle population acting on the microscopic scale with observable dynamics of the global scale processes.
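The step-by-step evaluation described above can be sketched in a few lines of code. This is a minimal illustration rather than the authors' algorithm; the function name and the clamping of the population at zero are our own choices, and we assume per-step probabilities λ(t)·Δ for birth and μ(t)·Δ for death, as in Eqs (4) and (5).

```python
import random

def bdp_step(x, lam, mu, dt, rng=random.random):
    """One step of a birth-and-death process: with probability lam*dt the
    population grows by one, with probability mu*dt it shrinks by one,
    and otherwise it is unchanged."""
    u = rng()
    if u < lam * dt:
        return x + 1                      # birth
    if u < lam * dt + mu * dt:
        return x - 1 if x > 0 else 0      # death (population cannot go negative)
    return x                              # no change

# dt must be small enough that lam*dt + mu*dt << 1, in compliance with Eq (1)
random.seed(0)
x = 50
for _ in range(1000):
    x = bdp_step(x, lam=0.02, mu=0.02, dt=0.5)
```

With equal rates the population merely fluctuates around its initial size, which is the resting-state picture discussed later in the text.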
To outline a physical basis for such an approach we suppose that transport of particles creates a thin cloud of positive and negative ions spread over the outer surfaces of the cell membranes. Particle distribution that takes place uniformly over the membrane surface would not change the resting conditions of the extracellular current flow and would therefore generate no extracellular potential transients. The event that disturbs the uniformity of particle distributions and creates measurable potential changes in the extracellular space acting as a monopole is a transient synchronization of ion flows across cellular membranes of large ensembles of functionally linked excitable cells. Such a time-locked transient in the particle population is induced by a triggering event and consequently can be specified by the time instant from which the process started.
Taking t = 0 as the beginning of the transient, the expected trajectory of the mean population size e(t) is given by [14]

e(t) = E{X(t)} = exp[−ρ(t)],

where E{} denotes expected value and

ρ(t) = ∫_0^t [μ(x) − λ(x)] dx.

Combining the above equations, we obtain

e(t) = exp{−∫_0^t [μ(x) − λ(x)] dx}. (6)

According to the theory of the field potentials, within the frequency range of physiological interest the extracellular space may be regarded with reasonable accuracy as a resistive medium [13,15]. Thus, we may assume that the voltage of the global scale transient potential u(t) is proportional to the expected population size, i.e. u(t) = k·e(t), where k is the coefficient of proportionality. We regard the integral equation Eq (6) as a bridge linking extracellular potentials with the variables λ(t) and μ(t) governing the microscopic scale processes.
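The expected trajectory e(t) = exp(−ρ(t)) can be evaluated numerically for any candidate rate functions. The helper below is a sketch under our own naming, using simple trapezoid-rule integration of μ(x) − λ(x).

```python
import math

def mean_trajectory(lam, mu, t, n=10000):
    """Expected relative population size e(t) = exp(-rho(t)), with
    rho(t) = integral_0^t [mu(x) - lam(x)] dx (trapezoid rule, n panels)."""
    h = t / n
    xs = [i * h for i in range(n + 1)]
    f = [mu(x) - lam(x) for x in xs]
    rho = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    return math.exp(-rho)

# Constant-rate example: mu - lam = 0.1 per unit time, so e(t) = exp(-0.1*t)
e = mean_trajectory(lambda x: 0.2, lambda x: 0.3, t=10.0)
```

For balanced rates (λ = μ) the function returns 1, reflecting the resting-state condition in which the mean population size is constant.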

Gaussian amplitude spectrum of empirical half-wave
We assume that a transient increase of the particle population size is reflected on the global scale by peaking waveforms of relevant mass potentials. Such an approach is in harmony with the widely accepted notion that various positive and negative peaks (maxima and minima) in the waveform of the recorded potential reflect the summed activity of the underlying cellular populations with specific structure-function organization.
To identify peaking waveforms in the time course of a mass potential we need to estimate segments over which the waveform deflections develop. The methodology adopted here is that of the method of fragmentary decomposition introduced in earlier works [16,17].
Considering a mass potential as the time function v(t), we deal with a series of recorded values v_m = v(t_m) at regular, discrete time intervals t_m = mΔ, where Δ is the sampling interval. The segmentation points are defined as zero-crossings and points of global and local minima of |v(t)|. More particularly, if (v_{m−1} ≤ 0 AND v_{m+1} > 0) OR (v_{m−1} ≥ 0 AND v_{m+1} < 0), then τ_i = t_m is qualified as a zero-crossing (segmentation point), where i is the number of the segmentation point. The point τ_i = t_m specifies a global or local minimum of |v(t)| if |v_{m−1}| ≥ |v_m| ≤ |v_{m+1}|. In this way a sequence of segmentation points τ_0, . . ., τ_i, . . ., τ_N is formed.
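The segmentation rules translate directly into code. This is a sketch with our own function name, assuming non-strict boundary comparisons for both the zero-crossing and the |v| minimum tests.

```python
def segmentation_points(v):
    """Indices m at which v crosses zero or |v| has a (global or local)
    minimum, following the rules of the fragmentary decomposition."""
    pts = []
    for m in range(1, len(v) - 1):
        zero_cross = (v[m - 1] <= 0 and v[m + 1] > 0) or \
                     (v[m - 1] >= 0 and v[m + 1] < 0)
        # chained comparison: |v[m-1]| >= |v[m]| and |v[m]| <= |v[m+1]|
        local_min = abs(v[m - 1]) >= abs(v[m]) <= abs(v[m + 1])
        if zero_cross or local_min:
            pts.append(m)
    return pts
```

Applied to a short oscillatory record, the function returns the indices that bound the empirical half waves described next.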
The signal segment between adjacent segmentation points is regarded as an empirical half wave (HW). Given a segment of length T_i = τ_i − τ_{i−1} between the points τ_{i−1} and τ_i (i = 1, . . ., N), the HW is defined as

w_i(t) = v(t + τ_{i−1}) for 0 ≤ t ≤ T_i, and w_i(t) = 0 otherwise. (7)

In the interval from 0 to T_i this function reproduces the fragment of the signal v(t) between the adjacent segmentation points τ_{i−1} and τ_i. Therefore, in the interval [τ_0, τ_N]

v(t) = Σ_{i=1}^{N} w_i(t − τ_{i−1}). (8)

Suppose that the empirical HWs incorporated by this equation are produced by the microscale processes considered in the previous section and summarized by Eq (6). If this is the case, we expect that the relationships between the micro- and macro-scales should be indicated by some specific features of eligible HWs.
To approach this problem we combine consideration of the time domain on the one hand, and the frequency domain on the other, using the time-frequency analysis, which yields more powerful results than does considering the two domains separately.
The major procedure of the time-frequency analysis of a mass potential v(t) is the transformation of each w_i(t) to the frequency domain. The frequency domain counterpart of w_i(t), given in the [0, T_i] interval, is defined by the exponential finite Fourier transform

W_i(iω) = ∫_0^{T_i} w_i(t)·e^{−iωt} dt,

where ω = 2πf, f is frequency, and i = √−1. A significant computational challenge is that readily available techniques of digital spectral analysis, such as the fast Fourier transform, are not suited to short-term spectral decompositions. As an adequate tool of time-frequency analysis we use the SBF algorithm [18] (see Materials and methods: Time-frequency analysis using the SBF algorithm for details).
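As a point of reference (not the SBF algorithm itself, which is described in Materials and Methods), the finite Fourier transform of a sampled half wave can be evaluated directly at any frequency by quadrature. The sketch below uses the trapezoid rule; names are ours.

```python
import cmath

def finite_fourier(w, dt, omega):
    """W(i*omega) = integral_0^T w(t)*exp(-i*omega*t) dt, evaluated by the
    trapezoid rule on samples w[k] = w(k*dt). Works at arbitrary omega,
    not only at the FFT bin frequencies."""
    total = 0j
    for k, wk in enumerate(w):
        weight = 0.5 if k in (0, len(w) - 1) else 1.0
        total += weight * wk * cmath.exp(-1j * omega * k * dt)
    return total * dt
```

Unlike an FFT, this direct evaluation imposes no relationship between the segment length T_i and the frequency grid, which is what short, variable-length half waves require.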
Typical results of the time-frequency analysis are presented in the figures that follow. To compare the forms of various amplitude spectra, we transform the spectra into a universal dimensionless form by normalizing both the amplitude and the frequency.
Using HW 5, for example, the normalized amplitude spectrum is depicted as a blue line in Fig 2a. This is a typical form of W*(ω), for the analytical model of which we use the Gaussian spectrum suggested as the frequency domain model of the amplitude spectra of the event related potential (ERP) and eye-blink EMG components in previous papers [16,19]. Thus, we consider an expected profile of the empirical W*(ω) in the form of the Gaussian function

G(ω) = exp[−(σω)²/2], (9)

where σ is a parameter. We may regard W*(ω) as the frequency response of a low pass filter, a conventional parameter of which is the cut-off frequency F_C. At this frequency the attenuation of the amplitude spectrum drops by 3 dB, i.e. W*(ω_C) = 1/√2 (≈0.707), where ω_C = 2πF_C. Given F_C,

σ = √(ln 2)/ω_C.

Combining W*(ω) and G(ω) at this frequency, we obtain an excellent fit of the Gaussian spectrum from Eq (9) to the normalized empirical amplitude spectrum.
For normalization of the frequency scale we use F_C as a basis unit and define the dimensionless relative frequency γ = ω/(2πF_C) = ω/ω_C. Appropriately scaled in magnitude and frequency, the standard empirical amplitude spectrum is defined as Z(γ) = W*(ω_C·γ).
Using Eq (9) as the model of W*(ω), the expected form of Z(γ) is the Gaussian spectrum defined as

G(γ) = exp[−(γ²·ln 2)/2].

Note that Z(γ) = G(γ) at γ = 1, the relative frequency which corresponds to f = F_C. In a similar way we have applied the time-frequency analysis to the HWs from records of other mass potentials. The advantage of the frequency domain is that multiple forms of HWs are reduced to the universal, simple analytical form of the Gaussian model G(γ). Fig 3 illustrates typical results for the EEG and ECG records shown in the upper panel. Superposed dimensionless amplitude spectra are shown in the lower panel. These and numerous similar analyses of different HWs from various EEG, ECG, and eye-blink EMG recordings indicate that combining Z(γ) and G(γ) at γ = 1 produces remarkably accurate fits of the analytical G(γ) to the empirical Z(γ).
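The normalization just described hinges on locating the −3 dB point of the amplitude spectrum. The sketch below (our own helper names) evaluates the Gaussian model of Eq (9) and finds the cut-off frequency ω_C on a sampled spectrum, assuming the spectrum is normalized to unity at ω = 0; for the model itself, ω_C should come out as √(ln 2)/σ.

```python
import math

def gaussian_spectrum(omega, sigma):
    """Eq (9): G(omega) = exp(-(sigma*omega)**2 / 2)."""
    return math.exp(-(sigma * omega) ** 2 / 2)

def cutoff_frequency(omega, amp):
    """First frequency at which the normalized amplitude spectrum falls to
    1/sqrt(2), i.e. the -3 dB point; amp must satisfy amp[0] == 1."""
    target = 1 / math.sqrt(2)
    for w, a in zip(omega, amp):
        if a <= target:
            return w
    raise ValueError("spectrum never reaches the -3 dB level")

# Sanity check on the model itself, with sigma = 2
sigma = 2.0
omegas = [k * 0.001 for k in range(2000)]
amps = [gaussian_spectrum(w, sigma) for w in omegas]
omega_c = cutoff_frequency(omegas, amps)   # expect about sqrt(ln 2)/2
```

In practice the same `cutoff_frequency` scan would be run on the empirical W*(ω), after which γ = ω/ω_C rescales the spectrum to its dimensionless form Z(γ).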
To estimate the accuracy of the fits we define the dimensionless extension ratio ε = F_B/F_C, where F_B is the boundary frequency, i.e. the frequency below which the mean square error (MSE) between W*(ω) and G(ω) does not exceed a threshold value T. The fit in Fig 2a, with ε = 1.74, gives a visual idea of how F_C and F_B are estimated.
According to our theory, the global scale components of mass potentials reflect statistical distributions of underlying elementary sources acting on the microscopic scale. We may interpret the amplitude spectra as global scale averages of statistical distributions.
Because we deal with amplitude spectra of EEG, ECG, and EMG signals originated from functionally different sources, an important problem was to determine whether the statistical distributions produced by the corresponding samples are different or indistinguishable in statistical terms. The parameter ε is an adequate tool for this purpose because this single parameter is sufficient to evaluate the fitting results.
We estimated ε values for different HWs from varieties of EEG, ECG, and eye-blink EMG records (see Materials and methods: Kolmogorov-Smirnov test for details). A HW was accepted as eligible for further analysis if the number of HW samples was ≥ 8. This condition eliminated from analysis a small portion of HWs, approximately 2-3%.
Using datasets of the dimensionless parameter ε collected from different sources, we applied the two-sample Kolmogorov-Smirnov test to examine the null hypothesis that the samples are drawn from populations with the same distribution. The test has the advantage of making no assumption about the particular distribution of the data, i.e. it is non-parametric and distribution free. To apply the tests we created separate samples from EEG, EMG, and ECG data, each containing values of ε from 100 different HWs. The means and standard deviations (SD) of the ε values were as follows. EEG: mean = 1.781 (SD = 0.215); ECG: mean = 1.792 (SD = 0.187); eye-blink EMG: mean = 1.788 (SD = 0.227).
Normalized by the sample size, the empirical cumulative distribution functions evaluated from each of these sample sets are illustrated in Fig 4: EEG, red line; ECG, green line; and eye-blink EMG, blue line. Given two cumulative distribution functions normalized by the sample size, the greatest discrepancy between these measures, called the D-statistic, serves as a criterion to reject the null hypothesis. Given equal size for both samples (100 values of ε in the tests in question), the null hypothesis is rejected if D ≥ 0.2 (5%).
The maximum absolute differences computed for different pairs are as follows. EEG vs ECG: 0.13; EEG vs EMG: 0.09; ECG vs EMG: 0.14. Similar results documented for empirical half waves from various records are well below 0.2 and do not provide any reasons to reject the null hypothesis. It is important to note the highly stereotypical character of the test results repeated for various HWs.
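The D-statistic is simply the largest vertical gap between the two empirical cumulative distribution functions. A minimal self-contained sketch follows (our own function name; in practice a library routine such as `scipy.stats.ks_2samp` would be the usual choice).

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov D-statistic: the greatest vertical
    distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))

    def ecdf(s, x):
        # fraction of the sorted sample s that is <= x (binary search)
        lo, hi = 0, len(s)
        while lo < hi:
            mid = (lo + hi) // 2
            if s[mid] <= x:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(s)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)
```

Run on the ε samples described above, values of D below the 0.2 critical level would leave the null hypothesis of a common distribution intact, as reported in the text.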

Linear phase function
The calculations of the amplitude spectra and the F_B parameter were followed by the estimation of the phase functions. We found that the initial part of the empirical phase function δ(ω) shows consistency with a simple linear model

δ(ω) = βω, (10)

where β is a parameter. As numerous calculated linear fits indicate, the deviation from linearity can be neglected over the frequency range from 0 to 1.4·F_C. A typical result is illustrated in Fig 2b. Thus, the estimation of β was reduced to the calculation of linear regression lines using δ(f) samples from f = f_0 to f = 1.4·F_C. The slope of the regression line serves as the estimate of β.
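Since the linear phase model has no intercept, its slope can be estimated by least squares through the origin. This is a simplification of the regression fit described in the text, offered as a sketch with our own naming.

```python
def phase_slope(freqs, phases):
    """Least-squares slope of a line through the origin fitted to
    (freqs, phases) samples: the estimate of beta in delta(omega) = beta*omega."""
    num = sum(w * d for w, d in zip(freqs, phases))
    den = sum(w * w for w in freqs)
    return num / den
```

In use, `freqs` would hold the δ(f) sample frequencies from f_0 up to 1.4·F_C, and the returned slope is the estimate of β.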

Characteristic and half-wave functions
The deduced analytical dependencies in Eqs (9) and (10) define a universal equation of the complex spectrum in the form

G(iω) = exp[−(σω)²/2]·exp(iβω). (11)

Referring to the probabilistic formalism of quantum mechanics, we qualify the complex value G(iω) as a characteristic function [20]. Accordingly, the time domain counterpart of the characteristic function, g(t), can be considered as a distribution.
A conventional relationship between the characteristic function F(iω) and the corresponding time domain distribution function f(t) is established by the reciprocal Fourier integrals:

F(iω) = ∫_{−∞}^{∞} f(t)·e^{iωt} dt,  f(t) = (1/2π)·∫_{−∞}^{∞} F(iω)·e^{−iωt} dω.

If F(iω) in the first of these integrals is expressed over an infinite frequency scale by Eq (11), i.e. F(iω) = G(iω), then f(t) is a normal distribution defined on an infinite time scale. However, we deal with a causal process which starts from the time instant when the triggering event initiates the component development. Such a condition establishes specific relationships between the real and imaginary parts of G(iω). Both functions contain the same information, and either one alone is sufficient to find the time domain counterpart [21]. We refer to the imaginary part

G_I(ω) = Im[G(iω)] = exp[−(σω)²/2]·sin(βω).

Using this function we define the time domain solution at t > 0 by the sine Fourier transform of G_I(ω):

ψ(t) ∝ ∫_0^∞ G_I(ω)·sin(ωt) dω.

This integral has an analytical solution [22]: for t ≥ 0, up to a constant scale factor,

ψ(t) = ψ_P(t) − ψ_S(t), (12)

where

ψ_P(t) = exp[−(t − β)²/2σ²] and ψ_S(t) = exp[−(t + β)²/2σ²].

These functions are illustrated in Fig 5a by the solid lines: ψ(t), green; ψ_P(t), blue; and ψ_S(t), red. The dotted lines indicate that ψ_P(t) and ψ_S(t) are fragments of shifted normal distributions. Eq (12) is consistent with the wave function in the general form of d'Alembert's solution [23]. A crucial difference is that the latter is defined on an infinite time scale while ψ(t) defined by Eq (12) is zero at t < 0. To take this specific feature into account we term ψ(t) a half-wave function (HWF).
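The HWF is straightforward to evaluate. The sketch below assumes ψ_P is the Gaussian shifted by +β (as stated in the text) and that ψ_S is its mirror shifted by −β, which is what the sine-transform solution yields from standard integral tables; the function name is ours.

```python
import math

def hwf(t, sigma, beta):
    """Half-wave function: the difference of two shifted Gaussians,
    zero for t < 0 by construction (causal process)."""
    if t < 0:
        return 0.0
    psi_p = math.exp(-(t - beta) ** 2 / (2 * sigma ** 2))
    psi_s = math.exp(-(t + beta) ** 2 / (2 * sigma ** 2))
    return psi_p - psi_s
```

Note that the two Gaussians coincide at t = 0, so ψ(0) = 0 and the half wave rises smoothly from zero, peaking near t = β.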

Nonlinear macroscale equations
The system producing the HWF is expressed by the following system of nonlinear differential equations:

dψ_P/dt = −[(t − β)/σ²]·ψ_P, dψ_S/dt = −[(t + β)/σ²]·ψ_S, ψ(t) = ψ_P(t) − ψ_S(t). (13)

We may qualify the system as a non-autonomous nonlinear system, that is, a system driven by time-varying processes.
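The macroscale dynamics can be checked by direct numerical integration. The sketch below assumes the system consists of the two rate equations dψ_P/dt = −[(t − β)/σ²]·ψ_P and dψ_S/dt = −[(t + β)/σ²]·ψ_S (which the shifted Gaussians of Eq (12) satisfy); forward-Euler stepping and the function name are our own choices.

```python
import math

def integrate_hwf(sigma, beta, t_end, n=20000):
    """Forward-Euler integration of the assumed rate equations
        dpsi_p/dt = -((t - beta)/sigma**2) * psi_p
        dpsi_s/dt = -((t + beta)/sigma**2) * psi_s
    started from the closed-form values at t = 0; returns psi_p - psi_s at t_end."""
    h = t_end / n
    psi_p = math.exp(-beta ** 2 / (2 * sigma ** 2))
    psi_s = math.exp(-beta ** 2 / (2 * sigma ** 2))
    t = 0.0
    for _ in range(n):
        psi_p += h * (-(t - beta) / sigma ** 2) * psi_p
        psi_s += h * (-(t + beta) / sigma ** 2) * psi_s
        t += h
    return psi_p - psi_s

approx = integrate_hwf(sigma=1.0, beta=1.0, t_end=1.0)
```

With σ = β = 1 the integrated value at t = 1 should agree with the closed form 1 − exp(−2) to within the Euler discretization error.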
Given σ = 1, the coloured lines in Fig 5b show ψ(t) with different values of β (0.5, 1, 1.5, 2, 2.5). Notable changes in the waveform profiles indicate that ψ(t) has the potential to fit empirical half-waves with various relationships between ascending and descending shapes.

Rules of birth and death in particle populations
We wish to deduce ψ(t) as a product of the temporal evolution of a microscale particle population underlying component generation. We regard the appearance of the two terms in Eq (12) as indications of two identifiable particle sub-populations: the primary particle population associated with ψ_P(t) and the secondary particle population associated with ψ_S(t).
We first consider the primary particle population, using as a model the BDP with the birth and death rates λ_P(t) and μ_P(t), respectively. Inserting e(t) = ψ_P(t) in Eq (6) gives

exp{−∫_0^t [μ_P(x) − λ_P(x)] dx} = exp[−(t − β)²/2σ²].

Consequently,

μ_P(t) − λ_P(t) = (t − β)/σ².

It is a simple matter to solve this equation and deduce

λ_P(t) = λ_P = β/σ² and μ_P(t) = t/σ².

Turning to the secondary particle population, we express ψ_S(t) in terms of the birth and death rates denoted by λ_S(t) and μ_S(t), respectively. Replacement of e(t) in Eq (6) by ψ_S(t) gives us

μ_S(t) − λ_S(t) = (t + β)/σ².

It follows from the solution of this equation that

λ_S(t) = 0 and μ_S(t) = (t + β)/σ².

We summarize the results in the form of the following rules governing the birth and death rates for the primary and secondary particle populations:

Rule 1. After activation at t = t_0 by the triggering event, the transient behavior of the primary particle population develops as a non-homogenous BDP with the constant rate of birth

λ_P = β/σ² (14)

and the time dependent rate of death

μ_P(t) = t/σ². (15)

Rule 2. After activation at t = t_0 by the triggering event, the transient behavior of the secondary particle population develops as a non-homogenous death process with the time-dependent rate of death

μ_S(t) = (t + β)/σ². (16)

Turning to the resting conditions, it is essential that ion transport is balanced for both cations and anions. This means that, in the resting states, the sizes of the primary and secondary particle populations fluctuate about constant mean values. We consider the development of these events as simple BDPs with constant rates of birth and death. To define the parameters of these processes we note that Eqs (14)-(16) contain the time dependent term μ_P(t) and the time independent term λ_P. On a physical basis we associate the time dependent μ_P(t) with the gated ion channels and the constant λ_P with the resting ion channels. In this context λ_P participates in both the resting and transient conditions. Thus, we consider λ_P as a universal estimate for the birth and death rates during the resting conditions.
Accordingly, the ion transport, balanced for both particle populations, is supported by the condition:

λ_P^r = μ_P^r = λ_S^r = μ_S^r = λ_P = β/σ², (17)

where λ_P^r, μ_P^r, λ_S^r and μ_S^r denote the resting state rates of birth and death for the primary and secondary particle populations.
It is important to note that resting state conditions are not recognizable from the global scale. The parameters included in Eq (17) are deductions from the rules governing the transient regimes. Thus we may expect the existence of additional resting state parameters which do not affect the transient components.

Transition probabilities
Given the identified σ and β parameters, the established rules of birth and death allow us to complete, on an empirical basis, the description of the transition probabilities (4) and (5). We deduce the following formulas of transition probabilities for transient conditions.

Primary particle population:

p̄_P(i) = (β/σ²)·Δ, (18)
p̱_P(i) = (t_i/σ²)·Δ. (19)

Secondary particle population:

p̱_S(i) = [(t_i + β)/σ²]·Δ. (20)

According to Eq (17), the birth and death processes in the primary and secondary particle populations are governed under the resting conditions by birth and death rates equal to β/σ². The corresponding transition probability is

p^r = (β/σ²)·Δ. (21)

Now we have an explicit set of equations for transition probabilities that provides the means for numerical reconstructions of the time evolution of the particle populations during the transient and resting conditions.
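One transient-regime time step for both populations can be written out explicitly. The sketch below assumes the rates stated in the rules above (λ_P = β/σ², μ_P(t) = t/σ² for the primary population; pure death at rate μ_S(t) = (t + β)/σ² for the secondary population); the function name and the optional `u` argument for deterministic testing are ours.

```python
import random

def transient_step(x_p, x_s, t, sigma, beta, dt, u=None):
    """One time step of the transient regime: the primary population is a
    birth-and-death process, the secondary population a pure death process."""
    u1, u2 = (random.random(), random.random()) if u is None else u
    # primary population: birth with prob (beta/sigma^2)*dt, death with (t/sigma^2)*dt
    p_birth = (beta / sigma ** 2) * dt
    p_death_p = (t / sigma ** 2) * dt
    if u1 < p_birth:
        x_p += 1
    elif u1 < p_birth + p_death_p and x_p > 0:
        x_p -= 1
    # secondary population: death only, with prob ((t + beta)/sigma^2)*dt
    p_death_s = ((t + beta) / sigma ** 2) * dt
    if u2 < p_death_s and x_s > 0:
        x_s -= 1
    return x_p, x_s
```

At t = 0 the primary death probability vanishes while the birth probability is already β/σ²·Δ, so the primary population initially tends to grow, which is what shapes the rising limb of the half wave.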

Physical basis of numerical simulations
Numerical simulations are unique tools for reconstructing the time courses of particle populations in different trials. To provide a physical basis, the schemes in Fig 6 outline the organization of ion transport under the resting and transient conditions. The transient conditions arise from induced synchronized activity in an ensemble of closely located and functionally linked excitable cells. According to our theory, such activity causes transient changes in the membrane machinery that controls the ion exchange between the intracellular and extracellular compartments. The most important effect identifiable by our theory, from the macroscopic scale, is the appearance of differences between the behavior of positive and negative charges. The scheme in Fig 6b shows the triggering event, the influence of which on the transmembrane ion transfers divides the particles in the extracellular compartment into the primary (P) and secondary (S) populations, governed by different rules. A crucial effect following from Rule 2 above is the blockage of the "birth" processes in the secondary particle population. Thus, during the transient conditions the particles in the primary population behave as a non-homogenous birth and death process while the particles in the secondary population behave as a non-homogenous death process.

Numerical simulations
The numerical simulations were designed to reconcile solutions of the system described by the non-linear deterministic Eq (13) with the probabilistic model of non-homogenous BDPs describing the behaviour of individual particles in our theory. To deal with particle numbers we consider a numerical counterpart of Eq (12) in the form X_N(t) = X_P(t) − X_S(t), where X_P(t) and X_S(t) are the numbers of particles in the primary and secondary populations, respectively, and X_N(t) is the number of particles producing the net charge. The general procedure is organized as a succession of standard steps dealing, in sequential order, with the temporal evolution of x_i over the time intervals [t_i, t_i+1], where i takes values from −M to N−1. The time interval from t_−M to t_0 corresponds to the resting condition. At t = t_0 the resting state is switched to the transient conditions, calculated from t_0 to t_N (see Materials and methods: Algorithm of numerical simulations for details).
Simulations illustrated in Fig 7 extend over the time interval from -10 to 70 ms, with t = 0 corresponding to the instant at which the transient starts. As an initial condition, an equal size N 0 = 50 was prescribed for both populations. The parameters σ = 13.3 ms and β = 26.2 ms were taken from Table 1 for EEG half-wave 2.
Using the defined parameters, it was important to satisfy the condition of Eq (1) by choosing a time interval Δ small enough that the probability of the population size changing by more than one particle within it is negligibly small. Using Monte Carlo simulations to estimate the numbers of particles crossing the membrane, different values of Δ were tested. In this way the value Δ = 0.0001 ms was selected for the following simulations. The corresponding segmentation points over the 80 ms time interval were t_i = i·Δ, with i taking values from 0 to 800,000.
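The adequacy of Δ = 0.0001 ms can be checked by bounding the probability that more than one event falls into a single step. Treating the event count per step as approximately Poisson, with a per-particle rate β/σ² as in the resting-state rules, is an assumption made here only for this estimate:

```python
import math

sigma, beta = 13.3, 26.2      # ms, EEG half-wave 2 (Table 1)
mu = beta / sigma ** 2        # assumed per-particle event rate, 1/ms
x0 = 50                       # initial population size
delta = 1e-4                  # ms, candidate step

# Expected number of events (births plus deaths) in one step
lam = 2 * x0 * mu * delta
# Poisson probability of more than one event within a single step
p_multi = 1.0 - math.exp(-lam) - lam * math.exp(-lam)
```

Here p_multi comes out on the order of 1e-6, so changes of the population size by more than one particle per step are indeed negligibly small.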
The transition from the resting to the transient condition was simulated as the change of the resting state transition probabilities defined by Eq (21) to the transient state transition probabilities in Eqs (18)-(20). The change occurs in a "smooth" fashion: the sizes of the primary and secondary particle populations developed under the resting conditions serve as initial conditions for the transient regimes.
During the resting conditions (the interval from −10 ms to t = 0) the transport of particles between the primary and secondary populations is balanced, and the particle numbers fluctuate around the mean value N_0. The rapid change of the net charge at the transition from the resting to the transient conditions is due almost entirely to the change of the birth and death rates in the primary particle population. In the general case, the size of the primary population is governed by a complex interplay of the birth and death transition probabilities. The onset of the transient conditions increases both probabilities. Initially, from t = 0 to the time instant indicated by the arrow in Fig 7b, the birth probability prevails over the death probability; in this stage a nearly tenfold increase of the size of the primary population occurs. After the peak, the death probabilities take a progressively larger share. As a result, the size of the primary population declines and returns to the initial condition.
Compared with the primary population, the effect of the transients in the secondary particle population on the net charge is minor and brief. As shown in Fig 7c, at t = 0 the probability of birth in the secondary population drops to zero. This blocks particle transfer from the inside to outside of the cells.
In order to decide whether the mass effects of particle movements are sufficient for a full account of the dynamics of macroscale processes, it is necessary to compare the results of computer simulations with the theory. In agreement with Eq (12), we consider transient conditions starting at t = 0. Since ψ(0) = 0, we use ψ_0 = ψ_P(0) = ψ_S(0) as the initial condition for the theoretical solutions. The corresponding initial condition for the numerical simulations is the equality of the initial sizes of the primary and secondary particle populations.

It is evident that an increase in the number of particles brings single trial samples of ψ*(t) into closer agreement with the theoretical model. Thus, the role of deterministic factors in statistical samples of X*_N(t) increases with the number of particles involved. To quantify this tendency we estimated absolute values of the residuals between ψ*(t) and X*_N(t). For each of several values of N_0 from 10 to 500, 20 single trials were calculated, and in each trial the residuals were estimated at 100 equidistant points. From these measurements the average residuals were calculated. The points in the plot in Fig 8b show the dependency of the average residuals on the number of particles N_0. Increasing the particle numbers to several hundred makes single trials practically identical to the theoretical solution. This provides convincing evidence that, under transient conditions, particle behavior in both the primary and secondary populations develops as an amalgamation of deterministic and stochastic processes. These are two broadly defined classes of processes with specific properties. We now consider this situation from the point of view of deterministic chaos, i.e. processes that share attributes with both deterministic signals and stochastic processes.
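The shrinking of the residuals with growing particle numbers is a law-of-large-numbers effect, and can be reproduced with a toy surrogate in which each of N_0 particles independently follows a deterministic profile on average. The Gaussian profile and the binomial sampling scheme are illustrative assumptions, not the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_residual(n0, trials=20, points=100):
    """Average absolute residual between a deterministic profile psi(t)
    and its per-particle stochastic realizations, estimated at `points`
    equidistant time points over `trials` single trials."""
    t = np.linspace(0.0, 1.0, points)
    psi = np.exp(-((t - 0.3) ** 2) / 0.02)     # stand-in deterministic profile
    residuals = []
    for _ in range(trials):
        # each of n0 particles contributes independently with probability psi(t)
        sample = rng.binomial(n0, psi) / n0
        residuals.append(np.mean(np.abs(sample - psi)))
    return float(np.mean(residuals))
```

The residuals fall roughly as 1/√N_0, matching the convergence of single trials to the theoretical solution seen in Fig 8b.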

Transient deterministic chaos
The research on chaos phenomena is progressing steadily, extending to a diverse range of applications. However, in the face of evolving hypotheses and concepts, no universal definition of the system producing chaos has been accepted. The necessary conditions are the non-linearity of the system generating chaos and its sensitivity to initial conditions. A specific feature of biological systems is that, in most cases, the equations of the sources of deterministic chaos are unknown [24]. Accordingly, the identification of chaotic behavior is usually based on the analysis of empirical time series extracted from the process in question. The most popular methods for characterizing chaotic behavior are Lyapunov exponents and Kolmogorov-Sinai entropy. These are essentially indirect methods which do not define the equations but diagnose whether the process in question is chaotic or non-chaotic. This limitation applies equally to other diagnostic methods, among which are the Poincaré map, correlation dimensions, fractal dimensions, and attractor reconstructions from the time series.
Our approach does not need such indirect methods, because the theory provides complete models of HWF generation on both the macro- and micro-scales. The two major criteria which allow us to qualify the system producing these processes as chaotic are as follows.
1. The non-linearity of the system producing HWF.
2. Sensitive dependence of the behavior of the system on initial conditions.

The non-linearity of the system is evident on the macroscopic scale, where the non-linear Eq (13) defines the temporal evolution of the HWF. The chaotic behavior is hidden in the dynamics of mass potentials and is recognizable only by the specific Gaussian profiles of the HWF shapes. It is important to note that the specific statistical background of such patterns provided the means to establish Rules 1 and 2, which govern the chaotic phenomena taking place on the microscale. An important additional confirmation of chaos is that the chaotic system defined by our theory arises solely from the equations of the system, without the need for additional factors.
The strong dependency of the chaotic processes on initial conditions is seen from the computer simulations illustrated in Fig 9. The parameters are the same as in the simulations shown in Fig 7. The simulations started 10 ms before the arrival of the triggering event. At that instant both the P and S populations contained 50 particles. Because the sizes of the P and S populations changed randomly as simple BDPs in the initial 10 ms interval, the initial sizes of the whole population were slightly different in different trials. As the results of the simulations indicate, these minor differences in the initial conditions lead to significantly different future behavior of the realizations of X_N(t).
Another important feature of chaotic processes is short-term predictability. It is clear from the simulation results in Fig 9 that, despite substantial differences in the trajectories of the X_N(t) realizations, it is possible to give rough estimates of some parameters, for example the times at which the trajectories reach their maxima. Features of predictability are also incorporated in the time courses of the X_N(t) realizations, because the averages of these trajectories over different trials converge on the analytical solutions.
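A minimal sketch of these two properties, divergence of individual trajectories from near-identical initial conditions together with rough predictability of the peak timing, can be given with a toy non-homogenous BDP. The bell-shaped birth rate and all numerical values here are illustrative stand-ins, not the transition probabilities of Eqs (18)-(20):

```python
import numpy as np

def simulate(n_init, seed, steps=40_000, dt=0.001):
    """Toy non-homogenous BDP: the per-particle birth rate rises and
    falls as a Gaussian of time while the death rate stays constant."""
    rng = np.random.default_rng(seed)
    x = n_init
    traj = np.empty(steps)
    for i in range(steps):
        t = i * dt
        lam = 0.15 * np.exp(-((t - 10.0) ** 2) / 30.0)  # birth rate, 1/time
        mu = 0.04                                        # death rate, 1/time
        r_b, r_d = rng.random(2)
        if r_b < x * lam * dt:
            x += 1
        if r_d < x * mu * dt and x > 0:
            x -= 1
        traj[i] = x
    return traj

# Two trials whose initial sizes differ by a single particle
a = simulate(50, seed=2)
b = simulate(51, seed=3)
```

The two realizations soon diverge in detail, yet both rise and fall on a roughly predictable schedule, and averaging many such trials recovers the smooth deterministic trend.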
An advantage of the analytical methods of our theory is the possibility of making a clear distinction between chaos and pure stochasticity. During the resting periods we deal with purely stochastic processes. The start of the HWF generation is specified by the time instant at which the triggering event arrives. Thus the deterministic chaos develops as a transient process.
We label this amalgamation of deterministic and stochastic processes "transient deterministic chaos".

Statistical self-similarity of half waves
An essential outcome of our numerical simulations is the evidence of common statistical and deterministic rules that govern the generation of functional components of mass potentials from different ensembles of multiple excitable cells. This means that cellular ensembles may be divided into constituent parts governed by the same probabilistic and deterministic rules as the whole ensemble. In the most general context, this result may be associated with Mandelbrot's concept of a fractal, an object composed of similar sub-units that resemble the structure of the whole object [25].
Fractal properties are divided into different categories. Our results may be attributed to the fractal notion of statistical self-similarity. We summarize this outcome of our theory in the following statement of the statistical self-similarity of the half-waves of mass potentials: Consider HWF ψ(t) as a transient mass potential produced by a synchronous activation of a large ensemble L of closely located and functionally linked excitable cells. Let us extract a part of L regarded as a fraction F. We state that the macroscale mass potentials produced by F are governed by the same statistical distribution as the whole population L.
At the macroscopic scale, where statistical factors are hidden, the mass potential develops as a succession of half-wave functions induced at the time instants τ_0, …, τ_i, …, τ_N. Thus, the model of a mass potential e(t) has the following form:

e(t) = Σ_i ψ_i(t − τ_i),   (22)

where the index i labels different half-wave functions with their corresponding σ and β parameters. Characteristic examples of models of different mass potentials are shown in Fig 10. The high accuracy of Eq (22) is a remarkable outcome of our theory, since the HWFs building the models were derived entirely from experimental records, without any adjustments to the various origins of the mass potentials to which the models were applied. Table 1 lists the values of the parameters supporting the models in Fig 10.
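A sketch of the Eq (22) composition, assuming purely for illustration a half-wave shaped as the difference of two Gaussians of identical width (the actual HWF is the solution of Eq (13)):

```python
import numpy as np

def hwf(t, kappa, sigma, beta):
    """Illustrative half-wave stand-in: two Gaussians of identical width
    sigma and opposite signs, separated by beta. A placeholder for the
    HWF of Eq (13), not its actual solution."""
    return kappa * (np.exp(-t ** 2 / (2 * sigma ** 2))
                    - np.exp(-(t - beta) ** 2 / (2 * sigma ** 2)))

def mass_potential(t, components):
    """Eq (22) form: a succession of half-waves induced at instants tau_i,
    each with its own kappa, sigma and beta parameters."""
    return sum(hwf(t - tau, k, s, b) for tau, k, s, b in components)

t = np.linspace(0.0, 300.0, 3000)             # ms
components = [(40.0, 1.0, 13.3, 26.2),        # (tau, kappa, sigma, beta)
              (150.0, -0.6, 10.0, 20.0)]
e = mass_potential(t, components)
```

Each component contributes independently, so fitted (τ, κ, σ, β) quadruples fully specify the reconstructed signal.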

Discussion
Quantum mechanics provides successful explanations of microscopic phenomena in almost all branches of physics, and also serves as an indispensable source of guiding principles for efforts to create quantum biology [26][27][28] and, particularly, quantum neurobiology [29]. Quantal analyses play a vital role in understanding the junctional mechanisms by which information is transmitted in the nervous system, from cell to cell across synapses. In the early 1950s, Bernard Katz and his colleagues revolutionized synaptic physiology with the discovery that transmitter substance is released from a terminal in the form of acetylcholine packets of a constant size [30]. This achievement earned Katz the Nobel Prize in Physiology or Medicine in 1970, and established the concept of the acetylcholine quantum as a fundamental entity of transmitter substance. Various models of different synaptic phenomena have been created on this probabilistic basis.
The first author of the present paper (DM) has developed a double barrier quantal model of neurotransmitter release in which the changing amounts of quanta available for release have been deduced as the result of quanta turnover between the two postulated presynaptic pools of transmitter [11,12]. The quanta exchange between the pools has been described in the probabilistic terms of a linear birth-and-death process (BDP).
In the present study we preserve the general principles of this modelling approach but employ a broader class of Markov processes, the non-homogenous BDP. This probabilistic framework provides a radically different approach to the analysis and interpretation of mass potentials than previous deterministic models did.
The major assumption of deterministic models is that the dynamics of the mass potential can be deduced as the superposition of membrane potentials produced by the underlying cellular elements. The general mathematical framework is known as volume conductor theory, versions of which address EEG [31], ECG [15], and EMG [32]. Significant difficulties are created by the intractably huge multiplicity of cellular elements, along with insufficient knowledge of their parameters. Under such uncertainty, little confidence can be placed in particular solutions from a large number of different models.
The idea of an alternative phenomenological approach using statistical considerations has been put forward by Elul [33]. Based on the analysis of the synchronization of EEG sources, Elul proposed that evolution of brain waves may be governed by statistical regularities following from the central limit theorem. Thus, EEG may simply be accounted for as the normally distributed output resulting from a combination of the activity of many independent neuronal generators.
The mathematics behind these propositions has never been worked out in detail. The major problem is to support the probabilistic propositions with a quantitative microscale model. It is the combination of general probabilistic methods with specific particle models that creates the tools of the probabilistic formalism of quantum mechanics, with their unique power to link microscale and macroscale processes.
As far as we are aware, our study is the first to provide an empirically grounded microscale model of mass potentials, and the first to approach the genesis of mass potentials on this original probabilistic basis. This permits a great deal to be inferred about the mass effects of very complicated cellular structures without even mentioning cells and channels, or even being very explicit about internal makeup. A fundamental point is the consideration of deterministic components of mass potentials as a limit of underlying microscale processes. The crucial step is the change of basic microscale units from continuous time membrane potentials to elementary particles (point charges). The vanishingly small role of individual particles in the generation of the macroscopic scale processes reduces the problem to the study of the limiting behavior of large numbers of random variables. This is a classical problem in probability theory [34]. Some purely mathematical aspects of how deterministic potentials, such as membrane potentials, emerge as a limit from the underlying stochastic sources have been recently considered by Austin [35].
Referring to various probabilistic models of ion transport [36], the new feature of the probabilistic framework of our theory is the implementation of non-homogenous BDPs with time dependent rates of the birth and death. There are at least two important consequences of this extension of the probabilistic tools employed.
First, it becomes possible to extend purely stochastic models of microscale machinery by emergence of transients with deterministic trends, which we have classified as transient deterministic chaos.
Second, the link was established by Eq (6) between the microscale transients and macroscale components of mass potentials. This link provides a way to estimate the microscale parameters on an empirical basis by consideration of mass-potentials as non-stationary processes.
On the macroscopic scale we use fragmentary decomposition, a model-based method of non-stationary signal analysis, to decompose mass potentials into functionally meaningful components on an adaptive segmentation basis [16,17]. At this step we follow the conventional empirical understanding of functional components as the positive and negative peaking waveforms observed in the time course of the mass potential. The possibility of transferring these empirical pieces of mass potentials to a universal analytical model was originally indicated by the results of the time-frequency analysis of the late component waveforms (N1, P2, N2, and P3) of ERPs recorded in a conventional auditory oddball paradigm [37]. It appeared that, although different components are associated with different neural sources, their normalized amplitude spectra are practically identical, being accurately described over a wide range of frequencies by a Gaussian function. Similar results have also been obtained for single trial eye-blink EMGs [16] and the conventional P, Q, R, S, and T components of ECG [38]. This study unites previous considerations of different mass potentials into a general quantum theory which supports the reconstruction of various mass potentials on a common theoretical and computational basis.
The striking similarity of the amplitude spectra of different mass potentials can be seen by comparing the plots in Fig 1c and the lower panel of Fig 3. Such results are somewhat difficult to reconcile with our physiological intuitions and concepts about different structure and functions of EEG, ECG, and EMG generators. To provide a clue to the understanding of this seemingly paradoxical situation, we note that transition from a deterministic to a quantum model replaces consideration of physical parameters of the underlying sources with an account of their statistical properties. In this context, an elementary source of electricity is a point charge carried by an extracellular ion and treated as a random variable. The mass effect emerges as a limit from the collective behavior of a large number of point charges acting on the microscopic scale. The appearance of Gaussian functions in the basic Eqs (9) and (12) indicates that the mass effect produced by multiple microscale events is governed by the rules of the central limit theorem. The theorem states that any process of random sampling, given a sufficiently large sample, tends to produce a normal distribution of sample values, even if the subsets of the whole population from which the samples are drawn do not follow a normal distribution. An important aspect of this phenomenon is that statistical regularities considered by the central limit theorem are independent of the physical nature of the objects from which the statistical samples are drawn. Thus, we may explain the resemblance of the amplitude spectra of EEG, ECG, and EMG signals by the fact that, although the cellular organization of these signals is quite different, the molecular events on the microscopic scale are governed by similar statistical regularities.
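The frequency-domain resemblance thus follows from the central limit theorem rather than from shared physiology. The effect is easy to reproduce numerically: sums of markedly non-Gaussian variables (here exponential, chosen arbitrarily as a stand-in for microscale events) become nearly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each microscale event is modelled as a strongly skewed random variable;
# the macroscale "mass effect" is the sum over many such events.
n_events, n_trials = 1000, 2000
sums = rng.exponential(size=(n_trials, n_events)).sum(axis=1)

# Standardize and measure the residual skewness: a single exponential has
# skewness 2, but the skewness of the sum shrinks like 2/sqrt(n_events).
z = (sums - sums.mean()) / sums.std()
skew = float(np.mean(z ** 3))
```

The residual skewness is near 2/√1000 ≈ 0.06, i.e. the summed output is already close to normal regardless of the shape of the individual events.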
Applications of the central limit theorem are usually associated with a single normal distribution. In such cases the sources of the random samples are stationary stochastic processes.
The ability of our theory to deal with transients is provided by the specific composition of the HWF from two Gaussian functions with identical parameters. We regard these Gaussian functions as products of separate but operationally linked particle populations with identical parameters. The primary population contains particles of the same polarity as the macroscopic component. The particles from the secondary population have an opposite polarity. Thus, an important finding of our study is that, during development of the transient deterministic chaos on the microscopic scale, the two sorts of participating particles with opposite polarities are recognizable from the macroscopic scale.
We may state that HWF ψ(t), emerging as the macroscopic scale effect from synchronized chaotic ion movements on the microscopic scale, can be regarded as a universal building block from which various mass potentials are composed. In terms of quantum mechanics, this effect can be qualified as universality, the evidence that there are properties for a large class of systems that are independent of the exact structural and functional details of the system. This probabilistic basis implemented by our theory provides the means to reduce intractably huge numbers of microscale variables to a universal macroscale object in the form of a HWF with the fewest parameters (κ, σ, and β). As limit distributions for the sums of random variables, these parameters discriminate those aspects of the molecular machinery that are significant on the global scale from those that are not.
The possibility to decompose a complex signal into a set of universal "building blocks" plays an important role in different fields of physics and biology. Thus, synaptic physiology has greatly benefited from the quantum analysis which deduced a miniature end potential as the building block from which postsynaptic potentials are composed. An important outcome is that this concept is applicable to various types of synaptic junctions with different molecular organizations. This is a remarkable feature of quantal universality which applies equally to our treatments.
To our knowledge, our study is the first to put the models of various types of mass potentials on a common theoretical and computational basis. Using HWF as a universal building block, we obtain a general model of the mass potential in the form of Eq (22). This equation suggests that a mass potential evolves as the succession of transient components induced at consecutive time instants. Each component is a HWF described by the system of non-linear Eq (13) with specific sets of estimated parameters.
Conventional procedures of parameter estimation consider a mass potential as the set of components in the form of peaking waves. Given this widely accepted conceptual framework, the variety of different scoring engines, known as peak picking procedures, reduce an electrophysiological signal to the peak amplitudes and latencies. A critical limitation of these procedures is that measurements of a signal at isolated time points are unable to characterize the waveform dynamics which contains important functional and diagnostic information.
Considering the component as being not just a peak in the waveform, but a whole deflection (positive or negative) with a specific shape, the HWF is an adequate analytical model of such an entity. The timing (τ) and magnitude (κ) of a HWF correspond to conventional component parameters. However, no parameters have so far been accepted as adequate measures of the component shapes. In this regard, the σ and β parameters appear as universal shape parameters with the potential to provide important information about the dynamics of the underlying microscale processes. The positive impact of this innovation would be to open up new ways of modeling the mass potentials in both physical and physiological settings. The flexibility and remarkable accuracy of this methodology is seen in Fig 10, where excellent fits to the samples of different mass potentials (EEG, ECG, and eye-blink EMG) with various activity patterns have been obtained using HWF as a universal component of the dynamic model. The possibility to decompose different sections of mass potentials from different spatial locations into ensembles of universal building blocks identifiable on an empirical basis may have important applications for solutions to the forward and inverse problems in electrophysiology. The uniqueness and universality of the HWF provides a means to avoid unsubstantiated speculations about physical factors contributing to the generation of mass potentials. However, the aims of these problems, particularly predictions of the spatial locations of mass potentials, are quite specific and demand special considerations. We are planning to approach some of these problems in future papers.

Time-frequency analysis using the SBF algorithm
The time-frequency analysis is applied to the time series of a mass potential presented by Eq (7) as the sum of empirical HWs. It is necessary to transform each w_i(t) to the frequency domain by numerical calculation of the exponential finite Fourier transform expressed by Eq (8). The procedures applied to the various HWs are universal; thus, we omit subscripts in the HW descriptions and consider w(t) as the target of the following transformations. The corresponding complex spectrum is defined by the Fourier transform

W(iω) = ∫₀ᵀ w(t)·e^(−iωt) dt.

Numerical estimation of Fourier transforms is usually based on procedures employing various algorithms of the fast Fourier transform. The latter is supported by a Fourier series model of the data, i.e. it addresses periodic signals [39]. This difference from the Fourier integral is a troublesome problem when aperiodic signals of short duration, like HWs, are analyzed. When a waveform is not periodic in time, spectral leakage may cause significant distortions of the frequency characteristics. The remedies of windowing and zero-padding usually introduce problems of their own.
As an adequate tool of the spectral analysis of HWs we use the SBF algorithm [18]. The algorithm is an original version of Filon-type methods that provide maximum precision in the estimation of trigonometric integrals using interpolation polynomials of different degrees [40].
Using the SBF algorithm we deal with numerical calculation of the real and imaginary parts of the complex spectrum W(iω):

W(iω) = W_C(ω) − i·W_S(ω),

where W_C(ω) and W_S(ω) are the finite cosine and sine Fourier transforms of w(t) and [0, T] is the interval of integration.
For numerical calculations w(t) is specified by its sampled values w_i = w(t_i), where t_i = iΔ, i takes values from 0 to N, and t_N = T. Over these samples a piecewise-linear polynomial h(t) is created. Fig 11a illustrates the principle of piecewise-linear interpolation. Given w(t) (blue line), the interpolant h(t) is the broken red line created by joining the nodal points 0, 1, 2, 3, and 4 with straight lines. In the same fashion the interpolation can be continued for any number of following nodal points.
A specific aspect of the SBF algorithm is that h(t) is decomposed into a weighted sum of similar basis functions:

h(t) = Σ_{i=1..N} a_i·r(t/t_i),   (23)

where a_i is an interpolation coefficient and r(t) is a similar basis function defined in the form of the triangular basis function (TBF):

r(t) = 1 − t for 0 ≤ t ≤ 1, and r(t) = 0 otherwise.

The simple geometric form of the TBF (a unit right-angled triangle) is illustrated in Fig 11c. The interpolant h(t) is created as a piecewise-linear polynomial which satisfies the interpolation condition h_i = w_i, where h_i = h(t_i). A general procedure for evaluating the interpolation coefficients consists of writing Eq (23) for each interpolation point and solving the resulting system of N linear equations by conventional matrix methods [18].
While this set of linear equations is easily solved by standard methods, a further simplification is possible due to the fact that r(t_m/t_n) = 0 for m ≥ n. To make this mode of interpolation as intuitive as possible, we use as an intermediate element the hat function, one of the geometrical objects employed by the finite-element method [41]. The hat function is defined as

ϑ_i(t) = (t − t_{i−1})/Δt_i for t_{i−1} ≤ t ≤ t_i,
ϑ_i(t) = (t_{i+1} − t)/Δt_{i+1} for t_i ≤ t ≤ t_{i+1},
ϑ_i(t) = 0 otherwise,

where Δt_i = t_i − t_{i−1}. The interpolation capability of the hat function rests on two properties. First, ϑ_i(t) vanishes everywhere except on the two subintervals to which t_i belongs. Second, ϑ_i(t) is unity at node i and zero at all other nodes.
Using the hat function, the interpolant (23) may be presented in the form

h(t) = Σ_{i=1..N} w_i·ϑ_i(t).   (25)

Now the interpolation coefficients are equal to the samples of w(t). This simplification is the result of the change of the basis function from the TBF in Eq (23) to the hat function in Eq (25). The geometric principle at work may be seen by comparing Eqs (23) and (25). The nodal points from 1 to 3 are the tops of the hat functions multiplied by the interpolation coefficients (triangles a1c, b2d and c3e). The graph in Fig 11b shows that the hat function (triangle abc with bd = 1) may be decomposed into the sum of three TBFs (triangles ohc, ofd and oga). Thus, in the interval from t_{i−1} to t_{i+1} the corresponding hat function is expressed as a combination of TBFs, with Δt_i = t_i − t_{i−1}. In these terms, comparison with Eq (23) shows that the interpolation coefficients are readily found from the samples of w(t), where α_0 = 1 and it is assumed that w_0 = 0 and w_{N+1} = 0. The system of simultaneous linear equations has thus been reduced to a set of independent linear equations. Now h(t) is described as a linear combination of TBFs, whose finite Fourier cosine- and sine-integrals R_C(ω) and R_S(ω) are illustrated in Fig 11d. According to the similarity theorem of Fourier transform theory, compression of the abscissa in the time domain corresponds to expansion of the abscissa plus contraction of the ordinate in the frequency domain. These relationships reduce the entire calculation of the transforms to standard manipulations with the relatively simple functions R_C(ω) and R_S(ω), from which W_C(ω) and W_S(ω) are assembled. The corresponding amplitude spectrum W(ω) and phase function δ(ω) are defined as

W(ω) = √(W_C²(ω) + W_S²(ω)),
δ(ω) = arctg[W_S(ω)/W_C(ω)].

These are continuous functions of angular frequency. Accordingly, the values of the frequency domain characteristics can be calculated for arbitrary sets of points.
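The net effect of the TBF machinery is an exact finite Fourier transform of the piecewise-linear interpolant. The closed-form segment integrals below reproduce that result directly; this is our equivalent reformulation for illustration, not the SBF code itself:

```python
import numpy as np

def finite_fourier_pwl(w, dt, omega):
    """Finite cosine and sine Fourier transforms of the piecewise-linear
    interpolant h(t) of samples w[i] at t_i = i*dt, integrated segment by
    segment in closed form (equivalent in result to the TBF decomposition)."""
    t = np.arange(len(w)) * dt
    Wc = np.zeros(len(omega))
    Ws = np.zeros(len(omega))
    for k in range(len(w) - 1):
        t0, t1 = t[k], t[k + 1]
        a = w[k]                          # value at the left node
        b = (w[k + 1] - w[k]) / dt        # slope on the segment
        for j, om in enumerate(omega):
            if om == 0.0:
                Wc[j] += a * dt + b * dt ** 2 / 2.0
                continue
            s0, s1 = np.sin(om * t0), np.sin(om * t1)
            c0, c1 = np.cos(om * t0), np.cos(om * t1)
            # closed-form integrals of (a + b*(t - t0)) * cos/sin(om*t)
            Wc[j] += a * (s1 - s0) / om + b * ((c1 - c0) / om ** 2 + dt * s1 / om)
            Ws[j] += a * (c0 - c1) / om + b * ((s1 - s0) / om ** 2 - dt * c1 / om)
    return Wc, Ws

# Amplitude spectrum and phase function, as in the text
w = np.ones(11)                           # w(t) = 1 on [0, 1]
omega = np.array([0.0, 1.0, 2.5])
Wc, Ws = finite_fourier_pwl(w, dt=0.1, omega=omega)
amplitude = np.sqrt(Wc ** 2 + Ws ** 2)
phase = np.arctan2(Ws, Wc)
```

Because the interpolant of a constant signal is the signal itself, the computed transforms match the analytical values sin(ω)/ω and (1 − cos ω)/ω to machine precision, with no spectral leakage.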
We employ a logarithmic frequency scale for calculations of amplitude spectra and a natural frequency scale for calculations of phase functions.
In the case of a logarithmic scale, the frequency characteristics were calculated for the angular frequencies ω_i = ω_0·C^(i−1) (i = 1, …, N), where ω_0 is the initial angular frequency and C > 1 is a parameter that defines the sampling rate. For the selection of this parameter it is convenient to use the formula C = exp(ln10/N_D), where N_D is the number of samples per decade. In the case of a natural frequency scale, ω_i = ω_0 + iΔω (i = 0, …, N), where Δω is the discretization step.
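For instance, with N_D = 20 samples per decade the grid advances exactly one decade every 20 points (the function name here is assumed for illustration):

```python
import numpy as np

def log_frequency_grid(omega0, n_per_decade, n_points):
    """Angular frequencies omega_i = omega0 * C**(i-1), i = 1..n_points,
    with C = exp(ln(10)/N_D) giving n_per_decade samples per decade."""
    C = np.exp(np.log(10.0) / n_per_decade)
    return omega0 * C ** np.arange(n_points)

grid = log_frequency_grid(omega0=0.1, n_per_decade=20, n_points=41)
```

Such a grid samples the low-frequency part of the spectrum densely while still covering several decades with few points.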

Kolmogorov-Smirnov test
An important prerequisite of the statistical analysis is that the empirical amplitude spectra that we deal with are normalized for both amplitude and frequency. These are dimensionless theoretical and empirical spectra, G(γ) and Z(γ) respectively.
For comparison of these spectra we refer to the discrete values of the relative frequency γ, denoted γ[i] = γ_0·C^(i−1) (i = 1, 2, …), where γ_0 = ω_0/ω_C. These are evenly distributed points on the logarithmic abscissa scale.
To estimate the discrepancy between G(γ) and Z(γ) we measure MSE[m,n], the mean square error calculated over the range of frequencies γ[i] from i = m to i = n.
Observations of various amplitude spectra revealed that, typically, the fits virtually coincide in the range of relative frequencies from 0 to 1. At γ > 1 the errors increase with frequency. Taking these peculiarities into account, we organized the tests of discrepancies as a two-step procedure using the empirically established error thresholds T1 = 0.0001 and T2 = 0.002. Given the samples of ε in the form of two different ensembles, ε¹ = {ε¹₁, …, ε¹ₙ, …, ε¹_N} and ε² = {ε²₁, …, ε²ₖ, …, ε²_K}, we use the Kolmogorov-Smirnov two-sample test to decide whether ε¹ and ε² are produced by the same or different distributions [42]. Each of the data sets ε¹ and ε² is converted to a cumulative frequency distribution. The test is based on the evaluation of the maximum vertical deviation D between the cumulative frequency distributions. The null hypothesis that the two distributions are the same is rejected if the value of D exceeds the critical value defined by the tables of D statistics.
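The D statistic itself is simple to compute as the maximum vertical deviation between the two empirical cumulative distribution functions (a from-scratch sketch; in practice a library routine such as scipy's ks_2samp would also supply the critical values):

```python
import numpy as np

def ks_two_sample_D(e1, e2):
    """Kolmogorov-Smirnov two-sample statistic: the maximum vertical
    deviation D between the empirical CDFs of the samples e1 and e2."""
    e1, e2 = np.sort(np.asarray(e1)), np.sort(np.asarray(e2))
    points = np.concatenate([e1, e2])          # evaluate at every sample value
    cdf1 = np.searchsorted(e1, points, side="right") / len(e1)
    cdf2 = np.searchsorted(e2, points, side="right") / len(e2)
    return float(np.max(np.abs(cdf1 - cdf2)))
```

D equals 0 for identical samples and 1 for fully separated ones; the null hypothesis is rejected when D exceeds the tabulated critical value.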

Algorithm of numerical simulations
The processes to be reconstructed in numerical simulations are the BDPs satisfying the general condition in the form of Eq (1). Specific aspects of particle turnover in the primary and secondary particle populations are taken into account by empirically based transition probabilities defined by Eqs (18)–(21). Calculations are organized as a succession of standard steps dealing in sequential order with the time intervals [t_i, t_{i+1}] for i taking values from −M to N−1. The time interval from t_{−M} to t_0 corresponds to the resting conditions. At t = t_0 the resting state is switched to the transient conditions.
For the primary particle population the procedure for each step is as follows:

1. Set x_i to be the initial state of the particle population. For the first interval, beginning at t_{−M}, define the initial state by an arbitrary integer N_0, i.e. x_{−M} = N_0.

2. Estimate the transition probabilities from Eqs (18) and (19) for the transient conditions, or use Eq (21) for the resting conditions.

3. Pick out random real numbers r_b and r_d using a random number generator that produces real numbers in the range from 0 to 1.
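The stepwise procedure above can be sketched as a single-trajectory simulation. The transition probabilities of Eqs (18)–(21) are not reproduced here, so `birth_prob` and `death_prob` are hypothetical placeholder functions supplied by the caller, not the paper's equations:

```python
import random

def simulate_bdp(n0, n_steps, birth_prob, death_prob, seed=0):
    """One realization of a birth-and-death process.

    birth_prob(t, x) and death_prob(t, x) stand in for the empirically
    based transition probabilities of Eqs (18)-(21); any functions of
    time and state can be passed in. Returns the state trajectory.
    """
    rng = random.Random(seed)
    x = n0                      # initial state, x_{-M} = N_0
    trajectory = [x]
    for t in range(n_steps):
        r_b = rng.random()      # random numbers r_b, r_d in [0, 1)
        r_d = rng.random()
        if r_b < birth_prob(t, x):
            x += 1              # birth: one particle added
        if r_d < death_prob(t, x) and x > 0:
            x -= 1              # death: one particle removed
        trajectory.append(x)
    return trajectory
```

For instance, `simulate_bdp(100, 1000, lambda t, x: 0.3, lambda t, x: 0.3)` produces a fluctuating resting-state trajectory; making the probabilities time dependent models the switch to transient conditions at t = t_0.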

Empirical mass potentials: EEG, ECG, eye-blink EMG
A powerful element of the probabilistic methods of quantum mechanics is universality: the ability to identify macroscale properties that are independent of the exact structural and functional details of the underlying microscale system [43]. To be able to deal with this aspect of our theory on an empirical basis we have selected EEG, ECG, and eye-blink EMG mass potentials. Although the cellular origins of these signals are quite different, they all depend upon ions crossing a membrane, leaving one compartment and entering another.
As characteristic samples of EEG, ECG, and eye-blink EMG signals, we used the data available from previously published studies [37,44,45]. Ethical approval was provided by the Macquarie University Human Research Ethics Committee (Medical Sciences) (EC00448) and by the Wake Forest University Institutional Review Board (IRB00021587).
Though there is some uncertainty with regard to the generators of EEG, there is general agreement that EEG arises from synchronized synaptic activity in populations of cortical neurons [46]. Non-excitable glial cells have also been shown to contribute to these processes, especially at low frequencies [47,48].
The EEG data used in our analyses were collected and described in a study of single-trial auditory event related potentials [37]. A standard auditory "oddball" paradigm was employed. The EEGs were recorded from 40 healthy subjects (20 males and 20 females; age 20-50 years). Stereo headphones conveyed, in random order, 1500-Hz task-relevant tones and 1000-Hz task-irrelevant tones. EEGs were recorded from 19 electrode sites according to the 10-20 international system and corrected off-line for electro-oculogram artefacts using a technique based on Gratton et al. [49]. The EEG segments for analysis were extracted from digitally stored data sets from both pre- and post-stimulus intervals.
The surface ECG, obtained by recording the potential differences between electrodes placed on the surface of the skin, is a common indicator of the electrical activity of the heart, in both clinical and research settings. The conventional conceptualization of the ECG as an ensemble of P, Q, R, S, and T major waves associates these deflections with different aspects of heart performance during the cardiac cycle. The P-wave reflects the excitation of the atria, the QRS complex that of the ventricles, and the T-wave is associated with the recovery of the initial electrical state of the ventricles. This means that different microscale events produce the half-waves identified in the ECG time course.
The ECG data used in our analyses were collected and described in a study of the dynamics of ECG waveforms [44]. For the purposes of this study, ECG data from a group of 14 healthy subjects (7 males and 7 females; age 22-62 years) were selected. Utilizing the built-in selection facilities of an ELI 250c Mortara electrocardiograph, standard 10 s segments of filtered (0.05 to 300 Hz frequency range) and digitized (1 ms sampling interval) ECGs were extracted from each of 12 leads for the time-frequency analysis.
Because the eye-blink EMG is less familiar to a wide audience than EEG and ECG, we provide some basic facts about this remarkable electrophysiological signal. Eye-blink EMG is an indicator of motor unit action potentials in the orbicularis oculi muscle that are caused by activity of the facial nerve (CN7) [50]. It is widely used as an electrophysiological marker of the startle reflex, i.e. a brainstem response to a sudden stimulus, such as a sound, a flash of light, a tap to the forehead, a puff of air to the side of the face, or an electrical pulse to the forehead [51]. Enabling the evaluation of information processing at different levels of the central nervous system, eye-blink EMG has been used in a wide variety of research and clinical applications in humans, to study basic stimulus processing [52,53], attentional factors [54], emotion [55], and personality variables [56].
Eye-blink EMG is measured from surface electrodes placed on the skin overlaying the orbicularis oculi, and develops over time as a succession of positive and negative deflections (peaks) generally accepted as physiologically and clinically meaningful components.
The results are from a study of 35 healthy undergraduate students (ages 18-22 years). Startle eye-blinks were elicited with a 50 ms duration, 100 dB broadband noise with an instant rise time, presented via Sennheiser headphones. EMG data were collected using 4 mm inner diameter biopotential electrodes attached to the face below the left eye. The EMG signal was filtered by a Biopac EMG 100 bioamplifier, passing 0.1 to 500 Hz, and was sampled by a Biopac MP150 workstation at a rate of 5000 Hz. Multiple-pass moving averaging was then applied for baseline wander correction. After these procedures, 300 ms EMG segments were extracted from single trials (50 ms pre-stimulus onset and 250 ms post-stimulus onset). Only data from control startle trials (no prepulse, no task) were included in this analysis.
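A multiple-pass moving average for baseline wander correction might look like the following sketch; the window length and number of passes are illustrative assumptions, not the settings used in the study:

```python
import numpy as np

def remove_baseline(signal, window, passes=3):
    """Estimate baseline wander by repeatedly smoothing the signal with
    a moving average, then subtract the estimate from the signal.
    Returns (corrected_signal, baseline_estimate)."""
    sig = np.asarray(signal, dtype=float)
    kernel = np.ones(window) / window
    baseline = sig.copy()
    for _ in range(passes):
        # mode='same' keeps the baseline aligned with the original samples
        baseline = np.convolve(baseline, kernel, mode='same')
    return sig - baseline, baseline
```

Each pass further smooths the baseline estimate, so repeated passes remove slow drift while leaving the faster EMG deflections in the corrected signal.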