A probabilistic, distributed, recursive mechanism for decision-making in the brain

Fig 2

The MSPRT and rMSPRT as a diagram.

Circles joined by arrows depict Bayes' rule. All C evidence streams (data) are used to compute each of the N likelihood functions. The product of the likelihood and the prior probability of each hypothesis is normalized by the sum (∑) of the products of likelihoods and priors over all hypotheses, yielding the posterior probability of that hypothesis. All posteriors are then compared to a constant threshold. Each time step has one of two outcomes: if a posterior has reached the threshold, the hypothesis with the highest posterior is chosen; otherwise, sampling from the evidence streams continues. The MSPRT, as in [25] and [17], requires only what is shown in black. The general recursive MSPRT introduced here re-uses the posteriors from Δ time steps in the past for present inference, thus re-using its own output; hence the rMSPRT comprises both the black and blue elements. If we work with the negative logarithm of Bayes' rule (as we do in this article), all relations between computations are preserved, but products of computations become sums of their negative logarithms, and the normalizing divisions become subtractions. Eq 9 shows this for the rMSPRT; the rMSPRT itself is formalized by Eq 10.
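The likelihood-times-prior, normalization, and threshold comparison described above can be sketched as a single inference step. This is a minimal illustrative sketch, not the authors' implementation: the function name, interface, and use of NumPy are assumptions, and it works in the probability domain rather than the negative-logarithm form of Eqs 9 and 10. Passing the posteriors from Δ time steps ago as the prior argument gives the recursive (rMSPRT) variant; passing fixed initial priors recovers the plain MSPRT.

```python
import numpy as np

def rmsprt_step(likelihoods, delayed_posteriors, threshold):
    """One illustrative inference step (hypothetical interface).

    likelihoods: per-hypothesis likelihood of the current evidence samples
        from all C streams (length-N array).
    delayed_posteriors: posteriors from Delta time steps ago, re-used here
        as the prior (for the MSPRT, fixed initial priors instead).
    threshold: constant decision threshold on the posteriors.

    Returns (decision, posteriors): decision is the index of the winning
    hypothesis, or None if sampling should continue.
    """
    products = likelihoods * delayed_posteriors      # likelihood x prior
    posteriors = products / products.sum()           # normalize by the sum
    if posteriors.max() >= threshold:
        # A posterior reached the threshold: pick the best hypothesis.
        return int(np.argmax(posteriors)), posteriors
    return None, posteriors                          # keep sampling
```

For example, with uniform delayed posteriors and likelihoods strongly favoring one hypothesis, the step returns that hypothesis's index; with more ambiguous likelihoods it returns None and sampling continues.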