Figure 1.
The Separatrix Algorithm addresses two main sub-problems.
The first is to use observed binary outcomes (top) to estimate the probability of success (bottom), and the second is to choose new points to sample. This is done so as to identify a particular isocline, called the separatrix, as illustrated by the dashed gray line.
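The two sub-problems can be illustrated with a minimal sketch: fit a parametric success-probability model to binary outcomes, then solve for the isocline at a chosen interest level. Everything below is illustrative rather than the paper's inference machinery — the logistic model, the tanh-shaped test function, and the interest level q = 0.7 are all assumptions:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 1-D test function: the true probability of success rises
# smoothly with x (the paper's tanh test function has this general shape).
def p_true(x):
    return 0.5 * (1.0 + math.tanh(5.0 * (x - 0.5)))

random.seed(0)
xs = [random.random() for _ in range(400)]                   # input points
ys = [1 if random.random() < p_true(x) else 0 for x in xs]   # binary outcomes

# Sub-problem 1: estimate p(success | x). Here, fit a two-parameter
# logistic model p(x) = sigmoid(a*x + b) by gradient ascent on the
# Bernoulli log-likelihood (which is concave, so this converges).
a, b = 0.0, 0.0
lr = 2.0
for _ in range(4000):
    ga = gb = 0.0
    for x, y in zip(xs, ys):
        err = y - sigmoid(a * x + b)
        ga += err * x
        gb += err
    a += lr * ga / len(xs)
    b += lr * gb / len(xs)

# The separatrix is the isocline where the estimated probability equals
# the interest level q (q = 0.7 is an illustrative choice).
q = 0.7
x_sep = (math.log(q / (1.0 - q)) - b) / a
```

Sub-problem 2 (choosing new points to sample) would then concentrate further evaluations near `x_sep`, where the isocline estimate is most sensitive to new data.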
Figure 2.
One-dimensional hyperbolic tangent analysis.
(A) Shown are the true success probability function (dashed line), LHS samples (full and empty circles), the inferred distribution (hypercolor), and the most likely value (black line). The vertical magenta line is at the separatrix corresponding to an interest level of
. (B) The probability density after observing
samples using the Separatrix Algorithm. Note that the estimate is tight near the separatrix. (C) The inner workings of the igBDOE algorithm. First, test and sample points are carried over from the previous iteration, in which they were drawn from the variance of the interest distribution, solid black (left axis), which in turn is computed from the interest distribution:
is in blue-dash and
is in red dash-dot. The expected KL divergence is plotted for each of the candidate sample points (green circles, right axis). The best
of these candidates, indicated by red crosses, will be selected. (D) The final density estimate shows that the igBDOE algorithm placed samples in and around the separatrix. Ticks on the x-axis represent samples.
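The expected-KL scoring of candidate points in panel (C) can be sketched with a discretized belief over the success probability at a point: score each candidate by the expected KL divergence between the post-observation and current belief (the expected information gain from one Bernoulli sample), then keep the highest-scoring candidates. The grid discretization and the two toy beliefs are assumptions for illustration, not the paper's igBDOE internals:

```python
import math

def expected_kl(belief, grid):
    """Expected KL divergence between the post-observation and current
    belief over the success probability p, for one Bernoulli observation.
    `belief` is a probability vector over the values in `grid`."""
    p1 = sum(w * p for w, p in zip(belief, grid))   # predictive P(y = 1)
    gain = 0.0
    for y, py in ((1, p1), (0, 1.0 - p1)):
        if py == 0.0:
            continue
        # Bayes update of the belief given outcome y
        post = [w * (p if y == 1 else 1.0 - p) / py for w, p in zip(belief, grid)]
        gain += py * sum(q * math.log(q / w) for q, w in zip(post, belief) if q > 0)
    return gain

grid = [i / 100 for i in range(1, 100)]             # discretized values of p

def uniform():
    return [1.0 / len(grid)] * len(grid)

def peaked(center=0.9, width=0.02):
    w = [math.exp(-((p - center) / width) ** 2) for p in grid]
    z = sum(w)
    return [v / z for v in w]

# Two candidate sample points: one where the outcome is nearly certain
# (sharply peaked belief) and one where it is uncertain (flat belief).
candidates = {"certain": peaked(), "uncertain": uniform()}
gains = {name: expected_kl(b, grid) for name, b in candidates.items()}
best = max(gains, key=gains.get)
```

As expected, an observation at the uncertain point carries more information, so an igBDOE-style rule would prefer it; with many candidates one would keep the top few by this score.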
Table 1.
Parameter values.
Figure 3.
One-dimensional hyperbolic tangent performance.
For the one-dimensional hyperbolic tangent test function (18), the Separatrix Algorithm outperforms Latin hypercube sampling and traditional BDOE on a likelihood-based performance metric (19).
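Latin hypercube sampling, the baseline in this comparison, admits a compact standard construction — this is the textbook recipe, not necessarily the exact design the study used:

```python
import random

def latin_hypercube(n, d, rng=random):
    """One LHS design: n points in [0, 1]^d such that, in every dimension,
    each of the n equal-width strata contains exactly one point."""
    cols = []
    for _ in range(d):
        # one uniform draw inside each of the n strata of this dimension
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)            # decouple the strata across dimensions
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n)]

random.seed(1)
pts = latin_hypercube(8, 2)
```

Because LHS fixes the design before seeing any outcomes, it cannot concentrate samples near the separatrix — which is the behavior the adaptive algorithm exploits here.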
Figure 4.
Two-dimensional separatrix results.
The inference algorithm was applied at all points on a regular grid after collecting
samples. Here, we display the mode (A), variance (B), and samples (C). The dashed line in (A) is the true separatrix, and the solid line is the estimate. Circles and crosses in (C) represent failures and successes, respectively, and red dots indicate samples selected on the final iteration.
Figure 5.
Two-dimensional hyperbolic tangent performance.
The Separatrix Algorithm again outperforms Latin hypercube sampling on the mean log likelihood metric, which was evaluated at points spaced evenly in arc-length along the separatrix.
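Spacing evaluation points evenly in arc-length along a separatrix polyline can be done by linear interpolation against cumulative segment lengths; the function and the L-shaped example polyline below are an illustrative sketch, not the study's evaluation code:

```python
import math

def resample_arclength(poly, m):
    """Return m points spaced evenly in arc-length along the polyline
    `poly` (a list of (x, y) vertices), endpoints included (m >= 2)."""
    # cumulative arc length at each vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(poly, poly[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    out = []
    j = 0
    for k in range(m):
        s = total * k / (m - 1)         # target arc length for point k
        while j < len(poly) - 2 and cum[j + 1] < s:
            j += 1                      # advance to the containing segment
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (s - cum[j]) / seg
        (x0, y0), (x1, y1) = poly[j], poly[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

# Example: an L-shaped polyline of total length 2, resampled at 5 points.
pts = resample_arclength([(0, 0), (1, 0), (1, 1)], 5)
```

Evaluating a pointwise metric at such points weights all parts of the separatrix equally, rather than over-weighting regions where the estimated curve happens to have dense vertices.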
Figure 6.
Malaria model separatrix results.
The separatrix (A), variance (B), and samples with density (C) after simulating the malaria model times.