Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI
Fig 3
Evaluation of stepwise training protocol on chaotic Lorenz attractor.
A. Relative frequency of normalized KL divergences evaluated in observation space (KLx) after running the EM algorithm with the PLRNN-SSM-anneal (blue) and PLRNN-SSM-random (red) protocols on 100 distinct trajectories drawn from the Lorenz system (with T = 1000 and M = 8, 10, 12, 14). B. Same as A for the normalized expected joint log-likelihood E_q(Z|X)[log p(X,Z|θ)] (see S1 Text, Eq 1). C. Decrease in KLx over the distinct training steps of 'PLRNN-SSM-anneal' (see Algorithm-1; the first step refers to an LDS initialization and was removed). D. Increase in (rescaled) expected joint log-likelihood across training steps 2–31 in 'PLRNN-SSM-anneal'. Since the protocol partly works by systematically scaling down Σ, for comparability the log-likelihood after each step was recomputed (rescaled) by setting Σ to the identity matrix. E. Representative example of the joint log-likelihood increase during the EM iterations of the individual training steps 2–31 for a single Lorenz trajectory. Unstable system estimates and likelihood values < −10^3 were removed from all figures for visualization purposes.
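The KLx measure above compares where the true and the generated trajectories spend their time in observation space. As a minimal sketch of this idea (not the paper's exact estimator or normalization, which may differ), one can bin the observation space, form empirical occupancy histograms for the true and generated trajectories, and compute a Kullback-Leibler divergence between them; here it is normalized by the KL divergence of the true occupancy against a uniform occupancy, which is one plausible choice:

```python
import numpy as np

def lorenz(T=1000, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0, x0=(1.0, 1.0, 1.0)):
    """Euler-integrate the Lorenz system for T steps (illustrative data source)."""
    x = np.empty((T, 3))
    x[0] = x0
    for t in range(1, T):
        xp, yp, zp = x[t - 1]
        x[t] = x[t - 1] + dt * np.array([s * (yp - xp),
                                         xp * (r - zp) - yp,
                                         xp * yp - b * zp])
    return x

def binned_kl(x_true, x_gen, n_bins=10, eps=1e-12):
    """Histogram-based KL divergence between two point clouds in
    observation space -- a simplified stand-in for the paper's KLx."""
    # Use a common binning range covering both trajectories
    lo = np.minimum(x_true.min(axis=0), x_gen.min(axis=0))
    hi = np.maximum(x_true.max(axis=0), x_gen.max(axis=0))
    edges = [np.linspace(lo[d], hi[d], n_bins + 1)
             for d in range(x_true.shape[1])]
    p, _ = np.histogramdd(x_true, bins=edges)
    q, _ = np.histogramdd(x_gen, bins=edges)
    # Smooth empty bins to avoid log(0), then renormalize to probabilities
    p = (p.ravel() + eps) / (p.sum() + eps * p.size)
    q = (q.ravel() + eps) / (q.sum() + eps * q.size)
    kl = np.sum(p * np.log(p / q))
    # Normalize by the KL of the true occupancy against uniform occupancy
    # (an assumed normalization, chosen here only for illustration)
    kl_uniform = np.sum(p * np.log(p * p.size))
    return kl / kl_uniform
```

For identical trajectories the normalized divergence is zero, while a generated trajectory that visits different regions of observation space yields a clearly positive value, mirroring the qualitative behavior plotted in panels A and C.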