Fig 1.
Schematic of an update step of the NCA.
For each C-channel pixel in the S × S lattice x(n) at step n, a perception vector z(n) encoding local information is constructed via convolution with hard-coded kernels K. This perception vector is fed through a dense neural network Fθ with trainable weights W1, W2 and biases v. The nonlinear activation function u(⋅) is applied to the single hidden layer of the network. The output of this network yields the incremental update to that pixel, which is applied in parallel to all pixels through the stochastic mask σ to determine the lattice state x(n+1) at step n + 1.
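The update step described in this caption can be sketched in NumPy as follows. The kernel choice (Identity and Laplacian), hidden width, mask rate, relu activation and zero-initialisation of W2 are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

S, C, H = 32, 8, 64          # lattice size, channels, hidden units (assumed)
rng = np.random.default_rng(0)

# Hard-coded 3x3 perception kernels K: Identity and a Laplacian.
K_id = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
K_lap = np.array([[0.25, 0.5, 0.25], [0.5, -3.0, 0.5], [0.25, 0.5, 0.25]], float)
kernels = [K_id, K_lap]

# Trainable parameters of the dense network F_theta.
W1 = rng.normal(0, 0.1, (H, C * len(kernels)))
v = np.zeros(H)                      # hidden-layer bias
W2 = np.zeros((C, H))                # zero-init, so early updates are small

def conv2d_periodic(x, k):
    """3x3 convolution of a single channel with periodic boundaries."""
    out = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += k[di + 1, dj + 1] * np.roll(x, (-di, -dj), axis=(0, 1))
    return out

def nca_step(x, mask_rate=0.5):
    """One update x(n) -> x(n+1); x has shape (S, S, C)."""
    # Perception vector z(n): every channel convolved with every kernel.
    z = np.concatenate(
        [np.stack([conv2d_periodic(x[..., c], k) for c in range(C)], -1)
         for k in kernels], axis=-1)                  # shape (S, S, 2C)
    h = np.maximum(0.0, z @ W1.T + v)                 # relu hidden layer
    dx = h @ W2.T                                     # incremental update
    sigma = rng.random((S, S, 1)) < mask_rate         # stochastic mask
    return x + sigma * dx

x = rng.random((S, S, C))
x_next = nca_step(x)
```

The stochastic mask σ applies each pixel's increment independently with probability `mask_rate`, which breaks the global synchrony of the update.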
Fig 2.
1D phase space representation of NCA trajectories, predictions and true states y(m, r).
Here M = 3, R = 2. The first batch (x(⋅,1)) is trained with re-initialised intermediate states, whereas the second batch (x(⋅,2)) is trained with propagated intermediate states.
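The difference between the two batching schemes can be illustrated with a toy scalar map standing in for the NCA; the map and the sample true states below are hypothetical, only the roles of the two batches follow the caption:

```python
import numpy as np

M, R = 3, 2                       # snapshots and batches, as in the caption

def nca(x):
    """Stand-in for running the NCA between consecutive snapshots."""
    return 0.9 * x + 0.1

y = np.array([0.0, 0.5, 0.8])     # toy true states y(m, .) at the M times

# Batch 1 (re-initialised): each segment restarts from the true state.
preds_reinit = [nca(y[m]) for m in range(M - 1)]

# Batch 2 (propagated): each segment continues from the NCA's own output.
x = y[0]
preds_prop = []
for m in range(M - 1):
    x = nca(x)
    preds_prop.append(x)
```

The first predictions agree, but from the second segment onwards the propagated batch accumulates the NCA's own error while the re-initialised batch does not.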
Fig 3.
Snapshots taken from the training data used for learning PDE dynamics.
The PDE is run for N = 1024 steps with timestep 1 and DA = 0.1, DB = 0.05, α = 0.06230, γ = 0.06268.
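A simulation consistent with these parameters can be sketched as below. Only the diffusion coefficients, rates, timestep and step count come from the caption; the Gray-Scott reaction terms and the seed initial condition are assumptions:

```python
import numpy as np

S = 64
DA, DB, alpha, gamma = 0.1, 0.05, 0.06230, 0.06268

def laplacian(x):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

def pde_step(A, B, dt=1.0):
    """One explicit Euler step of (assumed) Gray-Scott dynamics."""
    reaction = A * B * B
    A_new = A + dt * (DA * laplacian(A) - reaction + alpha * (1 - A))
    B_new = B + dt * (DB * laplacian(B) + reaction - (alpha + gamma) * B)
    return A_new, B_new

# Uniform A with a localised square of B as a generic seed.
A = np.ones((S, S))
B = np.zeros((S, S))
B[28:36, 28:36] = 0.5
for _ in range(100):
    A, B = pde_step(A, B)
```

With dt = 1 and these diffusion coefficients the explicit scheme stays stable (dt · D · 4 < 1 on both channels).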
Fig 4.
A: Loss as a function of time sampling t. The training loss is the minimum loss over training epochs, averaged over 4 random initialisations, with standard deviation as error bars. The test loss shows how the best-trained NCA (minimal training loss) performs on unseen initial conditions. B: Initial condition and true state (PDE simulation) at n = 2048. C: Snapshots of NCA trajectories (at n = 2048) from unseen initial conditions, with varying time sampling t. Each NCA is trained for 4000 epochs with a mini-batch size B = 64.
Fig 5.
Snapshots of PDE and NCA trajectories from an unseen initial condition.
NCA with C = 8 channels, Identity and Laplacian kernels, and relu activation, trained with time sampling t = 32 for 4000 epochs using the Euclidean loss.
Fig 6.
A: Loss as a function of noise intensity ξ, on an unseen initial condition. ξ interpolates the NCA training data between the PDE trajectory (ξ = 0) and uniform noise (ξ = 1). Also shown is an interpolating moving average ± standard deviation, as random parameter initialisation introduces significant variation. B: Initial condition and true state (PDE simulation) at n = 2048. C: Snapshots of NCA trajectories (at n = 2048) from unseen initial conditions, with varying noise intensity ξ. Each NCA is trained for 4000 epochs with a mini-batch size B = 64.
Fig 7.
Given a space invader initial condition, the NCA morphs through a microbe and remains stable at a rooster pattern. Images taken from https://emojipedia.org/google, used under Apache License 2.0: https://github.com/googlefonts/noto-emoji/blob/main/LICENSE
Fig 8.
NCA trained on image morphing task with different kernels and activations.
16 channels, 64 steps between images. A,B: Training loss and snapshots of NCA with relu activation and various kernels. C,D: Training loss and snapshots of NCA with Identity, gradient and Laplacian kernels, for various activation functions.
Fig 9.
NCA trained on image morphing task.
Relu activation; Identity, gradient and Laplacian kernels. A,B: Training loss and snapshots of 16 channel NCAs trained with different time sampling. C,D: Training loss and snapshots of NCAs trained with time sampling of 32, and various numbers of channels.
Fig 10.
Local stability behaviour of two NCAs.
A: 32 channels, 32 steps between images. B: 16 channels, 64 steps between images. The top-left heatmap in each case shows how many pixels of the final image change (by more than 0.1, to account for random fluctuations) when that pixel is perturbed in the initial condition. The remaining images show snapshots of the final state when the initial condition is perturbed locally, for different perturbation locations.
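The heatmap construction described here can be sketched directly: perturb one pixel of the initial condition, rerun the dynamics, and count final-state pixels that change by more than 0.1. The `run` function below is a toy stand-in (a few linear blur steps with periodic boundaries), not the trained NCA:

```python
import numpy as np

S = 16
rng = np.random.default_rng(1)

def run(x0, steps=4):
    """Stand-in dynamics: repeated local averaging, periodic boundaries."""
    x = x0.copy()
    for _ in range(steps):
        x = 0.5 * x + 0.125 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                               + np.roll(x, 1, 1) + np.roll(x, -1, 1))
    return x

x0 = rng.random((S, S))
final = run(x0)

# Heatmap entry (i, j): number of final-state pixels that change by more
# than 0.1 when pixel (i, j) of the initial condition is perturbed.
heatmap = np.zeros((S, S), int)
for i in range(S):
    for j in range(S):
        xp = x0.copy()
        xp[i, j] += 1.0                      # local perturbation
        heatmap[i, j] = int(np.sum(np.abs(run(xp) - final) > 0.1))
```

For a trained NCA the heatmap is generally non-uniform, which is what distinguishes the two panels in the figure; the toy dynamics here is translation invariant, so its heatmap is constant.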
Fig 11.
The rightmost column shows extrapolation beyond the training time, demonstrating stability of the final state.
Top row shows snapshots from the unperturbed trajectory. Middle row shows snapshots from the minimal initial perturbation that destroys the final state. Bottom row shows snapshots from the maximal initial perturbation that preserves the final state. NCA (32 channels; Identity, gradient and Laplacian kernels; time sampling t = 32; relu activation) trained on image morphing task. Initial condition taken from https://emojipedia.org/google, used under Apache License 2.0.
Fig 12.
Behaviour of trained NCA on symmetrically perturbed inputs.
The left column shows inputs; the middle two columns show final-state behaviour for an NCA with asymmetric kernels (Identity, gradient and Laplacian); the rightmost two show final-state behaviour for an NCA with symmetric kernels (Identity, Laplacian, average). The augmented-data examples show NCAs trained on trajectories rotated by random angles. Initial condition taken from https://emojipedia.org/google, used under Apache License 2.0.