
Fig 1.

Overview of LocaNMF: a decomposition of the widefield calcium imaging (WFCI) video into spatial components A and temporal components C, with the spatial components soft-aligned to an atlas, here the Allen Institute Common Coordinate Framework (CCF) atlas.
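The factorization described in this caption writes the video matrix as a product of nonnegative spatial and temporal components, Y ≈ AC. A minimal sketch of that idea on toy data, using plain NMF from scikit-learn (without LocaNMF's atlas-based locality constraints on A; all dimensions and variable names here are illustrative, not the paper's):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_frames, k = 100, 200, 4  # toy sizes, not the paper's

# Synthesize a nonnegative "video" matrix Y (pixels x frames)
A_true = rng.random((n_pixels, k))
C_true = rng.random((k, n_frames))
Y = A_true @ C_true

# Plain NMF: Y ~ A C. LocaNMF additionally soft-anchors the columns
# of A to Allen CCF atlas regions; that constraint is omitted here.
model = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=0)
A = model.fit_transform(Y)   # spatial components (pixels x k)
C = model.components_        # temporal components (k x frames)

rel_err = np.linalg.norm(Y - A @ C) / np.linalg.norm(Y)
```

Because the toy matrix is an exact rank-k nonnegative product, the reconstruction error is near zero; real WFCI data would leave a residual.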


Table 1.

A summary of the notation for LocaNMF, with the corresponding matrix dimensions and descriptions.


Fig 2.

LocaNMF can accurately recover the spatial and temporal components in simulated WFCI data.

(A) Left column: two example ground truth spatial components; Middle and Right columns: the corresponding spatial components as recovered by (Middle column) LocaNMF; (Right column) SVD. (B) Correlation between ground truth spatial components and those recovered by (Top) LocaNMF; (Bottom) SVD.


Fig 3.

Spatial and temporal maps of all regions in three different recording sessions from two different mice, as found with LocaNMF.

Note that LocaNMF outputs multiple components per atlas region. Left: the first, second, and third components extracted from each region, one row per component, colored by region. Right: the trial-averaged temporal components for Session 1, Mouse 1 (aligned to lever grab), with the same color scheme as the spatial components. Link to a decomposed video of one trial here.


Fig 4.

Comparison with ROI analysis.

A. LocaNMF spatial components anchored to an Allen region show finer spatial specificity that would be lost by averaging the fluorescence over the region, as in an ROI analysis. B. The mean number of components recovered by LocaNMF. The bars are colored according to the cortical region they belong to, but note that there is one bar per subregion (e.g., primary somatosensory cortex, right-hand-side upper limb). The dashed line at 1 indicates the number of components found with an ROI analysis.


Fig 5.

LocaNMF can capture long range correlations that are difficult to analyze via SVD.

Top left: example de-localized spatial component recovered by SVD. This component places significant weight on multiple widely separated brain regions. The corresponding temporal component is shown in the lower left panel. In the same dataset, LocaNMF recovers two separate components, capturing activity in each of the two distant brain regions (top middle and right panels). LocaNMF also recovers two separate time courses (lower right), allowing us to quantify the correlation between the regions (R = 0.79).


Fig 6.

LocaNMF extracts localized spatial components that are consistent across two recording sessions recorded on different days (session lengths = 49 and 64 minutes; in each case the mouse was performing a visual discrimination task).

Example spatial components extracted from three different regions and two different sessions for one mouse expressing GCaMP6f, using A. SVD, and B. LocaNMF as in Algorithm 1. Note that LocaNMF components are much more strongly localized and reproducible across sessions. Cosine similarity of spatial components across two sessions in the same mouse using C. SVD after component matching using a greedy search, and D. LocaNMF. As in the simulations, note that LocaNMF components are much more consistent across sessions.


Fig 7.

LocaNMF applied to data from a mouse expressing jRGECO1a, with sessions of length 5 minutes.

A-D. Legend and conclusions similar to Fig 6A–6D.


Fig 8.

Correlation maps of the temporal components extracted by LocaNMF are consistent across sessions and animals.

A. Top canonical correlation coefficient between the temporal components of each pair of regions, shown for four different sessions of 49 to 64 minutes each, recorded across two mice. B. Example traces of two highly correlated regions. C. Violin plot of the mean squared difference between the correlation maps of the 20 different sessions across 10 mice; on average, within-mouse differences are smaller than across-mouse differences (one-tailed t-test, p = 0.0025).


Fig 9.

Brain areas show consistent activity around task-related behavior, and consistent ability to decode the direction of licking.

A. The LocaNMF components of the trial-averaged activity of the right-hand-side primary visual cortex (VISp) under left and right visual stimuli, and of the primary somatosensory area, upper limb area (SSp-ul), left- and right-hand sides, before and after the lever grab. Each color indicates a different component in the same region. Standard error of the mean is shaded. B. The top demixed principal component of the trial-averaged activity of the right-hand-side primary somatosensory area, mouth (SSp-m1:R), and the right-hand-side secondary motor cortex (MOs1:R), before and after the onset of a lick to the left or right spout (onset at time 0). Standard error of the mean is shaded. The activity around licking left or right in both regions is consistent across the two sessions. C. Decoding accuracy on held-out data for the direction of lick (left vs. right spout) using only the components in each shaded brain region. A logistic decoder was applied to the time courses from 0.67 s before to 0.33 s after the event (lick left or lick right).
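The logistic decoding in panel C amounts to binary classification of trials from the component time courses in a window around the lick. A minimal sketch with scikit-learn on synthetic trials; the trial counts, feature layout, and class separation below are made-up assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_trials, n_features = 200, 30  # toy: e.g. 10 components x 3 time bins

# Toy design matrix: flattened component time courses per trial,
# with a class-dependent mean so the labels are decodable
y = rng.integers(0, 2, n_trials)          # 0 = lick left, 1 = lick right
X = rng.standard_normal((n_trials, n_features)) + y[:, None] * 1.0

# Fit on a training split, report accuracy on held-out trials
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out decoding accuracy
```

Restricting the columns of X to the components of a single region, as in panel C, would give that region's decoding accuracy.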


Fig 10.

Decoding paw position from WFCI signals.

Top left: one frame of the DeepLabCut output, with decoded positions of the left and right paws in blue and red. Top right: relative decoding accuracy when the decoder was restricted to signals from a single brain region, expressed as a fraction of the R² obtained using signals from all brain regions. Area acronyms are provided in Table 2. Bottom: decoding of DLC components using data from all brain regions for one mouse. Link to corresponding real-time videos for a few trials here, with DLC labels in black, and decoded paw location in blue and red for the left and right paw, respectively.
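Decoding a continuous paw position from WFCI signals is a regression problem scored by R². A minimal linear-regression sketch on synthetic data, using ridge regression from scikit-learn (the paper's actual decoder and data dimensions are not specified here; everything below is an illustrative assumption):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_frames, n_signals = 500, 20  # toy sizes

# Toy WFCI-like signals and a paw coordinate linearly driven by them
X = rng.standard_normal((n_frames, n_signals))
w = rng.standard_normal(n_signals)
paw_x = X @ w + 0.1 * rng.standard_normal(n_frames)

# Fit on a training split, score R^2 on held-out frames
X_tr, X_te, y_tr, y_te = train_test_split(X, paw_x, test_size=0.25,
                                          random_state=0)
reg = Ridge(alpha=1.0).fit(X_tr, y_tr)
r2 = r2_score(y_te, reg.predict(X_te))
```

Refitting with only one region's signals and dividing the resulting R² by the all-regions R² would give the relative accuracy shown in the top-right panel.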


Table 2.

Acronyms of the regions in the Allen atlas.
