Motion prediction enables simulated MR-imaging of freely moving model organisms

Magnetic resonance tomography typically applies the Fourier transform to k-space signals repeatedly acquired from a frequency-encoded spatial region of interest, and therefore requires a stationary object during scanning. Any movement of the object results in phase errors in the recorded signal, leading to deformed images, ghosting, and artifacts, since the encoded information does not originate from the intended region of the object. However, if the type and magnitude of the movement are known instantaneously, the scanner or the reconstruction algorithm could be adjusted to compensate for it, directly allowing high-quality imaging of non-stationary objects. This would be an enormous boon to studies that tie cell metabolomics to spontaneous organism behaviour, eliminating the stress otherwise caused by restraining measures such as anesthesia or clamping. In the present theoretical study, we use a phantom of the animal model C. elegans to examine the feasibility of automatically predicting its movement and position, and to evaluate the impact of movement prediction, within a sufficiently long time horizon, on image reconstruction. For this purpose, we use automated image processing to annotate body parts in freely moving C. elegans and to predict their path of movement. We further introduce an MRI simulation platform based on bright-field videos of the moving worm, combined with a stack of high-resolution transmission electron microscope (TEM) slice images serving as virtual high-resolution phantoms. A phantom indicates the spatial distribution of signal-generating nuclei on a particular imaging slice. We show that adjusting the scan to the predicted movements strongly reduces distortions in the resulting image, opening the door to an implementation in a high-resolution NMR scanner.

We use the standard MATLAB conversion to grayscale, X_gray = c_1 R + c_2 G + c_3 B with c_1 = 0.2989, c_2 = 0.5870, c_3 = 0.1140, and keep only 1/z² of all pixels to speed up subsequent processing (every z-th line and every z-th column are kept, here z = 3, see Fig 3).
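This preprocessing step can be sketched in Python with NumPy; the function name is illustrative:

```python
import numpy as np

# Weights of the standard RGB-to-grayscale conversion used above
# (as in MATLAB's rgb2gray): c1 = 0.2989, c2 = 0.5870, c3 = 0.1140.
C1, C2, C3 = 0.2989, 0.5870, 0.1140

def to_gray_subsampled(rgb, z=3):
    """Convert an (m, n, 3) RGB frame to grayscale and keep every z-th
    row and column, retaining 1/z^2 of the pixels."""
    gray = C1 * rgb[..., 0] + C2 * rgb[..., 1] + C3 * rgb[..., 2]
    return gray[::z, ::z]
```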
To segment the worm, a simple threshold segmentation would be erroneous due to illumination effects, noise, and dark corners. To be independent of illumination effects, we generate a background image excluding the worm by calculating, at each pixel position, the 75th percentile along the time axis, X_{0.75} = (x_{i,j}^{0.75}). The obtained background estimate is smoothed by a closing step (structuring element B: disc, r = 3), X_bg = (X_{0.75} ⊕ B) ⊖ B, where the operators ⊕ and ⊖ respectively denote dilation and erosion. To obtain the worm segment, we subtract the background estimate from each image frame (Fig 3d). As a threshold for the segmentation of the worm, we choose the 1st percentile x^{0.01}[k] of all m · n pixels in the difference image at every time step (assuming that the worm covers an area of less than 1% of the image). We then discard all remaining objects except the largest one. As an outcome, we obtain image frames X_worm[k] of the segmented worm.
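A minimal sketch of this background-subtraction segmentation, assuming a grayscale frame stack with the worm darker than the background and using SciPy's morphology and labelling routines (the function and parameter names are illustrative):

```python
import numpy as np
from scipy import ndimage

def segment_worm(frames, q_bg=75, q_worm=1, r=3):
    """Background-subtraction segmentation following the steps above.

    frames : (T, m, n) grayscale stack (worm darker than background).
    Returns a (T, m, n) boolean stack of worm masks.
    """
    # Per-pixel 75th percentile along the time axis -> background without worm
    bg = np.percentile(frames, q_bg, axis=0)
    # Smooth the background estimate by a morphological closing with a disc
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disc = (yy ** 2 + xx ** 2) <= r ** 2
    bg = ndimage.grey_closing(bg, footprint=disc)
    masks = []
    for frame in frames:
        diff = frame - bg                  # worm pixels are strongly negative
        thr = np.percentile(diff, q_worm)  # worm covers < 1% of the image
        mask = diff <= thr
        labels, n = ndimage.label(mask)    # keep only the largest object
        if n > 1:
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            mask = labels == (np.argmax(sizes) + 1)
        masks.append(mask)
    return np.stack(masks)
```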
Within the worm segment, the direction of the worm needs to be identified. This is done by estimating the positions of the head and of the center of gravity (COG). For each time sample k, we calculate the COG over the pixels of the binary image X_worm[k]. The raw coordinates (x_{1,COG}^{raw}[k], x_{2,COG}^{raw}[k])^T are then smoothed (1st-order low-pass filtering over time) for the future predictions, since due to quantization the COG sometimes does not move between frames, which could make prediction algorithms fail. The resulting time series is denoted (x_{1,COG}[k], x_{2,COG}[k]).

To find the head, the moving parts of the worm need to be identified. Observations show that the head constantly moves from side to side, whereas the tail does not. We calculate an optical-flow image as the difference of two succeeding segmentations, X_flow[k] = X_worm[k] − X_worm[k−1]. Its integer values distinguish pixels the worm moved to (value 1) from those it came from (value −1). For each time sample, we then sum up the last 10 optical-flow images into X_mov[k] (this could also be done by simply subtracting images with ∆k = 10), discard all negative values (body moved away), and apply an opening (denoising and deletion of pixels along the complete body). The resulting values represent the optical flow of the head, and by applying the same COG approach as before, the position of the head is defined. The resulting time series is denoted (x_{1,head}^{raw}[k], x_{2,head}^{raw}[k]). If k = 1 (the video just started), we set the head estimate to the COG, and if no head estimate can be calculated (e.g. due to missing movement), we retain the last one. As a final step, we smooth (low-pass filtering over time) the estimated head position values; the resulting time series is denoted (x_{1,head}[k], x_{2,head}[k]).

The center of gravity is predicted by linear extrapolation (quadratic extrapolation does not work, as it amplifies noise) according to x_{COG,p}[k + ∆k] = x_{COG}[k] + ∆k · (x_{COG}[k] − x_{COG}[k−1]). The segmented worm is filtered with a Gaussian convolution filter (to remove noisy edges), and the segment is thinned to a line (skeletonization, see Fig 3g).
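The COG smoothing and linear-extrapolation prediction can be sketched as follows; the smoothing constant alpha and the horizon dk are illustrative values, not taken from the paper:

```python
import numpy as np

def predict_cog(cog_raw, alpha=0.2, dk=10):
    """Smooth the raw COG track (1st-order low-pass over time) and
    predict it dk samples ahead by linear extrapolation."""
    cog = np.empty_like(cog_raw, dtype=float)
    cog[0] = cog_raw[0]
    for k in range(1, len(cog_raw)):
        # 1st-order low-pass filter (exponential smoothing)
        cog[k] = (1.0 - alpha) * cog[k - 1] + alpha * cog_raw[k]
    # Linear extrapolation of the smoothed track; quadratic
    # extrapolation amplifies noise and is avoided above
    pred = cog[1:] + dk * (cog[1:] - cog[:-1])
    return cog, pred
```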
To predict an arbitrary position within the worm (which can later be identified by a mouse click), we introduce a normalized coordinate system along the skeleton line of the worm (0: head, 1: tail). A click point (x_{1,c}[k], x_{2,c}[k]) at time sample k can then be transformed into a percentage value s[k] ∈ [0, 1] defining the relative position of the click within the worm. For past time samples, the Cartesian coordinates of the same s-value are calculated, and the velocity of the click point along this line coordinate system is computed and filtered.
Assuming that the shape of the worm stays roughly the same, this velocity along the shape is used to predict where the click point will be located after ∆k time samples. The predicted points are filtered as well. The resulting time series is denoted (x_{1,c,p}[k], x_{2,c,p}[k]).
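The normalized skeleton coordinate system and the click-point mapping can be sketched as below; the helper names are illustrative, and the nearest-pixel mapping omits sub-pixel projection:

```python
import numpy as np

def skeleton_arclength(skel_pts):
    """Normalized arc length s in [0, 1] along an ordered skeleton
    polyline (head first, tail last)."""
    d = np.linalg.norm(np.diff(skel_pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(d)])
    return s / s[-1]

def click_to_s(skel_pts, click):
    """Map a click point to its relative position s along the skeleton
    (nearest skeleton point)."""
    i = np.argmin(np.linalg.norm(skel_pts - click, axis=1))
    return skeleton_arclength(skel_pts)[i]

def point_at(skel_pts, s_query):
    """Interpolate the Cartesian position of a relative coordinate s.
    For prediction, evaluate this at s[k] + dk * v_s[k], the position
    advanced by the filtered velocity along the shape."""
    s = skeleton_arclength(skel_pts)
    return np.array([np.interp(s_query, s, skel_pts[:, 0]),
                     np.interp(s_query, s, skel_pts[:, 1])])
```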

Similarity measure
We compare the simulation outcome to the true image by the structural similarity measure that was introduced in [1]. This measure assesses the similarity between images X and Y based on the luminance l, the contrast c, and the structure st. Using µ as the brightness mean, σ² as the brightness variance, and σ_xy as the brightness covariance, similarity is calculated according to

S(X, Y) = l(X, Y)^α · c(X, Y)^β · st(X, Y)^γ,

with

l(X, Y) = (2 µ_x µ_y + c_1) / (µ_x² + µ_y² + c_1), c(X, Y) = (2 σ_x σ_y + c_2) / (σ_x² + σ_y² + c_2), st(X, Y) = (σ_xy + c_3) / (σ_x σ_y + c_3),

where α, β, and γ are the weights of the luminance, contrast, and structure terms, respectively. Within this paper we assume an equal effect of the three terms, thus α = β = γ = 1. c_1, c_2, and c_3 are small real non-negative constants used to stabilize the division when the denominator is small; their default values are c_1 = (0.01 L)², c_2 = (0.03 L)², and c_3 = c_2/2, with L being the dynamic range of the images. S(X, Y) takes values between 0 and 1, where 0 corresponds to no similarity, while 1 represents complete similarity between the images.

Fig. S1 shows the statistical assessment of the effect of the prediction horizon on the quality of the MR imaging. The data in this figure are taken from the simulation results for four slices along the worm and from eight videos of different worms. The figure shows that the prediction algorithm works well for all worms, and the results exhibit a high degree of resemblance. Moreover, and as expected, the results demonstrate how the prediction accuracy, and thus the MRI quality, decays with increasing prediction horizon.
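For reference, with α = β = γ = 1 the similarity measure can be sketched as a single global computation over the whole images (the common variant of [1] instead averages local windows; function and parameter names are illustrative):

```python
import numpy as np

def ssim(x, y, L=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    """Global structural similarity between two images x and y,
    with L the dynamic range of the images."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    c3 = c2 / 2.0
    mx, my = x.mean(), y.mean()                 # brightness means
    vx, vy = x.var(), y.var()                   # brightness variances
    sx, sy = np.sqrt(vx), np.sqrt(vy)
    sxy = ((x - mx) * (y - my)).mean()          # brightness covariance
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)  # luminance
    c = (2 * sx * sy + c2) / (vx + vy + c2)            # contrast
    st = (sxy + c3) / (sx * sy + c3)                   # structure
    return (l ** alpha) * (c ** beta) * (st ** gamma)
```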