
EJH, OD, MAS, and RS conceived and designed the experiments. EJH performed the experiments. EJH, OD, and MAS analyzed the data. EJH and RS contributed reagents/materials/analysis tools. EJH, OD, and RS wrote the paper.

The authors have declared that no conflicts of interest exist.

Adaptability of reaching movements depends on a computation in the brain that transforms sensory cues, such as those that indicate the position and velocity of the arm, into motor commands. Theoretical considerations show that the encoding properties of the neural elements implementing this transformation dictate how errors should generalize from one limb position and velocity to another. To estimate how sensory cues are encoded by these neural elements, we designed experiments that quantified spatial generalization in environments where forces depended on both position and velocity of the limb. The patterns of error generalization suggest that the neural elements that compute the transformation encode limb position and velocity in intrinsic coordinates via a gain-field; i.e., the elements have directionally dependent tuning that is modulated monotonically with limb position. The gain-field encoding makes the counterintuitive prediction of hypergeneralization: there should be growing extrapolation beyond the trained workspace. Furthermore, nonmonotonic force patterns should be more difficult to learn than monotonic ones. We confirmed these predictions experimentally.

A computational model offers a unifying explanation of seemingly disparate findings from human reaching experiments.

Behavioral (

Neurophysiological experiments have suggested that the motor cortex may be one of the crucial components of the neural system that learns internal models of limb dynamics (

Computational models with elements reflecting some of the cell properties found in neurophysiological experiments have attempted to explain how patterns of generalization during adaptation may be related to the neural representation. These computational models hypothesize that an internal model is composed of “elements,” or bases, each encoding only part of sensory space, and that population codes combine these elements when computing sensorimotor transformations (

We performed a set of experiments that examined how the neural elements might simultaneously encode limb position and velocity. We show that movement errors generalize with a pattern that suggests a linear or monotonic encoding of limb position space and that this encoding is multiplicatively modulated by an encoding of movement direction. The gain-field encoding of limb position and velocity that we infer from the generalization patterns is strikingly similar to neural encoding of these parameters in the motor cortex (

(A) The origin of the center movements is aligned with the subject's body midline, and the origins of the left and right movements are symmetrically positioned with a given separation distance (

(B) The average trajectories in three positions—left, center, and right (one subject per column)—for the first third of the movements from the first field set (trials 1–28). Dashed lines are movements during which the force field is on, and dotted lines are catch trials. Separation distances between neighboring movements (

(C) The average trajectories for the first third of the fifth field set (trials 337–364). The task is much easier to learn when the three movements are spatially separated from each other.

The robot could apply arbitrary patterns of force to the hand. We programmed it so that the movements were perturbed by a viscous curl-field. In a viscous curl-field, the force is proportional to speed and perpendicular to velocity. However, our viscous curl-fields also depended on position. During movements from the left starting position, the robot perturbed the hand with a clockwise curl-field (
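In code, such a position-gated viscous curl-field can be sketched as a scaled 90° rotation of the hand velocity; the gain b and the sign convention below are illustrative assumptions, not the values used in the experiment:

```python
import numpy as np

def curl_force(v, b=13.0, clockwise=True):
    """Viscous curl-field: force proportional to speed and perpendicular
    to the hand velocity v (a 2-vector, m/s). The gain b (N*s/m) and the
    sign convention are illustrative, not the experimental values."""
    rot = np.array([[0.0, 1.0], [-1.0, 0.0]])  # 90-degree rotation of v
    f = b * (rot @ np.asarray(v, float))
    return f if clockwise else -f
```

In this sketch, movements from one starting position would use the clockwise version, and movements from the opposite position would flip the sign, giving the position dependence described above.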

We found a limited ability to adapt to such position-dependent viscous curl-fields.

As a measure of error, we used displacement perpendicular to target direction at 250 ms into the movement (perpendicular error [PE]). The effect of separation distance on error was reliable (p < 10^{−8} for the distance factor).

(A) PE averaged across six subjects of Group 1 (

(B) Errors were averaged across six subjects of Group 4 (

(C) Average learning index (Equation 1) across groups. Learning index is plotted against the separation distance between movements. Thin lines show the first adaptation set and thick lines show the last adaptation set.

(D) Generalization index (Equation 2) against spatial distance between movements.

We were struck by another difference between the groups (p < 10^{−8} for the distance factor).

The above results demonstrate that when different forces are to be associated with two movements that are in the same direction but at different spatial locations, generalization decreases with increased distance between them. On the other hand, earlier results had found that when movements to various directions are learned at a single location, learning generalizes to other arm locations very far away (

To reconcile these two apparently contradictory findings, we performed a simulation of the internal model in which the force field was represented as a population code via a weighted sum of basis elements. Each element was sensitive to both the position and velocity of the arm. The crucial question was how each element should code limb position and velocity to best account for all the available data on generalization. Previous work had shown that velocity encoding was consistent with Gaussian-like functions (

One way to represent limb position and velocity is with basis elements that encode each variable separately and then add them. However, additive encoding cannot adapt to fields that are nonlinear functions of position and velocity, e.g., a field in which force is proportional to the product of the two.
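This limitation can be illustrated numerically. With additive bases, the best least-squares fit to a field that multiplies position and velocity leaves a large residual, while adding a multiplicative (gain-field) term fits it exactly. A toy sketch (all units and ranges arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)   # limb position (arbitrary units)
v = rng.uniform(-1, 1, 500)   # limb velocity (arbitrary units)
f = x * v                     # a field that multiplies position and velocity

# Additive encoding: f_hat = w1*g(x) + w2*h(v); here g and h are the identity
A_add = np.column_stack([x, v, np.ones_like(x)])
res_add = f - A_add @ np.linalg.lstsq(A_add, f, rcond=None)[0]

# Gain-field encoding: position linearly gates the velocity term (adds an x*v column)
A_gain = np.column_stack([x * v, x, v, np.ones_like(x)])
res_gain = f - A_gain @ np.linalg.lstsq(A_gain, f, rcond=None)[0]
```

The additive fit leaves a residual almost as large as the field itself, whereas the gain-field fit is exact up to numerical precision.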

(A) A polar plot of activation pattern for a typical basis function in the model. The polar plot at the center represents activation for an eight-direction center-out reaching task (targets at 10 cm). Starting point of each movement is the center of the polar plot. The shaded circle represents the activation during a center-hold period and the polygon represents average activation during the movement period. The eight polar plots on the periphery represent activation for eight different starting positions. Each starting position corresponds to the location of the center of each polar plot. The preferred positional gradient of this particular basis function has a rightward direction. The preferred velocity is an elbow flexion at 62°/s.

(B) A state diagram of weights in a simple system with two basis functions. The trajectory from the origin to (½

(C) The bases were used in an adaptive controller to learn the task in

(D) Simulated movement errors in an experiment where spatial distance was the same as in Group 4 in

(E) Learning index of the last target set against spatial distance. Dotted lines are from the simulation and thick solid lines are from subjects; correlation coefficient is 0.96. Note that thick solid lines are the same lines as in

(F) Generalization index in the last target set against spatial distance. Dotted lines are from the simulation and solid lines are from subjects; correlation coefficient is 0.99.

We found that when a network learned to represent the force field via a gain-field encoding of limb position and velocity, it produced movements that matched the generalization pattern both in the current experiment and in earlier reports.

We used the data in

We chose to use hand position as the relevant position variable and considered a simple system with two gain-field bases, g_{1}(x) and g_{2}(x). The estimated force is w_{1}·g_{1}(x) + w_{2}·g_{2}(x), where w_{1} and w_{2} are weights for g_{1} and g_{2}, respectively (refer to the state diagram). The initial weights were w_{1} = w_{2} = 0. After movements on the right, w_{1} changed more than w_{2} because g_{1} was bigger than g_{2} for movements on the right, and after movements on the left the opposite held. The ratio of the weight updates (Δw_{2} / Δw_{1}) is equal to the ratio g_{2}(x) / g_{1}(x).
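The weight dynamics described above can be sketched with a standard LMS (delta-rule) update, in which each weight changes in proportion to its basis activation, so that the ratio of updates equals the ratio of activations at the trained position. The gain, learning rate, and positions below are illustrative:

```python
import numpy as np

def g1(x, k=0.1):
    return 1.0 + k * x   # gain rises toward the right (illustrative gain k)

def g2(x, k=0.1):
    return 1.0 - k * x   # gain rises toward the left

def lms_update(w, x, target, lr=0.1):
    """One LMS step: each weight moves in proportion to its basis activation."""
    g = np.array([g1(x), g2(x)])
    err = target - w @ g
    return w + lr * err * g

w0 = np.zeros(2)                          # initial weights are zero
w1 = lms_update(w0, x=5.0, target=1.0)    # a perturbed movement on the right
dw = w1 - w0                              # update vector in weight space
```

After a rightward movement, the update vector is tilted toward w_{1}, exactly in the ratio of the two gains at that position.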

The trial-by-trial variation in the center movements is also clarified by examining this state-space diagram; i.e., any deviation from the middle dotted line means a nonzero force expectation for the center movements and larger deviations correspond to larger errors in the center movements. Thus, update vectors with a slope near −1 lead to both faster learning and smaller variance in the middle movements. Therefore, the slope of the update vectors can be estimated from both the rate of learning and the variability of the center movements.

We adjusted

We found that the gain-field model could also account for a number of other previously published results. The experiments we focus on here are adapting to a field that depended only on limb velocity and not position (

Consider two bases whose gains vary linearly and oppositely with position, g_{1}(x) = (1 + kx)·G(v) and g_{2}(x) = (1 − kx)·G(v), trained at a single position x = d in a field that depends only on velocity with magnitude B. The minimum-norm weights that reproduce the field at the trained position are w_{1} = B(1 + kd)/(2k^{2}d^{2} + 2) and w_{2} = B(1 − kd)/(2k^{2}d^{2} + 2). Thus, the force function approximated by these weights is Bk^{2}dx/(k^{2}d^{2} + 1) + B/(k^{2}d^{2} + 1). Therefore, the expected force is again a linear function of position, with slope Bk^{2}d/(k^{2}d^{2} + 1). This slope decreases as the gain k decreases, so a shallow position gain predicts nearly complete transfer of a velocity-dependent field across positions.
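This two-basis analysis can be checked numerically: the minimum-norm weights reproduce the trained force exactly, and the predicted force varies across positions with slope Bk^{2}d/(k^{2}d^{2} + 1). A sketch with illustrative values for the field gain, position gain, and training position:

```python
import numpy as np

B, k, d = 1.0, 0.1, 5.0                    # illustrative field gain, position gain, training position
a = np.array([1 + k * d, 1 - k * d])       # activations of the two bases at the trained position
w = B * a / float(a @ a)                   # minimum-norm weights satisfying w @ a = B

def expected_force(x):
    """Force predicted by the learned weights at position x."""
    return float(w @ np.array([1 + k * x, 1 - k * x]))
```

The predicted force is exact at the trained position and extrapolates linearly elsewhere.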

In the Shadmehr and Moussavi experiment (2000), subjects trained in a clockwise viscous curl-field (

(A)

(B) Simulation results in the same format as (A); correlation coefficient to subject data is 0.89.

(C) A spring-like force field, in which force is a linear function of hand position.

(D) Hand path trajectories in field trials from the first set in the spring-like force field. Each set consists of 192 center-out movements. Targets are presented at eight positions on a circle of 10 cm radius in pseudorandom order.

(E) Hand path trajectories in field trials from the fifth set of training.

(F) Hand path trajectories of catch trials from the fifth set.

It is also known that subjects can adapt to position-dependent spring-like force fields (

Our hypothesis regarding adaptation with a basis that encodes position and velocity as a linear gain-field has two interesting consequences: (1) a change to the pattern of forces can substantially increase the difficulty of a task; and (2) there should be hypergeneralization; i.e., forces expected in an untrained part of the workspace may be larger than the ones experienced at a trained location.
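The first prediction can be illustrated with a toy computation: any combination of bases whose gains are linear in position produces a force profile that is affine in position, so a monotonic arrangement of forces across three locations is fit exactly while a non-monotonic rearrangement of the same forces leaves a large residual. The gain and positions below are illustrative:

```python
import numpy as np

k = 0.1
x = np.array([-5.0, 0.0, 5.0])               # left, center, right training positions (cm)
A = np.column_stack([1 + k * x, 1 - k * x])  # linear gain-field bases: any weights give an affine force profile

def residual(targets):
    """Largest unexplained force after the best least-squares fit of the bases."""
    w, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return float(np.abs(targets - A @ w).max())

monotonic = np.array([-1.0, 0.0, 1.0])       # forces vary monotonically with position
nonmonotonic = np.array([1.0, -1.0, 1.0])    # same forces, rearranged non-monotonically
```

The monotonic pattern is representable exactly; the rearranged pattern is not, which is the sense in which it should be harder to learn.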

(A) A field where forces are linearly dependent on both limb position and velocity.

(B) A field where forces are linearly dependent on limb velocity but nonlinearly dependent on limb position. Gain-field encoding predicts that the field in (B) will be harder to learn than one in (A).

(C) Learning index of subjects (

(D) Gain-field encoding predicts hypergeneralization. The figure shows the movements and their associated force fields during training and test sets.

(E) Performance of subjects (

We tested these predictions in two separate groups of subjects. Six subjects trained in the force pattern of

An interesting property of systems that learn with gain-fields is that in some conditions, local adaptation should result in an increasing extrapolation, i.e., hypergeneralization. Consider a situation in which, during the training sets, subjects make movements in the center as well as at 5 cm to the right (
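Hypergeneralization can be sketched by solving for the two weights that reproduce two different forces trained at two positions; the learned force then keeps growing beyond the outer training location. The forces and gain below are hypothetical:

```python
import numpy as np

k = 0.1                      # illustrative position gain
F = {0.0: 0.5, 5.0: 1.0}     # hypothetical trained forces at the two training positions (cm)

# Solve exactly for the two weights from the two training constraints
A = np.array([[1 + k * x, 1 - k * x] for x in F])
w = np.linalg.solve(A, np.array(list(F.values())))

def f_hat(x):
    """Force predicted by the learned gain-field weights at position x."""
    return float(w @ np.array([1 + k * x, 1 - k * x]))
```

Because the learned force is affine in position, the prediction at an untrained position beyond the outer training location exceeds every force actually experienced during training.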

One concern is the weak learning during the training sets. However, the average learning index for the center movements in the last set is 0.46, and this is significantly different from zero (

When people reach to various directions in a small workspace, velocity- or acceleration-dependent forces that they experience are generalized broadly to other arm positions as far as 80 cm away (

We hypothesized that adaptation to arm dynamics was due to an internal model that computed a map from sensory variables (limb position and velocity) to motor commands (force or torque). These elements were sensitive to both the position and velocity of the arm. The main question was how these variables were encoded. We first performed behavioral experiments to characterize the limits of adaptation to position-dependent forces. This allowed us to quantify the sensitivity of position coding. We found that generalization to neighboring movements decayed gradually with separation distance, implying a very broad position encoding. We found that a Gaussian representation would require a full width at half-maximum of approximately 80 cm to explain our results. Since a Gaussian this broad would be indistinguishable from a monotonic function, we used a linear function instead. A linear basis is a simple monotonic encoding of position space. We combined position and velocity by making position a linear gain-field on the directional sensitivity.
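The equivalence of such a broad Gaussian and a monotonic function can be checked directly: over a ±15 cm workspace (an assumed size; the Gaussian center below is also illustrative), a Gaussian with an 80 cm full width at half-maximum deviates from its best-fit line by only a few percent of its range:

```python
import numpy as np

fwhm = 80.0                                   # cm, the full width at half-maximum from the text
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
x = np.linspace(-15.0, 15.0, 301)             # assumed workspace extent, cm
g = np.exp(-(x - 40.0)**2 / (2 * sigma**2))   # Gaussian centered outside the workspace (center illustrative)
coef = np.polyfit(x, g, 1)                    # best-fit line over the workspace
resid = g - np.polyval(coef, x)
rel_dev = np.abs(resid).max() / (g.max() - g.min())
```

Within the workspace the curve is monotonic and nearly linear, which is why a linear basis was an adequate stand-in.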

Using a gain-field basis to simulate the learning of arm movements, we found that the parameters that fit our pattern of decaying generalization could also account for a number of previously published results on generalization in position space. These results were the generalization of learning over a large workspace and the ability to learn stiffness fields. Additionally, we tested two behavioral predictions of our model to further test the hypothesis of gain-field coding. Theory predicted that a simple rearrangement of position-dependent forces would change a task from easily learnable to very difficult. It also predicted that in a two-point adaptation paradigm, expected forces would be extrapolated so that larger forces would be expected outside the trained workspace. The behavioral results agreed with these theoretical predictions. Thus, our model used a multiplicative interaction between coding of limb position and velocity to explain behavioral data during learning dynamics of reaching movements and successfully predicted data from a variety of experimental paradigms.

However, if the internal models for dynamics are represented as a population code with gain-fields, these two factors are easily explained by the proximity of the population code for the close distance and the monotonic change of population code with the starting positions.

Multiplicative interaction of two independent variables in cell encoding is called gain-field coding. Although we described our gain-field as a velocity-dependent signal that is modulated by limb position, it can also be described as a position-dependent signal that gets modulated by limb velocity. Gain-fields originally described the tuning properties of cells that are responsive to both visual stimuli and eye position in area 7a of the parietal cortex. The receptive field of these cells remains retinotopic while the gain of the retinotopic curve is modulated linearly by eye position (

There are other ways to form a basis set. A prominent example is an additive basis set. In an additive set, a function of position is added to a function of movement direction. Some neurophysiological experiments have used this kind of model, rather than a multiplicative model, to relate neural discharge in the motor cortex and cerebellum to limb position and velocity (

Although our computational model was derived from psychophysical experiments, a number of neurophysiological findings seem to be consistent with properties of our basis elements.

First, neurophysiological recordings support our monotonic position encoding in joint angle coordinates. Human muscle spindle afferents, both individually and as a population, represent static joint position monotonically (

Another distinct property of our basis elements is that their activity is modulated by both position and velocity.

Lastly, the output of our basis elements is associated with a preferred joint torque vector. With adaptation, these torques rotate.

Although we used a linear encoding of limb position because of its mathematical simplicity, data based on three positions are not sufficient to distinguish a linear from a nonlinear basis function. Therefore, at this point, our basis functions are best viewed as having a monotonic property. Monotonic gain-field coding of position and velocity makes an intriguing prediction regarding behavior. When two different forces are experienced at two different arm positions (as in

Another issue is that our findings may be the result of limits of visual acuity; i.e., a decreased ability to distinguish the starting positions might cause a spurious finding of position-dependent coding. One way we addressed this concern was to use color-coding to make sure subjects could distinguish the left, center, and right targets. However, it is possible that the system is not capable of using color cues while it is capable of using spatial cues. Note that this interpretation implies that, at large separations, position serves as an explicit cue triggering separate internal models. That interpretation is not consistent with the earlier results in which generalization across large distances seemed to imply that position is represented continuously. If position is a discrete cue for building different internal models, then it is not clear how one could learn a force field that depends continuously on position, as in spring-like fields. It is also not clear why learning ability would decrease in the nonlinear force field pattern. Thus, although it is possible that some effects are due to the explicit cues, this cannot entirely explain our findings without a continuous encoding of position.

In sum, we report that generalization properties of learning arm dynamics can be explained using basis elements that encode limb position and velocity in intrinsic coordinates using a multiplicative, gain-field interaction. Hand position seems to be encoded monotonically and velocity seems to be encoded using Gaussian elements. The result is a gain-field where position monotonically modulates the gain of velocity tuning. We predict that this encoding will be reflected in the activity of neurons responsible for adaptation to dynamics of reaching movements.

Thirty-three healthy individuals (16 women and 17 men) participated in this study. The average age was 27.5 y (range: 21–50 y). The study protocol was approved by the Johns Hopkins University School of Medicine Institutional Review Board and all subjects signed a consent form.

As a measure of error, we report the displacement perpendicular to target direction at 250 ms into the movement (PE). However, we also tried other measures, such as perpendicular displacement at the maximum tangential velocity, maximum perpendicular displacement, and averaged perpendicular displacement during early phase of movement. Results that we present here are consistent among all these measures of error, and we report only PE.
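The PE measure can be sketched as follows; the interpolation and sign convention are our assumptions, with hand positions taken relative to the movement start:

```python
import numpy as np

def perpendicular_error(t, xy, target_dir, t_probe=0.25):
    """Displacement perpendicular to the target direction at t_probe seconds
    into the movement (PE). t: sample times (s); xy: N x 2 hand positions
    relative to the start; target_dir: vector toward the target.
    Positive values mean displacement counterclockwise of the target line."""
    px = np.interp(t_probe, t, xy[:, 0])
    py = np.interp(t_probe, t, xy[:, 1])
    d = np.asarray(target_dir, float)
    d = d / np.linalg.norm(d)
    return float(d[0] * py - d[1] * px)   # z-component of the cross product d x p
```

For example, a forward movement with a steady rightward drift yields a negative PE under this convention.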

During adaptation, trajectories in field trials become straighter, while the trajectories of catch trials become approximately a mirror image of those in earlier field movements (

The internal model may be computed as a population code via a set of basis elements, each encoding some aspect of the limb's state; the force imposed by the environment is approximated as a weighted sum of the bases, f_{env} ≈ Σ_{i} w_{i}·g_{i}.

We hypothesized that the bases have a receptive field in terms of the arm's velocity (in joint space) and that the discharge at this receptive field is modulated monotonically as a function of the arm's position; i.e., the elements represent the arm's position and velocity as a gain-field:

Typical output of this basis for various limb positions and movement directions is plotted in the figure above. Each element's velocity tuning is a Gaussian of the desired joint velocity (q̇_{d}, a 2 × 1 vector composed of shoulder and elbow joint velocities) centered on the element's preferred velocity (q̇_{i}).

To fit experimental data, we varied only two parameters of the model: the slope (m^{−1}) and the constant of the linear position gain. The fitted slope, together with a constant of 1.3, gave a good fit of generalization as a function of separation distance. To simulate human arm reaching, we used a model of the arm's dynamics that described the physics of our experimental setup (
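A basis element of this form can be sketched as a Gaussian receptive field over joint velocity whose gain varies linearly with position along a preferred gradient; the constant 1.3 follows the text, while the width and slope values here are illustrative:

```python
import numpy as np

def basis_output(q, qdot, pos_grad, qdot_pref, sigma=1.0, slope=0.5, const=1.3):
    """Gain-field basis element: Gaussian tuning over joint velocity (qdot),
    centered on a preferred velocity, multiplied by a gain that varies
    linearly with joint position (q) along a preferred positional gradient.
    const = 1.3 follows the text; sigma and slope are illustrative."""
    q = np.asarray(q, float)
    qdot = np.asarray(qdot, float)
    tuning = np.exp(-np.sum((qdot - np.asarray(qdot_pref, float))**2) / (2 * sigma**2))
    gain = const + slope * float(q @ np.asarray(pos_grad, float))
    return gain * tuning
```

At the preferred velocity the output reduces to the position gain alone, and moving the arm along the preferred positional gradient scales the whole velocity tuning curve up or down, which is the defining property of a gain-field.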


OD was supported by a postdoctoral fellowship from the National Institutes of Health and by a Distinguished Postdoctoral Fellowship from the Johns Hopkins University Biomedical Engineering Department. This work was supported by grants from the National Institute of Neurological Disorders and Stroke (NS-37422 and NS-46033).

Abbreviations: DOF, degree of freedom; PE, perpendicular error; SEM, standard error of the mean.