The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

An important question in the motor control literature is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that, among all possible movements, the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits the experimental data. A second approach (generally considered the more difficult one) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations of optimality principles for arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory covering both the direct and the inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, in an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. Accordingly, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations.
Such types of optimality criteria may be applied to a large range of biological movements.


Proof of Theorem 4
The proof is based upon Thom's transversality theorem; we will therefore carry out the computations in spaces of jets. For a positive integer $m$ and a pair $(X, u) \in \mathbb{R}^{2n} \times \mathbb{R}^n$, we denote by $J^m_{(X,u)}$ the space of $m$-jets at $(X, u)$ of functions in $C^\infty(\mathbb{R}^{3n}, \mathbb{R})$.
Fix now a point $X_0 \in \mathbb{R}^{2n}$ which is not an equilibrium of the vector field $F$. We define $A^m(X_0) \subset J^m_{(X_0,0)}$ as the set of $m$-jets of functions $f \in C^\infty(\mathbb{R}^{3n}, \mathbb{R})$ such that the trajectory of Equation 11 issued from $X_0$ and associated with the control $u = 0$ is locally minimizing for the optimal control problem $(P_f)$.
Lemma 1. $A^m(X_0)$ is contained in a vector subspace of $J^m_{(X_0,0)}$ of codimension $n(m-2)$.
Proof. Without loss of generality we assume $X_0 = 0$. Let $j^m_0 f$ be an $m$-jet in $A^m(0)$. By definition of $A^m(0)$, the trajectory $X(\cdot)$ of $F$ issued from $0$ minimizes the problem $(P_f)$ on an interval $I = [0, s]$. Thus $X(\cdot)$ satisfies Pontryagin's Maximum Principle on $I$: there exist a smooth function $P = (p, q) : I \to \mathbb{R}^n \times \mathbb{R}^n$ (the smoothness of $P$ results from that of $X$) and $\lambda \ge 0$ such that, for all $t \in I$, $(P(t), \lambda) \neq 0$, where the Hamiltonian is $H(X, P, \lambda, u) = p^T y + q^T \varphi(X, u) - \lambda f(X, u)$.
Note that, since $0 \in \operatorname{int} U$, property (P2) implies $\frac{\partial H}{\partial u}(X(t), P(t), \lambda, 0) = 0$. It follows that if $\lambda = 0$, then $q \equiv 0$; from $\dot q \equiv 0$ and (P1) we then deduce $p \equiv 0$, hence $(P, \lambda) \equiv 0$, which is impossible. Thus $\lambda$ is positive, and a standard homogeneity argument allows normalizing it to $\lambda = 1$. Finally, from (P1) and (P2) respectively, Equations 1 and 2 hold on the interval $I$. Now, recall that on $I$ the dynamics is $\dot X = F(X)$. Since $X_0 = 0$ is not an equilibrium point of $F$, we may assume, up to a local change of the coordinates $X = (X_1, \ldots, X_{2n})$ on $\mathbb{R}^{2n}$, that $F = \frac{\partial}{\partial X_1}$. Differentiating Equation 1 with respect to time leads to Equation 3, in which we omit the evaluation at $(X, 0)$. On the other hand, we can also obtain $\dot q^T$ and $\ddot q^T$ by differentiation of Equation 2.
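The Hamiltonian relations used above can be written out in display form. The following is a schematic reconstruction from the surrounding definitions, with the state split $X = (x, y)$, $\dot x = y$, $\dot y = \varphi(X, u)$; the precise statements of (P1) and (P2) are those of the main text:

```latex
% Adjoint system and stationarity condition of Pontryagin's Maximum Principle
% for H(X, P, \lambda, u) = p^T y + q^T \varphi(X, u) - \lambda f(X, u):
\dot p^{\,T} = -\frac{\partial H}{\partial x}, \qquad
\dot q^{\,T} = -\frac{\partial H}{\partial y}, \qquad
\frac{\partial H}{\partial u}\bigl(X(t), P(t), \lambda, 0\bigr)
  = q^{T}\,\frac{\partial \varphi}{\partial u}(X, 0)
    - \lambda\,\frac{\partial f}{\partial u}(X, 0) = 0 .
```

In particular, when $\lambda = 0$ the stationarity condition forces $q^{T}\,\partial\varphi/\partial u(X,0) = 0$, which is the starting point of the contradiction argument above.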

Substituting these expressions and Equation 2 into Equation 3, we eliminate $q^T$, $\dot q^T$, and $\ddot q^T$ and obtain a relation where, for every $X$, $R_X$ is a linear mapping and $X \mapsto R_X$ is smooth. Successive differentiations and evaluation of the derivatives at $t = 0$ (recall that $X(0) = 0$) lead to a system of equations in which each $R_k$ is a linear mapping. Thus we have proved $A^m(0) \subset \ker \psi$, where $\psi : J^m_0 \to \mathbb{R}^{n(m-2)}$ is the linear mapping which associates to an $m$-jet $j^m_0 f$ the $n(m-2)$-tuple formed by the left-hand sides of this system. This linear mapping being obviously surjective, the conclusion follows.
Theorem 4 follows from Lemma 1 combined with the classical Thom transversality theorem.
Remark 1. In the computations in the jet space, only $f(X, 0)$, $\frac{\partial f}{\partial u}(X, 0)$, and their derivatives with respect to $X$ appear. Thus the statement of Theorem 4 still holds if we replace $C^\infty(\mathbb{R}^{3n}, \mathbb{R})$ by the set of polynomial functions of $u$ with coefficients in $C^\infty(\mathbb{R}^{2n}, \mathbb{R})$, or, even better, by the space of functions $f(X, u)$ differentiable with respect to $u$ at $u = 0$ (and such that $f(X, 0)$ and $\frac{\partial f}{\partial u}(X, 0)$ are smooth). On the other hand, since the set $O$ is open, it is also possible to replace $C^\infty(\mathbb{R}^{3n}, \mathbb{R})$ by any of its open subsets, for instance by the set of functions in $C^\infty(\mathbb{R}^{3n}, \mathbb{R})$ that are strictly convex with respect to $u$.

Proof of Theorem 5
We consider a control system where the control acts linearly on the acceleration, with as many inputs as degrees of freedom, where:
• the state $x$ belongs to $\mathbb{R}^n$ (or to an $n$-dimensional differentiable manifold);
• the control $u \in \mathbb{R}^n$ is bounded: $u_i^- \le u_i \le u_i^+$ for $i = 1, \ldots, n$.
Setting $X = (x, y)$ with $y = \dot x$, we rewrite the system as
$$\dot X = F(X) + \sum_{i=1}^n u_i\, b_i(X), \qquad (4)$$
where $F$ and $b_1, \ldots, b_n$ are vector fields on $\mathbb{R}^{2n}$. An equilibrium of this system is a stationary trajectory $X \equiv X_0$, associated with a constant control $u \equiv u_0$ satisfying $F(X_0) + \sum_i u_{0,i}\, b_i(X_0) = 0$. Fix a "source-point" $X_0 \in \mathbb{R}^{2n}$, a "target-point" $X_1 \in \mathbb{R}^{2n}$, and a time $T > 0$. Given a function $f$ on $\mathbb{R}^{3n}$, we define the optimal control problem $(P_f)$: minimize $\int_0^T f(X(t), u(t))\, dt$ among the trajectories of Equation 4 joining $X_0$ to $X_1$.
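The control-affine structure above can be illustrated numerically. A minimal sketch, assuming a hypothetical 1-dof double integrator ($F(X) = (y, 0)$, $b_1(X) = (0, 1)$) driven by an illustrative bang-off-bang control; this is not the paper's optimal solution, only the shape of trajectory the Inactivation Principle predicts:

```python
import numpy as np

# Minimal sketch of the control-affine dynamics dX/dt = F(X) + sum_i u_i b_i(X)
# for a hypothetical 1-dof double integrator: x' = y, y' = u, i.e.
# F(X) = (y, 0) and b_1(X) = (0, 1). Forward-Euler integration.
def integrate(u_of_t, X0, T, steps=10000):
    dt = T / steps
    X = np.array(X0, dtype=float)
    for k in range(steps):
        x, y = X
        drift = np.array([y, 0.0])       # F(X)
        b1 = np.array([0.0, 1.0])        # control vector field
        X = X + dt * (drift + u_of_t(k * dt) * b1)
    return X

# Illustrative bang-off-bang control: accelerate, coast (control inactive
# around peak velocity), then decelerate.
u = lambda t: 1.0 if t < 0.4 else (0.0 if t < 0.6 else -1.0)
X_final = integrate(u, (0.0, 0.0), 1.0)
print(X_final)  # ends near x = 0.24 with velocity y near 0
```

The coasting phase, during which the control vanishes, is the single-joint analogue of the simultaneous inactivation of opposing muscles discussed in the abstract.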
We restrict to functions $f(X, u)$ in $SC$, the set of $C^\infty$ functions from $\mathbb{R}^{2n} \times \mathbb{R}^n$ to $\mathbb{R}$ which are strictly convex with respect to $u$ (in the strong sense that the Hessian $\frac{\partial^2 f}{\partial u^2}$ is positive definite). The precise result we show is stronger than Theorem 5: it shows that the bad subset is very small (it has infinite codimension).
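Membership in $SC$ can be checked numerically on examples. A hedged sketch, assuming a hypothetical cost that is quadratic in $u$, so that the $u$-Hessian is a constant matrix $Q$:

```python
import numpy as np

# For a hypothetical cost f(X, u) = f0(X) + 0.5 * u @ Q @ u, strict convexity
# in u (in the strong sense) means the constant Hessian Q is positive definite.
def is_strictly_convex_in_u(Q):
    """True iff the symmetric u-Hessian Q is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(Q) > 0))

Q_good = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
Q_bad = np.array([[1.0, 2.0], [2.0, 1.0]])    # indefinite (det < 0)
print(is_strictly_convex_in_u(Q_good), is_strictly_convex_in_u(Q_bad))  # True False
```

For a general $f$ the Hessian depends on $(X, u)$ and positive definiteness must hold at every point, but the pointwise test is the same.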

Theorem 1.
There exists an open and dense subset $O'$ of $SC$ (endowed with the $C^\infty$ Whitney topology) such that, if $f \in O'$, then $(P_f)$ does not admit minimizing controls $u$ with a component $u_i$ vanishing on a subinterval of $[0, T]$, except possibly when the associated trajectory on that subinterval is an equilibrium of the system. In addition, for every integer $N$, the set $O'$ can be chosen so that its complement has codimension greater than $N$.
Of course we assume $T > T_{\min}$, the minimum time. Again the proof is based upon Thom's transversality theorem; we therefore carry out the computations in spaces of jets. For a positive integer $N$ and a pair $(X, u) \in \mathbb{R}^{2n} \times \mathbb{R}^n$, we denote by $J^N_{(X,u)}$ the space of $N$-jets at $(X, u)$ of functions in $C^\infty(\mathbb{R}^{3n}, \mathbb{R})$.
Lemma 2. Suppose that $(P_f)$ admits a minimizing trajectory $(X, u)$ such that a component $u_{i_0}$ of the control vanishes on a subinterval $I$ of $[0, T]$ on which the trajectory is not an equilibrium. Then there exists $t \in I$ such that the $N$-jet $j^N_{(X(t),u(t))} f$ belongs to a semialgebraic subset of $J^N_{(X(t),u(t))}$ of codimension greater than $N - 2n$.
Proof. Recall that, under the hypotheses of the lemma, there is a trajectory $(X, u)$ minimizing $(P_f)$. Moreover this trajectory is not the projection of a singular extremal, and its associated control $u$ is continuous. Thus, applying Pontryagin's Maximum Principle on $I$, there exists a $C^1$ function $P = (p, q) : I \to \mathbb{R}^n \times \mathbb{R}^n$ satisfying, for all $t \in I$, the adjoint system (Equation 5), where $H$ is the normal Hamiltonian of the problem. From (P1), a further relation holds on the interval $I$. On the other hand, (P2) implies that, for every $t \in I$, $u(t)$ satisfies the Karush-Kuhn-Tucker conditions: there exist Lagrange multipliers $\lambda^+(t), \lambda^-(t)$ in $\mathbb{R}^n$ realizing the stationarity of $H$ on the box of admissible controls. Since the control $u$ is continuous, we may assume without loss of generality that there exist a nonempty subinterval $J$ of $I$ and an integer $m \in \{0, \ldots, n-1\}$ such that:
• for $i = m+1, \ldots, n-1$, $u_i$ is constant on $J$ and equal to $u_i^-$ or $u_i^+$;
• $u_n \equiv 0$ vanishes on $J$ (i.e., $i_0 = n$); as a consequence, $\lambda_n^+ = \lambda_n^- = 0$ and $(N(x)^T q)_n = \frac{\partial f}{\partial u_n}(X, u)$ on $J$.
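The Karush-Kuhn-Tucker conditions invoked here take the standard box-constraint form. The following display is a hedged reconstruction (sign conventions may differ from the paper's):

```latex
% Stationarity with multipliers for the bounds u_i^- \le u_i \le u_i^+:
\frac{\partial H}{\partial u_i}(X, P, u) = \lambda_i^{+} - \lambda_i^{-},
\qquad \lambda_i^{\pm}(t) \ge 0,
\qquad \lambda_i^{+}\,\bigl(u_i^{+} - u_i\bigr) = 0,
\qquad \lambda_i^{-}\,\bigl(u_i - u_i^{-}\bigr) = 0 .
```

When $u_i$ lies strictly inside its bounds (as $u_n \equiv 0$ does on $J$), complementary slackness forces both multipliers to vanish, recovering the relation $(N(x)^T q)_n = \frac{\partial f}{\partial u_n}(X, u)$ stated above.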
Denote by $\bar v = (v_1, \ldots, v_m)$ the first $m$ coordinates of a vector $v \in \mathbb{R}^n$. Then the minimizing control can be written as $u(t) = (\bar u(t), u_0)$, where $u_0 \in \mathbb{R}^{n-m}$ is constant and $\bar u$ satisfies Equation 6.
Case 1. The matrix $\frac{\partial^2 f}{\partial \bar u^2}(X, u)$ is invertible on a subinterval $J'$ of $J$.
It results from the Implicit Function Theorem applied to Equation 6 that $u$ is $C^1$ on $J'$ and that, for all $t \in J'$, $\dot u(t)$ is given by Equation 7, where $L_F$ and $L_{b_i}$ denote the Lie derivatives with respect to $F$ and $b_i$ respectively. We use Equation 5 to eliminate $\dot q(t)$ in this expression and obtain a formula in which $Q_X$ is a rational function depending smoothly on $X$.
Fix now $s \in J'$. Since $\dot X(t) = F(X(t)) + \sum_i u_i(t)\, b_i(X(t))$ never vanishes on $J'$, we may assume, up to a local change of the coordinates $X = (X_1, \ldots, X_{2n})$ on $\mathbb{R}^{2n}$ near $X(s)$, that $F(X) + \sum_i u_i(s)\, b_i(X) = \frac{\partial}{\partial X_1}$. Differentiating $(N(x)^T q)_n = \frac{\partial f}{\partial u_n}(X, u)$ with respect to time near $t = s$ yields an expression for $\frac{d}{dt}\bigl(N(x(t))^T q(t)\bigr)_n$, where $\Delta u_s(t) = u(t) - u(s)$. We substitute the expression of $\dot u(t)$ (Equation 7) and of $\dot q_n$ (Equation 5) into this equation and obtain, for $t$ near $s$, a relation in which $R^1_X$ is a rational function with coefficients depending smoothly on $X$, and $\alpha_i$, $1 \le i \le 3n$, denotes the $i$-th component of the vector $\alpha = (X, u)$.
Successive differentiations (with substitution of $\dot u(t)$ by Equation 7 and of $\dot p$ and $\dot q$ by Equation 5 at each step) and evaluation of the derivatives at $t = s$ lead to a system of equations of the form, for $k \ge 1$,
$$\frac{\partial^{k+1} f}{\partial u_n\, \partial X_1^k}(X(s), u(s)) + R_k\Bigl(P(s),\ \frac{\partial^j f}{\partial \alpha_{i_1} \cdots \partial \alpha_{i_j}}(X(s), u(s));\ j \le k+1\Bigr) = 0,$$
where $R_k$ is a rational function and, if $j = k+1$, then at least one of the $\alpha_{i_\ell}$ is a $\bar u_i$.
In order to show that $\bar u$ is $C^1$ and to derive an expression for $\dot{\bar u}$, we need to introduce some notation. We define inductively a sequence of mappings $V^\ell : \mathbb{R}^{2n} \times \mathbb{R}^n \to \mathbb{R}^m$: for a positive integer $\ell$, the components of $V^\ell$ are built from those of $V^{\ell-1}$, where $r_\ell = r_\ell(X, u)$ is the rank of the matrix $\frac{\partial V^{\ell-1}}{\partial \bar u}(X, u)$.
By hypothesis, $r_1(X(t), u(t))$ is smaller than $m$ for $t \in J$. Since $X(\cdot)$ and $u(\cdot)$ are continuous, up to a permutation of the indices $\{1, \ldots, m\}$ there is a subinterval $J'$ of $J$ such that, for every $\ell \ge 1$:
• the rank $r_\ell(X(t), u(t))$ is constant on $J'$;
• the minor indexed by $1 \le i, j \le r_\ell$ is never vanishing on $J'$;
• if $r_\ell < m$, then a further degeneracy relation holds.
Notice that an easy induction yields an expression in which $G_{k,\ell}$ is a polynomial function of the derivatives of the form $\frac{\partial^j f}{\partial \bar u_{i_1} \cdots \partial \bar u_{i_j}}$, with $j \le \ell + 1$, each $i_l \le k$, and $\sum_l i_l < k(\ell + 1)$.
Denote by $L$ the largest integer such that $r_L < m$ (we set $L = +\infty$ if the latter condition is always satisfied). Then, for $\ell = 1, \ldots, L$, $V^\ell_m(X, u) \equiv 0$ on $J'$. If moreover $L < \infty$, there holds on $J'$ an identity with right-hand side $(0, \ldots, 0)$ and with $\frac{\partial V^L}{\partial \bar u}(X, u)$ invertible, where $u(\cdot) = (\bar u(\cdot), u_0)$. It then results from the Implicit Function Theorem that $u$ is $C^1$ on $J'$. Following exactly the argument of Case 1, we obtain a system of equations of the form, for a fixed $s \in J'$, in which $R'_k$ is a rational function of $P(s)$ and of derivatives $\frac{\partial^j f}{\partial \alpha_{i_1} \cdots \partial \alpha_{i_j}}(X(s), u(s))$ such that $j \le k + L$ and, if one of the $\alpha_{i_\ell}$ is $u_n$, then $j \le k+1$, and $j = k+1$ implies that at least one of the other $\alpha_{i_{\ell'}}$ is a $\bar u_i$. Set $M = \min(L, N-1)$. Let $\Omega^N_2$ be the set of $N$-jets $j^N_{(X(s),u(s))} f$ such that $\delta_1(X(s), u(s)) \cdots \delta_M(X(s), u(s)) = 0$.
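The rank functions $r_\ell$ of the Jacobians $\frac{\partial V^{\ell-1}}{\partial \bar u}$ can be probed numerically. A sketch with a made-up map $V$ (hypothetical; not one of the paper's $V^\ell$):

```python
import numpy as np

# Hypothetical illustration: numerical rank of the Jacobian dV/du of a map
# V : R^2 -> R^2 at a point, as in the rank conditions r_l above.
def jacobian_rank(V, u, eps=1e-6):
    """Estimate the rank of the Jacobian of V at u by central differences."""
    u = np.asarray(u, dtype=float)
    m = len(V(u))
    J = np.zeros((m, u.size))
    for j in range(u.size):
        du = np.zeros_like(u)
        du[j] = eps
        J[:, j] = (np.asarray(V(u + du)) - np.asarray(V(u - du))) / (2 * eps)
    return int(np.linalg.matrix_rank(J, tol=1e-4))

# Rank-deficient example: both components depend only on s = u1 + u2.
V_deficient = lambda u: np.array([u[0] + u[1], (u[0] + u[1]) ** 2])
# Full-rank example.
V_full = lambda u: np.array([u[0], u[1] ** 2 + 1.0])
print(jacobian_rank(V_deficient, [0.3, 0.2]), jacobian_rank(V_full, [0.3, 0.2]))
```

In the proof the rank-deficient case ($r_\ell < m$) is exactly what triggers the next step of the induction.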
Theorem 1 follows from Lemma 2 combined with standard transversality arguments.

Computation of Extremals in the 2-dof Case
We use the stratification of the (u 1 , u 2 )-plane with respect to the "sign of coordinates". Thus we have the following analysis.
1. In the stratum $u_1, u_2 > 0$, the maximum of $\tilde H(u_1, u_2)$ is the solution of the following system (setting $s_1 = -1$, $s_2 = -1$). Regrouping the $u_i$'s together, we get a system of a general form whose solutions follow (Equation 9).
2. In the stratum $u_1 > 0$ and $u_2 < 0$, the maximum is the solution of the same system and has the same expression (Equation 9), but taking $s_1 = -1$ and $s_2 = +1$.
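As a numerical companion to the stratum-by-stratum computation, a sketch assuming (hypothetically) that after regrouping the stationarity conditions reduce to a linear $2 \times 2$ system in $(u_1, u_2)$; the coefficients below are placeholders, not the paper's actual expressions (which depend on $s_1$, $s_2$):

```python
import numpy as np

# Hedged sketch: within each stratum the stationarity system is assumed to
# reduce to a linear 2x2 system A @ u = c in (u1, u2). A and c are
# illustrative placeholders only.
def solve_stratum(A, c):
    """Solve the 2x2 stationarity system A @ u = c for (u1, u2)."""
    return np.linalg.solve(A, c)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
c = np.array([3.0, 4.0])
u1, u2 = solve_stratum(A, c)
print(u1, u2)  # both close to 1 for this placeholder system
```

The candidate $(u_1, u_2)$ obtained in each stratum must then be checked a posteriori against that stratum's sign constraints (e.g. $u_1, u_2 > 0$), exactly as the stratification of the $(u_1, u_2)$-plane requires.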