Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. Such tasks require the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by this essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. To evaluate the concept, an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans.


Robotic agent
The human-sized mobile robot is equipped with a pair of seven-degrees-of-freedom manipulators [56] of anthropomorphic dimensions. An admittance-type control scheme based on a wrench sensor (JR3) in the wrist of the robot realizes compliant behavior of the manipulator when touching the environment. The effector of the right manipulator is equipped with an electromagnetic gripper which allows fast grasps and releases of ferromagnetic objects. A marker-to-effector calibration routine enables robust vision-guided grasping of marked objects by minimizing the error between the marker positions and the effector position to which the manipulator is controlled. Details on the software architecture can be found in [57]. The algorithms implementing the estimation of the human phase, the synchronization processes, the trajectory generation and the manipulator control scheme are developed in MATLAB/Simulink. Utilizing MATLAB Real-Time Workshop, the corresponding routines are executed at a sampling rate of 1 kHz on the onboard PCs of the robot running Ubuntu Linux. The overall processing delay between perception and action is approximately ∆t_p = 30 ms, which is the average time elapsing from marker movement until the movement response of the robot.
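The admittance-type control scheme mentioned above can be illustrated by a minimal sketch. The paper does not give the control law in detail; the following assumes a standard 1-DoF admittance law M·a + D·v = f_ext with hypothetical virtual inertia and damping values, mapping the sensed wrist force to a compliant reference motion for the position controller.

```python
def admittance_step(x, v, f_ext, M=2.0, D=20.0, dt=0.001):
    """One Euler step of a 1-DoF admittance law M*a + D*v = f_ext.

    The external force f_ext (e.g. from the wrist wrench sensor) is
    mapped to a compliant reference motion that the underlying position
    controller tracks. M and D are illustrative virtual inertia and
    damping parameters, dt matches the 1 kHz sampling rate.
    """
    a = (f_ext - D * v) / M
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Without contact force the reference stays at rest:
x, v = admittance_step(0.0, 0.0, f_ext=0.0)
```

A sustained contact force drives the reference away from the commanded pose, yielding the compliant touch behavior described above.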

Design of the synchronization behavior
The vector field H is designed, which defines the phase difference dynamics Φ̇ = ∆ω + H(Φ), with the phase difference Φ = θ_H − θ_R and the frequency detuning ∆ω = ω_H − ω_R. The unstable equilibrium points separating the regions of attraction are equally spaced between the stable equilibrium points given in Table 1, see Fig. S1. By splitting the phase difference dynamics under the assumption of isotropic coupling, we obtain the cross-coupled phase entrainment process, of which the robot implements its share θ̇_R = ω_R − (1/2)H(Φ). The processing delay ∆t_p of the robot is compensated by adding the constant phase shift ∆θ_H = ω_R ∆t_p to the human phase estimate θ̂_H. The entrainment process of the relative primitive durations is realized according to the example developed above, i.e. according to (22) and (23). Within the regions of attraction defined by the lower bounds d_l^R = (1/2)d_0^R and the upper bounds d_h^R = (3/2)d_0^R around the initial values d_0^R, and depending on the active mode m, the equilibrium relations summarized in Table 1 are attracted.
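The entrainment mechanism can be sketched numerically. The following assumes a single-sinusoid coupling H(Φ) = −K sin(Φ), a simplification of the paper's mode-dependent vector field (whose equilibria are given in Table 1), and simulates only the robot-side process θ̇_R = ω_R − (1/2)H(Φ) against a human oscillator advancing at constant frequency; all numeric values are illustrative.

```python
import math

def simulate_entrainment(omega_H=2.0, omega_R=2.2, K=1.0, dt=0.001, T=20.0):
    """Simulate robot-side phase entrainment with H(Phi) = -K*sin(Phi):
        theta_R_dot = omega_R - 0.5*H(Phi) = omega_R + 0.5*K*sin(Phi),
    so the phase difference obeys Phi_dot = d_omega - 0.5*K*sin(Phi)
    and settles at the stable equilibrium sin(Phi*) = d_omega/(0.5*K).
    Returns the final phase difference wrapped to (-pi, pi]."""
    theta_H, theta_R = 0.0, 1.5          # arbitrary initial phases
    for _ in range(int(T / dt)):
        phi = theta_H - theta_R
        theta_H += omega_H * dt
        theta_R += (omega_R + 0.5 * K * math.sin(phi)) * dt
    return (theta_H - theta_R + math.pi) % (2 * math.pi) - math.pi

phi_final = simulate_entrainment()   # converges near asin(-0.4) ≈ -0.41
```

With the detuning ∆ω = −0.2 and coupling gain K = 1, the phase difference locks at a constant offset instead of drifting, which is the phase-locking behavior the designed vector field generalizes to multiple mode-dependent equilibria.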

Transformation between movement and phase
The instantaneous phase estimate θ̂_H(t) is determined according to the classification and prediction technique proposed above. The state ξ_H = [y_H, ẏ_H]^T is defined, with y_H and ẏ_H denoting the y-components of the tracked Cartesian position and velocity of the human hand. The velocity is obtained by numerical differentiation. For on-line segmentation, the threshold velocity |ẏ_H| = 0.03 m s^−1 is used. Event prediction for phase estimation is performed based on R = 21 reference limit cycles that have been generated by the minimum-jerk movement model, see Fig. S2. The weighting of position and velocity is defined by the matrix Q = diag(1, 0.7). The metric difference threshold is set to ∆ξ_th = 0.05. The relative primitive durations are sampled at completion of each cycle i, i.e. d_H(t_8,i), through on-line segmentation of the human trajectory and averaged over the last three values.
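The matching of an observed hand state to a reference limit cycle under the weighted metric can be sketched as follows. This is an assumption-laden illustration: a single harmonic orbit stands in for the paper's 21 minimum-jerk reference cycles, and the phase is read off the nearest reference sample under the metric defined by Q = diag(1, 0.7).

```python
import numpy as np

def estimate_phase(xi, ref_states, ref_phases,
                   Q=np.diag([1.0, 0.7]), d_th=0.05):
    """Assign a phase to the observed state xi = [y, y_dot] by finding
    the closest sample of a reference limit cycle under the weighted
    metric d(xi, xi_r) = sqrt((xi - xi_r)^T Q (xi - xi_r)).
    Returns (phase, distance); callers may reject matches whose
    distance exceeds the threshold d_th (0.05 in the paper)."""
    diffs = ref_states - xi                        # shape (N, 2)
    d2 = np.einsum('ni,ij,nj->n', diffs, Q, diffs)
    k = int(np.argmin(d2))
    return float(ref_phases[k]), float(np.sqrt(d2[k]))

# Stand-in reference: harmonic orbit y = A*sin(theta), y_dot = A*w*cos(theta)
A, w = 0.2, 2.0
phases = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
refs = np.stack([A * np.sin(phases), A * w * np.cos(phases)], axis=1)

state = np.array([A * np.sin(1.0), A * w * np.cos(1.0)])  # true phase 1.0
theta_hat, dist = estimate_phase(state, refs, phases)
```

Including the velocity component in the state disambiguates the two cycle passes through each position, which a position-only match could not do.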
The effector trajectory of the robot is realized by the minimum-jerk model described above, which yields the fixed path depicted in Fig. S3. The pick positions of the objects are visually tracked during interaction, whereas the place positions are calibrated in advance via markers.
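The minimum-jerk model underlying both the reference cycles and the robot's effector trajectory follows the classic fifth-order polynomial profile; a minimal 1-D sketch (the actual effector path is the fixed multi-segment pick-and-place path of Fig. S3):

```python
def minimum_jerk(x0, x1, t, T):
    """Minimum-jerk position at time t for a point-to-point movement
    from x0 to x1 of duration T:
        x(t) = x0 + (x1 - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t/T,
    which starts and ends with zero velocity and acceleration."""
    s = min(max(t / T, 0.0), 1.0)
    return x0 + (x1 - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
```

The profile is symmetric about its midpoint, so the half-time position is exactly halfway between start and goal.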

Collision avoidance
Whenever the effector is close to either the human hand or to an empty pick/occupied place position, the phase velocity of the robot is modulated to θ̇_R′ = σ(∆x) θ̇_R. Depending on the Euclidean distance ∆x between the effector position and the human hand or the occupied/empty goal points, the smooth blending function σ(∆x) ∈ [0, 1] is applied, which implements a simple collision avoidance behavior. Below the upper distance bound ∆x_h = 0.25 m, the phase velocity θ̇_R′ is gradually slowed down to zero, which is reached at the lower distance bound ∆x_l = 0.15 m.
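A possible form of the blending function is sketched below. The paper does not specify its exact shape, only its bounds and smoothness; the cosine transition between ∆x_l = 0.15 m and ∆x_h = 0.25 m is an assumption.

```python
import math

def blend(dx, dx_l=0.15, dx_h=0.25):
    """Smooth blending factor in [0, 1]: returns 0 for dx <= dx_l
    (robot phase halted), 1 for dx >= dx_h (unmodulated), with a
    cosine transition in between. The robot's phase velocity is then
    scaled as theta_R_dot' = blend(dx) * theta_R_dot."""
    if dx <= dx_l:
        return 0.0
    if dx >= dx_h:
        return 1.0
    s = (dx - dx_l) / (dx_h - dx_l)
    return 0.5 * (1.0 - math.cos(math.pi * s))
```

Scaling the phase velocity rather than the Cartesian velocity keeps the effector on its fixed path; the robot simply pauses along the cycle until the obstacle clears.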