
Human impedance modulation to improve visuo-haptic perception

  • Xiaoxiao Cheng ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    xiaoxiao.cheng@manchester.ac.uk (XC); e.burdet@imperial.ac.uk (EB)

    Affiliations Department of Bioengineering, Imperial College of Science Technology and Medicine, London, United Kingdom, Department of Electrical and Electronic Engineering, The University of Manchester, Manchester, United Kingdom

  • Shixian Shen,

    Roles Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – review & editing

    Affiliation Department of Bioengineering, Imperial College of Science Technology and Medicine, London, United Kingdom

  • Ekaterina Ivanova,

    Roles Formal analysis, Investigation, Methodology, Supervision

    Affiliations Department of Bioengineering, Imperial College of Science Technology and Medicine, London, United Kingdom, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom

  • Gerolamo Carboni,

    Roles Conceptualization, Investigation, Methodology, Writing – review & editing

    Affiliation Department of Bioengineering, Imperial College of Science Technology and Medicine, London, United Kingdom

  • Atsushi Takagi,

    Roles Conceptualization, Investigation, Methodology, Software, Writing – review & editing

    Affiliation NTT Communication Science Laboratories, NTT R&D, Kanagawa, Japan

  • Etienne Burdet

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – review & editing

    xiaoxiao.cheng@manchester.ac.uk (XC); e.burdet@imperial.ac.uk (EB)

    Affiliation Department of Bioengineering, Imperial College of Science Technology and Medicine, London, United Kingdom

Abstract

Humans activate muscles to shape the mechanical interaction with their environment, but can they harness this control mechanism to best sense the environment? We investigated how participants adapt their muscle activation to visual and haptic information when tracking a randomly moving target with a robotic interface. The results exhibit a differentiated effect of these sensory modalities, where participants’ muscle coactivation increases with the haptic noise and decreases with the visual noise, in apparent contradiction to previous results. These results can be explained when considering muscle spring-like mechanics, where stiffness increases with coactivation to regulate motion guidance. Increasing coactivation to more closely follow the motion plan favors accurate visual over haptic information, while decreasing it filters visual noise and relies more on accurate haptic information. We formulated this active sensing mechanism as the optimization of visuo-haptic information and effort. This optimal information and effort (OIE) model can explain the adaptation of muscle activity to unimodal and multimodal sensory information when interacting with fixed or dynamic environments, or with another human, and can be used to optimize human-robot interaction.

Author summary

It is well known that the human brain tends to stiffen the limbs when there is a movement error during physical interactions. However, by systematically investigating how muscle coactivation varies with incoming visual and haptic information, we show that this is not always the case. Instead, limb stiffness is regulated by the brain to optimize the incoming sensory information from the interaction. The manuscript thus describes the following contributions:

  • Muscle coactivation (on which limb stiffness depends) changes in accordance with the quality of visual and haptic cues. It relaxes with increasing visual noise and stiffens with haptic perturbation, contrary to suggestions from previous studies.
  • We propose a mathematical model of this active sensing mechanism, according to which muscle coactivation is adapted by the brain to minimize prediction error about the environment based on visual and haptic information, while saving effort.
  • This computational model: i) predicts the observed results both qualitatively and quantitatively; ii) explains the coactivation adaptation to force fields and to interaction with human partners reported in the literature; iii) extends existing computational models and can be used to improve human-robot interaction.

Introduction

How do humans interact with their environment? It is known that the central nervous system (CNS) regulates the limbs’ stiffness by coordinating muscle activation to shape the energy exchange with the environment [5, 20], such as in the unstable situations typical of tool use [16, 31]. The prevailing explanation for the observed adaptation of muscle coactivation is that it is adjusted to minimize errors in the presence of disturbances [14, 34, 39]. However, how this affects visuo-haptic sensing has been little investigated. For instance, when skiing down a bumpy slope as shown in Fig 1, should one stiffen the legs to best sense the terrain, or relax them to filter out perturbations? This may be particularly important when visual information is degraded, such as in foggy conditions or at dawn, where one must rely on one’s feet to feel the terrain and avoid falling.

Fig 1. When skiing down a slope, one may relax their legs to filter out perturbations in clear conditions (left panel), or stiffen them to better sense the terrain in low visibility, such as at dawn or in fog (right).

Original photographs from Unsplash, courtesy of Andri Klopfenstein (left) and Greg Rosenke (right).

https://doi.org/10.1371/journal.pcbi.1013042.g001

Few studies have examined how muscle stiffness regulation is influenced by visual disturbances, and those that have show complex response patterns. For example, [6, 28, 37] found that in arm reaching tasks endpoint stiffness decreased with larger target sizes, indicating that the CNS increases stiffness when higher control precision is required. However, endpoint stiffness did not significantly increase in response to lateral visual noise during arm reaching, unlike the increase observed with mechanical vibrations [38]. This suggests that the CNS may respond differently to visual versus haptic disturbances. Further research is needed to explore how visual disturbances affect motion control through muscle stiffness regulation.

Visual and haptic information are critical in physical human-robot collaboration, including applications such as physical rehabilitation [12], collaborative robotics for industrial manufacturing [11], and shared control of semi-autonomous vehicles [26]. However, it is unclear how the CNS combines these sensory inputs in real time. When integrating sensory signals over short intervals, the CNS accounts for both sensory discrepancies and temporal delays to achieve optimal multi-sensory integration and feedback control [13]. Interestingly, the presence of visual feedback during a mechanical disturbance does not increase the magnitude of the muscle response but does reduce its variance [22]. Additionally, we observed a modulation of coactivation when physically connected individuals track a common target [2]. The partner with superior visual acuity tends to stiffen their arm and lead the movement, while the other relaxes their arm. Notably, the partners adjust their coactivation differently depending on the levels of visual and haptic noise.

In order to systematically study how humans adapt their muscle activation to visual and haptic feedback, we conducted an experiment in which subjects tracked a randomly moving target using wrist flexion/extension movements while being connected to the human-like tracking controller of [33] (Fig 2A). We examined the influence of visual and haptic feedback with different levels of noise, first separately (Fig 2B), then in combination (Fig 2C). In the visual conditions, the target presented on the monitor was either a sharp disk or a dynamic cloud of normally distributed dots. We also introduced a haptic perturbation of varying amplitude to the interaction torque. Conditions with a specific noise level were presented pseudo-randomly. We observed that the visual and haptic noise levels have different effects on the coactivation adaptation, suggesting that the brain modulates body impedance based not only on movement error [15], but also on its influence on specific sensory interactions. Subsequently, we developed and tested a computational model to examine the mechanism behind muscle coactivation adaptation (Fig 2D).

Fig 2. Experiment setup and protocol.

A: Participants were asked to track a randomly moving target with noisy visual feedback and, in some conditions, were connected to the human-like tracking controller of [33]. B,C: Experimental protocol of the separate visual or haptic feedback experiment, with each block consisting of nine trials. The 13 participants received only visual or haptic feedback in random order, each with a random noise level (B). Another 22 participants experienced nine integrated visual and haptic conditions presented in a random order (C). D: Mechanical modeling scheme of the human-robot interaction with visual and haptic noise.

https://doi.org/10.1371/journal.pcbi.1013042.g002

Results

We first analyzed how visual feedback affects motion control (Fig 3A, 3C). After the initial solo trials with only visual feedback, the tracking error remains stable in each of the three visual feedback conditions (non-significant slope). However, the tracking error increases with the magnitude of visual noise (larger error in both the weak and strong noise conditions relative to sharp vision, p<0.01, pairwise Wilcoxon tests). In contrast, the coactivation level decreases with larger visual noise (p<0.001 for the strong noise condition relative to sharp vision, p = 0.04 for the weak noise condition, paired t-tests).

Fig 3. Results of solely visual/haptic feedback experiment.

A&B: Evolution of tracking error and coactivation with visual and/or haptic feedback, where error bars represent standard error. C&D: The mean and standard error of tracking error and coactivation for all subjects during the last four trials. In the visual feedback condition (A&C), with increasing visual noise the tracking error increases while the coactivation decreases. In the haptic feedback condition (B&D), there is a decaying trend of both tracking error and coactivation over the trials and no clear difference among noise conditions.

https://doi.org/10.1371/journal.pcbi.1013042.g003

These results seem to contradict previous observations that muscle coactivation increases with the magnitude of movement error [6, 17, 18, 37]. However, they can be explained by considering the spring-like muscle mechanics [4], where muscle activation increases stiffness and viscosity while also shortening the muscle. By coordinating the activation of antagonist muscles, the CNS can thus control the force at the hand as well as the spring’s stiffness and reference position. In particular, we can consider the reference position that emerges when the CNS controls the muscles’ activity to move the limbs. As illustrated in Fig 2D, with clear visual information, increasing muscle coactivation will increase the limb’s endpoint stiffness and guide it closer to this accurate motion plan. However, when the target information becomes noisy, relaxing reduces this stiffness, which filters the noise and prevents it from being transferred to the limb’s movement. This explains why coactivation decreases with an increasing level of noise in the visual target.
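The filtering role of a compliant limb can be illustrated with a toy simulation, unrelated to the paper’s fitted model: a single-joint “hand” modeled as a mass-spring-damper servoing to a visual reference corrupted by white noise. All parameters (mass, damping ratio, noise level) are illustrative assumptions.

```python
import numpy as np

def simulate_hand(k, sigma_v, n_steps=20000, dt=0.001, m=0.05, zeta=0.2):
    """Simulate a 1-DoF hand (mass-spring-damper) servoing to a noisy
    visual reference whose true value is 0. A stiffer spring (larger k)
    tracks the reference -- and its noise -- more closely; a compliant
    one low-pass filters the noise out. Returns the hand's deviation."""
    rng = np.random.default_rng(0)
    b = zeta * 2.0 * np.sqrt(k * m)       # damping scaled with stiffness
    q, qd = 0.0, 0.0                      # hand angle and velocity
    out = np.empty(n_steps)
    for i in range(n_steps):
        ref = sigma_v * rng.standard_normal()   # noisy visual reference
        qdd = (k * (ref - q) - b * qd) / m      # spring-damper dynamics
        qd += qdd * dt
        q += qd * dt
        out[i] = q
    return out.std()                      # deviation from the true target

# With a noisy reference, the hand's deviation grows with stiffness:
low = simulate_hand(k=1.0, sigma_v=1.0)
high = simulate_hand(k=50.0, sigma_v=1.0)
```

With a sharp reference the same servo logic argues the other way: stiffening then tracks an accurate plan more closely, which is the trade-off the text describes.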

Next we investigated how the CNS regulates muscle coactivation when both visual and haptic feedback is provided. Fig 4A shows that the error decreases quickly in the solo trials and reaches a steady level over the last seven trials. The level of error remains stable in all the interactive conditions (non-significant slope, p>0.05). A two-way ART ANOVA shows that the tracking error depends on both visual and haptic noise, increasing with the amplitude of either (p<0.001 for all pairwise comparisons between noise levels, Wilcoxon tests).

Fig 4. Results of tracking with combined visual and haptic noise.

A: Evolution of tracking error and coactivation with visual or haptic feedback, with error bars representing the standard error. The tracking error saturates in the initial solo trials and increases with both visual and haptic noise during interactive trials. Coactivation shows a slower decrease across trials. B: Mean and standard error of tracking error and coactivation for all subjects during the last four trials. Tracking error increases with either visual or haptic noise, while muscle coactivation increases with haptic noise and decreases with visual noise. C: Muscle coactivation and reciprocal activation waveforms along with their frequency spectra. Muscle coactivation remains relatively constant, while reciprocal activation varies in synchrony with the movement, as confirmed by their spectra.

https://doi.org/10.1371/journal.pcbi.1013042.g004

The muscle coactivation tends to decrease over trials, indicating a learning effect in integrating the two sensory feedback modalities (especially at the largest level of haptic noise with sharp vision). Consistent with the sole visual feedback condition of the previous experiment, coactivation decreases with visual noise as shown in Fig 4B (significant interaction between the visual and haptic noise factors), where the sharp vision condition results in a higher coactivation level than both the weak and strong visual noise conditions for each level of haptic noise (all p<0.003, Wilcoxon tests). However, muscle coactivation increases with the level of haptic perturbation, especially between the weak and strong haptic perturbation conditions. The magnitude of this increase depends on the visual noise level: it is largest in the sharp visual condition (p<0.05, Wilcoxon tests) and becomes less clear as visual noise increases (all p>0.05, Wilcoxon tests).

In order to understand how the activity of antagonistic muscles is modulated dynamically, we aligned the reciprocal activation and coactivation profiles as shown in Fig 4C. Muscle coactivation remains relatively constant within each sensory noise condition (non-significant slope for all conditions). In contrast, the reciprocal activation is modulated to produce the movement dynamics (average Pearson correlation coefficient between reciprocal activation and target movement of 0.82 ± 0.09). The frequency spectrum of reciprocal activation has three peaks at the target movement frequencies (0.2, 0.5 and 8.5 Hz), in contrast to the essentially flat spectrum of muscle coactivation. These results indicate that reciprocal activation generates the tracking movement and responds to the haptic perturbation, while the coactivation level is regulated to deal with the specific noise condition.

Modeling of visuo-haptic sensing

What are the principles of the coactivation adaptation? The above experiments show that coactivation tends to decrease with practice, and is modulated by both visual noise at the target and external haptic noise. However, these two noise sources have opposite effects: coactivation decreases with a larger level of visual noise, but increases with haptic noise. We posit that these apparently contradictory trends can be explained through the following sensorimotor interaction principles:

  • Muscle activation is generated by the CNS using reciprocal activation to move the limbs and coactivation to guide them along a motion plan with suitable viscoelasticity [20].
  • Coactivation is adapted to maximize performance, considering both the visual noise at the target and the haptic noise at the limbs, while concurrently minimizing effort.

The mechanics of these principles is illustrated in Fig 2D, where muscle coactivation tunes the viscoelasticity of the hand to follow the planned movement. When the target is visually sharp, the motion plan is accurate, so it is useful to stiffen in order to follow it closely. However, when the target is noisy, it is preferable to relax in order to avoid injecting the visual noise into the hand movement, and to benefit instead from the external haptic guidance.

These principles can be formulated mathematically by considering the maximum likelihood prediction error when integrating visual and haptic information:

(1)

with the standard deviation of the haptic noise exerted on the limb. Critically, the standard deviation of hand movement relative to the motion plan following the target (as defined in Eq (13)) can be regulated through the muscle coactivation u.
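Eq (1) builds on the standard maximum-likelihood combination of two independent Gaussian cues, whose fused variance is the product of the individual variances over their sum (in the paper, the visual term additionally depends on the coactivation u). A one-line sketch of this standard fusion rule:

```python
def fused_sd(sigma_v, sigma_h):
    """Standard deviation of the maximum-likelihood combination of two
    independent Gaussian cues (the textbook cue-fusion result; Eq (1)
    of the paper further makes the visual term depend on coactivation)."""
    var = (sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2)
    return var**0.5

# The fused estimate is never worse than the better single cue:
s = fused_sd(2.0, 1.0)
```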

The above sensorimotor interaction principles thus correspond to the concurrent minimization of the prediction error and effort, i.e. of the cost function

(2)

with the effort ratio. Muscle coactivation can then be adapted using a gradient descent optimization:

(3)
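The gradient descent of Eq (3) can be sketched numerically. The mixing weight w(u) below (stiffer means relying more on the visually derived plan) and the quadratic effort term are illustrative assumptions chosen only to reproduce the qualitative trade-off, not the paper’s fitted model:

```python
def oie_coactivation(sigma_v, sigma_h, lam=0.1, alpha=0.05, n_iter=2000):
    """Gradient-descent sketch of the OIE idea: coactivation u weights
    the hand toward the visually derived motion plan, trading visual
    against haptic noise while penalizing effort. The weight w(u) and
    effort term are assumed forms, not the identified Eq (1)-(3)."""
    u = 0.5
    for _ in range(n_iter):
        # assumed mixing weight: stiffer -> rely more on the visual plan
        dw = 1.0 / (1.0 + u) ** 2          # derivative of w(u) = u/(1+u)
        # gradient of E^2 = w*sv^2 + (1-w)*sh^2 plus effort lam*u^2
        grad = dw * (sigma_v**2 - sigma_h**2) + 2.0 * lam * u
        u = max(0.0, u - alpha * grad)     # coactivation cannot be negative
    return u

# Qualitative trends of the experiment:
u_clean   = oie_coactivation(sigma_v=0.1, sigma_h=1.0)  # stiffen
u_blurred = oie_coactivation(sigma_v=1.0, sigma_h=0.1)  # relax
```

With clean vision and noisy haptics the descent settles at a high coactivation; with blurred vision it collapses to zero, matching the opposite effects of the two noise sources.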

This optimal information and effort (OIE) model was tested on the data of the tracking experiment with combined visual and haptic noise. The effective noise values that best fit the nine data points of Fig 5 are given in Table 1, yielding an effort ratio of 2.26. The model-predicted muscle coactivation values are shown in Fig 5. We also tested the tracking error minimization (TEM) model of [15], which explains motor learning in novel force fields. In this model, the coactivation u increases with each new trial to minimize the tracking error e, and decreases to minimize effort, according to

(4)
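A minimal trial-by-trial update in this error-driven spirit can be sketched as follows; the gains beta and gamma are illustrative, not the identified parameters of [15], and Eq (4) itself is not reproduced here:

```python
def tem_update(u, error, beta=0.5, gamma=0.1):
    """One trial-to-trial coactivation update in the spirit of the TEM
    model: u grows with the tracking error magnitude and decays to save
    effort. beta and gamma are illustrative gains."""
    return max(0.0, u + beta * abs(error) - gamma * u)

# with a constant error, u converges toward beta*|error|/gamma
u = 0.0
for _ in range(200):
    u = tem_update(u, error=0.2)
```

Because this update depends only on the error magnitude, such a scheme predicts increased coactivation for visual and haptic noise alike, which is the discrepancy the model comparison below exposes.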
Fig 5. Simulation results of normalized coactivation.

(A) Comparison between the tracking error minimization (TEM) model and our optimal information and effort (OIE) model across the nine experiment conditions. The OIE model predicted normalized coactivation values closely aligned with the experimental data, while the TEM model produced large prediction errors. (B) OIE model predictions of normalized coactivation as a function of visual and haptic noise levels. Black dots represent the recorded average coactivation of 20 participants in the final trial. Red dots represent the fitted data from the OIE model. The blue dots show the model’s predicted coactivation in unobserved noise conditions.

https://doi.org/10.1371/journal.pcbi.1013042.g005

Table 1. Identified effective visual and haptic noise values.

https://doi.org/10.1371/journal.pcbi.1013042.t001

The predicted values from the OIE and TEM models are compared with experimental data as shown in Fig 5A. The results indicate that while the TEM model correctly predicts the trend of increased coactivation with haptic noise, it fails to capture the decrease in coactivation with visual noise. In contrast, the OIE model closely matches the experimentally measured coactivation levels in all variations of visual and haptic noise. The superiority of the OIE model is further supported by the Akaike information criterion (AIC): the small-sample-size normalized AIC value [10, 29] for the OIE model is –6.5, lower than the value of –2.9 for the TEM model, showing that the OIE model better accounts for information loss and the number of independent parameters. Fig 5B illustrates how the OIE model accurately predicts the specific modulation of coactivation to varying levels of visual and haptic noise.
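For reference, one standard small-sample AIC correction (AICc) for a least-squares fit is shown below; whether this exact variant matches the paper’s normalized AIC values is an assumption, and the numbers used in the example are made up:

```python
import math

def aicc(rss, n, k):
    """Small-sample corrected Akaike information criterion (AICc) for a
    least-squares fit with residual sum of squares rss, n data points
    and k free parameters (standard textbook formula)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# a lower residual at equal complexity gives a lower (better) AICc,
# while extra parameters are penalized (illustrative numbers):
better_fit = aicc(rss=0.5, n=9, k=3)
worse_fit = aicc(rss=1.0, n=9, k=3)
```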

Could the OIE model predict the effect of blind haptic interaction? In this case the lack of visual feedback can be modeled through visual noise with infinite deviation, in which case the cost to minimize reduces to the effort term. The OIE model then predicts that subjects connected haptically to the controller tracking the target (but without visual feedback) would minimize effort independent of the haptic noise level. The results of the experiment testing this prediction are shown in Fig 3B, 3D. In the sole haptic feedback conditions, both tracking error and muscle coactivation decreased over the trials (F(1, 12) = 10.70, p < 0.01, two-way ART ANOVA comparing the mean of all subjects in the first and last three trials). Consistent with the model prediction, muscle coactivation remained at a minimum value and did not depend on the noise level (p>0.30, paired Wilcoxon tests). There was little change in tracking error with the haptic perturbation level (F(2, 24) = 1.30, p = 0.28, two-way ART ANOVA).

Discussion

In physical human-robot interaction, sensory signals crucial for motor control are derived from both visual input and haptic feedback, which provides substantial information about the movement intentions of the human operator. However, the process by which the CNS adjusts muscle coactivation to optimize the use of visual and haptic signals, thereby enhancing the effectiveness of human-robot collaboration, remains unclear. This paper systematically investigated how different levels of visual and haptic disturbances affect muscle activation during target-tracking tasks.

In a first experiment, we evaluated the impact of noise on muscle control within either the visual or the haptic channel. The results show that muscle coactivation decreased with increasing levels of visual noise and showed no substantial variation in response to the intensity of haptic disturbances. This suggests that the CNS regulates muscle coactivation by considering not just movement error but also sensing uncertainty. When there is no visual noise and motion planning can be relied upon, muscle coactivation increases to ensure that the planned trajectories are well followed. Conversely, as visual noise and the associated uncertainties in motion planning increase, muscle coactivation decreases. In the absence of visual feedback, which can be interpreted as maximal uncertainty from visual sensing, muscle coactivation remains low and becomes insensitive to the intensity of haptic disturbance.

The second experiment extended these findings by examining the influence of noise in scenarios with both visual and haptic feedback. Muscle coactivation decreased with an increase in visual noise (consistent with the observations in the single feedback conditions) and increased with rising haptic disturbances (in line with previous studies [16]). This indicates that the coactivation adaptation mechanism is influenced by both visual and haptic feedback. When visual feedback is clear, muscle coactivation significantly increases with rising haptic noise; however, this response diminishes markedly when visual feedback is blurred. This behavior suggests that the CNS’s reliance on a particular sensory modality varies with the uncertainty associated with each feedback type, revealing a complex, nonlinear interplay between the modalities. Notably, muscle coactivation drops sharply at the onset of visual noise but decreases only gradually as visual noise further intensifies. Additionally, muscle coactivation shows a general downward trend over the experimental trials, likely reflecting the CNS’s strategy to minimize metabolic cost [15, 36].

By systematically studying how subjects interact with visual and haptic information, we were able to decipher the mechanism of body impedance adaptation during the interaction with the environment. Our experimental and simulation results demonstrated that subjects regulate coactivation to optimally integrate visual and haptic information while minimizing effort. This enables them to extract useful information about their environment and plan accurate movements. A computational model based on the optimization of information and effort (OIE) was used to predict muscle coactivation levels under different visuo-haptic sensory conditions. This OIE model explains how the CNS integrates multi-modal sensory information, considering their respective noise level, to enhance perceptual acuity with minimal metabolic cost. Notably, the results obtained could not be predicted by previous models of muscle coactivation adaptation [15, 24, 35], which only considered movement error and suggested that muscle activation would increase with either visual or haptic noise. In contrast, the OIE model accounts for sensorimotor interactions and the influence of each sensory modality’s noise on target perception, tracking performance, and effort, thereby successfully predicting the observed actions.

Previous studies on arm-reaching movements have shown that accuracy constraints during movement influence muscle coactivation. For instance, [6, 28, 37] reported that participants slightly reduced muscle coactivation when accuracy constraints were relaxed. Therefore, in our experiments the visual noise in the target cloud may have been perceived as an enlarged target and contributed to reducing muscle coactivation. This effect is also predicted by the OIE cost function, which incorporates the minimization of metabolic cost. Future studies should investigate how participants perceived the cloud target and exploited it to relax the tracking constraint, and how this combines with their consideration of the movement statistics.

Interestingly, the study [8] employed the same tracking task and noise pattern as the present study but yielded opposite effects, as participants could increase haptic guidance by cocontracting antagonist wrist muscles. Under these conditions, participants increased coactivation in response to larger visual noise deviations and decreased it with larger haptic noise. However, these apparently contradictory effects of noise in [8] and the present study are reconciled when assuming that the CNS modulates stiffness to optimize tracking performance by integrating visual and haptic information while accounting for their respective noise characteristics. The OIE model, which implements this approach, shows qualitative and quantitative agreement with the coactivation modulation observed in both [8] (as analyzed in [7]) and in the present study.

While the relaxation trend with a larger target observed in [6, 28, 37] aligns with the effects of large visual noise observed in the present study, it cannot explain the findings of [8]. In contrast, by minimizing prediction error, the OIE model can correctly predict the results of both [8] and the present study. Importantly, the OIE model also minimizes effort, suggesting a relaxation of coactivation with a large target as seen in arm-reaching tasks [28]. Therefore, the OIE model explains how visual and haptic information can be optimally utilized in a stochastic sense while accounting for target shape and size.

The OIE model is compatible with existing models of motor adaptation to visual and haptic perturbations. A destabilizing force field [5, 16–18], corresponding to an increase of haptic noise, and an amplification of the hand deviation in visual feedback [6, 37] both result in increased stiffness, as predicted by the OIE model. In turn, this means that the OIE model extends the computational models of [15, 35], according to which stiffness increases with hand movement error.

The current OIE model implementation is limited by its lack of consideration of internal sensorimotor noise and neuromuscular control. As a result, it cannot simulate movement dynamics in its present form. Using a cost function similar to Eq (2), it may be extended to simulate movements by incorporating a model of neuromechanics including sensorimotor noise and muscle mechanics [31], such as in [1, 4], along with a prediction mechanism like that of [33].

In conclusion, muscle coactivation is not automatically tuned to minimize movement error as previously thought [15, 24, 35]. Instead, it is skillfully regulated by the CNS to extract maximal information from the environment. The OIE model presented in this paper explains the adaptation of muscle coactivation during interactions with force fields, dynamic environments with visual noise at the target, and haptic noise at the hand, as in the experiments of this paper, as well as during collaboration with other humans [2]. While active sensing has been identified previously in vision [30], this is, to our knowledge, the first evidence of body adaptation to improve visuo-haptic sensing. Furthermore, while sophisticated algorithms have been developed for such active inference e.g. [19], the experimental results were well predicted through the simple OIE model.

This approach could be applied in scenarios such as shared driving where a vehicle equipped with sensors provides haptic feedback to the human driver through the steering wheel [9]. When the human has noisy sensory feedback—such as during nighttime driving or when facing sunlight—the robotic system can increase its control impedance with OIE to offer greater assistance. Conversely, the robot reduces its control impedance when its sensory information about the environment becomes uncertain. This adaptive strategy is applicable across various human-robot collaboration contexts, including teleoperation, robot-assisted surgery, and collaborative robots in industrial manufacturing, where dynamically responding to changing conditions is essential for effective interaction and performance.

Materials and methods

Ethics statement

The experiment was approved by the Research Ethics Committee of Imperial College London (No. 15IC2470). Before starting the experiment, each participant gave informed written consent, filled in a demographic questionnaire and an Edinburgh Handedness Inventory form [27]. No participant reported a sensorimotor impairment.

Experiment setup

Each participant was seated on a height-adjustable chair, next to the Hi5 robotic interface [25] with the dominant wrist attached to a handle of the interface during flexion/extension movement. They received visual feedback of the target angle and of their wrist flexion/extension angle on their monitor, and/or haptic feedback from the interaction with the tracking controller of [33] (Fig 2A).

The Hi5 handle is connected to a current-controlled DC motor (MSS8, Mavilor) that can generate torques of up to 15 Nm, and is equipped with a differential encoder (RI 58-O, Hengstler) to measure the wrist angle and a torque sensor (TRT-100, Transducer Technologies) to measure the exerted torque in the range [0,11.29] Nm. The handle is controlled at 1 kHz using Labview Real-Time v14.0 (National Instruments) and a data acquisition board (DAQ-PCI-6221, National Instruments) while the data was recorded at 100 Hz.

The activation of two antagonist wrist muscles, the flexor carpi radialis (FCR) and extensor carpi radialis longus (ECRL), was recorded during the movement from each participant. Muscle electromyographic (EMG) signals were measured with surface electrodes using a medically certified non-invasive 16-channel EMG system. The EMG data was recorded at 100 Hz. The raw EMG signal was i) high-pass filtered at 20 Hz using a second-order Butterworth filter to remove drifts in the EMG, and ii) rectified and passed through a low-pass second-order Butterworth filter with a 15 Hz cutoff frequency to obtain the envelope of the EMG activity.
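This two-stage pipeline can be sketched with SciPy; the filter orders and cutoffs come from the text, while applying the filters at the 100 Hz record rate is an assumption:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=100.0):
    """EMG processing as described: (i) 2nd-order Butterworth high-pass
    at 20 Hz to remove drift, (ii) rectification, (iii) 2nd-order
    Butterworth low-pass at 15 Hz to extract the envelope. fs is the
    100 Hz record rate (an assumption for the filtering stage)."""
    bh, ah = butter(2, 20.0 / (fs / 2), btype="high")
    drift_free = filtfilt(bh, ah, np.asarray(raw, dtype=float))
    rectified = np.abs(drift_free)
    bl, al = butter(2, 15.0 / (fs / 2), btype="low")
    return filtfilt(bl, al, rectified)

# a pure DC offset is drift and should be removed entirely:
env = emg_envelope(np.ones(500))
```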

Tracking task

The participants were instructed to “track the moving target as accurately as possible” using wrist flexion-extension movements. The target was moving according to

(5)

where the multi-sine function started in each trial from a randomly selected time offset in order to minimize memorization of the target’s motion.
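Such a multi-sine target with a random per-trial time offset can be sketched as follows. The frequencies follow the spectral peaks reported in the Results; the amplitudes are illustrative assumptions, since Eq (5) is not reproduced here:

```python
import numpy as np

def target_angle(t):
    """Multi-sine target sketch. Frequencies follow the spectral peaks
    reported in the Results (0.2, 0.5 and 8.5 Hz); the amplitudes in
    degrees are assumed, not the experiment's values."""
    freqs = (0.2, 0.5, 8.5)   # Hz
    amps = (10.0, 6.0, 3.0)   # degrees (assumed)
    t = np.asarray(t, dtype=float)
    return sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

# a 20 s trial starting from a randomly selected time offset
rng = np.random.default_rng()
offset = rng.uniform(0.0, 20.0)
t = np.arange(0.0, 20.0, 0.01)
trial = target_angle(t + offset)
```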

In solo trials, the participants tracked the target without active torque from the robot. Otherwise, each participant’s wrist was connected to the tracking controller of [33] by a compliant virtual spring, yielding the torque (in Nm)

(6)

where q (in degrees) denotes the participant’s wrist angle, and qc the controller’s target angle computed as in [33]. The connection stiffness was selected such that subjects could clearly sense the robot’s movement, while remaining compliant enough to let them actively pursue the tracking task [21]. This controller has been shown to induce a behavior similar to human interaction [21]. Using this human-like interaction (rather than direct human interaction as in [2]) allows for the direct manipulation of haptic noise.
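Given the description, Eq (6) is presumably a linear spring torque between the two angles; a sketch with an assumed stiffness value (the identified value is not reproduced here):

```python
def coupling_torque(q, qc, kappa=0.03):
    """Compliant virtual-spring coupling between the participant's wrist
    angle q and the controller's angle qc (both in degrees), returning a
    torque in Nm. The linear form and the stiffness value kappa
    (Nm/deg) are assumptions, not the paper's identified parameters."""
    return kappa * (qc - q)
```

The spring pulls the wrist toward the controller when it lags (positive torque) and pushes back symmetrically when it leads.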

The experiment considered three haptic noise conditions: sharp haptic information without noise (H0 condition), a weak torque perturbation (H1 condition), and a strong torque perturbation (H2 condition).

For visual feedback, a target was displayed on the monitor for the participants to track, under three conditions. In the sharp visual condition V0, the target was displayed as an 8 mm diameter disk, the same visual condition as in the solo trials. In the noisy visual conditions V1 and V2, the target was displayed as a “cloud” of eight randomly distributed dots around the nominal target position, where each of the eight dots was sequentially replaced every 100 ms. The cloud’s vertical position relative to the target was normally distributed as N(0, (15 mm)²), its angular position relative to the target was normally distributed with a condition-dependent deviation, and its velocity was distributed as N(0, (101.6 mm/s)²). The weak visual noise condition V1 was defined by an angular deviation of 21.32 mm and the strong visual noise condition V2 by an angular deviation of 52.78 mm.
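The dot-cloud sampling can be sketched as follows; only the two stated deviations are modelled, and the sequential replacement of one dot every 100 ms would be handled by the display loop (function names are illustrative):

```python
import numpy as np

def cloud_dots(target_pos, sigma_angle_mm, rng, n_dots=8,
               sigma_vert_mm=15.0):
    """Sample a 'cloud' of dots around the nominal target position.
    Offsets are drawn from zero-mean normal distributions with the
    condition-dependent angular deviation and the fixed 15 mm
    vertical deviation."""
    dx = rng.normal(0.0, sigma_angle_mm, n_dots)   # angular offset
    dy = rng.normal(0.0, sigma_vert_mm, n_dots)    # vertical offset
    return np.column_stack([target_pos[0] + dx, target_pos[1] + dy])
```

For the V1 condition one would call `cloud_dots(target, 21.32, rng)`, and for V2 `cloud_dots(target, 52.78, rng)`.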

Muscle activation calibration

The participants placed their wrist in the most comfortable neutral posture, which was defined as 0°. Constrained at this posture, they were then instructed to sequentially flex or extend the wrist to exert torque. Each phase was 4 s long and was followed by a 5 s rest period to avoid fatigue; the rest period was used as a reference for the activity in the relaxed condition. This procedure was repeated four times at flexion/extension torque levels of {1, 2, 3, 4} Nm and {−1, −2, −3, −4} Nm, respectively.

The recorded muscle activity of each participant was then linearly regressed against the torque values. Specifically, the torque of the flexor muscle was modeled from the envelope of the EMG activity uf as

$\hat{\tau}_f = a_f \, u_f + b_f$ (7)

and similarly for the torque of the extensor muscle.
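The calibration regression of Eq 7 is an ordinary least-squares fit of a slope and intercept; a minimal sketch on synthetic data (signal names are illustrative):

```python
import numpy as np

def fit_emg_torque(u_f, tau):
    """Linear regression of the exerted torque against the flexor EMG
    envelope u_f: tau ~ a * u_f + b. Returns slope a and intercept b."""
    A = np.column_stack([u_f, np.ones_like(u_f)])
    (a, b), *_ = np.linalg.lstsq(A, tau, rcond=None)
    return a, b
```

The same function would be applied to the extensor envelope with the extension torques; S5 Fig shows how the resulting intercepts and slopes vary across subjects.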

Experimental protocol

After the EMG calibration, the participants carried out nine solo trials to become familiar with the tracking task and the dynamics of the wrist interface. This was followed by a series of 20 s long trials as described in Fig 2B. After each trial, the target disappeared and the participants had to place their cursor on the starting position at the center of the screen. The next trial then started after a 5 s rest period and a 3 s countdown. The participants could take an extra break at will between trials by keeping the cursor away from the screen center.

Behavior with only visual or only haptic feedback.

Thirteen naive subjects (six females and seven males) aged 21-25 years (mean = 22.5, sd = 1.05) were recruited to study the influence of visual or haptic feedback separately. One participant was left-handed (with laterality quotient LQ = -43) and the others were right-handed (LQ > 40).

Each participant carried out two blocks, one with visual feedback and one with haptic feedback, presented in a random order. Furthermore, the noise conditions were presented randomly within each block. Nine interaction trials of 20 s were carried out in each of these conditions.

Behavior with visual and haptic feedback.

For the coupled visuo-haptic feedback experiment, 22 naive subjects (twelve females and ten males) aged 22-35 years (mean = 24.1, sd = 3.06) were recruited to study the combined effect of visual and haptic feedback. One participant was ambidextrous (LQ = -29) and the others were right-handed (LQ > 40). Due to incomplete EMG data for two participants, the analysis was conducted on the remaining 20 subjects.

There were nine blocks of nine interaction trials each, with one of the nine noise conditions (resulting from the combination of the three visual noise levels and the three haptic perturbations) presented in a random order.

Analysis

The tracking error and muscle coactivation were used as metrics to analyze the participants’ tracking performance and impedance adaptation under the different visuo-haptic noise conditions. To represent the overall tracking accuracy within a trial, the tracking error is defined as the root mean squared error between the target position q*(t) and the hand position q(t) over one trial of duration T:

$e = \sqrt{\frac{1}{T}\int_0^T [q^*(t) - q(t)]^2 \, dt}$ (8)

Using the torque regression model of Eq 7, the joint reciprocal activation is defined as

$u_r(t) = u_f(t) - u_e(t)$ (9)

where uf and ue are the (positive) flexor and extensor activations, and the coactivation is defined as

$u_c(t) = \min\{u_f(t), u_e(t)\}$ (10)

The average muscle coactivation of each trial for a specific subject is computed as

$\bar{u}_c = \frac{1}{T}\int_0^T u_c(t) \, dt$ (11)

The normalised coactivation of each participant was used in the subsequent analysis, calculated as

$\hat{u}_c = \frac{\bar{u}_c - \bar{u}_{c,\min}}{\bar{u}_{c,\max} - \bar{u}_{c,\min}}$ (12)

where $\bar{u}_{c,\min}$ and $\bar{u}_{c,\max}$ are the minimum and the maximum of the averaged muscle coactivation over all interaction trials of a participant, respectively.
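These metrics can be sketched compactly; the reciprocal activation is assumed here to be the difference and the coactivation the overlap (minimum) of the two activations, the forms commonly used in this line of work:

```python
import numpy as np

def tracking_error(q_target, q_hand):
    """Root mean squared tracking error over one trial."""
    return np.sqrt(np.mean((q_target - q_hand) ** 2))

def reciprocal_activation(u_f, u_e):
    """Net (reciprocal) activation: flexor minus extensor (assumed form)."""
    return u_f - u_e

def coactivation(u_f, u_e):
    """Coactivation as the overlap of the two positive activations
    (assumed form)."""
    return np.minimum(u_f, u_e)

def normalise(trial_means):
    """Min-max normalisation of a participant's per-trial mean
    coactivation."""
    lo, hi = np.min(trial_means), np.max(trial_means)
    return (trial_means - lo) / (hi - lo)
```

The discrete mean in `tracking_error` approximates the trial integral, since samples are equally spaced at the 100 Hz recording rate.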

Linear mixed-effects (LME) statistical analysis via restricted maximum likelihood (REML) was applied to the tracking error and the coactivation in every condition, in order to assess performance stability and evaluate whether the participants had adapted to the noise. The model was fitted with the trial number as a fixed slope and a random intercept for each grouping factor (subject ID). For the visual-or-haptic feedback experiment, the Shapiro-Wilk test showed that the coactivation was normally distributed while the tracking error was not. For the visual-and-haptic feedback experiment, the Shapiro-Wilk test showed that neither the tracking error nor the coactivation was normally distributed. For metrics with a non-normal distribution, repeated-measures ART ANOVA was used to analyze the effects of the visual and haptic noise, and the paired Wilcoxon signed-rank test was used for post-hoc non-parametric hypothesis testing. For metrics with a normal distribution, repeated-measures ANOVA was used to assess the impact of the visual or haptic noise, and paired t-tests were conducted for post-hoc comparisons between groups. The p-values of all comparisons were adjusted using the Holm-Bonferroni method.
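The LME model (fixed slope on trial number, random intercept per subject, REML fit) can be sketched in Python with statsmodels; the column names `error`, `trial`, and `subject` are illustrative:

```python
import statsmodels.formula.api as smf

def fit_trial_slope(df):
    """Linear mixed-effects model fitted by REML: a fixed slope on the
    trial number and a random intercept per subject, mirroring the
    model described in the text."""
    model = smf.mixedlm("error ~ trial", data=df, groups=df["subject"])
    return model.fit(reml=True)
```

A slope not significantly different from zero would indicate stable performance over trials, i.e. that adaptation to the noise condition had been completed.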

Visuo-haptic noise model

The CNS perceives the target movement through visual feedback and through the haptic connection to the target, which is known to degrade the signal quality [32]. Assuming that this effect results in independent noise sources, the standard deviation of the internal sensory noise is

$\sigma = \sqrt{\sigma_v^2 + \sigma_h^2}$ (13)

where $\sigma_v$ is the deviation of the visual noise and $\sigma_h$ that of the haptic noise. The experimental data exhibit a saturation of the tracking error as the visual fuzziness (the size of the cloud) increases, so the visual noise is modelled as

(14)

where the angular deviation of the target cloud is set by the experiment. The deviation due to the joint compliance decreases with muscle coactivation [32], and was identified in [8] as

(15)

Furthermore, the tracking error increases with the amplitude of the haptic perturbation, thus a quadratic regression model is used for the haptic noise:

(16)

The parameters of the computational model were identified using the coactivation data of the last trial of all noise conditions. The effective visual noise deviation and the effective haptic noise deviation were identified by minimizing the variation of the cost derivative to satisfy the first-order necessary optimality (Karush–Kuhn–Tucker) conditions [3]. Considering the relationship between the deviation and the wrist’s viscoelasticity, the visual noise deviation and the haptic noise deviation each have three parameters, resulting in six parameters to identify.

Using the collected coactivation data {uij(9)}, a grid search is performed to determine the effects of the visual and haptic noise under the {sharp, weak, strong} conditions. Particle swarm optimization (PSO) [23] is employed within the bounded range [0, 70] to optimize the noise parameters, yielding the values shown in Table 1. The optimal effort ratio of 2.26 is then computed as the solution of (17).
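A minimal PSO sketch within the stated [0, 70] bounds, a generic implementation after [23] rather than the study's code; the cost function passed in would be the model-fit criterion, shown here with a stand-in:

```python
import numpy as np

def pso(cost, dim, bounds=(0.0, 70.0), n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation within box bounds.
    Returns the best position found and its cost."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pcost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + attraction to personal and global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, float(np.min(pcost))
```

For the six-parameter identification one would call `pso(model_cost, dim=6)`, where `model_cost` evaluates the KKT-based criterion described above.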

A least-squares regression with the identified parameter values 1.21 and 66.18 in Eq 14 expressed the relationship between the angular deviation of the visual cloud and the effective deviation of the visual noise. A quadratic regression with parameter values 5.05, 6.84 and 41.68 modelled the relation between the perturbation amplitude and the effective haptic noise.
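The quadratic regression step for the haptic noise can be sketched with a plain polynomial fit; the data are synthetic and the coefficient ordering (highest degree first) follows `numpy.polyfit`:

```python
import numpy as np

def fit_quadratic(amplitude, sigma_h):
    """Quadratic regression of the effective haptic noise deviation
    against the perturbation amplitude A: a*A**2 + b*A + c.
    Returns the coefficients [a, b, c]."""
    return np.polyfit(amplitude, sigma_h, 2)
```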

Supporting information

S1 Fig. Average and standard deviation of real time errors in different visuo-haptic perturbation conditions among participants.

https://doi.org/10.1371/journal.pcbi.1013042.s001

(EPS)

S2 Fig. The tracking error of a representative participant.

https://doi.org/10.1371/journal.pcbi.1013042.s002

(EPS)

S3 Fig. Average and standard deviation of real time motor torque among participants.

https://doi.org/10.1371/journal.pcbi.1013042.s003

(EPS)

S4 Fig. The motor torque of a representative participant.

https://doi.org/10.1371/journal.pcbi.1013042.s004

(EPS)

S5 Fig. The torque intercept and slope vary among subjects.

https://doi.org/10.1371/journal.pcbi.1013042.s005

(EPS)

S6 Fig. An example of the torque regression with EMG signals collected in calibration.

https://doi.org/10.1371/journal.pcbi.1013042.s006

(EPS)

Acknowledgments

We thank Dagmar Sternad for discussions on the experimental protocol, and the participants for taking part in the experiment.

References

  1. Berret B, Conessa A, Schweighofer N, Burdet E. Stochastic optimal feedforward-feedback control determines timing and variability of arm movements with or without vision. PLoS Comput Biol. 2021;17(6):e1009047. pmid:34115757
  2. Börner H, Carboni G, Cheng X, Takagi A, Hirche S, Endo S, et al. Physically interacting humans regulate muscle coactivation to improve visuo-haptic perception. J Neurophysiol. 2023;129(2):494–9. pmid:36651649
  3. Boyd S, Boyd SP, Vandenberghe L. Convex optimization. Cambridge University Press. 2004.
  4. Burdet E, Franklin DW, Milner TE. Human robotics: neuromechanics and motor control. MIT Press. 2013.
  5. Burdet E, Osu R, Franklin DW, Milner TE, Kawato M. The central nervous system stabilizes unstable dynamics by learning optimal impedance. Nature. 2001;414(6862):446–9. pmid:11719805
  6. Calalo JA, Roth AM, Lokesh R, Sullivan SR, Wong JD, Semrau JA, et al. The sensorimotor system modulates muscular co-contraction relative to visuomotor feedback responses to regulate movement variability. J Neurophysiol. 2023;129(4):751–66. pmid:36883741
  7. Carboni G. Human adaptive haptic sensing. Ph.D. Thesis, Imperial College of Science, Technology and Medicine. 2022.
  8. Carboni G, Nanayakkara T, Takagi A, Burdet E. Adapting the visuo-haptic perception through muscle coactivation. Sci Rep. 2021;11(1):21986. pmid:34753996
  9. Cheng X, Geng X, Huang Y, Burdet E. Haptic feedback of front vehicle motion may improve driving control. IEEE Robot Autom Lett. 2025;10(1):343–9.
  10. Cohen N, Berchenko Y. Normalized information criteria and model selection in the presence of missing data. Mathematics. 2021;9(19):2474.
  11. Colgate JE, Peshkin M, Klostermeyer SH. Intelligent assist devices in industrial applications: a review. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2003. p. 2516–21. https://doi.org/10.1109/iros.2003.1249248
  12. Colombo R, Sanguineti V. Rehabilitation robotics: technology and application. Academic Press. 2018.
  13. Crevecoeur F, Munoz DP, Scott SH. Dynamic multisensory integration: somatosensory speed trumps visual accuracy during feedback control. J Neurosci. 2016;36(33):8598–611. pmid:27535908
  14. Franklin DW, Osu R, Burdet E, Kawato M, Milner TE. Adaptation to stable and unstable dynamics achieved by combined impedance control and inverse dynamics model. J Neurophysiol. 2003;90(5):3270–82. pmid:14615432
  15. Franklin DW, Burdet E, Tee KP, Osu R, Chew CM, Milner TE, et al. CNS learns stable, accurate, and efficient movements using a simple algorithm. J Neurosci. 2008;28(44):11165–73. pmid:18971459
  16. Franklin DW, Liaw G, Milner TE, Osu R, Burdet E, Kawato M. Endpoint stiffness of the arm is directionally tuned to instability in the environment. J Neurosci. 2007;27(29):7705–16. pmid:17634365
  17. Franklin DW, So U, Kawato M, Milner TE. Impedance control balances stability with metabolically costly muscle activation. J Neurophysiol. 2004;92(5):3097–105. pmid:15201309
  18. Franklin S, Wolpert DM, Franklin DW. Visuomotor feedback gains upregulate during the learning of novel dynamics. J Neurophysiol. 2012;108(2):467–78. pmid:22539828
  19. Friston KJ, Daunizeau J, Kilner J, Kiebel SJ. Action and behavior: a free-energy formulation. Biol Cybern. 2010;102(3):227–60. pmid:20148260
  20. Hogan N. Impedance control: an approach to manipulation. In: IEEE American Control Conference. IEEE. 1984. p. 304–13.
  21. Ivanova E, Carboni G, Eden J, Kruger J, Burdet E. For motion assistance humans prefer to rely on a robot rather than on an unpredictable human. IEEE Open J Eng Med Biol. 2020;1:133–9. pmid:35402952
  22. Kasuga S, Crevecoeur F, Cross KP, Balalaie P, Scott SH. Integration of proprioceptive and visual feedback during online control of reaching. J Neurophysiol. 2022;127(2):354–72. pmid:34907796
  23. Kennedy J, Eberhart R. Particle swarm optimization. In: International Conference on Neural Networks (ICNN). IEEE. 1995. p. 1942–8. https://doi.org/10.1109/icnn.1995.488968
  24. Li Y, Ganesh G, Jarrasse N, Haddadin S, Albu-Schaeffer A, Burdet E. Force, impedance, and trajectory learning for contact tooling and haptic identification. IEEE Trans Robot. 2018;34(5):1170–82.
  25. Melendez-Calderon A, et al. Hi5: a versatile dual-wrist device to study human-human interaction and bimanual control. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2011. p. 2578–83.
  26. Mulder M, Abbink DA, Boer ER. Sharing control with haptics: seamless driver support from manual to automatic control. Hum Factors. 2012;54(5):786–98. pmid:23156623
  27. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9(1):97–113. pmid:5146491
  28. Osu R, Kamimura N, Iwasaki H, Nakano E, Harris CM, Wada Y, et al. Optimal impedance control for task achievement in the presence of signal-dependent noise. J Neurophysiol. 2004;92(2):1199–215. pmid:15056685
  29. Portet S. A primer on model selection using the Akaike Information Criterion. Infect Dis Model. 2020;5:111–28. pmid:31956740
  30. Rao RP, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci. 1999;2(1):79–87. pmid:10195184
  31. Selen LPJ, Franklin DW, Wolpert DM. Impedance control reduces instability that arises from motor noise. J Neurosci. 2009;29(40):12606–16. pmid:19812335
  32. Takagi A, Usai F, Ganesh G, Sanguineti V, Burdet E. Haptic communication between humans is tuned by the hard or soft mechanics of interaction. PLoS Comput Biol. 2018;14(3):e1005971. pmid:29565966
  33. Takagi A, Ganesh G, Yoshioka T, Kawato M, Burdet E. Physically interacting individuals estimate the partner’s goal to enhance their movements. Nat Hum Behav. 2017;1(3).
  34. Tee KP, Franklin DW, Kawato M, Milner TE, Burdet E. Concurrent adaptation of force and impedance in the redundant muscle system. Biol Cybern. 2010;102(1):31–44. pmid:19936778
  35. Theodorou EA, Buchl J, Schaal S. A generalized path integral control approach to reinforcement learning. J Mach Learn Res. 2010;11:3137–81.
  36. Todorov E, Jordan MI. Optimal feedback control as a theory of motor coordination. Nat Neurosci. 2002;5(11):1226–35. pmid:12404008
  37. Wong J, Wilson ET, Malfait N, Gribble PL. Limb stiffness is modulated with spatial accuracy requirements during movement in the absence of destabilizing forces. J Neurophysiol. 2009;101(3):1542–9. pmid:19144739
  38. Wong J, Wilson ET, Malfait N, Gribble PL. The influence of visual perturbations on the neural control of limb stiffness. J Neurophysiol. 2009;101(1):246–57. pmid:18667545
  39. Yang C, Ganesh G, Haddadin S, Parusel S, Albu-Schaeffer A, Burdet E. Human-like adaptation of force and impedance in stable and unstable interactions. IEEE Trans Robot. 2011;27(5):918–30.