
Novel Hybrid Adaptive Controller for Manipulation in Complex Perturbation Environments

  • Alex M. C. Smith,

    Affiliation Centre for Robotics and Neural Systems, Plymouth University, Plymouth, UK

  • Chenguang Yang ,

    Affiliations Centre for Robotics and Neural Systems, Plymouth University, Plymouth, UK, College of Automation Science and Engineering, South China University of Technology, Guangzhou, China

  • Hongbin Ma,

    Affiliations State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing, China, School of Automation, Beijing Institute of Technology, Beijing, China

  • Phil Culverhouse,

    Affiliation Centre for Robotics and Neural Systems, Plymouth University, Plymouth, UK

  • Angelo Cangelosi,

    Affiliation Centre for Robotics and Neural Systems, Plymouth University, Plymouth, UK

  • Etienne Burdet

    Affiliation Department of Bioengineering, Imperial College London, London, UK



Abstract

In this paper we present a hybrid control scheme, combining the advantages of task-space and joint-space control. The controller is based on a human-like adaptive design, which minimises both control effort and tracking error. Our novel hybrid adaptive controller has been tested in extensive simulations, in a scenario where a Baxter robot manipulator is affected by external disturbances in the form of interaction with the environment and tool-like end-effector perturbations. The results demonstrated improved performance in the hybrid controller over both of its component parts. In addition, we introduce a novel method for online adaptation of learning parameters, using the fuzzy control formalism to utilise expert knowledge from the experimenter. This mechanism of meta-learning induces further improvement in performance and avoids the need for tuning through trial testing.


Introduction

Modern robots are expected to interact extensively with the environment and with humans [1, 2]. This interaction with dynamic and unknown environments requires a control method that maintains stability and task effectiveness despite disturbances. One of the first schemes proposed to control interaction with an unknown environment is impedance control [3]. The environment is modeled as an admittance and the manipulator as an impedance, so that interactive control is achieved through the exchange of energy. Impedance control can be designed on top of adaptive control, which compensates for parametric uncertainties [4–6]. Adaptive impedance control methods, developed in [7–9], have improved the operational performance of a traditional impedance controller. In particular, the work in [9] shows how stability and successful performance can be gradually acquired despite the initial interaction instability typical of tool use such as drilling or carving [10].

Parallel to these developments, studies have shown that the human nervous system can adapt mechanical impedance (e.g. the resistance to perturbations) to succeed in performing tasks in stable and unstable environments [11, 12]. This is achieved through co-contraction of agonist/antagonist muscle groups, as demonstrated in Fig 1(a). The nervous system adapts motor commands to stabilise interactions through independent control of impedance and exerted force; the adaptation automatically selects suitable muscle activations to compensate for the interaction force and instability. At the same time, metabolic cost is minimised through the natural relaxation of muscle groups when error is sufficiently small. A model for this learning was introduced in [13, 14], which gave rise to a novel kind of non-linear adaptive controller that has been successfully demonstrated on robots [15]. The adaptation of impedance in this biomimetic controller follows a “v-shaped” algorithm, as shown in Fig 1(b). Conventional adaptive control designs are typically focussed on the estimation of uncertain parameters under stable motion [16]; in comparison, the biomimetic control design is able to acquire stability in unstable dynamics as well as minimise control effort, through adaptation of force and impedance [9]. Similar to muscle relaxation, under stable interaction the controller also demonstrates compliance, which has received much attention in recent research on robotic manipulation [17, 18].

Fig 1. How co-contraction affects muscle impedance.

(a): By contracting at the same time with different forces, the flexor and extensor muscles work together to maintain effector torque, but with increased impedance. (b): the “v-shape” of the adaptive law. Impedance increases irrespective of error direction, and decreases when error is below a threshold; this mechanism ensures minimisation of metabolic cost (i.e. control effort).

The present paper extends this novel adaptive controller in two aspects: the first contribution is hybrid task-space/joint-space control. Controllers are typically implemented in either joint space (corresponding to the actuators) or in Cartesian space (in which case the inverse kinematics must be solved). Both of these control methods have advantages and disadvantages:

  • In contrast to joint space controllers, Cartesian controllers allow for intuitive trajectories in world space. Objects placed in the workspace typically have a Cartesian representation, e.g. a box placed 0.1 metres in front of the robot.
  • On the other hand, robots typically require inputs in joint-space, i.e. torques rather than forces and moments. Therefore, joint space control is less computationally expensive than Cartesian space control, as it avoids the inverse kinematic problem. This is especially true for under-actuated or redundant robots like the Baxter manipulator.
  • Telepresence tasks may be more intuitive in joint space, when an anthropomorphic robot is imitating a human operator.
More specifically to this work:
  • Joint control can make the manipulator robust against disturbances along any part of the arm by monitoring joint-space errors.
  • Cartesian control is sensitive to task-specific disturbances occurring at the end-effector.

Therefore, a hybrid joint-Cartesian space control scheme is developed and investigated in this paper to take advantage of these two control approaches. The Cartesian task we study is that of carrying an object along a given trajectory while disturbances are applied either on the endpoint or along the arm (or both), similar to noise rejection when holding a glass of champagne in a crowded room [19]. This extends developments found in [20] and [21].

Another aspect of adaptive control that has received little attention is the setting of learning parameters. These parameters are typically tuned by the user, in order to complete the task and improve performance, e.g. by minimising the tracking error. Automating the selection of learning parameters is not an easy task. Real-world manipulator systems have complex and unknown dynamics due to interaction with the environment, which is difficult—or in some cases, impossible—to model. The neural network-based approach of [22, 23] may be used to estimate uncertainties in order to avoid some of these problems. However, fuzzy logic can be used to transfer expertise from a human operator in order to make rational decisions in the face of imprecise data [24–26]. Fuzzy logic has been successfully introduced into control systems to improve performance [27], and recently has been used in non-linear control systems [28] and robot manipulation [29]. This paper thus develops a method based on fuzzy logic to set the learning parameters.

The concepts of this paper will be simulated and tested on one arm of the Baxter robot (Fig 2). Baxter is a bimanual, low-cost robot from Rethink Robotics©, designed for introductory industrial applications, which has recently become available in a research version for use in academia.

Fig 2. Baxter arm, and disturbance forces acting on it.

Ftask acts at the end-effector and Fenvt is applied further up the arm as described in Eqs (2) and (4). The model generated using MATLAB and Peter Corke’s Robotics Toolbox is shown on the right [37].

Control problem

Baxter is required to move along a given trajectory under the influence of a high frequency, low amplitude vibration at the end-effector, simulating the type of disturbance a tool might produce. In addition, a high amplitude and low frequency perturbation is applied to a point on the arm away from the end-effector, to simulate collision with an operator or with the environment. For reference, nomenclature is provided in Table 1.

Robot Dynamics

The robot arm dynamics are given as:

M(q)q̈ + C(q, q̇)q̇ + G(q) = τu + τdist (1)

where q denotes the vector of joint angles, M(q) ∈ ℝn×n is the symmetric, bounded, positive definite inertia matrix, and n is the degree of freedom (DoF) of the robot arm; C(q, q̇)q̇ ∈ ℝn denotes the Coriolis and centrifugal force; G(q) ∈ ℝn is the gravitational force; τu ∈ ℝn is the vector of control input torque; and τdist ∈ ℝn is the disturbance torque caused by friction, environmental disturbances or loads as described in the next section. The control torques τu are generated by the designed controllers in order to achieve desired performance in terms of motion tracking and disturbance rejection.


We assume that the disturbance torque τdist can be broken down into two components to simulate both a task disturbance at the end-effector, described here as Ftask, and an environmental disturbance Fenvt applied on the arm, as shown in Fig 2. The task disturbance

Ftask = Ap sin(2π ωp t) (2)

is applied on the endpoint, where 0 < Ap ≤ 20 is the amplitude and 100 < ωp ≤ 1000 the frequency of oscillation in Hertz. In joint space, the torque applied is then

τtask = J(q)T Ftask (3)

where the Jacobian J(q) is defined through ẋ = J(q)q̇. The environmental disturbance is given by

Fenvt = Ar sin(2π ωr t) (4)

where 20N < Ar ≤ 100N is the perturbation amplitude, similar to average limits of human push/pull strength [30], and 0.1 < ωr ≤ 1 the frequency in Hertz, which provides a slowly changing disturbance. To simulate the environmental force Fenvt being applied at a point on the arm, e.g. at the elbow, the Jacobian matrix J is reduced by a matrix Z, defined as

Z = diag(1, …, 1, 0, …, 0), with the first z diagonal entries equal to one (5)

where z is the number of joints from the base to the contact point; e.g. if the force is applied on the elbow, z = 4. The torque can then be derived as

τenvt = (J(q)Z)T Fenvt (6)

The disturbance torque τdist in Eq (1) is comprised of a combination of the terms in Eqs (3) and (6).
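
The two disturbance torques can be sketched numerically as follows. This is a minimal illustration, not the paper's code: it assumes both forces act along the Cartesian x-axis, uses the simulation values quoted later (Ap = 20 N, ωp = 50 Hz, Ar = 100 N, ωr ≈ 0.1 Hz), and the function names are ours.

```python
import numpy as np

def task_disturbance_torque(J, t, A_p=20.0, w_p=50.0):
    # F_task of Eq (2): sinusoidal force at the end-effector (assumed here
    # to act along x), expressed as a 6-D wrench [Fx, Fy, Fz, Mx, My, Mz].
    F_task = np.zeros(6)
    F_task[0] = A_p * np.sin(2 * np.pi * w_p * t)
    return J.T @ F_task                 # Eq (3): map to joint torques via J^T

def env_disturbance_torque(J, t, z, A_r=100.0, w_r=0.1042):
    # Z zeroes the Jacobian columns beyond joint z, so the environmental
    # force acts only through the first z joints (e.g. z = 4 for the elbow).
    n = J.shape[1]
    Z = np.diag([1.0] * z + [0.0] * (n - z))
    F_envt = np.zeros(6)
    F_envt[0] = A_r * np.sin(2 * np.pi * w_r * t)
    return (J @ Z).T @ F_envt           # Eq (6): reduced-Jacobian mapping
```

The total disturbance torque entering Eq (1) is then the sum of the two returned vectors; note that the reduced mapping leaves the torques of joints distal to the contact point exactly zero.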

Adaptive Control

Feedforward controller

Given the dynamics of the manipulator in Eq (1), we employ the following controller as the initial torque input:

τr(t) = M(q)q̈d + C(q, q̇)q̇d + G(q) − L(t)ɛ(t) (7)

where L(t)ɛ(t) corresponds to a desired stability margin [9] which produces minimal feedback (similar to the passive impedance effect of muscles and tendons), and the first three terms are feed-forward compensation for the manipulator’s dynamics. As in sliding mode control, we use the tracking error

ɛ(t) = ė(t) + κe(t) (8)

where

e(t) = q(t) − qd(t),  ė(t) = q̇(t) − q̇d(t) (9)

are joint angle and angular velocity errors, respectively, and κ is a positive constant. In addition to the above control input τr(t), we develop two adaptive controllers in joint space and task space as follows.

Joint space adaptive control.

The human-like adaptive law for tuning the feed-forward and feedback components of the control torque τu from [9] is applied both in joint and task spaces. The adaptation here is continuous during movement, rather than trial after trial on repeated movements, so that tracking error and effort are continuously minimised. Let us define

τu(t) = τr(t) − τ(t) − Kj(t)e(t) − Dj(t)ė(t) (10)

where −τ(t) is the learned feed-forward torque, and −Kj(t)e(t) and −Dj(t)ė(t) are feedback torque terms due to stiffness and damping, respectively. The adaptive laws introduced in [9] for a trajectory of period T are given as:

τ(t + T) = τ(t) + Qτ[ɛ(t) − γ(t)τ(t)]
Kj(t + T) = Kj(t) + QKj[ɛ(t)e(t)T − γ(t)Kj(t)]
Dj(t + T) = Dj(t) + QDj[ɛ(t)ė(t)T − γ(t)Dj(t)] (11)

In the present paper we decouple the forgetting factor γ(t) from the gain matrices Q(⋅) in order to avoid high frequency oscillation, which can occur when both γ and Q(⋅) are large. As mentioned above, we consider the adaptation in continuous time, rather than by iteration over consecutive trials, yielding the joint space adaptation laws:

τ(t + δt) = τ(t) + Qτɛ(t) − γj(t)τ(t)
Kj(t + δt) = Kj(t) + QKjɛ(t)e(t)T − γj(t)Kj(t)
Dj(t + δt) = Dj(t) + QDjɛ(t)ė(t)T − γj(t)Dj(t) (12)

where δt is the sampling time, Kj(0) = 0[n×n] and Dj(0) = 0[n×n]. Qτ, QKj, QDj ∈ ℜn×n are diagonal positive-definite gain matrices. Furthermore, in [9], γ(t) ∈ ℜn×n was diagonal with entries

γi(t) = a / (1 + b‖ɛ(t)‖²) (13)

which requires two tuning variables, a and b. To simplify parameter selection, γ is redefined with diagonal entries

γi(t) = αj / (1 + αj²‖ɛ(t)‖²) (14)

which requires only one variable, αj, to describe the shape (as shown in Fig 3) while maintaining the same functionality. This also presents the advantage of simple application of a fuzzy inference engine, as described in a later section.
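
The decoupled adaptation described above can be sketched per joint as follows. This is a minimal scalar illustration, not the paper's implementation: the update mirrors the structure of the continuous-time laws (gain-weighted error terms minus a forgetting term), and the single-parameter forgetting function shown is one plausible bell-shaped choice consistent with Fig 3, not necessarily the exact form used.

```python
def adapt_step(tau, K, D, eps, e, de, Q_tau, Q_K, Q_D, gamma):
    """One adaptation step of the decoupled laws (cf. Eq 12), per joint.

    eps: sliding tracking error; e, de: position and velocity errors;
    gamma: forgetting factor evaluated at the current error.
    All quantities are scalars here for clarity."""
    tau_new = tau + Q_tau * eps - gamma * tau   # feed-forward torque learning
    K_new = K + Q_K * eps * e - gamma * K       # stiffness adaptation
    D_new = D + Q_D * eps * de - gamma * D      # damping adaptation
    return tau_new, K_new, D_new

def forgetting(eps, alpha):
    # Assumed single-parameter forgetting factor: larger alpha gives a
    # taller, narrower peak around zero error, so relaxation dominates
    # only when tracking is good (v-shaped behaviour of Fig 1(b)/Fig 3).
    return alpha / (1.0 + alpha ** 2 * eps ** 2)
```

With a large error the error-driven terms dominate and stiffness grows; near zero error the forgetting term dominates and gains relax, mimicking muscle relaxation.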

Fig 3. How the magnitude of α affects the forgetting factor γ.

Higher values of α have a high narrow shape, so that when tracking performance is good the control effort is reduced maximally. When tracking performance is poor, the forgetting factor is small, increasing applied feedback torque.

Task space adaptive control.

Task-space control is designed in a similar manner to joint space. First, we define the error term in Cartesian space:

ɛx(t) = ėx(t) + κex(t),  ex(t) = x(t) − xd(t) (15)

This leads to a change in the feed-forward and feedback terms described in Eq (12) to F(t), Kx(t) and Dx(t), adapted as

F(t + δt) = F(t) + QFɛx(t) − γx(t)F(t)
Kx(t + δt) = Kx(t) + QKxɛx(t)ex(t)T − γx(t)Kx(t)
Dx(t + δt) = Dx(t) + QDxɛx(t)ėx(t)T − γx(t)Dx(t) (16)

so that the task-space control torque is

τx(t) = J(q)T[−F(t) − Kx(t)ex(t) − Dx(t)ėx(t)] (17)

and the task-space forgetting factor is defined similarly to Eq (14), below:

γx,i(t) = αx / (1 + αx²‖ɛx(t)‖²) (18)

Hybrid Controller.

The combination of the basic controller of Eq (7), the joint space controller of Eq (10) and the task space controller of Eq (17) yields the hybrid controller, and therefore the input torque

τu(t) = τr(t) + τx(t) + Ωτj(t) (19)

where τj(t) = −τ(t) − Kj(t)e(t) − Dj(t)ė(t) denotes the joint-space adaptive terms of Eq (10), and Ω ∈ ℜn×n is a weighting matrix, designed such that the joint torque feedback is limited to certain joints, dependent on the required task. Assuming an accurate dynamic model of the robot is available, the torques due to disturbance τdist are given as

τdist = M(q)q̈ + C(q, q̇)q̇ + G(q) − τu (20)

i.e. the modeled system torques minus the input torque. By normalising this vector of torques to its maximum element, the weighting matrix Ω can be formed:

Ω = diag(|τdist,1|, …, |τdist,n|) / maxi |τdist,i| (21)

which is then applied to Eq (19), so that joint-space control torque is applied primarily to those joints which are under the influence of large disturbance forces, and less to those which are not; this limits control effort being applied unnecessarily, reducing the overall control effort that would otherwise be applied.
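
The construction of the weighting matrix Ω can be sketched as below. This is a minimal illustration under the stated assumption that an accurate dynamic model is available; the function name is ours.

```python
import numpy as np

def hybrid_weighting(M, C, G, ddq, dq, tau_u):
    """Estimate disturbance torques from the modelled dynamics and
    normalise them into a diagonal weighting matrix (cf. Eqs 20-21)."""
    # Modelled system torques minus the applied input torque.
    tau_dist = M @ ddq + C @ dq + G - tau_u
    w = np.abs(tau_dist)
    w = w / w.max()          # normalise to the largest disturbed joint
    return np.diag(w)        # Omega: per-joint feedback weighting
```

The most-disturbed joint receives full joint-space feedback (weight 1) while weakly disturbed joints receive proportionally less, which is what keeps the hybrid controller's overall effort close to that of its component parts.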

Fuzzy Inference of Control Gains

Traditionally, the user sets the learning parameters Q(⋅) and α(⋅) based on experience of how the system responds at run-time, in order to ensure good control performance. Here, expert knowledge of the system is distilled into a fuzzy inference engine to tune the gains online, so that no prior user experience is required. An improvement in performance is also expected, as the system will pick appropriate gain values depending on the system response to unpredictable disturbances. Inferences are made according to the magnitudes of the tracking error and control effort, which we want to minimise, and also give a good indication of overall performance of the controller.

There are several steps required for fuzzy inference of an output Y. First, fuzzification maps a real scalar value (for example, temperature) into fuzzy space; this is achieved using membership functions. Let X be a space of points, with elements x ∈ X [31]. A fuzzy set A in X is described by a membership function μA(x) associating a grade of membership in the interval [0, 1] to each point x in X.

In this paper we use simple triangular membership functions, which have low sensitivity to change in input and are computationally inexpensive [32]. Additionally, from [32], all membership functions are set so that the completeness ϵ of all fuzzy sets is 0.5; this reduces uncertainty by eliminating areas in the universe of discourse with low degrees of truth, and also ensures reasonable overshoot, as described in [33].

Several definitions are required. The union, which corresponds to the connective OR, of two fuzzy sets A and B is a fuzzy set C with membership function

μC(x) = max(μA(x), μB(x)) (22)

An intersection, which corresponds to the connective AND, can similarly be described:

μC(x) = min(μA(x), μB(x)) (23)

The Cartesian product can be used to describe a relation between two or more fuzzy sets; let A be a set in universe X and B a set in universe Y [34]. The Cartesian product of A and B results in a relation

R = A × B (24)

where the fuzzy relation R has a membership function

μR(x, y) = min(μA(x), μB(y)) (25)

This is used in the Mamdani min-implication, to relate an input set to an output set, i.e. IF x is A THEN y is B. A rule set is then used to implicate the output, which is max-aggregated over all rules [25]. Defuzzification is then performed, using the common centroid method [35]. The defuzzified value y* is calculated using

y* = ∫ y μ(y) dy / ∫ μ(y) dy (26)

which computes the centre of mass of the aggregated output membership function, and relates the μ value back to a crisp output.
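
The whole pipeline (triangular fuzzification, Mamdani min-implication, max-aggregation, centroid defuzzification) can be sketched with a toy two-rule engine. The membership set positions and the 0-to-1 output universe below are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function: zero outside [a, c], peaking at b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer(err, effort):
    """Toy Mamdani engine: IF error is high THEN gain is high;
    IF effort is high THEN gain is low. Inputs are normalised to ~[0, 1]."""
    y = np.linspace(0.0, 1.0, 201)          # universe of the output gain
    # Rule strengths: truth values of the antecedents (min-implication).
    w_low = tri(effort, 0.5, 1.0, 1.5)      # 'effort is high' -> gain low
    w_high = tri(err, 0.5, 1.0, 1.5)        # 'error is high'  -> gain high
    # Clip each consequent set at its rule strength, then max-aggregate.
    agg = np.maximum(np.minimum(w_low, tri(y, -0.5, 0.0, 0.5)),
                     np.minimum(w_high, tri(y, 0.5, 1.0, 1.5)))
    # Centroid defuzzification (discrete form of Eq 26).
    return float(np.sum(y * agg) / np.sum(agg)) if agg.sum() > 0 else 0.5
```

High tracking error with low effort drives the crisp output toward a high gain, and vice versa, matching the qualitative rule surfaces of Fig 4.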

The raw inputs to our fuzzy systems are the joint-space tracking error and effort, ɛj and τu, and similarly, in task-space, ɛx and Fu. Before fuzzification can be performed, the inputs must be normalised so that the same inference engine is generic and not dependent on the input magnitude. Baseline averages of the tracking errors, input torque and input force (ɛ̄j, ɛ̄x, τ̄u and F̄u) are calculated for each degree of freedom over the total simulation time per time step in Eq (27). These are then used to calculate the inputs to the fuzzy system in Eq (28), i.e. values which give an indication of performance compared to the previous iteration, obtained by scaling the current magnitudes by σ relative to their baselines. For all inputs to our fuzzy systems, a value less than σ indicates an improvement and values greater than σ indicate that performance is worse. Here we set σ = 0.5, so that the input range is roughly between 0 and 1. There is no upper limit to the variables generated in Eq (28), so any input above unity returns a maximum truth value in the ‘high’ classification. This allows a generic set of input membership functions to be applied to all systems.

These normalised variables are then used to infer the time-varying gains Qτ(t), QKj(t), QDj(t) and the shape parameter αj(t) in the adaptive laws Eqs (12) and (14) for the joint-space controller, and correspondingly in Eqs (16) and (18) for the task-space controller.

The rules for fuzzy inference of the control gains are set using expert knowledge. In general: IF control effort is too high THEN gain is set low; IF tracking error is poor THEN gain is set high, as shown in Table 2 for Q(⋅). The truth table for the forgetting factor gain (Table 3) is slightly different, in that α is required to be larger when tracking error is improved. Note that Q(⋅) and α, the outputs of the fuzzy inference system, are bounded:

0 < Q(⋅) ≤ Qmax,  0 < α(⋅) ≤ αmax (29)

where the maximum values are set according to previous trials performed without application of the fuzzy system.

Table 2. Truth table for inference of output Q(⋅) based on fuzzy memberships of the normalised tracking error and control effort.

Table 3. Truth tables for inference of output α(⋅) based on fuzzy memberships of the normalised tracking error and control effort.

How changes in control effort and tracking error affect the Q(⋅) gains is shown in Fig 4(a). It can be seen that in general: gain increases when tracking error is high and control effort is low, and minimal gain occurs when tracking error is low and control effort is high. The surface of fuzzy inference of α is shown in Fig 4(b) where it can be seen that the forgetting factor will be at its greatest when tracking error is low and control effort is high.

Fig 4. Surface plots showing rule surfaces.

(a): adaptation gain QDx, and (b): value of αx, based on the normalised inputs described in Eq (28). Task-space gains are characterised by a similar surface.


Stability Analysis

The stability of the controller in joint space and convergence to a small bounded set were shown in [9], and the proof for the Cartesian space controller is similar. However, here the diagonal adaptation gain matrices Q(⋅) are time varying, which must be taken into account. From Appendix C of [9], the difference in energy of the system δV(t) = δVp(t) + δVc(t) is shown to converge to zero. No change to the derivation of the first part δVp(t) is needed here, so that section of the proof still holds. A change is made in comparison to equations (39–41) of [9], where the constant gain Q is replaced with the time-varying Q(t), yielding (30). Defining a new variable δQ ≡ diag[I ⊗ δQK, I ⊗ δQD, I ⊗ δQτ] (where ⊗ is the Kronecker product) allows us to add another term to the end of equation (44) of [9], producing (31). The term inside the last integrand can be bounded as in (32), given that the variations δQK, δQD and δQτ are bounded, where ɛK,D,τ are defined as the minimum eigenvalues of the corresponding blocks of δQ. This can then be added to the condition in equation (46) of [9], which gives the inequality (33), where γ′ = Q−1γ. This is a sufficient condition to prove stability, following the details in Appendix C of [9], and given that Q(t) is bounded by the output of fuzzy inference stipulated in Eq (29).


Simulation

The task consisted of tracking a smooth minimal jerk trajectory along the y coordinate, defined as:

y(t) = y0 + (yf − y0)[10(t/T)³ − 15(t/T)⁴ + 6(t/T)⁵] (34)

where T is the movement duration. Joint-space angular velocity is computed using the pseudoinverse J†(q) ≡ JT(JJT)−1 of the Jacobian, through

q̇ = J†(q)ẋ (35)

from which the position and acceleration can be found respectively using

q(t) = ∫ q̇ dt,  q̈(t) = dq̇/dt (36)

Simulations of the proposed task and controller were performed using MATLAB with a kinematic and dynamic Baxter robot rigid joint model, implemented using Peter Corke’s Robotics Toolbox [36, 37]. To test the controller under continuously varying conditions, the two disturbance forces Fenvt and Ftask were introduced in different phases:

  • Phase I: No disturbance;
  • Phase II: Ftask only;
  • Phase III: Fenvt only;
  • Phase IV: Fenvt and Ftask.
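
The reference-generation step above (minimum-jerk profile, then mapping Cartesian velocity to joint velocity through the right pseudoinverse of a redundant arm's Jacobian) can be sketched as follows; the function names and default endpoints are illustrative.

```python
import numpy as np

def min_jerk(t, T, y0=0.0, yf=0.2):
    # Minimum-jerk position profile along y (cf. Eq 34); T is the duration.
    s = np.clip(t / T, 0.0, 1.0)
    return y0 + (yf - y0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def joint_velocity(J, xdot):
    # Right pseudoinverse J^T (J J^T)^-1 of a full-row-rank Jacobian,
    # mapping a Cartesian velocity to joint velocities (cf. Eq 35).
    J_pinv = J.T @ np.linalg.inv(J @ J.T)
    return J_pinv @ xdot
```

The right pseudoinverse gives the minimum-norm joint velocity that reproduces the commanded Cartesian velocity exactly, which is the standard choice for a redundant (7-DoF) arm tracking a 6-D task velocity.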

Performance was analysed in each phase, to observe the controller’s reaction to different perturbations. It was expected that joint-space control would improve rejection of Fenvt, and task-space control would reject the disturbance caused by Ftask; the order of phases was set so that the adaptation progress would be easier for readers to follow. A performance index, η, was calculated from the integral of the product of the input force Fu and task-space tracking error ɛx:

η = ∫ts^tf (Qɛx(t))T (RFu(t)) dt (37)

where Q, R ∈ ℜ6×6 are positive diagonal scaling matrices, and ts and tf were set to obtain η for each phase of the simulation. A small performance index η corresponds to small tracking error and control effort, and thus indicates good performance.
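
A phase-wise index of this kind can be evaluated numerically as below. This is a sketch under the assumption that the index is the integral of the scaled error-force product described in the text (the paper's exact expression is Eq 37); absolute values are taken here so the index stays sign-independent, and identity scaling matrices are used by default.

```python
import numpy as np

def performance_index(eps_x, F_u, dt, Q=None, R=None):
    """Rectangle-rule evaluation of a product-form performance index.

    eps_x, F_u: arrays of shape (steps, 6) holding the task-space tracking
    error and input force sampled at interval dt over one phase.
    Q, R: positive diagonal scaling matrices (identity by default)."""
    Q = np.eye(6) if Q is None else Q
    R = np.eye(6) if R is None else R
    # Per-step product of scaled |error| and scaled |force|, summed over axes.
    integrand = np.einsum('ti,ij,tj->t', np.abs(eps_x), Q @ R, np.abs(F_u))
    return float(np.sum(integrand) * dt)
```

Setting ts and tf per phase then amounts to slicing the sampled arrays before calling the function; small tracking error or small force in a phase drives the index toward zero.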


Results

Hybrid Control

Performance of the hybrid controller τu(t) = τr(t)+τx(t)+Ωτj(t) was compared against the controller in joint-space only, where τu(t) = τr(t)+τj(t), and in task-space only, where τu(t) = τr(t)+τx(t). Disturbance parameters remained the same in each case: the magnitude of Ftask(t) defined in Eq (2) was p = 20 sin(2π 50 t), and that of Fenvt(t) from Eq (4) was r = 100 sin(2π 0.1042 t). The trajectory period and travel distance were set to 4.8 s and 0.2 m respectively. Each simulation phase corresponds to one completion of the trajectory of Eq (34).

The Cartesian tracking error ɛx in Fig 5(a) for all three control schemes shows how task-space performs better when a tool-type disturbance is applied, but suffers when a large disturbance is applied away from the end-effector. In this case, joint-space control was able to more effectively reduce tracking error. When combined in the hybrid controller, tracking error was reduced further. From Fig 5(b) it can be noted that there was little difference in the overall amount of control effort being applied between the three methods. The measures of tracking error and control effort were combined to form the performance index η for each phase, shown in Fig 5(c). A clear difference could be seen in the performances of the task-space and joint-space controllers between phases II and III, where the disturbance type was switched from Ftask to Fenvt; task-space control was better at handling the former, and joint-space the latter. The hybrid controller showed a slight improvement over joint-space in phase II but exhibited an improvement over its component parts in phases III and IV. Considering ||τu|| was similar for all three, as seen in Fig 5(b), this suggests that the hybrid control was applying control in a more targeted fashion, i.e. only applying additional feedback to the joints which require it.

Fig 5. Comparison of controller performance.

(a): In the first phase (0 < t < 4.8) little difference can be observed in tracking error for the three controllers. In phase II task-space has the lowest error, and joint space the highest, with the hybrid control in between, as expected due to the disturbance type. In the next two phases (9.6 < t < 19.2) task-space control produces the highest error, while the hybrid controller shows a much lower tracking error than its component parts. (b): Examining the input torques τu little difference can be seen between the three control schemes. (c): The performance index η in each phase demonstrates the limitations of each control type under different disturbance conditions. In particular task-space control performance is degraded in phases III, IV where joint-space is superior. Hybrid control shows improved performance over both.

By examining the evolution of feed-forward torque in Fig 6(a) we see how in phases III and IV large increases were made to compensate for the low frequency Fenvt disturbance, predominantly in the first joint (the rotation of which is aligned with the x-y plane). Comparing the magnitude of feed-forward torque between controllers it is clear that joint-space control generated much higher torques, while hybrid control torques were much lower and less weighted towards joint 1.

Fig 6. Learned feed-forward torque and stiffness.

In (a) we can see how the feedforward torque increases in the last two phases to compensate for the low frequency disturbance. (b) Comparison of stiffness geometry represented by ellipses in the x and y planes, of midpoint of phases I—II, for each controller. Note for task-space and hybrid control the ellipse is elongated primarily in the x-axis corresponding to the perturbation direction.

Cartesian stiffness ellipses are shown in Fig 6(b); in task-space and hybrid control, it can be observed how the stiffness changed from a slight orientation in the y-direction (due to the trajectory moving along this axis) to a much larger ellipse predominantly in the x-axis, aligned with the direction of disturbance. Joint-space control, however, produced ellipses less aligned with the direction of disturbance, showing that feedback torque was applied inefficiently in this case.

Fuzzy Inference of Control Gains

The effectiveness of the fuzzy inference of control gains Q(⋅) and α was tested through implementation on the hybrid controller, and compared against results obtained in the previous section (where control gains are fixed). Base-line averages described in Eq (28) and upper limits of adaptation gains were calculated from data collected running the hybrid controller in the previous experiment, which were then used as the input to the fuzzy engines affecting the adaptive laws.

By examining Fig 7(a) we can see that there was an improvement in tracking error in phase II, but less so in the other phases, where it is similar to previous results. However, by comparing the results with Fig 7(b) we can see that although control torque was not reduced in the first two phases, there was a significant reduction in the last two; this demonstrates not only that the online tuning is able to reduce tracking error when control effort is already minimal, but also that it reduces the control effort required to maintain good tracking. This is reflected in Fig 7(c), which shows that in all disturbance phases the aggregate performance index was improved by tuning the learning parameters online.

Fig 7. Performance from fuzzy tuning of learning parameters.

In (a), tracking error for the hybrid controller (green) is compared to the same controller with fuzzy tuning of adaptive parameters (purple). (b): Comparison of input control torques for the two control schemes. (c): Performance indices calculated for each phase, showing an improvement for all phases where disturbance is present.

In Fig 8(a) and 8(b) the feed-forward torques of the proximal joints are compared. We can see that fuzzy tuning produced a much higher response amplitude, although the shape remained the same. Comparing with Fig 8(c) and 8(d), the stiffness ellipse displays a reduced magnitude with fuzzy tuning. This suggests that the online-tuned controller increased feed-forward torque while sacrificing stiffness to reduce the control effort observed in Fig 5(b), although the geometry of the ellipse was maintained in the direction of disturbance.

Fig 8. Force and impedance with and without fuzzy inference.

(a), (b): The shape of the evolution through time is similar between the two controllers; however, the fuzzy hybrid controller applies a larger feed-forward torque. (c), (d): Ellipses for the hybrid controller have a higher magnitude than those of the same controller with fuzzy parameter tuning; note that the fuzzy tuning case is scaled by a factor of 0.02. Ellipses in the second phase are elongated in the direction of disturbance.


Conclusion

This paper investigated the ideas of combining joint-space and task-space feedback control to create a hybrid controller, and of online fuzzy tuning of learning parameters.

The controller was based on a bio-inspired design, which has been shown to acquire stable and successful performance with minimal effort. The controller was implemented on a dynamic model of the redundant Baxter robot arm. The results show that the hybrid controller reduces tracking error by around 26% and 16% on average relative to the task-space and joint-space controllers respectively, with only a 6% maximum increase in control effort. This demonstrates that the hybrid controller is able to benefit from both joint-space and Cartesian-based control, providing robustness against disturbances occurring at the end-effector or at any point along the arm.

The results further show how fuzzy inference can be used to set the learning parameters automatically, instead of the normal practice of setting them manually. The simulation results demonstrate an average 24% reduction in control effort and a 15% improvement in overall performance with this fuzzy meta-learning compared with fixed learning parameters, as well as avoiding the need for trial testing to select optimum values for the adaptation gains. We also note that the method used to normalise inputs to the fuzzy system may enable iterative performance improvement, as the performance of the current iteration is compared against the previous, and the fuzzy system seeks to reduce tracking error and control effort as much as possible.

Author Contributions

Conceived and designed the experiments: AS CY EB. Performed the experiments: AS CY. Analyzed the data: AS CY HM EB. Contributed reagents/materials/analysis tools: CY AC PC. Wrote the paper: AS CY HM PC AC EB.


  1. Peshkin MA, Colgate JE, Wannasuphoprasit W, Moore CA, Gillespie RB, Akella P. Cobot architecture. Robotics and Automation, IEEE Transactions on. 2001;17(4):377–390.
  2. Lambercy O, Dovat L, Gassert R, Burdet E, Teo CL, Milner T. A haptic knob for rehabilitation of hand function. Neural Systems and Rehabilitation Engineering, IEEE Transactions on. 2007;15(3):356–366.
  3. Hogan N. Impedance control: An approach to manipulation. In: American Control Conference, 1984. IEEE; 1984. p. 304–313.
  4. Cheng L, Lin Y, Hou ZG, Tan M, Huang J, Zhang W. Adaptive tracking control of hybrid machines: a closed-chain five-bar mechanism case. Mechatronics, IEEE/ASME Transactions on. 2011;16(6):1155–1163.
  5. Li Z, Yang C, Ding N, Bogdan S, Ge T. Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints. International Journal of Control, Automation and Systems. 2012;10(2):421–429.
  6. Li Z, Yang C, Tang Y. Decentralised adaptive fuzzy control of coordinated multiple mobile manipulators interacting with non-rigid environments. IET Control Theory & Applications. 2013;7(3):397–410.
  7. Kelly R, Carelli R, Amestegui M, Ortega R. On adaptive impedance control of robot manipulators. In: Robotics and Automation, 1989. Proceedings., 1989 IEEE International Conference on. IEEE; 1989. p. 572–577.
  8. Colbaugh R, Seraji H, Glass K. Direct adaptive impedance control of robot manipulators. Journal of Robotic Systems. 1993;10(2):217–248.
  9. Yang C, Ganesh G, Haddadin S, Parusel S, Albu-Schäeffer A, Burdet E. Human-Like Adaptation of Force and Impedance in Stable and Unstable Interactions. IEEE Transactions on Robotics. 2011;27(5):918–930.
  10. Ganesh G, Jarrassé N, Haddadin S, Albu-Schaeffer A, Burdet E. A versatile biomimetic controller for contact tooling and haptic exploration. In: Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE; 2012. p. 3329–3334.
  11. Burdet E, Osu R, Franklin DW, Milner TE, Kawato M. The central nervous system stabilizes unstable dynamics by learning optimal impedance. Nature. 2001;414(6862):446–449. pmid:11719805
  12. Franklin DW, Osu R, Burdet E, Kawato M, Milner TE. Adaptation to stable and unstable dynamics achieved by combined impedance control and inverse dynamics model. Journal of Neurophysiology. 2003;90(5):3270–3282. pmid:14615432
  13. Franklin DW, Burdet E, Tee KP, Osu R, Chew CM, Milner TE, et al. CNS learns stable, accurate, and efficient movements using a simple algorithm. The Journal of Neuroscience. 2008;28(44):11165–11173. pmid:18971459
  14. Tee KP, Franklin DW, Kawato M, Milner TE, Burdet E. Concurrent adaptation of force and impedance in the redundant muscle system. Biological Cybernetics. 2010;102(1):31–44. pmid:19936778
  15. Ganesh G, Albu-Schaffer A, Haruno M, Kawato M, Burdet E. Biomimetic motor behavior for simultaneous adaptation of force, impedance and trajectory in interaction tasks. In: Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE; 2010. p. 2705–2711.
  16. Mahyuddin MN, Khan SG, Herrmann G. A novel robust adaptive control algorithm with finite-time online parameter estimation of a humanoid robot arm. Robotics and Autonomous Systems. 2014;62(3):294–305.
  17. 17. Khan SG, Herrmann G, Lenz A, Al Grafi M, Pipe T, Melhuish C. Compliance Control and Human–Robot Interaction: Part IIExperimental Examples. International Journal of Humanoid Robotics. 2014;11(03).
  18. 18. Wang W, Loh RN, Gu EY. Passive compliance versus active compliance in robot-based automated assembly systems. Industrial Robot: An International Journal. 1998;25(1):48–57.
  19. 19. Ganesh G, Haruno M, Kawato M, Burdet E. Motor memory and local minimization of error and effort, not global optimization, determine motor behavior. Journal of Neurophysiology. 2010;104(1):382–390. pmid:20484533
  20. 20. Smith A, Yang C, Ma H, Culverhouse P, Cangelosi A, Burdet E. Biomimetic joint/task space hybrid adaptive control for bimanual robotic manipulation. In: Control & Automation (ICCA), 11th IEEE International Conference on. IEEE; 2014. p. 1013–1018.
  21. 21. Smith A, Yang C, Ma H, Culverhouse P, Cangelosi A, Burdet E. Dual adaptive control of bimanual manipulation with online fuzzy parameter tuning. In: Intelligent Control (ISIC), 2014 IEEE International Symposium on. IEEE; 2014. p. 560–565.
  22. 22. Cheng L, Hou ZG, Tan M. Adaptive neural network tracking control for manipulators with uncertain kinematics, dynamics and actuator model. Automatica. 2009;45(10):2312–2318.
  23. 23. Cheng L, Hou ZG, Tan M, Zhang WJ. Tracking control of a closed-chain five-bar robot with two degrees of freedom by integration of an approximation-based approach and mechanical design. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on. 2012;42(5):1470–1479.
  24. 24. Zadeh LA. Fuzzy Logic. Computer. 1988;21(4):83–93.
  25. 25. Mamdani EH, Assilian S. An experiment in linguistic synthesis with a fuzzy logic controller. International journal of man-machine studies. 1975;7(1):1–13.
  26. 26. Tanaka K. An introduction to fuzzy logic for practical applications. Springer; 1997.
  27. 27. Mamdani EH. Application of fuzzy algorithms for control of simple dynamic plant. In: Proceedings of the Institution of Electrical Engineers. vol. 121. IET; 1974. p. 1585–1588.
  28. 28. Li H, Yu J, Hilton C, Liu H. Adaptive sliding-mode control for nonlinear active suspension vehicle systems using T-S fuzzy approach. Industrial Electronics, IEEE Transactions on. 2013;60(8):3328–3338.
  29. 29. Tan J, Ju Z, Hand S, Liu H. Robot navigation and manipulation control based-on fuzzy spatial relation analysis. International Journal of Fuzzy Systems. 2011;13(4):292–301.
  30. 30. Woodson WE, Tillman B, Tillman P. Human factors design handbook: information and guidelines for the design of systems, facilities, equipment, and products for human use. McGraw-Hill; 1992.
  31. 31. Zadeh LA. Fuzzy Sets. Information and Control. 1965;8(3):338–353.
  32. 32. Bouchon-Meunier B, Dotoli M, Maione B. On the choice of membership functions in a mamdani-type fuzzy controller. In: Proceedings of the First Online Workshop on Soft Computing, Nagoya, Japan. Citeseer; 1996.
  33. 33. Mizumoto M. Fuzzy controls under various fuzzy reasoning methods. Information Sciences. 1988;45(2):129–151.
  34. 34. Ross TJ. Fuzzy logic with engineering applications. John Wiley & Sons; 2009.
  35. 35. Takagi T, Sugeno M. Fuzzy identification of systems and its applications to modeling and control. Systems, Man and Cybernetics, IEEE Transactions on. 1985;15(1):116–132.
  36. 36. Corke PI. A robotics toolbox for MATLAB. Robotics & Automation Magazine, IEEE. 1996;3(1):24–32.
  37. 37. Ju Z, Yang C, Ma H. Kinematic Modeling and Experimental Verification of Baxter Robot. In: Proceedings of the 33rd Chinese Control Conference Nanjing, China, 28–30 Jul, 2014. CCC; 2014. p. 8518–8523.