Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one in making the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as forces rotated by 90 or 180 degrees, as they might, given that the forces provide the same information in all three cases. Without vision of the arm, both accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches towards the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored, even in our simple paradigm with static targets and no time constraints.
Citation: Mugge W, Kuling IA, Brenner E, Smeets JBJ (2016) Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy. PLoS ONE 11(3): e0150912. https://doi.org/10.1371/journal.pone.0150912
Editor: Sliman J. Bensmaia, University of Chicago, UNITED STATES
Received: July 31, 2015; Accepted: February 22, 2016; Published: March 16, 2016
Copyright: © 2016 Mugge et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This research is part of the H-Haptics perspective program supported by the Dutch Technology Foundation STW (www.stw.nl), which is part of the Netherlands Organization for Scientific Research (NWO): STW grant 12160. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
We rely on vision and proprioception to provide information about the position and orientation of our body. Our central nervous system integrates this into a single percept [1,2]. When executing movements humans make both random and systematic errors. Some of the systematic errors probably arise from biases between proprioceptive and visual estimates [3–6]. Can we reduce these errors by providing forces that scale with the distance from the goal?
Research into reaching movements in force fields has primarily focused on measuring movement errors [6,7] and the way we cope with them through error correction [8,9], motor adaptation [10] or motor learning [11]. Perturbation forces introduce errors that are corrected through feedback mechanisms and (in subsequent repetitions) through feedforward mechanisms. It has been shown that the position sense is not disturbed by external forces at the end-effector [6,12], which implies that forces can be applied in haptic robot interaction without distorting the position percept of the human operator. The relation between forces and position errors is very relevant for telemanipulation, a technique used when a human operator is needed to judge and resolve unexpected situations but physical presence is undesirable (e.g. due to hazardous environments) [13,14]. In telemanipulation, an operator controls a remote slave device by physically interacting with a master that serves as an input device. Providing the operator with force feedback from the interaction between the slave and its environment reduces task completion time [15–17], control effort, and cognitive workload [18].
Even with perfect transparency, where the forces on the operator are an exact copy of the interactive forces between the slave robot and its environment, performance will be limited by errors of the human operator. A promising recent development that may circumvent this limitation is haptic shared control, where the operator and an automatic controller act and exchange information in a simultaneous and continuous way [19,20]. Haptic shared control is an approach in which an assisting system continuously communicates a calculated ideal control input to the human operator through forces, which can be seen as a way of combining automation and manual control. It has proven itself in car driving, reducing lane keeping variability by 30% while reducing visual demand [21]. Although lane keeping can be fully automated, it is preferred to keep the driver in charge and engaged due to unpredictability and legal responsibility issues [20,22–24]. In haptic shared control the torque from the controller informs the driver how to correct the current steering angle towards the optimal one without ever overpowering the driver. That is an essential property of haptic shared control: the driver is always able to override the additional torque.
Experiments in other areas have also revealed benefits of haptic shared control. Rosenberg [25] presented additional passive guiding forces—called virtual fixtures—that assisted the operator during a teleoperated peg-in-hole task, improving task performance. Boessenkool et al. [26] showed that haptic shared control during free air movements in a bolt-spanner task reduced errors, control effort and cognitive workload. Dvorkin et al. [27] applied various haptic manipulations to reaching movements of patients with traumatic brain injury, and found superior performance in a haptic nudge condition in which a force of 1 N was briefly applied in the direction of the target when the subject stopped moving. The above-mentioned improvements may be based on the additional information that the haptic shared control communicates to the human operator, but there is an alternative explanation.
Forces that guide the operator to the target normally not only provide information about the direction and distance to the target (through the direction and magnitude of the force), but also feel intuitive: the operator can directly follow the forces to reach the target. Our study examined whether haptic guidance needs to be intuitive to be useful, or whether it is the additional information about the target’s position with respect to the hand that improves the operator’s performance in a position reproduction task.
Three horizontal force fields were centered on the target; the relation between the direction of the force and the direction to the target differed between the fields, but the force scaled with distance in the same way. The three force fields thus provided exactly the same information to the subjects. The movements may be less straight in less intuitive force fields, and subjects may require more time to extract the information from them. Nevertheless, if it is the additional information provided by the haptic shared control that is responsible for the reduction of motor errors, then all force fields should reduce end-point errors to the same extent. If it is not only the information that matters, but also how intuitively it is presented (the ease with which it can be used), then only assisting forces will reduce motor errors. We therefore examined whether, in the absence of time constraints, subjects would reproduce a prior position equally well with 90 or 180 degrees rotated guidance as with guidance directed towards the target.
This study consisted of two experiments; ten subjects (all right-handed, mean age 30.8 years, SD 9.3; 6 male, 4 female) took part in Experiment 1 and ten subjects (all right-handed, mean age 31.5 years, SD 9.1; 4 male, 6 female) in Experiment 2. As 4 subjects performed both experiments (2 of whom are co-authors of this paper), 16 subjects participated in total. The study is part of an ongoing research program that has been approved by the ethics committee of the Faculty of Human Movement Sciences of VU University. All subjects were naïve about the purpose of the experiments (the two co-authors performed the experiments before being informed about the details) and gave their written informed consent prior to participation.
Subjects made self-paced reaches from a starting position, centered close to the chest, to two targets in the horizontal plane (located 0.25m away from the starting position and 0.10m left or right of the sagittal plane) while holding the end effector of a PHANToM Premium 3.0/6DoF (Geomagic). Subjects fully enclosed the end effector of the haptic device in their hand with their thumb operating the button to indicate the completion of the reach (Fig 1). Vision of the hand and robot was obstructed by a table and a monitor (33.8x27.0cm, 1280x1024 pixels, 60 Hz) in front of the subject. Specialized software was used for the onscreen visualizations and data acquisition at 300 Hz (D-Flow, MotekForce Link). The monitor always showed a 15 mm radius circle (line thickness 4 mm) as starting point, and sometimes showed a 10 mm radius disk as target (Experiment 1 only) and a 9 mm radius disk as cursor (indicating the hand position). The movement of the cursor corresponded to the horizontal component of the hand movement. In trials in which no cursor was presented, the cursor reappeared when the hand approached the starting position after the reach, to help the subject accurately locate the starting position. When at the starting position, pushing the button initiated the next trial with the next force field and visual indicator settings.
Subjects made reaches to a visual/haptic goal while holding a haptic device (PHANToM Premium 3.0/6DoF). Right panel shows the onscreen visualizations with a target indicator in Experiment 1 (top) and without a target indicator in Experiment 2 (bottom).
To prevent the subject’s hand from drifting vertically from the target plane, either the color of the target (Experiment 1) or that of the starting point (Experiment 2) indicated whether the height was correct: green if the height was within 30 mm of the target plane, red if above and blue if below.
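The color cue described above amounts to a simple threshold on the vertical distance to the target plane. As a minimal sketch (function and variable names are our own, positions in metres; this is not the authors' implementation):

```python
def height_color(z_hand, z_target, tol=0.030):
    """Color of the target (Experiment 1) or starting point (Experiment 2)
    indicating whether the hand is near the horizontal target plane.
    Illustrative sketch; names and the metre units are our assumptions."""
    if z_hand > z_target + tol:
        return "red"    # hand above the target plane
    if z_hand < z_target - tol:
        return "blue"   # hand below the target plane
    return "green"      # within 30 mm of the target plane
```

Subjects were asked to keep this indicator green throughout the movement.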
All force fields (50 N/m) were centered on the target and capped at 5 N for distances of more than 10 cm from the target, to prevent excessive displacements from the starting position when the force field was initiated. The forces were independent of the vertical position of the handle and no forces were exerted in the vertical direction. All force fields (all conditions except the Null condition) provided the same information to the subject: the magnitude of the force indicated the distance from the target and the direction of the force indicated the direction to the target. The only difference was that in two of the force fields the direction of the force was rotated with respect to the direction to the target (Fig 2):
- Null condition (no guidance, only Experiment 1)
- Assisting guidance condition (guidance directed towards the target, 0 degrees rotated)
- Opposing guidance condition (guidance directed away from the target, 180 degrees rotated)
- Perpendicular guidance condition (guidance directed perpendicular to the target direction, 90 degrees rotated)
The Null condition without forces was only used in Experiment 1; the three conditions with force fields were used in both experiments. The Assisting (most intuitive), Opposing and Perpendicular (least intuitive) guidance conditions all scaled the force proportionally with the distance to the target, effectively providing the same information about the location of the target.
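The three force fields described above differ only by a rotation of the force direction. A minimal sketch of such a field (the function name, the sign convention of the rotation, and the tuple return format are our assumptions, not taken from the paper):

```python
import math

def guidance_force(hand, target, rotation_deg, stiffness=50.0, cap=5.0):
    """Horizontal guidance force (N) for a field centered on the target.

    rotation_deg: 0 (Assisting), 90 (Perpendicular), 180 (Opposing).
    Positions in metres; stiffness 50 N/m, force capped at 5 N, i.e. at
    distances beyond 0.10 m from the target. Illustrative sketch only.
    """
    dx, dy = target[0] - hand[0], target[1] - hand[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # at the field's center the force is zero
    # Magnitude scales with distance to the target, capped at 5 N.
    mag = min(stiffness * dist, cap)
    # Direction towards the target, rotated by the condition's angle.
    ang = math.atan2(dy, dx) + math.radians(rotation_deg)
    return (mag * math.cos(ang), mag * math.sin(ang))
```

In all three conditions the magnitude encodes the distance and the (known) rotation makes the direction to the target recoverable, which is why the fields are equally informative.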
Assisting guidance is an intuitive implementation of guidance, which is known to improve performance [21,25–27]. To attain the target, one can simply move along with the force. Opposing guidance is less intuitive as one will have to oppose the force to move in the direction of the target. Perpendicular guidance presents a complex relationship between the force and the required movement.
Task instruction and experimental protocol
In Experiment 1, subjects were instructed to move to a visual target. In Experiment 2, subjects were instructed to accurately locate the center of the force field, at which the force was zero. In both experiments, subjects were instructed to move to the target as accurately as possible, with no time constraints, and push the button when the target was attained. They were informed that the target location did not include a vertical component, but were nevertheless requested to try to maintain a green height indicator throughout the movement.
In Experiment 1, the reaches were performed in pairs towards the same visual target, first with a visible cursor (visual trial) and then without (blind trial). In Experiment 2, only haptic trials were performed. They were similar to the blind trials of Experiment 1, but the target was the center of the force field rather than a visible target and they were not alternated with visual trials with a visible cursor. Therefore, subjects could not use memory of (a movement towards) a visual target, but were forced to use the haptic information. Note that there was no Null condition in Experiment 2, as this condition would not provide any information on target location. In both experiments there were separate blocks of trials for all conditions. These blocks were presented to the subject in random order. Before presenting each block of trials with a certain force field, the force field was explained to the subject and he/she performed several training trials. Experiment 1 consisted of 192 trials (2 visual/blind conditions x 4 force fields x 2 target locations x 12 repetitions) and Experiment 2 of 72 trials (3 force fields x 2 target locations x 12 repetitions).
The 2D positions at which the subjects pushed the button were measured and three performance measures were taken:
- Accuracy (systematic error) was calculated as the distance between the mean end point and the target.
- Precision (random error) was calculated as the average distance between the individual end points and the mean end point.
- Reaching time was calculated as the time between the button push at the starting position and the button push at the end point. Note that it was explicitly stated to the subjects that time was not a factor.
Additionally, the path length was analyzed to judge whether the trajectories were straighter (shorter path length) in certain conditions. The accuracy, precision, reaching time, and path length in the visual and blind trials (Experiment 1), and in the haptic trials (Experiment 2), were all assessed separately. Repeated-measures ANOVAs were performed on accuracy, precision, reaching time, and path length with a significance level of p<0.05. Subsequent post-hoc analyses were Bonferroni-corrected, and when Mauchly’s test indicated that sphericity was violated, the lower-bound estimate was used.
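The end-point measures and path length defined above can be computed directly from the recorded 2D positions. A minimal sketch (function names are ours; positions in metres):

```python
import math

def accuracy(end_points, target):
    """Systematic error: distance between the mean end point and the target."""
    mx = sum(x for x, _ in end_points) / len(end_points)
    my = sum(y for _, y in end_points) / len(end_points)
    return math.hypot(mx - target[0], my - target[1])

def precision(end_points):
    """Random error: mean distance of individual end points to their mean."""
    mx = sum(x for x, _ in end_points) / len(end_points)
    my = sum(y for _, y in end_points) / len(end_points)
    return sum(math.hypot(x - mx, y - my) for x, y in end_points) / len(end_points)

def path_length(trajectory):
    """Total length of the reach trajectory (straighter reaches are shorter)."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))
```

Note that with these definitions a large value means a large error, so "better precision" corresponds to a smaller precision value.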
Subjects made consistent, almost straight, goal-directed reaches to the targets. For the three conditions with force fields, the early parts of the trajectories (close to the starting position, see Fig 3) reflect initial displacements due to the onset of the force field at the starting position. In the visual trials the subjects accurately attained the targets in all four conditions. In the blind trials the end points of the movements (Fig 4) show larger random errors, and the end points typically fell short of the targets (systematic errors). The poorer precision in the absence of vision indicates that the forces themselves were not the decisive factor in attaining the end points.
Reaches were quite straight, with only evident effects of the force fields at the start of the movement when the forces were the strongest and the subjects experienced a transient onset of the force field. Top panels show visual trials; bottom panels show blind trials.
The black dots represent the targets, the plusses the end points of individual trials and the orange circle the average end point position for each target. Top panels show visual trials; bottom panels show blind trials.
In visual trials, we found a significant main effect of guidance condition on reaching time (Fig 5; F3,27 = 14.149, p<0.001) and path length (F1,9 = 7.085, p = 0.026), with no effect on accuracy or precision (F1,9 = 0.231, p = 0.642 and F1,9 = 1.113, p = 0.319, respectively). Post-hoc tests on the reaching times revealed that the visual trials with Assisting guidance were completed significantly faster than those in any other guidance condition, while errors were not larger. Post-hoc tests on the path length only revealed significantly longer path lengths in the Opposing condition than in the Null condition.
Error bars indicate standard deviation over subjects and asterisks indicate significance (*: p<0.05, **: p<0.01, and ***: p<0.001). The assisting guidance condition improved the subjects’ performance with respect to all other guidance conditions: reaches were completed faster (both types of trials) and with increased accuracy and precision (blind trials).
In blind trials, we found significant main effects of guidance condition on accuracy, precision, reaching time and path length (F3,27 = 12.323, p<0.001; F3,27 = 12.498, p<0.001; F1,9 = 9.352, p = 0.014; F1,9 = 7.034, p = 0.026, respectively). Post-hoc tests revealed that the blind trials with Assisting guidance were significantly more accurate and precise than the blind trials of any other guidance condition. In addition, movements with Assisting guidance in blind trials were completed significantly faster than those with Opposing and Perpendicular guidance, but were not significantly different from those of the Null condition (p = 0.515). Post-hoc tests on the path length did not reveal any significant differences.
Experiment 1 shows that only assisting guidance reduced subjects’ errors with respect to no guidance at all (Null condition). For the Perpendicular as well as the Opposing guidance, the errors were larger than with intuitive guidance, showing that it is not only the information that leads to the reduction of errors. Surprisingly, the information did not appear to contribute at all, as the errors in the two non-intuitive conditions tended to be larger than without guidance.
In Experiment 2, subjects’ movements differed considerably from those in the blind trials of Experiment 1. Subjects now made explorative movements to determine the center of the force fields (Fig 6).
In Experiment 2, without a preceding visual trial and without visual targets, subjects made more explorative movements than in Experiment 1.
We found significant main effects of guidance condition on accuracy, precision, reaching time, and path length (Fig 7; F2,18 = 5.204, p = 0.016; F1,9 = 7.400, p = 0.024; F2,18 = 12.427, p<0.001; F1,9 = 8.102, p = 0.019, respectively). Post-hoc tests revealed that with Assisting guidance movements were completed significantly faster, with shorter path lengths, and more precisely than with Opposing and Perpendicular guidance, but there were no significant differences in accuracy between the conditions. Thus, in the haptic trials the same main effects were present as in the blind trials of Experiment 1.
In Experiment 1, haptic guidance only improved performance when the force field provided guidance forces towards the target (Assisting), despite the fact that the two other force fields (Opposing and Perpendicular) provided the same amount of information regarding the position of the target. In Experiment 2, the benefit of intuitive guidance on the performance measures (accuracy, precision and reaching time) persisted. In the less intuitive conditions the movements had longer path lengths. Possibly, in those conditions, the forces disturbed the subject’s reach. Alternatively, the subjects might have needed more time and force inputs to extract the information from the Opposing and Perpendicular forces, because the movements appeared to be more exploratory. Apparently, it is essential to provide the guidance in a way that directly helps perform the task, perhaps because people intuitively expect a force to be helpful in these circumstances, but it could also be that because such guidance assists performance it seems to them to be trustworthy, thereby making its use intuitive.
One might argue that subjects just need more practice to learn to use the less intuitive guidance implementations. We therefore explored whether we could find any learning effects in the accuracy and precision in the blind trials of Experiment 1 and the haptic trials of Experiment 2, but none were found (not shown). For Experiment 1, this suggests that the subjects chose to ignore the information that was present in the guidance and to trust their memory of the position, based on the preceding visual trial. In an informal debriefing, subjects indicated that they were unaware that only two target positions were used; some reported having perceived many more. The better performance in the blind trials (with preceding visual trials) than in the haptic trials reflects the benefit of the memory of the position from the preceding trial. The path lengths were also considerably longer without prior knowledge of the target location (compare the haptic trials of Experiment 2 with the blind trials of Experiment 1), indicating more exploratory movements in the haptic trials (see Fig 6). The main finding is that guidance did not reduce the errors unless it assisted the movements, despite the lack of target dynamics and time constraints, and despite subjects taking more time in the other conditions.
Another possible explanation for our data is that the positive stiffness that the assisting guidance provides reduces variability due to motor noise (“Guidance directly influences the execution errors, either reducing or enlarging these errors”). However, the same reasoning would predict that performance would be worse with the negative stiffness of the Opposing condition than in the Null and Perpendicular conditions, which it is not, indicating that the results cannot easily be explained by the combined mechanical properties of the human and the robot.
The much smaller errors in the visual trials than in the blind trials, even with Assisting guidance, show that the final position is primarily (maybe even exclusively) determined by (inaccuracies of) the subject. If the forces near the target dragged the subject to the target, the accuracy would not improve (much) with visual feedback. Moreover, if the forces near the target moved the subject to it when assisting, then subjects would have been able to let the forces move them away from it when opposing. The extent to which one can know that one is at the target does not depend on the kind of force field, because whenever there was a noticeable force or resulting movement the subject could know that they were not on target. Apparently, there was a considerable range within which the applied forces were too low to overcome the subject’s impedance and move the hand. If these subtle forces are difficult to perceive, what makes Assisting guidance so special? Why would finding the bottom of a valley be easier than finding the top of a hill?
Subjects might trust the guidance more when it is assisting them, because they feel that they are in agreement with the system, collectively moving towards the target. This feels intuitive in the sense that one instantly understands the system’s intentions. At the end point the subjects may therefore be more inclined to ‘listen’ to what the guidance is telling them. On top of that, the implementation of the Assisting guidance allows subjects to give authority to the guidance by lowering their stiffness. This ties closely into optimal control [29], where the subject may be aware that the noise of the robot is below that of his/her own motor noise. The destabilizing property of the Opposing condition may evoke behavior that preserves stability, such as cocontraction, which may mask the effects of the guidance through signal-dependent noise [30]. Yet stability alone cannot fully explain the results, as the Perpendicular condition has no radial (inward or outward) stiffness and did not improve performance with respect to the Null or Opposing condition either. Subjects may have been too stubborn to reduce their stiffness in the Opposing and Perpendicular conditions. Possibly the fact that giving way to the guidance by lowering one’s stiffness does not in itself work towards attaining the target makes the operator unwilling to lower his or her stiffness in order to better extract the information.
Our finding that informative but non-intuitive guidance did not improve performance may be useful in applications such as rehabilitation. Methods that use interactive visuo-haptic environments have potential advantages over more standard forms of clinical care. Therapy robots typically move a limb along a pre-specified trajectory, where deviations from this trajectory result in forces towards it [28,31]. These forces range from soft guidance with assisting forces to hard guidance with tunnel walls around the pre-specified trajectories. In the latter, the subject can be fully passive and have the robot perform the movement [32,33]. So far, however, motor learning with haptic guidance has not revolutionized the field. In fact, there are examples where learning of novel visuomotor transformations with haptic guidance added to visual feedback was poorer than without haptic guidance [28,34]. The motor system adapts to the environment, including the haptic guidance, creating aftereffects that interfere with performance when the haptic guidance is taken away [35–37]. Our finding may indicate that interaction with haptic guidance occurs on a subconscious level, which may be a second cause of the limited benefit of haptic guidance in motor learning.
Conclusion and Implications
Subjects only improved performance on a position reproduction task when the additional haptic information was presented in an intuitive, assistive manner. The same information presented non-intuitively was ignored, even in our paradigm with static targets and no time constraints. Moreover, the effects on accuracy, precision, reaching time, and path length persisted when performing a purely haptic task, indicating that the effects were not specific to movements towards remembered positions from visual trials.
Intuitive implementation of guidance is important and likely ties into effectively combining the capabilities and limitations of the human and the machine (optimal control).
S1 Data. Raw data of Experiment 1 (Raw_Data_Exp1.zip).
We thank MotekForce Link for support using D-Flow and for developing the driver for interfacing the control software suite with the Phantom haptic device. This research is part of the H-Haptics perspective program supported by the Dutch Technology Foundation STW, which is part of the Netherlands Organization for Scientific Research (NWO): STW grant 12160.
Conceived and designed the experiments: WM. Performed the experiments: WM. Analyzed the data: WM IK EB JS. Contributed reagents/materials/analysis tools: WM. Wrote the paper: WM IK EB JS.
- 1. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415: 429–433. pmid:11807554
- 2. Van Beers RJ, Sittig AC, Denier van der Gon JJ (1999) Integration of proprioceptive and visual position-information: An experimentally supported model. Journal of Neurophysiology 81, 1355–1364. pmid:10085361
- 3. Rincon-Gonzalez L, Buneo CA, Helms Tillery SI (2011) The proprioceptive map of the arm is systematic and stable, but idiosyncratic. PLoS One 6(11):e25214. pmid:22110578
- 4. Smeets JBJ, Van Den Dobbelsteen JJ, De Grave DDJ, Van Beers RJ, Brenner E (2006) Sensory integration does not lead to sensory calibration. Proc Natl Acad Sci USA 103: 18781–18786. pmid:17130453
- 5. Wilson ET, Wong J, Gribble PL (2010) Mapping proprioception across a 2D horizontal workspace. PLoS One 5: e11851. pmid:20686612
- 6. Kuling IA, Brenner E, Smeets JB (2013) Proprioception is robust under external forces. PLoS One 8(9):e74236. pmid:24019959
- 7. Kuling IA, Brenner E, Smeets JB (2015) Torques do not influence proprioceptive localization of the hand. Exp Brain Res 233(1):61–8. pmid:25200177
- 8. Kurtzer I, Pruszynski JA, Scott SH (2009) Long-latency responses during reaching account for the mechanical interaction between the shoulder and elbow joints. J Neurophysiol 102(5):3004–15. pmid:19710379
- 9. Scott SH (1999) Apparatus for measuring and perturbing shoulder and elbow joint positions and torques during reaching. J Neurosci Methods 89(2):119–27. pmid:10491942
- 10. Shadmehr R, Mussa-Ivaldi FA (1994) Adaptive representation of dynamics during learning of a motor task. J Neurosci 14(5 Pt 2):3208–24. pmid:8182467
- 11. Wu HG, Smith MA (2013) The generalization of visuomotor learning to untrained movements and movement sequences based on movement vector and goal location remapping. J Neurosci 33(26):10772–89. pmid:23804099
- 12. Cordo PJ, Flanders M (1989) Sensory control of target acquisition. Trends Neurosci 12: 110–117. pmid:2469217
- 13. O’Malley MK, Gupta A, Gen M, Li Y (2006) Shared Control in Haptic Systems for Performance Enhancement and Training. ASME J Dynamic Systems Measurement and Control 128:75–85.
- 14. Wildenbeest JG, Abbink DA, Heemskerk CJ, van der Helm FC, Boessenkool H (2013) The impact of haptic feedback quality on the performance of teleoperated assembly tasks. IEEE Trans Haptics 6(2):242–52. pmid:24808307
- 15. Draper JV, Moore WE, Herndon JN, Weil BS (1987) Effects of Force Reflection on Servomanipulator Task Performance. Proc Int’l Topical Meeting Remote Systems and Robots in Hostile Environments.
- 16. Massimo M, Sheridan T (1989) Variable Force and Visual Feedback Effects on Teleoperator Man-Machine Performance. Proc NASA Conf Space Robotics 1:89–98.
- 17. Hannaford B, Wood L, McAffee DA, Zak H (1991) Performance Evaluation of a Six-Axis Generalized Force-Reflecting Teleoperator, IEEE Trans. Systems, Man and Cybernetics 21(3): 620–633.
- 18. Vitense HS (2003) Multimodal Feedback: An Assessment of Performance and Mental Workload. Ergonomics 46:68–87. pmid:12554399
- 19. Abbink DA, Mulder M, Boer ER (2011) Haptic shared control: smoothly shifting control authority? Cognition, Technology & Work 14:19–28.
- 20. Mars F, Deroo M, Hoc JM (2014) Analysis of human-machine cooperation when driving with different degrees of haptic shared control. IEEE Trans Haptics 7(3):324–33. pmid:25248215
- 21. Griffiths PG, Gillespie RB (2005) Sharing Control between Humans and Automation Using Haptic Interface: Primary and Secondary Task Performance Benefits. Human Factors 47:574–590. pmid:16435698
- 22. Sheridan TB (2002) Humans and Automation: System Design and Research Issues. Wiley-Blackwell, 2002.
- 23. Mulder M, Abbink DA, Boer ER (2008) The effect of haptic guidance on curve negotiation behavior of young, experienced drivers. SMC 2008, IEEE International Conference on Systems, Man and Cybernetics, pp. 804–809.
- 24. Mulder M, Abbink DA, Boer ER (2012) Sharing Control With Haptics: Seamless Driver Support From Manual to Automatic Control. Human Factors: The Journal of the Human Factors and Ergonomics Society 54:681–686.
- 25. Rosenberg LB (1993) Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation. Proc Virtual Reality Ann Int’l Symp 1993.
- 26. Boessenkool H, Abbink DA, Heemskerk CJ, van der Helm FC, Wildenbeest JG (2013) A task-specific analysis of the benefit of haptic shared control during telemanipulation. IEEE Trans Haptics 6(1):2–12. pmid:24808263
- 27. Dvorkin AY, Ramaiya M, Larson EB, Zollman FS, Hsu N, Pacini S, et al. (2013) A "virtually minimal" visuo-haptic training of attention in severe traumatic brain injury. J Neuroeng Rehabil 10:92. pmid:23938101
- 28. Van Asseldonk EH, Wessels M, Stienen AH, van der Helm FC, van der Kooij H (2009) Influence of haptic guidance in learning a novel visuomotor task. J Physiol Paris 103(3–5):276–85. pmid:19665551
- 29. Todorov E, Jordan MI (2002) Optimal feedback control as a theory of motor coordination. Nat Neurosci 5(11):1226–35. pmid:12404008
- 30. Hamilton AF, Jones KE, Wolpert DM (2004) The scaling of motor noise with muscle strength and motor unit number in humans. Exp Brain Res 157(4):417–30. pmid:15014922
- 31. Aisen ML, Krebs HI, Hogan N, McDowell F, Volpe BT (1997) The effect of robot-assisted therapy and rehabilitative training on motor recovery following stroke. Arch Neurol 54:443–446. pmid:9109746
- 32. Hesse S, Schulte-Tigges G, Konrad M, Bardeleben A, Werner C (2003) Robot-assisted arm trainer for the passive and active practice of bilateral forearm and wrist movements in hemiparetic subjects. Arch Phys Med Rehabil 84:915–920. pmid:12808550
- 33. Lynch D, Ferraro M, Krol J, Trudell CM, Christos P, Volpe BT (2005) Continuous passive motion improves shoulder joint integrity following stroke. Clin Rehabil 19:594–599. pmid:16180594
- 34. Heuer H, Rapp K (2011) Active error corrections enhance adaptation to a visuo-motor rotation. Exp Brain Res 211:97–108. pmid:21472441
- 35. Heuer H, Lüttgen J (2014) Motor learning with fading and growing haptic guidance. Exp Brain Res 232(7):2229–42. pmid:24736860
- 36. Heuer H, Rapp K (2014) Haptic guidance interferes with learning to make movements at an angle to stimulus direction. Exp Brain Res 232(2):675–84. pmid:24276313
- 37. Novakovic V, Sanguineti V (2011) Adaptation to constant-magnitude assistive forces: kinematic and neural correlates. Exp Brain Res 209:425–436. pmid:21305377