Fig 1.
Schematic diagram of the overall architecture of the C-GAP model for hand function rehabilitation robot control, illustrating the workflow from multimodal signal processing and feature extraction via contrastive cross-modal attention and stacked GRU temporal modeling to adaptive PID control output.
Fig 2.
Structure diagram of the Contrastive Cross-Modal Attention (C-CMA) module, illustrating the process from multi-modal signal embedding to cross-modal interaction and feature fusion for temporal consistency modeling.
Fig 3.
Overview of the experimental platform, including the rehabilitation robot hardware and multimodal sensors.
Table 1.
Performance comparison of different models on the Ninapro DB5 and MUSED-I datasets.
Table 2.
Performance comparison of different models in terms of model parameters, hardware resource usage, inference latency, and training time.
Table 3.
Performance comparison of different models based on force limit event occurrence rate, safety threshold response time, and emergency stop success rate.
Fig 4.
Comparison of the generalization stability of the C-GAP, LLMT, and FL-HPR models on pathological data.
The horizontal axis represents the Fugl-Meyer score (0–66, reflecting the degree of motor function impairment in stroke patients; higher scores indicate less impairment), and the vertical axis represents the action classification accuracy for stroke patients (ACC_Stroke).
Table 4.
Performance comparison across different action categories on the Ninapro DB5 and MUSED-I datasets.
Fig 5.
Attention weight matrix of the C-CMA module for typical movements.
Fig 6.
Confusion matrices of the model's action classification results on the Ninapro DB5 dataset (left) and the MUSED-I dataset (right).
Fig 7.
The C-GAP model's total loss and overall accuracy as a function of training epochs on the Ninapro DB5 and MUSED-I datasets.
Table 5.
Ablation study: Performance comparison of different model configurations on the Ninapro DB5 and MUSED-I datasets.