
Personalized Adaptive Assistance With Reinforcement Learning Control Enhances Engagement, Performance, and Retention in Robot-Assisted Arm-Reaching Exercises

Minto R.; Boschetti G.
2026

Abstract

This study introduces a new Reinforcement Learning Assist-as-Needed (RL-AAN) controller for robot-assisted upper-limb rehabilitation after stroke, built on a modified action-dependent heuristic dynamic programming (ADHDP) framework. Unlike conventional adaptive assist-as-needed controllers based on Iterative Learning Control (ILC-AAN), the proposed RL-AAN controller autonomously adjusts the trade-off between movement errors and robot assistance in real time, based on the user's recent performance, while relying on a small set of high-level tunable parameters that require no subject-specific manual adjustment. The RL-AAN controller was implemented on a cable-driven, end-effector-type rehabilitation robot and validated against a conventional ILC-AAN controller through perturbation-based reaching tasks with a group of healthy individuals. Compared to ILC-AAN, the RL-AAN controller significantly reduced the amount of robot assistance required during training, thereby promoting active user participation and improving task performance. In retention tests, participants trained with the RL-AAN controller produced more accurate arm-reaching trajectories than those trained with ILC-AAN, highlighting the potential of RL-AAN for future use in exercise-based rehabilitation. Overall, this work contributes to ongoing research into control strategies that enable personalization in physical human-robot interaction (pHRI) and robot-assisted rehabilitation.
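To make the ADHDP idea referenced in the abstract concrete, the sketch below shows the generic actor-critic structure it relies on: a critic that approximates the cost-to-go Q(state, action) via temporal-difference learning, and an actor improved by descending the critic's estimate. This is a minimal toy illustration on an assumed 1-D error-tracking task, not the paper's controller; all variable names, the plant model, and the cost weights are assumptions for illustration only.

```python
import numpy as np

# Minimal ADHDP (action-dependent heuristic dynamic programming) sketch.
# Toy setup (assumed, not from the paper): state s = tracking error,
# action a = assistance level, quadratic cost trading error vs. effort.
rng = np.random.default_rng(0)
gamma = 0.95                     # discount factor
alpha_c, alpha_a = 0.05, 0.01    # critic / actor learning rates

# Linear-in-features critic Q(s, a) = w . phi(s, a); linear actor a = k * s.
w = np.zeros(3)                  # critic weights for features [s^2, a^2, s*a]
k = 0.0                          # actor gain

def phi(s, a):
    return np.array([s * s, a * a, s * a])

s = 1.0
for step in range(2000):
    a = k * s + 0.1 * rng.standard_normal()   # exploratory action
    s_next = 0.9 * s - 0.5 * a                # toy plant dynamics
    cost = s_next ** 2 + 0.1 * a ** 2         # error vs. assistance trade-off
    a_next = k * s_next
    # Critic: TD update toward cost + gamma * Q(s', a')
    td = cost + gamma * w @ phi(s_next, a_next) - w @ phi(s, a)
    w += alpha_c * td * phi(s, a)
    # Actor: step the gain to reduce the critic's estimate;
    # dQ/da = 2*w1*a + w2*s, and dQ/dk = dQ/da * s since a = k*s.
    dq_da = 2.0 * w[1] * a + w[2] * s
    k -= alpha_a * dq_da * s
    # Re-seed the state when the error has been driven near zero,
    # and clip to keep the toy simulation bounded.
    s = s_next if abs(s_next) > 1e-3 else float(rng.standard_normal())
    s = float(np.clip(s, -5.0, 5.0))
```

The "action-dependent" qualifier refers to the critic taking the action as an explicit input, which lets the actor be updated through the critic without a model of the plant.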

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3586318