Abstract

We examine how a human–robot interaction (HRI) system may be designed when input–output data from previous experiments are available. Our objective is to learn an optimal impedance in the assistance design for a cooperative manipulation task with a new operator. Due to variability between individuals, the design parameters that best suit one operator of the robot may not be the best parameters for another. However, by incorporating historical data using a linear autoregressive (AR-1) Gaussian process, the search for a new operator's optimal parameters can be accelerated. We lay out a framework for optimizing human–robot cooperative manipulation that requires only input–output data. We characterize the learning performance using a notion called regret, establish how the AR-1 model improves the bound on the regret, and numerically illustrate this improvement in the context of a human–robot cooperative manipulation task. Furthermore, through an additional numerical study, we show how the input–output nature of our approach provides robustness against modeling error.
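To make the idea concrete, the sketch below shows one way an AR-1 Gaussian-process surrogate can fuse a previous operator's input–output data with a new operator's measurements, and then drive an upper-confidence-bound (GP-UCB) search over a scalar impedance parameter. This is a minimal illustration under stated assumptions, not the paper's implementation: the kernel choices, the fixed autoregressive coefficient `rho`, the synthetic operator objectives `f_old` and `f_new`, and all numerical settings are hypothetical stand-ins.

```python
# Minimal sketch (hypothetical): AR-1 Gaussian-process surrogate plus a GP-UCB
# search over a scalar impedance parameter. Kernels, rho, and the synthetic
# operator objectives are assumptions, not values from the paper.
import numpy as np


def rbf(a, b, ell=0.3, sf=1.0):
    """Squared-exponential kernel for 1-D inputs a, b."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)


def ar1_posterior(x_star, x_old, y_old, x_new, y_new, rho=0.8, noise=1e-3):
    """Posterior mean/std of the new operator's objective f_new at x_star,
    under the AR-1 model f_new(x) = rho * f_old(x) + delta(x)."""
    k_delta = lambda a, b: rbf(a, b, ell=0.5, sf=0.5)   # discrepancy kernel (assumed)
    # Joint covariance over [old observations, new observations].
    K_oo = rbf(x_old, x_old)
    K_on = rho * rbf(x_old, x_new)
    K_nn = rho**2 * rbf(x_new, x_new) + k_delta(x_new, x_new)
    n = len(x_old) + len(x_new)
    K = np.block([[K_oo, K_on], [K_on.T, K_nn]]) + noise * np.eye(n)
    # Cross-covariance between f_new(x_star) and the stacked observations.
    k_star = np.hstack([rho * rbf(x_star, x_old),
                        rho**2 * rbf(x_star, x_new) + k_delta(x_star, x_new)])
    y = np.hstack([y_old, y_new])
    mu = k_star @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, k_star.T)
    prior_var = rho**2 * 1.0**2 + 0.5**2                # k_new(x*, x*) under the AR-1 prior
    var = prior_var - np.sum(k_star * v.T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))


def gp_ucb_search(objective_new, x_old, y_old, grid, n_iter=15, beta=2.0):
    """Sequentially query the impedance value with the highest upper confidence bound."""
    x_new = np.array([grid[len(grid) // 2]])            # arbitrary first query
    y_new = np.array([objective_new(x_new[0])])
    for _ in range(n_iter):
        mu, sd = ar1_posterior(grid, x_old, y_old, x_new, y_new)
        x_next = grid[np.argmax(mu + beta * sd)]        # UCB acquisition rule
        x_new = np.append(x_new, x_next)
        y_new = np.append(y_new, objective_new(x_next))
    return x_new[np.argmax(y_new)]                      # best impedance found so far


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical task-performance curves vs. normalized impedance setting.
    f_old = lambda x: np.exp(-(x - 0.40) ** 2 / 0.05)          # previous operator
    f_new = lambda x: 0.8 * np.exp(-(x - 0.55) ** 2 / 0.05)    # new operator, shifted optimum
    x_old = rng.uniform(0.0, 1.0, 25)
    y_old = f_old(x_old) + 0.02 * rng.standard_normal(25)
    grid = np.linspace(0.0, 1.0, 200)
    print("estimated optimal impedance:", gp_ucb_search(f_new, x_old, y_old, grid))
```

In this sketch, the previous operator's data tighten the posterior over the new operator's objective wherever the two are correlated through the autoregressive term, which is the mechanism by which historical data accelerate the search in the abstract's framing.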
