Volume 31, Issue 3, pp. 417-428
Research Article

Bounded-input iterative learning control: Robust stabilization via a minimax approach

Brian Driessen (Corresponding Author)

Mechanical Engineering Department, Wichita State University, Wichita, KS 67260, USA

Correspondence to: Brian Driessen, Mechanical Engineering Department, Wichita State University, Wichita, KS 67260, USA. E-mail: [email protected]

Nader Sadegh

Department of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA

Kwan Kwok

Robotics Center, Sandia National Laboratories, Albuquerque, NM 87185-1003, USA
First published: 16 August 2016

Summary

In this paper, we consider the design problem of making the convergence of the bounded-input, multi-input iterative learning controller presented in our previous work robust to errors in the model-based value of the input-output Jacobian matrix via a minimax (min-max, or 'minimize the worst case') approach. We propose to minimize the worst-case (largest) value of the infinity-norm of the matrix whose norm, if less than unity, implies convergence of the controller; this is the matrix associated with monotonicity of the sequence of input-error norms. The uncertainty in the input-output Jacobian is taken to be additive and linear. Theorem 3.1 and its proof show that the worst-case infinity-norm is minimized by choosing the gain matrix as either the inverse of the centroid of the set of possible input-output Jacobians or the zero matrix, and an explicit expression is given both for the criterion used to choose between the two matrices and for the resulting minimum worst-case infinity-norm. We showed previously that the matrix-norm condition associated with monotonicity of a sequence of output-error norms is not sufficient to assure convergence of the bounded-input controller. The importance of knowing which norm condition is the relevant one is demonstrated by showing that the set of minimizers of the minimax problem formulated with the wrong norm does not, in general, contain minimizers of the maximum of the relevant norm and, moreover, can lead to a gain matrix that destroys the assured convergence of the bounded-input controller established in our previous work. Copyright © 2016 John Wiley & Sons, Ltd.
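As a rough sketch of the design problem summarized above (the notation here is ours, and it assumes, purely for illustration, that the convergence matrix has the familiar iterative-learning-control form I - LJ, with L the learning-gain matrix being designed and J the uncertain input-output Jacobian; the paper's exact matrix may differ), the minimax gain selection can be written as

\[
  L^{\star} \in \arg\min_{L}\; \max_{J \in \mathcal{J}} \bigl\lVert I - L J \bigr\rVert_{\infty},
  \qquad
  \mathcal{J} = \{\, \bar{J} + \Delta : \Delta \in \mathcal{D} \,\},
\]

where \(\mathcal{D}\) is the assumed additive uncertainty set and \(\bar{J}\) is the centroid of \(\mathcal{J}\). In this notation, Theorem 3.1 as summarized above states that the worst case is minimized by either \(L = \bar{J}^{-1}\) or \(L = 0\), with an explicit criterion determining which of the two to use.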
