Robust adaptive model predictive control: Performance and parameter estimation
Summary
For systems with uncertain linear models, bounded additive disturbances and state and control constraints, a robust model predictive control (MPC) algorithm incorporating online model adaptation is proposed. Sets of model parameters are identified online and employed in a robust tube MPC strategy with a nominal cost. The algorithm is shown to be recursively feasible and input-to-state stable. Computational tractability is ensured by using polytopic sets of fixed complexity to bound parameter sets and predicted states. Convex conditions for persistence of excitation are derived and are related to probabilistic rates of convergence and asymptotic bounds on parameter set estimates. We discuss how to balance conflicting requirements on control signals for achieving good tracking performance and parameter set estimate accuracy. Conditions for convergence of the estimated parameter set are discussed for the case of fixed complexity parameter set estimates, inexact disturbance bounds, and noisy measurements.
1 INTRODUCTION
Model predictive control (MPC) repeatedly solves a finite-horizon optimal control problem subject to input and state constraints. At each sampling instant a model of the plant is used to optimize predicted behavior and the first element of the optimal predicted control sequence is applied to the plant.1 Any mismatch between model and plant causes degradation of controller performance.2 As a result, the amount of model uncertainty strongly affects the bounds of the achievable performance of a robust MPC algorithm.3
To avoid the disruption caused by intrusive plant tests,2 adaptive MPC attempts to improve model accuracy online while satisfying operating constraints and providing stability guarantees. Although the literature on adaptive control has long acknowledged the need for persistently exciting (PE) inputs for system identification,4 few articles have explored how to incorporate PE conditions with feasibility guarantees within adaptive MPC.5 In addition, adaptive MPC algorithms must balance conflicting requirements for system identification accuracy and computational complexity.5, 6
Various methods for estimating system parameters and meeting operating constraints are described in the adaptive MPC literature. Depending on the assumptions on model parameters, parameter identification methods such as recursive least squares (RLS)7, comparison sets,8 set membership identification,9, 10 and neural networks11, 12 have been proposed. Heirung et al13 propose an algorithm where the unknown parameters are estimated using RLS and system outputs are predicted using the resulting parameter estimates. The use of RLS introduces nonlinear equality constraints into the optimization. On the other hand, the comparison model approach described in Aswani et al8 addresses the trade-off between probing for information and output regulation by decoupling these two tasks; a nominal model is used to impose operating constraints whereas performance is evaluated via a model learned online using statistical identification tools. However, the use of a nominal model implies that the comparison model approach cannot guarantee robust constraint satisfaction.
Tanaskovic et al9 consider a linear finite impulse response (FIR) model with measurement noise and constraints. This approach updates a model parameter set using online set membership identification; constraints are enforced for the entire parameter set and performance is optimized for a nominal prediction model. The article proves recursive feasibility but does not show convergence of the identified parameter set to the true parameters. To avoid the restriction to FIR models, Lorenzen et al10 consider a linear state space model with additive disturbance. An online-identified set of possible model parameters is used to robustly stabilize the system. However, the approach suffers from a lack of flexibility in its robust MPC formulation, which is based on homothetic tubes,14 allowing only the centers and scalings of tube cross-sections to be optimized online, and it does not provide convex and recursively feasible conditions to ensure PE control inputs.
In this article, we also consider linear systems with parameter uncertainty, additive disturbances, and constraints on system states and control inputs. Compared with Reference 10, the proposed algorithm reduces the conservativeness in approximating predicted state tubes by adopting more flexible cross-section representations. Building on Reference 15, we take advantage of fixed complexity polytopic tube representations and use hyperplane and vertex representations interchangeably to further simplify computation. We use, similarly to Reference 13, a nominal performance objective, but we impose constraints robustly on all possible models within the identified model set. We prove that the closed-loop system is input-to-state stable (ISS). In comparison with the min-max approach of Reference 15, the resulting performance bound takes the form of an asymptotic bound on the 2-norm of the sequence of closed-loop states in terms of the 2-norms of the additive disturbance and parameter estimate error sequences. In addition, we convexify the persistence of excitation (PE) condition around a reference trajectory and include a penalty term in the cost function to promote convergence of the parameter set. The convexification method is somewhat analogous to that proposed in References 16, 17, where the uncertainty information of the parameter set is approximated using a nominal gain. Here, however, the convexification is obtained by direct linearization of PE constraints on predicted trajectories. The cost function modification proposed here allows the relative importance of the two objectives, namely, controller performance and convergence of model parameters, to be specified.
Bai et al18 consider a particular set membership identification algorithm and show that the parameter set estimate converges with probability 1 to the actual parameter vector (assumed constant) if: (a) a tight bound on disturbances is known; (b) the input sequence is persistently exciting; and (c) the minimal parameter set estimate is employed. However, the minimal set estimate can be arbitrarily complex, and to provide computational tractability various nonminimal parameter set approximations have been proposed, such as n-dimensional balls19 and bounded complexity polytopes.9 The current article allows the use of parameter set estimates with fixed complexity and proves that, despite their approximate nature, such parameter sets converge with probability 1 to the true parameter values. We also derive lower bounds on convergence rates for the case of inexact knowledge of the disturbance bounding set and for the case that model states are estimated in the presence of measurement noise.
This article has five main parts. Section 2 defines the problem and basic assumptions. Section 3 gives details of the parameter estimation, robust constraint satisfaction, nominal cost function, convexified PE conditions, and the MPC algorithm. Section 4 proves recursive feasibility and ISS of the proposed algorithm. Section 5 proves the convergence of the parameter set in various conditions and Section 6 illustrates the approach with numerical examples.
Notation: $\mathbb{Z}$ and $\mathbb{R}$ denote the sets of integers and reals, and $\mathbb{Z}_{\geq 0} = \{n \in \mathbb{Z} : n \geq 0\}$, $\mathbb{R}_{\geq 0} = \{x \in \mathbb{R} : x \geq 0\}$. The ith row of a matrix A and ith element of a vector a are denoted $[A]_i$ and $[a]_i$. Vectors and matrices of 1s are denoted $\mathbf{1}$, and $I$ is the identity matrix. For a vector a, $\|a\|$ is the Euclidean norm and $\|a\|_\infty = \max_i |[a]_i|$; the largest and smallest elements of a are denoted $\max(a)$ and $\min(a)$. The absolute value of a scalar s is $|s|$ and the floor value is ⌊s⌋. $|\mathcal{S}|$ is the number of elements in a set $\mathcal{S}$. $\mathcal{X} \oplus \mathcal{Y} = \{x + y : x \in \mathcal{X},\ y \in \mathcal{Y}\}$ denotes Minkowski addition for sets $\mathcal{X}$ and $\mathcal{Y}$. The matrix inequality A ≽ 0 (or A ≻ 0) indicates that A is a positive semidefinite (positive definite) matrix. The k steps ahead predicted value of a variable x is denoted $x_k$, and the more complete notation $x_{k|t}$ indicates the k steps ahead prediction at time t. A continuous function $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is a $\mathcal{K}$-function if it is strictly increasing with $\alpha(0) = 0$, and is a $\mathcal{K}_\infty$-function if in addition $\alpha(s) \to \infty$ as s → ∞. A continuous function $\beta : \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is a $\mathcal{KL}$-function if, for all t ≥ 0, $\beta(\cdot, t)$ is a $\mathcal{K}$-function, and, for all s ≥ 0, $\beta(s, \cdot)$ is decreasing with $\beta(s, t) \to 0$ as t → ∞. For functions f and g, $f \circ g$ denotes their composition, $f \circ g(s) = f(g(s))$, and $f^j = f \circ f^{j-1}$ with $f^0(s) = s$.
2 PROBLEM FORMULATION AND PRELIMINARIES










Assumption 1. (Additive disturbance) The disturbance wt lies in a convex and compact polytope, where




Assumption 2. (Parameter uncertainty) The system matrices A and B are affine functions of the parameter vector:





Assumption 3. (State and control constraints) The set




Assumption 4. (Feedback gain and contractive set) There exists a polytopic set {x : Tx ≤ 1} and a feedback gain K such that this set is λ-contractive for some λ ∈ [0, 1), that is



3 ADAPTIVE ROBUST MPC
In this section, a parameter estimation scheme based on References 20, 21 is introduced. We then discuss the construction of tubes to bound predicted model states and associated constraints.
3.1 Set-based parameter estimation
At time t we use observations of the system state xt to determine a set of unfalsified model parameters. This unfalsified set is then combined with the previous parameter set estimate to construct a new parameter set estimate.
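A minimal sketch of this type of set membership update is given below, assuming the affine parameterization of Assumption 2 can be written as A(θ)x + B(θ)u = D(x, u)θ + d(x, u) and that the parameter set estimate uses a fixed template matrix H_Θ; the function names and the use of SciPy are illustrative only, not part of the article.

```python
import numpy as np
from scipy.optimize import linprog

def unfalsified_halfspaces(x_next, x, u, D_fun, d_fun, Hw, hw):
    """Half-space form of the unfalsified set {theta : x_next - D(x,u) theta - d(x,u) in W},
    where W = {w : Hw w <= hw} is the disturbance polytope of Assumption 1."""
    D, d = D_fun(x, u), d_fun(x, u)
    # Hw (x_next - D theta - d) <= hw   <=>   (-Hw D) theta <= hw - Hw (x_next - d)
    return -Hw @ D, hw - Hw @ (x_next - d)

def fixed_complexity_update(H_theta, h_prev, A_new, b_new):
    """Tighten each facet offset of the fixed-complexity estimate {theta : H_theta theta <= h}
    so that it still contains the intersection of the previous estimate with the
    unfalsified set; one small LP per facet."""
    A_ineq = np.vstack([H_theta, A_new])
    b_ineq = np.concatenate([h_prev, b_new])
    h_new = np.empty(H_theta.shape[0])
    for i, row in enumerate(H_theta):
        # maximise row @ theta over the intersection (linprog minimises, so negate)
        res = linprog(-row, A_ub=A_ineq, b_ub=b_ineq, bounds=(None, None))
        h_new[i] = -res.fun
    return h_new
```

Because each facet offset can only be tightened, the representation complexity of the estimate never grows, at the price of storing an outer approximation of the exact intersection.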

























3.2 Polytopic tubes for robust constraint satisfaction































































3.3 Objective function





















3.4 Augmented objective function and persistent excitation





Previously proposed MPC strategies that incorporate PE constraints consider the PE condition to be defined on an interval such as {t − Nu + 1, … , t}, where t is the current time, which means that the PE constraint depends only on the first element of the predicted control sequence. Marafioti et al23 simplify the PE condition by expressing it as a nonconvex quadratic inequality in ut. Likewise, Lorenzen et al10 show that the PE condition is equivalent to a nonconvex constraint on the current control input. Lu and Cannon15 linearize the PE condition about a reference trajectory and thus obtain a sufficient condition for persistent excitation.


The inclusion of predicted future states and control inputs in this PE condition allows for greater flexibility in meeting the constraint. To avoid nonconvex constraints, we derive a convex relaxation that provides a sufficient condition for (27).
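To illustrate the type of relaxation involved (in notation introduced here for illustration, not necessarily that of (27)-(29)): suppose the PE requirement over the prediction window takes the form $\sum_k D_k^\top D_k \succeq \lambda I$, where $D_k$ is the regressor built from the predicted state and input at step $k$, so that the left-hand side is a nonconvex function of the decision variables. Since $(D_k - D_k^r)^\top (D_k - D_k^r) \succeq 0$ for any reference regressor $D_k^r$,

$$\sum_k D_k^\top D_k \;\succeq\; \sum_k \left( (D_k^r)^\top D_k + D_k^\top D_k^r - (D_k^r)^\top D_k^r \right),$$

so requiring the right-hand side to dominate $\lambda I$ is linear in the predicted regressors $D_k$ and yields a convex sufficient condition for the original PE requirement, with conservativeness that shrinks as the predicted trajectory approaches the reference trajectory.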




















where are intermediate variables.






Although incorporating a condition such as (27) or the convex relaxation (28) and (29) into an MPC strategy does not ensure that the closed-loop system satisfies a corresponding PE condition, its effect on the convergence rate of the estimated parameter set is significant, as shown in the numerical example in Section 6. In addition, the weighting coefficient is meaningful and straightforward to tune, and the PE constraint can be switched off simply by setting it to zero.
3.5 Proposed algorithm
- 1. Obtain the current state xt and set x = xt.
- 2. Update the parameter set estimate using (11) and the nominal parameter vector using (25), and solve (23).
- 3. At t = 0, compute the initial reference state and control sequences, for example, by solving the nominal problem described in Remark 4. At t > 0, compute the reference state and control sequences using the solution obtained at t − 1 and (30).
- 4. Compute the optimal decision variables as the solution of the semidefinite program (SDP):
- 5. Implement the current control input.
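For orientation, the online steps above can be arranged as the following loop (a schematic sketch only; the helper functions are placeholders standing in for the updates (11) and (25), the reference computation of Step 3, and the SDP of Step 4, and are not the article's API):

```python
def online_step(x_t, t, theta_set, theta_nom, prev_solution,
                update_set, update_nominal, compute_reference, solve_sdp):
    """One pass through online steps 1-5 of the proposed algorithm (schematic)."""
    # Step 1: obtain the current state.
    x = x_t
    # Step 2: update the parameter set estimate and nominal parameter vector.
    theta_set = update_set(theta_set, x, t)            # placeholder for (11)
    theta_nom = update_nominal(theta_nom, theta_set)   # placeholder for (25)
    # Step 3: reference sequences (nominal problem at t = 0, shifted solution otherwise).
    x_ref, u_ref = compute_reference(x, t, prev_solution, theta_nom)
    # Step 4: solve the robust tube MPC semidefinite program.
    solution = solve_sdp(x, theta_set, theta_nom, x_ref, u_ref)
    # Step 5: apply the first element of the optimized control sequence.
    u_t = solution["u"][0]
    return u_t, theta_set, theta_nom, solution
```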
Remark 2. In offline step 1, T can be chosen so that the polytope in Assumption 4 approximates a robust control invariant set. Using a vertex representation, the required matrix can be computed by solving an SDP (see, e.g., Reference 24, Chapter 5). This approach allows the number of rows in T to be specified by the designer. However, T can alternatively be chosen so that {x : Tx ≤ 1} approximates the minimal robust positively invariant (RPI) set or the maximal RPI set for the system (1) and (2) under a specified stabilizing feedback law.22
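As a practical aside, for a given T and K, λ-contractivity of the polytope {x : Tx ≤ 1} in Assumption 4 can be verified (as opposed to synthesized) by a standard vertex-and-LP test; the sketch below assumes the vertices of the parameter set yield matrices A_j, B_j, and uses SciPy purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def is_lambda_contractive(T, K, A_vertices, B_vertices, lam, tol=1e-9):
    """Check that {x : T x <= 1} is lambda-contractive under u = K x for every
    vertex (A_j, B_j) of the parameter set: for each facet of T and each vertex,
    the maximum of [T (A_j + B_j K) x]_i over {T x <= 1} must not exceed lambda."""
    ones = np.ones(T.shape[0])
    for A, B in zip(A_vertices, B_vertices):
        Phi = T @ (A + B @ K)
        for row in Phi:
            res = linprog(-row, A_ub=T, b_ub=ones, bounds=(None, None))
            if -res.fun > lam + tol:
                return False
    return True
```

Synthesizing T (and K) so that this test passes is a separate design problem, addressed in the remark above via an SDP or via RPI set approximations.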
Remark 3. In offline step 3, the computation subject to (6) for given T can be performed by solving an LP using a vertex representation [22, chap. 7]. The minimization objective is chosen to make the constraints of the online problem easier to satisfy. In particular, choosing K so that the condition in (6) holds ensures that a point exists satisfying the terminal constraints (20b-d).
Remark 4. At t = 0, the reference state and control sequences may be computed by solving

Remark 5. The online computation of the proposed algorithm may be reduced by updating the parameter set estimate only once every Nu > 1 time steps. For example, in Step 2, the update (11) is applied for t ∈ {Nu, 2Nu, …}, and the parameter set estimate is held constant at all times t ∉ {Nu, 2Nu, …}.
In Section 4, we use the nestedness of successive parameter set estimates to show that the solution of the online MPC optimization at time t − 1 forms part of a feasible solution at time t. As a result, the reference sequences computed in Step 3 are feasible for the online MPC problem at all times t > 0.
4 RECURSIVE FEASIBILITY AND STABILITY
4.1 Recursive feasibility




Proposition 1. (Recursive feasibility) The online MPC optimization is feasible at all times t > 0 if it is feasible at t = 0 and Assumption 1 holds for all time t.
Proof. If the online MPC optimization is feasible at t − 1, then at time t the solution obtained at t − 1
is: feasible for (14) because
; feasible for (18) and (19) for
because
is feasible for (18) and (19) for
and
; and feasible for (18) and (19) for k = N − 1 and feasible for (20) because
is feasible for (20) and
. Finally, we note that (22) is necessarily feasible and (28) necessarily holds for some scalar
if
.
4.2 Input-to-state stability












Lemma 2. (ISS-Lyapunov function [25]) The system



(i). The set contains the origin in its interior, is compact, and is a robust positively invariant set for (32), that is, the successor state remains in the set for all states in the set and all admissible disturbances.
(ii). There exist $\mathcal{K}_\infty$-functions, $\mathcal{K}$-functions, and a function V such that V is continuous on the set and, for all states in the set and all admissible disturbances,


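For reference, conditions of the type denoted (33) and (34) are commonly stated in the following standard form (quoted here in generic notation; the article's exact constants and arguments sit in the omitted equations):

$$\alpha_1(\|x\|) \le V(x) \le \alpha_2(\|x\|), \qquad V\bigl(f(x,w)\bigr) - V(x) \le -\alpha_3(\|x\|) + \sigma(\|w\|),$$

with $\alpha_1, \alpha_2 \in \mathcal{K}_\infty$ and $\alpha_3, \sigma \in \mathcal{K}$, in which case $V$ is called an ISS-Lyapunov function for the system (32).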
In the following, we define the feasible set as the set of states x for which the online MPC problem is feasible, and we assume that this set is nonempty. In addition, for a given state x, nominal parameter vector, and parameter set, we denote the optimal solution of the problem and let the corresponding optimal value of the cost in (24) serve as the value function.
Theorem 1. Assume that the nominal parameter vector is not updated, that is, it is held at its initial value for all times t. Then, for all initial conditions in the feasible set, the system (1) under the control law given by the first element of the optimized control sequence robustly satisfies the constraint (2) and is ISS with region of attraction equal to the feasible set.
Proof. We first show that condition (i) of Lemma 2 is satisfied with the feasible set defined above. If the online MPC problem
is feasible at t = 0, then (20) implies that
exists such that
satisfies

Therefore is feasible for
for all
, and hence
. In addition, the robust invariance of
implied by (35) ensures that
, and since
due to Assumption 1,
must contain the origin in its interior. Furthermore, Proposition 1 shows that if
is initially feasible, then it is feasible for all t ≥ 0, so that
is positively invariant for (32). Finally,
is necessarily compact by Assumption 3, and it follows that condition (i) of Lemma 2 is satisfied if
.
We next consider the bounds (33) in condition (ii) of Lemma 2. For a given state x, nominal parameter vector, and parameter set, the online MPC problem with Q, R ≻ 0 is a convex quadratic program. Therefore, the optimal cost is a continuous, positive definite, piecewise quadratic function of x26 for each nominal parameter vector and parameter set. Furthermore, the parameter set is compact due to Assumption 2, and it follows that there exist $\mathcal{K}_\infty$-functions such that (33) holds with



















Following the proof of Theorem 5 in Limon et al25 and using the weak triangle inequality for $\mathcal{K}$-functions,27 we obtain











Corollary 1. Assume that the nominal parameter vector is updated at each time step using (25). Then, for all initial conditions in the feasible set, the system (1) under the control law given by the first element of the optimized control sequence robustly satisfies the constraint (2) and is ISS with region of attraction equal to the feasible set.
Proof. It can be shown that condition (i) of Lemma 2 and the bounds (33) in condition (ii) of Lemma 2 are satisfied, for some $\mathcal{K}_\infty$-functions, using the same argument as the proof of Theorem 1. To show that (34) is also satisfied, and hence complete the proof, we use an argument similar to the proof of Theorem 1. In particular, as before we define a candidate solution using the optimal solution at the previous time step, and








The update law (25) ensures that since
. Hence, for all
, we have


























Remark 6. The ISS property implies that there exist a $\mathcal{KL}$-function and $\mathcal{K}$-functions such that, for all feasible initial conditions, the closed-loop system trajectories satisfy, for all times t ≥ 0,

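A generic form of the bound referred to in Remark 6, written in notation introduced here for illustration (with $\tilde{\theta}_k$ denoting the parameter estimation error), is

$$\|x_t\| \;\le\; \beta\bigl(\|x_0\|, t\bigr) + \gamma_1\Bigl(\max_{0 \le k \le t} \|w_k\|\Bigr) + \gamma_2\Bigl(\max_{0 \le k \le t} \|\tilde{\theta}_k\|\Bigr), \qquad t \ge 0,$$

where $\beta$ is the $\mathcal{KL}$-function and $\gamma_1, \gamma_2$ are the $\mathcal{K}$-functions of the remark; the exact arguments used in the article may differ.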
5 CONVERGENCE OF THE ESTIMATED PARAMETER SET




Bai et al18 show that, for such a system, the diameter of the parameter set constructed using a set-membership identification method converges to zero with probability 1 if the uncertainty bound is tight and the regressor Dt is PE. We extend this result and prove convergence of the estimated parameter set in more general cases. Specifically, in this article we avoid the computational intractability of a minimum volume update law, in which the parameter set estimate is the exact intersection of the previous estimate with the unfalsified set. Instead, we derive stochastic convergence results for parameter sets of fixed complexity, whose update law produces an outer approximation of this intersection.
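Writing $\Theta_t$ for the parameter set estimate and $\Delta_t$ for the unfalsified set determined at time $t$ (symbols used here only for illustration), the two kinds of update law are

$$\Theta_t = \Theta_{t-1} \cap \Delta_t \quad \text{(minimum volume, exact intersection)} \qquad \text{versus} \qquad \Theta_t \supseteq \Theta_{t-1} \cap \Delta_t \quad \text{(fixed complexity outer approximation).}$$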
In this section, we first discuss relevant results for an update law that gives a minimal parameter set estimate for a given sequence of states (but whose representation has potentially unbounded complexity), before considering convergence of the fixed-complexity parameter set update law of Section 3.1. We then compute bounds on the parameter set diameter if the bounding set for the additive disturbances is overestimated. Finally, we demonstrate that similar results can be achieved when errors are present in the observed state, as would be encountered, for example, if the system state were estimated from noisy measurements. In each case we relate the PE condition to the rate of parameter set convergence. We also prove that the parameter set converges to a point (or minimal uncertainty set) with probability one.
In common with Bai et al,18, 29 we do not assume a specific distribution for the disturbance input. However, the set bounding the model disturbance is assumed to be tight in the sense that there is nonzero probability of a realization wt lying arbitrarily close to any given point on its boundary.
Assumption 5. (Tight disturbance bounds) For every point on the boundary of the disturbance bounding set and any tolerance, the disturbance sequence {w0, w1, …} satisfies, for all t, a lower bound on the probability that wt lies within that tolerance of the point, where this bound is strictly positive whenever the tolerance is positive.
Assumption 6. (Persistent excitation) There exist positive scalars and an integer Nu ≥ ⌈p/nx⌉ such that, for each time step, the following excitation conditions hold:

We further assume throughout this section that the rows of the matrix defining the parameter set estimate are normalized so that the ith row has unit norm for all i.
5.1 Minimal parameter set





Proposition 2.For all , all
, and for any
such that
, under Assumptions 1, 5, and 6 we have


Proof.Assumption 1 implies that there exists so that
for any given
and
Therefore, if
satisfies
, then the definition (39) of
implies
, and hence
from (38). But












Proof.For the nontrivial case of t ≥ Nu we have since
. Also
and Proposition 2
implies
if
. Therefore,

Proof.For any and
such that
, Theorem 2 implies that
is necessarily finite, and since
requires that
, the Borel-Cantelli lemma therefore implies that
. It follows that
as t → ∞ with probability 1 since
is arbitrary.
5.2 Fixed complexity parameter set
In order to reduce computational load and ensure numerical tractability, we assume that the parameter set is defined by a fixed complexity polytope, as in (10) and (11). This section shows that, although a degree of conservativeness is introduced by fixing the complexity of the parameter set estimate, asymptotic convergence of this set to the true parameter vector still holds with probability 1.
Theorem 3. If the parameter set estimate is updated according to (10), (11) and Remark 5, and Assumptions 1, 5, and 6 hold, then for all
such that
for some
, we have, for all
and any
,

Proof.For the nontrivial case of t ≥ Nu we have since
by Lemma 1. Consider therefore the probability that any given
satisfying
lies in
. Define vectors gj for
by

Assumption 1 implies that, for any given , there exists a
such that
. Accordingly, choose
so that gj in (40) satisfies
for each
. Then








Corollary 3. Under Assumptions 1, 5, and 6, the fixed complexity parameter set estimate converges to the true parameter vector with probability 1.
5.3 Inexact disturbance bounds
We next consider the case in which the set bounding wt in Assumption 1 does not satisfy Assumption 5. Instead, we assume that a compact set providing a tight bound on wt exists but is unknown, nonpolytopic, or nonconvex. We define the unit ball and use a scalar to characterize the accuracy to which the assumed bounding set approximates this tight bound.
Assumption 7. (Inexact disturbance bounds) The tight disturbance bounding set is a compact set such that
for some
, and, for all
and
, the disturbance sequence {w0,w1, …} satisfies, for all
,
and
, where
whenever
.
Remark 8. Assumption 7 implies that every point in the assumed disturbance bounding set lies a distance no greater than the accuracy parameter from a point in the tight bounding set.
Theorem 5. Let Assumptions 1, 6, and 7 hold, and let the parameter set estimate be the fixed complexity parameter set defined by (10), (11) with Remark 5. Then, for all
such that
for some
and any
, we have, for all
,

Proof.A bound on the probability that satisfying
lies in
can be found using the same argument as the proof of Theorem 3. Thus, choose
,
so that
, where






Therefore, if for all
, then
which implies
. From Assumption 7 and the independence of the sequence {w0,w1, …} we therefore conclude that

5.4 System with measurement noise






Assumption 8. (Measurement noise bounds) The measurement noise bounding set is a compact convex polytope with a vertex representation.







Thus, Assumption 8 implies that the unfalsified set is a convex polytope, and the parameter set can be estimated using, for example, the update law (10), (11) if the measurement noise bounding set is known.
Assumption 9. (Tight measurement noise and disturbance bounds) For all ,
, and
we have








Corollary 6.Let Assumptions 1, 6, 8, and 9 hold and , with
given by (43). Then for all
such that
, for all
and all
, we have

Corollary 7. Let Assumptions 1, 6, 8, and 9 hold, and let the parameter set estimate be the fixed complexity parameter set defined by (10), (11) with Remark 5 and (43). Then for all
such that
for some
and any
, we have, for all
,

Corollary 8. Under Assumptions 1, 6, 8, and 9, the parameter set defined in Corollary 6 or 7 converges to the true parameter vector with probability 1.
Remark 9. If measurement noise is present, modifications to the proposed algorithm in Section 3.5 are needed. If no additional output constraints are present, then, given the noisy measurement yt, the constraint (14) should be replaced by


6 NUMERICAL EXAMPLES
This section presents simulations to illustrate the operation of the proposed adaptive robust MPC scheme. The section consists of two parts. The first part investigates the effect of the additional PE weighting term in the optimization problem, using the example of a second-order system from Reference 30. The second part demonstrates the relationship between the speed of parameter convergence and the minimal eigenvalue in the PE condition.
6.1 Objective function with weighted PE condition






All simulations were performed in Matlab on a 3.4 GHz Intel Core i7 processor, and the online MPC optimization was solved using Mosek.31 For purposes of comparison, the same parameter set update law and nominal parameter update law were used in all cases. Robust satisfaction of input and state constraints and recursive feasibility were observed in all simulations, in agreement with Proposition 1. To illustrate satisfaction of the state constraint [x]2 ≥ −0.3, Figure 1 shows the cross-sections of the robust state tube predicted at t = 0, with initial condition x0 = (3,6), together with the closed-loop state trajectories for 10 different initial conditions.



Table 1 compares the computational time and parameter sizes of the proposed algorithm and existing algorithms when the same initial conditions and disturbance sequences {w0, w1, …} are used. Algorithm (A) refers to the robust adaptive MPC in Section 3.4 of Lorenzen et al.10 Algorithm (B) is a modification of algorithm (A) that incorporates the PE constraint, implemented as described in Lu and Cannon15 with a fixed value of the excitation bound. Algorithms (C) and (D) are the algorithm proposed in Section 3.5, without and with the PE condition, respectively.
| | (A) | (B) | (C) | (D) |
|---|---|---|---|---|
| Algorithm | Homothetic tube (no PE)10a | Homothetic tube with PE10 | Proposed algorithm (no PE)a | Proposed algorithm with PE |
| Yalmip time (s) | 0.1825 | 0.7407 | 0.2053 | 0.2608 |
| Solver time (s) | 0.1354 | 0.2065 | 0.1016 | 0.0766 |
| Computational time (s) (Yalmip + solver) | 0.3179 | 0.9472 | 0.3069 | 0.3374 |
| Parameter set size | 18.26 | 18.51 | 18.26 | 16.56 |
- aFor algorithms without PE constraint, a QP solver, Gurobi,32 is used instead of Mosek.
- Abbreviations: MPC, model predictive control; PE, persistently exciting.
Consider first algorithms (A) and (C) in Table 1. Although (C) uses a more flexible tube representation, there is negligible difference in overall computational time relative to (A). This is because (A) enforces constraints on predicted tubes by a more efficient method than (C), which introduces additional optimization variables to enforce these constraints. The more flexible tube representation employed in (C) provides a larger terminal set, as shown in Figure 2. Moreover, the formulation of (C) incorporates information on the parameter set estimate in the constraints, and as a result, the terminal set increases in size over time as the parameter set
shrinks. On the other hand, the homothetic tube MPC employed in (A) uses a terminal set that is computed offline and is not updated online.

Comparing algorithms (B) and (D) in Table 1, it can be seen that implementing the PE condition using an augmented cost function and linearized constraint (as in (D)) results in lower computation and faster parameter convergence than using a PE constraint with a fixed excitation bound (as in (B)). Tuning this bound in algorithm (B) is challenging, since a value that is too small results in slow convergence, whereas a value that is too large frequently causes the PE constraint to be infeasible. Moreover, whenever the PE constraint is infeasible in algorithm (B), the MPC optimization is solved a second time without the PE constraint, thus increasing computation.
Figure 3 shows the effect of the weighting coefficient in the objective function (29) on the parameter set estimate when the same initial conditions (x0 = [3, 4]⊤, together with the same initial nominal parameter vector and parameter set) and disturbance sequences {w0, w1, …} are used. Larger values of the weighting coefficient place greater weight on the excitation term in the MPC cost (29), and thus on satisfaction of the PE condition (27). Therefore, increasing the weighting coefficient results in faster convergence of the parameter set volume. When the same weighting coefficient is used, performing the parameter set update only periodically (as discussed in Remark 5) slows down the convergence of the parameter set, as shown by the green line. For this simulation, the parameter set update (online step 2 in Section 3.5) takes only 2% of the total computational time.

The relationship between the weighting coefficient and the volume of the parameter set is shown in Figure 4. For weighting values between 10−3 and 103, closed-loop simulations were performed with the same initial conditions, disturbance sequences, and initial nominal model and parameter set; the parameter set volume after 20 time steps is shown. Figure 4 confirms that increasing the weighting coefficient results in a faster parameter set convergence rate, in agreement with Figure 3.

For the same set of simulations, Figure 5 shows the optimal values of the excitation level in (28) and (27) against the weighting coefficient. From (29), a larger weighting value is expected to increase the influence of the excitation term, thus pushing the excitation level to be more positive. The left-hand figure shows the excitation level in the convexified constraint (28). As expected, increasing the weighting coefficient initially leads to a smooth increase in this level, but beyond a certain point further increases in the weighting factor do not affect the calculated value. The right-hand figure shows the excitation level in the PE condition (27). The difference between the two illustrates the conservativeness of the convexification proposed in Section 3.4. Note that this can be reduced by repeating steps 3 and 4 in the online part of the proposed algorithm, thus iteratively recomputing the reference sequences and reducing the conservativeness of the linearization to any desired level. It is interesting to note that, although the optimal excitation level in (28) levels off, the excitation level in the PE condition (27) increases monotonically over the range of weighting values considered. The smaller excitation levels observed with (27) also explain the lower rates of parameter convergence for small weighting values in Figure 4. In practice, this relationship can be used as a guideline for tuning the weighting coefficient.

Table 2 illustrates the convergence of the estimated parameter set over a large number of time steps for the initial condition x0 = [2, 3]⊤ and a randomly generated disturbance sequence {w0, w1, …}. Here, the weighting coefficient was chosen as 103 to speed up convergence. In agreement with Theorem 3, the parameter set estimate has shrunk to a small region around the true parameter value at t = 5000.
| Time step (t) | 0 | 1 | 5 | 50 | 100 | 500 | 1000 | 5000 |
|---|---|---|---|---|---|---|---|---|
| Parameter set volume | 100 | 30.21 | 18.50 | 14.46 | 12.75 | 11.11 | 1.51 | 0.25 |
6.2 Relationship between PE coefficient and convergence rate









Taking the window length in (27) to be Nu = 10, closed-loop trajectories were computed for 10 time steps and the parameter set was updated according to (11). Simulations were performed for 500 different initial conditions, and the average excitation level was computed for each initial condition using 100 random disturbance sequences {w0, w1, …}. Figure 6 illustrates the relationship between the average size of the identified parameter set and the average excitation level in the PE condition (27). Clearly, a larger excitation level results in a smaller parameter set on average, and hence a faster rate of convergence, which is consistent with the analysis of Section 5.2. The inner and outer radii shown in the left-hand figure are the radii of the largest sphere contained within, and the smallest sphere containing, the parameter set estimate after 10 time steps. A similar trend can also be seen between the average volume of the parameter set and the ensemble average excitation level.





7 CONCLUSIONS
In this article, we propose an adaptive robust MPC algorithm that combines robust tube MPC and set membership identification. The MPC formulation employs a nominal performance index and guarantees robust constraint satisfaction, recursive feasibility and ISS. A convexified persistent excitation condition is included in the MPC objective via a weighting coefficient, and the relationship between this weight and the convergence rate of the estimated parameter set is investigated. For computational tractability, a fixed complexity polytope is used to approximate the estimated parameter set. The article proves that the parameter set will converge to the vector of system parameters with probability 1 despite this approximation. Conditions for convergence of the estimated parameter set are derived for the case of inexact disturbance bounds and noisy measurements. Future work will consider systems with stochastic model parameters and probabilistic constraints. Quantitative relationships between the convergence rate of the estimated parameter set and conditions for PE will be investigated further and methods of enforcing PE of the closed-loop system will be considered.