Convergence and stability analysis of the Extended Infinite Horizon Model Predictive Control

Abstract

Model Predictive Control (MPC) is a popular technology to operate industrial systems. It refers to a class of control algorithms that use an explicit model of the system to obtain the control action by minimizing a cost function. At each time step, MPC solves an optimization problem that minimizes the future deviation of the outputs which are calculated from the model. The solution of the optimization problem is a sequence of control inputs, the first of which is actually applied to the system. The optimization process is then repeated at subsequent time steps. In the context of MPC, convergence and stability are fundamental issues. This paper presents a mathematical analysis for convergence and stability of two important controllers: the Extended Infinite Horizon MPC developed by Odloak [Odloak D. Extended robust model predictive control. AIChE J. 2004;50(8): 1824–1836] and the Extended Infinite Horizon MPC with zone control [González AH, Odloak D. A stable MPC with zone control. J Process Control. 2009;19: 110–122]. Our analysis provides tuning strategies that can be implemented in practice, and the mathematical tools we use are intended to serve as a rigorous background for future studies and developments of related MPC approaches.

Introduction

Model Predictive Control (MPC) originated in the late seventies and has been developed considerably since then [1]. It was originally developed to meet the specialized control needs of petroleum refineries and power plants, but today MPC represents a powerful technology to operate complex dynamic systems, with several industrial applications, including process control, automotive systems, robotics, and energy management. MPC refers to a class of control algorithms that use an explicit model of the system to obtain the control action by minimizing a cost function. The model represents a dynamic relation between system inputs (control actions) and outputs (measurements). The purpose of the model is to predict the future response of the outputs over a prediction horizon. At each time step, an MPC algorithm solves an optimization problem that contains a performance cost function, the predictive model of the system, and constraints on inputs and outputs. The solution of this problem is a sequence of inputs; the first input in the optimal sequence is applied to the system, and the optimization process is repeated at the next time step, incorporating updated output measurements.
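The receding-horizon mechanism described above (solve, apply only the first move, repeat) can be sketched in a few lines. This is a generic illustration with a hypothetical scalar plant and a one-step closed-form "solver", not the EIHMPC formulation analyzed in this paper:

```python
import numpy as np

# Receding-horizon sketch for a scalar plant x+ = a*x + b*u with |u| <= u_max.
# Here the "optimization" is solved in closed form (the unconstrained
# minimizer of the one-step output deviation is u = -a*x/b), only the first
# move is applied, and the procedure repeats with the updated state.

A, B, U_MAX = 0.9, 1.0, 0.5  # made-up plant and input bound

def mpc_move(x):
    """One-step-optimal input, clipped to the input constraint set."""
    return float(np.clip(-A * x / B, -U_MAX, U_MAX))

def run_closed_loop(x0, n_steps=50):
    x = x0
    for _ in range(n_steps):
        u = mpc_move(x)        # solve the optimization, keep the first move
        x = A * x + B * u      # nominal case: model equals plant
    return x
```

Even with the input saturating on the first moves, the closed loop drives the state to the origin, which is the behavior the stability theory below formalizes.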

Fundamental aspects of any control system are convergence and stability, which together are often called asymptotic stability. Convergence refers to the ability of the optimization process to reach a solution that satisfies the control objectives. On the other hand, a guarantee of stability is essential to prevent undesirable behaviors such as oscillations, instability, or divergence, which can compromise the performance and safety of the system. Various approaches have been proposed to analyze and ensure the stability of MPC. Since MPC uses a prediction model, the stability of the closed-loop system with MPC is classified into two types: if the prediction model perfectly represents the process system, the stability is nominal, and if there is uncertainty in the prediction model, the stability is robust. A usual method to obtain nominal stability of an MPC closed-loop system is to adopt an infinite prediction horizon. In this sense, [2] developed an MPC regulator with infinite prediction horizon and input and output constraints. For stable open-loop systems, the infinite horizon can be reduced to a finite horizon MPC with a terminal weight computed through the solution of a Lyapunov equation. The infinite horizon MPC was later extended to the reference tracking problem by [3]. This work overcame the need to know the system steady state, allowing the application to the output-tracking problem and the regulator problem with unknown disturbances, which was one of the major barriers to implementing infinite horizon MPC in practice. Furthermore, [4] presented a simpler and more practical infinite horizon MPC. Following the method of [3], slack variables were added to the optimization problem, allowing a minimal violation of constraints, and keeping the cost function bounded for the disturbed system. This feature is important for practical implementation.
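The reduction of the infinite horizon to a finite one via a Lyapunov equation, mentioned above for stable open-loop systems, can be checked numerically. The matrices F and Q below are illustrative, not taken from any cited work:

```python
import numpy as np

# For a stable F (spectral radius < 1), the infinite-horizon tail
#   sum_{j>=0} (F^j x)' Q (F^j x) = x' P x,
# where P solves the discrete Lyapunov equation P = F' P F + Q.
# Here P is computed by (well-converged) truncation of the series.

F = np.array([[0.5, 0.1],
              [0.0, 0.4]])   # stable: eigenvalues 0.5 and 0.4
Q = np.eye(2)

P = sum(np.linalg.matrix_power(F.T, j) @ Q @ np.linalg.matrix_power(F, j)
        for j in range(200))
```

The identity P = F' P F + Q then holds to machine precision, and x' P x matches the truncated infinite-horizon sum for any state x.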

In this paper, we consider the Extended Infinite Horizon MPC (EIHMPC) proposed by [4], which is a nominal infinite horizon MPC that considers an Output Prediction-Oriented Model (OPOM). The OPOM is a state-space model arranged in the incremental form of inputs, and it is developed from the analytical form of the step response of the system [3]. The EIHMPC was further developed for zone control, where the outputs are controlled inside zones or ranges instead of fixed set-points [5]. The consideration of the output zones results in additional degrees of freedom left to the MPC optimization problem, which means that all or some inputs are free to be moved to optimal targets that can be defined externally or calculated through a real time optimization (RTO) layer. Over the last 20 years, these controllers have gained attention in the academic community and chemical process industry. For instance, the EIHMPC with zone control served as a basis for subsequent approaches: it was extended to systems with integrating poles [6–8], to integrating systems with optimizing targets [9], to dead time systems [10–14] and to unstable systems [15]. This controller was also formulated in two layers to receive the real time optimization targets, which are the solution of an economic optimization problem [16–18]. Moreover, the EIHMPC has been implemented in the context of process design methodologies that consider simultaneously economic profit, dynamic performance, and process safety [19,20]. On the other hand, control technologies based on the EIHMPC have also been successfully applied in real process industries, especially oil refineries [21–25] as well as pilot scale plants [26,27]. Nowadays, the EIHMPC is part of an in-house advanced control package developed by Petrobras (Petróleo Brasileiro S.A.) and has been implemented in many process units of the main oil refineries of Brazil [28].

Although the EIHMPC and EIHMPC with zone control have been widely implemented, their theoretical foundations concerning asymptotic stability under general conditions have remained incomplete. The original papers only provided intuitive insights on asymptotic stability for the specific case where the gain matrix D0 is regular [4] and the input horizon m is equal to 1 [4,5]. Rigorous and general proofs covering both an arbitrary gain matrix and an arbitrary input horizon have been lacking, limiting further theoretical extensions and reliable tuning guidelines. The main objective of this paper is to provide mathematical proofs of asymptotic stability for both controllers with a general gain matrix D0, i.e., not necessarily regular (allowing the input and output vectors to have different dimensions), and a general input horizon m. Despite the similarities, it is important to stress that the EIHMPC approach with fixed set-point is not a particular case of the EIHMPC with zone control. Therefore, it is important to present mathematical proofs for each one of them. As the reader may note, the mathematical treatment of the optimization problems of both controllers presents several technical difficulties, such as the construction of suitable weighting matrices S and Su for the corresponding slack variables and the formulation of appropriate control strategies to establish convergence. This reveals the challenge of obtaining rigorous results. As a byproduct of our analysis, for both approaches, we provide explicit expressions for the tuning parameters S and Su that can be implemented in practice. Finally, we believe that the proofs presented in this work could be adapted to derived approaches and serve as a mathematical background for future studies and developments.

The paper is organized as follows. In the Extended Infinite Horizon MPC section, we present the OPOM and the EIHMPC formulation, the corresponding convergence and stability theorems and their proofs. In the Extended Infinite Horizon MPC with zone control section, we present the EIHMPC with zone control, the convergence and stability results and their proofs. In the Appendix, we give a recipe to compute the tuning parameter S of the EIHMPC, followed by a toy example to illustrate its computation.

Extended Infinite Horizon MPC

We begin by introducing the Extended Infinite Horizon Model Predictive Control in the Formulation and results section, which is based on the Output Prediction-Oriented Model. The OPOM is a discrete time state-space representation that includes the input increments , the outputs y, and the states xs and xd. Subsequently, we present the cost function V of the controller, which contains an infinite-horizon term, along with a terminal state constraint on xs that includes a slack variable .

We assume that the open-loop system described by the OPOM is stable (Assumption 1). Open-loop stability means that the open-loop system converges to a steady state when a constant input value is applied. It is relevant in practice because if the system is unstable, the EIHMPC cannot be implemented. Under this assumption, and by decomposing the infinite horizon cost V into two components, it is possible to reduce the infinite term to a finite horizon term with a terminal weight . Then, the formulation of the corresponding control optimization problem is presented. It includes both the terminal state constraint on xs and the input constraints. At each time step k, the optimization variables are the sequence of input increments and the slack variable .

Next, we introduce Assumption 2, which states that the output reference (also called set-point) r is reachable from the input set . Here, reachability refers to the capacity of the system to reach r from some input within . Its relevance lies in the fact that if r is unreachable, the output cannot converge to that reference value. In practice, when r is unreachable, the input saturates (i.e., reaches the boundary of ) trying, but not succeeding, to bring the output to the set-point.
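As a hedged numerical illustration of this kind of reachability condition (the gain matrix, reference, and box limits below are hypothetical), one sufficient check is that the minimum-norm solution of D0 u = r lies inside the input box:

```python
import numpy as np

# Reachability check (sufficient, not necessary): compute the minimum-norm
# input solving D0 u = r and test whether it lies inside the box U.
# One output, two inputs, so D0 is a wide (singular-gain) matrix, matching
# the paper's setting where input and output dimensions may differ.

D0 = np.array([[1.0, 1.0]])   # made-up static gain
r = np.array([0.8])           # made-up output reference
u_lo, u_hi = -1.0, 1.0        # made-up input box

u_star, *_ = np.linalg.lstsq(D0, r, rcond=None)   # minimum-norm solution

reachable = (np.allclose(D0 @ u_star, r)
             and np.all(u_star >= u_lo) and np.all(u_star <= u_hi))
```

The check is only sufficient because other solutions of D0 u = r may lie inside the box even when the minimum-norm one does not; a full test would require a small feasibility program over the box.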

Based on Assumptions 1 and 2, Theorems 1 and 2 are established, which address the convergence and stability of the closed-loop system.

Formulation and results

The EIHMPC considers the following OPOM, which is a discrete time state-space model obtained from the step response [4,5]. For discrete time , let

(1)

and

(2)

where , , , , , , , and I is the identity matrix of dimension ny. It is worth noting that the matrix Dd considered here corresponds to the matrix DdFN in [4]. In the state equation (1), xs is called the static part of the state of the system, while xd is called the dynamic part. The vector u represents the inputs of the model and y stands for the outputs. Matrix D0 is called the static gain of the system. We will not go deeper into the structure of the matrices appearing in the model, because it is not relevant for the present work. The interested reader may consult the above references for more details.
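The state equations (1)–(2) are detailed in [4,5]. As a rough illustration only, the following sketch assumes the commonly cited incremental OPOM structure xs(k+1) = xs(k) + D0 Δu(k), xd(k+1) = F xd(k) + Dd Δu(k), y(k) = xs(k) + Ψ xd(k), with made-up scalar "matrices"; the exact structure of the matrices should be taken from the cited references:

```python
import numpy as np

# Illustrative OPOM-style simulation (the exact model matrices are in [4,5];
# the structure below is an assumption for demonstration purposes).
# xs accumulates the static-gain contribution of the input increments,
# while xd is the transient part, decaying when F is stable.

D0, Dd, F, Psi = 2.0, 1.0, 0.5, 1.0   # scalar stand-ins, made up

def simulate(du_seq):
    xs = xd = 0.0
    outputs = []
    for du in du_seq:
        outputs.append(xs + Psi * xd)  # output = static + dynamic part
        xs = xs + D0 * du              # static state integrates D0*du
        xd = F * xd + Dd * du          # dynamic state, stable recursion
    return xs, xd, outputs

# One unit increment, then no further moves: xs jumps to the gain D0 and
# stays there, while xd decays geometrically to zero.
xs_final, xd_final, _ = simulate([1.0] + [0.0] * 60)
```

This separation into a static and a dynamic part is what makes the terminal constraint on xs meaningful: once xd has died out, the output equals xs.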

In the infinite horizon MPC setting, we denote by the input horizon. Since, at each time step , an optimization problem is solved for u over the horizon time interval of size m, we introduce the following notation: for , we define as the j-th move of the input solution of the optimization problem (to be defined below) at time step k. For the other variables of the model, we will use a similar notation. It is important to point out that, at each time step k, only the first move of the input solution of the optimization problem, , is implemented in the system. As a consequence, at each time step , ,

and, for ,

The EIHMPC is based on the following cost function

where , is the set-point, is a slack vector, , and are positive definite. To prevent the above cost from being unbounded, as discussed in [4], the following constraint is imposed

(3)

We will work under the following assumption on the OPOM, which was already present in [4].

Assumption 1. The system is stable, that is, the spectral radius of F is strictly smaller than 1.
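Assumption 1 can be verified numerically by computing the spectral radius of F; the matrices below are illustrative:

```python
import numpy as np

# Check Assumption 1: the spectral radius of F must be strictly below 1.
# For a stable F, the powers F^k vanish, which is what makes the
# infinite-horizon tail of the cost summable.

def spectral_radius(A):
    return float(max(abs(np.linalg.eigvals(A))))

F_stable = np.array([[0.5, 0.2],
                     [0.0, 0.7]])     # triangular: eigenvalues 0.5, 0.7

F_unstable = np.array([[1.1, 0.0],
                       [0.0, 0.3]])   # eigenvalue 1.1 violates Assumption 1
```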

Using Assumption 1 and (3), we obtain, for ,

(4)

where

is positive semidefinite. Also, note that .

Now, in order to state the control optimization problem, we first fix and rectangles in containing the origin 0. Let us also define

Control Optimization problem: The control optimization problem of the EIHMPC can be stated as follows: at each time step , we determine that minimizes Vk subject to constraints (3),

Without loss of generality, we assume that, at time 0, the system is in the steady state , , . Since and are convex sets and R is positive definite, the solution of the above optimization problem is unique. Let , denote the solution at time step k, and let be the corresponding cost,

where , and are obtained from , (1) and (2).

We also need the following

Assumption 2. The output reference r is such that for some .

We finally state the results about convergence and stability for the EIHMPC. We emphasize that we do not suppose that the gain matrix D0 is regular in the following theorems.

Theorem 1 (Convergence). Under Assumptions 1 and 2, we can choose the matrix S such that

Theorem 2 (Stability). Under Assumptions 1 and 2, the controller is stable, that is, for any , there exists such that implies that for all .

Proofs of Theorems 1 and 2

The proof of Theorem 1 is based on the following key ideas:

  1. Non-Increasing Nature of the Cost Function (Lemma 1): This result was already established in [4] and states that the optimal cost function is non-increasing in k.
  2. Convergence of the Dynamic State (Lemma 2): It is shown that the dynamic part of the state xd converges to zero as .
  3. Convergence of the Orthogonal Component of the Input Sum (Lemma 3): This lemma proves that the orthogonal component of the sum of future control input increments tends to zero as . The result uses Lemmas 1 and 2, together with the terminal state constraint on xs. Here, the deviation is expressed in terms of the OPOM matrix D0 and the sum of input increments. We also exploit the fact that the restriction of D0 to the orthogonal complement of its kernel, denoted , is a linear isomorphism. This formulation emphasizes that D0 may be a singular matrix.
  4. Convergence of the Control Input to the Reference Affine Subspace (Proposition 1): This proposition demonstrates that if the matrix S is correctly chosen, the distance between the optimal control input and the affine subspace Ur (which contains ur such that ) converges to zero as .
  5. Proof by Contradiction: Combining Proposition 1 and Lemmas 2 and 3, the final argument to prove Theorem 1 goes by contradiction, leading to the conclusion that as .

To prove Theorem 2, we propose a feasible initial solution with an associated cost such that . Then, using the parallelogram law, the optimal output deviation is shown to be bounded from above (up to a constant) by the euclidean norm of r at each time step. Consequently, we conclude that if the set-point r is sufficiently close to the starting point of the process (which is located at the origin), the output deviation also remains small at each time step, thereby confirming nominal stability.

We start with several technical results that will be useful to prove Theorems 1 and 2. The following lemma was already established in [4], but for the sake of completeness we present its proof.

Lemma 1. Under Assumption 1, the sequence is non-increasing.

Proof. At time step k + 1, let be such that

Note that is a feasible strategy for the control optimization problem at time step k + 1 (by “feasible strategy” we mean that satisfies all the constraints of the optimization problem). Indeed, by letting , we have that

so that (3) is satisfied. Moreover, note that for , and recalling that , we obtain

for .

The cost corresponding to this strategy is

and, since for , we have that

Now observe that

and

so that

and then

Since the strategy at time step k + 1 is not necessarily the optimal one, we have that and the sequence is non-increasing. □

Lemma 2. Under Assumption 1, we have that

Proof. From the proof of Lemma 1, we have that

Observe that converges since it is non-increasing and non-negative. Therefore we deduce that

(5)

Moreover, for , we have that

Now, let be the euclidean norm on . By Assumption 1, we have that

Hence, for any , there exists such that

and, since , there exists such that, for all ,

Finally, for all ,

that is, as . □

We now equip with the euclidean norm and the associated scalar product . We decompose

We also denote by the restriction of D0 to with values in . Observe that is a linear isomorphism. For a vector , we denote by the unique vector in such that .

Lemma 3. Under Assumption 1 we have that

Proof. Recall that . Since

and by Lemma 2

we have that

But, using (3), we have that

so that, since is a linear isomorphism,

and finally, by (5), we obtain

Recall Assumption 2 and let us denote by . Observe that for any , D0u=r. Let us also denote by Pr the orthogonal projection on the affine subspace Ur. We recall that Pr is not a linear map unless .

We now build up the matrix S (in the canonical basis of ) that will be considered later on. For this, fix respectively an orthonormal basis of , , an orthonormal basis of Im D0, , and an orthonormal basis of , . Consider the square matrix M of in the first two bases. Using the LQ-factorization, there exists a lower triangular matrix L and an orthogonal matrix O such that M = LO. Since M is regular, we have that L is regular. Now, consider the matrix K1 (resp. K2) of dimension (resp. ) whose column vectors are the orthogonal projections of the canonical vectors of on (resp. ). Define , which is positive definite on . Furthermore, we can check that for all , .

Proposition 1. Consider with a positive real number. We can choose large enough (depending only on the parameters of the model other than S) such that under Assumptions 1 and 2 we have that

Proof. The proof goes by contradiction, that is, assume that . Then, for all there exists a subsequence such that for all ,

By Lemma 3, there exists n0 such that, for ,

This implies that for ,

(6)

Indeed, applying the triangle inequality and using the fact that for all , we obtain

Also, there exists k0 such that for all we have

(7)

Then for some to be chosen later, consider the following strategy at time k such that and k = kn for some .

where is the point in such that , for and given by

Since is convex, we can choose small enough such that this strategy is indeed feasible. We now compute the cost of this strategy and compare it to the optimal cost. First, observe that

Thus, using (7), we obtain that

On the other hand, we have

Hence, using (6), we obtain that

After some elementary computation, we obtain that

where

(8)

Using the Cauchy-Schwarz inequality, we have that

(9)

for some positive constant C1 that depends on and Q. Thus, we obtain that

for some positive constant C2 that depends on and Q.

Now let be the angle between and for . Since is a rectangle we have that and therefore . Now using Lemma 2, we can choose k large enough such that . Therefore, we obtain,

for some positive constant C3 (see the Appendix for an explicit expression) that depends on the parameters of the model other than S. Now, let us take small enough and such that the strategy given by is feasible. Consider . We can check that in this case , which gives us the desired contradiction. □

Proof of Theorem 1

From Proposition 1 and Lemma 3, we deduce that . Indeed, recalling that for all , observe that

as .

Finally, we will show that . To this end, we first observe that using (4), since , if then . Thus, assume that . This implies that and therefore there exists some subsequence such that for all , . Now consider . For n large enough using the strategy , for all and (we can check that it is indeed feasible) we obtain by Lemmas 2 and 3

where G is from (8). This gives us the desired contradiction. □

Proof of Theorem 2

Recall that the system is assumed to be initially in the steady state , , , and consider the following feasible strategy for the control optimization problem at time step k = 1,

which has an associated cost . Since the above strategy is not necessarily the optimal one, and using Lemma 1, we have that for all . But, for any ,

and, on the other hand, by the parallelogram law,

for some positive constant C1. Thus we have that for some positive constant C2, for all . This implies that, for any , there exists such that for all , if . □

Extended Infinite Horizon MPC with zone control

In this section, we present the EIHMPC with zone control. The Formulation and results section introduces the controller formulation, which is based on the OPOM described previously in the Extended Infinite Horizon MPC section. This MPC formulation has two key features: the first one is that the set-point r is treated as an optimization variable; therefore, we denote it here as ysp. The second feature is that the controller receives external input targets udes.

We begin by defining the cost function and the terminal constraints: a terminal state constraint on xs, and an input constraint that links the control inputs to the external target udes. At each time step k, the optimization variables of this MPC are the sequence of input increments , the output reference vector ysp,k, the slack variables associated with the terminal state constraint , and those slacks related to the input constraint .

For this controller, we work under Assumption 1 (which for convenience will be renamed as Assumption 3 below) and Assumption 4, which states that the input target udes is reachable, i.e., belongs to the input domain , and the corresponding predicted steady state belongs to the set-point domain . Assumption 4 guarantees the consistency between the input target and the set-point domain. In practice, without this assumption, the controller cannot simultaneously guide the input to its target and the output to the set-point.

Based on these assumptions, Theorems 3 and 4 establish the convergence and stability of the closed-loop system under the EIHMPC with zone control.

Formulation and results

In this section, we consider the EIHMPC with zone control [5]. This controller uses the OPOM given by (1) and (2) for prediction. The input target is sent at each time step by the Real Time Optimization (RTO) layer to the MPC layer. Unlike [4], in this MPC the set-point is a variable of the optimization problem and is calculated at each time step. Here, the exact values of the set-point are not important, as long as they remain inside a range with specified limits. As in the Extended Infinite Horizon MPC section, for , we define as the j-th move of the input solution of the optimization problem (to be defined below) at time step k. For the other variables of the model, we will use a similar notation.

The EIHMPC with zone control is based on the following cost function: for ,

where and are slack vectors, and , , , and are positive definite weighting matrices. To prevent the cost from being unbounded, as discussed in [5], we impose the terminal constraints

(10)

and

(11)

In this section we also work under Assumption 1, which for convenience we rename as

Assumption 3. The system is stable, that is, the spectral radius of the matrix F is strictly smaller than 1.

Under Assumption 3 and constraints (10) and (11) we have that, for ,

or, in other words,

where

is positive semidefinite and satisfies .

In order to state the control optimization problem, we first fix and rectangles in and rectangle in , all of them containing the corresponding origin. Let us recall that

Control optimization problem: The control optimization problem of the EIHMPC with zone control is the following: at each time step , we determine that minimizes subject to constraints (10), (11),

Here we assume that, at time 0, the system is in the steady state u(0) = 0, xs(0) = 0, xd(0) = 0. Since , and are convex sets and R and Sy are positive definite, the solution of the above optimization problem is unique. Let , , , be the solution of the control optimization problem at time step k and let be the corresponding cost,

where is obtained from , and , and are obtained from , (1) and (2).

We also need the following

Assumption 4. The input target udes is such that and .

We finally state our results about convergence and stability of this second controller.

Theorem 3 (Convergence). Let be the identity matrix of dimension nu and H be the matrix given by

Under Assumptions 3 and 4, if then

Theorem 4 (Stability). Under Assumptions 3 and 4, the controller is stable, that is, for any , there exists such that implies that

Proofs of Theorems 3 and 4

The proof of Theorem 3 follows a similar reasoning to that of Theorem 1, although it is simplified by the presence of the target udes for the input variable u. Indeed, while in Lemma 3 we only have that

now the presence of the input target udes allows us to prove that

see Lemma 5. As a consequence, we obtain the apparently stronger fact that

see Corollary 1. This crucial fact enables us to prove by contradiction that as , for a well chosen Su. The proof of Theorem 4 parallels the proof of Theorem 2.

We start with several technical results that will be useful to prove Theorems 3 and 4. The following lemma was established in [5], but for the sake of completeness we present its proof.

Lemma 4. Under Assumption 3, the sequence is non-increasing.

Proof. Let , , , be the following strategy for the control optimization problem at time step k + 1:

We can check that the above strategy is feasible for the control optimization problem at time step k + 1, and its cost is

Now, since and for , we have that

But observe that and then

On the other hand, we have that , so that

Moreover, note that and then . Thus, we deduce that

Finally, since the strategy , , , at time step k + 1 is not necessarily the optimal one, we have that and then we conclude that the sequence is non-increasing. □

Lemma 5. Under Assumption 3, we have that

Proof. From the proof of the last lemma, we have that

Since the sequence is non-increasing and non-negative, we deduce that it converges and then

(12)

Using (11), we have that

Thus, by (12), we obtain that

Now, observe that

Under Assumption 3, for any , there exists such that

and there exists such that, for all , . Finally, for all ,

that is, as . □

Lemma 6. Under Assumption 3, we have that

Proof. By Lemma 5, we can choose k large enough so that

For such a k, let , , , be such that

and observe that this is a feasible strategy for the control optimization problem at time step k. Indeed, we have that

and

so that (10) and (11) are satisfied, and we also have that

for . Moreover, note that and for .

Now, the cost corresponding to the above strategy is

and, for , we have that and

Moreover, we also have that

and then we obtain that

where

Hence, by Lemma 5, we deduce that

On the other hand, note that and , therefore

Finally, since the sequence converges, we conclude that

□

As a byproduct of the last lemma, we have the following

Corollary 1. Under the same assumption of Lemma 6, we have that

Proof of Theorem 3

Assume that , where is the identity matrix of dimension nu and

and suppose that .

By Corollary 1, we can choose a large enough k such that

(13)

For such a k, let be such that

and consider the following strategy for the control optimization problem at time step k:

It is straightforward to check that this strategy satisfies (11) and, from the fact that the solution of the control optimization problem at time step k satisfies (10) and (11), we deduce that

which can be used to check that the above strategy also satisfies (10). Moreover, we have that , for , and

Using the fact that , which follows from (11), we also have that

since both and . Also, note that since both and . Thus, the above strategy is indeed feasible at time step k, and the corresponding cost is

Hence, we have that

Now, if then by equivalence of the norms, we have that

(where is the euclidean norm) and therefore there exists c > 0 such that infinitely often. Since

(14)

and, for ,

(15)

and also

(16)

by Lemma 5 and Corollary 1, for , there exists k0 (large enough) satisfying (13) such that

since . Finally, as , we conclude that , which is a contradiction.

On the other hand, if then, by Lemma 6, we have that

and using (14), (15), (16), Lemma 5 and Corollary 1 we deduce that, for , there exists k1 (large enough) satisfying (13) such that

and then , which is a contradiction once again. □

Proof of Theorem 4

Consider the following strategy for the control optimization problem at time step k = 1,

Since the system is assumed to be initially in the steady state , , , the above strategy is indeed feasible and it has an associated cost given by

Moreover, since this strategy is not necessarily the optimal one, and using Lemma 4, we have that for all .

Now, for any , we have that

On the other hand,

for some positive constant C1 and

for some positive constant C2. Thus, we obtain

for some positive constant C3, for all , which finally implies the result. □

Conclusion

The main contribution of this paper is to provide a rigorous mathematical analysis for the asymptotic stability, i.e., convergence and stability, of two important approaches in Model Predictive Control (MPC) based on the Output Prediction-Oriented Model (OPOM): the EIHMPC developed by [4] and the EIHMPC with zone control [5]. The key contributions include:

  1. Rigorous proof for the EIHMPC: The first objective was to provide a rigorous proof of asymptotic stability for this controller. This analysis significantly extends prior works and develops mathematical proofs using geometric and algebraic tools that are valid for any input horizon m and general gain matrix D0, including singular matrices. The results confirm convergence (Theorem 1), showing that the optimal cost function converges to zero as time tends to infinity, provided the matrix S is correctly chosen. The results also confirm nominal stability (Theorem 2), demonstrating that if the output reference r is sufficiently close to the starting point of the process, the optimal output deviation remains small for all time steps. We already emphasized that the EIHMPC is not a particular case of the EIHMPC with zone control, justifying separate mathematical proofs for both controllers.
  2. Rigorous proof for the EIHMPC with zone control: In the same spirit, this paper developed a rigorous proof of asymptotic stability for the EIHMPC with zone control, considering any input horizon m and general gain matrix D0. Our analysis established convergence (Theorem 3), showing that the optimal cost function converges to zero if the tuning parameter matrix Su is well chosen. It also established nominal stability (Theorem 4), concluding that if the external input target udes is close enough to the starting point of the process, the deviation in output and input variables remains small for all time steps.
  3. Practical implementation enhancements: As a byproduct of the mathematical analysis, we provided implementable expressions for the tuning parameters S of the EIHMPC (see the proof of Proposition 1 and the Appendix) and Su of the EIHMPC with zone control (see Theorem 3).
  4. Mathematical background: The proofs presented in this work are also intended to provide a mathematical foundation to study asymptotic stability in related nominal MPC frameworks (e.g., RTO-MPC integration, multi-model adaptive RTO-MPC integration). Another natural direction for future work would be, for example, to extend the convergence and stability results to corresponding robust controllers.

Appendix

In this appendix, we first obtain an explicit expression for the constant C3 that appears in the proof of Proposition 1. For a positive semidefinite matrix A, let us define

‖A‖ = sup_{x ≠ 0} ‖Ax‖ / ‖x‖,

where ‖·‖ is the Euclidean norm. It is well known that ‖A‖ is the spectral radius of A. Starting back from the first inequality of (9), we have that the term

and therefore we obtain that

(17)

where and Z are defined in the proof of Proposition 1.
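The identity between the operator norm and the spectral radius of a positive semidefinite matrix, used throughout this appendix, can be checked numerically. The following NumPy sketch uses a randomly generated matrix as a stand-in:

```python
import numpy as np

# For a positive semidefinite matrix A, the operator norm
# ||A|| = sup_{x != 0} ||Ax|| / ||x||  (Euclidean norm on vectors)
# coincides with the spectral radius of A.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B  # symmetric positive semidefinite by construction

operator_norm = np.linalg.norm(A, ord=2)          # largest singular value
spectral_radius = max(abs(np.linalg.eigvals(A)))  # largest |eigenvalue|

print(abs(operator_norm - spectral_radius) < 1e-8)
```

For a general (non-symmetric) matrix the two quantities differ, which is why the positive semidefiniteness assumption matters here.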

Next, we give a recipe to compute the matrix S from Theorem 1. This is done in three steps:

  1. From matrix D0, compute the matrix Ŝ (see the paragraph just above Proposition 1 for the definitions of K1, K2 and L). Observe that when D0 is regular, the expression of Ŝ boils down to a simpler closed form.
  2. Next, compute (or obtain an upper bound for) C3 using expression (17).
  3. Finally, choose and set .
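The three-step recipe above can be sketched in code. The snippet below is only illustrative: the matrices D0, F and Q are hypothetical stand-ins, M is taken equal to D0 for simplicity, and the Lyapunov equation is assumed to be in discrete-time form; the actual objects are defined in the proof of Proposition 1.

```python
import numpy as np
from scipy.linalg import qr, solve_discrete_lyapunov

# Hypothetical data standing in for D0, F and Q of the OPOM.
D0 = np.array([[1.0, 0.5, 0.0],
               [0.0, 1.0, 0.5]])
F = np.diag([0.8, 0.5])  # stable dynamic modes (|eigenvalues| < 1)
Q = np.eye(2)            # output weighting matrix

# Step 1: an LQ factorization M = L O (L lower triangular, O with
# orthonormal rows) obtained from the QR factorization of the transpose.
M = D0  # placeholder for the matrix M of the recipe
Qt, Rt = qr(M.T, mode='economic')
L, O = Rt.T, Qt.T  # so that M = L @ O

# Step 2: ingredients for a bound like C3 in (17): a Lyapunov solution
# and a spectral radius, here with the discrete-time Lyapunov equation
# F' P F - P = -Q.
P = solve_discrete_lyapunov(F.T, Q)
rho = lambda A: max(abs(np.linalg.eigvals(A)))
C3 = rho(P)  # illustrative stand-in for the bound

# Step 3: choose a scalar s > C3 and set S = s * I (one admissible choice).
s = 1.1 * C3
S = s * np.eye(Q.shape[0])
```

The QR-of-transpose trick in step 1 is a standard way to obtain an LQ factorization, since SciPy exposes QR directly.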

Now, in order to illustrate the above recipe, we present the following toy example. Consider a system with nu = 3, , , ur = (1/2,1/2,1/2) and

Also, consider the input horizon m = 2 and weighting matrices Q = I2 and R = I3, i.e., the identity matrices of dimensions 2 and 3, respectively.

From D0, and ur, we deduce the following:

1. Computation of Ŝ:

We first obtain the matrix M of in the orthonormal bases of and ImD0 (see the paragraph just above Proposition 1):

The LQ-factorization M = LO, with lower triangular L and orthogonal O, is given by

We also deduce that the matrix K1 is simply the two-dimensional identity, while K2 is the zero scalar. Thus, we have that

2. Upper bound for C3:

Now, we obtain the matrix

which is the solution to the Lyapunov equation . Next, we compute the matrix Z from (8),

The spectral radii of Z, and are respectively , and . Thus, we obtain that

3. Deduction of S:

Finally, we choose , which gives
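Independently of the specific numbers in this toy example, a useful generic check for the final step is whether a candidate S dominates the computed matrix Ŝ, i.e., whether S − Ŝ is positive definite. This can be tested via a Cholesky factorization, as in the sketch below (the matrix S_hat is an illustrative stand-in, not the one from the example):

```python
import numpy as np

def is_positive_definite(A):
    """Check positive definiteness via Cholesky (succeeds iff A is PD)."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

# Illustrative stand-in for the matrix computed in step 1 of the recipe.
S_hat = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# One admissible choice: scale the identity above the largest eigenvalue.
S = 1.1 * np.linalg.norm(S_hat, 2) * np.eye(2)

print(is_positive_definite(S - S_hat))  # True: S dominates S_hat
```

Cholesky-based testing is cheaper and numerically more reliable than computing all eigenvalues when the matrices are large.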

List of symbols

D0, Dd, F, : matrices of the OPOM (see (1) and (2));

xs: static part of the state of the OPOM;

xd: dynamic part of the state of the OPOM;

u: input variable of the OPOM;

: input increment variable of the OPOM;

y: output variable of the OPOM;

m: input horizon for both controllers;

V: cost function of the EIHMPC;

Q, S: weighting matrices of the EIHMPC;

: slack variable of the EIHMPC;

r: set-point of the EIHMPC;

: cost function of the EIHMPC with zone control;

Qu, Qy, Su, Sy: weighting matrices of the EIHMPC with zone control;

, : slack variables of the EIHMPC with zone control;

udes: input target of the EIHMPC with zone control;

ysp: set-point of the EIHMPC with zone control;

R: input increment weighting matrix for both controllers;

: input domain for both controllers;

: input increment domain for both controllers;

: set-point domain for the EIHMPC with zone control.

References

  1. Camacho EF, Bordons C. Model predictive control. 2nd ed. London: Springer; 2007.
  2. Rawlings JB, Muske KR. The stability of constrained receding horizon control. IEEE Trans Automat Contr. 1993;38(10):1512–6.
  3. Rodrigues MA, Odloak D. MPC for stable linear systems with model uncertainty. Automatica. 2003;39(4):569–83.
  4. Odloak D. Extended robust model predictive control. AIChE J. 2004;50(8):1824–36.
  5. González AH, Odloak D. A stable MPC with zone control. J Process Control. 2009;19(1):110–22.
  6. Carrapiço OL, Odloak D. A stable model predictive control for integrating processes. Comput Chem Eng. 2005;29:1089–99.
  7. González AH, Marchetti JL, Odloak D. Extended robust model predictive control of integrating systems. AIChE J. 2007;53(7):1758–69.
  8. Costa EA, Schnitman L, Odloak D, Martins MAF. A one-layer stabilizing model predictive control strategy of integrating systems with repeated poles. J Control Autom Electr Syst. 2021;33(2):369–81.
  9. Alvarez LA, Francischinelli EM, Santoro BF, Odloak D. Stable model predictive control for integrating systems with optimizing targets. Ind Eng Chem Res. 2009;48:9141–50.
  10. González AH, Odloak D. Robust model predictive control for time delayed systems with optimizing targets and zone control. In: Robust control, theory and applications. InTech; 2011. https://doi.org/10.5772/15039
  11. Santoro BF, Odloak D. Closed-loop stable model predictive control of integrating systems with dead time. J Process Control. 2012;22(7):1209–18.
  12. Martins MAF, Yamashita AS, Santoro BF, Odloak D. Robust model predictive control of integrating time delay processes. J Process Control. 2013;23(7):917–32.
  13. Pataro IML, da Costa MVA, Joseph B. Advanced simulation and analysis of MIMO dead time compensator and predictive controller for ethanol distillation process. IFAC-PapersOnLine. 2019;52(1):160–5.
  14. Pataro IML, Gil JD, Americano da Costa MV, Guzmán JL, Berenguel M. A stabilizing predictive controller with implicit feedforward compensation for stable and time-delayed systems. J Process Control. 2022;115:12–26.
  15. Martins MAF, Odloak D. A robustly stabilizing model predictive control strategy of stable and unstable processes. Automatica. 2016;67:132–43.
  16. Alvarez LA, Odloak D. Robust integration of real time optimization with linear model predictive control. Comput Chem Eng. 2010;34:1937–44.
  17. Alvarez LA, Odloak D. Reduction of the QP-MPC cascade structure to a single layer MPC. J Process Control. 2014;24(10):1627–38.
  18. de Oliveira RC, de Carvalho RF, Alvarez LA. Multi-model adaptive integration of real time optimization and model predictive control. IFAC-PapersOnLine. 2019;52(1):661–6.
  19. de Carvalho RF, Alvarez LA. Simultaneous process design and control of the Williams-Otto reactor using infinite horizon model predictive control. Ind Eng Chem Res. 2020;59:15979–89.
  20. Marques FH, Alvarez LA. Advanced process control system with MPC as a new approach for layer of protection analysis. J Loss Prevent Process Indust. 2023;83:104993.
  21. Carrapiço OL, Santos MM, Zanin AC, Odloak D. Application of the IHMPC to an industrial process system. IFAC Proceed Volumes. 2009;42(11):851–6.
  22. Porfírio CR, Odloak D. Optimizing model predictive control of an industrial distillation column. Control Eng Pract. 2011;19(10):1137–46.
  23. Strutzel FAM, Odloak D, Zanin AC. Economic MPC of an industrial diesel hydrotreating plant. In: Power and Energy / 807: Intelligent Systems and Control / 808: Technology for Education and Learning; 2013. https://doi.org/10.2316/p.2013.807-022
  24. Strutzel FAM. Controle IHMPC de um processo industrial de hidrotratamento de diesel. São Paulo: Universidade de São Paulo, Escola Politécnica; 2014.
  25. Martin PA, Zanin AC, Odloak D. Integrating real time optimization and model predictive control of a crude distillation unit. Braz J Chem Eng. 2019;36(3):1205–22.
  26. Martin PA, Odloak D, Kassab F. Robust model predictive control of a pilot plant distillation column. Control Eng Pract. 2013;21(3):231–41.
  27. Silva BPM, Santana BA, Santos TLM, Martins MAF. An implementable stabilizing model predictive controller applied to a rotary flexible link: An experimental case study. Control Eng Pract. 2020;99:104396.
  28. Sêncio RR. Model predictive control based on the output prediction-oriented model: a dual-mode approach, and robust distributed algorithms. São Paulo: Universidade de São Paulo, Escola Politécnica; 2022.