
Low peaking-phenomenon cascade high-gain observer design with LPV/LMI method

Abstract

To cope with the well-known peaking phenomenon and noise sensitivity in the application of the high-gain observer, a parameter tuning method based on the LPV/LMI approach for a 2nd-order cascade observer structure is proposed in this paper. Compared to other high-gain observer methods, this method can significantly reduce the infimum of the observer gain, thereby reducing the peaking phenomenon in the state estimates and the influence of measurement noise. By transforming the observer structure into a Luenberger-like structure, the parameters of the observer can be obtained by solving one linear matrix inequality (LMI) with a high-gain effect, or a set of 2^n LMIs without a high-gain effect. Then, by decomposing the nonlinear part of the system dynamics into high-dimensional and low-dimensional parts, an adjustable number of LMIs can be solved to obtain a result with a limited high-gain effect. Stability analysis based on the Lyapunov method proves the convergence of this method, and its effectiveness is verified through applications to a single-link mechanical arm model and a vehicle trajectory estimation problem.

1 Introduction

Since the proposal of high-gain observers, owing to the convenience of adjusting a single parameter, they have been widely used in the control of nonlinear systems [1–4]. This convenience is reflected in the design steps. Compared to directly designing the observer parameter vector K of a Luenberger observer, a gain matrix T(θ) = diag(θ, ⋯, θ^n) is introduced into the measurement output feedback terms of the high-gain observer dynamics, so that the linear part of the error dynamics dominates the nonlinear part through powers of θ. Therefore, once the linear part of the observer is stabilized, the stability of the nonlinear part can be achieved by adjusting the gain coefficient θ alone, and increasing this parameter improves the convergence rate of the error [5]. However, to ensure robustness of the error at steady state, the gain coefficient θ and its related feedback gains are often excessively large, leading to the peaking phenomenon in the estimates and high sensitivity to measurement noise, which reduces the robustness of the observer [6].
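As a concrete illustration of this trade-off, the following minimal sketch (a hypothetical second-order example, not one of the systems studied later in this paper) simulates a standard high-gain observer of the form d/dt x̂1 = x̂2 + θ k1 (y − x̂1), d/dt x̂2 = φ(x̂) + θ² k2 (y − x̂1): increasing θ speeds up convergence, while the transient error on the second estimated state grows roughly in proportion to θ, which is the peaking phenomenon discussed above.

```python
# Minimal illustration (hypothetical example): standard 2nd-order high-gain observer
# for x1_dot = x2, x2_dot = phi(x) = -sin(x1), measured output y = x1.
import numpy as np
from scipy.integrate import solve_ivp

theta, k1, k2 = 20.0, 2.0, 1.0           # gain parameter and observer coefficients

def dynamics(t, w):
    x1, x2, xh1, xh2 = w
    y = x1                                # noise-free measurement for clarity
    e = y - xh1
    dx  = [x2, -np.sin(x1)]               # true plant
    dxh = [xh2 + theta * k1 * e,          # observer copy plus scaled output injection
           -np.sin(xh1) + theta**2 * k2 * e]
    return dx + dxh

sol = solve_ivp(dynamics, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0], max_step=1e-3)
peak = np.max(np.abs(sol.y[3] - sol.y[1]))    # transient error on the 2nd state estimate
print(f"peak |x2_hat - x2| = {peak:.1f} (grows roughly in proportion to theta)")
```

Re-running the sketch with a smaller θ (e.g. 2 instead of 20) shows slower convergence but a much smaller peak, which is the tension the present paper addresses.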

In the past few decades, improvements for high-gain observers can be mainly classified into three categories:

1) Feedback term processing

Most improvement methods directly handle the excessive feedback terms caused by the gain matrix T(θ). In processing the feedback terms of the observer dynamics, existing methods often improve observer performance by introducing filtering, dead zones, saturation, and similar mechanisms. Treangle et al. [7] filtered the measurement output to reduce the impact of high-frequency noise. Cocetti et al. [8] achieved a similar effect by introducing a dead zone into the feedback terms. Astolfi et al. [9] introduced a saturation element into the feedback terms to suppress the peaking phenomenon. A similar improvement of the standard high-gain observer was proposed by Farza et al. [6]. Although adjusting the feedback terms has become a convention, the information loss caused by dead zones is unacceptable in some cases [8]. While saturation can eliminate the peaking phenomenon, it cannot handle the measurement sensitivity issues brought by high gain. Low-pass filtering is quite an efficient way to overcome measurement noise, but it introduces a high-frequency phase shift that distorts the estimate [10].

2) Cascade observer structure

Astolfi and Marconi [5] proposed a cascade high-gain observer structure that observes an n-dimensional system through the cascade of n − 1 second-order high-gain observers, limiting the power of the gain to 2 while enhancing the ability to suppress high-frequency noise [11]. Based on this method, Khalil [12] proposed a cascade of (n − 1) one-dimensional sub-observers and introduced saturation in the sub-observers to reduce the peaking phenomenon. Boizot et al. [13] introduced the high-gain method into the Kalman filter, using the smallest possible high-gain action that still ensures the robustness of the state estimates and the convergence of the high-gain observer. The cascade structures above do have the ability to cope with measurement noise, but the effective gain from measurement noise to the higher-dimensional state estimates is the same as in the standard high-gain observer.

3) Optimization of the gain coefficient

Alessandri et al. [14] proposed a standard high-gain observer structure with a gain θ that increases over time to deal with the peaking phenomenon at the start of the convergence of the state estimates. Zemouche et al. [15] combined the LPV/LMI method [16] with the design of high-gain observers to propose a new low-gain observer design, which reduces the dependence of the stability of the nonlinear part on the gain coefficient by optimizing the observer's linear part, i.e., the observer coefficient K, thereby reducing the overall gain of the observer while keeping the system stable. Gain optimization is a convenient way to balance the high-gain effect against the robustness of the observer. Moreover, a time-varying gain, or a gain with a lower bound, could also be applied to other observer structures, as mentioned in [15].

Based on the above research, this paper proposes a parameter design method based on the LPV/LMI technique for Marconi/Astolfi-type observer structures. Compared to directly designing parameters for the standard high-gain observer as in [15], this method reduces sensitivity to measurement noise and, compared to [5], maintains low-gain characteristics. First, the observer error dynamics are transformed into a Luenberger-like canonical form, and a single-LMI solution corresponding to the pole configuration is obtained. Then, the LPV/LMI form with multiple LMIs is obtained through the gradient decomposition of the nonlinear dynamics. Finally, the parameter solution algorithm is given, and the simulation performance is compared through two examples: a single-link mechanical arm and a vehicle trajectory estimation problem.

The innovations of this work are as follows:

  • By converting the error dynamics into a Luenberger-like form, we develop a parameter design method for the 2nd-order cascade observer structure [5] based on the LMI and LPV/LMI methods and prove its stability (Section 3.1 and Section 3.2).
  • Based on the LPV/LMI method, the nonlinear part can be decomposed into low-dimensional and high-dimensional parts by gradient decomposition and processed separately with the high-gain effect and the LPV/LMI method, which yields the main theorem. We then propose an algorithm to calculate the parameters (Section 3.3).
  • Two examples are used to verify the practicality of the proposed method. By comparing our method with the original pole assignment method [5], the standard high-gain observer method [1], and a recent filtered high-gain observer [7], the effectiveness of our method is demonstrated.

Notations:

  • (Ai, Bi, Ci) denotes the observable canonical triplet of dimension i,
  • Ii denotes the identity matrix of dimension i,
  • is the Lie derivative of h(x) along the vector field f.
  • Ti(θ) = diag(θ, θ², ⋯, θ^i).
  • , .
  • .

2 Problem description

2.1 System description

We consider a class of nonlinear single input single output (SISO) systems of the form (1) where is the state variable, is the measured output, is the input, f(⋅), g(⋅) and h(⋅) are C functions. and represent the system disturbance and measurement disturbance, respectively. Both disturbances are bounded.

Let Zu(z0, t) be the solution of (1) going through z0 at time 0 with input u. The definition of observability of nonlinear system (1) that will be crucial for the following analysis is reviewed.

Definition 1 (Differential Observability). [17] System (1) is differentially observable of order N on an open subset , if mapping is injective on . Furthermore, it is regarded as strongly differentially observable on if the mapping is also an immersion.

Definition 2 (Uniform Observability). [17] System (1) is uniformly observable on an open subset , if for any pair with xa ≠ xb, any T > 0, and any C1 input u defined on [0, T), there exists a time t < T such that h(Xu(xa, t)) ≠ h(Xu(xb, t)) and for all s ≤ t.

Lemma 1. [18] If system (1) is uniformly observable and strongly differentially observable of order N = m on an open set containing the compact set , it can be transformed on into a full Lipschitz triangular canonical form of dimension n = m.

In general, if system (1) is uniformly observable and 2-order strongly differentially observable, as stated in Lemma 1, by selecting x = Hn(z) we can obtain (2), which is the actual observed system, where , φ(x) can be chosen to satisfy , and ; both φ(⋅) and g(⋅) are locally Lipschitz on as a result of Lemma 1.

Remark 1. Lemma 1 ensures the existence of the observability canonical form and the Lipschitz property of the nonlinear dynamics, which is a necessary condition in the design of high-gain observers [19].
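As a small illustration of the observability map used in Definition 1 and Lemma 1, the following sketch (a toy two-dimensional system chosen only for illustration, not one of the models used later) assumes the standard construction H_N(z) = (h(z), L_f h(z), ⋯, L_f^{N−1} h(z)) and computes x = H_2(z) symbolically, checking that it is an injective immersion.

```python
# Illustrative sketch (toy system, not the models used in this paper):
# build the observability map H_N(z) = (h, L_f h, ..., L_f^{N-1} h) symbolically.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
z = sp.Matrix([z1, z2])
f = sp.Matrix([z2, -z1 - z2**3])          # drift vector field f(z)
h = z1                                    # measured output h(z)

def lie_derivative(h_expr, f_vec, z_vec):
    """L_f h = (dh/dz) * f."""
    return (sp.Matrix([h_expr]).jacobian(z_vec) * f_vec)[0]

N = 2
H, Lfh = [], h
for _ in range(N):
    H.append(sp.simplify(Lfh))
    Lfh = lie_derivative(Lfh, f, z)

print(H)                                  # [z1, z2] -> injective, so x = H_2(z) is a valid change of coordinates
print(sp.Matrix(H).jacobian(z))           # identity here, so the map is an immersion as well
```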

2.2 Observer form

To estimate the states of (2), we use the second-order cascade high-gain observer form proposed in [5], given below as (3), with

In (3), is the state of the i-th sub-observer for i = 1, ⋯, n − 2, is the state of the (n − 1)-th sub-observer, is the i-th state estimate of system (2), and represents the auxiliary estimate of the (i + 1)-th state; Ki = [ki1, ki2] denotes the design parameter of the i-th sub-observer. φs(⋅) is equivalent to φ(⋅) in . In this context, we assume that φs(⋅) satisfies the global Lipschitz condition below. This structure can be viewed as being composed of (n − 1) second-order high-gain sub-observers, as shown in Fig 1.

Assumption 1 (Globally Lipschitz condition) (4) where Lφ is the Lipschitz constant.

Define xi = [xi, xi+1] (where xi is the i-th state of system (2)); then the object system (2) can be extended into the following form. Thus we can obtain the error dynamics by defining the estimation error and the auxiliary error . For convenience, define the state error e = (e1, ⋯, en−1), where ei = (ei, ϵi+1). Then the error dynamics are equivalent to (5) where , and the system matrix is of the form with Ei = Ai − T2(θ)KiC2, and . Astolfi et al. developed a pole assignment method to make M Hurwitz in [5], as stated in the following lemma.

Lemma 2. [5] Let be an arbitrary Hurwitz polynomial. There exists a choice of (ki1, ki2), i = 1, ⋯, n − 1 such that the characteristic polynomial of M coincides with .
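To see the per-block mechanism behind Lemma 2, the following sketch computes the characteristic polynomial of a single Luenberger-type block A2 − T2(θ)KiC2; the interconnection terms of the full matrix M are omitted, so this is an illustration of the idea rather than a proof of Lemma 2.

```python
# Sketch: per-block pole placement in the spirit of Lemma 2 (diagonal block only;
# the interconnection terms of the full matrix M are not modeled here).
import sympy as sp

s, theta, k1, k2 = sp.symbols('s theta k1 k2')
A2 = sp.Matrix([[0, 1], [0, 0]])
C2 = sp.Matrix([[1, 0]])
T2 = sp.diag(theta, theta**2)
K  = sp.Matrix([k1, k2])

E = A2 - T2 * K * C2                       # Luenberger-type block of the error dynamics
char_poly = sp.expand(E.charpoly(s).as_expr())
print(char_poly)                           # s**2 + k1*theta*s + k2*theta**2
```

Matching s² + θ k1 s + θ² k2 to a desired Hurwitz polynomial fixes (ki1, ki2), which is the freedom exploited by the pole assignment in [5].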

Although estimation error can asymptotically converge by setting the gain parameter θ to a sufficiently large value in [5], the peaking phenomena are still present in the observation results due to the large observer gain.

In this paper, we adopt the observer form proposed in [5]. However, unlike the pole design method described in Lemma 2, the observer parameters Ki can be optimized via multiple linear matrix inequalities (LMIs) after a transformation of the error dynamics, resulting in a significant reduction of the gain parameter θ.

3 Main result

First, rewrite the error dynamics (5) as (6) below: (6) where with being the parameter matrix that needs to be solved. The observer parameter matrix can be calculated as Ki = Li12×1.

Then, transform the error variable in (6) with (which is equivalent to ). We then have the following transformed error dynamics. (7)

Remark 2. Since we are using the transformed error dynamics , the nonlinear part of the error dynamics here is . By the mean value theorem, its linearized result is , where is the gradient of φs. From the definition of the high-gain matrix Tn, it can be seen that higher-dimensional nonlinear gradients are less affected by high gains, which ultimately leads to our main result.

Under the new form of the error dynamics in (7), the value of Ki can be determined using an LMI-based method rather than relying solely on pole assignment, which corresponds to Theorem 1; there, the high-gain matrix Tn is used to handle the nonlinear components. Then, by decomposing the nonlinear error into gradients, the nonlinear constraint is transformed into a set of 2^n linear constraints. This allows us to use multiple LMIs to address both the nonlinear components and the pole placement of the observer, which is proved in Theorem 2.

Finally, we decompose the nonlinear constraint into two linear constraint sets based on dimensionality. The low-dimensional component is handled using the approach from Theorem 1 with high gain, while the high-dimensional component is tackled using the LPV/LMI method outlined in Theorem 2. Ultimately, this process leads to the theorem and algorithm presented in Section 3.3 (Theorem 3 and Algorithm 1).


3.1 1-LMI optimization with the largest infimum of gain

This approach is presented and supported by the following theorem.

Theorem 1. Consider the error dynamics (6). The state estimation error e will be asymptotically stable if there exist a scalar μ ∈ (0, 1), matrices for i = 1, ⋯, n − 1, and symmetric positive definite matrices for i = 1, ⋯, n − 1, such that the following LMI is feasible: (8) where

Once the LMIs are solved, L = P−1R, K = L 1(2n−2)×1, and .

Proof. Choose Lyapunov function as , then (9)

Notice that the nonlinear error can be transformed into the time-varying gradient form (10), where denotes the gradient of φs(x) at x = ς; ς refers to a point between x and , and .

For simplicity, we will replace the expression with , and utilize ∇ instead of ∇ς.

By substituting (8) and (10) into (9), the following inequality is obtained: (11)

Let ϑ = ‖P−1θ, then if ‖P‖ ≥ 1, there is (12)

According to Assumption 1, the gradient of φs(x) is bounded, specifically satisfying . This implies that the absolute value of each partial derivative is also bounded, ensuring for all i = 1, ⋯, n. Thus we obtain the following inequality: (13) If , then . Meanwhile, since θ > 2, it follows that and , which means that exhibits diagonal dominance; as per Gershgorin's circle theorem [20], this implies it is negative definite. Thus we have

Consequently, asymptotically decreases towards zero, indicating the asymptotic stability of the transformed estimation error .

Furthermore, since the actual estimation error e is linearly related to , the asymptotic stability of estimation error e is obtained.

In other cases where ‖P‖ ≤ 1, the same result can be obtained by selecting ϑ = θ directly. Thus the proof is finished.

Theorem 1 presents a method for optimizing the observer parameter matrix K. However, similar to the pole assignment method, the gain of the observer can become significantly large, leading to a pronounced peaking phenomenon in the estimation results. In contrast, by employing the LMI technique, we can tune the observer parameters by solving multiple LMIs, as shown in the following subsections. This allows us to mitigate or even eliminate the high-gain effect, resulting in improved estimation performance.
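For readers who want to experiment, the sketch below shows how an LMI of this general flavour can be posed and solved numerically with cvxpy. The exact blocks of (8) are not reproduced; the sketch instead uses the classical single-LMI Lipschitz-observer condition in the spirit of [16], with the nonlinearity entering through the last state of a chain-of-integrators canonical form. The recovery of the gain as L = P⁻¹R mirrors the theorem statement, and the numerical margins stand in for the strict inequalities.

```python
# Hedged sketch: LMI-based observer-gain design in the spirit of Theorem 1.
# This is NOT LMI (8) of the paper, but a classical Lipschitz-observer LMI (cf. [16])
# for a triangular canonical form where the nonlinearity enters the last state.
import cvxpy as cp
import numpy as np

n = 4
A = np.diag(np.ones(n - 1), 1)               # chain of integrators (canonical form)
B = np.zeros((n, 1)); B[-1, 0] = 1.0         # channel through which the nonlinearity acts
C = np.zeros((1, n)); C[0, 0] = 1.0          # measured output x1
Lphi = 1.0                                    # assumed Lipschitz constant

P   = cp.Variable((n, n), symmetric=True)
R   = cp.Variable((n, 1))
eps = cp.Variable(nonneg=True)

lmi = cp.bmat([
    [A.T @ P + P @ A - C.T @ R.T - R @ C + Lphi**2 * eps * np.eye(n), P @ B],
    [B.T @ P, -eps * np.eye(1)],
])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n), lmi << -1e-6 * np.eye(n + 1)])
prob.solve()

if P.value is not None:
    L_gain = np.linalg.solve(P.value, R.value)   # L = P^{-1} R, as in the theorem statements
    print(prob.status, L_gain.ravel())
```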

3.2 2^n-LMIs optimization with minimum infimum of gain

Assumption 1 implies that the gradient ∇ is bounded within a compact set Φ, given by: where ϕi represents the partial derivative ∂φs/∂xi. It’s a compact set with its vertices contained in the set:

Consequently, the nonlinear error belongs to the set , and the Lyapunov derivative (9) can also be contained within a set related to Φ as below: (14) where .

By treating the Lipschitz nonlinear dynamics as a linear parameter-varying (LPV) part, it is possible to utilize a multiple-LMI technique to solve for the observer parameter matrix K without resorting to the high-gain method (the gain parameter is θ = 1 in this case). This approach achieves stability through the dominant linear component while handling the Lipschitz nonlinearity separately. Thus we have the following theorem:
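To make the vertex idea concrete before stating the theorem, the sketch below enumerates the 2^n vertices of the gradient box and imposes one LMI per vertex. It uses a generic LPV/LMI formulation in the spirit of [16] rather than the cascade LMIs (15) of this paper, and the specific matrices and Lipschitz bound are illustrative assumptions.

```python
# Hedged sketch of the vertex enumeration behind an LPV/LMI design (cf. [16]);
# the paper's cascade LMIs (15) are not reproduced, only the 2^n vertex idea.
import itertools
import cvxpy as cp
import numpy as np

n = 3
A = np.diag(np.ones(n - 1), 1)
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = np.zeros((1, n)); C[0, 0] = 1.0
Lphi = 0.5                                    # each partial derivative assumed in [-Lphi, Lphi]

P = cp.Variable((n, n), symmetric=True)
R = cp.Variable((n, 1))

constraints = [P >> 1e-6 * np.eye(n)]
for vertex in itertools.product([-Lphi, Lphi], repeat=n):       # 2^n gradient vertices
    Av = A + B @ np.array(vertex).reshape(1, n)                 # A + B * grad(phi) at this vertex
    constraints.append(Av.T @ P + P @ Av - C.T @ R.T - R @ C << -1e-6 * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
if P.value is not None:
    print(prob.status, np.linalg.solve(P.value, R.value).ravel())   # gain with theta = 1
```

As discussed after Theorem 2, feasibility of such a vertex family degrades as the Lipschitz constant grows, and the 2^n count quickly becomes the computational bottleneck.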

Theorem 2. Consider the error dynamics (6). Let θ0 represent the infimum of the gain. The state estimation error e will be asymptotically stable if there exist a scalar μ ∈ (0, 1), matrices for i = 1, ⋯, n − 1, and symmetric positive definite matrices for i = 1, ⋯, n − 1, such that the following 2^n linear matrix inequalities (LMIs) are feasible: (15) where

Once the LMIs are solved, L = P−1R, K = L 1(2n−2)×1, and θ ∈ {θ | θ ≥ θ0, θ0 = 1}.

Proof. Given that Φ is a compact set with vertices , it follows that for every ∇ ∈ Φ the following property is satisfied: thus

Let θ = 1; then the following inequality is satisfied for (9). Substituting (15) into it, we have

Finally, following the proof outlined in Theorem 1, we could establish the asymptotic stability of the state estimation error e.

Theorem 2 presents an alternative approach to determining the observer parameter K that eliminates the need to consider the gain effect. However, challenges may arise when applying this method, particularly when the Lipschitz constant is large. The optimization requires solving a large number of LMIs whose constraints vary significantly, which burdens the LMI solver considerably and may even render the task infeasible due to computational limitations.

Thus in the next section, by combining Theorem 1 and Theorem 2, the main result is achieved through the solution of a variable number of multiple LMIs, enabling the determination of the observer parameter K. Moreover, this approach ensures that the infimum of gain required is sufficiently modest.

3.3 2^js-LMIs optimization with limited infimum of gain

According to (13), it can be observed that the impact of the gain θ on the nonlinear component diminishes as the state order increases. This implies that the high-gain effect is more pronounced in lower-order states compared to higher-order ones.

It is natural to decompose the nonlinear error into lower-order and higher-order components: (16) where e[a,b] = [0, ⋯, 0, ea, ⋯, eb, 0, ⋯, 0] for b > a, represents the lower-order components for high-gain effect, and denotes the higher-order components associated with the LMIs effect, thus ∇ = ∇HG + ∇LMI.

This allows us to independently apply the high-gain effect and the LMIs effect to each component. By doing so, we can reduce both the number of required LMIs and the infimum of gain θ.
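Under the reading suggested by Remark 3 below, namely that only the js higher-order gradient directions are retained for the LPV/LMI part while the lower-order directions are covered by the gain θ, the vertex count drops from 2^n to 2^js. The following sketch only illustrates this bookkeeping; the index split and counts are assumptions consistent with that reading, not a reproduction of (17).

```python
# Hedged sketch of the decomposition idea: only the gradient directions kept for the
# LPV/LMI part are enumerated as vertices; the remaining lower-order directions are
# left to the high-gain term. The index split shown here is illustrative.
import itertools
import numpy as np

n, js, Lphi = 6, 2, 1.0
lmi_dims = range(n - js, n)                   # higher-order components -> LPV/LMI vertices
hg_dims  = range(0, n - js)                   # lower-order components  -> covered by the gain theta

vertices = []
for combo in itertools.product([-Lphi, Lphi], repeat=js):
    v = np.zeros(n)
    v[list(lmi_dims)] = combo                 # vertex of the reduced gradient box (grad_LMI)
    vertices.append(v)                        # grad_HG components (indices in hg_dims) stay zero here

print(len(vertices), "vertex LMIs instead of", 2**n)   # 4 instead of 64 for this choice of js
```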

Take . Additionally, we define a set and consider the vertices of this set, denoted by . Then we have the following main theorem.

Theorem 3. Consider the error dynamics (6). Let θ0 represent the infimum of the gain. The state estimation error e will be asymptotically stable if there exist a scalar μ ∈ (0, 1), an integer js ∈ {1, 2, ⋯, n}, matrices for i = 1, ⋯, n − 1, and symmetric positive definite matrices for i = 1, ⋯, n − 1, such that the following linear matrix inequalities (LMIs) are feasible: (17) where

Once the LMIs are solved, L = P−1R, K = L 1(2n−2)×1, and .

Proof. Following the steps in Theorem 2, the estimation error dynamics (7) can be contained in the set associated with (18)

For every , the inequality (19) holds; then, by substituting (17) and (19) into (18), we can express the derivative of the Lyapunov function as (20)

Similar to Theorem 1, if ‖P‖ > 1, defining , we have (21). When and θ > 2, it follows that and . Consequently, the diagonal dominance of is achieved, leading to ; thus the asymptotic stability of both and e can be established. In the case where ‖P‖ < 1, the asymptotic stability can be obtained directly, thereby concluding the proof.

Remark 3. The proposed method offers a significant improvement over Theorem 1 by reducing the gain to its 1/(1 + js)-th power, where js is an adjustable integer ranging from 1 to n. Furthermore, in comparison to Theorem 2, it effectively reduces the number of LMIs from 2^n to . This allows for a desirable balance between the number of LMIs that need to be solved and the desired gain magnitude, achieved through the tunable parameter js. Notably, when js = n, Theorem 3 simplifies to Theorem 2, while for js = 0 it further simplifies to Theorem 1. These observations highlight the flexibility and versatility of the proposed method.

We can observe from Theorem 3 that the component for exhibits a direct relationship with the selected gain parameter θ. Specifically, as the gain increases, the scale of decreases, thereby leading to more relaxed LMI constraints. Since the LMIs depend on the chosen gain while the gain infimum is itself obtained from the LMIs, we formulate an algorithm that iteratively solves the LMIs and then updates the infimum of the gain, ensuring a stable solution. Thus we present the parameter optimization algorithm for the Lower Power Cascade High-gain Observer (LPCHGO) (3) below:

Algorithm 1: Parameter Optimization for Observer Matrix K and Gain θ

1 Choose the value of js within the range 1 to n, the initial infimum of the gain , the stopping change rate of the gain dθ, and the maximum number of tolerable infeasible solutions nf;

2 Solve the linear matrix inequalities (LMIs) (17) with , obtaining K(1) and , regardless of the feasibility of the solution;

3 while  do

4  Solve the LMIs (17) with , obtaining K(i+1) and , regardless of the feasibility of the solution;

5  if nf consecutive infeasible solutions are encountered then

6   Go back to step 1 and reduce the value of js;

7  end

8  i = i + 1;

9 end

10 Set the observer parameter matrix as K = K(i + 1) and the infimum of the gain as ; choose a proper gain parameter θ > θ0 depending on the required convergence speed.

Remark 4. In the initial iterations of the algorithm, when solving step 2 with a small initial value of , the LMIs associated with the may become infeasible to solve due to the large scale of the set. However, after one or two iterations, the value of is optimized and adjusted to a reasonable magnitude, thereby improving the feasibility of the LMIs.
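A minimal sketch of the iteration in Algorithm 1 is given below. The function solve_lmis is a hypothetical stand-in for solving the LMIs (17) at a given gain infimum and js (for example with cvxpy, as sketched earlier); it is assumed to return the parameter matrix, the updated infimum, and a feasibility flag.

```python
def optimize_observer(solve_lmis, js, theta0_init, d_theta, n_fail_max):
    # Step 1 inputs: js, initial gain infimum, stopping change rate, infeasibility tolerance.
    theta0, n_fail = theta0_init, 0
    K, theta0_new, feasible = solve_lmis(theta0, js)          # step 2: first solve of the LMIs
    while abs(theta0_new - theta0) > d_theta:                 # step 3: iterate until the infimum settles
        theta0 = theta0_new
        K, theta0_new, feasible = solve_lmis(theta0, js)      # step 4: re-solve with the updated infimum
        n_fail = 0 if feasible else n_fail + 1
        if n_fail >= n_fail_max and js > 1:                   # steps 5-6: too many infeasible solves,
            return optimize_observer(solve_lmis, js - 1,      #   restart with a reduced js
                                     theta0_init, d_theta, n_fail_max)
    return K, theta0_new                                      # step 10: afterwards pick theta > theta0
```

Calling optimize_observer(my_lmi_solver, js=2, theta0_init=1.0, d_theta=1e-3, n_fail_max=3), with my_lmi_solver supplied by the user, would return the tuned K together with the gain infimum, after which θ > θ0 is chosen as in step 10.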

To compare the effectiveness of this method, especially for high-order systems, we contrasted it with the pole assignment method for the cascade high-gain observer structure (LPCHGO) [5] and the standard high-gain method (STDHGO) [1] across various dimensions under the same Lipschitz constant (Lφ = 1); the final results are presented in Table 1.

4 Application to two physical models

In this section, we use two physical models to demonstrate the applicability and performance of our method.

4.1 Single-link robot system

The first model is a common single-link robot system introduced in [21].

Following the parameters used in [7], the following equations describe the model of the single-link robot system: (22) The control input is given by: with the parameters listed in Table 2; additionally, the controller parameters are given as

Considering the 4th-order differential observability and uniform observability of (22) on the set , we can employ the following coordinate transformation to obtain a new system representation: (23)

By applying this diffeomorphism transformation, the resulting system is in the observable canonical form: (24) which is suitable for designing a High-Gain observer.

To demonstrate the effectiveness and superiority of the method proposed in this paper (LPCHGO/LMI), we conducted a comparative analysis with the original pole-assignment method presented in [5] (LPCHGO/POL), a filtered high-gain observer method introduced in [7] (FILHGO), and the standard high-gain observer first introduced in [1] (STDHGO).

The Lipschitz constant can be calculated as Lφ = 2, and we choose js = 2. The parameters of the above four methods are listed in Table 3. Note that the poles used in the pole-assignment method are chosen as (−0.1, −0.2, −0.2, −0.3, −0.3, −0.4, −0.4, −0.5).

Table 3. Design parameters of three high-gain observer methods.

https://doi.org/10.1371/journal.pone.0307637.t003

The initial condition for system (22) is x(0) = [0.5, 0, 0, 0]T, and all observers are initialized at zero. The responses of system (22) and the observers with no external disturbance (v = 0) are shown in Fig 2. In particular, the estimated states track the real states accurately and quickly.

On the other hand, in Fig 3 the measurement noise is chosen as (25), where ρ(−1, 1) represents a random noise signal bounded in the range (−1, 1).

Owing to the gain reduction, it can be clearly seen that the impact of measurement noise is significantly reduced.

4.2 Vehicle trajectory estimation

In the second example, we consider vehicle trajectory estimation [22] and compare our method with the method proposed in [22]. The two-dimensional motion dynamics of the vehicle are expressed as follows: (25) where X and Y denote the two-dimensional coordinates of the vehicle's tracking point, V represents the magnitude of the tracking point's velocity, A denotes the magnitude of acceleration, ϕ stands for the yaw angle, β represents the slip angle, and lr signifies the distance from the front wheels to the tracking point. It is assumed that the vehicle's acceleration and the rate of change of the slip angle are nearly zero. The canonical observable form of (25) after the coordinate transformation can be expressed as below. (26) Following the approach in [22], we set the yaw angle rate to a fixed value, resulting in the system parameters taking the following form: (27) where the Lipschitz constant is Lφ = 1.25, and the parameters of the two LPCHGOs are and θ = 6.1220. The initial values are chosen as X = −30 m, Y = 2 m, A = −1 m/s², , φ = −0.5 rad, lr = 1 m, . The estimation result is shown in Fig 4.

The comparison with the original method is carried out in the presence of measurement noise: the noise v = 0.5ρ(−0.5, 0.5) + 0.05 sin(100t) is added to the position measurements of the X and Y axes. The result is shown in Fig 5.
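For reproducibility, this noise signal can be generated as follows (the sampling step and horizon are illustrative choices):

```python
# Noise signal of Fig 5: v = 0.5 * rho(-0.5, 0.5) + 0.05 * sin(100 t),
# where rho(a, b) is a random signal uniformly distributed on (a, b).
import numpy as np

t = np.arange(0.0, 10.0, 1e-3)                         # illustrative simulation horizon and step
rho = np.random.uniform(-0.5, 0.5, size=t.shape)       # bounded random component
v = 0.5 * rho + 0.05 * np.sin(100.0 * t)               # additive noise on the X/Y measurements
```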

Fig 5. The vehicle trajectory estimation result with noise v = 0.5ρ(−0.5, 0.5) + 0.05sin(100t).

https://doi.org/10.1371/journal.pone.0307637.g005

5 Conclusions

In this paper, we proposed an LMI/LPV method for tuning the parameters of a 2nd-order cascade HGO structure, which has been shown to have a better low-pass characteristic than the standard structure. By applying the LMI/LPV technique, the gain can be significantly reduced; the stability analysis and the simulation results support this conclusion. In future work, a time-varying gain could be utilized to improve the convergence speed.

References

  1. Gauthier JP, Kupka IA. Observability and observers for nonlinear systems. SIAM Journal on Control and Optimization. 1994;32(4):975–994.
  2. Khalil HK. High-gain observers in feedback control: Application to permanent magnet synchronous motors. IEEE Control Systems Magazine. 2017;37(3):25–41.
  3. Meng S, Meng F, Zhang F, Li Q, Zhang Y, Zemouche A. Observer design method for nonlinear generalized systems with nonlinear algebraic constraints with applications. Automatica. 2024;162:111512.
  4. Mousavi S, Guay M. A Low-power Multi High-Gain Observer Design for a Class of Nonlinear Systems. IEEE Transactions on Automatic Control. 2024.
  5. Astolfi D, Marconi L. A high-gain nonlinear observer with limited gain power. IEEE Transactions on Automatic Control. 2015;60(11):3059–3064.
  6. Farza M, Ragoubi A, Hadj Saïd S, M'Saad M. Improved high gain observer design for a class of disturbed nonlinear systems. Nonlinear Dynamics. 2021;106:631–655.
  7. Tréangle C, Farza M, M'Saad M. Filtered high gain observer for a class of uncertain nonlinear systems with sampled outputs. Automatica. 2019;101:197–206.
  8. Cocetti M, Tarbouriech S, Zaccarian L. High-gain dead-zone observers for linear and nonlinear plants. IEEE Control Systems Letters. 2018;3(2):356–361.
  9. Astolfi D, Marconi L, Praly L, Teel AR. Low-power peaking-free high-gain observers. Automatica. 2018;98:169–179.
  10. Astolfi D, Zaccarian L, Jungers M. On the use of low-pass filters in high-gain observers. Systems & Control Letters. 2021;148:104856.
  11. Astolfi D, Marconi L, Teel A. Low-power peaking-free high-gain observers for nonlinear systems. In: 2016 European Control Conference (ECC). IEEE; 2016. p. 1424–1429.
  12. Khalil HK. Cascade high-gain observers in output feedback control. Automatica. 2017;80:110–118.
  13. Boizot N, Busvelle E, Gauthier JP. An adaptive high-gain observer for nonlinear systems. Automatica. 2010;46(9):1483–1488.
  14. Alessandri A, Rossi A. Time-varying increasing-gain observers for nonlinear systems. Automatica. 2013;49(9):2845–2852.
  15. Zemouche A, Zhang F, Mazenc F, Rajamani R. High-gain nonlinear observer with lower tuning parameter. IEEE Transactions on Automatic Control. 2018;64(8):3194–3209.
  16. Zemouche A, Boutayeb M. On LMI conditions to design observers for Lipschitz nonlinear systems. Automatica. 2013;49(2):585–591.
  17. Bernard P, Praly L, Andrieu V, Hammouri H. On the triangular canonical form for uniformly observable controlled systems. Automatica. 2017;85:293–300.
  18. Gauthier JP, Hammouri H, Othman S. A simple observer for nonlinear systems applications to bioreactors. IEEE Transactions on Automatic Control. 1992;37(6):875–880.
  19. Bernard P, Andrieu V, Astolfi D. Observer design for continuous-time dynamical systems. Annual Reviews in Control. 2022;53:224–248.
  20. Bell HE. Gershgorin's theorem and the zeros of polynomials. The American Mathematical Monthly. 1965;72(3):292–295.
  21. Isidori A. Nonlinear control systems. New York: Springer; 1995.
  22. Alai H, Zemouche A, Rajamani R. Vehicle Trajectory Estimation Using a High-Gain Multi-Output Nonlinear Observer. IEEE Transactions on Intelligent Transportation Systems. 2024;25(6):5733–5742.