
Event-triggered iterative learning control for output constrained multi-agent systems

  • Wei Cao ,

    Contributed equally to this work with: Wei Cao, Jinjie Qiao

    Roles Conceptualization, Project administration, Supervision, Writing – review & editing

    yiyuqq168@163.com

    Affiliation College of Computer and Control Engineering, Qiqihar University, Qiqihar, China

  • Huanhuan Li ,

    Roles Conceptualization, Software, Validation, Writing – original draft

    ‡ HL and YZ also contributed equally to this work.

    Affiliation College of Computer and Control Engineering, Qiqihar University, Qiqihar, China

  • Jinjie Qiao ,

    Contributed equally to this work with: Wei Cao, Jinjie Qiao

    Roles Funding acquisition, Writing – review & editing

    Affiliation College of Economics and Management, Qiqihar University, Qiqihar, China

  • Yi Zhu

    Roles Data curation, Investigation

    ‡ HL and YZ also contributed equally to this work.

    Affiliation College of Computer and Control Engineering, Qiqihar University, Qiqihar, China

Abstract

An event-triggered iterative learning consensus tracking control strategy is proposed for output-constrained nonlinear discrete-time multi-agent systems. First, a pseudo partial derivative (PPD) estimation algorithm is constructed from the input and output data of the system, and an output observer is designed based on the estimated PPD. Second, a deadband controller is designed based on the output estimation error of the observer; the event-trigger condition is determined by comparing the output estimation error with the value of the deadband controller function, and the agents communicate only when the trigger condition is satisfied. Then, an event-triggered iterative learning control algorithm is constructed from the estimated PPD, the trigger condition, and the measurement error, and its convergence is proved using a Lyapunov function. The proposed algorithm enables the output-constrained multi-agent system to consistently and completely track the desired trajectory without requiring real-time communication. Finally, simulation results further validate the effectiveness of the control protocol.

1. Introduction

A multi-agent system consists of a number of individual agents with sensing and execution capabilities that accomplish complex tasks through inter-agent coordination. Multi-agent systems [1] are autonomous, fault-tolerant, collaboratively distributed, and scalable, and they offer higher performance and efficiency than single systems. Many experts in the control field have conducted extensive research on multi-agent systems, for example, the cluster control problem [2,3], the consensus problem [4–7], and the formation control problem [8–10]. Among these, consensus is the basis for the study of other problems in multi-agent systems. Consensus of a multi-agent system refers to the convergence of one or more states of all the agents in the network [11–13]. Most existing results achieve consensus only in the time domain, i.e., the states and outputs of the system converge as time approaches infinity. In practice, however, there exists a class of control systems that perform specific tasks repeatedly or periodically over a finite period of time. For such systems, conventional control algorithms are no longer applicable.

Iterative learning control is an effective method for achieving complete tracking over a finite time interval [17–22]. It is suitable for systems with repetitive operation characteristics, and its controller structure is simple with low requirements on the system model. In view of these advantages, many scholars have studied the consensus problem of multi-agent systems using iterative learning control methods. For example, [23] used an iterative learning control scheme to study the consensus problem of discrete linear multi-agent systems. In [24], an iterative learning algorithm unified in both the continuous and discrete time domains is proposed, which ensures that the output of the system converges to the desired trajectory within a finite time interval. Similarly, [25] investigated the consensus problem of a class of nonlinear multi-agent systems and utilized an iterative learning control algorithm to achieve full tracking of the desired trajectory. In [26], D-type and PD-type learning laws with iterative initial states are used to solve the consensus tracking problem of nonlinear multi-agent systems with impulsive inputs. In [27], an adaptive iterative learning control method is investigated to solve the consensus problem of nonlinear multi-agent systems under state constraints. In [28], an iterative learning control method is adopted to solve the consensus problem of continuous linear multi-agent systems under output constraints, and convergence is proved using the norm and disk theorem. In [29], a distributed iterative learning control algorithm is proposed to solve the consensus problem of discrete nonlinear multi-agent systems with output constraints.

Building on the above literature, and with a view to saving system resources and reducing computational energy loss, many scholars have in recent years proposed event-triggered iterative learning control strategies for the consensus problem of multi-agent systems. For example, [30] proposed an event-triggered iterative learning control method for the consensus problem of nonlinear discrete-time multi-agent systems. In [31], an event-triggered distributed model-free iterative learning control strategy is proposed for the consensus problem of nonlinear multi-agent systems with random link packet loss. These control algorithms can indeed reduce system resource usage and computational energy loss, but they are not suited to the saturation problem of actuators and sensors in the communication process. For this reason, [32] proposed an event-triggered control protocol to study the consensus problem of nonlinear multi-agent systems with relative state constraints. In [33], static and dynamic event-triggered strategies are proposed to deal with the multi-agent consensus problem with output constraints.

Analyzing the above literature, [23,24,28] studied the consensus problem of linear multi-agent systems using iterative learning control. In practical applications, however, the relationships between the state variables of a system are mostly nonlinear, so these algorithms are not applicable to nonlinear, strongly coupled systems. In [30,31], although event-triggered iterative learning control algorithms are used to achieve complete tracking in finite time, they do not take output constraints into account, which can degrade performance or even destabilize the system. The control algorithms in [32,33] solve the consensus tracking and resource-saving problems under output constraints, but they achieve trajectory tracking only as time tends to infinity and do not consider trajectory-tracking tasks repeated over a finite period of time. [29] used an iterative learning control algorithm to solve the consensus problem under output constraints, but did not consider limited system space and bandwidth: if the system continues to receive and send data periodically even in the ideal state, a large amount of space and computational energy is wasted.

In view of the above analysis, to enable output-constrained nonlinear multi-agent systems to achieve consensus and complete tracking in finite time while saving system resources and computational energy, an event-triggered distributed iterative learning control algorithm is proposed. The main contributions of this method are reflected in the following three points: 1) The time-varying gain of the control algorithm consists of PPDs, so the tracking-error percentage can be adjusted in real time to control changes in the input data. The algorithm does not need a specific mathematical model of the controlled object, has few parameters, and has a simple structure. 2) A sufficient condition for system convergence is given; an output observer is designed using the PPD to monitor output data changes in real time, solving the consensus convergence problem under output constraints. 3) A deadband controller is designed to avoid the Zeno phenomenon. By comparing the output estimation error with the value of the deadband controller function, an event-triggered mechanism is designed under which each agent only needs to judge whether to communicate according to its own trigger condition, so that inter-agent communication does not interfere with the trigger conditions. At the same time, the algorithm reduces energy loss and saves system resources.

This paper is organized as follows: Section 2 introduces the main contents of the study, including graph theory preliminaries, the dynamic model of the agents, and the output constraint description; Section 3 covers the controller design, including the definition of the event-triggered PPD, the event communication mechanism, the controller itself, and the proofs of convergence of the algorithm and the theorems; Section 4 verifies the effectiveness of the proposed algorithm through numerical simulation; Section 5 summarizes the paper and gives an outlook on future research.

2. Problem description

2.1. Preliminaries

The communication topology between agents in a multi-agent system can be described by graph theory. The graph G is defined as to represent the communication topology between agents, where V, E, and A are, respectively, the vertex set, the edge set, and the adjacency matrix of the graph, , , . An element in E indicates that the corresponding agents can communicate with each other, is the weight of the edge; if agent j can receive information from agent i, then ; otherwise . The neighbor set of an agent is , the degree matrix is , where is the sum of the elements of the jth row of the adjacency matrix A, and the Laplacian matrix of the graph G is . The complete topology with virtual leader 0 can be described as , where denotes the edge set of the graph and denotes its adjacency matrix. Considering the relationship between the virtual leader and a follower, if agent j is directly connected to the virtual leader, then ; otherwise , and the matrix is denoted as .
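As a concrete illustration of these definitions, the adjacency, degree, and Laplacian matrices and the leader-connection matrix can be assembled as follows. This is a minimal sketch: the 5-follower chain topology and the leader connections chosen here are assumptions for illustration, not the paper's Fig 1.

```python
import numpy as np

# Assumed 5-follower chain topology (illustrative only).
# A[j, i] = 1 if follower j receives information from follower i.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix: row sums of A on the diagonal
L = D - A                    # Laplacian matrix of the follower graph

# d_j = 1 if follower j is directly connected to the virtual leader
# (assumed here to be followers 1 and 3, as in the paper's Example 1).
d = np.diag([1.0, 0.0, 1.0, 0.0, 0.0])
```

By construction, every row of a graph Laplacian sums to zero, which is a quick sanity check on any topology entered by hand.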

2.2. Model statement

In this paper, we study a class of nonlinear discrete-time multi-agent systems consisting of N agents, where the dynamics of the jth agent is described as follows:

(1)

where is the time interval, denotes the jth agent, k represents the kth iteration, , , are, respectively, the state vector, control input vector, and measurement output vector of the system, B is the input matrix of the system, and is an unknown nonlinear function. is the output constraint threshold of the jth agent; the output constraint value of each agent is expressed as , and the output constraint function is defined as follows:

(2)

For the purpose of this analysis, it is assumed that the system satisfies the following conditions:

Assumption 1 [11] is a continuous nonlinear function, and the partial derivative with respect to exists.

Assumption 2 [14–16] The system satisfies the generalized Lipschitz continuity condition along the iteration axis, i.e., there exists a such that , where ; ; and , .

Lemma 1 [14] The system can be represented by a dynamic linearized model in compact form, subject to assumptions 1 and 2, as follows:

(3)

where , is the pseudo partial derivative, and time-varying, .

Assumption 3 [29] From the dynamic linearization model, can be positive or negative, and in this paper we assume that . Define the consensus measurement error of the agent j at the kth iteration as:

(4)

where is an estimate of . Let denote the tracking error of the agent; the actual tracking error is . Define to denote the estimation error of an agent, where are the upper and lower bounds of the output constraint function.
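The distributed consensus measurement error defined above combines neighbor information with leader information. Since Eq. (4) itself is not reproduced in this text, the sketch below uses the standard form found in distributed ILC, xi_j = sum_i a_ji (y_hat_i − y_hat_j) + d_j (y_d − y_hat_j); the exact expression in the paper may differ:

```python
import numpy as np

def consensus_error(y_hat, y_d, A, d):
    """Standard distributed consensus measurement error (sketch of Eq. (4)).

    y_hat : observer output estimates of the N followers
    y_d   : desired (leader) output at this time instant
    A     : follower adjacency matrix, A[j, i] = a_ji
    d     : leader-connection weights, d[j] > 0 iff follower j sees the leader
    """
    y_hat = np.asarray(y_hat, dtype=float)
    xi = np.zeros_like(y_hat)
    for j in range(len(y_hat)):
        # neighbor disagreement plus (possibly zero) leader tracking term
        xi[j] = np.sum(A[j] * (y_hat - y_hat[j])) + d[j] * (y_d - y_hat[j])
    return xi
```

Note that only followers with d[j] > 0 use the desired trajectory directly; the others rely purely on neighbor disagreement, which is why Assumption 5 (a spanning path from the leader) is needed.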

Definition 1 The multi-agent system is said to reach consensus if and only if the difference between the output trajectory of each agent and the desired trajectory is bounded by a small positive constant, i.e.

(5)

where , .

Assumption 4 [29] The desired output of the system at each moment in a finite time interval is within a measurable range, therefore . Define

, , satisfying

Assumption 5 In the graph , all the followers have direct or indirect access to the virtual leader’s trajectory information.

Assumption 6 The initial state of the agent remains the same at each iteration, i.e., , is a given initial state.

Remark 1 is a condition for proving that is bounded.

Remark 2 Assumption 5 is a necessary communication condition for achieving consensus and complete tracking of multi-agent systems. If an agent cannot directly or indirectly obtain the trajectory information of the virtual leader, it becomes an isolated agent that cannot track the desired trajectory, and consensus tracking of the system cannot be achieved.

Remark 3 Assumption 6 is the basic condition for an iterative learning control algorithm to achieve complete tracking of the system trajectory; it indicates that the system strictly repeats the initial state at each iteration.

The control objective of this paper is to construct an event-triggered distributed iterative learning control algorithm for nonlinear discrete-time multi-agent systems satisfying Assumptions 1–6, under output constraints and when only some of the agents can acquire the desired trajectory information, so that all agents completely track the desired trajectory within a finite time interval.

For a better understanding of the meaning of the parameters mentioned below, the parameters are given in Table 1.

3. Controller design

3.1. Event-triggered PPD update

Considering the data transmission process with output constrained (2), the following event-triggered pseudo partial derivative (PPD) updating law is devised:

(6)(7)

is the vth trigger moment, is the th trigger moment.

When , or ,

(8)

where is an estimate of the pseudo partial derivative , ρ is the step factor, and , is a weighting factor to regulate excessive variations in the pseudo partial derivative estimate.

Theorem 1 If Assumption 3 holds, the update (6) ensures that is bounded.

Proof:

3.1.1. Trigger moments.

, defining , then, according to the event-triggered PPD update algorithm (6), we obtain:

(9)

owing to:

, , and evidently . So there must be a constant such that:

(10)

The compact dynamic linearized model of the system shows that . And then from assumption 1 it follows that:

(11)

Therefore, from (9)–(11), it can be obtained:

(12)

which further gives . Thus is bounded; by Lemma 1, is bounded, so must be bounded. It can therefore be assumed that .

3.1.2. Non-triggering moments.

, from (6), ; by Lemma 1, is bounded, so must also be bounded.

This ends the proof.

3.2. Event-triggered communication mechanism design

Design the output observer of the system model as follows:

(13)

where is the output of the observer, is an input to the system, is the output estimation error, χ is the feedback gain of the observer.

(14)

where is the th trigger moment.

(15)(16)

where .

From (14) and (16), the input and output values of the observer are related to whether the system is triggered: when the system is triggered, the inputs and outputs of the observer are updated normally; when the system is not triggered, the inputs and outputs of the observer remain at their values from the last trigger moment.

The event triggering conditions for the agent j are defined as follows:

(17)

where is the output gain error of the observer, defined as:

(18)

is the output estimation error of the observer, defined as:

(19)

is the deadband controller function, defined as:

(20)

τ is a very small positive constant, which will be analyzed in Theorem 2. , is the PPD estimate at the last trigger moment.
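The triggering logic described above reduces to a simple test: an agent communicates only when its output estimation error exceeds the current deadband value, and otherwise holds the last transmitted data. The sketch below shows that decision plus the zero-order hold of (14)/(16); the deadband value itself comes from (20), which is not reproduced here, so it is passed in as a precomputed number:

```python
def should_trigger(est_err, deadband_value):
    """Event-triggering test in the spirit of condition (17): transmit only
    when the output estimation error exceeds the deadband controller value.
    A zero deadband value breaks the condition and stops triggering,
    which is how Remark 4 rules out Zeno behavior."""
    return abs(est_err) > deadband_value

def held_value(current, last_transmitted, triggered):
    """Zero-order hold used between trigger moments (cf. (14) and (16)):
    the observer keeps the value from the last trigger when not triggered."""
    return current if triggered else last_transmitted
```

A typical loop would compute the estimation error from the observer (13), evaluate `should_trigger`, and update the held input/output only on a trigger.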

Theorem 2 If the compact-form dynamic linearized model of the system satisfies Assumptions 1–3, and are updated using the event-triggered PPD updates (6) and (7), and the system satisfies event-triggering condition (17), then the consensus measurement error is bounded.

Proof:

Substituting (13) into (19), and then from (3), (15), and (18), we can get:

(21)

where . Owing to , , and , it follows that is bounded. From this, is bounded, and there must exist a constant such that . Next, the boundedness of is analyzed at both triggering and non-triggering moments.

3.2.1. Triggering moments.

When the system satisfies the event triggering conditions, from (14) and (16), we can obtain:

, , therefore, it is clear that , ; substituting them into (21) gives:

(22)

Define the Lyapunov function as follows:

,

therefore

(23)

Substituting (22) into (23) can obtain:

,

once again

, consequently,

(24)

once again ; according to the compact-form dynamic linearized model (3) of the system, we obtain:

(25)

Let , . Consequently, it follows that when the following equation holds:

(26)

there is , it further follows that is bounded.

3.2.2. Non-triggering moments.

At non-triggering moments, , ; substituting (21) into (23) gives:

,

according to the inequality , the above equation can be obtained:

(27)

where . Then according to (17) and (20), it can be obtained:

(28)

where , . From (23) and (27), it can be obtained:

(29)

where if , then we can obtain:

(30)

From (28), χ must satisfy when , and according to (30), converges; therefore, is bounded.

This ends the proof.

Remark 4 From (17), (20), and (26), when the system is triggered several times in a row or , the deadband controller function value is 0, which breaks event-triggering condition (17); the system then leaves the event-triggering state and stops triggering. Therefore, the designed deadband controller can effectively avoid the occurrence of Zeno-like behavior.

3.3. Control protocol design

Design the event-triggering distributed control protocol as follows:

(31)

where , β is the stability weight, whose exact range will be given in theorem 3.

Remark 5 The consensus measurement error, Equation (4), is determined from the topology and the output observer value of each agent. Through the event-triggering condition (17), the state updates of the output observer (13) and the PPD estimate (6) are realized by switching between the 1 and 0 states of , which in turn realizes the state update of the control protocol (31).
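Since the control protocol (31) itself is not reproduced in this text, the sketch below uses the standard MFAC-ILC form u_{k+1} = u_k + β·φ̂/(λ + φ̂²)·ξ, gated by the event trigger as Remark 5 describes; the gains and structure are assumptions, and the paper's protocol may contain additional terms:

```python
def ilc_update(u_prev_iter, phi_hat, xi, beta=0.5, lam=1.0, triggered=True):
    """Event-triggered ILC law (sketch of protocol (31), standard MFAC-ILC form).

    u_prev_iter : input applied at this time instant on the previous iteration
    phi_hat     : current PPD estimate (the time-varying gain of Remark 5)
    xi          : consensus measurement error (4) at this instant
    beta, lam   : stability weight and regularization weight
    triggered   : trigger state from condition (17); when False the input
                  is simply held, matching Section 3.3.1
    """
    if not triggered:
        return u_prev_iter
    return u_prev_iter + beta * phi_hat / (lam + phi_hat**2) * xi
```

The denominator λ + φ̂² keeps the learning gain bounded even when the PPD estimate grows, which is what allows the substochastic-matrix argument of Theorem 3 to go through.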

Lemma 2 If is a substochastic matrix varying along the iteration axis and its diagonal elements are all positive, then, using M to represent all possible substochastic matrices , one obtains:

,

where , P can be arbitrarily selected from the .

Theorem 3 If the system model satisfies Assumptions 1–6, and the event-triggering mechanism (17)–(20) is utilized to realize the state update of the control protocol (31), then when satisfies the equation , the estimation error of the multi-agent system is bounded, i.e., the tracking errors of all the agents are bounded.

Proof: defining

(32)(33)

From the above equation and Assumption 4, it is easy to obtain . For simplicity of analysis, write , according to . The consensus measurement error (4) can be rewritten as follows:

(34)

For the purpose of proof analysis, define the following set of vectors:

,

,

,

,

,

.

Therefore, Equation (34) can be expressed in vector form as follows:

(35)

The convergence of the tracking error of the system is analyzed at triggering and non-triggering moments.

3.3.1. Non-triggering moments.

At non-triggering moments, , and the control input equals the input value at the last trigger. This increases the estimation error of the observer output, which further increases the system tracking error; the value of the deadband controller function is then no longer 0, the event-triggering condition is satisfied, and the system enters the event-triggering state.

3.3.2. Triggering moments.

When the system is at the triggering moments, , according to can obtain:

(36)

once again , which leads to

(37)

where , , and there is satisfying the following equation

(38)

Therefore, according to and the range of values of β in Theorem 3, the row sums of the matrix must be less than 1, so the matrix is a substochastic matrix. It can be further obtained that:

(39)

From the above analysis, χ and are bounded, so there must be a constant ω such that , .

From lemma 2 and (37), we can obtain:

where defining , therefore, there is , so it follows that

The above analysis shows that , i.e., the tracking estimation error is bounded. It follows that the tracking error is bounded.

This ends the proof.

Remark 6 From , it can be seen that the upper bound of the tracking error is affected by the parameters and χ. χ is the feedback gain of the observer, and b and r are, respectively, the bounds on the system inputs and the pseudo partial derivatives. Once the system model is determined, is essentially unchanged, and the tracking error can be altered by changing the value of χ. The number of system triggers can be further adjusted to affect the convergence speed of the system.

4. Simulation results and discussion

In this section, the effectiveness of the proposed scheme is verified on a discrete-time nonlinear single-input single-output multi-agent system and a discrete-time nonlinear multi-input multi-output system, each consisting of 1 leader and 5 followers. The simulation environment: Windows 10, x64 processor, MATLAB R2020a.

Example 1. The effectiveness of the proposed scheme is verified on a discrete-time nonlinear single-input single-output multi-agent system, modeled as follows:

where is the time interval, denotes the jth agent, . The desired trajectory is given, the communication topology of this paper is shown in Fig 1.

As can be seen from the communication topology diagram, agents 1 and 3 have direct access to the leader's information, while agents 2, 4, and 5 do not; therefore, . The Laplacian matrix of Fig 1 is:

The output limitation thresholds of the agents in the system are set to , respectively. The state of each agent at the initial moment of the simulation is given separately:

The initial values are set as , , and the parameters are set to , , , , , , . Since the maximum value of the diagonal elements of L is 3, set . The experimental results are analyzed as follows:

Fig 2 gives the desired trajectory and the output of each agent at the 20th iteration. It can be seen that all five agents exhibit the output-constrained phenomenon under the event-triggering mechanism, and the output constraint has a significant impact on the system.

Figs 3 and 4 show the outputs of the system at 200 and 500 iterations. At 200 iterations (Fig 3), under the control algorithm of this paper, the system output gradually approaches the desired output through continuous correction of the system input by the consensus measurement error; the system tracking error gradually decreases and the output-limitation phenomenon weakens, but a large error from the desired trajectory remains. As the number of iterations increases, at the 500th iteration the output of each agent completely tracks the desired trajectory within the finite time interval.

Fig 5 gives the triggering moments of each agent. The triggering moments of each agent are intermittent, which effectively verifies that the designed deadband controller avoids Zeno behavior.

Fig 6 shows the maximum output estimation error of the system. From Fig 6, the maximum output estimation error reaches 0 at about 300 iterations, indicating that the output observer designed in this paper can effectively estimate the system output.

Fig 6. Maximum output estimation error of the observer.

https://doi.org/10.1371/journal.pone.0315209.g006

The event trigger rate is defined to reflect the number of communication requests from different agents along the iteration axis: , where denotes the number of event-triggered communications of agent j during the iteration process and N denotes the number of iterations of the system. The event-triggering rate of each agent is shown in Table 2.
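The definition just given is a simple ratio; as a sketch (with hypothetical counts, not the paper's Table 2 data), it can be computed as:

```python
def trigger_rate(trigger_counts, n_iterations):
    """Event trigger rate per agent: number of event-triggered
    communications divided by the total number of iterations N."""
    return [count / n_iterations for count in trigger_counts]

# Hypothetical example: two agents over 500 iterations.
rates = trigger_rate([100, 250], 500)  # -> [0.2, 0.5]
```

A rate well below 1 indicates that the event-triggered scheme communicates far less often than a time-driven scheme, which is the resource saving claimed for the comparison with [29].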

To measure the impact of the algorithm of this paper on control effectiveness, the algorithms of [29,30] are compared with the proposed algorithm. Fig 7a shows the maximum tracking error curve of the algorithm in [30], Fig 7b that of the algorithm in [29], and Fig 7c that of the algorithm in this paper. Comparing Figs 7a and 7c, the control algorithm in this paper makes the maximum tracking error of the system converge to 0 within a finite period of time using a small number of iterations, indicating that it can effectively solve the consensus tracking problem of multi-agent systems under output constraints and has good robustness. Comparing Figs 7b and 7c, the algorithm in this paper has a control effect similar to that of [29], but, as Fig 8 shows, it has a low triggering rate and outperforms the time-driven iterative learning control algorithm in saving space resources and reducing energy loss. Therefore, the proposed control algorithm not only has good convergence performance but also better saves space resources and reduces energy loss.

Fig 7. Comparison of maximum tracking error along the iteration axis.

(a) No output constraint and no event trigger. (b) Output constraint and no event trigger. (c) Output constraint and event trigger.

https://doi.org/10.1371/journal.pone.0315209.g007

From Remark 6, it can be seen that the output tracking error of each agent is related to the value of χ. Fig 9 shows the maximum tracking error curve and trigger moments of each agent for , and Fig 10 shows the same for .

Fig 9. Maximum tracking error and trigger moment for each agent ().

(a) Trigger moments for each agent. (b) Maximum tracking error along the iteration axis.

https://doi.org/10.1371/journal.pone.0315209.g009

Fig 10. Maximum tracking error and trigger moment for each agent ().

(a) Trigger moments for each agent. (b) Maximum tracking error along the iteration axis.

https://doi.org/10.1371/journal.pone.0315209.g010

According to Figs 9a and 10a, the triggering moments of each agent are intermittent, so the designed deadband controller can avoid the Zeno phenomenon. As analyzed in Table 3, the number of triggers is larger for , and the maximum tracking error converges more slowly; the number of triggers is smaller for , but more iterations are needed to reach bounded stability, i.e., the system takes longer to stabilize, as observed in Figs 9b and 10b. It can be seen that the value of χ affects system performance.

Table 3. Trigger counts of each agent and average trigger counts of all agents (χ takes different values).

https://doi.org/10.1371/journal.pone.0315209.t003

Example 2. The effectiveness of the proposed scheme is verified on a discrete-time nonlinear multi-input multi-output system, modeled as follows:

where is the time interval, denotes the jth agent, . The desired trajectories , are given. The state of each agent at the initial moment of the simulation is given. The output limitation thresholds of the agents are set to , . The communication topology is the same as in Example 1.

The initial values are set as , . The parameters are set to , , , , , , . Since the maximum value of the diagonal elements of L is 3, set . The experimental results are analyzed as follows:

Analyzing Figs 11a–d and 12a–d: at 30 iterations, both outputs of the system exhibit an obvious output-constrained phenomenon. As the number of iterations increases, at 60 iterations the constrained phenomenon of the first output disappears while the second output is still constrained. Both outputs of the system completely track the desired trajectories at 500 iterations.

Fig 11. First output and maximum tracking error for each agent.

(a) Output of each agent at time . (b) Output of each agent at time . (c) Output of each agent at time . (d) Maximum tracking error along the iteration axis.

https://doi.org/10.1371/journal.pone.0315209.g011

Fig 12. Second output and maximum tracking error for each agent.

(a) Output of each agent at time . (b) Output of each agent at time . (c) Output of each agent at time . (d) Maximum tracking error along the iteration axis.

https://doi.org/10.1371/journal.pone.0315209.g012

From Figs 11d and 12d, the first maximum tracking error reaches 0 at about 100 iterations, indicating that the first output of the system completely tracks the desired trajectory; the second maximum tracking error reaches 0 at about 200 iterations, indicating that the second output completely tracks the desired trajectory.

Analyzing Figs 13 and 14, the system trigger moments are intermittent over a certain period of time, proving that the designed deadband controller can effectively avoid the occurrence of the Zeno phenomenon.

Fig 13. The first output trigger moment of each agent.

https://doi.org/10.1371/journal.pone.0315209.g013

Fig 14. The second output trigger moment of each agent.

https://doi.org/10.1371/journal.pone.0315209.g014

From the simulation results of Examples 1 and 2, the control algorithm in this paper can solve the consensus problem not only of single-input single-output multi-agent systems but also of multi-input multi-output multi-agent systems under output constraints.

5. Conclusion

In this paper, an event-triggered distributed iterative learning control algorithm is proposed for the consensus problem of output-constrained nonlinear multi-agent systems, and the convergence of the control algorithm is proved using a Lyapunov function. The algorithm has a simple structure and few parameters, does not need model information of the controlled object, and makes output-constrained multi-agent systems track the desired trajectory consistently and completely within a finite time interval without real-time communication. The designed output observer solves the problem that measured values are not easy to obtain under output constraints. The event-triggered mechanism effectively saves system space and reduces computational energy loss, and the designed deadband controller effectively avoids Zeno behavior. Finally, it is verified that the control algorithm can also solve the consensus problem of multi-input multi-output multi-agent systems.

This paper mainly addresses consensus tracking and resource saving for nonlinear systems under output constraints. The proposed event-triggered iterative learning control algorithm does not consider noise interference, which may increase the tracking error and the number of iterations. How to further reduce the number of iterations and improve the convergence speed of the system under noise interference will be one of the future research directions.

Supporting information

S1 Data. Simulation data for the figures and tables.

https://doi.org/10.1371/journal.pone.0315209.s001

(ZIP)

  33. 33. Huang Y, Yue X, Wang J, Ma K, Huang Z. Distributed fuzzy adaptive event‐triggered finite‐time consensus tracking control for uncertain nonlinear multi‐agent systems with asymmetric output constraint. Intl J Robust & Nonlinear. 2022;33(1):440–65.