Model-Free Event-Triggered Optimal Consensus Control of Multiple Euler-Lagrange Systems via Reinforcement Learning

Publisher:
Institute of Electrical and Electronics Engineers
Publication Type:
Journal Article
Citation:
IEEE Transactions on Network Science and Engineering, vol. 8, no. 1, pp. 246-258, 2021
Issue Date:
2021
This paper develops a model-free approach to the event-triggered optimal consensus of multiple Euler-Lagrange systems (MELSs) via reinforcement learning (RL). First, an augmented system is constructed by defining a pre-compensator to circumvent the dependence on system dynamics. Second, the Hamilton-Jacobi-Bellman (HJB) equations are used to derive the model-free event-triggered optimal controller. Third, a policy iteration (PI) algorithm derived from RL is presented, which converges to the optimal policy. The value function of each agent is then represented by a neural network to implement the PI algorithm, and the gradient descent method is used to update the network weights only at a series of discrete event-triggered instants. A specific form of the event-triggered condition is then proposed, under which the closed-loop augmented system is guaranteed to be uniformly ultimately bounded (UUB) and Zeno behavior is excluded. Finally, the validity of the approach is verified by a simulation example.
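To make the steps in the abstract concrete, below is a minimal sketch of an event-triggered policy-iteration loop in Python. It is not the paper's algorithm: the 2-D linear surrogate plant (A, B), the quadratic cost (Q, R), the quadratic critic features phi, the trigger threshold eps, and the single gradient-descent step on the HJB residual are all hypothetical choices introduced here for illustration.

# Minimal sketch of event-triggered policy iteration, NOT the paper's exact
# algorithm. All quantities below (plant, cost, features, threshold) are
# assumptions introduced for illustration only.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # surrogate plant dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])        # quadratic running cost
dt, eps, lr = 0.01, 0.05, 0.05             # step size, trigger threshold, learning rate

def phi(x):
    # Quadratic critic features: V(x) ~ w^T phi(x)
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def grad_phi(x):
    # Jacobian d phi / d x, used in the greedy policy-improvement step
    return np.array([[2*x[0], 0.0],
                     [x[1],   x[0]],
                     [0.0,  2*x[1]]])

w = np.zeros(3)                            # critic (value-function) weights
x = np.array([1.0, -0.5])
x_hat = x.copy()                           # last-triggered (held) state
u = np.zeros(1)

for k in range(20000):
    if k == 0 or np.linalg.norm(x - x_hat) > eps:
        # Event triggered: refresh held state, improve policy, update critic
        x_hat = x.copy()
        # Greedy policy w.r.t. the current critic: u = -0.5 R^{-1} B^T dV/dx
        dV = grad_phi(x_hat).T @ w
        u = -0.5 * np.linalg.solve(R, B.T @ dV)
        # One gradient-descent step on the squared HJB (Bellman) residual
        cost = x_hat @ Q @ x_hat + u @ R @ u
        xdot = A @ x_hat + B @ u
        residual = cost + w @ (grad_phi(x_hat) @ xdot)
        w -= lr * residual * (grad_phi(x_hat) @ xdot)
    # Between events the plant evolves under the held control input
    x = x + dt * (A @ x + B @ u)

print("learned critic weights:", w)

The design point mirrored from the abstract is that the critic weights and the held control input are refreshed only when the triggering condition ||x - x_hat|| > eps fires, so no learning computation occurs between events.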