Excessive Disturbance Rejection Control of Autonomous Underwater Vehicle using Reinforcement Learning

Publication Type:
Conference Proceeding
Small Autonomous Underwater Vehicles (AUVs) operating in shallow water may not be stabilized well by feedback or model predictive control, because wave and current disturbances frequently exceed AUV thrust capabilities and the available disturbance estimation and prediction models are not sufficiently accurate. In contrast to classical model-free Reinforcement Learning (RL), this paper presents an improved RL method for Excessive disturbance rejection Control (REC) that is able to learn and exploit disturbance behaviour by formulating the disturbed AUV dynamics as a multi-order Markov chain. The unobserved disturbance behaviour is encoded in a fixed-length AUV state-action history, and its embeddings are learned within the policy optimization. The proposed REC is further enhanced by a base controller pre-trained on iterative Linear Quadratic Regulator (iLQR) solutions for a reduced AUV dynamic model, resulting in hybrid-REC. Numerical simulations of pose regulation tasks demonstrate that REC significantly outperforms a canonical controller and classical RL, and that hybrid-REC yields more efficient and safer sampling and motion than REC.
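The abstract's core mechanism is the fixed-length state-action history that encodes unobserved disturbance behaviour for the policy. The following is a minimal sketch of that idea only; the dimensions, history length, and the single linear-layer embedding are all assumptions for illustration (in REC the embedding parameters would be trained jointly with the policy, not left random):

```python
from collections import deque
import numpy as np

# Hypothetical sizes: the paper fixes the history length but does not state
# these particular values.
STATE_DIM, ACTION_DIM, HISTORY_LEN, EMBED_DIM = 12, 6, 8, 16

class HistoryEncoder:
    """Keeps the last HISTORY_LEN (state, action) pairs and maps the flattened
    history to an embedding the policy can consume with the current state."""

    def __init__(self, rng):
        in_dim = HISTORY_LEN * (STATE_DIM + ACTION_DIM)
        # One linear layer stands in for the learned embedding network.
        self.W = rng.standard_normal((EMBED_DIM, in_dim)) * 0.01
        self.history = deque(maxlen=HISTORY_LEN)

    def observe(self, state, action):
        # Append the latest transition; the deque drops the oldest entry
        # once HISTORY_LEN pairs are stored.
        self.history.append(np.concatenate([state, action]))

    def embed(self):
        # Zero-pad the buffer until it is full (e.g. at episode start).
        pad = [np.zeros(STATE_DIM + ACTION_DIM)] * (HISTORY_LEN - len(self.history))
        flat = np.concatenate(pad + list(self.history))
        return np.tanh(self.W @ flat)

rng = np.random.default_rng(0)
enc = HistoryEncoder(rng)
for _ in range(3):  # fewer steps than HISTORY_LEN, so padding is exercised
    enc.observe(rng.standard_normal(STATE_DIM), rng.standard_normal(ACTION_DIM))
z = enc.embed()
print(z.shape)  # (16,)
```

The policy would then act on the concatenation of the current state and `z`, which is how the multi-order Markov chain is reduced to a standard Markov decision process over the augmented state.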