Experimental research on deep reinforcement learning in autonomous navigation of mobile robot

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
Proceedings of the 14th IEEE Conference on Industrial Electronics and Applications, ICIEA 2019, 2019, pp. 1612-1616
Issue Date:
2019-06-01
© 2019 IEEE. The paper is concerned with the autonomous navigation of a mobile robot from its current position to a desired position using only the current visual observation, without an environment map built beforehand. Under the framework of deep reinforcement learning, a Deep Q Network (DQN) is used to map the raw image directly to the optimal action of the mobile robot. Reinforcement learning requires a large number of training samples, which makes it difficult to apply directly in a real robot navigation scenario. To address this problem, the DQN is first trained in the Gazebo simulation environment, and the well-trained DQN is then deployed in the real mobile robot navigation scenario. Both simulation and real-world experiments have been conducted to validate the proposed approach. The experimental results of autonomous navigation in the Gazebo simulation environment show that the trained DQN can approximate the state-action value function of the mobile robot and accurately map the current raw image to the optimal action. The experimental results in real indoor scenes demonstrate that the DQN trained in simulation also works in the real indoor environment: the mobile robot can avoid obstacles and reach the target location even in the presence of dynamic obstacles and interference in the environment. The approach is therefore an effective and environmentally adaptable autonomous navigation method for mobile robots in unknown environments.
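The image-to-action mapping described above can be illustrated with a minimal sketch. The network shape, input resolution, and discrete action set below are assumptions for illustration, not the authors' exact architecture: a small convolutional network outputs one Q-value per action, and navigation picks the action with the highest value.

```python
# Hypothetical sketch of an image-to-action DQN for robot navigation.
# Layer sizes, 84x84 input, and the 5-action set are assumptions.
import torch
import torch.nn as nn

N_ACTIONS = 5  # assumed discrete actions, e.g. forward / turn left / turn right / ...

class NavDQN(nn.Module):
    """Maps a raw RGB camera image to Q-values, one per discrete action."""
    def __init__(self, n_actions=N_ACTIONS):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # With the strides above, an 84x84 input yields a 7x7x64 feature map.
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.head(self.conv(x))

def greedy_action(net, image):
    """Select the action with the highest predicted Q-value (no exploration)."""
    with torch.no_grad():
        return int(net(image).argmax(dim=1).item())

net = NavDQN()
obs = torch.zeros(1, 3, 84, 84)      # one dummy 84x84 RGB observation
q_values = net(obs)                  # shape: (1, N_ACTIONS)
action = greedy_action(net, obs)
```

In the sim-to-real workflow the paper describes, such a network would be trained against Gazebo observations and then queried at each step in the real scene with the robot's current camera frame.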