DeepGoal: Learning to drive with driving intention from human control demonstration
- Publication Type: Journal Article
- Robotics and Autonomous Systems, 2020, 127
This item is currently unavailable due to the publisher's embargo.
The embargo period expires on 1 May 2022
© 2020 Elsevier B.V.

Recent research on autonomous driving has developed an efficient end-to-end learning mode that directly maps visual input to control commands. However, this mode models distinct driving variations in a single network, which increases learning complexity and is less adaptive to modular integration. In this paper, we re-investigate human driving style and propose to learn an intermediate driving-intention region to ease the difficulties of the end-to-end approach. The intention region follows both the road structure in the image and the direction towards the goal given by a public route planner, so it addresses only visual variations and determines where to go without conventional precise localization. The learned visual intention is then projected onto the vehicle's local coordinate frame and fused with reliable obstacle perception to render a navigation score map of the kind widely used for motion planning. The core of the proposed system is a weakly supervised cGAN-LSTM model trained to learn driving intention from human demonstration. The adversarial loss learns from limited demonstration data with a single locally planned route, yet enables reasoning about multi-modal behaviors with diverse routes at test time. Comprehensive experiments are conducted on real-world datasets. Results indicate that the proposed paradigm produces motion commands more consistent with human demonstration and shows better reliability and robustness to environmental change. Our code is available at https://github.com/HuifangZJU/visual-navigation.
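The fusion step described in the abstract (projecting the learned intention onto the vehicle's local grid and combining it with obstacle perception into a navigation score map) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function name `fuse_score_map`, the weights, and the grid size are all assumptions made for the example.

```python
import numpy as np

def fuse_score_map(intention, obstacle, w_goal=0.7, w_free=0.3):
    """Illustrative fusion of an intention map and an obstacle map.

    intention: (H, W) array in [0, 1], high where the intended route lies
               in the vehicle's local coordinate frame.
    obstacle:  (H, W) array in [0, 1], high where obstacles are perceived.
    Returns a (H, W) navigation score map; higher values mark cells that
    are both on the intended route and free of obstacles.
    """
    free_space = 1.0 - obstacle              # likelihood a cell is drivable
    score = w_goal * intention + w_free * free_space
    return score * free_space                # hard-suppress occupied cells

# Toy 4x4 local grid: intended route runs down the middle two columns,
# with a single perceived obstacle cell on that route.
intention = np.zeros((4, 4))
intention[:, 1:3] = 0.9
obstacle = np.zeros((4, 4))
obstacle[2, 2] = 1.0

score = fuse_score_map(intention, obstacle)
print(score.shape)   # (4, 4)
print(score[2, 2])   # 0.0 — the occupied cell is fully suppressed
```

A downstream motion planner would then, for example, pick trajectories that maximize the accumulated score, which is consistent with the abstract's description of the score map being "widely used for motion planning".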