DeepGoal: Learning to drive with driving intention from human control demonstration

Publisher: Elsevier
Publication Type: Journal Article
Citation: Robotics and Autonomous Systems, 2020, 127
Issue Date: 2020-05-01
© 2020 Elsevier B.V. Recent research on autonomous driving has developed an efficient end-to-end learning approach that directly maps visual input to control commands. However, it models distinct driving variations in a single network, which increases learning complexity and is less adaptive for modular integration. In this paper, we revisit the human driving style and propose to learn an intermediate driving intention region to ease the difficulties of the end-to-end approach. The intention region follows both the road structure in the image and the direction towards the goal given by a public route planner, so it handles visual variations only and determines where to go without conventional precise localization. The learned visual intention is then projected onto the vehicle's local coordinate frame and fused with reliable obstacle perception to render a navigation score map, a representation widely used for motion planning. The core of the proposed system is a weakly-supervised cGAN-LSTM model trained to learn driving intention from human demonstration. The adversarial loss learns from limited demonstration data containing only one locally planned route, yet enables reasoning about multi-modal behaviors under diverse routes at test time. Comprehensive experiments are conducted on real-world datasets. Results indicate that the proposed paradigm produces motion commands more consistent with human demonstrations and shows better reliability and robustness to environment changes. Our code is available at https://github.com/HuifangZJU/visual-navigation.
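As a rough illustration of the fusion step described in the abstract, the sketch below projects an image-space intention probability map onto the vehicle's local grid and suppresses scores where the obstacle map reports occupancy. This is a minimal sketch, not the authors' implementation: the function name, the pre-calibrated ground-plane homography, and the simple multiplicative weighting are all assumptions, and the intention map (produced by the cGAN-LSTM model in the paper) is treated here as a given input.

```python
import numpy as np
import cv2


def build_score_map(intention_prob, H_img_to_ground, obstacle_occupancy,
                    grid_size=(200, 200), obstacle_weight=1.0):
    """Fuse an image-space intention map with obstacle perception into a
    navigation score map on the vehicle's local grid (illustrative only).

    intention_prob     : HxW float array in [0, 1]; per-pixel probability that
                         the pixel belongs to the driving-intention region.
    H_img_to_ground    : 3x3 homography from image pixels to local grid cells
                         (assumed pre-calibrated for the ground plane).
    obstacle_occupancy : grid-sized float array in [0, 1]; 1 = occupied.
    """
    # Warp the intention probabilities onto the bird's-eye local grid.
    intention_bev = cv2.warpPerspective(
        intention_prob.astype(np.float32), H_img_to_ground,
        grid_size, flags=cv2.INTER_LINEAR)

    # High score inside the intention region, zero score on obstacles.
    score = intention_bev * (1.0 - obstacle_weight * obstacle_occupancy)
    return np.clip(score, 0.0, 1.0)
```

A downstream motion planner could then evaluate candidate trajectories against this score map, preferring paths that stay in high-score (intended, obstacle-free) cells.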