CAMRL: A Joint Method of Channel Attention and Multidimensional Regression Loss for 3D Object Detection in Automated Vehicles

Publication Type:
Journal Article
IEEE Transactions on Intelligent Transportation Systems, 2022, PP, (99)
Fully automated vehicles collect information about their road environment to adjust driving actions such as braking and slowing down. Advances in artificial intelligence (AI) and the Internet of Things (IoT) have improved the perceptual abilities of vehicles, allowing them to detect traffic signs, pedestrians, and obstacles and thereby making transportation systems more intelligent. Three-dimensional (3D) object detection in front-view images captured by vehicle cameras is important for both object detection and depth estimation. In this paper, a joint channel attention and multidimensional regression loss method for 3D object detection in automated vehicles (CAMRL) is proposed to improve the average precision of 3D object detection by strengthening the model's ability to infer the locations and sizes of objects. First, channel attention is introduced to learn yaw angles effectively from the road images captured by vehicle cameras. Second, a multidimensional regression loss algorithm is designed to further optimize the size and position parameters during training. Third, the intrinsic parameters of the camera are combined with the model's depth estimate to reduce the object-depth computation error, allowing the distance between an object and the camera to be calculated once the object's size is confirmed. As a result, objects are detected and their depth estimates are validated, so the vehicle can determine when and how to stop if an object is nearby. Finally, experiments conducted on the KITTI dataset demonstrate that our method is effective and outperforms baseline methods, especially on the 3D object detection and bird's-eye view (BEV) evaluations.
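The channel-attention component described above can be illustrated with a generic squeeze-and-excitation style module; the abstract does not give CAMRL's exact design, so the bottleneck shape and weight matrices below are assumptions for illustration only.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Generic squeeze-and-excitation style channel attention (a sketch;
    the exact CAMRL attention design is not specified in the abstract).
    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    hypothetical bottleneck weights with reduction ratio r."""
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, giving per-channel gates in (0, 1)
    z = np.maximum(w1 @ s, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))
    # Rescale each channel of the feature map by its learned gate
    return feat * gate[:, None, None]
```

In this scheme, channels that are informative for yaw-angle regression receive gates near 1 while less useful channels are suppressed.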
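A multidimensional regression loss over 3D box parameters can be sketched as a weighted sum of per-dimension smooth-L1 terms; the actual CAMRL formulation is not given in the abstract, so the dimension set (x, y, z, h, w, l, yaw) and the weights here are hypothetical.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Elementwise smooth-L1 (Huber-style) error, quadratic below beta."""
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)

def multidim_regression_loss(pred, target, weights):
    """Hedged sketch of a multidimensional regression loss: a weighted sum
    of smooth-L1 terms over the 3D box parameters (e.g. position x, y, z,
    size h, w, l, and yaw). The per-dimension weights are assumptions."""
    return float(np.sum(weights * smooth_l1(pred, target)))
```

Weighting the size and position dimensions separately lets training emphasize whichever parameters dominate the 3D localization error.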
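The third step, recovering object depth from the camera's intrinsic parameters once the object's real-world size is confirmed, follows the pinhole-camera relation z = f * H / h. This is a minimal sketch of that geometric step only; the paper combines it with the network's own depth estimate, and the focal-length value in the usage comment is merely a typical KITTI-like number, not taken from the paper.

```python
def object_depth(focal_px, real_height_m, pixel_height):
    """Pinhole-camera depth estimate: z = f * H / h, where f is the focal
    length in pixels, H the confirmed real-world object height in metres,
    and h the object's height in image pixels. Illustrative only."""
    return focal_px * real_height_m / pixel_height

# Example (hypothetical values): f ~= 721.5 px, a 1.5 m tall car spanning 54 px
# gives a depth of roughly 20 m.
```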