AB - © 2016 IEEE. This paper investigates sound source mapping in a real environment using a mobile robot. Our approach is based on audio ray tracing, which integrates occupancy grids and sound source localization using a laser range finder and a microphone array. Previous audio ray tracing approaches rely on all observed rays and grids. As such, observation errors caused by sound reflection, sound occlusion, wall occlusion, sounds at misdetected grids, etc. can significantly degrade the ability to locate sound sources in a map. A three-layered selective audio ray tracing mechanism is proposed in this work. The first layer conducts frame-based unreliable ray rejection (sensory rejection), considering sound reflection and wall occlusion. The second layer introduces triangulation and audio tracing to detect falsely detected sound sources, rejecting audio rays associated with these misdetected sound sources (short-term rejection). A third layer is tasked with rejecting rays using the whole observation history (long-term rejection) to disambiguate sound occlusion. Experimental results under various situations are presented, which demonstrate the effectiveness of our method.
AU - Su, D
AU - Nakamura, K
AU - Nakadai, K
AU - Miro, JV
DA - 2016/11/28
DO - 10.1109/IROS.2016.7759430
EP - 2777
JO - IEEE International Conference on Intelligent Robots and Systems
PY - 2016/11/28
SP - 2771
TI - Robust sound source mapping using three-layered selective audio rays for mobile robots
VL - 2016-November
Y1 - 2016/11/28
Y2 - 2024/03/29
ER -