An Audio-visual Solution to Sound Source Localization and Tracking with Applications to HRI
- Publisher:
- ARAA
- Publication Type:
- Conference Proceeding
- Citation:
- Proceedings of the Australasian Conference on Robotics & Automation (ACRA), 2016, pp. 1-10
- Issue Date:
- 2016-11-05
This item is open access.
Robot audition is an emerging and growing branch of the robotics community and is necessary for natural Human-Robot Interaction (HRI). In this paper, we propose a framework that integrates advances from Simultaneous Localization And Mapping (SLAM), bearing-only target tracking, and robot audition into a unified system for sound source identification, localization, and tracking. Indoors, acoustic observations are often highly noisy and corrupted due to reverberation, robot ego-motion, background noise, and their possibly intermittent nature. Therefore, in everyday interaction scenarios, the system must accommodate outliers, perform robust data association, and appropriately manage the landmarks, i.e. the sound sources. We solve robot self-localization and environment representation using an RGB-D SLAM algorithm, and sound source localization and tracking using recursive Bayesian estimation in the form of an extended Kalman filter with unknown data associations and an unknown number of landmarks. Experimental results show that the proposed system performs well in a medium-sized, cluttered indoor environment.
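The core estimation step described in the abstract, an extended Kalman filter update of a sound source landmark from a bearing-only acoustic observation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state layout (a static 2-D landmark position), the noise variance, and all function names are assumptions for the sketch, and the robot pose is taken as known from the SLAM front end.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_bearing_update(mu, Sigma, z, robot_pose, R):
    """One EKF update of a static 2-D landmark from a single bearing.

    mu:         (2,) landmark mean [x, y]
    Sigma:      (2, 2) landmark covariance
    z:          measured bearing (rad), relative to the robot heading
    robot_pose: (rx, ry, rtheta), assumed known from SLAM
    R:          scalar bearing measurement noise variance
    """
    rx, ry, rth = robot_pose
    dx, dy = mu[0] - rx, mu[1] - ry
    q = dx * dx + dy * dy                         # squared range to landmark
    z_hat = wrap_angle(np.arctan2(dy, dx) - rth)  # predicted bearing
    H = np.array([[-dy / q, dx / q]])             # Jacobian of bearing w.r.t. landmark
    S = float(H @ Sigma @ H.T) + R                # innovation variance (scalar)
    K = Sigma @ H.T / S                           # Kalman gain, shape (2, 1)
    innov = wrap_angle(z - z_hat)                 # wrapped innovation
    mu_new = mu + (K * innov).ravel()
    Sigma_new = (np.eye(2) - K @ H) @ Sigma
    return mu_new, Sigma_new, innov, S
```

In a full system the scalar innovation statistic `innov**2 / S` would also serve as a Mahalanobis gate for the unknown data association step, rejecting outlier bearings and triggering new-landmark initialization when no existing source passes the gate.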