Audio-Visual Object Classification for Human-Robot Collaboration

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2022, pp. 9137-9141
Issue Date:
2022-04-27
Abstract:
Human-robot collaboration requires the contactless estimation of the physical properties of containers manipulated by a person, for example while pouring content into a cup or moving a food box. Acoustic and visual signals can be used to estimate the physical properties of such objects, which may vary substantially in shape, material, and size, and may also be occluded by the hands of the person. To facilitate comparisons and stimulate progress in solving this problem, we present the CORSMAL challenge and a dataset to assess the performance of the algorithms through a set of well-defined performance scores. The tasks of the challenge are the estimation of the mass, capacity, and dimensions of the object (container), and the classification of the type and amount of its content. A novel feature of the challenge is our real-to-simulation framework for visualising and assessing the impact of estimation errors in human-to-robot handovers.