Learning Hidden Human Context in 3D Office Scenes by Mapping Affordances Through Virtual Humans

Publisher:
World Scientific
Publication Type:
Journal Article
Citation:
Unmanned Systems, 2015, 3(4), pp. 299–310
Issue Date:
2015-10
Files in This Item:
USpaper.pdf (Submitted Version, 8.65 MB, Adobe PDF)
The ability to learn human context in an environment is one of the most desirable fundamental capabilities for a robot that shares a workspace with human co-workers. Arguably, a robot with appropriate human-context awareness can achieve better human–robot interaction. In this paper, we address the problem of learning human context in an office environment using only 3D point cloud data. Our approach is based on the concept of an affordance-map, which maps latent human actions in a given environment by examining its geometric features. This enables us to learn the human context of the environment without observing real human behaviors, which are themselves nontrivial to detect. Once learned, the affordance-map assigns an affordance cost value to each grid location of the map. These cost maps are later used to develop an active object search strategy and a context-aware global path planning strategy.
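To illustrate the final step of the abstract, here is a minimal sketch of how a per-cell affordance cost map could feed into context-aware global path planning. The grid values, the `weight` parameter, and the function name are hypothetical illustrations, not the authors' implementation; the planner is plain Dijkstra where each step pays a base move cost plus the destination cell's affordance cost, so routes naturally detour around likely human-activity zones.

```python
import heapq

def plan_context_aware_path(affordance_cost, start, goal, weight=1.0):
    """Dijkstra over a 2D grid; each step costs 1 plus a weighted
    affordance cost of the destination cell (hypothetical sketch)."""
    rows, cols = len(affordance_cost), len(affordance_cost[0])
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + weight * affordance_cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(frontier, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back to the start.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return list(reversed(path))

# Toy 4x4 affordance-cost map: high values mark cells where latent
# human actions (e.g., sitting at a desk) are likely.
costs = [
    [0.0, 0.0, 9.0, 0.0],
    [0.0, 0.0, 9.0, 0.0],
    [0.0, 0.0, 9.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
path = plan_context_aware_path(costs, (0, 0), (0, 3))
```

With these toy numbers, the cheapest route detours through the bottom row rather than crossing the high-affordance column, which is the qualitative behavior a context-aware planner is meant to produce.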