Learning object, grasping and manipulation activities using hierarchical HMMs

Publisher:
Kluwer Academic Publishers
Publication Type:
Journal Article
Citation:
Autonomous Robots, 2014, 37 (3), pp. 317 - 331
Issue Date:
2014
Files in This Item:
Filename: AURO_GRASP_2013_2nd_revisionFinal.pdf
Description: Accepted Manuscript Version
Size: 2.06 MB (Adobe PDF)
This article presents a probabilistic algorithm for representing and learning complex manipulation activities performed by humans in everyday life. The work builds on the multi-level Hierarchical Hidden Markov Model (HHMM) framework, which decomposes longer-term complex manipulation activities into layers of abstraction whose building blocks are simpler action modules called action primitives. In this way, human task knowledge can be synthesised into a compact, effective representation suitable, for instance, for subsequent transfer to a robot for imitation. The main contribution is a robust framework capable of dealing with the uncertainty and incomplete data inherent to these activities, together with the ability to represent behaviours at multiple levels of abstraction for enhanced task generalisation. Activity data from 3D video sequences of humans manipulating different everyday objects is used for evaluation. A comparison with a mixed generative-discriminative hybrid HHMM/SVM (support vector machine) model is also presented to rigorously highlight the benefit of the proposed approach against comparable state-of-the-art techniques. © 2014 Springer Science+Business Media New York.
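The layered decomposition described in the abstract can be illustrated with a minimal two-level sketch: each action primitive is modelled by its own small discrete HMM, and a higher-level Markov chain over primitive labels captures the longer-term activity. All names, primitives, alphabets and probabilities below are invented for illustration; the paper's actual model structure, features and training procedure are not shown here.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under one HMM,
    computed with the standard forward algorithm."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return math.log(sum(alpha))

# Bottom layer (hypothetical): each "action primitive" is a 2-state HMM over a
# shared 3-symbol observation alphabet (e.g. quantised hand/object features).
# Each entry is (initial distribution pi, transition matrix A, emission matrix B).
primitives = {
    "reach": ([1.0, 0.0],
              [[0.7, 0.3], [0.0, 1.0]],
              [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]),
    "grasp": ([1.0, 0.0],
              [[0.6, 0.4], [0.0, 1.0]],
              [[0.1, 0.8, 0.1], [0.1, 0.2, 0.7]]),
}

def classify_primitive(obs):
    """Label a segment with the primitive whose HMM explains it best."""
    return max(primitives,
               key=lambda p: forward_log_likelihood(obs, *primitives[p]))

# Top layer (hypothetical): a Markov chain over primitive labels represents the
# longer-term activity, so a complex manipulation becomes a label sequence.
activity_transitions = {"reach": {"grasp": 0.9, "reach": 0.1},
                        "grasp": {"reach": 0.2, "grasp": 0.8}}

segments = [[0, 0, 1], [1, 2, 2]]   # two pre-segmented observation windows
labels = [classify_primitive(s) for s in segments]
print(labels)                       # → ['reach', 'grasp']
```

In a full HHMM the segmentation and the top-level labels would be inferred jointly rather than with pre-segmented windows; this sketch only conveys the idea of primitives as reusable building blocks beneath an activity-level chain.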