Increased task perception for adaptable human-robot collaboration

Publication Type: Thesis
Issue Date: 2018
Abstract:
This thesis presents an investigation into enhancing the robustness and adaptability of robot action generation in human-interactive scenarios by means of a heightened level of task and scene perception, which in turn lessens the reliance upon the observed behaviours of the robot’s human counterpart. In human-robot interaction under the learning-from-demonstration paradigm, demonstrations are most often carried out by able experts who can perform the task with a very high degree of proficiency while also accounting for the robot’s physical limitations (movement speed limits, joint singularities, etc.). The actions of the robot’s partner in the resulting training samples can therefore be considered near-optimal. A disparity naturally arises when working with end-users whose performance may be hindered by factors such as disability, inexperience, or fatigue. The lack of task-specific quality in these observed partner behaviours can then lead to unpredictable or unsafe robot actions in demonstration-learning frameworks that place an arguably excessive emphasis upon the partner performing their share of the task at a skill level comparable to that of the demonstrators. Since gathering enough training samples to encompass such a broad range of human aptitude is generally infeasible, a greater emphasis for robot action modelling should instead be placed upon the task and the work scene that both agents operate within. Collaborative object handling between two humans offers an example: one naturally generates suitable actions for the task by considering the movements of the leader alongside the object and the space they are moving through. The information derived from the latter two observations increases the chance that imperfections in leader behaviour can be adequately compensated for. This allows improved adaptability to novel task conditions, as well as increased robustness when observations of partner behaviour are insufficiently informative for safe action planning. These benefits arise primarily because the trained models are more resilient against a lack of informativeness or task quality in observed partner behaviour, supplementing such missing fine detail with information drawn directly from the immediate environment in which the interactive activity takes place.

This concept of increased task and environmental perception is assessed across two significantly different human-robot interaction paradigms: intelligent wheelchair navigation, and physical humanoid collaboration. For wheelchair navigation, a framework for generating expert-stylized short-term paths that can be concatenated for traversal ‘anywhere’ is realized as a flexible adaptation of the conventional approach of static long-term destinations within known occupancy maps. Relying upon immediately available on-board sensor data, rather than more restrictive features such as platform position within the map, allows proactively assisted traversal through settings absent from the demonstration data without retraining goal inference models. For physical humanoid collaboration, robust robot action generation is achieved in the face of novel task conditions and ambiguous partner observation, serving as an intuitive extension to action generation based solely upon briefly observed partner movements. This is evaluated in a collaborative object-covering exercise performed by a human-humanoid team, where object parameters automatically drawn from visual scene data compensate for uninformative observations of the human partner.