Autonomous Active Perception Framework for Large Object Recognition Tasks

Publication Type: Thesis
Issue Date: 2022
In recent decades, advances in hardware and software have driven the adoption of robotic systems across a wide range of fields. Robots are most commonly deployed in dangerous or mundane working environments, significantly reducing accidents, injuries and casualties in the workforce. Ideally, robots would perform their tasks autonomously, conducting calculations and making decisions based on sensory data. However, although robotics research has advanced continuously over the past half-century, there remain many complex tasks in which robots cannot yet achieve full autonomy. In these scenarios, intervention from a human supervisor may be required to make control decisions, either locally or remotely. The quality of the decisions made by the supervisor or operator depends heavily on the sensory feedback available from the robotic system, which helps the human perceive the environment in which the robot is operating. Perception capabilities are therefore required in all autonomous and semi-autonomous robotic systems, so that the system or the operator can process the received data, make sense of it, and perform the necessary actions.

This thesis focuses on developing an active perception framework for robots working remotely, where the human operator cannot directly perceive the surrounding environment. The specific sensor modality received from the remote system is coloured Three-Dimensional (3D) point cloud data, obtained from sensing devices such as Light Detection and Ranging (LiDAR) sensors or depth cameras. Additionally, this thesis investigates the practicality and benefits of using Virtual Reality (VR) as a tool to visualise the data obtained from a remote system.
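To make the data modality concrete, the sketch below shows one common way a coloured 3D point cloud is represented in software: each point carries an XYZ position and an RGB colour. The use of the Open3D library and synthetic data here is an assumption for illustration only; the thesis does not name a specific toolkit, and a real system would populate these arrays from a LiDAR sensor or depth camera rather than a random generator.

# Minimal sketch (assumptions: Open3D as the point cloud toolkit,
# randomly generated points standing in for real sensor output).
import numpy as np
import open3d as o3d

# Synthetic stand-in for remote sensor data: N points, each with a
# 3D position (metres) and an RGB colour (values in [0, 1]).
rng = np.random.default_rng(seed=0)
n_points = 1000
positions = rng.uniform(-1.0, 1.0, size=(n_points, 3))
colours = rng.uniform(0.0, 1.0, size=(n_points, 3))

# Pack both arrays into Open3D's point cloud structure.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(positions)
pcd.colors = o3d.utility.Vector3dVector(colours)

# Open an interactive desktop viewer; in the setting described by the
# thesis, a VR headset would serve as the visualisation front-end instead.
o3d.visualization.draw_geometries([pcd])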