Active Perception for Deformable 3D Pointclouds
- Publication Type: Thesis
- Issue Date: 2019
This item is open access.
Supplying a precise and comprehensive representation of an object by assembling pointclouds ultimately helps a robot enhance the reliability of its perception. An efficient data-acquisition approach is to actively steer a depth sensor in 3D space so that it is positioned at the best (𝘰𝘱𝘵𝘪𝘮𝘢𝘭) viewpoints to scan the desirable parts of the object, and then to align (𝘳𝘦𝘨𝘪𝘴𝘵𝘦𝘳) and integrate the captured scans effectively and seamlessly to reconstruct a 3D model with high fidelity.
As the first contribution, we propose an optimization-on-a-manifold approach to find the optimal position and orientation (𝘱𝘰𝘴𝘦) of a depth sensor in continuous 3D space. It has previously been demonstrated that a depth sensor measures most precisely when it gazes at the object perpendicularly. Accordingly, the terms of the proposed objective function align the main axis of the depth sensor towards the parts of interest while prioritising areas with higher task-relevant information, such as curvature. The poses obtained by this method conform to numerical and visual evaluations on several objects, with significantly less computation than the state of the art.
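The flavour of such an objective can be sketched as follows. This is a minimal illustration, not the thesis's actual formulation: it rewards a pose whose main axis points at the surface and whose viewing rays meet the surface head-on, with curvature acting as a task-relevance weight. All function and parameter names are hypothetical.

```python
import numpy as np

def viewpoint_objective(sensor_pos, sensor_axis, points, normals, curvatures):
    """Score a candidate sensor pose (illustrative sketch).

    Two effects are combined per surface point:
      * the sensor's main axis should point towards the point, and
      * the viewing ray should oppose the surface normal (perpendicular gaze),
    and each point is weighted by its curvature so that information-rich
    regions dominate the score.
    """
    sensor_axis = sensor_axis / np.linalg.norm(sensor_axis)
    # Unit viewing rays from the sensor to each surface point.
    rays = points - sensor_pos
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # Axis alignment: 1 when the main axis points straight at the point.
    axis_align = np.clip((rays * sensor_axis).sum(axis=1), 0.0, 1.0)
    # Perpendicular gaze: -dot(ray, normal) is 1 for a head-on view
    # and drops towards 0 for grazing views.
    gaze = np.clip(-(rays * normals).sum(axis=1), 0.0, 1.0)
    # Curvature as a normalized task-relevance weight.
    weights = curvatures / (curvatures.sum() + 1e-12)
    return float((weights * axis_align * gaze).sum())
```

A pose hovering directly above a planar patch with its axis pointing down scores near 1, while the same position with the axis pointing sideways scores near 0, which is the qualitative behaviour the objective is meant to capture.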
Reconstructing objects with high fidelity necessitates dealing with a variety of scenarios, differentiated by the temporal configuration and articulation of the objects in the scene, namely rigidity or non-rigidity. Arguably the most challenging scenario is a single depth sensor scanning a texture-less object that is deforming non-rigidly. Under these conditions, apart from the computational overhead, most mesh-reconstruction methods fail to yield satisfactory results. Moreover, the surface lacks sufficient visual features to extract for correspondence.
Given these limitations, this thesis, as its second contribution, proposes a non-rigid registration for mesh-free and color-free pointclouds based on the 𝘴𝘰𝘧𝘵 𝘱𝘢𝘳𝘵𝘪𝘵𝘪𝘰𝘯𝘪𝘯𝘨 concept. The soft patches (partitions), serving as features, are then equipped with local descriptors that provide a metric for association. Assuming that the global deformation of the object is the aggregation of local rigid transformations, this association is refined by measuring the deviation of each potentially corresponding soft patch and its neighborhood from a rigidity metric defined by the As-Rigid-As-Possible algorithm. The established local correspondences are assigned transformations that are subsequently propagated to nearby points. Experimental results demonstrate the capability of this framework to handle large deformations and highly articulated objects.
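The core idea of screening candidate correspondences by local rigidity can be sketched with a simple stand-in for the ARAP-based metric: fit the best rigid transform between the two patch neighborhoods (via the Kabsch algorithm) and take the residual as the deviation from rigidity. The thesis's actual metric differs in its details; this is only meant to show the principle.

```python
import numpy as np

def rigidity_deviation(src, dst):
    """RMS residual after the best rigid fit of src onto dst (Kabsch).

    A near-zero value means the candidate patch correspondence is
    consistent with a locally rigid motion; a large value flags a
    non-rigid (and thus implausible) match under the local-rigidity
    assumption. Simplified stand-in for an ARAP-style rigidity metric.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance yields the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    residual = dst_c - src_c @ R.T
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

For a patch that has genuinely moved rigidly, the deviation is essentially zero regardless of the rotation and translation involved, while a stretched or sheared patch produces a clearly non-zero score, which is what makes the measure usable as a filter on candidate associations.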
Fusing the aligned pointclouds, a 3D model of the targeted object is incrementally developed; this model, coupled with the current scan, feeds a formulation for selecting the next region of interest, which in turn leads to the next optimal viewpoint. Unlike conventional approaches to deformable objects (which assume a complete model in which the full extent of the object has been observed before it deforms), our proposed pipeline explores beyond the bounds of the currently acquired frame and reconstructed model, and continuously evolves the model by leveraging an exploration-and-exploitation strategy. The application of the devised framework to reconstruction is demonstrated on rigid and non-rigid objects, achieving high fidelity to the original shape.
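One simple way to realize such an exploration-and-exploitation trade-off over candidate regions is a UCB-style rule: exploit regions the partial model already marks as informative, but add a bonus for regions that have been scanned least often. The thesis's exact formulation may differ; the names and the `beta` weight below are illustrative assumptions.

```python
import numpy as np

def select_next_region(scores, visit_counts, beta=1.0):
    """Pick the index of the next region of interest.

    `scores` is the exploitation term (e.g. task-relevant information such
    as curvature in the reconstructed model); `visit_counts` records how
    often each region has already been scanned. A UCB-style bonus steers
    the sensor towards rarely visited regions near the model's boundary.
    """
    scores = np.asarray(scores, dtype=float)
    visit_counts = np.asarray(visit_counts, dtype=float)
    total = visit_counts.sum() + 1.0
    # Exploration bonus: large for regions with few visits, shrinking
    # as a region is scanned repeatedly.
    bonus = beta * np.sqrt(np.log(total) / (visit_counts + 1.0))
    return int(np.argmax(scores + bonus))
```

With equal visit counts the rule degenerates to pure exploitation (pick the highest-scoring region), while a heavily scanned high-score region eventually loses out to an unvisited one, driving the pipeline beyond the bounds of the current model.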