Globally convergent visual-feature range estimation with biased inertial measurements

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Automatica, 2022, 146, pp. 110639
Issue Date:
2022-12-01
The design of a globally convergent position observer for feature points from visual information is a challenging problem, especially when only inertial measurements are available and no uniform observability assumptions are made; this case remained open for a long time. In this paper we solve the problem assuming that only the bearing of a feature point and the biased linear acceleration and rotational velocity of a robot, all expressed in the body-fixed frame, are available. Moreover, in contrast to existing related results, the value of the gravitational constant is not needed. The proposed approach builds upon the parameter estimation-based observer recently developed in Ortega et al. (2015) and its extension to matrix Lie groups in our previous work. We give conditions on the robot trajectory under which the observer converges, and these are strictly weaker than the standard persistency-of-excitation and uniform-complete-observability conditions. Finally, as an illustration, we apply the proposed design to the visual inertial navigation problem.
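To give intuition for the underlying geometry, the sketch below shows a classical batch bearing-only triangulation, not the paper's observer: it assumes the robot positions are known exactly (the paper instead uses only biased body-frame inertial measurements) and recovers the feature position from the fact that each unit bearing must be parallel to the line of sight. All names and the trajectory are illustrative.

```python
import numpy as np

# Hypothetical static feature point in the inertial frame.
p_true = np.array([2.0, -1.0, 3.0])

# Illustrative robot trajectory: positions assumed known here,
# unlike in the paper's inertial-only setting.
rng = np.random.default_rng(0)
xs = rng.normal(size=(20, 3))

A = np.zeros((3, 3))
rhs = np.zeros(3)
for x in xs:
    d = p_true - x
    b = d / np.linalg.norm(d)        # unit bearing measurement
    P = np.eye(3) - np.outer(b, b)   # projector orthogonal to the bearing
    # P @ (p - x) = 0 for the true feature position p, giving one
    # rank-2 linear constraint per bearing; we accumulate them all.
    A += P
    rhs += P @ x

p_hat = np.linalg.solve(A, rhs)      # least-squares triangulated position
```

The accumulated matrix `A` is invertible only if the bearings are not all parallel, a batch analogue of the excitation conditions on the robot trajectory discussed in the abstract.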