Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors

Publisher:
Nature Research
Publication Type:
Journal Article
Citation:
Nature Electronics, 2020, 3(9), pp. 563–570
Issue Date:
2020-09-01
File:
NE-2020.pdf (Published version, Adobe PDF, 2.23 MB)
Copyright:
© 2020, The Author(s), under exclusive licence to Springer Nature Limited.
Abstract:
Gesture recognition using machine-learning methods is valuable in the development of advanced cybernetics, robotics and healthcare systems, and typically relies on images or videos. To improve recognition accuracy, such visual data can be combined with data from other sensors, but this approach, which is termed data fusion, is limited by the quality of the sensor data and the incompatibility of the datasets. Here, we report a bioinspired data fusion architecture that can perform human gesture recognition by integrating visual data with somatosensory data from skin-like stretchable strain sensors made from single-walled carbon nanotubes. The learning architecture uses a convolutional neural network for visual processing and then implements a sparse neural network for sensor data fusion and recognition at the feature level. Our approach can achieve a recognition accuracy of 100% and maintain recognition accuracy in non-ideal conditions where images are noisy and under- or over-exposed. We also show that our architecture can be used for robot navigation via hand gestures, with an error of 1.7% under normal illumination and 3.3% in the dark.
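To make the feature-level fusion idea concrete, the sketch below shows one possible way such an architecture could be wired up in PyTorch: a compact CNN branch extracts visual features, a dense branch encodes the strain-sensor readings, and the two feature vectors are concatenated before a classification head. This is not the authors' implementation; the image size (64×64 grayscale), number of strain sensors (5), layer widths, gesture-class count (10), and the use of dropout as a stand-in for the paper's sparse fusion network are all assumptions made purely for illustration.

```python
# Illustrative sketch only (not the published model): feature-level fusion of
# visual and somatosensory (strain-sensor) data for gesture recognition.
import torch
import torch.nn as nn


class GestureFusionNet(nn.Module):
    def __init__(self, num_sensors: int = 5, num_classes: int = 10):
        super().__init__()
        # Visual branch: a small CNN that maps a grayscale image to a feature vector.
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),  # assumes 64x64 input images
        )
        # Somatosensory branch: strain-sensor readings mapped to the same feature width.
        self.somato = nn.Sequential(
            nn.Linear(num_sensors, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Fusion head: concatenated features pass through a layer whose activations
        # are thinned with dropout here; the paper's "sparse neural network" for
        # fusion may be realized differently.
        self.fusion = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, image: torch.Tensor, strain: torch.Tensor) -> torch.Tensor:
        v = self.visual(image)            # (batch, 128) visual features
        s = self.somato(strain)           # (batch, 128) somatosensory features
        fused = torch.cat([v, s], dim=1)  # feature-level fusion
        return self.fusion(fused)         # gesture-class logits


if __name__ == "__main__":
    model = GestureFusionNet()
    imgs = torch.randn(4, 1, 64, 64)   # batch of hand images (assumed size)
    strain = torch.randn(4, 5)         # batch of strain-sensor readings (assumed count)
    print(model(imgs, strain).shape)   # torch.Size([4, 10])
```

The key design choice the abstract describes is that fusion happens at the feature level rather than by averaging per-modality predictions, which is what the concatenation step above is meant to illustrate.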