TY - JOUR
AB - When an autonomous robot is deployed in a structural environment to visually inspect surfaces, the capture conditions of images (e.g. the camera's viewing distance and angle to surfaces) may vary due to non-ideal robot poses selected to position the camera in a collision-free manner. Given that surface inspection is conducted using a classifier trained on surface samples captured with limited variation in viewing distance and angle, inspection performance can degrade when the capture conditions change. This paper presents an approach to calculate a value that represents the likelihood of a pixel being classifiable by a classifier trained with a limited dataset. The likelihood value is calculated for each pixel in an image to form a likelihood map that can be used to identify classifiable regions of the image. The information necessary for calculating the likelihood values is obtained by collecting additional depth data that maps to each pixel in an image (collectively referred to as an RGB-D image). Experiments to test the approach are conducted in a laboratory environment using an RGB-D sensor package mounted on the end-effector of a robot manipulator. A naive Bayes classifier trained with texture features extracted from Gray Level Co-occurrence Matrices is used to demonstrate the effect of image capture conditions on surface classification accuracy. Experimental results show that the classifiable regions identified using a likelihood map are up to 99.0% accurate, and the identified regions have up to 19.9% higher classification accuracy compared with the overall accuracy of the same image.
AU - To, A.W.K.
AU - Paul, G.
AU - Liu, D.
DA - 2016/08/10
DO - 10.1016/j.rcim.2015.07.003
JO - Robotics and Computer-Integrated Manufacturing
PY - 2016
VL - 37
SP - 90
EP - 102
TI - An approach for identifying classifiable regions of an image captured by autonomous robots in structural environments
Y1 - 2016/08/10
Y2 - 2024/03/28
ER -