A blind deconvolution model for scene text detection and recognition in video

Publication Type:
Journal Article
Citation:
Pattern Recognition, 2016, 54, pp. 128–148
Issue Date:
2016-06-01
Files in This Item:
1-s2.0-S003132031600011X-main.pdf (Published Version, 8.29 MB, Adobe PDF)
© 2016 Elsevier Ltd. All rights reserved.
Abstract:
Text detection and recognition in poor-quality video is a challenging problem because camera and text movements cause unpredictable blur and distortion, which degrades the overall performance of text detection and recognition methods. This paper presents a combined quality metric for estimating the degree of blur in a video frame or image. The proposed method then introduces a blind deconvolution model that enhances edge intensity by suppressing blurred pixels. The proposed deblurring model is compared with other state-of-the-art models to demonstrate its superiority. In addition, to validate the usefulness and effectiveness of the proposed model, we conducted text detection and recognition experiments on blurred images classified by the proposed model from standard video databases, namely ICDAR 2013, ICDAR 2015, and YVT, and from standard natural scene image databases, namely ICDAR 2013, SVT, and MSER. Text detection and recognition results on both blurred and deblurred videos/images show that the proposed model improves performance significantly.
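The abstract describes classifying frames by their degree of blur before deblurring. The paper's combined quality metric is not reproduced in this record; as a minimal illustrative sketch, a common single-image sharpness proxy is the variance of the Laplacian, which drops when high-frequency edge content is smoothed away. The `laplacian_variance` and `box_blur` helpers below are hypothetical names introduced here for illustration, not the paper's method:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the Laplacian response: a simple sharpness proxy.

    Illustrative stand-in only; the paper's combined blur-quality
    metric is not reproduced here.
    """
    # Discrete 4-neighbour Laplacian via finite differences
    # (interior pixels only, to avoid border handling).
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4.0 * img[1:-1, 1:-1])
    return float(lap.var())

def box_blur(img, k=5):
    """k x k box blur via 2-D cumulative sums (valid region only)."""
    c = np.cumsum(np.cumsum(img, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))      # high-frequency synthetic test image
blurred = box_blur(sharp, k=5)    # smoothed version of the same image
# Blurring suppresses high frequencies, so the sharp image scores higher;
# a threshold on such a score could route frames to the deblurring stage.
assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

In a pipeline like the one the abstract outlines, frames whose score falls below a threshold would be treated as blurred and passed to the deconvolution stage.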