A quad tree based method for blurred and non-blurred video text frames classification through quality metrics

Publisher: IEEE
Publication Type: Conference Proceeding
Citation: Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 4023-4028
Issue Date: 2016
Blur is a common artifact in video that adds complexity to text detection and recognition. To achieve good accuracy for text detection and recognition, this paper proposes a new method for classifying blurred and non-blurred frames in video. We explore quality metrics, namely BRISQUE, NRIQA, GPC and SI, in a new way for classification. We estimate the values of these metrics with the help of predefined samples called reference values. To widen the difference between metric values for better classification, we introduce scaling factors as a non-linear sigmoidal function, which takes the metric of each current frame and its reference and produces templates. Based on the characteristics of the metrics, the proposed method finds relationships among them to derive rules for classification. To classify frames containing local blur, we explore quad-tree division with the classification rules, which divides non-blurred blocks to identify local blur. We use standard databases, namely ICDAR 2013, ICDAR 2015 and YVT videos, for experimentation, and evaluate the proposed method in terms of the text detection and recognition rates given by text detection and binarization methods before and after classification.
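The abstract's sigmoidal scaling of a frame metric against its reference can be illustrated with a minimal sketch. The function name, the `steepness` parameter, and the exact form of the mapping are assumptions for illustration; the paper's actual scaling factors and template construction are not specified here.

```python
import math

def scaled_metric(metric_value, reference_value, steepness=1.0):
    """Hypothetical sigmoidal scaling: maps the gap between a frame's
    quality-metric value (e.g. BRISQUE) and its reference value into
    (0, 1), widening the separation between blurred and non-blurred
    frames. The exact form used in the paper may differ."""
    diff = metric_value - reference_value
    return 1.0 / (1.0 + math.exp(-steepness * diff))
```

A frame whose metric equals its reference maps to 0.5; frames whose metric exceeds or falls below the reference are pushed toward 1 or 0, which is the widening effect the abstract describes.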
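The quad-tree division for localizing blur within a frame can be sketched as a recursive subdivision, assuming a block-level blur predicate (here the hypothetical `is_blurred`, standing in for the paper's metric-based classification rules):

```python
def quadtree_blur_regions(x, y, w, h, is_blurred, min_size):
    """Recursively split a frame region into quadrants, descending only
    into blocks the classifier flags as blurred, and return the smallest
    blurred blocks found. `is_blurred(x, y, w, h)` is a hypothetical
    predicate standing in for the paper's classification rules."""
    if not is_blurred(x, y, w, h):
        return []                          # non-blurred block: stop here
    if w <= min_size or h <= min_size:
        return [(x, y, w, h)]              # smallest blurred block found
    hw, hh = w // 2, h // 2                # split into four quadrants
    blocks = []
    for qx, qy, qw, qh in [(x,      y,      hw,     hh),
                           (x + hw, y,      w - hw, hh),
                           (x,      y + hh, hw,     h - hh),
                           (x + hw, y + hh, w - hw, h - hh)]:
        blocks += quadtree_blur_regions(qx, qy, qw, qh, is_blurred, min_size)
    return blocks
```

For example, on a 64x64 frame where only the top-left 16x16 region is blurred, the recursion descends through the blurred quadrants and returns just that block, localizing the blur the way the abstract describes.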