A quad tree based method for blurred and non-blurred video text frames classification through quality metrics
- Publication Type:
- Conference Proceeding
- Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 4023 - 4028
This item is closed access and not available.
Blur is a common artifact in video that adds complexity to text detection and recognition. To achieve good accuracy in text detection and recognition, this paper proposes a new method for classifying blurred and non-blurred frames in video. We explore quality metrics, namely BRISQUE, NRIQA, GPC and SI, in a new way for classification. We estimate the values of these metrics with the help of predefined samples called reference values. To widen the gap between metric values for better classification, we introduce scaling factors in the form of a non-linear sigmoidal function, which takes the metric of each current frame and its reference as input and produces templates. Based on the characteristics of the metrics, the proposed method finds relationships between them to derive classification rules. To classify frames containing local blur, we explore quad-tree division together with the classification rules, which subdivides non-blurred blocks to identify local blur. We use standard databases, namely ICDAR 2013, ICDAR 2015 and YVT videos, for experimentation, and evaluate the proposed method in terms of the text detection and recognition rates achieved by text detection and binarization methods before and after classification.
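The abstract describes recursively subdividing non-blurred blocks via a quad tree to isolate local blur. The paper's actual metrics (BRISQUE, NRIQA, GPC, SI), reference values and classification rules are not given here, so the following is only a minimal sketch of the quad-tree idea: it substitutes the variance of a 3x3 Laplacian response as a generic sharpness proxy, and the `threshold` and `min_size` parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def laplacian_var(block):
    """Variance of a 3x3 Laplacian response -- a common sharpness proxy,
    standing in for the paper's quality metrics (an assumption)."""
    h, w = block.shape
    if h < 3 or w < 3:
        return 0.0
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    # "valid" 2-D convolution written with plain NumPy slicing
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * block[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def classify_blocks(img, threshold, min_size=8):
    """Quad-tree classification sketch: a block whose sharpness falls below
    `threshold` is labelled blurred; a non-blurred block is split into four
    quadrants and recursed on (down to `min_size`), so that locally blurred
    regions inside a mostly sharp frame are isolated."""
    h, w = img.shape
    if laplacian_var(img) < threshold:
        return [("blurred", (h, w))]
    if min(h, w) <= min_size:
        return [("sharp", (h, w))]
    hh, wh = h // 2, w // 2
    quads = [img[:hh, :wh], img[:hh, wh:], img[hh:, :wh], img[hh:, wh:]]
    out = []
    for q in quads:
        out += classify_blocks(q, threshold, min_size)
    return out
```

On a frame that is sharp on one side and flat (blur-like) on the other, the recursion labels the flat quadrants blurred while continuing to subdivide the sharp ones, which mirrors the abstract's rule of dividing only non-blurred blocks.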