FACLSTM: ConvLSTM with focused attention for scene text recognition

Publication Type:
Journal Article
Citation:
Science China Information Sciences, 2020, 63 (2)
Issue Date:
2020-02-01
File:
20191007_Science_R1.pdf (Submitted Version, 847.27 kB, Adobe PDF)
Abstract:
© 2020, Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature.
Scene text recognition has recently been widely treated as a sequence-to-sequence prediction problem, in which the traditional fully-connected LSTM (FC-LSTM) has played a critical role. Owing to the limitations of FC-LSTM, existing methods have to convert 2-D feature maps into 1-D sequential feature vectors, severely damaging the valuable spatial and structural information of text images. In this paper, we argue that scene text recognition is essentially a spatiotemporal prediction problem because of its 2-D image inputs, and we propose a convolutional LSTM (ConvLSTM)-based scene text recognizer, named FACLSTM (focused attention ConvLSTM), in which the spatial correlation of pixels is fully leveraged when performing sequential prediction with LSTM. In particular, the attention mechanism is incorporated into an efficient ConvLSTM structure via convolutional operations, and additional character center masks are generated to help focus attention on the right feature areas. Experimental results on the benchmark datasets IIIT5K, SVT, and CUTE demonstrate that the proposed FACLSTM performs competitively on regular, low-resolution, and noisy text images, and outperforms state-of-the-art approaches on curved text images by large margins.
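To illustrate the general idea described in the abstract, the following is a minimal PyTorch sketch of a ConvLSTM cell whose output is modulated by a convolutionally computed spatial attention map, so that decoding operates directly on 2-D feature maps instead of flattened 1-D sequences. The layer names, kernel sizes, channel counts, and the way the attention mask is applied are illustrative assumptions, not the authors' exact FACLSTM design (which additionally uses character center masks).

```python
# Sketch of an attentive ConvLSTM decoding step (assumptions noted above).
import torch
import torch.nn as nn


class AttentiveConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # One convolution produces all four LSTM gates from [input, hidden],
        # preserving the 2-D spatial layout of the feature maps.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size, padding=pad)
        # Convolutional attention: a single-channel spatial map over the hidden state.
        self.attn = nn.Conv2d(hid_ch, 1, kernel_size, padding=pad)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        # Spatial attention mask highlights the feature regions to attend to.
        mask = torch.sigmoid(self.attn(h))
        return h * mask, (h, c)


# Usage: one decoding step over a 2-D feature map (batch=2, 32 channels, 8x25).
if __name__ == "__main__":
    cell = AttentiveConvLSTMCell(in_ch=32, hid_ch=64)
    x = torch.randn(2, 32, 8, 25)
    h0 = torch.zeros(2, 64, 8, 25)
    c0 = torch.zeros(2, 64, 8, 25)
    out, (h1, c1) = cell(x, (h0, c0))
    print(out.shape)  # torch.Size([2, 64, 8, 25])
```

Because both the gates and the attention map are computed with convolutions, the cell never collapses the feature map into a 1-D sequence, which is the key contrast with FC-LSTM-based recognizers drawn in the abstract.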