Spatial Pyramid-Enhanced NetVLAD With Weighted Triplet Loss for Place Recognition

Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication Type: Journal Article
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(2), pp. 661-674
We propose an end-to-end place recognition model based on a novel deep neural network. First, we exploit the spatial pyramid structure of images to enhance the vector of locally aggregated descriptors (VLAD), so that the enhanced VLAD features reflect the structural information of the images. To embed this feature extraction in a deep learning framework, we build a spatial pyramid-enhanced VLAD (SPE-VLAD) layer. Second, we impose weight constraints on the terms of the traditional triplet loss (T-loss) function, so that the resulting weighted T-loss (WT-loss) avoids suboptimal convergence during learning. The loss function works well under weakly supervised scenarios because it identifies the semantically positive and negative samples of each query using not only GPS tags but also the Euclidean distance between image representations. The SPE-VLAD and WT-loss layers are integrated with a VGG-16 or ResNet-18 network to form a novel end-to-end deep neural network that can be trained with standard backpropagation. Experiments on three benchmark data sets demonstrate that the proposed model outperforms state-of-the-art deep learning approaches to place recognition.
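The abstract does not give the exact formulas, but the two core ideas can be illustrated with a minimal NumPy sketch: hard-assignment VLAD aggregation concatenated over spatial pyramid cells, and a triplet loss whose per-negative terms are reweighted. The function names (`vlad`, `spe_vlad`, `weighted_triplet_loss`), the pyramid levels `(1, 2)`, and the exponential weighting of harder negatives are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def vlad(desc, centers):
    """Hard-assignment VLAD: sum residuals to the nearest center, then L2-normalize."""
    # desc: (N, D) local descriptors; centers: (K, D) visual words
    K, D = centers.shape
    assign = np.argmin(((desc[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    v = np.zeros((K, D))
    for k in range(K):
        if np.any(assign == k):
            v[k] = (desc[assign == k] - centers[k]).sum(axis=0)
    v = v.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def spe_vlad(fmap, centers, levels=(1, 2)):
    """Spatial-pyramid-enhanced VLAD sketch: concatenate per-cell VLADs.

    fmap: (H, W, D) conv feature map treated as a grid of local descriptors.
    For each pyramid level L, the map is split into L x L cells, VLAD is
    computed per cell, and all cell vectors are concatenated and normalized,
    so the result encodes coarse spatial structure on top of plain VLAD.
    """
    H, W, D = fmap.shape
    parts = []
    for L in levels:
        hs, ws = H // L, W // L
        for i in range(L):
            for j in range(L):
                cell = fmap[i*hs:(i+1)*hs, j*ws:(j+1)*ws].reshape(-1, D)
                parts.append(vlad(cell, centers))
    out = np.concatenate(parts)
    n = np.linalg.norm(out)
    return out / n if n > 0 else out

def weighted_triplet_loss(d_pos, d_negs, margin=0.1):
    """Weighted triplet loss sketch: hinge terms over negatives, reweighted.

    d_pos: query-positive distance; d_negs: query-negative distances.
    The weighting (here: softmax of -d_neg, emphasizing harder negatives)
    is a hypothetical stand-in for the paper's weight constraints.
    """
    d_negs = np.asarray(d_negs, dtype=float)
    losses = np.maximum(0.0, margin + d_pos - d_negs)
    w = np.exp(-d_negs)
    w = w / w.sum()
    return float((w * losses).sum())
```

With a 4x4x8 feature map, 4 visual words, and levels (1, 2), the descriptor concatenates 1 + 4 = 5 cell VLADs of length 4 * 8 = 32 each, giving a 160-dimensional unit vector; an easy triplet (negative far beyond the margin) contributes zero loss.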