Fused feature encoding in convolutional neural network
- Publication Type: Journal Article
- Multimedia Tools and Applications, 2019, 78 (2), pp. 1635–1648
This item is closed access and not available.
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Recently, deep hashing (DH) methods have been proposed to learn image-specific representations together with a set of hash functions. However, existing DH methods mainly use convolutional neural networks (CNNs) to extract global features, losing some local information. Moreover, the pairwise or triplet-wise models applied in DH methods increase computational complexity and storage requirements. In this paper, we propose a new DH method called fused feature encoding (FFE). In FFE, we introduce a bypass from an intermediate convolutional layer to extract local information from images, and we unify local and global information in one neural network to explore richer semantic information within the image. In our model, the number of neurons in the global or local encoding layer corresponds to the number of global or local encoding bits, respectively. We also apply a new weight-update method to improve the efficiency of the network. Experimental results show the superiority of the proposed approach over state-of-the-art methods.
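The abstract's data flow — a bypass branch tapped from an intermediate layer feeding a local encoding layer, a global encoding layer on top of the final features, one neuron per hash bit — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: dense projections stand in for the convolutional stages, and all layer sizes and bit counts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # Stand-in for a convolutional stage: dense projection + ReLU.
    # The point is the branching data flow, not the convolution arithmetic.
    return np.maximum(x @ w, 0.0)

# Hypothetical sizes: 256-d input features, 128-d intermediate, 64-d final.
x = rng.standard_normal((4, 256))            # batch of 4 images (as flat features)
w1 = rng.standard_normal((256, 128)) * 0.1   # early stage
w2 = rng.standard_normal((128, 64)) * 0.1    # later stage

h_mid = layer(x, w1)       # intermediate activations -> bypass (local branch)
h_top = layer(h_mid, w2)   # final activations -> global branch

# Encoding layers: one neuron per hash bit, as the abstract describes.
n_local_bits, n_global_bits = 8, 16
w_local = rng.standard_normal((128, n_local_bits)) * 0.1
w_global = rng.standard_normal((64, n_global_bits)) * 0.1

local_bits = (h_mid @ w_local > 0).astype(np.uint8)    # local encoding bits
global_bits = (h_top @ w_global > 0).astype(np.uint8)  # global encoding bits

# Fused binary code: local and global bits concatenated per image.
code = np.concatenate([local_bits, global_bits], axis=1)
print(code.shape)  # one 24-bit code per image
```

In a real network the thresholding would be replaced by a trainable relaxation (e.g. tanh) so gradients can flow to both branches; the hard sign here only shows how the two encoding layers produce the fused binary code.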