Part-stacked CNN for fine-grained visual categorization
- Publication Type:
- Conference Proceeding
- Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1173-1182
- Issue Date: January 2016
Files in This Item:
- Part-stacked CNN for fine-grained visual categorization.pdf (Published Version, 885.05 kB)
This item is closed access and not available.
In fine-grained visual categorization, the ability to interpret models as human-understandable visual manuals is sometimes as important as achieving high classification accuracy. In this paper, we propose a novel Part-Stacked CNN architecture that explicitly explains the fine-grained recognition process by modeling subtle differences between object parts. Built on manually labeled strong part annotations, the proposed architecture consists of a fully convolutional network that localizes multiple object parts and a two-stream classification network that encodes object-level and part-level cues simultaneously. By sharing computation across the multiple object parts, the architecture is highly efficient, running at 20 frames/sec during inference. Experimental results on the CUB-200-2011 dataset demonstrate the effectiveness of the proposed architecture in terms of classification accuracy, model interpretability, and efficiency. Being able to provide interpretable recognition results in real time, the proposed method is expected to be effective in practical applications.
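As a rough illustration of the part-stacking idea described in the abstract, the NumPy sketch below crops fixed-size part features from a shared convolutional feature map at detected part locations, stacks them along the channel axis, and pools an object-level stream from the full map. The function name, crop size, and tensor dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def stack_part_features(feature_map, part_centers, crop=3):
    """Sketch of a part-stacking step (assumed interface, not the paper's code).

    feature_map  : (C, H, W) convolutional features shared by all parts
    part_centers : list of (row, col) part locations on the feature map
    crop         : side length of the square patch taken around each part
    """
    C, H, W = feature_map.shape
    half = crop // 2
    patches = []
    for (r, c) in part_centers:
        # Clamp centers so every crop stays inside the feature map.
        r = int(np.clip(r, half, H - half - 1))
        c = int(np.clip(c, half, W - half - 1))
        patches.append(feature_map[:, r - half:r + half + 1,
                                      c - half:c + half + 1])
    # Part stream: stack all part patches along the channel axis -> (P*C, crop, crop)
    part_stream = np.concatenate(patches, axis=0)
    # Object stream: global average pooling over the whole map -> (C,)
    object_stream = feature_map.mean(axis=(1, 2))
    return part_stream, object_stream
```

Because every part reuses the same feature map, localization and part description share almost all of the convolutional computation, which is the kind of sharing the abstract credits for the 20 frames/sec inference speed.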