SPFusionNet: Sketch segmentation using multi-modal data fusion

Publication Type:
Conference Proceeding
Citation:
Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), July 2019, pp. 1654–1659
Issue Date:
2019-07-01
Abstract:
© 2019 IEEE. The sketch segmentation problem remains largely unsolved because conventional methods are greatly challenged by the highly abstract appearance of freehand sketches and their numerous shape variations. In this work, we tackle these challenges by exploiting different modes of sketch data in a unified framework. Specifically, we propose a deep neural network, SPFusionNet, that captures the characteristics of a sketch by fusing its image and point-set modes. The image-modal component SketchNet learns hierarchically abstract, robust features and utilizes multi-level representations to produce pixel-wise feature maps, while the point-set-modal component SPointNet captures local and global contexts of the sampled point set to produce point-wise feature maps. Our framework then aggregates these feature maps with a fusion network component to generate the sketch segmentation result. Extensive experimental evaluation and comparison with peer methods on our large SketchSeg dataset verify the effectiveness of the proposed framework.
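To make the fusion idea concrete, the sketch below shows one plausible way to combine a pixel-wise feature map with point-wise features: each sampled point's feature is concatenated with the pixel feature gathered at that point's (x, y) location, then classified per point. This is a minimal illustrative sketch only, not the paper's implementation; FusionHead, the layer sizes, and the coordinate-gathering step are all assumptions.

import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Toy fusion of pixel-wise and point-wise feature maps.

    All names and layer sizes here are illustrative assumptions,
    not SPFusionNet's actual architecture. Pixel features are
    gathered at each point's (x, y) location and concatenated with
    that point's features before per-point classification.
    """

    def __init__(self, img_dim=64, pt_dim=64, num_classes=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + pt_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, img_feats, pt_feats, pt_xy):
        # img_feats: (B, C_img, H, W) pixel-wise feature map
        # pt_feats:  (B, N, C_pt)     point-wise features
        # pt_xy:     (B, N, 2)        integer pixel coordinates per point
        B, N, _ = pt_xy.shape
        b = torch.arange(B).unsqueeze(1).expand(B, N)
        # Gather the pixel feature under each sampled point -> (B, N, C_img)
        sampled = img_feats[b, :, pt_xy[..., 1], pt_xy[..., 0]]
        fused = torch.cat([sampled, pt_feats], dim=-1)
        return self.classifier(fused)  # (B, N, num_classes) per-point logits

# Example with random tensors (shapes are illustrative):
head = FusionHead(img_dim=64, pt_dim=64, num_classes=4)
img_feats = torch.randn(2, 64, 256, 256)    # backbone pixel-wise features
pt_feats = torch.randn(2, 512, 64)          # per-point features
pt_xy = torch.randint(0, 256, (2, 512, 2))  # point pixel coordinates
logits = head(img_feats, pt_feats, pt_xy)   # -> (2, 512, 4)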