CU-Net: Component Unmixing Network for Textile Fiber Identification

Publication Type:
Journal Article
Citation:
International Journal of Computer Vision, 2019, 127 (10), pp. 1443 - 1454
Issue Date:
2019-10-01
File: Feng2019_Article_CU-NetComponentUnmixingNetwork.pdf (Published Version, Adobe PDF, 2.28 MB)
Abstract:
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.
Image-based nondestructive textile fiber identification is a challenging computer vision problem that is practically useful in fashion, decoration, and design. Although deep learning now outperforms humans in many scenarios, such as face and object recognition, image-based fiber identification remains an open problem for deep learning because the available samples are imbalanced and small in number. In this paper, we propose the Component Unmixing Network (CU-Net) for nondestructive textile fiber identification. CU-Net learns effective representations from imbalanced and small-sized samples to achieve high-performance textile fiber identification. CU-Net comprises a Deep Feature Extraction Module (DFE-Module) and a Component Unmixing Module (CU-Module). First, the DFE-Module extracts mixed deep features from the input textile patches. Then, the CU-Module extracts unmixed representations of the different fibers from these mixed deep features. Within the CU-Module, we introduce a self-interchange operation and a restraining loss to reduce the mixing between representations of different fibers. Furthermore, we extend CU-Net to the fiber proportion analysis task, where it also performs well. Extensive experiments demonstrate that: (1) the self-interchange operation and the restraining loss effectively unmix the representations of different fibers and improve fiber identification accuracy; and (2) CU-Net achieves more accurate fiber identification than current state-of-the-art multi-label classification methods.
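
The abstract describes the architecture only at a high level. As a rough illustration of the two-module structure it outlines, the sketch below (in PyTorch) wires a small convolutional DFE-Module to a CU-Module that projects the mixed features into one representation per fiber class, and combines a multi-label classification loss with a restraining-style penalty on the similarity between different fibers' representations. The backbone, feature sizes, number of fiber classes, the per-fiber projection heads, and the exact form of the restraining loss are assumptions made for illustration, not the paper's actual design; the self-interchange operation is omitted here.

# Minimal sketch of a CU-Net-like two-module model; all sizes and loss forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_FIBERS = 5      # assumed number of fiber classes
FEATURE_DIM = 256   # assumed dimension of each per-fiber representation

class DFEModule(nn.Module):
    """Deep Feature Extraction Module: maps a textile patch to a mixed deep feature vector."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.conv(x).flatten(1)  # (B, 128) mixed feature

class CUModule(nn.Module):
    """Component Unmixing Module: splits mixed features into per-fiber representations."""
    def __init__(self):
        super().__init__()
        # One projection head per fiber class (an assumed form of the unmixing step).
        self.heads = nn.ModuleList(nn.Linear(128, FEATURE_DIM) for _ in range(NUM_FIBERS))
        self.classifiers = nn.ModuleList(nn.Linear(FEATURE_DIM, 1) for _ in range(NUM_FIBERS))

    def forward(self, mixed):
        reps = [F.relu(h(mixed)) for h in self.heads]                    # per-fiber representations
        logits = torch.cat([c(r) for c, r in zip(self.classifiers, reps)], dim=1)
        return torch.stack(reps, dim=1), logits                          # (B, K, D), (B, K)

def restraining_loss(reps):
    """Assumed restraining loss: penalize cosine similarity between different fibers' representations."""
    reps = F.normalize(reps, dim=-1)                                     # (B, K, D)
    sim = torch.einsum("bkd,bld->bkl", reps, reps)                       # pairwise similarities
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    return off_diag.abs().mean()

class CUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.dfe = DFEModule()
        self.cu = CUModule()

    def forward(self, x):
        return self.cu(self.dfe(x))

if __name__ == "__main__":
    model = CUNet()
    patch = torch.randn(4, 3, 64, 64)                          # a batch of textile patches
    reps, logits = model(patch)
    labels = torch.randint(0, 2, (4, NUM_FIBERS)).float()      # multi-label fiber presence targets
    loss = F.binary_cross_entropy_with_logits(logits, labels) + 0.1 * restraining_loss(reps)
    print(loss.item())

The multi-label binary cross-entropy term stands in for the identification objective, while the off-diagonal similarity penalty plays the role of a restraining loss that keeps the per-fiber representations from mixing; the weighting factor 0.1 is likewise only a placeholder.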