Multiple-task learning and knowledge transfer using generative adversarial capsule nets
- Publication Type: Conference Proceeding
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2018, 11320 LNAI pp. 669 - 680
This item is closed access and not available.
© Springer Nature Switzerland AG 2018. It is common for practical data to have multiple attributes of interest. For example, a picture can be characterized in terms of its content, e.g. the categories of the objects it depicts, while at the same time its style, such as photo-realistic or artistic, is also relevant. This work is motivated by taking advantage of all available sources of information about the data, including those not directly related to the target of the analysis. We propose an explicit and effective knowledge representation and transfer architecture for image analytics that employs capsules for deep neural network training based on generative adversarial nets (GAN). The adversarial scheme helps discover capsule representations of the data in which different semantic meanings occupy different dimensions of the capsules. The representation includes a subset of variables that is specialized for the target task by eliminating information about the irrelevant attributes. We theoretically show that this elimination amounts to mixing the conditional distributions of the represented data. Empirical evaluations show that the proposed method is effective for both standard domain-transfer recognition tasks and zero-shot transfer.
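The "mixing conditional distributions" claim in the abstract can be illustrated numerically: if the task-specific subset of a capsule code carries no information about an irrelevant attribute, then its conditional distributions given that attribute coincide, i.e. they mix into a single marginal. The following is a toy sketch with a hand-built code, not the paper's trained model; the variable names (`z_task`, `z_style`) and the Gaussian noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
y = rng.integers(0, 2, n)  # content attribute (the target task)
s = rng.integers(0, 2, n)  # style attribute (irrelevant to the task)

# A hypothetical capsule code split into a task subset and a style subset.
# By construction, z_task depends only on y and z_style only on s.
z_task = y + 0.1 * rng.standard_normal(n)
z_style = s + 0.1 * rng.standard_normal(n)

# Elimination: conditioning z_task on the irrelevant attribute s changes
# nothing -- the two conditionals have (statistically) the same mean.
mean_s0 = z_task[s == 0].mean()
mean_s1 = z_task[s == 1].mean()

# Conditioning on the target attribute y, by contrast, separates them.
mean_y0 = z_task[y == 0].mean()
mean_y1 = z_task[y == 1].mean()
```

In the paper this invariance is induced adversarially during training rather than built in by hand; the sketch only shows what the resulting representation is claimed to satisfy.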