Multi-instance learning from positive and unlabeled bags

Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2014, 8443 LNAI (PART 1), pp. 237 - 248
Issue Date:
2014-01-01
Many methods exist to solve multi-instance learning through different mechanisms, but all of them require that both positive and negative bags be provided for learning. In reality, an application may have only positive samples describing users' learning interests, while the remaining samples are unlabeled (and may be positive, negative, or irrelevant to the underlying learning task). In this paper, we formulate this problem as positive and unlabeled multi-instance learning (puMIL). The main challenge of puMIL is to accurately identify negative bags for training discriminative classification models. To address this challenge, we assign a weight to each bag and use an Artificial Immune System based self-adaptive process to select the most reliable negative bags in each iteration. For each bag, the most positive instance (for a positive bag) or the least negative instance (for an identified negative bag) is selected to form a positive margin pool (PMP). A weighted kernel function is used to compute pairwise distances between instances in the PMP, and the resulting distance matrix is used to learn a support vector machine classifier. A test bag is classified as positive if one or more of its instances are classified as positive, and negative otherwise. Experiments on real-world data demonstrate the performance of the algorithm. © 2014 Springer International Publishing.
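
The sketch below is a loose, single-pass illustration of the workflow described in the abstract, not the authors' algorithm: the Artificial Immune System based selection and the weighted kernel are replaced by simple stand-ins (a "farthest from the positive centroid" heuristic and scikit-learn's default RBF kernel), and all function names are hypothetical.

```python
# Hedged sketch of the puMIL workflow (assumptions noted above).
import numpy as np
from sklearn.svm import SVC

def positive_centroid(pos_bags):
    """Mean of all instances appearing in the positive bags."""
    return np.vstack(pos_bags).mean(axis=0)

def select_negative_bags(unlabeled_bags, centroid, k):
    """Stand-in for the AIS-based step: treat the k unlabeled bags whose
    instances lie, on average, farthest from the positive centroid as the
    most reliable negative bags."""
    dists = [np.linalg.norm(bag - centroid, axis=1).mean() for bag in unlabeled_bags]
    order = np.argsort(dists)[::-1]
    return [unlabeled_bags[i] for i in order[:k]]

def build_pmp(pos_bags, neg_bags, centroid):
    """Positive margin pool: one representative instance per bag.
    Under this simplification, both the 'most positive' instance of a
    positive bag and the 'least negative' instance of a selected negative
    bag are taken as the instance closest to the positive centroid."""
    X, y = [], []
    for bag in pos_bags:
        X.append(bag[np.argmin(np.linalg.norm(bag - centroid, axis=1))])
        y.append(1)
    for bag in neg_bags:
        X.append(bag[np.argmin(np.linalg.norm(bag - centroid, axis=1))])
        y.append(-1)
    return np.array(X), np.array(y)

def train_pumil(pos_bags, unlabeled_bags, n_neg=None):
    """Train an SVM on the PMP built from positive and selected negative bags."""
    centroid = positive_centroid(pos_bags)
    n_neg = n_neg or len(pos_bags)
    neg_bags = select_negative_bags(unlabeled_bags, centroid, n_neg)
    X, y = build_pmp(pos_bags, neg_bags, centroid)
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

def predict_bag(clf, bag):
    """Bag-level rule from the abstract: positive if at least one instance
    inside the bag is classified as positive, negative otherwise."""
    return 1 if (clf.predict(bag) == 1).any() else -1
```

The bag-level decision rule in predict_bag follows the standard multi-instance assumption stated in the abstract; everything upstream of it (negative-bag selection, PMP construction, kernel choice) is simplified for illustration only.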