Multi-View Multi-Label Learning with Sparse Feature Selection for Image Annotation

Publisher:
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2020, 22, (11), pp. 2844-2857
Issue Date:
2020-11-01
File: 08960273.pdf (Published version, Adobe PDF, 3.81 MB)
© 1999-2012 IEEE. In image analysis, image samples are commonly represented by multi-view features and associated with multiple class labels for better interpretation. However, multi-view data may include noisy, irrelevant and redundant features, while multiple class labels can be noisy and incomplete. Owing to these data characteristics, it is hard to perform feature selection on multi-view multi-label data. To address these challenges, in this paper, we propose a novel multi-view multi-label sparse feature selection (MSFS) method, which exploits both view relations and label correlations to select discriminative features for further learning. Specifically, the multi-label information is decomposed into a reduced latent label representation to capture higher-level concepts and correlations among multiple labels. Multiple local geometric structures are constructed to exploit visual similarities and relations across different views. By taking full advantage of the latent label representation and the multiple local geometric structures, a sparse regression model with l_{2,1}-norm and Frobenius-norm (F-norm) penalty terms is used to perform hierarchical feature selection, where the F-norm penalty performs high-level (i.e., view-wise) feature selection to preserve the informative views and the l_{2,1}-norm penalty conducts low-level (i.e., row-wise) feature selection to remove noisy features. To solve the proposed formulation, we also devise a simple yet efficient iterative algorithm. Experiments and comparisons on real-world image datasets demonstrate the effectiveness and potential of MSFS.
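To make the penalty structure in the abstract concrete, the following is a minimal sketch, assuming concatenated view features X, a label matrix Y, and a standard iteratively reweighted least-squares update for the objective ||XW - Y||_F^2 + alpha*||W||_{2,1} + beta*sum_v ||W_v||_F. It is not the authors' MSFS algorithm (which additionally uses a latent label representation and per-view graph Laplacians); the function and parameter names (msfs_sketch, view_dims, alpha, beta) are illustrative only.

```python
import numpy as np

def msfs_sketch(X, Y, view_dims, alpha=1.0, beta=1.0, n_iter=50, eps=1e-8):
    """Sketch of hierarchical sparse feature selection with two penalties:
    an l_{2,1}-norm on rows of W (feature-wise sparsity) and an F-norm on
    each view's block of W (view-wise sparsity)."""
    n, d = X.shape
    # view index of each feature column; sum(view_dims) must equal d
    view_of = np.concatenate([np.full(dv, v) for v, dv in enumerate(view_dims)])
    XtX, XtY = X.T @ X, X.T @ Y
    # ridge initialisation so the reweighting terms start from a sensible W
    W = np.linalg.solve(XtX + alpha * np.eye(d), XtY)
    for _ in range(n_iter):
        # row-wise (l2,1) reweighting: d_i = 1 / (2 * ||w_i||_2)
        row_norms = np.maximum(np.linalg.norm(W, axis=1), eps)
        D = 1.0 / (2.0 * row_norms)
        # view-wise (F-norm) reweighting: g_v = 1 / (2 * ||W_v||_F)
        view_norms = np.array([max(np.linalg.norm(W[view_of == v]), eps)
                               for v in range(len(view_dims))])
        G = 1.0 / (2.0 * view_norms[view_of])
        # closed-form update of W with both diagonal reweighting terms
        A = XtX + alpha * np.diag(D) + beta * np.diag(G)
        W = np.linalg.solve(A, XtY)
    # score each feature by the row norm of W; larger means more discriminative
    return W, np.linalg.norm(W, axis=1)

# Example usage on synthetic data: two views with 30 and 20 features, 5 labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
Y = (rng.standard_normal((200, 5)) > 0).astype(float)
W, scores = msfs_sketch(X, Y, view_dims=[30, 20])
top_features = np.argsort(scores)[::-1][:10]
```

In this sketch the F-norm term shrinks whole view blocks of W toward zero (preserving only informative views), while the l_{2,1} term zeroes individual feature rows, mirroring the high-level/low-level selection described in the abstract.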