Robust active representation via ℓ2,p-norm constraints

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Knowledge-Based Systems, 2022, 235, pp. 107639
Issue Date:
2022-01-10
Filename | Description | Size
1-s2.0-S0950705121009011-main.pdf | Published version | 1.24 MB
Abstract:
Active learning maximizes the performance of the current learning model by exploiting unlabeled data. In the early stage, when labels are scarce, an effective paradigm is to find representative samples whose hypothesis remains consistent with the entire unlabeled pool. From a matrix perspective, this paradigm can be cast as a sparse representation of the input matrix under an ℓ2,1-norm constraint. However, when the ℓ2,1-norm constraint is applied in the loss function, it may suffer from the mean accumulation problem, yielding sub-optimal mean-centering and low robustness to outliers. In this paper, to address these problems, we generalize the ℓ2,1-norm to the ℓ2,p-norm constraint, where 0
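For context, the matrix norms referenced above are given here using their conventional definitions; the notation (a matrix A with rows a^i) is supplied for readability and is not taken from the paper itself:

\[
\|A\|_{2,1} = \sum_{i=1}^{n} \|a^{i}\|_{2},
\qquad
\|A\|_{2,p} = \Big( \sum_{i=1}^{n} \|a^{i}\|_{2}^{p} \Big)^{1/p},
\]

where \(a^{i}\) denotes the i-th row of \(A \in \mathbb{R}^{n \times d}\). Choosing a smaller p places less weight on rows with large norms, which is the usual motivation for ℓ2,p-type constraints when robustness to outliers is desired.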