Convex sparse PCA for unsupervised feature learning

Publication Type:
Journal Article
Citation:
ACM Transactions on Knowledge Discovery from Data, 2016, 11 (1)
Issue Date:
2016-07-01
© 2016 ACM. Principal component analysis (PCA) has been widely applied to dimensionality reduction and data preprocessing in engineering, biology, social science, and other fields. Classical PCA and its variants seek linear projections of the original variables that yield low-dimensional feature representations with maximal variance. One limitation is that the results of PCA are difficult to interpret; in addition, classical PCA is sensitive to noisy data. In this paper, we propose a Convex Sparse Principal Component Analysis (CSPCA) algorithm and apply it to feature learning. First, we show that PCA can be formulated as a low-rank regression optimization problem. Building on this formulation, we incorporate l2,1-norm minimization into the objective function to make the regression coefficients sparse and thus robust to outliers. Moreover, the sparse model used in CSPCA assigns an optimal weight to each of the original features, which gives the output good interpretability: with the output of CSPCA, we can effectively analyze the importance of each feature under the PCA criterion. The new objective function is convex, and we propose an iterative algorithm to optimize it. We apply CSPCA to feature selection and conduct extensive experiments on seven benchmark datasets. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art unsupervised feature selection algorithms.
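The abstract outlines the recipe: cast PCA as a regression problem, add an l2,1-norm penalty so the coefficient matrix becomes row-sparse, and rank the original features by the resulting row weights. The exact CSPCA objective appears only in the paper itself; the sketch below is a minimal, hypothetical stand-in using a simplified convex surrogate, min_W 0.5*||X - XW||_F^2 + lam*||W||_{2,1}, solved with the standard iteratively reweighted least squares (IRLS) scheme for l2,1 penalties. The function name cspca_sketch, the regularization value, and the random data are illustrative assumptions, not the authors' implementation.

import numpy as np

def cspca_sketch(X, lam=1.0, n_iter=50, eps=1e-8):
    """Row-sparse self-reconstruction as a stand-in for CSPCA:
        min_W 0.5 * ||X - X W||_F^2 + lam * ||W||_{2,1}
    solved by iteratively reweighted least squares (IRLS).
    X is (n_samples, n_features) and assumed column-centered.
    The l2 norms of the rows of W score feature importance."""
    d = X.shape[1]
    G = X.T @ X                                   # feature Gram matrix
    W = np.linalg.solve(G + lam * np.eye(d), G)   # ridge warm start
    for _ in range(n_iter):
        # IRLS surrogate for the l2,1 term: row i of W is penalized
        # with weight lam / ||w_i||_2, clipped by eps to avoid
        # division by zero when a row shrinks to zero.
        row_norms = np.maximum(np.linalg.norm(W, axis=1), eps)
        W = np.linalg.solve(G + lam * np.diag(1.0 / row_norms), G)
    return W

# Illustrative usage: score features by the row norms of W.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
X -= X.mean(axis=0)                               # center the data
W = cspca_sketch(X, lam=5.0)
feature_scores = np.linalg.norm(W, axis=1)
top_features = np.argsort(feature_scores)[::-1]   # most important first

In this simplified surrogate, the l2,1 penalty drives whole rows of W toward zero, so features with near-zero rows contribute little to the reconstruction and receive low importance scores, mirroring the feature-weighting behavior the abstract describes.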