Nonsmooth Penalized Clustering via ℓp-Regularized Sparse Regression

Publication Type:
Journal Article
Citation:
IEEE Transactions on Cybernetics, 2017, 47 (6), pp. 1423 - 1433
Issue Date:
2017-06-01
Files in This Item:
07460120.pdf (Published Version, 1.92 MB, Adobe PDF)
© 2016 IEEE. Clustering has been widely used in data analysis. Most existing clustering approaches assume that the number of clusters is given in advance. Recently, a novel clustering framework was proposed that can automatically learn the number of clusters from training data. Building on these works, we propose a nonsmooth penalized clustering model via ℓp (0 < p < 1) regularized sparse regression. In particular, the model is formulated as a nonsmooth, nonconvex optimization problem based on over-parameterization, which uses an ℓp-norm regularization to control the tradeoff between model fit and the number of clusters. We theoretically prove that the new model guarantees the sparseness of cluster centers. To make the model practical, we adopt an easy-to-compute criterion and a strategy to narrow the search interval of cross-validation. To handle the nonsmoothness and nonconvexity of the cost function, we propose a simple smoothing trust region algorithm and present its convergence and computational complexity analysis. Numerical studies on both simulated and real-world data sets support our theoretical results and demonstrate the advantages of the new method.
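To make the over-parameterization idea concrete, the following is a minimal illustrative sketch, not the authors' algorithm: each data point gets its own candidate center, a smoothed ℓp quasi-norm penalty on pairwise center differences pulls centers together (so the number of distinct centers, i.e., clusters, is learned rather than fixed), and plain gradient descent replaces the paper's trust region method. All function names and parameter values here are assumptions chosen for the sketch.

```python
import numpy as np

def lp_penalized_clustering(X, lam=0.1, p=0.5, eps=0.05, lr=0.05, n_iter=500):
    """Illustrative sketch (not the paper's algorithm): minimize
        0.5 * sum_i ||x_i - u_i||^2
        + lam * sum_{i<j} (||u_i - u_j||^2 + eps^2)^(p/2)
    by gradient descent on the eps-smoothed objective.
    Each point x_i has its own center u_i (over-parameterization);
    the ℓp penalty fuses nearby centers, so distinct centers
    that survive correspond to learned clusters."""
    U = X.astype(float).copy()           # one center per point
    for _ in range(n_iter):
        grad = U - X                     # gradient of the data-fit term
        diff = U[:, None, :] - U[None, :, :]      # (n, n, d) pairwise u_i - u_j
        sq = (diff ** 2).sum(-1) + eps ** 2       # smoothed squared distances
        w = p * sq ** (p / 2.0 - 1.0)             # per-pair penalty weights
        grad += lam * (w[:, :, None] * diff).sum(1)
        U -= lr * grad
    return U

def extract_clusters(U, tol=0.5):
    """Merge centers closer than tol into one cluster label."""
    labels = -np.ones(len(U), dtype=int)
    k = 0
    for i in range(len(U)):
        if labels[i] < 0:
            labels[np.linalg.norm(U - U[i], axis=1) < tol] = k
            k += 1
    return labels
```

For example, on two well-separated blobs the fused centers collapse to two groups, so `extract_clusters` recovers two clusters without the cluster count being supplied in advance. The smoothing constant `eps` plays the same conceptual role as the smoothing in the paper's trust region scheme: it makes the ℓp term differentiable at zero so first-order updates are well defined.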