Double shrinking sparse dimension reduction

Publication Type:
Journal Article
Citation:
IEEE Transactions on Image Processing, 2013, 22 (1), pp. 244-257
Issue Date:
2013
Abstract:
Learning tasks such as classification and clustering usually perform better, and cost less in time and space, on compressed representations than on the original data. Previous work compresses data mainly via dimension reduction. In this paper, we propose 'double shrinking' to compress image data in both dimensionality and cardinality, by building either sparse low-dimensional representations or a sparse projection matrix for dimension reduction. We formulate the double shrinking model (DSM) as an ℓ1-regularized variance maximization with the constraint ∥x∥₂ = 1, and develop a double shrinking algorithm (DSA) to optimize DSM. DSA is a path-following algorithm that builds the whole solution path of locally optimal solutions at different sparsity levels. Each solution on the path is a 'warm start' for searching the next, sparser one. In each iteration of DSA, the direction, the step size, and the Lagrange multiplier are deduced from the Karush-Kuhn-Tucker (KKT) conditions. The magnitudes of trivial variables are shrunk, and the importance of critical variables is simultaneously augmented, along the selected direction with the determined step length. Double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can be combined with classification and clustering to boost their performance. The experimental results suggest that double shrinking produces efficient and effective data compression. © 1992-2012 IEEE.
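To make the DSM objective concrete: it is an ℓ1-regularized variance maximization, max_x xᵀCx − λ∥x∥₁ subject to ∥x∥₂ = 1, where C is a data covariance matrix. The abstract does not give enough detail to reproduce DSA's path-following updates, so the sketch below optimizes the same objective with a generic projected-gradient/soft-thresholding loop instead; the function name, the covariance C, the step size lr, and the weight lam are all illustrative assumptions, not values or code from the paper.

import numpy as np

def sparse_max_variance(C, lam, n_iter=500, lr=0.05, seed=0):
    """Illustrative solver for  max_x  x^T C x - lam * ||x||_1
    subject to ||x||_2 = 1 (the DSM objective as stated in the
    abstract). This generic projected gradient ascent with
    soft-thresholding is NOT the paper's DSA, which instead
    follows a solution path derived from the KKT conditions."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(C.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        x = x + lr * 2.0 * C @ x  # ascent step on the variance term x^T C x
        # l1 shrinkage: trivial coordinates are pulled toward exact zero
        x = np.sign(x) * np.maximum(np.abs(x) - lr * lam, 0.0)
        nrm = np.linalg.norm(x)
        if nrm == 0.0:  # lam too large: every coordinate was shrunk away
            return x
        x /= nrm  # project back onto the unit l2 sphere (the DSM constraint)
    return x

# Toy usage: leading sparse direction of a random covariance matrix.
X = np.random.default_rng(1).standard_normal((200, 10))
C = np.cov(X, rowvar=False)
w = sparse_max_variance(C, lam=0.5)
print(np.round(w, 3))  # many entries are exactly zero at this sparsity level

Sweeping lam from small to large mimics, very loosely, the solution path DSA traverses: each solution serves as a warm start for the next, sparser one.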