Minimax sparse logistic regression for very high-dimensional feature selection

Publication Type: Journal Article
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2013, 24 (10), pp. 1609-1622
Issue Date: 2013-06-25
Abstract: Because of its strong convexity and probabilistic underpinnings, logistic regression (LR) is widely used in many real-world applications. However, in many problems, such as bioinformatics, choosing a small subset of features with the most discriminative power is desirable for interpreting the prediction model, making robust predictions, or performing deeper analysis. To achieve a sparse solution with respect to input features, many sparse LR models have been proposed. However, it remains challenging for them to efficiently obtain unbiased sparse solutions to very high-dimensional problems (e.g., identifying the most discriminative subset from millions of features). In this paper, we propose a new minimax sparse LR model for very high-dimensional feature selection, which can be efficiently solved by a cutting plane algorithm. To solve the resultant nonsmooth minimax subproblems, a smoothing coordinate descent method is presented. Numerical issues and the convergence rate of this method are carefully studied. Experimental results on several synthetic and real-world datasets show that the proposed method obtains better prediction accuracy with the same number of selected features and has better or competitive scalability on very high-dimensional problems compared with the baseline methods, including ℓ1-regularized LR. © 2013 IEEE.
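As a point of reference for the ℓ1-regularized LR baseline mentioned in the abstract (not the paper's minimax model or its cutting plane algorithm), the following is a minimal pure-Python sketch of sparse logistic regression via proximal gradient descent (ISTA): a gradient step on the logistic loss followed by soft-thresholding, which drives irrelevant feature weights exactly to zero. All names, the toy data, and the hyperparameters are illustrative assumptions.

```python
import math

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrinks v toward zero by t.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def l1_logreg(X, y, lam=0.05, lr=0.5, iters=300):
    """Proximal gradient (ISTA) for l1-regularized logistic regression.

    Minimizes (1/n) * sum log-loss + lam * ||w||_1 over weights w.
    Returns a sparse weight vector; zero entries mark dropped features.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # Gradient of the average logistic loss.
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            err = p - yi
            for j in range(d):
                grad[j] += err * xi[j] / n
        # Gradient step, then soft-thresholding (the l1 prox step).
        w = [soft_threshold(wj - lr * gj, lr * lam)
             for wj, gj in zip(w, grad)]
    return w

# Toy data: only feature 0 is informative; features 1 and 2 are noise.
X = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [-2.0, 0.0, 0.0]]
y = [1, 1, 0, 0]
w = l1_logreg(X, y)
```

On this toy problem the recovered weight vector is sparse: the informative feature gets a positive weight while the uninformative ones stay at zero. The abstract's point is that such ℓ1 approaches become expensive and biased at very high dimensions, which motivates the minimax formulation.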