Learning Sparse SVM for Feature Selection on Very High Dimensional Datasets

Publication Type:
Conference Proceeding
Proceedings of the 27th International Conference on Machine Learning, 2010, pp. 1047-1054
Abstract:
A sparse representation of Support Vector Machines (SVMs) with respect to input features is desirable for many applications. In this paper, by introducing a 0-1 control variable for each input feature, the l0-norm Sparse SVM (SSVM) is converted to a mixed integer programming (MIP) problem. Rather than solving this MIP directly, we propose an efficient cutting plane algorithm, combined with multiple kernel learning, to solve its convex relaxation. A global convergence proof for our method is also presented. Comprehensive experimental results on one synthetic and 10 real-world datasets show that our proposed method obtains better or competitive performance compared with existing SVM-based feature selection methods in terms of sparsity and generalization performance. Moreover, our proposed method can effectively handle large-scale and extremely high dimensional problems.
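The abstract's central idea is that zeroing out feature weights yields a sparse SVM usable for feature selection. The paper's own method (0-1 control variables, cutting planes, multiple kernel learning) is not reproduced here; as a minimal, hedged sketch of the same goal, the snippet below uses an l1-penalized linear SVM, a standard and simpler convex relaxation of l0-norm sparsity, on synthetic high-dimensional data. All dataset parameters are illustrative assumptions, not from the paper.

```python
# Illustrative sketch only: NOT the paper's cutting-plane/MKL algorithm.
# An l1-penalized linear SVM is used as a simple stand-in sparse SVM:
# the l1 penalty drives most feature weights to exactly zero, so the
# surviving nonzero weights act as the selected features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic data (assumed sizes): 100 features, only 5 informative.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=5, n_redundant=0,
                           random_state=0)

# l1 penalty requires the primal formulation (dual=False) in scikit-learn.
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)

# Features with (near-)nonzero weight are the selected subset.
selected = np.flatnonzero(np.abs(clf.coef_.ravel()) > 1e-6)
print(f"{len(selected)} of {X.shape[1]} features selected")
```

Shrinking `C` tightens the penalty and selects fewer features, loosely analogous to tightening the sparsity budget on the 0-1 control variables in the paper's MIP formulation.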