Two-layer multiple kernel learning

Publication Type:
Conference Proceeding
Citation:
Journal of Machine Learning Research, 2011, vol. 15, pp. 909-917
Issue Date:
2011-12-01
File: zhuang11a.pdf (Published version, Adobe PDF, 555.07 kB)
Abstract:
Multiple Kernel Learning (MKL) aims to learn kernel machines for solving a real machine learning problem (e.g., classification) by exploring combinations of multiple kernels. The traditional MKL approach is in general "shallow" in the sense that the target kernel is simply a linear (or convex) combination of some base kernels. In this paper, we investigate a framework of Multi-Layer Multiple Kernel Learning (MLMKL) that aims to learn "deep" kernel machines by exploring combinations of multiple kernels in a multi-layer structure, which goes beyond the conventional MKL approach. Through a multi-layer mapping, the proposed MLMKL framework offers greater flexibility than regular MKL for finding the optimal kernel for a given application. As a first attempt at this new MKL framework, we present a Two-Layer Multiple Kernel Learning (2LMKL) method together with two efficient algorithms for classification tasks. We analyze their generalization performance and conduct an extensive set of experiments over 16 benchmark datasets, in which encouraging results show that our methods perform better than conventional MKL methods. Copyright 2011 by the authors.
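To illustrate the idea of a two-layer kernel composition described in the abstract, the sketch below builds a composite kernel by applying an outer nonlinear (exponential) mapping to a weighted sum of base kernels and plugs the resulting Gram matrices into a standard SVM. This is a minimal, hypothetical example: the choice of RBF base kernels, the fixed weight vector mu, and the use of scikit-learn's precomputed-kernel SVC are assumptions for illustration, not the authors' implementation or learned parameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical base kernels: Gaussian (RBF) kernels with different bandwidths.
BASE_GAMMAS = [0.1, 0.5, 2.0]

def rbf_kernel(X, Y, gamma):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def two_layer_kernel(X, Y, mu):
    """Two-layer composite kernel: an exponential outer layer applied to a
    weighted sum of base kernels (inner layer). The exponential of a valid
    kernel is again a valid kernel, so the composite Gram matrix stays PSD."""
    inner = sum(m * rbf_kernel(X, Y, g) for m, g in zip(mu, BASE_GAMMAS))
    return np.exp(inner)

# Toy data; in the paper the method is evaluated on benchmark datasets.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fixed kernel weights purely for illustration; 2LMKL would learn these.
mu = np.array([1.0, 1.0, 1.0])

# Use the precomputed two-layer Gram matrices with a standard SVM classifier.
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(two_layer_kernel(X_tr, X_tr, mu), y_tr)
print("test accuracy:", clf.score(two_layer_kernel(X_te, X_tr, mu), y_te))
```

In the 2LMKL algorithms proposed in the paper, the combination weights are optimized jointly with the classifier rather than held fixed as in this sketch.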