Efficient multitemplate learning for structured prediction

Publication Type: Journal Article
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2013, 24(2), pp. 248-261
Issue Date: 2013-01-01
Abstract:
Conditional random fields (CRFs) and structural support vector machines (structural SVMs) are two state-of-the-art methods for structured prediction that capture the interdependencies among output variables. The success of these methods is attributed to the fact that their discriminative models can account for overlapping features on all input observations. These features are usually generated by applying a given set of templates to labeled data, but improper templates may lead to degraded performance. To alleviate this issue, in this paper we propose a novel multiple template learning paradigm that learns the structured predictor and the importance of each template simultaneously, so that hundreds of arbitrary templates can be added to the learning model without careful manual selection. This paradigm can be formulated as a special multiple kernel learning problem with an exponential number of constraints. We then introduce an efficient cutting-plane algorithm to solve this problem in the primal and show its convergence. We also evaluate the proposed learning paradigm on two widely studied structured prediction tasks, i.e., sequence labeling and dependency parsing. Extensive experimental results show that the proposed method outperforms CRFs and structural SVMs by exploiting the importance of each template. Complexity analysis and empirical results also show that the proposed method is more efficient than online multiple kernel learning on very sparse and high-dimensional data. We further extend this paradigm to structured prediction using generalized p-block norm regularization with p > 1, and experiments show competitive performance when p ∈ (1, 2). © 2012 IEEE.
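
To make the template idea in the abstract concrete, below is a minimal Python sketch (not from the paper) of how feature templates generate per-template blocks of sparse features for sequence labeling, with a nonnegative weight per template scaling its block; the template names, feature weights, and template weights are hypothetical placeholders, whereas in the proposed paradigm the template importances are learned jointly with the structured predictor.

```python
# Minimal sketch of template-based feature generation with per-template importance weights.
# All names and numbers here are illustrative assumptions, not the authors' implementation.

from collections import defaultdict

# Hypothetical feature templates: each maps (sentence, position) to one feature string.
TEMPLATES = {
    "w0":      lambda sent, i: f"w0={sent[i]}",                        # current word
    "w-1":     lambda sent, i: f"w-1={sent[i-1] if i > 0 else '<s>'}", # previous word
    "suffix3": lambda sent, i: f"suf3={sent[i][-3:]}",                 # 3-character suffix
}

def template_features(sent, i):
    """Apply every template at position i; features stay grouped by template."""
    return {name: tpl(sent, i) for name, tpl in TEMPLATES.items()}

def score(sent, i, w, template_weights):
    """Score a position: each template's feature block is scaled by its importance weight."""
    s = 0.0
    for name, feat in template_features(sent, i).items():
        s += template_weights[name] * w[feat]
    return s

# Toy usage with made-up model and template weights.
w = defaultdict(float, {"w0=learning": 1.2, "suf3=ing": 0.4})
template_weights = {"w0": 0.8, "w-1": 0.1, "suffix3": 0.5}
sent = ["structured", "prediction", "learning"]
print(score(sent, 2, w, template_weights))  # 0.8*1.2 + 0.5*0.4 = 1.16
```

In this sketch an uninformative template simply receives a small importance weight, which mirrors how the paper's formulation lets many arbitrary templates coexist without hurting the model.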