Sequential Labeling with Structural SVM under Nondecomposable Losses
- Publication Type: Journal Article
- Citation: IEEE Transactions on Neural Networks and Learning Systems, 2018, 29 (9), pp. 4177-4188
- Issue Date: 2018-09-01
Closed Access
| Filename | Description | Size |
|---|---|---|
| NNLS_minor_revision.pdf | Accepted Manuscript Version | 541.12 kB |
This item is closed access and not available.
© 2012 IEEE. Sequential labeling addresses the classification of sequential data, which are widespread in fields as diverse as computer vision, finance, and genomics. The model traditionally used for sequential labeling is the hidden Markov model (HMM), in which the sequence of class labels to be predicted is encoded as a Markov chain. In recent years, HMMs have benefited from minimum-loss training approaches, such as the structural support vector machine (SSVM), which has reported higher classification accuracy in many cases. However, the loss functions available for training are restricted to decomposable cases, such as the 0-1 loss and the Hamming loss. In many practical cases, other loss functions, such as those based on the F1 measure, the precision/recall break-even point, and the average precision (AP), describe desirable performance more effectively. For this reason, in this paper, we propose a training algorithm for the SSVM that can minimize any loss based on the classification contingency table, and we also present a training algorithm that minimizes an AP loss. Experimental results over a set of diverse and challenging data sets (TUM Kitchen, CMU Multimodal Activity, and Ozone Level Detection) show that the proposed training algorithms achieve significant improvements in the F1 measure and AP compared with the conventional SSVM, and that their performance is in line with or above that of other state-of-the-art sequential labeling approaches.
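To make the distinction concrete, the sketch below (not the paper's algorithm, just an illustration under assumed binary labels) shows why a loss such as 1 − F1 is nondecomposable: it is computed from the contingency table (TP, FP, FN, TN) of the whole predicted sequence, so it cannot be written as a sum of per-position terms the way the Hamming loss can.

```python
def contingency_table(y_true, y_pred, positive=1):
    """Count TP, FP, FN, TN between two equal-length label sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def f1_loss(y_true, y_pred, positive=1):
    """1 - F1: a nondecomposable loss defined on the full contingency table."""
    tp, fp, fn, _ = contingency_table(y_true, y_pred, positive)
    if tp == 0:
        return 1.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 1.0 - 2 * precision * recall / (precision + recall)

# One missed positive: precision 1.0, recall 2/3, so F1 = 0.8 and the loss is 0.2,
# whereas the (normalized) Hamming loss for the same prediction is only 1/5.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(f1_loss(y_true, y_pred))  # ≈ 0.2
```

In SSVM training this matters because loss-augmented inference must search over whole label sequences: with a decomposable loss the search factorizes over positions, while a contingency-table loss ties all positions together, which is the difficulty the proposed algorithms address.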