Generalized Hidden-Mapping Minimax Probability Machine for the training and reliability learning of several classical intelligent models

Publication Type:
Journal Article
Citation:
Information Sciences, 2018, vols. 436–437, pp. 302–319.
Issue Date:
2018-04-01
File: 1-s2.0-S0020025518300458-main.pdf (Published Version, Adobe PDF, 3.09 MB)
Abstract:
© 2018 Elsevier Inc. The Minimax Probability Machine (MPM) is a binary classifier that minimizes the worst-case upper bound on the misclassification probability. Because this upper bound serves as an explicit indicator of the reliability of the classification model, it makes the model more transparent. However, existing work on MPMs is restricted to linear models and their kernelized nonlinear counterparts. To relax these restrictions, we propose the Generalized Hidden-Mapping Minimax Probability Machine (GHM-MPM). GHM-MPM is a generalized MPM: it can train many classical intelligent models for classification tasks, such as feedforward neural networks, fuzzy logic systems, and linear and kernelized linear models, while simultaneously realizing reliability learning for them. Since GHM-MPM, like the classical MPM, is originally formulated for binary classification, it is further extended to multi-class classification by using the reliability indices obtained from the binary classifiers for arbitrary pairs of classes. Experimental results show that models trained by GHM-MPM are more transparent and reliable than those trained by classical methods.
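As background (the GHM-MPM formulation itself is in the full text): the worst-case bound the abstract refers to comes from the classical linear MPM of Lanckriet et al., which assumes only the class means and covariances (mu_+, Sigma_+) and (mu_-, Sigma_-) are known. In LaTeX notation, the classifier maximizes the distribution-free correctness level alpha,

    \max_{\alpha,\, a \neq 0,\, b} \; \alpha
    \quad \text{s.t.} \quad
    \inf_{x \sim (\mu_+, \Sigma_+)} \Pr\{a^\top x \ge b\} \ge \alpha, \qquad
    \inf_{x \sim (\mu_-, \Sigma_-)} \Pr\{a^\top x \le b\} \ge \alpha,

which, by the multivariate Chebyshev inequality, reduces to the convex problem

    \min_{a} \; \sqrt{a^\top \Sigma_+ a} + \sqrt{a^\top \Sigma_- a}
    \quad \text{s.t.} \quad a^\top (\mu_+ - \mu_-) = 1,

with \kappa equal to the reciprocal of the optimal value and the misclassification probability bounded above by 1 - \alpha = 1 / (1 + \kappa^2).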
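A minimal numerical sketch of this classical linear MPM step follows; it is not the paper's GHM-MPM, and the synthetic data, the generic scipy solver, and all variable names are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

# Hypothetical two-class data: rows are samples, columns are features.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(200, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(200, 2))

# Empirical class moments: means and covariance matrices.
mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
S_p = np.cov(X_pos, rowvar=False)
S_n = np.cov(X_neg, rowvar=False)

# Classical MPM objective: sqrt(a' S_p a) + sqrt(a' S_n a),
# minimized subject to a'(mu_p - mu_n) = 1.
def objective(a):
    return np.sqrt(a @ S_p @ a) + np.sqrt(a @ S_n @ a)

d = mu_p - mu_n
cons = {"type": "eq", "fun": lambda a: a @ d - 1.0}
a0 = d / (d @ d)  # feasible starting point: a0' d = 1
res = minimize(objective, a0, constraints=[cons])
a = res.x

# kappa is the reciprocal of the optimal value; alpha = kappa^2 / (1 + kappa^2)
# lower-bounds the correct-classification rate for any distribution with
# the given moments, so 1 - alpha upper-bounds the misclassification rate.
kappa = 1.0 / objective(a)
alpha = kappa**2 / (1.0 + kappa**2)
b = a @ mu_p - kappa * np.sqrt(a @ S_p @ a)  # decision threshold: sign(a'x - b)

print(f"alpha = {alpha:.3f}; worst-case misclassification bound = {1 - alpha:.3f}")

It is this alpha (one per binary problem) that plays the role of the reliability index the abstract describes for combining pairwise binary classifiers into a multi-class decision.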