Trying to understand how analogical classifiers work

Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2012, vol. 7520 LNAI, pp. 582-589
Issue Date:
2012-10-23
Abstract:
Based on a formal modeling of analogical proportions, a new type of classifier has started to be investigated in the past few years. With such classifiers, there is no standard statistical counting or distance evaluation. Despite their differences from classical approaches, such as naive Bayesian, k-NN, or even SVM classifiers, analogy-based classifiers appear to be quite successful. Even if this success may not come as a complete surprise, since one may imagine that a general regularity or conformity principle is still at work (as in the other classifiers), no formal explanation had been provided until now. In this research note, we lay bare the way analogy-based classifiers implement this core principle, highlighting the fact that they mainly relate changes in feature values to changes in classes. © 2012 Springer-Verlag.
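The principle described in the abstract can be sketched concretely. The snippet below is not taken from the paper; it is a minimal illustrative assumption of how an analogical-proportion classifier might operate on Boolean feature vectors (all function names and the toy data are hypothetical): a test item is classified by finding training triples whose feature-by-feature changes mirror the changes toward the test item, and extrapolating the corresponding change in class.

```python
# Minimal sketch (illustrative, not the paper's implementation) of an
# analogical-proportion classifier over Boolean feature vectors.
from collections import Counter
from itertools import permutations

def proportion_holds(a, b, c, d):
    """True if the Boolean analogical proportion a : b :: c : d holds,
    i.e. a differs from b exactly as c differs from d."""
    return a - b == c - d

def solve_proportion(a, b, c):
    """Solve a : b :: c : x for x in {0, 1}; return None if unsolvable."""
    for x in (0, 1):
        if proportion_holds(a, b, c, x):
            return x
    return None

def classify(train, x):
    """Predict the class of x (a Boolean tuple) from (vector, label) pairs
    by voting over training triples whose proportion with x holds on
    every feature, then solving the proportion on the class labels."""
    votes = Counter()
    for (a, la), (b, lb), (c, lc) in permutations(train, 3):
        if all(proportion_holds(ai, bi, ci, xi)
               for ai, bi, ci, xi in zip(a, b, c, x)):
            label = solve_proportion(la, lb, lc)  # extrapolate the class change
            if label is not None:
                votes[label] += 1
    return votes.most_common(1)[0][0] if votes else None

# Toy usage: the class copies the first feature.
train = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
print(classify(train, (1, 1)))  # -> 1
```

Note that no distance or frequency count appears here: the prediction comes entirely from relating changes in feature values (between a, b, c and the test item) to changes in class labels, which is the regularity principle the paper makes explicit.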