From Bayesian classifiers to possibilistic classifiers for numerical data

Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, 6379 LNAI, pp. 112-125
Issue Date:
2010-10-25
Abstract:
Naïve Bayesian classifiers are well known for their simplicity and efficiency. They rely on independence hypotheses, together with a normality assumption, which may be too demanding when dealing with numerical data. Possibility distributions are more compatible with the representation of poor data. This paper investigates two kinds of possibilistic elicitation methods to be embedded into possibilistic naïve classifiers. The first is derived from a probability-possibility transformation of Gaussian distributions (or mixtures of them), which introduces some further tolerance. The second is based on a direct interpretation of the data in fuzzy histogram or possibilistic formats that exploit, in different ways, an idea of proximity between attribute values. Moreover, possibilistic classifiers may be allowed to leave the classification open between several classes when the information is insufficient to choose one (which may be of interest when the number of classes is large). The reported experiments demonstrate the interest of possibilistic classifiers. © 2010 Springer-Verlag Berlin Heidelberg.
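As a rough illustration of the first kind of elicitation method, the sketch below applies the classical optimal probability-possibility transformation to a single Gaussian attribute, where the possibility degree of a value x is the probability mass of all values whose density is lower than that of x. It is not taken from the paper: the function names, parameters, and the min-based combination remark are assumptions for illustration only, and the paper's own variant (with its added tolerance) may differ.

```python
# Illustrative sketch only: the classical "optimal" probability-possibility
# transformation pi(x) = P({t : p(t) <= p(x)}) applied to one Gaussian
# attribute. For N(mu, sigma^2) this reduces to pi(x) = 2 * (1 - Phi(|x - mu| / sigma)).
# Names and parameters below are hypothetical, not taken from the paper.
import numpy as np
from scipy.stats import norm

def gaussian_to_possibility(x, mu, sigma):
    """Possibility degree of value(s) x under a Gaussian N(mu, sigma^2)."""
    # Values whose density is lower than p(x) lie outside [mu - d, mu + d],
    # with d = |x - mu|; their total probability mass gives pi(x).
    d = np.abs(np.asarray(x, dtype=float) - mu) / sigma
    return 2.0 * (1.0 - norm.cdf(d))

if __name__ == "__main__":
    # Toy usage with hypothetical class-conditional estimates; in a
    # possibilistic naive classifier, per-attribute degrees like these would
    # be combined (e.g. by min) instead of multiplied as in naive Bayes.
    mu, sigma = 5.0, 1.5
    values = [5.0, 6.5, 9.0]
    print(gaussian_to_possibility(values, mu, sigma))  # ~[1.0, 0.317, 0.008]
```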