Self-adaptive probability estimation for Naive Bayes classification

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
The 2013 International Joint Conference on Neural Networks, 2013, pp. 1-8
Issue Date:
2013-01
Probability estimation from a given set of training examples is crucial for learning Naive Bayes (NB) classifiers. With an insufficient number of training examples, the estimation suffers from the zero-frequency problem, which prevents NB classifiers from classifying instances whose conditional probabilities are zero. Laplace-estimate and M-estimate are two common methods that alleviate the zero-frequency problem by adding fixed terms to the probability estimation so that no conditional probability is zero. A major issue with this type of design is that the fixed terms are pre-specified without considering the uniqueness of the underlying training data. In this paper, we propose an Artificial Immune System (AIS) based self-adaptive probability estimation method, namely AISENB, which uses AIS to automatically and self-adaptively select the optimal terms and values for probability estimation. The unique immune-system-based evolutionary computation process, including initialization, cloning, mutation, and crossover, ensures that AISENB can adjust itself to the data without explicit specification of functional or distributional forms for the underlying model. Experimental results and comparisons on 36 benchmark datasets demonstrate that AISENB significantly outperforms traditional probability-estimation-based Naive Bayes classification approaches.
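To make the smoothing baselines concrete, the following is a minimal sketch of the standard Laplace-estimate and M-estimate formulas the abstract refers to; the function names and the example counts are illustrative and not taken from the paper.

```python
def laplace_estimate(count_xc, count_c, n_values):
    """Laplace-smoothed conditional probability P(x|c):
    add 1 to the joint count and the number of attribute values
    to the class count, so the estimate is never zero."""
    return (count_xc + 1) / (count_c + n_values)

def m_estimate(count_xc, count_c, prior, m):
    """M-estimate of P(x|c): blend the observed frequency with a
    prior probability, weighted by the equivalent sample size m."""
    return (count_xc + m * prior) / (count_c + m)

# Zero-frequency case: attribute value x never co-occurs with class c
# (count_xc = 0), yet both smoothed estimates remain positive.
p_laplace = laplace_estimate(0, 10, 3)          # 1 / 13
p_m = m_estimate(0, 10, prior=1 / 3, m=2.0)     # 1 / 18
print(p_laplace, p_m)
```

Both estimators replace the raw relative frequency `count_xc / count_c` with a version shifted by fixed terms (the `+1`/`n_values` and `m`/`prior` parameters); AISENB's contribution is to adapt those terms to the data rather than fixing them in advance.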