Emotion Classification of Text Based on BERT and Broad Learning System

Springer International Publishing
Publication Type: Conference Proceeding
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2021, 12858 LNCS, pp. 382-396
Emotion classification is one of the most important tasks in natural language processing (NLP); it focuses on identifying the emotion expressed in a text. However, most existing models are based on deep learning methods, which often suffer from long training times and difficulties in convergence and theoretical analysis. To address these problems, we propose a method for emotion classification of text based on bidirectional encoder representations from transformers (BERT) and the broad learning system (BLS). Texts are fed into a pre-trained BERT model to obtain context-dependent word embeddings, and all word vectors are averaged to obtain a sentence embedding. The feature nodes and enhancement nodes of BLS extract the linear and nonlinear features of the text, respectively, and three cascading BLS structures are designed to transform the input data and improve the ability of text feature extraction. The two groups of features are fused and fed into the output layer to obtain a probability distribution over the emotion classes, thereby achieving emotion classification. Extensive experiments on the SemEval-2019 Task 3 and SMP2020-EWECT datasets show that the proposed method reduces training time and improves classification performance compared with the baseline methods.
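The pipeline the abstract describes (mean-pooled BERT sentence embeddings fed into BLS feature and enhancement nodes, with both groups of features fused for classification) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, dimensions, random weight initialization, and the ridge-regression solution for the output weights are all assumptions, and the paper's three cascading BLS structures are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sentence_embedding(word_vectors):
    """Average context-dependent word vectors (e.g. from BERT) into one sentence embedding."""
    return word_vectors.mean(axis=0)

class BroadLearningSketch:
    """Illustrative BLS-style classifier: linear feature nodes + nonlinear enhancement nodes."""

    def __init__(self, dim=16, n_feature=20, n_enhance=40, reg=1e-2, n_classes=3):
        self.Wf = rng.normal(size=(dim, n_feature))        # feature-node weights (assumed random)
        self.We = rng.normal(size=(n_feature, n_enhance))  # enhancement-node weights (assumed random)
        self.reg = reg                                     # ridge regularization strength
        self.Wout = None                                   # learned output weights

    def _expand(self, X):
        Z = X @ self.Wf            # feature nodes: linear mapping of the input
        H = np.tanh(Z @ self.We)   # enhancement nodes: nonlinear mapping of the feature nodes
        return np.hstack([Z, H])   # fuse the two groups of features

    def fit(self, X, Y):
        A = self._expand(X)
        # Closed-form ridge-regression solution for the output layer; this avoids
        # iterative gradient training and keeps training time short.
        self.Wout = np.linalg.solve(A.T @ A + self.reg * np.eye(A.shape[1]), A.T @ Y)
        return self

    def predict_proba(self, X):
        scores = self._expand(X) @ self.Wout
        # Softmax to turn scores into a probability distribution over emotion classes.
        e = np.exp(scores - scores.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
```

A usage sketch on synthetic data: embed a "sentence" of 7 word vectors, then fit the classifier on 50 random sentence embeddings with 3 one-hot emotion labels and read off class probabilities via `predict_proba`.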