Article Abstract
BERT Text Sentiment Analysis Combined with Self-Attention
Submitted: 2024-07-19  Revised: 2024-10-08
DOI:
Key words: BERT model; text sentiment analysis; self-attention mechanism
Funding: 1. 2022 Key Project of Natural Science Research in Anhui Higher Education Institutions: Research and Application of Sentiment Analysis of Public-Opinion Texts for the Public Security Domain (Project No. 2022AH052939); 2. 2022 College-level Key Teaching Research Project: A Preliminary Study on Teaching Reform of Web Front-end Development Based on the CDIO Concept (Project No. 2022yjjyxm11)
Author  Affiliation  Postcode
朱珍元* 安徽警官职业学院 230031
苏喻 合肥师范学院 合肥综合性国家科学中心 
Abstract:
In the field of text sentiment analysis, the BERT model is widely used for its powerful feature-extraction ability. However, empirical studies show that without fine-tuning, BERT's accuracy can degrade significantly, so the model's practical performance falls short of expectations. To address this problem, a BERT text sentiment analysis model combined with self-attention, BERT-BLSTM-Attention, is proposed. The model combines BERT's pre-training ability with a BLSTM and a self-attention mechanism to strengthen the understanding and analysis of text sentiment. First, the BERT model represents the input text as high-dimensional feature vectors. As a powerful pre-trained model, BERT captures rich semantic information and contextual features, providing the base input for the subsequent layers; its bidirectional encoding allows the model to extract finer-grained semantic information from the surrounding context, which is crucial for sentiment analysis. Then, a multi-head self-attention mechanism is introduced after the BLSTM layer. With self-attention, the model can focus on the important parts of the input sequence and reinforce these key features by dynamically assigning weights. By computing several attention heads in parallel, multi-head self-attention lets the model learn different representations and capture the details and salience of the text at multiple levels. This mechanism increases the model's sensitivity to sentiment cues and, especially for long texts, identifies the key sentiment-bearing information more effectively. Finally, the output layer uses a Softmax function to classify the text's sentiment: based on the collected features, the model produces a probability distribution over the sentiment categories, and it also shows good generalization. Experiments show that the BLSTM model with the self-attention mechanism is 1.8% more accurate than the BLSTM model without it, and 0.9% more accurate than the model without BERT, which demonstrates the effectiveness of the proposed model in extracting language features.
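To make the BERT → BLSTM → multi-head self-attention → Softmax pipeline described above concrete, the following is a minimal sketch of such a classifier in PyTorch. It is an illustration only, not the authors' implementation: the HuggingFace `transformers` BERT wrapper, the checkpoint name `bert-base-chinese`, the hidden sizes, the number of attention heads, the number of sentiment classes, and the mean-pooling step are all assumptions not specified in the abstract.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class BertBLSTMAttention(nn.Module):
    """Sketch of a BERT -> BiLSTM -> multi-head self-attention -> Softmax classifier."""

    def __init__(self, bert_name="bert-base-chinese", lstm_hidden=128,
                 num_heads=4, num_classes=2, dropout=0.1):
        super().__init__()
        # Pre-trained BERT maps token ids to contextual feature vectors.
        self.bert = BertModel.from_pretrained(bert_name)
        # BiLSTM over the BERT token representations (768-dim for base models).
        self.blstm = nn.LSTM(input_size=self.bert.config.hidden_size,
                             hidden_size=lstm_hidden,
                             batch_first=True, bidirectional=True)
        # Multi-head self-attention re-weights the BiLSTM outputs so that
        # sentiment-bearing tokens receive larger weights.
        self.attention = nn.MultiheadAttention(embed_dim=2 * lstm_hidden,
                                               num_heads=num_heads,
                                               dropout=dropout,
                                               batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # Linear layer followed by Softmax yields the class probability distribution.
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) contextual embeddings from BERT.
        bert_out = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.blstm(bert_out)  # (batch, seq_len, 2 * lstm_hidden)
        # Self-attention: queries, keys and values all come from the BiLSTM output;
        # padded positions are masked out of the keys.
        attn_out, _ = self.attention(lstm_out, lstm_out, lstm_out,
                                     key_padding_mask=~attention_mask.bool())
        # Masked mean-pooling over real tokens (an assumed aggregation step).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (attn_out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        logits = self.classifier(self.dropout(pooled))
        # Softmax output layer, as described in the abstract.
        return torch.softmax(logits, dim=-1)
```

In use, a matching `BertTokenizer` would supply `input_ids` and `attention_mask`, and the whole network would typically be fine-tuned end to end with a cross-entropy objective so that BERT's features adapt to the sentiment task, which is exactly the fine-tuning issue the abstract highlights.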