New Advances in Affective Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 February 2025

Special Issue Editors


Prof. Dr. Ruifeng Xu
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: affective computing; sentiment analysis; argumentation mining

Prof. Dr. Tong Xu
Guest Editor
Lab of Big Data Analysis and Application, University of Science and Technology of China, Hefei 230027, China
Interests: natural language processing; social media analysis; multimodal intelligence

Dr. Yanyan Zhao
Guest Editor
Faculty of Computing, Harbin Institute of Technology, Harbin 150001, China
Interests: natural language processing; sentiment analysis

Special Issue Information

Dear Colleagues,

This Special Issue, "New Advances in Affective Computing", explores cutting-edge developments and emerging trends in affective computing: the study and development of intelligent systems that can recognize, interpret, and respond to human sentiments. It provides a platform for researchers, academics, and industry professionals to showcase novel contributions in this field, whose scope spans several interdisciplinary areas, including computer science, artificial intelligence, psychology, neuroscience, and human-computer interaction.

Topics of interest include, but are not limited to, sentiment recognition and generation, affective interaction design, affective computing in social media and big data, affective robotics, and ethical considerations in affective computing. In particular, this Special Issue emphasizes multi-modal sentiment analysis, exploring how sentiment information can be extracted from multiple sources, such as text, images, video, and audio, and integrated to interpret human sentiments more accurately. It also covers research on stance detection and argumentation mining, which contributes to a deeper understanding of individual and societal attitudes.

By presenting the latest advances and discoveries in affective computing, this Special Issue intends to foster collaboration, inspire new research directions, and pave the way for the practical application of affective computing technologies in diverse domains.

Prof. Dr. Ruifeng Xu
Prof. Dr. Tong Xu
Dr. Yanyan Zhao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • affective computing
  • sentiment analysis
  • multi-modal sentiment analysis
  • stance detection
  • argumentation mining

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

15 pages, 789 KiB  
Article
EiGC: An Event-Induced Graph with Constraints for Event Causality Identification
by Xi Zeng, Zhixin Bai, Ke Qin and Guangchun Luo
Electronics 2024, 13(23), 4608; https://doi.org/10.3390/electronics13234608 - 22 Nov 2024
Abstract
Event causality identification (ECI) focuses on detecting causal relationships between events within a document. Existing approaches typically treat each event-mention pair independently, overlooking the relational dynamics and potential conflicts among event causalities. To tackle this challenge, we propose the Event-induced Graph with Constraints (EiGC), which models complex event-level causal structures in a more realistic manner, facilitating comprehensive causal relation identification. More specifically, we construct a graph based on diverse event-driven knowledge sources, such as coreference and co-occurrence relations. A graph convolutional network (GCN) is then employed to encode these structural features, effectively capturing both local and global dependencies between nodes. Additionally, we implement event-aware constraints through integer linear programming, incorporating the principles of uniqueness, non-reflexivity, and coreference consistency in event-causal relationships. This approach ensures logical consistency and prevents conflicts in the prediction outcomes. Experimental results on three widely used datasets show that the proposed EiGC approach outperforms all the baseline models.
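
The constrained decoding step described above lends itself to a short illustration. Below is a minimal sketch, not the authors' code, of enforcing non-reflexivity, one-direction-per-pair, and coreference consistency with integer linear programming via the PuLP library; `scores` and `coref_groups` are hypothetical inputs standing in for the classifier's pair scores and the document's coreference clusters.

```python
import pulp  # pip install pulp

def decode_causal_links(scores, coref_groups):
    """Pick a conflict-free set of causal links from pairwise scores."""
    n = len(scores)
    prob = pulp.LpProblem("eci_decoding", pulp.LpMaximize)
    # x[i][j] = 1 iff event mention i is predicted to cause event mention j.
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)]
         for i in range(n)]
    # Objective: keep the links the classifier is most confident about.
    prob += pulp.lpSum(scores[i][j] * x[i][j]
                       for i in range(n) for j in range(n) if i != j)
    for i in range(n):
        prob += x[i][i] == 0                # non-reflexivity
        for j in range(i + 1, n):
            prob += x[i][j] + x[j][i] <= 1  # a pair causes in at most one direction
    # Coreference consistency: coreferent mentions share all causal links.
    for group in coref_groups:
        for a in group:
            for b in group:
                if a == b:
                    continue
                for j in range(n):
                    if j not in group:
                        prob += x[a][j] == x[b][j]
                        prob += x[j][a] == x[j][b]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and pulp.value(x[i][j]) > 0.5]
```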

15 pages, 1799 KiB  
Article
Heterogeneous Hierarchical Fusion Network for Multimodal Sentiment Analysis in Real-World Environments
by Ju Huang, Wenkang Chen, Fangyi Wang and Haijun Zhang
Electronics 2024, 13(20), 4137; https://doi.org/10.3390/electronics13204137 - 21 Oct 2024
Abstract
Multimodal sentiment analysis models can determine users' sentiments by utilizing rich information from various sources (e.g., textual, visual, and audio). However, there are two key challenges when deploying such models in real-world environments: (1) the limited performance of automatic speech recognition (ASR) models can lead to errors in recognizing sentiment words, which may mislead the sentiment analysis of the textual modality, and (2) variations in information density across modalities complicate the development of a high-quality fusion framework. To address these challenges, this paper proposes a novel Multimodal Sentiment Word Optimization Module and a heterogeneous hierarchical fusion (MSWOHHF) framework. Specifically, the proposed Multimodal Sentiment Word Optimization Module optimizes the sentiment words extracted from the textual modality by the ASR model, thereby reducing sentiment word recognition errors. In the multimodal fusion phase, a heterogeneous hierarchical fusion network architecture is introduced, which first utilizes a Transformer Aggregation Module to fuse the visual and audio modalities, enhancing the high-level semantic features of each modality. A Cross-Attention Fusion Module then integrates the textual modality with the audiovisual fusion. Next, a Feature-Based Attention Fusion Module enables fusion by dynamically tuning the weights of both the combined and unimodal representations. Sentiment polarity is then predicted using a nonlinear neural network. Finally, experimental results on the MOSI-SpeechBrain, MOSI-IBM, and MOSI-iFlytek datasets show that the MSWOHHF outperforms several strong baselines.
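
As a rough illustration of the cross-attention step described above, the following PyTorch sketch lets textual features attend to a fused audiovisual sequence. The `CrossAttentionFusion` class, its dimensions, and the residual-plus-norm wiring are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Text queries attend over the audio-visual key/value sequence."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, av_feats):
        # Query: textual features; key/value: the audiovisual fusion output.
        fused, _ = self.attn(text_feats, av_feats, av_feats)
        return self.norm(text_feats + fused)  # residual connection, then norm

# Toy shapes: batch of 2, text length 20, audio-visual length 50.
t = torch.randn(2, 20, 256)
av = torch.randn(2, 50, 256)
out = CrossAttentionFusion()(t, av)  # -> (2, 20, 256)
```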

15 pages, 1789 KiB  
Article
A Comparison-Based Framework for Argument Quality Assessment
by Jianzhu Bao, Bojun Jin, Yang Sun, Yice Zhang, Yuhang He and Ruifeng Xu
Electronics 2024, 13(20), 4088; https://doi.org/10.3390/electronics13204088 - 17 Oct 2024
Abstract
Assessing the quality of arguments is both valuable and challenging. Humans often find that making pairwise comparisons between a target argument and several reference arguments facilitates a more precise judgment of the target argument’s quality. Inspired by this, we propose a comparison-based framework for argument quality assessments (CompAQA), which scores the quality of an argument through multiple pairwise comparisons. Additionally, we introduce an argument order-based data augmentation strategy to enhance CompAQA’s relative quality comparison ability. By introducing multiple reference arguments for pairwise comparisons, CompAQA improves the objectivity and precision of argument quality assessments. Another advantage of CompAQA is its ability to integrate both pairwise argument quality classification and argument quality ranking tasks into a unified framework, distinguishing it from existing methods. We conduct extensive experiments using various pre-trained encoder-only models. Our experiments involve two argument quality ranking datasets (IBM-ArgQ-5.3kArgs and IBM-Rank-30k) and one pairwise argument quality classification dataset (IBM-ArgQ-9.1kPairs). Overall, CompAQA significantly outperforms several strong baselines. Specifically, when using the RoBERTa model as a backbone, CompAQA outperforms the previous best method on the IBM-Rank-30k dataset, improving Pearson correlation by 0.0203 and Spearman correlation by 0.0148. On the IBM-ArgQ-5.3kArgs dataset, it shows improvements of 0.0069 in Pearson correlation and 0.0208 in Spearman correlation. Furthermore, CompAQA demonstrates a 4.71% increase in accuracy over the baseline method on the IBM-ArgQ-9.1kPairs dataset. We also show that CompAQA can be effectively applied to fine-tune larger decoder-only pre-trained models, such as Llama.
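
The comparison-based scoring idea can be sketched in a few lines. In the hypothetical snippet below, `pair_model` stands in for a fine-tuned sequence-pair classifier (e.g., a RoBERTa backbone) whose positive class means "the first argument is better"; the target's quality score is its mean win probability over the reference set.

```python
import torch

def comparison_score(pair_model, tokenizer, target, references, device="cpu"):
    """Score `target` by pairwise comparison against reference arguments."""
    probs = []
    for ref in references:
        # Encode the (target, reference) pair for the sequence-pair classifier.
        enc = tokenizer(target, ref, return_tensors="pt",
                        truncation=True).to(device)
        with torch.no_grad():
            logits = pair_model(**enc).logits
        # Probability that the target argument wins this comparison.
        probs.append(torch.softmax(logits, dim=-1)[0, 1].item())
    # Averaging over several references makes the judgment less arbitrary.
    return sum(probs) / len(probs)
```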

12 pages, 1398 KiB  
Article
A Bullet Screen Sentiment Analysis Method That Integrates the Sentiment Lexicon with RoBERTa-CNN
by Yupan Liu, Shuo Wang and Shengshi Yu
Electronics 2024, 13(20), 3984; https://doi.org/10.3390/electronics13203984 - 10 Oct 2024
Abstract
Bullet screen, a form of online video commentary in emerging social media, is widely used on video websites frequented by young people and has become a novel means of expressing emotions towards videos. Characteristics such as varying text lengths and the presence of numerous new words lead to ambiguous emotional information. To address these characteristics, this paper proposes a Robustly Optimized BERT Pretraining Approach (RoBERTa) + Convolutional Neural Network (CNN) sentiment classification algorithm integrated with a sentiment lexicon. RoBERTa encodes the input text to enhance semantic feature representation, and the CNN extracts local features using multiple convolutional kernels of different sizes; sentiment classification is then performed by a softmax classifier. Meanwhile, the sentiment lexicon is used to calculate a normalized emotion score for the input text. Finally, the classification results of the sentiment lexicon and RoBERTa+CNN are combined by weighting. The bullet screens are grouped by length, and different weights are assigned to the sentiment lexicon based on length to strengthen the model’s sentiment classification features. The method combines the sentiment lexicon, which can be customized with domain vocabulary, with the pre-trained model, which can handle polysemy. Experimental results demonstrate that the proposed method achieves improvements in precision, recall, and F1 score. The experiments take the Russia–Ukraine war as the research topic, and the experimental methods can be extended to other events. The experiments demonstrate the effectiveness of the model for sentiment analysis of bullet-screen texts, which can help in grasping the current state of public opinion on trending events and guiding it in a timely manner.
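
The length-dependent weighting can be illustrated with a toy function. The buckets and weights below are illustrative placeholders, not the paper's tuned values; both inputs are assumed normalized to [0, 1], with 1 meaning positive.

```python
def combine_scores(model_prob, lexicon_score, text_len):
    """Blend the lexicon score with the RoBERTa+CNN probability by length."""
    if text_len <= 5:      # very short bullet screens: lexicon cues dominate
        w_lex = 0.6
    elif text_len <= 15:   # medium length: balance the two signals
        w_lex = 0.4
    else:                  # longer text: trust the contextual model more
        w_lex = 0.2
    return w_lex * lexicon_score + (1 - w_lex) * model_prob

# Usage: a short, lexicon-positive bullet screen outweighs a lukewarm model.
label = "positive" if combine_scores(0.3, 0.8, 4) >= 0.5 else "negative"
```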

20 pages, 1391 KiB  
Article
A Hybrid Approach to Dimensional Aspect-Based Sentiment Analysis Using BERT and Large Language Models
by Yice Zhang, Hongling Xu, Delong Zhang and Ruifeng Xu
Electronics 2024, 13(18), 3724; https://doi.org/10.3390/electronics13183724 - 19 Sep 2024
Abstract
Dimensional aspect-based sentiment analysis (dimABSA) aims to recognize aspect-level quadruples from reviews, offering a fine-grained sentiment description of user opinions. A quadruple consists of aspect, category, opinion, and sentiment intensity, the last of which is represented using continuous real-valued scores in the valence–arousal dimensions. To address this task, we propose a hybrid approach that integrates the BERT model with a large language model (LLM). Firstly, we develop both BERT-based and LLM-based methods for dimABSA: the BERT-based method employs a pipeline approach, while the LLM-based method transforms the dimABSA task into a text generation task. Secondly, we evaluate their performance in entity extraction, relation classification, and intensity prediction to determine their respective advantages. Finally, we devise a hybrid approach to fully utilize these advantages across different scenarios. Experiments demonstrate that the hybrid approach outperforms the purely BERT-based and LLM-based methods, achieving state-of-the-art performance with an F1-score of 41.7% on quadruple extraction.
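
One plausible reading of the hybrid strategy is sketched below; `bert_pipeline` and `llm_predict_intensity` are hypothetical stand-ins for the paper's two components, and the 1-9 intensity scale in the prompt is an assumption.

```python
def hybrid_dimabsa(review, bert_pipeline, llm_predict_intensity):
    """Schematic hybrid: BERT extracts spans, an LLM scores intensity.

    `bert_pipeline(review)` is assumed to yield (aspect, category, opinion)
    triples; `llm_predict_intensity(prompt)` is assumed to return a
    (valence, arousal) pair parsed from the LLM's generation.
    """
    quadruples = []
    for aspect, category, opinion in bert_pipeline(review):
        prompt = (f"Review: {review}\n"
                  f"Aspect: {aspect}; Opinion: {opinion}\n"
                  "Give valence and arousal scores in [1, 9]:")
        valence, arousal = llm_predict_intensity(prompt)
        quadruples.append((aspect, category, opinion, (valence, arousal)))
    return quadruples
```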

11 pages, 632 KiB  
Article
A Multi-Hop Reasoning Knowledge Selection Module for Dialogue Generation
by Zhiqiang Ma, Jia Liu, Biqi Xu, Kai Lv and Siyuan Guo
Electronics 2024, 13(16), 3275; https://doi.org/10.3390/electronics13163275 - 19 Aug 2024
Abstract
Knowledge selection plays a crucial role in knowledge-driven dialogue generation, directly influencing the accuracy, relevance, and coherence of generated responses. Existing research often overlooks the disparities between dialogue statements and external knowledge, leading to inappropriate knowledge representation in dialogue generation. To overcome this limitation, this paper proposes an innovative Multi-hop Reasoning Knowledge Selection Module (KMRKSM). Initially, multi-relational graphs containing rich composite operations are encoded to capture graph-aware representations of concepts and relationships. Subsequently, the multi-hop reasoning module dynamically infers along multiple relational paths, aggregating triple evidence to generate knowledge subgraphs closely related to the dialogue history. Finally, these generated knowledge subgraphs are combined with dialogue history features and synthesized into comprehensive knowledge features by a decoder. Automated and manual evaluations validate the exceptional performance of KMRKSM in selecting appropriate knowledge. The module efficiently selects knowledge matching the dialogue context through multi-hop reasoning, significantly enhancing the appropriateness of knowledge representation and providing technical support for more natural and human-like dialogue systems.
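
The subgraph-selection step can be approximated by a bounded breadth-first expansion over knowledge triples. The sketch below omits the paper's graph-aware scoring and simply collects every triple reachable within `max_hops` of the dialogue's seed concepts; the tiny knowledge base at the end is invented.

```python
from collections import deque

def multi_hop_subgraph(triples, seed_concepts, max_hops=2):
    """Collect triples reachable from the seed concepts within max_hops."""
    # Index outgoing triples by head concept for fast expansion.
    index = {}
    for head, rel, tail in triples:
        index.setdefault(head, []).append((head, rel, tail))
    subgraph = []
    seen = set(seed_concepts)
    frontier = deque((c, 0) for c in seed_concepts)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for head, rel, tail in index.get(node, []):
            subgraph.append((head, rel, tail))
            if tail not in seen:
                seen.add(tail)
                frontier.append((tail, depth + 1))
    return subgraph

kb = [("coffee", "related_to", "caffeine"), ("caffeine", "causes", "alertness")]
print(multi_hop_subgraph(kb, ["coffee"]))  # both triples lie within 2 hops
```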

27 pages, 20206 KiB  
Article
Comparison of Sentiment Analysis Methods Used to Investigate the Quality of Teaching Aids Based on Virtual Simulators of Embedded Systems
by Andrzej Radecki and Tomasz Rybicki
Electronics 2024, 13(10), 1811; https://doi.org/10.3390/electronics13101811 - 7 May 2024
Abstract
This article presents virtual simulators of embedded systems and analyses of student surveys regarding their use at an early stage of learning embedded systems. The questionnaires were prepared in Polish, and the answers were automatically translated into English using two publicly available translators. The results of users’ experiences and feelings related to the use of virtual simulators are shown on the basis of sentiment detected using three chosen analysis methods: the Flair NLP library, the Pattern library, and the BERT NLP model. The results of the selected sentiment detection methods were compared and related to users’ reference answers, which provides information about the quality of the methods and their possible use in an automated review analysis process. The paper includes detailed sentiment analysis results, with a broader statistical treatment, for each question. Based on the students’ feedback and the sentiment analysis, a new version of the TMSLAB v.2 virtual simulator was created.
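
For readers who want to reproduce the comparison, the Flair method, one of the three compared above, can be run in a few lines; `en-sentiment` is Flair's stock English sentiment classifier, and the survey answer shown is invented. The Pattern and BERT methods follow the same pattern of feeding translated answers and collecting polarity labels.

```python
from flair.data import Sentence
from flair.models import TextClassifier

# Load Flair's pre-trained English sentiment model once.
classifier = TextClassifier.load("en-sentiment")

def flair_polarity(answer_en: str) -> str:
    sentence = Sentence(answer_en)
    classifier.predict(sentence)
    label = sentence.labels[0]  # e.g. POSITIVE with a confidence score
    return f"{label.value} ({label.score:.2f})"

print(flair_polarity("The simulator made the labs much easier to follow."))
```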

17 pages, 295 KiB  
Article
Single- and Cross-Lingual Speech Emotion Recognition Based on WavLM Domain Emotion Embedding
by Jichen Yang, Jiahao Liu, Kai Huang, Jiaqi Xia, Zhengyu Zhu and Han Zhang
Electronics 2024, 13(7), 1380; https://doi.org/10.3390/electronics13071380 - 5 Apr 2024
Abstract
Unlike previous approaches to speech emotion recognition (SER), which typically extract emotion embeddings from a trained classifier consisting of fully connected layers and training data without considering contextual information, this research introduces a novel approach that integrates contextual information into the feature extraction process. The proposed approach is based on the WavLM representation and incorporates a contextual transform, along with fully connected layers, training data, and corresponding label information, to extract single-lingual WavLM domain emotion embeddings (SL-WDEEs) and cross-lingual WavLM domain emotion embeddings (CL-WDEEs) for single-lingual and cross-lingual SER, respectively. To extract CL-WDEEs, multi-task learning is employed to remove language information, marking this as the first work to extract emotion embeddings for cross-lingual SER. Experimental results on the IEMOCAP database demonstrate that the proposed SL-WDEE outperforms commonly used features and known systems, while results on the ESD database indicate that the proposed CL-WDEE effectively recognizes cross-lingual emotions and outperforms many commonly used features.
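
Extracting an utterance-level WavLM representation, the starting point of the pipeline above, might look as follows with Hugging Face Transformers. The mean-pooling and the `microsoft/wavlm-base-plus` checkpoint are assumptions; the paper's contextual transform and multi-task language-removal head are omitted.

```python
import torch
from transformers import AutoFeatureExtractor, WavLMModel

name = "microsoft/wavlm-base-plus"
extractor = AutoFeatureExtractor.from_pretrained(name)
model = WavLMModel.from_pretrained(name).eval()

def utterance_embedding(waveform, sr=16000):
    """Mean-pool WavLM frame features into one utterance-level vector."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0)            # mean-pool over time

emb = utterance_embedding(torch.randn(16000).numpy())  # 1 s of fake audio
```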

19 pages, 868 KiB  
Article
Combining wav2vec 2.0 Fine-Tuning and ConLearnNet for Speech Emotion Recognition
by Chenjing Sun, Yi Zhou, Xin Huang, Jichen Yang and Xianhua Hou
Electronics 2024, 13(6), 1103; https://doi.org/10.3390/electronics13061103 - 17 Mar 2024
Cited by 1
Abstract
Speech emotion recognition poses challenges due to the varied expression of emotions through intonation and speech rate. To reduce the loss of emotional information during recognition and to improve the extraction and classification of speech emotions, we propose a novel two-fold approach. Firstly, a feed-forward network with skip connections (SCFFN) is introduced to fine-tune wav2vec 2.0 and extract emotion embeddings. Subsequently, ConLearnNet is employed for emotion classification. ConLearnNet comprises three steps: feature learning, contrastive learning, and classification. Feature learning transforms the input, while contrastive learning encourages similar representations for samples from the same category and discriminative representations for different categories. Experimental results on the IEMOCAP and EMO-DB datasets demonstrate the superiority of the proposed method over state-of-the-art systems: we achieve a weighted accuracy (WA) of 72.86% and an unweighted average recall (UAR) of 72.85% on IEMOCAP, and 97.20% and 96.41%, respectively, on EMO-DB.
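
The skip-connected feed-forward block (SCFFN) is only named in the abstract, so the sketch below is a guess at its shape under that name: each layer adds its output back to its input, letting fine-tuning deviate gently from the pre-trained wav2vec 2.0 features. Dimensions and depth are assumptions.

```python
import torch
import torch.nn as nn

class SCFFN(nn.Module):
    """Feed-forward network with skip connections (a guessed rendition)."""
    def __init__(self, dim=768, layers=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(layers))

    def forward(self, x):
        for block in self.blocks:
            x = x + block(x)  # skip connection around each block
        return x

feats = torch.randn(4, 768)  # pooled wav2vec 2.0 features for 4 utterances
emb = SCFFN()(feats)         # emotion embeddings, same shape
```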

12 pages, 990 KiB  
Article
Multi-Modal Sarcasm Detection with Sentiment Word Embedding
by Hao Fu, Hao Liu, Hongling Wang, Linyan Xu, Jiali Lin and Dazhi Jiang
Electronics 2024, 13(5), 855; https://doi.org/10.3390/electronics13050855 - 23 Feb 2024
Cited by 3
Abstract
Sarcasm poses a significant challenge for detection due to its unique linguistic character: the intended meaning is often the opposite of the literal expression. Current sarcasm detection technology primarily relies on multi-modal processing, but the connotative semantic information provided by each modality itself is limited, and mining the semantic information contained in the combination of sarcasm samples and external commonsense knowledge remains a challenge. Furthermore, as the essence of sarcasm detection lies in measuring emotional inconsistency, rich semantic information may introduce excessive noise into the inconsistency measurement. To mitigate these limitations, we propose a hierarchical framework. Specifically, to enrich the semantic information of each modality, our approach uses sentiment dictionaries to obtain sentiment vectors by evaluating the words extracted from the various modalities, and then combines these vectors with each modality. Furthermore, to mine the joint semantic information implied in the modalities and improve the measurement of emotional inconsistency, the emotional information representation obtained by fusing each modality’s data is concatenated with the sentiment vector. Cross-modal fusion is then performed through cross-attention, and, finally, sarcasm is recognized by fusing low-level information in the cross-modal fusion layer. Our model is evaluated on a public Twitter-based multi-modal sarcasm detection dataset, and the results demonstrate its superiority.
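
The lexicon-derived sentiment vector can be illustrated simply. The toy lexicon and the averaging scheme below are assumptions, standing in for the sentiment dictionaries the paper uses to score words extracted from each modality before combining them with the modality features.

```python
import torch

def sentiment_vector(words, lexicon, dim=1):
    """Average lexicon polarity of one modality's extracted words.

    `lexicon` maps word -> polarity in [-1, 1]; unknown words are skipped.
    """
    scores = [lexicon[w] for w in words if w in lexicon]
    value = sum(scores) / len(scores) if scores else 0.0
    return torch.full((dim,), value)

lexicon = {"great": 0.9, "rain": -0.3, "love": 0.8}
text_feat = torch.randn(256)
# Enrich the textual features with the lexicon-derived sentiment cue.
enriched = torch.cat([text_feat, sentiment_vector(["love", "rain"], lexicon)])
```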

18 pages, 718 KiB  
Article
Construction of an Event Knowledge Graph Based on a Dynamic Resource Scheduling Optimization Algorithm and Semantic Graph Convolutional Neural Networks
by Xing Liu, Long Zhang, Qiusheng Zheng, Fupeng Wei, Kezheng Wang, Zheng Zhang, Ziwei Chen, Liyue Niu and Jizong Liu
Electronics 2024, 13(1), 11; https://doi.org/10.3390/electronics13010011 - 19 Dec 2023
Cited by 3
Abstract
At present, road and traffic control construction on most university campuses cannot keep up with the growth of the universities: campus roads are not very wide, crossings lack lights, and there are no full-time traffic management personnel, while teachers and students form peak flows of people when going to and from classes. This has led to a constant stream of traffic accidents. To safeguard the lives of faculty and students, it is critical to analyze this issue comprehensively by utilizing the voluminous data on campus traffic incidents. However, few studies have examined knowledge graph construction methods for traffic safety incidents at universities; in event knowledge graph construction, the release and recycling of computational resources are inefficient, and existing joint entity–relation extraction methods cannot handle triple overlap and entity boundary ambiguity in relation extraction. In response to these problems, this paper proposes a knowledge graph construction method for on-campus traffic safety events based on an improved dynamic resource scheduling algorithm and a multi-layer semantic graph convolutional neural network. Experimental results show that the proposed dynamic computational resource scheduling method increases GPU and CPU utilization by 25% and 9%, respectively, and that the proposed data extraction model’s F1 scores for event triples increase by 1.3% on the public NYT dataset and by 0.4% on WebNLG. This method can help the relevant university personnel deal with unexpected traffic incidents and reduce their impact on public opinion.
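
The graph-construction step alone (not the extraction model) is easy to sketch. The snippet below inserts extracted (head, relation, tail) triples into a networkx multigraph so that overlapping triples, one entity pair with several relations, are kept as parallel edges; the example triples are invented.

```python
import networkx as nx

def build_event_graph(triples):
    """Insert triples into a directed multigraph, dropping exact duplicates."""
    graph = nx.MultiDiGraph()  # parallel edges keep overlapping relations
    seen = set()
    for head, rel, tail in triples:
        if (head, rel, tail) in seen:
            continue
        seen.add((head, rel, tail))
        graph.add_edge(head, tail, relation=rel)
    return graph

g = build_event_graph([
    ("peak pedestrian flow", "occurs_at", "class change"),
    ("collision", "caused_by", "peak pedestrian flow"),
    ("collision", "located_at", "unsignalized crossing"),
])
print(g.number_of_edges())  # 3
```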

20 pages, 927 KiB  
Article
A Dynamic Emotional Propagation Model over Time for Competitive Environments
by Zhihao Chen, Bingbing Xu, Tiecheng Cai, Zhou Yang and Xiangwen Liao
Electronics 2023, 12(24), 4937; https://doi.org/10.3390/electronics12244937 - 8 Dec 2023
Abstract
Emotional propagation research aims to discover the laws of opinion evolution in social networks. Short-term observation of the emotional propagation process within a predetermined time window ignores situations in which users with different emotions compete over a long diffusion time. To that end, we propose a dynamic emotional propagation model based on an independent cascade. The proposed model is inspired by the interpretable factors of the reinforced Poisson process, capturing the “rich-get-richer” phenomenon within a social network. Specifically, we introduce a time-decay mechanism to describe the change in influence over time, and an emotion-exciting mechanism that allows prior users to affect the emotions of subsequent users. Finally, we conduct experiments on an artificial network and two real-world datasets, Wiki (7194 nodes) and Bitcoin-OTC (5881 nodes), to verify the effectiveness of the proposed model. The method improved the F1-score by 3.5% and decreased the MAPE by 0.059 on the Wiki dataset, and improved the F1-score by 0.4% and decreased the MAPE by 0.013 on the Bitcoin-OTC dataset. In addition, the experimental results indicate that emotions in social networks tend to converge under the influence of opinion leaders after a long enough time.
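
A toy rendition of the propagation dynamics may help: the sketch below runs an independent cascade whose activation probability decays with time, and a newly activated user inherits the emotion of whoever activated them, so competing seed emotions race through the network. All probabilities and the decay form are illustrative, not the paper's fitted model.

```python
import random

def propagate(graph, seeds, steps=20, decay=0.1):
    """Independent cascade with time decay and emotion inheritance.

    `graph[u]` maps a user to [(neighbor, base_probability), ...];
    `seeds` is [(user, emotion), ...] with competing emotions.
    """
    emotion = dict(seeds)  # user -> "pos" / "neg"
    frontier = list(seeds)
    for t in range(1, steps + 1):
        next_frontier = []
        for user, emo in frontier:
            for nbr, p in graph.get(user, []):
                if nbr in emotion:
                    continue  # already persuaded by an earlier user
                if random.random() < p / (1 + decay * t):  # time decay
                    emotion[nbr] = emo  # inherit the activator's emotion
                    next_frontier.append((nbr, emo))
        frontier = next_frontier
    return emotion

net = {"a": [("b", 0.8)], "b": [("c", 0.8)], "x": [("b", 0.8)]}
print(propagate(net, [("a", "pos"), ("x", "neg")]))  # "pos" and "neg" compete for b
```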

23 pages, 6708 KiB  
Article
Deep Learning Short Text Sentiment Analysis Based on Improved Particle Swarm Optimization
by Yaowei Yue, Yun Peng and Duancheng Wang
Electronics 2023, 12(19), 4119; https://doi.org/10.3390/electronics12194119 - 2 Oct 2023
Cited by 4
Abstract
Manually tuning the hyperparameters of a deep learning model is not only time-consuming and labor-intensive, but can also easily lead to issues such as overfitting or underfitting, hindering the model’s full convergence. To address this challenge, we present a BiLSTM-TCSA model (BiLSTM combined with TextCNN and Self-Attention) for deep learning-based sentiment analysis of short texts, tuned with an improved particle swarm optimization (IPSO) algorithm. This approach mimics the global random search behavior observed in bird foraging, allowing adaptive optimization of model hyperparameters. In this methodology, a Generative Adversarial Network (GAN) mechanism first generates a substantial corpus of perturbed text, augmenting the model’s resilience to disturbances. Global semantic insights are then extracted by Bidirectional Long Short-Term Memory (BiLSTM) networks, while Convolutional Neural Networks for Text (TextCNN) with diverse convolution kernel sizes extract localized features, which are concatenated to construct multi-scale feature vectors. Finally, feature vector refinement and classification are accomplished through the integration of Self-Attention and Softmax layers. Empirical results underscore the effectiveness of the proposed approach in sentiment analysis tasks involving succinct texts containing limited information. Across four distinct datasets, our method attains accuracy rates of 91.38%, 91.74%, 85.49%, and 94.59%, respectively, a notable advancement over conventional deep learning models and baseline approaches.
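
Since the abstract's key idea is PSO-driven hyperparameter search, a bare-bones particle swarm optimizer is sketched below. The paper's improved PSO adds adaptive behavior not reproduced here; `evaluate` stands in for training the BiLSTM-TCSA once and returning validation accuracy, and all swarm constants are conventional defaults, not the paper's settings.

```python
import random

def pso(evaluate, bounds, particles=8, iters=10, w=0.7, c1=1.5, c2=1.5):
    """Maximize `evaluate` over box-bounded hyperparameters with plain PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [evaluate(p) for p in pos]
    g = max(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus pulls toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = evaluate(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for "train the model, return accuracy":
best, acc = pso(lambda p: -(p[0] - 0.3) ** 2 - (p[1] - 0.5) ** 2,
                [(0.0, 1.0), (0.0, 1.0)])
```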
