Article

Multiplayer Online Battle Arena (MOBA) Games: Improving Negative Atmosphere with Social Robots and AI Teammates

1 School of Design, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, China
2 School of Design, Southern University of Science and Technology, Shenzhen 518055, China
3 School of the Arts, Universiti Sains Malaysia, George Town 11800, Malaysia
4 Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, China
5 Department of Communication Science, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands
* Author to whom correspondence should be addressed.
Systems 2023, 11(8), 425; https://doi.org/10.3390/systems11080425
Submission received: 24 May 2023 / Revised: 2 August 2023 / Accepted: 10 August 2023 / Published: 14 August 2023
(This article belongs to the Special Issue Digital Health for Better Health and Life)

Abstract
Electronic sports show significant user churn caused by a toxic gaming atmosphere, and current GUI-based interventions are insufficient to address the issue. Based on the theoretical framework of Perceiving and Experiencing Fictional Characters, we propose a new hybrid interaction interface and paradigm combined with tangibles to counter negative mood. To support the frustrated users of Multiplayer Online Battle Arena (MOBA) games, we added AI teammates for better personal performance and social robots for the disclosure of negative mood. We hypothesized that AI teammates’ invisibility and anonymity would mitigate negative emotions, an effect amplified by the presence of social robots. A comparative experiment was conducted with 111 participants. Social robots for emotion-oriented coping improved user mood, but AI teammates for problem-oriented coping did so better, although their higher levels of experienced anonymity may not have been preferred. Unexpectedly, conversing with a robot after playing with an AI teammate brought the mood back to the level experienced when talking to a robot alone, while increasing distancing tendencies. With this in mind, AI and social robots can counter the negative atmosphere in MOBA games, positively contributing to game design and empathic human–computer interaction.

1. Introduction

Electronic sports (eSports) are video game-based competitive activities, running in the virtual environment of the ‘real world’ [1]. They have been gaining widespread social acceptance in recent years. From a traditionally prejudiced and skeptical standpoint, eSports’ hedonic effect is generally negatively associated with public health, owing to addiction, violence, toxicity, and sexual harassment [2,3,4,5]. The ingrained and intransmutable generational disparities in the public’s perception and acceptance of eSports have also further deepened concerns about engagement [6,7]. Fortunately, however, a shift towards a more constructive attitude has occurred in recent years, mainly driven by multiple advantageous conditions, such as the promotion of cultural inclusiveness, commercial legitimization, and health-technical interventions [8,9,10]. Additionally, a positive mindset, spirit, and a sense of cohesion within the community may foster social connectedness [11,12]. For example, the World Health Organization encouraged the global gaming industry to launch the ‘#PlayApartTogether’ campaign to help prevent the spread of COVID-19 by providing a risk-free alternative space for competition [13]. Further, a growing volume of research supports the positive consequences of eSports, including psychological well-being, social connectedness, stress reduction, cognitive skills, and skill transferability [14,15,16,17,18].
Recently, the International Olympic Committee (IOC) placed eSports on the official agenda of Olympic events [19]. Seven recognized events (e.g., League of Legends, Dota, Three Kingdoms 2, and Street Fighter 5) were also approved as official medal sports at the Asian Games Hangzhou 2022 [20]. There are an estimated 223 million enthusiasts worldwide, and the largest prize pool, for Dota 2, exceeds USD 40 million [21,22]. The global eSports market is expected to generate USD 1.61 billion in 2024, representing a compound annual growth rate (CAGR) of 11.6% between 2021 and 2024 [23]. Although eSports contribute to digital culture and benefit the economy, systematic and sustainable development remains a challenge. Among the issues is the habitual negative gaming atmosphere, which causes user retention problems and gamer churn and is most pronounced in Multiplayer Online Battle Arena (MOBA) games [24,25,26].
MOBA games are recognized as a prominent social interaction-based eSports genre, and the gaming atmosphere is closely linked to players’ social experiences. Within gaming, players collaboratively engage in competitive spaces structured with standard rules, immersive scenarios, and multifaceted performances by controlling individual digital avatars to pursue victory [27,28,29,30,31]. Socially, MOBA games are troubled by inefficient, prolonged team-based competition and misjudged teammate skill levels, which lead to antisocial and toxic behavior. The form of communication is limited yet overly expressive, generating a negative atmosphere that is strongly associated with lower levels of players’ well-being [27,32]. For example, 80% of US players believe co-players make prejudiced comments, while 74% report having experienced harassment [25]. Negative behavior among players is considered an integral and acceptable aspect of the competitive game experience [24]. Toxicity is fueled by the inherent competitiveness (i.e., killing each other) of MOBA games but is only weakly linked to success [33]. In the long run, negative player behavior substantially threatens the game’s balanced atmosphere [32]. Players who encounter negative behavior are up to a stunning 320% more likely to stop playing [34]. Therefore, it is crucial to supply channels through which to release negative emotions and to address MOBA’s negative gaming atmosphere during social interactions [35].
A positive gaming experience has profound effects on user retention, willingness to engage with content, and consumer loyalty, while avoiding unsustainable user behaviors [32,36,37]. A solution may be the application of artificial intelligence (AI) teammates that are designed to collaborate with different human teammates at different skill levels so as to enhance their performance [38,39,40,41]. However, the potential and effectiveness of helpful AI teammates to intervene within a negative atmosphere and to provide emotional support has not been investigated. If players can be successful despite being negative, they need a different incentive to stop insulting other players and to behave more pleasantly [33]. To counter a negative atmosphere, an AI for gamers’ competence improvement should be combined with a medium for emotional disclosure if things go wrong.
Self-disclosure after a bad experience is best served by social robots, more so than by other media such as writing or a WhatsApp (version 2.11.109) or WeChat (version 8.0.16) group [42,43]. Social robots can be perceived as the physical representation of AI teammates. They can serve as an intuitive supplement to address the limitations of AI teammates in effectively fostering healing and enhancing positive emotions. Although the use of social robots has advanced quickly in the fields of health care [44] and education [45], their potential to mitigate the negative atmosphere in games (e.g., MOBA) has been overlooked.
Our research questions, then, are: do AI teammates and social robots improve the gaming atmosphere in eSports? If so, how do they bring about that improvement?
In this research, we first interpreted the genesis of the negative gaming atmosphere from the perspective of the Stimulus–Organism–Response (S-O-R) theory and defined the necessity of the presence of AI teammates and social robots. Then, we employed Interactively Perceiving and Experiencing Fictional Characters (I-PEFiC) as the foundational model of human–robot interaction (HRI) [46,47]. By combining these theories with gaming scenarios, we further distilled and developed novel, verifiable components that capture the interventional characteristics of AI teammates and social robots. Our aim was to offer a theoretical explanation of current game experiences and an intervention scheme for the eSports industries to avoid a negative gaming atmosphere and to provide emotional support in competitive gaming environments.

2. Literature Review

2.1. The Gaming Atmosphere in MOBA Games

In this section, we interpret the negative atmosphere in MOBA games through the lens of Mehrabian and Russell’s Stimulus–Organism–Response (S-O-R) theory [48]. From the environmental psychological perspective that S-O-R advances, a negative gaming atmosphere can be divided into three stages: (1) Stimulus, (2) Organism, and (3) Response (Figure 1).

2.1.1. Stage of Stimulus: The Generation of Negative Gaming Atmosphere

MOBA games closely simulate or even enhance the psychological and physiological characteristics of traditional sports [49], especially in terms of acquiring heightened flow and self-presence [50]. MOBA games are characterized by short-term, fast-paced, and decision-intensive gameplay within limited virtual space and time, without the long-term character development seen in role-playing games, which are driven by narratives and statistics [26]. Players of MOBA games may develop cognitive skills (e.g., endurance, attention, dedication) and higher-order capabilities (e.g., expertise, strategic metagaming, and emotional discipline) [51,52], but at the cost of physical fitness and mobility [53,54].
Unfortunately, certain players enjoy aggressive or hostile behaviors towards others, indulge in excessive escapism, or obsess over self-presentation [55,56,57,58]. Consequently, players show an exaggerated picture of themselves that involves over-commitment to victory and the desire to perform well, directing negative stimuli at whoever is not as dedicated. Typically, these negative stimuli relate to poor competencies and unsatisfactory matchmaking, unacceptable conversations, and radical attitudes, which lead to conflicts, complaints, arguments, insults, and sarcasm among players and non-players [59].

2.1.2. Stage of Organism: The Processing of Negative Gaming Atmosphere

Upon exposure to negative stimuli, players remain in a negative state that awaits resolution. This may even go so far that physical damage is incurred. A negative gaming atmosphere may facilitate the incidence of obesity and cardiovascular disease, musculoskeletal disorders, and sleep disorders, including insomnia [60,61,62]. Mentally, it can result in feelings of fatigue, monotony, and burnout [63,64,65]. Over time, the long-term psychological well-being of individuals may be significantly compromised, potentially leading to or exacerbating various disorders such as anxiety, depression, attention deficit hyperactivity disorder, and obsessive-compulsive disorder, especially in those susceptible to such conditions [66,67,68,69]. Furthermore, excessive gaming can contribute to social isolation and the erosion of social skills [70,71].
The individual’s capacity to handle negative stimuli varies; personalities characterized by openness and inclusiveness are inclined to employ strategies related to cognitive restructuring [72], for instance, physical social interaction, meditation, and sports psychology [73]. However, the effectiveness and implementation of these strategies are subjective and differ from person to person [73]. Conversely, individuals with low resilience to negative stimuli lack effective coping mechanisms and may resort to irrational regulation methods. For example, one prominent manifestation is excessive engagement in gaming as a means of escape, which can lead to prolonged and compulsive behavior. Therefore, players may continue irrational or unsatisfactory playing despite the constant pressure caused by the stimulus.

2.1.3. Stage of Response: The Diffusion of Negative Gaming Atmosphere

The attitude of irrational gaming itself becomes a new stimulus, generating a negative atmosphere in the game environment. Some players may turn their negative behaviors into aggressive tactics, exploiting loopholes, or engaging in unsportsmanlike conduct, which disrupts the gameplay experience and amplifies the negative atmosphere [24,32,74,75]. Players also may transfer their negative behaviors to other media or even to real life [76,77]. Others will escape negative experiences by abandoning the game [32].
The constant exposure to negativity drains enjoyment and motivation, leading to a loss of interest. These negative associations can limit exposure to new and potentially rewarding experiences. To publishers, such effects corrode the productive value of the entertainment content and the sustainability of operation.

2.2. Potential Interventions: AI Teammates and Social Robots

To bypass the self-enhancing negativity in MOBA games, we opted for an evidence-based design of intervention techniques [73,76]. Current interventions mainly focus on the processing and diffusion stages of the S-O-R framework, featuring toxicity detection in chat logs, fair matchmaking, conversational restrictions, reporting, and penalty moderation [33,78,79,80]. Additionally, most MOBA games allow players to exert autonomy over the in-game pursuits (to satisfy the autonomy need), exercise their skill and knowledge while facing challenges (to meet the competence need), and cultivate social connections (either with peers or in-game characters for fulfilling relatedness needs) [58,81].
These functionalities indeed reduce a negative gaming atmosphere and, for autonomy, sufficient intervention schemes provide players with character control, satisfying their sense of embodied interaction, such as the utilization of Virtual Reality [82,83]. However, the other two roots of negative behavior have not been addressed: competence (or the lack thereof) and building rapport or relationships (and the lack thereof). Therefore, we want to promote AI teammates for enhancing competence, and social robots to address any negative mood that remains after playing.

2.2.1. Enhanced Competence through AI Teammates

Users are sensitive to AI teammates, which are AI-controlled characters that are part of a player’s team and can improve their gameplay. The unpredictability of player performance can spur competition apprehension, potentially affecting self-efficacy [84]. Experienced teammates and an effective team foster stronger friendships and a better allocation of roles among team members, ultimately increasing the chances of winning the game [85]. When trained on the successful strategies of high-ranking players, AI teammates stabilize gameplay [38,39,40,41]. They offer a sense of predictability and control that allows for optimal coordination with human players, while counterbalancing human strategic mistakes.
The following uses of AI teammates are typical. Because players sometimes intentionally go AFK (Away From Keyboard), the MOBA game “King of Glory” developed a mechanism that optimizes matches by having AI teammates take over and continue playing in competitions [86]. To improve the quality of team communication, the Meta-Command Communication Framework enables AI teammates to communicate with humans and other agents, effectively collaborating with human players across different levels [39]. To capture professional knowledge of the game, the AI teammate model ‘JueWu-SL’ learned higher-order human maneuvers and achieved a surprising win rate over consecutive games, even beating professional teams while cooperating with human players [40].
In view of the effectiveness of AI teammates, in our experimental study, we wished to compare a standard human teammate version of a MOBA game (i.e., MOBA 1.0) with a helpful AI teammate version (i.e., MOBA 2.0) in terms of the improvement of players’ competence.

2.2.2. Relatedness through Self-Disclosure to Social Robots

Teams that build up friendships among the members are more effective [86]. This means that players should reduce the emotional distance between their human and artificial teammates or, even better, increase their involvement with them. Self-disclosure is a fundamental component of human communication, sharing personal information with others and bridging communication gaps [87]. Self-disclosure often involves revealing personal information, life experiences, emotions, opinions, and thoughts. Engaging in self-disclosure with social robots can improve an individual’s cognitive understanding of personal attributes, increase self-efficacy, strengthen social connections, reduce biases, and promote psychological recovery and relief [46,88].
Even though the level of self-disclosure can vary depending on factors such as perceived trustworthiness, the context of the interaction, and the type of information being disclosed, individuals are generally willing to disclose personal information to social robots, particularly when the robot is designed to provide social support or companionship [89]. Social robots have been shown to invite self-disclosure of negative mood better than other media [42,43] and fit the environment of digital gaming, AI characters, and Virtual Reality. Therefore, we believe that social robots employed after gaming can facilitate self-disclosure without affecting other players’ real physical space. Offline robots can provide a channel for players to disclose and express themselves without introducing new negative stimuli to the game. In view of the potential of social robots to improve mood and build up a rapport, we will compare the application of a social robot after playing a standard human teammate version of a MOBA game (i.e., MOBA 1.0) to the application of a robot after playing a helpful AI teammate version (i.e., MOBA 2.0) in terms of players’ self-disclosure.

2.3. The Theoretical Framework of Potential Interventions

2.3.1. Interactively Perceiving and Experiencing Fictional Characters (I-PEFiC)

The I-PEFiC model [46] aligns with various interactive process theories, such as the Technology Acceptance and Affordance theory [90,91]. I-PEFiC is a model of human–character interaction that has three phases: encoding, comparison, and response. The I-PEFiC model integrates two main processes that are evoked during an encounter with a game character: the engagement process and the interaction process (Figure 2).
Within this framework, the gamer may feel involved with a game character because it is beautifully designed but concurrently feel at a distance because the character has low skills or is an evil opponent. This so-called involvement–distance trade-off is the result of evaluating the features of a character on several dimensions, as shown in Figure 2, which together form an experience of engagement (which may be ambiguous). Alongside that level of (ambiguous) engagement, use intentions prompt the player to undertake action in favor of or against another character (whether human or AI-driven). With I-PEFiC to account for the user experience of game characters and robots, we included several of its dimensions in our survey, namely affordances (e.g., skilled, unskilled), valence (e.g., positive, negative mood), involvement (e.g., feeling friendship), and distance (i.e., having cold feelings towards the other, whether human or artificial).

2.3.2. The Anonymity (AN) and Invisibility (IN) of AI Teammates

One of the features of AI teammates is their ability to maintain Anonymity, as they are not controlled by real players and do not have real identities. Anonymity in communication refers to the ability of a person to communicate without revealing their identity or personal information, which can be achieved through various means, such as using a pseudonym or anonymous messaging services [92,93]. The physicalization of online networked games has made distributed, anonymous, and multimodal participation experiences the norm. The need for personal comfort that is inherent to Anonymity, while influenced by cultural and personal factors, is not diminished in high-level role-playing [94,95].
Anonymity is a form of non-identifiability; players prefer a form of it to express controversial content or behaviors [96]. AI teammates can assume the roles of real players who would have otherwise been involved in the interaction. Playing with Anonymity enables players to maximize their self-presentation. They can openly express their thoughts about game-related details (e.g., toxic conversations) without the AI teammates taking offense or negatively affecting other players. Furthermore, players openly disclosing their behavior towards AI teammates can alleviate the unpleasant experiences they encounter during gameplay. This can further engage players in a positive state of mind and enable them to showcase their competencies.
Another feature of AI teammates is invisibility. Collaborating with them can invisibly and smartly support teammates’ willingness and ability to game. MOBA games have already incorporated fragmented AI functions in order to improve players’ performances during gameplay, generally supporting them with the assistance of equipment recommendations, troop combinations, and automatic manipulation [97].
According to Media Equation Theory, when computers (i.e., AI teammates) are equipped with human-like functionalities and appearances, humans tend to respond to them as if they were another person [98]. Similarly, according to the Computers Are Social Actors paradigm, players apply social rules and expectations to computers, even though they know these machines lack emotions, intentions, or human motivations [99]. When computers provide competition benefits, it triggers players’ unconscious reactions, making them feel obligated to collaborate with AI teammates.
Collaborating with AI teammates allows players to showcase their individual abilities more effectively. The collaboration of humans and AI mainly focuses on creating specific characteristics of role behavior within the team, such as emotions, willpower, critical thinking, and decision making [100]. In human–AI collaboration, the cluster behavior of team roles is shaped by the human’s personality, psychological abilities, current values and motivations, domain limitations, experiences, and role learning [101]. Therefore, human players’ significance in the game must be emphasized. However, the stability and predictability of AI teammates enable players to focus on developing their own skills and tactics without overly relying on unpredictable human teammates [40]. This freedom allows players to express themselves more effectively and consistently, leading to a greater sense of achievement and personal growth.

2.3.3. The Affordance (AF) and Valence (Val) of Social Robots

Affordance is a term that refers to an individual’s perception of the action possibilities of an object or environment [91]. The affordances of social robots affect how people perceive them as aids or obstacles to achieving user goals, affecting people’s behaviors and experiences. Emotional affordances encompass various mechanisms through which emotional content is utilized to convey or gather emotional significance in any given context. These mechanisms can involve bodily expressions, social norms, objects laden with values, or the extension of space, among other factors [102]. Affective affordances can be used to improve the emotional content of human–robot interactions, leading to a more positive user experience.
Robotic social technologies include physical contact, facial expressions, co-articulatory gestures, multimodal speech-to-gesture, eye gaze, and simulated personality traits. In our case, we used a social robot to invite self-disclosure about negative mood after gameplay.
The valence of an event refers to its implied outcome, the intrinsic attractiveness or repulsiveness [103]. Positive valence encourages approach tendencies, whereas negative valence strengthens avoidance. Positively valenced emotions frequently and typically motivate behavioral change. Negative emotions might, however, also produce moving-against inclinations that include fighting or attacking obstacles [47].
In the context of MOBA gaming, the pursuit of success is associated with anxiety, and intensive participation amplifies negative emotions [104,105]. In general, interacting with social robots can have positive effects, such as enhancing health knowledge, reducing physical pain, and improving mental health symptoms, and it can also improve geriatric conditions [106,107,108]. Further, reports from dialogue trials with social robots indicate high participant satisfaction, suggesting that atypical conversational agents are enjoyable and proactive [109]. Interacting with social robots can also alleviate concerns about others’ behavior, enabling individuals to reflect on their own actions [110]. By inviting self-disclosure about negative game experiences through a social robot, we hope to improve the positively valenced tendencies and counter aggression and disappointment.

2.3.4. The Engagement (Involvement (IO) and Distance (DT)) of AI Teammates and Social Robots

A player’s involvement and emotional distance are two distinct experiences that do not form two ends of a bidirectional dimension; both can be experienced concurrently. The trade-off between involvement and distance better explains the appreciation of a game character or a social robot than either involvement or distance alone [111]. We will treat involvement and distance as separate factors, indicating two dimensions of engaging with a robot.
When interacting with AI teammates and social robots, there is a possibility of experiencing opposing tendencies of involvement and distance. These tendencies are not mutually exclusive but operate in parallel, shaping the overall evaluation. Involvement and distance can be conceptualized as separate slopes with different gradients, representing the dynamics of conflict between approaching a desirable goal and avoiding potential harm [46,47].
Initially, the tendency to approach is stronger than avoidance—known as a “positivity offset”—but, over time, the avoidance tendency grows faster and is termed a “negativity bias”. The development of the involvement–distance trade-off is regarded as a continuous process in which involvement initially outweighs distance but reaches an equilibrium point where doubt, apathy, or ambivalence may arise. The engagement with social robots and AI teammates probably follows a similar pattern [46]. Involvement (IO) represents the level of active engagement, whereas Distance (DT) reflects the degree of emotional and psychological detachment. The trade-off between IO and DT better explains individuals’ like or dislike of a game character or a social robot [111]. Balancing significant involvement with an appropriate level of distance is crucial for maximizing appreciation and achieving an optimal engagement experience.

2.3.5. An Interactive Paradigm of Intervention

Anonymity can increase engagement and improve team relations, especially when combined with ‘therapeutic’ dialogue using a robot. Anonymous communication offers a protected mode of messaging, allowing players to seek excitement and social acceptance while avoiding criticism of their identity. Helpful teammates take practical action to increase the odds of winning, and their invisible active participation is key. Self-disclosure, the process of revealing feelings, thoughts, beliefs, and opinions, is critical to achieving psychological recovery, and an empathetic and efficient disclosure process can prompt quick engagement [112]. We propose that a game with a lively and fun robot that interacts with players and encourages self-disclosure will improve the practical effect of anonymity and intangible teammate help (Figure 3).

3. Research Hypotheses

Following the hypothesized interactive paradigm (Figure 3), we proposed a number of research hypotheses with which to investigate and improve the negative gaming atmosphere (Figure 4). In general, we expect that the Anonymity (AN) and Invisibility (IN) of AI Teammates will lead to greater performance and competence of players [113]. Further, if social robots are used to provide the Affordance (AF) of verbal self-disclosure, players’ emotional Valence (before–after intervention: Vb-Va) will be further improved [113]. The psychological Distance (DT) among players will be decreased and Involvement (IO) will be increased. More specifically, our hypotheses are as follows (see also Figure 4):
H1: 
The Affordance (AF) of social robots is significantly and positively correlated with players’ Involvement (IO).
H2: 
The AF of social robots is negatively correlated with players’ psychological Distance (DT).
H3: 
The AF of social robots is positively correlated with players’ emotional Valence (Val).
H4: 
Players’ Val is negatively correlated with DT.
H5: 
Players’ Val is positively correlated with IO.
H6: 
Players’ DT is negatively correlated with IO.
H7: 
Invisibility (IN) of helpful teammates is positively correlated with Anonymity (AN).
H8: 
IN of AI teammates is negatively correlated with IO.
H9: 
IN of AI teammates is positively correlated with DT.
H10: 
AN of AI teammates is negatively correlated with IO.
H11: 
AN of AI teammates is negatively correlated with DT.
Note: with respect to H3–H5, we used a Valence difference score (Val = MVb − MVa).

4. Experimental Design and Execution

4.1. Participants and Design

Approval for our research was obtained from the institutional Ethical Review Board (protocol number: HSEARS20200204003). A total of 122 subjects were invited to participate in the experiment. Participants were aged 18–29; 46.8% were female, 53.2% were male, and 85.6% had a bachelor’s degree or higher. Participants were fully informed about the purpose of the experiment, and all data were anonymized. Participants engaged in both a human and an AI teammate mode of a MOBA game, and the experiment took approximately 40–70 min. For the human teammate mode, players were required to choose “King of Glory” or “League of Legends” and to enter regular 5v5 matches or ranked matches [114,115]. Participants were asked to form a team with four teammates and defeat the opposing faction.
The experiment was conducted in non-clinical and non-laboratory settings, mainly within the personal learning and living spaces of the participants through remote means. The subjects were healthy individuals with full and independent legal capacity recruited through public announcements. Students are the primary target group of MOBA games; therefore, we recruited participants through posters and social media at universities and schools, in particular, The Hong Kong Polytechnic University and the Southern University of Science and Technology.
We required a minimum of one month of MOBA gaming experience so that players would have developed a preliminary understanding of a MOBA game’s rules and operations. During this period, players are likely to have acquired basic skills and strategies, although there is still room for improvement in terms of competitive proficiency.
To ensure that participants would actually play the game, they were requested to upload a screenshot of the game outcome/evaluation screen or provide an oral report. After completing the questionnaire, participants received a reward of CNY 18.8 per person or CNY 23.8 per person in a three-person group (who did not participate in the experiment together). Participants could withdraw from the experiment at any time; their data would be deleted, and they would not receive any compensation.

4.2. Experimental Settings

4.2.1. Settings in Human and AI Teammate Mode

For the AI teammate mode, players could choose the 5v5 human–machine match from the King of Glory S27 version as the AI teammate test vehicle (Lobby → Matchmaking Mode → Human–Machine Match → 5v5 King’s Canyon). Players using League of Legends v 4.2.6.7 could access it via Lobby → Play → Human–Machine Mode. To ensure fair play, users in this mode were assigned four AI teammates to play against five enemy AIs. For most players, the friendly AI was more robust and the enemy AI was weaker, making the game less challenging and easier to win [113].
To ensure a balanced and fair gaming experience in MOBA games, it is common practice to implement standardized character position selection in daily and ranked matches, regardless of whether the decision is made by human or AI teammates. The main goal is to promote strategic diversity and team balance by including a diverse selection of characters and placing them strategically at the beginning of the match [116]. Players decide on a particular positional role that they are good at and choose a character who has skills that match the positional roles. Therefore, the lineups were the same on the friendly and opposing sides. There was the development lane (gunner and support), the opposing lane (fighter or tank), the middle lane (mage), and the wilderness lane (assassin).

4.2.2. Settings of the Social Robot

Due to COVID-19-related social restrictions, the interaction between social robots and players in this study was mainly presented in the form of video clips in the questionnaire. For this purpose, we purchased a social robot, an early-learning humanoid robot, CAER, compatible with the game environment and produced by Yingjia Toy Industry Co., Ltd. (Swatow city, China), which we nicknamed KING-bot [117]. KING-bot was a social robot with the player characteristics of being loyal, optimistic, and futuristic, and it could evoke empathy with rich and smooth changes in facial expression [118]. Additionally, we chose a stable video-shooting angle with sufficient light, facing potential viewers from the first-person perspective. We recorded and edited the video to guide players to disclose their negative emotions. The script (translated from Chinese) of the video ran as follows:
Hi, I’m your good gaming partner KING-bot. So happy to watch you finish this wonderful game! I think everyone did great, and no one did particularly badly, and I think that’s because you played such an important role in the battle! Perhaps there are some minor aggravations in the game, but you should ignore them because you are so resourceful and brave, and I think you can make a big breakthrough in the next game!
For the visual graphic that matches the script, we controlled KING-bot’s self-contained components to record 27 s of vertically composed action videos, such as moving forward, reaching, and turning the head (see Supplementary Material ‘Video of Human–Robot Interaction’).

4.3. Measurements

4.3.1. Questionnaire Design

We designed a questionnaire in two parts—1 and 2 (Supplementary Material ‘Questionnaire Design’)—which users filled out after playing the human teammate and AI teammate modes, respectively. Both parts consisted of seven blocks of measurement scales: Vb, Va, AF, IO, DT, IN, and AN. Measurement scales were composed of indicative and counter-indicative Likert-type items rated on a 6-point scale (1 = strongly disagree, 6 = strongly agree) [119]. Each measurement scale consisted of four statements indicating a particular construct and four statements indicating the opposite construct. Except for the introduction at the beginning and the demographics at the end of the questionnaire, blocks of items were presented in a different order for each participant, and items within blocks were randomized. See Supplementary Materials (1) and (2) for a detailed description of the variables and notation in this study.

4.3.2. Valence (Val)

The questions utilized for Valence before human–robot interaction (Vb) and Valence after human–robot interaction (Va) were derived from the relevant studies conducted by Duan et al. and Luo et al. [42,43], respectively, and were administered both prior to and following the interaction between participants and KING-bot. These items employed positive and negative indicators to evaluate alterations in users’ emotional states. For instance, the four indicative items associated with ‘positive Valence before treatment’ represented a unipolar conceptualization, exemplified by phrases such as “I feel good”. In contrast, the four counter-indicative items formed a unipolar conception of ‘negative Valence before treatment’, with statements such as “I feel bad”. Similarly, the measurement of ‘Valence after treatment’ comprised four indicative (unipolar positive) and four counter-indicative (unipolar negative) statements. By combining these two unipolar Valence scales, with negative Valence recoded, a bipolar conception of Valence was established.
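To illustrate how such a bipolar Valence score can be derived from the unipolar item blocks, the following minimal Python sketch reverse-codes the counter-indicative items and averages them with the indicative items. The column names and toy ratings are assumptions for illustration only; they do not come from the study’s dataset.

```python
import pandas as pd

# Hypothetical item columns: Vb_pos_1..4 are indicative ("I feel good") items,
# Vb_neg_1..4 are counter-indicative ("I feel bad") items, each rated 1-6.
df = pd.DataFrame({
    "Vb_pos_1": [5, 4], "Vb_pos_2": [5, 3], "Vb_pos_3": [4, 4], "Vb_pos_4": [6, 3],
    "Vb_neg_1": [2, 4], "Vb_neg_2": [1, 3], "Vb_neg_3": [2, 4], "Vb_neg_4": [1, 5],
})

SCALE_MAX, SCALE_MIN = 6, 1
neg_cols = [c for c in df.columns if "_neg_" in c]

# Reverse-code the counter-indicative items (1 -> 6, 6 -> 1).
df[neg_cols] = (SCALE_MAX + SCALE_MIN) - df[neg_cols]

# Bipolar Valence-before score = mean of indicative and recoded counter-indicative items.
df["MVb"] = df.filter(like="Vb_").mean(axis=1)
print(df["MVb"])
```

The Valence-after score (MVa) would be computed the same way, after which the difference score Val = MVb − MVa used for H3–H5 follows directly.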

4.3.3. Affordances (AF)

The concept of Affordances was used as a measure to evaluate perceived ease of use, intuitiveness, clarity of functionality, and discoverability of features. We adopted the items used by Van Vugt et al. in their research and made modifications based on our specific context [46]. The perceived affordances of KING-bot were assessed using eight items. Players rated “I think KING-bot is competent/knowledgeable/skillful/clever” to indicate their positive perception of affordance. On the other hand, “I think KING-bot is clumsy” represented the negative affordance perceived by players.

4.3.4. Engagement: Involvement (IO) and Distance (DT)

For Engagement (Involvement and Distance), we followed Van Vugt et al. [46,120]. Involvement with the robot was measured with four items, for instance: ‘KING-bot is like a friend to me’, ‘I feel friendly toward KING-bot’, and ‘KING-bot can understand me’. Distance towards the robot was measured with four items, such as: ‘I feel unfriendly toward KING-bot’ and ‘I want to ignore KING-bot’.

4.3.5. Invisibility (IN)

Invisibility pertains to the imperceptible background operations of the AI in helping to achieve user goals. We designed a scale for Invisibility that consisted of four main perspectives: collaboration level, collaboration care, collaboration strategy, and collaboration stability. Examples of positive statements are: “They help me achieve victory”, “They care about me”, “They are strategic”, and “They play steadily”. Additionally, we included four opposing statements: “They are irrelevant to achieving victory”, “They ignore my feelings sometimes”, “They are hopeless strategically”, and “They play unsteadily”.

4.3.6. Anonymity (AN)

We utilized “The Scale of Perceived Anonymity”, assessing individuals’ perceptions of anonymity in different contexts [121] as well as “Anonymity” from the “Perceived Social Affordances of Communication Channels Scale” [122]. By integrating the contextual elements of MOBA games, we determined that Anonymity (AN) can be utilized to evaluate an individual’s perception of social environment pressure, level of interaction, verbal expression, and the demonstration of skills by players.
Four positively framed items queried “I feel relaxed”, “I can interact with my teammates”, “I can express my thoughts boldly”, and “I can utilize my skills effectively”. Additionally, four negatively framed items were included: “I feel under pressure”, “I tend to avoid interactions with my teammates”, “I hesitate to express my thoughts”, and “I find it challenging to utilize all of my skills”.

4.4. Procedure

At the beginning, participants were asked to select one MOBA platform (i.e., King of Glory or League of Legends) to be used during the experiment, which could not be changed subsequently. In Round 1 (Figure 5), participants engaged in human teammate mode, during which they were teamed up with four real players. Afterward, participants were instructed to open their phones and complete Questionnaire 1, recording their emotional state after human-based gameplay (Valence before HRI). Next, they proceeded with the HRI process, in which they watched pre-recorded video clips of KING-bot on a questionnaire webpage and could self-disclose through speech. Finally, participants completed the remaining items on Questionnaire 1, including another assessment of their emotional state (Valence after HRI) and measures related to other dimensions.
In Round 2 (Figure 5), participants engaged in a game with AI teammates, where they would be teamed up with four AI players. Similar to Round 1, participants were required to open their phones and complete Questionnaire 2, providing immediate emotional states, engaging in self-disclosure interactions with a social robot, and answering other measurement items related to their emotional states.

5. Data Analysis and Results

5.1. The Analysis of Samples

5.1.1. Sample Size

From the 122 invited participants, we collected 111 valid questionnaires, a validity rate of 91% (Supplementary Material ‘Raw Dataset’). During the experiment, subjects communicated and cooperated with us remotely, and the average completion times of parts 1 and 2 of the survey were 3 min 50 s and 2 min 49 s, respectively.
We used the software G-power 3.1 to analyze the sample size [123]. For the GLM (General Linear Model) repeated measures method, we presupposed a medium effect size f = 0.25, statistical power 1 − β = 0.8, and significance level α = 0.05, and the G-power results indicated that at least 74 subjects were needed. With N = 111 and n = 95, the test power should be stable beyond 80%.
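As a rough illustration of the kind of a priori power computation performed here, the sketch below uses scipy’s noncentral F distribution. It is a simplified between-groups approximation: the repeated-measures correction that G-power applies (number of measurements and their intercorrelation) is omitted, so the sample size it returns is more conservative than the 74 reported above. All values in the snippet are illustrative.

```python
from scipy import stats

def anova_power(f, n_total, k_groups, alpha=0.05):
    """Approximate power of an F test for Cohen's effect size f (between-groups)."""
    df1 = k_groups - 1
    df2 = n_total - k_groups
    nc = (f ** 2) * n_total                    # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)  # critical F under the null
    return 1 - stats.ncf.cdf(f_crit, df1, df2, nc)

# Smallest N reaching 80% power for f = 0.25, alpha = 0.05, two conditions.
n = 4
while anova_power(0.25, n, k_groups=2) < 0.80:
    n += 1
print(n, round(anova_power(0.25, n, k_groups=2), 3))
```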
After the initial Cronbach analyses, we tested the discriminant validity of the items by means of Principal Component Analysis. After removing certain items that were scattered over components, the remainder were neatly arranged in the expected components, showing that measurement scales were divergent. After PCA, all measurement scales achieved good reliability (Cronbach’s α ≥ 0.74). Valences before and after (Vb and Va, four items each) and the difference scores (Val) achieved Cronbach’s alpha > 0.87. Affordances (four items) achieved Cronbach’s α = 0.88; Involvement (four items) Cronbach’s α = 0.86; Distance (three items, after deletion of one item based on poor discriminant validity), Cronbach’s α = 0.85; Invisibility (four items), Cronbach’s α = 0.75; and Anonymity (four contra-indicative items), Cronbach’s α = 0.79. The results of the reliability analyses are compiled in Table 1.
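For readers who wish to reproduce the reliability step outside SPSS, a minimal sketch of Cronbach’s alpha for a block of Likert items is shown below; the toy ratings are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: four Likert items (1-6 scale) for five respondents.
scores = np.array([
    [5, 5, 4, 6],
    [3, 4, 3, 3],
    [6, 5, 6, 6],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
])
print(round(cronbach_alpha(scores), 3))
```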
We calculated scale means and performed an outlier analysis using boxplots, finding that participants 12, 17, 24, 47, 50, and 71 were outliers in Vb, and participants 5, 17, 59, and 91 were outliers in Va. Participants 1, 5, 10, 32, 33, and 108 were outliers in AF. Participants 5, 34, and 59 were outliers in DT, and there were no outliers in IO, IN, and AN. In the following steps, we performed effects analyses with (N = 111) and without outliers (n = 95).
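The boxplot-based outlier screening corresponds to Tukey’s 1.5 × IQR rule; a small sketch of how such flagging could be reproduced is given below, with invented scale means.

```python
import numpy as np

def tukey_outliers(x, whisker=1.5):
    """Indices of values outside the boxplot whiskers (Tukey's rule)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return np.where((x < lo) | (x > hi))[0]

# Toy scale means for ten participants; returned indices are 0-based.
vb_means = np.array([3.9, 4.1, 4.0, 1.2, 3.8, 4.2, 4.0, 6.0, 3.9, 4.1])
print(tukey_outliers(vb_means))   # -> [3 7]
```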

5.1.2. Demographics

We examined whether age was associated with Valence-before-treatment (Vb), Valence-after-treatment (Va), Affordance (AF), Involvement (IO), Distance (DT), Invisibility (IN), and Anonymity (AN). We calculated Pearson’s bivariate correlations (two-tailed) and did not find any significant relationship with age (p > 0.05). It is worth noting that certain correlations did occur between variables. In line with I-PEFiC theory, Involvement was negatively correlated with Distance (r = −0.369, p = 0.000), indicating that distancing tendencies negatively impacted people’s engagement with the agent (Involvement–Distance trade-off).
We ran a MANOVA (Pillai’s Trace) to examine whether gender and educational level had an effect on the dependents and found a small multivariate effect (V = 0.346, F(21,297) = 1.847, p = 0.014) but no univariate effect of gender per se (p > 0.05). Thus, gender was removed from subsequent analyses.
Education level, however, had a significant univariate impact on mean Affordance (MAF) (V = 5.796, F(3,297) = 6.634, p = 0.000, ηp² = 0.162). Education level also influenced mean Anonymity (MAN) to some extent (V = 2.169, F(3,297) = 2.804, p = 0.044, ηp² = 0.075). In MAF, those with high school, secondary school, or technical school degrees experienced higher levels of Affordance than those with university bachelor’s degrees (M = 1.09, SD = 0.35, p = 0.012) and those with master’s degrees or above (M = 1.51, SD = 0.38, p = 0.001). In MAN, those with master’s degrees and above experienced more Anonymity than those with junior college degrees (M = 1.02, SD = 0.36, p = 0.028).
We then tested education level with two GLM repeated measures procedures, once for MOBA 1.0 (human teammates) and once for MOBA 2.0 (AI teammates), with the seven dependents as within-subjects measures. No significant interaction effects occurred, and no significant univariate effects were established when outliers were removed: F(3,91) = 1.378, ηp² = 0.043, p = 0.255. We concluded that education level does not have to be used as a control variable to check for confounding effects.
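A sketch of how a comparable MANOVA with Pillai’s Trace could be run outside SPSS is shown below, assuming the statsmodels MANOVA interface and using simulated data in place of the study’s dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulated data: three dependent scale means and a between-subjects factor.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "MVb": rng.normal(4.0, 0.8, 60),
    "MAF": rng.normal(4.2, 0.7, 60),
    "MIO": rng.normal(3.8, 0.9, 60),
    "education": np.repeat(["secondary", "bachelor", "master"], 20),
})

# Pillai's trace (and the other multivariate tests) for the education factor.
mv = MANOVA.from_formula("MVb + MAF + MIO ~ education", data=df)
print(mv.mv_test())
```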
We constructed three data sets, one containing all 111 participants, one containing 95 participants (no outliers), and one containing 16 participants (outliers only). We evaluated our hypotheses with these three sets.

5.1.3. Gaming Performance

A total of 56 players provided us with their detailed game data (Supplementary Materials ‘Data of Competitions’). For players who chose King of Glory, the average game duration was above the minimum set by the system (6 min for AI teammate mode and 15 min for human teammate mode). Players who chose League of Legends for their experiment took longer, 35–45 min in total. The average gold earned was 9893, with a median of 10,288, showing that they were fully engaged in the game.
Kills/Deaths/Assists (K/D/A) is a statistical measure used in MOBA games to track a player’s performance in terms of the number of kills, deaths, and assists during the game. In terms of competitive performance, their average K/D/A score in human team mode was 6.1/4.1/10.8, while in AI teammate mode it was 14.3/2.7/4.0, indicating their solid experience. Most Valuable Player (MVP) is an award given to the player who has made the greatest impact or contribution to a game or match. Their win rate in human teammate mode was 80% with an MVP rate of 28%, and in AI teammate mode it was 96% with an MVP rate of 88%. These data vividly show that these players have a certain level of skill and experience.
Further questioning revealed that many of them had previously achieved the “King” rank in King of Glory ranked games, while players with experience in League of Legends had achieved the “Gold” rank. This information indicates that many of them had more than a month of MOBA experience.

5.2. Manipulation Check

To analyze the survey results, we used the software SPSS 26.0 [126]. Raw output files can be consulted in the Supplementary Material ‘Data Modeling and Analysis’.
To determine whether KING-bot provoked any emotions at all and whether teammates (real people or AI) elicited any mood changes, we ran a GLM repeated measures procedure for N = 111, n = 95, and n = 16. Table 2 shows that, except for the outlier group (n = 16), different types of teammates (N = 111 and n = 95) showed significant multivariate effects for Valence-before-treatment (Vb), Valence-after-treatment (Va), Affordance (AF), Involvement (IO), Distance (DT), Invisibility (IN), and Anonymity (AN).
The results of the univariate effects analysis are shown in Table 3. With or without outliers (N = 111 and n = 95), both types of teammates exerted significant effects. For outliers (n = 16), the differences remained insignificant. Our manipulation was successful: compared to human teammates only, AI teammates brought positive changes to players’ emotions, and the different measures were sensitive to playing with or without AI teammates.

5.3. Effects of Social Robots and AI Teammates

With N = 111, we ran a paired-samples t-test for mean Valence before (MVb = 3.82, SD = 0.83) versus after talking to a robot (MVa = 4.27, SD = 0.97), irrespective of playing with an AI teammate (MOBA 1.0 and 2.0 combined). The difference was significant, with a considerable effect size (t(110) = −4.37, p = 0.000 (2-tailed), Cohen’s d = −0.42, CI [−0.608, −0.220]), underscoring that robot intervention significantly improved the mood of aggravated players. We repeated the analysis without the outliers (n = 95) and found comparable results (MVb = 3.96, SD = 0.71 vs. MVa = 4.35, SD = 0.87; t(94) = −3.91, p = 0.000, d = −0.40, CI [−0.610, −0.191]), so the effect is generally valid.
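The paired-samples comparison reported above can be reproduced with scipy; the sketch below also computes Cohen’s d for paired data as the mean of the differences divided by their standard deviation. The Valence scores in the example are invented, not the study’s data.

```python
import numpy as np
from scipy import stats

def paired_t_with_d(before, after):
    """Paired-samples t-test plus Cohen's d for the paired differences."""
    before, after = np.asarray(before, float), np.asarray(after, float)
    res = stats.ttest_rel(before, after)
    diff = before - after
    d = diff.mean() / diff.std(ddof=1)   # Cohen's d for paired data
    return res.statistic, res.pvalue, d

# Toy Valence scores (1-6) before and after talking to the robot.
vb = [3.5, 4.0, 3.0, 4.5, 3.8, 2.9, 4.1, 3.6]
va = [4.2, 4.5, 3.8, 4.6, 4.4, 3.5, 4.3, 4.0]
print(paired_t_with_d(vb, va))
```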
To assess the interaction between game versions and robot intervention, we conducted a GLM repeated measures analysis for mean Valence before and after talking to a robot without being preceded by an AI teammate (MOBA 1.0) versus while being preceded by an AI teammate (MOBA 2.0). For N = 111, the multivariate effects were significant with quite a large effect size (Pillai’s Trace): V = 0.49, F(3,108) = 34.45, p = 0.000, ηp² = 0.49. In addition, the within-subjects effect was significant with a decent effect size: F(1,110) = 78.91, p = 0.000, ηp² = 0.42. An excerpt of Table 4 and Table 5 showing players’ mean Valence before and after talking to a robot, without (MOBA 1.0) and with an AI teammate (MOBA 2.0) (N = 111), is presented below (Table 6):
We scrutinized the significant interaction with paired-samples t-tests for N = 111 (Table 5). In MOBA 1.0, without an AI teammate, mean Valence before talking to a robot (MVb = 3.01) was significantly lower than after (MVa = 4.23): t(110) = −3.61, p = 0.000, Cohen’s d = −0.72, CI [−0.930, −0.512]; self-disclosure to a robot improved mood. The level of mean Valence after talking to a robot was not significantly different when talking to a robot alone (MOBA 1.0, MVa = 4.23) or doing so with an AI helper included (MOBA 2.0, MVa = 4.31): t(110) = −0.77, p = 0.445.
However, the AI helper raised mood the most, more so than a robot on its own. Table 5 shows that, in MOBA 2.0, including an AI teammate, Valence before talking to the robot (MVb = 4.61) was significantly higher than after (MVa = 4.31); the effect of the AI helper improved the gameplay. After the happiness of being helped by an AI teammate, self-disclosure to the robot seemed to have lessened that effect: t(110) = 2.55, p = 0.012, d = 0.24, CI [−0.052, 0.430]. Indeed, being helped by the AI teammate evoked the highest mean scores of Valence (MOBA 2.0, MVb = 4.61), higher than talking to the robot alone (MOBA 1.0, MVa = 4.23): t(110) = −3.28, p = 0.001, d = −0.31, CI [−0.501, −0.120]. Without the outliers (n = 95), the arrangement of these effects did not change, although the differences were more pronounced and the effect sizes were stronger.

5.4. Effects of AI Teammate

To further investigate the user experience of working with an AI teammate compared to working with human teammates, we ran a GLM multivariate analysis on all dependents with MOBA version 1.0 vs. 2.0 as the fixed factor. For N = 111, the multivariate effects were significant with a considerable effect size (Pillai’s Trace): V = 0.435, F(6,105) = 13.468, p = 0.000, ηp² = 0.44. Additionally, the within-subjects effect of teammate mode was significant with an acceptable effect size: F(1,110) = 49.621, p = 0.000, ηp² = 0.31.
The following is based on Table 5 and Table 7. Three paired-sample t-tests showed significant results for N = 111, which did not change when removing the outliers. Regarding the teammates, we compared Valence towards the human teammate (HmVb) with valence towards the AI teammate (AiVb) and found that positive feelings were significantly higher after playing with the AI helper: HmVb (M = 3.01, SD = 1.30) vs. AiVb (M = 4.62, SD = 1.04), t(110) = −10.18, p = 0.000. The AI teammate (AiAN) showed significantly higher levels of Anonymity than its human (HmAN) counterpart: HmAN (M = 3.81, SD = 1.12) vs. AiAN, (M = 4.41, SD = 1.27), t(110) = −4.02, p = 0.000. Due to being preceded by an AI helper, emotional distance towards the robot counselor was significantly higher in the AI mode (AiDT) than after playing with human (HmDT) teammates: HmDT (M = 3.00, SD = 1.18) vs. AiDT (M = 3.26, SD = 1.46), t(110) = −2.00, p = 0.048 (with n = 95, p = 0.013).
Despite more Anonymity, AI teammates, on average, improved the gamers’ moods the most, more so than human teammates and even more so than talking to a robot afterward. With an AI helper preceding, the robot actually evoked more distancing tendencies.

5.5. The Model of MOBA Game Player’s Engagement Behavior

The data from the human teammate model (MOBA 1.0) and the AI teammate model (MOBA 2.0) were pooled and averaged to measure the interrelationships between the variables. We used G-power software to make preliminary predictions about the sample size needed to build the linear multiple regression model.
We presupposed a medium effect size f² = 0.15, statistical power 1 − β = 0.8, and significance level α = 0.05, and the G-power results indicated that at least 98 subjects were needed. This a priori estimation of the required sample size at a medium effect size of f² = 0.15 showed that a total sample size of 98 was appropriate, that the test power would be stable above 80%, and that the sample size (N = 111) in this study was sufficient to produce scientifically reliable results. Thus, the valid sample size of 111 that we collected can be used to predict and support our hypothetical model. In the following steps, we further verify the hypothesized model and the interrelationships among the variables with the IBM tool SPSS AMOS [126].
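For comparison, the sketch below reproduces the logic of an a priori power computation for the overall F test of a multiple regression, again via the noncentral F distribution. The number of predictors is an assumption made for illustration; the exact figure of 98 reported above depends on the predictor count entered into G-power.

```python
from scipy import stats

def regression_power(f2, n, n_predictors, alpha=0.05):
    """Power of the overall F test in multiple regression (Cohen's f^2)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    nc = f2 * n                                 # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, nc)

# Smallest N reaching 80% power for f^2 = 0.15 with an assumed five predictors.
n = 20
while regression_power(0.15, n, n_predictors=5) < 0.80:
    n += 1
print(n)
```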
To investigate the relationships among variables, AMOS 26.0 was used to conduct Structural Equation Modeling. To ensure the validity of the model, we first removed the items with standardized factor loadings below 0.6 and reliability SMC below 0.36. Removed items were MIN4i (std = 0.539), MAN6c (std = 0.586), and MIO2i (std = 0.598).

5.5.1. Reliability and Convergence

First, the credibility and validity of the associated hypothesized models were evaluated. As shown in Table 2, the values for Cronbach’s α were between 0.748 and 0.912, demonstrating a high level of internal consistency. Convergent validity was measured by item factor loadings (k), composite reliability (CR), and average variance extracted (AVE). The values of k for all variables were significant and exceeded 0.60. CR for all variables ranged from 0.755 to 0.885. All AVE values also exceeded 0.50. In general, all coefficients exceeded the specified thresholds, indicating that the internal consistency of the variables in the model was high. Thus, the items in the questionnaire were reliable for the hypothesized model. For the relevant test results, please consult Table 8.
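Composite reliability and AVE are simple functions of the standardized loadings; a minimal sketch with invented loadings is given below (CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), AVE = mean of λ²).

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, float)
    errors = 1 - lam ** 2                 # error variance per standardized item
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, float)
    return (lam ** 2).mean()

# Toy standardized loadings for one four-item construct.
lam = [0.78, 0.81, 0.74, 0.69]
print(round(composite_reliability(lam), 3), round(average_variance_extracted(lam), 3))
```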

5.5.2. Identifying Factors for Validity

The discriminant validity test requires that a measure does not reflect other variables: the square root of the average variance extracted (AVE) has to exceed the correlations between the construct of interest and the other constructs. Table 9 shows that the square root of the AVE was always greater than the corresponding correlations, indicating that all variables have some degree of discriminant validity.
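This is the Fornell–Larcker criterion; the sketch below shows how the check could be automated, with assumed AVE values and correlations that are not taken from Table 9.

```python
import numpy as np

# Assumed AVE values and inter-construct correlations (illustrative only).
constructs = ["AF", "IO", "DT"]
ave = np.array([0.57, 0.61, 0.66])
corr = np.array([
    [1.00, 0.48, -0.37],
    [0.48, 1.00, -0.41],
    [-0.37, -0.41, 1.00],
])

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its
# absolute correlations with every other construct.
sqrt_ave = np.sqrt(ave)
for i, name in enumerate(constructs):
    others = np.abs(np.delete(corr[i], i))
    print(name, round(sqrt_ave[i], 3), "passes" if sqrt_ave[i] > others.max() else "fails")
```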

5.5.3. Model’s Degree of Fit

Structural validity measures the degree of fit, and the coefficients must meet the relevant requirements. We assessed the following indices, and the results are shown in Table 10. The RMSEA (Root Mean Square Error of Approximation) value was 0.082, the CFI (Comparative fit index) value was 0.896, and the IFI (incremental fit index) value was 0.898. These resultant values were at standard levels, which suggests the model had a reasonable to good fit and that our hypotheses sufficiently fit with the data collected.
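For reference, RMSEA and CFI can be computed directly from the model and baseline chi-square statistics; the sketch below uses illustrative chi-square values chosen only to land near the reported indices, since those statistics are not listed here.

```python
import numpy as np

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation."""
    return np.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative Fit Index relative to the independence (baseline) model."""
    d_model = max(chi2 - df, 0)
    d_base = max(chi2_base - df_base, d_model, 0)
    return 1 - d_model / d_base

# Illustrative chi-square values (not taken from the article).
print(round(rmsea(chi2=610.0, df=350, n=111), 3))
print(round(cfi(chi2=610.0, df=350, chi2_base=2900.0, df_base=406), 3))
```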

5.6. Hypotheses Test

The research hypotheses were validated by the experiment and data analysis (see Table 11). Each of the nine hypotheses was tested using the SEM. The associated R-squared and paths demonstrate the degree of support for the theoretical model retrieved in the data; the results are shown in Figure 6. All paths are significant, so the nine hypotheses proposed in this study were confirmed.
Figure 6 shows the R² and path coefficients of the model, clearly reflecting the existence of a specific influence relationship between the variables. According to Cohen’s evaluation standard, R² ≥ 0.01 has little explanatory power, R² ≥ 0.09 has medium explanatory power, and R² ≥ 0.25 has strong explanatory power [127]. In this research model, R² ranged from 0.169 to 0.573, indicating reasonably high explanatory power. In more detail, the R² of Involvement (IO) was 0.573, which means that Affordance, Anonymity, and Distance explained 57.3% of the variance in Involvement. In addition, Valence and Invisibility considerably reduced feelings of Distance (DT), explaining 29.9% of its variance. Distance and Involvement are the main dimensions of user engagement, so our model showed predictive factors influencing MOBA–player engagement.

6. Discussion

With respect to our research question, AI teammates and social robots both have the potential to improve the gaming atmosphere in eSports. The results were as expected: the Invisibility of the AI teammates helped players participate more actively in the game. The anonymous environment raised the comfort level and enabled some users to express their thoughts more freely and engage in deeper dyadic participation. The AI teammates may have facilitated problem-oriented coping strategies, after which talking to a robot for negative-mood regulation seemed superfluous [113].
Furthermore, as in other studies, players were satisfied with the robots’ affordance of intervening in their emotions, which improved their engagement and willingness to continue the interaction [46]. Social robots seem to counter the negative gaming atmosphere and may foster emotion-oriented coping strategies [113]. In line with the research of Duan et al. and Luo et al. [42,43], the positive intervention of social robots on users’ negative emotions made users more willing to become friends with the robots and may stir enthusiasm to engage in the next game [43].
The low correlation we found between Involvement and Distance (H6: DT→IO) shows that the two concepts are not two ends of the same dimension, in line with the research of Konijn and Hoorn [111] and Konijn and Bushman [128]. Within the confines of our participants being from mainland China and Hong Kong, the key findings of our study are as follows.

6.1. The Double-Edged Enhanced Competence Supported by AI Teammates

The Invisibility granted by AI teammates can reduce the psychological distance between players (H9: IN→DT). The results were clearly positive: the inclusion of AI teammates significantly improved the overall level of gameplay and increased the probability of winning. In addition, the effect of Invisibility on increasing the Anonymity of AI teammates (H7: IN→AN) was also significant.
AI teammates’ Invisibility was better able to meet users’ psychological needs, such as the enjoyment of aggression through smoother and faster kills, a greater sense of self-presentation, and sustained pleasure [1]. This allowed players to maintain a superior gaming experience without being exposed to negative stimuli.
However, we also found that the significant effects of the Anonymity of AI teammates did not work out as intended and ran counter to our expectations (H10: AN→IO, H11: AN→DT).
In conversations with participants after the experiment, we learned that Anonymity might make it more difficult for human players to communicate and form emotional relationships with their AI teammates effectively. If human players are unable to communicate effectively with their AI teammates, they may still be frustrated with the game. Thus, the impact of AI teammates’ Anonymity on the gaming experience can be complex and context-dependent, depending on factors such as skill level and player needs. However, the current design of AI teammates is based primarily on the performance of the AI algorithms and rarely addresses the importance of bridging the communication gap between humans and agents to improve performance [39].
Therefore, when implementing AI teammates, it is important to consider not only whether they should appear anonymously or in a more adaptive form based on player preferences, such as cultural and personal factors, but also how they communicate [94,95].

6.2. The Effective Self-Disclosure Process Conducted by Social Robots

Social robots as interaction partners have high affordance: they readily offer players positive treatment and improve their involvement in the game (H1: AF→IO, H3: AF→Val).
For players who were willing to interact with a social robot, regardless of whether their teammates were human or artificial, emotional Valence improved when they finished the game and self-disclosed to the robot (H3: AF→Val). Social robots had a greater impact on participants’ emotions than human teammates (Table 7). The robot mirrored the players’ emotions, emotional Valence was enhanced, and this reduced the psychological distance between them and the other players (H4: Val→DT).
Through the feedback, we also received constructive suggestions that the appearance of the social robot is crucial during the intervention process, as it increases the affordances of the technology [129,130]. KING-bot possessed a strong upper body with a thin waist, symbolizing creativity and strength, consistent with the task of healing emotions [131]. KING-bot’s visual cues led to higher esthetic judgments, aroused emotions, and evoked perceptual expressions in users [132].
While current experimental results have shown the importance of the visual appearance of social robots in enhancing human emotions, there is still much to be explored in terms of the possibilities of multisensory forms of interaction. For example, there are several multisensory factors that need to be considered to improve the affordances of HRI. These factors include user inputs (e.g., verbal communication, body movements, tactile buttons, and proximity) and social robot outputs (e.g., body contact, distance, and facial expressions) [102].

6.3. The Potential Conflicts of the AI Teammates and Social Robots

We found that adding a social robot after playing with a supportive AI teammate does not bolster the uplifting effect; it actually brings the mood down to about the level of talking to a robot without being helped by an AI teammate and adds some feelings of distance. Additionally, users were not receptive to overly extreme, flat emotion-regulation techniques.
We started with the assumption that, after negative gameplay, self-disclosure to a social robot may alleviate aggravation, which our results supported. We also assumed that adding an AI teammate to increase winning chances would alleviate aggravation as well, after which any remaining frustration could be relieved by the robot. The latter did not occur. Although social robots improved the gamers’ mood to a high degree, AI teammates did so even more, and adding a robot on top of an artificial teammate downregulated the (still) good mood. A graphical representation of these results is given in Figure 7.

7. Prototype Design and Vision

To more clearly express practical implications and future research prospects, we created a prototype of MOBA Pro (Figure 8).
MOBA Pro helps players build real-time connections with social robots and AI teammates. At the same time, MOBA Pro can enhance its capabilities by collecting relevant data during interventions with social robots and AI peers. Using machine learning and deep learning techniques, it can then train highly personalized interaction models tailored to individual users. This approach enables a personalized and technologically comprehensive interactive experience that leads to greater overall effectiveness.
In future entertainment, relieving users of negative emotions will no longer be limited to a single GUI channel. With advances in affective computing, multimodal and physiological interfaces, materials science, and other technologies, ubiquitous computing systems will be able to access human behavioral and perceptual data at a deeper level and interact seamlessly with humans.
This prototype is merely illustrative to show the practical significance of our theoretical model. Further efforts can be made by designers and engineers whose contributions will help to further develop and refine the theoretical model.

8. Limitations and Future Work

This study has the following research limitations:
(a)
Due to the spread of COVID-19 in China, we were unable to closely observe users’ immediate responses to HRI and collect more informative data, which deprived us of the opportunity to gain further research insights. Once the epidemic has subsided, we will conduct our next study in a fully offline physical environment.
(b)
We compared the human-teammate mode and then the AI-teammate mode in a fixed before-and-after order, and this order may have affected the results. Later studies should counterbalance or separate the conditions to examine the individual and combined effects of the human-teammate and AI-teammate modes more thoroughly.
(c)
It is important to be aware that our measurement methods may have limitations. Although we have reported the design principles and effectiveness of the measurement metrics for all instruments, these instruments are preliminary and should be refined in future work.
(d)
In terms of research assumptions and model design, our study could have considered more fine-grained emotion-assessment factors. MOBA players’ emotions are multifaceted, and simplistic binary categorizations of emotion may be inappropriate. It could also be argued that the AI teammates and social robots caused conflict for some participants, which could exacerbate players’ negative emotions.
In addition, some participants described the negative atmosphere in MOBA games, the novelty of the experiment once it was completed, and their mixed preferences for interacting with a social robot. On the positive side, one participant remarked: “This robot is so cute; will it appear on the market as a co-branded product shortly? It looks like it will be a good companion. Good luck with your experiment!” On the downside, another participant stated: “Must we have an AI match? Is this an insult to my skills? Why can’t I curse when playing MOBA? It is my nature to curse; I am the best player of the BNU Zhuhai”. Another player said: “The robot in this video is honestly a bit scary. I think there is a bit of an uncanny valley effect, and I wonder whether non-video interaction can reduce it”. In other words, it would be wise to consider population segmentation, discerning those who focus on cuteness from those who focus on personal prowess, while avoiding uncanny effects by making robots tangible interfaces instead of screen-based avatars.
In view of our participants’ comments, we hope to conduct more contextual and task exploration work in the future, such as addressing the following questions:
(a)
Does a touch interface make users feel more comfortable during the interaction when the robot is created from e-textile materials?
(b)
Using sentiment analysis, can social robots sense the users’ diverse emotions during MOBA interactions?
(c)
To achieve natural interactions and user experiences, how can social robots further integrate existing technologies to improve usability in the physical interactions surrounding MOBA play?

9. Conclusions

The current study is the third in a row showing that, more so than other types of media, social robots can alleviate the negative mood young adults experience due to negative news [43], unsupportive social media groups [42], and, this time, rejection by human teammates in online games. We investigated the negative effects of MOBA games and found that the emotional support provided by a social robot is strong but that provided by an AI teammate is even stronger. We suspect that emotional coping after the problem has already been solved is too little too late. As the doctor says: “Don’t double the dosage”!
In terms of theoretical contributions, this study used the I-PEFiC framework as a theoretical foundation and modularized the intervention properties of social robots and AI teammates, resulting in a novel theoretical framework for dealing with the negative atmosphere in MOBA games within the HCI domain. In terms of practical contributions, this study proposed a new interactive paradigm for emotional intervention techniques. It has the potential to help users release more negative emotions through disclosure while enjoying eSports and to provide them with more support in dealing with pressure.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/systems11080425/s1.

Author Contributions

Y.W. was responsible for the conceptual design, literature review, and experimental design. J.F.H. designed the method and part of the questionnaire. L.W. assisted with the research and experiments. Y.D. and S.C. worked on data analysis and modeling. Y.W. and L.W. were responsible for the initial writing of the manuscript. J.F.H. directed and managed the entire study and was responsible for revising the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The contribution by J.F.H. was supported by the project Negative-mood Reduction Among HK Youth With Robot PAL (Personal Avatar for Life) of the Artificial Intelligence in Design Laboratory under the InnoHK Research Clusters, Hong Kong Special Administrative Region Government (grant number RP2-3).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Human Subjects Ethics Sub-committee of the university, filed under HSEARS20200204003.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Upon written request, data can be made available by the corresponding author.

Acknowledgments

We thank the anonymous reviewers for their valuable comments on an earlier draft of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hamari, J.; Sjöblom, M. What Is ESports and Why Do People Watch It? Internet Res. 2017, 27, 211–232. [Google Scholar] [CrossRef] [Green Version]
  2. Bartholow, B.D.; Bushman, B.J.; Sestir, M.A. Chronic Violent Video Game Exposure and Desensitization to Violence: Behavioral and Event-Related Brain Potential Data. J. Exp. Soc. Psychol. 2006, 42, 532–539. [Google Scholar] [CrossRef]
  3. Bartholow, B.D.; Anderson, C.A. Effects of Violent Video Games on Aggressive Behavior: Potential Sex Differences. J. Exp. Soc. Psychol. 2002, 38, 283–290. [Google Scholar] [CrossRef]
  4. Griffiths, M. Internet Addiction—Time to Be Taken Seriously? Addict. Res. 2000, 8, 413–418. [Google Scholar] [CrossRef]
  5. Singh, M. Compulsive Digital Gaming: An Emerging Mental Health Disorder in Children. Indian J. Pediatr. 2019, 86, 171–173. [Google Scholar] [CrossRef]
  6. Snodgrass, J.G.; Bagwell, A.; Patry, J.M.; Dengah, H.J.F.; Smarr-Foster, C.; Oostenburg, M.V.; Lacy, M.G. The Partial Truths of Compensatory and Poor-Get-Poorer Internet Use Theories: More Highly Involved Videogame Players Experience Greater Psychosocial Benefits. Comput. Hum. Behav. 2018, 78, 10–25. [Google Scholar] [CrossRef]
  7. Thiel, S.-K.; Reisinger, M.; Röderer, K. “I’m Too Old for This!”: Influence of Age on Perception of Gamified Public Participation. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, Rovaniemi, Finland, 12–15 December 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 343–346. [Google Scholar]
  8. Engerman, J.A.; Carr-Chellman, A. Understanding Game-Based Learning Cultures: Introduction to Special Issue. Educ. Technol. 2017, 57, 23–27. [Google Scholar]
  9. Newman, J.I.; Xue, H.; Watanabe, N.M.; Yan, G.; McLeod, C.M. Gaming Gone Viral: An Analysis of the Emerging Esports Narrative Economy. Commun. Sport 2022, 10, 241–270. [Google Scholar] [CrossRef]
  10. Kattula, D.; Philip, S.; Shekhar, A.; Basu, A. A Game of Policies and Contexts: Restricting Gaming by Minors. Lancet Psychiatry 2021, 8, 1035–1036. [Google Scholar] [CrossRef]
  11. López-Cabarcos, M.Á.; Ribeiro-Soriano, D.; Piñeiro-Chousa, J. All That Glitters Is Not Gold. The Rise of Gaming in the COVID-19 Pandemic. J. Innov. Knowl. 2020, 5, 289–296. [Google Scholar] [CrossRef]
  12. Shan, D.; Xu, J.; Liu, T.; Zhang, Y.; Dai, Z.; Zheng, Y.; Liu, C.; Wei, Y.; Dai, Z. Subjective Attitudes Moderate the Social Connectedness in Esports Gaming during COVID-19 Pandemic: A Cross-Sectional Study. Front. Public Health 2023, 10, 1020114. [Google Scholar] [CrossRef]
  13. Balhara, Y.; Chandiok, K. Can #PlayOurpartTogether Help Prevent Miscommunication? Asian J. Psychiatry 2020, 52, 102123. [Google Scholar] [CrossRef]
  14. Kim, J.; Kim, M. Spectator E-Sport and Well-Being through Live Streaming Services. Technol. Soc. 2020, 63, 101401. [Google Scholar] [CrossRef]
  15. Soares, A.K.S.; Goedert, M.C.F.; Vargas, A.F. Mental Health and Social Connectedness During the COVID-19 Pandemic: An Analysis of Sports and E-Sports Players. Front. Psychol. 2022, 13, 802653. [Google Scholar] [CrossRef]
  16. Rudolf, K.; Soffner, M.; Bickmann, P.; Froböse, I.; Tholl, C.; Wechsler, K.; Grieben, C. Media Consumption, Stress and Wellbeing of Video Game and ESports Players in Germany: The ESports Study 2020. Front. Sports Act. Living 2022, 4, 665604. [Google Scholar] [CrossRef]
  17. Boonwang, T.; Namwaing, P.; Srisaphonphusitti, L.; Chainarong, A.; Kaewwong, S.; Kaewwong, T.; Duangsawang, N.; Sawunyavisuth, B.; Ngamjarus, C.; Sawanyawisuth, K.; et al. Esports May Improve Cognitive Skills in Soccer Players: A Systematic Review. Asia-Pac. J. Sci. Technol. 2022, 27, APST-27. [Google Scholar] [CrossRef]
  18. Murphy, C.P.; Wakefield, A.; Birch, P.D.J.; North, J.S. Esport Expertise Benefits Perceptual-Cognitive Skill in (Traditional) Sport. J. Expert. 2021, 227–237. Available online: https://www.journalofexpertise.org/articles/volume3_issue4/JoE_3_4_Murphy_etal.pdf (accessed on 23 May 2023).
  19. IOC. Communique of the Olympic Summit. 2023. Available online: https://olympics.com/ioc/news/communique-of-the-olympic-summit (accessed on 13 July 2023).
  20. Asian Games Hangzhou 2022: Official Website. Available online: https://www.hangzhou2022.cn/En/competitions/sports/competitive/202204/t20220410_47302.shtml (accessed on 13 July 2023).
  21. Newzoo Newzoo: Esports Sponsorship Alone Will Generate Revenues of More Than $600 Million This Year. Available online: https://newzoo.com/resources/blog/newzoo-esports-sponsorship-alone-will-generate-revenues-of-more-than-600-million-this-year (accessed on 14 July 2023).
  22. Esports Earnings Largest Overall Prize Pools in Esports. Available online: https://www.esportsearnings.com/tournaments (accessed on 14 July 2023).
  23. Newzoo Newzoo’s Global Esports Live Streaming Market Report 2021 (Free Version). Available online: https://newzoo.com/resources/trend-reports/newzoos-global-esports-live-streaming-market-report-2021-free-version (accessed on 14 July 2023).
  24. Beres, N.A.; Frommel, J.; Reid, E.; Mandryk, R.L.; Klarkowski, M. Don’t You Know That You’Re Toxic: Normalization of Toxicity in Online Gaming. In Proceedings of the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar]
  25. Kowert, R. Dark Participation in Games. Front. Psychol. 2020, 11, 2969. [Google Scholar] [CrossRef] [PubMed]
  26. Mora-Cantallops, M.; Sicilia, M.-Á. MOBA Games: A Literature Review. Entertain. Comput. 2018, 26, 128–138. [Google Scholar] [CrossRef]
  27. Demediuk, S.; Murrin, A.; Bulger, D.; Hitchens, M.; Drachen, A.; Raffe, W.L.; Tamassia, M. Player Retention in League of Legends: A Study Using Survival Analysis. In Proceedings of the Proceedings of the Australasian Computer Science Week Multiconference, Brisbane, Australia, 29 January–2 February 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar]
  28. Funk, J. Technology Change, Economic Feasibility, and Creative Destruction: The Case of New Electronic Products and Services. Ind. Corp. Chang. 2018, 27, 65–82. [Google Scholar] [CrossRef]
  29. Llorens, M.R. ESport Gaming: The Rise of a New Sports Practice. Sport Ethics Philos. 2017, 11, 464–476. [Google Scholar] [CrossRef]
  30. McCutcheon, C.; Hitchens, M.; Drachen, A. ESport vs IrlSport. In Advances in Computer Entertainment Technology; Cheok, A.D., Inami, M., Romão, T., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 531–542. [Google Scholar]
  31. Qi, S.; Shu-Ming, W.; Chun-Chih, C.; Chia-Hui, H. Research on Influencing Factors of MOBA Mobile Games Based on Factor Analysis. In Proceedings of the 2021 2nd International Conference on Computer Communication and Network Security (CCNS), Xining, China, 30 July–1 August 2021; pp. 29–33. [Google Scholar]
  32. Tyack, A.; Wyeth, P.; Johnson, D. The Appeal of MOBA Games: What Makes People Start, Stay, and Stop. In Proceedings of the Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, Austin, TX, USA, 16–19 October 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 313–325. [Google Scholar]
  33. Märtens, M.; Shen, S.; Iosup, A.; Kuipers, F. Toxicity Detection in Multiplayer Online Games. In Proceedings of the 2015 International Workshop on Network and Systems Support for Games (NetGames), Zagreb, Croatia, 3–4 December 2015; pp. 1–6. [Google Scholar]
  34. GDCVault More Science Behind Shaping Player Behavior. Available online: https://www.gdcvault.com/play/1022160/More-Science-Behind-Shaping-Player (accessed on 17 July 2023).
  35. Aurora Mobile Data Reports | Mobile Game Churn User Research Report. Available online: https://www.moonfox.cn/insight/report/894 (accessed on 17 July 2023).
  36. Balakrishnan, J.; Griffiths, M.D. Loyalty towards Online Games, Gaming Addiction, and Purchase Intention towards Online Mobile in-Game Features. Comput. Hum. Behav. 2018, 87, 238–246. [Google Scholar] [CrossRef] [Green Version]
  37. Yang, H.-E.; Wu, C.-C.; Wang, K.-C. An Empirical Analysis of Online Game Service Satisfaction and Loyalty. Expert Syst. Appl. 2009, 36, 1816–1825. [Google Scholar] [CrossRef]
  38. do Nascimento Silva, V.; Chaimowicz, L. On the Development of Intelligent Agents for MOBA Games. In Proceedings of the 2015 14th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), Piauí, Brazil, 11–13 November 2015; pp. 142–151. [Google Scholar]
  39. Gao, Y.; Liu, F.; Wang, L.; Lian, Z.; Wang, W.; Li, S.; Wang, X.; Zeng, X.; Wang, R.; Wang, J.; et al. Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective. arXiv 2023, arXiv:2304.11632. [Google Scholar]
  40. Ye, D.; Chen, G.; Zhao, P.; Qiu, F.; Yuan, B.; Zhang, W.; Chen, S.; Sun, M.; Li, X.; Li, S.; et al. Supervised Learning Achieves Human-Level Performance in MOBA Games: A Case Study of Honor of Kings. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 908–918. [Google Scholar] [CrossRef]
  41. Zhang, R.; McNeese, N.J.; Freeman, G.; Musick, G. “An Ideal Human”: Expectations of AI Teammates in Human-AI Teaming. Proc. ACM Hum.-Comput. Interact. 2021, 4, 1–25. [Google Scholar] [CrossRef]
  42. Luo, R.L.; Zhang, T.X.Y.; Chen, D.H.-C.; Hoorn, J.F.; Huang, I.S. Social Robots Outdo the Not-So-Social Media for Self-Disclosure: Safe Machines Preferred to Unsafe Humans? Robotics 2022, 11, 92. [Google Scholar] [CrossRef]
  43. Duan, Y.; Yoon, M.; Liang, Z.; Hoorn, J. Self-Disclosure to a Robot: Only for Those Who Suffer the Most. Robotics 2021, 10, 98. [Google Scholar] [CrossRef]
  44. Chang, W.-L.; Šabanovic, S.; Huber, L. Observational Study of Naturalistic Interactions with the Socially Assistive Robot PARO in a Nursing Home. In Proceedings of the The 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 294–299. [Google Scholar]
  45. Ahmad, M.I.; Khordi-moodi, M.; Lohan, K.S. Social Robot for STEM Education. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 90–92. [Google Scholar]
  46. van Vugt, H.C.; Hoorn, J.F.; Konijn, E.A.; Dimitriadou, A.d.B. Affective Affordances: Improving Interface Character Engagement through Interaction. Int. J. Hum.-Comput. Stud. 2006, 64, 874–888. [Google Scholar] [CrossRef]
  47. Hoorn, J.F.; Konijn, E.A. Perceiving and Experiencing Fictional Characters: An Integrative Account1. Jpn. Psychol. Res. 2003, 45, 250–268. [Google Scholar] [CrossRef]
  48. Mehrabian, A.; Russell, J.A. An Approach to Environmental Psychology; The MIT Press: Cambridge, MA, USA, 1974; ISBN 0-262-13090-4. [Google Scholar]
  49. Bányai, F.; Griffiths, M.D.; Király, O.; Demetrovics, Z. The Psychology of Esports: A Systematic Literature Review. J. Gambl. Stud. 2019, 35, 351–365. [Google Scholar] [CrossRef] [Green Version]
  50. Nakamura, J.; Csikszentmihalyi, M. The Concept of Flow. Handb. Posit. Psychol. 2002, 89, 105. [Google Scholar]
  51. Larsen, L.J. The Play of Champions: Toward a Theory of Skill in ESport. Sport Ethics Philos. 2022, 16, 130–152. [Google Scholar] [CrossRef]
  52. Schubert, M.; Drachen, A.; Mahlmann, T. Esports Analytics Through Encounter Detection. In Proceedings of the MIT Sloan Sports Analytics Conference, Boston, MA, USA, 11–12 March 2016; MIT Sloan: Cambridge, MD, USA, 2016. [Google Scholar]
  53. Polman, R.; Trotter, M.; Poulus, D.; Borkoles, E. ESport: Friend or Foe? In Proceedings of the Serious Games; Göbel, S., Garcia-Agundez, A., Tregel, T., Ma, M., Baalsrud Hauge, J., Oliveira, M., Marsh, T., Caserman, P., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 3–8. [Google Scholar]
  54. Rambusch, J.; Jakobsson, P.; Pargman, D. Exploring E-Sports: A Case Study of Game Play in Counter-Strike. In Proceedings of the Situated Play: The 2007 World Conference of Digital Games Research Association, Tokyo, Japan, 24–28 September 2007; Digital Games Research Association (DiGRA): Tokyo, Japan, 2007; Volume 4, pp. 157–164. [Google Scholar]
  55. Lu, Y.; Zhou, T.; Wang, B. Exploring Chinese Users’ Acceptance of Instant Messaging Using the Theory of Planned Behavior, the Technology Acceptance Model, and the Flow Theory. Comput. Hum. Behav. 2009, 25, 29–39. [Google Scholar] [CrossRef]
  56. Shin, D. Empathy and Embodied Experience in Virtual Environment: To What Extent Can Virtual Reality Stimulate Empathy and Embodied Experience? Comput. Hum. Behav. 2018, 78, 64–73. [Google Scholar] [CrossRef]
  57. Formosa, J.; O’Donnell, N.; Horton, E.M.; Türkay, S.; Mandryk, R.L.; Hawks, M.; Johnson, D. Definitions of Esports: A Systematic Review and Thematic Analysis. In Proceedings of the ACM on Human-Computer Interaction; Association for Computing Machinery: New York, NY, USA, 2022; Volume 6. [Google Scholar] [CrossRef]
  58. Allen, J.J.; Anderson, C.A. Satisfaction and Frustration of Basic Psychological Needs in the Real World and in Video Games Predict Internet Gaming Disorder Scores and Well-Being. Comput. Hum. Behav. 2018, 84, 220–229. [Google Scholar] [CrossRef]
  59. de Mesquita Neto, J.A.; Becker, K. Relating Conversational Topics and Toxic Behavior Effects in a MOBA Game. Entertain. Comput. 2018, 26, 10–29. [Google Scholar] [CrossRef]
  60. Király, O.; Urbán, R.; Griffiths, M.D.; Ágoston, C.; Nagygyörgy, K.; Kökönyei, G.; Demetrovics, Z. The Mediating Effect of Gaming Motivation Between Psychiatric Symptoms and Problematic Online Gaming: An Online Survey. J Med. Internet Res. 2015, 17, e88. [Google Scholar] [CrossRef] [Green Version]
  61. Tholl, C.; Bickmann, P.; Wechsler, K.; Froböse, I.; Grieben, C. Musculoskeletal Disorders in Video Gamers—A Systematic Review. BMC Musculoskelet. Disord. 2022, 23, 678. [Google Scholar] [CrossRef]
  62. Lee, S.; Bonnar, D.; Kim, Y.; Lee, Y.; Lee, S.; Gradisar, M.; Suh, S. Sleep Characteristics and Risk Factors of Korean Esports Athletes: An Exploratory Study. Sleep Med. Res. 2020, 11, 77–87. [Google Scholar] [CrossRef]
  63. Hense, J.; Mandl, H. Learning in or with Games. In Digital Systems for Open Access to Formal and Informal Learning; Sampson, D.G., Ifenthaler, D., Spector, J.M., Isaias, P., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 181–193. ISBN 978-3-319-02264-2. [Google Scholar]
  64. Gowler, C.P.R.; Iacovides, I. “Horror, Guilt and Shame”—Uncomfortable Experiences in Digital Games. In Proceedings of the Proceedings of the Annual Symposium on Computer-Human Interaction in Play, Barcelona, Spain, 22–25 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 325–337. [Google Scholar]
  65. Dai, B.; Luo, R.; Xu, Y. Review on the Studies of Social Media Fatigue: Meanings, Antecedents and Consequences. J. Mod. Inf. 2019, 39, 9. [Google Scholar] [CrossRef]
  66. Kowal, M.; Conroy, E.; Ramsbottom, N.; Smithies, T.; Toth, A.; Campbell, M. Gaming Your Mental Health: A Narrative Review on Mitigating Symptoms of Depression and Anxiety Using Commercial Video Games. JMIR Serious Games 2021, 9, e26575. [Google Scholar] [CrossRef] [PubMed]
  67. Yen, J.-Y.; Lin, H.-C.; Chou, W.-P.; Liu, T.-L.; Ko, C.-H. Associations Among Resilience, Stress, Depression, and Internet Gaming Disorder in Young Adults. Int. J. Environ. Res. Public. Health 2019, 16, 3181. [Google Scholar] [CrossRef] [Green Version]
  68. Weinstein, A.; Weizman, A. Emerging Association Between Addictive Gaming and Attention-Deficit/Hyperactivity Disorder. Curr. Psychiatry Rep. 2012, 14, 590–597. [Google Scholar] [CrossRef] [PubMed]
  69. Wenzel, H.G.; Bakken, I.J.; Johansson, A.; Götestam, K.G.; Øren, A. Excessive Computer Game Playing among Norwegian Adults: Self-Reported Consequences of Playing and Association with Mental Health Problems. Psychol. Rep. 2009, 105, 1237–1247. [Google Scholar] [CrossRef] [PubMed]
  70. Tateno, M.; Teo, A.R.; Ukai, W.; Kanazawa, J.; Katsuki, R.; Kubo, H.; Kato, T.A. Internet Addiction, Smartphone Addiction, and Hikikomori Trait in Japanese Young Adult: Social Isolation and Social Network. Front. Psychiatry 2019, 10, 455. [Google Scholar] [CrossRef] [Green Version]
  71. Hall, L.C.; Drummond, A.; Sauer, J.D.; Ferguson, C.J. Effects of Self-Isolation and Quarantine on Loot Box Spending and Excessive Gaming—Results of a Natural Experiment. PeerJ 2021, 9, e10705. [Google Scholar] [CrossRef]
  72. Trotter, M.G.; Coulter, T.J.; Davis, P.A.; Poulus, D.R.; Polman, R. Social Support, Self-Regulation, and Psychological Skill Use in E-Athletes. Front. Psychol. 2021, 12, 722030. [Google Scholar] [CrossRef]
  73. Sabtan, B.; Cao, S.; Paul, N. Current Practice and Challenges in Coaching Esports Players: An Interview Study with League of Legends Professional Team Coaches. Entertain. Comput. 2022, 42, 100481. [Google Scholar] [CrossRef]
  74. Kou, Y. Toxic Behaviors in Team-Based Competitive Gaming: The Case of League of Legends. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, Virtual Event Canada, 2–4 November 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 81–92. [Google Scholar]
  75. Cai, L.; Huang, Z.; Feng, Q.; Chang, X.; Yan, K. Co-Transformation of Digital Health and ESport in Metaverse: Moderating Effects of Digital Personality on Mental Health in Multiplayer Online Battle Arena (MOBA). Int. J. Environ. Res. Public Health 2023, 20, 760. [Google Scholar] [CrossRef]
  76. Tng, S.T.; Ho, K.H.; Pau, K. Need Frustration, Gaming Motives, and Internet Gaming Disorder in Mobile Multiplayer Online Battle Arena (MOBA) Games: Through the Lens of Self-Determination Theory. Int. J. Ment. Health Addict. 2022. [Google Scholar] [CrossRef] [PubMed]
  77. Chang, Y.-H.; Liu, D.-C.; Chen, Y.-Q.; Hsieh, S. The Relationship between Online Game Experience and Multitasking Ability in a Virtual Environment. Appl. Cogn. Psychol. 2017, 31, 653–661. [Google Scholar] [CrossRef]
  78. Myślak, M.; Deja, D. Developing Game-Structure Sensitive Matchmaking System for Massive-Multiplayer Online Games. In Proceedings of the Social Informatics: SocInfo 2014 International Workshops, Barcelona, Spain, 11 November 2014; Aiello, L.M., McFarland, D., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 200–208. [Google Scholar]
  79. Türkay, S.; Formosa, J.; Adinolf, S.; Cuthbert, R.; Altizer, R. See No Evil, Hear No Evil, Speak No Evil: How Collegiate Players Define, Experience and Cope with Toxicity. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–13. [Google Scholar]
  80. Wang, Q. A Comparison of Moderation Systems in DOTA2 and League of Legends from a Player Perspective. Master’s Thesis, Uppsala University, Department of Game Design, Uppsala, Sweden, 2023. [Google Scholar]
  81. Deci, E.L.; Ryan, R.M. The General Causality Orientations Scale: Self-Determination in Personality. J. Res. Personal. 1985, 19, 109–134. [Google Scholar] [CrossRef]
  82. Ross-Stewart, L.; Lee, R. VR Training and Imagery Training in Esports. J. Imag. Res. Sport Phys. Act. 2023, 18, 20230003. [Google Scholar] [CrossRef]
  83. Turkay, S.; Formosa, J.; Cuthbert, R.; Adinolf, S.; Brown, R.A. Virtual Reality Esports—Understanding Competitive Players’ Perceptions of Location Based VR Esports. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021. [Google Scholar]
  84. Wang, C.-M.; Hong, J.-C.; Ye, J.-H.; Ye, J.-N. The Relationship among Gameplay Self-Efficacy, Competition Anxiety, and the Performance of ESports Players. Entertain. Comput. 2022, 42, 100489. [Google Scholar] [CrossRef]
  85. Pobiedina, N.; Neidhardt, J.; del Carmen Calatrava Moreno, M.; Grad-Gyenge, L.; Werthner, H. On Successful Team Formation: Statistical Analysis of a Multiplayer Online Game. In Proceedings of the 2013 IEEE 15th Conference on Business Informatics, Vienna, Austria, 15–18 July 2013; pp. 55–62. [Google Scholar]
  86. Kings of Glory Optimization of Drop-Hosting AI Teammates 2023. Available online: https://pvp.qq.com/coming/v2/system/0813dxtg.shtml (accessed on 17 July 2023).
  87. Costescu, C.A.; Vanderborght, B.; David, D.O. The Effects of Robot-Enhanced Psychotherapy: A Meta-Analysis. Rev. Gen. Psychol. 2014, 18, 127–136. [Google Scholar] [CrossRef] [Green Version]
  88. Zhu, X.; Liang, W.; Xv, W.; Wang, Y. The Key Strategies for Increasing Users’ Intention of Self-Disclosure in Human-Robot Interaction through Robotic Appearance Design. In Proceedings of the SHS Web of Conferences, Hangzhou, China, 31 March–2 April 2023; EDP Sciences: Les Ulis, France, 2023; Volume 165. [Google Scholar]
  89. Mutlu, B.; Forlizzi, J. Robots in Organizations: The Role of Workflow, Social, and Environmental Factors in Human-Robot Interaction. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 287–294. [Google Scholar]
  90. Davis, F. A Technology Acceptance Model for Empirically Testing New End-User Information Systems. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1985. [Google Scholar]
  91. Turvey, M. Preliminaries to a Theory. Perceiving Act. Knowing Ecol. Psychol. 1977, 211, 210–211. Available online: https://haskinslabs.org/sites/default/files/files/Reprints/HL0168A.pdf (accessed on 23 May 2023).
  92. Bulavina, M.; Biryukova, Y.; Kurilenko, V.; Dunaeva, L.; Akhnina, K. Speech Strategy of Anonymity in Communication. SHS Web Conf 2019, 69, 00020. [Google Scholar] [CrossRef] [Green Version]
  93. Scott, C.R.; Rains, S.A. (Dis)Connections in Anonymous Communication Theory: Exploring Conceptualizations of Anonymity in Communication Research. Ann. Int. Commun. Assoc. 2020, 44, 385–400. [Google Scholar] [CrossRef]
  94. Taylor, T.L. Life in Virtual Worlds: Plural Existence, Multimodalities, and Other Online Research Challenges. Am. Behav. Sci. 1999, 43, 436–449. [Google Scholar] [CrossRef]
  95. Bell, M. Online Role-Play: Anonymity, Engagement and Risk. Educ. Media Int. 2001, 38, 251–260. [Google Scholar] [CrossRef]
  96. Zhang, K.; Kizilcec, R. Anonymity in Social Media: Effects of Content Controversiality and Social Endorse-Ment on Sharing Behavior. In Proceedings of the International AAAI Conference on Web and Social Media, Ann Arbor, MI, USA, 1–4 June 2014; Volume 8, pp. 643–646. [Google Scholar] [CrossRef]
  97. Lin, J.; He, J.; Zhang, N. Application of Behavior Tree in AI Design of MOBA Games. In Proceedings of the 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII), Seoul, Republic of Korea, 12–15 July 2019; pp. 323–326. [Google Scholar]
  98. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Pla; Bibliovault OAI Repository; The University of Chicago Press: Chicago, IL, USA, 1996. [Google Scholar]
  99. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  100. Siemon, D. Elaborating Team Roles for Artificial Intelligence-Based Teammates in Human-AI Collaboration. Group Decis. Negot. 2022, 31, 871–912. [Google Scholar] [CrossRef]
  101. Gander, F.; Gaitzsch, I.; Ruch, W. The Relationships of Team Role- and Character Strengths-Balance with Individual and Team-Level Satisfaction and Performance. Front. Psychol. 2020, 11, 566222. [Google Scholar] [CrossRef] [PubMed]
  102. Vallverdú, J.; Trovato, G. Emotional Affordances for Human–Robot Interaction. Adapt. Behav. 2016, 24, 320–334. [Google Scholar] [CrossRef]
  103. Scherer, K.R. The Nature and Dynamics of Relevance and Valence Appraisals: Theoretical Advances and Recent Evidence. Emot. Rev. 2013, 5, 150–162. [Google Scholar] [CrossRef]
  104. Grüsser, S.M.; Thalemann, R.; Griffiths, M.D. Excessive Computer Game Playing: Evidence for Addiction and Aggression? Cyberpsychol. Behav. 2007, 10, 290–292. [Google Scholar] [CrossRef]
  105. Lee, Z.W.Y.; Cheung, C.M.K.; Chan, T.K.H. Explaining the Development of the Excessive Use of Massively Multiplayer Online Games: A Positive-Negative Reinforcement Perspective. In Proceedings of the 2014 47th Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 6–9 January 2014; pp. 668–677. [Google Scholar]
  106. Diehl, J.J.; Schmitt, L.M.; Villano, M.; Crowell, C.R. The Clinical Use of Robots for Individuals with Autism Spectrum Disorders: A Critical Review. Res. Autism Spectr. Disord. 2012, 6, 249–262. [Google Scholar] [CrossRef] [Green Version]
  107. Moerman, C.J.; van der Heide, L.; Heerink, M. Social Robots to Support Children’s Well-Being under Medical Treatment: A Systematic State-of-the-Art Review. J. Child Health Care 2019, 23, 596–612. [Google Scholar] [CrossRef]
  108. Abbott, R.; Orr, N.; McGill, P.; Whear, R.; Bethel, A.; Garside, R.; Stein, K.; Thompson-Coon, J. How Do “Robopets” Impact the Health and Well-Being of Residents in Care Homes? A Systematic Review of Qualitative and Quantitative Evidence. Int. J. Older People Nurs. 2019, 14, e12239. [Google Scholar] [CrossRef]
  109. Laranjo, L.; Dunn, A.G.; Tong, H.L.; Kocaballi, A.B.; Chen, J.; Bashir, R.; Surian, D.; Gallego, B.; Magrabi, F.; Lau, A.Y.S.; et al. Conversational Agents in Healthcare: A Systematic Review. J. Am. Med. Inform. Assoc. 2018, 25, 1248–1258. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  110. Laban, G.; George, J.-N.; Morrison, V.; Cross, E.S. Tell Me More! Assessing Interactions with Social Robots from Speech. Paladyn J. Behav. Robot. 2021, 12, 136–159. [Google Scholar] [CrossRef]
  111. Konijn, E.A.; Hoorn, J.F. Some Like It Bad: Testing a Model for Perceiving and Experiencing Fictional Characters. Media Psychol. 2005, 7, 107–144. [Google Scholar] [CrossRef] [Green Version]
  112. Kahn, J.; Garrison, A. Emotional Self-Disclosure and Emotional Avoidance: Relations with Symptoms of Depression and Anxiety. J. Couns. Psychol. 2009, 56, 573–584. [Google Scholar] [CrossRef]
  113. Liang, H.; Xue, Y.; Pinsonneault, A.; Wu, Y.A. What Users Do besides Problem-Focused Coping When Facing IT Security Threats: An Emotion-Focused Coping Perspective. MIS Q. 2019, 43, 373–394. [Google Scholar] [CrossRef] [Green Version]
  114. Tencent Games The Introduction of King of Glory 2022. Available online: https://pvp.qq.com/web201605/introduce.shtml (accessed on 17 July 2023).
  115. Riot Games League of Legends 2022. Available online: https://www.leagueoflegends.com/en-us/ (accessed on 17 July 2023).
  116. Gao, G.; Min, A.; Shih, P.C. Gendered Design Bias: Gender Differences of in-Game Character Choice and Playing Style in League of Legends. In Proceedings of the 29th Australian Conference on Computer-Human Interaction, Brisbane, QLD, Australia, 28 November–1 December 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 307–317. [Google Scholar]
  117. Ying Jia Toy The Beauty of Technology: Smart Bluetooth App Robot CAER 2022. Available online: http://www.amwelltoys.com/cn/goodshow/3465.html#. (accessed on 17 July 2023).
  118. Konijn, E.A.; Hoorn, J.F. Differential Facial Articulacy in Robots and Humans Elicit Different Levels of Responsiveness, Empathy, and Projected Feelings. Robotics 2020, 9, 92. [Google Scholar] [CrossRef]
  119. Joshi, A.; Kale, S.; Chandel, S.; Pal, D. Likert Scale: Explored and Explained. Br. J. Appl. Sci. Technol. 2015, 7, 396–403. [Google Scholar] [CrossRef]
  120. Wakita, T.; Ueshima, N.; Noguchi, H. Psychological Distance Between Categories in the Likert Scale: Comparing Different Numbers of Options. Educ. Psychol. Meas. 2012, 72, 533–546. [Google Scholar] [CrossRef]
  121. Hite, D.M.; Voelker, T.A.; Robertson, A. Measuring Perceived Anonymity: The Development of a Context Independent Instrument. J. Methods Meas. Soc. Sci. 2014, 5, 22–39. [Google Scholar] [CrossRef] [Green Version]
  122. Fox, J.; McEwan, B. Distinguishing Technologies for Social Interaction: The Perceived Social Affordances of Communication Channels Scale. Commun. Monogr. 2017, 84, 298–318. [Google Scholar] [CrossRef]
  123. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A Flexible Statistical Power Analysis Program for the Social, Behavioral, and Biomedical Sciences. Behav. Res. Methods 2007, 39, 175–191. [Google Scholar] [CrossRef] [PubMed]
  124. Bangor, A.; Kortum, P.T.; Miller, J.T. An Empirical Evaluation of the System Usability Scale. Int. J. Human–Comput. Interact. 2008, 24, 574–594. [Google Scholar] [CrossRef]
  125. Lee, Y.; Kozar, K.A.; Larsen, K. The Technology Acceptance Model: Past, Present, and Future. Commun. Assoc. Inf. Syst. 2003, 12, 50. [Google Scholar] [CrossRef]
  126. IBM Corp. IBM SPSS Amos. Available online: https://www.ibm.com/support/pages/downloading-ibm-spss-amos-26 (accessed on 17 July 2023).
  127. Cohen, P.N. Black Concentration Effects on Black-White and Gender Inequality: Multilevel Analysis for U.S. Metropolitan Areas. Soc. Forces 1998, 77, 207–229. [Google Scholar] [CrossRef]
  128. Konijn, E.A.; Bushman, B.J. World Leaders as Movie Characters? Perceptions of George W. Bush, Tony Blair, Osama Bin Laden, and Saddam Hussein. Media Psychol. 2007, 9, 157–177. [Google Scholar] [CrossRef] [Green Version]
  129. Haring, K.S.; Silvera-Tawil, D.; Takahashi, T.; Watanabe, K.; Velonaki, M. How People Perceive Different Robot Types: A Direct Comparison of an Android, Humanoid, and Non-Biomimetic Robot. In Proceedings of the 2016 8th International Conference on Knowledge and Smart Technology (KST), Chiangmai, Thailand, 3–6 February 2016; pp. 265–270. [Google Scholar]
  130. Barfield, J.K. Self-Disclosure of Personal Information, Robot Appearance, and Robot Trustworthiness. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 67–72. [Google Scholar]
  131. Sung, W.O.; Song, M.J.; Chung, K. Applying Sasang Typology Theory to Robot Appearance Design. In Proceedings of the RO-MAN 2007—The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju Island, Republic of Korea, 26–29 August 2007; pp. 1066–1071. [Google Scholar]
  132. Nittono, H.; Fukushima, M.; Yano, A.; Moriya, H. The Power of Kawaii: Viewing Cute Images Promotes a Careful Behavior and Narrows Attentional Focus. PLoS ONE 2012, 7, e46362. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The negative gaming atmosphere in MOBA games.
Figure 2. Interactively Perceiving and Experiencing Fictional Characters (I-PEFiC).
Figure 3. The hypothesized interactive paradigm of gaming.
Figure 4. Research hypotheses.
Figure 5. Experimental design and procedure.
Figure 6. The Analysis of Structural Equation Model.
Figure 7. Robots reduce negative mood but AI teammates more so. Adding a robot to an AI teammate does not further improve the gamer’s mood.
Figure 8. Prototype design: application ‘MOBA Pro,’ (A) concept of prototype, (B) interaction style of prototype.
Table 1. Measurement items for evaluation.
Measure | Reliability | Scale Mean | SD | Source
Valence-before-treatment (Vb) | 0.865 | 3.82 | 0.83 | Duan et al. [43] and Luo et al. [42]
Valence-after-treatment (Va) | 0.912 | 4.27 | 0.97 | Duan et al. [43] and Luo et al. [42]
Valence (Val) | 0.868 | 0.46 | 1.10 | Duan et al. [43] and Luo et al. [42]
Affordance (AF) | 0.875 | 3.70 | 1.05 | Van Vugt et al. [46]
Involvement (IO) | 0.858 | 3.98 | 1.03 | Van Vugt et al. [46]
Distance (DT) | 0.853 | 3.13 | 1.14 | Psychological Distance Scale [46]
Invisibility (IN) | 0.748 | 3.37 | 0.85 | System Usability Scale [124] and Technology Acceptance Model Scale [125]
Anonymity (AN) | 0.793 | 4.11 | 0.90 | The Scale of Perceived Anonymity [121] and Communication Channels Scale [122]
Table 2. Results of multivariate effects of experiment with measures.
V | F | df | p | ηp2 | n
0.435 | 13.468 | 6, 105 | 0.000 | 0.435 | 111
0.492 | 14.365 | 6, 89 | 0.000 | 0.492 | 95
0.553 | 2.059 | 6, 10 | 0.149 | 0.553 | 16
Table 3. Results of univariate effects.
Variables: Measure
V | F | df | p | ηp2 | n
0.311 | 49.621 | 1, 110 | 0.000 | 0.311 | 111
0.418 | 67.453 | 1, 94 | 0.000 | 0.418 | 95
0.021 | 0.323 | 1, 15 | 0.578 | 0.210 | 16
Variables: measure
V | F | df | p | ηp2 | n
0.453 | 20.414 | 6, 105 | 0.000 | 0.157 | 111
0.515 | 23.716 | 6, 89 | 0.000 | 0.201 | 95
0.642 | 1.800 | 6, 10 | 0.108 | 0.107 | 16
Table 4. Results of univariate effects.
Variables: Experiment
V | F | df | p | ηp2 | n
0.311 | 49.621 | 1, 110 | 0.000 | 0.311 | 111
0.418 | 67.453 | 1, 94 | 0.000 | 0.418 | 95
0.021 | 0.323 | 1, 15 | 0.578 | 0.210 | 16
Variables: measure
V | F | df | p | ηp2 | n
0.453 | 20.414 | 6, 105 | 0.000 | 0.157 | 111
0.515 | 23.716 | 6, 89 | 0.000 | 0.201 | 95
0.642 | 1.800 | 6, 10 | 0.108 | 0.107 | 16
Table 5. Means and SDs of human and AI teammates.
Hm / AI (n = 111)
Hm Variables | Mean | SD | AI Variables | Mean | SD | n
HmVb | 3.01 | 1.30 | AiVb | 4.62 | 1.04 | 111
HmVa | 4.23 | 1.07 | AiVa | 4.31 | 1.16 | 111
HmAF | 3.64 | 0.98 | AiAF | 3.75 | 1.33 | 111
HmIO | 3.95 | 1.14 | AiIO | 4.01 | 1.29 | 111
HmDT | 3.00 | 1.18 | AiDT | 3.26 | 1.46 | 111
HmIN | 3.34 | 1.04 | AiIN | 3.40 | 1.45 | 111
HmAN | 3.81 | 1.12 | AiAN | 4.41 | 1.27 | 111
Hm / AI (n = 95)
Hm Variables | Mean | SD | AI Variables | Mean | SD | n
HmVb | 3.13 | 1.30 | AiVb | 4.79 | 0.78 | 95
HmVa | 4.28 | 1.06 | AiVa | 4.43 | 1.00 | 95
HmAF | 3.77 | 0.75 | AiAF | 3.87 | 1.22 | 95
HmIO | 4.01 | 1.10 | AiIO | 4.06 | 1.19 | 95
HmDT | 2.92 | 1.12 | AiDT | 3.27 | 1.40 | 95
HmIN | 3.24 | 1.00 | AiIN | 3.50 | 1.40 | 95
HmAN | 3.73 | 1.09 | AiAN | 4.49 | 1.25 | 95
Hm / AI (n = 16)
Hm Variables | Mean | SD | AI Variables | Mean | SD | n
HmVb | 2.31 | 1.10 | AiVb | 3.62 | 1.68 | 16
HmVa | 3.97 | 1.14 | AiVa | 3.63 | 1.72 | 16
HmAF | 2.91 | 1.70 | AiAF | 3.06 | 1.76 | 16
HmIO | 3.63 | 1.34 | AiIO | 3.70 | 1.78 | 16
HmDT | 3.46 | 1.48 | AiDT | 3.17 | 1.88 | 16
HmIN | 3.92 | 1.12 | AiIN | 2.83 | 1.63 | 16
HmAN | 4.26 | 1.18 | AiAN | 3.91 | 1.34 | 16
Table 6. Players’ mean Valence without and with an AI teammate.
Valence | MOBA 1.0 M (SD) | MOBA 2.0 M (SD)
before | 3.01 (1.30) | 4.61 (1.04)
after | 4.23 (1.07) | 4.31 (1.16)
Table 7. Results of paired-samples t-test B.
Variables | t (N = 111) | p | df | t (n = 95) | p | df
HmVb-AiVb | −10.18 | 0.000 | 110 | −10.06 | 0.000 | 94
HmVa-AiVa | −0.77 | 0.445 | 110 | −1.33 | 0.186 | 94
HmAF-AiAF | −1.06 | 0.293 | 110 | −0.92 | 0.359 | 94
HmIO-AiIO | −0.47 | 0.638 | 110 | −0.40 | 0.688 | 94
HmDT-AiDT | −2.00 | 0.048 | 110 | −2.53 | 0.013 | 94
HmIN-AiIN | −0.36 | 0.722 | 110 | −1.38 | 0.172 | 94
HmAN-AiAN | −4.02 | 0.000 | 110 | −4.90 | 0.000 | 94
Table 8. Effective reliability and convergence.
Item Code | Item Loadings | AVE | C.R.
MAF1i | 0.783 | 0.637 | 0.875
MAF2i | 0.866
MAF3i | 0.754
MAF4i | 0.784
MIN1i | 0.681 | 0.508 | 0.755
MIN2i | 0.693
MIN3i | 0.761
MAN5c | 0.748 | 0.517 | 0.763
MAN7c | 0.675
MAN8c | 0.733
MIO1i | 0.914 | 0.685 | 0.866
MIO3i | 0.710
MIO4i | 0.846
Mval1i | 0.829 | 0.658 | 0.885
Mval2i | 0.742
Mval3i | 0.834
Mval4i | 0.836
MDT2c | 0.789 | 0.671 | 0.858
Table 9. Validity of correlations and discriminations between components.
Construct | AVE | IN | AF | Val | AN | DT | IO
IN | 0.508 | 0.713
AF | 0.637 | 0.202 | 0.798
Val | 0.658 | 0.085 | 0.419 | 0.811
AN | 0.517 | −0.411 | −0.083 | −0.035 | 0.719
DT | 0.671 | 0.220 | −0.118 | −0.383 | −0.090 | 0.819
IO | 0.685 | 0.157 | 0.702 | 0.373 | −0.217 | −0.298 | 0.828
Table 10. The model fits.
Fit Index | Recommended Value | Result
χ2/df | <2 | 1.737
RMSEA | <0.08 | 0.082
SRMR | <0.1 | 0.090
CFI | >0.80 | 0.896
GFI | >0.80 | 0.802
PCFI | >0.50 | 0.764
PGFI | >0.50 | 0.618
PNFI | >0.50 | 0.673
IFI | >0.80 | 0.898
TLI | >0.80 | 0.877
Table 11. The model path analysis.
Hypotheses | Path Coefficient | t-Value | S.E. | p-Value | Support
H1: AF→IO | 0.659 | 6.541 | 0.126 | *** | Yes
H3: AF→Val | 0.419 | 3.776 | 0.118 | *** | Yes
H4: Val→DT | −0.404 | −3.647 | 0.121 | *** | Yes
H6: DT→IO | −0.237 | −2.728 | 0.093 | 0.006 | Yes
H7: IN→AN | 0.254 | 2.149 | 0.16 | 0.032 | Yes
H9: IN→DT | −0.411 | −3.088 | 0.163 | 0.002 | Yes
H10: AN→IO | −0.184 | −2.082 | 0.105 | 0.037 | Yes
Notes: (1) ‘***’ denotes significance at the 0.001 level, namely p < 0.001. (2) The analysis of the structural equation model yielded the following R2 values: Val: 0.175, AN: 0.169, DT: 0.21, and IO: 0.573.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
