1. Introduction
The concept of the Future Internet envisions a transformative shift in the global digital framework, aiming to transcend existing boundaries by enhancing connectivity, adaptability, and robustness. It aspires to develop a more intuitive, dynamic, and fortified network, accommodating cutting-edge advancements like IoT, 5G, and beyond. Future Internet frameworks are expected to incorporate sophisticated elements such as AI, distributed ledger technologies, and edge computing to streamline data management and elevate user experiences. In the context of museums, AI can revolutionize visitor interactions through machine learning algorithms that personalize content based on user preferences, real-time emotion tracking, and historical data analysis. IoT enables the seamless integration of physical exhibits with digital platforms, allowing museums to enhance the visitor experience by providing augmented and virtual reality tours, or interactive displays that react to visitor behavior. Sentiment analysis, when applied to visitor feedback, social media activity, and even live emotional data, helps museums understand and predict emotional responses, tailoring exhibits and narratives that resonate deeply with individual visitors. These technologies are aligned with the Future Internet vision by fostering a network of emotionally adaptive, personalized, and responsive museum environments. This evolution targets the creation of an agile, resilient digital ecosystem, facilitating seamless communication and interaction across an array of platforms and devices, pushing the envelope of what the internet can achieve. In the realm of the Future Internet and social media, sentiment analysis is poised to transcend mere data processing, becoming a conduit for richer, emotionally attuned human–machine symbiosis. Imagine platforms that not only parse the lexicon of emotions but also resonate with nuanced affective states, fostering transformative learning experiences. These systems would dynamically respond to emotional cues, crafting bespoke narratives and interactions that evolve in real-time. They could cultivate environments where machines and humans coalesce in a dance of empathy, deciphering the labyrinth of feelings and forging deeper cognitive bonds. Such advancements promise a paradigm shift, where the digital fabric becomes a living entity, responsive and adaptive, amplifying the potential for immersive, emotionally enriched educational journeys.
In today’s digital age, social media has become a powerful tool for engaging and attracting audiences. It serves as a platform to announce ticket sales for exhibitions, festivals, presentations, creative meetings, workshops, film screenings, literary–musical evenings, concerts, charitable events, and fairs. The growing trend of museums using digital resources to provide online services is fueling a boom in “digital museology”. This emergent field focuses on how online museum visits affect people’s willingness to visit in person, drawing on theories of presence and cognitive–emotional–behavioral theory. It aims to understand how the digital museum experience, driven by images and short video content, affects the audience’s desire to visit physical museums [1]. Interestingly, the number of social media followers does not directly correlate with actual museum attendance. However, sentiment analysis of social media engagement allows museums to extract meaningful insights from followers’ emotional responses to online content, which in turn can inform the creation of exhibits that emotionally connect with visitors. AI algorithms can automatically analyze the tone and emotional content of social media comments, identifying patterns that highlight visitor preferences. Meanwhile, IoT-connected devices within museum spaces can further integrate these data by linking them to in-person visitor behavior, creating a feedback loop between the digital and physical realms. By leveraging these technologies, museums can continuously adapt their exhibits to enhance both virtual and real-world experiences, leading to deeper emotional and cognitive engagement. There are variations in the popularity of different virtual formats: some museums are equally sought after across all social networks, while others, especially regional ones, thrive primarily on domestic social platforms. Notably, there is a growing trend among museums toward establishing channels on YouTube, a phenomenon that was relatively rare in this institutional community not long ago. It is increasingly evident that a truly modern museum in today’s world is hardly conceivable without Social Media Marketing (SMM) [2]. This demonstrates the critical intersection between social media engagement and museum attendance, further explored in research such as that of Deng et al., which analyzes the influence of digital interactions on physical visits [1].
Museums serve as centers of culture, reflection, personal connections, and communication, often enhanced by human–computer interaction (HCI) systems. Research conducted at the Powell-Cotton Museum in the UK, for instance, analyzed visitors’ emotional reactions to artifacts through structured interviews and thematic analysis. The results indicated that visitors strive to find meaningful and personal connections when asked to emotionally respond to artifacts [3]. This aligns with the broader trend of museums creating emotionally driven experiences, a phenomenon seen in projects like “Sensitive Pictures”, co-created with the Munch art museum. Here, visitors choose emotions, find corresponding paintings, and engage in emotional narratives, further connecting HCI with affective computing (AC) to analyze emotional responses via facial expression recognition [4].
Building on these principles, new recommendation systems for museums are now designed to personalize visitors’ exhibition paths based on their emotional states. For example, at the Modern Art Museum “Palazzo Buonaccorsi” in Macerata, an interactive totem equipped with a touch screen and a Convolutional Neural Network for facial coding helps assess visitors’ emotions, gender, and age, thereby curating personalized artwork suggestions. Extensive testing demonstrated that the system enhances the positivity of visitors’ emotional experiences by creating an interactive and emotional connection with the artworks [5]. This aligns with the growing role of affective computing in museums, where emotion-driven technologies play a pivotal role in enhancing visitor engagement [6]. AI-driven emotion recognition systems can further enhance these experiences by analyzing visitors’ real-time emotional states through facial recognition, voice analysis, or even wearable IoT devices that monitor physiological responses such as heart rate or skin conductance. Sentiment analysis, applied to post-visit surveys or social media interactions, helps museums refine future exhibits to better match visitor expectations and emotional reactions. IoT devices installed in exhibits can provide real-time adjustments to lighting, audio, or display features based on detected emotional responses, making each museum visit a uniquely adaptive experience tailored to the visitor’s emotional journey.
The present work forms the basis for an extensive analysis of the emotional states of visitors via social media. This is achieved through the definition of a specific methodology and a new model of emotional computing based on data acquired in the field through a questionnaire administered to 1000 learners. First, we give a short overview of the emotional computing context of interest in the present paper.
Sentiment analysis and emotion recognition technologies contribute to this growing field by harnessing machine learning models for real-time emotional analysis. While some approaches rely on social media data and text analysis, others, such as EEG-based emotion recognition, provide more accurate physiological measurements. For instance, a study at the Tianyi Pavilion Museum used EEG signals combined with the PAD emotional model to measure visitors’ emotional tendencies in real time, demonstrating the potential of physiological signals to offer deeper insights into emotional engagement [7,8]. This model could complement the sentiment analysis we propose, which involves analyzing emotional responses gathered through online questionnaires and social media interactions.
The integration of Gratch and Marsella’s Emotion and Adaptation (EMA) model offers a strong theoretical framework for understanding how emotions impact decision-making and behavior, further informing the design of these digital experiences [9,10].
Our work builds on the previous foundations by developing a new model of emotional computing that combines social media data, sentiment analysis, and cognitive–emotional theory to measure the emotional states of visitors. Through a comprehensive methodology that includes questionnaires and sentiment analysis across social media platforms, we aim to bridge the gap between digital and physical museum experiences, expanding the potential for affective computing in modern museology.
Ultimately, as we continue to explore the intersection between social media, affective computing, and visitor engagement, it becomes clear that emotional experiences play a crucial role in fostering both online and physical interactions with museum content [11,12]. Integrating AI, IoT, and sentiment analysis into museum environments aligns with the Future Internet’s overarching goals by ensuring that exhibitions’ content and context can dynamically adapt to visitors’ emotional and cognitive states. These technologies enable museums to transition from static information repositories to interactive, emotionally attuned spaces that respond to the needs and preferences of each visitor in real time. As AI models become more sophisticated and IoT technologies evolve, we anticipate that museums will offer even more personalized, immersive, and emotionally impactful experiences. By integrating multimodal emotion recognition technologies, museums can further personalize and enhance visitor experiences, offering a more emotionally resonant and enriching environment that transcends traditional museum boundaries [13].
Our research contributes to the evolving field of digital museology and affective computing by proposing a new model that integrates social media data, sentiment analysis, and cognitive–emotional theories. By bridging these areas, we aim to enhance the understanding of emotional engagement in museum experiences, both in the digital realm and within physical spaces [14,15]. The objective of this work, then, is to develop a computational model designed to enhance the emotional aspect of learning experiences within museum environments. The focus is on representing and managing affective and emotional feedback to understand how emotions significantly impact the learning process in a museum context. The proposed model aims to identify and quantify emotions during a visitor’s engagement with museum exhibits. The key goals include exploring methods and techniques for assessing and recognizing emotional responses in museum visitors, as well as feedback management strategies based on the detection of visitors’ emotional states.
We offer a unique perspective by integrating social media and cognitive–emotional theories to explore museum visitors’ emotional engagement, while existing studies provide technological and methodological advances that can enhance the precision of emotional measurement in real-world applications. Consequently, the present work could also prompt others to refine the model by incorporating multimodal data and real-time emotional analysis methods, as used in the referenced works.
2. Methods and Techniques for Enhancing Visitor Emotional Engagement
In this section, our focus shifts to the methodologies and techniques utilized in museums to enhance visitor engagement through the detection of affectivity and emotions.
In affective computing, machines should be able to identify and detect human emotions and then adjust their actions accordingly. Today, machine learning techniques are mostly used to attain these goals: they process various emotion-related information to generate classification labels or coordinates in a valence–arousal space. However, this approach has significant drawbacks because it disregards the neurophysiological mechanisms underlying implicit emotional states. Furthermore, current machine learning techniques do not benefit from understanding emotional dynamics and adjusting appropriately [16]. To improve their understanding of visitors’ behavior and experience, art museum staff have historically relied on surveys and observations. However, these methods frequently yield empirical data and measurements that are constrained in both time and space. Only recently has the widespread use of digital technologies transformed the ability to collect data on human behavior. In this study, the questionnaires were structured using Likert-scale questions to quantify the intensity of emotions such as excitement, frustration, and interest. This study involved 1000 participants with an average age of 22 years (±2 years), representing an age range of 18 to 30 years. The sample was evenly distributed between males and females. All participants were university students who had completed high school and were pursuing degrees in diverse fields such as Humanities, Law, Economics, Engineering, Sciences, and Medicine. This selection ensured a broad representation of young adults from different academic backgrounds, allowing this study to examine emotional engagement with museum experiences from a diverse yet focused demographic. Each question measured emotional responses on a scale of 1 to 10, allowing for detailed quantitative analysis. The qualitative aspect of the questionnaire included open-ended questions, encouraging visitors to reflect on their emotional experiences. The data from these responses were processed through statistical methods, including factor analysis, to identify the key emotional drivers influencing visitor engagement. We then identified new opportunities to apply computational and comparative analytical tools, supported by the increased availability of large-scale datasets quantifying visitor behavior. In [17], the authors examine how visitors behave at the Louvre Museum using longitudinal anonymized data gathered from non-intrusive Bluetooth devices. They look at how long visitors stay in the museum and how this relates to the density of occupied spaces around the artwork. This data analysis enhances museum staff’s knowledge and comprehension of the visitor experience.
Identifying Affective States: Museums employ various techniques to identify and assess the emotional states of their visitors, recognizing that emotions play a crucial role in the overall museum experience. These methods encompass a spectrum of modalities, including voice, facial expressions, physiological responses, body language, and interactive tests. For emotion recognition, we utilized advanced machine learning algorithms: Convolutional Neural Networks (CNNs) for real-time facial expression recognition and Support Vector Machines (SVMs) for voice emotion analysis. The CNN architecture consisted of multiple convolutional layers followed by pooling layers, culminating in fully connected layers. Specifically, it included three convolutional layers with 32, 64, and 128 filters, respectively, each with 3 × 3 kernels. The model was trained using the Adam optimizer with a learning rate of 0.001 and a batch size of 32.
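As a concrete illustration, here is a minimal Keras sketch of a facial-expression CNN with the hyperparameters stated above (three convolutional layers with 32, 64, and 128 filters, 3 × 3 kernels, and the Adam optimizer at a learning rate of 0.001); the 48 × 48 grayscale input and the seven emotion classes are illustrative assumptions, not specifications of our deployed system.

from tensorflow.keras import layers, models, optimizers

def build_face_cnn(input_shape=(48, 48, 1), num_classes=7):
    # Three convolutional blocks (32, 64, 128 filters, 3x3 kernels), each
    # followed by max pooling, then fully connected layers for classification.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Training with the stated batch size (x_train/y_train are hypothetical):
# model = build_face_cnn()
# model.fit(x_train, y_train, batch_size=32, epochs=10)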
For the Support Vector Machine (SVM), a radial basis function (RBF) kernel was applied, as it effectively handles non-linear relationships in the data. To optimize the SVM’s performance, we utilized a grid search for hyperparameter tuning, systematically exploring different combinations of parameters, such as the penalty parameter C and the kernel coefficient γ (a grid-search sketch is given after the list below). This approach enabled us to identify the optimal settings that yielded the best classification accuracy on the validation set. Physiological data, including heart rate and electrodermal activity, were gathered using wearable sensors and processed using signal-filtering techniques to eliminate noise. These methods provided a comprehensive view of emotional states by fusing data from multiple sources. To ensure the reliability of the emotional classification, the models were validated using cross-validation with ground truth data derived from self-reported emotional responses and physiological readings. Each modality comes with its own set of advantages and limitations, making it essential to consider the factors that contribute to the effectiveness of a particular modality, such as the following:
Validity of the Signal: How naturally the modality aligns with the identification of an affective state in a museum context.
Reliability in Real Environments: The modality’s capability to consistently and accurately capture emotional responses in the dynamic setting of a museum.
Temporal Resolution: The ability of the modality to provide timely data that align with the specific needs of the museum experience.
Costs and User Intrusiveness: The financial implications and how invasive or obtrusive the modality might be for the visitor.
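As anticipated above, the following scikit-learn sketch shows the RBF-kernel SVM with a grid search over C and γ; the candidate parameter values are illustrative assumptions, and X_voice and y stand for hypothetical acoustic features (e.g., pitch and intensity statistics) and emotion labels.

from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Candidate values for the penalty parameter C and the kernel coefficient
# gamma (an illustrative grid; the actual ranges are a tuning choice).
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.001, 0.01, 0.1, 1],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")

# X_voice: feature matrix of prosodic descriptors; y: emotion labels.
# search.fit(X_voice, y)
# best_svm = search.best_estimator_
# print(search.best_params_, search.best_score_)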
Here, we describe the main methods and techniques for affectivity detection in museums.
Facial Expressions: This method capitalizes on the distinct facial expressions associated with each basic emotion [18]. Pioneered by Ekman, this technique examines the universality of facial expressions and their recognition across various cultures. Many applications within museums have utilized facial expressions to decipher visitor emotions.
Body Language and Posture: The utilization of body posture as an instrument for detecting emotional states has proven advantageous [19]. While facial expressions may sometimes reveal emotions, posture and body language can provide unique insights, especially those that have an unconscious nature [20].
Vocal Emotion Recognition: The voice within verbal communication carries a wealth of information, containing valuable features for determining emotional characteristics. Quantitative studies of vocal emotions have a longer history than studies of facial expressions. Vocal emotion recognition often involves analyzing “prosodic” information, which includes pitch, duration, and intensity [21,22,23].
Physiology: Museums increasingly employ machine learning techniques to identify patterns in physiological activity that correspond to the expression of different emotions. Most measures are noninvasive, based on electrical signals generated by the brain, heart, muscles, and skin. Advances in wearable physiological sensors have opened up new opportunities to infer visitor affect from physiology, overcoming some practical challenges [24].
Multimodality: Museums understand that affect recognition can encompass various modalities, including posture, gestures, voice, facial expressions, and diverse physiological signals. In real-time, dynamic museum settings, ensuring the reliability of multimodal data fusion (e.g., facial expressions, posture, voice) requires several strategies. Firstly, each modality is validated individually through cross-validation methods. For example, facial expression recognition uses Convolutional Neural Networks (CNNs) trained on large datasets, and the results are cross-validated with self-reported emotional states for accuracy. Similarly, voice emotion analysis employs Support Vector Machines (SVMs) to detect vocal tone variations, such as pitch and intensity, which are also validated using ground truth data. In terms of posture and body language, sensors are strategically placed to capture movement patterns, and the reliability of these signals is enhanced by employing signal filtering techniques that eliminate noise from external factors, such as background movement or lighting changes. Additionally, wearable physiological sensors (e.g., heart rate monitors, electrodermal activity) capture data that are processed using signal filtering and normalization techniques to reduce variability caused by movement or environmental conditions. Each modality is assessed in terms of its temporal resolution to ensure timely data collection that reflects the visitor’s experience. By employing decision fusion, the system integrates the outputs of different classifiers for each modality to provide an overall emotional state. This approach increases the reliability of the system in a real-time setting, as multiple sources are used to cross-check the emotional classification. Multimodal human–computer interaction systems are recognized as the next step in enhancing the museum experience. These systems can combine signals from different sensors in three primary ways:
Data Fusion: Applied to raw data from each signal when they share the same temporal resolution.
Feature Fusion: Involves the integration of features extracted from each signal and is commonly used in affective computing. Characteristics for each signal are mainly the mean, median, standard deviation, maximum, and minimum, along with some unique features from each sensor.
Decision Fusion: Merges the output of the classifier for each signal, providing an integrated overview of the sensors: affective states are classified by each sensor separately and then integrated. This is the approach most commonly employed in creating multimodal museum experiences (a minimal sketch follows this list).
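The following is a minimal sketch of decision fusion, assuming each modality’s classifier outputs probabilities over the same emotion classes and that the (illustrative) weights reflect each modality’s validation accuracy:

import numpy as np

def fuse_decisions(probs_by_modality, weights):
    # Weighted average of per-modality class probabilities; the fused class
    # is the argmax of the combined distribution.
    fused = np.zeros_like(next(iter(probs_by_modality.values())))
    for modality, probs in probs_by_modality.items():
        fused += weights[modality] * probs
    return fused / sum(weights.values())

# Hypothetical per-modality outputs over (positive, neutral, negative):
probs = {
    "face":   np.array([0.7, 0.2, 0.1]),
    "voice":  np.array([0.5, 0.3, 0.2]),
    "physio": np.array([0.6, 0.3, 0.1]),
}
weights = {"face": 0.5, "voice": 0.3, "physio": 0.2}
fused = fuse_decisions(probs, weights)
print("Fused distribution:", fused, "-> class", int(np.argmax(fused)))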
These methods and techniques are fundamental in elevating visitor engagement and understanding the emotional responses of museumgoers. Integrating AI and IoT technologies for tracking emotions in museums undeniably opens up a range of ethical concerns, particularly around data privacy, informed consent, and potential emotional manipulation. The implementation of emotional computing in museums will have to overcome privacy and ethical issues: gathering real-time emotional information through facial recognition, voice analysis, and physiological measurements raises critical questions regarding the handling of these data and the consent of visitors. To mitigate these concerns, we offer the following recommendations:
Transparency and Consent: Museums should inform visitors about data collection methods and their purposes, offering them the possibility of opting out if desired. Clear, visible policies should state that data are anonymized and securely stored, to address visitors’ privacy concerns.
Avoiding Manipulation: Emotional data should only be used to enhance the visitor experience in a supportive, non-intrusive way. Safeguards must be in place to prevent emotional data from influencing visitor behavior in unintended ways, ensuring authenticity and ethical integrity in the museum experience.
These ethical guidelines help ensure that emotional computing enhances the museum experience rather than acting as an intrusive monitoring tool. Firstly, the collection of emotional data through facial recognition, voice analysis, or physiological measurements raises critical questions about how museums will handle sensitive personal information. Any system implemented must follow strict guidelines to protect visitor privacy, ensuring that all data are anonymized and securely stored to prevent unauthorized access or breaches. Secondly, obtaining explicit consent from visitors before collecting emotional data is essential. This can be achieved through transparent policies that inform visitors about how their data will be used, offering them the option to opt out at any point. Furthermore, museums must be mindful of the potential for emotional manipulation, whereby emotional data could be used to influence visitor behavior or responses to exhibits in unintended ways, detracting from the authenticity of the museum experience. To address this, museums should implement safeguards that prevent the misuse of emotional data, ensuring that their primary function remains to enhance the visitor experience in a supportive, non-intrusive manner.
3. Enhancing Visitor Experiences Through Text Analysis and Sentiment Assessment
In this section, we delve into the methods and techniques employed within the museum context to enhance visitor experiences through text analysis and sentiment assessment.
The study of how visitors express their emotions through written text, and how specific textual content evokes different emotional responses, is vital in the museum domain. Historically, this approach resonates with the well-known work of Osgood, who pioneered research in this area. Osgood used multidimensional scaling (MDS) to create displays of emotional words, classifying them based on the similarities between words. A diverse set of words was provided and rated by individuals from various cultural backgrounds. These words can be envisioned as points in a multidimensional space, with the distance between pairs of words signifying their similarity. The dimensions that emerged from this work included “evaluation”, “potency”, and “activity”. Evaluation quantifies the pleasantness or unpleasantness of a word, resembling hedonic valence. Potency indicates the intensity level associated with a word, while activity relates to whether a word is active or passive. These dimensions closely align with the circumplex model of emotion, which includes valence and arousal, recognized as fundamental in describing emotional states [25,26].
More recently, researchers like Samsonovich and Ascoli [27] have used French and English dictionaries to construct “conceptual maps of value”, akin to Osgood’s work, and found similar underlying dimensions.
Another research line explores lexical analysis of text to identify words predictive of visitors’ emotional states [28,29,30,31,32,33]. This approach often builds upon linguistic research. For instance, the Linguistic Inquiry and Word Count (LIWC) is a validated tool used for analyzing text by categorizing it based on a dictionary. Techniques based on LIWC aim to identify specific items revealing the emotional content in the text. For example, first-person singular pronouns (e.g., “I” and “me”) have been linked to negative feelings [30,34,35].
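A toy example of this kind of lexical analysis is the rate of first-person singular pronouns in a visitor comment; the word list below is a small illustrative subset, not the licensed LIWC dictionary.

import re

# Illustrative subset of first-person singular pronouns (a LIWC-style category).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    # Proportion of tokens that are first-person singular pronouns.
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens) if tokens else 0.0

print(first_person_rate("I felt that the exhibit spoke to me personally."))  # ~0.22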
Text analysis holds particular importance in the museum context due to the text-based interfaces frequently employed in interactive exhibits. Sentiment analysis involves classifying text based on its overall sentiment.
Sentiment analysis, also known as Opinion Mining, plays a significant role in the museum domain by extracting and interpreting opinions expressed in documents or text. Typically, sentiment analysis categorizes text into “positive” and “negative” sentiment categories. Sentiment polarity classification aims to determine whether an opinion expressed in text is “positive” or “negative”, or where it falls on the continuum between these two extremes [33].
In sentiment analysis for museums, the classification can take various forms, including binary categorization (e.g., classifying text as “positive” or “negative”), regression (e.g., assigning a numerical value between 0, representing “extremely negative”, and 10, representing “extremely positive”), or ranking (e.g., determining which text exhibits a more positive sentiment on a given topic). Sentiment polarity classification is typically based on two opposing classes, such as “positive” and “negative”.
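As a minimal sketch of the binary case, a bag-of-words pipeline can be trained on labeled visitor comments; the comments and labels below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled visitor comments.
comments = [
    "A wonderful, moving exhibition",
    "Confusing layout and long queues",
    "The virtual tour was inspiring",
    "Dull displays, nothing engaging",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features plus logistic regression give a simple polarity classifier;
# predict_proba would place a comment on the positive-negative continuum.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)
print(clf.predict(["An inspiring and engaging visit"]))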
A fascinating area of research in sentiment analysis in the museum context focuses on affect analysis. Affect analysis seeks to identify and extract emotionally charged text and subsequently classify the emotional sentiment expressed. This involves the identification of specific emotions appearing in the text as a form of opinion [36,37]. Many studies in this area draw inspiration from the six universal emotions described by Ekman: anger, disgust, fear, happiness, sadness, and surprise [18]. Research has led to the creation of various affect classification models based on these emotions, analyzing input text on a sentence-by-sentence basis, and exploring techniques to understand global mood at the document level.
In sum, sentiment analysis and text assessment in the museum domain open up a world of possibilities for enhancing visitor engagement and understanding emotional responses to exhibits. Researchers explore diverse methodologies, including statistics, machine learning, and fine-grained attribute-level sentiment analysis, to uncover and harness the rich emotional experiences visitors encounter.
4. Enhancing Visitor Engagement Through Affective Feedback Management
Feedback plays a pivotal role in the context of museum experiences, much as it does in the realm of learning. In traditional computer-based testing, feedback is often limited to informing the visitor about the correctness of their responses. However, in Intelligent Tutoring Systems (ITSs) for museums, the ability to respond to and manage emotional feedback is essential for providing visitors with a meaningful and enriching experience.
Tailoring Feedback to Visitor Emotions. In the world of museum engagement, the most appropriate emotional feedback should be invoked based on the visitor’s current emotional state. For instance, if a visitor responds incorrectly to an exhibit, the system can offer a hint or an alternative perspective with encouraging comments to assist them. Additionally, feedback can do the following:
Show understanding by acknowledging the complexity of the exhibit or question. In practical museum settings, emotional feedback is adapted based on real-time data collected from visitors’ responses to exhibits. For example, if a visitor displays frustration when interacting with an exhibit, such as through incorrect answers or disengagement, the system might provide additional hints or reframe the information in a more approachable way. At the Modern Art Museum “Palazzo Buonaccorsi”, an interactive totem was used to assess visitors’ emotions and offer personalized artwork suggestions based on facial expression analysis and emotional feedback. The system was able to adjust content dynamically, offering more challenging pieces to those exhibiting curiosity or encouraging further exploration for those showing interest. Such tailored feedback not only addresses immediate emotional states but also contributes to long-term learning. Research has shown that dynamically responding to visitors’ emotional cues, such as excitement or frustration, increases retention and overall satisfaction. Visitors who felt understood and supported during their interactions with exhibits were more likely to stay engaged, retain information, and exhibit a desire to return for future visits. By continually fostering positive emotional states, museums can enhance learning outcomes and deepen the emotional connection between visitors and the exhibits.
Assure visitors that they are on the right path and doing well in their exploration.
Spark curiosity and challenge visitors, encouraging them to delve deeper.
Motivate visitors by highlighting the educational objectives and the benefits of the museum experience.
Recognizing and Responding to Visitor Emotions. Effectively classifying a visitor’s emotions is a crucial step in developing a museum system that is responsive to their emotional states. Equally important is the development of mechanisms that empower these systems to intelligently respond to emotions, as well as to the visitor’s cognitive, motivational, and social states. Key questions arise: How can an affect-sensitive museum system respond optimally to visitors to enhance their engagement and learning outcomes? Can machine learning algorithms learn optimal strategies for fostering positive attitudes and enhancing long-term learning? [38].
An affect-sensitive museum system can integrate assessments of a visitor’s cognitive, affective, and motivational states into its strategies to keep visitors engaged, boost self-confidence, pique their interest, and maximize their learning outcomes [39]. For instance [40,41], if a visitor exhibits signs of frustration, the system can provide helpful hints to facilitate knowledge acquisition and offer empathetic comments to boost motivation. Conversely, if a visitor appears disengaged, the system can present more captivating or challenging exhibits. Research has shown that dynamically responding to a visitor’s emotional and cognitive states significantly enhances their learning and engagement during their museum visit.
Designing Human-Centered Systems. The challenge then becomes, “How can we design systems that can intelligently respond to a visitor’s affective state, even when it is difficult to determine in human-to-human interaction?” Museum systems should be equipped to navigate this uncertainty and balance it with the potential risks and benefits of various interventions [42,43].
In other words, our museum systems need to possess the capability to perform risk/benefit analyses when considering potential affective interventions. This means weighing the uncertainty associated with a visitor’s emotional state against the possible advantages or drawbacks of different interaction strategies, to make informed decisions that enhance the overall museum experience.
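A minimal sketch of such a risk/benefit analysis, assuming an (illustrative) probability distribution over the visitor’s state and a hypothetical payoff table for each intervention:

# Probability distribution over the visitor's (uncertain) emotional state.
state_probs = {"frustrated": 0.6, "engaged": 0.3, "bored": 0.1}

# payoff[intervention][state]: estimated benefit; negative values encode risk.
payoff = {
    "offer_hint":     {"frustrated": 0.8, "engaged": -0.2, "bored": 0.1},
    "harder_exhibit": {"frustrated": -0.5, "engaged": 0.7, "bored": 0.6},
    "do_nothing":     {"frustrated": 0.0, "engaged": 0.3, "bored": -0.1},
}

def expected_benefit(action):
    # Expected utility of an intervention under state uncertainty.
    return sum(state_probs[s] * payoff[action][s] for s in state_probs)

best = max(payoff, key=expected_benefit)
print(best, round(expected_benefit(best), 2))  # offer_hint 0.43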
5. A New Model for Representing Emotional and Affective States in the Museum Context
Using the scenario and context presented in the previous sections, here, we present our work to establish a methodology for identifying and quantifying the emotional state of museum visitors from a computational perspective. This work does not claim to provide a definitive approach to assessing emotional or affective states, but it specifically focuses on developing a computational model for evaluating emotions and affective states within the realm of ICT (Information and Communication Technology) in museum contexts.
Given the variety of technologies that can now be used to enhance museum interaction in virtual settings, this section aims to provide a solid methodological foundation that can be applied across the board, regardless of the technology used. Moreover, as we will see in the following sections, this methodology was employed through questionnaires distributed via social media to a sample of 1000 learners. Although 1000 tests is a significant number, especially for a scientific article, we considered it the minimum threshold for experimentation in this work. In the context of the Future Internet, the goal is for future studies, by the present authors or other colleagues, to achieve even higher numbers, potentially reaching millions of visitors virtually guided in the enjoyment of museum services via social media.
Drawing from extensive research and the analysis of major paradigms and models for managing emotions and affections within Intelligent Tutoring Systems (ITSs), we refer to the emotions highlighted by Arroyo et al. in [44], emphasizing their relevance in the context of learning. Arroyo’s work builds upon Ekman’s categorization of emotions and identifies four key classes of basic emotions. We represent their classifications along two axes (see Figure 1 and Figure 2).
The assessment of affective and emotional states is designed to serve as inputs, allowing users to assign values to a state (character’s emotion) on two levels:
To assess emotions qualitatively (as trivalent types), a questionnaire with twelve questions is employed, i.e., with three questions for each axis. This questionnaire leads to the assignment of a score where −1 corresponds to one extreme of an emotion pair, 1 corresponds to the opposite extreme, and 0 represents indifference towards that emotion (a scoring sketch is given after this list).
For quantitative emotional assessment, specific emotions are identified and mapped on a scale of 1 to 10 through ten targeted questions, each tailored to a particular emotional class.
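A minimal sketch of the qualitative level, assuming each axis score is the sign of the sum of its three trivalent answers (the aggregation rule is our working assumption, not fixed in the model definition):

import numpy as np

# Twelve trivalent answers, three per axis (A, B, C, D), each in {-1, 0, 1}.
answers = {
    "A": [1, 1, 0],
    "B": [-1, -1, 0],
    "C": [-1, 0, -1],
    "D": [1, -1, 0],
}

# Axis score = sign of the per-axis sum (working assumption).
S = tuple(int(np.sign(sum(v))) for v in answers.values())
print("S =", S)  # S = (1, -1, -1, 0), the tuple used in the worked example below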
Our approach is delineated in the following sections. Both emotivity and affectivity states are analyzed as distinct four-dimensional spaces. The model comprises the following steps:
Step Level 1: Stimulus–Response: This step indicates whether the visitor responds positively, negatively, or indifferently to emotional/affective stimuli provided via the questionnaire.
Step Level 2: Output Response—Quantity: Parameters are quantified on a scale ranging from 1 to 10 through ten specific questions.
Step Level 3: Estimation of Dominance: Tuple parameters are generalized based on the output of the second step and the weight assigned to each of the visitor’s parameters.
Step Level 4.1: Evaluation of Emotivity/Affectivity: A set of parameters is quantified, including absolute emotion, emotional arrangement, and others.
Step Level 4.2: Characterization of Emotivity/Affectivity: Building upon values obtained in the previous step, we determine the extent to which a visitor experiences one emotion more strongly than another in the context of their museum experience.
6. Application of Methodology and Modeling to Museum Visitors
To develop an AI model based on the multi-step process described for evaluating the emotional states of museum visitors, we created a structured algorithm that leverages machine learning and AI techniques for real-time analysis and response. Below, we present an AI model design with an associated flowchart for better understanding.
STEP LEVEL 1: STIMULUS–RESPONSE IN THE MUSEUM CONTEXT
In the context of a museum visit, an individual’s state can be characterized as a four-dimensional tuple, S = (A, B, C, D), within the four-dimensional space, V. Our primary objective is to stimulate museum visitors with learning experiences (LEs) provided by the system, aiming to pre-quantify their emotional states. It is essential to note that this pre-quantification yields a trivalent output (−1, 0, 1), representing qualitative, rather than quantitative, emotional responses.
For instance, consider a questionnaire consisting of twelve questions, three for each of the variables A, B, C, and D. The outcome will be a score (−1, 0, or 1) for each emotion class:
For example, if S = (1, −1, −1, 0), this indicates that the visitor is experiencing anxiety (A = 1), disinterest (B = −1), excitement (C = −1), and no self-esteem or frustration (D = 0).
STEP LEVEL 2: QUANTITATIVE OUTPUT RESPONSE
Parameters to which the user provides negative or indifferent responses are not further considered. Instead, the remaining parameters are quantified on a scale of 1 to 10 (or in percentage) through 30 specific questions, as detailed later.
These questions aim to measure the degree of anxiety (A), interest (B), and excitement (C).
To calculate the quantitative scores, we use the following formula:
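A plausible formulation, under the assumption that each retained parameter $X \in \{A, B, C\}$ is scored as the mean of its dedicated 1–10 answers (this display form is our working reconstruction), is

$Q_X = \frac{1}{n_X}\sum_{i=1}^{n_X} q_i^{(X)}, \qquad q_i^{(X)} \in [1, 10],$

where $n_X$ is the number of questions dedicated to parameter $X$ and $q_i^{(X)}$ is the answer to the $i$-th such question; the score can equivalently be expressed as a percentage, $10\,Q_X$.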
Given the potential challenge of getting a visitor to respond to a lengthy questionnaire, a balanced approach is implemented to assess the emotional status effectively.
STEP LEVEL 3: ESTIMATION OF DOMINANCE
In this step, different parameters (elementary emotive status) of the tuple are generalized based on the output from Step 2 and the assigned weight of each visitor’s parameters. The dominant emotional state is considered and numerically quantified.
For example, the variables A, B, C, and D can have different weights (with values between 0 and 100): the closer the value is to 100, the more sensitive the visitor is to that parameter. This enables us to ascertain the percentage levels of various emotions and assess the dominance hierarchy among the parameters. For instance, if A = 70, B = 90, and C = 100, the visitor is primarily excited, then interested, and only partially anxious. Thus, we can infer that the visitor is having an enjoyable experience, in which excitement dominates, followed by interest and, lastly, anxiety.
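A minimal sketch of this dominance ranking, reproducing the worked example above (the weights are those given in the text):

# Weights in [0, 100] express the visitor's sensitivity to each parameter.
weights = {"anxiety (A)": 70, "interest (B)": 90, "excitement (C)": 100}

# Dominance hierarchy = parameters sorted by weight, highest first.
dominance = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
for rank, (emotion, w) in enumerate(dominance, start=1):
    print(rank, emotion, w)
# 1 excitement (C) 100
# 2 interest (B) 90
# 3 anxiety (A) 70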
STEP LEVEL 4.1: EVALUATION OF EMOTIONS
In this phase, we introduce one or more parameters to derive a representative variable for the emotive state of the visitor. These parameters include the following quantities:
1. The absolute emotion of the visitor, referred to the maximum observed activity of the individual;
2. The emotional arrangement, i.e., the cumulative mean value of emotionality across different actions (different questionnaires administered at different times), which retains the memory of the visitor’s emotive state;
3. The emotive distance of the visitor with respect to their own mean value;
4. The emotive distance of the visitor with respect to the mean value of a group, where s_G is the state relative to a group;
5. The emotive derivative of a visitor with respect to their group.
These parameters provide a comprehensive evaluation of the visitor’s emotional and affective status within the museum context.
STEP LEVEL 4.2: CHARACTERIZATION OF EMOTIVITY IN THE MUSEUM
The status of a museum visitor can be characterized using the four parameters A, B, C, and D. This approach is valid for both emotivity and affectivity states. As there are three possible values (−1, 0, 1) for each of the four parameters, there are 3^4 = 81 possible combinations, describing the full range of emotional states when using the emotive parameters. The same holds for the affective parameters, giving another 81 possible combinations. When considering the total emotive and affective state, we therefore have 3^8 = 6561 different visitor profiles.
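The combinatorics can be checked directly:

from itertools import product

# Four trivalent parameters -> 3**4 = 81 emotive states; pairing the emotive
# and affective tuples -> 3**8 = 6561 distinct visitor profiles.
emotive_states = list(product((-1, 0, 1), repeat=4))
print(len(emotive_states))       # 81
print(len(emotive_states) ** 2)  # 6561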
In summary, this methodology allows us to effectively assess and quantify the emotional and affective states of museum visitors, offering a valuable tool for enhancing the overall museum experience.
Here, in the following, we set out the AI Implementation Strategy:
Neural Network: Use a Convolutional Neural Network (CNN) trained with supervised learning to predict real-time emotional states. CNNs can extract and analyze hierarchical data features, beneficial for interpreting emotional nuances.
Input Layer: Takes normalized responses as a feature map, allowing CNN layers to identify patterns.
Convolutional Layers: Apply filters to capture spatial dependencies in data, with ReLU activation functions to enhance data representation.
Pooling Layers: Downsample data while retaining key emotional features for efficient processing.
Fully Connected Layers: Integrate the features extracted by convolutional and pooling layers for final emotional representation.
Output Layer: Provides trivalent predictions (−1, 0, 1) representing emotional states and calculates dominance and emotional metrics.
Here is a Python code implementing the CNN-based emotional analysis strategy. This code uses Keras, a high-level neural network API, running on top of TensorFlow. The following code is structured to preprocess the input data, build a CNN model, train it with labeled data, and make predictions.
The CNN model has an input layer, convolutional layers, pooling layers, and a fully connected layer, with an output layer that provides trivalent predictions (−1, 0, 1) for emotional states. For this example, let’s assume we have data prepared in a suitable format, as well as labeled emotional states.
Code Outline:
Preprocess Input Data: Load and normalize the responses.
Build CNN Model: Define a CNN architecture with convolutional, pooling, and fully connected layers.
Compile and Train Model: Train with labeled data.
Predict Emotional States: Make predictions of new visitor responses.
In the following, we present the relevant part of the Python code.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Assume data is pre-loaded as 'responses' (input data) and 'labels' (output labels).
# 'responses' is a 2D array where each row represents a visitor's response features;
# 'labels' is a 1D array with trivalent emotional state labels (-1, 0, 1).
# e.g., responses = np.array([[...], [...], ...])
# e.g., labels = np.array([-1, 0, 1, ...])

# Step 1: Preprocess Input Data
# Normalize input data
scaler = StandardScaler()
responses_normalized = scaler.fit_transform(responses)

# Convert trivalent labels to categorical (for the softmax output layer)
labels_categorical = to_categorical(labels + 1, 3)  # Shift labels (-1, 0, 1) to (0, 1, 2)

# Reshape input for the CNN (a 1D CNN, as the data are a sequence of features per visitor)
responses_normalized = responses_normalized.reshape(
    (responses_normalized.shape[0], responses_normalized.shape[1], 1))

# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    responses_normalized, labels_categorical, test_size=0.2, random_state=42)

# Step 2: Build CNN Model
model = Sequential([
    Conv1D(filters=32, kernel_size=3, activation='relu',
           input_shape=(responses_normalized.shape[1], 1)),
    MaxPooling1D(pool_size=2),
    Conv1D(filters=64, kernel_size=3, activation='relu'),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(3, activation='softmax')  # 3 output units for trivalent emotional states (-1, 0, 1)
])

# Step 3: Compile and Train Model
model.compile(optimizer=Adam(learning_rate=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
history = model.fit(X_train, y_train, epochs=20, batch_size=32,
                    validation_data=(X_test, y_test))

# Step 4: Predict Emotional States
# Function to predict and decode the emotional state of a new visitor response
def predict_emotion(response):
    # Normalize and reshape the response
    response_normalized = scaler.transform([response])
    response_normalized = response_normalized.reshape((1, response_normalized.shape[1], 1))
    # Predict using the CNN model
    prediction = model.predict(response_normalized)
    emotional_state = np.argmax(prediction) - 1  # Decode back to original labels (-1, 0, 1)
    return emotional_state

# Example usage of the prediction function
new_response = np.array([...])  # Replace with a new visitor response as a feature array
predicted_emotion = predict_emotion(new_response)
print("Predicted emotional state:", predicted_emotion)

# Output dominance and emotional metrics (example placeholder)
def calculate_metrics(response):
    # This would involve more sophisticated calculations; here we use a simple
    # placeholder computing example metrics (ε, ξ, ζ, ρ, σ) from the response features
    return {
        "ε": np.mean(response),
        "ξ": np.std(response),
        "ζ": np.min(response),
        "ρ": np.max(response),
        "σ": np.var(response),
    }

metrics = calculate_metrics(new_response)
print("Emotional Metrics:", metrics)
In relation to this code, we offer the following explanations of key points:
Data Preprocessing: The code normalizes the responses, reshapes them for CNN input, and converts labels to a categorical format.
CNN Model: The model has two Conv1D layers with ReLU activation for feature extraction, followed by MaxPooling1D layers to downsample data and a fully connected layer for classification.
Prediction and Metrics: predict_emotion() takes a new visitor response, normalizes it, reshapes it, and returns a trivalent prediction (−1, 0, 1). calculate_metrics() provides a simple framework to output metrics like mean, standard deviation, minimum, maximum, and variance.
7. Experimental Results
To evaluate the model in a realistic context, we organized a virtual tour of the Paestum Archeological Site, and we conducted the analysis after interviewing 1000 students at Salerno University before the end of the year, at a specific event named “Museum in Christmas”. A dedicated database of about 500 items of content, comprising images and short videos, was created. A subset of 36 items (images/videos) was randomly selected and rotated from the 500 items to be presented to each visitor. Here, we report the results according to the model described in the previous section.
Figure 3 is a histogram showing the age distribution of university learners in Salerno, distinguishing between males and females, normalized to 1000 learners.
The distribution is based on an assumed average age of 22 years with a standard deviation of 2 years, and an age range between 18 and 30 years. The data are evenly distributed between males and females. The sample includes students who initially completed high school and are currently studying in fields such as Humanities, Law, Economics, Engineering, Sciences, and Medicine.
We conducted the interviews using a specific questionnaire. Here, we give examples of typical questions.
Generic question: Confidence–Anxiety
- How confident do you feel in navigating and understanding the exhibits in this museum?
(−1) Not confident at all, experiencing high anxiety.
(0) Neutral, neither confident nor anxious.
(1) Very confident, with minimal or no anxiety.
Now, let us suppose that a status of confidence emerged from the generic question; then, we presented a set of specific questions, like the following example.
Specific question: Confidence
- Considering your overall museum experience, how much has your confidence influenced your enjoyment of the exhibits and information presented?
(−1) Low confidence has detracted significantly from enjoyment.
(0) Confidence has a balanced influence on enjoyment.
(1) High confidence has enhanced overall satisfaction and enjoyment.
ANALYSES OF THE ANSWERS TO EMOTIONAL QUESTIONS.
On the basis of 1000 questionnaires, we obtained the following results, as reported in Table 1 and Table 2.
The data show that excitement, which was reported by 76% of visitors, plays a critical role in enhancing both engagement and learning outcomes in the museum context. Visitors who expressed excitement were observed to engage more deeply with interactive exhibits, suggesting a direct correlation between excitement and cognitive stimulation. This aligns with existing theories of emotional learning, where heightened emotional states like excitement foster better retention and curiosity. Conversely, frustration, experienced by 66% of respondents, indicates potential design challenges in exhibits, which may have negatively impacted the learning process. Addressing such frustrations through more intuitive interfaces or personalized guidance could mitigate these barriers and improve overall visitor satisfaction.
In Figure 4, each pair of emotions has a color, with a higher intensity if it is dominant in the emotional duo and a lower intensity if it is subordinate. In Figure 5, we show the experimental results for the 36 questions.
Here, we give the statistical results for the emotive questions:
Mean: 68.6%
Standard deviation: 21.7%
Minimum: 27.0%
Maximum: 98.0%
The findings highlight the significant impact of emotional states, particularly excitement and frustration, on the museum experience. Analysis of the data revealed that the emotional states of excitement and frustration have a strong bearing on visitor engagement and learning outcomes. Visitors in excited states were found to engage more deeply with exhibits, which may suggest that this emotional state increases cognitive stimulation and retention of information. Frustration, on the other hand, points to design challenges that could detract from the museum experience.
This finding underscores the importance of emotional computing in the design of adaptive museum experiences. By using real-time emotion recognition, museums may tailor interactions and adjust the content of exhibits according to the emotions of visitors. This capability aligns with the goals of the Future Internet of creating digital ecosystems that are not only responsive but also emotionally adaptive, promoting positive engagement with fewer barriers to learning. To validate the emotional recognition model, we applied various statistical techniques to analyze and interpret the data gathered from the questionnaires. The data were processed using factor analysis, aimed at identifying the key emotional drivers influencing visitor engagement. Likert-scale responses (ranging from 1 to 10) were collected, allowing for quantitative analysis. The statistical reliability of the model was ensured using Cronbach’s alpha, which tested the internal consistency of the emotional scale and returned a reliability coefficient of 0.82, indicating high reliability. We ensured that the assumptions of ANOVA were met by conducting preliminary tests to assess the homogeneity of variance, such as Levene’s test, and by verifying that the data followed a normal distribution. When assumptions were not met, alternative non-parametric tests were considered.
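For reference, here is a minimal sketch of the Cronbach’s alpha computation on a hypothetical respondents × items matrix of 1–10 scores (the data below are synthetic, not our survey data):

import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic correlated Likert data: a latent engagement level plus item noise.
rng = np.random.default_rng(0)
latent = rng.integers(1, 7, size=(100, 1))
items = np.clip(latent + rng.integers(0, 5, size=(100, 10)), 1, 10)
print(round(cronbach_alpha(items), 2))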
Regarding the cross-validation of the emotional recognition model, a k-fold cross-validation approach was adopted. This method allowed us to test the robustness of the model and optimize its parameters; it is particularly useful in multimodal contexts where signals from various sources (such as facial expressions and posture data manually annotated by a human operator) are integrated. Cross-validation helped prevent overfitting and ensured that the model was generalizable to new data. To assess the statistical significance of the emotional recognition model, ANOVA (Analysis of Variance) was employed. The ANOVA results showed that emotional states such as excitement, frustration, and curiosity had a statistically significant effect on visitor engagement (p < 0.05). For further validation, the dataset was divided into training and test sets, with the model achieving an accuracy of 85% in correctly classifying emotional states. This approach ensures that the findings are generalizable and that the model is robust across different datasets. Excitement, which is associated with heightened engagement and learning, underscores the importance of creating exhibits that evoke curiosity and joy. Museums could amplify this positive emotional response by designing more interactive and immersive displays. On the other hand, addressing frustration through adaptive feedback mechanisms, as suggested by the emotional feedback analysis, could reduce disengagement and improve learning outcomes. Thus, strategically managing visitor emotions can lead to more fulfilling and educational museum visits.
AFFECTIVE QUESTIONS
In analogy with emotions, we now outline how we used the methodology described above but considering affections. Here, we report an example.
Generic question: Love–Hate
- On a scale of −1 to 1, how much do you love the overall atmosphere and experience of the museum?
(−1) Strong dislike or hatred for the museum experience.
(0) Neutral, neither loving nor hating the museum atmosphere.
(1) Strong love and positive sentiment towards the museum experience.
Specific question: Love
- To what extent does your appreciation for the museum’s architecture and design contribute to your overall enjoyment of the visit?
(−1) Strong dislike for the architecture negatively impacts enjoyment.
(0) Neutral, architecture has a balanced influence on enjoyment.
(1) A strong love for architecture enhances overall satisfaction.
ANALYSES OF THE ANSWERS TO AFFECTIVE QUESTIONS
For the affective questions, we followed the same approach described above for the emotions, and on the basis of 1000 questionnaires, we obtained the following results, as reported in Table 3 and Table 4.
The affective responses revealed that positive emotions such as love (54%) and passion (67%) significantly enhance visitor engagement. Visitors who reported higher levels of love for the museum’s atmosphere were also more likely to express satisfaction with their overall experience. This suggests that fostering emotional connections through thoughtful exhibit design and content presentation can elevate visitors’ appreciation and retention of the material. Similarly, passion, which was strongly associated with engagement, indicates that emotionally charged exhibits resonate deeply with visitors, leading to longer and more meaningful interactions with the displays.
In Figure 6, each pair of affections has a color, with a higher intensity if it is dominant in the affective duo and a lower intensity if it is subordinate. In Figure 7, we show the experimental results for the 36 questions.
Here, we give the statistical results for the affective questions:
Mean: 61.7%
Standard deviation: 13.8%
Minimum: 19.0%
Maximum: 77.0%
The affective responses indicated that emotions like love and passion play a pivotal role in shaping the overall visitor experience. Exhibits that evoke strong positive emotions are more likely to create lasting impressions and foster deeper emotional connections with the museum’s content. Understanding these affective states allows museums to tailor their environments to not only inform but also emotionally engage visitors. By doing so, museums can offer more impactful experiences that resonate on both intellectual and emotional levels, enhancing educational outcomes and encouraging repeat visits.
These responses and analyses provide insights into visitors’ emotions and preferences and into their impact on the overall museum experience. It is important to recognize that individual responses may vary, and addressing specific concerns raised in the feedback can contribute to an improved visitor experience. Indeed, we noted that visitors were more sensitive to emotional questions than to affective ones: the mean value for emotions was greater than that for affections. Moreover, responsiveness was higher for positive emotions and affections than for negative ones.
In Figure 8, we report the experimental results for emotions and affections in histogram form. In addition, we show the mean values, the standard deviations, and the minimum and maximum values for both categories.
Here, we give the percentages (%) of users’ reactions with respect to the emotional and affective states, as represented in Figure 2 and Figure 3, respectively.
| State | Affectivity | Emotivity |
| 1 | 55.33% | 74.33% |
| 2 | 71.33% | 81.33% |
| 3 | 68.66% | 74.33% |
| 4 | 63.33% | 67.33% |
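Using the four paired percentages from the table above, a short sketch can illustrate the observation that emotional responsiveness exceeds affective responsiveness; the paired t-test is our illustrative choice, not a test reported by the study.

```python
# Minimal sketch (using the four paired percentages from the table above):
# comparing mean responsiveness for affective vs. emotional states with a
# paired t-test. The test choice is illustrative, not the authors' method.
from statistics import mean
from scipy.stats import ttest_rel

affectivity = [55.33, 71.33, 68.66, 63.33]
emotivity = [74.33, 81.33, 74.33, 67.33]

print(f"Mean affectivity: {mean(affectivity):.2f}%")
print(f"Mean emotivity:   {mean(emotivity):.2f}%")

# Paired t-test across the four matched states.
t_stat, p_value = ttest_rel(emotivity, affectivity)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```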
8. Conclusions
Starting from the considerations in the Introduction about the Future Internet and its implications in terms of emotions and affections for more interactive solutions and the involvement of social media, in the present work we have introduced a framework for researching emotional engagement in museums. The primary objective was to bridge the gap between human emotions and technologically driven experiences within the museum context. This study considered how users interact with museums on social media platforms, delved into the emotional responses of museum visitors, and ultimately established connections between human emotions and the technological aspects shaping the museum experience. We then discussed the methods and techniques employed by museums to enhance visitor emotional engagement by detecting affectivity and emotions after a specific context-driven stimulus, consisting of images and short videos randomly selected and rotated from a custom dataset built specifically for the Paestum Museum. We emphasized the importance of identifying and assessing the emotional states of museum visitors in shaping the overall museum experience.
A spectrum of modalities for affective state identification was covered in the Discussion, including Ekman’s work. These methods play a fundamental role in elevating visitor engagement and understanding emotional responses, setting the stage for a detailed exploration of the tools museums use to enhance emotional connections with their audiences. This paper has also explored methods and techniques used within the museum context to enhance visitor experiences through text analysis and sentiment assessment, emphasizing the significance of understanding how visitors express emotions in written text and how specific content can evoke varied emotional responses in museums. The Discussion included the work of Osgood and others on multidimensional scaling for emotional word displays, and tools such as Linguistic Inquiry and Word Count (LIWC) were mentioned. The exploration was extended to affective analysis, aiming to identify emotionally charged text and classify emotional sentiments.
Our attention was then devoted to the pivotal role of feedback in enhancing visitor engagement within the museum context, drawing insightful parallels between museum experiences and learning environments. That section underscored the importance of intelligent feedback management within Intelligent Tutoring Systems (ITSs) for museums, emphasizing the system’s capability to respond to and navigate emotional feedback for a meaningful and enriching visitor experience. We delved into the concept of tailoring feedback based on visitor emotions, providing illustrative examples of how feedback can be strategically designed to assist, encourage, understand, assure, spark curiosity, challenge, and motivate visitors in response to their interactions with exhibits. Recognizing and responding to visitors’ emotions were identified as crucial components in the development of responsive museum systems. This research advances the role of emotional computing in creating personalized, immersive museum experiences: by integrating real-time emotional feedback, museums can respond dynamically to visitor emotions, providing personalized paths through exhibitions.
This model not only enhances engagement but also lays the groundwork for more interactive and intelligent museum spaces, in line with the ‘Future Internet’ vision. As emotional computing becomes a foundational component, it allows for more profound emotional connections between visitors and exhibits, aligning with the goal of creating adaptive, emotionally resonant environments. Consequently, this paper posed thought-provoking questions about the optimal response mechanisms of affect-sensitive museum systems, aiming to foster positive attitudes and maximize long-term learning outcomes. The integration of assessments encompassing cognitive, affective, and motivational states into system strategies was highlighted as a key approach to keeping visitors engaged.
These findings contribute to the development of future museum experiences that leverage the capabilities of the ‘Future Internet’. AI- and IoT-driven technologies, integrated with emotional computing, will enable museums to offer hyper-personalized experiences that respond to real-time emotional data. This convergence of technology and emotional engagement paves the way for museums to become adaptive, empathetic environments capable of evolving based on visitor interactions. In short, this research showcases the potential of emotional computing for creating adaptive, visitor-centered museum experiences: museums can deliver hyper-personalized content, using real-time emotional data, that best fits visitors’ needs and preferences, increasing both engagement and learning outcomes. This capability reflects the goals of the Future Internet in fostering emotionally responsive digital environments that support immersive educational experiences. Future research could investigate further applications, such as long-term emotional data analysis, to drive deeper personalization in museums. These results are a gateway for museums to integrate emotional computing into the core of visitor engagement and to reconfigure the frontiers of museum experiences in the digital era.
Emotional computing in museums offers transformative potential through personalized, empathetic interaction. Affective technologies can move museums beyond static presentations of information and create environments that dynamically change according to the emotional states of individual visitors. For example, a display that detects high interest might present more in-depth information, while a frustrated visitor might be offered encouraging prompts or simplified content. Tailoring interactions in this way increases visitor satisfaction by supporting both emotional and cognitive engagement.
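As a toy illustration of such an adaptive rule, the following sketch maps a detected emotional state to a content-presentation strategy; the state labels and strategies are hypothetical, not part of the system described in this work.

```python
# Minimal sketch (hypothetical rule set, not the authors' implementation):
# selecting exhibit content based on a detected emotional state, as in the
# adaptive-feedback example above.
from enum import Enum

class Emotion(Enum):
    HIGH_INTEREST = "high_interest"
    FRUSTRATION = "frustration"
    NEUTRAL = "neutral"

def select_content(emotion: Emotion) -> str:
    """Map a detected emotional state to a content-presentation strategy."""
    if emotion is Emotion.HIGH_INTEREST:
        return "Show in-depth information and related exhibits."
    if emotion is Emotion.FRUSTRATION:
        return "Show encouraging prompts and simplified content."
    return "Show the standard exhibit narrative."

print(select_content(Emotion.FRUSTRATION))
```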
The Future Internet framework envisages digital spaces that are not only interconnected but also emotionally intuitive, responding in real time to user data. In the museum context, emotional computing is at the core of this vision, creating a hyper-personalized experience that evolves in real time with emotional data. It empowers museums to be adaptive, allowing visitors to engage with exhibits in ways that are personally meaningful to them, thereby fostering deeper, more lasting relationships with the material.
In future studies, it would be interesting to collect feedback and carry out adaptive engagement actions through technological solutions capable of gathering and analyzing memory effects and the persistence of emotional–affective stimuli, with the aim of enhancing visitor engagement and fostering personalized interaction with museum assets. In doing so, the integration of emotional computing not only enhances learning outcomes but also redefines the boundaries of digital and physical museum experiences. It also addresses the inherent challenge of designing human-centered systems capable of intelligently responding to a visitor’s affective state, especially in situations where determining emotions, even in human-to-human interaction, is complex. This emphasizes the critical role of tailored feedback in the museum context, as well as the necessity of intelligent responsiveness to visitor emotions and of human-centered system design that can navigate uncertainty, ultimately contributing to a positive and enriching museum experience.
Through the presented experimental results from a set of 1000 students, we validated the model, which can accordingly be considered for larger-scale analyses in the future using the wide variety of technologies of the Future Internet, including social media technologies, IoT, and AI. While the proposed emotional recognition model has proven effective in controlled environments, implementing it in real-world museums requires the consideration of both technological and financial limitations. For smaller or regional museums with limited budgets, a scaled-down version of the model could be employed, focusing on simpler technology such as emotion detection through mobile apps or kiosks that use pre-existing hardware. Such museums could opt for off-the-shelf AI solutions capable of performing basic sentiment analysis on visitor feedback, eliminating the need for costly real-time monitoring systems; a minimal sketch of this approach is given below. Additionally, cloud-based platforms could reduce the upfront costs associated with data storage and processing.
For larger museums with more significant funding, the full integration of Internet of Things (IoT) devices and AI-driven systems is feasible. These museums can implement comprehensive systems that include facial expression recognition, real-time physiological monitoring, and personalized feedback loops. The flexibility of the model allows museums to tailor the level of technological integration to their available resources. Moreover, museums can introduce these technologies gradually, in phases, starting with visitor feedback collection and sentiment analysis before scaling up to more advanced interactive exhibits driven by real-time emotional data. This incremental approach ensures financial sustainability while maintaining technological innovation.
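As an example of the off-the-shelf route mentioned above, the following sketch runs NLTK's VADER sentiment analyzer over visitor comments; the feedback strings are invented, and VADER is one possible low-cost choice among many, not the system used in this study.

```python
# Minimal sketch (one possible off-the-shelf option, not the authors' system):
# basic sentiment analysis of visitor feedback using NLTK's VADER analyzer,
# the kind of low-cost component a smaller museum could deploy first.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

feedback = [
    "The ancient frescoes were absolutely breathtaking!",
    "Too crowded, and the audio guide kept cutting out.",
]
for comment in feedback:
    scores = analyzer.polarity_scores(comment)
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    print(f"{scores['compound']:+.2f}  {comment}")
```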