Article

Generative AI and Media Content Creation: Investigating the Factors Shaping User Acceptance in the Arab Gulf States

by Mahmoud Sayed Mohamed Ali 1,2,*, Khaled Zaki AbuElkhair Wasel 3,4 and Amr Mohamed Mahmoud Abdelhamid 5,6

1 Mass Communication Department, College of Arts and Sciences, Abu Dhabi University, Abu Dhabi P.O. Box 59911, United Arab Emirates
2 Public Relations and Advertising Department, Faculty of Mass Communication, Beni-Suef University, Beni-Suef P.O. Box 62511, Egypt
3 Communication and Multimedia Department, University College of Bahrain, Manama P.O. Box 55040, Bahrain
4 Journalism Department, Faculty of Mass Communication, Cairo University, Giza P.O. Box 12613, Egypt
5 Mass Communication Department, College of Communication, Al Qasimia University, Sharjah P.O. Box 63000, United Arab Emirates
6 Radio and Television Department, Faculty of Mass Communication, Beni-Suef University, Beni-Suef P.O. Box 62511, Egypt
* Author to whom correspondence should be addressed.
Journal. Media 2024, 5(4), 1624-1645; https://doi.org/10.3390/journalmedia5040101
Submission received: 5 October 2024 / Revised: 27 October 2024 / Accepted: 29 October 2024 / Published: 6 November 2024

Abstract: This article aims to investigate the factors that affect behavioural intention (BI) and user behaviour (UB) among Arabian users of generative artificial intelligence (GenAI) applications in the context of media content creation. The study’s theoretical framework is grounded in the unified theory of acceptance and use of technology (UTAUT2). A sample of 496 users was analysed using the partial least squares structural equation modelling technique (PLS-SEM). The results revealed that BI is significantly influenced by performance expectancy, effort expectancy, social influence, hedonic motivation, habit, and user trust, with hedonic motivation having the greatest impact. In terms of UB, facilitating conditions, habit, user trust, and BI were all found to have a positive and significant impact. This study contributes to the existing theory on the utilisation of GenAI applications by organising findings pertaining to the use of AI technology for media content creation.

1. Introduction

Information technology has undergone rapid advancement in recent years, driven in part by artificial intelligence (AI) and related technologies and applications. This progress has been particularly notable since the rise of generative AI (GenAI), which uses advanced algorithms to analyse data patterns and use that information to create various types of new content, including text, images, sounds, videos, and code (Michel-Villarreal et al. 2023).
Major technology companies, such as Microsoft and Google, have begun to compete in launching GenAI applications. OpenAI, backed by Microsoft, announced the interactive chatbot ChatGPT in November 2022; Google launched Bard, and other applications have since appeared, such as the image generator Midjourney (Feuerriegel et al. 2023).
Technology companies are intensifying their efforts not only to release new apps but also to enhance existing ones, elevating them to a professional level of content creation by utilising four cutting-edge technologies: natural language processing, machine learning, computer vision, and artificial neural networks. These technologies simulate the functions of the human brain and are highly effective at learning from large volumes of data (Fui-Hoon Nah et al. 2023).
Among the most recent developments in generative artificial intelligence, OpenAI launched ChatGPT with Canvas on its GPT-4o model, which features performance and efficiency improvements that allow users to edit text and code and develop ideas (OpenAI 2024). Google also introduced an update to its Gemini 1.5 Pro model, which supports text, image, and video analysis with multimedia capabilities (Google AI 2024).
Microsoft launched Copilot Vision, which enables users to analyse text and images on web pages in real time while maintaining privacy and security during browsing (Microsoft 2024). Midjourney, in turn, released version 6.1 of its model, which offers significant improvements in image quality and accuracy while reducing visual defects and enhancing the detail of small elements (MidJourney 2024).
GenAI represents significant monetary potential, with global corporate profits projected to range between $2.6 trillion and $4.4 trillion per year. According to World Economic Forum forecasts, GenAI is projected to contribute around $16 trillion to the worldwide economy by 2030 (Paige et al. 2023).
One of the key ethical challenges posed by AI in creative fields is the blurring of the lines between human-generated and AI-generated content. This raises crucial questions about authenticity and plagiarism, as the distinction between inspiration and imitation becomes increasingly blurred (Kanont et al. 2024).
Additionally, the incorporation of AI into creative workflows has raised apprehensions regarding the possible displacement of creative professionals in fields like media and film, with AI’s ability to generate texts and compositions raising fears of job losses among writers (Cheng 2024). Others argue that AI can act as a valuable collaborator, enhancing rather than replacing human creativity (Lai 2023).
Arab countries, especially those in the Gulf region, have made numerous efforts to adopt GenAI and benefit from its applications. The United Arab Emirates has created the position of Minister of State for Artificial Intelligence and the Digital Economy and developed the National Strategy for Artificial Intelligence 2031, which aims to achieve global leadership in the field of AI. It has also launched the Artificial Intelligence Programme and established the Artificial Intelligence and Digital Transactions Council with the aim of creating a stimulating environment for the application of AI technologies and developing guidelines for GenAI (Artificial Intelligence Office, UAE 2024).
The Kingdom of Saudi Arabia established the Saudi Data and Artificial Intelligence Authority (SDAIA), issued two guides on using GenAI, created the National Center for Artificial Intelligence, and strengthened the research and innovation system. AI is expected to contribute 58.8 trillion Saudi riyals to the Kingdom’s gross domestic product by 2030 (SDAIA 2024).
Qatar launched a project called The Arab Model for Generative Artificial Intelligence Fanar to develop Arabic content using GenAI, aiming to improve both accuracy and understanding. In the Kingdom of Bahrain, the Labor Fund “Tamkeen” launched the Artificial Intelligence Academy, a training platform that helps young people enhance their innovative and creative abilities. It also created the Sheikh Nasser Center for Research and Development in Artificial Intelligence and designed a guide to help governments use AI technologies in a responsible and sustainable manner (Nasser Vocational Training Center 2023).
Trust in AI applications among users in Gulf countries is influenced by several factors, including the level of awareness of potential concerns, misuse and its impact on psychological security, and cultural factors (Aldossary et al. 2024; Pashentsev et al. 2024; Alshamsi et al. 2024). Concerns about the impact of AI on the labour market make users anxious about their professional future, especially among workers in media institutions (Al Adwan et al. 2024).
In the Arab Gulf states, efforts are underway to localise GenAI technology and raise awareness about its applications. Various studies have analysed the challenges and ethical problems associated with AI and its applications and investigated AI’s impact on professions and opportunities, but they have largely failed to address user acceptance and motivation for using these technologies. This study aims to explore the factors influencing user behavioural intention (BI) and user behaviour (UB) regarding GenAI in the Arab Gulf states. The following four questions will be explored:
Q1: What factors influence the acceptance of GenAI applications among users in the Arabian Gulf states?
Q2: What are the most widely used applications of GenAI among users in the Arabian Gulf states?
Q3: What are the uses of GenAI applications among users in the Arabian Gulf states?
Q4: To what extent are users in the Arabian Gulf region aware of the differences between human-generated content and GenAI applications?

1.1. Literature Review

Factors Affecting Acceptance of Generative Artificial Intelligence Applications

GenAI applications, which integrate deep learning with natural language processing to enable human-like interactions, represent a significant advance over traditional AI systems (Markovič 2024). These applications, such as ChatGPT, Copilot, Gemini, and others, are changing how we communicate and engage with technology (Victor et al. 2023).
Studying the acceptance of GenAI applications is essential to grasp the elements that motivate both individuals and organizations to adopt this technology (Ho et al. 2022). Studies have investigated various factors influencing GenAI adoption, highlighting the perceptions, benefits, challenges, and levels of acceptance among different user groups, as well as technology acceptance, UB, technological progress, and ethical considerations (Ling et al. 2021).
Yin et al. (2023) highlighted the underlying factors predicting professionals’ acceptance of and intention to use GenAI, focusing on the anxiety about AI among professionals working in creative industries. The study indicated that factors such as performance expectancy (PE), social influence (SI), hedonic motivation (HM), and habit (HT) significantly influenced user intentions and acceptance. In contrast, effort expectancy (EE), facilitating conditions (FCs), and price value (PV) had no significant impact.
A study by Kim and Kim (2020), which surveyed media professionals who produce news using GenAI, found that media reliability was the most significant influencing factor for the acceptance of GenAI content. Readers prioritised the reliability of the media source over technical aspects, indicating that in the media, trust is crucial in the acceptance and use of GenAI applications.
In the context of the academic community, Strzelecki et al. (2024) examined the factors affecting the intention to use ChatGPT for research and teaching among academics in Poland. They found that HT, PE, and HM were key factors shaping teachers’ and researchers’ intentions to adopt ChatGPT, with both personal innovation and SI also positively influencing its acceptance. EE and ease of adoption did not significantly affect academics’ acceptance of this tool, suggesting that academics are willing to invest more effort in technology if they perceive it as valuable to their work. The study identified age as the sole variable influencing academics’ acceptance and adoption of AI applications.
Lavidas et al. (2024) came to a different conclusion. Exploring students’ use of GenAI at the Greek University of Patras, they found that acceptance and adoption were unaffected by gender, age, and experience. Instead, the study confirmed that PE, HT, BI, FC, and HM significantly influenced students’ intentions to use GenAI applications. Furthermore, Salama (2023) found that gender and specialisation had no effect on students’ acceptance of GenAI applications, with students considering them valuable tools capable of enhancing their creative work and abilities. The study also identified that expected benefits, ease of use, attitudes towards use, and BI influenced their acceptance and use of GenAI applications.
Some studies have identified obstacles limiting the wider adoption of GenAI in the academic context (Abdullah and Zaid 2023; Kshetri 2024), such as lack of relevance; excessive reliance; inadequate support, training, and resources; and, most importantly, ethical issues.
Vo and Nguyen (2024) examined how English major students perceive the use of ChatGPT for language development and identified the factors influencing their acceptance of GenAI applications. They discovered that while the students found ChatGPT easy to use and considered it useful for language learning, they were neutral about its overall usefulness. The study also confirmed that gender had no effect on students’ acceptance and perceptions of GenAI applications. This finding was confirmed by Nja et al. (2023), who examined the factors influencing the use of AI applications by science teachers and found that behaviour and intention to use GenAI applications were unaffected by gender, age, and place of residence.
In the engineering field, Russo (2024) found that social factors had no significant influence on GenAI usage intention. Additionally, while personal and environmental factors shaped technology perceptions, they did not directly affect usage intention. Bernal and Maligaya (2024) investigated the levels of awareness and acceptance of GenAI among airline engineers and its application in problem-solving. They observed significant acceptance among the airline engineers, with expected usefulness and ease of use playing key roles in their acceptance and adoption of GenAI applications.
When it comes to user trust (UT), the relationship between humans and AI is complex. Studies indicate that although people may forgive human errors, they quickly lose trust in AI, which confirms the importance of the trust factor in accepting GenAI (Wirtz et al. 2018). Individuals’ demographic characteristics and attitudes towards the efficiency of GenAI applications significantly influence their trust in them (Yakar et al. 2022).
The use of GenAI applications also brings up numerous ethical issues about authorship, academic integrity, trust, misinformation, deepfakes, algorithmic bias, transparency, accountability, and impact on human creativity (Al-kfairy et al. 2024; Wakunuma and Eke 2024; Porsdam Mann et al. 2023; Bautista et al. 2024; Fui-Hoon Nah et al. 2023). Privacy concerns are another critical area for ethical scrutiny. The gathering and utilization of personal information for training AI models brings forth numerous significant inquiries regarding consent, data ownership, and the right to privacy (Oladoyinbo et al. 2024; Hastuti and Syafruddin 2023).

1.2. Theoretical Framework

The UTAUT2 model, introduced in 2012, swiftly gained widespread recognition due to its capacity to elucidate the factors impacting technology adoption. Its efficacy has been demonstrated across various contexts, technologies, cultures, and nationalities. Venkatesh (2022) considered UTAUT a powerful general theoretical model that could be integrated with other models to enhance the comprehension of technology adoption behaviour. PE, SI, and FCs emerged as primary influencers directly affecting the intention to use.
The internet and smartphones have become crucial components of daily life. Numerous scholars have endeavoured to identify the determinants shaping individuals’ acceptance and comprehension of new technology (Hughes et al. 2019), exploring myriad theories, contexts, units of analysis, and research methodologies (Choudrie and Dwivedi 2005; Dwivedi and Williams 2008). These encompass, among others, the diffusion of innovation (DoI) by Rogers (1962), the technology acceptance model (TAM) by Davis (1989), the theory of planned behaviour (TPB) by Ajzen (1991), and the theory of task–technology fit (TTF) by Goodhue and Thompson (1995).
The TAM has significantly contributed to developing frameworks to explain and anticipate the acceptance and adoption of novel technologies. The TAM suggests that the perception of a new technology’s usefulness, as well as its ease of use, significantly influences its acceptance. Extending the TAM framework, Venkatesh and colleagues introduced the UTAUT, amalgamating prior studies, and later extended it as UTAUT2 (Venkatesh et al. 2012), which examines the effects of PE, effort expectancy (EE), SI, FC, HM, PV, and HT on BI.
Several extensions have emerged that include additional variables and are collectively referred to as TAM++. The TAM has been widely validated as a leading scientific model and a reliable basis for explaining, predicting, and improving user acceptance across a range of technology applications (Davis et al. 2024).
In previous studies related to AI usage, UTAUT has been empirically tested for its validity and reliability (Lin and Lee 2023; Venkatesh 2022; Zhang 2020) and has proven effective in assessing the acceptance of new technologies like GenAI. De Graaf et al. (2017) used UTAUT as a framework for evaluating the acceptability of social robots, while Terblanche and Kidd (2022) scrutinised adoption determinants for AI and the Internet of Things (IoT). Kuberkar and Singhal (2020) and Ho and Cheung (2024) found that trust in AI influences the public’s perceptions of autonomous drones.
Ho and Cheung (2024) augmented the UTAUT2 model to enhance the understanding of public trust and intention to utilise AI. Upadhyay et al. (2022) found that factors like PE, openness, SI, HM, and creative motives positively impact entrepreneurs’ inclination to use AI. Ahmed et al. (2023) conducted a comprehensive meta-analysis, revealing that PE, perceived usefulness, UT, and HT were the most accurate predictors of consumer BI for mobile app adoption.
Alkhwaldi and Abdulmuhsin (2022) enriched UTAUT2 by incorporating contextual factors such as trust and autonomy as key predictors of remote learning acceptance. Maican et al. (2023) utilised structural equation modelling (SEM) to analyse how UTAUT2 factors affect BI towards GenAI, noting the influences of language proficiency and gender. Wang and Zhang (2023) evaluated factors driving Generation Z’s embrace of GenAI-assisted design, finding positive effects of EE, PV, and HM but no significant effect of PE. Zhu et al. (2024) extended the UTAUT2 model with ethical influencing factors, while Yin et al. (2023) explored the adoption of GenAI, identifying predictors including PE, SI, HM, HT, and AI anxiety.
In this study, the UTAUT model is critical for explaining and analysing factors affecting the acceptance of GenAI applications among users in the Arab Gulf states.

1.3. Research Hypotheses

The UTAUT2 model was used to investigate 12 hypotheses (see Figure 1) that centre on the key influences related to GenAI:
H1. 
PE has a positive direct influence on the BI to use GenAI in media content creation.
H2. 
EE has a positive direct influence on the BI to use GenAI in media content creation.
H3. 
SI has a positive direct influence on the BI to use GenAI in media content creation.
H4. 
FCs have a positive direct influence on the BI to use GenAI in media content creation.
H5. 
FCs have a positive direct influence on GenAI UB in media content creation.
H6. 
HM has a positive direct influence on the BI to use GenAI in media content creation.
H7. 
PV has a positive direct influence on the BI to use GenAI in media content creation.
H8. 
HT has a positive direct influence on the BI to use GenAI in media content creation.
H9. 
HT has a positive direct influence on GenAI UB in media content creation.
H10. 
UT has a positive direct influence on the BI to use GenAI in media content creation.
H11. 
UT has a positive direct influence on GenAI UB in media content creation.
H12. 
BI has a positive direct influence on GenAI UB in media content creation.
Figure 1. Hypothesised model (adopted from the UTAUT2 model).
This figure illustrates the proposed conceptual model, showing the relationships between key constructs influencing user behaviour and behavioural intention. The constructs include social influence, performance expectancy (PE), effort expectancy (EE), facilitating conditions (FCs), user trust (UT), hedonic motivation (HM), habit (HT), and price value (PV). Arrows represent hypothesised relationships, labelled as H1 through H12, indicating the proposed hypotheses tested in the study.
Table 1 presents an overview of measurement scales, factor loadings, means, and standard deviations (SD) for each construct of the UTAUT2 model. The table outlines specific items for each construct, including item descriptions, average scores, and SD, illustrating the respondents’ average agreement and consistency (or lack thereof) among each construct. This structure is useful in evaluating the model’s constructs, is backed by clearly defined measures and trustworthy sources, and guarantees clarity and consistency in interpretation.

2. Method

2.1. Research Design

Using the UTAUT2 model as its foundation, this study utilised a quantitative research design and survey methodology to examine the factors influencing the acceptance of GenAI by users in the Arabian Gulf for content production.

2.2. Sample Description

The study targeted users in the Arabian Gulf region, focusing on users of GenAI applications in the UAE, Saudi Arabia, and Bahrain, whether citizens or residents, who varied by gender, age, nationality, and years of experience using GenAI applications. The sample was selected using a non-random, purposive sampling method, chosen to ensure relevance to the research objectives.
G*power (v3.1.9.4), as described by Faul et al. (2009), was utilised to calculate the minimum necessary sample size according to statistical power. The sample consisted of 526 respondents, with 30 participants excluded for unsuitability. Thus, 496 cases were valid for data analysis, which was considered sufficient for SEM analysis.
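The a priori power analysis that G*Power performs for a linear multiple regression model can be sketched in Python. This is an illustrative reconstruction, not the study’s actual computation: the effect size (f² = 0.15, a “medium” effect), α = .05, and power = .80 shown here are conventional defaults, since the article does not report the parameters used.

```python
from scipy.stats import f as f_dist, ncf

def min_sample_size(n_predictors, f2=0.15, alpha=0.05, target_power=0.80):
    """Smallest N for a multiple-regression F-test (G*Power's a priori logic):
    noncentrality = f2 * N, df1 = predictors, df2 = N - predictors - 1."""
    n = n_predictors + 2                      # smallest N with df2 >= 1
    while True:
        df1, df2 = n_predictors, n - n_predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)       # critical F under H0
        power = 1 - ncf.cdf(crit, df1, df2, f2 * n)  # power under noncentral F
        if power >= target_power:
            return n
        n += 1

# Hypothetical settings: 8 predictors of BI, medium effect, alpha = .05, power = .80
print(min_sample_size(8))
```

The 496 valid cases comfortably exceed any minimum this kind of calculation yields for conventional effect sizes.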
The Arab Gulf states were chosen based on their efforts to localise GenAI technology and spread awareness of its applications, which enabled the respondents to answer the questionnaire accurately. These countries also include many nationalities with diverse cultural and social backgrounds, which contributes to enriching the results of the field study.
Using a non-random sample in this study is justified for several reasons. It enables researchers to select a specific target relevant to the research objectives, especially when comprehensive databases for random sampling are unavailable (Shaliha and Marsasi 2024; Schnuerch and Erdfelder 2023).
This approach is appropriate for selecting eligible respondents, ensuring that they have knowledge and experience in dealing with GenAI applications, which makes the data collected more relevant to the study objectives. This is vital in the Gulf region due to its wide geographical and demographic diversity, as using a random sample would require substantial financial resources and pose logistical difficulties related to collecting comprehensive data on all citizens and residents in the target countries.
This study’s exploratory nature focuses on examining how specific user groups interact with GenAI technology. Therefore, the primary aim is not absolute generalisation but rather an understanding of behavioural patterns and user trends within this newly emerging technological context.
As shown in Table 2, 50.4% of the respondents were male and 49.6% were female. Most respondents were aged 18–37 (79.7%), with only 20.3% over 38 years old. This may be due to younger people having greater familiarity with AI. Most respondents (73.2%) had less than one year of experience using GenAI. In terms of the level of interest in GenAI applications, 20.8% reported a constant interest, 44.2% expressed occasional curiosity, and 35.1% said they rarely use it.

2.3. Data Collection

Using a web survey created in Google Docs, data were collected via social media platforms, email lists, and professional networks. The online responses were collected from mid-April until the end of May 2024. Participation was voluntary.

2.4. Survey Instrument

UTAUT2 was utilised to develop the survey instrument, which included five main sections: level of experience with GenAI applications, everyday use of GenAI applications, level of awareness of the distinction between GenAI-generated and human-produced content, confidence in the content produced using GenAI, and factors affecting the acceptance of GenAI applications, including PE, EE, SI, FC, HM, PV, HT, and UT. To test the instrument, we ran a trial with a small user sample. Feedback from this test led to adjustments to improve clarity and reliability.

2.5. Data Analysis

In this investigation, the researchers applied two statistical software packages, SPSS (v22) and SmartPLS (v3.2.9), to analyse the research dataset. The PLS-SEM technique was chosen due to the large number of constructs and items and the model’s complexity. In addition, PLS-SEM can be applied to non-normal datasets (Al-Sharafi et al. 2022) and is a non-parametric statistical method that performs well on small sample sizes (Roy et al. 2024).
A bootstrapping technique with 5000 subsamples, along with the path coefficient of the constructs, was used to assess the statistical significance of the findings (Kashyap and Agrawal 2020). There are two primary steps for analysing data with PLS-SEM: the measurement model and the structural model.
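The bootstrapping logic can be illustrated with a minimal Python sketch using synthetic data, with a standardised bivariate slope (Pearson r) standing in for a single structural path coefficient; SmartPLS performs the equivalent resampling internally across the whole model, and the variable names here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: one predictor (say, HM) and one outcome (BI)
n_obs = 200
hm = rng.normal(size=n_obs)
bi = 0.3 * hm + rng.normal(size=n_obs)

def path_coef(x, y):
    # standardised bivariate slope = Pearson correlation
    return float(np.corrcoef(x, y)[0, 1])

# 5000 bootstrap subsamples: resample respondents with replacement each time
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n_obs, n_obs)
    boot[b] = path_coef(hm[idx], bi[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval
est = path_coef(hm, bi)
print(f"beta = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A path is deemed significant when its bootstrap confidence interval excludes zero (equivalently, when the bootstrap t-statistic exceeds the critical value).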

2.6. Normality Test

The multivariate normality of the dataset was also checked using a web-based calculator (Zhang and Yuan 2018). Following Mardia (1970), we obtained multivariate skewness (β = 992.4121, p-value < 0.001) and multivariate kurtosis (β = 4942.3441, p-value < 0.001). This implies that the data do not follow a multivariate normal distribution. PLS-SEM is a highly effective approach for handling non-normal datasets in such scenarios (Hair et al. 2019), which provides further support for its use.
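Mardia’s (1970) coefficients can be reproduced with a short Python sketch; the statistics reported above came from the Zhang and Yuan (2018) web calculator, so this is only an equivalent illustration on synthetic data.

```python
import numpy as np
from scipy import stats

def mardia(X):
    """Mardia's (1970) multivariate skewness b1p and kurtosis b2p with p-values."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # ML covariance
    D = Xc @ S_inv @ Xc.T                     # d_ij = (x_i - xbar)' S^-1 (x_j - xbar)
    b1p = float((D ** 3).sum()) / n ** 2      # multivariate skewness
    b2p = float((np.diag(D) ** 2).mean())     # multivariate kurtosis
    skew_p = stats.chi2.sf(n * b1p / 6, p * (p + 1) * (p + 2) / 6)
    kurt_z = (b2p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    kurt_p = 2 * stats.norm.sf(abs(kurt_z))
    return b1p, skew_p, b2p, kurt_p
```

For multivariate normal data, b2p should be close to p(p + 2) and both p-values should be non-significant; the large, highly significant values reported above therefore indicate non-normality.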

2.7. Measurement Model Assessment

Initially, the measurement model was verified for accuracy and reliability. To accomplish this, we comprehensively analysed multiple facets of the variables. For the internal consistency and reliability of the constructs, we tested factor loadings (λ), composite reliability (CR), and Cronbach’s alpha (α) values. The cut-off value for these measures was at least 0.70 (Roy 2022). All λ, CR, and α values exceeded this threshold. We then employed average variance extracted (AVE) values to evaluate the convergent validity of the constructs. Normally, an AVE value of more than 0.50 is acceptable (Hair et al. 2019). All our AVE values were over 0.50, confirming convergent validity. Table 3 presents the outcomes.
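As an illustration, Cronbach’s α, CR, and AVE can be computed as follows in Python; the loadings in the example are hypothetical, not values taken from Table 3.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k) response matrix for one construct
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def cr_and_ave(loadings):
    # composite reliability and AVE from standardised outer loadings
    lam = np.asarray(loadings, dtype=float)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    return float(cr), float((lam ** 2).mean())

# Hypothetical construct with four loadings above the 0.70 cut-off
cr, ave = cr_and_ave([0.81, 0.78, 0.74, 0.72])
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")
```

Both results clear the conventional thresholds (CR > 0.70, AVE > 0.50), mirroring the pattern reported for all constructs in Table 3.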
Subsequently, discriminant validity was assessed in PLS-SEM using the Fornell and Larcker (1981) criteria and the heterotrait–monotrait ratio (HTMT) (Henseler et al. 2015). According to the Fornell and Larcker criteria, the square root of each construct’s AVE must exceed its correlations with the other constructs. As shown in Table 4, all the square roots of the AVE values exceeded the construct correlations, and all the HTMT scores were below the recommended threshold of 0.80 (Kline 2023), indicating no discriminant validity issues.
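A minimal Python sketch of the HTMT computation, assuming two constructs each measured by its own item matrix (the data here are synthetic, not the study’s):

```python
import numpy as np

def htmt(X, Y):
    """Heterotrait-monotrait ratio (Henseler et al. 2015) for two constructs,
    each given as an (n_respondents x k_items) matrix."""
    kx, ky = X.shape[1], Y.shape[1]
    R = np.abs(np.corrcoef(np.hstack([X, Y]), rowvar=False))
    hetero = R[:kx, kx:].mean()                          # between-construct item r
    mono_x = R[:kx, :kx][np.triu_indices(kx, 1)].mean()  # within-construct item r
    mono_y = R[kx:, kx:][np.triu_indices(ky, 1)].mean()
    return float(hetero / np.sqrt(mono_x * mono_y))

# Two distinct (weakly related) latent factors, three items each
rng = np.random.default_rng(0)
n = 1000
f1 = rng.normal(size=n)
f2 = 0.3 * f1 + 0.954 * rng.normal(size=n)
X = np.column_stack([f1 + 0.6 * rng.normal(size=n) for _ in range(3)])
Y = np.column_stack([f2 + 0.6 * rng.normal(size=n) for _ in range(3)])
print(round(htmt(X, Y), 2))
```

Values well below 0.80 (as here, since the factors are only weakly related) indicate discriminant validity; two item sets tapping the same factor would push the ratio towards 1.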

3. Results

Table 5 shows that ChatGPT is the most used among respondents (79.8%). This high usage is expected, given that ChatGPT was the first GenAI application introduced by OpenAI. Its popularity in the Gulf countries can be attributed to its ease of access; diversity of uses; early entry into the market, which helped it gain wide fame; continuous updates; integration with many platforms; and extensive media coverage at the beginning of its appearance, which created social momentum that encouraged more people to try it.
Gemini was the second most-used application (25.6%), followed by Copilot (19.2%), Midjourney (15.1%), DALL·E 3 (9.1%), Snapchat’s My AI (3.2%), and others (2.4%). The results show that respondents in the Arab Gulf states use a variety of GenAI applications. These applications range from those used to generate and produce text, like ChatGPT and Gemini, to those used to generate images, like Midjourney. These findings align with Victor et al. (2023), Yin et al. (2023), and Upadhyay and Khandelwal (2018), demonstrating the widespread use of GenAI applications in fields including education, media, and entrepreneurship.
Table 6 shows the diverse utilisation of GenAI apps in everyday activities. For composing and editing text, 16.7% consistently utilise GenAI, 44.8% use it occasionally, and 38.5% rarely employ it. For editing and processing photographs, 24.4% always use it, 37.9% use it sometimes, and 37.7% rarely use it. Consistent use of GenAI is highest for creating engineering designs (49.0%) and in animation production and film and visual programmes (48.8%). Just 14.1% consistently use GenAI to interpret text, while 48.0% rarely do. For ad and poster design, 34.3% consistently utilise GenAI, whereas 23.6% always use it for spell-checking and grammar verification.
The results indicate a variety of areas in which respondents use GenAI applications. Yilmaz et al. (2023) found that GenAI is frequently used in artistic and technical domains but is less prevalent in translation and text editing. This is understandable given the ongoing criticism of the writing and translation accuracy of these apps. GenAI apps have made it easy for users to create designs in seconds, which has led to its widespread use in the arts and sciences.
Table 7 illustrates the different degrees of awareness regarding the distinction between GenAI-generated text and human-created content. Specifically, 15.1% of the participants consistently find it challenging to differentiate the two, 59.5% occasionally have difficulties, and 25.4% rarely experience difficulty. For images, 15.3% of people consistently find it effortless to identify AI-generated photos, 42.7% sometimes find it effortless, and 41.9% rarely find it effortless. For video, 15.7% of individuals consistently find it effortless to distinguish, 44.4% occasionally, and 39.9% rarely. With respect to advertisements, 16.5% of individuals consistently find ads created by humans more noticeable, 50.8% find them more noticeable at times, and 32.7% rarely find them more noticeable. Clearly, a considerable percentage of respondents had difficulty regularly differentiating between GenAI information and content produced by humans. However, a large portion of users find this difficult only occasionally or not at all.
These results align with Tiernan et al. (2023), who highlighted the importance of equipping users with digital media literacy skills that enable them to distinguish between human-created content and GenAI content.

3.1. Structural Model Assessment

The investigators employed PLS-SEM techniques to determine how the predictor variables influence BI and UB. To assess the structural model, we evaluated the path coefficient (β) of the variables. We found that 10 out of 12 hypotheses were supported. Table 8 represents the output of the structural model evaluation.
We found that PE has a statistically significant and favourable effect on BI, with a beta coefficient (β) of 0.203 and a p-value of less than 0.05. Therefore, H1 is supported. BI is significantly and positively correlated with EE (β = 0.127, p < 0.05), SI (β = 0.196, p < 0.05), HM (β = 0.279, p < 0.05), HT (β = 0.082, p < 0.05), and UT (β = 0.124, p < 0.05). Therefore, H2, H3, H6, H8, and H10 are supported. Among these independent variables, HM has the highest impact on BI, with a beta coefficient of 0.279 (p < 0.05). The findings revealed that FCs (β = −0.009, p > 0.05) and PV (β = −0.056, p > 0.05) have no significant impact on BI. Therefore, H4 and H7 are not supported.
The findings demonstrate that FCs (β = 0.185, p < 0.05), HT (β = 0.308, p < 0.05), UT (β = 0.105, p < 0.05), and BI (β = 0.343, p < 0.05) significantly affect UB. Therefore, H5, H9, H11, and H12 are supported. The findings confirm that BI is the most impactful factor for respondents in the context of UB, since β = 0.343 (p < 0.001). See Figure 2.
Figure 2. Results of the structural model analysis, showing the relationships between social influence, performance expectancy, effort expectancy, facilitating conditions, user trust, hedonic motivation, habit, price value, behavioural intention, and user behaviour. Paths are annotated with standardised coefficients; significant paths are marked with * (p < 0.05) or ** (p < 0.01), and non-significant paths are denoted “ns”.
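The significance testing summarised above can be illustrated with a bootstrap procedure of the kind PLS-SEM software uses to obtain t-statistics for path coefficients. The following is a minimal, illustrative sketch on synthetic data, not the authors' analysis pipeline: it estimates a single standardised path (e.g., HM → BI) by correlating z-scored variables and derives a bootstrap standard error; the variable names and simulated effect size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic standardised data for one predictor (HM) and one outcome (BI);
# n matches the study's sample size, but the effect size is invented.
n = 496
hm = rng.standard_normal(n)
bi = 0.3 * hm + 0.9 * rng.standard_normal(n)

def std_path(x, y):
    """Standardised path coefficient for a single predictor:
    the mean product of z-scored variables (Pearson r)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx * zy))

beta_hat = std_path(hm, bi)

# Bootstrap: resample respondents with replacement and re-estimate the path.
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = std_path(hm[idx], bi[idx])

se = boot.std(ddof=1)  # bootstrap standard error of the path coefficient
t = beta_hat / se      # approximate t-statistic (|t| > 1.96 implies p < 0.05)
print(f"beta = {beta_hat:.3f}, SE = {se:.3f}, t = {t:.2f}")
```

With β reported alongside bootstrap t-values, an effect such as HM → BI (β = 0.279) is declared significant when its t-statistic exceeds the critical value for the chosen α.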

3.2. Assessment of the Explanatory Power

Finally, we evaluated the explanatory power of the structural model by determining the coefficient of determination (R2) of the model. For BI, the R2 value was 0.589. Based on the R2 value, we can infer that 58.9% of the variation in the BI construct was explained by the independent variables (PE, EE, SI, HM, HT, and UT). In the same vein, FC, HT, UT, and BI explained 54.2% of the variation in UB. According to Hair et al. (2019), R2 values of 0.25, 0.50, and 0.75 indicate weak, moderate, and strong explanatory power of the model, respectively. Therefore, we can conclude that the model has moderate explanatory power for prediction.
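The R2 interpretation above follows from the definition of the coefficient of determination combined with the Hair et al. (2019) rules of thumb. A minimal sketch (the helper function names are mine, and the example predictions are invented values, not the study's data):

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: the share of variance in y
    explained by the model's predictions."""
    ss_res = np.sum((y - y_pred) ** 2)    # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def explanatory_power(r2):
    """Classify R2 using the Hair et al. (2019) thresholds."""
    if r2 >= 0.75:
        return "strong"
    if r2 >= 0.50:
        return "moderate"
    if r2 >= 0.25:
        return "weak"
    return "below the weak threshold"

# Toy check of the definition on invented values.
y = np.array([3.0, 4.0, 5.0, 6.0])
y_pred = np.array([2.8, 4.2, 5.1, 5.9])
print(f"{r_squared(y, y_pred):.2f}")  # prints 0.98

# The study reports R2 = 0.589 for BI and R2 = 0.542 for UB.
print(explanatory_power(0.589))  # moderate
print(explanatory_power(0.542))  # moderate
```

Both reported values fall between the 0.50 and 0.75 cut-offs, which is why the model is characterised as having moderate explanatory power.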

4. Discussion

In this study, we examined the factors affecting BI and UB among users in Arab Gulf states who adopt GenAI apps, using the modified UTAUT2 model. The findings enhance our understanding of the interconnections between different components of the UTAUT2 and their impact on BI and UB.
The study proposed 12 hypotheses, 8 addressing how GenAI apps affect respondents’ BI to use them and 4 focusing on the consequences of GenAI applications on UB. Only 2 hypotheses were found to be insignificant, while 10 were supported by the analysis.
The component with the greatest influence on BI is HM (β = 0.279). This supports the results of Tseng et al. (2019), who demonstrated that HM significantly affected BI towards platforms and learning management systems. Similarly, Tian et al. (2024) found that HM shaped the utilisation of ChatGPT. HM reflects not the tangible outcomes of utilising GenAI technologies but the subjective pleasure and satisfaction that individuals derive from using this technology.
Studies have indicated that HM has a large and favourable impact on BI. Gunadi et al. (2023) emphasise that hedonic motivation plays an important role in shaping BI, particularly among younger users such as Generation Z. Given their strong preference for technology that delivers amusement and pleasure, HM is a crucial factor in their acceptance of generative AI. Similarly, Sudirjo et al. (2023) argue that hedonic motivation has a major influence on users’ behavioural intentions, implying that the pleasure gained from utilising technology can increase acceptance rates. Suyanto et al. (2024) likewise suggest that regular use of technology can improve BI, particularly when users enjoy their interactions. This habitual involvement may be especially noticeable in generative AI systems, where users build routines for creating and sharing content, which increases their willingness to continue using these tools.
PE, with a coefficient of 0.203, is the second most influential factor for BI. Studies have found that PE significantly motivates users to adopt innovative educational tools such as ChatGPT (Strzelecki et al. 2024) and Google Classroom (Kumar and Bervell 2019). This study found that PE significantly influences respondents, motivating them to adopt a new technology that facilitates both learning and content creation. Their motivation increases if they believe that GenAI applications will enhance their expertise.
These findings are supported by Biloš and Budimir (2024) and Wang and Zhang (2023). When users believe that the benefits of utilising AI tools outweigh the costs, their acceptance of and interaction with these technologies increase, which has a significant impact on their behavioural intentions and the overall acceptability of these applications.
The EE of GenAI systems directly affects the BI of respondents to utilise them, which suggests that participants are willing to invest time in learning new technology, provided they perceive it as beneficial. The findings of this study corroborate EE’s influence on shaping BI in various educational institutions, such as the adoption of software engineering technologies (Wrycza et al. 2016), mobile technologies (Hu et al. 2020), and Learning Management System (LMS) software (Raza et al. 2021). In contrast, several researchers found non-significant results (Strzelecki and ElArabawy 2024; Strzelecki et al. 2024; Zacharis and Nikolopoulou 2022).
SI also positively influences BI. Previous research has demonstrated that SI positively influences technology acceptance (Oye et al. 2012) and learning tools (Tseng et al. 2019), as well as the utilisation of ChatGPT (Strzelecki et al. 2024). For GenAI applications, SI encompasses the views of the user’s immediate environment, which may include family, friends, and colleagues. The likelihood of an Arab Gulf state user adopting GenAI applications rises with the extent to which this environment actively encourages them to do so.
Various studies have found that HT positively and significantly influences BI, including those of Tamilmani et al. (2019) for technology in general, Alotumi (2022) in education and learning, and Strzelecki et al. (2024) for ChatGPT. This suggests that as Arab Gulf state users become more familiar with GenAI applications, they will use them more regularly, eventually coming to see them as standard, much like using a search engine or online translator.
Lastly, the study found a strong positive association between BI and UT, confirming the findings of Hooda et al. (2022). When users perceive that GenAI applications offer tangible benefits for content creation, they are more likely to use them to enhance their productivity and skills. This research produced a theoretical model that integrates trust into the underlying UTAUT framework, positing that trust, a variable not included in the baseline UTAUT2 model, is essential to the adoption of GenAI applications.
In contrast, FCs, which refer to whether sufficient resources and support exist for GenAI applications to be used effectively, have no significant impact on BI. This demonstrates that Arab Gulf state users do not prioritise ease of use or accessibility alone when selecting a GenAI application. Instead, they consider the broader implications of using these tools, which implies that addressing concerns about these impacts may be a strategic approach to increasing adoption. This is consistent with previous studies of technology acceptance in the academic realm, including the acceptance of ChatGPT (Alowayr 2021; Strzelecki and ElArabawy 2024; Strzelecki et al. 2024).
Furthermore, the intricacy and novelty of generative AI applications may raise scepticism among potential users, reducing the influence of facilitating conditions. This is supported by Emon et al.’s (2023) study of AI adoption among business professionals, which found that trust and attitudes towards AI influenced behavioural intention more strongly than facilitating conditions did. This scepticism may originate from concerns about the reliability and ethical consequences of AI technologies, causing users to prioritise their own judgements of the technology over the external support available to them.
Facilitating conditions may have less effect on environments with extensive digital infrastructure, such as Gulf countries like the UAE, Saudi Arabia, and Bahrain, which have heavily invested in smart technologies and digital services. Users may not view these factors as important to GenAI adoption because they already have easy access to the technology.
Like FCs, PV, which represents the belief that the benefits of using technology outweigh its expenses, did not exhibit a significant correlation with BI. Prior technology adoption studies have demonstrated that PV has a significant impact on BI (Tamilmani et al. 2019) and, more specifically, technology for learning (Azizi et al. 2020). However, this study found that PV is not the primary factor Arab Gulf state users consider when deciding whether to utilise GenAI applications. This result is consistent with that of Strzelecki et al. (2024). This may be due to the simple reason that several apps for GenAI, such as ChatGPT, provide a free version with substantial capability. In contexts where generative AI tools are available at no cost, the importance of price value is often overlooked, suggesting that users may not take cost into account when making adoption decisions (Biloš and Budimir 2024).
According to Romero-Rodríguez et al. (2023) and Kanont et al. (2024), free access to tools such as ChatGPT may initially encourage adoption. However, the potential introduction of costs could significantly alter users’ perceptions of equity and accessibility, thereby influencing their behavioural intentions to adopt such technologies. Furthermore, other motivational factors, such as perceived usefulness and ease of use, may override price value perceptions. Gupta (2024) suggests that entrepreneurs are more motivated by the perceived enjoyment and benefits derived from generative AI technologies than by their cost implications.
After testing the BI hypotheses, we proceeded to test the four UB hypotheses. The primary determinant of UB in relation to GenAI applications is BI, with a coefficient of 0.343. UB measures how extensively a user engages with a specific technology to complete a task, whereas BI reflects their preparedness and desire to apply the technology for the activity. Consequently, a stronger intention to use GenAI apps leads to more active engagement with them. An important observation arises: respondents will only actively engage with GenAI applications if they feel prepared and eager to utilise them. This willingness to act (BI) is itself shaped by the other factors discussed above. Many academics have emphasised the pivotal role of BI in the utilisation of AI applications (Gansser and Reich 2021; Strzelecki et al. 2024).
Notably, FCs (β = 0.185) have a statistically significant and positive influence on UB, confirming the findings of Strzelecki et al. (2024). However, FCs affect UB directly rather than through BI. As a result, the availability of assistance and resources for GenAI applications will not affect users’ readiness to adopt these applications, but it may influence the level of engagement they maintain with the platform.
Additionally, HT has a strong influence on UB in the context of GenAI applications. As consumers become accustomed to these applications, they are more likely to use them to create and enhance content. These findings are corroborated by earlier research (Gansser and Reich 2021; Strzelecki et al. 2024).
Finally, UT has a considerable positive influence on UB: when consumers perceive GenAI applications as trustworthy, they are more likely to continue using them.

5. Implications

5.1. Practical Implications

The findings enhance our comprehension of the pivotal elements that influence the acceptance and assimilation of GenAI systems across numerous organisations for content creation. They suggest that hedonic motivation (HM) and performance expectancy (PE) significantly impact Arabian users’ intent to embrace and deploy GenAI systems. As users become increasingly proficient with GenAI and witness the positive outcomes of their collaboration with this technology, they will develop a sense of enjoyment, and HM will further enhance their inclination to utilise GenAI in the future.
Furthermore, it is important to emphasise the significance of the SI factor. Once users have weighed the benefits and drawbacks of utilising GenAI applications, they should share their expertise and experience with acquaintances, relatives, and coworkers. When evaluating GenAI systems, users should consider both positive and negative practices rather than basing their opinion solely on someone else’s negative experience. EE, which pertains to the perceived ease of utilising a technology, significantly influences BI in the context of GenAI. Consumers are more likely to adopt and incorporate GenAI apps into their daily routines when they perceive these tools as intuitive and user-friendly. A greater EE decreases the cognitive burden associated with learning and utilising the technology, hence enhancing its attractiveness.
UT is a crucial factor in influencing the desire to utilise GenAI for content development. Users’ faith in the accuracy, reliability, and ethical production of GenAI content increases their likelihood of adopting and depending on these technologies in their creative processes. On the other hand, if users have reservations regarding the trustworthiness or honesty of AI-generated content, or if they are worried about possible biases or ethical consequences, their desire to use such technology decreases. It is crucial to establish and uphold UT in GenAI for content production by being transparent, accountable, and demonstrating high quality.

5.2. Theoretical Implications

This study’s main conceptual contribution lies in the comprehensive literature review undertaken on the application of GenAI for content production. It organises information regarding the utilisation of GenAI systems in content creation, scientific research, and text composition, catering to users from numerous organisations. We suggest that our work serves as an exemplar of social influence (SI) and can help users better understand the capabilities of GenAI systems for content generation and other functions. This paper highlights the significance of exercising caution when utilising GenAI applications for content creation and didactics, and it underscores the need to examine carefully all text and materials generated by AI tools before incorporating them into further use. This study developed a conceptual framework that incorporates the trust factor into the foundational UTAUT framework within the specific context of GenAI applications in the Arab region, asserting that trust is crucial in this context even though it was not included in the baseline technology adoption model.
The findings help decision-makers, stakeholders, and social media influencers in the Gulf countries direct their strategies by improving their understanding of the factors influencing the adoption of GenAI technologies in the media content industry. This can be achieved in several ways. First, the productivity and quality of content can be improved by accelerating reporting, editing, and idea generation. Second, government entities, media outlets, and technology developers should cooperate to create tools that meet the needs of the local market. Third, regulatory and ethical guidelines should be established to handle data privacy issues and reduce bias in AI-generated content. Finally, training programs for media content creators to develop their skills in using artificial intelligence are essential, given the contribution of strong technological infrastructure and government support, through initiatives such as the UAE Artificial Intelligence Strategy 2031 and Saudi Vision 2030, to providing a conducive environment for the adoption of these technologies.

6. Limitations and Future Research

The current study’s sample of 496 users does not represent the entire population of the Arab Gulf countries, and demographic variations in age, gender, education, occupation, and socioeconomic status can affect the generalisability of the results.
The cultural context of the Arab Gulf states may restrict the generalisability of the findings, given that cultural norms, values, and attitudes towards technology adoption can vary significantly worldwide.
Although the UTAUT2 model is comprehensive, it may not cover all the factors that influence GenAI acceptance and use, such as ethical concerns, the level of technological proficiency, data privacy issues, and AI creativity.
The authors propose several future studies to address the gaps in GenAI and user acceptance research.
  • Ethical challenges of using generative AI in media content production.
  • Journalists’ acceptance of GenAI applications in the Arabian Gulf states.

Author Contributions

Supervision, M.S.M.A.; writing—original draft preparation, M.S.M.A. and A.M.M.A.; conceptualization, M.S.M.A., K.Z.A.W. and A.M.M.A.; methodology, M.S.M.A. and A.M.M.A.; data curation, M.S.M.A. and A.M.M.A.; formal analysis, M.S.M.A. and A.M.M.A.; investigation, K.Z.A.W.; editing, K.Z.A.W.; resources, K.Z.A.W. and A.M.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical approval has been obtained from the Institutional Review Board (IRB) Committee of Abu Dhabi University, with reference number CAS-0000031.

Informed Consent Statement

All participants gave their informed consent before participating in the online survey. They received detailed information about the study objectives, procedures, potential risks, and benefits. It was made clear that their participation was completely voluntary, with the freedom to withdraw at any time. In addition, the consent process ensured that participants’ responses were confidential and anonymous.

Data Availability Statement

The original contributions presented in this study are included in the article; for further information, please contact the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdullah, Zaleha, and Norasykin Mohd Zaid. 2023. Perception of Generative Artificial Intelligence in Higher Education Research. Innovative Teaching and Learning Journal 7: 84–95. [Google Scholar] [CrossRef]
  2. Ahmed, Mazharuddin Syed, John Everatt, Wendy Fox-Turnbull, and Fahad Alkhezzi. 2023. Systematic Review of Literature for Smartphones Technology Acceptance Using Unified Theory of Acceptance and Use of Technology Model (UTAUT). Social Networking 12: 29–44. [Google Scholar] [CrossRef]
  3. Ajzen, Icek. 1991. The theory of planned behavior. Organizational Behavior and Human Decision Processes 50: 179–211. [Google Scholar] [CrossRef]
  4. Al Adwan, Muhammad Noor, Mohmad El Hajji, and Hossam Fayez. 2024. Future anxiety among media professionals and its relationship to utilizing artificial intelligence techniques: The case of Egypt, France, and UAE. Online Journal of Communication and Media Technologies 14: e202425. [Google Scholar] [CrossRef]
  5. Aldossary, Aminah Saad, Alia Abdullah Aljindi, and Jamilah Mohammed Alamri. 2024. The role of generative AI in education: Perceptions of Saudi students. Contemporary Educational Technology 16: 10–11. [Google Scholar] [CrossRef]
  6. Al-kfairy, Mousa, Dheya Mustafa, Nir Kshetri, Mazen Insiew, and Omar Alfandi. 2024. Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics 11: 58. [Google Scholar] [CrossRef]
  7. Alkhwaldi, Abeer F., and Amir A. Abdulmuhsin. 2022. Crisis-centric distance learning model in Jordanian higher education sector: Factors influencing the continuous use of distance learning platforms during COVID-19 pandemic. Journal of International Education in Business 15: 250–72. [Google Scholar] [CrossRef]
  8. Alotumi, Mohialdeen. 2022. Factors influencing graduate students’ behavioral intention to use Google Classroom: Case study-mixed methods research. Education and Information Technologies 27: 10035–63. [Google Scholar] [CrossRef]
  9. Alowayr, Ali. 2021. Determinants of mobile learning adoption: Extending the unified theory of acceptance and use of technology (UTAUT). The International Journal of Information and Learning Technology 39: 1–12. [Google Scholar] [CrossRef]
  10. Alshamsi, Ibrahim, Kaneez Fatima Sadriwala, Fouad Jameel Ibrahim Alazzawi, and Boumedyen Shannaq. 2024. Exploring the impact of generative AI technologies on education: Academic expert perspectives, trends, and implications for sustainable development goals. Journal of Infrastructure, Policy and Development 8: 8532. [Google Scholar] [CrossRef]
  11. Al-Sharafi, Mohammed A., Mostafa Al-Emran, Mohammad Iranmanesh, Noor Al-Qaysi, Noorminshah A. Iahad, and Ibrahim Arpaci. 2022. Understanding the Impact of Knowledge Management Factors on the Sustainable Use of AI-Based Chatbots for Educational Purposes Using a Hybrid SEM-ANN Approach. Interactive Learning Environments 31: 7491–510. [Google Scholar] [CrossRef]
  12. Artificial Intelligence Office, UAE. 2024. Available online: https://ai.gov.ae/ar/ (accessed on 12 April 2024).
  13. Azizi, Seyyed Mohsen, Nasrin Roozbahani, and Alireza Khatony. 2020. Factors affecting the acceptance of blended learning in medical education: Application of UTAUT2 model. BMC Medical Education 20: 367. [Google Scholar] [CrossRef]
  14. Bautista, Yohn Jairo Parra, Carlos Theran, and Richard Aló. 2024. Ethical Considerations of Generative AI: A Survey Exploring the Role of Decision Makers in the Loop. Proceedings of the AAAI Symposium Series 3: 391–98. [Google Scholar] [CrossRef]
  15. Bernal, Ena A., and Jonald L. Maligaya. 2024. Examining the Levels of Awareness and Acceptance of Generative Artificial Intelligence to Supplement Problem Solving Analysis in an Aerospace Company. Ignatian International Journal for Multidisciplinary Research 2: 1739–49. [Google Scholar] [CrossRef]
  16. Biloš, Antun, and Bruno Budimir. 2024. Understanding the adoption dynamics of chatgpt among generation z: Insights from a modified utaut2 model. Journal of Theoretical and Applied Electronic Commerce Research 19: 863–79. [Google Scholar] [CrossRef]
  17. Cheng, Guo. 2024. Research on the Displacement Impact of Artificial Intelligence on the Film Industry. Highlights in Business Economics and Management 28: 48–53. [Google Scholar] [CrossRef]
  18. Choudrie, Jyoti, and Yogesh Kumar Dwivedi. 2005. Investigating the research approaches for examining technology adoption issues. Journal of Research Practice 1: D1. [Google Scholar]
  19. Davis, Fred D. 1989. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. Management Information Systems Quarterly 13: 319. [Google Scholar] [CrossRef]
  20. Davis, Fred D., Andrina Granić, and Nikola Marangunić. 2024. The Technology Acceptance Model: 30 Years of TAM. Cham: Springer International Publishing AG. [Google Scholar]
  21. De Graaf, Maartje M. A., Somaya Ben Allouch, and Jan AGM Van Dijk. 2017. Why Would I Use This in My Home? A Model of Domestic Social Robot Acceptance. Human-Computer Interaction 34: 115–73. [Google Scholar] [CrossRef]
  22. Dwivedi, Yogesh K., and Michael D. Williams. 2008. Demographic influence on UK citizens’ e-government adoption. Electronic Government 5: 261. [Google Scholar] [CrossRef]
  23. Emon, Md Mehedi Hasan, Farheen Hassan, Mehzabul Hoque Nahid, and Vichayanan Rattanawiboonsom. 2023. Predicting Adoption Intention of Artificial Intelligence ChatGPT. AIUB Journal of Science and Engineering (AJSE) 22: 189–99. [Google Scholar] [CrossRef]
  24. Faul, Franz, Edgar Erdfelder, Axel Buchner, and Albert-Georg Lang. 2009. Statistical power analyses using G* Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 41: 1149–60. [Google Scholar] [CrossRef] [PubMed]
  25. Feuerriegel, Stefan, Jochen Hartmann, Christian Janiesch, and Patrick Zschech. 2023. “Generative AI”. Business & Information Systems Engineering 66: 111–26. [Google Scholar] [CrossRef]
  26. Fui-Hoon Nah, Fiona, Ruilin Zheng, Jingyuan Cai, Keng Siau, and Langtao Chen. 2023. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research 25: 277–304. [Google Scholar] [CrossRef]
  27. Gansser, Oliver Alexander, and Christina Stefanie Reich. 2021. A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society 65: 101535. [Google Scholar] [CrossRef]
  28. Gefen, David, and Detmar Straub. 2003. Managing User Trust in B2C e-Services. E-Service Journal/E-Service Journal 2: 7. [Google Scholar] [CrossRef]
  29. Goodhue, Dale L., and Ronald L. Thompson. 1995. Task-Technology Fit and Individual Performance. Management Information Systems Quarterly 19: 213. [Google Scholar] [CrossRef]
  30. Google AI. 2024. Gemini 1.5 Pro Model Updates and Capabilities. Available online: https://ai.google.dev (accessed on 25 October 2024).
  31. Gunadi, Valentino, Nur Indah Septyani, Riyadh Annafi, and Robertus Nugroho Perwiro Atmojo. 2023. The effect of live streaming methods in online sales on behavioral intention in generation z. E3s Web of Conferences 426: 02127. [Google Scholar] [CrossRef]
  32. Gupta, Varun. 2024. An empirical evaluation of a generative artificial intelligence technology adoption model from entrepreneurs’ perspectives. Systems 12: 103. [Google Scholar] [CrossRef]
  33. Hair, Joseph F., Jeffrey J. Risher, Marko Sarstedt, and Christian M. Ringle. 2019. When to use and how to report the results of PLS-SEM. European Business Review 31: 2–24. [Google Scholar] [CrossRef]
  34. Hastuti, Rochmi, and None Syafruddin. 2023. Ethical Considerations in the Age of Artificial Intelligence: Balancing Innovation and Social Values. West Science Social and Humanities Studies 1: 76–87. [Google Scholar] [CrossRef]
  35. Henseler, Jörg, Christian M. Ringle, and Marko Sarstedt. 2015. A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science 43: 115–35. [Google Scholar] [CrossRef]
  36. Ho, Shirley S., and Justin C. Cheung. 2024. Trust in artificial intelligence, trust in engineers, and news media: Factors shaping public perceptions of autonomous drones through UTAUT2. Technology in Society 77: 102533. [Google Scholar] [CrossRef]
  37. Ho, Yi-Hui, Syed Shah Alam, Mohammad Masukujjaman, Chieh-Yu Lin, Samiha Susmit, and Sumaiya Susmit. 2022. Intention to adopt AI-powered online service among tourism and hospitality companies. International Journal of Technology and Human Interaction 18: 1–19. [Google Scholar] [CrossRef]
  38. Hooda, Apeksha, Parul Gupta, Anand Jeyaraj, Mihalis Giannakis, and Yogesh K. Dwivedi. 2022. The effects of trust on behavioral intention and use behavior within e-government contexts. International Journal of Information Management 67: 102553. [Google Scholar] [CrossRef]
  39. Hu, Sailong, Kumar Laxman, and Kerry Lee. 2020. Exploring factors affecting academics’ adoption of emerging mobile technologies-an extended UTAUT perspective. Education and Information Technologies 25: 4615–35. [Google Scholar] [CrossRef]
  40. Hughes, D. Laurie, Nripendra P. Rana, and Yogesh K. Dwivedi. 2019. Elucidation of IS project success factors: An interpretive structural modelling approach. Annals of Operation Research/Annals of Operations Research 285: 35–66. [Google Scholar] [CrossRef]
  41. Kanont, Kraisila, Pawarit Pingmuang, Thewawuth Simasathien, Suchaya Wisnuwong, Benz Wiwatsiripong, Kanitta Poonpirome, Noawanit Songkram, and Jintavee Khlaisang. 2024. Generative-AI, a Learning Assistant? Factors Influencing Higher-Ed Students’ Technology Acceptance. The Electronic Journal of e-Learning 22: 18–33. [Google Scholar] [CrossRef]
  42. Kashyap, Ankur, and Rajat Agrawal. 2020. Scale development and modeling of intellectual property creation capability in higher education. Journal of Intellectual Capital 21: 115–38. [Google Scholar] [CrossRef]
  43. Kim, Soyoung, and Boyoung Kim. 2020. A decision-making model for adopting al-generated news articles: Preliminary results. Sustainability 12: 7418. [Google Scholar] [CrossRef]
  44. Kline, Rex B. 2023. Principles and Practice of Structural Equation Modeling, 5th ed. New York: Guilford Publications. [Google Scholar]
  45. Kshetri, Nir. 2024. The academic industry’s response to generative artificial intelligence: An institutional analysis of large language models. Telecommunications Policy 48: 102760. [Google Scholar] [CrossRef]
  46. Kuberkar, Sachin, and Tarun Kumar Singhal. 2020. Factors influencing adoption intention of AI powered chatbot for public transport services within a smart city. International Journal of Emerging Technologies in Learning 11: 948–58. [Google Scholar]
  47. Kumar, Jeya Amantha, and Brandford Bervell. 2019. Google Classroom for mobile learning in higher education: Modelling the initial perceptions of students. Education and Information Technologies 24: 1793–817. [Google Scholar] [CrossRef]
  48. Lai, Yuehua. 2023. The Impact of AI-Driven Narrative Generation, Exemplified by ChatGPT, on the Preservation of Human Creative Originality and Uniqueness. Lecture Notes in Education Psychology and Public Media 26: 121–24. [Google Scholar] [CrossRef]
  49. Lavidas, Konstantinos, Iro Voulgari, Stamatios Papadakis, Stavros Athanassopoulos, Antigoni Anastasiou, Andromachi Filippidi, Vassilis Komis, and Nikos Karacapilidis. 2024. Determinants of Humanities and Social Sciences Students’ Intentions to Use Artificial Intelligence Applications for Academic Purposes. Information 15: 314. [Google Scholar] [CrossRef]
  50. Lin, Rong-Rong, and Jung-Chieh Lee. 2023. The supports provided by artificial intelligence to continuous usage intention of mobile banking: Evidence from China. Aslib Journal of Information Management 76: 293–310. [Google Scholar] [CrossRef]
  51. Ling, Erin Chao, Iis Tussyadiah, Aarni Tuomi, Jason Stienmetz, and Athina Ioannou. 2021. Factors influencing users’ adoption and use of conversational agents: A systematic review. Psychology & Marketing 38: 1031–51. [Google Scholar] [CrossRef]
  52. Maican, Catalin Ioan, Silvia Sumedrea, Alina Tecau, Eliza Nichifor, Ioana Bianca Chitu, Radu Lixandroiu, and Gabriel Bratucu. 2023. Factors influencing the behavioural intention to use AI-Generated images in business: A UTAUT2 perspective with moderators. Journal of Organizational and End User Computing 35: 1–32. [Google Scholar] [CrossRef]
  53. Mardia, Kanti V. 1970. Measures of multivariate skewness and kurtosis with applications. Biometrika 57: 519–30. [Google Scholar] [CrossRef]
  54. Markovič, Daniel. 2024. Current options and limits of digital technologies and artificial intelligence in social work. SHS Web of Conferences 184: 05003. [Google Scholar] [CrossRef]
  55. Michel-Villarreal, Rosario, Eliseo Vilalta-Perdomo, David Ernesto Salinas-Navarro, Ricardo Thierry-Aguilera, and Flor Silvestre Gerardou. 2023. Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT. Education Sciences 13: 856. [Google Scholar] [CrossRef]
  56. Microsoft. 2024. An AI Companion for Everyone. Microsoft Blog. Available online: https://blogs.microsoft.com/blog/2024/10/01/an-ai-companion-for-everyone/ (accessed on 25 October 2024).
  57. MidJourney. 2024. Version 6.1 Release Notes and Updates. Available online: https://updates.midjourney.com/version-6-1/ (accessed on 25 October 2024).
  58. Nasser Vocational Training Center. 2023. Available online: https://rb.gy/4hc5kc (accessed on 18 April 2024).
  59. Nja, Cecilia Obi, Kimson Joseph Idiege, Uduak Edet Uwe, Anne Ndidi Meremikwu, Esther Etop Ekon, Costly Manyo Erim, Julius Ukah Ukah, Eneyo Okon Eyo, Mary Ideba Anari, and Bernedette Umalili Cornelius-Ukpepi. 2023. Adoption of artificial intelligence in science teaching: From the vantage point of the African science teachers. Smart Learning Environments 10: 42. [Google Scholar] [CrossRef]
  60. Oladoyinbo, Tunboson Oyewale, Samuel Oladiipo Olabanji, Oluwaseun Oladeji Olaniyi, Olubukola Omolara Adebiyi, Olalekan J. Okunleye, and Adegbenga Ismaila Alao. 2024. Exploring the Challenges of Artificial Intelligence in Data Integrity and its Influence on Social Dynamics. Asian Journal of Advanced Research and Reports 18: 1–23. [Google Scholar] [CrossRef]
  61. OpenAI. 2024. Introducing Canvas: A New Way to Collaborate with AI. Available online: https://openai.com/index/introducing-canvas/ (accessed on 25 October 2024).
  62. Oye, N. D., N. A. Iahad, and N. Ab. Rahim. 2012. The history of UTAUT model and its impact on ICT acceptance and usage by academicians. Education and Information Technologies 19: 251–70. [Google Scholar] [CrossRef]
  63. Paige, Amer, Sven Blomberg, Eva Lee, Megha Sinha, Douglas Merrill, Adi Pradhan, Steven Shaw, and Alexander Sukharevsky. 2023. The Golden Age of Technology Powered by Generative AI: A Comprehensive Guide for CIOs and CTOs. McKinsey & Company. Available online: https://www.mckinsey.com/featured-insights/highlights-in-arabic/technologys-generational-moment-with-generative-ai-a-cio-and-cto-guide-arabic/ar (accessed on 22 April 2024).
  64. Pashentsev, Evgeny, Vladilena Chebykina, and Ruslan Nikiforov. 2024. The Malicious Use of AI: Challenges to Psychological Security in the United Arab Emirates. ББK 16.6 M21 94: 8–9. [Google Scholar]
65. Porsdam Mann, Sebastian, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, Julian Koplin, Monika Plozza, Daniel Rodger, and et al. 2023. Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence 5: 472–75. [Google Scholar] [CrossRef]
  66. Raza, Syed Ali, Zubaida Qazi, Wasim Qazi, and Maiyra Ahmed. 2021. E-learning in higher education during COVID-19: Evidence from blackboard learning system. Journal of Applied Research in Higher Education 14: 1603–22. [Google Scholar] [CrossRef]
67. Rogers, Everett M. 1962. Diffusion of Innovations. New York: Free Press. [Google Scholar]
  68. Romero-Rodríguez, José-María, María-Soledad Ramírez-Montoya, Mariana Buenestado-Fernández, and Fernando Lara-Lara. 2023. Use of chatgpt at university as a tool for complex thinking: Students’ perceived usefulness. Journal of New Approaches in Educational Research 12: 323–39. [Google Scholar] [CrossRef]
  69. Roy, Sanjoy Kumar. 2022. The impact of age, gender, and ethnic diversity on organizational performance: An empirical study of Bangladesh’s banking sector. International Journal of Financial, Accounting, and Management 4: 145–61. [Google Scholar] [CrossRef]
  70. Roy, Sanjoy Kumar, Mst Musfika, and Ummey Habiba. 2024. Moderated Mediating Effect on Undergraduates’ Mobile Social Media Addiction: A Three-Stage Analytical Approach. Journal of Technology in Behavioral Science, 1–20. [Google Scholar] [CrossRef]
  71. Russo, Daniel. 2024. Navigating the Complexity of Generative AI Adoption in Software Engineering. ACM Transactions on Software Engineering and Methodology 33: 1–50. [Google Scholar] [CrossRef]
  72. Salama, Hossam. 2023. Employing Artificial Intelligence Techniques in Developing the Productions of Media Students in Gulf Universities. The Union of Arab Universities for Media & Communication Technology Research 11: 62–63. [Google Scholar] [CrossRef]
  73. Saudi Authority for Data and Artificial Intelligence. 2024. Available online: https://sdaia.gov.sa/en/default.aspx (accessed on 7 April 2024).
  74. Schnuerch, Martin, and Edgar Erdfelder. 2023. Building the Study. Cambridge: Cambridge University Press, pp. 103–24. [Google Scholar] [CrossRef]
  75. Shaliha, Alifia Indah Putri, and Endy Gunanto Marsasi. 2024. The influence of attitude and perceived risk to optimize intention to adopt based on theory of planned behavior in generation z. Ekombis Review Jurnal Ilmiah Ekonomi Dan Bisnis 12: 1679–94. [Google Scholar] [CrossRef]
  76. Strzelecki, Artur, and Sara ElArabawy. 2024. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. British Journal of Educational Technology 55: 1209–30. [Google Scholar] [CrossRef]
  77. Strzelecki, Artur, Karina Cicha, Mariia Rizun, and Paulina Rutecka. 2024. Acceptance and use of ChatGPT in the academic community. Education and Information Technologies, 1–26. [Google Scholar] [CrossRef]
  78. Sudirjo, Frans, David Ahmad Yani, Amat Suroso, and Aulia Ar Rakhman Awaludin. 2023. Comparative analysis of customer acceptance of digital wallet gopay, dana and ovo using unified theory of acceptance and use of technology. Jurnal Sistim Informasi Dan Teknologi 5: 38–43. [Google Scholar] [CrossRef]
  79. Suyanto, Mohamad Afan, Luh Komang Candra Dewi, Donny Dharmawan, Dadang Suhardi, and Silvia Ekasari. 2024. Analysis of the influence of behavior intention, technology effort expectancy and digitalization performance expectancy on behavior to use of qris users in small medium enterprises sector. Jurnal Informasi Dan Teknologi 6: 57–63. [Google Scholar] [CrossRef]
80. Tamilmani, Kuttimani, Nripendra P. Rana, and Yogesh K. Dwivedi. 2019. Use of ‘habit’ is not a habit in understanding individual technology adoption: A review of UTAUT2 based empirical studies. In Smart Working, Living and Organising: IFIP WG 8.6 International Conference on Transfer and Diffusion of IT, TDIT 2018, Portsmouth, UK, June 25, 2018, Proceedings. Cham: Springer International Publishing, pp. 277–94. [Google Scholar]
  81. Terblanche, Nicky, and Martin Kidd. 2022. Adoption Factors and Moderating Effects of Age and Gender That Influence the Intention to Use a Non-Directive Reflective Coaching Chatbot. SAGE Open 12: 215824402210961. [Google Scholar] [CrossRef]
  82. Tian, Weiqi, Jingshen Ge, Yu Zhao, and Xu Zheng. 2024. AI Chatbots in Chinese Higher Education: Adoption, Perception, and Influence Among Graduate Students—An Integrated Analysis Utilizing UTAUT and ECM Models. Frontiers in Psychology 15: 1268549. [Google Scholar] [CrossRef]
  83. Tiernan, Peter, Eamon Costello, Enda Donlon, Maria Parysz, and Michael Scriney. 2023. Information and Media Literacy in the Age of AI: Options for the Future. Education Sciences 13: 906. [Google Scholar] [CrossRef]
  84. Tseng, Timmy H., Shinjeng Lin, Yi-Shun Wang, and Hui-Xuan Liu. 2019. Investigating teachers’ adoption of MOOCs: The perspective of UTAUT2. Interactive Learning Environments 30: 635–50. [Google Scholar] [CrossRef]
  85. Upadhyay, Ashwani Kumar, and Komal Khandelwal. 2018. Applying artificial intelligence: Implications for recruitment. Strategic HR Review 17: 255–58. [Google Scholar] [CrossRef]
  86. Upadhyay, Nitin, Shalini Upadhyay, Salma S. Abed, and Yogesh K. Dwivedi. 2022. Consumer adoption of mobile payment services during COVID-19: Extending meta-UTAUT with perceived severity and self-efficacy. International Journal of Bank Marketing 40: 960–91. [Google Scholar] [CrossRef]
87. Venkatesh, Viswanath. 2022. Adoption and use of AI tools: A research agenda grounded in UTAUT. Annals of Operations Research 308: 641–52. [Google Scholar] [CrossRef]
  88. Venkatesh, Viswanath, James Y. L. Thong, and Xin Xu. 2012. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. Management Information Systems Quarterly 36: 157. [Google Scholar] [CrossRef]
  89. Venkatesh, Viswanath, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. 2003. User Acceptance of Information Technology: Toward a Unified View. Management Information Systems Quarterly 27: 425. [Google Scholar] [CrossRef]
  90. Victor, Bryan G., Sheryl Kubiak, Beth Angell, and Brian E. Perron. 2023. Time to Move Beyond the ASWB Licensing Exams: Can Generative Artificial Intelligence Offer a Way Forward for Social Work? Research on Social Work Practice 33: 511–17. [Google Scholar] [CrossRef]
  91. Vo, Anh, and Huong Nguyen. 2024. Generative Artificial Intelligence and ChatGPT in Language Learning: EFL Students’ Perceptions of Technology Acceptance. Journal of University Teaching & Learning Practice 21: 199–218. [Google Scholar] [CrossRef]
  92. Wakunuma, Kutoma, and Damian Eke. 2024. Africa, ChatGPT, and Generative AI Systems: Ethical Benefits, Concerns, and the Need for Governance. Philosophies 9: 80. [Google Scholar] [CrossRef]
  93. Wang, Yiyang, and Weining Zhang. 2023. Factors Influencing the Adoption of Generative AI for Art Designing among Chinese Generation Z: A structural equation modeling approach. IEEE Access 11: 143272–84. [Google Scholar] [CrossRef]
  94. Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. 2018. Artificial Intelligence and the Public Sector—Applications and Challenges. International Journal of Public Administration 42: 596–615. [Google Scholar] [CrossRef]
  95. Wrycza, Stanislaw, Bartosz Marcinkowski, and Damian Gajda. 2016. The Enriched UTAUT Model for the Acceptance of Software Engineering Tools in Academic Education. Information Systems Management 34: 38–49. [Google Scholar] [CrossRef]
  96. Yakar, Derya, Yfke P. Ongena, Thomas C. Kwee, and Marieke Haan. 2022. Do People Favor Artificial Intelligence Over Physicians? A Survey Among the General Population and Their View on Artificial Intelligence in Medicine. Value in Health 25: 374–81. [Google Scholar] [CrossRef] [PubMed]
97. Yilmaz, Fatma Gizem Karaoglan, Ramazan Yilmaz, and Mehmet Ceylan. 2023. Generative Artificial Intelligence Acceptance Scale: A Validity and Reliability Study. International Journal of Human–Computer Interaction, 1–13. [Google Scholar] [CrossRef]
  98. Yin, Ming, Bingxu Han, Sunghan Ryu, and Min Hua. 2023. Acceptance of Generative AI in the Creative Industry: Examining the Role of AI Anxiety in the UTAUT2 Model. In Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, pp. 288–310. [Google Scholar] [CrossRef]
  99. Zacharis, Georgios, and Kleopatra Nikolopoulou. 2022. Factors predicting University students’ behavioral intention to use eLearning platforms in the post-pandemic normal: An UTAUT2 approach with ‘Learning Value’. Education and Information Technologies 27: 12065–82. [Google Scholar] [CrossRef]
100. Zhang, Weiwei. 2020. A Study on the User Acceptance Model of Artificial Intelligence Music Based on UTAUT. Journal of the Korea Society of Computer and Information 25: 25–33. [Google Scholar] [CrossRef]
  101. Zhang, Zhiyong, and Ke-Hai Yuan. 2018. Practical Statistical Power Analysis Using Webpower and R. Granger: ISDSA Press. [Google Scholar] [CrossRef]
  102. Zhu, Wenjuan, Lei Huang, Xinni Zhou, Xiaoya Li, Gaojun Shi, Jingxin Ying, and Chaoyue Wang. 2024. Could AI Ethical Anxiety, Perceived Ethical Risks and Ethical Awareness About AI Influence University Students’ Use of Generative AI Products? An Ethical Perspective. International Journal of Human-Computer Interaction, 1–23. [Google Scholar] [CrossRef]
Figure 2. Results of the structural model. Note: * p < 0.05, ** p < 0.01, ns = non-significant.
Table 1. Measurement scale and factor loadings, means, and standard deviation (SD).
| Constructs | Items | Description | Mean | SD | Measure Definition and Source |
|---|---|---|---|---|---|
| Performance expectancy (PE) | PE1 | I expect generative AI applications to provide new and innovative ideas. | 4.16 | 0.89 | PE “refers to the extent to which individuals believe that using a system will help them attain gains in job performance or enhance their performance in learning processes” (Venkatesh et al. 2003). |
| | PE2 | These applications help me accomplish tasks quickly. | 4.17 | 0.82 | |
| | PE3 | Using generative AI applications increases my work performance. | 4.03 | 0.91 | |
| | PE4 | The use of generative AI applications brings about positive change. | 3.99 | 0.91 | |
| | PE5 | Using generative AI applications enhances my chances of solving the problems I encounter. | 4.01 | 0.95 | |
| | PE6 | Generative AI applications help me create content in a short time without affecting quality. | 3.93 | 0.98 | |
| | PE7 | I expect these apps to produce diverse content that meets my needs. | 4.00 | 0.94 | |
| Effort expectancy (EE) | EE1 | Applications are capable of completing tasks autonomously without the need for constant human intervention. | 3.79 | 0.98 | EE is defined “as the degree of ease or effort associated with the use of technology” (Venkatesh et al. 2003). |
| | EE2 | Generative AI applications are easy to use. | 4.01 | 0.87 | |
| | EE3 | I can be good at using applications without needing much technical experience. | 3.97 | 0.89 | |
| | EE4 | My interaction with generative AI applications is clear and understandable. | 3.99 | 0.86 | |
| Facilitating conditions (FCs) | FC1 | Generative AI applications align with other technologies I use. | 3.98 | 0.88 | FC is defined as “the degree to which an individual believes that an organisational and technical infrastructure exists to support use of the system” (Venkatesh et al. 2003). |
| | FC2 | I can get help from others when I have difficulty using apps. | 3.99 | 0.92 | |
| | FC3 | If I encounter any problems while using generative AI applications, I have access to the necessary information and technical support to resolve them. | 3.92 | 0.96 | |
| | FC4 | Subscription to generative AI applications matches the systems and software I use. | 3.91 | 0.93 | |
| | FC5 | The information needed to use the apps is available and easy to access. | 4.01 | 0.90 | |
| Social influence (SI) | SI1 | People who are important to me think I should use generative AI applications. | 3.84 | 0.94 | SI is defined “as the extent to which important others, such as family and friends, believe that an individual should use a particular technology” (Venkatesh et al. 2003). |
| | SI2 | People whose opinions I value prefer that I use generative AI applications. | 3.85 | 0.97 | |
| | SI3 | The organisations I work with allow me to use the apps. | 3.80 | 0.96 | |
| | SI4 | My use of generative AI applications increases other people’s interactions with me. | 3.79 | 1.01 | |
| | SI5 | Users of generative AI applications are better placed in their organisations. | 3.86 | 0.95 | |
| Hedonic motivation (HM) | HM1 | Using generative AI applications is fun. | 4.15 | 0.82 | HM is “the fun or pleasure derived from using a technology” (Venkatesh et al. 2012). |
| | HM2 | Generative AI applications provide an impressive experience. | 4.17 | 0.81 | |
| Price value (PV) | PV1 | Companies offer generative AI applications at affordable prices. | 3.70 | 1.04 | PV is “an individual’s trade-off between the perceived benefits of using the system and its monetary cost” (Venkatesh et al. 2012). |
| | PV2 | I consider subscription prices for generative AI applications to be good value for the money. | 3.73 | 1.00 | |
| Habit (HT) | HT1 | Using generative AI applications has become a habit for me. | 3.66 | 1.14 | HT is defined “as the extent to which an individual tends to perform behaviours automatically because of prior learning and experiences with the technology” (Venkatesh et al. 2012). |
| | HT2 | Using generative AI applications has become natural for me. | 3.97 | 1.10 | |
| User trust (UT) | UT1 | I trust the accuracy of content produced using generative AI. | 3.70 | 0.99 | UT is “users’ confidence in the security and reliability of the technology” (Gefen and Straub 2003). |
| | UT2 | I trust the realism of images produced using generative AI. | 3.62 | 0.99 | |
| | UT3 | The video produced by generative AI is as close to the truth as possible. | 3.73 | 1.01 | |
| | UT4 | The sounds produced by generative AI approximate natural sounds. | 3.84 | 1.01 | |
| Behavioural intention (BI) | BI1 | I intend to continue using generative AI applications in the future. | 4.16 | 0.88 | BI refers to “individuals’ willingness and intention to use a particular technology for a specific task or purpose” (Venkatesh et al. 2012). |
| | BI2 | I will always try to use generative AI applications in my life. | 4.01 | 0.95 | |
| | BI3 | I plan to continue using generative AI applications frequently. | 4.04 | 0.89 | |
| User behaviour (UB) | UB1 | I use generative AI applications regularly in my work. | 3.88 | 0.97 | UB “aims to explain user technology acceptance and usage behaviour. According to the model, actual use refers to the extent to which users actually use a technology for a specific task or purpose” (Strzelecki et al. 2024). |
| | UB2 | I use generative AI applications on various devices I own. | 3.93 | 0.98 | |
| | UB3 | I spend a lot of time using generative AI applications. | 3.65 | 1.09 | |
Table 2. Respondents’ profile.
| Variables | Categories | Frequency | Percent |
|---|---|---|---|
| Gender | Male | 250 | 50.4 |
| | Female | 246 | 49.6 |
| Age | 18–27 years | 289 | 58.3 |
| | 28–37 years | 106 | 21.4 |
| | 38–47 years | 77 | 15.5 |
| | 48 years and over | 24 | 4.8 |
| Residency status | Citizen | 265 | 53.4 |
| | Resident | 231 | 46.6 |
| Experience with GenAI | Less than one year | 363 | 73.2 |
| | More than one year | 133 | 26.8 |
| Interest in GenAI applications | Always | 103 | 20.8 |
| | Sometimes | 219 | 44.2 |
| | Rarely | 174 | 35.1 |
Table 3. Measurement model results.
| Factors | Items | Loadings (λ) | α | CR | AVE |
|---|---|---|---|---|---|
| Behavioural intention (BI) | BI1 | 0.894 | 0.881 | 0.927 | 0.808 |
| | BI2 | 0.894 | | | |
| | BI3 | 0.909 | | | |
| Effort expectancy (EE) | EE1 | 0.734 | 0.904 | 0.904 | 0.704 |
| | EE2 | 0.859 | | | |
| | EE3 | 0.874 | | | |
| | EE4 | 0.881 | | | |
| Facilitating conditions (FCs) | FC1 | 0.783 | 0.877 | 0.911 | 0.672 |
| | FC2 | 0.773 | | | |
| | FC3 | 0.854 | | | |
| | FC4 | 0.833 | | | |
| | FC5 | 0.851 | | | |
| Hedonic motivation (HM) | HM1 | 0.921 | 0.800 | 0.909 | 0.833 |
| | HM2 | 0.904 | | | |
| Habit (HT) | HT1 | 0.934 | 0.731 | 0.876 | 0.780 |
| | HT2 | 0.829 | | | |
| Performance expectancy (PE) | PE1 | 0.765 | 0.919 | 0.935 | 0.674 |
| | PE2 | 0.825 | | | |
| | PE3 | 0.835 | | | |
| | PE4 | 0.828 | | | |
| | PE5 | 0.835 | | | |
| | PE6 | 0.824 | | | |
| | PE7 | 0.834 | | | |
| Price value (PV) | PV1 | 0.898 | 0.812 | 0.913 | 0.840 |
| | PV2 | 0.935 | | | |
| Social influence (SI) | SI1 | 0.816 | 0.889 | 0.918 | 0.693 |
| | SI2 | 0.854 | | | |
| | SI3 | 0.813 | | | |
| | SI4 | 0.828 | | | |
| | SI5 | 0.849 | | | |
| User behaviour (UB) | UB1 | 0.862 | 0.828 | 0.897 | 0.744 |
| | UB2 | 0.853 | | | |
| | UB3 | 0.872 | | | |
| User trust (UT) | UT1 | 0.848 | 0.862 | 0.906 | 0.707 |
| | UT2 | 0.858 | | | |
| | UT3 | 0.843 | | | |
| | UT4 | 0.815 | | | |
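The reliability statistics in Table 3 can be re-derived from the standardized loadings alone. The sketch below is illustrative, assuming the conventional composite reliability and AVE formulas used in PLS-SEM reporting (CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)); AVE = Σλ²/k); it reproduces the behavioural intention (BI) row of the table.

```python
def cr_ave(loadings):
    """Composite reliability (CR) and average variance extracted (AVE)
    for one reflective construct, from standardized factor loadings."""
    sum_l = sum(loadings)
    sum_sq = sum(l * l for l in loadings)
    error = sum(1 - l * l for l in loadings)  # standardized error variances
    cr = sum_l ** 2 / (sum_l ** 2 + error)
    ave = sum_sq / len(loadings)
    return cr, ave

# Loadings for BI1-BI3 taken from Table 3
cr, ave = cr_ave([0.894, 0.894, 0.909])
print(round(cr, 3), round(ave, 3))  # 0.927 0.808, matching the BI row
```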
Table 4. Discriminant validity results.
Fornell–Larcker Results

| | BI | EE | FC | HM | HT | PE | PV | SI | UB | UT |
|---|---|---|---|---|---|---|---|---|---|---|
| BI | 0.899 | | | | | | | | | |
| EE | 0.611 | 0.839 | | | | | | | | |
| FC | 0.554 | 0.659 | 0.819 | | | | | | | |
| HM | 0.657 | 0.586 | 0.565 | 0.913 | | | | | | |
| HT | 0.433 | 0.365 | 0.386 | 0.368 | 0.883 | | | | | |
| PE | 0.646 | 0.659 | 0.618 | 0.634 | 0.429 | 0.821 | | | | |
| PV | 0.244 | 0.354 | 0.407 | 0.273 | 0.161 | 0.287 | 0.917 | | | |
| SI | 0.592 | 0.606 | 0.643 | 0.543 | 0.432 | 0.525 | 0.396 | 0.832 | | |
| UB | 0.633 | 0.503 | 0.545 | 0.521 | 0.564 | 0.509 | 0.317 | 0.626 | 0.862 | |
| UT | 0.524 | 0.519 | 0.494 | 0.474 | 0.345 | 0.537 | 0.268 | 0.432 | 0.482 | 0.841 |

HTMT results

| | BI | EE | FC | HM | HT | PE | PV | SI | UB | UT |
|---|---|---|---|---|---|---|---|---|---|---|
| BI | | | | | | | | | | |
| EE | 0.698 | | | | | | | | | |
| FC | 0.625 | 0.758 | | | | | | | | |
| HM | 0.781 | 0.699 | 0.669 | | | | | | | |
| HT | 0.533 | 0.473 | 0.483 | 0.493 | | | | | | |
| PE | 0.716 | 0.737 | 0.685 | 0.737 | 0.547 | | | | | |
| PV | 0.285 | 0.426 | 0.479 | 0.334 | 0.201 | 0.328 | | | | |
| SI | 0.664 | 0.693 | 0.725 | 0.642 | 0.526 | 0.577 | 0.467 | | | |
| UB | 0.741 | 0.601 | 0.638 | 0.639 | 0.682 | 0.583 | 0.389 | 0.731 | | |
| UT | 0.597 | 0.609 | 0.565 | 0.565 | 0.429 | 0.599 | 0.321 | 0.487 | 0.565 | |

The diagonal values in the Fornell–Larcker matrix are the square roots of the AVEs. The off-diagonal values are correlations between the various constructs.
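The Fornell–Larcker criterion in Table 4 requires each construct's √AVE (the diagonal) to exceed its correlations with every other construct. A minimal sketch of that check, using the BI row of Table 4 and the BI AVE from Table 3:

```python
import math

# Fornell-Larcker check for behavioural intention (BI):
# sqrt(AVE) must exceed BI's correlation with each other construct.
ave_bi = 0.808                                     # from Table 3
corr_bi = [0.611, 0.554, 0.657, 0.433, 0.646,      # BI's correlations with
           0.244, 0.592, 0.633, 0.524]             # EE, FC, HM, ..., UT (Table 4)

diag = math.sqrt(ave_bi)                           # the diagonal entry
passes = all(diag > r for r in corr_bi)
print(round(diag, 3), passes)                      # 0.899 True
```

The same loop applied to each construct reproduces the conclusion that discriminant validity holds for the full matrix.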
Table 5. The most widely used GenAI applications among Arab Gulf state users.
| GenAI Applications | Frequency | Percent |
|---|---|---|
| ChatGPT | 396 | 79.8 |
| Gemini | 127 | 25.6 |
| Copilot | 95 | 19.2 |
| Midjourney | 75 | 15.1 |
| DALL·E 3 | 45 | 9.1 |
| Snapchat (My AI) | 16 | 3.2 |
| Others | 12 | 2.4 |
Note: Respondents were allowed to choose more than one alternative for the GenAI applications they use.
Table 6. Uses for GenAI applications in everyday activities.
| Questions | Categories | Frequency | Percent |
|---|---|---|---|
| Writing and editing text | Always | 83 | 16.7 |
| | Sometimes | 222 | 44.8 |
| | Rarely | 191 | 38.5 |
| Editing and processing images | Always | 121 | 24.4 |
| | Sometimes | 188 | 37.9 |
| | Rarely | 187 | 37.7 |
| Animation production | Always | 242 | 48.8 |
| | Sometimes | 144 | 29.0 |
| | Rarely | 110 | 22.2 |
| Film and visual programmes industry | Always | 242 | 48.8 |
| | Sometimes | 148 | 29.8 |
| | Rarely | 106 | 21.4 |
| Translating text into other languages | Always | 70 | 14.1 |
| | Sometimes | 188 | 37.9 |
| | Rarely | 238 | 48.0 |
| Designing ads and posters | Always | 170 | 34.3 |
| | Sometimes | 174 | 35.1 |
| | Rarely | 152 | 30.6 |
| Checking spelling and grammar | Always | 117 | 23.6 |
| | Sometimes | 188 | 37.9 |
| | Rarely | 191 | 38.5 |
| Creating engineering designs | Always | 243 | 49.0 |
| | Sometimes | 135 | 27.2 |
| | Rarely | 118 | 23.8 |
Table 7. Awareness of the distinction between content produced using GenAI and content produced by humans.
| Questions | Categories | Frequency | Percent |
|---|---|---|---|
| It is difficult for me to differentiate between texts produced using generative AI and those written by humans. | Always | 75 | 15.1 |
| | Sometimes | 295 | 59.5 |
| | Rarely | 126 | 25.4 |
| I can easily differentiate between images produced via generative AI and regular images. | Always | 76 | 15.3 |
| | Sometimes | 212 | 42.7 |
| | Rarely | 208 | 41.9 |
| I can distinguish between videos produced through generative AI and those created by humans. | Always | 78 | 15.7 |
| | Sometimes | 220 | 44.4 |
| | Rarely | 198 | 39.9 |
| Ads produced by humans are easier to spot than ads produced using generative AI. | Always | 82 | 16.5 |
| | Sometimes | 252 | 50.8 |
| | Rarely | 162 | 32.7 |
Table 8. The results of the structural model’s path coefficients.
| Hypotheses | Relationships | Path Coefficients (β) | t-Values | p-Values | Decisions |
|---|---|---|---|---|---|
| H1 | PE→BI | 0.203 | 3.406 | 0.001 | Supported |
| H2 | EE→BI | 0.127 | 2.058 | 0.040 | Supported |
| H3 | SI→BI | 0.196 | 3.036 | 0.002 | Supported |
| H4 | FC→BI | −0.009 | 0.156 | 0.876 | Not supported |
| H5 | FC→UB | 0.185 | 3.890 | 0.000 | Supported |
| H6 | HM→BI | 0.279 | 3.876 | 0.000 | Supported |
| H7 | PV→BI | −0.056 | 1.554 | 0.121 | Not supported |
| H8 | HT→BI | 0.082 | 1.997 | 0.046 | Supported |
| H9 | HT→UB | 0.308 | 6.069 | 0.000 | Supported |
| H10 | UT→BI | 0.124 | 2.487 | 0.013 | Supported |
| H11 | UT→UB | 0.105 | 2.406 | 0.016 | Supported |
| H12 | BI→UB | 0.343 | 6.391 | 0.000 | Supported |

Note: PE = performance expectancy, EE = effort expectancy, SI = social influence, HM = hedonic motivation, PV = price value, FC = facilitating conditions, HT = habit, UT = user trust, BI = behavioural intention, UB = user behaviour.
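The p-values in Table 8 follow from the bootstrap t-values. As a rough cross-check (a sketch only: it assumes the large-sample standard-normal approximation to the bootstrap t-distribution, which holds when thousands of resamples are used), the two-tailed p-value is 2·(1 − Φ(|t|)):

```python
import math

def two_tailed_p(t):
    """Two-tailed p-value for a bootstrap t-statistic under the
    standard-normal approximation: 2 * (1 - Phi(|t|)) = 1 - erf(|t|/sqrt(2))."""
    return 1 - math.erf(abs(t) / math.sqrt(2))

print(f"{two_tailed_p(3.406):.3f}")  # 0.001 -> H1 (PE->BI) in Table 8
print(f"{two_tailed_p(2.058):.3f}")  # 0.040 -> H2 (EE->BI) in Table 8
```

Small third-decimal discrepancies against the table are expected, since PLS-SEM software reports p-values from the empirical bootstrap distribution rather than this normal approximation.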
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ali, M.S.M.; Wasel, K.Z.A.; Abdelhamid, A.M.M. Generative AI and Media Content Creation: Investigating the Factors Shaping User Acceptance in the Arab Gulf States. Journal. Media 2024, 5, 1624-1645. https://doi.org/10.3390/journalmedia5040101

