Article

An Alien in the Newsroom: AI Anxiety in European and American Newspapers

by Pablo Sanguinetti and Bella Palomo *
School of Communication, University of Malaga, 29071 Málaga, Spain
* Author to whom correspondence should be addressed.
Soc. Sci. 2024, 13(11), 608; https://doi.org/10.3390/socsci13110608
Submission received: 29 July 2024 / Revised: 31 October 2024 / Accepted: 3 November 2024 / Published: 7 November 2024
(This article belongs to the Special Issue Contemporary Digital Journalism: Issues and Challenges)

Abstract: The media portrayal of artificial intelligence (AI) directly impacts how audiences conceptualize this technology and, therefore, its use, development, and regulation. This study aims to measure a key aspect of this problem: the feeling of AI anxiety conveyed by news outlets that represent this technology as a sort of “alien” that is autonomous, opaque, and independent of humans. To do so, we build an AI anxiety index based on principal component analysis (PCA) and apply it to a corpus of headlines (n = 1682) about AI published before and after the launch of ChatGPT in ten newspapers: The New York Times, The Guardian, El País, Le Monde, Frankfurter Allgemeine Zeitung, San Francisco Chronicle, Manchester Evening News, La Voz de Galicia, Ouest France, and Münchner Merkur. The results show that ChatGPT not only boosted the number of AI headlines (×5.16) but also reduced positive sentiment (−26.46%) and increased negative sentiment (+58.84%). The AI anxiety index also grew (+10.59%), albeit driven by regional media (+61.41%), while it fell in national media (−6.82%). Finally, the discussion of the variables that compose the index reveals the opportunities and challenges faced by national and regional media in avoiding the feeling of AI anxiety.

1. Introduction

Artificial intelligence (AI) is considered the most disruptive technology of our time (Păvăloaia and Necula 2023). Its transformative potential targets a wide range of industrial, intellectual, and social applications (Dwivedi et al. 2021), particularly after the “generative AI breakout year” of 2023 (Chui et al. 2023) that followed the launch of the chatbot ChatGPT, the fastest-growing consumer application in history with 100 million monthly users in just two months (Hu 2023). AI has evolved from a technical subject to an economic, cultural, philosophical, and ethical phenomenon (Sanguinetti 2023).
Despite its growing presence, AI remains an opaque technology, difficult to understand and broadly perceived as a “black box” (Brauner et al. 2023). Its real contours appear blurred at both the societal and policy levels (Hudson et al. 2023). Since its beginnings, its terminology has reverberated with mythological, magical, fictional, and even religious tones (Giuliano 2020; Natale and Ballatore 2017). Even the term “AI” is controversial, as it is as widely used as it is loosely defined (Brennen et al. 2018) and serves as an “umbrella term” (Nguyen and Hekman 2024) for a series of systems that work in different domains and tasks.
In this challenging context, journalists play a key role. The previous literature shows that media coverage of technology helps shape its reality in a variety of ways, ranging from public perception to collective discourse and policy making, from individual understanding to research incentives and personal use (Zhai et al. 2020; Cave et al. 2019; Moriniello et al. 2024; The Royal Society 2018). This is especially true when the technology in question is emergent, evolving, and not entirely defined, as in the case of AI (Natale and Ballatore 2017; Donk et al. 2012; Scheufele and Lewenstein 2005). The impact of a technology’s portrayal is so relevant that Coeckelbergh (2023) proposed adding the concept of “narrative responsibility” to the other ethical issues surrounding AI, such as bias, privacy, and transparency. Romele (2024) argued that “a comprehensive ethics of AI must address the way AI is communicated and narrated”.
This study contributes to this objective by focusing on a key aspect of AI representation in the news media: a feeling of unsettledness and uncertainty conveyed by portraying AI as a sort of “alien” that is autonomous, opaque, and independent of humans. This misalignment with the reality of the technology has gained importance within studies in this field under the term ‘AI anxiety’ (Sartori and Bocca 2023).
Our central research question is the following (RQ1): How has the launch of ChatGPT influenced the level of AI anxiety in news media coverage across national and regional newspapers? Derived from this central question, we also aim to understand the following (RQ2): What are the key factors contributing to AI anxiety in regional and national media? To answer these questions, we build an AI anxiety index through principal component analysis (PCA) based on a series of variables observed in the previous literature. Then, we apply this index to measure a corpus of headlines (n = 1682) about AI published over a one-year period before and after the launch of ChatGPT in ten European and American newspapers: The New York Times, The Guardian, El País, Le Monde, Frankfurter Allgemeine Zeitung, San Francisco Chronicle, Manchester Evening News, La Voz de Galicia, Ouest France, and Münchner Merkur. The results show that the level of AI anxiety clearly grew after the launch of ChatGPT (+10.59%), albeit driven by the regional media (+61.41%), while it fell in the national media (−6.82%).

1.1. Media Portrayal of AI

Until recently, the importance of media coverage for the construction of informed public perceptions of AI was largely neglected by scholars. For this reason, Romele (2022) diagnosed a “blind spot in AI ethics”, citing as an example the fact that the 881-page Oxford Handbook of Ethics of AI (Dubber et al. 2020) does not devote a single line to communication about AI. However, a review of the literature shows that this gap has started to be filled. In recent years, a growing number of studies have focused on topics such as public perception and media portrayal of AI, applying a variety of qualitative, quantitative, and computational methods (Brause et al. 2023; Nguyen and Hekman 2024). From the diversity of studies in this new field, some common insights emerge.
(a) Anthropomorphism and polarization: Dominant stories on AI frequently oversimplify its complexity (Cave et al. 2020) and polarize between exaggerated fears and hopes, between catastrophism and solutionism (Chubb et al. 2024). Instead of more nuanced, realistic, and inclusive coverage, media tend to magnify the power of AI systems, nurturing the expectation of a pseudo-artificial general intelligence, defined as a collective of technologies capable of solving nearly any problem (Brennen et al. 2022). Part of this distorted view is related to a general trend toward anthropomorphism (Salles et al. 2020), a bias that extends to images illustrating AI stories (Romele 2022). Rather than clarifying the algorithmic and statistical reality of current machine learning systems, mainstream media reinforce public narratives about “scary robots” (Cave et al. 2019). The first studies to cover the impact of ChatGPT and generative models confirmed “sensationalized” coverage (Roe and Perkins 2023) but found that the framing is “more nuanced than a simple dichotomy between positive and negative”.
(b) The industry’s influence on narratives: Media coverage is deeply influenced by industry sources. Many stories uncritically replicate the discourse of companies that pursue specific agendas, particularly those in big tech (Chubb et al. 2024). As a result, economic framing and business angles predominate over other areas and perspectives (Brause et al. 2023). Simultaneously, however, a content analysis of five major American newspapers from 2009 to 2018 conducted by Chuan et al. (2019) also showed that ethics “dramatically increased” from 2017 to 2018.
(c) Positive coverage: In line with the corporate interests mentioned in the previous point, several studies found that media representation of AI is mostly positive (Garvey and Maskal 2020; Zeng et al. 2022; Korneeva et al. 2023), challenging the assumption of a negative bias against AI in the news.
(d) Lack of diversity: Fictional narratives reinforce Western perspectives and a particular approach to race, for example, identifying AI with “whiteness” (Cave and Dihal 2020). Also, researchers focus strongly on Western media outlets, particularly those from the US and UK (Brause et al. 2023). Even though some scholars have analyzed media coverage from other countries such as China (van Noort 2024), Germany (Köstler and Ossewaarde 2022), Turkey (Sarisakaloğlu 2021), and the Netherlands (Vergeer 2020), among others, only a few of them (for example, Wang et al. 2023) offered cross-cultural and international comparisons.
(e) A growing interest in AI: Finally, it is worth noting that the quantitative boom of media coverage on AI unleashed by ChatGPT (and demonstrated by several studies, including this one) is not entirely new. This increase extends a trend that started in 2009, at least in the US (Fast and Horvitz 2016). According to the framing analysis of news media portrayal of AI by Nguyen and Hekman (2024), media interest in this technology steadily grew over the past decade and nearly quadrupled from 2010 to 2015.

1.2. AI Anxiety: Beyond the Positive–Negative Dichotomy

Many of the above-mentioned studies rely on the analysis of positive and negative coverage of AI. But this binary scheme does not sufficiently account for some contradictory outcomes. For instance, news media paint a much more positive picture of AI’s potential than the social view of this technology (Wang et al. 2023). The same gap appears in the first version of the Latin American Artificial Intelligence Index (2023), which shows a difference between the positive tone in digital news outlets (42% optimistic and 13% pessimistic) and the more critical opinions on social media (only 23% optimistic and 31% pessimistic). Similarly, a survey of over 5000 people on emotional responses to AI, conducted by Sartori and Bocca (2023), concluded not only that the lay imaginary about this technology is predominantly negative but also that some of the supposedly utopian features covered by the survey (immortality, dominance, gratification) aroused high levels of concern. Gebru and Torres (2024) also went beyond the positive and negative schema and considered that both techno-utopianism and apocalyptic narratives of AI are “two sides of the same coin”. Namely, a series of organizations, personalities, and world-famous experts working on AI “divert resources toward trying to build AGI [artificial general intelligence] and stopping their version of an apocalypse in the far future, while dissuading the public from scrutinizing the actual harms that they cause in their attempts to build AGI” (Gebru and Torres 2024, p. 19).
These nuances demonstrate that the key differentiation in analyzing coverage of AI is not between positive and negative; rather, it is between a sober and realistic representation of the technology on the one hand and an exaggerated and distorted one (both for good and for bad) on the other. The concept of ‘AI anxiety’ accounts for that difference by focusing on the wrong conceptualization of what AI is and can be.
An example of AI anxiety can be found in the historian Yuval Noah Harari’s following statement about AI: “We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation” (Harari 2023). In short, the author argues that (a) AI is an alien reality; (b) humans have not developed but rather “encountered” it; (c) therefore, they do not understand it; (d) this implies that their responsibility and accountability is limited; (e) in conclusion, it is a technology that carries with it the worst possible danger: the extinction of the species. The opposite of this “alien” narrative is not necessarily a positive one but a more realistic one. An example of this is the sociotechnical approach defined by the Distributed AI Research Institute, the platform created by Gebru: “AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial” (Distributed AI Research Institute 2022). This sentence encapsulates the opposite of each point of the “alien” narrative: (a) AI is a reality constructed by people; (b) its development, therefore, is neither independent nor predetermined but depends on us; (c) the responsibility and accountability for its impact fall on humans; (d) it involves endowing the technology with deliberate processes and diverse perspectives; (e) in conclusion, AI can be a beneficial reality for the species.

1.3. Components of AI Anxiety

The first step in measuring the level of AI anxiety is to identify its key components in order to track them in a text and integrate them into a single index. Our literature review on the causes of AI anxiety identified three main areas related to technology misconceptions (socio-technical blindness, anthropomorphism, future orientation) and one related to journalistic practices (clickbait).
First, Johnson and Verdicchio (2017a, 2017b) considered that one of the main causes of AI anxiety is so-called “sociotechnical blindness”, the failure to recognize that “AI is a system and always and only operates in combination with people and social institutions”. Related to this, they pointed to a second factor: a confusion about the concept of autonomy that mixes ‘autonomy’ as a key trait that makes us human (linked to aspects such as freedom, choice, and morality) with the ‘autonomy’ that is assumed for certain machines, a concept that can imply various capabilities (from generating random numbers to interacting with the environment) but not free will or the ability to make decisions.
Second, the tendency to anthropomorphize AI was proposed by Sartori and Bocca (2023) as an additional cause for AI anxiety. This is not a new problem: AI has historically been conceptualized in anthropomorphic terms (Watson 2019). In the field of narrative representation, Placani (2024) pointed out that “anthropomorphic language is so prevalent in AI that it seems inescapable”.
Third, Sartori and Bocca (2023) pointed out that the Western idea of modernity exhibits “a clear future-oriented posture” which is “intertwined with uncertainty and risk”, two common anxiety triggers. In similar terms, Johnson and Verdicchio (2017a) also attributed AI anxiety to an inaccurate conception of technological development that tends to jump to the endpoint of a path (for example, the creation of superhuman artificial intelligence) without thinking carefully about the steps needed to get to that endpoint.
Fourth, and given the role played by news media in shaping the public perception of AI, the causes of AI anxiety must also be traced back to journalism and its current context. A salient factor in this situation that is directly related to anxiety is the growing competition for audiences in a digital environment, a phenomenon that particularly affects the area of analysis of this study: the headlines. Beyond its primary function of giving a clear idea of an article’s content, a news headline in the digital realm offers a major strategy to attract the readers’ attention (Kuiken et al. 2017). The most pronounced version of this tendency, known as clickbait, presents some common features such as incomplete information, unanswered questions, forward referencing, exaggeration, and appealing expressions (Bazaco et al. 2019). These contribute to the hype around a topic such as AI and, therefore, must be considered when studying this subject, especially as they compound another challenge faced by newsrooms worldwide: the lack of AI training and expert editors who can critically analyze the rapid evolution of the sector (Beckett 2019).
These four areas of AI anxiety factors serve in this study as the basis to identify specific textual variables that can be traced and quantified in headlines.

2. Materials and Methods

To answer our research questions, we first selected a series of features based on the existing literature on the aforementioned concepts: anthropomorphism, autonomy, future, uncertainty, sociotechnical narratives, and clickbait, as well as the style guidelines for reporting about AI that we proposed in Beckett et al. (2023). We conducted a semi-automated analysis to detect these features in a corpus of headlines (n = 1682) published by ten leading newspapers from five countries in four languages, over a one-year period before and after the launch of ChatGPT in November 2022 (June 2022 to May 2023). Then, we selected the most relevant features and performed a principal component analysis (PCA) to create a single “anxiety index” for each headline. This allowed us to study how this phenomenon varies across countries and types of news outlets and how it evolved after the emergence of ChatGPT.

2.1. Corpus

The outlets analyzed for our study were chosen from European countries with different languages (United Kingdom, Spain, France, Germany) and the United States as an additional reference. These are the top-ranked countries in the SCImago classification (Trillo-Domínguez et al. 2023) as of the end of the studied period (summer 2023). From each country, the main newspaper with international coverage and the main newspaper with regional coverage according to the same ranking were chosen: The Guardian, El País, Le Monde, Frankfurter Allgemeine Zeitung, and The New York Times (national coverage) and Manchester Evening News, La Voz de Galicia, Ouest France, Münchner Merkur, and San Francisco Chronicle (regional coverage). The analysis was limited to online content over one year, from six months before the launch of ChatGPT by OpenAI (30 November 2022) to six months thereafter, that is, from 1 June 2022 to 31 May 2023.
The unit of analysis was the headline. Here, we followed previous studies on headlines about multiple topics such as COVID-19 (Aslam et al. 2020), fake news (Calvillo and Smelter 2020), partisan news (Ross et al. 2021), and the subject of our analysis, AI coverage (Roe and Perkins 2023; Ouchchy et al. 2020). Even if the headline is only a partial and sometimes misleading component of the whole news story, it is this limitation that makes it a valuable unit for analyzing the potential shortcomings of AI representation (Leufer 2020; Beckett et al. 2023). Moreover, headlines offer unique features that are particularly relevant to this study. For example, headlines in online news outlets may act as clickbait to make the reader access the whole article (Kuiken et al. 2017). They also show higher levels of anthropomorphism (Cheng et al. 2024).
The corpus was extracted from the media and PR database Muck Rack (muckrack.com). Our selection of this platform was based on the dual criteria of longevity and prestige. Several previous studies have used this tool to identify political reporters (Parmelee et al. 2019), access the 500 most-followed journalists on Twitter (Lasorsa et al. 2012), discover the top-mentioned journalist at a news organization (Vis 2013), or perform content analysis (Canella 2023). Although it has been noted that it is not representative of all the information available on the Internet, it probably offers the best compilation available (Lasorsa et al. 2012).
The Boolean search included terms linked to “artificial intelligence” in the four languages analyzed, with particularities such as the use of the spelling “A.I.” (with periods) by The New York Times or the forced capitalization of “AI” to avoid false positives with the French verb “ai”. The names of specific models launched during the period (Dall-e, ChatGPT, Bing, Bard, and Midjourney) were also included. The original set of online headlines from the ten newspapers (n = 1956) was processed by eliminating duplicate headlines, articles in languages other than that of the source outlet, and false positives, such as the use of “bard” to refer to Shakespeare, yielding the final dataset (n = 1682).
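As an illustration of this filtering step, the logic can be approximated in Python with two regular expressions: one case-sensitive for the acronym and one case-insensitive for phrases and model names. This is a minimal sketch, not our exact Boolean query, and the term lists are abbreviated:

```python
import re

# Case-sensitive pattern for the acronym: matches "AI" or "A.I."
# but not the lowercase French verb "ai".
ACRONYM = re.compile(r"\bA\.I\.|\bAI\b")

# Phrases and model names in the four languages (case-insensitive).
# Matches for ambiguous names such as "Bard" or "Bing" still require
# the manual false-positive screening described above.
PHRASES = re.compile(
    r"artificial intelligence|intelligence artificielle"
    r"|inteligencia artificial|künstliche intelligenz"
    r"|chatgpt|dall-e|midjourney|\bbard\b|\bbing\b",
    re.IGNORECASE,
)

def mentions_ai(headline: str) -> bool:
    """True if the headline matches any AI-related search term."""
    return bool(ACRONYM.search(headline) or PHRASES.search(headline))

# The lowercase French verb "ai" does not trigger a match.
assert mentions_ai("AI tools reshape newsrooms")
assert not mentions_ai("J'ai vu un film hier soir")
```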

2.2. Variables

Six variables were directly provided by Muck Rack: headline, publication date, URL, media outlet, language, and news outlet’s country. We completed the basic dataset by automatically adding three more columns for each headline: a translation into English (with Google Translate in a Google Sheets spreadsheet), the type of outlet (regional or national), and whether the publication date was before or after the launch of ChatGPT.
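A minimal sketch of how these derived columns can be added with pandas follows; the file and field names are assumptions, as the actual Muck Rack export may use different labels:

```python
import pandas as pd

# Hypothetical file and column names for the Muck Rack export.
df = pd.read_csv("muckrack_export.csv", parse_dates=["publication_date"])

CHATGPT_LAUNCH = pd.Timestamp("2022-11-30")
REGIONAL = {"Manchester Evening News", "La Voz de Galicia", "Ouest France",
            "Münchner Merkur", "San Francisco Chronicle"}

# Type of outlet and pre/post-ChatGPT flag for each headline.
df["outlet_type"] = df["media_outlet"].map(
    lambda name: "regional" if name in REGIONAL else "national")
df["post_chatgpt"] = df["publication_date"] >= CHATGPT_LAUNCH
```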
A series of features based on the concepts mentioned in the previous section were then added through different methods. We used spaCy and NLTK, two popular natural language processing libraries in Python, to identify headlines in which AI was an agent. To do so, we extracted the grammatical subjects (in active voice) or agents (in passive voice) of each headline and identified the cases where the subject or agent was AI or a related term, like a particular model (ChatGPT, Bing, Midjourney, Dall-e, Bard). The same method was used to list the verbs attributed to AI in those cases. The verbs were then manually classified as human or non-human actions. We also performed a named entity recognition (NER) task to extract mentions of names, places, organizations, etc., in each headline and to identify the first word type.
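The following sketch illustrates this dependency-based approach with spaCy. It is a simplified reconstruction rather than the exact script used, and the term list is abbreviated:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # applied to the English translations

# Abbreviated list of AI-related terms (single tokens only in this sketch).
AI_TERMS = {"ai", "a.i.", "chatgpt", "bing", "bard", "midjourney"}

def ai_agency(headline: str):
    """Return (is_ai_agent, verb_lemma): is AI the grammatical subject
    (active voice) or agent (passive voice), and which verb does it govern?"""
    doc = nlp(headline)
    for token in doc:
        if token.text.lower() not in AI_TERMS:
            continue
        if token.dep_ == "nsubj":
            return True, token.head.lemma_        # active: head is the verb
        if token.dep_ == "pobj" and token.head.dep_ == "agent":
            return True, token.head.head.lemma_   # passive: "by" -> main verb
    return False, None

def entity_count(headline: str) -> int:
    """Count mentions of persons, organizations, and places (NER)."""
    doc = nlp(headline)
    return sum(1 for ent in doc.ents
               if ent.label_ in ("PERSON", "ORG", "GPE", "LOC"))
```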
Other features were added by searching for given terms with regular expression (RegEx) formulas in Google Sheets: the number of textual marks of future or uncertainty; question, exclamation, and ellipsis signs; and personal and possessive pronouns. Additionally, we calculated the Flesch Reading Ease (Kincaid et al. 1975), a popular readability score, with the Python library textstat, and the average number of characters per word in the original language with the spreadsheet. We used the GPT-4o model to detect whether each headline contained mentions of at least one of the four main negative scenarios identified by Cave et al. (2020): dehumanization, alienation, obsolescence, and uprising.1 We compared the results with the output of several sentiment analysis methods and manually coded the headlines that presented divergent classifications across methods. The sentiment analysis tools applied were an integration of ChatGPT 3.5 in Google Sheets, the textual processing tool SEANCE, and the Hugging Face model “facebook/bart-large-mnli”, based on the BART architecture (Lewis et al. 2019), used to perform a zero-shot classification task (Yin et al. 2019) between positive and negative emotions.
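A condensed sketch of this feature extraction in Python follows. The regular expressions mirror the term lists in Table 1 and the zero-shot classifier uses the positive/negative labels described above; the exact formulas used in Google Sheets differ:

```python
import re
import textstat
from transformers import pipeline

FUTURE_UNCERTAINTY = re.compile(
    r"\b(will|shall|won't|future|coming|going to|would|may|could|might"
    r"|some|maybe|perhaps|probably)\b", re.IGNORECASE)
SIGNAL_WORDS = re.compile(r"\b(this|therefore|how|why|when|which|who)\b",
                          re.IGNORECASE)
# First- and second-person personal/possessive pronouns (Table 1).
PRONOUNS = re.compile(r"\b(i|me|my|mine|we|us|our|ours|you|your|yours)\b",
                      re.IGNORECASE)

# Zero-shot sentiment classification with the BART-based MNLI model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def headline_features(headline_en: str) -> dict:
    sentiment = classifier(headline_en,
                           candidate_labels=["positive", "negative"])
    return {
        "future_uncertainty": len(FUTURE_UNCERTAINTY.findall(headline_en)),
        "signal_words": bool(SIGNAL_WORDS.search(headline_en)),
        "pronouns": bool(PRONOUNS.search(headline_en)),
        "question": "?" in headline_en,
        "readability": textstat.flesch_reading_ease(headline_en),
        "sentiment": sentiment["labels"][0],  # top-ranked label
    }
```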
To select the most relevant variables, all were analyzed with the open-source statistical software Jamovi v. 2.3.28.0 to detect relevant features and correlations. Optimizing the internal reliability of the considered items (up to a McDonald’s omega coefficient of 0.587) led to the removal of five variables (Table 1).
The remaining nine variables were standardized and used to perform a principal component analysis (PCA) to extract three main components and their scores (Table 2). Finally, these scores were added to build a single “AI anxiety” index, which we normalized on a scale of 0–10 to improve its interpretability.
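A minimal sketch of this final step with scikit-learn follows. Note that, unlike the Jamovi analysis, plain scikit-learn PCA does not apply the varimax rotation reported in Table 2, so the resulting scores are only an approximation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def anxiety_index(X: np.ndarray) -> np.ndarray:
    """X: (n_headlines, 9) matrix of the nine retained variables.
    Returns one anxiety score per headline, normalized to 0-10."""
    X_std = StandardScaler().fit_transform(X)          # standardize variables
    scores = PCA(n_components=3).fit_transform(X_std)  # three component scores
    raw = scores.sum(axis=1)                           # add the three scores
    return 10 * (raw - raw.min()) / (raw.max() - raw.min())
```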

3. Results

3.1. A Quantitative and Uneven Boom of AI Coverage After ChatGPT

The largest numbers of news headlines related to AI appeared in the two national English-language dailies, The Guardian and The New York Times. The two Spanish newspapers came next, followed by the two German newspapers. The French appeared in the bottom half, as shown in Table 3. The corpus indicated that the emergence of ChatGPT at the end of November 2022 dramatically increased the number of articles in the analyzed media by more than five times (5.16), from 273 in the six months prior to the appearance of the popular model to 1409 in the six months after.
However, there are important differences between the various news outlets. The moderate growth for Frankfurter Allgemeine Zeitung (2.33) and La Voz de Galicia (1.97) is remarkable, in both cases because their AI coverage was already high before the turning point in November 2022. The opposite was seen for the regional Münchner Merkur (37.60), Manchester Evening News (27.00), and San Francisco Chronicle (10.70), suggesting that they rode the wave of a popular issue that they had not been covering in depth before. The other national media maintained rates closer to the average, ranging from 4.93 (The Guardian) to 10.09 (The New York Times). The increase was also more stable and closer to the average when we grouped national (5.42) and regional (4.73) newspapers.
The monthly count also showed that the growth was continuous. From November onwards, each month exceeded the previous one in terms of the number of articles. The May figure (446) was ten times that for November (44), and a drastic increase occurred in all countries (Figure 1).
The drastic increase in stories about AI after ChatGPT is not surprising. However, the fact that it occurred differently in the various newspapers studied seems to be relevant to this study. It is to be expected that the newspapers that suddenly increased their coverage of AI offered less balanced and nuanced stories than those that had been following the topic since before the launch of the popular chatbot, a hypothesis confirmed in the following sections.

3.2. Less Positive Coverage After ChatGPT

Before diving into the results of our AI anxiety index to answer our research question, it is worth following the strategy of previous studies and examining the evolution of sentiment towards AI in our corpus. Two major trends emerged. The first is a clear dominance of positive over negative headlines, both before and after the launch of ChatGPT. This confirms the results of previous studies mentioned in Section 1.1. Second, the proportion of headlines with negative emotion increased after the launch of ChatGPT. Figure 2 shows these trends according to the sentiment analysis conducted with the Hugging Face model.
These results were confirmed by a sentiment analysis of headlines with the Sentiment Analysis and Cognition Engine (SEANCE, Crossley et al. 2017). This open-source tool for text processing uses predefined word vectors from several source databases (including EmoLex and VADER). The comparison between headlines before and after ChatGPT (Table 4) showed that the hype unleashed by the chatbot not only made the media coverage less positive and more negative but also increased feelings such as anger, anticipation, disgust, sadness, and surprise, while reducing others such as joy and trust.
These results anticipate the major trends in our AI anxiety index presented in the next section. However, automated sentiment analysis presents limited reliability for this type of study, particularly when part of the emotion of each headline requires cultural context for its interpretation. Is the headline “Artificial intelligence to detect breast cancer in the poorest women” (El País, 2 October 2022) conveying a positive or negative emotion? What about “Generative A.I. Is Here. Who Should Control It?” (The New York Times, 21 October 2022)? The answer is ambiguous for a human, let alone a machine learning model. This is precisely the reason why it makes sense to measure “anxiety” instead of “sentiment”.

3.3. More “AI Anxiety” After ChatGPT, but Only in Regional News Outlets

Answering our RQ1, the index created for this study shows that the level of AI anxiety measured over one year increased by 10.59% after the launch of ChatGPT in the overall figure for all the newspapers studied (Table 5).
Three English-speaking newspapers lead the classification with a higher level of AI anxiety over the analyzed period, Manchester Evening News, The Guardian, and The New York Times, suggesting more sensationalist and hyped coverage in these countries. News outlets from the other three countries occupy the last spots in the table: Frankfurter Allgemeine Zeitung, Le Monde, and La Voz de Galicia.
The most striking finding is the uneven distribution of the general trend. While the anxiety index fell by 6.82% in the five national newspapers as a whole, it shot up by 61.41% in the regional ones. This pattern applies to all media in each group, with a single exception: an increase for El País (12.94%).
Figure 3 shows this development month by month and grouped by type of outlet. The index remained quite stable for national outlets, with regular fluctuations that look like clickbait cycles of topics discovered, exploited, and soon forgotten. Regional media showed lower AI anxiety than national media with similar curves until the end of November, when ChatGPT was presented and this parallel development abruptly changed; while the index tended to stabilize in national outlets after this date, the regionals presented sharper and continuous growth until they surpassed national media in February. Overall, the trend in AI anxiety over the year moved slightly downwards in the group of five national outlets and upwards for the five regional outlets. Both started to decline in March.
Turning to the main factors behind these general trends (RQ2), a number of qualitative observations are worth highlighting. While all the AI anxiety variables increased in the regional media after the launch of ChatGPT, the opposite occurred among national media, where six out of nine fell and only three increased (Table 6). The level of anxiety among regional outlets is mainly due to indicators of AI agency, anthropomorphism, and negative topics. A decrease in the number of concrete entities mentioned is also an important feature for regional media, as it reveals how often they connect AI stories to local reality and protagonists by including their names in headlines. On the other hand, among national outlets, the group of anxiety components that increased is primarily linked to clickbait features, such as greater use of pronouns, better readability scores (texts that are easier to understand), and the use of the future tense and uncertainty references.
More importantly, there is some correlation between a steeper increase in AI coverage after the launch of ChatGPT (Table 3) and a larger increase in the anxiety index (Table 5). The three newspapers that increased their production the most (Münchner Merkur, Manchester Evening News, and San Francisco Chronicle) also showed significant gains in the index (19.48%, 16.17%, and 49.53%, respectively). On the opposite side, smaller changes in the coverage of Le Monde, The Guardian, and Frankfurter Allgemeine Zeitung are associated with a decrease in the anxiety index (−9.10%, −6.23%, and −19.66%). An exception to this trend is La Voz de Galicia, where an insignificant increase in AI coverage after ChatGPT contrasts with a surge of 59.41% in the anxiety index, probably because this newspaper started from the lowest pre-ChatGPT level and any change represents a higher percentage.
Finally, there is a certain consistency across the variables. The newspapers at the top of Table 7 show a higher degree of anxiety (more orange) in most values, with only a few relevant exceptions. The most important is the NER value. We consider that fewer mentions of concrete persons, places, and organizations contribute to a higher level of anxiety (this is the reason why we inverted the number of entities extracted for each headline: more entities mean less anxiety). However, this variable correlates inversely with the rest, as already noted during the process of selecting which variables were to be included in this study.
The table also highlights other telling exceptions, such as the frequent use of AI as an agent, linked to anthropomorphizing verbs, in the fourth and fifth spots of the table (Ouest France and Münchner Merkur), while this important feature is low in the newspaper at the top of the list (Manchester Evening News).

3.4. Two Opposite Cases

Having a standardized index allows comparisons to be made not only between different time series but also between different media outlets. As an example, we take two outlets with different characteristics at the extremes of the general classification of AI anxiety in Table 5.
La Voz de Galicia, with 35.7M monthly visits to its website according to the platform SimilarWeb, is one of the main regional media outlets in Spain. From a central newsroom in the city of La Coruña, it covers the entire region of Galicia (in the northwest of Spain) with several local editions. In our ranking, it stands out for having the lowest anxiety index among the ten newspapers analyzed. This can be linked to another fact: it is the outlet that increased its coverage the least after ChatGPT (excluding Ouest France, whose figure is misleading due to its low number of articles). This suggests that it had been closely following AI-related issues even before the “hype” unleashed by OpenAI’s chatbot. Its approach also offers an interesting example of how to do so without conveying an anxious tone: many of its articles on AI, before and after ChatGPT, focus on local issues linked to Galicia and La Coruña, which is reflected in the large number of entities detected. Additionally, it shows an interest in telling current and developing stories (less use of the future tense), with a more concrete approach (fewer question marks) and a focus on human or governmental protagonists (less use of AI as an agent and fewer references to the danger of AI).
The Guardian, one of the most prestigious national newspapers in Europe, with 342M monthly visits to its website (ten times the figure of La Voz de Galicia), also provided intensive coverage of AI before ChatGPT, although with fewer pre-launch articles than La Voz de Galicia (54 vs. 79) and more than twice the increase after the launch of the popular chatbot (4.93 vs. 1.97 times). As an international reference, the newspaper offers more ‘delocalized’ stories (more headlines without specific entities) and more critical opinion articles by well-known names (which encourages a more subjective tone, with more mentions of negative topics). Because of its global readership and large online reach, the collected headlines also display a style that is more aware of the importance of SEO and the need for clickable angles (more questions, many in the future tense; greater readability; and more signal words and second-person allusions to the reader).
Table 8 sums up both newspapers’ performance across all variables.

4. Discussion and Conclusions

This article contributes to the emergent field of studies about AI narratives by proposing a systematic and semiautomated way of analyzing one of its most prominent components, the concept of AI anxiety, through an index based on a series of nine variables. Answering RQ1, our index shows that the launch of ChatGPT, one of the most important milestones in the history of AI, increased the level of AI anxiety in the media. However, the two groups of analyzed newspapers present divergent patterns. While national media exhibited a slight decline in AI anxiety post-ChatGPT, regional outlets showed a substantial increase. Regarding RQ2, our results indicate that greater AI anxiety after ChatGPT correlates in almost all cases with a sudden increase in the number of news stories about AI, caused by the chatbot’s launch in November 2022. This surge was particularly drastic in some regional media, suggesting that they were more reactive and less equipped with the resources for balanced coverage compared to national outlets. However, this also means that lower scores in the anxiety index depend not on the prominence, the prestige, or the reach of a news outlet but rather on its sustained commitment to the coverage of AI and the corresponding expertise in the newsroom. In this context, the brief close-up on the cases of La Voz de Galicia and The Guardian revealed the unexpected benefits of regional newspapers in terms of nuanced and realistic coverage, such as the exploration of local stories and protagonists. Conversely, more powerful and global media face their own challenges, such as the loss of “ground” derived from an international perspective or the quest for a greater impact on social media.
These findings contribute to moving the study of AI narratives beyond the binary categories of positive–negative, hope–fear, and utopian–apocalyptic that characterize previous research (Roe and Perkins 2023; Moriniello et al. 2024; van Noort 2024). Instead, our index provides a new analytical tool to better quantify and understand the characteristics and causes of the misalignment between real and represented AI and the subsequent feeling of “anxiety” (Sartori and Theodorou 2022). This shift also moves away from an already outdated view of technology as deterministic and inevitable, and it aligns with more productive theoretical frameworks, such as the actor network theory developed by Latour (2007) or the mediation theory by Verbeek (2010). From this perspective, media coverage should abandon the portrayal of AI as an “alien” entity that is unexplainable, autonomous, and eventually lethal and rather move to a more relational and dynamic, less dichotomic and fixed conceptualization of the relation between humans and machines.
There are some limitations of this study that open interesting opportunities for further research. First, the analyzed corpus presents imbalances (such as the low number of items corresponding to some regional outlets) that may have distorted the results. A broader and more balanced dataset could compensate for this effect. Second, a longer period of analysis could provide additional insights, particularly considering that the impact of ChatGPT was not fully stabilized only six months after its launch (a trend that just started to be detected in our data, as shown in Figure 3). Furthermore, the previous six months were already distorted by a first wave of generative AI models with strong media coverage (Dall-e 2, Midjourney, Stable Diffusion). Third, as negative sentiment is a key component of anxiety, new paths to incorporate this variable into a composite index should be found. Although our index includes a variable that considers mentions of topics such as extinction, obsolescence, or alienation, there is still scope to measure negative emotional language in more detail. Fourth, in the same way that we included in our index the key dimension of anthropomorphism by looking at both the doer and the action in the headlines, further elements of the sentence could be analyzed, like adjectives or metaphors. A qualitative analysis could solve the intrinsic limitations of an automated analysis in this and the previous point. Fifth, even though headlines are particularly effective in conveying anxiety, it is crucial to broaden the analysis to cover additional components of news items, starting with the “blind spot” (Romele 2022) that represents another key element: images that illustrate AI stories. Finally, the dimension of audience perception is also needed to complete the landscape of AI representation in the press. A variable like engagement with each story in social media would add an extra layer to an AI anxiety index such as the one proposed in the present article.
By addressing these factors, future research can further refine this index and expand its potential to enhance our understanding of AI’s portrayal and its influence on public perception.

Author Contributions

Conceptualization, P.S.; methodology, P.S.; software, P.S.; validation, P.S.; formal analysis, P.S.; investigation, P.S.; resources, P.S.; data curation, P.S. and B.P.; writing—original draft preparation, P.S.; writing—review and editing, P.S. and B.P.; visualization, P.S.; supervision, B.P.; project administration, B.P.; funding acquisition, B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The dataset is publicly available on Zenodo at https://zenodo.org/records/14046281.

Acknowledgments

This article is part of the project “Journalistic Applications of AI to Mitigate Disinformation: Trends, Uses, and Perceptions of Professionals and Audiences” (PID2023-147486OB-I00). We would like to thank Ivan Logrosan Tercero for his assistance with the statistical analysis.

Conflicts of Interest

The authors declare no conflicts of interest.

Note

1. The following prompt was used to perform this task:
“This is a news headline related to artificial intelligence. Read it slowly and answer one by one the following questions. AI or the AI model mentioned in the headline is represented as a technology that…
(a) can dehumanize us and make us lose our essence and values.
(b) can uprise and escape human control.
(c) can make humans obsolete and replace them.
(d) is dangerous because it can be used to discriminate, kill, disinform, steal, etc.
Answer all the questions separated by commas. Answer only “yes” or “no” for each one, without further explanation. For example, your answer could look like this: “yes, no, no, no”.”
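For illustration, a hypothetical wrapper for sending this prompt to GPT-4o could look as follows; this is a sketch rather than the script used for the study, and it assumes an OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def anxiety_topics(headline: str, prompt: str) -> list[bool]:
    """Send the prompt above plus a headline and parse the comma-separated
    "yes"/"no" answers into four booleans (dehumanization, uprising,
    obsolescence, dangerous use)."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[{"role": "user",
                   "content": f"{prompt}\n\nHeadline: {headline}"}],
    )
    answers = response.choices[0].message.content.lower().split(",")
    return [a.strip().startswith("yes") for a in answers]
```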

References

  1. Aslam, Faheem, Tahir Mumtaz Awan, Jabir Hussain Syed, Aisha Kashif, and Mahwish Parveen. 2020. Sentiments and Emotions Evoked by News Headlines of Coronavirus Disease (COVID-19) Outbreak. Humanities & Social Sciences Communications 7: 1–9. [Google Scholar] [CrossRef]
  2. Bazaco, Ángela, Marta Redondo, and Pilar Sánchez-García. 2019. Clickbait as a Strategy of Viral Journalism: Conceptualisation and Methods. Revista Latina de Comunicación Social 74: 94–115. [Google Scholar] [CrossRef]
  3. Beckett, Charlie. 2019. New Powers, New Responsibilities. A Global Survey of Journalism and Artificial Intelligence. London School of Economics, Polis, Journalism AI, November 18. [Google Scholar]
  4. Beckett, Charlie, Pablo Sanguinetti, and Bella Palomo. 2023. New Frontiers of the Intelligent Journalism. In Blurring Boundaries of Journalism in Digital Media: New Actors, Models and Practices. Edited by María-Cruz Negreira-Rey, Jorge Vázquez-Herrero, José Sixto-García and Xosé López-García. Cham: Springer International Publishing, pp. 275–88. [Google Scholar] [CrossRef]
  5. Brauner, Philipp, Alexander Hick, Ralf Philipsen, and Martina Ziefle. 2023. What Does the Public Think about Artificial intelligence?—A Criticality Map to Understand Bias in the Public Perception of AI. Frontiers in Computer Science 5: 1113903. [Google Scholar] [CrossRef]
  6. Brause, Saba Rebecca, Jing Zeng, Mike S. Schäfer, and Christian Katzenbach. 2023. Media Representations of Artificial Intelligence: Surveying the Field. In Handbook of Critical Studies of Artificial Intelligence. Cheltenham: Edward Elgar Publishing, pp. 277–88. [Google Scholar] [CrossRef]
  7. Brennen, J. Scott, Philip N. Howard, and Rasmus Kleis Nielsen. 2018. An Industry-Led Debate: How UK Media Cover Artificial Intelligence. Oxford: Reuters Institute for the Study of Journalism. [Google Scholar] [CrossRef]
  8. Brennen, J. Scott, Philip N. Howard, and Rasmus Kleis Nielsen. 2022. What to Expect When You’re Expecting Robots: Futures, Expectations, and Pseudo-Artificial General Intelligence in UK News. Journalism 23: 22–38. [Google Scholar] [CrossRef]
  9. Calvillo, Dustin P., and Thomas J. Smelter. 2020. An Initial Accuracy Focus Reduces the Effect of Prior Exposure on Perceived Accuracy of News Headlines. Cognitive Research: Principles and Implications 5: 55. [Google Scholar] [CrossRef]
  10. Canella, Gino. 2023. Journalistic Power: Constructing the ‘Truth’ and the Economics of Objectivity. Journalism Practice 17: 209–25. [Google Scholar] [CrossRef]
  11. Cave, Stephen, and Kanta Dihal. 2020. The Whiteness of AI. Philosophy & Technology 33: 685–703. [Google Scholar] [CrossRef]
  12. Cave, Stephen, Kate Coughlan, and Kanta Dihal. 2019. ‘Scary Robots’: Examining Public Responses to AI. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’19. New York: Association for Computing Machinery, pp. 331–37. [Google Scholar] [CrossRef]
  13. Cave, Stephen, Kanta Dihal, and Sarah Dillon. 2020. AI Narratives: A History of Imaginative Thinking About Intelligent Machines. Oxford: Oxford University Press. Available online: https://play.google.com/store/books/details?id=S53SDwAAQBAJ (accessed on 18 November 2023).
  14. Cheng, Myra, Kristina Gligoric, Tiziano Piccardi, and Dan Jurafsky. 2024. AnthroScore: A Computational Linguistic Measure of Anthropomorphism. arXiv arXiv:2402.02056. [Google Scholar]
  15. Chuan, Ching-Hua, Wan-Hsiu Sunny Tsai, and Su Yeon Cho. 2019. Framing Artificial Intelligence in American Newspapers. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’19. New York: Association for Computing Machinery, pp. 339–44. [Google Scholar] [CrossRef]
  16. Chubb, Jennifer, Darren Reed, and Peter Cowling. 2024. Expert Views about Missing AI Narratives: Is There an AI Story Crisis? AI & Society 39: 1107–26. [Google Scholar] [CrossRef]
  17. Chui, Michael, Bryce Hall, Alex Singla, and Alexander Sukharevsky. 2023. The State of AI in 2023: Generative AI’s Breakout Year. McKinsey. Available online: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year#/ (accessed on 23 January 2024).
  18. Coeckelbergh, Mark. 2023. Narrative Responsibility and Artificial Intelligence. AI & Society 38: 2437–50. [Google Scholar] [CrossRef]
  19. Crossley, Scott A., Kristopher Kyle, and Danielle S. McNamara. 2017. Sentiment Analysis and Social Cognition Engine (SEANCE): An Automatic Tool for Sentiment, Social Cognition, and Social-Order Analysis. Behavior Research Methods 49: 803–21. [Google Scholar] [CrossRef] [PubMed]
  20. Distributed AI Research Institute. 2022. About. Available online: https://www.dair-institute.org/about/ (accessed on 12 May 2024).
  21. Donk, André, Julia Metag, Matthias Kohring, and Frank Marcinkowski. 2012. Framing Emerging Technologies: Risk Perceptions of Nanotechnology in the German Press. Science Communication 34: 5–29. [Google Scholar] [CrossRef]
  22. Dubber, Markus Dirk, Frank Pasquale, and Sunit Das. 2020. The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. Available online: https://play.google.com/store/books/details?id=8PQTEAAAQBAJ (accessed on 25 May 2024).
  23. Dwivedi, Yogesh K., Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, Tom Crick, Yanqing Duan, Rohita Dwivedi, John Edwards, Aled Eirug, and et al. 2021. Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. International Journal of Information Management 57: 101994. [Google Scholar] [CrossRef]
  24. Fast, Ethan, and Eric Horvitz. 2016. Long-Term Trends in the Public Perception of Artificial Intelligence. arXiv arXiv:1609.04904. [Google Scholar] [CrossRef]
  25. Garvey, Colin, and Chandler Maskal. 2020. Sentiment Analysis of the News Media on Artificial Intelligence Does Not Support Claims of Negative Bias Against Artificial Intelligence. Omics: A Journal of Integrative Biology 24: 286–99. [Google Scholar] [CrossRef] [PubMed]
  26. Gebru, Timnit, and Émile P. Torres. 2024. The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence. First Monday 29: 13636. [Google Scholar] [CrossRef]
  27. Giuliano, Roberto Musa. 2020. Echoes of Myth and Magic in the Language of Artificial Intelligence. AI and Society 35: 1009–24. [Google Scholar] [CrossRef]
  28. Harari, Yuval Noah. 2023. Yuval Noah Harari Argues That AI Has Hacked the Operating System of Human Civilisation. The Economist. April 28. Available online: https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation (accessed on 12 May 2024).
  29. Hu, Krystal. 2023. ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. Reuters. February 2. Available online: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ (accessed on 25 May 2024).
  30. Hudson, Andrew Dana, Ed Finn, and Ruth Wylie. 2023. What Can Science Fiction Tell Us about the Future of Artificial Intelligence Policy? AI & Society 38: 197–211. [Google Scholar] [CrossRef]
  31. Johnson, Deborah G., and Mario Verdicchio. 2017a. AI Anxiety. Journal of the Association for Information Science and Technology 68: 2267–70. [Google Scholar] [CrossRef]
  32. Johnson, Deborah G., and Mario Verdicchio. 2017b. Reframing AI Discourse. Minds and Machines 27: 575–90. [Google Scholar] [CrossRef]
  33. Kincaid, Peter J., Robert P. Fishburne, Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Institute for Simulation and Training, University of Central Florida. Available online: https://stars.library.ucf.edu/istlibrary/56 (accessed on 21 July 2024).
  34. Korneeva, Ekaterina, Torsten Oliver Salge, Timm Teubner, and David Antons. 2023. Tracing the Legitimacy of Artificial Intelligence: A Longitudinal Analysis of Media Discourse. Technological Forecasting and Social Change 192: 122467. [Google Scholar] [CrossRef]
  35. Köstler, Lea, and Ringo Ossewaarde. 2022. The Making of AI Society: AI Futures Frames in German Political and Media Discourses. AI & Society 37: 249–63. [Google Scholar] [CrossRef]
  36. Kuiken, Jeffrey, Anne Schuth, Martijn Spitters, and Maarten Marx. 2017. Effective Headlines of Newspaper Articles in a Digital Environment. Digital Journalism 5: 1300–14. [Google Scholar] [CrossRef]
  37. Lasorsa, Dominic L., Seth C. Lewis, and Avery E. Holton. 2012. Normalizing Twitter. Journalism Studies 13: 19–36. [Google Scholar] [CrossRef]
  38. Latin American Artificial Intelligence Index. 2023. CENIA. Available online: https://indicelatam.cl/home-en-2024/ (accessed on 5 July 2024).
  39. Latour, Bruno. 2007. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: OUP Oxford. [Google Scholar]
  40. Leufer, Daniel. 2020. Why We Need to Bust Some Myths about AI. Patterns 1: 100124. [Google Scholar] [CrossRef]
  41. Lewis, Mike, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pre-Training for Natural Language Generation, Translation, and Comprehension. arXiv arXiv:1910.13461. [Google Scholar]
  42. Moriniello, Flavio, Ana Martí-Testón, Adolfo Muñoz, Daniel Silva Jasaui, Luis Gracia, and J. Ernesto Solanes. 2024. Exploring the Relationship between the Coverage of AI in WIRED Magazine and Public Opinion Using Sentiment Analysis. Applied Sciences 14: 1994. [Google Scholar] [CrossRef]
  43. Natale, Simone, and Andrea Ballatore. 2017. Imagining the Thinking Machine: Technological Myths and the Rise of Artificial Intelligence. Convergence 26: 3–18. [Google Scholar] [CrossRef]
  44. Nguyen, Dennis, and Erik Hekman. 2024. The News Framing of Artificial Intelligence: A Critical Exploration of How Media Discourses Make Sense of Automation. AI & Society 39: 437–51. [Google Scholar] [CrossRef]
  45. Ouchchy, Leila, Allen Coin, and Veljko Dubljević. 2020. AI in the Headlines: The Portrayal of the Ethical Issues of Artificial Intelligence in the Media. AI & Society 35: 927–36. [Google Scholar] [CrossRef]
  46. Parmelee, John H., Nataliya Roman, Berrin Beasley, and Stephynie C. Perkins. 2019. Gender and Generational Differences in Political Reporters’ Interactivity on Twitter. Journalism Studies 20: 232–47. [Google Scholar] [CrossRef]
  47. Păvăloaia, Vasile-Daniel, and Sabina-Cristiana Necula. 2023. Artificial Intelligence as a Disruptive Technology—A Systematic Literature Review. Electronics 12: 1102. [Google Scholar] [CrossRef]
  48. Placani, Adriana. 2024. Anthropomorphism in AI: Hype and Fallacy. AI and Ethics 4: 691–98. [Google Scholar] [CrossRef]
  49. Roe, Jasper, and Mike Perkins. 2023. ‘What They’re Not Telling You about ChatGPT’: Exploring the Discourse of AI in UK News Media Headlines. Humanities and Social Sciences Communications 10: 1–9. [Google Scholar] [CrossRef]
  50. Romele, Alberto. 2022. Images of Artificial Intelligence: A Blind Spot in AI Ethics. Philosophy & Technology 35: 4. [Google Scholar] [CrossRef]
  51. Romele, Alberto. 2024. The AI Imagery: AI, Ethics, And Communication. In Handbook on the Ethics of Artificial Intelligence. Edited by David J. Gunkel. Cheltenham: Edward Elgar Publishing, pp. 262–73. [Google Scholar]
  52. Ross, Robert M., David G. Rand, and Gordon Pennycook. 2021. Beyond ‘fake News’: Analytic Thinking and the Detection of False and Hyperpartisan News Headlines. Judgment and Decision Making 16: 484–504. [Google Scholar] [CrossRef]
  53. Salles, Arleen, Kathinka Evers, and Michele Farisco. 2020. Anthropomorphism in AI. AJOB Neuroscience 11: 88–95. [Google Scholar] [CrossRef]
  54. Sanguinetti, Pablo. 2023. Tecnohumanismo. Por un diseño estético y narrativo de la inteligencia artificial. Madrid: La Huerta Grande. [Google Scholar]
  55. Sarisakaloğlu, Aynur. 2021. Türkiye’de Yayınlanan Haberlerde Yapay Zeka Teknolojilerinin Olanakları ve Zorlukları Hakkındaki Çerçevelemeler. Türkiye İletişim Araştırmaları Dergisi 37: 20–38. [Google Scholar] [CrossRef]
  56. Sartori, Laura, and Andreas Theodorou. 2022. A Sociotechnical Perspective for the Future of AI: Narratives, Inequalities, and Human Control. Ethics and Information Technology 24: 4. [Google Scholar] [CrossRef]
  57. Sartori, Laura, and Giulia Bocca. 2023. Minding the Gap(s): Public Perceptions of AI and Socio-Technical Imaginaries. AI & Society 38: 443–58. [Google Scholar] [CrossRef]
  58. Scheufele, Dietram A., and Bruce V. Lewenstein. 2005. The Public and Nanotechnology: How Citizens Make Sense of Emerging Technologies. Journal of Nanoparticle Research: An Interdisciplinary Forum for Nanoscale Science and Technology 7: 659–67. [Google Scholar] [CrossRef]
  59. The Royal Society. 2018. Portrayals and Perceptions of AI and Why They Matter. Available online: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf (accessed on 18 November 2023).
  60. Trillo-Domínguez, Magdalena, Ramón Salaverría, Lluís Codina, and Félix De-Moya-Anegón. 2023. SCImago Media Rankings (SMR): Situation and Evolution of the Digital Reputation of the Media Worldwide. Profesional de La Información/Information Professional 32: e320521. [Google Scholar] [CrossRef]
  61. van Noort, Carolijn. 2024. On the Use of Pride, Hope and Fear in China’s International Artificial Intelligence Narratives on CGTN. AI & Society 39: 295–307. [Google Scholar] [CrossRef]
  62. Verbeek, Peter-Paul. 2010. What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park: Penn State Press. Available online: https://play.google.com/store/books/details?id=vURh8gy8nPAC (accessed on 26 May 2024).
  63. Vergeer, Maurice. 2020. Artificial Intelligence in the Dutch Press: An Analysis of Topics and Trends. Communication Studies 71: 373–92. [Google Scholar] [CrossRef]
  64. Vis, Farida. 2013. Twitter as a Reporting Tool for Breaking News. Digital Journalism 1: 27–47. [Google Scholar] [CrossRef]
  65. Wang, Weili, John Downey, and Fan Yang. 2023. AI Anxiety? Comparing the Sociotechnical Imaginaries of Artificial Intelligence in UK, Chinese and Indian Newspapers. Global Media and China 2023: 20594364231196547. [Google Scholar] [CrossRef]
  66. Watson, David. 2019. The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. Minds and Machines 29: 417–40. [Google Scholar] [CrossRef]
  67. Yin, Wenpeng, Jamaal Hay, and Dan Roth. 2019. Benchmarking Zero-Shot Text Classification: Datasets, Evaluation and Entailment Approach. arXiv arXiv:1909.00161. [Google Scholar]
  68. Zeng, Jing, Chung-Hong Chan, and Mike S. Schäfer. 2022. Contested Chinese Dreams of AI? Public Discourse about Artificial Intelligence on WeChat and People’s Daily Online. Information, Communication and Society 25: 319–40. [Google Scholar] [CrossRef]
  69. Zhai, Yujia, Jiaqi Yan, Hezhao Zhang, and Wei Lu. 2020. Tracing the Evolution of AI: Conceptualization of Artificial Intelligence in Mass Media Discourse. Information Discovery and Delivery 48: 137–49. [Google Scholar] [CrossRef]
Figure 1. Articles on AI by month and type of outlet.
Figure 2. Positive and negative emotions per month.
Figure 3. AI anxiety index.
Table 1. Variable descriptions.

Feature                        Description
AI agency                      Is AI (or a related term) the subject or agent of the sentence?
Anthropomorphic verb           Is the action executed by AI typically human?
NER inverted                   Mentions of names, places, organizations, or other entities (inverted: more specific names mean less anxiety)
Anxiety topics                 Does the headline refer to at least one of a series of fears related to AI (dehumanization, alienation, obsolescence, uprising)?
Future tense and uncertainty   How many verbal or semantic references to the future, or textual marks of uncertainty, does the headline contain? (will, shall, won’t, future, coming, going to, would, may, could, might, some, maybe, perhaps, probably)
Personal/possessive pronouns   Does the headline contain at least one personal or possessive pronoun in the first or second person?
Signal words                   Does the headline contain any of the following words: this, therefore, how, why, when, which, who?
Containing question            Is there a question mark in the headline?
Readability score              Flesch Reading Ease score of the headline in English

Excluded variables
Ellipsis                       Is there an ellipsis in the headline?
Containing exclamation         Is there an exclamation mark in the headline?
First word type                Does the headline start with a personal or possessive pronoun?
Shorter words                  Average number of characters per word in the original language
Sentiment analysis             Is the headline sentiment positive or negative?
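To make the feature definitions above concrete, the following Python sketch shows how a few of them could be operationalized. This is our illustration, not the authors’ published code: the keyword lists are copied from Table 1, the function name is hypothetical, and the Flesch Reading Ease score is delegated to the third-party textstat package as an assumed dependency.

```python
# Illustrative sketch of some Table 1 headline features (hypothetical
# implementation; the paper does not publish its extraction code).
import re

import textstat  # assumed third-party dependency for the Flesch score

FUTURE_UNCERTAINTY = {"will", "shall", "won't", "future", "coming",
                      "would", "may", "could", "might", "some", "maybe",
                      "perhaps", "probably"}
SIGNAL_WORDS = {"this", "therefore", "how", "why", "when", "which", "who"}

def headline_features(headline: str) -> dict:
    text = headline.lower()
    tokens = re.findall(r"[a-z']+", text)
    return {
        # Count of future/uncertainty markers; the multi-word marker
        # "going to" is matched on the raw lowercased string.
        "future_uncertainty": sum(t in FUTURE_UNCERTAINTY for t in tokens)
        + text.count("going to"),
        # Binary: does the headline contain a signal word?
        "signal_words": int(any(t in SIGNAL_WORDS for t in tokens)),
        # Binary: is there a question mark in the headline?
        "containing_question": int("?" in headline),
        # Flesch Reading Ease of the (English) headline.
        "readability": textstat.flesch_reading_ease(headline),
    }

print(headline_features("Will AI decide when your job becomes obsolete?"))
```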
Table 2. Result of the PCA: component loadings.

Variable             Component 1   Component 2   Component 3   Uniqueness
AI agency                  0.815                                    0.314
Anthrop. verb              0.839                                    0.294
Anxiety topics             0.354                                    0.758
NER inverted                                          0.790         0.284
Signal words                             0.412        0.479         0.592
Question                                 0.333        0.536         0.597
Future/uncertainty                       0.395                      0.814
Pronouns                                 0.553                      0.662
Readability                              0.750                      0.428

Note: ‘varimax’ rotation was used. Blank cells correspond to loadings not reported in the original table.
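As a rough sketch of the dimensionality-reduction step behind Table 2 (the paper’s exact pipeline is not published), the snippet below standardizes nine headline features and extracts three varimax-rotated components. Note that scikit-learn’s PCA class offers no rotation option, so FactorAnalysis with rotation="varimax" serves here as a close stand-in, and the random feature matrix is a placeholder.

```python
# Sketch of the component extraction behind Table 2 (illustrative only;
# the authors' exact pipeline is not published).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1682, 9))  # placeholder: one row per headline, 9 features

X_std = StandardScaler().fit_transform(X)

# scikit-learn's PCA class has no rotation, so FactorAnalysis with
# rotation="varimax" is used as a close stand-in.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(X_std)

loadings = fa.components_.T                   # shape: (9 features, 3 components)
uniqueness = 1 - (loadings ** 2).sum(axis=1)  # variance left unexplained
print(np.round(loadings, 3))
print(np.round(uniqueness, 3))
```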
Table 3. Headlines for each outlet, before and after ChatGPT.

Media Outlet                     Pre-ChatGPT   Post-ChatGPT   Total   Growth (×) *
El País                                   22            181     203           8.23
Frankfurter Allgemeine Zeitung            52            121     173           2.33
La Voz de Galicia                         79            156     235           1.97
Le Monde                                  19            122     141           6.42
Manchester Evening News                    1             27      28          27.00
Münchner Merkur                            5            188     193          37.60
Ouest France                               8              9      17           1.13
San Francisco Chronicle                   10            107     117          10.70
The Guardian                              54            266     320           4.93
The New York Times                        23            232     255          10.09
Total regional                           103            487     590           4.73
Total national                           170            922    1092           5.42
Total                                    273           1409    1682           5.16

* Growth was calculated by dividing the number of articles published after ChatGPT by the number of articles published before ChatGPT.
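A worked instance of the footnote’s formula, using the corpus totals from the last row: the ×5.16 overall growth reported in the abstract is simply the post-ChatGPT count divided by the pre-ChatGPT count.

```python
# Growth factor per the Table 3 footnote: articles after / articles before.
pre, post = 273, 1409
print(round(post / pre, 2))  # 5.16
```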
Table 4. Sentiment analysis with SEANCE: selected models and sentiments, before and after the launch of ChatGPT.

Model     Sentiment       Before    After     Change
Vader     Negative         0.062    0.096     54.84%
          Neutral          0.681    0.716      5.14%
          Positive         0.257    0.189    −26.46%
EmoLex    Anger            0.015    0.019     30.13%
          Anticipation     0.034    0.036      4.32%
          Disgust          0.009    0.009      1.62%
          Joy              0.072    0.048    −32.63%
          Sadness          0.016    0.017     11.22%
          Surprise         0.015    0.016      3.93%
          Trust            0.086    0.070    −18.96%
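The before/after percentage changes in Table 4 can be reproduced in spirit with the underlying lexicons. The sketch below queries the vaderSentiment package directly rather than SEANCE (which wraps the Vader and EmoLex lexicons), and the two one-headline corpora are obvious placeholders.

```python
# Sketch of the before/after comparison using the Vader lexicon directly
# (the paper runs it through SEANCE; corpora here are placeholders).
from statistics import mean

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def mean_scores(headlines):
    scores = [analyzer.polarity_scores(h) for h in headlines]
    return {k: mean(s[k] for s in scores) for k in ("neg", "neu", "pos")}

pre = ["AI helps doctors spot tumours earlier"]       # placeholder corpus
post = ["Could AI decide who loses their job next?"]  # placeholder corpus

before, after = mean_scores(pre), mean_scores(post)
for k in before:
    pct = (after[k] - before[k]) / before[k] * 100 if before[k] else float("nan")
    print(f"{k}: {before[k]:.3f} -> {after[k]:.3f} ({pct:+.2f}%)")
```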
Table 5. AI anxiety index by media outlet, before and after ChatGPT, sorted by total (from more to less anxiety).

Media Outlet                     Pre-ChatGPT   Post-ChatGPT   Total    Change
The Guardian                            4.28           4.02    4.06    −6.23%
Manchester Evening News                 3.66           4.25    4.23    16.17%
The New York Times                      4.36           3.93    3.97    −9.90%
Ouest France                            3.20           3.93    3.59    23.08%
Münchner Merkur                         3.00           3.58    3.57    19.48%
San Francisco Chronicle                 2.41           3.61    3.51    49.53%
El País                                 2.93           3.31    3.27    12.94%
Frankfurter Allgemeine Zeitung          3.77           3.03    3.25   −19.66%
Le Monde                                3.19           2.90    2.94    −9.10%
La Voz de Galicia                       1.94           3.10    2.71    59.41%
Total national                          3.84           3.58    3.62    −6.82%
Total regional                          2.15           3.48    3.25    61.41%
Total                                   3.20           3.54    3.49    10.59%
Table 6. Evolution of features linked to AI anxiety after ChatGPT in regional and national news outlets.

Feature              National    Regional
AI agent             −100.57%     108.08%
Anthrop. verb        −127.84%     131.71%
Anxiety topic         −69.97%     103.82%
NER inverted          −47.60%      95.15%
Signal words         −103.25%      96.01%
Question              −87.37%      94.19%
Future/uncertainty    134.98%      51.08%
Pronouns              362.05%      73.58%
Readability           257.82%      85.91%
Table 7. Variable values per news outlet, sorted by the added level of AI anxiety.

Outlet                           AI Agent   Human Verb   Anxiety Topic   NER Inverted   Signal Words   Question   Future   Pronouns   Readability
Manchester Evening News            −0.138       −0.135           0.515          −0.06           0.62      −0.05     0.06       0.45          0.48
The Guardian                        0.082        0.019           0.268           0.05          −0.01       0.26     0.15       0.17          0.47
The New York Times                  0.102        0.032           0.058           0.00           0.16       0.07    −0.02       0.13          0.68
Ouest France                        0.304        0.186           0.123           0.32           0.00      −0.20    −0.09      −0.13         −0.27
Münchner Merkur                     0.102        0.192           0.002           0.18          −0.03      −0.04    −0.06      −0.05         −0.06
San Francisco Chronicle            −0.044       −0.074           0.131           0.05          −0.13      −0.04     0.07      −0.10          0.31
El País                            −0.132       −0.082          −0.014           0.10           0.06      −0.12     0.06       0.11         −0.55
Frankfurter Allgemeine Zeitung     −0.048        0.020          −0.156           0.17           0.04      −0.06    −0.22      −0.22         −0.14
Le Monde                            0.049       −0.108          −0.033           0.06          −0.22      −0.18    −0.07      −0.22         −0.67
La Voz de Galicia                  −0.170       −0.058          −0.418          −0.50          −0.09      −0.10    −0.01      −0.14         −0.54

Note: Average Z-values are given for each variable over the whole year. In the published version, a color scale indicates the relative value of each variable compared with the other outlets (more orange, more “anxiety”).
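For clarity on the note above: a Z-value standardizes each feature to mean 0 and standard deviation 1 across the corpus, so a positive cell means an outlet scores above the corpus average on that feature. A minimal sketch follows, assuming standard z-scoring (the authors do not publish their exact normalization).

```python
# Assumed z-standardization behind Table 7: scale a feature column to
# mean 0 / SD 1 across all headlines, then average per outlet.
import numpy as np

feature = np.array([3.0, 1.0, 2.0, 6.0])        # placeholder feature values
z = (feature - feature.mean()) / feature.std()  # z = (x - mean) / SD
print(np.round(z, 3))                           # [ 0.    -1.069 -0.535  1.604]
```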
Table 8. Comparison between La Voz de Galicia and The Guardian.

                     La Voz de Galicia            The Guardian                 Difference
Variable             Pre      Post     Total      Pre      Post     Total      Pre      Post     Total
AI agent             −0.475   −0.015   −0.170     0.087    0.081    0.082      0.563    0.096    0.252
Human verb           −0.275    0.052   −0.058     0.160   −0.010    0.019      0.435   −0.061    0.077
Anxiety topic        −0.929   −0.160   −0.418     0.390    0.243    0.268      1.319    0.403    0.686
NER inverted         −0.723   −0.389   −0.501     0.009    0.054    0.046      0.731    0.443    0.548
Signal words         −0.253   −0.011   −0.092     0.202   −0.053   −0.010      0.455   −0.042    0.082
Question             −0.297   −0.004   −0.103     0.414    0.232    0.263      0.712    0.236    0.365
Future/uncertainty    0.027   −0.033   −0.013     0.033    0.174    0.150      0.005    0.207    0.163
Pronouns             −0.188   −0.109   −0.136     0.032    0.195    0.168      0.220    0.305    0.303
Readability          −0.719   −0.447   −0.539     0.550    0.457    0.473      1.268    0.904    1.011
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
