
Friends or Foes? Exploring the Framing of Artificial Intelligence Innovations in Africa-Focused Journalism

by Abdullateef Mohammed 1,*, Adeola Abdulateef Elega 1, Murtada Busair Ahmad 1 and Felix Oloyede 2

1 Department of Mass Communication, Faculty of Arts and Social Sciences, Nile University of Nigeria, Abuja 900001, Nigeria
2 Family Medicine, Faculty of Medicine, Memorial University of Newfoundland, St John’s, NL A1B 3V6, Canada
* Author to whom correspondence should be addressed.
Journalism and Media 2024, 5(4), 1749–1770; https://doi.org/10.3390/journalmedia5040106
Submission received: 22 September 2024 / Revised: 8 November 2024 / Accepted: 13 November 2024 / Published: 18 November 2024

Abstract:
The rise and widespread use of generative AI technologies, including ChatGPT, Claude, Synthesia, DALL-E, Gemini, Meta AI, and others, have raised fresh concerns in journalism practice. While the development represents a source of hope and optimism for some practitioners, including journalists and editors, others express a cautious outlook given the possibilities of its misuse. By leveraging the Google News aggregator service, this research conducts a content and thematic analysis of Africa-focused journalistic articles that touch on the impacts of artificial intelligence technology in journalism practice. Findings indicate that, while the coverage is predominantly positive, the tone of the articles reflects a news industry cautiously navigating the integration of AI. Ethical concerns regarding AI use in journalism were frequently highlighted, which indicates significant apprehension on the part of the news outlets. A close assessment of views presented in a smaller portion of the reviewed articles revealed a sense of unease around the concentration of power in the hands of tech giants. The impact of AI on the financial stability of media outlets was framed as minimal at present, suggesting a neutral, wait-and-see position among news outlets. Our analysis of the predominantly quoted sources in the articles revealed that industry professionals and technology experts emerge as the most vocal voices shaping the narrative around AI’s practical applications and technical capabilities on the continent.

1. Introduction

AI has become an essential component of the automated processes of organizations across industries. Over the past twenty years, there have been significant breakthroughs in the field of artificial intelligence (AI), which have led to increased adoption (Acemoglu and Restrepo 2018). According to Tyson and Zysman (2022), artificial intelligence is a form of technological advancement that amplifies routine-based changes by incorporating intelligence into automation systems that replace humans in physical chores, as well as in routine and progressively nonroutine cognitive tasks. Shekhar (2019) defines artificial intelligence as a scientific field that allows computers and machines to acquire knowledge, make decisions, and employ logical thinking.
Anticipated advancements in this field are projected to be impressive, and numerous analysts forecast that AI will revolutionize business and interactions globally (Acemoglu and Restrepo 2018). The demand for AI has grown due to its capacity to address complex problems with limited human resources and knowledge, and within a restricted time frame (Shekhar 2019). Despite these capacities, worries over the incorporation of AI, especially in content-dependent settings like newsrooms, have become evident. Journalism involves judgment, interpretation, creativity, and communication; thus, it remains uncertain whether AI would provide the same level of disruptive potential and benefits in journalism as it does in other industries that deal with physical products (Chan-Olmsted 2019).
Media framing tends to compress reality by highlighting some parts of it while obscuring others (Bryant and Dillard 2019). The media is perceived to be powerful in this regard due to its ability to set the agenda (Mohammed et al. 2022). By being society’s main conduit of information on numerous subjects, the media wields substantial sway over individuals’ perspectives, beliefs, and actions (Vladisavljević 2015) through its selective reporting and presentation methods. The journalistic role has been argued to vary between informational–instructive, analytical–deliberative, critical–monitorial, advocative–radical, developmental–educative, and collaborative–facilitative (Hanitzsch and Vos 2018), which collectively empower journalists to influence AI-related discourse, shape public opinion about the technology, and deepen understanding of its potential risks or benefits—depending on how the narrative is framed. In the end, this framing can bridge the gap between apprehension and awareness and create a more equitable understanding of AI’s impact; in the same way, it can create negative perceptions of the technology if it is continually portrayed as a crisis or threat.
Existing research highlights significant aspects of AI’s impact on journalism but leaves important gaps, especially in the African context. The study by Radoli (2024) examines the transition from traditional media practices to AI-driven news production, by offering a broad perspective on the historical and ongoing changes within the media industry. Similarly, Nguyen and Hekman (2024) chart AI news frames across major international outlets like The New York Times and The Guardian to identify dominant framing strategies and associated data risks; meanwhile, the study by de-Lima-Santos and Ceron (2021) analyses the evolution of AI in the news industry through the JournalismAI initiative, by examining a database of AI applications categorized into major areas, including machine learning and natural language processing, to showcase global implementations and provide theoretical discussions on best practices and innovative uses of AI. Yet, these studies focus on global examples that do not quite address perspectives within and about African newsrooms. We sought to contribute to this discourse in our analysis of Africa-focused journalistic stories published over the last four years. Our objectives for this study, therefore, are as follows:
  • To examine perceptions of AI in the sampled articles;
  • To understand how AI adoption is framed in relation to journalism’s sustainability;
  • To identify the influential sources shaping these narratives;
  • To analyse the dominant themes within these articles.
Building on these objectives, the overarching research question guiding our inquiry is: How is AI adoption framed in Africa-focused news? We aim to offer a focused understanding of news media representation of AI and its impact within the African context.

2. Literature Review

2.1. Journalism in the Age of Artificial Intelligence

Innovations in media technology have, over the years, adapted journalism to various formats (Neuberger et al. 2019). The emergence of generative AI is establishing a new era that has the potential to further transform journalism and the content produced by the media (Pavlik 2023). AI machines such as ChatGPT, Synthesia, DALL-E, Google Gemini, and Meta AI are capable of producing images, text articles, reports, and videos from simple text prompts (Carlà et al. 2024). The development of these technologies has the ability to greatly streamline the production process for journalists and media outlets, allowing for quicker and more efficient content creation. According to Shi and Sun (2024), generative AI is currently employed to aid and expedite specific procedural tasks, such as creating content, verifying facts, processing data, generating images, converting speech, and translating. This reduces the workload on humans and enhances efficiency by enabling the generation of more extensive reports and analyses.
Generative AI is currently used sparingly in journalism and media operations (Adjin-Tettey et al. 2024). However, organizations like the Associated Press (AP) reportedly utilize AI for various tasks such as news gathering, production, and distribution (Pavlik 2023). Following their announcement of being the first major news organization to enter into a partnership with OpenAI, the joint proclamation stated that the agreement would enable the AP to investigate potential applications of generative AI in news products and services (Scire 2023). OpenAI has, as a result, trained its models on AP news stories dating back to 1985 to improve the performance of its machines. Liu (2022) also noted the use of generative AI in making magazine covers. This raises the question of whether AI has the potential to replace designers and reduce operational costs, bringing financial sustainability to newsrooms.
The ability to direct and refine the outputs of AI models relies heavily on prompting. In the context of large language models (LLMs), prompting refers to the process where structured queries guide the AI machines to produce specific outputs (Cain 2024), without necessitating specialized technical skills like coding or large-scale computational resources. Studies describe prompting as an essential process for maximizing LLMs’ performance and emphasize that various prompting strategies can directly influence output quality (Bansal 2024). Despite their ease of use, LLMs introduce several challenges in terms of accuracy, reliability, and accountability. Studies highlight that LLMs, though powerful, are prone to generating misleading or biased information, which poses risks, especially when applied in fields like journalism. Researchers Hicks et al. (2024) argue that these systems have “been plagued by persistent inaccuracies in their output”, often referred to as “hallucinations”; but, in their view, they would be more accurately described as “bullshit”, a term the authors believe more precisely captures the behaviour of the technology (p. 1). This behaviour is particularly problematic in journalism, where misinformation can easily spread if LLMs are uncritically relied upon (Leiser 2022). Furthermore, accountability is a persistent issue, as LLMs are typically seen as “black boxes” that offer little transparency regarding their decision-making processes (Barman et al. 2024). The hype surrounding LLMs is, in part, fuelled by the marketing strategies that amplify their perceived capabilities. Tech companies and enthusiasts often overstate what LLMs can achieve, by portraying them as close to fully autonomous systems when, in reality, they are still prone to errors and often perform inconsistently. Markelius et al. (2024) argue that the mechanisms of AI hype stretch far beyond the actual capabilities and presumed transformative power of the technology.
Furthermore, scholars have raised doubts regarding the practicality of machine ethics by challenging whether artificial systems can truly possess moral competence, that is, the ability to make ethically correct decisions in different scenarios. They highlight “moral sensitivity”—the practical ability to identify and understand the ethically significant aspects of various situations, including their cultural context, and their relevance—as a necessary requirement for moral competence (Graff 2024). The majority of advocates for machine ethics do not necessarily hold the belief that artificial moral agents can function as complete moral agents, as there is a general consensus that AI systems lack some characteristics considered essential for complete moral agency, such as intentions, the ability to determine their own rules or purposes, consciousness, or a moral personality. They are therefore prone to making overly simplistic or prejudiced judgments that do not sufficiently tackle the complexity of ethical considerations, especially in a field like journalism (UNESCO 2023).

2.2. African Data and AI Initiatives

In the context of African journalism, the effective use of AI is fundamentally tied to the quality and accessibility of data, which remains a considerable hurdle across the continent (Kiemde and Kora 2020, 2021). This issue presents significant challenges in the development of AI models, particularly because AI systems are only as good as the data they are trained on. The problem is exacerbated by underutilized data, inadequate digital infrastructure, and limited data management skills in many African countries (Gwagwa et al. 2020; Eke et al. 2023). Consequently, AI models tend to rely on data from external sources, which are often not reflective of local realities, thus leading to non-representative AI models that fail to capture the cultural, social, and environmental nuances of African populations (Okolo et al. 2023). Gwagwa et al. (2020) note that Africa still scores very low in its AI readiness, primarily due to the persistent reliance on international partners and tech companies for essential support. This dependency raises concerns about the continent achieving ethically sound AI solutions, as the frameworks presently being adopted equally embed values from the very international corporations upon which Africa depends (Eke et al. 2023).
Efforts to improve data accessibility and inclusivity in AI across Africa have been strengthened by various platforms and collaborations, which have brought some level of resources and representation to the region. Open-source platforms such as Zindi, Kaggle, and openAFRICA offer African journalists, researchers, and practitioners avenues to curate and share locally relevant datasets, which are crucial for addressing the region’s well-documented dataset scarcity (Okolo et al. 2023). These platforms not only facilitate dataset access but also enable community-driven AI development, allowing African data scientists to innovate based on regional data rather than relying solely on externally sourced data that may lack local relevance (Gwagwa et al. 2020). Grassroots initiatives have also emerged as vital contributors to African AI development, often through collaborations that leverage regional expertise to create culturally and linguistically diverse datasets. For instance, the AI4D Africa Language Challenge, a collaboration between Zindi and AI for Development (AI4D), saw over 400 data scientists working to develop datasets for African languages such as Wolof, Hausa, Chichewa, Igbo, Ewe, Kabiye, and Kiswahili, indicating that local initiatives can fill critical gaps left by global technology providers (Zindi 2020). Dubawa, a Nigerian fact-checking platform, equally plays a prominent role in this space. Established by the Premium Times Centre for Investigative Journalism (PTCIJ), Dubawa focuses on providing accurate information, promoting media literacy, and enhancing the quality of journalism in Nigeria, Ghana, Sierra Leone, Liberia, and The Gambia. Through rigorous, AI-powered fact-checking processes, it addresses issues like false information, misinformation, and disinformation, especially around critical topics like health, politics, and social issues.
As a verified signatory of the International Fact-Checking Network (IFCN) code of principles, the platform commits to training journalists and media practitioners, encouraging the use of evidence-based reporting and verification skills to strengthen public trust in media. Additionally, the Lacuna Fund, which supports dataset development for low-resource contexts, not only improves data representation in key sectors like health and agriculture but also reinforces data sovereignty by keeping data control within African research communities. Okolo et al. (2023) contend that, while there is still a long way to go in improving the quality and accessibility of datasets representing the African continent, significant progress has been made thus far.

2.3. Overview of Emerging Ethical Frameworks for AI Use in Journalism

Inquiries into the adoption of AI in journalism are gaining momentum due to AI’s dual potential to enhance newsroom efficiency on one hand and undermine public trust in journalism on the other. Reporters Without Borders (RSF) observed that only a minority of newsrooms have so far established ethical guidelines to govern AI use. This makes defining what constitutes improper AI use in journalism more complex. We conducted searches across reputable databases and professional association websites to identify and retrieve relevant ethical guidelines published by influential organizations in journalism, technology, and ethics.
As RSF has noted, the currently available guidelines mainly address areas such as setting limits on the use of generative AI by journalists, ensuring that private information is not disclosed to or on AI platforms, affirming human responsibility for all published content, and recognizing the risk of bias inherent in generative AI tools. The scope and stringency of these guidelines, however, vary significantly, with some lacking coverage of areas common to the others or addressing different aspects separately. By contrast, when it comes to other AI applications like recommender systems, moderation tools, or forecasting models, media outlets generally have not issued any comprehensive rules or recommendations (Reporters Without Borders 2024).
The Washington Post’s AI policy highlights the importance of transparency about how and when AI is used in an article and suggests that AI should support and accelerate journalistic work in ways consistent with the organization’s news judgment and ethical standards. The newspaper, in its policy, acknowledged its use of AI to suggest related content for readers, translate between languages, and sift through extensive documents or images (The Washington Post 2024). It does, however, recognize the imperfections of AI tools as well as the constant need for verification of their outputs. The newspaper explicitly states that it will not use AI to generate images, video, or visual works that purport to represent reality without disclosing its use, in a bid to maintain the integrity of its published visuals. The policy emphasizes that it remains important for journalists and technologists to vet, edit, and contextualize AI products (The Washington Post 2024).
The Associated Press (AP) equally outlined its approach to the deployment of artificial intelligence on its AI policy page. To the news agency, the ultimate goal is to use AI to streamline workflows, thus allowing journalists to focus on higher-impact tasks. AP acknowledged its use of AI for automatic transcription of video, generation of video shot lists, creation of story summaries, as well as automation of corporate earnings and sports stories (The Associated Press 2024).
Despite these exciting possibilities that AI brings to journalism, concerns about its ethical implications persist. On 6 September 2023, some 26 major media organizations representing a broad spectrum of the creative industry—including the publishing, news, entertainment, magazine, and book publishing industries—released the handbook Global Principles for Artificial Intelligence. This pioneering document provides comprehensive guidance for the development, deployment, use, and regulation of AI systems to ensure that business opportunities and innovation thrive within an ethical and accountable framework in the media and communication industry. The policy stresses that respect for intellectual property is paramount, with AI developers, operators, and deployers required to avoid unauthorized use of proprietary content and to ensure adequate remuneration for rights holders (STM 2023).
Furthermore, the recently published Paris Charter on AI and Journalism represents one of the first international ethical benchmarks for AI in journalism, as it emphasizes the fundamental principles essential for ensuring reliable news and information in the AI era. It equally provides that AI technologies should enhance the capacity to deliver quality and trustworthy information while upholding the core principles of truthfulness, accuracy, fairness, impartiality, and independence (European Federation of Journalists, CDMSI 2023). While the charter is a pioneer in AI ethics for journalism, it does have some drawbacks. For instance, the World Association of Newspapers and News Publishers (WAN-IFRA) has expressed concerns about the recommendation for AI systems in journalism to be independently evaluated beforehand. WAN-IFRA argues that the decision on which AI systems to implement should remain with the news media company, rather than relying on external assessments. They emphasize that publishers, who bear the ultimate legal responsibility for their content, should establish their own safeguards and editorial standards, as outsourcing these decisions to an undefined external body is neither practical nor desirable (WAN-IFRA 2023).
In a similar move, on the 30th of November 2023, the Council of Europe’s Steering Committee on Media and Information Society (CDMSI) adopted the now-published, 41-page ‘Guidelines on the Responsible Implementation of Artificial Intelligence Systems in Journalism’. The guidelines hold that, while AI presents exciting possibilities for streamlining workflows and enhancing content production, responsible implementation requires careful consideration of ethical principles and potential pitfalls. Central to the guide is the notion that AI implementation should be mission-driven. That is, news organizations must carefully consider how AI aligns with their core values and journalistic ethics. This goes beyond mere technological feasibility or cost-effectiveness. The decision to utilize AI should be an editorial one, with clear accountability assigned to the editor-in-chief. Furthermore, this decision must be informed by a specific problem or task within the existing workflow. Simply automating tasks without considering the broader editorial context undermines the very values journalists strive to uphold (CDMSI 2023).
In spite of these efforts, studies have shown that ethical guidelines for AI use in media are still rare globally. Research by Forja-Pena et al. (2024), for example, sought to analyse how guidelines on AI use have been incorporated into journalistic codes of ethics across several countries. Their findings indicate that, out of 99 sampled codes, only those from Belgium, Costa Rica, Germany, and Lithuania have been revised to address the ethical use of AI in newsrooms. Belgium’s code mandates transparency and editorial responsibility, while Costa Rica’s emphasizes AI’s dual impact on democracy and calls for ethical AI use to reduce misinformation and maintain public trust. The German and Lithuanian codes discourage AI-generated distortions in media, stressing the need for image integrity to protect public perception of news (Forja-Pena et al. 2024). In their 10-year review of journalism research on AI, Loscote et al. (2024) equally observed that ethical issues in AI use for journalism are “poorly represented” (p. 887), while Porlezza and Schapals (2024) point out that, even in news organizations where ethical guidelines have emerged, their practical application remains challenging and under-studied, partly due to the opacity of AI algorithms, as well as the difficulties of embedding journalistic values into AI systems.
While it is clear from this review that the emerging ethical guidelines have some drawbacks, and that Africa is not alone in its lack of AI-specific ethical codes for journalism, the need for establishing such standards in Africa remains crucial. Without the standards in place, newsrooms, publishers, journalists, and other media professionals across the continent may be heading towards a terrain where ethical practices regarding the use of innovations as powerful as generative AI are not clearly defined.

2.4. Framing Artificial Intelligence Innovations: Do Representations Matter?

Framing of AI in news media hinges on the technology’s incursion into the information world, marked by its dual potential to enhance access to essential content for both professional and non-professional use on one hand, and a tendency toward content artificiality on the other. As part of their collective surveillance role of calling society’s attention to emerging issues, news media tend to treat the framing of issues—in this case, AI—as a prime technique of agenda setting, particularly because of the perceived risks involved, even while recognizing the benefits the technology can bring.
Whether AI is framed positively or negatively, news media are generally expected to maintain their watchdog role, informing and educating the public for the greater good. In this sense, the varied representations of this expansive technology are far from inappropriate, considering the shifts its advancements have triggered over time (Goodman and Goodman 2006), and the aspiration of modern journalism to foster a better world. As with any issue, AI’s roles and impacts are meaningfully interpreted through the lens of media framing. Audiences’ understanding of AI largely depends on how the technology is reported or discussed across journalistic genres. This comprehension includes the framing of AI’s risks and benefits (Chuan et al. 2019). The media possesses a strong basis for elevating AI beyond mere salience, encouraging audiences to consider discourse about the technology more deeply and recognize its far-reaching implications, thus reorienting public thinking about AI (Chong and Druckman 2007). This capacity to influence perception is evident in instances where the media have successfully discouraged audiences from engaging with certain products, often by exposing scientific controversies about them (Chuan et al. 2019). Thus, the sentiments stirred by the media, whether for or against particular issues, are rarely dismissed, especially when it concerns technologies like AI, which are seen as having unpredictable societal consequences. For instance, an American newspaper’s portrayal of AI “as outperforming the medical expertise of the doctor”, instead of being an aiding agent, is an index of “troubling configuration” (Bunz and Braghieri 2022). Such an alarming configuration cannot be divorced from the polarizing politics of AI propagation in the news media, which tends to overhype the technology’s potential benefits or overstress its menaces and harms (Nguyen and Hekman 2024).
The perspective from which news media report the impacts of AI, as an issue or an event, is one thing; whether that framing was personal or societal, episodic or thematic, is another (Iyengar 1991). Regardless of the frame types in which AI might have been embodied, the aim of the news media, in fulfilling their social responsibility roles, is to affect the audience’s evaluation of the technology by shaping their “frames in thought” about its risks and benefits and, by implication, exerting a manifest influence on their overall opinion of AI. The media strategically accomplish this by consistently introducing new perspectives on the technology, offering concrete evidence in their narratives, and presenting this evidence within a logical and engaging context for the audience’s evaluation. Ultimately, “morally superior arguments” work (Chong and Druckman 2007) in these instances.
Although the correlation between frames in communication and frames in thought has long been assumed to be significant, there is no denying the fact that such a correlation, especially within the context of aggressive coverage of AI in news media, must have been influenced by some mediating and moderating factors, specifically psychological variables. A key psychological variable presumed to mediate or moderate AI framing in the news media is the need to store AI-related issues and events in memory for future retrieval (Chong and Druckman 2007). Research indicates that audiences often form opinions based on the convenience of retrieving information from memory, without engaging in conscious deliberation (Chong and Druckman 2007). Similarly, the competition among news media outlets to attract audiences is considered a sociological mediator or moderator in the relationship between news coverage and framing (Vergeer 2020). It is evident that audiences globally are drawn to news media that offer extensive coverage and heightened salience (Chuan et al. 2019). Thus, attracting larger audiences is a formidable factor in the media’s political economy. This has shifted news coverage architecture from traditional journalism practices to one that is market-driven, fuelling competition for audiences and advertisers, as well as driving mergers and acquisitions in response to the rise of social media, which challenges the autonomy of news organizations (Sjøvaag 2024). Therefore, the news media’s recourse to AI framing in all senses and logic can be said to matter in the data-driven epoch, where economic interest contaminates public interest and automation comes in full strength to cause dislocation in the labour market. 
News media’s diverse framing strategies for AI are aimed at protecting the journalism sector from collapse and its workers from job losses (Comunale and Manera 2024), while striving to maintain their influence on public opinion by applying both episodic and thematic frames effectively (Chong and Druckman 2007). In essence, how AI is framed by the media does not merely reflect our views; it actively crafts them and is a critical determinant of how we engage with the technology.
Fast and Horvitz (2017) analysed The New York Times’ coverage of AI over two decades to highlight the predominance of positive portrayals—such as AI being a tool that enhances daily tasks and data analysis. They added, however, that recent years have seen more attention on AI’s negative impacts, such as algorithmic biases in areas like racial profiling. Similarly, Chuan et al. (2019) found an increase in ethical framing, with media now balancing both AI’s benefits and potential drawbacks in discussions around morality. Cools et al. (2024) observed that media representations of AI in the U.S. shifted from a dystopian view in the late 20th century to a more benefit-focused, optimistic portrayal entering the 21st century, thus illustrating how the narrative surrounding AI can fluctuate between utopian and dystopian; this often leaves audiences uncertain about AI’s true potential and limitations. Moran and Shaikh (2022) add to this discourse in their examination of five years of US and UK media coverage on AI in journalism, where they identified a divide between industry leaders, who emphasize AI’s utility in cost-saving and content production, and journalists, who express concerns about AI’s impact on the journalistic profession. This tension manifests in contrasting narratives: while newsroom leaders and funders frame AI optimistically, often anthropomorphizing it as “journalistic” to increase its acceptability, journalists raise questions about AI’s implications for labour, audience trust, and the fundamental human aspects of journalism. As such, these discussions highlight not only practical challenges but also existential ones and prompt a reflection on whether journalism—a field traditionally driven by human insight—can maintain its integrity if increasingly automated. The absence of similar research examining the framing of AI innovations in Africa-focused journalistic stories necessitated this inquiry.

3. Methodology

In this study, we adopted a mixed-methods approach involving both quantitative and qualitative content analysis to evaluate AI’s portrayal in African journalism. Data collection began with a targeted search on Google News, a platform selected for its extensive aggregation of articles from diverse news sources. Our search strategy used targeted keywords such as “AI”, “Artificial Intelligence”, “Media”, “Journalism”, “Africa”, and “Innovation” to retrieve relevant results. The resulting dataset comprised 73 news stories drawn from 52 different news outlets, spanning coverage from 2021 to 2024. The results covered publications from countries such as Nigeria, Kenya, South Africa, Ghana, and several others (see Appendix A). There was no restriction to English-language publications; however, all identified articles were written in English, which facilitated direct analysis. This timeframe (2021 to 2024) was selected to reflect the recent surge in AI applications and discourse, given that many notable AI innovations, especially large language models like ChatGPT and Gemini, gained prominence over the past four years. The focus on this period also allowed us to capture evolving sentiments as AI gained traction within African news ecosystems. The next stage involved a classification process to categorize each article according to the themes it presented. To begin, we created a coding scheme based on preliminary readings and the existing literature on AI in journalism. Categories included “Technological Pessimism”, “Technological Optimism”, “Power Dynamics in Journalism”, and “Sustainability of Journalism”. Following this process, we carried out a quantitative content analysis to calculate the frequency and distribution of the identified themes and sub-themes, which allowed us to quantitatively assess the prominence of each theme within the sample.
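The frequency stage of this kind of content analysis amounts to tallying theme codes across the coded sample. A minimal sketch in Python illustrates the idea; the article IDs and theme assignments below are hypothetical stand-ins, not the study’s actual coding sheet:

```python
from collections import Counter

# Hypothetical coding sheet: each article ID mapped to the theme assigned
# during the classification stage (category names mirror the study's scheme).
coded_articles = {
    "article_01": "Technological Optimism",
    "article_02": "Ethical Concerns",
    "article_03": "Power Dynamics in Journalism",
    "article_04": "Technological Optimism",
    "article_05": "Sustainability of Journalism",
}

def theme_distribution(codes):
    """Return each theme's raw count and its rounded share of the sample."""
    counts = Counter(codes.values())
    total = len(codes)
    return {theme: (n, round(100 * n / total)) for theme, n in counts.items()}

print(theme_distribution(coded_articles))
```

This yields the kind of “n = …, %” figures reported in the tables below, once applied to the full 73-article dataset.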
To assess the sentiment within the articles, we employed NVivo, a qualitative data analysis software package. We used its autocode sentiment function, which leverages machine learning to interpret sentiments based on contextual cues. The process began with importing the retrieved articles into NVivo. Once set up, the autocode function scanned each article and assigned a sentiment score by identifying paragraphs or sentences as conveying positive, negative, or neutral sentiments. The algorithm considered linguistic cues, sentence structure, and contextual indicators specific to each passage. For example, language emphasizing AI as an innovative or solution-driven technology was generally tagged as positive, while discussions highlighting AI’s ethical challenges, bias, or threats to journalistic integrity were tagged as negative. Following this run, we conducted a manual review of the sentiment coding to validate the algorithm’s outputs. This step involved re-reading portions of the articles where sentiment tagging was performed to confirm the reliability of the procedure.
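NVivo’s autocode sentiment engine is proprietary, so its exact rules cannot be reproduced here; the toy classifier below merely sketches the cue-based tagging logic described above. The cue word lists are our own illustrative assumption, not NVivo’s lexicon:

```python
# Illustrative cue lists (assumed for this sketch, not NVivo's actual lexicon).
POSITIVE_CUES = {"innovative", "enhance", "efficiency", "solution"}
NEGATIVE_CUES = {"bias", "misinformation", "threat", "ethical"}

def tag_sentiment(passage: str) -> str:
    """Tag a passage as positive, negative, or neutral from keyword cues."""
    words = set(passage.lower().split())
    pos = len(words & POSITIVE_CUES)
    neg = len(words & NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(tag_sentiment("AI is an innovative solution that can enhance reporting"))
```

A real sentiment model weighs sentence structure and context rather than bare keywords, which is why the manual validation step described above remains essential.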

4. Data Presentation and Analysis

As seen in Table 1, the overwhelming reference to ethical issues in the use of AI in journalism (n = 35, 48%) across the sampled articles speaks to the fact that issues like misinformation, bias, and the erosion of journalistic integrity are of significant concern to these media entities. The article by Business Daily, for instance, reports that, in the wake of AI use in journalistic work, the media have a duty to provide accurate information; a clear strategy for media capacity building and ethical guidelines is therefore essential, which is why the Media Council of Kenya (MCK) has appointed a 29-member team to develop guidelines on the integration of artificial intelligence (AI) in modern-day journalism (Mwangi 2023). Similarly, the Sunday Standard reports that AI, like any technological transformation, is a double-edged sword that raises complex issues around copyright, such as authorship, infringement, and fair use. This presents a significant legal challenge for the media industry in Africa, making it essential for the MCK task force to develop guidelines that ensure the appropriate and ethical integration of AI in journalism while considering global efforts and domestic laws.
The issue of job displacement, though less prominent (n = 3, 4%), mirrors the nascent concern among media workers that AI could render human journalists obsolete. History, as quoted in an article from The Herald, seems to be repeating itself. The article references a discussion that took place in Harare, Zimbabwe, at the “Media Alliance Strategy Stakeholders Conference Validation of Media Strategy”, where the evocative phrase “some kind of trepidation” was used to describe how journalists felt when typewriters replaced manual printing as well as how the development of the internet brought “much anxiety” to the media industry, as the days of monopolizing news production and dissemination came to an end. The article goes on to ask, “What will save journalism or the media industry at a time when audiences are now accessing all sorts of news or information at a click of a button?”. Furthermore, Legit.NG, a Nigerian news outlet, reports that AI tools are already used for tasks like transcription and summarization and that, in early 2023, Germany’s Axel Springer group reportedly cut jobs at newspapers due to AI’s ability to “replace” some journalists. While these fears persist, other articles suggest that innovation will only weed out those unwilling to adapt. In a Premium Times report in June 2024, Kadaria Ahmed, founder of RadioNow, was quoted asserting that “AI will not take your job unless you are a lazy journalist because, at the heart of journalism is holding power accountable and ensuring that you set the agenda. No machine is going to do that” (Majeed 2024).
As seen in Table 2, the narrative of AI enhancing journalistic practices accounts for 23% of the sampled articles. This representation suggests a belief by a subset of news outlets in AI’s ability to enhance the practice of journalism. For instance, Business Day reports that “AI technology offers a range of capabilities that can enhance efficiency, accuracy, and relevance in delivering news and information to audiences” (Ademola 2024). This perception is rooted in the recognition of AI’s capacity to process large volumes of data, automate routine tasks, and assist in uncovering patterns that might otherwise go unnoticed. However, this idea of enhancement mainly implies an increase in efficiency rather than an elevation in the quality and integrity of journalism. As the article by Tech Central warns, the disruption brought by AI could lead to upheavals in content-dependent industries, and journalism is no exception (Nasila 2023). The real challenge lies in ensuring that AI tools are used to genuinely improve the quality of journalism without compromising the profession’s ethical standards. Similarly, 18% of the articles highlight AI’s role in improving productivity in journalism. This theme reflects the pragmatic appeal of AI in some African newsrooms, where financial constraints often limit the scope of operations. Business Day further points out that “AI tools can play a crucial role in streamlining editorial processes within print media organizations in Africa” by automating repetitive tasks and allowing journalists to focus on more substantive work (Ademola 2024). While the potential for AI-driven storytelling is recognized (n = 4, 5%), the result shows it has not yet been fully embraced in African journalism.
The notion of AI introducing creative solutions to journalism problems accounts for 7% of the sample. For instance, AI is perceived in some articles to be able to offer solutions to the problem of misinformation by enabling more sophisticated fact-checking processes. The Nation discusses the launch of AI tools by the Centre for Journalism Innovation and Development (CJID) aimed at enhancing the efficiency and accuracy of fact-checking processes for journalists and writers in Africa. These tools, which include an audio platform and a chatbot for flagging fake news, represent significant contributions to the fight against disinformation (Akowe 2024). However, the fact that a sizeable share of the articles (n = 29, 40%) adopt multiple varying positions on the subject makes a clear categorization difficult and suggests that the discourse surrounding AI use in journalism is not monolithic but characterized by diverse perspectives.
A close assessment of views presented in 19 (26%) of the reviewed articles equally revealed a sense of unease around the concentration of power in the hands of tech giants. Articles like one from Premium Times, titled “Can we survive the ‘attack’ of Big Tech?”, raise a critical point. The article reports that Big Tech may be actively usurping the role of traditional media and taking an increasing share of advertising revenue and even jobs. It highlights the allure of AI-powered targeted advertising based on vast data troves as a strategy that traditional media will struggle to compete with. This economic advantage and dominance translate into control over platform algorithms, the very tools that determine what content today’s audiences see. It further cites how politicians now prioritize social media platforms more for voter outreach due to their AI-powered algorithms, which translates to further economic losses for traditional media outlets (Ishiekwene 2023). These concerns resonate with those raised in another article by Radio Nigeria, which highlights that the weaponization of AI for content manipulation by political actors blurs the lines between truth and falsehood and ultimately weakens the quality of journalism (Gwamzhi 2023). Based on these narratives, the implication is clear—if Big Tech controls the tools for content creation and distribution, the potential for manipulation and control over information becomes immense.
Other issues observed in nine articles (12%), as seen in Table 3, stem from the discourse of increased dependency on and influence of foreign tech in African newsrooms—a subtle highlight of the dilemma of whether foreign technologies adequately serve or improve local journalistic needs. Dependency on foreign tech raises issues of data sovereignty, technological sovereignty, and the potential for geopolitical influence on media narratives. The article by The African, for instance, reported that the continent is already being dominated by “American and, to a lesser extent, Chinese global tech titans”, such as Facebook, Google, Amazon, Huawei, and Microsoft. This dominance extends beyond just internet services; with the rise of AI, African countries find themselves “in highly asymmetrical and dangerously dependent relationships with big tech companies” (Maswabi and Nkala 2022). The article details how vast amounts of data—personal, non-personal, and even sensitive government data—are transferred to and hosted in data centres outside the continent, feeding into algorithms controlled by foreign companies (Maswabi and Nkala 2022).
This concern is echoed in another article by Tech Cabal, which emphasized the consequences of Africa’s absence in the ongoing development of the new AI treaty. This absence, the article argues, risks perpetuating the existing data dependency. Without a seat at the table, African nations have no voice in shaping how AI is developed and deployed (Ndege 2024). Further complicating the situation is the lack of progress in developing regulations for AI within Africa, as reported by Tech in Africa. While the EU has taken the lead with its AI treaty, African nations have not followed suit. This lack of local regulations could further solidify foreign tech’s hold on the continent’s media environment (Ashiru 2024).
Furthermore, in five articles (7%), we find a similar shadow of concern regarding the potential of AI serving to erode journalistic and editorial autonomy. The news outlet Horn Observer reported that AI-powered algorithms prioritize news content based on predicted user preferences, which potentially deviates from the journalistic principle of prioritizing public interest. This can lead to the creation of “filter bubbles”, where readers are only exposed to information that reinforces their existing beliefs, thus undermining the ability of journalists to present diverse perspectives. In another article, the South African news outlet Daily Maverick reported that, in addition to the frequent “hallucinations” and production of “fictionalized” information by some of these tools, African expressions and experiences are not part of most generative AI model training datasets. Therefore, using these tools, even for seemingly benign tasks like summarization, could compromise journalistic autonomy and ultimately damage public trust and the reputation of the news media (Timcke and Wasserman 2023).
Our analysis equally reveals a narrative within 14 (19%) of the reviewed stories that positions AI as having positive impacts on the financial stability of journalism. In one of the articles, Tech Central cites a McKinsey report estimating the potential value of AI and analytics in media and entertainment at a staggering USD 448 billion by 2025. It further highlights AI’s potential for content creation, personalization, and metadata tagging—all aspects that can enhance user experience, attract new audiences, automate and streamline journalistic workflows, and ultimately offer a clear path towards financial stability for media outlets struggling with declining revenues (Nasila 2023).
A similar story by Zimbabwe Independent further emphasizes this point, by highlighting the financial struggles of Zimbabwean legacy media grappling with declining circulation and advertising revenue. The article argues that AI can “turbocharge newsroom operations and create synergies for easy data collection and automation” (Mugadzaweta 2023). Across these subsets of articles, we find that the focus was on how AI adoption could lead to significant cost reductions for media outlets, which would in turn free up resources for them to invest in other areas.
A more prominent proportion of the articles (22, 30%) revealed a rather cautious stance, mainly framing AI’s impact on the financial stability of media as limited or minimal at present. This neutral perspective represents a wait-and-see position that acknowledges AI’s potential but also highlights limitations such as its unproven status, potentially high implementation costs, and the uncertain return on investment in AI tools. In one such article, Newman (2023) paints a picture of a news industry divided on the potential of AI. The article bases its position on a Reuters survey of media leaders across countries of the world, including Nigeria, Kenya, and South Africa, which revealed that most publishers are not optimistic that the new phase of AI will work out well for the news industry as a whole: more than a third of respondents felt only a few big media companies will benefit, leaving smaller outlets struggling.
Furthermore, the article contends that AI will have a negative impact on trust and misinformation in news, which raises concerns about its financial viability for media organizations. A different story, published by Daily Maverick in 2023, echoes these anxieties and adds a cautionary note regarding the financial viability of AI in journalism. It warns against the promises of technology companies, citing a historical pattern: “Based on past behaviour, technology companies will be quick to promise that their products can help newsrooms monetise other aspects of their business, like using their archive as a revenue source. In light of this, we might want to think about whether AI accountability encompasses a company’s transparency about how product development and investment decisions are made” (Timcke and Wasserman 2023).
This highlights the need for news organizations to be wary of unproven financial claims and focus more on developing sound, long-term strategies to answer the legal and ethical questions about data scraping and copyright infringement that the use of AI in news may prompt. Table 4 equally reveals that 12% of the articles express a mixed view on the subject and 38% of the articles are not directly related to the theme.
Our analysis of the predominantly quoted sources, as presented in Figure 1, was mainly intended to provide an understanding of who is shaping the media narrative surrounding AI innovations in journalism practice. Industry professionals (n = 28) and technology experts (n = 17) emerge as the loudest voices, shaping the narrative around AI’s practical applications and technical capabilities. This dominance is unsurprising given that industry professionals offer real-world insights into the challenges and opportunities of integrating AI, while technology experts provide clarity on the technology’s potential as well as the mechanics of implementation. We also find that academics (n = 13), journalism startups (9%), and journalistic bodies (n = 16) served as crucial voices in the discourse of the long-term societal and ethical implications of AI on journalism. The Centre for Journalism Innovation and Development (CJID), for instance, emerged as a prominent voice across several of the reviewed articles due to its unique position as a leading media development think tank dedicated to nurturing innovation in African newsrooms. Government officials (11%) were the least-quoted group.
As shown in Table 5, our findings indicate that a majority of the articles adopt a moderately positive tone towards AI innovations, accounting for 55% of the sample. The discourse within the majority of the sampled articles is optimistic about the potential of AI to enhance journalistic practices, improve efficiency, and enable new forms of storytelling. We find that AI is framed in a number of articles as a tool that can help overcome challenges such as resource constraints, logistical inefficiencies, and rapid content generation in today’s digital-first environment. This optimistic outlook aligns with global trends in media where AI is heralded as a game-changer capable of bringing about significant improvements in content personalization, audience engagement, and operational efficiency.
However, the moderately positive tone also indicates a level of caution and measured enthusiasm. While the potential benefits of AI are recognized, there is an awareness of the risks and challenges associated with its implementation. Some of the concerns include the readiness of African newsrooms to integrate AI effectively, the need for adequate training and capacity building, and the importance of ensuring that AI tools are adapted to the specific socio-cultural contexts of African journalism. The emphasis on moderation rather than unbridled enthusiasm implies that the discourse, at present, is not driven solely by techno-optimism but is grounded in a realistic assessment of both the possibilities and limitations of AI in this region.
Further, the substantial portion of articles with a moderately negative tone, representing 36% of the sample, highlights the ambivalence surrounding AI in African journalism. We find that this sentiment mainly stems from ethical concerns surrounding AI. The use of AI in journalism raises questions about bias in algorithms and the appropriateness of AI-driven editorial decisions. There is equally a valid concern that AI systems, often developed in Western contexts, may not adequately reflect local realities. The negative tone associated with this view is a reaction to the perceived risks of erosion of local journalism traditions and the potential for AI to undermine the credibility and trustworthiness of news if not carefully managed. This includes the potential for AI to exacerbate existing power imbalances, where large technology companies, often based outside the continent, exert control over the African media industry, as captured in the article by The African.
The very negative sentiment, although representing a small fraction of the sample at 4%, is driven by concerns about the loss of human touch in journalism, where AI-generated content lacks the depth, empathy, and contextual understanding that human journalists bring to their work. As captured in one such article, AI journalism, for now, “has no voice, no analysis, no feel or colour” (Naidoo 2023). It adds that readers of AI-written text may be sent to non-existent references; as such, the models are not appropriate for time-sensitive, resource-intensive endeavours like news reporting, which needs meticulous fact-checking and cross-referencing (Naidoo 2023). This could result in a decline in the quality of journalism, with AI-driven news prioritizing speed and efficiency over depth and critical analysis.
The overall distribution of tones—dominated by moderate sentiments, both positive and negative—reflects the balanced and reflective nature of the discourse on AI in African journalism. This balance indicates that the media, at present, is engaging with AI in a thoughtful and critical manner, by recognizing both its potential benefits and its risks. It also suggests that the discourse is not polarized, but rather is characterized by a willingness to engage with the technology. This is a positive sign, as it shows that African journalists are not merely passively adopting AI but are actively interrogating its implications and seeking to shape its development in ways that are aligned with local needs and values.

Discussion of Findings

Although our findings indicate that the coverage of AI innovations in Africa-focused journalism is predominantly positive, the optimism is tempered by ethical concerns, issues of misinformation, and worries about the potential loss of humanness in news production. In reference to these ethical problems, Diakopoulos (2019) stresses that configuring the respective roles of humans and AI in news production “will not be easy”, as the social contexts in which journalists report will remain shaped by AI unless workflows and practices are iteratively re-engineered by human journalists, who must set the limits for the creative technology (Diakopoulos 2019, p. 35). This suggests that the tendency toward misinformation in news content produced at the instance of algorithms can only be tamed with “a reorientation of the traditional watchdog function of journalism toward the power wielded by algorithms” (Diakopoulos 2019, p. 207). Otherwise, AI use may lead to an overload of information, and to the popularization and marketing of incremental or even insignificant stories in the rush to publish with these engines (Tatalovic 2018). As scholars have often warned, reliance on algorithms risks embedding bias, because most AI systems are developed outside the socio-cultural contexts in which they are deployed, making them ill-suited to local realities (Prabhakaran et al. 2022).
Although only a small proportion of articles (4%) directly express concerns on the issue of job displacement, this concern remains critical. Much like how digitization disrupted traditional media workflows, AI poses both opportunities and threats for journalists. However, the perception that only “lazy journalists” will be displaced (Majeed 2024), as noted in some of the articles, presents a somewhat reductionist view of this disruption, as the real concern is not merely about adaptation, but about the changing nature of journalistic work and the value placed on human expertise in this increasingly automated industry.
Twenty-three per cent (23%) of the sampled articles framed AI as being capable of enhancing journalistic practices through increased efficiency. Studies have consistently highlighted AI’s ability to automate routine tasks, allowing journalists to focus on higher-value reportorial work (Graefe 2016). However, we also observe that the improvements referenced are largely in terms of efficiency, not necessarily quality. This mirrors the concerns raised by Simon (2024), who finds that, while AI can optimize production workflows, it risks reducing the depth and correctness of journalistic inquiry if not carefully integrated. The scholar, through interviews with newsworkers, arrived at the position that, contrary to the hype, the technology may “in fact, decrease efficiency if something produced by AI ends up needing to be laboriously checked by a human, or if its output cannot be fully trusted” (p. 18).
Further, the potential for AI to improve productivity in resource-constrained newsrooms, as emphasized by 18% of articles, speaks to the technology’s appeal in low-resource settings, such as the case of African media outlets where limited staffing and resources hinder content production. This reliance, however, could result in content homogenization, which in turn limits diverse perspectives in news coverage, as echoed in Cools and Diakopoulos (2024). Therefore, while AI offers productivity gains, it is imperative that these benefits do not come at the expense of source diversity in journalism.
We also find a common narrative that speaks to the disproportionate power that Big Tech wields in shaping the media industry (26%). This finding aligns with broader critiques of the increasing influence of technology companies over the distribution of news content (Bell and Taylor 2017). Some of the stories reviewed highlight how AI-powered targeted advertising is diverting advertising revenue away from traditional media, a development that mirrors similar trends in Europe and North America. The study by Zuboff (2019), for instance, has pointed out that AI-driven algorithms used by tech giants often prioritize profitability over public interest. Additionally, the eventual dependency on foreign AI technologies, as reported in 12% of the articles, highlights a broader geopolitical issue. African newsrooms’ reliance on external AI systems places them in asymmetric relationships with global tech titans, which compounds already existing issues of data sovereignty (Maswabi and Nkala 2022).
At present, the financial benefits of AI remain speculative for media outlets. While some articles emphasize AI’s potential for streamlining operations, enhancing content personalization, and improving targeted advertising—ultimately leading to cost reductions and financial recovery for struggling outlets (Nasila 2023)—others caution that such benefits may disproportionately favour larger, well-resourced companies capable of absorbing AI’s high implementation costs, leaving smaller organizations at a disadvantage. The report by Simon (2024), for instance, highlights that AI is likely to widen the gap between well-resourced international news organizations and local news organizations, especially those in the Global South, who are often overlooked in current discussions about AI in the news. This uncertainty about whether AI will genuinely offer financial stability, or merely add to operational costs, speaks to the broader issue of media sustainability in an increasingly precarious media market (Mohammed et al. 2024).
Our findings equally reveal that industry professionals and technology experts often serve as the primary arbiters of the conversations on AI use in journalism. While their voices are crucial for understanding the feasibility of AI adoption, their typical focus on technical efficiencies in newsrooms can inadvertently depoliticize the conversation. For instance, as Oravec (2019) noted, opinion leaders of technologies often downplay the regulatory, legal, and ethical complexities in favour of highlighting its transformative potential. This gives way to oversimplified narratives that ignore the need for a more diverse range of voices, including ethicists, social scientists, and similar communities, to ensure that the implications of AI are fully considered.
Finally, our findings echo patterns observed in global studies, where AI is framed as both an enabler and a disruptor in journalism (Moran and Shaikh 2022). For instance—similar to Fast and Horvitz’s (2017) observation of how media framing of AI’s enhancement of daily tasks is frequently accompanied by concerns over ethical risks—there is an observed pattern mirrored in our findings, where many of the articles emphasize AI’s potential to increase journalistic efficiency while noting the unresolved questions around autonomy, ethical considerations, and equitable power distribution.

5. Conclusions

This study set out to explore how artificial intelligence (AI) is portrayed in Africa-focused journalistic articles. What emerged was a narrative suspended between progress and peril. While AI’s ability to streamline processes, enhance audience engagement, and lower operational costs captivates many, its potential to distort truth, diminish editorial independence, and erode confidence in journalism tempers such optimism. We see a journalistic ecosystem caught between the allure of technological innovation and an existential threat to editorial integrity, where algorithms may optimize output but cannot replicate the delicacy, context, or moral responsibility that underpins news production in journalism. In addressing the research question of how AI adoption is framed in Africa-focused news, our analysis reveals that there is a growing awareness of AI’s potential to revolutionize journalism in Africa, but the discourse is dominated by concerns about ethical implications, human displacement, and the potential of the technology to erode autonomy and increase the influence of and dependency on foreign tech.
In all, while these risks must be carefully managed, the benefits of the technology should not be overlooked, especially as the industry is increasingly reliant on AI-driven approaches for analytics, content recommendation, and commercialization (Mohammed et al. 2024). Although African scholars such as Munoriyarwa et al. (2023) have explored country-specific adoption and implementation of AI in newsrooms on the continent, the unique contribution of this work lies in its unified approach to assessing the framing of AI technology in Africa-focused journalism platforms. This approach is especially crucial at a time when the continent is developing a unified artificial intelligence strategy for leveraging this cutting-edge technology to empower and develop various aspects of life on the continent (African Union 2024).
In summary, we urge African newsmakers and policymakers across the continent to collaboratively develop frameworks, policies, and guidelines that not only outline the appropriate degree of AI adoption but also provide region-specific guidance.

Limitations and Suggestions for Further Research

This study is not without some limitations. The relatively small sample size of 73 news stories sourced from 52 news organizations may not adequately represent the full spectrum of narratives surrounding AI in African journalism, thus restricting the breadth of the insights presented on the subject. Additionally, while the data source we relied upon—Google News—helped us aggregate a range of articles from multiple sources, it still does not capture the full range of media outlets in operation within the regions studied. The temporal scope of the study equally limits our findings, as we mainly retrieved articles published within the last 4 years. Future research should leverage longitudinal studies that track the evolution of AI adoption in journalism over time to cross-examine how these early predictions and fears materialize or dissipate as the technology matures. Additionally, there is a need for more granular ethnographic studies focused on the internal dynamics within news organizations to shed light on how AI affects newsroom culture, labour relations, and day-to-day editorial decision making.

Author Contributions

Conceptualization, A.M. and A.A.E.; Investigation, A.M.; Data curation, A.M.; Writing – original draft, A.M., M.B.A. and F.O.; Writing – review & editing, A.A.E. and M.B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data used for this study are provided in Appendix A.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

S/N | Title | Source | Year | Country/Region
1 | Catch up or cut a new trail—AI’s expanding role in African journalism | Daily Maverick | 2023 | South Africa
2 | WAJIC23: Experts advocate for ethical use of artificial intelligence in African newsrooms | Benjamin Dada | 2023 | Nigeria
3 | AMC: African Journalists Urged to Embrace Potential of AI | This Day | 2024 | Ghana
4 | CJID wins grant to drive inclusive AI research, innovation in Nigeria | Premium Times | 2024 | Nigeria
5 | Experts call for data collaboration fostering Nigeria’s AI ecosystem | Business Day | 2024 | Nigeria
6 | AI Technologies: The Double-Edged Sword of Innovation and Media Capture | Horn of Africa | 2024 | Kenya
7 | CJID sets to convene media stakeholders for the Journalism, Digital Tech, and AI Dialogue | Dataphyte | 2024 | Nigeria
8 | CJID to launch two Artificial Intelligence tools to aid journalism practice | The Nation | 2024 | Nigeria
9 | Moonshot Conversations in Nairobi unravels AI’s potential and gaps in Africa | Tech Cabal | 2024 | Kenya
10 | WAJIC 2023: Journalism experts outline challenges, innovative trends in media | Premium Times | 2023 | West Africa
11 | Future of media will rely on trust engendered by tech innovation | Media Online | 2024 | South Africa
12 | Largest-Ever African Investigative Journalism Conference Digs into New Tools, Using AI, and World-Class Exposés | GIGN | 2022 | South Africa
13 | Why Africa is off to a bad start in the global AI race | Media Makers Meet | 2024 | Africa
14 | How AI can help analyze women’s representation in the news | Int’l Journalists’ Network | 2022 | South Africa
15 | Media Council Kenya picks 29-member team to develop AI, data guidelines for journalists | Business Daily | 2023 | Kenya
16 | The Impact of AI on Investigative Journalism: Challenges and Opportunities | Tech Economy | 2024 | Africa
17 | How journalists and newsroom managers in Africa should think about the rise of AI | Reuters | 2023 | Kenya
18 | How to apply AI to modern journalism practice | The Nation | 2024 | West Africa
19 | Is Artificial Intelligence a Threat to Journalism? | VOA Africa | 2023 | West Africa
20 | The transformative power of AI in print media: A focus on Africa | Business Day | 2024 | Africa
21 | Anambra ICT Agency, Activ8 Hub Host AI Workshop for Media Professionals | Tech Build | 2023 | Nigeria
22 | AI in Journalism: No voice, no analysis, no feel or colour … for now | The African | 2023 | Africa
23 | AI Will Revolutionize Journalism in Nigeria—Prof Pate | Daily Trust | 2024 | Nigeria
24 | The Emergence of AI and Challenges of Modern Newsrooms | Tech Economy | 2023 | Nigeria
25 | Only lazy journalists should be scared of Artificial Intelligence—Founder, RadioNow | Premium Times | 2024 | Nigeria
26 | Dialogue on future of Journalism in era of AI holds in Abuja | The Nation | 2024 | Nigeria
27 | Press Freedom: RSF says AI weakening Journalism | Radio Nigeria | 2023 | Africa
28 | FactCheckHub at 3: Experts proffer solutions to tackling AI disinformation campaigns | ICIR | 2023 | Africa
29 | Will Artificial Intelligence Render Nigerian Journalists Unemployed? | Osun Defender | 2023 | Nigeria
30 | SANEF calls for urgent regulation to protect struggling news media from AI threats | City Press | 2024 | South Africa
31 | How AI will impact investigative journalism—Media Groups | Business Day | 2024 | Nigeria
32 | Effects of Artificial Intelligence in Newsroom | This Day | 2023 | Nigeria
33 | How AI will be—and already is—upending media and journalism | Tech Central | 2023 | South Africa
34 | We ‘hired’ an AI writing assistant to create an article for us—our jobs are safe, for now | News24, South Africa | 2021 | South Africa
35 | Here’s how AI can help journalists and protect news readers from misinformation | Tech Cabal | 2023 | Nigeria
36 | Next Wave: What is Africa’s place in the EU AI treaty? | Tech Cabal | 2024 | Africa
37 | Kenyan media must take the AI bull by the horns | Nation | 2023 | Kenya
38 | How journalists can leverage Artificial Intelligence (AI) chatbots like ChatGPT | Nairobi News | 2023 | Kenya
39 | Attitude change key to AI buy-in in newsrooms | Nation Africa | 2022 | East Africa
40 | Expert wants journalists to embrace AI for social media effectiveness | News Agency of Nigeria | 2023 | Nigeria
41 | Generative AI and journalism: A catalyst or roadblock for African newsrooms? | New Era | 2023 | Zambia
42 | Disinformation Risks and Media Principles in Age of AI | Citizen Digital | 2024 | Kenya
43 | Task force on guidelines for artificial intelligence use by media timely | The Sunday Standard | 2023 | Kenya
44 | AI, journalism and fact-checking in Ghana; navigating the maze | My-Joy Online | 2024 | Ghana
45 | Emerging Influence: Africa’s Role in Shaping the EU AI Treaty | Tech In Africa | 2024 | Africa
46 | How Africa Can Achieve Data Sovereignty | The African | 2022 | South Africa, Kenya, Zimbabwe, and Nigeria
47 | Legal framework key to proper use of AI in Ghana—Margins CEO | My-Joy Online | 2024 | Ghana
48 | What will save journalism in the era of artificial intelligence? | The Herald Zimbabwe | 2024 | Zimbabwe
49 | Digital digest: AI revolutionising journalism: Time newsrooms jump on board | Zimbabwe Independent | 2024 | Zimbabwe
50 | Artificial Intelligence break-through in Media Industry | The Point, Gambia | 2023 | Gambia
51 | Committee pushes for AI Guidelines in Media | Daily News, Tanzania | 2024 | Tanzania
52 | Will machines replace journalists, too? | Premium Times | 2022 | Nigeria
53 | Experts identify importance of media in national development, seek adoption of AI | Inspen Online | 2022 | Nigeria
54 | AI’s relentless rise gives journalists tough choices | Legit NG | 2024 | Nigeria
55 | Media must wake up to opportunities of AI | Zimbabwe Independent | 2023 | Zimbabwe
56 | Can we survive the “attack” of Big Tech? | Premium Times | 2023 | Nigeria
57 | What OpenAI’s deal with News Corp means for journalism (and for you) | Biz Community | 2024 | South Africa
58 | The many promises and numerous perils of AI in South Africa’s newsrooms | Daily Maverick | 2023 | South Africa
59 | Using AI in journalism, will AI beat journalists’ roles? | Independent Online | 2024 | Egypt
60 | 28% of journalists use Gen AI—Report | Business Day | 2024 | Nigeria
61 | Catholic Media Practitioners Need to “allow ethical values” Guide AI Deployment: Nigeria Catholic Secretariat Official | ACI, Africa | 2024 | Nigeria
62 | Priest Cautions Media, Students Against Overdependence On AI | Leadership | 2024 | Nigeria
63 | AI chatbots fit cause chaos for social media? | BBC Pidgin | 2023 | Nigeria
64 | Media Monitoring Africa tackles the use of AI in media newsrooms and politics | Citizen | 2024 | South Africa
65 | Climate change, AI put African media on the spot | Daily Trust | 2023 | Nigeria
66 | Hallucinations, Wrong Prompts; Why Kenyan Journalists Flop In The AI Magic | Citizen Digital | 2024 | Kenya
67 | Formulate regulations as you embrace AI, Media urged | The Star, Kenya | 2023 | Kenya
68 | Media Leaders Discuss Digital Transformations, AI, Others at Bloomberg Forum 2023 | Business Post | 2023 | South Africa
69 | Why Africa must demand a fair share in AI development and governance | TechPolicy | 2024 | Africa
70 | PREMIUM TIMES’ journalist, 39 other West African journalists complete AI Journalism Fellowship | Premium Times | 2024 | West Africa
71 | Google’s Ex-Director Advises African Newsrooms On AI, Algorithm Changes, Revenue Models | The Whistler | 2024 | Africa
72 | Emefa Apawu, Paa Kwesi Asare, and Kwabena Offei-Kwadey Nkrumah to speak on The Influence of AI in Journalism | Ghana Web | 2024 | Ghana
73 | Akufo-Addo warns journalists of potential AI dangers in misinformation | My-Joy Online | 2024 | Ghana

References

  1. Acemoglu, Daron, and Pascual Restrepo. 2018. Artificial intelligence, automation, and work. In The Economics of Artificial Intelligence: An Agenda. Chicago: University of Chicago Press, pp. 197–236. [Google Scholar]
  2. Ademola, Ojo Emmanuel. 2024. The transformative power of AI in print media: A focus on Africa. Business Day, April 7. Available online: https://businessday.ng/technology/article/the-transformative-power-of-ai-in-print-media-a-focus-on-africa/ (accessed on 30 July 2024).
  3. Adjin-Tettey, Theodora Dame, Samuel Danso, Tigere Muringa, and Siphumelele Zondi. 2024. The Role of Artificial Intelligence in Contemporary Journalism Practice in Two African Countries. Journalism and Media 5: 846–60. [Google Scholar] [CrossRef]
  4. African Union. 2024. Continental Artificial Intelligence Strategy: Harnessing AI for Africa’s Development and Prosperity. Available online: https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf (accessed on 23 July 2024).
  5. Akowe, Tony. 2024. Abuja CJID to launch two Artificial Intelligence tools to aid journalism practice. The Nation, May 24. Available online: https://thenationonlineng.net/cjid-to-launch-two-artificial-intelligence-tools-to-aid-journalism-practice/ (accessed on 30 July 2024).
  6. Ashiru, Grace. 2024. Emerging Influence: Africa’s Role in Shaping the EU AI Treaty. Tech in Africa, July 9. Available online: https://www.techinafrica.com/emerging-influence-africas-role-in-shaping-the-eu-ai-treaty/ (accessed on 30 July 2024).
  7. Bansal, Prashant. 2024. Prompt Engineering Importance and Applicability with Generative AI. Journal of Computer and Communications 12: 14–23. [Google Scholar] [CrossRef]
  8. Barman, Kristian Gonzalez, Nathan Wood, and Pawel Pawlowski. 2024. Beyond transparency and explainability: On the need for adequate and contextualized user guidelines for LLM use. Ethics and Information Technology 26: 47. [Google Scholar] [CrossRef]
  9. Bell, Emily, and Owen Taylor, eds. 2017. Journalism after Snowden: The Future of the Free Press in the Surveillance State. New York: Columbia University Press. [Google Scholar]
  10. Bryant, Christopher, and Courtney Dillard. 2019. The Impact of Framing on Acceptance of Cultured Meat. Frontiers in Nutrition 6: 1–10. [Google Scholar] [CrossRef]
  11. Bunz, Mercedes, and Marco Braghieri. 2022. The AI doctor will see you now: Assessing the framing of AI in news coverage. AI & Society 37: 9–22. [Google Scholar]
  12. Cain, William. 2024. Prompting change: Exploring prompt engineering in large language model AI and its potential to transform education. TechTrends 68: 47–57. [Google Scholar] [CrossRef]
  13. Carlà, Matteo Mario, Gloria Gambini, Antonio Baldascino, Federico Giannuzzi, Francesco Boselli, Emanuele Crincoli, Nicola Claudio D’Onofrio, and Stanislao Rizzo. 2024. Exploring AI-chatbots’ capability to suggest surgical planning in ophthalmology: ChatGPT versus Google Gemini analysis of retinal detachment cases. British Journal of Ophthalmology 108: 1457–69. [Google Scholar] [CrossRef] [PubMed]
  14. CDMSI. 2023. Guidelines on the Responsible Implementation of Artificial Intelligence Systems in Journalism. Council of Europe, December 12. Available online: https://www.coe.int/en/web/freedom-expression/-/guidelines-on-the-responsible-implementation-of-artificial-intelligence-ai-systems-in-journalism (accessed on 23 July 2024).
  15. Chan-Olmsted, Sylvia M. 2019. A review of artificial intelligence adoptions in the media industry. International Journal on Media Management 21: 193–215. [Google Scholar] [CrossRef]
  16. Chong, Dennis, and James N. Druckman. 2007. Framing Theory. Annual Review of Political Science 10: 103–26. [Google Scholar] [CrossRef]
  17. Chuan, Ching Hua, Wan-Hsiu Sunny Tsai, and Su Yeon Cho. 2019. Framing artificial intelligence in American newspapers. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. New York: ACM, pp. 339–44. [Google Scholar] [CrossRef]
  18. Comunale, Mariarosaria, and Andrea Manera. 2024. The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions. IMF Working Paper No.2024/65. Available online: https://www.imf.org/en/Publications/WP/Issues/2024/03/22/The-Economic-Impacts-and-the-Regulation-of-AI-A-Review-of-the-Academic-Literature-and-546645 (accessed on 23 July 2024).
  19. Cools, Hannes, and Nicholas Diakopoulos. 2024. Uses of Generative AI in the Newsroom: Mapping Journalists’ Perceptions of Perils and Possibilities. Journalism Practice, 1–19. [Google Scholar] [CrossRef]
  20. Cools, Hannes, Baldwin Van Gorp, and Michael Opgenhaffen. 2024. Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers. Journalism 25: 3–21. [Google Scholar] [CrossRef]
  21. de-Lima-Santos, Mathias-Felipe, and Wilson Ceron. 2021. Artificial intelligence in news media: Current perceptions and future outlook. Journalism and Media 3: 13–26. [Google Scholar] [CrossRef]
  22. Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA and London: Harvard University Press. [Google Scholar]
  23. Eke, Damian Okaibedi, Kutoma Wakunuma, and Simisola Akintoye. 2023. Responsible AI in Africa: Challenges and opportunities. In Responsible AI in Africa. Social and Cultural Studies of Robots and AI. Edited by Damian Okaibedi Eke, Kutoma Wakunuma and Simisola Akintoye. Cham: Palgrave Macmillan. [Google Scholar]
  24. Fast, Ethan, and Eric Horvitz. 2017. Long-term trends in the public perception of artificial intelligence. In Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: Association for the Advancement of Artificial Intelligence Press, vol. 31. [Google Scholar]
  25. Forja-Pena, Tania, Berta García-Orosa, and Xose López-García. 2024. The Ethical Revolution: Challenges and Reflections in the Face of the Integration of Artificial Intelligence in Digital Journalism. Communication & Society 37: 237–54. [Google Scholar]
  26. Goodman, J. Robyn, and Bred P. Goodman. 2006. Beneficial or Biohazard? How the Media Frame Biosolids. Public Understanding of Science 15: 359–75. [Google Scholar] [CrossRef]
  27. Graefe, Andreas. 2016. Guide to Automated Journalism. New York: Tow Center for Digital Journalism, Columbia University. [Google Scholar] [CrossRef]
  28. Graff, Joris. 2024. Moral sensitivity and the limits of artificial moral agents. Ethic Information Technology 26: 13. [Google Scholar] [CrossRef]
  29. Gwagwa, Arthur, Nagla Rizk, Erika Kraemer-Mbula, Isaac Rutenberg, and Jeremy De Beer. 2020. Artificial Intelligence (AI) deployments in Africa: Benefits, challenges and policy dimensions. The African Journal of Information and Communication 26: 1–28. [Google Scholar] [CrossRef]
  30. Gwamzhi, Gwamkat. 2023. Press Freedom: RSF says AI weakening Journalism. Radio Nigeria, May 3. Available online: https://frcnnorthcentral.ng/2023/05/03/press-freedom-rsf-says-ai-is-weakening-journalism/ (accessed on 30 July 2024).
  31. Hanitzsch, Thomas, and Tim P. Vos. 2018. Journalism beyond democracy: A new look into journalistic roles in political and everyday life. Journalism: Theory, Practice & Criticism 19: 146–64. [Google Scholar] [CrossRef]
  32. Hicks, Michael Townsen, James Humphries, and Joe Slater. 2024. ChatGPT is bullshit. Ethics and Information Technology 26: 38. [Google Scholar] [CrossRef]
  33. Ishiekwene, Azu. 2023. Can we survive the “attack” of Big Tech? Premium Times, November 16. Available online: https://gazettengr.com/azu-ishiekwene-can-we-survive-the-attack-of-big-tech/ (accessed on 30 July 2024).
  34. Iyengar, Shanto. 1991. Is Anyone Responsible? How Television Frames Political Issues. Chicago: University of Chicago Press. [Google Scholar] [CrossRef]
  35. Kiemde, Sountongnoma Martial Anicet, and Ahmed Dooguy Kora. 2020. The challenges facing the development of AI in Africa. Paper presented at 2020 IEEE International Conference on Advent Trends in Multidisciplinary Research and Innovation (ICATMRI), Buldhana, India, December 30; pp. 1–6. [Google Scholar]
  36. Kiemde, Sountongnoma Martial Anicet, and Ahmed Dooguy Kora. 2021. Towards an ethics of AI in Africa: Rule of education. AI and Ethics 2: 35–40. [Google Scholar] [CrossRef]
  37. Leiser, Mark R. 2022. Bias, journalistic endeavours, and the risks of artificial intelligence. In Artificial Intelligence and the Media. Cheltenham: Edward Elgar Publishing, pp. 8–32. [Google Scholar]
  38. Liu, Gloria. 2022. The World’s smartest artificial intelligence just made its first magazine cover. Cosmopolitan. Available online: https://www.cosmopolitan.com/lifestyle/a40314356/dall-e-2-artificial-intelligence-cover/ (accessed on 30 July 2024).
  39. Loscote, Fabia, Adriana Gonçalves, and Claudia Quadros. 2024. Artificial Intelligence in Journalism: A Ten-Year Retrospective of Scientific Articles (2014–2023). Journalism and Media 5: 873–91. [Google Scholar] [CrossRef]
  40. Majeed, Bakare. 2024. Only lazy journalists should be scared of Artificial Intelligence—Founder, RadioNow. Premium Times, June 27. Available online: https://www.premiumtimesng.com/news/top-news/707485-only-lazy-journalists-should-be-scared-of-artificial-intelligence-founder-radionow.html (accessed on 30 July 2024).
  41. Markelius, Alva, Joahna Kuiper, Connor Wright, Natalie Delille, and Yu-Ting Kuo. 2024. The mechanisms of AI hype and its planetary and social costs. AI and Ethics 4: 727–42. [Google Scholar] [CrossRef]
  42. Maswabi, Goitseone, and Sizo Nkala. 2022. How Africa Can Achieve Data Sovereignty. The African, July 12. Available online: https://theafrican.co.za/featured/how-africa-can-achieve-data-sovereignty-fa7624d6-c8ce-40e1-88f2-1d739960b7c3/ (accessed on 30 July 2024).
  43. Mohammed, Abdullateef, Lateef Adekunle Adelakun, Abdulateef Adeola Elega, Aishat Sule-Otu, and Murtada Busair Ahmad. 2024. Age of techno-innovative journalism: A systematic mapping of entrepreneurial journalism research, 2000–2022. Communication Research and Practice 10: 1–25. [Google Scholar] [CrossRef]
  44. Mohammed, Abdullateef, Mojaye Eserinune McCarty, and Adelakun Lateef. 2022. Exploring the prevalence of agenda-setting theory in Africa-focused research, 2000–2020. Communicatio 48: 67–92. [Google Scholar] [CrossRef]
  45. Moran, Rachel E., and Sonia J. Shaikh. 2022. Robots in the news and newsrooms: Unpacking meta-journalistic discourse on the use of artificial intelligence in journalism. Digital Journalism 10: 1756–74. [Google Scholar] [CrossRef]
  46. Mugadzaweta, Silence. 2023. Media must wake up to opportunities of AI. Zimbabwe Independent, June 16. Available online: https://www.newsday.co.zw/theindependent/opinion/article/200012834/media-must-wake-up-to-opportunities-of-ai (accessed on 30 July 2024).
  47. Munoriyarwa, Allen, Sarah Chiumbu, and Gilbert Motsaathebe. 2023. Artificial intelligence practices in everyday news production: The case of South Africa’s mainstream newsrooms. Journalism Practice 17: 1374–92. [Google Scholar] [CrossRef]
  48. Mwangi, Kabuk. 2023. MCK picks 29-member team to develop AI, data guidelines for journalists. Business Daily, October 10. Available online: https://www.businessdailyafrica.com/bd/economy/mck-picks-29-member-team-to-develop-ai-data-guidelines--4396212#google_vignette (accessed on 30 July 2024).
  49. Naidoo, Dominic. 2023. AI-Journalism: No voice, no analysis, no feel or colour … for now. The African, May 30. Available online: https://theafrican.co.za/featured/ai-journalism-no-voice-no-analysis-no-feel-or-colour-for-now-8ed57c63-622c-4625-9755-d48b0e8920eb/ (accessed on 30 July 2024).
  50. Nasila, Mark. 2023. How AI will be—And already is—Upending media and journalism. Tech Central, January 27. Available online: https://techcentral.co.za/how-ai-will-and-already-is-upending-media-and-journalism/221449/ (accessed on 30 July 2024).
  51. Ndege, Adonijah. 2024. Next Wave: What is Africa’s place in the EU AI treaty? Tech Cabal, July 8. Available online: https://techcabal.com/2024/07/08/what-is-africas-place-in-the-eu-ai-treaty/ (accessed on 30 July 2024).
  52. Neuberger, Christoph, Nuernbergk Christian, and Langenohl Susanne. 2019. Journalism as Multichannel Communication: A newsroom survey on the multiple uses of social media. Journalism Studies 20: 1260–80. [Google Scholar] [CrossRef]
  53. Newman, Nic. 2023. Journalism, Media and Technology Trends and Predictions 2024. In Reuters Institute Report. Oxford: Reuters Institute for the Study of Journalism. Available online: https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2024 (accessed on 30 July 2024).
  54. Nguyen, Dennis, and Erik Hekman. 2024. The news framing of artificial intelligence: A critical exploration of how media discourses make sense of automation. AI & Society 39: 437–51. [Google Scholar]
  55. Okolo, Chinasa T., George Obaido, and Kehinde Aruleba. 2023. Responsible AI in Africa—Challenges and Opportunities. In Responsible AI in Africa. Edited by Domain Okaibedi Eke, Kutoma Wakunuma and Simisola Akintoye. Social and Cultural Studies of Robots and AI. Cham: Palgrave Macmillan. [Google Scholar] [CrossRef]
  56. Oravec, Jo Ann. 2019. Artificial intelligence, automation, and social welfare: Some ethical and historical perspectives on technological overstatement and hyperbole. Ethics and Social Welfare 13: 18–32. [Google Scholar] [CrossRef]
  57. Pavlik, John V. 2023. Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator 78: 84–93. [Google Scholar] [CrossRef]
  58. Porlezza, Collin, and Aljosha Karim Schapals. 2024. AI Ethics in Journalism (Studies): An Evolving Field Between Research and Practice. Emerging Media, October 13. [Google Scholar] [CrossRef]
  59. Prabhakaran, Vinodkumar, Rida Qadri, and Ben Hutchinson. 2022. Cultural incongruencies in artificial intelligence. arXiv arXiv:2211.13069. [Google Scholar]
  60. Radoli, Lydia Ouma. 2024. Shifting Imageries: Artificial Intelligence and Journalism in African Legacy Media. Paper presented at the RAIS Conference Proceedings, Princeton, NJ, USA, April 4–5; Available online: https://rais.education/wp-content/uploads/2024/05/0358.pdf (accessed on 30 July 2024).
  61. Reporters Without Borders. 2024. AI and Media Ethics: Existing References Overview. Available online: https://rsf.org/sites/default/files/medias/file/2023/09/AI%20and%20media%20ethics%20existing%20references%20overview_0.pdf (accessed on 6 July 2024).
  62. Scire, Sarah. 2023. “Not a Replacement of Journalists in Any Way”: AP Clarifies Standards Around Generative AI. Cambridge: Nieman Lab. Available online: https://www.niemanlab.org/2023/08/not-a-replacement-of-journalists-in-any-way-ap-clarifies-standards-around-generative-ai/ (accessed on 30 July 2024).
  63. Shekhar, Sarnah Simanta. 2019. Artificial Intelligence in Automation. International Journal of Multidisciplinary 4: 14–17. [Google Scholar]
  64. Shi, Yi, and Lin Sun. 2024. How Generative AI Is Transforming Journalism: Development, Application and Ethics. Journalism and Media 5: 582–94. [Google Scholar] [CrossRef]
  65. Simon, Felix M. 2024. Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena. New York: Tow Center for Digital Journalism, Columbia University. Available online: https://www.cjr.org/tow_center_reports/artificial-intelligence-in-the-news.php (accessed on 30 August 2024).
  66. Sjøvaag, Helle. 2024. The business of news in the AI economy. AI Magazine 45: 246–55. [Google Scholar] [CrossRef]
  67. STM. 2023. Global Publishing and Journalism Organizations Unite to Release Comprehensive Global Principles for Artificial Intelligence (AI). Available online: https://www.stm-assoc.org/global-publishing-and-journalism-organizations-unite-to-release-comprehensive-global-principles-for-artificial-intelligence-ai/ (accessed on 23 July 2024).
  68. Tatalovic, Mico. 2018. AI Writing Bots Are About To Revolutionise Science Journalism: We Must Shape How This Is Done. Journal of Science Communication 17: 1–7. [Google Scholar] [CrossRef]
  69. The Associated Press. 2024. Artificial Intelligence: Leveraging AI to Advance the Power of Facts. Available online: https://www.ap.org/solutions/artificial-intelligence/ (accessed on 23 July 2024).
  70. The Washington Post. 2024. Ethics Policy|Verification and Fact-Checking Standards|Corrections Policy|Policy on Sources|Diversity Policy|AI Policy. Available online: https://www.washingtonpost.com/policies-and-standards/#ai. (accessed on 23 July 2024).
  71. Timcke, Scott, and Herman Wasserman. 2023. The many promises and numerous perils of AI in South Africa’s newsrooms. Daily Maverick, November 27. Available online: https://www.dailymaverick.co.za/article/2023-11-27-the-many-promises-and-numerous-perils-of-ai-in-south-africas-newsrooms/ (accessed on 30 July 2024).
  72. Tyson, Laura D., and John Zysman. 2022. Automation, AI & Work. Daedalus 151: 256–71. [Google Scholar] [CrossRef]
  73. UNESCO. 2023. Artificial Intelligence: Examples of Ethical Dilemmas|UNESCO. Available online: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics/cases (accessed on 23 July 2024).
  74. Vergeer, Maurice. 2020. Artificial intelligence in the Dutch press: An analysis of topics and trends. In Communicating Artificial Intelligence (AI). London: Routledge, pp. 5–24. [Google Scholar] [CrossRef]
  75. Vladisavljević, Nebojša. 2015. Media Framing of Political Conflict. Yorkshire: White Rose Research. Available online: https://eprints.whiterose.ac.uk/117315/1/Vladisavljevic (accessed on 23 July 2024).
  76. WAN-IFRA. 2023. Charter for AI and JOURNALISM: Tech Standards Must Be the Responsibility of Publishers. Available online: https://wan-ifra.org/2023/11/wan-ifra-steps-back-from-endorsing-new-charter-for-ai-and-journalism/ (accessed on 25 October 2024).
  77. Zindi. 2020. GIZ AI4D Africa Language Challenge—Round 2. Available online: https://zindi.africa/competitions/ai4d-african-language-dataset-challenge (accessed on 25 October 2024).
  78. Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power. New York: PublicAffairs. [Google Scholar]
Figure 1. Predominantly quoted sources.
Table 1. Technological pessimism.

Frame | n | %
AI raising ethical issues in journalism | 35 | 48%
AI leading to job displacements in journalism | 3 | 4%
Multiple positions on the subject | 21 | 29%
Stories unrelated to theme | 14 | 19%
Total | 73 | 100%
Table 2. Technological optimism.

Frame | n | %
AI enhancing journalistic practices | 17 | 23%
AI improving productivity in journalism | 13 | 18%
AI enabling new forms of storytelling | 4 | 5%
AI introducing creative solutions to journalism problems | 5 | 7%
Multiple positions on the subject | 29 | 40%
Stories unrelated to theme | 7 | 10%
Total | 73 | 100%
Table 3. Power dynamics in journalism.

Frame | n | %
AI concentrating power in the hands of tech companies | 14 | 19%
AI increasing dependency on and influence of foreign tech | 9 | 12%
AI compromising journalistic and editorial autonomy | 5 | 7%
Mixed positions on the subject | 12 | 16%
Stories unrelated to theme | 33 | 45%
Total | 73 | 100%
Table 4. Sustainability of journalism.

Frame | n | %
AI improving financial stability of media organizations | 14 | 19%
AI having no or minimal effect on financial stability of media organizations | 22 | 30%
Mixed positions on the subject | 9 | 12%
Stories unrelated to theme | 28 | 38%
Total | 73 | 100%
Table 5. Overall tone of sampled articles on AI innovations in journalism.

Tone | n | %
Very negative | 3 | 4%
Moderately negative | 26 | 36%
Moderately positive | 40 | 55%
Very positive | 4 | 5%
Total | 73 | 100%

