Article

AI Threats to Politics, Elections, and Democracy: A Blockchain-Based Deepfake Authenticity Verification Framework

by Masabah Bint E. Islam 1, Muhammad Haseeb 2, Hina Batool 3, Nasir Ahtasham 4 and Zia Muhammad 4,5,*
1 Department of Computer Science, National University of Science and Technology, Islamabad 44000, Pakistan
2 Department of Computer Science, University of North Dakota, Grand Forks, ND 58202, USA
3 Department of Cybersecurity, Air University, Islamabad 44230, Pakistan
4 Department of Computer Science, North Dakota State University (NDSU), Fargo, ND 58105, USA
5 Department of Computing, Design, and Communication, University of Jamestown, Jamestown, ND 58405, USA
* Author to whom correspondence should be addressed.
Blockchains 2024, 2(4), 458-481; https://doi.org/10.3390/blockchains2040020
Submission received: 12 October 2024 / Revised: 30 October 2024 / Accepted: 19 November 2024 / Published: 21 November 2024
(This article belongs to the Special Issue Key Technologies for Security and Privacy in Web 3.0)

Abstract: The integrity of global elections is increasingly under threat from artificial intelligence (AI) technologies. As AI continues to permeate various aspects of society, its influence on political processes and elections has become a critical area of concern. This is because AI language models are far from neutral or objective; they inherit biases from their training data and the individuals who design and utilize them, which can sway voter decisions and affect global elections and democracy. In this research paper, we explore how AI can directly impact election outcomes through various techniques. These include the use of generative AI for disseminating false political information, favoring certain parties over others, and creating fake narratives, content, images, videos, and voice clones to undermine opposition. We highlight how AI threats can influence voter behavior and election outcomes, focusing on critical areas, including political polarization, deepfakes, disinformation, propaganda, and biased campaigns. In response to these challenges, we propose a Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF) designed to detect and authenticate deepfake content in real time. It leverages the transparency of blockchain technology to reinforce electoral integrity. Finally, we also propose comprehensive countermeasures, including enhanced legislation, technological solutions, and public education initiatives, to mitigate the risks associated with AI in electoral contexts, proactively safeguard democracy, and promote fair elections.

1. Introduction

AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans [1]. These systems can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision making, and language translation. The idea of artificial beings with intelligence dates back to ancient myths and stories. However, the seeds of modern AI were planted by philosophers who attempted to describe human thinking as the mechanical manipulation of symbols [2]. The field of AI was formally established in 1956 during a conference at Dartmouth College, where the term “artificial intelligence” was coined. This event is often considered the birth of AI as an academic discipline.
In the 1950s and 1960s, researchers developed early AI programs that could solve algebra problems, prove theorems, and play games like chess. These early successes led to high expectations for the future of AI [3,4]. Significant advancements occurred in the 21st century, particularly with the development of machine learning and deep learning techniques. These methods, which involve training algorithms on large datasets, have led to breakthroughs in areas like image and speech recognition. AI continues to evolve rapidly, with ongoing research and development aimed at creating more advanced and capable systems [5,6].
Nowadays, generative AI, including models like GPT-4, has become a significant focus. These models can generate human-like text, create images, and even produce music. They are being used in various applications, such as content creation, propaganda, and marketing campaigns [7,8]. AI is increasingly being used in politics in various innovative and impactful ways. For example, it can be used in political campaigns, where AI algorithms analyze voter data to create highly targeted political ads [9]. Current tools can be used to perform sentiment analysis in order to monitor social media and other platforms to gauge public sentiment about candidates and issues, helping campaigns adjust their strategies in real time [10,11].
Similarly, AI-powered chatbots provide voters with information about candidates, policies, and voting procedures, making it easier for people to engage with the political process. On the other hand, they could exhibit bias in their answers depending on the datasets they are trained on [12]. AI has become deeply integrated into many aspects of modern life, including political processes [13]. While AI offers significant benefits, such as improved efficiency and targeted communication in campaigns, it also introduces substantial risks. The spread of AI-generated disinformation can mislead voters, deepfakes can damage reputations, and AI-driven propaganda can exacerbate political polarization [14,15,16]. These risks are not merely theoretical; they have already manifested in various forms, threatening the foundations of democracy. This integration of AI into politics necessitates careful consideration and regulation to balance its benefits against potential threats to democratic integrity [17,18].
In March 2024, a deepfake video depicting a fabricated arrest of former U.S. President Donald Trump went viral, illustrating the alarming potential of AI to generate convincing disinformation and manipulate public perception [19]. During the 2019 Indian general elections, fake news and doctored videos, later shown to be AI-generated, spread rapidly, influencing voter sentiment and stoking communal tensions, notably affecting Prime Minister Narendra Modi and his opponents [20,21]. Similarly, in the 2020 Brazilian elections, automated bots were claimed to have flooded social media with disinformation, thereby manipulating public opinion and undermining trust in the electoral process, significantly impacting figures like President Jair Bolsonaro [22,23]. Likewise, in the Philippines, the 2022 Presidential elections were claimed to have been impacted by AI-driven targeted advertising that exploited voter data to create highly personalized, and often misleading, political messages [24,25].
The potential for AI to generate disinformation, manipulate voter behavior, and compromise election security poses significant risks to democratic processes [26]. AI’s influence extends beyond disinformation, posing significant threats to the integrity of elections worldwide by exacerbating polarization, generating deepfakes, spreading propaganda, and enabling biased campaigns [27,28]. These incidents collectively highlight the pervasive and growing threat of AI in compromising the fairness and transparency of democratic elections globally [29,30].
In addressing the impact of AI on electoral integrity, it is essential to re-evaluate the assumption that democracy functions optimally through the sole presence of “truthful information” and rational agents—a view often aligned with rational choice theory [31]. This model implies that access to accurate information inherently supports democratic stability, overlooking the historical context of electoral behaviors shaped by complex social, political, and psychological influences. Political campaigns have long employed various strategies, such as organizational mobilization, psychological persuasion, and rumor-mongering, to sway voter behavior, indicating that manipulation is a longstanding feature of electoral systems [32,33]. This traditional perspective can be expressed by the following simplified equation:
Truthful Information + Rational Agents = Democracy with Integrity
However, this model neglects the nuanced dynamics of human behavior and assumes an idealized version of democratic processes. It implies that manipulation is a new phenomenon introduced by AI, when in fact, as history shows, it has always been part of electoral tactics. In this context, AI acts as an enhancer of these pre-existing practices rather than an exogenous source of manipulation. For instance, AI-driven systems can analyze vast amounts of voter data to fine-tune persuasive strategies, amplifying traditional practices of targeted messaging and misinformation. Thus, rather than representing a fundamental change to the equation, AI modifies the parameters, allowing for automation and increased precision in tactics that have historically been present in electoral systems. The equation can be reframed as follows:
Information (Truthful or Manipulative) + AI-Enhanced Behavioral Influences = Democratic Impact (Positive or Negative)
This revised equation reflects the dual role of information and influence, incorporating both the potential integrity-enhancing aspects of truthful information and the manipulation-enhancing aspects of AI.
The motivation for this research stems from the dual nature of AI in elections, its potential to enhance democratic engagement, and its capacity to undermine electoral integrity. The objective of this paper is to analyze how AI technologies contribute to political polarization, generate deepfakes, disseminate disinformation, create propaganda, and enable biased campaigns [34]. This research paper aims to explore how AI can have an impact on elections through various techniques and identify potential countermeasures to combat these threats. The language models are not neutral or objective; instead, they inherit biases from their training data and the individuals who design and utilize them [35,36,37]. If they are trained on a biased dataset, they possess the capability to manipulate or influence the decisions of voters who rely on their answers, which can affect global elections and democracy [38,39]. By understanding these mechanisms, we can propose strategies to mitigate their impact and protect democratic processes.
To the best of our knowledge, this is the first article providing a comprehensive analysis and thorough review of the threats posed by AI to global elections and democracy. It thoroughly covers critical areas and proposes extensive solutions to detect and mitigate these threats. This article aims to provide the knowledge and tools necessary to safeguard democratic processes and promote fair elections. This paper makes several key contributions:
1. We explore how AI technologies can influence political processes and elections, specifically through the dissemination of false information and biased narratives, which can skew voter decisions and disrupt democratic processes.
2. We detail various methods by which AI can sway elections, including the use of generative AI to create misleading content, such as fake narratives, images, videos, and voice clones, that can undermine political opposition, manipulate public perception, and spread political polarization.
3. We present a Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF) that establishes a systematic approach to verifying the authenticity of digital assets using blockchain technology for the detection of deepfakes.
4. We propose comprehensive countermeasures, including regulatory, technological, and educational measures, to counteract the negative impacts of AI on elections.
This paper is organized as follows. Section 1 outlines the purpose and significance of exploring the intersection of technology and political integrity, particularly in electoral contexts, and presents the central thesis. Section 2 provides a literature review, summarizing current trends in the use of AI and its implications. In Section 3, the use of AI in political campaigning and its role in the spread of misinformation and deepfakes, along with the associated ethical considerations, are discussed. Section 4 introduces the framework, detailing its architecture and functionality for real-time video authenticity verification. Section 5 evaluates the B-DAVF’s effectiveness in assessing video authenticity. Section 6 discusses other countermeasures, including technological solutions, regulatory approaches, and public awareness initiatives, to mitigate misinformation risks. Section 7 includes a discussion of case studies of successful countermeasures, highlighting effective strategies and best practices for safeguarding electoral integrity against AI threats. This paper concludes in Section 8, which summarizes the key findings and suggests future research directions while emphasizing the need for proactive measures to protect democratic processes in an AI-driven landscape.

2. Literature Review

The purpose of this section is to provide a comprehensive overview of existing research and studies related to the application of AI in political campaigns and the associated risks. By reviewing the relevant literature, we establish the context and background for our study, highlighting the significance of AI-related threats to global elections and democracy. We identify gaps in the current body of knowledge that our research aims to address. This helps justify our study and its contributions to the field. We summarize the key findings from previous studies, providing a foundation for our analysis and findings. An overview of the literature review is presented in Table 1, which sets the stage for our research by synthesizing existing knowledge, identifying research gaps, and establishing a theoretical foundation for our study.
AI technologies have revolutionized political campaigns by enabling data-driven strategies, personalized voter engagement, and real-time adaptation. AI enhances the efficiency of political campaigns through data analysis, voter targeting, and strategy optimization [46]. AI tools can analyze vast datasets to identify voter preferences and tailor messages accordingly, significantly impacting voter behavior and election outcomes.
Chen et al. [47] highlighted the dual nature of AI in elections, emphasizing its potential to improve campaign efficiency while posing risks such as spreading disinformation and manipulating voter opinions. The study underscored the importance of balancing AI’s benefits with the need for ethical considerations and regulatory frameworks to mitigate its negative impacts. Deepfakes and AI-generated disinformation represent some of the most concerning threats to electoral integrity. Chen et al. [47] provided an extensive analysis of how AI can create realistic fake content that is difficult to distinguish from reality, thereby undermining trust in public figures and democratic institutions. The study detailed the mechanisms through which AI-generated deepfakes and disinformation can impact public opinion and influence voter behavior. Maria et al. [25] focused on the challenges posed by deepfakes in democratic processes, discussing their potential to spread false information and damage the reputations of political candidates. The authors called for the development of advanced detection technologies and robust legal frameworks to address the growing threat of AI-generated disinformation.
Political polarization is exacerbated by AI algorithms that prioritize content likely to engage users, often resulting in the promotion of sensational and divisive material. Studies have indicated that AI-driven content recommendation systems create echo chambers and filter bubbles, isolating users from diverse perspectives and reinforcing existing biases. Research by Pariser et al. [48] on filter bubbles provided a foundational understanding of how algorithms can create isolated information environments. This concept is further explored in the context of AI by Engin et al. [49], who examined the ethical implications of algorithmic bias and its impact on democratic discourse. These studies highlight the role of AI in deepening political divides by exposing users to content that aligns with their preexisting beliefs.
AI enhances the effectiveness of propaganda and biased campaigns through micro-targeting and psychological profiling. These techniques allow political campaigns to deliver personalized and persuasive messages to specific voter segments, subtly influencing their behavior. The Cambridge Analytica scandal is a prominent example of how AI and data analytics can be misused to manipulate voter behavior. Research by Cadwalladr [50] revealed how the firm used AI to create detailed voter profiles and deliver targeted ads to influence the 2016 U.S. Presidential election.
Zhou et al. [51] proposed a solution that uses machine learning, big data analytics, and network theory to analyze millions of Twitter messages and predict election outcomes. The method accurately predicted the result of the recent Argentine primary Presidential election, a result that traditional pollsters failed to anticipate and that led to a major bond market collapse. Anand Kumar et al. [52] researched the potential of AI to improve polls, campaign methods, and voter registration, as well as the challenges these improvements pose to the integrity of elections worldwide. They examined the political landscape of 2024 and presented case studies where nations used AI in voting systems, discussing the advantages and drawbacks. The article emphasized the need to safeguard the security and legitimacy of voting procedures in the face of AI deployment, urging politicians, election authorities, and the public to address the challenges posed by AI in elections.
Stepien-Zalucka et al. [53] provided practical guidance on navigating ethical dilemmas, the role of AI and smart algorithms in elections, maintaining integrity in research practices, and ensuring that research contributes positively to society. They also emphasized the importance of ethical decision making and the role of researchers in upholding ethical standards. Moreover, Mayank et al. [54] highlighted that AI-powered misinformation tools have the potential to skew public opinion, manipulate voter sentiments, and undermine the integrity of the democratic process. They analyzed case studies from recent Indian elections and compared them with global instances. The paper underscored the need for robust regulatory frameworks and digital literacy initiatives to safeguard the democratic ethos in the age of AI.
The rapid advancement of AI technologies has outpaced the development of corresponding regulatory frameworks, leading to significant gaps that allow for the unchecked use of AI in political processes. The General Data Protection Regulation (GDPR) [55] in the European Union sets a global standard for data privacy and protection, influencing how AI and personal data are used in political campaigns. However, the regulation primarily addresses data handling and transparency, with limited focus on the specific challenges posed by AI-driven disinformation and manipulation [56].
Several studies have advocated for creating comprehensive legislation that specifically addresses AI’s role in elections. Brundage et al. (2018) [13] called for international cooperation to develop harmonized regulations that can effectively mitigate AI threats. The authors emphasized the need for continuous updates to regulatory frameworks to keep pace with technological advancements.
Despite extensive research on AI’s impact on elections and democracy, several gaps remain. There is limited understanding of the long-term effects of AI-driven disinformation and polarization on public trust in democratic institutions. Most studies focus on specific regions, and there is a need for research that encompasses diverse geographical contexts to identify region-specific challenges and solutions. The rapid evolution of AI technologies necessitates ongoing research to anticipate and address new threats as they arise. This literature review highlights the transformative impact of AI on political campaigns and elections, encompassing both its benefits and risks. The findings underscore the urgent need for comprehensive regulatory frameworks, advanced detection technologies, and public education to safeguard electoral integrity. By addressing the identified gaps and pursuing future research directions, stakeholders can better understand and mitigate the threats posed by AI to democracy [57].
The integration of AI in political processes has garnered significant scholarly attention in recent years. This literature review synthesizes current research on AI’s impact on elections and democracy, focusing on key areas such as deepfakes, disinformation, political polarization, and biased campaigns. This review identifies gaps, controversies, and unresolved issues, laying the foundation for further investigation. Table 1 offers a comprehensive overview of some recent instances where AI has been misused in electoral processes across various countries. It highlights the versatility and potential dangers of AI in political contexts by detailing specific cases from the United States, India, Brazil, the Philippines, the United Kingdom, Kenya, Italy, and South Africa, spanning from 2019 to 2024. Each entry identifies the type of election affected and the form of AI misuse (such as deepfake videos, AI-generated fake news, automated bots, and targeted advertising) and describes the technical aspects of the AI techniques involved. Moreover, the table outlines the impact of these AI applications on democracy, illustrating how they have misled public perception, exacerbated political tensions, manipulated voter behavior, and undermined trust in the electoral process.
In the context of the existing literature, most previous research has focused on using blockchain for data integrity and AI-based approaches for deepfake detection independently. However, our study differentiates itself by integrating blockchain technology specifically to create a comprehensive framework for real-time deepfake verification, thereby ensuring both the authenticity and traceability of digital content in political contexts. Unlike existing methods that either lack scalability or require centralized verification, our B-DAVF utilizes a decentralized approach that combines the transparency of blockchain with advanced AI capabilities. This novel combination offers unique advantages, such as improved data integrity, reduced susceptibility to tampering, and broader applicability across various domains, making our contribution distinct in the field.
As illustrated in Table 2, while prior studies have explored various aspects of AI applications in political campaigns and disinformation, none have concurrently addressed all critical factors—including AI campaign strategies, disinformation risks, deepfake detection, and educational initiatives—through the lens of blockchain technology. Our article stands out as a comprehensive approach that not only fulfills these aspects but also integrates a robust framework for countermeasures against digital threats.

3. Use of AI in Elections and Politics

Figure 1 shows that AI plays a multifaceted role in global elections, offering tools for data analysis, voter targeting, personalized messaging, and strategy optimization [58]. AI can significantly enhance voter engagement by providing personalized information about candidates and policies. AI-driven chatbots and virtual assistants can answer voter queries, provide reminders about voting dates, and offer information on polling locations [59]. AI systems can streamline various aspects of electoral management, such as voter list management, logistics planning, and resource allocation. This can lead to more efficient and transparent election processes.
AI can be used to detect and counteract disinformation by analyzing large volumes of data and identifying false or misleading content. AI-powered fact-checking tools can help verify the authenticity of information and prevent the spread of fake news [60]. AI technologies can improve accessibility for voters with disabilities. For example, AI-driven tools can provide real-time translations, speech-to-text services, and other assistive technologies to ensure that all voters can participate in the electoral process [61]. Finally, AI can analyze vast amounts of data, such as voter patterns, political speeches, and news articles, to provide valuable insights for political parties and candidates. This can help them tailor their campaigns more effectively and address the concerns of different voter segments [62].
However, these capabilities also present significant risks, including the potential for AI to impact elections through biased algorithms and disinformation campaigns. AI technologies can favor certain political parties over others by optimizing campaign strategies based on detailed voter profiles. This precision targeting can lead to unequal representation and manipulation of voter behavior. Case studies highlight instances where AI-driven campaigns have significantly influenced electoral outcomes. AI has become a transformative force in political campaigns and elections around the world. Its role encompasses several critical functions.
AI algorithms can process vast amounts of data from various sources, including social media, voter records, and public opinion surveys. This analysis helps campaigns understand voter behavior, preferences, and trends with unprecedented accuracy. Also, campaigns can identify and target specific voter segments with tailored messages. This precision targeting is based on detailed profiles that include demographic information, online behavior, and even psychological traits.
AI also enables the creation of highly personalized communication strategies. Campaigns can deliver customized messages to individual voters, addressing their unique concerns and interests, which increases engagement and the likelihood of influencing voter decisions. AI tools can also optimize overall campaign strategies by predicting the outcomes of various actions, such as advertising placements, public appearances, and policy announcements. This predictive capability allows campaigns to allocate resources more effectively and adjust tactics in real time.
The capacity of AI to shape elections lies in its ability to influence voter behavior subtly yet profoundly, through several key mechanisms. AI’s ability to segment the electorate into highly specific groups means that campaigns can deliver messages that resonate deeply with each group. For example, AI can identify swing voters in key districts and target them with tailored ads that address their specific concerns, potentially swaying their votes.
By analyzing data from social media and other sources, AI can create detailed psychological profiles of voters. These profiles allow campaigns to craft messages that appeal to voters’ emotions, biases, and beliefs, making the messages more persuasive. AI can generate large volumes of content, including articles, social media posts, and videos. This capability enables campaigns to flood the information space with their narratives, overshadowing opposing messages and shaping public discourse. AI can monitor ongoing public sentiment and adjust campaign strategies on the fly. For instance, if a particular message is not resonating with voters, AI can identify this trend and suggest modifications or alternative messages that might be more effective.
AI’s impact on elections is not just theoretical; it has been observed in various real-world scenarios, where AI technologies have favored specific political parties or candidates. AI-driven data analytics has significantly shaped recent electoral strategies globally. From Trump’s 2016 Presidential campaign, which targeted swing-state voters, to Emmanuel Macron’s 2017 campaign in France, which capitalized on AI for targeted voter outreach, AI has been extensively utilized to tilt outcomes in favor of particular candidates.
The incidents presented in Table 1 are prominent examples of how AI has been used to manipulate the general audience, voters, and their sentiments through data and visual evidence. Studies have shown that AI-generated content can significantly impact voter perceptions. For instance, AI-created social media posts and news articles can be indistinguishable from human-generated content, leading to widespread dissemination and influence. Data from sentiment analysis tools used in various campaigns demonstrate how AI can track and influence public sentiment. For example, sentiment analysis during the Brexit referendum revealed how targeted messaging shifted public opinion toward leaving the European Union. The use of AI in elections raises several ethical and regulatory concerns. These include issues of data privacy, the potential for voter manipulation, and the transparency of AI algorithms. Regulatory frameworks need to evolve to address these challenges and ensure that AI technologies are used ethically and responsibly in political campaigns.

3.1. AI Threats and Challenges to Elections and Democracy

AI presents numerous threats to the integrity of elections and democratic processes. This section explores these threats in detail, examining how AI can amplify political polarization, create and spread deepfakes and disinformation, propagate propaganda, and execute biased campaigns. Through case studies and examples, we illustrate the real-world impact of these AI-driven activities on voter behavior and election outcomes.

3.1.1. Polarization

AI algorithms, especially those used by social media platforms, prioritize content that maximizes user engagement. This often results in the promotion of sensationalist and emotionally charged content that can polarize public opinion [47]. Algorithms analyze user behavior and preferences to deliver content that aligns with and reinforces existing beliefs, creating echo chambers where individuals are exposed predominantly to viewpoints that match their own [63]. Some examples are as follows:
  • Research has shown that Facebook’s newsfeed algorithm tends to prioritize content that elicits strong reactions. During the 2016 U.S. Presidential election [64], this resulted in the amplification of politically polarizing content, deepening divisions among voters [65,66].
  • Similar effects have been observed on YouTube, where the recommendation algorithm often promotes extreme and controversial videos [67]. Studies have found that users who start with relatively neutral political content can quickly be led to more radical viewpoints through the platform’s recommendations [68].
  • Similar mechanisms were observed during the Brexit referendum, where AI algorithms amplified content that either strongly supported or opposed Brexit, leaving little room for moderate or balanced viewpoints [69]. This contributed to the highly polarized nature of the public debate [70].

3.1.2. Deepfakes and Disinformation

Deepfakes are hyper-realistic digital manipulations of audio and video that can make it appear as though individuals are saying or doing things they never did [71]. This technology poses a significant threat to elections by enabling the creation of convincing but false content that can mislead voters and damage reputations. These deepfakes can be used to discredit candidates, spread false information, or incite unrest [72].
AI can automate the creation and dissemination of false news articles, social media posts, and other content. These campaigns can be tailored to target specific voter demographics with customized disinformation designed to influence their perceptions and behaviors [73]. AI-driven disinformation campaigns in the 2020 U.S. Presidential election targeted specific voter demographics by spreading false information about candidates and election procedures. These campaigns were strategically designed to exploit social media algorithms, ensuring wide dissemination and a significant impact on voter perceptions [65,66].

3.1.3. Propaganda, Bias, and Campaigns

AI can enhance the effectiveness of propaganda and biased campaigns by enabling the creation of highly targeted and persuasive messages. These techniques can subtly influence voter behavior, often without them being aware [74,75], and potentially result in the following:
  • Erosion of trust: The spread of deepfakes and disinformation can erode public trust in media, political figures, and institutions [76]. When voters cannot distinguish between truth and falsehood, their confidence in the democratic process diminishes [77].
  • Increased polarization: AI-driven amplification of divisive content contributes to a polarized public discourse, making it difficult to achieve consensus on important issues [78].
  • Heightened instances of political violence: The proliferation of deepfakes and disinformation has the potential to inflame public sentiments, prompting individuals to engage in risky behaviors in support of or opposition to their chosen candidate [79,80].
  • Manipulation of voter behavior: AI-enabled micro-targeting and psychological profiling can manipulate voter behavior in ways that are difficult to detect and counteract. This manipulation can alter the outcome of elections by swaying undecided voters or suppressing voter turnout among certain demographics [81,82].
  • Disruption of electoral processes: Disinformation and deepfake campaigns can disrupt electoral processes by spreading false information about voting procedures, creating confusion, and inciting unrest [83].
AI’s role in global elections is multifaceted, offering both significant benefits and substantial risks. While AI can enhance the efficiency and effectiveness of political campaigns, it also has the potential to undermine democratic processes through manipulation and bias. Understanding these dynamics is crucial for developing strategies to mitigate the risks and harness the positive potential of AI in elections. This paper aims to provide a comprehensive analysis of these issues, contributing to the broader discourse on AI’s impact on democracy and electoral integrity.

4. Blockchain-Based Deepfake Authenticity Verification Framework (B-DAVF)

The B-DAVF is a comprehensive system designed to address the threats posed by AI-generated deepfakes. This framework consists of six major components that serve to ensure the integrity and authenticity of digital assets (images, videos, and audio). These components work synergistically to create a robust mechanism for managing authenticity in the digital space. The accompanying diagram in Figure 2 visually represents the interrelationships and flow between these components.

4.1. Content Creation

The process begins with content creation, the first step in our framework, which involves producing digital assets. At this stage, a creator generates content in the form of an image, video, or audio file. The digital asset serves as the foundational element for the subsequent verification processes. It is essential to clearly identify the creator and the digital asset itself, as this facilitates accurate tracking and authenticity verification. A crucial feature at this stage is generating a digital fingerprint in the form of a hash, which is a unique identifier derived from the content. This hash is vital for tracking and authentication throughout the lifecycle of the digital asset.
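To make the fingerprinting step concrete, the following minimal Python sketch computes such a content hash. The choice of SHA-256 and the helper name generate_hash are illustrative assumptions; the framework does not prescribe a specific hash function, only that the fingerprint be unique to the content.

```python
import hashlib

def generate_hash(file_path: str) -> str:
    """Compute a SHA-256 fingerprint of a digital asset (image, video, or audio).

    A minimal sketch: the framework does not mandate SHA-256; any
    collision-resistant hash function could serve as the fingerprint.
    """
    sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        # Read in chunks so large video files need not fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest()
```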

4.2. Registering the Asset

Once the asset has been created, the next step involves registering the asset on the blockchain. This includes generating a unique hash of the digital content, along with a comprehensive set of metadata that describe the essential characteristics of the asset, such as its title, creator, creation date, and a brief description. The metadata, combined with the asset’s hash, are then securely stored on the blockchain under the creator’s public key, establishing a clear link between the asset and its originator. This registration ensures that the creator’s identity is recorded and protected, enabling a reliable and transparent ownership structure, which is fundamental for verifying authenticity later in the process.
Algorithm 1 outlines the process of registering a digital asset on the blockchain. It lists the process of generating a unique hash of the digital content, creating the associated metadata, and securely storing it on the blockchain, linked to the creator’s public key. The algorithm is described as follows:
  • Inputs: The function RegisterAsset takes the digital asset content (C), the creator’s public key (P), the title (T), the creation date (D), and a brief description (Desc).
  • Generate Hash: A unique hash of the digital content is generated using a hashing function GenerateHash.
  • Create Metadata: A metadata dictionary is constructed containing the asset’s title, creator’s public key, creation date, description, and the content hash.
  • Get Blockchain Address: The creator’s public key is used to retrieve the corresponding blockchain address.
  • Store on Blockchain: The metadata are then securely stored on the blockchain at the creator’s address, which returns a transaction hash.
  • Return Transaction Hash: Finally, the transaction hash is returned, confirming the registration of the asset.
Algorithm 1: Registering an asset on the blockchain (pseudocode figure).
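As a concrete illustration of Algorithm 1, the sketch below implements the steps above in Python. The chain object and its methods address_of and store are hypothetical stand-ins for a blockchain client, since the paper leaves the ledger interface abstract.

```python
import hashlib
import json

def register_asset(content: bytes, public_key: str, title: str,
                   creation_date: str, description: str, chain) -> str:
    """Register a digital asset on the blockchain (sketch of Algorithm 1).

    `chain` is a hypothetical client exposing `address_of(public_key)`
    and `store(address, record)`; both interfaces are assumptions.
    """
    # Generate a unique hash of the digital content (GenerateHash).
    content_hash = hashlib.sha256(content).hexdigest()

    # Create the metadata describing the asset.
    metadata = {
        "title": title,
        "creator_public_key": public_key,
        "creation_date": creation_date,
        "description": description,
        "content_hash": content_hash,
    }

    # Get the creator's blockchain address from the public key.
    address = chain.address_of(public_key)

    # Store the metadata on-chain, receiving a transaction hash in return.
    tx_hash = chain.store(address, json.dumps(metadata))

    # Return the transaction hash confirming the registration.
    return tx_hash
```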

4.3. Tracking Modifications

As digital assets are shared and repurposed, they may undergo modifications that alter their original form. In our framework, any modification triggers the recording of a new entry on the blockchain, which captures critical information about the changes. This entry includes the nature of the modifications made, the new hash generated, and the identity of the modifier. This creates an invaluable provenance record comprising details about the original asset, its modification history, timestamps of changes, and creator information. By continuously updating this record, we create a lineage of the asset that encapsulates its entire history of adaptations. This comprehensive documentation allows stakeholders to trace back the evolution of the digital content and assess its authenticity.
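A provenance entry of this kind might be recorded as sketched below; the field names and the reuse of the hypothetical chain client from the registration sketch are assumptions for illustration, not a schema prescribed by the framework.

```python
import hashlib
import json
import time

def record_modification(chain, parent_tx: str, modified_content: bytes,
                        modifier_public_key: str, change_note: str) -> str:
    """Append a modification entry to an asset's provenance record (sketch).

    `parent_tx` links the entry to the previous version, building the
    lineage described in the text; the schema below is illustrative.
    """
    entry = {
        "parent_tx": parent_tx,              # link to the prior version's transaction
        "new_hash": hashlib.sha256(modified_content).hexdigest(),
        "modifier": modifier_public_key,     # identity of the modifier
        "nature_of_change": change_note,     # human-readable description of the edit
        "timestamp": int(time.time()),       # when the change was recorded
    }
    # Store the entry on-chain under the modifier's address (hypothetical API).
    return chain.store(chain.address_of(modifier_public_key), json.dumps(entry))
```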

4.4. Storing the Provenance

Provenance records are key to the operation of the B-DAVF. All entries, encompassing original assets as well as any edited versions, are chronicled on the blockchain in a linked manner, forming a transparent and immutable record of the asset’s history. This chronological linkage ensures that anyone can access a complete history of the digital asset, providing critical insight into its lifecycle. The decentralized nature of the blockchain enhances security, mitigating risks related to data tampering and enhancing trust. Each block, containing vital data, is securely stored in the digital ledger, ensuring long-term protection of the asset’s provenance and authenticity.

4.5. Verification Process

The verification process is a core function of the B-DAVF, allowing users to determine the authenticity of a digital asset. Users can initiate a verification query, which entails a comparison of the current hash of the asset to the hash stored on the blockchain. Additionally, users can examine the asset’s provenance history, reviewing the comprehensive record of modifications and creator details. This dual approach—hash check and provenance examination—enables a thorough vetting procedure. If the current hash matches the stored hash, the asset is authenticated. However, any discrepancies would signal potential issues regarding the asset’s genuineness.
Algorithm 2 represents the verification process of a digital asset on the blockchain, which evaluates both the case where the asset matches the original and the case where it does not. The algorithm is described as follows:
  • Inputs: The function VerifyAsset takes the digital asset content (C’) and the creator’s public key (P).
  • Generate Hash: A hash (H’) of the incoming digital asset content is generated using a hash function (GenerateHash).
  • Get Blockchain Address: The blockchain address associated with the creator’s public key is retrieved.
  • Retrieve Metadata: The algorithm retrieves the stored metadata from the blockchain using the creator’s blockchain address.
  • Check Metadata: The algorithm examines the retrieved metadata and proceeds as follows:
    If metadata are found, it proceeds to check whether the stored content hash matches the generated hash (H’).
    If the hashes match, it indicates that the asset is authentic and outputs a valid message.
    If the hashes do not match, it indicates that the asset has been tampered with.
  • No Metadata Found: If no metadata are found for the creator, the verification is declared invalid.
  • Return Verification Result: Finally, it returns the verification result (R).
Algorithm 2: Verifying an asset on the blockchain (pseudocode figure).
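A minimal Python sketch of Algorithm 2, under the same assumptions as before (the chain client and its retrieve method are hypothetical), might read as follows:

```python
import hashlib
import json

def verify_asset(content: bytes, public_key: str, chain) -> str:
    """Verify a digital asset against its on-chain record (sketch of Algorithm 2)."""
    # Generate the hash H' of the incoming content.
    incoming_hash = hashlib.sha256(content).hexdigest()

    # Resolve the creator's blockchain address and retrieve the stored metadata.
    address = chain.address_of(public_key)
    stored = chain.retrieve(address)  # hypothetical lookup; returns None if absent

    if stored is None:
        # No metadata found for this creator: the verification is invalid.
        return "INVALID: no registration found for this creator"

    metadata = json.loads(stored)
    if metadata["content_hash"] == incoming_hash:
        # Hashes match: the asset is authentic.
        return "VALID: asset is authentic"
    # Hashes differ: the asset has been tampered with.
    return "INVALID: hash mismatch indicates tampering"
```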

4.6. Flagging and Reporting

Finally, if the verification process uncovers discrepancies, such as a hash mismatch or a history that indicates suspicious alterations, our framework incorporates a flagging and reporting mechanism. In the case of a hash match, the asset is confirmed as authentic, providing assurance to users. Conversely, a hash mismatch raises a red flag, indicating that the asset may be a potential deepfake. The framework allows users to report these findings, thereby creating a feedback loop that enhances the overall integrity of the digital ecosystem. This not only empowers users but also fosters a collective effort to combat misinformation and safeguard democratic discourse.

5. Framework Evaluation—Case Study: The “Fake News” Election Video

Imagine a highly publicized election campaign where a video emerges, seemingly featuring a prominent political candidate making inflammatory statements. The video quickly gains traction on social media, generating significant public discourse. However, as the video spreads, experts begin to suspect that it may be a deepfake—a fabricated video created using AI technologies to manipulate the candidate’s image and voice. In this scenario, the B-DAVF can be used to assess the authenticity of the video.
At the onset, let us assume that a digital asset (the potentially manipulated video) has been created. A deepfake creator uses AI to generate a video that appears to feature the candidate making statements that could damage their reputation. This newly formed asset includes a digital fingerprint generated in the form of a hash based on the video’s binary data.
For our example, let us consider the scenario where the video’s creator did not register this asset properly on the blockchain, as they intended to remain anonymous. After the video has gained traction, digital forensics experts begin analyzing it. If our framework were utilized, each time a modification was made, a new entry would be recorded on the blockchain. This entry would document the nature of the change, a new hash generated from the altered content, and the identity of the person(s) who modified it.
When a verification request is initiated by a fact-checking organization or concerned citizens, they can utilize the framework’s features to authenticate the video. The verification process would involve two critical checks. First, they would compute the hash of the currently circulating video and compare it against the hash stored on the blockchain. Second, they would examine the provenance history, discovering that the original video has a clean record, devoid of any controversial statements.
During the verification process, if the hash of the circulating video does not match the original hash stored on the blockchain, this discrepancy indicates manipulation. Moreover, if the modification history shows that alterations were made after the original publication, the framework’s flagging mechanism activates. The system automatically flags the asset, categorizing it as a “potential deepfake”.

5.1. Algorithm: VerifyDeepfakeVideo

Algorithm 3 shows the verification process of the “fake news” election video case study, illustrating how the B-DAVF assesses the authenticity of the video and flags it as a potential deepfake. The algorithm is described as follows:
  • Input Parameters: The algorithm takes the circulating video content (C) and the creator’s public key (P).
  • Generate Hash: The hash (H_current) of the circulating video is computed using a hash function (GenerateHash).
  • Retrieve Blockchain Address: The blockchain address associated with the creator’s public key is retrieved.
  • Retrieve Metadata: The algorithm attempts to fetch the stored metadata (including the original content hash and modification history) from the blockchain.
  • Check for Original Asset: If no metadata are found (StoredMetadata is null), it indicates that the original asset does not exist on the blockchain. The asset is flagged as a potential deepfake, and the result reflects that no authentic original was registered.
  • Hash Comparison: If metadata are found, the algorithm compares H_current with the stored content hash and then proceeds as follows:
    If they match, the asset is deemed authentic, and no flag is raised.
    If they do not match, the asset is flagged as a potential deepfake due to the indication of tampering.
  • Modification History Check: If there is a modification history present in the metadata, the asset is also flagged as a potential deepfake.
  • Return Results: The algorithm returns both the verification result (R) and the flag status (F).
Algorithm 3: Verifying a deepfake video (pseudocode figure).
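Putting these steps together, a sketch of the case-study verification might look as follows; as in the earlier sketches, the chain client, its methods, and the metadata field names are assumptions rather than a prescribed API.

```python
import hashlib
import json

def verify_deepfake_video(video: bytes, public_key: str, chain):
    """Verify a circulating election video (sketch of Algorithm 3).

    Returns a pair (result R, flag F), where F is True when the asset
    is flagged as a potential deepfake.
    """
    # Compute the hash of the circulating video.
    current_hash = hashlib.sha256(video).hexdigest()

    # Retrieve the stored metadata for the claimed creator.
    stored = chain.retrieve(chain.address_of(public_key))

    if stored is None:
        # No authentic original was ever registered on the blockchain.
        return "no authentic original registered", True

    metadata = json.loads(stored)
    if metadata["content_hash"] != current_hash:
        # Hash mismatch indicates tampering.
        return "hash mismatch: possible tampering", True

    if metadata.get("modification_history"):
        # Alterations after original publication also trigger a flag.
        return "modified after original publication", True

    # Hashes match and no suspicious history: the asset is deemed authentic.
    return "authentic", False
```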

5.2. Applications of the B-DAVF Beyond Political Contexts

The framework proposed in this study has applications beyond identifying misinformation in political elections. Due to its decentralized nature and the robustness of blockchain technology, the B-DAVF can also be applied to various domains that require secure and transparent verification of digital assets.
For instance, in the healthcare sector, the B-DAVF can be utilized to verify the authenticity of medical records and prevent tampering. By ensuring that patient data are immutable and transparently tracked, healthcare providers can enhance the integrity of patient information, thereby fostering trust among patients and practitioners. This capability is crucial in scenarios where accurate medical histories are essential for treatment, as it minimizes the risk of fraudulent records and ensures compliance with health regulations.
In the supply chain industry, the B-DAVF offers significant benefits through its ability to track and verify the authenticity of goods. By employing blockchain’s inherent transparency and traceability, stakeholders can monitor each step of the supply chain journey, from the origin of raw materials to the final product delivered to consumers. This transparency not only ensures the authenticity of products but also helps combat counterfeit goods, thus protecting both consumers and brands. Additionally, having authoritative data on goods can enhance operational efficiency and help with compliance with industry standards.
Moreover, the B-DAVF can be adapted for use in sectors such as finance, where it can assist in ensuring the integrity of transactions and preventing financial fraud through transparent audit trails. Similarly, in the realm of government and public records, the framework can enhance transparency and accountability in public service delivery by securely verifying and tracking government documents and transactions.
These additional applications demonstrate not only the versatility of the B-DAVF but also its profound potential to address a wide range of challenges across different sectors, extending well beyond political and electoral contexts.

6. Other Countermeasures Against AI Threats

Given the substantial risks that AI poses to the integrity of elections and democracy, it is crucial to implement effective countermeasures. This section outlines potential solutions to mitigate AI threats, discusses their effectiveness, and provides recommendations for policymakers, researchers, and the public. In this article, we discuss how AI can influence voter behavior and election outcomes, focusing on critical areas like political polarization, deepfakes, disinformation, propaganda, and biased campaigns. AI has the potential to significantly impact election results by disseminating false political information, favoring certain parties over others, and creating fake narratives, content, images, videos, and voice clones to undermine opposition.
There should be stringent protocols to control the use of AI in political campaigns, regulate AI-generated political ads, and hold political leaders to standards of truthfulness. Action is also needed against the creation and dissemination of deepfakes, alongside safeguards for the ethical use of voter data and the prevention of AI-driven voter manipulation. AI giants should be transparent about their algorithms, data sources, and the potential biases in their AI models; this will allow for independent audits and help ensure that AI systems are fair and unbiased. In particular, AI systems used in political campaigns and elections must be transparent: the developers and operators of these systems should disclose the algorithms’ decision-making processes, data sources, and potential biases. Transparency will allow for greater accountability and trust in AI technologies. The Brennan Center’s research on AI threats emphasizes the importance of robust cyber hygiene and of auditing software tabulations; it recommends independent audits of software used for vote tabulation to detect anomalies, ensuring the accuracy and integrity of election results. Election administrators should simulate AI threats and conduct mock elections and tabletop exercises to identify vulnerabilities and test resilience against AI-driven phishing campaigns and disinformation. Finally, the public should be educated about the potential threats posed by AI, how disinformation spreads, and how to identify and report such instances. By increasing AI literacy, voters can better discern between genuine and AI-generated content. Workshops, public service announcements, and educational campaigns can empower citizens with the knowledge to critically evaluate political information.
Figure 3 illustrates a comprehensive framework for addressing AI-related threats through various countermeasures. It highlights four key areas: Regulatory Measures, Technological Solutions, Public Awareness and Education, and Suggestions for Policymakers and Researchers. Each category includes specific actions for mitigating the potential risks posed by AI in elections. Details are provided in the subsequent subsections.

6.1. Regulatory Measures

In this category, specific actions for mitigating the potential risks posed by AI in elections include the following:
  • Legislation and Policy Development: Governments should develop and implement comprehensive legislation that specifically addresses the use of AI in political campaigns [84]. This includes regulations on data privacy, the transparency of AI algorithms, and the ethical use of AI in elections [85,86].
  • Transparency Requirements: Mandating the disclosure of AI use in political campaigns can help ensure that voters are aware of how AI technologies are being used to influence them. Campaigns should be required to disclose the types of AI technologies they use, the data sources they rely on, and the nature of the targeted messages [87,88].

6.2. Technological Solutions

In this category, specific actions for mitigating the potential risks posed by AI in elections include the following:
  • AI Detection Tools: Developing advanced AI tools that can detect and flag deepfakes, disinformation, and other AI-generated content is essential [89]. These tools can help identify false content before it spreads widely, allowing for timely intervention [90].
  • Strengthening Cybersecurity: Strengthening the cybersecurity of election systems is crucial to protect against AI-powered cyberattacks [91]. This includes the use of encryption, intrusion detection systems, and secure authentication protocols.

6.3. Public Awareness and Education

In this category, specific actions for mitigating the potential risks posed by AI in elections include the following:
  • Media Literacy Programs: Implementing media literacy programs can help the public critically evaluate the information they encounter [92]. These programs should teach individuals how to identify disinformation, understand the mechanisms behind AI-generated content, and recognize deepfakes [93].
  • Public Awareness Campaigns: Governments, NGOs, and tech companies should collaborate to launch public awareness campaigns about the risks posed by AI in elections. These campaigns can use various media channels to reach a broad audience and promote critical thinking [94].
The effectiveness of countermeasures depends on their implementation, enforcement, and the adaptability of regulatory frameworks and technologies to evolving AI threats. Effective legislation can provide a strong foundation for mitigating AI threats, but it requires continuous updates to address new developments in AI technologies. Enforcement mechanisms must be robust to ensure compliance and accountability. AI detection tools and cybersecurity measures can significantly reduce the spread of disinformation and protect election infrastructure. However, these tools need constant improvement to keep pace with advancements in AI-generated content. Similarly, educating the public is a long-term strategy that can build resilience against disinformation. While it may take time to see widespread changes in behavior, well-designed education programs can empower individuals to make informed decisions and reduce the impact of AI-driven manipulation.

6.4. Suggestions for Policymakers and Researchers

In this category, specific actions for mitigating the potential risks posed by AI in elections include the following:
  • Develop Comprehensive Legislation: Policymakers should focus on creating laws that address the specific challenges posed by AI in elections. This includes regulations on data usage, transparency, and ethical considerations [95].
  • Enhance International Cooperation: Given the global nature of AI technologies, international cooperation is essential. Policymakers should work together to develop harmonized regulations and share best practices to combat AI threats to democracy [96].
  • Establish Oversight Bodies: Independent oversight bodies should be established to monitor the use of AI in political campaigns, enforce regulations, and investigate violations [97].
  • Advance AI Detection Technologies: Researchers should continue to develop and refine AI tools that can detect deepfakes, disinformation, and other malicious AI-generated content. Collaboration between academia and industry can accelerate these advancements [98,99].
  • Analyze AI’s Impact on Democracy: Conducting research on the long-term effects of AI on democratic processes can provide valuable insights into how AI technologies influence voter behavior and public trust [100]. This research can inform the development of more effective countermeasures.
  • Implement Media Literacy Programs: Schools, universities, and community organizations should incorporate media literacy into their curricula [101]. These programs should focus on critical thinking skills and the ability to recognize and counteract disinformation [102].
  • Promote Public Awareness Campaigns: Continuous public awareness campaigns should be conducted to keep the public informed about AI threats and how to protect themselves [103]. These campaigns can use social media, traditional media, and public events to reach diverse audiences [104].

7. Discussion—Case Studies

Hash-based verification mechanisms are widely recognized within the blockchain literature, especially in areas such as supply chain management. Our application of these principles targets the distinct and emerging challenge of deepfake verification. The tamper-evident properties of cryptographic hashes provide a foundational, trusted layer of verification; however, our focus is to adapt this well-established mechanism specifically for authenticating media in contexts vulnerable to AI-driven disinformation, such as electoral systems. This adaptation involves leveraging blockchain’s transparency and immutability, not only for general provenance tracking but also as a safeguard against the rising influence of deepfake content. In this way, the framework integrates familiar blockchain techniques into a niche application that addresses the growing need for information integrity in digital media verification.
Moreover, countering the threats posed by AI to elections and democracy requires a multifaceted approach that combines regulatory measures, technological solutions, and public education. Policymakers, researchers, and the public can work together to develop and implement effective strategies to protect electoral integrity. Some notable case studies of successful countermeasures are discussed below.
The European Union’s General Data Protection Regulation (GDPR) has set a global standard for data privacy and protection [105]. Its stringent requirements for data handling and transparency have significantly influenced how political campaigns use AI and personal data [106]. By mandating clear consent for data collection and imposing strict penalties for non-compliance, the GDPR has provided a robust framework that other regions can emulate. This regulation ensures that personal data are handled responsibly, reducing the risk of misuse in political campaigns and enhancing voter trust in the electoral process [107].
Facebook’s Deepfake Detection Challenge is another exemplary initiative aimed at combating AI-generated disinformation. Recognizing the growing threat of deepfakes, Facebook launched a global challenge to improve deepfake detection technologies [108,109]. This initiative brought together researchers from around the world to develop advanced tools capable of identifying manipulated content. The challenge has led to significant advancements in the detection of AI-generated videos, making it more difficult for malicious actors to spread false information through deepfakes [110].
In Finland, public awareness initiatives have played a crucial role in increasing public resilience against disinformation campaigns. The Finnish government has implemented extensive media literacy programs aimed at educating the public about the dangers of disinformation and AI-generated content [111]. These programs focus on teaching critical thinking skills, helping individuals to recognize and evaluate the credibility of information they encounter online. As a result, the Finnish public has become more adept at identifying and resisting disinformation, thereby reducing the impact of AI-driven propaganda on the electoral process [112].
Companies and organizations are actively working to detect and stay ahead of AI threats to democracy. By developing innovative technologies, implementing robust regulatory frameworks, and promoting public education, these efforts collectively contribute to safeguarding democratic processes. Understanding and addressing these challenges is essential for harnessing the benefits of AI while protecting the integrity of elections and democratic institutions.

Verification and Historical Context of Electoral Challenges

A critical aspect of ensuring information integrity within electoral contexts is the verification of the content creator’s identity. Identity verification within the B-DAVF aims to establish a trusted link between the digital content and its source, ensuring that any media artifact circulating in the public sphere can be reliably traced to its origin. This mechanism not only strengthens the authenticity of digital content but also acts as a deterrent against the dissemination of falsified media. In an era where manipulated content can quickly gain traction and influence public perception, verifying the identity of digital content creators is essential for mitigating the spread of misinformation and deepfakes.
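As a minimal sketch of such an identity check (assuming the third-party cryptography package and Ed25519 signatures; the key names and sample content are illustrative, and the B-DAVF itself does not prescribe a particular signature scheme), the creator signs the content at publication time, and anyone can later verify the signature against the creator’s registered public key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator generates a key pair; in the framework, the public key would
# be bound to a verified identity on the blockchain for anyone to look up.
creator_key = Ed25519PrivateKey.generate()
creator_public = creator_key.public_key()

content = b"official statement: polling stations close at 8 p.m."
signature = creator_key.sign(content)  # produced once, at publication time

def verify_creator(public_key, content: bytes, signature: bytes) -> bool:
    """Return True only if the content originates from the key holder, unchanged."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

print(verify_creator(creator_public, content, signature))         # True: authentic
print(verify_creator(creator_public, content + b"!", signature))  # False: altered content fails
```

A valid signature binds the content to the registered creator, so a forged or altered media artifact fails verification even if it is visually convincing.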
In addition to addressing modern AI-related concerns, it is essential to recognize that misinformation and manipulation are not new phenomena within democratic systems. Historical electoral processes have frequently been subject to attempts at manipulation, often through rumors, propaganda, and other non-AI-related tactics. These challenges highlight the need for robust verification measures, like those proposed in the B-DAVF, to help safeguard democratic integrity.
This viewpoint aligns with the insights of political theorist Hannah Arendt, who warned of the dangers posed by pervasive misinformation: “A people that can no longer distinguish between truth and lies cannot distinguish between right and wrong. And such a people, deprived of the power to think and judge, is, without knowing and willing it, completely subjected to the rule of lies”. This statement emphasizes the risks inherent in a media landscape saturated with misinformation, where the public’s ability to discern truth is compromised, potentially undermining the foundations of democratic decision making.
By incorporating identity verification and understanding the historical challenges to electoral integrity, our framework contributes to a layered approach that addresses both the current technological threats posed by AI and the enduring need for trusted information in democratic societies.

8. Conclusions

This research examines the significant threats posed by AI to global elections and democracy, highlighting how AI technologies facilitate the creation and dissemination of deepfakes and disinformation, undermining trust in public figures, media, and democratic institutions. AI enhances the effectiveness of propaganda and biased campaigns through micro-targeting and psychological profiling, subtly influencing voter behavior and manipulating public opinion. Real-world examples, such as the Cambridge Analytica scandal and deepfake videos in Indian elections, illustrate the profound impact of AI on electoral outcomes and the ethical concerns surrounding its use. This article proposes effective countermeasures, including regulatory measures, technological solutions, and public education. These strategies must be implemented and continuously updated to address evolving AI threats. The ability of AI to manipulate voter behavior and amplify polarization can undermine the legitimacy of election outcomes, erode public trust in democratic institutions, and destabilize political systems. Additionally, the widespread use of AI-generated disinformation can create confusion and foster cynicism among voters, making it challenging to maintain a well-informed electorate.
While this research provides a comprehensive analysis of AI threats to elections and democracy, several limitations must be acknowledged. This study primarily focuses on well-documented cases and may not capture all the nuanced ways AI is used in political campaigns globally. AI technologies are evolving rapidly, and new threats may emerge that are not covered in this research. Access to detailed data on the use of AI in political campaigns is limited, which may affect the depth of analysis in some areas. The case studies and examples primarily focus on specific regions, and the findings may not be fully generalizable to other contexts. Future research should aim to address these limitations and explore new areas of inquiry.
In future work, we aim to implement and evaluate the B-DAVF in practical settings to obtain quantitative insights into its real-world efficacy. This will involve deploying the framework within organizations that manage or verify digital media assets, particularly those vulnerable to deepfake risks, such as news outlets or political institutions. Through this practical application, we plan to measure key performance metrics, including detection accuracy, processing speed, and scalability. Additionally, conducting empirical studies across diverse environments will allow us to assess the framework’s adaptability and robustness against evolving deepfake technologies.
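As a starting point for that evaluation, a small harness along the following lines (an illustrative sketch; the verify callable and toy samples are placeholders rather than results from any deployment) could report detection accuracy and verification throughput for a candidate verifier:

```python
import time

def evaluate(verify, samples):
    """samples: list of (media_bytes, is_authentic) ground-truth pairs.
    Returns detection accuracy and verification throughput."""
    start = time.perf_counter()
    correct = sum(verify(media) == label for media, label in samples)
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(samples),
        "items_per_second": len(samples) / elapsed if elapsed > 0 else float("inf"),
    }

# Toy verifier and samples purely to exercise the harness.
known_authentic = {b"authentic clip"}
samples = [(b"authentic clip", True), (b"forged clip", False)]
print(evaluate(lambda media: media in known_authentic, samples))
```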
In addition, we aim to conduct a comparative analysis between the methods proposed in this paper and traditional methods, using numerical results obtained from real datasets and selected facial features. These implementations will help validate the theoretical model presented in this paper and further refine the B-DAVF to enhance its utility for safeguarding electoral and democratic integrity.

Author Contributions

Writing—original draft, M.B.E.I.; Writing—review & editing, H.B. and N.A.; Visualization, M.H.; Project administration, Z.M.; Funding acquisition, Z.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors recognize the importance of ethical AI use in research and its societal impact. Grammarly, Microsoft Copilot, and ChatGPT were used to improve grammar, clarity, and coherence; the authors take full responsibility for the originality, validity, and integrity of the content.

Conflicts of Interest

The authors declare that they have no conflicts of interest with respect to the publication of this article.

References

  1. Fetzer, J.H. What Is Artificial Intelligence? Springer: New York, NY, USA, 1990. [Google Scholar]
  2. PK, F.A. What is Artificial Intelligence? In Success Is No Accident. It Is Hard Work, Perseverance, Learning, Studying, Sacrifice and Most of All, Love of What You Are Doing or Learning to Do; L’Ordine Nuovo Publication: New Delhi, India, 1984; Volume 65. Available online: https://core.ac.uk/download/pdf/523285678.pdf#page=76 (accessed on 11 October 2024).
  3. Wang, H. Proving theorems by pattern recognition I. Commun. ACM 1960, 3, 220–234. [Google Scholar] [CrossRef]
  4. Wang, H. Computer theorem proving and artificial intelligence. In Computation, Logic, Philosophy: A Collection of Essays; Springer Science & Business Media: New York, NY, USA, 1990; pp. 63–75. [Google Scholar]
  5. Finn, P.; Bell, L.C.; Tatum, A.; Leicht, C.V. Assessing ChatGPT as a tool for research on US state and territory politics. Political Stud. Rev. 2024, 14789299241268652. Available online: https://journals.sagepub.com/doi/abs/10.1177/14789299241268652 (accessed on 11 October 2024). [CrossRef]
  6. Puggioni, R. Coming out as undocumented: Identity celebrations and political change. Societies 2024, 14, 130. [Google Scholar] [CrossRef]
  7. Wu, T.; He, S.; Liu, J.; Sun, S.; Liu, K.; Han, Q.L.; Tang, Y. A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA J. Autom. Sin. 2023, 10, 1122–1136. [Google Scholar] [CrossRef]
  8. Rozado, D. The political biases of chatgpt. Soc. Sci. 2023, 12, 148. [Google Scholar] [CrossRef]
  9. Dommett, K. Data-driven political campaigns in practice: Understanding and regulating diverse data-driven campaigns. Internet Policy Rev. 2019, 8, 7. [Google Scholar] [CrossRef]
  10. Sandoval-Almazan, R.; Valle-Cruz, D. Facebook impact and sentiment analysis on political campaigns. In Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age, Delft, The Netherlands, 30 May–1 June 2018; pp. 1–7. [Google Scholar]
  11. Vlados, C.M. The Current Evolution of International Political Economy: Exploring the New Theoretical Divide between New Globalization and Anti-Globalization. Societies 2024, 14, 135. [Google Scholar] [CrossRef]
  12. Kang, M. A Study of Chatbot Personality based on the Purposes of Chatbot. J. Korea Contents Assoc. 2018, 18, 319–329. [Google Scholar]
  13. Brundage, M.; Avin, S.; Wang, J.; Krueger, G. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv 2018, arXiv:1802.07228. [Google Scholar]
  14. Irfan, M.; Ali, S.T.; Ijlal, H.S.; Muhammad, Z.; Raza, S. Exploring The Synergistic Effects of Blockchain Integration with IOT and AI for Enhanced Transparency and Security in Global Supply Chains. Int. J. Contemp. Issues Soc. Sci 2024, 3, 1326–1338. [Google Scholar]
  15. Yankoski, M.; Weninger, T.; Scheirer, W. An AI early warning system to monitor online disinformation, stop violence, and protect elections. Bull. At. Sci. 2020, 76, 85–90. [Google Scholar] [CrossRef]
  16. Fiaz, F.; Sajjad, S.M.; Iqbal, Z.; Yousaf, M.; Muhammad, Z. MetaSSI: A Framework for Personal Data Protection, Enhanced Cybersecurity and Privacy in Metaverse Virtual Reality Platforms. Future Internet 2024, 16, 176. [Google Scholar] [CrossRef]
  17. Micha, E.; Shah, N. Can We Predict the Election Outcome from Sampled Votes? In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 2176–2183. [Google Scholar]
  18. Arshad, J.; Talha, M.; Saleem, B.; Shah, Z.; Zaman, H.; Muhammad, Z. A Survey of Bug Bounty Programs in Strengthening Cybersecurity and Privacy in the Blockchain Industry. Blockchains 2024, 2, 195–216. [Google Scholar] [CrossRef]
  19. Łabuz, M.; Nehring, C. On the way to deep fake democracy? Deep fakes in election campaigns in 2023. Eur. Political Sci. 2024, 1–20. [Google Scholar] [CrossRef]
  20. Bali, A.; Desai, P. Fake news and social media: Indian perspective. Media Watch 2019, 10, 737–750. [Google Scholar] [CrossRef]
  21. Christou, A. Theorising Pandemic Necropolitics as Evil: Thinking Inequalities, Suffering, and Vulnerabilities with Arendt. Societies 2024, 14, 171. [Google Scholar] [CrossRef]
  22. Benevenuto, F.; Melo, P. Misinformation Campaigns through WhatsApp and Telegram in Presidential Elections in Brazil. Commun. ACM 2024, 67, 72–77. [Google Scholar] [CrossRef]
  23. Kazim, M.; Pirim, H.; Shi, S.; Wu, D. Multilayer analysis of energy networks. Sustain. Energy, Grids Netw. 2024, 39, 101407. [Google Scholar] [CrossRef]
  24. Kim-Leffingwell, S.; Sallenback, E. Mnemonic politics among Philippine voters: A social media measurement approach. Democratization 2024, 1–23. [Google Scholar] [CrossRef]
  25. Pawelec, M. Deepfakes and democracy (theory): How synthetic audio-visual media for disinformation and hate speech threaten core democratic functions. Digit. Soc. 2022, 1, 19. [Google Scholar] [CrossRef]
  26. Coeckelbergh, M. The Political Philosophy of AI: An Introduction; John Wiley & Sons: New York, NY, USA, 2022. [Google Scholar]
  27. Pope, A.E. Cyber-securing our elections. J. Cyber Policy 2018, 3, 24–38. [Google Scholar] [CrossRef]
  28. Nazir, A.; Iqbal, Z.; Muhammad, Z. ZTA: A Novel Zero Trust Framework for Detection and Prevention of Malicious Android Applications. 2024. Available online: https://www.researchsquare.com/article/rs-4464369/v1 (accessed on 11 October 2024).
  29. Overton, S. Overcoming Racial Harms to Democracy from Artificial Intelligence. Iowa Law Rev. 2024. Forthcoming. [Google Scholar]
  30. Cupać, J.; Sienknecht, M. Regulate against the machine: How the EU mitigates AI harm to democracy. Democratization 2024, 31, 1067–1090. [Google Scholar] [CrossRef]
  31. Rosenfeld, S. Democracy and Truth: A Short History; University of Pennsylvania Press: Philadelphia, PA, USA, 2018. [Google Scholar]
  32. Porpora, D.; Sekalala, S. Truth, communication, and democracy. Int. J. Commun. 2019, 13, 18. [Google Scholar]
  33. Rosenbach, E.; Mansted, K. Can Democracy Survive in the Information Age? Belfer Center for Science and International Affairs: Cambridge, MA, USA, 2018; Volume 30. [Google Scholar]
  34. Saleem, B.; Ahmed, M.; Zahra, M.; Hassan, F.; Iqbal, M.A.; Muhammad, Z. A survey of cybersecurity laws, regulations, and policies in technologically advanced nations: A case study of Pakistan to bridge the gap. Int. Cybersecur. Law Rev. 2024, 5, 533–561. [Google Scholar] [CrossRef]
  35. Du-Harpur, X.; Watt, F.; Luscombe, N.; Lynch, M. What is AI? Applications of artificial intelligence to dermatology. Br. J. Dermatol. 2020, 183, 423–430. [Google Scholar] [CrossRef]
  36. Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45. [Google Scholar] [CrossRef]
  37. Liu, X.Y.; Wang, G.; Yang, H.; Zha, D. Fingpt: Democratizing internet-scale data for financial large language models. arXiv 2023, arXiv:2307.10485. [Google Scholar]
  38. Wei, Z.; Xu, X.; Hui, P. Digital Democracy at Crossroads: A Meta-Analysis of Web and AI Influence on Global Elections. In Proceedings of the Companion Proceedings of the ACM on Web Conference 2024, Singapore, 13–17 May 2024; pp. 1126–1129. [Google Scholar]
  39. Javed, M.S.; Sajjad, S.M.; Mehmood, D.; Mansoor, K.; Iqbal, Z.; Kazim, M.; Muhammad, Z. Analyzing Tor Browser Artifacts for Enhanced Web Forensics, Anonymity, Cybersecurity, and Privacy in Windows-Based Systems. Information 2024, 15, 495. [Google Scholar] [CrossRef]
  40. Bakir, V.; Laffer, A.; McStay, A.; Miranda, D.; Urquhart, L. On Manipulation by Emotional AI: UK Adults’ Views and Governance Implications. Front. Sociol. 2024, 9, 1339834. [Google Scholar] [CrossRef]
  41. Masombuka, M.; Duvenage, P.; Watson, B. A Cybersecurity Imperative on an Electronic Voting System in South Africa-2024 and Beyond. In Proceedings of the ICCWS 2021 16th International Conference on Cyber Warfare and Security, Cookeville, TN, USA, 25–26 February 2021; Academic Conferences Limited: Oxfordshire, UK, 2021; p. 204. [Google Scholar]
  42. Maweu, J.M. “Fake elections”? Cyber propaganda, disinformation and the 2017 general elections in Kenya. Afr. J. Stud. 2019, 40, 62–76. [Google Scholar] [CrossRef]
  43. Martella, A.; Roncarolo, F. Giorgia Meloni in the spotlight. Mobilization and competition strategies in the 2022 Italian election campaign on Facebook. Contemp. Ital. Politics 2023, 15, 88–102. [Google Scholar] [CrossRef]
  44. Fears of AI Disinformation Cast Shadow over Turkish Local Elections. 2024. Available online: https://www.aljazeera.com/news/2024/3/28/fears-ai-disinformation-cast-shadow-over-turkish-local-elections (accessed on 21 July 2024).
  45. Posts Use Altered Image of Secret Service Agents following Trump Shooting. 2024. Available online: https://www.factcheck.org/2024/07/posts-use-altered-image-of-secret-service-agents-following-trump-shooting/ (accessed on 21 July 2024).
  46. Tomić, Z.; Damnjanović, T.; Tomić, I. Artificial intelligence in political campaigns. South East. Eur. J. Commun. 2023, 5, 17–28. [Google Scholar] [CrossRef]
  47. Yu, C. How Will AI Steal Our Elections? Center for Open Science: Charlottesville, VA, USA, 2024. [Google Scholar]
  48. Pariser, E. The Filter Bubble: What the Internet is Hiding from You; Penguin Press: London, UK, 2011. [Google Scholar]
  49. Bozdag, E. Bias in algorithmic filtering and personalization. Ethics Inf. Technol. 2013, 15, 209–227. [Google Scholar] [CrossRef]
  50. Cadwalladr, C.; Graham-Harrison, E. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Guardian 2018, 17, 22. [Google Scholar]
  51. Zhou, Z.; Makse, H. Artificial intelligence for elections: The case of 2019 Argentina primary and presidential election. arXiv 2019, arXiv:1910.11227. [Google Scholar] [CrossRef]
  52. Chennupati, A. The threat of artificial intelligence to elections worldwide: A review of the 2024 landscape. World J. Adv. Eng. Technol. Sci. 2024, 12, 29–34. [Google Scholar] [CrossRef]
  53. Stepien-Zalucka, B. AI-voting?: A few words about the role of algorithms in elections. In Artificial Intelligence and Human Rights; Dykinson: Madrid, Spain, 2021; pp. 117–128. Available online: https://www.torrossa.com/en/resources/an/5109967 (accessed on 11 October 2024).
  54. Tomar, M.; Raj, N.; Singh, S.; Marwaha, S.; Tiwari, M. The Role of AI-driven Tools in Shaping the Democratic Process: A Study of Indian Elections and Social Media Dynamics. Ind. Eng. J. 2023, 52, 143–153. [Google Scholar]
  55. Voigt, P.; Von dem Bussche, A. The EU General Data Protection Regulation (GDPR). In A Practical Guide, 1st ed.; Springer International Publishing: Cham, Switzerland, 2017; Volume 10. [Google Scholar]
  56. Kingston, J. Using artificial intelligence to support compliance with the general data protection regulation. Artif. Intell. Law 2017, 25, 429–443. [Google Scholar] [CrossRef]
  57. Labu, M.R.; Ahammed, M.F. Next-Generation Cyber Threat Detection and Mitigation Strategies: A Focus on Artificial Intelligence and Machine Learning. J. Comput. Sci. Technol. Stud. 2024, 6, 179–188. [Google Scholar] [CrossRef]
  58. Muneer, S.; Farooq, U.; Athar, A.; Ahsan Raza, M.; Ghazal, T.M.; Sakib, S. A Critical Review of Artificial Intelligence Based Approaches in Intrusion Detection: A Comprehensive Analysis. J. Eng. 2024, 2024, 3909173. [Google Scholar] [CrossRef]
  59. Madsen, J.K. The Psychology of Micro-Targeted Election Campaigns; Springer: New York, NY, USA, 2019. [Google Scholar]
  60. Shahzad, F. Uses of Artificial Intelligence and Big Data for Election Campaign in Turkey. Master’s Thesis, Marmara Universitesi, Istanbul, Türkiye, 2021. [Google Scholar]
  61. Michael, T. General Election and the Study of the Future. J. Notariil 2018, 3, 130–136. [Google Scholar]
  62. Mustafa, Y.; Warka, M. Presidential Election and Vice President of the Republic of Indonesia Based on Pancasila Democratic Principles. JL Pol’y Glob. 2019, 88, 1. [Google Scholar]
  63. Ohagi, M. Polarization of autonomous generative AI agents under echo chambers. arXiv 2024, arXiv:2402.12212. [Google Scholar]
  64. Thorson, K.; Cotter, K.; Medeiros, M.; Pak, C. Algorithmic inference, political interest, and exposure to news and politics on Facebook. Inf. Commun. Soc. 2021, 24, 183–200. [Google Scholar] [CrossRef]
  65. Bossetta, M. The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 US election. J. Mass Commun. Q. 2018, 95, 471–496. [Google Scholar] [CrossRef]
  66. Alvarez, G.; Choi, J.; Strover, S. Good news, bad news: A sentiment analysis of the 2016 election Russian facebook ads. Int. J. Commun. 2020, 14, 3027–3053. [Google Scholar]
  67. Yesilada, M.; Lewandowsky, S. Systematic review: YouTube recommendations and problematic content. Internet Policy Rev. 2022, 11. [Google Scholar] [CrossRef]
  68. Matamoros-Fernández, A.; Gray, J.E.; Bartolo, L.; Burgess, J.; Suzor, N. What’s “Up Next”? Investigating Algorithmic Recommendations on YouTube Across Issues and Over Time. Media Commun. 2021, 9, 234–249. [Google Scholar] [CrossRef]
  69. Chen, S. Artificial Intelligence in Democracy: Unraveling the Influence of Social Bots in Brexit through Cybernetics. Trans. Soc. Sci. Educ. Humanit. Res. 2024, 6, 324–329. [Google Scholar] [CrossRef]
  70. Risso, L. Harvesting your soul? Cambridge analytica and brexit. Brexit Means Brexit 2018, 2018, 75–90. [Google Scholar]
  71. Helmus, T.C. Artificial Intelligence, Deepfakes, and Disinformation; RAND Corporation: Santa Monica, CA, USA, 2022; pp. 1–24. [Google Scholar]
  72. Vaccari, C.; Chadwick, A. Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Soc. Media+ Soc. 2020, 6, 2056305120903408. [Google Scholar] [CrossRef]
  73. Fraga-Lamas, P.; Fernandez-Carames, T.M. Fake news, disinformation, and deepfakes: Leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020, 22, 53–59. [Google Scholar] [CrossRef]
  74. Beyle, H.C. Determining the effect of propaganda campaigns. Ann. Am. Acad. Political Soc. Sci. 1935, 179, 106–113. [Google Scholar] [CrossRef]
  75. Haq, E.U.; Zhu, Y.; Hui, P.; Tyson, G. History in Making: Political Campaigns in the Era of Artificial Intelligence-Generated Content. In Proceedings of the Companion Proceedings of the ACM on Web Conference 2024, Singapore, 13–17 May 2024; pp. 1115–1118. [Google Scholar]
  76. Puri, A.; Keymolen, E. The Doors of Janus: A critical analysis of the socio-technical forces eroding trust in the Rule of Law. Cardozo Arts Entertain. Law J. 2024. Forthcoming. [Google Scholar]
  77. Battista, D. Political communication in the age of artificial intelligence: An overview of deepfakes and their implications. Soc. Regist. 2024, 8, 7–24. [Google Scholar] [CrossRef]
  78. Francescato, D. Globalization, artificial intelligence, social networks and political polarization: New challenges for community psychologists. Community Psychol. Glob. Perspect. 2018, 4, 20–41. [Google Scholar]
  79. Feldstein, S. The road to digital unfreedom: How artificial intelligence is reshaping repression. J. Democr. 2019, 30, 40–52. [Google Scholar] [CrossRef]
  80. Savaget, P.; Chiarini, T.; Evans, S. Empowering political participation through artificial intelligence. Sci. Public Policy 2019, 46, 369–380. [Google Scholar] [CrossRef]
  81. Howard, P.N.; Woolley, S.; Calo, R. Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. J. Inf. Technol. Politics 2018, 15, 81–93. [Google Scholar] [CrossRef]
  82. Kertysova, K. Artificial intelligence and disinformation: How AI changes the way disinformation is produced, disseminated, and can be countered. Secur. Hum. Rights 2018, 29, 55–81. [Google Scholar] [CrossRef]
  83. Hibbs, D.A. Mass Political Violence: A Cross-National Causal Analysis; Wiley: New York, NY, USA, 1973; Volume 253. [Google Scholar]
  84. Rébé, N. New Proposed AI Legislation. In Artificial Intelligence: Robot Law, Policy and Ethics; Brill Nijhoff: Leiden, The Netherlands, 2021; pp. 183–224. [Google Scholar]
  85. Floridi, L. The European legislation on AI: A brief analysis of its philosophical approach. Philos. Technol. 2021, 34, 215–222. [Google Scholar] [CrossRef]
  86. Chae, Y. US AI regulation guide: Legislative overview and practical considerations. J. Robot. Artif. Intell. Law 2020, 3, 17–40. [Google Scholar]
  87. Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamó-Larrieux, A. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc. 2019, 6, 2053951719860542. [Google Scholar] [CrossRef]
  88. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamó-Larrieux, A. Towards transparency by design for artificial intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361. [Google Scholar] [CrossRef]
  89. Chaka, C. Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools. J. Appl. Learn. Teach. 2023, 6. [Google Scholar] [CrossRef]
  90. Weber-Wulff, D.; Anohina-Naumeca, A.; Bjelobaba, S.; Foltýnek, T.; Guerrero-Dib, J.; Popoola, O.; Šigut, P.; Waddington, L. Testing of detection tools for AI-generated text. Int. J. Educ. Integr. 2023, 19, 26. [Google Scholar] [CrossRef]
  91. Nadella, G.S.; Gonaygunta, H. Enhancing Cybersecurity with Artificial Intelligence: Predictive Techniques and Challenges in the Age of IoT. Available online: https://ijsea.com/archive/volume13/issue4/IJSEA13041007.pdf (accessed on 11 October 2024).
  92. Tiernan, P.; Costello, E.; Donlon, E.; Parysz, M.; Scriney, M. Information and Media Literacy in the Age of AI: Options for the Future. Educ. Sci. 2023, 13, 906. [Google Scholar] [CrossRef]
  93. Torok, M.; Calear, A.; Shand, F.; Christensen, H. A systematic review of mass media campaigns for suicide prevention: Understanding their efficacy and the mechanisms needed for successful behavioral and literacy change. Suicide Life-Threat. Behav. 2017, 47, 672–687. [Google Scholar] [CrossRef]
  94. Shalevska, E. The Future of Political Discourse: AI and Media Literacy Education. J. Leg. Political Educ. 2024, 1, 50–61. [Google Scholar] [CrossRef]
  95. Marinković, A.R. The New EU AI Act: A Comprehensive Legislation on AI or Just a Beginning? Glob. J. Bus. Integral Secur. 2023. Available online: http://gbis.ch/index.php/gbis/article/view/258 (accessed on 11 October 2024).
  96. Khan, A. The Intersection Of Artificial Intelligence And International Trade Laws: Challenges And Opportunities. IIUMLJ 2024, 32, 103. [Google Scholar] [CrossRef]
  97. Busuioc, M. AI algorithmic oversight: New frontiers in regulation. In Handbook of Regulatory Authorities; Edward Elgar Publishing: Cheltenham, UK, 2022; pp. 470–486. [Google Scholar]
  98. Salem, A.H.; Azzam, S.M.; Emam, O.; Abohany, A.A. Advancing cybersecurity: A comprehensive review of AI-driven detection techniques. J. Big Data 2024, 11, 105. [Google Scholar] [CrossRef]
  99. Beck, J.; Burri, T. From “human control” in international law to “human oversight” in the new EU act on artificial intelligence. In Research Handbook on Meaningful Human Control of Artificial Intelligence Systems; Edward Elgar Publishing: Cheltenham, UK, 2024; pp. 104–130. [Google Scholar]
  100. Holmes, W.; Persson, J.; Chounta, I.A.; Wasson, B.; Dimitrova, V. Artificial Intelligence and Education: A Critical View Through the Lens of Human Rights, Democracy and the Rule of Law; Council of Europe: Strasbourg, France, 2022. [Google Scholar]
  101. Su, J.; Ng, D.T.K.; Chu, S.K.W. Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Comput. Educ. Artif. Intell. 2023, 4, 100124. [Google Scholar] [CrossRef]
  102. Hristovska, A. Fostering media literacy in the age of ai: Examining the impact on digital citizenship and ethical decision-making. Журнал за медиуми и кoмуникации 2023, 2, 39–59. [Google Scholar]
  103. Fletcher, A.; McCulloch, K.; Baulk, S.D.; Dawson, D. Countermeasures to driver fatigue: A review of public awareness campaigns and legal approaches. Aust. N. Z. J. Public Health 2005, 29, 471–476. [Google Scholar] [CrossRef]
  104. Porlezza, C. Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism. Communications 2023, 48, 370–394. [Google Scholar] [CrossRef]
  105. Loré, F.; Basile, P.; Appice, A.; de Gemmis, M.; Malerba, D.; Semeraro, G. An AI framework to support decisions on GDPR compliance. J. Intell. Inf. Syst. 2023, 61, 541–568. [Google Scholar] [CrossRef]
  106. Torre, D.; Abualhaija, S.; Sabetzadeh, M.; Briand, L.; Baetens, K.; Goes, P.; Forastier, S. An AI-assisted approach for checking the completeness of privacy policies against GDPR. In Proceedings of the 2020 IEEE 28th International Requirements Engineering Conference (RE), Zurich, Switzerland, 31 August–4 September 2020; IEEE: New York, NY, USA, 2020; pp. 136–146. [Google Scholar]
  107. Sartor, G.; Lagioia, F. The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence; European Parliament: Bruxelles, Belgium, 2020. [Google Scholar]
  108. Korshunov, P.; Marcel, S. Deepfake detection: Humans vs. machines. arXiv 2020, arXiv:2009.03155. [Google Scholar]
  109. Zhu, K.; Wu, B.; Wang, B. Deepfake detection with clustering-based embedding regularization. In Proceedings of the 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC), Hong Kong, China, 27–29 July 2020; IEEE: New York, NY, USA, 2020; pp. 257–264. [Google Scholar]
  110. Strickland, E. Facebook takes on deepfakes. IEEE Spectr. 2019, 57, 40–57. [Google Scholar] [CrossRef]
  111. Luusua, A.; Ylipulli, J. Nordic cities meet artificial intelligence: City officials’ views on artificial intelligence and citizen data in Finland. In Proceedings of the 10th International Conference on Communities & Technologies-Wicked Problems in the Age of Tech, Seattle, WA, USA, 20–25 June 2021; pp. 51–60. [Google Scholar]
  112. Ourdedine, K. General Perception of Artificial Intelligence and Impacts on the Financial Sector in Finland. 2019. Available online: https://www.theseus.fi/handle/10024/170726 (accessed on 11 October 2024).
Figure 1. The figure provides an overview of the different advantages and threats of using AI in elections, political campaigns, and electoral management.
Figure 2. Overview of the Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF). This diagram illustrates the six major components of the B-DAVF: (1) content creation, (2) registering the asset, (3) tracking modifications, (4) storing the provenance, (5) verification process, and (6) flagging and reporting.
Figure 3. A visual representation of countermeasures against AI threats. This diagram outlines key strategies to mitigate AI risks. The main categories include Regulatory Measures, Technological Solutions, Public Awareness and Education, and Suggestions for Policymakers and Researchers. Each category is further broken down into specific actions to mitigate the potential risks posed by AI in elections.
Table 1. Real-world examples of AI misuse in recent elections.
| Country | Year | Election Type | AI Misuse Example | Technical Aspects | Impact on Democracy |
|---|---|---|---|---|---|
| India [20] | 2019 | General Elections | AI-generated fake news and doctored videos | Natural Language Processing (NLP) and image-editing tools | Stoked communal tensions; influenced voter sentiment |
| United Kingdom [40] | 2019 | General Elections | AI-generated articles and deepfakes | Deep learning techniques for text generation and video manipulation | Swayed public opinion; spread confusion |
| Brazil [22] | 2020 | Municipal Elections | Automated bots spreading disinformation | Social media bots automated via AI algorithms | Manipulated public opinion; undermined trust in the process |
| South Africa [41] | 2021 | Local Government Elections | AI-enhanced targeted propaganda | Sentiment analysis and micro-targeting based on user data | Exacerbated political divisions; influenced voter behavior |
| Kenya [42] | 2022 | General Elections | Social media bots and fake news distribution | Algorithmic amplification of specific narratives | Influenced electoral outcomes; increased political tension |
| Philippines [24] | 2022 | Presidential Elections | AI-driven targeted advertising | Data-mining techniques for voter profiling | Misled political messages tailored to voter data |
| Italy [43] | 2023 | Parliamentary Elections | Deepfake videos targeting politicians | Advanced deep learning models for facial and voice mimicry | Damaged reputations; misled voters |
| Turkey [44] | 2024 | Local Government Elections | Deepfake videos targeting politicians | Advanced deep learning models for facial and voice mimicry | Misled voters |
| United States [19] | 2024 | Presidential Election | Deepfake video of former President Donald Trump | Utilized machine learning to synthesize realistic videos | Misled the public; influenced perceptions |
| United States [45] | 2024 | Presidential Election | Doctored image of US Secret Service agents | Utilized image-editing tools | Misled the public; influenced perceptions |
Table 2. Comparison of this article with previous research in the field of AI and political campaigns. The symbol ✓ indicates that the present study addresses the corresponding topic, while the symbol × signifies that it does not. This table highlights the areas of overlap and the gaps in the literature regarding the use of AI in political campaigns, disinformation risks, and related ethical considerations.
Compared works: Chen et al. [47]; Maria et al. [25]; Pariser et al. [48]; Engin et al. [49]; Cadwalladr et al. [50]; Zhou et al. [51]; Anand et al. [52]; Stepien et al. [53]; Mayank et al. [54]; Brundage et al. [13]; and this article. Comparison dimensions: AI campaigns, disinformation, deepfakes, polarization, frameworks, blockchain, education, and countermeasures.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
