AI Threats to Politics, Elections, and Democracy: A Blockchain-Based Deepfake Authenticity Verification Framework
Abstract
1. Introduction
1. We explore how AI technologies can influence political processes and elections, specifically through the dissemination of false information and biased narratives that can skew voter decisions and disrupt democratic processes.
2. We detail the methods by which AI can sway elections, including the use of generative AI to create misleading content (fake narratives, images, videos, and voice clones) that can undermine political opposition, manipulate public perception, and deepen political polarization.
3. We present a Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF) that establishes a systematic approach to verifying the authenticity of digital assets, using blockchain technology for the detection of deepfakes.
4. We propose comprehensive countermeasures, spanning regulatory, technological, and educational measures, to counteract the negative impacts of AI on elections.
2. Literature Review
3. Use of AI in Elections and Politics
3.1. AI Threats and Challenges to Elections and Democracy
3.1.1. Polarization
- Similar effects have been observed on YouTube, where the recommendation algorithm often promotes extreme and controversial videos [67]. Studies have found that users who start with relatively neutral political content can quickly be led to more radical viewpoints through the platform’s recommendations [68].
3.1.2. Deepfakes and Disinformation
3.1.3. Propaganda, Bias, and Campaigns
- Increased polarization: AI-driven amplification of divisive content contributes to a polarized public discourse, making it difficult to achieve consensus on important issues [78].
- Manipulation of voter behavior: AI-enabled micro-targeting and psychological profiling can manipulate voter behavior in ways that are difficult to detect and counteract. This manipulation can alter the outcome of elections by swaying undecided voters or suppressing voter turnout among certain demographics [81,82].
4. Blockchain-Based Deepfake Authenticity Verification Framework (B-DAVF)
4.1. Content Creation
4.2. Registering the Asset
- Inputs: The function RegisterAsset takes the digital asset content (C), the creator’s public key (P), the title (T), the creation date (D), and a brief description (Desc).
- Generate Hash: A unique hash of the digital content is generated using a hashing function GenerateHash.
- Create Metadata: A metadata dictionary is constructed containing the asset’s title, creator’s public key, creation date, description, and the content hash.
- Get Blockchain Address: The creator’s public key is used to retrieve the corresponding blockchain address.
- Store on Blockchain: The metadata are then securely stored on the blockchain at the creator’s address; this storage operation returns a transaction hash.
- Return Transaction Hash: Finally, the transaction hash is returned, confirming the registration of the asset.
Algorithm 1 Registering an asset on the blockchain
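The registration steps above can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the in-memory `LEDGER` dictionary stands in for an actual blockchain, the address-derivation helper is hypothetical, and SHA-256 is one plausible choice for the paper's `GenerateHash` function.

```python
import hashlib
import json

# In-memory stand-in for a blockchain ledger; a real deployment would
# issue a smart-contract transaction instead of writing to this dict.
LEDGER = {}

def generate_hash(content: bytes) -> str:
    # GenerateHash from the paper, realized here as SHA-256.
    return hashlib.sha256(content).hexdigest()

def get_blockchain_address(public_key: str) -> str:
    # Hypothetical derivation: hash the public key to obtain an address.
    return "0x" + hashlib.sha256(public_key.encode()).hexdigest()[:40]

def register_asset(content: bytes, public_key: str, title: str,
                   creation_date: str, description: str) -> str:
    """RegisterAsset(C, P, T, D, Desc) -> transaction hash."""
    content_hash = generate_hash(content)            # Generate Hash
    metadata = {                                     # Create Metadata
        "title": title,
        "creator": public_key,
        "date": creation_date,
        "description": description,
        "content_hash": content_hash,
        "modifications": [],
    }
    address = get_blockchain_address(public_key)     # Get Blockchain Address
    LEDGER[address] = metadata                       # Store on Blockchain
    # Return Transaction Hash (here: a hash of the stored metadata).
    return generate_hash(json.dumps(metadata, sort_keys=True).encode())
```

A creator would call `register_asset` once at publication time; the returned transaction hash serves as the receipt proving when and by whom the asset was registered.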
4.3. Tracking Modifications
4.4. Storing the Provenance
4.5. Verification Process
- Inputs: The function VerifyAsset takes the digital asset content (C’) and the creator’s public key (P).
- Generate Hash: A hash (H’) of the incoming digital asset content is generated using a hash function (GenerateHash).
- Get Blockchain Address: The blockchain address associated with the creator’s public key is retrieved.
- Retrieve Metadata: The algorithm retrieves the stored metadata from the blockchain using the creator’s blockchain address.
- Check Metadata: The algorithm inspects the retrieved metadata and proceeds as follows:
  - If metadata are found, it checks whether the stored content hash matches the generated hash (H’).
  - If the hashes match, the asset is deemed authentic and a valid message is output.
  - If the hashes do not match, the asset has been tampered with.
- No Metadata Found: If no metadata are found for the creator, the verification is declared invalid.
- Return Verification Result: Finally, it returns the verification result (R).
Algorithm 2 Verifying an asset on the blockchain
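The verification steps can likewise be sketched in Python. As in the registration sketch, the pre-populated `LEDGER` dictionary, the helper names, and SHA-256 as `GenerateHash` are illustrative assumptions rather than the paper's specification.

```python
import hashlib

def generate_hash(content: bytes) -> str:
    # SHA-256 as a stand-in for the paper's GenerateHash function.
    return hashlib.sha256(content).hexdigest()

def get_blockchain_address(public_key: str) -> str:
    # Hypothetical public-key-to-address derivation.
    return "0x" + hashlib.sha256(public_key.encode()).hexdigest()[:40]

# Ledger stub: address -> metadata, as populated at registration time.
LEDGER = {
    get_blockchain_address("alice-pubkey"): {
        "content_hash": generate_hash(b"original video bytes"),
    }
}

def verify_asset(content: bytes, public_key: str) -> str:
    """VerifyAsset(C', P) -> verification result R."""
    h = generate_hash(content)                        # Generate Hash (H')
    address = get_blockchain_address(public_key)      # Get Blockchain Address
    metadata = LEDGER.get(address)                    # Retrieve Metadata
    if metadata is None:                              # No Metadata Found
        return "invalid: no registration found for this creator"
    if metadata["content_hash"] == h:                 # Hashes match
        return "valid: asset is authentic"
    return "invalid: asset has been tampered with"    # Hash mismatch
```

Because the comparison uses only the hash stored on-chain, a verifier never needs the original file itself, only the candidate content and the claimed creator's public key.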
4.6. Flagging and Reporting
5. Framework Evaluation—Case Study: The “Fake News” Election Video
5.1. Algorithm: VerifyDeepfakeVideo
- Input Parameters: The algorithm takes the circulating video content (C’) and the creator’s public key (P).
- Generate Hash: The hash (H’) of the circulating video is computed using a hash function (GenerateHash).
- Retrieve Blockchain Address: The blockchain address associated with the creator’s public key is retrieved.
- Retrieve Metadata: The algorithm attempts to fetch the stored metadata (including the original content hash and modification history) from the blockchain.
- Check for Original Asset: If no metadata are found (StoredMetadata is null), the original asset does not exist on the blockchain. The asset is flagged as a potential deepfake, and the result reflects that no authentic original was registered.
- Hash Comparison: If metadata are found, the algorithm compares the generated hash (H’) with the stored content hash and proceeds as follows:
  - If they match, the asset is deemed authentic and no flag is raised.
  - If they do not match, the asset is flagged as a potential deepfake due to the indication of tampering.
- Modification History Check: If a modification history is present in the metadata, the asset is also flagged as a potential deepfake.
- Return Results: The algorithm returns both the verification result (R) and the flag status (F).
Algorithm 3 Verifying a deepfake video
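For the case-study scenario, the combined check (missing registration, hash mismatch, or a modification history) can be sketched as below. The ledger stub, helper names, and the choice of SHA-256 are again assumptions for illustration only.

```python
import hashlib

def generate_hash(content: bytes) -> str:
    # SHA-256 standing in for the paper's GenerateHash function.
    return hashlib.sha256(content).hexdigest()

def get_blockchain_address(public_key: str) -> str:
    # Hypothetical public-key-to-address derivation.
    return "0x" + hashlib.sha256(public_key.encode()).hexdigest()[:40]

# Ledger stub holding one registered original video.
LEDGER = {
    get_blockchain_address("campaign-pubkey"): {
        "content_hash": generate_hash(b"official speech video"),
        "modifications": [],
    }
}

def verify_deepfake_video(video: bytes, public_key: str):
    """VerifyDeepfakeVideo -> (verification result R, flag status F)."""
    h = generate_hash(video)                             # Generate Hash
    metadata = LEDGER.get(get_blockchain_address(public_key))
    if metadata is None:                                 # Check for Original Asset
        return "no authentic original registered", True
    if metadata["content_hash"] != h:                    # Hash Comparison
        return "hash mismatch: possible tampering", True
    if metadata.get("modifications"):                    # Modification History Check
        return "modification history present", True
    return "authentic", False                            # Return Results (R, F)
```

In the "fake news" election-video scenario, a circulating clip that was never registered, or whose hash diverges from the registered original, immediately yields a raised flag (F = True) that downstream reporting mechanisms can act on.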
5.2. Applications of the B-DAVF Beyond Political Contexts
6. Other Countermeasures Against AI Threats
6.1. Regulatory Measures
- Transparency Requirements: Mandating the disclosure of AI use in political campaigns can help ensure that voters are aware of how AI technologies are being used to influence them. Campaigns should be required to disclose the types of AI technologies they use, the data sources they rely on, and the nature of the targeted messages [87,88].
6.2. Technological Solutions
- Strengthening Cybersecurity: Strengthening the cybersecurity of election systems is crucial to protect against AI-powered cyberattacks [91]. This includes the use of encryption, intrusion detection systems, and secure authentication protocols.
6.3. Public Awareness and Education
- Public Awareness Campaigns: Governments, NGOs, and tech companies should collaborate to launch public awareness campaigns about the risks posed by AI in elections. These campaigns can use various media channels to reach a broad audience and promote critical thinking [94].
6.4. Suggestions for Policymakers and Researchers
- Develop Comprehensive Legislation: Policymakers should focus on creating laws that address the specific challenges posed by AI in elections. This includes regulations on data usage, transparency, and ethical considerations [95].
- Enhance International Cooperation: Given the global nature of AI technologies, international cooperation is essential. Policymakers should work together to develop harmonized regulations and share best practices to combat AI threats to democracy [96].
- Establish Oversight Bodies: Independent oversight bodies should be established to monitor the use of AI in political campaigns, enforce regulations, and investigate violations [97].
- Analyze AI’s Impact on Democracy: Conducting research on the long-term effects of AI on democratic processes can provide valuable insights into how AI technologies influence voter behavior and public trust [100]. This research can inform the development of more effective countermeasures.
7. Discussion—Case Studies
Verification and Historical Context of Electoral Challenges
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Fetzer, J.H. What Is Artificial Intelligence? Springer: New York, NY, USA, 1990. [Google Scholar]
- PK, F.A. What is Artificial Intelligence? In Success Is No Accident. It Is Hard Work, Perseverance, Learning, Studying, Sacrifice and Most of All, Love of What You Are Doing or Learning to Do; L’ Ordine Nuovo Publication: New Delhi, India, 1984; Volume 65, Available online: https://core.ac.uk/download/pdf/523285678.pdf#page=76 (accessed on 11 October 2024).
- Wang, H. Proving theorems by pattern recognition I. Commun. ACM 1960, 3, 220–234. [Google Scholar] [CrossRef]
- Wang, H. Computer theorem proving and artificial intelligence. In Computation, Logic, Philosophy: A Collection of Essays; Springer Science & Business Media: New York, NY, USA, 1990; pp. 63–75. [Google Scholar]
- Finn, P.; Bell, L.C.; Tatum, A.; Leicht, C.V. Assessing ChatGPT as a tool for research on US state and territory politics. Political Stud. Rev. 2024, 14789299241268652. Available online: https://journals.sagepub.com/doi/abs/10.1177/14789299241268652 (accessed on 11 October 2024). [CrossRef]
- Puggioni, R. Coming out as undocumented: Identity celebrations and political change. Societies 2024, 14, 130. [Google Scholar] [CrossRef]
- Wu, T.; He, S.; Liu, J.; Sun, S.; Liu, K.; Han, Q.L.; Tang, Y. A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA J. Autom. Sin. 2023, 10, 1122–1136. [Google Scholar] [CrossRef]
- Rozado, D. The political biases of ChatGPT. Soc. Sci. 2023, 12, 148. [Google Scholar] [CrossRef]
- Dommett, K. Data-driven political campaigns in practice: Understanding and regulating diverse data-driven campaigns. Internet Policy Rev. 2019, 8, 7. [Google Scholar] [CrossRef]
- Sandoval-Almazan, R.; Valle-Cruz, D. Facebook impact and sentiment analysis on political campaigns. In Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age, Delft, The Netherlands, 30 May–1 June 2018; pp. 1–7. [Google Scholar]
- Vlados, C.M. The Current Evolution of International Political Economy: Exploring the New Theoretical Divide between New Globalization and Anti-Globalization. Societies 2024, 14, 135. [Google Scholar] [CrossRef]
- Kang, M. A Study of Chatbot Personality based on the Purposes of Chatbot. J. Korea Contents Assoc. 2018, 18, 319–329. [Google Scholar]
- Brundage, M.; Avin, S.; Wang, J.; Krueger, G. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv 2018, arXiv:1802.07228. [Google Scholar]
- Irfan, M.; Ali, S.T.; Ijlal, H.S.; Muhammad, Z.; Raza, S. Exploring The Synergistic Effects of Blockchain Integration with IOT and AI for Enhanced Transparency and Security in Global Supply Chains. Int. J. Contemp. Issues Soc. Sci 2024, 3, 1326–1338. [Google Scholar]
- Yankoski, M.; Weninger, T.; Scheirer, W. An AI early warning system to monitor online disinformation, stop violence, and protect elections. Bull. At. Sci. 2020, 76, 85–90. [Google Scholar] [CrossRef]
- Fiaz, F.; Sajjad, S.M.; Iqbal, Z.; Yousaf, M.; Muhammad, Z. MetaSSI: A Framework for Personal Data Protection, Enhanced Cybersecurity and Privacy in Metaverse Virtual Reality Platforms. Future Internet 2024, 16, 176. [Google Scholar] [CrossRef]
- Micha, E.; Shah, N. Can We Predict the Election Outcome from Sampled Votes? In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 2176–2183. [Google Scholar]
- Arshad, J.; Talha, M.; Saleem, B.; Shah, Z.; Zaman, H.; Muhammad, Z. A Survey of Bug Bounty Programs in Strengthening Cybersecurity and Privacy in the Blockchain Industry. Blockchains 2024, 2, 195–216. [Google Scholar] [CrossRef]
- Łabuz, M.; Nehring, C. On the way to deep fake democracy? Deep fakes in election campaigns in 2023. Eur. Political Sci. 2024, 1–20. [Google Scholar] [CrossRef]
- Bali, A.; Desai, P. Fake news and social media: Indian perspective. Media Watch 2019, 10, 737–750. [Google Scholar] [CrossRef]
- Christou, A. Theorising Pandemic Necropolitics as Evil: Thinking Inequalities, Suffering, and Vulnerabilities with Arendt. Societies 2024, 14, 171. [Google Scholar] [CrossRef]
- Benevenuto, F.; Melo, P. Misinformation Campaigns through WhatsApp and Telegram in Presidential Elections in Brazil. Commun. ACM 2024, 67, 72–77. [Google Scholar] [CrossRef]
- Kazim, M.; Pirim, H.; Shi, S.; Wu, D. Multilayer analysis of energy networks. Sustain. Energy, Grids Netw. 2024, 39, 101407. [Google Scholar] [CrossRef]
- Kim-Leffingwell, S.; Sallenback, E. Mnemonic politics among Philippine voters: A social media measurement approach. Democratization 2024, 1–23. [Google Scholar] [CrossRef]
- Pawelec, M. Deepfakes and democracy (theory): How synthetic audio-visual media for disinformation and hate speech threaten core democratic functions. Digit. Soc. 2022, 1, 19. [Google Scholar] [CrossRef]
- Coeckelbergh, M. The Political Philosophy of AI: An Introduction; John Wiley & Sons: New York, NY, USA, 2022. [Google Scholar]
- Pope, A.E. Cyber-securing our elections. J. Cyber Policy 2018, 3, 24–38. [Google Scholar] [CrossRef]
- Nazir, A.; Iqbal, Z.; Muhammad, Z. ZTA: A Novel Zero Trust Framework for Detection and Prevention of Malicious Android Applications. 2024. Available online: https://www.researchsquare.com/article/rs-4464369/v1 (accessed on 11 October 2024).
- Overton, S. Overcoming Racial Harms to Democracy from Artificial Intelligence. Iowa Law Rev. 2024. Forthcoming. [Google Scholar]
- Cupać, J.; Sienknecht, M. Regulate against the machine: How the EU mitigates AI harm to democracy. Democratization 2024, 31, 1067–1090. [Google Scholar] [CrossRef]
- Rosenfeld, S. Democracy and Truth: A Short History; University of Pennsylvania Press: Philadelphia, PA, USA, 2018. [Google Scholar]
- Porpora, D.; Sekalala, S. Truth, communication, and democracy. Int. J. Commun. 2019, 13, 18. [Google Scholar]
- Rosenbach, E.; Mansted, K. Can Democracy Survive in the Information Age? Belfer Center for Science and International Affairs: Cambridge, MA, USA, 2018; Volume 30. [Google Scholar]
- Saleem, B.; Ahmed, M.; Zahra, M.; Hassan, F.; Iqbal, M.A.; Muhammad, Z. A survey of cybersecurity laws, regulations, and policies in technologically advanced nations: A case study of Pakistan to bridge the gap. Int. Cybersecur. Law Rev. 2024, 5, 533–561. [Google Scholar] [CrossRef]
- Du-Harpur, X.; Watt, F.; Luscombe, N.; Lynch, M. What is AI? Applications of artificial intelligence to dermatology. Br. J. Dermatol. 2020, 183, 423–430. [Google Scholar] [CrossRef]
- Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45. [Google Scholar] [CrossRef]
- Liu, X.Y.; Wang, G.; Yang, H.; Zha, D. Fingpt: Democratizing internet-scale data for financial large language models. arXiv 2023, arXiv:2307.10485. [Google Scholar]
- Wei, Z.; Xu, X.; Hui, P. Digital Democracy at Crossroads: A Meta-Analysis of Web and AI Influence on Global Elections. In Proceedings of the Companion Proceedings of the ACM on Web Conference 2024, Singapore, 13–17 May 2024; pp. 1126–1129. [Google Scholar]
- Javed, M.S.; Sajjad, S.M.; Mehmood, D.; Mansoor, K.; Iqbal, Z.; Kazim, M.; Muhammad, Z. Analyzing Tor Browser Artifacts for Enhanced Web Forensics, Anonymity, Cybersecurity, and Privacy in Windows-Based Systems. Information 2024, 15, 495. [Google Scholar] [CrossRef]
- Bakir, V.; Laffer, A.; McStay, A.; Miranda, D.; Urquhart, L. On Manipulation by Emotional AI: UK Adults’ Views and Governance Implications. Front. Sociol. 2024, 9, 1339834. [Google Scholar] [CrossRef]
- Masombuka, M.; Duvenage, P.; Watson, B. A Cybersecurity Imperative on an Electronic Voting System in South Africa-2024 and Beyond. In Proceedings of the ICCWS 2021 16th International Conference on Cyber Warfare and Security, Cookeville, TN, USA, 25–26 February 2021; Academic Conferences Limited: Oxfordshire, UK, 2021; p. 204. [Google Scholar]
- Maweu, J.M. “Fake elections”? Cyber propaganda, disinformation and the 2017 general elections in Kenya. Afr. J. Stud. 2019, 40, 62–76. [Google Scholar] [CrossRef]
- Martella, A.; Roncarolo, F. Giorgia Meloni in the spotlight. Mobilization and competition strategies in the 2022 Italian election campaign on Facebook. Contemp. Ital. Politics 2023, 15, 88–102. [Google Scholar] [CrossRef]
- Fears of AI Disinformation Cast Shadow over Turkish Local Elections. 2024. Available online: https://www.aljazeera.com/news/2024/3/28/fears-ai-disinformation-cast-shadow-over-turkish-local-elections (accessed on 21 July 2024).
- Posts Use Altered Image of Secret Service Agents following Trump Shooting. 2024. Available online: https://www.factcheck.org/2024/07/posts-use-altered-image-of-secret-service-agents-following-trump-shooting/ (accessed on 21 July 2024).
- Tomić, Z.; Damnjanović, T.; Tomić, I. Artificial intelligence in political campaigns. South East. Eur. J. Commun. 2023, 5, 17–28. [Google Scholar] [CrossRef]
- Yu, C. How Will AI Steal Our Elections? Center for Open Science: Charlottesville, VA, USA, 2024. [Google Scholar]
- Pariser, E. The Filter Bubble: What the Internet is Hiding from You; Penguin Press: London, UK, 2011. [Google Scholar]
- Bozdag, E. Bias in algorithmic filtering and personalization. Ethics Inf. Technol. 2013, 15, 209–227. [Google Scholar] [CrossRef]
- Cadwalladr, C.; Graham-Harrison, E. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Guardian 2018, 17, 22. [Google Scholar]
- Zhou, Z.; Makse, H. Artificial intelligence for elections: The case of 2019 Argentina primary and presidential election. arXiv 2019, arXiv:1910.11227. [Google Scholar] [CrossRef]
- Chennupati, A. The threat of artificial intelligence to elections worldwide: A review of the 2024 landscape. World J. Adv. Eng. Technol. Sci. 2024, 12, 29–34. [Google Scholar] [CrossRef]
- Stepien-Zalucka, B. AI-voting?: A few words about the role of algorithms in elections. In Artificial Intelligence and Human Rights; Dykinson: Madrid, Spain, 2021; pp. 117–128. Available online: https://www.torrossa.com/en/resources/an/5109967 (accessed on 11 October 2024).
- Tomar, M.; Raj, N.; Singh, S.; Marwaha, S.; Tiwari, M. The Role of AI-driven Tools in Shaping the Democratic Process: A Study of Indian Elections and Social Media Dynamics. Ind. Eng. J. 2023, 52, 143–153. [Google Scholar]
- Voigt, P.; Von dem Bussche, A. The eu general data protection regulation (gdpr). In A Practical Guide, 1st ed.; Springer International Publishing: Cham, Switzerland, 2017; Volume 10, pp. 10–5555. [Google Scholar]
- Kingston, J. Using artificial intelligence to support compliance with the general data protection regulation. Artif. Intell. Law 2017, 25, 429–443. [Google Scholar] [CrossRef]
- Labu, M.R.; Ahammed, M.F. Next-Generation Cyber Threat Detection and Mitigation Strategies: A Focus on Artificial Intelligence and Machine Learning. J. Comput. Sci. Technol. Stud. 2024, 6, 179–188. [Google Scholar] [CrossRef]
- Muneer, S.; Farooq, U.; Athar, A.; Ahsan Raza, M.; Ghazal, T.M.; Sakib, S. A Critical Review of Artificial Intelligence Based Approaches in Intrusion Detection: A Comprehensive Analysis. J. Eng. 2024, 2024, 3909173. [Google Scholar] [CrossRef]
- Madsen, J.K. The Psychology of Micro-Targeted Election Campaigns; Springer: New York, NY, USA, 2019. [Google Scholar]
- Shahzad, F. Uses of Artificial Intelligence and Big Data for Election Campaign in Turkey. Master’s Thesis, Marmara Universitesi, Istanbul, Türkiye, 2021. [Google Scholar]
- Michael, T. General Election and the Study of the Future. J. Notariil 2018, 3, 130–136. [Google Scholar]
- Mustafa, Y.; Warka, M. Presidential Election and Vice President of the Republic of Indonesia Based on Pancasila Democratic Principles. JL Pol’y Glob. 2019, 88, 1. [Google Scholar]
- Ohagi, M. Polarization of autonomous generative AI agents under echo chambers. arXiv 2024, arXiv:2402.12212. [Google Scholar]
- Thorson, K.; Cotter, K.; Medeiros, M.; Pak, C. Algorithmic inference, political interest, and exposure to news and politics on Facebook. Inf. Commun. Soc. 2021, 24, 183–200. [Google Scholar] [CrossRef]
- Bossetta, M. The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 US election. J. Mass Commun. Q. 2018, 95, 471–496. [Google Scholar] [CrossRef]
- Alvarez, G.; Choi, J.; Strover, S. Good news, bad news: A sentiment analysis of the 2016 election Russian facebook ads. Int. J. Commun. 2020, 14, 3027–3053. [Google Scholar]
- Yesilada, M.; Lewandowsky, S. Systematic review: YouTube recommendations and problematic content. Internet Policy Rev. 2022, 11. [Google Scholar] [CrossRef]
- Matamoros-Fernández, A.; Gray, J.E.; Bartolo, L.; Burgess, J.; Suzor, N. What’s “Up Next”? Investigating Algorithmic Recommendations on YouTube Across Issues and Over Time. Media Commun. 2021, 9, 234–249. [Google Scholar] [CrossRef]
- Chen, S. Artificial Intelligence in Democracy: Unraveling the Influence of Social Bots in Brexit through Cybernetics. Trans. Soc. Sci. Educ. Humanit. Res. 2024, 6, 324–329. [Google Scholar] [CrossRef]
- Risso, L. Harvesting your soul? Cambridge analytica and brexit. Brexit Means Brexit 2018, 2018, 75–90. [Google Scholar]
- Helmus, T.C. Artificial Intelligence, Deepfakes, and Disinformation; RAND Corporation: Santa Monica, CA, USA, 2022; pp. 1–24. [Google Scholar]
- Vaccari, C.; Chadwick, A. Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Soc. Media + Soc. 2020, 6, 2056305120903408. [Google Scholar] [CrossRef]
- Fraga-Lamas, P.; Fernandez-Carames, T.M. Fake news, disinformation, and deepfakes: Leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020, 22, 53–59. [Google Scholar] [CrossRef]
- Beyle, H.C. Determining the effect of propaganda campaigns. Ann. Am. Acad. Political Soc. Sci. 1935, 179, 106–113. [Google Scholar] [CrossRef]
- Haq, E.U.; Zhu, Y.; Hui, P.; Tyson, G. History in Making: Political Campaigns in the Era of Artificial Intelligence-Generated Content. In Proceedings of the Companion Proceedings of the ACM on Web Conference 2024, Singapore, 13–17 May 2024; pp. 1115–1118. [Google Scholar]
- Puri, A.; Keymolen, E. The Doors of Janus: A critical analysis of the socio-technical forces eroding trust in the Rule of Law. Cardozo Arts Entertain. Law J. 2024. Forthcoming. [Google Scholar]
- Battista, D. Political communication in the age of artificial intelligence: An overview of deepfakes and their implications. Soc. Regist. 2024, 8, 7–24. [Google Scholar] [CrossRef]
- Francescato, D. Globalization, artificial intelligence, social networks and political polarization: New challenges for community psychologists. Community Psychol. Glob. Perspect. 2018, 4, 20–41. [Google Scholar]
- Feldstein, S. The road to digital unfreedom: How artificial intelligence is reshaping repression. J. Democr. 2019, 30, 40–52. [Google Scholar] [CrossRef]
- Savaget, P.; Chiarini, T.; Evans, S. Empowering political participation through artificial intelligence. Sci. Public Policy 2019, 46, 369–380. [Google Scholar] [CrossRef]
- Howard, P.N.; Woolley, S.; Calo, R. Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. J. Inf. Technol. Politics 2018, 15, 81–93. [Google Scholar] [CrossRef]
- Kertysova, K. Artificial intelligence and disinformation: How AI changes the way disinformation is produced, disseminated, and can be countered. Secur. Hum. Rights 2018, 29, 55–81. [Google Scholar] [CrossRef]
- Hibbs, D.A. Mass Political Violence: A Cross-National Causal Analysis; Wiley: New York, NY, USA, 1973; Volume 253. [Google Scholar]
- Rébé, N. New Proposed AI Legislation. In Artificial Intelligence: Robot Law, Policy and Ethics; Brill Nijhoff: Leiden, The Netherlands, 2021; pp. 183–224. [Google Scholar]
- Floridi, L. The European legislation on AI: A brief analysis of its philosophical approach. Philos. Technol. 2021, 34, 215–222. [Google Scholar] [CrossRef]
- Chae, Y. US AI regulation guide: Legislative overview and practical considerations. J. Robot. Artif. Intell. Law 2020, 3, 17–40. [Google Scholar]
- Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamó-Larrieux, A. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc. 2019, 6, 2053951719860542. [Google Scholar] [CrossRef]
- Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamó-Larrieux, A. Towards transparency by design for artificial intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361. [Google Scholar] [CrossRef]
- Chaka, C. Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools. J. Appl. Learn. Teach. 2023, 6. [Google Scholar] [CrossRef]
- Weber-Wulff, D.; Anohina-Naumeca, A.; Bjelobaba, S.; Foltýnek, T.; Guerrero-Dib, J.; Popoola, O.; Šigut, P.; Waddington, L. Testing of detection tools for AI-generated text. Int. J. Educ. Integr. 2023, 19, 26. [Google Scholar] [CrossRef]
- Nadella, G.S.; Gonaygunta, H. Enhancing Cybersecurity with Artificial Intelligence: Predictive Techniques and Challenges in the Age of IoT. Available online: https://ijsea.com/archive/volume13/issue4/IJSEA13041007.pdf (accessed on 11 October 2024).
- Tiernan, P.; Costello, E.; Donlon, E.; Parysz, M.; Scriney, M. Information and Media Literacy in the Age of AI: Options for the Future. Educ. Sci. 2023, 13, 906. [Google Scholar] [CrossRef]
- Torok, M.; Calear, A.; Shand, F.; Christensen, H. A systematic review of mass media campaigns for suicide prevention: Understanding their efficacy and the mechanisms needed for successful behavioral and literacy change. Suicide Life-Threat. Behav. 2017, 47, 672–687. [Google Scholar] [CrossRef]
- Shalevska, E. The Future of Political Discourse: AI and Media Literacy Education. J. Leg. Political Educ. 2024, 1, 50–61. [Google Scholar] [CrossRef]
- Marinković, A.R. The New EU AI Act: A Comprehensive Legislation on AI or Just a Beginning? Glob. J. Bus. Integral Secur. 2023. Available online: http://gbis.ch/index.php/gbis/article/view/258 (accessed on 11 October 2024).
- Khan, A. The Intersection Of Artificial Intelligence And International Trade Laws: Challenges And Opportunities. IIUMLJ 2024, 32, 103. [Google Scholar] [CrossRef]
- Busuioc, M. AI algorithmic oversight: New frontiers in regulation. In Handbook of Regulatory Authorities; Edward Elgar Publishing: Cheltenham, UK, 2022; pp. 470–486. [Google Scholar]
- Salem, A.H.; Azzam, S.M.; Emam, O.; Abohany, A.A. Advancing cybersecurity: A comprehensive review of AI-driven detection techniques. J. Big Data 2024, 11, 105. [Google Scholar] [CrossRef]
- Beck, J.; Burri, T. From “human control” in international law to “human oversight” in the new EU act on artificial intelligence. In Research Handbook on Meaningful Human Control of Artificial Intelligence Systems; Edward Elgar Publishing: Cheltenham, UK, 2024; pp. 104–130. [Google Scholar]
- Holmes, W.; Persson, J.; Chounta, I.A.; Wasson, B.; Dimitrova, V. Artificial Intelligence and Education: A Critical View Through the Lens of Human Rights, Democracy and the Rule of Law; Council of Europe: Strasbourg, France, 2022. [Google Scholar]
- Su, J.; Ng, D.T.K.; Chu, S.K.W. Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Comput. Educ. Artif. Intell. 2023, 4, 100124. [Google Scholar] [CrossRef]
- Hristovska, A. Fostering media literacy in the age of ai: Examining the impact on digital citizenship and ethical decision-making. Журнал за медиуми и кoмуникации 2023, 2, 39–59. [Google Scholar]
- Fletcher, A.; McCulloch, K.; Baulk, S.D.; Dawson, D. Countermeasures to driver fatigue: A review of public awareness campaigns and legal approaches. Aust. N. Z. J. Public Health 2005, 29, 471–476. [Google Scholar] [CrossRef]
- Porlezza, C. Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism. Communications 2023, 48, 370–394. [Google Scholar] [CrossRef]
- Loré, F.; Basile, P.; Appice, A.; de Gemmis, M.; Malerba, D.; Semeraro, G. An AI framework to support decisions on GDPR compliance. J. Intell. Inf. Syst. 2023, 61, 541–568. [Google Scholar] [CrossRef]
- Torre, D.; Abualhaija, S.; Sabetzadeh, M.; Briand, L.; Baetens, K.; Goes, P.; Forastier, S. An ai-assisted approach for checking the completeness of privacy policies against gdpr. In Proceedings of the 2020 IEEE 28th International Requirements Engineering Conference (RE), Zurich, Switzerland, 31 August–4 September 2020; IEEE: New York, NY, USA, 2020; pp. 136–146. [Google Scholar]
- Sartor, G.; Lagioia, F. The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence; European Parliament: Bruxelles, Belgium, 2020. [Google Scholar]
- Korshunov, P.; Marcel, S. Deepfake detection: Humans vs. machines. arXiv 2020, arXiv:2009.03155. [Google Scholar]
- Zhu, K.; Wu, B.; Wang, B. Deepfake detection with clustering-based embedding regularization. In Proceedings of the 2020 IEEE fifth international conference on data science in cyberspace (DSC), Hong Kong, China, 27–29 July 2020; IEEE: New York, NY, USA, 2020; pp. 257–264. [Google Scholar]
- Strickland, E. Facebook takes on deepfakes. IEEE Spectr. 2019, 57, 40–57. [Google Scholar] [CrossRef]
- Luusua, A.; Ylipulli, J. Nordic cities meet artificial intelligence: City officials’ views on artificial intelligence and citizen data in Finland. In Proceedings of the 10th International Conference on Communities & Technologies-Wicked Problems in the Age of Tech, Seattle, WA, USA, 20–25 June 2021; pp. 51–60. [Google Scholar]
- Ourdedine, K. General Perception of Artificial Intelligence and Impacts on the Financial Sector in Finland. 2019. Available online: https://www.theseus.fi/handle/10024/170726 (accessed on 11 October 2024).
| Country | Year | Election Type | AI Misuse Example | Technical Aspects | Impact on Democracy |
|---|---|---|---|---|---|
| India [20] | 2019 | General Elections | AI-generated fake news and doctored videos | Natural Language Processing (NLP) and image-editing tools | Stoked communal tensions; influenced voter sentiment |
| United Kingdom [40] | 2019 | General Elections | AI-generated articles and deepfakes | Deep learning techniques for text generation and video manipulation | Swayed public opinion; spread confusion |
| Brazil [22] | 2020 | Municipal Elections | Automated bots spreading disinformation | Social media bots automated via AI algorithms | Manipulated public opinion; undermined trust in the process |
| South Africa [41] | 2021 | Local Government Elections | AI-enhanced targeted propaganda | Sentiment analysis and micro-targeting based on user data | Exacerbated political divisions; influenced voter behavior |
| Kenya [42] | 2022 | General Elections | Social media bots and fake news distribution | Algorithmic amplification of specific narratives | Influenced electoral outcomes; increased political tension |
| Philippines [24] | 2022 | Presidential Elections | AI-driven targeted advertising | Data-mining techniques for voter profiling | Misled political messages tailored to voter data |
| Italy [43] | 2023 | Parliamentary Elections | Deepfake videos targeting politicians | Advanced deep learning models for facial and voice mimicry | Damaged reputations; misled voters |
| Turkey [44] | 2024 | Local Government Elections | Deepfake videos targeting politicians | Advanced deep learning models for facial and voice mimicry | Misled voters |
| United States [19] | 2024 | Presidential Election | Deepfake video of former President Donald Trump | Utilized machine learning to synthesize realistic videos | Misled the public; influenced perceptions |
| United States [45] | 2024 | Presidential Election | Doctored image of US Secret Service agents | Utilized image-editing tools | Misled the public; influenced perceptions |
| Authors | AI Campaigns | Disinformation | Deepfake | Polarization | Frameworks | Blockchain | Education | Countermeasures |
|---|---|---|---|---|---|---|---|---|
| Chen et al. [47] | ✓ | ✓ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ |
| Chen et al. [47] | ✓ | ✓ | ✓ | ✕ | ✓ | ✕ | ✕ | ✓ |
| Maria et al. [25] | ✕ | ✓ | ✓ | ✕ | ✕ | ✕ | ✕ | ✓ |
| Pariser et al. [48] | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✓ |
| Engin et al. [49] | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✓ |
| Cadwalladr et al. [50] | ✓ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | ✓ |
| Zhou et al. [51] | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | ✕ | ✕ |
| Anand et al. [52] | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | ✕ | ✕ |
| Stepien et al. [53] | ✓ | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ |
| Mayank et al. [54] | ✓ | ✓ | ✕ | ✕ | ✓ | ✕ | ✓ | ✓ |
| Brundage et al. [13] | ✓ | ✓ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ |
| This Article | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Islam, M.B.E.; Haseeb, M.; Batool, H.; Ahtasham, N.; Muhammad, Z. AI Threats to Politics, Elections, and Democracy: A Blockchain-Based Deepfake Authenticity Verification Framework. Blockchains 2024, 2, 458-481. https://doi.org/10.3390/blockchains2040020