Article

Investigating Online Mis- and Disinformation in Cyprus: Trends and Challenges

by Dimitrios Giomelakis, Costas Constandinides, Maria Noti and Theodora A. Maniou *
Department of Social and Political Sciences, University of Cyprus, P.O. Box 20537, CY-1678 Nicosia, Cyprus
* Author to whom correspondence should be addressed.
Journal. Media 2024, 5(4), 1590-1606; https://doi.org/10.3390/journalmedia5040099
Submission received: 19 September 2024 / Revised: 21 October 2024 / Accepted: 22 October 2024 / Published: 29 October 2024

Abstract: Information disorder constitutes a critical threat to the public sphere, posing significant challenges and negatively affecting society, public trust, and overall democratic stability. This article investigates the phenomenon of online mis- and disinformation in Cyprus, drawing on people’s perceptions of this topic as well as the characteristics that enable disinformation campaigns to go viral. The study explores news consumption habits, people’s concerns about the impact of online disinformation, exposure to false or misleading content, common sources, verification methods, and media literacy. Furthermore, the study aims to shed light on the phenomenon of online mis- and disinformation in Cyprus and identify users’ perspectives.

1. Introduction

In the digital media landscape, fake content, hoaxes, and false or misleading news have become part of daily life. Information disorder, a broad term referring to all harmful information that pollutes and disrupts the information environment, and the proliferation of online disinformation constitute a threat to the public sphere, posing significant challenges and negatively affecting society, public trust, and overall democratic stability (Wardle and Derakhshan 2017; Bayer et al. 2019; Freelon and Wells 2020; McKay and Tenove 2021). With the rise of social media, an increasing number of people consume news and information from these platforms rather than from traditional media (Newman et al. 2024). Social media facilitates the rapid spread of information, as users can easily share, comment on, and discuss news with others through decentralised and distributed networks (Benkler et al. 2018). However, news on these platforms is often less reliable than that from traditional media, as unverified and false content can spread quickly online without the professional oversight typically found in traditional journalism, undermining the foundations of informed public discourse (Rúas Araujo et al. 2022).
In recent years, many cases have emerged where misleading stories, disinformation, and false content—especially on social media—have spread online, deceiving millions of people worldwide. Media outlets, eager to stay ahead of news stories, have sometimes published these false narratives at the expense of robust verification and overall news quality (Newman 2011; Silverman 2016), in a situation where the online information and media economy largely relies on monetising opinion through advertising revenue (Giomelakis et al. 2021). However, this approach can harm the reputation of journalists and news media if the information is proven false. Thus, disinformation can contribute to the growing distrust in media organisations (Van Duyn and Collier 2018; Tandoc 2019), leading more people to seek out alternative, low-credibility channels.
Researchers from various fields, including cognitive science, computer science, and media studies, have investigated different aspects of the problem of disinformation in an effort to mitigate its impact on society (Bronstein et al. 2019; Papadopoulou et al. 2019a, 2019b; Giomelakis et al. 2021). For example, recent studies found that individuals who have previously shared conspiracy theories or pseudoscientific content are more likely to share news labelled as fake, and vice versa (Bertani et al. 2024; Maniou and Papa 2023). Another study highlights the correlation between content posted online by political figures and disinformation, noting that the online activity of these actors can create an environment that encourages the widespread dissemination of such content (Said-Hung et al. 2024). However, legacy media outlets in some European countries can also serve as channels of false information (Dragomir and Horowitz 2024). Other scholars emphasise the role of fact-checkers as gatekeepers in sifting through content on social networks and combating viral disinformation (Zecchinon and Standaert 2024), while a different study conducted in four European countries suggests that a society’s information resilience is shaped by a combination of structural attributes, the characteristics of its knowledge-distribution institutions, and the actions and skills of its citizens (Dragomir et al. 2024). The strengthened EC Code of Practice (CoP) on Disinformation, building on the first CoP from 2018 (European Commission 2022), is another significant step in this direction. It includes extensive and precise commitments from 34 signatories—such as major online platforms, tech companies, fact-checkers, and civil society.
Empirical research has increasingly emphasised that the phenomenon of disinformation and its negative consequences should be studied within the broader context of media ecosystems (Wardle and Derakhshan 2017; Benkler et al. 2018; Golovchenko et al. 2020). This article focuses on Cyprus, an understudied media ecosystem, drawing on people’s perceptions as well as the characteristics that enable disinformation campaigns to go viral. To better understand and assess this phenomenon, the study explores different aspects, including news consumption habits, people’s concerns about the impact of online disinformation, exposure to false or misleading content, common sources, verification methods, and media literacy. It aims to shed light on the phenomenon of online disinformation in Cyprus, identify users’ perspectives, and serve as a resource for future comparative studies with other countries.

2. Literature Review

2.1. The Spread of Disinformation and Fake Content Online

Disinformation forms part of the broader phenomenon of information disorder. Other types of information disorder include (mis)information, where false information is shared without the intent to cause harm, and (mal)information, where genuine information is shared in order to cause harm (Wardle and Derakhshan 2017). According to the EC, disinformation is defined as “verifiably false or misleading information that is created, presented, and disseminated for economic gain or to intentionally deceive the public and may cause public harm” (European Commission 2018). Disinformation and false information can take various forms. Bad actors may employ mixtures of authentic and fabricated content, manipulated visual content such as images and videos, rumours, conspiracy theories, imposter content, trolling, and other tactics (Jack 2017; Wardle and Derakhshan 2017; Tucker et al. 2018). Disinformation campaigns frequently aim to exacerbate social divisions and spread distrust while also promoting false beliefs (Wardle and Derakhshan 2017; Marwick and Lewis 2017; Arif et al. 2018). The phenomenon of spreading false or misleading information is not new; the term ‘disinformation’ has been widely used since the 1950s, and history demonstrates that speculation, rumours, and gossip have long been a part of society (Manning and Romerstein 2004; Rúas Araujo et al. 2020). For many decades, different groups, individuals, and governments have tried to influence public opinion by spreading fake or misleading information to change political views and achieve their goals (Shu et al. 2020). What distinguishes the current situation from the past, beyond the rapid advancement of new technologies and communication abundance, is the active role of audiences and the process of “produsage” (Bruns 2008), which blurs the lines between passive consumption and active creation.
Moreover, artificial intelligence (AI), particularly with the recent emergence of generative AI, can significantly impact the spread of online disinformation, adding a new dimension to this phenomenon. On the one hand, accessible and largely unregulated tools enable individuals to produce vast amounts of fake content and false information in various formats, such as images, videos, and audio files (Endert 2024). On the other hand, AI can also play an essential role in combating mis- and disinformation. AI-powered tools, with their ability to analyse patterns, context, and language, can assist in fact-checking and detecting false information (Li and Callegari 2024).
Disinformation is not solely driven by technology; it is also influenced by various socio-psychological factors and cognitive theories (Shu et al. 2017; Ciampaglia and Menczer 2018; Van der Linden 2023). For example, confirmation bias leads people to accept information, even if false, that aligns with and reinforces their existing values or beliefs (Nickerson 1998; Van der Linden 2023). Additionally, users, especially on social media, often follow others with similar viewpoints, resulting in the reception of content that supports their pre-existing narratives. They may also form groups where opinions become increasingly polarised, creating an echo chamber effect and fostering homogeneous communities (Figà Talamanca and Arfini 2022). In such environments, individuals are more likely to view a source as reliable if it is widely endorsed by others or comes from their social ‘circles’. They may also favour frequently repeated information, even if it is false or misleading (Shu et al. 2020). Furthermore, individuals are more likely to spread disinformation in uncertain situations (such as natural disasters or political events), when feeling emotionally overwhelmed or anxious, when the subject is important to them, and when they are unable to directly influence the situation through their actions (DiFonzo and Bordia 2007). They are more inclined to accept new information, even if false, as a way to resolve uncertainty. This uncertainty can trigger emotions like anger and anxiety, further contributing to the spread of false news and reducing people’s accuracy in sharing information (DiFonzo and Bordia 2007; Marcus 2017). In this context, false news and misleading content can act as a means to express emotions and provide tension relief (Allport and Postman 1947; Waddington 2012).
The harmful effects of disinformation can influence various aspects of our lives (Kapantai et al. 2021). For instance, in politics, it can lead to serious issues, ranging from the spread of propaganda to the manipulation of elections and attitudes towards democratic processes (Silverman 2016; Bayer et al. 2019; Freelon and Wells 2020). On a societal level, disinformation can create challenges, among others, affecting economic growth and public benefit (Chee 2020; Kapantai et al. 2021), while posing significant threats to both business owners and consumers (Valant 2015). In the fields of medicine and healthcare, disinformation commonly centres around topics such as vaccination, smoking, and nutrition (Jolley and Douglas 2014; Albarracin et al. 2018; Wang et al. 2019). Finally, in the media sector, disinformation affects journalists’ routines and may contribute to diminishing trust in the media (Van Duyn and Collier 2018; Tandoc 2019). In this context, detecting disinformation and verifying online content has emerged as a prominent research area attracting significant attention. Many studies have focused on finding ways to detect and combat the spread of disinformation and hoaxes, especially on social media (Yang et al. 2019; Papadopoulou et al. 2019a). Current methods can be divided into three main categories (Thurman et al. 2016; Shu et al. 2017): expert-oriented, where domain experts, including fact-checkers, examine data and fulfil a monitoring role; crowdsourcing-oriented methods, where individuals annotate content; and computational methods, where automated systems classify claims as true or false. The challenges of detecting disinformation can be categorised into two types: (a) content-related and (b) user-related challenges (Shu et al. 2020). Disinformation content often employs highly sensational language and extreme emotions to provoke a strong reaction from readers, making them more likely to engage with the post (Shu et al. 2017). 
As a result, posts with false content frequently go ‘viral’ and ‘trend’ on social media (Vosoughi et al. 2018). Social media users are vulnerable to disinformation and are often unaware of this vulnerability. In this context, a high level of media literacy and education is essential for credibility assessment and is regarded as one of the most effective strategies to combat mis- and disinformation (Goodman 2021; Potter 2022).
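As an illustration of the computational, content-oriented approach described above, a toy heuristic might screen posts for unusually sensational language. The sketch below is purely illustrative: the word list and threshold are invented for this example and do not correspond to any method used in the study, where real systems rely on trained classifiers rather than keyword lists.

```python
# Toy content-based screening heuristic: score a post by the share of
# sensational/emotive trigger words it contains. The word list and the
# 10% threshold are illustrative assumptions, not from the literature.
SENSATIONAL = {"shocking", "unbelievable", "secret", "exposed", "miracle",
               "outrage", "banned", "urgent", "terrifying"}

def sensationalism_score(text):
    """Fraction of words in the text that appear in the trigger list."""
    words = [w.strip(".,!?:;\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SENSATIONAL)
    return hits / len(words)

def flag_for_review(text, threshold=0.10):
    """Flag posts whose sensational-word share exceeds the threshold."""
    return sensationalism_score(text) > threshold

print(flag_for_review("SHOCKING secret exposed: the miracle cure they banned!"))  # True
print(flag_for_review("The council approved the annual budget on Tuesday."))      # False
```

In practice such a score would only be one weak signal among many; expert- and crowdsourcing-oriented methods remain necessary precisely because language cues alone are easy to evade.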

2.2. Disinformation in Cyprus

Limited research exists on disinformation in Cyprus, appearing predominantly in the form of country reports produced in the context of EU-co-funded projects such as The Media Pluralism Monitor1 (Christophorou and Karides 2024) and The Mediterranean Digital Observatory, also known as MedDMO2 (Giomelakis and Maniou 2023). In Cyprus, the impact of disinformation became increasingly evident during the COVID-19 pandemic, which further amplified the uncertainty already felt by society (Kouros et al. 2023; Petrikkos 2021).
The first Cyprus-based fact-checking initiative, Science Hoaxes3, comprising scientists, academics, and media professionals, grew out of this urgent need to prevent the spread of unverified information concerning the coronavirus disease. Scholarly work indicates that susceptibility to disinformation during the pandemic was heightened by the government’s struggle to manage both a legitimacy and a health crisis (Kouros et al. 2023); the treatment of the latter by the government and mainstream media was thus perceived by small movements opposing vaccines—in most cases influenced by far-right agendas—as a narrative intended to steer the public’s attention away from the issue of government corruption (ibid.).
The legitimacy crisis was caused by a series of reports published by Al Jazeera’s Investigative Unit titled “The Cyprus Papers,” aiming to shed light on alleged misconduct and controversial practices in the Cyprus citizenship-by-investment programme, also known as the ‘golden passports’ programme. Moreover, an article by PolitiFact’s chief correspondent Louis Jacobson (2023), which features Cyprus-based media experts and academics as well as fact-checkers, some associated with the newly founded Fact-Check Cyprus4 initiative and the MedDMO hub, underlines the island’s division5 as a factor that further increases fact-checking challenges. The same commentators also highlight the critical role of media literacy in combating disinformation, a point consistently emphasised by both researchers and state institutions tasked with the formal mission to implement relevant EU policies/directives, namely the Cyprus Radio Television Authority (CRTA) and the Cyprus Pedagogical Institute of the Ministry of Education, Sport, and Youth. For example, CRTA’s Antigoni Themistokleous (2019) argues that media literacy “should be viewed in a broader context in which information and digital literacies are inseparable”. Despite significant efforts by CRTA and educational institutions involved in media literacy activities6 to advocate for the integration of media education into the formal curriculum of primary and secondary education, further attempts to develop media literacy practices do not occur in the context of a state-supported strategy (Themistokleous 2023; Papaioannou and Themistokleous 2018). Although media literacy activities have expanded in recent years, including initiatives in state-funded film festivals aimed at helping children and young people appreciate the art of cinema, there remains a significant lack of state commitment to implementing a comprehensive media education policy.
Overall, while fact-checking initiatives such as Fact-Check Cyprus and activities that promote media literacy and news verification methods are encouraging, their potential to reach mainstream and online audiences/users is currently somewhat limited (Christophorou and Karides 2024; Giomelakis and Maniou 2023). The lack of commitment by Cypriot media to contribute to the development of a fact-checking culture is another significant drawback (Giomelakis and Maniou 2023), often attributed to the financial challenges domestic media organisations face (Christophorou and Karides 2024; Trimithiotis et al. 2024). A legal framework regarding the spread of fake news is currently being re-evaluated following concerns expressed by the Cyprus Union of Journalists and the Cyprus Media Ethics Committee that its complexities would hinder freedom of speech, leading experts to conclude that the country is “essentially unprepared” to tackle the challenges created by the proliferation of disinformation (Christophorou and Karides 2024, p. 27).
Potter (2022, p. 41) writes that existing ideas specific to media literacy are “indeed impressive in what they promise. But scholarly fields need to do more than promise; they need to create the knowledge that will deliver on those promises”. Recent accounts of disinformation in Cyprus paint an ineffective picture, yet the growth of activities that aim to bring together state institutions, academia, media professionals, and members of the public demonstrate a small step towards building a ‘do more than promise’ culture to counter disinformation.

3. Method

The study aims to shed light on the phenomenon of online disinformation in Cyprus, identify key characteristics from the users’ perspective, and serve as a resource for comparative studies with other countries, as it is a part of the MedDMO hub. It poses the following research questions:
RQ1: What is the degree of Cypriots’ exposure to false news, and on which platforms do they most frequently encounter it?
RQ1.1: In what form of content do they most often encounter false/misleading news, and in which news categories do they believe mis- and disinformation most commonly appear?
RQ2: How concerned are Cypriots about the prevalence of disinformation and its negative consequences on digital media, democracy, and society?
RQ2.1: How do they perceive the potential of AI to amplify online disinformation?
RQ3: How do Cypriots verify news content, if at all, and how do they believe individuals can contribute to the fight against online disinformation?
To answer these questions, an online survey was conducted in Cyprus during the first semester of 2024, following a pilot test in December 2023. The study received ethical clearance from the National Committee of Bio-Ethics in Cyprus (ref. number ΕΕΒΚ ΕΠ 2023.01.282). The questionnaire was developed by the research team of MedDMO based on relevant literature, as presented in the theoretical part of the study. The questionnaire was fully anonymised, and the potential participants were asked for their consent to participate in the study while having the option to withdraw at any time (for a detailed list of the questions, please refer to the Supplementary Materials). Cronbach’s alpha test was conducted to check the reliability of the questionnaire (average α = 0.83), and SPSS was used for the statistical analysis that follows.
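For readers unfamiliar with the reliability measure, Cronbach’s alpha can be computed directly from item responses. The sketch below is illustrative only: the study used SPSS, and the Likert responses in the example are hypothetical.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for internal-consistency reliability.

    responses: one row per respondent, each a list of item scores
    (all rows must have the same number of items).
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(responses[0])              # number of items
    items = list(zip(*responses))      # transpose: one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point Likert data: 4 respondents x 3 items
data = [[1, 2, 1], [2, 2, 3], [4, 3, 4], [5, 4, 4]]
print(round(cronbach_alpha(data), 2))  # → 0.92
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, so the reported average of 0.83 indicates a reliable instrument.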
The overall response rate to the questionnaire was 89%. The final sample consists of 292 interviewees (110 males, 175 females, and 7 other/prefer not to say) representing all age groups above 18 years old (see Table 1) and all five geographical regions of the Republic of Cyprus. As such, the overall sample can be considered representative with a standard deviation of <10%.
The data were collected utilising the snowball sampling methodology (Heckathorn and Cameron 2017). All interviewees were located in the Republic of Cyprus and either directly or indirectly affiliated with the MedDMO project. Initially, the questionnaire was sent to the list of MedDMO affiliates in Cyprus, who were then asked to send it to their list of affiliates in the country.

4. Results

4.1. RQ1

Nearly all respondents indicated that they were aware of the phenomenon of online misinformation and disinformation and recognised that some news published or posted online can be false or misleading. Supporting this, nearly 77% reported that they believed they had been exposed to such news content in the past. Regarding the frequency of encountering false news online (“How often do you believe that you are exposed to false news?”), 37% of respondents reported that they often encounter it, while around half (46%) reported encountering it sometimes, and 14% rarely (see Figure 1). On the other hand, most respondents (around 64%) stated that they had never knowingly shared a news story on social media that turned out to be false. It is worth noting that nearly 31% mentioned being unaware of having done so, and only 5.5% admitted to having knowingly shared false information. Among those who shared mis- or disinformation, the majority either deleted their post after realising the information was false or corrected it to point out that the news content was fake. In general, regarding sharing motives, 63% of respondents reported that they usually share an article because it makes them feel influential and helpful to others. Additionally, 52% stated that they share to encourage others to interact with the content and to gather their opinions.
Respondents were also asked to identify the platform where they most often encounter false news (Figure 2). Social media was by far the most prevalent response (74%), followed by news websites (11%) and TV stations (5.1%). Other traditional media, such as newspapers and radio, received lower numbers. Additionally, some responses mentioned encountering false news on messaging apps (e.g., Viber, Messenger, WhatsApp) or on-demand video services (e.g., Vimeo). Regarding their news consumption habits, when participants were asked about their usual actions upon encountering a news story that interests them, the responses revealed distinct patterns in their engagement with news content. 65% reported that they click on the post and visit the source to read the full story. In contrast, one in four (26%) stated that they simply read the post as displayed without further investigation.
To address the issue of the forms of content in which people most often encounter false or misleading news, respondents shared their experiences in consuming news. The majority (around 60%, see Figure 3) indicated that text is the format in which they most frequently encounter false news (e.g., clickbait articles/posts), followed by images (27%) and video content (11%) (e.g., AI-generated media and deepfakes). Furthermore, respondents were asked to identify the types of news where they believe mis- and disinformation typically occur. Politics and international relations was the most frequently cited category (75%), followed by lifestyle news (64%), business/economy news (37%), and health news (36%). Aside from lifestyle news, the other three categories are generally perceived to have a more immediate and direct impact on society. This aligns with the typical distinction between “hard news” and “soft news,” making them more vulnerable to mis- and disinformation. Interestingly, over 90% of respondents admitted that their interest in the news increases significantly during major events such as wars, pandemics, accidents, or natural disasters.

Demographics and Exposure to False News

An analysis of chi-square tests demonstrated a strong relationship (see Table 2) between the perceived sense of exposure to false news and demographic characteristics, specifically age (p-value = 0.015), level of education (p-value = 0.012), and work industry (p-value = 0.031). Regarding age, those between 21 and 50 had the most positive responses, indicating a higher perceived exposure to false news. In contrast, individuals under 20 and those between 61 and 70 had the most negative responses, believing they had never been exposed to false news. In terms of education level, positive responses were most common among individuals with a university degree (BA, MA, or PhD). Finally, when considering professional industry, the majority of positive responses came from people working in education, the media industry, and the public sector.
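The chi-square test of independence used here compares observed cell counts in a contingency table against the counts expected if the two variables were unrelated. A minimal sketch of the statistic follows; the analysis itself was run in SPSS, and the 2×2 table below is hypothetical, not the study’s data.

```python
def chi_square(table):
    """Pearson's chi-square statistic for a contingency table.

    table: rows of observed counts (e.g., age band x exposure yes/no).
    Expected counts assume independence:
        E_ij = row_total_i * col_total_j / grand_total
    Degrees of freedom are (rows - 1) * (cols - 1).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: exposure to false news (yes/no) by two age bands
observed = [[10, 20], [20, 10]]
print(round(chi_square(observed), 3))  # compare with 3.841, the critical value at df = 1, p = 0.05
```

A statistic above the critical value (equivalently, a p-value below 0.05, as for the age, education, and work-industry results reported above) leads to rejecting independence between the demographic variable and perceived exposure.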

4.2. RQ2

A five-level Likert scale was used to assess concerns about the prevalence of disinformation, its negative consequences on digital media, and its effect on democracy and society in general. The majority of participants from Cyprus (around 78%) reported being worried, acknowledging the risks this phenomenon can cause, with 33% extremely concerned and 45% moderately concerned (see Figure 4). About 17% were somewhat concerned, and only 4% were slightly concerned. These findings suggest a widespread awareness of the dangers posed by disinformation, highlighting the need for more efforts towards educational/training programmes and public awareness campaigns to combat its spread.
A prominent topic in contemporary discourse is the use and development of artificial intelligence (AI) and its impact on the information landscape. In this context, and as a follow-up to the previous topic, data from a related question (again using a five-level Likert scale) indicated that two-thirds of the respondents expressed concern about the negative impact of AI on the online media ecosystem (Figure 5). Around 67% reported being highly concerned that AI could exacerbate the spread of false or misleading information, worsening the phenomenon of information disorder. Nearly 22% were somewhat concerned, while 8.6% were slightly concerned, and 2% were not concerned at all. These results highlight the growing concern over AI’s role in spreading disinformation, suggesting a need for future regulations on its use in media and information sharing.

Demographics and the Impact on Participants’ Concerns

Again, an analysis of chi-square tests demonstrated a relationship between the perceived concern about the negative impact of online disinformation on society and demographic characteristics such as gender (p-value < 0.001) and work industry (p-value < 0.001). Similarly, the statistical analysis indicated an association between the same demographics, gender (p-value = 0.006) and work industry (p-value < 0.001), and the perceived concern about AI’s potential to exacerbate this phenomenon in today’s digital news ecosystem (Table 3). The responses showed that women were more worried about AI and the negative consequences of online disinformation. Additionally, individuals working in sectors such as education, the media industry, and public sector services also expressed greater concern about these issues.

4.3. RQ3

Participants were asked to report the methods used, if any, to verify a news story they found online (Figure 6). The most common method (around 40%) was verifying information by consulting various trustworthy news sources that they follow and read. It is important to note that a significant portion (one in four) mentioned that they conduct their own verification (e.g., by examining the source or author). However, approximately 15% reported that they do not usually verify news stories. Other responses included verifying information from experts they trust and follow (7.2%), visiting the website of a fact-checking organisation (5.8%), and using online tools or platforms for content verification, including image, audio, and video (4.1%). Additional responses included discussing the news with people from their social network/in-group, consulting with individuals knowledgeable about specific subjects, checking news stories on websites from different countries, and combining various methods depending on each case. The majority of respondents were unaware of any tools, techniques, or verification platforms to help them fact-check information themselves. In contrast, a few participants mentioned basic tools and methods, such as reverse image search and TinEye, or referred to established fact-checking organisations like Ellinika Hoaxes, factchecker.gr, FactCheck.org, and Politifact.com.
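One idea underlying the reverse image search tools mentioned by participants is perceptual hashing: an image is reduced to a compact fingerprint so that near-duplicates can be matched even after resizing or recompression. The sketch below shows the simple “average hash” variant on a synthetic grayscale grid; it is a minimal illustration, not the actual algorithm used by TinEye or similar services.

```python
# Perceptual "average hash" sketch: shrink an image to a tiny grayscale
# grid, then turn each pixel into a bit depending on whether it is
# brighter than the grid's average. Visually similar images yield
# hashes that differ in only a few bits (small Hamming distance).

def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g., an 8x8 downscaled image)."""
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    return [1 if v > avg else 0 for v in flat]

def hamming_distance(h1, h2):
    """Number of bit positions in which two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 8x8 "image" and a uniformly brightened copy of it
original = [[16 * i + j for j in range(8)] for i in range(8)]
brightened = [[v + 5 for v in row] for row in original]

print(hamming_distance(average_hash(original), average_hash(brightened)))  # → 0
```

Because uniform brightness changes shift every pixel and the average equally, the brightened copy hashes identically, which is precisely why such fingerprints are robust enough to find re-uploaded or lightly edited copies of a disputed image.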
Participants were also asked for their opinions on how individuals can contribute to the fight against disinformation. The majority suggested verifying information before sharing it (68%), followed by educating themselves and others (61%), and following and reading trustworthy news sources (57%). Additionally, around one-third (32%) mentioned that participating in digital and media literacy training activities is also important in addressing this issue. Other responses included using critical thinking and common sense, completely ignoring misleading and false content, and cultivating a well-rounded perspective.

5. Discussion and Conclusions

5.1. General Exposure to False News and Where People Encounter It

Overall, our results highlight the widespread presence of mis- and disinformation in the daily lives of Cypriots. Over three-quarters of respondents (77%) reported exposure to false or misleading content, and a significant portion (37%) claimed to encounter it regularly online. Interestingly, the analysis revealed that individuals aged 21 to 50, as well as those with a university education, reported greater awareness of exposure to false news compared to younger respondents under 20. These results align with other studies in the field suggesting that older adults are generally more aware of fake content and false information than younger people and that there is a positive relationship between levels of education and awareness of mis- and disinformation (e.g., Photiou and Maniou 2018; Pennycook and Rand 2019; Lan and Tung 2024). People with higher education levels are typically more discerning news consumers and are better at identifying false news (Khan and Idris 2019; Kumar and Geethakumari 2014; Pennycook and Rand 2019). Conversely, younger generations are often exposed to fake or misleading content on social media and struggle to differentiate between true and false information (Pennycook and Rand 2019; Ipsos 2021). Notably, participants working in fields like education or the media industry also reported a higher perceived exposure to false news. This can be explained by their frequent interaction with diverse information sources and the need for ongoing learning and critical engagement with online content, which makes them more aware of the risks of mis- and disinformation.
As expected, social media was the predominant platform where respondents encountered false information (74%), with news websites and television following at a significant distance. This aligns with the findings of a recent international survey, which revealed that 68% of internet users in 16 countries believed that social media is the primary source of widespread disinformation (Ipsos and UNESCO 2023). This view was broadly shared across different nations, age groups, political preferences, and varying backgrounds. Furthermore, it corresponds with an opinion survey commissioned by the Cyprus Union of Journalists (2023), which also found that social media is the platform where Cypriots most frequently encounter false stories. This is particularly concerning given that social media is also one of the most common channels for Cypriots to access information. According to a recent survey conducted by the European Parliament (Eurobarometer 2023), social media platforms were the most used medium to access news (70%), followed by television (62%), and online press and news platforms (52%).
Regarding the actions users typically take when a news story interests them, the results revealed varying levels of engagement when consuming news on social media platforms. A notable 26% said they read the post as displayed without further exploring the source, a risky habit given the prevalence of provocative or clickbait headlines and emotional language on social media. Such headlines are widely used by content creators in today's digital media ecosystem to capture readers' attention, evoke strong emotions, and generate engagement; they remain the primary point of contact with a news article in an era when a story can go viral on the strength of its headline alone. Yet headlines are often inaccurate or misleading, and the fast-paced social media environment encourages users to share articles based solely on their headlines without engaging with the full text, frequently producing misinterpreted or out-of-context stories. Because headlines are concise and direct, they create an immediate impression that shapes how readers process information about an article; in this context, psychologists have found that misleading headlines significantly impair readers' memory of news articles and their inferential reasoning (Ecker et al. 2014).
In terms of content form, most respondents (around 60%) indicated that they most frequently encounter false news as text, followed by images (27%) and videos (11%). Across news categories, politics and international relations was the category in which participants most often perceived disinformation (75%), followed by lifestyle news (64%), business/economy news (37%), and health news (36%). This pattern may reflect the fact that three of the four categories (all except lifestyle) fall within the "hard news" category and are generally perceived to have a more immediate and direct impact on society, making them more vulnerable to mis- and disinformation. Notably, regarding general news consumption habits, the vast majority of respondents (over 90%) reported a substantial increase in news consumption during significant events such as wars, pandemics, accidents, or natural disasters. This heightened interest, however, can make individuals more susceptible to online mis- and disinformation, as the urgency and emotional impact of such events lead people to seek and share information rapidly without thoroughly verifying its accuracy.

5.2. Concerns About the Negative Impact of Disinformation and AI

The vast majority of participants (78%) reported being moderately or extremely concerned about online disinformation and its negative impact on society. This concern was heightened by a strong perception that AI could exacerbate the disinformation landscape: approximately two-thirds (67%) expressed significant concern about AI's role in amplifying false or misleading content, highlighting public apprehension about the convergence of these two critical contemporary issues. Chi-square tests were conducted to explore potential relationships between demographic characteristics and concerns about the negative impact of online disinformation on society, as well as the potential for AI to worsen this issue. Notably, the analyses revealed statistically significant associations: both gender and work industry (i.e., professional background) were related to the level of concern about disinformation and AI. Women reported greater concern about AI and the negative effects of online disinformation. Additionally, individuals working in fields such as education or the media industry showed higher levels of concern, which is unsurprising given their regular exposure to diverse information sources and their better understanding of information dissemination and information disorder in general. The direct impact of disinformation on their work and on media/news trust, especially for media practitioners, further contributes to this concern (Hameleers et al. 2022; Lee 2024).
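A chi-square test of independence of the kind described above can be illustrated with a short, self-contained sketch. The contingency table below is hypothetical (invented counts for demonstration, not the study's data), and the critical value shown is the standard chi-square threshold for one degree of freedom at alpha = 0.05:

```python
# Illustrative chi-square test of independence on a hypothetical
# contingency table (gender x concern level). The counts are invented
# for demonstration purposes only.

def chi_square_independence(table):
    """Return the chi-square statistic and degrees of freedom for a
    contingency table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical counts: rows = women/men, columns = concerned/not concerned.
table = [[120, 30],
         [90, 60]]
stat, df = chi_square_independence(table)
critical = 3.841  # chi-square critical value for df = 1 at alpha = 0.05
print(f"chi2 = {stat:.2f}, df = {df}, significant: {stat > critical}")
# -> chi2 = 14.29, df = 1, significant: True
```

Where SciPy is available, `scipy.stats.chi2_contingency` performs the same computation and additionally returns an exact p-value and the table of expected frequencies.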
Undeniably, AI, particularly generative AI, has significantly intensified the disinformation crisis. Easily accessible and largely unregulated AI tools empower individuals to mass-produce false information and fabricated content (Endert 2024). For example, deepfakes (videos, images, or audio created or altered using AI and deep learning techniques) can be used to spread disinformation, defame individuals, manipulate political events, or deceive people with convincing fake content (Sarris et al. 2023). These tools, and AI-driven synthetic generation of text and visuals more broadly, are constantly evolving, making the fact-checking process ever more difficult. Distinguishing AI-generated from human-generated content is becoming increasingly challenging, not only for the digitally literate but also for automated detection mechanisms. According to the latest Global Risks Report by the World Economic Forum (2024), AI-driven mis- and disinformation can be considered the most severe short-term risk the world faces. AI is accelerating the dissemination of misleading and distorted content, which can destabilise societies; false information can be leveraged not only as a source of societal disruption but also as a means of control by domestic actors pursuing political agendas.

5.3. Verification Practices and Users’ Contributions to the Fight Against Disinformation

The most common method of verifying online news stories, used by around 40% of respondents, was cross-checking the information with reliable news sources. A significant number of users (25%) mentioned that they conduct their own verification, such as examining the source or author of each news item. Approximately 15% reported that they do not usually verify news stories; for comparison, a Eurostat survey (Eurostat 2021) found that only 17% of Cypriots verify the accuracy of online information.
The fact that one in four respondents reported verifying the news on their own is one of the most interesting findings of this study. This practice, which may bypass reliable news media organisations, established fact-checking organisations, or expert consultation, raises significant concerns and can be linked to various biases that make people vulnerable to online mis- and disinformation. Confirmation bias, for example, can drive people to share or believe false information that aligns with their existing beliefs (Nickerson 1998). Moreover, echo chambers (enacted by users themselves) and filter bubbles (produced by algorithms) can amplify disinformation narratives (Pariser 2011; Figà Talamanca and Arfini 2022). During crises and breaking news events, people typically turn to official sources for information; when such information is unavailable, they tend to form informal social networks to share their own judgments and predictions, attempting to fill the information gap even if they do so incorrectly (Rosnow and Fine 1976).
Regarding other verification practices, forensic automation and AI-driven forensic tools can further enhance the overall verification process. For example, Digital Image Forensics (DIF) tools can provide significant assistance with various manipulation types and specific content characteristics (Katsaounidou et al. 2020). In this context, users' ability to understand photo forensics and interpret the results is crucial for effectively identifying altered images and detecting potential forgery attempts. In the age of deepfakes, basic skills and knowledge of machine-assisted forensic investigation, alongside traditional DIF practices, are essential for improving user awareness and critical thinking.
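As one concrete illustration of what DIF tools automate, the sketch below implements a toy version of copy-move (block-matching) detection, a classic image forensics technique: identical pixel blocks appearing at two different locations can indicate that a region of the image was duplicated. The "image" here is a small synthetic grid of invented grayscale values; real forensic tools operate on decoded photographs and match robust features (e.g., DCT coefficients or keypoint descriptors) rather than exact pixel values.

```python
# Toy copy-move forgery detection via exact block matching on a
# synthetic grayscale grid (list of lists of pixel values).

def find_duplicate_blocks(image, block=2):
    """Hash every block x block patch and report coordinate pairs whose
    pixel content is identical (candidate copy-move regions)."""
    seen = {}      # patch content -> first top-left coordinate (y, x)
    matches = []
    h, w = len(image), len(image[0])
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            if patch in seen:
                matches.append((seen[patch], (y, x)))
            else:
                seen[patch] = (y, x)
    return matches

# Synthetic 4x6 image where the top-left 2x2 block was "pasted" at the
# right edge, mimicking a copy-move manipulation.
img = [
    [10, 20, 1, 2, 10, 20],
    [30, 40, 3, 4, 30, 40],
    [5, 6, 7, 8, 9, 11],
    [12, 13, 14, 15, 16, 17],
]
print(find_duplicate_blocks(img))  # -> [((0, 0), (0, 4))]
```

Exact matching is deliberately simplistic: JPEG recompression perturbs pixel values and uniform regions (sky, walls) produce spurious matches, which is why production tools match quantised feature vectors and filter matches by spatial offset.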
To combat the spread of disinformation, survey respondents suggested several actions that individuals can take. The most common recommendation, cited by 68% of participants, was to verify the accuracy of information before sharing it. Other popular responses included prioritising education for themselves and others, while many participants emphasised the importance of relying on trustworthy news sources. Finally, about one-third of respondents felt that actively participating in digital and media literacy training programmes is also important in addressing this issue. Enhanced media literacy is a crucial tool in the fight against information disorder and is essential for building public resilience against online disinformation. An audience that is both critically and digitally literate is better prepared to evaluate information, identify reliable sources, and make informed decisions (Goodman 2021).
However, media literacy in Cyprus—at least until recently—has lacked a systematic approach. The country lags behind many EU member states, primarily due to the absence of an adopted comprehensive policy framework. Media literacy is present within the education system in Cyprus, but it is not fully incorporated into the curriculum, with some teachers voluntarily engaging in relevant activities (Christophorou and Karides 2024). The Cyprus Pedagogical Institute, under the Ministry of Education and Culture, plays a crucial role by supporting various projects within the educational system, often partnering with the Cyprus Radio Television Authority. Notably, the recent OSIS Media Literacy Index (Open Society Institute—Sofia 2023) ranked Cyprus 28th out of 41, placing it in a cluster of countries at risk of further decline.
The current state of protection against (online) disinformation in Cyprus raises significant concerns, primarily due to the lack of a regulatory policy framework and a specific strategy. The results of the study indicate that these phenomena, and information disorder in general, are pervasive in the daily lives of people in Cyprus, with high levels of exposure among local people, particularly in news areas such as politics and international relations, lifestyle, business/economy, and health. Age, education level, and professional industry were identified as significant factors influencing the perceived exposure to false news. Unsurprisingly, social media was by far the most common platform on which respondents encountered false information, while text was the most frequent form in which they encountered it. The majority of participants expressed a high level of concern about online disinformation and its negative impact on society, as well as about the potential for AI to exacerbate the disinformation landscape; interestingly, the analysis also revealed that both gender and work industry can significantly influence the level of concern regarding these issues. Regarding verification, the most common method reported by participants was cross-checking information against the reliable news sources they tend to follow. However, one in four mentioned that they conduct their own verification, and 15% reported that they do not usually verify news stories, two statistics that raise significant concerns. Finally, the most common recommendations for combating the spread of disinformation were to verify the accuracy of information before sharing it and to rely on trustworthy sources.
This study is not without limitations. Although the sample is sufficient to be considered statistically sound, a larger sample would strengthen the generalisability of the findings. Additionally, as with all survey-based studies, our results depend on the honesty and accuracy of participants' responses. Future extensions of this work could repeat the study with larger and more diverse samples, as well as conduct comparative studies with other countries both within and beyond the Mediterranean region.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/journalmedia5040099/s1. The Questionnaire of the study.

Author Contributions

Conceptualization, D.G. and T.A.M.; methodology, T.A.M.; validation, D.G. and T.A.M.; formal analysis, D.G.; investigation, D.G. and M.N.; resources, D.G. and M.N.; data curation, D.G. and M.N.; writing—original draft preparation, D.G., C.C. and T.A.M.; writing—review and editing, D.G., C.C. and T.A.M.; visualization, D.G. and M.N.; supervision, T.A.M.; project administration, T.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the MedDMO hub/Digital Europe Programme, Grant Agreement number 101083756.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the National Bio-Ethics Committee of Cyprus (ref. number ΕΕΒΚ ΕΠ 2023.01.282) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1. The Media Pluralism Monitor is the main project of The Centre for Media Pluralism and Media Freedom. https://cmpf.eui.eu/media-pluralism-monitor-2024 (accessed on 5 September 2024).
2. MedDMO is a regional hub of the European Digital Media Observatory covering Greece, Cyprus, and Malta, launched on 1 December 2022. https://meddmo.eu.
3. www.sciencehoaxes.org (accessed on 10 September 2024).
4. Fact-Check Cyprus was established in November 2023 and operates in partnership with the Social Computing Research Center at the Cyprus University of Technology, contributing to the fact-checking work hosted by MedDMO.
5. Cyprus gained independence from British colonial rule in 1960, at which point the Republic of Cyprus was established, comprising mainly Greek Cypriots (78%) and Turkish Cypriots (18%) together with smaller minorities of Armenian, Maronite, and Roman Catholic Cypriots. In the early 1960s, fighting erupted between the two major communities, leading to increasing tensions and the gradual division of the country. In 1974, a coup backed by the Greek Junta against the elected president Makarios III was followed by a Turkish military offensive, resulting in the de facto division of the island into a Greek Cypriot-controlled Republic of Cyprus in the South and the ‘Turkish Republic of Northern Cyprus’ in the north, which remains unrecognised internationally, except by Turkey. In 2003, checkpoints along the buffer zone partially opened, allowing residents on either side to visit the other side.
6. For example, ‘Antibodies to Digital MisInformation’ and ‘Co-Creating Media Literate Youth’ (https://medialiteracy.cut.ac.cy) (accessed on 5 September 2024) are two such projects, also mentioned in the report of the EC expert group on tackling disinformation and promoting digital literacy through education and training (2022). Both projects were developed as a follow-up action of the 2019 ‘Combating Misinformation through Media Literacy’ conference organised by The Cyprus University of Technology, the Horizon 2020 European project Co-Inform, the Cyprus Pedagogical Institute of the Ministry of Education and Culture, and the US Embassy in Cyprus.

References

  1. Albarracin, Dolores, Romer Daniel, Jones Christopher, Jamieson Kathleen Hall, and Patrick Jamieson. 2018. Misleading claims about tobacco products in YouTube videos: Experimental effects of misinformation on unhealthy attitudes. Journal of Medical Internet Research 20: e229. [Google Scholar] [CrossRef] [PubMed]
  2. Allport, Gordon W., and Leo Postman. 1947. The Psychology of Rumor. Oxford: Henry Holt. [Google Scholar]
  3. Arif, Ahmer, Leo Graiden Stewart, and Kate Starbird. 2018. Acting the part: Examining information operations within #BlackLivesMatter discourse. Proceedings of the ACM on Human-Computer Interaction 2: 1–27. [Google Scholar]
  4. Bayer, Judit, Bitiukova Natalija, Bard Petra, Szakács Judit, Alemanno Alberto, and Erik Uszkiewicz. 2019. Disinformation and propaganda—Impact on the Functioning of the Rule of Law in the EU and Its Member States; European Parliament, LIBE Committee, Policy Department for Citizens’ Rights and Constitutional Affairs. Available online: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/608864/IPOL_STU(2019)608864_EN.pdf (accessed on 2 September 2024).
  5. Benkler, Yochai, Faris Robert, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press. [Google Scholar]
  6. Bertani, Anna, Mazzeo Valeria, and Riccardo Gallotti. 2024. Decoding the News Media Diet of Disinformation Spreaders. Entropy 26: 270. [Google Scholar] [CrossRef] [PubMed]
  7. Bronstein, Michael, Pennycook Gordon, Bear Adam, Rand David, and Tyrone D. Cannon. 2019. Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. Journal of Applied Research in Memory and Cognition 8: 108–17. [Google Scholar] [CrossRef]
  8. Bruns, Axel. 2008. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang Publishing. [Google Scholar]
  9. Chee, Yun Foo. 2020. Combat 5G COVID-19 Fake News, Urges Europe. Available online: https://www.reuters.com/article/us-eu-telecoms-5g/combat-5g-covid-19-fake-news-urges-europe-idUSKBN2392N8 (accessed on 25 August 2024).
  10. Christophorou, Christophoros, and Nicholas Karides. 2024. Monitoring Media Pluralism in the Digital Era: Application of the Media Pluralism Monitor in the European Member States and in Candidate Countries in 2023. Country Report: Cyprus; EUI, RSC, Research Project Report. Centre for Media Pluralism and Media Freedom (CMPF). Available online: https://hdl.handle.net/1814/76997 (accessed on 5 September 2024).
  11. Ciampaglia, Giovanni L., and Filippo Menczer. 2018. Misinformation and Biases Infect Social Media, Both Intentionally and Accidentally. The Conversation. Available online: https://theconversation.com/misinformation-and-biases-infect-social-media-bothintentionally-and-accidentally-97148 (accessed on 20 August 2024).
  12. Cyprus Union of Journalists. 2023. Public Opinion Research. Nicosia: Cypronetwork. Available online: https://esk.org.cy/erevnes (accessed on 30 August 2024).
  13. DiFonzo, Nicholas, and Prashant Bordia. 2007. Rumor Psychology: Social and Organizational Approaches. Washington, DC: American Psychological Association. [Google Scholar]
  14. Dragomir, Marius, and Minna Aslama Horowitz. 2024. Epistemic Violators: Disinformation in Central and Eastern Europe. In Epistemic Rights in the Era of Digital Disruption. Cham: Palgrave Macmillan, pp. 155–70. [Google Scholar]
  15. Dragomir, Marius, Rúas Araújo Jose, and Minna Aslama Horowitz. 2024. Beyond online disinformation: Assessing national information resilience in four European countries. Humanities and Social Sciences Communications 11: 1–10. [Google Scholar] [CrossRef]
  16. Ecker, Ullrich K. H., Lewandowsky Stephan, Chang Ee Pin, and Rekha Pillai. 2014. The effects of subtle misinformation in news headlines. Journal of Experimental Psychology: Applied 20: 323–35. [Google Scholar]
  17. Endert, Julius. 2024. Generative AI Is the Ultimate Disinformation Amplifier. DW Akademie. Available online: https://akademie.dw.com/en/generative-ai-is-the-ultimate-disinformation-amplifier/a-68593890 (accessed on 25 August 2024).
  18. Eurobarometer. 2023. Media & News Survey 2023. In Flash Eurobarometer. Brussels: European Parliament. Available online: https://europa.eu/eurobarometer/surveys/detail/3153 (accessed on 20 August 2024).
  19. European Commission. 2018. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions—Tackling Online Disinformation: A European Approach. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0236&rid=2 (accessed on 22 August 2024).
  20. European Commission. 2022. Disinformation: Commission Welcomes the New Stronger and More Comprehensive Code of Practice on Disinformation. Brussels: European Commission. Available online: https://ec.europa.eu/commission/presscorner/detail/en/ip_22_3664 (accessed on 22 August 2024).
  21. Eurostat. 2021. How Many People Verified Online Information in 2021? Available online: https://ec.europa.eu/eurostat/en/web/products-eurostat-news/-/ddn-20211216-3 (accessed on 20 August 2024).
  22. Figà Talamanca, Giacomo, and Selene Arfini. 2022. Through the newsfeed glass: Rethinking filter bubbles and echo chambers. Philosophy & Technology 35: 20. [Google Scholar]
  23. Freelon, Deen, and Chris Wells. 2020. Disinformation as political communication. Political Communication 37: 145–56. [Google Scholar] [CrossRef]
  24. Giomelakis, Dimitrios, and Theodora A. Maniou. 2023. Media Plurality Report for 2022: Cyprus, Greece and Malta. Valeta: MedDMO Consortium. [Google Scholar]
  25. Giomelakis, Dimitrios, Papadopoulou Olga, Papadopoulos Symeon, and Andreas Veglis. 2021. Verification of news video content: Findings from a study of journalism students. Journalism Practice 17: 1068–97. [Google Scholar] [CrossRef]
  26. Golovchenko, Yevgeniy, Cody Buntain, Gregory Eady, Megan A. Brown, and Joshua A. Tucker. 2020. Cross-platform state propaganda: Russian trolls on Twitter and YouTube during the 2016 US presidential election. The International Journal of Press/Politics 25: 357–89. [Google Scholar] [CrossRef]
  27. Goodman, Emma. 2021. Media Literacy in Europe and the Role of EDMO. European Digital Media Observatory (EDMO). Available online: https://edmo.eu/wp-content/uploads/2022/02/Media-literacy-in-Europe-and-the-role-of-EDMO-Report-2021.pdf (accessed on 5 September 2024).
  28. Hameleers, Michael, Anna Brosius, Franziska Marquart, Andreas C. Goldberg, Erika van Elsas, and Claes H. de Vreese. 2022. Mistake or manipulation? Conceptualizing perceived mis- and disinformation among news consumers in 10 European countries. Communication Research 49: 919–41. [Google Scholar] [CrossRef]
  29. Heckathorn, Douglas D., and Christopher Cameron. 2017. Network sampling: From snowball and multiplicity to respondent-driven sampling. Annual Review of Sociology 43: 101–19. [Google Scholar] [CrossRef]
  30. Ipsos. 2021. Youth Science Survey Report. Canada Foundation for Innovation and Acfas. Available online: https://www.innovation.ca/projects-results/current-topics-research-funding/youth-research-promising-future (accessed on 25 August 2024).
  31. Ipsos, and UNESCO. 2023. Survey on the Impact of Online Disinformation and Hate Speech. Available online: https://www.unesco.org/sites/default/files/medias/fichiers/2023/11/unesco_ipsos_survey.pdf (accessed on 25 August 2024).
  32. Jack, Caroline. 2017. Lexicon of lies: Terms for problematic information. Data & Society 3: 1094–96. [Google Scholar]
  33. Jacobson, Louis. 2023. Divisions in Cyprus Amplify Fact-Checking Challenges. PolitiFact. Available online: https://www.politifact.com/article/2023/dec/19/divisions-in-cyprus-amplify-fact-checking-challeng/ (accessed on 5 September 2024).
  34. Jolley, Daniel, and Karen M. Douglas. 2014. The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS ONE 9: e89177. [Google Scholar] [CrossRef] [PubMed]
  35. Kapantai, Eleni, Christopoulou Androniki, Berberidis Christos, and Vassilios Peristeras. 2021. A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media & Society 23: 1301–26. [Google Scholar]
  36. Katsaounidou, Anastasia N., Gardikiotis Antonios, Tsipas Nikolaos, and Charalampos A. Dimoulas. 2020. News authentication and tampered images: Evaluating the photo-truth impact through image verification algorithms. Heliyon 6: e05808. [Google Scholar] [CrossRef]
  37. Khan, M. Laeeq, and Ika Idris. 2019. Recognise misinformation and verify before sharing: A reasoned action and information literacy perspective. Behaviour & Information Technology 38: 1194–212. [Google Scholar]
  38. Kouros, Theodoros, Papa Venetia, Ioannou Maria, and Vyronas Kapnisis. 2023. Conspiratorial narratives on Facebook and their historical contextual associations: A case study from Cyprus. Journal of Communication Inquiry 47: 422–39. [Google Scholar] [CrossRef]
  39. Kumar, Krishna K. P., and Gopalan Geethakumari. 2014. Detecting misinformation in online social networks using cognitive psychology. Human-Centric Computing and Information Sciences 4: 1–22. [Google Scholar] [CrossRef]
  40. Lan, Duong Hoai, and Tran Minh Tung. 2024. Exploring fake news awareness and trust in the age of social media among university student TikTok users. Cogent Social Sciences 10: 1–24. [Google Scholar] [CrossRef]
  41. Lee, Francis. 2024. Disinformation perceptions and media trust: The moderating roles of political trust and values. International Journal of Communication 18: 23. [Google Scholar]
  42. Li, Cathy, and Agustina Callegari. 2024. Stopping AI Disinformation: Protecting Truth in the Digital World; World Economic Forum. Available online: https://www.weforum.org/agenda/2024/06/ai-combat-online-misinformation-disinformation/ (accessed on 30 August 2024).
  43. Maniou, Theodora A., and Venetia Papa. 2023. The dissemination of science news in social media platforms during the COVID-19 crisis: Characteristics and selection criteria. Communication and Society 36: 35–46. [Google Scholar] [CrossRef]
  44. Manning, Martin J., and Herbert Romerstein. 2004. Historical Dictionary of American Propaganda. London: Bloomsbury Publishing Group. [Google Scholar]
  45. Marcus, George. 2017. How Affective Intelligence Can Help Us Understand Politics. Emotion Researcher, ISRE’s Sourcebook for Research on Emotion and Affect. Available online: https://emotionresearcher.com/how-affective-intelligence-theory-can-help-us-understand-politics/ (accessed on 10 September 2024).
  46. Marwick, Alice, and Rebecca Lewis. 2017. Media Manipulation and Disinformation Online. New York: Data & Society Research Institute. Available online: https://datasociety.net/library/media-manipulation-and-disinfo-online (accessed on 30 August 2024).
  47. McKay, Spencer, and Chris Tenove. 2021. Disinformation as a threat to deliberative democracy. Political Research Quarterly 74: 703–17. [Google Scholar] [CrossRef]
  48. Newman, Nic. 2011. Mainstream Media and the Distribution of News in the Age of Social Media. Oxford University Report. Available online: http://ora.ox.ac.uk/objects/uuid:94164da6-9150-4938-8996-badfdef6b507 (accessed on 20 August 2024).
  49. Newman, Nic, Richard Fletcher, Craig T. Robertson, Amy Ross Arguedas, and Rasmus Kleis Nielsen. 2024. Digital News Report 2024. Reuters Institute for the Study of Journalism. Available online: https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2024 (accessed on 23 August 2024).
  50. Nickerson, Raymond S. 1998. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology 2: 175–220. [Google Scholar] [CrossRef]
  51. Open Society Institute—Sofia. 2023. Media Literacy Index project of the European Policies Initiative (EuPI). Open Society Institute—Sofia Foundation (OSIS). Available online: https://osis.bg/wp-content/uploads/2023/06/MLI-report-in-English-22.06.pdf (accessed on 21 August 2024).
  52. Papadopoulou, Olga, Dimitrios Giomelakis, Lazaros Apostolidis, Symeon Papadopoulos, and Yiannis Kompatsiaris. 2019a. Context aggregation and analysis: A tool for user-generated video verification. Paper presented at 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, July 21–25, vol. 25. [Google Scholar]
  53. Papadopoulou, Olga, Markos Zampoglou, Symeon Papadopoulos, and Ioannis Kompatsiaris. 2019b. A corpus of debunked and verified user-generated videos. Online Information Review 43: 72–88. [Google Scholar] [CrossRef]
  54. Papaioannou, Tao, and Antigoni Themistokleous. 2018. An overview of media education in Cyprus: Concepts and policies. In Media Literacy: In Search of the Concept and the Function of Media Literacy. Nicosia: Advanced Media Institute/Metamesonykties Publications, pp. 35–53. [Google Scholar]
  55. Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press. [Google Scholar]
  56. Pennycook, Gordon, and David G. Rand. 2019. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188: 39–50. [Google Scholar] [CrossRef]
  57. Petrikkos, Petros. 2021. Pandemic entanglement: COVID-19 and hybrid threats in the Republic of Cyprus. Cyprus Review 33: 199–228. [Google Scholar]
  58. Photiou, Irene, and Theodora A. Maniou. 2018. Changing audiences, changing realities: Identifying disinformation via new teaching curricula. In Crisis Reporting: Proceedings of EJTA Teachers Conference. Edited by Andreas Veglis and Nikos Drok. Michelen: EJTA Publications, pp. 64–72. [Google Scholar]
  59. Potter, W. James. 2022. Analysis of definitions of media literacy. Journal of Media Literacy Education 14: 27–43. [Google Scholar] [CrossRef]
  60. Rosnow, Ralph L., and Gary Alan Fine. 1976. Rumor and Gossip: The Social Psychology of Hearsay. New York: Elsevier. [Google Scholar]
  61. Rúas Araujo, Jose, Pérez-Curiel Concha, and Paulo Carlos López-López. 2020. New challenges and threats for journalism in the post-truth era: Fact-checking and the fake news combat. In Information Visualization in the Era of Innovative Journalism. London: Routledge, pp. 154–60. [Google Scholar]
  62. Rúas Araujo, José, John P. Wihbey, and Daniel Barredo-Ibáñez. 2022. Beyond fake news and fact-checking: A special issue to understand the political, social and technological consequences of the battle against misinformation and disinformation. Journalism and Media 3: 254–56. [Google Scholar] [CrossRef]
  63. Said-Hung, Elias, Sánchez-Esparza Marta, and Daria Mottareale-Calavese. 2024. The Role of Minority Political Groups in the Dissemination of Disinformation. The Case of Spain. Journalism Practice, 1–23. [Google Scholar] [CrossRef]
  64. Sarris, Nikos, Hamilos Efthimios, Palla Zoi, and Symeon Papadopoulos. 2023. The Deepfake Knowledge Base. Mediterranean Digital Media Observatory (MedDMO). Available online: https://meddmo.eu/about-deepfakes (accessed on 30 August 2024).
  65. Shu, Kai, Amrita Bhattacharjee, Faisal Alatawi, Tahora H. Nazer, Kaize Ding, Mansooreh Karami, and Huan Liu. 2020. Combating disinformation in a social media age. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10: e1385. [Google Scholar]
  66. Shu, Kai, Sliva Amy, Wang Suhang, Tang Jiliang, and Liu Huan. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter 19: 22–36. [Google Scholar] [CrossRef]
  67. Silverman, Craig. 2016. This Analysis Shows how Viral Fake Election News Stories Outperformed Real News on Facebook. Buzzfeed News. Available online: https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook (accessed on 20 August 2024).
  68. Tandoc, Edson C., Jr. 2019. The facts of fake news: A research review. Sociology Compass 13: e12724. [Google Scholar] [CrossRef]
  69. Themistokleous, Antigoni. 2019. The Role of National Regulatory Authorities in Tackling Disinformation. Cyprus Radio and Television Authority. Available online: https://cmpf.eui.eu/the-role-of-national-regulatory-authorities-in-tackling-disinformation (accessed on 22 August 2024).
  70. Themistokleous, Antigoni. 2023. Media education for children in Cyprus: Educating pupils to critically read advertisements. Media Education 14: 131–38. [Google Scholar] [CrossRef]
71. Thurman, Neil, Steve Schifferes, Richard Fletcher, Nic Newman, Stephen Hunt, and Aljosha Karim Schapals. 2016. Giving computers a nose for news. Digital Journalism 4: 838–48. [Google Scholar] [CrossRef]
  72. Trimithiotis, Dimitris, Iacovos Ioannou, Vasos Vassiliou, Panicos Christou, Stelios Chrysostomou, Erotokritos Erotokritou, and Demetris Kaizer. 2024. Labservatory: A synergy between journalism studies and computer science for online news observation. Online Information Review. ahead of print. [Google Scholar] [CrossRef]
  73. Tucker, Joshua A., Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan. 2018. Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. Available online: https://doi.org/10.2139/ssrn.3144139 (accessed on 30 August 2024).
  74. Valant, Jana. 2015. Online Consumer Reviews: The Case of Misleading or Fake Reviews. European Parliamentary Research Service. Available online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2015/571301/EPRS_BRI(2015)571301_EN.pdf (accessed on 28 August 2024).
  75. Van der Linden, Sander. 2023. Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity. New York: W. W. Norton & Company. [Google Scholar]
  76. Van Duyn, Emily, and Jessica Collier. 2018. Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society 22: 29–48. [Google Scholar] [CrossRef]
77. Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science 359: 1146–51. [Google Scholar] [CrossRef]
  78. Waddington, Kathryn. 2012. Gossip and Organizations. London: Routledge. [Google Scholar]
  79. Wang, Yuxi, Martin McKee, Aleksandra Torbica, and David Stuckler. 2019. Systematic literature review on the spread of health-related misinformation on social media. Social Science & Medicine 240: 112552. [Google Scholar]
  80. Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. Strasbourg: Council of Europe, vol. 27. [Google Scholar]
  81. World Economic Forum. 2024. The Global Risks Report 2024—19th Edition. Available online: https://www.weforum.org/publications/global-risks-report-2024/in-full (accessed on 5 September 2024).
82. Yang, Shuo, Kai Shu, Suhang Wang, Renjie Gu, Fan Wu, and Huan Liu. 2019. Unsupervised fake news detection on social media: A generative approach. Proceedings of the AAAI Conference on Artificial Intelligence 33: 5644–51. [Google Scholar] [CrossRef]
  83. Zecchinon, Pauline, and Olivier Standaert. 2024. The War in Ukraine Through the Prism of Visual Disinformation and the Limits of Specialized Fact-Checking. A Case-Study at Le Monde. Digital Journalism, 1–19. [Google Scholar] [CrossRef]
Figure 1. Frequency of exposure to false news in Cyprus.
Figure 2. Perceived encounter of false news in different media in Cyprus.
Figure 3. Perceived encounter of false news forms in the media.
Figure 4. Perceived concern regarding the impact of disinformation in Cyprus.
Figure 5. Perceived concern regarding the impact of AI on online disinformation in Cyprus.
Figure 6. Frequency of use of different means of verification.
Table 1. Sample distribution.
Demographics | Distribution
Gender | Male (37.7%), Female (59.9%), Other (0.7%), Prefer not to say (1.7%)
Age | 18–20 (18.8%), 21–30 (19.5%), 31–40 (16.4%), 41–50 (25%), 51–60 (15.4%), 61–70 (2.8%), Prefer not to say (2.1%)
Level of education | Primary, Secondary, College, or Technical School (21.5%), BA, MA, or PhD (75%), Prefer not to say (2.1%), Other (1.4%)
Annual income | <€20,000 (33.2%), €20,000–€50,000 (34.2%), €50,000–€80,000 (15.8%), >€80,000 (2.7%), Prefer not to say (14%)
Professional industry | Agriculture (0.3%), Public sector (16.1%), Finance (3.8%), Entertainment (1.4%), Education (28.4%), Health care (1.4%), IT services (6.8%), Food-hotel services (2.4%), Legal services (0.7%), Media (21.2%), Military (1%), Prefer not to say (9.9%), Other (6.5%)
Table 2. Chi-square tests regarding the relationship between demographic characteristics and perceived sense of exposure to false news in Cyprus.
Demographics | p Value
Age | 0.015
Level of education | 0.012
Work industry | 0.031
Table 3. Chi-square tests regarding the perceived concern about the negative impact of online disinformation on society and demographic characteristics.
Demographics | p Value (Concern About the Negative Impact of Online Disinformation on Society) | p Value (Concern About AI’s Potential to Exacerbate Online Disinformation)
Gender | <0.001 | 0.006
Level of education | – | –
Work industry | <0.001 | <0.001
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Giomelakis, D.; Constandinides, C.; Noti, M.; Maniou, T.A. Investigating Online Mis- and Disinformation in Cyprus: Trends and Challenges. Journalism and Media 2024, 5, 1590–1606. https://doi.org/10.3390/journalmedia5040099