Article

Narrow Margins and Misinformation: The Impact of Sharing Fake News in Close Contests

by
Samuel Rhodes
Faculty of Political Science, Moravian University, Bethlehem, PA 18018, USA
Soc. Sci. 2024, 13(11), 571; https://doi.org/10.3390/socsci13110571
Submission received: 19 July 2024 / Revised: 4 October 2024 / Accepted: 10 October 2024 / Published: 24 October 2024
(This article belongs to the Special Issue Disinformation and Misinformation in the New Media Landscape)

Abstract
This study investigates the impact of candidates disseminating fake news on voter behavior and electoral outcomes in highly competitive, partisan races. While the effects of fake news on electoral outcomes have been studied, research has yet to examine the impact of candidates’ strategic use of fake news in elections where it may have the greatest impact—close races. This research explores whether the use of fake news influences voter support, particularly among independent voters, in tightly contested elections. Through a conjoint survey experiment involving participants from Amazon MTurk, this study analyzes how variables such as race competitiveness, perceived risk of alienating independents, and the presence of partisan labels affect voter responses to candidates who spread misinformation. The findings indicate that while the competitiveness of a race does not significantly enhance support for candidates sharing fake news, the presence of partisan labels does. These results suggest that voter behavior in response to fake news is more closely tied to partisan identity than to strategic electoral considerations. This study highlights the complex dynamics of misinformation in electoral contexts and its implications for democratic processes.

1. Introduction

In recent years, the dissemination of fake news has emerged as a significant concern in the context of electoral politics. The rapid spread of misinformation, particularly through digital and social media platforms, has raised questions about its potential impact on voter behavior and election outcomes. In 2020, more than a quarter of the American adult population visited websites hosting fake news during the final weeks of the presidential election (Moore et al. 2022). Similarly, in 2016, the top fake news stories were shared on Facebook more widely than the leading mainstream news articles (Silverman and Singer-Vine 2016). These trends highlight not only the rapid spread of misinformation (Guess et al. 2019b) but also the potential influence of fake news on public opinion and political behavior. This study delves deeper into these dynamics in the context of electoral races, examining the repercussions of candidates disseminating fake news and its nuanced impact on diverse constituents.
Before and during his tenure as president, Donald Trump frequently shared false information and fake news on social media. This behavior, popularized by Trump, persists beyond his presidency and is now adopted by multiple elected officials (Macdonald and Brown 2022). Another prominent example of a political figure disseminating misinformation is U.S. House Representative Anna Paulina Luna. Before her election, Luna shared false claims about ballot tampering during the 2020 election, which she later deleted after it was widely debunked (Gardner 2022; Luna 2020). Luna’s actions reflect the strategic use of misinformation by candidates, highlighting how the spread of fake news may be employed as a campaign tool, though candidates might retract such statements when they perceive a risk to their electoral chances.
Perhaps most deserving of attention are races in which the margin of victory is notably slim, where even the marginal effects of sharing fake news can potentially sway the result. For instance, in 2020, Representative Lauren Boebert, a frequent conduit of fake news, won her Western Colorado district by a margin of approximately 500 votes, or 0.17% (Ulloa and Vigdor 2022). Over the years, Representative Boebert has propagated misinformation regarding the COVID-19 virus (Lee 2021), the COVID-19 vaccine (Roeloffs 2024), and the results of the 2020 election (Boebert 2020).
While much of the literature has focused on the role of elites and political polarization in the dissemination of fake news (Lazer et al. 2018; Vosoughi et al. 2018), less attention has been paid to the effects of candidates using fake news as a strategic campaign tool. The use of fake news has become increasingly common, especially among certain political groups, with research showing a notable rise in its use in the 2022 U.S. House elections compared to previous cycles (Macdonald and Brown 2022). However, the specific effects of candidates sharing fake news—particularly in closely contested races where such tactics may have the greatest potential to sway the outcome—remain relatively unexplored.
This paper seeks to fill this gap by investigating how the strategic use of fake news by candidates affects voter behavior and electoral outcomes in highly competitive, partisan races. This research examines whether the dissemination of fake news could influence voter support, particularly in scenarios where the electoral margins are narrow and the risk of alienating independent voters is high.
Hypotheses: Building on these considerations, this study proposes three hypotheses to explore the conditions under which candidates’ use of fake news affects voter behavior:
Hypothesis 1. As the competitiveness of an electoral race increases, the strategic dissemination of fake news by a candidate will be viewed more favorably by their supporters.
Hypothesis 2. When the perceived electoral cost of sharing fake news is high, such as the potential loss of support among political independents, partisan supporters will view the sharing of fake news by the candidate less favorably.
Hypothesis 3. Candidates who disseminate fake news will receive increased support when their actions are associated with clear partisan labels, as voters are more likely to overlook the sharing of false information when it aligns with their partisan identity.
By examining these factors, this paper aims to provide a deeper understanding of the role of candidates’ strategic use of fake news in close electoral contests and its broader implications for democratic processes.

2. Literature

2.1. Defining Fake News and Understanding the Role It Plays in American Politics

Lazer et al. (2018, p. 1094) define fake news as "fabricated information that mimics news media content in form but not in organizational process or intent". While fake news shares characteristics with misinformation (false or misleading information) and disinformation (false information designed to deceive people), it is distinct in that it attempts to pass itself off as real information.
Over the last ten years, fake news has become an increasingly influential feature of American society. In 2016, it may have helped tip the scales in favor of Donald Trump (Allcott and Gentzkow 2017). Millions of American voters were exposed to fake news stories like the fabricated murder–suicide of an FBI agent investigating Hillary Clinton's e-mails (Sydell 2016). In 2020, fake news was responsible for spreading many dangerous falsehoods about COVID-19. For example, fake news contributed to shortages of medical face masks and of the malaria drug hydroxychloroquine (Carrion-Alvarez and Tijerina-Salina 2020). During the 2020 election, misinformation on Facebook was found to receive six times more clicks than factual news (Dwoskin 2021). Whether it comes to politics, vaccines, nutrition, or the stock market, fake news continues to influence and shape major societal outcomes in the United States (Lazer et al. 2018).
Recent comparative research has identified several macro-level characteristics that can either promote or inhibit the spread and influence of disinformation across different countries (Humprecht et al. 2020). Humprecht et al. (2020) developed a theoretical framework that identifies conditions under which disinformation is more likely to spread and exert influence. They argue that countries with robust media regulation, high trust in political institutions, low levels of political polarization, and strong norms of diverse news consumption are more resilient to the effects of fake news.
In contrast, the United States exhibits relatively weaker resilience across these dimensions (Humprecht et al. 2020, p. 17). The U.S. media landscape is characterized by lower levels of media regulation, creating a fertile ground for the spread of disinformation. High levels of political polarization exacerbate this issue by making certain segments of the population more likely to believe and share misinformation that aligns with their partisan views. Furthermore, trust in political institutions in the U.S. is comparatively low, which further contributes to the problem by making citizens more susceptible to believing false information. Additionally, norms around news consumption in the United States tend to favor sensationalized or partisan news, which often aligns with or amplifies false narratives.
Given these macro-level vulnerabilities, the United States serves as a critical context for studying the effects of fake news on voter behavior and electoral outcomes. Unlike Northern European countries, which are better equipped with societal guardrails to discern fake from fact (Humprecht et al. 2020, p. 16), Americans lack such resilience-enhancing factors. This may help explain why false information has disproportionately large effects in the United States compared to other countries. By focusing on the U.S., this study aims to explore the consequences of fake news dissemination in an environment where institutional and societal defenses against disinformation are comparatively weaker.
Studies on fake news and misinformation tend to focus on exposure to, or strength of belief in, a piece of misinformation to better understand the psychological mechanisms the stimulus has on an individual post-exposure. For example, Pennycook et al. (2018), citing the familiarity and fluency biases, find that prior exposure to fake news tends to increase the strength of belief in that misinformation. In large sample studies using the web-browsing history of participants, Guess et al. (2017) analyze individuals' real-world exposure to misinformation by tracking their visits to websites and links hosting fake news stories. Similarly, Fourney et al. (2017) analyze the temporal and geographic dispersion of fake news exposure leading up to the 2016 election.
Beyond exposure and belief, Buchanan (2020), using a sample from the United Kingdom, explores the likelihood of individuals sharing false information online. His study finds that individuals who are more likely to share false information are those who believe the information is likely to be true or who have pre-existing attitudes consistent with the false information. This finding suggests that the act of sharing misinformation is not merely a function of demographic or personality traits—such as age—but is more strongly associated with cognitive biases and pre-existing beliefs. In the context of the current study, strength of partisan affiliation may be considered a pre-existing attitude, influencing both the perception of and the likelihood of sharing false information. This is consistent with findings by Guess et al. (2019a), who show that conservatives in the United States are the most likely to share false political misinformation, further underscoring the role of pre-existing ideological dispositions in the spread of fake news.
Experimental studies on misinformation have advanced our collective understanding of how and why fake news embeds itself in the political discourse and minds of partisans. Findings from Bullock (2007) and Thorson (2016) suggest that directionally motivated reasoning may explain why false beliefs persist even after individuals have been told that they were exposed to misinformation. In essence, individuals’ conclusions or beliefs are influenced by their subjective motivations in place of objective evidence or logic (Kunda 1990; Taber and Lodge 2006).
Similarly, Altay et al. (2020), using a US sample obtained from Amazon MTurk (440 Terry Ave N, Seattle, WA 98109), find that most people are reluctant to share fake news because it results in damage to one's reputation that is difficult to repair, even for politically congruent material. However, a key difference between that study and the current one is the context provided to the participants: Altay et al. (2020) informed the participants that false information had been shared by one of two acquaintances. In contrast, the current study uses a fictional elite, a candidate running for the U.S. House of Representatives. This difference in context may explain why the results from Altay et al. (2020) do not fully transfer over to the current study, despite a similar sample origin.
These studies often emphasize how frequently individuals encounter fake news, the platforms where fake news proliferates, and the demographic characteristics of those most likely to be exposed. However, this project shifts the focus from exposure to examining the effects of fake news itself. Specifically, this study aims to test the effects of fake news in the context of highly partisan races with competitive electoral margins, where the potential cost is turning away independents. By incorporating these variables, this research seeks to understand how fake news influences voter behavior and perceptions in various scenarios. Importantly, fake news is a constant factor tested across all conditions, allowing for a comprehensive analysis of its impacts. This approach provides a nuanced understanding of how fake news can shape electoral outcomes and voter attitudes, particularly in closely contested races.

2.2. History of Utilizing False Information on the Campaign Trail

While the scale and reach of fake news in the twenty-first century are much more substantial than in other eras of American political history, there are notable episodes involving the use of false or misleading claims on the campaign trail in earlier centuries.
The 1828 presidential election between Andrew Jackson and John Quincy Adams is a notable example of the use of false information on the campaign trail. One particularly egregious episode involved a smear campaign orchestrated by Jackson’s supporters, specifically New Hampshire Senator Isaac Hill, a member of Jackson’s informal advising group known as the “Kitchen Cabinet”. Hill published pamphlets that falsely claimed Adams had procured a young American girl for the Russian czar while serving as a diplomat in Russia. This baseless accusation was intended to damage Adams’ reputation and was part of a broader strategy of personal attacks and misinformation used throughout the campaign. While these disinformation tactics were not officially sanctioned by the Jackson campaign, the false allegations appeared in pamphlets despite Jackson’s insistence that attacks involving women not be used unless it was in response to an attack on his spouse (Parsons 2009, p. 144). The dissemination of such false information highlights the lengths to which political campaigns have historically gone to undermine opponents and influence public opinion.
Similar to the Isaac Hill episode of the nineteenth century, the strategy of deliberate and strategic misinformation from campaign-adjacent sources (in the modern iteration of the political action committee) was also evident in the twentieth century. George H.W. Bush's successful campaign for the presidency in 1988 was marked in part by the racially charged politics of crime, a tactic that continues to reverberate in American politics. Willie Horton, an African American prisoner, committed violent crimes while on furlough from a Massachusetts prison, and this was used to portray Democratic opponent Gov. Michael Dukakis as soft on crime. Although the Bush campaign aired an ad attacking the Massachusetts furlough program without mentioning Horton, a more notorious ad titled "Weekend Passes" was produced by Larry McCarthy for the National Security Political Action Committee. This ad explicitly highlighted Horton's crimes and stoked racial fears, leading to widespread media coverage and significant political fallout (Jamieson 1993). The Bush campaign distanced itself from the ad, but campaign strategist Lee Atwater famously stated, "If I can make Willie Horton a household name, we'll win the election" (Baker 2018), reflecting the campaign's strategic focus on this issue. The implication that Dukakis was directly responsible for Horton's actions was misleading because it suggested personal culpability in an individual crime resulting from a broader state policy that had bipartisan support before and during his governorship. While the story does not represent an outright fabrication, as was the case with the Hill attacks on John Q. Adams, it blurred the lines by implying that Governor Dukakis was more involved than he actually was. It also obscured the bipartisan support that the furlough program had in Massachusetts during this period.
Similar to the 1988 episode involving advertisements from a campaign-adjacent political action committee, the 2000 Republican primary was also marked by allegations of underhanded tactics involving racially motivated smears. During the South Carolina primary, a phony poll was circulated that falsely suggested Senator John McCain had fathered an illegitimate black child (Johnson 2016, p. 359). In reality, McCain and his wife Cindy had adopted a dark-skinned girl from Bangladesh. This smear tactic, although not directly linked to Bush’s official campaign, was highly effective and played a substantial role in Bush’s victory in the South Carolina primary. Veteran journalist Joe Klein noted the striking similarities between this incident and previous instances involving highly personal and false information spread about Bush’s political opponents, such as rumors regarding the sexual orientation of Texas Governor Ann Richards. In both instances, Bush’s chief strategist, Karl Rove, was intimately involved with the Bush campaign. Perhaps the most glaring indictment of the Bush campaign was their lack of commentary on the false information (Johnson 2016, p. 360).
Past allegations and alleged use of misinformation to help steer the outcome of elections pale in comparison to what has now become a regular feature of twenty-first-century elections: rampant lies and disinformation campaigns directed not only from the campaign of presidential candidates but from the president himself. The 2020 election saw significant misinformation, particularly around mail-in voting and voter fraud. President Donald Trump and his allies repeatedly made unfounded claims that the election was “rigged” and that there was widespread voter fraud, despite numerous investigations and court rulings finding no evidence to support these claims. Trump also claimed that underage voters, dead people, and illegal immigrants had swung the outcome of the election (Fahey 2023, pp. 267–68). This disinformation campaign aimed to undermine public confidence in the electoral process and question the legitimacy of the election results, eventually culminating in the January 6 insurrection at the U.S. Capitol.

2.3. Negativity and Misinformation

Given that there is now some evidence suggesting that candidates who share misinformation and conspiracy theories may be doing so at strategic points during the campaign (Kornberg et al. 2023, p. 15), it is possible that sharing misinformation shares characteristics with the decision to go on the attack. Geer (2008) defines negativity as any critique of one candidate against another during a contest. This is a conscious decision of a candidate and campaign team that views negativity as one way to shape the outcome of the campaign to their favor (Lau and Rovner 2009). In other words, sharing misinformation may be purposeful in the same way that going on the offensive is purposeful (Damore 2002). Similarly, this paper hypothesizes that misinformation increases as the competitiveness of a race rises; a similar effect has been observed with the rise of negativity in U.S. Senate campaigns (Hale et al. 1996).
The biggest difference is that choosing to go negative is always a deliberate move, while distributing misinformation might be unintentional. While it may be difficult to ascertain whether the individual sharing the information is aware of the veracity of the material, some misinformation is easier to debunk than others; consider, for example, President Donald Trump's insistence that the 2020 election was stolen or somehow rigged against his candidacy. Despite an abundance of evidence to the contrary (including many court findings), Trump found his embrace of the so-called "Big Lie" to be a powerful tool to galvanize supporters and consolidate support within the Republican Party (Jacobson 2021).
The effects of misinformation and negativity on the electorate also share some similar traits. Content analysis of online misinformation shows that much of the material is vulgar, offensive, or otherwise jaw-dropping. For example, some of the most-read political fake news stories of the 2016 election cycle involve horrific topics such as assault and murder (Silverman 2016). One of the psychological mechanisms behind this engagement is likely similar to that of negative advertising, which has been found to be more arousing and better remembered than positive advertising (Bradley et al. 2007). This is theorized to be rooted in the powerful negativity bias, as human beings often process and remember negative emotions over positive ones, even when it is suggested that positive thinking can be more powerful than negative thinking (Baumeister et al. 2001). Baumeister et al. (2001, p. 325) argue that this bias towards encoding negative information at the expense of positive information is probably an evolutionary response to danger that increases the likelihood of passing on one's genes. It may also help explain why polarizing and disturbing fake news tends to be remembered over actual news, even after participants are told that the information is false (Thorson 2016).

2.4. False Information and the Campaign Environment

This paper theorizes that the spread of fake news can be viewed as a deliberate campaign strategy in closely contested races. In such situations, candidates deadlocked in polls might prioritize electoral victory over the truthfulness of their online content. Previous studies in political science indicate that the intensity of electoral competition can influence other campaign facets. For instance, it can lead to heightened negativity in TV ads (Damore 2002; Hale et al. 1996; Lau and Rovner 2009; Peterson and Djupe 2005), on campaign websites (Druckman et al. 2010), and within social media posts (Auter and Fine 2016).
While it is not possible to ascertain the motives and knowledge of every candidate sharing political misinformation, there are proxy measures of related behavior available for analysis. Recent work by Kornberg et al. (2023) shows patterns of 2022 candidates sharing posts related to the “Big Lie”; on Twitter, Facebook, and Instagram, there was a much greater volume of shared bogus claims leading up to, and eventually peaking on, Election Day. After this period, there was a substantial decline in the post-election period (Kornberg et al. 2023, p. 15).
For the survey experiment detailed in this paper, the participants are informed that the information shared by the candidate was spurious, yet the candidate stood by their action. Existing literature reveals that House members from fiercely competitive districts are more resistant to admitting guilt when caught in improprieties (Armstrong-Taylor 2012). This suggests that the competitiveness of an electoral district could shape both perceptions and partisan behaviors. Based on this, the first hypothesis of this paper is that the more competitive the race is, the more favorable the sharing of fake news becomes in the eyes of supporters.

2.5. Weighing the Possible Electoral Cost of Sharing Fake News

Social psychology literature suggests that the cost of sharing fabricated information is variable; research indicates that lies told to advance one’s personal stature or harm other people are typically frowned upon. However, a piece of fabricated information that is seen to provide benefits to others is viewed as a more acceptable behavior (Lindskold et al. 1983). Scholarship has identified that electability is a factor that voters consider when casting ballots (Abramowitz 1989; Bartels 1988). Primary voters also consider viability when selecting a candidate (Brady 1993). This suggests that partisans’ enthusiasm may be depressed if a candidate’s behavior is seen as too aggressive or controversial.
Based on this, this paper’s second hypothesis is that the higher the perceived electoral cost of a candidate sharing fake news is (i.e., losing support among political independents), the less acceptable the sharing of fake news by the candidate becomes in the eyes of partisan supporters.

2.6. Partisanship, Polarization, and Fake News

The term “fake news” has evolved into a politically polarizing label, eliciting strong reactions from both Democrats and Republicans. When there is clarity regarding the source of misinformation, it is likely that partisan individuals will judge such behavior more leniently. Given the pronounced role of partisanship in molding individual attitudes in polarized times (Abramowitz and Saunders 2008), it is plausible to assume that the presence of partisan labels might enhance participants’ perceptions of candidates. Nyhan et al. (2020) demonstrated that fact-checking misleading statements barely affected candidate assessments or voting preferences, suggesting that candidates might retain support even after sharing fake news. Notably, this study diverges from that work in its methodology—here, no fact-checking is involved, and all the participants are uniformly informed that the information has been proven false. In a similar vein, Ansolabehere and Iyengar (1996) conducted an experiment where the participants encountered a news piece highlighting the accuracy concerns of a political attack rather than the attack itself. The outcome revealed a surge in support for the candidate launching the attack. Such findings hint at the possibility of partisan followers downplaying the use of fake news by their candidates. Because exposure to partisan media (fake or otherwise) has been found to further polarize voters (Levendusky 2013), we should expect respondents to react warmly towards their party’s candidate when partisan labels are included.
Consequently, the paper’s third hypothesis posits that candidates who disseminate fake news will receive increased support when their actions are associated with clear partisan labels, even when voters are aware that the information is false. This assumption is grounded in existing research on partisan identity, which suggests that voters are more likely to forgive or overlook unethical behavior, such as spreading false information, when it aligns with their political affiliation.
In this study, the respondents were informed that the shared information was false to allow for a direct examination of how partisan loyalty influences voter support in the face of clear misinformation. By removing uncertainty about the veracity of the information, this experiment focuses on the effects of partisanship and loyalty, isolating them from the cognitive biases related to belief in misinformation.

3. Methodology

3.1. Participant Selection

The participants were recruited via Amazon Mechanical Turk (MTurk), a widely used platform for gathering diverse and representative samples quickly and efficiently. MTurk has been shown to provide data quality comparable to traditional survey methods, particularly in political science research (Berinsky et al. 2012). This study utilized a convenience sample of MTurk users who met the following criteria: U.S. residents, over the age of 18, and fluent in English. Additional steps were taken to ensure that the sample was approximately split between Democratic and Republican participants.
MTurk is a popular tool in social science research due to its ability to rapidly gather diverse samples at a reduced financial cost. As Levay et al. (2016) note, MTurk has significantly lowered the financial barriers for survey researchers, enabling efficient data collection. Additionally, data can be gathered quickly, allowing for faster analysis with minimal drawbacks. Mullinix et al. (2015) found that results from MTurk convenience samples often produce estimates of causal effects that are comparable to those obtained from population-based samples.
However, more recent studies have identified some limitations with MTurk samples. For example, Munger et al. (2018) observed a lack of variation in the age and digital literacy of MTurk users. Levay et al. (2016) also examined the demographic and political composition of MTurk samples compared to more traditional surveys, such as the American National Election Survey (ANES), and found that MTurk users tend to be younger, more liberal, and more educated than the general population.
Despite these demographic differences, Levay et al. (2016) demonstrated that when controlling for key covariates such as age, gender, race, income, partisanship, and ideology, the differences between MTurk samples and those from traditional surveys like ANES were greatly reduced. This indicates that, with proper controls, MTurk can yield results similar to those from probabilistic sampling methods, making it a viable alternative for social science research, especially when resources or time are constrained.
In this study, several of the covariates recommended by Levay et al. (2016) were incorporated into the models, including income, education, and party identification, to ensure that the results were as reliable and generalizable as possible.
The sample size for this study consisted of 2224 participants, collected over a two-week period in July 2023. To ensure representativeness, the participants were required to be U.S. residents, at least 18 years of age, and fluent in English. The sample included a diverse range of demographics, with variations in age, gender, education level, and political affiliation. The participants’ political affiliations were balanced to reflect the broader U.S. electorate, including an approximately equal representation of Republicans and Democrats.
To minimize biases and ensure the integrity of the data, several exclusion criteria were applied. Throughout the survey, the participants encountered three attention checks, each spread across different parts of the survey. These attention checks asked the respondents to select a specific option from a Likert scale (e.g., “Please select ‘Strongly Disagree’”). If participants did not select the correct option, they were removed from the survey, and their responses were not recorded. Additionally, participants who provided incomplete responses or did not meet the eligibility criteria were excluded from the final analysis. The median duration for completing the survey experiment was approximately 6 min, which is consistent with the expected time frame for this type of experimental design.
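The exclusion rules above amount to a simple screening filter: a response is retained only if it is complete and passes all three attention checks. The following sketch illustrates that logic; the field names and the "correct" options are illustrative assumptions, not drawn from the paper's actual instrument.

```python
# Hypothetical sketch of the screening rules described above; the field
# names and correct answers are illustrative assumptions, not from the paper.
responses = [
    {"id": 1, "check1": "Strongly Disagree", "check2": "Agree", "check3": "Neutral", "complete": True},
    {"id": 2, "check1": "Agree", "check2": "Agree", "check3": "Neutral", "complete": True},   # fails check1
    {"id": 3, "check1": "Strongly Disagree", "check2": "Agree", "check3": "Neutral", "complete": False},  # incomplete
]

# The option each attention check instructs the respondent to select.
CORRECT = {"check1": "Strongly Disagree", "check2": "Agree", "check3": "Neutral"}

def passes_screening(row: dict) -> bool:
    """Keep only complete responses that answered all three checks correctly."""
    return row["complete"] and all(row[k] == v for k, v in CORRECT.items())

analysis_sample = [r for r in responses if passes_screening(r)]
# Only participant 1 survives screening in this toy data.
```

In practice such filtering would run over the full MTurk export, but the retention rule is the same: any single failed check or incomplete response removes the participant from the analysis.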
This study was conducted following ethical guidelines, with informed consent obtained from all the participants before they began the survey. The participants were assured of their anonymity and the confidentiality of their responses, in accordance with institutional review board (IRB) standards.

3.2. Survey and Experimental Conditions

The respondents completed a survey designed to capture demographic details (age, gender, education, political affiliation), political knowledge, media consumption habits, and general trust in media sources. The survey also included attention checks to ensure response quality and gathered data on the respondents’ prior exposure to political misinformation. These demographic and behavioral data provided a baseline for comparing reactions across different treatment groups and were incorporated into the analysis to control for any potential confounding factors.
Following the collection of demographic information, the participants were randomly assigned to one of eight experimental conditions designed to explore the effects of various factors—race competitiveness, the perceived cost of sharing fake news, and the presence of partisan labels—on voter support for a candidate in a tightly contested election. These conditions were carefully constructed to isolate the effects of each factor, with a control group serving as the baseline comparison.
To ensure clarity and control within the experimental design, the respondents were informed that the information shared by Cooper was false. This decision reflects real-world political contexts where misinformation is often publicly debunked by fact-checking media outlets. In these scenarios, voters may still continue to support or defend the candidate, regardless of whether the misinformation has been proven false.
The purpose of this study is to examine the strategic use of misinformation by candidates, rather than to determine whether voters internalize or believe the false information. In many U.S. elections, voters are familiar with claims that candidates share false information, and the media often plays a key role in debunking such claims. However, this paper does not attempt to establish whether voters believe these fact checks. Instead, this study focuses on whether the act of sharing false information, regardless of its debunking, affects voter behavior.
By explicitly informing the participants of the false nature of the news, this study isolates the effects of candidates’ strategic use of fake news, allowing us to focus on voter responses to the act of sharing false information rather than on their belief in the content’s truthfulness.

3.2.1. Consistent Elements Across Experimental Conditions

To maintain consistency and ensure comparability across the various treatment conditions, certain elements were included in every scenario presented to the participants. These elements were designed to provide a common context, allowing this study to focus on the effects of the manipulated variables—race competitiveness, perceived cost of sharing fake news, and the presence of partisan labels—without introducing additional confounding factors.
1. The photo:
In all eight experimental conditions, the participants were shown an image of a now-deleted tweet falsely claiming Dominion software switched votes from Trump to Biden, originally posted by Representative Luna (Luna 2020). This example was selected because it clearly exemplifies the strategic use of misinformation in campaigns (Rogers 2022).
The photo was included along with the accompanying text in the same window, providing a visual and narrative representation of the misinformation that Republican candidate Peter Cooper was accused of sharing. The inclusion of this photo across all conditions ensured that every participant, regardless of their assigned treatment, was exposed to the same visual stimulus related to the candidate’s use of fake news.
2. The allegations of unorthodox behavior: Each scenario included allegations that Peter Cooper had frequently shared stories from websites and social media accounts labeled as “fake news” by experts. This consistent narrative element was critical for ensuring that all the participants were equally aware of the candidate’s controversial behavior, which was central to the study’s focus on the impact of sharing misinformation.
3. The candidate’s response: In every condition, the participants were provided with Peter Cooper’s standardized response to the allegations: “My position is very clear. We need to restore faith in the election process and that starts by asking questions on how we can improve election integrity”. This consistent response was included to provide a uniform understanding of the candidate’s stance on the misinformation he was accused of spreading.
In each condition, Cooper is the fictionalized candidate sharing false information; the context of the race and the use of partisan labels, however, were varied across the experimental conditions.
4. The core political context: Each treatment condition presented the same core political context, with Peter Cooper and Joseph Frey described as being locked in a battle to replace the retiring incumbent of the Second Congressional District. Cooper’s campaign was consistently characterized by its focus on lowering taxes and decreasing government spending, while Frey’s platform emphasized expanding access to healthcare and education. This ensured that all the participants received a similar political backdrop, with only the manipulated variables differing between conditions.
By maintaining these consistent elements across all conditions, this study was able to isolate the effects of the experimental manipulations and draw more accurate conclusions about how race competitiveness, perceived cost, and partisan labels influence voter behavior in the context of misinformation.

3.2.2. Control Group (Treatment 1)

The control group was designed to serve as a baseline, with no experimental manipulations applied to the scenario. The participants in this group were presented with a scenario where Republican Peter Cooper and Democrat Joseph Frey were described as being locked in a tight battle to replace the retiring incumbent of the Second Congressional District. Cooper’s campaign focused on lowering taxes and decreasing government spending, while Frey’s platform centered on expanding access to healthcare and education.
Throughout the race, allegations of unorthodox behavior had dogged Cooper, particularly regarding his tendency to share stories from websites and social media accounts that had been labeled as “fake news” by experts—defined as false information resembling real news media. An example provided included Cooper’s sharing of false information alleging that ballots had been altered in the 2020 election. This example was accompanied by a photo (included in all conditions) showing a tweet falsely claiming that Dominion software had switched votes from Trump to Biden. When questioned about his sharing of this information, Cooper responded, “My position is very clear. We need to restore faith in the election process and that starts by asking questions on how we can improve election integrity”.
The control group scenario presented the participants with this neutral narrative, without introducing any specific manipulations related to race competitiveness, perceived cost, or partisan labels. This allowed for a comparison against the other seven treatment conditions, which systematically varied one or more of these factors.

3.2.3. Treatment 2: Close Race, High Perceived Cost, Partisan Labels Present

In Treatment 2, the participants were informed that the race was highly competitive, with most polls considering it a toss-up and political observers noting that the outcome would have substantial implications for a closely divided House of Representatives. The scenario was otherwise identical to the control group, but with an additional manipulation: the participants were informed that some observers suggested sharing misinformation could turn away political Independents from supporting Cooper’s campaign. In the narrowly divided Second District, the potential loss of independent voters was framed as having the capacity to alter the outcome of the race. Partisan labels were also present, explicitly identifying Cooper as a Republican and Frey as a Democrat.

3.2.4. Treatment 3: Close Race, High Perceived Cost, Partisan Labels Absent

Treatment 3 mirrored Treatment 2 in its emphasis on a highly competitive race and the perceived high cost of sharing fake news. However, in this condition, partisan labels were omitted. The participants were not informed of Cooper’s or Frey’s party affiliations, allowing the study to isolate the effects of the perceived cost of sharing fake news without the influence of partisan identity.

3.2.5. Treatment 4: No Mention of Race Competitiveness, No Mention of Perceived Cost, Partisan Labels Present

In Treatment 4, participants were presented with a scenario similar to the control group but without any mention of race competitiveness or the perceived cost of sharing fake news. Partisan labels were present, explicitly identifying Cooper as a Republican and Frey as a Democrat. This condition was designed to examine the effect of partisan labels alone on voter support, isolating it from the influences of race competitiveness and perceived cost.

3.2.6. Treatment 5: Close Race, No Mention of Perceived Cost, Partisan Labels Present

In Treatment 5, participants were informed that the race was highly competitive, with the same emphasis on the potential implications for a closely divided House of Representatives. However, there was no mention of the perceived cost of sharing fake news. Partisan labels were included, meaning participants were informed of Cooper’s and Frey’s party affiliations. This condition allowed the study to examine how race competitiveness influences voter support in the absence of perceived cost, while factoring in partisan identity.

3.2.7. Treatment 6: No Mention of Race Competitiveness or Perceived Cost, Partisan Labels Absent

In Treatment 6, participants were presented with the scenario without any mention of race competitiveness or the perceived cost of sharing fake news, and without partisan labels. Cooper and Frey were described without reference to their party affiliations. This treatment aimed to examine voter support in the absence of cues related to race competitiveness, perceived cost, or partisan identity, effectively isolating the baseline perception of the candidates based solely on their policy platforms and the issue of sharing fake news.

3.2.8. Treatment 7: Close Race, No Mention of Perceived Cost, No Partisan Labels

Treatment 7 focused on a close race scenario, similar to the previous conditions, but with two key differences: there was no mention of the perceived cost of sharing fake news, and partisan labels were also absent. This condition allowed for the examination of voter support in a competitive race without the influences of perceived cost or partisan identity.

3.2.9. Treatment 8: High Perceived Cost, No Partisan Labels

Finally, Treatment 8 presented the participants with a scenario similar to Treatment 7, but with an emphasis on the perceived cost of sharing fake news. The participants were informed that sharing misinformation could turn away political Independents from supporting Cooper’s campaign, potentially altering the race’s outcome in the narrowly divided Second District. However, partisan labels were omitted, allowing the study to focus on the interaction between the perceived cost of sharing fake news and voter support, without the influence of partisan identity.

3.2.10. Summary of Experimental Conditions

Across these eight treatment conditions, the study systematically varied the factors of race competitiveness, perceived cost of sharing fake news, and the presence of partisan labels. The control group served as a neutral baseline, with no experimental manipulations applied. By comparing the results across these different conditions, the study aimed to uncover the nuanced ways in which these variables interact to influence voter behavior in a tightly contested election. A detailed table summarizing the different combinations of treatment conditions can be found in Table 1.

4. Model

After exposure, the participants answered a series of questions about the candidate purportedly sharing fake news. The first model is based on their likelihood of voting for Cooper. Respondents were asked, “How likely are you to vote for Cooper, with 0 being ‘Not likely at all’ and 100 being ‘Very likely’”. Responses to this question formed the first dependent variable, CooperVote:
$\text{CooperVote}_i = \beta_0 + \beta_1 \text{Education}_i + \beta_2 \text{Social}_i + \beta_3 \text{Knowledge}_i + \beta_4 \text{Income}_i + \beta_5 \text{NewsAttention}_i + \beta_6 \text{Republican}_i + \beta_7 \text{Vote2020}_i + \beta_8 \text{Dummy}_i + \epsilon_i$
A second question asked the participants about their feelings toward Cooper: “How favorable is your impression of Cooper, with 0 being ‘Not favorable at all’ and 100 being ‘Very favorable’?” Responses to this question formed the second dependent variable, Favorability. As in the first model, the same independent variables were included in the analysis:
$\text{Favorability}_i = \beta_0 + \beta_1 \text{Education}_i + \beta_2 \text{Social}_i + \beta_3 \text{Knowledge}_i + \beta_4 \text{Income}_i + \beta_5 \text{NewsAttention}_i + \beta_6 \text{Republican}_i + \beta_7 \text{Vote2020}_i + \beta_8 \text{Dummy}_i + \epsilon_i$
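For readers who wish to see the estimation concretely, the two models above can be fit with ordinary least squares in a few lines. The sketch below uses Python’s statsmodels on simulated data; every column name and coefficient is an illustrative stand-in, not the study’s actual dataset or codebook.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data; all names and values are
# illustrative placeholders, not the study's actual variables.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "education": rng.integers(1, 10, n),       # 9-point education scale
    "social": rng.uniform(0, 7, n),            # social media days per week
    "knowledge": rng.integers(0, 6, n),        # 0-5 knowledge score
    "income": rng.integers(1, 10, n),          # 9-point income scale
    "news_attention": rng.integers(1, 6, n),   # 5-point Likert item
    "republican": rng.integers(0, 2, n),       # 1 = Republican / lean Rep.
    "vote_2020": rng.integers(0, 2, n),        # 2020 vote indicator
    "treatment": rng.integers(0, 2, n),        # treatment dummy
})
# Outcomes on 0-100 scales, generated with arbitrary coefficients.
df["cooper_vote"] = (40 + 5 * df["republican"] + 3 * df["treatment"]
                     - 2 * df["knowledge"] + rng.normal(0, 10, n)).clip(0, 100)
df["favorability"] = (df["cooper_vote"] + rng.normal(0, 5, n)).clip(0, 100)

# Identical right-hand side for both models, mirroring the two equations.
rhs = ("education + social + knowledge + income + news_attention + "
       "republican + vote_2020 + treatment")
model_vote = smf.ols(f"cooper_vote ~ {rhs}", data=df).fit()
model_fav = smf.ols(f"favorability ~ {rhs}", data=df).fit()
print(model_vote.params.round(2))
```

Because only the dependent variable changes between the two specifications, the same formula string is reused with a different left-hand side.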

4.1. Variables and Measures

This study examines several key variables to understand how the dissemination of fake news by candidates influences voter behavior and electoral outcomes. The independent variables considered in this research include race competitiveness, the perceived cost of sharing fake news, and the presence of partisan labels. Each of these variables is essential in exploring the dynamics of voter decision-making in the context of electoral campaigns characterized by misinformation.

4.2. Race Competitiveness

Race competitiveness is a key factor in this study, capturing whether an electoral race is perceived as highly competitive or not. An indicator variable was constructed to represent participants who were exposed to specific information suggesting that the race was highly competitive. In the treatment conditions where this variable was included, the participants were informed with the following text: “The majority of polls consider this race to be a toss-up. Political observers note that the outcome of the race will have substantial implications for a closely divided House of Representatives”. This wording was used to frame the election as a tight contest with significant stakes, thereby creating a perception of high competitiveness.
The indicator variable was operationalized as a binary measure, distinguishing between participants who were informed of the race’s competitiveness (those exposed to the statement) and those who were not (no mention of competitiveness).
This study posits that in highly competitive races, candidates may be more inclined to adopt riskier strategies, such as sharing fake news, in an effort to secure an electoral advantage. This assumption is supported by existing research, which suggests that tighter races may encourage more aggressive campaign tactics as candidates strive to sway undecided voters or solidify their base in a high-stakes environment (Geer 2012).

4.3. Perceived Cost of Sharing Fake News

Perceived cost of sharing fake news is a crucial variable in this study, designed to measure the potential electoral risks associated with a candidate’s use of misinformation. This variable specifically examines the likelihood that sharing fake news might alienate independent voters, which could be particularly damaging in a closely contested race.
An indicator variable was constructed to represent the participants who were exposed to the following information: “Some observers have suggested that sharing misinformation may turn away political Independents from supporting Cooper’s campaign. In the narrowly divided Second District, the loss of independents has the potential to alter the outcome of the race”. This text was presented in specific treatment conditions to suggest that the sharing of fake news by the candidate could deter political independents, thereby implying a high perceived cost.
The indicator variable was operationalized as a binary measure, distinguishing between participants who were informed of this potential risk (those exposed to the statement) and those who were not (no mention of the perceived cost).
This study hypothesizes that higher perceived costs associated with sharing fake news, as indicated by this variable, might deter candidates from employing such tactics. The potential loss of independent voters could outweigh the benefits of swaying partisan supporters.

4.4. Partisan Labels

Partisan labels are a significant variable in this study, as they highlight the candidates’ political affiliations, potentially influencing voter perception and behavior. An indicator variable was developed to represent the participants who were explicitly informed of the candidates’ partisan affiliations. In the treatment conditions where this variable was present, the participants were provided with the following text: “Republican Peter Cooper and Democrat Joseph Frey are locked in a tight battle to replace the retiring incumbent of the Second Congressional District”. This statement clearly identified each candidate’s party affiliation, potentially activating partisan biases among the participants.
The indicator variable was operationalized as a binary measure, differentiating between participants who were explicitly informed of the candidates’ political parties (those exposed to the statement) and those who were not (where the candidates were simply referred to as Peter Cooper and Joseph Frey, without any mention of their partisan labels).
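Because each of the three indicators is a deterministic function of the assigned condition, the dummies can be derived mechanically. The sketch below encodes one reading of the condition descriptions in Section 3.2 as (close race, perceived cost, partisan labels) triples; this mapping is my reconstruction from the text and should be checked against Table 1 before reuse.

```python
import pandas as pd

# One reading of the eight conditions as (close_race, perceived_cost,
# partisan_labels) dummies -- reconstructed from the condition
# descriptions, not copied from the paper's Table 1.
condition_map = {
    1: (0, 0, 1),  # control: labels present, no other manipulations
    2: (1, 1, 1),
    3: (1, 1, 0),
    4: (0, 0, 1),
    5: (1, 0, 1),
    6: (0, 0, 0),
    7: (1, 0, 0),
    8: (0, 1, 0),
}

def expand_conditions(conditions: pd.Series) -> pd.DataFrame:
    """Map assigned condition numbers to the three binary indicators."""
    dummies = conditions.map(condition_map).tolist()
    return pd.DataFrame(
        dummies,
        columns=["close_race", "perceived_cost", "partisan_labels"],
        index=conditions.index,
    )

assignments = pd.Series([1, 2, 3, 8])  # hypothetical condition assignments
expanded = expand_conditions(assignments)
print(expanded)
```

The resulting columns can then enter the regression models directly as the “Dummy” terms described above.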
The relevance of partisan labels to this study is underscored by numerous studies indicating that society’s most partisan individuals typically exhibit greater susceptibility to information that aligns with their partisan views (Levendusky 2013; Taber and Lodge 2006; Zaller 1992). This heightened susceptibility makes the inclusion of partisan labels particularly important when studying the impact of fake news exposure. This study hypothesizes that the inclusion of partisan labels may enhance or diminish support for a candidate, depending on the participant’s own political affiliation and the context of the race. This hypothesis is informed by prior research suggesting that voters often rely on party labels as cognitive shortcuts to make decisions, particularly in complex or highly competitive elections.

4.5. Political Knowledge

Political knowledge is an important variable in this study, as it reflects the participants’ awareness of key political facts and concepts. This variable is an aggregate scale derived from five political knowledge questions sourced from Delli Carpini and Keeter (1996). These questions assessed the respondents’ understanding of significant political roles, processes, and party dynamics in the United States. The participants were asked whether they knew what job or political office Kamala Harris held, whose responsibility it is to determine if a law is constitutional, how much of a majority is required for the U.S. Senate and House to override a presidential veto, which party currently has the most members in the U.S. House of Representatives, and which party is more conservative at the national level.
The participants’ responses to these questions were aggregated to create a composite political knowledge score. The mean score for the respondents was 3.374, with a standard deviation of 1.387, indicating that on average, the participants correctly answered just over three out of the five questions, with some variation around this average. The standard deviation suggests a moderate level of dispersion in political knowledge among the participants, meaning that while some individuals were well-informed, others had more limited political knowledge. This summary of political knowledge is crucial for understanding how well-informed the study’s participants were in the context of evaluating fake news and its potential impact on their electoral behavior.
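As a concrete illustration of how such a composite is built, the sketch below scores five items against an answer key and sums the results to a 0–5 scale; the item names, response strings, and answer key are hypothetical placeholders, not the instrument’s actual coding.

```python
import pandas as pd

# Hypothetical item names and answer key for the five knowledge
# questions (after Delli Carpini and Keeter 1996); placeholders only.
answer_key = {
    "harris_office": "vice president",
    "judicial_review": "supreme court",
    "veto_override": "two-thirds",
    "house_majority": "republican",
    "conservative_party": "republican",
}

# Two illustrative respondents.
responses = pd.DataFrame([
    {"harris_office": "vice president", "judicial_review": "supreme court",
     "veto_override": "two-thirds", "house_majority": "democrat",
     "conservative_party": "republican"},
    {"harris_office": "senator", "judicial_review": "supreme court",
     "veto_override": "simple majority", "house_majority": "republican",
     "conservative_party": "democrat"},
])

# Score each item 1/0 against the key, then sum to a 0-5 composite.
correct = pd.DataFrame(
    {item: (responses[item] == key).astype(int)
     for item, key in answer_key.items()}
)
responses["knowledge"] = correct.sum(axis=1)
print(responses["knowledge"].tolist())  # -> [4, 2]
```

In the actual data, the mean of this composite would be the 3.374 reported above.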

4.6. Republican Identity

Partisan identity is another important factor, as President Trump’s supporters reportedly consume fake news at significantly higher rates than other demographics (Guess et al. 2018). This may be influenced by conservative figureheads who frequently critique mainstream media (Benkler et al. 2017). Multiple studies confirm that Republicans, in general, are more inclined to believe misinformation (Miller et al. 2015; Pasek et al. 2015) and exhibit greater skepticism toward fact-checking websites than their Democratic counterparts (Allcott and Gentzkow 2017). Such tendencies might help explain why Republicans often struggle to discern fake news (Pennycook et al. 2021). To capture this, a binary Republican variable was constructed, coding self-identified Republicans and Republican leaners as 1 and self-identified Democrats and Democratic leaners as 0.

4.7. Income

Income is an important variable in this study, as it helps in understanding voter behavior. Several studies, including the seminal work by Campbell et al. (1960), have found that income has a distinct effect on electoral behavior. In this study, the participants were asked to indicate their total family income before taxes using a 9-point scale, with the lowest category being “Less than $10,000” and the highest being “$150,000 or more”.
This variable is essential for controlling socioeconomic status, which can influence voter behavior and their susceptibility to misinformation. By accounting for income, this study can better isolate the effects of other variables, such as political knowledge and partisanship, on voter behavior in the context of fake news.

4.8. Social Media Use

Social media use is a critical variable in this study, given the role of social media as a primary conduit for guiding news consumers to websites peddling fabricated articles (Guess et al. 2018, 2019b; Silverman and Singer-Vine 2016). Almost 20% of Americans now assert that their principal news source is social media (Mitchell et al. 2020). To account for the influence of social media on exposure to misinformation, the analysis incorporated self-reported metrics regarding the number of days per week (from 0 to 7) that the participants used various social media platforms, including TikTok, Facebook, Instagram, Twitter, Pinterest, Reddit, Snapchat, and YouTube.
The responses were aggregated to create an overall social media use scale. The mean score for the respondents was 4.617, with a standard deviation of 1.596, indicating that, on average, the participants used social media on approximately four to five days per week, with some variation around this average. This scale provides a valuable measure of social media engagement, which is crucial for understanding how exposure to fake news via these platforms might influence voter behavior.

4.9. News Attention

News attention is an important variable in this study, as it helps to measure the respondents’ awareness of current events, which can influence their susceptibility to fake news. Zaller (1992) argued that public opinion is heavily influenced by exposure to elite discourse on politics. His “receive–accept–sample” model attributes variation in individual-level political awareness to exposure to what opinion leaders disseminate through news sources.
To measure their awareness of current events, the respondents were asked how much attention they paid to news about national politics across different mediums, including television, radio, print, the internet, and other sources. The participants selected an option on a 5-point Likert scale ranging from “none at all” to “a great deal”. This variable provides a useful indicator of the level of engagement the participants have with political news, which is crucial for understanding how exposure to news content might influence their responses to fake news in the context of electoral behavior.

4.10. Education

Education is a strong and reliable predictor of political beliefs, playing a central role in key public opinion texts such as Zaller (1992), Campbell et al. (1960), and Delli Carpini and Keeter (1996). These works demonstrate that education significantly shapes individuals’ political views, influencing their susceptibility to persuasion, their ideological consistency, and their engagement with the political process.
To measure the educational attainment of the participants in this study, the respondents were asked to indicate the highest level of school they had completed. The response options included the following: “Grade 8 or lower”, “Some high school, no diploma”, “High school diploma or equivalent”, “Some college, no degree”, “Associate degree”, “Bachelor’s degree”, “Master’s degree”, “Professional degree”, and “Doctoral degree”. The mean level of education among respondents was 4.902, with a standard deviation of 1.304. This indicates that, on average, the participants had completed some college education, with a reasonable degree of variability around this average.
This detailed spectrum of educational backgrounds is crucial for understanding how education may have influenced the respondents’ political beliefs and their responses to fake news in the context of electoral behavior. The variability in education levels among the respondents suggests that while many of the participants have some higher education, there was also a diverse range of educational attainment represented in the sample.

5. Results

5.1. Model 1: Effects of the Competitive Nature of the Race

Table 2 shows an OLS regression model testing the first hypothesis. While predictors such as news attention, political knowledge, income, 2020 vote, and partisan identity have p-values less than 0.05, the dummy variable “close race” was not found to be significant. Given this, the first hypothesis, which posited that the more competitive the race is, the more favorable the sharing of fake news becomes in the eyes of supporters, is not supported.

5.2. Model 2: Perceived Cost of Sharing Fake News

As shown in Table 3, the dummy variable indicating support for the second hypothesis related to the perceived cost of sharing fake news is not significant at the 0.05 threshold. Based on this, the second hypothesis, that a candidate’s favorability may be depressed by the perceived high electoral cost of sharing misinformation, is not supported.

5.3. Model 3: Effects of Partisan Labels

In Table 4, the dummy variable “partisan labels” was statistically significant, with a p-value of less than 0.05 and a positive coefficient. This suggests that there is more support for a candidate sharing fake news when the partisan affiliation of the candidate is clear. Therefore, the third hypothesis, that partisan labels help alleviate the consequences of sharing false information, is supported by the data. All predictor variables were significant, but only one, political knowledge, had a negative coefficient. This suggests that the more the respondents knew about politics, the less likely they were to support a candidate who was purported to have shared misinformation.

5.4. Model 4: Interaction of Partisan Label Treatment and News Attention

Compared to the other predictor variables of the third model, one stands out as particularly concerning: self-reported attention to the news. This coefficient, which is both positive and significant at the p < 0.001 threshold, suggests that those who pay more attention to the news are more inclined to favor the candidate sharing false information. To better understand this relationship, the same model was re-estimated with an interaction term between partisan labels and attention to news. The resulting OLS model, shown in Table 5, yields an interaction term that is significant at p < 0.01. Because interaction coefficients are difficult to interpret directly, the linear predictions were rendered visually. Figure 1 shows that the greatest difference between the two sets of predicted values occurs among participants at the lowest end of the news attention scale: for those who pay little to no attention to the news, evaluations of Cooper rise substantially when partisan labels are included. Among those who pay more attention to the news, evaluations are also higher when partisan labels are present, albeit only marginally so. These results reflect the power of partisan heuristics in candidate evaluation and suggest that such heuristics may act as a defense against accusations that a candidate is propagating false information.
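This interaction analysis can be sketched as follows, again on simulated data with illustrative variable names. The model here includes only the interaction of interest rather than the paper’s full covariate set, so it shows the general approach (OLS with a labels × news-attention term, then linear predictions across the attention scale) rather than the exact specification behind Table 5 and Figure 1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data in which the labels effect shrinks as news attention
# rises, echoing the pattern described in the text (arbitrary numbers).
rng = np.random.default_rng(1)
n = 1000
attention = rng.integers(1, 6, n)   # 5-point news attention scale
labels = rng.integers(0, 2, n)      # partisan labels shown (1) or not (0)
favorability = (30 + 6 * attention + labels * (20 - 3 * attention)
                + rng.normal(0, 8, n))
df = pd.DataFrame({"favorability": favorability,
                   "attention": attention, "labels": labels})

# OLS with the labels x attention interaction term.
m = smf.ols("favorability ~ attention * labels", data=df).fit()

# Linear predictions over the attention scale, with and without labels.
grid = pd.DataFrame({"attention": np.tile(np.arange(1, 6), 2),
                     "labels": np.repeat([0, 1], 5)})
grid["predicted"] = m.predict(grid)
gap = (grid.loc[grid.labels == 1, "predicted"].values
       - grid.loc[grid.labels == 0, "predicted"].values)
print(gap.round(1))  # labels gap narrows as attention increases
```

Plotting the two rows of `grid` against attention would reproduce the style of figure described, with the predicted-value gap largest at the low end of the attention scale.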

6. Discussion

The results of this study provide new insights into the effects of candidates sharing fake news within the context of competitive elections, perceived electoral costs, and partisan identifiers. The findings reveal several nuanced dynamics regarding voter behavior, particularly under varying electoral conditions.

6.1. Hypothesis Evaluation and Broader Implications

The first hypothesis proposed that the more competitive the race is, the more favorable the sharing of fake news becomes in the eyes of supporters. However, the results from Model 1 indicated that the “close race” variable was not significant. This finding suggests that the competitiveness of the race does not significantly influence voter support for a candidate who shares misinformation. This is contrary to some expectations that close electoral margins would encourage riskier campaign tactics, such as the dissemination of fake news, to sway voter opinion. Future research could explore alternative conditions under which competitiveness might affect voter attitudes towards misinformation, possibly incorporating additional variables like media influence or voter skepticism.
The second hypothesis posited that a high perceived electoral cost of sharing fake news would lead to decreased favorability for the candidate among partisan supporters. The results from Model 2 indicated that this variable was also not significant, suggesting that concerns about losing independent voters may not substantially affect partisan supporters’ views on misinformation. This raises questions about the robustness of the perceived cost argument and indicates that partisans may be less influenced by concerns about broader electoral appeal. Future studies could delve into more nuanced measures of perceived cost or explore how different partisan identities might moderate this effect.
The third hypothesis stated that candidates who disseminate fake news would receive increased support when their actions are associated with clear partisan labels. The results from Model 3 support this hypothesis, with the “partisan labels” variable found to be significant and positively associated with voter support. This suggests that partisan labels can indeed serve as a heuristic that shields candidates from the negative consequences of sharing false information, reinforcing existing findings on the power of partisan identity in shaping political opinions (Taber and Lodge 2006; Zaller 1992). These findings underscore the importance of partisan identity in electoral behavior, especially in environments where misinformation is prevalent.

6.2. Limitations

6.2.1. Fake News Exposure Versus Effect

One key limitation of this study is its focus on the effects of fake news rather than its exposure. Many existing studies primarily emphasize how frequently individuals encounter fake news, the platforms where fake news proliferates, and the demographic characteristics of those most likely to be exposed. In contrast, this study aims to test the effects of fake news within the context of competitive electoral margins, the potential cost of turning away independents, and the presence of partisan identifiers. While this approach allows for a detailed analysis of how fake news influences voter behavior and perceptions under various conditions, it does not account for the broader landscape of fake news exposure. By concentrating solely on effects, this study may overlook the initial reach and frequency of fake news encounters, which are crucial in understanding its overall impact. Future research should aim to bridge this gap by integrating both exposure and effects to provide a more comprehensive understanding of fake news dynamics.

6.2.2. Lack of Racial Diversity

In the study’s sample, there was a notable overrepresentation of white participants. This racial homogeneity may limit the findings’ applicability to a wider, more diverse audience. While race plays a significant role in many political behaviors and attitudes, it has not been identified as a major determinant of belief in fake news in existing studies. Factors such as political knowledge, party affiliation, and voting history generally have a more pronounced influence on one’s vulnerability to misinformation. However, the skewed representation in this study highlights the need for future research that includes a more racially and ethnically varied sample, which would ensure a more holistic grasp of the impact of fake news across different demographic segments in the US.

6.2.3. Ambiguity of Affect

A potential limitation of this study is the ambiguity associated with the dependent variable that measured the participants’ affective responses (e.g., feelings, attitudes) towards a candidate sharing fake news. The construct of affect can be complex and multi-dimensional, leading to potential issues in accurately capturing participants’ true emotional responses. Specifically, the “target of the affect” might be unclear or problematic, as participants may have mixed feelings that are not easily captured by a single scale or question. This ambiguity highlights the need for a more nuanced approach in future research to better understand and measure the affective responses of the participants towards candidates disseminating fake news.

6.2.4. Descriptions of Candidates

Even when partisan labels are excluded, the participants might deduce a candidate’s affiliations from policy stances commonly associated with Democratic (like healthcare and education) or Republican (such as lowering taxes) platforms. This decision was intentional within the research design, given that the prevalence of fake news leans more towards the right both in terms of its creation (Guess et al. 2018; Lazer et al. 2018) and possibly due to ideological disparities between Democrats and Republicans (Guay et al. 2023). Yet, the notable effects of partisan labels in supporting one of this paper’s hypotheses hint that partisanship might not always be discernible solely based on policy positions.

6.3. Future Research

In primary elections, the lines of party allegiance are often blurred, and the interactions between candidates, supporters, and the media can be more complex. The way that fake news is perceived, shared, and reacted to might differ substantially in such an environment. Exploring the effects of fake news in a primary election scenario, where the dynamics of partisanship, intra-party competition, and voter behavior may vary, presents an intriguing area for future research. Such exploration could provide further insights into the role and impact of fake news in shaping electoral outcomes and influencing voter behavior across different electoral settings.
The research design employed in this study is constrained by its specific focus on one Democratic candidate and one Republican candidate. This approach has its limitations, as it may not fully capture the dynamics and effects of fake news in other electoral contexts, particularly in primary elections where multiple candidates from the same party are competing.
It may also be beneficial for subsequent research to tailor treatments to mimic specific platforms such as Facebook or Twitter. For instance, treatments could incorporate the recognizable light-blue masthead of Facebook, complete with simulated comments, likes, and share buttons. Moving forward, researchers might also explore the utility of interactive experiments over static text or altered images to more accurately reflect real-world consumption of news.

Implications for Media Literacy and Education

The findings of this study have important implications for media literacy and education, particularly in addressing the challenges posed by misinformation. As noted by Gretter and Yadav (2018), educators in the United States generally have positive attitudes toward integrating media and information literacy into the classroom and recognize its importance as an essential skill for students. However, they also reported that such efforts are not adequately rewarded or emphasized in evaluations by stakeholders like school administrators and parents. This highlights a critical systemic barrier: while there is a consensus on the need for enhanced media literacy, the current educational framework does not sufficiently incentivize or reward educators for focusing on this area.
Addressing this gap is vital for developing more informed citizens who are capable of discerning fake news from factual information. Future research could explore strategies for integrating media literacy more effectively into the curriculum, including changes in policy and educator evaluation criteria that reward critical media engagement skills. As the current study shows, partisan affiliations significantly influence the impact of misinformation, indicating a need for educational programs that focus not only on factual accuracy but also on recognizing and critically assessing partisan cues and biases.

7. Conclusions

The findings of this study contribute to a deeper understanding of how the sharing of fake news by candidates impacts voter support, particularly in closely contested races where partisan identity plays a crucial role. While President Trump’s influence on the Republican Party is undeniable, with his frequent dissemination of misinformation becoming a normalized tactic among elected officials (Freiman 2020), this study shows that the perceived cost of such behavior does not always deter partisan supporters.
The vignette used in this paper was taken from Representative Anna Paulina Luna’s personal Twitter account before she was elected to office (Luna 2020). This fake news item was selected for the survey experiment because it was easily identifiable as fake news and widely debunked in the media (Rogers 2022). However, Representative Luna’s political calculus also dovetails well with the results presented here. Some time after sharing the fake news item, Luna deleted the post, suggesting she believed it had the potential to damage her electoral chances. The results presented here lend that belief some support: when the sharing of false information was framed as something that could hurt a candidate’s chances, supporters cooled their opinion of Cooper. This aligns with other findings showing that partisans desire viability in their candidates (Abramowitz 1989; Bartels 1988; Brady 1993). Luna may have had a similar thought process when scrubbing her old social media accounts.
Yet, Luna’s standing as a favorite among conservative ranks and her solid alignment with President Trump’s stances suggest another dimension. Her strong ties to the MAGA movement might energize her Florida base, overshadowing any reservations regarding her dissemination of false information. This assertion finds its footing in the evidence emphasizing the power of partisan labels, particularly among infrequent voters. For political figures under scrutiny for such behaviors, doubling down on partisan affiliations may be a strategic move to rally wavering supporters or those typically detached from the electoral process (Armstrong-Taylor 2012).
Finally, there are important normative concerns about a polity in which this sort of behavior is becoming more routine. Particularly concerning is the emerging evidence that leveraging partisan identity could shield candidates from backlash for distributing false narratives. This seems to echo the stance of staunch partisans who persistently back figures like Trump, despite his recurrent misinformation campaigns. While envisioning a political arena where sharing fake news becomes standard might seem far-fetched, a decline in repercussions (in the form of electoral consequences) might lead an increasing number of candidates to adopt this tactic.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Moravian University on July 10 (protocol code 45 CFR 46.104(d)).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The original data presented in the study are openly available in [openICPSR] at https://www.openicpsr.org/openicpsr/project/209610/version/V1/view.

Acknowledgments

The author thanks Patrick C. Meirick for his valuable insight and feedback on a draft of this paper presented at the 2023 Meeting of the American Political Science Association in Los Angeles, California.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Abramowitz, Alan I. 1989. Viability, electability, and candidate choice in a presidential primary election: A test of competing models. Journal of Politics 51: 977–92. [Google Scholar] [CrossRef]
  2. Abramowitz, Alan I., and Kyle L. Saunders. 2008. Is polarization a myth? The Journal of Politics 70: 542–55. [Google Scholar] [CrossRef]
  3. Allcott, Hunt, and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives 31: 211–36. [Google Scholar] [CrossRef]
  4. Altay, Sasha, Anne-Sophie Hacquin, and Hugo Mercier. 2020. Why do so few people share fake news? It hurts their reputation. New Media & Society 22: 210–26. [Google Scholar] [CrossRef]
  5. Ansolabehere, Stephen, and Shanto Iyengar. 1996. Can the press monitor campaign advertising? An experimental study. Harvard International Journal of Press/Politics 1: 72–86. [Google Scholar] [CrossRef]
  6. Armstrong-Taylor, Paul. 2012. When do politicians lie? The B.E. Journal of Economic Analysis & Policy 13. [Google Scholar] [CrossRef]
  7. Auter, Zachary J., and Jeffrey A. Fine. 2016. Negative campaigning in the social media age. Political Behavior 38: 999–1020. [Google Scholar] [CrossRef]
  8. Baker, Peter. 2018. Bush made Willie Horton an issue in 1988, and the racial scars are still fresh. The New York Times, December 3. [Google Scholar]
  9. Bartels, Larry M. 1988. Presidential Primaries and the Dynamics of Public Choice. Princeton: Princeton University Press. [Google Scholar]
  10. Baumeister, Roy F., Ellen Bratslavsky, Catrin Finkenauer, and Kathleen D. Vohs. 2001. Bad is stronger than good. Review of General Psychology 5: 323–70. [Google Scholar] [CrossRef]
  11. Benkler, Yochai, Robert Farris, Hal Roberts, and Ethan Zuckerman. 2017. Study: Breitbart-led right-wing media ecosystem altered broader media agenda. Columbia Journalism Review, March 3. [Google Scholar]
  12. Berinsky, Adam J., Gregory A. Huber, and Gabriel S. Lenz. 2012. Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis 20: 351–68. [Google Scholar] [CrossRef]
  13. Boebert, Lauren. 2020. I Am Very Tired of Hearing about Fixing Election Fraud Going Forward. An Election Just Happened. There Was Fraud. Fix That One First! X (Formerly Twitter). Available online: https://x.com/laurenboebert/status/1342067703729827841?lang=en (accessed on 27 August 2024).
  14. Bradley, Samuel D., James R. Angelini, and Sungkyoung Lee. 2007. Psychophysiological and memory effects of negative political ads: Aversive, arousing, and well remembered. Journal of Advertising 36: 115–27. [Google Scholar] [CrossRef]
  15. Brady, Henry E. 1993. Knowledge, strategy, and momentum in presidential primaries. Political Analysis 94: 1–38. [Google Scholar] [CrossRef]
  16. Buchanan, Tom. 2020. Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS ONE 15: e0239666. [Google Scholar] [CrossRef] [PubMed]
  17. Bullock, John G. 2007. Experiments on Partisanship and Public Opinion: Party Cues, False Beliefs, and Bayesian Updating. Ph.D. thesis, Stanford University, Stanford, CA, USA. [Google Scholar]
  18. Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American Voter. Hoboken: John Wiley & Sons. [Google Scholar]
  19. Carrion-Alvarez, Diego, and Perla X. Tijerina-Salina. 2020. Fake news in COVID-19: A perspective. Health Promotion Perspectives 10: 290–91. [Google Scholar] [CrossRef] [PubMed]
  20. Damore, David F. 2002. Candidate strategy and the decision to go negative. Political Research Quarterly 55: 669–85. [Google Scholar] [CrossRef]
  21. Delli Carpini, Michael X., and Scott Keeter. 1996. What Americans Know About Politics and Why It Matters. New Haven: Yale University Press. [Google Scholar]
  22. Druckman, James N., Martin J. Kifer, and Michael Parkin. 2010. Timeless strategy meets new medium: Going negative on congressional campaign web sites, 2002–2006. Political Communication 27: 88–103. [Google Scholar] [CrossRef]
  23. Dwoskin, Elizabeth. 2021. Misinformation on Facebook got six times more clicks than factual news during the 2020 election, study says. The Washington Post. September 4. Available online: https://www.washingtonpost.com/technology/2021/09/03/facebook-misinformation-nyu-study/ (accessed on 27 August 2024).
  24. Fahey, James J. 2023. The big lie: Expressive responding and misperceptions in the United States. Journal of Experimental Political Science 10: 267–78. [Google Scholar] [CrossRef]
  25. Fourney, Adam, Miklos Z. Racz, Gireeja Ranade, Markus Mobius, and Eric Horvitz. 2017. Geographic and temporal trends in fake news consumption during the 2016 US presidential election. Paper presented at the 2017 ACM Conference on Information and Knowledge Management, Singapore, November 6–10; pp. 2071–74. [Google Scholar]
  26. Freiman, Jordan. 2020. Republican congressman shares fake photo of Obama with Iranian president on Twitter. CBS News, January. [Google Scholar]
  27. Gardner, Amy. 2022. A majority of GOP nominees deny or question the 2020 election results: Experts say their dominance in the party poses a threat to the country’s democratic principles and jeopardizes the integrity of future votes. The Washington Post, October 12. [Google Scholar]
  28. Geer, John G. 2008. In Defense of Negativity: Attack Ads in Presidential Campaigns. Chicago: University of Chicago Press. [Google Scholar]
  29. Geer, John G. 2012. The news media and the rise of negativity in presidential campaigns. PS: Political Science & Politics 45: 422–27. [Google Scholar]
  30. Gretter, Sarah, and Aman Yadav. 2018. What do preservice teachers think about teaching media literacy?: An exploratory study using the theory of planned behavior. Journal of Media Literacy Education 10: 104–23. [Google Scholar] [CrossRef]
  31. Guay, Brian, Adam J. Berinsky, Gordon Pennycook, and David Rand. 2023. Examining partisan asymmetries in fake news sharing and the efficacy of accuracy prompt interventions. Working Paper. PsyArXiv, May 25. [Google Scholar] [CrossRef]
  32. Guess, Andrew, Brendan Nyhan, and Jason Reifler. 2017. Inside the fake news bubble? Consumption of online fake news in the 2016 U.S. election. Unpublished Manuscript. [Google Scholar]
  33. Guess, Andrew, Brendan Nyhan, and Jason Reifler. 2018. Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 US presidential campaign. Working Paper. European Research Council 9: 4. [Google Scholar]
  34. Guess, Andrew, Jonathan Nagler, and Joshua Tucker. 2019a. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances 5: eaau4586. [Google Scholar] [CrossRef]
  35. Guess, Andrew, Kevin Munger, Jonathan Nagler, and Joshua Tucker. 2019b. How accurate are survey responses on social media and politics? Political Communication 36: 241–58. [Google Scholar] [CrossRef]
  36. Hale, Jon F., Jeffrey C. Fox, and Rick Farmer. 1996. Negative advertisements in us senate campaigns: The influence of campaign context. Social Science Quarterly 77: 329–43. [Google Scholar]
  37. Humprecht, Edda, Frank Esser, and Peter Van Aelst. 2020. Resilience to online disinformation: A framework for cross-national comparative research. The International Journal of Press/Politics 25: 493–516. [Google Scholar] [CrossRef]
  38. Jacobson, Gary C. 2021. Donald Trump’s big lie and the future of the Republican Party. Presidential Studies Quarterly 51: 273–89. [Google Scholar] [CrossRef]
  39. Jamieson, Kathleen Hall. 1993. Dirty Politics: Deception, Distraction, and Democracy. Oxford: Oxford University Press. [Google Scholar]
  40. Johnson, Dennis W. 2016. Democracy for Hire: A History of American Political Consulting. Oxford: Oxford University Press. [Google Scholar]
  41. Kornberg, Maya, Coryn Grange, and Alicia Mergenthaler. 2023. The dark underbelly of the election conversation: Analysis of candidate social media posts during the 2022 midterm election. Unpublished manuscript. August 7. [Google Scholar]
  42. Kunda, Ziva. 1990. The case for motivated reasoning. Psychological Bulletin 108: 480. [Google Scholar] [CrossRef]
  43. Lau, Richard R., and Ivy Brown Rovner. 2009. Negative campaigning. Annual Review of Political Science 12: 285–306. [Google Scholar] [CrossRef]
  44. Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, and et al. 2018. The science of fake news. Science 359: 1094–96. [Google Scholar] [CrossRef]
  45. Lee, Bruce Y. 2021. Rep. Lauren Boebert falsely tweets how to make COVID-19 delta variant ‘go away’. Forbes. December 26. Available online: https://www.forbes.com/sites/brucelee/2021/07/12/rep-lauren-boebert-tweets-how-to-make-covid-19-delta-variant-go-away/ (accessed on 27 August 2024).
  46. Levay, Kevin E., Jeremy Freese, and James N. Druckman. 2016. The demographic and political composition of mechanical turk samples. Sage Open 6: 2158244016636433. [Google Scholar] [CrossRef]
  47. Levendusky, Matthew S. 2013. Why do partisan media polarize viewers? American Journal of Political Science 57: 611–23. [Google Scholar] [CrossRef]
  48. Lindskold, Svenn, Pamela S. Walters, and Helen Koutsourais. 1983. Cooperators, competitors, and response to GRIT. Journal of Conflict Resolution 27: 521–32. [Google Scholar] [CrossRef]
  49. Luna, Anna Paulina. 2020. Below are states & #’s of votes Dominion software switched from Trump to Biden! Twitter. November. [Google Scholar]
  50. Macdonald, David, and Taylor Brown. 2022. Republicans share fake news more than Democrats. Here’s why. The Washington Post, August 29. [Google Scholar]
  51. Miller, Joanne M., Kyle L. Saunders, and Christina E. Farhart. 2015. Conspiracy endorsement as motivated reasoning: The moderating roles of political knowledge and trust. American Journal of Political Science 60: 824–44. [Google Scholar] [CrossRef]
  52. Mitchell, Amy, Mark Jurkowitz, J. Baxter Oliphant, and Elisa Shearer. 2020. Americans who mainly get their news on social media are less engaged, less knowledgeable. Pew Research Center, July 30. [Google Scholar]
  53. Moore, Ryan, Ross Dahlke, and Jeffrey Hancock. 2022. Exposure to untrustworthy websites in the 2020 U.S. election. Nature Human Behaviour 7: 1096–105. [Google Scholar] [CrossRef] [PubMed]
  54. Mullinix, Kevin J., Thomas J. Leeper, James N. Druckman, and Jeremy Freese. 2015. The generalizability of survey experiments. Journal of Experimental Political Science 2: 109–38. [Google Scholar] [CrossRef]
  55. Munger, Kevin, Mario Luca, Jonathan Nagler, and Joshua Tucker. 2018. Everyone on mechanical turk is above a threshold of digital literacy: Sampling strategies for studying digital media effects. Available online: http://kmunger.github.io/pdfs/clickbait_mturk.pdf (accessed on 27 August 2024).
  56. Nyhan, Brendan, Ethan Porter, Jason Reifler, and Thomas J. Wood. 2020. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior 42: 939–60. [Google Scholar] [CrossRef]
  57. Parsons, Lynn Hudson. 2009. The Birth of Modern Politics: Andrew Jackson, John Quincy Adams, and the Election of 1828. Oxford: Oxford University Press. [Google Scholar]
  58. Pasek, Josh, Gaurav Sood, and Jon A. Krosnick. 2015. Misinformed about the affordable care act? leveraging certainty to assess the prevalence of misperceptions. Journal of Communication 65: 660–73. [Google Scholar] [CrossRef]
  59. Pennycook, Gordon, Tyrone D. Cannon, and David G. Rand. 2018. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General 147: 1865. [Google Scholar] [CrossRef]
  60. Pennycook, Gordon, Zachary Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. 2021. Shifting attention to accuracy can reduce misinformation online. Nature 592: 590–95. [Google Scholar] [CrossRef]
  61. Peterson, David A. M., and Paul A. Djupe. 2005. When primary campaigns go negative: The determinants of campaign negativity. Political Research Quarterly 58: 45–54. [Google Scholar] [CrossRef]
  62. Roeloffs, Mary Whitfill. 2024. Boebert floats wild anti-vaccine conspiracy as cause of Biden’s ‘decline’. Forbes. July 1. Available online: https://www.forbes.com/sites/maryroeloffs/2024/07/11/boebert-floats-wild-anti-vaccine-conspiracy-as-cause-of-bidens-decline/ (accessed on 27 August 2024).
  63. Rogers, Kaleigh. 2022. Most candidates who think 2020 was rigged are probably going to win in November. FiveThirtyEight, October 25. [Google Scholar]
  64. Silverman, Craig. 2016. This analysis shows how fake election news stories outperformed real news on facebook. Buzzfeed, November 16. [Google Scholar]
  65. Silverman, Craig, and Jeremy Singer-Vine. 2016. Most Americans who see fake news believe it, new survey says. BuzzFeed News, December 6. [Google Scholar]
  66. Sydell, Laura. 2016. We tracked down a fake-news creator in the suburbs. Here’s what we learned. NPR, November 23. [Google Scholar]
  67. Taber, Charles S., and Milton Lodge. 2006. Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science 50: 755–69. [Google Scholar] [CrossRef]
  68. Thorson, Emily. 2016. Belief echoes: The persistent effects of corrected misinformation. Political Communication 33: 460–80. [Google Scholar] [CrossRef]
  69. Ulloa, Jazmine, and Neil Vigdor. 2022. Lauren boebert, far-right firebrand, wins re-election after recount. The New York Times, December 12. [Google Scholar]
  70. Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science 359: 1146–51. [Google Scholar] [CrossRef] [PubMed]
  71. Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press. [Google Scholar]
Figure 1. Effects of partisan labels on Cooper’s favorability.
Table 1. Treatment conditions in the experiment.

Treatment | Race Competitiveness | Perceived Cost | Partisan Labels
Control (T1) | Not Mentioned | Not Mentioned | Present
T2 | Close | High | Present
T3 | Close | High | Absent
T4 | Not Mentioned | Not Mentioned | Present
T5 | Close | Not Mentioned | Present
T6 | Not Mentioned | Not Mentioned | Absent
T7 | Close | Not Mentioned | Absent
T8 | Not Mentioned | High | Absent
Table 2. Model 1: OLS estimation—effects of close race on likelihood of voting for Cooper.

Variable | Coefficient | Std. Err.
Close race indicator variable | −0.735 | 0.731
News attention | 1.383 ** | 0.361
Avg. time on social media | 0.458 † | 0.246
Political knowledge | −0.891 ** | 0.279
Income | 1.113 ** | 0.239
Education | 0.554 † | 0.288
Republican | 1.614 * | 0.769
Intercept | 59.886 ** | 2.422
N = 2211; R² = 0.031; F(7, 2203) = 10.067
Significance levels: †: 10%; *: 5%; **: 1%.
Table 3. Model 2: OLS estimation—effects of high perceived cost of sharing fake news on favorability of Cooper.

Variable | Coefficient | Std. Err.
High cost indicator variable | −0.804 | 0.678
News attention | 1.227 ** | 0.334
Avg. time on social media | 0.503 * | 0.228
Political knowledge | −1.234 ** | 0.259
Income | 1.017 ** | 0.221
Education | 0.563 * | 0.267
Republican | 1.873 ** | 0.713
Intercept | 62.450 ** | 2.242
N = 2211; R² = 0.037; F(7, 2203) = 12.04
Significance levels: *: 5%; **: 1%.
Table 4. Model 3: OLS estimation—effects of partisan labels on favorability of Cooper.

Variable | Coefficient | Std. Err.
Partisan label indicator variable | 1.476 * | 0.677
News attention | 1.245 ** | 0.334
Avg. time on social media | 0.504 * | 0.228
Political knowledge | −1.219 ** | 0.259
Income | 1.006 ** | 0.221
Education | 0.561 * | 0.267
Republican | 1.799 * | 0.713
Intercept | 61.304 ** | 2.239
N = 2211; R² = 0.038; F(7, 2203) = 12.535
Significance levels: *: 5%; **: 1%.
Table 5. Model 4: OLS estimation—interaction effects of partisan labels and news attention on favorability of Cooper.

Variable | Coefficient | Std. Err.
Partisan label indicator variable | 15.406 * | 7.843
News attention (low) | 7.963 | 5.197
News attention (moderate–low) | 7.441 | 5.160
News attention (moderate–high) | 8.837 † | 5.098
News attention (high) | 12.206 * | 5.102
Interaction–partisan indicator (low news attn.) | −11.993 | 8.048
Interaction–partisan indicator (moderate–low news attn.) | −13.844 † | 7.997
Interaction–partisan indicator (moderate–high news attn.) | −14.451 † | 7.923
Interaction–partisan indicator (high news attn.) | −14.332 † | 7.944
Avg. time on social media | 0.476 * | 0.227
Political knowledge | −1.111 ** | 0.262
Income | 1.066 ** | 0.222
Education | 0.426 | 0.270
Republican | 1.685 * | 0.714
Intercept | 55.565 ** | 5.387
N = 2211; R² = 0.045; F(14, 2196) = 7.466
Significance levels: †: 10%; *: 5%; **: 1%.
