Article

Mutual Influence of Users Credibility and News Spreading in Online Social Networks

by
Vincenza Carchiolo
1,*,
Alessandro Longheu
2,
Michele Malgeri
2,
Giuseppe Mangioni
2 and
Marialaura Previti
2
1
Dipartimento di Matematica e Informatica (DMI), Università di Catania, 95131 Catania, Italy
2
Dipartimento di Ingegneria Elettrica Elettronica e Informatica (DIEEI), Università di Catania, 95131 Catania, Italy
*
Author to whom correspondence should be addressed.
Future Internet 2021, 13(5), 107; https://doi.org/10.3390/fi13050107
Submission received: 16 March 2021 / Revised: 23 April 2021 / Accepted: 24 April 2021 / Published: 25 April 2021
(This article belongs to the Special Issue Digital and Social Media in the Disinformation Age)

Abstract:
Real-time news spreading is now available to everyone, especially thanks to Online Social Networks (OSNs), which easily support gatewatching: the collective intelligence and knowledge of dedicated communities are exploited to filter the news flow and to highlight and debate relevant topics. The main drawback is that the responsibility for judging the content and accuracy of information moves from editors and journalists to online information users, with the side effect of the potential growth of fake news. In such a scenario, the trustworthiness of information providers cannot be overlooked anymore; rather, it increasingly helps in discerning real news from fakes. In this paper we evaluate how trustworthiness among OSN users influences the news spreading process. To this purpose, we model news spreading as a Susceptible-Infected-Recovered (SIR) process in the OSN, adding the contribution of user credibility as a layer on top of the OSN. Simulations with both fake and true news spreading on such a multiplex network show that credibility improves the diffusion of real news while limiting the propagation of fakes. The proposed approach can also be extended to real social networks.

1. Introduction

Online Social Networks (OSNs) have recently increased the ability of individuals to find information in real time, gathering news about disparate topics both in traditional situations and in extreme circumstances such as risk and/or crisis scenarios. In these cases, where modern communication technologies may be temporarily unavailable or reports from news outlet correspondents are delayed, information coming from eyewitnesses who report events shortly after they happen through posts in OSNs becomes a significant alternative. This phenomenon, called gatewatching [1], is a form of news reporting and commenting that does not operate from an authoritative position; rather, it works by exploiting the collective intelligence and knowledge of dedicated communities to filter the news flow and to highlight and debate topics pertaining to those communities.
Frequently, even traditional major press agencies collect information from these sources prior to breaking news and providing information; for example, such sources made it possible to recognize outbreaks of infectious diseases in real time [2], as in the case of Covid-19 [3], or to detect the spread of seasonal influenza early in order to organize containment measures [4,5,6]. OSNs were also used in the recent past to detect information about natural disasters, both to assist in managing rescue operations and to promptly warn affected populations, helping as many people as possible [7,8,9,10,11].
Beyond these extraordinary scenarios, the increasing amount of information available through these communities and related platforms often pushes people to prefer them over traditional channels to stay informed; therefore the responsibility for judging the content and accuracy of information moves from editors and journalists to online information users [12]. The main problem is that the information provided in OSNs often lacks professional gatekeepers to check content, i.e., people devoted to assessing the veracity and relevance of news in order to choose which ones are worth spreading publicly. While this can be interpreted as a natural consequence of the growth of user-generated content [13], a potential side effect is the increase of fake news, because single users are usually less reliable than authoritative official sources in providing information [14].
Since such news lacks the traditional markers used to determine source credibility, consumers become more responsible for making decisions about the credibility of information online, and this results in a shift towards the trustworthiness of information providers, which cannot be overlooked anymore and can effectively help in discriminating real from fake news. (Note that in this paper we will use the terms trust and credibility interchangeably.)
Trust in OSNs is actually influenced by several factors [15,16]:
  • the propensity to trust, affected by each individual’s psychology and independent of other OSN users;
  • the set of mutual past interactions between users;
  • the opinion each user has about the veracity of others’ posts; note that acquaintance between two individuals does not imply mutual trust, rather trust can be dynamically inferred by observing the users’ social behaviors over time.
In this paper, taking inspiration from [16], we evaluate how trustworthiness among OSN users influences news spreading and vice versa. In particular, we first model news spreading as an epidemic process that can be effectively studied with the Susceptible-Infected-Recovered (SIR) approach [17,18]. We added to the original formulation provided by Newman [19] the contribution of OSN user credibility (a.k.a. trust [20,21]) to reproduce users’ real behavior, since people’s willingness to spread news is actually affected by the trustworthiness of their information sources.
We model the credibility of users with an additional weighted directed layer, namely the Credibility network, that complements the acquaintances network naturally present in OSNs. It has the same nodes as the acquaintances network, with a pair of directed edges for each undirected edge in the acquaintances network. The weight (i.e., credibility) measures how much an individual trusts his neighbor and vice versa, and it changes according to an update mechanism triggered every time a piece of news is spread over the corresponding acquaintances network. The main novelty of our work is the way we model news spreading: it happens in the OSN network, but it is influenced by the Credibility network. In turn, the OSN spreading process impacts the Credibility network by altering the weights of its links. This model can be seen as the result of two mutually coupled dynamic processes.
To assess the proposed model, we implemented a simulator that spreads true and fake news, first applying the Newman model in its original formulation and then introducing the contribution of the credibility network. The simulator was developed from scratch using Python and the Matplotlib library.
Simulations show that, thanks to the proposed multiplex network model, users’ trust can be effectively used to improve the diffusion of real news while limiting the propagation of fakes.
The organization of this paper is as follows. In Section 2, we present relevant works explaining how trust affects social relations in OSNs. In Section 3, we briefly review our previous works where news spreading in OSNs is addressed and trust among users is first introduced. We then present our simulations, discussing the results and the effectiveness of our approach. In the last section, final remarks and future works are outlined.

2. Related Works

During the last two decades, various attempts have been carried out to evaluate how trust affects social relations within OSNs. In the following we discuss some works addressing such issues.
In his book, Buskens [22] discusses two possible explanations for the emergence of trust via social networks: if OSN members can sanction others’ untrustworthiness, the latter may refrain from acting in an untrustworthy manner; and if members are informed regularly about trustworthy behaviors, trust will grow among these actors. A unique combination of formal model building and empirical methodology is used to derive and test hypotheses about the effects of networks on trust. Some models combine elements from game theory and network analysis [23,24]. Indeed, while OSNs represent a good setting for studying the spread of news, by analogy with infection in human and animal populations, the inherent high dimensionality of networks is mathematically and computationally challenging [25].
The role of trust in OSNs is an important matter and can be studied from different points of view. Grabner-Kräuter and Bitter [26] aim at clarifying the role of trust and the relevance of facets of trust, social capital and embeddedness in OSNs. They view trust as a structurally embedded asset, a property of relationships and networks that helps to shape interaction patterns within OSNs, and provide a study of the mechanisms at work during the stages of building a trust relationship. Our work relies on trust methods to manage the Credibility layer and, specifically, we refer to the evolving mechanism of trust over time that “is dominated by trust based on the trustor’s knowledge and understanding about the trusted party resulting from past interactions” [27]. In general, the trust category we refer to in this paper belongs to Social Capital concepts, i.e., the benefit obtained from social relationships and interactions that, in our case, is related to what extent a news item will be considered fake [26].
Goldbeck [28] presented an algorithm, named “Tidal Trust”, to infer trust relationships in Semantic Web-based social networks. She described the social network analysis that leads to this algorithm and assessed its accuracy within two networks. The benefit of this analysis is its integration into applications to enhance users’ experiences by endorsing their social preferences.
An accurate trust assessment algorithm was developed by Kuter and Goldbeck [29]: they proposed a trust inference algorithm, named “SUNNY”, that uses a probabilistic sampling technique to estimate the confidence in the trust information from some designated sources. SUNNY performs a probabilistic logic sampling procedure over the Bayesian Network generated by their representation mapping to estimate lower and upper bounds on the confidence values, which are then used as heuristics to establish trust values of the nodes of the Bayesian Network.
Taherian et al. [30] proposed a new inference algorithm for OSNs called “RN-Trust” that uses the resistive network concept to represent trust networks. Each node in the trust network is mapped to a node in the resistive network; for each pair of adjacent nodes in the trust network, a resistor is placed between their corresponding nodes in the resistive network, and the equivalent resistance of the resulting electric circuit is used to calculate the trust value from the source to the target.
Liu et al. [31] proposed a complex trust-oriented social network structure which includes complex social relationships and recommendation rules in a specific domain and they also proposed a Bayesian network based trust inference mechanism in complex social networks.
Adali et al. [32] presented a trust assessment algorithm determined from the communication behavior of social network users, in particular taking into account conversations and propagation of information from one person to another.
Caverlee et al. [33] created a reputation-based trust aggregation framework for supporting tamper-resilient trust establishment in online communities, called ”SocialTrust”, which provides a network-wide perspective on the trust of all users, and they also created a personalized extension, called ”mySocialTrust”, which provides a user-centric trust perspective that can be optimized for individual users within the network.
Wu and Wang [34] proposed a model describing how to infer trust values by combining web usage mining and network topology. They constructed a user-oriented trust network based on the small-world property, enhanced the trust metric by considering the social influence of the middle connectors, incorporated users’ potential relationships estimated from their interaction frequency, and generated a compound weight to re-evaluate the similarity matrix combined with users’ social trust relationships.
Jiang, Wang and Wu [35] focused on generating small trusted graphs for large OSNs and showed how to preprocess a social network by developing a user-domain based trusted acquaintance chain discovery algorithm. The algorithm used the small-world characteristics of online social networks and took advantage of weak ties; finally, they presented how to build a trust network and generate a trusted graph with an adjustable-width breadth-first search algorithm.
Jiang et al. [36] analyzed the similarity between trust propagation and network flow, converted a trust evaluation task with path dependence and trust decay into a generalized network flow problem and proposed a computational approach for calculating trust, named ”GFTrust”, based on a modified network flow model with leakage.
Feng et al. used a similar approach to study attention evolution in OSNs [37]. They found that most viral spreading of popular messages follows a fractional SIR model, which means that an individual with a large number of friends needs more repeated exposures before sharing a message. Indeed, this implies that the probability an individual will share a message is proportional to the fraction of his/her friends that shared the same message. They suggest that a message with attractiveness below a critical threshold will not spread to a significant fraction of the OSN population. Similarly, in our work we assume that such a threshold exists and add it to the spreading model.
Other authors, e.g., Weng et al. [38], considered the community structure and investigated its impact on the spreading of memes. Others used epidemic models to study the dynamics of information cascades; for instance, in [39] Jin et al. used an epidemic model based on four parameters and tried to fit empirical data.
Sreenivasan et al. [40] viewed the OSN as an undirected graph and used it to drive the dynamics of the probability of new message generation, message forwarding and message buffer length; they also studied how the popularity of a message is statistically distributed.
Online users show a typical pattern of behavior: they tend to acquire information adhering to their worldviews [41] and to ignore dissenting information [42], which increases the probability of the spreading of misinformation [43], so that fake news and inaccurate information may spread faster and wider than fact-based news [44]. The behaviour of unaware users seriously impacts fake-news diffusion in OSNs [45].
In the last decades, signed networks attracted interest in epidemic spreading; a signed network is commonly recognized as a class of graphs where links between nodes are labeled with positive or negative signs. Recently Miu et al. [46] proposed a type of information spreading with relative attributes in signed networks based on the SI model. In their model, information is spread with a probability that varies according to its relative attribute at each node, meaning that individuals prefer to spread good rather than bad information. Moreover, they found that information spreading is influenced by both the network content and the transformation mechanism of the relative attributes.
Saeedian et al. [47] investigated epidemic spreading in evolving signed networks in the context of energy landscapes. They referred to the susceptible-infected (SI) model to configure complete signed graphs in which negative edges do not transmit the disease [48] and introduced an energy function to describe the relationship between structural balance and the spreading process.
Another approach proposed in the literature based on epidemic theory takes into account the limited capacity of users to consume information, which leads to a sort of competition among news items. In [49,50] a recommendation model where agents can remember and forget, also incorporating advertisement, is proposed, while Huberman [51] considered content novelty and popularity as the two major factors for attracting attention.
In [52] the authors found that attention fades over a natural time scale, and in [53] the “collective attention” is studied [54,55], focusing on its sudden changes after a shocking event. Several authors proposed agent-based models inspired by epidemic processes where information is passed along the edges of a network [56,57,58].
Besides, it is quite difficult to capture all possible interactions among people, whilst a two-layered network may take into account more complex interactions such as, for instance, the credibility of users and the level of inaccuracy of a news item. Shu et al. [59] presented a numerical study of information spreading in an interdependent spatial network composed of two identical networks. Wang et al. [60] explored the effect of inter-layer contagion on spreading dynamics in multiplex networks. Yu et al. [61] explored how a limited contact capacity influences information spreading on a two-layered multiplex network.
An additional emerging challenge is the strong impact of malicious accounts that amplify the spread of fake news and their credibility, mainly because they include social bots and cyborg users [62]. Social bots try to increase the popularity of a news item, creating the echo chamber effect for the propagation of fake news. In the same paper, a preliminary study performed using FakeNewsNet (a dataset that the authors claim to be one of the first publicly available comprehensive repositories containing news content, social context, and spatiotemporal information) shows that bot users are more likely to be involved in the fake news spreading process.
Finally, news dissemination on COVID-19 strongly increased in recent months, and the factors that lead to the sharing of this misinformation could have a big impact on people; for instance, in [63] the authors adopted the Uses and Gratifications perspective [64] to provide another view for understanding the global problem of fake news proliferation that occurred in Nigeria. Cinelli et al. [65] present a massive data analysis related to COVID-19 on Twitter, Instagram, YouTube, Reddit and Gab. They analyze engagement and interest in the COVID-19 topic using epidemic models to study reproduction numbers over the platforms. Furthermore, they show that information from both fact-checked and inaccurate sources exhibits the same spreading patterns.

3. Trust Based News Spreading Model

In this work we model trust among users as a credibility factor influencing news spreading in an OSN. To do that we introduce a new network, called Credibility Network (CN) [16,66], that forms a sort of overlay of the OSN. The CN is a weighted directed graph, where nodes are users and a weighted directed link $C_{ij}$ between users i and j represents the direct credibility, i.e., how much user i trusts user j. Note that in general $C_{ij} \neq C_{ji}$. In this work, we assume $C_{ij}$ ranges over $[0, 1]$, where 0 models fully untrusted targets (i.e., i believes that j always provides fake news) and 1 models fully trusted targets (i believes j always spreads true news). As said previously, the CN is an overlay of the OSN, meaning that each node (user) in the OSN corresponds to a node in the CN; therefore the two layers have the same number of nodes. As shown in Figure 1, both networks can be seen as layers of a two-layer multiplex network [67].
Note that inter-layer links connect the same user across layers.
In our model, the news spreading process happens in the OSN layer, but it is influenced by the CN layer. On the other hand, as we will detail later, the OSN layer spreading process impacts the CN layer by (indirectly) changing the weights of the credibility network. Essentially, we have two coupled processes influencing each other, as detailed in the next sections.

3.1. OSN Layer

Here we introduce the model adopted to describe how news propagates in the OSN layer. In particular, we applied a modified version of the Susceptible-Infected-Recovered (SIR) model [18,68], usually adopted in epidemic spreading processes, where:
  • susceptible status corresponds to ignorant status, i.e., OSN users who belong to this category are not aware of the news;
  • infected status corresponds to spreader status, i.e., OSN users who are aware of the news and try to spread it through sharing;
  • recovered status corresponds to stifler status, i.e., OSN users who are aware of the news but are not interested in propagating it. Considering that from OSN data analysis it is not easy to distinguish who viewed the news without sharing it from who was not connected at the news’ publishing time, we believe that this category should include users who have at least one neighbor who shared the news and who did not re-share it.
We tailor to our context the transmissibility formula of Newman’s epidemic spreading model [19]:

$$T_{ij} = 1 - e^{-\beta_{ij} \tau_i} \quad (1)$$

where $T_{ij}$ is the transmission probability, i.e., the probability that individual i infects his neighbor j, $\beta_{ij} \in [0, 1]$ is the contact rate between i and j, and $\tau_i$ is the infection time, that is, the time during which i can infect his neighborhood.
In the OSN context, we reinterpret $T_{ij}$ as the probability that i spreads a news item and j receives and propagates it, whereas $\beta_{ij}$ becomes the fraction of time during which i and j are simultaneously connected to the same OSN, and finally $\tau_i$ is viewed as the time for which the news propagated by i is visible in the foreground of the OSN home page (this actually can depend on several factors and can also change for different kinds of OSN home pages).
Note that, while in real physical diseases i is naturally contagious, in the news spreading scenario being infected means that j believes the news received from i is reliable and therefore is also willing to transmit it over the network (via its neighborhood).
In our proposal, we also introduce the credibility $C_{ji}$ into Newman’s Equation (1) as follows:

$$T_{ij} = 1 - e^{-\beta_{ij} \tau_i C_{ji}} \quad (2)$$
$C_{ji}$ represents the direct credibility, i.e., how much j trusts i; this value is the weight of the directed link connecting node j with node i in the CN layer. The rationale behind Equation (2) is that trust in an individual may depend on past interactions with him/her, but also on the reputation that he/she has in the neighborhood; this is important because it can influence the attention people pay when he/she posts news.
In other terms, Equation (2) models the news diffusion process in the OSN layer as a function of the CN layer (direct coupling between the OSN and CN layers).
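As a minimal sketch of Equations (1) and (2) (the function and variable names are ours, not the authors'), the credibility-weighted transmissibility can be computed as follows; setting the credibility factor to 1 recovers Newman's original formula:

```python
import math

def transmissibility(beta_ij, tau_i, c_ji=1.0):
    """Probability that a news item posted by i is received and re-shared by j.

    beta_ij : fraction of time i and j are simultaneously online (contact rate)
    tau_i   : time the news stays visible in the foreground of i's home page
    c_ji    : how much j trusts i; c_ji = 1 recovers Newman's Equation (1)
    """
    return 1.0 - math.exp(-beta_ij * tau_i * c_ji)

# A poorly trusted source is far less likely to be re-shared:
t_trusted = transmissibility(0.5, 3.0, c_ji=1.0)
t_untrusted = transmissibility(0.5, 3.0, c_ji=0.2)
```

Note that with `c_ji = 0` the transmissibility collapses to zero: a fully distrusted source cannot infect its neighborhood at all.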

3.2. CN Layer

While the spreading process is influenced by credibility, it is also true that the directed credibility between any pair of users depends on the past interactions between them. So the spreading process also determines how credibility changes over time.
To consider the evolution of $C_{ji}$ over time, we introduce a credibility update mechanism triggered every time a spreader node i proposes news to an uninformed node j. In the following we suppose that each news item is labelled by a number x ($x \in \{0, 1, \dots, n\}$). An ignorant node j that was exposed to a (new or re-shared) news item n calculates the credibility towards node i taking into account all past interactions by using the following equation:

$$C_{ji}(n) = \frac{\sum_{x=0}^{n} (x + 1)\, C_{news_x}}{\sum_{x=0}^{n} (x + 1)} \quad (3)$$

where $C_{news_x}$ is the credibility j assigns to the news item x.
This equation is a weighted average that allows the credibility of i to be updated according to j’s opinion about the truthfulness of the news it receives from i. Indeed, in real OSNs people can change their behaviour over time; for instance, a node that often re-posted false news in the past could start to check news and stop forwarding fakes, or conversely a reliable node could get worse by starting to spread false news. In any case, Equation (3) gives a node’s neighbors a chance to tailor its credibility according to the truthfulness of the news it propagates over time. To better understand the behaviour of Equation (3) over time, we simply rewrite it (through a series of simple mathematical steps) in the following way:
$$C_{ji}(n) = \begin{cases} C_{news_0} & \text{if } n = 0 \\ \dfrac{n}{n+2} \left( C_{ji}(n-1) + \dfrac{2}{n}\, C_{news_n} \right) & \text{if } n > 0 \end{cases} \quad (4)$$

where the credibility j assigns to i after the propagation of $news_n$ is expressed in terms of the credibility at the previous step plus the credibility j assigns to $news_n$.
To show how Equation (4) works, several simulated scenarios are reported in Figure 2a,b. In each scenario we simulate the propagation of 200 news items. Scenario CT is the simple case in which i always propagates true news to j. In this case $C_{ji}$ assumes the constant value of 1 (j fully trusts i). In scenario CF, i always propagates fake news, so $C_{ji}$ is always zero. In scenario TF, i spreads true news for $n = 1 \dots 50$ and fake news for $n > 50$. In this case, $C_{ji}$ is initially equal to 1, but after the 50th news item it rapidly decreases towards 0, reaching the value of 0.5 after only about 20 more (fake) news items. So, a user can lose half of his credibility by spreading about 20 completely fake news items. Scenario FT is the dual of TF: it shows how a user can gain credibility by spreading completely true news. In Figure 2b, scenario RTF simulates a user i that initially spreads 50 ’almost true’ news items and then switches to ’almost false’ ones. Arbitrarily, we suppose that user j assigns a random score in the range $[0.75, 1.0]$ and $[0.0, 0.5]$ respectively to ’almost true’ and ’almost false’ news. Again, RFT is the opposite scenario of RTF.
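A short sketch (our own, with illustrative score lists) can check that the recursive form of Equation (4) reproduces the weighted average of Equation (3), and can replay the TF scenario numerically:

```python
def credibility_direct(scores):
    """Equation (3): weighted average of past scores, news x weighted by (x + 1)."""
    weights = range(1, len(scores) + 1)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def credibility_recursive(scores):
    """Equation (4): the same value computed incrementally, news by news."""
    c = scores[0]                       # n = 0 base case
    for n in range(1, len(scores)):
        c = n / (n + 2) * (c + (2 / n) * scores[n])
    return c

# Scenario TF: 50 true news items followed by about 20 fake ones.
scores_tf = [1.0] * 50 + [0.0] * 21
```

Running both functions on the same score list yields identical values, and `credibility_direct(scores_tf)` lands close to 0.5, matching the observation that roughly 20 fake news items halve a previously perfect credibility.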
In general, how a given user scores a news item is a quite subjective issue. However, just for simulation purposes we will assume the following values for each $C_{news_x}$:
  • $C_{news_x} = 0$ if the news item x is false;
  • $C_{news_x} = 1$ if x is true;
  • if x is false but perceived as true (e.g., it cites false authoritative sources, and/or it seems to come from logical reasoning, or it starts from a real event), then $C_{news_x} \in [0, 0.5]$;
  • if x is true but perceived as false (e.g., it does not cite any sources, it is poorly described or its content is strange enough to sound fake), then $C_{news_x} \in [0.5, 1]$.
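For simulation purposes, the four scoring rules above can be sketched as a small helper (uniform sampling inside each interval is our assumption; the paper only fixes the intervals):

```python
import random

def score_news(is_true, perceived_true, rng=random):
    """Assign a C_news score following the four simulation rules listed above.

    Uniform sampling inside each interval is an assumption of this sketch;
    the text only specifies the intervals themselves.
    """
    if is_true and perceived_true:
        return 1.0                       # true and recognized as true
    if not is_true and not perceived_true:
        return 0.0                       # false and recognized as false
    if not is_true and perceived_true:
        return rng.uniform(0.0, 0.5)     # false but perceived as true
    return rng.uniform(0.5, 1.0)         # true but perceived as false
```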

3.3. News Propagation Model

In the following, we indicate with s the originator of the news (seed user/node), i.e., the user who injects the news into the OSN network.
Our model of news propagation works as follows:
(1)
Initially, the seed node s injects a news item into the network, while each of his neighbors j uses Equation (2) to compute the probability of re-posting the news or not.
(2)
If j re-posts the news, it updates the credibility $C_{js}$ on the CN layer by using Equation (3).
(3)
At the next step, infected nodes attempt to infect their neighborhoods using the same mechanism; whenever this happens, the directed credibilities are updated.
(4)
Another diffusion step is performed.
The spreading process goes on until the contagion cycle ends. At this point the number of infected nodes is the number of users reached by the news. Algorithm 1 reports the news spreading algorithm described above.
Algorithm 1: News spreading algorithm.
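The four-step procedure above can be sketched in Python as follows (a simplified reconstruction, not the authors' simulator; the data structures and names are ours, contact rate and visibility time are folded into a single `beta * tau` product, and every exposed node scores the news with the same value `c_news`):

```python
import math
import random

def spread_news(osn, cred, history, seed, beta, tau, c_news):
    """One contagion cycle of the four steps above.

    osn     : dict node -> set of neighbours (acquaintance layer)
    cred    : dict (j, i) -> C_ji, the credibility layer (updated in place)
    history : dict (j, i) -> list of scores j assigned to past news from i
    c_news  : score every exposed node assigns to this news item
    """
    infected, frontier = {seed}, [seed]
    while frontier:                                   # steps 3-4: iterate diffusion
        next_frontier = []
        for i in frontier:
            for j in osn[i]:
                if j in infected:
                    continue
                # step 1: j re-posts with probability given by Equation (2)
                t_ij = 1.0 - math.exp(-beta * tau * cred[(j, i)])
                if random.random() < t_ij:
                    infected.add(j)
                    next_frontier.append(j)
                # step 2: j updates C_ji from its score history (Equation (3))
                history[(j, i)].append(c_news)
                scores = history[(j, i)]
                cred[(j, i)] = (sum((x + 1) * s for x, s in enumerate(scores))
                                / sum(range(1, len(scores) + 1)))
        frontier = next_frontier
    return len(infected)                              # users reached by the news
```

With neutral credibilities, repeated true news pushes the updated weights towards 1, while repeated fakes drag them towards 0, which is exactly the coupling between the two layers described above.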

4. Simulations and Results

To evaluate the proposed model we performed several experiments to assess news propagation both for highly connected and for semi-isolated nodes. All experiments were made on a two-layer scale-free network with 1000 nodes. A scale-free network is characterized by a power-law degree distribution (i.e., the fraction $P(k)$ of nodes with degree k goes as $P(k) \sim k^{-\gamma}$, where $\gamma$ is a parameter that typically assumes a value in the range $2 < \gamma < 3$). This means that in such a network the majority of nodes have few connections, while there is a small number of nodes, called hubs, with an extraordinarily high number of links. Specifically, in our experiments we employed scale-free networks since online social networks usually exhibit this kind of behaviour [69]. Networks were generated using the JGraphT Java library [70]. In detail, the first layer was an undirected network representing OSN acquaintances while the second layer was the credibility network. All weights of the CN layer were initially set to 0.5, representing neutral credibility ($C_{ij} = 0.5\ \forall (i, j)$).
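A minimal, standard-library-only sketch of this setup is shown below (the paper uses the JGraphT Java library; this preferential-attachment generator and the neutral CN initialization are our stand-ins):

```python
import random

def barabasi_albert_edges(n, m, seed=None):
    """Minimal preferential-attachment generator: each new node attaches to m
    targets chosen proportionally to degree, giving a power-law degree tail."""
    rng = random.Random(seed)
    edges, repeated = [], []
    targets = list(range(m))              # initial attachment targets
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
        repeated.extend(targets)          # endpoints listed once per edge end
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

# Two-layer setup: undirected acquaintance edges plus a neutral CN overlay,
# with a pair of directed credibility links per acquaintance edge.
acq_edges = barabasi_albert_edges(1000, 2, seed=42)
cred = {}
for i, j in acq_edges:
    cred[(i, j)] = 0.5                    # C_ij = 0.5 for all pairs (neutral)
    cred[(j, i)] = 0.5
```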
To understand how local credibility updates affect news propagation, we considered a highly connected seed node, since news propagation from semi-isolated nodes is influenced by both a low contact rate and credibility, making it difficult to distinguish which of the two actually causes the lack of news propagation. We simulated the spreading of 150 different news items in 5 scenarios, as reported in Table 1. All the presented results are averages over 10 different simulations for each scenario.
Scenarios A and B (i.e., all news spread are respectively false or true) were used to understand how a variation in the credibility of the spreader node affects its news spreading. Indeed, in real OSNs, if a node loses its credibility, fewer and fewer nodes are willing to receive and propagate its news (and vice versa).
Simulations reported in cases C and D aimed at checking to what extent a change in a node’s behavior impacts its news spreading process and how fast such changes happen; for instance, when the node initially spreads only fakes and, after a while, propagates only true news, or vice versa.
All simulation results are reported in Figure 3. As shown, the number of infected nodes for the first news item (i.e., for $C_{news_1}$) was similar in all the presented scenarios. As soon as nodes spread further news, $C_{ji}$ played a fundamental role in the network evolution, as detailed in the following.

4.1. Case A and B

In scenario A, the number of infected nodes for each news item progressively decreased as the number of spread news items increased. This confirms that a node that continuously spreads false news is gradually isolated, decreasing its ability to influence other nodes. Conversely, in case B, the number of infected nodes for each news item progressively increased as the number of spread news items increased. In this case the spreader node gradually gained credibility, increasing its ability to affect the other nodes.

4.2. Case C and D

In case C the number of infected nodes for each news item decreased with the same trend as case A for the first 50 news items. Afterward, when the individual’s behavior changed from spreading false news to true ones, the curve changed accordingly; then, when the behavior switched from good to bad again, the curve decreased. Similarly, in case D the number of infected nodes for each news item increased with the same trend as case B for the first 50 propagated news items. When the seed became a bad guy the curve decreased, and finally it increased as soon as the seed behavior became good again (true news).
Cases C and D showed another real social media users’ behavior: losing credibility is easier than regaining it. Indeed, in case C the curve increased slowly after the first behavior change at the 50th news item, whereas in case D, at the same point, the curve decreased very rapidly. The same curve variations can be observed at the other behavior changing point, the 100th news item.

4.3. Newman Case

In the last simulated scenario, credibility did not affect the curve, so it served only as a reference to show the improvements introduced by our model. One would expect the curve of case B to reach this reference (at least asymptotically), since it corresponds to credibility always set to 1. Instead, at steady state the case B simulations produced on average about 130 fewer infected nodes than the Newman ones. This was because not all true news items were perceived as such, so some of them received a low credibility value (although true) due, for instance, to a lack of sources and/or relevant details. Furthermore, our credibility update formula kept track of past news, albeit with decreasing weight due to the aging factor, hence it would never reach C_ji = 1.
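The aging behaviour can be seen in a minimal sketch where the direct credibility is updated as an exponential moving average of the perceived news values; the aging factor of 0.9 is a hypothetical choice, and the paper's actual formula may differ.

```python
# Sketch: aging credibility update. Past news are discounted geometrically,
# so even a stream of perfectly perceived true news (value 1.0) only makes
# C_ji approach 1 asymptotically, never reach it.
def update_credibility(c_old, perceived, aging=0.9):
    return aging * c_old + (1.0 - aging) * perceived

c = 0.5                      # neutral initial credibility
for _ in range(50):          # 50 news items, all perceived as fully true
    c = update_credibility(c, 1.0)
assert 0.99 < c < 1.0        # close to, but strictly below, C_ji = 1
```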

4.4. Credibility Distributions

To understand how the CN layer was affected by news propagation in the previously simulated scenarios, we studied how the incoming credibility strength of each node of the CN changed. The incoming strength of a given node i is defined as the sum of the weights of its incoming links, s_i = Σ_{j ∈ N_i^in} C_ji, where N_i^in is the in-neighbourhood of i. We then calculated the incoming credibility strength distribution of the CN at different moments during the previously described simulation scenarios.
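The incoming strength defined above can be computed directly from the weighted directed edges of the CN layer; the sketch below uses a plain dictionary of links with hypothetical credibility values.

```python
# Sketch: incoming credibility strength s_i = sum over j in N_i^in of C_ji.
# Edges map (source j, target i) to the direct credibility C_ji.
edges = {
    ("a", "c"): 0.7,
    ("b", "c"): 0.4,
    ("c", "a"): 0.9,
}

def incoming_strength(node, edges):
    """Sum of credibility weights on links pointing into `node`."""
    return sum(w for (j, i), w in edges.items() if i == node)

assert abs(incoming_strength("c", edges) - 1.1) < 1e-9  # 0.7 + 0.4
assert abs(incoming_strength("a", edges) - 0.9) < 1e-9
```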
In Figure 4a,b we show the incoming credibility strength distribution for cases B and D during the 1st, 50th, 100th and 150th news.
To effectively assess how the strength distribution varies, we performed a curve fitting in a log-log scale graph according to the following function:
f(x) = a e^(bx) + c
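A pure-Python sketch of fitting this model: for a fixed b the model is linear in a and c, so one can scan b over a grid and solve the remaining least-squares problem in closed form. The data and grid below are synthetic; the paper performs the fit in log-log scale on the empirical strength distributions.

```python
import math

def fit_exponential(xs, ys, b_grid):
    """Return (a, b, c) minimising sum((a*exp(b*x) + c - y)**2), with b
    restricted to b_grid; (a, c) solved via 2x2 normal equations."""
    n = len(xs)
    best = None
    for b in b_grid:
        u = [math.exp(b * x) for x in xs]
        su, suu = sum(u), sum(v * v for v in u)
        sy, suy = sum(ys), sum(v * y for v, y in zip(u, ys))
        det = suu * n - su * su
        if abs(det) < 1e-12:
            continue
        a = (n * suy - su * sy) / det
        c = (suu * sy - su * suy) / det
        sse = sum((a * v + c - y) ** 2 for v, y in zip(u, ys))
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    return best[1], best[2], best[3]

# Noiseless synthetic data generated from a = 2, b = -0.5, c = 1
xs = list(range(10))
ys = [2 * math.exp(-0.5 * x) + 1 for x in xs]
a, b, c = fit_exponential(xs, ys, [-1.0, -0.75, -0.5, -0.25])
assert b == -0.5 and abs(a - 2) < 1e-6 and abs(c - 1) < 1e-6
```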
In Figure 4a we note that all fitting curves (except that of the first news item) almost overlapped. This behaviour is coherent with the findings reported in Figure 3 (Case B): as the number of spread news items increased, the number of infected nodes increased too, which corresponded to an increase in all the direct credibility values, since all news items were considered true by users. Moreover, we observed that the number of infected nodes was almost stable after the 50th news item. This behaviour is also visible in Figure 4a, where the curves corresponding to the 50th, 100th and 150th news items practically overlapped, with slopes of 0.5554, 0.5382 and 0.5259, respectively.
Instead, in Figure 4b we observed that, except for the initial case, the fitting curve moved forward during the first 50 true news items, moved back during the next 50 fakes (’news 100’ plot) and finally (’news 150’ plot) almost overlapped the ’news 50’ fitting curve during the last 50 true news items. This suggests that, on average, the weights of the CN layer fluctuated by following, and also driving, the specific news spreading dynamics. Note that results for cases A and C are not shown here since they were the opposite of cases B and D, respectively, so no further findings could be inferred from them.

5. Conclusions and Future Work

In this paper, we studied how trustworthiness among OSN users influences the news spreading process. We carried out several simulations using a 1000-node, two-layer multiplex scale-free network, with one layer representing OSN acquaintances and the other representing the users’ trust network. We considered five different scenarios, each one averaged over 10 simulations. Results show that the credibility update mechanism correctly models the extent of propagation. In particular, we noted that a well-connected seed node with medium or high credibility was always able to propagate news to its neighborhood, hence the news became viral. Conversely, when a node with low credibility tried to spread some news, the diffusion succeeded only if it was able to persuade at least one hub node with neutral credibility (among its neighbors) to repost that news. Finally, news propagation from isolated nodes was often limited by low contact rates with their neighborhoods; with medium or low credibility as well, it was very difficult for such nodes to propagate news.
The choice of a synthetic network for our first experiments allowed us to carry out assessment tests that validate the model on a well-known and manageable network, whereas real datasets require taking into account the specific model they are based on, its features, and possible biases.
In future work, we aim to remove some limitations of the proposal discussed here. In particular,
  • we plan to use Twitter datasets in order to validate our model with real data, such as those available in the [71] or [72] catalogs;
  • further experiments on larger networks (including non-scale-free ones) will allow us to test the scalability of our model;
  • we will also investigate the applicability of our model to real OSNs; indeed, we believe that its formulation allows adopting it in any social network where node neighbourhoods are known, although sometimes this information is available only to OSN administrators, and specific considerations may be required for each OSN (e.g., in FB the friendship is a two-way relationship, whereas in Twitter the following–follower relation is monodirectional);
  • we will consider how our model can address the creation and diffusion of fakes by non-human accounts, i.e., bots or cyborgs [73].

Author Contributions

Formal analysis, V.C., A.L., M.M., G.M. and M.P.; investigation, V.C., A.L., M.M., G.M. and M.P.; methodology, V.C., A.L., M.M., G.M. and M.P. The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the project of University of Catania PIACERI, Piano di Incentivi per la Ricerca di Ateneo.

Data Availability Statement

Not applicable; the study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bruns, A. The Active Audience: Transforming Journalism from Gatekeeping to Gatewatching. Available online: http://snurb.info/files/The%20Active%20Audience.pdf (accessed on 24 April 2021).
  2. Chunara, R.; Andrews, J.R.; Brownstein, J.S. Social and news media enable estimation of epidemiological patterns early in the 2010 Haitian cholera outbreak. Am. J. Trop. Med. Hyg. 2012, 86, 39–45. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Hamzah, F.B.; Lau, C.; Nazri, H.; Ligot, D.; Lee, G.; Tan, C. CoronaTracker: Worldwide COVID-19 Outbreak Data Analysis and Prediction. Available online: https://www.who.int/bulletin/online_first/20-255695.pdf (accessed on 24 April 2021).
  4. Lee, K.; Agrawal, A.; Choudhary, A. Real-time disease surveillance using twitter data: Demonstration on flu and cancer. In Proceedings of the 19th ACM SIGKDD international conference on knowledge discovery and data mining, Chicago, IL, USA, 11–14 August 2013. [Google Scholar]
  5. Christakis, N.A.; Fowler, J.H. Social network sensors for early detection of contagious outbreaks. PLoS ONE 2010, 5, e12948. [Google Scholar] [CrossRef] [Green Version]
  6. Schmidt, C.W. Trending now: Using social media to predict and track disease outbreaks. Environ. Health Perspect. 2012, 120, a30. [Google Scholar] [CrossRef] [PubMed]
  7. Sakaki, T.; Okazaki, M.; Matsuo, Y. Earthquake shakes Twitter users: Real-time event detection by social sensors. In Proceedings of the 19th International Conference On World Wide Web, Raleigh, NC, USA, 26–30 April 2010. [Google Scholar]
  8. Guy, M.; Earle, P.; Ostrum, C.; Gruchalla, K.; Horvath, S. Integration and dissemination of citizen reported and seismically derived earthquake information via social network technologies. In Advances in Intelligent Data Analysis IX; Springer: Berlin/Heidelberg, Germany, 2010; pp. 42–53. [Google Scholar]
  9. Spence, P.R.; Lachlan, K.A.; Griffin, D.R. Crisis communication, race, and natural disasters. J. Black Stud. 2007, 37, 539–554. [Google Scholar] [CrossRef]
  10. Pourebrahim, N.; Sultana, S.; Edwards, J.; Gochanour, A.; Mohanty, S. Understanding communication dynamics on Twitter during natural disasters: A case study of Hurricane Sandy. Int. J. Disaster Risk Reduct. 2019, 37, 101176. [Google Scholar] [CrossRef]
  11. Dragović, N.; Vasiljević, Ð.; Stankov, U.; Vujičić, M. Go social for your own safety! Review of social networks use on natural disasters–case studies from worldwide. Op. Geosci. 2019, 11, 352–366. [Google Scholar] [CrossRef]
  12. Haas, C.; Wearden, S.T. E-credibility: Building common ground in web environments. L1-Educ. Stud. Lang. Lit. 2003, 3, 169–184. [Google Scholar] [CrossRef]
  13. Kaplan, A.; Haenlein, M. Users of the World, Unite! The Challenges and Opportunities of Social Media. Bus. Horiz. 2010, 53, 59–68. [Google Scholar] [CrossRef]
  14. Glenski, M.; Volkova, S.; Kumar, S. User Engagement with Digital Deception. Available online: https://www.researchgate.net/publication/342250726_User_Engagement_with_Digital_Deception (accessed on 24 April 2021).
  15. Carchiolo, V.; Longheu, A.; Malgeri, M.; Mangioni, G.; Previti, M. Post Sharing-Based Credibility Network for Social Network. In Intelligent Distributed Computing XI; Springer International Publishing: Cham, Switzerland, 2018; pp. 149–158. [Google Scholar]
  16. Carchiolo, V.; Longheu, A.; Malgeri, M.; Mangioni, G.; Previti, M. Complex Networks IX; Springer International Publishing: Cham, Switzerland, 2018; pp. 303–310. [Google Scholar]
  17. Anderson, R.M.; May, R.M.; Anderson, B. Infectious Diseases of Humans: Dynamics and Control. Available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2272191/pdf/epidinfect00031-0211.pdf (accessed on 24 April 2021).
  18. Diekmann, O.; Heesterbeek, J.A.P. Mathematical Epidemiology of Infectious Diseases: Model Building, Analysis and Interpretation; Wiley: Chichester, UK, 2000. [Google Scholar]
  19. Newman, M.E. Spread of epidemic disease on networks. Phys. Rev. E 2002, 66, 016128. [Google Scholar] [CrossRef] [Green Version]
  20. Carchiolo, V.; Longheu, A.; Malgeri, M.; Mangioni, G. Trust assessment: A personalized, distributed, and secure approach. Concurr. Comput. Pract. Exp. 2012, 24, 605–617. [Google Scholar] [CrossRef]
  21. Buzzanca, M.; Carchiolo, V.; Longheu, A.; Malgeri, M.; Mangioni, G. Direct trust assignment using social reputation and aging. J. Ambient Intell. Humaniz. Comput. 2017, 8, 167–175. [Google Scholar] [CrossRef]
  22. Buskens, V. Social Networks and Trust. Available online: https://books.google.co.id/books?id=XJ0SBwAAQBAJ&printsec=frontcover&dq=Social+networks+and+trust&hl=zh-CN&sa=X&ved=2ahUKEwjl7OTix5bwAhXP7nMBHbFFCPEQ6AEwAHoECAUQAg#v=onepage&q=Social%20networks%20and%20trust&f=false (accessed on 24 April 2021).
  23. Aymanns, C.; Foerster, J.; Georg, C.P. Fake News in Social Networks. Available online: https://www.researchgate.net/publication/319210331_Fake_News_in_Social_Networks (accessed on 24 April 2021).
  24. Kopp, C.; Korb, K.B.; Mills, B.I. Information-theoretic models of deception: Modelling cooperation and diffusion in populations exposed to “fake news”. PLoS ONE 2018, 13, 1–35. [Google Scholar] [CrossRef] [PubMed]
  25. Pellis, L.; Ball, F.; Bansal, S.; Eames, K.; House, T.; Isham, V.; Trapman, P. Eight challenges for network epidemic models. Epidemics 2015, 10, 58–62. [Google Scholar] [CrossRef]
  26. Grabner-Kräuter, S.; Bitter, S. Trust in online social networks: A multifaceted perspective. Forum Soc. Econ. 2015, 44, 48–68. [Google Scholar] [CrossRef] [Green Version]
  27. Lewicki, R.; Bunker, B. Trust in Relationships: A Model of Development and Decline. Available online: https://www.researchgate.net/profile/Roy-Lewicki/publication/232534885_Trust_in_relationships_A_model_of_development_and_decline/links/00b7d52ccca7d587e6000000/Trust-in-relationships-A-model-of-development-and-decline.pdf (accessed on 24 April 2021).
  28. Golbeck, J. Personalizing applications through integration of inferred trust values in semantic web-based social networks. In Proceedings of the Semantic Network Analysis Workshop at the 4th International Semantic Web Conference, Galway, Ireland, 7 November 2005. [Google Scholar]
  29. Kuter, U.; Golbeck, J. Sunny: A New Algorithm for Trust Inference in Social Networks Using Probabilistic Confidence Models. Available online: http://www.cs.umd.edu/~golbeck/downloads/Sunny.pdf (accessed on 24 April 2021).
  30. Taherian, M.; Amini, M.; Jalili, R. Internet and Web Applications and Services. In Proceedings of the Third International Conference on Internet and Web Applications and Services, Athens, Greece, 8–13 June 2008. [Google Scholar]
  31. Liu, G.; Wang, Y.; Orgun, M. Trust inference in complex trust-oriented social networks. Computational Science and Engineering. In Proceedings of the IEEE CSE 09, 12th IEEE International Conference on Computational Science and Engineering, Vancouver, BC, Canada, 29–31 August 2009. [Google Scholar]
  32. Adali, S.; Escriva, R.; Goldberg, M.K.; Hayvanovych, M.; Magdon-Ismail, M.; Szymanski, B.K.; Wallace, W.A.; Williams, G. Measuring behavioral trust in social networks. In Proceedings of the ISI 2010: IEEE International Conference on Intelligence and Security Informatics, Vancouver, BC, Canada, 23–26 May 2010. [Google Scholar]
  33. Caverlee, J.; Liu, L.; Webb, S. The SocialTrust framework for trusted social information management: Architecture and algorithms. Inf. Sci. 2010, 180, 95–112. [Google Scholar] [CrossRef]
  34. Yu, X.; Wang, Z. A enhanced trust model based on social network and online behavior analysis for recommendation. In Proceedings of the CiSE 2010: International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 10–12 December 2010. [Google Scholar]
  35. Jiang, W.; Wang, G.; Wu, J. Generating trusted graphs for trust evaluation in online social networks. Futur. Gener. Comput. Syst. 2014, 31, 48–58. [Google Scholar] [CrossRef]
  36. Jiang, W.; Wu, J.; Li, F.; Wang, G.; Zheng, H. Trust evaluation in online social networks using generalized network flow. IEEE Trans. Comput. 2016, 65, 952–963. [Google Scholar] [CrossRef]
  37. Feng, L.; Hu, Y.; Li, B.; Stanley, H.E.; Havlin, S.; Braunstein, L.A. Competing for Attention in Social Media under Information Overload Conditions. PLoS ONE 2015, 10, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Weng, L.; Menczer, F.; Ahn, Y.Y. Virality Prediction and Community Structure in Social Networks. Sci. Rep. 2013, 3, 1–6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Jin, F.; Dougherty, E.; Saraf, P.; Cao, Y.; Ramakrishnan, N. Epidemiological Modeling of News and Rumors on Twitter. In Proceedings of the 7th Workshop on Social Network Mining and Analysis, Chicago, IL, USA, 11 August 2013. [Google Scholar]
  40. Sreenivasan, S.; Chan, K.S.; Swami, A.; Korniss, G.; Szymanski, B.K. Information Cascades in Feed-Based Networks of Users with Limited Attention. IEEE Trans. Netw. Sci. Eng. 2017, 4, 120–128. [Google Scholar] [CrossRef]
  41. Bessi, A.; Coletto, M.; Davidescu, G.A.; Scala, A.; Caldarelli, G.; Quattrociocchi, W. Science vs Conspiracy: Collective Narratives in the Age of Misinformation. PLoS ONE 2015, 10, 1–17. [Google Scholar] [CrossRef] [Green Version]
  42. Baronchelli, A. The emergence of consensus: A primer. R. Soc. Opt. Sci. 2018, 5, 172189. [Google Scholar] [CrossRef] [Green Version]
  43. Vicario, M.D.; Quattrociocchi, W.; Scala, A.; Zollo, F. Polarization and Fake News: Early Warning of Potential Misinformation Targets. ACM Trans. Web 2019, 13, 1–22. [Google Scholar] [CrossRef]
  44. Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef] [PubMed]
  45. Duffy, A.; Tandoc, E.; Ling, R. Too good to be true, too good not to share: The social utility of fake news. Inf. Commun. Soc. 2020, 23, 1965–1979. [Google Scholar] [CrossRef]
  46. Niu, Y.W.; Qu, C.Q.; Wang, G.H.; Wu, J.L.; Yan, G.Y. Information spreading with relative attributes on signed networks. Inf. Sci. 2021, 551, 54–66. [Google Scholar] [CrossRef]
  47. Saeedian, M.; Azimi-Tafreshi, N.; Jafari, G.; Kertész, J. Epidemic spreading on evolving signed networks. Phys. Rev. E 2016, 95, 022314. [Google Scholar] [CrossRef] [Green Version]
  48. Pastor-Satorras, R.; Castellano, C.; Van Mieghem, P.; Vespignani, A. Epidemic processes in complex networks. Rev. Mod. Phys. 2015, 87, 925–979. [Google Scholar] [CrossRef] [Green Version]
  49. Bingol, H. Fame Emerges as a Result of Small Memory. Available online: https://arxiv.org/pdf/nlin/0609033.pdf (accessed on 24 April 2021).
  50. Cetin, U.; Bingol, H.O. Attention competition with advertisement. Phys. Rev. E 2014, 90, 032801. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Huberman, B. Social Computing and the Attention Economy. J. Stat. Phys. 2013, 151. [Google Scholar] [CrossRef]
  52. Wu, F.; Huberman, B.A. Novelty and collective attention. Proc. Natl. Acad. Sci. USA 2007, 104, 17599–17601. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. He, X.; Lin, Y.R. Measuring and monitoring collective attention during shocking events. EPJ Data Sci. 2017, 6, 30. [Google Scholar] [CrossRef]
  54. Lehmann, J.; Gonçalves, B.; Ramasco, J.J.; Cattuto, C. Dynamical Classes of Collective Attention in Twitter. In Proceedings of the 21st International Conference on World Wide Web, Lyon, France, 16–20 April 2012. [Google Scholar]
  55. Sasahara, K.; Hirata, Y.; Toyoda, M.; Kitsuregawa, M.; Aihara, K. Quantifying collective attention from tweet stream. PLoS ONE 2013, 8, e61823. [Google Scholar] [CrossRef]
  56. Karrer, B.; Newman, M. Competing Epidemics on Complex Networks. Available online: https://arxiv.org/pdf/1105.3424.pdf (accessed on 24 April 2021).
  57. Sneppen, K.; Trusina, A.; Jensen, M.; Bornholdt, S. A minimal model for multiple epidemics and immunity spreading. PLoS ONE 2010, 5, e13326. [Google Scholar] [CrossRef] [Green Version]
  58. Oliveira, D.F.; Chan, K.S. The effects of trust and influence on the spreading of low and high quality information. Phys. A Stat. Mech. Appl. 2019, 525, 657–663. [Google Scholar] [CrossRef] [Green Version]
  59. Shu, P.; Gao, L.; Zhao, P.; Wang, W.; Stanley, H. Social contagions on interdependent lattice networks. Sci. Rep. 2017, 7, 44669. [Google Scholar] [CrossRef] [Green Version]
  60. Wang, X.; Li, W.; Liu, L.; Pei, S.; Tang, S.; Zheng, Z. Promoting information diffusion through interlayer recovery processes in multiplex networks. Phys. Rev. E 2017, 96, 032304. [Google Scholar] [CrossRef] [PubMed]
  61. Yu, X.; Yang, Q.; Ai, K.; Zhu, X.; Wang, W. Information Spreading on Two-Layered Multiplex Networks With Limited Contact. IEEE Access 2020, 8, 104316–104325. [Google Scholar] [CrossRef]
  62. Shu, K.; Mahudeswaran, D.; Wang, S.; Lee, D.; Liu, H. FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media. Big Data 2020, 8, 171–188. [Google Scholar] [CrossRef]
  63. Apuke, O.D.; Omar, B. Fake news and COVID-19: Modelling the predictors of fake news sharing among social media users. Telemat. Inform. 2021, 56, 101475. [Google Scholar] [CrossRef]
  64. Thompson, N.; Wang, X.; Daya, P. Determinants of News Sharing Behavior on Social Media. Available online: http://nikthompson.com/PDF/Thompson-Wang-2019-NewsSharing.pdf (accessed on 24 April 2021).
  65. Cinelli, M.; Quattrociocchi, W.; Galeazzi, A.; Valensise, C.; Brugnoli, E.; Schmidt, A.; Zola, P.; Zollo, F.; Scala, A. The COVID-19 social media infodemic. Sci. Rep. 2020, 10. [Google Scholar] [CrossRef]
  66. Carchiolo, V.; Longheu, A.; Malgeri, M.; Mangioni, G.; Previti, M. Complex Networks & Their Applications VI; Springer International Publishing: Cham, Switzerland, 2018; pp. 980–988. [Google Scholar]
  67. Kivelä, M.; Arenas, A.; Barthelemy, M.; Gleeson, J.P.; Moreno, Y.; Porter, M.A. Multilayer Networks. Available online: https://arxiv.org/pdf/1309.7233.pdf (accessed on 24 April 2021).
  68. Keeling, M.J.; Rohani, P. Modeling Infectious Diseases in Humans and Animals. 2011. Available online: https://www.researchgate.net/publication/23180326_Modeling_Infectious_Diseases_in_Humans_and_Animals (accessed on 24 April 2021).
  69. Mislove, A.; Marcon, M.; Gummadi, K.P.; Druschel, P.; Bhattacharjee, B. Measurement and Analysis of Online Social Networks. In Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, San Diego, CA, USA, 24–26 October 2007. [Google Scholar]
  70. Michail, D.; Kinable, J.; Naveh, B.; Sichi, J.V. JGraphT: A Java Library of Graph Theory Data Structures and Algorithms. Available online: https://arxiv.org/pdf/1904.08355.pdf (accessed on 24 April 2021).
  71. DocNow Catalog. Available online: https://catalog.docnow.io/ (accessed on 24 April 2021).
  72. Zubiaga, A.; Ji, H. Tweet, but Verify: Epistemic Study of Information Verification on Twitter. Available online: https://arxiv.org/pdf/1312.5297.pdf (accessed on 24 April 2021).
  73. Shu, K.; Zhou, X.; Wang, S.; Zafarani, R.; Liu, H. The Role of User Profiles for Fake News Detection. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Vancouver, BC, Canada, 27–30 August 2019. [Google Scholar]
Figure 1. A two-layer representation of our model: online social network (OSN) and credibility network (CN) layers.
Figure 2. Direct credibility C_ji dynamics; the x-axis shows time and the y-axis the node credibility.
Figure 3. Temporal evolution of the average number of infected nodes in different scenarios.
Figure 3. Temporal evolution of the average number of infected nodes in different scenarios.
Futureinternet 13 00107 g003
Figure 4. Incoming credibility strength distribution for cases B and D during the 1st, 50th, 100th and 150th news.
Table 1. Simulation scenarios.

Case   | C_news_x                                  | Description
A      | C_news_x ∈ [0, 0.5] for all x             | all news are false
B      | C_news_x ∈ [0.5, 1] for all x             | all news are true
C      | C_news_x ∈ [0, 0.5] for 1 ≤ x ≤ 50        | first 50 news are false
       | C_news_x ∈ [0.5, 1] for 51 ≤ x ≤ 100      | next 50 news are true
       | C_news_x ∈ [0, 0.5] for 101 ≤ x ≤ 150     | last 50 news are false
D      | C_news_x ∈ [0.5, 1] for 1 ≤ x ≤ 50        | first 50 news are true
       | C_news_x ∈ [0, 0.5] for 51 ≤ x ≤ 100      | next 50 news are false
       | C_news_x ∈ [0.5, 1] for 101 ≤ x ≤ 150     | last 50 news are true
Newman | —                                         | all news are propagated with the pure Newman transmissibility formula
