1. Introduction
The absence of a centralised authority in Mobile Ad Hoc Networks (MANETs) poses a key challenge, because network operation depends on cooperation amongst nodes. In MANETs, some nodes may exhibit behaviours that negate the routing protocol functionality by disrupting the route discovery process [1]. To ensure that data is readily available in a MANET, all nodes must function as forwarders and participate in the transmission of data packets from sources to the desired destinations [2]. MANETs can generally be set up anywhere and at any time due to their dynamic nature. However, as a result of their unique characteristics, MANETs are more vulnerable to various security threats [3], such as grey-hole attacks, black-hole attacks, eavesdropping, and denial-of-service (DoS) attacks, than traditional networks.
Security in MANETs generally involves ensuring and maintaining the integrity and confidentiality of data, in addition to the legitimate use and availability of the network services provided by each node [1]. The viability of a MANET depends on the reliability of nodes to actively participate in route discovery and to honestly forward data packets for other nodes in the network. To attain optimal network performance, each node must continuously forward packets for nodes within its radio range when required. Forwarding or routing data packets requires a node to consume its limited energy without expecting any reward for its actions. If a significant number of nodes in a MANET selfishly decide to preserve their energy by minimising their network participation, for example by not responding to route requests or not forwarding data packets [2], network performance degrades [3], and one of the main goals of designing and creating a MANET, i.e., to support vigorous and effective routing operations by ordinary nodes, is defeated. Thus, there is a need for an efficient reputation and trust management system (RTMS) that encourages the active collaboration of nodes in the network. Many existing works on reputation and trust management systems in MANETs [4] enforce the collaboration of nodes by isolating uncooperative nodes and denying them the available network resources. These RTM systems focus primarily on modelling effective mechanisms that ensure collaboration among nodes, and they usually provide no form of incentive, beyond punitive measures against misbehaving nodes, for the cooperative nodes in the network [5,6,7,8,9].
In general, most of the reviewed RTM systems do not reward cooperative nodes for the continuous utilisation of their limited energy in forwarding packets for other nodes. Nodes that actively participate in route discovery processes and the forwarding of data packets tend to experience low energy levels after a certain period. These low energy levels may in turn hamper their ability to carry out successful network operations, which may negatively affect their reputation and trust as well as their network performance [4].
These cooperative nodes usually end up being penalised by the mechanisms deployed in these RTM models for being unable to continuously carry out the expected network activities. Punishing active nodes after a long period of contributing to successful network operations is unfair. Since nodes do not have unlimited energy, there is a need for a reliable reputation and trust management system that enforces cooperation by ensuring that collaborative nodes are rewarded for continuously conducting favourable network operations, while nodes judged to be selfish or malicious are penalised, isolated, or denied the available network resources. This concept of rewarding nodes for continuously carrying out favourable network operations was initially proposed by Chiejina et al. [4]. The authors suggested a conceptual RTM model in which nodes judged to be trustworthy are rewarded for their active network participation while untrustworthy nodes are penalised, using a two-dimensional approach. However, this concept was not fully explored in their paper.
In this paper, we adopt the initial concept proposed in [4] and present a candour two-dimensional trustworthiness evaluation technique to determine the trustworthiness of nodes in MANETs. Our proposed RTM models the reputation and trust evaluation of nodes by ensuring that both the positive and negative behaviours exhibited by a node are considered before its trustworthiness is determined. This paper also explores the following:
Possible ways of understanding nodes’ behaviours in a MANET without bias
How the use of observed optimal weights of a node at any given time will enable candour in the trustworthiness evaluation of nodes in the proposed RTM model
How candour, which is the ability to make unbiased trust-aware decisions, can be incorporated into reputation and trust management systems in MANETs.
As an overview, the proposed RTM system models the first-hand reputation of a target node using the Dirichlet probability distribution. This choice is based on the observation that the Dirichlet probability distribution provides a platform for designing a practical reputation system in which the behaviours of nodes can be expressed using more than two possibilities. This allows the observed behaviours of nodes in our proposed model to be measured against three possible natures, termed benevolence, selfishness, and maliciousness. Furthermore, the novel candour two-dimensional trustworthiness evaluation technique presented in this paper is based on what a node says about other nodes and what it does with regard to forwarding packets. The observed optimal weights at any given time are used in evaluating the trustworthiness of a node, which ensures that nodes are not unfairly penalised, especially if they can still contribute to the network passively (by providing genuine second-hand reputations about other nodes).
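As an illustration of this first-hand reputation computation, the following minimal sketch (written in Python; the counts, category names, and uniform prior used here are assumptions rather than the paper's exact notation) updates a Dirichlet distribution over the three natures with the counts observed in one monitoring window and reads the reputation vector off as the expectation of the posterior.

class DirichletReputation:
    """First-hand reputation over (benevolent, selfish, malicious) behaviours."""

    def __init__(self):
        # Uninformative prior: one pseudo-count per behaviour category.
        self.alpha = {"benevolent": 1.0, "selfish": 1.0, "malicious": 1.0}

    def observe(self, forwarded, dropped, modified):
        """Add the counts gathered during one monitoring window."""
        self.alpha["benevolent"] += forwarded   # packets correctly forwarded
        self.alpha["selfish"] += dropped        # packets dropped
        self.alpha["malicious"] += modified     # packets modified before forwarding

    def expectation(self):
        """Reputation vector = expected value of the Dirichlet posterior."""
        total = sum(self.alpha.values())
        return {k: v / total for k, v in self.alpha.items()}

rep = DirichletReputation()
rep.observe(forwarded=18, dropped=2, modified=0)   # one monitoring window
print(rep.expectation())   # roughly {'benevolent': 0.83, 'selfish': 0.13, 'malicious': 0.04}

This expectation-based reading is the same quantity that the simulation analysis in Section 7.1 reports as the directly computed reputation values.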
The rest of the paper is organised as follows:
Section 2 contains related works on reputation and trust management systems in MANETs.
Section 3 discusses a two-dimensional view of a node’s network activities, covering node behaviours and node categorisation.
Section 4 and
Section 5 present the proposed robust Dirichlet reputation and trust evaluation of nodes in Mobile Ad Hoc Networks.
Section 6 presents details of the implementation work.
Section 7 presents the simulation results and their analysis.
Section 8 presents the discussions and further analyses, and
Section 9 concludes by setting out the benefits of the proposed system and outlining future research work.
2. Related Works
Enforcing collaboration in MANETs using the concept of reputation management systems has received considerable attention from researchers in the ad hoc network community over the past two decades, and a large body of work has been proposed on reputation and trust management (RTM). Most RTM models employ different monitoring techniques to gather the data used in computing the reputation and trust of nodes in the network [5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. Several publications have proposed reputation management-based techniques in which nodes in MANETs monitor the packet forwarding activities of their neighbours. If a node contributes towards forwarding packets for other nodes, the reputation of the node is computed and increased [5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. Similarly, if a node is observed dropping packets presented to it for forwarding, the RTM models decrease its reputation. A significant number of these RTM models employ weight-based or threshold-based techniques to analyse the computed reputation values of observed nodes before making a decision about them. In some cases, if a node’s reputation drops below a specified threshold or weight, the node is penalised, which may include being isolated from the network or deprived of the available network resources, as in the RTM models proposed in [5,17,19,28,29].
Li Yang et al. [30] proposed a Dirichlet reputation system for reliable routing in wireless ad hoc networks. They employed a Dirichlet reputation model, based on Bayesian inference theory, to model and evaluate the reliability of nodes in terms of packet delivery. Their model uses a unique mechanism to determine, predict, and select a reliable routing path through a blend of first-hand observations and second-hand reputation reports. Simulation results show that their reputation system can reduce the damaging effects caused by misbehaving nodes and, in turn, improve the throughput of the network.
Sun et al. [
31] proposed, designed, and implemented a trust model that computes the trust level of observed nodes using a probabilistic algorithm based on the uncertainty that a node directly observed by its neighbours will carry out a specific action successfully, considering only the monitored information. Their model makes the routing of packets in the network more secure and also improves intrusion detection in the network.
In their proposed model, Na et al. [
26] employed a trust-based architecture that includes a reputation system and a watchdog. Their model uses a Positive Feedback Message (PFM) as evidence of the forwarding behaviour of a node, which is fed into the watchdog. The watchdogs deployed in their model monitor data forwarding events and count the arrival of acknowledgement packets (ACKs) with respect to the forwarded data. This mechanism is used to determine a node’s forwarding ability, which translates to its defined trustworthiness.
In their proposed reputation model, Chiejina et al. [
17] employ a novel direct monitoring technique to evaluate the reputation of a node in the network. Their model ensures that nodes that expend their energy in transmitting data and routing control packets for others can carry out their own network activities, while misbehaving nodes are detected and isolated from the network. Simulation results show that their model is effective at curbing and mitigating the effects of misbehaving nodes in the network.
Additionally, Michiardi and Molva [
24] proposed a collaborative reputation system known as CORE. Their model consists of a watchdog component enhanced with a reputation system that distinguishes between direct observations (subjective reputation), positive reports from others (indirect reputation), and task-specific behaviours (functional reputation). These reputations are weighted to generate an aggregated trust value that is used to decide whether to collaborate with trustworthy nodes or to gradually isolate malicious and misbehaving nodes from the network. Reputation values in their model are acquired by viewing all nodes as both requesters and providers and comparing the observed outcome of each request against the expected outcome. Nodes exchange periodic updates of only good reputation data. As a result, there is a compromise between robustness against false reports and the swiftness of detection: since only positive reports are exchanged, a false positive report makes it extremely difficult to detect and isolate malicious nodes in the network.
In their proposed model, Cho et al. [
32] evaluated a trust management protocol for cognitive mission-driven group communication systems in MANETs, aimed at the expeditious development of satisfactory trust relationships between nodes that have no past interaction history. The authors outlined a composite trust algorithm that combines social and Quality of Service (QoS) trust. This was achieved by applying a ranked Stochastic Petri Nets (SPN) model to depict the behaviour of a node, with integrated intelligence to trade off trust space for trust level over a given period. Through numerical analysis, the authors determined the best trust chain length to optimise the trust level of collaborating nodes on a given trust chain. Their model incorporated the unique characteristics of MANETs, and they showed that utmost reliance on subjective reputation in computing trust makes a node more susceptible to risk, while utmost reliance on recommendations from other network nodes produces overly conservative trust relationships, which may lead to a loss of cooperative opportunities.
In their proposed model, Buchegger and Boudec [
33] analysed an RTM system for MANETs and peer-to-peer networks in which both direct observations and second-hand reports are used in computing reputation and trust values for nodes. The authors critically analysed the effects of rumour spreading in a MANET and were able to filter false reports from liars before calculating the respective reputation and trust values of the nodes. By using accurate second-hand reports, the authors increased the robustness of their RTM system and sped up the detection of malicious nodes.
In their proposed model, He et al. [
27] employed a secure and objective reputation-based incentive scheme for MANETs. The reputation of nodes in their model is computed and quantified by objective measures, and the dissemination of reputation is carried out efficiently using secure one-way-hash-chain-based authentication. Their model also uses punitive measures to encourage packet forwarding and penalise selfish nodes by probabilistically dropping packets that originate from those nodes.
After critically reviewing the related works discussed above, we identified some unaddressed issues in existing RTM systems [5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29] relating to fairness in their mode of operation. Our proposal considers that nodes have limited energy, and it caters to situations in which an active node’s performance level is hampered by low energy. It recognises that genuine nodes that are unable to actively forward packets due to low energy may still provide accurate recommendations, which usually require little energy to produce. Furthermore, the qualitative and quantitative categorisation of nodes in existing RTM models has not been exhaustively analysed. Some past works on RTM systems [5,6,8,9,10,11,12,13,14,15,16,17,18,30] categorised nodes based on the good or bad behaviours they exhibited in the network. In this paper, we present a categorisation in which a distinct baseline is drawn between a node’s active network operations, such as successful or unsuccessful forwarding of packets, and its passive operations, such as the accuracy of the recommendations it disseminates about other nodes. This area has not been extensively investigated in past research.
In addition, our proposed model uses the observable optimal weights in evaluating the trustworthiness of a node in the network, in order to introduce and maintain candour in the trustworthiness evaluation process and thus determine the true nature of a node from the computed total reputation and trust values.
3. Two-Dimensional View of a Node’s Network Activities
The two-dimensional view takes into account a node’s active network operations (i.e., forwarding packets for other nodes), which are used in evaluating the reputation of the node, and its passive activities, such as sending second-hand reputations to other nodes. It yields a two-dimensional categorisation of nodes in which a node’s active network operations and the accuracy of its second-hand reputations about other nodes are used together in evaluating its trustworthiness. In Figure 1, the Y-axis represents the weight of the accuracy of the second-hand reputations a node makes about other nodes, and the X-axis represents the weight of the evaluated total reputation values of a node in the network. A node that falls into zones 1, 2, and 3 can be classified as trustworthy in the sense that its second-hand reputations about other nodes are of high quality and very accurate; such a node is a good recommender and can be said to be honest or an accurate accuser. In contrast, its trustworthiness evaluation based on its actual network operations may differ. For instance, nodes in zone 1 are untrustworthy because of very poor network operations. For the nodes in zone 2, the trustworthiness is undecided or uncertain, which may be a result of limited first-hand knowledge of their actual network operations. The nodes in zone 3 can be classified as trustworthy both with regard to their recommendations about other nodes and with regard to their actual network operations. Examples of nodes in this category are cooperative and good nodes.
In the case of zones 4, 5, and 6, the nodes found in those zones have undecided or uncertain trustworthiness in terms of the accuracy of their recommendations, as there is not enough information to reach a decision. In terms of their network operations, nodes in zone 4 are evaluated as untrustworthy because of their poor network operations, while the nodes in zone 5 have undecided or uncertain trustworthiness. These nodes are most likely newcomers to the network or inactive, broken, or faulty nodes.
Lastly, nodes in zones 7, 8, and 9 are all untrustworthy in terms of the poor quality and accuracy of their recommendations. When evaluated on their actual network operations, nodes in zone 7 are classified as untrustworthy; these nodes are mostly malicious, attackers, or intruders. The nodes in zone 8 have undecided or uncertain trustworthiness, while those in zone 9 are classified as trustworthy. Most nodes found in zones 8 and 9 could be called liars, malicious accusers, or bad recommenders.
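The zone assignment implied by Figure 1 can be sketched as a simple lookup. The band boundaries used below (0.33 and 0.66) are illustrative placeholders only, since the actual weights are derived later from the computed reputation and trust values; only the row/column ordering of the nine zones follows the description above.

def band(value):
    """Map a normalised weight onto a low / uncertain / high band (placeholder cut-offs)."""
    if value < 0.33:
        return "low"
    if value < 0.66:
        return "uncertain"
    return "high"

def zone(recommendation_accuracy, total_reputation):
    """Return the zone (1-9): rows follow the accuracy of second-hand reputations
    (Y-axis), columns follow the total reputation from network operations (X-axis)."""
    rows = {"high": 0, "uncertain": 1, "low": 2}   # y-axis
    cols = {"low": 0, "uncertain": 1, "high": 2}   # x-axis
    return 3 * rows[band(recommendation_accuracy)] + cols[band(total_reputation)] + 1

# A node with accurate recommendations but poor forwarding lands in zone 1,
# while a cooperative and honest node lands in zone 3.
print(zone(0.9, 0.1), zone(0.9, 0.9))   # -> 1 3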
A similar categorisation was carried out by Zouridaki et al. [
19], but their approach was based only on a node’s ability to forward data packets and make good recommendations. Other scenarios, such as a node carrying out grey-hole or black-hole attacks, were not considered. Furthermore, the approach presented in this paper considers a node that is new to a network. Moreover, the mathematical analysis for the categorisation is based on the Dirichlet distribution, which is fully analysed in a later section. Thus, the two-dimensional view approach aims to effectively evaluate the trustworthiness of a node.
3.1. Behaviours: Friendly and Threat Models
To observe, understand, analyse, and categorise the behaviours of nodes in the proposed model, various behaviours have been designed which are exhibited and monitored during network operations. To ensure the continuity of the network in the presence of selfish and malicious nodes, one of the proposed node behaviours is a well-behaved node, which always guarantees that all the packets destined for other nodes are forwarded as expected and never dropped as long as a valid route is available. The selfish and malicious nodes serve as threat models while the well-behaved node serves as the friendly model. The different models depicting the node behaviours exhibited during network operations are given as:
Good node: Nodes in this category respond to all route requests as expected and ensure that all the data and control packets that are meant for other nodes are forwarded to the next-hop node or the recipient node if they are the last hop in the path.
Periodically selfish node: A periodically selfish node acts selfishly at regular intervals. The nature of its behaviour is aimed at conserving its limited energy resources rather than being malicious. With regard to contributing to network operations, it periodically participates in route discovery processes by forwarding control packets for other nodes, because control packets are smaller than data packets and consume less energy during transmission. Whenever a data packet is presented to this node for onward transmission, the data packet is dropped. In this threat model, a node intermittently replies to route requests; for instance, it drops 2 out of every 3 control packets it receives, while every data packet it receives for forwarding is dropped. Nodes that forward data packets to this node may perceive the network link as broken when they do not receive acknowledgements for the first set of data sent. The network link to this node is then deleted from their route entries, but after a while, the connection is re-added when the node participates in the route discovery process again.
Low energy-constrained selfish node: Nodes in this category act as good nodes during the first period of the network operations. At a later stage of the network operations, they act as periodically selfish nodes due to reduced energy levels.
Grey-hole node: A grey-hole node advertises valid routes. This node responds to all route requests it receives, but it periodically drops the data packets that are meant to be forwarded to the next-hop node or the recipient node if it is the last hop in the routing path. This node carries out grey-hole attacks during network operations.
Black-hole node: A black-hole node advertises valid routes whenever a route is requested. For example, in the Dynamic Source Routing (DSR) protocol, a black-hole node increases the sequence number in the route reply packet to the highest number possible so that the source node sees it as the nearest node to the required destination. Furthermore, it continuously drops all the data packets that are meant for any other node.
Malicious packet modifiers: Nodes in this category modify packets sent to them before forwarding them to the next hop. Malicious packet modifiers may modify the destination address of a packet before rerouting it, which could lead to a denial-of-service attack if a specific node is targeted. They could also decrease the Time-to-Live (TTL) of received packets to an artificially low value before forwarding them, so that a packet with a reduced TTL value may never reach its intended destination. An illustrative sketch of the forwarding decisions made by these behaviour models is given below.
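The following sketch (illustrative assumptions only, not the simulation code used in this work) summarises the per-packet forwarding decision of each behaviour model described above; the grey-hole drop probability of 0.5 is a placeholder for its intermittent dropping.

import random

def decide(behaviour, packet_type, ctrl_seen):
    """Return 'forward', 'drop', or 'modify' for one packet presented to a node.
    ctrl_seen counts the control packets received so far by this node."""
    if behaviour == "good":
        return "forward"                                   # always forwards
    if behaviour == "periodically_selfish":
        if packet_type == "control":
            # forwards roughly 1 in 3 control packets, drops the rest
            return "forward" if ctrl_seen % 3 == 0 else "drop"
        return "drop"                                      # drops every data packet
    if behaviour == "grey_hole":
        if packet_type == "control":
            return "forward"                               # responds to all route requests
        return "drop" if random.random() < 0.5 else "forward"   # intermittent data drops
    if behaviour == "black_hole":
        return "forward" if packet_type == "control" else "drop"  # advertises routes, drops all data
    if behaviour == "malicious_modifier":
        return "modify"                                    # alters destination address or TTL before forwarding
    return "drop"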
The other behaviours are based on the accuracy of second-hand reputations a node provides about other nodes. This behavioural model is categorised into two groups. These groups are given as:
Honest Node: Nodes in this category disseminate accurate second-hand reputations about other nodes that they have had interactions with in the past. This is aimed at computing an accurate total reputation of the nodes in the network. The dissemination of accurate second-hand reputations about other nodes in the network assists nodes with limited first-hand information about a target node to evaluate and decide if a node can be relied upon or not.
Dishonest Node: Nodes in this category disseminate false or incorrect second-hand reputations about other nodes in the network. This may be either for badmouthing or for ballot-stuffing the target nodes. Badmouthing of a node is a case whereby false second-hand reputation causes the evaluated trustworthiness of a node to decrease, while ballot-stuffing is a situation whereby the false second-hand reputations cause the evaluated trustworthiness of a node to increase.
3.2. The Importance of the Friendly and Threat Models
The described friendly and threat models are essential to this research work. The behaviours exhibited by these nodes will aid in evaluating the performance of the proposed RTM model under various conditions and scenarios. Some of these behaviours are exhibited as a combination such as a good and honest node. This type of node is good in terms of continuous forwarding packets and honest in terms of the accuracy of the second-hand reputation it provides about other nodes. The goal of having threat models such as grey-hole, black-hole, periodically selfish nodes, and malicious modifiers is to determine the performance of the network under various attacks, and a combination of various behaviours such as low energy-constrained selfish and honest behaviours will aid in analysing the proposed two-dimensional view of a node’s network activities. The total trustworthiness evaluation of a node in the proposed model is based on a combination of a node’s active and passive network activities.
5. Trust Module
The trust module computes and manages the trust evaluation of nodes in the network. Trustworthiness is an essential property of nodes because it helps in making informed routing decisions. Evaluating trustworthiness helps to distinguish nodes that are making positive contributions from nodes that are continuously misbehaving. Every node in the proposed model stores evaluated trust values in the database depicted in Figure 2. The novel candour two-dimensional trustworthiness evaluation of a node is a combination of two important components: the computed total reputation values of a node and its trust value in terms of the accuracy of the second-hand reputations it provides about other nodes. The former has been analysed in
Section 4.2.5 and the latter will be discussed in the next subsection.
5.1. Trust Based on Accuracy of Second-Hand Reputations
To compute the trust value of a node with regard to the accuracy of its second-hand reputations about other nodes, the Bayesian approach is employed, which means trust is expressed over only two possible instances of behaviour: trustworthiness in terms of providing accurate second-hand reputations about other nodes, and untrustworthiness in terms of providing inaccurate second-hand reputations about other nodes. This differs from the computation of node reputation using the Dirichlet distribution, for which the behaviours of nodes are perceived as benevolent (forwarding data packets), selfish (dropping data packets), or malicious (modifying packets before forwarding). Since the trust in terms of accuracy of second-hand reputations has two possible outcomes, employing the Beta distribution as a prior is adequate for the computational process. Mathematically, the Beta distribution is a special case of the Dirichlet distribution with only two probability density function (pdf) shape parameters. The Beta distribution is also conjugate, meaning that the posterior probability possesses the same functional form as the prior. Hence, when the stored trust value of a node in terms of the accuracy of its second-hand reputations about other nodes is updated, the trust value still follows the Beta distribution.
Let the trustworthiness of a node A about a target node B, in terms of the accuracy of the second-hand reputations that node B gives about other nodes, be given as:
where
represents trustworthiness for accurate second-hand reputations and
represents untrustworthiness for inaccurate second-hand reputations. At the onset of the network, when a monitoring node has no prior knowledge of a target node’s ability to give accurate second-hand reputations,
, which indicates a uniform distribution owing to the absence of prior knowledge. As second-hand reputations are received from the neighbouring nodes, the deviation test described in Section 4.2.3 is computed for each set of received reputation values for the target node, using Equation (13).
If the result of the deviation test is valid, the observed trust of the recommending node with regards to the accuracy of second-hand reputations about other nodes is updated positively. On the other hand, if the deviation test is invalid, the observed trust in terms of received inaccurate second-hand reputations is decreased.
Let
when the deviation test is valid (i.e., when it succeeds), and let
when the deviation test is invalid (unsuccessful), the new values of
and
are given as follows:
where
is the discount factor after a given period, and it is such that
Equations (21) and (22) are similar to the equations employed by Buchegger and Boudec in [
5]. For every deviation test executed whenever a second-hand reputation reply is received by the monitoring node, the stored trust data of the recommending nodes, i.e.,
will be updated. The trust value for a node
B as evaluated by a monitoring node
A is determined by the expectation value of the Beta distribution. This is given by the equation below:
The computed expectation value is used when evaluating the trustworthiness of a node in the network using the novel candour two-dimensional trustworthiness evaluation technique.
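A minimal sketch of this Beta-based trust update is given below; the variable names, the form of the deviation test, and the discount value are assumptions standing in for Equation (13), Equations (21) and (22), and the Beta expectation above, which are not reproduced here.

class BetaTrust:
    """Trust in the accuracy of a neighbour's second-hand reputations."""

    def __init__(self, discount=0.9):
        self.alpha = 1.0          # evidence of accurate second-hand reputations
        self.beta = 1.0           # evidence of inaccurate second-hand reputations (Beta(1,1) = uniform prior)
        self.discount = discount  # fading factor applied to past evidence

    def deviation_test(self, reported, own, threshold=0.3):
        """Accept a reported reputation vector if it does not deviate from the
        monitor's own first-hand vector by more than the threshold (illustrative test)."""
        deviation = sum(abs(reported[k] - own[k]) for k in own) / len(own)
        return deviation <= threshold

    def update(self, test_passed):
        """Discount old evidence, then add the outcome of one deviation test."""
        self.alpha = self.discount * self.alpha + (1.0 if test_passed else 0.0)
        self.beta = self.discount * self.beta + (0.0 if test_passed else 1.0)

    def value(self):
        """Trust value = expectation of Beta(alpha, beta)."""
        return self.alpha / (self.alpha + self.beta)

trust = BetaTrust()
own = {"benevolent": 0.8, "selfish": 0.2, "malicious": 0.0}
reported = {"benevolent": 0.75, "selfish": 0.25, "malicious": 0.0}
trust.update(trust.deviation_test(reported, own))
print(round(trust.value(), 2))   # rises above 0.5 after one accurate report

In this sketch an accurate report raises the expectation above 0.5 while an inaccurate one lowers it, and the discount factor fades old evidence so that recent behaviour dominates.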
5.2. Trustworthiness of a Node
The candour two-dimensional trustworthiness evaluation of a node is determined by combining the total reputation values of the node and its trust value in terms of the accuracy of the second-hand reputations it provides about other nodes. This decision is handled by the interaction decision-making module, which is responsible for deciding which nodes are trustworthy enough to carry out reliable network operations. The decision-making process is briefly described as follows. Assume that a monitoring node A wants to determine whether a target node B is completely trustworthy in terms of its actual network operations (what it does) and what it says about other nodes. Node A relies on the computed total reputation values (total reputation vector) and the trust value in terms of the accuracy of second-hand reputations. Let us define the important thresholds f, s, and m, which serve as the expressions of tolerance in terms of reputation for forwarding packets for others, selfishly dropping packets, and maliciously modifying packets before forwarding, respectively. Furthermore, we also define a threshold for the trustworthiness with regard to the accuracy of the second-hand reputations. For node A to classify node B as a totally trustworthy node with regard to its overall network behaviour, the following conditions must be met.
With regards to its behaviours i.e., forwarding packets, dropping packets
and its trustworthiness with regards to the accuracy of its second-hand reputations as
It has already been established that the sum of the directly observed individual reputation values of a target node such as B, as computed by a monitoring node A, equals 1. Consequently, it is expected that the sum of the evaluated total reputation vector components must also be 1, as long as the second-hand reputations are accurate.
Therefore, for node
A to be totally trustworthy, its total reputation with regards to its behaviour must be classified as benevolent and its trust value with regards to the accuracy of second-hand reputations must be classified as honest. Nodes that fall into the category of being totally trustworthy with regards to their entire network operations are permitted to continue their positive network contributions i.e.,
On the other hand, nodes that are categorised as being untrustworthy i.e.,
are punished. Nodes in this category are isolated by ensuring that all the route requests they generate are ignored, and all the paths containing these nodes are deleted from the route cache. Finally, a special case of a node being classified as
selfish but
honest with regard to the accuracy of its second-hand reputations is handled during the trustworthiness evaluation process. Nodes in this category are not totally isolated from the network. Selfish behaviour displayed by a node may be triggered by the node’s physical properties (loss of battery power, or being overwhelmed by route and forwarding requests). It may also be a resolute attempt to conserve its resources (battery and computing resources), or a random failure. On the other hand, misbehaving nodes reduce the reliability of the network. These malicious nodes misroute, modify, or inject packets (making them a part of a different data transfer). These nodes are mostly interested in attacking and damaging the network, and they generally lower the security and integrity of the network traffic. The interactions between the monitoring, reputation, and trust modules can be seen in
Figure 4.
Figure 4 presents an overview of the entire working of the proposed system.
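A hedged sketch of this interaction decision-making step is given below; the threshold values are placeholders consistent with the ranges discussed later in Section 8, and the category labels are simplifications of the classification described above.

def classify(total_rep, trust_value, f=0.75, s=0.25, m=0.0, t=0.75):
    """Combine the total reputation vector and the second-hand-reputation trust
    value into a single trustworthiness category (illustrative thresholds)."""
    benevolent = (total_rep["benevolent"] >= f
                  and total_rep["selfish"] <= s
                  and total_rep["malicious"] <= m)
    honest = trust_value >= t

    if benevolent and honest:
        return "totally trustworthy"        # allowed to keep using network resources
    if total_rep["malicious"] > m or (not honest and not benevolent):
        return "untrustworthy"              # isolated: route requests ignored, paths removed from cache
    if not benevolent and honest:
        return "selfish but honest"         # penalised, but not totally isolated
    return "undecided"

print(classify({"benevolent": 0.85, "selfish": 0.15, "malicious": 0.0}, 0.9))
# -> totally trustworthy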
7. Results and Analysis
This section presents and analyses the simulation results, showing the reputation and trust values that a node computes after successful observations of its neighbours’ activities. Comprehensive analyses of the computed direct (first-hand) reputation values, the second-hand reputations, and the total reputation of nodes aid in understanding nodes’ behaviours in a MANET without bias, such that both the negative and positive behaviours exhibited by a node are reflected in the evaluated reputation and trust values.
7.1. Evaluation of Directly Computed (First-Hand) Reputation
The expectation values of the Dirichlet distribution were used in computing the various reputation values of nodes in the network.
Figure 5 and
Figure 6 present the computed reputation vectors of a target node
N0 by two monitoring nodes (
N1 and
N3). The designated behaviours of
N0,
N1, and
N3 are shown in
Table 4. The x-axis represents the simulation time.
It is expected that a good node such as
N0 will continuously forward every data packet that is presented to it subject to a valid route being available. The direct reputation vector of a node in the proposed model, given
, is a combination of three components as explained in
Section 4.2.2.
represents the 3-tuples
As illustrated in
Figure 5 and
Figure 6, the target node (
N0) is observed as forwarding data packets continuously by
N1 and
N3, which is reflected in the computed values of
. It can also be observed that both monitoring nodes (
N1 and
N3) computed respective values for
from
N0’s activities. Although
N0 was selected to display benevolent behaviour during the simulation, the computed
was a result of incorrect observation outcomes. Further investigation of the NS2 trace files shows that the few packets dropped by N0 were a result of buffer overflow of the packet queue. The packet queue holds packets that are awaiting forwarding; it has a maximum capacity (50 packets in the simulations) while the forwarding node searches the route cache for the required path. As more packets are received for forwarding, a good node may unintentionally drop packets due to buffer overflow. Additionally, some of the dropped packets were a result of packet expiration caused by queue time-out or the packet TTL (Time-To-Live) reaching zero. Every packet has a limit on how long it can stay in a queue before it times out; if the required path is not found before the queue times out, the packet may be dropped. These various packet drops may result in a good node being perceived as displaying selfish behaviour. However, the direct reputation values are computed only after monitoring is completed in the given monitoring interval, as described in
Section 4.2.2. As long as
does not exceed the defined threshold, the value of
is negligible. On the other hand, no value for
was computed all through the various monitoring windows, which was expected.
The computed values of
N0 by
N1 and
N3 demonstrate that the expectation values of the Dirichlet distribution are a viable mathematical solution to determine the reputation values of nodes in a network. Before this notion was fully established, the computed direct reputation values of three other behaviours (the three misbehaviours: periodically selfish node, grey-hole node and black-hole node) were also analysed as seen in
Figure 7,
Figure 8,
Figure 9 and
Figure 10.
Figure 7 presents the computed reputation values of
N3, which exhibits a periodically selfish behaviour. Due to its behavioural nature,
N3 rarely responds to route requests, which means that data packets are scarcely presented to it for forwarding. Any data packets it receives as a result of participating in route discovery processes are dropped. In
Figure 7, it is observed that the value of
remains lower than 0.1 throughout the recorded simulation time, while
is higher than 0.9. This indicates that
N3 displayed the expected behaviour and that the reputation values computed using the expectation of the Dirichlet distribution can model this behaviour.
Similarly,
Figure 8 and
Figure 9 present the computed direct reputation values of
N8 by
N1 and
N8 by
N7. The behaviours of a grey-hole node are sometimes difficult to perceive from monitoring because of the deceptive nature of the node. A grey-hole node occasionally forwards data packets, but it can easily switch behaviours by dropping data packets maliciously. As shown in
Figure 8, after the first window of observation,
N1 could have observed an equal number of packets being dropped and forwarded, which is reflected in the computed reputation values (
and
were each computed as 0.5). Subsequent computed reputation values show
increasing to 0.9 while
decreased to 0.1. As more successful observation windows are completed, the deceptive nature of
N8 is reflected in the computed reputation values as observed in
Figure 8. The same trend is also observed from the computed reputation values carried out by
N7 after observing the activities of
N8 as shown in
Figure 9. An interesting feature of the graphs in
Figure 9 is the gradual decrease and increase in the computed values of
, and the reverse is observed in the values of
. After the first observation window was completed, the computed reputation values
, were ⟨0.5, 0.5, 0⟩. Subsequent computations show that the values of
,
varied as the simulations progressed. This sort of behaviour could be difficult for the monitoring node to capture, which is reflected in the computed values of
and
as the simulation time reached the 850 s mark (the value of
registered a sharp decline, while
registered a sudden increase) as observed in
Figure 9. Thus, the incorporation of genuine second-hand reputations from neighbouring nodes could provide a monitoring node with further information about a target node. The decision to incorporate second-hand reputations is examined in the next subsection.
7.2. Incorporating Accurate Second-Hand Reputations
Genuine aggregated second-hand reputations from 1-hop neighbours can be incorporated into the directly computed reputation values to obtain the total reputation values for a node being monitored. Honest second-hand reputations from 1-hop neighbours could benefit a monitoring node observing a grey-hole target, as illustrated by the examples in
Figure 8 and
Figure 9 in
Section 7.1. Due to the changing behaviours of
N8,
N7 could find it difficult to reach a decision about the behaviours of a grey-hole node (
N8) based on the directly computed reputation values. Assume that
N1 provides genuine second-hand reputations about other nodes. If
N7 and
N1 are 1-hop neighbours,
N7 can send a reputation request to
N1 about
N8 during the simulations. From
Figure 8 it can be observed that
N1 computed reputation values for
N8 from approximately 135 s of the simulation time. If
N7 sends a reputation request about
N8 to
N1, the values contained in the reputation reply will pass the deviation test that is performed to ensure that second-hand reputations are valid. The genuine second-hand reputations can be incorporated in calculating the total reputation of
N8.
The graphs present simulation results showing the comparison of computed direct reputation values and second-hand reputations from neighbouring nodes. A target node
N1 (exhibiting benevolent behaviour) was monitored by
N0.
N2,
N3,
N4, and
N5 are 1-hop neighbours to
N0. The behaviours displayed by the various nodes during the simulations are given in
Table 5. The second-hand reputations from (
N2, …,
N5) represent benevolence, selfishness, and maliciousness.
N3 and
N5 are designated to act as dishonest nodes so the second-hand reputation values they passed on to
N0 are inaccurate (the inaccurate second-hand reputations from dishonest nodes are generated such that they reflect a different nature from the behaviour being observed). As observed in
Figure 11,
Figure 12 and
Figure 13, there are significant differences in the respective second-hand reputation values from
N3 and
N5 when compared to the reputation values computed by
N0,
N2, and
N4. When the deviation test is carried out on the received second-hand reputation values at the various time intervals, the values from
N3 and
N5 will always fail the test because for each computed case the result will be higher than
which represents the threshold, and the result must not exceed this value for it to be valid as evaluated in
Section 5.1.
For node N0 to compute the total reputation values of the target node N1, N0 aggregates the genuine second-hand reputations from nodes N2 and N4 and combines them with its own directly measured reputation values to obtain the total reputation values for N1.
Their subsequent trustworthiness with regards to the accuracy of second-hand reputations is updated positively.
One of the benefits of incorporating second-hand reputations from genuine neighbours is that it could speed up the process of ascertaining the trustworthiness of a target node.
Furthermore, accurate second-hand reputations could also help a monitoring node to decide if its neighbouring nodes are honest.
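This aggregation step can be sketched as follows; the deviation test, the averaging of accepted reports, and the blending weight w are illustrative assumptions rather than the exact formulation of Section 4.2.5.

def total_reputation(first_hand, second_hand_reports, w=0.7, threshold=0.3):
    """Blend the monitor's first-hand reputation vector with second-hand
    reputation vectors that pass the deviation test (placeholder test and weight)."""
    accepted = [r for r in second_hand_reports
                if sum(abs(r[k] - first_hand[k]) for k in first_hand) / len(first_hand) <= threshold]
    if not accepted:
        return dict(first_hand)                     # no genuine reports: keep direct observations only
    avg = {k: sum(r[k] for r in accepted) / len(accepted) for k in first_hand}
    return {k: w * first_hand[k] + (1 - w) * avg[k] for k in first_hand}

own_view = {"benevolent": 0.80, "selfish": 0.20, "malicious": 0.00}   # direct view of the target
from_honest = {"benevolent": 0.85, "selfish": 0.15, "malicious": 0.00}   # accepted by the deviation test
from_liar   = {"benevolent": 0.10, "selfish": 0.90, "malicious": 0.00}   # rejected by the deviation test
print(total_reputation(own_view, [from_honest, from_liar]))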
7.3. Evaluation of the Two-Dimensional Trustworthiness of Nodes
Evaluating the total reputation values of a target node requires the combination of the directly computed reputation values and the aggregated accurate second-hand reputations from honest nodes. In the last section, it was established through analysing simulation results that the Dirichlet distribution is effective in modelling the behaviours of nodes. One important aspect of this research work is to determine how the observed and evaluated optimal weight of a target node can be used in determining the trustworthiness of a node. The optimal weights in this case are the most favourable evaluated total reputation and trust values observed by a monitoring node before establishing the trustworthiness of a target node in the proposed model. This requires analyses of various computed total reputation values of different target nodes and the trust values of the nodes based on the accuracy of the second-hand reputations it provides about other nodes. To achieve this goal, simulations were carried out using the parameters given in
Table 6.
The simulations were carried out using a fixed network of 20 nodes. Twenty different scenarios representing 20 different network topologies were randomly generated, aimed at replicating real-life ad hoc networks. One important factor in the simulations is getting the right proportion of node behaviours with regard to the benevolent, selfish, and malicious nature of nodes. For sustainable and effective simulations, it is important to ensure that nodes that will continue to forward packets for other nodes are readily available. This ensures that the network data transfer process is not halted as the simulation progresses. Having more good nodes in the network ensures data availability, increases the network lifetime, and improves the probability that data packets from a source will reach the desired destinations. With regard to second-hand reputations from neighbouring nodes, there is a need to balance the proportion of honest recommenders and liars. A scenario in which there are only liars in the network would defeat the goal of evaluating the trustworthiness of a target node based on what it does with regard to packets and what it says about other nodes.
Observing the two-dimensional view of a node’s network activities leads to the novel candour two-dimensional trustworthiness evaluation technique, which determines the trustworthiness of a node based on the two important qualities proposed at the onset of this research work: a target node’s total reputation, which measures its ability to forward packets, and its honesty, which measures its ability to provide genuine second-hand reputations. From the computed values of the total reputation and trust of
N1 and
N2 as observed in
Figure 14 and
Figure 15, the values for the set of thresholds ⟨f, s, m⟩ defined in
Section 5.2 can be derived. However, before specifying the threshold values that a target node must attain, or not exceed, to be categorised as benevolent, selfish, or malicious, an overview of how other behaviours were observed and evaluated is first presented.
Figure 16 and
Figure 17 show the graphs of the computed total reputation and the trust values of nodes
N6 and
N11, designated to act as good nodes with regard to packet forwarding and dishonest nodes with regard to the second-hand reputations they provide about other nodes. It is expected that the computed total reputation values of
N6 and
N11 (
) would increase as the network operation progresses. This is mainly because packets presented to them are forwarded to the desired destination or the next hop, as the case may be. In terms of the reputation values measuring the selfishness and maliciousness of nodes
N6 and
N9, as observed in
Figure 16 and
Figure 17, the computed values of
are within the range of (0.02, 0.2) while that of
is zero all through the observed network operation. The computed values of the 3-tuples
confirm that nodes
N6 and
N9 exhibited the expected behaviours as observed and modelled by
N7 and
N11 using the Dirichlet distribution and second-hand reputations from neighbours. On the other hand, the trust values of both nodes are evaluated to be within the range of (0.46, 0.54). When compared to the computed trust values of
N1 and
N2 as observed in
Figure 14 and
Figure 15,
N1 and
N2 performed far better than
N6 and
N9. Given these variations in the computed trust values, it is fair and appropriate to ensure that when categorising the four nodes
N1,
N2,
N6, and
N9, a unique distinction can be drawn as to which nodes are good enough to be called totally trustworthy. This distinction is not required if the nodes behave badly such as being selfish and disseminating false second-hand reputations as seen in
Figure 18 and
Figure 19.
As observed in
Figure 18 and
Figure 19, the computed total reputation values
which measures the selfishness of nodes
N14 and
N16 were evaluated to be within the range of (0.62, 0.8) and (0.68, 0.86) respectively. This reflects the expected designated behaviours of nodes
N14 and
N16 and it confirms that the two monitoring nodes
N12 and
N18 successfully observed their packet forwarding activities. The computed values of
and
also reflect the behaviours of both nodes. Similarly, being dishonest nodes, it is expected that the observed computed trust values of nodes
N14 and
N16 will remain lower all through the simulations when compared to those of honest nodes like nodes
N1 and
N2 as observed in
Figure 18 and
Figure 19. To ascertain the total trustworthiness of a node using the candour two-dimensional trustworthiness evaluation technique, nodes
N14 and
N16 will be categorised as being totally untrustworthy, which is reflected in the computed total reputation and trust values as observed and evaluated by nodes
N12 and
N18. If punitive measures were to be taken against nodes that fall under this category, it would be justified if nodes
N14 and
N16 are denied the limited available resources.
8. Discussions
In the process of evaluating the trustworthiness of a node, the candour concept must be preserved. For instance, if a target node was initially perceived as benevolent, based on its observed packet forwarding activities, and honest with regard to the accuracy of its second-hand reputations, the target node will be categorised as totally trustworthy if its computed total reputation and trust values meet the required thresholds. If the situation changes with regard to its packet forwarding activities as a result of reduced energy levels after subsequent monitoring intervals (observation windows) are completed, this node may be categorised as selfish if the computed
value falls below the threshold while that of
increases. Typical scenarios are illustrated in
Figure 20 and
Figure 21, which present the computed total reputation and trust values of nodes
N5 and
N15.
N5 and
N15 exhibited more benevolent behaviours than selfish behaviours in the first part of the simulations and later changed their behaviours to more selfish than benevolent as their energy levels dropped to a set threshold. The observed weights are the computed total reputation values and the trust values such as:
, and
.
Further analyses of the computed values in
Figure 21 show that the computed total reputation value
of node
N5 dropped below 0.5 after 570 s. In this scenario, if node
N5 is categorised as selfish and penalised afterward, the monitoring node may be justified as long as the penalty does not involve total isolation of node
N5 from the network, due to its continuous dissemination of genuine second-hand reputations. For candour, which represents fairness, to be incorporated into the categorisation of nodes, the optimal weights of the set thresholds that determine the trustworthiness of nodes are specified within a given range such that
= ⟨(0.5 → 0.75),(0.50 → 0.25), 0⟩. This argument can be further justified when the computed total reputation and trust values of node
N15 depicted in
Figure 20 are analysed. Node
N15 is a typical example of a node that may be unfairly categorised if the threshold values that determine the trustworthiness of a node are constant.
As observed in
Figure 20, the computed
gradually increased as the simulation progressed, which is likely due to more data packets being forwarded, as observed by
N10 and good second-hand reputations from node
N10’s neighbours about node
N15. The high computed trust values of node
N15 are a result of the accurate second-hand reputations it provided to node
N10. These remained high and steady all through the simulations, which is expected because node
N15 was designated to always provide genuine second-hand reputations. As the simulation progressed, the computed values of
gradually decreased while
increased. The thresholds f and s, which determine whether a node is benevolent or selfish, are set to 0.75 and 0.25, respectively. This ensures that node
N15 will be perceived as exhibiting a selfish behaviour between 390 → 400 s due to
dropping below 0.75 and
increasing above 0.25.
The evaluation process will be deemed fair if node
N15 fails in all aspects such as
Assuming
is given as 0.75. If the computed trust values (
) of node
N15 as observed in
Figure 20, are above 0.75, it may be unfair if node
N15 is categorised as selfish and later penalised due to the computed
dropping slightly below f and the computed
increasing slightly above s. To maintain fairness in the trustworthiness evaluation of nodes based on the computed total reputation and trust values, as observed in
Figure 20, the set of threshold values
should be within a given range. As long as the computed trust values
remain above the set threshold
, and the total reputation value measuring selfishness,
, does not fall below the lower boundary of the given range (0.75, 0.5), a target node such as
N15 should not be penalised and isolated from the network.
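The range-based (candour) check argued for here can be sketched as follows; the band values follow the optimal weights discussed in this section, while the variable names and the exact form of the test are assumptions.

F_RANGE = (0.50, 0.75)   # acceptable band for the benevolence component of the total reputation
S_RANGE = (0.25, 0.50)   # acceptable band for the selfishness component
M_MAX = 0.0              # no tolerance for malicious behaviour
T_MIN = 0.75             # trust threshold for second-hand reputation accuracy

def should_isolate(total_rep, trust_value):
    """A node that still provides accurate second-hand reputations is not isolated
    as long as its total reputation stays inside the candour bands."""
    if total_rep["malicious"] > M_MAX:
        return True                                    # malicious behaviour is never tolerated
    within_band = (total_rep["benevolent"] >= F_RANGE[0]
                   and total_rep["selfish"] <= S_RANGE[1])
    return not (trust_value >= T_MIN and within_band)

# N15-like node: benevolence slipped just below 0.75 but trust stays high -> not isolated.
print(should_isolate({"benevolent": 0.70, "selfish": 0.28, "malicious": 0.0}, 0.85))   # False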
Evaluating the trustworthiness of a node using these conditions does not compromise the security of the network and will not undermine the idea of a trust and reputation system. Rather, the concept of candour is enshrined in the trustworthiness evaluation of nodes, which is necessary due to the limitations associated with mobile nodes in MANETs. From the analyses of the simulation results, it was established that the trustworthiness of a node in the proposed model is evaluated using the novel candour two-dimensional trustworthiness evaluation technique. The first dimension is the total reputation of a node, which is measured from the 3-tuples
represent benevolence, selfishness, and maliciousness respectively. The second view is the trust value
which measures the accuracy of the second-hand reputations a node provides about other nodes. From the analysed computed total reputation values, it was established that, for the candour concept to be enshrined in the trustworthiness evaluation of a node, the set threshold values that determine the categorisation of nodes must be set within a given range to accommodate changing network situations and ensure that nodes are not unfairly penalised or isolated in the network. Various network scenarios were analysed from the reputation and trust values computed for the simulated behaviours of the network nodes. From the various scenarios analysed, it was concluded that, for fairness to be enshrined in the trustworthiness evaluation process, the calculated total reputation and trust values of a target node must meet the following conditions:
where the given values represent the optimal weights
for the thresholds that must be met before the trustworthiness of a node is established. The computed total reputation values
are evaluated such that:
In all the scenarios in which the trustworthiness of a node in the network is determined, the value of m is set to zero. This is to ensure that the proposed model does not tolerate or encourage the operations of malicious nodes. Selfish behaviour may be a direct result of a node’s physical properties (being overloaded with forwarding requests, reduced energy levels, or loss of battery power), which may be partially tolerated. On the other hand, malicious nodes may modify, inject, or misroute packets; their sole aim is to undermine security and integrity by attacking the network. This form of behaviour should not be tolerated in any form.