Article

Deep Learning Cluster Structures for Management Decisions: The Digital CEO †

Intelligent Systems and Networks Group, Imperial College London, London SW7 2AZ, UK
† This article is an extended version of the paper presented at the Intelligent Systems Conference (2018).
Sensors 2018, 18(10), 3327; https://doi.org/10.3390/s18103327
Submission received: 1 August 2018 / Revised: 12 September 2018 / Accepted: 19 September 2018 / Published: 4 October 2018

Abstract

This paper presents a Deep Learning (DL) Cluster Structure for Management Decisions that emulates the way the brain learns and makes choices by combining different learning algorithms. The proposed model is based on the Random Neural Network (RNN) Reinforcement Learning for fast local decisions and Deep Learning for long-term memory. The Deep Learning Cluster Structure has been applied in the Cognitive Packet Network (CPN) for routing decisions based on Quality of Service (QoS) metrics (Delay, Loss and Bandwidth) and Cyber Security keys (User, Packet and Node); it includes a layer of DL management clusters (QoS, Cyber and CEO) that take the final routing decision based on the inputs from the DL QoS clusters and the RNN Reinforcement Learning algorithm. The model has been validated under different network sizes and scenarios. The simulation results are promising: the presented DL cluster management structure, as a mechanism to transmit, learn and make packet routing decisions, is a step closer to emulating the way the brain transmits information, learns from the environment and makes decisions.

1. Introduction

Our brain takes decisions in a structured way while performing several functions at the same time. It learns about the environment from our five senses; it stores memories to preserve our identity; it makes judgements on different situations; and it protects itself against external threats or attacks. The brain is formed of clusters of neurons [1] specialized in learning from the different senses, where information is transmitted as positive and negative spikes or impulses. It functions with two types of memory [2]: short-term memory is used for quick decisions and task-related actions, whereas long-term memory preserves our identity and security. Another brain duality consists of its two operation modes [3]: consciousness under normal activities and unconsciousness under emergency situations, such as being under external attack, or routine operations such as storing information while sleeping.
This paper presents the association of the most complex biological system, our brain, with the most complex artificial system, represented by large data networks: the Internet, the information infrastructure of Big Data and the Web. The link between the two is the Random Neural Network (RNN). Data networks collect information from users and transmit it to different locations; to perform this activity, they are required to make routing decisions based on different Quality of Service metrics while storing routing tables in memory under the threat of Cyber-attacks.
This paper proposes a Deep Learning (DL) Cluster Structure for Management Decisions that emulates the way the brain learns and makes choices by combining different learning algorithms. The proposed model combines the Random Neural Network Reinforcement Learning for fast local decisions with DL for long-term memory that remembers the network identity: QoS metrics (Delay, Loss and Bandwidth) and Cyber keys (User, Packet and Node). In addition, this paper includes a layer of DL management clusters (QoS, Cyber and CEO) that take the final routing decision based on the inputs from the DL QoS clusters and the RNN Reinforcement Learning algorithm.
The Deep Learning Cluster Structure has been applied in the Cognitive Packet Network (CPN) for Quality of Service metrics and Cyber Security keys in management decisions based on packet routing and flow control. The RNN Reinforcement Learning algorithm is chosen under normal or conscious operation, due to its fast and adaptable route learning acting as short-term memory, whereas DL clusters are selected under external Cyber-attacks. Deep Learning clusters take routing decisions based on long-term memory in unconscious operation, providing safe and resilient, although inefficient and inflexible, routing.
A review of Cybersecurity, Deep Learning and Deep Reinforcement Learning concepts with the associated literature is provided in Section 2. The mathematical model of the Deep Learning Cluster Structure for management decisions is defined in Section 3. The implementation of the QoS, Cyber and Management clusters is presented in Section 4. The validation of the proposed model under different QoS and Cyber scenarios in small (nine nodes, one decision layer), medium (16 nodes, two decision layers) and large (25 nodes, three decision layers) networks is described in Section 5. Final conclusions are given in Section 6, followed by the References.

2. Research Background

2.1. Cybersecurity

The expansion of the connectivity provided by the Ethernet and Internet protocols has enabled new industrial, technological and social applications and services; however, users are increasingly exposed to new cybersecurity threats and risks. Ericsson [4] introduced cybersecurity issues and threats within power communication systems in a smart grid infrastructure, where network vulnerabilities and information security domains are analyzed. Ten et al. [5] presented a survey on the cybersecurity of critical infrastructure; in addition, they proposed a Supervisory Control And Data Acquisition (SCADA) framework based on four procedures: real-time monitoring, anomaly detection, impact analysis and mitigation strategy. They modeled an attack tree analysis with an algorithm for cybersecurity evaluation that incorporates password policies and port auditing. Cruz et al. [6] presented a distributed intrusion detection system for SCADA systems that includes different types of security agents tuned for each specific domain: the development of network, device and process level capabilities, the integration of signature and anomaly-based techniques against threats and, finally, the adoption of a distributed multi-layered design with message queues to transmit predefined events between elements. Wang et al. [7] proposed a framework to facilitate the development of adversary-resistant Deep Neural Networks (DNN) by inserting a data transformation module between the sample and the DNN that avoids threat samples with a minimum impact on classification accuracy. Tuor et al. [8] presented an unsupervised Deep Learning approach to detect anomalous network activity from system logs in real time, where events are extracted as features and the DNN learns users' normal behavior, flagging anomalies as potential malicious behavior. Wu et al. [9] presented a classification of cyber-physical attacks and risks in cyber manufacturing systems with possible mitigation measures, such as supervised machine learning for classification and unsupervised machine learning for anomaly detection on physical data. Kim et al. [10] proposed a new cyber-defensive computer control system architecture based on the diversification of hardware systems and unidirectional communications, assuming that the detection and prevention of cyber-attacks will never be complete.

2.2. Deep Learning

Deep Learning is characterized by the use of a cascade of l layers of non-linear processing units for feature extraction and transformation; each successive layer uses the output of the previous layer as its input. Deep Learning learns multiple layers of representations that correspond to different levels of abstraction; those levels form a hierarchy of concepts where the higher the level, the more abstract the concepts learned. Schmidhuber [11] examined DL in neural networks; the work includes deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as an indirect search for short programs encoding deep and large networks. The success of machine learning algorithms generally depends on data representation. In order to obtain appropriate objectives for learning good representations, for computing representations, and for the geometrical connections between representation learning, density estimation and manifold learning, Bengio et al. [12] reviewed recent work in the area of unsupervised feature learning and DL, which includes advances in probabilistic models. They proposed a new probabilistic framework to include likelihood-based probabilistic models, reconstruction-based models such as autoencoder variants, and geometrically based manifold learning approaches. Shao et al. [13] proposed a progressive framework for the optimization of deep neural networks. They combine the stability of linear methods with the ability of DL methods to learn complex and abstract internal representations. They introduce a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep learning model, where the loss objective for optimization is a weighted sum of the linear loss of the added new layer and the non-linear loss of the last output layer.
The predominant algorithms to train DL models use stochastic gradient descent; although easy to implement, gradient descent is difficult to tune and parallelize. In order to overcome this issue, Le et al. [14] studied the advantages and disadvantages of off-the-shelf optimization algorithms in the context of simplifying and speeding up the pre-training of unsupervised feature learning. Deep networks have been successfully applied to unsupervised feature learning for single modalities such as text, images or audio. Ngiam et al. [15] proposed an application of deep networks that learns features over multiple modalities, demonstrating that cross-modality feature learning performs better than single-modality learning; the deep network is trained with audio-only data but tested with video-only data, and vice versa. Deep Neural Networks (DNNs) provide good results when large labeled training sets are available; however, they perform worse when mapping sequences to sequences. In order to address this issue, Sutskever et al. [16] presented an approach to sequence learning that makes minimal assumptions about the sequence structure. They use a multilayered Long Short-Term Memory (LSTM) network to map the input sequence to a vector of fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Bekker et al. [17] proposed an intra-cluster training strategy for DL with applications to language identification, where language clusters are used to define a cost function for training a neural network. Their method trains a classifier and analyzes the obtained confusion matrix, where languages are simultaneously clustered in the columns and the rows of the confusion matrix. The language clusters are then used to define a modified cost function for training a neural network that learns to distinguish between the true language and the languages within the same cluster.

2.3. Deep Reinforcement Learning

Deep Learning enables Reinforcement Learning to scale to decision-making problems that were previously unmanageable. A new algorithm called Double Deep Q-Network (Double DQN), which generalizes Double Q-learning to arbitrary function approximation, was proposed by Hasselt et al. [18]. The algorithm includes a DNN and reduces overestimation by decomposing the max operation in the target into action selection and action evaluation. Although DQN solves problems with high-dimensional observation spaces, it can only manage discrete and low-dimensional action spaces. As presented by Lillicrap et al. [19], DQN depends on finding the action that maximizes the action-value function, which in the continuous-valued case requires an iterative optimization process at each step. In order to overcome this issue, they propose an algorithm based on the deterministic policy gradient that can operate over continuous action spaces. A framework for Deep Reinforcement Learning (DRL) that asynchronously executes multiple agents in parallel on multiple instances of the environment was proposed by Mnih et al. [20]. This parallelism decorrelates the agents' data into a more stationary process, using gradient descent for the optimization of deep neural network controllers. A neural network architecture for model-free reinforcement learning, in which a dueling network represents two separate estimators, one for the state value function and the other for the state-dependent action advantage function, was presented by Wang et al. [21]. The two streams are combined via a special aggregating layer to produce an estimate of the state-action value function. A benchmark for continuous control covering simple actions, high state and action dimensionality, tasks with partial observations and tasks with a hierarchical structure was presented by Duan et al. [22]. They divide 31 tasks into basic control, locomotion and partially observable categories in order to address tasks with a higher hierarchical structure, where higher-level decisions can reuse lower-level skills. The challenges posed by reproducibility, experimental techniques and reporting procedures of DRL methods were investigated by Henderson et al. [23]. They present the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in Deep RL more reproducible. DRL was applied to resource management problems in systems and networking by Mao et al. [24], addressing decision-making tasks where appropriate solutions depend on understanding the workload and on experience of the environment.

3. Deep Learning Cluster Structures for Management Decisions

3.1. The Random Neural Network—Reinforcement Learning

The Random Neural Network (RNN) [25,26,27] represents more closely how signals are transmitted in many biological neural networks where they travel as spikes or impulses, rather than as analogue signal levels (Figure 1). The RNN is a spiking recurrent stochastic model for neural networks. Its main analytical properties are the “product form” and the existence of the unique network steady-state solution. It has been applied in different applications including search for exit routes for evacuees in emergency situations [28,29], pattern-based search for specific objects [30], video compression [31], and image texture learning and generation [32].
The RNN is composed of M neurons, each of which receives excitatory (positive) and inhibitory (negative) spike signals from external sources, which may be sensory sources or other neurons (Figure 1). These spike signals arrive following independent Poisson processes of rates λ⁺(m) for the excitatory spike signal and λ⁻(m) for the inhibitory spike signal, respectively, to cell m ∈ {1, …, M}.
The RL algorithm generates an RNN with at least as many neurons as the number of decisions to be taken, where neurons are numbered 1, …, j, …, n; therefore, for any decision i there is some neuron i. Decisions in this RL algorithm with the RNN are taken by selecting the decision j whose corresponding neuron is the most excited, i.e., the one with the largest value of qj. The state qj is the probability that neuron j is excited; these quantities satisfy the system of non-linear equations:
qj = λ⁺(j) / (r(j) + λ⁻(j)).
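As a minimal illustration of this decision rule (the rates below are arbitrary illustrative values; in the full recurrent model the arrival rates λ⁺(j) and λ⁻(j) would also include the spikes exchanged between neurons), the excitation qj can be evaluated for each candidate decision and the most excited neuron selected:

```python
import numpy as np

def neuron_excitation(lam_plus, lam_minus, r):
    # q_j = lambda+(j) / (r(j) + lambda-(j)) for every neuron j
    return lam_plus / (r + lam_minus)

def select_decision(lam_plus, lam_minus, r):
    # The decision taken is the one whose neuron is the most excited (largest q_j)
    q = neuron_excitation(lam_plus, lam_minus, r)
    return int(np.argmax(q)), q

# Three candidate output gates with illustrative (assumed) arrival and firing rates
lam_plus = np.array([0.6, 0.8, 0.4])    # excitatory spike arrival rates
lam_minus = np.array([0.2, 0.3, 0.1])   # inhibitory spike arrival rates
r = np.array([1.0, 1.0, 1.0])           # neuron firing rates
decision, q = select_decision(lam_plus, lam_minus, r)
print(decision, q)
```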

3.2. The Cognitive Packet Network

The CPN was introduced by Gelenbe et al. [33,34,35,36,37]; it has been tested in large-scale networks of up to 100 nodes under worst-case and best-case performance scenarios. The CPN assigns routing and flow control capabilities to the packets rather than to the nodes (Figure 2). QoS Goals are assigned to Cognitive Packets (CP) within the CPN, which the packets follow when making routing decisions themselves with minimum dependence on the nodes. Cognitive Packets learn from the experience of other CPs, with which they exchange network information using Mailboxes (MB), and from their own observation of the network, storing network information in their Cognitive Map (CM).
Given some Goal G that the agent has to achieve, expressed as a function to be optimized, and a reward R obtained as a consequence of the interaction with the environment, successive measured values of R are denoted by Rl, l = 1, 2, …; these are used to compute a decision threshold:
Tl = αTl−1 + (1 − α)Rl,
where α is some constant, 0 < α < 1. The agent takes the l-th decision, which corresponds to neuron j; the l-th reward Rl is then measured and the associated threshold Tl is computed.
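A short sketch of this update follows, assuming, as the tables in Section 5 suggest, that the reward is the inverse of the measured route cost; with α = 0.9 it reproduces the 1/Threshold values reported for the 3 × 3 network in Table 6:

```python
def update_threshold(t_prev, reward, alpha=0.9):
    # T_l = alpha * T_{l-1} + (1 - alpha) * R_l
    return alpha * t_prev + (1.0 - alpha) * reward

# The 3 x 3 network starts on route 1-4-9 with a delay of 130; the route then worsens to 150.
t = 1.0 / 130.0                          # threshold expressed as a reward (inverse delay)
for route_delay in [150.0, 150.0, 150.0]:
    t = update_threshold(t, 1.0 / route_delay)
    print(round(1.0 / t, 2))             # 131.76, 133.38, 134.87 (packets 024-026 in Table 6)
```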

3.3. Deep Learning Clusters

Deep Learning clusters based on the RNN are described by Gelenbe and Yin [38,39]. The model is based on generalized queuing networks with triggered customer movement (G-networks), where customers are either "positive" or "negative" and customers can be moved between queues or leave the network (Figure 3). G-networks were introduced by Gelenbe et al. [40,41]; an extension of this model was developed by Gelenbe et al. [42], in which the synchronized interaction of two queues can add a customer to a third queue. The model considers a special network M(n) that contains n identically connected neurons, each of which has a firing rate r and external inhibitory and excitatory signals λ⁻ and λ⁺ respectively. The state of each cell is denoted by q, and it receives an inhibitory input from the state of some cell u which does not belong to M(n); therefore, for any cell i ∈ M(n) there is an inhibitory weight w(u) ≡ w(u,i) > 0 from u to i.
The DL architecture is composed of C clusters, each of which is an M(n) cluster with n hidden neurons (Figure 4). For the c-th such cluster, c = 1, …, C, the state of each of its identical cells is denoted by qc. In addition, there are U input cells which do not belong to these C clusters, and the state of the u-th input cell, u = 1, …, U, is denoted by q̄u. The cluster network therefore has U input cells and C clusters. The Deep Learning clusters model defines:
  • I = (idl1, idl2, …, idlU), a U-dimensional vector I ∈ [0,1]^U for the input states q̄u of the cells u;
  • w(u,c), the U × C matrix of weights from the U input cells to the cells in each of the C clusters;
  • Y = (ydl1, ydl2, …, ydlC), a C-dimensional vector Y ∈ [0,1]^C for the cell states qc of the clusters c.
The network learns the U × C weight matrix w(u,c) by calculating new values of the network parameters for the input I and output Y using a Gradient Descent learning algorithm, which optimizes the network weight parameters w(u,c) from a set of input-output pairs (iu, yc).
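The following sketch illustrates this training step under stated assumptions: the closed-form activation that maps the inputs and the weights w(u,c) to the cluster states qc in [38,39] is not reproduced in this section, so a placeholder activation function is passed in, and the gradient of the squared output error is estimated numerically rather than with the model's analytical update:

```python
import numpy as np

def cluster_states(i_vec, w, activation):
    # State q_c of each of the C clusters for the U input states i_vec and the U x C weights w.
    # `activation` is a placeholder for the closed-form RNN cluster activation of [38,39].
    return activation(i_vec @ w)

def train_weights(i_vec, y_target, activation, lr=0.05, iters=500):
    # Gradient-descent fit of w from one input-output pair (I, Y); the gradient of the
    # squared output error is estimated numerically (illustrative only).
    u_dim, c_dim = len(i_vec), len(y_target)
    w = np.full((u_dim, c_dim), 0.1)
    eps = 1e-6
    for _ in range(iters):
        base = np.sum((cluster_states(i_vec, w, activation) - y_target) ** 2)
        grad = np.zeros_like(w)
        for u in range(u_dim):
            for c in range(c_dim):
                w[u, c] += eps
                grad[u, c] = (np.sum((cluster_states(i_vec, w, activation) - y_target) ** 2) - base) / eps
                w[u, c] -= eps
        w -= lr * grad
    return w

# Example with a sigmoid placeholder: three 0.5 inputs mapped to three target outputs
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
w = train_weights(np.array([0.5, 0.5, 0.5]), np.array([0.63, 0.52, 0.56]), sigmoid)
```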

3.4. Deep Learning Management Clusters

The Deep Learning Management Cluster was proposed by Serrano et al. [43]. It takes management decisions based on the inputs from different Deep Learning clusters (Figure 5); the Deep Learning Management Cluster supervises the Deep Learning clusters. The Deep Learning Management Cluster defines:
  • Imc = (imc1, imc2, …, imcC), a C-dimensional vector Imc ∈ [0,1]^C for the input states q̄c of the clusters c;
  • w(c), a C-dimensional vector of weights from the C input clusters to the cells in the Management Cluster mc;
  • Ymc, a scalar Ymc ∈ [0,1], the cell state qmc of the Management Cluster mc.
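A corresponding sketch for the management cluster state, with the same caveat that `activation` is a placeholder for the model's closed-form expression:

```python
import numpy as np

def management_cluster_state(i_mc, w, activation):
    # Scalar state Y_mc of the management cluster for the C cluster inputs i_mc,
    # weighted by the C-dimensional vector w.
    return float(activation(np.dot(i_mc, w)))

# Example with a sigmoid placeholder and three illustrative cluster inputs
y_mc = management_cluster_state([0.63, 0.0, 0.0], [1.0, 0.5, 0.5],
                                lambda x: 1.0 / (1.0 + np.exp(-x)))
```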

3.5. Deep Learning Cluster Structures

3.5.1. Deep Learning Cluster Model

The DL Cluster Structure emulates the way the brain learns and makes choices by combining different learning algorithms. The proposed model is based on the RNN Reinforcement Learning for fast local decisions and DL for long-term memory that remembers the network identity: QoS metrics (Delay, Loss and Bandwidth) and Cyber keys (User, Packet and Node). The addition of a layer of DL Management Clusters (QoS, Cyber and CEO) takes the final routing decision based on the inputs from the DL QoS clusters and the RNN Reinforcement Learning algorithm (Figure 6). The Deep Learning Cluster Structure has been applied in the CPN for Quality of Service metrics and Cyber Security keys in management decisions based on packet routing and flow control.
The RNN RL algorithm is chosen by the CEO DL Management Cluster under normal or conscious operation, due to its fast and adaptable route learning acting as short-term memory, whereas DL clusters are selected under external Cyber-attacks, with routing based on long-term memory in unconscious operation as a safe and resilient, although inefficient and inflexible, mode.
The RNN RL algorithm instantaneously updates its network weights based on direct observations from the network; this enables its routing algorithm to take quick decisions that adapt to changes. The Deep Learning algorithm adapts slowly to network changes; the proposed model therefore applies it as reliable and safe routing when the CPN is compromised by a Cyber-attack. This emulates the brain in a subconscious mode with long-term memory, where it takes minimal decisions for defense or survival.

3.5.2. Deep Learning Clusters

DL clusters (Appendix A) learn the network identity, which consists of the QoS network metrics, including the best routes for each QoS metric, and the Cyber keys. A DL cluster is assigned to each QoS metric: Delay, Packet Loss and Bandwidth. Each QoS DL cluster learns the best value of its associated QoS metric together with the associated best node gates. When a node observes a better route with a lower QoS metric, it learns its value and places the corresponding gate in the first position of the QoS DL routing table.
In addition, a DL cluster is assigned to each Cyber key: User, Packet and Node. The User cyber network weights authenticate the application that transmitted the packet. The Packet cyber network weights validate that the transmitted packet is legitimate; this secures the network against Denial of Service attacks. The Node cyber network weights authenticate the nodes within the CPN; this secures the CPN against impostor nodes. The Cyber network weights may have been assigned to the CPN nodes in advance by the network administrator, or the CPN nodes may have learnt them in an initialization mode. When a CPN node receives a CP, each Cyber DL cluster extracts its relevant keys and uses them as input and output values. If the quadratic error between the Cyber DL cluster output vector and the input vector is above a threshold, the CPN node considers the certificate invalid or the CPN under Cyber-attack.
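A minimal sketch of this validation step follows; the threshold value is an assumption made for illustration only, since Section 5.1 and Table 2 quantify the errors actually observed:

```python
import numpy as np

def cyber_key_valid(key, cluster_output, threshold=1e-3):
    # The key carried by the Cognitive Packet is both the input and the expected output of
    # the Cyber DL cluster; a large quadratic error flags an invalid certificate or an attack.
    error = float(np.sum((np.asarray(cluster_output) - np.asarray(key)) ** 2))
    return error <= threshold, error
```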
This model defines three QoS clusters, Delay, Packet Loss and Bandwidth:
  • IQoS = (iQoS1, iQoS2, …, iQoSU), a U-dimensional vector IQoS ∈ [0,1]^U where iQoS1, iQoS2, …, iQoSU take the same value for each QoS type;
  • wQoS(u,c), the U × C matrix of weights of the QoS Deep Learning cluster;
  • YQoS = (yQoS1, yQoS2, …, yQoSC), a C-dimensional vector YQoS ∈ [0,1]^C where yQoS1 is the QoS metric and yQoS2, …, yQoSC are the node's best QoS routing gates.
In addition, this model defines three Cyber clusters, User, Packet and Node:
  • ICyber = (iCyber1, iCyber2, …, iCyberU), a U-dimensional vector ICyber ∈ [0,1]^U where iCyber1, iCyber2, …, iCyberU are the Cyber keys from the CP;
  • wCyber(u,c), the U × C matrix of weights of the Cyber Deep Learning cluster;
  • YCyber = (yCyber1, yCyber2, …, yCyberC), a C-dimensional vector YCyber ∈ [0,1]^C where yCyber1, yCyber2, …, yCyberC are the Cyber keys from the DL cluster.

3.5.3. Deep Learning Management Cluster

The DL management clusters take the overall routing management decision (Figure 7). The QoS and Cyber management clusters analyze the outputs from their associated QoS and Cyber DL clusters respectively. If the Cyber management cluster detects a failure in the cyber certificates, the CEO management cluster routes the Cognitive Packets in safe mode using the QoS DL clusters; otherwise, if the Cyber certificates are valid, the CEO management cluster chooses the route provided by the RNN-RL routing algorithm in normal mode.
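This decision logic can be summarized in a short sketch (function and variable names are illustrative and not taken from the implementation):

```python
def ceo_routing_decision(cyber_certificates_valid, rnn_rl_gate, dl_qos_gate):
    # Normal (conscious) mode: follow the fast, adaptive RNN-RL gate.
    # Safe (unconscious) mode under Cyber-attack: fall back to the long-term-memory
    # route stored by the QoS DL clusters.
    return rnn_rl_gate if cyber_certificates_valid else dl_qos_gate
```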
This model defines the QoS management cluster as:
  • Iqmc = (iqmc1, iqmc2, …, iqmcC), a C-dimensional vector Iqmc ∈ [0,1]^C with the values of the QoS metrics for each QoS cluster;
  • wqmc(c), the C-dimensional vector of weights that represents the Goal = (αDelay, βLoss, γBandwidth);
  • Yqmc, a scalar Yqmc ∈ [0,1] that represents the best QoS-metric routing decision to be taken.
The Cyber management cluster is defined as:
  • Icmc = (icmc1, icmc2, …, icmcC), a C-dimensional vector Icmc ∈ [0,1]^C with the values of the key errors for each Cyber cluster (User, Packet, Node);
  • wcmc(c), the C-dimensional vector of weights that represents the relevance of each Cyber cluster;
  • Ycmc, a scalar Ycmc ∈ [0,1] that represents whether the packet has passed the Cyber network security.
The CEO management cluster is defined as:
  • ICEOmc, a scalar ICEOmc ∈ [0,1] with the value provided by the QoS management cluster;
  • wCEOmc, a scalar wCEOmc ∈ [0,1] that represents the error of the Cyber management cluster;
  • YCEOmc, a scalar YCEOmc ∈ [0,1] that represents the final routing decision.

4. Implementation

The Deep Learning Cluster Structure for Management Decisions is implemented in the CPN using the network simulator OMNeT++ 5.0. The simulation covers square n × n CPNs of several sizes, where all the nodes in the same and adjacent layers are connected to each other. For simplicity, the simulation always considers the first node (Node 1) as the only transmitter and the last node (Node n × n) as the only receiver; the other nodes only participate in the routing of Cognitive Packets. An example of a 4 × 4 network is shown in Figure 8.
Each node has normalized QoS Delay, Loss and Bandwidth metrics relative to its node number: in an n × n network, node i has Delay: 10i, Loss: 5(n² − i + 1) and Bandwidth: 5 + 10i respectively. This assignment is represented in Table 1 for a 4 × 4 network. After two Cognitive Packets are sent with a defined QoS Goal, the QoS metrics swap between the internal nodes within the same column (for the 4 × 4 CPN). The model sets the initial RNN-RL network weights with initialization packets sent through random gates.
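The per-node assignment can be expressed as a small helper that reproduces the initial values of Table 1 (node numbering starts at 1):

```python
def initial_qos(i, n):
    # Delay = 10*i, Loss = 5*(n*n - i + 1), Bandwidth = 5 + 10*i for node i in an n x n CPN
    return {"delay": 10 * i, "loss": 5 * (n * n - i + 1), "bandwidth": 5 + 10 * i}

# initial_qos(5, 4) -> {'delay': 50, 'loss': 60, 'bandwidth': 55}, matching Node 5 in Table 1
```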

4.1. Quality of Service Deep Learning Cluster

The QoS DL clusters have three input cells (U = 3) and three output clusters (C = 3). The model therefore has iQoS-d1 = 0.5, iQoS-d2 = 0.5 and iQoS-d3 = 0.5; yQoS-d1 is the best QoS Delay metric, yQoS-d2 the best Delay route and yQoS-d3 the second-best Delay route. The model follows a similar approach for the Loss and Bandwidth QoS DL clusters respectively. The QoS metric and best-gate values of the DL clusters are normalized to (0.5 + QoS Metric/1000) and (0.5 + Best Gate/100) respectively.
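A sketch of this scaling; for example, a route delay of 130 maps to 0.63, which is the QoS-Delay Iqmc value reported for the 3 × 3 network in Table 9:

```python
def scale_qos_metric(metric):
    # 0.5 + metric / 1000, keeping the value inside (0, 1) for the DL cluster
    return 0.5 + metric / 1000.0

def scale_gate(gate):
    # 0.5 + gate / 100 for the best routing gates
    return 0.5 + gate / 100.0
```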

4.2. Cyber Deep Learning Cluster

The Cyber DL clusters have ten input cells (U = 10) and ten output clusters (C = 10); the key is a 10-dimensional vector. iCyber-uu, iCyber-pu and iCyber-nu take values between 0.1 and 0.9 in increments of 0.1. The Cyber DL cluster network weights are trained with output values equal to the input values.
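For the validation of Section 5.1, the keys are then perturbed dimension by dimension; a small helper for that perturbation could look as follows (names are illustrative):

```python
def perturb_key(key, dimensions, delta=0.1):
    # Apply a `delta` increment to the selected dimensions of a 10-dimensional cyber key,
    # moving it away from the trained (correct) key as in Table 2.
    perturbed = list(key)
    for d in dimensions:
        perturbed[d] += delta
    return perturbed

# Example: a correct key of ten 0.5 values with a 0.1 increment in its first dimension
attacked = perturb_key([0.5] * 10, dimensions=[0], delta=0.1)
```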

4.3. Deep Learning Management Cluster

The inputs of the Cyber management cluster are the errors provided by each Cyber DL cluster, and its network weights are all set to the same value (0.1) so that the different Cyber DL clusters have the same priority. The output Ycmc is the overall quantified Cyber error decision, based on a threshold. The inputs of the QoS management cluster are the best QoS metrics from each QoS DL cluster, and its network weights correspond to the Goal = (αDelay, βLoss, γBandwidth). The output Yqmc is the quantified best QoS-metric decision.
The input of the CEO management cluster is the value provided by the QoS management cluster, and its network weight is the value provided by the Cyber management cluster. The output is the final routing decision among the different gates provided by the RNN-RL algorithm and the Delay, Loss and Bandwidth DL clusters.

5. Experimental Results

The DL Cluster Structure for Management Decisions has been simulated in three different n × n Cognitive Packet Network sizes, 3 × 3, 4 × 4 and 5 × 5, with different Cyber keys, QoS metrics and Goal changes to assess the routing decision-making of the proposed DL structure. Please note that we are not evaluating the routing protocol but the routing decision.

5.1. Cyber Deep Learning Cluster Results

The different Cyber DL clusters are validated by modifying the security keys at node 1 and measuring the cyber validation error at the next node, node 4, once the CPs have a stable route. The keys are changed gradually, from the correct key, by applying 0.1∆ increments to the different key dimensions.
The Cyber DL cluster error increases greatly even with a single 0.1∆ increment (Table 2). The results are consistent across the different Cyber DL clusters. Cyber key increments produce a larger error if they are applied in the same dimension rather than split across different dimensions.

5.2. Quality of Service Deep Learning Cluster Results (3 × 3 Nodes)

The 3 × 3 CPN is simulated with a continuous stream of 160 Cognitive Packets. The first 20 packets are used to initialize the CPN. The Goal changes every 20 packets, and the QoS metrics change 2 packets after the new Goal is selected, following Tl = 0.9Tl−1 + 0.1R, where Tl is the Threshold at decision packet l and R is the Reward. The QoS DL clusters have been validated with seven different variable Goals for the same Cognitive Packet stream (Table 3).
The average error and number of learning-algorithm iterations for the QoS and Cyber DL clusters are shown in Table 4. The learning error of the QoS and Cyber DL clusters is very small.
The number of updates in the network weights, or routing table, for the DL cluster and the RNN Reinforcement Learning is represented in Table 5.
The RNN Reinforcement Learning algorithm continuously updates its network weights, whereas the DL cluster route is only refreshed when a better route is found; however, the number of iterations required to update the RNN-RL is only one, whereas the QoS DL clusters require approximately 160 iterations, as shown in Table 4. The route decisions taken by the CEO Management Cluster when the Cyber management cluster has authorized the different Cyber keys are shown in Table 6, Table 7 and Table 8 for three of the seven simulated Goals.
The route provided by the QoS DL clusters remains unchanged, due to their slow learning process, until the new best route is found by the RNN-RL. The Reward and Threshold of the route decisions taken by the CEO Management Cluster when the Cyber management cluster has authorized the different Cyber keys are shown in Figure 9 for the seven different Goals. When the new best route is discovered, the CPN Threshold adapts gradually back to the original value.

5.3. Deep Learning Management Cluster Results (3 × 3 Nodes)

The DL Management Clusters (Cyber, QoS and CEO) are validated in this section under two different Cyber Security scenarios: ∆ = 0, normal operation, and ∆ = 0.1, CPN under Cyber-attack. Three strategic Cognitive Packets (CP 30, CP 85 and CP 148) are chosen for the 3 × 3 CPN validation, each with a different Goal. Results are shown in Table 9.

5.4. Quality of Service Deep Learning Cluster Results (4 × 4 Nodes)

The 4 × 4 CPN is simulated with a continuous stream of 380 Cognitive Packets. The first 100 packets are used to initialize the CPN. The Goal changes every 40 packets, and the QoS metrics change 2 packets after the new Goal is selected, following Tl = 0.99Tl−1 + 0.01R, where Tl is the Threshold at decision packet l and R is the Reward. The QoS DL clusters have been validated with seven different variable Goals for the same Cognitive Packet stream (Table 10).
The average error and number of learning-algorithm iterations for the QoS and Cyber DL clusters are shown in Table 11.
The number of updates in the network weights or routing table for the DL cluster and the RNN Reinforcement Learning is shown in Table 12.
The number of iterations to update the RNN-RL is only one, whereas the DL clusters require approximately 150 iterations, as shown in Table 11. The route decisions taken by the CEO Management Cluster when the Cyber management cluster has authorized the different Cyber keys are shown in Table 13, for the first Goal only.
The results provided by the 4 × 4 CPN are similar to those of the 3 × 3 CPN. The first two packets follow the best route, whereas the third packet acknowledges that the QoS metrics have changed. The RNN-RL finds the optimum route after the Cognitive Packets explore the network, and DL learns the route one Cognitive Packet later. The Reward and Threshold of the route decisions taken by the CEO Management Cluster when the Cyber management cluster has authorized the different Cyber keys are shown in Figure 10 for the seven different Goals. When the new best route is discovered, the CPN Threshold adapts gradually back to the original value.

5.5. Deep Learning Management Cluster Results (4 × 4 Nodes)

The results provided by the DL management cluster confirm the proposed model. The correct quantification of the DL management cluster cell states and the selection of accurate thresholds are fundamental to taking the relevant optimum decisions. Three strategic Cognitive Packets (CP 107, CP 228 and CP 341) are chosen for the 4 × 4 CPN validation, each with a different Goal. Results for the two Cyber Security scenarios, ∆ = 0 (normal operation) and ∆ = 0.1 (CPN under Cyber-attack), are shown in Table 14.

5.6. Quality of Service Deep Learning Cluster Results (5 × 5 Nodes)

The 5 × 5 CPN is simulated with a continuous stream of 1550 Cognitive Packets. The first 1500 packets are used to initialize the CPN; a single 1.0 × Delay Goal is then applied for the remaining 50 packets, with the QoS metrics changing 2 packets after the Goal is selected, following Tl = 0.999Tl−1 + 0.001R. The QoS DL clusters have been validated with only one Goal for the same Cognitive Packet stream (Table 15).
The average error and number of learning-algorithm iterations for the QoS and Cyber Deep Learning clusters are shown in Table 16.
The number of updates in the network weights, or routing table, for the DL cluster and the RNN Reinforcement Learning is represented in Table 17.
The network keeps sending Cognitive Packets until the value of 1/Reward is lower than 1/Threshold. When the new best route is discovered, as shown in Figure 11, the CPN Threshold adapts gradually back to the original value.

5.7. Deep Learning Management Cluster Results (5 × 5 Nodes)

Results for the two Cyber Security scenarios, ∆ = 0 (normal operation) and ∆ = 0.1 (CPN under Cyber-attack), are shown in Table 18. For the 5 × 5 CPN, the results of the DL Management Cluster are consistent with the previous results: the DL management cluster adapts to network changes and provides the optimum route based on the current network conditions.

6. Conclusions

This paper has presented a Deep Learning Cluster Structure for Management Decisions. The proposed hierarchical decision model has been validated in the Cognitive Packet Network with three configurations: small size 3 × 3, medium size 4 × 4 and large size 5 × 5, with one, two and three layers of decision respectively. The addition of Deep Learning clusters specialized in different functions (Cyber, QoS and Management) provides a flexible approach similar to the way our brain performs; Deep Learning clusters are able to adapt and to be assigned where more routing, computing and memory resources are required.
The RNN Reinforcement Learning algorithm adapts very quickly to variable QoS changes, taking fast decisions with short-term memory, whereas Deep Learning is slow to adapt to QoS changes, as it learns from the RNN-RL algorithm and stores routing information in long-term memory. The CEO management cluster takes the right routing decisions based on the inputs from the QoS and Cyber Management Clusters. This allows the CPN to use a safe route in case of Cyber-attack, or a fast route under normal conditions. Future work will expand the validation gradually up to very large-scale networks (100 nodes, 8 decision layers).

Funding

The author declares no external funding was provided.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Figure A1. Deep Learning Cluster Structures for Management Decisions—Neural Schematic.

References

  1. Bassett, S.; Bullmore, E. Small-World Brain Networks. Neuroscientist 2007, 12, 512–523. [Google Scholar] [CrossRef] [PubMed]
  2. Squire, L. Declarative and Nondeclarative Memory: Multiple Brain Systems Supporting Learning and Memory. J. Cogn. Neurosci. 1992, 4, 232–243. [Google Scholar] [CrossRef] [PubMed]
  3. Grossberg, S. The Link between Brain Learning, Attention, and Consciousness. Conscious. Cogn. 1999, 8, 1–44. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Ericsson, G. Cyber Security and Power System Communication, Essential Parts of a Smart Grid Infrastructure. IEEE Trans. Power Deliv. 2010, 25, 1501–1507. [Google Scholar] [CrossRef]
  5. Ten, C.; Manimaran, G.; Liu, C. Cybersecurity for Critical Infrastructures: Attack and Defense Modeling. IEEE Trans. Syst. Man Cybern. A 2010, 40, 853–865. [Google Scholar] [CrossRef] [Green Version]
  6. Cruz, T.; Rosa, L.; Proença, J.; Maglaras, L.; Aubigny, M.; Lev, L.; Jiang, J.; Simões, P. A Cybersecurity Detection Framework for Supervisory Control and Data Acquisition Systems. IEEE Trans. Ind. Inform. 2016, 12, 2236–2246. [Google Scholar] [CrossRef]
  7. Wang, Q.; Guo, W.; Zhang, K.; Ororbia, A.; Xing, X.; Liu, X.; Giles, L. Learning Adversary-Resistant Deep Neural Networks. arXiv, 2016; arXiv:1612.01401. [Google Scholar]
  8. Tuor, A.; Kaplan, S.; Hutchinson, B.; Nichols, N.; Robinson, S. Deep Learning for Unsupervised Insider Threat Detection in Structured Cybersecurity Data Streams; Association for the Advancement of Artificial Intelligence: Menlo Park, CA, USA, 2017; pp. 4993–4994. [Google Scholar]
  9. Wu, M.; Song, Z.; Moon, Y. Detecting cyber-physical attacks in CyberManufacturing systems with machine learning methods. J. Intell. Manuf. 2017, 1–13. [Google Scholar] [CrossRef]
  10. Huang, S.; Zhou, C.-J.; Yang, S.-H.; Qin, Y.-Q. Cyber-physical system security for networked industrial processes. Int. J. Autom. Comput. Sci. 2015, 12, 567–578. [Google Scholar] [CrossRef] [Green Version]
  11. Schmidhuber, J. Deep Learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed]
  12. Bengio, Y.; Courville, A.; Vincent, P. Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives. arXiv, 2012; arXiv:1206.5538. [Google Scholar]
  13. Shao, J.; Zhao, Z.; Su, F.; Cai, A. Progressive framework for deep neural networks: From linear to non-linear. J. China Univ. Posts Telecommun. 2016, 23, 1–7. [Google Scholar]
  14. Le, Q.; Ngiam, J.; Coates, A.; Lahiri, A.; Prochnow, B.; Ng, A. On optimization methods for Deep Learning. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011; pp. 265–272. [Google Scholar]
  15. Ngiam, J.; Khosla, A.; Kim, M.; Nam, J.; Lee, H.; Ng, A. Multimodal Deep Learning. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011; pp. 689–696. [Google Scholar]
  16. Sutskever, I.; Vinyals, O.; Le, Q. Sequence to Sequence Learning with Neural Networks. In Proceedings of the 27th Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 3104–3112. [Google Scholar]
  17. Bekker, A.; Opher, I.; Lapidot, I.; Goldberger, J. Intra-cluster training strategy for Deep Learning with applications to language identification. In Proceedings of the 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), Vietri sul Mare, Italy, 13–16 September 2016; pp. 1–6. [Google Scholar]
  18. Hasselt, H.; Guez, A.; Silver, D. Deep reinforcement learning with double Q-Learning. In Proceedings of the Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 2094–2100. [Google Scholar]
  19. Lillicrap, T.; Hunt, J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with Deep Reinforcement learning. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016; pp. 1–14. [Google Scholar]
  20. Mnih, V.; Badia, A.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; Kavukcuoglu, K. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 1928–1937. [Google Scholar]
  21. Wang, Z.; Schaul, T.; Hessel, M.; Hasselt, H.; Lanctot, M.; Freitas, N. Dueling network architectures for deep reinforcement learning. In Proceedings of the International Conference on International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 1995–2003. [Google Scholar]
  22. Duan, Y.; Chen, X.; Houthooft, R.; Schulman, J.; Abbeel, P. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 1329–1338. [Google Scholar]
  23. Henderson, P.; Islam, R.; Bachman, P.; Pineau, J.; Precup, D.; Meger, D. Deep Reinforcement Learning that Matters. In Proceedings of the Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, Edmonton, AB, Canada, 13–17 November 2018; pp. 1–26. [Google Scholar]
  24. Mao, H.; Alizadeh, M.; Menache, I.; Kandula, S. Resource Management with Deep Reinforcement Learning. In Proceedings of the 15th ACM Workshop on Hot Topics in Networks, Atlanta, GA, USA, 9–10 November 2016; pp. 50–56. [Google Scholar]
  25. Gelenbe, E. Random Neural Networks with Negative and Positive Signals and Product Form Solution. Neural Comput. 1989, 1, 502–510. [Google Scholar] [CrossRef]
  26. Gelenbe, E. Stability of the Random Neural Network Model. Neural Comput. 1990, 2, 239–247. [Google Scholar] [CrossRef]
  27. Gelenbe, E. Learning with the Recurrent Random Neural Network. In Proceedings of the IFIP Congress, Madrid, Spain, 7–11 September 1992; Volume 1, pp. 343–349. [Google Scholar]
  28. Gelenbe, E.; Wu, F. Large scale simulation for human evacuation and rescue. Comput. Math. Appl. 2012, 64, 3869–3880. [Google Scholar] [CrossRef]
  29. Filippoupolitis, A.; Hey, L.A.; Loukas, G.; Gelenbe, E.; Timotheou, S. Emergency response simulation using wireless sensor networks. In Proceedings of the 1st International Conference on Ambient Media and Systems, Quebec City, QC, Canada, 11–14 February 2008; Volume 21, pp. 1–7. [Google Scholar]
  30. Gelenbe, E.; Koçak, T. Area-based results for mine detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 12–24. [Google Scholar] [CrossRef] [Green Version]
  31. Gelenbe, E.; Sungur, M.; Cramer, C.; Gelenbe, P. Traffic and Video Quality with Adaptive Neural Compression. Multimed. Syst. 1996, 4, 357–369. [Google Scholar] [CrossRef]
  32. Atalay, V.; Gelenbe, E.; Yalabik, N. The Random Neural Network Model for Texture Generation. Int. J. Pattern Recognit. Artif. Intell. 1992, 6, 131–141. [Google Scholar] [CrossRef]
  33. Gelenbe, E. Cognitive Packet Network. U.S. Patent 6804201 B1, 12 October 2004. [Google Scholar]
  34. Gelenbe, E.; Xu, Z.; Seref, E. Cognitive Packet Networks. In Proceedings of the 11th IEEE International Conference on Tools with Artificial Intelligence, Washington, DC, USA, 8–10 November 1999; pp. 47–54. [Google Scholar]
  35. Gelenbe, E.; Lent, R.; Xu, Z. Networks with Cognitive Packets. In Proceedings of the 8th International Symposium on Modeling, Analysis, and Simulation on Computer and Telecommunication Systems, San Francisco, CA, USA, 29 August–1 September 2000; pp. 3–10. [Google Scholar]
  36. Gelenbe, E.; Lent, R.; Xu, Z. Measurement and performance of a cognitive packet network. Comput. Netw. 2001, 37, 691–701. [Google Scholar] [CrossRef]
  37. Gelenbe, E.; Lent, R.; Montuori, A.; Xu, Z. Cognitive Packet Networks: QoS and Performance. In Proceedings of the 10th IEEE International Symposium on Modeling, Analysis, and Simulation on Computer and Telecommunication Systems, Fort Worth, TX, USA, 16 October 2002; pp. 3–9. [Google Scholar]
  38. Gelenbe, E.; Yin, Y. Deep Learning with random neural networks. In Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 1633–1638. [Google Scholar]
  39. Yin, Y.; Gelenbe, E. Deep Learning in Multi-Layer Architectures of Dense Nuclei. arXiv, 2016; arXiv:1609.07160. [Google Scholar]
  40. Gelenbe, E. G-Networks: A Unifying Model for Neural Nets and Queueing Networks. Ann. Oper. Res. 1994, 48, 433. [Google Scholar] [CrossRef]
  41. Fourneau, J.M.; Gelenbe, E.; Suros, R. G-Networks with Multiple Class Negative and Positive Customers. In Proceedings of the International Workshop on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Durham, NC, USA, 31 January–2 February 1994; pp. 30–34. [Google Scholar]
  42. Gelenbe, E.; Timotheou, S. Random Neural Networks with Synchronized Interactions. Neural Comput. 2008, 20, 2308–2324. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Serrano, W.; Gelenbe, E. The Deep Learning Random Neural Network with a Management Cluster. In Proceedings of the International Conference on Intelligent Decision Technologies, Sorrento, Italy, 19 June 2017; pp. 185–195. [Google Scholar]
Figure 1. The Random Neural Network (RNN).
Figure 2. The Cognitive Packet Network (CPN).
Figure 3. Cluster of Neurons.
Figure 4. Deep Learning Cluster.
Figure 5. Deep Learning Management Cluster.
Figure 6. Deep Learning Cluster Structure.
Figure 7. CPN node with Deep Learning clusters model.
Figure 8. CPN Network (4 × 4 Nodes).
Figure 9. QoS Deep Learning cluster validation (3 × 3 Nodes).
Figure 10. QoS DL Cluster validation (4 × 4 Nodes).
Figure 11. QoS DL Cluster validation (5 × 5 Nodes).
Table 1. QoS Values (4 × 4 Nodes).

Node 4 Initial—Final | Node 5 Initial—Final | Node 9 Initial—Final | Node 16 Initial—Final
Delay: 40–40 | Delay: 50–80 | Delay: 90–120 | Delay: 160–160
Loss: 65–65 | Loss: 60–45 | Loss: 40–25 | Loss: 05–05
Bandwidth: 45–45 | Bandwidth: 55–85 | Bandwidth: 95–125 | Bandwidth: 165–165
Node 3 Initial—Final | Node 6 Initial—Final | Node 10 Initial—Final | Node 15 Initial—Final
Delay: 30–30 | Delay: 60–70 | Delay: 100–110 | Delay: 150–150
Loss: 70–70 | Loss: 55–50 | Loss: 35–30 | Loss: 10–10
Bandwidth: 35–35 | Bandwidth: 65–75 | Bandwidth: 105–115 | Bandwidth: 155–155
Node 2 Initial—Final | Node 7 Initial—Final | Node 11 Initial—Final | Node 14 Initial—Final
Delay: 20–20 | Delay: 70–60 | Delay: 110–100 | Delay: 140–140
Loss: 75–75 | Loss: 50–55 | Loss: 30–35 | Loss: 15–15
Bandwidth: 25–25 | Bandwidth: 75–65 | Bandwidth: 115–105 | Bandwidth: 145–145
Node 1 Initial—Final | Node 8 Initial—Final | Node 12 Initial—Final | Node 13 Initial—Final
Delay: 10–10 | Delay: 80–50 | Delay: 120–90 | Delay: 130–130
Loss: 80–80 | Loss: 45–60 | Loss: 25–40 | Loss: 20–20
Bandwidth: 15–15 | Bandwidth: 85–55 | Bandwidth: 125–95 | Bandwidth: 135–135
Table 2. Cyber Deep Learning Cluster Validation.

Dimension | ∆ = 0.0 | ∆ = 0.1 | ∆ = 0.2 | ∆ = 0.3 | ∆ = 0.4
1 | 9.7500 × 10−11 | 0.0102 | 0.0409 | 0.0921 | 0.1638
2 | 9.7537 × 10−11 | 0.0213 | 0.0851 | 0.1915 | 0.3406
3 | 9.7537 × 10−11 | 0.0326 | 0.1305 | 0.2938 | 0.5226
4 | 9.7537 × 10−11 | 0.0451 | 0.1806 | 0.4067 | 0.7238
5 | 9.7537 × 10−11 | 0.0576 | 0.2306 | 0.5195 | 0.9249
6 | 9.7537 × 10−11 | 0.0715 | 0.2867 | 0.6465 | 1.1519
7 | 9.7537 × 10−11 | 0.0851 | 0.3414 | 0.7703 | 1.3732
8 | 9.7537 × 10−11 | 0.1006 | 0.4038 | 0.9119 | 1.6273
9 | 9.7537 × 10−11 | 0.1153 | 0.4633 | 1.0470 | 1.8698
10 | 9.7537 × 10−11 | 0.1323 | 0.5321 | 1.2038 | 2.1526
Table 3. QoS Deep Learning Cluster Validation (3 × 3 Nodes)—Simulation Parameters.

Packet | Goal Number | Goal Description | QoS
001–020 | - | Network Initialization Packets | -
021–022 | 1 | 1 × Delay | Initial Values
023–040 | 1 | 1 × Delay | Final Values
041–042 | 2 | 1 × Loss | Initial Values
043–060 | 2 | 1 × Loss | Final Values
061–062 | 3 | 1 × Bandwidth | Initial Values
063–080 | 3 | 1 × Bandwidth | Final Values
081–082 | 4 | 0.5 × Delay + 0.5 × Loss | Initial Values
083–100 | 4 | 0.5 × Delay + 0.5 × Loss | Final Values
101–102 | 5 | 0.5 × Delay + 0.5 × Bandwidth | Initial Values
103–120 | 5 | 0.5 × Delay + 0.5 × Bandwidth | Final Values
121–122 | 6 | 0.5 × Loss + 0.5 × Bandwidth | Initial Values
123–140 | 6 | 0.5 × Loss + 0.5 × Bandwidth | Final Values
141–142 | 7 | 0.3 × Delay + 0.3 × Loss + 0.3 × Bandwidth | Initial Values
143–160 | 7 | 0.3 × Delay + 0.3 × Loss + 0.3 × Bandwidth | Final Values
Table 4. Deep Learning Cluster Validation (3 × 3 Nodes).

Cyber DL Cluster | Error | Iteration | QoS DL Cluster | Error | Iteration
Cyber User | 6.96 × 10−10 | 58 | QoS Delay | 9.59 × 10−10 | 163.67
Cyber Packet | 7.34 × 10−10 | 108 | QoS Loss | 9.16 × 10−10 | 163.14
Cyber Node | 9.94 × 10−10 | 1162.33 | QoS Bandwidth | 9.16 × 10−10 | 135.33
Table 5. Deep Learning Cluster vs. RNN-RL (3 × 3 Nodes).

Updates | RNN-RL | QoS Delay | QoS Loss | QoS Bandwidth
Initialization | 0 | 4 | 1 | 3
CP 021–160 | 140 | 9 | 1 | 9
Table 6. Goal: 1 × Delay (3 × 3 Nodes).

Packet | RNN-RL Route | DL Route | Best Route | Goal 1/Reward | 1/Threshold
021 | 1-4-9 | 1-4-9 | 1-4-9 | 130.00 | 130.00
022 | 1-4-9 | 1-4-9 | 1-4-9 | 130.00 | 130.00
023 | 1-4-9 | 1-4-9 | 1-6-9 | 150.00 | 130.00
024 | 1-4-9 | 1-4-9 | 1-6-9 | 150.00 | 131.76
025 | 1-4-9 | 1-4-9 | 1-6-9 | 150.00 | 133.38
026 | 1-4-9 | 1-4-9 | 1-6-9 | 150.00 | 134.87
027 | 1-5-9 | 1-4-9 | 1-6-9 | 140.00 | 136.25
028 | 1-4-9 | 1-4-9 | 1-6-9 | 150.00 | 136.61
029 | 1-2-6-9 | 1-4-9 | 1-6-9 | 150.00 | 137.84
030 | 1-6-9 | 1-4-9 | 1-6-9 | 130.00 | 138.97
031 | 1-6-9 | 1-6-9 | 1-6-9 | 130.00 | 138.02
040 | 1-6-9 | 1-6-9 | 1-6-9 | 130.00 | 132.99
Table 7. Goal: 0.5 × Delay + 0.5 × Loss (3 × 3 Nodes).

Packet | RNN-RL Route | DL Route | Best Route | Goal 1/Reward | 1/Threshold
081 | 1-4-9 | 1-4-9 | 1-4-9 | 82.50 | 82.50
082 | 1-4-9 | 1-4-9 | 1-4-9 | 82.50 | 82.50
083 | 1-4-9 | 1-4-9 | 1-6-9 | 87.50 | 82.50
084 | 1-5-9 | 1-4-9 | 1-6-9 | 85.00 | 82.97
085 | 1-6-9 | 1-4-9 | 1-6-9 | 82.50 | 83.17
086 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 83.10
087 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 83.04
088 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 82.99
089 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 82.94
090 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 82.90
091 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 82.86
100 | 1-6-9 | 1-6-9 | 1-6-9 | 82.50 | 82.64
Table 8. Goal: 0.3 × Delay + 0.3 × Loss + 0.3 × Bandwidth (3 × 3 Nodes).

Packet | RNN-RL Route | DL Route | Best Route | Goal 1/Reward | 1/Threshold
141 | 1-4-9 | 1-4-9 | 1-4-9 | 101.66 | 101.66
142 | 1-4-9 | 1-4-9 | 1-4-9 | 101.66 | 101.66
143 | 1-4-9 | 1-4-9 | 1-6-9 | 111.66 | 101.66
144 | 1-4-9 | 1-4-9 | 1-6-9 | 111.66 | 102.58
145 | 1-4-9 | 1-4-9 | 1-6-9 | 111.66 | 103.42
146 | 1-4-9 | 1-4-9 | 1-6-9 | 111.66 | 104.18
147 | 1-5-9 | 1-4-9 | 1-6-9 | 106.66 | 104.89
148 | 1-6-9 | 1-4-9 | 1-6-9 | 101.66 | 105.06
149 | 1-6-9 | 1-6-9 | 1-6-9 | 101.66 | 104.71
150 | 1-6-9 | 1-6-9 | 1-6-9 | 101.66 | 104.40
151 | 1-6-9 | 1-6-9 | 1-6-9 | 101.66 | 104.12
160 | 1-6-9 | 1-6-9 | 1-6-9 | 101.66 | 102.60
Table 9. DL Management Cluster Validation (3 × 3 Nodes).

Variable | Cognitive Packet: 30 (G: 1.0 × D + 0.0 × L + 0.0 × B) | Cognitive Packet: 85 (G: 0.5 × D + 0.5 × L + 0.0 × B) | Cognitive Packet: 148 (G: 0.3 × D + 0.3 × L + 0.3 × B)
Cyber Attack | ∆ = 0.0 | ∆ = 0.1 | ∆ = 0.0 | ∆ = 0.1 | ∆ = 0.0 | ∆ = 0.1
Cyber Icmc | 5 × 10−11 | 3.4 × 10−4 | 5 × 10−11 | 3.4 × 10−4 | 5 × 10−11 | 3.4 × 10−4
Cyber Ycmc | 0.9994 | 0.9969 | 0.9994 | 0.9969 | 0.9994 | 0.9969
QoS-Delay Iqmc | 0.6300 | 0.6300 | 0.3150 | 0.3150 | 0.2100 | 0.2100
QoS-Loss Iqmc | 0.0000 | 0.0000 | 0.2625 | 0.2625 | 0.1750 | 0.1750
QoS-Band Iqmc | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.2133 | 0.2133
QoS-Delay Yqmc | 0.1765 | 0.1765 | 0.3000 | 0.3000 | 0.3913 | 0.3913
QoS-Loss Yqmc | 0.9994 | 0.9994 | 0.3396 | 0.3396 | 0.4354 | 0.4354
QoS-Band Yqmc | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.3875 | 0.3875
CEO ICEOmc | 0.1000 | 0.1000 | 0.1000 | 0.1000 | 0.9000 | 0.9000
CEO wCEOmc(c) | 0.0000 | 0.9999 | 0.0000 | 0.9999 | 0.0000 | 0.9999
CEO YCEOmc | 0.9994 | 0.5746 | 0.9994 | 0.5746 | 0.9994 | 0.1305
Routing Decision | RNN-DL, Gate-4, Node 6 | DL-Delay, Gate-2, Node 4 | RNN-DL, Gate-4, Node 6 | DL-Delay, Gate-2, Node 4 | RNN-DL, Gate-4, Node 6 | DL-Band, Gate-2, Node 4
Table 10. QoS Deep Learning Cluster Validation (4 × 4 Nodes)—Simulation Parameters.

Cognitive Packet | Goal Number | Goal Description | QoS Metric
000–100 | - | Network Initialization Cognitive Packets | -
001–002 | 1 | 1.0 × Delay + 0.0 × Loss + 0.0 × Bandwidth | Initial Values
003–040 | 1 | 1.0 × Delay + 0.0 × Loss + 0.0 × Bandwidth | Final Values
041–042 | 2 | 0.0 × Delay + 1.0 × Loss + 0.0 × Bandwidth | Initial Values
043–080 | 2 | 0.0 × Delay + 1.0 × Loss + 0.0 × Bandwidth | Final Values
081–082 | 3 | 0.0 × Delay + 0.0 × Loss + 1.0 × Bandwidth | Initial Values
083–120 | 3 | 0.0 × Delay + 0.0 × Loss + 1.0 × Bandwidth | Final Values
121–122 | 4 | 0.5 × Delay + 0.5 × Loss + 0.0 × Bandwidth | Initial Values
123–160 | 4 | 0.5 × Delay + 0.5 × Loss + 0.0 × Bandwidth | Final Values
161–162 | 5 | 0.5 × Delay + 0.0 × Loss + 0.5 × Bandwidth | Initial Values
163–200 | 5 | 0.5 × Delay + 0.0 × Loss + 0.5 × Bandwidth | Final Values
201–202 | 6 | 0.0 × Delay + 0.5 × Loss + 0.5 × Bandwidth | Initial Values
203–240 | 6 | 0.0 × Delay + 0.5 × Loss + 0.5 × Bandwidth | Final Values
241–242 | 7 | 0.3 × Delay + 0.3 × Loss + 0.3 × Bandwidth | Initial Values
243–280 | 7 | 0.3 × Delay + 0.3 × Loss + 0.3 × Bandwidth | Final Values
Table 11. Deep Learning Cluster Validation (4 × 4 Nodes).

Cyber DL Cluster | Error | Iteration | QoS DL Cluster | Error | Iteration
Cyber User | 6.96 × 10−10 | 58.00 | QoS Delay | 9.34 × 10−10 | 158.67
Cyber Packet | 7.34 × 10−10 | 108.00 | QoS Loss | 9.22 × 10−10 | 152.07
Cyber Node | 9.93 × 10−10 | 1017.87 | QoS Bandwidth | 8.83 × 10−10 | 127.60
Table 12. Deep Learning Cluster vs. RNN-RL (4 × 4 Nodes).

Updates | RNN-RL | QoS Delay | QoS Loss | QoS Bandwidth
Initialization | 0 | 8 | 6 | 7
CP 001–280 | 280 | 9 | 4 | 9
Table 13. Goal: 1 × Delay (4 × 4 Nodes).

Packet | RNN-RL Route | DL Route | Best Route | Goal 1/Reward | 1/Threshold
001 | 1-5-9-16 | 1-5-9-16 | 1-5-9-16 | 300.00 | 300.00
002 | 1-5-9-16 | 1-5-9-16 | 1-5-9-16 | 300.00 | 300.00
003 | 1-5-9-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 300.00
004 | 1-5-9-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 300.50
005 | 1-5-9-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 301.00
006 | 1-5-9-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 301.49
007 | 1-5-9-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 301.98
008 | 1-6-9-16 | 1-5-9-16 | 1-8-12-16 | 350.00 | 302.47
009 | 1-7-9-16 | 1-5-9-16 | 1-8-12-16 | 340.00 | 302.88
010 | 1-2-6-10-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 303.21
011 | 1-8-9-16 | 1-5-9-16 | 1-8-12-16 | 330.00 | 303.69
012 | 1-4-5-10-16 | 1-5-9-16 | 1-8-12-16 | 390.00 | 303.93
013 | 1-3-5-11-16 | 1-5-9-16 | 1-8-12-16 | 370.00 | 304.61
014 | 1-5-11-16 | 1-5-9-16 | 1-8-12-16 | 340.00 | 305.15
015 | 1-6-11-16 | 1-5-9-16 | 1-8-12-16 | 330.00 | 305.46
016 | 1-7-10-16 | 1-5-9-16 | 1-8-12-16 | 330.00 | 305.69
017 | 1-2-7-11-16 | 1-5-9-16 | 1-8-12-16 | 340.00 | 305.91
018 | 1-4-6-12-16 | 1-5-9-16 | 1-8-12-16 | 360.00 | 306.22
019 | 1-8-10-16 | 1-5-9-16 | 1-8-12-16 | 320.00 | 306.68
020 | 1-3-6-12-16 | 1-5-9-16 | 1-8-12-16 | 350.00 | 306.80
021 | 1-5-11-16 | 1-5-9-16 | 1-8-12-16 | 340.00 | 307.18
022 | 1-4-3-7-12-16 | 1-5-9-16 | 1-8-12-16 | 380.00 | 307.48
023 | 1-2-8-11-16 | 1-5-9-16 | 1-8-12-16 | 330.00 | 308.07
024 | 1-6-12-15 | 1-5-9-16 | 1-8-12-16 | 320.00 | 308.27
025 | 1-7-12-15 | 1-5-9-16 | 1-8-12-16 | 310.00 | 308.39
026 | 1-3-4-8-12-16 | 1-5-9-16 | 1-8-12-16 | 370.00 | 308.40
027 | 1-8-12-16 | 1-5-9-16 | 1-8-12-16 | 300.00 | 308.92
028 | 1-8-12-16 | 1-8-12-16 | 1-8-12-16 | 300.00 | 308.82
029 | 1-8-12-16 | 1-8-12-16 | 1-8-12-16 | 300.00 | 308.73
030 | 1-8-12-16 | 1-8-12-16 | 1-8-12-16 | 300.00 | 308.64
040 | 1-8-12-16 | 1-8-12-16 | 1-8-12-16 | 300.00 | 307.80
Table 14. DL Management Cluster Validation (4 × 4 Nodes).

Variable | Cognitive Packet: 107 (G: 1.0 × D + 0.0 × L + 0.0 × B) | Cognitive Packet: 228 (G: 0.5 × D + 0.5 × L + 0.0 × B) | Cognitive Packet: 341 (G: 0.3 × D + 0.3 × L + 0.3 × B)
Cyber Attack | ∆ = 0.0 | ∆ = 0.1 | ∆ = 0.0 | ∆ = 0.1 | ∆ = 0.0 | ∆ = 0.1
Cyber Icmc | 5 × 10−11 | 3.4 × 10−4 | 5 × 10−11 | 3.4 × 10−4 | 5 × 10−11 | 3.4 × 10−4
Cyber Ycmc | 0.9994 | 0.9969 | 0.9994 | 0.9969 | 0.9994 | 0.9969
QoS-Delay Iqmc | 0.8000 | 0.8000 | 0.4000 | 0.4000 | 0.2666 | 0.2666
QoS-Loss Iqmc | 0.0000 | 0.0000 | 0.2875 | 0.2875 | 0.1916 | 0.1916
QoS-Band Iqmc | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.2716 | 0.2716
QoS-Delay Yqmc | 0.1444 | 0.1444 | 0.2523 | 0.2523 | 0.3361 | 0.3361
QoS-Loss Yqmc | 0.9994 | 0.9994 | 0.3195 | 0.3195 | 0.4132 | 0.4132
QoS-Band Yqmc | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.3319 | 0.3319
CEO ICEOmc | 0.1000 | 0.1000 | 0.1000 | 0.1000 | 0.9000 | 0.9000
CEO wCEOmc(c) | 0.0000 | 0.9999 | 0.0000 | 0.9999 | 0.0000 | 0.9999
CEO YCEOmc | 0.9994 | 0.5746 | 0.9994 | 0.5746 | 0.9994 | 0.1305
Routing Decision | RNN-DL, Gate-6, Node 8 | DL-Delay, Gate-3, Node 5 | RNN-DL, Gate-6, Node 8 | DL-Delay, Gate-6, Node 8 | RNN-DL, Gate-6, Node 8 | DL-Band, Gate-3, Node 5
Table 15. QoS Deep Learning Cluster Validation—Simulation Parameters (5 × 5 Nodes).

Cognitive Packet | Goal Number | Goal Description | QoS Metric
0000–1500 | - | Network Initialization Cognitive Packets | -
001–002 | 1 | 1.0 × Delay + 0.0 × Loss + 0.0 × Bandwidth | Initial Values
003–050 | 1 | 1.0 × Delay + 0.0 × Loss + 0.0 × Bandwidth | Final Values
Table 16. Deep Learning Cluster Validation (5 × 5 Nodes).

Cyber DL Cluster | Error | Iteration | QoS DL Cluster | Error | Iteration
Cyber User | 7.56 × 10−13 | 62 | QoS Delay | 9.4 × 10−13 | 221.11
Cyber Packet | 8.60 × 10−13 | 125 | QoS Loss | 9.30 × 10−13 | 182.40
Cyber Node | 9.91 × 10−13 | 2128.68 | QoS Bandwidth | 9.30 × 10−13 | 200.71
Table 17. Deep Learning Cluster vs. RNN-RL (5 × 5 Nodes).

Updates | RNN-RL | QoS Delay | QoS Loss | QoS Bandwidth
Initialization | 0 | 8 | 20 | 7
CP 001–050 | 50 | 1 | 0 | 0
Table 18. DL Management Cluster Validation (5 × 5 Nodes).

Variable | Cognitive Packet: 034 (G: 1.0 × D + 0.0 × L + 0.0 × B)
Cyber Attack | ∆ = 0.0 | ∆ = 0.1
Cyber Icmc | 5.14 × 10−14 | 3.47 × 10−4
Cyber Ycmc | 0.9994 | 0.9969
QoS-Delay Iqmc | 0.5590 | 0.5590
QoS-Loss Iqmc | 0.0000 | 0.0000
QoS-Band Iqmc | 0.0000 | 0.0000
QoS-Delay Yqmc | 0.1945 | 0.1945
QoS-Loss Yqmc | 0.9994 | 0.9994
QoS-Band Yqmc | 0.9994 | 0.9994
CEO ICEOmc | 0.1000 | 0.1000
CEO wCEOmc(c) | 0.0000 | 0.9999
CEO YCEOmc | 0.9994 | 0.5746
Routing Decision | RNN-RL, Gate-8, Node 10 | DL-Delay, Gate-4, Node 6
