4. Proposed Ontology-Based Context Model
The proposed context ontology model provides vocabularies to represent context knowledge about network-related situations and states of IoT systems. The model is designed to facilitate context exchange, and the understanding of the exchanged contexts, between heterogeneous nodes of an IoT system, so as to enable optimal context-aware management of the system’s underlying WSNs. Each node holds one instance of the model and can use expression axioms to deduce the corresponding context knowledge from the information exchanged at different levels and scopes.
This model is designed as a hierarchical structure of context classes, where each class characterizes the contextual information of one or more constituent parts of the IoT system. The bottom level of the model is raw information obtained directly from device components and system entities. The upper levels constitute the proposed context ontology, which models and presents inferred contexts for the constituent parts of the system.
Here, raw information refers to any information, typically in numeric format, acquired directly from hardware and software components of a single node. In addition, raw information can be exchanged between nodes to infer low level global contexts, e.g., nodes in a given area can exchange raw sensor readings to derive the environmental context of their surroundings.
Normally, low level contexts can be derived by comparing numeric values of raw information with predefined thresholds. As a simple example, the “HIGH”/“LOW” energy state of a node is a low level context derived by comparing the node’s residual energy level with a threshold value representing half of its full battery capacity. In addition, probabilistic frameworks such as Hidden Markov Models (HMM) [23,24] can be applied when deriving high level contexts, which are not sensed directly but inferred from lower level contexts, and thus carry a certain level of uncertainty depending on the accuracy of the lower level contexts used [16]. For instance, if the derived movement and location of a certain node A are believed to be mostly (but not 100%) true, rather than inferring that “A is leaving the network”, an inference that takes into account the level of uncertainty, such as “With high probability, A is leaving the network”, can be more appropriate to describe the condition of the event.
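As a minimal illustration of these two derivation styles, the sketch below derives a threshold-based low level context and an uncertainty-qualified high level context; the function names, the energy variables, and the 0.8/0.5 probability cut-offs are our own illustrative assumptions, not part of the model:

```python
def energy_context(residual_energy: float, full_capacity: float) -> str:
    """Low level context: compare residual energy against a threshold
    set to half of the node's full battery capacity."""
    return "HIGH" if residual_energy > full_capacity / 2 else "LOW"

def departure_context(p_leaving: float) -> str:
    """High level context: qualify the inference with its uncertainty,
    e.g., as estimated by an HMM over movement/location contexts."""
    if p_leaving > 0.8:
        return "With high probability, A is leaving the network"
    if p_leaving > 0.5:
        return "A is possibly leaving the network"
    return "A is likely staying in the network"
```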
The structure of the proposed context ontology model is shown in Figure 3. The Context Resource class is the root entity, which has two direct descendant classes: the Local Context and Global Context classes. The following sections describe the proposed model according to the scopes and levels of its contexts.
Figure 3.
Context ontology structure for the IoT systems.
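To make the hierarchy concrete before the detailed walkthrough, the skeleton below mirrors the two branches of Figure 3 as a Python class hierarchy; this is a reading aid, not an implementation prescribed by the model:

```python
class ContextResource: ...                  # root entity of the ontology

class LocalContext(ContextResource): ...    # contexts of a single node
class GlobalContext(ContextResource): ...   # contexts of the IoT system

# Direct descendants of Local Context (Section 4.1)
class Platform(LocalContext): ...
class Services(LocalContext): ...
class Surroundings(LocalContext): ...
class Communication(LocalContext): ...

# Direct descendants of Global Context (Section 4.2)
class DistributedPlatform(GlobalContext): ...
class DistributedServices(GlobalContext): ...
class Environment(GlobalContext): ...
class Network(GlobalContext): ...
class External(GlobalContext): ...
```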
4.1. Local Context
The left side of the model shows the context ontology of the Local Context class, which describes the contextual information of a single node. It is a direct super class of four context classes, namely Platform, Services, Surroundings, and Communication.
4.1.1. Platform Context
The Platform context class provides a high-level description of the running state or capability of a node’s platform based on the contexts of its constituent hardware and software entities. The platform context can be utilized by a node to self-determine, or by other nodes (if provided with the context) to determine, whether the node can undertake a certain role or task, such as serving as a cluster head or performing data aggregation in a WSN.
A. Hardware Context
The Hardware context class describes the general resource or performance state of a node’s hardware platform, which consists of four hardware components: sensor, transceiver, computation resource, and energy resource, each with its own context class.
Sensor: this context class can describe the operation mode of the sensing unit or basic context about its surroundings as deduced from its raw sensor data.
Transceiver: this context class mainly describes the operation mode of the transceiver, e.g., transmit, receive, idle, sleep, or off. The duration and frequency for which the transceiver operates in each mode directly impact the amount of energy that it consumes. Other communication attributes such as channel conditions and bit rate shall be described by the Communication context class.
Computation Resource: this context class describes the state of the processing and storage resources, e.g., CPU, memory, or buffer storage, of the hardware platform. Such contexts can be particularly useful to support in-network mechanisms such as in-network video processing [12] and data storage [15].
Energy Resource: this context class describes the state of the energy resources of a node, which can be a battery, an energy harvesting device (e.g., solar cell), or other energy module. It is defined to provide energy-related context of a node, such as its residual energy level, energy consumption rate, or the energy generation rate of its harvesting hardware.
B. Software Context
The Software context class describes the state of the local OS and of the programs executing on a node. This can include program configurations, the performance of code execution, and other context that can be useful for WSN management. For instance, the code execution performance of a node, such as the time taken by a program to process every 100 bytes of inbound data, can be used to determine whether the node can function as a distributed in-network processing node in the WSN.
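Such a role decision could look like the following sketch, where the 5 ms budget per 100 bytes is a purely illustrative assumption:

```python
def can_process_in_network(ms_per_100_bytes: float, budget_ms: float = 5.0) -> bool:
    """Software context check: the node qualifies as a distributed
    in-network processing node only if its measured code execution
    performance stays within the application's budget."""
    return ms_per_100_bytes <= budget_ms
```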
4.1.2. Services Context
The Services context class describes the service roles or tasks that a node can perform. A single node may provide multiple services or carry out multiple tasks at the same time; e.g., a node can host a software agent while acting as a data provider with its built-in sensing component. This node may also function as a cluster head in a hierarchical network, or as a relaying node on a multi-hop path. By making the services context available within a node or to other nodes, internal or external functional entities may adapt accordingly to achieve better overall performance. For example, the internal functional entities of a node located one hop away from a cluster head can increase the node’s communication capacity in response to its service context of being a frequent relaying node for inbound data to the cluster head.
4.1.3. Surroundings Context
The Surroundings context class describes the state of a node’s surroundings as monitored by its built-in sensor. It is deduced from a time sequence of local sensor contexts, each of which only represents the state at the time when the raw sensor data is taken. For instance, from 10 consecutive sensor contexts provided by a built-in proximity sensor for human detection, of which eight are ‘detected’ and two are ‘not detected’, the context of the node’s surrounding area over that time span could be inferred as ‘highly active’.
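A minimal sketch of this inference follows; the window contents and the ‘highly active’ cut-off follow the example above, while the function name and the fallback labels are illustrative:

```python
def surroundings_context(detections: list) -> str:
    """Infer a Surroundings context from a time sequence of local sensor
    contexts, e.g., 10 'detected'/'not detected' proximity readings."""
    if not detections:
        return "unknown"
    ratio = sum(detections) / len(detections)   # e.g., 8 of 10 -> 0.8
    if ratio >= 0.8:
        return "highly active"
    return "active" if ratio > 0 else "inactive"
```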
4.1.4. Communication Context
The Communication context class provides a high-level description about the state of a node’s communication with other nodes. The state can be in terms of the general quality, efficiency, security, frequency, availability, or pattern of communication. It is deduced based on the low level contexts from the node’s communication protocol stack.
A. Protocol Stack Context
The Protocol Stack context class describes the state of each layer in a node’s protocol stack. The physical layer context may express characteristics such as signal quality, channel conditions, interference, and spectrum availability. The medium access control (MAC) layer context may describe the availability, quality, and utilization of the links to the node’s direct neighbors, frame collisions, and the fairness of channel access. The network layer context may capture properties such as the quality, efficiency, and security of a node’s multi-hop paths to other nodes, the traffic distribution pattern, and group membership if the node participates in group communication. The transport layer context may provide knowledge about the end-to-end reliability of connections between the node and other nodes, or the occurrence of congestion along its paths of communication. Finally, the application layer may present the IoT application’s context of use, e.g., real-time or non-real-time transmission, indoor or outdoor environment, mobile or static scenario, local area or wide area deployment, cooperative or non-cooperative nodes, etc., which can be utilized to infer the performance, efficiency, or security requirements of the node’s communication for the application.
4.2. Global Context
The right side of the model shows the context ontology of the Global Context class, which describes the contextual information of the IoT system based on local contexts exchanged between nodes and on other external contexts. It is a direct super class of five context classes, namely Distributed Platform, Distributed Services, Environment, Network, and External.
4.2.1. Distributed Platform Context
The Distributed Platform context class provides a high-level description of a system’s distributed platform running state or capability based on exchanged Platform context between nodes in the system. Each node of the distributed platform can adapt to this knowledge to improve the system performance. For example, in a distributed in-network storage system, this context can be used by a node to become aware of and adapt to the storage and computation resource levels of other nodes in order to balance the data storage and processing loads among them.
4.2.2. Distributed Services Context
The Distributed Services context class describes the service roles or tasks that can be performed by multiple nodes in a system based on exchanged Services contexts. This knowledge can be applied to assist in the selection of nodes to undertake certain networking roles or tasks. For example, in a cluster-based WSN, this context can be used by a departing cluster head to select the best node to take over its role without re-clustering the network.
4.2.3. Environment Context
The Environment context class describes the state of a system’s physical environment based on exchanged Surroundings contexts. It provides nodes with a wider view of event occurrences in their environment than is possible with only the local Surroundings context. In turn, nodes can utilize this knowledge to make more informed networking decisions. For example, in an event detection WSN, nodes detecting an event occurrence will transmit data about the event to a sink; by adapting routing decisions to event contexts, the network can avoid packet congestion by routing data through nodes that have not detected any events.
4.2.4. Network Context
The Network context class provides a high-level description of the state of an IoT system’s network based on exchanged Communication contexts, as well as on contexts from any deployed network management station (NMS) for WSN, e.g., [25]. The heightened awareness of the network state can bring about more effective solutions to problems, particularly those due to the inherent constraints (e.g., resource constraints) and vulnerabilities (e.g., open distributed nature) of WSNs. This may consequently give rise to new solutions such as network-state-aware resource scheduling or intrusion detection techniques.
4.2.5. External Context
The External context class represents any context originating from a source external to the system. This may include user-related contexts of IoT applications, such as the user’s profile, preferences, and activity schedule, or contexts derived from weather forecast data and indoor or outdoor map information, which can be useful for WSN management.
5. Scenario Analysis
In this paper, a context-aware multi-path selection (CAMS) algorithm for video streaming in wireless multimedia sensor networks [12] is selected as a use case of our proposed ontology model. In this algorithm, a sensor node can generate video streams of its surrounding environment from its onboard physical sensor components, comprising an image camera and a microphone. Each video stream can thus be decomposed into two sub-streams, an image stream and an audio stream, and transmitted over multiple node-disjoint paths simultaneously. The CAMS algorithm chooses the right number of paths for transmitting each stream so that the overall throughput is maximized. CAMS prioritizes the transmissions and the available routing paths according to the stream content and the end-to-end delay of the path, respectively. The aim is to transmit high-priority content over low-delay paths whenever possible.
The original CAMS algorithm does not explicitly consider heterogeneous nodes. However, in this analysis, we consider a network composed of heterogeneous video sensors of different resolutions. As a result, differences between video sensors in their end-to-end delay requirements can be expected: the end-to-end delay requirement of a high resolution video sensor will be more stringent than that of a low resolution video sensor, as more information bits will be transmitted for a given image or audio frame, i.e., more bits per image pixel or digitized sound sample.
Based on the proposed ontology structure, the context model for CAMS as proposed in [12] is shown on the left side of Figure 4; it involves only local contexts, as the algorithm does not perform any exchange of priority-related information. The right side of Figure 4 shows the context model for CAMS extended to utilize global context (explained later). The associated syntaxes used are defined in Table 1. The local context resource, CAMS priority, consists of Content priority and Delay priority, which can be seen as corresponding to the Surroundings context and the Communication context, respectively, of the context ontology structure shown in Figure 3.
Figure 4.
Context ontology model for context-aware multi-path selection (CAMS).
Table 1.
Syntax definitions for CAMS algorithm.
| Syntax | Definition |
|---|---|
| BrightnessLevel | Brightness of the split image frame |
| LoudnessLevel | Loudness of the split audio frame |
| Ibrightness | Brightness threshold for deciding the frame priority |
| Iloudness | Loudness threshold for deciding the frame priority |
| Path | A single routing path between a source and destination |
| PathS | Set of Path between a source and destination |
| Delaypath | End-to-end delay of a path |
| DelaySpath | Set of Delaypath for each available path in PathS |
| Thigh-priority_max | Maximum time for end-to-end transmission of a high-priority frame |
| Tlow-priority_max | Maximum time for end-to-end transmission of a low-priority frame |
| PathShigh | Set of available paths for high-priority frame transmission |
| PathSlow | Set of available paths for low-priority frame transmission |
| N | Total number of available paths in PathS |
| Mhigh | Number of paths in PathShigh |
| Mlow | Number of paths in PathSlow |
| xPathShigh | Set of exchanged available paths for high-priority frame transmission |
| xPathSlow | Set of exchanged available paths for low-priority frame transmission |
In CAMS, a video stream can be presented in Description Logic [26] as:

VideoStream ≡ ImageStream ⊓ AudioStream

which expresses that a single video stream is composed of an image stream and an audio stream, each being a sequence of image frames and audio frames, respectively. A video source node has to decide the priority of each outbound image and audio frame based on its importance. Two qualitative context values for frame importance, High_Priority and Low_Priority, can be assigned by the video source nodes. A high priority image frame is defined as:

High_Priority.Image ≡ (BrightnessLevel > Ibrightness) ⊓ ((LoudnessLevel ≤ Iloudness) ⊔ High_Priority’.Image)

where an image frame is assigned a high priority if its brightness (BrightnessLevel) is higher than the predefined brightness threshold Ibrightness, and either the loudness (LoudnessLevel) of the associated audio frame is equal to or lower than the predefined loudness threshold Iloudness, or the priority of the immediately previous image frame is high, as denoted by the boolean parameter High_Priority’.Image. Similarly, a high priority audio frame is defined as:

High_Priority.Audio ≡ (LoudnessLevel > Iloudness) ⊓ ((BrightnessLevel ≤ Ibrightness) ⊔ High_Priority’.Audio)

where High_Priority’.Audio is the equivalent boolean parameter denoting the priority of the immediately previous audio frame. Both High_Priority’.Image and High_Priority’.Audio are initialized to False and updated to True or False according to the respective high or low priority of each transmitted image and audio frame.
Based on the above definitions, only one type of frame, image or audio, of the same split video frame can be assigned high priority, i.e., the image and audio frames cannot both be assigned high priority at the same time. In addition, when both the image and audio frames are above (or below) their respective brightness and loudness thresholds, they will inherit the respective priority levels assigned to their immediately previous image and audio frames (High_Priority’.Image, High_Priority’.Audio) in order to maintain the stability of the video streaming, as specified in [12].
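The two definitions can be rendered as the following executable sketch, under stated assumptions: frames arrive as (BrightnessLevel, LoudnessLevel) pairs, and the threshold values are illustrative rather than taken from [12]:

```python
def assign_priorities(frames, i_brightness=0.6, i_loudness=0.6):
    """Assign High_Priority/Low_Priority to each split (image, audio)
    frame pair, following the two definitions above."""
    prev_image_high = False      # High_Priority'.Image, initialized to False
    prev_audio_high = False      # High_Priority'.Audio, initialized to False
    priorities = []
    for brightness, loudness in frames:
        image_high = brightness > i_brightness and (
            loudness <= i_loudness or prev_image_high)
        audio_high = loudness > i_loudness and (
            brightness <= i_brightness or prev_audio_high)
        priorities.append(("High_Priority" if image_high else "Low_Priority",
                           "High_Priority" if audio_high else "Low_Priority"))
        prev_image_high, prev_audio_high = image_high, audio_high
    return priorities
```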
All available node-disjoint paths between a source-destination pair can also be assigned a qualitative context based on their transmission latencies. The two qualitative context values used are Guaranteed_Trans._Delay and Non_Guaranteed_Trans._Delay. A path is assigned the non-guaranteed transmission delay context (Non_Guaranteed_Trans._Delay) if it satisfies neither the end-to-end delay requirement of the high-priority frame nor that of the low-priority frame:

Non_Guaranteed_Trans._Delay ≡ (Delaypath > Thigh-priority_max) ⊓ (Delaypath > Tlow-priority_max)

If a path satisfies the end-to-end delay requirement of either the high- or the low-priority frame, the path is assigned the guaranteed transmission delay context (Guaranteed_Trans._Delay):

Guaranteed_Trans._Delay ≡ (Delaypath ≤ Thigh-priority_max) ⊔ (Delaypath ≤ Tlow-priority_max)

The available routing paths for high-priority frame transmission (PathShigh) between a source-destination pair are defined as the set of paths in PathS whose end-to-end delay is equal to or less than the end-to-end delay requirement of the high-priority frame (Thigh-priority_max):

PathShigh ≡ {Path ∈ PathS | Delaypath ≤ Thigh-priority_max}

Similarly, the available routing paths for low-priority frame transmission (PathSlow) between the same source-destination pair are defined as:

PathSlow ≡ {Path ∈ PathS | Delaypath ≤ Tlow-priority_max}
It should be noted that the above Thigh-priority_max and Tlow-priority_max should be appropriately initialized for each node’s instance of the ontology model based on its video resolution. This will ensure that all frames are transmitted over paths whose end-to-end delay satisfies the end-to-end delay requirement corresponding to the priority and resolution of the frames.
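A minimal sketch of this classification, assuming the end-to-end path delays have already been measured and using illustrative per-resolution delay budgets:

```python
def classify_paths(delay_s_path, t_high_max, t_low_max):
    """Split DelaySpath (given as {Path: Delaypath}) into PathShigh,
    PathSlow, and the non-guaranteed paths. Since t_high_max < t_low_max,
    PathShigh is a subset of PathSlow."""
    paths_high = {p for p, d in delay_s_path.items() if d <= t_high_max}
    paths_low = {p for p, d in delay_s_path.items() if d <= t_low_max}
    non_guaranteed = set(delay_s_path) - paths_low
    return paths_high, paths_low, non_guaranteed

# Per-node initialization based on video resolution (illustrative values):
# a high resolution sensor needs a more stringent budget than a low one.
t_high_max, t_low_max = 0.05, 0.20   # seconds, e.g., for a high resolution sensor
```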
CAMS supports multi-path routing, and the relationship between the number of available paths for high-priority frame transmission (Mhigh), the number for low-priority frame transmission (Mlow), and the total number of available paths (N) can be shown as:

Mhigh ≤ Mlow ≤ N

which expresses that the number of paths for frame transmission (high and low priority) is bounded by the total number of available paths, and that, due to the more stringent delay requirement of high-priority frames, i.e., Thigh-priority_max < Tlow-priority_max, there will be no more paths in PathShigh (Mhigh) for high-priority frame transmission than in PathSlow (Mlow) for low-priority frame transmission.
The original CAMS algorithm is modified to use our proposed context model as discussed above and shown on the left side of Figure 4. The following shows how frames are transmitted by our modified CAMS algorithm under three case scenarios:
// Case 1: no transmission if none of the available paths meets the end-to-end delay requirement
if (∀Non_Guaranteed_Trans._Delay.PathS)
    {No_Transmission}
end if

// Case 2: if all available paths meet the end-to-end delay requirement, transmit the high-priority stream simultaneously over the paths in PathShigh. If there are still unused paths remaining in PathS, transmit the low-priority stream simultaneously over these paths. Otherwise, discard the transmission of the low-priority stream.
if (∀Guaranteed_Trans._Delay.PathS)
    {Transmit the High-Priority Stream (sequence of high-priority frames)
     simultaneously over Mhigh number of paths in PathShigh
     if (Mhigh < N)
         {Transmit the Low-Priority Stream (sequence of low-priority frames)
          simultaneously over (N − Mhigh) number of paths in (PathS ⊓ (¬PathShigh))}
     end if
    }
end if

// Case 3: if only a subset of the available paths meets the end-to-end delay requirement, transmit the high-priority stream simultaneously over the paths in PathShigh. If there are still unused paths remaining in (PathSlow ⊓ (¬PathShigh)), transmit the low-priority stream simultaneously over these paths. Otherwise, discard the transmission of the low-priority stream.
if (∃Guaranteed_Trans._Delay.PathS)
    {Transmit the High-Priority Stream (sequence of high-priority frames)
     simultaneously over Mhigh number of paths in PathShigh
     if (Mhigh < Mlow)
         {Transmit the Low-Priority Stream (sequence of low-priority frames)
          simultaneously over (Mlow − Mhigh) number of paths in (PathSlow ⊓ (¬PathShigh))}
     end if
    }
end if
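For readers who prefer an executable rendering, the sketch below restates the three cases in Python; the `transmit` callback and the set-based path representation are our assumptions, not the implementation of [12]:

```python
def cams_transmit(paths_high, paths_low, all_paths, transmit):
    """Dispatch the high- and low-priority streams over node-disjoint paths
    following the three CAMS cases. Per the definitions above,
    paths_high <= paths_low <= all_paths (as sets)."""
    if not paths_low:                       # Case 1: no path meets any requirement
        return                              # no transmission
    if paths_low == all_paths:              # Case 2: all paths are guaranteed
        spare = all_paths - paths_high      # PathS ⊓ (¬PathShigh)
    else:                                   # Case 3: only a subset is guaranteed
        spare = paths_low - paths_high      # PathSlow ⊓ (¬PathShigh)
    if paths_high:
        transmit("high", paths_high)        # Mhigh paths in PathShigh
    if spare:
        transmit("low", spare)              # otherwise the low-priority stream is discarded
```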
As mentioned earlier, CAMS as proposed in [12] does not perform any exchange of priority-related information, and therefore its selection of node-disjoint paths for frame transmissions is based only on local contexts, i.e., the CAMS priority. However, it is conceivable that if individual nodes can be made aware of, and adapt their behavior to, not only their local context but also the global context of other nodes, a more coherent and optimal management of the network can be achieved. To illustrate the usage of global contexts, CAMS has been extended so that nodes utilize another type of local context (local path usage), which can be shared or exchanged between nodes as global context. Therefore, a new case scenario has been designed, and its corresponding context model is shown on the right side of Figure 4.
The scenario involves multiple pairs of source-destination nodes performing CAMS at the same time. As in the previous scenarios, the PathS of a source node will hold the available node-disjoint paths to its destination, and these paths can be further placed into set PathShigh or PathSlow depending on whether they satisfy the delay requirement of the high-priority frame or the low-priority frame, respectively. To motivate the need for an improved CAMS, consider the case where multiple source-destination pairs perform CAMS only according to their local contexts, causing some node-disjoint paths between different communicating pairs to become ‘node-joint’, or overlapped, as shown in Figure 5.
Figure 5.
Overlapping node-disjoint paths of two communicating pairs.
The common relay nodes E, F, G, and B can potentially become traffic bottlenecks for Paths 2 and 3 of node pair A–B and Paths 4 and 5 of node pair C–D, respectively. To avoid such situations, nodes can be permitted to exchange context about their available routing paths, and CAMS can be extended to harness the knowledge of such global contexts.
Under the extended CAMS, nodes behave as follows. After a source node determines its routing paths for high-priority (PathShigh) and low-priority (PathSlow) frame transmissions, but before it transmits any frame according to the three case scenarios, the node shares or exchanges its local PathShigh and PathSlow information with other nodes; the information received is stored as xPathShigh (exchanged PathShigh) and xPathSlow (exchanged PathSlow). On receiving it, the node can determine whether any of its paths in PathShigh and PathSlow are ‘node-joint’, or overlapped, with those in xPathShigh and xPathSlow.
As shown on the right side of Figure 4, a new local context class Local Path Usage (consisting of the local PathShigh and PathSlow) and a new global context class Global Path Usage (consisting of xPathShigh and xPathSlow) have been introduced, which can be seen as corresponding to the Communication context and the Network context, respectively, in the context ontology structure shown in Figure 3.
An available routing path cannot be categorized into both PathShigh and PathSlow at the same time. On the other hand, a routing path may be categorized into neither PathShigh nor PathSlow if it does not satisfy the delay requirement of either the high-priority or the low-priority frame. Each Path in PathShigh and PathSlow can be assigned one of the following two qualitative context values:

RelayNodeShared ≡ RelayNodes(Path) ∩ Nodes(xPathShigh ∪ xPathSlow) ≠ ∅
NoRelayNodeShared ≡ ¬RelayNodeShared

which expresses that if any relay node of a Path in PathShigh and PathSlow is also a node of a path (e.g., a source, destination, or relay node) in xPathShigh and xPathSlow, this Path will be assigned the state RelayNodeShared; otherwise, it will be assigned the state NoRelayNodeShared.
For a path ‘marked’ as having one or more shared nodes, the source node may perform a decision function to determine whether or not it should keep this path for frame transmission. The design of the decision function is often application/scenario specific and may be based on probabilistic models, fuzzy logic, decision trees, or other reasoning mechanisms.
Figure 6 shows the flow of steps to handle shared paths in CAMS with global context. For each Path in PathShigh that is ‘node-joint’ with paths in xPathShigh, the node performs a decision function to decide whether or not it should keep this Path locally in its PathShigh. The same procedure is applied for each Path in PathSlow that is ‘node-joint’ with paths in xPathSlow. However, if a Path in PathSlow is ‘node-joint’ with paths in xPathShigh, this Path is removed from the PathSlow of this node. In other words, priority for using this Path is given to nodes that will be using it for high-priority frame transmission, i.e., as a Path in PathShigh. This step is likewise taken by other nodes whose PathSlow contains a Path ‘node-joint’ with paths in their xPathShigh. On the other hand, if a Path in PathShigh is ‘node-joint’ with any paths in xPathSlow, the node keeps this Path in its PathShigh.
Figure 6.
Handling of shared paths in CAMS with global context.
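The sketch below captures this handling flow, assuming each path is represented as a frozenset of node identifiers and `decide_keep` is the application-specific decision function discussed above; for simplicity, the overlap test uses all nodes of the local path rather than only its relay nodes:

```python
def handle_shared_paths(paths_high, paths_low, x_high, x_low, decide_keep):
    """Prune a node's PathShigh/PathSlow using the exchanged global path
    usage context (xPathShigh/xPathSlow), following the flow of Figure 6."""
    x_high_nodes = set().union(*x_high) if x_high else set()
    x_low_nodes = set().union(*x_low) if x_low else set()

    kept_high = set()
    for path in paths_high:
        if path & x_high_nodes:           # node-joint with a path in xPathShigh
            if decide_keep(path):         # application-specific decision function
                kept_high.add(path)
        else:
            kept_high.add(path)           # joint only with xPathSlow (or disjoint): keep

    kept_low = set()
    for path in paths_low:
        if path & x_high_nodes:           # yield to high-priority use of this path
            continue
        if path & x_low_nodes:            # node-joint with a path in xPathSlow
            if decide_keep(path):
                kept_low.add(path)
        else:
            kept_low.add(path)
    return kept_high, kept_low
```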
As a formalism for ontology representation, the RDF/XML serialization of the proposed ontology in Figure 4 is shown below. However, XML is seen as a ‘heavy’ syntax for resource-constrained devices; thus, for implementing the proposed ontology on sensor nodes, more compact XML representations such as binary XML formats should be used [27]. Another promising approach, which uses streaming HDT as a lightweight serialization format for RDF and the Wiselib Tuplestore for storing RDF data locally on embedded IoT devices such as sensor nodes, is proposed in [28].
<owl:Class rdf:ID="LocalContext"/>
<owl:Class rdf:ID="CAMSPriority">
    <rdfs:subClassOf rdf:resource="#LocalContext"/>
</owl:Class>
<owl:Class rdf:ID="ContentPriority">
    <rdfs:subClassOf rdf:resource="#LocalContext"/>
</owl:Class>
<owl:DatatypeProperty rdf:ID="PriorityState">
    <rdfs:domain rdf:resource="#ContentPriority"/>
    <rdfs:range rdf:resource="xsd:string"/>
</owl:DatatypeProperty>
<owl:DatatypeProperty rdf:ID="ImagePriorityState">
    <rdfs:subPropertyOf rdf:resource="#PriorityState"/>
</owl:DatatypeProperty>
<owl:DatatypeProperty rdf:ID="AudioPriorityState">
    <rdfs:subPropertyOf rdf:resource="#PriorityState"/>
</owl:DatatypeProperty>
<owl:Class rdf:ID="DelayPriority">
    <rdfs:subClassOf rdf:resource="#LocalContext"/>
</owl:Class>
<owl:DatatypeProperty rdf:ID="RoutingDelay">
    <rdfs:domain rdf:resource="#DelayPriority"/>
    <rdfs:range rdf:resource="xsd:double"/>
</owl:DatatypeProperty>
<owl:Class rdf:ID="LocalPathUsage">
    <rdfs:subClassOf rdf:resource="#LocalContext"/>
</owl:Class>
<owl:DatatypeProperty rdf:ID="PathShigh">
    <rdfs:domain rdf:resource="#LocalPathUsage"/>
    <rdfs:range rdf:resource="xsd:string"/>
</owl:DatatypeProperty>
<owl:DatatypeProperty rdf:ID="PathSlow">
    <rdfs:domain rdf:resource="#LocalPathUsage"/>
    <rdfs:range rdf:resource="xsd:string"/>
</owl:DatatypeProperty>
<owl:Class rdf:ID="GlobalContext"/>
<owl:Class rdf:ID="GlobalPathUsage">
    <rdfs:subClassOf rdf:resource="#GlobalContext"/>
</owl:Class>
<owl:DatatypeProperty rdf:ID="xPathShigh">
    <rdfs:domain rdf:resource="#GlobalPathUsage"/>
    <rdfs:range rdf:resource="xsd:string"/>
</owl:DatatypeProperty>
<owl:DatatypeProperty rdf:ID="xPathSlow">
    <rdfs:domain rdf:resource="#GlobalPathUsage"/>
    <rdfs:range rdf:resource="xsd:string"/>
</owl:DatatypeProperty>
The above use cases have illustrated how the proposed ontology-based context model can be applied to contextualize the mostly numeric data in the network, and how the contextualized data can be used beyond their sources by facilitating context sharing between network entities, all with the goal of enabling context-aware management of WSNs that can also harness the rich context knowledge of IoT systems. In comparison with the original CAMS, the ‘contextualized’ CAMS presented in this paper, i.e., CAMS using the proposed context ontology model, is better prepared to perform in the IoT environment, since all network entities share a common understanding of network-related information that originates from heterogeneous sources but is contextualized using the same proposed model for a unified, unambiguous interpretation. As mentioned, while there exist context ontology models for mitigating the complexity of systems operating in heterogeneous environments, most if not all of them focus on modeling system- or application-level contexts, with very few or no ontology models proposed for network-level contexts. Hence, to the best of our knowledge, the proposed model in this paper is one of the first (if not the first) for WSN management in IoT.