1. Introduction
With the wide application of the Internet of Things (IoT) in fields such as cities, industry, and transportation, an ever-growing number of IoT devices (IoDs) are connected via the Internet to exchange information about themselves and their surroundings. The number of IoDs is expected to reach 75.4 billion by 2025, more than nine times the number in 2017 [1]. The proliferation of IoDs places higher demands on the spectrum, data rate, and latency of IoT communications. In response, device-to-device (D2D) communication, in which two nearby devices exchange information directly, has been widely employed in IoT networks to improve spectrum efficiency and data rates while reducing transmission delays [2,3,4]. Depending on whether radio frequency (RF) resources are shared between D2D users (DUs) and traditional cellular users (CUs), D2D communication can be classified into two categories: underlay and overlay. In particular, underlay D2D communication has been shown to provide higher spectrum efficiency and to match the spectrum-sharing nature of IoT networks [5]. However, it inevitably leads to mutual interference between DUs and CUs. In addition to being free of electromagnetic interference with existing RF systems, the emerging visible light communication (VLC) offers many advantages, such as a broad spectrum, innate security, and license-free deployment [6]. Yet VLC is susceptible to blockage and suffers severe path attenuation. Therefore, combining the RF and VLC bands for D2D communication has been regarded as an enticing solution to mitigate the interference and overcrowding of the RF spectrum, thus boosting system capacity [7,8,9].
However, the envisioned benefits of D2D communication may be limited by long distances, obstacles, and inferior channel conditions, especially for VLC-D2D communication. As a result, plain D2D communication is not well-suited for IoT applications that require wide coverage and high reliability [10]. A promising response to this dilemma is relay-aided D2D communication, which extends the communication range and improves both reliability and flexibility [11]. That is, D2D communication can be extended to a relay-aided manner when the IoDs that need to communicate are far away from each other or are blocked by obstacles. Such relay-aided systems are feasible for the large-scale IoT without extra construction costs, e.g., in massive machine-type communication, the cognitive IoT, and wireless sensing, since a large number of available IoDs (e.g., sensors, actuators, drones) can act as relays [12,13,14]. Unmanned aerial vehicles (UAVs) have been widely used in both military and civilian applications [15,16]. Feng et al. [17] considered UAVs relaying maintenance data over VLC under ultra-reliability and low-latency requirements. Zhu et al. [18] and Wang et al. [19] enabled UAVs to determine their deployment and user association so as to minimize the total transmit power with VLC. In [20], the authors optimized a UAV-assisted VLC framework that first minimizes the required number of UAVs and then minimizes the total transmitted power. In [21], the authors considered UAVs equipped with a VLC access point and coordinated multipoint (CoMP) capability to simultaneously maximize the total data rate and minimize the total communication power consumption. In [22], the authors described a UAV-assisted outdoor VLC system, in which the UAV acts as a communication relay, to provide high-speed and high-capacity communication for users blocked by natural disasters or mountains. However, to our knowledge, there is no research on using drones as relays for joint resource allocation and relay selection in a hybrid VLC/RF IoT system for D2D communication.
Accordingly, we concentrate on a large-scale drone relay-aided D2D communication underlaying hybrid VLC/RF IoT system, where multiple CUs, DUs, and drone relays coexist. Different from existing research, in this paper we introduce drones as relay stations to address the challenges posed by constrained coverage, poor reliability, and the lack of flexibility. More concretely, each DU corresponds to a pair of IoDs with an inferior channel condition, so a drone relay, which can be selected from other idle IoDs, is needed to aid the communication. Besides, each DU and its relay are allowed to utilize either the VLC resource or the orthogonal RF resource of a certain CU. There are thus two main variables that determine the system performance. One is the resource allocation for each DU, which also decides the resource used by its corresponding relay. For the large-scale IoT, the number of DUs is typically higher than that of CUs. This means that, even with the VLC resource included, some DUs still share the same resource, causing mutual interference among the DUs, the corresponding relays, and possibly a certain CU. Hence, how to fully leverage the combination of VLC and RF to alleviate the mutual interference is a crucial issue. The other variable is the drone relay selection for each DU. For the large-scale IoT, a DU has multiple available relays. However, different relays bring not only different communication gains to the DU but also different interference to the users sharing the same resource. Thus, how to select the relays to improve the overall system performance is another important issue.
So far, there have been some works on resource allocation [23,24,25,26,27], relay selection [28,29,30], and their joint optimization [31,32,33,34,35,36,37] for relay-aided D2D communication. However, there is no research on using drones as relays for joint resource allocation and relay selection for large-scale D2D communication in a hybrid VLC/RF IoT system. Static relays limit flexibility and make it harder to maintain relay connectivity in hybrid VLC/RF IoT systems. By using a drone as a relay station, it is possible to avoid obstacles such as buildings and to communicate in a line-of-sight (LoS) environment, which naturally aligns with the requirements of VLC systems. In addition, most works mainly consider small-scale scenarios in which sharing resources among DUs is not required. Most of the optimization methods proposed in these works are not suitable for large-scale scenarios, where resource allocation and relay selection become more difficult for the following reasons.
Large-scale relay-aided D2D communication causes resource shortages, leaving each resource shared by multiple DUs. The resulting complex interference relationships mean that the resource allocation for one DU also impacts the performance of the other DUs sharing the same resource. This motivates us to view the resource allocation process as finding, for each resource, the optimal set of DUs in which the mutual interference is minimal.
Similarly, the interference relationships make the relay selections of all DUs within the same set co-dependent. This prompts us to further allow these DUs to cooperate with each other for a higher collective gain.
Large-scale IoD deployment inherently exacerbates the time complexity and signaling overhead of the optimization process, especially for relay selection. Optimization schemes with low complexity are therefore desired for practical applications.
Against this background, we present a joint optimization of resource allocation and drone relay selection for a large-scale relay-aided D2D underlaying hybrid VLC/RF IoT system, aiming to maximize the D2D system sum rate while ensuring the quality of service (QoS) requirements of CUs and DUs. First, inspired by the aforementioned perspective of finding the optimal DU set for each resource, the resource allocation problem is modeled as a coalitional game. In particular, we construct a two-phase coalitional game that allows each DU to explore and finally join a coalition while guaranteeing QoS. The coalitions that eventually form are exactly the optimal sets of DUs for the different resources. Afterwards, with a large number of DUs and available relays, we regard each DU as an agent that autonomously selects a proper relay through learning. In this way, the relay selection problem is modeled as a multi-agent problem and can thus be solved in a distributed manner. Furthermore, given the aforementioned co-dependency in relay selection among the DUs in the same coalition, we propose two low-complexity cooperative relay selection schemes based on multi-agent reinforcement learning (MARL), built on WoLF policy hill-climbing (WoLF-PHC). The two proposed schemes not only overcome the inherent non-stationarity of the multi-agent environment but also encourage the DUs to cooperate for a higher system sum rate. The main contributions of this paper are summarized as follows:
The model of drone relay-aided D2D communication underlaying a hybrid VLC/RF system for the large-scale IoT is given. Aiming to maximize the sum rate of the D2D system while ensuring QoS, the joint optimization problem of resource allocation and drone relay selection is formulated. The problem has a nonconvex and combinatorial structure that makes it difficult to solve directly. Thus, we divide it into two subproblems and solve them sequentially.
From the perspective of finding the optimal DU set for each resource, we construct a two-phase coalitional game to tackle the resource allocation problem. Specifically, we leverage the combination of VLC and RF to ensure QoS in the coalition initialization phase, and we incorporate a greedy strategy into the coalition formation phase to obtain the globally optimal sets of DUs.
To eliminate the co-dependency, we first propose a cooperative WoLF-PHC-based relay selection scheme in which the agents in the same coalition share a common reward. Meanwhile, in any coalition, each agent's policy can use the historical action information of the other agents to overcome the non-stationarity of the environment. Interestingly, combining the results of the resource allocation, we find that the historical information of neighboring agents alone is sufficient to alleviate the instability. Hence, a lightweight neighbor–agent-based WoLF-PHC algorithm with curtailed complexity is further proposed.
We provide a theoretical analysis of the proposed schemes in terms of complexity and signaling overhead. Also, we provide numerical results to indicate that the proposed schemes outperform the considered benchmarks in terms of the sum rate and outage probability. Moreover, we investigate the trade-off between the sum rate performance and computational complexity.
The rest of this paper is organized as follows. Section 2 reviews related works. In Section 3, the system model is given and the problem is formulated. In Section 4 and Section 5, we present the proposed resource allocation and relay selection schemes, respectively. The complexity and signaling overhead analysis and the simulation results are presented in Section 6 and Section 7, respectively. Finally, Section 8 concludes the paper.
2. Related Works
With the potential to substantially increase system capacity, the novel D2D concept combining VLC and RF communication was first proposed in [7]. In [38], a survey on D2D communication for 5G/6G networks discussed its concept, applications, challenges, and future directions. In [39], the authors provided a V2I and V2V collaboration framework to support emergency communications in the ABS-aided Internet of Vehicles. To date, several works have studied resource allocation for D2D communication in hybrid VLC/RF systems. In [8], an iterative two-stage resource allocation algorithm was proposed based on an analysis of the interference generated by D2D transmitters and that received by D2D receivers. With only limited channel state information (CSI), the authors in [9] attempted to implement quick band selection between VLC and RF using deep neural networks. On this basis, refs. [25,26] included the millimeter wave band in the hybrid VLC/RF bands and formulated the multi-band selection problem as a multi-armed bandit problem. However, the above works only considered the overlay mode rather than multiple DUs coexisting in the underlay mode, which is an essential use-case in future networks. Only our previous work [40] considered this use-case for D2D underlay communication and solved the resource allocation problem using a coalitional game. The main difference in the present work is that D2D communication is extended to a relay-aided manner, which gives rise to new problems.
Relay-aided D2D communication emerged from the demand to extend the communication range and to enhance both reliability and flexibility. Jointly optimizing resource allocation and relay selection for relay-aided D2D communication in traditional RF systems has been widely studied. In [31], the joint optimization problem of mode selection, power control, channel allocation, and relay selection was decomposed into four subproblems that were solved individually, aiming to maximize the total throughput. The authors in [32] first addressed the power control problem separately and then solved the remaining joint problem using an improved greedy algorithm. Similarly, ref. [33] addressed the power control problem first so that the remaining joint problem could be converted into a tractable integer linear programming problem. In [34], taking into account both willingness and social attributes, a social-aware relay selection algorithm was proposed, followed by a greedy-based resource allocation scheme. Furthermore, in order to motivate users to act as relays, ref. [35] assumed that the relays assisting D2D communication could harvest energy from RF signals and formulated the optimization as a three-dimensional resource–power–relay problem. The authors in [36] focused on an energy efficiency optimization problem for relay-aided D2D communication under the simultaneous wireless information and power transfer paradigm. Besides, ref. [37] derived an energy-efficiency-oriented joint relay selection and resource allocation solution for mobile edge computing systems using convex optimization techniques. Nevertheless, these works considered neither large-scale nor hybrid VLC/RF scenarios. Moreover, most of the above works on relay selection adopted either a brute-force algorithm over designated regions or a distance-based algorithm, both of which have high computational complexity and are unsuitable for large-scale applications.
Given the dynamics of practical networks, reinforcement learning (RL) techniques have been introduced to solve the relay selection problem. Ref. [41] developed a centralized hierarchical deep RL-based relay selection algorithm to minimize the total transmission delay in mmWave vehicular networks. Ref. [42] presented a multi-featured actor–critic RL algorithm to maximize the data delivery ratio in energy-harvesting wireless sensor networks. Also, ref. [43] incorporated prioritized experience replay into a deep deterministic policy gradient algorithm and minimized the outage probability without any prior knowledge of CSI. These works modeled the policy search process as a Markov decision process, which holds if different agents update their policies independently at different times. Nevertheless, if two or more agents update their policies at the same time, a non-stationary multi-agent environment may arise [44]. How to reduce the action space and computational complexity of multi-agent systems to improve the training speed while keeping the multi-agent environment stationary is a key issue.
In summary, there are four drawbacks in the above studies:
- (1)
The above works only considered the overlay mode rather than multiple DUs coexisting in the underlay mode, which is an essential use-case in future networks.
- (2)
Although some works focus on jointly optimizing resource allocation and relay selection for relay-aided D2D communication, these works did not take the large-scale IoT or hybrid VLC/RF scenarios into consideration.
- (3)
Existing research adopts static relays, which limits the flexibility and connectivity of relays in hybrid VLC/RF IoT systems. A dynamic relay-assisted D2D communication system with wide coverage, high flexibility, good reliability, and strong connectivity needs to be constructed.
- (4)
Most of the joint optimization methods proposed in these works are not suitable for large-scale scenarios, and new methods with low complexity and signaling overhead need to be developed.
4. Coalitional Game Based Resource Allocation
Since an appropriate resource allocation solution has a large positive impact on system throughput, we first address the resource allocation problem under random relay selection to approach the maximum throughput quickly. It is worth noting that the relays are randomly selected from the candidate relays described in Section 5.1. In this section, with random relays, a two-phase coalitional game is introduced to solve the resource allocation problem.
4.1. Coalitional Game Formulation
We model the resource allocation problem as a coalitional game. Specifically, each CU forms a coalition representing one RF resource, and an empty coalition is used to represent the VLC resource. Next, each DU independently and randomly chooses to join a coalition, which means that the DU shares the same resource with other users in the chosen coalition.
Formally, a coalitional game $\mathcal{G} = (\mathcal{N}, v)$ with a non-transferable utility is defined. The set of players $\mathcal{N}$ consists of both the CUs and DUs. The coalition structure is denoted by the set $\Pi = \{S_0, S_1, \ldots, S_M\}$, where $S_m$ is the $m$-th coalition, and all coalitions are disjoint. That is, we have $S_m \cap S_{m'} = \emptyset$ for any $m \neq m'$, and $\bigcup_{m=0}^{M} S_m = \mathcal{N}$. The characteristic function $v(S_m)$ denotes the coalition utility, expressed as the sum rate achieved by the members of coalition $S_m$.
Given two coalitions $S_m$ and $S_{m'}$ with $m \neq m'$, if the switch operation of DU $n$ can increase the total throughput of the system, DU $n$ will leave its current coalition $S_m$ and participate in the new coalition $S_{m'}$. We say that DU $n$ prefers being a member of $S_{m'}$ to $S_m$, which is denoted by $S_{m'} \succ_n S_m$. Thereby, the transfer rule is:

$$ S_{m'} \succ_n S_m \iff R_{\mathrm{sum}}(\Pi') > R_{\mathrm{sum}}(\Pi), $$

where $R_{\mathrm{sum}}(\Pi)$ denotes the sum rate under the current coalition structure $\Pi$, and $\Pi' = \left(\Pi \setminus \{S_m, S_{m'}\}\right) \cup \{S_m \setminus \{n\},\, S_{m'} \cup \{n\}\}$ is the new coalition structure.
According to Equation (23), the D2D system reaches its maximum throughput when no DU performs any further switch operation. At this point, the final coalition structure $\Pi^*$ is the solution of the resource allocation problem. More concretely, the different coalitions in $\Pi^*$ are exactly the optimal sets of DUs for the different resources.
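To make the transfer rule concrete, the following is a minimal Python sketch of one switch-operation check. The helper `sum_rate` (a callable that would evaluate the D2D sum rate of a coalition structure from the channel model) and the data layout are hypothetical placeholders for illustration, not the paper's implementation.

```python
from copy import deepcopy

def try_switch(structure, du, src, dst, sum_rate):
    """Evaluate the transfer rule: DU `du` moves from coalition `src`
    to coalition `dst` only if the system sum rate strictly increases.

    structure: dict mapping coalition id -> set of DU ids
    sum_rate:  callable scoring a coalition structure (assumed given)
    Returns the (possibly updated) structure and whether the switch happened.
    """
    candidate = deepcopy(structure)
    candidate[src].remove(du)
    candidate[dst].add(du)
    # Transfer rule: S_m' >_n S_m  <=>  R_sum(Pi') > R_sum(Pi)
    if sum_rate(candidate) > sum_rate(structure):
        return candidate, True
    return structure, False
```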
4.2. Coalition Formation Algorithm
Based on the coalition structure and transfer rule described above, we need to satisfy the QoS requirements of all players as far as possible so that more switch operations can be performed to search for the global optimal solution. Therefore, we construct a two-phase coalitional game composed of a coalition initialization phase and a coalition formation phase.
To ensure the QoS of the CUs and DUs, we design the following process for the coalition initialization phase by leveraging the combination of VLC and RF.
- (1)
Initialize coalitions. In the relay-aided D2D network, the advantage of using the VLC band is prominent, as it can provide a high data rate. To be specific, VLC signals attenuate strongly with distance, so the interference from other DU-TXs and relays operating in the VLC band is naturally suppressed. Moreover, VLC reception depends closely on the D2D peer's orientation through the irradiance and incidence angles, which further reduces the interference received. Accordingly, all DUs initially choose to be members of the VLC coalition $S_0$.
- (2)
Environment sensing. The DUs separated by a long distance or with misaligned orientations are not good candidates for utilizing the VLC resource. In addition, DUs in close proximity to one another are also unsuitable for reusing the VLC resource due to the heavy interference generated. We can therefore identify the DUs that require reallocated resources by observing the data rate received per DU, which intuitively reflects the above environmental factors.
- (3)
Guarantee the QoS. More concretely, we sort the data rates achieved by the DUs in descending order and filter out those with data rates below the threshold; these DUs are more appropriate candidates for the RF resources. To this end, a priority sequence is designed to guide the switch operations of DU $n$, in which a smaller index $k$ indicates that DU $n$ suffers less interference from CU $k$. For simplicity, the priority order is determined by the distance between the $n$-th DU and the $m$-th CU: the farther CU $m$ is from DU $n$, the higher the priority of DU $n$ sharing the resource of CU $m$. Note that if a CU no longer meets its rate threshold after DU $n$ joins its coalition, DU $n$ should switch to the next coalition in the sequence.
In the traditional coalition formation algorithm [49], a randomly selected DU $n$ performs switch operations in a random order based on a randomly initialized coalition partition. According to the transfer rule in Equation (23), DU $n$ leaves the current coalition $S_m$ and joins the new coalition $S_{m'}$ only when the system profit increases. However, this relies only on local information and may deviate from the global optimal solution. Furthermore, the presence of users who do not satisfy the QoS demands compromises the coalition utility, which may adversely affect switch decisions. To this end, by allowing DUs to carry out exploratory operations with a chance probability, we introduce a greedy strategy into our coalition formation phase to search for the global maximum throughput of the D2D system. Considering the convergence rate of the algorithm, the chance probability should decrease gradually as the number of switch operations increases. It should also depend on the system profit generated by the switch operation; in particular, the probability of performing an exploratory operation should be reduced when the system penalty is high, i.e., when the system profit is highly negative. In this regard, the chance probability $P_c$ is designed as [50]:

$$ P_c = \exp\!\left( \frac{\Delta R}{\kappa\, T(t)} \right), $$

where $\Delta R$ denotes the system profit, $T(t)$ is a decreasing function of the current number of switch operations $t$, and $\kappa$ is a constant.
The detailed process of the coalition formation algorithm for resource allocation is shown in Algorithm 1.
Algorithm 1: The Coalition Formation Algorithm for Resource Allocation

Initialization phase:
1: Initialize the coalition structure $\Pi$ with all DUs in the VLC coalition $S_0$;
2: Collect the data rate $R_n$ of each DU $n$ in coalition $S_0$;
3: Sort the DUs in descending order of $R_n$, and obtain the set of DUs with data rates below the threshold, denoted as $\mathcal{N}_{\mathrm{low}}$;
4: for DU $n \in \mathcal{N}_{\mathrm{low}}$ do
5:  According to its priority sequence, DU $n$ joins the coalition with the highest priority;
6:  while the QoS requirements in the chosen coalition are not met do
7:   if the priority sequence has a next coalition then
8:    DU $n$ joins the next coalition;
9:   else
10:    DU $n$ joins back coalition $S_0$;
11:    break
12:   end if
13:  end while
14: end for
15: Obtain the initial coalition structure $\Pi_{\mathrm{init}}$;

Formation phase:
1: Set the current structure $\Pi_{\mathrm{cur}}$ to $\Pi_{\mathrm{init}}$, $t = 0$;
2: repeat
3:  Uniformly randomly choose a DU $n$ and denote its coalition as $S_m$;
4:  Uniformly randomly choose another coalition $S_{m'} \neq S_m$;
5:  if the switch operation from $S_m$ to $S_{m'}$ satisfies $S_{m'} \succ_n S_m$ then
6:   The chosen DU leaves coalition $S_m$ and joins coalition $S_{m'}$;
7:   Update $t$ and the current structure as follows: $\Pi_{\mathrm{cur}} \leftarrow \Pi'$, $t \leftarrow t + 1$;
8:  else
9:   Draw a random number $P$ uniformly distributed in $(0, 1)$, and calculate the chance probability $P_c$;
10:   if $P < P_c$ then
11:    Allow DU $n$ to join $S_{m'}$, and update $t$ and the current structure as: $\Pi_{\mathrm{cur}} \leftarrow \Pi'$, $t \leftarrow t + 1$;
12:   end if
13:  end if
14: until the coalition structure converges to the final Nash-stable $\Pi^*$.
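As an illustration only, the following Python sketch mirrors the formation phase of Algorithm 1, including the greedy exploratory step. The `sum_rate` callable, the temperature schedule `T(t) = 1/(1+t)`, the constant `kappa`, and the stopping heuristic are assumptions of this sketch rather than the paper's exact choices.

```python
import math
import random
from copy import deepcopy

def coalition_formation(structure, dus, sum_rate, kappa=1.0, max_steps=10000):
    """Formation phase: random switch operations, accepted greedily or
    with chance probability P_c = exp(dR / (kappa * T(t))) when dR < 0."""
    t = 0
    stable_rounds = 0
    while t < max_steps and stable_rounds < 10 * len(dus):
        du = random.choice(dus)
        src = next(c for c, members in structure.items() if du in members)
        dst = random.choice([c for c in structure if c != src])

        candidate = deepcopy(structure)
        candidate[src].remove(du)
        candidate[dst].add(du)

        d_rate = sum_rate(candidate) - sum_rate(structure)
        temp = 1.0 / (1.0 + t)          # decreasing schedule (assumed form)
        p_c = math.exp(min(0.0, d_rate) / (kappa * temp))  # chance probability

        if d_rate > 0 or random.random() < p_c:
            structure = candidate       # accept the switch operation
            stable_rounds = 0
        else:
            stable_rounds += 1          # count consecutive rejections
        t += 1
    return structure
```

Note that as `t` grows, `temp` shrinks, so exploratory (profit-decreasing) switches become exponentially less likely, which matches the convergence argument in Section 4.3.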
4.3. Theoretical Analysis
In this subsection, we provide a theoretical analysis in terms of convergence, stability, and optimality.
Convergence: Starting from any initial coalition structure $\Pi_{\mathrm{init}}$, the proposed coalition formation algorithm always converges to a final coalition structure $\Pi^*$.
Proof: For a given number of CUs and DUs, the total number of possible coalition structures is finite. As stated before, to improve the D2D system sum rate, each switch operation is allowed to sacrifice the immediate profit with a chance probability. Nevertheless, this probability approaches zero as the number of switch operations increases, i.e., $P_c \to 0$ as $t \to \infty$ when the system profit is negative. That is, every switch operation will eventually contribute to a higher profit, thus ensuring convergence to a final coalition structure.
Stability: The final coalition structure $\Pi^*$ of our algorithm is Nash-stable, which means that for any DU $n$ in its coalition $S_m \in \Pi^*$, no other coalition $S_{m'} \in \Pi^*$ satisfies $S_{m'} \succ_n S_m$.
Proof: Suppose that $\Pi^*$ is not Nash-stable. Then there exist at least one DU $n \in S_m$ and a new coalition $S_{m'}$ that conform to the transfer rule $S_{m'} \succ_n S_m$, so a new coalition structure would be formed. This contradicts the premise that $\Pi^*$ is the final coalition structure. Therefore, the final coalition structure $\Pi^*$ is Nash-stable.
Optimality: The Nash-stable coalition structure $\Pi^*$ obtained by this algorithm corresponds to the optimal system solution.
Proof: Regarding the renewal of the coalition structure as the evolution of a Markov chain, we can prove that the Markov chain enters a stationary state as the number of iterations increases, which guarantees optimality. The detailed proof can be found in [50].
5. MARL-Based Relay Selection
After obtaining the resource allocation solution, we discuss how to select the optimal relay for each DU to further improve the D2D system sum rate. Given the large number of DUs and relays, accomplishing relay selection with a centralized method may be impractical due to its high signaling overhead. Instead, each DU can be considered an agent that independently selects a relay for assistance, which constitutes a multi-agent system. However, for the large-scale IoT, the interference among DUs makes their relay selections co-dependent: a DU needs to consider the relay selection behaviors of the other DUs within the same coalition. In this section, to eliminate this co-dependency, we introduce a distributed cooperative MARL-based algorithm, WoLF-PHC, which encourages the DUs to cooperate for a higher system sum rate.
5.1. Modeling of Multi-Agent Environments
By solving the resource allocation problem in Section 4, the $N$ DUs are grouped into $M + 1$ coalitions. Note that DUs in different coalitions do not interfere with each other, which implies that the DUs in coalition $S_m$ do not need to consider the relay selection behaviors of the DUs in other coalitions. Hence, without loss of generality, we focus on the relay selection problem for the DUs in coalition $S_m$ and conduct the modeling analysis hereafter.
In the formulation of the MARL problem, all DUs, as agents, independently refine their relay selection policies according to their own observations of the global environment state. Nevertheless, if two or more agents update their policies at the same time, the multi-agent environment appears non-stationary, which violates the Markov hypothesis required for the convergence of RL [51]. Here, we model the problem as a partially observable Markov game. Formally, the multi-agent Markov game in coalition $S_m$ is formalized by the 5-tuple $\langle \mathcal{N}_m, \mathcal{S}, \{\mathcal{O}_n\}, \{\mathcal{A}_n\}, r \rangle$, where $\mathcal{N}_m$ is the set of DUs in $S_m$, and $N_m$ is the total number of DUs in $S_m$; $\mathcal{S}$ is the global environment state space; $\mathcal{O}_n$ is the local observation space for DU $n \in \mathcal{N}_m$, determined by the observation function $O$ as $o^n = O(s, n)$; $\mathcal{A}_n$ is the action space for DU $n$; and $r$ is the immediate reward that is shared by all DUs in $S_m$ to promote cooperative behavior among them. As depicted in Figure 2, at each step $t$, given the current environment state $s_t$, each DU $n$ takes an action $a_t^n$ from its action space $\mathcal{A}_n$ according to its observation $o_t^n$ and current policy $\pi_n$, forming the joint action $\mathbf{a}_t = (a_t^1, \ldots, a_t^{N_m})$. Thereafter, the environment generates an immediate reward $r_t$ and evolves to the next state $s_{t+1}$, and each DU receives a new observation $o_{t+1}^n$. To be specific, at each step $t$, for DU $n$, the observation space $\mathcal{O}_n$, action space $\mathcal{A}_n$, and reward $r_t$ are defined as follows:
Observation space: The state observed by DU $n$ can be described as $o_t^n = \mathbf{a}_{t-1}$, which includes the historical actions of all DUs in $S_m$ at the previous step. One motivation behind this is that if the actions taken by all agents are known, the multi-agent environment becomes stationary [52]. Furthermore, in this way each DU can fully learn to cooperate with the other DUs to achieve the global optimal reward.
Action space: The action space of DU $n$ can be described as $\mathcal{A}_n$, which represents that the DU can select a relay (the terms select a relay and take an action are used interchangeably throughout the paper) from the set of relays for assistance. Accordingly, the dimension of the action space is the total number of relays $R$. In order to reduce the computational complexity, we limit the number of available relays by delineating an area. For DU $n$, let the distance between its transmitter $\mathrm{TX}_n$ and receiver $\mathrm{RX}_n$ be $d_n$. As shown in Figure 3, we create two circles of radius $d_n$ centered at $\mathrm{TX}_n$ and $\mathrm{RX}_n$, respectively, thus forming an overlapping area. The relays located inside this overlapping area are considered the candidate relays. Subsequently, $\mathcal{A}_n$ can be reduced to:

$$ \mathcal{A}_n = \left\{ r \,\middle|\, d_{\mathrm{TX}_n, r} \le d_n \ \text{and} \ d_{r, \mathrm{RX}_n} \le d_n \right\}, $$

where $d_{\mathrm{TX}_n, r}$ denotes the distance between $\mathrm{TX}_n$ and relay $r$, and $d_{r, \mathrm{RX}_n}$ is the distance between relay $r$ and $\mathrm{RX}_n$. Besides, we assume that the candidate relay sets of different DUs do not overlap.
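For illustration, here is a minimal sketch of the candidate-relay filtering implied by the overlapping-circle construction above. The radius choice (`d_n`, the DU's TX–RX separation) and the flat data layout are assumptions of this sketch.

```python
import math

def candidate_relays(tx, rx, relays):
    """Keep relays inside the lens-shaped overlap of two circles of
    radius d_n centered at the DU's transmitter and receiver.

    tx, rx: (x, y) positions of DU-TX and DU-RX
    relays: dict mapping relay id -> (x, y) position
    """
    d_n = math.dist(tx, rx)  # TX-RX separation used as the radius
    return [
        rid for rid, pos in relays.items()
        if math.dist(tx, pos) <= d_n and math.dist(pos, rx) <= d_n
    ]

# Example: relays at various positions around a 10 m D2D link
relays = {0: (2.0, 1.0), 1: (5.0, 0.5), 2: (12.0, 3.0)}
print(candidate_relays((0.0, 0.0), (10.0, 0.0), relays))  # -> [0, 1]
```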
Reward: To encourage each DU to learn to collaborate with the other DUs and thus maximize the D2D system throughput, the DUs in coalition $S_m$ share a common reward $r_t$, defined as the sum rate achieved by the DUs in $S_m$ under the current relay selection:

$$ r_t = \sum_{n \in S_m} R_n\big(\mathbf{X}_m(t)\big), $$

where $\mathbf{X}_m(t)$ is the decision matrix for relay selection at the current step $t$ in coalition $S_m$, which depends on the joint action $\mathbf{a}_t$. That is, if DU $n$ selects relay $r$ at step $t$, then we have $[\mathbf{X}_m(t)]_{n,r} = 1$ and $[\mathbf{X}_m(t)]_{n,r'} = 0$ for all $r' \neq r$.
5.2. WoLF-PHC
In a multi-agent environment, each agent is part of every other agent's environment, leading to non-stationarity. Directly applying classical single-agent RL (e.g., Q-learning or policy gradient) in the multi-agent case may cause severe oscillations and ultimately prevent convergence [53]. In contrast, WoLF-PHC, as an extension of Q-learning, adopts the principle of learning fast when losing and learning slowly when winning, which allows agents to learn moving targets with both guaranteed rationality and convergence [54]. Hence, we apply WoLF-PHC to enable the DUs to learn their own relay selection decisions in the multi-agent system.
In WoLF-PHC, each DU continuously interacts with the environment and the other DUs in the same coalition to update its Q-values. To simplify the notation, for DU $n$, the local observation $o_t^n$, action $a_t^n$, and action space $\mathcal{A}_n$ at the current step $t$ are simply denoted as $o$, $a$, and $\mathcal{A}$, respectively; the received reward $r_t$, the new observation $o_{t+1}^n$, and the action at the next step are denoted as $r$, $o'$, and $a'$, respectively. Let $Q(o, a)$ be the estimated Q-value of action $a$ in state $o$ during the learning process. As in the Q-learning algorithm, the update rule of the Q-value can be expressed as:

$$ Q(o, a) \leftarrow (1 - \alpha)\, Q(o, a) + \alpha \left( r + \gamma \max_{a' \in \mathcal{A}} Q(o', a') \right), $$

where $\alpha$ represents the learning rate and $\gamma$ is the discount factor.
To learn the optimal Q-value, the DU updates its own relay selection policy $\pi(o, a)$, which describes the probability of taking action $a$ in state $o$. As a generalization of the widely used greedy algorithm, the policy hill-climbing (PHC) algorithm increases the probability of taking the highest-valued action while decreasing the probabilities of the other actions according to the learning parameter $\delta$ [55]. Moreover, the policy should remain a legal probability distribution. Thus, the update rule of policy $\pi(o, a)$ can be calculated as:

$$ \pi(o, a) \leftarrow \pi(o, a) + \Delta_{o,a}, $$

where

$$ \Delta_{o,a} = \begin{cases} -\delta_{o,a}, & a \neq \arg\max_{a'} Q(o, a'), \\ \sum_{a' \neq a} \delta_{o,a'}, & \text{otherwise}, \end{cases} \qquad \delta_{o,a} = \min\left( \pi(o, a), \frac{\delta}{M - 1} \right), $$

where $M$ is a constant coefficient.
In essence, the key contribution of WoLF-PHC is the variable learning parameter $\delta$, which takes one of two values, $\delta_w$ and $\delta_l$, with $\delta_l > \delta_w$. The value used to update the policy depends on whether the current policy $\pi$ is winning or losing. To this end, an average policy $\bar{\pi}$ is introduced to judge the win–lose status of the current policy; it is updated as:

$$ \bar{\pi}(o, a) \leftarrow \bar{\pi}(o, a) + \frac{1}{C(o)} \big( \pi(o, a) - \bar{\pi}(o, a) \big), \quad \forall a \in \mathcal{A}, $$

where $C(o)$ represents the number of occurrences of the state $o$ observed by the DU, which is updated by:

$$ C(o) \leftarrow C(o) + 1. $$

By comparing the expected payoff of the current policy with that of the average policy over time, the DU chooses its learning parameter $\delta$ from $\delta_w$ and $\delta_l$. If the expected value of the current policy is larger, $\delta_w$ is applied to update the policy cautiously; otherwise, $\delta_l$ is utilized to learn quickly. Accordingly, the learning parameter $\delta$ can be described as:

$$ \delta = \begin{cases} \delta_w, & \sum_{a} \pi(o, a)\, Q(o, a) > \sum_{a} \bar{\pi}(o, a)\, Q(o, a), \\ \delta_l, & \text{otherwise}. \end{cases} $$
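To make these update rules concrete, the following is a minimal, self-contained Python sketch of one WoLF-PHC agent; the tabular layout and the specific hyperparameter values are assumptions of the sketch, not the paper's settings.

```python
import random
from collections import defaultdict

class WoLFPHCAgent:
    """Tabular WoLF-PHC for one agent with a discrete (relay) action set."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.nA, self.alpha, self.gamma = n_actions, alpha, gamma
        self.dw, self.dl = delta_win, delta_lose
        self.Q = defaultdict(lambda: [0.0] * n_actions)
        self.pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)
        self.avg_pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)
        self.count = defaultdict(int)

    def act(self, obs):
        # Sample an action (relay index) from the current mixed policy.
        return random.choices(range(self.nA), weights=self.pi[obs])[0]

    def update(self, obs, action, reward, next_obs):
        # 1) Q-learning update.
        target = reward + self.gamma * max(self.Q[next_obs])
        self.Q[obs][action] += self.alpha * (target - self.Q[obs][action])

        # 2) Average policy update with state-visit count C(o).
        self.count[obs] += 1
        c = self.count[obs]
        for a in range(self.nA):
            self.avg_pi[obs][a] += (self.pi[obs][a] - self.avg_pi[obs][a]) / c

        # 3) Win/lose test: compare expected payoffs of pi and avg_pi.
        q = self.Q[obs]
        winning = sum(p * v for p, v in zip(self.pi[obs], q)) > \
                  sum(p * v for p, v in zip(self.avg_pi[obs], q))
        delta = self.dw if winning else self.dl

        # 4) Policy hill-climbing toward the greedy action, keeping
        #    the policy a legal probability distribution.
        best = max(range(self.nA), key=lambda a: q[a])
        for a in range(self.nA):
            if a != best:
                step = min(self.pi[obs][a], delta / (self.nA - 1))
                self.pi[obs][a] -= step
                self.pi[obs][best] += step
```

Here `obs` is assumed to be a hashable encoding of the previous joint action, consistent with the observation space defined in Section 5.1.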
The detailed process of the WoLF-PHC algorithm for relay selection is given in Algorithm 2.
Algorithm 2: The WoLF-PHC Algorithm for Relay Selection

1: Set $\alpha$, $\gamma$, $\delta_w$, $\delta_l$;
2: for each coalition $S_m$ do
3:  Initialize, for each DU $n \in S_m$: $Q(o, a) = 0$, $\pi(o, a) = 1 / |\mathcal{A}_n|$, $C(o) = 0$;
4:  for each step $t$ do
5:   for each DU $n \in S_m$ do
6:    Receive the current local observation $o$ (the previous joint action);
7:    Select relay $a$ at random according to the probability policy $\pi(o, \cdot)$;
8:   end for
9:   All DUs take their actions and receive the immediate reward $r$;
10:   for each DU $n \in S_m$ do
11:    Receive the next observation $o'$;
12:    Update the Q-value and the Q-table;
13:    Update the average policy $\bar{\pi}$;
14:    Update the relay selection policy $\pi$ with the learning parameter $\delta$;
15:    Update the observation $o \leftarrow o'$;
16:   end for
17:  end for
18:  for each DU $n \in S_m$ do
19:   Find the optimal relay $a^* = \arg\max_a Q(o, a)$, and set the corresponding entry of the decision matrix;
20:  end for
21: end for
22: Output the optimal decision matrix $\mathbf{X}^*$.
5.3. Neighbor–Agent-Based WoLF-PHC
In WoLF-PHC, we define the observation space of an agent as the past joint action of all agents within coalition $S_m$, so as to guarantee the stationarity of the multi-agent environment. Before relays are reselected, with 10 DUs and four resources, we visualize the geographic locations of all DUs and the result of the resource allocation in Figure 4, where different colors represent different resources. It can be seen that nearby DUs use different resources, while more distant DUs share the same resource. In other words, the DUs in a coalition are far apart from each other. Since D2D communication has a limited range, the interference between any candidate relay of one DU and a remote DU in the same coalition can be considered identical and negligible, and vice versa. Thus, the relay selection decisions of such distant DUs are independent of each other. That is, it is not necessary to observe all agents' historical actions to ensure stationarity; the actions of neighboring agents are enough. Accordingly, we propose a lightweight algorithm, named neighbor–agent-based WoLF-PHC, that allows the target agent to observe the actions of a fixed number of neighboring agents.
In the neighbor–agent-based WoLF-PHC, for a target agent $n$, we define the $K$ nearest agents to $n$ within a coalition as its neighboring agents. Moreover, the observation space is changed from the joint action of all agents to the joint action of the neighboring set $\mathcal{K}_n$, which comprises agent $n$ itself and its $K$ neighboring agents. Note that if $K = N_m - 1$, the neighbor–agent-based WoLF-PHC reduces to WoLF-PHC. A sketch of the reduced observation construction is given below.
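The following snippet illustrates, under assumed flat data structures, how an agent's observation could shrink from the full joint action to the previous actions of its $K$ nearest neighbors plus its own.

```python
import math

def neighbor_observation(agent, positions, last_actions, k):
    """Build the reduced observation: the previous actions of the agent
    itself and of its k nearest coalition members.

    positions:    dict agent id -> (x, y)
    last_actions: dict agent id -> relay index chosen at step t-1
    """
    others = [a for a in positions if a != agent]
    others.sort(key=lambda a: math.dist(positions[agent], positions[a]))
    neighbor_set = [agent] + others[:k]
    # A sorted tuple of (agent, action) pairs keeps the observation
    # hashable and order-independent for tabular Q-learning.
    return tuple((a, last_actions[a]) for a in sorted(neighbor_set))
```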
6. Complexity and Signaling Overhead Analysis
6.1. Complexity Analysis
The complexity of the proposed joint resource allocation and relay selection algorithm can be analyzed in the following two parts.
One part of the complexity comes from the coalitional-game-based resource allocation scheme, which requires $T_{\mathrm{in}}$ switch-operation evaluations, where $T_{\mathrm{in}}$ is the number of inner iterations required to converge to the final coalition structure; its computational complexity is thus $\mathcal{O}(T_{\mathrm{in}})$.
Another part arises from the relay selection scheme based on WoLF-PHC or the neighbor–agent-based WoLF-PHC. For each agent $n$, the computational complexity is $\mathcal{O}(|\mathcal{O}_n| \times |\mathcal{A}_n|)$, where $|\mathcal{O}_n|$ is the observation space size of agent $n$ and $|\mathcal{A}_n|$ is its action space size. For WoLF-PHC, the overall complexity is, at most, $\mathcal{O}(N\, |\mathcal{O}|_{\max}\, |\mathcal{A}|_{\max})$, where $|\mathcal{O}|_{\max}$ denotes the largest observation space size and $|\mathcal{A}|_{\max}$ the largest action space size. For the neighbor–agent-based WoLF-PHC, the observation space shrinks to the joint actions of at most $K + 1$ agents, so the overall complexity is at most $\mathcal{O}(N\, |\mathcal{A}|_{\max}^{K+1}\, |\mathcal{A}|_{\max})$, where the setting parameter $K$ is much less than the coalition size in general.
Therefore, the total complexity of our proposed algorithms is $\mathcal{O}(T_{\mathrm{in}} + N |\mathcal{O}|_{\max} |\mathcal{A}|_{\max})$ or $\mathcal{O}(T_{\mathrm{in}} + N |\mathcal{A}|_{\max}^{K+1} |\mathcal{A}|_{\max})$. To obtain the global optimal solution, apart from solving the subproblems sequentially, an ideal algorithm usually requires multiple outer iterations until the D2D sum rate no longer rises. As a result, the complexity of the ideally proposed algorithm (IPA) is $T_{\mathrm{out}}$ times that of the corresponding proposed algorithm, where $T_{\mathrm{out}}$ is the number of outer iterations.
However, the relays reselected by any agent come from its corresponding delineated area, i.e., the candidate relays are close to each other, so reselecting relays has little impact on the resource allocation solution. In this way, the performance of our proposed lower-complexity algorithms is considered to approximate that of IPA. Hence, it is more suitable to apply our proposed algorithms rather than IPA to large-scale scenarios.
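As an illustrative calculation of this trade-off (the numbers here are ours, not from the paper): consider a coalition of $N_m = 6$ DUs, each with $|\mathcal{A}| = 5$ candidate relays. The full-observation WoLF-PHC tabulates over the previous joint action of all six agents, whereas the neighbor–agent-based variant with $K = 3$ only tracks four agents:

$$ |\mathcal{O}|_{\mathrm{WP}} = |\mathcal{A}|^{N_m} = 5^6 = 15625, \qquad |\mathcal{O}|_{\mathrm{NWP}} = |\mathcal{A}|^{K+1} = 5^4 = 625, $$

a 25-fold reduction in table size per agent, at the cost of ignoring the two most distant coalition members.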
6.2. Signaling Overhead Analysis
The signaling overhead of our proposed algorithm should also be analyzed in two parts.
On the one hand, since the resource allocation mechanism is implemented in a centralized manner, the signaling overhead mainly comes from acquiring CSI, which can be classified into transmission and interference CSI. Concretely, in the relay-aided D2D network, the transmission CSI covers the links from the CUs to the BS, from the DU-TXs to their candidate relays, and from these relays to the DU-RXs; the interference CSI covers the links from the CUs to the relays and DU-RXs, and from the DU-TXs to the BS. When the numbers of CUs, DUs, and relays are $M$, $N$, and $R$, respectively, the signaling overhead for CSI measurement in the centralized manner is on the order of $\mathcal{O}(MN + NR)$ by the evaluation method in [56]. In contrast, the signaling overhead for CSI measurement can be reduced to $\mathcal{O}(NR)$ in a distributed manner, which usually comes at the expense of global system performance. Note that $R$ is generally assumed to be larger than $N$ and $M$ so as to ensure the reliability of relay-aided D2D communication. Thus, the difference in signaling overhead between the two manners is not significant.
On the other hand, the distributed relay selection mechanism is performed independently in each coalition without exchanging information among coalitions, which greatly reduces the signaling overhead. However, for the DUs within any coalition, in order to encourage the DUs to achieve the global optimal reward in a collaborative way, each DU needs to upload its own historical information to the BS, including the actions taken and the data rate obtained. Then, the BS broadcasts the actions of all DUs within a coalition along with a common reward. All the above information exchanged between the DUs and the BS is numerical data with a size of only a few kilobytes, which leads to a small signaling overhead. Consequently, this part of the overhead is negligible compared with that incurred by the centralized resource allocation mechanism.
In summary, the signaling overhead of our proposed algorithm approximates $\mathcal{O}(MN + NR)$.
7. Numerical Results
In this section, we present numerical results to evaluate the proposed algorithms. In our simulation, we consider a 30 m × 30 m room in which the CUs utilize RF resources for uplink communication and the relay-assisted DUs implement applications requiring high-rate communication; each DU can choose either the VLC-D2D or the RF-D2D communication mode. Furthermore, the distance between the transmitter and receiver of each DU is uniformly distributed with an upper bound of 10 m, which makes the cooperation gain obtained by combining VLC and RF most notable [7]. Moreover, the available idle relays are evenly distributed, and their number is fixed at 50 [57]. To model a realistic VLC-D2D communication channel, we assume that the irradiance and incidence angles follow a Gaussian distribution with the mean and standard deviation given in [7]. We repeat the simulations 200 times independently and average the results, thus mitigating the randomness of the above parameters. Considering the QoS requirements, the minimum rate thresholds of the CUs and DUs are set to 10 Mbps and 20 Mbps, respectively. Additional detailed simulation parameters are given in Table 1.
7.1. Performance Analysis of PCG-Based Resource Allocation
First, by comparing with the exhaustive algorithm (EA), we demonstrate the optimality of the proposed coalitional game (PCG)-based resource allocation in practice. Meanwhile, we compare the performance of the proposed joint resource allocation and relay selection algorithm, namely PCG-WP, with the corresponding IPA. We present the D2D system sum rate under the above algorithms for varying numbers of CUs and DUs. Given the high complexity of EA and IPA, we fix the number of DUs at eight in Figure 5 and fix the number of CUs at two in Figure 6. From these two figures, on the one hand, we observe that the sum rate achieved by PCG is very close to that of EA, which demonstrates that PCG attains near-EA performance with lower complexity. On the other hand, the sum rate gap between PCG-WP and IPA is insignificant: the sum rate of IPA is at most 10% larger than that of PCG-WP.
7.2. Performance Analysis of WoLF-PHC for Relays Selection
Then, based on the final coalition structure obtained by PCG, we employ RL algorithms to reselect relays for the DUs in each coalition. In this simulation, we use Q-learning (QL) as a comparative algorithm to evaluate the convergence performance of our proposed WoLF-PHC (WP). In addition, we also show the convergence performance of the algorithms exploiting only local information, including the neighbor–agent-based WoLF-PHC (NWP) and the neighbor–agent-based Q-learning (NQL). For simplicity, we denote the NWP working with three neighboring users as N3WP, and the same goes for N3QL.
Figure 7 compares the convergence of the above four approaches in terms of total reward when the number of DUs is 10 and the number of CUs is one. The total reward is the sum of the rewards received by all coalitions. From Figure 7, the proposed WP converges to the maximum total reward of about 1150 at nearly 2700 steps, while N3WP converges to a close-to-maximum reward of about 1070 at a faster rate of around 1500 steps. Therefore, N3WP increases the convergence speed by approximately 44.4% at the cost of a 6.9% total reward loss. By contrast, both QL and N3QL fail to converge and exhibit poor performance, although N3QL appears more stable (fewer fluctuations) than QL. On the one hand, capitalizing on the "win or learn fast" mechanism, the WP-based approaches present much better convergence performance than the QL-based approaches. On the other hand, the approaches that utilize local information (N3WP and N3QL) greatly reduce the state space, thereby accelerating convergence at a tolerable performance sacrifice, whereas the complexity of IPA grows exponentially. This result further confirms the feasibility of replacing IPA with PCG-WP.
7.3. Performance Comparison
Next, we compare the two proposed schemes, PCG-WP and PCG-N3WP, with the following five baseline schemes:
- (1)
Random algorithm (RA). In this scheme, each D2D pair assisted by a randomly selected relay can randomly use either the VLC resource or RF resource of any CU.
- (2)
PCG-based resource allocation and random relay selection (PCG-RD). To investigate the potential gain of jointly optimizing resource allocation and relay selection, PCG-RD, which optimizes only the resource allocation, is considered as a comparative algorithm.
- (3)
Random resource allocation and WP-based relay selection (RD-WP). Similar to PCG-RD above, RD-WP, which optimizes only the relay selection, is regarded as a comparative algorithm to analyze the joint optimization gain.
- (4)
Traditional coalitional game [49] and WP-based relay selection (TCG-WP). In this scheme, the resource allocation problem is addressed by the traditional coalitional game with random initialization and formation, and the WP method is used for relay selection.
- (5)
Best response dynamics (BRD) in [58]. Compared with our proposed cooperative schemes, each DU in this scheme is selfish and aims at maximizing its own throughput. In both the resource allocation and relay selection stages, every DU simultaneously optimizes its action with respect to the action profile composed of the actions played by the other DUs in the same coalition at the previous time step.
In Figure 8, we evaluate the impact of the number of CUs on the D2D system sum rate under different schemes. Here, the number of DUs is enlarged to 14 and the number of CUs varies from one to eight. As the number of CUs increases, the performance of both RA and RD-WP declines slightly and then levels off, although the RA curve exhibits slight fluctuations due to randomness. The performance degradation stems from the short distance (up to 10 m) between the transceivers of each D2D pair, which makes VLC superior to RF. When the number of CUs equals one, the probability of randomly selecting the VLC resource for each DU is as high as 50%, so the sum rates of both RA and RD-WP reach their maximum. However, the performance of the remaining schemes improves as the number of CUs increases, thanks to rational resource allocation. Among them, BRD, with its selfish nature, exhibits the worst performance, while the cooperative PCG-WP obtains the best. This is because each DU in BRD optimizes its own profit regardless of the interference introduced to other DUs. With three CUs, the sum rate of PCG-WP is larger than those of PCG-N3WP, TCG-WP, and BRD by about 5.2%, 13.3%, and 27.2%, respectively. As the number of CUs increases further, implying fewer DUs per coalition, PCG-N3WP becomes sufficient to characterize the global information and thus achieves almost the same throughput as PCG-WP. Meanwhile, the sum rate gap between PCG-WP and TCG-WP gradually narrows, since the switch operations in TCG-WP are no longer limited by QoS constraints when resources are adequate. In addition, when the number of CUs is five, PCG-WP outperforms PCG-RD and RD-WP by about 19.0% and 44.5%, respectively, which highlights the gain of joint optimization.
Moreover, in Figure 9, we focus on the system performance in terms of the outage probability, calculated as the ratio of users who do not meet the QoS demands to the total number of system users. As can be seen, the outage probability declines sharply as the number of CUs increases; the underlying reason is that more CUs naturally contribute to lower interference. When the number of CUs equals one, BRD shows the worst performance due to the ping-pong effect between the VLC resource and the RF resource of the single CU. As the number of CUs grows, however, its performance surpasses that of RD because the probability of the ping-pong effect decreases. In combination with Figure 8, it can be noticed that BRD outperforms RD-WP in terms of the sum rate, while its outage performance is slightly worse than that of RD-WP. This reflects the selfish nature of BRD, i.e., improving the rates of some DUs at the expense of others. More importantly, PCG-N3WP achieves almost the same, and the lowest, outage probability as PCG-WP. Note that TCG-WP initially exhibits poor performance, yet it exceeds our proposed PCG-WP and PCG-N3WP when the number of CUs is larger than seven. This makes sense: when resources are sufficient, our schemes may sacrifice an affordable amount of individual DU performance for the sake of the overall system performance.
Figure 10 depicts the D2D system sum rate for the different mechanisms in a resource-lacking system, fixing the number of CUs at two and varying the number of DUs from 4 to 18. The increase in the number of DUs boosts the total throughput, and PCG-WP always achieves the highest. Moreover, as traffic congestion worsens, the gap between PCG-WP and the other competitive schemes grows. When 18 DUs are involved, PCG-WP delivers a 129.2% higher total throughput than the baseline RA. Another observation is that when the number of DUs exceeds 12, all schemes except the proposed PCG-WP and PCG-N3WP show little improvement. It can be inferred that, without the joint gain of resource allocation and relay selection, the gain from increasing the number of DUs alone no longer compensates for the loss from the resulting severe interference. Concretely, in the context of insufficient resources, PCG has a more prominent advantage over TCG in finding optimal solutions: the QoS requirements of users restrict TCG from performing switch operations, which leads to deviation from the optimal solution, whereas PCG satisfies the QoS demands as much as possible in the initialization phase, and the greedy policy further allows the system to explore more operations in the formation phase, thereby enhancing the sum rate. A final observation is that as the number of DUs increases, the advantage of exploiting global information for relay selection becomes obvious.
In Figure 11, we can observe that the outage probability rises as the number of DUs increases because of the fierce competition for resources and relays. In contrast to Figure 9, the performance of BRD is slightly better than that of RD-WP, which suggests that an efficient resource allocation scheme may be more important than an appropriate relay selection scheme in a resource-scarce environment, and vice versa. In addition, increasing the number of DUs makes the gap between PCG-WP and the other algorithms notable.
Moreover, we study the impact of the number of neighboring users $K$ on the system performance in terms of the D2D system sum rate and the convergence rate, where the convergence rate is indicated by the reciprocal of the number of iterations to converge. In Figure 12, the number of CUs remains two and the number of DUs equals 18. As expected, as $K$ decreases, the sum rate decreases as well, while the convergence rate increases greatly. More specifically, decreasing $K$ from seven to three reduces the sum rate by 10.3% and also reduces the number of iterations to converge by 82.9%. Obviously, PCG-NWP trades a small throughput loss for a significantly faster convergence rate. In this regard, users can trade off throughput against convergence performance according to their preferences and practical constraints.
7.4. Summary of Main Results
In order to present the results of this article more clearly, this section summarizes the main conclusions as follows:
- (1)
In the resource allocation stage, our proposed PCG achieves a sum rate close to that of EA, but with lower complexity.
- (2)
Compared with WoLF-PHC (WP), the neighbor–agent-based WoLF-PHC (N3WP) increases the convergence speed by approximately 44.4% at the cost of a 6.9% total reward loss.
- (3)
Our proposed WP presents much better convergence performance than the QL-based approaches.
- (4)
The approaches that utilize local information (N3WP and N3QL) can greatly reduce the state space, thereby accelerating the convergence speed.
- (5)
Randomly optimizing only the resource allocation or the relay selection policy cannot maximize the overall performance; appropriate joint optimization methods are indispensable.
- (6)
In resource-lacking systems, our proposed WP and NWP show greater advantages.