1. Introduction
The diffusion of sensors and personal devices has recently made possible a range of networked applications that have geographical proximity as a key characteristic. Relevant examples abound: smart-city applications are often based on querying sensors deployed in a certain area (e.g., for temperature, air pollution, etc.) [
1]. In vehicle-to-vehicle (V2V) communications, cars that sense anomalous conditions (e.g., a collision) should broadcast this information to their neighbors, to instruct their assisted-driving systems to activate safety maneuvers [
2]. Likewise, coordinated robots or drones need to broadcast their position and status to their neighbors to coordinate swarming [
In all the above cases, the set of potentially interested recipients of a message generated by an application is defined according to geographical proximity to the originator: anyone close enough should pay attention to the message, where how close is close enough is actually determined by the application itself. For instance, again in the case of a vehicular collision, it is foreseeable that only cars within a small radius of the collision point should activate their assisted-driving systems and initiate safety maneuvers, whereas vehicle navigation systems within a much larger radius may benefit from knowing about the collision and start looking for alternative routes. In other words, the broadcast domain should be defined directly by the application.
All the above applications need to rely on ubiquitous, reliable, and secure connectivity, as well as mobility support. Another requirement of these applications is low latency, either because of a specific deadline, or because the performance of networked applications relying on these broadcast messages depends on how fast the latter propagate.
In the last decade, researchers and manufacturers have widely investigated the performance of 802.11p as a technology for vehicular mobile ad hoc networks. On one hand, the latter has proven to be very scalable and flexible, as it does not need any infrastructure to work. On the other hand, 802.11p has been shown to have limitations in providing bounded delay and QoS (Quality of Service) guarantees [
4]. Recently, both researchers and automotive industries have begun to investigate using 4G cellular networks, such as LTE-A (Long-Term Evolution-Advanced) as an option for vehicular communications [
5]. Research works have evaluated the performance of 4G for various vehicular applications, showing that it can be considered a viable alternative to 802.11p [
6]. The above considerations also fall into the context of Vehicle-to-Everything (V2X) communications, where one endpoint of the communications is a vehicle, and the other one can be user cell phones, connected traffic lights, etc. Moreover, several of the above examples of applications are being mentioned as use-cases to generate requirements for the definition of the future 5G communications [
7]. We are thus moving towards a context where cellular communications are expected to play a major role as a unifying technology for multiple services.
The current LTE-Advanced standard, unfortunately, is ill-equipped to support this type of application. In fact, cellular communications normally have the eNodeB (eNB) as an endpoint of each layer-2 radio transmission. This requires the User Equipment (UE) application originating the message to always use the eNB as a relay in a two-hop path, even though the destination is a proximate UE. The eNB can relay the message using either multicast or unicast downlink transmissions. Multicast relaying leverages the standard Multimedia Broadcast Single Frequency Network (MBSFN), which was designed for broadcast services like TV. MBSFN is inflexible for at least three reasons. First, multicast/broadcast subframes are mutually exclusive with unicast ones, and their definition must be configured semi-statically. Thus, defining MBSFN subframes implies eating into the capacity for downlink unicast transmissions, and reserving capacity for broadcast ones even when there is nothing to relay. If the network is configured to have just one MBSFN subframe per frame (a frame being 10 subframes), then unicast transmission capacity in the downlink is reduced by 10%, and the worst-case delay for a multicast relaying is still 10 ms, which is non-negligible. Clearly, this mechanism is tailored to continuous, periodic traffic, rather than to sporadic, infrequent traffic. Second, MBSFN transmissions reach a tracking area, which corresponds to a set of cells. There is no way to geofence the broadcast to smaller, user-defined areas. Third, a single transmission format is selected for the whole tracking area: depending on their channel conditions, some—possibly many—UEs may not be able to decode the message, hence reliable delivery is not guaranteed.
eNB-driven relaying using unicast transmissions solves all three problems above: assuming that the eNB possesses the location of the target UEs (something which is achievable through localization services, empowered by Mobile-edge Computing (MEC) [
8]), the eNB may select which UEs to target (hence defining its own geofence), use different transmission formats in order to match their channel conditions, and allocate capacity only on demand. The downside, however, is that this may be too costly in terms of downlink resources: in fact, the resource occupancy grows linearly with the number of UEs. If a 40-byte message has to be relayed to 100 UEs, and the average Channel Quality Indicator (CQI) is 5, three Resource Blocks (RBs) per UE are needed, which means 300 RBs in total, i.e., six subframes entirely devoted to relaying the message within the cell in a 10-MHz bandwidth LTE deployment. This deprives other UEs of bandwidth for a non-negligible time, and consumes energy in the network.
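The resource count in the example above can be reproduced with a short calculation. The CQI-to-spectral-efficiency values below come from the standard 3GPP CQI table; the per-RB overhead factor is our own rough assumption, introduced only to make the arithmetic come out as in the text:

```python
import math

# Approximate spectral efficiency (bits per resource element) for a few
# CQI values, from the 3GPP TS 36.213 CQI table.
EFFICIENCY = {3: 0.3770, 5: 0.8770, 7: 1.4766, 10: 2.7305, 15: 5.5547}

RE_PER_RB = 12 * 14        # subcarriers x OFDM symbols in one RB per subframe
RBS_PER_SUBFRAME = 50      # a 10-MHz LTE deployment

def rbs_per_ue(payload_bytes: int, cqi: int, overhead: float = 0.25) -> int:
    """RBs needed to carry payload_bytes at a given CQI.

    'overhead' roughly accounts for control REs and L2 headers (assumption).
    """
    bits_per_rb = RE_PER_RB * EFFICIENCY[cqi] * (1 - overhead)
    return math.ceil(payload_bytes * 8 / bits_per_rb)

per_ue = rbs_per_ue(40, cqi=5)                    # 3 RBs at CQI 5
total = per_ue * 100                              # 100 target UEs -> 300 RBs
subframes = math.ceil(total / RBS_PER_SUBFRAME)   # 6 full subframes
print(per_ue, total, subframes)
```

The output matches the figures quoted in the text: three RBs per UE, 300 RBs in total, i.e., six entire subframes devoted to relaying one 40-byte message.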
Starting from the latest releases, the LTE-A standard has incorporated network-controlled device-to-device (D2D) transmissions, i.e., broadcast transmissions where both endpoints are UEs. These are also foreseen in the upcoming 5G standard. The eNB still allocates the resources for D2D transmission on the so-called sidelink (SL), which is often physically allocated in the UL (uplink) frame [
9]. D2D transmissions have a number of attractive features: they do not increase the operator’s energy bill, since data-plane transmissions do not involve the eNB. Moreover, they can occur at reduced power, hence exploit spatial frequency reuse. However, the main downside is that their coverage area is limited to a UE’s transmission radius, which is often too small.
This paper, which extends our previous work [
10], advocates using multihop D2D transmissions to support geographically constrained broadcast services. Multihopping allows these services to scale up to a larger geographical reach, while retaining all the benefits of D2D. In order to engineer a Multihop D2D-based Broadcast (MDB) service, it is necessary to enlist the cooperation of both the UEs and the eNBs, which are interdependent. In fact, UEs must define the broadcast domain, and—being the only nodes that can use the SL—decide if and when to relay messages, taking into account that an aggressive relaying policy may waste resources or even induce collisions. On the other hand, the network—and, specifically, the eNBs—must allocate resources to allow the diffusion of the broadcast, and possibly coordinate with neighboring eNBs. Resource allocation can be either static or dynamic (i.e., on demand) [
11], and both solutions have pros and cons. Our goal is to prove that an MDB service can be realized by using minimal, standard-compliant cooperation from the network infrastructure (which need not even be aware of the very existence of the MDB service), and only running relatively simple application logic within UEs. While several other papers have investigated multihop D2D transmissions in LTE-A (e.g., [
12,
13,
14,
15,
16,
17]), this work and its predecessor [
10] were the first to propose multihop D2D as a building block for geofenced broadcast services. A relevant issue, therefore, is to investigate what performance can be expected from such services, i.e., what latency, resource consumption, and reliability (i.e., percentage of reached destinations within the broadcast domain) can be achieved. This paper extends [
10] by presenting a thorough discussion and evaluation of the various factors that determine the performance of MDB. Moreover, we discuss the impact of different network conditions on the performance of MDB, such as varying UE density, the presence of selfish users, or the occurrence of near-simultaneous broadcasts related to the same event. Last, but not least, we assess the performance of a real-life service, i.e., the diffusion of alerts in a vehicular network scenario, run on MDB. Our results confirm that MDB consumes few resources, that it is reliable, i.e., able to reach most of the UEs, and that the latency involved is tolerable, even when the target area is quite large.
The rest of the paper is organized as follows:
Section 2 reports background information.
Section 3 reviews the related work.
Section 4 discusses the role of UEs and eNBs in MDB.
Section 5 reports performance evaluation results, and
Section 6 concludes the paper.
2. Background
Hereafter, we describe the LTE-A protocol stack, as well as point-to-multipoint (P2MP) D2D communications.
An LTE-A network is composed of cells, each under the control of a single eNB. UEs are attached to an eNB, and can change their serving eNB through a handover procedure. The eNBs can communicate with each other using the X2 interface, a logical connection generally implemented on a wired network.
The LTE-A protocol stack incorporates a suite of four protocols, shown in
Figure 1, which collectively make up layer 2 (i.e., the Data-link layer) of the OSI (Open System Interconnection) stack. The stack is present on both the eNB and the UE. Traversing the LTE-A stack from the top down, and assuming the viewpoint of the eNB, we first find the Packet Data Convergence Protocol (PDCP), which receives IP (Internet Protocol) datagrams, performs ciphering and numbering, and sends them to the Radio Link Control (RLC) layer. RLC Service Data Units (SDUs) are stored in the RLC buffer, and they are fetched by the underlying MAC (Media Access Control) layer when the latter needs to compose a subframe transmission. The RLC may be configured to work in three different modes: transparent (TM), unacknowledged (UM), or acknowledged (AM). The TM mode does not perform any operation. The UM, instead, segments and concatenates RLC SDUs to match the size requested by the MAC layer on the transmission side. On the reception side, RLC-UM reassembles SDUs, detects duplicates, and performs reordering. The AM adds an ARQ (Automatic Repeat reQuest) retransmission mechanism on top of the UM functionalities. The MAC assembles the RLC Protocol Data Units (PDUs) into Transport Blocks (TBs), adds a MAC header, and sends everything through the physical (PHY) layer for transmission.
In LTE-A, resources are scheduled by the eNB’s MAC layer, with a period of one Transmission Time Interval (TTI), i.e., 1 ms. On each TTI, a vector of Resource Blocks (RBs) is allocated to backlogged UEs according to the desired scheduling policy. A TB may occupy a different number of RBs, depending on the Modulation and Coding Scheme (MCS) chosen by the eNB. The latter is selected based on the CQI reported by the UE, which is computed by the UE using proprietary algorithms, and corresponds to the Signal to Interference plus Noise Ratio (SINR) perceived by the latter, on a scale from 0 (i.e., very poor) to 15 (i.e., optimal). Each CQI corresponds to a particular MCS, which in turn determines the number of bits that one RB can carry. Hereafter, we will often use the term CQI to refer to the (one and only) MCS that is determined by the former, trading a little accuracy for conciseness.
In the downlink (DL), the eNB transmits the TB to the destination UE on the allocated RBs. In the uplink (UL), the eNB issues transmission grants to UEs, specifying which RBs and which MCS each UE can use. In the UL, UEs need means to signal to the eNB that they have backlog. This is done either in band, by transmitting a Buffer Status Report (BSR) when scheduled, or out of band, by starting a Random ACcess (RAC) procedure, to which the eNB reacts by issuing transmission grants in a future TTI. RAC requests from different UEs may collide at the eNB. To mitigate these collisions, each UE selects one of 64 preambles at random, and only RAC requests with the same preamble collide. After a RAC request, a UE sets a timer: if the timer expires without the eNB having sent a grant, the UE waits for a backoff period and reiterates the request.
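The effect of the 64 preambles on collision likelihood can be checked with a birthday-problem computation. The sketch below (our own illustration, not part of the standard) computes the probability that at least two of n simultaneous RAC requests pick the same preamble:

```python
def rac_collision_prob(n: int, preambles: int = 64) -> float:
    """Probability that at least two of n simultaneous RAC requests
    choose the same preamble, assuming uniform random selection."""
    p_no_collision = 1.0
    for i in range(n):
        p_no_collision *= (preambles - i) / preambles
    return 1.0 - p_no_collision

for n in (2, 4, 8):
    print(n, round(rac_collision_prob(n), 3))
```

For two simultaneous requests the collision probability is just 1/64 (about 1.6%), and it stays below 10% even with four simultaneous requests, which supports the claim that RAC collisions are infrequent.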
The 3rd Generation Partnership Project (3GPP) has standardized Network-controlled D2D communications for LTE-A in release 12 [
11]. These are point-to-multipoint (or one-to-many) communications having proximate UEs as the endpoints, i.e., without the need to use a two-hop path having the eNB as a relay. A D2D link is also called sidelink (SL). The SL is often allocated in the UL spectrum in a Frequency Division Duplex (FDD) system, since the latter can be expected to be less loaded than the DL one, due to the well-known traffic asymmetry [
9]. Under this hypothesis, D2D-enabled UEs must be equipped with a Single-Carrier Frequency Division Multiple Access (SC-FDMA) receiver [
18]. The phrase network-controlled hints at the fact that the eNB is still in control of resource allocation on the SL, i.e., it decides which UE can use which resources. Two schemes have been envisaged to do this: a Scheduled Resource Allocation (SRA), and an Autonomous Resource Selection (ARS). SRA is an on-demand scheme, similar to resource allocation in the UL for standard communications: the UE must send a RAC request to the eNB, which grants enough space for it to send its BSR. Then, the eNB schedules SL resources accordingly and issues the grant to the UE for D2D communications, as shown in
Figure 2a. On the other hand, in ARS the eNB configures a static resource pool, e.g., N RBs every T TTIs, and UEs can draw from it without any signaling. With reference to
Figure 2b, the UE has new data to transmit at time t0, but it needs to wait for the next eligible TTI, i.e., at time t1.
. If more than one UE selects the same resources, then collisions will ensue. Note that P2MP D2D transmissions are not acknowledged, hence the sender cannot know which neighboring UEs received a message, and H-ARQ (Hybrid Automatic Repeat reQuest) is disabled. This is because (N)ACKs should be sent on a dedicated control channel to the sender, but dimensioning the latter would be impossible: in fact, with P2MP D2D transmission, there is no way to know in advance how many and which UEs will actually receive the message, or were meant to in the first place. P2MP D2D transmissions use UM RLC.
3. Related Work
Multihop D2D communications in LTE-A networks have been studied in several works, e.g., [
12,
13,
14,
15,
16,
17]. In [
12], the authors consider multihop D2D communications as a means to extend network coverage and propose a resource allocation strategy to optimize throughput along the multihop paths. The study is restricted to two-hop communications, where one UE is identified as the relay node for one pair of transmitting and receiving UEs, and each hop is a unicast point-to-point (P2P) D2D transmission. The work in [
13] proposes a theoretical formulation for computing the outage probability of multihop communications. Also in this case, P2P D2D transmissions are considered. An opportunistic multihop networking scheme for Machine-type Communications is presented in [
14]. UEs exploit the Routing Protocol for Low-Power and Lossy Networks (RPL) used in Wireless Sensor Networks (WSNs) to compute the best route toward a given destination. In [
15], game theory is applied to find the best multihop path for uploading content from one UE to the eNB. Both [
14,
15] differ from our work as they deal with the problem of delivering messages toward a given destination, instead of disseminating them to all the UEs within a given target area. Moreover, P2MP communications are not considered. In [
16], P2P D2D communications are considered in order to enhance the evolved Multimedia Broadcast and Multicast Services (eMBMS) provided by LTE-A networks. In this case, the eNB transmits its multimedia content to a subset of UEs and the latter exploit D2D links to forward the data to UEs with poor channel conditions in the downlink, i.e., cell-edge UEs. A similar problem is tackled in [
17], where UEs receiving data from the eNB use one P2MP D2D transmission to distribute the data to neighboring UEs. The above paper focuses on finding the best subset of relay UEs so that the total power consumption is minimized and the rate requirements of all the UEs are satisfied. None of the above works addresses the problem of disseminating UE-generated content toward all UEs within a geographical neighborhood.
Centralized resource scheduling is sometimes assumed in Wireless Mesh Networks (WMNs), although far less often than distributed resource scheduling (see, e.g., [
19,
20]). In this context, our broadcasting problem is superficially similar to that of channel assignment and/or link scheduling in WMNs. However, the assumptions are quite different from those made in LTE-A, since nodes in a WMN are usually equipped with few radios, which can be tuned to a larger number of channels. In LTE-A, all UEs have as many “radios” and “channels” as the number of RBs, which is in the order of several tens. More importantly, RBs can be allocated dynamically to UEs, whereas the algorithms presented in the literature often assume periodic transmissions and long-term, semi-static resource allocations. Moreover, only unicast P2P transmissions are considered. For these reasons, the broadcasting problem considered in this paper cannot be solved using the above algorithms.
Broadcast diffusion problems have been addressed in the context of mobile ad-hoc networks (MANETs) (e.g., [
21,
22]), especially to support the dissemination of routing alerts or for gossiping applications [
23]. Unlike LTE-A, where resources are centrally scheduled by the eNB on demand, the above networks are infrastructureless and have distributed resource allocation. The work in [
24] reviews and classifies the broadcasting methods in MANETs, focusing on how they try to limit collisions, the latter being the key issue in an infrastructureless network. For example, [
25] makes assumptions similar to this work’s (no knowledge of the underlying topology, fixed transmission range), although for a different technology, and proposes a method for limiting the number of broadcast relays, and thus of collisions, by suppressing node transmissions based on an adaptive function of the number of received copies of the same message and of the number of the node’s neighbors. This function can then be tuned to trade user reachability for broadcast latency. We show in
Section 5 that the latency of MDB is in line with that of [
25], but the delivery ratios are higher in similar scenarios. One of the main purposes of this paper is in fact to show that MDB can leverage LTE’s centralized scheduling. The combination of centralized scheduling and distributed transmissions is in fact unique to D2D-enabled LTE-A. Note that the ARS mode, described in
Section 2, does instead allow unscheduled, collision-prone medium access, similar to what a MANET would do. In
Section 5, we will show that such collisions actually hamper the performance.
4. Multihop D2D Broadcasting
In the following, we consider an LTE-A system composed of several cells, where UEs are D2D-enabled. UEs run applications that may generate messages (e.g., vehicular collision alerts) destined to all other UEs running the same application, within an arbitrary target area. Our problem is to reach as many interested UEs in the target area as possible, using only P2MP D2D transmissions, relayed by UEs themselves, using as few resources as possible. The system model is shown in
Figure 3, where the shaded UE originates a message that has to be delivered to all the UEs within the circle. The solid arrows represent the first P2MP D2D transmission, and the dashed ones represent transmissions relayed by UEs in the first-hop neighborhood of the originating UE. A UE that perceives collisions in the same time/frequency resources will still attempt to decode the message received with the strongest power, i.e., it will exploit the so-called capture effect, typical of wireless networks [
26].
In multihop D2D broadcast, the eNB does not participate in data plane transmissions, i.e., it does not send data packets. Data-plane transmissions are instead performed by the UEs themselves, on behalf of the applications running on them. However, the eNB still controls the resource allocation, hence can affect the performance of the broadcast. We only assume that the eNB allocates resources for generic D2D transmissions using standard-compliant means (to be discussed later in this section), unaware of the fact that multihop relaying is going on for D2D transmissions, or of specific application requirements (e.g., deadlines, target areas, etc.). In other words, we assume minimal, standard-compliant support from the infrastructure.
Multihop D2D broadcasting requires that applications decide which UEs to target and how to relay messages, whereas the LTE-A network allocates resources for D2D transmissions to allow multihop relaying. This implicitly assumes a trusted environment, where UEs behave cooperatively. Security is notoriously difficult to enforce in broadcast networks, and we refer the interested reader to [
27] for a discussion of security issues in D2D communications in particular. Regarding cooperation, UEs may be inclined to behave selfishly to save battery or to avoid increasing traffic volumes in pay-per-use plans. On one hand, suitable incentives and reputation-based schemes, such as those discussed in [
28,
29], could mitigate the problem. On the other hand, MDB services are supposed to be used also by embedded applications (e.g., application software running on cars), which have access to an energy source (e.g., the vehicle’s battery) and are not under the control of the end user (i.e., the car owner or pilot). In this last case, manufacturers would clearly benefit from coding cooperative behaviors in their embedded software. Hereafter, we assume that the UEs running MDB behave cooperatively. In
Section 5 we evaluate the impact of selfish users on the performance.
Hereafter, we first discuss how UE applications should be designed in order to support broadcasting effectively, and then move on to discussing resource allocation policies in the network.
4.1. Broadcast Management within the UE
The two problems that UE applications should solve are: (i) how to identify the set of potential recipients; and (ii) when to relay D2D communications. The first problem boils down to identifying all the UEs running the same application in a certain geographical area. UEs running the same application can register to a reserved multicast IP address. This is relatively easy to do with IPv6 (Internet Protocol version 6), where the multicast address format is flexible. As far as defining the target area is concerned, we argue that the area depends on both the network scenario and the application: in a vehicular use case, for instance, vehicle collision alerts should reach vehicles within a radius of a few hundred meters, whereas traffic notifications should probably travel larger distances, allowing drivers to route around congested areas. This means that the application message should contain enough information to allow a recipient UE to understand whether or not it should relay it. The information regarding the target area should then be embedded in the application-level message. A simple, but coarse, approach to do this is via a Time-to-live (TTL) field: the source UE sets the TTL in the application message to a desired maximum number of hops. Each relaying UE, then, decreases that field by one, and relays the message only if TTL > 0. While this is relatively simple and economical in space (an 8-bit field should be enough for most purposes), the downside is that the source UE can exert little control over the area covered by the broadcast, since the latter ends up depending on both radio parameters (such as the UEs’ CQI and their transmission power) and network topology (i.e., the position and density of UEs). The latter, in turn, is unpredictable and changes over time, so that any default value runs the risk of being too small or too large.
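The TTL-based relay decision can be sketched in a few lines; the message layout and names below are hypothetical, for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BroadcastMsg:
    msg_id: int
    ttl: int        # remaining hops; an 8-bit field in the application message
    payload: bytes

def maybe_relay(msg: BroadcastMsg, seen: set) -> Optional[BroadcastMsg]:
    """Return the message to relay, or None if it must be dropped."""
    if msg.msg_id in seen:       # duplicate: never relay the same message twice
        return None
    seen.add(msg.msg_id)
    if msg.ttl <= 1:             # TTL exhausted at this hop
        return None
    return BroadcastMsg(msg.msg_id, msg.ttl - 1, msg.payload)
```

Note that the decision uses only information carried in the message itself, which is precisely why the resulting coverage depends on topology rather than on a geographical boundary.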
The alternative is to code the target area within the message, by inserting the originating UE’s coordinates and the boundaries of the target area. Geographic coordinates can be taken from GPS positioning, or from geolocation services co-located with the network (e.g., using MEC solutions [
8]). Geographic coordinates can be represented by two 32-bit floating points, indicating latitude and longitude with enough precision. A simple way to constrain the target area is to encode a maximum target radius, thus making it circular. Assuming that the target radius is represented in meters, a 16-bit integer should be large enough for most purposes. Encoding the originator’s coordinates and the target radius allows one to define the target area more precisely, making it independent of the UEs’ density and location. This comes at the cost of using more space in the message (10 bytes overall instead of one). Increasing the message size, in turn, entails consuming more network resources for transmission. Obviously, more advanced definitions of target areas can also be envisaged, at the cost of further increasing the message payload. With a geographical representation of the broadcast domain, receiving UEs can then check whether their own position falls within the target area before relaying the message. This can be done by using simple floating-point arithmetic, i.e., by computing the distance from the originator and checking whether it is smaller than the target radius included in the message. Given the coordinates of two points A and B (specified in latitude φ and longitude λ), the Haversine formula [
30] is used to compute the shortest distance over the earth’s surface, which is:

d(A, B) = 2R · arcsin( √( sin²((φ_B − φ_A)/2) + cos φ_A · cos φ_B · sin²((λ_B − λ_A)/2) ) )

where R is the earth’s radius.
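The in-area check then amounts to evaluating this distance against the target radius; a direct Python transcription of the formula (function names are ours):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat_a, lon_a, lat_b, lon_b):
    """Great-circle distance between two points, in kilometers."""
    phi_a, phi_b = radians(lat_a), radians(lat_b)
    dphi = radians(lat_b - lat_a)
    dlmb = radians(lon_b - lon_a)
    h = sin(dphi / 2) ** 2 + cos(phi_a) * cos(phi_b) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def in_target_area(my_lat, my_lon, orig_lat, orig_lon, radius_m):
    """True if this UE lies within the target radius of the originator."""
    return haversine_km(my_lat, my_lon, orig_lat, orig_lon) * 1000.0 <= radius_m

# One degree of latitude is roughly 111 km:
print(round(haversine_km(0.0, 0.0, 1.0, 0.0), 1))  # ~111.2
```

The check is a handful of floating-point operations per received message, negligible compared to the cost of the transmission itself.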
Note that using a geographical representation (even one with infinite precision) still leaves a margin of uncertainty as to which UEs will receive the message: in fact, any UE that is inside the target radius will relay the message, hence all UEs within an annulus of one D2D transmission radius outside the edge of the target area may still receive it.
UE applications should also take care of relaying. In fact, it is at the application level that suitable algorithms can be run to make relaying efficient. A brute-force relaying, whereby UEs relay all received messages, would in fact quickly congest the network, since the same UE would receive the message from several neighbors, and relay them all unnecessarily. This would waste resources that could otherwise be used for other purposes. In order to make relaying efficient, a suppression mechanism can be used, e.g., the one of the Trickle algorithm [
31]. Trickle is used in WSNs to regulate the relaying of updates and/or routing information. In that context, Trickle runs on each node participating in the broadcasting: before sending a message, the node listens to the shared medium in order to figure out whether that information is redundant, i.e., whether enough neighboring nodes are already sharing it. If so, it abstains from transmission so as to avoid flooding the network. In Trickle, two parameters can be configured: the Trickle interval I and a number of duplicates K. A UE selects a random time t in [I/2, I], and counts the copies of the same message received therein. The UE only relays a message if it receives fewer than K copies of it. To sum up,
Figure 4 depicts the flowchart of the operations performed by a UE application on reception of a message, when the Trickle suppression algorithm is employed. First, the UE checks whether the incoming message has already been received. If so, it abstains from relaying and the procedure terminates. Otherwise, it computes the distance from the originating UE and compares it with the maximum target radius. If the UE is inside the target area, then Trickle operations are initiated: the Trickle timer is started and the duplicate counter k is set to 0.
Figure 5 shows that k increases on each duplicate reception within t. When the above timer expires, the UE relays the message if k < K.
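A minimal, event-driven sketch of this suppression logic (timer scheduling is abstracted away, and the class and attribute names are our own):

```python
class TrickleSuppressor:
    """Per-message duplicate counting with a relay decision at timer expiry."""

    def __init__(self, k_max: int):
        self.k_max = k_max       # duplicates tolerated before suppressing
        self.counters = {}       # msg_id -> duplicates heard in the window

    def on_receive(self, msg_id) -> bool:
        """Return True if this is the first copy (i.e., start the timer)."""
        first = msg_id not in self.counters
        if first:
            self.counters[msg_id] = 0   # first copy: open the counting window
        else:
            self.counters[msg_id] += 1  # duplicate: bump the counter
        return first

    def on_timer_expiry(self, msg_id) -> bool:
        """Relay only if fewer than k_max duplicates were heard."""
        return self.counters.pop(msg_id, 0) < self.k_max
```

With this structure, a dense neighborhood (many duplicates heard) naturally suppresses redundant relays, while a sparse one lets the message through.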
Note that, in the absence of Trickle, UEs must be made to wait for a random time, chosen uniformly in [0, W], before attempting a relay. Parameter W can be tuned to trade collision probability for latency. In fact, P2MP D2D transmissions will reach several UEs simultaneously. In the absence of random delays, these UEs would attempt relaying at the same time, since LTE-A is slotted. This may lead to collisions, regardless of how the eNB allocates resources (an issue which is dealt with in the next subsection).
So far, we have assumed that a message is originated by one UE. However, messages are supposed to be generated as reactions to events (e.g., a vehicular collision), and the same event may be detected by multiple UEs, which might then initiate a broadcast quasi-simultaneously. In fact, whatever the allocation scheme in the network, there is a time window during which all UEs that want to start a broadcast will be unaware of others doing so, even if they are within D2D hearing range of one another. That time window is around 10 ms with SRA (i.e., the time it takes to complete a resource allocation handshake), and equal to the allocation period with ARS. What happens in this case depends on how the application handles the different broadcasts. A baseline solution is to do nothing. In that case, since the information included in the messages is not the same (e.g., the originators’ coordinates are different), the messages will be considered as different broadcasts by the Trickle instances running in the UEs’ applications, and will be relayed independently. As a result, multiple, independent broadcasts related to the same event will traverse the network, with a corresponding increase in the traffic load. On the other hand, the applications running at the UEs can easily be endowed with the necessary intelligence to associate two (or more) messages, possibly with different payloads, to the same event: for example, if the distance between the originators is below a threshold, the message type is the same, and the reception times are within a predefined window (which may be computed based on the Trickle window). In this case, merging can occur, i.e., the various broadcast messages are associated with the same Trickle instance, thus being perceived as duplicates of the same broadcast process.
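The event-association test just described can be written as follows; the thresholds, field names, and the planar distance used here are illustrative assumptions, not part of any standard:

```python
from math import hypot

def distance_m(p, q):
    """Planar distance in meters between two (x, y) points (illustrative)."""
    return hypot(p[0] - q[0], p[1] - q[1])

def same_event(msg_a, msg_b, dist_m_max=100.0, window_s=0.2):
    """Heuristically associate two broadcasts with the same physical event:
    same type, close in time, and close in space (thresholds are assumptions).
    msg_* are dicts with 'type', 'time' (seconds), and 'origin' coordinates."""
    return (msg_a["type"] == msg_b["type"]
            and abs(msg_a["time"] - msg_b["time"]) <= window_s
            and distance_m(msg_a["origin"], msg_b["origin"]) <= dist_m_max)
```

When `same_event` returns True, the application can feed both messages to the same Trickle instance, so that the second one counts as a duplicate rather than spawning a new broadcast.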
Note that MDB can also accommodate messages generated by entities other than UEs, e.g., nodes located in the Internet, the LTE core network, or—possibly—Mobile-edge Computing servers running applications on behalf of the UEs. In that case, the network can select one (or more) proxy originator UE(s) and send them the message from the serving eNB(s) using downlink transmissions. The receiving UE(s) can, in turn, initiate the broadcasting procedure as described above.
4.2. Resource Allocation in the Network
As discussed in
Section 2, the eNB controls resource allocation, and may use either SRA or ARS. We now compare the two approaches, highlighting their pros and cons in the context of multihop relaying, also taking into account that multicell relaying may be required.
As far as latency is concerned, using SRA requires each UE to undergo one RAC handshake per transmission. As shown in
Figure 2a, this handshake incurs a 10 ms delay in the best of cases, i.e., when the eNB issues grants immediately. If RAC collisions are experienced, or the eNB delays scheduling because the UL is congested, the per-hop delay may be even larger. On the other hand, with ARS, a UE can send a message as soon as a transmission opportunity becomes available, without going through a RAC/BSR handshake. Thus, with ARS the maximum scheduling delay is given by the resource allocation period. Using ARS, especially with small periods, allows faster access to the medium. However, this entails statically allocating a large share of resources to P2MP D2D transmissions, thus wasting resources when these are not required, and preventing standard UL communications from using them. Therefore, with ARS, latency is traded off against resource efficiency.
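The per-hop figures above yield a simple back-of-the-envelope model of the worst-case scheduling latency over a relay chain. This is a sketch under the stated assumptions (best-case 10 ms SRA handshake, no RAC collisions, uncongested UL; at most one ARS period of waiting per hop); the 20 ms default period is a hypothetical example value.

```python
def worst_case_latency_ms(hops: int, scheme: str,
                          ars_period_ms: float = 20.0) -> float:
    """Worst-case scheduling latency of a multihop broadcast.

    SRA: each hop pays ~10 ms for the RAC/BSR/grant handshake
         (best case: no RAC collisions, uncongested UL).
    ARS: each hop waits at most one allocation period for the next
         transmission opportunity.
    """
    if scheme == "SRA":
        return hops * 10.0
    if scheme == "ARS":
        return hops * ars_period_ms
    raise ValueError(f"unknown scheme: {scheme}")
```

The model makes the tradeoff explicit: with a 5 ms ARS period a five-hop chain is scheduled within 25 ms versus 50 ms under SRA, but only at the cost of statically reserving SL resources in every period.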
The two allocation schemes differ greatly regarding collisions. When using SRA, the only possible collisions are those of simultaneous RAC requests at the eNB. However, these are quite unlikely: the LTE-A standard requires UEs to select a preamble among 64 possible choices, and simultaneous RAC requests with different preambles do not collide. Furthermore, when a RAC request is not answered by the eNB (either because of a RAC collision or because the eNB does not have resources to spare), the UE simply sends it again after a backoff time. Thus, RAC collisions do delay the broadcast process, but they also desynchronize relaying UEs, which is a positive side effect. With SRA, data transmission on the SL is instead interference-free, since the eNB generally grants SL resources to one transmitting UE at a time. The only exception to that rule is when the eNB exploits a frequency reuse scheme (such as the one in [32]), in which case faraway, non-interfering UEs may be granted the same RBs simultaneously. However, this happens exactly because the eNB knows that they will not interfere with each other. If, instead, ARS allocation is used, UEs claim RBs on the SL for their own transmissions without central scheduling and without their neighbors knowing, hence the intended receivers face unpredictable interference. The latter can be mitigated by having the UEs select at random which RBs to use, and by dedicating more resources to SL transmissions, which decreases efficiency. Moreover, ARS allocation is periodic, hence it implicitly forces synchronization among groups of UEs: all UEs whose application requests a relay in the same ARS period will end up accessing the SL at the next ARS opportunity, increasing the likelihood of collisions. This would happen at each hop. Furthermore, since a sender does not know whether collisions have occurred, the only possible countermeasure to increase the reliability of a transmission would be to retransmit the same message more than once.
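The claim that simultaneous RAC requests are unlikely to collide can be quantified with the classical birthday-problem computation over the 64 preambles, assuming (as an illustration) that each contending UE picks its preamble uniformly at random:

```python
def p_no_rac_collision(n_ues: int, n_preambles: int = 64) -> float:
    """Probability that n_ues issuing simultaneous RAC requests all pick
    distinct preambles, each chosen uniformly at random among n_preambles.
    Requests carrying different preambles do not collide at the eNB."""
    p = 1.0
    for k in range(n_ues):
        p *= (n_preambles - k) / n_preambles
    return p
```

For example, with two contending UEs the requests avoid each other with probability 63/64; even with five simultaneous requesters, all five pick distinct preambles roughly 85% of the time, supporting the observation that RAC collisions mostly delay, rather than disrupt, the broadcast.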
As already discussed, a target area may include more than one cell, as shown in
Figure 6. This poses the problem of coordinated resource allocation among neighboring cells. In fact, if each cell allocates SL resources autonomously, cell-border UEs will be subject to interference from UL transmissions in the neighboring cells, hence they may be unable to receive P2MP D2D transmissions. This problem is likely to affect dense networks [33], where cells are smaller, more heavily.
If the network uses ARS allocation, coordination is fairly easy to achieve: all it takes is that neighboring cells use the same allocation pattern. If, instead, SRA is used, resources are allocated on demand, hence the eNBs must share allocation information over the X2 interface. For instance, an eNB may inform its neighbor(s) about which RBs will be allocated to a cell-border P2MP D2D transmission in a future TTI, so that the neighboring eNB(s) avoid allocating the same resources to UL or D2D transmissions in the vicinity of the cell border. This requires the sending eNB to plan scheduling on the SL (at least for cell-border UEs) with a lookahead of some TTIs, enough for the above message to reach its neighbor through the X2. With reference to
Figure 7, the eNB informs its neighbor that, in a future TTI, a grant for a cell-border P2MP D2D transmission will be scheduled on a given set of RBs. The receiving eNB marks the advertised RBs as occupied at the appointed TTI, and performs its usual scheduling. The lookahead mechanism can be expected to add a negligible delay to the broadcast diffusion, since the X2 connection is normally wired and low-delay.
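The bookkeeping at the receiving eNB can be sketched as a small reservation table. This is a toy model of the coordination described above, not the X2AP message format: the class and method names are hypothetical, and RBs are represented as plain integers.

```python
from collections import defaultdict


class SidelinkScheduler:
    """Toy model of SRA cell-border coordination over X2.

    An eNB that plans a cell-border P2MP D2D grant some TTIs ahead
    advertises the (TTI, RB set) pair to its neighbor, which then
    excludes those RBs from its own UL/D2D scheduling at that TTI."""

    def __init__(self):
        # TTI -> set of RBs marked occupied by neighbors' advertisements
        self.reserved = defaultdict(set)

    def x2_reserve(self, tti: int, rbs: set) -> None:
        """Handle a neighbor's lookahead advertisement."""
        self.reserved[tti] |= rbs

    def schedule(self, tti: int, requested_rbs: set) -> set:
        """Grant only RBs not reserved by a neighboring cell at this TTI."""
        return requested_rbs - self.reserved[tti]
```

Since the reservation only affects the advertised TTI, scheduling in all other TTIs proceeds unconstrained, which matches the intent that lookahead coordination should cost little in either delay or capacity.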
The eNBs should also select the MCS of P2MP transmissions. This choice should strike a tradeoff between two conflicting objectives, i.e., transmission range and resource consumption. In fact, selecting a more efficient MCS reduces the number of RBs required for a transmission, since more bits are packed in the same space. However, it also decreases the transmission range, since the distance at which the SINR is high enough to allow successful decoding decreases with the CQI. This implies that more hops will be required to cover a given target area. Conversely, selecting a more robust MCS requires fewer hops, but more RBs per transmission. Note that the eNB must choose the MCS only if it uses SRA allocation: in this case, in fact, the eNB sends D2D grants, which carry an indication of the transmission format. If, instead, ARS is used, UEs may select the MCS autonomously, at least in principle. In practice, we argue that the eNB should still make that choice, and possibly advertise it periodically using RRC procedures. In fact, the eNB is in a better position than individual UEs to assess the UE density and location, hence to select the most suitable cell-wide transmission format.
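The range-versus-RBs tradeoff can be made concrete with a rough coverage-cost calculation. The numbers below are purely illustrative (they are not measured LTE values): a robust MCS is assumed to reach farther per hop but to spend more RBs per transmission, and vice versa.

```python
import math


def coverage_cost(target_radius_m: float, tx_range_m: float,
                  rbs_per_tx: int) -> tuple[int, int]:
    """Rough MCS tradeoff for multihop coverage of a target area:
    number of hops needed to span the target radius, and total RBs
    consumed along one relay chain (one relay per hop assumed)."""
    hops = math.ceil(target_radius_m / tx_range_m)
    return hops, hops * rbs_per_tx
```

With hypothetical figures, a robust MCS reaching 500 m at 20 RBs per transmission covers a 1000 m radius in 2 hops and 40 RBs, whereas an efficient MCS reaching 250 m at 8 RBs takes 4 hops but only 32 RBs: fewer hops (hence lower delay) trade off against total resource consumption, which is exactly the choice the eNB is best placed to make cell-wide.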
6. Conclusions
In this paper we have presented a solution for message broadcasting in LTE-A, using multihop P2MP D2D transmissions. This solution relies on application-level intelligence on the UEs, and leverages standard D2D resource allocation schemes in the network. It allows UEs to specify the target area, without being constrained by cell boundaries.
We performed simulations in both a static and a vehicular environment, in a multi-cell network. Our results show that this type of broadcast is fast, taking 80–120 ms to cover a 1000 m target radius at the 95th percentile. Moreover, it is highly reliable, meaning that the percentage of UEs actually reached by the message is close to 100%. Last, but not least, it is cheap in terms of resource consumption. In fact, it does not occupy the DL frame (the SL being carved out of the UL spectrum); this means that no service disruption occurs in the DL, and that no additional power is consumed by the eNB to support this service. Moreover, the amount of UL resources consumed is quite limited, thanks to the possibility of frequency reuse and proximate transmissions with higher CQIs. The RBs consumed for a broadcast are, on average, less than one per UE, which makes MDB quite economical, and minimally disruptive of other services that an LTE network would need to carry simultaneously.
Further research on this topic, ongoing at the time of writing, includes at least two directions. The first is investigating a deeper involvement of the infrastructure, in particular of the eNBs, in the broadcast relaying process. If the eNB is aware that the sending applications are requesting a multihop transmission on the SL, it may allocate grants proactively to speed up the process. The second direction is to leverage network intelligence and network-wide information to characterize the target area. With reference to the vehicular case, the alert application could be made more efficient by being more selective as to which destination UEs it targets. For instance, if the message notifies that the originating vehicle is suddenly slamming on the brakes, the alert should reach the vehicles following it, and not those preceding it or across a block. This highlights the problem of building context-aware broadcast domains. On one hand, defining a more detailed area may occupy more space in the application-level message, which is something to consider carefully if the above-mentioned benefits are to be retained. On the other, we argue that a context-aware target area may not be definable by the vehicles themselves, since they may lack knowledge of the surrounding environment and of the positions of neighboring vehicles. Acquiring this knowledge by distributed means (i.e., inter-vehicle communications) may not be viable either, because it would require a non-negligible message exchange and time, whereas alert broadcasting should be accomplished in real time to meet strict deadline requirements. The emerging MEC paradigm can play an important role in this respect. With MEC, vehicles could acquire the intended geographical reach of a message by querying the corresponding service running at an application server located at the edge of the mobile network. The latter can leverage the location services provided by the network operator to determine which vehicles should receive the message, and define a target area on behalf of the originator. Low latency would be guaranteed by the proximity of the MEC server to the vehicles, and by a single client-server interaction.