Article

Integration of Wearables and Wireless Technologies to Improve the Interaction between Disabled Vulnerable Road Users and Self-Driving Cars

by Antonio Guerrero-Ibañez *, Ismael Amezcua-Valdovinos and Juan Contreras-Castillo
Faculty of Telematics, University of Colima, Colima 28040, Mexico
* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3587; https://doi.org/10.3390/electronics12173587
Submission received: 20 July 2023 / Revised: 18 August 2023 / Accepted: 23 August 2023 / Published: 25 August 2023

Abstract:
The auto industry is accelerating, and self-driving cars are becoming a reality. However, the acceptance of such cars will depend on their social and environmental integration into a road traffic ecosystem comprising vehicles, motorcycles, bicycles, and pedestrians. One of the most vulnerable groups within the road ecosystem is pedestrians. Assistive technology focuses on ensuring functional independence for people with disabilities. However, little effort has been devoted to exploring possible interaction mechanisms between pedestrians with disabilities and self-driving cars. This paper analyzes how self-driving cars and disabled pedestrians should interact in a traffic ecosystem supported by wearable devices for pedestrians to feel safer and more comfortable. We define the concept of an Assistive Self-driving Car (ASC). We describe a set of procedures to identify people with disabilities using an IEEE 802.11p-based device and a group of messages to express the intentions of disabled pedestrians to self-driving cars. This interaction provides disabled pedestrians with increased safety and confidence in performing tasks such as crossing the street. Finally, we discuss strategies for alerting disabled pedestrians to potential hazards within the road ecosystem.

1. Introduction

Vehicle automation toward full autonomy has become an increasingly important technology over the last decade to improve road safety, reduce pollution, optimize traffic flow, and reduce costs. Automated driving is defined as a series of levels describing which driving functions are implemented in the car, allowing it to perform tasks previously performed by the driver. The Society of Automotive Engineers (SAE) has defined six levels of automation in vehicles, ranging from level 0, where the driver fully performs the driving task, to level 5, where the car controls the driving task without human intervention. Figure 1, which is based on the content presented in [1], provides an overview of the different automation functions performed at each level. The automotive industry’s efforts over the past decade have focused on developing more intelligent vehicles with connected and self-driving capabilities, known as Connected Self-Driving Cars (CSC), which will contribute to increased safety for passengers and pedestrians, as well as reductions in road accidents, congestion, and air pollution, among other issues [2,3].
Some organizations, such as the United States Department of Transportation’s (USDOT) National Highway Traffic Safety Administration [4], the World Health Organization (WHO), and the European Union’s Intelligent Transportation System Directive [5] consider that within the road driving ecosystem there is a group of road users who can be regarded as less protected and who, in the event of a crash, may suffer severe injury or even lose their lives. This group is called the vulnerable road users (VRUs) and includes pedestrians, cyclists, motorcyclists, and users with transport devices.
VRUs lack physical protection, so when they are hit by a vehicle the injuries they suffer can be very serious. The risk factors behind most pedestrian crashes and injuries include speeding, impaired driving, unsafe infrastructure, poor pedestrian visibility, inadequate enforcement of traffic laws, the reduced walking speed of older adults, and pedestrian distraction, among others. ITS-based road safety applications have been developed and proven effective at improving user safety within the road ecosystem, but their deployment has been slow, so their benefits to society are not yet visible.
In [6], the authors define a classification with six categories to identify the different types of VRUs, including (i) distracted road users, (ii) road users inside the vehicle, (iii) special road users (older adults and children), (iv) users of transport devices, (v) animals and (vi) road users with disabilities. According to WHO statistics, approximately 1.35 million people die in road crashes annually, and more than half of all road deaths are of vulnerable road users [7]. WHO emphasizes that older adults, children, and disabled people are the most vulnerable in the road-driving ecosystem. WHO estimates that around 16% of the world’s population has a severe disability [8]. Data show that approximately 45 million people are blind, and 430 million have disabling hearing loss [9].
Pedestrians with disabilities, known as Disabled VRUs (D-VRUs), face serious problems that prevent them from moving independently within the road driving ecosystem. Several methods have been proposed to reduce the number of accidents involving pedestrians [10,11,12,13]. Generally, these proposals use onboard sensors such as cameras, radar, or LiDAR to identify pedestrians [14]. The main drawback of these methods is that they rely on line-of-sight (LoS) between pedestrians and vehicles and have limited coverage.
Solutions to the line-of-sight dependency problems have focused on using mobile devices of VRUs to exchange information with cars near their travel environment. These methods enable cars to detect VRUs even when they are out of the car’s line of sight [15,16,17,18].
For D-VRUs to feel safe and secure within the road driving ecosystem, they need a mechanism to interact with the car and ensure that both parties understand this interaction correctly. A two-way interaction mechanism for communication between a self-driving car and a disabled pedestrian needs to be defined. This mechanism must include features to detect and identify D-VRUs and must allow both the self-driving car and the D-VRU to express their intentions to each other.
In recent years, assistive technology (AT) has emerged, focusing on using technology to assist people with disabilities. Assistive technology is a technological device or software that enables people with disabilities to have an independent, healthy, and productive lifestyle and an easier integration into society [19].
Several interaction tasks, such as pedestrian detection and identification, motion prediction, behavior analysis, pedestrian location, car-to-pedestrian communication, and feedback, are required for successful two-way communication between the self-driving car and the D-VRU. Cooperative communication systems between D-VRUs and self-driving cars are required to facilitate the mobility of people with disabilities, reduce the likelihood of accidents, and increase the acceptance of such cars.
A D-VRU needs user equipment (UE) that obtains and exchanges contextual and motion information with the self-driving cars on the public road. This allows the self-driving car to identify the D-VRU and generates a self-driving car–D-VRU interaction. The use of contextual information improves the detection accuracy of disabled pedestrians.
In this paper, we analyze how integrating AT into the environment of D-VRUs and self-driving cars can contribute to increased safety, security, and confidence of D-VRUs when crossing streets. The proposal integrates assistive devices worn by D-VRUs with assistive technology incorporated into self-driving vehicles, creating the Assistive Self-Driving Car concept. An Assistive Self-Driving Car (ASC) has four main components. The first component is a set of sensors (cameras, radar, and LiDAR, among others) that allow it to perform the driving task automatically, without the need for human intervention. The second component is a wireless communication system based on 802.11p that facilitates the exchange of messages between the self-driving car and pedestrians with disabilities. The third component is software that enables the car to recognize hand gestures to interpret the intentions of pedestrians with disabilities. Finally, the last component is a set of interfaces (audio and visual) that facilitate interaction with the disabled person and communicate the car’s intended actions to the pedestrian.
In this proposed system, pedestrians with disabilities wear a handheld device with built-in 802.11p technology, and self-driving cars are equipped with 802.11p communications technology. To illustrate how our proposal works, a specific assistive ITS scenario is presented as an example: the self-driving car–disabled pedestrian interaction in a crosswalk environment. In this scenario, the self-driving car can locate pedestrians with disabilities, identify the specific type of disability, alert the other cars traveling behind it, and provide adaptive interaction according to the disability.
This article is part of a larger project integrating machine learning models to detect D-VRUs and identify the intentions of people with disabilities. This work focuses on the communication protocol that enables D-VRUs to communicate with self-driving cars, reporting their presence and the nature of their disability and allowing detection even in situations of total occlusion. The proposal defines a set of procedures for identifying D-VRUs using a device based on 802.11p wireless technology. In addition, a group of messages is defined and exchanged between all the elements that comprise the proposed architecture, which communicates the location of the D-VRU and their disability, enabling the self-driving car to respond through the interface that best suits the limitations of the D-VRU and giving the pedestrian greater safety when moving around the road environment.
This proposal brings several benefits to the road environment, including early detection of disabled people in the road ecosystem, which could reduce road accidents. Cars could maneuver to assist the disabled person to cross the road. At the same time, other surrounding vehicles would be alerted of the disabled pedestrian nearby.
In practice, this proposal could be implemented by government agencies, which could provide the device, certify the nature of the disability, and record non-personal information on the device, thus preserving user privacy. Alternatively, a vehicle, even a non-autonomous one, could use an electronic component equipped with 802.11p technology to alert the driver of a disabled person nearby through some mechanism (visual or auditory).
The rest of the paper is structured as follows: Section 2 presents work related to assistive technology’s use for pedestrian detection. Section 3 discusses our proposed interaction solution between self-driving vehicles and disabled pedestrians. Section 4 is a detailed description of the proposal evaluation process. The results and their discussion are presented in Section 5. We close this article with the conclusions of the work.

2. Related Work

In the self-driving environment, pedestrian detection is one of the most important tasks for maintaining the safety of people moving around. In [20], the authors explain that pedestrian detection has focused on three types of methods: (i) handcrafted approaches [21,22,23], (ii) applying self-learning algorithms [24,25,26], and (iii) hybrid methods [27,28,29]. These methods define three essential stages: proposal generation, classification (and regression), and post-processing. Proposal generation extracts a set of pedestrian-representing objects from an input image. Some of the methods used for proposal generation are the sliding-window method [30,31,32], the objectness method [33,34,35], and the region proposal network [36,37,38]. The proposal classification stage assigns a positive or negative class to each candidate proposal. The assignment is based on the features extracted from each candidate proposal. Some algorithms use shallow classifiers such as SVM [39,40,41] or Boosting [42,43,44]. Others integrate a framework that performs the classifying and extracting functions [25,45,46]. These algorithms add a regression method that runs together with the classification function to improve the location quality of the bounding boxes [47,48,49]. The post-processing stage focuses on refining the detection and avoiding duplicate or occlusion problems. Heuristic-based [50,51,52] or learning-based [53,54,55] methods eliminate duplicate boxes by selecting the best bounding box.
Although current pedestrian detection systems have shown promising results, they face many challenges. For example, existing systems do not classify pedestrians by condition; self-driving cars therefore need to detect people with disabilities, identify the disability, and react and interact with the pedestrian in the most appropriate way according to the detected condition. Moreover, although these methods have proven very efficient for pedestrian detection in general, if an obstacle blocks the line of sight between the object to be detected and the sensors, the sensors are no better than the human eye. Cooperation through communication is an alternative solution to overcome these limitations of sensor-based mechanisms. Thanks to the exchange of contextual information between all the actors in the road environment, it becomes possible to detect the presence of each element and thus identify risk situations that threaten the integrity of all users.

Vehicular Communication

Vehicular networks enable applications related to road safety, passenger infotainment, traffic optimization, and others. Among these applications, road safety to reduce traffic accidents is one of today’s most urgent needs [56]. Over the last decade, vehicular communication has become a growing technology. This technology establishes communication between all the actors in the vehicular environment using communication technologies such as IEEE 802.11p [57], Visible Light Communication (VLC) [58], or Long-Term Evolution (LTE) [59]. IEEE 802.11p, in particular, has gained momentum because it focuses on local communication. Therefore, we consider 802.11p technology as our communication base in this study.
The vehicular ad hoc network (VANET) allows vehicles to communicate with each other (vehicle-to-vehicle or V2V for short) and with roadside units (V2R) to support safety applications such as traffic signal violation warnings, lane change warnings, highway merge assistance, and cooperative forward collision avoidance, among others. Such applications typically broadcast periodic or event-driven messages to all surrounding vehicles and units. Periodic messages are sent regularly, while event-driven notifications are triggered by hard braking, detection of hazardous road conditions, etc. Vehicle–pedestrian communication has focused on three specific issues: safety, pedestrian movement, and technology performance analysis.
In work related to pedestrian motion detection, proposals have been made that use Vehicle-to-Pedestrian (V2P) and Pedestrian-to-Infrastructure (P2I) communication to improve the safety of road users [60,61]. In addition, efforts have been made in the area of vehicle-to-pedestrian communication to incorporate mechanisms for the prevention of traffic accidents [62,63]. Finally, efforts have been made to evaluate the performance of the 802.11p technology used for V2P and P2I communication. Technology has been compared and analyzed under line-of-sight (LOS) and non-line-of-sight (NLOS) channel conditions [64]. Evaluations of 802.11p technology have been conducted, comparing it with C-V2X communication technology based on fourth- and fifth-generation mobile communication standards (LTE and 5G NR) [65].

3. Materials and Methods

In this section, we first explain the general application scenario of our proposal. Subsequently, we describe the technical aspects of the technology used, and finally, we describe our proposal in detail.

3.1. Scenario Description

The overall scenario to illustrate this proposal is shown in Figure 2. The scenario presents a person with a disability who is walking towards a pedestrian crossing. The person with a disability is wearing a device equipped with 802.11p communication capability (WPD—Wearable Personal Device). This device could be implemented as a wristband or a pendant-type device. It transmits information at regular 2 s intervals over a range of approximately 500 m. We propose a WPD that includes a DSRC radio with a 10 MHz bandwidth; an operation frequency of 5.9 GHz; and BPSK/QPSK, 16-QAM, or 64-QAM modulation schemes, with a range of up to 450 m [66]. Table 1 shows the parameters of the IEEE 802.11p physical layer [67].
An onboard unit (OBU) with 802.11p technology is installed in the vehicles. The OBU is primarily responsible for maintaining communication within the driving environment. It performs the function of listening for messages from either disabled pedestrians or the other cars in the driving environment. In addition, it can send messages to the cars to inform them if a disabled person is moving around the driving environment. The vehicles receive messages sent by a person with a disability at regular intervals to indicate that they are always present in the driving environment. Vehicles receiving the message retransmit it to inform other vehicles in the vicinity of the presence of a person with a disability. The information received by the autonomous car allows it to identify the type of disability of the pedestrian and to react most safely (e.g., stop or continue driving). It then indicates the action using the interface best suited to the pedestrian’s characteristics, such as visual communication through displays, LED light strips, or holograms, among others, or transmission through sound emission.

3.2. Technical Background

Wireless Access in Vehicular Environments (WAVE) is a wireless communication system that provides seamless and interoperable services for Intelligent Transportation Systems (ITS). The goal of WAVE is to provide communication mechanisms between vehicles and infrastructure, between vehicles and devices, and among vehicles themselves. WAVE consists of the IEEE 1609 protocol family and IEEE 802.11p [57]. The WAVE specification defines the architecture and services required to enable communications in vehicular environments. The IEEE 1609 protocol set contains the architecture, communication models, management structure, and security specifications to be used in conjunction with IEEE 802.11p. IEEE 1609 and IEEE 802.11p provide the upper and lower layers, respectively, that enable WAVE communications [68]. Specifically, IEEE 1609.1 enables WAVE application interoperability by defining the architectural components for WAVE. IEEE 1609.2 provides security mechanisms to prevent eavesdropping, spoofing, alteration, and replay attacks. IEEE 1609.3 defines addressing schemes and routing services. IEEE 1609.4 defines medium access control (MAC) enhancements to support WAVE, and IEEE 1609.12 specifies the object identifier assignments used in WAVE [69]. The WAVE standards support low-latency transaction environments between vehicles (V2V), infrastructure (V2I), and devices (V2D) for safety and mobility applications. Figure 3 shows the organization of the WAVE protocol stack.
The IEEE 1609.3 standard specifies two data plane protocols: IPv6 and the WAVE Short Message Protocol (WSMP). WSMP and IPv6 are separate protocols independent from each other: IPv6 frames are not transported over WSMP or vice versa. WSMP allows the control of physical characteristics such as channel number and transmission power. Applications must provide a Provider Service Identifier (PSID) and the MAC address of the destination device or group addresses.
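As an illustration of how an application hands a message to WSMP, the following sketch groups the parameters named above (PSID, destination MAC address, and per-message physical-layer settings); the field names and values are illustrative only and do not reproduce the exact service primitives of IEEE 1609.3.

    from dataclasses import dataclass

    @dataclass
    class WsmSendRequest:
        # Hedged sketch of a WSMP send request; names are illustrative,
        # not the exact primitives defined by IEEE 1609.3.
        psid: int            # Provider Service Identifier of the application
        dest_mac: str        # destination device or group MAC address
        channel_number: int  # e.g., 178 for the CCH, 172-182 for SCHs
        tx_power_dbm: float  # per-message transmit power
        payload: bytes       # WAVE Short Message body

    # Example: a D-VRU presence beacon broadcast on the control channel
    # (the PSID value is hypothetical).
    beacon = WsmSendRequest(psid=0x20, dest_mac="ff:ff:ff:ff:ff:ff",
                            channel_number=178, tx_power_dbm=20.0,
                            payload=b'{"Msg_Type": "Broadcast"}')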
On the other hand, the SAE J2734 Dedicated Short-Range Communications (DSRC) Message Set Dictionary defines a set of messages, frames, and elements for V2V and V2I safety exchanges. DSRC is a promising wireless standard to connect infrastructure with vehicles [70].
WAVE is the core part of DSRC. A DSRC network comprises two primary units: the RoadSide Unit (RSU) and the On-Board Unit (OBU). RSUs are stationary units connected to a core network and roaming vehicles. OBUs are network devices attached to several kinds of cars. As mentioned earlier, DSRC uses WiFi-based PHY and MAC protocols. IEEE 802.11p is the default standard that defines the characteristics of such layers to be used by DSRC. IEEE 802.11p allows high-mobility vehicular scenarios [71].
IEEE 802.11p uses 5.9 GHz radio transmission and covers at most 1 km in diameter. This radio spectrum is divided into operational channels, as Figure 4 shows. WAVE uses the Control Channel (CCH) in Channel 178 and at least one Service Channel (SCH) in Channel 172, 174, 176, 180, or 182 while connected to the network. Channel 184 is a High Availability and Low Latency (HALL) channel reserved for future use [70]. The CCH is used to exchange safety and control information, while the SCHs provide IP packet exchange.
The IEEE 802.11p PHY layer uses orthogonal frequency-division multiplexing (OFDM) with a 10 MHz channel bandwidth. The specification also doubles the symbol duration to reduce inter-symbol interference and to support a large delay spread. Thus, the peak data rate is 27 Mb/s. The MAC layer uses carrier-sense multiple access with collision avoidance (CSMA/CA) and the enhanced distributed channel access (EDCA) procedure to prioritize access for traffic categories such as emergency messages.
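The 27 Mb/s figure can be verified from the standard OFDM parameters of 802.11p (48 data subcarriers, 64-QAM with rate-3/4 coding, and the doubled 8 µs symbol duration), as the short calculation below shows.

    # Peak data rate of IEEE 802.11p in a 10 MHz channel.
    data_subcarriers = 48        # OFDM data subcarriers
    bits_per_subcarrier = 6      # 64-QAM
    coding_rate = 3 / 4          # highest coding rate
    symbol_duration_s = 8e-6     # doubled from the 4 us of 20 MHz 802.11a

    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate  # 216 bits
    peak_rate_bps = bits_per_symbol / symbol_duration_s
    print(peak_rate_bps / 1e6)   # 27.0 Mb/s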
Recently, a new IEEE 802.11bd standard, compatible with existing V2X communications, has been under development [72]. It aims to enhance the transmission modes, increase the MAC throughput, support relative speeds of up to 500 km/h, improve the sensitivity of the lowest IEEE 802.11p data rate for longer communication ranges, and define a positioning technique.

3.3. Description of the Proposal

In the following section, we explain the overall architecture of the proposal, although this work will only focus on the communication layer.

3.3.1. Global Architecture

The overall architecture of the proposal is shown in Figure 5. The overall architecture comprises four tiers. The sensing layer is responsible for gathering all the information around the car to obtain a detailed picture of all the elements in the driving environment and to take the appropriate actions to ensure safe driving. The communication layer is responsible for interaction with people with disabilities and other cars in the same area through the 802.11p protocol. A series of messages are exchanged to identify the presence of a disabled person and alert other cars of their presence. The intelligence layer is responsible for processing the packets, performing the appropriate calculations to determine the location and direction of movement of the disabled pedestrian, and choosing the actions to maintain a safe mobility environment for the disabled pedestrian. Finally, the interaction layer is responsible for selecting the appropriate interface to communicate the actions to the pedestrian with a disability when crossing the same area.
A relevant issue is the privacy of the shared information, a potentially sensitive topic. In this case, the user can decide what information to share anonymously with the different nodes, and only the strictly necessary information is sent (location, direction of movement, and type of disability).

3.3.2. Message Format

Figure 6 shows the types of messages defined for the information exchange process. For our proposal’s information exchange process, we defined two types of messages: broadcast and notification. Broadcast messages are those broadcast by the D-VRU device to the rest of the vehicles traveling in the environment of the disabled person. This type of message informs the cars of the presence of a disabled person in the surrounding area. The structure of the broadcast message comprises several fields, which are described in general terms as follows. The ‘Node_ID’ field is used to specify the identifier of the device sending the message. The ‘Msg_Type’ field specifies the type of message to be sent and can take two values: Broadcast or Notice. The ‘Node’ field specifies the type of node sending the message. The ‘Disab_type’ field is used to identify the D-VRU’s disability type, e.g., ‘blindness’. Finally, the ‘Location’ field provides information on the coordinates of the device’s location.
Notification messages are sent by the vehicle that has detected the D-VRU to the other cars behind it to alert them to the presence of a person with a disability in the traffic area.
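For illustration, the sketch below shows one possible JSON encoding of the two message types using the fields described above; all concrete values (identifiers and coordinates) are hypothetical.

    import json

    # Broadcast message sent periodically by the D-VRU's wearable device.
    broadcast_msg = {
        "Node_ID": "WPD-0001",        # identifier of the sending device (hypothetical)
        "Msg_Type": "Broadcast",      # Broadcast or Notice
        "Node": "D-VRU",              # type of node sending the message
        "Disab_type": "blindness",    # disability reported by the device
        "Location": {"lat": 19.2433, "lon": -103.7247},  # illustrative coordinates
    }

    # Notification (Notice) message relayed by the vehicle that detected the D-VRU.
    notification_msg = {
        "Node_ID": "OBU-0042",
        "Msg_Type": "Notice",
        "Node": "Vehicle",
        "Disab_type": "blindness",
        "Location": {"lat": 19.2433, "lon": -103.7247},
    }

    payload = json.dumps(broadcast_msg)  # messages travel as JSON text (see Section 4.3)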

4. Evaluation of the Proposal

To evaluate the proposal presented in this paper, we carried out a simulation test of the model in which we analyzed the vehicles’ speed of movement and their reaction upon receiving the messages sent by the D-VRUs.

4.1. Global Evaluation Scenario

We selected an area in Colima where people with disabilities frequently travel. Figure 7 shows the selected region map. The street shown on the map, Motolinía, is commonly used by people with different disabilities because it leads to a disability association located along the route toward the city’s downtown area. However, the intersection of Pino Suarez Avenue with Motolinía St. represents a challenge for disabled pedestrians due to the large influx of vehicles such as motorcycles, metropolitan buses, and cars that use these streets practically all day.

4.2. Simulation Scenario Description

It is vital to use simulation tools widely accepted by the vehicular research community to demonstrate the basic operation of the communication scheme we propose in this paper. One of the most prominent vehicular simulators is Veins (Vehicles in Network Simulation), which unites the OMNeT++ network simulator with the SUMO vehicular traffic framework to provide realistic scenarios [73].
Veins can be defined in four building blocks: the simulation of network protocols, the simulation of vehicular traffic, a bidirectional simulation of the previous blocks, and the definition of a standard interface to control vehicles during simulation time.

4.2.1. Network Simulation

For the network simulation, the Veins authors use OMNeT++, a discrete event-based simulator, along with the INET Framework [74], which provides modules for Internet protocols such as TCP, UDP, and IPv4, among others. Moreover, the latter framework allows modeling of mobile devices using wireless technologies such as IEEE 802.11 for connectivity. Integrating the two frameworks provides more accurate modeling of interference and shadowing caused by static and moving obstacles.
Specifically, we used Instant Veins, a ready-to-run version of Veins implemented as a virtual machine appliance for hypervisors such as Oracle VM VirtualBox, VMware Workstation Player, or any other software that supports the Open Virtualization Format (OVF). The appliance provides all the components needed to run simulations, including the Veins framework, the INET framework for communication protocols, SimuLTE for cellular communications support, Veins_INET for coupling Veins with INET, OMNeT++ for the network simulation, and SUMO for traffic simulation. To import the appliance into Oracle VM VirtualBox, we pressed the Import Virtual Appliance button and selected the downloaded Instant Veins image. We configured the virtual machine with 6 cores and 12 GB of RAM for better performance.

4.2.2. Traffic Microsimulation

Veins uses a microscopic modeling technique provided by the Eclipse SUMO (Simulation of Urban Mobility) framework [75], which allows importing real-world maps, different types of intersections, such as right-of-way or traffic light signaling, and various types of vehicles that can be configured according to a timetable to provide a more realistic simulation. SUMO is a modeling framework for intermodal transport systems, where elements such as road vehicles, public transport, and pedestrians can be simulated. In addition, the framework has several features to implement activities such as route search, visualization, network import, and even emissions calculation.
To run our simulation, we imported a previously created map as described in Section 4.2.5 (Scenario Map). The interaction between Veins and the traffic simulation is handled by the SUMO daemon, a script that opens a TCP socket for bidirectional communication between the traffic and network simulations, as described in Section 4.2.3 (Bidirectionally Coupled Simulation). After initializing SUMO, the OMNeT++ IDE is used to import the Veins project with the appropriate map and applications. We created a separate application for the vehicles and for the D-VRUs, as they have different behaviors. The applications are associated with each node type in the omnetpp.ini configuration file, as sketched below.
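The following excerpt is a hypothetical omnetpp.ini fragment illustrating this association; the application module names are placeholders invented for illustration, and the exact parameter keys depend on the Veins version and the NED definitions of the scenario.

    # Hypothetical omnetpp.ini excerpt (illustrative names only)
    *.node[*].applType = "VehicleRelayApp"   # assumed application for the cars
    *.node[0].applType = "DvruBeaconApp"     # assumed application for the D-VRU (node[0])
    *.node[0].appl.beaconInterval = 1s       # periodic D-VRU status broadcast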

4.2.3. Bidirectionally Coupled Simulation

Veins extends both frameworks, the network simulator OMNeT++ and the road traffic simulator SUMO, with modules providing a bidirectional communication channel to exchange commands and mobility traces via TCP connections. The message exchange is seamless, introducing dynamicity to the simulation as both simulators are discrete. With Veins, vehicles in SUMO are equipped with connectivity models provided by OMNeT++. Furthermore, developing new custom-made applications for each transport vehicle or pedestrian type with a device is possible.

4.2.4. Common Interfaces

Finally, besides the coupled simulation that allows SUMO vehicles to communicate with each other by using networking models, there is also a control interface called TraCI that uses a command and response scheme via TCP connections. TraCI enables control of the traffic dynamics during the simulation for vehicles, traffic lights, pedestrians, etc. By using all the modules mentioned above, Veins can provide realistic models with communication capabilities for vehicular network research.

4.2.5. Scenario Map

We use the OpenStreetMap (OSM) [76] platform to export the portion of the map from the city of Colima in México used in our study. We performed a series of tasks to adapt the information obtained from OSM to enable its convenient use in the Veins simulator.
Firstly, the file exported from OSM must be translated into a format supported by SUMO using the following command: netconvert --osm-files motolinia.osm.xml -o motolinia.net.xml. OSM files use geo-coordinates based on WGS84; such coordinates are translated to UTM for use inside Veins.
We also need to create polygons that SUMO understands as transportation routes for vehicles and pedestrians. We used sumo-gui to create polygons starting from the north side of the de los Maestros Av. intersection with Centenario St. in Figure 7, extending toward Balvino Dávalos St., a distance of approximately 700 m according to the map. The next polygon has the same length and trajectory but in the opposite direction to the first one, which creates a two-way avenue in SUMO. The last polygon covers a trajectory from Motolinía St. and Daniel Larios St. to the intersection of Motolinía and Centenario streets, with an approximate distance of 210 m. This last polygon is defined as a pedestrian-only trajectory.

4.3. Simulation Parameters

A realistic simulation scenario for Veins was designed and implemented to validate our proposed communication model between devices. Figure 8 shows such a scenario as a proof-of-concept, where a node representing a D-VRU (node [0] in our simulation) is equipped with an IEEE 802.11p-enabled device that periodically (each second) broadcasts a message indicating the type of disability in a JSON-formatted message. Vehicles receive and process these messages for decision-making and peer communication in their coverage area.
To prevent messages containing D-VRU information from propagating indefinitely between vehicles, each message is configured with a maximum of four hops to reduce the number of message exchanges, optimizing bandwidth and spectrum usage. Defining mechanisms for message propagation that optimize the number of hops in more complex scenarios is beyond the scope of this paper. However, future work will focus on investigating optimal propagation mechanisms in scenarios with a high density of vehicles and D-VRUs.
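As an illustration of the four-hop limit, the following sketch shows one way a vehicle could decide whether to rebroadcast a received D-VRU message; the helper name and the 'seq' and 'hops' fields are assumptions made for this example and are not part of the message format defined in Section 3.

    import json

    MAX_HOPS = 4   # propagation cap used in the simulation

    def relay_dvru_message(raw_msg: str, seen: set, my_node_id: str):
        """Return the JSON to rebroadcast, or None if the message must be dropped
        (duplicate already seen, or the hop limit has been reached)."""
        msg = json.loads(raw_msg)
        key = (msg["Node_ID"], msg.get("seq", 0))   # 'seq' is an assumed field
        if key in seen:
            return None                 # suppress duplicate rebroadcasts
        seen.add(key)
        hops = msg.get("hops", 0)       # 'hops' is an assumed field
        if hops >= MAX_HOPS:
            return None                 # stop propagation after four hops
        msg["hops"] = hops + 1
        msg["relayed_by"] = my_node_id
        return json.dumps(msg)          # forward to neighbouring vehicles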
The avenue that D-VRUs must cross carefully almost every day is a two-way, four-lane street with heavy vehicle traffic throughout the day. Once the different trajectory polygons are defined, we create the vehicle and pedestrian objects for the simulation. In our scenario, we specified eight cars. Four vehicles are spawned at the de los Maestros Av. and Centenario St. intersection with 5 s intervals between them, moving from north to south at a maximum speed of 32 m per second (m/s). The other four vehicles spawn with the same trajectory but in the opposite direction, from south to north (from de los Maestros Av. and Daniel Larios St. until they cross the intersection with Centenario St.), also at 5 s intervals. These intervals were defined so that the first vehicle could detect the pedestrian’s transmissions and set a flag as the lead vehicle. The lead car is responsible for transmitting messages about the pedestrian’s status to all other vehicles. When the other cars receive such a message, they reduce their speed according to their distance from the pedestrian. The first vehicle that reaches the D-VRU establishes a suitable communication scheme with the D-VRU using the most appropriate interface and then broadcasts this agreement to neighboring vehicles, which in turn, upon receiving the message, start to reduce their speed to allow the D-VRU to cross the street safely.
Lastly, the pedestrian is spawned at the start of the simulation, moving at an average speed of 2 m/s from the Motolinía St. and Daniel Larios St. intersection toward the Motolinía St. and Centenario St. intersection. From the moment the pedestrian is spawned, it broadcasts status messages every two seconds. The maximum vehicle and pedestrian speeds and the simulation time are the SUMO default values. The distance of the road trajectory was obtained by measuring intersections from OpenStreetMap information, together with the number of lanes of the avenue. The values for antenna type, transmit power, MAC layer, and packet size are also the Veins default values. In SUMO, the default maximum speed for pedestrians is 1.39 m/s (5 km/h) [72]. In [77], the authors explain that the movement speed of a disabled person ranges between 3.02 ft/s and 3.8 ft/s (equivalent to 0.92 m/s and 1.15 m/s). To simulate the disabled person’s displacement, we initially set the walking speed to 1 m/s; in our simulation, we then increased this speed to observe the reaction time of the vehicles when the pedestrian moves faster than usual. The vehicle and pedestrian definitions are sketched below.
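The following is a hypothetical SUMO route-file fragment illustrating how such vehicles and the pedestrian could be declared; the edge and route identifiers are placeholders, not the actual edge names of the imported Colima network.

    <routes>
        <!-- cars spawned every 5 s on the two-way avenue (placeholder edges) -->
        <vType id="car" maxSpeed="32"/>
        <route id="north_south" edges="edgeA edgeB"/>
        <vehicle id="veh0" type="car" route="north_south" depart="0"/>
        <vehicle id="veh1" type="car" route="north_south" depart="5"/>

        <!-- the D-VRU walking along the pedestrian-only polygon -->
        <vType id="dvru" vClass="pedestrian" maxSpeed="2"/>
        <person id="dvru0" type="dvru" depart="0">
            <walk from="edgeC" to="edgeD"/>
        </person>
    </routes>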
In the simulation, the D-VRU uses the service channel available in IEEE 802.11p to transmit messages to nearby vehicles. Whenever neighboring vehicles receive a message, they compute the distance between the D-VRU and themselves using the coordinates in the message. Based on this distance, the receiving vehicle starts to decrease its speed until it completely stops, allowing for the exchange of information between the D-VRU and the car so that agreement on the D-VRU’s intention can be established. The D-VRU’s intention identification process is part of the overall proposal, which essentially consists of communication between the VRU and the ASC using hand gestures; nonetheless, its inclusion is beyond the scope of the present work.
It should be noted that all vehicles are configured to stop at the intersection where the D-VRU is located, but only the first one to arrive communicates through other types of interfaces with the D-VRU. Whenever an agreement is reached, this vehicle broadcasts this agreement to neighboring vehicles so that they know what actions the D-VRU intends to perform. Table 2 shows the simulation parameters.

Detection and Stopping Algorithm

We implemented a simple algorithm to manage vehicle speeds based on the current distance between the vehicle and the pedestrian. As mentioned, the pedestrian broadcasts messages every second indicating its current position, trajectory, and speed. When a vehicle receives such a message, it must determine whether it is the leading vehicle, meaning that it is the first to receive the message from the pedestrian and that it has not received notification messages from other vehicles. If it is the leading vehicle, it starts to send notification messages to other vehicles indicating the presence of a pedestrian. Otherwise, the vehicle continues to receive status messages from the leading vehicle and computes the straight-line distance between itself and the pedestrian. If the distance is less than 100 m in our simulation scenario (the minimum distance for the vehicle to stop completely), the vehicle starts to decrease its speed so as to stop at a safe distance from the pedestrian. We know this approach is unsuitable for every scenario, but it allows us to simplify the implementation while proving our algorithm’s correctness. Once all vehicles correctly identify the pedestrian and slowly decrease their speed, the leading vehicle and the disabled pedestrian exchange messages using an appropriate interface dependent on the disability so that both can make decisions about the pedestrian crossing or staying at the edge of the intersection. In our simulation, the pedestrian always crosses the intersection; therefore, vehicles from both trajectories are commanded to stop at the corner, at a safe distance from the pedestrian, until the pedestrian has crossed completely.
While the vehicles wait for the pedestrian to cross the intersection, the lead vehicle continues to compute the distance to the pedestrian. When this distance increases, the pedestrian is safe from the vehicle, and the latter can start increasing its speed. Although the leading vehicle shares the information about the D-VRU, each vehicle uses the calculated distance to decide whether to start driving, and to increase or reduce speed. In our simulation, we use a threshold from 5 to 10 m for the car to start moving again.
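A minimal sketch of this speed rule is given below, assuming the thresholds quoted above (braking under 100 m, and a 5–10 m band before resuming); the function names are our own and the actual logic runs inside the OMNeT++/Veins applications.

    import math

    STOP_DISTANCE_M = 100.0   # begin braking when closer than this
    RESUME_MIN_M = 5.0        # lower bound of the resume threshold
    RESUME_MAX_M = 10.0       # upper bound of the resume threshold

    def straight_line_distance(a, b):
        """Straight-line distance in metres between two (x, y) positions."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def target_speed(distance_m, prev_distance_m, max_speed):
        """Hedged sketch of the per-vehicle speed decision described above."""
        moving_away = prev_distance_m is not None and distance_m > prev_distance_m
        if moving_away and distance_m > RESUME_MIN_M:
            return max_speed                      # pedestrian has crossed: speed up again
        if distance_m >= STOP_DISTANCE_M:
            return max_speed                      # still far away: keep cruising
        # closer than 100 m and approaching: slow down in proportion to distance
        return max_speed * max(distance_m - RESUME_MAX_M, 0.0) / STOP_DISTANCE_M

    # Example: a car 60 m from the D-VRU and closing drops to about half speed.
    print(target_speed(60.0, 70.0, 32.0))   # -> 16.0 m/s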

5. Results and Discussion

The main goal of our research is to provide a framework in which autonomous vehicles can use wireless and machine learning technologies to improve the driving experience. In this first approach, the speed of the pedestrian is not directly related to the outcome of the simulations, but rather provides the basis for the feasibility of our research.
The results shown in Figure 9 correspond to the speed of all vehicles in the simulation. N0 represents the D-VRU in the simulation, so it has a constant speed of 2 m/s until it stops at the intersection at 51 s into the simulation. The time that N0 spends at the intersection is used to communicate its intentions to the vehicles using the most appropriate interface. Note that the broadcast message sent by the D-VRU indicates the type of disability/condition with which the person has been diagnosed. Note also that communication by means other than IEEE 802.11p is not possible in Veins; therefore, we have focused only on the wireless communication part of the simulation. The D-VRU starts moving again at 2 m/s at the 56 s mark.
As mentioned above, vehicles are configured to spawn every five seconds. When vehicles receive messages from the D-VRU or from the lead vehicle of the platoon, they reduce their speed according to their distance from the D-VRU. The graph in Figure 9 shows that vehicles reduce their speed at second 51 and begin to accelerate at second 58, after the D-VRU has crossed the intersection. Note that vehicle N2 decreases its speed at the 18 s mark but increases its speed after the 21 s mark. This is because the calculated distance between the vehicle and the D-VRU increases as the vehicle approaches the intersection, so the algorithm determines that the D-VRU is moving away from the vehicle and does not pose a threat.
The simulation model is a work in progress. Future simulations will include more signaling for better decision-making. However, based on our preliminary results, we can conclude that it is feasible to equip a D-VRU with an IEEE 802.11p-based device to give them confidence in their daily mobility routines, allowing them to safely move around the city and cross avenues with high and moderate traffic. Furthermore, according to Minhas et al. [78], when a vehicle is self-driving and the driver is distracted, human drivers take an average of 3.15 s to put their hands back on the steering wheel and an average of 2.47 s to put their feet on the pedals. We believe that the mechanisms defined in this paper can significantly reduce this time, as the system’s reaction time is negligible compared with that of human drivers.

6. Conclusions

In this paper, we integrate assistive technology into a road environment. The goal is to improve the functional capabilities of people with special needs moving in such an environment. The proposal consists of a communication mechanism between people with special needs and self-driving cars. A message exchange process, considering the privacy of the information, allows the detection of the presence of a person with a disability and the identification of his or her type of disability.
The results obtained through simulation showed the capabilities of the proposal in detecting the person with a disability and locating him or her in the road environment. The pedestrian was located almost in real time, with a latency of approximately 50 ms, and a detection accuracy of 100% was achieved.
In future work, this information will be integrated into a more sophisticated system based on deep learning to complement its operation and achieve an interaction between the self-driving car and the disabled pedestrian that allows the car to identify the intentions of the disabled pedestrian using hand gesture signals. By identifying the pedestrian’s intentions, the self-driving car will be able to communicate the actions to be taken through the interface that best suits the limitations of the disabled pedestrian.
It is worth noting that this proposal could be implemented in road infrastructure, such as traffic lights. In this way, it could identify people with disabilities in the environment, adapt the sequence of lights according to their needs, and communicate with disabled pedestrians through displays or sounds.

Author Contributions

Conceptualization, A.G.-I., I.A.-V. and J.C.-C.; methodology, A.G.-I., I.A.-V. and J.C.-C.; software, A.G.-I., I.A.-V. and J.C.-C.; validation, A.G.-I., I.A.-V. and J.C.-C.; formal analysis, A.G.-I., I.A.-V. and J.C.-C.; investigation, A.G.-I., I.A.-V. and J.C.-C.; resources, A.G.-I., I.A.-V. and J.C.-C.; data curation, A.G.-I., I.A.-V. and J.C.-C.; writing—original draft preparation, A.G.-I., I.A.-V. and J.C.-C.; writing—review and editing, A.G.-I., I.A.-V. and J.C.-C.; visualization, A.G.-I., I.A.-V. and J.C.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Society of Automotive Engineers. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. 2014. Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 22 August 2023).
  2. NHTSA. Automated Vehicle for Safety; National Highway Traffic Safety Administration: Washington, DC, USA, 2021. Available online: https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety (accessed on 22 August 2023).
  3. NHTSA. Vehicle Manufactures, Automated Driving Systems; National Highway Traffic Safety Administration: Washington, DC, USA, 2021. Available online: https://www.nhtsa.gov/vehicle-manufacturers/automated-driving-systems (accessed on 22 August 2023).
  4. NHTSA. Comparing Demographic Trends in Vulnerable Road User Fatalities and the U.S. Population, 1980–2019; National Highway Traffic Safety Administration: Washington, DC, USA, 2021; pp. 1–11.
  5. European Commission. ITS & Vulnerable Road Users. 2015. Available online: https://transport.ec.europa.eu/transport-themes/intelligent-transport-systems/road/action-plan-and-directive/its-vulnerable-road-users_en (accessed on 22 August 2023).
  6. Reyes-Muñoz, A.; Guerrero-Ibáñez, J. Vulnerable Road Users and Connected Autonomous Vehicles Interaction: A Survey. Sensors 2022, 22, 4614. [Google Scholar] [CrossRef] [PubMed]
  7. World Health Organization. Global Status Report on Road Safety 2018. Available online: https://www.who.int/publications-detail-redirect/9789241565684 (accessed on 13 February 2023).
  8. World Health Organization. Disability. 2 December 2022. Available online: https://www.who.int/news-room/fact-sheets/detail/disability-and-health (accessed on 3 February 2023).
  9. World Health Organization. Deafness and Hearing Loss. Available online: https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss (accessed on 9 March 2023).
  10. Wu, T.-E.; Tsai, C.-C.; Guo, J.-I. LiDAR/camera sensor fusion technology for pedestrian detection. In Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15 December 2017; pp. 1675–1678. [Google Scholar] [CrossRef]
  11. Lin, B.-Z.; Lin, C.-C. Pedestrian detection by fusing 3D points and color images. In Proceedings of the 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 26–29 June 2016; pp. 1–5. [Google Scholar] [CrossRef]
  12. Lovas, T.; Barsi, Á. Pedestrian detection by profile laser scanning. In Proceedings of the 2015 International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), Budapest, Hungary, 3–5 June 2015; pp. 408–412. [Google Scholar] [CrossRef]
  13. Kim, J. Pedestrian Detection and Distance Estimation Using Thermal Camera in Night Time. In Proceedings of the 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 11–13 February 2019; pp. 463–466. [Google Scholar] [CrossRef]
  14. Bila, C.; Sivrikaya, F.; Khan, M.A.; Albayrak, S. Vehicles of the Future: A Survey of Research on Safety Issues. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1046–1065. [Google Scholar] [CrossRef]
  15. Gu, F.; Niu, J.; Jiang, L.; Liu, X.; Hancke, G.P. SafePath: Exploiting Ubiquitous Smartphones to Avoid Vehicle–Pedestrian Collision. IEEE Internet Things J. 2022, 9, 6763–6777. [Google Scholar] [CrossRef]
  16. Napolitano, A.; Cecchetti, G.; Giannone, F.; Ruscelli, A.L.; Civerchia, F.; Kondepu, K.; Valcarenghi, L.; Castoldi, P. Implementation of a MEC-based Vulnerable Road User Warning System. In Proceedings of the 2019 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE), Torino, Italy, 2–4 July 2019; pp. 1–6. [Google Scholar] [CrossRef]
  17. Fan, Y.; Liang, Q. An Improved Method for Detection of the Pedestrian Flow Based on RFID. In Proceedings of the 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), Guangzhou, China, 21–24 July 2017; pp. 69–72. [Google Scholar] [CrossRef]
  18. Llorca, D.F.; Quintero, R.; Parra, I.; Izquierdo, R.; Fernández, C.; Sotelo, M.A. Assistive Pedestrian Crossings by Means of Stereo Localization and RFID Anonymous Disability Identification. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015; pp. 1357–1362. [Google Scholar] [CrossRef]
  19. ATiA. What is AT? Assistive Technology Industry Association: Chicago, IL, USA, 2015; Available online: https://www.atia.org/home/at-resources/what-is-at/ (accessed on 19 November 2022).
  20. Cao, J.; Pang, Y.; Xie, J.; Khan, F.S.; Shao, L. From Handcrafted to Deep Features for Pedestrian Detection: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4913–4934. [Google Scholar] [CrossRef] [PubMed]
  21. Zhao, X.; He, Z.; Zhang, S.; Liang, D. Robust pedestrian detection in thermal infrared imagery using a shape distribution histogram feature and modified sparse representation classification. Pattern Recognit. 2015, 48, 1947–1960. [Google Scholar] [CrossRef]
  22. Cheng, Y.; Su, S.Z.; Li, S.Z. Combine histogram intersection kernel with linear kernel for pedestrian classification. In Proceedings of the IET International Conference on Information Science and Control Engineering 2012 (ICISCE 2012), Shenzhen, China, 7–9 December 2012; pp. 1–3. [Google Scholar] [CrossRef]
  23. Mao, L.; Tang, L. Pedestrian Detection Based on Gradient Direction Histogram. In Proceedings of the 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2022; pp. 939–943. [Google Scholar] [CrossRef]
  24. Liu, T.; Cheng, J.; Yang, M.; Du, X.; Luo, X.; Zhang, L. Pedestrian detection method based on self-learning. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; pp. 2161–2165. [Google Scholar] [CrossRef]
  25. Ahmed, Z.; Iniyavan, R.; Madhan Mohan, P. Enhanced Vulnerable Pedestrian Detection using Deep Learning. In Proceedings of the 2019 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 4–6 April 2019; pp. 971–974. [Google Scholar] [CrossRef]
  26. Wu, Y.; Chen, C.; Wang, B. Pedestrian Detection Based on Improved SSD Object Detection Algorithm. In Proceedings of the 2022 International Conference on Networking and Network Applications (NaNA), Urumqi, China, 3–5 December 2022; pp. 550–555. [Google Scholar] [CrossRef]
  27. Abbass, M.Y.; Kwon, K.-C.; Kim, N.; Abdelwahab, S.A.; Abd El-Samie, F.E.; Khalaf, A.A.M. Utilization of deep convolutional and handcrafted features for object tracking. Optik 2020, 218, 164926. [Google Scholar] [CrossRef]
  28. Tesema, F.B.; Wu, H.; Chen, M.; Lin, J.; Zhu, W.; Huang, K. Hybrid channel based pedestrian detection. Neurocomputing 2020, 389, 1–8. [Google Scholar] [CrossRef]
  29. Trichet, R.; Bremond, F. LBP Channels for Pedestrian Detection. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1066–1074. [Google Scholar] [CrossRef]
  30. Zhang, S.; Wang, X. Human detection and object tracking based on Histograms of Oriented Gradients. In Proceedings of the 2013 Ninth International Conference on Natural Computation (ICNC), Shenyang, China, 23–25 July 2013; pp. 1349–1353. [Google Scholar] [CrossRef]
  31. Surasak, T.; Takahiro, I.; Cheng, C.; Wang, C.; Sheng, P. Histogram of oriented gradients for human detection in video. In Proceedings of the 2018 5th International Conference on Business and Industrial Research (ICBIR), Bangkok, Thailand, 17–18 May 2018; pp. 172–176. [Google Scholar] [CrossRef]
  32. Sangeetha, D.; Deepa, P. Efficient Scale Invariant Human Detection Using Histogram of Oriented Gradients for IoT Services. In Proceedings of the 2017 30th International Conference on VLSI Design and 2017 16th International Conference on Embedded Systems (VLSID), Hyderabad, India, 7–11 January 2017; pp. 61–66. [Google Scholar] [CrossRef]
  33. Zhang, C.; Dai, B.; Jiang, H.; Shen, X.; Yao, Y. A moving target detection algorithm based on BING objectness and background estimation. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 10795–10800. [Google Scholar] [CrossRef]
  34. Chen, J.; Mei, F.; Ye, W.; Wang, H.; Shen, X.; Yao, Y. Fast algorithm for moving target detection. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 11217–11222. [Google Scholar] [CrossRef]
  35. Huang, L.; Ma, X.; Fang, F.; Zhou, B. Device Target Checking for Power Patrol Robot Based on Objectness Estimation. In Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway, 9–13 November 2020; pp. 73–78. [Google Scholar] [CrossRef]
Figure 1. Levels of vehicle driving automation defined by the Society of Automotive Engineers.
Figure 2. Representation of the global scenario for illustration of the proposal.
Figure 3. Representation of the WAVE reference model.
Figure 4. Representation of the IEEE 802.11p channel allocation.
Figure 5. Representation of the global architecture.
Figure 6. Representation of the types of messages defined for data exchange.
Figure 7. Simulation coverage area map.
Figure 8. Representation of the simulation scenario in Veins. The overlapping messages represent IEEE 802.11p broadcast frames sent by the D-VRU to neighboring vehicles.
Figure 9. Vehicle speed with respect to time.
Table 1. Summary of IEEE 802.11p physical layer parameters.

Parameter | Value
Data rate | 3, 4.5, 6, 9, 12, 18, 24, and 27 Mbps
Transmission bandwidth | 10 MHz
Modulation schemes | BPSK, QPSK, 16-QAM, and 64-QAM
Coding rate | 1/2, 2/3, and 3/4
Subcarriers | 52 (48 data + 4 pilot)
OFDM symbol duration | 8 μs
Guard interval | 1.6 μs
FFT period | 6.4 μs
Preamble duration | 32 μs
Subcarrier spacing | 0.15625 MHz
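
The figures in Table 1 are internally consistent, and the relationships between them can be checked directly. The Python sketch below is our illustration rather than part of the paper's tooling; it reproduces the symbol duration, subcarrier spacing, and nominal data rates from the standard 802.11a-derived OFDM numerology, assuming 48 of the 52 used subcarriers carry data.

```python
# Minimal sanity check of the IEEE 802.11p PHY figures listed in Table 1
# (assumes the 802.11a-derived numerology on a 10 MHz channel).

fft_period = 6.4e-6        # s, FFT (useful symbol) period from Table 1
guard_interval = 1.6e-6    # s, guard interval from Table 1
data_subcarriers = 48      # data subcarriers (52 used minus 4 pilots)

symbol_duration = fft_period + guard_interval   # expected: 8 us
subcarrier_spacing = 1 / fft_period             # expected: 0.15625 MHz

def data_rate(bits_per_subcarrier: int, coding_rate: float) -> float:
    """Nominal PHY data rate in bit/s for one modulation/coding pair."""
    return data_subcarriers * bits_per_subcarrier * coding_rate / symbol_duration

print(f"Symbol duration:    {symbol_duration * 1e6:.1f} us")
print(f"Subcarrier spacing: {subcarrier_spacing / 1e6:.5f} MHz")
print(f"BPSK 1/2:   {data_rate(1, 1/2) / 1e6:.1f} Mbps")   # 3.0
print(f"QPSK 1/2:   {data_rate(2, 1/2) / 1e6:.1f} Mbps")   # 6.0
print(f"64-QAM 3/4: {data_rate(6, 3/4) / 1e6:.1f} Mbps")   # 27.0
```

Running the sketch prints 3, 6, and 27 Mbps for BPSK 1/2, QPSK 1/2, and 64-QAM 3/4, matching the rates listed in the table; the remaining entries (4.5, 9, 12, 18, and 24 Mbps) follow from the other modulation and coding combinations.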
Table 2. Summary of simulation parameters.

Parameter | Value
Number of vehicles | 8
Number of pedestrians | 1
Maximum vehicle speed | 32 m/s
Maximum pedestrian speed | 2 m/s
Simulation time | 200 s
Length of road trajectory | 700 m
Number of road lanes | 4 (2 in each direction)
Antenna type | Omnidirectional
Transmission power | 15 dBm
MAC layer | IEEE 802.11p
Packet size | 1400 bytes
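
Combining the 1400-byte packet size of Table 2 with the physical layer figures of Table 1 gives a feel for the channel time consumed by each D-VRU broadcast. The sketch below is a rough illustrative calculation, not the actual simulation configuration: it assumes transmission at the 6 Mbps base rate and ignores MAC/LLC header overhead and inter-frame spacing.

```python
# Approximate on-air time of one broadcast frame: preamble + SIGNAL + DATA symbols.

PREAMBLE = 32e-6       # s, PLCP preamble duration (Table 1)
SIGNAL_FIELD = 8e-6    # s, one OFDM symbol carrying the SIGNAL field
SYMBOL = 8e-6          # s, OFDM symbol duration (Table 1)

def frame_airtime(payload_bytes: int, data_bits_per_symbol: int) -> float:
    """Airtime of one frame, ignoring MAC/LLC overhead and inter-frame spacing."""
    total_bits = 16 + 6 + 8 * payload_bytes            # SERVICE + tail + payload
    n_symbols = -(-total_bits // data_bits_per_symbol)  # ceiling division
    return PREAMBLE + SIGNAL_FIELD + n_symbols * SYMBOL

# 6 Mbps corresponds to QPSK 1/2, i.e., 48 data bits per OFDM symbol.
airtime = frame_airtime(1400, 48)
print(f"Airtime of a 1400-byte frame at 6 Mbps: {airtime * 1e3:.2f} ms")  # ~1.91 ms
```

At roughly 1.9 ms per frame, even a 10 Hz beaconing rate from the pedestrian device would occupy on the order of 2% of the channel time, leaving ample capacity for the vehicles' own safety messages.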