Review

When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges

1 Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
2 Department of Civil Engineering, The University of Texas at El Paso, El Paso, TX 79968, USA
3 Department of Civil and Environmental Engineering, University of Washington, Seattle, WA 98195, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2021, 11(20), 9680; https://doi.org/10.3390/app11209680
Submission received: 22 September 2021 / Revised: 12 October 2021 / Accepted: 14 October 2021 / Published: 17 October 2021
(This article belongs to the Special Issue Transportation Big Data and Its Applications)

Abstract

The widespread use of mobile devices and sensors has motivated data-driven applications that can leverage the power of big data to benefit many aspects of our daily life, such as health, transportation, the economy, and the environment. In the context of the smart city, intelligent transportation systems (ITS), as a main building block of modern cities, and edge computing (EC), as an emerging computing service that targets the limitations of cloud computing, have attracted increasing attention in the research community in recent years. It is widely believed that the application of EC in ITS will bring considerable benefits to transportation systems in terms of efficiency, safety, and sustainability. Despite the growing trend in ITS and EC research, a big gap in the existing literature is identified: the intersection between these two promising directions has been far from well explored. In this paper, we focus on a critical part of ITS, i.e., sensing, and conduct a review of the recent advances in ITS sensing and EC applications in this field. The key challenges in ITS sensing and future directions with the integration of edge computing are discussed.

1. Introduction

The data explosion has been posing unprecedented opportunities and challenges to our cities. To utilize big data to better allocate urban resources for improving quality of life and city management, the concept of the Smart City was introduced as an emerging topic to society and the research community. Cloud computing alone is not expected to fully support the increasing demand for data processing, and thus, the community has realized the need for a new form of computing, i.e., edge computing (EC). Edge computing processes sensor data closer to where the data are generated, thereby balancing the computing load and saving network resources. At the same time, edge computing has the potential for improved privacy protection, since not all raw data need to be transmitted to cloud datacenters.
Transportation is a key building block of our cities, and the concept of intelligent transportation systems (ITS) is a critical component of the Smart City. Research studies and engineering implementations of ITS have been attracting attention in recent years. Using the Web of Science, we surveyed the number of publications with the keywords intelligent transportation, intelligent vehicle, or smart transportation. As Figure 1 shows (the orange line), the number of publications on ITS increased almost five-fold from 2011 to 2019. We also searched for publications with the keywords edge computing, and the trend is similar to that of ITS (the blue line). However, when we searched the combined keywords, including edge computing + transportation and edge computing + vehicle, the related publications were very few in number: 21 publications in 2011 and 199 publications in 2019.
From these numbers alone, we might conclude that EC and ITS are two unrelated research fields. Careful investigation shows, however, that this is not the case at all. In several of the most highly cited survey papers on edge computing [1,2,3], the summarized key applications of edge computing include smart transportation, connected vehicles, wireless sensing, smart cities, traffic video analytics, and so on. These are core research topics in ITS. The authors envisioned that, with the wide spread of mobile phones, sensors, network cameras, connected cars, etc., cloud computing would no longer be suitable for many city-wide applications, while edge computing would be able to leverage the large amount of data these devices produce. On the other hand, we have observed more and more studies and articles on applying edge computing to ITS (e.g., [4,5,6,7,8,9]). Though still relatively few in number, these studies are innovative and show great potential to branch out into new ideas and solutions.
Therefore, the statistics in Figure 1 actually unveil a big gap in, and high demand for, future research on the combination of EC and ITS. In this paper, we survey recent advances in ITS, especially ITS sensing technologies; we then discuss the challenges in ITS sensing, how EC may help address them, and future research opportunities in applying EC to the area of ITS sensing. Note that EC could benefit not only ITS sensing but also other components of ITS, such as data pre-processing, traffic pattern analysis, and control strategies; however, in this paper, we mainly focus on EC's application in ITS sensing and present a detailed survey. The structure of this paper is displayed in Figure 2.

2. Intelligent Transportation Systems (ITS)

ITS is a combination of cutting-edge information and communication technologies for the advancement of traffic management. Examples include traffic signal control, smart parking management, electronic toll collection, variable speed limit, route optimization, and, more recently, connected and automated vehicles. Regardless of the specific application, ITS is typically composed of five major components: traffic sensing, data pre-processing, data pattern analysis, information communication, and control. Other components, such as traffic prediction, may also be necessary for certain tasks. In this section, we review the ITS components and their functions.

2.1. Sensing

Sensing is essentially the detection of certain types of signals in the real world and their conversion into readable data. Traffic sensors generate the data that support the analysis, prediction, and decision-making of intelligent transportation systems. There are various sensors for different data collection purposes and scenarios. The most common traffic sensors in today's roadway networks and transportation infrastructures include, but are not limited to, inductive loop detectors [10,11,12], magnetic sensors [13,14], cameras [15,16], infrared sensors [17], LiDAR sensors [18], acoustic sensors [19], Bluetooth [20], Wi-Fi [21], mobile phones [22], and probe vehicle sensors [23,24,25]. These sensors measure feature quantities of objects or scenarios in transportation systems, such as road users, traffic flow parameters, congestion, crashes, queue length at intersections, and automobile emissions.
For most sensor signals, while there is a lot of valuable information that can be mined from them, the conversion process is straightforward and can be completed based on simple rules. For example, loop detectors measure the change in inductance when vehicles pass over them for traffic volume and occupancy detection; Bluetooth sensors capture the radio communication signal with a device's unique identifier, i.e., the media access control (MAC) address, so that they can estimate the number of devices (usually associated with the number of road users) or travel time; acoustic sensors generate acoustic waves to detect the existence of objects at a certain location, without the ability to tell the object type. For some sensors, such as the camera and LiDAR, the conversion of the raw signals (i.e., digital images and 3D point clouds) into useful data can be quite complicated, and therefore advanced algorithms have been widely applied to the conversion of LiDAR and camera signals.
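To make the rule-based conversion concrete, the following sketch turns a loop detector's raw inductance samples into volume and occupancy with a simple threshold. It is a minimal illustration only; the baseline inductance, drop threshold, and toy signal are all assumed values, not parameters of any particular detector.

```python
import numpy as np

def loop_volume_occupancy(inductance, baseline, drop_threshold):
    """Rule-based conversion of a raw loop signal into volume and occupancy.

    inductance: 1-D sequence of inductance samples
    baseline: inductance of the empty loop (assumed calibrated)
    drop_threshold: inductance drop indicating a vehicle over the loop
    """
    occupied = (baseline - np.asarray(inductance)) > drop_threshold
    # Each vehicle passage appears as a rising edge of the occupied state.
    volume = int(np.sum(np.diff(occupied.astype(int)) == 1))
    occupancy = float(occupied.mean())  # fraction of time the loop is occupied
    return volume, occupancy

# Toy signal with two vehicle passages (values are illustrative only).
signal = [1.0] * 5 + [0.8] * 3 + [1.0] * 5 + [0.8] * 4 + [1.0] * 3
print(loop_volume_occupancy(signal, baseline=1.0, drop_threshold=0.1))
```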

2.2. Data Pre-Processing

Data pre-processing, when necessary, is conducted right after the sensing task. It can address data quality issues that are difficult to handle at the sensing stage. Noisy data and missing data are two major problems in traffic data pre-processing and cleaning. While data denoising can be done with satisfactory performance using traditional methods, such as the wavelet filter, moving average model, and Butterworth filter [26], missing data imputation is much harder since it must properly add information that was never recorded. Another commonly applied traffic data denoising task is trajectory data map matching. The most popular models for this task, which remove map matching errors, are often based on the Hidden Markov Model [27,28,29]. There have been quite some efforts in deep learning-based missing data imputation lately. These state-of-the-art methods often focus on learning spatial-temporal features using deep learning models so that they are able to infer the missing values from the existing values [30,31,32,33,34,35]. Given the spatial-temporal property of traffic data, the Convolutional Neural Network (CNN) is a natural choice due to its ability to learn image-like patches. Zhuang et al. designed a CNN-based method for loop traffic flow data imputation and demonstrated its improved performance over the state-of-the-art methods [32]. The generative adversarial network (GAN) is another deep learning method that is appropriate for traffic data imputation, given its recent advances in image-like data generation. Chen et al. proposed a GAN algorithm to generate time-dependent traffic flow data. They made two modifications to the standard GAN concerning the use of real data and the introduction of a representation loss [31]. GANs have also been experimented with for travel time imputation using probe vehicle trajectory data. Zhang et al. developed a travel time imputation GAN (TTI-GAN) considering network-wide spatial-temporal correlations [35].
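As a simple baseline for the two pre-processing problems above, the sketch below fills gaps in a toy traffic volume series with linear interpolation and then denoises it with a moving average. The series and its dropouts are fabricated for illustration; the deep learning imputation methods cited above would replace the interpolation step.

```python
import numpy as np
import pandas as pd

# Fabricated 5-min volume series with noise and two sensor dropouts.
rng = np.random.default_rng(0)
volume = pd.Series(100 + 10 * rng.standard_normal(12))
volume.iloc[[3, 7]] = np.nan

# Imputation: linear interpolation as a simple stand-in for learned methods.
imputed = volume.interpolate(method="linear", limit_direction="both")

# Denoising: centered moving average (one of the traditional filters above).
denoised = imputed.rolling(window=3, center=True, min_periods=1).mean()
print(denoised.round(1))
```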

2.3. Traffic Pattern Analysis

With the data organized, the next step for an ITS is to learn the traffic patterns, understand traffic status, and make traffic predictions. There are two critical steps in most traffic pattern learning tasks: (1) feature selection and (2) model design. Essentially, feature selection forms the original data space, and model design converts the original space into a new space that is learnable for classification, regression, clustering, or other tasks. On the one hand, traffic sensing is critical because, without properly collected original data, it is almost impossible to construct a useful new space for pattern learning from a poor original data space. On the other hand, once the traffic sensing is done with good design and quality, it is then necessary to focus on designing models to extract useful information for the task at hand. Machine learning has been widely applied to a variety of traffic pattern learning tasks, such as driver and passenger classification using smartphone data [36], K-means clustering for truck bottleneck identification using GPS data [37], estimation of the number of bus passengers using deep learning [38], and fault detection in vehicular cyber-physical systems [39].
A traditional group of studies is transportation mode recognition. Models are developed to recognize the mode of travelers, such as walking, biking, running, and driving. This can be achieved by identifying travel features, such as speed, distance, and acceleration. Jahangiri and Rakha applied multiple traditional machine learning techniques to mode recognition using mobile phone data and found Random Forest (RF) and Support Vector Machine (SVM) to have the best performance [40]. Ashqar et al. enhanced the mode recognition accuracy by designing a two-layer hierarchical classifier and extracting new frequency domain features [41]. Another work introduced an online sequential extreme learning machine (ELM), which focuses on transfer learning techniques for mode recognition; it was trained with both labeled and unlabeled data for better training efficiency and classification accuracy. Recently, deep learning models were also developed for mode recognition [42]. Jeyakumar et al. developed a convolutional bidirectional Long Short-Term Memory (LSTM) model for transportation mode recognition, with feature extraction covering both time domain and frequency domain features from the raw data [43].
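The sketch below illustrates this classic feature-based recipe with a Random Forest, in the spirit of the studies above though not reproducing any of them: trip segments are described by simple speed and acceleration features, and the classifier separates the modes. The features and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def make_segments(mean_speed, mean_acc, label, n=100):
    # One row per trip segment: [mean speed (m/s), mean |acceleration| (m/s^2)].
    speeds = rng.normal(mean_speed, 0.5, n)
    accs = rng.normal(mean_acc, 0.1, n)
    return np.column_stack([speeds, accs]), [label] * n

# Synthetic stand-in for labeled smartphone GPS/IMU features.
Xw, yw = make_segments(1.4, 0.3, "walk")
Xb, yb = make_segments(4.5, 0.6, "bike")
Xd, yd = make_segments(13.0, 1.0, "drive")
X = np.vstack([Xw, Xb, Xd])
y = np.array(yw + yb + yd)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```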
Another representative group in data-driven pattern analysis is traffic accident detection. It is beneficial for transportation management agencies and travelers to have real-time information on traffic accidents, including where they occur and what the situation is; otherwise, an accident may cause severe congestion and other issues beyond the accident itself. This group of work often extracts features from traffic flow data, weather data, and so on to identify traffic pattern changes or differences around the accident location. Parsa et al. implemented eXtreme Gradient Boosting (XGBoost) to detect the occurrence of accidents using real-time data, including traffic flow, road network, demographic, land use, and weather information. The SHapley Additive exPlanations (SHAP) method is employed to interpret the results and analyze the importance of individual features [44]. They also led another study that showed the superiority of probabilistic neural networks for accident detection on freeways using imbalanced data; it revealed that the speed difference between the upstream and downstream of the accident was a highly significant feature [45]. In addition to traffic flow data, social media data have also been shown to be effective for traffic accident detection. Zhang et al. employed the Deep Belief Network (DBN) and LSTM to detect traffic accidents using Twitter data in Northern Virginia and New York City. They found that nearly 66% of the accident-related tweets could be located by the accident log and over 80% could be linked to abnormal traffic data nearby [46]. Another sub-category is detecting accidents in real time from a vehicle's perspective. For example, Dogru and Subasi studied the possibility of applying RF, SVM, and a neural network to accident detection based on an individual vehicle's speed and location in the context of a Vehicular Ad-hoc Network (VANET) [47].

2.4. Traffic Prediction

Traffic pattern analysis is fundamental to traffic prediction. In some cases, traffic prediction is critical for an ITS if decision-making requires information in advance. In traffic prediction, models extract features and learn the current pattern of traffic in order to predict future measurements. Traffic prediction is crucial to system intelligence, and it is one of the areas in which artificial intelligence, especially deep learning, has been heavily applied. Traffic prediction covers many tasks in transportation systems. Examples are traffic flow prediction [26,48,49,50,51,52,53,54], transit demand prediction [55,56], taxi or ride-hailing demand prediction [57,58,59], bike sharing-related prediction [60,61], parking occupancy prediction [62], pedestrian behavior prediction [63], and lane change behavior prediction [64]. LSTM, CNN, GAN, and graph neural networks are among the most widely used deep learning methods for traffic prediction. The current trend in traffic prediction is toward larger scale, higher resolution, higher prediction accuracy, and real-time speed. For instance, Ma et al. investigated the feasibility of applying LSTM to single-spot traffic flow data prediction [51]. Their work is a milestone in this field and has laid the foundation for sophisticated models that can capture network-wide features for large-scale traffic speed prediction [65,66,67].
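For readers unfamiliar with the setup, the following is a minimal sketch of single-spot traffic flow prediction with an LSTM, in the spirit of the LSTM line of work cited above but not reproducing any specific paper's architecture. The input shapes, hidden size, and random training batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """Given the volumes of the past `seq_len` intervals at one detector,
    predict the next interval's volume."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # regress from the last hidden state

model = TrafficLSTM()
past = torch.randn(8, 12, 1)              # batch of 8 sequences of 12 intervals
pred = model(past)                        # (8, 1) predicted next-interval volumes
loss = nn.MSELoss()(pred, torch.randn(8, 1))
loss.backward()                           # an optimizer step would follow here
```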

2.5. Information Communication and Control

There are two purposes of information communication: (1) gathering information to support decision-making and (2) disseminating the decisions and control strategies to devices and road users. Traditional ITS communication relies heavily on wired connections. Actuated traffic signal control collects vehicle arrival data from loop detectors underneath the roadway surface and pedestrian signal data via push buttons at intersections [68]. This information is gathered through wires into the signal controller cabinet, which is usually located at the roadside near an intersection. A similar communication method is used for ramp metering control at freeway entrances [69]. Loop detectors are located underneath the freeway mainline lanes, and sometimes also underneath ramp lanes, and they communicate with the cabinet through wires [70]. Using an Ethernet cable is another common method of wired communication for ITS. It can either connect devices to the Internet or serve as a medium for local communication, such as video streaming [71]. The Controller Area Network (CAN bus) is a standard vehicle bus designed for microcontroller communications without a host computer [72], which enables the parts within a vehicle to communicate with each other. Wired communication through CAN is an important way to test vehicle onboard innovations and solutions in ITS studies.
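As an illustration of reading vehicle data off the CAN bus, the sketch below uses the open-source python-can library over a Linux SocketCAN interface. The interface name, arbitration ID, and speed scaling are hypothetical; real signal definitions come from the vehicle's DBC database.

```python
import can

VEHICLE_SPEED_ID = 0x123  # hypothetical arbitration ID for a speed frame

# Assumes a Linux SocketCAN interface named "can0" is already up.
with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    for _ in range(100):                  # read up to 100 frames
        msg = bus.recv(timeout=1.0)       # blocks up to 1 s per frame
        if msg is None:
            continue                      # timeout, keep listening
        if msg.arbitration_id == VEHICLE_SPEED_ID:
            # Hypothetical encoding: big-endian 16-bit value, 0.01 km/h per bit.
            speed_kph = ((msg.data[0] << 8) | msg.data[1]) * 0.01
            print(f"t={msg.timestamp:.3f}s speed={speed_kph:.1f} km/h")
```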
Wireless communication has been widely used in different applications, thanks to the rapid development of general communication technologies. Probe vehicle data can be available in real time through vehicle-cloud communications. Companies such as INRIX [73] and Wejo [74] have such connected vehicle data, including trajectories and driver events, given their good connections to the vehicle OEMs. Similar vehicle and traffic data are available via devices other than the vehicle itself, such as smartphones, providing real-time information for drivers via phone apps such as Google Maps and Waze [75]. A connected vehicle, in many other scenarios, refers to not only vehicle-cloud communication but also vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and even vehicle-to-everything (V2X) communication [76]. Dedicated Short Range Communication (DSRC) was a standard communication protocol for V2X applications [77]; recently, however, C-V2X has been proposed as a new communication protocol with the emergence of 5G for high-bandwidth, low-latency, and highly reliable communication among a broad range of devices in ITS [78]. Information such as variable speed limits from control strategies, work zone information, and real-time travel time can be disseminated via variable message signs on the roadways in the Advanced Traffic Management System (ATMS) and Advanced Traveler Information System (ATIS), from traffic management centers to road users [79,80].

3. ITS Sensing

This section examines the state of the art in ITS sensing in detail from a unique angle. First, existing ITS sensing works using the camera and LiDAR are briefly introduced in Section 3.1, since those two sensors often require complicated methods for converting raw signals into useful data. We then summarize ITS sensing into infrastructure-based traffic sensing, vehicle onboard sensing, and aerial sensing for surface traffic in Section 3.2 to Section 3.4: (1) from the transportation system functionality perspective, infrastructure and road users are the two crucial elements that form the ground transportation system, whose functionality is further extended by the emergence of aerial surveillance in civil use; (2) from the methodological perspective, the sensor properties of these three transportation system components require different solutions. Taking video sensing as an example, surveillance video, vehicle onboard video, and aerial video have different video background motion patterns, and thus each of the three groups requires its own video analytics algorithms for video foreground extraction.

3.1. LiDAR and Camera

LiDAR has been used predominantly in autonomous vehicles rather than in transportation infrastructure systems. The LiDAR signal is a 3D point cloud, and it can be used for 3D object detection, 3D object tracking, lane detection, obstacle detection, traffic sign detection, and 3D mapping in autonomous vehicles' perception systems [81]. For example, Qi et al. proposed PointNets, a deep learning framework for 3D object detection from RGB-D data that learns directly from raw point clouds to extract 3D bounding boxes of vehicles [82]. Allodi et al. proposed a machine learning approach for combined LiDAR/stereo vision data that performs tracking and obstacle detection at the same time [83]. Jung et al. designed an expectation-maximization-based method for real-time 3D road lane detection using raw LiDAR signals from a probe vehicle [84]. Guan developed a traffic sign classifier based on a supervised Gaussian-Bernoulli deep Boltzmann machine model, which uses LiDAR point clouds and images as input [85]. There are also some representative works providing critical insights into the application of LiDAR as an infrastructure-based sensor. Zhao et al. proposed a clustering method for detecting and tracking pedestrians and vehicles using roadside LiDAR [18]. The findings are helpful for both researchers and transportation engineers.
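As a flavor of how roadside LiDAR point clouds can be grouped into road users, the sketch below applies generic density-based clustering (DBSCAN) to a synthetic frame after assumed ground removal. This is not the specific clustering method of Zhao et al. [18]; the point coordinates and parameters are fabricated for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic roadside LiDAR frame: (N, 3) points after ground removal.
rng = np.random.default_rng(1)
vehicle = rng.normal([10.0, 2.0, 1.0], 0.4, (200, 3))
pedestrian = rng.normal([6.0, -1.5, 0.9], 0.15, (60, 3))
clutter = rng.uniform([-20, -10, 0], [20, 10, 3], (30, 3))
points = np.vstack([vehicle, pedestrian, clutter])

# Density-based clustering groups each road user's points; label -1 is noise.
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(points)
for k in set(labels) - {-1}:
    cluster = points[labels == k]
    size = cluster.max(axis=0) - cluster.min(axis=0)
    print(f"object {k}: {len(cluster)} points, bounding box {size.round(2)}")
```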
A camera collects images or videos, and these raw data are essentially 2D matrices of quantized pixel values that sample real-world visual signals. New techniques are applied to convert these complex 2D matrices into traffic-related data. One fundamental application is object detection. Researchers in the engineering and computer science fields have spent a lot of effort designing smart and fast object detectors using traditional statistics/learning [15] and deep learning techniques [86,87]. Object detection localizes and classifies cars, trucks, pedestrians, bicyclists, etc., in traffic camera images and enables different data collection tasks. There are also datasets collected and published specifically for object detection and classification in traffic surveillance images, which have generated much interest [88]. The AI City Challenge is a leading workshop and competition in the field of traffic surveillance video data processing [89]. It has guided video-based traffic sensing, such as traffic volume counting, vehicle re-identification, multiple-vehicle tracking, and traffic anomaly detection. Since the camera sensor is a critical component of autonomous vehicles, advanced traffic sensing techniques have been heavily deployed in autonomous vehicles' perception systems to understand the environment [81]. Traffic events, congestion levels, road users, road regions, infrastructure information, road user interactions, etc., are all meaningful data that can be extracted from raw images using machine learning. Datasets have also been published and widely recognized to facilitate the design of cutting-edge methods for converting camera data into readable traffic-related data [90,91,92,93].
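A minimal example of camera-based traffic object detection is sketched below using an off-the-shelf pretrained Faster R-CNN from torchvision; the COCO classes it outputs include person, car, bus, and truck. The random tensor stands in for a real camera frame, and a traffic-tuned model would normally replace the generic weights.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained COCO detector as a stand-in for a traffic-tuned model.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)           # placeholder camera frame in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]         # dict with boxes, labels, scores

keep = detections["scores"] > 0.5          # confidence filter
for box, label in zip(detections["boxes"][keep], detections["labels"][keep]):
    print(int(label), box.round().tolist())  # COCO ids: 1 person, 3 car, 6 bus, 8 truck
```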

3.2. Infrastructure-Based ITS Sensing

A key objective of the ITS concept is to leverage existing civil infrastructure to improve traffic performance. Transport infrastructure refers to roads, bridges, tunnels, terminals, railways, traffic controllers, traffic signs, other roadside units, and so on. Sensors installed on transport infrastructure monitor certain locations in a transportation system, such as intersections, roadway segments, freeway entrances/exits, and parking facilities.

3.2.1. Traffic Flow Detection

One of the fundamental functions of infrastructure-based ITS sensing is traffic flow detection and classification at certain locations. Vehicle counts, flow rate, speed, density, trajectories, classes, and many other valuable data can be obtained through traffic flow detection and classification. Chen et al. [94] proposed a traffic flow detection method using an optimized YOLO (You Only Look Once) model for vehicle detection and DeepSORT (Deep Simple Online and Realtime Tracking) for vehicle tracking and implemented the method on the Nvidia edge device Jetson TX2. Haferkamp et al. [95] applied machine learning (KNN and SVM) to radio attenuation signals and were able to achieve success in traffic flow detection and classification. If processed with advanced signal processing methods, traditional traffic sensors, such as loop detectors and radar, can also expand their detection categories and performance. Ho and Chung [96] applied the Fast Fourier Transform (FFT) to radar signals to detect traffic flow at the roadside. Ke et al. [97] developed a method for traffic flow bottleneck detection using the Wavelet Transform on loop detector data. Distributed acoustic sensing, traffic flow outlier detection, deep learning, and robust traffic flow detection in congestion are examples of other state-of-the-art studies in this sub-field [98,99,100,101].

3.2.2. Travel Time Estimation

Coupled with traffic flow detection, travel time estimation is another task in ITS sensing. Accurate travel time estimation needs multi-location sensing and re-identification of road users. Bluetooth sensing is a primary way to measure travel time, since each Bluetooth detection comes with a device's MAC address, which naturally re-identifies the road user carrying the device. Both vehicle travel time [102] and pedestrian travel time [103] can be extracted with Bluetooth sensing, although it has generated privacy concerns. With the advances in computer vision and deep learning, travel time estimation has been improved with road user re-identification using surveillance cameras. Deep image features are extracted for vehicles and pedestrians and are compared among region-wide surveillance cameras for multi-camera tracking [104,105,106,107,108]. An effective and efficient pedestrian re-identification method called KISS+ (Keep It Simple and Straightforward Plus) was developed by Han et al. [108], in which multi-feature fusion and feature dimension reduction are conducted based on the original KISS method. Sometimes it is not necessary to estimate travel time for every single road user; in those cases, more conventional detectors and methods can achieve good results. Oh et al. [109] proposed a method to estimate link travel time, as early as 2002, using loop detectors. The key idea was based on road section density, which can be acquired by observing in-and-out traffic flows between two loop stations. While no re-identification was involved, these methods had reasonably good performance and provided helpful travel time information for traffic management and users [109,110,111].
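The core of Bluetooth-based travel time estimation is MAC address matching between stations, as the following sketch shows with fabricated detections. The station names, MAC values, and timestamps are made up; real deployments typically also hash MACs to mitigate the privacy concerns noted above.

```python
# Match MAC detections between two Bluetooth stations to estimate
# segment travel times (all values fabricated for illustration).
upstream = {  # MAC -> detection timestamp (s) at station A
    "AA:01": 100.0, "AA:02": 105.0, "AA:03": 112.0,
}
downstream = {  # MAC -> detection timestamp (s) at station B
    "AA:01": 190.0, "AA:03": 250.0, "AA:09": 260.0,
}

travel_times = sorted(
    downstream[mac] - upstream[mac]
    for mac in upstream.keys() & downstream.keys()
    if downstream[mac] > upstream[mac]
)
# The median is more robust than the mean to outliers (e.g., a mid-segment stop).
median_tt = travel_times[len(travel_times) // 2]
print(f"matched {len(travel_times)} devices, median travel time {median_tt:.0f} s")
```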

3.2.3. Traffic Anomaly Detection

Another topic in infrastructure-based sensing is traffic anomaly detection. As the name suggests, traffic anomalies are abnormal incidents in an ITS. They rarely occur, and examples include vehicle breakdowns, collisions, near-crashes, wrong-way driving, and so forth. Two major challenges in traffic anomaly detection are (1) the lack of sufficient anomaly data for algorithm development and (2) the wide variety of anomalies, which lack a clear definition. Anomaly detection is mainly achieved using surveillance cameras, given the requirement for rich information, though time series data are also feasible for some relatively simple anomaly detection tasks [112]. Traffic anomaly detection can be divided into three categories: supervised learning, unsupervised learning, and semi-supervised learning. Supervised learning methods are useful when the number of classes is clearly defined and the training data are large enough to make the model statistically significant; however, supervised learning requires manual labeling, which costs both data and labor, and it cannot detect unforeseen anomalies [113,114,115]. Unsupervised learning has no labeling requirement and generalizes better to unforeseen anomalies as long as sufficient normal data are given; however, anomaly detection becomes hard when the nature of the data changes over time (e.g., if a surveillance camera keeps changing angle and direction) [116]. Li et al. [117] designed an unsupervised method based on multi-granularity tracking, and their method won first place in the 2020 AI City Challenge. Semi-supervised learning needs only weak labels. Chakraborty et al. [118] proposed a semi-supervised model for freeway traffic trajectory classification using YOLO, SORT, and maximum-likelihood-based Contrastive Pessimistic Likelihood Estimation (CPLE). This model detects anomalies based on trajectories and improves the accuracy by 14%. Sultani et al. [119] treated videos as bags and video segments as instances in multiple instance learning and automatically learned an anomaly ranking model with weakly labeled data. Lately, traffic anomaly detection has been advanced not only by the design of new learning methods but also by object tracking methods. It is interesting to see that, in the 2021 AI City Challenge, all top-ranking methods made contributions to the tracking component in some way [120,121,122].

3.2.4. Parking Detection

Alongside roadway monitoring, parking facility monitoring, as another typical scene in urban areas, plays a crucial role in infrastructure-based sensing. Infrastructure-based parking space detection can be divided into two categories from the sensor functionality perspective: the wireless sensor network (WSN) solution and the camera-based solution. The WSN solution has one sensor for each parking space, and the sensors need to be low-power, sturdy, and affordable [8,123,124,125,126,127,128,129,130,131,132]. The WSN solution has pros and cons: algorithm-wise, it is often straightforward, and a thresholding method would work in most cases, but a relatively simple detection method may lead to a high false detection rate. A unique feature of the WSN is that it is robust to sensor failure due to the large number of sensors; even if a few stop working, the WSN still covers most of the spaces. However, a large number of sensors does require a high cost of labor and maintenance in large-scale installations. Magnetic nodes, infrared sensors, ultrasonic sensors, light sensors, and inductance loops are the most popular sensors. For example, Sifuentes et al. [131] developed a cost-effective parking space detection algorithm based on magnetic nodes, which integrates a wake-up function with optical sensors. The camera-based solution has become increasingly popular with advances in video sensing, machine learning, and data communication technologies [8,123,133,134,135,136,137,138,139,140,141,142,143,144]. Compared to the WSN, one camera covers multiple parking spaces; thus, the cost per space is reduced. It is also more manageable in terms of maintenance, since the installation of camera systems is non-intrusive. Additionally, as aforementioned, video contains more information than other sensors, which has the potential to support more complicated parking tasks. Bulan et al. proposed using background subtraction and SVM for street parking detection, which achieved promising performance and was not sensitive to occlusion [133]. Nurullayev et al. designed a pipeline with a unique dilated convolutional neural network (CNN) structure, which was validated to be robust and suitable for parking detection [136].
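The thresholding logic mentioned above for a WSN node can be as simple as the following sketch for a magnetic parking sensor: a parked vehicle distorts the local magnetic field, and a sustained deviation from a calibrated baseline marks the space as occupied. All numbers (baseline, threshold, dwell fraction) are illustrative assumptions, not values from any deployed system.

```python
import numpy as np

def space_occupied(magnetic_signal, baseline, threshold=3.0):
    """Thresholding rule for one WSN parking node: a parked vehicle
    distorts the magnetic field measured above the space."""
    deviation = np.abs(np.asarray(magnetic_signal) - baseline)
    # Require a sustained deviation so a passing car does not trigger detection.
    return bool(np.mean(deviation > threshold) > 0.8)

baseline = 45.0                       # microtesla, calibrated on an empty space
empty = 45.0 + np.random.default_rng(0).normal(0, 0.5, 50)
parked = empty + 12.0                 # assumed field distortion from a vehicle
print(space_occupied(empty, baseline), space_occupied(parked, baseline))
```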

3.3. Vehicle Onboard Sensing

Vehicle onboard sensing is complementary to infrastructure-based sensing. It operates on the road user side: the sensors move with road users and thereby are more flexible and cover larger areas. Additionally, vehicle onboard sensors are the eyes of an intelligent vehicle, enabling the vehicle to see and understand its surroundings. These properties present opportunities for urban sensing and autonomous driving technologies, but at the same time create challenges for innovation. A major technical challenge is the irregular movement of the sensors. Traditional ITS sensing with infrastructure-mounted sensors deals with stationary backgrounds and relatively stable environment settings. For instance, radar sensors for speed measurement know where the traffic is supposed to be, and camera sensors have a fixed video background so that traditional background modeling algorithms can be applied. Therefore, in order to benefit from vehicle onboard sensing, it is necessary to address these challenges.

3.3.1. Traffic Near-Crash Detection

A traffic near-crash, or near-miss, is a conflict between road users that has the potential to develop into a collision. Near-crash detection using onboard sensors is the first step for multiple ITS applications: near-crash data serve as (1) surrogate safety data for traffic safety studies, (2) corner-case data for autonomous vehicle testing, and (3) input to collision avoidance systems. There were some pioneering studies on automatic near-crash data extraction on the infrastructure side using LiDAR and cameras [145,146,147]. In recent years, near-crash detection systems and algorithms using onboard sensors have been developed at a fast pace. Ke et al. [148] and Yamamoto et al. [149] each applied conventional machine learning models (SVM and random forest) in their near-crash detection frameworks and achieved fairly good detection accuracy and efficiency on regular computers. The state-of-the-art methods tend to use deep learning for near-crash detection. The integration of CNN, LSTM, and attention mechanisms was demonstrated to be superior in recent studies [149,150,151]. Ibrahim et al. showed that a bi-directional LSTM with self-attention outperformed a single LSTM with a normal attention mechanism [150]. Another feature of recent studies is the combination of onboard camera input and onboard telematics input, such as vehicle speed, acceleration, and location, to either improve the near-crash detection performance or increase the output data diversity [9,149,152]. Ke et al. mainly used onboard video for near-crash detection but also collected telematics and vehicle CAN data for post analysis [9].

3.3.2. Road User Behavior Sensing

Human drivers can recognize and predict other road users' behaviors, e.g., pedestrians crossing the street or vehicles changing lanes. For intelligent or autonomous vehicles, automating this kind of behavior recognition is expected to be part of the onboard sensing functions [153,154,155,156,157]. Stanford University [157] published an article on pedestrian intent recognition using onboard videos. They built a graph CNN to exploit spatio-temporal relationships in the videos, which was able to show the relationships between different objects. While the intent prediction focused, for now, only on whether a pedestrian crosses the street, the research direction is clearly promising. They also published over 900 h of onboard videos online. Another study on pedestrian action recognition, by Brehar et al. [154], used an infrared camera, which compensates for the limitations of regular cameras at nighttime and on foggy or rainy days. They built a framework composed of a pedestrian detector, an original tracking method, road segmentation, and LSTM-based action recognition. They also introduced a new dataset named CROSSIR. Likewise, vehicle behavior recognition is of equal importance for intelligent or autonomous vehicles [158,159,160,161,162]. Wang et al. [159] lately developed a method using fuzzy inference and LSTM for recognizing vehicles' lane-changing behavior. The recognition results were used in a new intelligent path planning method to ensure the safety of autonomous driving; the method was trained and tested on NGSIM data. Another study addressed vehicle trajectory prediction using onboard sensors in a connected-vehicle environment. It improved the effectiveness of the Advanced Driver Assistance System (ADAS) in cut-in scenarios by establishing a new collision warning model based on lane-changing intent recognition, LSTM-based driving trajectory prediction, and oriented bounding box detection [158]. Another type of road user-related sensing is passenger sensing, though for different purposes, e.g., transit ridership sensing using wireless technologies [163] and car passenger occupancy detection using thermal images for carpool enforcement [164].

3.3.3. Road and Lane Detection

In addition to road user-related sensing tasks, road and lane detection are often performed for lane departure warning, adaptive cruise control, road condition monitoring, and autonomous driving. The state-of-the-art methods mostly apply deep learning models to onboard camera, LiDAR, and depth sensor data for road and lane detection [165,166,167,168,169,170]. Chen et al. [165] proposed a novel progressive LiDAR adaptation-aided road detection method that adapts the LiDAR point cloud to visual images. The adaptation contains two modules, i.e., data space adaptation and feature space adaptation. This camera-LiDAR fusion model currently sits at the top of the KITTI road detection leaderboard. Fan et al. [166] designed a deep learning architecture that consists of a surface normal estimator, an RGB encoder, a surface normal encoder, and a decoder with connected skip connections. It applied road detection to RGB and depth images and achieved state-of-the-art accuracy. Alongside road region detection, an ego-lane detection model proposed by Wang et al. outperformed other state-of-the-art models in this sub-field by exploiting prior knowledge from digital maps; specifically, they employed OpenStreetMap's road shape files to assist lane detection [167]. Multi-lane detection is more challenging and rarely addressed in existing works. Still, Luo et al. [168] were able to achieve strong multi-lane detection results by adding five constraints to the Hough Transform: length, parallelism, distribution, pairing, and uniform width. A dynamic programming step after the Hough Transform selects the final candidates.
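For orientation, the sketch below shows a classic Hough Transform lane line baseline with OpenCV, the kind of starting point that constrained methods such as Luo et al.'s [168] build upon. The Canny thresholds, region of interest, and Hough parameters are illustrative, and the input file name is hypothetical.

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Classic single-frame baseline: edges + probabilistic Hough Transform,
    restricted to a trapezoidal region in front of the vehicle."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                     (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines.reshape(-1, 4)

frame = cv2.imread("onboard_frame.jpg")   # hypothetical dashcam frame
if frame is not None:
    for x1, y1, x2, y2 in detect_lane_lines(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```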

3.3.4. Semantic Segmentation

Detecting road regions at the pixel level is a type of image segmentation focusing on the road instance. There has been a trend in onboard sensing to segment the entire video frame at the pixel level into different object categories. This is called semantic segmentation and is considered a must for advanced robotics, especially autonomous driving [171,172,173,174,175,176,177,178,179]. Compared to other tasks, which can usually be fulfilled using different types of onboard sensors, semantic segmentation is strictly realized using visual data. Nvidia researchers [172] proposed a hierarchical multi-scale attention mechanism for semantic segmentation based on the observation that certain failure modes in segmentation can be resolved at a different scale. Their attention design was hierarchical, making memory usage four times more efficient during training. The proposed method ranked top on two segmentation benchmark datasets. Semantic segmentation is relatively computationally expensive; thus, working towards the goal of real-time segmentation is a challenge [171,174]. Siam et al. [171] proposed a general framework for real-time segmentation that ran at 15 fps on an Nvidia Jetson TX2. Labeling at the pixel level is time-consuming and is another challenge for semantic segmentation. There are some benchmark datasets available for algorithm testing, such as Cityscapes [180]. Efficient labeling for semantic segmentation and unsupervised/semi-supervised learning for semantic segmentation are interesting topics worth exploring [173,175,176].

3.4. Aerial Sensing for ITS

Aerial sensing using drones, i.e., unmanned aerial vehicles (UAVs), has been performed in the military for years and has recently become increasingly explored in civil applications, such as agriculture, transportation, goods delivery, and security. Automation and smartness of surface traffic cannot be achieved by ground transportation alone. The UAV extends the functionality of existing ground transportation systems with its high mobility, top-view perspective, wide view range, and autonomous operation [181]. The UAV's role is envisaged in many ITS scenarios, such as flying accident report agents [182], traffic enforcement [183], traffic monitoring [184], and vehicle navigation [185]. While there are regulations to be developed and practical challenges to be addressed, such as safety concerns, privacy issues, and short battery life, UAV applications in ITS are envisioned as one step forward towards transportation network automation [181]. On the road user side, the UAV extends the functionality of ground transportation systems by detecting vehicles, pedestrians, and cyclists from the top view, which has a wider view range and a better view angle (no occlusion) than surveillance cameras and onboard cameras. The UAV also detects road user interactions and traffic theory-based parameters, thereby supporting applications in traffic management and user experience improvement.

3.4.1. Road User Detection and Tracking

Road user detection and tracking are the initial processes for traffic interaction detection, pattern recognition, and traffic parameter estimation. Conventional UAV-based road user detection often uses background subtraction and handcrafted features, assuming the UAV is not moving or stitching frames as a first step [186,187,188,189,190,191]. Recent studies tend to develop deep learning detectors for UAV surveillance [192,193,194,195,196]. Road user detection itself can acquire traffic flow parameters, such as density and counts, without any need for motion estimation or vehicle tracking. Zhu et al. [196] proposed an enhanced Single Shot Multibox Detector (SSD) for vehicle detection with manually annotated data, resulting in high detection accuracy and a new dataset. Wang et al. [197] identified the challenge in UAV-based pedestrian detection, particularly at night, and proposed an image enhancement method and a CNN for pedestrian detection at nighttime. To conduct more advanced tasks in UAV sensing on the road user side, road user tracking is a must because it connects individual detection results. Efforts have been made on UAV-based vehicle tracking and motion analysis [186,187,188,198,199,200]. In many previous works, existing tracking methods, such as the particle filter and SORT, were directly applied and had fairly good tracking performance. Recently, Ke et al. [201] developed a new tracking algorithm that incorporates lane detection information and improves tracking accuracy.
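A conventional background subtraction pipeline of the kind described above could look like the following OpenCV sketch, assuming a hovering (static) UAV. The video file name, subtractor parameters, and area filter are illustrative assumptions, and a moving UAV would require frame stitching or motion compensation first.

```python
import cv2

cap = cv2.VideoCapture("uav_clip.mp4")     # hypothetical aerial video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)            # moving pixels become foreground
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep blobs large enough to be road users; threshold is scene-dependent.
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 150]
    # `boxes` now holds (x, y, w, h) candidates for detection/tracking.
cap.release()
```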

3.4.2. Advanced Aerial Sensing

Road user detection and tracking support advanced aerial ITS sensing applications. For example, in [192,202], the researchers developed new methods for traffic shock-wave identification and synchronized traffic flow pattern recognition under oversaturated traffic conditions. Chen et al. [203] conducted a thorough study of traffic conflicts based on road user trajectories extracted from UAVs; the Safety Space Boundary concept in the paper is an informative design for conflict analysis. One of the most useful UAV applications is traffic flow parameter estimation. Traditional research in this field focused on using static UAV videos for macroscopic parameter extraction. McCord et al. [204] conducted pioneering work to extract a variety of critical macroscopic traffic parameters, such as annual average daily traffic (AADT). Later, a new method was proposed by Shastry et al. [205], in which image registration and motion information were adopted to stitch images and obtain fundamental traffic flow parameters. Lately, Ke et al. developed a series of efficient and robust frameworks to estimate aggregated traffic flow parameters (speed, density, and volume) [206,207,208]. Because of the potential benefits of higher-resolution data in ITS, microscopic traffic parameter estimation has also been conducted [209]. Barmpounakis et al. proposed a method to extract naturalistic trajectory data from static UAV videos at a relatively uncongested intersection [194]. Ke et al. [201] developed an advanced framework composed of lane detection, vehicle detection, vehicle tracking, and traffic parameter estimation that can estimate 10 different macroscopic and microscopic parameters from a moving UAV.

3.4.3. UAV for Infrastructure Sensing

On the infrastructure side, the UAV has been utilized for ITS sensing services such as road detection, lane detection, and infrastructure inspection. The UAV is extremely helpful at locations that are hard for humans to reach. Road detection is used to localize the regions where traffic appears in UAV sensing data. It is crucial to support applications such as navigation and task scheduling [210,211,212]. For instance, Zhou et al. [213,214] designed two of the popular methods for road detection in UAV imagery. While there were some studies before these two, the paper [213] was the first to target speeding up the road localization part with a proposed tracking method. Reference [214] presented a fully automatic approach that detects roads from a single UAV image with two major components: road/non-road seed generation and seeded road segmentation. Their methods were tested on challenging scenes. The UAV has been used intensively for infrastructure inspection, particularly bridge inspection [215,216,217,218] and road surface inspection [219,220,221]. Manual inspection of bridges and road surfaces is costly in terms of both time and labor. Bolourian et al. [217] proposed a high-level framework for bridge inspection using LiDAR-equipped UAVs, which combines collision-free optimized path planning with a data analysis framework for point cloud processing. Bicici and Zeybek [219] developed an approach with verticality features, DBSCAN clustering, and robust plane fitting to process point clouds for the automated extraction of road surface distress.

4. Edge Computing: Opportunities in ITS Sensing Challenges

Despite the massive advances in ITS sensing in both methodology and application, various challenges must be addressed on the way to a truly smart city and smart transportation system. We envision the major objectives of future ITS sensing to be large-scale sensing, high intelligence, and real-time capability. These three properties would lay the foundation for high automation of city-wide transportation systems. On the other hand, we summarize the challenges into a few categories: heterogeneity, high probability of sensor failure, sensing in extreme cases, and privacy concerns. Reviewing the emerging works on using edge computing for ITS tasks, it is reasonable to consider that edge computing will be a primary component of the solutions to these challenges.

4.1. Objectives

4.1.1. Large-Scale Sensing

ITS sensing in smart cities is expected to cover a large network of microsites. Without edge computing, the cost of large-scale cloud computing services (e.g., AWS and Azure) is significant and will eventually reach the upper limit of network resources (bandwidth, computation, and storage) [9]. Sending network-wide data over a limited bandwidth to a centralized cloud is counterproductive. Edge computing could significantly improve network efficiency by transporting smaller amounts of non-raw data or providing edge functions that eliminate irrelevant data onsite. Systems and algorithms will need to be developed to address the high probability of sensor failure across a wide variety of large-scale real-world scenarios, as well as the associated maintenance and support burden.

4.1.2. High Intelligence

Intelligence in ITS sensing means that transportation systems understand the surrounding environment through intelligent sensing functions, thus providing valuable information for efficient and effective decision-making. Many ITS environments today still have unreliable or unpredictable network connectivity. These include buses, planes, parking facilities, traffic signal facilities, and general infrastructure under extreme conditions. Edge computing functions can be designed to be self-contained, thereby neatly supporting these environments by allowing autonomous or semi-autonomous operation without network connectivity. One existing example is ADAS functions, which run automatically onboard vehicles; without an Internet connection, a vehicle may not serve as a data collection point for other services but is still able to warn and protect drivers in risky scenarios. However, high intelligence often requires high-complexity methods and computation power. Concerns include the resource constraints on edge devices, the ability to handle corner cases that the machines have never encountered, and other general challenges in AI.

4.1.3. Real-Time Sensing

Sensing in real time is critical for many future ITS applications. Connected infrastructure, autonomous vehicles, smart traffic surveillance, short-term traffic prediction, and so on all expect real-time capability and, for reasons of effectiveness and safety, cannot tolerate even milliseconds of processing delay. These tasks, which require fast response times, low latency, and high efficiency, especially at large scale, cannot be achieved without an edge computing architecture. However, there is always a tradeoff between real-time sensing and high intelligence: as intelligence increases, efficiency commonly decreases. This conflict stands out in edge computing, given the limited resources at the edge. Sometimes the input data themselves are intense, such as video data and sensor fusion data, which puts additional burden on edge devices. Careful design of system architectures that balance the computation load between edge and cloud is needed to move towards this goal. Algorithm design targeting lightweight neural network structures and other models has shown effectiveness in reducing the computation load at the edge while maintaining good sensing performance. In summary, the concerns are the tradeoff between real-time sensing and high intelligence, network and computation resource constraints, and intense data input at the edge.

4.2. State of the Art

In this subsection, we summarize state-of-the-art models in edge computing for ITS sensing. The benefits of edge computing lie in improvements in computation efficiency, network bandwidth usage, response time, cyber security, and privacy [1]. However, the resource constraints on edge devices are the key bottleneck for the implementation of high intelligence. Zhou et al. conducted a comprehensive survey on edge AI and considered edge computing to be paving the last mile of AI [2]. In terms of AI model optimization at the edge, the compression of deep neural networks using pruning and quantization techniques is significant [222,223].
In ITS applications, there have not been many pioneering studies that explore the design of both system architectures and algorithms for specific transportation scenarios using edge computing. It is widely known that edge computing with machine learning is a trend for ITS. Ferdowsi et al. introduced a new ITS architecture that relies on edge computing and deep learning to improve computation, latency, and reliability [224]. They investigated the potential of using edge deep learning to solve multiple ITS challenges, including data heterogeneity, path planning, autonomous vehicle and platoon control, and cyber security.
Crowdsensing with the Internet of Vehicles (IoV) is one category of research using edge computing for ITS. Vehicles are individual nodes in the traffic network with local data collection and processing units; as a whole, they form the IoV network that can perform crowdsensing. One example is monitoring urban street parking spaces with in-vehicle edge video analytics [225]. In this work, smartphones serve as the data producers; the edge unit detects cars and signs, reads GPS, and uploads the ego-vehicle location and road identifier to the cloud for data aggregation. Another study uses a crowdsensing scheme and edge machine learning for road surface condition classification; a multi-classifier is applied at the edge to recognize the road surface type and anomaly situations [226]. Liu et al. proposed SafeRNet, a safe transportation routing framework that utilizes a Bayesian network to analyze crowdsensed traffic data, infer safe routes, and deliver them to users in real time [227].
Some other studies focus on managing and optimizing the resources of the system to ensure efficient message delivery, computation, caching, and so on for IoV [6,7,228,229,230]. Dai et al. [7] exploited reinforcement learning (RL) to formulate a new architecture that dynamically optimizes edge computing and caching resources. Yuan et al. [229] proposed a two-level edge computing architecture for efficient content delivery of large-volume, time-varying, location-dependent, and delay-constrained automated driving services. Ji et al. [6] developed a relay cooperative transmission algorithm of IoV with aggregated interference at the destination node.
Another group of research on this topic focuses on developing machine learning methods for specific ITS tasks with edge computing, instead of resource management in crowdsensing. Real-time video analytics, as the killer app for edge computing, has generated challenges and thereby huge research interest [5,231]. Microsoft Research has explored a new architecture with deep learning and edge computing techniques for intersection traffic monitoring and potential conflict detection [5]. Ke et al. [4] designed a new architecture that splits the computation load into a cloud part and an edge part for smart parking surveillance. On the edge device, a Raspberry Pi, background subtraction and an SSD vehicle detector were implemented, and only bounding box-related information was sent back to the cloud for object tracking and occupancy judgment. The proposed work improved the efficiency, accuracy, and reliability of the sensing system in adverse weather conditions.
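The bandwidth saving of such an edge-cloud split comes from shipping detections instead of frames. The sketch below shows a hypothetical edge-side message format in that spirit, not the actual protocol of [4]; the detector output and field names are assumptions.

```python
import json

def to_edge_message(frame_id, detections):
    """Package edge detector output as a compact record for the cloud.

    detections: list of (x, y, w, h, confidence) tuples from the on-device
    detector (e.g., an SSD); raw frames never leave the edge device.
    """
    return json.dumps({
        "frame": frame_id,
        "boxes": [
            {"x": x, "y": y, "w": w, "h": h, "conf": round(conf, 2)}
            for (x, y, w, h, conf) in detections
        ],
    })

msg = to_edge_message(1042, [(120, 80, 60, 40, 0.93), (300, 90, 58, 42, 0.88)])
print(len(msg), "characters, versus hundreds of kilobytes for a raw frame")
```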
Detecting parking space occupancy with lightweight CNN models on edge devices has also been investigated by different researchers [135,143,232]. Another lightweight CNN, comprising factorization convolution layers and compression layers, was developed for edge computing and multiple object detection on an Nvidia Jetson device for transportation cyber-physical systems [232]. Cyber-attacks can also be detected in transportation cyber-physical systems using machine learning: Chen et al. proposed a deep belief network structure to achieve attack detection in a transportation mobile edge computing environment [233], and the UAV can also serve as an edge unit for attack detection for smart vehicles [234]. Another interesting application of edge machine learning is detecting road surface quality issues onboard a vehicle [235,236]. Traditional machine learning methods, such as random forest, appeared to perform well, with high accuracy and real-time operation for this task.

4.3. Challenges in ITS Sensing

4.3.1. Challenge 1: Heterogeneity

Developing advanced ITS applications requires the adoption of different sensors and sensing methods. On a large scale, heterogeneity resides in many aspects, e.g., hardware, software, power supply, and data. There is a large variety of sensor hardware across different ITS tasks. Magnetic sensors, radar sensors, infrared sensors, LiDAR, cameras, etc., are common sensor types, each of which offers unique advantages in certain scenarios. These sensors differ in cost, size, material, reliability, working environment, sensing capability, and so on. Not only is there a large variety of sensors themselves, but the hardware supporting the sensing functions, for storage and protection, is also diverse. The associated hardware may limit the applicability of sensors as well: a sensor with local storage is able to store data onsite for later use, and a sensor with a waterproof shell is able to work outdoors, while one without may only be suitable for indoor monitoring.
Even within the same type of sensor, there can be significant variance in detailed configurations, which influences the effectiveness and applicability of the sensors. Camera resolution is one example: high-resolution cameras are suitable for tasks that are not possible with low-resolution cameras, such as small object detection, while low-resolution cameras may support less complicated tasks with better efficiency and lower cost. The installation locations of the same sensors also vary. As aforementioned, sensors onboard a vehicle or carried by a pedestrian have different functions from those installed on infrastructure. Some sensors can only be installed on infrastructure, and some are appropriate for onboard sensing. For example, loop detectors and magnetic nodes are most often on or underneath the road surface, while sensors for collision avoidance need to be onboard cars, buses, or trucks.
Software is another aspect that introduces heterogeneity in ITS sensing. There is open-source software and proprietary software. Open-source software is free and flexible and can be customized for specific tasks; however, there is a relatively high risk that some open-source software is unreliable and may only work in specific settings. Many open-source codebases are hosted on platforms such as GitHub, and a good open-source tool can have massive influence on the research community; the open implementation of Mask R-CNN [237], for example, has been widely applied to traffic object instance segmentation. Proprietary software is generally more reliable, and some comes with customer service from the company that develops it. These software tools are usually not free, offer less flexibility for customization, and make it hard to know the internal sensing algorithms or design. When an ITS system is composed of multiple software tools, which is likely the case most of the time, and these tools lack transparency or flexibility regarding communication, there will be hurdles in developing efficient and advanced ITS applications.
Heterogeneous settings in ITS sensing inevitably produce a heterogeneous mix of data, such as vehicle dynamics, traffic flow, driver behavior, safety measurements, and environmental features. There are uncertainties, noise, and missing patches in ITS data, yet modern ITS applications require data that are of high quality, integrated, and sometimes available in real time. Beyond improving sensing functionality for individual sensors at a single location, new challenges arise in the integration of heterogeneous data. New technologies also pose challenges in data collection: some data under traditional settings will become redundant, while new data will be required for emerging tasks, e.g., CAV safety and mixed autonomy.
Edge computing is promising for improving the integration of different data sources. For example, Ke et al. [238] developed an onboard edge computing system based on the Nvidia Jetson TX2 for near-crash detection from video streams. The system leveraged edge computing for real-time video analytics and communicated with another LiDAR-based system onboard; although the sensor sets and data generated from the two onboard systems were very different, the designed edge data fusion framework addressed the data heterogeneity of the two groups neatly through a CAN-based triggering mechanism. In future ITS, data heterogeneity problems are expected to be more complex, involving not only data from different sensors on the same entity but also data with completely different characteristics and generation processes. Edge computing will move us one step closer to an ideal solution by formatting the data immediately after they are produced.
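The following is a simplified sketch of trigger-based fusion of two heterogeneous onboard streams, loosely inspired by the design in [238]: when one subsystem flags an event, records from the other subsystem within a time window are attached, so only aligned event snippets, not full raw streams, need to be kept. The field names and window length are illustrative assumptions.

```python
# Hedged sketch of event-triggered fusion of heterogeneous edge streams.
from collections import deque

WINDOW_S = 5.0  # seconds of context kept around a trigger (assumption)

lidar_buffer = deque()  # (timestamp, lidar_record) pairs, newest last

def on_lidar_record(ts, record):
    """Buffer LiDAR records, dropping anything older than the fusion window."""
    lidar_buffer.append((ts, record))
    while lidar_buffer and ts - lidar_buffer[0][0] > WINDOW_S:
        lidar_buffer.popleft()

def on_video_near_crash(ts, video_clip):
    """Called when the video subsystem detects a near-crash event: attach
    all LiDAR records within the window to form one fused event snippet."""
    matched = [r for t, r in lidar_buffer if abs(t - ts) <= WINDOW_S]
    return {"time": ts, "video": video_clip, "lidar": matched}
```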

4.3.2. Challenge 2: High Probability of Sensor Failure

Large-scale and real-time sensing requirements leave little tolerance for sensor failure, because failures may cause severe problems for the operation and safety of ITS. A representative example is sensor failure in an autonomous vehicle, which could lead to property damage, injuries, and even fatalities. As ITS becomes more advanced, where one functional system will likely consist of multiple coordinated modules, the failure of one sensor could cause the entire system to malfunction. For instance, a connected vehicle and infrastructure system may stop working because some infrastructure-mounted sensors produce no readings, interrupting the data flow for the whole system.
Sensor failure may rarely occur for any individual sensor, but by elementary probability, if the failure probability of one sensor during a specific period is p, the probability that at least one failure occurs among N independent sensors is 1 − (1 − p)^N. When N is large enough, the probability of some sensor failure becomes very high. This phenomenon is similar to fault tolerance in cloud computing, which is about keeping ongoing work running when a few machines are down. However, sensor fault tolerance in ITS sensing is more challenging due to: (1) the hardware, software, and data heterogeneity mentioned in the last sub-section; (2) the fact that, unlike in cloud computing settings, sensors are connected to different networks or even not connected to any network; (3) the potential cost and loss from a sensor fault, which could be much more serious than in cloud computing.
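A quick numerical illustration of this scaling argument: even with a per-sensor failure probability of 0.1% over some period, a deployment of 1000 sensors is more likely than not to see at least one failure during that period.

```python
# Worked example of the 1 - (1 - p)^N failure probability above.
p, N = 0.001, 1000
prob_any_failure = 1 - (1 - p) ** N
print(f"{prob_any_failure:.3f}")  # ~0.632, i.e., failure is the likely outcome
```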
This problem naturally exists and is hard to eliminate because, as discussed above, even when the failure probability of a single sensor is very low, in city-scale ITS sensing applications with hundreds or even thousands of heterogeneous sensors, the probability of some failure rises significantly, and it is not realistic to reduce the failure probability of a single sensor or device to zero in the real world. Edge computing could improve the situation. On the one hand, an edge computing device or an edge server can make the network edge itself more robust to sensor faults through fault detection designs. A sensor directly connected to the network cannot notify the datacenter of its own failure, but when a sensor mounted on an edge device fails, the edge device can detect the failure and communicate it to the datacenter. On the other hand, backup sensor sets could be deployed within an edge computing platform; with computing capability at the edge, the backup sensor set could be activated in case of sensor failure. Nevertheless, more comprehensive solutions that fully address sensor failure are still under exploration.
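A minimal sketch of the two edge-side mitigations just mentioned, a heartbeat-based fault detector that reports a dead sensor to the datacenter and a failover to a backup sensor set, is given below; all class and callable names are illustrative assumptions, not a real sensor API.

```python
# Hedged sketch of edge-side sensor fault detection with backup failover.
import time

HEARTBEAT_TIMEOUT_S = 2.0  # assumption: expected reading interval

class EdgeWatchdog:
    def __init__(self, primary_read, backup_read, report_fault):
        self.primary_read = primary_read  # callable returning a reading or None
        self.backup_read = backup_read    # callable for the backup sensor set
        self.report_fault = report_fault  # callable notifying the datacenter
        self.last_ok = time.monotonic()
        self.failed_over = False

    def poll(self):
        reading = self.primary_read()
        now = time.monotonic()
        if reading is not None:
            self.last_ok = now
            return reading
        if now - self.last_ok > HEARTBEAT_TIMEOUT_S and not self.failed_over:
            # Unlike a bare networked sensor, the edge device itself can
            # report the failure and switch to the backup sensor set.
            self.report_fault("primary sensor silent; switching to backup")
            self.failed_over = True
        return self.backup_read() if self.failed_over else None
```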

4.3.3. Challenge 3: Sensing in Extreme Cases

ITS sensing tasks that seem simple can become extraordinarily complicated or unreliable in extreme cases, such as adverse weather, occlusion, and nighttime. A typical example is video sensing, which is sensitive to lighting changes, shadow, reflection, and so forth. In smart parking surveillance, a recent study showed that video-based detectors performed more reliably indoors than outdoors due to extreme lighting conditions and adverse weather [4]. Due to low-lighting conditions, even the cutting-edge video-based ADAS products on the market are not recommended for operation at night [239]. LiDAR is one of the most reliable sensors for ITS; however, its sensing performance degrades in rainy and snowy weather, and it is also sensitive to objects with reflective surfaces. GPS sensors experience signal obstruction from surrounding buildings, trees, tunnels, mountains, and even human bodies; therefore, they work well in open areas but not where obstructions are unavoidable, such as downtown areas.
In ITS, and especially in automated vehicle testing, extreme cases can also refer to corner cases that an automated and intelligent vehicle has not encountered before. For example, a pedestrian crossing the freeway at night may not be a common case that is thoroughly covered in the database, so a vehicle might not understand the sensing results well enough to proceed confidently, causing uncertainty in real-time decision making. Some corner cases may be created by attackers: adding noise imperceptible to human eyes to a traffic sign image can result in a missed detection of the sign [240], and such adversarial examples threaten the security and robustness of ITS sensing. Corner case detection appears to be one of the hurdles that slow the pace towards Level-5 autonomous driving. The first question is: how does a vehicle know when it encounters a corner case? The second question is: how should it handle the unforeseen situation? We expect that corner case handling will be an issue not only for automated vehicles but also for ITS sensing components at large.
There have been research studies focused on addressing extreme-case challenges. Li et al. [241] developed a domain adaptation method that used daytime UAV sensing data to train detectors for traffic sensing at nighttime, and transfer learning in general is a promising direction for addressing extreme cases in sensing. With edge computing, a machine is expected to collect onsite data and improve its sensing functions over time. Although overfitting is undesirable in traditional machine learning, a particular edge device could deliberately overfit to its own location for improved sensing performance there.
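A hedged PyTorch sketch of this on-site adaptation idea follows: a pretrained backbone is frozen and only the final layer is fine-tuned on data collected at one deployment location, deliberately specializing the model to that site. The model choice and hyperparameters are illustrative, not from [241].

```python
# Hedged sketch of on-site transfer learning at an edge device.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)  # assumption: pretrained weights loaded
for param in model.parameters():
    param.requires_grad = False        # freeze the generic backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new site-specific head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def adapt_on_site(local_images, local_labels, epochs=5):
    """Fine-tune only the head on a small batch of locally collected samples."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(local_images), local_labels)
        loss.backward()
        optimizer.step()
```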

4.3.4. Challenge 4: Privacy Protection

Privacy protection is another major challenge. As ITS sensing becomes more advanced, more and more detailed information is available, and there are increasing concerns regarding the use of the data and possible invasions of privacy. Bluetooth sensing detects the MAC addresses of devices such as cell phones and, in some applications, tracks the devices, which risks exposing not only people's identities but also their location information. Camera images, when not properly protected, may contain private information, such as faces and license plates. These data are often stored in the cloud and are not owned by the people whose private information they contain.
Edge computing is a promising solution to privacy challenges: data are collected and processed at the edge, and raw data containing private information are not transmitted to the cloud. In [238], video and other sensor data were processed onboard the vehicles, and most of the data were discarded in real time; while the primary purpose was to save network and cloud resources, privacy protection was achieved as well. Federated learning [2] is a learning mechanism for privacy protection that assumes users at different locations/agencies cannot share all their data with the cloud datacenter, so learning on new data has to happen at the edge first, and only intermediate values (e.g., model updates) are transmitted to the cloud.

5. Future Research Directions

5.1. Resource-Efficient Edge Sensing Design

The state-of-the-art sensing models, especially those based on AI, are mostly resource-intensive and consume substantial computation, storage, network bandwidth, and power. Abundant hardware and network support is crucial for boosting the performance of the latest sensing methods; however, edge devices are resource-constrained. This sharp contrast naturally calls for resource-friendly algorithms and system architectures for ITS sensing. There have been quite a few studies on AI model compression techniques, e.g., network pruning and quantization, that aim to reduce the size of neural networks. In addition to resizing existing AI models, another solution is to exploit AutoML and neural architecture search methods to search over the model parameter space while accounting for edge hardware constraints [242,243]. Alongside general designs on AI models, it is sometimes beneficial to leverage the characteristics of certain ITS scenes and theories, which can simplify the models and even improve overall robustness when incorporated appropriately. On the other hand, system design is vital for edge sensing: system architecture design includes designs at the edge and designs across the edge and cloud, and its purpose is to optimize resource allocation to support the requirements of different sensing tasks.
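As a minimal sketch of the two compression techniques named above, the following uses standard PyTorch utilities for magnitude-based weight pruning and post-training dynamic quantization; the tiny model is a placeholder, not an ITS sensing network.

```python
# Hedged sketch of pruning + dynamic quantization for edge deployment.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

# Prune 30% of the smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantize the remaining weights to int8 for smaller, faster edge inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(quantized(x).shape)  # torch.Size([1, 10])
```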

5.2. Federated Sensing

Federated learning, proposed by Google in 2016, enables joint learning from data on multiple edge devices to train a centralized model; the learning occurs both on the edge devices and in the centralized cloud. In ITS sensing, federated learning can be leveraged in many research areas (e.g., IoV sensing). Agents of the same type often share the same sensing model for a specific task, so it would be helpful to update the model as multiple agents collect new data; this is expected to improve overall sensing performance by training on more samples. In large-scale applications, new data accumulate quickly, so the general model can be retrained iteratively. At present, however, these valuable data are often discarded or stored away for offline analysis, and there are hurdles to data sharing among different edge devices due to privacy issues or technology constraints. In the future, federated sensing schemes are expected to be devised to enable real-time data sharing from edge devices and to enhance ITS sensing applications.
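The following is a minimal sketch of the federated averaging (FedAvg) step underlying such federated sensing schemes: each edge device trains locally on its own data, and the server aggregates only model weights, never raw sensor data. The toy linear model and single-step local update are illustrative assumptions.

```python
# Hedged sketch of one FedAvg round with a toy model.
import torch
import torch.nn as nn

def local_update(model, data, target, lr=0.01, steps=1):
    """One client's local training; its data never leave the device."""
    local = nn.Linear(model.in_features, model.out_features)
    local.load_state_dict(model.state_dict())
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(local(data), target).backward()
        opt.step()
    return local.state_dict()  # only weights are shared upstream

def fedavg(global_model, client_states):
    """Server side: average client weights into the global model."""
    avg = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
           for k in global_model.state_dict()}
    global_model.load_state_dict(avg)

global_model = nn.Linear(4, 1)
clients = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
states = [local_update(global_model, x, y) for x, y in clients]
fedavg(global_model, states)
```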

5.3. Cooperative Sensing by Infrastructure and Road Users

While federated learning will benefit multi-agent sensing for the same sensing task, sensor data integration across different ITS components is expected to be another research direction. At present, sensing tasks are carried out by individual road users or individual infrastructure, e.g., roadside radars for speed enforcement, surveillance cameras for traffic flow detection, and onboard LiDAR for collision avoidance. Even sensor fusion techniques are mostly about signal integration within an individual agent, e.g., camera and LiDAR fusion onboard a vehicle. However, sensor data from different types of ITS components could provide richer information from different angles on the same problem. For example, on a freeway segment of interest, individual loop detectors are distributed at fixed locations, sampling the traffic flow (speed, volume, occupancy) about every 0.5–1 mile, so there is no ground truth data on what happens at locations not covered by the loop detectors. If we can develop a cooperative sensing mechanism that integrates vehicle telematics data or other onboard data, tasks such as congestion management and incident localization would benefit greatly. Another example is jointly detecting objects of interest (e.g., street parking spaces): from a certain angle, whether from a road user or roadside infrastructure, a given parking space may be occluded; cooperative sensing at the edge could help improve the detection accuracy and reliability.
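A toy sketch of the freeway example follows: a fixed loop detector's speed estimate is fused with probe-vehicle speeds covering the gap between detectors. The inverse-variance weighting is one simple fusion choice among many, and all numbers are illustrative.

```python
# Hedged sketch of cooperative infrastructure/road-user speed fusion.
def fuse_speed(loop_speed, loop_var, probe_speeds, probe_var):
    """Combine two noisy estimates of segment speed (mph) by
    inverse-variance weighting."""
    if not probe_speeds:
        return loop_speed
    probe_mean = sum(probe_speeds) / len(probe_speeds)
    # Averaging n probes reduces the variance of their mean by a factor of n.
    probe_mean_var = probe_var / len(probe_speeds)
    w_loop = 1 / loop_var
    w_probe = 1 / probe_mean_var
    return (w_loop * loop_speed + w_probe * probe_mean) / (w_loop + w_probe)

# Probes between detectors pull the estimate down (~46.7 mph here),
# hinting at congestion the fixed loop detector cannot see.
print(fuse_speed(58.0, 25.0, [41.0, 44.0, 39.0], 36.0))
```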

5.4. ITS Sensing Data Abstraction at Edge

There will be a huge number of edge devices for ITS sensing. The large amount of data provided at the edge, even if not raw data, still needs further abstraction to a level that balances workload and resources. A few questions may guide this exploration. First, to what extent should edge devices abstract the data? Second, data from different devices may be in different formats, e.g., cooperative sensing data, so what are the abstraction and fusion frameworks for multi-source data? Third, given that the data abstraction layer sits on top of the sensor layer, how should an application's abstraction strategies change as the sensor distribution changes? We envision that appropriate data abstraction is the foundation supporting advanced tools and application development in ITS sensing. Good data abstraction strategies at the edge will not only balance resource usage and information availability but also make the upper layers of pattern analysis and decision-making easier.
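To make the notion of abstraction levels tangible, the following is a hedged sketch of what an edge-side abstraction layer might emit: instead of raw frames or point clouds, each device publishes a compact, uniformly structured record at a chosen level. The schema and level definitions are illustrative assumptions, not a standard.

```python
# Hedged sketch of a multi-level edge data abstraction record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeObservation:
    device_id: str
    timestamp: float
    level: int               # assumption: 1 = features, 2 = objects, 3 = events
    object_classes: List[str] = field(default_factory=list)
    counts: dict = field(default_factory=dict)

    def to_event(self):
        """Abstract a level-2 object list up to a level-3 event summary,
        shrinking the message while keeping decision-relevant content."""
        return EdgeObservation(
            device_id=self.device_id, timestamp=self.timestamp, level=3,
            counts={c: self.object_classes.count(c)
                    for c in set(self.object_classes)})

obs = EdgeObservation("cam-017", 1695370000.0, 2,
                      object_classes=["car", "car", "pedestrian"])
print(obs.to_event().counts)  # {'car': 2, 'pedestrian': 1}
```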

5.5. Training and Sensing All at Edge

A previous survey on edge computing [2] summarized six levels in the development of edge intelligence, ranging from level-1 cloud-edge co-inference to level-6 training and inference entirely on edge devices. We agree with this view and envision that ITS sensing with edge computing will follow a similar development path. At present, most edge computing applications in ITS are at levels 1–3, where training happens in the cloud and models are deployed to the edge devices with or without compression/optimization; sometimes the sensing function is performed collaboratively by the edge and the cloud. As federated sensing matures, with the huge benefit of consistent data input for updating the general model, a reasonable extension is for each device to also update a customized sensing model online at the edge. Compared to a general model, all-at-edge training and sensing is more flexible and intelligent. This does not mean, however, that centralized learning from distributed devices will become useless; even in the era of level-5 or level-6 edge intelligence, we expect both model updates on single devices and some degree of aggregated learning for optimal sensing performance.

6. Conclusions

The intersection between ITS and EC is expected to have enormous potential in smart city applications. This paper first reviewed the key components of ITS, including sensing, data pre-processing, pattern analysis, traffic prediction, information communication, and control. This was followed by a detailed review of the recent advances in ITS sensing from three perspectives: infrastructure-based sensing, vehicle onboard sensing, and aerial sensing; under each of the three corresponding subsections, we further divided these perspectives into representative applications. Based on the review of state-of-the-art models in ITS sensing, we then summarized three objectives of future ITS sensing (large-scale sensing, high intelligence, and real-time capability), followed by a review of recent edge computing applications in ITS sensing. Several key challenges in ITS sensing (heterogeneity, high probability of sensor failure, sensing in extreme cases, and privacy protection) and how edge computing could help address them were then discussed. Five future research directions were envisioned in Section 5: resource-efficient edge sensing design, federated sensing, cooperative sensing by infrastructure and road users, ITS sensing data abstraction at the edge, and training and sensing all at the edge. Edge computing applications in ITS sensing, as well as in other ITS components, are still in their infancy; the road ahead is full of opportunities and challenges.

Author Contributions

X.Z. and R.K. conceptualized and presented the idea. X.Z., R.K., H.Y. and C.L. studied the state-of-the-art methods and summarized the literature on ITS sensing and edge computing. X.Z. and R.K. took the lead in organizing and writing the manuscript, with the participation of H.Y. and C.L. All authors provided feedback, helped shape the research, and contributed to the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  2. Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proc. IEEE 2019, 107, 1738–1762. [Google Scholar] [CrossRef] [Green Version]
  3. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile edge computing: A survey. IEEE Internet Things J. 2017, 5, 450–465. [Google Scholar] [CrossRef] [Green Version]
  4. Ke, R.; Zhuang, Y.; Pu, Z.; Wang, Y. A Smart, Efficient, and Reliable Parking Surveillance System with Edge Artificial Intelligence on IoT Devices. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4962–4974. [Google Scholar] [CrossRef] [Green Version]
  5. Ananthanarayanan, G.; Bahl, P.; Bodik, P.; Chintalapudi, K.; Philipose, M.; Ravindranath, L.; Sinha, S. Real-time video analytics: The killer app for edge computing. Computer 2017, 50, 58–67. [Google Scholar] [CrossRef]
  6. Ji, B.; Han, Y.; Wang, Y.; Cao, D.; Tao, F.; Fu, Z.; Li, P.; Wen, H. Relay Cooperative Transmission Algorithms for IoV Under Aggregated Interference. IEEE Trans. Intell. Transp. Syst. 2021, 1–14. [Google Scholar] [CrossRef]
  7. Dai, Y.; Xu, D.; Maharjan, S.; Qiao, G.; Zhang, Y. Artificial Intelligence Empowered Edge Computing and Caching for Internet of Vehicles. IEEE Wirel. Commun. 2019, 3, 12–18. [Google Scholar] [CrossRef]
  8. Al-Turjman, F.; Malekloo, A. Smart parking in IoT-enabled cities: A survey. Sustain. Cities Soc. 2019, 49, 101608. [Google Scholar] [CrossRef]
  9. Ke, R. Real-Time Video Analytics Empowered by Machine Learning and Edge Computing for Smart Transportation Applications; University of Washington: Seattle, WA, USA, 2020. [Google Scholar]
  10. Ban, X.J.; Herring, R.; Margulici, J.D.; Bayen, A.M. Optimal Sensor Placement for Freeway Travel Time Estimation. In Transportation and Traffic Theory 2009: Golden Jubilee; Springer: Berlin/Heidelberg, Germany, 2009; pp. 697–721. [Google Scholar]
  11. Sharma, A.; Bullock, D.M.; Bonneson, J.A. Input-output and hybrid techniques for real-time prediction of delay and maximum queue length at signalized intersections. Transp. Res. Rec. 2007, 2035, 69–80. [Google Scholar] [CrossRef] [Green Version]
  12. Wang, Y.; Nihan, N.L. Freeway traffic speed estimation with single-loop outputs. Transp. Res. Rec. 2000, 1727, 120–126. [Google Scholar] [CrossRef]
  13. Cheung, S.Y.; Coleri, S.; Dundar, B.; Ganesh, S.; Tan, C.W.; Varaiya, P. Traffic measurement and vehicle classification with single magnetic sensor. Transp. Res. Rec. 2005, 1917, 173–181. [Google Scholar] [CrossRef]
  14. Haoui, A.; Kavaler, R.; Varaiya, P. Wireless magnetic sensors for traffic surveillance. Transp. Res. Part C Emerg. Technol. 2008, 16, 294–306. [Google Scholar] [CrossRef]
  15. Buch, N.; Velastin, S.A.; Orwell, J. A review of computer vision techniques for the analysis of urban traffic. IEEE Trans. Intell. Transp. Syst. 2011, 12, 920–939. [Google Scholar] [CrossRef]
  16. Datondji, S.R.E.; Dupuis, Y.; Subirats, P.; Vasseur, P. A Survey of Vision-Based Traffic Monitoring of Road Intersections. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2681–2698. [Google Scholar] [CrossRef]
  17. Odat, E.; Shamma, J.S.; Claudel, C. Vehicle Classification and Speed Estimation Using Combined Passive Infrared/Ultrasonic Sensors. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1593–1606. [Google Scholar] [CrossRef]
  18. Zhao, J.; Xu, H.; Liu, H.; Wu, J.; Zheng, Y.; Wu, D. Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors. Transp. Res. Part C Emerg. Technol. 2019, 100, 68–87. [Google Scholar] [CrossRef]
  19. Sen, R.; Siriah, P.; Raman, B. RoadSoundSense: Acoustic sensing based road congestion monitoring in developing regions. In Proceedings of the 8th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, Salt Lake City, UT, USA, 27–30 June 2011; pp. 125–133. [Google Scholar] [CrossRef] [Green Version]
  20. Malinovskiy, Y.; Saunier, N.; Wang, Y. Analysis of pedestrian travel with static bluetooth sensors. Transp. Res. Rec. 2012, 2299, 137–149. [Google Scholar] [CrossRef]
  21. Dunlap, M.; Li, Z.; Henrickson, K.; Wang, Y. Estimation of origin and destination information from bluetooth and wi-fi sensing for transit. Transp. Res. Rec. 2016, 2595, 11–17. [Google Scholar] [CrossRef]
  22. Herrera, J.C.; Work, D.B.; Herring, R.; Ban, X.; Jacobson, Q.; Bayen, A.M. Evaluation of traffic data obtained via GPS-enabled mobile phones: The Mobile Century field experiment. Transp. Res. Part C Emerg. Technol. 2010, 18, 568–583. [Google Scholar] [CrossRef] [Green Version]
  23. Kamyab, M.; Remias, S.; Najmi, E.; Rabinia, S.; Waddell, J.M. Machine learning approach to forecast work zone mobility using probe vehicle data. Transp. Res. Rec. 2020, 2674, 157–167. [Google Scholar] [CrossRef]
  24. McCormack, E.; Hallenbeck, M.E. ITS devices used to collect truck data for performance benchmarks. Transp. Res. Rec. 2006, 1957, 43–50. [Google Scholar] [CrossRef]
  25. Ban, X.J.; Hao, P.; Sun, Z. Real time queue length estimation for signalized intersections using travel times from mobile sensors. Transp. Res. Part C Emerg. Technol. 2011, 19, 1133–1156. [Google Scholar]
  26. Chen, X.; Wu, S.; Shi, C.; Huang, Y.; Yang, Y.; Ke, R.; Zhao, J. Sensing Data Supported Traffic Flow Prediction via Denoising Schemes and ANN: A Comparison. IEEE Sens. J. 2020, 20, 14317–14328. [Google Scholar] [CrossRef]
  27. Taguchi, S.; Koide, S.; Yoshimura, T. Online Map Matching with Route Prediction. IEEE Trans. Intell. Transp. Syst. 2019, 20, 338–347. [Google Scholar] [CrossRef]
  28. Goh, C.Y.; Dauwels, J.; Mitrovic, N.; Asif, M.T.; Oran, A.; Jaillet, P. Online map-matching based on hidden Markov model for real-time traffic sensing applications. In Proceedings of the 2012 15th International IEEE Conference on Intelligent Transportation Systems, ITSC 2012, Anchorage, AK, USA, 16–19 September 2012; pp. 776–781. [Google Scholar] [CrossRef] [Green Version]
  29. Mohamed, R.; Aly, H.; Youssef, M. Accurate Real-Time Map Matching for Challenging Environments. IEEE Trans. Intell. Transp. Syst. 2017, 18, 847–857. [Google Scholar] [CrossRef]
  30. Duan, Y.; Lv, Y.; Liu, Y.-L.; Wang, F.-Y. An efficient realization of deep learning for traffic data imputation. Transp. Res. Part C Emerg. Technol. 2016, 72, 168–181. [Google Scholar] [CrossRef]
  31. Chen, Y.; Lv, Y.; Wang, F.Y. Traffic Flow Imputation Using Parallel Data and Generative Adversarial Networks. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1624–1630. [Google Scholar] [CrossRef]
  32. Zhuang, Y.; Ke, R.; Wang, Y. Innovative method for traffic data imputation based on convolutional neural network. IET Intell. Transp. Syst. 2018, 13, 605–613. [Google Scholar] [CrossRef]
  33. Asadi, R.; Regan, A. A convolution recurrent autoencoder for spatio-temporal missing data imputation. arXiv 2019, arXiv:1904.12413. [Google Scholar]
  34. Chen, X.; Yang, J.; Sun, L. A nonconvex low-rank tensor completion model for spatiotemporal traffic data imputation. Transp. Res. Part C Emerg. Technol. 2020, 117, 102673. [Google Scholar] [CrossRef]
  35. Zhang, K.; He, Z.; Zheng, L.; Zhao, L.; Wu, L. A generative adversarial network for travel times imputation using trajectory data. Comput. Civ. Infrastruct. Eng. 2020, 36, 197–212. [Google Scholar] [CrossRef]
  36. Torres, R.; Ohashi, O.; Pessin, G. A machine-learning approach to distinguish passengers and drivers reading while driving. Sensors 2019, 19, 3174. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Zhao, W.; McCormack, E.; Dailey, D.J.; Scharnhorst, E. Using truck probe gps data to identify and rank roadway bottlenecks. J. Transp. Eng. 2013, 139, 1–7. [Google Scholar] [CrossRef] [Green Version]
  38. Hsu, Y.W.; Chen, Y.W.; Perng, J.W. Estimation of the number of passengers in a bus using deep learning. Sensors 2020, 20, 2178. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Sargolzaei, A.; Crane, C.D.; Abbaspour, A.; Noei, S. A machine learning approach for fault detection in vehicular cyber-physical systems. In Proceedings of the 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, USA, 18–20 December 2016; pp. 636–640. [Google Scholar] [CrossRef]
  40. Jahangiri, A.; Rakha, H.A. Transportation Mode Recognition Using Mobile Phone Sensor Data. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2406–2417. [Google Scholar] [CrossRef]
  41. Ashqar, H.I.; Almannaa, M.H.; Elhenawy, M.; Rakha, H.A. Smartphone Transportation Mode Recognition Using a Hierarchical Machine Learning Classifier and Pooled Features from Time and Frequency Domains. arXiv 2020, 20, 244–252. [Google Scholar] [CrossRef] [Green Version]
  42. Chen, Z.; Wang, S.; Shen, Z.; Chen, Y.; Zhao, Z. Online sequential ELM based transfer learning for transportation mode recognition. In Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems (CIS), Manila, Philippines, 12–15 November 2013; pp. 78–83. [Google Scholar] [CrossRef]
  43. Jeyakumar, J.V.; Sandha, S.S.; Lee, E.S.; Tausik, N.; Xia, Z.; Srivastava, M. Deep convolutional bidirectional LSTM based transportation mode recognition. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, Singapore, 8–12 October 2018; pp. 1606–1615. [Google Scholar] [CrossRef]
  44. Parsa, A.B.; Movahedi, A.; Taghipour, H.; Derrible, S.; Mohammadian, A. (Kouros) Toward safer highways, application of XGBoost and SHAP for real-time accident detection and feature analysis. Accid. Anal. Prev. 2020, 136, 105405. [Google Scholar] [CrossRef]
  45. Parsa, A.B.; Taghipour, H.; Derrible, S.; Mohammadian, A. (Kouros) Real-time accident detection: Coping with imbalanced data. Accid. Anal. Prev. 2019, 129, 202–210. [Google Scholar] [CrossRef]
  46. Zhang, Z.; He, Q.; Gao, J.; Ni, M. A deep learning approach for detecting traffic accidents from social media data. Transp. Res. Part C Emerg. Technol. 2018, 86, 580–596. [Google Scholar] [CrossRef] [Green Version]
  47. Dogru, N.; Subasi, A. Traffic accident detection using random forest classifier. In Proceedings of the 15th Learning and Technology Conference (L&T), Jeddah, Saudi Arabia, 25–26 February 2018; pp. 40–45. [Google Scholar] [CrossRef]
  48. Zhang, Z.; Li, M.; Lin, X.; Wang, Y.; He, F. Multistep Speed Prediction on Traffic Networks: A Graph Convolutional Sequence-to-Sequence Learning Approach with Attention Mechanism. arXiv 2018, arXiv:1810.10237. [Google Scholar]
  49. Cui, Z.; Ke, R.; Pu, Z.; Wang, Y. Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Forecasting Network-Wide Traffic State with Missing Values. Transp. Res. Part C Emerg. Technol. 2020, 118, 102674. [Google Scholar] [CrossRef]
  50. Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; Wang, F.Y. Traffic Flow Prediction with Big Data: A Deep Learning Approach. IEEE Trans. Intell. Transp. Syst. 2015, 16, 865–873. [Google Scholar] [CrossRef]
  51. Ma, X.; Tao, Z.; Wang, Y.; Yu, H.; Wang, Y. Long short-term memory neural network for traffic speed prediction using remote microwave sensor data. Transp. Res. Part C Emerg. Technol. 2015, 54, 187–197. [Google Scholar] [CrossRef]
  52. Yu, H.; Wu, Z.; Wang, S.; Wang, Y.; Ma, X. Spatiotemporal recurrent convolutional networks for traffic prediction in transportation networks. Sensors 2017, 17, 1501. [Google Scholar] [CrossRef] [Green Version]
  53. Ke, R.; Li, W.; Cui, Z.; Wang, Y. Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact. Transp. Res. Rec. 2020, 2674, 459–470. [Google Scholar] [CrossRef]
  54. Kumar, S.V. Traffic Flow Prediction using Kalman Filtering Technique. Procedia Engineering 2020, 187, 582–587. [Google Scholar] [CrossRef]
  55. Noursalehi, P.; Koutsopoulos, H.N.; Zhao, J. Real time transit demand prediction capturing station interactions and impact of special events. Transp. Res. Part C 2018, 97, 277–300. [Google Scholar] [CrossRef]
  56. Zhang, J.; Chen, F.; Cui, Z.; Guo, Y.; Zhu, Y. Deep Learning Architecture for Short-Term Passenger Flow Forecasting in Urban Rail Transit. IEEE Trans. Intell. Transp. Syst. 2020, 1–11. [Google Scholar] [CrossRef]
  57. Yao, H.; Wu, F.; Ke, J.; Tang, X.; Jia, Y.; Lu, S.; Gong, P.; Ye, J.; Li, Z. Deep multi-view spatial-temporal network for taxi demand prediction. arXiv Prepr. 2018, arXiv:1802.08714. [Google Scholar]
  58. Geng, X.; Li, Y.; Wang, L.; Zhang, L.; Yang, Q.; Ye, J.; Liu, Y. Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3656–3663. [Google Scholar]
  59. Ke, J.; Zheng, H.; Yang, H.; Michael, X. Short-term forecasting of passenger demand under on-demand ride services: A spatio-temporal deep learning approach. Transp. Res. Part C 2017, 85, 591–608. [Google Scholar] [CrossRef] [Green Version]
  60. Xu, C.; Ji, J.; Liu, P. The station-free sharing bike demand forecasting with a deep learning approach and large-scale datasets. Transp. Res. Part C 2018, 95, 47–60. [Google Scholar] [CrossRef]
  61. Lin, L.; He, Z.; Peeta, S. Predicting station-level hourly demand in a large-scale bike-sharing network: A graph convolutional neural network approach. Transp. Res. Part C 2018, 97, 258–276. [Google Scholar] [CrossRef] [Green Version]
  62. Yang, S.; Ma, W.; Pi, X.; Qian, S. A deep learning approach to real-time parking occupancy prediction in transportation networks incorporating multiple spatio-temporal data sources. Transp. Res. Part C 2019, 107, 248–265. [Google Scholar] [CrossRef]
  63. Ridel, D.; Rehder, E.; Lauer, M.; Stiller, C.; Wolf, D. A literature review on the prediction of pedestrian behavior in urban scenarios. In Proceedings of the 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3105–3112. [Google Scholar]
  64. Tang, J.; Liu, F.; Zhang, W.; Ke, R.; Zou, Y. Lane-changes prediction based on adaptive fuzzy neural network. Expert Syst. Appl. 2018, 91, 452–463. [Google Scholar] [CrossRef]
  65. Cui, Z.; Henrickson, K.; Ke, R.; Wang, Y. Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting. IEEE Trans. Intell. Transp. Syst. 2019, 21, 4883–4894. [Google Scholar] [CrossRef] [Green Version]
  66. Zhang, Y.; Wang, S.; Chen, B.; Cao, J.; Huang, Z. Trafficgan: Network-scale deep traffic prediction with generative adversarial nets. IEEE Trans. Intell. Transp. Syst. 2019, 22, 219–230. [Google Scholar]
  67. Cui, Z.; Ke, R.; Pu, Z.; Ma, X.; Wang, Y. Learning traffic as a graph: A gated graph wavelet recurrent neural network for network-scale traffic prediction. Transp. Res. Part C Emerg. Technol. 2020, 115. [Google Scholar] [CrossRef]
  68. Smaglik, E.J.; Sharma, A.; Bullock, D.M.; Sturdevant, J.R.; Duncan, G. Event-based data collection for generating actuated controller performance measures. Transp. Res. Rec. 2007, 2035, 97–106. [Google Scholar] [CrossRef] [Green Version]
  69. Papageorgiou, M.; Kotsialos, A. Freeway ramp metering: An overview. IEEE Trans. Intell. Transp. Syst. 2002, 3, 271–281. [Google Scholar] [CrossRef]
  70. Chen, C.; Petty, K.; Skabardonis, A.; Varaiya, P.; Jia, Z. Freeway performance measurement system: Mining loop detector data. Transp. Res. Rec. 2001, 1748, 96–102. [Google Scholar] [CrossRef] [Green Version]
  71. Bugeja, M.; Dingli, A.; Attard, M.; Seychell, D. Comparison of vehicle detection techniques applied to IP camera video feeds for use in intelligent transport systems. Transp. Res. Procedia 2020, 45, 971–978. [Google Scholar] [CrossRef]
  72. Lakhal, N.M.B.; Nasri, O.; Adouane, L.; Slama, J.B.H. Controller area network reliability: Overview of design challenges and safety related perspectives of future transportation systems. IET Intell. Transp. Syst. 2020, 14, 1727–1739. [Google Scholar] [CrossRef]
  73. Kim, S.; Coifman, B. Comparing INRIX speed data against concurrent loop detector stations over several months. Transp. Res. Part C Emerg. Technol. 2014, 49, 59–72. [Google Scholar] [CrossRef]
  74. Mathew, J.K.; Desai, J.C.; Sakhare, R.S.; Kim, W.; Li, H.; Bullock, D.M. Big Data Applications for Managing Roadways. Inst. Transp. Eng. ITE J. 2021, 91, 28–35. [Google Scholar]
  75. Jeske, T. Floating car data from smartphones: What google and waze know about you and how hackers can control traffic. In Proceedings of the BlackHat Europe, Amsterdam, The Netherlands, 12–15 March 2013; pp. 1–12. [Google Scholar]
  76. Chen, S.; Hu, J.; Shi, Y.; Peng, Y.; Fang, J.; Zhao, R.; Zhao, L. Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G. IEEE Commun. Stand. Mag. 2017, 1, 70–76. [Google Scholar] [CrossRef]
  77. Abboud, K.; Omar, H.A.; Zhuang, W. Interworking of DSRC and cellular network technologies for V2X communications: A survey. IEEE Trans. Veh. Technol. 2016, 65, 9457–9470. [Google Scholar] [CrossRef]
  78. Chen, S.; Hu, J.; Shi, Y.; Zhao, L.; Li, W. A vision of C-V2X: Technologies, field testing, and challenges with chinese development. IEEE Internet Things J. 2020, 7, 3872–3881. [Google Scholar] [CrossRef] [Green Version]
  79. Shaon, M.R.R.; Li, X.; Wu, Y.-J.; Ramos, S. Quantitative Evaluation of Advanced Traffic Management Systems using Analytic Hierarchy Process. Transp. Res. Rec. 2021, 03611981211030256. [Google Scholar]
  80. Tang, Q.; Hu, X. Modeling individual travel time with back propagation neural network approach for advanced traveler information systems. J. Transp. Eng. Part A Syst. 2020, 146, 4020039. [Google Scholar] [CrossRef]
  81. Van Brummelen, J.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol. 2018, 89, 384–406. [Google Scholar] [CrossRef]
  82. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum PointNets for 3D object detection from RGB-D data. arXiv 2017, arXiv:1711.08488. [Google Scholar]
  83. Allodi, M.; Broggi, A.; Giaquinto, D.; Patander, M.; Prioletti, A. Machine learning in tracking associations with stereo vision and lidar observations for an autonomous vehicle. In Proceedings of the IEEE Intelligent Vehicles Symposium, Gothenburg, Sweden, 19–22 June 2016; pp. 648–653. [Google Scholar] [CrossRef]
  84. Jung, J.; Bae, S.H. Real-time road lane detection in Urban areas using LiDAR data. Electronics 2018, 7, 276. [Google Scholar] [CrossRef] [Green Version]
  85. Guan, H.; Yan, W.; Yu, Y.; Zhong, L.; Li, D. Robust Traffic-Sign Detection and Classification Using Mobile LiDAR Data with Digital Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1715–1724. [Google Scholar] [CrossRef]
  86. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–10 December 2015; pp. 91–99. [Google Scholar]
  87. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
  88. Luo, Z.; Branchaud-Charron, F.; Lemaire, C.; Konrad, J.; Li, S.; Mishra, A.; Achkar, A.; Eichel, J.; Jodoin, P.-M. MIO-TCD: A new benchmark dataset for vehicle classification and localization. IEEE Trans. Image Process. 2018, 27, 5129–5141. [Google Scholar]
  89. Chang, M.C.; Chiang, C.K.; Tsai, C.M.; Chang, Y.K.; Chiang, H.L.; Wang, Y.A.; Chang, S.Y.; Li, Y.L.; Tsai, M.S.; Tseng, H.Y. AI city challenge 2020 - Computer vision for smart transportation applications. In Proceedings of the IEEE/CVF Conference On Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 2638–2647. [Google Scholar] [CrossRef]
  90. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE/CVF Conference On Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
  91. Xu, H.; Gao, Y.; Yu, F.; Darrell, T. End-to-end learning of driving models from large-scale video datasets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3530–3538. [Google Scholar] [CrossRef] [Green Version]
  92. Avola, D.; Cinque, L.; Foresti, G.L.; Martinel, N.; Pannone, D.; Piciarelli, C. A UAV Video Dataset for Mosaicking and Change Detection from Low-Altitude Flights. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 2139–2149. [Google Scholar] [CrossRef] [Green Version]
  93. Li, S.; Yeung, D.Y. Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4140–4146. [Google Scholar]
  94. Chen, C.; Liu, B.; Wan, S.; Qiao, P.; Pei, Q. An edge traffic flow detection scheme based on deep learning in an intelligent transportation system. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1840–1852. [Google Scholar] [CrossRef]
  95. Haferkamp, M.; Al-Askary, M.; Dorn, D.; Sliwa, B.; Habel, L.; Schreckenberg, M.; Wietfeld, C. Radio-based traffic flow detection and vehicle classification for future smart cities. In Proceedings of the IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, Australia, 4–7 June 2017; pp. 1–5. [Google Scholar]
  96. Ho, T.-J.; Chung, M.-J. An approach to traffic flow detection improvements of non-contact microwave radar detectors. In Proceedings of the 2016 International Conference on Applied System Innovation (ICASI), Okinawa, Japan, 26–30 May 2016; pp. 1–4. [Google Scholar]
  97. Ke, R.; Zeng, Z.; Pu, Z.; Wang, Y. New framework for automatic identification and quantification of freeway bottlenecks based on wavelet analysis. J. Transp. Eng. Part A Syst. 2018, 144, 1–10. [Google Scholar] [CrossRef] [Green Version]
  98. Liu, Z.; Zhang, W.; Gao, X.; Meng, H.; Tan, X.; Zhu, X.; Xue, Z.; Ye, X.; Zhang, H.; Wen, S.; et al. Robust movement-specific vehicle counting at crowded intersections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 614–615. [Google Scholar]
  99. Liu, H.; Ma, J.; Yan, W.; Liu, W.; Zhang, X.; Li, C. Traffic flow detection using distributed fiber optic acoustic sensing. IEEE Access 2018, 6, 68968–68980. [Google Scholar] [CrossRef]
  100. Djenouri, Y.; Zimek, A.; Chiarandini, M. Outlier detection in urban traffic flow distributions. In Proceedings of the 2018 IEEE International Conference on Data Mining (ICDM), Singapore, 17–20 November 2018; pp. 935–940. [Google Scholar]
  101. Liang, H.; Song, H.; Li, H.; Dai, Z. Vehicle counting system using deep learning and multi-object tracking methods. Transp. Res. Rec. 2020, 2674, 114–128. [Google Scholar] [CrossRef]
  102. Martchouk, M.; Mannering, F.; Bullock, D. Analysis of freeway travel time variability using Bluetooth detection. J. Transp. Eng. 2011, 137, 697–704. [Google Scholar] [CrossRef]
  103. Malinovskiy, Y.; Saunier, N.; Wang, Y. Pedestrian travel analysis using static bluetooth sensors. In Proceedings of the 91th Transp. Res. Board, Washington, DC, USA, 22–26 January 2012; pp. 1–22. [Google Scholar]
  104. Liu, X.; Liu, W.; Mei, T.; Ma, H. A deep learning-based approach to progressive vehicle re-identification for urban surveillance. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 869–884. [Google Scholar]
  105. He, S.; Luo, H.; Chen, W.; Zhang, M.; Zhang, Y.; Wang, F.; Li, H.; Jiang, W. Multi-domain learning and identity mining for vehicle re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 582–583. [Google Scholar]
  106. Lee, S.; Park, E.; Yi, H.; Lee, S.H. Strdan: Synthetic-to-real domain adaptation network for vehicle re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 608–609. [Google Scholar]
  107. Han, H.; Zhou, M.; Shang, X.; Cao, W.; Abusorrah, A. KISS+ for rapid and accurate pedestrian re-identification. IEEE Trans. Intell. Transp. Syst. 2020, 22, 394–403. [Google Scholar] [CrossRef]
  108. Han, H.; Ma, W.; Zhou, M.; Guo, Q.; Abusorrah, A. A Novel Semi-Supervised Learning Approach to Pedestrian Reidentification. IEEE Internet Things J. 2020, 8, 3042–3052. [Google Scholar] [CrossRef]
  109. Oh, J.-S.; Jayakrishnan, R.; Recker, W. Section Travel Time Estimation from Point Detection Data. Available online: https://escholarship.org/uc/item/7fg677tx (accessed on 21 September 2021).
  110. Stehly, L.; Campillo, M.; Shapiro, N.M. Traveltime measurements from noise correlation: Stability and detection of instrumental time-shifts. Geophys. J. Int. 2007, 171, 223–230. [Google Scholar] [CrossRef] [Green Version]
  111. Cortes, C.E.; Lavanya, R.; Oh, J.-S.; Jayakrishnan, R. General-purpose methodology for estimating link travel time with multiple-point detection of traffic. Transp. Res. Rec. 2002, 1802, 181–189. [Google Scholar] [CrossRef]
  112. Mercader, P.; Haddad, J. Automatic incident detection on freeways based on Bluetooth traffic monitoring. Accid. Anal. Prev. 2020, 146, 105703. [Google Scholar] [CrossRef]
  113. Singh, D.; Mohan, C.K. Deep spatio-temporal representation for detection of road accidents using stacked autoencoder. IEEE Trans. Intell. Transp. Syst. 2018, 20, 879–887. [Google Scholar] [CrossRef]
  114. Lee, S.; Kim, H.G.; Ro, Y.M. STAN: Spatio-temporal adversarial networks for abnormal event detection. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1323–1327. [Google Scholar]
  115. Loewenherz, F.; Bahl, V.; Wang, Y. Video analytics towards vision zero. Inst. Transp. Eng. ITE J. 2017, 87, 25. [Google Scholar]
  116. Roshtkhari, M.J.; Levine, M.D. An on-line, real-time learning method for detecting anomalies in videos using spatio-temporal compositions. Comput. Vis. Image Underst. 2013, 117, 1436–1452. [Google Scholar] [CrossRef]
  117. Li, Y.; Wu, J.; Bai, X.; Yang, X.; Tan, X.; Li, G.; Wen, S.; Zhang, H.; Ding, E. Multi-granularity tracking with modularlized components for unsupervised vehicles anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 586–587. [Google Scholar]
  118. Chakraborty, P.; Sharma, A.; Hegde, C. Freeway Traffic Incident Detection from Cameras: A Semi-Supervised Learning Approach. In Proceedings of the 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1840–1845. [Google Scholar]
  119. Sultani, W.; Chen, C.; Shah, M. Real-world anomaly detection in surveillance videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6479–6488. [Google Scholar]
  120. Zhao, Y.; Wu, W.; He, Y.; Li, Y.; Tan, X.; Chen, S. Good practices and a strong baseline for traffic anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 3993–4001. [Google Scholar]
  121. Wu, J.; Wang, X.; Xiao, X.; Wang, Y. Box-level tube tracking and refinement for vehicles anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 4112–4118. [Google Scholar]
  122. Chen, J.; Ding, G.; Yang, Y.; Han, W.; Xu, K.; Gao, T.; Zhang, Z.; Ouyang, W.; Cai, H.; Chen, Z. Dual-Modality Vehicle Anomaly Detection via Bilateral Trajectory Tracing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 4016–4025. [Google Scholar]
  123. Lin, T.; Rivano, H.; Le Mouël, F. A Survey of Smart Parking Solutions. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3229–3253. [Google Scholar] [CrossRef] [Green Version]
  124. Lou, L.; Zhang, J.; Xiong, Y.; Jin, Y. An Improved Roadside Parking Space Occupancy Detection Method Based on Magnetic Sensors and Wireless Signal Strength. Sensors 2019, 19, 2348. [Google Scholar] [CrossRef] [Green Version]
  125. Park, W.-J.; Kim, B.-S.; Seo, D.-E.; Kim, D.-S.; Lee, K.-H. Parking space detection using ultrasonic sensor in parking assistance system. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 1039–1044. [Google Scholar]
  126. Lee, S.; Yoon, D.; Ghosh, A. Intelligent parking lot application using wireless sensor networks. In Proceedings of the IEEE International Symposium on Collaborative Technologies and Systems, Irvine, CA, USA, 19–23 May 2008; pp. 48–57. [Google Scholar]
  127. Zhang, Z.; Li, X.; Yuan, H.; Yu, F. A street parking system using wireless sensor networks. Int. J. Distrib. Sens. Netw. 2013, 9, 107975. [Google Scholar] [CrossRef]
  128. Zhang, Z.; Tao, M.; Yuan, H. A parking occupancy detection algorithm based on AMR sensor. IEEE Sens. J. 2014, 15, 1261–1269. [Google Scholar] [CrossRef]
  129. Jeon, Y.; Ju, H.-I.; Yoon, S. Design of an lpwan communication module based on secure element for smart parking application. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 12–14 January 2018; pp. 1–2. [Google Scholar]
  130. Grodi, R.; Rawat, D.B.; Rios-Gutierrez, F. Smart parking: Parking occupancy monitoring and visualization system for smart cities. In Proceedings of the SoutheastCon 2016, Norfolk, VA, USA, 30 March–3 April 2016; pp. 1–5. [Google Scholar]
  131. Sifuentes, E.; Casas, O.; Pallas-Areny, R. Wireless magnetic sensor node for vehicle detection with optical wake-up. IEEE Sens. J. 2011, 11, 1669–1676. [Google Scholar] [CrossRef]
  132. Zhu, H.; Yu, F. A vehicle parking detection method based on correlation of magnetic signals. Int. J. Distrib. Sens. Netw. 2015, 11, 361242. [Google Scholar] [CrossRef]
  133. Bulan, O.; Loce, R.P.; Wu, W.; Wang, Y.R.; Bernal, E.A.; Fan, Z. Video-based real-time on-street parking occupancy detection system. J. Electron. Imaging 2013, 22, 41109. [Google Scholar] [CrossRef]
  134. Cho, W.; Park, S.; Kim, M.; Han, S.; Kim, M.; Kim, T.; Kim, J.; Paik, J. Robust parking occupancy monitoring system using random forests. In Proceedings of the 2018 International Conference on Electronics, Information and Communication (ICEIC), Honolulu, HI, USA, 24–27 January 2018; pp. 1–4. [Google Scholar]
  135. Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Meghini, C.; Vairo, C. Deep learning for decentralized parking lot occupancy detection. Expert Syst. Appl. 2017, 72, 327–334. [Google Scholar] [CrossRef]
  136. Nurullayev, S.; Lee, S.-W. Generalized Parking Occupancy Analysis Based on Dilated Convolutional Neural Network. Sensors 2019, 19, 277. [Google Scholar] [CrossRef] [Green Version]
  137. Alam, M.; Moroni, D.; Pieri, G.; Tampucci, M.; Gomes, M.; Fonseca, J.; Ferreira, J.; Leone, G.R. Real-Time Smart Parking Systems Integration in Distributed ITS for Smart Cities. J. Adv. Transp. 2018, 2018. [Google Scholar] [CrossRef]
  138. Wu, Q.; Huang, C.; Wang, S.; Chiu, W.; Chen, T. Robust parking space detection considering inter-space correlation. In Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, Beijing, China, 2–5 July 2007; pp. 659–662. [Google Scholar]
  139. Rianto, D.; Erwin, I.M.; Prakasa, E.; Herlan, H. Parking Slot Identification using Local Binary Pattern and Support Vector Machine. In Proceedings of the 2018 International Conference on Computer, Control, Informatics and its Applications (IC3INA), Tangerang, Indonesia, 1–2 November 2018; pp. 129–133. [Google Scholar]
  140. Baroffio, L.; Bondi, L.; Cesana, M.; Redondi, A.E.; Tagliasacchi, M. A visual sensor network for parking lot occupancy detection in smart cities. In Proceedings of the 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), Milan, Italy, 14–16 December 2015; pp. 745–750. [Google Scholar]
  141. Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Vairo, C. Car parking occupancy detection using smart camera networks and Deep Learning. In Proceedings of the 2016 IEEE Symposium on Computers and Communication (ISCC), Messina, Italy, 27–30 June 2016; pp. 1212–1217. [Google Scholar]
  142. Vitek, S.; Melničuk, P. A distributed wireless camera system for the management of parking spaces. Sensors 2018, 18, 69. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  143. Ling, X.; Sheng, J.; Baiocchi, O.; Liu, X.; Tolentino, M.E. Identifying parking spaces & detecting occupancy using vision-based IoT devices. In Proceedings of the 2017 Global Internet of Things Summit (GIoTS), Geneva, Switzerland, 6–9 June 2017; pp. 1–6. [Google Scholar]
  144. Nieto, R.M.; Garcia-Martin, Á.; Hauptmann, A.G.; Martinez, J.M. Automatic Vacant Parking Places Management System Using Multicamera Vehicle Detection. IEEE Trans. Intell. Transp. Syst. 2018, 1–12. [Google Scholar]
  145. Ismail, K.; Sayed, T.; Saunier, N.; Lim, C. Automated analysis of pedestrian–vehicle conflicts using video data. Transp. Res. Rec. 2009, 2140, 44–54. [Google Scholar] [CrossRef]
  146. Wu, J.; Xu, H.; Zheng, Y.; Tian, Z. A novel method of vehicle-pedestrian near-crash identification with roadside LiDAR data. Accid. Anal. Prev. 2018, 121, 238–249. [Google Scholar] [CrossRef] [PubMed]
  147. Huang, X.; He, P.; Rangarajan, A.; Ranka, S. Intelligent intersection: Two-stream convolutional networks for real-time near-accident detection in traffic video. ACM Trans. Spat. Algorithms Syst. 2020, 6, 1–28. [Google Scholar] [CrossRef] [Green Version]
  148. Ke, R.; Lutin, J.; Spears, J.; Wang, Y. A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; Volume 2017. [Google Scholar]
  149. Yamamoto, S.; Kurashima, T.; Toda, H. Identifying Near-Miss Traffic Incidents in Event Recorder Data. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Singapore, 11–14 May 2020; pp. 717–728. [Google Scholar]
  150. Ibrahim, M.R.; Haworth, J.; Christie, N.; Cheng, T. CyclingNet: Detecting cycling near misses from video streams in complex urban scenes with deep learning. arXiv 2021, arXiv:2102.00565. [Google Scholar]
  151. Kataoka, H.; Suzuki, T.; Oikawa, S.; Matsui, Y.; Satoh, Y. Drive video analysis for the detection of traffic near-miss incidents. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–26 May 2018; pp. 1–8. [Google Scholar]
  152. Taccari, L.; Sambo, F.; Bravi, L.; Salti, S.; Sarti, L.; Simoncini, M.; Lori, A. Classification of crash and near-crash events from dashcam videos and telematics. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2460–2465. [Google Scholar]
  153. Jayaraman, K.; Tilbury, D.M.; Yang, X.J.; Pradhan, A.K.; Robert, L.P. Analysis and prediction of pedestrian crosswalk behavior during automated vehicle interactions. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Virtual Conference, 31 May–31 August 2020; pp. 6426–6432. [Google Scholar]
  154. Brehar, R.D.; Muresan, M.P.; Maricta, T.; Vancea, C.-C.; Negru, M.; Nedevschi, S. Pedestrian Street-Cross Action Recognition in Monocular Far Infrared Sequences. IEEE Access 2021, 9, 74302–74324. [Google Scholar] [CrossRef]
  155. Chen, L.; Ma, N.; Wang, P.; Li, J.; Wang, P.; Pang, G.; Shi, X. Survey of pedestrian action recognition techniques for autonomous driving. Tsinghua Sci. Technol. 2020, 25, 458–470. [Google Scholar] [CrossRef]
  156. Ushapreethi, P.; Priya, G.G.L. A fine-tuned feature descriptor for pedestrian action recognition in autonomous vehicles. Int. J. Veh. Inf. Commun. Syst. 2021, 6, 40–63. [Google Scholar]
  157. Liu, B.; Adeli, E.; Cao, Z.; Lee, K.-H.; Shenoi, A.; Gaidon, A.; Niebles, J.C. Spatiotemporal relationship reasoning for pedestrian intent prediction. IEEE Robot. Autom. Lett. 2020, 5, 3485–3492. [Google Scholar] [CrossRef] [Green Version]
  158. Lyu, N.; Wen, J.; Duan, Z.; Wu, C. Vehicle Trajectory Prediction and Cut-In Collision Warning Model in a Connected Vehicle Environment. IEEE Trans. Intell. Transp. Syst. 2020, 1–16. [Google Scholar] [CrossRef]
  159. Wang, W.; Qie, T.; Yang, C.; Liu, W.; Xiang, C.; Huang, K. An intelligent Lane-Changing Behavior Prediction and Decision-Making strategy for an Autonomous Vehicle. IEEE Trans. Ind. Electron. 2021, 1–10. [Google Scholar]
  160. Fernando, T.; Denman, S.; Sridharan, S.; Fookes, C. Deep inverse reinforcement learning for behavior prediction in autonomous driving: Accurate forecasts of vehicle motion. IEEE Signal Process. Mag. 2020, 38, 87–96. [Google Scholar] [CrossRef]
  161. Zhang, M.; Li, H.; Wang, L.; Wang, P.; Tian, S.; Feng, Y. Overtaking Behavior Prediction of Rear Vehicle via LSTM Model. In Proceedings of the CICTP 2020, Xi’an, China, 16–19 December 2020; pp. 3575–3586. [Google Scholar]
  162. Mozaffari, S.; Al-Jarrah, O.Y.; Dianati, M.; Jennings, P.; Mouzakitis, A. Deep learning-based vehicle behavior prediction for autonomous driving applications: A review. IEEE Trans. Intell. Transp. Syst. 2020, 1–15. [Google Scholar] [CrossRef]
  163. Pu, Z.; Zhu, M.; Li, W.; Cui, Z.; Guo, X.; Wang, Y. Monitoring Public Transit Ridership Flow by Passively Sensing Wi-Fi and Bluetooth Mobile Devices. IEEE Internet Things J. 2020, 8, 474–486. [Google Scholar] [CrossRef]
  164. Erlik Nowruzi, F.; El Ahmar, W.A.; Laganiere, R.; Ghods, A.H. In-vehicle occupancy detection with convolutional networks on thermal images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  165. Chen, Z.; Zhang, J.; Tao, D. Progressive lidar adaptation for road detection. IEEE/CAA J. Autom. Sin. 2019, 6, 693–702. [Google Scholar] [CrossRef] [Green Version]
  166. Fan, R.; Wang, H.; Cai, P.; Liu, M. SNE-RoadSeg: Incorporating surface normal information into semantic segmentation for accurate freespace detection. In Proceedings of the European Conference on Computer Vision, Virtual Conference, 23–28 August 2020; pp. 340–356. [Google Scholar]
  167. Wang, X.; Qian, Y.; Wang, C.; Yang, M. Map-enhanced ego-lane detection in the missing feature scenarios. IEEE Access 2020, 8, 107958–107968. [Google Scholar] [CrossRef]
  168. Luo, S.; Zhang, X.; Hu, J.; Xu, J. Multiple lane detection via combining complementary structural constraints. IEEE Trans. Intell. Transp. Syst. 2020, 1–10. [Google Scholar] [CrossRef]
  169. Farag, W. Real-time detection of road lane-lines for autonomous driving. Recent Adv. Comput. Sci. Commun. (Former. Recent Pat. Comput. Sci.) 2020, 13, 265–274. [Google Scholar] [CrossRef]
  170. Almeida, T.; Lourenço, B.; Santos, V. Road detection based on simultaneous deep learning approaches. Robot. Auton. Syst. 2020, 133, 103605. [Google Scholar] [CrossRef]
  171. Siam, M.; Gamal, M.; Abdel-Razek, M.; Yogamani, S.; Jagersand, M.; Zhang, H. A comparative study of real-time semantic segmentation for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 587–597. [Google Scholar]
  172. Tao, A.; Sapra, K.; Catanzaro, B. Hierarchical multi-scale attention for semantic segmentation. arXiv 2020, arXiv:2005.10821. [Google Scholar]
  173. Erkent, Ö.; Laugier, C. Semantic segmentation with unsupervised domain adaptation under varying weather conditions for autonomous vehicles. IEEE Robot. Autom. Lett. 2020, 5, 3580–3587. [Google Scholar] [CrossRef] [Green Version]
  174. Treml, M.; Arjona-Medina, J.; Unterthiner, T.; Durgesh, R.; Friedmann, F.; Schuberth, P.; Mayr, A.; Heusel, M.; Hofmarcher, M.; Widrich, M.; et al. Speeding Up Semantic Segmentation for Autonomous Driving. In Proceedings of the NIPS 2016 Workshop MLITS, Barcelona, Spain, 9 December 2016. [Google Scholar]
  175. Hung, W.-C.; Tsai, Y.-H.; Liou, Y.-T.; Lin, Y.-Y.; Yang, M.-H. Adversarial learning for semi-supervised semantic segmentation. arXiv 2018, arXiv:1802.07934. [Google Scholar]
  176. Ouali, Y.; Hudelot, C.; Tami, M. Semi-supervised semantic segmentation with cross-consistency training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 12674–12684. [Google Scholar]
  177. Yuan, Y.; Chen, X.; Chen, X.; Wang, J. Segmentation transformer: Object-contextual representations for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Montreal, QC, Canada, 11–17 October 2021; Volume 1. [Google Scholar]
  178. Mohan, R.; Valada, A. EfficientPS: Efficient panoptic segmentation. Int. J. Comput. Vis. 2021, 129, 1551–1579. [Google Scholar] [CrossRef]
  179. Cheng, B.; Collins, M.D.; Zhu, Y.; Liu, T.; Huang, T.S.; Adam, H.; Chen, L.-C. Panoptic-DeepLab: A simple, strong, and fast baseline for bottom-up panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 12475–12485. [Google Scholar]
  180. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3213–3223. [Google Scholar]
  181. Menouar, H.; Guvenc, I.; Akkaya, K.; Uluagac, A.S.; Kadri, A.; Tuncer, A. UAV-enabled intelligent transportation systems for the smart city: Applications and challenges. IEEE Commun. Mag. 2017, 55, 22–28. [Google Scholar] [CrossRef]
  182. Ardestani, S.M.; Jin, P.J.; Volkmann, O.; Gong, J.; Zhou, Z.; Feeley, C. 3D Accident Site Reconstruction Using Unmanned Aerial Vehicles (UAV). In Proceedings of the Transportation Research Board 95th Annual Meeting, Washington, DC, USA, 10–14 January 2016. [Google Scholar]
  183. Constantinescu, S.-G.; Nedelcut, F. UAV systems in support of Law Enforcement forces. In Proceedings of the International Conference of Scientific Paper AFASES 2011, Brasov, Romania, 26–28 May 2011; pp. 1211–1219. [Google Scholar]
  184. Huang, H.; Savkin, A.V.; Huang, C. Decentralised Autonomous Navigation of a UAV Network for Road Traffic Monitoring. IEEE Trans. Aerosp. Electron. Syst. 2021. [Google Scholar] [CrossRef]
  185. Shao, G.; Ma, Y.; Malekian, R.; Yan, X.; Li, Z. A novel cooperative platform design for coupled USV–UAV systems. IEEE Trans. Ind. Inform. 2019, 15, 4913–4922. [Google Scholar] [CrossRef] [Green Version]
  186. Teutsch, M.; Krüger, W. Detection, segmentation, and tracking of moving objects in UAV videos. In Proceedings of the 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance, Beijing, China, 18–21 September 2012; pp. 313–318. [Google Scholar]
  187. Rodriguez-Canosa, G.R.; Thomas, S.; Del Cerro, J.; Barrientos, A.; MacDonald, B. A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera. Remote Sens. 2012, 4, 1090–1111. [Google Scholar] [CrossRef] [Green Version]
  188. Gomaa, A.; Abdelwahab, M.M.; Abo-Zahhad, M. Real-Time Algorithm for Simultaneous Vehicle Detection and Tracking in Aerial View Videos. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), Windsor, ON, Canada, 5–8 August 2018; pp. 222–225. [Google Scholar]
  189. Tsao, P.; Ik, T.-U.; Chen, G.-W.; Peng, W.-C. Stitching aerial images for vehicle positioning and tracking. In Proceedings of the 2018 IEEE International Conference on Data Mining Workshops (ICDMW), Singapore, 17–20 November 2018; pp. 616–623. [Google Scholar]
  190. Cao, X.; Wu, C.; Lan, J.; Yan, P.; Li, X. Vehicle detection and motion analysis in low-altitude airborne video under urban environment. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 1522–1533. [Google Scholar] [CrossRef]
  191. Breckon, T.P.; Barnes, S.E.; Eichner, M.L.; Wahren, K. Autonomous real-time vehicle detection from a medium-level UAV. In Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, Bristol, UK, 30 March–1 April 2009; pp. 21–29. [Google Scholar]
  192. Khan, M.; Ectors, W.; Bellemans, T.; Janssens, D.; Wets, G. Unmanned aerial vehicle-based traffic analysis: A case study for shockwave identification and flow parameters estimation at signalized intersections. Remote Sens. 2018, 10, 458. [Google Scholar] [CrossRef] [Green Version]
  193. Carletti, V.; Greco, A.; Saggese, A.; Vento, M. Multi-Object Tracking by Flying Cameras Based on a Forward-Backward Interaction. IEEE Access 2018, 6, 43905–43919. [Google Scholar] [CrossRef]
  194. Barmpounakis, E.N.; Vlahogianni, E.I.; Golias, J.C.; Babinec, A. How accurate are small drones for measuring microscopic traffic parameters? Transp. Lett. 2017, 11, 1–9. [Google Scholar] [CrossRef]
  195. Najiya, K.V.; Archana, M. UAV Video Processing for Traffic Surveillance with Enhanced Vehicle Detection. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Thondamuthur, India, 20–21 April 2018; pp. 662–668. [Google Scholar]
  196. Zhu, J.; Sun, K.; Jia, S.; Li, Q.; Hou, X.; Lin, W.; Liu, B.; Qiu, G. Urban Traffic Density Estimation Based on Ultrahigh-Resolution UAV Video and Deep Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4968–4981. [Google Scholar] [CrossRef]
  197. Wang, W.; Peng, Y.; Cao, G.; Guo, X.; Kwok, N. Low-Illumination Image Enhancement for Night-Time UAV Pedestrian Detection. IEEE Trans. Ind. Inform. 2020, 17, 5208–5217. [Google Scholar] [CrossRef]
  198. Li, J.; Ye, D.H.; Chung, T.; Kolsch, M.; Wachs, J.; Bouman, C. Multi-target detection and tracking from a single camera in Unmanned Aerial Vehicles (UAVs). In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 4992–4997. [Google Scholar]
  199. Cao, X.; Gao, C.; Lan, J.; Yuan, Y.; Yan, P. Ego motion guided particle filter for vehicle tracking in airborne videos. Neurocomputing 2014, 124, 168–177. [Google Scholar] [CrossRef]
  200. Khan, M.A.; Ectors, W.; Bellemans, T.; Janssens, D.; Wets, G. Unmanned Aerial Vehicle-Based Traffic Analysis: Methodological Framework for Automated Multivehicle Trajectory Extraction. Transp. Res. Rec. J. Transp. Res. Board 2017, 2626, 25–33. [Google Scholar] [CrossRef]
  201. Ke, R.; Feng, S.; Cui, Z.; Wang, Y. Advanced framework for microscopic and lane-level macroscopic traffic parameters estimation from UAV video. IET Intell. Transp. Syst. 2020, 14, 724–734. [Google Scholar] [CrossRef]
  202. Kaufmann, S.; Kerner, B.S.; Rehborn, H.; Koller, M.; Klenov, S.L. Aerial observations of moving synchronized flow patterns in over-saturated city traffic. Transp. Res. Part C Emerg. Technol. 2018, 86, 393–406. [Google Scholar] [CrossRef]
  203. Chen, A.Y.; Chiu, Y.-L.; Hsieh, M.-H.; Lin, P.-W.; Angah, O. Conflict analytics through the vehicle safety space in mixed traffic flows using UAV image sequences. Transp. Res. Part C Emerg. Technol. 2020, 119, 102744. [Google Scholar] [CrossRef]
  204. McCord, M.; Yang, Y.; Jiang, Z.; Coifman, B.; Goel, P. Estimating annual average daily traffic from satellite imagery and air photos: Empirical results. Transp. Res. Rec. J. Transp. Res. Board 2003, 1855, 136–142. [Google Scholar] [CrossRef]
  205. Shastry, A.C.; Schowengerdt, R.A. Airborne video registration and traffic-flow parameter estimation. IEEE Trans. Intell. Transp. Syst. 2005, 6, 391–405. [Google Scholar] [CrossRef]
  206. Ke, R. A Novel Framework for Real-Time Traffic Flow Parameter Estimation from Aerial Videos. Master’s Thesis, University of Washington, Seattle, WA, USA, 2016. [Google Scholar]
  207. Ke, R.; Li, Z.; Kim, S.; Ash, J.; Cui, Z.; Wang, Y. Real-time bidirectional traffic flow parameter estimation from aerial videos. IEEE Trans. Intell. Transp. Syst. 2017, 18, 890–901. [Google Scholar] [CrossRef]
  208. Ke, R.; Li, Z.; Tang, J.; Pan, Z.; Wang, Y. Real-time traffic flow parameter estimation from UAV video based on ensemble classifier and optical flow. IEEE Trans. Intell. Transp. Syst. 2019, 20, 54–64. [Google Scholar] [CrossRef]
  209. Chen, X.; Li, Z.; Yang, Y.; Qi, L.; Ke, R. High-resolution vehicle trajectory extraction and denoising from aerial videos. IEEE Trans. Intell. Transp. Syst. 2020, 22, 3190–3202. [Google Scholar] [CrossRef]
  210. Karaduman, M.; Cinar, A.; Eren, H. UAV traffic patrolling via road detection and tracking in anonymous aerial video frames. J. Intell. Robot. Syst. 2019, 95, 675–690. [Google Scholar] [CrossRef]
  211. Li, J.; Cao, X.; Guo, D.; Xie, J.; Chen, H. Task scheduling with UAV-assisted vehicular cloud for road detection in highway scenario. IEEE Internet Things J. 2020, 7, 7702–7713. [Google Scholar] [CrossRef]
  212. Lin, Y.; Saripalli, S. Road detection from aerial imagery. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St Paul, MN, USA, 14–19 March 2012; pp. 3588–3593. [Google Scholar]
  213. Zhou, H.; Kong, H.; Wei, L.; Creighton, D.; Nahavandi, S. Efficient road detection and tracking for unmanned aerial vehicle. IEEE Trans. Intell. Transp. Syst. 2015, 16, 297–309. [Google Scholar] [CrossRef]
  214. Zhou, H.; Kong, H.; Wei, L.; Creighton, D.; Nahavandi, S. On detecting road regions in a single UAV image. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1713–1722. [Google Scholar] [CrossRef]
  215. Chen, S.; Laefer, D.F.; Mangina, E.; Zolanvari, S.M.I.; Byrne, J. UAV bridge inspection through evaluated 3D reconstructions. J. Bridg. Eng. 2019, 24, 5019001. [Google Scholar] [CrossRef] [Green Version]
  216. Jung, S.; Song, S.; Kim, S.; Park, J.; Her, J.; Roh, K.; Myung, H. Toward Autonomous Bridge Inspection: A framework and experimental results. In Proceedings of the 2019 16th International Conference on Ubiquitous Robots (UR), Jeju, Korea, 24–27 June 2019; pp. 208–211. [Google Scholar]
  217. Bolourian, N.; Soltani, M.M.; Albahria, A.H.; Hammad, A. High level framework for bridge inspection using LiDAR-equipped UAV. In Proceedings of the International Symposium on Automation and Robotics in Construction, ISARC, Taipei, Taiwan, 28 June–1 July 2017; Volume 34. [Google Scholar]
  218. Lei, B.; Wang, N.; Xu, P.; Song, G. New crack detection method for bridge inspection using UAV incorporating image processing. J. Aerosp. Eng. 2018, 31, 4018058. [Google Scholar] [CrossRef]
  219. Biçici, S.; Zeybek, M. An approach for the automated extraction of road surface distress from a UAV-derived point cloud. Autom. Constr. 2021, 122, 103475. [Google Scholar] [CrossRef]
  220. Leonardi, G.; Barrile, V.; Palamara, R.; Suraci, F.; Candela, G. 3D mapping of pavement distresses using an Unmanned Aerial Vehicle (UAV) system. In Proceedings of the International Symposium on New Metropolitan Perspectives, Reggio Calabria, Italy, 22–25 May 2018; pp. 164–171. [Google Scholar]
  221. Fan, R.; Jiao, J.; Pan, J.; Huang, H.; Shen, S.; Liu, M. Real-time dense stereo embedded in a UAV for road inspection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  222. Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv 2015, arXiv:1510.00149. [Google Scholar]
  223. Rastegari, M.; Ordonez, V.; Redmon, J.; Farhadi, A. XNOR-Net: ImageNet classification using binary convolutional neural networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 525–542. [Google Scholar]
  224. Ferdowsi, A.; Challita, U.; Saad, W. Deep learning for reliable mobile edge analytics in intelligent transportation systems: An overview. IEEE Veh. Technol. Mag. 2019, 14, 62–70. [Google Scholar] [CrossRef]
  225. Grassi, G.; Jamieson, K.; Bahl, P.; Pau, G. Parkmaster: An in-vehicle, edge-based video analytics service for detecting open parking spaces in urban environments. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing, San Jose, CA, USA, 12–14 October 2017; pp. 1–14. [Google Scholar]
  226. El-Wakeel, A.S.; Li, J.; Noureldin, A.; Hassanein, H.S. Towards a Practical Crowdsensing System for Road Surface Conditions Monitoring. IEEE Internet Things J. 2018, 5, 4672–4685. [Google Scholar] [CrossRef]
  227. Liu, Q.; Kumar, S.; Mago, V. SafeRNet: Safe Transportation Routing in the era of Internet of Vehicles and Mobile Crowd Sensing. In Proceedings of the 14th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 8–11 January 2017; pp. 299–304. [Google Scholar]
  228. Xu, X.; Xue, Y.; Qi, L.; Yuan, Y.; Zhang, X.; Umer, T. An edge computing-enabled computation offloading method with privacy preservation for internet of connected vehicles. Futur. Gener. Comput. Syst. 2019, 96, 89–100. [Google Scholar] [CrossRef]
  229. Yuan, Q.; Zhou, H.; Li, J.; Liu, Z.; Yang, F.; Shen, X.S. Toward Efficient Content Delivery for Automated Driving Services: An Edge Computing Solution. IEEE Netw. 2018, 32, 80–86. [Google Scholar] [CrossRef]
  230. He, Y.; Yu, F.R.; Zhao, N.; Leung, V.C.M.; Yin, H. Software-Defined Networks with Mobile Edge Computing and Caching for Smart Cities: A Big Data Deep Reinforcement Learning Approach. IEEE Commun. Mag. 2017, 55, 31–37. [Google Scholar] [CrossRef]
  231. Barthélemy, J.; Verstaevel, N.; Forehead, H.; Perez, P. Edge-computing video analytics for real-time traffic monitoring in a smart city. Sensors 2019, 19, 2048. [Google Scholar] [CrossRef] [Green Version]
  232. Zhou, J.; Dai, H.-N.; Wang, H. Lightweight convolution neural networks for mobile edge computing in transportation cyber physical systems. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–20. [Google Scholar] [CrossRef]
  233. Chen, Y.; Zhang, Y.; Maharjan, S.; Alam, M.; Wu, T. Deep learning for secure mobile edge computing in cyber-physical transportation systems. IEEE Netw. 2019, 33, 36–41. [Google Scholar] [CrossRef]
  234. Garg, S.; Singh, A.; Batra, S.; Kumar, N.; Yang, L.T. UAV-empowered edge computing environment for cyber-threat detection in smart vehicles. IEEE Netw. 2018, 32, 42–51. [Google Scholar] [CrossRef]
  235. Kulkarni, A.; Mhalgi, N.; Gurnani, S.; Giri, N. Pothole detection system using machine learning on Android. Int. J. Emerg. Technol. Adv. Eng. 2014, 4, 360–364. [Google Scholar]
  236. Zheng, Z.; Zhou, M.; Chen, Y.; Huo, M.; Chen, D. Enabling real-time road anomaly detection via mobile edge computing. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719891319. [Google Scholar]
  237. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  238. Ke, R.; Cui, Z.; Chen, Y.; Zhu, M.; Yang, H.; Wang, Y. Edge Computing for Real-Time Near-Crash Detection for Smart Transportation Applications. arXiv 2021, arXiv:2008.00549v3. [Google Scholar]
  239. Spears, J.; Lutin, J.; Wang, Y.; Ke, R.; Clancy, S.M. Active Safety-Collision Warning Pilot in Washington State; TRB Transit Innovations Deserving Exploratory Analysis (IDEA) Program J-04/IDEA 82, Final Report; Transportation Research Board: Washington, DC, USA, May 2017. [Google Scholar]
  240. Wang, K.; Li, F.; Chen, C.-M.; Hassan, M.M.; Long, J.; Kumar, N. Interpreting Adversarial Examples and Robustness for Deep Learning-Based Auto-Driving Systems. IEEE Trans. Intell. Transp. Syst. 2021. [Google Scholar] [CrossRef]
  241. Li, J.; Xu, Z.; Fu, L.; Zhou, X.; Yu, H. Domain adaptation from daytime to nighttime: A situation-sensitive vehicle detection and traffic flow parameter estimation framework. Transp. Res. Part C Emerg. Technol. 2021, 124, 102946. [Google Scholar] [CrossRef]
  242. He, Y.; Lin, J.; Liu, Z.; Wang, H.; Li, L.-J.; Han, S. AMC: AutoML for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 784–800. [Google Scholar]
  243. Liu, C.; Zoph, B.; Neumann, M.; Shlens, J.; Hua, W.; Li, L.-J.; Fei-Fei, L.; Yuille, A.; Huang, J.; Murphy, K. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 19–34. [Google Scholar]
Figure 1. The number of publications from 2011 to 2019 on the topics of (1) edge computing, (2) intelligent transportation systems, and (3) edge computing + transportation. The statistics are from the Web of Science.
Figure 2. The structure of the content in this review paper.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
