Review

Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges

1 Centre for Integrated Systems Engineering and Advanced Technologies, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
2 Institute of Power Engineering, Universiti Tenaga Nasional, Kajang 43000, Malaysia
* Author to whom correspondence should be addressed.
Sensors 2019, 19(9), 2093; https://doi.org/10.3390/s19092093
Submission received: 2 April 2019 / Revised: 24 April 2019 / Accepted: 26 April 2019 / Published: 6 May 2019

Abstract:
The automatic traffic sign detection and recognition (TSDR) system is an important research topic in the development of advanced driver assistance systems (ADAS). Investigations on vision-based TSDR have received substantial interest in the research community, mainly motivated by three tasks: detection, tracking and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey on traffic sign detection, tracking and classification. The details of algorithms and methods, and their specifications for detection, tracking and classification, are investigated and summarized in tables along with the corresponding key references. A comparative study in each section evaluates the TSDR methods, their performance metrics and their availability. Current issues and challenges of the existing technologies are illustrated, together with brief suggestions and a discussion of the future progress of driver assistance system research. This review will hopefully encourage increased effort towards the development of future vision-based TSDR systems.

1. Introduction

In all countries of the world, important information about road limitations and conditions is presented to drivers as visual signals, such as traffic signs and traffic lanes. Traffic signs are an important part of road infrastructure, providing information about the current state of the road, restrictions, prohibitions, warnings, and other helpful information for navigation [1,2]. This information is encoded in the traffic signs' visual traits: shape, color and pictogram [1]. Disregarding or failing to notice these traffic signs may directly or indirectly contribute to a traffic accident. However, in adverse traffic conditions, the driver may accidentally or deliberately fail to notice traffic signs [3]. In these circumstances, an automatic traffic sign detection and recognition system can compensate for a driver's possible inattention and decrease the driver's tiredness by helping them follow the traffic signs, thus making driving safer and easier. Traffic sign detection and recognition (TSDR) is an important application of the more recent technology referred to as advanced driver assistance systems (ADAS) [4], which is designed to provide drivers with vital information that would be difficult or impossible to come by through any other means [5]. The TSDR system has received increasing interest in recent years due to its potential use in various applications. Some of these applications have been well defined and summarized in [6]: checking the presence and condition of signs on highways, sign inventory in towns and cities, and re-localization of autonomous vehicles, as well as the application relevant to this research, a driver support system. However, a number of challenges remain for a successful TSDR system, as the performance of these systems is greatly affected by the surrounding conditions that affect road sign visibility [4]. Circumstances that affect road sign visibility are either temporary, because of illumination factors and bad weather conditions, or permanent, because of vandalism and improper placement of signs [7]. Figure 1 shows examples of traffic signs under such non-ideal conditions, which cause difficulties for TSDR.
This paper provides a comprehensive survey on traffic sign detection, tracking and classification. The details of algorithms and methods, and their specifications for detection, tracking and classification, are investigated and summarized in tables along with the corresponding key references. A comparative study in each section evaluates the TSDR methods, their performance metrics and their availability. Current issues and challenges of the existing technologies are illustrated, together with brief suggestions and a discussion of the future progress of driver assistance system research. The rest of this paper is organized as follows: Section 2 presents an overview of traffic signs and recent research trends in this field. Section 3 provides a brief review of the available traffic sign databases. The methods of detection, tracking, and classification are categorized, reviewed, and compared in Section 4. Section 5 reviews current issues and challenges facing researchers in TSDR. Section 6 concludes the paper and offers suggestions.

2. Traffic Signs and Research Trends

Aiming at standardizing traffic signs across different countries, an international treaty, commonly known as the Vienna Convention on Road Signs and Signals [8], was agreed upon in 1968. To date, 52 countries have signed this treaty, among which 31 are in Europe. The Vienna convention classified the traffic signs into eight categories, designated with letters A–H: Danger/warning signs (A), priority signs (B), prohibitory or restrictive signs (C), mandatory signs (D), special regulation signs (E), information, facilities or service signs (F), direction, position or indication signs (G), and additional panels (H). Examples of traffic signs in the United Kingdom for each of the categories are shown in Figure 2.
Despite the well-defined rules of the Vienna treaty, variations in traffic sign design still exist among the signatory countries, and in some cases considerable variation can exist within a single nation. These variations are easy for humans to handle; nevertheless, they may pose a major challenge to an automatic detection system. As an example, the different designs of stop signs in different countries are shown in Table 1.
In terms of research, there has recently been growing interest in developing efficient and reliable TSDR systems. To show the current state of scientific research regarding this development, a simple search for the term “traffic sign detection and recognition” was carried out in the Scopus database, with the aim of locating articles published in journals indexed in this database. To focus on the most recent and relevant research, the search was restricted to the past decade (2009–2018) and to the subjects of computer science and engineering. In this way, a set of 674 articles and 5414 citations was obtained. The publication and citation trends are shown in Figure 3 and Figure 4, respectively. Generally, the figures indicate a relatively fast growth rate in publications and a rapid increase in citation impact. More importantly, it is clear from the figures that TSDR research has grown remarkably in the last three years (2016–2018), which account for the highest share of publications and citations at 41.69% and 60.34%, respectively.

3. Traffic Sign Database

A traffic sign database is an essential requirement in developing any TSDR system. It is used for training and testing the detection and recognition techniques. A traffic sign database contains a large number of traffic sign scenes and images representing samples of all available types of traffic signs: guide, regulatory, temporary and warning signs. During the past few years, a number of research groups have worked on creating traffic sign datasets for the tasks of detection, recognition and tracking. Some of these datasets are publicly available for use by the research community. The detailed information regarding the publicly available databases is summarized in Table 2. According to [1,9], the first and most widely used dataset is the German traffic sign dataset, which comprises two benchmarks: the German Traffic Signs Detection Benchmark (GTSDB) [10] and the German Traffic Signs Recognition Benchmark (GTSRB) [11]. This dataset collects three important categories of road signs (prohibitory, danger and mandatory) from various traffic scenes. All traffic signs have been fully annotated with rectangular regions of interest (ROIs). Examples of traffic scenes in the GTSDB database are shown in Figure 5 [12].

4. Traffic Sign Detection, Tracking and Classification Methods

As aforementioned, a TSDR system is a driver support system that can be used to notify and warn the driver in adverse conditions. It is a vision-based system that usually has the capability to detect and recognize all traffic signs, even those that may be partially occluded or somewhat distorted [14]. Its main tasks are locating the sign, identifying it and distinguishing one sign from another [15,16]. Thus, the procedure of the TSDR system can be divided into three stages: detection, tracking and classification. Detection is concerned with locating traffic signs in the input scene images, whereas classification is about determining what type of sign the system is looking at [17,18]. In other words, traffic sign detection involves generating candidate regions of interest (ROIs) that are likely to contain traffic signs, while traffic sign classification takes each candidate ROI and tries to identify the exact type of sign or rejects the ROI as a false detection [4,19]. Detection and classification together usually constitute recognition in the scientific literature. Figure 6 illustrates the main stages of the traffic sign recognition system. As indicated in the figure, the system is able to work in two modes: a training mode, in which a database is built by collecting a set of traffic signs for training and validation, and a testing mode, in which the system recognizes traffic signs it has not seen before. In the training mode, a traffic sign image is collected by the camera and stored in the raw image database to be classified and used for training the system. The collected image is then sent to the color segmentation process, where all background objects and unimportant information in the image are eliminated. The image generated by this step is a binary image containing the traffic sign and any other objects similar in color to the traffic sign. The noise and small objects in the binary image are cleaned by the object selector process, and the resulting image is then used to create or update the training image database. According to [20], feature selection has two functions in enhancing the performance of learning tasks. The first is to eliminate noisy and redundant information, thus obtaining a better representation and facilitating the classification task. The second is to make the subsequent computation more efficient by lowering the dimensionality of the feature space. In the block diagram, features are then extracted from the image and used to train the classifier in the subsequent step. In testing mode, the same procedure is followed, but the extracted features are used to directly classify the traffic sign using a pre-trained classifier. A minimal code sketch of this flow is given below.
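To make the two-mode pipeline concrete, the following minimal Python/OpenCV sketch traces the steps named in Figure 6: color segmentation, object selection (noise removal) and feature extraction. All thresholds, the choice of a color histogram as the feature, and the function names are illustrative assumptions, not taken from any cited system.

```python
import cv2
import numpy as np

def color_segment(bgr):
    """Color segmentation step: keep pixels whose hue matches a typical
    red sign; returns a binary image. Thresholds are illustrative only."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    low_red = cv2.inRange(hsv, (0, 100, 60), (10, 255, 255))
    high_red = cv2.inRange(hsv, (170, 100, 60), (180, 255, 255))
    return cv2.bitwise_or(low_red, high_red)

def select_objects(mask, min_area=200):
    """Object selector step: drop small, noisy blobs from the binary image."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep

def extract_features(bgr, mask):
    """Feature extraction step: a normalized color histogram of the masked
    region stands in for whatever features a real system would use."""
    hist = cv2.calcHist([bgr], [0, 1, 2], mask, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()
```

In training mode, the vectors returned by extract_features would be stored with their class labels and used to fit a classifier; in testing mode, the same vectors would be passed to the pre-trained classifier.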
Tracking is used in some research to improve recognition performance [21]. The three stages of a TSDR system are shown in Figure 7 and further discussed in the subsequent sections.

4.1. Detection Phase

The initial stage in any TSDR system is locating potential sign regions in a natural scene image input. This initial stage is called the detection stage, in which an ROI containing a traffic sign is localized [17,23,24]. Traffic signs usually have a strict color scheme (red, blue, and white) and specific shapes (round, square, and triangular). These inherent characteristics distinguish them from other outdoor objects, making them suitable for automatic processing by a computer vision system and thus allowing the TSDR system to distinguish traffic signs from the background scene [21,25]. Therefore, traffic sign detection methods have traditionally been classified into color-based, shape-based and hybrid (color-and-shape-based) methods [23,26]. Detection methods are outlined in Figure 8 and compared in the following subsections.

4.1.1. Color-Based Methods

Color-based methods take advantage of the fact that traffic signs are designed to be easily distinguished from their surroundings, often being colored in highly visible contrasting colors [17]. These colors are extracted to detect ROIs within an input image using various image-processing methods. Detection methods based on color characteristics have low computational cost, good robustness and other desirable characteristics, which can improve detection performance to a certain extent [25]. However, methods based on color information can be used with high-resolution color datasets but not with grayscale images [23]. In addition, the main problem with using color is its sensitivity to various factors, such as the distance to the target, weather conditions, time of day, as well as reflection, age and condition of the signs [17,23].
In color-based approaches, the captured images are partitioned into subsets of connected pixels that share similar color properties [26]. The traffic signs are then extracted by color thresholding segmentation based on smart data processing. The choice of color space is important during the detection phase; hence, the captured images are usually transformed into a specific color space where the signs are more distinct [9]. According to [27], the developed color-based detection methods are based on the red, green, blue (RGB) color space [28,29,30], the hue, saturation, and value (HSV) color space [31,32], the hue, saturation, and intensity (HSI) color space [33] and various other color spaces [34,35]. The most common color-based detection methods are represented in Figure 9 and reviewed in Table 3.
Color thresholding segmentation is one of the earliest techniques used to segment digital images [26]. Generally, it is based on the assumption that adjacent pixels whose value (grey level, color value, texture, etc.) lies within a certain range belong to the same class [36]. Normal color segmentation was used for traffic sign detection by Varun et al. [37] on their own dataset containing 2000 test images, resulting in an accuracy level of 82%. The efficiency was improved in [38] by using color segmentation followed by a color enhancement method. In recent research, color thresholding has commonly been used for pre-processing purposes [39,40]. In [39], pre-filtering was used to train a color classifier, which posed a regression problem whose core is finding a linear function, as shown in (1).
$$f(x) = \langle w, x \rangle + b, \quad x = (v_1, v_2, v_3) \qquad (1)$$
where $v_i$ is the intensity value of the $i$th channel ($i = 1, 2, 3$ for a three-channel RGB image), $(w, b) \in \mathbb{R}^3 \times \mathbb{R}$ are parameters that control the function, and the decision rule is given by $\mathrm{sgn}(f(x))$. In [40], Vazquez-Reina et al. used RGB to HSI color space conversion with the additional feature of white sign detection; the main advantage of this feature is its detection of illuminated signs. In Refs. [33,41,42,43,44,45], the HSI/HSV transformation approach was used for detection. The major advantages of the HSI color space over the RGB color space are that its chromatic components, hue and saturation, are close to human color perception and more immune to lighting conditions. In [33], a simple RGB to HSI color space transformation is used for the TSDR purpose. In [44], the HSI color space was used for detection, and the detected sign was then passed to the distance-to-borders (DtBs) feature for shape detection to increase the accuracy level; the average accuracy was approximately 88.4% on the GRAM database. The main limitation of the HSV transformation is the strong hue dependency of brightness: value is only a measure of the physical lightness of a color, not its perceived brightness, so a fully saturated yellow and a fully saturated blue have the same value.
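As an illustration of Eq. (1), the ridge-regression pre-filter of [39] can be sketched in a few lines of NumPy. The toy pixel data, the labels and the regularization weight below are invented for the example.

```python
import numpy as np

# Hypothetical training pixels (v1, v2, v3) = RGB intensities;
# +1 marks sign-colored pixels, -1 marks background pixels.
X = np.array([[200, 30, 40], [180, 20, 35], [60, 120, 70], [90, 90, 200]], float)
y = np.array([1.0, 1.0, -1.0, -1.0])

# Ridge-regression fit of the linear function f(x) = <w, x> + b.
lam = 1.0
Xb = np.hstack([X, np.ones((len(X), 1))])                  # absorb the bias b
wb = np.linalg.solve(Xb.T @ Xb + lam * np.eye(4), Xb.T @ y)
w, b = wb[:3], wb[3]

def classify_pixel(v):
    """Decision rule sgn(f(x)) from Eq. (1)."""
    return np.sign(w @ v + b)

print(classify_pixel(np.array([190.0, 25.0, 38.0])))       # likely +1 (red-ish)
```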
Region growing is another simple and popular technique used for detection in TSDR systems. It is a pixel-based image segmentation method that starts by selecting a starting point, or seed pixel. The region then develops by adding neighboring pixels that are uniform according to a certain match criterion, increasing the size of the region step by step [46]. This method was used by Nicchiotti et al. [47] and Priese et al. [48] for TSDR; its efficiency was not very high, approximately 84%. Because this method depends on the seed values, problems can occur when the seed points lie on edges, and, if the growth of one region dominates, ambiguities around the edges of adjacent regions may not be resolved correctly.
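The seed-driven growth step can be sketched as follows; the 4-connectivity, the running-mean match criterion and the tolerance value are assumptions made for illustration.

```python
from collections import deque
import numpy as np

def region_grow(gray, seed, tol=12.0):
    """Grow a region from `seed` = (row, col): a 4-connected neighbor joins
    if its intensity is within `tol` of the region's running mean."""
    h, w = gray.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(gray[nr, nc]) - total / count) <= tol):
                mask[nr, nc] = True
                total += float(gray[nr, nc])
                count += 1
                queue.append((nr, nc))
    return mask
```

Seeding this function on an edge pixel illustrates the failure mode noted above: the initial mean is unrepresentative, and the region can leak into both adjacent surfaces.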
The color indexing method is another simple method that identifies objects entirely on the basis of color [49]. It was developed by Swain and Ballard [50] and used by researchers in the early 1990s. In this method, any two colored images are compared via their color histograms. For a given pair of histograms, I and M, each containing n bins, the histogram intersection is defined as [50]:
$$\sum_{j=1}^{n} \min(I_j, M_j).$$
The match value is then
$$H(I, M) = \frac{\sum_{j=1}^{n} \min(I_j, M_j)}{\sum_{j=1}^{n} M_j}.$$
The advantage of using color histograms is their robustness with respect to geometric changes of the projected objects [51]. However, color indexing depends on segmentation, and complete, efficient and reliable segmentation cannot be performed prior to recognition. Thus, color indexing is regarded as an unreliable method.
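The two formulas above translate directly into code; the 4-bin histograms here are toy data for the example.

```python
import numpy as np

def histogram_intersection_match(I, M):
    """Swain-Ballard match value H(I, M): the histogram intersection
    normalized by the model histogram's total count."""
    return np.minimum(I, M).sum() / M.sum()

I = np.array([10, 40, 25, 25], float)       # candidate region histogram
M = np.array([12, 38, 30, 20], float)       # stored sign model histogram
print(histogram_intersection_match(I, M))   # 0.93; 1.0 is a perfect match
```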
Another approach to color segmentation is dynamic pixel aggregation [52]. In this method, segmentation is accomplished by introducing a dynamic threshold into the pixel aggregation process in the HSV color space. The applied threshold varies nonlinearly with the seed saturation, and its value is defined as [52]
$$a = k \sin(S_{seed})$$
where $k$ is a normalization parameter and $S_{seed}$ is the saturation of the seed pixel. The main advantage of this approach is that it reduces hue instability. However, it fails to reduce other segmentation-based problems, such as fading and illumination changes. This method was tested in [52] on the authors' own database of 620 outdoor images, resulting in accuracy levels of approximately 86.3% to 95.7%.
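A sketch of the dynamic aggregation criterion, with saturation normalized to [0, 1] and an assumed value for the normalization parameter k:

```python
import numpy as np

def hue_threshold(seed_saturation, k=0.3):
    """Dynamic threshold a = k * sin(S_seed): poorly saturated seeds, whose
    hue is unstable, receive a tighter hue threshold. k is assumed here."""
    return k * np.sin(seed_saturation)

def aggregates(seed_hue, seed_saturation, pixel_hue):
    """Aggregate a pixel if its hue lies within the dynamic threshold."""
    d = abs(pixel_hue - seed_hue) % 1.0
    d = min(d, 1.0 - d)                      # hue is circular in [0, 1)
    return d <= hue_threshold(seed_saturation)
```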
The International Commission on Illumination 1997 Interim Color Appearance Model (CIECAM97) is another method that has been used to detect and extract color information and to segment and classify traffic signs. Generally, color appearance models are capable of predicting color appearance under a variety of viewing conditions, including different light sources, luminance levels, surrounds, and background lightness [53]. This model was used by Gao et al. [54] to transform the image from RGB to CIE XYZ values. The main drawback of this model is its chromatic-adaptation transform, called the Bradford transform, in which chromatic blues appear purple as the chroma is reduced at a constant hue angle.
The YCbCr color space has also been considered in recent approaches. Unlike the most common color space, RGB, which represents color as red, green and blue components, YCbCr represents color as brightness (Y) and two color-difference signals (Cb and Cr). It was used for detection in [55], showing an accuracy level over 93% on the authors' own collected database. The efficiency was improved to approximately 97.6% in [56] by first transforming the RGB color space to the YCbCr color space, then segmenting the image and performing shape-based analysis.
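A minimal YCbCr segmentation sketch with OpenCV follows; note that OpenCV orders the channels Y, Cr, Cb, and the red-sign thresholds below are illustrative rather than those of [55,56].

```python
import cv2

def red_sign_mask_ycbcr(bgr):
    """Segment red-ish regions in YCbCr: high Cr (red-difference) with a
    moderate Cb, independent of the brightness channel Y."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # channels: Y, Cr, Cb
    return cv2.inRange(ycrcb, (0, 150, 90), (255, 255, 130))
```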

4.1.2. Shape-Based Methods

Just as traffic signs have specific colors, they also have well-defined shapes that can be searched for. Shape-based methods ignore color in favor of the characteristic shapes of signs [17]. Detecting a traffic sign via its shape follows the general algorithm of shape detection, i.e., finding the contours and approximating them to reach a final decision based on the number of contours [15,23]. Shape detection is preferred for traffic sign recognition because the colors found on traffic signs change according to illumination. In addition, shape detection reduces the search for road sign regions from the whole image to a small number of pixels [57]. However, the memory and computational requirements of this method are quite high for large images [58]. In addition, damaged, partially obscured, faded and blurred traffic signs may be difficult to detect accurately, leading to a low accuracy rate. In these methods, traffic signs are detected from the edges of the image, analyzed by structural or comprehensive approaches [23]. Many shape-based methods are popular in TSDR systems. These methods are represented in Figure 10 and reviewed in Table 4.
The most common shape-based approach is the Hough transform. The Hough transform isolates features of a particular shape within a given frame/image [15]. It was applied by Zaklouta et al. [59] to detect triangular and circular signs; their test datasets contained 14,763 and 1584 signs, and the accuracy rate was approximately 90%. The main advantage of the Hough transform is that it tolerates gaps in feature boundary descriptions and is relatively unaffected by image noise [60]. However, its main disadvantage is its dependency on the input data. In addition, it is only efficient when a high number of votes falls in the correct bin. When the parameter space is large, the average number of votes cast in a single bin becomes low, and thus the detection rate decreases.
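For circular signs, the Hough transform is available directly in OpenCV, as sketched below; the parameter values are illustrative and normally need tuning per dataset, reflecting the input-data dependency noted above.

```python
import cv2
import numpy as np

def detect_circular_signs(bgr):
    """Return candidate circles (x, y, r) found by the Hough transform."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)        # Hough voting is noise-sensitive
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=40,
                               minRadius=10, maxRadius=80)
    return [] if circles is None else np.round(circles[0]).astype(int)
```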
Another shape-based detection method is similarity detection. In this method, detection is performed by computing a similarity factor between a segmented region and a set of binary image samples representing each road sign shape [57]. This method was used by Vitabile et al. [52] on their collected dataset, with an accuracy level over 86.3%. The main advantage of this method is its straightforwardness, whilst its main drawback is that the input image must be perfectly segmented and the dimensions must be the same. In [52], the images were initially converted from RGB to HSV, then segmented and resized to 36 × 36 pixels using the coordinate normalization
$$x' = \frac{x - x_{min}}{x_{max} - x_{min}} \cdot n, \qquad y' = \frac{y - y_{min}}{y_{max} - y_{min}} \cdot n$$
where $x_{max}$, $y_{max}$, $x_{min}$ and $y_{min}$ are the coordinates of the rectangle vertices and $n$ is the normalized image dimension (36 pixels in [52]).
Distance transform matching (DTM) is another type of shape-based detection method. In this method, the distance transform of the image is formed by assigning to each non-edge pixel a value that measures the distance to the nearest edge pixel. It was used by Gavrila [61], who captured large variations in object shape by matching the template features to the nearest features in the image over a distribution of distances. This distance is inversely proportional to the match between the image and the templates. The chamfer distance is
$$D_{chamfer}(T, I) \equiv \frac{1}{|T|} \sum_{t \in T} d_I(t)$$
where $|T|$ denotes the number of features in $T$ and $d_I(t)$ denotes the distance between feature $t$ in $T$ and the closest feature in $I$. In his experiment, Gavrila [61] used DTM to examine 1000 collected test images, and the accuracy was approximately 95%. The DTM technique is efficient for detecting arbitrary shapes within images; however, its main disadvantage is its vulnerability to cluttered images.
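The chamfer score can be computed with a distance transform, as sketched below; the edge maps are assumed to be uint8 images with edge pixels set to 255 (e.g., from cv2.Canny).

```python
import cv2
import numpy as np

def chamfer_distance(template_edges, image_edges):
    """D_chamfer(T, I): mean distance from each template edge pixel to the
    nearest image edge pixel. Lower scores indicate better matches."""
    # Distance from every pixel to the nearest image edge (edges become 0).
    dist = cv2.distanceTransform(255 - image_edges, cv2.DIST_L2, 3)
    t = template_edges > 0
    return dist[t].mean() if t.any() else np.inf
```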
Two other popular colorless traffic sign detection methods are edge detection features and Haar-like features. Edge detection refers to the process of identifying and locating sharp discontinuities in an image [62]. With this method, the image data are simplified to minimize the amount of data to be processed. This method was used in [63,64,65,66,67] to indicate the boundaries of objects within the image by finding a set of connected curves. The Haar-like features method was proposed by Paul Viola and Michael Jones [68], based on the Haar wavelet, to recognize target objects. As indicated in Table 4, Haar-like feature-based detection was used in [69,70] for traffic sign detection. Its main advantage is its calculation speed, as features of any size can be calculated in constant time. However, its weaknesses are the requirement for a large number of training images and high false positive rates [23].

4.1.3. Hybrid Methods

As previously discussed, both color-based and shape-based methods have advantages and disadvantages. Therefore, researchers have recently tried to improve the efficiency of TSDR systems using a combination of color- and shape-based features. In hybrid methods, either a color-based approach takes shape into account after having looked at colors, or shape detection is used as the main method but integrates some color aspects as well. In color-based approaches, a two-stage strategy is usually employed: first, segmentation is performed to narrow the search space; subsequently, shape detection is applied only to the segmented regions [58]. Color and shape features were combined into traffic sign detection algorithms in [71,72,73,74,75,76], in which different signs with various colors and shapes were detected using different datasets.

4.2. Tracking Phase

For robust detection, and in order to increase the accuracy of the information used in identifying traffic signs, the signs are tracked using a simple motion model and temporal information propagation. This tracking process is very important for real-time applications, by which the TSDR system verifies the correctness of a traffic sign and keeps tracking it to avoid handling the same detected sign more than once [21,83]. The tracking process is performed by feeding the TSDR system with a video recorded by a camera fixed on the vehicle and monitoring the sign candidates over a number of consecutive frames. The accepted sign candidates are only those that show up more than once. If an object is not a traffic sign, or is a candidate that shows up only once, it can be eliminated as early as possible, and thus the computation time of the detection task is reduced [84]. According to [85], and as shown in Table 5, the most commonly adopted tracker is the Kalman filter, as in [82,85,86,87,88]. The block diagram of a TSDR system with a tracking process based on the Kalman filter, as proposed in [82], is shown in Figure 11. In the figure, SIFT, CCD and MLP are abbreviations of scale-invariant feature transform, contracting curve density and multi-layer perceptron, respectively.
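A sketch of the tracking stage is given below: a constant-velocity Kalman filter over a sign's image position. The state layout and noise magnitudes are assumptions, not the settings of [82] or the other cited trackers.

```python
import cv2
import numpy as np

def make_sign_tracker(dt=1.0):
    """Kalman filter with state (x, y, vx, vy) and measurement (x, y)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32)
    return kf

kf = make_sign_tracker()
prediction = kf.predict()                            # where the sign should be
kf.correct(np.array([[120.0], [45.0]], np.float32))  # fuse the new detection
```

Per frame, each candidate is associated with the prediction; candidates that are never re-confirmed in subsequent frames are discarded, implementing the "shows up more than once" rule described above.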

4.3. Classification Phase

After the localization of ROIs, classification techniques are employed to determine the content of the detected traffic signs [1]. Understanding the traffic rule enforced by a sign is achieved by reading the inner part of the detected sign using a classifier. Classification algorithms are neither color-based nor shape-based; the classifier usually takes a certain set of features as input, which distinguishes the candidates from each other. Different algorithms are used to classify traffic signs swiftly and accurately. The conventional methods used for the classification of traffic signs are outlined in Figure 12 and reviewed in Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13.
Template matching is a common method in image processing and pattern recognition. It is a low-level approach that uses pre-defined templates to search the whole image pixel by pixel or to perform small-window matching [15]. It was used for TSDR by Ohara et al. [90] and Torresen et al. [91]. It has the advantages of being fast, straightforward and accurate (with a hit rate of approximately 90% on the authors' own photographed dataset). However, its drawback is that it is very sensitive to noise and occlusions; in addition, it requires a separate template for each scale and orientation. Examples of TSDR systems using template matching are shown in Table 6.
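A minimal OpenCV template-matching sketch follows; the normalized-correlation score and the acceptance threshold are assumptions.

```python
import cv2

def match_sign(scene_gray, template_gray, threshold=0.8):
    """Slide the template over the scene and return the best-match corner
    if its normalized correlation score clears `threshold`. A separate
    template is needed for every sign type, scale and orientation."""
    scores = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, best, _, best_loc = cv2.minMaxLoc(scores)
    return best_loc if best >= threshold else None
```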
Another common classification method is the random forest. This machine learning method operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes output by the individual trees. This method was compared in [92,93] with SVM, MLP and histogram of oriented gradients (HOG)-based classifiers, showing the highest accuracy rate and the lowest computational time. On the authors' own dataset, the accuracy was approximately 94.2%, whereas the accuracy of the SVM was 87.8% and that of the MLP 89.2%. In terms of computational time for a single classification, the SVM took 115.87 ms, the MLP 1.45 ms, and a decision tree 0.15 ms. Despite its high accuracy and low computation time, the main limitation of a random forest is that a large number of trees can make the algorithm slow and ineffective for real-time predictions. Examples of TSDR systems using decision tree methods are shown in Table 7.
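A scikit-learn sketch of a random forest sign classifier is given below; the feature dimensionality, the forest size and the randomly generated data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: one feature vector (e.g., HOG-like) per candidate ROI.
X_train = np.random.rand(500, 128)
y_train = np.random.randint(0, 10, 500)       # 10 hypothetical sign classes

# A modest forest: as noted above, very large forests become too slow
# for real-time prediction.
clf = RandomForestClassifier(n_estimators=50, max_depth=12, n_jobs=-1)
clf.fit(X_train, y_train)
print(clf.predict(np.random.rand(1, 128)))
```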
The genetic algorithm is another classification method. It is based on a natural selection process that mimics biological evolution, and it was used early in this century. This method was used for traffic sign recognition by Aoyagi et al. [98] and Escalera et al. [99]. These studies showed that the method is effective at detecting traffic signs even when a sign suffers some shape loss or illumination problems. The disadvantages of the genetic algorithm are its non-deterministic running time and the lack of a guarantee of finding the best solution [57]. Examples of TSDR systems using a genetic algorithm are shown in Table 8.
The other most common method for classification is the artificial neural network (ANN). This method has gained increasing popularity in recent years due to advancements in general-purpose computing on graphics processing units (GPGPU) [2]. It is also popular for its robustness, greater adaptability to changes, flexibility and high accuracy [100]. Another key advantage is its ability to recognize and classify objects at the same time while maintaining high speed and accuracy [2]. ANN-based classifiers were used in [56,99,101,102,103,104,105,106,107,108] for TSDR. In the experiment conducted in [56], the hit rate was 97.6% and the computational time was 0.2 s. However, in [107], ANN-based methods were described as having some limitations, such as slowness and instability in NN training when the learning step is too large. This method was compared with template matching in [108], concluding that NNs require a large number of training samples for real-world applications. Examples of TSDR systems using an ANN are shown in Table 9.
Another increasingly popular method in vision-based object recognition is deep learning. This approach has attracted general interest in recent years owing to its high classification performance and the power of representation learning from raw data [109,110]. Deep learning is part of a broader family of machine learning methods. In contrast to task-specific methods, deep learning focuses on learning data representations in a supervised, weakly supervised or unsupervised manner. Deep learning methods use a cascade of many layers of nonlinear processing units for feature extraction and transformation; each successive layer uses the output of the previous layer as input, and higher-level features are derived from lower-level features to form a hierarchical representation [110]. Among deep learning models, convolutional neural networks (CNNs) have acquired particular prominence through their repeatedly confirmed superiority [111]. According to [112], CNN models are the most widely used deep learning algorithms for traffic sign classification to date. Among the examples applied to traffic sign classification are committee CNNs [113], multi-scale CNNs [114], multi-column CNNs [102], multi-task CNNs [111,115], hinge-loss CNNs [116], deep CNNs [46,117], a CNN with dilated convolutions [118], a CNN with a generative adversarial network (GAN) [119], and a CNN with an SVM [120]. Based on these studies, simultaneous detection and classification can be achieved using deep learning-based methods, resulting in improved performance and boosted training and testing speeds. Examples of TSDR systems using deep learning are shown in Table 10.
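A minimal PyTorch sketch of a CNN for GTSRB-style classification (43 classes, 32 × 32 RGB crops) is shown below; it is a generic small network, not any of the published architectures cited above.

```python
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    """Two conv blocks followed by a small classifier head."""
    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = SignCNN()(torch.randn(1, 3, 32, 32))   # shape: (1, 43)
```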
Adaptive boosting, or AdaBoost, is a combination of multiple learning algorithms that can be utilized for regression or classification [15]. It is a cascade algorithm introduced by Freund and Schapire [122]. Its working concept is based on constructing multiple weak classifiers and assembling them into a single strong classifier for the overall task. The AdaBoost method was used for TSDR in [123,124,125,126,127]. Based on these studies, it can be concluded that the main advantages of AdaBoost are its simplicity, high prediction power and ability to form a cascaded architecture that improves computational efficiency. However, its main disadvantage is that if the input data have wide variations or abrupt changes in the background, the training time increases and classifier accuracy decreases [121]. In addition, an AdaBoost-trained classifier cannot be dynamically adjusted to newly arriving samples unless it is retrained from the beginning, which is time consuming and requires storing all historical samples [128]. Examples of TSDR systems using an AdaBoost method are shown in Table 11.
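The weak-to-strong assembly can be sketched with scikit-learn: many one-level decision trees ("stumps") boosted into a single classifier. The data are placeholders, and the parameter is named base_estimator in scikit-learn versions before 1.2.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder Haar-like feature vectors for sign / non-sign windows.
X = np.random.rand(400, 64)
y = np.random.randint(0, 2, 400)

# 100 weak one-level trees assembled into one strong classifier.
ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)
ada.fit(X, y)
```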
The support vector machine (SVM) is another classification method; it constructs an N-dimensional hyperplane that optimally separates the data into two categories. More precisely, an SVM is a binary classifier that separates two different classes using a subset of the data samples called support vectors. It was implemented as a classifier for traffic sign recognition in [44,55,88,129,130,131,132,133,134,135,136]. This classification method is robust, highly accurate and extremely fast, which makes it a good choice for large amounts of training data. In [129], an SVM-based classifier was applied for detecting speed limit signs and was compared with classifiers based on the artificial neural network multilayer perceptron (MLP), k-nearest neighbors (kNN), least mean squares (LMS), least squares (LS) and the extreme learning machine (ELM). The comparison demonstrated that the SVM-based classifier obtained the highest accuracy and the lowest standard deviation of all the classifiers. Similarly, in a recent study [3], a cascaded linear SVM classifier was used for detecting speed limit signs, achieving a recall of 99.81% with a precision of 99.08% on the GTSRB dataset. In [55], an SVM-based classifier was used to detect and classify red road signs in 1000 test images, and the accuracy rate was over 95%. In [88,131], SVMs with Gaussian kernels were used for the recognition of traffic signs, and the success rates were 92.3% and 92.6%, respectively. In [136], an advanced SVM method was proposed and tested with binary pictogram and grayscale images, achieving high accuracy rates of approximately 99.2% and 95.9%, respectively. SVMs have also shown great effectiveness in extracting the most relevant shots of an event of interest in a video: a new SVM-based classifier called the nearly-isotonic SVM (NI-SVM) was proposed in [137] for prioritizing video shots using a novel notion of semantic saliency, and it exhibited higher discriminative power in event analysis tasks. The main disadvantage of the SVM is the lack of transparency of its results, i.e., how the results were obtained by the kernel and how they should be interpreted; in an SVM, such things are unknown and cannot be known due to the high-dimensional vector space. Examples of TSDR systems using an SVM are shown in Table 12.
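Below is a sketch of an SVM classifier over HOG features, in the spirit of the Gaussian-kernel systems cited above; the HOG parameters, the kernel settings and the random data are assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG over 32x32 crops: 16x16 blocks, 8x8 stride and cells, 9 bins.
hog = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

def hog_features(gray32):
    """Descriptor of one 32x32 uint8 grayscale crop."""
    return hog.compute(gray32).flatten()

# Placeholder crops and labels; RBF = Gaussian kernel.
X = np.array([hog_features(np.random.randint(0, 255, (32, 32), np.uint8))
              for _ in range(200)])
y = np.random.randint(0, 5, 200)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
```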
In addition to these conventional methods, researchers have used other methods for recognition. In [138], the SIFT matching method was used for recognizing broken areas of a traffic sign; this method adjusts the traffic sign to a standard camera axis and then compares it with a reference image. Sebanja et al. [139] used principal component analysis (PCA) for both detection and recognition, with an accuracy rate of approximately 99.2%. In [140], improved fast radial symmetry (IFRS) was used for detection and a pictogram distribution histogram (PDH) for recognition. Soheilian et al. [141] used template matching followed by a three-dimensional (3D) reconstruction algorithm to reconstruct the traffic signs obtained from video data and to improve the viewing angle for detecting traffic signs. In [142], Pei et al. used low-rank matrix recovery (LRMR) to recover the correlation for classification, with a hit rate of 97.51% in less than 0.2 s. Gonzalez-Reyna et al. [143] used oriented gradient maps for feature extraction, which are invariant to illumination and variable lighting; for classification, they used the Karhunen–Loeve transform and an MLP, reporting an accuracy of 95.9% and a processing time of 0.0054 s per image. In [35], Prieto et al. used a self-organizing map (SOM) for recognition, where at every level a pre-processor extracts a feature vector characterizing the ROI and passes it to the SOM; the accuracy rate was very high, approximately 99%. Examples of TSDR systems using these other methods are shown in Table 13.

5. Current Issues and Challenges

TSDR is an essential part of ADAS. It is mainly designed to operate in a real-time environment, enhancing driver safety through the fast acquisition and interpretation of traffic signs. However, there are a number of external, non-technical challenges that this system may face in the real environment, degrading its performance significantly. Among the many issues that need to be addressed while developing a TSDR system are those outlined in Figure 13.
Variable lighting conditions: Variable lighting is one of the key issues to be considered during TSDR system development. As aforementioned, one of the main distinguishing features of a traffic sign is its unique colors, which discriminate it from the background and thus facilitate its detection. However, in outdoor environments, illumination changes greatly affect the color of traffic signs, making color information unreliable as the main feature for traffic sign detection. To cope with this challenge, a method based on adaptive color threshold segmentation and efficient shape symmetry algorithms was recently proposed by Xu et al. [26]. This method is claimed to be robust in complex illumination environments, exceeding a detection rate of 94% on the GTSDB dataset.
Fading and blurring effect: Another important difficulty for a TSDR system is the fading and blurring of traffic signs, for instance when illuminated or viewed through rain or snow. These conditions can increase false detections and reduce the effectiveness of a TSDR system. Using a hybrid shape-based detection and recognition method in such conditions can be very useful and may give superior performance [146].
Affected visibility: Light emitted by the headlamps of oncoming vehicles, shadows, and other weather-related factors such as rain, clouds, snow and fog can lead to poor visibility. Recognizing traffic signs in road images taken under such conditions is a challenging task, and a simple detector may fail to detect them. To resolve this problem, it is necessary to enhance the quality of the captured images and make them clear using an image pre-processing technique; pre-processing filters the image and converts the input information into a usable format for further analysis and detection [147].
Multiple appearances of signs: When detecting traffic signs, mainly in city areas that are crowded with signs, multiple traffic signs appearing at the same time and similarly shaped man-made objects can cause overlapping of signs and lead to false detections. The detection process can also be affected by rotation, translation, scaling and partial occlusion. Li et al. [43] used an HSI transform and a fuzzy shape recognizer that is robust to these problems; its accuracy rates under different weather conditions were 94.66% (sunny), 92.05% (cloudy) and 90.72% (rainy).
Motion artifacts: In the ADAS application, images are captured from a moving vehicle, sometimes with a low-resolution camera; thus, they often appear blurry. Recognition of blurred images is a challenging task and may lead to false results. In this respect, a TSDR system that integrates color, shape and motion information could be a possible solution. In such a system, the robustness of recognition is improved by coupling detection and classification with tracking using temporal information fusion [73]: the detected traffic signs are tracked, and individual detections from sequential frames (t−t0, …, t) are temporally fused for a robust overall recognition.
Damaged or partially obscured signs: The other distinctive feature of a traffic sign is its unique shape. However, traffic signs can appear in various conditions, including damaged, partly occluded and/or clustered. These conditions can be very problematic for detection systems, particularly shape-based ones. To overcome these problems, hybrid methods based on color segmentation and shape analysis are recommended [15].
Unavailability of public databases: A database is a crucial requirement for developing any TSDR system; it is used for training and testing the detection and recognition methods. One of the obstacles facing this research area is the lack of large, properly organized, and freely available public image databases. According to [12], for example, the most commonly used database (GTSDB) contains only 600 training images and 300 evaluation images. Of the eight categories defined in the Vienna convention, GTSDB covers only three categories of traffic signs for detection: prohibitory, mandatory and danger. All included images are of German traffic signs only, which differ substantially from those in other parts of the world. To resolve the database scarcity problem, one idea is to create a unified global database containing a large number of images and videos of road scenes in various countries around the world. These scenes should contain all categories of traffic signs under all possible weather conditions and physical states of the signs.
Real-time application: The detection and recognition of traffic signs are bound up with the real-time performance of the system. Accuracy and speed are surely the two main requirements in practical applications, and achieving them requires a system with efficient algorithms and powerful hardware. A good choice is convolutional neural network-based learning methods with GPGPU technologies [2].
In brief, although many relevant approaches have been presented in the literature, none can solve the traffic sign recognition problem very well under conditions of varying illumination, motion blur, occlusion and so on. Therefore, more effective and more robust approaches need to be developed [12].

6. Conclusions and Suggestions

The major objective of this paper was to analyze the main directions of research in the field of automatic TSDR and to categorize the main approaches into particular sections, making the topics easy to understand and giving an overall view of the research for future directions. Unlike most of the available review papers, the scope of this paper has been broadened to cover all recognition phases: detection, tracking and classification. In addition, this paper has tried to discuss as many studies as possible, in an attempt to provide a comprehensive review of the various alternative methods available for traffic sign detection and recognition, including method categorization, current trends and the research challenges associated with TSDR systems. The overall summary is presented in Figure 14.
The conducted review reveals that research in traffic sign detection and recognition has grown rapidly: the number of papers published during the last three years was approximately 280, representing about 41.69% of the total published during the last decade as a whole. With regard to the methods used, it was observed that traffic sign detection and recognition incorporates three main steps: detection, tracking and classification; in each step, many methods and algorithms have been applied, each with its own merits and demerits. In general, the methods applied in detection and recognition consider either the color or the shape information of the traffic sign. However, it is well known that image quality in real-world traffic scenarios is usually poor due to low resolution, weather conditions, varying lighting, motion blur, occlusion, scale, rotation and so on. In addition, traffic signs appear in a variety of forms, with high inter-class similarity and complicated backgrounds. Thus, the proper integration of color and shape information in both the detection and classification phases is a very promising and exciting task that is in need of much more attention. For tracking, the Kalman filter and its variations are the most common methods. For classification, artificial neural network and support vector machine-based methods were found to be the most popular, with high detection rates, high flexibility and easy adaptability. Despite the recent improvements in the overall performance of TSDR systems, more research is still needed to achieve a rigorous, robust and reliable TSDR system. It is believed that TSDR system performance can be enhanced by merging the detection and classification tasks into one step rather than performing them separately; by doing so, classification can improve detection and vice versa. Another avenue for improvement is the use of standard, sufficiently large databases for learning, testing and evaluating the proposed algorithms; in this way, a TSDR system could recognize all eight categories of traffic signs in real environments under different conditions. This paper will be a useful reference for researchers seeking an understanding of the current status of research in the field of TSDR and the related research problems in need of solutions.

Author Contributions

Conceptualization, M.A.H. and A.H., S.A.S.; methodology, M.A.H., M.A.A. and S.B.W.; software, M.A.A.; validation, S.B.W.; formal analysis, M.A.A.; investigation, M.A.H., and M.B.M.; resources, M.A.H., S.B.W.; data curation, M.A.A. and S.B.W.; writing—original draft preparation, M.A.H., M.A.A., S.B.W., P.J.K.; writing—review and editing, M.A.H., P.J.K., A.H., S.A.S. and M.B.M.; visualization, M.A.A.; supervision, M.A.H., S.A.S.; project administration, M.A.H.; funding acquisition, M.A.H.

Funding

This research was funded by the TNB Bold Strategic Grant J510050797 under Universiti Tenaga Nasional, and by Universiti Kebangsaan Malaysia under Grant DIP-2018-020.

Acknowledgments

This work is supported by the collaboration between Universiti Tenaga Nasional and Universiti Kebangsaan Malaysia.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Alturki, A.S. Traffic Sign Detection and Recognition Using Adaptive Threshold Segmentation with Fuzzy Neural Network Classification. In Proceedings of the 2018 International Symposium on Networks, Computers and Communications (ISNCC), Rome, Italy, 19–21 June 2018; pp. 1–7. [Google Scholar]
  2. Satılmış, Y.; Tufan, F.; Şara, M.; Karslı, M.; Eken, S.; Sayar, A. CNN Based Traffic Sign Recognition for Mini Autonomous Vehicles. In Proceedings of the International Conference on Information Systems Architecture and Technology, Nysa, Poland, 16–18 September 2018; pp. 85–94. [Google Scholar]
  3. Saadna, Y.; Behloul, A.; Mezzoudj, S. Speed limit sign detection and recognition system using SVM and MNIST datasets. Neural Comput. Appl. 2019, 1–11. [Google Scholar] [CrossRef]
  4. Guo, J.; Lu, J.; Qu, Y.; Li, C. Traffic-Sign Spotting in the Wild via Deep Features. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 120–125. [Google Scholar]
  5. Vokhidov, H.; Hong, H.; Kang, J.; Hoang, T.; Park, K. Recognition of damaged arrow-road markings by visible light camera sensor based on convolutional neural network. Sensors 2016, 16, 2160. [Google Scholar] [CrossRef] [PubMed]
  6. De la Escalera, A.; Armingol, J.M.; Mata, M. Traffic sign recognition and analysis for intelligent vehicles. Image Vis. Comput. 2003, 21, 247–258. [Google Scholar] [CrossRef] [Green Version]
  7. Hoang, T.M.; Baek, N.R.; Cho, S.W.; Kim, K.W.; Park, K.R. Road lane detection robust to shadows based on a fuzzy system using a visible light camera sensor. Sensors 2017, 17, 2475. [Google Scholar] [CrossRef] [PubMed]
  8. Economic Commission for Europe. Convention on Traffic Signs and Signals; Vienna Convention: Vienna, Austria, 1968. [Google Scholar]
  9. Luo, H.; Yang, Y.; Tong, B.; Wu, F.; Fan, B. Traffic sign recognition using a multi-task convolutional neural network. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1100–1111. [Google Scholar] [CrossRef]
  10. Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8. [Google Scholar]
  11. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. The German traffic sign recognition benchmark: A multi-class classification competition. In Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, CA, USA, 31 July–5 August 2011; pp. 1453–1460. [Google Scholar]
  12. Li, J.; Wang, Z. Real-time traffic sign recognition based on efficient CNNs in the wild. IEEE Trans. Intell. Transp. Syst. 2018, 20, 975–984. [Google Scholar] [CrossRef]
  13. Madani, A.; Yusof, R. Malaysian Traffic Sign Dataset for Traffic Sign Detection and Recognition Systems. J. Telecommun. Electron. Comput. Eng. (JTEC) 2016, 8, 137–143. [Google Scholar]
  14. Liu, H.; Ran, B. Vision-based stop sign detection and recognition system for intelligent vehicles. Transp. Res. Rec. J. Transp. Res. Board 2001, 1748, 161–166. [Google Scholar] [CrossRef]
  15. Nandi, D.; Saif, A.S.; Prottoy, P.; Zubair, K.M.; Shubho, S.A. Traffic Sign Detection based on Color Segmentation of Obscure Image Candidates: A Comprehensive Study. Int. J. Mod. Educ. Comput. Sci. 2018, 10, 35. [Google Scholar] [CrossRef]
  16. Hannan, M.; Hussain, A.; Mohamed, A.; Samad, S.A.; Wahab, D.A. Decision fusion of a multi-sensing embedded system for occupant safety measures. Int. J. Automot. Technol. 2010, 11, 57–65. [Google Scholar] [CrossRef]
  17. Møgelmose, A.; Trivedi, M.M.; Moeslund, T.B. Vision-Based Traffic Sign Detection and Analysis for Intelligent Driver Assistance Systems: Perspectives and Survey. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1484–1497. [Google Scholar] [CrossRef] [Green Version]
  18. Shao, F.; Wang, X.; Meng, F.; Rui, T.; Wang, D.; Tang, J. Real-time traffic sign detection and recognition method based on simplified Gabor wavelets and CNNs. Sensors 2018, 18, 3192. [Google Scholar] [CrossRef] [PubMed]
  19. Zabihi, S.J.; Zabihi, S.M.; Beauchemin, S.S.; Bauer, M.A. Detection and recognition of traffic signs inside the attentional visual field of drivers. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017. [Google Scholar]
  20. Chang, X.; Yang, Y. Semisupervised feature analysis by mining correlations among multiple tasks. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2294–2305. [Google Scholar] [CrossRef] [PubMed]
  21. Yuan, Y.; Xiong, Z.; Wang, Q. An incremental framework for video-based traffic sign detection, tracking, and recognition. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1918–1929. [Google Scholar] [CrossRef]
  22. The Clemson University Vehicular Electronics Laboratory (CVEL). Traffic Sign Recognition Systems. Available online: https://cecas.clemson.edu/cvel/auto/systems/sign-recognition.html (accessed on 5 March 2019).
  23. Saadna, Y.; Behloul, A. An overview of traffic sign detection and classification methods. Int. J. Multimed. Inf. Retr. 2017, 6, 193–210. [Google Scholar] [CrossRef]
  24. Gündüz, H.; Kaplan, S.; Günal, S.; Akınlar, C. Circular traffic sign recognition empowered by circle detection algorithm. In Proceedings of the 2013 21st Signal Processing and Communications Applications Conference (SIU), Haspolat, Turkey, 24–26 April 2013; pp. 1–4. [Google Scholar]
  25. Kuang, X.; Fu, W.; Yang, L. Real-Time Detection and Recognition of Road Traffic Signs using MSER and Random Forests. Int. J. Online Eng. (IJOE) 2018, 14, 34–51. [Google Scholar] [CrossRef] [Green Version]
  26. Xu, X.; Jin, J.; Zhang, S.; Zhang, L.; Pu, S.; Chen, Z. Smart data driven traffic sign detection method based on adaptive color threshold and shape symmetry. Future Gener. Comput. Syst. 2019, 94, 381–391. [Google Scholar] [CrossRef]
  27. Liu, C.; Chang, F.; Chen, Z.; Liu, D. Fast traffic sign recognition via high-contrast region extraction and extended sparse representation. IEEE Trans. Intell. Transp. Syst. 2016, 17, 79–92. [Google Scholar] [CrossRef]
  28. De La Escalera, A.; Armingol, J.M.; Pastor, J.M.; Rodríguez, F.J. Visual sign information extraction and identification by deformable models for intelligent vehicles. IEEE Trans. Intell. Transp. Syst. 2004, 5, 57–68. [Google Scholar] [CrossRef]
  29. Ruta, A.; Li, Y.; Liu, X. Real-time traffic sign recognition from video by class-specific discriminative features. Pattern Recognit. 2010, 43, 416–430. [Google Scholar] [CrossRef]
  30. Gómez-Moreno, H.; Maldonado-Bascón, S.; Gil-Jiménez, P.; Lafuente-Arroyo, S. Goal evaluation of segmentation algorithms for traffic sign recognition. IEEE Trans. Intell. Transp. Syst. 2010, 11, 917–930. [Google Scholar] [CrossRef]
  31. Ren, F.; Huang, J.; Jiang, R.; Klette, R. General traffic sign recognition by feature matching. In Proceedings of the 24th International Conference on Image and Vision Computing New Zealand, IVCNZ’09, Wellington, New Zealand, 23–25 November 2009; pp. 409–414. [Google Scholar]
  32. Fleyeh, H. Shadow and highlight invariant colour segmentation algorithm for traffic signs. In Proceedings of the 2006 IEEE Conference on Cybernetics and Intelligent Systems, Bangkok, Thailand, 7–9 June 2006; pp. 1–7. [Google Scholar]
  33. Maldonado-Bascón, S.; Lafuente-Arroyo, S.; Gil-Jimenez, P.; Gómez-Moreno, H.; López-Ferreras, F. Road-sign detection and recognition based on support vector machines. IEEE Trans. Intell. Transp. Syst. 2007, 8, 264–278. [Google Scholar] [CrossRef]
  34. Khan, J.F.; Bhuiyan, S.M.; Adhami, R.R. Image segmentation and shape analysis for road-sign detection. IEEE Trans. Intell. Transp. Syst. 2011, 12, 83–96. [Google Scholar] [CrossRef]
  35. Prieto, M.S.; Allen, A.R. Using self-organising maps in the detection and recognition of road signs. Image Vis. Comput. 2009, 27, 673–683. [Google Scholar] [CrossRef]
  36. Fan, J.; Yau, D.K.; Elmagarmid, A.K.; Aref, W.G. Automatic image segmentation by integrating color-edge extraction and seeded region growing. IEEE Trans. Image Process. 2001, 10, 1454–1466. [Google Scholar]
  37. Varun, S.; Singh, S.; Kunte, R.S.; Samuel, R.S.; Philip, B. A road traffic signal recognition system based on template matching employing tree classifier. In Proceedings of the International Conference on Computational Intelligence and Multimedia Applications ICCIMA, Sivakasi, India, 13–15 December 2007; pp. 360–365. [Google Scholar]
  38. Ruta, A.; Li, Y.; Liu, X. Detection, tracking and recognition of traffic signs from video input. In Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems, ITSC 2008, Beijing, China, 12–15 October 2008; pp. 55–60. [Google Scholar]
  39. Jiang, Y.; Zhou, S.; Jiang, Y.; Gong, J.; Xiong, G.; Chen, H. Traffic sign recognition using ridge regression and Otsu method. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 613–618. [Google Scholar]
  40. Vázquez-Reina, A.; Lafuente-Arroyo, S.; Siegmann, P.; Maldonado-Bascón, S.; Acevedo-Rodríguez, F. Traffic sign shape classification based on correlation techniques. In Proceedings of the 5th WSEAS International Conference on Signal Processing, Computational Geometry & Artificial Vision, Malta, 15–17 September 2005; pp. 149–154. [Google Scholar]
  41. Jiménez, P.G.; Bascón, S.M.; Moreno, H.G.; Arroyo, S.L.; Ferreras, F.L. Traffic sign shape classification and localization based on the normalized FFT of the signature of blobs and 2D homographies. Signal Process. 2008, 88, 2943–2955. [Google Scholar] [CrossRef]
  42. Lafuente-Arroyo, S.; Salcedo-Sanz, S.; Maldonado-Bascón, S.; Portilla-Figueras, J.A.; López-Sastre, R.J. A decision support system for the automatic management of keep-clear signs based on support vector machines and geographic information systems. Expert Syst. Appl. 2010, 37, 767–773. [Google Scholar] [CrossRef]
  43. Li, L.; Li, J.; Sun, J. Robust traffic sign detection using fuzzy shape recognizer. In Proceedings of the MIPPR 2009: Pattern Recognition and Computer Vision, Yichang, China, 30 October–1 November 2009; p. 74960Z. [Google Scholar]
  44. Wu, J.-Y.; Tseng, C.-C.; Chang, C.-H.; Lien, J.-J.J.; Chen, J.C.; Tu, C.T. Road sign recognition system based on GentleBoost with sharing features. In Proceedings of the 2011 International Conference on System Science and Engineering (ICSSE), Macao, China, 8–10 June 2011; pp. 410–415. [Google Scholar]
  45. Tagunde, G.A.; Uke, N.; Banchhor, C. Detection, classification and recognition of road traffic signs using color and shape features. Int. J. Adv. Technol. Eng. Res. 2012, 2, 202–206. [Google Scholar]
  46. Deshmukh, V.R.; Patnaik, G.; Patil, M. Real-time traffic sign recognition system based on colour image segmentation. Int. J. Comput. Appl. 2013, 83, 30–35. [Google Scholar]
  47. Nicchiotti, G.; Ottaviani, E.; Castello, P.; Piccioli, G. Automatic road sign detection and classification from color image sequences. In Proceedings of the 7th International Conference on Image Analysis and Processing, 1994; pp. 623–626. [Google Scholar]
  48. Priese, L.; Rehrmann, V. On hierarchical color segmentation and applications. In Proceedings of the 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 15–17 June 1993; pp. 633–634. [Google Scholar]
  49. Funt, B.V.; Finlayson, G.D. Color constant color indexing. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 522–529. [Google Scholar] [CrossRef]
  50. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [Google Scholar] [CrossRef]
  51. Park, D.-S.; Park, J.-S.; Kim, T.Y.; Han, J.H. Image indexing using weighted color histogram. In Proceedings of the International Conference on Image Analysis and Processing, Venice, Italy, 27–29 September 1999; pp. 909–914. [Google Scholar]
  52. Vitabile, S.; Pollaccia, G.; Pilato, G.; Sorbello, F. Road signs recognition using a dynamic pixel aggregation technique in the HSV color space. In Proceedings of the International Conference on Image Analysis and Processing ICIAP, Palermo, Italy, 26–28 September 2001; p. 0572. [Google Scholar]
  53. Li, C.; Luo, M.R.; Hunt, R.W.; Moroney, N.; Fairchild, M.D.; Newman, T. The performance of CIECAM02. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 1 January 2002; pp. 28–32. [Google Scholar]
  54. Gao, X.W.; Podladchikova, L.; Shaposhnikov, D.; Hong, K.; Shevtsova, N. Recognition of traffic signs based on their colour and shape features extracted using human vision models. J. Vis. Commun. Image Represent. 2006, 17, 675–685. [Google Scholar] [CrossRef] [Green Version]
  55. Fatmehsari, Y.R.; Ghahari, A.; Zoroofi, R.A. Gabor wavelet for road sign detection and recognition using a hybrid classifier. In Proceedings of the 2010 International Conference on Multimedia Computing and Information Technology (MCIT), Sharjah, UAE, 2–4 March 2010; pp. 25–28. [Google Scholar]
  56. Hechri, A.; Mtibaa, A. Automatic detection and recognition of road sign for driver assistance system. In Proceedings of the 2012 16th IEEE Mediterranean Electrotechnical Conference (MELECON), Yasmine Hammamet, Tunisia, 25–28 March 2012; pp. 888–891. [Google Scholar]
  57. Saxena, P.; Gupta, N.; Laskar, S.Y.; Borah, P.P. A study on automatic detection and recognition techniques for road signs. Int. J. Comput. Eng. Res. 2015, 5, 24–28. [Google Scholar]
  58. Hu, Q.; Paisitkriangkrai, S.; Shen, C.; van den Hengel, A.; Porikli, F. Fast detection of multiple objects in traffic scenes with a common detection framework. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1002–1014. [Google Scholar] [CrossRef]
  59. Zaklouta, F.; Stanciulescu, B. Real-time traffic-sign recognition using tree classifiers. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1507–1514. [Google Scholar] [CrossRef]
  60. Yin, S.; Ouyang, P.; Liu, L.; Guo, Y.; Wei, S. Fast traffic sign recognition with a rotation invariant binary pattern based feature. Sensors 2015, 15, 2161–2180. [Google Scholar] [CrossRef] [PubMed]
  61. Gavrila, D.M. Traffic sign recognition revisited. In Mustererkennung 1999; Springer: Berlin, Germany, 1999; pp. 86–93. [Google Scholar]
  62. Kaur, S.; Singh, I. Comparison between edge detection techniques. Int. J. Comput. Appl. 2016, 145, 15–18. [Google Scholar] [CrossRef]
  63. Xu, S. Robust traffic sign shape recognition using geometric matching. IET Intell. Transp. Syst. 2009, 3, 10–18. [Google Scholar] [CrossRef]
  64. Barnes, N.; Zelinsky, A.; Fletcher, L.S. Real-time speed sign detection using the radial symmetry detector. IEEE Trans. Intell. Transp. Syst. 2008, 9, 322–332. [Google Scholar] [CrossRef]
  65. Deguchi, D.; Shirasuna, M.; Doman, K.; Ide, I.; Murase, H. Intelligent traffic sign detector: Adaptive learning based on online gathering of training samples. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 72–77. [Google Scholar]
  66. Houben, S. A single target voting scheme for traffic sign detection. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 124–129. [Google Scholar]
  67. Soetedjo, A.; Yamada, K. Improving the performance of traffic sign detection using blob tracking. IEICE Electron. Express 2007, 4, 684–689. [Google Scholar] [CrossRef] [Green Version]
  68. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; pp. 511–518. [Google Scholar]
  69. Ruta, A.; Li, Y.; Liu, X. Towards Real-Time Traffic Sign Recognition by Class-Specific Discriminative Features. In Proceedings of the British Machine Vision Conference, Warwick, UK, 10–13 September 2007. [Google Scholar]
  70. Prisacariu, V.A.; Timofte, R.; Zimmermann, K.; Reid, I.; Van Gool, L. Integrating object detection with 3D tracking towards a better driver assistance system. In Proceedings of the 2010 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 3344–3347. [Google Scholar]
  71. Zhu, Z.; Lu, J.; Martin, R.R.; Hu, S. An optimization approach for localization refinement of candidate traffic signs. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3006–3016. [Google Scholar] [CrossRef]
  72. Bahlmann, C.; Zhu, Y.; Ramesh, V.; Pellkofer, M.; Koehler, T. A system for traffic sign detection, tracking, and recognition using color, shape, and motion information. In Proceedings of the Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 255–260. [Google Scholar]
  73. Chiang, H.-H.; Chen, Y.-L.; Wang, W.-Q.; Lee, T.-T. Road speed sign recognition using edge-voting principle and learning vector quantization network. In Proceedings of the 2010 International Computer Symposium (ICS), Tainan, Taiwan, 16–18 December 2010; pp. 246–251. [Google Scholar]
  74. Gu, Y.; Yendo, T.; Tehrani, M.P.; Fujii, T.; Tanimoto, M. Traffic sign detection in dual-focal active camera system. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 1054–1059. [Google Scholar]
  75. Wang, G.; Ren, G.; Wu, Z.; Zhao, Y.; Jiang, L. A robust, coarse-to-fine traffic sign detection method. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–5. [Google Scholar]
  76. Loy, G.; Barnes, N. Fast shape-based road sign detection for a driver assistance system. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2004), Sendai, Japan, 28 September–2 October 2004; pp. 70–75. [Google Scholar]
  77. Pettersson, N.; Petersson, L.; Andersson, L. The histogram feature-a resource-efficient weak classifier. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 678–683. [Google Scholar]
  78. Xie, Y.; Liu, L.-F.; Li, C.-H.; Qu, Y.-Y. Unifying visual saliency with HOG feature learning for traffic sign detection. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 24–29. [Google Scholar]
  79. Creusen, I.M.; Wijnhoven, R.G.; Herbschleb, E.; de With, P. Color exploitation in hog-based traffic sign detection. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 2669–2672. [Google Scholar]
  80. Overett, G.; Petersson, L. Large scale sign detection using HOG feature variants. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 326–331. [Google Scholar]
  81. Hoferlin, B.; Zimmermann, K. Towards reliable traffic sign recognition. In Proceedings of the Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 324–329. [Google Scholar]
  82. Hannan, M.A.; Hussain, A.; Samad, S.A. Decision fusion via integrated sensing system for a smart airbag deployment scheme. Sens. Mater. 2011, 23, 179–193. [Google Scholar]
  83. Fu, M.-Y.; Huang, Y.-S. A survey of traffic sign recognition. In Proceedings of the 2010 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Qingdao, China, 11–14 July 2010; pp. 119–124. [Google Scholar]
  84. Lafuente-Arroyo, S.; Maldonado-Bascon, S.; Gil-Jimenez, P.; Acevedo-Rodriguez, J.; Lopez-Sastre, R. A tracking system for automated inventory of road signs. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 166–171. [Google Scholar]
  85. Lafuente-Arroyo, S.; Maldonado-Bascon, S.; Gil-Jimenez, P.; Gomez-Moreno, H.; Lopez-Ferreras, F. Road sign tracking with a predictive filter solution. In Proceedings of the IECON 2006—32nd Annual Conference on IEEE Industrial Electronics, Paris, France, 6–10 November 2006; pp. 3314–3319. [Google Scholar]
  86. Wali, S.B.; Hannan, M.A.; Hussain, A.; Samad, S.A. An automatic traffic sign detection and recognition system based on colour segmentation, shape matching, and SVM. Math. Probl. Eng. 2015, 2015, 250461. [Google Scholar] [CrossRef]
  87. Garcia-Garrido, M.; Ocaña, M.; Llorca, D.F.; Sotelo, M.; Arroyo, E.; Llamazares, A. Robust traffic signs detection by means of vision and V2I communications. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1003–1008. [Google Scholar]
  88. Fang, C.-Y.; Chen, S.-W.; Fuh, C.-S. Road-sign detection and tracking. IEEE Trans. Veh. Technol. 2003, 52, 1329–1341. [Google Scholar] [CrossRef]
  89. Ohara, H.; Nishikawa, I.; Miki, S.; Yabuki, N. Detection and recognition of road signs using simple layered neural networks. In Proceedings of the 9th International Conference on Neural Information Processing, ICONIP’02, Singapore, 18–22 November 2002; pp. 626–630. [Google Scholar]
  90. Torresen, J.; Bakke, J.W.; Sekanina, L. Efficient recognition of speed limit signs. In Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, 3–6 October 2004; pp. 652–656. [Google Scholar]
  91. Greenhalgh, J.; Mirmehdi, M. Traffic sign recognition using MSER and random forests. In Proceedings of the 2012 20th European Signal Processing Conference (EUSIPCO), Bucharest, Romania, 27–31 August 2012; pp. 1935–1939. [Google Scholar]
  92. Greenhalgh, J.; Mirmehdi, M. Real-time detection and recognition of road traffic signs. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1498–1506. [Google Scholar] [CrossRef]
  93. Zaklouta, F.; Stanciulescu, B. Real-time traffic sign recognition using spatially weighted HOG trees. In Proceedings of the 2011 15th International Conference on Advanced Robotics (ICAR), Tallinn, Estonia, 20–23 June 2011; pp. 61–66. [Google Scholar]
  94. Zaklouta, F.; Stanciulescu, B. Segmentation masks for real-time traffic sign recognition using weighted HOG-based trees. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1954–1959. [Google Scholar]
  95. Zaklouta, F.; Stanciulescu, B. Real-time traffic sign recognition in three stages. Robot. Auton. Syst. 2014, 62, 16–24. [Google Scholar] [CrossRef]
  96. Zaklouta, F.; Stanciulescu, B. Warning traffic sign recognition using a HOG-based Kd tree. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 1019–1024. [Google Scholar]
  97. Aoyagi, Y.; Asakura, T. A study on traffic sign recognition in scene image using genetic algorithms and neural networks. In Proceedings of the 1996 IEEE IECON 22nd International Conference on Industrial Electronics, Control, and Instrumentation, Taipei, Taiwan, 9 August 1996; pp. 1838–1843. [Google Scholar]
  98. De La Escalera, A.; Armingol, J.M.; Salichs, M. Traffic sign detection for driver support systems. In Proceedings of the International Conference on Field and Service Robotics, Helsinki, Finland, 6–8 June 2001. [Google Scholar]
  99. Hannan, M.; Hussain, A.; Samad, S.; Ishak, K.; Mohamed, A. A Unified Robust Algorithm for Detection of Human and Non-human Object in Intelligent Safety Application. World Acad. Sci. Eng. Technol. Int. J. Comput. Electr. Autom. Control Inf. Eng. 2008, 2, 3838–3845. [Google Scholar]
  100. Ugolotti, R.; Nashed, Y.S.G.; Cagnoni, S. Real-Time GPU Based Road Sign Detection and Classification. In Parallel Problem Solving from Nature—PPSN XII; Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 153–162. [Google Scholar]
  101. Cireşan, D.; Meier, U.; Masci, J.; Schmidhuber, J. Multi-column deep neural network for traffic sign classification. Neural Netw. 2012, 32, 333–338. [Google Scholar] [CrossRef] [Green Version]
  102. Fang, C.-Y.; Fuh, C.-S.; Yen, P.; Cherng, S.; Chen, S.-W. An automatic road sign recognition system based on a computational model of human recognition processing. Comput. Vis. Image Underst. 2004, 96, 237–268. [Google Scholar] [CrossRef]
  103. Broggi, A.; Cerri, P.; Medici, P.; Porta, P.P.; Ghisio, G. Real time road signs recognition. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 981–986. [Google Scholar]
  104. Li, L.-B.; Ma, G.-F. Detection and classification of traffic signs in natural environments. J. Harbin Inst. Technol. 2009, 41, 682–687. [Google Scholar]
  105. Sheng, Y.; Zhang, K.; Ye, C.; Liang, C.; Li, J. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks. In Proceedings of the Optical and Digital Image Processing, Strasbourg, France, 25 April 2008; p. 70001I. [Google Scholar]
  106. Fištrek, T.; Lončarić, S. Traffic sign detection and recognition using neural networks and histogram based selection of segmentation method. In Proceedings of the 2011 ELMAR, Zadar, Croatia, 14–16 September 2011; pp. 51–54. [Google Scholar]
  107. Carrasco, J.-P.; de la Escalera, A.D.L.E.; Armingol, J.M. Recognition stage for a speed supervisor based on road sign detection. Sensors 2012, 12, 12153–12168. [Google Scholar] [CrossRef]
  108. Kim, H.-K.; Park, J.H.; Jung, H.-Y. An Efficient Color Space for Deep-Learning Based Traffic Light Recognition. J. Adv. Transp. 2018, 2018, 2365414. [Google Scholar] [CrossRef]
  109. Zhu, Y.; Liao, M.; Yang, M.; Liu, W. Cascaded Segmentation-Detection Networks for Text-Based Traffic Sign Detection. IEEE Trans. Intell. Transp. Syst. 2018, 19, 209–219. [Google Scholar] [CrossRef]
  110. Qian, R.; Zhang, B.; Yue, Y.; Wang, Z.; Coenen, F. Robust Chinese traffic sign detection and recognition with deep convolutional neural network. In Proceedings of the 2015 11th International Conference on Natural Computation (ICNC), Zhangjiajie, China, 15–17 August 2015; pp. 791–796. [Google Scholar]
  111. Kumar, A.D. Novel deep learning model for traffic sign detection using capsule networks. arXiv 2018, arXiv:1805.04424. [Google Scholar]
  112. Ciresan, D.C.; Meier, U.; Masci, J.; Schmidhuber, J. A committee of neural networks for traffic sign classification. In Proceedings of the IJCNN, San Jose, CA, USA, 31 July–5 August 2011; pp. 1918–1921. [Google Scholar]
  113. Sermanet, P.; LeCun, Y. Traffic sign recognition with multi-scale Convolutional Networks. In Proceedings of the IJCNN, San Jose, CA, USA, 31 July–5 August 2011; pp. 2809–2813. [Google Scholar]
  114. Lee, H.S.; Kim, K. Simultaneous traffic sign detection and boundary estimation using convolutional neural network. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1652–1663. [Google Scholar] [CrossRef]
  115. Jin, J.; Fu, K.; Zhang, C. Traffic sign recognition with hinge loss trained convolutional neural networks. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1991–2000. [Google Scholar] [CrossRef]
  116. Abdi, L.; Meddeb, A. Deep learning traffic sign detection, recognition and augmentation. In Proceedings of the Symposium on Applied Computing, Marrakech, Morocco, 3–7 April 2017; pp. 131–136. [Google Scholar]
  117. Aghdam, H.H.; Heravi, E.J.; Puig, D. A practical approach for detection and classification of traffic signs using convolutional neural networks. Robot. Auton. Syst. 2016, 84, 97–112. [Google Scholar] [CrossRef]
  118. Li, J.; Liang, X.; Wei, Y.; Xu, T.; Feng, J.; Yan, S. Perceptual generative adversarial networks for small object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1222–1230. [Google Scholar]
  119. Lai, Y.; Wang, N.; Yang, Y.; Lin, L. Traffic Signs Recognition and Classification based on Deep Feature Learning. In Proceedings of the ICPRAM, Medea, Algeria, 24–25 November 2018; pp. 622–629. [Google Scholar]
  120. Lim, K.; Hong, Y.; Choi, Y.; Byun, H. Real-time traffic sign recognition based on a general purpose GPU and deep-learning. PLoS ONE 2017, 12, e0173317. [Google Scholar] [CrossRef] [PubMed]
  121. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
  122. Li, Y.; Pankanti, S.; Guan, W. Real-time traffic sign detection: An evaluation study. In Proceedings of the 2010 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 3033–3036. [Google Scholar]
  123. Chen, L.; Li, Q.; Li, M.; Mao, Q. Traffic sign detection and recognition for intelligent vehicle. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 908–913. [Google Scholar]
  124. Lin, C.-C.; Wang, M.-S. Road sign recognition with fuzzy adaptive pre-processing models. Sensors 2012, 12, 6415–6433. [Google Scholar] [CrossRef]
  125. Huang, Y.-S.; Le, Y.-S.; Cheng, F.-H. A method of detecting and recognizing speed-limit signs. In Proceedings of the 2012 Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Piraeus, Greece, 18–20 July 2012; pp. 371–374. [Google Scholar]
  126. Chen, L.; Li, Q.; Li, M.; Zhang, L.; Mao, Q. Design of a multi-sensor cooperation travel environment perception system for autonomous vehicle. Sensors 2012, 12, 12386–12404. [Google Scholar] [CrossRef]
  127. Liu, C.; Li, S.; Chang, F.; Dong, W. Supplemental Boosting and Cascaded ConvNet Based Transfer Learning Structure for Fast Traffic Sign Detection in Unknown Application Scenes. Sensors 2018, 18, 2386. [Google Scholar] [CrossRef]
  128. Gomes, S.L.; Rebouças, E.D.S.; Neto, E.C.; Papa, J.P.; de Albuquerque, V.H.; Rebouças Filho, P.P.; Tavares, J.M.R. Embedded real-time speed limit sign recognition using image processing and machine learning techniques. Neural Comput. Appl. 2017, 28, 573–584. [Google Scholar] [CrossRef]
  129. Soendoro, D.; Supriana, I. Traffic sign recognition with Color-based Method, shape-arc estimation and SVM. In Proceedings of the 2011 International Conference on Electrical Engineering and Informatics (ICEEI), Bandung, Indonesia, 17–19 July 2011; pp. 1–6. [Google Scholar]
  130. Siyan, Y.; Xiaoying, W.; Qiguang, M. Road-sign segmentation and recognition in natural scenes. In Proceedings of the 2011 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xi’an, China, 14–16 September 2011; pp. 1–4. [Google Scholar]
  131. Bascón, S.M.; Rodríguez, J.A.; Arroyo, S.L.; Caballero, A.F.; López-Ferreras, F. An optimization on pictogram identification for the road-sign recognition task using SVMs. Comput. Vis. Image Underst. 2010, 114, 373–383. [Google Scholar] [CrossRef]
  132. Martinović, A.; Glavaš, G.; Juribašić, M.; Sutić, D.; Kalafatić, Z. Real-time detection and recognition of traffic signs. In Proceedings of the 2010 Proceedings of the 33rd International Convention (MIPRO), Opatija, Croatia, 24–28 May 2010; pp. 760–765. [Google Scholar]
  133. Min, K.-I.; Oh, J.-S.; Kim, B.-W. Traffic sign extract and recognition on unmanned vehicle using image processing based on support vector machine. In Proceedings of the 2011 11th International Conference on Control, Automation and Systems (ICCAS), Gyeonggi-do, Korea, 26–29 October 2011; pp. 750–753. [Google Scholar]
  134. Bui-Minh, T.; Ghita, O.; Whelan, P.F.; Hoang, T. A robust algorithm for detection and classification of traffic signs in video data. In Proceedings of the 2012 International Conference on Control, Automation and Information Sciences (ICCAIS), Ho Chi Minh City, Vietnam, 26–29 November 2012; pp. 108–113. [Google Scholar]
  135. Park, J.-G.; Kim, K.-J. Design of a visual perception model with edge-adaptive Gabor filter and support vector machine for traffic sign detection. Expert Syst. Appl. 2013, 40, 3679–3687. [Google Scholar] [CrossRef]
  136. Chang, X.; Yu, Y.-L.; Yang, Y.; Xing, E.P. Semantic pooling for complex event analysis in untrimmed videos. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1617–1632. [Google Scholar] [CrossRef]
  137. Yang, L.; Kwon, K.-R.; Moon, K.; Lee, S.-H.; Kwon, S.-G. Broken traffic sign recognition based on local histogram matching. In Proceedings of the Computing, Communications and Applications Conference (ComComAp), Hong Kong, China, 11–13 January 2012; pp. 415–419. [Google Scholar]
  138. Sebanja, I.; Megherbi, D. Automatic detection and recognition of traffic road signs for intelligent autonomous unmanned vehicles for urban surveillance and rescue. In Proceedings of the 2010 IEEE International Conference on Technologies for Homeland Security (HST), Waltham, MA, USA, 8–10 November 2010; pp. 132–138. [Google Scholar]
  139. Huang, Y.-S.; Fu, M.-Y.; Ma, H.-B. A combined method for traffic sign detection and classification. In Proceedings of the 2010 Chinese Conference on Pattern Recognition (CCPR), Chongqing, China, 21–23 October 2010; pp. 1–5. [Google Scholar]
  140. Soheilian, B.; Paparoditis, N.; Vallet, B. Detection and 3D reconstruction of traffic signs from multiple view color images. ISPRS J. Photogramm. Remote Sens. 2013, 77, 1–20. [Google Scholar] [CrossRef]
  141. Pei, D.; Sun, F.; Liu, H. Supervised low-rank matrix recovery for traffic sign recognition in image sequences. IEEE Signal Process. Lett. 2013, 20, 241–244. [Google Scholar] [CrossRef]
  142. Gonzalez-Reyna, S.E.; Avina-Cervantes, J.G.; Ledesma-Orozco, S.E.; Cruz-Aceves, I. Eigen-gradients for traffic sign recognition. Math. Probl. Eng. 2013, 2013, 364305. [Google Scholar] [CrossRef]
  143. Măriuţ, F.; Foşalău, C.; Zet, C.; Petrişor, D. Experimental traffic sign detection using I2V communication. In Proceedings of the 2012 35th International Conference on Telecommunications and Signal Processing (TSP), Prague, Czech Republic, 3–4 July 2012; pp. 141–145. [Google Scholar]
  144. Wang, W.; Wei, C.-H.; Zhang, L.; Wang, X. Traffic-signs recognition system based on multi-features. In Proceedings of the 2012 IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA), Tianjin, China, 2–4 July 2012; pp. 120–123. [Google Scholar]
  145. Islam, K.T.; Raj, R.G. Real-time (vision-based) road sign recognition using an artificial neural network. Sensors 2017, 17, 853. [Google Scholar] [CrossRef]
  146. Khan, J.; Yeo, D.; Shin, H. New dark area sensitive tone mapping for deep learning based traffic sign recognition. Sensors 2018, 18, 3776. [Google Scholar] [CrossRef] [PubMed]
  147. Laguna, R.; Barrientos, R.; Blázquez, L.F.; Miguel, L.J. Traffic sign recognition application based on image processing techniques. IFAC Proc. Vol. 2014, 47, 104–109. [Google Scholar] [CrossRef]
Figure 1. Non-identical traffic signs: (a) Partially occluded traffic sign, (b) faded traffic sign, (c) damaged traffic sign, (d) multiple traffic signs appearing at the same time.
Figure 2. Examples of traffic signs: (a) A danger warning sign, (b) a priority sign, (c) a prohibitory sign, (d) a mandatory sign, (e) a special regulation sign, (f) an information sign, (g) a direction sign and (h) an additional panel.
Figure 3. Research trends for the traffic sign detection and recognition (TSDR) topic, based on Scopus analysis tools.
Figure 4. Citation trends for the TSDR topic, based on Scopus analysis tools.
Figure 5. Examples of traffic scenes in the German Traffic Signs Detection Benchmark (GTSDB) database [12].
Figure 6. Block diagram of the traffic sign recognition system.
Figure 7. General procedure of a TSDR system [22].
Figure 8. Different methods applied for traffic sign detection.
Figure 9. Most popular color-based detection methods.
Figure 10. Most popular shape-based detection methods.
Figure 11. An example of a TSDR system that includes a tracking process based on a Kalman filter [81].
Figure 12. Most popular classification methods.
Figure 13. Some TSDR challenges.
Figure 14. Summary of the paper.
Table 1. Example of stop signs in different countries.
Country | US | Japan | Pakistan | Ethiopia | Libya | New Guinea
Sign | (image) | (image) | (image) | (image) | (image) | (image)
Table 2. Publicly available traffic sign databases [13].
Dataset | Country | Classes | TS Scenes | TS Images | Image Size (px) | Sign Size (px) | Includes Videos
GTSDRB (2012 and 2013) | Germany | 43 | 9000 | 39,209 (training), 12,630 (testing) | 15 × 15 to 250 × 250 | 15 × 15 to 250 × 250 | No
KULD (2009) | Belgium | 100+ | 9006 | 13,444 | 1628 × 1236 | 100 × 100 to 1628 × 1236 | Yes, 4 tracks
STSD (2011) | Sweden | 7 | 20,000 | 3488 | 1280 × 960 | 3 × 5 to 263 × 248 | No
RUGD (2003) | The Netherlands | 3 | 48 | 48 | 360 × 270 | N/A | No
Stereopolis (2010) | France | 10 | 847 | 251 | 1920 × 1080 | 25 × 25 to 204 × 159 | No
LISAD (2012) | US | 49 | 6610 | 7855 | 640 × 480 to 1024 × 522 | 6 × 6 to 167 × 168 | All annotations
UKOD (2012) | UK | 100+ | 43,509 | 1200 (synthetic) | 648 × 480 | 24 × 24 | No
RTSD (2013) | Russia | 140 | N/A | 80,000+ (synthetic) | 1280 × 720 | 30 × 30 | No
Table 3. Color-based approaches for the TSDR system.
Techniques | Paper | Segmentation Methods | Advantages | Sign Type | No. of Test Images | Test Image Type
Color Thresholding Segmentation | [37] | RGB color segmentation | Simple | Any color | 2000 | N/A
Color Thresholding Segmentation | [38] | RGB color segmentation with color enhancement | Fast and high detection rate | Red, blue, yellow | 135 | Video data
HSI/HSV Transform | [40] | HSI thresholding with an addition for white signs | Segments adversely illuminated signs | Any color | N/A | High-res
HSI/HSV Transform | [33] | HSI color-based segmentation | Simple and fast | Any color | N/A | N/A
HSI/HSV Transform | [41] | RGB to HSI transformation | Segments adversely illuminated signs | Any color | N/A | Low-res
HSI/HSV Transform | [42] | RGB to HSI transformation | N/A | Red | N/A | Low-res
HSI/HSV Transform | [43] | RGB to HSI transformation | N/A | Any color | 3028 | Low-res
HSI/HSV Transform | [44] | HSI color-based segmentation | Simple and high accuracy rate | Red, blue | N/A | Video data
HSI/HSV Transform | [45] | HSI color-based segmentation | Simple, real-time application | Any color | 632 | High-res
Region Growing | [48] | Starts with a seed and expands to group pixels with similar affinity | N/A | N/A | N/A | N/A
Region Growing | [47] | As above | N/A | N/A | N/A | High-res
Color Indexing | [50] | Two images of any colors are compared via their color histograms | Straightforward, fast method | Any color | N/A | Low-res
Color Indexing | [49] | As above | As above | Any color | N/A | N/A
Dynamic Pixel Aggregation | [52] | Dynamic threshold in pixel aggregation on the HSV color space | Reduced hue instability | Any color | 620 | Low-res
CIECAM97 Model | [54] | RGB to CIE XYZ transformation, then to LCH space using the CIECAM97 model | Invariant under different lighting conditions | Red, blue | N/A | N/A
YCbCr Color Space | [55] | RGB to YCbCr transformation, then dynamic thresholding on the Cr component to extract red objects | Simple and high accuracy | Red | 193 | N/A
YCbCr Color Space | [56] | As above | High accuracy, less processing time | Any color | N/A | Low-res
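To make the color-thresholding family in Table 3 concrete, the following minimal sketch segments red sign candidates in the HSV space using OpenCV. The hue/saturation bounds, morphology kernel and minimum area are illustrative assumptions, not values drawn from any of the surveyed papers.

```python
import cv2

def red_sign_candidates(bgr_image, min_area=400):
    """Return bounding boxes of red regions via HSV thresholding (a sketch)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in OpenCV (0-179), so two ranges are OR-ed.
    lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    # Morphological closing fills small holes left by the sign pictogram.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

The same skeleton covers most rows of the table: only the color space conversion and the thresholding rule change from method to method.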
Table 4. Shape-based methods for the TSDR system.
Technique | Paper | Overall Process | Recognition Feature | Advantages | Sign Type | No. of Test Images | Test Image Type
Hough Transform | [77] | Each pixel of the edge image votes for the object center at the object boundary | N/A | Invariant to in-plane rotation and viewing angle | Octagon, square, triangle | 45 | Low-res
Hough Transform | [78] | As above | AdaBoost | High accuracy | Any sign | N/A | Low-res
Hough Transform | [79] | As above | N/A | Robust to illumination, scale, pose and viewpoint change, and even partial occlusion | Red (circular), blue (square) | 500+ | Low-res
Hough Transform | [80] | As above | N/A | Reduced memory consumption and increased utilization of a Hough-based SVM | Any sign | 3000 | High-res
Hough Transform | [81] | As above | N/A | Robustness | Red (circular) | N/A | 768 × 580 px
Hough Transform | [59] | As above | Random forest | Improved efficiency of K-d tree, random forest and SVM | Triangular and circular | 14,763 | 752 × 480 px
Hough Transform | [82] | As above | SIFT- and SURF-based MLP | Applies another state refinement | Red circular | N/A | Video data
Similarity Detection | [52] | Computes a region and sets binary samples representing each traffic sign shape | NN | Straightforward method | Any color | 620 | Low-res
DTM | [61] | Captures object shape by a template hierarchy | RBF network | Detects objects of arbitrary shape | Circular and triangular | 1000 | 360 × 288 px
Edge Detection Feature | [63] | A set of connected curves is found which indicates the boundaries of objects within the image | Geometric matching | Invariant under translation, rotation and scaling | Any color | 1000 | 640 × 480 px
Edge Detection Feature | [64] | As above | Normalized cross-correlation | Reliable and highly accurate in real time | Speed limit signs | N/A | 320 × 240 px video data
Edge Detection Feature | [65] | As above | N/A | Improved accuracy by training on negative samples | Red (circular) | 3907 | Low-res
Edge Detection Feature | [66] | As above | N/A | Invariant to noise and lighting | Triangle, circular | 847 | High-res
Edge Detection Feature | [67] | As above | CDT | Invariant to noise and illumination | Red, blue, yellow | N/A | N/A
Edges with Haar-like Features | [69] | Sums three pixel intensities and calculates the difference of the sums by Haar-like features | CDT | Smoother and noise invariant | Rectangular, any color | N/A | Video data
Edges with Haar-like Features | [70] | As above | SVM | Fast method | Circular, upside-down triangular, rectangle and diamond | N/A | 640 × 480 px video data
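As a concrete instance of the shape-based branch in Table 4, the sketch below locates circular sign candidates with the OpenCV Hough gradient method; the dp, accumulator and radius parameters are illustrative assumptions that would need tuning for a given camera setup.

```python
import cv2
import numpy as np

def circular_sign_candidates(bgr_image):
    """Detect circle candidates (x, y, r) with the Hough transform (a sketch)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # smoothing suppresses spurious edge votes
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
        param1=120,              # upper Canny threshold used internally
        param2=40,               # accumulator threshold; lower -> more circles
        minRadius=10, maxRadius=120)
    return [] if circles is None else np.round(circles[0]).astype(int)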
Table 5. Sign tracking based on Kalman filter approaches.
Technique | Paper | Advantages | Performance
Kalman Filter | [82] | To avoid incorrect assignments, a rule-based approach utilizing a combined distance and direction difference is used | N/A
Kalman Filter | [89] | Takes less time in tracking and verifying | Using 320 × 240 pixel images, takes 0.1 s to 0.2 s
Kalman Filter | [88] | Uses stereo parameters to reduce the error of stereo measurement | N/A
Advanced Kalman Filter | [85] | Fast and advanced method, high detection and tracking rate | Using 400 × 300 pixel images, can process 3.26 frames per second
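All the trackers in Table 5 share the same predict-correct loop. A minimal NumPy sketch of a constant-velocity Kalman filter over a sign's image-plane center follows; the process and measurement noise covariances are illustrative assumptions rather than values from the cited systems.

```python
import numpy as np

class SignTracker:
    """Constant-velocity Kalman filter for a sign center (x, y): a sketch."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=4.0):
        self.x = np.array([x0, y0, 0.0, 0.0])  # state: position and velocity
        self.P = np.eye(4) * 100.0             # initial state uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt       # constant-velocity motion model
        self.H = np.eye(2, 4)                  # only the position is measured
        self.Q = np.eye(4) * q                 # process noise (assumed)
        self.R = np.eye(2) * r                 # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                      # predicted center for the detector

    def correct(self, zx, zy):
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In a frame loop, predict() centers the search window and correct() fuses a confirmed detection, which is broadly how the systems above keep a sign's identity stable between frames.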
Table 6. Examples of TSDR systems using a template matching method.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time
[90] | RGB to HSV, then contrast stretching | Fast and straightforward method | N/A | N/A | N/A | <95% | N/A
[91] | As above | N/A | N/A | N/A | 100 | 90.9% | N/A
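The systems in Table 6 reduce recognition to a normalized cross-correlation between the detected patch and a bank of stored templates. A minimal OpenCV sketch follows, assuming a hypothetical dictionary of grayscale reference templates.

```python
import cv2

def match_sign(patch_gray, templates):
    """Pick the template with the highest normalized cross-correlation.

    `templates` maps a sign label to a grayscale reference image (assumed).
    """
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        # Resize the candidate to the template size so correlation is one-to-one.
        resized = cv2.resize(patch_gray, (tmpl.shape[1], tmpl.shape[0]))
        score = float(cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score  # thresholding the score rejects non-signs
```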
Table 7. Examples of TSDR systems using a decision tree method.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time | Dataset
[94] | HOG-based SVM | Used and compared the GTSRB and ETH-80 datasets | 90.9% | N/A | 12,569 | 90.46% | 17.9 ms | GTSRB and ETH-80
[95,96] | As above | Used Gaussian weighting in HOG to improve performance by 15% | 90% | N/A | 12,569 | 97.2% | 17.9 ms | Own created
[92] | MSER-based HOG | Eliminates the hand-labeled database; robust to various lighting and illumination | 83.3% | 0.85 | 640 × 480 px video data | 87.72% | N/A | Own created
[97] | HOG | Removes false alarms up to 94% | N/A | N/A | 12,569 | 92.7% | 17.9 ms | Own created
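Every row in Table 7 pairs HOG descriptors with a tree-based classifier. Below is a minimal scikit-image/scikit-learn sketch in that spirit; the 40 × 40 patch size, HOG cell layout and forest size are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def hog_descriptor(gray_patch):
    """HOG feature vector for a grayscale sign patch (assumed 40x40 pixels)."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_forest(train_patches, train_labels):
    """Fit a random forest on HOG features of labeled sign patches."""
    X = np.array([hog_descriptor(p) for p in train_patches])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, train_labels)
    return clf
```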
Table 8. Examples of TSDR systems using a genetic algorithm.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time
[98,99] | Genetic algorithm | Unaffected by illumination problems | N/A | N/A | Video data | N/A | N/A
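The genetic-algorithm approach summarized in Table 8 evolves detection parameters instead of hand-tuning them. The toy sketch below evolves a pair of HSV threshold vectors against labeled ground-truth masks; the IoU fitness, population size, mutation-only variation and value ranges are all illustrative assumptions.

```python
import numpy as np

def fitness(genome, images_hsv, truth_masks):
    """Mean IoU between thresholded masks and (assumed) ground-truth masks."""
    lo, hi = genome[:3], genome[3:]
    scores = []
    for hsv, truth in zip(images_hsv, truth_masks):
        pred = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum() or 1
        scores.append(inter / union)
    return float(np.mean(scores))

def evolve(images_hsv, truth_masks, pop=30, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    genomes = rng.uniform(0, 255, size=(pop, 6))  # each genome: [lo_hsv, hi_hsv]
    for _ in range(gens):
        scores = np.array([fitness(g, images_hsv, truth_masks) for g in genomes])
        parents = genomes[np.argsort(scores)[-pop // 2:]]  # keep the fittest half
        children = parents[rng.integers(0, len(parents), pop // 2)]
        children = np.clip(children + rng.normal(0, 5, children.shape), 0, 255)
        genomes = np.vstack([parents, children])           # mutation-only variation
    return max(genomes, key=lambda g: fitness(g, images_hsv, truth_masks))
```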
Table 9. Examples of TSDR systems using an ANN method.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time | Dataset
[56] | YCbCr and normalized cross-correlation | Robustness and adaptability | 0.96 | 0.08 | 640 × 480 px video data | 97.6% | 0.2 s | Own created
[101] | N/A | Flexibility and high accuracy | N/A | N/A | N/A | 98.52–99.46% | N/A | Own created
[106] | Adaptive shape analysis | Invariant to illumination | N/A | N/A | 220 | 95.4% | 0.6 s | Own created
[107] | NN | Robustness | N/A | N/A | 467 | N/A | N/A | Own created
[108] | Bimodal binarization and thresholding | Compared TM and NN elaborately | 0.96 | 0.08 | 640 × 480 px video data | 97.6% | 0.2 s | Own created
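The ANN recognizers in Table 9 are typically small feed-forward networks over normalized sign patches. A minimal scikit-learn sketch follows, assuming 32 × 32 grayscale crops; the hidden layer sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ann(patches, labels):
    """Fit a small MLP on flattened, normalized 32x32 grayscale patches."""
    X = np.array([p.astype(np.float32).ravel() / 255.0 for p in patches])
    net = MLPClassifier(hidden_layer_sizes=(128, 64), activation='relu',
                        max_iter=300, random_state=0)
    net.fit(X, labels)
    return net
```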
Table 10. Examples of TSDR systems using a deep learning method.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time | Dataset
[115] | Object bounding box prediction | Predicts position and precise boundary simultaneously | >0.88 mAP | <3 pixels | 3719 | 91.95% | N/A | GTSDB
[120] | YCbCr model | High accuracy and speed | N/A | N/A | Video data | 98.6% | N/A | Own created
[111] | Color space thresholding | Implements detection and classification | 90.2% | 2.4% | 20,000 | 95% | N/A | GTSRB
[121] | SVM | Robust against illumination changes | N/A | N/A | Video data | 97.9% | N/A | Own created
[117] | Scanning window with a Haar cascade detector | Enhanced detection capability with good time performance | N/A | N/A | 16,630 | 99.36% | N/A | GTSRB
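The deep-learning entries in Table 10 are variations on convolutional classifiers. Below is a deliberately small PyTorch sketch for 43-class GTSRB crops at 32 × 32 pixels; the depth and channel counts are illustrative assumptions, far smaller than the published architectures.

```python
import torch.nn as nn

class TinySignNet(nn.Module):
    """A small CNN sketch for 32x32 RGB sign crops (43 GTSRB classes)."""

    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):  # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x))

# Training pairs the logits with nn.CrossEntropyLoss() over integer class labels.
```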
Table 11. Examples of TSDR systems using an AdaBoost method.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time | Dataset
[123] | Sobel edge detection | Comparison of SVM and AdaBoost | N/A | 0.25 | N/A | 92% | N/A | Own created
[124] | AdaBoost | Fast | N/A | N/A | 200 | >90% | 50 ms | Own created
[125] | AdaBoost | Invariant to speed, illumination and viewing angle | 92.47% | 0% | 350 | 94% | 51.86 ms | Own created
[126] | AdaBoost and CHT | Real-time and robust system with efficient SLS detection and recognition | 0.97 | 0.26 | 1850 | 94.5% | 30–40 ms | Own created
[127] | Haar-like method | Reliability and accuracy | 0.9 | 0.4 | 200 | 92.7% | 50 ms | Own created
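The AdaBoost detectors in Table 11 follow the boosted-cascade idea of Viola and Jones [68]: many weak classifiers over simple features are combined into one strong classifier. A minimal scikit-learn sketch of the boosting stage follows, assuming Haar-like feature responses have already been computed.

```python
from sklearn.ensemble import AdaBoostClassifier

def train_boosted_detector(haar_features, is_sign):
    """Boost decision stumps over (assumed) precomputed Haar-like responses.

    haar_features: (n_samples, n_features) array; is_sign: 0/1 labels.
    """
    # scikit-learn's AdaBoost uses depth-1 trees (stumps) by default.
    strong = AdaBoostClassifier(n_estimators=200, random_state=0)
    strong.fit(haar_features, is_sign)
    return strong
```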
Table 12. Examples of TSDR systems using an SVM method.
Ref | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time | Dataset
[44] | DtBs and SVM | Fast, high accuracy | N/A | N/A | Video data | 92.3% | N/A | GRAM
[55] | Gabor filter | Simple and high accuracy | N/A | N/A | 58 | 93.1% | N/A | Own created
[130] | CIELab and the Ramer–Douglas–Peucker algorithm | Illumination-proof and high accuracy | N/A | N/A | 405 | 97% | N/A | Own created
[131] | RGB to HSI, then shape analysis | Less processing time | N/A | N/A | N/A | 92.6% | Avg. 5.67 s | Own created
[88] | Hough transform | Reliability and accuracy | N/A | N/A | Video data | Avg. 92.3% | 35 ms | Own created
[132] | RGB to HSI, then shape localization | Reduces the memory space and time for testing new samples | N/A | N/A | N/A | 95% | N/A | Own created
[133] | MSER | Invariant to illumination and lighting conditions | 0.97 | 0.85 | 43,509 | 89.2% | N/A | Own created
[134] | HSI and edge detection | Less processing time | N/A | N/A | Video data | N/A | N/A | Own created
[135] | RGB to HSI | Identifies the optimal image attributes | 0.867 | 0.12 | 650 | 86.7% | 0.125 s | Own created
[136] | Edge-adaptive Gabor filtering | Reliability and robustness | 85.93% | 11.62% | 387 | 95.8% | 3.5–5 ms | Own created
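Most rows of Table 12 end in an SVM over color- or shape-derived features. A minimal sketch pairing a hue-saturation histogram with an RBF-kernel SVM follows; the bin count and hyper-parameters are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hs_histogram(bgr_patch, bins=16):
    """Normalized hue-saturation histogram of a sign patch (assumed feature)."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).ravel()

def train_svm(patches_bgr, labels):
    """Fit an RBF-kernel SVM on color-histogram features."""
    X = np.array([hs_histogram(p) for p in patches_bgr])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')
    clf.fit(X, labels)
    return clf
```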
Table 13. Examples of TSDR systems using other methods.
Ref | Method | Detection Feature | Advantages | True Positive Rate | False Positive Rate | No. of Test Images | Overall Accuracy | Time | Dataset
[138] | SIFT matching | N/A | Effective in recognizing low-light and damaged signs | N/A | N/A | 60 | N/A | N/A | Own created
[34] | Fringe-adjusted joint transform correlation | Color feature extraction using a Gabor filter | Excellent discrimination between object and non-object | 78 | 32 | 17,587 | N/A | N/A | Own created
[139] | Principal component analysis | HSV, CIECAM97 and PCA | High accuracy rate | N/A | N/A | N/A | 99.2% | 2.5 s | Own created
[140] | Improved fast radial symmetry (IFRS) and pictogram distribution histogram based SVM | RGB to Lab color space, then IFRS detection | High accuracy rate | N/A | N/A | 300 | 96.93% | N/A | Own created
[144] | Infrastructures of vehicles | N/A | Eliminates the possibility of false positives owing to ID coding | N/A | N/A | Video data | N/A | N/A | Own created
[145] | FCM and content-based image recorder | Fuzzy c-means (FCM) | Effective in real-time applications | N/A | N/A | Video data | <80% | N/A | Own created
[141] | Template matching and a 3D reconstruction algorithm | N/A | Very effective in recognizing damaged or occluded road signs | In 3D, 54 out of 63 | In 3D, 6 out of 63, with 3 signs missed | 4800 | N/A | N/A | Own created
[142] | Low-rank matrix recovery (LRMR) | N/A | Fast computation and parallel execution | N/A | N/A | 40,000 | 97.51% | >0.2 s | GTSRB
[143] | Karhunen–Loève transform and MLP | Oriented gradient maps | Invariant to illumination and different lighting conditions | N/A | N/A | 12,600 | 95.9% | 0.0054 s/image | GTSRB
[35] | Self-organizing map | N/A | Fast and accurate | N/A | N/A | N/A | <99% | N/A | Own created
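Among the methods in Table 13, the SIFT-matching entry ([138]) is the simplest to sketch: keypoints from a candidate region are matched against each reference sign, and the reference with the most ratio-test survivors wins. A minimal OpenCV version follows, with the Lowe ratio threshold as an illustrative assumption.

```python
import cv2

def sift_match_count(query_gray, reference_gray, ratio=0.75):
    """Count Lowe ratio-test matches between a candidate and a reference sign."""
    sift = cv2.SIFT_create()
    _, des_q = sift.detectAndCompute(query_gray, None)
    _, des_r = sift.detectAndCompute(reference_gray, None)
    if des_q is None or des_r is None:
        return 0
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_q, des_r, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Classification: pick the reference sign maximizing sift_match_count(query, ref).
```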
