Article

Automatic Road Centerline Extraction from Imagery Using Road GPS Data

1 Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117583, Singapore
2 Harbin Institute of Technology (HIT) Wuhu Robot Technology Research Institute, Wuhu 241007, China
* Author to whom correspondence should be addressed.
Remote Sens. 2014, 6(9), 9014-9033; https://doi.org/10.3390/rs6099014
Submission received: 20 June 2014 / Revised: 25 August 2014 / Accepted: 2 September 2014 / Published: 23 September 2014

Abstract

Road centerline extraction from imagery is a key element in numerous geospatial applications and has been addressed through a variety of approaches. However, most existing methods cannot cope with challenges such as varying road shapes, complex scenes, and different resolutions. This paper presents a fully automatic method for road centerline extraction from imagery that addresses these challenges by exploiting road GPS data. The proposed method combines road color features with road GPS data to detect road centerline seed points. After global alignment of the road GPS data, a novel algorithm extracts each individual road centerline in local regions. Finally, road connection generates the road centerline network as output. Extensive experiments demonstrate that the proposed method rapidly and accurately extracts road centerlines from remotely sensed imagery.

1. Introduction

Geographical information extraction from remotely sensed images has been an active research subject in recent years, with a variety of applications such as map localization, urban planning, resource management, natural disaster analysis, transportation system modelling, and military applications. Road centerlines are among the most fundamental and prominent pieces of geographical information for Geographic Information System (GIS) updating and can benefit a variety of applications [1].

This paper presents a novel method for extracting road centerlines from imagery using road GPS data, which is capable of coping with complex scenes, different road shapes, diverse regions, and different resolutions. The objective of this work is to develop an efficient method for automatic and accurate road centerline extraction. The remainder of the paper is organized as follows. Related work on road centerline extraction is reviewed next, followed by the framework of the proposed method. Section 2 presents the global alignment of road GPS data, and Section 3 describes the road centerline extraction method. Experimental results are discussed in Section 4, followed by our conclusions in Section 5.

Related Work

In the past decades, numerous methods for road extraction from satellite imagery have been proposed. Mena [1] and Mayer et al. [2] surveyed the pre-2006 literature on automatic road extraction for GIS updating, and Li et al. [3] reviewed published studies on semi-automatic road extraction methods. According to the processing method, recent road extraction techniques can be classified into a few groups, such as user-guided [4], line-based [5], region-based [6], and knowledge-based techniques [7]. In order to obtain transportation data and update GIS databases [8], much research has focused on road centerline extraction. Due to the complexity of road conditions, human operators still play a principal role in extracting road centerlines from imagery. Yang and Zhu [9] used an active-window line-segment matching method to extract main road networks from high-resolution images. Hu et al. [10] extracted road centerlines based on a piecewise parabola model, but the road width and seed points must be input manually. Chaudhuri et al. [11] developed customized operators to accurately extract roads from high-resolution satellite imagery using directional morphological enhancement, segmentation, and thinning. These semi-automatic methods are time-consuming and require repetitive manual work that does not meet the requirements of current applications [12]. Automatic road centerline extraction has remained a challenging research subject for years. Price [13] presented a street-grid method for urban road extraction, which is limited to grid roads. Using high-spatial-resolution images, Alian et al. [14] detected the road as a curve and analyzed its shape using fuzzy c-means and alpha shapes. Shi et al. [15] extracted the centerlines of main roads in urban regions based on color and shape information.
Lin and Saripalli [16] extracted intersecting and bifurcating roads from images with significant changes in lighting and intensity using a clustering and road-tracking method. With the growing availability of remotely sensed data, several methods have been developed to improve the accuracy and reliability of the results through multi-information fusion, for example combining multi-view satellite imagery [17], stereoscopic aerial images, satellite imagery and SAR (synthetic aperture radar) data [18], satellite imagery and LiDAR (light detection and ranging) data [12], floating GPS data [19], and aerial images captured by unmanned aerial vehicles (UAVs) [20,21].

Although the abovementioned methods provide many techniques for road extraction, some challenging problems remain unsolved due to the varied conditions in different regions (e.g., urban, suburban, and rural roads) and partial occlusions (e.g., clouds, buildings, shadows, trees, and cars). Consequently, traditional methods that consider only main roads against simple backgrounds often fail when dealing with complex scenes. Recently, some papers have addressed occlusion problems. Chai et al. [22] developed a junction-point process that improves accuracy in the presence of occlusions; however, curved roads remain difficult to extract adequately. Zielinski and Iwanowski [23] proposed a morphology method for extracting invisible road parts, which is limited to roads with simple backgrounds in rural areas. Silva and Centeno [24] presented a semi-automatic method that only deals with roads partially occluded by trees and cannot extract road centerlines in urban and suburban regions with high accuracy.

To overcome the aforementioned shortcomings of existing road centerline algorithms, this paper presents a novel and robust approach that extracts road centerlines on the basis of imagery and road GPS data. Satellite and aerial imagery provide abundant data covering roads over large areas, while road GPS data contain useful road information (e.g., road GPS position, road shape, road type, and road relationships) that improves road centerline extraction. The performance of the proposed method is assessed on 20 satellite images, 3 aerial images, and the corresponding road GPS data. Figure 1 depicts the flowchart of the proposed methodology, which comprises three main modules: global alignment, road centerline extraction, and road network generation. The details of each module are explained in Sections 2 and 3.

2. Global Alignment of Road GPS Data

The proposed approach is based on remotely sensed imagery and the corresponding road GPS data. The input remotely sensed images are composed of three color bands in tagged image file format. Road GPS data are available from OpenStreetMap (OSM), a volunteered geographical dataset and collaborative project to create a free, editable map of the world [25]. Contributors in the OSM community collect geographic information and submit it to the OSM database [26]. In the OSM dataset, each road is recorded by its road attributes and GPS nodes. The average road GPS overlap rate (OR) of OSM is more than 88% in urban regions [27], and coverage is gradually being improved to include more regions. The attributes of each road include the road ID (identity number), road layer (e.g., highway, waterway, railway), road type (e.g., expressway, residential, motorway, cycleway), road name, etc. The GPS data of each road are represented by several road GPS nodes [28]. In order to normalize the coordinate system, the road GPS data are transformed from WGS84 (World Geodetic System 1984) to the mapping system of the remotely sensed images according to their transformation parameters.

In practice, most of the GPS data from OSM are captured by consumer GPS receivers with an accuracy of 6 to 10 m under normal conditions [25]. Due to this positional error, global alignment of the road GPS data is required. Although the remotely sensed images are composed of three color bands (i.e., RGB), only the most discriminative features are adopted: roads generally differ from their surroundings (e.g., buildings, grass, forest, and other objects) in hue and saturation. Therefore, the RGB images are converted into HSV space to decouple color from brightness, and the H (hue) and S (saturation) channels are used for clustering. To fuse the two features into a road prior map representing the dissimilarity to the road, we use the k-means clustering method [29], which partitions the image into several clusters of pixels so as to minimize the total within-cluster differences. Two parameters must be specified: the number of clusters and the initial cluster centroids, which are determined automatically for each image through a 2-D histogram analysis of the H and S values. As shown in Figure 2b, the number of clusters k equals the number of peaks, and the initial cluster centroids μ are given by the H and S values of the peaks. The clusters are then further processed to identify the cluster corresponding to the road by the following steps:

(1)

Remove the clusters representing green vegetation (grass or trees). The cluster centroids of the vegetation mask have a saturation value over 0.4 and a hue value between 0.1 and 0.5.

(2)

Based on an analysis of 200 images larger than 1000 × 1000 pixels under different illumination conditions, as shown in Figure 2c, the clusters representing roads and buildings always contain more pixels than the other clusters.

(3)

In urban regions, most roads have a lower reflectance than buildings. Therefore, the cluster with the lower brightness value is more likely to contain pixels belonging to roads. The clustering results are shown in Figure 2d. Let μ denote the mean feature of the cluster corresponding to the road. The road prior map P is defined as:

P(x, y) = Σ_{i=1}^{2} [F(x, y)_i − μ_i]²,  (2)
where F(x, y)_i refers to the i-th feature channel of pixel (x, y). The resultant road prior map is shown in Figure 2e. Based on the road prior map obtained by (2), the road GPS data are aligned to improve their accuracy. Let x_j and y_j denote the positions of the road GPS nodes extracted from the GPS data of the j-th road; we search for an integer offset (X_o ∈ ℤ, Y_o ∈ ℤ) within a certain range ⌊δ/res⌋ that minimizes the following energy function:
E(X_o, Y_o) = (1/N_r) Σ_{j=1}^{N_r} P(⌊x_j + X_o⌋, ⌊y_j + Y_o⌋),
where N_r is the number of all road GPS nodes in the region, res is the resolution of the image, and ⌊·⌋ rounds its argument to the nearest integer. Since this energy function is not convex, a brute-force search is used to shift the road GPS data to the optimal position. Figure 2f shows the aligned road GPS data in red, and some enlarged regions are shown in Figure 2g. The road prior map only represents a coarse dissimilarity to the road. Hence, as shown in Figure 2g, the original road GPS data are adjusted to lie close to the road based on the entire region. The following sections focus on road centerline extraction in local regions.
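As a concrete illustration, the prior-map construction and brute-force alignment described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the array layout, the function names, and the clipping of shifted nodes to the image bounds are our own assumptions.

```python
import numpy as np

def road_prior_map(hs, centroid):
    """Road prior map P(x, y): squared distance of each pixel's (H, S)
    feature vector to the road cluster centroid."""
    # hs: H x W x 2 array of hue/saturation; centroid: length-2 vector.
    return ((hs - np.asarray(centroid)) ** 2).sum(axis=2)

def align_gps(prior, nodes, max_offset):
    """Brute-force search for the integer offset (Xo, Yo) that minimizes
    the mean prior value sampled at the shifted GPS node positions."""
    best, best_off = np.inf, (0, 0)
    h, w = prior.shape
    for xo in range(-max_offset, max_offset + 1):
        for yo in range(-max_offset, max_offset + 1):
            # Round shifted node positions to pixel indices, clipped to the image.
            xs = np.clip(np.round(nodes[:, 0] + xo).astype(int), 0, w - 1)
            ys = np.clip(np.round(nodes[:, 1] + yo).astype(int), 0, h - 1)
            e = prior[ys, xs].mean()   # energy E(Xo, Yo)
            if e < best:
                best, best_off = e, (xo, yo)
    return best_off
```

For a search range of a few dozen pixels the exhaustive search stays cheap, since the energy is evaluated only at the GPS node positions rather than over the whole image.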

3. Road Centerline Extraction

3.1. Local Image Segmentation

Road conditions vary across regions, and a local image region contains the detailed information of a road. Hence, a given satellite or aerial image is first partitioned into a number of local regions according to the positions of the GPS nodes belonging to each road. Depending on these positions in the image, the bounding box of the j-th road is generated to extract a local image with a margin Wm in both the x and y directions. The value of Wm is selected adaptively according to the image resolution, Wm = 20/res. If the calculated margin extends beyond the image, the excess region is excluded from the local image.
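The local-region cropping can be sketched as follows. The function name and the returned tuple convention are illustrative assumptions, not the paper's code.

```python
def local_region(image_shape, node_xs, node_ys, res):
    """Bounding box of one road's GPS nodes, padded by the margin
    Wm = 20 / res pixels and clipped to the image extent."""
    h, w = image_shape
    wm = int(20 / res)
    x0 = max(min(node_xs) - wm, 0)        # clip left/top margins at 0
    y0 = max(min(node_ys) - wm, 0)
    x1 = min(max(node_xs) + wm, w - 1)    # clip right/bottom at image edge
    y1 = min(max(node_ys) + wm, h - 1)
    return x0, y0, x1, y1
```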

3.2. Road Centerline Extraction Algorithm

Road centerlines are represented by a set of road seed points (RSP) and their connecting lines. Therefore, the performance of road centerline extraction depends on the accuracy of the extracted RSP. In order to achieve accurate road centerline detection, we develop an effective algorithm that generates the road centerline by extracting a set of RSP from candidate seed points (CSP) located on road profiles within a certain range. The road centerline extraction algorithm consists of the following steps:

(a)

Computing the position of CSP in the local image;

(b)

Selecting the seed points from CSP according to the corresponding road intensity profile;

(c)

Refining the initial estimated road centerline using the road shape information.

Road GPS nodes are always located at key positions along the road. Hence, according to these nodes, each road can be divided into several segments whose centerlines are extracted independently; we refer to such a segment as a road segment (RS). The positions of two road GPS nodes, denoted by (Xi, Yi) and (Xi+1, Yi+1), are used as input parameters to extract the road centerline of the i-th RS. Next, we explain the procedure for extracting the RS centerline. The unit of the distance parameters used in the algorithm is pixels.

3.2.1. CSP Position Calculation

As indicated by the black solid line in Figure 3a, the CSP lie along the road profile, which is perpendicular to the line connecting the road GPS nodes. To accurately and completely detect the seed points of each RS, the road profiles are placed at an adaptive interval according to the length of the RS. The length of the road profile is called the profile buffer width (Wp). The value of Wp depends on the resolution of the image and the road width: Wp = φWr/res, where φ is an adjustable parameter of the road profile buffer width and Wr is the road width, predefined according to the road type obtained from the road GPS data.

The distance between neighboring road profiles is the step width (Ws), which is computed as:

W_s = ⌊dis/10⌋, if ⌊dis/10⌋ ≥ 2; W_s = 1, otherwise,
where dis is the distance between two road GPS nodes. The number of road profiles in one RS is num ∈ ℤ, which is set as:
num = ⌊dis/W_s⌋, if ⌊dis/10⌋ ≥ 2; num = ⌊dis⌋, otherwise.

In order to obtain the CSP on the k-th road profile, the start point P_s^k(x_s^(k), y_s^(k)), indicated in white, and the end point P_e^k(x_e^(k), y_e^(k)), indicated in yellow in Figure 3a, are calculated as follows:

[x^(k), y^(k)]^T = [X_i, Y_i]^T + [cos θ, −sin θ; sin θ, cos θ] · [(k − 1)W_s, βW_p/2]^T,
where θ is the angle of the vector from P_i to P_{i+1} and k ∈ {1, 2,..., num}. The value of β is set to −1 for the start point and 1 for the end point.

Since a pixel is the smallest unit of an image, the pixel positions approximating the set of points between the start point P_s^k and the end point P_e^k are quickly obtained using Bresenham's line algorithm [30], as shown in Figure 3b.
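Putting the two steps together, the profile endpoint rotation and the Bresenham rasterization can be sketched as follows. Rounding the endpoints to integer pixel coordinates is our own assumption; the function names are illustrative.

```python
import math

def profile_endpoints(p_i, p_next, k, ws, wp):
    """Start (beta = -1) and end (beta = +1) points of the k-th road
    profile, perpendicular to the GPS segment p_i -> p_next."""
    theta = math.atan2(p_next[1] - p_i[1], p_next[0] - p_i[0])
    pts = []
    for beta in (-1.0, 1.0):
        dx, dy = (k - 1) * ws, beta * wp / 2.0
        # 2-D rotation of the offset (dx, dy) by theta, applied at p_i.
        x = p_i[0] + math.cos(theta) * dx - math.sin(theta) * dy
        y = p_i[1] + math.sin(theta) * dx + math.cos(theta) * dy
        pts.append((round(x), round(y)))
    return pts  # [start, end]

def bresenham(p0, p1):
    """Integer pixel positions on the segment p0 -> p1 (Bresenham)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err, pts = dx + dy, []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts
```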

3.2.2. Seed Point Selection

Seed point selection is carried out in three steps. First, we compute the derivative of the CSP intensity values. Second, we identify the dominant peaks and valleys (PV) of the derivative and group the CSP into several segments. Third, we select the seed point from the segment belonging to the road.

After the CSP positions are calculated, the intensity values of the CSP are read directly from the local image. The appearance of roads in satellite images may be affected by building shadows or other objects [31]. However, the intensity values of points on the same object stay within a small range, while those of points located at object edges vary significantly due to the different appearances of the objects. When CSP fall on cars or pavement markings, their intensity values sometimes differ significantly from adjacent road pixels, so a sliding-window filter [32] is introduced to smooth the CSP intensity values and eliminate the influence of small objects and noisy pixels, as shown in Figure 4a. Compared with other small objects on the road, cars are wider; given that a car is around 1.75 m wide [33], the sliding window width is set to Wd = ⌊2/res⌋.
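The sliding-window smoothing can be sketched as a simple edge-truncated mean filter; the authors' exact filter [32] may differ, so this is only an illustrative assumption.

```python
def smooth_profile(values, res):
    """Mean filter over the CSP intensity profile with window width
    Wd = floor(2 / res) pixels (roughly one car width)."""
    wd = max(int(2 / res), 1)
    half = wd // 2
    out = []
    for i in range(len(values)):
        # Truncate the window at the profile ends instead of padding.
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```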

According to the derivative of the CSP intensity values, the CSP separated by the dominant PV of the derivative are grouped into object segments belonging to rooftops, roads, or other objects. However, as shown in Figure 4b, several small or weak PV appear on or near the dominant PV, which makes it difficult to locate the dominant PV precisely. In order to find the dominant PV that mark the interval points between different CSP segments, a PV-finding algorithm is designed based on the derivative values, similar to the peak-finding algorithm for histogram analysis in [34]. The process is described in Algorithm 1.

Algorithm 1. Finding the dominant PV
Input: The derivative values of CSP, represented by ϕ(i), where i is an integer (0 < i ≤ Wp).
Output: Dominant peaks ϕ(k).
  for i = 1 to Wp do
    if |ϕ(i)| > |ϕ(i + 1)| and |ϕ(i)| > |ϕ(i − 1)| then
      if |ϕ(i) − Σ_{i=1}^{Wp} ϕ(i)/Wp| ≥ 20 then
        ϕ(j) ← ϕ(i)  ▹ record these peaks
      end if
    end if
  end for
  Merge adjacent PV (n is the number of recorded peaks ϕ(j)):
  for j = 1 to n − 1 do
    if (position of ϕ(j + 1) − position of ϕ(j)) ≤ Wp/10 then
      ϕ(k) = max{ϕ(j), ϕ(j + 1)}
    end if
  end for
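Algorithm 1 can be transcribed into Python roughly as follows; the list-based bookkeeping of peak positions is our own representation, not the authors' code.

```python
def find_dominant_pv(phi, wp):
    """Dominant peak/valley detection on derivative values phi:
    keep local extrema of |phi| whose deviation from the mean is >= 20,
    then merge extrema closer than wp / 10, keeping the larger one."""
    mean = sum(phi) / wp
    # Record local extrema of the absolute derivative that are strong enough.
    peaks = [i for i in range(1, len(phi) - 1)
             if abs(phi[i]) > abs(phi[i - 1])
             and abs(phi[i]) > abs(phi[i + 1])
             and abs(phi[i] - mean) >= 20]
    # Merge adjacent extrema, keeping the one with the larger magnitude.
    merged = []
    for i in peaks:
        if merged and i - merged[-1] <= wp / 10:
            if abs(phi[i]) > abs(phi[merged[-1]]):
                merged[-1] = i
        else:
            merged.append(i)
    return merged
```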

The CSP are separated into several object segments by the obtained dominant PV. Due to road characteristics, road segments have a certain range of intensity values and lengths. In order to filter out non-road segments, the segment length and average intensity value are limited to the length range Rl ∈ [σWp, Wp] and the intensity range Rt ∈ [Va − υ, Va + υ], respectively, where Va is the average intensity value of pixels located on the aligned road GPS data of the entire region, and σ and υ are the length and intensity range parameters. If exactly one road segment (Np = 1) is extracted from the CSP, the seed point for this road profile is located at the center of the selected segment, as shown in Figure 4c. The road centerline is generated by connecting neighboring seed points once all seed points have been extracted, as shown in Figure 3c. If Np ≠ 1, the CSP may cover other nearby roads (Np > 1) or may be partially occluded (Np = 0).

3.2.3. Road Centerline Refinement

Some roads are partially occluded in an image, so seed points located in the occluded area cannot be extracted from the image information alone. A road centerline generated merely by connecting seed points across a large gap is imperfect. Therefore, in order to improve road centerline extraction under complex scenes, we make full use of the road shape information obtained from road GPS data to represent road centerline segments where parts of the road are covered by scene objects.

If the distance between two neighboring seed points is less than a certain length, defined as the valid interval length (Lv), their connection can be approximated by a short line; hence, the two seed points are connected by a straight line as part of the road centerline. Otherwise, the two neighboring seed points are connected according to the road shape information obtained from road GPS data. In order to normalize Lv for roads of different lengths, it is defined as Lv = Rv × Lr, where Rv is the road replace rate, a confidence value associated with the accuracy of the road GPS data: if the road GPS data are more accurate in a region, Rv can be assigned a larger value. Let P_r^i and P_g^i denote the positions of the i-th road seed point and the corresponding road GPS point, respectively. The road centerline refinement with partial occlusion is described in Algorithm 2.

Algorithm 2. Road centerline refinement with partial occlusion
Input: The road length Lr; the extracted road seed points (xr(i), yr(i)), i = 1,..., n − 1, m + 1,..., num, 1 < n ≤ m ≤ num; the corresponding road GPS data (xg(i), yg(i)).
Output: The road centerline.
  if Np(i) ≠ 1 for i = n,..., m then
    Compute the distance Ds between the (n − 1)-th and (m + 1)-th seed points.
  end if
  if Ds > Lv then compute the offset value:
    if m = num then
      Ox = xr(n − 1) − xg(n − 1)
      Oy = yr(n − 1) − yg(n − 1)
    else
      Ox = 0.5 × ((xr(n − 1) + xr(m + 1)) − (xg(n − 1) + xg(m + 1)))
      Oy = 0.5 × ((yr(n − 1) + yr(m + 1)) − (yg(n − 1) + yg(m + 1)))
    end if
  end if
  for i = n to m, do
    (xg(i) + Ox, yg(i) + Oy) ← the position of the i-th seed point.
  end for
  for i = 1 to num − 1, do
    Connect the i-th and (i + 1)-th seed points by a straight line.
  end for
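Under the assumption that seed points and GPS nodes are stored as index-to-coordinate mappings (our own data layout), Algorithm 2 can be sketched as:

```python
import math

def refine_centerline(seeds, gps, n, m, num, lv):
    """Fill missing seed points n..m from the road GPS shape, shifted by
    the offset observed at the seeds flanking the gap (Algorithm 2 sketch)."""
    # Distance across the gap; if the gap reaches the road end, force refinement.
    ds = math.dist(seeds[n - 1], seeds[m + 1]) if m < num else float('inf')
    out = dict(seeds)
    if ds <= lv:
        return out  # short gap: a straight-line connection suffices
    if m == num:    # gap extends to the road end: offset from the last valid seed
        ox = seeds[n - 1][0] - gps[n - 1][0]
        oy = seeds[n - 1][1] - gps[n - 1][1]
    else:           # average offset of the two seeds flanking the gap
        ox = 0.5 * ((seeds[n - 1][0] + seeds[m + 1][0])
                    - (gps[n - 1][0] + gps[m + 1][0]))
        oy = 0.5 * ((seeds[n - 1][1] + seeds[m + 1][1])
                    - (gps[n - 1][1] + gps[m + 1][1]))
    # Synthesise the missing seeds from the shifted GPS shape.
    for i in range(n, m + 1):
        out[i] = (gps[i][0] + ox, gps[i][1] + oy)
    return out
```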

As shown in Figure 5a, the road segment between seed points P_r^n and P_r^m, occluded by trees, is generated so as to keep the road shape, based on the corresponding road GPS data. Consequently, road centerline extraction is not sensitive to partial occlusion.

The refinement of roads completely covered by objects (e.g., buildings, shadows, trees, clouds) is similar to the refinement with partial occlusion: the centerline of the covered road is generated from its GPS data to keep the road shape, and its offset is calculated as the average offset of the Nu nearest road nodes belonging to uncovered roads.

3.3. Road Connection

Since each road centerline is extracted independently, road connection is required to connect these centerlines and generate the road network as the final output. Connection errors leave road centerlines isolated and seriously affect the completeness of the road network, owing to the large number of crossings and junctions in urban areas.

The relationships between nearby roads are recorded in the road GPS data and can be used to refine the road centerline connections. Since crossing and junction points are always key positions of a road, connected roads share the same GPS nodes in their associated road GPS data. By searching all road GPS nodes in the region, connected roads are recorded whenever they share the same road GPS nodes. Then, the extracted centerlines of roads with a connected relationship are examined. If they are not connected, the two roads are compared: since a minor road, whose Lr is shorter, always connects to a primary road according to road traffic design, the road with the larger width is taken as the reference and the other as the floating road. For each pair of connected roads, the extracted centerlines of the reference and floating roads are adjusted following two rules:

(a)

If they intersect, the floating road is split by the reference road, and the split part with the larger length is preserved, as indicated by the black circles in Figure 5b.

(b)

If they do not intersect, the floating road is extended to connect with the reference road, as indicated by the white circles in Figure 5b.
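Choosing between the two rules hinges on whether the two extracted centerlines intersect. A standard orientation-based segment intersection test can serve for this check; the helper below is our own, not taken from the paper.

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly intersects segment p3-p4,
    i.e., the endpoints of each segment lie on opposite sides of the other."""
    def cross(o, a, b):
        # z-component of the cross product (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    # Rule (a) applies when this is True (split the floating road);
    # otherwise rule (b) extends the floating road toward the reference.
    return d1 * d2 < 0 and d3 * d4 < 0
```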

3.4. Post-Processing

In order to further improve performance, we propose a post-processing step to extract additional road centerlines from regions without road GPS data. Missing road GPS data mostly occur at road network endpoints located in secluded places that are rarely measured with GPS receivers. Hence, we search the region around the road network endpoints and identify the centerlines of roads without GPS traces. Post-processing consists of the following steps:

(a)

Since the roads in the same region have similar colors, pixels located on or near the road network are used to obtain the gray-value range of road pixels. Then, we search the region around each endpoint of the road network and detect nearby pixels possibly belonging to a road. The search regions are circles with a radius of 20 m centered at the endpoints. As shown in the left part of Figure 5c, the outlines of the search regions and the detected road regions are indicated in black and green, respectively.

(b)

If more than one region is detected, the largest detected region not covered by the road network is selected as the road region.

(c)

The road region is fitted with a rectangle [35], indicated in blue in the middle part of Figure 5c; the direction of the rectangle centerline should be parallel to the corresponding road network.

(d)

To avoid overlap, when the rectangle overlaps with the road network, the center point of the rectangle edge, indicated as the red point in the middle part of Figure 5c, is selected to connect to the endpoint as the final result. Otherwise, the centerline of the rectangle, indicated by the yellow line in the middle part of Figure 5c, is calculated to connect the endpoints as the final result, shown in the right part of Figure 5c.

This post-processing step, which extracts additional road centerlines from regions without road GPS data, further improves the performance of road centerline extraction.

4. Experimental Results

To show the strengths and weaknesses of the proposed method, we perform several tests in this section. Our testbed is composed of large and representative images from three different sources: the GeoEye satellite, the IKONOS satellite, and aerial imagery. The proposed method was tested on 10 satellite images captured by the GeoEye satellite with a resolution of 0.5 m/pixel, 10 satellite images captured by the IKONOS satellite with a resolution of 1 m/pixel, and 3 aerial images with a resolution of 0.5 m/pixel. Each image has three spectral bands: red, green, and blue. The satellite images cover urban, suburban, and rural regions in Singapore, Australia, China, and Vietnam, containing a total of 13,731 roads identified in the OSM dataset according to their road names. The scenes include diverse road characteristics in both shape and type, such as expressways, motorways, and residential ways in complex scenes. In the test images, the average density of GPS nodes is about 1 per 10.5 m in urban areas and 1 per 17.8 m in rural areas. Some test images show very complex road networks near city centers, where roads are covered or partially occluded by clouds, trees, and building shadows. The road GPS data are collected from the OSM website and transformed into the 'shp' format as input. For the quantitative evaluation, we used manually prepared ground truth data; to reduce bias, two authors and two students prepared these data.

The quantitative evaluation of road centerline extraction was conducted using the evaluation framework introduced in [36], a common evaluation method used in other papers [18,31], with three criteria: completeness Ep ∈ [0, 1], correctness Et ∈ [0, 1], and quality Eq ∈ [0, 1]. In addition, the geometric quality of the extraction is assessed. The root mean square error measures the average distance between the extracted road centerline and the ground truth and is defined as RMS = √(Σ_{i=1}^{n} d_i²/n), where d_i is the shortest distance between the i-th detected road centerline and the reference centerlines, and n is the number of pieces of extracted road centerline. As RMS mainly depends on the resolution of the image, it is given in pixels in this paper. According to the definition of the criteria, the evaluation results depend on the buffer width, which is set to 2 m on both sides.
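With the extracted and reference centerlines discretized as point sets (our simplification of the buffer-based evaluation in [36]), the RMS and completeness measures can be sketched as:

```python
import math

def rms_error(extracted, reference):
    """RMS of the shortest distance from each extracted point to the
    reference centerline points."""
    sq = [min(math.dist(p, q) ** 2 for q in reference) for p in extracted]
    return math.sqrt(sum(sq) / len(sq))

def completeness(extracted, reference, buffer_width):
    """Fraction of reference points matched by an extracted point
    lying within the buffer width."""
    matched = sum(1 for q in reference
                  if min(math.dist(p, q) for p in extracted) <= buffer_width)
    return matched / len(reference)
```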

4.1. Parameter Selection and Sensitivity Analysis

In the proposed road centerline extraction method, most parameters are estimated automatically, and a few are set according to the characteristics of the input imagery, such as image resolution. A few manually adjusted parameters remain. In this section, we justify these selections experimentally and conduct a sensitivity analysis by varying each parameter within a reasonable range while fixing the other parameters at the mean values of their ranges. The quantitative results for the free parameters are shown in Figure 6, which compares several reasonable values of δ, φ, σ, υ, Nu, and Rv; the correctness and completeness are computed on the same GeoEye satellite image (Singapore, 5000 × 6000 pixels, 0.5 m/pixel). According to the experiments, the values of δ, φ, σ, υ, Nu, and Rv are set to 5, 1.2, 8, 40, 5, and 10, respectively.

4.2. Qualitative Evaluation

Figure 7 shows the results on three tested satellite images captured from different regions. As can be seen, the proposed approach performs well for rural, suburban, and urban areas. The road centerlines of rural and suburban regions are the easiest to extract, as shown in Figure 7c, since most of the roads are straight lines in simple environments. In urban regions, many roads are fully or partially occluded by buildings, trees, shadows, and clouds, as shown in Figure 7a,b; hence, a method relying only on image features cannot achieve good performance in urban and suburban regions. By fusing the information extracted from satellite imagery and road GPS data, the proposed method effectively deals with this problem and is applicable to different regions. The proposed method also performs well with various road shapes, such as straight and curved lines, as shown in Figure 7. According to the road GPS nodes, each road is divided into continuous RS, which are processed independently through the road centerline extraction algorithm, and the final road centerline is composed of the extracted centerlines of the RS. As a result, road centerline extraction is not sensitive to different road shapes. As shown in the enlarged regions in Figure 7a,b, the covered road centerline segments are generated from the road shape extracted from road GPS data, so the proposed method is not sensitive to partial occlusion.

Typical results for each module are shown in Figure 8a,b, with the extracted road centerlines overlaid on the input images. Note that the sensitivity to the accuracy of the GPS data is minimal owing to the global alignment of the road GPS data: although in some local areas the original road GPS data are far from the ground truth, after global alignment they are adjusted to a reasonable range near the road centerline, with no influence on the final results. By means of local-region processing and road connection, the road centerlines are extracted with good performance.

4.3. Quantitative Evaluation

The proposed system was implemented in MATLAB 7 and tested on a PC (Intel i5 CPU at 1.7 GHz, 4 GB RAM). Since both the satellite images and the GPS data are included in the processing, the computing speed depends strongly on the image size as well as the number of roads in the image. On average, the entire extraction process takes around 10 s per road, for an average image size of 5000 × 5000 pixels in an urban area with more than 3000 roads at a resolution of 0.5 m/pixel. Although this timing depends on the computer hardware, the result is comparable with the performance of a human expert and faster than most existing methods.

In total, our test images comprise around 3.13 × 10⁸ pixels, of which the road network accounts for 2.09 × 10⁷ pixels. The performance of the proposed method after each module is given in Table 1. As can be seen, on such a large and diverse image set covering different regions, the obtained average performance values are fairly good.

Most of the original road GPS data are not located within the buffer zone around the road centerline ground truth, so their Ep is much lower, especially in the suburban and urban regions, such as Ho Chi Minh and Erenhot. After global alignment, the road GPS data are adjusted to a reasonable range, and Ep and Et improve markedly. Using the detailed information of the local region and the road attributes, the road centerlines are then extracted with high accuracy. The test images include a large number of partially occluded roads, which increases the difficulty of road network extraction. By making use of road GPS data and the information extracted from the uncovered parts of a road, rather than ‘guessing’ or giving up, our method can effectively handle partially occluded roads and thus improve the overall performance. Since the length of a road connection part is much smaller than the length of the road, road connection has only a slight influence on the results under the common evaluation method. However, the road connection module generates the complete road network as the output, which is important in applications such as transportation system modelling and map localization. Note that the test images of the Singapore regions at different resolutions still yield similar results, because the proposed method adaptively adjusts its parameters according to the image resolution to minimize its influence.
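The buffer-based scores in Table 1 follow the standard evaluation scheme of Heipke et al. [36]. The sketch below is our own illustration of that scheme, assuming Ep and Et correspond to completeness and correctness, and that both centerlines are available as densely sampled point sets:

```python
import numpy as np

def evaluate(extracted, reference, buffer_w=5.0):
    """Buffer-based centerline evaluation in the spirit of Heipke et
    al. [36]. extracted and reference are (N, 2) arrays of densely
    sampled centerline points; buffer_w is the buffer half-width in
    pixels. Returns (completeness, correctness, rms)."""
    def nearest_dist(pts, other):
        # distance from each point in pts to its nearest point in other
        d = np.linalg.norm(pts[:, None, :] - other[None, :, :], axis=2)
        return d.min(axis=1)

    d_ref = nearest_dist(reference, extracted)   # reference -> extracted
    d_ext = nearest_dist(extracted, reference)   # extracted -> reference
    completeness = np.mean(d_ref <= buffer_w)    # matched reference share
    correctness = np.mean(d_ext <= buffer_w)     # matched extraction share
    matched = d_ext[d_ext <= buffer_w]
    rms = float(np.sqrt(np.mean(matched ** 2))) if matched.size else float('inf')
    return completeness, correctness, rms
```

For instance, an extraction that runs parallel to the reference at a one-pixel offset scores full completeness and correctness within a five-pixel buffer, while the RMS reflects the one-pixel geometric error.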

4.4. Comparison With the Existing Methods

In this section, we compare the proposed method with results from the published literature on the same test images, as shown in Table 2 [2,37–39]. In this table, Aerial1, Aerial2 and Aerial3 correspond to images of a suburban area, a rural area with a scene of medium complexity, and a rural area with a simple scene, respectively. According to the results in Table 2, our approach performs better than the other published methods in terms of completeness, correctness and RMS on these small test samples.

Mayer et al. [2] noted that Aerial1 is the most difficult of the three test images, yet the proposed method can still achieve good results in the suburban area owing to the use of road attributes. The overlap rate of road GPS data in suburban or urban areas is generally higher than in rural areas, which benefits the completeness and correctness of the proposed method. Baumgartner et al. [37] make use of road direction information to improve correctness at the expense of completeness. Note that the scene of Aerial2 is more complex than that of Aerial3, which is the main reason for the weaker results of the other methods on Aerial2. As can be seen, such scene variation has minimal influence on the results of the proposed method. For this comparison, we adopt the same assumptions as Mayer et al. [2]. In analyzing the evaluation results, they suggested that completeness and correctness should reach around 0.7 and 0.85, respectively, for a method to be practical. By making use of the road GPS data, the proposed method passes both thresholds on all three test images.

These extensive comparisons demonstrate that the proposed method has significant advantages over existing methods in the literature, especially in urban and suburban regions where roads appear in complex scenes. Some road GPS data obtained from the OSM dataset are inaccurate but have only a slight influence on the final results; hence, the results could be further improved by using commercial road GPS data. Some of our test satellite images are open to the public and can be used as standard benchmark images for road centerline extraction in future studies.

5. Conclusions

In this work we focus on automatic and reliable detection and extraction of road centerline from satellite and aerial images. We have presented an integrated solution that incorporates the information obtained from satellite imagery and road GPS data, under a novel and unified framework to address the challenging problem of road centerline extraction with various road shapes and complex scenes.

Road GPS data provide useful information including the approximate road position, road shape, road type, and road relationships. However, to the best of our knowledge, road GPS data have not been considered in previous studies on road centerline extraction. The proposed method selects aligned road GPS data as the initial road position for road centerline extraction, which improves both accuracy and computing speed. Using the road shape and road type information, road centerlines can be extracted from complex scenes in different regions, even when the road is partially occluded. Based on the road relationships, road connection is refined to further improve the performance and generate the complete road network. Experimental results for various satellite and aerial images from different regions confirm the method’s capability to accurately extract road centerlines. One limitation of the proposed method is its dependence on the overlap rate of road GPS data; however, after the post-processing step, less than 3% of the roads in the tested regions are affected. Meanwhile, road GPS data supplied by volunteered geographical datasets are gradually improving to cover more regions.

The next step is to investigate road centerline extraction from multiple remotely sensed images. It would be interesting to extract useful information from satellite images captured over the same region at different times and from different viewing angles.

Acknowledgments

We acknowledge the following research grants: R-263-000-630-133/232 and R-263-000-677-592.

Author Contributions

Chuqing Cao carried out the experiments and drafted the manuscript; Ying Sun secured supporting funding, supervised research and helped with formulating the methodology.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mena, J.B. State of the art on automatic road extraction for GIS update: A novel classification. Pattern Recognit. Lett 2003, 24, 3037–3058. [Google Scholar]
  2. Mayer, H.; Hinz, S.; Bacher, U.; Baltsavias, E. A test of automatic road extraction approaches. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2006, 36, 209–214. [Google Scholar]
  3. Li, Y.; Xu, L.; Piao, H. Semi-automatic road extraction from high-resolution remote sensing image: Review and prospects. In Proceedings of the 2009 IEEE Ninth International Conference on Hybrid Intelligent Systems, Shenyang, China, 12–14 August 2009; Volume 1, pp. 204–209.
  4. Miao, Z.; Wang, B.; Shi, W.; Zhang, H. A semi-automatic method for road centerline extraction from VHR images. IEEE Geosci. Remote Sens. Lett 2014, 11, 1856–1860. [Google Scholar]
  5. Callier, S.; Saito, H. Automatic road area extraction from printed maps based on linear feature detection. IEICE Trans. Inf. Syst 2012, 95, 1758–1765. [Google Scholar]
  6. Anil, P.; Natarajan, S. Automatic road extraction from high resolution imagery based on statistical region merging and skeletonization. Int. J. Eng. Sci. Technol 2010, 2, 165–171. [Google Scholar]
  7. Mokhtarzade, M.; Zoej, M. Road detection from high-resolution satellite images using artificial neural networks. Int. J. Appl. Earth Obs. Geoinf 2007, 9, 32–40. [Google Scholar]
  8. Miao, Z.; Shi, W.; Zhang, H.; Wang, X. Road centerline extraction from high-resolution imagery based on shape features and multivariate adaptive regression splines. IEEE Geosci. Remote Sens. Lett 2013, 10, 583–587. [Google Scholar]
  9. Yang, Y.; Zhu, C. Extracting road centrelines from high-resolution satellite images using active window line segment matching and improved SSDA. Int. J. Remote Sens 2010, 31, 2457–2469. [Google Scholar]
  10. Hu, X.; Zhang, Z.; Tao, C.V. A robust method for semi-automatic extraction of road centerlines using a piecewise parabolic model and least square template matching. Photogramm. Eng. Remote Sens 2004, 70, 1393–1398. [Google Scholar]
  11. Chaudhuri, D.; Kushwaha, N.; Samal, A. Semi-automated road detection from high resolution satellite images by directional morphological enhancement and segmentation techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens 2012, 5, 1538–1544. [Google Scholar]
  12. Poullis, C.; You, S. Delineation and geometric modeling of road networks. ISPRS J. Photogramm. Remote Sens 2010, 65, 165–181. [Google Scholar]
  13. Price, K. Urban street grid description and verification. In Proceedings of the 2000 IEEE Fifth Workshop on Applications of Computer Vision, Palm Springs, CA, USA, 4–6 December 2000; pp. 148–154.
  14. Alian, S.; Tolpekin, V.A.; Bijker, W.; Kumar, L. Identifying curvature of overpass mountain roads in Iran from high spatial resolution remote sensing data. Int. J. Appl. Earth Obs. Geoinf 2014, 26, 21–25. [Google Scholar]
  15. Shi, W.; Miao, Z.; Debayle, J. An integrated method for urban main-road centerline extraction from optical remotely sensed imagery. IEEE Trans. Geosci. Remote Sens 2014, 52, 3359–3372. [Google Scholar]
  16. Lin, Y.; Saripalli, S. Road detection from aerial imagery. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), St. Paul, MN, USA, 14–18 May 2012; pp. 3588–3593.
  17. Hinz, S.; Baumgartner, A. Automatic extraction of urban road networks from multi-view aerial imagery. ISPRS J. Photogramm. Remote Sens 2003, 58, 83–98. [Google Scholar]
  18. He, C.; Yang, F.; Yin, S.; Deng, X.; Liao, M. Stereoscopic road network extraction by decision-level fusion of optical and SAR imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens 2013, 6, 2221–2228. [Google Scholar]
  19. Li, J.; Qin, Q.; Xie, C.; Zhao, Y. Integrated use of spatial and semantic relationships for extracting road networks from floating car data. Int. J. Appl. Earth Obs. Geoinf 2012, 19, 238–247. [Google Scholar]
  20. Lin, Y.; Saripalli, S. Road detection and tracking from aerial desert imagery. J. Intell. Robot. Syst 2012, 65, 345–359. [Google Scholar]
  21. Zhou, H.; Kong, H.; Wei, L.; Creighton, D.; Nahavandi, S. Efficient road detection and tracking for unmanned aerial vehicle. IEEE Trans. Intell. Transp. Syst 2014, PP, 1–13. [Google Scholar]
  22. Chai, D.; Förstner, W.; Lafarge, F. Recovering line-networks in images by junction-point processes. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 25–27 June 2013; pp. 1894–1901.
  23. Zielinski, B.; Iwanowski, M. Morphology-based method for reconstruction of invisible road parts on remote sensing imagery and digitized maps. Comput. Recognit. Syst 2011, 95, 411–420. [Google Scholar]
  24. Silva, C.; Centeno, J. Automatic extraction of main roads using aerial images. Pattern Recognit. Image Anal 2010, 20, 225–233. [Google Scholar]
  25. Haklay, M.; Weber, P. Openstreetmap: User-generated street maps. IEEE Pervasive Comput 2008, 7, 12–18. [Google Scholar]
  26. Ciepluch, B.; Mooney, P.; Jacob, R.; Winstanley, A.C. Using openstreetmap to deliver location-based environmental information in Ireland. ACM SIGSPATIAL Spec 2009, 1, 17–22. [Google Scholar]
  27. Haklay, M. How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets. Environ. Plan. B Plan. Des 2010, 37, 682–703. [Google Scholar]
  28. Mooney, P.; Corcoran, P.; Winstanley, A. A study of data representation of natural features in openstreetmap. In Proceedings of 6th GIScience International Conference on Geographic Information Science, Zurich, Switzerland, 14–17 September 2010; p. 150.
  29. Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A k-means clustering algorithm. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1979, 28, 100–108. [Google Scholar]
  30. Bresenham, J. Algorithm for computer control of a digital plotter. IBM Syst. J 1965, 4, 25–30. [Google Scholar]
  31. Da Silva, C.R.; Centeno, J.A.S. Semiautomatic extraction of main road centrelines in aerial images acquired over rural areas. Int. J. Remote Sens 2012, 33, 502–516. [Google Scholar]
  32. Lee, C.H.; Lin, C.R.; Chen, M.S. Sliding window filtering: An efficient method for incremental mining on a time-variant database. Inf. Syst 2005, 30, 227–244. [Google Scholar]
  33. Furth, P.G.; Dulaski, D.M.; Buessing, M.; Tavakolian, P. Parking lane width and bicycle operating space. Transp. Res. Rec.: J. Transp. Res. Board 2010, 2190, 45–50. [Google Scholar]
  34. Cheng, H.D.; Sun, Y. A hierarchical approach to color image segmentation using homogeneity. IEEE Trans. Image Process 2000, 9, 2071–2082. [Google Scholar]
  35. Chaudhuri, D.; Samal, A. A simple method for fitting of bounding rectangle to closed regions. Pattern Recognit 2007, 40, 1981–1989. [Google Scholar]
  36. Heipke, C.; Mayer, H.; Wiedemann, C.; Jamet, O. Evaluation of automatic road extraction. Int. Arch. Photogramm. Remote Sens 1997, 32, 151–160. [Google Scholar]
  37. Baumgartner, A.; Steger, C.; Mayer, H.; Eckstein, W.; Ebner, H. Automatic road extraction based on multi-scale, grouping, and context. Photogramm. Eng. Remote Sens 1999, 65, 777–786. [Google Scholar]
  38. Wiedemann, C.; Hinz, S. Automatic extraction and evaluation of road networks from satellite imagery. Int. Arch. Photogramm. Remote Sens 1999, 32, 95–100. [Google Scholar]
  39. Zhang, Q.; Couloigner, I. Benefit of the angular texture signature for the separation of parking lots and roads on high resolution multi-spectral imagery. Pattern Recognit. Lett 2006, 27, 937–946. [Google Scholar]
Figure 1. Flowchart of the proposed road centerline extraction method.
Figure 2. (a) Satellite image. (b) Histogram of the H and S values. (c) Frequency of the number for pixels representing different objects under six ranks (testing 200 images). (d) Clustering results using H and S values. (e) Road prior map. (f) Aligned road GPS data. (g) Enlarged regions (red lines are the aligned GPS data).
Figure 3. Road centerline extraction process. (a) CSP position calculation. (b) Generated CSP. (c) Road centerline generation.
Figure 4. Seed point selection process. (a) Intensity values of CSP. (b) CSP segments. (c) Seed point selection.
Figure 5. (a) Example of road centerline refinement (red line is aligned road GPS data; blue line is the connection of extracted road seeds; yellow line is based on the road shape obtained from road GPS data). (b) Examples of road connection (yellow lines are extracted road centerlines and blue lines are road connection results). (c) The performance of road post-processing in each module.
Figure 6. Sensitivity test of free parameters. (a) δ (Section II). (b) φ (Section III.B). (c) τ (Section III.B). (d) ν (Section III.B). (e) Nu (Section III.B). (f) Rv (Section III.C).
Figure 7. Road centerline extraction results in different regions at 0.5 m/pixel. (a) Urban region, Singapore (5000 × 6000). (b) Urban region, Sydney (5000 × 6000). (c) Suburban region, Erenhot.
Figure 8. The performance of road centerline extraction in each module. (Red lines are the input original road GPS data. Yellow lines are the aligned road GPS data. Green lines are the extracted road centerline. Blue lines are the road centerline after road connection.) (a) Region 1, (b) Region 2.
Table 1. Evaluation of extracted road networks in each module.
| Location (res., m/pixel) | OR (%) | Ep (GPS) | Et (GPS) | RMS (GPS) | Ep (align.) | Et (align.) | RMS (align.) | Ep (extr.) | Et (extr.) | Eq (extr.) | RMS (extr.) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Wuxi (0.5) | 96.5 | 0.391 | 0.402 | 4.62 | 0.684 | 0.705 | 2.14 | 0.965 | 0.963 | 0.931 | 1.30 |
| Ho Chi Minh (0.5) | 96.3 | 0.342 | 0.355 | 4.12 | 0.746 | 0.753 | 2.18 | 0.968 | 0.962 | 0.932 | 1.35 |
| Sydney (0.5) | 97.2 | 0.420 | 0.468 | 3.36 | 0.877 | 0.892 | 2.05 | 0.976 | 0.954 | 0.932 | 1.14 |
| Erenhot (0.5) | 98.4 | 0.212 | 0.244 | 3.90 | 0.892 | 0.899 | 1.84 | 0.985 | 0.972 | 0.958 | 0.86 |
| Singapore (0.5) | 94.3 | 0.378 | 0.381 | 4.79 | 0.662 | 0.702 | 2.22 | 0.954 | 0.976 | 0.932 | 1.26 |
| Singapore (1) | 94.3 | 0.336 | 0.342 | 4.73 | 0.659 | 0.698 | 2.32 | 0.948 | 0.935 | 0.889 | 1.42 |
| Singapore (0.5) | 90.1 | 0.365 | 0.376 | 4.70 | 0.660 | 0.698 | 2.12 | 0.934 | 0.966 | 0.904 | 1.28 |
| Singapore (0.5) | 84.8 | 0.305 | 0.342 | 4.61 | 0.614 | 0.693 | 2.06 | 0.901 | 0.922 | 0.837 | 1.44 |

(GPS: original road GPS data; align.: after global alignment; extr.: after road centerline extraction.)
Table 2. Road centerline extraction performances on three test images from Mayer et al. [2].
Table 2. Road centerline extraction performances on three test images from Mayer et al. [2].
Images/MethodsAerial1Aerial2Aerial3

EpEtRMSEpEtRMSEpEtRMS
Baumgartner [37]0.310.561.530.650.821.140.720.771.3
Wiedemann [38]0.460.473.740.760.662.870.810.633.14
Zhang [39]0.510.491.920.670.491.720.720.631.66
Proposed method0.860.911.250.820.881.340.840.891.32
