Snake-Based Model for Automatic Roof Boundary Extraction in the Object Space Integrating a High-Resolution Aerial Images Stereo Pair and 3D Roof Models
Abstract
1. Introduction
- Vegetation that rises above roofs, partially or totally obstructing their faces. The main problem of this type of obstruction is the lack of radiometric response along roof edges and corners. However, the rectilinear geometry of the roof can be used to overcome the obstruction problem;
- Shadows adjacent to the roof edges, cast by the roof itself according to the sunlight direction. In this case, the shadow edge gradients are usually higher than the roof edge gradients; as a result, the shadow edges can be misinterpreted as a roof boundary. However, the gradient vector itself can be used to weight the edge responses;
- Viewpoint-dependent obstructions caused by adjacent buildings. In urban area images, it is common for building roofs to appear obstructed by neighboring buildings, depending on the image acquisition viewpoints. However, it is possible to exploit the fact that parts obstructed in one image may be visible in the other image of a stereo pair, allowing the correct extraction of the roof boundaries.
2. Proposed Method
2.1. Pre-Processing: Generation of Vegetation and Shadow Auxiliary Images
2.1.1. Shadow Image
2.1.2. Vegetation Image
2.2. Stereo-Based Roof Boundary Mathematical Model
2.2.1. Image-Space Roof Boundary Energy Function
2.2.2. Object-Space Roof Boundary Energy Function
2.3. DP Optimization of the Roof Boundary Energy Function
- To begin the extraction process, seed points that approximately describe the boundary should be provided. We use the corner points from the 3D roof model (Figure 7a).
- The 3D polyline is densified by inserting midpoints between its vertices, while keeping the existing points (Figure 7b). A minimum separation threshold between the inserted points prevents excessive densification on short roof sides and keeps the point density consistent across short and long sides.
- These points—initial and inserted points—are used to create the solution space, which consists of two types of search spaces:
- (a) Side search space: established at the roof-side points. It is created by sampling points along straight-line segments that are transversal both to the boundary and to the normal vector of the roof plane containing the point under analysis; hence, the generated segments lie on the roof planes (Figure 7c).
- (b) Corner search space: established at the corner points and designed as a regular grid whose points are sampled on the roof plane segment (Figure 7d).
- The DP algorithm is run over the solution space created in the previous step to determine the 3D polyline that best fits the roof outline. In this step, the auxiliary vegetation and shadow images and the visibility map are used to verify whether a given point lies in any of these regions and to set the appropriate value for each parameter of the Snake energy function. Section 2.3.1 shows how each parameter is set in the Snake energy function.
- Verify the convergence condition ||Pi − Pi−1|| < L, where Pi and Pi−1 are the two last refined 3D polylines and L is a pre-established threshold. If the condition is not satisfied, return to step 2; otherwise, the optimum polyline Pi has been found.
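The iterative refinement loop above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `build_search_spaces` and `snake_energy` are hypothetical placeholders for the side/corner search-space construction and for the energy function, and a generic Viterbi-style forward pass stands in for the DP optimization over candidate chains:

```python
import numpy as np

def refine_polyline(polyline, build_search_spaces, snake_energy, L=0.05, max_iter=50):
    """Iteratively refine a 3D roof polyline with a Viterbi-style DP pass.

    polyline            : (n, 3) array of 3D seed points (roof-model corners, densified)
    build_search_spaces : callable returning, per polyline point, a list of 3D candidates
    snake_energy        : callable E(prev_pt, pt, next_pt) -> float (lower is better)
    L                   : convergence threshold on mean point displacement (metres)
    """
    for _ in range(max_iter):
        spaces = build_search_spaces(polyline)           # candidate points per vertex
        n = len(spaces)
        cost = [np.full(len(s), np.inf) for s in spaces]
        back = [np.zeros(len(s), dtype=int) for s in spaces]
        cost[0][:] = 0.0
        # Forward pass: accumulate the minimal energy over candidate chains.
        for i in range(1, n):
            for j, p in enumerate(spaces[i]):
                for k, q in enumerate(spaces[i - 1]):
                    prev = spaces[i - 2][back[i - 1][k]] if i >= 2 else q
                    e = cost[i - 1][k] + snake_energy(prev, q, p)
                    if e < cost[i][j]:
                        cost[i][j] = e
                        back[i][j] = k
        # Backtrack the minimal-energy polyline.
        idx = int(np.argmin(cost[-1]))
        new_pts = [spaces[-1][idx]]
        for i in range(n - 1, 0, -1):
            idx = back[i][idx]
            new_pts.append(spaces[i - 1][idx])
        new_polyline = np.array(new_pts[::-1])
        # Convergence test: ||P_i - P_{i-1}|| below threshold L.
        if np.mean(np.linalg.norm(new_polyline - polyline, axis=1)) < L:
            return new_polyline
        polyline = new_polyline
    return polyline
```

In practice the search spaces shrink around the previous solution at each iteration, so the loop converges once the refined polyline stops moving by more than L.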
Adaptive Determination of the Energy Function Parameters
2.4. Accuracy Assessment Metrics
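The metric names used in the tables of Section 3 correspond to standard area-based quality measures; below is a minimal sketch of their usual definitions, assuming the extracted and reference boundaries are rasterized to boolean masks (the exact computation used in the paper may differ):

```python
import numpy as np

def completeness_correctness(extracted, reference):
    """Area-based completeness and correctness between two boolean roof masks.

    completeness = TP / (TP + FN)  -- fraction of the reference that is covered
    correctness  = TP / (TP + FP)  -- fraction of the extraction that is correct
    """
    extracted = np.asarray(extracted, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.logical_and(extracted, reference).sum()
    fn = np.logical_and(~extracted, reference).sum()
    fp = np.logical_and(extracted, ~reference).sum()
    return tp / (tp + fn), tp / (tp + fp)

def rmse_per_axis(extracted_pts, reference_pts):
    """RMSE of corresponding 3D points along the E, N, H axes (columns 0, 1, 2)."""
    d = np.asarray(extracted_pts) - np.asarray(reference_pts)
    return np.sqrt((d ** 2).mean(axis=0))
```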
3. Experimental Results
3.1. Dataset
- LiDAR point cloud: collected from the UNESP Photogrammetric Dataset [31]. The point cloud was acquired in 2014 by a RIEGL LMS-Q680i airborne laser scanner unit at a flight height of 900 m, with an average density of 6 points/m².
- High-resolution aerial images: collected from the UNESP Photogrammetric Dataset [31]. The camera used was a Phase One iXA 180 medium-format camera, based on CCD (Charge-Coupled Device) technology and equipped with Schneider-Kreuznach lenses. The nominal focal length is 55 mm, the image size is 10,328 × 7760 pixels, and the pixel size is 5.2 × 5.2 µm; the average ground sample distance (GSD) is 10 cm. The data were acquired in 2014 at a flight height of 900 m.
- Three-dimensional roof models: created from the LiDAR data. They contain the object-space coordinates of the corner and ridge points, and the roof-plane coefficients.
- Visibility maps: obtained using the Inpho® software (Trimble). The maps have a GSD of 10 cm.
3.2. Parameters
3.3. High Vegetation Obstructions Results
3.4. Adjacent Shadow Results
3.5. Viewpoint-Dependent Obstructions Results
4. Discussion
4.1. Performance
4.2. Limitations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chen, J.; Dowman, I.; Li, S.; Li, Z.; Madden, M.; Mills, J.; Paparoditis, N.; Rottensteiner, F.; Sester, M.; Toth, C.; et al. Information from imagery: ISPRS scientific vision and research agenda. ISPRS J. Photogramm. Remote Sens. 2016, 115, 3–21.
- Benciolini, B.; Ruggiero, V.; Vitti, A.; Zanetti, M. Roof planes detection via a second-order variational model. ISPRS J. Photogramm. Remote Sens. 2018, 138, 101–120.
- Huang, J.; Zhang, X.; Xin, Q.; Sun, Y.; Zhang, P. Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network. ISPRS J. Photogramm. Remote Sens. 2019, 151, 91–105.
- Ghanea, M.; Moallem, P.; Momeni, M. Building extraction from high-resolution satellite images in urban areas: Recent methods and strategies against significant challenges. Int. J. Remote Sens. 2016, 37, 5234–5248.
- Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Mura, M.D. Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139–149.
- Xu, Y.; Wu, L.; Xie, Z.; Chen, Z. Building extraction in very high resolution remote sensing imagery using deep learning and guided filters. Remote Sens. 2018, 10, 144.
- Wu, G.; Shao, X.; Guo, Z.; Chen, Q.; Yuan, W.; Shi, X.; Xu, Y.; Shibasaki, R. Automatic building segmentation of aerial imagery using multi-constraint fully convolutional networks. Remote Sens. 2018, 10, 407.
- Nguyen, T.H.; Daniel, S.; Guériot, D.; Sintès, C.; Le Caillec, J.-M. Super-resolution-based Snake model—An unsupervised method for large-scale building extraction using airborne LiDAR data and optical image. Remote Sens. 2020, 12, 1702.
- Yang, B.; Xu, W.; Dong, Z. Automated extraction of building outlines from airborne laser scanning point clouds. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1399–1403.
- Tomljenovic, I.; Höfle, B.; Tiede, D.; Blaschke, T. Building extraction from airborne laser scanning data: An analysis of the state of the art. Remote Sens. 2015, 7, 3826–3862.
- Du, S.; Zhang, Y.; Zou, Z.; Xu, S.; He, X.; Chen, S. Automatic building extraction from LiDAR data fusion of point and grid-based features. ISPRS J. Photogramm. Remote Sens. 2017, 130, 294–307.
- Pirasteh, S.; Rashidi, P.; Rastiveis, H.; Huang, S.; Zhu, Q.; Liu, G.; Li, Y.; Li, J.; Seydipour, E. Developing an algorithm for buildings extraction and determining changes from airborne LiDAR, and comparing with R-CNN method from drone images. Remote Sens. 2019, 11, 1272.
- Zarea, A.; Mohammadzadeh, A. A novel building and tree detection method from LiDAR data and aerial images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1864–1875.
- Gilani, S.A.N.; Awrangjeb, M.; Lu, G. An automatic building extraction and regularisation technique using LiDAR point cloud data and orthoimage. Remote Sens. 2016, 8, 258.
- Lari, Z.; El-Sheimy, N.; Habib, A. A new approach for realistic 3D reconstruction of planar surfaces from laser scanning data and imagery collected onboard modern low-cost aerial mapping systems. Remote Sens. 2017, 9, 212.
- Fernandes, V.J.M.; Dal Poz, A.P. Extraction of building roof contours from the integration of high-resolution aerial imagery and laser data using Markov random fields. Int. J. Image Data Fusion 2018, 9, 263–286.
- Chen, Q.; Wang, S.; Liu, X. An improved Snake model for refinement of LiDAR-derived building roof contours using aerial images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B3, 583–589.
- Sun, Y.; Zhang, X.; Zhao, X.; Xin, Q. Extracting building boundaries from high resolution optical images and LiDAR data by integrating the convolutional neural network and the active contour model. Remote Sens. 2018, 10, 1459.
- Griffiths, D.; Boehm, J. Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne LIDAR and image data using active contours. ISPRS J. Photogramm. Remote Sens. 2019, 154, 70–83.
- Oliveira, H.C.; Dal Poz, A.P.; Galo, M.; Habib, A.F. Surface gradient approach for occlusion detection based on triangulated irregular network for true orthophoto generation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 443–457.
- Azevedo, S.C.; Silva, E.A.; Pedrosa, M.M. Shadow detection improvement using spectral indices and morphological operators in urban areas in high resolution images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-7/W3, 587–592.
- Soille, P. Morphological Image Analysis; Springer: Berlin, Germany, 2004.
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
- Axelsson, P. Processing of laser scanner data: Algorithms and applications. ISPRS J. Photogramm. Remote Sens. 1999, 54, 138–147.
- Wolf, P.R.; Dewitt, B.A. Elements of Photogrammetry with Applications in GIS, 3rd ed.; McGraw-Hill: New York, NY, USA, 2000.
- Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; John Wiley & Sons: New York, NY, USA, 2001.
- Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
- Fazan, A.J.; Dal Poz, A.P. Rectilinear building roof contour extraction based on snakes and dynamic programming. Int. J. Appl. Earth Obs. Geoinf. 2013, 25, 1–10.
- Moravec, H.P. Towards automatic visual obstacle avoidance. In Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, USA, 22–25 August 1977.
- Ballard, D.; Brown, C.M. Computer Vision; Prentice Hall: Hoboken, NJ, USA, 1982.
- Tommaselli, A.M.G.; Galo, M.; Dos Reis, T.T.; Ruy, R.S.; De Moraes, M.V.A.; Matricardi, W.V. Development and assessment of a dataset containing frame images and dense airborne laser scanning point clouds. IEEE Geosci. Remote Sens. Lett. 2018, 15, 192–196.
- Jovanovic, D.; Milovanov, S.; Ruskovski, I.; Govedarica, M.; Sladic, D.; Radulovic, A.; Pajic, V. Building virtual 3D city model for Smart Cities applications: A case study on campus area of the University of Novi Sad. ISPRS Int. J. Geo Inf. 2020, 9, 476.
| Parameter | Value |
|---|---|
| α | 500,000 |
| β | 500,000 |
| γ | 1 |
| η | 500,000 |
| Side search space dimension | 9 points |
| Corner search space dimension | 9 × 9 points |
| Minimal separation between search spaces | 1.5 m |
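For reference, α, β, and γ in the table play the roles of the classical Snake weights of Kass et al. (continuity, curvature, and image energy), while η weights the paper's additional term, whose exact form is not reproduced in this outline. A generic, hypothetical sketch of how such weights combine at one polyline point:

```python
import numpy as np

def snake_point_energy(prev_pt, pt, next_pt, image_term,
                       alpha=500_000.0, beta=500_000.0, gamma=1.0):
    """Weighted Snake energy at one polyline point (classical Kass et al. form).

    alpha * continuity + beta * curvature - gamma * image_term(pt)
    NOTE: the paper's full energy also includes an eta-weighted term (related to
    the shadow/vegetation/visibility constraints); it is omitted here because its
    exact form is not given in this outline.
    """
    prev_pt, pt, next_pt = map(np.asarray, (prev_pt, pt, next_pt))
    continuity = np.sum((pt - prev_pt) ** 2)                # first difference
    curvature = np.sum((next_pt - 2 * pt + prev_pt) ** 2)   # second difference
    return alpha * continuity + beta * curvature - gamma * image_term(pt)
```

The large α and β values relative to γ = 1 indicate that, at the scale of the image term used, geometric regularity terms dominate the energy balance.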
| | Roof | Completeness (%) | Correctness (%) | RMSE_E (m) | RMSE_N (m) | RMSE_H (m) |
|---|---|---|---|---|---|---|
| Initial boundaries | 1 | 89.69 | 99.67 | 0.27 | 0.35 | 0.75 |
| | 2 | 83.65 | 100.00 | 0.29 | 0.39 | 0.69 |
| | 3 | 88.76 | 97.02 | 0.41 | 0.27 | 0.60 |
| | 4 | 92.35 | 96.99 | 0.33 | 0.46 | 1.25 |
| | 5 | 89.14 | 98.95 | 0.36 | 0.28 | 1.13 |
| | 6 | 96.25 | 98.86 | 0.21 | 0.21 | 0.38 |
| | 7 | 96.25 | 93.31 | 0.35 | 0.30 | 0.95 |
| | Average | 90.87 | 97.83 | 0.32 | 0.32 | 0.82 |
| Extracted boundaries | 1 | 95.86 | 92.18 | 0.30 | 0.22 | 0.61 |
| | 2 | 94.51 | 96.49 | 0.23 | 0.16 | 0.54 |
| | 3 | 99.23 | 95.68 | 0.22 | 0.28 | 0.52 |
| | 4 | 97.42 | 94.69 | 0.27 | 0.25 | 1.14 |
| | 5 | 96.66 | 93.24 | 0.34 | 0.21 | 0.94 |
| | 6 | 98.29 | 98.19 | 0.11 | 0.23 | 0.34 |
| | 7 | 96.20 | 98.39 | 0.22 | 0.18 | 0.91 |
| | Average | 96.88 | 95.55 | 0.24 | 0.22 | 0.71 |
| | Roof | Completeness (%) | Correctness (%) | RMSE_E (m) | RMSE_N (m) | RMSE_H (m) |
|---|---|---|---|---|---|---|
| Initial boundaries | 1 | 92.50 | 99.97 | 0.23 | 0.29 | 0.51 |
| | 2 | 89.73 | 100.00 | 0.34 | 0.30 | 0.35 |
| | 3 | 93.28 | 100.00 | 0.22 | 0.26 | 0.64 |
| | 4 | 87.08 | 100.00 | 0.37 | 0.35 | 0.40 |
| | 5 | 87.67 | 100.00 | 0.28 | 0.35 | 0.48 |
| | Average | 90.05 | 99.99 | 0.29 | 0.31 | 0.48 |
| Extracted boundaries | 1 | 99.91 | 97.37 | 0.13 | 0.09 | 0.43 |
| | 2 | 99.75 | 98.75 | 0.04 | 0.12 | 0.22 |
| | 3 | 99.72 | 97.04 | 0.15 | 0.12 | 0.53 |
| | 4 | 98.13 | 97.70 | 0.12 | 0.10 | 0.28 |
| | 5 | 99.09 | 96.92 | 0.16 | 0.14 | 0.36 |
| | Average | 99.32 | 97.56 | 0.12 | 0.11 | 0.36 |
| | Roof | Completeness (%) | Correctness (%) | RMSE_E (m) | RMSE_N (m) | RMSE_H (m) |
|---|---|---|---|---|---|---|
| Initial boundaries | 1 | 93.50 | 95.50 | 0.42 | 0.33 | 0.93 |
| | 2 | 93.01 | 97.73 | 0.28 | 0.29 | 1.46 |
| | Average | 93.26 | 96.62 | 0.35 | 0.31 | 1.20 |
| Extracted boundaries | 1 | 94.46 | 97.96 | 0.31 | 0.28 | 0.91 |
| | 2 | 97.86 | 94.83 | 0.29 | 0.27 | 1.34 |
| | Average | 96.16 | 96.40 | 0.30 | 0.28 | 1.13 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ywata, M.S.Y.; Dal Poz, A.P.; Shimabukuro, M.H.; de Oliveira, H.C. Snake-Based Model for Automatic Roof Boundary Extraction in the Object Space Integrating a High-Resolution Aerial Images Stereo Pair and 3D Roof Models. Remote Sens. 2021, 13, 1429. https://doi.org/10.3390/rs13081429