Semantic Segmentation of Airborne LiDAR Data in Maya Archaeology
Abstract
1. Introduction
2. Acquisition and Processing of LiDAR-Derived Terrain Models in Archaeology
2.1. Using Airborne LiDAR in a Tropical Environment
2.2. Overview of Deep Learning Methods for Semantic Segmentation
2.3. Deep Learning Methods for LiDAR-Derived Terrain Models in Archaeology
3. Uaxactun Data Set
4. Semantic Segmentation of Maya Ruins by CNNs
- Object mask creation.
- CNN input preparation.
- CNN usage, consisting of: (a) CNN training, (b) CNN output processing, and (c) thresholding.
- Semantic segmentation quality evaluation.
4.1. Object Mask Creation
- Structures, containing buildings and the platforms the buildings are constructed on.
- Mounds, containing buildings only.
- The input series is split into multiple polygonal chains defined by sequentially connected coordinates. Each occurrence of ‘NaN’ in the series is treated as the start of a new polygonal chain (Figure 5a).
- Each polygonal chain is processed as follows (see the code sketch after this list):
  (a) If the polygonal chain contains 3 or more vertices and the first and last vertices are identical, a polygon is produced.
  (b) If the polygonal chain contains 3 or more vertices and the first and last vertices are not identical, the last vertex is connected to the nearest vertex of the chain, creating a polygon.
  (c) If the polygonal chain contains fewer than 3 vertices, it is discarded.
- This process yields zero, one, or multiple 2D polygons representing the output structure. The structure is then reconstructed as the union of the areas enclosed by the polygons (Figure 5b).
- The areas delimited by the vertices of the individual polygons are filled (Figure 5c).
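A minimal sketch of these rules, assuming the coordinate series arrives as two NaN-delimited arrays and the mask is rasterized on the DEM pixel grid; matplotlib's `Path` is used here for point-in-polygon filling, though the authors' actual tooling is not specified:

```python
import numpy as np
from matplotlib.path import Path

def series_to_mask(xs, ys, shape):
    """Split a NaN-delimited coordinate series into polygonal chains,
    close each chain into a polygon (rules a-c above), and fill the
    union of the polygon areas as a boolean raster mask."""
    chains, chain = [], []
    for x, y in zip(xs, ys):                  # split at NaN separators
        if np.isnan(x) or np.isnan(y):
            if chain:
                chains.append(chain)
            chain = []
        else:
            chain.append((float(x), float(y)))
    if chain:
        chains.append(chain)

    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.column_stack([xx.ravel(), yy.ravel()])  # (x, y) per pixel
    for chain in chains:
        if len(chain) < 3:                    # rule (c): discard
            continue
        if chain[0] == chain[-1]:             # rule (a): already closed
            poly = chain[:-1]
        else:                                 # rule (b): close via nearest vertex
            last = np.array(chain[-1])
            d = ((np.array(chain[:-1]) - last) ** 2).sum(axis=1)
            poly = chain[int(np.argmin(d)):]  # cycle: nearest -> ... -> last
        if len(poly) < 3:
            continue
        inside = Path(poly).contains_points(pts).reshape(shape)
        mask |= inside                        # union of filled polygon areas
    return mask
```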
4.2. CNN Input Preparation
- The minimal altitude (in m a.s.l.) within the tile was subtracted from all values.
- The result was divided by a normalization constant (made concrete in the sketch below).
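A one-liner makes the two steps concrete; the value of `NORM_CONST` is a placeholder, as the paper's actual constant is not reproduced above:

```python
import numpy as np

NORM_CONST = 30.0  # hypothetical value; the paper's constant is not given here

def normalize_tile(tile_masl: np.ndarray) -> np.ndarray:
    """Shift a DEM tile (metres above sea level) to a zero minimum,
    then scale by the fixed normalization constant."""
    return (tile_masl - tile_masl.min()) / NORM_CONST
```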
4.3. U-Net CNN Usage
Algorithm 1: Assembling U-Net predictions (the nested while-loop pseudocode is not reproduced here).
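Since the pseudocode is not reproduced, the following is only a minimal sketch of tile-wise assembly, assuming a Keras-style `model.predict`, square tiles, and placeholder tile/stride values. Keeping the raw accumulated sum (rather than an average) is also an assumption, though it is consistent with the binarization thresholds around 9-10 reported in the threshold table below.

```python
import numpy as np

def assemble_predictions(dem, model, tile=512, stride=128):
    """Slide a window over the DEM, predict each tile with the U-Net,
    and accumulate per-pixel predictions into one output matrix
    (border remainders are ignored for brevity)."""
    h, w = dem.shape
    out = np.zeros((h, w), dtype=np.float32)
    y = 0
    while y + tile <= h:                  # nested loops over tile positions
        x = 0
        while x + tile <= w:
            patch = dem[y:y + tile, x:x + tile, np.newaxis]
            out[y:y + tile, x:x + tile] += model.predict(patch[np.newaxis])[0, :, :, 0]
            x += stride
        y += stride
    return out  # smoothed and/or thresholded downstream
```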
- Classification accuracy (A, maximization).
- The number of misclassified pixels farther than 10 m from true positives, over all misclassified pixels (MOR10R, minimization); a sketch of this metric follows the list.
- Intersection over Union (IoU, maximization).
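A sketch of MOR10R under stated assumptions: `pred` and `truth` are boolean rasters on the DEM grid, the 0.5 m cell size is a placeholder, and a Euclidean distance transform stands in for whatever distance computation the authors used.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mor10r(pred, truth, cell_size=0.5):
    """Fraction of misclassified pixels lying farther than 10 m from any
    true-positive pixel (MOR10R, to be minimized). Assumes at least one
    true-positive pixel exists."""
    mis = pred != truth                             # all misclassified pixels
    tp = pred & truth                               # true-positive pixels
    dist = distance_transform_edt(~tp) * cell_size  # metres to nearest TP
    return (mis & (dist > 10.0)).sum() / max(mis.sum(), 1)
```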
- Preparing the training set.
- Training the U-Net CNN.
- Finding optimal thresholds for binary segmentation.
- Preparing novel inputs from the testing area.
- Evaluating the inputs to produce the output matrix.
- Optional: smoothing the output matrix.
- Thresholding: producing the binary segmentation (see the sketch after this list).
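A minimal sketch of the smoothing and thresholding steps; the 30 × 30 box filter mirrors the smoothing window reported in the experiments, while `uniform_filter` and the grid search over candidate thresholds are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def binarize(output, threshold, smooth=None):
    """Optionally box-smooth the assembled output matrix, then threshold
    it into a binary segmentation (e.g. smooth=30 for a 30 x 30 window)."""
    if smooth is not None:
        output = uniform_filter(output, size=smooth)
    return output > threshold

def best_threshold(output, truth, candidates, score):
    """Grid-search the threshold on validation data, keeping the value
    with the best score (negate the score for minimization criteria
    such as MOR10R)."""
    return max(candidates, key=lambda t: score(binarize(output, t), truth))
```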
4.4. Mask R-CNN Usage
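The body of this subsection is not reproduced above. As a hedged illustration, assuming the open-source Matterport Keras implementation of Mask R-CNN (github.com/matterport/Mask_RCNN) with a model already built in inference mode, the predicted instance masks can be collapsed into one binary mask so the output is comparable to the U-Net segmentation:

```python
import numpy as np

def semantic_from_instances(model, image):
    """Merge Mask R-CNN instance masks into one binary mask so the output
    can be scored with the same pixel-wise metrics as the U-Net.
    `model` is assumed to be a Matterport MaskRCNN in inference mode."""
    r = model.detect([image], verbose=0)[0]   # 'masks': (H, W, N) booleans
    if r['masks'].size == 0:                  # no instances detected
        return np.zeros(image.shape[:2], dtype=bool)
    return r['masks'].any(axis=-1)            # union over instances
```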
4.5. Semantic Segmentation Quality Evaluation
- A = accuracy
- TPR = true positive rate (sensitivity, recall)
- TNR = true negative rate (specificity)
- BA = balanced accuracy
- PPV = positive predictive value (precision)
- NPV = negative predictive value
- F1 = harmonic mean of precision and sensitivity
- MCC = Matthews correlation coefficient
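For reference, the standard confusion-matrix definitions of these metrics, with TP, TN, FP, FN the true/false positive/negative pixel counts (textbook forms, not transcribed from the paper):

$$A=\frac{TP+TN}{TP+TN+FP+FN},\qquad TPR=\frac{TP}{TP+FN},\qquad TNR=\frac{TN}{TN+FP},\qquad BA=\frac{TPR+TNR}{2}$$

$$PPV=\frac{TP}{TP+FP},\qquad NPV=\frac{TN}{TN+FN},\qquad F_1=\frac{2\,PPV\cdot TPR}{PPV+TPR}$$

$$MCC=\frac{TP\cdot TN-FP\cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$$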
5. Experiments
5.1. Segmentation of Structures
5.2. Segmentation of Mounds
6. Discussion
- If the labeling was done without ground-truthing, ideally, several experts should evaluate each object and indicate their confidence in the decision. The confidence level is valuable information when calculating the CNN loss.
- If the presence of an object was verified by ground-truthing, this should be indicated.
- When labeling objects of ancient construction activity, besides the Maler-style outline, the area covering the present-day features belonging to the object should also be indicated.
- When labeling objects such as looting trenches, agricultural fields, etc., the entire area of the objects’ features should be indicated (for example, both the trench and the debris hill).
7. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Pixel-wise segmentation metrics, structures (U-Net vs. Mask R-CNN):

| Metric | U-Net | Mask R-CNN |
|---|---|---|
| A | 0.9875 | 0.9881 |
| BA | 0.7958 | 0.7484 |
| TPR | 0.5973 | 0.5002 |
| TNR | 0.9944 | 0.9966 |
| PPV | 0.6524 | 0.7246 |
| NPV | 0.9929 | 0.9912 |
| F1 | 0.6236 | 0.5918 |
| MOR10R | 0.4894 | 0.5774 |
| IoU (structures) | 0.4531 | 0.4203 |
| IoU (background) | 0.9874 | 0.9879 |
| IoU_ave | 0.7202 | 0.7041 |
| MCC | 0.6179 | 0.6015 |
Results by structure size:

| Model | Small | Medium | Large | Total |
|---|---|---|---|---|
| U-Net | 0.4318 | 0.7388 | 0.9808 | 0.6054 |
| Mask R-CNN | 0.4155 | 0.6408 | 0.9615 | 0.5503 |
Predicted vs. optimal binarization thresholds for the assembled output matrix:

| Optimized for | Predicted Threshold | IoU with Predicted | Optimal Threshold | IoU with Optimal |
|---|---|---|---|---|
| Accuracy | 10.012 | 0.7126 | 10.3429 | 0.7059 |
| MOR10R | 9.2119 | 0.7202 | 9.2116 | 0.7202 |
| IoU | 9.2119 | 0.7202 | 9.0501 | 0.7204 |
Pixel-wise segmentation metrics, mounds (three U-Net threshold criteria and Mask R-CNN):

| Metric | U-Net | U-Net | U-Net | M-RCNN |
|---|---|---|---|---|
| Smoothing | none | none | 30 × 30 | none |
| Criterion | IoU_ave | Accuracy | MOR10R | n/a |
| A | 0.9966 | 0.9969 | 0.9897 | 0.9967 |
| BA | 0.7708 | 0.7408 | 0.8208 | 0.5906 |
| TPR | 0.5436 | 0.4827 | 0.6506 | 0.1813 |
| TNR | 0.9983 | 0.9988 | 0.9910 | 0.9998 |
| PPV | 0.5515 | 0.6143 | 0.2176 | 0.7891 |
| NPV | 0.9982 | 0.9980 | 0.9986 | 0.9969 |
| F1 | 0.5473 | 0.5406 | 0.3256 | 0.2949 |
| MOR10R | 0.4705 | 0.4752 | 0.3194 | 0.7558 |
| IoU (mounds) | 0.3768 | 0.3704 | 0.1944 | 0.1729 |
| IoU (background) | 0.9967 | 0.9969 | 0.9897 | 0.9967 |
| IoU_ave | 0.6868 | 0.6836 | 0.5921 | 0.5848 |
| MCC | 0.5456 | 0.5430 | 0.3721 | 0.3773 |
Results by object size, mounds:

| Model | Smoothing | Criterion | Small | Medium | Large | Total | OPT | OCP | OFP |
|---|---|---|---|---|---|---|---|---|---|
| U-Net | none | IoU | 0.5577 | 0.8056 | 0.9305 | 0.6882 | 1509 | 978 | 531 |
| U-Net | none | Accuracy | 0.5183 | 0.7978 | 0.9167 | 0.6643 | 1454 | 944 | 510 |
| U-Net | 30 × 30 | MOR10R | 0.5254 | 0.7445 | 0.8611 | 0.6411 | 495 | 911 | 127 |
| M-RCNN | none | n/a | 0.1167 | 0.3384 | 0.5714 | 0.2585 | 440 | 389 | 37 |