Learning Color Distributions from Bitemporal Remote Sensing Images to Update Existing Building Footprints
Abstract
1. Introduction
- Image color translation is performed on bitemporal remote sensing images using CycleGAN, which smoothly translates the color distribution from the source domain to the target domain in an unsupervised manner.
- Prior information from a historical building database is combined with a UNet(EfficientNet) segmentation network to update buildings (additions and demolitions) without relabeling (see the sketch after this list).
- We propose a post-processing update strategy that replaces the segmentation of unchanged regions with strictly accurate historical labels, addressing the problem of inaccurate prediction edges.
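As a concrete illustration of the second point, the following minimal sketch shows how the segmentation stage could be assembled with the off-the-shelf segmentation_models_pytorch package. The EfficientNet-b1 encoder matches the UNet(EfficientNet-b1) configuration reported in the experiments, while the ImageNet pre-training, the 512 × 512 tile size, and the 0.5 decision threshold are assumptions for illustration rather than settings taken from the paper.

```python
import torch
import segmentation_models_pytorch as smp

# Hedged sketch: a UNet decoder on an EfficientNet-b1 encoder for binary
# building segmentation of the (color-translated) post-temporal image.
model = smp.Unet(
    encoder_name="efficientnet-b1",  # backbone named in the paper's UNet(EfficientNet-b1)
    encoder_weights="imagenet",      # assumption: ImageNet pre-trained encoder weights
    in_channels=3,                   # RGB remote sensing tile
    classes=1,                       # single "building" channel
)
model.eval()

# Placeholder tensor standing in for a CycleGAN-translated post-temporal tile.
x = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    logits = model(x)                        # shape: (1, 1, 512, 512)
    building_mask = logits.sigmoid() > 0.5   # boolean building mask
```

In the full pipeline, a trained CycleGAN generator would first be applied to the post-temporal tile so that its colors match the distribution the segmentation network was trained on.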
2. Methods
2.1. Image Color Translation
2.2. Semantic Segmentation
2.3. Post-Processing Update Strategy
Algorithm 1 Post-processing update strategy
Step 1: Transform the pre-temporal label L and the post-temporal prediction P into the polygon sets S_L = {l_1, …, l_m} and S_P = {p_1, …, p_n}, respectively, and set the IoU threshold τ.
Step 2: Calculate and update: for each p in S_P, for each l in S_L, if IoU(p, l) ≥ τ, replace p with the historical label polygon l; otherwise, keep the prediction p.
Step 3: Convert the updated set of polygons into pixel-level update results.
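A minimal Python sketch of Step 2 is given below, using shapely polygons. The greedy first-match replacement and the IoU computation are simplifying assumptions rather than necessarily the paper's exact implementation, and the polygon/raster conversions of Steps 1 and 3 are assumed to happen elsewhere.

```python
from shapely.geometry import Polygon

def post_process_update(label_polys, pred_polys, tau):
    """Replace predicted polygons that match a historical label polygon
    (IoU >= tau, i.e., an unchanged building) by that label polygon;
    keep all other predictions (new or changed buildings) as they are.
    Historical polygons with no match (demolished buildings) are dropped."""
    updated = []
    for p in pred_polys:
        match = None
        for l in label_polys:
            union = p.union(l).area
            iou = p.intersection(l).area / union if union > 0 else 0.0
            if iou >= tau:
                match = l          # unchanged region: trust the accurate historical label
                break
        updated.append(match if match is not None else p)
    return updated

# Toy usage with hypothetical unit-square footprints:
pred = [Polygon([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])]
hist = [Polygon([(0.05, 0.0), (1.05, 0.0), (1.05, 1.0), (0.05, 1.0)])]
updated = post_process_update(hist, pred, tau=0.6)
print(updated[0].equals(hist[0]))  # True: the prediction is replaced by the label
```

With τ close to 1 almost nothing is replaced and the result degenerates to the raw prediction, while with τ close to 0 every overlapping polygon is replaced; this trade-off is consistent with the threshold tables reported later in the article.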
3. Experiments and Results Analysis
3.1. Datasets and Experimental Details
- (1) Wuhan University Building Change Detection Dataset [53]
- (2) Beijing Huairou District Land Survey Dataset
3.2. Visualization of Image Color Translation
3.3. Numerical Results and Semantic Segmentation Visualization
3.4. Effectiveness Analysis of the Post-Processing Update Strategy
4. Discussion
4.1. Ablation Study
4.2. Thresholds in the Post-Processing Update Strategy
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Huang, X.; Cao, Y.; Li, J. An Automatic Change Detection Method for Monitoring Newly Constructed Building Areas Using Time-Series Multi-View High-Resolution Optical Satellite Images. Remote Sens. Environ. 2020, 244, 111802.
- Guo, H.; Shi, Q.; Marinoni, A.; Du, B.; Zhang, L. Deep Building Footprint Update Network: A Semi-Supervised Method for Updating Existing Building Footprint from Bi-Temporal Remote Sensing Images. Remote Sens. Environ. 2021, 264, 112589.
- Zheng, H.W.; Shen, G.Q.; Wang, H. A Review of Recent Studies on Sustainable Urban Renewal. Habitat Int. 2014, 41, 272–279.
- Cheng, J.; Mao, C.; Huang, Z.; Hong, J.; Liu, G. Implementation Strategies for Sustainable Renewal at the Neighborhood Level with the Goal of Reducing Carbon Emission. Sustain. Cities Soc. 2022, 85, 104047.
- Stiller, D.; Stark, T.; Wurm, M.; Dech, S.; Taubenböck, H. Large-Scale Building Extraction in Very High-Resolution Aerial Imagery Using Mask R-CNN. In Proceedings of the 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 22–24 May 2019; pp. 1–4.
- Bouziani, M.; Goïta, K.; He, D.-C. Automatic Change Detection of Buildings in Urban Environment from Very High Spatial Resolution Images Using Existing Geodatabase and Prior Knowledge. ISPRS J. Photogramm. Remote Sens. 2010, 65, 143–153.
- Yang, G.; Zhang, Q.; Zhang, G. EANet: Edge-Aware Network for the Extraction of Buildings from Aerial Images. Remote Sens. 2020, 12, 2161.
- Liu, P.; Liu, X.; Liu, M.; Shi, Q.; Yang, J.; Xu, X.; Zhang, Y. Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network. Remote Sens. 2019, 11, 830.
- Zheng, J.; Tian, Y.; Yuan, C.; Yin, K.; Zhang, F.; Chen, F.; Chen, Q. MDESNet: Multitask Difference-Enhanced Siamese Network for Building Change Detection in High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 3775.
- Deng, Y.; Chen, J.; Yi, S.; Yue, A.; Meng, Y.; Chen, J.; Zhang, Y. Feature Guided Multitask Change Detection Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9667–9679.
- Trenčanová, B.; Proença, V.; Bernardino, A. Development of Semantic Maps of Vegetation Cover from UAV Images to Support Planning and Management in Fine-Grained Fire-Prone Landscapes. Remote Sens. 2022, 14, 1262.
- Abubakar, F.M. Study of Image Segmentation Using Thresholding Technique on a Noisy Image. Int. J. Sci. Res. 2013, 2, 49–51.
- Chakraborty, S. An Advanced Approach to Detect Edges of Digital Images for Image Segmentation. In Applications of Advanced Machine Intelligence in Computer Vision and Object Recognition: Emerging Research and Opportunities; IGI Global: Hershey, PA, USA, 2020; pp. 90–118.
- Raja, N.; Fernandes, S.L.; Dey, N.; Satapathy, S.C.; Rajinikanth, V. Contrast Enhanced Medical MRI Evaluation Using Tsallis Entropy and Region Growing Segmentation. J. Ambient Intell. Humaniz. Comput. 2018, 1–12.
- Ke, L.; Xiong, Y.; Gang, W. Remote Sensing Image Classification Method Based on Superpixel Segmentation and Adaptive Weighting K-Means. In Proceedings of the 2015 International Conference on Virtual Reality and Visualization (ICVRV), Xiamen, China, 17–18 October 2015; pp. 40–45.
- Bouman, C.A.; Shapiro, M. A Multiscale Random Field Model for Bayesian Image Segmentation. IEEE Trans. Image Process. 1994, 3, 162–177.
- Fan, J.; Yau, D.K.; Elmagarmid, A.K.; Aref, W.G. Automatic Image Segmentation by Integrating Color-Edge Extraction and Seeded Region Growing. IEEE Trans. Image Process. 2001, 10, 1454–1466.
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
- Shang, R.; Zhang, J.; Jiao, L.; Li, Y.; Marturi, N.; Stolkin, R. Multi-Scale Adaptive Feature Fusion Network for Semantic Segmentation in Remote Sensing Images. Remote Sens. 2020, 12, 872.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022.
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090.
- Bazi, Y.; Bashmal, L.; Rahhal, M.M.A.; Dayil, R.A.; Ajlan, N.A. Vision Transformers for Remote Sensing Image Classification. Remote Sens. 2021, 13, 516.
- Zhang, J.; Zhao, H.; Li, J. TRS: Transformers for Remote Sensing Scene Classification. Remote Sens. 2021, 13, 4143.
- Lei, S.; Shi, Z.; Mo, W. Transformer-Based Multistage Enhancement for Remote Sensing Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11.
- Li, X.; Xu, F.; Xia, R.; Li, T.; Chen, Z.; Wang, X.; Xu, Z.; Lyu, X. Encoding Contextual Information by Interlacing Transformer and Convolution for Remote Sensing Imagery Semantic Segmentation. Remote Sens. 2022, 14, 4065.
- Tasar, O.; Happy, S.L.; Tarabalka, Y.; Alliez, P. ColorMapGAN: Unsupervised Domain Adaptation for Semantic Segmentation Using Color Mapping Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7178–7193.
- Yu, X.; Fan, J.; Zhang, M.; Liu, Q.; Li, Y.; Zhang, D.; Zhou, Y. Relative Radiation Correction Based on CycleGAN for Visual Perception Improvement in High-Resolution Remote Sensing Images. IEEE Access 2021, 9, 106627–106640.
- Zheng, Z.; Tang, X.; Yue, Q.; Bo, A.; Lin, Y. Color Difference Optimization Method for Multi-Source Remote Sensing Image Processing. IOP Conf. Ser. Earth Environ. Sci. 2020, 474, 042030.
- Yang, X.; Lo, C.P. Relative Radiometric Normalization Performance for Change Detection from Multi-Date Satellite Images. Photogramm. Eng. Remote Sens. 2000, 66, 967–980.
- Reinhard, E.; Adhikhmin, M.; Gooch, B.; Shirley, P. Color Transfer between Images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
- Schott, J.R.; Salvaggio, C.; Volchok, W.J. Radiometric Scene Normalization Using Pseudoinvariant Features. Remote Sens. Environ. 1988, 26, 1–16.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144.
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
- Liu, M.-Y.; Tuzel, O. Coupled Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2016, 29.
- Liu, M.-Y.; Breuel, T.; Kautz, J. Unsupervised Image-to-Image Translation Networks. Adv. Neural Inf. Process. Syst. 2017, 30.
- Huang, X.; Liu, M.-Y.; Belongie, S.; Kautz, J. Multimodal Unsupervised Image-to-Image Translation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 172–189.
- Lee, H.-Y.; Tseng, H.-Y.; Huang, J.-B.; Singh, M.; Yang, M.-H. Diverse Image-to-Image Translation via Disentangled Representations. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 35–51.
- Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1857–1865.
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
- Alami Mejjati, Y.; Richardt, C.; Tompkin, J.; Cosker, D.; Kim, K.I. Unsupervised Attention-Guided Image-to-Image Translation. Adv. Neural Inf. Process. Syst. 2018, 31.
- Xue, L.I.; Li, Z.; Qingdong, W.; Haibin, A.I. Multi-Temporal Remote Sensing Imagery Semantic Segmentation Color Consistency Adversarial Network. Acta Geod. Cartogr. Sin. 2020, 49, 1473.
- Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J.-Y.; Isola, P.; Saenko, K.; Efros, A.; Darrell, T. CyCADA: Cycle-Consistent Adversarial Domain Adaptation. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 1989–1998.
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans. Med. Imaging 2019, 39, 1856–1867.
- He, Y.R.; He, S.; Kandel, M.E.; Lee, Y.J.; Hu, C.; Sobh, N.; Anastasio, M.A.; Popescu, G. Cell Cycle Stage Classification Using Phase Imaging with Computational Specificity. ACS Photonics 2022, 9, 1264–1273.
- Le Duy Huynh, N.B. A U-Net++ with Pre-Trained EfficientNet Backbone for Segmentation of Diseases and Artifacts in Endoscopy Images and Videos. Available online: https://ceur-ws.org/Vol-2595/endoCV2020_paper_id_11.pdf (accessed on 14 October 2022).
- Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
- Ji, S.; Wei, S.; Lu, M. Fully Convolutional Networks for Multisource Building Extraction from an Open Aerial and Satellite Imagery Data Set. IEEE Trans. Geosci. Remote Sens. 2018, 57, 574–586.
- Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Instance Normalization: The Missing Ingredient for Fast Stylization. arXiv 2016, arXiv:1607.08022.
- Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least Squares Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2794–2802.
- Shrivastava, A.; Pfister, T.; Tuzel, O.; Susskind, J.; Wang, W.; Webb, R. Learning from Simulated and Unsupervised Images through Adversarial Training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2107–2116.
Method | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
Baseline | 0.8273 | 0.8775 | 0.9353 | 0.9635 | 0.9055 |
Histogram matching | 0.9331 | 0.9737 | 0.9572 | 0.9871 | 0.9654 |
Reinhard method | 0.9282 | 0.9671 | 0.9584 | 0.9861 | 0.9627 |
DRIT | 0.9194 | 0.9601 | 0.9559 | 0.9843 | 0.9580 |
UNIT | 0.9310 | 0.9665 | 0.9620 | 0.9867 | 0.9643 |
CycleGAN | 0.9366 | 0.9697 | 0.9647 | 0.9878 | 0.9672 |
Method | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
Baseline | 0.6428 | 0.7134 | 0.8665 | 0.9291 | 0.7825 |
Histogram matching | 0.6720 | 0.8093 | 0.7984 | 0.9426 | 0.8038 |
Reinhard method | 0.7108 | 0.8370 | 0.8250 | 0.9506 | 0.8310 |
DRIT | 0.7777 | 0.8705 | 0.8794 | 0.9630 | 0.8749 |
UNIT | 0.8036 | 0.8789 | 0.9036 | 0.9675 | 0.8911 |
CycleGAN | 0.8121 | 0.8806 | 0.9125 | 0.9689 | 0.8963 |
Method | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
PSPNet | 0.9153 | 0.9642 | 0.9474 | 0.9836 | 0.9557 |
DeepLabV3 | 0.9229 | 0.9685 | 0.9514 | 0.9851 | 0.9599 |
OCRNet | 0.9301 | 0.9667 | 0.9609 | 0.9865 | 0.9638 |
Segformer | 0.9133 | 0.9651 | 0.9445 | 0.9832 | 0.9547 |
SwinTransformer | 0.9272 | 0.9629 | 0.9615 | 0.9859 | 0.9622 |
UNet(ResNet50) | 0.9316 | 0.9655 | 0.9636 | 0.9868 | 0.9646 |
UNet(EfficientNet-b1) | 0.9366 | 0.9697 | 0.9647 | 0.9878 | 0.9672 |
Method | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
PSPNet | 0.7483 | 0.8535 | 0.8586 | 0.9575 | 0.8560 |
DeepLabV3 | 0.7678 | 0.9019 | 0.8377 | 0.9627 | 0.8686 |
OCRNet | 0.7846 | 0.8780 | 0.8805 | 0.9644 | 0.8793 |
Segformer | 0.7478 | 0.8420 | 0.8698 | 0.9568 | 0.8557 |
SwinTransformer | 0.7814 | 0.7814 | 0.9025 | 0.9628 | 0.8773 |
UNet(ResNet50) | 0.7637 | 0.8473 | 0.8856 | 0.9596 | 0.8660 |
UNet(EfficientNet-b1) | 0.8121 | 0.8806 | 0.9125 | 0.9689 | 0.8963 |
Model Backbone | PSPNet Res50 (Depth = 3) | DeepLabV3 Res50 | OCRNet HR18 | Segformer B2 | SwinT S | UNet Res50 | UNet Eff-b1 |
---|---|---|---|---|---|---|---|
Params (M) | 2.238 | 39.634 | 12.026 | 2.478 | 48.746 | 32.521 | 2.303 |
FLOPs (G) | 0.743 | 10.258 | 3.294 | 0.381 | 15.761 | 2.677 | 0.637 |
Threshold (τ) | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
0 | 0.8490 | 0.9319 | 0.9051 | 0.9699 | 0.9183 |
0.2 | 0.8537 | 0.9359 | 0.9067 | 0.9709 | 0.9210 |
0.4 | 0.8555 | 0.9375 | 0.9072 | 0.9714 | 0.9221 |
0.6 | 0.8576 | 0.9396 | 0.9077 | 0.9718 | 0.9233 |
0.8 | 0.8622 | 0.9435 | 0.9091 | 0.9728 | 0.9260 |
1 | 0.9363 | 0.9692 | 0.9649 | 0.9877 | 0.9671 |
Threshold (τ) | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
0 | 0.9256 | 0.9477 | 0.9754 | 0.9884 | 0.9613 |
0.2 | 0.9272 | 0.9581 | 0.9663 | 0.9888 | 0.9622 |
0.4 | 0.9201 | 0.9609 | 0.9559 | 0.9877 | 0.9584 |
0.6 | 0.9104 | 0.9618 | 0.9445 | 0.9863 | 0.9531 |
0.8 | 0.8937 | 0.9603 | 0.9279 | 0.9837 | 0.9438 |
1 | 0.8120 | 0.8784 | 0.9148 | 0.9688 | 0.8962 |
Methods | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
Baseline | 0.8273 | 0.8775 | 0.9353 | 0.9635 | 0.9055 |
Baseline + CycleGAN | 0.9366 | 0.9697 | 0.9647 | 0.9878 | 0.9672 |
Baseline + CycleGAN + Post-Processing Update Strategy | 0.9363 | 0.9692 | 0.9649 | 0.9877 | 0.9671 |
Methods | IoU | Precision | Recall | Accuracy | F1 |
---|---|---|---|---|---|
Baseline | 0.6428 | 0.7134 | 0.8665 | 0.9291 | 0.7825 |
Baseline + CycleGAN | 0.8121 | 0.8806 | 0.9125 | 0.9689 | 0.8963 |
Baseline + CycleGAN + Post-Processing Update Strategy | 0.9272 | 0.9581 | 0.9663 | 0.9888 | 0.9622 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).