High-Quality Damaged Building Instance Segmentation Based on Improved Mask Transfiner Using Post-Earthquake UAS Imagery: A Case Study of the Luding Ms 6.8 Earthquake in China
Abstract
1. Introduction
- Unlike existing damaged-building identification methods, this paper proposes a high-quality instance segmentation method for extracting damaged buildings, which accurately obtains both the location and the fine contour of each damaged building. Each polygon predicted by the proposed method closely matches the actual contour of the damaged building.
- To enhance the accuracy of collapsed building recognition, we replace standard convolution with deformable convolution in the backbone. This allows the network to capture more detailed features of irregularly shaped objects, thereby adapting to the arbitrary shapes of collapsed buildings.
- An enhanced bidirectional feature pyramid network is proposed to fuse multi-scale features. It can enhance the feature expression ability of targets of different sizes, thereby improving the model’s ability to recognize damaged buildings of different sizes.
- We propose a more lightweight Transformer sequence encoder, which improves the efficiency of global feature extraction and the refinement of target edges when processing pixels in incoherent areas.
2. Study Area and Data
2.1. Study Area
2.2. Damaged Buildings Dataset
3. Methodology
3.1. Overview of the Mask Transfiner Model
3.2. Improvement of Mask Transfiner
- Deformable Convolution Feature Extraction Module (as shown in Figure 4(①)): Improvements were made to the CNN component of the base detector in Mask Transfiner by replacing standard convolutions with deformable convolutions [42,43]. This enhancement allows the network to capture more detailed features of irregularly shaped targets, making it better suited for detecting collapsed building shapes with arbitrary forms.
- Multi-Scale Feature Extraction and Fusion Module (as shown in Figure 4(②)): The FPN component of the base detector in Mask Transfiner was improved by proposing an enhanced bidirectional feature pyramid network (BiFPN) based on Path Aggregation Network (PANet) [44]. This modification facilitates multi-scale feature fusion, improving the model’s ability to represent features of objects with various scales and enhancing its capability to recognize damaged buildings of different sizes.
- Lightweight Transformer Global Feature Refinement Module (as shown in Figure 4(③)): The Transformer sequence encoder in the refinement stage was replaced with a lightweight Transformer sequence encoder. This improvement enhances the efficiency of global feature extraction and the refinement of object boundaries when processing pixels in incoherent regions.
3.2.1. Deformable Convolution Feature Extraction Module
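The core idea of this module (Figure 4(①)) is that each kernel tap samples the input at its regular grid position plus a learned 2D offset, with bilinear interpolation handling fractional coordinates [42,43]. The following is a minimal NumPy sketch of that sampling rule for a single 3×3 kernel at one output location; it is illustrative only (the paper replaces the backbone's standard convolutions with full deformable convolution layers), and all function names here are hypothetical.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample a 2D array at fractional coords (y, x), zero-padded."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                val += img[yy, xx] * (1 - abs(y - yy)) * (1 - abs(x - xx))
    return val

def deform_conv_at(img, weights, offsets, cy, cx):
    """One 3x3 deformable-convolution output at centre (cy, cx).

    offsets: (9, 2) learned (dy, dx) shifts added to the regular sampling grid;
    with all-zero offsets this reduces to a standard 3x3 convolution.
    """
    out, k = 0.0, 0
    for ky in (-1, 0, 1):
        for kx in (-1, 0, 1):
            dy, dx = offsets[k]
            out += weights[ky + 1, kx + 1] * bilinear(img, cy + ky + dy, cx + kx + dx)
            k += 1
    return out
```

With zero offsets the result equals an ordinary convolution; non-zero offsets let the effective receptive field deform toward the irregular outlines of collapsed buildings.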
3.2.2. Multi-Scale Feature Extraction and Fusion Module
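The enhanced BiFPN in this module (Figure 4(②)) fuses multi-scale features with learnable per-input weights. The standard formulation is EfficientDet's fast normalized fusion [81], O = Σᵢ wᵢ·Iᵢ / (ε + Σⱼ wⱼ) with ReLU-constrained weights. A minimal sketch under the assumption that this formulation is used (the paper's exact fusion scheme may differ):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps with learnable non-negative weights.

    Implements O = sum(w_i * I_i) / (eps + sum(w_j)), the fast normalized
    fusion from EfficientDet; ReLU keeps the weights non-negative so the
    result stays a convex-like combination of the inputs.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU
    num = sum(wi * f for wi, f in zip(w, features))
    return num / (eps + w.sum())
```

In a BiFPN node, the `features` would be the resized feature maps arriving from the top-down and bottom-up paths at one pyramid level, and the weights are trained jointly with the network.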
3.2.3. Lightweight Transformer Global Feature Refinement Module
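At the heart of the Transformer sequence encoder is scaled dot-product self-attention over the embeddings of incoherent-region pixels [66]. A single-head NumPy sketch of that operation follows; the lightweight encoder proposed here presumably reduces this block's cost (e.g., fewer layers or smaller dimensions), so treat this as the generic attention mechanism rather than the paper's specific design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (N, d) sequence of pixel embeddings from incoherent regions.
    Returns the attended features Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # (N, N) attention weights, rows sum to 1
    return A @ V
```

Because every pixel attends to every other pixel in the sequence, this step supplies the global context needed to refine fragmented building boundaries, at O(N²) cost in sequence length, which is what motivates a lightweight variant.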
3.3. Implementation Details
3.4. Evaluation Metrics
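The experiments below report COCO-style AP metrics [74], which are built on mask intersection-over-union. A minimal IoU sketch (the full AP computation additionally matches predictions to ground truth, sweeps IoU thresholds from 0.50 to 0.95, and averages precision over recall):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two binary masks of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0
```

A predicted mask counts as a true positive at threshold t (e.g., t = 0.5 for AP50, t = 0.75 for AP75) when its IoU with a ground-truth mask is at least t.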
4. Results
4.1. Comparison of Model Performance
4.2. Ablation Study
4.3. Feature Maps Visualization
4.4. Applicability of DB-Transfiner Model
4.5. Generalization Capability of the Model in Yangbi Earthquake
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, Q.; Mou, L.; Sun, Y.; Hua, Y.; Shi, Y.; Zhu, X.X. A Review of Building Extraction from Remote Sensing Imagery: Geometrical Structures and Semantic Attributes. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4702315. [Google Scholar] [CrossRef]
- Valentijn, T.; Margutti, J.; van den Homberg, M.; Laaksonen, J. Multi-Hazard and Spatial Transferability of a CNN for Automated Building Damage Assessment. Remote Sens. 2020, 12, 2839. [Google Scholar] [CrossRef]
- Nedjati, A.; Vizvari, B.; Izbirak, G. Post-earthquake response by small UAV helicopters. Nat. Hazards 2016, 80, 1669–1688. [Google Scholar] [CrossRef]
- Xiong, C.; Li, Q.S.; Lu, X.Z. Automated regional seismic damage assessment of buildings using an unmanned aerial vehicle and a convolutional neural network. Autom. Constr. 2020, 109, 102994. [Google Scholar] [CrossRef]
- Zhang, R.; Li, H.; Duan, K.F.; You, S.C.; Liu, K.; Wang, F.T.; Hu, Y. Automatic Detection of Earthquake-Damaged Buildings by Integrating UAV Oblique Photography and Infrared Thermal Imaging. Remote Sens. 2020, 12, 2621. [Google Scholar] [CrossRef]
- Jhan, J.P.; Kerle, N.; Rau, J.Y. Integrating UAV and Ground Panoramic Images for Point Cloud Analysis of Damaged Building. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6500805. [Google Scholar] [CrossRef]
- Xie, Y.; Feng, D.; Chen, H.; Liu, Z.; Mao, W.; Zhu, J.; Hu, Y.; Baik, S.W. Damaged Building Detection from Post-Earthquake Remote Sensing Imagery Considering Heterogeneity Characteristics. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4708417. [Google Scholar] [CrossRef]
- Ge, J.; Tang, H.; Yang, N.; Hu, Y. Rapid identification of damaged buildings using incremental learning with transferred data from historical natural disaster cases. ISPRS J. Photogramm. Remote Sens. 2023, 195, 105–128. [Google Scholar] [CrossRef]
- Wang, J.; Guo, H.; Su, X.; Zheng, L.; Yuan, Q. PCDASNet: Position-Constrained Differential Attention Siamese Network for Building Damage Assessment. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5622318. [Google Scholar] [CrossRef]
- Tilon, S.; Nex, F.; Kerle, N.; Vosselman, G. Post-Disaster Building Damage Detection from Earth Observation Imagery Using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks. Remote Sens. 2020, 12, 4193. [Google Scholar] [CrossRef]
- Jing, Y.; Ren, Y.; Liu, Y.; Wang, D.; Yu, L. Automatic Extraction of Damaged Houses by Earthquake Based on Improved YOLOv5: A Case Study in Yangbi. Remote Sens. 2022, 14, 382. [Google Scholar] [CrossRef]
- Pi, Y.; Nath, N.D.; Behzadan, A.H. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inf. 2020, 43, 101009. [Google Scholar] [CrossRef]
- Wang, Y.; Feng, W.; Jiang, K.; Li, Q.; Lv, R.; Tu, J. Real-Time Damaged Building Region Detection Based on Improved YOLOv5s and Embedded System from UAV Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 4205–4217. [Google Scholar] [CrossRef]
- Hong, Z.; Zhong, H.; Pan, H.; Liu, J.; Zhou, R.; Zhang, Y.; Han, Y.; Wang, J.; Yang, S.; Zhong, C. Classification of Building Damage Using a Novel Convolutional Neural Network Based on Post-Disaster Aerial Images. Sensors 2022, 22, 5920. [Google Scholar] [CrossRef]
- Zhang, T.; Zhang, X.; Zhu, P.; Tang, X.; Li, C.; Jiao, L.; Zhou, H. Semantic Attention and Scale Complementary Network for Instance Segmentation in Remote Sensing Images. IEEE Trans. Cybern. 2022, 52, 10999–11013. [Google Scholar] [CrossRef]
- Wang, Y.; Jing, X.; Cui, L.; Zhang, C.; Xu, Y.; Yuan, J.; Zhang, Q. Geometric consistency enhanced deep convolutional encoder-decoder for urban seismic damage assessment by UAV images. Eng. Struct. 2023, 286, 116132. [Google Scholar] [CrossRef]
- Khankeshizadeh, E.; Mohammadzadeh, A.; Arefi, H.; Mohsenifar, A.; Pirasteh, S.; Fan, E.; Li, H.; Li, J. A Novel Weighted Ensemble Transferred U-Net Based Model (WETUM) for Postearthquake Building Damage Assessment from UAV Data: A Comparison of Deep Learning- and Machine Learning-Based Approaches. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4701317. [Google Scholar] [CrossRef]
- Li, X.; Yang, J.; Li, Z.; Yang, F.; Chen, Y.; Ren, J.; Duan, Y. Building Damage Detection for Extreme Earthquake Disaster Area Location from Post-Event UAV Images Using Improved SSD. In Proceedings of the IGARSS 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2674–2677. [Google Scholar]
- Hussein, B.R.; Malik, O.A.; Ong, W.H.; Slik, J.W.F. Automated Extraction of Phenotypic Leaf Traits of Individual Intact Herbarium Leaves from Herbarium Specimen Images Using Deep Learning Based Semantic Segmentation. Sensors 2021, 21, 4549. [Google Scholar] [CrossRef]
- Gu, W.; Bai, S.; Kong, L. A review on 2D instance segmentation based on deep neural networks. Image Vision Comput. 2022, 120, 104401. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Xie, E.; Sun, P.; Song, X.; Wang, W.; Liu, X.; Liang, D.; Shen, C.; Luo, P. PolarMask: Single Shot Instance Segmentation with Polar Representation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 12190–12199. [Google Scholar]
- Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. YOLACT: Real-Time Instance Segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9156–9165. [Google Scholar]
- Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; Li, L. SOLO: Segmenting Objects by Locations. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 649–665. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 213–229. [Google Scholar]
- Dong, B.; Zeng, F.; Wang, T.; Zhang, X.; Wei, Y. SOLQ: Segmenting Objects by Learning Queries. In Proceedings of the Thirty-Fifth Conference on Neural Information Processing Systems, Virtual, 6–14 December 2021; pp. 4206–4217. [Google Scholar]
- Fang, Y.; Yang, S.; Wang, X.; Li, Y.; Fang, C.; Shan, Y.; Feng, B.; Liu, W. Instances as Queries. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 6890–6899. [Google Scholar]
- He, J.; Li, P.; Geng, Y.; Xie, X. FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 23663–23672. [Google Scholar]
- Ke, L.; Danelljan, M.; Li, X.; Tai, Y.W.; Tang, C.K.; Yu, F. Mask Transfiner for High-Quality Instance Segmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 4402–4411. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar]
- Zou, Y.; Wang, X.; Wang, L.; Chen, K.; Ge, Y.; Zhao, L. A High-Quality Instance-Segmentation Network for Floating-Algae Detection Using RGB Images. Remote Sens. 2022, 14, 6247. [Google Scholar] [CrossRef]
- Yang, S.; Zheng, L.; Wu, T.; Sun, S.; Zhang, M.; Li, M.; Wang, M. High-throughput soybean pods high-quality segmentation and seed-per-pod estimation for soybean plant breeding. Eng. Appl. Artif. Intell. 2024, 129, 107580. [Google Scholar] [CrossRef]
- Panboonyuen, T.; Nithisopa, N.; Pienroj, P.; Jirachuphun, L.; Watthanasirikrit, C.; Pornwiriyakul, N. MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car Damage Instance Segmentation. arXiv 2023, arXiv:2305.04743. [Google Scholar]
- Topics on Lu County “9•16” Rescue Attack. Available online: https://www.luxian.gov.cn/zwgk/fdzdgknr/zdmsxx/ylws/content_303681 (accessed on 17 May 2024). (In Chinese)
- Gao, X.L.; Ji, J. Analysis of the seismic vulnerability and the structural characteristics of houses in Chinese rural areas. Nat. Hazard 2014, 70, 1099–1114. [Google Scholar] [CrossRef]
- People First, Life First—The Seventh Diary of Sichuan Province’s Response to the “9·5” Luding Earthquake. Available online: https://www.sc.gov.cn/10462/10464/10797/2022/9/12/5973fd88141145ea9f49477bb4f92c9d.shtml (accessed on 12 May 2024). (In Chinese)
- Earthquake Experts: “9•5” Luding Earthquake Damage Has Five Characteristics. Available online: https://www.sc.gov.cn/10462/10778/10876/2022/9/14/1f2655ddc5394b1a989f22d1393560e8.shtml (accessed on 14 May 2024). (In Chinese)
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the Computer Vision-ECCV 2014, Zurich, Switzerland, 5–12 September 2014; pp. 740–755. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar]
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable Convolutional Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 764–773. [Google Scholar]
- Zhu, X.; Hu, H.; Lin, S.; Dai, J. Deformable ConvNets V2: More Deformable, Better Results. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 9300–9308. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.F.; Shi, J.P.; Jia, J.Y. Path Aggregation Network for Instance Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768. [Google Scholar]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10778–10787. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
- Zou, R.; Liu, J.; Pan, H.; Tang, D.; Zhou, R. An Improved Instance Segmentation Method for Fast Assessment of Damaged Buildings Based on Post-Earthquake UAV Images. Sensors 2024, 24, 4371. [Google Scholar] [CrossRef]
- Shi, P.; Zhao, Z.; Fan, X.; Yan, X.; Yan, W.; Xin, Y. Remote Sensing Image Object Detection Based on Angle Classification. IEEE Access 2021, 9, 118696–118707. [Google Scholar] [CrossRef]
Labeled Dataset | Total Images | Total Samples |
---|---|---|
Training | 480 | 935 |
Validation | 120 | 231 |
Testing | 104 | 206 |
Sum | 704 | 1372 |
Model | (%) | (%) | (%) | (%) | (%) | (%) | (%) | (%) | T (ms/img) | FPS (img/s) |
---|---|---|---|---|---|---|---|---|---|---|
Mask R-CNN | 45.78 | 67.10 | 50.56 | 48.88 | 67.17 | 51.14 | 68.41 | 0.49 | 48.3 | 20.7 |
PolarMask | 42.65 | 65.70 | 46.52 | 45.14 | 65.81 | 47.32 | 64.02 | 0.42 | 39.8 | 25.1 |
YOLACT | 41.16 | 65.24 | 45.35 | 44.32 | 65.30 | 46.10 | 62.45 | 0.41 | 12.1 | 82.8 |
SOLO | 47.45 | 67.88 | 53.74 | 49.74 | 68.15 | 53.94 | 70.68 | 0.53 | 32.1 | 31.2 |
SOLQ | 49.50 | 69.23 | 56.10 | 53.12 | 69.40 | 56.12 | 73.92 | 0.58 | 73.5 | 13.6 |
QueryInst | 49.78 | 69.79 | 56.45 | 52.09 | 69.89 | 56.55 | 74.17 | 0.58 | 51.3 | 19.5 |
FastInst | 48.99 | 68.91 | 54.62 | 51.47 | 69.02 | 54.80 | 73.51 | 0.57 | 15.4 | 65.1 |
Mask Transfiner | 50.78 | 70.72 | 57.62 | 51.42 | 70.49 | 57.60 | 75.88 | 0.60 | 75.8 | 13.2 |
DB-Transfiner (ours) | 54.85 | 70.75 | 62.20 | 56.42 | 71.97 | 60.50 | 81.99 | 0.70 | 70.9 | 14.1 |
Model | AP (%) | (%) | (%) | (%) | (%) | (%) | (%) |
---|---|---|---|---|---|---|---|
Baseline | 51.42 | 70.49 | 57.60 | 59.01 | 31.42 | 76.14 | 0.61 |
Baseline + DCNM | 53.72 (+2.30) | 72.76 (+2.27) | 60.06 (+2.46) | 60.36 (+1.35) | 35.60 (+4.18) | 79.03 (+2.89) | 0.66 (+0.05) |
Baseline + DCNM + MEFM | 55.20 (+3.78) | 69.29 (−1.20) | 60.47 (+2.87) | 65.53 (+6.52) | 37.76 (+6.34) | 81.75 (+5.61) | 0.70 (+0.09) |
Baseline + DCNM + MEFM + LTGM | 56.42 (+5.00) | 71.97 (+1.48) | 60.50 (+2.90) | 66.72 (+7.71) | 37.89 (+6.47) | 82.93 (+6.79) | 0.72 (+0.11) |
Model | AP (%) | (%) | (%) | (%) | (%) | (%) | (%) |
---|---|---|---|---|---|---|---|
Baseline | 50.78 | 70.72 | 57.62 | 59.82 | 27.42 | 75.88 | 0.60 |
Baseline + DCNM | 53.17 (+2.39) | 72.90 (+2.18) | 61.30 (+3.68) | 60.80 (+0.98) | 30.87 (+3.45) | 78.52 (+2.64) | 0.67 (+0.07) |
Baseline + DCNM + MEFM | 54.50 (+3.72) | 69.20 (−1.50) | 61.84 (+4.22) | 63.86 (+4.04) | 32.90 (+5.48) | 80.80 (+4.92) | 0.69 (+0.09) |
Baseline + DCNM + MEFM + LTGM | 54.85 (+4.07) | 70.75 (+0.03) | 62.20 (+4.58) | 63.95 (+4.13) | 33.52 (+6.10) | 81.99 (+6.11) | 0.70 (+0.10) |
Test Area | Ground Truth | Detection Number | Wrong Number | Omission Number | Time (s) | Correctness (%) |
---|---|---|---|---|---|---|
e | 197 | 162 | 6 | 35 | 249 | 82.23 |
f | 131 | 108 | 3 | 23 | 192 | 82.44 |
g | 186 | 164 | 5 | 22 | 86 | 88.17 |
Average Correctness | - | - | - | - | - | 84.28 |
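The Correctness column in the table above corresponds to Detection Number divided by Ground Truth (not detections minus wrong detections), which a quick check reproduces exactly:

```python
def correctness(detected, ground_truth):
    """Correctness (%) as implied by the test-area table: detected / ground truth * 100."""
    return 100.0 * detected / ground_truth

# Test areas e, f, g: (ground truth, detection number) from the table.
areas = {"e": (197, 162), "f": (131, 108), "g": (186, 164)}
scores = {k: round(correctness(d, gt), 2) for k, (gt, d) in areas.items()}
average = round(sum(scores.values()) / len(scores), 2)
```

Running this yields 82.23, 82.44, and 88.17 for areas e, f, and g, and an average of 84.28, matching the reported values.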
Dataset | (%) | (%) | (%) | (%) | (%) | (%) | (%) |
---|---|---|---|---|---|---|---|
Luding earthquake | 54.85 | 70.75 | 62.20 | 63.95 | 33.52 | 81.99 | 0.70 |
Yangbi earthquake | 53.08 | 68.97 | 60.86 | 61.02 | 31.15 | 80.12 | 0.68 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, K.; Wang, S.; Wang, Y.; Gu, Z. High-Quality Damaged Building Instance Segmentation Based on Improved Mask Transfiner Using Post-Earthquake UAS Imagery: A Case Study of the Luding Ms 6.8 Earthquake in China. Remote Sens. 2024, 16, 4222. https://doi.org/10.3390/rs16224222