SCAD: A Siamese Cross-Attention Discrimination Network for Bitemporal Building Change Detection
Abstract
1. Introduction
- (1) We add a Siamese cross-attention (SCA) module to the Siamese network that deploys two-channel, targeted attention on changed and unchanged regions. This strengthens the network’s ability to characterize the specified feature information and to distinguish building changes from environmental changes, thereby improving recognition accuracy (a minimal sketch of the cross-attention mechanism follows this list).
- (2) Our proposed multi-scale feature fusion (MFF) module fuses independent multi-scale information to recover as much of the original feature information of the remote sensing images as possible, reducing the false detection rate of CD and producing more delicate edge lines around detected change regions.
- (3) We design a differential context discrimination (DCD) module that combines differencing and concatenation, enhancing the feature differences between contexts and focusing on the comparison between pseudo-changes and real changes. This makes the model more responsive to the regions where changes occur and reduces the network’s missed detection rate.
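To make the SCA idea concrete, the following is a minimal sketch of cross-attention between bitemporal feature maps in PyTorch. It illustrates only the general mechanism; the module structure, head count, and normalization here are illustrative assumptions, not the authors' exact SCAD implementation.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Generic cross-attention between bitemporal features: queries come from
    one temporal branch, keys/values from the other, so each branch can weight
    locations by how they relate to the paired image (illustrative sketch)."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, C, H, W) encoder features of images T1 and T2.
        b, c, h, w = feat_a.shape
        q = feat_a.flatten(2).transpose(1, 2)    # (B, H*W, C): queries from T1
        kv = feat_b.flatten(2).transpose(1, 2)   # (B, H*W, C): keys/values from T2
        out, _ = self.attn(q, kv, kv)            # T1 tokens attend over T2 tokens
        out = self.norm(out + q)                 # residual connection + norm
        return out.transpose(1, 2).reshape(b, c, h, w)

# Siamese use: one shared-weight module applied in both temporal directions.
ca = CrossAttention(channels=64)
t1, t2 = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
attended_12, attended_21 = ca(t1, t2), ca(t2, t1)
```

Applying the same module in both directions with shared weights mirrors the Siamese design; the two outputs can then be compared to localize changed regions.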
2. Materials and Methods
2.1. Network Architecture
2.2. Siamese Cross-Attention Module
2.3. Multi-Scale Feature Fusion Module
2.4. Differential Context Discrimination Module
2.5. Loss Function
3. Experiments and Results
3.1. Datasets
- LEVIR-CD [30], created by Beihang University, contains a variety of architectural images, comprising original Google Earth images collected between 2002 and 2018. Each image is 1024 × 1024 pixels with a spatial resolution of 0.5 m. Due to GPU memory limitations, we divided each image into 16 non-overlapping patches of 256 × 256 pixels (a minimal tiling sketch follows this list). As a result, we obtained 3096, 432, and 921 pairs of patches for training, validation, and testing, respectively. Figure 3 shows a few scenes from the LEVIR-CD dataset.
- Sun Yat-Sen University created the challenging SYSU-CD dataset [41] for CD. It covers changes in vegetation and buildings in forests, buildings along a coastline, and the appearance and disappearance of ships in the ocean. Each image is 256 × 256 pixels with a resolution of 0.5 m. In our work, the training, validation, and test samples were split in a 6:2:2 proportion, giving 12,000, 4000, and 4000 pairs of patches for training, validation, and testing, respectively. Figure 4 illustrates the variety of scenarios included in the SYSU-CD dataset.
- The WHU-CD dataset [42] was released by Wuhan University as a public CD dataset. The original dataset consists of a single image of 32,507 × 15,354 pixels. For consistency with the two datasets above, 7432 patches were generated by cropping the image into 256 × 256 pixels without overlap. In the end, we obtained 5201, 744, and 1487 pairs of patches for training, validation, and testing, respectively. A few scenes from the WHU-CD building dataset are shown in Figure 5.
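For concreteness, the non-overlapping cropping described above can be sketched as follows; the function name and NumPy-based implementation are illustrative assumptions rather than the authors' released preprocessing code.

```python
import numpy as np

def tile_image(img: np.ndarray, patch: int = 256) -> list:
    """Split an (H, W, C) image into non-overlapping patch x patch tiles.
    A 1024 x 1024 LEVIR-CD image yields (1024 // 256) ** 2 = 16 tiles."""
    h, w = img.shape[:2]
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

tiles = tile_image(np.zeros((1024, 1024, 3), dtype=np.uint8))
assert len(tiles) == 16  # 4 x 4 grid of 256 x 256 patches
```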
3.2. Experimental Details
3.2.1. Evaluation Metrics
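The result tables report Precision, Recall, F1 score, mIOU, IOU_0, IOU_1, overall accuracy (OA), and the Kappa coefficient. As a reference, the following is a minimal sketch of the standard definitions of these metrics computed from a binary confusion matrix; it reflects the common formulations and is an assumption about, not a transcription of, the authors' evaluation code.

```python
def binary_cd_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard change-detection metrics from a binary confusion matrix
    (class 1 = changed, class 0 = unchanged); counts assumed nonzero."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou_1 = tp / (tp + fp + fn)              # IoU of the changed class
    iou_0 = tn / (tn + fp + fn)              # IoU of the unchanged class
    miou = (iou_0 + iou_1) / 2
    total = tp + fp + fn + tn
    oa = (tp + tn) / total                   # overall accuracy
    # Cohen's kappa: observed agreement corrected for chance agreement.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return {"precision": precision, "recall": recall, "f1": f1, "miou": miou,
            "iou_0": iou_0, "iou_1": iou_1, "oa": oa, "kappa": kappa}
```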
3.2.2. Parameter Settings
3.3. Ablation Experiment
3.4. Comparative Experiments
- FC-EF [43]: An image-level fusion method in which the bitemporal images are concatenated into a single input to a fully convolutional network, with feature mapping performed through skip connections.
- FC-Siam-conc [43]: This method fuses multiscale information in the decoder. A Siamese FCN with identical structure and shared weights extracts multilevel features from each image.
- FC-Siam-diff [43]: Only the skip connections differ from FC-Siam-conc: instead of concatenating the two feature streams, FC-Siam-diff passes their absolute difference (see the sketch after this list).
- CDNet [44]: CDNet was originally proposed for street-scene change detection. Its core consists of four contraction blocks and four expansion blocks: the contraction blocks extract feature information from the images, and the expansion blocks refine the change regions. A softmax layer classifies each pixel for the prediction, balancing performance and model size.
- IFNet [45]: A deeply supervised image fusion network for CD. Bitemporal features are first extracted by a two-stream network and then passed to a deeply supervised difference discrimination network (DDN) for analysis. Finally, channel attention and spatial attention fuse the two-stream features to preserve the integrity of change-region boundaries.
- SNUNet [46]: SNUNet combines a Siamese network with NestedUNet to reduce the loss of deep information in the network, and uses an ensemble channel attention module (ECAM) to achieve accurate feature extraction.
- BITNet [47]: A transformer module is added to the Siamese network to express each image as a compact set of semantic tokens. An encoder then models context over this token set, and a decoder projects the tokens back to pixel space, restoring the original image information and enhancing the feature representation there.
- LUNet [48]: LUNet integrates LSTM units into a UNet backbone, adding an LSTM before each encoding stage. Adjusting each LSTM’s weights, biases, and forget-gate switches keeps the network operation lightweight while achieving an end-to-end structure.
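The practical difference between the two Siamese FCN baselines [43] comes down to how same-scale encoder features are fused at each skip connection. A minimal sketch of just that fusion step (tensor shapes are illustrative):

```python
import torch

# e1, e2: same-scale features from the shared-weight Siamese encoder.
e1 = torch.randn(2, 64, 64, 64)  # encoder features of image T1
e2 = torch.randn(2, 64, 64, 64)  # encoder features of image T2

skip_conc = torch.cat([e1, e2], dim=1)  # FC-Siam-conc: channel concatenation
skip_diff = torch.abs(e1 - e2)          # FC-Siam-diff: absolute difference
```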
3.5. Quantitative Results
3.6. Computational Efficiency Experiment
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Qin, R.; Tian, J.; Reinartz, P. 3D change detection—Approaches and applications. ISPRS J. Photogramm. Remote Sens. 2016, 122, 41–56.
2. Xue, J.; Xu, H.; Yang, H.; Wang, B.; Wu, P.; Choi, J.; Cai, L.; Wu, Y. Multi-Feature Enhanced Building Change Detection Based on Semantic Information Guidance. Remote Sens. 2021, 13, 4171.
3. Song, L.; Xia, M.; Jin, J.; Qian, M.; Zhang, Y. SUACDNet: Attentional change detection network based on Siamese U-shaped structure. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102597.
4. Chen, Z.; Zhou, Y.; Wang, B.; Xu, X.; He, N.; Jin, S.; Jin, S. EGDE-Net: A building change detection method for high-resolution remote sensing imagery based on edge guidance and differential enhancement. ISPRS J. Photogramm. Remote Sens. 2022, 191, 203–222.
5. Islam, M.A.; Jia, S.; Bruce, N.D. How much position information do convolutional neural networks encode? arXiv 2020, arXiv:2001.08248.
6. Sefrin, O.; Riese, F.M.; Keller, S. Deep learning for land cover change detection. Remote Sens. 2020, 13, 78.
7. Vivekananda, G.; Swathi, R.; Sujith, A. Multi-temporal image analysis for LULC classification and change detection. Eur. J. Remote Sens. 2021, 54, 189–199.
8. Wang, H.; Lv, X.; Zhang, K.; Guo, B. Building Change Detection Based on 3D Co-Segmentation Using Satellite Stereo Imagery. Remote Sens. 2022, 14, 628.
9. Zhang, Z.; Li, Z.; Tian, X. Vegetation change detection research of Dunhuang city based on GF-1 data. Int. J. Coal Sci. Technol. 2018, 5, 105–111.
10. Bruzzone, L.; Prieto, D.F. Automatic analysis of the difference image for unsupervised change detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182.
11. Celik, T. Unsupervised change detection in satellite images using principal component analysis and K-means clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
12. Nemmour, H.; Chibani, Y. Multiple support vector machines for land cover change detection: An application for mapping urban extensions. ISPRS J. Photogramm. Remote Sens. 2006, 61, 125–133.
13. Li, P.; Xu, H. Land-cover change detection using one-class support vector machine. Photogramm. Eng. Remote Sens. 2010, 76, 255–263.
14. Seo, D.K.; Kim, Y.H.; Eo, Y.D.; Park, W.Y.; Park, H.C. Generation of radiometric, phenological normalized image based on random forest regression for change detection. Remote Sens. 2017, 9, 1163.
15. Ke, L.; Lin, Y.; Zeng, Z.; Zhang, L.; Meng, L. Adaptive change detection with significance test. IEEE Access 2018, 6, 27442–27450.
16. Hay, G.J.; Niemann, K.O. Visualizing 3-D texture: A three-dimensional structural approach to model forest texture. Can. J. Remote Sens. 1994, 20, 90–101.
17. Jabari, S.; Rezaee, M.; Fathollahi, F.; Zhang, Y. Multispectral change detection using multivariate Kullback–Leibler distance. ISPRS J. Photogramm. Remote Sens. 2019, 147, 163–177.
18. Huang, X.; Cao, Y.; Li, J. An automatic change detection method for monitoring newly constructed building areas using time-series multi-view high-resolution optical satellite images. Remote Sens. Environ. 2020, 244, 111802.
19. Javed, A.; Jung, S.; Lee, W.H.; Han, Y. Object-based building change detection by fusing pixel-level change detection results generated from morphological building index. Remote Sens. 2020, 12, 2952.
20. Guo, X.; Meng, L.; Mei, L.; Weng, Y.; Tong, H. Multi-focus image fusion with Siamese self-attention network. IET Image Process. 2020, 14, 1339–1346.
21. Zhu, Q.; Guo, X.; Deng, W.; Guan, Q.; Zhong, Y.; Zhang, L.; Li, D. Land-use/land-cover change detection based on a Siamese global learning framework for high spatial resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 184, 63–78.
22. Bai, B.; Fu, W.; Lu, T.; Li, S. Edge-guided recurrent convolutional neural network for multitemporal remote sensing image building change detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13.
23. Gao, Y.; Gao, F.; Dong, J.; Wang, S. Change detection from synthetic aperture radar images based on channel weighting-based deep cascade network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4517–4529.
24. Kang, M.; Baek, J. SAR image change detection via multiple-window processing with structural similarity. Sensors 2021, 21, 6645.
25. Dong, H.; Ma, W.; Jiao, L.; Liu, F.; Shang, R.; Li, Y.; Bai, J. A Contrastive Learning Transformer for Change Detection in High-Resolution SAR Images; SSRN 4169439; SSRN: Rochester, NY, USA, 2022.
26. Lei, Y.; Liu, X.; Shi, J.; Lei, C.; Wang, J. Multiscale superpixel segmentation with deep features for change detection. IEEE Access 2019, 7, 36600–36616.
27. Dong, H.; Ma, W.; Wu, Y.; Zhang, J.; Jiao, L. Self-supervised representation learning for remote sensing image change detection based on temporal prediction. Remote Sens. 2020, 12, 1868.
28. Lv, Z.; Liu, T.; Shi, C.; Benediktsson, J.A.; Du, H. Novel land cover change detection method based on K-means clustering and adaptive majority voting using bitemporal remote sensing images. IEEE Access 2019, 7, 34425–34437.
29. Chen, Y.; Bruzzone, L. Self-supervised remote sensing images change detection at pixel-level. arXiv 2021, arXiv:2105.08501.
30. Chen, H.; Shi, Z. A spatial-temporal attention-based method and a new dataset for remote sensing image change detection. Remote Sens. 2020, 12, 1662.
31. Wang, Z.; Peng, C.; Zhang, Y.; Wang, N.; Luo, L. Fully convolutional Siamese networks based change detection for optical aerial images with focal contrastive loss. Neurocomputing 2021, 457, 155–167.
32. Zhan, Y.; Fu, K.; Yan, M.; Sun, X.; Wang, H.; Qiu, X. Change detection based on deep Siamese convolutional network for optical aerial images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1845–1849.
33. Chen, J.; Yuan, Z.; Peng, J.; Chen, L.; Huang, H.; Zhu, J.; Liu, Y.; Li, H. DASNet: Dual attentive fully convolutional Siamese networks for change detection in high-resolution satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1194–1206.
34. Mi, L.; Chen, Z. Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation. ISPRS J. Photogramm. Remote Sens. 2020, 159, 140–152.
35. Wang, H.; Cao, P.; Wang, J.; Zaiane, O.R. UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; pp. 2441–2449.
36. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Instance normalization: The missing ingredient for fast stylization. arXiv 2016, arXiv:1607.08022.
37. West, R.D.; Riley, R.M. Polarimetric interferometric SAR change detection discrimination. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3091–3104.
38. Mei, L.; Yu, Y.; Shen, H.; Weng, Y.; Liu, Y.; Wang, D.; Liu, S.; Zhou, F.; Lei, C. Adversarial Multiscale Feature Learning Framework for Overlapping Chromosome Segmentation. Entropy 2022, 24, 522.
39. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
40. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.K.; Wang, Z.; Smolley, S.P. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2813–2821.
41. Shi, Q.; Liu, M.; Li, S.; Liu, X.; Wang, F.; Zhang, L. A deeply supervised attention metric-based network and an open aerial image dataset for remote sensing change detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
42. Ji, S.; Wei, S.; Lu, M. Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set. IEEE Trans. Geosci. Remote Sens. 2018, 57, 574–586.
43. Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional Siamese networks for change detection. In Proceedings of the 25th IEEE International Conference on Image Processing, Athens, Greece, 7–10 October 2018; pp. 4063–4067.
44. Alcantarilla, P.F.; Stent, S.; Ros, G.; Arroyo, R.; Gherardi, R. Street-view change detection with deconvolutional networks. Auton. Robots 2018, 42, 1301–1322.
45. Zhang, C.; Yue, P.; Tapete, D.; Jiang, L.; Shangguan, B.; Huang, L.; Liu, G. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 183–200.
46. Fang, S.; Li, K.; Shao, J.; Li, Z. SNUNet-CD: A densely connected Siamese network for change detection of VHR images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
47. Chen, H.; Qi, Z.; Shi, Z. Remote sensing image change detection with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
48. Papadomanolaki, M.; Vakalopoulou, M.; Karantzalos, K. A deep multitask learning framework coupling semantic segmentation and fully convolutional LSTM networks for urban change detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7651–7668.
49. Wang, X.; Du, J.; Tan, K.; Ding, J.; Liu, Z.; Pan, C.; Han, B. A high-resolution feature difference attention network for the application of building change detection. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102950.
Details of the three datasets used in our experiments.

| Name | Bands | Image Pairs | Resolution (m) | Image Size | Training Set | Validation Set | Testing Set |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LEVIR-CD | 3 | 637 | 0.5 | 1024 × 1024 | 3096 | 432 | 921 |
| SYSU-CD | 3 | 20,000 | 0.5 | 256 × 256 | 12,000 | 4000 | 4000 |
| WHU-CD | 3 | 1 | 0.5 | 32,507 × 15,354 | 5201 | 744 | 1487 |
Ablation results on the LEVIR-CD dataset (all values in %).

| Method | Precision | Recall | F1 Score | mIOU | IOU_0 | IOU_1 | OA | Kappa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 87.73 | 91.31 | 89.48 | 89.16 | 97.36 | 80.96 | 97.63 | 88.15 |
| Baseline + SCA | 88.37 | 91.29 | 89.81 | 89.48 | 97.45 | 81.50 | 97.71 | 88.52 |
| Baseline + SCA + DCD | 90.14 | 91.74 | 90.32 | 90.56 | 97.75 | 83.37 | 97.98 | 89.79 |
Comparative results on the LEVIR-CD dataset (all values in %; "-" indicates a metric not reported for that method).

| Method | Precision | Recall | F1 Score | mIOU | IOU_0 | IOU_1 | OA | Kappa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FC-EF | 79.91 | 82.84 | 81.35 | 81.97 | 95.38 | 68.56 | 95.80 | 78.99 |
| FC-Siam-conc | 81.84 | 83.55 | 82.68 | 83.11 | 95.74 | 70.48 | 96.13 | 80.51 |
| FC-Siam-diff | 78.60 | 89.30 | 83.61 | 83.77 | 95.71 | 71.84 | 96.13 | 81.43 |
| CDNet | 84.21 | 87.10 | 85.63 | 85.65 | 96.43 | 74.87 | 96.77 | 83.81 |
| IFNet | 85.37 | 90.24 | 87.74 | 87.53 | 96.91 | 78.16 | 97.21 | 86.17 |
| SNUNet | 91.05 | 88.87 | 89.94 | 89.65 | 97.57 | 81.73 | 97.81 | 88.71 |
| BITNet | 87.32 | 91.41 | 89.32 | 89.00 | 97.31 | 80.70 | 97.59 | 87.96 |
| LUNet | 85.69 | 90.99 | 88.73 | 88.44 | 97.13 | 79.75 | 97.42 | 87.28 |
| DeepLabV3 | 90.03 | 82.51 | 86.11 | - | - | - | - | 85.39 |
| UNet++ | 91.44 | 85.24 | 88.23 | - | - | - | - | 87.62 |
| STANet | 92.01 | 83.33 | 87.46 | - | - | - | - | 86.82 |
| HDANet | 92.26 | 87.61 | 89.87 | - | - | - | - | 89.34 |
| SCADNet (ours) | 90.14 | 91.74 | 90.32 | 90.56 | 97.75 | 83.37 | 97.98 | 89.79 |
Comparative results on the SYSU-CD dataset (all values in %).

| Method | Precision | Recall | F1 Score | mIOU | IOU_0 | IOU_1 | OA | Kappa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FC-EF | 64.58 | 89.79 | 75.13 | 71.19 | 82.21 | 60.17 | 85.98 | 65.73 |
| FC-Siam-conc | 65.98 | 89.39 | 75.99 | 72.18 | 83.08 | 61.28 | 86.65 | 67.04 |
| FC-Siam-diff | 70.84 | 84.87 | 77.22 | 74.07 | 85.24 | 62.90 | 88.19 | 69.34 |
| CDNet | 74.61 | 84.10 | 79.08 | 76.15 | 86.90 | 65.39 | 89.50 | 72.10 |
| IFNet | 75.29 | 87.60 | 80.98 | 77.91 | 87.77 | 68.04 | 90.30 | 74.52 |
| SNUNet | 76.90 | 79.59 | 78.22 | 75.68 | 87.13 | 64.23 | 89.55 | 71.35 |
| BITNet | 80.61 | 79.29 | 79.95 | 77.53 | 88.46 | 66.59 | 90.62 | 73.83 |
| LUNet | 76.14 | 81.74 | 78.84 | 76.13 | 87.18 | 65.08 | 89.65 | 72.01 |
| DeepLabV3 | 80.99 | 70.65 | 75.47 | - | - | - | - | 68.56 |
| UNet++ | 81.44 | 74.66 | 77.90 | - | - | - | - | 71.76 |
| STANet | 80.38 | 74.75 | 77.46 | - | - | - | - | 70.84 |
| HDANet | 78.53 | 79.88 | 79.20 | - | - | - | - | 72.71 |
| SCADNet (ours) | 80.12 | 83.53 | 81.79 | 79.13 | 89.08 | 69.19 | 91.23 | 76.02 |
Comparative results on the WHU-CD dataset (all values in %).

| Method | Precision | Recall | F1 Score | mIOU | IOU_0 | IOU_1 | OA | Kappa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FC-EF | 70.43 | 92.31 | 79.90 | 82.12 | 97.72 | 66.53 | 97.82 | 78.77 |
| FC-Siam-conc | 63.80 | 91.81 | 75.28 | 78.70 | 97.04 | 60.36 | 97.16 | 73.83 |
| FC-Siam-diff | 65.98 | 94.30 | 77.63 | 80.38 | 97.33 | 63.44 | 97.44 | 76.32 |
| CDNet | 81.75 | 88.69 | 85.08 | 86.25 | 98.47 | 74.03 | 98.54 | 84.31 |
| IFNet | 86.51 | 87.69 | 87.09 | 87.93 | 98.72 | 77.14 | 98.78 | 86.45 |
| SNUNet | 81.92 | 85.33 | 83.59 | 85.08 | 98.36 | 71.80 | 98.42 | 82.76 |
| BITNet | 82.35 | 92.59 | 87.17 | 87.96 | 98.66 | 77.26 | 98.72 | 86.50 |
| LUNet | 66.32 | 93.06 | 77.45 | 80.26 | 97.33 | 63.19 | 97.45 | 76.13 |
| DeepLabV3 | 82.56 | 81.97 | 82.26 | - | - | - | - | 81.58 |
| UNet++ | 89.06 | 78.98 | 83.72 | - | - | - | - | 83.13 |
| STANet | 86.01 | 83.40 | 84.68 | - | - | - | - | 84.10 |
| HDANet | 89.87 | 82.55 | 86.05 | - | - | - | - | 85.54 |
| SCADNet (ours) | 85.06 | 92.50 | 88.62 | 89.20 | 98.83 | 79.57 | 98.88 | 88.04 |
Model complexity comparison.

| Method | Params (M) | FLOPs (G) |
| --- | --- | --- |
| FC-EF | 1.35 | 3.56 |
| FC-Siam-conc | 1.54 | 5.31 |
| FC-Siam-diff | 1.35 | 4.71 |
| CDNet | 1.43 | 23.45 |
| IFNet | 35.99 | 82.26 |
| SNUNet | 27.06 | 123.11 |
| BITNet | 3.01 | 8.48 |
| LUNet | 8.45 | 17.33 |
| SCADNet (ours) | 66.94 | 70.72 |
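Params and FLOPs in the table above are standard complexity measures. Below is a minimal sketch of how they are commonly obtained for a PyTorch model; the `thop` package and the stand-in model are assumptions for illustration, as the paper does not state the authors' profiling tooling.

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop (assumed tooling)

class TinySiamese(nn.Module):
    """Stand-in bitemporal model; replace with the network under test."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)  # shared-weight encoder
        self.head = nn.Conv2d(32, 2, 1)                 # change / no-change map

    def forward(self, t1, t2):
        f = torch.cat([self.backbone(t1), self.backbone(t2)], dim=1)
        return self.head(f)

model = TinySiamese()
inputs = (torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
macs, params = profile(model, inputs=inputs)
# thop counts multiply-accumulates; FLOPs ~ 2 * MACs by one common convention,
# though many papers report GFLOPs = MACs directly.
print(f"Params: {params / 1e6:.2f} M, FLOPs: {2 * macs / 1e9:.2f} G")
```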