A Scene Graph Similarity-Based Remote Sensing Image Retrieval Algorithm
Abstract
1. Introduction
- A Siamese-network-based one-shot object detection algorithm was designed. A Siamese network model was proposed to handle the potentially unknown categories of objects in query images; the model was trained so that the distance between any two input samples in the feature representation space reflected their image feature similarity. In addition, to deal with the complex backgrounds of remote sensing images, a feature extraction network based on asymmetric convolution was developed for each feature extraction branch of the Siamese network to enhance feature extraction capabilities. To address missed small targets in remote sensing images, a feature pyramid structure was designed to improve small-target detection. Finally, a candidate region generation network based on an attention mechanism was proposed to address the low positional regression accuracy of one-shot object detection tasks.
- A remote sensing image retrieval method was developed using scene graph similarity. By constructing scene graphs and performing image matching and retrieval based on scene graph similarity, this method more fully utilized the high-level semantic information in images, including the categories and spatial relationships of objects, to improve retrieval accuracy and efficiency, which fundamentally aligned with human behavior in image retrieval.
- Several scene graph construction strategies for target spatial relationships were developed to meet various retrieval needs, including fully connected, randomly connected, nearest neighbor, star-shaped, and ring connections. These strategies can flexibly adapt to various data features and retrieval requirements, providing a variety of options for scene-graph-based remote sensing image retrieval.
- The RSSG dataset, which pairs each image with a corresponding scene graph, was created using the above scene graph construction strategies. To the best of our knowledge, RSSG is the first remote sensing dataset with scene graph annotations in the academic community. The experimental performance evaluation on RSSG showed that selecting a construction strategy appropriate to the scene characteristics clearly enhanced retrieval performance.
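The Siamese training objective described in the first contribution — embedding distances that mirror image feature similarity — can be sketched with a standard contrastive loss. The pure-NumPy formulation and the margin value below are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def contrastive_loss(z_a, z_b, same, margin=1.0):
    """Contrastive objective for one pair of Siamese embeddings.

    z_a, z_b : feature vectors from the two weight-sharing branches.
    same     : 1 if both inputs show the same object category, else 0.
    Matching pairs are pulled together; non-matching pairs are pushed
    apart until their distance exceeds the margin.
    """
    d = np.linalg.norm(z_a - z_b)
    return same * d ** 2 + (1 - same) * max(margin - d, 0.0) ** 2
```

Minimizing this loss over sample pairs makes the embedding distance itself a usable similarity score at query time.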
2. Related Work
3. Remote Sensing Image Retrieval Algorithms Based on Scene Graph Similarity
3.1. Problem Description
- (1)
- The basic task for extracting semantic information from images is object detection. Traditional object detection relies on a predefined set of target categories to classify and locate objects within images. However, the target categories in publicly reported images, i.e., the query image Iq, are often unpredictable, making this a typical one-shot object detection task.
- (2)
- A scene graph is a structured representation of the scene in an image that explicitly expresses the objects, their attributes, and the relationships among them. Remote sensing images differ significantly from ordinary images. First, they are captured from an overhead perspective, so relationships among objects are mainly spatial relationships on a plane. Second, the field of view is large, so an image contains many targets and complex spatial relationships among them. Both properties pose challenges for constructing scene graphs for remote sensing images. To the best of our knowledge, the academic community currently lacks methods for constructing scene graphs specifically for remote sensing images.
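As a concrete illustration of the structured representation described above, a minimal scene graph can store detected objects as nodes and planar spatial relations as labelled edges. The class names, box format, and relation label below are hypothetical examples, not the RSSG schema.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    # nodes: detected objects, e.g. {"id": 0, "cls": "boat", "box": (x1, y1, x2, y2)}
    # edges: directed spatial relations, e.g. (0, 1, "left_of")
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_object(self, cls, box):
        self.nodes.append({"id": len(self.nodes), "cls": cls, "box": box})
        return len(self.nodes) - 1

    def relate(self, i, j, relation):
        self.edges.append((i, j, relation))

# Hypothetical usage: two detections linked by one spatial relation.
g = SceneGraph()
a = g.add_object("boat", (10, 10, 40, 30))
b = g.add_object("oil tank", (60, 12, 90, 42))
g.relate(a, b, "left_of")
```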
3.2. One-Shot Object Detection Algorithm for Remote Sensing Images Based on a Siamese Network
- (1)
- ACSE residual module based on asymmetric convolution
- (2)
- ACSENet feature extraction network based on asymmetric convolution
- (3)
- Candidate region generation network (ARPN) based on the attention mechanism
- (4)
- Double-head detector
3.3. Scene Graph Construction
- (1)
- Fully connected: Each node is directly connected to every other node in the graph, capturing the relationships among all nodes but introducing redundancy with the increase in node number.
- (2)
- Randomly connected: Each node is probabilistically connected to other nodes in the graph, decreasing the connection number while maintaining information transfer among nodes.
- (3)
- Nearest neighbor connected: Starting from a selected node, other nodes are incrementally added to the graph based on their proximity to the already selected nodes. This method effectively captures the local structure and similarity in the data, improving retrieval and analysis efficiency.
- (4)
- Star connected: One central node is directly connected to all other nodes, while connections among other nodes are absent. This configuration is appropriate for scenes with a clear center or theme, emphasizing information related to the central node for easier retrieval and understanding.
- (5)
- Ring connected: Nodes are connected in a circular manner where each node is connected to its adjacent nodes. This method protects the spatial information and layout relationships among targets, capturing the periodic relationships among the data.
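The five connection strategies above can be expressed as simple edge-set constructions over the detected objects. This is a minimal sketch in which nodes are indexed 0..n−1 and `centers` holds hypothetical object center coordinates; the connection probability and seed are illustrative defaults.

```python
import itertools
import math
import random

def fully_connected(n):
    # Every pair of nodes is joined by an edge.
    return list(itertools.combinations(range(n), 2))

def randomly_connected(n, p=0.5, seed=0):
    # Each candidate edge is kept with probability p (seeded for reproducibility).
    rng = random.Random(seed)
    return [e for e in itertools.combinations(range(n), 2) if rng.random() < p]

def nearest_neighbor(centers):
    # Greedy growth: repeatedly attach the unselected node closest to
    # any already-selected node, capturing local structure.
    edges, selected = [], {0}
    remaining = set(range(1, len(centers)))
    while remaining:
        i, j = min(((i, j) for i in selected for j in remaining),
                   key=lambda ij: math.dist(centers[ij[0]], centers[ij[1]]))
        edges.append((i, j))
        selected.add(j)
        remaining.remove(j)
    return edges

def star(n, center=0):
    # One central node connected to all others; no other edges.
    return [(center, j) for j in range(n) if j != center]

def ring(n):
    # Each node connected to its successor, closing the cycle.
    return [(i, (i + 1) % n) for i in range(n)]
```

For example, `star(4)` yields the three edges incident to node 0, and `ring(3)` closes a triangle over nodes 0–2.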
3.4. Remote Sensing Image Similarity Calculation Fusing Scene Graphs
3.4.1. Scene Graph Feature Extraction Module
3.4.2. Feature Similarity Calculation Module
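As a hedged sketch of how the two modules above could fit together, the fragment below embeds a scene graph with one Kipf–Welling-style graph-convolution layer, mean-pools the node features into a graph vector, and compares two graphs by cosine similarity. The single-layer depth, pooling choice, and similarity measure are assumptions, not the paper's exact modules.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One graph-convolution step: self-loop-augmented neighbourhood
    # averaging, a linear map, then ReLU.
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)

def graph_embedding(A, X, W):
    # Mean-pool node features into a single graph-level vector.
    return gcn_layer(A, X, W).mean(axis=0)

def scene_graph_similarity(g_a, g_b):
    # Cosine similarity between two pooled graph vectors.
    return float(g_a @ g_b / (np.linalg.norm(g_a) * np.linalg.norm(g_b) + 1e-12))
```

Here `A` is the adjacency matrix produced by one of the construction strategies in Section 3.3, `X` holds per-node (object) features, and `W` is a learned weight matrix.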
4. Experimental Evaluation
4.1. Datasets and Experimental Environment
- (1)
- Object detection in optical remote sensing images (DIOR) dataset [44]
- (2)
- Remote sensing images with scene graph (RSSG) ocean remote sensing dataset
- (3)
- Experimental environment
4.2. Evaluation of the One-Shot Object Detection Algorithm for Remote Sensing Images
4.2.1. Evaluation Indicators
4.2.2. Performance Comparison Result
4.3. Evaluation of the Remote Sensing Image Retrieval Algorithm
4.3.1. Evaluation Indicators
- 1.
- Precision@k: measured the proportion of relevant instances among the top retrieved items. It was calculated as the ratio of relevant retrieved instances to the total number of retrieved instances at rank k. In this context, Precision@k at k = 1, 5, and 10 evaluated the precision of the retrieval algorithm when the top 1, 5, and 10 retrieved items were considered, respectively. The equation for the calculation of Precision@k is shown in Equation (4):
- 2.
- Recall@k: measured the proportion of relevant instances which were retrieved among all relevant instances. It was calculated as the ratio of relevant instances retrieved to the total number of relevant instances in the dataset at rank k. Specifically, Recall@k at k = 1, 5, and 10 assessed how well the retrieval algorithm captured relevant instances within the top 1, 5, and 10 retrieved items, respectively. The equation for the calculation of Recall@k is shown in Equation (5):
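Both metrics can be computed directly from a ranked retrieval list. The sketch below follows the definitions as described, with hypothetical item identifiers; `relevant` is the full set of ground-truth matches for the query.

```python
def precision_at_k(retrieved, relevant, k):
    # Fraction of the top-k retrieved items that are relevant.
    top = retrieved[:k]
    return sum(1 for item in top if item in relevant) / k

def recall_at_k(retrieved, relevant, k):
    # Fraction of all relevant items that appear in the top k.
    top = retrieved[:k]
    return sum(1 for item in top if item in relevant) / len(relevant)
```

For a ranked list `["a", "b", "c", "d"]` with relevant set `{"a", "c", "e"}`, precision at k = 2 is 0.5 (one relevant item among two retrieved) and recall at k = 3 is 2/3 (two of three relevant items found).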
4.3.2. Performance Test
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Avtar, R.; Komolafe, A.A.; Kouser, A.; Singh, D.; Yunus, A.P.; Dou, J.; Kumar, P.; Gupta, R.D.; Johnson, B.A.; Minh, H.V.T.; et al. Assessing sustainable development prospects through remote sensing: A review. Remote Sens. Appl. Soc. Environ. 2020, 20, 100402. [Google Scholar] [CrossRef]
- Li, Y.; Ma, J.; Zhang, Y. Image retrieval from remote sensing big data: A survey. Inf. Fusion 2021, 67, 94–115. [Google Scholar] [CrossRef]
- Ye, F.; Zhao, X.; Luo, W.; Li, D.; Min, W. Query-Adaptive Remote Sensing Image Retrieval Based on Image Rank Similarity and Image-to-Query Class Similarity. IEEE Access 2020, 8, 116824–116839. [Google Scholar] [CrossRef]
- Wan, J.; Wang, D.; Hoi, S.C.; Wu, P.; Zhu, J.; Zhang, Y.; Li, J. Deep Learning for Content-Based Image Retrieval: A Comprehensive Study. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 2014. [Google Scholar]
- Alzu’bi, A.; Amira, A.; Ramzan, N. Semantic content-based image retrieval: A comprehensive study. J. Vis. Commun. Image Represent. 2015, 32, 20–54. [Google Scholar] [CrossRef]
- Liu, Y.; Zhang, D.; Lu, G.; Ma, W.Y. A survey of content-based image retrieval with high-level semantics. Pattern Recognit. 2007, 40, 262–282. [Google Scholar] [CrossRef]
- Antonelli, S.; Avola, D.; Cinque, L.; Crisostomi, D.; Foresti, G.L.; Galasso, F.; Marini, M.R.; Mecca, A.; Pannone, D. Few-Shot Object Detection: A Survey. ACM Comput. Surv. 2022, 54, 37. [Google Scholar] [CrossRef]
- Li, H.; Zhu, G.; Zhang, L.; Jiang, Y.; Dang, Y.; Hou, H.; Shen, P.; Zhao, X.; Shah, S.A.A.; Bennamoun, M. Scene Graph Generation: A comprehensive survey. Neurocomputing 2024, 566, 127052. [Google Scholar] [CrossRef]
- Bretschneider, T.; Cavet, R.; Kao, O. Retrieval of remotely sensed imagery using spectral information content. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toronto, ON, Canada, 7 November 2002; pp. 2253–2255. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Scott, G.J.; Klaric, M.N.; Davis, C.H.; Shyu, C.R. Entropy-Balanced Bitmap Tree for Shape-Based Object Retrieval from Large-Scale Satellite Imagery Databases. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1603–1616. [Google Scholar] [CrossRef]
- Tong, X.Y.; Xia, G.S.; Hu, F.; Zhong, Y.F.; Datcu, M.H.; Zhang, L.P. Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation. IEEE Trans. Big Data 2020, 6, 507–521. [Google Scholar] [CrossRef]
- Cao, B.; Araujo, A.; Sim, J. Unifying deep local and global features for image search. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 726–743. [Google Scholar]
- Ge, Y.; Yang, Z.H.; Huang, Z.H.; Ye, F.M. A multi-level feature fusion method based on pooling and similarity for HRRS image retrieval. Remote Sens. Lett. 2021, 12, 1090–1099. [Google Scholar] [CrossRef]
- Cao, R.; Zhang, Q.; Zhu, J.S.; Li, Q.; Li, Q.Q.; Liu, B.Z.; Qiu, G.P. Enhancing remote sensing image retrieval using a triplet deep metric learning network. Int. J. Remote Sens. 2020, 41, 740–751. [Google Scholar] [CrossRef]
- Famao, Y.E.; Chen, S.X.; Meng, X.L. Remote sensing image retrieval method based on regression CNN feature fusion. Sci. Surv. Mapp. 2023, 48, 168–176. [Google Scholar]
- Xu, B.; Cen, K.; Huang, J.; Shen, H.; Cheng, X. A Survey on Graph Convolutional Neural Network. Chin. J. Comput. 2020, 43, 755–780. [Google Scholar]
- Chaudhuri, U.; Banerjee, B.; Bhattacharya, A. Siamese graph convolutional network for content based remote sensing image retrieval. Comput. Vis. Image Underst. 2019, 184, 22–30. [Google Scholar] [CrossRef]
- Chaudhuri, U.; Banerjee, B.; Bhattacharya, A.; Datcu, M. Attention-driven graph convolution network for remote sensing image retrieval. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Lu, X.X.; Wang, B.Q.; Zheng, X.T.; Li, X.L. Exploring Models and Data for Remote Sensing Image Caption Generation. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2183–2195. [Google Scholar] [CrossRef]
- Li, X.L.; Zhang, X.T.; Huang, W.; Wang, Q. Truncation Cross Entropy Loss for Remote Sensing Image Captioning. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5246–5257. [Google Scholar] [CrossRef]
- Le, T.M.; Dinh, N.T.; Van, T.T. Developing a model semantic-based image retrieval by combining KD-Tree structure with ontology. Expert Syst. 2023, 18, e13396. [Google Scholar] [CrossRef]
- Kuznetsova, A.; Rom, H.; Alldrin, N.; Uijlings, J.; Krasin, I.; Pont-Tuset, J.; Kamali, S.; Popov, S.; Malloci, M.; Kolesnikov, A.; et al. The open images dataset v4. Int. J. Comput. Vis. 2020, 128, 1956–1981. [Google Scholar] [CrossRef]
- Hybridised, K.C.N. OntoKnowNHS: Ontology Driven Knowledge Centric Novel Hybridised Semantic Scheme for Image Recommendation Using Knowledge Graph. In Proceedings of the Knowledge Graphs and Semantic Web: Third Iberoamerican Conference and Second Indo-American Conference, KGSWC 2021, Kingsville, TX, USA, 22–24 November 2021; p. 138. [Google Scholar]
- Asim, M.N.; Wasim, M.; Khan, M.U.G.; Mahmood, N.; Mahmood, W. The Use of Ontology in Retrieval: A Study on Textual, Multilingual, and Multimedia Retrieval. IEEE Access 2019, 7, 21662–21686. [Google Scholar] [CrossRef]
- Nhi, N.T.U.; Le, T.M.; Van, T.T. A Model of Semantic-Based Image Retrieval Using C-Tree and Neighbor Graph. Int. J. Semant. Web Inf. Syst. 2022, 18, 23. [Google Scholar] [CrossRef]
- Dinh, N.T.; Van, T.T.; Le, T.M. Semantic relationship-based image retrieval using KD-tree structure. In Proceedings of the Asian Conference on Intelligent Information and Database Systems, Ho Chi Minh City, Vietnam, 28–30 November 2022; pp. 455–468. [Google Scholar]
- Dinh, N.T.; Le, T.M.; Van, T.T. An improvement method of KD-Tree using k-means and k-NN for semantic-based image retrieval system. In Proceedings of the World Conference on Information Systems and Technologies, Budva, Montenegro, 12–14 April 2022; pp. 177–187. [Google Scholar]
- Schroeder, B.; Tripathi, S. Structured query-based image retrieval using scene graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 178–179. [Google Scholar]
- Caesar, H.; Uijlings, J.; Ferrari, V. Coco-stuff: Thing and stuff classes in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18 June 2018; pp. 1209–1218. [Google Scholar]
- Wang, S.; Wang, R.; Yao, Z.; Shan, S.; Chen, X. Cross-modal scene graph matching for relationship-aware image-text retrieval. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 1508–1517. [Google Scholar]
- Yoon, S.; Kang, W.Y.; Jeon, S.; Lee, S.; Han, C.; Park, J.; Kim, E.-S. Image-to-image retrieval by learning similarity between scene graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, online event, 19–21 May 2021; pp. 10718–10726. [Google Scholar]
- O’Connor, R.J.; Spalding, A.K.; Bowers, A.W.; Ardoin, N.M. Power and participation: A systematic review of marine protected area engagement through participatory science Methods. Mar. Policy 2024, 163, 106133. [Google Scholar] [CrossRef]
- Liu, C.; Ma, J.; Tang, X.; Liu, F.; Zhang, X.; Jiao, L. Deep hash learning for remote sensing image retrieval. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3420–3443. [Google Scholar] [CrossRef]
- Dubey, S.R. A Decade Survey of Content Based Image Retrieval Using Deep Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2687–2704. [Google Scholar] [CrossRef]
- Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 850–865. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Ding, X.; Guo, Y.; Ding, G.; Han, J. Acnet: Strengthening the kernel skeletons for powerful cnn via asymmetric convolution blocks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1911–1920. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18 June 2018; pp. 7132–7141. [Google Scholar]
- Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Gong, L.; Cheng, Q. Exploiting edge features for graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9211–9219. [Google Scholar]
- Socher, R.; Chen, D.; Manning, C.D.; Ng, A. Reasoning with neural tensor networks for knowledge base completion. Adv. Neural Inf. Process. Syst. 2013, 26. [Google Scholar]
- Li, K.; Wan, G.; Cheng, G.; Meng, L.Q.; Han, J.W. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS-J. Photogramm. Remote Sens. 2020, 159, 296–307. [Google Scholar] [CrossRef]
- Hsieh, T.-I.; Lo, Y.-C.; Chen, H.-T.; Liu, T.-L. One-shot object detection with co-attention and co-excitation. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
- Chen, D.-J.; Hsieh, H.-Y.; Liu, T.-L. Adaptive image transformer for one-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12247–12256. [Google Scholar]
- Yang, H.; Cai, S.; Sheng, H.; Deng, B.; Huang, J.; Hua, X.-S.; Tang, Y.; Zhang, Y. Balanced and hierarchical relation learning for one-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 7591–7600. [Google Scholar]
- Robbins, H.; Monro, S. A stochastic approximation method. Ann. Math. Stat. 1951, 22, 400–407. [Google Scholar] [CrossRef]
- Yuan, Z.Q.; Zhang, W.K.; Fu, K.; Li, X.; Deng, C.B.; Wang, H.Q.; Sun, X. Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval. IEEE Trans. Geosci. Remote Sens. 2022, 60, 19. [Google Scholar] [CrossRef]
- Roy, S.; Sangineto, E.; Demir, B.; Sebe, N. Metric-Learning-Based Deep Hashing Network for Content-Based Retrieval of Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2021, 18, 226–230. [Google Scholar] [CrossRef]
- Song, W.W.; Gao, Z.; Dian, R.W.; Ghamisi, P.; Zhang, Y.J.; Benediktsson, J.A. Asymmetric Hash Code Learning for Remote Sensing Image Retrieval. IEEE Trans. Geosci. Remote Sens. 2022, 60, 14. [Google Scholar] [CrossRef]
Name | ResNet101 | ACSENet
---|---|---
Conv1 | 7 × 7, 64, S2; 3 × 3 max pool, S2 | 7 × 7, 64, S2; 3 × 3 max pool, S2
Conv2_x | |
Conv3_x | |
Conv4_x | |
Conv5_x | |
Experimental Equipment | Parameters
---|---
Operating system | Ubuntu 20.04.2 LTS
CPU | Intel® Core™ i7-13700 CPU @ 2.10 GHz × 16
Memory | 32 GB
GPU | NVIDIA GeForce RTX 3090
Programming language and framework | Python 3.8, PyTorch
Development tools | PyCharm
Category | Class | CoAE [17] | AIT [18] | BHRL [19] | SiamACDet
---|---|---|---|---|---
Visible classes | oil tank | 33.6 | 35.2 | 38.4 | 41.7
 | boat | 7.9 | 8.5 | 8.3 | 9.1
 | plane | 43.8 | 44.2 | 45.6 | 48.0
 | house | 36.3 | 36.2 | 39.5 | 39.3
 | dam | 8.2 | 9.9 | 10.3 | 12.5
 | highway service areas | 10.6 | 13.2 | 11.5 | 16.8
 | gymnasium | 30.9 | 28.7 | 30.6 | 31.0
 | football | 51.2 | 54.6 | 55.2 | 58.1
 | overpass | 13.9 | 14.1 | 19.3 | 14.3
 | wind generator | 9.6 | 13.2 | 11.0 | 18.1
 | bridge | 3.8 | 4.5 | 6.5 | 4.6
 | basketball court | 37.2 | 36.9 | 35.4 | 39.7
 | mAP | 23.9 | 24.9 | 26.0 | 27.8
Invisible classes | golf course | 2.4 | 4.2 | 5.5 | 5.5
 | vehicle | 5.8 | 5.9 | 6.2 | 8.0
 | railway station | 14.8 | 15.1 | 15.9 | 17.4
 | chimney | 16.5 | 17.2 | 17.6 | 18.7
 | mAP | 9.9 | 10.6 | 11.3 | 12.4
Algorithm | Farming Floating Rafts | Oil Tank | mAP (Visible) | Sea Fields | Boat | House | mAP (Invisible)
---|---|---|---|---|---|---|---
CoAE [17] | 8.1 | 30.6 | 19.4 | 2.6 | 8.5 | 36.3 | 15.8
AIT [18] | 9.6 | 32.7 | 21.2 | 2.9 | 9.6 | 36.2 | 16.2
BHRL [19] | 10.1 | 33.6 | 21.9 | 3.1 | 10.0 | 39.4 | 17.5
SiamACDet | 10.5 | 35.4 | 23.0 | 3.5 | 10.2 | 39.4 | 17.7
Algorithm | Precision@1 | Precision@5 | Precision@10
---|---|---|---
AMFMN | 27.6 | 10.2 | 10.3
MiLaN | 20.4 | 8.2 | 9.0
AHCL | 32.4 | 11.5 | 11.4
SGSRSIIR | 37.6 | 13.2 | 12.0
Algorithm | Recall@1 | Recall@5 | Recall@10 |
---|---|---|---|
AMFMN | 6.9 | 12.8 | 25.7 |
MiLaN | 5.1 | 10.3 | 22.4 |
AHCL | 8.1 | 14.4 | 28.5 |
SGSRSIIR | 9.4 | 16.5 | 30.1 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ren, Y.; Zhao, Z.; Jiang, J.; Jiao, Y.; Yang, Y.; Liu, D.; Chen, K.; Yu, G. A Scene Graph Similarity-Based Remote Sensing Image Retrieval Algorithm. Appl. Sci. 2024, 14, 8535. https://doi.org/10.3390/app14188535