Article

Weak-Texture Seafloor and Land Image Matching Using Homography-Based Motion Statistics with Epipolar Geometry

1 School of Computer Science, China University of Geosciences (Wuhan), 388 Lumo Road, Wuhan 430074, China
2 School of Geography and Information Engineering, China University of Geosciences (Wuhan), 388 Lumo Road, Wuhan 430074, China
3 Key Laboratory of Geological Survey and Evaluation of Ministry of Education, 388 Lumo Road, Wuhan 430074, China
4 Land Consolidation and Rehabilitation Center of Zhejiang Province, Stadium Road 498, Hangzhou 310007, China
5 The Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, 104 Youyi Road, Beijing 100086, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2683; https://doi.org/10.3390/rs16142683
Submission received: 26 June 2024 / Revised: 15 July 2024 / Accepted: 19 July 2024 / Published: 22 July 2024
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

The matching of remote sensing images is a critical and necessary procedure that directly impacts the correctness and accuracy of underwater topography, change detection, digital elevation model (DEM) generation, and object detection. The texture of images becomes weaker with increasing water depth, and this results in matching-extraction failure. To address this issue, a novel method, homography-based motion statistics with an epipolar constraint (HMSEC), is proposed to improve the number, reliability, and robustness of matching points for weak-textured seafloor images. In the matching process of HMSEC, a large number of reliable matching points can be identified from the preliminary matching points based on the motion smoothness assumption and motion statistics. Homography and epipolar geometry are also used to estimate the scale and rotation influences of each matching point in image pairs. The results show that the matching-point numbers for the seafloor and land regions can be significantly improved. In this study, we evaluated this method for the areas of Zhaoshu Island, Ganquan Island, and Lingyang Reef and compared the results to those of the grid-based motion statistics (GMS) method. The increment of matching points reached 2672, 2767, and 1346, respectively. In addition, the seafloor matching points had a wider distribution and reached greater water depths of −11.66, −14.06, and −9.61 m. These results indicate that the proposed method could significantly improve the number and reliability of matching points for seafloor images.

1. Introduction

The surveying and mapping of underwater topography for coastal zones, islands, and reefs is one of the most crucial research fields in oceanography and provides key geographic information data for nearshore navigation [1,2,3], ocean geomorphology [4,5], coral reef studies [6,7], and hydrography [8,9]. Currently, underwater topography in coastal zones and island areas is primarily obtained by airborne light detection and ranging (LiDAR) and spaceborne photogrammetry, as the efficiency of shipborne acoustic systems in these areas is extremely low [10,11]. However, the detection area of airborne LiDAR is usually limited by the coverage of the airborne platform [12]. Another mainstream method for underwater topography is spaceborne photogrammetry based on high-resolution multispectral stereo images, which can efficiently obtain expansive underwater topography of coastal, island, and reef areas [13]. In the process of spaceborne photogrammetry, remote sensing image matching is a critical and necessary procedure that directly impacts the correctness and accuracy of underwater topography [14]. Furthermore, image matching is a crucial step in many other applications, with the quality of the match directly influencing the performance of change detection, object detection and tracking, and image stitching. Compared with conventional image matching for land areas [15,16,17], the remote sensing image matching of coastal zones and islands includes areas covered by a water column of varying depth, which decreases the image texture information and significantly increases the matching difficulty [18,19].
The process of remote sensing image matching can be divided into two steps: (1) extraction and matching of the point pairs from the stereo image pairs, and (2) optimization of the matching points [20]. Methods for the extraction and matching of the point pairs can be generally classified into two categories: area-based methods and feature-based methods [21]. Area-based methods, also referred to as correlation-like or template matching, establish correspondence between reference and sensed pixels using a similarity measure. Feature-based methods aim to identify distinctive features in images and then match these features to establish correspondence between reference and sensed images. However, area-based matching methods are only suitable for homologous images with linear radiometric differences and face challenges when used to extract common features between image pairs. Conversely, feature-based matching is more suitable for situations involving significant offsets, geometric distortions, or scale differences between two images [22]. Typical solutions rely on feature detectors, descriptors, and matchers to generate putative correspondences. At present, feature-matching methods, such as the scale-invariant feature transform (SIFT) [23], speeded-up robust features (SURF) [24], accelerated KAZE (AKAZE) [25], and oriented FAST and rotated BRIEF (ORB) [26], are generally performed based on the detection and description of "feature points" by their local intensity or shape patterns.
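The detector–descriptor–matcher pipeline above can be illustrated with a minimal brute-force matcher for binary descriptors such as those produced by ORB or AKAZE. This is an illustrative NumPy sketch, not part of the original pipeline; the cross-check strategy and array shapes are our assumptions:

```python
import numpy as np

def hamming_matrix(d1, d2):
    # XOR then popcount per byte gives the Hamming distance between
    # every pair of binary descriptors (rows of uint8 arrays).
    xor = d1[:, None, :] ^ d2[None, :, :]
    return np.unpackbits(xor, axis=-1).sum(axis=-1)

def match_nn(d1, d2):
    # Nearest-neighbour matching with a cross-check, as a brute-force
    # matcher would do for ORB/AKAZE binary descriptors: a pair (i, j)
    # is kept only if i and j are each other's nearest neighbours.
    dist = hamming_matrix(d1, d2)
    fwd = dist.argmin(axis=1)   # best match in image 2 for each row of d1
    bwd = dist.argmin(axis=0)   # best match in image 1 for each row of d2
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

In practice, libraries such as OpenCV provide equivalent Hamming-distance matchers; the sketch only shows how nearest-neighbour correspondence yields the putative matches that the later filtering stages consume.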
After the extraction and matching of the point pairs, extensive matching points are obtained, including both true and false matches. Therefore, the initial set of matching points should be optimized to remove false matches. However, in limiting false matches, a large portion of true matches are often eliminated as well [27]. Several techniques have been proposed to solve this problem, for example, the ratio test (RT) and GMS [9,28]. The RT identifies high-quality matches by comparing the distances of the two closest candidate points. However, due to the lack of more stringent constraints, many incorrect matches remain difficult to eliminate in complex scenes. GMS is related to optical flow, point-based coherence techniques, and patch-match-based matchers, directly using smoothness to guide match estimation [29,30,31]. However, during the separation of true and false matches with GMS, the scale and rotation of matching points are only detected and estimated by five and eight discrete grid kernels [32,33,34]. In addition, a series of putative matches are included in one grid, in which the different scales and rotations of the matches are averaged. Therefore, a large portion of true matching points may be considered false matches and removed.
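The RT described above can be sketched as follows. This is a hedged illustration assuming a precomputed descriptor distance matrix between the two images; the ratio value 0.8 is a commonly used but assumed default, not one specified in this paper:

```python
import numpy as np

def ratio_test(dist, r=0.8):
    # dist: (n1, n2) matrix of descriptor distances.  A match from row i
    # to its nearest column is kept only when the best distance is
    # clearly smaller than the second best (Lowe's ratio test).
    order = np.argsort(dist, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(dist.shape[0])
    keep = dist[rows, best] < r * dist[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]
```

Matches whose two closest candidates are nearly equidistant are ambiguous and discarded, which is exactly the behaviour the text criticizes: without stronger geometric constraints, confident-but-wrong matches still survive.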
For remote-sensing images of coastlines, islands, and reefs that contain seafloor image pixels, the local self-similarity of the seafloor terrain leads to weak texture features and a lack of salient point features [35]. In addition, the quality of imaging in water areas is affected by waves, solar flares, and water properties [36,37]. As a result, traditional matching methods have low matching accuracy. Feature-matching points that are automatically extracted in areas with poor texture still require a significant amount of manual editing before practical application [38]. To overcome the existing issues with the weak-texture areas of the seafloor in remote sensing images, the homography-based motion statistics with an epipolar constraint (HMSEC) method is proposed. This method can effectively improve the number, reliability, and robustness of matching points for weak-texture seafloor images by translating a large number of putative matches into high-quality matches and using homography and epipolar geometry to estimate the scale and orientation influence of matching points. In combination with ORB, the matching number and accuracy are ensured and enhanced, and the distribution of matching points in the seafloor area is improved with increasing depth. In addition, different experiments were performed using the island and reef images to validate the correctness and accuracy of the proposed method. Furthermore, various matching metrics were used to compare and estimate matching results obtained by the various methods. Finally, the matching points obtained by different methods were projected onto the water-depth image to analyze the distribution of matching results and estimate the detection capability of HMSEC at different water depths.

2. Methodology

The HMSEC method with ORB was adopted to achieve robust feature matching and obtain highly reliable matching points based on the neighborhood information of matching points. The ORB method was used to generate a sufficient number of putative matching points. For each initial putative matching point generated by ORB, the neighborhood information used in HMSEC was used to determine correctness and remove false matching points. In addition, epipolar geometry was used as a constraint with homography-based motion statistics to ensure the reliability of the extracted true matching points. To realize this aim, the proposed method, HMSEC with ORB, has four parts: (1) putative matching, (2) filtering with motion statistics, (3) homography-based scale and orientation adaption, and (4) matching-metrics evaluation. Figure 1 shows the flow chart of HMSEC with ORB.

2.1. Filtering with Motion Statistics

HMSEC relies on the motion smoothness constraint: the assumption that neighboring pixels in one image move together because they often lie on the same object or structure. Although the assumption is not universally correct, e.g., it is violated at image boundaries, it suits most regular pixels. This is sufficient for our purpose, as we are not targeting a final correspondence solution but a set of high-quality correspondences for RANSAC (random sample consensus)-like approaches. The assumption means that neighboring true correspondences in one image are also close in the other image, while false correspondences are not. This allows us to classify a correspondence as true or false by simply counting the number of its similar neighbors, i.e., the correspondences that are close to the reference correspondence in both images.
True matches are influenced by the smoothness constraint, while false matches are not. Let similar neighbors refer to those matches that are close to the reference match in both images. True matches often have more similar neighbors than false matches, as shown in Figure 2. To identify true matches, HMSEC uses the number of similar neighbors.
Let $M$ be the set of all matches across images $I_1$ and $I_2$, and let $m_i$ be one match that connects the points $a_i$ and $b_i$ between the two images. We define the neighbors of $m_i$ as $R_i = \{ m_j \mid m_j \in M,\ m_j \neq m_i,\ d(a_i, a_j) < \delta \}$ and its similar neighbors as $C_i = \{ m_j \mid m_j \in R_i,\ d(b_i, b_j) < \delta \}$, where $d(\cdot,\cdot)$ denotes the Euclidean distance between two points. We term $|C_i|$, the number of elements in $C_i$, the motion support for $m_i$.
The motion support can be used as a discriminative feature to distinguish true and false matches. Modeling the distribution of $|C_i|$ for true and false matches, we obtain Equation (1), in which $B(\cdot,\cdot)$ refers to the binomial distribution, and $|R_i|$ refers to the number of neighbors of $m_i$. The symbol $t$ represents the probability that one true match supports its neighboring pixels, which is close to the correct rate of correspondences. The false-match probability is denoted by $f$ and is usually small, as false correspondences approximately follow a random distribution in regular pixel regions.
$$|C_i| \sim \begin{cases} B(|R_i|,\, t), & \text{if } m_i \text{ is true} \\ B(|R_i|,\, f), & \text{if } m_i \text{ is false} \end{cases} \tag{1}$$

$$E(|C_i|) = \begin{cases} E_t = |R_i| \cdot t, & \text{if } m_i \text{ is true} \\ E_f = |R_i| \cdot f, & \text{if } m_i \text{ is false} \end{cases} \tag{2}$$

$$V(|C_i|) = \begin{cases} V_t = |R_i| \cdot t \cdot (1 - t), & \text{if } m_i \text{ is true} \\ V_f = |R_i| \cdot f \cdot (1 - f), & \text{if } m_i \text{ is false} \end{cases} \tag{3}$$
Furthermore, the expectation of $|C_i|$ can be derived and formulated as Equation (2), and its variance is expressed by Equation (3).
$$S = \frac{|E_t - E_f|}{\sqrt{V_t} + \sqrt{V_f}} = \frac{|R_i| \cdot (t - f)}{\sqrt{|R_i| \cdot t \cdot (1 - t)} + \sqrt{|R_i| \cdot f \cdot (1 - f)}} \tag{4}$$
Then, the separability between true and false matches can be defined as $S$ and described by Equation (4), where $S \propto \sqrt{|R_i|}$, and if $|R_i| \to \infty$, then $S \to \infty$. This indicates that the value of $S$ is greater, and the separability becomes increasingly reliable, when the number of feature points is sufficiently large. This holds even when $t$ is only slightly larger than $f$; as a result, it is possible to obtain reliable matches from difficult scenes by increasing the number of detected feature points.
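Equation (4) can be checked numerically. The following sketch (with assumed example values of $t$, $f$, and $|R_i|$, chosen purely for illustration) confirms that quadrupling the number of neighbors doubles the separability, i.e., $S \propto \sqrt{|R_i|}$:

```python
import math

def separability(n, t, f):
    # S = |E_t - E_f| / (sqrt(V_t) + sqrt(V_f)) for |R_i| = n neighbours,
    # with E and V the binomial mean and variance of Equations (2)-(3).
    return n * (t - f) / (
        math.sqrt(n * t * (1 - t)) + math.sqrt(n * f * (1 - f))
    )

# With t = 0.3 and f = 0.1 (assumed values), S doubles when n goes 100 -> 400.
s100 = separability(100, 0.3, 0.1)
s400 = separability(400, 0.3, 0.1)
```

This is why the method pairs well with ORB: a detector that returns many (even lower-quality) feature points raises $|R_i|$ and therefore the separability of true and false matches.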
$$m_i \in \begin{cases} T, & \text{if } |C_i| > \alpha_i \\ F, & \text{if } |C_i| \le \alpha_i \end{cases} \tag{5}$$

$$\alpha_i = \beta \sqrt{|R_i|} \tag{6}$$
In addition, this shows that improving feature quality ($t$) can also boost separability. These distinctive attributes allow us to decide whether $m_i$ is true or false by thresholding $|C_i|$, as given in Equations (5) and (6), where $T$ and $F$ denote the true and false match sets, respectively, and $\alpha_i$ is the threshold on $|C_i|$. The symbol $\beta$ is a hyperparameter, which is empirically set to range from 4 to 6.
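The filtering rule of Equations (5) and (6) can be sketched as a direct neighborhood count. This minimal NumPy version uses a fixed neighborhood radius `delta` and threshold factor `beta` (both illustrative assumed values) and omits the grid acceleration and homography machinery of the full method:

```python
import math
import numpy as np

def motion_support_filter(pts1, pts2, delta=20.0, beta=5.0):
    # pts1[i] <-> pts2[i] are putative correspondences.  A match m_i is
    # kept when its motion support |C_i| exceeds beta * sqrt(|R_i|),
    # i.e., Equations (5) and (6).
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    d1 = np.linalg.norm(pts1[:, None] - pts1[None, :], axis=-1)
    d2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)
    keep = []
    for i in range(len(pts1)):
        nbrs = (d1[i] < delta) & (np.arange(len(pts1)) != i)   # R_i
        support = int(np.sum(nbrs & (d2[i] < delta)))          # |C_i|
        n = int(nbrs.sum())
        if n and support > beta * math.sqrt(n):
            keep.append(i)
    return keep
```

A coherent cluster of matches supports each of its members, while an isolated false match collects almost no support and is removed.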

2.2. Homography-Based Scale and Orientation Adaptation

In the matching process for two images, significant changes in scale and rotation may exist between the images [39]. To address this issue, multi-scale and multi-rotation solutions are adopted through image homography, which describes a geometric projective transformation from one image space to the other. Images I1 and I2 are shown in Figure 3.
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \frac{1}{s} A \left[ R \mid T \right] \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \lambda H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{7}$$
The homography between the two images is described by the matrix $H$, and the homographic transformation is represented by Equation (7), in which the coordinates in I1 and I2 are represented by $(x, y)$ and $(x', y')$; $(h_{00}, h_{01}, h_{02}, h_{10}, h_{11}, h_{12}, h_{20}, h_{21}, h_{22})$ represent the nine elements of the matrix $H$, where $h_{22}$ equals 1; the symbol $s$ is the scale factor between the two images, and $\lambda$ is the reciprocal of $s$; $A$ is the matrix of the camera's internal parameters; and $R$ and $T$ represent the rotation and translation between the two images.
$$\begin{cases} s x' = h_{00} x + h_{01} y + h_{02} \\ s y' = h_{10} x + h_{11} y + h_{12} \\ s = h_{20} x + h_{21} y + 1 \end{cases} \tag{8}$$

$$x' = \frac{h_{00} x + h_{01} y + h_{02}}{h_{20} x + h_{21} y + 1}, \qquad y' = \frac{h_{10} x + h_{11} y + h_{12}}{h_{20} x + h_{21} y + 1} \tag{9}$$
Through Equations (8) and (9), the scale factor $s$ is eliminated, and the coordinates in image I2 corresponding to the coordinates of a matching point in image I1 can be determined. In this process, the rotation between the two images is taken into account and eliminated by the rotation matrix $R$ that is contained in the matrix $H$.
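Equation (9) amounts to applying the homography in homogeneous coordinates and dividing by the third component. A minimal sketch (the matrix values in the test are assumed purely for illustration):

```python
import numpy as np

def project(H, pts):
    # Apply Equation (9): lift (x, y) to (x, y, 1), multiply by H, then
    # divide by the third homogeneous component to recover (x', y').
    pts = np.asarray(pts, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ np.asarray(H, float).T
    return hom[:, :2] / hom[:, 2:3]
```

For a pure translation homography, the division by $h_{20}x + h_{21}y + 1 = 1$ is trivial; for a general projective $H$, it is this division that removes the scale factor $s$.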
In the homography-based scale and orientation estimation, there are deviations in the coordinates and angles of the matching points. For image I2, the coordinate deviation between each matching point retained after motion statistics and the corresponding point predicted by the homography matrix is statistically characterized by its standard deviation. Two times the standard deviation ($2\sigma_d$) is used as a radius to construct a circle around each point predicted by the homography matrix (the red points in Figure 3). When the corresponding matching point after motion statistics falls outside this circle, such as the green point P1, its deviation is greater than two times the standard deviation of the coordinate deviation, and it is an unreliable matching point. In contrast, the matching points P2 and P3 are reliable matching points and are retained. To improve the matching accuracy over the entire image, the angle deviation is also computed between the line connecting each matching point in I1 with its corresponding matching point in I2 and the line connecting it with the homography-predicted point. Then, two times the standard deviation ($2A_s$) of the angle deviation, shown by the orange dashed line, is calculated and used to evaluate the correctness and reliability of the matching points. Finally, when the coordinate and angle deviations of a matching point exceed the corresponding thresholds of $2\sigma_d$ and $2A_s$, the point is considered unreliable and removed.
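The coordinate part of this $2\sigma_d$ screening can be sketched as follows; the angle screening would follow the same pattern on angle deviations. This is an illustrative simplification, not the authors' exact implementation:

```python
import numpy as np

def filter_by_homography(pts2, pred2, k=2.0):
    # pts2: observed matching points in I2 after motion statistics.
    # pred2: the corresponding points predicted by the homography matrix.
    # A match is kept when its deviation lies within k standard
    # deviations of the scene-wide deviation (k = 2 reproduces the
    # 2*sigma_d circle of Figure 3).
    dev = np.linalg.norm(np.asarray(pts2, float) - np.asarray(pred2, float), axis=1)
    return np.flatnonzero(dev <= k * dev.std()).tolist()
```

Points like P1 in Figure 3, whose observed location falls far outside the predicted circle, get an index outside the returned list and are discarded.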

2.3. Matching-Metrics Evaluation

To evaluate the reliability and accuracy of HMSEC for nearshore remote sensing images, five matching metrics were applied to the results of the various algorithms and their comparisons. This comprehensive set of metrics allows our analysis to be embedded into the existing body of work. The standard metrics were originally proposed for binary-descriptor evaluation and are used to assess the accuracy and reliability of feature-descriptor matching [17]. In addition, the putative match ratio (PMR) and matching score (MS) are used to measure the raw matching performance of each image pair. Furthermore, a set of different metrics, the seafloor match ratio (SMR), seafloor matching number (SMN), and total matching number (TMN), were adopted to evaluate the image-matching results for the nearshore area via comparison with other methods and analysis of the HMSEC characteristics.
The following notations are used to illustrate the metrics in this paper: $M_p$ denotes all putative matches across two images; $F$ denotes all the detected features; $M_{in}$ denotes all inlier matches across two images; $M_e$ denotes all matches across two images with an epipolar constraint; and $M_{eo}$ denotes all seafloor matches across two images with an epipolar constraint. The PMR quantifies the selectivity of the descriptor as the fraction of the detected features initially identified as key-point matches, formulated as $|M_p|/|F|$. This ratio is generally used to estimate the raw matching performance for each image. The MS is formulated as $|M_{in}|/|F|$ and represents the inlier ratio among all detected features, which describes the robustness and transformation efficiency of the feature points. The SMR is defined as $|M_{eo}|/|M_e|$ and represents the percentage of seafloor matching points among all matching points. A higher percentage indicates a better seafloor matching result.
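Given the set sizes defined above, the PMR, MS, and SMR can be computed directly. A minimal sketch with the counts passed as plain integers (the example counts in the test are assumed, not taken from the paper's tables):

```python
def matching_metrics(n_features, n_putative, n_inlier, n_seafloor, n_epipolar):
    # PMR = |Mp|/|F|, MS = |Min|/|F|, SMR = |Meo|/|Me|, all as percentages.
    return {
        "PMR": 100.0 * n_putative / n_features,
        "MS": 100.0 * n_inlier / n_features,
        "SMR": 100.0 * n_seafloor / n_epipolar,
    }
```

SMN and TMN are simply $|M_{eo}|$ and $|M_e|$ themselves, so no extra computation is needed for them.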
In image-matching research, the parameters described above are generally used for the overall evaluation of matching results. In addition, visual inspection is commonly used to compare the locations of matching points with true ground points or to check the relative locations of matching points in the left and right images. However, islands and reefs located far from land are difficult for humans to access, and consequently, matching points in images can rarely be compared in situ with ground points on the seafloor.

3. Study Area and Dataset

3.1. Study Area

This study focused on Zhaoshu Island, Ganquan Island, and Lingyang Reef in the South China Sea. Zhaoshu Island is part of the Qilianyu Islands, which are situated at 16.956°N latitude and 112.318°E longitude in China's Xisha Islands. Ganquan Island and Lingyang Reef are in the Yongle Islands at 111.592°E longitude and 16.523°N latitude. Zhaoshu Island, Ganquan Island, and Lingyang Reef are shown in Figure 4, and their locations are marked with red, blue, and orange stars, respectively. Zhaoshu Island and Ganquan Island include both land and seafloor areas. The WorldView-2 image shows a large reef flat containing various sediments in the shallow water of Zhaoshu Island. However, the sediment situation at Ganquan Island is relatively simple, and its water level is significantly lower than that at Zhaoshu Island. Lingyang Reef is a large atoll with a lagoon in its middle and is completely submerged. The radiation quality of the Zhaoshu Island image is slightly better than that of the other two areas. These three areas were selected to test the performance of the proposed method because they represent significant areas in the South China Sea.

3.2. Data and Experiment

The WorldView-2 satellite provides high-resolution data at a resolution of 0.52 m for panchromatic images and 1.8 m for multispectral images at the sub-satellite (nadir) point. WorldView-2 multispectral stereo image pairs of Zhaoshu Island, Ganquan Island, and Lingyang Reef with less than 10% cloud cover were used. The WorldView-2 multispectral image has 8 bands with a spectral range from 450 to 1040 nm [29]. The calibration accuracy of WorldView-2 images can reach 0.5%, providing high-quality geographic data. The wide coverage and high sensitivity of these datasets make them powerful tools for global ocean measurement and monitoring. The images used in this study are detailed in Table 1. The images were preprocessed with atmospheric corrections using the fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) model in the environment for visualizing images (ENVI) 5.3 [40,41]. A solar glint effect exists in the image of Ganquan Island, which affects feature detection and matching-point extraction. Thus, the solar glint effect was eliminated using a linear regression method with the near-infrared band [42].
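The NIR-based linear-regression de-glinting cited here [42] is commonly implemented in a Hedley-style form: the visible band is regressed against the NIR band, and the glint contribution above the minimum NIR value is subtracted. The following sketch assumes this formulation (regressing over the whole scene rather than a glint-only sample is a simplification of our own):

```python
import numpy as np

def deglint(band, nir, nir_min=None):
    # Hedley-style de-glinting: fit band ~ slope * nir + intercept, then
    # remove the glint component slope * (nir - nir_min) from the band.
    b, n = np.ravel(band).astype(float), np.ravel(nir).astype(float)
    slope = np.polyfit(n, b, 1)[0]
    nir_min = n.min() if nir_min is None else nir_min
    return band - slope * (nir - nir_min)
```

On glint-free pixels (NIR near its minimum) the correction vanishes, so only sun-glint-contaminated pixels are adjusted before feature detection.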
The newly proposed HMSEC with ORB method was used to perform matching for various remote sensing images around the islands and the reef. The matching results were compared with those of frequently used methods to validate the matching capability and accuracy of HMSEC. The experiments were conducted as follows: (1) the HMSEC method was configured with feature detection (ORB-HMSEC) to achieve matching extraction for the three study areas; (2) the matching results of ORB-HMSEC were compared with those from other configured methods based on SIFT, SURF, AKAZE, and ORB with the RT, GMS, and HMSEC; and (3) assessment metrics were used to assess the matching capabilities and accuracies of the various methods and confirm the correctness and accuracy of HMSEC for land and seafloor areas.

4. Results

4.1. Matching-Point Results

To validate and analyze the robustness and accuracy of the proposed matching method, remote sensing images of Zhaoshu Island, Ganquan Island, and Lingyang Reef were used for correspondence acquisition using eight different methods (Table 2). The methods were constructed by combining different matching methods, such as SIFT, SURF, ORB, and AKAZE, with the false-match removal methods, the RT, GMS, and HMSEC. The SIFT and SURF algorithms use difference-of-Gaussian and Hessian-matrix key-point detectors, respectively, both of which rely on image gradient changes. In addition, to ensure the stability of key points and the accuracy of the descriptors, the SIFT and SURF algorithms typically set high thresholds, which filter out many potential key points in weak-texture regions. Consequently, the number of matching points detected in weak-texture regions, especially seafloor areas, is generally low [43,44]; thus, GMS and HMSEC are not suitable for combination with these two methods. The ORB and AKAZE algorithms emphasize matching efficiency by using fast and simple key-point-detection algorithms. This generally results in the retention of more matching points; however, the accuracy is typically lower, and thus, they are suitable for the method proposed in this study. Accordingly, in this study, GMS and HMSEC were combined with ORB and AKAZE. Based on the comparisons between the various methods, the matching reliability and accuracy of HMSEC were evaluated statistically by analyzing the characteristics of land and seafloor areas.
Point matching was achieved for Zhaoshu Island, Ganquan Island, and Lingyang Reef using the methods presented in Table 2. The matching results are shown in Figure 5a–c. The highest number of matching points was obtained using the ORB-HMSEC method (Figure 5a). The results obtained using the SIFT, SURF, ORB-GMS, and AKAZE-HMSEC methods are similar and higher than those obtained using the AKAZE-RT, ORB-RT, and AKAZE-GMS methods. For Ganquan Island, the matching result obtained using the ORB-HMSEC method was better than that obtained using other methods (Figure 5b). The ORB-RT and ORB-GMS methods yielded similar results that are higher than those obtained using the other tested methods. Lingyang Reef is completely submerged. The number of matching points extracted using the ORB-RT, ORB-GMS, and ORB-HMSEC methods are similar and significantly higher than those extracted using other methods (Figure 5c).

4.2. Comparison and Evaluation of the Matching Results

To evaluate the matching correctness and reliability of the ORB-HMSEC method, five matching metrics were computed and compared with those of the other methods. For Zhaoshu Island, Ganquan Island, and Lingyang Reef, the matching metric values obtained using the various methods are listed in Table 3, Table 4 and Table 5, respectively. The metrics obtained using the ORB-HMSEC method were the highest at 53.65, 60.20, 1812, and 3010 for the MS, SMR, SMN, and TMN, respectively, but not for the PMR at 50.44 (Table 3). The corresponding minimum values were obtained using the SIFT, ORB-RT, AKAZE-GMS, AKAZE-GMS, and AKAZE-RT methods. When using the AKAZE-GMS method, no seafloor matching points could be detected; thus, the values of the SMR and SMN are zero.
For Ganquan Island, the MS, SMN, and TMN values obtained using the ORB-HMSEC method were the highest, reaching 34.31, 1291, and 5348, respectively (Table 4). The PMR and SMR values were 31.27 and 24.13, which are in the middle of all the results. The seafloor matching points could not be extracted by the SIFT and SURF methods, so their SMR and SMN values are zero. Using the AKAZE-RT, AKAZE-GMS, and AKAZE-HMSEC methods, the SMN reached only two or three. The TMN value obtained using the AKAZE-GMS method was the lowest, at 90. The PMR and MS values obtained using the ORB-RT method were 30.57 and 20.79, respectively, which are lower than those of the other methods.
For Lingyang Reef, the five metric values were obtained using the various methods (Table 5). The SMR value for each method was 100, and the SMN and TMN values were equal because the reef region covered by the image was entirely submerged. The maximum PMR, MS, SMR, SMN, and TMN values, obtained using the ORB-HMSEC method, were 65.95, 69.61, 100, 6369, and 6369, respectively. The minimum PMR and MS values, obtained using the SIFT method, were 10.29 and 27.70, respectively. The minimum matching-point number, 50, was obtained using the AKAZE-HMSEC method.

4.3. Distributions of Seafloor and Land Matching Points

Underwater image texture becomes weaker with increasing water depth, which results in different distributions and matching results [45,46]. The ORB-GMS method achieved the second-best performance (Table 5). Consequently, we focused on comparing the matching capabilities of ORB-GMS and the proposed algorithm across various water depth intervals. The distributions of the seafloor and land matching points obtained using the ORB-HMSEC method at Zhaoshu Island, Ganquan Island, and Lingyang Reef were compared with the corresponding results of ORB-GMS. To illustrate the weakening of underwater textures and the reduction in the number of matching points with increasing water depth, the matching results of the two methods were overlaid with a bathymetric map to visually demonstrate the differences in the number of matching points at various depths. Distributions and comparisons of the matching points obtained using the ORB-GMS and ORB-HMSEC methods are shown in Figure 6. The white region represents the land on the island above the water surface, and the underwater region with different water depths is represented by different colors. The results for Zhaoshu Island and Ganquan Island reveal that the number of seafloor matching points was higher with the ORB-HMSEC method than the ORB-GMS method. The number of land matching points obtained from the ORB-HMSEC method was also improved. Furthermore, the seafloor point distribution with the ORB-HMSEC method was wider and deeper than that with the ORB-GMS method. Lingyang Reef is completely submerged, and consequently, its matching points are all on the seafloor. The distribution and number of matching points obtained using the ORB-HMSEC method were also higher than those obtained using the ORB-GMS method.
The capability of the ORB-HMSEC method was assessed at various water depths. The statistical distributions of matching-point numbers across water depths at 1 m intervals are compared in Figure 7. The matching points obtained using the ORB-HMSEC method improved at almost every water depth interval, except for the depth interval from 2 to 4 m at Ganquan Island. In this interval, hundreds of matching points were obtained using both methods, and the matching number for GMS was slightly higher than that for HMSEC. For Zhaoshu Island, Ganquan Island, and Lingyang Reef, the maximum water depths of the matching points obtained by GMS are −5.88, −11.44, and −7.12 m, respectively. The corresponding values for HMSEC are −11.66, −14.06, and −9.61 m, and the depth increments reach −5.78, −2.66, and −2.49 m, respectively.
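The per-interval statistics of Figure 7 amount to a 1 m histogram of matching-point water depths. A minimal sketch (depth values are negative below the surface, as in the text; the example depths in the test are assumed for illustration):

```python
import numpy as np

def counts_per_depth(depths, interval=1.0):
    # Count matching points falling in each 1 m water-depth interval,
    # from the deepest point (rounded down) to the surface.
    depths = np.asarray(depths, float)
    edges = np.arange(np.floor(depths.min()),
                      np.ceil(depths.max()) + interval, interval)
    counts, _ = np.histogram(depths, bins=edges)
    return edges, counts
```

Comparing two such histograms (one per method) directly reproduces the per-interval comparison described above, including the deepest occupied bin, i.e., the maximum water depth reached by each method's matches.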

5. Discussion

The results of the various matching metrics (Table 3, Table 4 and Table 5) indicate that the PMR obtained using the ORB-HMSEC method is at the middle level because the number of features detected by ORB is notably larger than that detected by the other methods. The MS value obtained using the ORB-HMSEC method was the highest for the three images, indicating that the conversion rate from detected features to reliable matches was the highest. The TMN values for the three study areas were determined using the various methods. The results show that the ORB-HMSEC method was significantly better than the other tested methods, reaching 3010, 5348, and 6369 for Zhaoshu Island, Ganquan Island, and Lingyang Reef, respectively; relative to the second-best values, the matching numbers improved by 2317, 2767, and 1346, respectively. Thus, the results demonstrate that numerous putative matching points can be transformed efficiently into reliable matching points using the motion smoothness assumption and motion statistics.
Compared with the ORB-GMS method, the ORB-HMSEC method achieved higher SMN and TMN values. In the GMS method, only five relative scales and eight relative rotations between image pairs were defined and considered for the matching estimation of scale and rotation enhancement (Figure 8). Furthermore, the grid-based framework divided two images into non-overlapping cells to reduce the complexity of the calculation, and only cell pairs, not corresponding match points, were classified (Figure 8a,b). The grid kernel was fixed on one image, and eight different grid kernels were used to simulate possible relative rotations [32]. Five different grid kernels were used to simulate the potential relative scales in other images. Therefore, during the matching process, the scale and rotation of the image pairs should be discrete to achieve rough simulations and estimations, but this also results in errors and the elimination of matching points [47,48]. To overcome this problem, a homography-based adaptation method was proposed and used in the HMSEC method. To accurately estimate the scale and rotation of each matching point, coordinate and angle deviations were used (Figure 3). These results demonstrate that ORB-HMSEC has higher filtering and estimating accuracy for matching points, and the number of matching points can be confirmed and improved (Table 6).
In underwater areas, image texture becomes weaker as the water depth increases. With the ORB-HMSEC method, the distribution of the acquired seafloor matching points becomes wider and their number increases (Figure 6 and Figure 7). The maximum water depth of the matching points also extended, reaching −11.66, −14.06, and −9.61 m at Zhaoshu Island, Ganquan Island, and Lingyang Reef, respectively. The number of seafloor matching points first increased with water depth and peaked between −3 and −4 m (Figure 7). In shallower areas ranging from 0 to −2 m, the extent and number of whitecaps and breaking waves increase, which may reduce the number of matching points; beyond the peak, increasing depth weakens the image texture and reduces the matching number. Because the ORB-HMSEC method improved the number of matching points at each depth, both the depth range and the spatial distribution of the underwater points were extended.
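Depth profiles such as those in Figure 7 amount to binning the matched points by bathymetric depth. A small sketch of that bookkeeping, assuming 1 m bins (the bin size is our choice for illustration):

```python
import numpy as np

def depth_histogram(depths, bin_size=1.0):
    """Count matching points per depth bin; depths are negative metres."""
    depths = np.asarray(depths, dtype=float)
    # Bin edges run from the deepest (most negative) depth up to the surface.
    edges = np.arange(np.floor(depths.min()), 0.0 + bin_size, bin_size)
    counts, _ = np.histogram(depths, bins=edges)
    return edges, counts
```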
Image homography was used to eliminate the matching errors caused by multi-scale and multi-rotation changes in each image. In this procedure, a two-standard-deviation threshold on the coordinate and angle deviations was adopted to further reduce the matching error and ensure the reliability and stability of the matching results in most situations. As shown in Figure 9, different coordinate and angle thresholds were used to analyze the trend in the matching-point number. For the study areas of Zhaoshu Island, Ganquan Island, and Lingyang Reef, the values of 2σ_d are 0.96, 1.12, and 0.75 pixels, and the corresponding values of 2A_s are 0.36, 1.44, and 1.80 degrees, respectively. Figure 9 shows that the number of matching points changes relatively smoothly when the coordinate and angle thresholds range from one to two standard deviations. These results demonstrate that the accuracy and reliability of the matching points can be further evaluated and improved using the adaptation constraint with the homography-based scale and orientation. Seafloor texture varies between island and reef locations, and in relatively turbid or deep-water environments the radiometric information and image texture are reduced. In such situations, the thresholds on the coordinate deviation σ_d and angle deviation A_s could be tightened from two standard deviations toward one to ensure matching accuracy and reliability in the ORB-HMSEC method.
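The two-standard-deviation screening described above can be written compactly. In the sketch below, the per-match coordinate and angle deviations (relative to the homography prediction) are assumed to cluster near zero for correct matches, so a root-mean-square sigma is used; the factor k can be lowered from 2 toward 1 in turbid or deep water. Function and parameter names are our illustrative choices:

```python
import numpy as np

def deviation_filter(coord_dev, angle_dev, k=2.0):
    """Keep matches whose coordinate and angle deviations both fall
    within k standard deviations (k = 2 by default)."""
    coord_dev = np.asarray(coord_dev, dtype=float)
    angle_dev = np.asarray(angle_dev, dtype=float)
    sigma_d = np.sqrt(np.mean(coord_dev**2))  # RMS coordinate deviation (pixels)
    sigma_a = np.sqrt(np.mean(angle_dev**2))  # RMS angle deviation (degrees)
    # Boolean mask over the matches: True = keep.
    return (np.abs(coord_dev) <= k * sigma_d) & (np.abs(angle_dev) <= k * sigma_a)
```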

6. Conclusions

This study proposes a novel matching method, ORB-HMSEC, based on homography-based motion statistics. The proposed method efficiently transforms numerous putative matching points into reliable matching points using the motion smoothness assumption and motion statistics. In this process, the scale and rotation of each matching point in the image pairs are estimated accurately to improve the accuracy and reliability of the matching points. According to the experimental results and comparisons with other methods, ORB-HMSEC efficiently improves the reliability and number of matching points both on the seafloor and in land regions. The increment of matching points reached 2672, 2767, and 1346 for the WorldView-2 multispectral images of Zhaoshu Island, Ganquan Island, and Lingyang Reef, respectively. Furthermore, the seafloor matching points of ORB-HMSEC had wider distributions and reached greater water depths of −11.66, −14.06, and −9.61 m, respectively. Thus, the ORB-HMSEC method is highly reliable and yields a large number of matching points in the study areas. In the future, we plan to conduct matching experiments with various remote sensing images to further explore the potential of the proposed method in different environments.

Author Contributions

Conceptualization, Y.C. and Y.L.; methodology, Y.C. and Y.L.; software, L.W., D.Z. and Q.Z.; validation, X.Z. and L.L.; formal analysis, Y.C. and Y.L.; investigation, Q.Z.; resources, L.W.; data curation, L.W., D.Z. and Q.Z.; writing—original draft preparation, Y.C. and Y.L.; writing—review and editing, Y.C., Y.L., L.W. and D.Z.; visualization, L.W.; supervision, Y.C. and Y.L.; project administration, Y.L. and Y.C.; funding acquisition, Y.C. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 42171373), the Special Fund of Hubei Luojia Laboratory (Grant No. 220100035), the Key Laboratory of Geological Survey and Evaluation of Ministry of Education (Grant No. GLAB2023ZR09), the Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing (Grant No. KLIGIP-2022-B04, KLIGIP-2022-C01), the Open Research Program of the International Research Center of Big Data for Sustainable Development Goals (Grant No. CBAS2023ORP03), the National Key Research and Development Program of China (Grant No. 2023YFB3907700), and the Postdoctoral Fellowship Program of CPSF (Grant No. GZC20232463).

Data Availability Statement

Data underlying the results presented in this paper can be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abaspur Kazerouni, I.; Dooly, G.; Toal, D. Underwater image enhancement and mosaicking system based on A-KAZE feature matching. J. Mar. Sci. Eng. 2020, 8, 449. [Google Scholar] [CrossRef]
  2. Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference 2013, Bristol, UK, 9–13 September 2013. [CrossRef]
  3. Anderson, G.P.; Felde, G.W.; Hoke, M.L.; Ratkowski, A.J.; Cooley, T.W.; Chetwynd, J.H., Jr.; Gardner, J.A.; Adler-Golden, S.M.; Matthew, M.W.; Berk, A. MODTRAN4-based atmospheric correction algorithm: FLAASH (fast line-of-sight atmospheric analysis of spectral hypercubes). In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII; International Society for Optics and Photonics: Bellingham, WA, USA, 2002; Volume 4725, pp. 65–71. [Google Scholar]
  4. Baker, S.; Gross, R.; Matthews, I. Lucas-Kanade 20 years on: A unifying framework. Int. J. Comput. Vis. 2004, 56, 221–255. [Google Scholar] [CrossRef]
  5. Barandiaran, I.; Graña, M.; Nieto, M. An empirical evaluation of interest point detectors. Cybern. Syst. 2013, 44, 98–117. [Google Scholar] [CrossRef]
  6. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  7. Beall, C.; Lawrence, B.J.; Ila, V.; Dellaert, F. 3D reconstruction of underwater structures. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4418–4423. [Google Scholar]
  8. Bekerman, Y.; Avidan, S.; Treibitz, T. Unveiling optical properties in underwater images. In Proceedings of the 2020 IEEE International Conference on Computational Photography (ICCP), St. Louis, MO, USA, 24–26 April 2020; pp. 1–12. [Google Scholar]
  9. Bian, J.W.; Lin, W.Y.; Liu, Y.; Zhang, L.; Yeung, S.K.; Cheng, M.M.; Reid, I. GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence. Int. J. Comput. Vis. 2020, 128, 1580–1593. [Google Scholar] [CrossRef]
  10. Bianco, S.; Ciocca, G.; Marelli, D. Evaluating the performance of structure from motion pipelines. J. Imaging 2018, 4, 98. [Google Scholar] [CrossRef]
  11. Yang, F.; Qi, C.; Su, D.; Ding, S.; He, Y.; Ma, Y. An airborne LiDAR bathymetric waveform decomposition method in very shallow water: A case study around Yuanzhi Island in the South China Sea. Int. J. Appl. Earth Obs. Geoinf. 2022, 109, 102788. [Google Scholar] [CrossRef]
  12. Saylam, K.; Brown, R.A.; Hupp, J.R. Assessment of depth and turbidity with airborne Lidar bathymetry and multiband satellite imagery in shallow water bodies of the Alaskan North Slope. Int. J. Appl. Earth Obs. Geoinf. 2017, 58, 191–200. [Google Scholar] [CrossRef]
  13. Nocerino, E.; Menna, F. Photogrammetry: Linking the world across the water surface. J. Mar. Sci. Eng. 2020, 8, 128. [Google Scholar] [CrossRef]
  14. Harris, P.T.; Macmillan-Lawler, M.; Rupp, J.; Baker, E.K. Geomorphology of the oceans. Mar. Geol. 2014, 352, 4–24. [Google Scholar] [CrossRef]
  15. Heap, A.D.; Harris, P.T. Geomorphology of the Australian margin and adjacent seafloor. Aust. J. Earth Sci. 2008, 55, 555–585. [Google Scholar] [CrossRef]
  16. Hedley, J.D.; Roelfsema, C.M.; Chollett, I.; Harborne, A.R.; Heron, S.F.; Weeks, S.J.; Skirving, W.J.; Strong, A.E.; Mark Eakin, C.; Christensen, T.R.L.; et al. Remote sensing of coral reefs for monitoring and management: A review. Remote Sens. 2016, 8, 118. [Google Scholar] [CrossRef]
  17. Heinly, J.; Dunn, E.; Frahm, J.M. Comparative Evaluation of Binary Features; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar] [CrossRef]
  18. Jin, Y.; Mishkin, D.; Mishchuk, A.; Matas, J.; Fua, P.; Yi, K.M.; Trulls, E. Image matching across wide baselines: From paper to practice. Int. J. Comput. Vis. 2021, 129, 517–547. [Google Scholar] [CrossRef]
  19. Kay, S.; Hedley, J.D.; Lavender, S. Sun glint correction of high and low spatial resolution images of aquatic scenes: A review of methods for visible and near-infrared wavelengths. Remote Sens. 2009, 1, 697–730. [Google Scholar] [CrossRef]
  20. Yan, L.; Roy, D.P.; Zhang, H.; Li, J.; Huang, H. An automated approach for sub-pixel registration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi Spectral Instrument (MSI) imagery. Remote Sens. 2016, 8, 520. [Google Scholar] [CrossRef]
  21. Skakun, S.; Roger, J.C.; Vermote, E.F.; Masek, J.G.; Justice, C.O. Automatic sub-pixel co-registration of Landsat-8 Operational Land Imager and Sentinel-2A Multi-Spectral Instrument images using phase correlation and machine learning based mapping. Int. J. Digit. Earth 2017, 10, 1253–1269. [Google Scholar] [CrossRef]
  22. Zhu, B.; Zhou, L.; Pu, S.; Fan, J.; Ye, Y. Advances and challenges in multimodal remote sensing image registration. IEEE J. Miniaturization Air Space Syst. 2023, 4, 165–174. [Google Scholar] [CrossRef]
  23. Lin, W.; Liu, S.; Matsushita, Y.; Ng, T.; Cheong, L. Smoothly varying affine stitching. In Proceedings of the Conference on Computer Vision and Pattern Recognition 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 345–352. [Google Scholar]
  24. Lin, W. CODE: Coherence based decision boundaries for feature correspondence. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 34–47. [Google Scholar] [CrossRef]
  25. Lindeberg, T. Feature detection with automatic scale selection. Int. J. Comput. Vis. Pattern Recognit. 1998, 30, 79–116. [Google Scholar] [CrossRef]
  26. Lipman, Y.; Yagev, S.; Poranne, R.; Jacobs, D.W.; Basri, R. Feature matching with bounded distortion. ACM Trans. Graph. 2014, 33, 1–14. [Google Scholar] [CrossRef]
  27. Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. Int. Jt. Conf. Comput. Vis. Comput. Graph. Theory Appl. 2009, 2, 331–340. [Google Scholar]
  28. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  29. Padwick, C.; Deskevich, M.; Pacifici, F.; Smallwood, S. WorldView-2 pan-sharpening. In Proceedings of the American Society for Photogrammetry and Remote Sensing Annual Conference 2010: Opportunities for Emerging Geospatial Technologies, San Diego, CA, USA, 26 April 2010; Volume 2, pp. 740–753. [Google Scholar]
  30. Papakonstantinou, A.; Kavroudakis, D.; Kourtzellis, Y.; Chtenellis, M.; Kopsachilis, V.; Topouzelis, K.; Vaitis, M. Mapping cultural heritage in coastal areas with UAS: The case study of Lesvos island. Heritage 2019, 2, 1404–1422. [Google Scholar] [CrossRef]
  31. Pizarro, D.; Bartoli, A. Feature-based deformable surface detection with self-occlusion reasoning. Int. J. Comput. Vis. 2012, 97, 54–70. [Google Scholar] [CrossRef]
  32. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  33. Sarlin, P.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. Superglue: Learning feature matching with graph neural networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 4938–4947. [Google Scholar]
  34. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527. [Google Scholar] [CrossRef]
  35. Sedaghat, A.; Ebadi, H. Distinctive order based self-similarity descriptor for multi-sensor remote sensing image matching. ISPRS J. Photogramm. Remote Sens. 2015, 108, 62–71. [Google Scholar] [CrossRef]
  36. Sedaghat, A.; Ebadi, H. Remote sensing image matching based on adaptive binning SIFT descriptor. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5283–5293. [Google Scholar] [CrossRef]
  37. Shah, R.; Srivastava, V.; Narayanan, P.J. Geometry-aware feature matching for structure from motion applications. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 278–285. [Google Scholar]
  38. Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; Zhou, X. LoFTR: Detector-free local feature matching with transformers. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 8922–8931. [Google Scholar]
  39. Zhao, J.H.; Wang, A.X. Precise marine surveying and data processing technology and their progress of application. Hydrogr. Surv. Chart 2015, 35, 1–6. [Google Scholar]
  40. Zhang, Y.; Zhang, Y.; Mo, D.; Zhang, Y.; Li, X. Direct digital surface model generation by semi-global vertical line locus matching. Remote Sens. 2017, 9, 214. [Google Scholar] [CrossRef]
  41. Wu, B.; Zhang, Y.; Zhu, Q. Integrated point and edge matching on poor textural images constrained by self-adaptive triangulations. ISPRS J. Photogramm. Remote Sens. 2012, 68, 40–55. [Google Scholar] [CrossRef]
  42. Kanade, T.; Okutomi, M. A stereo matching algorithm with an adaptive window: Theory and experiment. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 920–932. [Google Scholar] [CrossRef]
  43. Ye, L.; Wu, B. Integrated image matching and segmentation for 3D surface reconstruction in urban areas. Photogramm. Eng. Remote Sens. 2018, 84, 135–148. [Google Scholar] [CrossRef]
  44. Hu, H.; Rzhanov, Y.; Hatcher, P.J.; Bergeron, R.D. Binary adaptive semi-global matching based on image edges. In Proceedings of the Seventh International Conference on Digital Image Processing (ICDIP 2015), Los Angeles, USA, 9–10 April 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9631, pp. 306–312. [Google Scholar]
  45. Zhang, F.; Bian, H.; Lv, Z.; Zhai, Y. Ring-Masked Attention Network for Rotation-Invariant Template-Matching. IEEE Signal Process. Lett. 2023, 30, 289–293. [Google Scholar] [CrossRef]
  46. Zhang, F.; Bian, H.; Ge, W.; Wei, M. Exploiting Deep Matching and Underwater Terrain Images to Improve Underwater Localization Accuracy. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  47. Cui, S.; Ma, A.; Zhang, L.; Xu, M.; Zhong, Y. MAP-Net: SAR and Optical Image Matching via Image-Based Convolutional Network with Attention Mechanism and Spatial Pyramid Aggregated Pooling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  48. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1362–1376. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow chart of HMSEC with ORB.
Figure 2. Diagram of motion smoothness statistics. Subgraphs (a,b) represent the left and right images. The red points and yellow circles represent the match points and neighbors, and the brown and blue lines represent the matches between the two images.
Figure 3. Diagram of the homography-based scale and orientation estimation. Subgraphs (a,b) represent the left and right images. The green points represent the match points, and the brown and yellow lines represent the matches between two images.
Figure 4. Zhaoshu Island, Ganquan Island, and Lingyang Reef are indicated by red, blue, and orange stars in (a), and the WorldView-2 images for these areas are displayed in (b), (c), and (d), respectively.
Figure 5. Matching results for (a) Zhaoshu Island, (b) Ganquan Island, and (c) Lingyang Reef using a variety of methods.
Figure 6. The matching distributions with the ORB-GMS method at (a) Zhaoshu Island, (c) Ganquan Island, and (e) Lingyang Reef, and the corresponding distributions of ORB-HMSEC in (b,d,f).
Figure 7. Distributions of the matching-point numbers at different water depths for (a) Zhaoshu Island, (b) Ganquan Island, and (c) Lingyang Reef. The orange and blue curves indicate the matching-point numbers at various water depths obtained by ORB-HMSEC and ORB-GMS, respectively. The content in the red dashed rectangle is locally magnified and represented in detail.
Figure 8. Grid kernels for eight relative rotations (a) and five relative scales (b) used in GMS.
Figure 9. Change trend of matching-point numbers using various values of the coordinate and angle standard deviations in the ORB-HMSEC method at the three study areas. Subgraphs (a–c) represent the change in matching-point numbers with various coordinate thresholds when the angle deviation is fixed at 2A_s, and subgraphs (d–f) represent the change with various angle thresholds when the coordinate deviation is fixed at 2σ_d.
Table 1. Zhaoshu Island, Ganquan Island, and Lingyang Reef image data.
| Study Area | Acquisition Time | Image Name | Resolution (m) |
|---|---|---|---|
| Zhaoshu Island | 11 March 2017 | 17MAR11030327-M2AS-056861369020_01_P001 | 1.8 |
| | | 17MAR11030446-M2AS-056861369020_01_P001 | |
| Ganquan Island | 2 April 2014 | 14APR02033318-M2AS-054801903010_01_P001 | |
| | | 14APR02033200-M2AS-054801903010_01_P001 | |
| Lingyang Reef | 2 April 2014 | 14APR02033159-M2AS-056861369050_01_P002 | |
| | | 14APR02033318-M2AS-056861369050_01_P002 | |
Table 2. Configuration methods for the extraction of matching points.
| Matching Method | False-Match Filter | Method Configuration |
|---|---|---|
| SIFT | RT | SIFT-RT |
| SURF | RT | SURF-RT |
| AKAZE | RT | AKAZE-RT |
| | GMS | AKAZE-GMS |
| | HMSEC | AKAZE-HMSEC |
| ORB | RT | ORB-RT |
| | GMS | ORB-GMS |
| | HMSEC | ORB-HMSEC |
Table 3. Comparison of the matching metrics obtained using different matching methods for Zhaoshu Island.
| Zhaoshu Island | PMR (%) | MS (%) | SMR (%) | SMN | TMN |
|---|---|---|---|---|---|
| SIFT | 40.15 | 36.91 | 57.58 | 399 | 693 |
| SURF | 49.85 | 40.68 | 57.02 | 256 | 449 |
| AKAZE-RT | 54.08 | 43.43 | 10 | 2 | 20 |
| ORB-RT | 48.11 | 34.31 | 4.94 | 217 | 344 |
| AKAZE-GMS | 54.09 | 42.44 | 0 | 0 | 25 |
| AKAZE-HMSEC | 53.57 | 43.79 | 43.44 | 182 | 419 |
| ORB-GMS | 48.11 | 40.54 | 26.92 | 91 | 338 |
| ORB-HMSEC | 50.44 | 53.65 | 60.20 | 1812 | 3010 |
Table 4. Comparison of the matching metrics obtained using different matching methods for Ganquan Island.
| Ganquan Island | PMR (%) | MS (%) | SMR (%) | SMN | TMN |
|---|---|---|---|---|---|
| SIFT | 39.17 | 32.72 | 0 | 0 | 124 |
| SURF | 37.66 | 21.75 | 0 | 0 | 144 |
| AKAZE-RT | 56.35 | 31.22 | 2.35 | 2 | 85 |
| ORB-RT | 30.57 | 20.79 | 39.10 | 646 | 1652 |
| AKAZE-GMS | 57.01 | 33.18 | 3.33 | 3 | 90 |
| AKAZE-HMSEC | 56.54 | 32.36 | 1.37 | 3 | 219 |
| ORB-GMS | 31.89 | 33.22 | 44.32 | 1144 | 2581 |
| ORB-HMSEC | 31.27 | 34.31 | 24.13 | 1291 | 5348 |
Table 5. Comparison of the matching metrics obtained using different matching methods for Lingyang Reef.
| Lingyang Reef | PMR (%) | MS (%) | SMR (%) | SMN | TMN |
|---|---|---|---|---|---|
| SIFT | 10.29 | 27.70 | 100 | 210 | 210 |
| SURF | 28.70 | 54.90 | 100 | 227 | 227 |
| AKAZE-RT | 20.89 | 38.61 | 100 | 60 | 60 |
| ORB-RT | 64.36 | 57.76 | 100 | 4821 | 4821 |
| AKAZE-GMS | 20.89 | 37.34 | 100 | 58 | 58 |
| AKAZE-HMSEC | 20.91 | 34.81 | 100 | 50 | 50 |
| ORB-GMS | 64.36 | 59.29 | 100 | 5023 | 5023 |
| ORB-HMSEC | 65.95 | 69.61 | 100 | 6369 | 6369 |
Table 6. Increments of matching points between HMSEC and GMS.
| Study Area | Land Points | Underwater Points | Total Points |
|---|---|---|---|
| Zhaoshu Island | 951 | 1721 | 2672 |
| Ganquan Island | 2620 | 147 | 2767 |
| Lingyang Reef | – | 1346 | 1346 |
