Estimation of Positions and Poses of Autonomous Underwater Vehicle Relative to Docking Station Based on Adaptive Extraction of Visual Guidance Features
Abstract
1. Introduction
2. Model Building for PPARD Estimation Based on Camera Imaging Principle
3. Guidance Feature Extraction Based on Adaptive Threshold Image Binarization
- (1)
- Filtering could not guarantee the smoothness of the intensity surface in Figure 3, so noise points were extracted as wave peaks alongside the guidance lamps. Extracting the wave peaks therefore yielded a large number of connected regions, far exceeding the number of guidance lamps.
- (2)
- To test whether the noise could be eliminated by adjusting the wave-peak extraction threshold tb, tb was gradually increased from a small value (100) to 200. The noise persisted throughout this range, and at tb = 200 a guidance lamp was eliminated as well. As shown in Figure 5, not all guidance lamps could be extracted, and noise points remained in the figure. Therefore, no setting of tb could extract all the guidance lamps without noise (a sketch of this threshold sweep follows the list).
- (3)
- The extracted connected regions were very small, each close to a single point, so morphological processing could not be used to remove the noise. This hindered the localization of the guidance lamps.
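To make this behavior concrete, below is a minimal sketch of wave-peak extraction with a threshold sweep. It is not the authors' implementation: the neighborhood size, the use of SciPy, and the smoothed random stand-in image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_wave_peaks(gray: np.ndarray, tb: float) -> np.ndarray:
    # A pixel counts as a wave peak if it equals the local maximum of its
    # neighborhood and its value exceeds the extraction threshold tb.
    local_max = ndimage.maximum_filter(gray, size=5) == gray
    return local_max & (gray > tb)

def count_peak_regions(peaks: np.ndarray) -> int:
    # Count 8-connected regions among the extracted peak pixels.
    _, num_regions = ndimage.label(peaks, structure=np.ones((3, 3)))
    return num_regions

# Sweep tb from 100 to 200 as in the test described above; a real guidance
# image would be loaded here instead of this smoothed random stand-in.
gray = ndimage.gaussian_filter(np.random.rand(480, 640) * 255, sigma=3)
for tb in (100, 125, 150, 175, 200):
    n = count_peak_regions(extract_wave_peaks(gray, tb))
    print(f"tb={tb}: {n} peak regions (only 4 guidance lamps expected)")
```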
4. Preprocessing Method of Underwater Guidance Images
4.1. Underwater Guidance Image Enhancement
- (1)
- Adjacent guidance lamps in the image should be clearly separated; that is, the peak values of any two adjacent lamps should differ significantly from the trough between them.
- (2)
- Guidance lamps should present the characteristics of strong light sources.
- (3)
- The highlighted region of each guidance lamp should cover an adequate area, not merely a single point, so that morphological filtering does not remove the lamps.
- (4)
- The brightness of all the guidance lamps should be similar (a hedged enhancement sketch follows this list).
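A minimal sketch of an enhancement step aimed at requirements (1)–(4) is shown below. It is not the paper's enhancement algorithm: emphasizing the blue channel (the lamps are strong blue light sources), blurring to give each lamp an area, and global contrast stretching are assumed stand-ins.

```python
import cv2
import numpy as np

def enhance_guidance_image(bgr: np.ndarray) -> np.ndarray:
    blue = bgr[:, :, 0].astype(np.float32)    # OpenCV channel order is B, G, R
    blue = cv2.GaussianBlur(blue, (5, 5), 0)  # gives each lamp an area, req. (3)
    blue -= blue.min()                        # stretch contrast so lamps stand out
    if blue.max() > 0:                        # as strong, similarly bright sources,
        blue *= 255.0 / blue.max()            # toward reqs (1), (2), and (4)
    return blue.astype(np.uint8)
```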
4.2. Segmentation of Guidance Lamp Region
- (1)
- Average intensity lm;
- (2)
- Gradient mean dm;
- (3)
- Number of strong boundary pixels b (a sketch computing these three features follows this list).
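As a rough illustration, the three block-texture features could be computed as below; the Sobel operator and the strong-boundary threshold are assumptions, not the paper's exact definitions.

```python
import cv2
import numpy as np

def block_features(gray_block: np.ndarray, edge_thresh: float = 50.0):
    block = gray_block.astype(np.float32)
    lm = float(block.mean())                   # average intensity l_m
    gx = cv2.Sobel(block, cv2.CV_32F, 1, 0)    # horizontal gradient
    gy = cv2.Sobel(block, cv2.CV_32F, 0, 1)    # vertical gradient
    grad = np.sqrt(gx ** 2 + gy ** 2)
    dm = float(grad.mean())                    # gradient mean d_m
    b = int((grad > edge_thresh).sum())        # strong boundary pixel count b
    return lm, dm, b
```

Blocks whose features resemble those of the guidance lamp region would then be kept as the segmented target area.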
5. Experimental Verification
5.1. PPARD Estimation Steps Based on Image Processing
- (1)
- Acquire a guidance image from the camera.
- (2)
- Preprocess the guidance image, including image enhancement and image segmentation.
- (3)
- Binarize the image using the adaptive threshold binarization method proposed in Section 3, then extract the guidance features and calculate their pixel coordinates.
- (4)
- Input the pixel coordinates and actual coordinates of the guidance features into Equation (7) to solve for PPARD (a hedged sketch of this step follows the list).
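The following is a hedged sketch of step (4) only. The paper minimizes its own imaging error function (Equation (7)); cv2.solvePnP, which minimizes reprojection error, is used here purely as an analogous stand-in, and the camera matrix K is assumed to come from calibration.

```python
import cv2
import numpy as np

def estimate_ppard(pixel_pts, object_pts, K, dist_coeffs=None):
    # pixel_pts: Nx2 pixel coordinates of the guidance lamps (from step (3));
    # object_pts: Nx3 actual coordinates of the lamps on the docking station.
    object_pts = np.asarray(object_pts, dtype=np.float64).reshape(-1, 3)
    pixel_pts = np.asarray(pixel_pts, dtype=np.float64).reshape(-1, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(object_pts, pixel_pts, K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the relative pose
    return R, tvec              # tvec holds the relative position
```

The rotation matrix R can then be decomposed into the three rotation angles θx, θy, and θz reported in Section 5.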
5.2. Experimental Equipment
5.3. Accuracy Analysis of PPARD Estimation
5.4. Comparison of Proposed Method and OI Method
5.5. Analysis of Guidance Feature Extraction
5.5.1. Success Rate of Guidance Feature Extraction
5.5.2. Comparison of Adaptive Threshold Image Binarization and Classical Binarization Methods
6. Conclusions and Discussion
- (1)
- This study derived the imaging error function of the guidance features from the camera imaging principle. The PPARD estimation model was built by minimizing this function.
- (2)
- The characteristics of the guidance images were analyzed. The guidance lamps were found to be strong blue point light sources. Based on this characteristic, an adaptive threshold binarization method for the guidance image was proposed in this study. The guidance features were extracted by combining image binarization and morphological processing.
- (3)
- To improve the robustness of guidance feature extraction, this study proposed an enhancement method for guidance images. In the enhanced image, it became more obvious that the guidance lamps were strong point light sources covering a certain pixel area. In addition, the texture features of the guidance image were analyzed and used to segment the guidance target area. Together, image enhancement and segmentation reduced the guidance feature extraction failures caused by changes in the photographing environment and in the camera's position and pose.
- (4)
- A PPARD estimation test platform was designed in this study. The experimental results showed that the estimation accuracy of the proposed method was better than that of the OI method. The absolute errors of the three rotation angle estimations were less than 1°; the position estimation errors along the x- and y-axes were less than 80 mm; and the position estimation error along the z-axis was less than 140 mm. In the comprehensive evaluation indices, the rotation error was less than 1.3° and the translation error was less than 8%. Furthermore, the feature extraction method proposed in this study was compared with fixed thresholding, Otsu thresholding, HATS, and HSTS on the datasets Drecovery and Dexperiment. The results showed that the proposed guidance feature extraction method yielded better environmental adaptability and a higher success rate. (A small sketch of the error indicators used in this comparison follows.)
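For reference, a minimal sketch of the three error indicators reported in the comparison table (mean, maximum, and root-mean-squared absolute error); the sample values below are made up for illustration.

```python
import numpy as np

def error_stats(estimates, truths):
    # Absolute estimation error for one DOF over a test sequence.
    err = np.abs(np.asarray(estimates, float) - np.asarray(truths, float))
    return err.mean(), err.max(), np.sqrt(np.mean(err ** 2))

# e.g. theta_x estimates vs. ground truth (illustrative numbers only)
mean_e, max_e, rmse = error_stats([0.4, -4.6, 0.3], [0.0, -5.0, 0.0])
```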
- (1)
- Limitations. This method requires the brightness of the guidance lamps to be similar; a large brightness difference can easily cause guidance feature extraction to fail, in which case PPARD estimation cannot be carried out. If one or two guidance lamps are too dark, the pixel values at the corresponding imaging positions are low, and the adaptive threshold calculated using Equation (9) may exceed the maximum pixel value of those lamps, so they are lost in the binary image. If one or two guidance lamps are too bright, the area between adjacent lamps may be overexposed and the lamps merge into one connected region, making it impossible to extract them individually. Either case invalidates the PPARD estimation method.
- (2)
- Application extension. This method was proposed to address the problem of AUV underwater docking guidance and will be applied to AUV docking operations in the future. Its main function is to estimate the PPARD. With further research and development, it can be applied to other scenarios that require estimating the position and pose of an AUV relative to a target object. For example, in an AUV cluster, AUVs carrying guidance lamps can use this method to estimate the relative positions and poses within the group. Likewise, for oilfield pipeline intervention operations, guidance lamps can be mounted on a pipeline valve to estimate the position and pose of the AUV relative to the valve, and the manipulator installed on the AUV can then be controlled to turn the valve.
- (1)
- The prerequisite for feature extraction in this method is that the four guidance lamps in the image do not merge into one large overexposed spot; there must be a non-overexposed region between them. The camera parameters, the size of the docking station, and the power of the guidance lamps all affect the success of feature extraction. The valid range of camera-to-docking-station distances, and whether the algorithm performs well within it, must therefore be analyzed for each specific docking application. In the future, this method will be used in actual AUV docking scenarios, where multiple experimental analyses will be required to determine the distances suitable for feature extraction and PPARD estimation.
- (2)
- In the experimental results, most guidance lamp extraction failures occurred because not all of the guidance lamps were recovered. For such cases, the PPARD estimation algorithm needs further study so that it can still run when the guidance lamps are extracted incompletely. In addition, pseudo guidance lamps appeared in some failed extractions; identifying guidance lamps and rejecting pseudo lamps through image classification needs to be studied in the future.
- (3)
- The vision-based PPARD estimation proposed in this study achieved high accuracy. Future work on the docking of the AUV with the docking station involves applying it to AUV docking control, where it can provide feedback information and help the AUV dock with high precision and a high success rate. Furthermore, lake and ocean docking experiments will be carried out to test the performance of the PPARD estimation and the AUV docking control.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Jiang, X.S.; Feng, X.S.; Wang, L.T. Unmanned Underwater Vehicles; Liaoning Science and Technology Publishing House: Shenyang, China, 2000.
- Wynn, R.B.; Huvenne, V.A.I.; Le Bas, T.P.; Murton, B.J. Autonomous underwater vehicles (AUVs): Their past, present and future contributions to the advancement of marine geoscience. Mar. Geol. 2014, 352, 451–468.
- Pang, G.; Huang, M.M.; Song, B.W.; Hu, H.B.; Du, X.X. Current situation and development trend of AUV recovery technology. Torpedo Technol. 2008, 16, 10–14.
- Jaffre, F.M.; Austin, T.C.; Allen, B.G.; Stokey, R.; Alt, C.J.V. Ultra short baseline acoustic receiver/processor. In Proceedings of the Europe Oceans 2005, Brest, France, 20–23 June 2005.
- Stokey, R.; Allen, B.; Austin, T.; Goldsborough, R. Enabling technologies for REMUS docking: An integral component of an autonomous ocean-sampling network. IEEE J. Ocean. Eng. 2001, 26, 487–497.
- Park, J.Y.; Jun, B.H.; Lee, P.M.; Lim, Y.K.; Oh, J.H. Docking problem and guidance laws considering drift for an underactuated AUV. In Proceedings of the Europe Oceans 2011, Santander, Spain, 6–9 June 2011.
- Palomeras, N.; Vallicrosa, G.; Mallios, A.; Bosch, J. AUV homing and docking for remote operations. Ocean Eng. 2018, 154, 106–120.
- Li, L. Research on Pose Estimation of Underwater Target Based on Vision. Master’s Thesis, Huazhong University of Science & Technology, Wuhan, China, 2019.
- Trslic, P.; Rossi, M.; Robinson, L.; O’Donnell, C.; Weir, A.; Coleman, J.; Riordan, J.; Omerdic, E.; Dooly, G.; Toal, D. Vision based autonomous docking for work class ROVs. Ocean Eng. 2020, 196, 106840.
- Zhong, L.J.; Li, D.J.; Lin, M.W.; Lin, R.; Yang, C.J. A fast binocular localisation method for AUV docking. Sensors 2019, 19, 1735.
- Li, Y.; Jiang, Y.Q.; Cao, J.; Wang, B.L.; Li, Y.M. AUV docking experiments based on vision positioning using two cameras. Ocean Eng. 2015, 110, 163–173.
- Yan, Z.P.; Gong, P.; Zhang, W.; Li, Z.X.; Teng, Y.B. Autonomous underwater vehicle vision guided docking experiments based on L-shaped light array. IEEE Access 2019, 7, 72567–72576.
- Liu, S.; Ozay, M.; Okatani, T.; Xu, H.L.; Sun, K.; Lin, Y. Detection and pose estimation for short-range vision-based underwater docking. IEEE Access 2019, 7, 2720–2749.
- Liu, S.; Xu, H.L.; Lin, Y.; Gao, L. Visual navigation for recovering an AUV by another AUV in shallow water. Sensors 2019, 19, 1889.
- Yi, W.K. Research on the Docking Guidance Algorithm of Autonomous Underwater Robot Based on Vision. Master’s Thesis, Shenyang Ligong University, Shenyang, China, 2018.
- Sato, Y.; Maki, T.; Masuda, K.; Matsuda, T.; Sakamaki, T. Autonomous docking of hovering type AUV to seafloor charging station based on acoustic and visual sensing. In Proceedings of the 2017 IEEE Underwater Technology, Busan, Korea, 21–24 February 2017.
- Jiang, Y.Q. Study on Critical Control Issues for AUV Docking Applications. Ph.D. Thesis, Harbin Engineering University, Harbin, China, 2016.
- Abdel-Aziz, Y.I.; Karara, H.M. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Photogramm. Eng. Remote Sens. 2015, 81, 103–107.
- Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155–166.
- Lu, C.P.; Hager, G.D.; Mjolsness, E. Fast and globally convergent pose estimation from video images. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 610–622.
- Zhang, Z.Y.; Deriche, R.; Faugeras, O. A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry. Artif. Intell. 1995, 78, 87–119.
- Byrd, R.H.; Hribar, M.E.; Nocedal, J. An interior point algorithm for large-scale nonlinear programming. SIAM J. Optim. 1999, 9, 877–900.
- Land, E.H. The Retinex; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 1964.
- Li, S.; Xu, C.; Xie, M. A robust O(n) solution to the perspective-n-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1444–1450.
- Ghosh, S.; Ray, R.; Vadali, S.R.K.; Shome, S.N.; Nandy, S. Reliable pose estimation of underwater dock using single camera: A scene invariant approach. Mach. Vis. Appl. 2016, 27, 221–236.
| DOF | Value |
|---|---|
| θx (°) | –10, –5, 0, 5, 10 |
| θy (°) | –10, –5, 0, 5, 10 |
| θz (°) | –10, –5, 0, 5, 10 |
| lx (mm) | –400, –200, 0, 200, 400 |
| ly (mm) | –400, –200, 0, 200, 400 |
| lz (mm) | –2000, –3000, –4000 |
| Evaluating Indicator | DOF | Proposed Method | OI |
|---|---|---|---|
| Mean error of PPARD | θx (°) | 0.42 | 3.77 |
| | θy (°) | 0.45 | 3.95 |
| | θz (°) | 0.34 | 1.22 |
| | lx (mm) | 45.76 | 163.60 |
| | ly (mm) | 36.07 | 191.88 |
| | lz (mm) | 61.80 | 89.97 |
| Maximum error of PPARD | θx (°) | 0.91 | 6.59 |
| | θy (°) | 0.90 | 5.92 |
| | θz (°) | 0.89 | 3.03 |
| | lx (mm) | 76.67 | 294.50 |
| | ly (mm) | 69.20 | 274.30 |
| | lz (mm) | 132.25 | 131.65 |
| Root mean squared error of PPARD | θx (°) | 0.49 | 4.28 |
| | θy (°) | 0.53 | 4.36 |
| | θz (°) | 0.41 | 1.47 |
| | lx (mm) | 49.89 | 206.35 |
| | ly (mm) | 40.73 | 219.20 |
| | lz (mm) | 71.30 | 98.85 |
| Database | Successful Extraction without Preprocessing (%) | Fake Guidance Lamps in Failed Extractions without Preprocessing (%) | Successful Extraction with Preprocessing (%) | Fake Guidance Lamps in Failed Extractions with Preprocessing (%) |
|---|---|---|---|---|
| Drecovery | 68.89 | 8.58 | 87.99 | 0.39 |
| Dexperiment | 100 | 0 | 100 | 0 |
| Data Set | Proposed Method | Fixed Thresholding | Otsu Thresholding | HATS | HSTS |
|---|---|---|---|---|---|
| Drecovery | 87.99% | 35.94% | 2.29% | 34.47% | 34.78% |
| Dexperiment | 100% | 41.94% | 4.12% | 31.46% | 32.21% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).