Review

Review of Obstacle Detection Systems for Collision Avoidance of Autonomous Underwater Vehicles Tested in a Real Environment

Faculty of Mechanical and Electrical Engineering, Polish Naval Academy, Smidowicza 69, 81-127 Gdynia, Poland
Electronics 2022, 11(21), 3615; https://doi.org/10.3390/electronics11213615
Submission received: 10 October 2022 / Revised: 29 October 2022 / Accepted: 2 November 2022 / Published: 5 November 2022

Abstract

A highly efficient obstacle detection system (ODS) is essential for the high performance of autonomous underwater vehicles (AUVs) carrying out missions in a complex underwater environment. Building on a previous literature analysis covering path planning and collision avoidance algorithms, this paper considers those solutions whose operation was confirmed by tests in a real-world environment. These studies were subjected to a deeper analysis assessing the effectiveness of their obstacle detection algorithms. The analysis shows that, over the years, ODSs have been improved and now provide greater detection accuracy, which results in better AUV response times. Almost all of the analysed methods are based on the conventional approach to obstacle detection. In the future, even better ODS parameters could be achieved by using artificial intelligence (AI) methods.

1. Introduction

The efficiency and accuracy of obstacle detection systems (ODSs) are strictly dependent on the parameters of the equipment and devices used. In recent years, significant technological progress has been made in this field, including increased accuracy and speed of operational devices and perception sensors, as well as increased efficiency of computing systems. The air, ground, and underwater environments present different characteristics and parameters of signal attenuation, reflection, and propagation, so the hardware setup must be adjusted to the environment in which the ODS is expected to operate.
The ODS is an essential element of an autonomous underwater vehicle (AUV), allowing collision-free movement in an unfamiliar environment in the presence of obstacles. It is also a necessary component of path planning and collision avoidance systems and of the high-level controller of an AUV [1]. The efficiency of the ODS has a decisive impact on the operation speed and on decision-making about the movement of the AUV when an obstacle appears. The ODS workflow, from the detection of an obstacle to its avoidance maneuvers, is presented in Figure 1. The initial stage includes pre-processing and environment detection. At this stage, the selection of the environmental perception sensor (e.g., sonar, camera, echosounder, laser scanner) and the appropriate tuning of the detection parameters are of key importance for the final parameters of the ODS. Next, the image processing steps, namely image segmentation and morphological operations, are performed. Based on the data collected from the above procedures, the AUV's path of movement is determined. Using the implemented collision avoidance algorithms, the vehicle moves along the designated path, performing collision avoidance maneuvers.
Despite many simulation-tested collision avoidance and path planning methods with very good operating parameters [2], only a few solutions have been integrated with obstacle detection systems and tested on an AUV in a real-world environment. The low number of real-world tests stems from the need to properly prepare the systems implemented in underwater vehicles and to test their interoperability. This is usually time-consuming and must be preceded by an appropriate analysis of the influence of the environment on the tested object. Designing an AUV that operates and makes decisions in near real-time while moving along an optimal path is still a big challenge. It requires efficient data processing algorithms and very efficient computing systems. The data received from the environment perception devices must be appropriately processed to ensure quick and accurate environment imaging, obstacle detection and classification, and determination of a safe path.
Because of the rapid development of obstacle detection methods, attempts are systematically made to summarize and assess the state of the art in this field. For autonomous vehicles (AVs) and unmanned ground vehicles (UGVs), progress in this field was described in references [3,4,5,6]. For unmanned aerial vehicles (UAVs), vision-based obstacle detection algorithms were analysed in references [7,8]. For unmanned aerial systems (UASs), obstacle detection algorithms based on deep learning (DL) were presented in [9]. Deep learning-based techniques for obstacle detection and target recognition for unmanned underwater vehicles (UUVs) were analysed in [10,11,12]. Underwater mine detection and classification methods for UUVs and unmanned surface vehicles (USVs) were discussed in [13]. In reference [14], passive, active, and collaborative detection technologies were discussed. The main surveys on obstacle detection methods for robots are presented in Table 1.
This article presents an analysis of ODSs implemented in AUVs with collision avoidance and path planning capabilities. Based on a previous literature analysis, covering the path planning and collision avoidance algorithms discussed in [2], studies that demonstrated their operation through tests in a real-world environment were selected for this review. These solutions are subjected to a deeper analysis in terms of assessing the effectiveness of their obstacle detection algorithms. It should be noted that only a few publications present underwater ODSs integrated with path planning and collision avoidance systems. Many publications provide only path planning and collision avoidance methods tested in a simulated environment [15,16,17]. Some authors tested the operation of an AUV in real-world conditions where the obstacles were generated numerically [18]. Also, many authors focus solely on presenting the ODS [19]. For the efficient operation of an AUV, the integration of these three systems is essential, ensuring fast and effective data processing and decision making. It should also be mentioned that, among the articles related to aerial, ground, and underwater unmanned vehicles, notably few publications address the underwater environment specifically. Most papers focus on solutions for ground or aerial environments.
The paper is organized as follows. Section 2 discusses basic technical knowledge on the sensors used to perceive the environment in which the AUV operates, and Section 3 describes the image processing steps. Section 4 provides a detailed assessment of obstacle detection systems integrated with path planning and collision avoidance systems in AUVs tested in a real environment. Section 5 summarizes the analysis performed and identifies the most crucial problems in obstacle detection, development constraints, and future works. The final conclusions are presented in Section 6.

2. Perception Sensors in Underwater ODS

Specific parameters of the underwater environment, such as strong attenuation of high-frequency signals and limited access to light, have a significant impact on the choice of the environment perception sensor. The most common devices for the perception of the underwater environment are sensors based on a hydroacoustic wave, such as side scan sonar (SSS), single/multibeam echosounders (SBES, MBES), or forward looking sonar (FLS). Underwater cameras are also increasingly used. In addition, a UUV can be equipped with other devices, such as a laser scanner, or a combination of the above sensors. There are also passive solutions for obstacle detection [20,21,22,23]. However, despite the advantages of hydrophones, such as low power consumption, high concealment, and long working time, such systems provide only information about the presence of an obstacle and the direction of arrival (DOA) of the signal from the object. In the case of obstacles that do not emit any signal, detection is impossible. Additionally, these methods are very susceptible to interference. Therefore, when the AUV navigates in a complex environment, such methods have limited capabilities and can only be used as auxiliary systems. In this section, the most commonly used environmental perception sensors are discussed, together with their principles of operation, advantages, disadvantages, limitations, and examples of applications.

2.1. Sonar

Sound propagation in the underwater environment can be represented by the plane wave equation (depending on spatial variations in pressure and time) as below [24]:
p(x, t) = A sin(2πft − 2πx/λ) = A sin(ωt − kx)
where:
A—peak amplitude of the acoustic pressure of a plane wave,
f—frequency in Hz,
λ—wavelength,
ω—angular frequency in rad/s,
k—wave number in rad/m.
The transmitted beam takes the shape of a cone. Its aperture angle is determined by the physical parameters of the sonar device. The basic equation representing the performance of the sonar, taking into account the parameters of the object, the environment, and the transducer, is shown below [25].
SE = (SL + TS − 2PL) − N − DT
where:
SL—source level,
TS—target strength,
PL—propagation loss,
N—noise level,
DT—detection threshold.
Based on the above equation, it is possible to calculate the transmitter power needed to detect an object of a given size at a known distance. The equation shows the relationship for the FLS signal but is also true for the echosounder and SSS. A broader analysis of the acoustic signals and their underwater propagation can be found in references [26,27,28,29]. The difference between devices such as FLS, SSS, and echosounder is usually the shape of the beam, range, frequency, number of pulses, etc.
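The arithmetic of the sonar equation above can be checked directly. The figures below are purely hypothetical and chosen only to illustrate the calculation; a positive signal excess indicates that detection is expected.

```python
def signal_excess(sl, ts, pl, n, dt):
    """Signal excess in dB: SE = (SL + TS - 2*PL) - N - DT.
    Detection is expected when SE is positive."""
    return (sl + ts - 2 * pl) - n - dt

# Hypothetical values (dB): 210 dB source level, -15 dB target strength,
# 60 dB one-way propagation loss, 70 dB noise, 10 dB detection threshold.
se = signal_excess(sl=210.0, ts=-15.0, pl=60.0, n=70.0, dt=10.0)
# se comes out at -5.0 dB, so this target would not be detected;
# a stronger source or quieter environment would be needed.
```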
The principle of sonar operation is to send a short-sounding pulse called “ping” in the form of a hydroacoustic wave with specific parameters to explore the space. The transmitted probe signal partially reflects off the object in the range of the wave and goes back to the receiver. The detected object distance is calculated from the sound propagation time underwater according to the following equation:
R = ct/2
where:
c—speed of sound in an underwater environment,
t—time for the sound to travel to the target and back to the transducer.
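The two-way travel-time relation is simple enough to verify numerically; a nominal speed of sound of 1500 m/s in seawater is assumed here for illustration.

```python
def echo_range(t, c=1500.0):
    """Distance to the target from the echo delay t (seconds).
    The pulse travels out and back, hence the division by two;
    c defaults to a nominal 1500 m/s speed of sound in seawater."""
    return c * t / 2.0

# An echo arriving 40 ms after the ping corresponds to a 30 m range.
r = echo_range(0.040)
```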
An echo intensity image is created based on information about the distance and power of the signal reflected from objects in the irradiated space. The sonar signal may have different parameters and modulations depending on the expected accuracy, resolution, and speed of operation. One type of sonar signal is a fixed-frequency pulse that typically lasts several tens of microseconds [30]. The pulse duration is extended to achieve the required range without increasing the transmitter's peak power. However, this reduces range resolution [31], which means that in the case of two objects located close to each other, the echoes of both objects may merge into one reflection. Additionally, this type of signal is susceptible to interference. Avoiding this phenomenon requires reducing the pulse duration, but if the peak power of the pulse is not increased, this will reduce the sonar range. Therefore, increasing range resolution and range simultaneously is typically accomplished by applying signal compression, in which the pulse is frequency or phase modulated. Modern sonars for AUVs use a signal called a "chirp" (Figure 2), whose frequency increases as a function of time. For example, the Tritech SeaPrince sonar's frequency varies from 500 kHz to 900 kHz [30]. Reception of the pulse reflected from objects in space is realized using matched filters. As a result of the correlation analysis, the distance at which the object is located can be precisely determined, and an increase in the signal-to-noise ratio (SNR) is obtained. Such a structure of the ping makes range resolution independent of the sonar range.
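The chirp/matched-filter principle can be sketched in a few lines. The pulse parameters below are illustrative (not those of any particular sonar), and the receiver is reduced to a plain cross-correlation of the received signal with the transmitted pulse; the correlation peak recovers the echo delay.

```python
import math

def chirp(n, f0, f1, fs):
    """Linear chirp of n samples sweeping f0 -> f1 Hz at sample rate fs."""
    out = []
    for i in range(n):
        t = i / fs
        # instantaneous phase of a linear frequency sweep:
        # phase(t) = 2*pi*(f0*t + k*t^2/2), with sweep rate k = (f1-f0)*fs/n
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t * fs / (2 * n))
        out.append(math.sin(phase))
    return out

def matched_filter(rx, template):
    """Cross-correlate the received signal with the transmitted pulse."""
    m = len(template)
    return [sum(rx[i + j] * template[j] for j in range(m))
            for i in range(len(rx) - m + 1)]

fs = 10_000.0
tx = chirp(200, 500.0, 900.0, fs)        # 20 ms pulse, 500 -> 900 Hz sweep
rx = [0.0] * 300 + tx + [0.0] * 100      # noiseless echo delayed by 300 samples
out = matched_filter(rx, tx)
delay = max(range(len(out)), key=out.__getitem__)
# the correlation peak lands at sample 300, i.e. a 30 ms round-trip delay
```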
Typical sonar solutions for small AUVs and remotely operated vehicles (ROVs) use mechanical scanning sonar (MSS). In this type of sonar, after examining the space on a specific bearing, the transducer head changes the transmission angle mechanically, and then the next part of the space is probed [32]. The user can set the resolution, range, sector, and in some solutions, also scanning frequency, which directly affects the scanning time. In reference [33], MSS was used for the online processing framework for multiple-target tracking in cluttered environments.
Multibeam sonars are typically used as FLS in AUVs. Mounted permanently at the front of the vehicle, they provide detailed information about the environment ahead by sending multiple (several to several hundred) narrow beams in a limited field of view (FOV).
Side Scan Sonar requires the movement of the vehicle or platform on which it is mounted to change a part of the scanned space. A single ping or several pings are sent toward the object, creating a portion of its display. Repeated pulses sent from a moving vehicle generate an image of the seabed. Its primary purpose is to map the seabed or glaciers [34,35].
The single or multibeam echosounder [36,37] can also be used to measure the seabed parameters. Based on the sonar signal, SBES/MBES determines the distance to the seabed or an object within its range. The MBES generates many single narrow beams covering a larger space than an SBES and provides higher imaging resolution [38]. Due to the narrow beam, echosounders can be used as part of a simple ODS mounted on the front of the AUV operating in shallow water bodies or near the bottom/water surface. In the article [39], the echosounder, in conjunction with the depth sensor, was used to calculate the calibration factor of the vision system. The advantage of using sensors based on a hydroacoustic wave is a high distance resolution and a relatively large range that can be obtained thanks to the low attenuation of sonar signals underwater. In practice, because of reflections from the bottom and the water surface, it is necessary to limit the sonar range to several dozen meters, depending on the depth of the reservoir in which the measurements are carried out. Another disadvantage is the disturbance related to the multipath propagation of the hydroacoustic signal. Additionally, obtaining 3D images requires extensive computational resources.

2.2. Camera

Underwater cameras, just like ordinary cameras, capture the light reflected from elements of the environment towards the lens. Through a matrix of photosensitive elements called pixels, an image depending on light intensity is saved in digital form. The more pixels in the matrix, the higher the image resolution. The light colors are appropriately filtered for individual pixels to obtain three matrices corresponding to red, green, and blue color intensity. This method of color reproduction is called the RGB system and is widely used in visual image processing. Nowadays, digital cameras can record images with very high resolution and wide viewing angles. Thanks to this, the object's image is richer in visual features that are impossible to achieve in sonar imaging [40]. Based on features such as color, the intensity of the individual RGB channels, contrast, texture, and contours, it is possible to determine the size of the obstacle and its type more accurately. The disadvantage of a high-resolution video image is the need to process a large amount of data, which requires significant ODS computing power. Reducing the resolution speeds up image processing but causes many details to be lost. Another disadvantage is the need for light in the registered environment. In the underwater environment, light intensity decreases with increasing depth, and dirty water can further restrict incoming light. Even when the water is clean, visibility does not exceed several meters because light rays are absorbed and converted into heat [38]. These factors have a direct impact on the operating range of the system. For this reason, the perception of the underwater environment is highly dependent on water clarity and operation depth. A camera-based ODS will also fail in under-ice missions.
Despite the disadvantages, systems based on a visual image show effective performance in detecting and tracking cables located at the bottom of water reservoirs [41]. In reference [42], an image from the camera was used to determine the trajectory of swimmers in the pool and in reference [43] to swim-fins efficiency analysis. The visual image is also chosen in object classification [44], object recognition [45] and target recognition based on deep learning methods [46]. In reference [47] the stereo vision system was used to control inspection-class ROV. Vision-based system providing position data in coastal areas was proposed in [48].
For a comprehensive solution that will guarantee high accuracy and efficiency, it is necessary to use additional sensors such as a depth sensor or a distance measurement sensor, e.g., an echosounder used to scale the visual image.

2.3. Other

Despite the dominance of sonar systems, other obstacle-detection solutions for the underwater environment are also being tested. An example is a laser scanner system that uses a laser beam to illuminate the space in front of the vehicle. Usually, a camera is used in tandem with the laser. If there is an obstacle within the laser's range, a strong reflection is visible in the image captured by the camera, which allows determining where the object to be avoided is. The disadvantage of this solution is its relatively short range. Reference [19] presents an ODS based on a 3D image reconstruction Laser Line Scan method cooperating with a binocular stereo-vision system. In a study carried out under stable conditions in a laboratory water reservoir, the effectiveness of object reconstruction was investigated for various levels of water turbidity. It was found that the error in determining point clouds in water increased significantly compared to air.
Another solution is to use different combinations of sensors, such as a laser, camera, and sonar [49]. However, it should be noted that the increased data volume requires appropriate computing power to exploit the devices' potential. Additionally, it is necessary to consider the mutual interference of sensors, for example a combination of an FLS and several echosounders whose operating bands may overlap. In that case, the operation of the devices must be synchronized, which reduces the speed of the AUV control system. This type of interference does not occur when combining vision and sonar systems.

2.4. Summary

When analysing the advantages and disadvantages of the above solutions, sonar seems to be the most appropriate sensor for the underwater environment. Depending on the intended use and operating conditions, a device should be selected with parameters ensuring adequate operating efficiency. Sensor combinations are also considered an effective solution. A prerequisite is sufficient ODS computational performance to process the data from these sensors.

3. Image Processing in ODS

This section discusses the basic image processing operations related to the classical obstacle detection process. It should be noted that the obstacle detection scheme in AI methods is different: each step in the classical sense corresponds to a different operation in, e.g., a convolutional neural network (CNN)-based scheme. This article focuses mainly on the classical approach to the detection and image processing of underwater objects.
The raw sonar data includes the echo of the reflected signal and noise interference (Figure 3). Filtering out this noise is crucial for proper obstacle detection. For this purpose, sonar data is processed using methods such as mean or median filtering, histograms of local image values (e.g., 5 × 5 pixels), threshold segmentation, and filtering that removes groups of pixels smaller than an a × b pixel window. The next step in sonar image processing is image morphology. Here, methods such as edge detection, template matching, dilation, erosion, and combinations of the above techniques are used. After such processing of the sonar image, the characteristics of obstacles can be determined in detail, and false reflections can be avoided. Processing a high-resolution image increases the computation time needed to determine the features of the detected objects; therefore, selecting the correct sonar resolution and image processing methods is crucial. In the case of vision, image processing is based mainly on visual characteristics such as color, contrast, and the intensity of individual colors. These features are also crucial in object detection techniques based on machine learning. The use of AI methods in obstacle detection systems seems very promising. The development of this type of method began about ten years ago, when the authors of [50] presented an extensive deep convolutional neural network called AlexNet that recognized objects in high-resolution images with outstanding results. Since then, many modifications have been made to improve the speed and accuracy of CNN-based obstacle detection and feature extraction. The advantage of this type of solution is the high efficiency and precision of identifying and classifying obstacles, which classical methods cannot match. The condition is the appropriate adaptation of the neural network, consisting of training with images of real obstacles.
Supervised learning is very time-consuming. Additionally, it is uncertain whether a CNN trained to detect, e.g., mines or submarines will deal equally well with obstacle types it has not been trained on. Another problem is acquiring a quantity of training material large enough for DL methods to guarantee a very high level of performance. Due to the limited training material and the quality of available sonar images, obstacle detection and classification based on deep learning algorithms are not well developed for sonar images [51]. Choosing inappropriate training resources can have a negative impact on training results.

3.1. Pre-Processing and Detection

At this stage, depending on the purpose of the obstacle detection method, various operations may be performed. The main objective is to avoid disturbances at the detection stage or to reduce disturbances generated during the detection. Pre-processing algorithms prepare the image for further image-processing steps. In the pre-processing and detection stage, methods based on white balance, color correction, and histogram equalization are used in camera images. For example, the study [52] presents an algorithm based on contrast mask and contrast-limited adaptive histogram equalization (CLAHE), which improves the image by visualizing the details of the object and compensates for light attenuation in captured images in an underwater environment. CLAHE was also used in [39] for video image processing in the pre-processing step in the simultaneous localization and mapping (SLAM) system. At this stage, the image can be divided into individual matrices containing grayscale with color intensity in the RGB system [53]. In the case of mine detection, the preparation of the image requires prior determination of shaded areas, areas of reflection from the bottom and water surface, and reflections from the object [54]. The image is pre-normalized to reduce noise and distinguish the background from the highlight and shadow of the mine by, e.g., using the histogram equalization operation [13]. In references [55,56], median filtering was used as part of pre-processing for sonar data, which consists in ordering adjacent pixels and then applying the median value for a specific filter size. The authors of the references above chose a window size of 5 × 5 pixels. The mean filter pre-processing method was used in the detection method presented in [57]. The operating principle of the technique is analogous to the median method. An example of a mean filter operation is shown in Figure 4.
The authors of [58] present a comparison of the effects of the median, mean, wavelet-based, and morphological smoothing methods. The wavelet-based operation uses the wavelet transform to filter noise from the signal by splitting it into different scale (i.e., frequency) components [59]. The morphological smoothing method is based on erosion or dilation operations, which are more often used during the morphological processing stage; it reduces noise in the image obtained during the detection process. As a result of comparing the processing time, the obtained effect, the peak signal-to-noise ratio (PSNR), and the mean square error (MSE) of the above methods, the authors concluded that the median and mean methods are the most optimal.
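As a minimal sketch of the median filtering step (here with a 3 × 3 window rather than the 5 × 5 used in [55,56], purely for compactness), a single bright speckle in an otherwise flat sonar image is removed entirely while the border is left untouched:

```python
import statistics

def median_filter(img, k=3):
    """Median-filter a 2D grayscale image (list of lists) with a k x k
    window; border pixels are left unchanged for simplicity."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = statistics.median(window)
    return out

# A lone 255 speckle on a flat background of 10s is suppressed to 10.
noisy = [[10, 10, 10, 10],
         [10, 255, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
clean = median_filter(noisy)
```

The mean filter differs only in replacing `statistics.median` with `statistics.mean`; it smooths the speckle instead of removing it outright, which is why the median is usually preferred for impulsive sonar noise.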
As part of the pre-processing step, the scanning sector or region of interest can also be specified by selecting the distance range and the angular range of the area to be later segmented [60] (Figure 5). This reduces the amount of data processed in further image processing steps. This operation shortens the image processing time and is conducive to achieving real-time ODS operation.
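A region-of-interest cut of this kind amounts to a simple filter over polar sonar returns. The tuple layout below (range in metres, bearing in degrees, intensity) is just an assumed representation for illustration:

```python
def region_of_interest(points, r_min, r_max, b_min, b_max):
    """Keep only sonar returns whose range and bearing lie inside the
    configured ROI. Each point is an assumed (range_m, bearing_deg,
    intensity) tuple; everything outside the ROI is dropped before the
    segmentation stage, shrinking the data to be processed."""
    return [(r, b, i) for (r, b, i) in points
            if r_min <= r <= r_max and b_min <= b <= b_max]

pings = [(2.0, -50.0, 90),    # too close and outside the angular sector
         (8.0, 10.0, 200),    # inside the ROI
         (35.0, 5.0, 120)]    # beyond the range limit
roi = region_of_interest(pings, r_min=5.0, r_max=30.0,
                         b_min=-30.0, b_max=30.0)
# only the 8 m return at 10 degrees survives
```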

3.2. Image Segmentation

Image segmentation consists of creating classes of objects and assigning individual pixels of the processed image to them. For example, in mine detection systems, the object classes can be the reflection from the object, the shadow, and the background [61]. The main groups of segmentation methods are threshold segmentation, clustering, and Markov random field methods.
Threshold segmentation methods are based on comparing individual pixel values with a set threshold value. Based on that comparison, the pixel value is set to a specific value (0 or 1). In the literature, modifications and improvements of this method can be found, such as the Otsu threshold [62]. In [63], a thresholding operation with a gradient operator was applied to vision-based image segmentation by searching for edges between areas.
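A fixed-threshold segmentation is essentially a one-liner; the Otsu variant differs only in computing the threshold automatically from the image histogram. The toy intensities below are illustrative:

```python
def threshold_segment(img, thresh):
    """Binarize a grayscale image: 1 where the pixel reaches the
    threshold (candidate echo), 0 elsewhere (background)."""
    return [[1 if px >= thresh else 0 for px in row] for row in img]

sonar = [[12, 200, 15],
         [180, 220, 14],
         [11, 13, 10]]
binary = threshold_segment(sonar, 128)
# binary == [[0, 1, 0], [1, 1, 0], [0, 0, 0]]
```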
Cluster analysis consists of classifying points into subgroups with an appropriate degree of similarity. The purpose of segmentation based on cluster analysis is to distinguish objects such as echo, shadow, reflections, etc. Among the clustering-based methods, the K-means algorithm and the region growing method can be distinguished. The K-means clustering technique [64] is based on determining K random points in the image and then assigning the closest points to each of them; the centroid of each cluster is then recalculated. Over time, the algorithm has been improved and modified. For example, [65] introduced the K-means clustering algorithm in conjunction with mathematical morphology. The region growing technique iteratively checks neighboring pixels and compares their values with the averaged local value; a point is assigned to the region when the difference does not exceed a specified value. This method was used in [33] in an online processing framework for FLS images.
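A minimal sketch of K-means on scalar pixel intensities shows the assign-then-recompute loop; real sonar segmentation would cluster richer feature vectors, and the spread-out initialization below is just one simple choice (the textbook algorithm starts from random centroids):

```python
import statistics

def kmeans_1d(values, k=2, iters=10):
    """Minimal K-means on scalar intensities: assign each value to the
    nearest centroid, then move each centroid to its cluster mean."""
    # initialize centroids spread across the sorted values
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [statistics.mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Dark background pixels and bright echo pixels separate cleanly.
intensities = [10, 12, 11, 200, 205, 210]
centroids, clusters = kmeans_1d(intensities, k=2)
```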
The Markov random field is a method based on probability analysis of connections between adjacent pixels [66]. For example, for an image obtained from SSS, if a pixel is close to the shadow, the probability that it belongs to it increases [61]. Various Markovian models for the segmentation of sonar-based detection were presented in [37,67].
Figure 6 shows an example of the operation of the threshold segmentation algorithm. In Figure 7, threshold segmentation was preceded by mean filtering. It is worth noting that the use of the pre-processing method has a significant impact on the subsequent operation of segmentation algorithms.

3.3. Image Morphological Operations

Image morphology operations aim to improve the features of detected objects affected by segmentation imperfections [68]. The basic operations at this step are dilation, erosion, opening, closing, and edge detection. The dilation operation expands the shape of an object in the image: it removes irregularities in the object's shape by extending its surface by a number of pixels depending on the structuring element (e.g., a 3 × 3 or 5 × 5 pixel window). The erosion operation reduces the area of the object in the image as a result of comparison with the structuring element. In addition, image processing also uses other operations, which may be a combination of the two above (opening, closing) or differ in how the structuring element affects individual pixels (skeletonization). After adjusting the image, features such as boundaries, edges, or points can be detected, depending on the application (Figure 8 and Figure 9). Edge detectors usually work by looking for significant value differences between neighboring pixels and marking them as an edge.
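Binary erosion and dilation with a 3 × 3 structuring element can be sketched in pure Python (a real pipeline would use an image-processing library). Composing them as erosion followed by dilation gives the opening operation mentioned above:

```python
def erode(img, k=3):
    """Binary erosion: a pixel survives only if every pixel under the
    k x k structuring element is set (outside the image counts as 0)."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx]
                     for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
             for x in range(w)] for y in range(h)]

def dilate(img, k=3):
    """Binary dilation: a pixel is set if any pixel under the
    structuring element is set."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx]
                     for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
             for x in range(w)] for y in range(h)]

# Opening (erosion then dilation) removes specks smaller than the
# structuring element while restoring shapes that survive erosion.
blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
opened = dilate(erode(blob))
```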

3.4. Summary

Traditional image processing methods ensure adequate reduction of the noise and interference generated in the sensor's perception of the environment. The result of such processing is detailed information about objects/obstacles near the AUV. Because most techniques require checking the value of each pixel and subjecting it to mathematical or statistical operations, they often require a large amount of computation: the more complex the method, the greater the processing time. Additionally, expertise in analysing and interpreting images is necessary for the methods to be properly tuned [55].

4. ODS in Practically Tested AUV Capable of Collision Avoidance

This chapter discusses the obstacle detection approaches implemented in AUVs with obstacle avoidance and path planning capabilities. Based on the literature review presented in [2], only studies demonstrating path planning and collision avoidance algorithms tested in a real-world environment were selected for further analysis. Each selected study was analysed in detail regarding the equipment used to perceive the environment and the image processing operations. In addition, the solutions were assessed in terms of environment complexity, the ability to detect static and dynamic obstacles, operational speed, and path planning suitability.
The operation of the vehicle named NPS ARIES [69] starts with the identification of the bottom, after which the ROI is determined. Once an obstacle is detected, information about its distance, height, and centroid is sent to the controller. In image processing, a binary image is first created using a threshold. Then, during the erosion process, the value of each pixel in a 3 × 3 window is set to the minimum value. In the next step, the algorithm searches for the linear features of the object, on the basis of which the position of the bottom is determined. The obstacle is identified using a Kalman filter based on the vehicle pitch, pitch rate, and rotation angle. In the segmentation process, areas forming lines are separated, and the most significant areas are treated as potential obstacles. On this basis, the ROI is determined. Then, using the binary image, the contours of the obstacles are detected and tracked using the Kalman filter. The method is effective and, together with appropriate path-planning algorithms, can provide optimal collision avoidance maneuvers. However, it is difficult to assess its effectiveness in a complex environment with more obstacles because the algorithm was tested in a low-complexity environment. In [70,71], no imaging methods were applied; the ODS works with obstacle distance data obtained from echosounders. Thanks to this, it operated effectively in the missions conducted by the authors. However, it does not guarantee effective operation in complicated conditions, and the AUV's obstacle avoidance maneuvers may not be optimal.
In reference [72], thresholding and median filtering were first applied in image processing; then, in the morphology step, erosion, dilation, and edge detection were used. In the experiment, the vehicle correctly avoided a breakwater obstacle, which demonstrates the correct operation of the obstacle detection method, although the test environment was not complex. In reference [73], the vehicle's operation was tested in an underwater environment with obstacles present. The obstacle detection method is based on measuring the distance to an object using a collision or proximity sensor and is effective for static obstacles. The authors of [74] presented a solution based on three echosounders measuring the distance to obstacles in front of the vehicle. In the study [75], five echosounders were used in the ODS for octree space representation. Obstacle detection systems that use only echosounders to measure the distance to an obstacle are prone to interference; moreover, they do not allow for optimal collision avoidance maneuvers. In research [76], sonar imaging was used in the obstacle detection system, and the images are filtered using mean and median filtering. In the segmentation step, a fuzzy K-means clustering algorithm is used, and in morphological processing, the occupancy map is determined from the grayscale image. The method ensures sufficient accuracy and, with appropriate algorithms, allows for near-optimal path planning. In reference [77], speckle noise in the sonar image is first suppressed by a 17 × 17 filter; then the local image histogram entropy method (9 × 9) is applied, followed by a hysteretic entropy threshold. Finally, edge detection is performed, and the obstacles are saved on the map. The method allows making decisions in near real-time and planning a near-optimal path. The study [18] presented a vehicle capable of avoiding a collision.
An experiment confirmed its correct operation in a real underwater environment. However, the obstacles were simulated, which bypassed the detection process and its related problems; therefore, this study is not considered in the further analysis. The authors of [78] presented a solution based on two line laser and camera sets. The image obtained from the sensors in this configuration is subjected to a top-hat transformation, based on opening and subtraction operations with a 1 × 21 pixel window, to remove bright points from the background of the image (the background is not completely black). The image is then binarized by a threshold value. White regions are labelled using a fast labeling method, and groups of fewer than 80 pixels are removed as noise. An obstacle is identified if it is present in 5 or more frames. The method is effective, but it has a small operating range, and the processing time does not allow the vehicle to be controlled close to real-time in a complex environment. The same solution with an additional sensor, an FLS, was presented in [49]; this implementation gave the ODS a greater operating range. In reference [79], two sonars were used to provide a 210-degree FOV. The vehicle is able to follow a wall parallel to the AUV axis and performs avoidance maneuvers depending on the sector in which an obstacle appears. The system detects an obstacle when it is present in five or more returns or when the tracked wall is in front of the AUV. The method does not consider the features of the obstacle or its size; therefore, it does not allow for determining the optimal route or moving the AUV effectively in a complex underwater environment. In [80], the echo intensity matrix obtained from sonar is filtered by a specified threshold. Then a range, representing the distance to the obstacle, is computed for points with intensity greater than the threshold.
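The threshold-and-range scheme of [80] can be sketched as follows. This is a hedged illustration: it assumes one row of the intensity matrix per sonar beam and a fixed range resolution per sample, neither of which is specified in the summary above.

```python
def obstacle_ranges(intensity, threshold, range_resolution):
    """For each beam (row of the echo-intensity matrix), return the range to the
    first sample whose intensity exceeds the threshold, or None if no echo
    exceeds it. Range = sample index * range resolution (assumed layout)."""
    ranges = []
    for beam in intensity:
        rng = None
        for sample_idx, value in enumerate(beam):
            if value > threshold:
                rng = sample_idx * range_resolution
                break
        ranges.append(rng)
    return ranges
```

Thresholding the raw intensities before computing ranges is what makes such echosounder-style ODSs fast, but, as noted above, it discards the shape information needed for optimal avoidance maneuvers.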
Extensive AUV experiments were carried out in various scenarios, confirming the ODS's effectiveness; with appropriate path planning algorithms, the AUV is able to move along a near-optimal path. In reference [81], detection is based on a point cloud whose spatial parameters are estimated using scaling factors derived from depth and inertial measurement unit (IMU) measurements. In a pre-processing step, contrast adjustment is performed along with histogram equalization. The SVIn method [39] outputs a representation of the sensed environment as a 3D point cloud, from which visual objects with a high density of features are extracted. The method uses density-based clustering in the segmentation step; after cluster detection, the cluster centroids are determined. The method allows determining a near-optimal path and operating in real-time. In the study [82], threshold-based segmentation was first performed in sonar image processing, followed by edge detection. The method provides real-time obstacle detection and can be used to move in an underwater environment in the presence of both static and dynamic obstacles. Reference [83] presents an ODS based on DL methods for fishing net detection. Pre-processing of the FLS images is conducted by gray stretching and threshold operations. The authors trained and tested their network to achieve ATR; MRF-Net is trained on data collected at sea, combined with a mix-up strategy using randomly synthesized virtual data. As a result, the system detects and classifies an obstacle as a fishing net with very high accuracy. The method is very effective and allows the AUV to operate in real-time. However, applying the technique to other obstacles must be preceded by a learning process based on images of the specific obstacle. In reference [84], vision images are used in the ODS.
First, image features such as intensity, color, contrast, and light transmission are determined. The next step is the global contrast calculation. After that, the ROI is detected, and threshold-based segmentation is executed. In pool tests, the AUV successfully detected and avoided obstacles in a complex environment. The method allows for near real-time operation; its disadvantage is the limited range, which depends on visibility. All the above methods are presented chronologically in Table 2, with information about the hardware used, the main image processing properties, and an effectiveness evaluation.
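Several of the pipelines above rely on classic morphological filtering; for example, the top-hat transformation with a 1 × 21 window used in [78] can be sketched in one dimension as follows (a simplified pure-Python illustration under the usual grey-scale morphology definitions, not the authors' code):

```python
def opening_1d(row, k=21):
    """Grey-scale opening of a 1-D signal: erosion (sliding minimum) followed
    by dilation (sliding maximum), each with a k-sample window."""
    half = k // 2
    n = len(row)
    eroded = [min(row[max(0, i - half):i + half + 1]) for i in range(n)]
    return [max(eroded[max(0, i - half):i + half + 1]) for i in range(n)]

def top_hat_1d(row, k=21):
    """Top-hat transform: original minus its opening. Keeps bright features
    narrower than the window; suppresses the broad background."""
    opened = opening_1d(row, k)
    return [v - o for v, o in zip(row, opened)]
```

Because the opening removes any bright feature narrower than the window, subtracting it from the original leaves only narrow bright points, such as laser-line reflections, while the slowly varying (not completely black) background is suppressed.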

5. Analysis, Bottlenecks, and Future Works

Section 4 describes the obstacle detection and image processing systems for AUVs capable of avoiding obstacles, the correct operation of which has been confirmed by tests performed in a real-world environment. This section summarizes the results of that analysis. Statistics on ODS effectiveness as a function of publication time are presented, together with an analysis of the equipment used to perceive the environment. Additionally, the numbers of studies corresponding to low, medium, and high effectiveness are shown. Finally, the limitations that inhibit the development of ODSs and directions for future work are defined.

5.1. Analysis

After examining the ODS literature for AUVs, constant development in this field can be observed. In Figure 10, the effectiveness of each considered system is evaluated, with the surveys arranged in chronological order. The efficiency was assessed as low, medium, or high depending on the following parameters:
1. Complexity of the environment in which the AUV was tested. Each article was rated on a scale of 0–3 depending on the number of obstacles in the test environment, the distance between them, the ability to detect obstacles with irregular shapes or only simple ones, and whether the vehicle was tested for 2D or 3D maneuvers.
2. Ability to detect only static, or both static and dynamic, obstacles. The studies were rated on a 0–1 scale, depending on whether the presented solution operates correctly in the presence of static and dynamic obstacles or only static ones.
3. ODS operation speed. This parameter was assessed based on data such as sonar update rate, frame rate, path replanning time, and other system-specific parameters. The solutions were rated on a 0–2 scale, where 1 Hz was chosen as the reference frequency of the environment detection and image processing procedure. In some studies, however, the speed assessment was based on estimated values because of limited information about the system.
4. Suitability for path planning, i.e., the potential ability to provide sufficient data to determine the optimal path. The research was rated on a 0–3 scale depending on the image processing methods used, the accuracy of representing obstacles after image processing, and the optimality of the executed path during tests in a real-world environment.
The sum of points determined the final evaluation, where high efficiency corresponds to 7–9 points, medium to 4–6, and low to 0–3. Detailed assessment data are provided in Table S1.
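The scoring scheme above can be expressed compactly; the function below is only a restatement of the stated rating rules, with illustrative parameter names:

```python
def ods_effectiveness(complexity, dynamic, speed, path_suitability):
    """Combine the four criterion scores (0-3, 0-1, 0-2, 0-3) into the
    low/medium/high rating: 7-9 high, 4-6 medium, 0-3 low."""
    total = complexity + dynamic + speed + path_suitability
    if total >= 7:
        return "high"
    if total >= 4:
        return "medium"
    return "low"
```

For example, a system tested only against simple static obstacles but operating above 1 Hz might score 1 + 0 + 2 + 1 = 4 points and be rated medium.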
It can be seen that in the initial studies, the effectiveness of the methods was low. In the first decade of the 21st century, the main challenge was achieving a speed of operation high enough to allow the AUV to avoid collisions in real-time, which was limited by the large number of calculations required for image processing. In addition, the first AUV designs with collision avoidance algorithms did not focus on determining the optimal route; hence, the first ODSs were not advanced in determining the characteristics of the obstacles in the test environment. The ODSs were refined over time. For example, in references [49,78], a forward-looking sonar was added to the initially developed system based on forward-looking cameras and line lasers to increase the obstacle detection range. More recently, along with technological development and the increase in computational resources, researchers have used more advanced systems that ensure fast data processing, operate in complex environments, and enable the determination of a near-optimal path. Additionally, in reference [83], an AI method based on deep learning was used to detect obstacles; the authors demonstrated its effectiveness in detecting fishing nets.
In terms of environment perception sensors, researchers most often used sonar imagery (Figure 11). Hydroacoustic wave-based sensors provide the greatest range, which directly increases the time the AUV has to react to an obstacle on its path. However, sonar imagery requires appropriate analysis experience and proper operations to suppress background noise. Additional limitations of sonars are false positives resulting from reflections of the signal from the water surface and the bottom. For these reasons, some researchers decided to rely on the visual image from a camera, which provides more detail related to contrast, texture, and color. However, the range of a vision system is small due to the high attenuation of light in the underwater environment. Some studies used a combination of several sensors [49,73,80], which increases the amount of data to process.
In Figure 12, the analysed literature is divided into three groups of effectiveness. The numbers of ODSs with low, medium, and high efficiency are almost equal, which reflects the constant development of these systems. Further developed methods are likely to be highly effective, and the ODS field is less challenging than it was 10–15 years ago.

5.2. Bottlenecks and Future Works

Currently, researchers working on obstacle detection for AUVs have mastered many issues related to the demanding underwater environment. The developed ODSs work close to real-time and ensure adequate accuracy for determining a near-optimal path. However, the accuracy of the results depends strictly on the AUV's speed. In the studies analysed in this paper, the more complex methods allow AUVs to reach speeds of several meters per second. Increasing the speed requires increasing the ODS operating range. For camera-based solutions, the range is limited by light attenuation and is usually several meters. For sonar, a greater range implies a longer time for the signal to insonify the space and more data to process, since the number of samples depends on the range resolution. Reducing the range resolution may reduce the amount of data, but it will also reduce the accuracy of the ODS: a sufficiently small obstacle may not be registered, or the receiver may treat obstacles close to each other as one. It is also worth noting that in the above analysis, almost all methods rely on the classic image processing approach. Artificial intelligence methods have been developed extensively in recent years; nevertheless, they are rarely used in solutions for AUVs capable of collision avoidance. This may be due to limitations in expert knowledge or the lack of an adequate amount of data needed for training (for machine learning methods). Sonar and video images of the underwater environment are not as widespread as images of other environments, so researchers must in practice conduct extensive measurements themselves, which is usually time-consuming. Since the effectiveness of learning-based methods depends heavily on the amount of training material, virtually synthesized images can be a promising solution.

6. Conclusions

This article presents an in-depth analysis of ODSs for AUVs capable of collision avoidance, including a basic explanation of underwater perception sensors and image processing, and a detailed description of the ODSs tested under real conditions. The underwater environment is very challenging for perception; for this reason, developing an effective system that enables optimal and safe path planning requires implementing specific solutions. Nevertheless, the above analysis shows that researchers have mastered this part of underwater robotics well, and recently developed systems are increasingly effective. In the future, even better ODS parameters could be achieved by using AI methods.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/electronics11213615/s1, Table S1: Assessment of analysed papers.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI – Artificial Intelligence
ATR – Automatic Target Recognition
AUVs – Autonomous Underwater Vehicles
AVs – Autonomous Vehicles
CNN – Convolutional Neural Network
DL – Deep Learning
DOA – Direction Of Arrival
FLS – Forward-Looking Sonar
FOV – Field Of View
IMU – Inertial Measurement Unit
MBES – Multibeam Echosounder
MSS – Mechanical Scanning Sonar
ODSs – Obstacle Detection Systems
ROI – Region Of Interest
ROVs – Remotely Operated Vehicles
SBES – Single-Beam Echosounder
SLAM – Simultaneous Localization and Mapping
SNR – Signal-to-Noise Ratio
SSS – Side Scan Sonar
UASs – Unmanned Aerial Systems
UAVs – Unmanned Aerial Vehicles
UGVs – Unmanned Ground Vehicles
UUVs – Unmanned Underwater Vehicles

References

  1. Szymak, P.; Kot, R. Trajectory Tracking Control of Autonomous Underwater Vehicle Called PAST. Pomiary Autom. Robot. 2022, 266, 112731. [Google Scholar] [CrossRef]
  2. Kot, R. Review of Collision Avoidance and Path Planning Algorithms Used in Autonomous Underwater Vehicles. Electronics 2022, 11, 2301. [Google Scholar] [CrossRef]
  3. Thakur, S.; Sharma, V.; Chhabra, A. A review: Obstacle tracking using image segmentation. Int. J. Innov. Res. Comput. Commun. Eng. 2014, 2, 4589–4596. [Google Scholar]
  4. Yu, X.; Marinov, M. A study on recent developments and issues with obstacle detection systems for automated vehicles. Sustainability 2020, 12, 3281. [Google Scholar] [CrossRef] [Green Version]
  5. Iqbal, A. Obstacle detection and track detection in autonomous cars. In Autonomous Vehicle and Smart Traffic; IntechOpen: London, UK, 2020. [Google Scholar]
  6. Chavan, Y.; Chavan, P.; Nyayanit, A.; Waydande, V. Obstacle detection and avoidance for automated vehicle: A review. J. Opt. 2021, 50, 46–54. [Google Scholar] [CrossRef]
  7. Al-Kaff, A.; Martin, D.; Garcia, F.; de la Escalera, A.; Armingol, J.M. Survey of computer vision algorithms and applications for unmanned aerial vehicles. Expert Syst. Appl. 2018, 92, 447–463. [Google Scholar] [CrossRef]
  8. Badrloo, S.; Varshosaz, M.; Pirasteh, S.; Li, J. Image-Based Obstacle Detection Methods for the Safe Navigation of Unmanned Vehicles: A Review. Remote Sens. 2022, 14, 3824. [Google Scholar] [CrossRef]
  9. Fraga-Lamas, P.; Ramos, L.; Mondéjar-Guerra, V.; Fernández-Caramés, T.M. A review on IoT deep learning UAV systems for autonomous obstacle detection and collision avoidance. Remote Sens. 2019, 11, 2144. [Google Scholar] [CrossRef] [Green Version]
  10. Steiniger, Y.; Kraus, D.; Meisen, T. Survey on deep learning based computer vision for sonar imagery. Eng. Appl. Artif. Intell. 2022, 114, 105157. [Google Scholar] [CrossRef]
  11. Neupane, D.; Seok, J. A review on deep learning-based approaches for automatic sonar target recognition. Electronics 2020, 9, 1972. [Google Scholar] [CrossRef]
  12. Teng, B.; Zhao, H. Underwater target recognition methods based on the framework of deep learning: A survey. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420976307. [Google Scholar] [CrossRef]
  13. Hożyń, S. A Review of Underwater Mine Detection and Classification in Sonar Imagery. Electronics 2021, 10, 2943. [Google Scholar] [CrossRef]
  14. Zhang, C.; Xiao, F. Overview of Data Acquisition Technology in Underwater Acoustic Detection. Procedia Comput. Sci. 2021, 188, 130–136. [Google Scholar] [CrossRef]
  15. Sun, Y.; Luo, X.; Ran, X.; Zhang, G. A 2D Optimal Path Planning Algorithm for Autonomous Underwater Vehicle Driving in Unknown Underwater Canyons. J. Mar. Sci. Eng. 2021, 9, 252. [Google Scholar] [CrossRef]
  16. Scharff Willners, J.; Gonzalez-Adell, D.; Hernández, J.D.; Pairet, È.; Petillot, Y. Online 3-dimensional path planning with kinematic constraints in unknown environments using hybrid A* with tree pruning. Sensors 2021, 21, 1152. [Google Scholar] [CrossRef] [PubMed]
  17. Yuan, J.; Wang, H.; Zhang, H.; Lin, C.; Yu, D.; Li, C. AUV obstacle avoidance planning based on deep reinforcement learning. J. Mar. Sci. Eng. 2021, 9, 1166. [Google Scholar] [CrossRef]
  18. McMahon, J.; Plaku, E. Mission and motion planning for autonomous underwater vehicles operating in spatially and temporally complex environments. IEEE J. Ocean. Eng. 2016, 41, 893–912. [Google Scholar] [CrossRef]
  19. Lin, Y.H.; Shou, K.P.; Huang, L.J. The initial study of LLS-based binocular stereo-vision system on underwater 3D image reconstruction in the laboratory. J. Mar. Sci. Technol. 2017, 22, 513–532. [Google Scholar] [CrossRef]
  20. Piskur, P.; Gasiorowski, M. Digital Signal Processing for Hydroacoustic System in Biomimetic Underwater Vehicle. NAŠE MORE Znan. časopis Za More I Pomor. 2020, 67, 14–18. [Google Scholar] [CrossRef] [Green Version]
  21. Piskur, P.; Szymak, P.; Jaskólski, K.; Flis, L.; Gąsiorowski, M. Hydroacoustic system in a biomimetic underwater vehicle to avoid collision with vessels with low-speed propellers in a controlled environment. Sensors 2020, 20, 968. [Google Scholar] [CrossRef] [Green Version]
  22. Szymak, P.; Piskur, P. Measurement system of biomimetic underwater vehicle for passive obstacles detection. In Proceedings of the 18th International Conference on Transport Science, Portrož, Slovenia, 14–15 June 2018. [Google Scholar]
  23. Tian, Y.; Liu, M.; Zhang, S.; Zhou, T. Underwater multi-target passive detection based on transient signals using adaptive empirical mode decomposition. Appl. Acoust. 2022, 190, 108641. [Google Scholar] [CrossRef]
  24. Rascon, A. Forward-Looking Sonar Simulation Model for Robotic Applications; Technical Report; Naval Postgraduate School: Monterey, CA, USA, 2020. [Google Scholar]
  25. Waite, A.D. Sonar for Practising Engineers; Wiley: Hoboken, NJ, USA, 2002. [Google Scholar]
  26. Brekhovskikh, L.M.; Lysanov, Y.P.; Lysanov, J.P. Fundamentals of Ocean Acoustics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  27. Kinsler, L.E.; Frey, A.R.; Coppens, A.B.; Sanders, J.V. Fundamentals of Acoustics; John Wiley & Sons: Hoboken, NJ, USA, 2000. [Google Scholar]
  28. Burdic, W.S. Underwater Acoustic System Analysis; Peninsula Pub: Los Altos, CA, USA, 2002. [Google Scholar]
  29. Urick, R.J. Principles of Underwater Sound, 3rd ed.; Peninsula Publising: Los Atlos, CA, USA, 1983; Volume 22, pp. 23–24. [Google Scholar]
  30. Tritech International Ltd. SeaKing, SeaPrince Imaging Sonars. Product Manual. Available online: https://www.tritech.co.uk/media/products/mechanical-scanning-sonar-tritech-super-seaprince_hardware_manual.pdf?id=e6c140ff (accessed on 29 September 2022).
  31. Leszczynski, T. Metody Zwiększania Rozdzielczości Rozpoznawania Krótkich Sygnałów z Liniową Modulacją Częstotliwości; Rozprawy. Uniwersytet Technologiczno-Przyrodniczy w Bydgoszczy: Bydgoszcz, Poland, 2010. [Google Scholar]
  32. Jiang, M.; Song, S.; Li, Y.; Jin, W.; Liu, J.; Feng, X. A survey of underwater acoustic SLAM system. In Proceedings of the International Conference on Intelligent Robotics and Applications, Shenyang, China, 8–11 August 2019; pp. 159–170. [Google Scholar]
  33. Zhang, T.; Liu, S.; He, X.; Huang, H.; Hao, K. Underwater target tracking using forward-looking sonar for autonomous underwater vehicles. Sensors 2019, 20, 102. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Montefalcone, M.; Rovere, A.; Parravicini, V.; Albertelli, G.; Morri, C.; Bianchi, C.N. Reprint of “Evaluating change in seagrass meadows: A time-framed comparison of Side Scan Sonar maps”. Aquat. Bot. 2014, 115, 36–44. [Google Scholar] [CrossRef]
  35. Fish, J.P.; Carr, H.A. Sound Reflections: Advanced Applications of Side Scan Sonar; Lower Cape Pub.: Boston, MA, USA, 2001. [Google Scholar]
  36. Huseby, R.B.; Milvang, O.; Solberg, A.S.; Bjerde, K.W. Seabed classification from multibeam echosounder data using statistical methods. In Proceedings of the OCEANS’93, Victoria, BC, Canada, 18–21 October 1993; pp. III-229–III-233. [Google Scholar]
  37. Dugelay, S.; Graffigne, C.; Augustin, J. Deep seafloor characterization with multibeam echosounders by image segmentation using angular acoustic variations. In Proceedings of the Statistical and Stochastic Methods for Image Processing, Fort Lauderdale, FL, USA, 23–26 September 1996; Volume 2823, pp. 255–266. [Google Scholar]
  38. Blondel, P. The Handbook of Sidescan Sonar; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  39. Rahman, S.; Li, A.Q.; Rekleitis, I. Svin2: An underwater slam system using sonar, visual, inertial, and depth sensor. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 1861–1868. [Google Scholar]
  40. Chen, Z.; Zhang, Z.; Dai, F.; Bu, Y.; Wang, H. Monocular vision-based underwater object detection. Sensors 2017, 17, 1784. [Google Scholar] [CrossRef] [PubMed]
  41. Ortiz, A.; Simó, M.; Oliver, G. A vision system for an underwater cable tracker. Mach. Vis. Appl. 2002, 13, 129–140. [Google Scholar] [CrossRef]
  42. Piskur, P. Strouhal Number Measurement for Novel Biomimetic Folding Fins Using an Image Processing Method. J. Mar. Sci. Eng. 2022, 10, 484. [Google Scholar] [CrossRef]
  43. Hożyń, S. An Automated System for Analysing Swim-Fins Efficiency. NAŠE MORE Znan. časopis More Pomor. 2020, 67, 10–17. [Google Scholar]
  44. Szymak, P.; Piskur, P.; Naus, K. The effectiveness of using a pretrained deep learning neural networks for object classification in underwater video. Remote Sens. 2020, 12, 3020. [Google Scholar] [CrossRef]
  45. Szymak, P.; Gasiorowski, M. Using Pretrained AlexNet deep learning neural network for recognition of underwater objects. NAŠE MORE Znan. časopis More Pomor. 2020, 67, 9–13. [Google Scholar] [CrossRef] [Green Version]
  46. Jin, Y.; Wen, S.; Shi, Z.; Li, H. Target Recognition and Navigation Path Optimization Based on NAO Robot. Appl. Sci. 2022, 12, 8466. [Google Scholar] [CrossRef]
  47. Hożyń, S.; Żak, B. Stereo Vision System for Vision-Based Control of Inspection-Class ROVs. Remote Sens. 2021, 13, 5075. [Google Scholar] [CrossRef]
  48. Praczyk, T.; Hożyń, S.; Bodnar, T.; Pietrukaniec, L.; Błaszczyk, M.; Zabłotny, M. Concept and first results of optical navigational system. Trans. Marit. Sci. 2019, 8, 46–53. [Google Scholar] [CrossRef]
  49. Okamoto, A.; Sasano, M.; Seta, T.; Hirao, S.C.; Inaba, S. Deployment of the auv hobalin to an active hydrothermal vent field with an improved obstacle avoidance system. In Proceedings of the 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans (OTO), Kobe, Japan, 28–31 May 2018; pp. 1–6. [Google Scholar]
  50. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  51. Karimanzira, D.; Renkewitz, H.; Shea, D.; Albiez, J. Object detection in sonar images. Electronics 2020, 9, 1180. [Google Scholar] [CrossRef]
  52. Rizzini, D.L.; Kallasi, F.; Oleari, F.; Caselli, S. Investigation of vision-based underwater object detection with multiple datasets. Int. J. Adv. Robot. Syst. 2015, 12, 77. [Google Scholar] [CrossRef] [Green Version]
  53. Hożyń, S.; Zalewski, J. Shoreline Detection and Land Segmentation for Autonomous Surface Vehicle Navigation with the Use of an Optical System. Sensors 2020, 20, 2799. [Google Scholar] [CrossRef]
  54. Reed, S.; Petillot, Y.; Bell, J. An automatic approach to the detection and extraction of mine features in sidescan sonar. IEEE J. Ocean. Eng. 2003, 28, 90–105. [Google Scholar] [CrossRef] [Green Version]
  55. Cao, X.; Ren, L.; Sun, C. Research on Obstacle Detection and Avoidance of Autonomous Underwater Vehicle Based on Forward-Looking Sonar. IEEE Trans. Neural Netw. Learn. Syst. 2022. [Google Scholar] [CrossRef]
  56. Sheng, M.; Tang, S.; Wan, L.; Zhu, Z.; Li, J. Fuzzy Preprocessing and Clustering Analysis Method of Underwater Multiple Targets in Forward Looking Sonar Image for AUV Tracking. Int. J. Fuzzy Syst. 2020, 22, 1261–1276. [Google Scholar] [CrossRef]
  57. Bharti, V.; Lane, D.; Wang, S. Robust subsea pipeline tracking with noisy multibeam echosounder. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–5. [Google Scholar]
  58. Tan, K.; Xu, X.; Bian, H. The application of NDT algorithm in sonar image processing. In Proceedings of the 2016 IEEE/OES China Ocean Acoustics (COA), Harbin, China, 9–11 January 2016; pp. 1–4. [Google Scholar]
  59. Al-Haj, A. Wavelets pre-processing of Artificial Neural Networks classifiers. In Proceedings of the 2008 5th International Multi-Conference on Systems, Signals and Devices, Amman, Jordan, 20–22 July 2008; pp. 1–5. [Google Scholar]
  60. Lee, M.; Kim, J.; Yu, S.C. Robust 3d shape classification method using simulated multi view sonar images and convolutional nueral network. In Proceedings of the OCEANS 2019-Marseille, Marseille, France, 17–20 June 2019; pp. 1–5. [Google Scholar]
  61. Dura, E.; Rakheja, S.; Honghai, L.; Kolev, N. Image processing techniques for the detection and classification of man made objects in side-scan sonar images. In Sonar Systems; InTech: Makati, Philippines, 2011. [Google Scholar]
  62. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man, Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  63. Żak, B.; Hożyń, S. Segmentation Algorithm Using Method of Edge Detection. Solid State Phenomena 2013, 196, 206–211. [Google Scholar] [CrossRef]
  64. Krishna, K.; Murty, M.N. Genetic K-means algorithm. IEEE Trans. Syst. Man, Cybern. Part B (Cybern.) 1999, 29, 433–439. [Google Scholar] [CrossRef] [PubMed]
  65. Wang, X.; Wang, Z.; Sheng, M.; Li, Q.; Sheng, W. An adaptive and opposite K-means operation based memetic algorithm for data clustering. Neurocomputing 2021, 437, 131–142. [Google Scholar] [CrossRef]
  66. Kato, Z.; Zerubia, J. Markov random fields in image segmentation. Found. Trends® Signal Process. 2012, 5, 1–155. [Google Scholar] [CrossRef]
  67. Mignotte, M.; Collet, C.; Pérez, P.; Bouthemy, P. Three-class Markovian segmentation of high-resolution sonar images. Comput. Vis. Image Underst. 1999, 76, 191–204. [Google Scholar] [CrossRef] [Green Version]
  68. Goyal, M. Morphological image processing. IJCST 2011, 2, 59. [Google Scholar]
  69. Horner, D.; Healey, A.; Kragelund, S. AUV experiments in obstacle avoidance. In Proceedings of the OCEANS 2005 MTS/IEEE, Washington, DC, USA, 17–23 September 2005; pp. 1464–1470. [Google Scholar]
  70. Pebody, M. Autonomous underwater vehicle collision avoidance for under-ice exploration. Proc. Inst. Mech. Eng. Part M J. Eng. Marit. Environ. 2008, 222, 53–66. [Google Scholar] [CrossRef]
  71. McPhail, S.D.; Furlong, M.E.; Pebody, M.; Perrett, J.; Stevenson, P.; Webb, A.; White, D. Exploring beneath the PIG Ice Shelf with the Autosub3 AUV. In Proceedings of the Oceans 2009-Europe, Bremen, Germany, 11–14 May 2009; pp. 1–8. [Google Scholar]
  72. Teo, K.; Ong, K.W.; Lai, H.C. Obstacle detection, avoidance and anti collision for MEREDITH AUV. In Proceedings of the OCEANS 2009, Biloxi, MS, USA, 26–29 October 2009; pp. 1–10. [Google Scholar]
  73. Guerrero-González, A.; García-Córdova, F.; Gilabert, J. A biologically inspired neural network for navigation with obstacle avoidance in autonomous underwater and surface vehicles. In Proceedings of the OCEANS 2011 IEEE-Spain, Santander, Spain, 6–9 June 2011; pp. 1–8.
  74. Millar, G. An obstacle avoidance system for autonomous underwater vehicles: A reflexive vector field approach utilizing obstacle localization. In Proceedings of the 2014 IEEE/OES Autonomous Underwater Vehicles (AUV), Oxford, MS, USA, 6–9 October 2014; pp. 1–4.
  75. Hernández, J.D.; Vidal, E.; Vallicrosa, G.; Galceran, E.; Carreras, M. Online path planning for autonomous underwater vehicles in unknown environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1152–1157.
  76. Xu, H.; Gao, L.; Liu, J.; Wang, Y.; Zhao, H. Experiments with obstacle and terrain avoidance of autonomous underwater vehicle. In Proceedings of the OCEANS 2015-MTS/IEEE Washington, Washington, DC, USA, 19–22 October 2015; pp. 1–4.
  77. Braginsky, B.; Guterman, H. Obstacle avoidance approaches for autonomous underwater vehicle: Simulation and experimental results. IEEE J. Ocean. Eng. 2016, 41, 882–892.
  78. Okamoto, A.; Sasano, M.; Seta, T.; Inaba, S.; Sato, K.; Tamura, K.; Nishida, Y.; Ura, T. Obstacle avoidance method appropriate for the steep terrain of the deep seafloor. In Proceedings of the 2016 Techno-Ocean (Techno-Ocean), Kobe, Japan, 6–8 October 2016; pp. 195–198.
  79. McEwen, R.S.; Rock, S.P.; Hobson, B. Iceberg wall following and obstacle avoidance by an AUV. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–8.
  80. Hernández, J.D.; Vidal, E.; Moll, M.; Palomeras, N.; Carreras, M.; Kavraki, L.E. Online motion planning for unexplored underwater environments using autonomous underwater vehicles. J. Field Robot. 2019, 36, 370–396.
  81. Xanthidis, M.; Karapetyan, N.; Damron, H.; Rahman, S.; Johnson, J.; O’Connell, A.; O’Kane, J.M.; Rekleitis, I. Navigation in the presence of obstacles for an agile autonomous underwater vehicle. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 892–899.
  82. Zhang, H.; Zhang, S.; Wang, Y.; Liu, Y.; Yang, Y.; Zhou, T.; Bian, H. Subsea pipeline leak inspection by autonomous underwater vehicle. Appl. Ocean Res. 2021, 107, 102321.
  83. Qin, R.; Zhao, X.; Zhu, W.; Yang, Q.; He, B.; Li, G.; Yan, T. Multiple receptive field network (MRF-Net) for autonomous underwater vehicle fishing net detection using forward-looking sonar images. Sensors 2021, 21, 1933.
  84. An, R.; Guo, S.; Zheng, L.; Hirata, H.; Gu, S. Uncertain moving obstacles avoiding method in 3D arbitrary path planning for a spherical underwater robot. Robot. Auton. Syst. 2022, 151, 104011.
Figure 1. Workflow of ODS operation in conjunction with path planning and collision avoidance algorithms. The image of the environment obtained from detection is processed using image processing techniques. The AUV path is then determined according to the positions of obstacles, and collision avoidance maneuvers are performed, taking into account the design constraints of the vehicle.
Figure 2. An example of a frequency-modulated signal, called a chirp. This type of sounding signal is commonly used in high-precision sonars. Because the frequency differs at each point of the pulse, it provides high range resolution and low susceptibility to interference.
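A linear FM pulse such as the one in Figure 2 can be generated numerically as a minimal sketch; the sampling rate, pulse duration, and sweep band below are illustrative assumptions, not parameters of any particular sonar:

```python
import numpy as np

# Assumed pulse parameters -- actual values depend on the sonar design.
fs = 192_000             # sampling rate [Hz]
T = 0.01                 # pulse duration [s]
f0, f1 = 20_000, 60_000  # sweep start / stop frequencies [Hz]

t = np.arange(int(fs * T)) / fs         # time axis, exactly fs*T samples
k = (f1 - f0) / T                       # linear sweep rate [Hz/s]
phase = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
chirp = np.cos(phase)                   # linear FM ("chirp") pulse
```

The instantaneous frequency, the derivative of the phase divided by 2π, rises linearly from f0 to f1 over the pulse, which is what gives each point of the pulse a different frequency.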
Figure 3. Examples of noise in sonar imaging. The red marks highlight disturbances caused by reflections combined with the high angular resolution of the spatial scan. To avoid this type of interference, pre-processing operations must be performed together with tuning of the detection parameters [own source].
Figure 4. An example of mean filter application in the pre-processing step. This method replaces each pixel with the mean value of its neighbors within a given filter window. A 5 × 5 pixel window was used in this case [own source].
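The mean filtering described in the caption of Figure 4 can be sketched as follows; the explicit double loop and the synthetic 5 × 5 test image are for illustration only:

```python
import numpy as np

def mean_filter(img, size=5):
    """Replace each pixel with the mean of its size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders at the edges
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# Synthetic "sonar image": a single bright speckle on a dark background.
ping = np.zeros((5, 5))
ping[2, 2] = 25.0
smoothed = mean_filter(ping, size=5)  # the speckle energy is spread over the window
```

In practice a vectorized implementation (e.g. a uniform filter from an image processing library) would be used; the sketch only makes the averaging operation explicit.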
Figure 5. An example of determining the ROI. This pre-processing technique reduces the image area to speed up subsequent processing [own source].
Figure 6. An example of threshold segmentation in sonar image processing. In this case, threshold segmentation was not preceded by any pre-processing operation (except determining the ROI); therefore, many spurious pixel groups can be seen in the result [own source].
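The ROI restriction (Figure 5) and threshold segmentation (Figure 6) steps can be combined in a minimal sketch; the threshold value and the toy intensity frame below are assumptions that would have to be tuned to the sonar and the environment:

```python
import numpy as np

def segment(img, thresh, roi=None):
    """Crop to an optional region of interest, then binarise by threshold.

    roi = (row0, row1, col0, col1). Pixels above `thresh` become obstacle
    candidates (1); the rest become background (0).
    """
    if roi is not None:
        r0, r1, c0, c1 = roi
        img = img[r0:r1, c0:c1]       # restrict processing to the ROI
    return (img > thresh).astype(np.uint8)

# Toy intensity frame: strong echoes (>100) mark an obstacle column.
frame = np.array([[10, 200, 30],
                  [40, 180, 20],
                  [ 5,  15, 25]])
mask = segment(frame, thresh=100, roi=(0, 2, 0, 3))  # drop the bottom row
```

Without prior filtering, isolated noise pixels above the threshold survive as spurious groups, which is exactly the effect visible in Figure 6.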
Figure 7. An example of mean filtering followed by threshold segmentation in sonar image processing. Applying the mean filter before threshold segmentation removes false positives related to reflections and noise. This example shows the importance of pre-processing for better image processing results [own source].
Figure 8. An example of edge detection in sonar image processing. The measurement was performed in a pool in which a cube-shaped object was placed, with one side wall of the cube perpendicular to the axis of the beam generated by the sonar [own source].
Figure 9. An example of image processing of sonar imagery of a cylinder-shaped object, measured at Lake Wysockie near Gdańsk (Poland). In measurements performed in a real environment, the impact of environmental instability on the final result of image processing is noticeable [own source].
Figure 10. Effectiveness of ODSs over time, where 1 denotes low, 2 medium, and 3 high efficiency. A steady improvement of ODSs is noticeable, driven mainly by the increasing computing power of control systems.
Figure 11. The number of surveys by the sensor used to perceive the environment. Owing to the low attenuation of acoustic signals in the underwater environment, sonar is the most popular sensor choice for ODSs.
Figure 12. The number of surveys by ODS effectiveness. The nearly equal numbers of low-, medium-, and high-efficiency solutions reflect the continuous development of ODSs.
Table 1. The main surveys connected with obstacle detection methods for robots.
| Source | Year | Field of Analysis | Main Focus |
|---|---|---|---|
| [3] | 2014 | Ground robots (UGVs) | Image processing, obstacle detection, and collision avoidance algorithms for UGVs. |
| [7] | 2017 | Aerial robots (UAVs) | Vision-based applications for UAVs, including visual odometry, obstacle detection, mapping, and localization. |
| [9] | 2019 | Aerial robots (UASs) | Deep learning methods for UASs in the context of obstacle detection and collision avoidance. |
| [11] | 2020 | Marine robots (UUVs) | Recent developments in applying deep learning algorithms to sonar automatic target recognition, tracking, and detection for UUVs. |
| [12] | 2020 | Marine robots (AUVs) | Application of deep learning methods in underwater image analysis and description of the main underwater target recognition methods for AUVs. |
| [4] | 2020 | Ground robots (AVs) | Advancements in obstacle detection systems for AVs. |
| [5] | 2020 | Ground robots (AVs) | AV development with regard to obstacle detection and track detection. |
| [6] | 2021 | Ground robots (AVs) | Review of obstacle detection and avoidance approaches for AVs. |
| [13] | 2021 | Marine robots (UUVs, USVs) | Underwater mine detection and classification techniques based on sonar imagery and classical image processing, machine learning, and deep learning methods. |
| [14] | 2021 | Marine robots (UUVs) | Data acquisition technology in the underwater acoustic detection field, covering passive, active, and collaborative detection. |
| [8] | 2022 | Robots (UAVs, AVs) | Vision-based obstacle detection algorithms, mainly for UAVs and also for autonomous vehicles. |
| [10] | 2022 | Marine robots (AUVs) | Deep learning approaches to automatic target recognition (ATR) with side-scan sonar and synthetic-aperture sonar imagery for AUVs. |
| This review | 2022 | Marine robots (AUVs) | Obstacle detection systems integrated with path planning and collision avoidance systems in AUVs tested in a real environment. |
Table 2. Evaluation of AUV Obstacle Detection Systems capable of collision avoidance.
| Source | Year | Hardware Used in ODS | Effectiveness | Main Properties |
|---|---|---|---|---|
| [69] | 2005 | Forward-looking sonar | Low | Sonar resolution 491 × 198; binary image by thresholding; erosion; determination of the bottom slope; feature extraction; determination of ROI; calculation of obstacle contours and surface; Kalman filter tracking |
| [70,71] | 2008 | Echosounder | Low | Detection margin depends on the AUV speed; filtering out false reflections from the surface; indication of possible surface icing during ascent; limited-headroom indication based on comparison of echosounder data with data from other sensors (immersion, inclination) |
| [72] | 2009 | Blazed-array multibeam sonar | Medium | Pre-processing by specifying a background threshold level; median filtering; morphology: erosion, dilation, edge detection |
| [73] | 2011 | Vision camera, imaging sonar, echosounder, side-scan sonar | Low | Detection based on collision sonar or proximity sensor; obstacles detected closer than a specified distance activate the obstacle avoidance algorithms |
| [74] | 2014 | 3 single-beam ranging sonars | Medium | Bottom tracking with an echosounder; avoidance of noise from cooperating devices using a ping synchronization scheme; threshold segmentation |
| [75] | 2015 | 5 echosounders | Medium | Octree-based representation of points in space based on data obtained from echosounders working as single-beam sensors |
| [76] | 2015 | Multibeam sonar | High | Filtering: median and mean filtering; segmentation: fuzzy K-means clustering algorithm; morphological processing: binarization |
| [77] | 2016 | 2 forward-looking sonars | High | Speckle noise suppression; local image histogram entropy; hysteretic entropy threshold; feature extraction: edge detection |
| [18] | 2016 | Obstacles are simulated | - | No obstacle detection system; real-world tests of only the collision avoidance and path planning algorithms, without obstacle detection |
| [78] | 2016 | 2 cameras, 2 line lasers | Low | Morphological filter to extract features under uneven luminance; top-hat transformation (opening, subtraction); binarization by thresholding; description by fast-label method; removal of pixel groups smaller than 80; obstacle classification decided on the basis of five or more images |
| [49] | 2018 | Forward-looking sonar, 2 cameras, 2 line lasers | Medium | Solution based on the sensors and algorithms used in [78]; FLS added to increase the ODS operating range |
| [79] | 2018 | 2 forward-looking sonars | Medium | 210° FOV; an obstacle is detected when it yields five or more returns or when the tracked wall is in front of the AUV |
| [80] | 2019 | Mechanical scanning imaging sonar, 4 echosounders, 3 cameras | High | Threshold segmentation; real-time operation; extensive numerical and practical tests executed |
| [81] | 2020 | 3 cameras | High | SVIn2 visual-inertial state estimation system based on visual data augmented with IMU sensor data; histogram equalization for contrast adjustment; extraction of visual objects with a high feature density from a point cloud |
| [83] | 2021 | Multibeam forward-looking sonar | High | Gray stretching and threshold segmentation; normalization methods; deep reinforcement learning for obstacle detection and avoidance; mixup learning strategy |
| [84] | 2021 | Camera | High | Determination of image features such as intensity, color, contrast, and light transmission contrast; global contrast calculation; ROI detection; threshold-based segmentation |
| [82] | 2021 | Multibeam echosounder, forward-looking sonar | Medium | Threshold segmentation method; edge detection of the object |
Share and Cite

Kot, R. Review of Obstacle Detection Systems for Collision Avoidance of Autonomous Underwater Vehicles Tested in a Real Environment. Electronics 2022, 11, 3615. https://doi.org/10.3390/electronics11213615