Article

IRNLGD: An Edge Detection Algorithm with Comprehensive Gradient Directions for Tidal Stream Turbine

1 Logistics Engineering College, Shanghai Maritime University, Pudong District, Shanghai 201306, China
2 Leshan Shawan Power Supply Branch, State Grid Sichuan Electric Power Company, Leshan 614900, China
3 Shanghai Power Industrial & Commercial Co., Ltd., State Grid Shanghai Municipal Electric Power Company, Huangpu District, Shanghai 200001, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(3), 498; https://doi.org/10.3390/jmse12030498
Submission received: 21 February 2024 / Revised: 9 March 2024 / Accepted: 13 March 2024 / Published: 17 March 2024
(This article belongs to the Special Issue Offshore Renewable Energy)

Abstract:
Tidal stream turbines (TSTs) harness the kinetic energy of tides to generate electricity by rotating the rotor. Biofouling will lead to an imbalance between the blades, resulting in imbalanced torque and voltage across the windings, ultimately polluting the grid. Therefore, rotor condition monitoring is of great significance for the stable operation of the system. Image-based attachment detection algorithms provide the advantage of visually displaying the location and area of faults. However, due to the limited availability of data from multiple machine types and environments, it is difficult to ensure the generalization of the network. Additionally, TST images degrade, resulting in reduced image gradients and making it challenging to extract edge and other features. In order to address the issue of limited data, a novel non-data-driven edge detection algorithm, indexed resemble-normal-line guidance detector (IRNLGD), is proposed for TST rotor attachment fault detection. Aiming to solve the problem of edge features being suppressed, IRNLGD introduces the concept of “indexed resemble-normal-line direction” and integrates multi-directional gradient information for edge determination. Real-image experiments demonstrate IRNLGD’s effectiveness in detecting TST rotor edges and faults. Evaluation on public datasets shows the superior performance of our method in detecting fine edges in low-light images.

1. Introduction

Climate change and energy shortages are two pressing issues on the global agenda. Human activities have driven the global warming trend observed since the mid-20th century, and the effects are irreversible and will worsen in the decades to come. Renewable energy sources, which are cleaner and more efficient, could play a pivotal role in sustainable development. The theoretical capacity of tidal energy in China reaches 110 GW [1], indicating vast prospects for development. Its underwater generator, the tidal stream turbine (TST), does not occupy land resources and avoids noise and visual pollution [2,3]. A TST can provide power comparable to that of a much larger wind turbine [4]. These characteristics make tidal stream energy attractive for electric power generation. However, TSTs are susceptible to factors such as biofouling [5], sudden changes in instantaneous flow velocity [6], seawater corrosion [7], cavitation, and turbulence. These factors can result in mechanical faults or blade damage, leading to torque imbalances and voltage fluctuations, which significantly affect power generation quality, efficiency, and grid stability [8,9,10]. Biofouling can increase device maintenance time and structural loading [11]; there is also a biosecurity risk, since immersed devices can serve as potential vectors for the introduction of non-native species [12]. Therefore, timely and effective detection of TST rotor attachment is of significant importance.
Current methods for TST rotor attachment detection primarily rely on electrical and image signals. Electrical signal-based methods, usually employing time–frequency or statistical analysis and machine learning, have proven somewhat effective in detecting unbalanced rotor attachments [13,14,15,16]. However, these methods are susceptible to sea state and struggle to detect balanced attachments. Image-based methods overcome these shortcomings by analyzing attachment conditions directly from underwater TST images. They focus on classification and semantic segmentation networks. Zheng et al. [17] carried out attachment detection using an improved sparse autoencoder (SAE) and Softmax Regression (SR) method. They collected TST attachment images and divided the data based on the degree of attachment. Their method achieved higher accuracy than the traditional principal component analysis (PCA) feature extraction algorithm. However, they only considered TSTs in a static state, an idealized condition not reflective of reality. Therefore, Xin et al. [18] collected TST images under operational conditions to make the dataset more representative of real-world scenarios. The data were then classified using a depthwise separable convolutional neural network (CNN), which achieved higher recognition accuracy than SAE+SR and a reasonable computational cost compared with large deep networks such as ResNet. However, classification-based methods fail to precisely localize and display the biofouling. Peng et al. [19] proposed a semantic segmentation network to identify TST blade attachments, which identifies the TST blade edge by constructing two branches for coarse and fine segmentation. However, it requires a large amount of labeled data, high computational costs, and a long training time. Peng et al. [20] proposed an image generation method, which extended and generated labeled data to reduce the workload of manual labeling.
The proposed C-SegNet further improved segmentation accuracy but performed poorly in contour localization. To address the localization misalignment caused by motion blur, Qi et al. [21] combined preceding and succeeding frames for fault detection, significantly improving localization accuracy on blurred images. However, a remaining challenge is insufficient data diversity, which leads to inadequate network generalization. Limited data restrict data-driven algorithms to specific environments, posing a challenge for the application of TST attachment fault detection algorithms. Moreover, the severe degradation of underwater images impairs the extraction of features such as edges.
To address the above problems, this paper proposes a non-data-driven edge detection algorithm, the indexed resemble-normal-line guidance detector (IRNLGD), for TST attachment fault detection. To tackle the difficulty of extracting edge features from degraded underwater images, we employ an eight-directional gradient operator to extract image gradients. Next, to better utilize gradient direction information, we introduce the concept of the “indexed resemble-normal-line direction” and calibrate the gradient directions based on the trend of gradient changes. Finally, edge points are determined through the joint evaluation of gradient direction and magnitude. Experimental results on the TST dataset and public datasets demonstrate IRNLGD’s effectiveness on dim images. Moreover, a TST attachment fault detection method is explored on the basis of IRNLGD: a two-level detection is carried out with a data-driven lightweight classification network and the non-data-driven IRNLGD, combining the advantages of both algorithms to provide more reliable and precise fault detection results.
The main contributions of this paper are as follows:
  • An effective edge detection algorithm, IRNLGD, is proposed to extract edges from low-contrast images.
  • A TST attachment fault detection method, MobileNet-IRNLGD, is explored, which combines data-driven and non-data-driven algorithms to strike a balance between limited data and precise detection.
  • The proposed TST attachment fault detection method is applied to real TST images, demonstrating its feasibility for engineering applications.
The remainder of the paper is organized as follows: In Section 2, we review the related work on edge detection. In Section 3, the proposed method is introduced in detail. Then, the experimental results and analysis are presented in Section 4. Finally, we give the conclusions of this study in Section 5.

2. Related Work

The TST images captured by underwater sensors degrade due to the environment. According to the classical Jaffe–McGlamery underwater imaging model [22,23], the received light by the camera can be divided into three components, i.e., a direct, forward-scattered, and back-scattered component. The degradation process can be simplified as follows: an increase in the camera–object distance leads to direct component attenuation, resulting in low contrast; forward scattering causes convolution of the point spread function with scene radiance, causing blurring; and increased background light due to backward scattering amplifies image noise. These factors collectively drive the degradation of underwater images, resulting in a smaller rate of intensity variation, making it challenging to extract features such as edges.
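In a commonly used simplified form of this model (a sketch; the symbols here are ours and notation varies across the literature), the total irradiance received by the camera is the sum of the three components, with the direct component attenuating exponentially with the camera–object distance:

```latex
E_T = \underbrace{J\, e^{-c d}}_{\text{direct}}
    + \underbrace{E_f}_{\text{forward scatter}}
    + \underbrace{E_b}_{\text{back scatter}},
```

where $J$ is the scene radiance, $c$ the attenuation coefficient of the water, and $d$ the camera–object distance.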
Edge detection can significantly reduce the irrelevant information in an image, allowing essential structures to be preserved. Edge detection has a rich history and can be divided into the following two categories: handcrafted-based methods and machine learning-based methods [24]. The two main streams have achieved satisfactory performance in edge detection tasks. However, they still face challenges when dealing with specific tasks such as blurred or dim images.

2.1. Handcrafted-Based Edge Detection Methods

The existing handcrafted-based edge detection methods extract spatial, geometric, and other features from the image and perform operations such as gradient calculation or feature statistics to achieve edge detection. Gradient-based detectors, such as Sobel [25], Laplacian [26], and Canny [27], perform first- or second-order gradient calculations on the image and combine gradient values with directions to detect edges. Gonzalez et al. [28] combined gradients with fuzzy logic theory to detect edges in color images, but the gradient information is not fully used. Ranjan et al. [29] applied guided image filtering before edge detection. Their strategy achieves satisfactory results, but the performance may fluctuate with the manually selected parameters. Ma et al. [30] proposed a two-level edge detection method, combined with a back-propagation (BP) network, to measure concrete surface roughness. However, low-quality images can pose a challenge to the algorithm’s accuracy. According to the analysis of underwater image degradation, the TST images captured by the camera are severely blurred. The pixel values of a blurred image are a weighted sum of multiple pixels in the neighborhood of the clear image. Therefore, the image gradient decreases, making it difficult to extract edge features. Gradient-based edge detection operators often focus on two or four gradient directions, neglecting the possibility of other gradient directions. When determining edge points, only the information of the maximum or dominant gradient direction is considered, limiting the contribution of gradients in other directions. These methods do not fully utilize gradient direction information, which leads to the failure of edge detection in low-contrast and blurred images, such as underwater images, as shown in Figure 1b–d.

2.2. Machine Learning-Based Edge Detection Methods

The machine learning-based methods excel at learning features from a large number of samples, thereby better capturing edge structures [24,31,32]. Inspired by the idea that edge patches have obvious local structures, Dollár et al. [33] introduced structured learning into edge detection, proposing the structured edge (SE) detector. This method demonstrates satisfactory performance in both accuracy and real-time processing. Thanks to the remarkable performance of deep learning on images, it has gained popularity in edge detection tasks. Inspired by fully convolutional networks, Xie et al. [34] proposed an end-to-end edge detection algorithm, Holistically-Nested Edge Detection (HED), which utilizes multi-scale features and deep supervision to improve the localization accuracy of edge pixels. Building on the HED style, Liu et al. [35] used relaxed deep supervision (RDS) to guide intermediate layer learning, incorporating operators like Canny and SE to facilitate the learning of edge features. Considering the commonality between edge detection and image segmentation, Yang et al. [36] introduced the U-Net [37] structure into edge detection tasks, proposing a fully convolutional encoder–decoder network (CEDN). Zou et al. [38] constructed DeepCrack based on SegNet [39], utilizing multi-scale feature fusion for crack detection. Su et al. [40] combined traditional difference operators with convolution and introduced the Pixel Difference Network (PiDiNet). It focuses on edge-related features, achieving a better trade-off between accuracy and efficiency for edge detection. The machine learning-based methods have demonstrated impressive capabilities, but their performance relies on sufficiently comprehensive datasets. Their accuracy may not be guaranteed on limited samples.
Figure 1. The comparison of different algorithms on underwater TST image: (a) original image; (b) Canny; (c) Roberts; (d) Sobel; (e) SE; (f) HED [34]; (g) PiDiNet [40]; (h) IRNLGD. The red boxes indicate the dim region.
In summary, current edge detection algorithms have achieved good results but still face challenges with low-quality images. To address the problems mentioned above, and inspired by classical gradient-based operators, we propose an edge detection algorithm that requires no training. It captures additional gradient cues and combines gradient magnitudes with directions to determine edges, effectively preserving edges in low-contrast images. As shown in Figure 1, most handcrafted-based methods fail to detect the contour of the TST, and none of the compared methods detect the edges in the dim regions indicated by the red boxes in Figure 1b,f,g. Only the proposed method detects both the contour of the TST and the edges between the attachment and the blade in the dim region (Figure 1h). A detailed introduction is given in Section 3.

3. Indexed Resemble-Normal-Line Guidance Detector

The edge detection algorithm proposed in this paper is divided into three steps: (1) The eight-direction gradients of the input image are calculated to form a gradient matrix. (2) The direction is calibrated according to the pairwise gradient magnitude, and the indexed resemble-normal-line direction is obtained according to the calibrated direction matrix. (3) The edge point determination is performed by combining the gradient magnitude and the indexed resemble-normal-line direction. In the following, we will introduce the proposed method in detail and return to its application in Section 4.

3.1. Calculation of Eight-Direction Gradients

Gradients contain rich image information, and edge detection tasks can be performed using gradients. Traditional operators typically focus on two or four directions when computing gradients, thereby losing information from other directions. To obtain comprehensive multi-directional gradient information, an adjustable eight-direction gradient calculation method is proposed. The gradient magnitude for the first direction is calculated by
$$ g_1 = \gamma \left[ f(x,y) - f(x-1,y-1) \right] = \gamma \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \ast \begin{bmatrix} f(x-1,y-1) & f(x,y-1) & f(x+1,y-1) \\ f(x-1,y) & f(x,y) & f(x+1,y) \\ f(x-1,y+1) & f(x,y+1) & f(x+1,y+1) \end{bmatrix}, \quad (1) $$
where f(x,y) denotes the intensity of pixel (x,y). We design an eight-directional gradient operator, sequentially rotating 45° from the first direction. According to Equation (1), the eight-directional gradient operators are given by:
$$ C_1 = \gamma \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad C_2 = \gamma \begin{bmatrix} 0 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad C_3 = \gamma \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad C_4 = \gamma \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}, $$
$$ C_5 = \gamma \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \quad C_6 = \gamma \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 0 \end{bmatrix}, \quad C_7 = \gamma \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix}, \quad C_8 = \gamma \begin{bmatrix} 0 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad (2) $$
where γ is the gradient amplification coefficient. It is introduced to amplify the gradient magnitudes without affecting subsequent gradient comparisons and direction calibration. Since the edge pixel intensity is based on the gradient magnitude, it is necessary to appropriately amplify the gradient values to enhance the edge. However, the calculation may overflow if γ is too large. Here, we take γ = 2 .
For the input image I, the gradient in each direction is obtained by convolving I with the corresponding gradient operator. Therefore, according to Equation (2), the gradient matrix can be written as G = [G_1, …, G_8], where
$$ G_k = C_k \ast I, \quad k = 1, 2, \ldots, 8. \quad (3) $$
Notice that G_k ∈ ℝ^{H×W}. G records the pixelwise eight-direction gradient magnitudes of the input image, and the index k indicates the corresponding direction. The gradient matrix G provides the basic information for direction calibration and edge discrimination.
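As a concrete illustration, the eight-direction gradient computation can be sketched in NumPy (a minimal sketch, not the paper's code; the function name and the (row, column) indexing convention are ours):

```python
import numpy as np

# (row, col) offsets of the subtracted neighbour for C_1..C_8,
# rotating 45 degrees per step from the upper-left (our indexing convention)
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def gradient_matrix(image, gamma=2):
    """Return G of shape (H, W, 8): pixelwise eight-direction gradients.

    G[..., k] = gamma * (f(x, y) - f(neighbour_k)); the image border is
    replicated so every pixel has a full 3 x 3 neighbourhood.
    """
    img = np.asarray(image, dtype=float)
    H, W = img.shape
    padded = np.pad(img, 1, mode='edge')
    G = np.empty((H, W, 8))
    for k, (dr, dc) in enumerate(OFFSETS):
        # neighbour[x, y] == img[x + dr, y + dc] (edge-replicated)
        neighbour = padded[1 + dr:1 + dr + H, 1 + dc:1 + dc + W]
        G[..., k] = gamma * (img - neighbour)
    return G
```

For a vertical step edge, only the three directions that point across the step produce non-zero gradients, which is exactly the multi-directional cue the calibration step exploits.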

3.2. Indexed Resemble-Normal-Line Direction Calibration

In order to further utilize the cues of the gradient matrix, we introduce gradient directions in the form of recalibration. In classical algorithms, the role of gradient direction in edge detection tasks is to determine the edge direction and filter out pixels outside the specified direction. In this process, only the information of the maximum gradient direction or dominant direction is considered, resulting in the loss of information from other directions. Therefore, a novel gradient direction calibration method is proposed to preserve more useful cues.
According to the eight-directional gradients mentioned above, the calibrated direction is given by
$$ D_{x,y,k} = \mathrm{sgn}\left( G_{x,y,k} - G_{x,y,k+4} \right), \qquad D_{x,y,k+4} = -\mathrm{sgn}\left( G_{x,y,k} - G_{x,y,k+4} \right), \quad (4) $$
where D ∈ ℝ^{H×W×8} is the calibrated direction matrix, and D_{x,y,k} denotes the k-th (k ≤ 4) calibrated direction of the pixel (x,y). D consists of −1, 0, and 1, indicating the trend of change from a pixel to its neighborhood. When the intensity increases, the corresponding direction is calibrated as 1.
More specifically, assume a 3 × 3 neighborhood for pixel (x,y) and calculate the eight gradient magnitudes G_1, …, G_8. The gradient direction calibration process is as follows:
  • Compare the magnitudes of each pair of gradients at 180° to each other;
  • Calibrate the directions to 1, 0, or −1, respectively, based on Equation (4). The calibration is carried out simultaneously for both directions due to the pairwise comparison;
  • Iterate over all gradient magnitudes in the neighborhood.
Following the idea of obtaining multi-directional gradient information, the set of directions calibrated as 1 in D x , y is defined as the indexed resemble-normal-line direction of the pixel x , y , marked as r x , y , i.e.,
$$ r_{x,y} = \left\{ k \mid D_{x,y,k} = 1 \right\}. \quad (5) $$
r_{x,y} is a vector of indefinite length l (l ∈ {1, 2, 3, 4}). It records the directions of intensity growth. Different from traditional methods that use single-direction information, the indexed resemble-normal-line direction records all directions in which the intensity has an increasing trend. It preserves more gradient information and is beneficial for extracting edge features from degraded images. To fully utilize differential information across various directions, gradient magnitude and direction are retained to complement edge discrimination where gradient directions are not recorded in the indexed resemble-normal-line direction.
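The pairwise calibration of Equation (4) and the extraction of r(x,y) in Equation (5) can be sketched as follows (a sketch assuming the (H, W, 8) gradient matrix G described in Section 3.1; function names are illustrative):

```python
import numpy as np

def calibrate_directions(G):
    """Pairwise direction calibration, Eq. (4): D in {-1, 0, 1}, shape (H, W, 8)."""
    D = np.zeros(G.shape, dtype=int)
    for k in range(4):
        s = np.sign(G[..., k] - G[..., k + 4])
        D[..., k] = s          # direction k
        D[..., k + 4] = -s     # opposite direction, calibrated simultaneously
    return D

def resemble_normal_line_direction(D, x, y):
    """Indexed resemble-normal-line direction r(x, y), Eq. (5):
    the list of direction indices calibrated to 1."""
    return [k for k in range(8) if D[x, y, k] == 1]
```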

3.3. Edge Discrimination

Due to the characteristics of degraded underwater images, edge features such as gradients are suppressed. Traditional edge detection methods fail because they consider only partial gradient information in the image. Therefore, a multi-directional edge point determination method is proposed. Pixelwise edge discrimination is carried out by combining the gradient magnitude and the indexed resemble-normal-line direction. Inspired by the classical Gestalt cues [41] of similarity, continuity, and proximity, the following three criteria are proposed:
  • The gradient magnitude of the edge point is much larger than that of the non-edge.
  • The length of the indexed resemble-normal-line direction of the edge point satisfies len(r_{x,y}) ≥ 3.
  • Within the 3 × 3 neighborhood of the edge point, at least one pixel has the same indexed resemble-normal-line direction and a similar gradient magnitude.
Criterion 1 follows the principle of abrupt changes in intensity at edges. The gradient reflects the intensity variation, where a larger gradient magnitude indicates a more significant grayscale variation and a higher likelihood of an edge. For criterion 1, the decision threshold is calculated as follows:
For the eight-direction gradients G_{x,y} of pixel (x,y), the quadratic-sum matrix Gs is given by
$$ Gs_{x,y} = \sum_{k=1}^{8} G_{x,y,k}^{2}. \quad (6) $$
To eliminate the effect of overly small pixel values on the threshold calculation, a truncation parameter μ is set, and a dynamic truncation threshold M is obtained as
$$ M = \mu \times \max\left( Gs_{x,y} \right). \quad (7) $$
A pixel is involved in the calculation if and only if its quadratic-sum value is greater than M. Based on experimental verification, we take μ = 0.1.
Then, the threshold λ is calculated as
$$ \lambda = \frac{1}{n} \sum_{x=1}^{H} \sum_{y=1}^{W} \left\{ Gs_{x,y} \;\middle|\; Gs_{x,y} \geq M \right\}, \quad (8) $$
where n is the number of pixels involved in the calculation.
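The dynamic threshold of Equations (6)–(8) can be sketched as follows (a minimal sketch over the gradient matrix G; the function name is ours):

```python
import numpy as np

def decision_threshold(G, mu=0.1):
    """Compute the decision threshold lambda of Eq. (8).

    Gs is the pixelwise quadratic sum of the eight gradients (Eq. (6));
    only pixels whose Gs reaches the dynamic truncation threshold
    M = mu * max(Gs) (Eq. (7)) contribute to the mean defining lambda.
    """
    Gs = np.sum(np.asarray(G, dtype=float) ** 2, axis=-1)
    M = mu * Gs.max()
    involved = Gs[Gs >= M]     # the n pixels involved in the calculation
    return involved.mean()
```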
Criterion 2 ensures that the detected edge points exhibit intensity variations in as many directions as possible. This helps in locating edge points accurately and also contributes to refining the edge. Criterion 3 aims to maintain edge continuity by controlling the similarity of gradients among neighboring pixels.
In conclusion, the proposed IRNLGD proceeds through the following steps:
  • Calculating eight-direction gradients: after graying the input image, the grayscale image is convolved with the eight-direction gradient operators to calculate the gradient matrix G.
  • Indexed resemble-normal-line direction calibration: First, the gradients in opposite directions are compared. The eight directions are calibrated to −1, 0, or 1 and saved in the gradient direction calibration matrix D. Then, for each pixel, the set of channel indices with value 1 in D is denoted as the indexed resemble-normal-line direction r.
  • Edge discrimination: First, the decision threshold λ is calculated based on the gradient matrix G. Then, the edge points are judged by the proposed criteria.

3.4. MobileNet-IRNLGD TST Rotor Attachment Detection Network

In image-based TST attachment detection methods, classification methods perform category-level detection, with relatively low dependence on data diversity but insufficient precision. Segmentation algorithms, on the other hand, conduct pixel-level detection, accurately indicating attachment and blade regions, but require abundant data under various conditions to ensure accuracy. To balance limited data and high-precision detection, we explore a two-level TST rotor attachment detection method that combines the strengths of classification and segmentation: a classification network conducts first-level detection for a preliminary fault severity assessment, while a non-data-driven IRNLGD edge detection branch performs second-level detection for precise fault localization. This design combines the advantages of supervised algorithms and non-data-driven methods. The category-level detection in the first stage fully utilizes existing data priors and the strengths of a deep network to make a preliminary fault assessment, reducing the workload of manual identification. The pixel-level detection in the second stage further locates the fault on the basis of this assessment and serves as a secondary check against possible false alarms in the first stage, providing more precise fault detection results for subsequent maintenance operations.
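The two-level flow described above can be sketched as follows (the callables and labels here are illustrative stand-ins, not the paper's API):

```python
def two_level_detection(image, classifier, irnlgd):
    """Two-level TST attachment detection sketch.

    `classifier` is the data-driven first level (e.g. a MobileNet-style
    network) returning a fault-severity label; `irnlgd` is the
    non-data-driven second level returning an edge map used to localize
    the attachment. Both are passed in as callables.
    """
    severity = classifier(image)    # level 1: category-level assessment
    if severity == 'healthy':
        return severity, None       # no localization needed
    edge_map = irnlgd(image)        # level 2: pixel-level localization
    return severity, edge_map
```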
Classification allows for an initial assessment of the severity of faults. The standard convolution-based CNNs can achieve effective feature extraction but often come with a considerable parameter count, leading to constraints on hardware resources. For the convenience of practical applications, a lightweight network is necessary for category-level detection. Considering the significant reduction in computational complexity offered by depthwise separable convolution, we employed MobileNetV1 [42] as the backbone network.
Depthwise separable convolution reduces the network parameter count by decomposing a standard convolution into a depthwise convolution and a pointwise convolution. The parameter count then mainly depends on the 1 × 1 pointwise convolutions and the fully connected layer; the first standard convolution layer and the depthwise convolutions account for less than 2% of the parameters. The overall parameter count is significantly reduced compared with that of a deep CNN, which is verified by the experimental results in Section 4.
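The parameter saving can be checked with a quick count (an illustrative sketch; the layer sizes below are examples, not the paper's exact configuration):

```python
def standard_conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution layer (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k (one filter per input channel) plus 1 x 1 pointwise."""
    return k * k * c_in + c_in * c_out

# Example layer: 3 x 3 kernel, 128 input channels, 256 output channels
std = standard_conv_params(3, 128, 256)        # 294,912 weights
sep = depthwise_separable_params(3, 128, 256)  # 33,920 weights
# The ratio equals 1/c_out + 1/k^2, i.e. roughly 1/9 for 3 x 3 kernels
```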
The proposed IRNLGD is extended to achieve pixel-level attachment detection. Edge detection, which offers a semantic output, is carried out to visualize the attachment localization. The edge detection branch runs in parallel with MobileNet, and detection results are given by both branches: the data are input into the trained classification branch, while the IRNLGD branch achieves fault localization. This two-level design combines the supervised algorithm with the non-data-driven method, achieving complementarity.

4. Results and Discussion

In this section, we evaluate the performance of the proposed algorithm. First, the method is applied to the TST dataset. Then, experiments are conducted on the BSDS500 [43] and UIEB [44] datasets to evaluate its universality. The performance of the proposed method is compared with classical and state-of-the-art methods.

4.1. Experiment on TST Dataset

To demonstrate the effectiveness of the proposed method for TST rotor attachment detection, experiments were conducted on the TST dataset. The dataset is obtained by simulating TST rotor attachment faults on a marine current power generation system experiment platform and collected with an underwater camera.

4.1.1. Data Collection Experiment

To ensure sample diversity, we add more fault types and data collection environments on the basis of Ref. [17]. The data collection experiment is performed on the marine current power generation system experimental platform, whose structure and parameters are the same as in Ref. [18]. A rope is wrapped around the TST rotor to simulate biofouling, as shown in Figure 2. Different degrees of attachment are simulated by employing dry ropes of varying weights and different wrapping methods. Specifically, the degree of attachment is distinguished by the dry rope weight (20 g, 40 g, and 60 g), area, and position. The total number of winding turns is 13; for the 40 g rope, there are three configurations: 13 turns in a single region, 3 turns (near the tip) plus 10 turns (near the hub), and 6 turns (near the tip) plus 7 turns (near the hub). The detailed attachment degrees are shown in Table 1. The data collection experiment is carried out in two working environments, clean and turbid water, and images are collected at different shooting angles to simulate real underwater monitoring. The collected data are divided into 11 categories according to the attachment degree. The training and test sets contain 100 and 50 images per category, respectively, totaling 1100 and 550 images.

4.1.2. Network Implementation Details

The method implementation and network have been described in Section 3. All algorithms are implemented in Python 3.9. The classification network is built with the open-source deep learning framework TensorFlow [45]. The network parameters are listed in Table 2.

4.1.3. Experiment Results

We apply the proposed MobileNet-IRNLGD rotor attachment detection network to the TST dataset to validate its effectiveness. First, we evaluate the performance of MobileNet in first-level detection. Table 3 shows the average results of the classification networks over ten repeated experiments. The dataset comes from the data collection experiment described above. MobileNet achieves the best performance while requiring the fewest trainable parameters: with only 13.68% of the parameters of ResNet50, its accuracy is 2.21% higher. The two-layer CNN and VGG-16 fail to extract attachment features well due to insufficient network depth, so their recognition accuracy is unsatisfactory. Moreover, because of standard convolution, their parameter counts are dozens or even hundreds of times that of MobileNet. The results of the two ResNets show that accuracy does not increase significantly with network depth. This can be explained as follows: the TST images are captured underwater, and their contrast is lower than that of onshore images due to the absorption and attenuation of light propagating through the medium. As the image intensity decreases, features are lost in deeper networks, so accuracy does not improve much as depth increases.
Next, we assess the performance of IRNLGD. To verify the effectiveness of the proposed algorithm in a complex underwater environment, the operating environment of the TST is divided into clean/turbid water. Each environment is further characterized by strong/weak illumination. The data under strong illumination come from Ref. [17], and the data under weak illumination come from the data collection experiment mentioned above. Four representative edge detection algorithms are selected, including Canny, SE, HED, and PiDiNet. The mean squared error (MSE) and average gradient (AG) are employed to evaluate the performance of each method. MSE is used to evaluate the similarity of the edge detection results to the original images, while AG is used to measure the rate of gray change in the results, which reflects the clarity and the expression of detail. The quantitative evaluation results are presented in Table 4. IRNLGD achieves the best results in both MSE and AG, indicating that the edges detected by the proposed method are closest to the original image and possess the richest details. Canny and PiDiNet are ranked second and third, respectively. The outcome is consistent with the characteristics observed in qualitative comparisons. Figure 3 shows some results on the TST dataset, with two samples for each environment. The experimental results indicate that SE hardly detects meaningful edges. The results of HED and parts of Canny are discontinuous, failing to detect closed edges, as shown in the blue boxes in Figure 3. While PiDiNet achieves the most outstanding performance among the comparative methods, its output cannot distinguish between attachments and blades. This leads to a crucial problem: when the blades are completely covered by attachments, the results from PiDiNet fail to provide meaningful information, as indicated in the green boxes in Figure 3. 
Only the proposed IRNLGD can detect complete blade contours and the texture of attachments, which enables attachment localization (red boxes in Figure 3). Moreover, a detailed comparison is conducted to demonstrate the advantages of the proposed method in TST attachment fault detection. The performance evaluation criteria are: (1) the ability to separate foreground from background; (2) the ability to detect the edge of the TST rotor; and (3) the ability to distinguish between the blades and the attachment.
Figure 4 shows the edge detection results of the TST image under clean water and strong illumination. Canny captures the complete edge of the TST rotor and can partially differentiate the attachment from the blades, although the distinction is not very clear. SE and HED fail to obtain the complete edge of the rotor. PiDiNet captures complete edges but fails to differentiate between the blades and the attachment, which may hinder the subsequent maintenance process. The proposed algorithm not only captures clear and complete edges of the TST rotor but also highlights the distinction between healthy blades and the attachment.
Figure 5 shows the edge detection results of the TST image under turbid water and strong illumination. As in Figure 4, Canny can extract rotor edges but cannot indicate the location of the attachment. SE is almost completely ineffective, and HED detects incomplete edges. PiDiNet detects complete edges, but they tend to be thick and insufficiently refined, and it also fails to localize the attachment. IRNLGD is capable of both extracting complete edges and indicating attachment areas.
Figure 6 shows the edge detection results of the TST image under clean water and weak illumination; boxes highlight the details. Except for SE, most of the algorithms detect clear edges, but differences exist. Canny and IRNLGD extract fine edges and differentiate between clean blades and biofouling regions, while the other two only identify the outline. HED detects only partial edges and tends to lose edges in detailed regions, such as the attachments at the blade tip (left red box in Figure 6d); its detected edges are incomplete, as shown on the dark side of the blade (right green box in Figure 6d). PiDiNet extracts precise rotor edges but fails to distinguish between the blades and the attachment; moreover, it also renders some background edges, which are useless for fault diagnosis. Comparing Canny and IRNLGD, the attachment area detected by Canny shows partial loss (left red box in Figure 6b), whereas IRNLGD fully displays the texture of the attachment (left red box in Figure 6f). Both achieve better edge refinement than the other methods; Canny performs well in refinement owing to non-maximum suppression. However, Canny's high performance relies on manually selected thresholds, while the proposed IRNLGD is adaptive.
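Canny's threshold dependence stems from its double-threshold hysteresis stage, which can be sketched in a few lines of NumPy. This is an illustrative simplification only: np.roll wraps at the image borders, which a production implementation would avoid, and real Canny applies this after non-maximum suppression:

```python
import numpy as np

def hysteresis(grad_mag, low, high):
    """Double-threshold hysteresis as in Canny: pixels above `high` seed
    edges; pixels above `low` survive only if connected (8-neighbourhood)
    to a strong pixel. Note: np.roll wraps at borders (illustrative only)."""
    strong = grad_mag >= high
    weak = grad_mag >= low
    edges = strong.copy()
    changed = True
    while changed:
        # grow current edges into their 8-neighbourhoods
        grown = np.zeros_like(edges)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
        new_edges = weak & grown
        changed = bool((new_edges & ~edges).any())
        edges |= new_edges
    return edges
```

Both `low` and `high` must be chosen by hand; in degraded underwater images the usable gradient range shrinks, so fixed thresholds easily drop true edges, which motivates IRNLGD's adaptive design.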
Figure 7 shows the edge detection results of the TST image under turbid water and weak illumination. SE fails to detect a meaningful edge. HED produces incomplete contours (right green box in Figure 7d) and fails to detect the boundary between the attachment and blade near the tip (left red box in Figure 7d). PiDiNet offers advantages in edge extraction, but it cannot indicate the attachment areas. Canny and IRNLGD perform well in distinguishing between the blade and the attachment, but Canny tends to lose details in densely textured areas (right green box in Figure 7b). In contrast, IRNLGD not only detects complete edges but also clearly distinguishes the attachment area from the blades.

4.2. Experiment on BSDS500 Dataset

Due to the lack of a publicly available edge detection dataset designed for underwater images, the BSDS500 dataset is used to evaluate the quality of the proposed edge detection algorithm. The dataset is universally adopted for natural edge detection evaluation and consists of 500 natural images with real human annotations. Each image is independently segmented by five different experts, and the five segmentation results are combined with equal weights to form an objective ground truth. Color channel attenuation and blurring are applied to the dataset to simulate the characteristics of an underwater environment.
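The degradation applied to BSDS500 can be sketched as follows. The attenuation coefficients and the box-blur radius below are illustrative assumptions, not the exact values used in our experiments:

```python
import numpy as np

def simulate_underwater(rgb, attenuation=(0.6, 0.85, 0.95), blur_radius=1):
    """Crude underwater degradation: water absorbs red light fastest, so the
    R channel is attenuated most (coefficients are illustrative), and a small
    box blur stands in for scattering-induced blurring."""
    img = rgb.astype(np.float64) * np.array(attenuation)
    k = 2 * blur_radius + 1
    pad = np.pad(img, ((blur_radius,) * 2, (blur_radius,) * 2, (0, 0)),
                 mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):                      # accumulate the k x k box window
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.rint(out / (k * k)).clip(0, 255).astype(np.uint8)
```

Attenuating R more than G and B mimics the wavelength-dependent absorption described in Section 4.4, and the blur reduces the gradient magnitudes on which most edge detectors rely.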
We employ three quantitative metrics commonly used in edge detection evaluation: the optimal dataset scale (ODS), optimal image scale (OIS), and average precision (AP). Table 5 compares the edge detection algorithms on BSDS500. The proposed algorithm does not obtain satisfactory quantitative results, primarily because IRNLGD detects fine edges, while the human ground truth focuses on contours and ignores details; the detailed textures it detects are therefore penalized in the evaluation. This distinction is evident in visual comparisons. Figure 8 compares the results of IRNLGD with those of other methods. Because of the simulated underwater degradation, feature extraction is compromised, leading to various degrees of failure: SE fails to detect meaningful edges, and HED produces a few non-closed contours, losing most of the meaningful information, as shown in Figure 8d,e. Canny and PiDiNet achieve better performance among the compared methods but still exhibit incomplete edges, with discontinuities in low-contrast regions, as is evident in the results of Canny (Figure 8c). The dependence of deep learning on data can degrade performance on new types of samples; for example, in Figure 8f, PiDiNet fails to detect edges or produces weak responses in some dim areas. In contrast, IRNLGD detects complete edges and captures details not marked in the human ground truth, such as the bird's wing in sample III in Figure 8g. The proposed method may have weaker noise suppression capabilities but successfully detects complete edges and fine textures.
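The difference between ODS and OIS can be sketched as follows. This simplified version scores precomputed precision/recall pairs and omits the boundary-matching tolerance used in the standard BSDS benchmark:

```python
import numpy as np

def f1(p, r):
    """F-measure from precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def ods_ois(per_image_pr):
    """per_image_pr[i][t] = (precision, recall) of image i at threshold t.
    ODS fixes one threshold for the whole dataset; OIS picks the best
    threshold per image. (Simplified: no boundary-matching tolerance.)"""
    n_img = len(per_image_pr)
    n_thr = len(per_image_pr[0])
    # ODS: aggregate precision/recall at each threshold, take the best F
    ods = max(
        f1(np.mean([per_image_pr[i][t][0] for i in range(n_img)]),
           np.mean([per_image_pr[i][t][1] for i in range(n_img)]))
        for t in range(n_thr))
    # OIS: best F per image, averaged over images
    ois = np.mean([max(f1(p, r) for p, r in per_image_pr[i])
                   for i in range(n_img)])
    return ods, ois
```

Because both metrics match detections against the human boundary annotations, fine textures absent from those annotations count as false positives, which is the penalization discussed above.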

4.3. Experiment on UIEB Dataset

Experiments were conducted on the UIEB dataset [44] to demonstrate the proposed method's effectiveness on underwater images. The dataset contains 950 original underwater images captured under various illuminations: natural light, artificial light, or a combination of the two. Because no objective, publicly available human ground truth exists, only a qualitative assessment is performed here.
Figure 9 shows some detection results. Owing to the degradation of underwater images, Canny and SE fail to detect meaningful edges and prove effective only in some high-contrast regions. HED produces a clear contour but detects only partial edges in high-turbidity environments. PiDiNet incorporates the idea of pixel differences and is sensitive to gradients, achieving more complete edge detection than HED; however, it also misses some edges, especially in low-contrast regions, as shown in the red boxes in Figure 9e. IRNLGD not only achieves complete edges but also detects fine details, even in dim regions (green boxes in Figure 9f). Moreover, its performance does not rely on an extensive amount of training data.

4.4. Analysis

Based on the above experimental results, it is evident that the proposed method outperforms the other techniques in detecting fine edges in underwater images, obtaining better results on the TST and UIEB datasets. The failure of the compared methods stems from scattering during the underwater propagation of light, which degrades the images. Forward scattering weakens the light's energy, blurring the image and defocusing contour lines; backward scattering allows light reflected by suspended particles to enter the camera, producing a noisy image. These factors give underwater images low contrast and attenuated intensity, causing edge detection algorithms that perform well on in-air images to fail. Because pixel values in underwater images vary little, some gradient-based operators cannot obtain the desired results; supervised algorithms, whose accuracy relies on rich training data, fail on low-contrast underwater images for lack of labeled samples. In contrast, our method considers gradients in multiple directions, capturing richer information. Additionally, IRNLGD creatively introduces the concept of "gradient direction calibration", enabling gradient directions to participate directly in edge determination; this helps IRNLGD capture changes in pixel intensity from limited information, facilitating edge detection in degraded images such as those with low light or turbidity.
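To illustrate what considering gradients in multiple directions contributes, the following generic eight-neighbour probe gathers both the strongest intensity difference around each pixel and the direction that produced it. This is only an illustration of the general idea, not the indexed resemble-normal-line computation defined earlier in the paper:

```python
import numpy as np

# Offsets to the eight neighbours, ordered clockwise from the top-left.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def directional_responses(img):
    """Absolute intensity difference of every pixel against each of its eight
    neighbours; returns the strongest response and the index of the direction
    that produced it. (np.roll wraps at the borders; illustrative only.)"""
    f = img.astype(np.float64)
    stack = []
    for dy, dx in OFFSETS:
        shifted = np.roll(np.roll(f, dy, axis=0), dx, axis=1)
        stack.append(np.abs(shifted - f))
    stack = np.stack(stack)                          # shape (8, H, W)
    return stack.max(axis=0), stack.argmax(axis=0)   # magnitude, direction
```

A detector restricted to horizontal and vertical differences can miss a weak diagonal transition; scanning all eight directions and keeping the direction index alongside the magnitude preserves exactly the kind of information that low-contrast underwater images leave scarce.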
In the quantitative assessment, the results of IRNLGD are unsatisfactory. The reason is that BSDS500 is annotated for image segmentation and contour detection, so some detailed edges detected by the proposed method are penalized; however, these additional edges and textures can still provide valuable information in some cases. The quantitative results of the supervised algorithms are generally better thanks to the strong feature extraction of deep learning-based methods, which learn deep feature representations from large numbers of training images; no comparable feature extraction process exists in unsupervised methods. Nevertheless, our method still obtains satisfactory results and requires no paired training data. Therefore, the proposed IRNLGD is competent for edge detection tasks.
While the proposed method has some limitations compared with other methods, these can become advantages in certain situations. It is undeniable that, in terms of semantic understanding, IRNLGD falls short of deep learning-based approaches. However, the performance of deep learning relies on large amounts of labeled data, which conflicts with our research purposes; the advantage of IRNLGD lies precisely in its ability to perform comprehensive and detailed edge detection without the need for training. Additionally, the proposed IRNLGD exhibits some noise in certain samples, which is acceptable: IRNLGD is designed for fault diagnosis in underwater equipment such as a TST, and this characteristic helps distinguish attachments from equipment bodies in the output, since submerged devices often use smooth materials and anti-fouling coatings to delay biofouling, while biofouling has a rough surface [46,47]. Experimental results on the TST dataset demonstrate that IRNLGD can fully display the texture of attachments, thereby indicating attachment areas, something the other methods cannot achieve. Therefore, a small amount of noise does not diminish the contribution of the proposed method to the research purposes.
In summary, the proposed method can detect fine edges better, without training or manually selected thresholds, and is more robust than other methods.

5. Conclusions

In this paper, a novel edge detection method, IRNLGD, based on multi-directional gradients is proposed. IRNLGD enhances the edge determination process by calibrating gradient directions, allowing a more comprehensive inclusion of gradient information from multiple directions. Experiments on a TST dataset, BSDS500, and UIEB demonstrate the following: (1) IRNLGD can effectively detect edges in low-quality images; and (2) IRNLGD can detect a greater amount of fine texture information. In addition, we explore a two-level TST rotor attachment detection method, MobileNet-IRNLGD, in which IRNLGD is added as a branch to the classification network MobileNet: MobileNet assesses the attachment degree, while IRNLGD provides the fault location. The advantages of this method are: (1) it requires fewer computational resources, reducing hardware demands; and (2) it combines a supervised algorithm with a non-data-driven method to achieve complementarity, reducing the need for extensive training data. The proposed MobileNet-IRNLGD provides preliminary fault diagnosis in the first stage, enabling technical staff to take appropriate measures based on fault severity; during underwater cleaning operations, the precise fault localization of the second stage provides necessary reference and guidance for operators. Since the proposed edge detection method, IRNLGD, is non-data-driven, it can meet the monitoring needs of various types of submerged turbines and other underwater devices, as can be extrapolated from the experiments on the UIEB dataset, where our method also achieves detailed edge and texture detection in degraded underwater images. Overall, the proposed IRNLGD has demonstrated its effectiveness in edge detection, and MobileNet-IRNLGD has shown promising results and potential.
IRNLGD can effectively extract edge information from TST images without requiring paired data for training. However, it struggles with images affected by motion blur; future research will focus on solutions for motion-blurred images.

Author Contributions

Conceptualization, D.S., R.L. and T.W.; methodology, D.S. and R.L.; software, D.S. and R.L.; validation, D.S., R.L. and D.Y.; formal analysis, D.S.; investigation, D.S. and R.L.; resources, Z.Z. and T.W.; data curation, D.S., R.L., Z.Z., D.Y. and T.W.; writing—original draft preparation, D.S. and R.L.; writing—review and editing, D.S., D.Y., Z.Z. and T.W.; visualization, D.S.; supervision, Z.Z. and T.W.; project administration, Z.Z. and T.W.; funding acquisition, T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restriction.

Conflicts of Interest

Author Zhiwei Zhang was employed by the company Shanghai Power Industrial & Commerical Co., Ltd., State Grid Shanghai Municipal Electric Power Company. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Yang, X.; Liu, N.; Zhang, P.; Guo, Z.; Ma, C.; Hu, P.; Zhang, X. The current state of marine renewable energy policy in China. Mar. Policy 2019, 100, 334–341. [Google Scholar] [CrossRef]
  2. Elghali, S.B.; Benbouzid, M.; Charpentier, J.F. Marine tidal current electric power generation technology: State of the art and current status. In Proceedings of the 2007 IEEE International Electric Machines & Drives Conference, Antalya, Turkey, 3–5 May 2007; IEEE: Piscataway, NJ, USA, 2007; Volume 2, pp. 1407–1412. [Google Scholar]
  3. Lust, E.E.; Luznik, L.; Flack, K.A.; Walker, J.M.; Van Benthem, M.C. The influence of surface gravity waves on marine current turbine performance. Int. J. Mar. Energy 2013, 3, 27–40. [Google Scholar] [CrossRef]
  4. Goundar, J.N.; Ahmed, M.R. Marine current energy resource assessment and design of a marine current turbine for Fiji. Renew. Energy 2014, 65, 14–22. [Google Scholar] [CrossRef]
  5. Langhamer, O. Effects of wave energy converters on the surrounding soft-bottom macrofauna (west coast of Sweden). Mar. Environ. Res. 2010, 69, 374–381. [Google Scholar] [CrossRef] [PubMed]
  6. Chen, H.; Ait-Ahmed, N.; Zaim, E.; Machmoum, M. Marine tidal current systems: State of the art. In Proceedings of the 2012 IEEE International Symposium on Industrial Electronics, Hangzhou, China, 28–31 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1431–1437. [Google Scholar]
  7. Marine Current Turbines. SeaGen Environmental Monitoring Programme; Final Report; Haskoning UK Ltd.: Edinburgh, UK, 2011. [Google Scholar]
  8. Titah-Benbouzid, H.; Benbouzid, M. Biofouling issue on marine renewable energy converters: A state of the art review on impacts and prevention. Int. J. Energy Convers. 2017, 5, 67. [Google Scholar] [CrossRef]
  9. Frost, C.; Morris, C.E.; Mason-Jones, A.; O’Doherty, D.M.; O’Doherty, T. The effect of tidal flow directionality on tidal turbine performance characteristics. Renew. Energy 2015, 78, 609–620. [Google Scholar] [CrossRef]
  10. Kearney, J. Grid Voltage Unbalance and the Integration of DFIG’s. Ph.D. Thesis, Technological University Dublin, Dublin, Ireland, 2013. [Google Scholar]
  11. Nall, C.R.; Schläppy, M.L.; Guerin, A.J. Characterisation of the biofouling community on a floating wave energy device. Biofouling 2017, 33, 379–396. [Google Scholar] [CrossRef] [PubMed]
  12. Loxton, J.; Macleod, A.; Nall, C.R.; McCollin, T.; Machado, I.; Simas, T.; Vance, T.; Kenny, C.; Want, A.; Miller, R. Setting an agenda for biofouling research for the marine renewable energy industry. Int. J. Mar. Energy 2017, 19, 292–303. [Google Scholar] [CrossRef]
  13. Freeman, B.; Tang, Y.; VanZwieten, J. Marine Hydrokinetic Turbine Blade Fault Signature Analysis using Continuous Wavelet Transform. In Proceedings of the 2019 IEEE Power & Energy Society General Meeting (PESGM), Atlanta, GA, USA, 4–8 August 2019; pp. 1–5. [Google Scholar] [CrossRef]
  14. Saidi, L.; Benbouzid, M.; Diallo, D.; Amirat, Y.; Elbouchikhi, E.; Wang, T. Higher-Order Spectra Analysis-Based Diagnosis Method of Blades Biofouling in a PMSG Driven Tidal Stream Turbine. Energies 2020, 13, 2888. [Google Scholar] [CrossRef]
  15. Freeman, B.; Tang, Y.; Huang, Y.; VanZwieten, J. Rotor blade imbalance fault detection for variable-speed marine current turbines via generator power signal analysis. Ocean Eng. 2021, 223, 108666. [Google Scholar] [CrossRef]
  16. Xie, T.; Li, Z.; Wang, T.; Shi, M.; Wang, Y. An integration fault detection method using stator voltage for marine current turbines. Ocean Eng. 2021, 226, 108808. [Google Scholar] [CrossRef]
  17. Zheng, Y.; Wang, T.; Xin, B.; Xie, T.; Wang, Y. A sparse autoencoder and softmax regression based diagnosis method for the attachment on the blades of marine current turbine. Sensors 2019, 19, 826. [Google Scholar] [CrossRef]
  18. Xin, B.; Zheng, Y.; Wang, T.; Chen, L.; Wang, Y. A diagnosis method based on depthwise separable convolutional neural network for the attachment on the blade of marine current turbine. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2021, 235, 1916–1926. [Google Scholar] [CrossRef]
  19. Peng, H.; Yang, D.; Wang, T.; Pandey, S.; Chen, L.; Shi, M.; Diallo, D. An adaptive coarse-fine semantic segmentation method for the attachment recognition on marine current turbines. Comput. Electr. Eng. 2021, 93, 107182. [Google Scholar] [CrossRef]
  20. Peng, H.; Wang, T.; Pandey, S.; Chen, L.; Zhou, F. An Attachment Recognition Method Based on Image Generation and Semantic Segmentation for Marine Current Turbines. In Proceedings of the IECON 2020—The 46th Annual Conference of the IEEE Industrial Electronics Society, Singapore, 18–21 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 2819–2824. [Google Scholar]
  21. Qi, F.; Wang, T.; Wang, X.; Chen, L. LAW-IFF Net: A semantic segmentation method for recognition of marine current turbine blade attachments under blurry edges. Proc. Inst. Mech. Eng. Part M J. Eng. Marit. Environ. 2023. [Google Scholar] [CrossRef]
  22. McGlamery, B. A computer model for underwater camera systems. In Proceedings of the Ocean Optics VI, Monterey, CA, USA, 23–25 October 1979; SPIE: Bellingham, WA, USA, 1980; Volume 208, pp. 221–231. [Google Scholar]
  23. Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  24. Jing, J.; Liu, S.; Wang, G.; Zhang, W.; Sun, C. Recent advances on image edge detection: A comprehensive review. Neurocomputing 2022, 503, 259–271. [Google Scholar] [CrossRef]
  25. Kittler, J. On the accuracy of the Sobel edge detector. Image Vis. Comput. 1983, 1, 37–42. [Google Scholar] [CrossRef]
  26. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  27. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  28. Gonzalez, C.I.; Melin, P.; Castillo, O. Edge Detection Method Based on General Type-2 Fuzzy Logic Applied to Color Images. Information 2017, 8, 104. [Google Scholar] [CrossRef]
  29. Ranjan, R.; Avasthi, V. Edge Detection Using Guided Sobel Image Filtering. Wirel. Pers. Commun. 2023, 132, 651–677. [Google Scholar] [CrossRef]
  30. Ma, J.; Wang, T.; Li, G.; Zhan, Q.; Wu, D.; Chang, Y.; Xue, Y.; Zhang, Y.; Zuo, J. Concrete surface roughness measurement method based on edge detection. Vis. Comput. 2023, 40, 1553–1564. [Google Scholar] [CrossRef]
  31. Muntarina, K.; Shorif, S.B.; Uddin, M.S. Notes on edge detection approaches. Evol. Syst. 2022, 13, 169–182. [Google Scholar] [CrossRef]
  32. Yang, D.; Peng, B.; Al-Huda, Z.; Malik, A.; Zhai, D. An overview of edge and object contour detection. Neurocomputing 2022, 488, 470–493. [Google Scholar] [CrossRef]
  33. Dollár, P.; Zitnick, C.L. Fast edge detection using structured forests. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1558–1570. [Google Scholar] [CrossRef] [PubMed]
  34. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1395–1403. [Google Scholar]
  35. Liu, Y.; Lew, M.S. Learning Relaxed Deep Supervision for Better Edge Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 231–240. [Google Scholar] [CrossRef]
  36. Yang, J.; Price, B.; Cohen, S.; Lee, H.; Yang, M.H. Object Contour Detection with a Fully Convolutional Encoder-Decoder Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  37. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  38. Zou, Q.; Zhang, Z.; Li, Q.; Qi, X.; Wang, Q.; Wang, S. DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection. IEEE Trans. Image Process. 2019, 28, 1498–1512. [Google Scholar] [CrossRef] [PubMed]
  39. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  40. Su, Z.; Liu, W.; Yu, Z.; Hu, D.; Liao, Q.; Tian, Q.; Pietikäinen, M.; Liu, L. Pixel difference networks for efficient edge detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 5117–5127. [Google Scholar]
  41. Elder, J.H.; Goldberg, R.M. Ecological statistics of Gestalt laws for the perceptual organization of contours. J. Vis. 2002, 2, 5. [Google Scholar] [CrossRef]
  42. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  43. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916. [Google Scholar] [CrossRef]
  44. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
  45. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
  46. Gittens, J.E.; Smith, T.J.; Suleiman, R.; Akid, R. Current and emerging environmentally-friendly systems for fouling control in the marine environment. Biotechnol. Adv. 2013, 31, 1738–1753. [Google Scholar] [CrossRef] [PubMed]
  47. Want, A.; Crawford, R.; Kakkonen, J.; Kiddie, G.; Miller, S.; Harris, R.E.; Porter, J.S. Biodiversity characterisation and hydrodynamic consequences of marine fouling communities on marine renewable energy infrastructure in the Orkney Islands Archipelago, Scotland, UK. Biofouling 2017, 33, 567–579. [Google Scholar] [CrossRef] [PubMed]
Figure 2. Examples of TST rotor attachments: (a) in air; (b) in water. The red boxes indicate the attachment area.
Figure 3. The comparison of five edge detection methods on TST dataset: (a) original image; (b) Canny; (c) SE; (d) HED [34]; (e) PiDiNet [40]; (f) IRNLGD.
Figure 4. The comparison of five edge detection methods for clean water and strong illumination: (a) original image; (b) Canny; (c) SE; (d) HED [34]; (e) PiDiNet [40]; (f) IRNLGD. The red boxes indicate the attachment area.
Figure 5. The comparison of five edge detection methods for turbid water and strong illumination: (a) original image; (b) Canny; (c) SE; (d) HED [34]; (e) PiDiNet [40]; (f) IRNLGD. The red boxes indicate the attachment area.
Figure 6. The comparison of five edge detection methods for clean water and weak illumination: (a) original image; (b) Canny; (c) SE; (d) HED [34]; (e) PiDiNet [40]; (f) IRNLGD.
Figure 7. The comparison of five edge detection methods for turbid water and weak illumination: (a) original image; (b) Canny; (c) SE; (d) HED [34]; (e) PiDiNet [40]; (f) IRNLGD.
Figure 8. The comparison of five edge detection methods on the BSDS500 dataset: (a) original image; (b) ground truth; (c) Canny; (d) SE; (e) HED [34]; (f) PiDiNet [40]; (g) IRNLGD. I, II, and III refer to different samples. The red boxes indicate details.
Figure 9. The comparison of five edge detection methods on the UIEB dataset: (a) original image; (b) Canny; (c) SE; (d) HED [34]; (e) PiDiNet [40]; (f) IRNLGD.
Table 1. Details of attachment degrees of TST.
| Category | Classifier Labels | Attachment Details |
| --- | --- | --- |
| Attachment degree 0 | 0 | 0 g-0 g-0 g |
| Attachment degree 1 | 1 | 0 g-20 g-0 g |
| Attachment degree 2 | 2 | 0 g-20 g-40 g |
| Attachment degree 3 | 3 | 0 g-20 g-60 g |
| Attachment degree 4 | 4 | 20 g-40 g-60 g |
| Attachment degree 5 | 5 | 20 g-40 g (6-7)-60 g |
| Attachment degree 6 | 6 | 40 g (6-7)-40 g (6-7)-60 g |
| Attachment degree 7 | 7 | 40 g-40 g (6-7)-60 g |
| Attachment degree 8 | 8 | 40 g (3-10)-40 g (6-7)-60 g |
| Attachment degree 9 | 9 | 40 g (3-10)-40 g (3-10)-60 g |
| Attachment degree 10 | 10 | 40 g-40 g (3-10)-60 g |
Table 2. Parameter settings of the lightweight network.
| Parameter Name | Value |
| --- | --- |
| Learning rate | 1.0 × 10⁻³ |
| Training epochs | 1000 |
| Training batch size | 16 |
| Image size | 495 × 495 |
| Width coefficient α | 1.0 |
| Resolution coefficient ρ | 1.0 |
Table 3. Quantitative experimental results of the five networks.
| Name of Network | Trainable Parameters | Accuracy |
| --- | --- | --- |
| MobileNet | 3,218,251 | 96.69% |
| ResNet-101 | 42,522,699 | 94.48% |
| ResNet-50 | 23,530,571 | 94.46% |
| VGG-16 | 134,305,611 | 25.00% |
| Two-layer CNN | 63,046,091 | 12.50% |
Table 4. The quantitative comparison results of five methods on TST dataset.
| Method | MSE | AG |
| --- | --- | --- |
| Canny | 17,374.37 | 9.186 |
| SE | 17,573.62 | 0.647 |
| HED | 17,813.11 | 3.299 |
| PiDiNet | 17,471.07 | 3.953 |
| IRNLGD | 16,742.77 | 14.627 |
Table 5. Quantitative results on the BSDS500 dataset.
| Detectors | ODS | OIS | AP |
| --- | --- | --- | --- |
| Human | 0.80 | 0.80 | - |
| Sobel | 0.563 | 0.594 | 0.537 |
| Canny | 0.546 | 0.548 | 0.004 |
| Roberts | 0.553 | 0.581 | 0.524 |
| SE | 0.668 | 0.683 | 0.659 |
| HED [34] | 0.557 | 0.560 | 0.041 |
| PiDiNet [40] | 0.762 | 0.777 | 0.753 |
| IRNLGD | 0.487 | 0.489 | 0.009 |