Article

A Self-Organizing Fuzzy Logic Classifier for Benchmarking Robot-Aided Blasting of Ship Hulls

by M. A. Viraj J. Muthugala 1,*, Anh Vu Le 2, Eduardo Sanchez Cruz 1, Mohan Rajesh Elara 1, Prabakaran Veerajagadheswar 1 and Madhu Kumar 3

1 Engineering Product Development Pillar, Singapore University of Technology and Design, 8 Somapah Rd, Singapore 487372, Singapore
2 Optoelectronics Research Group, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
3 Brightsun Marine Pte Ltd, 9 Tuas Ave 8, Singapore 639224, Singapore
* Author to whom correspondence should be addressed.
Sensors 2020, 20(11), 3215; https://doi.org/10.3390/s20113215
Submission received: 4 May 2020 / Revised: 29 May 2020 / Accepted: 2 June 2020 / Published: 5 June 2020
(This article belongs to the Section Intelligent Sensors)

Abstract
Regular dry dock maintenance of ship hulls is essential for maintaining the efficiency and sustainability of the shipping industry. Hydro blasting is one of the major processes of dry dock maintenance, where human labor is used extensively. The conventional methods of maintenance work suffer from many shortcomings, and hence robotized solutions have been developed. This paper proposes a novel robotic system that can synthesize a benchmarking map for a previously blasted ship hull. A Self-Organizing Fuzzy logic (SOF) classifier has been developed to benchmark the blasting quality of a ship hull in a manner similar to the blasting quality categorization done by human experts. Hornbill, a multipurpose inspection and maintenance robot intended for hydro blasting, benchmarking, and painting, has been developed by integrating the proposed SOF classifier. Moreover, an integrated system solution has been developed to improve dry dock maintenance of ship hulls. The proposed SOF classifier achieves a mean accuracy of 0.9942 with an execution time of 8.42 µs. Real-time experiments with the proposed robotic system have been conducted on a ship hull. These experiments confirm the ability of the proposed robotic system to synthesize a benchmarking map that reveals the blasting quality of different areas of a previously blasted ship hull. Such a benchmarking map would be useful for ensuring blasting quality as well as for performing efficient spot-wise reblasting before painting. Therefore, the proposed robotic system could be utilized to improve the efficiency and quality of hydro blasting work in the ship hull maintenance industry.

1. Introduction

Routine dry dock maintenance of the outer hull of ships is essential for the efficient and sustainable operation of shipping [1,2]. Improper maintenance of ship hulls may lead to increased fuel consumption due to surface roughness [2,3]. In addition, safety issues can arise due to improper hull maintenance. Typically, maintenance work on ship hulls needs to be carried out every 4–5 years [4]. One of the major maintenance tasks on a ship hull is the blasting carried out to remove rust and adherent matter [2]. Hydro blasting or abrasive blasting is commonly used for this purpose. In most typical cases, this blasting work is carried out by human workers with the aid of semi-automatic devices [2]. Furthermore, inspections of hull surfaces are performed by humans to identify the areas that need to be blasted as well as to evaluate the quality of the blasting work [2]. Consequently, conventional ship hull maintenance work suffers from concerns regarding accuracy, efficiency, cost, and safety.
Many robotic solutions have emerged for resolving the issues associated with conventional maintenance work that requires extensive human labor in diverse domains [5,6,7,8]. Similarly, many robots and devices have been developed for the inspection and maintenance of ship hulls to resolve the shortcomings of the conventional methods discussed above [9]. In this regard, many ground-based robots and robotic systems have been introduced with the intention of automating dry dock maintenance work on ship hulls [10,11,12,13]. These robots require the support of external structures such as rail mechanisms, gantry cranes, and cherry pickers to facilitate their movement along a ship hull. A system that uses a set of stationary lidars and cameras for detecting corrosion spots on a ship hull has been proposed in [14]. However, these ground systems have limited access to ship hulls due to hindrances caused by geometric constraints and occlusions. Furthermore, either a set of systems must be deployed or the supporting structures must be moved around the ship to cover an entire hull. The possibility of using Unmanned Aerial Vehicles (UAVs) such as quadcopters and hexacopters for the inspection of ship hulls has also been investigated, since UAVs have proven their ability to inspect confined and limited-access areas in other application domains [9,15]. Nevertheless, the usability of UAVs for the inspection of ship hulls is limited by their inability to operate for long durations and distances as well as by their low positioning accuracy.
Robots that can climb on ship hulls are preferred for inspection and maintenance work due to their performance benefits [16,17]. A climbing robot based on magnetic adhesion has been developed for inspecting long welding lines on ship hulls [18]. MIRA [19] is another climbing robot developed for the inspection of ship hulls. The wheels of this robot are integrated with permanent magnets fixed on a flexible strip to provide the climbing ability. A mobile robot with a novel design of magnetic wheels based on a Halbach array has been introduced in [20] for dry dock inspections. A tracked mobile robot capable of adapting to uneven surfaces has been developed for ship hull inspection [21]; its self-adaptability allows it to move over uneven surfaces. Since magnetic wheels are often used for inspection robots intended for ship hulls, an in-depth analysis of the magnetic flux distribution of wheels for climbing robots has also been conducted [22]. Furthermore, the optimization of a magnetic wheel for a grit blasting robot has been investigated [23]. Apart from climbing robots with magnetic adhesion, robots based on negative pressure adhesion have also been developed [17,24]. The work in [24] analyzed the variation of the required negative pressure under different conditions, such as wet conditions. Nevertheless, the scope of the work cited above is limited to the design and development of climbing robots or their adhesion mechanisms for the inspection of ship hulls; methods for automatic inspection and maintenance, such as vision-based corrosion detection, are not discussed within the scope of that work.
A magnetic crawler robot with a monocular-camera-based vision processing method for inspecting ship hulls has been developed [25]. The vision processing method is capable of constructing a mosaic image, which can produce a metric representation of the areas inspected by the robot, from multiple images captured from different positions of the robot. Many computer-vision-based methods for detecting corrosion in metal structures, including ship hulls and steel bridges, have been developed [26]. According to the outcomes of the survey [26], color is the principal representative feature for identifying corroded areas. Furthermore, the cited survey concludes that learning-based methods can perform better than non-learning methods in detecting corrosion through vision. A method for surface corrosion grade classification of metals has been proposed using a deep convolutional neural network [27]. However, the method was developed to detect the corrosion grade based on microscopic images of the surface. A microscopic image can cover only a minimal area of the environment to be inspected due to the constraints of magnification. If microscopic images were taken to cover a ship hull, a considerably large number of images would be needed to cover the whole surface, and the vision system would have to be directed toward very close proximity areas many times. Therefore, the inspection process would be prolonged, hindering the feasibility of adopting the method proposed in [27] for a robot intended for inspecting the corrosion grade of ship hulls. A vision-based defect detection method based on statistical approaches built upon circular histogram entropy analysis has been introduced to identify the rusted regions of ship hulls [28]. Nevertheless, the proposed method has been verified merely using offline images and has not been validated in a real-world application context.
The ability to detect corroded areas of ship hulls from a set of stationary cameras using histogram-based background detection and adaptive thresholding methods has been investigated in [14]. According to the outcomes, these methods are effective only when the inspected zones have a majority of non-corroded areas. The work in [29] proposed an aerial robot equipped with a feedforward neural network trained for corrosion detection on ship hulls. The proposed neural network is capable of detecting areas of corrosion/coating breakdown on ship hulls. An automated visual inspection method that can detect defective areas for automated spot blasting has been proposed in [30]. The cited work proposes a wavelet transformation combined with an entropy-based method for detecting corroded spots. The proposed method has been implemented on a crane-based ground robotic system, and its abilities have been verified.
Nevertheless, all the vision-based methods discussed above are intended for detecting corroded areas on a surface. For benchmarking hydro blasting work, the blasting quality needs to be classified into a few categories. Furthermore, the surface appearance becomes different after a blasting process. Hence, the methods discussed above cannot be adopted for benchmarking the performance of a hydro blasting process. In addition, much of the work is limited to the development of vision-based detection mechanisms for robots, and ways of utilizing the detection outcomes in a fully integrated system for synthesizing a benchmarking map of an already blasted ship hull are not considered within their scopes. Therefore, this paper proposes a novel vision-based benchmarking method for hydro blasting. The proposed benchmarking method can classify the hydro blasting quality into three categories: good, medium, and bad. A Self-Organizing Fuzzy logic (SOF) classifier is proposed to realize the classification into the benchmarking categories. In addition, a hydro blasting robotic system consisting of a hydro blasting robot and a benchmarking robot has been designed and developed. An overview of the proposed robotic system is given in Section 2. Section 3 presents the theoretical background of the proposed SOF classifier. Particulars of the experimental validation are discussed in Section 4. Concluding remarks are given in Section 5.

2. System Overview

2.1. Context of Application

The application context of the proposed robotic system is explained with the aid of Figure 1. The application context is the removal of the rust layer on a ship hull through hydro blasting before applying a new coat of paint. Initially, a robot equipped with hydro blasting capability is sent over the ship hull along a zig-zag path to cover the entire area to be blasted (as shown in Figure 1a). While navigating the given path, the robot is expected to continuously and uniformly hydro blast the area. Typically, a hydro blasting robot is capable of removing the rust layer to a great extent. Nevertheless, the rust removal is not always uniform throughout a given area, and there can be partially removed or completely unremoved areas. Thus, the blasting quality is usually inspected by human experts after a blasting cycle, with the ISO 8501-1 standard used as a guideline. A blasted surface of either SA 2.5 or SA 3 standard quality (according to ISO 8501-1) is considered "good" in the work presented in this paper, meaning the blasting quality is adequate for proceeding with painting. The typical appearance of a ship after good blasting is shown in Figure 2a. According to the standard, a blasted surface free from visible oil, grease, and dirt and from mill scale, rust, paint coatings, and foreign matter, when inspected without magnification, can be considered to be of at least SA 2.5 quality. Slight traces of contamination in the form of spots or stripes are allowed for SA 2.5, while the surface should have a uniform metallic color without any traces to be considered of SA 3 quality. Areas of standard blasting quality SA 1 and SA 2, where poorly adhering rust, paint coatings, and foreign matter can be observed without magnification, are considered of medium quality in the work presented in this paper. The appearance of areas with medium blasting quality is given in Figure 2b. The medium quality areas are expected to receive a light reblasting. If the blasting quality of a surface is below the quality definition of SA 2 (e.g., an area with a completely unremoved coat), it is considered bad quality blasting. The areas identified as bad quality blasting are expected to receive a full reblasting. Examples of areas categorized as bad quality blasting are given in Figure 2c. If new paint were applied on top of a ship hull with remaining rust particles, it would not last long. Thereby, it is necessary to ensure that a ship hull is completely blasted before applying the paint.
To ensure that a ship hull is uniformly blasted to a good condition, a second robot is sent over the ship hull along a zig-zag path, as shown in Figure 1b, to benchmark the work done by the blasting robot. After completion of the inspection, a benchmarking map of the hull area is developed, as shown in Figure 1c, which indicates the quality of blasting in different segments of the area. The areas shown in green represent areas with good quality blasting (areas whose appearance is similar to Figure 2a). The areas given in yellow represent areas where the quality of blasting is medium (appearance similar to Figure 2b). The areas where the blasting quality is bad (appearance similar to Figure 2c) are represented in red. After generating the benchmarking map of the hull area, the blasting robot is resent to perform selective blasting on the medium and bad quality areas. The benchmarking map is useful for improving efficiency by planning an efficient navigation path for the blasting robot and by performing selective blasting with different blasting parameters (e.g., lower pressure for medium areas and high pressure for bad areas). Therefore, the robotic system proposed in this paper would be highly beneficial in improving automated ship hull blasting and inspection work.

2.2. Functional Overview

The functional overview of the proposed robotic system is depicted in Figure 3. An operator can control the navigation path of the benchmarking robot through the user interface. The navigation controller of the benchmarking robot is responsible for the low-level functionalities related to navigating a given trajectory, such as localization and locomotion motor control. For localization, the robot uses an off-the-shelf beacon-based localization system. The vision feedback from the camera attached to the robot is processed by the vision processing module to extract the features.
The main steps of the vision processing module are explained in Figure 4. The vision processing module captures image frames from the incoming vision feed at a resolution of 600 × 600 pixels. The image information of each frame is then separated into its R, G, and B components, and a histogram with 10 bins is generated for each color component. Hence, there are 10 parameters for each color component, yielding 30 parameters in total as the output of the vision processing module. This feature set is denoted as x, and it is a row vector with 30 elements. The output of the vision processing module, x, is fed to the benchmarking classifier for each captured frame.
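A minimal Python sketch of this feature extraction is given below. Normalizing the histogram counts and using the full 0–255 intensity range are assumptions made for illustration; the paper does not specify these details.

```python
import cv2
import numpy as np

def extract_features(frame_bgr: np.ndarray, bins: int = 10) -> np.ndarray:
    """Build the 30-element feature vector x for one frame.

    Minimal sketch of the histogram feature extraction described above;
    normalization and bin range are assumptions.
    """
    frame = cv2.resize(frame_bgr, (600, 600))          # capture size used by the module
    features = []
    for channel in cv2.split(frame):                   # OpenCV order: B, G, R
        hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
        features.append(hist / hist.sum())             # 10 values per color component
    return np.concatenate(features)                    # row vector with 3 * bins = 30 elements
```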
The benchmarking classifier labels each image frame as one of the considered benchmarking categories: good, medium, or bad. The benchmarking classifier has been developed using a Self-Organizing Fuzzy logic (SOF) classifier (a detailed description is given in Section 3). The classification results for the incoming image frames are then sent to the benchmarking map generator. The location corresponding to a captured image is retrieved from the navigation controller and tagged with the classification result to generate the benchmarking map. The created benchmarking map is accessible through the user interface. In addition, the generated benchmarking map can be transferred to the blasting robot through the user interface for reblasting the areas identified as medium and bad.
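The bookkeeping performed by the benchmarking map generator can be sketched as below; the entry fields (frame id, (x, y) position, label) are illustrative assumptions rather than the system's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BenchmarkingMap:
    """Sketch of the benchmarking map generator's bookkeeping (assumed layout)."""
    entries: List[dict] = field(default_factory=list)

    def tag(self, frame_id: int, position: Tuple[float, float], label: str) -> None:
        # Pair the classifier output with the robot pose reported by the
        # navigation controller when the frame was captured.
        self.entries.append({"frame": frame_id, "position": position, "label": label})

# usage: bench_map.tag(frame_id=1, position=(0.0, 0.4), label="bad")
```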

2.3. Robot Platform

Hornbill is a differential drive robot designed for ship hull maintenance. The robot uses magnets as its principal means of adhering to and navigating across the metallic surface of the vessel. In addition, it incorporates custom-designed wheels that displace water to maximize traction in the presence of water. The robot's architecture incorporates a multipurpose arm that can be used for hydro blasting, painting, and surface benchmarking. Figure 5 depicts the robot's design.
A complete part diagram of the robot is shown in Figure 6 for reference. The robot dimensions are 535 × 785 × 480 mm including the multipurpose arm. At the front axle, two DC motors, each coupled to a gear head, transmit 55 Nm of torque each to the surface through two rubber wheels of 200 mm diameter (FW). The robot's frame is made of aluminium 6061 alloy to minimize possible hazards arising from the presence of high magnetic forces. The frame also has a safety handle (SH) to ease its deployment on a ship's hull. The robot's design also includes a waterproof cover (C1, C2) to isolate the electrical components from liquids.
The robot is designed to carry 300 kg of payload including its own weight. For this purpose, the robot includes ten grade N50 neodymium square magnets of 200 × 25 × 25 mm; eight magnets are located at the front axle using four magnet holders (MHF) and two at the rear (MHR), right next to two castor wheels (CW). The magnets are placed 5 mm from the surface, and they provide a combined pull force of 300 kg.
Hornbill has been designed to be reconfigurable in terms of the task it needs to perform. Therefore, the robot is fitted with a multipurpose arm (MA) made of a hollow aluminum tube. One end is coupled to a DC motor of 25 Nm of torque that allows the arm to swing, and the other end has a bracket (MAB) that can be used to attach diverse tools depending on whether the robot is to perform hydro blasting, painting, or surface benchmarking. To perform surface benchmarking, a camera is attached to the arm bracket (MAB) oriented parallel to the hull surface. Situations where Hornbill is used as a benchmarking robot and as a hydro blasting robot are shown in Figure 7.
A set of beacons has to be placed along the section of the ship's hull where the robot operates. The communication between a beacon placed on the robot and the surface beacons is used to determine the robot's absolute position and orientation. In addition to the information from the beacons, the information from the wheel encoders and the inertial measurement unit is fused to improve the accuracy of the localization within the workspace. This localization facilitates the autonomous navigation of the robot along a given trajectory.

3. Self-Organizing Fuzzy Logic (SOF) Classifier

A Self-Organizing Fuzzy logic (SOF) classifier [31] is proposed for the benchmarking classifier. The architecture of the SOF classifier was originally proposed by Gu and Angelov [31] in 2018. The underlying theoretical rationale and mathematical proofs of the architecture, such as the analysis of convergence, are given in the cited work. Furthermore, the performance of the SOF classifier has been compared against other methods using well-known offline testing data, such as data sets for optical character recognition. Nevertheless, a SOF classifier has not previously been proposed for benchmarking the quality of hydro blasting. A SOF classifier was selected for this application for the following reasons.
  • The benchmarking statuses are defined based on human expert knowledge, and the benchmarking categorization is performed based on three fuzzy linguistic descriptors: good, medium, and bad. Moreover, the benchmarking classifier should emulate human expert knowledge in the classification process. Fuzzy logic has been proven to be well suited for replicating human expert knowledge that can be represented through linguistic expressions [32,33,34]. Furthermore, fuzzy logic has the ability to cope with imprecise sensor information [35,36,37]. Therefore, a method based on fuzzy concepts can be expected to perform well in this specific application.
  • A human-interpretable and explainable set of rules is generated after the training of a SOF classifier. Explainable intelligent techniques are preferred for ensuring transparency and trust in the safety of this sort of industrial application, where undesired control actions performed by a robot could be hazardous [38]. Furthermore, the set of rules can be tailored based on expert knowledge.
  • A SOF classifier is a highly efficient model with high classification accuracy [31,39]. Therefore, it requires lower computational power than other existing models. In addition, a SOF classifier does not require dedicated optimized hardware, such as GPU cores, for its computation. Moreover, a SOF model is comparatively lightweight.
  • Many existing classification models rely heavily on prior assumptions about data generation models and on user-defined trial-and-error parameters such as the learning rate and the size of the network. In many practical cases, the assumptions on data generation are too strong to hold, and the user-defined parameters are often troublesome to define due to insufficient prior knowledge of the problem. In contrast, a SOF classifier is nonparametric, and it does not require assumptions on data generation models or prior parameter knowledge about the problem of interest [31].
The architecture of the SOF classifier is depicted in Figure 8. It is designed to assign a class label to a given data sample $x$ based on trained knowledge. It should be noted that $x$ can be a row vector of any dimension. The trained knowledge is stored as a set of zeroth-order AnYa type fuzzy rules [40]. An AnYa type fuzzy rule has the form given in (1). The antecedent of an AnYa type fuzzy rule is in a nonparametric vector form, and it does not require membership functions. Here, the symbol $\sim$ denotes the similarity between a data sample and a data cloud. This similarity is analogous to the concept of degree of membership in a Mamdani type or Sugeno type fuzzy inference system. $\{p\}^c = \{p_1^c, p_2^c, \ldots, p_{N_c}^c\}$ is the set of prototypes belonging to the $c$th class, and $N_c$ is the number of prototypes in $\{p\}^c$. These prototypes are the centers of the data clouds. The data clouds and the corresponding fuzzy rules are formed during the training phase of the SOF classifier.
$$\text{IF } (x \sim p_1^c) \text{ OR } (x \sim p_2^c) \text{ OR } \cdots \text{ OR } (x \sim p_{N_c}^c) \text{ THEN } (\text{class } c) \quad (1)$$
To identify the class corresponding to an input data sample $x$, the firing strength of the $c$th fuzzy rule, $\lambda^c(x)$, is evaluated as in (2) by the local decision-maker for $c = 1, 2, \ldots, C$, where $d(x, p)$ denotes the distance/dissimilarity between two data points. Common distance/dissimilarity measures such as the Euclidean, Mahalanobis, and cosine distances can be used for this purpose. Then, the class label of the data sample is assigned by the overall decision-maker using the winner-takes-all method as in (3).
$$\lambda^c(x) = \max_{p \in \{p\}^c} \left( e^{-d^2(x, p)} \right) \quad (2)$$
$$\text{label} = \arg\max_{c = 1, 2, \ldots, C} \left( \lambda^c(x) \right) \quad (3)$$
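A minimal sketch of this inference stage is given below, assuming Euclidean distance and a dictionary that maps each class label to an array of its prototypes (an assumed data layout); it is illustrative rather than the authors' implementation.

```python
import numpy as np

def classify(x: np.ndarray, prototypes: dict) -> str:
    """Winner-takes-all inference over zeroth-order AnYa rules, Eqs. (1)-(3).

    A minimal sketch assuming Euclidean distance; `prototypes` maps each class
    label to an array of prototype vectors.
    """
    firing = {}
    for label, protos in prototypes.items():
        d2 = np.sum((protos - x) ** 2, axis=1)   # squared distance to every prototype of the class
        firing[label] = np.max(np.exp(-d2))      # Eq. (2): firing strength of the class rule
    return max(firing, key=firing.get)           # Eq. (3): winner-takes-all label

# usage with hypothetical prototype sets:
# rules = {"good": np.random.rand(5, 30), "medium": np.random.rand(4, 30), "bad": np.random.rand(6, 30)}
# print(classify(np.random.rand(30), rules))
```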
The classifier identifies the prototypes of each class and forms the data clouds. A zeroth-order AnYa type fuzzy rule for each class is then formulated. The training process is independent for different classes, and the training of one class has no influence on another class. Thereby, the training process can be explained for the $c$th class, where $c = 1, 2, \ldots, C$. Suppose the data sample set of the $c$th class is denoted by $\{x\}_{K_c}^c = \{x_1^c, x_2^c, \ldots, x_{K_c}^c\}$ such that $\{x\}_{K_c}^c \subseteq \{x\}_K$, where $K_c$ is the number of data samples belonging to the $c$th class and $K$ is the total number of data samples in the dataset. Not all data samples in a training set are necessarily unique; the same data sample may be repeated. The corresponding unique data sample set and the frequencies of appearance are denoted by $\{u\}_{U_{K_c}}^c = \{u_1^c, u_2^c, \ldots, u_{U_{K_c}}^c\}$ and $\{f\}_{U_{K_c}}^c = \{f_1^c, f_2^c, \ldots, f_{U_{K_c}}^c\}$, respectively, where $U_{K_c}$ is the number of unique data samples belonging to the $c$th class and $U_K$ is the total number of unique samples in the dataset. These definitions lead to $\sum_{c=1}^{C} K_c = K$ and $\sum_{c=1}^{C} U_{K_c} = U_K$. The main steps of the training of the SOF classifier are given below, followed by a simplified code sketch.
  • Step 1: The multimodal density [41] of the $i$th unique data sample of the $c$th class, $D_{K_c}^{MM}(u_i^c)$, is calculated as in (4), where $i = 1, 2, \ldots, U_{K_c}$ and $d(x_i, x_j)$ denotes the distance/dissimilarity between two data points.
    $$D_{K_c}^{MM}(u_i^c) = \frac{f_i^c \sum_{l=1}^{K_c} \sum_{j=1}^{K_c} d^2(x_l^c, x_j^c)}{2 K_c \sum_{j=1}^{K_c} d^2(u_i^c, x_j^c)} \quad (4)$$
  • Step 2: The sample set $\{u\}_{U_{K_c}}^c$ is sorted according to the multimodal densities and mutual distances calculated in Step 1. The sorted sample set is denoted by $\{r\} = \{r_1, r_2, \ldots, r_{U_{K_c}}\}$, where $r_1$ is given in (5) and the rest are obtained as in (6). Here, $r_k$ denotes the most recently sorted sample. It should be noted that the $u_i^c$ corresponding to $r_k$ is excluded in each evaluation of (6), and the process is repeated until all the data samples are sorted.
    $$r_1 = \arg\max_{i = 1, 2, \ldots, U_{K_c}} \left( D_{K_c}^{MM}(u_i^c) \right) \quad (5)$$
    $$r_j = \arg\min_{u_i^c \in \{u\}_{U_{K_c}}^c} \left( d(r_k, u_i^c) \right), \quad j = 2, 3, \ldots, U_{K_c} \quad (6)$$
  • Step 3: The multimodal density sequence after the sorting in Step 2 is taken as $\{D_{K_c}^{MM}(r)\}$. The initial set of prototypes, $\{p\}_0$, is generated by evaluating the condition given in (7); that is, the local maxima of $\{D_{K_c}^{MM}(r)\}$ are taken as $\{p\}_0$.
    $$\text{IF } \left( D_{K_c}^{MM}(r_i) > D_{K_c}^{MM}(r_{i+1}) \right) \text{ AND } \left( D_{K_c}^{MM}(r_i) > D_{K_c}^{MM}(r_{i-1}) \right) \text{ THEN } \left( r_i \in \{p\}_0 \right) \quad (7)$$
  • Step 4: Nearby data samples are attracted to the prototypes to form data clouds resembling a Voronoi tessellation [42]. The assignment of a data sample to a cloud is determined by its winning prototype, obtained as in (8).
    $$\text{Winning Prototype} = \arg\min_{p \in \{p\}_0} \left( d(x_i, p) \right), \quad x_i \in \{x\}_{K_c}^c \quad (8)$$
  • Step 5: The set of centers of the formed data clouds, $\{\phi\}_0$, is identified; $\{\phi\}_0$ is equivalent to $\{p\}_0$. The multimodal density at the center of the $i$th cloud is calculated as in (9), where $S_i$ is the number of members of the $i$th cloud and $n$ is the number of clouds formed.
    $$D_{K_c}^{MM}(\phi_i) = \frac{S_i \sum_{l=1}^{n} \sum_{j=1}^{n} d^2(\phi_l, \phi_j)}{2 n \sum_{j=1}^{n} d^2(\phi_i, \phi_j)} \quad (9)$$
  • Step 6: The set of centers of the data clouds neighboring the $i$th data cloud, $\{\phi\}_i^{neighbor}$, is identified for each $i$ based on the condition given in (10), such that $\phi_j \in \{\phi\}_0$ and $\phi_j \neq \phi_i$. Here, $G_{K_c}^{c,L}$ defines the average radius of the local influential area around each data sample. This parameter is calculated iteratively as given in (11), based on the granularity level $L \in \mathbb{Z}^+$ defined by the user. $Q_{K_c}^{c,L}$ is the number of pairs of data samples whose squared distance is no greater than $G_{K_c}^{c,L-1}$ when $L > 1$; when $L = 1$, $Q_{K_c}^{c,L}$ is the number of pairs of data samples whose squared distance is no greater than the average squared distance, $\bar{d}_{K_c}^c$.
    $$\text{IF } \left( d^2(\phi_i, \phi_j) \le G_{K_c}^{c,L} \right) \text{ THEN } \left( \phi_j \in \{\phi\}_i^{neighbor} \right) \quad (10)$$
    $$G_{K_c}^{c,L} = \begin{cases} \dfrac{\sum_{x, y \in \{x\}_{K_c}^c,\, x \neq y,\, d^2(x,y) \le \bar{d}_{K_c}^c} d^2(x, y)}{Q_{K_c}^{c,L}}, & L = 1 \\ \dfrac{\sum_{x, y \in \{x\}_{K_c}^c,\, x \neq y,\, d^2(x,y) \le G_{K_c}^{c,L-1}} d^2(x, y)}{Q_{K_c}^{c,L}}, & \text{otherwise} \end{cases} \quad (11)$$
  • Step 7: The set of representative prototypes of the $c$th class, $\{p\}^c$, is identified by evaluating the condition given in (12).
    $$\text{IF } \left( D_{K_c}^{MM}(\phi_i) > \max_{\phi \in \{\phi\}_i^{neighbor}} \left( D_{K_c}^{MM}(\phi) \right) \right) \text{ THEN } \left( \phi_i \in \{p\}^c \right) \quad (12)$$
  • Step 8: A zeroth-order AnYa type fuzzy rule is created for the $c$th class in the format given in (1), where $N_c$ is the number of representative prototypes.
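The sketch below condenses Steps 1–8 for a single class into simplified Python, assuming Euclidean distance. It is an illustrative reduction rather than the authors' implementation: data-cloud centers are taken as the initial prototypes themselves, and the neighborhood radius of (11) is approximated by repeatedly averaging the small pairwise squared distances. The function name and use of SciPy are assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def train_class_prototypes(X: np.ndarray, granularity: int = 8) -> np.ndarray:
    """Simplified single-class sketch of Steps 1-8 with Euclidean distance."""
    D2 = cdist(X, X, metric="sqeuclidean")                      # pairwise squared distances
    unique, freq = np.unique(X, axis=0, return_counts=True)
    D2u = cdist(unique, X, metric="sqeuclidean")

    # Step 1: multimodal density of each unique sample, Eq. (4)
    density = freq * D2.sum() / (2 * len(X) * D2u.sum(axis=1))

    # Step 2: rank samples starting from the global density peak, then
    # repeatedly walk to the nearest not-yet-ranked unique sample
    order = [int(np.argmax(density))]
    remaining = set(range(len(unique))) - set(order)
    while remaining:
        last = unique[order[-1]]
        nxt = min(remaining, key=lambda i: float(np.sum((unique[i] - last) ** 2)))
        order.append(nxt)
        remaining.remove(nxt)
    ranked = density[order]

    # Step 3: initial prototypes are the local maxima of the ranked densities, Eq. (7)
    p0_idx = [order[i] for i in range(len(order))
              if (i == 0 or ranked[i] > ranked[i - 1])
              and (i == len(order) - 1 or ranked[i] > ranked[i + 1])]
    p0 = unique[p0_idx]

    # Steps 4-5: assign samples to their winning prototypes, Eq. (8);
    # the cloud centers are the initial prototypes, with densities per Eq. (9)
    win = np.argmin(cdist(X, p0, metric="sqeuclidean"), axis=1)
    sizes = np.array([np.sum(win == k) for k in range(len(p0))])
    Dc2 = cdist(p0, p0, metric="sqeuclidean")
    cloud_density = sizes * Dc2.sum() / (2 * len(p0) * np.maximum(Dc2.sum(axis=1), 1e-12))

    # Step 6 (approximation): shrink the neighborhood radius `granularity` times, Eq. (11)
    pairs = D2[np.triu_indices(len(X), k=1)]
    radius = pairs.mean()
    for _ in range(granularity):
        close = pairs[pairs <= radius]
        radius = close.mean() if close.size else radius

    # Steps 7-8: keep cloud centers that dominate their neighborhood, Eq. (12);
    # the surviving prototypes form the class rule of Eq. (1)
    keep = []
    for i in range(len(p0)):
        neighbors = [j for j in range(len(p0)) if j != i and Dc2[i, j] <= radius]
        if not neighbors or cloud_density[i] > max(cloud_density[j] for j in neighbors):
            keep.append(p0[i])
    return np.array(keep)
```

In use, this routine would be run once per class (good, medium, bad), and the returned prototype arrays would populate the rule base consumed by the inference sketch in Section 3.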

4. Results and Discussion

4.1. Data Collection, Training, and Classification Performance

The data set required for training and testing was prepared by capturing images of blasted ship hulls through the robot's camera. The captured images were manually labeled with the benchmarking categories with the support of expert knowledge. For each benchmarking class, 1850 images were gathered, yielding a total data set of 5550 images. Geometric transformations, such as rotations, were applied to the data set to prevent possible overfitting. The data set was randomly divided into two subsets for training and testing in the ratio 80:20.
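A minimal sketch of the 80:20 split is given below; the feature/label file names, label strings, and the use of a stratified scikit-learn split are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical precomputed feature matrix (5550 x 30) and labels ("good", "medium", "bad")
X = np.load("blasting_features.npy")
y = np.load("blasting_labels.npy")

# 80:20 random split, stratified so each class keeps its 1850-image proportion
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
```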
The variation of the classification accuracy of the proposed Self-Organizing Fuzzy logic (SOF) classifier on the testing data was obtained by varying the granularity level (i.e., L) and the type of distance/dissimilarity measure. Furthermore, the variation of the execution time (i.e., t) of the classifier was also examined. The variation of the mean accuracy and the mean execution time with granularity level and distance type is given in Table 1. The mean accuracy was calculated over 10 differently trained classifiers (10 randomly selected training data sets) for each case. The mean execution time was obtained by running the classifier 100 times for each case on a laptop with an Intel Core i7-9750H processor and 16 GB of memory. It should be noted that the execution time represents only the time taken by the classifier to assign the class of a given input; it does not include the time taken by the vision processing module to extract the inputs for the classifier.
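The timing procedure can be sketched as follows, assuming the trained classifier exposes a `predict(x)` method; this is illustrative and not the exact measurement code used by the authors.

```python
import time

def mean_execution_time(classifier, samples, runs: int = 100) -> float:
    """Average per-classification time in seconds over repeated runs.

    Sketch of the timing procedure described above; `classifier.predict(x)`
    is an assumed interface returning the class label.
    """
    start = time.perf_counter()
    for _ in range(runs):
        for x in samples:
            classifier.predict(x)
    return (time.perf_counter() - start) / (runs * len(samples))
```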
An improvement in accuracy with increasing L was observed when the cosine distance was used. In contrast, a reduction in accuracy was observed when L changed from 8 to 12 with the Euclidean distance. Overfitting of the classifier was the reason for this behavior, since L controls the generalization of the classifier. An increase in execution time with increasing L was observed irrespective of the distance type. Nevertheless, the execution times are on the order of microseconds (the highest being 23.8 µs), implying a trivial computational overhead for the real-time operation of the robot.
The highest mean accuracy of 0.9942 was observed when the Euclidean distance and a granularity level of 8 were used for the SOF classifier. The mean execution time for this configuration was 8.67 µs. Therefore, a classifier trained with the Euclidean distance and a granularity level of 8 was used for the real-time experiments with the robotic system. The confusion matrix corresponding to this trained SOF classifier is given in Table 2 to convey insights into the classification performance.

4.2. Real-Time Operation of the Robot

The overall proposed system has been implemented, and benchmarking map generation has been tested on a ship hull. The benchmarking robot was placed on a ship hull, as shown in Figure 9. The ship hull had been blasted by the blasting robot prior to this experiment. In most situations, the blasting robot covers the ship hull with more or less even blasting quality. However, areas with different blasting qualities can be observed due to operational issues of the robot, such as interruptions of the pressurized water supply and sudden variations in navigation speed, during long periods of real operation. To better demonstrate the benchmarking ability, such interruptions were intentionally created during the blasting. Intentionally interrupting the blasting robot to produce different blasting qualities does not detract from the spontaneity of typical operating conditions. After placing the benchmarking robot on the ship hull, the robot was moved along a horizontal path. The SOF classifier analyzed the visual feedback of the robot in real time. A video of this segment of the experiment is provided as a supplementary multimedia attachment (Supplementary Materials: Video S1).
The robot could achieve a frame rate of 14.66 frames per second (fps). However, the frame rate was intentionally limited to 7 fps to avoid unnecessarily high overlap of the areas inspected by the robot in consecutive frames. The corresponding benchmarking map generated by the system is overlaid on top of the ship hull, as shown in Figure 9b. Here, the areas in green represent good blasting quality, while the areas in yellow and red represent areas with medium and bad blasting quality, respectively. Altogether, 195 captured frames were analyzed by the robot during this run, and the corresponding frame numbers are annotated in this overlaid map. The frames captured by the robot's camera at intervals of 20 frames are given in Figure 10 as samples to convey the corresponding appearance and condition of the surface.
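A simple way to cap the processing rate, assuming an OpenCV-style capture interface, is sketched below; the actual robot software may throttle frames differently.

```python
import time
import cv2

def limited_frames(capture: "cv2.VideoCapture", target_fps: float = 7.0):
    """Yield frames at a capped rate (a sketch of the 7 fps limit described above)."""
    period = 1.0 / target_fps
    while True:
        start = time.perf_counter()
        ok, frame = capture.read()      # OpenCV convention: (success flag, image)
        if not ok:
            break
        yield frame
        # Sleep off the remainder of the frame period so downstream modules
        # see at most `target_fps` frames per second.
        time.sleep(max(0.0, period - (time.perf_counter() - start)))
```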
The area represented by frame 1 can be benchmarked as bad quality blasting based on expert knowledge. The corresponding color of the benchmarking map for the area represented by this frame is red, which indicates that the benchmarking method correctly identified this frame. Similarly, frames 20 and 40 were benchmarked as bad by the system, as expected. Frames 60, 80, and 100 were benchmarked as good by the system, in agreement with human expert knowledge. However, frame 120 was benchmarked as good by the system, although the frame should not have been tagged as good according to the human expert. The possible cause of this misclassification is that this image represents a combination of a good segment and a bad segment. Frames 140, 160, and 180 were tagged as bad in the benchmarking map, in agreement with the expert recommendation. Overall, the benchmarking robot was capable of correctly benchmarking the blasting quality to a great extent. Nevertheless, a few failure situations were observed when the captured frames involved segments of different blasting qualities. Based on these observations, it can be concluded that the proposed benchmarking robotic system is capable of benchmarking hydro blasting with adequate accuracy in real time. Furthermore, it is capable of synthesizing a benchmarking map for a previously conducted hydro blasting process.
The benchmarking map generated by the robot can be used to ensure the quality of a blasted ship hull before painting begins. If paint were applied to a ship hull that was not adequately blasted, the new paint would not last long, which would degrade the quality of the maintenance work. Thereby, the proposed method for synthesizing a benchmarking map of the hydro blasting is useful for identifying the areas that were not adequately blasted so that selective spot blasting can subsequently be carried out on those areas. The benchmarking map is also useful for planning an efficient path for the blasting robot during the reblasting process in such cases. Furthermore, the representation of the blasting quality categories in the map can be used to decide the amount of pressure required for reblasting the corresponding area. For example, if the benchmarking category is medium, a medium level of pressure can be used; in contrast, if the benchmarking category of an area is bad, a high level of pressure can be applied to the corresponding area. Moreover, the categorical representation of the blasting quality in the benchmarking map helps to improve the overall efficiency of the blasting work. Therefore, the ability to synthesize a benchmarking map for hydro blasting work on a ship hull is widely useful for improving the efficiency and quality of ship hull maintenance.
A few misclassifications were observed during the real-time operation of the robotic system. However, the accuracy of the proposed classifier is about 99%, which is adequate for the application context. In addition to misclassification errors, navigation errors of the robot due to poor localization introduce errors into the synthesized benchmarking map. Hence, the accuracy of the navigation and localization of the benchmarking robot should be maintained at a high level to minimize the introduction of such errors. Nevertheless, relying solely on the beacon-based localization system is challenging in the working environment due to the high-frequency noise generated by the surrounding heavy-duty machinery. Therefore, the position estimates from the beacon system, the inertial measurement unit, and the wheel encoders are fused using a Kalman filter to improve the accuracy of the localization to an acceptable level.
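A minimal 2D Kalman filter sketch of this fusion idea is given below; the state layout, noise covariances, and interfaces are assumptions for illustration and not the robot's actual localization implementation.

```python
import numpy as np

class PositionFuser:
    """Minimal 2D Kalman filter sketch: odometry prediction, beacon correction."""

    def __init__(self):
        self.x = np.zeros(2)        # estimated (x, y) position
        self.P = np.eye(2)          # estimate covariance
        self.Q = np.eye(2) * 0.01   # process noise of the odometry prediction (assumed)
        self.R = np.eye(2) * 0.25   # measurement noise of the beacon fixes (assumed)

    def predict(self, delta_odom: np.ndarray) -> None:
        # Dead-reckoning step using the displacement reported by the wheel
        # encoders and the inertial measurement unit.
        self.x = self.x + delta_odom
        self.P = self.P + self.Q

    def update(self, beacon_xy: np.ndarray) -> np.ndarray:
        # Correct the prediction with the beacon-based absolute position fix.
        K = self.P @ np.linalg.inv(self.P + self.R)        # Kalman gain (H = I)
        self.x = self.x + K @ (beacon_xy - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x
```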
The scope of this paper is limited to the design and development of a robotic benchmarking system for a robot-aided ship hull hydro blasting process. The proposed system is capable of synthesizing a benchmarking map for a previously blasted ship hull. Therefore, the work presented in this paper could contribute to improving the dry dock maintenance industry. Investigations into methods for efficient path planning for a blasting robot and for determining the optimum pressure for selective blasting based on the information in a synthesized benchmarking map are proposed as future work.
In current practice, the blasting quality is inspected by human experts for the decision-making process, and it is determined based on qualitative factors of the blasted surface. This paper proposed a method to automate this inspection process. Nevertheless, the expert knowledge has to be captured through the labeling of the data set, since a labeled data set is required for training the proposed classifier. Manual labeling of a large data set is a time-consuming task and would compromise the accuracy of the classifier if the expert knowledge were not captured accurately. During the labeling process, two human experts were asked to label the images independently, and the label of an image was determined based on the unanimous decision of the human experts. This strategy ensures the correctness of the expert knowledge captured through the manual labeling of the data set. The use of unsupervised learning methods for the classification would alleviate the requirement for a labeled data set. Investigating the possibility of using unsupervised learning methods for the classification is proposed as future work.

5. Conclusions

Routine dry dock maintenance work on ship hulls is essential for the efficient and sustainable operation of the shipping industry. In this regard, hydro blasting is one of the major maintenance tasks on ship hulls. Robots and systems have been developed for the dry dock maintenance industry to overcome the shortcomings of conventional methods.
This paper proposed a novel robotic system for benchmarking robot-aided hydro blasting of ship hulls. The proposed robotic system is capable of synthesizing a benchmarking map that indicates, area-wise, the quality of a previously conducted hydro blasting process. A Self-Organizing Fuzzy logic (SOF) classifier has been developed to realize the classification of blasting quality in a manner similar to human expert categorization. A multipurpose inspection and maintenance robot called Hornbill, which can perform hydro blasting, benchmarking, and painting, has also been developed to facilitate a fully integrated system.
The SOF classifier has been trained and tested with a set of image data collected from the robot's camera and labeled based on expert knowledge. The variation of the classification accuracy and the execution time of the SOF classifier with granularity level and distance/dissimilarity measure type has been studied to evaluate the classification performance. The highest achieved mean classification accuracy was 0.9942. The mean execution time of the classifier in the corresponding case was 8.67 µs, which indicates a trivial computational overhead for the system.
Real-time experiments with the proposed robotic system have been conducted on a ship hull by navigating the benchmarking robot along a given path. According to the results, the proposed robotic system is capable of synthesizing a benchmarking map for a previously conducted hydro blasting process. A synthesized benchmarking map can represent the blasting quality of different areas at multiple levels, similar to human expert categorization.
This benchmarking map can be used for planning an efficient navigation path for a blasting robot (for reblasting if a surface was not properly blasted) and for facilitating spot blasting with different parameter settings based on the identified blasting quality of the corresponding area. In addition, it can be used to verify whether the area of interest is adequately blasted before painting begins. Therefore, the proposed robotic system would be highly beneficial in improving the quality and efficiency of hydro blasting work in the ship hull maintenance industry. The development of methods for optimizing a robot-aided reblasting process based on a synthesized benchmarking map is proposed as future work.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/20/11/3215/s1, Video S1: sensors-20-03215 supplementary.v3.

Author Contributions

Conceptualization, M.A.V.J.M., A.V.L. and M.R.E.; methodology, M.A.V.J.M.; software, M.A.V.J.M. and A.V.L.; validation, M.A.V.J.M., A.V.L. and P.V.; formal analysis, M.A.V.J.M.; investigation, M.A.V.J.M.; resources, M.K.; data curation, A.V.L. and P.V.; writing—original draft preparation, M.A.V.J.M. and E.S.C.; writing—review and editing, M.A.V.J.M. and M.R.E.; supervision, M.R.E.; project administration, M.R.E.; funding acquisition, M.R.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Robotics Programme under its Robotics Enabling Capabilities and Technologies (Funding Agency Project No. 192 25 00051) and administered by the Agency for Science, Technology and Research.

Acknowledgments

The authors would like to thank Brightsun Marine Pte Ltd., Singapore, for facilitating the experiments of the proposed robotic system in ship hulls.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Garbatov, Y.; Sisci, F.; Ventura, M. Risk-based framework for ship and structural design accounting for maintenance planning. Ocean Eng. 2018, 166, 12–25. [Google Scholar] [CrossRef]
  2. Adland, R.; Cariou, P.; Jia, H.; Wolff, F.C. The energy efficiency effects of periodic ship hull cleaning. J. Clean. Prod. 2018, 178, 1–13. [Google Scholar] [CrossRef]
  3. Swain, G.; Lund, G. Dry-Dock Inspection Methods for Improved Fouling Control Coating Performance. J. Ship Prod. Des. 2016, 32, 186–193. [Google Scholar] [CrossRef]
  4. Gong, C.; Frangopol, D.M.; Cheng, M. Risk-based life-cycle optimal dry-docking inspection of corroding ship hull tankers. Eng. Struct. 2019, 195, 559–567. [Google Scholar] [CrossRef]
  5. Muthugala, M.A.V.J.; Vega-Heredia, M.; Vengadesh, A.; Sriharsha, G.; Elara, M.R. Design of an Adhesion-Aware Façade Cleaning Robot. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 1441–1447. [Google Scholar]
  6. Le, A.V.; Nhan, N.H.K.; Mohan, R.E. Evolutionary Algorithm-Based Complete Coverage Path Planning for Tetriamond Tiling Robots. Sensors 2020, 20, 445. [Google Scholar] [CrossRef] [Green Version]
  7. Samarakoon, S.M.B.P.; Muthugala, M.A.V.J.; Le, A.V.; Elara, M.R. hTetro-Infi: A Reconfigurable Floor Cleaning Robot With Infinite Morphologies. IEEE Access 2020, 8, 69816–69828. [Google Scholar] [CrossRef]
  8. Muthugala, M.A.V.J.; Vega-Heredia, M.; Mohan, R.E.; Vishaal, S.R. Design and Control of a Wall Cleaning Robot with Adhesion-Awareness. Symmetry 2020, 12, 122. [Google Scholar] [CrossRef] [Green Version]
  9. Bonnin-Pascual, F.; Ortiz, A. On the use of robots and vision technologies for the inspection of vessels: A survey on recent advances. Ocean Eng. 2019, 190, 106420. [Google Scholar] [CrossRef]
  10. Hachicha, S.; Nejim, S.; Zaoui, C.; Maalej, A.; Dallagi, H. Study and modeling of a hull cleaning station with an arm manipulator. In Proceedings of the 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET), Hammamet, Tunisia, 22–25 March 2018; pp. 132–137. [Google Scholar]
  11. Zheng, X.; Lan, G.; Chew, C.M.; Lu, W.F. Design of a semi-automatic robotic system for ship hull surface blasting. In Proceedings of the 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), Berlin, Germany, 6–9 September 2016; pp. 1–4. [Google Scholar]
  12. Navarro, P.J.; Muro, J.S.; Alcover, P.M.; Fernández-Isla, C. Sensors systems for the automation of operations in the ship repair industry. Sensors 2013, 13, 12345–12374. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Li, X.; Alexander, A.A.; Liu, N.; Wang, S.; Sulaimee, N.H.B.; Wong, F.S.; Lu, W.F.; Chew, C.M. A Semi-Automatic System for Grit-Blasting Operation in Shipyard. In Proceedings of the 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Turin, Italy, 4–7 September 2018; Volume 1, pp. 1133–1136. [Google Scholar]
  14. Aijazi, A.; Malaterre, L.; Tazir, M.; Trassoudaine, L.; Checchin, P. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d Lidar Point Clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 153–160. [Google Scholar] [CrossRef]
  15. Caldwell, R. Hull inspection techniques and strategy-remote inspection developments. In SPE Offshore Europe Conference & Exhibition; Society of Petroleum Engineers: Richardson, TX, USA, 2017. [Google Scholar]
  16. Schmidt, D.; Berns, K. Climbing robots for maintenance and inspections of vertical structures—A survey of design aspects and technologies. Robot. Auton. Syst. 2013, 61, 1288–1305. [Google Scholar] [CrossRef]
  17. Brusell, A.; Andrikopoulos, G.; Nikolakopoulos, G. A survey on pneumatic wall-climbing robots for inspection. In Proceedings of the 2016 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; pp. 220–225. [Google Scholar]
  18. Garrido, G.G.; Sattar, T.; Corsar, M.; James, R.; Seghier, D. Towards safe inspection of long weld lines on ship hulls using an autonomous robot. In Proceedings of the 21st International Conference on Climbing and Walking Robots (CLAWAR 2018), Panamá, Panama, 10–12 September 2018. [Google Scholar]
  19. Ahmed, M.; Eich, M.; Bernhard, F. Design and Control of MIRA: A Lightweight Climbing Robot for Ship Inspection. In International Letters of Chemistry, Physics and Astronomy; Sci Press: Bach, Switzerland, 2015; Volume 55, pp. 128–135. [Google Scholar] [CrossRef]
  20. Stepson, W.; Amarasinghe, A.; Fernando, P.; Amarasinghe, Y. Design and development of a mobile crawling robot with novel halbach array based magnetic wheels. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 6561–6566. [Google Scholar]
  21. Huang, H.; Li, D.; Xue, Z.; Chen, X.; Liu, S.; Leng, J.; Wei, Y. Design and performance analysis of a tracked wall-climbing robot for ship inspection in shipbuilding. Ocean Eng. 2017, 131, 224–230. [Google Scholar] [CrossRef]
  22. Welch, H.; Mondal, S. Analysis of Magnetic Wheel Adhesion Force for Climbing Robot. J. Robot. Mechatron. 2019, 3, 534–541. [Google Scholar] [CrossRef]
  23. Xu, Z.; Xie, Y.; Zhang, K.; Hu, Y.; Zhu, X.; Shi, H. Design and optimization of a magnetic wheel for a grit-blasting robot for use on ship hulls. Robotica 2017, 35, 712–728. [Google Scholar] [CrossRef]
  24. Zhang, M.; Zhang, H.; Song, Q. Study on Negative Pressure Adsorption Characteristics of Ship Climbing Robot. In Proceedings of the 2017 International Conference on Computer Technology, Electronics and Communication (ICCTEC), Dalian, China, 19–21 December 2017; pp. 1061–1067. [Google Scholar]
  25. Milella, A.; Maglietta, R.; Caccia, M.; Bruzzone, G. Robotic inspection of ship hull surfaces using a magnetic crawler and a monocular camera. Sens. Rev. 2017, 37, 425–435. [Google Scholar] [CrossRef]
  26. Ahuja, S.K.; Shukla, M.K.; Ahuja, S.K.; Shukla, M.K. A survey of computer vision based corrosion detection approaches. In International Conference on Information and Communication Technology for Intelligent Systems; Springer: Berlin, Germany, 2017; pp. 55–63. [Google Scholar]
  27. Kumar, M.; Satyanarayana, K.V.V.; Ramesh, A.P. Surface Corrosion Grade Classification using Convolution Neural Network. Int. J. Recent Technol. Eng. 2019, 8, 7645–7649. [Google Scholar]
  28. Jalalian, A.; Lu, W.; Wong, F.; Ahmed, S.; Chew, C.M. An Automatic Visual Inspection Method based on Statistical Approach for Defect Detection of Ship Hull Surfaces. In Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20–24 August 2018; pp. 445–450. [Google Scholar]
  29. Ortiz, A.; Bonnin-Pascual, F.; Garcia-Fidalgo, E. Visual inspection of vessels by means of a micro-aerial vehicle: An artificial neural network approach for corrosion detection. In Robot 2015: Second Iberian Robotics Conference; Springer: Berlin, Germany, 2016; pp. 223–234. [Google Scholar]
  30. Fernández-Isla, C.; Navarro, P.J.; Alcover, P.M. Automated visual inspection of ship hull surfaces using the wavelet transform. Math. Probl. Eng. 2013, 2013. [Google Scholar] [CrossRef]
  31. Gu, X.; Angelov, P.P. Self-organising fuzzy logic classifier. Inf. Sci. 2018, 447, 36–51. [Google Scholar] [CrossRef] [Green Version]
  32. De Silva, C.W. Intelligent Control: Fuzzy Logic Applications; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  33. Zadeh, L.A. Is there a need for fuzzy logic? Inf. Sci. 2008, 178, 2751–2779. [Google Scholar] [CrossRef]
  34. Muthugala, M.A.V.J.; Vengadesh, A.; Wu, X.; Elara, M.R.; Iwase, M.; Sun, L.; Hao, J. Expressing attention requirement of a floor cleaning robot through interactive lights. Autom. Constr. 2020, 110, 103015. [Google Scholar] [CrossRef]
  35. Ross, T.J. Fuzzy Logic with Engineering Applications; John Wiley & Sons: Hoboken, NJ, USA, 2005. [Google Scholar]
  36. Muthugala, M.A.V.J.; Srimal, P.H.D.; Jayasekara, A.G.B.P. Enhancing interpretation of ambiguous voice instructions based on the environment and the user’s intention for improved human-friendly robot navigation. Appl. Sci. 2017, 7, 821. [Google Scholar] [CrossRef] [Green Version]
  37. Muthugala, M.A.V.J.; Jayasekara, A.G.B.P. Improving the understanding of navigational commands by adapting a robot’s directional perception based on the environment. J. Ambient Intell. Smart Environ. 2019, 11, 135–148. [Google Scholar] [CrossRef]
  38. Hagras, H. Toward Human-Understandable, Explainable AI. Computer 2018, 51, 28–36. [Google Scholar] [CrossRef]
  39. Du, W.; Guo, X.; Wang, Z.; Wang, J.; Yu, M.; Li, C.; Wang, G.; Wang, L.; Guo, H.; Zhou, J.; et al. A New Fuzzy Logic Classifier Based on Multiscale Permutation Entropy and Its Application in Bearing Fault Diagnosis. Entropy 2020, 22, 27. [Google Scholar] [CrossRef] [Green Version]
  40. Angelov, P.; Yager, R. A new type of simplified fuzzy rule-based system. Int. J. Gen. Syst. 2012, 41, 163–185. [Google Scholar] [CrossRef]
  41. Angelov, P.P.; Gu, X.; Príncipe, J.C. A generalized methodology for data analysis. IEEE Trans. Cybern. 2017, 48, 2981–2993. [Google Scholar] [CrossRef] [Green Version]
  42. Okabe, A.; Boots, B.; Sugihara, K.; Chiu, S.N. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 501. [Google Scholar]
Figure 1. The overall sequence of the application context of the proposed robotic system. (a) Blasting robot is sent in a zig-zag path for uniformly blasting the area; (b) The benchmarking robot is sent in the previously blasted area to benchmark the blasting quality; (c) The benchmarking map that represents the quality of blasting in different areas.
Figure 2. Categorization of the blasting quality considered for benchmarking. (a) Appearances of good areas; (b) Appearances of medium quality areas; (c) Appearances of bad quality areas.
Figure 3. Functional overview.
Figure 4. Steps of image processing for feature extraction.
Figure 5. Hornbill Design.
Figure 6. Part Diagram.
Figure 7. (a) Hornbill used as the benchmarking robot; (b) Hornbill performing hydro blasting.
Figure 8. Architecture of Self-Organizing Fuzzy Logic (SOF) classifier.
Figure 9. (a) Experimental setup; (b) The resulting benchmarking map, overlaid on the hull surface for better comparison. The areas predicted by the system to have good blasting quality are shown in green; the medium and bad quality areas are shown in yellow and red, respectively. The corresponding frame numbers are annotated below the synthesized benchmarking map.
Figure 10. The image frames captured during the robot's run at intervals of 20 frames. The corresponding frame number is annotated in each image.
Table 1. Performance of the SOF classifier in different configurations.

Distance Measure   L    Accuracy   t (µs)
Euclidean          4    0.9744     2.86
Euclidean          8    0.9942     8.67
Euclidean          12   0.9928     23.8
Cosine             4    0.9779     6.56
Cosine             8    0.9915     10.31
Cosine             12   0.9933     11.04
Table 2. Confusion matrix (rows: predicted class; columns: actual class).

Predicted \ Actual   Good   Medium   Bad
Good                 370    4        0
Medium               0      366      1
Bad                  0      0        369
