Article

Development of an Automated Spare-Part Management Device for Ship Controlled by Raspberry-Pi Microcomputer Based on Image-Processing & Transfer-Learning

1 Division of Marine System Engineering, Korea Maritime and Ocean University, Busan 49112, Republic of Korea
2 Division of Marine Information Technology, Korea Maritime and Ocean University, Busan 49112, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(5), 1015; https://doi.org/10.3390/jmse11051015
Submission received: 11 April 2023 / Revised: 8 May 2023 / Accepted: 8 May 2023 / Published: 10 May 2023

Abstract

As the development of autonomous ships progresses in the maritime industry, automating the management of ship spare parts has become an important issue; however, few dedicated devices or applications for ships have been developed. This study develops a Raspberry Pi-based embedded application that identifies the type and quantity of spare parts using a transfer-learning model and a newly improved image processing algorithm suited to ship spare-part recognition. Candidate transfer-learning models were trained and validated on a real spare-parts dataset to select one that balances accuracy and training speed, achieving a prediction accuracy of 98.2% with a training time of 158 s. An experimental device built around this model used a camera to identify the type and quantity of spare parts on an actual ship and displayed the spare-parts list on a remotely connected computer. The ASSM (Automated Ship Spare-Part Management) device, combining image processing and transfer learning, is a new technology that successfully automates spare-part management.

1. Introduction

Since the IMO MSC 98 meeting defined a Maritime Autonomous Surface Ship (MASS) [1] as ‘a ship which, to a varying degree, can operate independently of human interaction’, the technologies of the 4th Industrial Revolution have also been applied to the marine field, and breakthrough development using artificial intelligence and big data technologies is underway [2,3,4]. Because most marine accidents are caused by human error, realising autonomous ships can reduce such accidents by reducing the number of crew members on board [5,6,7]. Therefore, systems that can remotely perform and manage the work a seafarer would otherwise carry out manually are being developed [8,9,10,11].
One task that requires automation is the ship’s inventory management of material resources. Ships carry a wide variety of materials: a single ship plant contains more than 240 machines, with more than 3000 types of inventory items listed for them, and more than 4000 once consumables are included [12,13]. Although the consumption and supply of ship materials are tracked through each shipowner’s resource-management platform, the recorded stock inevitably drifts from the actual stock. To reduce this discrepancy, crew members periodically take physical inventories [13,14]. In particular, spare parts that are not readily available, as well as statutory spare parts, must be managed strictly, as this is necessary for the safe operation of the ship. However, carrying more materials than necessary increases maintenance costs and consumes limited space, whereas a shortage can make it impossible to repair machinery needed for navigation, potentially rendering the ship unable to sail [15].
In the logistics industry, various studies on inventory management and its enabling technologies have been conducted: inventory control for spare parts in aviation logistics [16], inventory control using Markov models [17], RFID-based inventory control systems [18,19,20], and QR-code-based inventory control systems [21,22]. However, these methods are not suitable for actual ship spare parts. Ship spare parts are supplied by many different manufacturers, so assigning RFID tags or QR codes to each part individually is impractical. In object recognition, research and development have been conducted using various APIs (YOLO, OpenCV, etc.) [23,24,25,26], and devices using embedded systems are also being developed [27,28,29,30].
Recently, studies have been conducted to recognise ship parts using transfer-learning models [31]. However, these studies address only the accuracy of the recognition models, not algorithms for counting the number of spare parts.
In image processing, numerous algorithms have been developed for object recognition. However, achieving proper performance requires optimizing or customizing their parameters for the specific environment in which they are used. This study proposes an image processing algorithm optimized for identifying the number of spare parts in the ship environment. Furthermore, because a predictive model that guarantees high performance requires significant time and memory to operate, selecting a learning model with low memory usage but high accuracy is crucial. A model balancing prediction accuracy and learning time was selected for the device by comparing four transfer-learning models (ResNet-50, VGG-19, ShuffleNet, and SqueezeNet) that have shown excellent performance in object classification.
The hardware for device development was selected based on Table 1. The Raspberry Pi microcomputer was chosen over microcontrollers such as the Arduino because of its higher hardware performance, which makes it more suitable for testing the generalization performance of the selected prediction model and the proposed image processing algorithm.
This study achieved automated ship spare-part management by combining the selected transfer-learning model with the proposed image processing algorithm on high-performance hardware. This demonstrates the possibility of automating spare-part management on ships in actual operation, which was verified through generalized testing over a remote network.

2. Image Identification and Prediction Algorithm

By inputting an image of the spare part, the class of the spare part is predicted and the number of spare parts of that class is identified. The spare-part class is predicted using the DCNN model covered in Section 3.
This section describes an algorithm that back-projects an image using the image histogram and then applies OpenCV’s contour function to the back-projected image to identify the number of spare parts. The Open Source Computer Vision Library (OpenCV) is the most representative and popular library for image processing and visual programming.

2.1. Proposed Image-Processing Algorithm

There are several algorithms for recognizing the number of objects in an image, including template matching, blob detection, cascade classifiers, and contour detection. Among them, contour detection is the most commonly used: it is a computer vision technique for detecting the boundaries of objects in an image.
The cv2.findContours() function is used for contour detection; it takes a binary image as input and detects all contours in it. The quality and accuracy of the contours depend on several factors, such as the quality of the input image, the thresholding or edge-detection algorithm used, and the contour approximation method.
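As a minimal Python/OpenCV sketch of this usage (the file name and threshold value are illustrative assumptions, not taken from the paper):

```python
# Minimal contour-detection sketch (OpenCV 4.x): threshold a grayscale
# image, then count the external contours found by cv2.findContours().
import cv2

img = cv2.imread("spare_part.png")          # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Binarize: pixels above 127 become 255 (white), the rest 0 (black).
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
# RETR_EXTERNAL keeps only the outermost boundaries; CHAIN_APPROX_SIMPLE
# compresses straight segments to their end points.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Detected {len(contours)} object contour(s)")
```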
Figure 1 shows the result of applying the contour detection algorithm to the spindle, one of the ship’s spare parts. The algorithm determines boundaries between regions of similar colour or intensity based on a threshold value and performs a greyscale binary classification, so it is affected by the type of object and its environment.
Therefore, this section introduces a new object detection algorithm that removes the background while retaining the colour characteristics of the object, combining the histogram technique, image inversion, and the contour technique to reduce the influence of the environment when detecting an object’s contour [32,33].
Figure 2 shows 2D histograms of RGB images of the spindle and atomizer spare parts with the same pixel dimensions. Each histogram shows the distribution of blue and green pixels (A), green and red pixels (B), and blue and red pixels (C). The two spare parts have different distributions (32 × 32 bins) and intensities (0–255), and the 2D histogram information of each spare part, excluding the background, is stored in a Python list variable. This stored data is used as input for the back-projection function: given the spare-part class obtained from the learning model, the corresponding stored 2D histogram can be retrieved.
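A sketch of how such 2D histograms might be built and stored, assuming OpenCV’s calcHist over the blue-green channel pair with 32 × 32 bins; the file names, masking step, and normalisation are illustrative assumptions:

```python
# Hedged sketch: build a 2D colour histogram per spare-part class and keep
# them in a list indexed by class (e.g., index 0 = valve spindle).
import cv2

def spare_part_histogram(image_bgr, mask=None):
    """2D blue-green histogram (32 x 32 bins), normalised for back-projection."""
    hist = cv2.calcHist([image_bgr], [0, 1], mask, [32, 32], [0, 256, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

# Hypothetical reference photos, one per class.
histogram_data = [spare_part_histogram(cv2.imread(p))
                  for p in ["spindle.png", "spring.png", "atomizer.png"]]
```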

2.1.1. Image-Histogram

An image histogram is a graphical representation of the pixel-value distribution of the image.
The histogram of a grayscale image is obtained by counting the number of pixels at each grey level distributed in the image and expressing the counts as a bar graph. Each interval on the horizontal axis of the histogram is called a bin. A grayscale image is generally represented as a histogram with brightness bins ranging from 0 to 255. A colour image can be expressed as histograms with brightness bins of 0–255 for each of the three primary colours of light (red, green, and blue) [32,33].

2.1.2. Back-Projection Function

The back-projection function is provided by OpenCV to extract the region of an image that matches an input histogram. Using this function, if the bitwise_not operator is applied between the back-projected image and a background image used as a mask, only the part of the image corresponding to the object is extracted [33].
First, the stored histogram data described in Section 2.1 is passed to the cv2.calcBackProject function for back-projection. Next, the threshold function is applied to the back-projected image to remove ambiguous values that could be confused with the histogram data: it binarizes the pixels of the image, dividing the input into two classes based on a set value (the threshold) [33,34,35].
Subsequently, the necessary objects are extracted from the input image through a bitwise_not operation with the thresholded mask image. In this operation, the background surrounding the objects is changed to white (pixel value 255) by combining a white image with the thresholded image. If the background were left black (pixel value 0), the entire frame would be recognised as a single object during the contour step; keeping the background white allows each object to be recognised separately.
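A hedged sketch of this back-projection step, reusing the histogram_data list from the earlier sketch; the capture file name and the threshold of 50 are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("captured.png")              # hypothetical camera capture
hist = histogram_data[0]                      # stored 2D histogram (earlier sketch)
# Project the class histogram onto the image: bright where colours match.
back = cv2.calcBackProject([img], [0, 1], hist, [0, 256, 0, 256], 1)
# Remove ambiguous, low-probability matches (threshold value assumed).
_, mask = cv2.threshold(back, 50, 255, cv2.THRESH_BINARY)
# Keep object pixels and force the background to white (255), so the whole
# frame is not read as one object in the contour step.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
segmented = np.where(mask > 0, gray, 255).astype(np.uint8)
```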

2.1.3. Contour Function

The contour function connects adjacent pixels with similar values into curves, which helps recognise the objects in an image [36]. However, the contour technique has limitations. For example, if the object’s histogram contains many white pixels and the background is also white, the boundary between object and background becomes blurred. Therefore, in the back-projection step, an appropriate background should be chosen according to the histogram data of the object to be contoured.

2.1.4. Combined Image Processing Algorithm

The proposed image processing algorithm is illustrated in Figure 3.
The process is implemented in Python 3.7 with OpenCV library version 4.5.1.48. First, images of the ship’s spare parts are captured using a camera, and the trained model outputs a class for each spare part, which is stored in a result list. Next, using the pre-stored histogram data for the predicted class as input, the algorithm applies a process called ‘back-projection’ to convert the spare-part image (A) into image (B) with a white background by suppressing all pixels that do not match the histogram data.
Next, the contour function is applied to image (B) to draw the outermost contour of each object, as shown in image (C). Finally, applying the len() function to the list of contours obtained from image (C) gives the estimated number of objects.
A transfer-learning model based on a CNN was used to predict the class of the ship’s spare parts. A Convolutional Neural Network (CNN) is a type of ANN that uses convolution operations; because it can process multidimensional array data, it is specialised for inputs such as colour images. The CNN algorithm extracts and classifies features of the image data through several layers that perform convolution operations. Since the algorithm was first introduced by LeCun in 1995, CNNs have been in the limelight in image recognition [37]. In particular, significant advances have been made in image-feature recognition and classification using deep convolutional neural networks (DCNNs), which have more operations and layers, especially in computer vision [38,39,40].
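The whole pipeline might be tied together as in the following sketch, which assumes a Keras-style classifier and the histogram_data list from the earlier sketches; the thresholds and the 64 × 64 input size (described in Section 5) are assumptions:

```python
# End-to-end sketch of the combined algorithm in Figure 3: classify the
# part, back-project its stored histogram, contour the result, and count.
import cv2
import numpy as np

def count_spare_parts(img_bgr, model, histogram_data):
    # 1. Predict the spare-part class with the trained DCNN (Section 3).
    x = cv2.resize(img_bgr, (64, 64))[np.newaxis].astype("float32") / 255.0
    class_idx = int(np.argmax(model.predict(x)))
    # 2. Back-project the stored 2D histogram for that class.
    back = cv2.calcBackProject([img_bgr], [0, 1], histogram_data[class_idx],
                               [0, 256, 0, 256], 1)
    _, mask = cv2.threshold(back, 50, 255, cv2.THRESH_BINARY)
    # 3. White background, object pixels kept, then outer contours only.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    segmented = np.where(mask > 0, gray, 255).astype(np.uint8)
    _, binary = cv2.threshold(segmented, 250, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # 4. len() on the contour list gives the estimated part count.
    return class_idx, len(contours)
```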

3. Deep Convolutional Neural Networks (DCNN)

Figure 4 shows a simple structure of a DCNN.
Deep Convolutional Neural Networks (DCNNs) are a type of deep learning model used to tackle problems in image recognition, classification, and processing. A DCNN has a structure similar to a basic Artificial Neural Network (ANN) but includes specialised components, such as convolutional and pooling layers, that allow it to process image data more effectively. DCNNs comprise repetitions of convolution layers, pooling (subsampling) layers, and fully connected layers.
Figure 5 illustrates the process of convolutional layers and max pooling layers.
The convolution layer extracts features from an image. In this layer, filters called ‘kernels’ generate feature maps. Figure 5A illustrates an example of the convolution process [41]: input data of size 3 × 3 is convolved with a 2 × 2 identity (‘unit’) matrix kernel, producing features of the form [[14, 0], [16, 11]]. A feature map is formed by repeating this operation over the entire matrix.
The pooling layer down-samples the feature map produced by the convolution layer, reducing the number of operations and extracting feature vectors so that the model can learn effectively; that is, it reduces the number of parameters in the model. The two most common variants are max pooling, which keeps the largest value in each window, and average pooling, which keeps the mean; max pooling was used here for feature extraction [42]. Figure 5B illustrates the max-pooling process: a 2 × 2 window of the convolution output is extracted and its largest value selected, and repeating this over the entire matrix forms the pooled layer.
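A worked NumPy sketch of these two operations on a small array; the numbers are illustrative, not those of Figure 5:

```python
# A 2 x 2 kernel slides over a 3 x 3 input (stride 1, no padding), giving a
# 2 x 2 feature map; 2 x 2 max pooling then reduces it to a single value.
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
k = np.array([[1, 0],
              [0, 1]])            # 2 x 2 identity ("unit matrix") kernel

feature = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        # Element-wise product of the window with the kernel, summed.
        feature[i, j] = np.sum(x[i:i+2, j:j+2] * k)

print(feature)                    # [[ 6.  8.] [12. 14.]]
print(feature.max())              # max pooling over the 2 x 2 map -> 14.0
```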
Finally, the fully connected layer derives the prediction results by connecting the reduced-resolution feature maps produced by the convolution and pooling layers to a multilayer-perceptron layer [43].
In addition, a dropout layer can be added to avoid overfitting. Dropout is a training method that activates only part of the neural network during learning; it was developed to address the overfitting that can occur as networks grow more complex. For example, if the weight or bias of a particular neuron grows large during training, the learning of the other neurons may slow or fail. Adding a dropout layer reduces the influence of such neurons and helps avoid overfitting [44].
The feature map extracted through these processes is passed to the fully connected layer as a one-dimensional array through flattening. Finally, the image is classified through the multi-perceptron layer with learned weights and biases.

4. Transfer-Learning

Transfer learning is a technique for efficiently training machine learning and deep learning models by applying the knowledge of an already-trained model to a new, related problem. The main objectives of transfer learning are to reduce learning time, alleviate data scarcity, and improve model performance on new problems [45]. Transfer learning is useful when there is insufficient learning data for a new problem, when similar patterns exist between the new problem and the original problem, or when computational resources or learning time for the new problem are limited [46].
This technique is primarily used in deep learning fields such as image classification, natural language processing, and speech recognition. The transfer learning process consists of the following steps [45]: First, the model is trained on the original problem in the pre-training phase. In this process, large datasets are used to learn the model’s weights effectively. Second, in the fine-tuning phase, the weights of the pre-trained model are used as initial values, and data for the new problem is added to train the model further. In this process, the learning rate is lowered to fine-tune the weights, allowing the model to adapt to the new data. Third, the fine-tuned model is applied to the new problem to perform prediction and classification tasks in the application phase.
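A minimal Keras sketch of these three phases, assuming a ResNet-50 backbone and six spare-part classes; the layer choices and hyperparameters are illustrative, and the paper’s actual training was performed in MATLAB:

```python
# Phase 1: reuse ImageNet weights (pre-training done elsewhere).
# Phase 2: fine-tune on the new classes at a low learning rate.
# Phase 3: apply the fine-tuned model for prediction.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=(64, 64, 3))
base.trainable = False                      # freeze the pre-trained weights first
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(6, activation="softmax"),   # six spare-part classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # low rate for fine-tuning
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=12, batch_size=10)
```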
Prominent transfer-learning models include AlexNet, GoogLeNet, VGG, and ResNet. This paper considers high-performing models, VGG19 and ResNet-50, alongside SqueezeNet and ShuffleNet, which emphasise computational efficiency whilst maintaining performance. Using the Deep Network Designer application provided by MATLAB, we train and compare these models for ship spare-part prediction. The following section identifies the most suitable model for the ASSM application from this comparative analysis.

4.1. VGG19

VGG19 is a convolutional neural network (CNN) that performs extremely well in image classification tasks. It comprises 19 layers and extracts complex features through small filter sizes and deep stacking. Having been trained on the ImageNet dataset, the model is well suited to transfer learning. VGG19 requires large datasets and considerable computational resources for high accuracy, but its performance is outstanding [47].

4.2. ResNet-50

ResNet-50 is a convolutional neural network (CNN) widely used in image recognition tasks. It has 50 layers, including residual blocks that allow for the efficient training of deep neural networks. ResNet-50 achieves state-of-the-art performance on many benchmark datasets due to its ability to capture complex features through its deep architecture. The model was trained on the ImageNet dataset, making it well-suited for transfer learning. Despite its high computational requirements, ResNet-50 is known for its high accuracy in image classification tasks [48].

4.3. ShuffleNet

ShuffleNet is a lightweight CNN model that achieves high accuracy despite its small size and low computational cost. It reduces the number of parameters and resolves computational bottlenecks using techniques such as pointwise group convolutions and channel shuffling. The model is suitable for mobile devices and embedded systems and can be applied to object detection and tracking in autonomous systems such as cars and drones [49].

4.4. SqueezeNet

SqueezeNet is a CNN architecture designed for efficient model size and computational cost without sacrificing accuracy. It achieves this by combining 1 × 1 and 3 × 3 filters with a small number of channels. The network also includes a “squeeze” layer that reduces the number of input channels before the convolutional layer, further reducing the number of parameters. SqueezeNet can be trained on large datasets such as ImageNet and has been shown to perform similarly to larger models with far fewer parameters [50].

5. Ship Spare-Part Recognition Deep Learning Model

5.1. Dataset and Split

Learning data are required to train the transfer-learning predictive model; the spare parts of machines used on ships were selected for this purpose. Because an operating ship plant is a special environment far from the manufacturers’ repair service centres, shipowners must hold at least the legally designated number of spare parts to ensure the ship’s safety. This is called the ‘demanded quantity in law’.
The six spare parts of a ship’s internal combustion engines that are strictly managed on ships were selected as the dataset, as shown in Table 2.
The image data for each object were collected at a consistent location, the “spare warehouse”, as shown in Figure 6, for generalization verification; the experimental device was used for this purpose. A CMOS (Complementary Metal-Oxide-Semiconductor) OV7670 camera module collected RGB images in 640 × 480-pixel format, which were then converted to 64 × 64 pixels and stored. The images were captured at different angles on a white background, as they would also be used for histogram information extraction.
The size of each spare part in the image exceeds the minimum convolution kernel size of 9 pixels (3 × 3), and a total of 660 images were prepared by collecting 110 from each spare part.
Next, for model training and validation, the data were divided into training and validation sets in the ratio 80:20, as shown in Figure 7.
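A sketch of this split, assuming scikit-learn’s train_test_split (the paper’s split was performed in MATLAB); the placeholder arrays mirror the 660-image, six-class dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split

images = np.zeros((660, 64, 64, 3), dtype=np.uint8)  # placeholder for 660 photos
labels = np.repeat(np.arange(6), 110)                 # 110 images per class, indices 0-5

# Stratified 80:20 split keeps the per-class proportions intact.
X_train, X_val, y_train, y_val = train_test_split(
    images, labels, test_size=0.20, stratify=labels, random_state=0)
print(X_train.shape[0], X_val.shape[0])               # 528 132
```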

5.2. Selection of Ship Parts Recognition Model Based on Transfer Learning

This paper uses the representative transfer-learning models VGG19, ResNet-50, SqueezeNet, and ShuffleNet for training, carried out with the Deep Network Designer provided by MATLAB.

5.2.1. Hyperparameter Optimization

To increase the model’s prediction accuracy and avoid overfitting, it is essential to properly select hyperparameters such as the number of epochs, batch size, learning rate, and optimization algorithms.
The dataset is divided into two subsets, training and validation, at 80% and 20%, respectively: 80% of the images were used for network training at each stage and 20% for validation.
Because the number of samples for learning is 528 (660 × 0.8), the steps-per-epoch value and the batch size were set to maximum values of 52 and 10, respectively, in accordance with Equation (1). The epoch value was set to 12, the optimal value found over several tests [51].

$N_{\mathrm{sample}} \geq N_{\mathrm{steps\ per\ epoch}} \times N_{\mathrm{batch\ size}}$ (1)
The learning rate is a hyperparameter that controls the rate at which the network’s weights are updated during training. Setting it too high may cause the weights to update too quickly, resulting in unstable behaviour or overfitting; setting it too low may update the weights too slowly, slowing convergence or becoming stuck in suboptimal solutions. The optimal learning rate depends on the problem and can be determined through experimentation and monitoring of model performance. In transfer learning, the initial parameters are obtained through pre-training, and the learning rate is then tuned while fine-tuning the weights of the pre-trained model. In this study, to select the optimal transfer model, the initial learning rate was set to 0.0001, and a schedule that lowered the rate by a factor of 0.1 every two iterations was applied.
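A sketch of these settings, checking the Equation (1) constraint and implementing a step-decay schedule as a Keras callback; the callback form is an assumption, since the actual training ran in MATLAB’s Deep Network Designer:

```python
import tensorflow as tf

n_samples, batch_size = 528, 10
steps_per_epoch = n_samples // batch_size   # 52, so 52 x 10 <= 528 satisfies Eq. (1)

def step_decay(epoch, lr):
    # Drop the learning rate by a factor of 0.1 every two epochs.
    return lr * 0.1 if epoch > 0 and epoch % 2 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay)
# model.fit(X_train, y_train, epochs=12, batch_size=batch_size,
#           callbacks=[lr_callback])        # initial optimizer rate: 1e-4
```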
Optimization algorithms in deep learning are methods used to adjust the weights and biases of a model during training for better prediction performance and faster convergence. In the experiment, accuracies after training were compared to select the optimal optimization algorithm. The comparison group comprised SGD (Stochastic Gradient Descent), RMSProp (Root Mean Square Propagation), and Adam (Adaptive Moment Estimation).
SGD is one of the most basic optimization algorithms. Instead of calculating the parameters for the entire dataset, SGD calculates them for a randomly selected data subset, reducing the computational cost.
RMSProp addresses one of the issues with SGD: the uniform application of a learning rate, which causes parameters to be updated at an imbalanced ratio. RMSProp dynamically adjusts the learning rate to increase the stability of training, using an exponentially weighted moving average of previous squared gradients to give more weight to the most recent gradients and converge more quickly.
Adam is an optimization algorithm that combines the advantages of momentum-based SGD and RMSProp. It maintains an exponentially weighted moving average of past gradients together with the exponentially weighted moving average of squared gradients used by RMSProp, automatically adjusting the learning rate and helping to find optimal weights and biases.

5.2.2. Model Evaluation

The confusion matrix, computed to check how accurately the model predicted each object in the validation dataset, showed excellent accuracy for every object, as shown in Figure 8.
Accuracy is a metric determined by dividing the number of correct predictions by all observations. It is a good metric when the number of positive and negative samples in the dataset is balanced, and the cost of false positives and false negatives is similar. The formula for calculating accuracy is as follows:
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$
Accuracy is the most straightforward and accurate metric for evaluating model performance, but if the distribution of data between classes is imbalanced, accuracy can show misleading evaluation results. In such cases, various evaluation metrics, such as precision, recall, and F1-score, can be used to evaluate the model’s performance.
Classification models produce instances of TP, TN, FP, and FN: TP and TN denote the numbers of true positives and true negatives, respectively, whereas FP and FN denote the numbers of false positives and false negatives. Accuracy is appropriate when the dataset is balanced and the proportions of TP and TN are high; precision is useful when the proportion of FP is high, and recall when the proportion of FN is high. The F1-score, the harmonic mean of precision and recall, is used when both FP and FN are essential.
Figure 8 shows the distribution of each instance. The present dataset has a constant 110 samples per spare-part class. Since the proportions of correctly classified TP and TN are high and balanced, accuracy is more informative here than the other evaluation metrics. Furthermore, as model size increases, the computational resources required for training grow, slowing training and potentially degrading accuracy. Therefore, to select a model with optimized memory usage, training speed was also considered.
Accuracy loss is a metric that represents the difference between the accuracy value and 1. It is a measure used to quantify the difference between the predicted values of a model and the actual values and is calculated as a value between 0 and 1.
$\mathrm{Accuracy\ Loss} = 1 - \mathrm{Accuracy}$
The lower the accuracy loss, the more accurate the model’s predictions, and the higher the accuracy loss, the more inaccurate the model’s predictions.
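The metrics above can be computed directly from confusion-matrix counts, as in this short sketch (the counts in the example call are illustrative, not from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, accuracy loss, precision, recall, and F1 from raw counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "accuracy_loss": 1 - accuracy,
            "precision": precision, "recall": recall, "f1": f1}

print(classification_metrics(tp=105, tn=520, fp=5, fn=5))  # illustrative counts
```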

5.3. Learning and Results

This section presents the training of the ship spare-parts recognition model using transfer learning. Table 3 shows the results of training with three different optimization algorithms, and Figure 9 illustrates the training processes of the four transfer-learning models with the SGDm algorithm, which gave the fastest training times. With SGDm, training accuracy ranged from 100% for VGG-19 down to 95.4% for SqueezeNet, and training time differed by roughly a factor of 14 between the fastest model (SqueezeNet) and the slowest (VGG-19).
While ResNet-50 and VGG-19 appear suitable in terms of accuracy, SqueezeNet and ShuffleNet, with their faster training speeds, are more appropriate for embedded devices such as the Raspberry Pi.
Comparing the training processes of (C) and (D) in Figure 9 shows that SqueezeNet overfitted more than ShuffleNet. When overfitting occurs, a model may perform well on the training data but show lower stability and accuracy on test or new data. Therefore, ShuffleNet, which showed balanced training speed, accuracy, and stability, was adopted as the learning model for the embedded device.
The model trained in MATLAB is saved in MAT format; it must then be converted to a TensorFlow-compatible format (HDF5, .h5) and finally stored on the Raspberry Pi integrated into the automated ship spare-part management (ASSM) device.
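Once converted, the model might be loaded and queried on the Raspberry Pi as in the following sketch; the file names are hypothetical:

```python
# Load the converted ShuffleNet model and classify one captured frame.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("shufflenet_spares.h5")  # hypothetical path
frame = cv2.imread("capture_001.png")                       # hypothetical capture
x = cv2.resize(frame, (64, 64))[np.newaxis].astype("float32") / 255.0
class_idx = int(np.argmax(model.predict(x)))
print("Predicted spare-part class index:", class_idx)
```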

6. Proposal of Automated Ship Spare-Part Management Device

The proposed device works as shown in Figure 10.
The laptop and the controller of the automated ship spare-part management (ASSM) device communicate remotely, and the device itself is placed in the warehouse where the spare parts are stocked. So that the device can follow a designated path, a path with a black outline was marked on a white background, and black horizontal lines were marked across the path to indicate the points where objects are placed.
The spare parts to be recognised are placed in front of the warehouse wall, where no other objects obstruct the image, at the positions towards which the device’s camera is turned and pointed.

6.1. Experiment Equipment Description

The hardware circuit diagram of the automated ship spare-part management (ASSM) is shown in Figure 11.
Adeept’s model ‘ADR019’ was used as the main frame of the experimental device. The hardware comprises four main parts: the central unit, the input unit, the output unit, and the driving unit. The input unit has three IR line-detecting sensors to detect the black path and horizontal lines, plus a camera module to collect images of the spare parts.
The central unit receives the binary signals of the IR sensors and sends commands to the steering servo motor in the output unit. It receives the stock images from the camera module and stores them on the central unit’s storage disk. It also decides between forward and stop according to the internal variable ‘CAR STATE’ and signals the drive motor in the output unit accordingly.
Table 3 lists the hardware configurations of the ASSM device.

6.2. Algorithm Description

Figure 12 shows the flowchart for the ASSM device.
A ‘GO’ signal is sent to the experimental device over the SSH protocol using a Wi-Fi module to start the experiment. When the device’s ‘CAR STATE’ changes to ‘GO’, it moves in a straight line along the path, marked with two black lines on the warehouse floor. During movement, when a horizontal line indicating a spare-part location is detected, ‘CAR STATE’ becomes ‘Capture stop’ and the driving motor stops. The device turns the camera 90° towards the spare part, captures a photo, and saves it as a PNG file at the designated path. If fewer than three photos have been stored, ‘CAR STATE’ returns to ‘GO’ and the process repeats. Once three spare-part photos have been collected, the driving motor stops and ‘CAR STATE’ changes to ‘STOP’.
When ‘CAR STATE’ is ‘STOP’, the class of each spare part is identified by feeding the saved photos into the pre-trained DCNN model. Then, based on the stored histogram data corresponding to the predicted class, the number of spare parts is estimated through image processing using the contour function.
Finally, the class and number of the predicted spare parts are displayed through the Tkinter GUI, a Python library.
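A hedged sketch of this control loop as a state machine; the sensor, motor, and camera helpers are hypothetical stand-ins for the chassis API, not the paper’s actual code:

```python
# Figure 12 control loop: a CAR STATE variable drives the line-following,
# capture, and stop phases; the collected photos feed the DCNN + contour step.
import time

CAPTURE_TARGET = 3

def run_inventory(read_ir, drive, steer, capture_photo):
    state, photos = "GO", []
    while state != "STOP":
        left, mid, right = read_ir()            # three line-detecting IR sensors
        if state == "GO":
            steer(left, right)                  # keep the black path centred
            drive(forward=True)
            if mid:                             # horizontal line: spare-part location
                state = "CAPTURE_STOP"
        elif state == "CAPTURE_STOP":
            drive(forward=False)                # stop the driving motor
            photos.append(capture_photo())      # turn camera 90 deg, save a PNG
            state = "STOP" if len(photos) == CAPTURE_TARGET else "GO"
        time.sleep(0.05)
    return photos
```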

6.3. Device Validation

The process for verification of the device is illustrated in Figure 13.
Three black lines were marked across the path so that the device could identify the locations of the spare parts. In a representative experiment, spindles and springs were selected as the spare parts to be recognised.
By detecting the first and second objects, both the same spare part (the ‘Spindle’), it was verified that the device could recognise a given object and count multiple similar objects. By then identifying the next object, a ‘Spring’, it was verified that the device could recognise up to three objects. The experiment was conducted from a laptop connected remotely to the Raspberry Pi over SSH. The device traced the path to its end while following the black line with its three IR sensors; whenever it encountered a horizontal black line across the path, it paused to collect images. After collecting the image of the third spare part, the device stopped, and the spare-part stock result was displayed by the Python Tkinter GUI on the laptop, as shown in Figure 14.
The spare-part class was predicted accurately, and the number of spare parts in each class was identified correctly. Across seven experiments, the recognition accuracy for the type and number of spares was as shown in Table 4.

7. Conclusions and Perspectives

In this study, a remote device for automating the management of ship spare parts was proposed. Six objects were selected as ship spare parts, with 110 images collected for each. The model for recognising the spare-part class was trained and compared across four transfer-learning models with verified performance (ResNet-50, VGG-19, ShuffleNet, SqueezeNet), and ShuffleNet was selected as the most suitable for embedded devices based on comparisons of accuracy and training speed.
The number of objects was identified using an algorithm combining OpenCV’s back-projection and contour functions, with good results. The device experiment was conducted over the Wi-Fi network of the ship’s internal Ethernet in an actual shipboard environment, and an experiment conducted over an Internet connection from an external network into the ship’s internal network was also successful.
The proposed automated ship spare-part management device is expected to enable shipping companies to identify each ship’s spare parts accurately, reduce unnecessary stock for economic benefit, and secure the ship’s safety by maintaining the quantities demanded by law. In addition, it can contribute significantly to the development of autonomous ships by reducing the manpower required on board.
Relatively large spare parts were used in this experiment; for tiny spare parts such as O-rings, image recognition alone may be limited. In future work, we therefore plan to extend the device with a weight-measurement-based spare-part management module so that even such tiny spare parts can be identified.

Author Contributions

Conceptualization, C.-M.L. and H.-J.J.; Methodology, C.-M.L.; Software, C.-M.L.; Validation, C.-M.L.; Investigation, H.-J.J.; Resources, C.-M.L.; Data curation, H.-J.J.; Writing—original draft, C.-M.L.; Writing—review & editing, C.-M.L.; Visualization, H.-J.J.; Supervision, B.-G.J.; Project administration, B.-G.J.; Funding acquisition, B.-G.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Veal, R. Maritime Autonomous Surface Ships: Autonomy, manning and the IMO. Shipp. Trade Law 2018, 18, 1–4. [Google Scholar]
  2. Wang, R.; Miao, K.; Sun, J. Intelligent recognition method of infrared imaging target of unmanned autonomous ship based on fuzzy mathematical model. J. Intell. Fuzzy Syst. 2020, 38, 3981–3989. [Google Scholar] [CrossRef]
  3. Perera, L.P. Autonomous ship navigation under deep learning and the challenges in COLREGs. In Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering, American Society of Mechanical Engineers, Madrid, Spain, 25 September 2018; p. V11BT12A005. [Google Scholar]
  4. Escario, J.B.; Jimenez, J.F.; Giron-Sierra, J.M. Optimization of autonomous ship maneuvers applying swarm intelligence. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 22 November 2010; pp. 2603–2610. [Google Scholar]
  5. Hasanspahić, N.; Vujičić, S.; Frančić, V.; Čampara, L. The role of the human factor in marine accidents. J. Mar. Sci. Eng. 2021, 9, 261. [Google Scholar] [CrossRef]
  6. Rothblum, A.M. Human error and marine safety. In Proceedings of the National Safety Council Congress and Expo, Orlando, FL, USA, 16–21 September 2000. [Google Scholar]
  7. Kim, H.; Na, S.; Ha, W. A case study of marine accident investigation and analysis with focus on human error. J. Ergon. Soc. Korea 2011, 30, 137–150. [Google Scholar] [CrossRef]
  8. Höyhtyä, M.; Martio, J. Integrated satellite-terrestrial connectivity for autonomous ships: Survey and future research directions. Remote Sens. 2020, 12, 2507. [Google Scholar] [CrossRef]
  9. Höyhtyä, M.; Huusko, J.; Kiviranta, M.; Solberg, K.; Rokka, J. Connectivity for autonomous ships: Architecture, use cases, and research challenges. In Proceedings of the 2017 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 14 December 2017; pp. 345–350. [Google Scholar]
  10. Kennard, A.; Zhang, P.; Rajagopal, S. Technology and training: How will deck officers transition to operating autonomous and remote-controlled vessels? Mar. Policy 2022, 146, 105326. [Google Scholar] [CrossRef]
  11. Huang, Y.; Chen, L.; Negenborn, R.R.; Van Gelder, P. A ship collision avoidance system for human-machine cooperation during collision avoidance. Ocean Eng. 2020, 217, 107913. [Google Scholar] [CrossRef]
  12. Mouschoutzi, M.; Ponis, S.T. A comprehensive literature review on spare parts logistics management in the maritime industry. Asian J. Shipp. Logist. 2022, 38, 71–83. [Google Scholar] [CrossRef]
  13. Rustenburg, W.D.; van Houtum, G.; Zijm, W. Spare parts management at complex technology-based organizations: An agenda for research. Int. J. Prod. Econ. 2001, 71, 177–193. [Google Scholar] [CrossRef]
  14. Pahl, J. Maritime Spare Parts Management: Current State-of-the-Art. In Proceedings of the 55th Hawaii International Conference on System Sciences, Maui, HI, USA, 4 January 2022. [Google Scholar]
  15. Zhou, B.; Fan, S.; Li, D. Research on ship spare parts inventory based on selective maintenance. In Proceedings of the 2010 2nd International Workshop on Intelligent Systems and Applications, Wuhan, China, 22–23 May 2010; pp. 1–5. [Google Scholar]
  16. Bhalla, S.; Alfnes, E.; Hvolby, H.; Sgarbossa, F. Advances in Spare Parts Classification and Forecasting for Inventory Control: A Literature Review. IFAC-Pap. 2021, 54, 982–987. [Google Scholar] [CrossRef]
  17. Johansen, S.G. The Markov model for base-stock control of an inventory system with Poisson demand, non-crossing lead times and lost sales. Int. J. Prod. Econ. 2021, 231, 107913. [Google Scholar] [CrossRef]
  18. Alyahya, S.; Wang, Q.; Bennett, N. Application and integration of an RFID-enabled warehousing management system—A feasibility study. J. Ind. Inf. Integr. 2016, 4, 15–25. [Google Scholar] [CrossRef]
  19. Chow, H.K.H.; Choy, K.L.; Lee, W.B.; Lau, K.C. Design of a RFID case-based resource management system for warehouse operations. Expert Syst. Appl. 2006, 30, 561–576. [Google Scholar] [CrossRef]
  20. Doss, R.; Trujillo-Rasua, R.; Piramuthu, S. Secure attribute-based search in RFID-based inventory control systems. Decis. Support Syst. 2020, 132, 113270. [Google Scholar] [CrossRef]
  21. Nemeshaev, S.; Fatkullina, A. Predictive analytics of the state of computer devices in the inventory system. Procedia Comput. Sci. 2021, 190, 647–650. [Google Scholar] [CrossRef]
  22. Bose, R.; Mondal, H.; Sarkar, I.; Roy, S. Design of smart inventory management system for construction sector based on IoT and cloud computing. E-Prime Adv. Electr. Eng. Electron. Energy 2022, 2, 100051. [Google Scholar]
  23. Khan, S.; Akram, A.; Usman, N. Real time automatic attendance system for face recognition using face API and OpenCV. Wirel. Pers. Commun. 2020, 113, 469–480. [Google Scholar] [CrossRef]
  24. Zhu, Z.; Cheng, Y. Application of attitude tracking algorithm for face recognition based on OpenCV in the intelligent door lock. Comput. Commun. 2020, 154, 390–397. [Google Scholar] [CrossRef]
  25. Xue, Y.; Ju, Z.; Li, Y.; Zhang, W. MAF-YOLO: Multi-modal attention fusion based YOLO for pedestrian detection. Infrared Phys. Technol. 2021, 118, 103906. [Google Scholar] [CrossRef]
  26. Zhaoxin, G.; Han, L.; Zhijiang, Z.; Libo, P. Design a Robot System for Tomato Picking Based on YOLO v5. IFAC-Pap. 2022, 55, 166–171. [Google Scholar] [CrossRef]
  27. Zou, Z.; Wu, Q.; Zhang, Y.; Wen, K. Design of Smart Car Control System for Gesture Recognition Based on Arduino. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 5 February 2021; pp. 695–699. [Google Scholar]
  28. Li, Z.F.; Li, J.T.; Li, X.F.; Yang, Y.J.; Xiao, J.; Xu, B.W. Intelligent Tracking Obstacle Avoidance Wheel Robot Based on Arduino. Procedia Comput. Sci. 2020, 166, 274–278. [Google Scholar] [CrossRef]
  29. Poda, X.; Qirici, O. Shape Detection and Classification Using OpenCV and Arduino Uno. RTA-CSIT 2018, 2280, 128–136. [Google Scholar]
  30. Zamir, M.; Ali, N.; Naseem, A.; Ahmed Frasteen, A.; Zafar, B.; Assam, M.; Othman, M.; Attia, E. Face Detection & Recognition from Images & Videos Based on CNN & Raspberry Pi. Computation 2022, 10, 148. [Google Scholar]
  31. Yang, K.; Yang, T.; Yao, Y.; Fan, S. A transfer learning-based convolutional neural network and its novel application in ship spare-parts classification. Ocean Coast Manag. 2021, 215, 105971. [Google Scholar] [CrossRef]
  32. Han, J.; Yang, S.; Lee, B. A novel 3-D color histogram equalization method with uniform 1-D gray scale histogram. IEEE Trans. Image Process. 2010, 20, 506–512. [Google Scholar] [CrossRef]
  33. Lee, J.; Lee, W.; Jeong, D. Object tracking method using back-projection of multiple color histogram models. In Proceedings of the 2003 IEEE International Symposium on Circuits and Systems (ISCAS), Bangkok, Thailand, 25 June 2003; p. II. [Google Scholar]
  34. Wang, N.; Zha, W.; Li, J.; Gao, X. Back projection: An effective postprocessing method for GAN-based face sketch synthesis. Pattern Recognit. Lett. 2018, 107, 59–65. [Google Scholar] [CrossRef]
  35. Tam, K.C.; Lauritsch, G.; Sourbelle, K. Filtering point spread function in backprojection cone-beam CT and its applications in long object imaging. Phys. Med. Biol. 2002, 47, 2685. [Google Scholar] [CrossRef]
  36. Gurav, R.M.; Kadbe, P.K. Real time finger tracking and contour detection for gesture recognition using OpenCV. In Proceedings of the 2015 International Conference on Industrial Instrumentation and Control (ICIC), Pune, India, 9 July 2015; pp. 974–977. [Google Scholar]
  37. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks; MIT Press: Cambridge, MA, USA, 1995; pp. 33–61. [Google Scholar]
  38. Chauhan, R.; Ghanshala, K.K.; Joshi, R.C. Convolutional neural network (CNN) for image detection and recognition. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018; pp. 278–282. [Google Scholar]
  39. Shad, H.S.; Rizvee, M.; Roza, N.T.; Hoq, S.M.; Monirujjaman Khan, M.; Singh, A.; Zaguia, A.; Bourouis, S. Comparative analysis of deepfake image detection method using convolutional neural network. Comput. Intell. Neurosci. 2021, 2021, 3111676. [Google Scholar] [CrossRef]
  40. Karam, A.; Embaby, M.; El-Kady, H.; Abdel-Hafeez, S.; Nabil, G.; Mohammed, A. Applying convolutional neural networks for image detection. In Proceedings of the 2019 International Conference on Smart Applications, Communications and Networking (SmartNets), Sharm El Sheikh, Egypt, 20 April 2020; pp. 1–8. [Google Scholar]
  41. Lee, H.; Song, J. Introduction to convolutional neural network using Keras; an understanding from a statistician. Commun. Stat. Appl. Methods 2019, 26, 591–610. [Google Scholar] [CrossRef]
  42. Nagi, J.; Ducatelle, F.; Di Caro, G.A.; Cireşan, D.; Meier, U.; Giusti, A.; Nagi, F.; Schmidhuber, J.; Gambardella, L.M. Max-pooling convolutional neural networks for vision-based hand gesture recognition. In Proceedings of the 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 2 February 2012; pp. 342–347. [Google Scholar]
  43. Basha, S.S.; Dubey, S.R.; Pulabaigari, V.; Mukherjee, S. Impact of fully connected layers on performance of convolutional neural networks for image classification. Neurocomputing 2020, 378, 112–119. [Google Scholar] [CrossRef]
  44. Wu, H.; Gu, X. Max-pooling dropout for regularization of convolutional neural networks. In Neural Information Processing, Proceedings of the 22nd International Conference, ICONIP 2015, Istanbul, Turkey, 9–12 November 2015; Springer: Berlin/Heidelberg, Germany, 2015; Part I; pp. 46–54. [Google Scholar]
  45. Akçay, S.; Kundegorski, M.E.; Devereux, M.; Breckon, T.P. Transfer learning using convolutional neural networks for object classification within X-ray baggage security imagery. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 19 August 2016; pp. 1057–1061. [Google Scholar]
  46. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  47. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  48. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  49. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848–6856. [Google Scholar]
  50. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  51. Ajayi, O.G.; Ashi, J. Effect of varying training epochs of a Faster Region-Based Convolutional Neural Network on the Accuracy of an Automatic Weed Classification Scheme. Smart Agric. Technol. 2023, 3, 100128. [Google Scholar] [CrossRef]
Figure 1. Result of contour detection processing.
Figure 2. 2D histogram of the spare-part image. (A) Distribution of blue and green pixels; (B) distribution of green and red pixels; (C) distribution of blue and red pixels.
Figure 3. Proposed image processing algorithm. (A) Original spare-part image; (B) back-projected image; (C) contoured image.
Figure 4. Deep convolutional neural networks.
Figure 5. (A) Convolution process; (B) max-pooling process.
Figure 6. Process of image data collection.
Figure 7. Visualisation of data distribution.
Figure 8. Confusion matrix of the transfer-learning model.
Figure 9. Training processes: (A) ResNet-50; (B) VGG-19; (C) ShuffleNet; (D) SqueezeNet.
Figure 10. Experiment progress of the proposed device.
Figure 11. Hardware circuit diagram for the ASSM.
Figure 12. Flowchart for the ASSM device.
Figure 13. (A) The automated ship spare-part management device; (B) the road for the experiment; (C) the whole experiment scene.
Figure 14. Results of an experiment for device validation.
Table 1. Comparison between the Arduino Uno R3 and the Raspberry Pi 4B.

Product             Arduino Uno R3    Raspberry Pi 4B
Processor           ATmega328P        Broadcom BCM2711 SoC
Clock speed         16 MHz            1.5 GHz
Register width      8-bit             64-bit
RAM                 2 KB              2 GB/4 GB/8 GB
Operating system    None              Linux & others
GPU                 None              Broadcom VideoCore IV MP2 400 MHz
Table 2. Dataset for ship spare parts.

Object                Index   Demanded Quantity in Law   Dimensions (Pixels)
Valve Spindle         0       2                          101.1
Valve Spring          1       2                          81
Nozzle Atomizer       2       6                          73
Valve Seat            3       2                          68.5
Main Bearing          4       1                          153.2
F.O. Injection Pump   5       1                          231
Table 3. Training results for the three optimization algorithms.

Transfer-Learning Model   Optimizer   Accuracy (%)   Train Time (s)
ResNet-50                 SGDm        99.8           544
ResNet-50                 RMSProp     99.2           602
ResNet-50                 Adam        98.6           495
VGG-19                    SGDm        100            1260
VGG-19                    RMSProp     100            1864
VGG-19                    Adam        100            1719
ShuffleNet                SGDm        97.4           128
ShuffleNet                RMSProp     98.2           158
ShuffleNet                Adam        98.1           146
SqueezeNet                SGDm        95.4           89
SqueezeNet                RMSProp     92.2           110
SqueezeNet                Adam        92.1           102
Table 4. Accuracy experiment of the ASSM.

Experiment   Spare 1 (Type / Number)   Spare 2 (Type / Number)   Spare 3 (Type / Number)
1            Spindle (C) / 1 (C)       Spindle (C) / 2 (C)       Spring (C) / 3 (C)
2            Spindle (C) / 2 (C)       Spring (C) / 3 (C)        Spindle (C) / 1 (C)
3            F.O. pump (C) / 1 (C)     Spring (C) / 2 (C)        Atomizer (C) / 3 (C)
4            Atomizer (C) / 2 (C)      Seat (C) / 1 (C)          Bearing (C) / 2 (C)
5            Seat (C) / 1 (C)          Bearing (C) / 2 (C)       Atomizer (C) / 1 (C)
6            Bearing (C) / 1 (C)       Seat (C) / 3 (C)          Spring (C) / 2 (C)
7            Spring (C) / 1 (C)        Bearing (C) / 2 (C)       Seat (C) / 2 (C)

Spare type accuracy: 100% (21/21). Spare number accuracy: 100% (21/21).
(C): Correct.