Article

Automatic Reading and Reporting Weather Information from Surface Fax Charts for Ships Sailing in Actual Northern Pacific and Atlantic Oceans

1 Navigation College, Dalian Maritime University, Dalian 116026, China
2 School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(11), 2096; https://doi.org/10.3390/jmse12112096
Submission received: 28 September 2024 / Revised: 8 November 2024 / Accepted: 9 November 2024 / Published: 19 November 2024
(This article belongs to the Special Issue Ship Performance in Actual Seas)

Abstract

This study aims to improve the intelligence level, efficiency, and accuracy of ship safety and security systems by contributing to the development of marine weather forecasting. Accurate and prompt recognition of weather fax charts is very important for navigation safety. This study employed several artificial intelligence (AI) methods, including a vectorization approach and target recognition algorithms, to automatically detect severe weather information on Japanese and US weather charts. This enabled the expansion of an existing auto-response marine forecasting system's applications toward the North Pacific and Atlantic Oceans, thus enhancing decision-making capabilities and response measures for ships sailing in actual seas. OpenCV image processing and the YOLOv5s/YOLOv8n algorithms were utilized to perform template matching and to locate warning symbols and weather reports on surface weather charts. After these improvements, the average accuracy of the model increased significantly from 0.920 to 0.928, and the detection time for a single image reached a minimum of 1.2 ms. Additionally, OCR technology was applied to extract text from weather reports and to highlight the marine areas where dense fog and great wind conditions are likely to occur. Finally, field tests confirmed that this automatic and intelligent system can assist the navigator within 2–3 min, thus greatly enhancing navigation safety in specific areas along sailing routes with minor text-based communication costs.

1. Introduction

Accurate and prompt marine weather forecasting guarantees the safety of ships sailing at sea. Nowadays, there are two major types of marine weather forecasts: weather charts and digital weather model products.
Weather charts have a long history and extensive use in shipping and various other industries. As per the International Maritime Organization's regulations, ocean-going vessels are obligated to install ship meteorological fax machines to receive weather charts [1]. These charts provide timely insights into weather changes. They encompass meteorological data such as weather component contours, the centers and movements of high- and low-pressure systems, warning symbols, and more. By utilizing these weather charts, senior officers on sailing vessels can anticipate potential weather conditions in their navigation regions, thereby ensuring the safety of their navigation. Although color-shade weather charts are usually more noticeable than contour-based ones, they cannot be easily delivered via fax machine; in practice, therefore, ship navigators must still learn to recognize the weather-related symbols on contour-based weather fax charts.
In the northwestern low- and mid-latitude Pacific Ocean, authorized by the World Meteorological Organization and the International Maritime Organization, the Japanese Meteorological Agency (JMA) broadcasts widely applied surface weather fax charts with warning symbols such as dense fog warnings (depicted as FOG[W]) and great wind warnings (depicted as GW, SW, TW) [2]. Moreover, detailed weather briefings are provided in the vacant spaces to address significant low-pressure weather systems that are prone to wind velocities of 30 knots or higher, as marked in Figure 1a. These weather briefings encompass information on tropical cyclones or frontal cyclones, which is of tremendous importance for guaranteeing the safe navigation of ocean-going vessels and for making prudent route selections. Other busy transoceanic trade routes cross the mid-latitude northeastern Pacific and northern Atlantic oceans, where the weather fax charts are dominantly issued by the US National Weather Service and on which great wind warning symbols and wind barbs are labeled with rectangles and arrows, respectively (Figure 1b and Table 1). As elsewhere in the world, all weather fax charts are generated by official meteorological institutes in graphic form only, without providing the raw data.
Training navigator seafarers to recognize the weather warning symbols takes considerable time. However, in most cases, what navigators directly need are the surface wind speed and its trend at specific locations along the sailing route. As numerical weather models' resolution has recently been refined to finer than 50 km, it is now possible for marine forecasting sectors to provide weather forecasts at specific points.
In 2013, Jian et al. [3] designed a simple non-intelligent maritime fixed-point wind forecast automatic response system for pilots. It was originally set up to apply the raw numerical weather model prediction data directly, just as many marine weather providers do today. Later, the system was developed to intelligently read publicly issued color-shade forecasting graphics and convert pixel information into point-based forecasts (Jian et al. [4]). The automatic system periodically collects the latest weather and marine forecast charts from professional meteorological agencies worldwide, employs OpenCV (Open Source Computer Vision Library) for meteorological information recognition, retrieves changes in wind speed and wave height at desired fixed points on the sea surface, and simultaneously provides multiple sources of information to ships, assisting senior pilots with comprehensive information. Table 2 compares our previous work, examples of similar services, and the proposed work.
Meanwhile, with the rapid development of intelligent technology and the wide application of computer vision, it has become increasingly popular to use image processing technology to track targets in real time. Efforts have been made to retrieve information from weather charts. In 2016, Zhang et al. [5] extracted warning lines from weather charts through a simple analysis method. Later, deep-learning methods were applied to rapidly detect and extract the pressure contours from similar weather charts [6,7].
There are two main types of deep-learning target detection algorithms. One is the two-stage algorithm represented by Fast R-CNN (Region-based Convolutional Neural Network) and Faster R-CNN [8,9]. The other is the one-stage algorithm [10,11,12] represented by the SSD (Single Shot MultiBox Detector) [13] and YOLO (You Only Look Once) [14]. The former extracts regions of interest in the first stage, then combines backbone features for accurate target positioning and category determination in the second stage. The latter conducts target detection directly in the feature extraction layer. Li et al. [15] proposed a new cross-line recognition method, a new vector product algorithm, and a variety of triangle extraction methods, which effectively identified the mid-level geopotential height lines, equal potential vorticity lines, and front lines on weather fax charts. Wang [16] proposed a method to judge the type of a fax chart by identifying its title. Wei et al. [17] utilized an early version of YOLO for meteorological image identification.
The practicability of deep learning in weather prediction and target detection has attracted increasing attention [18,19,20], but it has not been widely used in the field of weather fax chart detection. Based on OpenCV recognition and YOLO deep-learning extraction of wind warnings and vectors from various surface fax charts, this paper realizes batch image processing and improves the timeliness and accuracy of meteorological services, providing a reference for the correct use of weather fax charts both for ships navigating in the region and for workers engaged in marine meteorological forecasting.
Given that the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW) compulsorily requires the corresponding seafarers to receive and analyze weather fax charts [1], this study went further to intelligently read critical weather information from fax charts and report it to the seafarers who need it. The proposed research greatly expands the application range of the system, elevates the level of intelligence in existing ship safety and security, and provides a theoretical foundation for further work. In addition, it can be integrated into other intelligent systems, such as risk-reduction decision support or the AIS maritime network, for broader use.

Main Contribution

With new deep-learning methods and other intelligent technologies, we are able to detect most great wind and fog warning messages; with the help of a previously developed auto-response system, we can deliver severe weather information to specific ships via an email module.
This study is an innovative effort to introduce more AI methods into the actual shipping navigation industry. Specifically, it is among the first intelligent applications to identify wind barbs from US weather charts and weather warnings and reports from JMA charts, which is conducive to the large-scale, intelligent, fast, and accurate detection of wind speed information in weather fax charts.
This paper is organized as follows: Section 2 provides the weather fax chart data sources and the relevant methodology; Section 3 discusses the recognition of weather warnings from surface charts; Section 4 applies the findings of Section 3 to a previous automatic warning system; and Section 5 draws the conclusions.

2. Data and Methods

2.1. Surface Weather Fax Charts from US and JMA

In this study, 2236 US surface forecast weather fax charts covering 20° N–60° N, 140° E–120° W, released on the website of the Ocean Prediction Center of the National Weather Service (ocean.weather.gov, accessed on 25 August 2023), were collected and saved in JPG format. Target detection algorithms based on deep learning require a large number of labeled samples to achieve good performance. The data set was therefore expanded by supervised random erasing, flipping, and rotation, which increased its size and sample diversity and reduced overfitting during model training, and was divided at a ratio of 9:1. Finally, the training set contained 2013 images, the verification set 223 images, and the test set 200 images.
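As a rough sketch, the expansion step above might look like the following, where the erasing patch size, white fill, flip probability, and rotation by right angles are illustrative choices, not the study's exact settings:

```python
import numpy as np

def augment(img, rng):
    """Illustrative augmentation: random erasing, flipping, and rotation.

    Rotation is restricted to multiples of 90 degrees here for simplicity;
    the erased patch is filled with white, matching a fax chart background.
    """
    out = img.copy()
    h, w = out.shape[:2]

    # Random erasing: blank out a small rectangle.
    eh, ew = h // 8, w // 8
    y = rng.integers(0, h - eh)
    x = rng.integers(0, w - ew)
    out[y:y + eh, x:x + ew] = 255

    # Random horizontal flip.
    if rng.random() < 0.5:
        out = np.fliplr(out)

    # Random rotation by 0, 90, 180, or 270 degrees.
    return np.rot90(out, k=int(rng.integers(0, 4)))
```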
JMA updated the surface synoptic weather chart format in 2016. Since then, the positions of longitude, latitude, and coastline have remained fixed. By superimposing and averaging over 2800 charts (e.g., Figure 2a), an image referred to as the base image can be achieved (Figure 2b). To enhance image clarity and accuracy, the base image is processed using Otsu’s binarization algorithm to eliminate shadows. The resulting binarized base image is displayed in Figure 2c, with all weather information removed. The pure binarized weather information then can be obtained by calculating the difference between the binarized original image and the base image (Figure 2d).

2.2. Recognition of Weather Briefs by OpenCV and OCR Methods

Optical Character Recognition (OCR), an AI computer technology, identifies text information within images acquired through scanning and other optical input methods. OpenCV, another AI technology, is an open-source computer vision and machine learning library that offers a broad range of commonly used image processing functions, enabling the swift implementation of image processing and recognition tasks. Currently, technologies such as OpenCV image processing, OCR text recognition, and natural language processing are employed extensively in various fields. Numerous OCR tools are available on the market, among which Baidu AI provides mature OCR services, including a general scene text recognition program that delivers high-precision text detection and recognition for complete images.

2.3. Flow Chart of the Automatic Warning System

The study involves the automatic detection and recognition of request emails sent from sea to the Intelligent Response System for Offshore Fixed-Point Wind and Wave Forecasting. The latitude and longitude coordinates are extracted from these emails. Additionally, the latest surface Japan weather chart, obtained through automated crawling of the JMA website, is processed to remove the base image and obtain the desired weather information. Within the resulting image, specific character positions are identified using template matching. A screenshot of the weather bulletin is taken, and the text recognition angle is adjusted. Subsequently, the latitude and longitude coordinates mentioned in the bulletin are converted to pixel coordinates, after which the corresponding color is applied to the area specified by those coordinates. The process of recognizing wind, storm, and fog warning characters is largely similar to the weather bulletin recognition process, omitting the steps involving dotted lines. The newly acquired weather forecast map is then subjected to image processing using OpenCV. This enables the automatic recognition of gale and fog warning information at specific points, thereby providing automatic feedback to sailing ships. Refer to Figure 3 for the flow chart for JMA fax charts. A similar flow chart was also built for US fax charts.
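The latitude/longitude coordinates must be converted to pixel coordinates before coloring. A minimal sketch, assuming a simple linear graticule; the calibration constants are hypothetical and would be measured from the fixed graticule of the chart, whose real projection may require a non-linear mapping:

```python
def lonlat_to_pixel(lon, lat, lon0, lat0, x0, y0, px_per_deg):
    """Convert a longitude/latitude from a request email into chart pixel
    coordinates, assuming an equirectangular (linear) graticule.

    (lon0, lat0) is the reference corner at pixel (x0, y0); px_per_deg is
    the measured pixel density per degree. All values are illustrative.
    """
    x = x0 + (lon - lon0) * px_per_deg
    y = y0 + (lat0 - lat) * px_per_deg   # image y axis points downward
    return int(round(x)), int(round(y))
```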

2.4. Recognition of Wind Warnings and Vectors by the YOLOv5s Algorithm

The YOLO algorithms were first proposed by Redmon et al. in 2016 [14], and YOLOv4 was proposed in 2020 [21]. YOLOv4 improved the backbone feature extraction network of YOLOv3 [22] from Darknet-53 to CSPDarknet-53 (Cross Stage Partial connections) and added methods such as the Mish activation function and mosaic data augmentation. YOLOv4-tiny trims some structures from YOLOv4 to speed it up [23] and reuses YOLOv3's LeakyReLU as the activation function of the backbone network. YOLOv5 is the fifth version of the YOLO series; its core idea is to take the whole picture as the network input and directly return the position coordinates and categories of the targets in the output layer.
Right now, YOLO has developed into a large family involving different research groups. Among them, YOLOv5 has found an ideal balance between network parameters, convolutional layers, network resolution, and detection speed, making it a masterpiece in this field. This study adopted the YOLOv5s version of YOLOv5. According to the function of each part, the network can be divided into the Backbone, Neck, and Prediction. The YOLOv5s network model performs mosaic data augmentation, adaptive anchor frame calculation, and adaptive picture scaling. Mosaic data augmentation is conducive to the detection of small targets. The adaptive anchor frame calculation initially sets the length and width of the anchor boxes, compares the output prediction boxes with the ground-truth boxes, and then updates the network parameters in reverse over the iterations. The adaptive picture scaling fixes pictures of different sizes at 640 × 640 pixels as input.
In the backbone, YOLOv5s mainly adopts the Focus and CSP structure [24]. The purpose is to prevent the loss of information during down-sampling, obtain richer feature maps, and reduce the amount of computation through local cross-layer fusion.
The Neck mainly draws on the idea of PANet [25] and adopts the FPN (Feature Pyramid Network) + PAN (Path Aggregation Network) structure. The FPN structure adopts an up-sampling, top-down method for feature information fusion, while the PAN structure is an improved FPN that adopts a bottom-up pyramid structure and down-sampling for feature information fusion. The Prediction head is mainly composed of three parts, responsible for the three prediction feature layers output by the Neck network. The biggest change in YOLOv5s prediction is the use of GIOU_Loss [26] as the loss function, replacing the IOU_Loss of YOLOv3 and handling the case where two bounding boxes do not intersect.
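GIOU_Loss is defined as 1 − GIoU. A minimal sketch of the GIoU computation for two axis-aligned boxes, with illustrative names:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes given as (x1, y1, x2, y2).

    GIoU = IoU - (area of C not covered by the union) / (area of C), where
    C is the smallest box enclosing both. Unlike IoU, it stays informative
    when the boxes do not intersect, the case GIOU_Loss was introduced for.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    # Union.
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter

    # Smallest enclosing box C.
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))

    iou = inter / union
    return iou - (area_c - union) / area_c

def giou_loss(box_a, box_b):
    # GIOU_Loss = 1 - GIoU, which penalizes distant non-overlapping boxes.
    return 1.0 - giou(box_a, box_b)
```

For two identical boxes GIoU is 1 (loss 0); for two disjoint boxes GIoU goes negative, so the loss still provides a useful gradient where plain IoU is flat at zero.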

2.5. Optimization Method Based on YOLOv5

The CBAM (Convolutional Block Attention Module) network [27] consists of two submodules: the Channel Attention Module (CAM) and the Spatial Attention Module (SAM). The CAM module passes the input through parallel max-pooling and average-pooling layers to obtain two 1 × 1 × C feature vectors. The two vectors are passed through a shared network and added, and the weight coefficient of each channel is then obtained through an activation function. Finally, the weight coefficients are multiplied with the channels of the original feature map to obtain the new feature map. The SAM module first performs average pooling and max pooling along the channel dimension to obtain two H × W × 1 tensors, splices the two tensors in the channel dimension, obtains the weight coefficients through a 7 × 7 convolution and a Sigmoid activation function, and finally multiplies the weight coefficients with the input feature map to obtain the scaled new feature map. The YOLOv5s network with the CBAM attention mechanism is shown in Figure 4. The Squeeze-and-Excitation (SE) [28] attention mechanism is similar to the CBAM: the feature map is globally pooled to obtain a 1 × 1 point feature, which passes through fully connected layers and a Sigmoid activation function to produce the output weights; the final output is obtained by multiplying these weights with the input feature map.
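The CAM and SAM computations described above can be sketched in NumPy as follows; the weight shapes and reduction ratio are illustrative, and a real implementation would use a deep-learning framework with learnable parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # CAM: parallel global max- and average-pooling over H x W, a shared
    # two-layer MLP, addition, then sigmoid. feat: (C, H, W),
    # w1: (C//r, C), w2: (C, C//r) with reduction ratio r.
    max_pool = feat.max(axis=(1, 2))
    avg_pool = feat.mean(axis=(1, 2))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    weights = sigmoid(mlp(max_pool) + mlp(avg_pool))          # (C,)
    return feat * weights[:, None, None]

def spatial_attention(feat, kernel):
    # SAM: channel-wise max and mean maps stacked to (2, H, W), a k x k
    # convolution, then sigmoid. kernel: (2, k, k), k odd (7 in CBAM).
    stacked = np.stack([feat.max(axis=0), feat.mean(axis=0)])
    k = kernel.shape[-1]
    pad = k // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    h, w = feat.shape[1:]
    conv = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            conv[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    return feat * sigmoid(conv)[None]

def cbam(feat, w1, w2, kernel):
    # CBAM applies channel attention first, then spatial attention.
    return spatial_attention(channel_attention(feat, w1, w2), kernel)
```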

2.6. Recognition of Wind Vectors by the YOLOv8n Algorithm

As the successor of YOLOv5 from the same research group, YOLOv8 is also based on the DarkNet-53 model and shares several common characteristics, such as using anchor boxes to improve detection accuracy and non-maximum suppression to reduce false positives. Meanwhile, YOLOv8 demonstrates higher accuracy but slightly longer inference time than YOLOv5 in different scenarios, making it more suitable for situations that require higher accuracy [29]. YOLOv8 was introduced in 2023 and can be divided into five categories by scaling factor: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x. Among them, YOLOv8n is the smallest and fastest model, so it was chosen for the same wind barb detection for comparison purposes, with its structure shown in Figure 5. The differences between YOLOv4-Tiny, YOLOv5s, and YOLOv8n are shown in Table 3.

2.7. Optimization of Activation Function

ACON (Activate or Not) [30] is a new activation function family that adaptively determines whether each neuron is activated. It includes ACON-A, ACON-B, and ACON-C and solves the problems that the Leaky ReLU function converges too slowly on the negative half-axis and is not smooth at zero. The ACON function defines a switch between linear and nonlinear behavior, thus determining whether a neuron is activated or not.
In the ACON family, the smooth approximation S_β of a standard maximum function max(x_1, …, x_n) can be expressed as follows:
S_β(x_1, …, x_n) = (Σ_{i=1}^{n} x_i · e^{β·x_i}) / (Σ_{i=1}^{n} e^{β·x_i})
In the above formula, n is the number of inputs to the maximum, x_i is the i-th input, and β is the switching factor. When β → ∞, S_β → max and the function is nonlinear; when β → 0, S_β → mean and the function is a linear average (not activated). The common activation function form is max(η_a(x), η_b(x)), and its ACON smooth approximation can be expressed as S_β(η_a(x), η_b(x)).
When η_a(x) = x and η_b(x) = 0, the activation function is as follows:
S_β(η_a(x), η_b(x)) = S_β(x, 0) = x · σ(βx)
which is called ACON-A. The ACON-A activation function is an approximate smoothing of ReLU.
When η_a(x) = x and η_b(x) = px, the activation function is as follows:
S_β(η_a(x), η_b(x)) = S_β(x, px) = (1 − p)x · σ(β(1 − p)x) + px
which is called ACON-B. The ACON-B activation function is an approximate smooth of the Leaky ReLU.
When η_a(x) = p_1·x and η_b(x) = p_2·x (p_1 ≠ p_2), the activation function is as follows:
S_β(η_a(x), η_b(x)) = S_β(p_1·x, p_2·x) = (p_1 − p_2)x · σ(β(p_1 − p_2)x) + p_2·x
which is called ACON-C. The more widely used meta-ACON function is improved on the basis of ACON-C.
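The three variants share one formula; a minimal NumPy sketch of ACON-C, which recovers ACON-A when p_1 = 1 and p_2 = 0:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def acon_c(x, p1, p2, beta):
    """ACON-C: S_beta(p1*x, p2*x) = (p1 - p2)*x * sigma(beta*(p1 - p2)*x) + p2*x.

    As beta -> infinity the output approaches max(p1*x, p2*x) (activated);
    as beta -> 0 it approaches the linear mean (p1 + p2)*x / 2 (not
    activated). With p1 = 1 and p2 = 0 it reduces to ACON-A, the smooth
    approximation of ReLU (x * sigma(beta*x)).
    """
    d = (p1 - p2) * x
    return d * sigmoid(beta * d) + p2 * x
```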

2.8. Bi-Directional Feature Pyramid Network

The original YOLOv5s feature fusion network adopted an FPN + PAN structure. FPN obtains more semantic features through top-down sampling and combines them with more accurate location information. However, a long transmission path may cause information loss. Compared with FPN, PAN has better network accuracy, but its large network scale and large number of parameters lead to low computing efficiency.
The Bi-directional Feature Pyramid Network (BiFPN) [31] introduces skip connections. It is an improvement of the PAN structure. Compared with the PAN + FPN structure in the original network, it adds an additional path at the same level between the input and output nodes, enhancing the information extraction capability of the network.
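The node-level fusion rule BiFPN uses (fast normalized fusion, from the BiFPN paper [31]) can be sketched as follows; the skip-connection topology itself lives in the network configuration:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN's fast normalized fusion of same-shaped feature maps:
    out = sum_i w_i' * F_i, with w_i' = relu(w_i) / (eps + sum_j relu(w_j)).

    The learnable per-input weights let the network decide how much each
    incoming path (including the extra skip connection) contributes.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # keep weights >= 0
    w = w / (eps + w.sum())
    return sum(wi * f for wi, f in zip(w, features))
```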

3. Recognition of the Weather Information from Surface Charts

3.1. Text from JMA Weather Brief Report

Statistical analysis of multiple weather charts found that weather reports often appear in the blank areas in the lower part or at the far margin of the chart. However, their exact location follows no fixed pattern, and the overall inclination of the text is uncertain. Lacking the mapping rules of the Japan Meteorological Agency, it was necessary to establish where the report appears on each weather chart before analyzing its weather information.
Template matching is a method used to locate specific targets in an image [32]. By using characters that will inevitably appear in the weather briefings as templates for template matching, it becomes possible to approximate the location where the weather briefings appear. In this study, the matching method chosen for template matching is the normalized correlation coefficient matching (TM_CCOEFF_NORMED), which offers high accuracy and stability in OpenCV. The return value calculated by this method comprises the comparison results of each position. A return value close to 1 indicates a strong correlation with the template. During practical implementation, setting the return value threshold for successful recognition too high leads to a significant number of missed samples, while setting it too low results in numerous false detections. After conducting over 300 trials, the optimal threshold for determining successful recognition in this study was finally set at 0.58.
Template matching is a rapid and efficient method; however, it has the drawback of being unable to detect objects that are rotated, scaled, or viewed from different angles. In the case of weather brief reports on a weather chart, they are often tilted, requiring the template containing the characters to be rotated to match the tilt direction of the characters on the original map. The rotation matrix for rotating around any point in two dimensions is as follows:
M = ⎡  cos θ    sin θ    (1 − cos θ)·x_center + sin θ·y_center ⎤
    ⎢ −sin θ    cos θ    (1 − cos θ)·y_center − sin θ·x_center ⎥
    ⎣    0        0                        1                   ⎦
By specifying the rotation center, angle of rotation, and scaling after rotation in the code, it becomes possible to perform the rotation operation on the template containing the characters. Combined with the rotated template matching method, this approach can be employed to locate specific characters in the weather chart.
In this study, the recall and precision indicators were selected to analyze the results of the target detection experiments. After conducting template matching detection experiments on 535 JMA weather charts containing weather reports, it was discovered that “hPa” exhibited the highest recall and precision rates compared to “WINDS” and “WITHIN”. This outcome aligns with expectations, and the experimental results can be seen in Table 4. Consequently, by identifying the position of “hPa”, the location of the weather bulletin on the weather chart can be determined.
After obtaining the pixel coordinates of the character “hPa” by template matching, the weather report can be cropped from the weather chart using OpenCV. Since most of the weather bulletin screenshots are typically tilted, direct text recognition is ineffective. Thus, it becomes necessary to rotate the image after capturing the weather bulletin screenshot, aligning its text horizontally, as seen in the upper part of Figure 6. The required rotation angle to align the screenshot horizontally is approximately the same as the angle of rotation of the template “hPa” during a successful template matching, albeit in the opposite direction. In this study, once the weather bulletin section is cropped, the image is rotated by the corresponding angle. This adjustment aims to rotate the text in the screenshot into an approximate horizontal position, thereby improving the accuracy of the text recognition results in the subsequent steps.
Afterwards, the OCR Application Programming Interfaces (APIs) cloud platform was applied to retrieve text from the image within a millisecond response time and with an accuracy rate exceeding 95%. An example of the text recognition from the brief image before and after the tilting adjustment is shown in the lower part of Figure 6. Overall, the latter performs better, with most of the critical weather messages recognized.

3.2. Warning Symbols from JMA Charts

The wind and fog warning symbols in the JMA weather chart are positioned at the center of their respective regions, with the tilt direction changing based on the latitude and longitude. To accurately identify and locate these symbols, a combination of rotation and template matching, as explained in Section 3.1, is employed. This method not only enables the identification and localization of “hPa” but also provides the positions of gale and fog in the weather chart. After this method was applied to Figure 2d, the corresponding weather warning symbols were detected and illustrated, as seen in Figure 7.
The template matching detection experiments were performed on a data set of 750 weather charts that included both "hPa" and various other warning symbols. The findings, illustrated in Table 5, reveal high recall values for "hPa", "SW", "GW", and "TW", with the lowest being 0.92. Additionally, precision rates greater than 0.85 were achieved for "hPa", "TW", and "FOG", indicating satisfactory detection performance overall.

3.3. Wind Barbs from US Charts via YOLO Algorithm

The YOLO-related experiments were conducted using the PyTorch framework on a GPU, with the following configuration: Windows 10 operating system, Intel Core i7-7700HQ processor, NVIDIA GTX1050Ti 4G GDDR5 graphics card, CUDA version 11.3, and Python 3.6. It is not a high-speed workstation, so the total number of training epochs was set to 200 with a batch size of 4. The momentum factor, an important parameter affecting gradient descent, was set to 0.9. If the learning rate is too large the network may not converge, and if it is too small convergence is too slow; therefore, the learning rate was set to 0.001. To prevent overfitting, the weight decay coefficient was set to 0.0005, the confidence threshold to 0.5, and the non-maximum suppression threshold to 0.3. One training run of the model takes about 30 h.
The specific evaluation indicators include precision (P), recall (R), mean average precision (mAP), the F1 measure, and single-image detection time. Among them, mAP and single-image detection time are the most important evaluation indicators for an object detection algorithm, measuring its accuracy and speed. The calculation expressions are as follows:
P = TP / (TP + FP),  R = TP / (TP + FN),  F1 = 2PR / (P + R),  mAP = (1/N) Σ_{k=1}^{N} ∫ P(R) dR
where N is the number of target classes, the integral is taken over recall from 0 to 1, and TP, FP, TN, and FN denote, respectively, true positives, false positives, true negatives, and false negatives. In order to verify the effect of each improvement, the original model and all the improved models were compared. Under the same data set and experimental environment, the experimental results of each model are shown in Table 6.
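The count-based metrics can be computed directly; this sketch covers P, R, and F1 only, since mAP additionally requires the full ranked detections to integrate the precision-recall curve for each class:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts, per the formulas above.

    Assumes at least one predicted positive (tp + fp > 0) and at least one
    ground-truth positive (tp + fn > 0).
    """
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1
```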
The mAP values of the original YOLOv5s model and the 11 improved experiments for wind barb detection are all higher than 90%. Among them, experiments 2, 5, 8, 9, and 12 achieve mAP values higher than the original YOLOv5s. Integrating BiFPN into the YOLOv5s network structure (experiment 5) improves the model's mAP the most, 0.8% above the original model, but it is not superior in single-image detection time, being 3.6 ms slower than the fastest, 1.2 ms, achieved by experiment 8. The mAP values of experiments 3, 4, 6, 10, and 11 are lower than the original YOLOv5s, and the combination of the ACON activation function and BiFPN (experiment 10) yields the lowest mAP of only 0.903. In terms of precision, experiment 8 has the highest value; its mAP of 0.923 exceeds that of the original model, and it also has the fastest single-image detection time. However, compared with the original YOLOv5s, experiment 8's mAP improvement is slightly smaller than that of experiment 12, while its detection precision is only 0.014 higher. For the six experiments whose mAP improved over experiment 1, the detection precision and recall for the five types of wind barb symbols are compared on the same chart. The image detection results of the experiments in Table 7 are shown in Figure 8.
Among all the above experiments, experiment 5 has the highest mAP value, but it still shows wrong and missing detections. Compared with experiment 5, the mAP value of experiment 12 is slightly lower, but it is the most reliable for detecting wind weather symbols, with no wrong, missing, or empty detections. In the detection of weather chart information, a missing detection is more dangerous than an empty or wrong one. Compared with the original model, the improvement that only fuses BiFPN (experiment 5) has the largest mAP increase, 0.8%, but its actual detection of wind barbs is not stable, exhibiting wrong and missing detections.
In general, the reliability of experiment 12 is higher than that of the other experiments. Although not every detection box's confidence is improved, the phenomena of wrong, missing, and empty detections are eliminated. Meanwhile, although most wind barbs could be correctly detected in the other experiments, the fit of their detection boxes is worse than in experiment 12 owing to the background of the weather chart. Experiment 12 shows not only no wrong, missing, or empty detections but also tighter, better-fitting detection boxes, indicating that the improved model maintains strong accuracy and robustness against the interfering background of a weather chart.
The experimental environment and parameter configuration for wind barb detection with the YOLOv8n model are the same as for the improved YOLOv5s, allowing a direct comparison. The mAP value and recall rate R of YOLOv8n are higher than those of the improved YOLOv5s model, but its precision P is significantly lower. The bottom row of Figure 8 shows the actual detection results of wind barbs in the same meteorological fax image for both models: YOLOv8n falsely detected a "45 kt" barb in the first fax chart and missed a "35 kt" barb in the remaining three, whereas experiment 12 made no detection errors. Therefore, for wind barb detection in weather charts, the improved YOLOv5s model is more applicable in this study than the YOLOv8n model.
As can be seen from Figure 9, the loss of the original YOLOv5s model drops rapidly to around 0.03 in the first 75 epochs, while the mAP rises rapidly to around 0.92. During epochs 75–200, the loss oscillates several times as it declines, but the overall trend is gentle and gradually approaches 0.03. Owing to hardware constraints, the maximum batch_size during training could only be set to 4, which may be the cause of the slight oscillation. The mAP rises slowly to 0.92 and then remains stable, at which point the model has reached stable convergence.
In experiment 12 (YOLOv5s + CBAM + ACON + BiFPN), the loss drops rapidly to about 0.035 in the first 50 epochs, while the mAP rises rapidly to about 0.7. Between epochs 50 and 125, the loss declines more gently to about 0.032, and the mAP shifts from a rapid to a slow rise, gradually approaching 0.92. After epoch 125, the loss remains near 0.032 with slight oscillation, and the mAP stabilizes at 0.925. At this point the model has reached stable convergence and recognizes wind barbs accurately.
The loss of experiment 12 thus decreases faster and oscillates less, and its final stable mAP is slightly higher than that of the original model, confirming that the improvements are effective.
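Experiment 12 replaces the model's original activation with ACON [30]. The ACON-C form is f(x) = (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x, where p1, p2, and beta are learned per channel in the network; the sketch below fixes them as scalars, which is a simplification:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def acon_c(x, p1=1.0, p2=0.0, beta=1.0):
    """ACON-C activation (Ma et al. [30]); p1, p2, beta are learned
    per channel in the real model, fixed scalars in this sketch."""
    return (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x

x = np.linspace(-3.0, 3.0, 7)
# With p1 = 1, p2 = 0, beta = 1, ACON-C reduces to SiLU/Swish,
# the activation family it generalizes:
print(np.allclose(acon_c(x), x * sigmoid(x)))  # True
```

The learnable beta lets the network interpolate between a linear response (beta near 0) and a switch-like gate (large beta), which is the "activate or not" idea behind the improved generalization.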

4. Marking Weather Warnings and Delivering Them to Ships via the Automatic System

After the warning symbols and brief text are recognized from the charts (Figure 10a for the JMA chart), the associated adverse-weather areas can be located and filled with different colors (Figure 10b for the JMA chart, Figure 10c for the US chart) so that they can be used by the automatic marine forecasting system developed by Jian et al. [4].
Compared with the RGB color space, the HSV color space represents brightness, hue, and saturation more intuitively, which makes colors easier to compare; the newly generated weather forecast chart is therefore converted into an HSV image for color detection using the OpenCV library in Python. After the conversion, a mask of type Mat is defined, with upper and lower color limits set according to the legend of the forecast chart. Color detection with this mask produces a binary image, which is then output. Finally, the system checks whether the specified pixel in the legend-generated mask has the value 255, which indicates that the point lies inside a wind or fog warning area.
This study also uses the third-party Zmail module and the smtplib module from the Python standard library to enable bi-directional communication between Webmail and email clients [4]. A test of email interaction was conducted using the email server of Dalian Maritime University. The intelligent response system automatically checks the mailbox every five minutes. If it detects latitude and longitude coordinates that are out of range, or if the content format is incorrect and unrecognizable, a format-error prompt is returned. If the format is correct, the latitude and longitude coordinates are extracted from the email, the high-wind and fog warning information for the specified point is calculated, and the result is sent back to the requesting ship by email. Figure 11 displays examples of the request and response email snapshots.
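The request-parsing and reply logic can be sketched with the standard library alone; the coordinate format ("lat, lon"), the subject lines, and the server host below are assumptions for illustration, not the actual DMU configuration:

```python
import re
from email.message import EmailMessage

COORD_RE = re.compile(r"(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)")

def parse_request(body):
    """Return (lat, lon) from a request body, or None when the format
    is wrong or the point is out of range."""
    m = COORD_RE.search(body)
    if not m:
        return None
    lat, lon = float(m.group(1)), float(m.group(2))
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        return None
    return lat, lon

def build_reply(sender, recipient, body):
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    point = parse_request(body)
    if point is None:
        msg["Subject"] = "Format error"
        msg.set_content("Please send the position as 'lat, lon'.")
    else:
        msg["Subject"] = "Wind/fog warning"
        msg.set_content(f"Warning lookup result for point {point}.")
    return msg

# Delivery would then go through smtplib against the university server
# (placeholder host, not the real setup):
#   with smtplib.SMTP("smtp.example.edu") as s:
#       s.send_message(build_reply(...))
```

Keeping the validation step separate from delivery is what allows the out-of-range and bad-format cases to be answered with a prompt instead of a warning lookup.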

5. Conclusions

This paper extracts weather warning symbols and brief weather report information from JMA and US surface weather fax charts using multiple AI techniques. Template matching, OCR text recognition, and OpenCV image processing were used to identify the location and extent of potential gale and fog occurrences at sea and to color the corresponding areas. A previously developed intelligent auto-response system was reconstructed to process the newly gathered information: it retrieves the gale and fog warnings for the positions specified in emails sent by ships and returns them to the requesting ship by email. The improved system completes the entire process within 3 min without manual operation.
To address the burden of manual interpretation of weather fax charts by ship pilots and the urgent need for intelligent recognition, an improved wind-information target detection algorithm for weather fax charts based on YOLOv5s is proposed: an attention mechanism is added, the ACON activation function replaces the original one to improve the generalization and transfer ability of the model, and BiFPN is fused to improve the integration of multi-scale targets. The original and improved YOLOv5s models were applied to wind symbol detection. The results show that the mAP increased for some improved models and decreased for others. Although the mAP gain of the experiment combining all three improvements (experiment 12) is not as large as that of experiment 5, experiment 12 is more stable and reliable in the actual detection of wind barbs, with no empty, missing, or wrong detections, and its detection precision and single-image detection speed are also improved to varying degrees.
With the YOLOv8n algorithm, the mAP reached 0.936, higher than that of the improved YOLOv5s model. However, YOLOv8n produced omissions and false detections in the actual detection of wind barbs, while the improved YOLOv5s model made no detection errors.
By providing accurate and timely weather information for sailing ships, the system could in the future replace manual recognition, relieving ship pilots of the pressure of reading charts by hand and helping seafarers monitor marine weather and avoid the risks brought by adverse weather. The detection of weather fax charts can be combined with the intelligent response system for wind and wave forecasting at fixed points at sea. At present, the system has some limitations: it covers only Japanese and US surface fax charts for wind and fog warnings, and the text recognition process relies on internet connectivity. In the future, we intend to expand the range of fax charts analyzed, develop an independent text recognition tool to enhance the weather warning service, and integrate the weather information with the AIS resource network [33], which is currently absent from ship piloting and navigation. This research therefore has a wide range of applications and great significance.

Author Contributions

Conceptualization, J.J.; methodology, Y.Z. and K.X.; software, Y.Z. and K.X.; validation, Y.Z. and K.X.; formal analysis, J.J. and Y.Z.; investigation, P.J.W.; resources, J.J.; data curation, J.J.; writing—original draft preparation, Y.Z. and K.X.; writing—review and editing, J.J.; visualization, Y.Z. and K.X.; supervision, P.J.W.; project administration, P.J.W.; funding acquisition, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 42261144671) and National Key R&D Program of China (Grant No. 2024YFE0103200).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors thank the reviewers for their helpful comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. International Convention on Standards of Training, Certification and Watchkeeping for Seafarers. 2010, p. 53. Available online: https://wwwcdn.imo.org/localresources/en/OurWork/HumanElement/Documents/34.pdf (accessed on 5 November 2024).
  2. Miyamoto, M.; Yamada, T.J. Points of consideration on identification of the atmospheric fronts depicted on weather charts. IOP Conf. Ser. Earth Environ. Sci. 2023, 1136, 012023. [Google Scholar] [CrossRef]
  3. Jian, J.; Webster, P.J. A new marine auto-response quantitative wind forecast system. Procedia Soc. Behav. Sci. 2013, 96, 1362–1365. [Google Scholar] [CrossRef]
  4. Jian, J.; Sun, Z.; Sun, K. An Intelligent Automatic Sea Forecasting System Targeting Specific Areas on Sailing Routes. Sustainability 2024, 16, 1117. [Google Scholar] [CrossRef]
  5. Zhang, H.; Zhou, Y. Research on the extraction method of warning line in meteorological fax map. Sci. Technol. Innov. Appl. 2016, 175, 61–62. [Google Scholar]
  6. Fang, W.; Di, T. Multivariate bias correction and downscaling of climate models with trend-preserving deep learning. Clim. Dyn. 2024, 62, 9651–9672. [Google Scholar]
  7. Yin, Z. Research on Contour Detection and Interpolation of Meteorological Fax Map Based on Deep Learning. Master’s Thesis, Nanjing University of Science and Technology, Nanjing, China, 2021. [Google Scholar]
  8. Arena, P.; Baglio, S.; Fortuna, L.; Manganaro, G. Self-organization in a two-layer CNN. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1998, 45, 157–162. [Google Scholar] [CrossRef]
  9. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  10. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  11. Zhong, Z.; Zhou, S.; Li, S.; Li, H.; Yang, H. Tunnel lining quality detection based on the YOLO-LD algorithm. Constr. Build. Mater. 2024, 449, 138240. [Google Scholar]
  12. Rui, H.; Hu, R.; Su, W.H.; Li, J.L.; Peng, Y. Real-time lettuce-weed localization and weed severity classification based on lightweight YOLO convolutional neural networks for intelligent intra-row weed control. Comput. Electron. Agric. 2024, 226, 109404. [Google Scholar]
  13. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. Comput. Vis. 2016, 9905, 21–37. [Google Scholar]
  14. Redmon, J. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  15. Li, C.D.; Xiao, C.Y.; Pan, H.L.; Chen, R.; Pan, H. Information extraction from meteorological facsimile maps. J. Image Graph. 2012, 17, 1268–1273. [Google Scholar]
  16. Wang, X. Retraction Note: System simulation of computer image recognition technology application by using improved neural network algorithm. Soft Comput. 2024, 28, 33. [Google Scholar] [CrossRef]
  17. Wei, J.; Luo, Z.; Luo, K.; Shi, X.; Li, P. Computer vision–based surface defect identification method for weld images. Mater. Lett. 2024, 371, 136972. [Google Scholar]
  18. Schultz, M.G.; Betancourt, C.; Gong, B.; Kleinert, F.; Langguth, M.; Leufen, L.H.; Mozaffari, A.; Stadtler, S. Can deep learning beat numerical weather prediction? Philos. Trans. R. Soc. A 2021, 379, 20200097. [Google Scholar] [CrossRef] [PubMed]
  19. Ren, X.; Li, X.; Ren, K.; Song, J.; Xu, Z.; Deng, K.; Wang, X. Deep learning-based weather prediction: A survey. Big Data Res. 2021, 23, 100178. [Google Scholar] [CrossRef]
  20. Kaur, R.; Singh, S. A comprehensive review of object detection with deep learning. Digit. Signal Process. 2023, 132, 103812. [Google Scholar] [CrossRef]
  21. Wu, D.H.; Lv, S.C.; Jiang, M.; Song, H. Using Channel pruning-based YOLOv4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Comput. Electron. Agric. 2020, 178, 105742. [Google Scholar] [CrossRef]
  22. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  23. Zhang, X.; Zhang, Y.Q.; He, B.; Li, G.N. Research on remote sensing image aircraft target detection technology based on YOLOv4-tiny. Opt. Tech. 2021, 47, 344–351. [Google Scholar]
  24. Zou, L.; Ma, C.; Hu, J.; Jun, H.; Yu, Z.; Zhuang, K. Enhanced predictive modeling of rotating machinery remaining useful life by using separable convolution backbone networks. Appl. Soft Comput. 2024, 156, 111493. [Google Scholar] [CrossRef]
  25. Jin, H.; Li, Z. Grid multi-scroll attractors in cellular neural network with a new activation function and pulse current stimulation. Nonlinear Dyn. 2024, 1–18. [Google Scholar] [CrossRef]
  26. Rezatofighi, H.; Tsoi, N.; Gwak, J.Y.; Amir, S.; Ian, D.R.; Silvio, S. Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 658–666. [Google Scholar]
  27. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  28. Hu, J.; Shen, L.; Albanie, S. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed]
  29. Sohan, M.; Sai Ram, T.; Rami Reddy, C.V. A Review on YOLOv8 and Its Advancements. In Data Intelligence and Cognitive Informatics; ICDICI 2023. Algorithms for Intelligent Systems; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
  30. Ma, N.; Zhang, X.; Liu, M.; Sun, J. Activate or not: Learning customized activation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 8032–8042. [Google Scholar]
  31. Tan, M.X.; Pang, R.M.; Le, Q.V. Efficient Det: Scalable and Efficient Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10778–10787. [Google Scholar]
  32. Terwilliger, T.C. Automated main-chain model building by template matching and iterative fragment extension. Acta Crystallogr. Sect. D Biol. Crystallogr. 2002, 59 Pt 1, 38–44. [Google Scholar]
  33. Rindone, C. AIS Data for Building a Transport Maritime Network: A Pilot Study in the Strait of Messina (Italy). In Conference on Computational Science and Its Applications; Springer Nature: Cham, Switzerland, 2024; pp. 213–226. [Google Scholar] [CrossRef]
Figure 1. (a) Weather report and warning symbols in JMA surface weather fax chart retrieved from imocwx.com (accessed on 14 February 2022). (b) Warning symbols and wind barbs in a 48 h surface forecast chart issued by the US National Weather Service at 1758 UTC on 13 February 2022.
Figure 2. JMA (a) original surface fax weather chart, (b) averaged base chart, (c) after binarization, (d) difference between the original (a) and base chart (c), resulting in a pure weather map.
Figure 3. Flow chart of the auto-warning system for JMA charts.
Figure 4. YOLOv5s-CBAM(SE) network structure diagram; the parts related to CBAM(SE) are marked in gray.
Figure 5. YOLOv8n model structure diagram.
Figure 6. Comparison of weather briefing text recognition results.
Figure 7. Recognition of warning symbols “hPa”, “GW”, “SW”, and “FOG[W]” from the chart in Figure 2d.
Figure 8. Comparison of detection results of wind barbs (interception).
Figure 9. Training process visualization: (a) train-loss and (b) mAP values for original and improved YOLOv5s.
Figure 10. (a) JMA charts with warning symbols detected, (b) JMA charts with warning areas colored: red and yellow for wind speeds greater than 50 kts and 35–49 kts, green for visibility < 0.3 nm, (c) US charts with wind levels colored.
Figure 11. Field tests of the auto-warning system: (upper and middle) US case, (bottom) JMA case.
Table 1. Wind barb and warning symbols in surface weather charts.

| (US) Symbol | Meaning |
|---|---|
| [barb image i001] | 35 kt |
| [barb image i002] | 40 kt |
| [barb image i003] | 45 kt |
| [barb image i004] | 50 kt |
| [barb image i005] | 55 kt |
| [symbol image i006] | 35–45 kts |
| [symbol image i007] | 35–45 kts expected in next 24 h |

| (JMA) Symbol | Meaning |
|---|---|
| GW | 35–45 kts now or in next 24 h |
| SW | ≥50 kts now or in next 24 h |
| TW | ≥65 kts now or in next 24 h |
| FOG[W] | Dense fog with visibility less than 0.3 nm |
Table 2. List of marine surface wind forecasts at specific locations. Websites accessed on 16 October 2024.

| | Jian et al. [3] | Jian et al. [4] | www.buoyweather.com | www.windy.com | www.stormgeo.com | This study |
|---|---|---|---|---|---|---|
| Agency | n/a | n/a | Private sailing forecasting sector | Private weather forecasting sector | Leading weather routing corporation | n/a |
| Core data source | ECMWF (European Centre for Medium-Range Weather Forecasts) | Various public official color-shade weather charts | US GFS (Global Forecast System model) | ECMWF, GFS, and German ICON (Icosahedral Nonhydrostatic) | Not available | US and JMA non-shade surface fax charts |
| Domain | NW Pacific and Indian Ocean | NW Pacific | Global marine | Global | Global marine | NW Pacific, northern Pacific and Atlantic |
| Object user | Ship navigator | Ship navigator | Sailor | All | Transoceanic shipping company | Ship navigator |
| Cost | Free | Free | First 2 days free | First 7 days free | High | Free |
| Max lead | 120 h | 48 h | 16 days | 10 days | 30 days | 96 h and 24 h |
| Time step | 6 h | 6 h | 6 h | 1 h | 3 h | 6 h |
| Mode | Text | Text | Web-based graphic | GIS-based graphic | Text and graphic | Text |
| Delivery method | Automatic email as per request | Automatic email as per request | Online | Online | Report pre and during the voyage | Automatic email as per request |
Table 3. Structures of YOLOv4-Tiny, YOLOv5s, and YOLOv8n.

| | YOLOv4-Tiny | YOLOv5s | YOLOv8n |
|---|---|---|---|
| Backbone | CSPDarknet structure | C3 module CSPDarknet structure | C2f module CSPDarknet structure |
| Neck | SPP, PAN | SPPF, C3 module PAN | SPPF, C2f module PAN |
| Head | YOLOv3 | Coupled Head, Anchor-based | Decoupled Head, Anchor-free |
| Loss | Regression: CIoU_Loss | Classification: BCE_Loss; Regression: GIoU_Loss | Classification: VFL_Loss; Regression: DFL_Loss + CIoU_Loss |
Table 4. Comparison of recognition rates of common words in weather briefings.

| Common Words in Weather Briefings | Samples Correctly Detected | Incorrect Detections | Total Samples | Recall | Precision |
|---|---|---|---|---|---|
| hPa | 500 | 5 | 535 | 0.935 | 0.990 |
| WINDS | 345 | 1210 | 535 | 0.645 | 0.222 |
| WITHIN | 470 | 10 | 1050 | 0.448 | 0.979 |
Table 5. Recognition rates of weather warnings in JMA charts.

| Template | Samples Correctly Detected | Incorrect Detections | Total Samples | Recall | Precision |
|---|---|---|---|---|---|
| hPa | 712 | 5 | 750 | 0.949 | 0.993 |
| SW | 594 | 281 | 637 | 0.932 | 0.679 |
| GW | 938 | 387 | 987 | 0.950 | 0.708 |
| TW | 24 | 1 | 26 | 0.923 | 0.960 |
| FOG | 1421 | 241 | 2016 | 0.705 | 0.855 |
Table 6. Comparison of effects of different methods; the highest mAP, P, R, and F1 values are marked in bold.

| Experiment | Model | mAP | P | R | F1 | Single Image Detection Time/ms | Weight |
|---|---|---|---|---|---|---|---|
| 1 | YOLOv5s | 0.920 | 0.876 | 0.905 | 0.891 | 0.5 | 13.7 MB |
| 2 | YOLOv5s + SE | 0.921 | 0.888 | 0.909 | 0.898 | 6.9 | 13.8 MB |
| 3 | YOLOv5s + CBAM | 0.917 | 0.964 | 0.906 | 0.934 | 7.4 | 13.8 MB |
| 4 | YOLOv5s + ACON | 0.915 | 0.961 | 0.888 | 0.923 | 7.5 | 14.6 MB |
| 5 | YOLOv5s + BiFPN | 0.928 | 0.963 | 0.908 | 0.935 | 5.8 | 14.0 MB |
| 6 | YOLOv5s + SE + ACON | 0.916 | 0.969 | 0.896 | 0.931 | 4.4 | 14.6 MB |
| 7 | YOLOv5s + CBAM + ACON | 0.920 | 0.968 | 0.882 | 0.923 | 13.6 | 14.6 MB |
| 8 | YOLOv5s + SE + BiFPN | 0.923 | **0.973** | 0.882 | 0.925 | 1.2 | 14.0 MB |
| 9 | YOLOv5s + CBAM + BiFPN | 0.926 | 0.968 | 0.907 | **0.937** | 5.6 | 14.0 MB |
| 10 | YOLOv5s + ACON + BiFPN | 0.903 | 0.914 | 0.869 | 0.891 | 6.9 | 14.9 MB |
| 11 | YOLOv5s + SE + ACON + BiFPN | 0.916 | 0.964 | 0.871 | 0.915 | 7.5 | 14.9 MB |
| 12 | YOLOv5s + CBAM + ACON + BiFPN | 0.925 | 0.959 | 0.899 | 0.928 | 3.4 | 14.9 MB |
| 13 | YOLOv8n | **0.936** | 0.88 | **0.92** | 0.92 | 8.6 | 6.08 MB |
Table 7. Comparison of P and R values of different wind barbs.

| Experiment | Model | P (35 kt) | P (40 kt) | P (45 kt) | P (50 kt) | P (55 kt) | R (35 kt) | R (40 kt) | R (45 kt) | R (50 kt) | R (55 kt) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | YOLOv5s | 0.88 | 0.875 | 0.872 | 0.87 | 0.888 | 0.989 | 0.629 | 0.94 | 0.992 | 0.978 |
| 2 | YOLOv5s + SE | 0.897 | 0.887 | 0.893 | 0.885 | 0.878 | 0.989 | 0.63 | 0.94 | 0.992 | 0.993 |
| 5 | YOLOv5s + BiFPN | 0.971 | 0.962 | 0.976 | 0.947 | 0.958 | 0.99 | 0.627 | 0.964 | 0.99 | 0.968 |
| 7 | YOLOv5s + CBAM + ACON | 0.981 | 0.965 | 0.973 | 0.96 | 0.961 | 0.976 | 0.626 | 0.855 | 0.987 | 0.965 |
| 8 | YOLOv5s + SE + BiFPN | 0.977 | 0.954 | 0.985 | 0.955 | 0.993 | 0.981 | 0.625 | 0.88 | 0.988 | 0.936 |
| 9 | YOLOv5s + CBAM + BiFPN | 0.981 | 0.971 | 0.964 | 0.965 | 0.96 | 0.981 | 0.626 | 0.976 | 0.988 | 0.965 |
| 12 | YOLOv5s + CBAM + ACON + BiFPN | 0.982 | 0.971 | 0.929 | 0.956 | 0.956 | 0.977 | 0.624 | 0.952 | 0.986 | 0.958 |

Share and Cite

Jian, J.; Zhang, Y.; Xu, K.; Webster, P.J. Automatic Reading and Reporting Weather Information from Surface Fax Charts for Ships Sailing in Actual Northern Pacific and Atlantic Oceans. J. Mar. Sci. Eng. 2024, 12, 2096. https://doi.org/10.3390/jmse12112096
