Article

Three-Dimensional CKANs: UUV Noncooperative Target State Estimation Approach Based on 3D Convolutional Kolmogorov–Arnold Networks

1 School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
2 College of Electronics and Internet of Things Engineering, Chongqing Industry Polytechnic College, Chongqing 401120, China
3 School of Mechatronic Engineering, Changchun University of Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(11), 2040; https://doi.org/10.3390/jmse12112040
Submission received: 17 October 2024 / Revised: 5 November 2024 / Accepted: 9 November 2024 / Published: 11 November 2024

Abstract:

Accurate and stable estimation of the position and trajectory of noncooperative targets is crucial for the safe navigation and operation of sonar-equipped underwater unmanned vehicles (UUVs). However, the uncertainty associated with sonar observations and the unpredictability of noncooperative target movements often undermine the stability of traditional Bayesian methods. This paper presents an innovative approach for noncooperative target state estimation utilizing 3D Convolutional Kolmogorov–Arnold Networks (3DCKANs). By establishing a non-Markovian model that characterizes state estimation of UUV noncooperative targets under uncertain observations, we leverage historical data to construct 3D Convolutional Kolmogorov–Arnold Networks. This network learns the patterns of sonar observations and target state transitions from a substantial offline dataset, allowing it to approximate the posterior probability distribution derived from past observations effectively. Additionally, a sliding window technique is integrated into the convolutional neural network to enhance the estimator’s fault tolerance with respect to observation data in both temporal and spatial dimensions, particularly when posterior probabilities are unknown. The incorporation of the Kolmogorov–Arnold representation within the convolutional layers enhances the network’s capacity for nonlinear expression and adaptability in processing spatial information. Finally, we present statistical experiments and simulation cases to validate the accuracy and stability of the proposed method.

1. Introduction

When executing tasks such as scientific discovery, resource exploitation, and strategic military operations, underwater unmanned vehicles (UUVs) encounter significant challenges in accurately estimating the states of noncooperative targets within complex underwater environments [1,2]. These targets may include submarines, other UUVs, or underwater obstacles. Since these entities do not actively disclose their state information, UUVs are forced to rely solely on their sensors and algorithms for tracking and estimation. Sonar, the primary means of underwater detection and communication, is susceptible to measurement errors, signal attenuation, and fluctuations in the underwater acoustic channel. Consequently, uncertainties in sonar observations, such as delays and dropouts, present considerable challenges for target state estimation algorithms in UUVs [3,4,5]. Furthermore, the performance of UUV state estimation for noncooperative targets is limited by model uncertainty. The movements of noncooperative targets can be highly maneuverable and unpredictable, complicating the accurate modeling of their dynamics [6].
Bayesian methods provide a robust framework for traditional target state estimation by employing prior state transition models in conjunction with available observational data to compute posterior probabilities [7]. For linear systems influenced by zero-mean Gaussian noise, the Kalman filter is renowned for its optimality in minimizing mean square error. The Extended Kalman Filter (EKF) linearizes nonlinear system equations through Taylor expansions, extending the Bayesian filtering framework to accommodate nonlinear state estimation [8]. However, in highly nonlinear systems, the EKF may diverge and yield unreliable estimates [9]. The Unscented Kalman Filter (UKF) employs the unscented transformation technique to calculate the posterior mean and covariance of nonlinear expressions, executing the filtering task within the Kalman filter paradigm [10,11]. This method deftly avoids errors associated with EKF linearization while simplifying algorithmic complexity, although it may still face issues regarding matrix singularity [12]. The Cubature Kalman Filter (CKF) utilizes a third-degree spherical–radial rule for cubature, selecting a set of cubature points to approximate the mean and covariance of nonlinear systems [13]. The CKF performs exceptionally well in managing highly nonlinear systems and provides accurate estimations, even with a limited sample set [14]. The Particle Filter (PF), rooted in Bayesian filtering theory, employs prior information and real-time data to estimate a system’s posterior probability, achieving state estimation for nonlinear systems [15]. Nonetheless, the efficacy of these traditional Bayesian methods is highly contingent upon the alignment between the observation model, target motion model, and prior state–space model, which may significantly degrade in noncooperative target tracking scenarios [16,17].
To enhance the accuracy of prior target state transition models, various robust model-based state estimators, such as robust filters, Interacting Multiple Model (IMM) approaches, and closed-loop adaptive filters, have been proposed [18,19]. The robust filter models system uncertainty by estimating the filter gain to adjust state estimates, thus improving estimation performance amid inaccuracies in system model parameters (e.g., guaranteeing worst-case performance bounds with H∞ filtering). However, the practical application of robust filters is often constrained by the difficulty of accurately modeling the uncertain components of real systems. The IMM dynamically combines multiple prior models to convey the target's state transition process, thereby enhancing estimation performance during state transitions of target motion [20,21]. Its effectiveness depends on the quality of the prior models and the model transition matrices; poorly matched prior models within the set can diminish estimation accuracy [22]. Adaptive techniques, including joint estimation and identification methods [23] and closed-loop adaptive filters [24], update model parameters in real time based on state estimation results, addressing uncertainties in system dynamics and measurement models. However, these adaptive methods are often hindered by delays in model updates relative to the actual system dynamics, resulting in diminished estimation accuracy for highly maneuverable targets [25,26].
With advancements in perception and storage technologies, data-driven estimation and prediction have emerged as critical research avenues in this domain. Unlike approaches reliant on prior model information, data-driven methods derive system characteristics—which are difficult to model—from substantial volumes of offline data, demonstrating superior performance in estimating states of highly maneuverable targets [27,28]. Liu et al. [29] developed a prior model-assisted, data-driven estimation method for maneuvering target states by fusing prior models with offline radar historical data, thereby enhancing estimation accuracy during unpredictable maneuvers. To improve the generalization of data-assisted or data-driven methods, Liu et al. [30] constructed a digital twin system to gather a more comprehensive training dataset for maneuvering target tracking. Additionally, to further enhance tracking accuracy for maneuvering targets, they designed independent noise reduction networks and motion model estimation networks, utilizing Transformers, LSTM, and Cross-Product Neural Networks to develop an intelligent state prediction method. Zhang et al. [31] designed spatiotemporal feature extraction and decoupling modules founded on LightGBM, considering both measurement data characteristics in time and space and the independence between observation errors and target motion patterns, demonstrating effectiveness in radar and video tracking tasks. Jin et al. [32] employed attention mechanisms to capture systemic motion characteristics from offline GPS data, integrating this with the Expectation–Maximization (EM) algorithm to estimate system model parameters online. By learning the system’s parameters, dynamics, and measurement characteristics, they utilized a Kalman filter to estimate the system’s dynamic state, significantly improving trajectory estimation accuracy for GPS-based unmanned vehicles.
In contrast to sensors such as radar and GPS, sonar systems deployed on UUVs operate under harsher conditions, where observation performance is significantly affected by UUV movement, leading to increased uncertainty in observations [33]. Additionally, the diversity of underwater targets and their high maneuverability, coupled with their noncooperative behavior, complicate explicit modeling efforts [34]. These factors further complicate the description of the prior distributions, rendering Bayesian state estimation methods inadequate for achieving satisfactory performance [35,36]. To address these challenges, this paper presents Convolutional Kolmogorov–Arnold Networks, leveraging a convolutional neural network architecture combined with the Kolmogorov–Arnold representation to learn sonar observation characteristics and noncooperative target state transition patterns from extensive offline data. This methodology enables the estimation of noncooperative target states under uncertain sonar observations in the context of UUV operations. The code is available on GitHub: https://github.com/ChangjianL/CKAN (accessed on 10 August 2024).

2. UUV Noncooperative Target Tracking Model

This paper establishes a coordinate reference system, as shown in Figure 1, which includes the global pose relationship between the UUV and the target represented in the $NOE$ reference frame, as well as local coordinate systems fixed to the UUV, the sonar measurement center, and the target, denoted as $X_uO_uY_u$, $X_sO_sY_s$, and $X_tO_tY_t$, respectively. As depicted in Figure 1, the sonar observation at time $k$, denoted as $Z_k = [d_k^m, \theta_k^m]^T$, can be represented in the $NOE$ frame as follows:

$$X_k^m = f_{\mathrm{obs}}^{-1}(Z_k) = R(\psi_k^u)\begin{bmatrix} d_k^m \cos\theta_k^m + x_s^u \\ d_k^m \sin\theta_k^m + y_s^u \end{bmatrix} + X_k^u \tag{1}$$

where $X_k^m = [n_k^m, e_k^m]^T$, $X_k^u = [n_k^u, e_k^u]^T$, $\psi_k^u$ represents the heading of the UUV at time $k$, and $R(\psi_k^u) = \begin{bmatrix} \cos\psi_k^u & -\sin\psi_k^u \\ \sin\psi_k^u & \cos\psi_k^u \end{bmatrix}$ is the rotation matrix. The vector $[x_s^u, y_s^u]^T$ denotes the position of point $O_s$ in the $X_uO_uY_u$ frame.
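As an illustration (this is our sketch, not the authors' released implementation; the argument names are ours), the transformation in Equation (1) from a sonar return to the global frame can be written as:

```python
import numpy as np

def obs_to_global(d, theta, psi_u, X_u, o_s=(0.0, 0.0)):
    """Map a sonar return (range d, bearing theta) into the global NOE frame.

    psi_u: UUV heading; X_u: UUV position [n, e]; o_s: sonar-center offset
    [x_s^u, y_s^u] in the UUV body frame. Follows Equation (1).
    """
    R = np.array([[np.cos(psi_u), -np.sin(psi_u)],
                  [np.sin(psi_u),  np.cos(psi_u)]])   # rotation matrix R(psi_u)
    local = np.array([d * np.cos(theta) + o_s[0],     # target in the body frame
                      d * np.sin(theta) + o_s[1]])
    return R @ local + np.asarray(X_u, dtype=float)
```

For example, a 10 m return dead ahead of a UUV at the origin with zero heading maps to a target position 10 m north of the UUV.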
The system model of the underactuated UUV is described as

$$\begin{cases} \dot n^u = u^u \cos\psi^u \\ \dot e^u = u^u \sin\psi^u \\ \dot\psi^u = r^u \end{cases} \tag{2}$$

where $v^u = [u^u, v^u, r^u]^T$ denotes the UUV's velocity in $X_uO_uY_u$, and, according to the UUV's dynamics, $u^u \in [2\,\mathrm{kn}, 8\,\mathrm{kn}]$, $r^u \in [-10^\circ/\mathrm{s}, 10^\circ/\mathrm{s}]$, and $\dot u^u \in [-1\,\mathrm{kn/s}, 1\,\mathrm{kn/s}]$.
The 3-DoF system model of a fully actuated noncooperative target is expressed as follows:

$$\begin{cases} \dot n^t = u^t \cos\psi^t - v^t \sin\psi^t \\ \dot e^t = u^t \sin\psi^t + v^t \cos\psi^t \\ \dot\psi^t = r^t \end{cases} \tag{3}$$

where $v^t = [u^t, v^t, r^t]^T$ denotes the target's velocity in $X_tO_tY_t$. The motion capacity and detection range of a UUV equipped with forward-looking sonar are limited, so it is difficult to track noncooperative targets whose motion capacity far exceeds that of the UUV. Therefore, this paper focuses on the state estimation problem for noncooperative targets with $u^t \in [0, 20\,\mathrm{kn}]$, $v^t \in [0, 20\,\mathrm{kn}]$, $\dot u^t \in [-2\,\mathrm{kn/s}, 2\,\mathrm{kn/s}]$, $\dot v^t \in [-2\,\mathrm{kn/s}, 2\,\mathrm{kn/s}]$, and $r^t \in [-15^\circ/\mathrm{s}, 15^\circ/\mathrm{s}]$.
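For intuition, one Euler-integration step of the target kinematics in Equation (3) can be sketched as follows (the step size `dt` and the function name are our choices, not the paper's):

```python
import math

def step_target(state, vel, dt=0.05):
    """One Euler step of the target kinematics in Equation (3).

    state = (n, e, psi) in the global frame; vel = (u, v, r) in the
    target body frame X_t O_t Y_t.
    """
    n, e, psi = state
    u, v, r = vel
    n += (u * math.cos(psi) - v * math.sin(psi)) * dt  # north rate
    e += (u * math.sin(psi) + v * math.cos(psi)) * dt  # east rate
    psi += r * dt                                      # heading rate
    return (n, e, psi)
```

With zero heading, a pure surge velocity moves the target north and a pure sway velocity moves it east, matching the sign conventions of Equation (3).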

3. Convolutional Kolmogorov–Arnold Network-Based Target State Estimation

The goal of target state estimation is to compute the current position of the target $X_k^t$ based on the historical observations $Z_{1:k}$. Bayesian filters, such as the IMM- and AIMM-based UKF and PF, model target state estimation as a Markov process through the prediction step (Equation (4)) and update step (Equation (5)), calculating the marginal posterior distribution $p(X_k^t \mid Z_{1:k})$ and thereby estimating the target position $X_k^t$:

$$p(X_k^t \mid Z_{1:k-1}) = \int p(X_k^t \mid X_{k-1}^t)\, p(X_{k-1}^t \mid Z_{1:k-1})\, dX_{k-1}^t \tag{4}$$

$$p(X_k^t \mid Z_{1:k}) = \eta\, p(Z_k \mid X_k^t)\, p(X_k^t \mid Z_{1:k-1}) \tag{5}$$

where $\eta$ is the normalization constant of $p(Z_k \mid X_k^t)\, p(X_k^t \mid Z_{1:k-1})$, $p(X_k^t \mid X_{k-1}^t)$ denotes the target state transition model, and $p(Z_k \mid X_k^t)$ is the sensor observation model.
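A minimal bootstrap particle filter makes the predict/update cycle of Equations (4) and (5) concrete; the random-walk transition and isotropic Gaussian likelihood below are illustrative stand-ins for $p(X_k^t \mid X_{k-1}^t)$ and $p(Z_k \mid X_k^t)$, not models taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, motion_std=0.5, obs_std=1.0):
    """One predict/update cycle of Equations (4)-(5) as a bootstrap PF.

    particles: (N, 2) position hypotheses; z: (2,) position observation.
    """
    # predict: sample from the (stand-in) state transition model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: reweight by the observation likelihood, then normalize (eta)
    lik = np.exp(-0.5 * np.sum((particles - z) ** 2, axis=1) / obs_std ** 2)
    weights = weights * lik
    weights /= weights.sum()
    # resample to avoid weight degeneracy
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Each call performs one prediction (sampling the transition model), one update (reweighting by the likelihood and normalizing, which plays the role of $\eta$), and a resampling step.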
In the problem of UUV state estimation for noncooperative targets based on uncertain observations, establishing accurate distribution expressions for $p(X_k^t \mid X_{k-1}^t)$ and $p(Z_k \mid X_k^t)$ presents significant challenges. The state transitions of the target and the characteristics of sonar observations are often hidden within vast amounts of offline data. Hence, this paper moves away from the assumption that the target's motion follows a first-order Markov chain and instead employs a fixed time window $\kappa$ to characterize the target's state transition. To extract regular patterns relating the target's state transitions and sonar measurements from the offline dataset, we construct a Convolutional Kolmogorov–Arnold Network (CKAN) architecture. This network learns the non-Markov process $p(X_k^t \mid Z_{k-\kappa+1:k})$, which enables calculation of the target's current state from the sonar observations within the time window $\kappa$, and fits the nonlinear function $X_k^t = f_{\mathrm{pred}}(X_{k-\kappa+1:k}^m)$, where $X_k^m = f_{\mathrm{obs}}^{-1}(Z_k)$.
Both the input and output of the nonlinear function $X_k^t = f_{\mathrm{pred}}(X_{k-\kappa+1:k}^m)$ are bounded, with inputs comprising the sonar observations represented in the global coordinate system at times $[k-\kappa+1, k]$. Since the sampling frequency of the sonar, $f_s$, is generally higher than the target state estimation frequency $f_e$, and since both the UUV and the underwater target move relatively slowly, the motion state of the target between adjacent sonar sampling instants is similar, yielding dense observational information. In this paper, we construct the input as a 2D array of size $\kappa f_e \times f_s/f_e$. A target state estimation approach based on 3D Convolutional Kolmogorov–Arnold Networks (3DCKANs) is proposed, as depicted in Figure 2. The architecture of the 3DCKAN includes KANConv, average pooling, unstacking, and KAN layers.
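The sliding-window input layout can be sketched with a hypothetical helper (the function name is ours); with the paper's settings $\kappa = 10$, $f_s = 20$ Hz, and $f_e = 1$ Hz it produces a $10 \times 20$ grid per coordinate channel:

```python
import numpy as np

def build_input(obs, kappa=10, f_s=20, f_e=1):
    """Arrange the last kappa seconds of sonar fixes into the
    (kappa * f_e) x (f_s // f_e) grid the network consumes, one channel
    per coordinate (north, east).

    obs: array of shape (kappa * f_s, 2) holding X^m samples in time order.
    """
    rows, cols = kappa * f_e, f_s // f_e
    # channel-first layout: (2 channels, rows, cols)
    return obs.reshape(rows, cols, 2).transpose(2, 0, 1)
```

Each row then holds the dense sonar fixes gathered within one estimation period, so temporal structure runs down the rows and intra-period structure across the columns.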
The KANConv layer operates within a 3D convolutional neural network framework, replacing the traditional convolution operation with the Kolmogorov–Arnold representation. Typical convolutional layers extract features from structured data through a combination of convolution transformations and fixed nonlinear activation functions. In contrast, the KANConv layer utilizes spline functions to manipulate structured data, thereby constructing learnable nonlinear activation functions that enhance nonlinear expressive capability. While reference [37] demonstrates that the KANConv layer processes 3D data by extracting information from each channel independently, the UUV noncooperative target state estimation problem reveals strong coupling between the northward and eastward position transitions of the target. Consequently, this paper adapts the multichannel information extraction technique of reference [37] to foster the mutual integration of multichannel information. The combination of average pooling layers and KANConv enhances the fault tolerance of the target state estimation network to uncertain observations in both time and space. Additionally, the output layer uses a Kolmogorov–Arnold Network instead of a multilayer perceptron to enhance the network's nonlinear representation capability, reduce the number of parameters, and improve the speed of target state estimation.
The target state estimation process for the UUV noncooperative target based on the historical observations $X_{k-\kappa+1:k}^m$ using the 3DCKAN can be described as follows:

$$X_{\mathrm{in}} = \mathrm{pad}\big(\mathrm{reshape}(X_{k-\kappa+1:k}^m)\big) \tag{6}$$

$$X_k^{c1} = \mathrm{KANConv}^{(1)}(X_{\mathrm{in}}) = \Phi^{(1)} \otimes X_{\mathrm{in}} \tag{7}$$

where $\otimes$ denotes the KAN convolution operation, and the kernel $\Phi^{(1)} = \{\Phi_c^{(1)}\}_{c=1}^{C}$ is a sliding window composed of a three-dimensional function matrix, with

$$\Phi_c^{(1)} = \begin{bmatrix} \phi_{c,1,1} & \phi_{c,1,2} & \cdots & \phi_{c,1,L} \\ \phi_{c,2,1} & \phi_{c,2,2} & \cdots & \phi_{c,2,L} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{c,W,1} & \phi_{c,W,2} & \cdots & \phi_{c,W,L} \end{bmatrix} \tag{8}$$
For the window $A_{i,j}$ in $X_{\mathrm{in}}$, we have

$$\mathrm{KANConv}_{i,j}(A_{i,j}) = \sum_{c=1}^{C}\sum_{l=1}^{L}\sum_{w=1}^{W} \phi_{c,l,w}\big(a_{c,\,i+l-1,\,j+w-1}\big) \tag{9}$$

where the $\phi$ are learnable activations composed of basis functions (Tanh and B-splines $B_i$) and trainable parameters ($w$ and $c_i$):

$$\phi(a) = w\,\mathrm{Tanh}(a) + \sum_i c_i B_i(a) \tag{10}$$
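Equation (10) can be evaluated directly; the sketch below implements the B-spline basis via the Cox–de Boor recursion (the knot vector and spline degree are our choices, and `w`, `c` are the trainable parameters):

```python
import numpy as np

def bsplines(a, knots, k=3):
    """All degree-k B-spline basis functions B_i evaluated at points a,
    via the Cox-de Boor recursion (vectorized over a).

    Returns an array of shape (len(knots) - 1 - k, len(a)).
    """
    a = np.atleast_1d(a)
    # degree-0 basis: indicator of each knot interval
    B = ((knots[:-1, None] <= a) & (a < knots[1:, None])).astype(float)
    for d in range(1, k + 1):
        left = (a - knots[:-d - 1, None]) / (knots[d:-1] - knots[:-d - 1])[:, None]
        right = (knots[d + 1:, None] - a) / (knots[d + 1:] - knots[1:-d])[:, None]
        B = left * B[:-1] + right * B[1:]
    return B

def phi(a, w, c, knots, k=3):
    """Learnable activation of Equation (10): w*Tanh(a) + sum_i c_i B_i(a)."""
    return w * np.tanh(a) + c @ bsplines(a, knots, k)
```

On a uniform knot vector, the cubic basis functions sum to one over the interior of the grid, which keeps the spline term well scaled relative to the Tanh term.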
$$X^{p1} = \mathrm{AvgPool2d}(X_k^{c1}) \tag{11}$$

$$X_k^{c2} = \mathrm{KANConv}^{(2)}(X^{p1}) \tag{12}$$

$$X^{p2} = \mathrm{AvgPool2d}(X_k^{c2}) \tag{13}$$

$$\hat X_k^t = \mathrm{KAN}\big(\mathrm{Flatten}(X^{p2})\big) \tag{14}$$
where $\hat X_k^t$ is the target state estimate at time $k$, $\mathrm{KAN}(A) = (\Phi_{\Lambda-1} \circ \cdots \circ \Phi_1)(A)$, and the transition from layer $\lambda$ to layer $\lambda+1$ can be expressed as

$$A_{\lambda+1} = \Phi_\lambda A_\lambda = \begin{bmatrix} \phi_{\lambda;1,1} & \phi_{\lambda;1,2} & \cdots & \phi_{\lambda;1,n_\lambda} \\ \phi_{\lambda;2,1} & \phi_{\lambda;2,2} & \cdots & \phi_{\lambda;2,n_\lambda} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{\lambda;n_{\lambda+1},1} & \phi_{\lambda;n_{\lambda+1},2} & \cdots & \phi_{\lambda;n_{\lambda+1},n_\lambda} \end{bmatrix} A_\lambda \tag{15}$$

where $n_\lambda$ denotes the number of nodes at layer $\lambda$, and $\phi$ is the learnable activation in Equation (10).
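The pooling steps in Equations (11) and (13) average over small spatial neighborhoods, which is what blurs isolated delayed or dropped sonar fixes; a minimal 2×2 average pooling over a channel-first array can be sketched as (the helper name and window size are ours):

```python
import numpy as np

def avg_pool2d(x, k=2):
    """k x k average pooling with stride k over the trailing two axes.

    x: array of shape (channels, height, width); trailing rows/columns
    that do not fill a full window are dropped.
    """
    c, h, w = x.shape
    return (x[:, :h - h % k, :w - w % k]
            .reshape(c, h // k, k, w // k, k)
            .mean(axis=(2, 4)))
```

A single corrupted fix inside a window only shifts that window's mean slightly, rather than propagating an outlier value downstream, which is the fault-tolerance effect described above.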
The 3DCNN and 3DCKAN are trained with Python 3.9.12 and the PyTorch 2.0.1 framework on a Windows 11 machine equipped with 16 GB of RAM, an Intel Core i7-12700F 2.10 GHz CPU, and an Nvidia RTX 3070 GPU. The implementation details of the target state estimation network are illustrated in Figure 2. The network acquires knowledge of sonar measurements and target state transitions from historical observations in offline datasets. The employed forward-looking sonar has a maximum detection range of 100 m and covers a 180° sector. The sampling frequency of the sonar is $f_s = 20$ Hz, the target state estimation frequency is $f_e = 1$ Hz, and the time window is $\kappa = 10$. The Adam optimizer with a learning rate of 0.001 is used to minimize the mean squared error loss during 3DCKAN training. The batch size and maximum number of iterations are set to 256 and 1900, respectively. These hyperparameters and the 3DCKAN structure parameters illustrated in Figure 2 are determined using the grid search method. After training, nodes and connections that contributed weakly to the output were removed, resulting in a lightweight target state prediction network.
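The optimizer mechanics can be illustrated with a self-contained sketch in which a toy linear model stands in for the 3DCKAN; the Adam update rule is standard, but the learning rate and iteration count used in the example are our choices:

```python
import numpy as np

def adam_mse_fit(X, y, lr=1e-3, iters=1900, beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize an MSE loss with the Adam update rule (here on a linear
    model y = X @ w, so the mechanics of the optimizer are visible)."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, iters + 1):
        g = 2 * X.T @ (X @ w - y) / len(y)        # gradient of the MSE loss
        m = beta1 * m + (1 - beta1) * g           # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g       # second-moment estimate
        m_hat = m / (1 - beta1 ** t)              # bias corrections
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)  # Adam step
    return w
```

The same bias-corrected moment estimates and per-parameter step scaling drive the 3DCKAN weights during training; only the model and loss inputs differ.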

4. Simulation Results and Analysis

To validate the effectiveness and superiority of the proposed method, this paper conducts statistical and case analyses comparing the 3DCKAN with a 3D convolutional neural network (3DCNN) and with classical approaches based on the Interacting Multiple Model (IMM) and the Improved Adaptive Interacting Multiple Model (AIMM) [20], implemented with the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The four multimodel algorithms each combine three model types: constant velocity (CV), constant acceleration (CA), and constant turn (CT).

4.1. Statistical Experiment and Analysis

We designed a series of statistical experiments to evaluate the target state estimation performance of 3DCKAN and five comparison algorithms under three different noise intensities. The experiments assume that the observation data $Z$ are subject to Gaussian noise with zero mean and standard deviations of (1 m, 0.5°), (1.5 m, 1°), and (2 m, 2°). Although the nonlinear transformation of Equation (1) from the observation $Z$ to the state variable $X^m$ is known, the noise distribution in $X^m$ remains unknown and difficult to model due to the complexity of the transformation.
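Why the noise on $X^m$ is hard to model can be seen by pushing Gaussian (range, bearing) noise through the polar-to-Cartesian part of Equation (1); in the sketch below, the nominal range and bearing are our choices, while the noise levels mirror the third experiment group (2 m, 2°):

```python
import numpy as np

def xm_noise_samples(d=80.0, theta_deg=45.0, sigma_d=2.0, sigma_th=2.0,
                     n=20000, seed=0):
    """Propagate zero-mean Gaussian (range, bearing) noise through the
    polar-to-Cartesian transform and return the centered X^m errors."""
    rng = np.random.default_rng(seed)
    dd = d + rng.normal(0.0, sigma_d, n)                     # noisy range
    th = np.deg2rad(theta_deg + rng.normal(0.0, sigma_th, n))  # noisy bearing
    pts = np.stack([dd * np.cos(th), dd * np.sin(th)], axis=1)
    return pts - pts.mean(axis=0)                            # centered errors
```

The centered position errors have a direction-dependent, correlated covariance (range noise acts radially, bearing noise tangentially and scaled by the range), so no single axis-aligned Gaussian describes the noise in $X^m$.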
The performance of the six methods in the statistical experiments is shown in Figure 3. In each set of experiments, the 3DCKAN method achieved a lower median and variance of estimation error than IMM-UKF, IMM-PF, AIMM-UKF, AIMM-PF, and 3DCNN, demonstrating the best position estimation accuracy and stability. Overall, as the observation noise increased, the estimation errors of all six methods grew and their stability decreased. The four multimodel methods performed similarly, with their position estimation RMSE spread over ranges of 2.98 m, 3.31 m, and 5.79 m across the three sets of experiments, indicating that their target state estimation performance was severely affected by noise. In the first group of experiments, with low noise intensity, 3DCNN obtained results similar to those of 3DCKAN. As the noise intensity increased, the variance of the 3DCNN position estimation RMSE grew gradually; in the third group, with high observation noise, the stability of 3DCNN was significantly worse than in the first two groups and worse than that of 3DCKAN. The 3DCKAN exhibited the smallest changes in median and variance, with position estimation RMSE values of 0.81 m, 0.84 m, and 1.21 m across the three sets of experiments, showing the strongest noise resistance. The statistical results show that 3DCNN and 3DCKAN perform significantly better than the four Bayesian methods, demonstrating the feasibility and effectiveness of using deep neural networks to extract the regularities of target state transitions and sonar observations from offline datasets and to construct non-Markov processes. They also show that 3DCKAN outperforms 3DCNN in noise resistance, confirming the superiority of the 3D Convolutional Kolmogorov–Arnold Networks constructed in this paper.
Additionally, the average time for a single estimation using 3DCKAN was 13.88 milliseconds in the three sets of experiments, allowing for rapid estimation of UUV noncooperative target positions based on sonar observations.

4.2. Simulation Cases and Analysis

We designed two sets of cases to evaluate the target state estimation performance of 3DCKAN and four comparison algorithms under uncertain observations and high-maneuver scenarios of noncooperative targets. The noise in the observations $Z$ follows a Gaussian distribution with mean $\mu = [0, 0]^T$ and covariance matrix $\Sigma = \mathrm{diag}(2\,\mathrm{m}, 2^\circ)$. The observation frequency is 40 Hz, the sampling frequency is 20 Hz, and the target state estimation frequency is 1 Hz. The motion states of the UUV and the noncooperative targets in the simulation cases are listed in Table 1.
(1) 
Simulation Case 1
Case 1 evaluates the target state estimation performance of 3DCKAN under uncertain observations during the period [1 s, 100 s]. The observation uncertainty is characterized by 10% delays and 10% dropouts. As shown in Table 1 and Figure 4, the UUV maintains a constant velocity during the interval [1 s, 40 s] and transitions to a constant turn (CT) at the 40 s mark. During this period, the switching sequence of the noncooperative target's motion state is CV-CA-CA-CT-CT-CV-CA. The target state estimation results of 3DCKAN based on uncertain observations, as well as those of the IMM and AIMM algorithms based on certain observations, are visualized in Figure 4 and Figure 5. The figures provide evidence that 3DCKAN outperforms the comparison algorithms under uncertain observations. From Figure 4, it can be observed that the target state estimation methods based on IMM and AIMM exhibit good estimation performance while the target maintains its motion state but fluctuate significantly when the target's motion state changes. In contrast, the target position estimated by 3DCKAN under uncertain observations is the closest to the true position, and the trajectory estimated by 3DCKAN is the smoothest, best reflecting the true motion path of the target. The position estimation error curves shown in Figure 5 indicate that the errors of the four IMM and AIMM methods vary with the changes in the UUV and target motion states, with maximum position estimation RMSE exceeding 4.4 m. Even under uncertain observations, however, the position estimation RMSE of 3DCKAN consistently remains around 1 m, demonstrating the stability of the 3DCKAN method under the influence of observation uncertainty.
(2) 
Simulation Case 2
The experimental results from case 1 indicate that complex and unpredictable relative motion between the target and the UUV significantly affects target state estimation performance. Therefore, this section uses case 2 to evaluate the target state estimation performance of 3DCKAN during the period [101 s, 200 s], in which the UUV and target frequently change their motion states. As shown in Table 1 and Figure 6, the UUV maintains a constant turn (CT) from [101 s, 111 s], undergoes constant acceleration from [111 s, 127 s], then maintains a constant speed after reaching its maximum cruising speed, and finally decelerates at a constant rate after 190 s. During this period, the switching sequence of the noncooperative target's motion state is CA-CV-CT-CA-CV-CA-CT-CA-CV. Compared with case 1, the motion states of the noncooperative target and the UUV change more frequently in case 2, resulting in more complex relative motion. The target state estimation results of the 3DCKAN, IMM, and AIMM methods are depicted in Figure 6 and Figure 7. In comparison with case 1, the estimated trajectories from the four multimodel (MM) methods are rougher, with the position estimation RMSE curves exhibiting more frequent and larger fluctuations and maximum RMSE exceeding 5.5 m. Among the five methods, the proposed 3DCKAN still achieves the best position and trajectory estimation results. Under the impact of the more complex relative motion of the target, the decline in the state estimation performance of 3DCKAN is minimal, with most of the RMSE remaining in the range of 0–1 m and a maximum RMSE of 1.4 m, thus achieving optimal stability.

5. Conclusions and Future Work

This article addresses the issues of observation uncertainty and the unpredictability of target motion in the state estimation of noncooperative targets by underwater unmanned vehicles (UUVs). It constructs a non-Markov model based on historical observations and prior knowledge, proposing a target state estimation method using 3D Convolutional Kolmogorov–Arnold Networks (3DCKANs). Based on the mechanism of UUV target state estimation, the 3DCKAN framework is designed to learn the characteristics of uncertain observations and the rules of unpredictable target state transitions from a large amount of offline data, fitting the mapping from historical observations to the current state of the target. The proposed method is evaluated using statistical experiments and simulation cases. Experimental results indicate that the proposed target state estimation method based on 3D Convolutional Kolmogorov–Arnold Networks can construct models for uncertain observations and unpredictable target motion through offline training. Compared with multimodel algorithms, the proposed 3DCKAN is less sensitive to observation delays and dropouts and is only slightly affected by changes in target motion states. Under conditions of uncertain observations and frequent changes in target motion states, 3DCKAN can still maintain high state estimation accuracy and stability.
This paper proposed a UUV noncooperative target state estimation method based on 3D Convolutional Kolmogorov–Arnold Networks. The method is supervised, and the difficulty of collecting large amounts of real-world data is a major challenge in applying it to UUVs. Transfer learning, which reduces the need for real data, is an effective way to address this problem. Therefore, our future research will focus on improving simulation models and on sim-to-real transfer methods.

Author Contributions

D.Y. and C.L. designed the study, performed the research, analyzed data, and wrote the paper; S.L. contributed to refining the ideas, carrying out additional analyses, and finalizing this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62303467 and the Natural Science Foundation of Jiangsu Province under Grant BK20231061.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Er, M.J.; Gong, H.; Liu, Y.; Liu, T. Intelligent trajectory tracking and formation control of underactuated autonomous underwater vehicles: A critical review. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 543–555. [Google Scholar] [CrossRef]
  2. Liu, F.; Ma, Z.; Mu, B.; Duan, C.; Chen, R.; Qin, Y.; Pu, H.; Luo, J. Review on fault-tolerant control of unmanned underwater vehicles. Ocean. Eng. 2023, 285, 115471. [Google Scholar] [CrossRef]
Figure 1. The coordinate reference system.
Figure 2. The overall structure of the Convolutional Kolmogorov–Arnold Network-based target state estimation approach.
Figure 3. The position estimation RMSEs in the statistical experiment.
Figure 4. The actual and estimated trajectories in simulation case 1.
Figure 5. The position estimation RMSEs in simulation case 1.
Figure 6. The actual and estimated trajectories in simulation case 2.
Figure 7. The position estimation RMSEs in simulation case 2.
Table 1. Motion State of UUV and Noncooperative Target.
| Time/s | Motion State | CV | CA | CT |
|---|---|---|---|---|
| | UUV | [1, 40], [127, 190] | [111, 127], [190, 200] | [40, 111] |
| | Noncooperative target | [1, 6], [79, 89], [103, 113], [150, 161], [196, 200] | [6, 28], [28, 41], [89, 103], [136, 150], [161, 171], [181, 196] | [41, 59], [59, 79], [113, 136], [171, 181] |

Lin, C.; Yu, D.; Lin, S. Three-Dimensional CKANs: UUV Noncooperative Target State Estimation Approach Based on 3D Convolutional Kolmogorov–Arnold Networks. J. Mar. Sci. Eng. 2024, 12, 2040. https://doi.org/10.3390/jmse12112040
