Contrastive Learning for Graph-Based Vessel Trajectory Similarity Computation
Abstract
1. Introduction
- A graph-based trajectory contrastive learning framework, CLAIS, is proposed. It constructs similar trajectory sample pairs to learn robust trajectory representation vectors and computes trajectory similarity as the Euclidean distance between these vectors, yielding favorable similarity results.
- A parameterized trajectory augmentation method is introduced to improve the robustness of the model's trajectory representation learning.
- An improved evaluation protocol and three evaluation metrics are proposed to verify the performance of the proposed framework in learning trajectory representations and computing vessel trajectory similarities.
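The first contribution can be illustrated with a minimal sketch: once trajectories are encoded as fixed-length representation vectors, similarity search reduces to nearest-neighbor ranking under Euclidean distance. The 128-dimensional vectors and random data below are illustrative placeholders, not outputs of the paper's actual model.

```python
import numpy as np

def euclidean_similarity(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Distance between two trajectory representation vectors:
    a smaller Euclidean distance means a more similar pair."""
    return float(np.linalg.norm(z_a - z_b))

def rank_database(query_vec: np.ndarray, db_vecs: np.ndarray) -> np.ndarray:
    """Return database indices ordered from most to least similar."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)

# Hypothetical 128-dimensional representations (matching the paper's output size).
rng = np.random.default_rng(0)
db = rng.normal(size=(10, 128))
query = db[3] + 0.01 * rng.normal(size=128)  # near-duplicate of entry 3
print(rank_database(query, db)[0])  # the near-duplicate ranks first
```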
2. Related Work
3. Methodology
3.1. Regional Graph Pretraining Module
3.2. Vessel Trajectory Contrastive Learning Module
3.3. Vessel Trajectory Representation Learning Module
4. Experiment
4.1. Data and Preprocessing
4.2. Experiment Metrics
1. Randomly select trajectories from the database collection and divide each into two sub-trajectories based on the order of their internal positional points.
2. Apply the selected augmentation operations to the selected trajectories to create variant trajectories, and split each variant into two sub-trajectories in the same way.
3. Add one sub-trajectory of each pair to an initially empty query set, place the other back into the database collection, and add the corresponding variant sub-trajectories to a second query set.
4. Subsequently, randomly downsample half of the trajectories in the database that were not selected and place them back into the database; likewise, downsample the augmented trajectories and add them to the database.
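The four steps above can be sketched as follows. This is one plausible reading of the protocol (the original symbols were lost in extraction): each query half is paired with its "twin" half hidden in the database, and random point-dropping stands in for the paper's full set of augmentation operations. All function names are illustrative.

```python
import random

def split_trajectory(traj):
    """Split a trajectory into two sub-trajectories by point order."""
    mid = len(traj) // 2
    return traj[:mid], traj[mid:]

def downsample(traj, keep_ratio=0.5, seed=None):
    """Randomly drop points; a simple stand-in for the paper's augmentations."""
    rng = random.Random(seed)
    kept = [p for p in traj if rng.random() < keep_ratio]
    return kept or traj[:1]  # never return an empty trajectory

def build_eval_sets(database, n_queries, seed=42):
    """Build (queries, db): each query is one augmented half of a selected
    trajectory; its twin half goes back into the database."""
    rng = random.Random(seed)
    selected = set(rng.sample(range(len(database)), n_queries))
    queries, db = [], []
    for i, traj in enumerate(database):
        if i in selected:
            t_a, t_b = split_trajectory(traj)
            queries.append(downsample(t_a, seed=i))   # augmented query half
            db.append(downsample(t_b, seed=i + 1))    # augmented twin half
        else:
            db.append(traj)
    return queries, db
```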
1. Augmentation invariance precision: the proportion of queries for which the corresponding twin sub-trajectory is ranked highest in both result lists. Denoting the rank of each query's twin sub-trajectory in the two lists, the precision is the fraction of queries whose twin is ranked first in both.
2. Augmentation invariance mean rank: the average rank of the corresponding twin sub-trajectory across both result lists, i.e., the mean over all queries of the per-query average of the two ranks.
3. Augmentation invariance rank standard deviation (rank std): the sample standard deviation of the per-query average ranks, measuring the variability of the twin sub-trajectories' average ranks across the two result lists.
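The three metrics can be computed directly from the twin sub-trajectory's rank in the two result lists. The sketch below assumes 1-based ranks; the input lists are illustrative.

```python
import statistics

def augmentation_invariance_metrics(ranks_q, ranks_q_prime):
    """ranks_q[i] / ranks_q_prime[i]: 1-based rank of query i's twin
    sub-trajectory in the two ranked result lists."""
    n = len(ranks_q)
    # Precision: fraction of queries whose twin is ranked first in both lists.
    aip = sum(r1 == 1 and r2 == 1 for r1, r2 in zip(ranks_q, ranks_q_prime)) / n
    # Mean rank: mean of the per-query average rank across both lists.
    avg_ranks = [(r1 + r2) / 2 for r1, r2 in zip(ranks_q, ranks_q_prime)]
    aimr = sum(avg_ranks) / n
    # Rank std: sample standard deviation of those per-query average ranks.
    rank_std = statistics.stdev(avg_ranks)
    return aip, aimr, rank_std

# Three queries; only the first query's twin is ranked first in both lists.
aip, aimr, rank_std = augmentation_invariance_metrics([1, 1, 3], [1, 2, 5])
```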
4.3. Comparative Baselines and Parameter Setting
4.4. Model Comparison Experiment
4.5. Robustness Experiment
4.6. Grid Size Experiment
4.7. Visualization
4.8. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Shelmerdine, R.L. Teasing out the Detail: How Our Understanding of Marine AIS Data Can Better Inform Industries, Developments, and Planning. Mar. Policy 2015, 54, 17–25. [Google Scholar] [CrossRef]
- Tao, Y.; Both, A.; Silveira, R.I.; Buchin, K.; Sijben, S.; Purves, R.S.; Laube, P.; Peng, D.; Toohey, K.; Duckham, M. A Comparative Analysis of Trajectory Similarity Measures. GISci. Remote Sens. 2021, 58, 643–669. [Google Scholar] [CrossRef]
- Zhao, L.; Shi, G. A Novel Similarity Measure for Clustering Vessel Trajectories Based on Dynamic Time Warping. J. Navig. 2019, 72, 290–306. [Google Scholar] [CrossRef]
- Zhao, L.; Shi, G. Maritime Anomaly Detection Using Density-Based Clustering and Recurrent Neural Network. J. Navig. 2019, 72, 894–916. [Google Scholar] [CrossRef]
- Sang, L.; Wall, A.; Mao, Z.; Yan, X.; Wang, J. A Novel Method for Restoring the Trajectory of the Inland Waterway Ship by Using AIS Data. Ocean Eng. 2015, 110, 183–194. [Google Scholar] [CrossRef]
- Zhao, L.; Shi, G.; Yang, J. Ship Trajectories Pre-Processing Based on AIS Data. J. Navig. 2018, 71, 1210–1230. [Google Scholar] [CrossRef]
- Yan, R.; Mo, H.; Yang, D.; Wang, S. Development of Denoising and Compression Algorithms for AIS-Based Vessel Trajectories. Ocean Eng. 2022, 252, 111207. [Google Scholar] [CrossRef]
- Lee, W.; Cho, S.-W. AIS Trajectories Simplification Algorithm Considering Topographic Information. Sensors 2022, 22, 7036. [Google Scholar] [CrossRef]
- Yang, P.; Wang, H.; Zhang, Y.; Qin, L.; Zhang, W.; Lin, X. T3S: Effective Representation Learning for Trajectory Similarity Computation. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering (ICDE), Chania, Greece, 19–22 April 2021; pp. 2183–2188. [Google Scholar]
- Yang, P.; Wang, H.; Lian, D.; Zhang, Y.; Qin, L.; Zhang, W. TMN: Trajectory Matching Networks for Predicting Similarity. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 1700–1713. [Google Scholar]
- Zhang, H.; Zhang, X.; Jiang, Q.; Zheng, B.; Sun, Z.; Sun, W.; Wang, C. Trajectory Similarity Learning with Auxiliary Supervision and Optimal Matching. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 7–15 January 2021; pp. 3209–3215. [Google Scholar]
- Yao, D.; Hu, H.; Du, L.; Cong, G.; Han, S.; Bi, J. TrajGAT: A Graph-Based Long-Term Dependency Modeling Approach for Trajectory Similarity Computation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; ACM: New York, NY, USA, 2022; pp. 2275–2285. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Association for Computing Machinery: New York, NY, USA, 2017; Volume 30. [Google Scholar]
- Yao, D.; Zhang, C.; Zhu, Z.; Hu, Q.; Wang, Z.; Huang, J.; Bi, J. Learning Deep Representation for Trajectory Clustering. Expert Syst. 2018, 35, e12252. [Google Scholar] [CrossRef]
- Li, S.; Liang, M.; Liu, R.W. Vessel Trajectory Similarity Measure Based on Deep Convolutional Autoencoder. In Proceedings of the 2020 5th IEEE International Conference on Big Data Analytics (ICBDA), Xiamen, China, 8–11 May 2020; pp. 333–338. [Google Scholar]
- Fu, T.-Y.; Lee, W.-C. Trembr: Exploring Road Networks for Trajectory Representation Learning. ACM Trans. Intell. Syst. Technol. 2020, 11, 1–25. [Google Scholar] [CrossRef]
- Balestriero, R.; Ibrahim, M.; Sobal, V.; Morcos, A.; Shekhar, S.; Goldstein, T.; Bordes, F.; Bardes, A.; Mialon, G.; Tian, Y.; et al. A Cookbook of Self-Supervised Learning. arXiv 2023, arXiv:2304.12210. [Google Scholar]
- Gui, J.; Chen, T.; Zhang, J.; Cao, Q.; Sun, Z.; Luo, H.; Tao, D. A Survey of Self-Supervised Learning from Multiple Perspectives: Algorithms, Applications and Future Trends. arXiv 2023, arXiv:2301.05712. [Google Scholar]
- Chen, X.; Xu, J.; Zhou, R.; Chen, W.; Fang, J.; Liu, C. TrajVAE: A Variational AutoEncoder Model for Trajectory Generation. Neurocomputing 2021, 428, 332–339. [Google Scholar] [CrossRef]
- Miguel, M.Á.D.; Armingol, J.M.; García, F. Vehicles Trajectory Prediction Using Recurrent VAE Network. IEEE Access 2022, 10, 32742–32749. [Google Scholar] [CrossRef]
- Wu, Z.; Xiong, Y.; Yu, S.X.; Lin, D. Unsupervised Feature Learning via Non-Parametric Instance Discrimination. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–23 June 2018; pp. 3733–3742. [Google Scholar]
- Chen, T.; Kornblith, S.; Swersky, K.; Norouzi, M.; Hinton, G. Big Self-Supervised Models Are Strong Semi-Supervised Learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020; Curran Associates Inc.: Red Hook, NY, USA, 2020; pp. 22243–22255. [Google Scholar]
- Liu, X.; Tan, X.; Guo, Y.; Chen, Y.; Zhang, Z. CSTRM: Contrastive Self-Supervised Trajectory Representation Model for Trajectory Similarity Computation. Comput. Commun. 2022, 185, 159–167. [Google Scholar] [CrossRef]
- Jing, Q.; Yao, D.; Gong, C.; Fan, X.; Wang, B.; Tan, H.; Bi, J. TrajCross: Trajecotry Cross-Modal Retrieval with Contrastive Learning. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 344–349. [Google Scholar]
- Grover, A.; Leskovec, J. Node2vec: Scalable Feature Learning for Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 855–864. [Google Scholar]
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the 37th International Conference on Machine Learning, Online, 13–18 July 2020; Volume 119, pp. 1597–1607. [Google Scholar]
- Li, X.; Zhao, K.; Cong, G.; Jensen, C.S.; Wei, W. Deep Representation Learning for Trajectory Similarity Computation. In Proceedings of the 2018 IEEE 34th International Conference on Data Engineering (ICDE), Paris, France, 16–19 April 2018; pp. 617–628. [Google Scholar]
- Deng, L.; Zhao, Y.; Fu, Z.; Sun, H.; Liu, S.; Zheng, K. Efficient Trajectory Similarity Computation with Contrastive Learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; ACM: New York, NY, USA, 2022; pp. 365–374. [Google Scholar]
- Sohn, K. Improved Deep Metric Learning with Multi-Class N-Pair Loss Objective. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Curran Associates Inc.: Red Hook, NY, USA, 2016; pp. 1857–1865. [Google Scholar]
- Van den Oord, A.; Li, Y.; Vinyals, O. Representation Learning with Contrastive Predictive Coding. arXiv 2018, arXiv:1807.03748. [Google Scholar] [CrossRef]
- Chen, T.; Sun, Y.; Shi, Y.; Hong, L. On Sampling Strategies for Neural Network-Based Collaborative Filtering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 767–776. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Santurkar, S.; Tsipras, D.; Ilyas, A.; Mądry, A. How Does Batch Normalization Help Optimization? In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; Curran Associates Inc.: Red Hook, NY, USA, 2018; pp. 2488–2498. [Google Scholar]
- Hahnloser, R.H.R.; Sarpeshkar, R.; Mahowald, M.A.; Douglas, R.J.; Seung, H.S. Digital Selection and Analogue Amplification Coexist in a Cortex-Inspired Silicon Circuit. Nature 2000, 405, 947–951. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2017, arXiv:1609.02907. [Google Scholar]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. arXiv 2018, arXiv:1710.10903. [Google Scholar]
- Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations Using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; Association for Computational Linguistics: Doha, Qatar, 2014; pp. 1724–1734. [Google Scholar]
- Ranu, S.; Deepak, P.; Telang, A.D.; Deshpande, P.; Raghavan, S. Indexing and Matching Trajectories under Inconsistent Sampling Rates. In Proceedings of the 2015 IEEE 31st International Conference on Data Engineering, Seoul, Republic of Korea, 13–17 April 2015; pp. 999–1010. [Google Scholar]
- Su, H.; Zheng, K.; Wang, H.; Huang, J.; Zhou, X. Calibrating Trajectory Data for Similarity-Based Analysis. In Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 22–27 June 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 833–844. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
| Statistic Information | Value |
|---|---|
| Longitude range | [121.167° E, 122.000° E] |
| Latitude range | [31.215° N, 31.632° N] |
| Number of recorded positions | 37,406,189 |
| Parameter | Value |
|---|---|
| Maximum signal time interval | 10 min |
| Minimum velocity | 0 knots |
| Maximum velocity | 50 knots |
| Minimum trajectory length | 100 |
| Maximum trajectory length | 4000 |
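The preprocessing thresholds above can be applied in a single cleaning pass: drop implausible speeds, split a vessel's point stream at signal gaps longer than 10 min, and keep only trajectories of 100–4000 points. This sketch assumes time-sorted `(timestamp, lat, lon, speed_knots)` tuples and drops over-long segments rather than splitting them; the function name and record layout are illustrative.

```python
from datetime import datetime, timedelta

MAX_GAP = timedelta(minutes=10)   # maximum signal time interval
MIN_SPEED, MAX_SPEED = 0.0, 50.0  # knots
MIN_LEN, MAX_LEN = 100, 4000      # points per trajectory

def clean_and_segment(points):
    """points: time-sorted list of (timestamp, lat, lon, speed_knots).
    Returns trajectories satisfying all thresholds in the table above."""
    valid = [p for p in points if MIN_SPEED <= p[3] <= MAX_SPEED]
    segments, current = [], []
    for p in valid:
        if current and p[0] - current[-1][0] > MAX_GAP:
            segments.append(current)  # gap too large: start a new trajectory
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return [s for s in segments if MIN_LEN <= len(s) <= MAX_LEN]
```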
| Model Parameter | Value | Augment Parameter | Value |
|---|---|---|---|
| Train set size | 1000 | position ratio | 0.2 |
| Batch size | 1024 | position distort | 2 |
| Pretrain epoch | 10 | position loss | 0.2 |
| Train epoch | 200 | position interval | 2 |
| Hidden size | 128 | segment ratio | 0.2 |
| Output size | 128 | segment distort | 2 |
| Grid size | 0.01° | segment loss | 0.2 |
| | | segment num | 2 |
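The grid-size parameter discretizes the study area into cells, which serve as nodes of the regional graph. A minimal sketch of that mapping, using the longitude/latitude bounds from the data table and a row-major cell index (the indexing scheme and function names are assumptions, not the paper's exact construction):

```python
import math

# Study-area bounds from the data table; 0.01° grid from the parameter table.
LON_MIN, LON_MAX = 121.167, 122.000
LAT_MIN, LAT_MAX = 31.215, 31.632
GRID = 0.01

N_COLS = math.ceil((LON_MAX - LON_MIN) / GRID)

def cell_id(lon, lat):
    """Map a position to a row-major grid-cell index."""
    col = min(int((lon - LON_MIN) / GRID), N_COLS - 1)
    row = int((lat - LAT_MIN) / GRID)
    return row * N_COLS + col

def trajectory_to_cells(traj):
    """Convert a (lon, lat) trajectory to its grid-cell sequence,
    collapsing consecutive duplicates."""
    cells = [cell_id(lon, lat) for lon, lat in traj]
    return [c for i, c in enumerate(cells) if i == 0 or c != cells[i - 1]]
```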
| Metric | Method | 2 k | 4 k | 6 k | 8 k | 10 k |
|---|---|---|---|---|---|---|
| Precision | Fréchet | 0.16 | 0.14 | 0.12 | 0.10 | 0.10 |
| Precision | Hausdorff | 0.12 | 0.08 | 0.07 | 0.06 | 0.06 |
| Precision | DTW | 0.44 | 0.41 | 0.40 | 0.39 | 0.39 |
| Precision | LSTM Encoder | 0.79 | 0.67 | 0.66 | 0.66 | 0.64 |
| Precision | CLAIS | 0.78 | 0.72 | 0.68 | 0.62 | 0.62 |
| Mean rank | Fréchet | 35.43 | 70.32 | 105.85 | 141.67 | 176.04 |
| Mean rank | Hausdorff | 52.91 | 104.19 | 156.00 | 208.20 | 258.84 |
| Mean rank | DTW | 11.21 | 21.76 | 33.54 | 44.57 | 55.46 |
| Mean rank | LSTM Encoder | 0.81 | 1.71 | 2.39 | 3.11 | 3.79 |
| Mean rank | CLAIS | 0.40 | 0.81 | 1.20 | 1.60 | 1.89 |
| Rank std | Fréchet | 51.73 | 103.01 | 154.11 | 204.48 | 251.07 |
| Rank std | Hausdorff | 77.06 | 152.45 | 227.31 | 302.33 | 371.82 |
| Rank std | DTW | 18.72 | 35.33 | 54.13 | 71.9 | 89.71 |
| Rank std | LSTM Encoder | 3.41 | 6.758 | 9.38 | 12.17 | 14.45 |
| Rank std | CLAIS | 1.85 | 3.91 | 5.55 | 7.421 | 8.695 |
| Grid Size | Precision | Mean Rank | Rank Std |
|---|---|---|---|
| 0.0010° | 0.7286 | 2.2438 | 34.5396 |
| 0.0015° | 0.6976 | 1.4097 | 7.6423 |
| 0.0020° | 0.6736 | 2.2986 | 19.8756 |
| 0.0025° | 0.6258 | 3.1892 | 16.2722 |
| 0.0030° | 0.6060 | 3.7766 | 30.2616 |
| 0.0040° | 0.5246 | 5.2674 | 25.2150 |
| 0.0050° | 0.4070 | 8.4100 | 22.8003 |
Share and Cite
Luo, S.; Zeng, W.; Sun, B. Contrastive Learning for Graph-Based Vessel Trajectory Similarity Computation. J. Mar. Sci. Eng. 2023, 11, 1840. https://doi.org/10.3390/jmse11091840