Symmetry Applied in Computer Vision, Automation, and Robotics

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 5818

Special Issue Editors


Guest Editor
College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
Interests: image processing; point cloud processing; artificial intelligence; intelligent visual surveillance and plant phenotyping

Guest Editor
College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
Interests: computer vision; deep learning; brain science; natural language processing; evolutionary computation

Guest Editor
College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
Interests: model predictive control; complex systems

Special Issue Information

Dear Colleagues,

Computer vision has become an indispensable part of many fields, such as manufacturing, health care, and transportation, and it has profoundly influenced our daily lives (e.g., through algorithms in smartphones and cameras, search engines, and social networks). Automation and robotic systems, such as self-driving vehicles, autonomous robots, and unmanned workshop plants, have also had a significant impact on society and human life. It is therefore necessary to continue advancing research on computer vision, automation, and robotics.

Symmetry plays a crucial role in many aspects of computer vision, automation, and robotics. This Special Issue, entitled “Symmetry Applied in Computer Vision, Automation, and Robotics”, mainly covers the theory, phenomena, and research regarding symmetry in applied computer vision, automation, and robotics. It will also attempt to cover the whole field of symmetry (and asymmetry) in its widest sense. We cordially invite researchers to contribute original, high-quality research papers that will inspire advances in computer vision, image processing, 3D sensing, automation, control systems and control engineering, robotics, robotic control, optimization, and their symmetry-related applications. We encourage submissions covering a wide range of topics, including, but not limited to, the following:

  • Learning-based modeling of automation systems with symmetry;
  • Advanced controller design for automation systems with symmetry;
  • Model predictive control based on symmetry;
  • Motion planning and control strategies with symmetry;
  • Symmetry in robot design and morphology;
  • Human–robot interaction with symmetry;
  • Multi-robot systems and swarm robotics with symmetry;
  • Learning and adaptation in robotics with symmetry;
  • Robot manipulation and grasping with symmetry;
  • Robot mapping and localization with symmetry;
  • Symmetry in computer vision;
  • Image processing with symmetry;
  • Point cloud processing with symmetry;
  • Pattern analysis and machine learning in computer vision with symmetry applications;
  • Deep learning on images or other regular data forms with symmetry.

Dr. Dawei Li
Dr. Xuesong Tang
Dr. Xin Cai
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • symmetry phenomenon and theory
  • symmetry-based applications
  • dynamical systems
  • optimization
  • optimal control
  • model predictive control
  • learning-based control
  • learning-based modeling
  • robot design and morphology
  • multi-robot systems
  • robot manipulation and grasping
  • motion planning
  • robot mapping and localization
  • computer vision
  • image processing
  • pattern analysis and machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

19 pages, 9691 KiB  
Article
UAV Tracking via Saliency-Aware and Spatial–Temporal Regularization Correlation Filter Learning
by Liqiang Liu, Tiantian Feng, Yanfang Fu, Lingling Yang, Dongmei Cai and Zijian Cao
Symmetry 2024, 16(8), 1076; https://doi.org/10.3390/sym16081076 - 20 Aug 2024
Viewed by 839
Abstract
Discriminative correlation filter (DCF) tracking methods for unmanned aerial vehicles (UAVs) have attracted much attention due to their balance of strong performance and high efficiency. These correlations can be computed efficiently via the discrete Fourier transform (DFT), and the DFT of a real image exhibits conjugate symmetry in the Fourier domain. However, DCF tracking methods are prone to unwanted boundary effects when the tracked object undergoes challenging situations such as deformation, fast motion, and occlusion. To tackle this issue, this work proposes a novel saliency-aware and spatial–temporal regularized correlation filter (SSTCF) model for visual object tracking. First, the introduced spatial–temporal regularization helps build a more robust correlation filter (CF) and improves the temporal continuity and consistency of the model, effectively reducing boundary effects and enhancing tracking performance. In addition, the objective function can be decomposed into three closed-form subproblems, which are solved efficiently with the alternating direction method of multipliers (ADMM). Furthermore, a saliency detection method provides a saliency-aware weight that enables the tracker to adapt to appearance variations and mitigate disturbances from the surrounding environment. Finally, we conducted extensive experiments on three benchmarks, and the results show that the proposed model achieves better performance and higher efficiency than state-of-the-art trackers. For example, the distance precision (DP) score was 0.883 and the area under the curve (AUC) score was 0.676 on the OTB2015 dataset.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
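The Fourier-domain symmetry mentioned in the abstract can be illustrated with a short NumPy sketch (synthetic arrays, not the SSTCF model itself): the DFT of a real-valued image is conjugate-symmetric, and circular cross-correlation, the core DCF operation, reduces to element-wise multiplication in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))       # stand-in for a grayscale search patch
template = rng.random((8, 8))    # stand-in for a learned filter

# Conjugate symmetry of a real image's DFT: F[-u, -v] == conj(F[u, v]).
F = np.fft.fft2(image)
idx = (-np.arange(8)) % 8
flipped = F[idx][:, idx]
assert np.allclose(flipped, np.conj(F))

# Circular cross-correlation via the convolution theorem:
# response = IFFT( conj(FFT(template)) * FFT(image) ), real for real inputs.
response = np.fft.ifft2(np.conj(np.fft.fft2(template)) * F).real

# Brute-force check of one response value against the definition
# corr[s] = sum_m template[m] * image[m + s] (circular indexing).
shift = (2, 3)
direct = np.sum(np.roll(image, (-shift[0], -shift[1]), axis=(0, 1)) * template)
assert np.allclose(response[shift], direct)
```

This symmetry is what lets DCF trackers evaluate all cyclic shifts of a search patch at the cost of a few FFTs.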

20 pages, 18126 KiB  
Article
Spherical Superpixel Segmentation with Context Identity and Contour Intensity
by Nannan Liao, Baolong Guo, Fangliang He, Wenxing Li, Cheng Li and Hui Liu
Symmetry 2024, 16(7), 925; https://doi.org/10.3390/sym16070925 - 19 Jul 2024
Cited by 1 | Viewed by 647
Abstract
Superpixel segmentation is a popular preprocessing tool in image processing. Nevertheless, conventional planar superpixel generation algorithms are ill-suited to segmenting symmetrical spherical images due to their distinctive geometric differences. In this paper, we present a novel superpixel algorithm termed context identity and contour intensity (CICI) that is specifically tailored to spherical scene segmentation. By defining a neighborhood range and regional context identity, we propose a symmetrical spherical seed-sampling method that optimizes both the quantity and distribution of seeds, achieving evenly distributed seeds across the panoramic surface. Additionally, we integrate a contour prior into the superpixel correlation measurements, which significantly enhances boundary adherence across different scales. By incorporating these two optimizations into a non-iterative clustering framework, CICI generates higher-quality superpixels. Extensive experiments on a public dataset confirm that our work outperforms the baselines and achieves results comparable to state-of-the-art superpixel algorithms on several quantitative metrics.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
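The paper's symmetrical spherical seed-sampling method is not detailed in the abstract; as a generic illustration of evenly distributing seeds over a spherical surface, a Fibonacci lattice (a standard construction, used here for illustration only) produces near-uniform samples on the unit sphere:

```python
import numpy as np

def fibonacci_sphere_seeds(n):
    """Near-uniform seed points on the unit sphere via a Fibonacci lattice."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n          # evenly spaced heights in (-1, 1)
    r = np.sqrt(1.0 - z * z)               # radius of each latitude circle
    theta = golden_angle * i               # spiral around the sphere
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

seeds = fibonacci_sphere_seeds(500)
# All seeds lie exactly on the unit sphere ...
assert np.allclose(np.linalg.norm(seeds, axis=1), 1.0)
# ... and split evenly between the two hemispheres.
assert (seeds[:, 2] > 0).sum() == 250
```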

26 pages, 34712 KiB  
Article
Research on LFD System of Humanoid Dual-Arm Robot
by Ze Cui, Lang Kou, Zenghao Chen, Peng Bao, Donghai Qian, Lang Xie and Yue Tang
Symmetry 2024, 16(4), 396; https://doi.org/10.3390/sym16040396 - 28 Mar 2024
Viewed by 1228
Abstract
Although robots have been widely used in a variety of fields, enabling them to perform multiple tasks in the same way that humans do remains difficult. To address this, we investigate a learning from demonstration (LFD) system with our independently designed symmetrical humanoid dual-arm robot. We present a novel action feature matching algorithm that accurately transforms human demonstration data into task models that robots can directly execute, considerably improving LFD's generalization capabilities. In our studies, we used motion capture cameras to record human demonstrations, which included combinations of simple actions (the action layer) and successions of complicated operational tasks (the task layer). For the action layer data, we employed Gaussian mixture models (GMMs) to construct an action primitive library. For the task layer data, we created a "keyframe" segmentation method to transform these data into series of action primitives and build another action primitive library. Guided by our algorithm, the robot successfully imitated complex human tasks. The results show excellent task learning and execution, providing an effective solution for robots to learn from human demonstrations and significantly advancing robot technology.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
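The "keyframe" segmentation idea can be sketched generically: one common proxy (assumed here; the paper's exact method is not given in the abstract) splits a demonstrated trajectory wherever the motion speed drops below a threshold, yielding candidate action-primitive segments.

```python
import numpy as np

def segment_keyframes(traj, speed_thresh):
    """Split a demonstrated trajectory (T x D array) into segments at frames
    where the motion speed falls below a threshold -- a simple stand-in for
    keyframe-based task segmentation."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)  # per-frame speed
    pauses = np.flatnonzero(speed < speed_thresh) + 1      # boundary frames
    bounds = sorted(set([0, *pauses.tolist(), len(traj)]))
    return [traj[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]

# Synthetic demo: move, pause, move again (hypothetical 1-D trajectory).
traj = np.array([[0.0], [1.0], [2.0], [2.0], [2.0], [3.0], [4.0]])
segments = segment_keyframes(traj, speed_thresh=0.5)
assert len(segments) == 3          # motion / pause / motion
```

Each segment could then be fed to a GMM fit to populate an action primitive library, as the abstract describes for the action layer.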

15 pages, 20617 KiB  
Article
Automatic Control of Virtual Cameras for Capturing and Sharing User Focus and Interaction in Collaborative Virtual Reality
by Junhyeok Lee, Dongkeun Lee, Seowon Han, Hyun K. Kim and Kang Hoon Lee
Symmetry 2024, 16(2), 228; https://doi.org/10.3390/sym16020228 - 13 Feb 2024
Viewed by 1171
Abstract
As VR technology advances and network speeds rise, social VR platforms are gaining traction. These platforms enable multiple users to socialize and collaborate within a shared virtual environment using avatars. Virtual reality, with its ability to augment visual information, offers distinct advantages for collaboration over traditional methods. Prior research has shown that merely sharing another person's viewpoint can significantly boost collaborative efficiency. This paper presents an innovative non-verbal communication technique designed to enhance the sharing of visual information. By employing virtual cameras, our method captures where participants are focusing and what they are interacting with, and displays these data above their avatars. The direction of each virtual camera is automatically controlled by considering the user's gaze direction, the position of the object the user is interacting with, and the positions of other objects around that object. The automatic adjustment of these virtual cameras and the display of captured images are conducted symmetrically for all participants in the virtual environment. This approach is especially beneficial in collaborative settings where multiple users work together on a shared structure of multiple objects. We validated the effectiveness of the proposed technique through an experiment with 20 participants tasked with collaboratively building structures using block assembly.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
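The automatic camera-direction rule described above can be sketched as a weighted blend of the user's gaze direction and the direction toward the interaction target and its neighboring objects; the weighting below is a hypothetical choice for illustration, not the paper's exact formula.

```python
import numpy as np

def camera_direction(gaze_dir, target_pos, cam_pos, neighbors, w_gaze=0.5):
    """Blend the user's gaze direction with the direction toward the
    interaction target and the centroid of nearby objects into one unit
    aiming vector (hypothetical weighting for illustration)."""
    focus = (np.asarray(target_pos, float) + np.mean(neighbors, axis=0)) / 2.0
    to_focus = focus - np.asarray(cam_pos, float)
    to_focus /= np.linalg.norm(to_focus)
    g = np.asarray(gaze_dir, float)
    g /= np.linalg.norm(g)
    d = w_gaze * g + (1.0 - w_gaze) * to_focus   # blend gaze and focus cues
    return d / np.linalg.norm(d)

# Gaze and focus agree, so the blended direction is simply +x.
d = camera_direction([1, 0, 0], [2, 0, 0], [0, 0, 0], [[2, 0, 0], [2, 0, 0]])
assert np.allclose(d, [1, 0, 0])
```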

23 pages, 19390 KiB  
Article
Semi-Symmetrical, Fully Convolutional Masked Autoencoder for TBM Muck Image Segmentation
by Ke Lei, Zhongsheng Tan, Xiuying Wang and Zhenliang Zhou
Symmetry 2024, 16(2), 222; https://doi.org/10.3390/sym16020222 - 12 Feb 2024
Cited by 2 | Viewed by 1254
Abstract
Deep neural networks are effectively utilized for the instance segmentation of muck images from tunnel boring machines (TBMs), providing real-time insights into the surrounding rock condition. However, the high cost of obtaining quality labeled data limits the widespread application of this method. Addressing this challenge, this study presents a semi-symmetrical, fully convolutional masked autoencoder designed for self-supervised pre-training on extensive unlabeled muck image datasets. The model features a four-tier sparse encoder for down-sampling and a two-tier sparse decoder for up-sampling, connected via a conventional convolutional neck, forming a semi-symmetrical structure. This design enhances the model's ability to capture essential low-level features, including geometric shapes and object boundaries. Additionally, to circumvent the trivial solutions in pixel regression that the original masked autoencoder faced, Histogram of Oriented Gradients (HOG) descriptors and Laplacian features have been integrated as novel self-supervision targets. Testing shows that the proposed model can effectively discern essential features of muck images in self-supervised training. When applied to subsequent end-to-end training tasks, it enhances the model's performance, increasing the prediction accuracy of Intersection over Union (IoU) for muck boundaries and regions by 5.9% and 2.4%, respectively, outperforming the enhancements made by the original masked autoencoder.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
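The HOG-style self-supervision target can be illustrated with a minimal orientation-histogram sketch (a simplification of the full HOG descriptor, for illustration only): gradients are binned by unsigned orientation and weighted by magnitude, giving a target that is non-trivial to regress from raw pixels.

```python
import numpy as np

def hog_target(patch, n_bins=9):
    """Coarse HOG-style target: a magnitude-weighted histogram of unsigned
    gradient orientations (0-180 degrees), normalized to sum to 1."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A vertical edge has gradients along +x, so all energy lands in bin 0.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = hog_target(patch)
assert np.argmax(h) == 0
```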
