Visual Sensors for Object Tracking and Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 25941

Special Issue Editors


Guest Editor
Department of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
Interests: computer vision; biomedical image analysis; visual surveillance and monitoring; motion detection; visual tracking; deep learning methods; level set methods

Guest Editor
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
Interests: computer vision; video analytics; robotics

Guest Editor
Institute for Data Science and Informatics, University of Missouri, Columbia, MO, USA
Interests: computer vision; bioimage informatics; microscopy image analysis; deep learning; visualization

Special Issue Information

Dear Colleagues,

Visual object recognition and tracking are fundamental tasks in computer vision and are essential in a wide range of applications, including visual surveillance and monitoring, autonomous vehicles, human–computer interaction, and biomedical image informatics. We are witnessing a growing need for, and renewed interest in, robust visual object tracking and recognition capabilities, driven by recent advances in sensor technologies and the emergence of new applications built on them.

Although visual object detection, tracking, and recognition in challenging real-world environments are relatively effortless tasks for humans, they remain very challenging tasks in a computational video analytics pipeline. Advances in technology, combining more powerful, low-cost computing platforms with novel methods, particularly those relying on deep learning, are revolutionizing the computer vision field and providing new opportunities for research with larger and more diverse data sets. In addition to visual information, other sensors such as GPS, IMU, and lidar can be used synergistically to build more robust approaches in fields ranging from aerial surveillance and wildlife tracking to mobile and wearable technologies, automated driving, and robotics.

The aim of this Special Issue is to solicit original and innovative work from academic and industry researchers on all aspects of visual object recognition and tracking that address the needs of a diverse set of application fields. Contributions that review and report on the state of the art, highlight challenges, point to future directions, and propose novel solutions are also welcome.

Topics of interest include but are not limited to:

  • visual recognition and/or tracking for video surveillance and monitoring (ground and aerial platforms);
  • visual recognition and/or tracking for robotics and autonomous vehicles;
  • visual recognition and/or tracking in biomedical modalities (endoscopy, videofluoroscopy, microscopy, etc.);
  • embedded solutions for visual recognition and/or tracking;
  • recognition and tracking for computational human behavior analysis, assistive robots, and human–robot interaction;
  • heterogeneous sensor fusion for robust tracking and video analytics.

Dr. Filiz Bunyak
Dr. Hadi Ali Akbarpour
Dr. Ilker Ersoy
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual object tracking
  • visual object recognition
  • video analytics
  • visual surveillance and monitoring
  • bioimage informatics
  • data fusion
  • sensor fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

23 pages, 9900 KiB  
Article
Detection-Based Object Tracking Applied to Remote Ship Inspection
by Jing Xie, Erik Stensrud and Torbjørn Skramstad
Sensors 2021, 21(3), 761; https://doi.org/10.3390/s21030761 - 23 Jan 2021
Cited by 15 | Viewed by 5901
Abstract
We propose a detection-based tracking system for automatically processing maritime ship inspection videos and predicting suspicious areas where cracks may exist. This system consists of two stages. Stage one uses a state-of-the-art object detection model, i.e., RetinaNet, which is customized with certain modifications and the optimal anchor setting for detecting cracks in the ship inspection images/videos. Stage two is an enhanced tracking system including two key components. The first component is a state-of-the-art tracker, namely, Channel and Spatial Reliability Tracker (CSRT), with improvements to handle model drift in a simple manner. The second component is a tailored data association algorithm which creates tracking trajectories for the cracks being tracked. This algorithm is based on not only the intersection over union (IoU) of the detections and tracking updates but also their respective areas when associating detections to the existing trackers. Consequently, the tracking results compensate for the detection jitters which could lead to both tracking jitter and creation of redundant trackers. Our study shows that the proposed detection-based tracking system has achieved a reasonable performance on automatically analyzing ship inspection videos. It has proven the feasibility of applying deep neural network based computer vision technologies to automating remote ship inspection. The proposed system is being matured and will be integrated into a digital infrastructure which will facilitate the whole ship inspection process.
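To illustrate the area-aware data association described above, the following is a minimal Python sketch of greedy IoU-plus-area gating; the thresholds iou_thr and area_ratio_max and the greedy matching strategy are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of IoU-plus-area data association, in the spirit of
# the approach described above. Boxes are (x1, y1, x2, y2) tuples.

def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (box_area(a) + box_area(b) - inter + 1e-9)

def associate(detections, tracks, iou_thr=0.3, area_ratio_max=2.0):
    """Greedily match detections to existing trackers. A pair is eligible
    only if the boxes overlap enough AND have comparable areas, which
    suppresses jitter-induced duplicate trackers."""
    matches, unmatched = [], []
    used = set()
    for i, det in enumerate(detections):
        best_j, best_iou = None, iou_thr
        for j, trk in enumerate(tracks):
            if j in used:
                continue
            ratio = box_area(det) / (box_area(trk) + 1e-9)
            if not (1.0 / area_ratio_max <= ratio <= area_ratio_max):
                continue  # areas too dissimilar to be the same crack
            score = iou(det, trk)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j is None:
            unmatched.append(i)  # candidate for spawning a new tracker
        else:
            used.add(best_j)
            matches.append((i, best_j))
    return matches, unmatched
```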

20 pages, 75282 KiB  
Article
ISSD: Improved SSD for Insulator and Spacer Online Detection Based on UAV System
by Xuan Liu, Yong Li, Feng Shuang, Fang Gao, Xiang Zhou and Xingzhi Chen
Sensors 2020, 20(23), 6961; https://doi.org/10.3390/s20236961 - 5 Dec 2020
Cited by 36 | Viewed by 3264
Abstract
In power inspection tasks, the insulator and spacer are important inspection objects. UAV (unmanned aerial vehicle) power inspection is becoming more and more popular. However, due to the limited computing resources carried by a UAV, a lighter model with small model size, high detection accuracy, and fast detection speed is needed to achieve online detection. In order to realize the online detection of power inspection objects, we propose an improved SSD (single shot multibox detector) insulator and spacer detection algorithm using the power inspection images collected by a UAV. In the proposed algorithm, the lightweight network MnasNet is used as the feature extraction network to generate feature maps. Then, two multiscale feature fusion methods are used to fuse multiple feature maps. Lastly, a power inspection object dataset containing insulators and spacers based on aerial images is built, and the performance of the proposed algorithm is tested on real aerial images and videos. Experimental results show that the proposed algorithm can efficiently detect insulators and spacers. Compared with existing algorithms, the proposed algorithm has the advantages of small model size and fast detection speed. The detection accuracy can achieve 93.8%. The detection time of a single image on TX2 (NVIDIA Jetson TX2) is 154 ms and the capture rate on TX2 is 8.27 fps, which allows realizing online detection.
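As a rough sketch of this kind of multiscale feature fusion, the PyTorch block below upsamples a deep feature map and merges it with a shallower one, FPN-style; the FusionBlock module and its channel sizes are assumptions for illustration, not the ISSD architecture or its MnasNet backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlock(nn.Module):
    """Generic FPN-style fusion: upsample a deep, low-resolution feature
    map and merge it with a shallower, high-resolution one."""

    def __init__(self, deep_ch, shallow_ch, out_ch):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, deep, shallow):
        up = F.interpolate(self.reduce(deep), size=shallow.shape[-2:],
                           mode="nearest")
        return self.smooth(self.lateral(shallow) + up)

# Example: fuse a 10x10/256-channel map into a 20x20/128-channel map.
fuse = FusionBlock(deep_ch=256, shallow_ch=128, out_ch=128)
out = fuse(torch.randn(1, 256, 10, 10), torch.randn(1, 128, 20, 20))
print(out.shape)  # torch.Size([1, 128, 20, 20])
```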

19 pages, 1524 KiB  
Article
A Binarized Segmented ResNet Based on Edge Computing for Re-Identification
by Yanming Chen, Tianbo Yang, Chao Li and Yiwen Zhang
Sensors 2020, 20(23), 6902; https://doi.org/10.3390/s20236902 - 3 Dec 2020
Cited by 13 | Viewed by 2445
Abstract
With the advent of the Internet of Everything, more and more devices are connected to the Internet every year. In major cities, in order to maintain normal social order, the demand for deployed cameras is also increasing. In terms of public safety, person Re-Identification (ReID) can play a big role. However, the current methods of ReID are to transfer the collected pedestrian images to the cloud for processing, which will bring huge communication costs. In order to solve this problem, we combine the recently emerging edge computing and use the edge to combine the end devices and the cloud to implement our proposed binarized segmented ResNet. Our method is mainly to divide a complete ResNet into three parts, corresponding to the end devices, the edge, and the cloud. After joint training, the corresponding segmented sub-network is deployed to the corresponding side, and inference is performed to realize ReID. In our experiments, we compared some traditional ReID methods in terms of accuracy and communication overhead. It can be found that our method can greatly reduce the communication cost on the basis of basically not reducing the recognition accuracy of ReID. In general, the communication cost can be reduced by four to eight times.
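The idea of splitting one backbone across the end device, the edge, and the cloud can be sketched in a few lines of PyTorch. The split points below, after layer1 and layer3 of a standard torchvision resnet18, are arbitrary assumptions; the paper's binarization and joint training are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical three-way split of a standard ResNet-18 across end device,
# edge, and cloud; split points are illustrative assumptions.
net = resnet18(weights=None)
device_part = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool,
                            net.layer1)                    # on the camera
edge_part = nn.Sequential(net.layer2, net.layer3)          # on the edge node
cloud_part = nn.Sequential(net.layer4, net.avgpool,
                           nn.Flatten(), net.fc)           # in the cloud

x = torch.randn(1, 3, 224, 224)   # one pedestrian image
feat = device_part(x)             # only this tensor leaves the device
feat = edge_part(feat)            # intermediate features, not raw pixels
out = cloud_part(feat)            # final embedding/logits in the cloud
```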

25 pages, 4006 KiB  
Article
Learning Soft Mask Based Feature Fusion with Channel and Spatial Attention for Robust Visual Object Tracking
by Mustansar Fiaz, Arif Mahmood and Soon Ki Jung
Sensors 2020, 20(14), 4021; https://doi.org/10.3390/s20144021 - 20 Jul 2020
Cited by 13 | Viewed by 4251
Abstract
We propose to improve the visual object tracking by introducing a soft mask based low-level feature fusion technique. The proposed technique is further strengthened by integrating channel and spatial attention mechanisms. The proposed approach is integrated within a Siamese framework to demonstrate its effectiveness for visual object tracking. The proposed soft mask is used to give more importance to the target regions as compared to the other regions to enable effective target feature representation and to increase discriminative power. The low-level feature fusion improves the tracker robustness against distractors. The channel attention is used to identify more discriminative channels for better target representation. The spatial attention complements the soft mask based approach to better localize the target objects in challenging tracking scenarios. We evaluated our proposed approach over five publicly available benchmark datasets and performed extensive comparisons with 39 state-of-the-art tracking algorithms. The proposed tracker demonstrates excellent performance compared to the existing state-of-the-art trackers.
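Channel and spatial attention of the general kind used here can be sketched with CBAM-style modules in PyTorch; the reduction ratio and kernel size are assumptions, and this is a generic illustration rather than the authors' soft-mask fusion network.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CBAM-style channel attention: weight channels by pooled statistics."""

    def __init__(self, channels, reduction=16):  # reduction is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: a location map from pooled channels."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

feat = torch.randn(1, 64, 32, 32)
feat = SpatialAttention()(ChannelAttention(64)(feat))  # channel, then spatial
```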

22 pages, 47872 KiB  
Article
Multiple Object Tracking for Dense Pedestrians by Markov Random Field Model with Improvement on Potentials
by Peixin Liu, Xiaofeng Li, Yang Wang and Zhizhong Fu
Sensors 2020, 20(3), 628; https://doi.org/10.3390/s20030628 - 22 Jan 2020
Cited by 9 | Viewed by 3572
Abstract
Pedestrian tracking in dense crowds is a challenging task, even when using a multi-camera system. In this paper, a new Markov random field (MRF) model is proposed for the association of tracklet couplings. Equipped with a new potential function improvement method, this model can associate the small tracklet coupling segments caused by dense pedestrian crowds. The tracklet couplings in this paper are obtained through a data fusion method based on image mutual information. This method calculates the spatial relationships of tracklet pairs by integrating position and motion information, and adopts the human key point detection method for correction of the position data of incomplete and deviated detections in dense crowds. The MRF potential function improvement method for dense pedestrian scenes includes assimilation and extension processing, as well as a message selective belief propagation algorithm. The former enhances the information of the fragmented tracklets by means of a soft link with longer tracklets and expands through sharing to improve the potentials of the adjacent nodes, whereas the latter uses a message selection rule to prevent unreliable messages of fragmented tracklet couplings from being spread throughout the MRF network. With the help of the iterative belief propagation algorithm, the potentials of the model are improved to achieve valid association of the tracklet coupling fragments, such that dense pedestrians can be tracked more robustly. Modular experiments and system-level experiments are conducted using the PETS2009 experimental data set, where the experimental results reveal that the proposed method has superior tracking performance.
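To make the belief-propagation machinery concrete, here is a toy sum-product pass over a chain-structured MRF in NumPy; it shows only the generic message-update step, with random placeholder potentials, and does not model the paper's potential-improvement or message-selection rules.

```python
import numpy as np

# Toy chain MRF: each node stands for a tracklet-coupling fragment, each
# label for a candidate trajectory; all potentials are random placeholders.
rng = np.random.default_rng(0)
n, k = 5, 3                   # 5 fragments, 3 candidate trajectories
unary = rng.random((n, k))    # node potentials (e.g., appearance scores)
pair = rng.random((k, k))     # edge potential: label compatibility

fwd = np.ones((n, k))         # messages passed left -> right
bwd = np.ones((n, k))         # messages passed right -> left
for i in range(1, n):
    m = (unary[i - 1] * fwd[i - 1]) @ pair
    fwd[i] = m / m.sum()      # normalize for numerical stability
for i in range(n - 2, -1, -1):
    m = pair @ (unary[i + 1] * bwd[i + 1])
    bwd[i] = m / m.sum()

belief = unary * fwd * bwd    # per-fragment marginal belief
print(belief.argmax(axis=1))  # most likely trajectory for each fragment
```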

20 pages, 6530 KiB  
Article
Global Motion-Aware Robust Visual Object Tracking for Electro Optical Targeting Systems
by Byeong Hak Kim, Alan Lukezic, Jong Hyuk Lee, Ho Min Jung and Min Young Kim
Sensors 2020, 20(2), 566; https://doi.org/10.3390/s20020566 - 20 Jan 2020
Cited by 5 | Viewed by 5246
Abstract
Although recently developed trackers have shown excellent performance even when tracking fast moving and shape changing objects with variable scale and orientation, the trackers for the electro-optical targeting systems (EOTS) still suffer from abrupt scene changes due to frequent and fast camera motions by pan-tilt motor control or dynamic distortions in field environments. Conventional context aware (CA) and deep learning based trackers have been studied to tackle these problems, but they have the drawbacks of not fully overcoming the problems and dealing with their computational burden. In this paper, a global motion aware method is proposed to address the fast camera motion issue. The proposed method consists of two modules: (i) a motion detection module, which is based on the change in image entropy value, and (ii) a background tracking module, used to track a set of features in consecutive images to find correspondences between them and estimate global camera movement. A series of experiments is conducted on thermal infrared images, and the results show that the proposed method can significantly improve the robustness of all trackers with a minimal computational overhead. We show that the proposed method can be easily integrated into any visual tracking framework and can be applied to improve the performance of EOTS applications.
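A minimal OpenCV sketch of how the two modules might fit together, offered as an assumption rather than the authors' code: an entropy-change test flags abrupt scene changes, and sparse feature tracking yields a global transform. The entropy_jump threshold and feature-tracking parameters are illustrative.

```python
import cv2
import numpy as np

def entropy(gray, bins=256):
    """Shannon entropy of an 8-bit grayscale frame."""
    hist = np.bincount(gray.ravel(), minlength=bins).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def global_motion(prev_gray, gray, entropy_jump=0.5):
    # (i) motion detection: a large entropy change flags an abrupt scene change
    if abs(entropy(gray) - entropy(prev_gray)) > entropy_jump:
        return None  # scene changed too much to compensate reliably
    # (ii) background tracking: sparse features + optical flow -> global motion
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.ravel() == 1
    if ok.sum() < 4:
        return None
    M, _ = cv2.estimateAffinePartial2D(pts[ok], nxt[ok], method=cv2.RANSAC)
    return M  # 2x3 transform approximating the global camera motion
```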