
Advances and Application of Intelligent Video Surveillance Systems: Volume II

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 March 2025

Special Issue Editors


Guest Editor
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Interests: image/video processing; computer vision; artificial intelligence

Guest Editor
School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: video processing; computer vision; machine learning

Special Issue Information

Dear Colleagues,

Video surveillance is used to observe scenes and detect specific behaviors in daily life, and it has been widely deployed for public security, transportation monitoring, and other applications. A video surveillance system consists of three parts: a camera system, a transmission system, and an observation system. The camera system captures surveillance videos; however, as video resolutions increase, the volume of video data grows significantly, posing challenges for storage, transmission, and processing.

In this Special Issue, we would like to highlight new and innovative work on intelligent video surveillance systems. We invite you to present high-quality research in one or more areas concerning the current state of the art and the future of intelligent video surveillance systems, in both practice and theory. High-quality manuscripts on surveillance video compression, surveillance video processing, and surveillance video security are particularly welcome.

Dr. Zhaoqing Pan
Dr. Bo Peng
Prof. Dr. Jinwei Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • surveillance video compression
  • surveillance video processing
  • surveillance video transmission
  • surveillance video security

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

18 pages, 9647 KiB  
Article
Privacy-Preserving Live Video Analytics for Drones via Edge Computing
by Piyush Nagasubramaniam, Chen Wu, Yuanyi Sun, Neeraj Karamchandani, Sencun Zhu and Yongzhong He
Appl. Sci. 2024, 14(22), 10254; https://doi.org/10.3390/app142210254 - 7 Nov 2024
Abstract
The use of lightweight drones has surged in recent years across both personal and commercial applications, necessitating the ability to conduct live video analytics on drones with limited computational resources. While edge computing offers a solution to the throughput bottleneck, it also opens the door to potential privacy invasions by exposing sensitive visual data to risks. In this work, we present a lightweight, privacy-preserving framework designed for real-time video analytics. By integrating a novel split-model architecture tailored for distributed deep learning through edge computing, our approach strikes a balance between operational efficiency and privacy. We provide comprehensive evaluations on privacy, object detection, latency, bandwidth usage, and object-tracking performance for our proposed privacy-preserving model.
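The split-model idea described in this abstract can be illustrated with a minimal sketch: a "head" runs on the resource-constrained device and only its intermediate features, rather than raw frames, cross the network to the edge "tail". The functions and feature representation below are illustrative stand-ins, not the paper's actual architecture.

```python
# Toy split-model inference: the head runs on the drone, the tail at the
# edge server; only intermediate features leave the device.

def head(frame):
    # On-device stage: reduce each frame row to a coarse scalar feature.
    return [sum(row) / len(row) for row in frame]

def tail(features):
    # Edge stage: finish the analytics on the received features.
    return max(features)

def analyze(frame):
    features = head(frame)    # cheap enough for the drone
    payload = list(features)  # only features cross the network
    return tail(payload)

frame = [[0, 1, 2], [3, 4, 5]]
print(analyze(frame))  # → 4.0
```

Because the raw frame never leaves the device, the privacy exposure at the edge is limited to whatever can be recovered from the intermediate features.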

17 pages, 8965 KiB  
Article
A Distributed Real-Time Monitoring Scheme for Air Pressure Stream Data Based on Kafka
by Zixiang Zhou, Lei Zhou and Zhiguo Chen
Appl. Sci. 2024, 14(12), 4967; https://doi.org/10.3390/app14124967 - 7 Jun 2024
Abstract
Strict air pressure control is paramount in industries such as petroleum, chemicals, transportation, and mining to ensure production safety and to improve operational efficiency. In these fields, accurate real-time air pressure monitoring is critical to optimize operations and ensure facility and personnel safety. Although current Internet of Things air pressure monitoring systems enable users to make decisions based on objective data, existing approaches are limited by long response times, low efficiency, and inadequate preprocessing. Additionally, the exponential increase in data volumes creates the risk of server downtime. To address these challenges, this paper proposes a novel real-time air pressure monitoring scheme that uses Arduino microcontrollers in conjunction with GPRS network communication. It also uses Apache Kafka to construct a multi-server cluster for high-performance message processing. Furthermore, data are backed up by configuring multiple replications, which safeguards against data loss during server failures. The scheme also includes an intuitive and user-friendly visualization interface for data analysis and subsequent decision making. The experimental results demonstrate that this approach offers high throughput and timely responsiveness, providing a more reliable option for real-time gathering, analysis, and storage of massive data.
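The replication safeguard mentioned in the abstract can be sketched without a real Kafka cluster: each message is written to several brokers, so a reading survives a single server failure. This is a conceptual toy, not Kafka's actual protocol; broker names and the message shape are invented for illustration.

```python
# Toy illustration of replicated logs: each produced message is copied to
# `replication` brokers, so one broker failing does not lose data.

class Cluster:
    def __init__(self, brokers, replication=2):
        self.logs = {b: [] for b in brokers}
        self.replication = replication

    def produce(self, msg):
        # Append the message to the first `replication` brokers.
        for b in list(self.logs)[: self.replication]:
            self.logs[b].append(msg)

    def consume(self, failed=()):
        # Read from the first broker that is still alive.
        for b, log in self.logs.items():
            if b not in failed:
                return list(log)
        raise RuntimeError("no live replica")

cluster = Cluster(["broker-1", "broker-2", "broker-3"])
cluster.produce({"sensor": "p-17", "kPa": 101.3})
# Even if broker-1 goes down, the reading is still readable elsewhere:
print(cluster.consume(failed=("broker-1",)))  # → [{'sensor': 'p-17', 'kPa': 101.3}]
```

In Kafka itself this corresponds to setting a topic's replication factor greater than one so follower replicas can take over when a leader fails.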

18 pages, 13550 KiB  
Article
Content-Adaptive Light Field Contrast Enhancement Using Focal Stack and Hierarchical Network
by Xiangyan Guo, Jinhao Guo, Zhongyun Yuan and Yongqiang Cheng
Appl. Sci. 2024, 14(11), 4885; https://doi.org/10.3390/app14114885 - 5 Jun 2024
Abstract
Light field (LF) cameras can capture a scene’s information from all different directions and provide comprehensive image information. However, the resulting data processing commonly encounters problems of low contrast and low image quality. In this article, we put forward a content-adaptive light field contrast enhancement scheme using a focal stack (FS) and hierarchical structure. The proposed FS set contained 300 light field images, which were captured using a Lytro-Illum camera. In addition, we integrated the classical Stanford Lytro Light Field Archive and JPEG Pleno Database. Specifically, according to the global brightness, the acquired LF images were classified into four different categories. First, we transformed the original LF FS into a depth map (DMAP) and all-in-focus (AIF) image. The image category was preliminarily determined depending on the brightness information. Then, the adaptive parameters were acquired by the corresponding multilayer perceptron (MLP) network training, which intrinsically enhanced the contrast and adjusted the light field image. Finally, our method automatically produced an enhanced FS based on the DMAP and AIF image. The experimental comparison results demonstrate that the adaptive values predicted by our MLP had high precision and approached the ground truth. Moreover, compared to existing contrast enhancement methods, our method provides a global contrast enhancement, which improves, without over-enhancing, local areas. The complexity of image processing is reduced, and real-time, adaptive LF enhancement is realized.
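The classification step described above, bucketing images into four categories by global brightness, can be sketched as follows. The thresholds and toy images here are illustrative assumptions; the paper does not publish its cut-off values.

```python
# Toy sketch of brightness-based image classification: compute a global
# mean brightness and bucket it into one of four categories.
# Thresholds (64, 128, 192) are illustrative, not from the paper.

def global_brightness(image):
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def classify(image, thresholds=(64, 128, 192)):
    b = global_brightness(image)
    for category, t in enumerate(thresholds):
        if b < t:
            return category
    return len(thresholds)  # brightest category

dark = [[10, 20], [30, 40]]
bright = [[220, 230], [240, 250]]
print(classify(dark), classify(bright))  # → 0 3
```

In the paper's pipeline, each category would then select its own MLP-trained enhancement parameters rather than a single global setting.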

12 pages, 1332 KiB  
Article
End-to-End Light Field Image Compression with Multi-Domain Feature Learning
by Kangsheng Ye, Yi Li, Ge Li, Dengchao Jin and Bo Zhao
Appl. Sci. 2024, 14(6), 2271; https://doi.org/10.3390/app14062271 - 8 Mar 2024
Cited by 1
Abstract
Recently, end-to-end light field image compression methods have been explored to improve compression efficiency. However, these methods have difficulty in efficiently utilizing multi-domain features and their correlation, resulting in limited improvement in compression performance. To address this problem, a novel multi-domain feature learning-based light field image compression network (MFLFIC-Net) is proposed to improve compression efficiency. Specifically, an EPI-based angle completion module (E-ACM) is developed to obtain a complete angle feature by fully exploring the angle information with a large disparity contained in the epipolar plane image (EPI) domain. Furthermore, in order to effectively reduce redundant information in the light field image, a spatial-angle joint transform module (SAJTM) is proposed to reduce redundancy by modeling the intrinsic correlation between spatial and angle features. Experimental results demonstrate that MFLFIC-Net achieves superior performance on MS-SSIM and PSNR metrics compared to public state-of-the-art methods.
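As general background for end-to-end compression, the core lossy step is quantization of a learned latent: encode, quantize, decode, then measure distortion. The sketch below shows only this generic idea with a hand-written uniform quantizer; the paper's E-ACM and SAJTM modules are learned networks and are not represented here.

```python
# Generic lossy-compression toy: a uniform quantizer stands in for the
# learned encoder/decoder of an end-to-end compression network.

def encode(x, step=0.5):
    # Quantize each latent value to an integer symbol (the lossy step).
    return [round(v / step) for v in x]

def decode(codes, step=0.5):
    # Dequantize symbols back to an approximate reconstruction.
    return [c * step for c in codes]

x = [0.12, 0.48, 0.91]
x_hat = decode(encode(x))
distortion = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
print(x_hat)  # → [0.0, 0.5, 1.0]
```

An end-to-end codec trains the encoder and decoder jointly to minimize a rate-distortion objective, trading the size of the symbol stream against reconstruction error.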

18 pages, 3951 KiB  
Article
Mitigating Distractor Challenges in Video Object Segmentation through Shape and Motion Cues
by Jidong Peng, Yibing Zhao, Dingwei Zhang and Yadang Chen
Appl. Sci. 2024, 14(5), 2002; https://doi.org/10.3390/app14052002 - 28 Feb 2024
Abstract
The purpose of semi-supervised video object segmentation (VOS) is to predict and generate object masks in subsequent video frames after being provided with the initial frame’s object mask. Currently, mainstream methods leverage historical frame information to enhance the network’s performance. However, this approach faces the following issues: (1) It often overlooks important shape information, leading to decreased accuracy in segmenting object-edge areas. (2) It often uses pixel-level motion estimation to guide matching and address distractor objects; however, this brings heavy computation costs and struggles against occlusion or fast/blurry motion. For the first problem, this paper introduces an object shape extraction module that exploits both high-level and low-level features to obtain object shape information, with which the predicted masks can be further refined. For the second problem, this paper introduces a novel object-level motion prediction module, which stores representative motion features during the training stage and predicts object motion by retrieving them during the inference stage. We evaluate our method on benchmark datasets against recent state-of-the-art methods, and the results demonstrate the effectiveness of the proposed method.
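The retrieval idea in the motion prediction module, looking up stored representative features instead of computing pixel-level motion, can be sketched as a nearest-neighbour lookup. The scalar keys and motion labels below are invented stand-ins for the paper's learned feature memory.

```python
# Toy retrieval-based motion prediction: stored representative motion
# features are looked up by nearest neighbour at inference time.

def nearest(query, memory):
    # Return the stored key closest to the query feature.
    return min(memory, key=lambda k: abs(k - query))

memory = {0.1: "static", 0.5: "drift-left", 0.9: "fast-right"}
key = nearest(0.45, memory)
print(memory[key])  # → drift-left
```

Retrieval of this kind avoids per-pixel motion estimation at inference, which is where the claimed savings against occlusion and fast/blurry motion come from.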

19 pages, 2168 KiB  
Article
Three-Stage Deep Learning Framework for Video Surveillance
by Ji-Woon Lee and Hyun-Soo Kang
Appl. Sci. 2024, 14(1), 408; https://doi.org/10.3390/app14010408 - 2 Jan 2024
Cited by 5
Abstract
The escalating use of security cameras has resulted in a surge in images requiring analysis, a task hindered by the inefficiency and error-prone nature of manual monitoring. In response, this study delves into the domain of anomaly detection in CCTV security footage, addressing challenges previously encountered in analyzing videos with complex or dynamic backgrounds and long sequences. We introduce a three-stage deep learning architecture designed to detect abnormalities in security camera videos. The first stage employs a pre-trained convolutional neural network to extract features from individual video frames. Subsequently, these features are transformed into time series data in the second stage, utilizing a blend of bidirectional long short-term memory and multi-head attention to analyze short-term frame relationships. The final stage leverages relative positional embeddings and a custom Transformer encoder to interpret long-range frame relationships and identify anomalies. Tested on various open datasets, particularly those with complex backgrounds and extended sequences, our method demonstrates enhanced accuracy and efficiency in video analysis. This approach not only improves current security camera analysis but also shows potential for diverse application settings, signifying a significant advancement in the evolution of security camera monitoring and analysis technologies.
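The three-stage structure described above (per-frame features, short-term temporal modelling, long-range anomaly scoring) can be sketched with toy stand-ins. Here a scalar mean replaces the CNN, a moving average replaces the BiLSTM-plus-attention stage, and a deviation-from-mean rule replaces the Transformer encoder; all thresholds and data are illustrative.

```python
# Skeleton of a three-stage anomaly pipeline with toy stand-ins for the
# paper's CNN, BiLSTM + attention, and Transformer encoder stages.

def frame_features(frame):
    # Stage 1 stand-in: one scalar feature per frame.
    return sum(frame) / len(frame)

def short_term(features, window=3):
    # Stage 2 stand-in: moving average over a short window of frames.
    out = []
    for i in range(len(features)):
        seg = features[max(0, i - window + 1): i + 1]
        out.append(sum(seg) / len(seg))
    return out

def anomalies(smoothed, threshold=1.5):
    # Stage 3 stand-in: flag frames far from the long-range sequence mean.
    mean = sum(smoothed) / len(smoothed)
    return [i for i, v in enumerate(smoothed) if abs(v - mean) > threshold]

frames = [[1, 1], [1, 1], [1, 1], [9, 9], [1, 1]]
feats = [frame_features(f) for f in frames]
print(anomalies(short_term(feats)))  # → [3, 4]
```

The staging mirrors the paper's division of labour: cheap per-frame extraction first, then short-range context, then a global pass over the whole sequence.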
