Detection of Moving Objects

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 April 2018) | Viewed by 53536

Special Issue Editor


Prof. Thierry Bouwmans
Guest Editor
MIA-Lab, Université La Rochelle, 17042 La Rochelle, France
Interests: background modeling; face detection; deep learning; graph signal processing

Special Issue Information

Dear Colleagues,

The detection of moving objects is one of the most important steps in video processing, with applications in video surveillance, optical motion capture, multimedia, teleconferencing, video editing, human-computer interfaces, etc. The last two decades have witnessed a very significant number of publications on the detection of moving objects in videos taken by static cameras; recently, however, new applications in which the background is not static, such as recordings taken from drones, UAVs, or Internet videos, have called for new developments to robustly detect moving objects in challenging environments. Effective methods are therefore needed that are robust to both dynamic backgrounds and illumination changes in real scenes captured by fixed cameras or mobile devices, and different models must be brought to bear, such as advanced statistical models, fuzzy models, robust subspace learning models, and deep learning models.
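For readers new to the area, the following minimal sketch illustrates classical background subtraction with OpenCV's built-in Gaussian mixture model (MOG2); the input path "video.mp4" and all parameter values are placeholders.

```python
# Minimal background-subtraction sketch using OpenCV's Gaussian mixture
# model (MOG2). "video.mp4" is a placeholder input path.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)    # 255 = foreground, 127 = shadow
    mask = cv2.medianBlur(mask, 5)    # suppress salt-and-pepper noise
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```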

The intent of this Special Issue is to provide: (1) new approaches to the detection of moving objects; (2) new strategies to improve foreground detection algorithms in critical scenarios such as dynamic backgrounds, illumination changes, night videos, and low-frame-rate videos; and (3) new adaptive and incremental algorithms that achieve real-time performance.

This Special Issue is primarily focused on the following topics; however, we encourage all submissions related to the detection of moving objects in videos taken by static or moving cameras:

  • Background initialization
  • Background subtraction
  • Background modeling
  • Foreground detection
  • Feature selection
  • Statistical, fuzzy, and Dempster–Shafer concepts for detection of moving objects
  • Robust subspace learning models (RPCA, etc.)
  • Deep learning models
  • HD cameras, IR cameras, light field cameras, RGB-D cameras
  • Drones, UAVs
  • Real-time implementations (GPU, FPGA, etc.)
Prof. Thierry Bouwmans
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Background initialization
  • Background subtraction
  • Background modeling
  • Foreground detection
  • Feature selection
  • Statistical, fuzzy, and Dempster–Shafer concepts for detection of moving objects
  • Robust subspace learning models (RPCA, etc.)
  • Deep learning models
  • HD cameras, IR cameras
  • Light field cameras, RGB-D cameras
  • Drones, UAVs
  • Real-time implementations (GPU, FPGA, etc.)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (8 papers)


Editorial

Jump to: Research

2 pages, 130 KiB  
Editorial
Detection of Moving Objects
by Thierry Bouwmans
J. Imaging 2018, 4(7), 93; https://doi.org/10.3390/jimaging4070093 - 13 Jul 2018
Viewed by 3409

Research

Jump to: Editorial

28 pages, 1047 KiB  
Article
Background Subtraction Based on a New Fuzzy Mixture of Gaussians for Moving Object Detection
by Ali Darwich, Pierre-Alexandre Hébert, André Bigand and Yasser Mohanna
J. Imaging 2018, 4(7), 92; https://doi.org/10.3390/jimaging4070092 - 10 Jul 2018
Cited by 21 | Viewed by 7616
Abstract
Moving foreground detection is a very important step for many applications, such as human behavior analysis for visual surveillance, model-based action recognition, and road traffic monitoring. Background subtraction is a very popular approach, but it is difficult to apply given that it must overcome many obstacles, such as dynamic background changes, lighting variations, and occlusions. In the presented work, we focus on this foreground/background segmentation problem, using type-2 fuzzy modeling to manage the uncertainty of the video process and of the data. The proposed method models the state of each pixel using an imprecise and adjustable Gaussian mixture model, which is exploited by several fuzzy classifiers to ultimately estimate the pixel class for each frame. More precisely, this decision takes into account not only the history of the pixel's evolution, but also its spatial neighborhood and its possible displacements in the previous frames. We then compare the proposed method with other closely related methods, including methods based on a Gaussian mixture model or on fuzzy sets. This comparison allows us to assess our method's performance and to propose some perspectives for this work.
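As a rough illustration of the idea, and not the authors' type-2 fuzzy mixture model, the sketch below maintains a running single-Gaussian background per pixel and replaces the usual hard foreground threshold with a soft, fuzzy membership; the parameters alpha and k are illustrative.

```python
# Illustrative simplification: one running Gaussian per pixel and a fuzzy
# (soft) foreground membership instead of a hard threshold. This is not the
# paper's type-2 fuzzy mixture-of-Gaussians model.
import numpy as np

def update_background(frame, mean, var, alpha=0.02):
    """Running per-pixel Gaussian background model (float arrays, grayscale)."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = var + alpha * (diff * diff - var)
    return mean, np.maximum(var, 1e-6)

def fuzzy_foreground(frame, mean, var, k=2.0):
    """Soft membership in [0, 1]: 0 within k standard deviations of the
    background model, ramping linearly to 1 at 2k."""
    z = np.abs(frame - mean) / np.sqrt(var)
    return np.clip((z - k) / k, 0.0, 1.0)
```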

23 pages, 3290 KiB  
Article
Compressive Online Video Background–Foreground Separation Using Multiple Prior Information and Optical Flow
by Srivatsa Prativadibhayankaram, Huynh Van Luong, Thanh Ha Le and André Kaup
J. Imaging 2018, 4(7), 90; https://doi.org/10.3390/jimaging4070090 - 3 Jul 2018
Cited by 15 | Viewed by 6446
Abstract
In the context of video background–foreground separation, we propose a compressive online Robust Principal Component Analysis (RPCA) method with optical flow that recursively separates a sequence of video frames into foreground (sparse) and background (low-rank) components. This separation method operates on a small set of measurements taken per frame, in contrast to conventional batch-based RPCA, which processes the full data. The proposed method also leverages multiple prior information by incorporating previously separated background and foreground frames in an n-ℓ1 minimization problem. Moreover, optical flow is utilized to estimate motion between the previous foreground frames and then compensate for it, yielding higher-quality prior foregrounds that improve the separation. Our method is tested on several video sequences in different scenarios for online background–foreground separation from compressive measurements. The visual and quantitative results show that the proposed method outperforms other existing methods.
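For readers unfamiliar with RPCA, the following is a minimal batch principal component pursuit sketch (a simple inexact-ALM scheme) showing the low-rank/sparse split that the paper builds on; the paper itself works online, from compressive measurements, with multiple priors and optical flow, none of which is reproduced here.

```python
# Batch RPCA baseline (principal component pursuit via inexact ALM):
# M (pixels x frames, float) = L (low-rank background) + S (sparse foreground).
import numpy as np

def svd_shrink(X, tau):
    """Singular value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Elementwise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, iters=100):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    mu = 0.25 / (np.abs(M).mean() + 1e-12)  # common heuristic for the penalty
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S
```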

22 pages, 10901 KiB  
Article
LaBGen-P-Semantic: A First Step for Leveraging Semantic Segmentation in Background Generation
by Benjamin Laugraud, Sébastien Piérard and Marc Van Droogenbroeck
J. Imaging 2018, 4(7), 86; https://doi.org/10.3390/jimaging4070086 - 25 Jun 2018
Cited by 21 | Viewed by 5380
Abstract
Given a video sequence acquired from a fixed camera, the stationary background generation problem consists of generating a unique image estimating the stationary background of the sequence. During the IEEE Scene Background Modeling Contest (SBMC) organized in 2016, we presented the LaBGen-P method. In short, this method relies on a motion detection algorithm to select, for each pixel location, a given number of pixel intensities that are most likely static, keeping the ones with the smallest quantities of motion. These quantities are estimated by aggregating the motion scores returned by the motion detection algorithm in the spatial neighborhood of the pixel. After this selection process, the background image is generated by blending the selected intensities with a median filter. In our previous work, we showed that using a temporally memoryless motion detection, i.e., detecting motion between two frames without relying on additional temporal information, leads our method to achieve its best performance. In this work, we go one step further by developing LaBGen-P-Semantic, a variant of LaBGen-P whose motion detection step operates on the current frame only, using semantic segmentation. For this purpose, two intra-frame motion detection algorithms, detecting motion from a single frame, are presented and compared. Our experiments, carried out on the Scene Background Initialization (SBI) and SceneBackgroundModeling.NET (SBMnet) datasets, show that leveraging semantic segmentation improves robustness against intermittent motion, background motion, and very short video sequences, which are among the main challenges in the background generation field. Moreover, our results confirm that an intra-frame motion detection is an appropriate choice for our method and paves the way for more techniques based on semantic segmentation.
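The selection-and-blend core of LaBGen-P can be sketched in a few lines; here `motion_scores` stands in for the output of whichever motion (or, in this variant, semantic) detector is plugged in, and S is the number of intensities kept per pixel.

```python
# Sketch of the per-pixel selection-and-blend step behind LaBGen-P.
# frames: (T, H, W) grayscale stack; motion_scores: (T, H, W), lower = more static.
import numpy as np

def generate_background(frames, motion_scores, S=5):
    order = np.argsort(motion_scores, axis=0)[:S]         # S most static frames per pixel
    selected = np.take_along_axis(frames, order, axis=0)  # their intensities, (S, H, W)
    return np.median(selected, axis=0)                    # blend with a median
```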

24 pages, 6895 KiB  
Article
Full Reference Objective Quality Assessment for Reconstructed Background Images
by Aditee Shrotre and Lina J. Karam
J. Imaging 2018, 4(6), 82; https://doi.org/10.3390/jimaging4060082 - 19 Jun 2018
Cited by 1 | Viewed by 4937
Abstract
With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street-view imaging, and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating future metrics developed to assess the perceived quality of reconstructed background images.
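To make the probability summation model concrete, here is a hypothetical multi-scale pooling sketch; the per-scale detection probability below is a toy stand-in, not the published RBQI feature set.

```python
# Hypothetical multi-scale pooling with probability summation:
# P_total = 1 - prod_s (1 - P_s). The per-scale probability here is a
# toy stand-in for RBQI's color/structure features.
import numpy as np

def downsample(img):
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2])

def detection_probability(ref, rec, k=0.1):
    """Map the mean absolute error at one scale to a pseudo-probability
    that the distortion is visible (illustrative only)."""
    return 1.0 - np.exp(-np.mean(np.abs(ref - rec)) / k)

def rbqi_like(ref, rec, scales=3):
    probs = []
    for _ in range(scales):
        probs.append(detection_probability(ref, rec))
        ref, rec = downsample(ref), downsample(rec)
    p_total = 1.0 - np.prod([1.0 - p for p in probs])  # probability summation
    return 1.0 - p_total                               # higher = better quality
```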

17 pages, 8886 KiB  
Article
Deep Learning with a Spatiotemporal Descriptor of Appearance and Motion Estimation for Video Anomaly Detection
by Kishanprasad G. Gunale and Prachi Mukherji
J. Imaging 2018, 4(6), 79; https://doi.org/10.3390/jimaging4060079 - 8 Jun 2018
Cited by 20 | Viewed by 6997
Abstract
The automatic detection and recognition of anomalous events in crowded and complex scenes on video are the research objectives of this paper. The main challenge is to create models for detecting such events, given their variability and their dependence on the context of the scene. To address these challenges, this paper proposes a novel HOME FAST (Histogram of Orientation, Magnitude, and Entropy with Fast Accelerated Segment Test) spatiotemporal feature extraction approach based on optical flow information to capture anomalies. This descriptor performs video analysis within the smart surveillance domain and detects anomalies. In deep learning, the training step learns all the normal patterns from high-level and low-level information. During testing, events are described and, if they differ from the normal pattern, are considered anomalous. The overall proposed system robustly identifies both local and global abnormal events in complex scenes and solves the problem of detection under various transformations with respect to state-of-the-art approaches. The performance assessment of the simulation results validated that the proposed model could handle different anomalous events in a crowded scene and automatically recognize them successfully.
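As a rough illustration of an orientation/magnitude/entropy flow descriptor, and not the exact HOME FAST definition, one can weight an orientation histogram of dense optical flow by magnitude and append its entropy:

```python
# Magnitude-weighted orientation histogram of dense optical flow, plus its
# entropy, as a per-frame motion descriptor (illustrative, not HOME FAST).
import cv2
import numpy as np

def flow_descriptor(prev_gray, gray, bins=8):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])   # ang in radians
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-8)
    entropy = -np.sum(hist * np.log(hist + 1e-8))            # orientation entropy
    return np.append(hist, entropy)
```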

19 pages, 34950 KiB  
Article
Analytics of Deep Neural Network-Based Background Subtraction
by Tsubasa Minematsu, Atsushi Shimada, Hideaki Uchiyama and Rin-ichiro Taniguchi
J. Imaging 2018, 4(6), 78; https://doi.org/10.3390/jimaging4060078 - 8 Jun 2018
Cited by 46 | Viewed by 6522
Abstract
Deep neural network-based (DNN-based) background subtraction has demonstrated excellent performance for moving object detection. DNN-based background subtraction automatically learns background features from training images and outperforms conventional background modeling based on handcrafted features. However, previous works fail to detail why DNNs work well for change detection; such a discussion helps to understand the potential of DNNs in background subtraction and to improve them. In this paper, we directly observe the feature maps in all layers of the DNN used in our investigation. The DNN provides feature maps with the same resolution as the input image, which facilitates the analysis of DNN behavior because the feature maps and the input image can be compared simultaneously. Furthermore, we analyzed which filters are important for detection accuracy by removing specific filters from the trained DNN. From the experiments, we found that the DNN consists of subtraction operations in convolutional layers and thresholding operations in bias layers, and that scene-specific filters are generated to suppress false positives from dynamic backgrounds. In addition, we discuss the characteristics and issues of the DNN based on our observations.
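The filter-removal analysis can be sketched as follows; `layer` and `evaluate` are placeholders for a convolutional layer of the trained network and a scalar accuracy function, not the authors' actual code.

```python
# Sketch of filter ablation: zero one filter at a time in a trained network
# and record the accuracy drop. `evaluate(model)` is a placeholder returning
# a scalar detection score on a validation set.
import torch

@torch.no_grad()
def filter_importance(model, layer, evaluate):
    base = evaluate(model)
    drops = []
    for i in range(layer.out_channels):
        w = layer.weight[i].clone()
        b = layer.bias[i].clone() if layer.bias is not None else None
        layer.weight[i].zero_()               # ablate filter i
        if b is not None:
            layer.bias[i].zero_()
        drops.append(base - evaluate(model))  # accuracy lost without it
        layer.weight[i].copy_(w)              # restore the filter
        if b is not None:
            layer.bias[i].copy_(b)
    return drops
```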

27 pages, 9177 KiB  
Article
Background Subtraction for Moving Object Detection in RGBD Data: A Survey
by Lucia Maddalena and Alfredo Petrosino
J. Imaging 2018, 4(5), 71; https://doi.org/10.3390/jimaging4050071 - 16 May 2018
Cited by 68 | Viewed by 10565
Abstract
This paper provides a specific perspective on background subtraction for moving object detection as a building block for many computer vision applications, being the first relevant step for subsequent recognition, classification, and activity analysis tasks. Since color information is not sufficient for dealing with problems like light switches, local gradual changes of illumination, shadows cast by foreground objects, and color camouflage, additional information needs to be captured to deal with these issues. Synchronized depth information acquired by low-cost RGBD sensors is considered in this paper to give evidence about which issues can be solved, but also to highlight new challenges and design opportunities in several applications and research areas.
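A toy fusion rule makes the complementarity concrete: where depth is reliable, it can veto color-only detections (typically shadows) and recover color-camouflaged objects; elsewhere the color mask is the fallback. This is an illustrative rule, not a method surveyed in the paper.

```python
# Toy color+depth fusion for foreground masks (illustrative only).
import numpy as np

def fuse_masks(color_fg, depth_fg, depth_valid):
    """Boolean (H, W) masks; depth_valid marks pixels with reliable depth.
    Trust depth where it is valid (robust to shadows and color camouflage);
    fall back to the color mask elsewhere."""
    return np.where(depth_valid, depth_fg, color_fg)
```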
