Cooperative Camera Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 June 2020) | Viewed by 17316

Special Issue Editors


Prof. Dr. Nicola Conci
Guest Editor
Dept. of Information & Communication Technologies, University of Trento, Via Sommarive, 14 – 38050 Trento, Italy
Interests: multimedia signal processing; image retrieval

Prof. Dr. Francesco De Natale
Guest Editor
Department of Information Engineering and Computer Science, University of Trento, Via Sommarive, 5 - 38123 Povo, Italy
Interests: multimedia signal processing; computer vision

Dr. Lucio Marcenaro
Guest Editor
Via all'Opera Pia 11, 16145 Genova, Italy
Interests: video processing for event recognition; detection and localization of objects in complex scenes; distributed heterogeneous sensors ambient awareness systems; ambient intelligence and bio-inspired cognitive systems

Dr. Jungong Han
Guest Editor
WMG Data Science, University of Warwick, Coventry CV4 7AL, UK
Interests: computer vision; video analysis; machine learning

Dr. Caifeng Shan
Guest Editor
Philips Research, High Tech Campus 34, 5656AE Eindhoven, The Netherlands
Interests: computer vision; pattern recognition; image and video analysis

Special Issue Information

Dear Colleagues,

Video acquisition devices are all around us. They are used in private homes to provide monitoring and assistive services; placed in buildings and indoor public spaces for surveillance; and spread across urban areas to monitor traffic and people, reveal potential danger or security issues, or trigger intelligent systems such as adaptive lighting or smart crossings. Besides fixed cameras, mobile cameras are increasingly used to collect user-centered information in applications such as life-logging, augmented reality, and location-based services. A further increase in the diffusion of visual sensors is expected in the years to come, driven on the one hand by the availability of high-bandwidth, low-latency networks such as 5G, and on the other by the spread of cost-effective advanced visual sensors (smart, light-field, and 360° cameras).

The current situation presents a largely unstructured scenario, in which the various devices operate without any coordination and are not designed to exchange data among themselves. A current trend in research, however, is the study of systems able to jointly exploit the large amount of inter-related information acquired by visual sensors within large cooperative visual sensor networks. The great potential of these technologies is still hindered by many open challenges: efficient communication protocols among sensors that guarantee the coordination needed for acquisition and processing, calibration and reconstruction of multiple views, distributed processing of the acquired visual information, and fusion of the resulting information flows.

Prof. Dr. Nicola Conci
Prof. Dr. Francesco De Natale
Dr. Lucio Marcenaro
Dr. Jungong Han
Dr. Caifeng Shan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Distributed smart cameras
  • Cooperative camera networks
  • Self-aware camera networks
  • Massive data analysis and information fusion
  • Mobile and ego-vision
  • Autonomous vehicles and autonomous driving
  • Deep learning for distributed and mobile vision
  • Cloud and edge computing for video analysis
  • Ambient-aware robotic systems
  • Immersive reality
  • Networking and 5G for video connectivity

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

19 pages, 1824 KiB  
Article
Bodyprint—A Meta-Feature Based LSTM Hashing Model for Person Re-Identification
by Danilo Avola, Luigi Cinque, Alessio Fagioli, Gian Luca Foresti, Daniele Pannone and Claudio Piciarelli
Sensors 2020, 20(18), 5365; https://doi.org/10.3390/s20185365 - 18 Sep 2020
Cited by 11 | Viewed by 3015
Abstract
Person re-identification is concerned with matching people across disjoint camera views at different places and different time instants. This task is of great interest in computer vision, especially in video surveillance applications where persons must be re-identified and tracked across uncontrolled crowded spaces and after long time periods. The latter aspects are responsible for most of the currently unsolved problems of person re-identification: the presence of many people in a location, as well as the passing of hours or days, gives rise to significant changes in people's visual appearance (e.g., clothes, lighting, and occlusions), making person re-identification a very hard task. In this paper, a meta-feature based Long Short-Term Memory (LSTM) hashing model for person re-identification is presented for the first time. Starting from 2D skeletons extracted from RGB video streams, the proposed method computes a set of novel meta-features based on movement, gait, and bone proportions. These features are analysed by a network composed of a single LSTM layer and two dense layers. The LSTM layer creates a pattern of the person's identity; the dense layers then generate a bodyprint hash through binary coding. The effectiveness of the proposed method is tested on three challenging datasets: iLIDS-VID, PRID 2011, and MARS. In particular, the reported results show that the proposed method, which is not based on the visual appearance of people, is fully competitive with methods based on visual features. In addition, thanks to its skeleton-based abstraction, the method is a concrete contribution toward addressing open problems, such as long-term re-identification and severe illumination changes, which tend to heavily influence the visual appearance of persons.
(This article belongs to the Special Issue Cooperative Camera Networks)
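
To make the architecture concrete, here is a minimal Python/PyTorch sketch of the idea described in the abstract: a single LSTM layer over per-frame skeleton meta-features, followed by two dense layers whose binarized output serves as the bodyprint hash. This is an illustration, not the authors' code; the feature dimension, hidden size, and hash length are assumptions.

    import torch
    import torch.nn as nn

    class BodyprintHasher(nn.Module):
        def __init__(self, feat_dim=32, hidden_dim=128, hash_bits=256):
            super().__init__()
            # Single LSTM layer encodes the temporal pattern of the identity.
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            # Two dense layers map the identity pattern to a binary code.
            self.fc1 = nn.Linear(hidden_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, hash_bits)

        def forward(self, meta_features):
            # meta_features: (batch, time, feat_dim) movement/gait/bone-proportion
            # features computed from 2D skeletons, one vector per frame.
            _, (h_n, _) = self.lstm(meta_features)
            x = torch.relu(self.fc1(h_n[-1]))
            # Binary coding: threshold the activations to obtain the hash.
            return (torch.sigmoid(self.fc2(x)) > 0.5).int()

    # Usage: hash two 100-frame tracks and compare them by Hamming distance.
    hasher = BodyprintHasher()
    code_a = hasher(torch.randn(1, 100, 32))
    code_b = hasher(torch.randn(1, 100, 32))
    hamming_distance = (code_a ^ code_b).sum().item()

Matching by Hamming distance on such hashes is what makes the approach appearance-independent: two tracks of the same person should produce nearby codes even under clothing or lighting changes.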

15 pages, 8483 KiB  
Article
Dynamic Camera Reconfiguration with Reinforcement Learning and Stochastic Methods for Crowd Surveillance
by Niccolò Bisagno, Alberto Xamin, Francesco De Natale, Nicola Conci and Bernhard Rinner
Sensors 2020, 20(17), 4691; https://doi.org/10.3390/s20174691 - 20 Aug 2020
Cited by 11 | Viewed by 5430
Abstract
Crowd surveillance plays a key role in ensuring safety and security in public areas. Surveillance systems traditionally rely on fixed camera networks, which suffer from limitations such as coverage of the monitored area, video resolution, and analytics performance. A smart camera network, on the other hand, makes it possible to reconfigure the sensing infrastructure by incorporating active devices such as pan-tilt-zoom (PTZ) cameras and UAV-based cameras, enabling the network to adapt over time to changes in the scene. We propose a new decentralised approach for network reconfiguration, in which each camera dynamically adapts its parameters and position to optimise scene coverage. Two policies for decentralised camera reconfiguration are presented: a greedy approach and a reinforcement learning approach. In both cases, cameras locally monitor the state of their neighbourhood and dynamically adjust their position and PTZ parameters. When crowds are present, the network balances global coverage of the entire scene against high resolution for the crowded areas. We evaluate our approach in a simulated environment monitored with fixed, PTZ, and UAV-based cameras.
(This article belongs to the Special Issue Cooperative Camera Networks)
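
As a rough illustration of the greedy policy (a sketch under stated assumptions, not the authors' implementation; the utility weights, the footprint() coverage model, and the candidate PTZ grids are hypothetical), each camera independently scores candidate configurations with a local utility that trades off area coverage against resolution on crowded cells:

    import itertools

    def local_utility(covered_cells, crowd_density, zoom, alpha=0.5):
        # Reward covering many cells (global coverage) and zooming on crowded
        # ones (high resolution where people are); alpha balances the two.
        coverage = len(covered_cells)
        crowd_focus = zoom * sum(crowd_density.get(c, 0.0) for c in covered_cells)
        return (1 - alpha) * coverage + alpha * crowd_focus

    def greedy_step(camera, pan_set, tilt_set, zoom_set, crowd_density):
        # Decentralised: each camera enumerates its own candidate (pan, tilt,
        # zoom) settings, scores them locally, and adopts the best one.
        # camera.footprint() and camera.set_ptz() are hypothetical helpers.
        best = max(
            itertools.product(pan_set, tilt_set, zoom_set),
            key=lambda ptz: local_utility(camera.footprint(*ptz),
                                          crowd_density, ptz[2]),
        )
        camera.set_ptz(*best)
        return best

The reinforcement learning variant would replace this one-step argmax with a learned policy that maximises the same kind of coverage/resolution reward over time.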

23 pages, 1556 KiB  
Article
Diffusion Parameters Analysis in a Content-Based Image Retrieval Task for Mobile Vision
by Federico Magliani, Laura Sani, Stefano Cagnoni and Andrea Prati
Sensors 2020, 20(16), 4449; https://doi.org/10.3390/s20164449 - 9 Aug 2020
Cited by 1 | Viewed by 2536
Abstract
Most recent computer vision tasks take into account the distribution of image features to obtain more powerful models and better performance. One of the most commonly used techniques for this purpose is the diffusion algorithm, which fuses manifold data and k-Nearest Neighbors (kNN) graphs. In this paper, we describe how we optimized diffusion in an image retrieval task aimed at mobile vision applications, in order to obtain a good trade-off between computational load and performance. From a computational efficiency viewpoint, the high complexity of exhaustively building a full kNN graph for a large database makes such a process unfeasible on mobile devices. From a retrieval performance viewpoint, the diffusion parameters are strongly task-dependent and significantly affect the algorithm's performance. In the method we describe herein, we tackle the first issue by using approximate algorithms to build the kNN tree. The main contribution of this work is the optimization of the diffusion parameters using a genetic algorithm (GA), which allows us to guarantee high retrieval performance in spite of such a simplification. The results we have obtained confirm that the global search for the optimal diffusion parameters performed by the genetic algorithm is equivalent to a massive analysis of the diffusion parameter space, for which an exhaustive search would be totally unfeasible. We show that even a grid search can often be less efficient (and effective) than the GA, i.e., the genetic algorithm most often produces better diffusion settings when equal computing resources are available to the two approaches. Our method has been tested on several publicly available datasets (Oxford5k, ROxford5k, Paris6k, RParis6k, and Oxford105k) and compared to other mainstream approaches.
(This article belongs to the Special Issue Cooperative Camera Networks)
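
To illustrate the parameter search (a toy sketch, not the paper's code: the parameter names and ranges are invented, and evaluate_map() is a stand-in for running diffusion-based retrieval and measuring mAP), a basic genetic algorithm over two diffusion parameters might look like this:

    import random

    def evaluate_map(params):
        # Stand-in fitness: in the real task this would run kNN-graph
        # diffusion retrieval with `params` and return mAP on held-out
        # queries. A toy score keeps the sketch runnable end-to-end.
        return -abs(params["k"] - 40) - abs(params["trunc"] - 1000) / 100.0

    def genetic_search(pop_size=20, generations=30, mutation_rate=0.2):
        population = [{"k": random.randint(5, 100),
                       "trunc": random.randint(100, 5000)}
                      for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=evaluate_map, reverse=True)
            parents = ranked[: pop_size // 2]               # selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                child = {"k": a["k"], "trunc": b["trunc"]}  # crossover
                if random.random() < mutation_rate:         # mutation
                    child["k"] = max(1, child["k"] + random.randint(-5, 5))
                children.append(child)
            population = parents + children
        return max(population, key=evaluate_map)

    best_params = genetic_search()

The point of using a GA is that each fitness evaluation is expensive (a full retrieval run), so evolving a small population toward good settings explores the parameter space far more cheaply than a grid or exhaustive search.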

26 pages, 25749 KiB  
Article
Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors
by Anh Minh Truong, Wilfried Philips, Nikos Deligiannis, Lusine Abrahamyan and Junzhi Guan
Sensors 2019, 19(22), 4989; https://doi.org/10.3390/s19224989 - 15 Nov 2019
Cited by 4 | Viewed by 5641
Abstract
Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in the network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate, and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method treats the pedestrians in the observed scene as calibration objects and analyzes their tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, together with the estimated 3D locations of the top and the bottom of each pedestrian, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of each person, for heavily occluded scenes in which the bottom of the person is not visible. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained; reaching this accuracy typically requires less than one minute of observing walking people in controlled environments, and only a few minutes of data collection in uncontrolled environments. Our proposed method performs well in various situations, such as multiple persons, occlusions, or even real street intersections.
(This article belongs to the Special Issue Cooperative Camera Networks)
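
The core geometric step can be sketched with OpenCV (an illustration under assumptions, not the authors' full pipeline, which also covers pose detection, person matching, and the vertical-stick model; the intrinsics K1 and K2 are assumed known): head and foot detections of the same walking person, accumulated over time in two overlapping views, serve as point correspondences from which the relative pose is recovered.

    import numpy as np
    import cv2

    def relative_pose_from_pedestrians(pts_cam1, pts_cam2, K1, K2):
        # pts_cam1, pts_cam2: (N, 2) float arrays of matched head/foot image
        # points collected while a person walks through the shared view.
        pts1 = cv2.undistortPoints(pts_cam1.reshape(-1, 1, 2), K1, None)
        pts2 = cv2.undistortPoints(pts_cam2.reshape(-1, 1, 2), K2, None)
        # With normalised coordinates the essential matrix can be estimated
        # directly; RANSAC rejects mismatched or poorly localised detections.
        E, inliers = cv2.findEssentialMat(pts1, pts2, np.eye(3),
                                          method=cv2.RANSAC, threshold=1e-3)
        # Decompose E into rotation and unit-scale translation between views;
        # a known average pedestrian height would fix the metric scale.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, np.eye(3), mask=inliers)
        return R, t

Because every pedestrian track contributes many such correspondences, a short period of observation already yields a well-constrained estimate, consistent with the observation times reported in the abstract.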
