Article

Visual Sensor Networks for Indoor Real-Time Surveillance and Tracking of Multiple Targets

by Jacopo Giordano 1,*,†, Margherita Lazzaretto 1,†, Giulia Michieletto 2,* and Angelo Cenedese 1

1 Department of Information Engineering, University of Padova, 35131 Padova, Italy
2 Department of Management and Engineering, University of Padova, 36100 Vicenza, Italy
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.

Sensors 2022, 22(7), 2661; https://doi.org/10.3390/s22072661
Submission received: 13 December 2021 / Revised: 17 March 2022 / Accepted: 28 March 2022 / Published: 30 March 2022
(This article belongs to the Special Issue Recent Advances in Visual Sensor Networks for Robotics and Automation)

Abstract

The recent trend toward the development of IoT architectures has entailed the transformation of standard camera networks into smart multi-device systems capable of acquiring, elaborating, and exchanging data and, often, dynamically adapting to the environment. Along this line, this work proposes a novel distributed solution that guarantees the real-time monitoring of 3D indoor structured areas as well as the tracking of multiple targets, by employing a heterogeneous visual sensor network composed of both fixed and Pan-Tilt-Zoom (PTZ) cameras. The fulfillment of this twofold goal is ensured through the implementation of a distributed game-theory-based algorithm that optimizes the controllable parameters of the PTZ devices. The proposed solution is able to deal with the possibly conflicting requirements of high tracking precision and maximum coverage of the surveilled area. Extensive numerical simulations in realistic scenarios validated the effectiveness of the outlined strategy.

1. Introduction

A Visual Sensor Network (VSN) is a multi-agent system consisting of a collection of spatially distributed smart cameras. In recent years, due to the ever-improving sensing and computational capabilities and the reduced cost of visual sensors, such architectures have gained popularity and are currently employed in many tasks, ranging from the more traditional surveillance and security scenarios to cutting-edge IoT applications, e.g., in environmental monitoring, sports, and education contexts [1,2,3].
The ongoing IoT-driven push towards cooperative distributed solutions is leading to the progressive replacement of traditional centralized VSNs made up of static devices, namely cameras having a fixed pose (i.e., position and orientation), with more flexible, decentralized, and intelligent systems [4,5,6]. Modern VSNs generally do not envisage a central computing unit, but accomplish the assigned task by relying on a distributed approach. Moreover, they often include PTZ cameras, namely visual sensors with controllable pan and tilt angles and zoom parameters, thus characterized by variable orientations and fields of view. The introduction of these visual sensors enhances the VSNs' scalability and robustness and, at the same time, reduces both the number of cameras needed to cover a given area and the communication and computational burden imposed by data exchange. On the other hand, to take full advantage of the PTZ cameras, it is necessary to dynamically optimize their parameters depending on external factors, including their (fixed) position, potential occlusions, the occurrence of failures, and/or the targets to follow.
In this work, the attention is focused on a VSN made up of both fixed and PTZ cameras, required to monitor a given area with the purpose of detecting and tracking one or more targets. To this aim, a decentralized solution is proposed: it entails the optimization of the PTZ parameters through the exploitation of both the network/environment topology and the tracking information shared among the same cameras.

1.1. Related Works

The monitoring performance of traditional VSNs, composed exclusively of fixed cameras, depends solely on the number and positioning of the devices in the considered environment (see, e.g., [7,8,9]). On the contrary, when accounting for VSNs that also involve PTZ cameras, several parameters can be dynamically selected and actively controlled in order to optimize the fulfillment of the surveillance task.
As emerges from the existing literature, the selection of the PTZ parameters can be performed in different fashions, for instance exploiting notions from control theory, decision theory, or game theory, or resting upon learning algorithms or suitable heuristics [5,10,11]. More in detail, the parameter selection strategies based on control theory exploit a feedback mechanism to minimize the difference between the observed and desired system state. In this direction, in [12], a PID controller was designed to perform the real-time tracking of a target by means of a single PTZ camera. In [13], each movement of the pan and tilt servo camera was controlled by signals calculated by adopting the model predictive control approach. Nonetheless, although the employment of control-based methods appears to be efficient and sound, their use may lack flexibility.

The PTZ parameter selection methods based on decision theory entail the possibility of choosing among different actions even if their consequences are not perfectly known. Specifically, the Markov Decision Process (MDP) and Partially Observable MDP (POMDP) models are employed in many VSN control solutions. For example, in [14], both the MDP and POMDP approaches were adopted in the determination of the optimal configuration of several active cameras, required to maximize the number of observed targets while guaranteeing a minimum view resolution. A particularly interesting branch of decision theory is game theory, wherein the decisions are assumed to be independent in the selection process. For this reason, this approach is a valuable technique when coping with real-time constraints. Indeed, it permits solving cooperative and competitive problems and dealing with non-convex and discontinuous utility functions [15]. Along this line, a game theoretic approach was presented in [16] to address the distributed pose optimization of a group of PTZ cameras. The proposed solution relies on the maximization of various performance metrics (including tracking accuracy and image resolution), with the purpose of finding the best multi-camera system configuration. Such a strategy was further developed in [4], wherein the presence of groups of targets, capable of merging and dividing, was assumed.

More recently, PTZ parameter selection procedures relying on Reinforcement Learning (RL) techniques have gained popularity. For instance, a Q-learning-based solution was presented in [17] for smoothly controlling a PTZ camera. A soft actor–critic RL system was, instead, designed in [18] to perform the decentralized reconfiguration of a camera network involving PTZ sensors and devices mounted on flying platforms. The main issue related to RL solutions is their intrinsic black-box nature: in some contexts, it is not possible or safe to rely upon such controllers, whose performance may not be completely predictable. Finally, the adoption of some heuristics turns out to be a valid option, especially in the case of real-time control requirements. In [19], for instance, a scheduling heuristic was taken into account for dealing with the cooperative monitoring of an object through a PTZ camera network. Nonetheless, the general drawback of heuristics is that they strongly rely on the experience of the users and often on lengthy and tiring tuning procedures.
Regardless of the specific method adopted to select the PTZ parameters, some general observations are due when accounting for VSNs that also include dynamic cameras and that are required to perform both real-time area surveillance and target tracking. First, ensuring the overlap of the cameras' fields of view (FOVs) turns out to be advantageous in the surveillance activities, especially when target detection and tracking are also required. This is motivated by the fact that, without full coverage of the environment, the positions of the targets need to be estimated in correspondence with the blind spots [1]. Then, the presence of multiple targets entails a tradeoff between the coverage of the surveilled area and the image resolution at which the targets are viewed.
To conclude this non-exhaustive survey of the related works, many approaches to address target tracking are described in the literature. Most of them envisage the exploitation of filtering and prediction tools, such as, for example, particle filters, Bayesian estimation techniques, and (extended) Kalman Filters (KFs) [1]. In detail, the approaches based on the KF are extensively used when dealing with real-time applications and often require distributed computations [20,21]. For instance, in [22], a distributed KF was studied for sensor networks with a limited sensing range, and an extended version of the same approach was investigated in [16] to perform 2D tracking employing a PTZ camera network. In [23], instead, a solution based on particle filters was considered for the decentralized tracking of groups of people or individuals. Finally, in [24], a decentralized framework was presented for cooperative self-localization and multi-target tracking via Gaussian filters.

1.2. Contributions

Accounting for a heterogeneous VSN made up of both fixed and PTZ cameras, this work presents a strategy aiming at ensuring the real-time surveillance of a structured indoor environment, as well as multi-target tracking. The outlined procedure envisages the (optimal) selection of the adjustable parameters of the dynamic devices composing the network, resting on the game theory approach proposed in [16].
With respect to [16], the novel aspects of this work derive from the focus on real-world scenarios, wherein it is desirable to limit both the costs (in terms of employed devices) and the task execution time. In [16], the study case consisted of a unique unstructured 2D environment monitored by a broad set of PTZ cameras capable of performing both the distributed multi-target tracking and the iterative optimization of their parameters. In particular, the target tracking method proposed therein is based on an Extended Kalman Filter (EKF), while the parameter selection rests on the iterative solution of a computationally demanding optimization problem. In this work, instead, great attention is devoted to the real-world environment. The mentioned twofold goal is, indeed, faced by accounting for a 3D scenario consisting of a structured area composed of multiple connected rooms, and the available a priori information on the physical space partition is exploited in the solution process. In addition, the considered VSN involves a limited number of (highly expensive) PTZ cameras, while also including (low-cost) static devices. Specifically, the latter still permit efficiently managing the transitions between different areas, thus streamlining the PTZ parameter selection. In this direction, their presence also favors network scalability, since the dynamic devices' reconfiguration can be accomplished in a parallel manner in the different rooms.
More in detail, in this work, the joint monitoring and tracking problem is addressed by modeling the targets as 3D point particles whose position is characterized by a non-null uncertainty, thus overcoming the concept of a planar occupancy grid. The selection of the PTZ parameters implies the identification of the optimal values for both the pan and tilt angles, jointly with the zoom (when accounting for the 2D context only, the pan angle is generally the only one considered in the PTZ parameter selection). Inspired by the game theory approach proposed in [16], we propose an update procedure for the orientation of the PTZ cameras based on the optimization of a certain utility function. In particular, the latter is defined by accounting for some criteria that are new and original with respect to [16], since the intent is to reduce the computational complexity of the parameter selection process because of the real-time constraints on the task's execution. The utility function is maximized in a distributed manner via an iterative negotiation mechanism among some PTZ devices. In particular, differently from [16], the set of PTZ cameras involved in the mentioned negotiation procedure is determined based on the a priori knowledge of the environment in terms of structure and devices' placement. Such information is further exploited to allow for the parallelization of the parameter selection by multiple independent groups of PTZ cameras.
The principal advantage of the solution proposed in this work consists of its versatility and flexibility. Indeed, by conveniently choosing the weights that regulate the contributions in the utility function, it is possible to prioritize the monitoring task with respect to the tracking task, or vice versa. Along this line, the results of the conducted simulative campaign demonstrated its effectiveness in handling the tradeoff between the tracking precision and the image resolution, especially in the critical scenarios. Moreover, the heterogeneous nature of the network and the outlined distributed approach allow the parallelization of the parameter selection and tracking tasks, resulting in a framework that can be easily scaled up to larger and more complex environments.
As a final remark, we emphasize that, although the problems related to targets’ detection and partial/complete loss are not directly taken into account in this work, some possible actions to face these issues are discussed.

1.3. Paper Structure

This paper is organized as follows. In Section 2, after the problem statement, the application scenario is described and modeled. In Section 3, the elements necessary to perform real-time multi-target tracking are outlined. In particular, Section 3.1 illustrates the distributed tracking algorithm based on the EKF solution, while Section 3.2 discusses the PTZ parameter selection. After that, Section 4 presents the application environment employed to validate the strategy outlined in the previous sections, and Section 5 reports the results of the multiple test scenarios considered. In Section 6, a discussion is provided about interesting aspects revealed from the simulation results, together with possible future improvements. Finally, Section 7 reports a summary of the study and contains some final considerations.

2. Problem Statement, Models, and Assumptions

This section aims at illustrating the application scenario taken into account in this work: a structured and cluttered indoor environment monitored by a heterogeneous VSN. We highlight the considered twofold goal, stating the problem and discussing the models and the assumptions adopted in the design of the proposed solution.

2.1. Problem Statement

In this work, the attention is focused on a structured indoor 3D environment $E \subset \mathbb{R}^3$ composed of $n_R \geq 1$ rooms and characterized by $n_A \geq 1$ access points. This is supposed to be monitored by a VSN made up of $n_C \geq 2$ cameras, divided into $n_S \geq 1$ static visual sensors, i.e., fixed cameras, and $n_D \geq 1$ dynamic visual sensors, namely PTZ cameras. In turn, the fixed cameras are split into $n_{S_{HR}} \geq 0$ high-resolution visual sensors and $n_{S_{WA}} \geq 0$ wide-angle visual sensors.
In this context, we aimed at proposing an effective strategy to fulfill a twofold goal: to ensure the real-time surveillance of the described environment and to guarantee the efficient tracking of $n_T \geq 1$ targets, free to move in the supervised area.

2.2. Environment Modeling

To cope with the aforementioned goal, we define the following sets:
  • The environment access points set $\mathcal{A} = \{A_1, \dots, A_{n_A}\}$, consisting of a single element in the currently considered setup where $n_A = 1$;
  • The physical environment partitions set $\mathcal{R} = \{R_1, \dots, R_{n_R}\}$, with $R_h \subset \mathbb{R}^3$ denoting the h-th room composing the considered environment;
  • The high-resolution and wide-angle static visual sensors sets $\mathcal{C}^{S_{HR}} = \{C^{S_{HR}}_1, \dots, C^{S_{HR}}_{n_{S_{HR}}}\}$ and $\mathcal{C}^{S_{WA}} = \{C^{S_{WA}}_1, \dots, C^{S_{WA}}_{n_{S_{WA}}}\}$, respectively;
  • The dynamic and static visual sensors sets $\mathcal{C}^{D} = \{C^{D}_1, \dots, C^{D}_{n_D}\}$ and $\mathcal{C}^{S} = \{C^{S}_1, \dots, C^{S}_{n_S}\}$, respectively, the latter being the direct sum of $\mathcal{C}^{S_{HR}}$ and $\mathcal{C}^{S_{WA}}$;
  • The visual sensors set $\mathcal{C} = \{C_1, \dots, C_{n_C}\}$, resulting from the direct sum of $\mathcal{C}^{D}$ and $\mathcal{C}^{S}$;
  • The targets set $\mathcal{T} = \{T_1, \dots, T_{n_T}\}$.
In addition, we also introduce the virtual environment partitions set $\mathcal{P} = \{P_1, \dots, P_{n_P}\}$ and the handout zones set $\mathcal{H} = \{H_1, \dots, H_{n_H}\}$. The former set consists of $n_P \geq n_R$ virtual partitions of the supervised environment; formally, we have that $P_k \subset \mathbb{R}^3$, $\bigcup_{k=1}^{n_P} P_k = \bigcup_{h=1}^{n_R} R_h = E$, and $P_k \cap P_\kappa = \emptyset$ (disjoint sets) with $k, \kappa \in \{1, \dots, n_P\}$, $k \neq \kappa$. We emphasize that each element of $\mathcal{P}$ can correspond either to a physical room ($n_P = n_R$) or to a part of a physical room ($n_P > n_R$). On the other side, the handout zones set is composed of $n_H$ environment portions corresponding to the transition areas between two adjacent virtual partitions. From a mathematical perspective, it thus holds that $H_p \subset \mathbb{R}^3$ and $H_p \cap H_\rho = \emptyset$ with $p, \rho \in \{1, \dots, n_H\}$, $p \neq \rho$, and $H_p \subseteq P_k \cup P_\kappa$, with $P_k$, $P_\kappa$ adjacent virtual partitions. In particular, hereafter, we use the notation $H_{k\kappa}$ ($= H_p$) to indicate the handout zone between the k-th and the $\kappa$-th virtual partition; more specifically, we have that $H_{k\kappa} = H^{k}_{k\kappa} \cup H^{\kappa}_{k\kappa}$, where $H^{k}_{k\kappa} = H_{k\kappa} \cap P_k$ and $H^{\kappa}_{k\kappa} = H_{k\kappa} \cap P_\kappa$, with $k, \kappa \in \{1, \dots, n_P\}$, $k \neq \kappa$.
All the introduced sets are listed in Table 1, where we also report the assumptions about their cardinality, many of which derive from the following statements regarding the considered scenario.
  • The high-resolution fixed cameras were used only to monitor the access points in order to guarantee quick and effective target detection when targets enter the environment, thanks to their increased performance, but also because of their cost. Thus, it holds that $n_{S_{HR}} = n_A$;
  • Characterized by ample FOVs at the cost of low resolution, the wide-angle fixed cameras were instead exploited to ensure the best coverage. Hence, we assumed that at least one visual sensor in the set $\mathcal{C}^{S_{WA}}$ is placed in each virtual partition and, in particular, that it is located so as to monitor the related handout zones. Consequently, it follows that $n_{S_{WA}} \geq n_P$;
  • Finally, the PTZ cameras were employed to enhance the VSN target tracking capabilities. Therefore, it is reasonable to assume that $n_D \geq n_P$.
At the same time, we highlight that no assumption is made regarding the specific cameras' placement within the environment: several existing algorithms allow determining the best sensor locations [9,25,26,27].

2.3. Targets’ Modeling

Adopting a control system approach, we modeled any target as a point particle acting in 3D space, thus characterized by a (time-varying) position in the global inertial frame $\mathcal{F}_W$, hereafter termed the world frame. Formally, the position of any j-th target, $j \in \{1, \dots, n_T\}$, at time t is identified by the vector $p_j(t) \in \mathbb{R}^3$.
Assuming then a first-order dynamics for all the targets, we have that the j-th target state at time t is described by the vector $x_j(t) = [\,p_j(t)^\top \ \dot{p}_j(t)^\top\,]^\top \in \mathbb{R}^6$, stacking its position and velocity in $\mathcal{F}_W$. Moreover, inspired by [16], we assumed that the introduced state evolves according to the following discrete-time dynamics with sampling time $T \in \mathbb{R}^+$:
$$x_j(t+1) = A_j\, x_j(t) + w_j(t), \qquad A_j = \begin{bmatrix} I_{3\times3} & T\, I_{3\times3} \\ 0_{3\times3} & I_{3\times3} \end{bmatrix} \in \mathbb{R}^{6\times6} \qquad (1)$$
where $0_{3\times3} \in \mathbb{R}^{3\times3}$ and $I_{3\times3} \in \mathbb{R}^{3\times3}$ denote the square null and identity matrix, respectively. The vector $w_j(t) \in \mathbb{R}^6$ in (1) represents an additive Gaussian noise; in particular, we assumed that $w_j(t) \sim \mathcal{N}(0_6, W_j)$, with $0_6 \in \mathbb{R}^6$ identifying the zero mean vector and $W_j \in \mathbb{R}^{6\times6}$ representing the j-th target (known) covariance matrix. The point particle modeling assumption, even if it appears simplistic, allows capturing the basic behavior of a moving subject and focusing on other aspects of interest such as the coordination and cooperation among the VSN devices. We further emphasize that, although depending on the specific case, it is generally possible to relate the output of any object detection algorithm to the assumed representation. For example, if a detected object is modeled by a bounding box, then the center of such a box can be exploited in the point particle model. The uncertainty characterizing this operation can be included in a comprehensive error term affecting the cameras' observations.
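To illustrate the adopted motion model, the following minimal Python sketch (not part of the original simulation code) propagates the discrete-time dynamics (1) for a single target; the sampling time matches the value used in Section 4.1, while the initial state and the noise covariance $W_j$ are placeholders.

```python
import numpy as np

# State-transition matrix A_j of the constant-velocity model (1)
T = 0.05                                  # sampling time (s), as in Section 4.1
A = np.eye(6)
A[:3, 3:] = T * np.eye(3)                 # position advances by T * velocity

W = 1e-3 * np.eye(6)                      # placeholder process-noise covariance W_j
rng = np.random.default_rng(0)

x = np.array([1.0, 2.0, 1.7, 0.5, 0.0, 0.0])   # [p_j; p_dot_j] in the world frame

trajectory = []
for _ in range(10):                       # ten steps of the dynamics (1)
    x = A @ x + rng.multivariate_normal(np.zeros(6), W)
    trajectory.append(x.copy())
```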

2.4. Cameras Modeling

In this work, every camera composing the given VSN was modeled as a rigid body having a (possibly time-varying) position and orientation, i.e., a pose, in the world frame. Note that these quantities are often referred to as camera extrinsic parameters in the literature. In detail, we denote by $\mathcal{F}_B$ the local frame in-built with the device, so that the x-axis points upward, the y-axis points to the right, and the z-axis points forward and is aligned with the device optical axis. Then, the i-th camera (time-invariant) position in $\mathcal{F}_W$ is identified by the vector $t_{W,i} \in \mathbb{R}^3$, while its orientation with respect to the world frame is represented by the rotation matrix $R_{W2B,i}(t) \in SO(3)$, (potentially) depending on the time t. In particular, we assumed that $R_{W2B,i}(t)$ results from the composition of three subsequent rotations around the axes of $\mathcal{F}_B$, namely $R_{W2B,i}(t) = R_z(\gamma_i(t))\, R_y(\beta_i(t))\, R_x(\alpha_i(t))$ with $\alpha_i(t), \beta_i(t), \gamma_i(t) \in [-\pi, \pi]$.
For all the static visual sensors, the orientation is fixed and constant over time, namely $R_{W2B,i}(t) = R_{W2B,i}$, $i \in \{1, \dots, n_S\}$. On the other hand, the dynamic visual sensors are characterized by a partially time-varying orientation. Indeed, a PTZ camera can modify its orientation through a pan and/or a tilt movement, namely through a rotation around the x-axis and/or the y-axis of its $\mathcal{F}_B$ of a certain controllable pan and/or tilt angle, respectively. From a mathematical perspective, we have that $R_{W2B,i}(t) = R_{W2B,i}(\alpha_i(t), \beta_i(t))$, $i \in \{1, \dots, n_D\}$, with $\alpha_i(t)$ and $\beta_i(t)$ hereafter referred to as the i-th camera pan and tilt angles, respectively.
Without loss of generality, some standard assumptions were made also on the intrinsic parameters of all the considered visual sensors: for any camera, the focal length f was assumed to be unitary; no distortion was taken into account; the FOV is defined by a pair of angles determining its height and width. We remark that the PTZ cameras can also dynamically vary their zoom setting; hence, they are characterized by three controllable degrees of freedom. Hereafter, the zoom parameter of the i-th dynamic visual sensor, $i \in \{1, \dots, n_D\}$, is referred to as $\zeta_i \geq 0$. In addition, we took into account the maximum distance at which a target can be detected with a satisfying quality level. Hereafter, this is associated with a minimum pixel density value. More in detail, for any camera, we assume that the pixel density characterizing the FOV is computed at the distance (along the optical axis) of a certain target: when such a density is lower than a minimum threshold, the target is considered not visible.
Then, when the j-th target, $j \in \{1, \dots, n_T\}$, is observed by the i-th visual sensor, $i \in \{1, \dots, n_C\}$, at time t, its position is projected on the camera image plane. Formally, introducing the (nonlinear) function $h(\cdot): \mathbb{R}^3 \to \mathbb{R}^2$ mapping any vector $x = [\,x_1\ x_2\ x_3\,]^\top \in \mathbb{R}^3$ into its projection onto the 2D plane $h(x) = [\,x_1/x_3\ \ x_2/x_3\,]^\top \in \mathbb{R}^2$, we have that the position $z_{ij}(t) \in \mathbb{R}^2$ of the j-th target on the i-th camera image plane evolves as follows:
$$z_{ij}(t) = h\Big( R_{W2B,i}(t)\, \big( [\,I_{3\times3}\ \ 0_{3\times3}\,]\, x_j(t) - t_{W,i} \big) \Big) + v_{ij}(t) \qquad (2)$$
where the vector $v_{ij}(t) \in \mathbb{R}^2$ represents the additive noise deriving from the projection and measurement errors of the i-th camera. We assumed that $v_{ij}(t) \sim \mathcal{N}(0_2, V_i(t))$, with $0_2 \in \mathbb{R}^2$ and $V_i(t) \in \mathbb{R}^{2\times2}$; in particular, the covariance matrix was modeled as a diagonal matrix whose trace decreases proportionally to the zoom magnitude when considering PTZ cameras. Note that the camera orientation $R_{W2B,i}(t)$ in (2) is reported as a time-varying quantity since the provided observation model is valid both for the static and the dynamic visual sensors.
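As a minimal sketch of the observation model (2), assuming a unit focal length and using purely illustrative camera poses and noise levels (the rotation convention follows Section 2.4):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def h(x):
    """Perspective projection onto the image plane (unit focal length)."""
    return x[:2] / x[2]

def observe(x_target, t_cam, alpha, beta, gamma, V, rng):
    """Noisy image-plane observation z_ij(t) of a target state, following (2)."""
    R = rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)       # R_W2B,i
    p = x_target[:3]                                    # [I 0] x_j(t): target position
    return h(R @ (p - t_cam)) + rng.multivariate_normal(np.zeros(2), V)

rng = np.random.default_rng(1)
x_j = np.array([0.2, 0.1, 3.0, 0.0, 0.0, 0.0])          # hypothetical target state
z = observe(x_j, t_cam=np.zeros(3), alpha=0.05, beta=-0.1, gamma=0.0,
            V=1e-3 * np.eye(2), rng=rng)
```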

2.5. VSN Modeling

Motivated by the intent of proposing a distributed solution, we assumed that any i-th visual sensor, $i \in \{1, \dots, n_C\}$, composing the given VSN can communicate with the set of cameras placed in the same partition and in the adjacent ones. Formally, defining $\mathcal{C}^{S}_{P_k}$ and $\mathcal{C}^{D}_{P_k}$ as the sets of static and dynamic visual sensors located in the k-th partition, respectively, we have that all the devices placed in $P_k$ constitute the set $\mathcal{C}_{P_k} = \mathcal{C}^{S}_{P_k} \cup \mathcal{C}^{D}_{P_k}$, $k \in \{1, \dots, n_P\}$. Then, assuming that $C_i \in \mathcal{C}_{P_k}$, we have that the cameras interacting with the i-th one at time t correspond to the set $\mathcal{C}_i(t) \subseteq \mathcal{C}_{P_k} \cup \mathcal{C}_{P_\kappa}$, $P_k$ and $P_\kappa$ being adjacent partitions, $k, \kappa \in \{1, \dots, n_P\}$, $k \neq \kappa$. Note that we implicitly made the assumption that all cameras in the network are aware of the partition wherein they are located and also of the related handout zones.
Figure 1 aims at clarifying the introduced communication setup through a toy example fulfilling all the assumptions of the scenario taken into account in this work. In the following, the 3D simulation environment is shown using its projection on the 2D floor. One can, indeed, observe that the reported example envisages a structured environment having a single access point, composed of $n_R = 4$ rooms (corresponding to the blue, red, green, and orange areas) and virtually divided into $n_P = 5$ partitions connected by $n_H = 5$ handout zones (dashed portions). We emphasize that the partitions $P_2$ and $P_3$ jointly cover the area associated with the orange room. The VSN is made up of $n_{S_{HR}} = 1$ high-resolution fixed camera (represented by the magenta square) monitoring the unique access point, $n_{S_{WA}} = 5$ wide-angle static visual sensors placed in the environment in order to guarantee the maximum area coverage (identified by the cyan squares), and $n_D = 8$ PTZ cameras located with the purpose of enhancing the network tracking capabilities (denoted by the gray circles). We point out that the FOV of the visual sensor $C^{S_{WA}}_2$ can cover a portion of both the partitions $P_2$ and $P_3$; in addition, we remark that all the handout zones are potentially monitored by at least one dynamic visual sensor. On the right panel of Figure 1, we highlight the devices' interaction in terms of information exchange: as illustrated, the cameras physically placed in the same partition can communicate among themselves, and they can also share data with the visual sensors located in the adjacent partitions. To conclude, we emphasize that the communication graph is imposed by the considered VSN structure. Similar graph-based descriptions, but with different connection rules among nodes, were employed in [28,29] to characterize the environment structure.
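The communication rule just described can be encoded directly from the partition map. The sketch below uses a hypothetical partition adjacency and camera placement (not the actual layout of Figure 1) to build, for each camera, the set of devices it may exchange data with.

```python
# Hypothetical partition adjacency (partition id -> adjacent partition ids)
adjacency = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
# Hypothetical camera placement (camera id -> partition id)
camera_partition = {
    "C1_SHR": 1, "C1_SWA": 1, "C1_D": 1,
    "C2_SWA": 2, "C2_D": 2,
    "C3_SWA": 3, "C3_D": 3, "C4_D": 3,
}

def communication_set(cam_id):
    """Set C_i: cameras in the same partition as cam_id or in an adjacent one."""
    k = camera_partition[cam_id]
    allowed = {k} | adjacency[k]
    return {c for c, part in camera_partition.items() if part in allowed and c != cam_id}

print(communication_set("C2_D"))   # devices in partition 2 and in the adjacent partitions 1 and 3
```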

3. Real-Time Surveillance and Multi-Target Tracking

Accounting for the scenario described in the previous section, we present here a distributed strategy aiming at ensuring the efficient real-time surveillance of the considered environment and the tracking of the n T targets, by means of the given VSN.
The designed procedure involves three principal actions:
  • The targets’ detection, executed by both the static and dynamic visual sensors with the intent of extracting information regarding the presence of one or more targets;
  • The targets’ position estimation and prediction performed by all the fixed and PTZ cameras having detected one or more targets, mainly to identify the devices involved in the tracking task in the near future;
  • The PTZ parameter selection, carried out by all the dynamic visual sensors that are already or soon engaged in the tracking task, with the purpose of optimizing the real-time performance.
We specify that the first and second actions were performed at a frequency of $1/T$, while the PTZ parameters were optimized every $\ell \geq 1$ steps of duration T, namely at the slower frequency of $1/(\ell T)$ with respect to the previous ones. The parameter $\ell$ was selected in order to respect the computational limits of the system while guaranteeing its promptness in reacting to the tracking requirements.
In the rest of the section, the attention is focused on the outlined methods for the estimation and prediction of the targets' position and for the determination of the most suited parameters for the PTZ cameras. Conversely, we do not explicitly account for the targets' detection, assuming that this action is accurately performed by resting upon one of the existing and well-proven techniques. On the other hand, we remark that the designed solution permits the parallelization of the computations. In detail, observing that both the targets' detection and the PTZ parameter selection require a high computational burden, these two operations can be concurrently executed by distributing the workload between two computing cores, if possible. Indeed, the optimization process depends only on the information gathered at every $\ell$-th step about the predicted target state.
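To fix ideas on the timing of the three actions, a minimal scheduling sketch is reported below; the function names are placeholders and do not correspond to an actual implementation.

```python
T = 0.05      # sampling time of detection and estimation (s)
ELL = 20      # PTZ parameters are refreshed every ELL steps, i.e., at rate 1/(ELL * T)

def detect_targets():              # placeholder for the chosen detection technique
    return []

def update_estimates(detections):  # placeholder for the distributed EKF of Section 3.1
    pass

def select_ptz_parameters():       # placeholder for the negotiation of Section 3.2
    pass

for step in range(1000):           # main loop: one iteration every T seconds
    detections = detect_targets()
    update_estimates(detections)
    if step % ELL == 0:            # slower rate for the PTZ reconfiguration
        select_ptz_parameters()
```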

3.1. Targets’ Position Estimation and Prediction

To efficiently fulfill the tracking task, it is well known that a fundamental step consists of the accurate estimation of the current position of the targets and also of their future trajectory. In the proposed strategy, we address this issue by suitably extending the distributed consensus-based EKF approach presented in [16] to the case of targets moving in 3D space (rather than 2D).
To better clarify the adopted approach, summarized in Algorithm 1, we focus on a generic j-th target, $j \in \{1, \dots, n_T\}$, assuming that this is detected by a set $\mathcal{C}_{T_j}(t) \subseteq \mathcal{C}$ of cameras in the network. Observe that, according to (2), any i-th device in the aforementioned set can retrieve the projection $z_{ij}(t)$ of the target position onto its image plane jointly with the corresponding covariance $V_i$ (Line 2). This then allows computing the quantities $r_{ij}$ and $U_{ij}$ introduced in [16] (Lines 4–5). These are subsequently communicated to the devices set $\bar{\mathcal{C}}_i(t) \subseteq \mathcal{C}$, distinguishing between the following situations (Lines 6–8).
     Assuming that $C_i \in \mathcal{C}_{P_k}$, $k \in \{1, \dots, n_P\}$, we have that:
  • If the target is in $P_k \setminus H_{k,\kappa}$, $H_{k,\kappa} \in \mathcal{H}$, then the i-th device communicates with all the other cameras in the same partition, namely $\bar{\mathcal{C}}_i(t) = \mathcal{C}_{P_k}$;
  • If the target is in $H^{k}_{k,\kappa}$ for any $\kappa \in \{1, \dots, n_H\}$, then the i-th device communicates with all the cameras in the same and in the adjacent partition $P_\kappa$, i.e., $\bar{\mathcal{C}}_i(t) = \mathcal{C}_{P_k} \cup \mathcal{C}_{P_\kappa}$.
Algorithm 1 Distributed Consensus-based EKF
1:  for any detected target $T_j$ do
2:     compute $z_{ij}(t)$ as in (2) and the corresponding $V_i$
3:     compute $H_{ij}(t) \in \mathbb{R}^{2\times6}$ as $H_{ij}(t) = \nabla_{\bar{x}_{ij}(t)}\, h\big( R_{W2B,i}(t)\, ( [\,I_{3\times3}\ \ 0_{3\times3}\,]\, \bar{x}_{ij}(t) - t_{W,i} ) \big)$
4:     compute $r_{ij}(t) = H_{ij}^\top(t)\, V_i^{-1}\, z_{ij}(t) - H_{ij}^\top(t)\, V_i^{-1}\, h_i(\bar{x}_{ij}(t))$ as in [16]
5:     compute $U_{ij}(t) = H_{ij}^\top(t)\, V_i^{-1}\, H_{ij}(t)$ as in [16]
6:     data exchange
7:        transmit $m_{ij}(t) = (r_{ij}(t), U_{ij}(t), \bar{x}_{ij}(t))$ to any $C_\iota \in \bar{\mathcal{C}}_i(t)$
8:        receive $m_{\iota j}(t) = (r_{\iota j}(t), U_{\iota j}(t), \bar{x}_{\iota j}(t))$ from any $C_\iota \in \mathcal{C}_{T_j}(t)$
9:     information fusion
10:       compute $y_{ij}(t) = \sum_{C_\iota \in \mathcal{C}_i(t)} r_{\iota j}(t)$
11:       compute $S_{ij}(t) = \sum_{C_\iota \in \mathcal{C}_i(t)} U_{\iota j}(t)$
12:    EKF - a posteriori estimation
13:       compute $M_{ij}(t) = \big( P_{ij}^{-1}(t) + S_{ij}(t) \big)^{-1}$ (error covariance matrix)
14:       compute $\hat{x}_{ij}(t) = \bar{x}_{ij}(t) + M_{ij}(t)\, y_{ij}(t) + \big( \| M_{ij}(t) \| + 1 \big)^{-1} M_{ij}(t) \sum_{C_\iota \in \mathcal{C}_{T_j}(t)} \big( \bar{x}_{\iota j}(t) - \bar{x}_{ij}(t) \big)$ (target state estimate)
15:    EKF - a priori estimation
16:       update $P_{ij}(t) = A_j\, M_{ij}(t)\, A_j^\top + W_j$ (error covariance matrix)
17:       update $\bar{x}_{ij}(t) = A_j\, \hat{x}_{ij}(t)$ (target state estimate)
18: end for
The exchanged data are required by all the cameras in $\mathcal{C}_{T_j}(t)$ to initialize and/or update an EKF needed to retrieve a suitable estimate $\bar{x}_{ij}(t) \in \mathbb{R}^6$ of the j-th target state at time t (Lines 12–17). Note that the filter initialization can be performed exploiting either the received information or the environment knowledge, as, for instance, the size and position of the rooms' access points. It is straightforward that the accuracy of such an estimate is affected by the number of cameras in $\mathcal{C}_{T_j}(t)$. Moreover, it is possible to prove that, relying on the consensus approach, it holds that $\bar{x}_{ij}(t) = \bar{x}_j(t)$ for any $C_i \in \mathcal{C}_{T_j}(t)$, namely the target state estimate converges to the same value for all the devices detecting $T_j$. For this reason, hereafter, we drop the dependence on the i-th camera when referring to the EKF target state estimate.
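A compact sketch of one correction/prediction cycle of Algorithm 1 (Lines 12–17) for a single camera and a single target is reported below; the consensus gain follows the $(1 + \|M\|)^{-1}$ form reconstructed in Line 14, and the fused quantities y and S are assumed to have already been obtained by summing the received $r$ and $U$ contributions (Lines 10–11).

```python
import numpy as np

def consensus_ekf_step(x_bar, P, A, W, y, S, neighbor_x_bars):
    """One a posteriori + a priori cycle of the distributed consensus-based EKF.

    x_bar : a priori state estimate (6,)
    P     : a priori error covariance (6, 6)
    y, S  : fused information vector/matrix from the neighborhood (Lines 10-11)
    neighbor_x_bars : a priori estimates received from the other cameras
    """
    # a posteriori estimation (Lines 13-14)
    M = np.linalg.inv(np.linalg.inv(P) + S)
    consensus = sum((xb - x_bar for xb in neighbor_x_bars), np.zeros(6))
    gamma = 1.0 / (1.0 + np.linalg.norm(M))        # consensus gain (assumed form)
    x_hat = x_bar + M @ y + gamma * (M @ consensus)
    # a priori estimation for the next step (Lines 16-17)
    P_next = A @ M @ A.T + W
    x_bar_next = A @ x_hat
    return x_hat, x_bar_next, P_next

# Example call with placeholder values
A = np.eye(6); A[:3, 3:] = 0.05 * np.eye(3)
x_hat, x_bar_next, P_next = consensus_ekf_step(
    x_bar=np.zeros(6), P=np.eye(6), A=A, W=1e-3 * np.eye(6),
    y=np.zeros(6), S=0.1 * np.eye(6), neighbor_x_bars=[np.zeros(6)])
```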
Our solution entails the exploitation of the computed target state estimate to determine (even roughly) a prediction of its evolution after $\ell > 1$ time steps. Indeed, exploiting the target dynamics (1), we obtain:
$$\bar{x}^{\ell}_j(t) = \begin{cases} \bar{x}_j(t) & \text{if } \bar{p}^{\ell}_j(t) \notin E \ \text{ or } \ \bar{p}^{\ell}_j(t) \notin H_{k\kappa}\ \&\ \bar{p}^{\ell}_j(t) \in P_\kappa \\ \bar{x}_j(t+\ell) = A_j^{\ell}\, \hat{x}_j(t) & \text{otherwise,} \end{cases} \qquad (3)$$
where $\bar{x}^{\ell}_j(t) = [\,\bar{p}^{\ell}_j(t)^\top \ \dot{\bar{p}}^{\ell}_j(t)^\top\,]^\top \in \mathbb{R}^6$ is the $\ell$-steps-ahead j-th target state prediction. Note that, if the predicted target position $\bar{p}^{\ell}_j(t) \in \mathbb{R}^3$ exits from the surveilled environment or if it changes partition without being in the corresponding handout zone, then the prediction is considered not valid and is substituted by the actual estimated position.
To conclude, we point out that the tracking performance is affected by the selected sampling time. Indeed, small values of T might imply extremely high computational burden, whereas large values of T might compromise the system promptness in the case of fast-moving targets. A good choice is to select the sampling time taking into account the average speed of the targets.

3.2. PTZ Parameter Selection

One of the most original aspects of the proposed surveillance and tracking solution rests upon the use of a heterogeneous VSN, through a smart exploitation of the adjustable parameters of the PTZ cameras. Hereafter, we illustrate the PTZ parameter selection procedure designed to determine both the orientation and the zoom value of the dynamic visual sensors in the network, with the purpose of improving the tracking capability of the whole camera group. In detail, inspired by [16], we tackled the selection of the PTZ cameras' parameters through the iterative solution of a suitable maximization problem. Clearly, it is convenient to consider only a finite discrete number of PTZ parameter values, since small changes do not yield relevant differences in the cameras' FOV.

3.2.1. PTZ Parameter Selection Procedure

To provide a clearer explanation, we first focus on the generic single j-th target, $j \in \{1, \dots, n_T\}$. Based on the computed prediction (3) and exploiting the information on the network topology, it is possible to identify the set of both fixed and PTZ cameras that could potentially detect the considered target at the following $\ell$-th time step. We indicate such a set as $\mathcal{C}_{T_j}(t) = \mathcal{C}_{T_j,S}(t) \cup \mathcal{C}_{T_j,D}(t)$, distinguishing between the static and dynamic visual sensor subsets. In particular, we specify that, if the target predicted position $\bar{p}^{\ell}_j(t)$ is in $P_k \setminus H_{k,\kappa}$, $H_{k,\kappa} \in \mathcal{H}$, $k, \kappa \in \{1, \dots, n_P\}$, $k \neq \kappa$, then the set $\mathcal{C}_{T_j}(t)$ includes only cameras placed in the partition $P_k$. Instead, if after $\ell$ time steps the target is estimated to be in $H^{k}_{k,\kappa}$, then the set $\mathcal{C}_{T_j}(t)$ contains all the visual sensors located in $P_k$ and only the static ones of $P_\kappa$. Formally, in the former scenario, we have that $\mathcal{C}_{T_j}(t) \subseteq \mathcal{C}_{P_k}$, while in the latter one, $\mathcal{C}_{T_j}(t) \subseteq \mathcal{C}_{P_k} \cup \mathcal{C}^{S}_{P_\kappa}$.
The PTZ parameter selection process initially requires communication among all the cameras in $\mathcal{C}_{T_j}(t)$. The involved devices share information about their position, orientation, as well as their zoom value in the case of PTZ cameras. Subsequently, all the PTZ cameras in $\mathcal{C}_{T_j,D}(t)$ compute a certain utility function depending on the received information. Then, for a fixed number $m \geq 1$ of consecutive iterations, a single dynamic visual sensor at a time, randomly selected from a uniform distribution over the set $\mathcal{C}_{T_j,D}(t)$, computes the optimal values for its PTZ parameters (maximizing the utility function) and broadcasts this information to the other cameras in $\mathcal{C}_{T_j,D}(t)$, which correspondingly update their utility function. The whole iterative procedure can be interpreted as a negotiation phase.
For sufficiently large values of m, such a negotiation phase allows the selection process to converge at least towards a local maximum. In particular, as proven in [16], the convergence is ensured by the game theory results on the Nash equilibrium. When both the number of cameras in a partition and the number of PTZ parameters to select are high, it could be advantageous to rely on a stochastic method for the PTZ parameter selection (for example, in [16], at each negotiation step, a softmax function and a temperature variable were used to generate a probability distribution over the utilities of the selected camera's available configurations). This allows avoiding local maxima, but turns out to be computationally demanding. On the contrary, when both the number of visual sensors and the number of PTZ parameters to select are low (and, therefore, the risk of getting stuck in a local maximum is low) or when a sub-optimal solution is accepted to the benefit of a faster convergence, then a greedy choice method could be preferred.
Note also that, for the PTZ parameter selection, the dynamic visual sensors do not exchange data with the PTZ cameras placed in other partitions. This implies that the negotiation phase can be simultaneously performed in more than one partition, coping with the presence of multiple targets.
Finally, we emphasize that the selection process's performance is conditioned by the value assigned to $\ell$: the number of prediction time steps needs to be compatible with the value of m and with the cameras' computational and actuation time.
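A minimal sketch of the greedy variant of the negotiation phase is given below: at each of the m iterations, a randomly chosen PTZ camera re-optimizes its own discrete configuration given the configurations currently announced by the others. The utility callback is an abstraction of (4) and must be supplied by the caller.

```python
import itertools
import random

def negotiate(cameras, utility, pan_set, tilt_set, zoom_set, m=6, seed=0):
    """Greedy negotiation over discrete PTZ configurations.

    cameras : ids of the PTZ cameras taking part in the negotiation
    utility : callable(cam_id, config, others) -> float, abstraction of f_i in (4),
              where 'others' maps the remaining ids to their current configurations
    Returns the (pan, tilt, zoom) configuration selected by each camera.
    """
    rng = random.Random(seed)
    configs = {c: (pan_set[0], tilt_set[0], zoom_set[0]) for c in cameras}  # initial guess
    grid = list(itertools.product(pan_set, tilt_set, zoom_set))
    for _ in range(m):
        c = rng.choice(cameras)                      # one camera negotiates at a time
        others = {o: cfg for o, cfg in configs.items() if o != c}
        configs[c] = max(grid, key=lambda cfg: utility(c, cfg, others))  # greedy choice
    return configs
```

In the stochastic variant recalled above, the greedy max would be replaced by sampling the new configuration from a softmax distribution over the grid utilities.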

3.2.2. Utility Function Definition

The determination of the PTZ parameters by any dynamic camera in $\mathcal{C}_{T_j,D}(t)$ relies on the evaluation of the aforementioned utility function. Such a function is computed taking into account all the targets that the device is supposed to detect at the following $\ell$-th time step. Denoting this target set as $\mathcal{T}_i(t) \subseteq \mathcal{T}$, we define the i-th camera utility function as:
$$f_i(\alpha_i(t), \beta_i(t), \zeta_i(t)) = \sum_{T_j \in \mathcal{T}_i(t)} q_j\, f_{ij}(\alpha_i(t), \beta_i(t), \zeta_i(t)) \qquad (4)$$
where the triplet $(\alpha_i(t), \beta_i(t), \zeta_i(t))$ summarizes the PTZ camera parameters, the scalar $q_j \geq 0$ constitutes the weight assigned to the j-th target in order to prioritize (or not) its tracking, and the function $f_{ij}(\alpha_i(t), \beta_i(t), \zeta_i(t))$ depends only on the j-th target and on the position, orientation, and, eventually, zoom value of the visual sensors belonging to the set $\mathcal{C}_{T_j}(t)$. More in detail, $f_{ij}(\alpha_i(t), \beta_i(t), \zeta_i(t))$ is defined as the weighted sum of the terms deriving from the adoption of $l$ different criteria, namely:
$$f_{ij}(\alpha_i(t), \beta_i(t), \zeta_i(t)) = \sum_{l} r_l\, g_l(\alpha_i(t), \beta_i(t), \zeta_i(t)) \qquad (5)$$
with the weights $r_l$ and the terms $g_l(\alpha_i(t), \beta_i(t), \zeta_i(t))$ inferred as explained in the following.
In addition to those proposed in [16], in this work, we account for these criteria:
  • Distance from the center: This criterion implies the evaluation of the distance of the predicted position of the target from the center of the camera image plane. As a consequence, in this case, we have that:
    $$g_1(\alpha_i(t), \beta_i(t), \zeta_i(t)) = -\frac{1}{|\mathcal{C}_{T_j}(t)|} \sum_{C_\iota \in \mathcal{C}_{T_j}(t)} \Big( \chi_\iota\, p + a\, d(\alpha_\iota(t), \beta_\iota(t), \zeta_\iota(t)) \Big) \qquad (6)$$
    with $d(\cdot) = \big\| [\,I_{2\times2}\ \ 0_{2\times1}\,]\, R_{W2B,\iota}(\alpha_\iota(t), \beta_\iota(t))\, (\bar{p}^{\ell}_j - t_{W,\iota}) \big\|$.
    In (6), the scalar $a \in [0,1]$ permits weighting the importance given to the distance $d(\cdot)$, and it is set to one when the i-th camera can detect the target without modifying its PTZ parameters (Case a). On the other hand, the condition $a < 1$ is in place when the i-th camera needs to modify its current orientation and/or zoom parameter in order to detect the target (Case b). Note that, in this last scenario, a penalty $p > 0$ is also assigned to the device thanks to the introduction of the indicator function $\chi_\iota$, which takes a value of one in correspondence to Case b and zero to Case a;
  • View quality: This criterion intends to favor zoomed frames up to a minimum FOV height $h_{min} > 0$, which is measured on the plane orthogonal to the optical axis and intersecting the j-th target position. Hence, we account for:
    $$g_2(\alpha_i(t), \beta_i(t), \zeta_i(t)) = -\sum_{C_\iota \in \mathcal{C}_{T_j}(t)} \chi_\iota\, h_\iota(\alpha_\iota(t), \beta_\iota(t), \zeta_\iota(t)) \qquad (7)$$
    with $h_\iota(\cdot) = \big\| [\,0\ \ 0\ \ 2\,]\, R_{W2B,\iota}(\alpha_\iota(t), \beta_\iota(t))\, (\bar{p}^{\ell}_j - t_{W,\iota}) \big\| \, \tan\!\big( \tfrac{1}{2} \beta^{FOV}_\iota(t) \big)$, where $\beta^{FOV}_\iota(t)$ denotes the (zoom-dependent) vertical FOV angle of the $\iota$-th camera, and where $\chi_\iota = 1$ if $h_\iota(\cdot) > h_{min}$ and $\chi_\iota = 0$ otherwise;
  • Number of cameras per target: This criterion aims at assigning a penalty $p > 0$ when the j-th target is estimated to be detected by less than $n_{min}$ cameras or more than $n_{max}$ cameras. Hence, we have that:
    $$g_3(\alpha_i(t), \beta_i(t), \zeta_i(t)) = \begin{cases} 0 & \text{if } n_{min} \leq |\mathcal{C}_{T_j}(t)| \leq n_{max} \\ -p & \text{otherwise;} \end{cases}$$
  • Minimum parameter adjustments: According to this criterion, minimum parameter adjustments with respect to the previous update step are advisable. Introducing the vector $[\,\alpha_i^-\ \beta_i^-\ \zeta_i^-\,]$ stacking the PTZ parameters of the i-th camera obtained at the previous selection step, it follows that:
    $$g_4(\alpha_i(t), \beta_i(t), \zeta_i(t)) = -\big\| [\,\alpha_i(t)\ \beta_i(t)\ \zeta_i(t)\,] - [\,\alpha_i^-\ \beta_i^-\ \zeta_i^-\,] \big\|.$$
To conclude, we remark that, when accounting for different rooms, only static visual sensors are allowed to communicate. For this reason, it is possible to perform the selection of the PTZ cameras in separate partitions simultaneously.
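For completeness, a sketch of the two criteria later adopted in Section 4 (distance from the center and view quality) is reported below; it follows the sign conventions reconstructed in (6) and (7), uses the (zoom-dependent) vertical FOV angle for the FOV-height computation, and all the geometric inputs and parameter values are purely illustrative.

```python
import numpy as np

def distance_from_center(cam_positions, cam_rotations, needs_move, p_target,
                         a=0.5, penalty=1e3):
    """Criterion g_1 in (6): average (penalized) off-axis distance of the predicted
    target position for the cameras expected to observe it."""
    terms = []
    for t_cam, R, chi in zip(cam_positions, cam_rotations, needs_move):
        d = np.linalg.norm((R @ (p_target - t_cam))[:2])   # distance from the optical axis
        terms.append(chi * penalty + (a if chi else 1.0) * d)
    return -np.mean(terms)                                 # maximized, hence the minus sign

def view_quality(cam_positions, cam_rotations, fov_v_angles, p_target, h_min=1.0):
    """Criterion g_2 in (7): penalize FOV heights, at the target distance, above h_min."""
    g = 0.0
    for t_cam, R, fov_v in zip(cam_positions, cam_rotations, fov_v_angles):
        depth = (R @ (p_target - t_cam))[2]                # distance along the optical axis
        h = 2.0 * depth * np.tan(0.5 * fov_v)              # FOV height at that distance
        if h > h_min:                                      # indicator chi in (7)
            g -= h
    return g
```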

4. Application Scenario

We note that the framework proposed in this work allows coping with a wide range of different (and potentially conflicting) objectives. This is possible through a convenient choice of the utility function terms and of the corresponding weights. Nonetheless, in this section, the attention is focused on the description of a specific application scenario. This is motivated by the intent of investigating the performance of the designed solution. In detail, we considered an application case wherein the necessary tradeoff between the high-resolution and high-precision tracking requirements emerges.
We considered the indoor environment depicted in Figure 1. This is physically divided into n R = 4 portions, namely a corridor (where the main entrance to the surveilled area is located) and three rooms accessible from the corridor. However, n P = 5 virtual partitions were taken into account. Indeed, two virtual partitions are associated with the environment portion physically corresponding to the corridor; this choice was motivated by the space geometry and by the intent of preventing camera view occlusions.
Figure 1 reports also the assumed cameras’ placement in the environment. We highlight that the outlined framework allows verifying the simultaneous PTZ parameter selection for dynamic devices associated with different partitions.

4.1. VSN Insights

We emphasize that partitions $P_1$, $P_2$, and $P_3$ are populated by the minimum number of visual sensors that guarantees multi-target tracking; partition $P_4$ is surveilled only by a (wide-angle) static and a dynamic device; four cameras are located in correspondence to partition $P_5$. Note that $P_4$ and $P_5$ represent the most critical and the most favorable situations in the considered framework, respectively.
More in detail, one can observe that, as highlighted in Figure 2a, the static visual sensors were placed in order to guarantee the monitoring of the whole environment. Nonetheless, different features were assumed for these cameras, as reported in Table 2. Observe that the high-resolution device aimed at monitoring the environment access point is characterized by a limited FOV. Concerning the PTZ cameras, instead, these are supposed to be located so as to ensure that the volume of each partition can be approximately entirely covered by at least a couple of these devices, except for partition $P_4$, as depicted in Figure 2b. The considered PTZ cameras were not all identical; their differences are reported in Table 3, where the pan and tilt ranges identify the extreme achievable angles when moving with respect to the initial configuration. Note that the sensors placed in the corridor are characterized by a smaller pan range as compared to the ones in the other rooms, which can span a larger area.
In the simulation, the FOV of each camera is also characterized by a maximum distance at which a target can be seen. This value can be computed starting from each camera's resolution, its horizontal and vertical FOV angles, and the minimum pixel density at which a target is considered to be detectable. Clearly, for dynamic cameras, this maximum distance changes depending on the zoom magnitude. The minimum density we considered was 3 pixels per cm² (ppcm).
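As an example of how this maximum distance can be derived, the sketch below inverts the pixel-density relation for a hypothetical camera; the resolution and FOV values are placeholders and do not correspond to the entries of Tables 2 and 3.

```python
import numpy as np

def max_detection_distance(res_w, res_h, fov_h, fov_v, min_ppcm=3.0):
    """Largest distance (m), along the optical axis, at which the pixel density on the
    plane orthogonal to the optical axis stays above min_ppcm (pixels per cm^2)."""
    # At distance z, the FOV covers a rectangle of (2 z tan(fov_h/2)) x (2 z tan(fov_v/2)) m,
    # i.e., an area of 4e4 * z^2 * tan(fov_h/2) * tan(fov_v/2) cm^2 shared by res_w*res_h pixels.
    area_coeff = 4.0e4 * np.tan(fov_h / 2.0) * np.tan(fov_v / 2.0)
    return np.sqrt(res_w * res_h / (min_ppcm * area_coeff))

# Hypothetical 1920x1080 camera with a 90 x 60 degree FOV
z_max = max_detection_distance(1920, 1080, np.deg2rad(90), np.deg2rad(60))
```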
Taking into account a maximum velocity of 4 m/s for the targets, we assume that all the cameras composing the VSN acquire new data every $T = 50$ ms. In addition, for any i-th static sensor, $i \in \{1, \dots, n_S\}$, we select the covariance matrix of its observation error in (2) as the diagonal matrix $V_i(t) = 10^{-3}\, I_{3\times3}$. In doing this, when projecting the target position on the camera image plane, the maximum error is approximately 9.5 cm for a target at a distance of 1 m. Conversely, for any $\iota$-th PTZ camera, $\iota \in \{1, \dots, n_D\}$, the covariance matrix depends on the value of its zoom parameter as $V_\iota(t) = (10^{-3}/\zeta_\iota(t))\, I_{3\times3}$. Note that we suppose that the zoom parameter can vary in the range $[1, 3]$ with unitary step for all the dynamic devices (Table 3). The pan and tilt angles, instead, can be updated in different ranges for the various cameras, although the update step is fixed to 7.5 degrees for all the devices. We also assume that such angular movements are achieved in 0.5 s per step: this constitutes an arbitrary choice, even though, in the following, it is shown that reasonably longer movement times do not considerably affect the tracking performance.

4.2. PTZ Parameter Selection Insights

In the following, the PTZ parameter selection is performed by relying only on two of the criteria presented in Section 3.2, namely the distance from the center and the view quality criterion. This choice was motivated by the results of a preliminary comparison of all the proposed criteria, jointly with those described in [16]: the two selected ones constitute the best tradeoff in terms of both effectiveness and computational burden. We remark that the tracking criterion introduced in [16] and the outlined distance from the center criterion serve the same purpose. Nonetheless, the former requires complex computations to minimize the covariance matrix of the target state estimate, while the latter ensures good tracking performance just by trying to keep the targets as centered as possible in the image plane, only involving the computation of the distance from the optical axis through a norm. As regards the distance from the center criterion, the penalty term and the weight factor introduced in (6) were set to $p = 10^3$ and $a = 0.5$, respectively. As far as the view quality criterion is concerned, instead, the parameter $h_{min}$ in (7) was fixed to 1 m. We highlight that the purpose of the studied scenario was to obtain high-resolution shots of the targets while maintaining a good tracking performance for all of them. Since the view quality criterion favors zoomed framings of the targets and the distance from the center criterion improves the tracking precision, just by combining these two simple rules, it is possible to obtain the desired network behavior.
In light of the given premises, the utility function computed by any i-th PTZ camera, $i \in \{1, \dots, n_D\}$, in correspondence to the generic j-th target, $j \in \{1, \dots, n_T\}$, results in being:
$$f_{ij}(\alpha_i(t), \beta_i(t), \zeta_i(t)) = r_1\, g_1(\alpha_i(t), \beta_i(t), \zeta_i(t)) + r_2\, g_2(\alpha_i(t), \beta_i(t), \zeta_i(t))$$
with $g_1(\alpha_i(t), \beta_i(t), \zeta_i(t))$ and $g_2(\alpha_i(t), \beta_i(t), \zeta_i(t))$ defined as in (6) and (7), respectively, and $r_1 = r_2 = 1$, namely without prioritizing either of the two selected criteria. The utility function that the i-th PTZ camera is required to maximize in order to determine its next PTZ parameters is thus $f_i(\alpha_i(t), \beta_i(t), \zeta_i(t))$, defined in (4). In particular, hereafter, we assume $q_j = 1$ for any $j \in \{1, \dots, n_T\}$.
We remark that the PTZ parameter selection can be simultaneously performed by devices corresponding to different partitions. Moreover, since in our simulation framework at most three dynamic visual sensors are placed in each partition, for the reasons explained in Section 3.2.1, a greedy selection policy over the PTZ parameters was employed. Therefore, it is possible to use a relatively small number of negotiation steps, i.e., $m = 6$. Observe that partitions $P_1$, $P_2$, and $P_4$ are all monitored by a single dynamic sensor: in these cases, the optimal PTZ parameters are directly determined and the negotiation process is not required. To conclude, we highlight that the computational time for parameter selection is proportional to the number of targets inside a partition. Observing also that the network tracking capabilities are strongly influenced by the PTZ parameter selection time, we made the following design choices as regards the heterogeneous network. First, the dynamic devices are supposed to be able to perform a discrete pan and/or tilt movement in 0.5 s. In addition to this time, we need to consider also a small time interval during which the cameras stand still in order to acquire images without motion blur. We chose this interval to be of at least 0.5 s, during which we also compute the new PTZ parameters for the dynamic cameras. It follows that, if the computational time exceeds this value, the mentioned time interval needs to be longer. As a consequence, it turns out to be convenient to predict the targets' state at least 1 s in the future. Having assumed $T = 0.05$ s, this leads to selecting $\ell = 20$ prediction time steps, 10 of which are introduced to cover the camera movement time.

4.3. Targets’ Insights

In the designed simulation framework, we accounted for multiple possible targets moving in the environment according to (1). In detail, we selected the j-th target state noise covariance matrix, $j \in \{1, \dots, n_T\}$, as:
$$W_j = \| \dot{\bar{p}}_j \| \begin{bmatrix} 0.25\, I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 2.5\, I_{3\times3} \end{bmatrix}$$
where we indicate with $\| \dot{\bar{p}}_j \| \geq 0$ the average target speed expressed in m/s.
In addition, we took into account three main possible trajectories for all the targets: these are reported in Figure 3, neglecting the hypothesis of additional noise. The path in Figure 3a, i.e., Trajectory T1, refers to a non-elusive target going through partitions $P_1$, $P_2$, and $P_3$, i.e., the environment portion characterized by the minimum number of cameras needed to properly ensure the target tracking. The trajectories in Figure 3b,c, namely T2 and T3, account for the behavior of a non-elusive and an elusive target, respectively, crossing partitions $P_4$ and $P_5$. We recall that these two partitions represent the most critical and the most favorable scenarios in terms of cameras available to guarantee the target tracking. In all three cases, targets were generally assumed to move at a constant speed of 1 m/s, although variations up to 4 m/s were also studied for the trajectory in Figure 3a. In addition, in the following, we considered scenarios wherein 4, 8, and 12 targets were simultaneously present in the environment. In these cases, the distance among them was reduced in order to have all of the subjects concurrently present in $P_1$ at some point during the simulation.

5. Simulation Results

Accounting for the simulation framework outlined in the previous section, hereafter we investigate the performance of the designed solution by studying different scenarios.

5.1. Performance Evaluation Criteria

To do this, the following performance indexes were taken into account:
  • The target state estimation error (precision) and the tracking confidence (accuracy). For any j-th target, $j \in \{1, \dots, n_T\}$, at time t, the former is simply the difference between the true and the estimated state. The latter is computed from the elements on the main diagonal of the covariance matrix $P_j(t)$. Formally, we distinguish between the position estimation accuracy $\delta_p(t) = 3\sqrt{\tfrac{1}{3}\big(p_{1,1}(t) + p_{2,2}(t) + p_{3,3}(t)\big)} \in \mathbb{R}$ and the velocity estimation accuracy $\delta_v(t) = 3\sqrt{\tfrac{1}{3}\big(p_{4,4}(t) + p_{5,5}(t) + p_{6,6}(t)\big)} \in \mathbb{R}$. Note that the accuracy indexes $\delta_p(t)$ and $\delta_v(t)$ correspond to the 3σ C.I., where σ is the square root of the average variance of the position and velocity components of the target state, respectively;
  • The (maximum, minimum, mean, and 75th percentile) resolution at which any target is observed. This is expressed in pixels per cm² (ppcm) and computed by evaluating the pixel density per 1 m² on the plane orthogonal to the optical axis and intersecting the estimated target position;
  • The number of cameras detecting any target at each time step T;
  • The time required for the PTZ parameter selection depending on the number of targets in the partition.
Note that only the last mentioned performance index involves a temporal quantity and, specifically, refers to the computational time employed in the PTZ parameter selection. This choice was motivated by the fact that the workload associated with all the other operations, as, e.g., the EKF target state estimation, is negligible with respect to the PTZ parameter selection process.
Furthermore, to better highlight the advantages of heterogeneous VSNs that also include PTZ cameras, we introduce the so-called static simulation framework (SSF). This differs from the simulation framework described in Section 4, hereafter termed the dynamic simulation framework (DSF), in that all the dynamic visual sensors are assumed to be replaced by static devices. In particular, in the SSF, we fixed the orientation of the (substituted) cameras in order to have at least a couple of sensors monitoring the whole volume of each partition, except for partition $P_4$, as depicted in Figure 4.
Since we did not deal with occlusions, one can realize that in the SSF, the network tracking performance does not scale with the number of targets. For this reason, in the following, we investigate the performance of the SSF by accounting only for a single target moving in the environment. Nonetheless, the achieved results were then used as a benchmark for comparing the performance of the designed solution in the given DSF, specifically addressing also the multi-target case. All the simulations were run on a Windows laptop equipped with an Intel core i7-6700HQ.
To conclude, we emphasize that, to provide a fair evaluation of the designed solution performance, 10 independent trials were run for each testing scenario and the average of all the aforementioned metrics was computed.

5.2. Single Target

The first intent is to highlight the advantages deriving from the use of a heterogeneous VSN. In doing this, we focus on the network tracking capabilities both in the SSF and in the DSF, by considering a single target that follows the trajectories described in Section 4.3 at a constant speed of 1 m/s.
First, note that in this case, the choice of $\ell = 20$ prediction steps (Section 4.2) is sufficient. Indeed, as shown in Table 4, the computational time required for the PTZ parameter selection in the case of a single target following trajectory T3 (in Figure 3c) never exceeded 0.5 s. We remark that the PTZ parameter selection procedure took into account not only the visible targets, but also the ones potentially visible in the near future. For this reason, the computation time turned out to be strongly affected also by the environment structure: a PTZ camera located in a room having a complex geometry in terms of walls and obstacles is penalized.
Focusing on the tracking of trajectory T3 (the most challenging one for the camera network), Figure 5 reports the performance indexes for both the SSF and the DSF. In addition, Table 5 summarizes the performance for all the target trajectories depicted in Figure 3.
It is possible to notice that the estimation error and the tracking confidence, namely the position and velocity precision and accuracy, are comparable for all the trajectories, with a slight improvement when considering the DSF. This improvement can be explained by observing the mean number of cameras on the target. Indeed, Figure 5b, related to the case of a target following trajectory T3, shows that in several steps of the path the number of cameras framing the target was larger in the DSF than in the SSF. Moreover, the use of PTZ cameras allows considerably improving the resolution at which the target is seen (see Table 5 and Figure 5b). Observing also Figure 6, one can note that the frame distribution in terms of ppcm is shifted toward higher values in the DSF.
We observed that some spikes affect the trend of the estimation error reported in Figure 5a: this can be attributed to the changes of direction in the considered trajectory, which are approximated as instantaneous variations. However, the overall performance was not compromised by this behavior: the maximum values of the tracking precision and accuracy are, indeed, much larger than the corresponding 75th percentiles, confirming that the spikes are isolated events. We emphasize that in real-world scenarios this issue is less noticeable, since changes of direction usually happen more smoothly; nonetheless, the adopted trajectory model is useful to test the robustness of the network tracking capability.
In the elusive-target scenario corresponding to trajectory T3 (Figure 5), the target tries to exploit the blind areas of the VSN, and this turns out to be particularly problematic in partition P 4, where the visual sensors are not capable of ensuring tracking over the entire environment portion. This fact can be noticed in Figure 5a: in the part of the trajectory associated with partition P 4, namely between the 350th and the 650th time step, the accuracy and precision are compromised, especially in the SSF, since the target is framed by only a single camera for a long time (Figure 5b). In the DSF, the presence of dynamic devices allows partially counteracting this problem, with the limiting factor given by the movement capabilities of the PTZ cameras.
To conclude, to test the robustness inherited from the distributed approach, two scenarios were analyzed in which camera C 7 D or C 8 D, respectively, was considered as not working while a target followed trajectory T2. In both cases, thanks to the negotiation among the remaining cameras, the VSN was able to autonomously adapt to the new situation and to limit the loss of performance to a slight decrease in the quality of view (mean resolution values), as can be observed by comparing the data in Table 5 (DSF, T2) and Table 6.

5.3. Single Target vs. Four Targets

Accounting for the DSF, in Figure 7, we compare the single-target case discussed in the previous subsection with the case in which four targets move in the environment following trajectory T1, thus crossing partitions P 1, P 2, and P 3. The intent here is to evaluate the heterogeneous VSN performance in a scenario wherein the number of cameras is the minimum required to guarantee successful target tracking over the entire area.
First of all, from Table 7, it is possible to notice that also with four targets the computational time necessary to perform the PTZ parameter selection did not exceed 0.5 s; thus, the 20-step target state prediction horizon remains adequate.
Based on the accuracy and precision values reported in Table 8, we point out that the increase in the number of targets did not affect the network tracking performance. However, when multiple targets were simultaneously present in the same partition, the PTZ cameras leaned toward reducing their zoom value in order to frame as many targets as possible, consequently compromising the (mean) resolution at which the targets were observed. This is due to the fact that the utility function (4) was designed so that all the dynamic devices tend to keep all targets at the maximum possible resolution, but also centered in their image plane (a hedged sketch of such a trade-off is given after this paragraph). The described situation is more probable in larger partitions, as can be noticed from the ppcm trend in Figure 7: between the 400th and the 700th time step, we detected the highest discrepancy between the single- and multi-target cases, and this corresponds to the part of the target trajectory in P 3. We also observed that the utility function maximization sometimes led the PTZ cameras to focus on a single target (thus increasing their zoom value) rather than framing more targets, especially when these were already monitored by other visual sensors. This fact explains why the mean number of cameras on the targets was slightly lower in the DSF when accounting for four targets, as compared both to the one-target case and to the SSF (see Table 8). In general, it is possible to balance the amount of zoomed PTZ configurations against the tracking performance by tuning the weights of the target utility function.
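To make the trade-off concrete, the following minimal sketch shows one possible way to score candidate PTZ configurations by combining a resolution term and a centering term over the predicted targets, and to let each camera perform a best-response step; the weights, the camera.project/camera.ppcm helpers, and the candidate parameter sets are hypothetical placeholders and do not reproduce the exact utility function (4) of this work.

```python
import numpy as np
from itertools import product

def configuration_utility(cfg, predicted_targets, camera, w_res=1.0, w_cen=0.5):
    """Score a candidate (pan, tilt, zoom) configuration: each framed target
    contributes a weighted pixel-density reward and a penalty proportional
    to its distance from the image center; unframed targets contribute nothing."""
    utility = 0.0
    for p in predicted_targets:
        u = camera.project(p, cfg)          # image coordinates, or None if not framed
        if u is None:
            continue
        res_term = camera.ppcm(p, cfg)      # pixel density on the target plane
        cen_term = -np.linalg.norm(np.asarray(u) - camera.image_center)
        utility += w_res * res_term + w_cen * cen_term
    return utility

def best_response(camera, predicted_targets):
    """Best-response step of the game: the camera picks the admissible
    configuration that maximizes its own utility for the predicted targets."""
    candidates = product(camera.pan_values, camera.tilt_values, camera.zoom_values)
    return max(candidates,
               key=lambda cfg: configuration_utility(cfg, predicted_targets, camera))
```

In such a sketch, increasing w_res with respect to w_cen favors zoomed, high-resolution shots, while decreasing it favors wider views framing more targets, mirroring the balance discussed above.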
To conclude, we highlight that Figure 6, Table 5, and Table 8 show that, even when four targets are present in the environment, the use of PTZ cameras brings a benefit in terms of target resolution. In fact, the average pixel density on the target was higher than the one obtained in the SSF.

5.4. Multiple Targets: Limit Behavior

Hereafter, we analyze how the average target resolution changes as the number of targets increases and, in particular, we identify when the use of PTZ cameras in the VSN no longer provides any improvement with respect to the SSF. In doing this, we considered two scenarios involving 8 and 12 targets, respectively, ensuring their simultaneous passage through partition P 1 at some time instant. Clearly, different partitions have different tracking limits, depending on their topology and on the number of corresponding visual sensors. To provide a fair comparison, we considered trajectory T1 as in Section 5.3.
It turned out that the tracking limit in the DSF was 12 targets. Indeed, as can be observed by comparing Table 5 and Table 9, the advantage given by the exploitation of a heterogeneous network in this case was minimal. As a consequence, we can conjecture that for a higher number of targets the network performance might degrade until the use of PTZ devices is no longer beneficial. This happened, for instance, in partitions P 2 and P 3 already with a lower number of targets, because of the presence of a single dynamic device. On the other hand, in partition P 1, the adoption of a heterogeneous VSN turned out to be advantageous due to the higher number of available PTZ cameras. For the 12-target case, thus, the DSF performance in terms of mean ppcm was the same as that obtainable with a network of static cameras, though with higher resolution at critical points of the surveilled environment.
Moreover, Figure 8a,b highlights that the computational time grows approximately linearly with the number of targets in a partition, with a slope that changes according to the number of devices placed in the partition. One can also observe that in these cases the computational time needed to select the PTZ parameters of all the dynamic cameras was higher than 0.5 s (see Table 10 and Table 11). This implies that a longer time period needs to be allotted for computations before the cameras' movements. In particular, we considered intervals of 1 s for the 8-target case and of 2 s for the 12-target case, which correspond to 30- and 50-step-ahead predictions, respectively.
Finally, we noticed that, under the target velocity assumption of 1 m/s, the target state estimation precision was not compromised (see Table 9), suggesting that the designed solution is capable of coping with situations involving a larger number of targets and a longer interval between the camera movements.

5.5. Multiple Targets with Different Velocities

We now investigate the system performance in the presence of targets having different (increasing) velocities. Specifically, we considered four targets moving at constant speeds of 1 m/s, 2 m/s, and 4 m/s. The last value is extreme for walking targets, but it was studied with the purpose of stressing the system and analyzing the robustness of the provided solution. In this scenario, the targets follow trajectory T1.
In Figure 9, we compare the tracking precision and accuracy for the second of the four targets while moving at the different velocities. We observed that, in all cases, the distributed EKF generally allows estimating the target position with a small error, except in correspondence to direction changes, where the tracking error grows proportionally to the velocity. In particular, we note that for higher velocities a longer time is required to correct the estimates. These considerations are confirmed by the results in Table 12. Nonetheless, we also remark that abrupt changes of direction are unlikely in a real-case scenario, especially at high velocities.
As concerns the resolution at which the targets were observed, from Table 12, one can observe that generally, the mean ppcm value was lower for higher target velocities. In these cases, in fact, less zoomed PTZ configurations are preferable since the movement that any camera can accomplish in one time step is not sufficient to follow a relatively close target that is moving fast.
Finally, we emphasize that, as the target velocity increases, the mean number of cameras on the target decreases. Indeed, in correspondence to faster targets, it is more likely for the PTZ cameras to temporarily lose the target. In these cases, the target state estimate only relies on the measurements of the static devices. In the worst case, for a brief time interval, the target is viewed by a single fixed camera; therefore, its state estimates can become less accurate until a PTZ camera frames it again.
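For completeness, the following simplified sketch illustrates how the measurements of all the cameras currently framing a target can be fused through a sequential Kalman update; it is a centralized stand-in for the distributed EKF adopted in this work, and the per-camera (z, H, R) triplets are illustrative assumptions rather than the paper's actual measurement model.

```python
import numpy as np

def ekf_update_multi_camera(x_pred, P_pred, detections):
    """Sequentially apply the update of every camera that frames the target.

    'detections' is a list of (z, H, R) tuples: measurement vector,
    measurement matrix (or Jacobian), and measurement-noise covariance of
    one camera.  With a single fixed camera the list has one element and
    the covariance shrinks less, consistently with the behavior described above.
    """
    x = np.asarray(x_pred, dtype=float).copy()
    P = np.asarray(P_pred, dtype=float).copy()
    I = np.eye(P.shape[0])
    for z, H, R in detections:
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)             # state correction
        P = (I - K @ H) @ P                 # covariance correction
    return x, P
```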

5.6. Multiple-Target Real-World Scenario

To conclude the assessment of the proposed solution, we accounted for a real-world scenario wherein several targets access the surveilled environment at short time intervals from one another. They are all supposed to move at 1 m/s, following different trajectories in order to cover the whole environment and to also explore the blind areas of the VSN. In this real-world scenario, the targets follow more natural paths with respect to the ones considered in the previous cases; the result is a more random distribution of the tracked subjects over the simulation area. We observed that the PTZ parameter selection computational time was here less than 1 s for the corridor partitions and less than 0.5 s for the other partitions; hence, the prediction horizon was set to 30 steps for partitions P 2 and P 3 and to 20 steps for the other ones.
Also in this scenario, which is particularly challenging from a surveillance point of view, the designed solution based on the exploitation of a heterogeneous VSN proved more effective than the SSF, especially in terms of the resolution at which the targets were seen. This fact is confirmed by comparing the SSF and DSF performances reported in Table 13: the ppcm index almost doubled in every partition, except for P 2 and P 3, where the improvement was slightly smaller due to the constrained topology of the corridor.
Furthermore, we remark that when multiple targets are present in a single partition, the number of PTZ cameras on a specific target tends to diminish. This behavior can be explained by considering that, based on the maximization of its utility function, it may be more convenient for some dynamic device to set its PTZ parameters so as to focus on a specific target and obtain some high-resolution shots, instead of framing a larger area. The most challenging situation occurs when the maximum number of targets occupies a single partition and, in particular, when these are spread over the entire partition area, e.g., when eight targets are present in partitions P 2 and P 3 and/or when five targets are present in partitions P 1, P 4, and P 5, as shown in Figure 10. In this case, it turns out to be preferable not to focus on a single target for a long period, since there is a risk of losing the others. The results reported in Table 14 highlight how, despite the difficult situation, the solution proposed in this work was able to considerably improve the resolution at which the targets were seen, as compared to the traditional VSN characterizing the SSF.

6. Discussion

Accounting for the critical aspects of the proposed solution highlighted in the previous sections, we discuss here some possible expedients to improve the system performance.
First of all, we observed that the parameter selection computational time depends linearly on the number of targets, while the mean resolution at which the targets are seen decreases as their number grows. To address the first issue, it could be convenient to consider an adaptive rate for the PTZ parameter selection, namely to change the interval between consecutive selection procedures. To do so, it is sufficient to estimate the duration of the selection procedure as a function of the number of targets in a partition and, accordingly, to select an adequate number of steps for the prediction of the target state (a sketch of this adaptation is given after this paragraph). Indeed, when the partition is populated by a small number of targets, a high PTZ parameter update rate can be adopted, thus improving the network performance. Note that the prediction horizon can be different for each partition, since the selection procedures are carried out independently. Moreover, when the number of targets in the surveilled environment is such that the mean resolution obtained with a heterogeneous VSN is comparable to the one obtained with a static network, it could be useful to temporarily switch to a predefined configuration of the PTZ cameras that covers the entire area well, without trying to maximize the utility function.
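A possible implementation of this adaptive selection rate is sketched below: the selection time is modeled as an affine function of the number of targets in the partition (consistent with the linear trend reported in Section 5.4), and the prediction horizon is chosen so that the interval between two selections covers that time. The coefficients, the seconds-per-step conversion, and the function name are hypothetical tuning choices, not values from the paper.

```python
import math

def prediction_horizon(n_targets, seconds_per_step=0.04, slope=0.06, offset=0.05,
                       min_steps=20):
    """Choose the number of prediction steps for a partition.

    The PTZ selection time is estimated as offset + slope * n_targets
    (an affine model to be identified from timing measurements); the
    horizon is the smallest number of steps whose duration covers that
    estimate, never below a minimum value.
    """
    estimated_time = offset + slope * n_targets            # seconds
    steps = math.ceil(estimated_time / seconds_per_step)   # steps covering it
    return max(min_steps, steps)
```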
Furthermore, we remark that, as the targets' velocity increases, less zoomed configurations are preferred to reduce the possibility of losing the target. An improvement rests upon the introduction of an adaptive zoom based on the estimated velocity of the target, namely a procedure that reduces the maximum zoom magnitude as the velocity increases, which also reduces the number of parameter combinations considered in the selection process (a sketch is given below).
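A corresponding adaptive-zoom rule could look as follows: the admissible zoom levels are progressively restricted to the widest settings as the estimated target speed grows. The speed thresholds and the linear shrinking rule are hypothetical tuning assumptions.

```python
def admissible_zoom_values(zoom_values, target_speed, v_low=1.0, v_high=4.0):
    """Restrict the candidate zoom levels according to the estimated speed.

    Below v_low every level is allowed; above v_high only the widest
    (smallest) zoom is kept; in between the admissible set shrinks linearly.
    """
    zooms = sorted(zoom_values)                         # ascending magnification
    if target_speed <= v_low:
        return zooms
    if target_speed >= v_high:
        return zooms[:1]
    fraction = (v_high - target_speed) / (v_high - v_low)
    keep = max(1, round(fraction * len(zooms)))
    return zooms[:keep]
```

The restricted list would simply replace the full set of zoom candidates in the configuration search, shrinking the candidate set exactly as suggested above.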
Another crucial aspect is the management of the target direction changes, which constitute the major source of error in the tracking process. To face this issue, it is possible to identify specific zones in which these situations are more likely to occur, such as the corners of a room or of the corridor, as well as the intersections between different rooms. When the targets are in these zones, it could be useful to consider higher values for the variance of the noise w j ( t ). This would allow better handling of the uncertainty related to the changes of direction, which in these areas are bound to happen with a very high probability (a sketch of this zone-dependent noise inflation is given below). Another, more heuristic, approach could be to consider only wide-FOV configurations for the cameras framing targets that cross these critical zones, so that all the potential target movements are covered.
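The zone-dependent noise inflation can be realized directly in the EKF prediction step, as in the following sketch; the axis-aligned-box representation of the critical zones and the inflation factor are illustrative assumptions.

```python
import numpy as np

def ekf_predict(x, P, F, Q_nominal, critical_zones, inflation=10.0):
    """EKF prediction with zone-dependent process noise.

    If the estimated position (first three state components) lies inside
    one of the critical zones, modeled as axis-aligned boxes (lo, hi), the
    process-noise covariance is inflated to reflect the higher probability
    of a direction change.
    """
    pos = np.asarray(x, dtype=float)[0:3]
    in_zone = any(np.all(pos >= lo) and np.all(pos <= hi) for lo, hi in critical_zones)
    Q = inflation * Q_nominal if in_zone else Q_nominal
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```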
Finally, we emphasize that target occlusions still represent an open challenge in camera network design and in the proposed solution. It could be possible to select the PTZ cameras' parameters according to the detectable obstacles present in the environment. Along this line, it would be sufficient to weight the contribution of each camera to a given target utility function according to the occlusion information. In other words, once a visual sensor realizes, by using its visual information jointly with the target state estimation, that a target is occluded, it could lower its contribution to the utility function with respect to that target (a sketch is given below). In this way, the PTZ parameters of such a camera would be set by the algorithm to focus only on the targets that are visible from its point of view.
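A minimal sketch of this occlusion-aware weighting is given below, under the assumption that obstacles are known axis-aligned boxes and that a coarse sampled line-of-sight test is sufficient; in the selection algorithm, the returned weight would multiply the camera's contribution to that target's utility term. The threshold weights are hypothetical.

```python
import numpy as np

def segment_hits_box(p0, p1, box_lo, box_hi, samples=50):
    """Coarse line-of-sight test: sample the segment p0-p1 and report whether
    any sample falls inside the axis-aligned box [box_lo, box_hi]."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    for t in np.linspace(0.0, 1.0, samples):
        q = (1.0 - t) * p0 + t * p1
        if np.all(q >= box_lo) and np.all(q <= box_hi):
            return True
    return False

def occlusion_weight(camera_position, target_estimate, obstacles,
                     w_visible=1.0, w_occluded=0.1):
    """Weight of a camera's contribution to a target utility term: strongly
    down-weighted when a known obstacle blocks the line of sight."""
    for box_lo, box_hi in obstacles:
        if segment_hits_box(camera_position, target_estimate, box_lo, box_hi):
            return w_occluded
    return w_visible
```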
One further possible refinement could be to optimize the number, position, and type of cameras located inside the surveilled environment, for example by using the solutions proposed in [7,8,9,27]. This would allow starting from PTZ parameter configurations that are more advantageous with respect to the utility function maximization.

7. Conclusions

In this work, we proposed an original real-time surveillance and multi-target tracking solution for indoor environments, based on the exploitation of a heterogeneous VSN composed of both fixed and PTZ cameras. The environment topology was exploited to support the implementation of a distributed approach and thus obtain a robust, flexible, and scalable network. Indeed, the outlined structure allows separating the surveilled area into multiple independent partitions that can be handled in parallel.
The described surveillance solution consists of two main parts: a distributed EKF and a PTZ parameter selection algorithm that is based on a game theoretic approach. The former aims at estimating and predicting the state of the targets moving in the surveilled area. The latter, instead, given the predicted targets’ states, tries to maximize a utility function with the aim of finding the best configuration for the dynamic cameras in the VSN. Such a framework allows realizing a wide range of different and potentially conflicting objectives by simply choosing the proper utility function terms and weights.
In the simulation part, we evaluated the performance of the designed solution in a specific case where the objective was a tradeoff between high-resolution views and good tracking ability. Multiple scenarios were investigated, considering different numbers of targets following multiple trajectories at different velocities. The results confirmed the effectiveness of the solution in obtaining the desired tradeoff. In addition, a linear relationship between the number of targets in a partition and the computational time was established. Finally, the threshold (in terms of number of targets) beyond which it is more convenient to temporarily switch to a static camera network configuration was also identified.

Author Contributions

Conceptualization, J.G. and M.L.; methodology, J.G. and M.L.; formal analysis, J.G., M.L., G.M. and A.C.; writing—original draft preparation, J.G., M.L. and G.M.; writing—review and editing, J.G., M.L., G.M. and A.C.; supervision, G.M.; project administration, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by MIUR (Italian Ministry of Education, University, and Research) under the initiative “Departments of Excellence” and by the University of Padova under BIRD-SEED TSTARK.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Olagoke, A.S.; Ibrahim, H.; Teoh, S.S. Literature survey on multi-camera system and its application. IEEE Access 2020, 8, 172892–172922.
  2. SanMiguel, J.C.; Micheloni, C.; Shoop, K.; Foresti, G.L.; Cavallaro, A. Self-reconfigurable smart camera networks. Computer 2014, 47, 67–73.
  3. Hancke, G.P.; de Carvalho e Silva, B.; Hancke, G.P., Jr. The role of advanced sensing in smart cities. Sensors 2012, 13, 393–425.
  4. Ding, C.; Bappy, J.H.; Farrell, J.A.; Roy-Chowdhury, A.K. Opportunistic image acquisition of individual and group activities in a distributed camera network. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 664–672.
  5. Piciarelli, C.; Esterle, L.; Khan, A.; Rinner, B.; Foresti, G.L. Dynamic reconfiguration in camera networks: A short survey. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 965–977.
  6. Di Paola, D.; Milella, A.; Cicirelli, G.; Distante, A. An autonomous mobile robotic system for surveillance of indoor environments. Int. J. Adv. Robot. Syst. 2010, 7, 8.
  7. Kritter, J.; Brévilliers, M.; Lepagnot, J.; Idoumghar, L. On the optimal placement of cameras for surveillance and the underlying set cover problem. Appl. Soft Comput. 2019, 74, 133–153.
  8. Altahir, A.A.; Asirvadam, V.S.; Hamid, N.H.B.; Sebastian, P.; Hassan, M.A.; Saad, N.B.; Ibrahim, R.; Dass, S.C. Visual sensor placement based on risk maps. IEEE Trans. Instrum. Meas. 2019, 69, 3109–3117.
  9. Wang, X.; Zhang, H.; Gu, H. Solving Optimal Camera Placement Problems in IoT Using LH-RPSO. IEEE Access 2019, 8, 40881–40891.
  10. Esterle, L. Centralised, decentralised, and self-organized coverage maximisation in smart camera networks. In Proceedings of the 2017 IEEE 11th International Conference on Self-Adaptive and Self-Organizing Systems (SASO), Tucson, AZ, USA, 18–22 September 2017; pp. 1–10.
  11. Kumar, S.; Piciarelli, C.; Singh, H.P. Reconfiguration of PTZ Camera Network with Minimum Resolution. In Harmony Search and Nature Inspired Optimization Algorithms; Springer: Singapore, 2019; pp. 869–878.
  12. Zhang, L.; Xu, K.; Yu, S.; Fu, R.; Xu, Y. An effective approach for active tracking with a PTZ camera. In Proceedings of the 2010 IEEE International Conference on Robotics and Biomimetics, Tianjin, China, 14–18 December 2010; pp. 1768–1773.
  13. Saragih, C.F.D.; Kinasih, F.M.T.R.; Machbub, C.; Rusmin, P.H.; Rohman, A.S. Visual Servo Application Using Model Predictive Control (MPC) Method on Pan-tilt Camera Platform. In Proceedings of the 2019 6th International Conference on Instrumentation, Control, and Automation (ICA), Bandung, Indonesia, 31 July–2 August 2019; pp. 1–7.
  14. Natarajan, P.; Hoang, T.N.; Wong, Y.; Low, K.H.; Kankanhalli, M. Scalable decision-theoretic coordination and control for real-time active multi-camera surveillance. In Proceedings of the International Conference on Distributed Smart Cameras, Venezia Mestre, Italy, 4–7 November 2014; pp. 1–6.
  15. Natarajan, P.; Atrey, P.K.; Kankanhalli, M. Multi-camera coordination and control in surveillance systems: A survey. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2015, 11, 1–30.
  16. Ding, C.; Song, B.; Morye, A.; Farrell, J.A.; Roy-Chowdhury, A.K. Collaborative sensing in a distributed PTZ camera network. IEEE Trans. Image Process. 2012, 21, 3282–3295.
  17. Kim, D.; Park, S. Enhanced Model-Free Deep-Q Network based PTZ Camera Control Method. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019; pp. 251–253.
  18. Bisagno, N.; Xamin, A.; De Natale, F.; Conci, N.; Rinner, B. Dynamic Camera Reconfiguration with Reinforcement Learning and Stochastic Methods for Crowd Surveillance. Sensors 2020, 20, 4691.
  19. Wang, Y.C.; Chen, D.Y. Cooperative monitoring scheduling of ptz cameras with splitting vision and its implementation for security surveillance. In Proceedings of the International Conference on Advanced Technology Innovation, Sapporo, Japan, 15–18 July 2019; p. 2.
  20. Kamal, A.; Ding, C.; Morye, A.; Farrell, J.; Roy-Chowdhury, A.K. An overview of distributed tracking and control in camera networks. In Wide Area Surveillance; Springer: Berlin/Heidelberg, Germany, 2014; pp. 207–234.
  21. He, S.; Shin, H.S.; Xu, S.; Tsourdos, A. Distributed estimation over a low-cost sensor network: A review of state-of-the-art. Inf. Fusion 2020, 54, 21–43.
  22. Olfati-Saber, R.; Sandell, N.F. Distributed tracking in sensor networks with limited sensing range. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA, 11–13 June 2008; pp. 3157–3162.
  23. Bazzani, L.; Cristani, M.; Murino, V. Decentralized particle filter for joint individual-group tracking. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1886–1893.
  24. Sharma, P.; Saucan, A.A.; Bucci, D.J.; Varshney, P.K. Decentralized Gaussian filters for cooperative self-localization and multi-target tracking. IEEE Trans. Signal Process. 2019, 61, 5896–5911.
  25. Gonzalez-Barbosa, J.J.; Garcia-Ramirez, T.; Salas, J.; Hurtado-Ramos, J.B.; Rico-Jimenez, J.d.J. Optimal camera placement for total coverage. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 844–848.
  26. Bodor, R.; Drenner, A.; Schrater, P.; Papanikolopoulos, N. Optimal camera placement for automated surveillance tasks. J. Intell. Robot. Syst. 2007, 50, 257–295.
  27. Yap, F.G.; Yen, H.H. Novel visual sensor coverage and deployment in time aware PTZ wireless visual sensor networks. Sensors 2016, 17, 64.
  28. Mavrinac, A.; Chen, X. Modeling coverage in camera networks: A survey. Int. J. Comput. Vis. 2013, 101, 205–226.
  29. Lucchese, R.; Cenedese, A.; Carli, R. A Hidden Markov Model based transitional description of camera networks. IFAC Proc. Vol. 2014, 47, 7394–7399.
Figure 1. Environment example: colors distinguish the physical environment partitions; dashed lines identify the handout zones among virtual environment partitions; cameras are indicated with round and square markers.
Figure 2. Simulation framework: cameras’ position. (a) Fixed cameras’ position. (b) PTZ cameras’ position.
Figure 3. Possible target trajectories (without noise). (a) Trajectory T1, (b) Trajectory T2, (c) Trajectory T3.
Figure 4. Static Simulation Framework (SSF).
Figure 5. SSF vs. DSF performance comparison: 1 target following trajectory T3. (a) Position and velocity estimation error and tracking confidence. (b) PPCM and num. cameras on target evolution.
Figure 6. SSF vs. DSF frame distribution: 1 and 4 targets following trajectory T1.
Figure 7. DSF performance: 1 and 4 targets following trajectory T1.
Figure 8. PTZ parameter selection computational time trend: results for partition P 3 are similar to those of partition P 2 and thus omitted. Note that the parameter selection process of PTZ cameras in P 2 depends on 4 over 8 targets (a) and 7 over 12 targets (b).
Figure 9. DSF tracking precision and accuracy: multiple targets having different velocities.
Figure 10. Snapshots of a real-world scenario in the DSF: (a) 5 targets are present in P 4 ; (b) 5 targets are present in both P 3 and P 5 . The true position of the target is represented with a dot, its estimated position with a cross, and the confidence interval with a grey circle around the estimated position.
Table 1. Principal sets and main assumptions considered in this work.
SetsAssumptions
environment access points A = { A 1 A n A } n A 1
physical environment partitions R = { R 1 R n R } n R 1
virtual environment partitions P = { P 1 P n P } n P n R
handout zones H = { H 1 H n H } = { H k κ = H k κ k H k κ κ , } n H n P
static high-resolution visual sensors C S H R = { C 1 S H R C n S H R S H R } n S H R n A
static wide-angle visual sensors C S W A = { C 1 S W A C n S W A S W A } n S W A n P
static visual sensors C S = { C 1 S C n S S } = C S W A C S H R n S = n S H R + n S W A n P + n A
dynamic visual sensors C D = { C 1 D C n D D } n D n P
visual sensors C = { C 1 C n C } = C D C S n C = n D + n S 2 n P + n A
targets T = { T 1 T n T } n T 1
Table 2. Simulation framework: fixed cameras’ features.
Camera | FOV Horiz. (deg) | FOV Vert. (deg) | Resolution (pixel/deg)
C 1 SHR | 60 | 40 | 100
C 1 SWA | 120 | 80 | 50
C 2 SWA | 120 | 80 | 50
C 3 SWA | 90 | 60 | 50
C 4 SWA | 120 | 80 | 50
C 5 SWA | 120 | 80 | 50
Table 3. Simulation framework: PTZ cameras’ features.
Camera | FOV Horiz. (deg) | FOV Vert. (deg) | Resolution (pixel/deg) | Tilt Range (deg) | Pan Range (deg) | Pan/Tilt Step (deg) | Zoom Range (deg) | Zoom Step (deg)
C 1 D | 60 | 40 | 50 | ±15 | ±15 | 7.5 | 1÷3 | 1
C 2 D | 60 | 40 | 50 | ±15 | ±45 | 7.5 | 1÷3 | 1
C 3 D | 60 | 40 | 50 | ±15 | ±45 | 7.5 | 1÷3 | 1
C 4 D | 60 | 40 | 50 | ±15 | ±30 | 7.5 | 1÷3 | 1
C 5 D | 60 | 40 | 50 | ±15 | ±30 | 7.5 | 1÷3 | 1
C 6 D | 60 | 40 | 50 | ±15 | ±15 | 7.5 | 1÷3 | 1
C 7 D | 60 | 40 | 50 | ±15 | ±30 | 7.5 | 1÷3 | 1
C 8 D | 60 | 40 | 50 | ±15 | ±30 | 7.5 | 1÷3 | 1
Table 4. PTZ parameter selection computational time: 1 target following trajectory T3.
Case Study | Statistic | Computational Time (s)
1 target | mean | 0.043
1 target | std | 0.036
1 target | max | 0.136
Table 5. SSF vs. DSF performance comparison: 1 target following trajectories T1, T2, and T3.
Case Study | Statistic | Position Precision (cm) | Position Accuracy (cm) | Velocity Precision (cm/s) | Velocity Accuracy (cm/s) | Resolution (ppcm) | Cameras on Target
SSF, T1 | mean | 6.22 | 34.93 | 11.08 | 147.11 | 252.44 | 2.64
SSF, T1 | std | 14.20 | 9.63 | 21.21 | 8.04 | 354.72 | 0.71
SSF, T1 | min | 0.53 | 20.91 | 2.41 | 56.12 | 24.06 | 2
SSF, T1 | 75% | 3.20 | 42.74 | 6.48 | 152.49 | 247.40 | 3.00
SSF, T1 | max | 100.06 | 58.98 | 155.81 | 162.32 | 2595.25 | 4
DSF, T1 | mean | 6.20 | 30.81 | 11.23 | 144.05 | 640.93 | 2.68
DSF, T1 | std | 13.61 | 7.84 | 20.61 | 6.95 | 360.19 | 0.63
DSF, T1 | min | 0.52 | 20.34 | 2.84 | 56.12 | 26.74 | 1
DSF, T1 | 75% | 3.50 | 36.98 | 6.59 | 148.74 | 777.13 | 3.00
DSF, T1 | max | 101.00 | 58.72 | 152.40 | 159.04 | 2932.68 | 4
SSF, T2 | mean | 5.52 | 43.99 | 11.22 | 151.41 | 212.74 | 2.68
SSF, T2 | std | 12.71 | 22.46 | 22.46 | 11.30 | 342.32 | 0.84
SSF, T2 | min | 0.47 | 20.33 | 2.06 | 56.12 | 25.25 | 2
SSF, T2 | 75% | 3.15 | 47.70 | 6.40 | 155.23 | 189.48 | 3.00
SSF, T2 | max | 100.30 | 146.31 | 162.78 | 188.54 | 2557.83 | 5
DSF, T2 | mean | 5.53 | 37.92 | 11.02 | 148.11 | 515.59 | 2.70
DSF, T2 | std | 12.27 | 15.86 | 21.32 | 9.49 | 386.28 | 0.84
DSF, T2 | min | 0.51 | 20.02 | 2.31 | 56.12 | 26.99 | 1
DSF, T2 | 75% | 3.29 | 42.00 | 6.44 | 151.53 | 690.15 | 3.20
DSF, T2 | max | 101.30 | 107.76 | 164.31 | 177.72 | 2587.40 | 5
SSF, T3 | mean | 12.48 | 55.74 | 19.56 | 155.09 | 219.22 | 2.48
SSF, T3 | std | 23.89 | 54.56 | 30.89 | 18.54 | 302.08 | 0.92
SSF, T3 | min | 0.51 | 20.29 | 2.36 | 56.12 | 48.32 | 1
SSF, T3 | 75% | 7.72 | 49.88 | 15.67 | 158.08 | 243.49 | 3.00
SSF, T3 | max | 110.71 | 373.64 | 142.82 | 234.54 | 2451.09 | 5
DSF, T3 | mean | 9.11 | 35.37 | 14.33 | 146.87 | 506.88 | 2.57
DSF, T3 | std | 18.20 | 14.12 | 25.07 | 8.97 | 378.46 | 0.78
DSF, T3 | min | 0.61 | 19.75 | 2.41 | 56.12 | 11.21 | 1
DSF, T3 | 75% | 4.44 | 40.39 | 7.68 | 150.60 | 710.26 | 3.00
DSF, T3 | max | 108.46 | 100.77 | 142.88 | 176.90 | 2528.80 | 5
Table 6. DSF performance: C 7 D or C 8 D not-working, 1 target following trajectory T2.
Case Study | Statistic | Position Precision (cm) | Position Accuracy (cm) | Velocity Precision (cm/s) | Velocity Accuracy (cm/s) | Resolution (ppcm) | Cameras on Target
DSF, T2 without C 7 D | mean | 5.69 | 38.94 | 11.19 | 148.78 | 487.20 | 2.49
DSF, T2 without C 7 D | std | 12.99 | 15.53 | 21.88 | 9.34 | 373.27 | 0.61
DSF, T2 without C 7 D | min | 0.46 | 20.02 | 2.38 | 56.12 | 27.74 | 1
DSF, T2 without C 7 D | 75% | 3.04 | 42.06 | 6.31 | 151.54 | 663.70 | 3.00
DSF, T2 without C 7 D | max | 98.70 | 107.67 | 162.16 | 177.71 | 2598.42 | 4
DSF, T2 without C 8 D | mean | 5.45 | 38.23 | 10.87 | 148.41 | 471.25 | 2.50
DSF, T2 without C 8 D | std | 12.21 | 15.75 | 21.34 | 9.42 | 365.58 | 0.65
DSF, T2 without C 8 D | min | 0.43 | 20.03 | 2.35 | 56.12 | 9.55 | 1
DSF, T2 without C 8 D | 75% | 3.15 | 42.06 | 6.45 | 151.56 | 614.89 | 3.00
DSF, T2 without C 8 D | max | 99.41 | 107.63 | 162.61 | 177.69 | 2620.06 | 4
Table 7. PTZ parameter selection computational time: 4 targets following trajectory T1.
Case Study | Statistic | Computational Time (s)
4 targets | mean | 0.229
4 targets | std | 0.067
4 targets | max | 0.330
Table 8. DSF performance: 1 and 4 targets following trajectory T1.
Case Study | Statistic | Position Precision (cm) | Position Accuracy (cm) | Velocity Precision (cm/s) | Velocity Accuracy (cm/s) | Resolution (ppcm) | Cameras on Target
DSF, T1 | mean | 6.20 | 30.81 | 11.23 | 144.05 | 640.93 | 2.68
DSF, T1 | std | 13.61 | 7.84 | 20.61 | 6.95 | 360.19 | 0.63
DSF, T1 | min | 0.52 | 20.34 | 2.84 | 56.12 | 26.74 | 1
DSF, T1 | 75% | 3.50 | 36.98 | 6.59 | 148.74 | 777.13 | 3.00
DSF, T1 | max | 101.00 | 58.72 | 152.40 | 159.04 | 2932.68 | 4
DSF 4, T1 | mean | 6.47 | 33.49 | 11.50 | 146.04 | 448.50 | 2.54
DSF 4, T1 | std | 14.60 | 8.70 | 21.67 | 7.50 | 364.33 | 0.67
DSF 4, T1 | min | 0.54 | 20.37 | 2.38 | 56.12 | 16.87 | 1
DSF 4, T1 | 75% | 3.35 | 40.31 | 6.64 | 150.42 | 584.09 | 3.00
DSF 4, T1 | max | 107.39 | 76.01 | 156.20 | 170.34 | 2718.68 | 4
Table 9. DSF performance: 8 and 12 targets following trajectory T1.
Case Study | Statistic | Position Precision (cm) | Position Accuracy (cm) | Velocity Precision (cm/s) | Velocity Accuracy (cm/s) | Resolution (ppcm) | Cameras on Target
DSF 8, T1 | mean | 6.34 | 33.57 | 11.31 | 146.05 | 358.64 | 2.65
DSF 8, T1 | std | 14.40 | 9.71 | 21.40 | 7.75 | 374.65 | 0.64
DSF 8, T1 | min | 0.46 | 20.46 | 2.37 | 56.12 | 20.89 | 1
DSF 8, T1 | 75% | 3.32 | 40.05 | 6.63 | 150.68 | 433.44 | 3.00
DSF 8, T1 | max | 108.73 | 70.80 | 154.19 | 164.52 | 2740.93 | 4
DSF 12, T1 | mean | 6.44 | 34.61 | 11.40 | 146.82 | 316.11 | 2.60
DSF 12, T1 | std | 14.61 | 10.02 | 21.62 | 8.10 | 364.13 | 0.66
DSF 12, T1 | min | 0.50 | 20.53 | 2.29 | 56.12 | 17.72 | 1
DSF 12, T1 | 75% | 3.28 | 41.80 | 6.62 | 151.91 | 380.52 | 3.00
DSF 12, T1 | max | 109.30 | 78.20 | 155.42 | 170.34 | 2782.13 | 4
Table 10. PTZ parameter selection computational time: 8 targets following trajectory T1.
Case Study | Statistic | Computational Time (s)
8 targets | mean | 0.472
8 targets | std | 0.035
8 targets | max | 0.567
Table 11. PTZ parameter selection computational time: 12 targets following trajectory T1.
Case Study | Statistic | Computational Time (s)
12 targets | mean | 0.780
12 targets | std | 0.061
12 targets | max | 0.959
Table 12. DSF performance: multiple targets having different velocities.
Case Study | Statistic | Position Precision (cm) | Position Accuracy (cm) | Velocity Precision (cm/s) | Velocity Accuracy (cm/s) | Resolution (ppcm) | Cameras on Target
DSF 4, 1 m/s | mean | 6.47 | 33.49 | 11.50 | 146.04 | 448.50 | 2.54
DSF 4, 1 m/s | std | 14.60 | 8.70 | 21.67 | 7.50 | 364.33 | 0.67
DSF 4, 1 m/s | min | 0.54 | 20.37 | 2.38 | 56.12 | 16.87 | 1
DSF 4, 1 m/s | 75% | 3.35 | 40.31 | 6.64 | 150.42 | 584.09 | 3.00
DSF 4, 1 m/s | max | 107.39 | 76.01 | 156.20 | 170.34 | 2718.68 | 4
DSF 4, 2 m/s | mean | 22.38 | 41.31 | 34.85 | 201.40 | 414.55 | 2.46
DSF 4, 2 m/s | std | 39.77 | 11.41 | 57.91 | 12.31 | 408.15 | 0.70
DSF 4, 2 m/s | min | 0.66 | 26.39 | 3.30 | 73.48 | 11.82 | 1
DSF 4, 2 m/s | 75% | 16.97 | 47.48 | 33.84 | 206.44 | 554.98 | 3.00
DSF 4, 2 m/s | max | 197.52 | 105.16 | 306.46 | 241.77 | 3020.25 | 4
DSF 4, 4 m/s | mean | 83.27 | 54.99 | 128.47 | 279.65 | 330.76 | 2.39
DSF 4, 4 m/s | std | 104.94 | 21.08 | 151.46 | 22.80 | 383.58 | 0.76
DSF 4, 4 m/s | min | 1.18 | 35.29 | 4.70 | 99.50 | 18.52 | 1
DSF 4, 4 m/s | 75% | 125.24 | 59.85 | 193.71 | 286.78 | 423.86 | 3.00
DSF 4, 4 m/s | max | 403.85 | 144.50 | 603.55 | 329.85 | 2492.24 | 4
Table 13. SSF and DSF performances: real-world scenario.
Partition Number | Statistic | Resolution (ppcm), SSF | Resolution (ppcm), DSF | Cameras on Target, SSF | Cameras on Target, DSF
Mean between P 2 & P 3 | mean | 320.37 | 433.06 | 2.58 | 2.50
Mean between P 2 & P 3 | std | 234.84 | 246.87 | 0.37 | 0.314
Mean between P 2 & P 3 | min | 64.31 | 71.25 | 2 | 1.77
Mean between P 2 & P 3 | max | 2147.16 | 2514.06 | 3.93 | 4
P 1 | mean | 136.94 | 348.13 | 2.72 | 2.64
P 1 | std | 244.60 | 238.72 | 0.46 | 0.31
P 1 | min | 24.22 | 20.85 | 2 | 1.3
P 1 | max | 269.84 | 1200.11 | 4 | 3
P 4 | mean | 74.38 | 157.40 | 2.43 | 2.18
P 4 | std | 40.78 | 70.01 | 0.61 | 0.37
P 4 | min | 26.99 | 36.84 | 1.5 | 1.27
P 4 | max | 269.17 | 378.41 | 4 | 3
P 5 | mean | 143.93 | 335.43 | 3.35 | 3.37
P 5 | std | 82.02 | 197.68 | 0.62 | 0.37
P 5 | min | 29.80 | 39.58 | 2 | 2
P 5 | max | 732.08 | 1290.00 | 5 | 4.02
Table 14. SSF and DSF performance: most-crowded real-world scenario.
Partition Number | Statistic | Resolution (ppcm), SSF | Resolution (ppcm), DSF | Cameras on Target, SSF | Cameras on Target, DSF
Mean between P 2 & P 3 | mean | 253.45 | 320.09 | 2.58 | 2.52
Mean between P 2 & P 3 | std | 42.53 | 46.89 | 0.14 | 0.06
Mean between P 2 & P 3 | min | 179.11 | 208.7 | 2.33 | 2.39
Mean between P 2 & P 3 | max | 360.30 | 427.27 | 2.85 | 2.64
P 1 | mean | 174.11 | 271.9 | 2.9 | 2.7
P 1 | std | 58.28 | 51.59 | 0.15 | 0.12
P 1 | min | 101.49 | 150.9 | 2.52 | 2.37
P 1 | max | 410.27 | 461.92 | 3.14 | 2.97
P 4 | mean | 124.13 | 293.68 | 2.66 | 2.63
P 4 | std | 17.89 | 94.03 | 0.17 | 0.1
P 4 | min | 99.75 | 150.9 | 2.26 | 2.45
P 4 | max | 165.76 | 440.27 | 2.95 | 2.85
P 5 | mean | 146.31 | 251.95 | 2.86 | 2.69
P 5 | std | 27.98 | 49.9 | 0.16 | 0.14
P 5 | min | 101.49 | 150.9 | 2.53 | 2.37
P 5 | max | 189.61 | 374.65 | 3.14 | 2.97
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
