Maximizing the minimum query granularity is less straightforward. In fact, it is NP-hard, even in the case where all entities are stationary (this follows directly from [1] (Theorem 17)). Nevertheless, it is clear that any reasonable query scheme using minimum query granularity g that guarantees a given measure of congestion potential at most x at the target time τ will not query any entity more than once within the final n queries. Thus, any such optimal query scheme determines a radius for each entity, where (i) the order of these final queries is a permutation of the entities, and (ii) the uncertainty configuration, in which each entity has an uncertainty region centered at its queried location (projected to the target time) with the associated radius, has the given congestion measure at most x. For any measure, we associate with an instance an intrinsic fixed-target granularity, defined to be the largest granularity g for which these conditions are satisfiable.
It is not hard to see that, by projecting the current uncertainty regions to the target time (assuming no further queries), some entities can be declared “safe”, meaning that their projected uncertainty regions cannot possibly contribute to making a congestion measure, for themselves or any other entity, exceed x at the target time. This idea is exploited in query schemes that query entities in rounds of geometrically decreasing duration, after each of which a subset of such “safe” entities is set aside with no further attention, until no “unsafe” entities remain.
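To make the congestion measure concrete, the following sketch (our own illustration, not from the paper; the function names and the candidate-point heuristic are assumptions) computes the ply of a set of projected uncertainty disks, probing disk centers and midpoints of overlapping pairs as candidate deepest points.

```python
import math
from itertools import combinations

def ply_at(p, disks):
    """Number of disks (center, radius) whose closure contains point p."""
    return sum(math.dist(p, c) <= r for c, r in disks)

def max_ply(disks):
    """Approximate the ply of a collection of disks. For disks, a deepest
    point lies at a disk center or where two boundary circles cross; as a
    simple proxy we probe centers and midpoints of overlapping pairs."""
    candidates = [c for c, _ in disks]
    for (c1, r1), (c2, r2) in combinations(disks, 2):
        if math.dist(c1, c2) <= r1 + r2:  # the two disks overlap
            candidates.append(((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2))
    return max(ply_at(p, candidates and disks) for p in candidates)
```

For instance, three unit disks centered at (0, 0), (1.5, 0), and (3, 0) have ply 2, witnessed at the midpoint (0.75, 0) of the first overlapping pair.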
3.1. The Fixed-Target-Time-ply (FTT-ply) Query Scheme
This query scheme shows that, for any Δ and x, uncertainty ply at most a fixed multiple of x can be guaranteed at a fixed target time using minimum query granularity that is at most a fixed factor smaller than that used by any (even clairvoyant) query scheme that guarantees uncertainty ply at most x. Assuming no prior knowledge about entity locations, we treat the uncertainty regions of all entities as being unbounded at time 0. Hence, none of the entities are ply-safe to start (assuming x < n). Thus any scheme, including a clairvoyant scheme, must query all but x of the entities at least once in order to avoid ply greater than x at the target time. The FTT-ply scheme starts by querying all entities in a single round, using a query granularity that is competitive, to within a fixed factor, with what must be used by any other scheme.
At this point, the FTT-ply scheme identifies the set of entities that are not yet ply-safe (the unsafe survivors). All other entities are set aside and attract no further queries.
The scheme then queries, in a second round, all survivors, using a correspondingly smaller query granularity. In general, after the rth round, the scheme identifies the unsafe survivors which, assuming any remain, continue into an (r+1)st round whose granularity shrinks in proportion to the round's shorter duration. Since round durations decrease geometrically, the rth round completes at time τ(1 − 2^−r). Furthermore, all entities that have not been set aside have projected uncertainty regions whose radii lie within a factor of two of one another. Note also that, by our assumption on Δ, the projected uncertainty region radii eventually become too small for any entity to remain unsafe; hence no unsafe survivors remain after a bounded number of rounds.
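For concreteness, the round structure can be sketched as follows (a hypothetical rendering: we assume, consistent with rounds of geometrically decreasing duration, that round r spans duration τ/2^r; the function and parameter names are ours).

```python
def ftt_round_schedule(tau, survivors_per_round):
    """Sketch of the FTT round structure. Round r (1-based) is assumed to
    span duration tau / 2**r, completing at time tau * (1 - 2**-r); its
    surviving entities are queried evenly across the round, so the
    within-round query granularity is (tau / 2**r) / s_r for s_r survivors."""
    schedule, start = [], 0.0
    for r, s in enumerate(survivors_per_round, 1):
        duration = tau / 2**r
        schedule.append({"round": r,
                         "end": start + duration,
                         "granularity": duration / s})
        start += duration
    return schedule
```

One consequence visible in the sketch: if the number of survivors halves from one round to the next, the granularity is the same in every round; granularity degrades only when survivors are eliminated more slowly than the round durations shrink.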
Theorem 1. For any Δ and x, the FTT-ply query scheme guarantees uncertainty ply at most a fixed multiple of x at target time τ and uses minimum query granularity over the interval [0, τ] that is at most a fixed factor smaller than the intrinsic fixed-target granularity for guaranteeing uncertainty ply at most x.
Proof. We claim that any competing query scheme that guarantees uncertainty ply at most x at time τ must use at least a fixed fraction as many queries, after the start of the rth query round of the FTT-ply query scheme, as FTT-ply uses in its (r+1)st round; any fewer queries would result in one or more entities having ply greater than x at the target time.

To see this, observe first that each of the unsafe survivors is either queried by the competing scheme after the start of the rth query round or has its projected uncertainty ply reduced to at most x by queries to sufficiently many of its projected uncertainty neighbors after the start of the rth query round. Assuming that only a small fraction of the unsafe survivors are queried directly by the competing scheme after the start of the rth query round, we argue that many queries must still be made after the start of the rth query round to reduce the projected uncertainty ply of the remaining unsafe survivors to at most x.

Note that any query after the start of the rth round to an entity set aside in an earlier round cannot serve to lower the projected uncertainty ply of any of the unsafe survivors. Furthermore, any query to one of the survivors of the (r−1)st round can serve to decrease by one the projected uncertainty ply of at most a bounded number of the unsafe survivors. (This follows because (i) the projected uncertainty regions of all survivors are within a factor of 2 in size, and (ii) any collection of m unit-radius balls that are all contained in a ball of radius 4 must have ply at least a fixed fraction of m.) Thus any scheme that guarantees uncertainty ply at most x at time τ must make at least a fixed fraction as many queries, after the start of the rth query round, as there are unsafe survivors.

Since the competing query scheme must use at least that many queries over the interval between the start of the rth round and τ (an interval comparable in length to the (r+1)st round, in which FTT-ply queries each unsafe survivor exactly once), it follows that our query scheme is competitive, to within a fixed factor, in terms of minimum query granularity, with any, even clairvoyant, query scheme that guarantees uncertainty ply at most x at the target time. □
3.2. The Fixed-Target-Time-Degree (FTT-Degree) Query Scheme
This somewhat more involved query scheme shows that, for any Δ and x, uncertainty degree at most a fixed multiple of x can be guaranteed at a fixed target time using minimum query granularity that is at most a fixed factor smaller than that used by any query scheme that guarantees uncertainty degree at most x. As before, since the projected uncertainty regions of all entities are unbounded at time 0, any scheme, including a clairvoyant scheme, must query all but x of the entities at least once in order to avoid degree (and also ply) greater than x at the target time. The FTT-degree scheme starts by querying all entities in a single round, using a query granularity that is competitive, to within a fixed factor, with what must be used by any other scheme.
At this point, the FTT-degree scheme identifies two sets of entities: (i) the entities that are not yet degree-safe (the unsafe survivors), and (ii) the entities that are degree-safe but whose projected uncertainty region intersects the projected uncertainty region of one or more of the unsafe survivors (the safe survivors). All other entities are set aside and attract no further queries.
The scheme then queries, in a second round, all survivors, using a correspondingly smaller query granularity. In general, after the rth round, the scheme identifies the unsafe survivors and the safe survivors which, assuming any unsafe survivors remain, continue into an (r+1)st round whose granularity shrinks in proportion to the round's shorter duration. As before, the rth round completes at time τ(1 − 2^−r), and all entities that have not been set aside have projected uncertainty regions whose radii lie within a factor of two of one another.
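The survivor classification above can be sketched as follows (our own rendering; treating disk intersection as the neighbor relation and x as the safety threshold are simplifying assumptions).

```python
import math

def classify_survivors(regions, x):
    """regions: dict id -> (center, radius) of projected uncertainty disks.
    As a stand-in for the paper's degree-safety test, call an entity unsafe
    if its degree (regions intersecting it, itself included) exceeds x; a
    degree-safe entity is a 'safe survivor' if its region touches an unsafe
    one, and is set aside otherwise."""
    def meet(a, b):
        (ca, ra), (cb, rb) = regions[a], regions[b]
        return math.dist(ca, cb) <= ra + rb
    nbrs = {i: {j for j in regions if j != i and meet(i, j)} for i in regions}
    unsafe = {i for i in regions if len(nbrs[i]) + 1 > x}
    safe = {i for i in regions if i not in unsafe and nbrs[i] & unsafe}
    return unsafe, safe
```

In a chain of three touching unit disks with x = 2, the middle disk is unsafe, its two neighbors are safe survivors, and an isolated fourth disk is set aside.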
Theorem 2. For any Δ and x, the FTT-degree query scheme guarantees uncertainty degree at most a fixed multiple of x at target time τ and uses minimum query granularity over the interval [0, τ] that is at most a fixed factor smaller than the intrinsic fixed-target granularity for guaranteeing uncertainty degree at most x.
Proof. We claim that any competing query scheme that guarantees uncertainty degree at most x at time τ must use at least a fixed fraction as many queries, after the start of the rth query round of the FTT-degree query scheme, as FTT-degree uses in its (r+1)st round; any fewer queries would result in one or more entities having degree greater than x at the target time.

To see this, observe first that each of the unsafe survivors must be satisfied: it is either queried by the competing scheme after the start of the rth query round, or has its projected uncertainty degree reduced below x by sufficiently many queries to its projected uncertainty neighbors after the start of the rth query round. Assuming that only a small fraction of the unsafe survivors are queried directly by the competing scheme after the start of the rth query round, we argue that many queries must still be made after the start of the rth query round to reduce below x the projected uncertainty degree of the remaining unsafe survivors.

Note that any query after the start of the rth round to an entity set aside in an earlier round cannot serve to lower the projected uncertainty degree of any of the unsafe survivors. As in the proof of Theorem 1, any query to one of the survivors of the (r−1)st round can serve to decrease by one the projected uncertainty degree of at most a bounded number of the unsafe survivors. Thus, any scheme that guarantees uncertainty degree at most x at time τ must make at least a fixed fraction as many queries, after the start of the rth query round, as there are unsafe survivors.

Similarly, observe that each of the safe survivors must have each of its unsafe neighbors satisfied in the sense described above. But, since the projected uncertainty regions of all survivors are within a factor of 2 in size, each query that serves to lower the projected uncertainty degree of an unsafe neighbor of some safe survivor must be to an entity whose projected uncertainty region lies in that safe survivor's projected uncertainty near-neighborhood (the ball centered at the safe survivor whose radius is nine times its projected uncertainty radius). But each such entity has at most a bounded number of safe near-neighbors, since any collection of m unit-radius balls that are all contained in a ball of radius 18 must have degree (and also ply) at least a fixed fraction of m.

It follows that, even if a single query lowers the projected uncertainty degree of all of the unsafe neighbors of every safe survivor to which it is charged, a total of at least a fixed fraction as many queries as there are safe survivors must be made after the start of the rth query round by any scheme that guarantees uncertainty degree at most x at time τ.

Thus, the competing query scheme must use at least a fixed fraction as many queries, over the interval between the start of the rth round and τ, as FTT-degree uses in its (r+1)st round. It follows that our query scheme is competitive, to within a fixed factor, in terms of minimum query granularity, with any, even clairvoyant, query scheme that guarantees uncertainty degree at most x at the target time. □
The competitive factor in both of the preceding theorems is worst-case optimal. Specifically, the following example demonstrates that degree at most x can be guaranteed at a fixed target time by a clairvoyant scheme using query granularity 1, yet any non-clairvoyant scheme that guarantees ply at most x at the target time must use a query granularity that is smaller by a matching factor.
Example 1. Imagine a configuration involving two collections A and B of point entities located on opposite sides of a point O. At time 0 all of the entities are at the same distance from O but have unbounded uncertainty regions. All entities begin by moving towards O at unit speed, but at some point a subset of the entities in both A and B (the special entities) change direction and move away from O at unit speed, while all of the others carry on until the target time, when they simultaneously reach O and stop (cf. Figure 3). To avoid an uncertainty degree greater than x at the target time, a clairvoyant scheme needs only to (i) query all entities (in arbitrary order) up to the time at which the special entities turn, and then (ii) query just the special entities (in arbitrary order) in the remaining time prior to the target, using query granularity 1, since doing so will leave the uncertainty regions of the entities in A disjoint from the uncertainty regions of the special entities in B, and vice versa.
On the other hand, to avoid ply greater than x at the target time, any non-clairvoyant scheme must query at least one of the special entities (in either A or B) in the final interval before the target. Since special and non-special entities are indistinguishable before this interval, many entities in at least one of A or B must be queried in the final interval before the target in order to be sure that at least one special entity is queried late enough to confirm that its uncertainty region will not contain O at the target time. This forces a correspondingly small query granularity. Thus, in the worst case, every scheme that achieves uncertainty ply at most x at the target time needs to use, on some instances, a query granularity smaller by the factor in the preceding theorems than that of the best query scheme for achieving uncertainty degree at most x at the target time on those same instances.
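A one-dimensional rendering of the trajectories in Example 1 may help (our own sketch; the parameter names, and in particular turn_time for the moment at which the special entities reverse, are assumptions):

```python
def dist_from_O(t, tau, turn_time, special):
    """Distance from the meeting point O at time t. Every entity starts at
    distance tau from O at time 0 and moves toward O at unit speed; special
    entities reverse direction at turn_time, the rest stop on reaching O."""
    if not special:
        return max(tau - t, 0.0)                # reaches O exactly at time tau
    if t <= turn_time:
        return tau - t                          # still heading inward
    return (tau - turn_time) + (t - turn_time)  # heading outward after the turn
```

At the target time tau, non-special entities sit at O while a special entity that turned at time tau − d sits at distance 2d; it is this late divergence, invisible before turn_time, that a non-clairvoyant scheme must detect with late queries.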
Theorems 1 and 2 speak to the query frequency requirements for bounding congestion at a fixed target time, measured in terms of ply or degree individually. This leaves open the question of how these measures relate to one another. The following example demonstrates that in some cases the granularity required to bound congestion degree can be much smaller than that required to bound congestion ply by x.
Example 2. The example involves two clusters A and B of point entities separated by a fixed distance. To maintain uncertainty ply at most x it suffices to query the entities in both clusters periodically, which can be achieved with query frequency one. Since the uncertainty regions associated with queried entities in cluster A never intersect the uncertainty regions associated with queried entities in cluster B, the largest possible ply involves queried entities in one cluster (say A) together with unqueried entities in the other cluster (B), for a total of x.
On the other hand, to maintain bounded degree, no uncertainty region can be allowed to grow to a radius comparable to the separation between the clusters. Thus, all entities need to be queried with frequency inversely proportional to that separation, giving a correspondingly larger total query demand over any fixed-length time interval.
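The gap between the two measures is easy to observe numerically. In the sketch below (our own construction, echoing the role of the wide unqueried region in the example), one large disk covering several pairwise-disjoint small disks has high degree while the ply stays at 2.

```python
import math

def ply_and_degree(disks):
    """disks: list of (center, radius). Degree of a disk = number of disks
    intersecting it (itself included); ply is approximated by probing disk
    centers, which suffices for this configuration since the small disks
    are pairwise disjoint and only the large disk overlaps them."""
    def meet(a, b):
        return math.dist(a[0], b[0]) <= a[1] + b[1]
    degree = max(sum(meet(a, b) for b in disks) for a in disks)
    ply = max(sum(math.dist(c, d[0]) <= d[1] for d in disks)
              for c, _ in disks)
    return ply, degree

small = [((3.0 * i, 0.0), 1.0) for i in range(4)]  # pairwise disjoint
big = [((4.5, 0.0), 10.0)]                         # covers all of them
```

Here the large disk has degree 5 (itself plus the four small disks) while no point of the plane is covered by more than 2 disks, mirroring how a single coarse uncertainty region inflates degree but not ply.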
Nevertheless, bounding congestion degree at a fixed target time cannot be too much worse than bounding congestion ply.
Theorem 3. The FTT-degree scheme uses a query granularity that is at most a fixed factor smaller than that of the best, even clairvoyant, scheme that guarantees ply at most x.
Proof. In the FTT-degree scheme, the number of safe survivors in round r is at most a fixed multiple of the number of unsafe survivors. Thus, the “extra” queries (those that handle the safe survivors) are at most a fixed factor more numerous than the queries that handle the unsafe survivors. Suppose that FTT-ply is modified so that the queries in round r occur with half of their previous granularity, completing at the midpoint of the round, and that FTT-degree is modified so that the queries in round r occur with half of their previous granularity, starting at the midpoint of the round. Since the queries of FTT-degree in round r now occur after the corresponding queries of FTT-ply, it is straightforward to see that the unsafe survivors in round r of FTT-degree are no more numerous than the unsafe survivors in round r of FTT-ply. It follows that the granularity of the queries in FTT-degree is no more than a fixed factor smaller than that of FTT-ply. □