1. Introduction
In this paper, we discuss a computational model used to solve optimization problems. We focus on online algorithms, a model explored by many researchers. An online algorithm processes an input data stream and produces an output data stream in an online fashion: it must return an output variable immediately after reading the corresponding input variable, while the best answer can depend on the whole input. The main goal is the minimization of an objective function (we call its value the cost of an output); we say that an online algorithm solves an online minimization problem. Researchers consider different measures of effectiveness for online algorithms [1,2]; the competitive ratio is the most standard and useful one [3,4]. The competitive ratio is the ratio of the cost of the output generated by an online algorithm to the cost of the output produced by an optimal offline algorithm. If this ratio is at most c in the worst case, then we call the algorithm c-competitive and say that the competitive ratio of the algorithm is c.
Typically, online algorithms have no limits on time and memory complexity; the main restriction is the lack of knowledge about future input variables. At the same time, researchers develop online algorithms with different types of restrictions: some are restrictions on memory [5,6,7,8,9,10,11], others are restrictions on time complexity [12,13,14]. Often, an algorithm with restricted resources is closer to real-world applications.
In this paper, we focus on the memory complexity of online algorithms. We expect the input stream to be so big that it cannot be stored entirely in memory. Since an online algorithm processes a data stream, we can consider automata (streaming algorithms, state machines) as online algorithms for this setup. In the paper, we use the term “log-bounded automata”. It is a uniform automata-like model whose memory size can depend on the length of the input. This model was explored in [5,7,8]. Note that the automaton is allowed to use only one input head in the considered model. Sometimes, to emphasize that the model processes input data and is “one-pass”, we call it one-way log-bounded automata: we cannot reread the input and must store all required information about the past in memory. Sometimes, to emphasize that the algorithms are used as online algorithms or for solving an online minimization problem, we call the model one-way log-bounded automata for an online minimization problem.
We consider classical and quantum models of log-bounded automata as online algorithms. The classical model was developed by different researchers [5,6,7,10,11] over a long time, while the quantum model was introduced several years ago in [8] and discussed in [9]. Quantum computing [15,16,17] itself has been one of the hot topics in computer science in the last few decades. There are many problems where quantum algorithms outperform the best-known classical algorithms [18,19,20]. A restriction on the memory size is especially important for the quantum model of algorithms because near-future quantum computers are able to manipulate only a few qubits.
In the case of one-way log-bounded automata as online algorithms, we know that the quantum model can be more effective than its classical counterpart with respect to the competitive ratio. The result was proven for two cases: sublogarithmic size of memory [8] and polylogarithmic size of memory [21]. Note that if an algorithm can use a linear size of memory, then it can store the whole input in memory; in that case, it is equivalent to the standard online algorithm model without a restriction on memory size. That is why only a sub-linear size of memory is interesting in the restricted-memory setting.
Researchers have also considered other quantum models: quantum online algorithms with repeated test [22], quantum online algorithms with a restriction on the size of memory [23,24], quantum algorithms with a buffer [25,26], and others. The question of comparing quantum and classical models has been explored for data stream processing models by different researchers [27,28,29,30,31,32,33,34,35,36,37,38,39,40].
Our Results
Let us focus on demonstrating quantum supremacy in the case of a logarithmic size of memory. For this goal, we choose a problem and demonstrate that a quantum model for the problem is more effective than its classical counterpart.
A similar result was proven in [21]. At the same time, the problem considered in [21] has one restriction: it is not defined for all inputs and requires a pre-validation to check whether an input is permissible. Such problems are called “promise problems” or “partial functions”, and this is an important restriction. The present paper explores “total functions”, i.e., problems without a pre-validation procedure for the input.
We investigate an online minimization problem that is based on the Disjointness Boolean function. For some integer b, the function is defined on two binary strings $x$ and $y$ with $|x| = |y| = b$. Here $x$ and $y$ are bit-masks (or characteristic vectors) of two input sets. The result is 1 iff the two sets have no common element.
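The Disjointness function on bit-masks can be sketched directly. This is an illustrative helper in our own notation (the function name `disj` is ours, not from the paper):

```python
def disj(x: str, y: str) -> int:
    """Disjointness on bit-masks: 1 iff the sets encoded by x and y are disjoint."""
    assert len(x) == len(y)
    return 0 if any(a == "1" and b == "1" for a, b in zip(x, y)) else 1

print(disj("0101", "1010"))  # sets {2, 4} and {1, 3} are disjoint -> 1
print(disj("0110", "0100"))  # both sets contain element 2 -> 0
```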
Researchers have explored communication complexity and quantum query algorithms for the Disjointness Boolean function [41,42,43]. Additionally, the function was used for comparing quantum and classical streaming algorithms in [28].
In this paper, we consider an online version of this well-known problem, . The definition of the problem is based on the problems presented in [28,44]. We provide a c-competitive quantum log-bounded automaton for , and we show that any classical counterpart is less effective. Formally, any classical log-bounded automaton for the problem is -competitive and . A similar result was presented in [44]; this paper presents a more significant separation between the competitive ratios of quantum and classical algorithms.
Online algorithms are often investigated in terms of the advice complexity measure [45,46,47]. We are interested in this measure as well. The model with advice was not considered in [44].
The main idea of the model is the following. An online algorithm gets some bits of advice about the input. A trusted Adviser sending these bits knows the whole input and has unlimited computational power. Such additional information helps the online algorithm in the computation process and allows it to obtain a better competitive ratio. Deterministic and randomized online algorithms with advice are considered in [45,48,49]. Typically, researchers on online algorithms use the term “Adviser”, while researchers on other models use the term “Oracle”.
In this paper, we provide a quantum log-bounded automaton for with a single advice bit and a logarithmic size of memory that is almost optimal, i.e., -competitive for . At the same time, any classical algorithm with a logarithmic size of memory is less effective: formally, any such algorithm is -competitive and .
Note that using a specially constructed problem to demonstrate the power of models is a common approach. Sometimes the problems are artificial, but they allow authors to show more representative results [45,49,50].
The paper is organized as follows. We give definitions in Section 2. In Section 3.1, we present our results for the quantum models, and Section 3.2 contains results for classical models. The conclusion is presented in Section 4.
2. Preliminaries
In the paper, we use different notations for -functions. A function is such that iff . A function is such that if ; otherwise, . So, .
Let us define the computational models.
An online minimization problem is presented by a set of possible inputs and a cost function. An input is a sequence of input variables; here n is the length of the input, shortly . Let be the set of feasible outputs associated with an input I. An output is a sequence of output variables . The cost function assigns a positive real value to an input and an output . We call an optimal solution for if it satisfies the following statement
An online algorithm for this problem is defined as follows. A deterministic online algorithm A outputs a sequence such that the variable is computed from input variables .
An algorithm A is c-competitive if we can present a constant such that, for every positive integer n and for any input of size n, the following statement is correct
Here is an output of an optimal offline algorithm for the considered problem, and c is the minimal number that satisfies the inequality. The number c is the competitive ratio of the algorithm A. In the case of and , the algorithm A is optimal. Typically, c is a constant; at the same time, c can be a function of n.
We can define an online algorithm A with advice as a sequence of algorithms where . The trusted adviser chooses that depends on an input I, and the algorithm computes an output sequence such that . An algorithm A is c-competitive with advice complexity if there exists a constant such that, for every n and for any input I of size n, there exists some such that .
A randomized online algorithm R computes an output sequence such that is computed from , where is the content of the random tape, i.e., an infinite binary sequence, where every bit is chosen uniformly at random and independently of all the others. By we denote the random variable expressing the cost of the solution computed by R on I. R is c-competitive in expectation if there exists a constant such that, for every I, .
We use one-way automata for online minimization problems as online algorithms with restricted memory size. In the paper, we use the terminology of branching programs [51] and algorithms. We say that an automaton computes a Boolean function if, for any input X of length m, the automaton returns 1 iff . We can say that an automaton returns 1 if it accepts an input word and returns 0 otherwise. Additionally, we use the memory terminology from algorithms: we say that an automaton has s bits of memory if it has states. In fact, the model that we consider is a stable id-OBDD [30,31,51], that is, an algorithm that uses at most s bits of memory and processes a data stream.
Let us present the definitions of log-bounded automata that we use. A deterministic log-bounded automaton with bits of memory that processes an input is a 4-tuple .
The set D is a set of states, , is the initial state.
The function is the transition function , where is the size of the input alphabet.
is the output function , where is the size of the output alphabet.
It is a uniform model. The computation starts from the state . Then, on reading an input symbol , it changes the current state to . At the end of the computation, the automaton outputs .
A probabilistic log-bounded automaton is a probabilistic counterpart of the model. It chooses from more than one transition in each step such that each transition is associated with a probability. Thus, the automaton can be in a probability distribution over states during the computation. The total probability must be 1, i.e., the sum of probabilities of outgoing transitions from a single state for a single letter must be 1. Thus, a probabilistic automaton returns a result for each input with some probability. For , the automaton returns a result v for an input with bounded-error if the automaton returns the result v with probability at least for some . The automaton computes a function with bounded error if it returns with bounded error for each . The automaton computes a function f exactly if .
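The probabilistic automaton above can be traced by propagating a probability distribution over its states. The sketch below is our own illustration (the dictionary-based transition encoding and the 2-state example are assumptions, not the paper's construction):

```python
def run_distribution(delta, start, word, n_states):
    """Track the probability distribution over states of a probabilistic automaton.

    delta[(state, symbol)] is a list of (next_state, probability) pairs whose
    probabilities sum to 1, as the model requires.
    """
    dist = [0.0] * n_states
    dist[start] = 1.0
    for sym in word:
        new = [0.0] * n_states
        for s, p in enumerate(dist):
            if p > 0.0:
                for t, q in delta[(s, sym)]:
                    new[t] += p * q
        dist = new
    return dist

# A toy 2-state automaton: on "1" it stays with prob. 0.9 and flips with prob. 0.1;
# on "0" it does nothing.
delta = {
    (0, "1"): [(0, 0.9), (1, 0.1)],
    (1, "1"): [(1, 0.9), (0, 0.1)],
    (0, "0"): [(0, 1.0)],
    (1, "0"): [(1, 1.0)],
}
dist = run_distribution(delta, 0, "101", 2)
```

The final distribution always sums to 1, reflecting the requirement that outgoing transition probabilities sum to 1 at every step.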
A deterministic online streaming algorithm (a deterministic log-bounded automaton for an online minimization problem) with bits of memory that processes an input is a 4-tuple .
The set D is a set of states, , is the initial state.
is a transition function .
is an output function .
The computation starts from the state . Then, on reading an input symbol, it changes the current state to and outputs . The main difference between an “online streaming algorithm” and the standard “online algorithm” definition is the restriction on memory and the prohibition on rereading previous input variables.
A randomized online streaming algorithm and a deterministic online streaming algorithm with advice have similar definitions, but with respect to definitions of corresponding models of online algorithms.
Comment. Note that any online algorithm can be simulated by an online streaming algorithm (automaton for online minimization problem) using bits of memory. It is n bits in the case of binary input and bits in the general case.
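The simulation mentioned in the comment is simply buffering: the streaming algorithm keeps the prefix read so far and re-runs the online algorithm on it. A minimal sketch under our own naming (the `majority` example is hypothetical):

```python
def as_streaming(online_alg, stream):
    """Run an arbitrary online algorithm as a one-way streaming algorithm by
    buffering the prefix read so far (at most n bits of memory for binary input)."""
    prefix, outputs = [], []
    for x in stream:
        prefix.append(x)
        outputs.append(online_alg(prefix))  # y_i may depend only on x_1, ..., x_i
    return outputs

# toy online algorithm: output the majority bit of the prefix seen so far
majority = lambda p: int(sum(p) * 2 >= len(p))
print(as_streaming(majority, [1, 0, 0, 1]))  # -> [1, 1, 0, 1]
```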
Let us consider a quantum online streaming algorithm (quantum log-bounded automaton for an online minimization problem). More information on quantum computation and quantum streaming computational models (automata) can be found in [17,35,52,53,54]. For some integer , a quantum online algorithm Q with q qubits is defined on an input and outputs . The memory of the quantum algorithm is the state of a quantum register of q qubits. In other words, the computation of Q on an input I can be traced by a -dimensional vector from a Hilbert space over the field of complex numbers. The initial state is a given -vector (we use Dirac notation for vectors). The algorithm can perform one of two kinds of operators: the first is applying a unitary -matrix; the second is measurement.
Let us describe the measurement process. Suppose that Q is in a state before a measurement and the algorithm measures the i-th qubit. Let the 0-value for qubit i be associated with states ; let the 1-value for qubit i be associated with states . If we measure the qubit, then we obtain the 1-result with probability and the 0-result with probability . If the result of the measurement is , then after the measurement, the quantum system is converted to the state such that , where is an element of the new vector and is an element of the old vector. All other elements of the new vector are 0.
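The measurement rule above (outcome probabilities as sums of squared amplitudes, followed by renormalization of the surviving amplitudes) can be sketched on a plain state vector. This is an illustrative helper in our own notation; the indexing convention (qubit 0 as the most significant bit) is an assumption:

```python
import math

def measure_qubit(state, i, q):
    """Measure qubit i (0-indexed, most significant bit) of a q-qubit state vector.

    Returns (p0, p1, post1): the outcome probabilities, and the renormalized
    post-measurement state for outcome 1 (amplitudes with qubit i = 0 become 0).
    """
    mask = 1 << (q - 1 - i)
    p1 = sum(abs(a) ** 2 for j, a in enumerate(state) if j & mask)
    p0 = 1.0 - p1
    post1 = None
    if p1 > 0:
        post1 = [a / math.sqrt(p1) if j & mask else 0.0 for j, a in enumerate(state)]
    return p0, p1, post1

# (|00> + |10>) / sqrt(2): measuring the first qubit gives 0 or 1, each with prob. 1/2
amp = 1 / math.sqrt(2)
p0, p1, post = measure_qubit([amp, 0.0, amp, 0.0], 0, 2)
```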
Suppose that, on the j-th step, the algorithm measures qubit v. Then, the outcome of the measurement is . We test a variable on the j-th step, for . The algorithm performs the following three operations in the presented order.
- 1.
It does a measurement. Let the outcome be u.
- 2.
The algorithm applies a unitary -matrix that depends on the input variable and the outcome u.
- 3.
It does a measurement. Let the outcome be . Then, the algorithm outputs on this step. Here is a function that converts the result of the measurement to an output variable.
The algorithm Q is c-competitive in expectation if there exists a non-negative constant such that, for every I, .
A quantum log-bounded automaton has a similar definition, but it returns at the end of the computation. See [35,55] for more details on quantum automata.
3. On Quantum Online Streaming Algorithm for the Online Disjointness Problem
Let us define the problem, which is based on the Disjointness Boolean function. The function is such that
We can say that and are bit-masks or characteristic vectors for sets . The function is such that iff the corresponding sets have no intersection, i.e., . For some positive integers t, k, r, and w such that k mod t = 0 and , the online minimization problem is organized as follows.
Figure 1 describes the structure of an input.
Let m be a positive integer. Formally, an input has the following structure: where is a binary representation of m and , where . For some positive integer n, let . Let be a function such that iff the following properties hold:
- 1.
For some positive integer and binary strings , the structure of is
- 2.
and
- 3.
;
- 4.
, where is a length of a vector v;
- 5.
, where is a binary representation of an integer .
- 6.
The value of the function .
- 7.
.
The authors of [28] constructed a problem using a similar idea.
For , let a Boolean variable be such that . Let be output variables that correspond to input variables . The cost function is where . Recall that the parameters and r of the problem satisfy the conditions k mod and . Let us discuss the meaning of the cost function. The output is split into t blocks; the size of each block is T. We say that a block is “right” if each element from the block satisfies ; otherwise, it is a “wrong” block. The cost of a “right” block is r, and the cost of a “wrong” block is w. The sum of the costs of all blocks is the total cost of the output. Our goal is to develop an algorithm that minimizes the cost of the output. A similar concept of the cost function was used, for example, in [49].
3.1. A Quantum Online Streaming Algorithm for the Online Disjointness Problem
Let us discuss a quantum log-bounded automaton for the function. For the construction of the automaton, we use the following known result:
Lemma 1 ([
56]).
We can construct a quantum log-bounded automaton that checks the equality of two binary strings with bounded error for . The size of the memory of the automaton is .
Note that the proof of the following lemma is similar to the proof from [44]. At the same time, the structure of differs from the structure of the corresponding function from [44]; that is why we present the full proof here. The current form of the function allows us to obtain a better error probability.
Lemma 2. Suppose that we have a quantum register that stores m, i.e., . There is a bounded-error quantum log-bounded automaton for that uses qubits of memory and has error probability for , where is the length of .
Proof. As the proof, we construct a quantum automaton for the problem. The automaton consists of five parallel procedures, one per property from the following list:
- 1.
For an integer
b, the string
has the following structure:
Here, we use
for several repetitions of the string.
- 2.
The string satisfies and ;
- 3.
, and ;
- 4.
;
- 5.
.
Property 1. We use a deterministic procedure for checking the first property. The size of memory is . Firstly, we check whether the strings have the following structure
We can do it using a constant number of states. Secondly, the procedure checks that all binary parts have the same length and stores this length in a quantum register . We can use states or memory. The detailed implementation is presented in Appendix A.1.
Property 2. We use a deterministic procedure for checking the second property. We store the string and compute the number of 4s and 5s, which is . Then, we check whether is correct, where . The algorithm stores and counts using states or bits of memory. The algorithm for the first property has already stored b, and m is stored in ; that is why we can use them for checking the equality. The detailed implementation is presented in Appendix A.2.
Property 3. We use a quantum procedure based on the quantum fingerprinting technique [56,57] for checking the third property. We assemble two strings from the existing ones:
where “∘” is concatenation. If
, then
and
Let us choose
. Due to Lemma 1, we can present a log-bounded automaton for checking equality of
and
. The size of memory is
. The detailed implementation is presented in
Appendix A.3.
Property 4. We use a quantum procedure for checking the fourth property. The procedure is also based on the quantum fingerprinting method and checks whether is correct. We construct an automaton using Lemma 1 for checking the property with bounded error . Suppose ; then, the automaton reaches an error probability using qubits. The detailed implementation is presented in Appendix A.4.
Property 5. We use a quantum procedure for checking the fifth property; it uses some ideas from [28]. As a base, we use the well-known Grover's search algorithm [58,59] for the unstructured search problem. The size of quantum memory for the procedure is and the error probability is . Let us present the description of the procedure. Let and . If the input satisfies the first four properties, then we can say that the input is m copies of the string. Grover's search algorithm was defined in the query model [16,17] and makes queries to the oracle, or queries in the case of an unknown number of solutions. We simulate a query to the oracle by reading the input string . So, copies of this string allow us to perform multiple queries. We simulate the search for an index such that or . In that case, .
Let us provide the emulation of a query to the oracle. We have qubits as a quantum register , p qubits as a quantum register . Additionally, there is a qubit and a qubit .
Initially, the quantum system is in the state. The automaton reads the string; in other words, it processes . We assume that the automaton already knows b at this step. Firstly, we apply the Hadamard transformation to obtain the state . Then, we apply the inversion transformation (X-gate or NOT-gate) to obtain the state .
We split the automaton’s steps into macro-steps. The l-th one is reading the strings for . During checking the third property, we store l in . The computing process for l is presented in Appendix A.3. Assume that the automaton reads the string’s a-th symbol. If the automaton reads 1, then we apply the V transformation to the registers . Here for . The automaton does not change the quantum registers if it reads 0.
Assume that the automaton reads the string’s a-th symbol. If the automaton reads 1, then we apply the W transformation to the registers . Here for . The automaton does not change the quantum registers if it reads 0. In other words, the amplitude of a state is inverted only if ; indexes with this property are our solutions. We process the string exactly as we process the string.
Remember that we have already tested the equality in procedures 3 and 4. Therefore, after processing , the effect that has on the second register is removed. One macro-step simulates one query of Grover’s search algorithm.
Then, we apply Grover’s diffusion transformation to the register . The transformation is . It rotates all amplitudes around the mean, where R is a unitary transformation that inverts the sign of all basis states except the zero basis state. The reader can find details on Grover’s diffusion transformation D in [58,59].
One macro-step (processing of the strings ) and Grover’s diffusion transformation together correspond to one step of Grover’s search algorithm. Due to [59], the automaton simulates steps of the algorithm if , where is the a-th symbol of the string . The automaton does steps if , where . Initially, . The automaton simulates the required number of Grover’s search steps and measures . If we obtain the result, then we read one more block and check the equality . For checking the equality, we store these two bits in qubits. Then, we measure both qubits and check the equality of the measurement results.
If the condition holds, then the process stops. Otherwise, we assign to , initialize with , and repeat the procedure for the new value of v. The probability of a correct answer of the algorithm is at least , due to [59]. Using the standard success boosting procedure, we can reduce the error probability to by repeating the algorithm times.
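The Grover bookkeeping used above (iteration count, success probability, and exponential error decay under repetition) can be sketched numerically. These helpers are our own illustration of the standard formulas for k marked items among N, not the paper's construction:

```python
import math

def grover_steps(N, k=1):
    """Standard number of Grover iterations for N items with k marked solutions."""
    return math.floor((math.pi / 4) * math.sqrt(N / k))

def grover_success(N, k=1):
    """Success probability after grover_steps(N, k) iterations:
    sin^2((2j + 1) * theta) with theta = arcsin(sqrt(k / N))."""
    theta = math.asin(math.sqrt(k / N))
    return math.sin((2 * grover_steps(N, k) + 1) * theta) ** 2

def boosted_error(p_err, repetitions):
    """Repeating the search and verifying each candidate drives the error down
    exponentially: the procedure fails only if every repetition fails."""
    return p_err ** repetitions
```

For example, with N = 1024 and a single solution, 25 iterations already push the success probability above 0.99, and independent repetitions shrink any constant error exponentially.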
It is easy to see that the size of memory for the presented automaton for is qubits. The error probability is where . □
If we compare the obtained result with the results of [28,44], then we can see that the function and the automaton have several modifications. Using these modifications, we show that the error probability is . At the same time, similar constructions from [28,44] allow us to obtain only constant error probability.
Using the previous result, we can construct a quantum online streaming algorithm for
with a smaller competitive ratio compared to [
44]. This property is presented in the following theorem.
Theorem 1. Suppose . There is a quantum online streaming algorithm Q for that is c-competitive in expectation and uses qubits of memory, where . Here for .
Proof. Firstly, we store m in the quantum register of qubits. We guess , choosing it uniformly at random from the set . We output the result of the guess as the first output variable . After that, the automaton (online streaming algorithm) computes and outputs , that is, . On the i-th step, the automaton computes and outputs . If (we guessed correctly), then every other output variable is correct, i.e., for . If , then all output variables are incorrect, i.e., for . Let . We can show that for . Let us compute the cost of the output for this algorithm if the probability of computing is . Recall that all output variables are split into t blocks. Let us change the cost function: the new one is such that 1 is the cost of a “right” block, and 0 is the cost of a “wrong” block. Formally,
The main cost function can be computed from the new one using the following statement: .
For computing the competitive ratio, we should calculate .
Let us compute the probability that block i is a “right” block (costs 1). Let . So, if the i-th block is “right”, then for all binary strings inside the block, the computation of is correct and the guess of is correct. The probability of this event is .
Let . If the i-th block is “right”, then two conditions should hold:
- (i)
for all binary strings inside the block, the computation of should be correct;
- (ii)
the number of errors in the computation of for the preceding s, plus 1 if is wrong, should be even.
The probability of the first condition is . Let us compute the probability of the second condition.
Let be the number of errors before the computation of . Let be the number of errors during the computation of . If , then ; otherwise, . Let the probability mod . Hence, mod . If we compute with an error, then we have mod . If we compute with no error, then we have mod . Hence,
Remember that with probability . Hence, .
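The parity bookkeeping above is the standard "even number of errors" recurrence: if each computation errs independently with probability eps, the probability that the error count stays even evolves as p ← p(1 − eps) + (1 − p)eps. The sketch below illustrates this recurrence and its closed form; the symbols are ours, since the displayed formulas were lost in extraction:

```python
def prob_even_errors(eps, i):
    """Probability that the number of independent errors, each occurring with
    probability eps, among i computations is even."""
    p = 1.0  # zero errors so far, which is even
    for _ in range(i):
        p = p * (1 - eps) + (1 - p) * eps
    return p

# agrees with the closed form 1/2 + (1 - 2*eps)**i / 2
print(abs(prob_even_errors(0.1, 5) - (0.5 + 0.5 * (1 - 0.2) ** 5)) < 1e-12)  # True
```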
We can compute the probability
as follows
Hence,
. Let us compute the expected cost:
Therefore, .
Let us compute the expected competitive ratio.
Let us estimate
.
Hence, . □
The quantum streaming algorithm with advice is presented in the following theorem.
Theorem 2. There is a quantum online streaming algorithm Q for with qubits of memory and a single advice bit that is c-competitive in expectation, where and for and .
Proof. The algorithm is the same as in Theorem 1, but is obtained as an advice bit. Let us compute similarly to the proof of the previous theorem. Assume that is the error of computing . Let be the number of errors before the computation of ; it is the number of errors in the computation of for the preceding Ls. Let the probability mod . Hence, mod . Remember that we obtain as the advice bit; therefore, .
So, .
The probability of the event is .
We can compute the expected value for the cost
Therefore, and . Hence, we obtain the claim of the theorem. □
3.2. Bounds for Classical Automata for the Online Disjointness Problem
We use some results from communication complexity theory. We briefly remind the reader of the model (see [60]). There are two players, Alice and Bob, who want to compute a Boolean function . Alice has variables from a set , and Bob has variables from a set . Alice starts the computation and sends a message to Bob; then, Bob continues and sends a message to Alice, etc. The player who can compute returns the answer. Alice and Bob can use a probabilistic mechanism. The probabilistic communication complexity is the total number of bits in all messages. It is known that
Lemma 3 ([
42]).
The probabilistic communication complexity of the Boolean function is , where , Alice has x and Bob has y. Using this result, we can show the following one.
Lemma 4. Suppose we have a promise that properties 1–5 and 7 of are correct, the length of the string is , and the integer m is such that . Then, there is no bounded-error probabilistic log-bounded automaton for that uses bits of memory.
Proof. Suppose that there is a probabilistic automaton R for with bits of memory. Then, we can simulate the work of R by a probabilistic communication protocol such that Alice knows and Bob knows . Each player simulates R on their part of the input and then sends the state of the memory of R to the other player. See more details on the emulation of stream processing models by communication protocols, for example, in [37,51,61]. Therefore, the communication protocol uses bits of communication for computing . This fact contradicts Lemma 3. □
Using this result, we can show that for probabilistic and deterministic automata, the problem is hard.
Theorem 3. There is no c-competitive deterministic online streaming algorithm for that uses bits of memory, where , .
Proof. Let us consider any online streaming algorithm A for the problem such that the size of memory is at most bits. Assume that the algorithm A returns as the first output variable. Let us prove that there are two parts of the input such that A returns the same value for both, but .
Assume that there is no such triple . Then, it means that we can construct an automaton that uses bits of memory and has the following property: iff , for any . The automaton simulates A; hence, it computes the function or the function . If it computes the function , then we can provide an automaton such that . This statement contradicts the main claim of the theorem. Similarly, we can prove that we have analogous triples for .
Let us choose , for .
Let be such that . An optimal offline solution is where .
Let us prove that for each . We have , hence , so we obtain .
Therefore, all output variables are “wrong” and the cost is . Hence, the competitive ratio c is more than . □
Theorem 4. There is no expected c-competitive randomized online streaming algorithm for that uses bits of memory, where and , .
Proof. We can show that a bounded-error randomized online streaming algorithm A cannot compute . We prove the claim using the idea from the proof of Theorem 3. It is easy to see that the only option for an algorithm is to guess the value of , choosing it uniformly at random. Using this strategy, we can obtain the required competitive ratio. If we want to get a cost r for a block, then the algorithm should guess all output bits of the block. So, the cost of the i-th block is . Hence, . Therefore, the algorithm A is c-competitive in expectation, for . □
For the proofs of the properties of classical models with advice (Theorems 5 and 6), we show that if the model does not have enough memory, then the problem can be interpreted as the “String Guessing, Unknown History” problem ( ) from [50].
The following result for the is known:
Lemma 5 ([
50]).
Consider an input string of length k for , for some positive integer k. Any online algorithm that is correct on more than characters, for , needs to read at least advice bits.
Using the above result, we show the complexity of the problem.
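The string guessing setting can be illustrated with a toy strategy: a hypothetical adviser reveals a prefix of the secret string, and the algorithm guesses the rest blindly. This sketch is ours and only illustrates the trade-off between advice bits and correct characters, not the exact bound of Lemma 5:

```python
def guess_with_advice(secret, b):
    """String guessing with unknown history: a hypothetical adviser reveals the
    first b characters; the algorithm guesses "0" for the rest."""
    advice = secret[:b]                       # the adviser knows the whole input
    guess = advice + "0" * (len(secret) - b)
    return sum(g == s for g, s in zip(guess, secret))  # number of correct characters

print(guess_with_advice("1101", 2))  # advice "11", guess "1100": 3 of 4 correct
```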
Theorem 5. There is no c-competitive deterministic online streaming algorithm for that uses bits of memory and b advice bits, where , , v is such that , and .
Proof. We start with a proof of the following claim. For any online algorithm for that obtains b advice bits, we can suggest an input such that the algorithm makes at least errors in the computation. We prove it by induction.
Firstly, let us prove the claim for . Then, the adviser can send , where . Using this advice, we return all correct output variables .
Secondly, let us show the correctness of the claim in the case of . In this case, there is no advice at all; so, it is exactly the situation of Theorem 3.
Thirdly, let us show the correctness of the claim in the case of . Assume that the claim is correct for a pair such that , and at least one of these inequalities is strict. Let us consider the computation process of the function.
Assume that there is an input such that there is no way to compute an output with bounded error. Then, using this input, we obtain the case for input variables and b advice bits. So, the computation of for binary strings is incorrect, and the computation for is incorrect as well.
Assume that the algorithm always computes an output for with bounded error. Then, we can describe the process of communication with the adviser in the following way: the adviser separates all possible inputs into non-overlapping groups . After that, he sends the number of the group containing the current input to the algorithm. Then, the algorithm A processes the input with the knowledge that the input can only be from this group.
Let us consider three sets of groups:
such that is and ,
such that is and ,
.
Let , for some . If , then we take as any input from any group . Hence, we have at most possible groups for the adviser that distinguish the next s. We can say that the adviser can encode them using bits. Therefore, we get the situation for advice bits and binary strings ; the claim is true for this situation. If , then we pick an input from any group . Therefore, there are at most possible groups for the adviser, and we have the same situation. For this case, the claim is also correct.
Suppose the sizes of the sets and are . Assume that the algorithm solves the problem and the memory size is , where . An automaton B with the following structure can simulate the algorithm's work on : B has two memory registers, a register of b bits and a register of bits. We assume that is initialized with the advice bits. After that, the automaton B invokes A depending on the value of the advice bits stored in . The memory size of B is , and it computes by simulating A. We obtain a contradiction with the theorem's claim. Hence, transferring the answer of as an advice bit is the only way to compute the result of the function. So, we have the situation for binary strings and advice bits.
So, it means that for the algorithm, the problem is the same as the
String Guessing Problem with Unknown History (
) from [
50].
Due to Lemma 5, for obtaining v right answers , we need advice bits.
Because of the cost function properties, the best case for the algorithm is getting all
s of a block. Therefore, the algorithm can obtain advice bits on
full blocks, for
. In that case, the cost of these blocks is
r. Each of the remaining blocks has at least one “wrong” output variable. So, such a block costs
w. Hence, we suggest the input such that the corresponding output costs
Let us compute the competitive ratio
c □
Next, we show an analog of Theorem 5 for the randomized case. Ideas from [62,63,64,65] form the basis for the proof of the following theorem.
Theorem 6. Suppose . Any randomized online streaming algorithm A using b advice bits and less than bits of memory that solves has the expected competitive ratio
for , v such that , and .
Proof. The proof is based on the proof of Theorem 5. Suppose that an online algorithm obtains b advice bits. Then, we can suggest an input such that, on this input, the computation of has an error for at least strings . The claim can be proven in a way similar to the proof of Theorem 5. At the same time, we use a probabilistic automaton B to simulate A, and the part of the automaton that uses the memory is probabilistic.
Therefore, for the algorithm, our problem is equivalent to the
String Guessing Problem with Unknown History (
) from [
50].
Hence, if we want to obtain
v right answers for
output variables, then we should use
We can show that if we do not know from the adviser, then we cannot compute the output variable with bounded error. Hence, the only way to obtain the variables is to choose them uniformly at random. We can apply the proof technique of Theorem 4; we use the same approach for all segments between “known” output variables. □