Article

Structure of Optimal State Discrimination in Generalized Probabilistic Theories

Joonwoo Bae, Dai-Gyoung Kim and Leong-Chuan Kwek

1 Department of Applied Mathematics, Hanyang University (ERICA), 55 Hanyangdaehak-ro, Ansan 426-791, Korea
2 Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543, Singapore
3 Institute of Advanced Studies, Nanyang Technological University, 60 Nanyang View, 639673, Singapore
4 National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, 637616, Singapore
5 MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, UMI 3654, 117543, Singapore
* Author to whom correspondence should be addressed.
Entropy 2016, 18(2), 39; https://doi.org/10.3390/e18020039
Submission received: 30 November 2015 / Revised: 12 January 2016 / Accepted: 20 January 2016 / Published: 26 January 2016
(This article belongs to the Section Quantum Information)

Abstract: We consider optimal state discrimination in a general convex operational framework, so-called generalized probabilistic theories (GPTs), and present a general method of optimal discrimination by applying the complementarity problem from convex optimization. The method exploits the convex geometry of states, but not other detailed conditions or relations of states and effects. We also show that two properties of optimal quantum state discrimination hold for GPTs in general: (i) no measurement sometimes gives optimal discrimination, and (ii) optimal measurement is not unique.
PACS:
03.65.Ta; 02.50.-r; 03.65.Ca; 03.67.-a

1. Introduction

Suppose that there is a party, say Alice, who prepares her system in a particular state. The state is chosen from a set of states that has been publicly declared. The system is then given to the other party, called Bob, who applies a measurement to determine which of the possible states has been prepared. This scenario defines the problem of optimal state discrimination, which seeks the guessing probability, i.e., the maximum probability with which Bob can correctly guess the state prepared by Alice, as well as the optimal measurement that achieves this probability. Optimal state discrimination shows that there is a fundamental limit on the distinguishability of systems. The guessing probability constitutes one of the most fundamental measures in information theory, with deep connections to applications in quantum information processing [1,2,3].
Generalized probabilistic theories (GPTs) capture the formalism of the convex operational framework, in which the operational significance of states, effects, and dynamics can be identified and characterized [4,5,6]; see also a recent review [7]. States are elements of a convex set, effects are linear functionals that map states to probabilities, and dynamics constrains the possible evolution of states. In quantum theory, states correspond to non-negative, unit-trace bounded operators on Hilbert spaces, effects are elements of positive-operator-valued measures that pair with states to yield probabilities, and dynamics is generally described by positive and completely positive maps. GPTs are of fundamental interest, particularly within the foundations of quantum information theory, and they are also useful for identifying specific properties of states or effects that have operational significance. For instance, in quantum theory, the fact that quantum states cannot be perfectly cloned might be attributed to properties of Hilbert spaces, e.g., the non-orthogonality of state vectors. However, the no-cloning theorem does not rely on the structure of Hilbert spaces: in fact, GPTs that violate Bell inequalities can also incorporate the no-cloning theorem [8].
Recently, optimal state discrimination in GPTs has been considered, and it has been shown to be tightly connected to ensemble steering of states and the no-signaling principle [9]. Specifically, in a GPT where ensemble steering is possible, the no-signaling principle can determine optimal state discrimination. This also holds true in quantum theory, where the no-signaling principle elucidates the relation between optimal state discrimination and quantum cloning [10]. Given that ensemble steering does not by itself single out quantum theory [11], the result remains valid beyond quantum theory as long as ensemble steering is allowed in a theory. That is, GPTs are a useful theoretical tool for finding operational relations that may play a key role in quantum information applications [7].
In this work, we investigate general properties of optimal state discrimination in GPTs and present a method of optimal state discrimination based on the convex geometry of a state space. After briefly introducing the framework of GPTs and optimal state discrimination, we formalize optimal state discrimination within the convex optimization framework. We show that the primal and dual problems return identical results, and we then formulate the problem as a complementarity problem, which generalizes the two optimization problems. This allows us to derive a geometric method of state discrimination. We consider an example of GPTs, the polygon states, and apply the geometric formulation to their optimal discrimination. We identify properties that optimal quantum state discrimination shares with GPTs: (i) optimal measurement is not unique in general, and (ii) no measurement can sometimes give optimal state discrimination.
The present paper is structured as follows. We first review the framework of GPTs and optimal state discrimination, and then formulate optimal state discrimination in the convex optimization framework. We show that the primal and dual problems yield the same solution, owing to the strong duality of the problem. We then apply the complementarity problem, which generalizes the primal and dual problems, and derive the method of optimal state discrimination. The polygon systems are considered as examples of GPTs, and we apply the method to the optimal discrimination of polygon states.

2. Optimal State Discrimination in GPTs

We briefly summarise GPTs [4,5,6] and formulate optimal state discrimination as a convex optimization problem. In particular, we apply the complementarity problem and then present a method of optimal discrimination based on the convex geometry of states.

2.1. Generalized Probabilistic Theories

As mentioned, a GPT contains states and effects such that they produce probabilities. Any convex set can be a state space. A set of states, denoted by $\Omega$, consists of all possible states that a system can be prepared in. Any probabilistic mixture of states is also a state, i.e., $p w_1 + (1-p) w_2 \in \Omega$ for $w_1, w_2 \in \Omega$ and probability $p$, and thus the set is convex. A general mapping from states to probabilities is described by effects, linear functionals $\Omega \to [0,1]$. A measurement, denoted by $s$, is described by a set of effects $E^{(s)} = \{e_x^{(s)}\}_{x=1}^{N}$, with which the probability of obtaining outcome $x$ for measurement $s$ on state $w$ is $p(x|s) = e_x^{(s)}[w]$. A unit effect $u$ is introduced so that states are normalized, $u[w] = 1$ for all $w \in \Omega$; it corresponds to the certain event that a measurement produces some outcome. Thus, for any measurement $s$, it holds that $\sum_x e_x^{(s)} = u$. As effects are dual to the state space, the set of effects is also convex.
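To make these definitions concrete, here is a minimal numerical sketch, assuming the three-dimensional vector representation used for the polygon systems of Section 3: states and effects are vectors in $\mathbb{R}^3$, and an effect $e$ maps a state $w$ to the probability $e[w]$ via the Euclidean inner product. The helper names are ours, for illustration only.

```python
import numpy as np

# Toy GPT in a vector representation: states and effects are vectors in R^3,
# and the pairing e[w] is the Euclidean inner product (as for the polygon
# systems of Section 3).
u = np.array([0.0, 0.0, 1.0])   # unit effect: u[w] = 1 for every state

def prob(effect, state):
    """Probability e[w] of the outcome associated with `effect` on `state`."""
    return float(effect @ state)

def is_measurement(effects, extreme_states, tol=1e-9):
    """A set of effects forms a measurement if every effect yields a
    probability in [0, 1] on all (extreme) states and the effects sum to u."""
    probs = np.array([[prob(e, w) for w in extreme_states] for e in effects])
    return (probs.min() >= -tol and probs.max() <= 1 + tol
            and np.allclose(np.sum(effects, axis=0), u, atol=tol))
```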

2.2. State Discrimination in Convex Optimization

Optimal state discrimination in GPTs can be described as a game between two parties, Alice and Bob, as follows. Suppose that they have agreed on a set of N states in advance; Alice then prepares a system in one of the N states with some probability and gives it to Bob. The a priori probabilities are also publicly known. Given that the set of states and the a priori probabilities are known, Bob applies a measurement and attempts to guess which state has been prepared by Alice. If he makes a correct guess, he scores 1, and 0 otherwise. The goal is to maximize the average score by optimizing over measurements.
Let us label the $N$ states by $\{w_x\}_{x=1}^{N}$ and their prior probabilities by $\{q_x\}_{x=1}^{N}$, so that together they can be expressed as $\{q_x, w_x\}_{x=1}^{N}$. Bob seeks an optimal measurement $\{e_x\}_{x=1}^{N}$ fulfilling the condition $\sum_x e_x = u$, such that he guesses $w_x$ upon obtaining the outcome of effect $e_x$. Let $p_{B|A}(x|y) = e_x[w_y]$ denote the probability that Bob guesses $w_x$ from effect $e_x$ when Alice has prepared the state $w_y$. Optimal state discrimination determines the guessing probability, the maximum probability that Bob guesses correctly on average:
$$p_{\text{guess}} := \max \sum_{x=1}^{N} q_x\, p_{B|A}(x|x) = \max \sum_{x=1}^{N} q_x\, e_x[w_x], \qquad (1)$$
where the maximization runs over all measurements $\{e_x\}_{x=1}^{N}$. Note that GPTs are generally not self-dual; that is, the spaces of states and effects are in general not isomorphic [12].

A Convex Optimization Framework

We recall that the state space Ω is convex, and so is its dual, the space of effects, leading naturally to the following optimization problem:
$$\max \sum_{x=1}^{N} q_x\, e_x[w_x] \quad \text{subject to} \quad e_x \geq 0 \;\; \forall x, \quad \sum_x e_x = u,$$
where by $e_x \geq 0$ it is meant that $e_x[w] \geq 0$ for all $w \in \Omega$. Note that the above problem is feasible, as the set of parameters satisfying the constraints is not empty: a trivial feasible point is $e_x = u$ for a single $x$ and $e_y = 0$ for all $y \neq x$. For convenience, we follow the notation in [13] and rewrite the maximization above as a minimization,
$$\min \; f(\{e_x\}_{x=1}^{N}) = -\sum_{x=1}^{N} q_x\, e_x[w_x] \quad \text{subject to} \quad e_x \geq 0 \;\; \forall x, \quad \sum_x e_x = u.$$
It is then straightforward to derive the dual problem. The Lagrangian reads
$$L(\{e_x\}_{x=1}^{N}, \{r_x, d_x\}_{x=1}^{N}, K) = f(\{e_x\}_{x=1}^{N}) - \sum_x r_x\, e_x[d_x] + \Big(\sum_x e_x - u\Big)[K] = -\sum_x e_x[q_x w_x + r_x d_x - K] - u[K],$$
where $\{r_x, d_x\}_{x=1}^{N}$ and $K$ are dual parameters; $\{r_x\}_{x=1}^{N}$ are non-negative constants and $\{d_x\}_{x=1}^{N}$ are normalized states. The dual problem is obtained by solving
$$g(\{r_x, d_x\}_{x=1}^{N}, K) = \min_{\{e_x\}_{x=1}^{N}} L(\{e_x\}_{x=1}^{N}, \{r_x, d_x\}_{x=1}^{N}, K).$$
The minimization above gives $-u[K]$ if $K = q_x w_x + r_x d_x$ for each $x$, and $-\infty$ otherwise. Thus, we have $r_x d_x = K - q_x w_x$ for each $x$. Since $r_x d_x$ is an (unnormalized) state, it is positive, that is, $e[r_x d_x] \geq 0$ for all effects $e$; we write this as $K \geq q_x w_x$ for each $x$. The dual problem is thus
$$\max \; -u[K] \quad \text{subject to} \quad K \geq q_x w_x \;\; \forall x,$$
or, equivalently,
$$\min \; u[K] \quad \text{subject to} \quad K \geq q_x w_x \;\; \forall x.$$
Here the inequality denotes an order relation in the convex cone determined by effects, that is, $e[K - q_x w_x] \geq 0$ for all effects $e$. Note that the dual problem is also feasible: a trivial feasible point is $K = \sum_x q_x w_x$.

2.3. Constraint Qualification

Recall that, in general, the primal and dual problems do not return identical solutions; there can be a finite gap between the two. For the state discrimination problem above, however, both problems are feasible, and in fact strictly feasible, so that Slater's constraint qualification applies and strong duality holds. Hence, there is no gap between the solutions; in other words, one can obtain the optimal solution by solving either the primal or the dual problem.
In addition, strong duality implies that the list of optimality conditions, the so-called Karush-Kuhn-Tucker (KKT) conditions, is also sufficient. That is, parameters satisfying the KKT conditions provide optimal solutions to both the primal and the dual problems. For the optimization problems above, the KKT conditions, together with the constraints of the primal and dual problems, are as follows:
$$K = q_x w_x + r_x d_x \;\; \forall x, \qquad e_x[r_x d_x] = 0 \;\; \forall x.$$
The former is called Lagrangian stability, and the latter complementary slackness. Strong duality also guarantees that there exist dual parameters $K$ and $\{r_x, d_x\}_{x=1}^{N}$ fulfilling the KKT conditions, and those parameters give optimal solutions to the primal and dual problems. The optimal effects are characterized by the complementary slackness above, which also shows the existence of optimal effects, or observables, in a GPT. All of this follows from the fact that the state space is convex. For comparison, the corresponding formulation of minimum-error quantum state discrimination is given in [14]; see also its application to various figures of merit in [15].
To summarize, the mere fact that state and effect spaces are convex allows us to formalize the discrimination problem within the convex optimization framework [13]. This provides a general approach to finding optimal discrimination in GPTs. For states $\{q_x, w_x\}_{x=1}^{N}$, we take the form in Equation (1) as the primal problem, denoted by $p^*$, and derive its dual $d^*$, as follows:
$$\text{Primal}: \quad p^* = \max \Big\{ \sum_{x=1}^{N} q_x\, e_x[w_x] \;\Big|\; e_x \geq 0 \;\forall x, \;\; \sum_{x=1}^{N} e_x = u \Big\}, \qquad (4)$$
$$\text{Dual}: \quad d^* = \min \big\{ u[K] \;\big|\; K \geq q_x w_x, \;\; x = 1, \ldots, N \big\}, \qquad (5)$$
where the inequalities denote order relations in the convex structure: by $e_x \geq 0$ it is meant that $e_x[w] \geq 0$ for all $w \in \Omega$, and by $K \geq q_x w_x$ that $e[K - q_x w_x] \geq 0$ for all effects $e$.
For the primal and dual problems in Equations (4) and (5), the property called strong duality holds true. This means that the two problems have an identical solution, i.e., $p^* = d^*$, and therefore one can obtain the guessing probability by solving either of them. Strong duality follows from Slater's constraint qualification in convex optimization: a sufficient condition is strict feasibility of either the primal or the dual problem, that is, the existence of a strictly feasible point. For instance, the primal parameters $\{e_x = u/N\}_{x=1}^{N}$ form such a point, since $e_x[w_y] > 0$ for all $x, y$ and $\sum_x e_x = u$. It follows that the guessing probability can be obtained from either the primal or the dual problem.
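As a concrete illustration, when the state space is the convex hull of finitely many extreme states, the positivity constraint $e_x \geq 0$ need only be imposed at those extreme points, and the primal problem in Equation (4) becomes a finite linear program. The sketch below (our reconstruction, not code from the paper) solves the primal for the $N = 3$ polygon states of Section 3 with uniform priors, using scipy, and recovers the guessing probability $p_{\text{guess}} = 1$ found in Section 3.1.

```python
import numpy as np
from scipy.optimize import linprog

# Primal problem of Equation (4) as a linear program for the N = 3 polygon
# (triangle) states with uniform priors q_x = 1/3.
N = 3
r = np.cos(np.pi / N) ** (-0.5)                    # r_3 = sqrt(2)
W = np.array([[r * np.cos(2 * np.pi * x / N),
               r * np.sin(2 * np.pi * x / N), 1.0] for x in range(N)])
u = np.array([0.0, 0.0, 1.0])
q = np.full(N, 1.0 / N)

# Variables: the N effects stacked into one vector of length 3N.
c = -(q[:, None] * W).ravel()                      # maximize sum_x q_x e_x[w_x]

# Positivity at every extreme state: -e_x . w_y <= 0 for all x, y.
A_ub = np.zeros((N * N, 3 * N))
for x in range(N):
    for y in range(N):
        A_ub[x * N + y, 3 * x:3 * x + 3] = -W[y]
b_ub = np.zeros(N * N)

# Completeness: sum_x e_x = u (three scalar equations).
A_eq = np.tile(np.eye(3), N)
b_eq = u

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (3 * N))     # effect components may be negative
print("p_guess =", -res.fun)                       # 1.0, as found in Section 3.1
```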

2.4. The Complementarity Problem

In convex optimization, there is another approach, the complementarity problem, which generalizes the primal and dual problems. It collects the optimality conditions and analyzes them directly. Consequently, the complementarity problem deals with both primal and dual parameters in Equations (4) and (5) and finds all optimal parameters. In this sense, the approach is generally not more efficient than solving the primal or dual problem. Its advantage, rather, lies in the fact that generic structures present in an optimization problem are exposed and can be exploited.
The optimality conditions for optimal state discrimination in Equations (4) and (5) are summarized by the so-called Karush-Kuhn-Tucker (KKT) conditions, which comprise the constraints listed in Equations (4) and (5) together with the following:
$$(\text{Symmetry parameter}) \qquad K = q_x w_x + r_x d_x \;\; \forall x, \qquad (6)$$
$$(\text{Orthogonality}) \qquad e_x[r_x d_x] = 0 \;\; \forall x, \qquad (7)$$
where $r_x \in [0,1]$ for all $x$, and $\{d_x\}_{x=1}^{N}$ are normalized states, i.e., $u[d_x] = 1$. We call $\{d_x\}_{x=1}^{N}$ the complementary states; together with the given states they construct the symmetry operator $K$. The two conditions above are explained in terms of the convex geometry of the given states, as follows.
  • The first condition, the symmetry parameter, follows from Lagrangian stability and shows that for any discrimination problem $\{q_x, w_x\}_{x=1}^{N}$ there exists a single parameter $K$ that can be decomposed in $N$ different ways into a given state and a complementary state $\{r_x, d_x\}_{x=1}^{N}$. The second condition, Equation (7), which follows from complementary slackness, characterizes the optimal effects through an orthogonality relation between complementary states and optimal effects. These generalize the optimality conditions from the quantum case to all GPTs; see also the various forms of optimality conditions in the quantum case [14].
  • Primal and dual parameters satisfying the KKT conditions are automatically optimal parameters that provide the solutions of the primal and dual problems; since strong duality holds, both problems attain the same value. Conversely, the fact that strong duality holds in Equations (4) and (5) implies the existence of optimal parameters that satisfy the KKT conditions and give the guessing probability in Equation (1).
Note that a similar approach was taken in [16] in the form of the so-called Helstrom family, by generalizing examples from the quantum case to GPTs. For quantum state discrimination, the approach based on the complementarity problem was first applied in [17,18] to two qubit states, and was then generalized to a pair of arbitrary states in GPTs [19]. When this is extended to an arbitrary number of states in GPTs, however, the existence of the symmetry operator and the orthogonality conditions has only been assumed [16]: the cases for which the optimal parameters exist are called Helstrom families. Here, we apply the complementarity problem, which immediately proves the existence of the optimal parameters.
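Because the KKT conditions are necessary and sufficient here, a candidate set of parameters can be certified as optimal without re-running any optimization. Below is a minimal sketch of such a check in the finite-dimensional vector representation used for the polygon systems of Section 3; the function name and tolerance handling are ours.

```python
import numpy as np

def kkt_certificate(q, states, comp_weights, comp_states, effects, u, tol=1e-9):
    """Certify optimality via Equations (6)-(7): K = q_x w_x + r_x d_x must be
    the same for every x, each effect must be orthogonal to its complementary
    state, and the effects must sum to the unit effect u.
    Returns (is_optimal, guessing probability u[K])."""
    Ks = [qx * wx + rx * dx
          for qx, wx, rx, dx in zip(q, states, comp_weights, comp_states)]
    stable = all(np.allclose(K, Ks[0], atol=tol) for K in Ks)          # Eq. (6)
    slack = all(abs(rx * (ex @ dx)) < tol                              # Eq. (7)
                for rx, dx, ex in zip(comp_weights, comp_states, effects))
    complete = np.allclose(np.sum(effects, axis=0), u, atol=tol)
    return stable and slack and complete, float(np.dot(u, Ks[0]))
```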

2.5. The Geometric Method and the General Form of the Guessing Probability

We are now ready to present a geometric method for solving minimum-error state discrimination in GPTs, based on the complementarity problem. We first observe that, in the optimality conditions of Equations (6) and (7), the constraints on states and effects are separated. The symmetry parameter $K$ is characterized on the state space and gives the guessing probability, see Equation (5); that is,
$$p_{\text{guess}} = u[K] = q_x + r_x. \qquad (8)$$
This means that one can find the guessing probability from the state space alone. To do so, one finds the symmetry operator $K$ that can be decomposed into a given state $q_x w_x$ and a complementary state $r_x d_x$ within the state space; equivalently, one searches for complementary states $\{r_x, d_x\}_{x=1}^{N}$ fulfilling Equation (6) on the state space.
Let us introduce the convex polytope $P(\{q_x, w_x\}_{x=1}^{N})$ of the given states in the state space: each vertex of the polytope corresponds to $q_x w_x$ for $x = 1, \ldots, N$. Then the polytope of complementary states, $P(\{r_x, d_x\}_{x=1}^{N})$, is congruent to $P(\{q_x, w_x\}_{x=1}^{N})$ in the state space: from Equation (6), the following holds,
$$q_x w_x - q_y w_y = r_y d_y - r_x d_x \quad \text{for all } x, y, \qquad (9)$$
which shows that corresponding edges of the two polytopes $P(\{q_x, w_x\}_{x=1}^{N})$ and $P(\{r_x, d_x\}_{x=1}^{N})$ have equal lengths and are anti-parallel. From the convex geometry of the state space, one can then find the polytope of complementary states, and hence the complementary states themselves, by placing the two congruent polytopes so that the condition in Equation (6) holds. Once the complementary states are obtained, the optimal effects follow from the orthogonality relation in Equation (7).
Finally, let us provide a general form of the guessing probability in GPTs when the a priori probabilities are equal, i.e., $q_x = 1/N$ for all $x$. In this case, the guessing probability takes a simpler form with a clear meaning in the convex geometry. First, from Equation (8) we have $p_{\text{guess}} = q_x + r_x$ for any $x$. Since $q_x = 1/N$, we have $r_x = r_y$ for all $x, y$. Writing $r := r_x$ for all $x$, the guessing probability has the form
$$p_{\text{guess}} = \frac{1}{N} + r, \quad \text{with} \quad r = \frac{\big\| \tfrac{1}{N} w_x - \tfrac{1}{N} w_y \big\|}{\| d_x - d_y \|}, \qquad (10)$$
where the expression for $r$ follows from the condition in Equation (9), with a distance measure $\|\cdot\|$ defined on the state space. The parameter $r$ has the meaning of a ratio between the two polytopes, $P(\{1/N, w_x\}_{x=1}^{N})$ of the given states and $P(\{d_x\}_{x=1}^{N})$ of the complementary states.

3. Examples: Polygon States

We illustrate the method of optimal state discrimination in GPTs with the example of the polygon systems introduced in [12]. We consider the cases of three and four states and apply the geometric method of optimal state discrimination; the extension to an arbitrary number of states is straightforward. The polygon system is in general given by $n$ states $\{w_x\}_{x=0}^{n-1}$,
$$w_x = \begin{pmatrix} r_n \cos\frac{2\pi x}{n} \\ r_n \sin\frac{2\pi x}{n} \\ 1 \end{pmatrix},$$
where $r_n = \cos^{-1/2}(\pi/n)$. The effects $\{f_x\}_{x=0}^{n-1}$ are given as follows:
$$\text{for even } n: \;\; f_x = \frac{1}{2}\begin{pmatrix} r_n \cos\frac{(2x-1)\pi}{n} \\ r_n \sin\frac{(2x-1)\pi}{n} \\ 1 \end{pmatrix}, \qquad \text{for odd } n: \;\; f_x = \frac{1}{1+r_n^2}\begin{pmatrix} r_n \cos\frac{2\pi x}{n} \\ r_n \sin\frac{2\pi x}{n} \\ 1 \end{pmatrix},$$
where the unit effect is $u = (0, 0, 1)^T$, and the map from states and effects to probabilities is given by the Euclidean inner product.
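The following sketch, assuming this Euclidean-inner-product pairing, generates the polygon states and effects and checks two facts used below: for $n = 3$ the three effects sum to the unit effect, while for $n = 4$ the pairs $\{f_0, f_2\}$ and $\{f_1, f_3\}$ each form a two-outcome measurement. The helper name is ours.

```python
import numpy as np

def polygon(n):
    """Polygon states and effects with r_n = cos(pi/n)**(-1/2)."""
    r = np.cos(np.pi / n) ** (-0.5)
    states = np.array([[r * np.cos(2 * np.pi * x / n),
                        r * np.sin(2 * np.pi * x / n), 1.0]
                       for x in range(n)])
    if n % 2 == 0:   # even n: effect directions sit between the states
        effects = np.array([[0.5 * r * np.cos((2 * x - 1) * np.pi / n),
                             0.5 * r * np.sin((2 * x - 1) * np.pi / n), 0.5]
                            for x in range(n)])
    else:            # odd n: effects are rescaled states
        effects = states / (1.0 + r ** 2)
    return states, effects

u = np.array([0.0, 0.0, 1.0])
w3, f3 = polygon(3)
print(np.allclose(f3.sum(axis=0), u))     # True: f0 + f1 + f2 = u
w4, f4 = polygon(4)
print(np.allclose(f4[0] + f4[2], u))      # True: {f0, f2} is a measurement
print(np.round(f4 @ w4.T, 6))             # table of probabilities f_x[w_y]
```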

3.1. A Case of N = 3

Let us first consider the case N = 3 , in which states and effects are given as
$$w_0 = \begin{pmatrix} \sqrt{2} \\ 0 \\ 1 \end{pmatrix}, \quad w_1 = \begin{pmatrix} -\sqrt{2}/2 \\ \sqrt{6}/2 \\ 1 \end{pmatrix}, \quad w_2 = \begin{pmatrix} -\sqrt{2}/2 \\ -\sqrt{6}/2 \\ 1 \end{pmatrix}, \qquad (12)$$
$$f_0 = \frac{1}{3}\begin{pmatrix} \sqrt{2} \\ 0 \\ 1 \end{pmatrix}, \quad f_1 = \frac{1}{3}\begin{pmatrix} -\sqrt{2}/2 \\ \sqrt{6}/2 \\ 1 \end{pmatrix}, \quad f_2 = \frac{1}{3}\begin{pmatrix} -\sqrt{2}/2 \\ -\sqrt{6}/2 \\ 1 \end{pmatrix}. \qquad (13)$$
One can easily check that $f_0 + f_1 + f_2 = u$. We consider optimal state discrimination for $\{1/3, w_x\}_{x=0}^{2}$. Applying the geometric method to the polygon system for $N = 3$ (see Figure 1), one finds that
$$K = \frac{1}{3} w_x + \frac{2}{3} d_x, \quad \text{with} \quad d_x = \frac{1}{2}\big(w_{x+1} + w_{x+2}\big),$$
where $x = 0, 1, 2$ and the addition in the subscripts is modulo 3.
Figure 1. The polygon states for N = 3, see also Equation (12). The three states form a regular triangle on the plane z = 1. The effects in Equation (13) are proportional to the states.
The optimal measurement is therefore $\{f_x\}_{x=0}^{2}$, since $f_x[d_x] = 0$, fulfilling the orthogonality condition in Equation (7). Thus, we have $p(x|x) = f_x[w_x] = 1$ for $x = 0, 1, 2$, and the guessing probability is found to be
$$p_{\text{guess}} = \frac{1}{3}\sum_{x=0}^{2} p(x|x) = 1,$$
which corresponds to perfect discrimination.
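The claimed decomposition can be verified numerically. Here is a short sketch, in the vector representation above, confirming that $K$ is independent of $x$, that the orthogonality relation $f_x[d_x] = 0$ holds, and that $u[K] = 1$.

```python
import numpy as np

# Verify the N = 3 solution: K = (1/3) w_x + (2/3) d_x with
# d_x = (w_{x+1} + w_{x+2}) / 2, and f_x[d_x] = 0 for the effects f_x = w_x / 3.
r = 2 ** 0.5                                   # r_3 = cos(pi/3)**(-1/2)
w = np.array([[r * np.cos(2 * np.pi * x / 3),
               r * np.sin(2 * np.pi * x / 3), 1.0] for x in range(3)])
f = w / 3
d = np.array([(w[(x + 1) % 3] + w[(x + 2) % 3]) / 2 for x in range(3)])

K = w / 3 + 2 * d / 3                          # one candidate K per x
print(np.allclose(K, K[0]))                    # True: the same K for every x
print(np.allclose([f[x] @ d[x] for x in range(3)], 0))   # orthogonality, Eq. (7)
print("p_guess = u[K] =", round(float(K[0][2]), 6))      # 1.0: perfect discrimination
```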

3.2. A Case of N = 4

We next consider the case N = 4 , in which states and effects are given as
$$w_0 = \begin{pmatrix} r_4 \\ 0 \\ 1 \end{pmatrix}, \quad w_1 = \begin{pmatrix} 0 \\ r_4 \\ 1 \end{pmatrix}, \quad w_2 = \begin{pmatrix} -r_4 \\ 0 \\ 1 \end{pmatrix}, \quad w_3 = \begin{pmatrix} 0 \\ -r_4 \\ 1 \end{pmatrix}, \qquad (14)$$
$$f_0 = \frac{1}{2}\begin{pmatrix} r_4/\sqrt{2} \\ -r_4/\sqrt{2} \\ 1 \end{pmatrix}, \quad f_1 = \frac{1}{2}\begin{pmatrix} r_4/\sqrt{2} \\ r_4/\sqrt{2} \\ 1 \end{pmatrix}, \quad f_2 = \frac{1}{2}\begin{pmatrix} -r_4/\sqrt{2} \\ r_4/\sqrt{2} \\ 1 \end{pmatrix}, \quad f_3 = \frac{1}{2}\begin{pmatrix} -r_4/\sqrt{2} \\ -r_4/\sqrt{2} \\ 1 \end{pmatrix}. \qquad (15)$$
For the four states $\{1/4, w_x\}_{x=0}^{3}$, the goal is to find the guessing probability and an optimal measurement. Exploiting the convex geometry (see Figure 2), the polytope $P(\{1/4, w_x\}_{x=0}^{3})$ forms a square, from which it follows from Equation (10) that $r = 1/4$. More precisely, from the state-space geometry one can see that
$$K = \frac{1}{4} w_x + \frac{1}{4} d_x, \quad \text{where} \quad d_x = w_{x+2} \;\; (\text{mod } 4), \;\; \text{for } x = 0, 1, 2, 3,$$
so the complementary states are obtained as $d_x = w_{x+2}$. Thus, from Equation (8), the guessing probability is
$$p_{\text{guess}} = u[K] = \frac{1}{2}.$$
Figure 2. The polygon states for N = 4, see also Equation (14). The four states form a square on the plane z = 1. The effects in Equation (15) are located on the plane z = 1/2.
Note that these four states are analogous to a quantum case, two pairs of orthogonal states: for those four quantum states, the guessing probability is also $1/2$ [14].
An optimal measurement is obtained using the orthogonality condition in Equation (7). In fact, the optimal measurement is not unique, and each of the following sets of effects attains the guessing probability (see the numerical check in the sketch after this list).
  • (i) $\{f_x/2\}_{x=0}^{3}$: In this case, we have
$$p(x|x) = \frac{1}{2} f_x[w_x] = \frac{1}{2}, \quad \text{and thus} \quad p_{\text{guess}} = \frac{1}{4}\sum_{x=0}^{3} p(x|x) = \frac{1}{2}.$$
    One can also easily check the orthogonality condition $\frac{1}{2} f_x[d_x] = 0$ and that $\sum_x f_x/2 = u$.
  • (ii) $\{f_0, f_2\}$: In this case, the outcome of effect $f_0$ indicates that the given state is either $w_0$ or $w_3$, and that of $f_2$ indicates $w_1$ or $w_2$. This is because, from the orthogonality condition in Equation (7), it holds that
$$f_0[d_0] = f_0[w_2] = 0, \quad \text{and} \quad f_0[d_3] = f_0[w_1] = 0.$$
    Upon obtaining outcome $f_0$ ($f_2$), one guesses at random between $w_0$ and $w_3$ ($w_1$ and $w_2$), and the guessing probability $1/2$ is obtained.
  • (iii) $\{f_1, f_3\}$: This case works analogously to the previous one: the outcome of effect $f_1$ indicates that the given state is either $w_0$ or $w_1$, and that of $f_3$ indicates $w_2$ or $w_3$.
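To make the non-uniqueness explicit, the following sketch (in the same vector representation) evaluates the average success probability of the three strategies above and confirms that each attains $1/2$.

```python
import numpy as np

# The three optimal measurements of Section 3.2 for the four polygon states
# with uniform priors q_x = 1/4.
r = 2 ** 0.25                                  # r_4 = cos(pi/4)**(-1/2)
w = np.array([[r, 0, 1], [0, r, 1], [-r, 0, 1], [0, -r, 1]], float)
f = np.array([[0.5 * r * np.cos((2 * x - 1) * np.pi / 4),
               0.5 * r * np.sin((2 * x - 1) * np.pi / 4), 0.5]
              for x in range(4)])

# (i) four effects {f_x / 2}: guess w_x on outcome x.
p1 = sum(0.25 * (f[x] / 2) @ w[x] for x in range(4))
# (ii) {f_0, f_2}: on f_0 guess w_0 or w_3 at random; on f_2 guess w_1 or w_2.
p2 = 0.25 * sum(0.5 * f[0] @ w[y] for y in (0, 3)) \
   + 0.25 * sum(0.5 * f[2] @ w[y] for y in (1, 2))
# (iii) {f_1, f_3}: on f_1 guess w_0 or w_1; on f_3 guess w_2 or w_3.
p3 = 0.25 * sum(0.5 * f[1] @ w[y] for y in (0, 1)) \
   + 0.25 * sum(0.5 * f[3] @ w[y] for y in (2, 3))
print(p1, p2, p3)                              # each 0.5
```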
From the optimal measurements shown above, we remark that properties of optimal quantum state discrimination also hold in GPTs. First, the optimal measurement in quantum state discrimination is generally not unique [1], and the example above shows that this also holds in GPTs. Moreover, an optimal measurement for discriminating among N quantum states does not always contain the same number of outcomes, i.e., N POVM elements [14,20]. As shown above, this also holds in GPTs in general.

3.3. When No Measurement Is Optimal

We now show another property that quantum state discrimination shares with GPTs: namely, that no measurement is sometimes optimal for state discrimination. That is, applying no measurement and simply guessing the state from the a priori probabilities gives a guessing probability higher than any other strategy. In the following, we provide an example adapted from the quantum case [20].
Let us consider the four polygon states $\{w_x\}_{x=0}^{3}$ for $N = 4$ above, together with their uniform mixture $w_4 = \sum_{x=0}^{3} w_x/4$. Let $q_x = (1-p)/4$ denote the a priori probabilities of the states $w_x$ for $x = 0, 1, 2, 3$, and $q_4 = p$ that of the state $w_4$. We thus consider optimal state discrimination for $\{q_x, w_x\}_{x=0}^{4}$.
In particular, let us also assume that $p \geq 1/5$. Then one can find the optimal discrimination with the symmetry operator as follows,
$$K = p\, w_4 = \frac{1-p}{4}\, w_x + r_x d_x, \quad \text{for } x = 0, 1, 2, 3, \qquad (16)$$
with constants $\{r_x = (-1 + 5p)/4\}_{x=0}^{3}$. It is then straightforward to find $\{d_x\}_{x=0}^{3}$ such that the equalities in Equation (16) hold true. Note that whenever $p \geq 1/5$, it holds that $r_x d_x \geq 0$. The guessing probability is then simply $p_{\text{guess}} = u[K] = p$, which is attained by always guessing the state $w_4$ according to its a priori probability, without any measurement.
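The bookkeeping of Equation (16) can be checked numerically. The sketch below (ours, at the sample value $p = 0.5$) reconstructs the complementary states from the symmetry operator and confirms that they are normalized and that $u[K] = p$.

```python
import numpy as np

# Equation (16) at p = 0.5: K = p * w_4, with complementary states recovered
# as d_x = (K - q_x w_x) / r_x, where q_x = (1 - p)/4 and r_x = (-1 + 5p)/4.
r4 = 2 ** 0.25
w = np.array([[r4, 0, 1], [0, r4, 1], [-r4, 0, 1], [0, -r4, 1]], float)
w4 = w.mean(axis=0)                      # the uniform mixture, (0, 0, 1)

p = 0.5
q, rx = (1 - p) / 4, (-1 + 5 * p) / 4
K = p * w4
d = np.array([(K - q * wx) / rx for wx in w])
print(np.allclose(d[:, 2], 1.0))         # True: u[d_x] = 1 for all x
print("p_guess =", float(K[2]))          # u[K] = p = 0.5, without measurement
```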

4. Conclusions

Optimal state discrimination is one of the most fundamental tasks in information theory and is connected to information applications. For instance, in the quantum case it is the operational task that corresponds to an information-theoretic measure, the min-entropy [21]. GPTs, in turn, are of theoretical and fundamental interest in that states, effects, and dynamics are identified within a convex operational framework, and their operational significance can be found without the detailed structure of a given theory, e.g., the Hilbert spaces of quantum theory.
In the present work, we have considered optimal state discrimination in GPTs within the convex optimization framework. This generalizes the result in the quantum case, where the optimization runs over Hermitian operators describing quantum states and measurements [14]. Here, we have considered optimal state discrimination without such structure and shown that the results in the quantum case, e.g., see [14], hold in GPTs in general. These include, first, the convex optimization and complementarity-problem formulation, and then the method of optimal state discrimination based on the convex geometry of state spaces. In particular, we have shown with the polygon systems how the method can be applied. We have shown that the following hold in GPTs in general: (i) optimal measurement is not unique, and (ii) no measurement can sometimes give optimal discrimination. These results may be useful in the operational characterization of quantum information processing, and we envisage their usefulness in quantum information applications.

Acknowledgments

This work is supported by the research fund of Hanyang University (HY-2015-259), IITP grant funded by the Korea government (MSIP) (No. R0190-15-2028, PSQKD) and the National Research Foundation & Ministry of Education, Singapore.

Author Contributions

Joonwoo Bae and Leong-Chuan Kwek defined the problem, Joonwoo Bae and Dai-Gyoung Kim developed the technical parts including examples, and Joonwoo Bae, Dai-Gyoung Kim, and Leong-Chuan Kwek contributed to writing the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Helstrom, C.W. Quantum Detection and Estimation Theory. J. Stat. Phys. 1969, 1, 231–252. [Google Scholar] [CrossRef]
  2. Holevo, A.S. Remarks on Optimal Quantum Measurements. Probl. Inf. Transm. 1974, 10, 317–320. [Google Scholar]
  3. Yuen, H.P.; Kennedy, R.S.; Lax, M. Optimum Testing of Multiple Hypotheses in Quantum Detection Theory. IEEE Trans. Inf. Theory 1975, 21, 125–134. [Google Scholar] [CrossRef]
  4. Birkhoff, G.; von Neumann, J. The Logic of Quantum Mechanics. Ann. Math. 1936, 37, 823–843. [Google Scholar] [CrossRef]
  5. Barnum, H.; Barrett, J.; Leifer, M.; Wilce, A. Generalized No-Broadcasting Theorem. Phys. Rev. Lett. 2007, 99, 240501. [Google Scholar] [CrossRef] [PubMed]
  6. Barrett, J. Information processing in generalized probabilistic theories. Phys. Rev. A 2007, 75, 032304. [Google Scholar] [CrossRef]
  7. Janotta, P.; Hinrichsen, H. Generalized Probability Theories: What determines the structure of quantum theory? J. Phys. A Math. Theor. 2014, 47, 323001. [Google Scholar] [CrossRef]
  8. Masanes, Ll.; Acín, A.; Gisin, N. General Properties of Nonsignaling Theories. Phys. Rev. A 2006, 73, 012112. [Google Scholar] [CrossRef]
  9. Bae, J. Distinguishability, Ensemble Steering, and the No-Signaling Principle. EPTCS 2014, 171, 26–32. [Google Scholar] [CrossRef]
  10. Bae, J.; Hwang, W.-Y.; Han, Y.-D. No-Signaling Principle Can Determine Optimal Quantum State Discrimination. Phys. Rev. Lett. 2011, 107, 170403. [Google Scholar] [CrossRef] [PubMed]
  11. Barnum, H.; Gaebler, C.P.; Wilce, A. Ensemble Steering, Weak Self-Duality, and the Structure of Probabilistic Theories. arXiv 2010, arXiv:0912.5532. [Google Scholar]
  12. Janotta, P.; Gogolin, C.; Barrett, J.; Brunner, N. Limits on nonlocal correlations from the structure of the local state space. New J. Phys. 2011, 13, 063024. [Google Scholar] [CrossRef]
  13. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  14. Bae, J. Structure of minimum-error quantum state discrimination. New J. Phys. 2013, 15, 073037. [Google Scholar] [CrossRef]
  15. Nakahira, K.; Kato, K.; Usuda, T.S. Generalized quantum state discrimination problems. Phys. Rev. A 2015, 91, 052304. [Google Scholar] [CrossRef]
  16. Nuida, K.; Kimura, G.; Miyadera, T. Optimal Observables for Minimum-Error State Discrimination in General Probabilistic Theories. J. Math. Phys. 2010, 51, 093505. [Google Scholar] [CrossRef]
  17. Hwang, W.-Y. Helstrom theorem from the no-signaling condition. Phys. Rev. A 2005, 71, 062315. [Google Scholar] [CrossRef]
  18. Bae, J.; Lee, J.-W.; Kim, J.; Hwang, W.-Y. Optimality of minimum-error discrimination by the no-signalling condition. Phys. Rev. A 2008, 78, 022335. [Google Scholar] [CrossRef]
  19. Kimura, G.; Miyadera, T.; Imai, H. Optimal State Discrimination in General Probabilistic Theories. Phys. Rev. A 2009, 79, 062306. [Google Scholar] [CrossRef]
  20. Hunter, K. Measurement does not always aid state discrimination. Phys. Rev. A 2003, 68, 012306. [Google Scholar] [CrossRef]
  21. Koenig, R.; Renner, R.; Schaffner, C. The operational meaning of min- and max-entropy. IEEE Trans. Inf. Theory 2009, 55, 4337–4347. [Google Scholar] [CrossRef]
