1. Goals
Quantum theory is one of the most fascinating and successful constructs in the intellectual history of mankind. It applies to light and matter, from the smallest scales so far explored up to mesoscopic scales. It is also a necessary ingredient for understanding the evolution of the universe, and it has given rise to an impressive number of new technologies. In recent years, it has also been applied to several branches of science that had previously been analyzed by classical means, such as quantum information processing and quantum computing [1,2,3] and quantum games [4,5,6,7].
Our goal here is to demonstrate that the applicability of quantum theory can be extended in one more direction, to the theory of decision making, thus generalizing classical utility theory. This generalization allows us to characterize the behavior of real decision makers who, when making decisions, evaluate not only the utility of the given prospects but are also influenced by their respective attractiveness. We show that the behavioral probabilities of human decision makers can be modeled by quantum probabilities.
We are perfectly aware that the philosophical interpretation of quantum theory still contains a number of unsettled problems. These have triggered active discussions, starting with Einstein’s objection to considering quantum theory as complete, reported in detail in the famous Einstein–Bohr debate [8,9]. The discussions on the completeness of quantum theory stimulated approaches assuming the existence of hidden variables satisfying classical probabilistic laws. The best known of such hidden-variable theories is the De Broglie–Bohm pilot-wave approach [10]. It can be shown that a quantum system can be described as if it were a classical system, by introducing so-called nonlocal contextual hidden variables. However, their number has to be infinite in order to capture the same level of elaboration as their quantum equivalent [11], which makes such an equivalent classical description impractical. Instead, quantum techniques are employed, which is much simpler than dealing with a classical system having an infinite number of unknown, nonlocal hidden variables.
In the present paper, our aim is not to address the open issues associated with the not yet fully resolved interpretation of quantum theory. Instead, our focus is at the technical level: to demonstrate that the mathematics of quantum theory can be applied to describe human decision making.
The intimate connection of quantum laws with human decision making was suggested by Deutsch [12], who attempted to derive the quantum Born rule from the notion of rational preferences of standard classical decision theory. This attempt, however, was criticized by several authors, as summarized by Lewis [13,14].
Here, we consider the inverse problem to that of Deutsch. We do not try to derive quantum rules from classical decision theory, which, we think, is impossible, but we demonstrate that human decision making can be described by laws resembling those of quantum theory that use the Hilbert space formalism.
Our approach is a generalization of quantum decision theory, advanced earlier by the authors [15,16,17,18,19,20,21,22,23,24] for lotteries consisting only of gains, to be applicable to both types of lotteries, with gains as well as with losses. The extension of the approach to the case of lotteries with losses is necessary for developing the associated methods of decision theory taking into account behavioral biases. A behavioral quantum probability is found to be the sum of two terms: the utility factor, quantifying the objective utility of a lottery, and the attraction factor, describing subjective behavioral biases. The utility factor can be found from the minimization of an information functional, resulting in different forms for lotteries with prevailing gains or with prevailing losses. To quantify the behavioral deviation from rationality, we introduce a measure $\bar q$ describing the aggregate deviation from rationality of a population of decision makers for a given set of lotteries. This measure is shown to depend on the level of difficulty in comparing the game lotteries, for which we propose a simple metric. For a given set of choices, the determination of the fraction $\nu$ of difficult choices allows us to propose a prediction for the dependence of $\bar q$ as a function of $\nu$, thus generalizing the zero-information prior $\bar q = 1/4$ that we have derived in previous articles.
An important part of our approach is the use of the principle of minimal information, which is equivalent to the conditional maximization of entropy under additional constraints imposed on the process of decision making. This method is justified by the results of Cox, proving that the Shannon entropy is the natural information measure for probabilistic theories [25,26,27,28].
By analyzing a large set of empirical data, we show that human decision making is well quantified by the quantum approach to calculate behavioral probabilities, even in those cases where classical decision theory fails in principle.
2. Related Literature
The predominant theory describing decision-maker behavior under risk and uncertainty is nowadays the expected utility theory of preferences over uncertain prospects. This theory was axiomatized by von Neumann and Morgenstern [29] and integrated with the theory of subjective probability by Savage [30]. The theory was shown to possess great analytical power by Arrow [31] and Pratt [32] in their work on risk aversion and by Rothschild and Stiglitz [33,34] in their work on comparative risk. Friedman and Savage [35] and Markowitz [36] demonstrated its tremendous flexibility in representing decision makers’ attitudes toward risk. Expected utility theory has provided solid foundations for the theory of games, the theory of investment and capital markets, the theory of search and other branches of economics, finance and management [37,38,39,40,41,42,43,44,45,46].
However, a number of economists and psychologists have uncovered a growing body of evidence that individuals do not always conform to the prescriptions of expected utility theory and indeed very often depart from the theory in a predictable and systematic way [47]. Many researchers, starting with the works of Allais [48], Edwards [49,50] and Ellsberg [51], and continuing through the present, have experimentally confirmed pronounced and systematic deviations from the predictions of expected utility theory, leading to the appearance of many paradoxes. A large literature on this topic can be found in the reviews by Camerer et al. [52] and Machina [53].
Because of the large number of paradoxes associated with classical decision making, there have been many attempts to modify the expected utility approach, which have been classified as non-expected utility theories. There are a number of such non-expected utility theories, among which we may mention a few of the best known ones: prospect theory [49,54], weighted-utility theory [55,56,57], regret theory [58], optimism-pessimism theory [59], dual-utility theory [60], ordinal-independence theory [61] and quadratic-probability theory [62]. More detailed information can be found in the articles [53,63,64].
However, as has been shown by Safra and Segal [65], none of the non-expected utility theories can explain all of these paradoxes. The best that could be achieved is a kind of fitting for interpreting just one or, in the best case, a few paradoxes, while the other paradoxes remain unexplained. In addition, spoiling the structure of expected utility theory results in the appearance of several complications and inconsistencies. As concluded in the detailed analysis of Al-Najjar and Weinstein [66,67], any variation of the classical expected utility theory “ends up creating more paradoxes and inconsistencies than it resolves”.
The idea that the functioning of the human brain could be described by the techniques of quantum theory was advanced by Bohr [8,68], one of the founders of quantum theory. von Neumann, who is both a founding father of game theory and of expected utility theory, on the one hand, and the developer of the mathematical theory of quantum mechanics, on the other, mentioned that the quantum theory of measurement can be interpreted as decision theory [69].
The main difference between the classical and quantum techniques is the way of calculating the probability of events. As soon as one accepts the quantum way of defining the concept of probability, the latter, generally, becomes nonadditive. Additionally, one immediately meets such quantum effects as coherence and interference.
The possibility of employing the techniques of quantum theory in several branches of science that had previously been analyzed by classical means is nowadays widely known. As examples, we can recall quantum game theory [4,5,6,7] and quantum information processing and quantum computing [1,2,3].
After the works by Bohr [8,68] and von Neumann [69], there have been a number of discussions on the possibility of applying quantum rules to characterize the process of human decision making. These discussions have been summarized in the books [70,71,72,73,74] and in the review articles [75,76,77], where numerous citations to the previous literature can be found. However, this literature suffers from a fragmentation of approaches that lack general quantitative predictive power.
An original approach has been developed by the present authors [15,16,17,18,19,20,21,22,23,24], where we have followed the ideas of Bohr and von Neumann, treating decision theory as a theory of quantum measurements and generalizing these ideas to make them applicable to human decision makers.
Although we have named our approach Quantum Decision Theory (QDT), it is worth stressing that the use of quantum techniques does not imply that the brain or consciousness has anything to do with genuinely quantum systems. The techniques of quantum theory are used solely as a convenient and efficient mathematical tool and language to capture the complicated properties associated with decision making. The necessity of generalizing classical utility theory has been mentioned in connection with the notion of bounded rationality [78] and confirmed by numerous studies in behavioral economics, behavioral finance, the attention economy and neuroeconomics [79,80,81].
3. Main Results
In our previous papers [15,16,17,18,19,20,21,22,23,24], we have developed a rigorous mathematical approach for defining behavioral quantum probabilities. The approach is general and can be applied to human decision makers, as well as to the interpretation of quantum measurements [19,24]. However, when applying the approach to human decision makers, we have limited ourselves to lotteries with gains.
In the present paper, we extend the applicability of QDT to the case of lotteries with losses. Such an extension is necessary for developing the associated general methods taking into account behavioral biases in decision theory and risk management [82]. Risk assessment and classification are parts of decision theory that are known to be strongly influenced by behavioral biases. We generalize the approach to be applicable to both types of lotteries, with gains as well as with losses.
It is necessary to stress that quantum theory is intrinsically probabilistic. The probabilistic theory of decision making has to include, as a particular case, standard utility theory. However, the intrinsically probabilistic nature of quantum theory makes our approach principally different from the known classical stochastic utility theories, which are based on the assumption of the existence of a value functional composed of value functions and weighting functions with random parameters [83,84]. In classical probabilistic theories, one often assumes that the decision maker's choice is actually deterministic, based on the rules of utility theory or some of its variants, but that this choice is accompanied by random noise and decision errors, which makes the overall process stochastic. In this picture, one assumes that a deterministic decision theory is embedded in a stochastic environment [85,86,87].
However, there are in the mathematical psychology literature a number of approaches, such as random preference models or mixture models, which also assume intrinsically random preferences [88,89,90,91,92,93]. These models emphasize that “stochastic specification should not be considered as an ‘optional add-on,’ but rather as integral part of every theory which seeks to make predictions about decision making under risk and uncertainty” [58]. Using different probabilistic specifications has been shown to lead to possibly opposite predictions, even when starting from the same core (deterministic) theory [58,85,94,95,96,97]. This stresses the important role of the probabilistic specification together with the more standard core component.
In this spirit, our approach also assumes that the probability is not a notion that arises due to some external noise or random errors, but it is the basic characteristic of decision making. This is because quantum theory does not simply decorate classical theory with stochastic elements, but is probabilistic in principle. This is in agreement with the understanding that the genuine ontological indeterminacy, pertaining to quantum systems, is not due to errors or noise.
The quantum probability in QDT is defined according to the general rules of quantum theory, which naturally results in the probability being composed of two terms, called the utility factor and the attraction factor. The utility factor describes the utility of a prospect, defined according to the rational evaluations of a decision maker. Such factors are introduced as minimizers of an information functional, producing the best factors under the minimal available information. In the process of minimization, there appears a Lagrange multiplier playing the role of a belief parameter, characterizing the level of belief of a decision maker in the given set of lotteries. The belief parameter is zero under complete decision-maker disbelief.
The additional quantum term is called the attraction factor. It shows how a decision maker appreciates the attractiveness of a lottery according to his/her subconscious feelings and biases. That is, the attraction factor quantifies the deviation from rationality. In the present paper, we introduce a measure defined as the average modulus of the attraction factors over the given set of lotteries. On the one hand, this measure characterizes the level of difficulty in choosing between the given set of lottery games, as experienced by decision makers. On the other hand, it reflects the influence of irrationality on the choices of decision makers. These two sides are intimately interrelated, since the irrationality in decision making results from the uncertainty of the game, when a decision maker chooses between two lotteries whose mutual advantages and disadvantages are not clear [23]. To summarize, the introduced measure is, on the one hand, a measure of the difficulty in evaluating the given lotteries and, on the other hand, an irrationality measure.
To illustrate the approach, we analyze two sets of games, one set of games with very close lotteries and another large set of empirical data of diverse lotteries. We calculate the introduced measure and show that the predicted values of this measure are in good agreement with experimental data.
Briefly, the main novel results of the present paper are the following:
- (i)
We demonstrate that the laws of quantum theory can be applied to model human decision making, so that quantum probabilities can characterize the behavioral probabilities of human choice in decision making.
- (ii)
We generalize the approach for defining quantum behavioral probabilities to arbitrary lotteries, with gains as well as with losses.
- (iii)
We introduce an irrationality measure, quantifying the strength of the deviation from rationality in decision making, for a given set of games involving different lotteries.
- (iv)
We analyze a large collection of empirical data, demonstrating good agreement between the predicted values of the irrationality measure and the experimentally-observed data.
- (v)
We demonstrate that the predicted quantum probabilities are in good agreement with empirically-observed behavioral probabilities, which suggests that human decision making can be described by rules similar to those of quantum theory.
5. Quantum Decision Theory
We identify behavioral probabilities with the probabilities defined by quantum laws, as is done in the theory of quantum measurements [15,19,24]. Then, it is straightforward to calculate such probabilities. Here, we shall not plunge into the details of these calculations, which have been thoroughly expounded in full mathematical detail in our previous papers [15,16,17,18,19,20,21,22,23,24], but will list only the main notions and properties of the defined behavioral probabilities.
Although the mathematics is rather involved, the final results can be formulated in a form simple enough for practical usage. Therefore, the reader does not actually need to know the calculation details, provided the final properties are clearly stated, as we do below.
Let the letter $A_n$ denote the action of choosing a lottery $L_n$, with $n = 1, 2, \ldots, N$. Strictly speaking, any such choice is accompanied by uncertainty that has two sides, objective and subjective. Objectively, when choosing a lottery, the decision maker does not know what particular payoff he/she will get. Subjectively, the decision maker may be unsure whether the setup is correctly understood, whether there are hidden traps in the problem and whether he/she is able to make an optimal decision. Let such a set of uncertain items be denoted by the letter $B$.
The operationally testable event $A_n$ is represented by a vector $|A_n\rangle$ pertaining to a Hilbert space
\[ \mathcal{H}_A = {\rm span}\{ |A_n\rangle \} . \]
The uncertain event $B$ is represented by a vector
\[ |B\rangle = \sum_\alpha b_\alpha |B_\alpha\rangle \]
in the Hilbert space
\[ \mathcal{H}_B = {\rm span}\{ |B_\alpha\rangle \} , \]
with the coefficients $b_\alpha$ being random. The sets $\{ |A_n\rangle \}$ and $\{ |B_\alpha\rangle \}$ form orthonormal bases in their related Hilbert spaces.
Thus, choosing a lottery is a composite event, consisting of a final choice $A_n$ as such, accompanied by deliberations involving the set of uncertain events $B$. The choice of a lottery, under uncertainty, defines a composite event called the prospect
\[ \pi_n = A_n \bigotimes B , \tag{6} \]
which is represented by a state
\[ |\pi_n\rangle = |A_n\rangle \bigotimes |B\rangle \]
in the Hilbert space
\[ \mathcal{H} = \mathcal{H}_A \bigotimes \mathcal{H}_B . \]
The prospect operators
\[ \hat P(\pi_n) = |\pi_n\rangle \langle \pi_n| \]
play the role of observables in the theory of quantum measurements. The decision-maker strategic state is defined by an operator $\hat\rho$ that is analogous to a statistical operator in quantum theory.
Each prospect $\pi_n$ is characterized by its quantum behavioral probability
\[ p(\pi_n) = {\rm Tr} \, \hat\rho \, \hat P(\pi_n) , \]
where the trace is taken over the basis of the space $\mathcal{H}$ formed by the vectors $|A_n\rangle \bigotimes |B_\alpha\rangle$. The family of these probabilities composes a probability measure, such that
\[ \sum_{n=1}^N p(\pi_n) = 1 , \qquad 0 \leq p(\pi_n) \leq 1 . \]
Calculating the prospect probability in the basis of the space $\mathcal{H}$, it is straightforward to separate the diagonal, positive-definite terms from the off-diagonal, sign-undefined terms, which results in the expression
\[ p(\pi_n) = f(\pi_n) + q(\pi_n) , \tag{8} \]
consisting of two terms. The first, positive-definite term $f(\pi_n)$ corresponds to a classical probability describing the objective utility of the lottery $L_n$, because of which it is called the utility factor, satisfying the condition
\[ \sum_{n=1}^N f(\pi_n) = 1 , \qquad 0 \leq f(\pi_n) \leq 1 . \tag{9} \]
The explicit form of the utility factors depends on whether the expected utilities are positive or negative and will be presented in the following sections.
The second, off-diagonal term $q(\pi_n)$ represents the subjective attitude of the decision maker towards the prospect $\pi_n$, thus being called the attraction factor. This factor encapsulates the behavioral biases of the decision maker and is nonzero when the prospect (6) is entangled, that is, when the decision maker deliberates, being uncertain about the correct choice. The prospect $\pi_n$ is called entangled when its prospect operator $\hat P(\pi_n)$ cannot be represented as a separable operator in the corresponding composite Hilbert–Schmidt space. The accurate mathematical formulation of entangled operators requires rather long explanations and can be found, with all details, in [19,24].
The attraction factor reflects the subjective attitude of the decision maker, caused by his/her behavioral biases, subconscious feelings, emotions and so on. Being subjective, the attraction factors are different for different decision makers and even for the same decision maker at different times. Subjective feelings are known to depend essentially on the emotional state of decision makers, their affect, framing, available information and the like [98,99,100,101,102]. It would thus seem that such a subjective and contextual quantity is difficult to characterize. Nevertheless, in agreement with its structure and employing the above normalization conditions, it is easy to show that the attraction factor enjoys the following general features. It ranges in the interval
\[ -1 \leq q(\pi_n) \leq 1 \]
and satisfies the alternation law
\[ \sum_{n=1}^N q(\pi_n) = 0 . \]
The absence of deliberations, hence lack of uncertainty, is in quantum parlance equivalent to decoherence, when
\[ q(\pi_n) \rightarrow 0 , \qquad p(\pi_n) \rightarrow f(\pi_n) . \]
This is also called the quantum-classical correspondence principle, according to which the choice between the lotteries reduces to the evaluation of their objective utilities, which occurs when uncertainty is absent, leading to vanishing attraction factors. In this case, decision making reduces to its classical probabilistic formulation, described by the utility factors playing the role of classical probabilities.
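To make this structure concrete, the following short NumPy sketch builds the product states, computes the quantum probabilities and splits them into diagonal (utility) and off-diagonal (attraction) parts, checking the normalization and alternation laws numerically. It is illustrative only: the dimensions, the random strategic state and the separate normalization of the diagonal part are our own choices for the sketch, not values taken from the theory.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 4  # N prospects (choices A_n), M uncertainty states B_alpha (assumed sizes)

# Strategic state rho: a random positive operator with unit trace on H_A (x) H_B.
X = rng.normal(size=(N * M, N * M)) + 1j * rng.normal(size=(N * M, N * M))
rho = X @ X.conj().T
rho /= np.trace(rho).real

# Uncertainty state |B> = sum_alpha b_alpha |B_alpha> with random coefficients.
b = rng.normal(size=M) + 1j * rng.normal(size=M)

def prospect_vector(n):
    """|pi_n> = |A_n> (x) |B>, expressed in the product basis |A_n B_alpha>."""
    v = np.zeros(N * M, dtype=complex)
    v[n * M:(n + 1) * M] = b
    return v

# Raw quantum probabilities <pi_n| rho |pi_n> and their diagonal (alpha = alpha') parts.
p_raw = np.array([(prospect_vector(n).conj() @ rho @ prospect_vector(n)).real
                  for n in range(N)])
f_raw = np.array([sum(abs(b[a])**2 * rho[n * M + a, n * M + a].real
                      for a in range(M)) for n in range(N)])

p = p_raw / p_raw.sum()  # behavioral probabilities: sum_n p(pi_n) = 1
f = f_raw / f_raw.sum()  # utility factors:          sum_n f(pi_n) = 1
q = p - f                # attraction factors:       sum_n q(pi_n) = 0 (alternation law)

print(np.round(p, 3), np.round(f, 3), np.round(q, 3))
```

With both `p` and `f` normalized to unity, the alternation law holds identically, so the sketch reproduces the general features listed above for any random draw.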
We have also demonstrated [17,18,20,75] that the average values of the attraction-factor moduli for a given set of lotteries can be determined on the basis of assumptions constraining the distribution of the attraction-factor values. This suggests the necessity of studying this aggregate quantity more attentively. For this purpose, we introduce here the notation
\[ \bar q \equiv \frac{1}{N} \sum_{n=1}^N | q(\pi_n) | . \tag{13} \]
This quantity appears due to the quantum definition of probability, and it characterizes the level of deviation from rational decision making for the given set of $N$ lotteries considered by decision makers. If decision making were based on completely rational grounds, this measure would be zero. Since the value $\bar q$ describes the deviation from rationality in decision making, it can be named the irrationality measure. Additionally, since it results from the use of quantum rules, it is also a quantum correction measure.
It can be shown that, for the case of a non-informative prior, when the attraction factors of the considered lotteries are uniformly distributed in the interval $[-1,1]$, one gets $\bar q = 1/4$, which is termed the quarter law. This property also holds for some other distributions that are symmetric with respect to the inversion $q(\pi_n) \rightarrow -q(\pi_n)$ [17,18,20,75]. The value $\bar q = 1/4$ describes the average level of irrationality. However, it is clear that this value does not need to be the same for all sets of lotteries. It is straightforward to compose a set of simple lotteries, having practically no uncertainty, so that decision choices could be governed by rational evaluations. For such lotteries, the attraction factors can be very small, since the behavioral probability $p(\pi_n)$ practically coincides with the utility factor $f(\pi_n)$. For instance, this happens for lotteries with low uncertainty, when one of the lotteries from the given set enjoys much higher gains, with higher probabilities, than the other lotteries. In such a case, the irrationality measure (13) can be rather small. On the contrary, it is admissible to compose a set of highly uncertain lotteries, for which the irrationality measure would be larger than $1/4$. In this way, the irrationality measure (13) is a convenient characteristic allowing for a quantitative classification of the typical deviation from rationality in decision making.
The above consideration concerns a single decision maker. It is straightforward to extend the theory to a society of $D$ decision makers choosing between $N$ prospects, with the collective state being represented by a tensor product of partial statistical operators. Then, for an $i$-th decision maker, we have the prospect probability
\[ p_i(\pi_n) = f_i(\pi_n) + q_i(\pi_n) , \tag{14} \]
where $i = 1, 2, \ldots, D$.
The typical behavioral probability, characterizing the decision-maker society, is the average
\[ p(\pi_n) = \frac{1}{D} \sum_{i=1}^D p_i(\pi_n) . \tag{15} \]
The utility factor $f_i(\pi_n)$ is an objective quantity for each decision maker. In general, this utility factor can be person-dependent, in order to reflect the specific skills, intelligence, knowledge, etc. of the decision maker, which shape his/her rational decision. Another reason to consider different $f_i(\pi_n)$ is that risk aversion is, in general, different for different people. However, being averaged, as in (15), such an aggregate quantity describes the behavioral probability of a typical agent, which is a feature of the considered society on average.
Similarly, the typical attraction factor is
\[ q(\pi_n) = \frac{1}{D} \sum_{i=1}^D q_i(\pi_n) , \]
which, generally, depends on the number $D$ of questioned decision makers. These typical values describe the society of $D$ agents on average, thus defining a typical agent. For a society of $D$ decision makers, the decision irrationality measure takes the aggregate form
\[ \bar q(D) = \frac{1}{N} \sum_{n=1}^N | q(\pi_n) | . \]
In standard experiments, one usually questions a pool of $D$ decision makers. If the number of decision makers choosing a prospect $\pi_n$ is $N_n$, such that
\[ \sum_{n=1}^N N_n = D , \]
then the experimental frequentist probability is
\[ p_{\exp}(\pi_n) = \frac{N_n}{D} . \]
This, using the notation $f(\pi_n)$ for the aggregate utility factor, makes it possible to define the experimental attraction factor
\[ q_{\exp}(\pi_n) = p_{\exp}(\pi_n) - f(\pi_n) , \tag{20} \]
depending on the number of decision makers $D$.
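As an illustration of how the experimental attraction factors and the irrationality measure are extracted from poll data, consider the following sketch. The counts and the aggregate utility factors are hypothetical numbers chosen for the example, not data from the empirical sets analyzed later.

```python
import numpy as np

counts = np.array([64, 36])   # N_n: how many of the D subjects chose prospect pi_n
f = np.array([0.55, 0.45])    # aggregate utility factors (assumed given)

D = counts.sum()              # total number of decision makers, D = 100
p_exp = counts / D            # experimental frequentist probabilities
q_exp = p_exp - f             # experimental attraction factors
q_bar = np.abs(q_exp).mean()  # irrationality measure for this single game

print(p_exp, np.round(q_exp, 3), round(float(q_bar), 3))
```

For a two-lottery game, the two experimental attraction factors are equal in magnitude and opposite in sign, in accordance with the alternation law, so the irrationality measure is simply that common magnitude (here 0.09).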
More generally, in standard experimental tests, decision makers are asked to formulate not a single choice between $N$ lotteries, but multiple choices with different lottery sets. A single choice between a given set of lotteries is called a game, that is, the operation
\[ G = \{ \pi_n , \; p(\pi_n) : \; n = 1, 2, \ldots, N \} , \]
ascribing the probabilities $p(\pi_n)$ to each lottery $L_n$. When a number of games are proposed to the decision maker, enumerated by an index $g = 1, 2, \ldots, N_G$, they are the operations
\[ G_g = \{ \pi_n^{(g)} , \; p(\pi_n^{(g)}) : \; n = 1, 2, \ldots, N_g \} . \]
Averaging over all games gives the aggregate irrationality, or quantum correction, measure
\[ \bar q = \frac{1}{N_G} \sum_{g=1}^{N_G} \bar q_g , \]
quantifying the level of irrationality associated with the whole set of these games.
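Numerically, the aggregate measure is just the mean, over games, of the per-game average attraction-factor modulus. A toy example with hypothetical attraction factors for three two-lottery games (each pair sums to zero, as the alternation law requires):

```python
import numpy as np

# Hypothetical experimental attraction factors for three two-lottery games.
q_games = [np.array([0.09, -0.09]),
           np.array([0.20, -0.20]),
           np.array([0.04, -0.04])]

q_bar_games = [float(np.abs(q).mean()) for q in q_games]  # per-game measure
q_bar = sum(q_bar_games) / len(q_bar_games)               # aggregate measure
print(q_bar_games, round(q_bar, 2))  # -> [0.09, 0.2, 0.04] 0.11
```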
As is clear from Equations (8), (14) and (20), the attraction factor defines the deviation of the behavioral probability from the classical rational value prescribed by the utility factor. We shall say that a prospect $\pi_1$ is more useful than $\pi_2$ if and only if $f(\pi_1) > f(\pi_2)$. A prospect $\pi_1$ is said to be more attractive than $\pi_2$ if and only if $q(\pi_1) > q(\pi_2)$. Additionally, a prospect $\pi_1$ is preferable to $\pi_2$ if and only if $p(\pi_1) > p(\pi_2)$. Therefore, a prospect can be more useful, but less attractive, as a result being less preferable. This is why the behavioral probability, combining both objective and subjective features, provides a more correct and complete description of decision making.
It is important to emphasize that, in our approach, the form of the probability (8) is not an assumption, but directly follows from the definition of the quantum probability. This is principally different from the suggestions of some authors to add to expected utility an additional phenomenological term corresponding to either information entropy [103,104] or to social interactions [105]. In our case, we do not spoil expected utility, but work with a probability whose form is prescribed by quantum theory.
Our approach is principally different from the various models of stochastic decision making, where one assumes a particular form of a utility functional or a value functional, whose parameters are treated as random and fitted a posteriori for a given set of lotteries. Such stochastic models are only descriptive and do not enjoy predictive power. In the following sections, we show that our method provides an essentially more accurate description of decision making.
It is worth mentioning the Luce choice axiom [106,107,108,109], which states the following. Let us consider a set of objects enumerated by an index $n$ and labeled each by a scaling quantity $a_n \geq 0$. Then, the probability of choosing the $n$-th object can be written as
\[ p_n = \frac{a_n}{\sum_k a_k} . \]
In the case of decision making, one can associate the objects with lotteries and their scaling characteristics with expected utilities. Then, the Luce axiom gives a way of estimating the probability of choosing among the lotteries. Below, we show that the Luce axiom is a particular case of the more general principle of minimal information. Moreover, it allows for the estimation of only the rational part of the behavioral probability, related to the lottery utilities. In our approach, however, there also exists the other part of the behavioral probability, represented by the attraction factor. This makes QDT principally different and results in an essentially more accurate description of decision making.
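In code, the Luce rule is a one-liner; the scale values below are hypothetical and stand in for the expected utilities of three lotteries.

```python
import numpy as np

a = np.array([2.0, 1.0, 1.0])  # positive scale values a_n attached to the objects
p = a / a.sum()                # Luce choice probabilities: 0.5, 0.25, 0.25
print(p)
```

Note that this produces only the rational part of the choice probabilities; in QDT terms, it corresponds to the utility factors alone, with all attraction factors set to zero.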
Furthermore, it is important to distinguish our approach from a variety of so-called non-expected utility theories, such as weighted-utility theory [55,56,57], regret theory [58], optimism-pessimism theory [59], dual-utility theory [60], ordinal-independence theory [61], quadratic-probability theory [62] and prospect theory [49,54,110]. These non-expected utility theories are based on an ad hoc replacement of expected utility by a phenomenological functional, whose parameters are fitted afterwards from empirical data; see more details in the review articles [53,63,64]. Therefore, all such theories are descriptive, but not predictive. After a posteriori fitting, practically any such theory can be made to correspond to experimental data, so that it is difficult to distinguish between them [64]. Contrary to most of these, as stressed above, we first of all do not deal with only utility, but are concerned with probability. More importantly, we do not assume phenomenological forms, but derive all properties of the probability from a self-consistent formulation of quantum theory. In particular, the properties of the attraction factor, described above, and the explicit form of the utility factor, to be derived below, give us a unique opportunity for quantitative predictions.
Similarly to quantum theory, where one can accomplish different experiments for different systems, in decision theory, one can arrange different sets of lotteries for different pools of decision makers. The results of such empirical data can be compared with the results of calculations in QDT.
6. Difficult or Easy Choice
In the process of decision making, subjects try to evaluate the utility of the given lotteries. Such an evaluation can be easy when the lotteries are noticeably different from each other; alternatively, the choice is difficult when the lotteries are rather similar.
The difference between lotteries can be quantified as follows. Suppose we compare two lotteries, whose utility factors are $f(\pi_1)$ and $f(\pi_2)$. Let us introduce the relative lottery difference as
\[ \Delta f \equiv \frac{| f(\pi_1) - f(\pi_2) |}{f(\pi_1) + f(\pi_2)} . \tag{22} \]
When there are only two lotteries in a game, because of the normalization (9), we then have
\[ \Delta f = | f(\pi_1) - f(\pi_2) | . \]
As is evident, the difficulty in choosing between the two lotteries is a decreasing function of the utility difference (22). The problem of discriminating between two similar objects or stimuli has been studied in psychology and psychophysics, where the critical threshold, quantifying how much difference between two alternatives is sufficient to decide that they are really different, is termed the discrimination threshold, just-noticeable difference or difference threshold [111]. In applications of decision theory to economics, one sometimes selects a threshold difference of $1\%$, because “it is worth spending one percent of the value of a decision analyzing the decision” [112]. This implies that the value of $1\%$, being spent to improve the decision, at the same time does not change significantly the value of the chosen lottery.
More rigorously, the existence of a threshold difference, below which two lotteries can be treated as almost equivalent, can be justified in the following way. In psychology and operations research, to quantify the similarity or closeness of two alternatives with close utilities or close probabilities, one introduces [113,114,115,116] a measure of distance between the alternatives, defined as the absolute difference of their values raised to a power m > 0. In applications, one employs different values of the exponent m, getting the linear distance for m = 1, the quadratic distance for m = 2, and so on. In order to remove the arbitrariness in setting the exponent m, it is reasonable to require that the difference threshold x be invariant with respect to the choice of m, so that:

x^{m₁} = x^{m₂}

for any positive m₁ and m₂. Counting the threshold in percentage units, the sole nontrivial solution to the above equation is x = 1 percent. This implies that the difference threshold, capturing the psychological margin of significance, has to be equal to 0.01. Then, one says that the choice is difficult when:

Δ(π₁, π₂) < 0.01 .    (23)
Otherwise, when the lottery utility factors differ more substantially, the choice is said to be easy.
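For illustration, the difficult/easy classification described above can be sketched in code. This is a minimal sketch, not the authors' implementation: the function names are ours, the utility factors enter as plain floats, and the one-percent threshold is the value discussed in the text.

```python
# Sketch: classifying a two-lottery choice as difficult or easy from the
# utility factors f1 and f2, using the one-percent difference threshold.

def relative_difference(f1: float, f2: float) -> float:
    """Relative lottery difference between two utility factors."""
    return abs(f1 - f2) / (f1 + f2)

def is_difficult(f1: float, f2: float, threshold: float = 0.01) -> bool:
    """A choice is difficult when the relative difference is below threshold."""
    return relative_difference(f1, f2) < threshold

# Two nearly equivalent lotteries: the choice is difficult.
print(is_difficult(0.501, 0.499))  # True
# Clearly distinct lotteries: the choice is easy.
print(is_difficult(0.70, 0.30))    # False
```

Note that for two lotteries with normalized factors, f1 + f2 = 1, so the relative difference reduces to |f1 − f2|, as in the text.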
The value of the aggregate attraction factor depends on whether the choice between lotteries is difficult or easy. The attraction factors for different decision makers are certainly different. However, they are not absolutely chaotic, so that, being averaged over many decision makers and several games, the average modulus of the attraction factor can represent a sensible estimation of the irrationality measure defined in the previous section.
An important question is whether it is possible to predict the irrationality measure for a given set of games. Such a prediction, if possible, would provide very valuable information on what decisions a society could make. The evaluation of the irrationality measure can be done in the following way. Suppose that φ(q) is the probability distribution of attraction factors for a society of decision makers. On the admissible domain (
10) of attraction-factor values, the distribution satisfies the normalization condition:

∫ φ(q) dq = 1 ,    (24)

where the integral runs over the domain (10).
Experience suggests that, except in the presence of certain gains and losses, there are practically no absolutely certain games that would involve no hesitations and no subconscious feelings. In mathematical terms, this can be formulated as follows. On the manifold of all possible games, absolutely rational games compose a set of zero measure:
On the other side, there are almost no completely irrational decisions, containing absolutely no utility evaluations. That is, on the manifold of all possible games, absolutely irrational games make a set of zero measure:
The last condition is also necessary for the probability to lie in the range [0, 1].
Consider a decision maker to whom a set of games is presented, each game being classified as difficult or easy according to condition (
23). Additionally, let the fraction of difficult choices be
ν. Then, a simple probability distribution that satisfies all of the above conditions is the Bernoulli distribution:
The Bernoulli distribution is a particular case of the beta distribution employed as a prior distribution under Conditions (
25) and (
26) in standard inference tasks [
117,
118,
119].
The expected irrationality measure then reads:

q̄ ≡ ∫ |q| φ(q) dq ,    (28)

which, with Expression (27), yields:

q̄ = ν/2 .    (29)
In this way, for any set of games, we can a priori predict the irrationality measure by Formula (29) and compare this prediction with the corresponding quantity (20) determined from a posteriori experimental data.
For example, when there are no difficult choices, hence ν = 0, the predicted irrationality measure is zero. On the contrary, when all games involve difficult choices and ν = 1, it equals 1/2. In the case when half of the games involve difficult choices, so that ν = 1/2, it equals 1/4. This last case reproduces the result of the non-informative prior. It is reasonable to argue that, if we knew nothing about the level of the games' difficulty, we could assume that half of them are difficult and half are easy.
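The prediction just described can be sketched numerically. The sketch below assumes that Formula (29) reduces to the linear form q̄ = ν/2; this explicit form is an assumption of the sketch, chosen so that a half-and-half mixture of difficult and easy games reproduces the non-informative prior value of 1/4.

```python
# Sketch: predicted irrationality measure from the fraction nu of difficult
# choices.  The linear form q_bar = nu / 2 is an assumption here, chosen so
# that nu = 1/2 (half of the games difficult) reproduces the non-informative
# prior value 1/4.

def predicted_irrationality(nu: float) -> float:
    if not 0.0 <= nu <= 1.0:
        raise ValueError("nu must be a fraction in [0, 1]")
    return nu / 2.0

print(predicted_irrationality(0.0))  # 0.0   (no difficult choices)
print(predicted_irrationality(1.0))  # 0.5   (all choices difficult)
print(predicted_irrationality(0.5))  # 0.25  (non-informative prior)
```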
7. Positive Expected Utilities
To be precise, it is necessary to prescribe a general method for calculating the utility factors. When all payoffs in sets (
1) are gains, then all expected utilities (
4) are positive. This is the case we have treated in our previous papers [
16,
17,
18,
21,
75]. However, if among the payoffs there are gains as well as losses, then the signs of the expected utilities can be positive as well as negative. Here, we shall consider two classes of lotteries including both gains and losses: a first class of lotteries with positive expected utilities and a second class with negative expected utilities.
In the present section, we consider the lotteries with semi-positive (i.e., non-negative) expected utilities, such that:

U(π_n) ≥ 0 .    (30)

Recall that such a lottery does not need to be composed solely of gains; it can include both gains and losses, in such a way that the expected utility (
4) is semi-positive.
The utility factor, by its definition, characterizes the objective utility of a lottery; in other words, it is supposed to be a function of the lottery expected utility. The explicit form of this function can be found from the conditional minimization of the Kullback–Leibler [
120,
121] information:

I_KL[f] = Σ_n f(π_n) ln[ f(π_n) / f₀(π_n) ] ,    (31)

in which
f₀(π_n) is a trial likelihood function [
16].
The use of the Kullback–Leibler information for deriving the classical utility distribution is justified by the Shore–Johnson theorem [
122]. This theorem proves that there exists only one distribution satisfying consistency conditions, and this distribution is uniquely defined by the minimum of the Kullback–Leibler information, under given constraints. This method has been successfully employed in a remarkable variety of fields, including physics, statistics, reliability estimations, traffic networks, queuing theory, computer modeling, system simulation, optimization of production lines, organizing memory patterns, system modularity, group behavior, stock market analysis, problem solving and decision theory. Numerous references related to these applications can be found in the literature [
122,
123,
124,
125,
126].
It is also worth recalling that the Kullback–Leibler information is actually a slightly modified Shannon entropy. Additionally, this entropy is known to be the natural information measure for probabilistic theories [
25,
26,
27,
28].
The total information functional is prescribed to take into account those additional constraints that uniquely define a representative statistical ensemble [
124,
127,
128]. First of all, such a constraint is the normalization condition (
9). Then, since the utility factor plays the role of a classical probability, one defines the average quantity:

U ≡ Σ_n f(π_n) U(π_n) .    (32)
This quantity can be either finite or infinite. The latter case would mean that there could exist infinite (or very large) utilities, which, in real life, could be interpreted as a kind of “miracle”, leading to large “surprise” [
82]. Therefore, the assumption that
U can be infinite (or extremely large) can be interpreted as equivalent to the belief in the absence of constraints, in other words, to the assumption of strong uncertainty.
In this way, the information functional is written as:

I[f] = Σ_n f(π_n) ln[ f(π_n) / f₀(π_n) ] + λ [ Σ_n f(π_n) − 1 ] + β [ U − Σ_n f(π_n) U(π_n) ] ,    (33)

in which
λ and
β are the Lagrange multipliers guaranteeing the validity of the imposed constraints.
In order to correctly reflect the objective meaning of the utility factor, it has to grow together with the utility, so as to satisfy the variational condition

∂f(π_n)/∂U(π_n) ≥ 0    (34)

for any value of the expected utility. Additionally, the utility factor has to be zero for zero utility, which implies the boundary condition:

f(π_n) → 0  as  U(π_n) → 0 .    (35)

To satisfy these conditions, it is feasible to take the likelihood function proportional to the expected utility, f₀(π_n) ∝ U(π_n).
The minimization of the information functional (
33) yields the utility factor:

f(π_n) = U(π_n) e^{β U(π_n)} / Z ,    (36)

with the normalization factor:

Z = Σ_m U(π_m) e^{β U(π_m)} .    (37)
Conditions (34) and (35) require that the Lagrange multiplier β be non-negative, varying in the interval [0, ∞). This quantity can be called the belief parameter or certainty parameter, because of its meaning following from Equations (32) and (33). The value of β reflects the level of certainty of a decision maker with respect to the given set of lotteries and to the possible occurrence of infinite (or extremely large) utilities.
If one is strongly uncertain about the outcome of a decision to be made with respect to the given lotteries, thinking that nothing should be excluded, then quantity (
32) can take any value, including infinite ones, and, to keep the information functional (
33) meaningful, one must set the belief parameter to zero: β = 0. Thus, the zero belief parameter reflects strong uncertainty with respect to the given set of lotteries. Then, the utility factor (
36) becomes:

f(π_n) = U(π_n) / Σ_m U(π_m) .    (38)
In that way, the uncertainty in decision making leads to the probabilistic decision theory, with the probability weight described by (
36). In the case of strong uncertainty, with the zero belief parameter, the probabilistic weight (
36) reduces to form (
38) suggested by Luce. It has been mentioned [
129] that the Luce form cannot describe the situations where behavioral effects are important. However, in our approach, form (
38) is only a part of the total behavioral probability (
8). Expression (
36), by construction, represents only the objective value of a lottery; hence, it is not supposed to include subjective phenomena. The subjective part of the behavioral probability (
8) is characterized by the attraction factor (
16). As a consequence, the total behavioral probability (
8) includes both objective, as well as subjective effects.
In the intermediate case, when one is not completely certain but nevertheless assumes that (
32) cannot be infinite (or extremely large), then
β is also finite, and the utility factor (
36) is to be used. This is the general case of probabilistic decision making.
However, when one is absolutely certain in the rationality of the choice within the given lottery set, that is, when one believes that the decision can be made completely rationally, then the belief parameter is large, β → ∞, which results in a utility factor equal to one for the lottery with the largest expected utility and to zero for all others, corresponding to the deterministic classical utility theory, when, with probability one, the lottery with the largest expected utility is chosen.
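The three regimes of the belief parameter can be illustrated numerically. The sketch below assumes the exponential-family form f(π_n) ∝ U(π_n) exp[β U(π_n)] for the minimizer of the information functional; this explicit form is our assumption, chosen to be consistent with the two limits described in the text: the Luce form at β = 0 and the deterministic choice as β → ∞.

```python
import math

# Sketch: utility factor for positive expected utilities, assuming the
# exponential-family form f_n ∝ U_n * exp(beta * U_n).  beta = 0 gives the
# Luce form; large beta concentrates all weight on the lottery with the
# largest expected utility.

def utility_factors(utilities, beta=0.0):
    u_max = max(utilities)  # rescale exponents for numerical stability
    weights = [u * math.exp(beta * (u - u_max)) for u in utilities]
    norm = sum(weights)
    return [w / norm for w in weights]

U = [3.0, 2.0, 1.0]
print(utility_factors(U, beta=0.0))   # Luce form: [0.5, 0.333..., 0.166...]
print(utility_factors(U, beta=50.0))  # nearly deterministic: close to [1, 0, 0]
```

The rescaling by u_max only multiplies numerator and denominator by the same factor, so it leaves the factors unchanged while avoiding overflow for large β.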
8. Negative Expected Utilities
We now consider lotteries with non-positive expected utilities, when:

U(π_n) ≤ 0 .    (40)

Instead of the negative lottery expected utility, it is convenient to introduce the positive quantity:

C(π_n) ≡ |U(π_n)| = −U(π_n) ,    (41)

called the lottery expected cost or the lottery expected risk.
Similarly to Equation (
32), it is possible to define the average cost:

C ≡ Σ_n f(π_n) C(π_n) ,    (42)

which can be either finite or infinite. The latter case, assuming the existence of infinite (or extremely large) costs, corresponds to a situation that can be interpreted as a "disaster" ending the decision-making process. For instance, this can be the loss of life of the decision maker, who, when dead, "sees" in a sense any arbitrary positive lottery payoff as useless, i.e., dwarfed by his/her infinite loss. This should not be confused with the perspective of society, which puts a price tag on human life depending on culture and affluence. It remains true, however, that an arbitrary positive payoff has no impact on a dead person, neglecting here bequest considerations.
The utility factor can again be defined as the minimizer of the information functional that now reads as:

I[f] = Σ_n f(π_n) ln[ f(π_n) / f₀(π_n) ] + λ [ Σ_n f(π_n) − 1 ] + β [ Σ_n f(π_n) C(π_n) − C ] .    (43)

To preserve the meaning of the utility factor, reflecting the usefulness of a lottery, it is required that a larger cost correspond to a smaller utility factor, so that:

∂f(π_n)/∂C(π_n) ≤ 0    (44)

for any cost. Additionally, as is obvious, an infinite cost must suppress the utility, thus requiring the boundary condition:

f(π_n) → 0  as  C(π_n) → ∞ .    (45)

These conditions make it reasonable to consider a likelihood function inversely proportional to the lottery cost, f₀(π_n) ∝ 1/C(π_n).
Minimizing the information functional (
43), we obtain the utility factor:

f(π_n) = e^{−β C(π_n)} / [ C(π_n) Z ] ,    (46)

with the normalization constant:

Z = Σ_m e^{−β C(π_m)} / C(π_m) .    (47)
Here, again, β has the meaning of a belief parameter connected with the belief of a decision maker in the rationality of the choice among the given lotteries and the possibility of a disaster related to a lottery with an infinite (or extremely large) cost.
The possible occurrence of any outcome, including a disaster, means that quantity (
42) is not restricted and could even go to infinity. To allow for such an occurrence while keeping the information functional meaningful, we need to set β = 0. Thus, similarly to the considerations for utility functions with non-negative expectation, strong uncertainty about the given lottery set and the related outcome of decision making implies the zero belief parameter β = 0. Then, the utility factor is:

f(π_n) = C(π_n)^{−1} / Σ_m C(π_m)^{−1} ,    (48)

and we recover the probabilistic utility theory (or probabilistic cost theory).
Similarly to the previous case, an intermediate level of uncertainty implies a finite belief parameter
β, in which case form (
46) should be used. This is the general situation in the probabilistic cost theory.
Additionally, when one is absolutely certain of the full rationality of the choice among the given lottery set, then the belief parameter β → ∞, which gives a utility factor equal to one for the lottery with the minimal expected cost and to zero for all others. Then, we return to the deterministic classical utility theory (cost theory).
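The cost-based regimes can be illustrated in the same way as for positive utilities. The sketch below assumes the form f(π_n) ∝ exp[−β C(π_n)] / C(π_n); this explicit expression is our assumption, consistent with the limits described in the text: the inverse-cost form at β = 0 and the choice of the minimal-cost lottery as β → ∞.

```python
import math

# Sketch: utility factor for lotteries with negative expected utilities,
# written in terms of the positive expected costs C_n.  The assumed form
# f_n ∝ exp(-beta * C_n) / C_n gives f_n ∝ 1/C_n at beta = 0, while large
# beta concentrates all weight on the lottery with the smallest cost.

def cost_factors(costs, beta=0.0):
    c_min = min(costs)  # rescale exponents for numerical stability
    weights = [math.exp(-beta * (c - c_min)) / c for c in costs]
    norm = sum(weights)
    return [w / norm for w in weights]

C = [1.0, 2.0, 4.0]
print(cost_factors(C, beta=0.0))   # inverse-cost form: approx [0.571, 0.286, 0.143]
print(cost_factors(C, beta=50.0))  # nearly deterministic: close to [1, 0, 0]
```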
Following the procedure used for positive utilities, it is straightforward to classify the lotteries into more or less useful, more or less attractive and more or less preferable.
In what follows, when considering a lottery, we shall keep in mind the related prospect, but, for simplicity, the lottery notation is used in place of the prospect notation.
9. Typical Examples of Decisions under Strong Uncertainty
To give a feeling of how our approach works in practice, let us consider the series of classical laboratory experiments treated by Kahneman and Tversky [
54], where the number of decision makers was around 100, with the corresponding statistical errors on the frequencies of decisions. The respondents had to choose between two lotteries, where payoffs were counted in monetary units. These experiments stress the influence of uncertainty, similarly to the Allais paradox [
48], although being simpler in their setup.
Each pair of lotteries in a decision choice has been composed in such a way that the two lotteries have very close, or in many cases just coinciding, expected utilities, hence coinciding or almost coinciding utility factors, and not very different payoff weights. The choice between such similar lotteries is essentially uncertain. Therefore, we would expect that the irrationality measure, for such strongly uncertain lotteries, should be larger than 1/4.
In order to interpret these experimental results in our framework, we use a linear utility function of the payoffs. The advantage of working with a linear utility function is that the utility factors, by their structure, do not depend on the proportionality constant in the definition of the utility function, nor on the monetary units used, which can be arbitrary. We calculate the utility factors assuming that decisions are made under uncertainty, so that the belief parameter β = 0.
The most important point of the consideration is to calculate the predicted irrationality measure (
29) and to compare it with the irrationality measure (
20) found in the experiments.
Below, we analyze fourteen problems in decision making, seven of which deal with lotteries with positive expected utilities and seven with negative expected utilities. The general scheme is as follows. First, the problem of choosing between two lotteries is formulated. Then, the utility factors are calculated. For positive expected utilities, these factors are given by Expression (
38), which, in the case of two lotteries, reads as:

f(π₁) = U(π₁) / [U(π₁) + U(π₂)] ,  f(π₂) = U(π₂) / [U(π₁) + U(π₂)] .

For negative expected utilities, it is necessary to use Expression (
48), which, in the case of two lotteries, reduces to:

f(π₁) = C(π₂) / [C(π₁) + C(π₂)] ,  f(π₂) = C(π₁) / [C(π₁) + C(π₂)] .
Calculating the lottery utility differences (
22) for each game and using the threshold (
23) allows us to determine the fraction
ν of the games that can be classified as difficult, from which we obtain the predicted irrationality measure (
29).
After this, using the experimental results to determine the frequentist probabilities, we find the attraction factors (
19). We then calculate the irrationality measure (
20) as the average over the absolute values of the attraction factors
found from (
19) for each game. Finally, we compare this aggregate quantity over the set of games and ensemble of subjects with the predicted value (
29).
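The scheme just described can be sketched end to end for a single hypothetical gain-only game. The payoffs and the observed choice fraction below are invented for illustration and are not data from [54]; the two-lottery Luce form f₁ = U₁/(U₁ + U₂) and the attraction factor as observed frequency minus utility factor follow the text above.

```python
# Sketch of the whole scheme on one hypothetical gain-only game.  A linear
# utility function is used, the utility factors follow the two-lottery Luce
# form, and the attraction factor is the observed choice frequency minus
# the utility factor.

def analyze_game(lottery1, lottery2, observed_p1, threshold=0.01):
    # Expected utility of a lottery given as [(payoff, weight), ...].
    U1 = sum(x * p for x, p in lottery1)
    U2 = sum(x * p for x, p in lottery2)
    f1 = U1 / (U1 + U2)
    f2 = U2 / (U1 + U2)
    q1 = observed_p1 - f1              # attraction factor of lottery 1
    difficult = abs(f1 - f2) < threshold
    return f1, f2, q1, difficult

# Hypothetical game: lottery 1 gives 2400 for sure; lottery 2 gives 2500
# with probability 0.96.  Suppose 82% of subjects choose lottery 1.
f1, f2, q1, difficult = analyze_game([(2400, 1.0)], [(2500, 0.96)],
                                     observed_p1=0.82)
print(round(f1, 3), round(f2, 3))  # 0.5 0.5  (equal expected utilities)
print(round(q1, 2))                # 0.32     (attraction to the certain gain)
print(difficult)                   # True
```

The aggregate irrationality measure is then obtained by averaging |q| over all games and subjects, as explained above.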
Below, we give a brief description of the games treated by Kahneman and Tversky [
54] and then summarize the results in
Table 1.
The first lottery is more useful; however, it is less attractive, becoming less preferable. It is clear why the second lottery is more attractive: it provides a more certain gain, although the gains in both lotteries are close to each other. As a result, the second lottery is preferable .
Now, the first lottery is more useful and more attractive, as far as the payoff weights in both lotteries are close to each other, while the first lottery allows for a bit higher gain. Thus, the first lottery is preferable .
The first lottery is more useful, but less attractive. The second lottery is more attractive because it gives a more certain gain, although the gains in both lotteries are comparable. The second lottery becomes preferable .
The first lottery is more useful and also more attractive, since it suggests a slightly larger gain under very close payoff weights. The first lottery is preferable .
Both lotteries are equally useful. However, the second lottery gives a more certain gain, being, thus, more attractive and becoming preferable .
Again, both lotteries are equally useful, but the first lottery is more attractive, suggesting a larger gain under close payoff weights. Therefore, the first lottery is preferable .
Both lotteries are equally useful. However, the second lottery gives more chances for gains, being more attractive. The second lottery becomes preferable .
The second lottery is more useful but less attractive, since it entails a certain loss. Therefore, the first lottery is preferable .
The second lottery is more useful and more attractive, since its loss is lower, while the loss weights in both lotteries are close to each other. This makes the second lottery preferable .
Although the utilities of both lotteries are the same, the first lottery is less attractive, since the loss there is more certain. As a result, the second lottery is preferable .
Both lotteries are again equally useful. However, the second lottery is less attractive, yielding higher loss under close loss weights. This is why the first lottery is preferable .
The utilities of both lotteries are equal. However, the second lottery is less attractive, resulting in a more certain loss. Hence, the first lottery is preferable .
Although the utilities of the lotteries are again equal, the second lottery has more chances to result in a loss, thus being less attractive. Consequently, the first lottery is preferable .
Both lotteries are equally useful. However, the second lottery is more attractive, since the loss there is three orders of magnitude smaller than in the first lottery. Then, the second lottery is preferable .
The results for these games are summarized in
Table 1. Among the 14 games above, nine are difficult, which yields a fraction of difficult choices equal to ν = 9/14 ≈ 0.64. Expression (
29) then predicts an irrationality measure of about 0.32.
The irrationality measure is larger than 1/4. This is not surprising, since the lotteries are arranged in such a way that their expected utilities are very close to each other or, in the majority of cases, even coincide. Hence, it is not easy to choose between such similar lotteries, which makes the decision choice rather difficult. Averaging the experimentally found moduli of the attraction factors over all fourteen problems, we get an irrationality measure that practically coincides with the theoretically predicted value.
It is worth mentioning that the values of the attraction factors for a particular decision choice and, moreover, for each separate decision maker are, of course, quite different. Dealing with a large pool of decision makers and several choices smooths out the particular differences, so that the found irrationality measure characterizes the typical decision making of a large society dealing with quite uncertain choices. In the case treated above, the number of decision makers D was about 100. Hence, the total number of choices for the 14 lotteries is sufficiently large, being around 1400.
10. Analysis for a Large Recent Set of Empirical Data
When the lotteries are not specially arranged to produce high uncertainty in decision choice, but are composed in a random way, we may expect that the irrationality measure will be smaller than in the case of the previous section. In order to check this, let us consider a large set of decision choices, using the results of the recently accomplished massive experimental tests with different lotteries, among which there are many for which the decision choice is simple [
130]. The subject pool consisted of 142 participants having to make 91 choices over a set of 91 pairs of binary option lotteries. The choices were administered in two sessions, approximately two weeks apart, with the same set of the lotteries. However, the item order was randomized, so that the choices in the sessions could be treated as independent. Thus, the total effective number of choices was
25,844.
Each choice was made between two binary option lotteries, which are denoted as:
with payoffs
and
, weights
and
and with
and
denoting the fraction of subjects choosing either the lottery
A or
B, respectively. There were three main types of lotteries: lotteries with only gains (
Table 2 and
Table 3), with only losses (
Table 4 and
Table 5), and mixed lotteries, with both gains and losses (
Table 6 and
Table 7). The specific order of the 91 choices in each session is not of importance because, with the two sessions administered two weeks apart and the item order randomized, the choices could be treated as independent. The fractions of the decision makers choosing the same lottery in the first and second sessions, although close to each other, were generally different, varying between zero and
, which reflects the contextuality of decisions. This variation represents what can be considered as a random noise in the decision process, limiting the accuracy of results by an error of order
.
We calculate the expected utilities
and
, or the expected costs
and
, and the corresponding utility factors
and
, as explained above. Then, we find the attraction factors:
The results are presented in
Table 2 and
Table 3 for the lotteries with gains, in
Table 4 and
Table 5 for the lotteries with losses and in
Table 6 and
Table 7 for mixed lotteries.
Table 3,
Table 5 and
Table 7 include the difference Δ given by expression (
22), allowing us to find out the number of games with a difficult choice.
Analyzing these games, we see that there is just one difficult game; hence, ν = 1/91 ≈ 0.01. Formula (
29) then predicts a correspondingly small irrationality measure, of about 0.005. Averaging the empirical attraction-factor modulus over all lotteries yields an experimentally observed irrationality measure in perfect agreement with this predicted value.
In that way, the irrationality measure is a convenient characteristic quantifying the lottery sets under decision making. It measures the level of irrationality, that is, the deviation from rationality, of decision makers considering the games with the given lottery set. Such a deviation is caused by the uncertainty encapsulated in the lottery set. On average, the irrationality measure typical of the non-informative prior is equal to 1/4. However, in particular realizations, this measure can deviate from 1/4, depending on the typical level of uncertainty contained in the given set of lotteries. The irrationality measure for societies of decision makers can be predicted. The considered large set of games demonstrates that the predicted values of the irrationality measure are in perfect agreement with the empirical data.
Recall that, in analyzing the experimental data, we have used two conditions: the threshold of one percent in the relative lottery difference in (
23) and the Bernoulli distribution (
27). However, employing these conditions cannot be treated as fitting. The standard fitting procedure introduces unknown fitting parameters that are calibrated to the observed data for each given experiment. In contrast, in our approach, the imposed conditions follow from general theoretical arguments and are not adjusted afterwards. Thus, the threshold of one percent follows from the requirement that the distance between two alternatives be invariant with respect to the definition of the distance measure [
113,
114,
115,
116]. Additionally, the Bernoulli distribution is the usual prior distribution under Conditions (
25) and (
26) in standard inference tasks [
117,
118,
119]. Moreover, as is easy to check, the results do not change if instead of the difference threshold of one percent, we accept any value between
and
.
Being general, the developed scheme can be applied to any set of games, without fitting to each particular case. In this respect, it is very instructive to consider the limiting cases predicted at the end of
Section 6. Formula (
29) predicts that, when there are no difficult choices, hence ν = 0, one should have a zero irrationality measure. On the contrary, when all games involve difficult choices, and ν = 1, the measure equals 1/2. To check these predictions, we can take from
Table 3,
Table 5 and
Table 7 all 90 games with an easy choice, implying ν = 0. Then, the value we find is very close to the predicted value of zero. The opposite limiting case of ν = 1, when all games involve a difficult choice, is represented by the set of nine games, 1, 5, 6, 7, 10, 11, 12, 13 and 14, from
Table 1. For this set of games, the value we find is close to the predicted value of 1/2. Actually, the found and predicted values are not distinguishable within the accuracy of the experiments.
It is important to recall that the irrationality measure has been defined as an aggregate quantity, constructed as the average over many decision makers and many games. To be statistically representative, such an averaging has to involve as many subjects and games as possible. Therefore, when taking a subset of easy or difficult games among the set of all given games, we have to take the maximal number of them, as is done in the examples above. When taking not the maximal number of games but an arbitrary limited subset, we could arrive at quite different values of the aggregate measure. In the extreme case, the attraction factor q for separate single games can be very different. However, the value of q for a separate game has nothing to do with the aggregate quantity. Thus, when selecting subsets of easy or difficult games, we must take the maximal number of available games.
11. Conclusions
We have developed a probabilistic approach to decision making under risk and uncertainty for the general case of payoffs that can be gains or losses. The approach is based on the notion of behavioral probability that is defined as the probability of making decisions in the presence of behavioral biases. This probability is calculated by invoking quantum techniques, justifying the name quantum decision theory. The resulting behavioral probability is a sum of two terms, a utility factor, representing the objective value of the considered lottery, and an attraction factor, characterizing the subjective attitude of a decision maker toward the treated lotteries. The utility factors are defined from the principle of minimal information, yielding the best utility weights under the given minimal information. Minimizing the information functional yields the explicit form of the utility factor as a function of the lottery expected utility, or lottery expected cost. The utility factor form is different for two different situations depending on whether gains or losses prevail. In the first case, the expected utilities of all lotteries are positive (or semi-positive), while in the second case, the expected utilities are negative.
In the process of minimizing the information functional, there appears a parameter named the belief parameter, which is a Lagrange multiplier related to the validity of the condition reflecting the belief of decision makers in the level of uncertainty in the process of decision making. When the decision makers are absolutely sure about the way of choosing, the theory reduces to the classical deterministic decision making, where, with probability one, the lottery that enjoys the largest expected utility, or the minimal expected cost, is chosen. However, for choices under uncertainty, decision making remains probabilistic.
The attraction factor, despite its contextuality, possesses several general features allowing for a quantitative description of decision making. We introduced an irrationality measure, defined as the average of the attraction-factor moduli over the given lottery set, characterizing how much decision makers deviate from rationality in decision making with this set of lotteries. In the case of the non-informative prior, the irrationality measure equals 1/4. However, for other particular lottery sets, it may deviate from this value, depending on the level of uncertainty encapsulated in the considered sets of lotteries. Thus, Formula (
29) predicts that, when there are no difficult choices, hence ν = 0, the measure should be zero. On the contrary, when all games involve difficult choices, and ν = 1, it equals 1/2. These predictions turn out to be surprisingly accurate when compared with empirical data.
We illustrated in detail the applicability of our approach to fourteen examples of highly uncertain lotteries, suggested by Kahneman and Tversky [
54], including lotteries with both gains and losses. Calculations are done with a linear utility function, whose advantage is its universality with respect to the units in which payoffs are measured. The form of the utility factor used corresponds to decision making under uncertainty. Then, we extended the consideration by analyzing 91 more problems of binary options, administered in two sessions [
130]. Taking into account the number of subjects involved, the total number of analyzed choices is about 27,250. The irrationality measure demonstrates that it serves as a convenient tool for quantifying the deviation from rationality in decision making, as well as characterizing the level of uncertainty in the considered set of lotteries.
Theoretical predictions for the irrationality measure have been found in perfect agreement with the observed empirical data.
Finally, the reader could ask whether quantum theory is needed after all. Indeed, one could formulate the whole approach by just postulating the basic features of the considered quantities and the main rules for calculating the probabilities, without mentioning quantum theory. One can indeed always replace derived properties by postulates and exploit them further. However, such a "theory" would then be a collection of numerous postulates and axioms, whose origin and meaning would be unclear. In contrast, in our approach, the basic properties have been derived from the general definition of quantum probabilities. Thus, the structure of the quantum probability as the sum p = f + q is not a postulate, but the consequence of calculating the probability according to quantum rules. The meaning of
f, as the classical probability representing the utility of a prospect, follows from the quantum-classical correspondence principle. Conditions showing when
q is to be nonzero can be found from the underlying quantum techniques [
21,
24]. Thus, instead of formally fixing a number of postulates with a not always clear origin, we derive the main facts of our approach from the general rules of quantum theory. In a deep sense, anchoring our theory of decision making and its structure on the rigid laws of quantum theory makes the approach more logical. It is also possible to mention that theories based on a smaller number of postulates, as compared to those based on a larger number, are termed more "beautiful" [
131].