Article

Derivation and Application of the Subjective–Objective Probability Relationship from Entropy: The Entropy Decision Risk Model (EDRM)

by Thomas Monroe 1,*, Mario Beruvides 1 and Víctor Tercero-Gómez 2
1 Department of Industrial, Manufacturing and Systems Engineering, Texas Tech University, Lubbock, TX 79409, USA
2 School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey, Mexico
* Author to whom correspondence should be addressed.
Systems 2020, 8(4), 46; https://doi.org/10.3390/systems8040046
Submission received: 2 September 2020 / Revised: 16 November 2020 / Accepted: 17 November 2020 / Published: 20 November 2020

Abstract: The uncertainty, or entropy, of an atom of an ideal gas being in a certain energy state mirrors the way people perceive uncertainty in the making of decisions, uncertainty that is related to unmeasurable subjective probability. It is well established that subjects evaluate risk decisions involving uncertain choices using subjective probability rather than objective, which is usually calculated using empirically derived decision weights, such as those described in Prospect Theory; however, an exact objective–subjective probability relationship can be derived from statistical mechanics and information theory using Kullback–Leibler entropy divergence. The resulting Entropy Decision Risk Model (EDRM) is based upon proximity or nearness to a state and is predictive rather than descriptive. A priori EDRM, without factors or corrections, accurately aligns with the results of prior decision making under uncertainty (DMUU) studies, including Prospect Theory and others. This research is a first step towards the broader effort of quantifying financial, programmatic, and safety risk decisions in fungible terms, which applies proximity (i.e., subjective probability) with power utility to evaluate choice preference of gains, losses, and mixtures of the two in terms of a new parameter referred to as Prospect. To facilitate evaluation of the EDRM against prior studies reported in terms of the percentage of subjects selecting a choice, the Percentage Evaluation Model (PEM) is introduced to convert choice value results into subject response percentages, thereby permitting direct comparison of a utility model for the first time.

1. Introduction

An executive is presented with an engineering risk analysis for a critical decision that involves a potential for loss of life for a failure mode that is highly unlikely and has no history of prior failure, a high-consequence, low-probability event that would take years and tens of millions of dollars to mitigate; however, the system under consideration itself is a safety system that provides mitigation for other Black Swan events, so its unavailability adds to risk in other interconnected areas. The executive chooses to accept the risk in spite of the grave prediction by the system’s engineers. In another example, an individual chooses to buy insurance for their property, but at the same time buys lottery tickets despite the overwhelming odds against success, seemingly a contradiction. In yet another case, a financial manager is presented with the results of a value at risk analysis from the company’s risk management team for a transaction and chooses to go against their recommendation and make the trade based upon instinct. Such situations easily lead one to conclude that subjects appear irrational when it comes to making probabilistic choices; however, there is a clear pattern to these decisions.
These scenarios illustrate that people make decisions contrary to normative risk theories that quantify risk purely in economic or monetary terms, such as expected utility and the expected value rule, making quantification of risk in terms consistent with actual decision making elusive. At the choice level, this has been well studied as positive decision theory (i.e., Prospect Theory), which is replete with descriptive, but not predictive, models based upon various studies, most of which involve measurable objective probabilities and nominal, narrow ranges of values. This proven difference between how subjects should and do make decisions must be reconciled before risk can be universally quantified in monetary terms, as the risk value is based upon the perception of a decision maker.

1.1. A New Approach

The uncertainty, or entropy, of a single atom of an ideal gas being in a certain energy state mirrors the way people perceive uncertainty in the making of decisions, uncertainty that is related to unmeasurable subjective probability. The sense that the workings of the physical world are replicated in the making of choices has been, and continues to be, investigated by many great minds. One such luminary is John von Neumann, who formalized quantum mechanics and game theory as he sought resolution of the contradiction between the perceived macroscopic world and unmeasurable parameters in microscopic quantum mechanics [1]. It is this premise that provides the starting point for the present research, for the difference between the macro and microscopic views provides the relationship of objective and subjective probabilities that helps resolve the conflict between how decisions are supposed to be made and how people actually make them. The results of this new approach are profound. Without factors or corrections, the proposed model nearly perfectly predicts Tversky and Kahneman’s Cumulative Prospect Theory results. This approach also addresses a nagging question of the true nature of the decision weighting factor, which has been stated not to be a probability. This research shows that the decision weighting factor is subjective probability and that it does not necessarily need to sum to 1 for a system, as is the case for objective probabilities.
A second outcome of this new approach, which supports validation of the first, is a method to directly compare the model results for choices with the subject percent responses for the first time. All of the research reviewed to date merely evaluates whether the predicted choices match the actual ones, and is unable to evaluate the degree to which the model matches the data beyond binary comparison. The new percentage evaluation model also provides a measure of the relative difficulty of a decision between two or more choices.

1.2. Objectives

This paper takes the first step towards expressing the prospect of choices in terms consistent with positive decision theories, rather than with the standard expected value definition of risk [2,3,4]. As a result of articulating choices in terms of prospect of an outcome, rather than probability of success or failure, the Entropy Decision Risk Model (EDRM) is essentially a translation of probabilities between the positive and normative domains, as shown in Figure 1. This ability to translate between domains permits the expression of risk consistent with decision making and allows for risk estimates to be translated back into probabilities and values from prospects. It has been asserted that subjects without training do not intuitively understand probabilities [5,6,7], but the value of expected utility theory is well established as the foundation of economic and risk analysis, so reconciliation is required. Other related research has shown that these two systems (normative and positive) are explained by dual process theory’s system 1 (intuitive thinking) and system 2 (deliberate thinking) [5,7]. This research suggests that the more complex the choice (e.g., multi-state and mixed gains/losses versus single-state gains), the better the agreement with a priori EDRM’s uncorrected models, ostensibly owing to intuitive system 1 processes. This concept of aligning positive decision theories to system 1 and normative utility theories to system 2 appears consistent with recent work in the field [8]. The results of this research show that EDRM effectively translates between the two domains and consistently predicts subject results in terms of state subjective probabilities when provided objective probabilities, which sets aside the long-held contention that people do not understand probability.

1.3. Definitions

It is necessary to state new working definitions for terms used within the model consistent with their origin and application.
Relative Certainty ($p$): Equivalent to redundancy (information theory), given as one minus the relative entropy as a function of the state probability, denoted by the lower-case $p$ consistent with the classical definition of objective probability (see Section 4.3 and Appendix A). The term relative certainty is more descriptive of the use herein than is redundancy, and is lexically consistent with its derivation from relative entropy.
Proximity ($\tau$): Subjective probability representing the nearness to a state and a function of the relative certainty, denoted by the Greek letter $\tau$. Proximity increases from 0 to 1 monotonically with relative certainty as nearness to a given state, with 0 implying no relation to the state and 1 that of achieving the state. Proximity and relative certainty are related as follows (see Appendix A):
$$p(\tau) = \tau^2 - \tau^2 \ln \tau^2\,.$$
Prospect ($T$): The product of magnitude and proximity as a function of relative certainty; an extensive property. Prospect can also be seen as a weighted uncertainty of an outcome (see Section 4.6.4).
Risk: As stated in ISO 31000, “the effect of uncertainty on objectives,” [9]. In the context of this definition, prospect is a relative measure of risk; the greater the prospect of a choice, the lower the risk of achieving the desired objective, whether it is avoiding loss or achieving a gain. The ISO definition of risk is not widely applied, as most risk analyses are performed using expected values in a probabilistic risk assessment [10].
Reasonable Decision: Selection of a choice which increases the prospect of attaining an objective or end state; selection of the choice with the greatest prospect. To clarify terminologies, this paper will make use of the term reasonable, versus rational, to draw distinction from normative decision theories, like VNM utility. Highlighting this distinction, Charles Tapiero suggests an alternative rationality, and Dan Ariely similarly offers the concept of predictable irrationality, for choices by otherwise rational individuals that follow clear patterns which do not align with results as predicted by homo economicus (i.e., utility theory) [11,12,13]. It is interesting that all the literature reviewed appears consistent on this point and is careful not to redefine rationality in positivist terms; therefore, this research will treat the concept similarly.

2. Literature Review

Prospect Theory and Cumulative Prospect Theory provide the basic behavior theory evaluated for comparison in this research. In Prospect Theory: An Analysis of Decision Under Risk, Daniel Kahneman and Amos Tversky built upon the work of Markowitz and Allais to firmly establish a theory that addresses weaknesses in the venerable expected utility theory; their hypothetical decision weight curve is shown in Figure 2. Prospect Theory (PT) is based upon a critique of Daniel Bernoulli’s 1738 wealth-based utility theory by highlighting its contradictions and weaknesses in explaining discrete choices under risk which are based upon changes in wealth, rather than final wealth [14]. Markowitz recognizes that subjects do not necessarily perceive gains and losses referenced to initial wealth and, in his landmark paper The Utility of Wealth, he refers to a neutral reference point, the point of inflection, as the “customary wealth” [15]. Lacking a reference point, Kahneman considers Bernoulli’s model overly simple [5,16]. Prospect Theory’s most important finding is that people are risk averse in the presence of gains and risk seeking in the presence of loss. Kahneman and Tversky approached PT in two domains: positive and negative; seemingly, a model which naturally accounts for both domains would surely be preferable.
Thirteen years after Prospect Theory, Cumulative Prospect Theory (CPT) was introduced as “a new version of prospect theory that incorporates the cumulative function and extends the theory to uncertain as well as risky prospects with any number of outcomes” [17]. The updated model holds for a number of phenomena that violate expected utility and traditional von Neumann-Morgenstern (VNM) rationality, including framing effects, nonlinear preferences, source dependence, risk seeking, and loss aversion. The CPT decision weighting factor shown in Figure 3 varies between 0 and 1, but the authors state that it is not a probability; however, this research will demonstrate that it is a probability, specifically the probability of being in a specific state. CPT is initially modeled as positive (gain) and negative (loss) cumulative weighting functions that are empirically developed and then fit using regression to yield the following relationships ([17], Equation (6)):
$$w^+(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}\,,\qquad (1)$$
$$w^-(p) = \frac{p^{\delta}}{\left(p^{\delta} + (1-p)^{\delta}\right)^{1/\delta}}\,.\qquad (2)$$
Tversky and Kahneman are careful to critique the limitations and concerns within their model. They acknowledge that it provides greater generality than Prospect Theory, but they also express reservation over the accuracy and sensitivity of the decision weights based upon the data. They also recognize the challenges of maintaining simplicity in an empirically derived model while striving for better fit [17]. Therefore, it is clear that other mathematical models which fit the data and are within the constraints of CPT would be considered valid.
In PT and CPT, Kahneman and Tversky assume an exponential value function (power utility),
$$v(x) = \begin{cases} x^{\alpha} & \text{if } x \geq 0 \\ -\lambda(-x)^{\alpha} & \text{if } x < 0 \end{cases}\,,\qquad (3)$$
where α is positive and less than or equal to 1, and λ is positive and greater than or equal to 1 to account for loss aversion, where losses loom larger than gains; however, this research assumes that loss aversion, while present, is a secondary effect and will set λ = 1 for all analyses (the validity of this assumption is proven in Section 6). This initial assumption is important in establishing the idea that gains and losses are contiguous on the same scale, rather than treated separately as they are under PT. In its original form, this relationship allows for different power utility exponents for positive and negative values; however, because CPT and several subsequent studies assign the same value to the gain and loss exponent, we will do so here [17].
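For concreteness, Equations (1)–(3) are straightforward to compute directly; the following minimal R sketch (R being the language used for this research’s numerical work) uses the median parameter estimates reported in CPT (γ = 0.61, δ = 0.69, α = 0.88, λ = 2.25) purely for illustration; recall that the present analyses set λ = 1.

# CPT probability weighting (Equations (1) and (2))
w_cpt <- function(p, g) p^g / (p^g + (1 - p)^g)^(1 / g)
w_plus  <- function(p) w_cpt(p, 0.61)   # gains, gamma = 0.61
w_minus <- function(p) w_cpt(p, 0.69)   # losses, delta = 0.69

# power utility value function (Equation (3))
v <- function(x, alpha = 0.88, lambda = 2.25) {
  ifelse(x >= 0, x^alpha, -lambda * (-x)^alpha)
}

w_plus(0.10)   # ~0.19: small probabilities are overweighted
w_plus(0.95)   # ~0.79: large probabilities are underweighted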
Much of the literature reviewed discusses positive decision theory in terms of rank order utility and first and second order stochastic dominance; however, because this research approaches the modeling of decisions from another perspective, consistency with prior research results will be considered sufficient for generally aligning with these principles. Future research to axiomatically analyze the model is intended.
Similar to that proposed by Uday Karmarkar [18], Richard Gonzalez and George Wu provided a descriptive model, based upon that suggested by Tversky and Kahneman in Equations (1) and (2), which is built upon the logit, or logarithm of the odds (log-odds), which is actually the negative derivative of the two-state information theory entropy. This relation to entropy is not discussed in their paper, but conceptually it makes the convergence of the models all the more supportive of the underlying approach taken in developing EDRM:
$$\log\frac{p}{1-p} = -\frac{d}{dp}\left[-p\log p - (1-p)\log(1-p)\right] = -\frac{d}{dp}H(p,\,1-p)\,.$$
The steps are shown below [18,19] 1:
$$\log\frac{w(p)}{1-w(p)} = \gamma\log\frac{p}{1-p} + \tau\,.$$
Solving for $w(p)$, they obtain
$$w(p) = \frac{\delta p^{\gamma}}{\delta p^{\gamma} + (1-p)^{\gamma}}\,,$$
where $\delta = e^{\tau}$. This model of $w(p)$ differs slightly from Tversky and Kahneman but achieves similar results [19]. Also noteworthy is that this equation is nearly identical to that used earlier for the weighting function by John Quiggin in his paper, A theory of anticipated utility ([20], Equation (1)).
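The linear-in-log-odds property is easy to verify in code; the following R sketch uses arbitrary illustrative values of the curvature (γ) and elevation (δ) parameters, not estimates from [19]:

# linear-in-log-odds weighting (Gonzalez and Wu form)
w_glw <- function(p, g, d) d * p^g / (d * p^g + (1 - p)^g)

# defining property: log-odds of w are linear in log-odds of p,
#   log(w/(1-w)) = g * log(p/(1-p)) + log(d)
p <- 0.3
w <- w_glw(p, g = 0.5, d = 0.8)
log(w / (1 - w))                    # ~-0.647, equals...
0.5 * log(p / (1 - p)) + log(0.8)   # ...this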
Additionally, for comparison, R. Duncan Luce et al., in Utility of Gambling II, presented the entropy-modified expected utility model, shown here with their Equations (7) and (8) combined, where A is defined as a constant [21].
$$U(g[n]) = \sum_{i=1}^{n} U(x_i)\,p_i - A\sum_{i=1}^{n} p_i\log_2 p_i\,.$$
If A were equal to $U(x_i)\,p_i$, then this relationship would be in the same form as Equation (A5), which approximates EDRM, lending additional credence to the approach taken by the present research.

3. Method

As an answer to the questions of predictive versus descriptive behavioral models and subject understanding of probabilities, two hypotheses are evaluated:
Hypothesis 1.
An entropy-derived decision model can be developed a priori to predict the results of Prospect Theory and other positive behavior theory studies;
Hypothesis 2.
Contrary to long-held assumptions based upon objective probabilities, subjects do understand and make decisions based upon corresponding subjective probabilities.
Starting with an assumption that subjects understand choice in terms of subjective, rather than objective, probabilities (Hypothesis 2), a qualitative research methodology is used to synthesize philosophical and foundational works in the fields of risk, entropy, and DMUU to develop the predictive EDRM. Performance of the EDRM against numerous prior studies will be used as model validation. Specifically, the EDRM will be evaluated against results reported in six studies by Allais, Kahneman and Tversky, and Wu and Markle. None of the studies involved actual financial loss/reward to subjects, except a small subset of one study, making them consistent with the risk decisions made in bureaucratic organizations where personal consequence is limited [22]. In addition to comparing the binary choice results (matching: yes or no), where appropriate, calculated prospect values are translated into percentages representing the fraction of subjects selecting a choice using the Percentage Evaluation Model (PEM) for direct comparison with prior research results. When applicable, statistical analysis will be performed by evaluating the coefficient of determination ($R^2$) or Spearman’s rank correlation coefficient (Rho) and through a design of experiments methodology using ANOVA at a standard 5% significance level, implemented as algorithms in R without transformations. Assumptions of independence and constant variance can be presumed unless stated otherwise; normality will be confirmed by use of the Shapiro–Wilk test at 5% significance. In a departure from most prior studies in this field, and as supported by Wakker and Zank [23], it is initially assumed that there is no difference between gains and losses other than the sign of the magnitude; any differences are considered as higher-order effects. A flowchart illustrating the present research is provided in Figure 4.
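For illustration, the analysis plan described above maps onto standard R calls as follows; the data frame df and its columns are hypothetical placeholders, not data from this study:

# hypothetical placeholder data: observed subject percentages vs. model predictions
set.seed(1)
df <- data.frame(
  predicted   = runif(24),
  gamble_type = rep(c("gain", "loss", "mixed"), each = 8)
)
df$observed <- df$predicted + rnorm(24, sd = 0.05)

fit <- aov(observed ~ predicted * gamble_type, data = df)  # DOE-style ANOVA, no transformations
summary(fit)                                   # effects judged at the 5% significance level
shapiro.test(residuals(fit))                   # Shapiro-Wilk normality check (5% significance)
summary(lm(observed ~ predicted, data = df))$r.squared   # coefficient of determination R^2
cor(df$observed, df$predicted, method = "spearman")      # Spearman's rank correlation (Rho)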

4. Derivation of EDRM: Theoretical Framework

The derivation of EDRM consists of two major sections: philosophical and mathematical. Because EDRM is derived from basic theory to predict results of subject choice behavior rather than presenting a descriptive a posteriori model that best fits the data, a firm philosophical foundation is required to establish EDRM using behavior theory, statistical mechanics/information theory, and probability theory.

4.1. Foundation of Utility Theory

Jeremy Bentham (1748–1832) introduced the notion of utility, describing it as follows: “By utility, is meant that property of any object to produce benefit, advantage, pleasure, good, or happiness, (all this in the present case comes to the same thing) or (what comes again to the same thing) to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is concerned” [24]. The goal of the principle of utility is that people seek to maximize happiness (pleasure) and minimize unhappiness (pain) [25]. As pleasure and pain are scaled together, so too can gain and loss be considered as regions of the same measure. Stated slightly differently, Aristotle uses the phrase “pleasure or not without pleasure,” which is understood to be a framework wherein the greater value goes to the certainty of gains (pleasure) or the uncertainty of loss (not without pleasure) ([26], 1098b23), which is also consistent with the certainty effect from Prospect Theory [14]. It follows that a reasonable decision is one in which the prospect of happiness or pleasure is greatest.
In chapter 4 of An Introduction to the Principles of Morals and Legislation, Bentham identifies four primary factors, or circumstances, which define the value of utility, or Greatest Happiness Principle: intensity or magnitude, duration, certainty or uncertainty, proximity or remoteness [24]. In the context of this research, the first, third, and fourth factors are of greatest interest; time as a factor will be considered later. This research defines proximity as the nearness to a given state within a choice, which is also subjective probability. Daniel Ellsberg recognized the same three factors in the evaluation of a choice: the payoff, the relative likelihood, and the third, “the nature of one’s information concerning the relative likelihood of events,” which is understood here as knowledge of the proximity or nearness to a state [27]. A basic economic prospect model can be inferred as magnitude times proximity as a function of certainty, an idea further supported by Peter Wakker’s separation of risk aversion into factors of magnitude (marginal utility represented by power utility or expected utility) and proximity (cumulative probability transformation) [28].
Over the past 150 years, utility theory has been increasingly reduced to the seeking of monetary gain or economic satisfaction (ophelimity) which forms the fundamental disjointedness between how people should objectively make decisions and how they subjectively select among various choices. In their paper, Back to Bentham? Explorations in Expected Utility, Kahneman et al. draw the distinction between these two notions of utility as experienced utility, which aligns with Bentham and Mill, and decision utility or expected utility [29]. Kahneman’s conclusion is especially important in justifying this present research because it leaves room for cautiously reintroducing classical experienced utility into the field of economic decision utility, specifically in consumer rationality [29]. Kahneman and Thaler further explore the difference between decision utility and hedonistic experienced utility in their paper, Anomalies: Utility Maximization and Experienced Utility [30].

4.2. Entropy

Arieh Ben-Naim describes three definitions of entropy with different origins that all provide agreeable results: Clausius’ macro state definition (thermodynamic), Boltzmann’s micro state definition (statistical mechanics), and Shannon’s measure of information (SMI or information theory) [31]. Within this research, Boltzmann’s statistical mechanics entropy for the case of a non-equilibrium ideal gas and Shannon’s entropy will be used interchangeably in the context of choices, an action supported by Ben-Naim’s derivation of their equivalence and writings of the physicist Edwin Jaynes [31,32]. The remaining entropy definition, thermodynamic, will be introduced to draw out the relationship between subjective probabilities associated with Boltzmann/SMI and objective probabilities associated with the thermodynamic view where all states are equiprobable in thermal equilibrium, as shown in Boltzmann’s derivation [33].
The concept that entropy and uncertainty are synonymous, as concluded by Jaynes and Ben-Naim, is crucial to this research because human decision making is so strongly influenced by the presence of certainty and decisions lead to actions that locally create order out of disorder (e.g., build a house or write a book) [4,14,32]. Therefore, for the purposes of this research, the idea that people have the ability to conceive ideas and then decide to put them into action in the information or physical realms is reflected in the fact that, in general, people choose certainty over uncertainty for gains and the opposite for losses.
Information theory (SMI) and statistical mechanics hold the answer to quantifying decision uncertainty, with roots in Ludwig Boltzmann’s foundational paper on statistical mechanics [33]. John von Neumann, who also established Game Theory with Oskar Morgenstern, further tied together Boltzmann’s work and the work of other physicists, such as J. Willard Gibbs, in the Mathematical Foundations of Quantum Mechanics. von Neumann identified that there exists a “thermodynamic value of knowledge which consists of an alternative of two cases”: $k\ln 2$, the maximum entropy of a binary choice [1,34]. Claude Shannon, who reportedly consulted von Neumann while at Princeton, established information theory based upon this concept of entropy [35]. Shannon also defines two terms important for this research: relative entropy (entropy divided by the maximum entropy) and redundancy, which is one minus relative entropy.
Shannon considers information theory in terms of states and choices, forming a natural application to decision theory; however, only several of the numerous papers reviewed in the course of this research attempted to apply information theory to risk decisions. Nawrocki and Harding, in their paper, State-value weighted entropy as a measure of investment risk, make use of entropy’s extrinsic properties to weight the uncertainty of choices by their economic value or utility [36]. Yang and Qiu, in Normalized Expected Utility-Entropy Measure of Risk, apply an additive entropy term in an attempt to model Prospect Theory and introduce the concept of redundancy, but do not subsequently apply it [37]. Roman Belavkin, in Asymmetry of Risk and Value of Information, discusses many topics and even suggests application of entropy to Prospect Theory [38] and, in an earlier work, The Use of Entropy for Analysis and Control of Cognitive Models, Belavkin suggests the use of redundancy in estimating system accumulated information [39]. Even Tversky discussed information theory entropy as a measure of decision uncertainty in his paper, On the Optimal Number of Alternatives at a Choice Point, but this was not explored further [40]. Of all the relevant literature reviewed, none go so far as to directly apply redundancy as a measure of certainty to a decision model.
More recently, several papers work to apply various forms of entropy to decision making by individuals and organizations [41,42], but one is particularly interesting to the present research. In A Unified Theory of Human Judgement and Decision-Making under Uncertainty, Raffaele Pisano and Sandro Sozzo draw the conclusion that quantum theory (i.e., statistical mechanics) is representative of human cognition and that quantum state probability is subjective, which supports this research approach [43]. However, the authors avoid directly applying entropy and assume that the Born rule (or law) of quantum mechanics defines subjective probability as the square root of the objective probability. This research shows that the square root relationship between probabilities is a special case assuming very small state probabilities (see Section 4.6.2 and Appendix B).

4.3. Two Types of Probabilities

Throughout the literature, there appear two general categories of probabilities [44,45,46,47,48,49]: those which are objective and physically measurable, and those which are subjective, not directly measurable, and often correlated to various degrees with psychologistics, including beliefs, states of mind, logical proximity (from logical positivism), or judged probabilities [50]. In 1763, Rev. Thomas Bayes’ method of translating between probabilities of measurable events and their unmeasurable conditions was published posthumously and has since spawned an entire field of study [51]. Similarly, the goal of this research is to translate between what is directly measurable and what is not in the arena of behavior theory and risk, since the probabilities of risk events are usually posed in measurable objective terms, but positive behavior theory shows that subjects make risk choices differently. In all of the prior research reviewed, it was observed that probabilities provided to subjects were objective.
Building upon the prior discussion on entropy, there are two different, but related, types of probabilities contrasted by Roman Frigg based upon whether the problem is considered from a macro (temperature, pressure, volume) or micro state (energy state of a single atom): macro probabilities and micro probabilities [52]. In Probability Theory, Jaynes makes a clear delineation between subjective and objective probabilities. While probabilities that are subjective are merely descriptive of the knowledge of a specific state and are not physically measurable, objective probabilities can be physically measured and consider all states (ignoring none) and assume equivalent knowledge of each (i.e., equal probability to every possible state combination) [46]; for example, the probability of rolling any specific value on a fair six-sided die is objectively 1/6. This distinction precisely fits those of micro and macro probabilities of statistical mechanics and thermodynamics, respectively. When considered from a macroscopic or thermodynamic perspective, equilibrium entropy is based merely upon all possible combinations of all states and uses the classical definition of objective probability where the individual micro-state probabilities are not known and assume an equal probability, as calculated by the Boltzmann principle 2.
Micro states are subsets of the macro-region where a change in entropy is calculated for each state based upon knowledge of the micro-probability of being in that state and is not directly related to the state of other atoms or the measurable effects on the system; a neutral reference point, if you will, since it is only based upon knowledge of that state and not the system as a whole. Following an exhaustive comparison of these two types of probabilities, Frigg philosophically concludes, “There is no causal connection between knowledge and happenings in the world” [52]; an elegant contrast of micro (subjective) and macro (objective) probabilities. However, while not causal, Frigg proposes there exists a direct relationship where the macro state is a function of the system’s micro state at a given time. Similarly, EDRM functionally relates subjective and objective probabilities. Now, the final step in aligning definitions is to match proximity with subjective knowledge and micro-probability and relative certainty with objective or macro-probability, since the foundation of the EDRM derivation hinges upon the definitional connection between these terms, as shown in two generalized categories in Figure 5.
Likewise, logical positivism holds that there are two different types of probabilities: frequency and logical proximity. Frequency is definitionally objective probability, so it stands that logical proximity is synonymous with subjective probability. Friedrich Waismann, who worked closely with Ludwig Wittgenstein, introduced the term as “the logical proximity or deductive connection between propositions” [translated] [47,53]. Waismann’s terminology is especially helpful for this discussion because it is both a type of subjective probability and reinforces the use of the term proximity in this context. This assertion is further supported by Karl Popper, who ties logical proximity to psychologistic theory through Keynes’ degrees of rational belief, and it appears synonymous with his logic of knowledge terminology, logical relation [44,54]. Popper continues regarding subjective probability, “It treats the degree of probability as a measure of the feeling of certainty or uncertainty, or belief or doubt, which may be aroused in us by certain assertions or conjectures” [54]. Interestingly, George Shackle’s surprise-belief curves are largely founded upon Keynes’ degrees of rational belief, from which he deduced a relationship between potential surprise and belief which closely approximates CPT, with belief then being the subjective probability and surprise being the objective [55].
Therefore, micro probabilities are definitionally subjective probabilities, with psychologistical connections to knowledge and beliefs, and macro probabilities are equivalent to objective probabilities. Ideally, a relationship that defines the difference between macro and micro probabilities would be effective in translating between these two contexts and would provide an isomorphic framework for contrasting between normative and positive behavior theory. EDRM’s relationship between proximity and relative certainty provides such a solution and offers an explanation of the differences between the neutral reference points observed in positive behavior theories, such as PT and CPT, and the wealth-based utilities found in normative theories. Similar to the development of entropy over the past 300 years through differing perspectives of thermodynamics and statistical mechanics, which were brought together by Boltzmann’s H-theorem, behavioral economics has long considered the same problem of decision making from two differing perspectives.
To finalize the philosophical foundation for derivation of EDRM as a translation between subjective and objective probabilities, it must be shown that relative certainty (i.e., redundancy, which is one minus relative entropy) is an objective probability. Shannon defines relative entropy as the ratio of the entropy of a source, based upon knowledge of the probability of each state (subjective probabilities), divided by the maximum entropy, which assumes an equal probability for each state with no knowledge of a specific state (objective probability). Entropy itself contains no knowledge of a state and is ambiguous about probability, as illustrated by the state entropy plot in Figure A1, which shows two values of state probability for any value of state entropy, except at its maximum. Because entropy does not contain state knowledge and there are only two types of probabilities, relative certainty cannot be subjective and therefore is an objective probability.
Referring back to Bentham’s identification of certainty and proximity as distinct factors in the definition of utility, this research therefore understands that his statement is a clear acknowledgement that both objective and subjective probabilities must be evaluated. EDRM accounts for these factors and provides translation between them.

4.4. Entropy Decision Risk Model (EDRM) Framework

The EDRM is developed from the following observations derived from the prior philosophical discussion:
  • Certainty of gains and the uncertainty of losses are more highly valued;
  • Gains and losses are considered contiguously as two regions of the same scale;
  • Relative certainty, or redundancy, is one minus the relative entropy;
  • Proximity is represented by the subjective probability of reaching a state;
  • Prospect can be stated as magnitude times proximity as a function of relative certainty;
  • The choice with the greatest prospect, positive or negative, is preferred.

4.5. Choices and States

Shannon says that choices are made up of individual states [35]. Employing Problem 1 of Prospect Theory as a classical example, Choice A (2500, 0.33; 2400, 0.66) has three states: 2500, 2400, and zero, although the zero state is implied by the remaining probability (0.01) and is usually omitted in the notation. Choice B (2400, 1.0) has two states, 2400 and zero, although both of these states are certain. However, there is a problem. All these probabilities are objective rather than subjective (micro) probabilities, which reveals the fundamental weakness of the current risk management paradigm; this can be easily seen in the first example problem from PT [14]:
Choose between
A: 2500 with probability 0.33,
   2400 with probability 0.66,
   0 with probability 0.01;
B: 2400 with certainty.
Based upon expected value (probability times consequence), Choice A (2409) should be preferred over Choice B (2400); however, an overwhelming 82 percent of subjects selected Choice B, all because the wrong probability is used. This single example demonstrates the misalignment between risk modeling and human decision making, a discord that has ostensibly been accepted by the risk community to keep risk calculation simple for laypersons. Therefore, state probabilities must be subjective, with choices shown in Figure 6.
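The expected values quoted above follow from a one-line check in R:

sum(c(2500, 2400, 0) * c(0.33, 0.66, 0.01))   # Choice A: 2409
2400 * 1.0                                    # Choice B: 2400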
According to information theory, the SMI over all states i within choice j is [35]
$$H_j = -\sum_{i=1}^{m} \tau_{ij}\log_2\tau_{ij}\,.$$
The maximum possible entropy for any given choice occurs when $\tau_{ij} = 1/m$ for all $i$, which results in the well-known basic equation of maximum entropy 3, $H_{j\,max} = \log_2 m$.
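These relations can be checked directly in R; the probabilities below are the objective values from PT Problem 1, used only to illustrate the arithmetic:

# Shannon entropy (SMI) of a choice with state probabilities tau
H_choice <- function(tau) -sum(tau * log2(tau))

H_choice(c(0.33, 0.66, 0.01))   # PT Problem 1, Choice A: ~0.99 bits
H_choice(rep(1/3, 3))           # maximum for m = 3 equiprobable states: log2(3) ~ 1.585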
Although there is a recognition that uncertainty’s effect (i.e., entropy) on outcomes may be used in a definition of risk, as stated in ISO 31000, there is no discussion of how to apply the uncertainty of approaching one of two states (failure or no failure) to a risk model [9,56]. To resolve the expected value inconsistency shown above in PT problem 1 and to incorporate the concept of uncertainty, the EDRM is proposed.

4.6. Prospect

The derivation of prospect requires application of statistical mechanics and information theory placed in the context of DMUU. To provide distinction between the two types of probabilities, $\tau$ and $p$ are used for proximity (subjective probability) and relative certainty (objective probability), respectively. Prospect is identified with the Greek letter $T$ (uppercase tau) 4. Proximity $\tau(p)$ and the CPT weighting factor $w(p)$ are generally synonymous, except that Tversky and Kahneman explicitly state that the CPT weighting factor and PT decision weight $\pi(p)$ are not probabilities, ostensibly because individually they do not necessarily sum to 1 within a choice as only objective probabilities are assumed; this behavior is, however, indicative of additive subjective probabilities. Prospect for a given state is equivalent to its certainty equivalent (CE), the value obtained with 100 percent probability (certainty) of the non-zero state. For example, under EDRM the CE for (USD 1000, 0.5) is USD 432, which will be shown to be consistent with the results of CPT.

4.6.1. Derivation of Proximity from Information Theory Entropy (SMI) and Statistical Mechanics

The basic relationship between proximity ( τ ) and relative certainty ( p ) is the foundation of EDRM and is derived by taking the entropy divergence of a single state, which is fully presented in Appendix A:
$$p(\tau) = \tau^2 - \tau^2\ln\tau^2 = \tau^2\ln\!\left(\frac{e}{\tau^2}\right)\,.\qquad (8)$$
The inverse of this equality, $\tau(p)$, is much more useful; however, Equation (8) is not invertible in closed form, so numerical methods in R and Excel are used to apply the model.
When plotted, Equation (8) yields the sigmoid curve shown in Figure 7.
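Because $p(\tau)$ is strictly increasing on (0, 1), the required inverse can be obtained with a one-dimensional root finder. A minimal R sketch, with function names of our choosing:

# Equation (8): relative certainty as a function of proximity
relative_certainty <- function(tau) tau^2 - tau^2 * log(tau^2)

# numerical inverse: proximity as a function of relative certainty
proximity <- function(p) {
  uniroot(function(tau) relative_certainty(tau) - p,
          interval = c(1e-12, 1), tol = 1e-12)$root
}

proximity(0.5)    # ~0.432, consistent with the USD 432 CE example above
proximity(0.95)   # ~0.84: large certainties are discounted
proximity(0.05)   # ~0.09: small certainties are amplified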

4.6.2. Very Small Probabilities

Although extremely small probabilities (less than about $1\times10^{-6}$) are not part of most behavioral economics studies, because EDRM is derived from basic theory it should generally be extensible to these cases. For very small values of relative certainty and proximity, the relationship between the two converges to an exponential factor. For a priori EDRM, a simple relationship between objective and subjective probabilities results for very small probabilities, which is consistent with the Born rule of quantum mechanics (see the full derivation in Appendix B):
$$\tau = \sqrt{p}\,.\qquad (9)$$
To illustrate, given an objective probability of $3.3\times10^{-9}$, as could be found in the Powerball lottery grand prize, the proximity is the square root, $5.8\times10^{-5}$, an increase of greater than 10,000 times that is perhaps consistent with the popularity of lotteries despite the poor odds of winning.

4.6.3. Inflection and Preference Reversal Points

Many studies compare the alignment of descriptive models to CPT based upon the point of inflection, where the shape shifts from concave to convex, and the crossover point or preference reversal point. Drazen Prelec, in The Probability Weighting Function, developed a similar relationship that forms a curve like EDRM’s and has a combined inflection and crossover point, $w(p) = p$, at $p = 1/e$ [57] 5:
$$w(p) = e^{-(-\ln p)^{\alpha}}\,.\qquad (10)$$
This is of particular interest because the basic entropy term, $-\tau_i\log_2\tau_i$, has its maximum of $1/(e\ln 2)$ at $\tau = 1/e$, which aligns with an inflection point at $p = 3/e^2 = 0.4060$ and is highly consistent with the conclusions of Wu and Gonzalez, who validated prior studies to confirm that the inflection point of the weighting function is at about 0.40 [58].
The EDRM preference reversal point naturally occurs at $\tau(p) = p = 0.2847$, as shown in Figure 7, which appears to more closely correspond to Tversky and Kahneman’s reported data than their proposed descriptive model and other follow-on studies; it is shown superimposed upon their actual plot (Figure 3) in Figure 8 [17,19]. To aid in visual assessment, including the preference reversal point, a 5th order polynomial trendline (orange dashed line) is shown nearly overlapping the predicted results (black line). Statistical analysis of the uncorrected model performance is provided in Appendix C.3. Lichtenstein and Slovic reported reversal in three experiments with the following results: 0.295, 0.315, and 0.270, which average to 0.293 [59]. In another preference reversal study by Tversky, Sattath, and Slovic, they reported a similar value for preference reversal of 0.28 [60]. These results are all consistent with the predicted EDRM preference reversal point.
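Both critical points can be checked numerically; a short, self-contained R sketch (redefining the inverse from Section 4.6.1 so the snippet stands alone):

relative_certainty <- function(tau) tau^2 - tau^2 * log(tau^2)
proximity <- function(p) uniroot(function(tau) relative_certainty(tau) - p,
                                 c(1e-12, 1), tol = 1e-12)$root

# the entropy term -tau*log2(tau) peaks at tau = 1/e
optimize(function(t) -t * log2(t), c(1e-6, 1), maximum = TRUE)$maximum  # ~0.368 = 1/e

relative_certainty(exp(-1))                                 # ~0.4060 = 3/e^2, inflection
uniroot(function(p) proximity(p) - p, c(0.05, 0.95))$root   # ~0.2847, preference reversal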

4.6.4. Calculating Prospect of a Choice

As defined, prospect is magnitude times proximity as a function of relative certainty; for state i within choice j, it is calculated from Equations (3) and (A4) and expressed as
$$T_{ij} = v_{ij}\,\tau_{ij}\,.\qquad (11)$$
The prospect of a choice of $m$ states is given by
$$T_j = \sum_{i=1}^{m} v_{ij}\,\tau_{ij}\,.\qquad (12)$$
The preferred choice is that with the greatest (most positive or least negative) value of $T_j$, whether the various values $v_{ij}$ are all positive (gains), all negative (losses), or a mixture of the two. The default value function will use a standard exponent of $\alpha = 0.88$ for the power utility.
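Putting Equations (3), (8), (11), and (12) together, the preference in PT Problem 1 (Section 4.5) can be computed end to end. A minimal R sketch, with our own function names and λ = 1, which should reproduce the preference for Choice B reported there:

relative_certainty <- function(tau) tau^2 - tau^2 * log(tau^2)
proximity <- function(p) uniroot(function(tau) relative_certainty(tau) - p,
                                 c(1e-12, 1), tol = 1e-12)$root

# prospect of a choice: sum of power-utility values times proximities
prospect <- function(values, probs, alpha = 0.88) {
  tau <- vapply(probs, proximity, numeric(1))
  sum(sign(values) * abs(values)^alpha * tau)   # Equation (3) with lambda = 1
}

prospect(c(2500, 2400), c(0.33, 0.66))   # Choice A: ~826
prospect(2400, 1.0)                      # Choice B: ~943, so B is preferred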
Indifference plots graphically represent all possible combinations of a three-state choice $(x_1, p_1; x_2, p_2; x_3, p_3)$ for a given decision curve. By convention, the objective probabilities $p_1$ and $p_3$ are on the axes; $p_2$ is inferred as $1 - p_1 - p_3$ and lies along the diagonal from the origin. Using Tversky and Kahneman’s example, the corners represent the three outcomes (states): $x_1 = 0$, $x_2 = 100$, and $x_3 = 200$ [17]. Other values can be used, including negative values and mixes of positive and negative. The contour lines depict equal prospects. Some authors portray indifference plots in equilateral triangles, but to remain consistent, this research will use the format reported in CPT. The uncorrected EDRM indifference plot is shown in Figure 9 in comparison with those originally reported by Tversky and Kahneman, indicating close alignment between EDRM and original CPT.

4.6.5. Applying a Proximity Exponent ( β ) to the Prospect of a Choice

Although the focus of this paper is on validating a priori EDRM without factors or corrections, it is appropriate to note how a factor would be applied and what effect it would have on the results. Equation (11) can be modified, as discussed in Appendix B, by expanding the application of $\beta$ to proximity in general for all values, not just the very small,
$$T_{ij} = v_{ij}\,\tau_{ij}^{\beta}\,,\qquad (13)$$
and merged with power utility in Equation (3) yields
$$T_{ij} = \begin{cases} x_{ij}^{\alpha}\,\tau_{ij}^{\beta} & \text{if } x \geq 0 \\ -\lambda(-x_{ij})^{\alpha}\,\tau_{ij}^{\beta} & \text{if } x < 0 \end{cases}\,.\qquad (14)$$
To illustrate the ability to model a wide range of prospect curves, proximity for various values of $\beta$ is shown in Figure 10. For values of $\beta < 2$, the preference reversal point shifts along the identity line; reversal is at 0.5 when $\beta = 0.8560$. For $\beta \geq 2$, proximity is always less than relative certainty, so there is no preference reversal. At the extremes, proximity is 1 for $\beta = 0$ and tends to 0 as $\beta \to \infty$. The loss aversion factor, $\lambda$, is assumed to be 1 throughout this research, which is validated in the analysis presented in Section 6.
The studies used for comparison will assume a natural value of β = 1 to validate the a priori relationship; comparison of other studies with varying values of β will be considered in subsequent research.

5. EDRM Validation (Without Application of Any Factors or Corrections, β = 1 )

Validation of the various versions of EDRM is done using data reported in prior studies and assumes that all reported choice decisions are reasonable decisions, as previously defined. The consistency of the data varies based upon the specific study and the number of subjects, which fluctuates between ten and several hundred. As this research will not replicate prior studies, the specifics of how choices were presented to subjects will not be discussed unless necessary to explain results, such as in the CPT analysis which reports certainty equivalent values derived from subject responses rather than the responses themselves. None of the studies involve actual financial loss or reward to the subjects, except a subset of one study (Wu and Markle), making them generally consistent with the bureaucratic risk decision systems under consideration, although subjects were sometimes compensated for participating in the study.

5.1. The Percentage Evaluation Model (PEM)

Most studies reviewed report results in terms of the fraction of subjects selecting between alternatives, so results must be converted to enable direct performance comparison with prior works, beyond that of merely evaluating the binary results (i.e., do they match?); however, literature reviews did not identify any such method for directly comparing value results with frequency of subjects selecting an alternative. The PEM is presented as a tool for conducting this evaluation and may be useful for comparing values with subject percentages in other research. Additionally, the difference in percentages reported by PEM can be evaluated as the choice difficulty, where a small difference represents a difficult choice.
While a straightforward ratio of prospect values might appear to work for pairs of gains or losses, it does not suffice for mixed gambles nor does it capture subject perception. This research proposes use of the natural shape of inverse hyperbolic sine over the range of possible positive and negative values to compute a relative percentage that is consistent with subject responses based upon the calculated values of prospect.
The challenge is to develop a scale that both respects the difference between the prospects and is referenced to the absolute values of the minimum and maximum possible values from the two choices. The solution is to use the inverse hyperbolic sine of the difference in the numerator and the difference of the asinh of the maximum and minimum values in the denominator. The maximum and minimum functions are both referenced to zero, such that the minimum value is never greater than zero and the maximum is never less than zero. Since the inverse hyperbolic sine is logarithmic, this approach is compatible with the Weber-Fechner law for human perception (psychophysics). To further support this approach, it is already well established that economic decision theory is closely related to the field of psychophysics, of which Daniel Bernoulli is considered the inventor [5,61]. This relationship is given by,
$$\mathrm{Choice\;A\,\%} = 50\% + 50\%\,\frac{\operatorname{asinh}\!\left(\dfrac{X_A - X_B}{2}\right)}{\operatorname{asinh}(\max(\mathrm{Max\;Value},\,0)) - \operatorname{asinh}(\min(\mathrm{Min\;Value},\,0))}\,.\qquad (15)$$
Figure 11 graphically represents the development of Equation (15). To enable comparison of prospects to maximums and minimums in cases where power utility was applied to the state prospects, the inverse of the function (e.g., $T_A^{1/0.88}$) must be applied to calculate the corrected choice prospect ($X_A$) to undo this effect, similar to that performed by Bernoulli in his discussion of expected utility. As PEM is calculated only from the prospects, it is independent of the binary matching results.
One interesting special case must be considered in the evaluation model, that of dominance [62]. When comparing two choices of an equal number of states, each with an identical probability set, dominance exists when the value of every state of one choice is equal to or greater than that of the pairwise values of the second choice, with at least one of those values larger than its mate. Problem 4 from Framing of Decisions and the Psychology of Choice is provided as an example: Choice A (240, 0.25; −760, 0.75), Choice B (250, 0.25; −750, 0.75). Since the probability sets are equal, and because 250 > 240 and −750 > −760, Choice B is necessarily preferred to Choice A and the outcome is insensitive to changes in α or β. In the references reviewed, Tversky and Kahneman report subject preferences for dominance problems as 100% for the greater choice, indicating that subjects are adept at detecting dominance [62,63]. While the EDRM prospects will predict the correct binary result in this case, Equation (15) may not accurately predict percentages because the prospects are often nearly equal; however, if there is even a small difference in probabilities, then this effect is not present and the evaluation model proves quite accurate, as demonstrated in Section 5.6. Therefore, when dominance is present, the percentage of the choice with the greater prospect will be 100%; the lesser will be 0%.
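A minimal R sketch of Equation (15) follows (function and argument names are ours; the max and min are taken over all state values of both choices and referenced to zero as described above; the dominance override is omitted for brevity):

# PEM: convert a pair of choice prospects into a predicted subject percentage
pem_percent <- function(TA, TB, state_values, alpha = 0.88) {
  # undo the power utility applied to the state prospects (Section 5.1)
  XA <- sign(TA) * abs(TA)^(1 / alpha)
  XB <- sign(TB) * abs(TB)^(1 / alpha)
  hi <- asinh(max(max(state_values), 0))
  lo <- asinh(min(min(state_values), 0))
  0.5 + 0.5 * asinh((XA - XB) / 2) / (hi - lo)
}

# PT Problem 1 prospects from Section 4.6.4: T_A ~826, T_B ~943
pem_percent(826, 943, c(2500, 2400, 0))   # ~0.16 choosing A (PT reported 18%)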
The proposed evaluation model used for validating EDRM itself requires assurance that it consistently and accurately translates between prospects and percentages. Since the evaluation model draws its validity from the very data it is used to evaluate, the following set of credible and objective criteria are established as a standard:
  • Varies monotonically with the difference in prospect between choices;
  • Scaled by the range, positive and negative, of values being evaluated in a given choice;
  • Accounts for non-linearities of human perception;
  • Equitably reports subject percentages for choices involving gains, losses, or mixtures of the two;
  • Performs consistently across a range of studies (not tuned to a specific set of research).
Criteria 1 through 3 are met by definition, as previously discussed. Criteria 4 and 5 are met through analysis of eight related studies conducted by different researchers, all of which were analyzed using matching binary results with optimized values of the exponent parameters $\alpha$ and $\beta$ [14,58,62,63,64,65,66,67]. Table A1 in Appendix C.1 summarizes this analysis and affirms the consistency of PEM performance throughout this research, with an $R^2$ of 0.80. Specifically, despite the presence of gain, loss, and mixed choices (criterion 4) and the myriad sources of the surveys (criterion 5), there is no statistical significance independently or in their interactions. Therefore, it is reasonable to conclude that this evaluation model is adequate for translating between prospects and subject response percentages.

5.2. Allais Paradox

As a foundation of DMUU, agreement with the Allais Paradox is an imperative for validation of EDRM, as shown in Table 1. EDRM correctly predicts results for the paradox, as posed by Allais, as well as other variants embedded within subsequent research. No actual results showing subject preference percentages were shown in his paper; however, the calculated percentages predict nearly all would agree with the choices.
Maurice Allais, in his 1988 Nobel Lecture, referred to the VNM utility as the “neo-Bernoullian utility index” and critically refuted it as “unacceptable because it amounts to neglecting the probability distribution of psychological values around their mean” [68], which was consistent with research by Harry Markowitz and points to use of subjective probabilities. To demonstrate the fundamental weakness of utility theory in predicting subject choice, Allais offered the Allais Paradox in his paper, Le Comportement de l’Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l’Ecole Americaine [69]. The paradox, cited below from Mark Machina and differing slightly from Allais’ original in currency and magnitude (1 USD = 100 Franc) for ease of transcription, consists of two pairs of gambles, $a_1, a_2$ and $a_3, a_4$. Subjects usually select $a_1$ and $a_3$, contrary to results predicted by utility theory, which requires that subjects select choice $a_4$ after selecting $a_1$ [70]:
$$a_1: \left\{\; 1.00 \text{ chance of USD } 1{,}000{,}000 \right. \quad\text{versus}\quad a_2: \begin{cases} 0.10 \text{ chance of USD } 5{,}000{,}000 \\ 0.89 \text{ chance of USD } 1{,}000{,}000 \\ 0.01 \text{ chance of USD } 0 \end{cases}$$
and
$$a_3: \begin{cases} 0.10 \text{ chance of USD } 5{,}000{,}000 \\ 0.90 \text{ chance of USD } 0 \end{cases} \quad\text{versus}\quad a_4: \begin{cases} 0.11 \text{ chance of USD } 1{,}000{,}000 \\ 0.89 \text{ chance of USD } 0 \end{cases}\,.$$

5.3. Prospect Theory (Kahneman and Tversky)

As with the Allais Paradox, no positive decision model could make any claim to universality without predicting all results of Kahneman and Tversky’s hallmark work, Prospect Theory. EDRM accurately predicts all PT results, including the lottery and insurance problems (14 and 14′), which are usually characterized as large gambles where people tend to evaluate choices based upon the value of potential winnings alone without considering the probability, as is normal for small gambles [71].
The correlation between actual and predicted results is shown in Figure 12. The detailed results comparing the performance of EDRM against reported PT results are shown in Table 2.
The close alignment between EDRM and PT ($R^2 = 0.86$) with 100% matching (see Appendix C.2), as seen in Figure 12 and in the results reported by Kahneman and Tversky in Table 2, is striking, especially considering that no factors were applied to modify the shape of the proximity curve to match their results. The gamble type, whether gain or loss, has no statistical effect, which supports the assumption that there is no difference between the two and affirms Hypothesis 1.

5.4. Cumulative Prospect Theory

The weighting factor curve developed by Tversky and Kahneman serves as the foundation for many subsequent works seeking to apply it or to provide further validation. Therefore, for EDRM to be of value, it must accurately predict CPT results, beyond the general agreement between EDRM and CPT for shape and critical point agreement (inflection and preference reversal) demonstrated in Section 4.6.3.
By nature of the method employed by Tversky and Kahneman to derive the median certainty equivalent (CE) data from observed choices rather than portraying raw subject preference data, the use of a unity power utility factor ($\alpha = 1$) is warranted, i.e., the inverse power utility correction has already been applied. Figure 13 displays the difference between the actual CEs and the calculated prospects as reported in Table 3. The consistency of the CE difference is tighter for losses than for gains, which can be seen in the increased dispersion of two-state gains. Consistent with the $w^+$ and $w^-$ curves of Figure 3, the linear trendline indicates that the calculated CE is slightly less than actual for gains, and slightly greater for losses.
Exhibiting excellent alignment between EDRM and CPT with a near-perfect $R^2$ result of 0.9971 (see Appendix C.3), not to mention the tight agreement between its predicted preference reversal and inflection points as shown from prior research, EDRM applied to CPT soundly affirms Hypothesis 1, along with Kahneman and Tversky’s groundbreaking work. EDRM serves as the baseline relationship between objective probability and one’s perception of the likelihood of an outcome (subjective probability). The results shown in Table A3 indicate that the type of gamble (gain or loss) has only a secondary effect, affirming the assumption that gains and losses can be considered together within this research.

5.5. The Framing of Decisions and the Psychology of Choice (Tversky and Kahneman)

Beyond their works of PT and CPT, Tversky and Kahneman produced a volume of research on related topics that provides additional sources for EDRM validation. In their paper, The Framing of Decisions and the Psychology of Choice, they explored a wide range of problem types involving gains, losses, and mixtures of the two [62]. Three of the problems posed (8, 9, and 10) are presented without probabilities and are akin to those offered by Richard Thaler in Mental Accounting and Consumer Choice; they are not included here but will be considered in future studies applying EDRM to Thaler's works [72].
Due to the paucity of problems in this group and the 100% matching, statistical analysis was not conducted; however, the results were considered in the analysis of EDRM evaluation model performance. The results shown in Table 4 were produced using uncorrected EDRM with the default power utility exponent of α = 0.88 . These results support Hypothesis 1.

5.6. Rational Choice and the Framing of Decisions (Tversky and Kahneman)

While the paper Rational Choice and the Framing of Decisions includes problems that are identical to those in other papers, such as The Framing of Decisions and the Psychology of Choice, two of the problems presented (7 and 8) are of particular interest to this research because they contain mixtures of gains and losses, more than three states, and dominance [63]. Additionally, the respective choices in the two problems have identical expected values (2.55 for Choice A and 2.75 for Choice B), so an expected-value analysis would incorrectly predict Choice B for both problems. As shown in Table 5, EDRM accurately predicts the results of both, noting that the percentage result in problem 7 applies the dominance special case. Normatively, problems 7 and 8 should be equivalent; however, subjects appear to intuitively evaluate differences in certainty consistent with CPT, as predicted by EDRM. This result supports Hypothesis 1.

5.7. Gain-Loss Separability (Wu and Markle)

George Wu and Alex Markle focused their research on the separability of gains and losses in mixed gambles, which provides data that can be used to validate EDRM's ability to model choices consisting of mixed gains and losses. Their study comprised six surveys of 59 to 81 participants each. Surveys 1, 2, and 3 were conducted using prepared booklets that subjects were paid to complete, while surveys 4, 5, and 6 were completed by subjects on a computer, with a randomized order of gambles in a format designed to replicate that of the booklets [67]. This variation in test method may have produced differing results, as observed when compared with EDRM predictions. Due to the generated mix of positive and negative prospects, this study also serves as a validation test for the evaluation model itself. Figure 14 graphically compares actual results with the EDRM prediction by survey number, showing reasonable alignment.
EDRM agrees with 82.4% of the binary results. Assuming EDRM is accurate, the comparative statistical analysis shows that all of the non-conformities were contained within the first three booklet-based surveys, especially survey 2 with three negative results, which appears significant given the comparatively lower value of R² (0.69 for correct binary results, 0.35 for all results including incorrect). The surveys were designed so that subject response to the "high" choice (H) would increase with survey number; EDRM likewise shows an increasing trend with survey number, but with a lesser slope. Despite these concerns, based upon the statistical results in Table 6, and notwithstanding variability in the subject data, EDRM is shown to generally predict the results of mixed gambles with a Spearman rank correlation coefficient of 0.695 (see Appendix C.4), which supports Hypothesis 1.
In addition to the 34 mixed-gamble problems analyzed for EDRM validation, the study included another 68 single non-zero-state choices of gains or losses, which were decompositions of the mixed problems. To maximize the number of correct binary results for the full set of 102 problems, β was increased from 1 to 1.26, assuming α = 0.88, which resulted in 78.4% (80/102) matches. Wu and Markle conclude that α = 0.5, which corresponds to an EDRM β of 0.5 to maximize the results of the 34 mixed problems of interest, with essentially no difference in the comparative result. This research agrees with Wu and Markle's conclusion that mixed gambles cannot simply be deconstructed into separate gambles.
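To illustrate how a mixed gamble is evaluated, the following is a minimal sketch for Wu and Markle problem 1, assuming losses are weighted by the loss-aversion factor λ before summation (λ = 1 here, i.e., no loss aversion); the names are illustrative:

```python
import math

def p_of_tau(t: float) -> float:
    """Equation (A4): relative certainty as a function of proximity."""
    return t * t * (1.0 - math.log(t * t)) if t > 0 else 0.0

def tau_of_p(p: float) -> float:
    """Numeric inverse of Equation (A4) by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if p_of_tau(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

def prospect(choice, alpha: float = 0.88, lam: float = 1.0) -> float:
    """T = sum over states; loss states carry a loss-aversion weight lam (lam = 1: none)."""
    total = 0.0
    for v, p in choice:
        u = abs(v) ** alpha * tau_of_p(p)
        total += u if v >= 0 else -lam * u
    return total

# Wu and Markle problem 1: H = (150, 0.3; -25, 0.7), L = (75, 0.8; -60, 0.2)
t_h = prospect([(150, 0.3), (-25, 0.7)])
t_l = prospect([(75, 0.8), (-60, 0.2)])
print(round(t_h), round(t_l))  # 14 and 21: L is preferred, matching Table 6 (78% chose L)
```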

6. Summary of Analyses

EDRM has been shown to effectively predict results of the studies considered using a standard value of α = 0.88 and the neutral value of β = 1 and assuming no loss aversion (i.e., λ = 1 ). This section will show that these values naturally maximize valid binary results through comparison of plots of the results obtained by varying these factors over nominal ranges for the prior studies considered in this research. Specifically, α is varied from 0 to 1 and β is varied from 0 to 2, holding λ constant at 1; λ is then varied from 1 to 3, holding β constant at 1.
Two types of plots are discussed; the first is a subset of the data included in the second. Figure 15 illustrates results for a sample Wu and Markle problem (number 25) as the difference between the prospects of the two choices ( T A − T B ), which clearly shows a linear preference-reversal relationship between the factors. The standard values of α and β are well within the range for selecting Choice A, which is consistent with reported results.
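A coarse text version of such a contour can be generated as follows. This is a sketch under two assumptions consistent with the model description, and the names are illustrative: β is applied as an exponent on proximity (τ^β, which is equivalent to the β-modified relationship in Appendix B), and λ multiplies the magnitude of loss states:

```python
import math

def p_of_tau(t: float) -> float:
    return t * t * (1.0 - math.log(t * t)) if t > 0 else 0.0

def tau_of_p(p: float) -> float:
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if p_of_tau(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

def prospect(choice, alpha: float, beta: float, lam: float = 1.0) -> float:
    """T = sum of |v|**alpha * tau(p)**beta per state; losses weighted by -lam."""
    return sum((1.0 if v >= 0 else -lam) * abs(v)**alpha * tau_of_p(p)**beta
               for v, p in choice)

H = [(800, 0.4), (-1000, 0.6)]   # Wu and Markle problem 25, choice A (H)
L = [(500, 0.6), (-1600, 0.4)]   # choice B (L)
for beta in (0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0):
    row = "".join("A" if prospect(H, a / 20, beta) > prospect(L, a / 20, beta) else "B"
                  for a in range(1, 21))  # alpha swept from 0.05 to 1.00
    print(f"beta={beta:5.2f}  {row}")
```

Each row sweeps α from 0.05 to 1.00; the boundary between the A and B regions traces the preference-reversal line visible in Figure 15, and the standard values (α = 0.88, β = 1) fall in the A region.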

System-Level Analysis of Choices (Sensitivity)

Independent of the subject percentages and PEM results, by layering the binary results of each problem analyzed in this paper upon one another, the effect of varying α , β , and λ , using the relationship in Equation (14), can be considered at a system level, where results from multiple studies are integrated. Figure 16 and Figure 17 demonstrate the results of combining the 63 previously discussed problems from Prospect Theory, the Allais Paradox, Framing of Decisions and the Psychology of Choice, and Wu and Markle's Gain-Loss Separability (mixed). The standard values are shown using dashed lines and clearly fall within the white zone for the 57 problems correctly predicted by EDRM (90.5%). The remaining six negative binary results were discussed in Section 5.7.
This observation serves four purposes. First, it validates the value of α determined from prior studies as a standard for subject responses in selecting between choices, along with the neutral value of β = 1; the plot is optimized at α = 0.88 and β = 1.07. Second, it shows that there is a mostly linear relationship between α and β (Figure 17). Third, it further validates EDRM's universality and consistency when applied to differing sources and researchers, which supports Hypothesis 1. Lastly, it validates the assumption that loss aversion is a secondary effect, although some loss aversion is evident.

7. Discussion

The broad goal of this research is to provide a method for addressing the mismatch between standard expected utility risk analysis tools and decision makers, ultimately to enable quantization of risk in fungible terms. In the process of addressing this goal, the present research has developed the predictive EDRM decision model from utility theory, statistical mechanics, and information theory, a model that is highly consistent with a myriad of studies. Although derived independently, EDRM bears resemblance to several prior descriptive positive models from Kahneman and Tversky, Luce et al., Gonzalez and Wu, Prelec, and Quiggin, which lends significant credence to the validity of the approach and the result. This research also reinforces the validation of the various studies used in the analyses, especially that of CPT.
This research demonstrates that entropy divergence from certainty can be used to develop a positive decision model from basic theory that accurately predicts prior study results and provides a translation between positive and normative decision theory domains by relating subjective and objective probabilities, respectively. Tversky and Kahneman introduced this technique of translation when they stated, “In expected utility theory the utility of an uncertain outcome is weighted by its probability; in prospect theory the value of an uncertain outcome is multiplied by a decision weight w ( p ) ,” [62]. Since the decision weight and proximity are synonymous, Equation (A4) provides a translation between the two domains.
The first hypothesis is proven through the validation demonstrated in Section 5; in the process, it was demonstrated that gains and losses can be accurately considered together, without correction, i.e., the assumption that λ = 1 is valid. This conclusion establishes the basis for expressing risks with measurable objective probabilities in terms useable by decision makers, and it permits translation of subjective prospects based upon perception of an outcome into standard objective utility risk models.
The second hypothesis is also proven. The prior studies used in this analysis are understood to accurately represent subject behavior; that behavior has been shown to align with the EDRM prospect, which is by definition based upon subjective probability. It follows that people do understand probabilities, but as subjective rather than objective probabilities. There is also some evidence that as choice complexity increases (a greater number of states and mixtures of gain and loss states within a choice), decisions more closely align with uncorrected EDRM, which is consistent with intuitive system 1 behavior.
With the PEM validated and demonstrating consistent performance within this research, there is clear potential for application to other related studies to permit comparison of decision model outputs and subject responses. Since the PEM quantifies relative choice difficulty as the difference between percentages, from an economics perspective it may be useful for engineering alternatives that are easier for subjects to choose between (i.e., making it easier to select one product over another). Additionally, there is an opportunity for further research into how this relates to the variance in subject responses, i.e., is there more variance in difficult decisions?
With the positive results of the two hypotheses proven, the initial step towards quantizing programmatic risk is addressed, that is, the mismatch between how decisions should be made and how they are made. Future research to evaluate the EDRM in greater depth is required, especially regarding the complex interactions of an increased number of states and mixtures of gains and losses within a choice, which are evident in many complex economic scenarios. Future research in this area will also consider the application of continuous probability distributions and the use of utility functions other than the exponential power utility (e.g., logarithmic expected utility) to understand perception of risk.

Author Contributions

Methodology, T.M., M.B. and V.T.-G.; Software, T.M.; Supervision, M.B. and V.T.-G.; Validation, T.M., M.B. and V.T.-G.; Writing—Original Draft Preparation, T.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of Proximity from Entropy

The EDRM is derived by calculating the Kullback–Leibler entropy divergence of the state probabilities from certainty, where P is a continuous distribution as a linear function of τ and certainty has a value of 1 for all values of τ; this is also an integration of the information theory entropy for a single state, shown in Equation (7) [11,35,73]. As with micro probabilities in statistical mechanics, one should note that proximity is a subjective probability and is not directly measurable. The derivation is as follows:
$$ f(p) = D_{KL}(P \,\|\, \mathrm{Certainty}) = D_{KL}(P \,\|\, 1) = \int_{0}^{1} \tau \log_{2}\!\left(\frac{\tau}{1}\right) d\tau \,, \quad \text{(A1)} $$
so
$$ D_{KL} = \frac{c_{1}}{\ln 16}\left(\tau^{2}\ln \tau^{2} - \tau^{2}\right) + c_{2} \,. \quad \text{(A2)} $$
Given the constraints $D_{KL}(0) = 1$ and $D_{KL}(1) = 0$, then $c_1 = \ln 16$ and $c_2 = 1$, so Equation (A2) simplifies to
$$ D_{KL} = \tau^{2}\ln \tau^{2} - \tau^{2} + 1 \,. \quad \text{(A3)} $$
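As a quick check of the integration step from Equation (A1) to Equation (A2), differentiating the bracketed term recovers the integrand:

$$ \frac{d}{d\tau}\left[\frac{\tau^{2}\ln\tau^{2}-\tau^{2}}{\ln 16}\right] = \frac{2\tau\ln\tau^{2}+2\tau-2\tau}{4\ln 2} = \frac{4\tau\ln\tau}{4\ln 2} = \tau\log_{2}\tau \,. $$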
Figure A1. Divergence, or relative entropy, is the distance between certainty and uncertainty for a given subjective probability. The arrow shows how the divergence curve is flipped when converted to Shannon's redundancy, which is referred to herein as relative certainty and is an objective probability.
Figure A2. Plot of relative certainty versus proximity that relates objective and subjective probabilities, but is then flipped around the diagonal to place relative certainty on the horizontal. This plot is shown to graphically illustrate the steps of the mathematical derivation.
The relationship between $D_{KL}$ and proximity is illustrated in Figure A1, along with the Shannon entropy (base 2) for a single state, which has a maximum at $1/e$. Kullback–Leibler entropy divergence is also known as relative entropy, so relative certainty, $p(\tau)$, can be expressed in terms of Shannon redundancy (one minus relative entropy), as shown in Equation (A4) and Figure A2. Figure 7 shows the inverted relationship for determining proximity as a function of relative certainty, which is more useful because subjects are usually given relative certainty (objective probability) when evaluating a choice. The relationship is expressed as
$$ p(\tau) = 1 - D_{KL} = \tau^{2} - \tau^{2}\ln \tau^{2} = \tau^{2}\ln\!\left(\frac{e}{\tau^{2}}\right) \,. \quad \text{(A4)} $$
Additionally, through the course of this research, an alternate equation based upon Shannon's redundancy and the entropy of a single state was derived which, with the right factors, closely approximates the ideal Equation (A4) for probabilities not near the extremes of 0 or 1. Because Equation (A4) is not analytically invertible, the following relationship may be more convenient mathematically:
$$ T_{ij} \approx v_{ij}\, p_{ij}^{\,a}\left(1 - \frac{H_{ij}^{\,b}}{c\, H_{max}}\right) \,, \quad \text{(A5)} $$
where $a = 0.7276587$, $b = 0.401077$, $c = 2.664828$, and $H_{max} = 1.0$.
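A quick numeric comparison of the approximation against the exact inverse of Equation (A4) is sketched below, assuming $H_{ij}$ is the single-state Shannon entropy $-p \log_2 p$ noted above (an assumption of this sketch; names illustrative):

```python
import math

A, B, C, H_MAX = 0.7276587, 0.401077, 2.664828, 1.0

def tau_approx(p: float) -> float:
    """Equation (A5), assuming H is the single-state entropy -p*log2(p)."""
    h = -p * math.log2(p)
    return p**A * (1.0 - h**B / (C * H_MAX))

def tau_exact(p: float) -> float:
    """Invert Equation (A4) by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        s = mid * mid
        lo, hi = (mid, hi) if s - s * math.log(s) < p else (lo, mid)
    return (lo + hi) / 2.0

for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"p={p:4}: approx={tau_approx(p):.4f}  exact={tau_exact(p):.4f}")
# approx: 0.1420, 0.2610, 0.4323, 0.6205, 0.7697
# exact:  0.1430, 0.2602, 0.4321, 0.6184, 0.7665
```

The two columns agree to roughly two decimal places away from the extremes, consistent with the stated accuracy of the approximation.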

Appendix B. Very Small Probabilities

The relationship for small probabilities is derived by introducing the factor $\beta$ into the proximity relationship expressed in Equation (A4) and assuming $p(\tau_{\beta}) = \tau^{2/\beta} - \tau^{2/\beta}\ln \tau^{2/\beta}$, where the log ratio is
$$ \frac{\log p(\tau_{\beta})}{\log \tau} \,. \quad \text{(A6)} $$
Substituting $p(\tau_{\beta})$ and taking the limit as $\tau \to 0$,
$$ \lim_{\tau \to 0} \frac{\log\left(\tau^{2/\beta} - \tau^{2/\beta}\ln \tau^{2/\beta}\right)}{\log(\tau)} = 2/\beta \,. \quad \text{(A7)} $$
Therefore, the exponential factor for very small probabilities converges to
$$ p(\tau_{\beta}) = \tau^{2/\beta} \,. \quad \text{(A8)} $$
For uncorrected EDRM, $\beta = 1$ by definition, so a profoundly simple relationship between objective and subjective probabilities results for very small probabilities, which is consistent with the Born rule of quantum mechanics:
$$ \tau = \sqrt{p} \,. \quad \text{(A9)} $$
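The convergence in Equation (A7) is quite slow; a short numeric check for β = 1 (a sketch) shows the log ratio approaching 2 as τ → 0:

```python
import math

for exp10 in (-2, -4, -8, -16, -32):
    tau = 10.0 ** exp10
    s = tau ** 2                  # beta = 1, so tau**(2/beta) = tau**2
    p = s - s * math.log(s)       # Equation (A4) evaluated at this tau
    print(f"tau=1e{exp10}: log p / log tau = {math.log(p) / math.log(tau):.3f}")
# 1.495, 1.678, 1.803, 1.883, 1.932 -> slowly approaching 2
```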
The broader application of β as a factor in Equation (A4) is considered in Section 4.6.5 and will be taken up in future research after validation of the ideal model in this report.

Appendix C. Statistical Analyses

Appendix C.1. Percentage Evaluation Model

Table A1. Statistical analysis of the EDRM Percentage Evaluation Model using eight data sets from Birnbaum, Birnbaum and Bahra, Kahneman and Tversky, Tversky and Kahneman, Wu and Gonzalez, Wu and Markle, and Prelec with matching binary results and optimized values of α and β to validate performance.

Regression analysis, coefficient of determination (R²):
Actual percentages compared with calculated, matching binary results only: 0.8026
Spearman rank correlation coefficient (Rho): 0.8899

ANOVA (% actual vs. calculated):
Study source: Df = 7, Sum Sq = 517.8, Mean Sq = 73.977, F = 0.8601, Prob(>F) = 0.5401 (not significant)
Type of choice (gain, loss, mix): Df = 2, Sum Sq = 96.0, Mean Sq = 48.005, F = 0.8828, Prob(>F) = 0.5737 (not significant)
Interaction between source and type: Df = 5, Sum Sq = 379.7, Mean Sq = 75.932, F = 0.8828, Prob(>F) = 0.4947 (not significant)
Residuals: Df = 128, Sum Sq = 11,009.5, Mean Sq = 86.012

Normality assumption: Shapiro–Wilk W = 0.99522, p-value = 0.9218 (normal)

Conclusions:
1. The null hypothesis cannot be rejected, which means that the EDRM evaluation model is likely effective at expressing relative differences in prospect as percentages. Criteria 4 and 5 are met.
2. A t-statistic test confirms that no survey source is significant.

Appendix C.2. Prospect Theory

Table A2. Statistical analysis of EDRM performance with Prospect Theory showing 100% binary agreement and excellent alignment between reported percentages and those calculated using the PEM.

Binary matching (yes/no): 100%
Regression analysis, coefficient of determination (R²):
Actual percentages compared with calculated (all match): 0.8581
Spearman rank correlation coefficient (Rho): 0.6966

ANOVA (% actual vs. calculated):
Type of gamble (gain or loss): Df = 1, Sum Sq = 58.98, Mean Sq = 58.983, F = 0.4648, Prob(>F) = 0.5051 (not significant)
Number of non-zero states (1 or 2): Df = 1, Sum Sq = 36.66, Mean Sq = 36.658, F = 0.2889, Prob(>F) = 0.5983 (not significant)
Residuals: Df = 16, Sum Sq = 2030.40, Mean Sq = 126.900

Normality assumption: Shapiro–Wilk W = 0.94119, p-value = 0.2771 (normal)

Conclusions:
1. None of the null hypotheses can be rejected, which means that EDRM reasonably predicts the results of Prospect Theory.

Appendix C.3. Cumulative Prospect Theory

Table A3. Statistical analysis of EDRM performance with Cumulative Prospect Theory showing nearly perfect alignment between a priori EDRM and the data reported by Tversky and Kahneman. These results suggest that there is some difference between gains and losses, but as a second-order effect. The number of states (1 or 2) has no effect.

Regression analysis, coefficient of determination (R²):
Actual values (not percentages) compared with calculated values: 0.9971
Actual values compared with calculated values (positive only): 0.9885
Actual values compared with calculated values (negative only): 0.9980
Spearman rank correlation coefficient (Rho): 0.9982

ANOVA (CE actual vs. calculated):
Type of gamble (gain or loss): Df = 1, Sum Sq = 172.62, Mean Sq = 172.62, F = 3.9040, Prob(>F) = 0.05339 (marginal)
Number of non-zero states (1 or 2): Df = 1, Sum Sq = 48.40, Mean Sq = 48.40, F = 1.0946, Prob(>F) = 0.30020 (not significant)
Residuals: Df = 53, Sum Sq = 2343.49, Mean Sq = 44.217

Normality assumption: Shapiro–Wilk W = 0.97213, p-value = 0.2196 (normal)

Conclusions:
1. The coefficient of determination values for the comparison of actual and calculated values indicate near-perfect alignment and affirm Hypothesis 1. The ANOVA result for type of gamble does not reject the null hypothesis of no significant effect; however, the probability is very close to the 5% significance level, indicating there is some difference between gains and losses. Given that there is nearly no difference in the R² for positive (0.9885) and negative (0.9980) problems, this difference can be treated as a secondary effect in this research. Using a value of β = 0.947 rather than 1 increases the type-of-gamble Prob(>F) from 0.053 to nearly 0.35.

Appendix C.4. Wu and Markle Gain-Loss Separability

Table A4. Statistical analysis of EDRM performance on the Wu and Markle Gain-Loss Separability study. Because there were non-matching binary results, binomial and nonparametric tests are shown to confirm general alignment between the EDRM and the reported results.

Binary matching (yes/no): 82.3%
Binomial test (probability > 50%): # Y: 28, # trials: 34, p-value = 1.95 × 10⁻⁴
Nonparametric analysis using the Wilcoxon test: V = 206, p-value = 0.1207 (agreement likely)
Spearman rank correlation coefficient (Rho): 0.6946

ANOVA (% actual vs. calculated, matching only):
Survey (6 surveys total): Df = 5, Sum Sq = 1602.40, Mean Sq = 320.48, F = 8.7410, Prob(>F) = 1.60 × 10⁻⁴ (significant)
Prospect signs (both positive, both negative, mixed): Df = 2, Sum Sq = 66.56, Mean Sq = 33.28, F = 0.9077, Prob(>F) = 0.4194 (not significant)
Residuals: Df = 20, Sum Sq = 733.28, Mean Sq = 36.66

Normality assumption:
Shapiro–Wilk (all, including non-matching): W = 0.81802, p-value = 5.832 × 10⁻⁵ (not normal)
Shapiro–Wilk (matching binary results only): W = 0.96881, p-value = 0.5492 (normal)

Conclusions:
1. The Wilcoxon null hypothesis cannot be rejected, so bias between calculated and actual values is unlikely. Additionally, this result further strengthens the PEM validation.
2. The sign of the resulting choice prospects has no significant effect.
3. The survey number is significant. All of the non-matching problems come from surveys 1 through 3, which were conducted differently than surveys 4, 5, and 6; survey 1 has a significantly higher difference mean than the other surveys.

References

1. von Neumann, J. Mathematical Foundations of Quantum Mechanics; Princeton University Press: Princeton, NJ, USA, 1955.
2. DoD Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs; Department of Defense (Ed.) Office of the Deputy Assistant Secretary of Defense for Systems Engineering: Washington, DC, USA, 2017.
3. DoD System Safety (MIL-STD-882E); Department of Defense (Ed.) Air Force Material Command, Wright-Patterson Air Force Base: Dayton, OH, USA, 2012.
4. Monroe, T.J.; Beruvides, M.G. Risk, Entropy, and Decision-Making Under Uncertainty. In Proceedings of the 2018 IISE Annual Conference, Orlando, FL, USA, 19–22 May 2018.
5. Kahneman, D. Thinking, Fast and Slow, 1st ed.; Farrar, Straus and Giroux: New York, NY, USA, 2011.
6. Taleb, N.N. The Black Swan, 2nd ed.; Random House Trade Paperbacks: New York, NY, USA, 2010.
7. Stanovich, K.E.; West, R.F. Individual differences in reasoning: Implications for the rationality debate? Behav. Brain Sci. 2000, 23, 645.
8. Schneider, M. Dual Process Utility Theory: A Model of Decisions Under Risk and Over Time; Economic Science Institute, Chapman University: Orange, CA, USA, 2018.
9. ISO. ISO 31000:2018 Risk Management Guidelines; International Organization for Standardization: Geneva, Switzerland, 2018.
10. Rasmussen, N. The Application of Probabilistic Risk Assessment Techniques to Energy Technologies. Annu. Rev. Energy 1981, 6, 123–138.
11. Tapiero, C.S. Risk and Financial Management; John Wiley and Sons Ltd.: West Sussex, UK, 2004.
12. Ariely, D. Predictably Irrational: The Hidden Forces That Shape Our Decisions; HarperCollins: New York, NY, USA, 2009.
13. Cohen, D. Homo Economicus, the (Lost) Prophet of Modern Times; Polity Press: Malden, MA, USA, 2014.
14. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision under Risk. Econometrica 1979, 47, 263–291.
15. Markowitz, H. The Utility of Wealth. J. Polit. Econ. 1952, 60, 151–158.
16. Bernoulli, D. Exposition of a New Theory on the Measurement of Risk (1738). Econometrica 1954, 22, 23–36.
17. Tversky, A.; Kahneman, D. Advances in Prospect Theory: Cumulative Representation of Uncertainty. J. Risk Uncertain. 1992, 5, 297–323.
18. Karmarkar, U.S. Subjectively weighted utility: A descriptive extension of the expected utility model. Organ. Behav. Hum. Perform. 1978, 21, 61–72.
19. Gonzalez, R.; Wu, G. On the Shape of the Probability Weighting Function. Cogn. Psychol. 1999, 38, 129–166.
20. Quiggin, J. A theory of anticipated utility. J. Econ. Behav. Organ. 1982, 3, 323–343.
21. Luce, R.; Ng, C.; Marley, A.; Aczél, J. Utility of gambling II: Risk, paradoxes, and data. Econ. Theory 2008, 36, 165–187.
22. Buchanan, A. Toward a Theory of the Ethics of Bureaucratic Organizations. Bus. Ethics Q. 1996, 6, 419–440.
23. Wakker, P.P.; Zank, H. A simple preference foundation of cumulative prospect theory with power utility. Eur. Econ. Rev. 2002, 46, 1253–1271.
24. Bentham, J. An Introduction to the Principles of Morals and Legislation; Batoche Books: Kitchener, ON, Canada, 2000.
25. Mill, J.S. Utilitarianism; Heydt, C., Ed.; Broadview Editions: Buffalo, NY, USA, 2011.
26. Introduction to Aristotle; The Modern Library: New York, NY, USA, 1947.
27. Ellsberg, D. Risk, Ambiguity, and the Savage Axioms. Q. J. Econ. 1961, 75, 643–669.
28. Wakker, P. Separating marginal utility and probabilistic risk aversion. Theory Decis. 1994, 36, 1–44.
29. Kahneman, D.; Wakker, P.P.; Sarin, R. Back to Bentham? Explorations of Experienced Utility. Q. J. Econ. 1997, 112, 375–405.
30. Kahneman, D.; Thaler, R.H. Anomalies: Utility Maximization and Experienced Utility. J. Econ. Perspect. 2006, 20, 221–234.
31. Ben-Naim, A. Entropy and Information Theory: Uses and Misuses. Entropy 2019, 21, 1170.
32. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630.
33. Boltzmann, L. Über die Mechanische Bedeutung des Zweiten Hauptsatzes der Wärmetheorie [On the Mechanical Importance of the Second Principles of Heat-Theory]. Wien. Ber. 1866, 53, 195–220.
34. von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 2007.
35. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; The University of Illinois Press: Urbana, IL, USA, 1949.
36. Nawrocki, D.N.; Harding, W.H. State-Value Weighted Entropy as a Measure of Investment Risk. Appl. Econ. 1986, 18, 411–419.
37. Yang, J.; Qiu, W. Normalized Expected Utility-Entropy Measure of Risk. Entropy 2014, 16, 3590–3604.
38. Belavkin, R.V. Asymmetry of Risk and Value of Information; Middlesex University: London, UK, 2014.
39. Belavkin, R.; Ritter, F.E. The Use of Entropy for Analysis and Control of Cognitive Models. In Proceedings of the Fifth International Conference on Cognitive Modeling, Bamberg, Germany, 9–12 April 2003; pp. 21–26.
40. Tversky, A. Preference, Belief, and Similarity; The MIT Press: Cambridge, MA, USA, 2004.
41. Hellman, Z.; Peretz, R. A Survey on Entropy and Economic Behaviour. Entropy 2020, 22, 157.
42. Zingg, C.; Casiraghi, G.; Vaccario, G.; Schweitzer, F. What Is the Entropy of a Social Organization? Entropy 2019, 21, 901.
43. Pisano, R.; Sozzo, S. A Unified Theory of Human Judgements and Decision-Making under Uncertainty. Entropy 2020, 22, 738.
44. Keynes, J.M. A Treatise on Probability; Macmillan and Co., Limited: London, UK, 1921.
45. Hume, D. A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects; Batoche Books Limited: Kitchener, ON, Canada, 1998.
46. Jaynes, E.T. Probability Theory: The Logic of Science; Bretthorst, G.L., Ed.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2003.
47. Waismann, F. Logische Analyse des Wahrscheinlichkeitsbegriffs [Logical Analysis of the Concept of Probability]. Erkenntnis 1930, 1, 228.
48. Carnap, R. The Two Concepts of Probability: The Problem of Probability. Philos. Phenomenol. Res. 1945, 5, 513–532.
49. Abdellaoui, M. Uncertainty and Risk: Mental, Formal, Experimental Representations; Springer: Berlin/Heidelberg, Germany; London, UK, 2007.
50. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131.
51. Bayes, T.; Price, R. An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F.R.S. Communicated by Mr. Price, in a Letter to John Canton, A.M.F.R.S. Philos. Trans. (1683–1775) 1763, 53, 370–418.
52. Frigg, R. Probability in Boltzmannian Statistical Mechanics. In Time, Chance and Reduction. Philosophical Aspects of Statistical Mechanics; Ernst, G., Hüttemann, A., Eds.; Cambridge University Press: Cambridge, UK, 2010.
53. Waismann, F. Philosophical Papers; D. Reidel Pub. Co.: Dordrecht, The Netherlands, 1977.
54. Popper, K. The Logic of Scientific Discovery; Routledge: London, UK; New York, NY, USA, 1992.
55. Shackle, G.L.S. Expectation in Economics, 2nd ed.; Cambridge University Press: Cambridge, UK, 1952.
56. ANSI/ASSE/ISO 31000-2009. Risk Management Principles and Guidelines; American Society of Safety Engineers: Des Plaines, IL, USA, 2011.
57. Prelec, D. The Probability Weighting Function. Econometrica 1998, 66, 497–527.
58. Wu, G.; Gonzalez, R. Curvature of the Probability Weighting Function. Manag. Sci. 1996, 42, 1676–1690.
59. Lichtenstein, S.; Slovic, P. Reversals of preference between bids and choices in gambling decisions. J. Exp. Psychol. 1971, 89, 46–55.
60. Tversky, A.; Sattath, S.; Slovic, P. Contingent Weighting in Judgment and Choice. Psychol. Rev. 1988, 95, 371–384.
61. Schumpeter, J.A. History of Economic Analysis; Oxford University Press: New York, NY, USA, 1954.
62. Tversky, A.; Kahneman, D. The framing of decisions and the psychology of choice. Science 1981, 211, 453.
63. Tversky, A.; Kahneman, D. Rational choice and the framing of decisions. J. Bus. 1986, 59, S251.
64. Birnbaum, M.H. Three New Tests of Independence That Differentiate Models of Risky Decision Making. Manag. Sci. 2005, 51, 1346–1358.
65. Birnbaum, M.H.; Bahra, J.P. Gain-loss separability and coalescing in risky decision making. Manag. Sci. 2007, 53, 1016–1028.
66. Prelec, D. A "Pseudo-endowment" effect, and its implications for some recent nonexpected utility models. J. Risk Uncertain. 1990, 3, 247–259.
67. Wu, G.; Markle, A.B. An Empirical Test of Gain-Loss Separability in Prospect Theory. Manag. Sci. 2008, 54, 1322–1335.
68. Allais, M. An Outline of My Main Contributions to Economic Science. Am. Econ. Rev. 1997, 87, 3–12.
69. Allais, M. Le Comportement de l'Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l'Ecole Americaine [The Behavior of Rational Man Facing Risk: A Critique of the Postulates and Axioms of the American School]. Econometrica 1953, 21, 503–546.
70. Machina, M.J. Choice Under Uncertainty: Problems Solved and Unsolved. J. Econ. Perspect. 1987, 1, 121–154.
71. Conlisk, J. The Utility of Gambling. J. Risk Uncertain. 1993, 6, 255–275.
72. Thaler, R.H. Transaction Utility Theory. Adv. Consum. Res. 1983, 10, 229–232.
73. Hoseinzadeh, A.; Mohtashami Borzadaran, G.; Yari, G. Aspects concerning entropy and utility. Theory Decis. 2012, 72, 273–285.
1. The factors used in equations by Gonzalez and Wu (γ, δ, and τ) are not those used in EDRM but are quoted in their original form for accuracy. Additionally, this relationship is nearly identical to that stated by Karmarkar.
2. As this paper is focused upon the application of an entropy model for positive decision theories, the apparent isomorphology between Boltzmann's Principle and Daniel Bernoulli's expected utility theory will be more deeply addressed in subsequent research.
3. This case is identical to that of the classical or frequency definition of probability, where each state is assumed to have the same probability due to a lack of knowledge about the states.
4. The authors have chosen to use T out of respect for Amos Tversky, who passed away before he could be awarded the Nobel Prize alongside Daniel Kahneman.
5. Prelec's relationship is provided as written; however, the constant α is not the same as that used for power utility.
6. To separate decision weights in the two-value CPT actual data, the following was assumed: $w(p)/w(1-p) \approx \tau(p)/\tau(1-p)$.
Figure 1. There are two groups of decision theories: positive and normative. Normative theories are those applied in standard economic decisions and are tied with deliberate choices (i.e., system 2). In contrast, positive theories counter the normative to address how subjects make choices, often involving intuition (system 1). In other words, normative theories are viewed as how people should make decisions, whereas positive theories address how people actually make decisions. The Entropy Decision Risk Model (EDRM) provides a translation between the two domains. Subsequent research will report on the use of EDRM to apply Expected Utility Theory in the positive domain.
Figure 2. PT decision weight. Contrary to expected utility theory, Prospect Theory (1979) empirically determined that subjects make decisions based upon a weighting factor rather than objective probability. Kahneman and Tversky provided this plot as a notional relationship [14].
Figure 3. CPT weighting factor. To address concerns identified with Prospect Theory (PT), Tversky and Kahneman later developed cumulative prospect theory (1992) [17]. These curves are formed from a linear regression model based on their data using Equations (1) and (2) with γ = 0.61 and δ = 0.69.
Figure 4. Flowchart for the present EDRM research showing established theories comprising the EDRM framework, model development, and validation. Section numbers are noted in parentheses.
Figure 5. EDRM mathematically and philosophically relates subjective and objective probabilities, which are referred to as proximity and relative certainty, respectively. Proximity encompasses the group of unmeasurable subjective probabilities and relative certainty relates to those probabilities which are directly measurable, as listed in the respective boxes.
Figure 6. This figure illustrates basic two- and three-state choices, where x_i is the magnitude of a state. Consistent with the application of statistical mechanics, as discussed, proximity (τ) is used rather than objective probability (p) because this is a depiction of a set of choice states. Although only two and three-state choices are shown, a choice can be made up of any number of states.
Figure 7. Uncorrected EDRM plot of proximity (τ) versus relative certainty (p) provides a monotonic relationship between subjective and objective probabilities, in general. Of particular importance for comparison with prior research are the preference reversal point and the inflection point, which closely match previous empirical results.
Figure 8. Overlay of four plots to show alignment of EDRM with Cumulative Prospect Theory (CPT). The base layer showing the CPT weighting factor curves, which includes the axes, is taken directly from the original CPT paper [17]. The second layer of blue dots represents the actual positive and negative data points from the original report (see note 6). The next (orange) layer is a 5th order linear regression trendline calculated from the original results. The final layer shows the uncorrected EDRM, which more closely trends with the original data than the reported weighting factor curves.
Figure 9. Indifference plots of cumulative prospect theory (a,b) and EDRM (c,d) for nonnegative prospects (x1 = 0, x2 = 100, x3 = 200) and nonpositive prospects (x1 = −200, x2 = −100, x3 = 0). Figures (a,b) are taken directly from the original text [17]. Figures (c,d) are calculated using EDRM. The dashed lines represent probability p2. It is noteworthy that EDRM generally matches the original CPT indifference plots, except along the edges. This may be explained by the fact that proximities calculated from relative certainties (p) are not required to sum to 1 (Section 4.6).
Figure 10. To illustrate the effect of β on proximity, this plot graphically shows the shape of the proximity curve for various values to show change in preference reversal, as annotated. For β ≥ 2, there is no preference reversal. While this paper will only apply β = 1 for comparison to prior studies to validate the a priori model, in Section 6 the proximity exponent is varied along with the value exponent to further validate the use of α = 0.88 and β = 1 across all prior studies as a system.
Figure 11. This illustration shows a new model for converting the prospect (T) of two choices into the relative percentages of subject responses for direct comparison with prior studies, which universally report these percentages. No prior works reviewed attempt to compare results in this manner, making this the first to do so, to the authors' knowledge. This model is based upon the Weber-Fechner law of human perception, which is logarithmic, and scaled by the minimum and maximum values. Asinh was chosen because it is likewise logarithmic and permits comparison of positive and negative prospects contiguously along a single scale.
Figure 12. Comparison of the results of PT versus uncorrected EDRM are shown above for all problems provided in the original PT paper. EDRM predictions match all results reported by Kahneman and Tversky.
Figure 13. This plot shows a high degree of alignment of EDRM compared with actual CPT data for one and two-state choices. The plot scales are different on the horizontal and vertical axes to amplify the results. The dashed line represents a linear trendline using all the data, which shows excellent alignment with the positive and negative extremes. There is a slightly tighter correlation of the model for negative values. The negative slope of the trendline shows there is a very small difference between gains and losses (loss aversion), but is considered a minor effect in this research.
Figure 14. Wu and Markle Gain/Loss versus EDRM. This study was chosen because it provides a challenging test of EDRM's ability to handle mixtures of gains and losses. The plot shows the actual and calculated percentages (H) first by survey number and then by problem number. Note that all the non-matching binary results occur in surveys 1, 2, and 3, which were conducted differently in the original study.
Figure 15. (Sample) Wu and Markle Problem 25 (Actual Choice: A). This contour plot illustrates predicted subject choice for varying values of exponents α and β. For the standard values of α = 0.88, β = 1, and λ = 1, choice A will be preferred. Plots such as this were generated for all the choices considered in this research for evaluation as a system.
Figure 16. EDRM Multiple Study α-vs-β Sensitivity Analysis (λ = 1). This plot represents a compilation of all 63 choices evaluated in this research for which EDRM correctly predicted the binary result for 57 (90.5%). The legend shows the z-axis representing the percentage of the problems with a correct binary result as α and β are varied, up to a maximum of 57, which correlates to 100% on the plot. The results clearly demonstrate that the standard values of α = 0.88 and β = 1 are valid, affirming the original work by Kahneman and Tversky and EDRM.
Figure 17. EDRM Multiple Study α-vs-λ Sensitivity Analysis (β = 1). Formatted similarly to Figure 16, this plot shows that as λ (loss aversion factor) increases, slightly wider ranges of α will correctly predict the binary result for a maximum of 57 of 63 choices analyzed. This shows that loss aversion is present for negative state values, but validates its consideration as a secondary effect, since the standard values of α = 0.88 and β = 1 are valid assuming loss aversion is not present (i.e., λ = 1). Plotting of β-vs-λ has nearly identical results.
Table 1. Allais Paradox performance using EDRM. T_j is the prospect of the choice; α = 0.88 is the standard power utility exponent used by Kahneman and Tversky and others. Calc % provides the percentage comparison from the PEM. The greater prospect in each row indicates the predicted choice.

α = 0.88

Problem | Choice A (a1 and a3) | Choice B (a2 and a4) | T_A | T_B | Calc % (A/B) | Match
1 and 2 | (1M) | (5M, 0.10; 1M, 0.89) | 190,456 | 131,265 | 90/10 | Yes
3 and 4 | (5M, 0.10) | (1M, 0.11) | 112,312 | 28,925 | 90/10 | Yes
Table 2. EDRM performance with Prospect Theory. T_j is the prospect of the choice. Calc % and Actual % provide comparison of model results to those reported by Kahneman and Tversky. The greater prospect in each row indicates the predicted choice.

Problem | Choice A | Choice B | T_A | T_B | Calc % (A/B) | Actual % (A/B) | Δ%
1 | (2500, 0.33; 2400, 0.66) | (2400) | 824.66 | 943.16 | 16/84 | 18/82 | 2
2 | (2500, 0.33) | (2400, 0.34) | 308.94 | 304.55 | 64/36 | 83/17 | 19
3 | (4000, 0.8) | (3000) | 978.90 | 1147.80 | 15/85 | 20/80 | 5
4 | (4000, 0.2) | (3000, 0.25) | 330.73 | 298.54 | 74/26 | 65/35 | −9
5 | (10,000, 0.5) | (4320) ¹ | 1430.49 | 1582.07 | 19/81 | 22/78 | 3
6 | (10,000, 0.05) | (4320, 0.1) ¹ | 308.95 | 226.24 | 77/23 | 67/33 | −10
7 | (6000, 0.45) | (3000, 0.9) | 840.31 | 879.79 | 25/75 | 14/86 | −11
8 | (6000, 0.001) | (3000, 0.002) | 20.70 | 16.64 | 60/40 | 73/27 | 13
3′ | (−4000, 0.8) | (−3000) | −978.90 | −1147.80 | 85/15 | 92/8 | 7
4′ | (−4000, 0.2) | (−3000, 0.25) | −330.73 | −298.54 | 26/74 | 42/58 | 16
7′ | (−3000, 0.9) | (−6000, 0.45) | −879.79 | −840.31 | 25/75 | 8/92 | −17
8′ | (−3000, 0.002) | (−6000, 0.001) | −16.64 | −20.70 | 60/40 | 70/30 | 10
10 ² | (4000, 0.8) | (3000) | 978.90 | 1147.80 | 15/85 | 22/78 | 7
11 | (1000, 0.5) | (500) | 188.57 | 237.19 | 19/81 | 16/84 | −3
12 | (−1000, 0.5) | (−500) | −188.57 | −237.19 | 81/19 | 69/31 | −12
13 | (6000, 0.25) | (4000, 0.25; 2000, 0.25) | 549.43 | 593.50 | 25/75 | 18/82 | −7
13′ | (−6000, 0.25) | (−4000, 0.25; −2000, 0.25) | −549.43 | −593.50 | 75/25 | 70/30 | −5
14 | (5000, 0.001) | (5) | 17.63 | 4.12 | 67/33 | 72/28 | 5
14′ | (−5000, 0.001) | (−5) | −17.63 | −4.12 | 33/67 | 17/83 | 2
Notes: 1. Estimated trip values using certainty equivalent from CPT: CE(10000, 0.5) = 4320; 2. Problem 10 is the second stage of a two-stage problem where there is only a 25% chance of proceeding past the first stage; however, as stated by Kahneman and Tversky in problem 10 of Prospect Theory, people tend to disregard the first stage [14]. Therefore, the first stage is not applied in this model.
Table 3. EDRM performance with Cumulative Prospect Theory through comparison of calculated and actual certainty equivalents (CE); the CE is equivalent to the prospect, T. Proximities (τ_i) calculated for each state are also shown. Data are as reported by Tversky and Kahneman.

α = 1, β = 1

Outcomes | Gamble | τ1 | τ2 | Calc CE (T) | Actual CE | ΔCE
(0, 50) | (50, 0.1) | 0.1430 | | 7.15 | 9 | −1.85
(0, 50) | (50, 0.5) | 0.4320 | | 21.60 | 21 | 0.6
(0, 50) | (50, 0.9) | 0.7665 | | 38.32 | 37 | 1.325
(0, −50) | (−50, 0.1) | 0.1430 | | −7.15 | −8 | 0.85
(0, −50) | (−50, 0.5) | 0.4320 | | −21.60 | −21 | −0.6
(0, −50) | (−50, 0.9) | 0.7665 | | −38.32 | −39 | 0.675
(0, 100) | (100, 0.05) | 0.0933 | | 9.33 | 14 | −4.67
(0, 100) | (100, 0.25) | 0.2601 | | 26.01 | 25 | 1.01
(0, 100) | (100, 0.5) | 0.4320 | | 43.20 | 36 | 7.2
(0, 100) | (100, 0.75) | 0.6183 | | 61.83 | 52 | 9.83
(0, 100) | (100, 0.95) | 0.8372 | | 83.72 | 78 | 5.72
(0, −100) | (−100, 0.05) | 0.0933 | | −9.33 | −8 | −1.33
(0, −100) | (−100, 0.25) | 0.2601 | | −26.01 | −23.5 | −2.51
(0, −100) | (−100, 0.5) | 0.4320 | | −43.20 | −42 | −1.2
(0, −100) | (−100, 0.75) | 0.6183 | | −61.83 | −63 | 1.17
(0, −100) | (−100, 0.95) | 0.8372 | | −83.72 | −84 | 0.28
(0, 200) | (200, 0.01) | 0.0361 | | 7.22 | 10 | −2.78
(0, 200) | (200, 0.1) | 0.1430 | | 28.60 | 20 | 8.6
(0, 200) | (200, 0.5) | 0.4320 | | 86.40 | 76 | 10.4
(0, 200) | (200, 0.9) | 0.7665 | | 153.30 | 131 | 22.3
(0, 200) | (200, 0.99) | 0.9284 | | 185.68 | 188 | −2.32
(0, −200) | (−200, 0.01) | 0.0361 | | −7.22 | −3 | −4.22
(0, −200) | (−200, 0.1) | 0.1430 | | −28.60 | −23 | −5.6
(0, −200) | (−200, 0.5) | 0.4320 | | −86.40 | −89 | 2.6
(0, −200) | (−200, 0.9) | 0.7665 | | −153.30 | −155 | 1.7
(0, −200) | (−200, 0.99) | 0.9284 | | −185.68 | −190 | 4.32
(0, 400) | (400, 0.01) | 0.0361 | | 14.44 | 12 | 2.44
(0, 400) | (400, 0.99) | 0.9284 | | 371.36 | 377 | −5.64
(0, −400) | (−400, 0.01) | 0.0361 | | −14.44 | −14 | −0.44
(0, −400) | (−400, 0.99) | 0.9284 | | −371.36 | −380 | 8.64
(50, 100) | (50, 0.9; 100, 0.1) | 0.1430 | 0.7665 | 52.62 | 59 | −6.375
(50, 100) | (50, 0.5; 100, 0.5) | 0.4320 | 0.4320 | 64.80 | 71 | −6.2
(50, 100) | (50, 0.1; 100, 0.9) | 0.7665 | 0.1430 | 83.80 | 83 | 0.8
(−50, −100) | (−50, 0.9; −100, 0.1) | 0.1430 | 0.7665 | −52.62 | −59 | 6.375
(−50, −100) | (−50, 0.5; −100, 0.5) | 0.4320 | 0.4320 | −64.80 | −71 | 6.2
(−50, −100) | (−50, 0.1; −100, 0.9) | 0.7665 | 0.1430 | −83.80 | −85 | 1.2
(50, 150) | (50, 0.95; 150, 0.05) | 0.0933 | 0.8372 | 55.85 | 64 | −8.145
(50, 150) | (50, 0.75; 150, 0.25) | 0.2601 | 0.6183 | 69.93 | 72.5 | −2.57
(50, 150) | (50, 0.5; 150, 0.5) | 0.4320 | 0.4320 | 86.40 | 86 | 0.4
(50, 150) | (50, 0.25; 150, 0.75) | 0.6183 | 0.2601 | 105.75 | 102 | 3.75
(50, 150) | (50, 0.05; 150, 0.95) | 0.8372 | 0.0933 | 130.24 | 128 | 2.245
(−50, −150) | (−50, 0.95; −150, 0.05) | 0.0933 | 0.8372 | −55.85 | −60 | 4.145
(−50, −150) | (−50, 0.75; −150, 0.25) | 0.2601 | 0.6183 | −69.93 | −71 | 1.07
(−50, −150) | (−50, 0.5; −150, 0.5) | 0.4320 | 0.4320 | −86.40 | −92 | 5.6
(−50, −150) | (−50, 0.25; −150, 0.75) | 0.6183 | 0.2601 | −105.75 | −113 | 7.25
(−50, −150) | (−50, 0.05; −150, 0.95) | 0.8372 | 0.0933 | −130.24 | −132 | 1.755
(100, 200) | (100, 0.95; 200, 0.05) | 0.0933 | 0.8372 | 102.38 | 118 | −15.62
(100, 200) | (100, 0.75; 200, 0.25) | 0.2601 | 0.6183 | 113.85 | 130 | −16.15
(100, 200) | (100, 0.5; 200, 0.5) | 0.4320 | 0.4320 | 129.60 | 141 | −11.4
(100, 200) | (100, 0.25; 200, 0.75) | 0.6183 | 0.2601 | 149.67 | 162 | −12.33
(100, 200) | (100, 0.05; 200, 0.95) | 0.8372 | 0.0933 | 176.77 | 178 | −1.23
(−100, −200) | (−100, 0.95; −200, 0.05) | 0.0933 | 0.8372 | −102.38 | −112 | 9.62
(−100, −200) | (−100, 0.75; −200, 0.25) | 0.2601 | 0.6183 | −113.85 | −121 | 7.15
(−100, −200) | (−100, 0.5; −200, 0.5) | 0.4320 | 0.4320 | −129.60 | −142 | 12.4
(−100, −200) | (−100, 0.25; −200, 0.75) | 0.6183 | 0.2601 | −149.67 | −158 | 8.33
(−100, −200) | (−100, 0.05; −200, 0.95) | 0.8372 | 0.0933 | −176.77 | −179 | 2.23
Table 4. EDRM compared with the results of The Framing of Decisions and the Psychology of Choice. T_j is the prospect of the choice. Calc % and Actual % provide comparison of model results to those reported by Tversky and Kahneman. Note that problem 4 makes use of the dominance effect. The greater prospect in each row indicates the predicted choice.

α = 0.88, β = 1

Problem | Choice A | Choice B | T_A | T_B | Calc % (A/B) | Actual % (A/B) | Δ% | Match
1 | (200) | (600, 1/3) | 106 | 89 | 75/25 | 72/28 | −3 | Yes
2 | (−400) | (0, 1/3; −600, 2/3) | −195 | −154 | 18/82 | 22/78 | 4 | Yes
3i | (240) | (1000, 0.25) | 124 | 114 | 71/29 | 84/16 | 13 | Yes
3ii | (−750) | (−1000, 0.75) | −339 | −270 | 16/84 | 13/87 | −3 | Yes
4 | (240, 0.25; −760, 0.75) | (250, 0.25; −750, 0.75) | −180 | −176 | 0/100 ² | 0/100 | 0 | Yes
5 | (30) | (45, 0.8) | 20 | 19 | 59/41 | 78/22 | 19 | Yes
6 ¹ | (30) | (45, 0.8) | 20 | 19 | 59/41 | 74/26 | 15 | Yes
7 | (30, 0.25) | (45, 0.2) | 5.2 | 6.4 | 41/59 | 42/58 | 1 | Yes
Notes: 1. Problem 6 is the second stage of a two-stage version of problem 5 where there is only a 25% chance of proceeding past the first stage; however, as stated by Kahneman and Tversky in problem 10 of Prospect Theory, people tend to disregard the first stage [14]. Therefore, the first stage is not applied in this model; 2. Dominance is present, so the evaluation model returns 100% for the choice with the greater prospect.
Table 5. EDRM compared with results of select problems from Tversky and Kahneman's Rational Choice and the Framing of Decisions having more than three states and mixtures of gains and losses. T_j is the prospect of the choice; Calc % and Actual % compare the model results to those reported. Similar results were achieved with a wide range of power utility function exponents. Bold type marks the greater prospect in each problem.
α = 0.88, β = 1

| Problem | Choice A | Choice B | T_A | T_B | Calc % (A/B) | Actual % (A/B) | Δ% | Match |
|---|---|---|---|---|---|---|---|---|
| 7 | (0, 0.9; 45, 0.06; 30, 0.01; −15, 0.01; −15, 0.02) | (0, 0.9; 45, 0.06; 45, 0.01; −10, 0.01; −15, 0.02) | 2.71 | **3.14** | 0/100 ¹ | 0/100 | 0 | Yes |
| 8 | (0, 0.9; 45, 0.06; 30, 0.01; −15, 0.03) | (0, 0.9; 45, 0.07; −10, 0.01; −15, 0.02) | **2.95** | 2.40 | 52/48 | 58/42 | 6 | Yes |
Note: 1. Dominance is present, so the evaluation model returns 100% for the choice with the greater prospect.
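Note 1's dominance rule can be made mechanical. In problem 7, the two choices share the same probability layout and every outcome of Choice B is at least as good as the matching outcome of Choice A, so B statewise dominates A and the evaluation model assigns it 100% regardless of the prospect gap. A sketch of that check, assuming the states of the two choices are listed in aligned order:

```python
# Statewise dominance check for Table 5, problem 7. The two choices list
# their states in the same probability layout, so dominance reduces to an
# elementwise comparison of outcomes; a minimal sketch under that assumption.
def dominates(b, a):
    """True if choice b statewise dominates choice a (aligned state lists)."""
    assert all(pb == pa for (_, pb), (_, pa) in zip(b, a)), "states not aligned"
    return (all(vb >= va for (vb, _), (va, _) in zip(b, a))
            and any(vb > va for (vb, _), (va, _) in zip(b, a)))

a = [(0, 0.90), (45, 0.06), (30, 0.01), (-15, 0.01), (-15, 0.02)]  # Choice A
b = [(0, 0.90), (45, 0.06), (45, 0.01), (-10, 0.01), (-15, 0.02)]  # Choice B
print(dominates(b, a))  # True: the evaluation model returns 100% for B
```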
Table 6. EDRM performance on the Wu and Markle gain–loss separability study (mixed gambles). T_j is the prospect of the choice, v_ij is the state value, p_ij is the state relative certainty, and τ_ij is the state proximity. Note that in this analysis, 6 of the 34 problems have non-matching binary results (Eval = N). Bold type marks the greater prospect in each problem.
| # | Choice H (v₁, p₁; v₂, p₂) | Choice L (v₁, p₁; v₂, p₂) | τ₁,H, τ₂,H | τ₁,L, τ₂,L | T_H | T_L | Calc % (H/L) | Actual % (H/L) | Δ% | Eval |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | (150, 0.3; −25, 0.7) | (75, 0.8; −60, 0.2) | 0.30, 0.58 | 0.66, 0.22 | 14 | **21** | 38/62 | 22/78 | −16 | Y |
| 2 | (1800, 0.05; −200, 0.95) | (600, 0.3; −250, 0.7) | 0.09, 0.84 | 0.30, 0.58 | −20 | **8** | 37/63 | 21/79 | −16 | Y |
| 3 | (1000, 0.25; −500, 0.75) | (600, 0.5; −700, 0.5) | 0.26, 0.62 | 0.43, 0.43 | −33 | **−17** | 39/61 | 28/72 | −11 | Y |
| 4 | (200, 0.3; −25, 0.7) | (75, 0.8; −100, 0.2) | 0.30, 0.58 | 0.66, 0.22 | **21** | 17 | 59/41 | 33/67 | −26 | N |
| 5 | (1200, 0.25; −500, 0.75) | (600, 0.5; −800, 0.5) | 0.26, 0.62 | 0.43, 0.43 | **−13** | −35 | 62/38 | 43/57 | −19 | N |
| 6 | (750, 0.4; −1000, 0.6) | (500, 0.6; −1500, 0.4) | 0.36, 0.50 | 0.50, 0.36 | **−96** | −108 | 60/40 | 51/49 | −9 | Y |
| 7 | (4200, 0.5; −3000, 0.5) | (3000, 0.75; −6000, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **171** | 160 | 59/41 | 52/48 | −7 | Y |
| 8 | (4500, 0.5; −1500, 0.5) | (3000, 0.75; −3000, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **439** | 411 | 62/38 | 48/52 | −14 | N |
| 9 | (4500, 0.5; −3000, 0.5) | (3000, 0.75; −6000, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **213** | 160 | 63/37 | 58/42 | −5 | Y |
| 10 | (1000, 0.3; −200, 0.7) | (400, 0.7; −500, 0.3) | 0.30, 0.58 | 0.58, 0.30 | **68** | 43 | 63/37 | 51/49 | −12 | Y |
| 11 | (4800, 0.5; −1500, 0.5) | (3000, 0.75; −3000, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **480** | 411 | 65/35 | 54/46 | −10 | Y |
| 12 | (3000, 0.01; −490, 0.99) | (2000, 0.02; −500, 0.98) | 0.04, 0.93 | 0.05, 0.90 | −175 | **−170** | 42/58 | 59/41 | 17 | N |
| 13 | (2200, 0.4; −600, 0.6) | (850, 0.75; −1700, 0.25) | 0.36, 0.50 | 0.62, 0.26 | **178** | 53 | 67/33 | 52/48 | −15 | Y |
| 14 | (2200, 0.2; −1000, 0.8) | (1700, 0.25; −1100, 0.75) | 0.22, 0.66 | 0.26, 0.62 | **−94** | −112 | 61/39 | 58/42 | −4 | Y |
| 15 | (1500, 0.25; −500, 0.75) | (600, 0.5; −900, 0.5) | 0.26, 0.62 | 0.43, 0.43 | **16** | −52 | 65/35 | 51/49 | −14 | Y |
| 16 | (5000, 0.5; −3000, 0.5) | (3000, 0.75; −6000, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **281** | 160 | 65/35 | 65/35 | 0 | Y |
| 17 | (1500, 0.4; −1000, 0.6) | (600, 0.8; −3500, 0.2) | 0.36, 0.50 | 0.66, 0.22 | **8** | −110 | 66/34 | 59/41 | −7 | Y |
| 18 | (2025, 0.5; −875, 0.5) | (1800, 0.6; −1000, 0.4) | 0.43, 0.43 | 0.50, 0.36 | 183 | **209** | 37/63 | 72/28 | 35 | N |
| 19 | (600, 0.25; −100, 0.75) | (125, 0.75; −500, 0.25) | 0.26, 0.62 | 0.62, 0.26 | **37** | −18 | 66/34 | 58/43 | −8 | Y |
| 20 | (5000, 0.1; −900, 0.9) | (1400, 0.3; −1700, 0.7) | 0.14, 0.77 | 0.30, 0.58 | **48** | −229 | 67/33 | 40/60 | −27 | N |
| 21 | (700, 0.25; −100, 0.75) | (125, 0.75; −600, 0.25) | 0.26, 0.62 | 0.62, 0.26 | **47** | −29 | 67/33 | 71/29 | 4 | Y |
| 22 | (700, 0.5; −150, 0.5) | (350, 0.75; −400, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **102** | 56 | 66/34 | 63/37 | −3 | Y |
| 23 | (1200, 0.3; −200, 0.7) | (400, 0.7; −800, 0.3) | 0.30, 0.58 | 0.58, 0.30 | **90** | 7 | 67/33 | 70/30 | 3 | Y |
| 24 | (5000, 0.5; −2500, 0.5) | (2500, 0.75; −6000, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **355** | 55 | 68/32 | 79/21 | 11 | Y |
| 25 | (800, 0.4; −1000, 0.6) | (500, 0.6; −1600, 0.4) | 0.36, 0.50 | 0.50, 0.36 | **−89** | −121 | 64/36 | 58/43 | −6 | Y |
| 26 | (5000, 0.5; −3000, 0.5) | (2500, 0.75; −6500, 0.25) | 0.43, 0.43 | 0.62, 0.26 | **281** | 15 | 67/33 | 71/29 | 4 | Y |
| 27 | (700, 0.25; −100, 0.75) | (100, 0.75; −800, 0.25) | 0.26, 0.62 | 0.62, 0.26 | **47** | −58 | 68/32 | 73/28 | 5 | Y |
| 28 | (1500, 0.3; −200, 0.7) | (400, 0.7; −1000, 0.3) | 0.30, 0.58 | 0.58, 0.30 | **123** | −16 | 68/32 | 75/25 | 7 | Y |
| 29 | (1600, 0.25; −500, 0.75) | (600, 0.5; −1100, 0.5) | 0.26, 0.62 | 0.43, 0.43 | **25** | −85 | 67/33 | 73/28 | 6 | Y |
| 30 | (2000, 0.4; −800, 0.6) | (600, 0.8; −3500, 0.2) | 0.36, 0.50 | 0.66, 0.22 | **112** | −110 | 68/32 | 65/35 | −3 | Y |
| 31 | (2000, 0.25; −400, 0.75) | (600, 0.5; −1100, 0.5) | 0.26, 0.62 | 0.43, 0.43 | **88** | −85 | 68/32 | 80/20 | 12 | Y |
| 32 | (1500, 0.4; −700, 0.6) | (300, 0.8; −3500, 0.2) | 0.36, 0.50 | 0.66, 0.22 | **67** | −194 | 69/31 | 78/23 | 9 | Y |
| 33 | (900, 0.4; −1000, 0.6) | (500, 0.6; −1800, 0.4) | 0.36, 0.50 | 0.50, 0.36 | **−75** | −147 | 66/34 | 70/30 | 4 | Y |
| 34 | (1000, 0.4; −1000, 0.6) | (500, 0.6; −2000, 0.4) | 0.36, 0.50 | 0.50, 0.36 | **−61** | −173 | 67/33 | 78/23 | 10 | Y |
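Each Wu and Markle row can be reproduced the same way: apply the α = 0.88 power utility to each outcome, weight by the tabulated proximity, and compare T_H with T_L. A sketch for problem 16, using the four-decimal proximities from the certainty-equivalent table (the two-decimal values shown here round from those):

```python
# Reproduce Wu and Markle problem 16 from the proximities and the power
# utility (alpha = 0.88, beta = 1). The four-decimal proximities come from
# the certainty-equivalent table; the 0.43/0.62/0.26 shown above round from
# these. A sketch, not the paper's full evaluation pipeline.
ALPHA = 0.88
TAU = {0.25: 0.2601, 0.5: 0.4320, 0.75: 0.6183}

def value(x):
    return x**ALPHA if x >= 0 else -((-x)**ALPHA)  # beta = 1

def prospect(states):
    return sum(TAU[p] * value(x) for x, p in states)

t_h = prospect([(5000, 0.5), (-3000, 0.5)])    # Choice H of problem 16
t_l = prospect([(3000, 0.75), (-6000, 0.25)])  # Choice L of problem 16
print(round(t_h), round(t_l))  # 281 160, matching the tabulated prospects
print(t_h > t_l)  # True: EDRM favors H, as did 65% of subjects
```

Because both the calculated and the actual majority fall on Choice H (65/35 in each case), problem 16 is one of the exact matches in the table.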