Article

On Laws of Thought—A Quantum-like Machine Learning Approach

Lizhi Xin, Kevin Xin and Houwen Xin *

1 Independent Researcher, Hefei 230026, China
2 Independent Researcher, Chicago, IL 60607, USA
3 Department of Chemical Physics, University of Science and Technology of China, Hefei 230052, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(8), 1213; https://doi.org/10.3390/e25081213
Submission received: 1 June 2023 / Revised: 3 August 2023 / Accepted: 9 August 2023 / Published: 15 August 2023

Abstract

Incorporating insights from quantum theory, we propose a machine learning-based decision-making model comprising a logic tree and a value tree; a genetic programming algorithm is applied to optimize both trees. Together, the logic tree and the value tree depict the entire decision-making process of a decision-maker. We apply this framework to the financial market: a “machine economist” is developed to study a time series of the Dow Jones index. The “machine economist” obtains a set of optimized strategies to maximize profit and discovers the efficient market hypothesis (random walk).

1. Introduction

We live in a world brimming with uncertainty, where we constantly have to make decisions with incomplete information. How we make decisions is truly an enigma. Classical decision theory [1,2,3,4,5] is a “black box”; we do not know what really happens inside the box. Human behaviors exhibited during decision making, such as the order effect, cannot be sufficiently explained by decision theories based on classical probability. Scientists are therefore trying to apply quantum theory to reveal how decisions are made. Recently, many quantum-like decision theories [6,7,8] have been proposed that use quantum probability to revise the mathematical structure used in classical models. Aerts et al. were the first to propose applying quantum probability to decision theory [9,10]; Busemeyer et al. proposed a quantum-like model to describe human judgments and the order effect [11,12,13]; Khrennikov et al. improved the Busemeyer quantum model by applying quantum instruments from quantum measurement theory [14,15,16,17,18]; Yukalov et al. proposed a rigorously axiomatic quantum decision theory [19,20,21]; Xin et al. proposed a quantum value operator decision theory [22].
Whether classical or quantum-like, all well-developed decision models apply a rigorous mathematical structure to describe people’s decision making under uncertainty. We are firm believers that people’s subjective beliefs cannot be computed by a rigorous mathematical formula. The main issues with such mathematical models are that they are difficult to understand, that they cannot reflect the dynamic changes in the state of the decision-maker’s mind, and that, as the model becomes more complex, it is not easy to calculate theoretical values to compare with actually observed outcomes.
In this paper, based on Darwinian natural selection, we propose an algorithm that incorporates insights from quantum theory to describe people’s decision making under uncertainty. Our decision model emphasizes machine learning: decision-makers build up their experience by being rewarded or punished for each decision they make, preparing them to make better decisions in the future. This is more in line with how decision-makers operate in the real world.
Our proposed quantum-like decision theory discovers laws of thought by machine learning from an observed time series; there are no differential equations and no transition-probability computations in our decision theory. We do not model with the usual utility function or the projection-type observables of other quantum-like decision theories, but with a logic tree and a value tree. The logic tree determines the state of each point in the time series, and the value tree calculates the absolute value of the difference between two points in the time series. The logic tree and the value tree work together to depict the entire decision-making process of a decision-maker.
In this paper, a “machine economist” is developed, and the Dow Jones index is used as historical data for training it. The “machine economist” trades Dow Jones index futures, builds up experience, optimizes a set of trading strategies to maximize profits, and finally constructs a theory about financial markets.

2. Quantum-like Machine Learning Algorithm

The change in the Dow Jones index over time can be defined in terms of a time series consisting of states and observable values in (1).
$$\{q_k, x_k\}, \quad k = 1, \dots, N$$

$$q_k = \begin{cases} 0, & x_k \ge x_{k-1} \\ 1, & x_k < x_{k-1} \end{cases}$$
where $q_k$ denotes the dynamic state of the Dow Jones index: if the closing price of the Dow Jones index goes up, the state is 0 ($q_k = 0$); if the closing price goes down, the state is 1 ($q_k = 1$). $x_k$ denotes the observed value (closing price) of the Dow Jones index, and the data sequence $x_k$, $k = 1, \dots, N$ describes the trajectory of the Dow Jones index.
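A minimal Python sketch (not the authors' code) of this encoding, mapping a sequence of closing prices to the (state, price) pairs defined above; the initial state is set to −1 (uncertain), following the convention used for the first row of Table 2.

```python
from typing import List, Tuple

def encode_series(prices: List[float]) -> List[Tuple[int, float]]:
    """Return [(q_k, x_k)]: q_k = 0 if x_k >= x_{k-1} (up), 1 otherwise (down)."""
    series = [(-1, prices[0])]                 # initial point: state uncertain (-1)
    for prev, curr in zip(prices, prices[1:]):
        series.append((0 if curr >= prev else 1, curr))
    return series

# Example with the first closes of Table 2:
print(encode_series([34395.01, 34429.88, 33947.10, 33596.34]))
# [(-1, 34395.01), (0, 34429.88), (1, 33947.1), (1, 33596.34)]
```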
The time series $\{q_k, x_k\}$ can be considered as a set of questions posed by the market, which the “machine economist” needs to describe and interpret based on the observed data sequence. The “machine economist” is actually playing against the market, and the market is neither optimistic nor pessimistic; it is simply playing dice with the “machine economist”. The “machine economist” tries to maximize its expected value in order to find the most probabilistically correct answers (maximize profits).
The question now becomes: can the “machine economist” find an answer?
Here, a quantum-like machine learning algorithm is proposed to answer the questions posed by the market. The algorithm is expressed in terms of two trees: the first is a logic tree, which applies “yes or no” answers to determine the dynamically changing state of the security, and the second is a value tree, which describes the closing prices of the security. Together, the logic tree and the value tree reconstruct the trajectory of the security.
  • Logic tree: to determine the action to be taken and calculate the theoretical value of the closing price.
  • Value tree: to calculate the absolute value of the difference in closing prices between two trading points of the Dow Jones index.
The goal of an algorithm $A_k$, in general, is to either:
(1)
Generate the results to match the observed outcomes;
(2)
Predict the next outcome.
In other words, given a sequence of data $\{q_k, x_k\}$ supplied by the market as input, a “machine economist” develops an algorithm $A_k$ that outputs a sequence of data $\{q'_k, x'_k\}$:

$$\{q_k, x_k\} \xrightarrow{\text{input}} A_k(\mathrm{logicTree}, \mathrm{valueTree}) \xrightarrow{\text{output}} \{q'_k, x'_k\}$$

satisfying:

$$q'_k = q_k \ \text{and} \ x'_k = x_k, \quad k = 1, 2, \dots, n$$

$$q'_{n+1} = q_{n+1} \ \text{and} \ x'_{n+1} = x_{n+1}$$
Genetic programming (GP) [23,24,25,26] is used by the “machine economist” to search for a satisfactory algorithm. Just as the genes of the fittest of each species are passed down from generation to generation through natural selection, an evolutionary algorithm can perform the analogous process through machine learning.
The idea and steps of GP are simple:
(1)
Randomly generate 300 logic or value trees;
(2)
Learn the historical data to obtain the fitness of each tree;
(3)
Obtain a satisfactory logic or value tree through the Darwinian principle of survival of the fittest (selection, crossover, and mutation) after about 80 generations of evolution.
The GP Algorithm (Algorithm 1) is as follows:
Algorithm 1. GP Algorithm
  • Input:
    • Historical dataset $\{q_k, x_k\}$, $k = 0, \dots, N$ (each sample consists of a security’s state and closing price);
    • Settings:
    (1)
    Operation set $F$;
    (2)
    Dataset $T$;
    (3)
    Crossover probability = 70%; mutation probability = 5%.
  • Initialization:
    • Population: randomly create 300 individuals.
  • Evolution:
    • Loop: for $i = 0$ to 80 generations:
      (a)
      Calculate the fitness of each individual based on the historical dataset;
      (b)
      According to the quality of fitness:
      (i)
      Selection: select parents.
      (ii)
      Crossover: generate a new offspring using the roulette-wheel algorithm based on the crossover probability.
      (iii)
      Mutation: randomly modify the parent based on the mutation probability.
  • Output:
    • The individual with the best fitness.
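To make Algorithm 1 concrete, the following is a schematic Python sketch of the evolutionary loop. The tree representation and the helper functions (random_tree, fitness, crossover, mutate) are placeholders rather than the authors' implementation, and the loop assumes a fitness that is to be maximized; for the value tree, whose fitness is a squared error to be minimized, one would maximize its negative.

```python
import random

POP_SIZE, GENERATIONS = 300, 80
P_CROSSOVER, P_MUTATION = 0.70, 0.05

def evolve(historical_data, random_tree, fitness, crossover, mutate):
    """Evolve a population of trees and return the fittest individual."""
    population = [random_tree() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = [fitness(tree, historical_data) for tree in population]
        # Roulette-wheel selection: weight parents by (non-negative) fitness;
        # fall back to uniform selection if all scores are non-positive.
        total = sum(s for s in scores if s > 0)
        weights = [max(s, 0.0) / total for s in scores] if total > 0 else None
        offspring = []
        while len(offspring) < POP_SIZE:
            parent1, parent2 = random.choices(population, weights=weights, k=2)
            child = crossover(parent1, parent2) if random.random() < P_CROSSOVER else parent1
            if random.random() < P_MUTATION:
                child = mutate(child)
            offspring.append(child)
        population = offspring
    return max(population, key=lambda t: fitness(t, historical_data))
```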

2.1. Value Tree

A value tree is a traditional function tree. The final form of the value tree is represented as a function, and the output of this function is a numeric value. For a value tree, the operation set $F$ and dataset $T$ are as follows:
(1)
Operation set $F = \{+, -, \times, \div, \log, \exp\}$;
(2)
Dataset $T = \{t, fl, av, h, l\}$,
where $t$ denotes the $t$-th trade of the Dow Jones index time series; $fl$ denotes the average fluctuation of the closing price; $av$ denotes the average closing price; $h$ denotes the highest closing price; and $l$ denotes the lowest closing price.
A value tree is a function composed of the operation set $F$ and the dataset $T$:

$$\mathrm{valueTree} = f(F, T)$$

We define the absolute value of the Dow Jones index between two trading points as follows:

$$d_{t,t-1} = \left| x_t - x_{t-1} \right|$$

The “machine economist” can calculate the absolute value between two trading points using the value tree:

$$d'_{t,t-1} = \left| f(F, t, fl, av, h, l) - f(F, t-1, fl, av, h, l) \right|$$

Now we can define the fitness function for the value tree as follows:

$$\mathrm{fitness}(\mathrm{valueTree}) = \sum_{t=1}^{n} \left( d'_{t,t-1} - d_{t,t-1} \right)^2$$
where $d'_{t,t-1}$ is the absolute value calculated by the value tree in (5), and $d_{t,t-1}$ is the observed absolute value of the market in (4). The fitness function is a particular type of objective function that summarizes, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in GP to guide the simulation towards optimal design solutions. To reach the optimal solution, the GP algorithm implements a continuous evolution process through selection, crossover, and mutation. The goal of this continuous evolution is to find a satisfactory value tree that makes $d'_{t,t-1}$ as close to $d_{t,t-1}$ as possible.
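A minimal sketch of evaluating a value tree and its fitness, assuming the tree is stored as a nested tuple over the operation set $F$ and the terminal set $T$; the representation and helper names are illustrative, not the authors' code.

```python
import math

def eval_value_tree(node, env):
    """Recursively evaluate a value tree for the environment env = {t, fl, av, h, l}."""
    if isinstance(node, str):                     # terminal: 't', 'fl', 'av', 'h', 'l'
        return env[node]
    op, *children = node
    vals = [eval_value_tree(c, env) for c in children]
    if op == '+':   return vals[0] + vals[1]
    if op == '-':   return vals[0] - vals[1]
    if op == '*':   return vals[0] * vals[1]
    if op == '/':   return vals[0] / vals[1] if vals[1] != 0 else 1.0   # protected division
    if op == 'log': return math.log(abs(vals[0]) + 1e-9)                # protected log
    if op == 'exp': return math.exp(min(vals[0], 50))                   # capped exp
    raise ValueError(f"unknown operation {op!r}")

def fitness_value_tree(tree, observed_d, env_base):
    """Sum of squared differences between tree-implied and observed |x_t - x_{t-1}|."""
    err = 0.0
    for t, d_obs in enumerate(observed_d, start=1):
        d_pred = abs(eval_value_tree(tree, {**env_base, 't': t})
                     - eval_value_tree(tree, {**env_base, 't': t - 1}))
        err += (d_pred - d_obs) ** 2
    return err

# The value tree evolved in Section 3.1 is simply t * fl:
dow_tree = ('*', 't', 'fl')
```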

2.2. Logic Tree

A logic tree is a matrix tree constructed from eight basic quantum gates. The final form of a logic tree is represented as a matrix, and the output derived from this matrix is a vector (an action, for example, buy or sell). The purpose of the logic tree is to simulate the decision-making process of the “machine economist”. Table 1 shows that the Dow Jones index has two states, $q_1$ (index up) and $q_2$ (index down), and that the “machine economist” has two possible actions to take, $a_1$ (buy) and $a_2$ (sell). $p_1|x$, $p_1|-x$, $p_2|-x$, and $p_2|x$ are the four possible outcomes, determined jointly by the market and the “machine economist”. For example, $p_1|x$ means that the “machine economist” takes action $a_1$ (buy) with a subjective probability of $p_1$ and profits $x$ amount of money because the index is up ($q_1$); $p_2|-x$ means that the “machine economist” takes action $a_2$ (sell) with a subjective probability of $p_2$ and loses $x$ amount of money because the index is up ($q_1$).
The market influences the traders’ decisions, while all the traders’ actions in turn decide the market’s state. This interaction between the objective (the state of the market) and the subjective (the traders’ beliefs) is what causes both the result of the decisions (gain or loss) and the state of the market (up or down) to be uncertain.
The state of the market describes the objective world; it can be represented by the superposition of all possible states in terms of the Hilbert state space as shown below [27,28].
$$|\psi\rangle = c_1 |q_1\rangle + c_2 |q_2\rangle$$

where $|q_1\rangle$ denotes a state in which the market has risen, and $|q_2\rangle$ denotes a state in which the market has fallen; $|c_1|^2$ is the objective frequency of the rising market, and $|c_2|^2$ is the objective frequency of the falling market.
The state of the trader’s mind is the subjective world. We postulate that when the trader is undecided about making a trade (buy or sell), it can be represented by a superposition of all possible actions as follows:

$$|\phi\rangle = \mu_1 |a_1\rangle + \mu_2 |a_2\rangle$$

where $|a_1\rangle$ denotes the trader’s action to buy, and $|a_2\rangle$ denotes the trader’s action to sell; $p_1 = |\mu_1|^2$ is the trader’s degree of belief in betting that the market will rise, and $p_2 = |\mu_2|^2$ is the trader’s degree of belief in betting that the market will fall.
The information available to the “machine economist” prior to making its decision is incomplete; it does not know whether the market will rise or fall, which forces it, essentially, to guess. Before the “machine economist” makes a decision, its mind is in a pure state: a superposed state in which buying and selling coexist. In reality, however, the “machine economist” cannot buy and sell simultaneously. When it makes the decision, the state of its mind transforms from the pure state $\rho$ into a mixed state $\rho'$, in which it has decided either to buy or to sell with certain degrees of belief. This transformation is the “machine economist” choosing one of the available actions: action $a_1$ (buy) with probability $p_1$ or action $a_2$ (sell) with probability $p_2$, as shown below.
$$\text{Trading process:} \quad \rho = |\phi\rangle\langle\phi| \xrightarrow{\text{decision}} \rho' = p_1 |a_1\rangle\langle a_1| + p_2 |a_2\rangle\langle a_2|$$

Expressed in matrix form:

$$\rho = \begin{pmatrix} \rho_{11} & \rho_{12} \\ \rho_{21} & \rho_{22} \end{pmatrix} \xrightarrow{\text{diagonalization}} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \xrightarrow{\text{normalization}} \rho' = \begin{pmatrix} p_1 & 0 \\ 0 & p_2 \end{pmatrix} = p_1 |a_1\rangle\langle a_1| + p_2 |a_2\rangle\langle a_2|$$

$$|a_1\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |a_2\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}; \qquad |a_1\rangle\langle a_1| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad |a_2\rangle\langle a_2| = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$
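The diagonalization-and-normalization step can be sketched numerically as follows. Taking the absolute values of the eigenvalues before normalizing, and mapping the two normalized eigenvalues to $p_1$ and $p_2$ in ascending order, are our assumptions for illustration rather than details taken from the paper.

```python
import numpy as np

def decision_probabilities(rho):
    """Diagonalize a 2x2 matrix and normalize its eigenvalues into (p1, p2)."""
    rho = (rho + rho.conj().T) / 2              # keep the Hermitian part for a real spectrum
    eigvals = np.linalg.eigvalsh(rho)           # eigenvalues lambda_1 <= lambda_2
    weights = np.abs(eigvals)                   # assumption: use magnitudes before normalizing
    p1, p2 = weights / weights.sum()            # normalize so that p1 + p2 = 1
    return float(p1), float(p2)

# Example with an arbitrary Hermitian 2x2 matrix:
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])
print(decision_probabilities(rho))              # approximately (0.22, 0.78)
```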
The pure state (quantum density matrix) ρ can be approximately constructed from eight basic quantum gates. For a logic tree the operation set and dataset are as follows:
(1)
Operation set $F = \{+, \times, //\}$;
(2)
Dataset $T = \{H, X, Y, Z, S, D, T, I\}$,

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \quad D = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix} \quad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

where $+$ means that two matrices are added, $\times$ means that two matrices are multiplied, and $//$ means that one of two branches is randomly selected. $H, X, Y, Z, S, D, T, I$ are eight basic quantum gates (2 × 2 matrices) [29,30].
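As an illustration, the gate set and the three logic-tree operations can be written in numpy as below. This is a sketch of our reading of the construction, not the authors' code; in particular, the entries of the D gate are reproduced as printed above (where they coincide with X), so treat that entry as an assumption.

```python
import random
import numpy as np

s2 = 1 / np.sqrt(2)
GATES = {
    'H': s2 * np.array([[1, 1], [1, -1]], dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
    'S': np.array([[1, 0], [0, 1j]], dtype=complex),
    'D': np.array([[0, 1], [1, 0]], dtype=complex),   # as printed in the text (assumption)
    'T': np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex),
    'I': np.eye(2, dtype=complex),
}

def eval_logic_tree(node):
    """Evaluate a logic tree stored as nested tuples (op, left, right) over the gate set."""
    if isinstance(node, str):
        return GATES[node]
    op, left, right = node
    if op == '+':   return eval_logic_tree(left) + eval_logic_tree(right)   # matrix sum
    if op == '*':   return eval_logic_tree(left) @ eval_logic_tree(right)   # matrix product
    if op == '//':  return eval_logic_tree(random.choice([left, right]))    # random branch
    raise ValueError(f"unknown operation {op!r}")

# Example: the matrix of H*X + (Z // I), where // picks one branch at random.
rho = eval_logic_tree(('+', ('*', 'H', 'X'), ('//', 'Z', 'I')))
```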
A logic tree, composed of the operation set $F$ and the dataset $T$, determines the action taken by the “machine economist” and calculates the closing price of the Dow Jones index at the different trading points:

$$\mathrm{logicTree} = f(F, T)$$

With a logic tree, the “machine economist” can decide the action to be taken $a_t$ and the closing price $x'_t$ ($d'_{t,t-1}$ is calculated by the value tree in (5)):

$$a_t = \mathrm{logicTree}(F, T) = \begin{cases} 0, & \text{buy is executed (degree of belief } p_1\text{)} \\ 1, & \text{sell is executed (degree of belief } p_2\text{)} \end{cases}$$

$$x'_t = \begin{cases} x'_{t-1} + d'_{t,t-1}, & \text{if } a_t = 0 \\ x'_{t-1} - d'_{t,t-1}, & \text{if } a_t = 1 \end{cases}$$
The next step is to find a way to optimize the logic tree into a group of satisfactory strategies to guide the “machine economist’s” decisions. To optimize anything, two things are needed: first, a good evaluation function, and second, a way to acquire an optimal solution. In our model, the “machine economist” tries to maximize its expected value when making any trading decision. Thus, we need to evaluate how “fit” the results (profit or deficit) of the “machine economist’s” decisions are, which is done by using the expected value in (13) as the fitness function and optimizing the logic tree by evolving it. The whole idea of having GP go through an iterative evolution loop is to find a satisfactory logic tree, i.e., the most optimal solution, by learning the historical data. The learning rules are as follows:
(1)
If the Dow Jones index is up ($q_1$):
  • If the “machine economist” bets that the Dow Jones index is up and buys ($a_1 = 0$), it profits $d_{t,t-1}$;
  • If the “machine economist” bets that the Dow Jones index is down and sells ($a_2 = 1$), it loses $d_{t,t-1}$.
(2)
If the Dow Jones index is down ($q_2$):
  • If the “machine economist” bets that the Dow Jones index is down and sells ($a_2 = 1$), it profits $d_{t,t-1}$;
  • If the “machine economist” bets that the Dow Jones index is up and buys ($a_1 = 0$), it loses $d_{t,t-1}$.
The expected value of the “machine economist” at the $t$-th trade is as follows:

$$EV_t = \begin{cases} p_1 d_{t,t-1}, & \text{the market is up and the machine economist buys with degree of belief } p_1 \\ -p_2 d_{t,t-1}, & \text{the market is up and the machine economist sells with degree of belief } p_2 \\ -p_1 d_{t,t-1}, & \text{the market is down and the machine economist buys with degree of belief } p_1 \\ p_2 d_{t,t-1}, & \text{the market is down and the machine economist sells with degree of belief } p_2 \end{cases}$$
Now we can define the fitness function for the logic tree as follows:
$$\mathrm{fitness}(\mathrm{logicTree}) = \sum_{t=1}^{n} EV_t$$
$\mathrm{fitness}(\mathrm{logicTree})$ maximizes the “machine economist’s” expectations ($\max_{\mathrm{logicTree}} \sum_{t=1}^{n} EV_t$), while $\mathrm{fitness}(\mathrm{valueTree})$ applies negative feedback to make $d'_{t,t-1}$ equal $d_{t,t-1}$. The logic tree together with the value tree reconstructs the trajectory of the Dow Jones index ($\{q'_k, x'_k\}$ equals $\{q_k, x_k\}$) and makes a prediction about future outcomes as follows:

$$d'_{n+1,n} = \left| \mathrm{valueTree}(F, n+1, fl, av, h, l) - \mathrm{valueTree}(F, n, fl, av, h, l) \right|$$

$$a_n = \mathrm{logicTree}(+, \times, //, H, X, Y, Z, S, D, T, I) = \begin{cases} 0, & \text{the machine economist takes action } a_1 \text{ (buy)} \\ 1, & \text{the machine economist takes action } a_2 \text{ (sell)} \end{cases}$$

$$x'_{n+1} = \begin{cases} x_n + d'_{n+1,n}, & \text{if } a_n = 0 \\ x_n - d'_{n+1,n}, & \text{if } a_n = 1 \end{cases}$$
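A sketch of the learning rules and the logic-tree fitness in Python: at each trade, the reward is the belief-weighted $\pm d$ depending on whether the sampled bet matches the observed market state, and the fitness is the sum of these rewards. The function names and the `belief` callback are illustrative; in the full model, the pair $(p_1, p_2)$ comes from the evolved logic tree and $d$ from the value tree.

```python
import random

def expected_value(state, action, p1, p2, d):
    """EV_t: profit when the bet matches the market state, deficit otherwise."""
    if state == 0:                                   # market up
        return p1 * d if action == 0 else -p2 * d    # buy profits, sell loses
    return -p1 * d if action == 0 else p2 * d        # market down: buy loses, sell profits

def fitness_logic_tree(series, belief, d):
    """series: [(q_k, x_k)] pairs; belief(t) returns (p1, p2) for the t-th trade."""
    total = 0.0
    for t, (state, _price) in enumerate(series[1:], start=1):
        p1, p2 = belief(t)
        action = 0 if random.random() < p1 else 1    # sample buy/sell with the degrees of belief
        total += expected_value(state, action, p1, p2, d)
    return total
```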

3. Results

The Dow Jones index from 1 to 30 December 2022 is used for training the “machine economist”, as shown in Table 2. The first column indicates the state, where 0 means the index went up and 1 means the index went down; the second column indicates the closing price of the index. The first row is the initial condition: −1 indicates that the state is uncertain, and 34,395.01 is the base closing price for machine learning.

3.1. Dow Jones Index’s Value Tree

By applying the fitness function $\mathrm{fitness}(\mathrm{valueTree})$ (6), the “machine economist” continuously learns the historical data and evolves a satisfactory value tree, shown in Figure 1:

$$\mathrm{valueTree}_{dowJones} = t \times fl$$

where $t$ denotes the $t$-th trade of the Dow Jones index, and $fl$ denotes the average fluctuation of the closing price. The “machine economist” can use this value tree ($\mathrm{valueTree}_{dowJones}$) to calculate the absolute value between two trading points of the Dow Jones index:

$$d'_{t,t-1} = \left| t \times fl - (t-1) \times fl \right| = fl = 265$$

3.2. Dow Jones Index’s Logic Tree

By applying the fitness function $\mathrm{fitness}(\mathrm{logicTree})$ (14) and $d'_{t,t-1} = 265$ (17), the “machine economist” continuously learns the historical data of the Dow Jones index and evolves a satisfactory logic tree, shown in Figure 2.
Figure 3 shows the optimization curve of the evolution algorithm. Figure 4 shows the learning curve of the evolution algorithm. As shown in Figure 4, the expected value of this logic tree is 5101, very close to the maximum expected value of the Dow Jones index time series (1–30 December 2022), which is 5308.
The $\mathrm{logicTree}_{dowjones}$ (18) provides two strategies, $S_1$ and $S_2$; the “machine economist” randomly chooses one of the two and applies the chosen strategy to decide which action to take (buy or sell, with a subjective degree of belief). If strategy $S_1$ is chosen, the “machine economist” is 100% sure that the index will go up (buy); if strategy $S_2$ is chosen, it is 100% sure that the index will go down (sell). Combining $\mathrm{logicTree}_{dowjones}$ (18) and $d'_{t,t-1}$ (17), the “machine economist” determines the action to be taken, $a_t$ (19), and calculates the closing price, $x'_t$ (20), of the Dow Jones index.
$$\mathrm{logicTree}_{dowjones} = D H + D \left( I \;//\; Z \right) + Z I$$

  • $S_1 = D H + D Z + Z I \rightarrow |a_1\rangle\langle a_1| \quad (p_1 = 100\%, \; p_2 = 0)$
  • $S_2 = D H + D I + Z I \rightarrow |a_2\rangle\langle a_2| \quad (p_1 = 0, \; p_2 = 100\%)$

$$a_t = \begin{cases} 0, & \text{if } S_1 \text{ is selected} \\ 1, & \text{if } S_2 \text{ is selected} \end{cases}$$

$$x'_t = \begin{cases} x'_{t-1} + 265, & \text{if } a_t = 0 \text{ (buy)} \\ x'_{t-1} - 265, & \text{if } a_t = 1 \text{ (sell)} \end{cases}$$
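A small Python sketch of the resulting trading rule applied to the December 2022 data: at each step the “machine economist” picks $S_1$ or $S_2$ at random, moves the computed price by ±265, and scores a win when the direction of its bet matches the observed state. Because the strategy choice is random, the win count varies from run to run; the figures reported below (19 wins, 1 loss; 9 buys, 11 sells) correspond to the run shown in Figures 5 and 6.

```python
import random

D = 265  # absolute step size from the evolved value tree

def reconstruct(series):
    """series: [(state, close)] pairs from Table 2; returns computed closes and win count."""
    x = series[0][1]                    # initial close, e.g., 34,395.01
    computed, wins = [x], 0
    for state, _close in series[1:]:
        action = random.choice([0, 1])  # 0: strategy S1 (buy), 1: strategy S2 (sell)
        x = x + D if action == 0 else x - D
        computed.append(x)
        wins += int(action == state)    # the bet direction matched the market move
    return computed, wins
```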
By applying Equations (19) and (20), the “machine economist” can then reconstruct the trajectory of the Dow Jones index, as shown in Figure 5, with 19 wins and 1 loss. The $\mathrm{logicTree}_{dowjones}$ randomly selects strategy $S_1$ or strategy $S_2$, producing 9 buy events and 11 sell events, as shown in Figure 6, very close to the market’s 10 up moves and 10 down moves. In Figure 6, a positive bar represents a buy action taken by the logic tree, with 100 meaning a 100% degree of belief to buy; a negative bar represents a sell action, with −100 meaning a 100% degree of belief to sell. The random buy and sell actions taken by $\mathrm{logicTree}_{dowjones}$ imply that the evolution algorithm believes the market is efficient, i.e., that the market is walking randomly.
Although the “machine economist” approximately reconstructs the price trajectory of the index, it can only make a 50/50 probability prediction of the future state of the Dow Jones index (up or down) by randomly choosing strategy 1 (believe the index will rise) or strategy 2 (believe the index will fall). In other words, without using partial differential equations or joint probabilities, the “machine economist” independently discovers that the market is efficient (random walk).

4. Discussion

More than a hundred years ago, Louis Bachelier found the similarity between stock price movements and Brownian motion by studying Paris stock market data, and he applied a normal distribution, via stochastic differential equations, to describe the movement of stock prices. In this paper, the “machine economist” constructs a theory about financial markets by studying a Dow Jones index time series: it applies an algorithm, treats the data structure of the market as unknown, and discovers through machine learning that the market is efficient (random walk). It should be emphasized that, unlike Bachelier, the “machine economist” discovers the efficient market hypothesis through machine learning with the evolution algorithm, without using any stochastic differential equations or the rational-economic-man hypothesis.
Based on the superposition principle of quantum mechanics, we introduce objective (market) and subjective (trader) dual uncertainty into decision theory (Equations (7) and (8)). A “quantum jump” is applied to explain the decision process: the decision process is a projection from a pure state to a mixed state (Equation (9)). A quantum density matrix in a pure state ($\rho = |\phi\rangle\langle\phi|$) exhibits quantum interference, like Schrödinger’s cat that is dead and alive at the same time, a market that is up and down at the same time, or a trader who can buy and sell at the same time; the mixed state is the classical statistical state, in which the market can only be up or down and a trader can only buy or sell. Furthermore, we used eight fundamental quantum gates to construct the quantum density matrix (pure state) and optimized it through evolutionary algorithms.
Time series in the complex real world rarely follow a definite probability distribution, so the key is not to find the probability distribution of the time series but to extract valuable information (experience or knowledge) and apply the learned experience to make decisions. In this paper, the “machine economist” uses a logic tree and a value tree together to study historical data in order to obtain useful information (the state and the absolute value between two trading points). Instead of fitting the curve (the price fluctuations of the Dow Jones index) with a single equation, the “machine economist” first uses the value tree to find the absolute value of the price difference between two trading points and then uses the logic tree to determine the action to be taken (with degrees of belief). The value tree (objective) and the logic tree (subjective) fit the curve together.

Author Contributions

All authors conducted the research and contributed to the development of the model. H.X. contributed as an expert in quantum theory and non-linear science. L.X. contributed to the research from the aspects of machine learning and decision theory and wrote the code. L.X. and K.X. wrote this manuscript and performed the data analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available on request.

Acknowledgments

We would like to thank the anonymous referees of this journal, whose comments substantially improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944.
  2. Savage, L.J. The Foundations of Statistics; Dover Publications Inc.: New York, NY, USA, 1954.
  3. Binmore, K. Rational Decisions; Princeton University Press: Princeton, NJ, USA, 2009.
  4. Kahneman, D.; Tversky, A. Prospect theory: An analysis of decision under risk. Econometrica 1979, 47, 263–292.
  5. Simon, H.A. Reason in Human Affairs; Stanford University Press: Stanford, CA, USA, 1983.
  6. Ashtiani, M.; Azgomi, M.A. A survey of quantum-like approaches to decision making and cognition. Math. Soc. Sci. 2015, 75, 49–80.
  7. Busemeyer, J.R.; Bruza, P.D. Quantum Models of Cognition and Decision; Cambridge University Press: Cambridge, UK, 2012.
  8. Haven, E.; Khrennikov, A. Quantum Social Science; Cambridge University Press: Cambridge, UK, 2013.
  9. Aerts, D.; Aerts, S. Applications of quantum statistics in psychological studies of decision processes. Found. Sci. 1995, 1, 85–97.
  10. Aerts, D. Quantum structure in cognition. J. Math. Psychol. 2009, 53, 314–348.
  11. Busemeyer, J.; Franco, R. What is the evidence for quantum like interference effects in human judgments and decision behavior? NeuroQuantology 2010, 8, S48–S62.
  12. Busemeyer, J.R.; Franco, R.; Pothos, E.M. Quantum probability explanations for probability judgment errors. Psychol. Rev. 2010, 118, 193.
  13. Wang, Z.; Busemeyer, J.R. A quantum question order model supported by empirical tests of an a priori and precise prediction. Top. Cogn. Sci. 2013, 5, 689–710.
  14. Khrennikov, A.; Basieva, I.; Dzhafarov, E.N.; Busemeyer, J.R. Quantum models for psychological measurements: An unsolved problem. PLoS ONE 2014, 9, e110909.
  15. Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y. A quantum-like model of selection behavior. J. Math. Psychol. 2017, 78, 2–12.
  16. Basieva, I.; Khrennikova, P.; Pothos, E.M.; Asano, M.; Khrennikov, A. Quantum-like model of subjective expected utility. J. Math. Econ. 2018, 78, 150–162.
  17. Ozawa, M.; Khrennikov, A. Application of theory of quantum instruments to psychology: Combination of question order effect with response replicability effect. Entropy 2019, 22, 37.
  18. Ozawa, M.; Khrennikov, A. Modeling combination of question order effect, response replicability effect, and QQ-equality with quantum instruments. J. Math. Psychol. 2021, 100, 102491.
  19. Yukalov, V.I.; Sornette, D. Physics of risk and uncertainty in quantum decision making. Eur. Phys. J. B 2009, 71, 533–548.
  20. Yukalov, V.I.; Sornette, D. Quantum probabilities as behavioral probabilities. Entropy 2017, 19, 112.
  21. Yukalov, V.I. Evolutionary processes in quantum decision theory. Entropy 2020, 22, 681.
  22. Xin, L.; Xin, H. Decision-making under uncertainty—A quantum value operator approach. Int. J. Theor. Phys. 2023, 62, 48.
  23. Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
  24. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Publishing Company, Inc.: New York, NY, USA, 1989.
  25. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992.
  26. Koza, J.R. Genetic Programming II: Automatic Discovery of Reusable Programs; MIT Press: Cambridge, MA, USA, 1994.
  27. Von Neumann, J. Mathematical Foundations of Quantum Mechanics; Princeton University Press: Princeton, NJ, USA, 1932.
  28. Dirac, P.A.M. The Principles of Quantum Mechanics; Oxford University Press: Oxford, UK, 1958.
  29. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000.
  30. Benenti, G.; Casati, G.; Strini, G. Principles of Quantum Computation and Information I; World Scientific Publishing: Singapore, 2004.
Figure 1. Value tree for the Dow Jones index.
Figure 2. Logic tree for the Dow Jones index.
Figure 3. The optimization curve of the evolution algorithm.
Figure 4. The learning curve of the evolution algorithm.
Figure 5. Trajectory of the Dow Jones index (the blue line is the observed closing price, and the red line is the computed closing price).
Figure 6. The buy (positive) or sell (negative) actions taken by the logic tree with degrees of belief (buy: 100; sell: −100).
Table 1. State–action–value decision table.

Action      State $q_1$      State $q_2$
$a_1$       $p_1 | x$        $p_1 | -x$
$a_2$       $p_2 | -x$       $p_2 | x$
Table 2. The Dow Jones index (1–30 December 2022).

State    Closing Price
−1       34,395.01
0        34,429.88
1        33,947.10
1        33,596.34
0        33,597.92
0        33,781.48
1        33,476.46
0        34,005.04
0        34,108.64
1        33,966.35
1        33,202.22
1        32,920.46
1        32,757.54
0        32,849.74
0        33,376.48
1        33,027.49
0        33,203.93
0        33,241.56
1        32,875.71
0        33,220.80
1        33,147.25
