Article

Phishing for (Quantum-Like) Phools—Theory and Experimental Evidence †

by Ariane Lambert-Mogiliansky 1,* and Adrian Calmettes 2
1 Department of Economic Theory, Paris School of Economics, 75014 Paris, France
2 Department of Political Science, The Ohio State University, Columbus, OH 43210, USA
* Author to whom correspondence should be addressed.
† Phishing for Phools—The Economics of Manipulation and Deception, 2015.
Symmetry 2021, 13(2), 162; https://doi.org/10.3390/sym13020162
Submission received: 14 December 2020 / Revised: 8 January 2021 / Accepted: 11 January 2021 / Published: 21 January 2021
(This article belongs to the Special Issue Symmetries in Quantum Mechanics)

Abstract: Quantum-like decision theory is by now a theoretically well-developed field (see e.g., Danilov, Lambert-Mogiliansky & Vergopoulos, 2018). We provide a first test of the predictions of an application of this approach to persuasion. One remarkable result entails that, in contrast to Bayesian persuasion, distraction rather than relevant information has a powerful potential to influence decision-making. We first develop a quantum decision model of choice between two uncertain alternatives. We derive the impact of persuasion by means of distractive questions and contrast it with the predictions of the Bayesian model. Next, we provide the results from a first test of the theory. We conducted an experiment in which respondents choose between supporting one of two projects to save endangered species. We tested the impact of persuasion in the form of questions related to different aspects of the uncertain value of the two projects. The experiment involved 1253 respondents divided into three groups: a control group, a first treatment group and the distraction treatment group. Our main result is that, in accordance with the predictions of quantum persuasion but in violation of the Bayesian model, distraction significantly affects decision-making. Population variables play no role. Some significant variations between subgroups are exhibited and discussed. The results of the experiment provide support for the hypothesis that the manipulability of people’s decision-making can, to some extent, be explained by the quantum indeterminacy of their subjective representation of reality.

1. Introduction

Why is the famous P&G (Procter and Gamble) 2010 “Thank you, Mom” advertisement [1], showing devoted mothers supporting young athletes, among the most successful ads of all time? The pervasiveness of informationally irrelevant messaging in advertising is stunning. In this paper, we present and provide experimental evidence for an application of quantum-like decision-making theory that explains why distraction—i.e., addressing informationally irrelevant issues—can be a powerful manipulation technique.
The idea that people are being influenced and manipulated by the systematic exploitation of non-rational psychological factors, rather than by information that is rationally processed, was first forcefully put forward in Vance Packard’s seminal 1957 book “The Hidden Persuaders” [2]. His thesis is that persuaders rely on psychiatric and psychological techniques to address their message to our “wild and unruly subconscious”. Later, Cialdini developed a “science of persuasion” based on general behavioral principles (e.g., a bias for reciprocity) that can be exploited to influence people’s choices (see e.g., [3,4]). Closer to our approach, which focuses on information processing, is an early work by Festinger and Maccoby [5]. They published the first experiment showing that distraction can induce attitude change: a message has larger persuasion power among respondents subjected to distraction. Their idea is that distraction in the course of information processing makes attempts to provide counter-arguments less successful. This in turn makes people more vulnerable to the persuader’s message. Later, a broad literature in psychology developed showing that distraction may decrease attention and impair learning and remembering, opening up for manipulation [4,6,7,8,9,10,11]. Failures in information processing are also at the heart of Nobel-prize winner Kahneman’s best-selling book “Thinking, Fast and Slow” [12]. In the last section, we discuss how they relate to our approach to distraction.
More recently, Akerlof and Shiller (2015) provided loads of evidence showing that people are systematically “phished” in economic transactions. The authors suggest that this is due to the significance of the story people tell themselves when making decisions, i.e., the “narratives” or, as they also write, “the focus of the mind”. They conclude: “just change people’s focus and you can change the decisions they make” (p. 173, [13]). Emphasizing the significance of the “narratives” is closely related to a rich literature in psychology on framing effects (see among others [14,15,16,17,18,19,20,21]).
Quantum cognition offers an approach to the concept of narratives in terms of perspectives on reality [22]. Formally, a perspective is a coordinate system of a state space, and there exists a number of equally valid alternative coordinate systems. Different perspectives can be simultaneously true but not compatible with each other. In this paper, we rely on a formalisation of the concept of narratives in line with the general theory of quantum decision-making. As shown in [23], that theory delivers a power of distraction in the terms of Akerlof and Shiller. The power of distraction arises from non-Bayesian information processing reflecting the mathematical structure of the quantum model. Experimental evidence (see e.g., [24,25]) shows that people oftentimes systematically depart from Bayes’ rule when confronted with new information. The cognitive sciences propose a number of alternatives to Bayesianism (see e.g., [26,27,28,29]). The attractiveness of the quantum approach is partly due to the fact that quantum mechanics has properties reminiscent of the paradoxical phenomena exhibited in human cognition. In addition, quantum cognition has been successful in explaining a wide variety of behavioral phenomena such as the disjunction effect, cognitive dissonance or preference reversal (see among others [30,31,32,33,34,35]). Importantly, there exists by now a fully developed decision theory in the context of non-classical (quantum) uncertainty. Different formulations of that theory exist, including that by Aerts et al. [36]. In this paper, we rely on the formulation developed by Danilov et al. [37,38]. Clearly, the mind is likely to be even more complex than a quantum system, but our view is that the quantum cognitive approach already delivers interesting new insights, in particular with respect to persuasion.
In quantum cognition, the object of interest is the decision-maker’s mental representation of the world. It is modelled as a quantum-like system represented by its state—a cognitive state which is the equivalent of beliefs in the classical context. In quantum cognition, the decision-relevant uncertainty is consequently of non-classical (quantum) nature. As argued in [22], this modelling approach allows capturing widespread cognitive limitations in information processing. The key quantum property that we use is the “Bohr complementarity” of characteristics (properties) of the mental object (representation of the world). The decision-maker cannot consider all properties simultaneously, i.e., they cannot all have a definite value in her mind. Instead, the decision-maker processes information sequentially, moving from one perspective to the other, and order matters.
As in the classical context, our rational decision-maker uses new information to update her beliefs. A rational quantum-like decision-maker is a decision-maker who has preferences over mental objects representing items (or actions). These mental objects are modelled as quantum-like systems. Her preferences satisfy a number of axioms that secure that they can be represented by an expected utility function. From [38] we know that a dynamically consistent quantum-like decision-maker updates her beliefs according to Lüders’ postulate, which in Quantum Mechanics governs the state transition following the measurement of a system. In two recent papers, important theoretical results were established. First, it is shown in [39] that in the absence of constraints (on the number of operations that trigger updating), full persuasion applies: Sender can always persuade Receiver to believe anything that he wants. Next, in [23] the same authors investigate a short sequence of operations, but in the frame of a simpler task that they call “targeting”. The object of “targeting” is the transition of a belief state into another, specified target state. The main result of relevance to our issue is that distraction, i.e., a test or question that generates irrelevant but “Bohr complementary” information, has significant persuasion power. In contrast, a Bayesian decision-maker does not update her beliefs when the information is not relevant to her concern and thus cannot be persuaded in this manner to change her decision.
In the present paper, we first formulate a model of quantum-like decision-making in the context of a choice between two uncertain alternatives. The model is used to derive the impact of relevant and distractive information, respectively, on choice behavior. The results are contrasted with those of Bayesian persuasion. A contribution of the paper is to provide a first (illustrative) experimental test of the model’s predictions. We opted for a more basic treatment of the data because the quantum persuasion model is so rich that a rigorous estimation of the relevant parameters is beyond the scope of the present paper. The experimental situation that we consider is the following. People are invited to choose between two projects aimed at saving endangered species (elephants and tigers). The selected project will receive a donation of 50 euros (one randomly selected respondent will determine the choice). We consider two perspectives of relevance for the choice: the urgency of the cause and the honesty of the organization that manages the donations. As a first step, and in a separate experiment, we establish that the two perspectives are incompatible by exhibiting a significant order effect, which is the signature of incompatible measurements (see [40]). In the main experiment, 1253 respondents are divided into three groups: a control group and two treatment groups. They all go through a presentation of the projects and some questions about their preferences. The difference between the groups is that the first treatment group is invited to answer a question about their beliefs of direct relevance to their choice, while the second must answer a question that distracts them from what is relevant to their choice. We find that, at the population level, the results are in accordance with the predictions of the quantum model: the distractive question has a significant impact on the respondents’ choices as compared with both the control group and the other treatment group. The pattern of reactions is disconnected from the thematic content of the distractive information screen, which is to be expected when the two perspectives are incompatible. In contrast, the question on decision-relevant beliefs had no significant impact compared to the control group. The data reveal some significant variation between subgroups with respect to their responsiveness to distraction. In particular, we find that people who care about the urgency of the cause are more responsive to distraction. We argue that this is consistent with the quantum model under the reasonable assumption that those people are more passionate about the issue. The quantum-like working of the mind is expected to be more pronounced for passionate people. This is because standard rational thinking, which denies the contextuality of mental representations, tends to constrain that spontaneous drive. We conclude with a discussion of rationality in information processing and relate our approach to other prominent behavioral theories.
This paper contributes to the economic literature on persuasion initiated by Kamenica and Gentzkow’s seminal article “Bayesian Persuasion” [41]. More precisely, it contributes to its recent development, which introduces various kinds of imperfections in information processing. One example is Bloedel and Segal’s “Persuasion with rational inattention” [42], which shows how Sender optimally exploits Receiver’s inattention. Another example is Lipnowski and Mathevet [43]. Their focus is on how Sender responds to Receiver’s problem with temptation and self-control by adapting the signal structure. More closely related to our work is Galperti’s “Persuasion—the art of changing worldviews” [44]. The author is interested in how a better-informed Sender can modify Receiver’s incorrect worldview with “surprises” that trigger a change in the support of Receiver’s beliefs. Our approach is different because we do not assume that there is a single correct worldview. As shown in [45], a large number of non-Bayesian rules systematically distort updated beliefs. However, Kamenica and Gentzkow’s concavification argument for optimal persuasion extends to such rules, which boil down to introducing some form of bias. Kamenica and Gentzkow’s result entails that Sender’s payoff is concave in Receiver’s beliefs, so that Sender’s problem can be formulated as the choice of Receiver’s posteriors. Our contribution departs more fundamentally from Kamenica and Gentzkow because quantum cognition relies on non-classical (quantum) uncertainty (contextuality), so that, in particular, that result does not apply. This approach reveals a powerful role for distraction in persuasion, and we provide some experimental evidence for it.
Our results also contribute to the literature in psychology by offering a novel explanation for the well-documented distraction-persuasion nexus. Our study supports the thesis that people’s propensity to be persuaded is due to the contextuality (intrinsic indeterminacy) of their representation of the world rather than to limited cognitive capacity or to some bias. In so doing, our paper contributes to the growing literature in quantum cognition (see recent contributions in [46,47,48,49] for other examples of how the (quantum) contextuality approach offers a new paradigm for explaining a variety of behavioral phenomena).
The paper is organized as follows. We first briefly review the classical Bayesian persuasion approach. Next, we provide a quantum-like model of choice between two uncertain alternatives. We formulate the predictions related to the impact of information on choice behavior. In the second part of the paper, we first describe the experimental set-up used to test the predictions. We thereafter report and interpret the results from the analysis of the data. We conclude with a discussion of our results in view of some of the existing literature.

2. Quantum Persuasion

2.1. Bayesian Persuasion

Let us first briefly describe Bayesian persuasion, an approach developed by Kamenica and Gentzkow [41] in a classical uncertainty setting. The subject matter of the theory of Bayesian persuasion is the use of an “information structure”, which we shall refer to as a “measurement” (in practical terms, it corresponds to an investigation, a test or a question), that generates new information in order to modify a person’s state of beliefs with the intent of making her act in a specific way.
More precisely, the setting involves two players, Sender and Receiver. Receiver chooses an action among a set of alternatives with uncertain consequences. An action yields consequences for both players. Sender may try to influence Receiver so that she chooses the action that is most valuable to him. A crucial element of the Bayesian persuasion approach is that Sender does not choose the information Receiver obtains. If he did, that would raise issues of strategic concealment and revelation. Instead, Sender chooses an “information structure” (IS), or a measurement, that is, a test, an investigation or a question. Sender is committed to truthfully revealing the outcome of the IS (e.g., he does not control the entity that performs the study). One example is in lobbying: a pharmaceutical company commissions a scientific laboratory to conduct a specific study of a drug’s impact, the result of which is delivered to the regulator. Another example, closer to our application here, is a question to Receiver: do you believe (Yes or No) that politicians’ climate inaction will lead to a global catastrophe within this century? The outcome of any IS is information. In our examples above, it is information about the impact of a drug or about the opinion (beliefs) of Receiver on the responsibility of politicians. This information generally affects Receiver’s beliefs, which in turn may affect her evaluation of the uncertain choice alternatives and therefore the choices she makes. Sender chooses an IS to move Receiver’s (expected) choice closest to his own preferred choice. In the classical context, Receiver updates her beliefs using Bayes’ rule, and therefore the power of Sender is constrained by Bayesian plausibility: the fact that the expected posteriors must equal the priors.

2.2. The Quantum Persuasion Approach

The quantum persuasion approach has been developed in the same vein as Bayesian persuasion: we are interested in how Sender can use an IS to influence Receiver’s choices. A central motivation is that persuasion seems much more influential than what comes out of the Bayesian approach. So, instead of assuming that agents are classical Bayesians, it has been proposed that their beliefs are quantum-like. A first line of justification is that people do not make decisions based on reality but based on a representation of that reality, a mental object. In quantum cognition, the decision-maker’s mental representation is modeled as a quantum-like system and characterized by a cognitive state. The decision-relevant uncertainty is therefore of a non-classical (quantum) nature. A second line of motivation is that, as argued in e.g., [30,38], this modeling approach allows capturing widespread cognitive limitations, in particular the fact that people face difficulties in combining different types of information into a stable picture. Instead, the picture (mental object) that emerges depends on the order in which information is processed. The key quantum property that we appeal to is ‘Bohr complementarity’ of attributes, i.e., that some attributes (or properties) of a mental object may be incompatible in the decision-maker’s mind: they cannot have definite values simultaneously. A central implication is that measurements (new information) modify the cognitive state in a non-Bayesian yet well-defined manner: the mental representation evolves in response to new information in accordance with Lüders’ rule, which, as shown in [38], secures the dynamic consistency of preferences. As shown in [23,39], a rational quantum-like decision-maker can be manipulated well beyond the limits imposed by Bayesian plausibility. In particular, Sender can exploit the incompatibility of certain attributes in Receiver’s mind by providing distracting information to modify her representation and consequent choice.

2.3. A 2 × 2 Quantum Decision Model

We next present a simplified model formulated in the terms of our experiment, that is, a choice between two uncertain options with two attributes each. For a general and detailed exposition of the formal framework of quantum-like decision-making, see [23].

2.3.1. The Representation of a Choice Alternative

We have two animal protection projects, Tiger Forever (TF) and the Elephant Crisis Fund (ECF). The initial information is incomplete, so their (utility) value is uncertain. Our decision-maker (DM) is endowed with a cognitive state which encapsulates the probability distribution over every possible state of the world. The notion of cognitive state is similar to the notion of belief. However, in contrast to beliefs, a cognitive state is not an (imperfect) image of the objective world. It is a mental construct, a representation of the objective world that evolves in a way that reflects the cognitive constraints we focus on: namely, that our DM cannot consider all perspectives (attributes) simultaneously. There exist perspectives that are not compatible in her mind; they are incompatible, or Bohr complementary. The notion of Bohr complementarity is a central feature of Quantum Mechanics. It relates to properties of a physical system that cannot have definite values simultaneously. In quantum cognition, it relates to properties of the mental object (the represented choice item). When the object has a determinate value with respect to one characteristic, i.e., the individual is subjectively certain about, e.g., the urgency of a cause, her beliefs about an incompatible characteristic, e.g., the honesty of the NGO, are necessarily mixed. As a consequence, the picture (representation) that arises depends on the order in which different pieces of information are processed.
The DM has an initial representation of the projects. To each project we associate a vector that captures the initial representation, i.e., the cognitive state with respect to that project (hereafter we refer to it simply as the project-state or the state). We use Dirac’s notation, which connects easily with the geometrical illustration. The initial states are denoted $|T\rangle$ and $|E\rangle$, with $T$ for Tiger Forever and $E$ for the Elephant Crisis Fund. The two states are modelled as independent systems, meaning that we assume that a measurement operation on one system has no impact on the other, i.e., there is no entanglement. Each (represented) project is characterized by two properties (or characteristics) which are assumed incompatible with each other (in the DM’s mind). We call them Urgency (of the cause) and Honesty (of the NGO managing the project). The Urgency property (or perspective) is represented by a two-dimensional space spanned by the pure (subjective certainty) states $|U\rangle$ and $|\bar{U}\rangle$, corresponding to the project being Urgent and not-Urgent, respectively. The Honesty perspective is represented by an alternative basis of the same state space, $(|H\rangle, |\bar{H}\rangle)$, corresponding to an Honest and a not-Honest NGO, respectively. The fact that the two bases span the same space is the geometrical expression of the (subjective) incompatibility of the two properties.

2.3.2. Preferences

Individual preferences are captured by the utility value attributed by the DM to the projects in the possible pure states, e.g., $|U\rangle$ or $|\bar{U}\rangle$. The utility of an uncertain state is calculated as a linear combination of those values. In addition, individual preferences are characterized by a “preferred perspective” corresponding to the (most) decision-relevant characteristic of the item for the individual (e.g., the Urgency of the cause). When two or more perspectives are incompatible in her mind, the individual uses her preferred perspective to evaluate the expected utility of a project. This means that whereas our DM is capable of looking at alternative perspectives on the same item, when it comes to evaluation, she evaluates utility from one and the same perspective throughout the game. This secures that in any belief state the utility value is uniquely defined, while other, incompatible perspectives affect choice through their impact on the belief state (see below). In the following, we assume that the preferred perspective is the same for the two projects.
This is illustrated in Figure 1. The two projects are represented by two distinct states $|T\rangle$ and $|E\rangle$.
The figure reads as follows. For a DM endowed with preferences that define Urgency (U) as her preferred perspective (a U-individual), the expected utility value of contributing 50 euros to the Elephant Crisis Fund in project-state $|E\rangle$ is denoted $u(ECF; E, U)$. It depends on two things: 1. her beliefs (about the cause’s urgency) encapsulated in state $|E\rangle$ and 2. her valuation of contributing to an urgent, respectively non-urgent, ECF project. We denote these values $x_U$ and $x_{\bar{U}}$, respectively. It is useful to express a U-individual’s preferences as the utility matrix $E_U = \begin{pmatrix} x_U & 0 \\ 0 & x_{\bar{U}} \end{pmatrix}$. As shown in [38], the utility value of project ECF is given as follows
$u(ECF; E, U) = \mathrm{Tr}\!\left(E_U \, |E\rangle\langle E|\right) = x_U \cdot |\langle E|U\rangle|^2 + x_{\bar{U}} \cdot |\langle E|\bar{U}\rangle|^2 . \qquad (1)$
Our DM is risk neutral: her expected utility of the project is, as usual, the utility associated with the possible states multiplied by the (subjective) probability of those states. So, for instance, with $x_U = 1$ and $x_{\bar{U}} = 0$, the expected utility value of contributing to ECF when the individual has U-preferences is equal to her subjective probability that the elephant cause is urgent. That probability is calculated according to Born’s rule, which is the formula for calculating probability in a quantum setting. It corresponds to the square of the correlation coefficient $\langle E|U\rangle$, also called the probability amplitude. Graphically, the probability amplitudes are read off in the diagram as the orthogonal projections (yellow and blue thin dotted lines) of the vector $|E\rangle$ on the basis vectors ($|U\rangle$, $|\bar{U}\rangle$, for a U-individual).
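As an illustration of Equation (1), the following sketch computes the Born-rule probabilities and the resulting expected utility in R; the state vector and utility values are our own illustrative choices, not quantities estimated in the experiment.

# Minimal sketch of Equation (1): Born-rule probabilities and expected utility for a U-individual.
U    <- c(1, 0)                      # basis vector |U> ("urgent")
Ubar <- c(0, 1)                      # basis vector |U_bar> ("not urgent")
E    <- c(cos(pi/6), sin(pi/6))      # illustrative project-state |E>, a unit vector
p_urgent     <- sum(U * E)^2         # Born rule: |<E|U>|^2
p_not_urgent <- sum(Ubar * E)^2      # |<E|U_bar>|^2
x_U <- 1; x_Ubar <- 0                # utilities of an urgent / non-urgent ECF project
u_ECF <- x_U * p_urgent + x_Ubar * p_not_urgent
u_ECF                                # here equals |<E|U>|^2 = cos(pi/6)^2 = 0.75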
Similarly, we have for an individual with H-preferences:
$u(ECF; E, H) = \mathrm{Tr}\!\left(E_H \, |E\rangle\langle E|\right) = x_H \cdot |\langle E|H\rangle|^2 + x_{\bar{H}} \cdot |\langle E|\bar{H}\rangle|^2 \qquad (2)$
where the utility matrix $E_H$ is defined in the (preferred) $(|H\rangle, |\bar{H}\rangle)$ basis: $E_H = \begin{pmatrix} x_H & 0 \\ 0 & x_{\bar{H}} \end{pmatrix}$. The corresponding expected utility values of the TF project for individuals with U- and H-preferences, respectively, are:
$u(TF; T, U) = \mathrm{Tr}\!\left(T_U \, |T\rangle\langle T|\right) = y_U \cdot |\langle T|U\rangle|^2 + y_{\bar{U}} \cdot |\langle T|\bar{U}\rangle|^2$
$u(TF; T, H) = \mathrm{Tr}\!\left(T_H \, |T\rangle\langle T|\right) = y_H \cdot |\langle T|H\rangle|^2 + y_{\bar{H}} \cdot |\langle T|\bar{H}\rangle|^2 \qquad (3)$
where $T_U = \begin{pmatrix} y_U & 0 \\ 0 & y_{\bar{U}} \end{pmatrix}$ (defined in the $(|U\rangle, |\bar{U}\rangle)$ basis) and $T_H = \begin{pmatrix} y_H & 0 \\ 0 & y_{\bar{H}} \end{pmatrix}$ (defined in the $(|H\rangle, |\bar{H}\rangle)$ basis) are the operators representing the utility value of choosing TF for a U-individual and an H-individual, respectively.

2.3.3. Choice

The individual makes her choice by comparing the expected utility of each project and selecting the one that yields the highest expected utility. For the sake of illustration, take $x_U = y_U = 1$ and $x_{\bar{U}} = y_{\bar{U}} = 0$; reading directly from the figure, we see that
$u(TF; T, U) = |\langle T|U\rangle|^2 > u(ECF; E, U) = |\langle E|U\rangle|^2$
and, similarly, setting $x_H = y_H = 1$ and $x_{\bar{H}} = y_{\bar{H}} = 0$,
$u(ECF; E, H) = |\langle E|H\rangle|^2 > u(TF; T, H) = |\langle T|H\rangle|^2 .$
This means that, in our example, a U-individual prefers to contribute to the TF project while an H-individual prefers to contribute to the ECF project.

2.3.4. Persuasion

Persuasion is about modifying the cognitive state, i.e., the (mental) project-states. This is achieved by means of an informational structure (IS), which we define as an operation that triggers the resolution of (subjective) uncertainty with respect to some aspect. This corresponds to complete (projective) measurements of the state. In a two-dimensional case, such measurements yield maximal (but not complete) knowledge. It is important to keep in mind that we are dealing with mental objects (represented projects). So, in our context, an IS can be a question that the individual puts to herself (alternatively, an IS is an investigation of the outside world that determines whether the threat of extinction is real or not). The outcome is generally some level of conviction (subjective certainty).
In quantum persuasion, an IS is decomposed into two parts: a measurement device (MD) and an information channel (IC) (see [23] for details). An IC translates outcomes into signals. In the present context, we confine ourselves to trivial ICs, where the signals are the outcomes of the MD. An MD is defined by a set of possible outcomes $I$ and a collection of probabilities $p_i$ of reaching these outcomes, where $p_i$ depends on the (cognitive) project-state: $p_i = \mathrm{Tr}(P_i E P_i)$, where $E$ is the initial belief state (our project-state), i.e., the prior, and $P_i$ is the projector corresponding to outcome $i$. Upon obtaining outcome $i$, the belief state transits (is updated) into $E_i = \frac{P_i E P_i}{p_i}$ according to Lüders’ rule (a behavioral justification for this rule is provided in [38]). In line with the general theory, we focus on direct (or projective) measurements, that is, MDs whose outcomes transit the prior into a pure cognitive state, i.e., a state of full conviction (subjective certainty with respect to some aspect).
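A minimal sketch of this update step in R, with an illustrative prior state and measured property (the numerical values are assumptions made for the sake of the example, not parameters of the model):

# Sketch of a projective measurement and the Luders update E_i = P_i E P_i / p_i.
E_vec  <- c(2, 1) / sqrt(5)            # an illustrative pure prior |E>
E      <- E_vec %*% t(E_vec)           # density matrix of the prior
H      <- c(1, 0)                      # basis vector of the measured property ("honest")
P_H    <- H %*% t(H)                   # projector onto |H>
p_H    <- sum(diag(P_H %*% E %*% P_H)) # probability of the outcome "honest" (here 4/5)
E_post <- (P_H %*% E %*% P_H) / p_H    # Luders update: a pure state of full conviction
E_post                                 # equals the projector |H><H|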
As in the standard persuasion problem, Sender chooses the MD. In our experiment, Sender chooses the question put to Receiver, for instance “how urgent do you think it is to protect elephants from the threat of extinction?”. Such an MD is similar to a procedure that actualizes (rather than “elicits”) the individual’s beliefs about the severity of the threat. The distinction between eliciting and actualizing is that in the first case the beliefs are assumed to pre-exist the questioning and are simply revealed. In contrast, actualizing means that the revealed beliefs were one potential among others that was made actual by the operation of questioning—they did not pre-exist. The statistical distribution of answers expresses the (mixed) beliefs. A crucial point that we emphasize here is that simply eliciting beliefs does not provide any informational justification for modifying those beliefs (the project-state). In the classical context, belief elicitation has no impact. Yet, as we shall see next, when Sender asks such a question, Receiver’s cognitive state changes, which is the signature of its intrinsic indeterminacy.

2.3.5. The Impact of Measurements

In the development below, we focus exclusively on introspective measurements, i.e., questions put to the decision-maker about the (represented) state of the world. This is in accordance with the experiment that follows. We distinguish between two types of measurements: those that are compatible with each other, which correspond to commuting operations on the project-state, and those that are incompatible, which correspond to non-commuting operations. In a similar way, we speak of measurements that are compatible (incompatible) with the preferred perspective.

Compatible Measurements

The performance of a measurement of the (represented) projects in the individual’s preferred perspective corresponds to actualizing decision-relevant beliefs. The U-question (“do you think that the cause is urgent, YES/NO?”) is, for a U-individual, compatible with her preferred perspective. The question is formulated as a binary YES/NO choice; the initial mixed project-state generates the probabilities of the responses.
In the classical (Bayesian) context, this type of questioning is inconsequential (see below). This contrasts with the quantum context, where it modifies the project-state (beliefs). A compatible YES/NO question induces the ‘collapse’ of the (mixed) state onto one of the pure states. Consider a U-individual: when questioned about her belief regarding the urgency of the elephant cause, her prior $|E\rangle$ collapses onto $|E'\rangle = |U_E\rangle$ with probability $|\langle E|U_E\rangle|^2$ and onto $|E'\rangle = |\bar{U}_E\rangle$ with probability $|\langle E|\bar{U}_E\rangle|^2$; similarly, if questioned about the urgency of the tiger cause, $|T\rangle$ collapses onto $|T'\rangle = |U_T\rangle$ or $|T'\rangle = |\bar{U}_T\rangle$, where the subscript indicates the project and is omitted when no confusion arises, i.e., when it is clear which project we are talking about. The measurement of the two project-states (corresponding to ECF and TF, respectively) generates four possible combinations of project-states; e.g., $(|E\rangle, |T\rangle)$ transits onto $(|U_E\rangle, |U_T\rangle)$ with probability $|\langle E|U\rangle|^2 \, |\langle T|U\rangle|^2$.
We can now examine the impact of a measurement on the DM’s choice in the graphical example above. Recall that in the absence of measurement our U-individual selects TF with probability 1 and ECF with probability 0. When the choice is preceded by the Urgency question, with probability $|\langle T|\bar{U}\rangle|^2 \, |\langle E|U\rangle|^2 > 0$ the resulting states are $(|U_E\rangle, |\bar{U}_T\rangle)$. In this event, she selects ECF because $u(TF; \bar{U}_T, U) = \mathrm{Tr}(T_U \, |\bar{U}_T\rangle\langle \bar{U}_T|) = 0 < u(ECF; U_E, U) = \mathrm{Tr}(E_U \, |U_E\rangle\langle U_E|) = 1$. So, we find that her choice behavior is affected by the question. Similarly, our H-individual will, after answering the compatible question, select TF with positive probability. Thus, we have shown that even a “naive” question about the individual’s decision-relevant beliefs can induce a change in the expected revealed preferences, i.e., we already have some “persuasion”.
The impact of the mere actualization of beliefs underlines a distinction between the quantum and the classical framework. In the quantum world, measurements generally change the state of the measured system (here, the beliefs or project-states). This expresses a fundamental distinction from the classical world, where reality is assumed to preexist any measurement, which merely reveals it. In the quantum world, reality is contextual, which means that measurements contribute to determining the state, i.e., they do not reveal a preexisting state but contribute to shaping it. This is called contextuality (see [50] for a rich collection of contributions on contextuality). Another distinction from the classical case is the difference in impact of compatible versus incompatible measurements, as we show next.

Incompatible Measurement: Distraction

We now turn to distraction, which we define as the actualization of beliefs with respect to features not directly relevant to decision-making, i.e., belonging to a perspective that is incompatible with the preferred perspective. Below we depict the case of addressing a U-preference individual. Distraction corresponds to putting an H-question, e.g., do you believe WWF (managing ECF) is honest, YES/NO? As in the compatible case, the question triggers the collapse of the project-state $|E\rangle$ in the basis corresponding to the question, here $(|H\rangle, |\bar{H}\rangle)$. The state $|E\rangle$ transits into $|E'\rangle$ equal to either $|H_E\rangle$ or $|\bar{H}_E\rangle$, and it does so with probabilities $|\langle E|H\rangle|^2$ and $|\langle E|\bar{H}\rangle|^2$, respectively. Similarly for the H-question regarding TF (the NGO managing the Tiger project). We illustrate this in Figure 2 with the green lines for $|E\rangle$, the cognitive state representing ECF.
In contrast with the compatible measurement case, after having answered the incompatible question, the resulting cognitive state does not allow the DM to evaluate the expected utility associated with the choice alternatives. She needs to project it back into her preferred perspective. The expected utility of ECF for a U-individual in initial project-state $|E\rangle$ subjected to the H-question is obtained by considering a sequence of two non-commuting operations. First, distraction: the state is projected onto the $(|H\rangle, |\bar{H}\rangle)$ basis. Then, the resulting state ($|H\rangle$ or $|\bar{H}\rangle$) is projected back onto the preferred basis $(|U\rangle, |\bar{U}\rangle)$ in order to evaluate the project:
$u(ECF; E, U) = \left[ |\langle E|H\rangle|^2 |\langle H|U\rangle|^2 + |\langle E|\bar{H}\rangle|^2 |\langle \bar{H}|U\rangle|^2 \right] x_U + \left[ |\langle E|H\rangle|^2 |\langle H|\bar{U}\rangle|^2 + |\langle E|\bar{H}\rangle|^2 |\langle \bar{H}|\bar{U}\rangle|^2 \right] x_{\bar{U}} . \qquad (4)$
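A sketch of this two-step computation in R; the angle between the two bases and the prior state are illustrative assumptions, not quantities from the model above.

# Distraction for a U-individual: collapse in the (H, H_bar) basis, then
# evaluation in the preferred (U, U_bar) basis, as in Equation (4).
theta <- pi/3                                 # assumed angle between the two bases
H     <- c(1, 0);            Hbar <- c(0, 1)
U     <- c(cos(theta), sin(theta))
Ubar  <- c(-sin(theta), cos(theta))
E     <- c(cos(pi/7), sin(pi/7))              # an illustrative prior project-state
x_U   <- 1; x_Ubar <- 0                       # utilities of an urgent / non-urgent ECF
amp2  <- function(a, b) sum(a * b)^2          # squared amplitude |<a|b>|^2
u_ECF_after_H_question <-
  (amp2(E, H) * amp2(H, U) + amp2(E, Hbar) * amp2(Hbar, U)) * x_U +
  (amp2(E, H) * amp2(H, Ubar) + amp2(E, Hbar) * amp2(Hbar, Ubar)) * x_Ubar
u_ECF_after_H_question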

Example

Consider the following numerical example for an H-individual, where we simplify matters by assuming $|T\rangle = |E\rangle = |D\rangle$, that is, the two projects are represented by the same project-state, meaning that they are subjectively perceived as equally urgent and honest. Let this state (written as a density matrix) in the H-perspective be
$D = \begin{pmatrix} 4/5 & 2/5 \\ 2/5 & 1/5 \end{pmatrix} .$
Consider the following utility values: $x_H = 8$, $x_{\bar{H}} = 7$, $y_H = 10$, $y_{\bar{H}} = 4$. Using the formula in (2), we obtain:
$u(ECF; D, H) = 4/5 \cdot 8 + 1/5 \cdot 7 = 39/5$
and, similarly,
$u(TF; D, H) = 4/5 \cdot 10 + 1/5 \cdot 4 = 44/5 ,$
which means that this H-individual chooses to donate to TF.
Let us now consider a distraction toward the Urgency perspective, which we model for simplicity as a $45^{\circ}$ rotation of the H-basis (corresponding to the case where the pure states are statistically uncorrelated across perspectives), with projectors $U = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$ and $\bar{U} = \begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix}$.
The distractive procedure is as follows: first, the H-individual in project-state $D$ is asked whether she thinks the Elephant (respectively Tiger) cause is Urgent or not Urgent, which takes the state $D$ onto $U_{E(T)}$ or $\bar{U}_{E(T)}$. Then, our H-individual evaluates her expected utility value in the $(H, \bar{H})$ perspective. With a $45^{\circ}$ rotation the computation simplifies greatly because, whether distraction takes the states to $U$ or $\bar{U}$, the probability for $H$ (respectively $\bar{H}$) is the same:
$u(ECF; D, H) = \left[ |\langle D|U\rangle|^2 + |\langle D|\bar{U}\rangle|^2 \right] \tfrac{1}{2} x_H + \left[ |\langle D|U\rangle|^2 + |\langle D|\bar{U}\rangle|^2 \right] \tfrac{1}{2} x_{\bar{H}} = \tfrac{1}{2} x_H + \tfrac{1}{2} x_{\bar{H}} = 4 + 3.5 = 7.5$
and similarly
$u(TF; D, H) = \left[ |\langle D|U\rangle|^2 + |\langle D|\bar{U}\rangle|^2 \right] \tfrac{1}{2} y_H + \left[ |\langle D|U\rangle|^2 + |\langle D|\bar{U}\rangle|^2 \right] \tfrac{1}{2} y_{\bar{H}} = \tfrac{1}{2} y_H + \tfrac{1}{2} y_{\bar{H}} = 5 + 2 = 7 .$
So, after the distraction, our individual chooses to donate to the ECF project with probability 1, whereas she chose TF in the absence of distraction. We note that the impact of distraction can be a total reversal of the choice. This contrasts with the impact of decision-relevant belief actualization (a compatible measurement), which only triggers some partial reversal. For a general theoretical argument on the persuasion power of a distractive IS as compared with a compatible IS, see [23].
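The numbers in this example can be checked with a short computation; the code below works with the state vector $|D\rangle = (2,1)/\sqrt{5}$ underlying the matrix $D$ above and the $45^{\circ}$-rotated Urgency basis.

# Reproducing the numerical example for an H-individual.
D    <- c(2, 1) / sqrt(5)                 # |D> in the (H, H_bar) basis; |<D|H>|^2 = 4/5
H    <- c(1, 0);               Hbar <- c(0, 1)
U    <- c(1, 1) / sqrt(2);     Ubar <- c(1, -1) / sqrt(2)
x_H  <- 8;  x_Hbar <- 7                   # utilities of ECF (honest / not honest)
y_H  <- 10; y_Hbar <- 4                   # utilities of TF
amp2 <- function(a, b) sum(a * b)^2       # squared amplitude |<a|b>|^2

# Before distraction: evaluation directly in the preferred (H) perspective
u_ECF <- amp2(D, H) * x_H + amp2(D, Hbar) * x_Hbar   # 4/5*8 + 1/5*7 = 7.8 = 39/5
u_TF  <- amp2(D, H) * y_H + amp2(D, Hbar) * y_Hbar   # 4/5*10 + 1/5*4 = 8.8 = 44/5 -> TF chosen

# After the Urgency question: collapse onto U or U_bar, then project back onto H
pH_given <- function(v) amp2(v, H)        # probability of "honest" in the collapsed state
w_U <- amp2(D, U); w_Ubar <- amp2(D, Ubar)
u_ECF_dist <- (w_U * pH_given(U) + w_Ubar * pH_given(Ubar)) * x_H +
              (w_U * (1 - pH_given(U)) + w_Ubar * (1 - pH_given(Ubar))) * x_Hbar
u_TF_dist  <- (w_U * pH_given(U) + w_Ubar * pH_given(Ubar)) * y_H +
              (w_U * (1 - pH_given(U)) + w_Ubar * (1 - pH_given(Ubar))) * y_Hbar
c(u_ECF_dist, u_TF_dist)                  # 7.5 and 7 -> ECF is now chosen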
Before moving to the experiment let us briefly remind ourselves of the classical subjective uncertainty approach in our example.

2.3.6. The Classical Uncertainty Approach

The classical uncertainty framework is nested in the quantum setting. It corresponds to the case when all properties of an item (perspectives) are compatible, so that Lüders’ rule for updating is equivalent to Bayesian updating. The individual can simultaneously consider Urgency and Honesty and combine them to obtain her expected utility value. Assuming a separable and additive utility function, we write
$u(T) = \alpha \left[ p_U^{0T} y_U + \left(1 - p_U^{0T}\right) y_{\bar{U}} \right] + (1 - \alpha) \left[ p_H^{0T} y_H + \left(1 - p_H^{0T}\right) y_{\bar{H}} \right]$
$u(E) = \alpha \left[ p_U^{0E} x_U + \left(1 - p_U^{0E}\right) x_{\bar{U}} \right] + (1 - \alpha) \left[ p_H^{0E} x_H + \left(1 - p_H^{0E}\right) x_{\bar{H}} \right]$
where $p_U^{0T}$ is the subjective probability, in state $T$, that the TF project is urgent (and $1 - p_U^{0T}$ that it is not urgent), and similarly for the other probabilities. The superscript 0 refers to time $t = 0$ (initial beliefs). The weight $\alpha$ is the relative preference weight given to urgency ($1 - \alpha$ is the relative weight of honesty). An individual for whom honesty is determinant is an individual with $\alpha < 1/2$, and similarly for U-individuals ($\alpha \geq 1/2$).
Recall that the measurements we consider are exclusively introspective, i.e., no information appealing to the outside world is called upon. In other words, the questions correspond to eliciting Receiver’s beliefs.
When asked “do you believe the tiger cause is urgent, YES/NO?”, Receiver answers YES with probability $p_U^{0T}$ and NO with probability $1 - p_U^{0T}$. But her beliefs do not change; they remain mixed. Since beliefs are unchanged, so is the expected utility of the two projects. As a consequence, Receiver’s choice is not affected by Sender’s question. To put it differently, an introspective measurement has no persuasion power whatsoever in the classical context. The classical prediction contrasts starkly with the quantum model, where an introspective measurement with respect to both compatible and incompatible perspectives has an impact on decision-making. These predictions appear more consistent with numerous experimental works that exhibit a significant impact of belief elicitation on decision-making (see most recently [51]). In addition, as illustrated above, incompatible introspective measurements have the strongest potential to affect decision-making. It is precisely this prediction that we aim to test with the following experiment.
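For contrast with the quantum example, the sketch below evaluates the classical additive formula with illustrative prior probabilities and the utility values of the example above; since an introspective question leaves the probabilities unchanged, re-running the last lines after the question returns exactly the same numbers.

# Classical (Bayesian) valuation: both perspectives enter additively, weighted by alpha.
# The prior probabilities are illustrative assumptions.
alpha <- 0.3                       # relative weight on Urgency (an H-individual: alpha < 1/2)
p_U_T <- 0.6; p_H_T <- 0.5         # prior beliefs that TF is urgent / honestly managed
p_U_E <- 0.5; p_H_E <- 0.8         # prior beliefs about ECF
y_U <- 1; y_Ubar <- 0; y_H <- 10; y_Hbar <- 4
x_U <- 1; x_Ubar <- 0; x_H <- 8;  x_Hbar <- 7
u_T <- alpha * (p_U_T * y_U + (1 - p_U_T) * y_Ubar) +
       (1 - alpha) * (p_H_T * y_H + (1 - p_H_T) * y_Hbar)
u_E <- alpha * (p_U_E * x_U + (1 - p_U_E) * x_Ubar) +
       (1 - alpha) * (p_H_E * x_H + (1 - p_H_E) * x_Hbar)
c(u_T, u_E)   # eliciting a belief changes neither the p's nor these values in the classical model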

3. Experimental Design

Our main experiment features the choice to donate to either one of two projects concerned with the protection of endangered species. It uses the property of Bohr complementarity of mental perspectives. More precisely, it relies on the hypothesis that two perspectives on the projects are incompatible in the mind of Receiver. The two perspectives that we consider are “the urgency of the cause” and “the honesty of the organization that manages the funds” (the terms “honesty” and “trustworthiness”, or “trust”, are used interchangeably). As a first step, we provided experimental support for the incompatibility hypothesis. We know that when two properties are incompatible, measuring them in different orders yields different outcomes. Therefore, we started with an experiment to check whether order matters for the response profile obtained. Note that even in physics there is no theoretical argument for establishing whether two properties are compatible or not; this must be done empirically.

3.1. Testing for the Incompatibility of Perspectives

At the time we conducted our study, the world was confronted with a severe refugee crisis in Myanmar. The situation actualized quite sharply the two perspectives we wanted to test. On the one hand, the urgency of the humanitarian crisis and, on the other hand, the uncertainty about the reliability/honesty of the NGOs on the ground.
We recruited 295 respondents through Amazon’s Mechanical Turk, for which data quality has been confirmed by different studies (e.g., [52,53]). The respondents completed the short survey below on the website Typeform. They were paid $0.10 and spent, on average, 0:17 minutes to complete the survey.
The participants were first presented with a screen containing a short description of the situation of refugees in Myanmar, including a mention of the main humanitarian NGO present in the field:
“About a million refugees (a majority of women and children) escaped persecution in Myanmar. Most of them fled to Bangladesh. The Bengali Red Crescent is the primary humanitarian organization that is providing help to the Rohingyas. They are in immediate need of drinkable water, food, shelter and first medical aid.”
They were then asked to evaluate the urgency of the cause and the honesty of the NGO on a scale from 1 (“Not urgent” or “Do not trust”) to 5 (“Extremely urgent” or “Fully trust”). The order of presentation of the two questions was randomized so that half of the participants responded to the urgency question before the trust question (U-T), and the other half conversely (T-U).
We prove the existence of order effects by showing that the responses are drawn from two different distributions. We do this using both a difference in means test (i.e., two-sample t-test with t = 2.54 and p-value = 0.011 ) and a nonparametric test of the two sample distributions (i.e., two-sample Kolmogorov-Smirnov test with D = 0.11 and p-value = 0.047 ) in R.
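Both tests correspond to standard base-R calls; a sketch with placeholder response vectors (the actual ratings are not reproduced here) is given below.

# Sketch of the order-effect tests. urgency_UT / urgency_TU stand for the 1-5 urgency
# ratings under the two question orders; here they are simulated placeholders.
set.seed(1)
urgency_UT <- sample(1:5, 148, replace = TRUE)
urgency_TU <- sample(1:5, 147, replace = TRUE)
t.test(urgency_UT, urgency_TU)    # two-sample t-test (difference in means)
ks.test(urgency_UT, urgency_TU)   # two-sample Kolmogorov-Smirnov test (ties in discrete ratings trigger a warning)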
The results are consistent with the hypothesis that the two perspectives are incompatible in the minds of people. As is well-known, there exist other theories of order effects. We observe that the responses to the Trust question tend to be lower when that question comes first (i.e., T-U), whereas the responses to the Urgency question tend to be higher when that question comes first (i.e., U-T). Therefore, we can reject both the hypothesis of a recency bias and that of a primacy bias. This strengthens our quantum interpretation. We next proceed to the main experiment using those two perspectives.

3.2. Main Experiment

A total of 1253 participants, recruited through Amazon’s Mechanical Turk, completed the survey on the website Typeform. They were paid either $1 or $0.75, depending on the condition.
The participants were divided into three groups: two treatment groups and a control group, as explained below. All three groups were presented with a screen with an introductory message, informing them that the questionnaire is part of a research project on quantum cognition and that they would contribute to deciding which one of two NGO projects would receive a €50 donation. The decision would be made by randomly selecting a respondent and implementing his or her choice. Presumably, this created an incentive to respond truthfully. The respondents were next asked to click on a button that randomly assigned them to a specific condition. In all conditions, participants were shown a short text about the situation of elephants and tigers, respectively, and the ongoing actions of two NGOs working for their protection, the Elephant Crisis Fund (ECF) and Tiger Forever (TF). The order of presentation of the two texts was reversed for half of the subjects. This aimed at isolating order effects not relevant to our main point. The screen displayed the following two texts:
“Elephant crisis fund: A virulent wave of poaching is on-going with an elephant killed for its tusks every 15 min. The current population is estimated to around 700,000 elephants in the wild. Driving the killing is international ivory trade that thrives on poverty, corruption, and greed. But there is hope. The Elephant Crisis Fund closely linked to World Wildlife Fund (WWF) exists to encourage collaboration, and deliver rapid impact on the ground to stop the killing, the trafficking, and the demand for ivory.”
“Tiger Forever: Tigers are illegally killed for their pelts and body parts used in traditional Asian medicines. They are also seen as threats to human communities. They suffer from large scale habitat loss due to human population growth and expansion. Tiger Forever was founded 2006 with the goal of reversing global tiger decline. It is active in 17 sites with Non-Governmental Organizations (NGOs) and government partners. The sites host about 2260 tigers or 70% of the total world’s tiger population.”
It is worth mentioning that the descriptions were formulated so as to slightly suggest that the elephants’ NGO (ECF) could be perceived as more trustworthy (because of its link with WWF, a well-known NGO). In contrast, the text about tigers suggested a higher level of urgency (the absolute number of remaining tigers is significantly lower than the number of remaining elephants). Thereafter, all respondents were confronted with a choice:
“When considering donating money in support of a project to protect endangered species, different aspects may be relevant to your choice. Let us know what counts most to you:
-The urgency of the cause: among the many important issues in today’s world, does the cause you consider belong to those that deserve urgent action? or
-The honesty of the organization to which you donate: do you trust the organization managing the project to be reliable; i.e., do you trust the money will be used as advertised rather than diverted.”
The objective was to elicit an element of their preferences, namely their preferred perspective (see Section 2). The rest of the questionnaire depended on which one of the three groups the participants belonged to.
In the control condition (baseline), they were next asked whether they wanted to read the first descriptions again or whether they wanted to make their final decision, i.e., their choice between supporting the Elephant Crisis Fund or Tiger Forever, both represented by an image of an adult elephant and an adult tiger, respectively (presented in random order on the same screen).
In the first treatment condition, the respondents were redirected to a screen with general information compatible with the aspect they had indicated as determinant to their choice when making a donation. Importantly, the information did not directly or indirectly favor or disfavor either of the two projects. The information was aimed at triggering a measurement, as respondents were invited to determine for themselves which of the species was most urgent to save or, respectively, which NGO was most trustworthy. We return below to the role and expected impact of the general information screens. Those who cared most about honesty saw a screen with the following text:
“Did you know that most Elephant and Tiger projects are run by Non-Governmental Organizations (NGOs)? But NGOs are not always honest! NGOs operating in countries with endemic corruption face particular risks. NGOs are created by enthusiastic benevolent citizens who often lack proper competence to manage both internal and external risks. Numerous scandals have shown how even long standing NGOs had been captured by less scrupulous people to serve their own interest. So a reasonable concern is whether Tiger Forever respectively Elephant Crisis Fund deserves our trust.”
Those who cared most for urgency saw:
“Did you know that global wildlife populations have declined 58% since 1970, primarily due to habitat destruction, over-hunting and pollution. It is urgent to reverse the decline! “For the first time since the demise of the dinosaurs 65 million years ago, we face a global mass extinction of wildlife. We ignore the decline of other species at our peril–for they are the barometer that reveals our impact on the world that sustains us.” —Mike Barrett, director of science and policy at WWF’s UK branch. A reasonable concern is how urgent protecting tigers or elephants actually is.”
Thereafter the respondents were offered the opportunity to read again the descriptions before making their image choice between ECF and TF.
In the main treatment condition (distraction), participants were redirected to a screen with general information on the aspect they did not select as determinant to their choice; this is what we call a distraction. So, those who selected honesty (resp. urgency) saw the screen on global wildlife decline (resp. NGO’s scandals). Thereafter, the respondents were offered the opportunity to read again the initial project description before making their image choice.
Finally, information about their age, gender, education and habits of donation to NGOs was collected before the thank-you message ending the experiment.
Before presenting the results, we wish to address a feature of the experimental design absent from the theoretical model.

The General Information Screens

First, we note that the theoretical model does not account for anything like a general information screen. The connection with the model lies in the questions that follow the general information. Indeed, general information plays no role in persuasion since it conveys no new data on the relative urgency or honesty of the specific projects. Therefore, it should not affect the choice between the two projects. So, what is the role of those screens?
Our justification for the general information screens is to be found in the quantum approach to cognition. Quantum cognition recognizes that people consider a project from different perspectives, some of which may be incompatible but not mutually exclusive. They are Bohr complementary, which implies that the questions related to those perspectives do not commute, i.e., order matters. This, in turn, is an expression of the fact that the cognitive state is modified by responding to a question. Our intuition is that there can be some inertia. Consider a person who declared that Honesty is her priority, which we interpret as her being in the Honesty perspective. If you abruptly ask whether the elephant cause is urgent, she might not make the effort to switch perspective in order to respond faithfully. In contrast, if you softly accompany her into the switch with an engaging short text, she will find herself capable of responding truthfully without particular effort.
Hence, the point of those screens is to accompany the change in perspective. Clearly, that is only justified in the distraction treatment, but for the sake of symmetry, we include a similar screen in the treatment where no change of perspective is required.
Next, one may wonder why, in the experiment, we do not simply ask people for their beliefs (e.g., do you trust that NGO?) but instead suggest a questioning: “So a reasonable concern is whether Tiger Forever respectively Elephant Crisis Fund deserves our trust”. The reason is that we wanted to avoid the response influencing the respondent beyond the impact under investigation. Additional impact can be expected because of perceived dissonance. Assume the individual cares about the urgency of the cause; since there are only 2700 tigers left, she is most likely to choose TF. If she is explicitly asked whether she believes that the TF NGO is honest and decides that she does not trust them much, then it becomes psychologically difficult to select TF. Since we do not put an explicit question but use a general text to induce the measurement, people are expected to be less likely to perceive dissonance and to choose more spontaneously. These precautions are among the difficult decisions we have to make in quantum cognition when trying to exhibit quantum effects in behavior. Individuals are, in contrast with particles, thinking systems endowed with, among other things, a drive toward consistency, which can interfere with the intrinsic indeterminacy of preferences (see e.g., [26,54] and the Discussion in Section 5).

3.3. Theoretical Predictions

Before getting into the results and their interpretation, let us recall the main theoretical predictions:
  • The Bayesian model predicts no effect in either treatment group.
  • The quantum model predicts that the distraction treatment group should exhibit a significantly different allocation of responses compared with the control group’s choice profile. It also predicts some milder impact of the question in the compatible treatment compared with the control group. It should be emphasized that since we lack information about the correlation coefficients between the two perspectives and the utility values, we do not have quantitative predictions. Generally, the less correlated two perspectives (in the example of Section 2 they were fully uncorrelated) the larger the expected impact in terms of switching the choice for given utility values.

4. Results

4.1. Descriptive Statistics

Data were processed, cleaned and analyzed with the statistical software R. The number of acceptable observations was 1253 (114 participants were removed from the data due to a technical error which created a risk that some participants might have responded twice). 58.5% of all respondents were male, the average age was 35.6 years and the average education level was undergraduate. Overall, 71.1% of the participants declared that the Honesty of the NGO rather than the Urgency of the cause is what counts most for their choice. Across the three conditions, 54.4% chose to support the Elephant Crisis Fund (ECF) with their donation and 45.6% the Tiger Forever project (TF). Looking into the different treatment groups, we find that 59% of the respondents in the control condition chose ECF, and 54% in the compatible information treatment group. In contrast, in the distraction treatment group, only 47% chose ECF. Conditional on revealed preferences, 50% of the respondents who valued Urgency most chose to support TF, whereas 56% of those who valued Honesty most chose to support ECF. Overall, 87.8% made their final decision without reading the description of the projects a second time. They spent, on average, 1:33 minutes to complete the experiment.
We divided the respondents into a number of subgroups based on their preferences and the detailed treatment they received. Figure 3 shows the number of participants who chose TF and ECF, respectively, conditional on their preference (Honesty vs. Urgency) and the order of presentation they were exposed to (ECF-TF vs. TF-ECF). Note that in all conditions but “Honesty-ET”, a majority of participants chose ECF in the control condition and TF in the incompatible condition. This is particularly striking for “Honesty-TE” and “Urgency-ET”. At first glance, we find a clear reversal in three of the subgroups.

4.1.1. Data Analysis

General Results

The first set of results, displayed in Table 1’s first column, establishes that distraction—i.e., the question related to the non-determinant perspective—has a statistically significant impact on the final choice ( p = 0.005 ). This result stands across different specifications (see Table A1 in Appendix A). In particular, it appears that, everything else being constant, the predicted probability of choosing ECF is 11.1% lower for an individual in the incompatible condition than for an individual in the control condition. By contrast, there is no statistically significant impact of the compatible question on the final decision ( p = 0.173 ). This result is also persistent over alternative specifications. Note, however, that the effect of the compatible question on the final choice is nonzero. We come back to this later on.
Not surprisingly, there is a statistically significant impact ( p = 0.046 ) on the final choice of the declared determinant, i.e., Honesty versus Urgency, which captures an element of preferences. Here again, the influence is robust to alternative specifications (see Table A1 in Appendix A). More precisely, the predicted probability of choosing ECF is 6.33% higher for an individual claiming that Honesty is determinant than for someone who reported Urgency as determinant to her choice. By contrast, there is no statistically significant impact ( p = 0.7 ) of the order of presentation of the project descriptions on the final decision.
The correlations between covariates were never larger than 0.13 (between Age and Male), which puts aside potential issues of multicollinearity. In particular, revealed preferences (i.e., the choice between Honesty and Urgency) show no relationship with the order of presentation of the descriptions (ECF-TF vs. TF-ECF). Interestingly, none of the variables significantly affected the revealed preferences.
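To make the estimation and the reported predicted-probability differences concrete, the following Python sketch mirrors the type of specification behind Table 1 and Table A1. The original analysis was carried out in R; the file and variable names used here are assumptions made for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names (the original analysis was done in R).
df = pd.read_csv("responses_clean.csv")
df["ecf"] = (df["choice"] == "ECF").astype(int)  # final choice: 1 = ECF, 0 = TF

# Logit of the final choice on the treatment dummies and covariates,
# analogous to specification (4) in Table A1.
model = smf.logit(
    "ecf ~ C(condition, Treatment('control')) + honesty + et_order"
    " + reread + age + male + education + ngo",
    data=df,
).fit()
print(model.summary())

# Difference in average predicted probabilities between the incompatible and the
# control condition, holding the other covariates at their observed values
# (the quantity reported in Table 2; about -0.111 for the full sample).
control = df.assign(condition="control")
incompatible = df.assign(condition="incompatible")
diff = (model.predict(incompatible) - model.predict(control)).mean()
print(f"Predicted-probability difference: {diff:.3f}")
```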

Advanced Results

As shown in Table 1, the distraction effect is not homogeneous across subgroups. First, we find that the distraction effect was stronger in the Urgency subgroup than in the Honesty subgroup: for Urgency-individuals, the predicted probability of choosing ECF in the incompatible condition is 14.93% lower compared with the control condition (p = 0.051); for Honesty-individuals, it is 10.05% lower (p = 0.031). Next, it appears that the impact of distraction is most pronounced for those who were presented the Tiger Forever project first and the Elephant Crisis Fund last (TE subgroup; see Table 1). Distraction statistically significantly affected the final choice in that group (corresponding to 50% of the respondents). In fact, TE-participants in the incompatible condition had a predicted probability of choosing ECF that was 16.3% lower than TE-participants in the control condition (p = 0.006; see Table 2).
When combining the TE presentation order with the Honesty subgroup, the difference in the predicted probability of choosing ECF between the incompatible condition and the control condition is 17.9% (p = 0.01). For ET-Urgency, a 17.4% change in probability can be seen (cf. Figure 3, Table 1 and Table 2), even though the effect fails to reach statistical significance at the 5% level (p = 0.097). Note, however, that this group only constitutes around 15% of the sample, while the TE-Honesty subgroup represents around 34% of the data. For the other two subgroups (ET-Honesty and TE-Urgency), distraction had no statistically significant impact (p = 0.555 and p = 0.259, respectively), although a switch of 12.5% may still be noticed for TE-Urgency.

4.1.2. Interpretation

First, we note that the significance of preferences (i.e., the answer to “what is determinant for your choice”) for the final choice, combined with the fact that a majority of participants who chose Honesty also chose ECF regardless of their condition, suggests that the initial texts were generally well understood. As explained earlier, the description of the Elephant project was designed to inspire more trust in the NGO managing the project, and the description of the Tiger project was designed to suggest a higher level of urgency.
The general results show unambiguously that the question triggered by the incompatible information (distraction) had a significant impact on the final choice. It induced some degree of switching compared with both the control group and the compatible information group. Interestingly, the switch does not reflect the thematic content of the screens. This is consistent with the fact that the information in the screens did not favor either of the projects. The quantum model provides an explanation for why Urgency-individuals, once made aware of corruption problems, reduced their support for ECF (presumably managed by the more reliable WWF). Distraction can induce such a change. This is the case, for example, when the two perspectives (Urgency and Honesty) are uncorrelated (a 45° rotation, as in the example) for a class of project-states and preferences. And it does seem reasonable to expect no or minimal correlation between the Urgency of the cause and the Honesty of the NGO (in people’s minds). The fact that the compatible information had no statistically significant impact supports the thesis that simply being exposed to a general information screen does not affect the choice. Instead, it is only when the appended question induces a change of perspective that something happens.
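To fix ideas, here is a minimal worked illustration in the spirit of the two-perspective example referred to above; the notation is ours and is only meant to be indicative. Suppose the Urgency basis {|u⟩, |n⟩} (urgent/not urgent) and the Honesty basis {|h⟩, |d⟩} (honest/dishonest) are related by a rotation of angle θ, with |u⟩ = cos θ |h⟩ + sin θ |d⟩. Starting from the project-state |u⟩, the distractive question forces a resolution in the Honesty perspective: the state collapses onto |h⟩ with probability cos²θ or onto |d⟩ with probability sin²θ. If Urgency is evaluated again afterwards, the probability of still finding the cause urgent is cos⁴θ + sin⁴θ, which equals 1 only for θ = 0 (compatible perspectives) and falls to 1/2 for θ = 45°, i.e., for fully uncorrelated perspectives. In that case the initial determination of Urgency is completely erased, which is what allows the distraction to switch the final choice for suitable utility values.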
We found significant variations between subgroups. First, we could exhibit a distinction in the reaction to distraction depending on preferences alone. On average, Urgency-individuals have been more sensitive to distraction than Honesty-individuals. This could be explained by the fact that Urgency-individuals tend to be more passionate about the situation. A passionate individual may feature a more pronounced quantum-like working of the mind because she is expected to be less constrained by the rational mind (see below for further arguments).
More intriguing is the fact that, when combining preference and the order of presentation, individuals whose preferences are congruent with the last presented project (TE-Hon and ET-Urg) tend to be more sensitive to distraction. The order of presentation of the projects is an element of the “preparation procedure” (in QM, the state of a quantum system is determined by a suitable preparation procedure). One possible explanation is that “congruent respondents” are more manipulable because both their beliefs and their preferences are indeterminate. Although this paper focuses on the indeterminacy of beliefs, both beliefs and preferences are mental objects that we expect can exhibit quantum-like properties. Indeed, a number of works in quantum cognition address preference indeterminacy (see for instance [55]). As we discuss in the next section, the rational mind tends to constrain the quantum-like working of the mind. We can thus conjecture that “congruent respondents” include respondents for whom the rational mind was less constraining. Their preferences were partly determined by the information received just before they had to respond to “what is most important for you?”. This line of interpretation goes beyond our quantum model, however, which focuses on belief indeterminacy. It suggests that future research in quantum cognition should address both determinants of decision-making simultaneously.
Even when looking more closely at the results, we find no statistically significant impact of the compatible question. This is consistent with the Bayesian model, because no information relevant to the choice between the two projects is provided. Note, however, that, despite the lack of statistical significance, the effect in the compatible condition is nonzero. In particular, in the ET-Honesty subgroup it is close to the effect of the incompatible condition. We recall that the theoretical model predicts some mild impact on the belief state. An Urgency-individual who chooses TF on the basis of mixed beliefs will choose ECF with some probability if she is forced to decide for herself whether the cause is urgent, YES or NO, prior to the decision (see Section 2.3.4). The same holds for those who choose ECF while holding mixed beliefs. We conjecture that, at the sample level, these effects counterbalance each other, so the overall impact is not statistically significant.
The time for responding to the whole questionnaire was between 1 and 3 min, which is rather short. We interpret this feature as evidence that the quantum working of the mind could be part of what Nobel laureate Kahneman calls System 1, the fast, non-rational reasoning [12]: no new information of relevance for the choice was provided, yet decision-making was affected. Respondents did not take time to reflect; they reacted spontaneously to the distraction. Recall that we do not elicit their preferences for the projects but only for what is determinant in a class of situations. That choice in our experiment was made to minimize interference from the rational mind. Nevertheless, we found that those determinants were highly correlated with the final choice, both in the control and in the compatible information groups. The significance of the distraction results suggests that, as we had conjectured, respondents were not aware of the correlation (and the logic behind it). Therefore, they were not confronted with a (conscious) cognitive dissonance when the distraction changed their focus and eventually affected their decision. In the same line of thought, the respondents overwhelmingly passed up the chance to reassess their understanding of the projects before making their choice: only 12% used the opportunity to re-read the descriptions before deciding.
An interesting finding is that the results are fully independent of population variables, which supports the hypothesis that the quantum-like structure is a general regularity of the human mind.

5. Concluding Remarks and Discussion

In this paper, we have proposed an explanation of the manipulability of people’s decision-making based on the intrinsic indeterminacy of the individual’s subjective representation of the world. We first developed a simple quantum model of choice between two uncertain alternatives. Compared with the classical approach, the main distinction lies in the modelling of uncertainty. Where the classical approach relies on a single integrated representation of the world, the quantum-like modelling of uncertainty allows for a multiplicity of equally valid but subjectively incompatible perspectives on the world, which is the expression of the intrinsic indeterminacy of mental objects. We show how an indeterminate representation of the world can be exploited to manipulate a decision-maker by a Sender who simply asks questions. Our focus has been on introspective questions, that is, questions about beliefs that bring no new information from the outside world. This allows establishing a clear distinction between the classical model’s predictions and the quantum ones. In particular, we show with an example that the quantum model predicts that distractive questions have strong persuasion power, whereas the classical model predicts no impact of such questions at all.
We provided a first empirical test of that prediction in an experiment where individuals choose between supporting one of two projects, to save elephants and tigers, respectively. In the experiment that we performed, the change of focus, or of narrative, brought about by the distractive question was shown to affect revealed preferences for the projects in a statistically significant way. This central result is in accordance with the predictions of the quantum model when dealing with two incompatible perspectives, here Urgency and Honesty. Looking closer, we find some significant differences in reaction between subgroups, with some reacting very strongly and others much less so. While this calls for further investigation, we find that this first experimental test was successful in providing some support for the hypothesis that the manipulability of people may have its roots in the indeterminacy of their subjective representation of the world.
In the real world, however, a cause can simultaneously be urgent and the NGO supporting the project dishonest. There is thus a discrepancy between the properties of the true classical objects (the projects) and the properties of their representation, the mental objects (project-states). When Receiver processes information about a classical object as if it were a quantum system, she is mistaken. But as amply evidenced in Kahneman’s best-selling book “Thinking Fast and Slow”, information processing is not always disciplined by (Bayesian) rational thinking when the brain operates quickly. The two-system approach also opens the way for manipulation, because when the individual thinks fast she makes mistakes that can be exploited. Our view is that the quantum approach, rather than being an alternative to most behavioral explanations, provides rigorous foundations for a number of them. The interpretation that arises from its structure can, however, be different. Quantum cognition proposes that all forms of thinking are contextual due to the intrinsic indeterminacy of mental objects, including beliefs and preferences. Conscious thinking may, however, interfere and constrain contextuality. Galperti [44] relies on a similar argument to explain Receiver’s resistance to changing worldview. The reason is that individuals are resistant to changing their mind without a “good reason”, due to a drive toward consistency. This drive need not be related to true rationality, however, but instead to an entrenched attachment to a stable identity or ego. The existence of a stable identity has been questioned by numerous experimental results (see, e.g., self-perception theory and [56]). Those studies are consistent with a contextual and thus unstable identity [56]. As in the two-system approach, the extent of conscious thinking matters. This is because the drive toward maintaining a coherent ego is more effectual when the individual is conscious of her instability. As argued in [55], cognitive dissonance and its resolution is an expression of that drive in the face of instability (arising from intrinsic indeterminacy). We close this short discussion by suggesting that the quantum-like nature of mental objects need not reflect a cognitive failure but may rather be the expression of the intrinsic indeterminacy (contextuality) of human reality. The question of rationality in such a context deserves further investigation.
Finally, we recognize that quantum cognition experiments cannot have the same degree of precision as physical experiments, which prevents making and testing quantitative predictions. To a large extent, this is because it is (today) impossible to fully characterize the state of a cognitive system, which is incommensurably more complex than that of an atomic particle. Nevertheless, our experimental exercise shows that it may be useful to test some theoretical predictions in contrast with standard classical (Bayesian) ones.

Author Contributions

Theory: A.L.-M.; Methodology: A.L.-M., A.C.; Data analysis: A.C.; Writing: A.L.-M., A.C. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Paris School of Economics’ small grants.

Institutional Review Board Statement

Not requested by the funding agency.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We would like to thank seminar participants at the Paris School of Economics and at the QI2028 symposium for their enriching comments, as well as Jerome Busemeyer for very valuable suggestions on the design of the experiment.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Alternative Specifications.
Dependent Variable: Decision_ECF.

                      (1)          (2)          (3)          (4)
QComp                −0.176       −0.170       −0.172       −0.186
                     (0.134)      (0.134)      (0.134)      (0.137)
QIncomp              −0.475 ***   −0.476 ***   −0.477 ***   −0.451 ***
                     (0.160)      (0.160)      (0.160)      (0.162)
Honesty                            0.267 **     0.356 *      0.258 **
                                  (0.128)      (0.185)      (0.129)
ET                                 0.039        0.160        0.045
                                  (0.116)      (0.216)      (0.117)
Honesty:ET                                     −0.171
                                               (0.256)
Reread_Descriptions                                         −0.093
                                                            (0.185)
Age                                                          0.005
                                                            (0.006)
Male                                                        −0.189
                                                            (0.121)
Education                                                    0.021
                                                            (0.083)
NGO                                                          0.006
                                                            (0.121)
Constant              0.371 ***    0.158        0.096        0.066
                     (0.103)      (0.151)      (0.177)      (0.306)
Observations          1211         1211         1211         1211
Log Likelihood       −829.852     −827.610     −827.387     −825.591
Akaike Inf. Crit.    1665.704     1665.219     1666.775     1671.183
Bayesian Inf. Crit.  1681.002     1690.715     1697.370     1722.175
Note: * p < 0.1; ** p < 0.05; *** p < 0.01.

References

  1. P & G: Thank You, Mom|Wieden+Kennedy. Available online: https://www.wk.com/work/p-and-g-thank-you-mom/ (accessed on 16 January 2021).
  2. Packard, V. The Hidden Persuaders; McKay: New York, NY, USA, 1957. [Google Scholar]
  3. Cialdini, R.B. Influence: The Psychology of Persuasion; Collins: New York, NY, USA, 2007; p. 55. [Google Scholar]
  4. Petty, R.E.; Cacioppo, J.T. Communication and Persuasion: Central and Peripheral Routes to Attitude Change; Springer Science and Business Media: Berlin, Germany, 2012. [Google Scholar]
  5. Festinger, L.; Maccoby, N. On resistance to persuasive communications. J. Abnorm. Soc. 1964, 68, 359. [Google Scholar] [CrossRef] [PubMed]
  6. Baron, R.S.; Baron, P.H.; Miller, N. The relation between distraction and persuasion. Psychol. Bull. 1973, 80, 310. [Google Scholar] [CrossRef]
  7. Petty, R.E.; Wells, G.L.; Brock, T.C. Distraction can enhance or reduce yielding to propaganda: Thought disruption versus effort justification. J. Personal. Soc. Psychol. 1976, 34, 874. [Google Scholar] [CrossRef]
  8. Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. In Communication and Persuasion; Springer: New York, NY, USA, 1986; pp. 1–24. [Google Scholar]
  9. DellaVigna, S.; Gentzkow, M. Persuasion: Empirical evidence. Annu. Rev. Econ. 2010, 2, 643–669. [Google Scholar] [CrossRef] [Green Version]
  10. Dudukovic, N.M.; DuBrow, S.; Wagner, A.D. Attention during memory retrieval enhances future remembering. Mem. Cogn. 2009, 37, 953–961. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Fernandes, M.A.; Moscovitch, M. Divided attention and memory: Evidence of substantial interference effects at retrieval and encoding. J. Exp. Psychol. Gen. 2000, 129, 155. [Google Scholar] [CrossRef]
  12. Kahneman, D. Thinking Fast and Slow, 1st ed.; Farrar, Straus and Giroux: New York, NY, USA, 2011. [Google Scholar]
  13. Akerlof, G.; Shiller, R. Phishing for Phools—The Economics of Manipulation and Deception; Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
  14. Tversky, A.; Kahneman, D. The framing of decisions and the psychology of choice. Science 1981, 211, 453–458. [Google Scholar] [CrossRef] [Green Version]
  15. Chong, D.; Druckman, J. Framing Theory. Annu. Rev. Polit. Sci. 2007, 10, 103–126. [Google Scholar] [CrossRef]
  16. Dehaene, S.; Naccache, L.; Le Clec’H, G.; Koechlin, E.; Mueller, M.; Dehaene-Lambertz, G.; Le Bihan, D. Imaging unconscious semantic priming. Nature 1998, 395, 597–600. [Google Scholar] [CrossRef]
  17. Taylor, S.E. The Availability Bias in Social Perception and Interaction. In Judgment under Uncertainty: Heuristics and Biases; Kahneman, D., Slovic, P., Tversky, A., Eds.; Cambridge University Press: New York, NY, USA, 1982. [Google Scholar]
  18. Latham, G. Unanswered questions and new directions for future research on priming goals in the subconscious. Acad. Manag. Discov. 2019, 5, 111–113. [Google Scholar] [CrossRef]
  19. Bargh, J.A. The historical origins of priming as the preparation of behavioral responses: Unconscious carryover and contextual influences of real-world importance. Soc. Cogn. 2014, 32, 209–224. [Google Scholar] [CrossRef]
  20. Bargh, J.A. Before You Know It: The Unconscious Reasons We Do What We Do; Simon & Schuster: New York, NY, USA, 2017. [Google Scholar]
  21. Dijksterhuis, A.; Aarts, H. Goals, attention, and (un) consciousness. Annu. Rev. Psychol. 2010, 61, 467–490. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Dubois, F.; Lambert-Mogiliansky, A. Our (represented) world and quantum-like object. In Contextuality in Quantum Physics and Psychology; Advanced Series in Mathematical Psychology; Dzhafarov, E., Scott, J., Ru, Z., Cervantes, V., Eds.; World Scientific: Singapore, 2016; Volume 6, pp. 367–387. [Google Scholar]
  23. Danilov, V.I.; Lambert-Mogiliansky, A. Targeting in Persuasion Problems. J. Math. Econ. 2018, 78, 142–149. [Google Scholar] [CrossRef]
  24. Benjamin, D.J. Errors in probabilistic reasoning and judgment biases. In Handbook of Behavioral Economics—Foundations and Applications 2; Bernheim, B.D., DellaVigna, S., Laibson, D., Eds.; North-Holland: Amsterdam, The Netherlands, 2019; Chapter 2; pp. 69–186. [Google Scholar]
  25. Camerer, C. Bounded rationality in individual decision making. Exp. Econ. 1998, 1, 163–183. [Google Scholar] [CrossRef]
  26. Edwards, W. Conservatism in human information processing. In Judgment under Uncertainty: Heuristics and Biases; Kahneman, D., Slovic, P., Tversky, A., Eds.; Cambridge University Press: Cambridge, UK, 1982; pp. 359–369. [Google Scholar]
  27. Grether, D.M. Testing Bayes rule and the representativeness heuristic: Some experimental evidence. J. Econ. Behav. Organ. 1992, 17, 31–57. [Google Scholar] [CrossRef] [Green Version]
  28. Tversky, A.; Kahneman, D. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychol. Rev. 1983, 90, 293. [Google Scholar] [CrossRef]
  29. Zizzo, D.J.; Stolarz-Fantino, S.; Wen, J.; Fantino, E. A violation of the monotonicity axiom: Experimental evidence on the conjunction fallacy. J. Econ. Behav. Organ. 2000, 41, 263–276. [Google Scholar] [CrossRef]
  30. Bruza, P.; Busemeyer, J.R. Quantum Cognition and Decision-Making; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  31. Haven, E.; Khrennikov, A. The Palgrave Handbook of Quantum Models in Social Science; Macmillan Publishers Ltd.: London, UK, 2017; pp. 1–17. [Google Scholar]
  32. Khrennikov, A.; Basieva, I.; Dzhafarov, E.N.; Busemeyer, J.R. Quantum models for psychological measurements: An unsolved problem. PLoS ONE 2014, 9, e110909. [Google Scholar] [CrossRef] [Green Version]
  33. Ozawa, M.; Khrennikov, A. Application of theory of quantum instruments to psychology: Combination of question order effect with response replicability effect. Entropy 2020, 22, 37. [Google Scholar] [CrossRef] [Green Version]
  34. Bagarello, F.; Basieva, I.; Khrennikov, A. Quantum field inspired model of decision making: Asymptotic stabilization of belief state via interaction with surrounding mental environment. J. Math. Psychol. 2018, 82, 159–168. [Google Scholar] [CrossRef] [Green Version]
  35. Basieva, I.; Cervantes, V.H.; Dzhafarov, E.N.; Khrennikov, A. True Contextuality Beats Direct Influences in Human Decision Making. J. Exp. Psychol. Gen. 2019, 148, 1925–1937. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Aerts, D.; Haven, E.; Sozzo, S. A proposal to extend expected utility in a quantum probabilistic framework. Econ. Theory 2018, 65, 1079–1109. [Google Scholar] [CrossRef] [Green Version]
  37. Danilov, V.I.; Lambert-Mogiliansky, A. Expected Utility under Non-classical Uncertainty. Theory Decis. 2010, 68, 25–47. [Google Scholar] [CrossRef]
  38. Danilov, V.I.; Lambert-Mogiliansky, A.; Vergopoulos, V. Dynamic consistency of expected utility under non-classical (quantum) uncertainty. Theory Decis. 2018, 84, 645–670. [Google Scholar] [CrossRef] [Green Version]
  39. Danilov, V.I.; Lambert-Mogiliansky, A. Preparing a (quantum) belief system. Theor. Comput. Sci. 2018, 752, 97–103. [Google Scholar] [CrossRef] [Green Version]
  40. Cohen-Tannoudji, C.; Diu, B.; Laloë, F. Mécanique Quantique; Hermann, EDP Sciences: Paris, France, 2000. [Google Scholar]
  41. Kamenica, E.; Gentzkow, M. Bayesian Persuasion. Am. Econ. Rev. 2011, 101, 2590–2615. [Google Scholar] [CrossRef] [Green Version]
  42. Bloedel, A.W.; Segal, I.R. Persuasion with Rational Inattention. Available online: https://ssrn.com/abstract=3164033 (accessed on 16 April 2018).
  43. Lipnowski, E.; Mathevet, L. Disclosure to a psychological audience. Am. Econ. J. Microeconom. 2018, 10, 67–93. [Google Scholar] [CrossRef] [Green Version]
  44. Galperti, S. Persuasion: The Art of Changing Worldviews. Am. Econ. Rev. 2019, 109, 996–1031. [Google Scholar] [CrossRef] [Green Version]
  45. De Clippel, G.; Zhang, Z. Non-Bayesian Persuasion; Working Paper; Brown University: Providence, RI, USA, 2020. [Google Scholar]
  46. Broekaert, J.B.; Busemeyer, J.R.; Pothos, E.M. The disjunction effect in two-stage simulated gambles. An experimental study and comparison of a heuristic logistic, Markov and quantum-like model. Cogn. Psychol. 2020, 117, 101262. [Google Scholar] [CrossRef]
  47. Busemeyer, J.R.; Wang, Z.; Shiffrin, R.S. Bayesian model comparison favors quantum over standard decision theory account of dynamic inconsistency. Decision 2015, 2, 1–12. [Google Scholar] [CrossRef] [Green Version]
  48. Denolf, J.; Martínez-Martínez, I.; Josephy, H.; Barque-Duran, A. A quantum-like model for complementarity of preferences and beliefs in dilemma games. J. Math. Psychol. 2016, 78, 96–106. [Google Scholar] [CrossRef] [Green Version]
  49. Moreira, C.; Wichert, A. Are quantum-like Bayesian networks more powerful than classical Bayesian networks? J. Math. Psychol. 2018, 82, 73–83. [Google Scholar] [CrossRef]
  50. Dzhafarov, E.; Scott, J.; Ru, Z.; Cervantes, V. (Eds.) Contextuality from Quantum Physics to Psychology; Advanced Series in Mathematical Psychology; World Scientific: Singapore, 2016; Volume 6, ISBN 978-981-4730-62-4. [Google Scholar]
  51. Wang, Z.; Busemeyer, J.R.; DeBuys, B. Beliefs, action and rationality in strategical decisions. Top. Cogn. Sci. 2020, in press. [Google Scholar]
  52. Bartneck, C.; Duenser, A.; Moltchanova, E.; Zawieska, K. Comparing the similarity of responses received from studies in Amazon’s Mechanical Turk to studies conducted online and with direct recruitment. PLoS ONE 2015, 10, e0121595. [Google Scholar] [CrossRef] [PubMed]
  53. Kees, J.; Berry, C.; Burton, S.; Sheehan, K. An analysis of data quality: Professional panels, student subject pools, and Amazon’s Mechanical Turk. J. Advert. 2017, 46, 141–155. [Google Scholar] [CrossRef]
  54. Festinger, L. A Theory of Cognitive Dissonance; Stanford University Press: Stanford, CA, USA, 1957; Volume 2. [Google Scholar]
  55. Lambert-Mogiliansky, A.; Zamir, S.; Zwirn, H. Type-Indeterminacy a Model of the KT (Kahnemann and Tversky) Man. J. Math. Psychol. 2009, 53, 349–361. [Google Scholar] [CrossRef] [Green Version]
  56. Lambert-Mogiliansky, A. Quantum Type Indeterminacy in Dynamic Decision-Making: Self-Control through Identity Management. Games 2012, 3, 97–118. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Project states.
Figure 2. Distraction.
Figure 3. Descriptive Histograms.
Table 1. Logit Regressions on Final Choice (ECF)—Coefficients transformed.
Dependent Variable: Decision_ECF.

                         General     Hon         Urg         ET          TE          ET-Hon      TE-Hon      ET-Urg      TE-Urg
Compatible condition     −0.170      −0.142      −0.231      −0.181      −0.175      −0.164      −0.154      −0.216      −0.263
  (t)                    −1.363      −0.938      −1.030      −1.063      −0.958      −0.807      −0.683      −0.669      −0.833
Incompatible condition   −0.363 ***  −0.336 **   −0.455 *    −0.249      −0.488 ***  −0.142      −0.523 **   −0.510 *    −0.404
  (t)                    −2.791      −2.152      −1.953      −1.307      −2.763      −0.591      −2.563      −1.658      −1.129
Honesty                   0.294 **                            0.196       0.407 *
  (t)                     1.998                               1.007       1.809
Order (ECF-TF)            0.046      −0.014       0.209
  (t)                     0.383      −0.101       0.868
Reread Descriptions      −0.088      −0.178       0.341      −0.057      −0.117      −0.139      −0.209       0.374       0.241
  (t)                    −0.502      −0.941       0.720      −0.225      −0.467      −0.508      −0.777       0.566       0.354
Age                       0.005       0.008      −0.002       0.006       0.003       0.002       0.015       0.019      −0.022
  (t)                     0.894       1.198      −0.186       0.826       0.394       0.227       1.455       1.245      −1.513
Male                     −0.172      −0.128      −0.262      −0.067      −0.274 *    −0.086      −0.184      −0.119      −0.390
  (t)                    −1.566      −0.952      −1.348      −0.414      −1.836      −0.447      −0.968      −0.393      −1.518
Education                 0.021       0.006       0.072       0.002       0.026      −0.011       0.012       0.032       0.116
  (t)                     0.252       0.064       0.450       0.019       0.210      −0.085       0.080       0.147       0.479
NGO                       0.006       0.038      −0.062       0.190      −0.187       0.356      −0.272      −0.167       0.137
  (t)                     0.051       0.259      −0.287       1.058      −1.154       1.554      −1.446      −0.589       0.392
Constant                  0.068       0.251       0.317      −0.028       0.303       0.292       0.253      −0.213       1.559
  (t)                     0.216       0.617       0.534      −0.068       0.597       0.528       0.419      −0.329       1.240
Observations              1211        864         347         638         573         458         406         180         167
Log Likelihood           −825.591    −586.663    −237.452    −436.394    −385.940    −311.893    −269.413    −122.258    −112.568
Akaike Inf. Crit.        1671.183    1191.325     492.904     890.788     789.879     639.786     554.825     260.515     241.136
Note: * p < 0.1; ** p < 0.05; *** p < 0.01. Coefficients are transformed as exp(β) − 1.
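As an illustration of the transformation in the note: the untransformed logit coefficient of the incompatible condition in specification (4) of Table A1 is β = −0.451, and exp(−0.451) − 1 ≈ −0.363, the value reported in the General column of Table 1.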
Table 2. Predicted Probability Difference between Incompatible and Control Condition.

            General   Hon       Urg       ET        TE        ET-Hon    TE-Hon    ET-Urg    TE-Urg
Difference  −0.111    −0.101    −0.149    −0.070    −0.163    −0.037    −0.179    −0.174    −0.125
p-value      0.005     0.031     0.051     0.191     0.006     0.555     0.010     0.097     0.259
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
