Review

Beyond the Surface: A New Perspective on Dual-System Theories in Decision-Making

Baruch Ivcher School of Psychology, Reichman University, Herzliya 4610101, Israel
Behav. Sci. 2024, 14(11), 1028; https://doi.org/10.3390/bs14111028
Submission received: 3 September 2024 / Revised: 26 October 2024 / Accepted: 28 October 2024 / Published: 1 November 2024

Abstract

The current paper provides a critical evaluation of the dual-system approach in cognitive psychology. This evaluation challenges traditional classifications that associate intuitive processes solely with noncompensatory models and deliberate processes with compensatory ones. Instead, it suggests a more nuanced framework where intuitive and deliberate characteristics coexist within both compensatory and noncompensatory processes. This refined understanding of dual-process models has significant implications for improving theoretical models of decision-making, providing a more comprehensive account of the cognitive mechanisms underlying human judgment and choice.

1. Introduction

Decision research scholars aim to reveal the cognitive processes underlying choice behavior. However, as mental processes are not observable, a variety of methodological developments have been proposed to overcome the obstacles to investigation. One major development is process modeling. Process models characterize the cognitive strategies employed by decision-makers while making judgments or choices [1,2]. Process-tracing techniques such as verbal protocol analysis, e.g., [3,4,5,6], information boards, e.g., [7,8,9], and the Mouselab methodology, e.g., [10,11], together with indirect measures of processes, e.g., [12,13,14,15], allow these models to trace the processes underlying human decisions [16]. Direct and indirect measures are two approaches researchers use to understand the decision-making process. Direct measures involve straightforward methods where participants are explicitly asked about their thoughts, feelings, or actions (e.g., in surveys and interviews), or their responses are directly examined. Indirect measures, on the other hand, assess underlying processes without requiring participants to self-report. These methods often capture subtle, automatic reactions that participants may not be consciously aware of, such as physiological responses (like heart rate or pupil dilation) or behavioral indicators (like reaction times).
One major trend in decision research considers two families of cognitive processes: intuitive and deliberate [17,18,19,20,21]. In the current manuscript, I suggest two theoretical distinctions that might broaden the generality of this dual-system approach. The first distinction relates to certain decision situations in which conflicts between the two types of processes emerge [20]. Traditionally, the two-system theory explains such salient situations (e.g., the Müller-Lyer illusion [22] and the conjunction fallacy; for a full discussion, see [20,23]). However, situations where the dissociation between the two systems is less explicit are often neglected or ignored. I suggest that a consideration of these latent situations under the dual-process framework would account for some of the contradictory findings in the decision research literature (e.g., the findings about loss aversion [24] and the self–other bias [25]). By revisiting such findings using a variety of physiological (autonomic) system measures, including pupil dilation, peripheral arterial tone, and heart rate, I propose the dissociation between behavior and autonomic responses as a marker of the different decision systems.
In addition, I suggest that the classical classifications of cognitive processes into intuitive/deliberate and compensatory/noncompensatory, which are often considered analogous (cf. [26,27]), are, in fact, indicators of between-system and within-system processes. Briefly, there are two decision systems, each displaying compensatory and noncompensatory decision processes. Later, I will discuss the use of indirect measures of processes such as response time and choice proportion to highlight the power of this interpretation in accounting for several of the discrepancies in classical decision research (e.g., the theoretical hurdle faced by two-system approaches to successfully classify cognitive processes and heuristic tools into the two systems [26,28]; the findings that in some situations rational reasoning is the result of intuitive judgment [17,29,30,31,32,33,34,35,36]).
Considering the lack of a clear theoretical account and the contradicting findings surrounding dual-system approaches [19,37], it is unsurprising that they attract criticism [38,39]. Thus, a more refined and comprehensive theoretical account of the dual-system framework and a better understanding of the interplay between these two systems and their respective roles in reasoning are essential for advancing our comprehension of human cognition. In this paper, I present a critical review of dual-system theory. This topic is particularly relevant because, for many years, dual-system theories have focused on intuitive versus deliberate processes but have not fully addressed how these types of processes coexist and influence each other in situations requiring complex cognition.
The literature on complex cognition and problem-solving, such as the work of Dörner and Funke [40] on complex problem-solving (CPS) and Cronin et al. [41] on dynamic decision-making (DDM), suggests that decision-making in real-world settings involves an intricate blend of cognition, motivation, and emotional regulation. This literature focuses on how people respond to high-stakes, unpredictable environments, where dual-process theories often fail to capture the dynamic and adaptive nature of human decision-making. The literature on complex cognition, e.g., [42,43,44], provides valuable insights into decision-making as a multi-faceted process. This approach illustrates the importance of incorporating complex, interacting variables such as cognitive load, contextual factors, and adaptive strategies that people employ under varying degrees of uncertainty. Recognizing these nuances within the dual-system approach can deepen our understanding of how intuitive and deliberate processes operate jointly to support complex decision-making.
Accordingly, the current paper seeks to expand dual-system theory and discuss it in broader and more nuanced terms. It focuses on dual systems and not the important topic of complex decision-making. I start by reviewing the two classical classifications for the decision-making process models, followed by an overview of the factors that might limit their general acceptability. Then, I present two case studies that use existing results of laboratory studies to illustrate the application of the proposed framework and its potential. Finally, I use novel analyses of existing data to provide initial support for the proposed theoretical assumptions. Thus, while this work primarily synthesizes existing literature on dual-system theories and decision-making, it provides illustrative case studies and initial empirical support to substantiate the presented theoretical perspectives.

2. Two Systems of Reasoning

One question in cognitive psychology concerns the issue of whether the mind should be treated as having different functional parts. Simplistically, classic discussions tend to consider two minds at work: one is based on intuitive, automatic processing, and the other is based on reflective, deliberate processing that forms coherent, justifiable sets of beliefs and action plans [45]. While these dual-process models may come in many forms (e.g., heuristic and systematic [46]; experiential and rational [18,47]; intuitive and analytic [48]; reflexive and reflective [49]; and associative and rule-based [20,50]), they all distinguish cognitive operations that are quick and associative from those that are slow and rule-governed [19]. Although many names have been ascribed to these two cognitive mechanisms, the neutral or generic terms System 1 and System 2, proposed by Stanovich and West [21] and adopted by Kahneman and Frederick [27], are used here. As in Kahneman and Frederick’s [27] summary, I use the term ‘system’ to describe a collection of cognitive processes that are architecturally (and evolutionarily) distinct and that differ in their speed, controllability, and the contents on which they operate.
System 1. System 1 is based on preconscious, intuitive, and automatic processing. Information is processed rapidly and in parallel; processing is associative, effortless, and opaque to the decision-maker. As such, System 1 places minimal demands on cognitive resources and acts upon schemas that are primarily generalizations from concrete, emotionally significant, intense, or repetitive experiences.
System 2. System 2 is based on slow, deliberate, and reflective information processing in a controlled and self-aware fashion. Information processing is serial, involving deductive reasoning. As such, it is effortful and cognitively demanding. System 2 attains beliefs and knowledge by conscious learning from explicit sources (e.g., books and lectures). Thus, like System 1, System 2 learns from experience; however, it does so not through automatically established associations but through logical or rational inference. Finally, unlike System 1, System 2 has a short evolutionary history.
The two systems are assumed to operate in parallel, and both processes compete to determine the final responses. In short, when people are required to choose, System 1 processes some of the information (usually the most accessible) and immediately proposes an intuitive answer. In parallel, System 2 monitors the quality of System 1’s response, which it may approve, alter, or override. If the overt response retains the initial proposal of System 1 without (much) modification, it is called intuitive. Nonetheless, System 2’s responses will likely remain anchored on initial impressions [18,20,27,48]. The relative contribution of each system is determined by situational factors [51,52] and characteristics of the decision-maker [21,51,53,54,55,56,57,58,59,60].
The notion of two minds in one brain (i.e., the dual-process model for human cognition) has been empirically confirmed in numerous studies in the past (for a review, see [19,20,26,27,51]). For example, in recent research, Bago et al. [56] asked participants to evaluate a series of fake and real news headlines. First, the participants were asked to respond intuitively (under time limitations and cognitive load). Then, they were asked to rethink their intuitive response without limitations, thus providing a more deliberative response. The results showed that intuitive thinking led to higher rates of believing fake news than deliberative thinking did. These results might highlight the difference between intuitive and deliberative reasoning in evaluating misinformation. Nevertheless, the structure of the decision-making tasks in this study may have primed participants to question their intuitive responses, potentially leading them toward a more deliberate mode of reasoning. This design choice could affect the natural occurrence of intuitive judgments, as participants might feel compelled to reassess their initial reactions rather than rely on instincts. This bias provides context for interpreting the findings of Bago et al.’s study [56], as the priming effect may lead participants to engage in more deliberate processing than they would in a real-life, non-experimental setting. As such, the paradigm may not be sufficient to fully explore complex decisions.
Neuropsychological research [57,58,59] supports the dual-system approach. For example, using fMRI methodology, Goel and colleagues [57,58] showed a neural differentiation of intuitive and deliberate reasoning. Deliberate reasoning was associated with activation of the right inferior prefrontal cortex, whereas intuitive, belief-based responses were associated with activation of the ventral medial prefrontal cortex. In addition, their data supported the idea that System 2 processes can intervene or inhibit System 1 processes. Considering that dual-process models can account for many basic phenomena in psychology in general (e.g., the Müller-Lyer illusion [22] and the moon illusion [60]) and in behavioral decision research in particular (e.g., the ratio bias [61]; the belief–bias effect [26]; and the representativeness, availability, and anchoring heuristics [62]), it is hardly a surprise that the dual-process approach gained traction, both at the theoretical and applied levels. In Section 4.1, I propose that the predictions of the dual-system approach may be more robust than previously considered in that they can be detected as disequilibrium between behavioral and autonomic physiological responses.

3. Compensatory vs. Noncompensatory Models

Another classic theoretical classification of decision-making models is compensatory versus noncompensatory frameworks [2,63,64,65,66]. In general, compensatory models like Expected Utility Theory [67] and Prospect Theory [68,69] assume that choices require trade-offs between outcomes and probabilities. For example, when people choose between two gambles (e.g., 100% to obtain USD 100 or 50% to obtain USD 200), they sum and weigh all the available information and choose the alternative with the highest expected utility, at least from their subjective viewpoint. Thus, in a compensatory process, high values on one feature of an alternative (e.g., a big payoff) can compensate for low values on others (e.g., a low probability). As such, the relationship between the expected values of the choice options is crucial in the decision process.
Conversely, noncompensatory models like Elimination by Aspects [70] and Lexicographic Theory [71] assume that the different features of the options are evaluated in the order of their validity. If the most valid feature (i.e., the “best”) can differentiate between the options (i.e., is clearly better), then the option with the highest value on this feature is chosen, and the decision process is over. Otherwise, the next best feature is examined, and so on. Thus, noncompensatory models assume no trade-offs between conflicting values of the different features, and low-value features cannot outweigh high-value ones. This means that decisions can be based on limited information (e.g., one feature) while ignoring all others. In addition, noncompensatory processes obviate the need for more complex operations like summing and weighting the possible options [13].
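To make the contrast concrete, the minimal Python sketch below pits a compensatory expected-value rule against a noncompensatory lexicographic rule on the gamble pair from the example above. The feature ordering and the tie-breaking default are illustrative assumptions, not part of any specific published model.

```python
def expected_value(gamble):
    """Compensatory rule: weigh and sum every outcome by its probability."""
    return sum(p * x for x, p in gamble)

def lexicographic_choice(a, b, features):
    """Noncompensatory rule: examine features in order of assumed validity
    and stop at the first feature that discriminates between the options."""
    for feature in features:
        fa, fb = feature(a), feature(b)
        if fa != fb:
            return a if fa > fb else b
    return a  # no feature discriminates; default arbitrarily

# Gambles as (outcome, probability) pairs: 100% to obtain USD 100
# vs. 50% to obtain USD 200 (and 50% to obtain nothing).
sure_thing = [(100, 1.0)]
risky = [(200, 0.5), (0, 0.5)]

# Compensatory view: the big payoff compensates for the low probability,
# so the two options have identical expected values.
print(expected_value(sure_thing), expected_value(risky))  # 100.0 100.0

# Noncompensatory view (illustrative feature order): the minimum payoff
# alone decides, and no trade-off is ever computed.
features = [lambda g: min(x for x, _ in g),   # minimum payoff first
            lambda g: max(x for x, _ in g)]   # then maximum payoff
print(lexicographic_choice(sure_thing, risky, features) is sure_thing)  # True
```

Note how the lexicographic rule never touches the probabilities once the first feature discriminates, which is exactly the sense in which low-value features cannot outweigh high-value ones.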
While many studies have confirmed that noncompensatory models, such as fast-and-frugal heuristics, show predictive accuracy under certain conditions, e.g., [72,73,74], there is an ongoing debate as to their adequacy as process models, e.g., [10,15,34,75]. Thus, further investigation is required to evaluate whether noncompensatory models merely reflect, to some extent, researchers’ trade-off between simplicity and descriptive accuracy. Tversky [70] pointed out concerning his noncompensatory model (i.e., Elimination by Aspects) that “…there may be many contexts in which it provides a good approximation to much more complicated compensatory models and could thus serve as a useful simplification procedure…” (p. 298). Alternatively, the theoretical appeal of the noncompensatory models might lie not only in their parsimonious nature but also in their ability to describe the cognitive processes that underlie choice behavior.
These questions were addressed by Ayal and Hochman [12]. In their study, two experiments were presented that juxtapose predictions derived from two prototypical fast-and-frugal noncompensatory models (i.e., the Priority Heuristic [13] and Take the Best Heuristic [76]) and alternative predictions derived from compensatory principles. Dependent measures, including reaction time, proportions of the correct response, and level of confidence, were found to be better predicted by compensatory indices; however, these indices could not account for the entire decision process exhaustively. In alignment with these findings, a model based on both types of processes was a runner-up in a behavioral prediction tournament [77]. Thus, these findings highlight the importance of integrating compensatory and noncompensatory principles in choice behavior models aiming to capture the complex decision-making process [40].

4. Reconsidering the Dual-System Approach

4.1. Simultaneous Contradictory Belief

Sloman [20] defines Criterion S as a decision situation in which people simultaneously feel that two contradicting responses are plausible, even if they do not act upon it [37,78]. In these kinds of situations, “…people first solve a problem in a manner consistent with one form of reasoning and then, either with or without external prompting, realize and admit that a different form of reasoning provides an alternative and more justifiable answer” [20], p. 11.
Many phenomena satisfy Criterion S. Some of the best-known and compelling examples are the Müller-Lyer illusion [22], the conjunction fallacy [79], and superstitious beliefs [80]. However, since people do not affirm both responses, some less apparent phenomena often fail to be recognized as belonging to this category. The effect of this failure is the inability of the field of decision research to explain some contradictory results on some of the most eminent phenomena in the literature (see, for example, Schurr and Erev’s [81] notions on base-rate neglect and the findings of Erev et al. [82], Ert and Erev [83], and Yechiam and Hochman [84,85] on loss aversion). Empirical findings regarding physiological measures in general, e.g., [86,87,88,89], and those that combine behavioral and physiological measures in particular, e.g., [84,90,91,92], highlight the potential of this method in identifying such decision phenomena [87], as well as the potential of behavior and autonomic indices to serve as markers of the different cognitive systems [16].

4.2. Between-System Processes vs. Within-System Processes

At first glance, the two classifications of the process models as intuitive/deliberate and compensatory/noncompensatory appear to be interdependent. A review of the theoretical and empirical evidence in behavioral decision research suggests that by and large, compensatory processes are considered more rational and deliberate. In contrast, noncompensatory processes are more intuitive.
However, a thorough investigation suggests that this might not be the case. It is not always possible to distinguish between compensatory and noncompensatory attributes and their association with Systems 1 and 2. Moreover, findings sometimes suggest that several attributes cannot easily be mapped onto one specific system. For example, the dual-process framework claims that System 2 evolved late as a powerful general-purpose reasoning system. In accordance with this assumption, it has been argued that the effortless, rapid, domain-specific, noncompensatory fast-and-frugal heuristics (e.g., the recognition heuristic [93]) pertain to this system, which is considered to be more deliberate and rational. However, the recognition heuristic draws solely on attributes such as recognition and familiarity, which are considered characteristics of System 1 (cf. [27]). Under such assertions, strict classifications of the characteristics of the processes underlying each system might underestimate the possible role of System 2 in the overall decision process.
Considering this situation, I propose that intuitive vs. deliberate characteristics represent between-system processes, whereas compensatory vs. noncompensatory principles represent within-system processes. I believe that any attempt to characterize the cognitive processes that underlie choice behavior under a dual-system approach would benefit greatly from these considerations. To test the robustness and applicability of such a framework, I suggest using indirect measures of processes and combining such indices with physiological ones.

5. Dissociation Between Behavior and Autonomic Responses

Within the dual-system view, neuropsychological measures are assumed to provide a clear window into the intuitive system (cf. [94]). Moreover, findings show that intuitive reasoning increases the arousal of autonomic indices [90,95]. I argue that the dissociation between behavior and neuropsychological responses can explain contradictory findings in judgment situations under the dual-system framework.
According to prospect theory [68], individuals are more sensitive to losses than equivalent gains. This behavior, which is empirically well-established [96,97,98,99], is considered the manifestation of the basic psychological phenomenon of loss aversion. For example, Tom et al. [99] used functional magnetic resonance imaging (fMRI) to explore brain activity while participants decided whether to accept or reject mixed gambles (i.e., an equal chance to win or lose some money). The authors found that the possibility of losing was associated with decreased activity in brain regions assumed to code subjective values and not increased activity in regions associated with negative emotions. In addition, their results provided evidence that the algebraic function that maps monetary incentives to subjective values is markedly steeper for losses than gains. In addition, loss aversion has been used to account for several paradoxical phenomena in classical decision research, such as the equity premium puzzle [100] and the status quo bias [101].
However, recent evidence shows that people do not exhibit loss aversion [82,83,102,103]; for a review, see [24]. Thus, based on Sloman’s [20] Criterion S, I postulate that conflicts between the two systems might emerge in situations that examine loss aversion. Accordingly, revisiting these findings using a variety of experience-based tasks and autonomic system measures should yield the expected dissociation between the two systems. In the next sub-sections, I present two case studies to support this claim. The first is a new analysis of the data published in Hochman and Yechiam [84]. I show that examining the difference between pupil diameter and behavioral responses in experience-based risky decisions under the proposed framework can help better understand the unique role of losses in decision-making. In the second, I provide more information on data briefly mentioned in Hochman et al. [16], which used peripheral arterial tone (PAT). Importantly, in both case studies, I do not present novel data. Rather, I try to draw new conclusions from the existing data to illustrate the potential of the proposed theoretical framework.

5.1. Case Study 1: Pupil Dilation vs. Behavioral Responses

Pupillometry measures the extent to which the pupils dilate due to external stimuli or arousal. Previous research found that pupil diameter increases in response to increased processing demands [104,105,106,107,108]. For example, pupil diameter increased during problem-solving (i.e., mental division problems) until the point of the solution, and peak dilations were the largest for the most difficult problems [102]. Problem-solving in real-world contexts involves complex cognitive processes encompassing analytical reasoning and emotional, motivational, and experiential components. For example, in personnel selection [109], decision-makers assess candidates not only on quantifiable skills but also on attributes like adaptability, social fit, and growth potential. This requires a nuanced understanding of how multiple cognitive processes like intuition, experience-based judgment, and deliberate reasoning converge to inform hiring decisions. Similarly, in political decision-making [110], leaders’ and policymakers’ decisions often extend beyond simple cost–benefit analysis; they involve strategic thinking, empathy, and consideration of long-term societal impacts. This context requires decision-makers to rely on immediate, intuitive judgments and deliberate processes, balancing short-term pressures with broader policy objectives. Recognizing problem-solving as an inherently multifaceted and adaptive process provides a more accurate representation of how individuals navigate complex environments.
Thus, a dissociation between pupil diameter and behavioral responses should help clarify contradictory findings in the literature, such as those concerning loss aversion under the dual-system framework. In Hochman and Yechiam’s [84] Study 1, 25 undergraduates were asked to choose repeatedly between two options, each associated with different potential monetary gains or losses. After each selection, the payoff obtained in the current trial and an update of the accumulated payoff counter were presented. Participants were instructed to maximize their total earnings. In addition, they were told that in each round, their choice would lead to either a gain or a loss. No further information regarding the payoff structure was provided.
This study had two within-subject conditions, with 60 trials each. In the Mixed Condition, a selection of one of the buttons, referred to as “Risky”, provided a payoff according to the following distribution: a 50/50 chance of either gaining or losing two points. The payoff from the alternative button, referred to as “Safe”, was sampled from a distribution with a 50/50 chance of either gaining or losing one point. In the Gains Condition, a fixed value of four points was added to all payoffs to create an all-gains domain; otherwise, the Gains Condition was identical to the Mixed Condition. Payoffs were delivered deterministically; that is, each condition started with either a gain (or relative gain) or a loss (or relative loss) and switched to a payoff from the opposite domain on trial t + 1. In addition, on every trial, a constant of 0.1–0.5 points (in 0.1 intervals) was randomly added to or subtracted from the sampled payoffs.
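To make the payoff scheme easier to follow, here is a minimal Python sketch of one reading of the description above. Whether both buttons share a trial’s domain, whether the same jitter applies to both, and the starting domain are my assumptions; `trial_payoffs` is a hypothetical helper, not the original experimental software.

```python
import random

def trial_payoffs(prev_domain, gains_condition=False):
    """One trial of the money-machine task (a sketch). Domains alternate
    deterministically: a gain/relative-gain trial is followed by a
    loss/relative-loss trial, and vice versa."""
    domain = -prev_domain  # switch to the opposite domain on trial t + 1
    jitter = random.randint(1, 5) / 10 * random.choice([-1, 1])  # 0.1-0.5 points
    shift = 4 if gains_condition else 0  # fixed value added in the Gains Condition
    risky = domain * 2 + jitter + shift  # "Risky" button: +/-2 points
    safe = domain * 1 + jitter + shift   # "Safe" button: +/-1 point
    return risky, safe, domain

domain = random.choice([-1, 1])  # assumed starting domain
for t in range(60):              # 60 trials per condition
    risky, safe, domain = trial_payoffs(domain, gains_condition=False)
```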
Paired-samples t-tests revealed that the aggregated proportion of selections of the risky option across all trials was 0.46 in the Mixed Condition and 0.51 in the Gains Condition. These results replicate those found in Erev et al. [82] and suggest that, behaviorally, participants did not exhibit loss aversion, whether the loss was absolute (i.e., in the Mixed Condition) or relative (i.e., in the Gains Condition). Thus, at least under these conditions, losses did not loom larger than gains, and the value function was not steeper in the negative than in the positive domain.
In contrast, absolute losses were associated with higher levels of arousal (indexed by pupil diameter). Moreover, these differences were significant in the epochs of 625 ms to 1125 ms after the stimulus onset, corresponding to previous reports on stimulus recognition, e.g., [111]. Similar findings were not observed in the all-gains domain (i.e., Gains Condition). These findings support the dissociation between behavioral and autonomic measures. Even though the overt response did not reflect loss aversion, the intuitive or automatic response to gains was markedly different from the response to losses. Thus, it could be argued that loss aversion satisfies Criterion S under certain conditions (e.g., experience-based decisions) since, intuitively, people exhibit higher sensitivity to losses than to gains, but their final strategy is similar under the two conditions [24]. This gap can account for previous contradictory results regarding the loss aversion phenomenon within the dual-system framework.
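For readers unfamiliar with this kind of epoch analysis, the following Python sketch shows the general logic, assuming a per-trial pupil trace sampled at a known rate. The placeholder values, the `epoch_mean` helper, and the paired t-test are illustrative; they do not reproduce the original analysis pipeline.

```python
import numpy as np
from scipy import stats

def epoch_mean(trace, onset_idx, fs_hz, t0=0.625, t1=1.125):
    """Mean pupil diameter in the 625-1125 ms post-stimulus epoch in
    which the gain/loss differences were reported (trace: 1-D array of
    pupil diameters; fs_hz: sampling rate in Hz)."""
    lo = onset_idx + int(t0 * fs_hz)
    hi = onset_idx + int(t1 * fs_hz)
    return float(np.mean(trace[lo:hi]))

# Per-participant epoch means for loss vs. gain trials (placeholder values),
# compared with a paired test, mirroring the logic described above.
loss = np.array([4.10, 3.95, 4.32, 4.01, 4.18])
gain = np.array([3.92, 3.88, 4.15, 3.97, 4.02])
t, p = stats.ttest_rel(loss, gain)
print(f"t = {t:.2f}, p = {p:.3f}")
```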
Under these assertions, one possible explanation for the dissociation between behavior and autonomic responses could be that in laboratory conditions (in which losses and gains are less real and/or significant), the intuitive tendency to avoid losses is mitigated by other tendencies such as the desire to be perceived as more rational, the tendency to diversify between outcomes, i.e., a diversification bias [112], to increase one’s interest in the task, and so forth. As participants learn that the risk is not big and the alternatives associated with losses are not necessarily detrimental, they learn to override the intuitive response and not shy away from these losses. Further research is needed to examine the plausibility of these interpretations and explore additional ones.

5.2. Case Study 2: Peripheral Arterial Tone vs. Behavioral Responses

Peripheral Arterial Tonometry (PAT) is a tool that measures vascular tone at the fingertip and can be used as a non-invasive measure of sympathetic nervous system activity [113]. Vascular tone is influenced by blood pressure, peripheral vascular resistance, blood volume in the finger, and autonomic nervous system activity [114]; sympathetic nervous system activity is inferred from the signal. The PAT signal (i.e., vasoconstriction) has been found to decrease in response to increased processing demands, e.g., [86]. Thus, much like pupil dilation, the PAT signal has the potential to highlight dissociations between the two decision systems. Next, I present novel data that used PAT to examine the dissociation between behavioral and physiological responses to losses and gains. A summary of these results was published by [16]. Unlike the previous case study [84], the current case study is based on data that were not previously published in detail. Thus, it includes more detailed information about the method.

5.2.1. Method

Design and procedure
Twenty undergraduates were presented with a “money machine” identical to the one described in the previous case study (and detailed in [84]). This study also had two within-subject conditions, with 60 trials each. In the Mixed Condition, a selection of the risky option provided a payoff from one of two distributions. In the gain domain, the possible payoffs were 8.5, 6, and 3.5 points; in the loss domain, the possible payoffs were −1.5, −4, and −6.5. All the payoffs had an equal probability of being sampled. In addition, on each trial, the sampled payoff was drawn from the opposite distribution (i.e., gain or loss) compared to the distribution sampled in trial t − 1. Selecting the safe option yielded a constant payoff of one point. In the Gains Condition, a constant payoff of 10 points was added to all payoffs. At the end of this study, participants were compensated based on their selections (around 10 NIS on average).
Because PAT signals are long-lasting waveforms (i.e., several seconds are required to return to base level), the interstimulus intervals (ISIs) were set to 15 s to avoid overlapping responses.
Physiological data
Peripheral arterial tone was measured using a finger probe and the SitePAT_200 electrical plethysmograph (Itamar Medical Ltd., Caesarea, Israel).

5.2.2. Results and Brief Discussion

The results of this examination show no behavioral indications for loss aversion in the Mixed Condition. The aggregated P(Risky) across participants was 0.52 (SD = 0.27) and not above chance level (t(19) = 0.393, p = 0.699). In contrast, the PAT signal (which represents the average vasoconstriction in a 5-s interval starting from the onset of the stimulus) was significantly different for gains than for losses (t(16) = −1.829, p < 0.05 in a one-tailed t-test). Although this is a very small sample, these findings replicate previous ones [84] and further support the potential of the dissociation between behavioral and autonomic measures to mark the different decision processes.

5.3. Between-System Processes vs. Within-System Processes

To validate the assertion that loss aversion manifests as physiological arousal that may, under certain conditions, be masked behaviorally, one must demonstrate that in situations that do not satisfy Criterion S, no dissociation between physiological indices and behavioral responses will be found (i.e., the overt response and the physiological arousal both serve as indicators of loss aversion). A study that can create loss-aversive situations in which the overt behavior corresponds to the physiological response, namely, situations in which both the physiological and behavioral responses exhibit loss aversion, is presented below. In this study, participants will be presented with a set of description-based mixed risky choice problems of the type:
Choose between
A: Win amount x for sure
B: Win amount y with probability p; lose amount y with probability 1 − p
in which all the choice problems provide similar expected values.
To induce aversive loss behavior, the potential losses should become more tangible. The payoff structure will be constructed as follows: A fixed payment (about USD 5) will be used as a participation fee. However, the final payment will be a function of the participant’s performance, and no truncation at zero will be included. Thus, non-loss-aversive behavior might result in the possibility of paying the experimenter a relatively substantial amount of money at the end of this study. Of course, this money will eventually be returned to the participants (by participating in an additional study that will provide them with a substantial gain relative to the lost amount). However, this will be disclosed only at the end of the loss-aversion study and after the debtor participant has paid off her bets.
Physiological data will be collected using pupillometry, heart rate (HR), or PAT. This design creates a situation in which the option that is not associated with the possibility of losing is perceived as more attractive by both systems, as the safer option allows participants to maximize total earnings while avoiding outcomes that might lead to losses. In this case, I would expect no dissociation between physiological and behavioral measures, that is, an increase in arousal in response to the risky options and a decrease in the selection of these options.

6. Processes Within the Two Systems

One implication of the notion that intuitive equals noncompensatory and deliberate equals compensatory is that an intuitive decision (which is simple and noncompensatory) is more prone to judgmental errors than the more rational, compensatory, and deliberate one (cf. [26,27]). However, this traditional assumption sometimes fails to account for existing empirical evidence. Specifically, in some cases, it has been shown that intuitive and rapid processes might be more accurate than deliberate ones [55,115,116,117]. For example, Glöckner [34] showed that most individuals can integrate multiple pieces of information very quickly and intuitively (with a median decision time of less than three seconds) in a weighted compensatory manner. Similarly, research by Hochman and Erev [118] suggests that decision-makers may intuitively base their preferences on a small sample of previous experiences under similar contingencies. Finally, Ayal et al. [119] show that the quality of a decision depends not on the system but rather on the compatibility between the system and the demands of the task at hand: tasks that require deliberate processes benefit from System 2 reasoning, whereas tasks that require intuitive judgments benefit more from a System 1 thinking style.
On the other hand, other research suggests that deliberate and prolonged reasoning, which draws on most (if not all) of the available information in an exhaustive manner, may, under certain conditions, be more prone to judgmental errors. Examples are myopic loss aversion [93], i.e., the tendency to evaluate outcomes frequently; the effect of forgone payoffs [120]; and the Perceived Diversity Heuristic [121]. In all these cases, it has been shown that deliberate reasoning on additional information may lead to a decision that impairs maximization. Moreover, some cases suggest that this less rational behavior might result from deliberate noncompensatory considerations such as initial attraction and (over)generalizing rare outcomes [120].
In the following section, I provide initial support for the plausibility of the claim that the compensatory/noncompensatory classification represents within-system processes and that such a perspective can account for some of the discrepancies in the literature. In their work, Gigerenzer and colleagues [122] introduced a set of noncompensatory process models, like the Priority Heuristic [13] and Take The Best Heuristic [81], for preferences and inferences. As Brandstätter et al. [13] have argued, “The priority heuristic is intended to model both choice and process. It not only predicts the outcome but also specifies the order of priority, a stopping rule, and a decision rule.” (p. 427).
As process models, fast-and-frugal heuristics lend themselves to testable predictions concerning processes. Thus, as Ayal and Hochman [12] have suggested, examining these models vis-à-vis compensatory principles can highlight the nature of the cognitive processes underlying choice behavior.

6.1. Examining the Processes Underlying Risky Choices

To test compensatory against noncompensatory models, the models must make different predictions about choices. Thus, to examine the nature of the cognitive processes underlying decisions under risk, I juxtaposed predictions derived from a prototypical noncompensatory model, i.e., the Priority Heuristic [13], with alternative predictions derived from a simple compensatory model (i.e., a simple expected-value model).
The Priority Heuristic (PH) is a simple lexicographic model that describes the process by which people form preferences. This fast-and-frugal heuristic describes the process of choosing between two alternatives of the type “the probability p to win amount x, and the probability (1 − p) to win amount y” (X, p; Y, 1 − p). The PH suggests that three hierarchical rules govern this process (a minimal code sketch follows the list below):
Rule 1: If the difference between the minimum payoffs of the two options exceeds 10% of the maximum payoff (referred to as the aspiration level), select the option with the higher minimum payoff.
Rule 2: If Rule 1 does not apply, examine if the difference between the probabilities of the minimum payoffs exceeds 0.1. If it does, select the option with the lowest probability of obtaining the minimum payoff.
Rule 3: If neither Rules 1 nor 2 apply, choose the option with the higher maximum payoff.
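For concreteness, here is a minimal Python sketch of the three rules, restricted to simple two-outcome gain gambles. The gamble representation and the strict “exceeds” comparisons reflect my reading of the rules above rather than Brandstätter et al.’s original implementation.

```python
def priority_heuristic(a, b):
    """Choose between two-outcome gain gambles via the three rules above.
    A gamble is a dict {"min": ..., "p_min": ..., "max": ...}, where
    p_min is the probability of obtaining the minimum payoff."""
    # Rule 1: aspiration level = 10% of the maximum payoff on offer.
    aspiration = 0.1 * max(a["max"], b["max"])
    if abs(a["min"] - b["min"]) > aspiration:
        return a if a["min"] > b["min"] else b
    # Rule 2: compare the probabilities of the minimum payoffs.
    if abs(a["p_min"] - b["p_min"]) > 0.1:
        return a if a["p_min"] < b["p_min"] else b
    # Rule 3: fall back on the maximum payoff.
    return a if a["max"] > b["max"] else b

# A one-reason problem: the minimum payoffs differ by more than the
# aspiration level (0.1 * 500 = 50), so Rule 1 decides immediately.
g1 = {"min": 0, "p_min": 0.2, "max": 500}
g2 = {"min": 100, "p_min": 0.5, "max": 300}
assert priority_heuristic(g1, g2) is g2
```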
To examine the underlying processes of such choice behaviors, I analyze the response time (RT) and choice proportion (CP) in a replication of the choice problems used by Brandstätter et al. [13] to provide an empirical examination of the PH as a process model. The data are taken from Ayal and Hochman [12], but the analyses are novel. As suggested by Brandstätter et al. [13], I classified the sets into a 2 (one reason or three reasons examined) × 2 (gambles of similar or dissimilar expected value) mixed-factorial design. This classification enables the formulation of contradictory hypotheses for RT and CP, one fitting the PH and the other fitting compensatory principles.
Reaction time (RT). Noncompensatory models assume a sequential, limited search for information with a clear stopping rule: the search ends when the decision-maker finds a piece of information that distinguishes between the options. Thus, noncompensatory models predict that reaction time will increase with the amount of information (e.g., cues, reasons) people must examine. By contrast, compensatory models suggest that the decision-maker integrates all the relevant information. As a result, compensatory models predict that when more information integration is required, or when the integration of arguments leads to smaller differences between the alternatives (e.g., a small difference between the alternatives’ expected values), response time should be longer. Thus, I can make the following hypotheses:
Hypothesis 1.
RT_PH: Reaction time will be higher in the three-reason-examined choice problems than in the one-reason-examined choice problems.
Hypothesis 2.
RT_EV: Reaction time will be higher in the similar-expected-value choice problems relative to the dissimilar-expected-value choice problems.
Choice proportion (CP). In preference tasks, choice proportion describes the proportion of choices decision-makers make that aligns with the prediction of a specific choice model. Assuming that people can make processing errors on each step of their decision strategy, it can be argued that the earlier the examination is terminated, the fewer errors will be involved in the final decision. Therefore, noncompensatory models predict that when the examination is terminated after the first argument, the resulting choice proportion should be more in line with the strategy’s predictions (e.g., 80/20) than when termination occurs after the second reason (e.g., 70/30) (cf. [13]). Alternatively, if people use compensatory strategies, the choice proportion should be highest for decisions that are derived more easily (e.g., when the different arguments are better at distinguishing between the choices). Thus, I can make the following hypotheses:
Hypothesis 3.
CP_PH: The proportion of choices aligned with the PH will be higher in the one-reason-examined choice problems than in the three-reason-examined choice problems.
Hypothesis 4.
CP_EV: The proportion of choices aligned with the EV will be higher in the dissimilar-expected-value choice problems than in the similar-expected-value choice problems.
In the study of Ayal and Hochman [12], 50 undergraduates were instructed to make 20 choices between gambles of the sort $X, p; $Y, 1 − p. All possible outcomes were gains. The mean RT for the one-reason-examined choice problems was 12.31 s (SD = 8.34) when the two options had similar EVs and 10.03 s (SD = 5.40) when the two options had dissimilar EVs. Similarly, the mean RT for the three-reason-examined choice problems was 10.9 s (SD = 6.18) for similar-EV choices and 8.78 s (SD = 4.34) for dissimilar-EV choices. Repeated-measures analysis of variance (ANOVA) revealed a significant main effect both for the level of similarity between the EVs (F(1, 47) = 11.158, p < 0.001) and for the number of reasons examined (F(1, 47) = 4.178, p < 0.05). Importantly, the results for the number of reasons examined contradict the prediction of the PH. Specifically, the results demonstrate that it takes decision-makers longer to choose between alternatives that require one piece of information (according to the noncompensatory model) than three pieces, results that support Hypothesis RT_EV.
A similar pattern was observed for CP. For the one-reason-examined choice problems, the mean CP (i.e., the proportion of choices in line with the model’s predictions) was 0.55 (SD = 0.22) when the two options had similar EVs and 0.85 (SD = 0.19) when they had dissimilar EVs. Likewise, for the three-reason-examined choice problems, the mean CP was 0.6 (SD = 0.23) for similar-EV problems and 0.892 (SD = 0.18) for dissimilar-EV problems. Repeated-measures ANOVA revealed a significant main effect for the level of similarity between the expected values (F(1, 47) = 119.913, p < 0.001). However, there was no main effect for the number of reasons examined (F(1, 47) = 2.708, ns), nor was there a significant interaction between the two variables. Thus, again, we see support for the compensatory and not the noncompensatory model.
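For readers who wish to reproduce this style of analysis, a minimal Python sketch of a 2 × 2 repeated-measures ANOVA on simulated data follows. The cell means loosely echo the RTs reported above, but the noise level, sample size, and data themselves are placeholders, not the original dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# One simulated mean RT per participant per cell of the
# 2 (reasons examined) x 2 (EV similarity) within-subject design.
cells = [("one", "similar", 12.31), ("one", "dissimilar", 10.03),
         ("three", "similar", 10.90), ("three", "dissimilar", 8.78)]
rows = [{"pid": pid, "reasons": r, "ev": ev, "rt": rng.normal(mu, 2.0)}
        for pid in range(48) for r, ev, mu in cells]
df = pd.DataFrame(rows)

# Within-subject ANOVA with reasons examined and EV similarity as factors.
print(AnovaRM(data=df, depvar="rt", subject="pid",
              within=["reasons", "ev"]).fit())
```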
In summary, the analyses of Ayal and Hochman’s [12] data suggest that despite the value of noncompensatory principles, they do not capture the full extent and complexity of decision-making processes. The pattern of results obtained from the different measures (i.e., RT and CP) supports the idea that when forming preferences, people tend not to rely on limited information in a noncompensatory manner but rather to integrate all available information. Nevertheless, CP did not drop to chance level in the similar-expected-value choice problems (a one-sample t-test revealed that the aggregated mean CP across reasons examined was 0.57, t(47) = 3.476, p < 0.001). This could suggest that when integrating information does not help differentiate between the two options, people use additional strategies (either compensatory or not) to make their choice.
In this context, Brandstätter et al. [13] acknowledged these conclusions and admitted that their model could be better at predicting preferences if it assumed that people compute the EV of each option, take their ratio into account, and choose the option with the highest EV only if the ratio exceeds 2. The current results suggest that Brandstätter et al.’s intuition was correct. The coexistence of compensatory and noncompensatory principles may lead to better decision-making models [65] that help better understand complex decision-making processes [40].

6.2. Further Modeling Analysis

Although the existing data highlight the importance of accounting for both compensatory and noncompensatory principles, a more direct examination of the specific cognitive processes underlying intuitive and deliberate reasoning in these situations is in order. Nevertheless, I present an alternative process model based on the within-system compensatory/noncompensatory view of the two systems and the previously reported results. According to this (tentative) model, decision-makers begin their decision-making process with a preliminary intuitive compensatory process that integrates all or most available information. For example, this integration may be used to evaluate the expected values of the available options. The decision is made if this initial compensatory process provides clear evidence toward a specific option (e.g., points to an option with a substantially larger EV). If this is not the case and the initial process does not differentiate between the options, decision-makers make a rational compensatory selection among the many noncompensatory tools available at their disposal (i.e., heuristics and rules of thumb) to reach the best decision while investing minimal time and effort.
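A minimal Python sketch of this tentative two-stage model follows. The EV-ratio cutoff of 2 borrows Brandstätter et al.’s suggestion quoted above, and the fallback rule (choosing the higher maximum payoff) is one arbitrary stand-in for the many noncompensatory tools a decision-maker might select; neither choice is prescribed by the model itself.

```python
def expected_value(gamble):
    """Probability-weighted sum over (outcome, probability) pairs."""
    return sum(p * x for x, p in gamble)

def two_stage_choice(a, b, ratio_threshold=2.0):
    """Stage 1: an intuitive compensatory pass integrates all information.
    Stage 2: if the EVs do not clearly discriminate, fall back on a cheap
    noncompensatory rule (here, the higher maximum payoff)."""
    ev_a, ev_b = expected_value(a), expected_value(b)
    hi, lo = max(ev_a, ev_b), min(ev_a, ev_b)
    if lo > 0 and hi / lo >= ratio_threshold:  # clear compensatory evidence
        return a if ev_a > ev_b else b
    # Otherwise select a noncompensatory tool and decide cheaply.
    return a if max(x for x, _ in a) > max(x for x, _ in b) else b

easy = two_stage_choice([(200, 0.5), (0, 0.5)], [(40, 1.0)])  # EV ratio 2.5: stage 1 decides
hard = two_stage_choice([(200, 0.5), (0, 0.5)], [(90, 1.0)])  # EV ratio ~1.1: stage 2 decides
```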
To provide initial support for the proposed model, I applied a new analysis to the data of Ayal and Hochman [12]. Since intuitive reasoning is rapid (i.e., short), if it is a compensatory process, I can assume that in ‘easy’ choice problems (i.e., those with dissimilar expected values), short decision times will lead to a higher proportion of maximization (i.e., higher CP). This is because if the maximizing alternative is identified intuitively, any additional reasoning might add more noise to the decision, resulting in greater judgmental errors. On the other hand, if the intuitive mechanism for these kinds of choice problems is noncompensatory, I would expect it to be less efficient as the number of reasons examined increases (due to cumulative error). To examine these assertions, the RT results for only the ten dissimilar-EV choice problems were classified into a 2 (one reason or three reasons examined) × 2 (short or long response time) array. This enables us to make the following hypotheses:
Hypothesis 5.
INTUITIVE_PH: The proportion of choices aligned with the PH will be higher in the one-reason-examined choice problems than in the three-reason-examined choice problems, regardless of the response time.
Hypothesis 6.
INTUITIVE_EV: The proportion of choices aligned with the EV will be higher when the response time is short than when the response time is long.

Summary of the Results and Brief Discussion

Across participants, the mean CP for the one-reason-examined choice problems was 0.87 (SD = 0.35) when the response time was short (i.e., less than 5 s; M = 3, Md = 3.5) and decreased to 0.81 (SD = 0.37) when the response time was long (i.e., more than 10 s; M = 16.6, Md = 14.4). For the three-reason-examined choice problems, the mean P(maximizing) was 1.00 (SD = 0.00) for short response times and 0.75 (SD = 0.38) for long response times. A repeated-measures ANOVA was conducted to test the effects of the number of reasons examined and the response time on choice behavior. The ANOVA included a 2 × 2 within-participant design with P(maximizing) as the dependent variable. This analysis revealed a significant main effect for the response time (F(1, 7) = 3.723, p < 0.05 in a one-tailed test). No main effect for the number of reasons examined (F(1, 7) = 0.179, p = 0.67) was found, nor was there a significant interaction between the two variables (F(1, 7) = 0.396, p = 0.55).
The results of the current analysis support Hypothesis INTUITIVE_EV. Namely, the finding that judgment accuracy was higher for extremely rapid responses replicates previous results [34] and suggests that the complex compensatory integration of outcomes and probabilities was employed intuitively.
As mentioned, the current analysis is just an initial attempt to examine the proposed model. To examine the generality of these results, researchers need to collect additional data on description-based choice decisions. In addition, the predictions of the proposed model should be juxtaposed with other process models, such as the Decision Field Theory (DFT) [123]. The DFT assumes that at each moment in time, the decision-maker intuitively thinks about the various payoffs of each prospect and produces an affective reaction (i.e., valence) to each prospect accordingly. These valences are integrated across time to produce the preference state at each moment. A threshold controls the stopping rule for this process: the first prospect to reach the top threshold is chosen.
According to the DFT [123], higher thresholds necessitate reaching a stronger state of preference, which allows decision-makers to acquire more information about the possible options. This extends the deliberation process and enhances accuracy. In contrast, lower thresholds permit decisions based on weaker preference states, limiting information acquisition, thereby shortening the deliberation process and reducing accuracy (i.e., a tradeoff between speed and accuracy). The threshold is assumed to be low under high time pressure and high under low time pressure. In contrast, the proposed model suggests the opposite. Since intuitive reasoning is considered compensatory, the model assumes that decreasing accuracy results from acquiring additional information and a prolonged noncompensatory reasoning process.
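To make the contrast with the DFT concrete, here is a minimal sequential-sampling sketch in Python of the stopping rule described above. Proxying each momentary valence by the attended payoff’s advantage over the alternatives is a deliberate simplification, not the full model.

```python
import random

def dft_choice(prospects, threshold=20.0, max_steps=10_000):
    """Accumulate momentary valences until one prospect's preference
    state reaches the threshold (the DFT stopping rule sketched above).
    Each prospect is a list of (outcome, probability) pairs."""
    pref = [0.0] * len(prospects)
    for _ in range(max_steps):
        # Attend one payoff per prospect, sampled by its probability.
        sampled = [random.choices([x for x, _ in p],
                                  weights=[w for _, w in p])[0]
                   for p in prospects]
        mean_s = sum(sampled) / len(sampled)
        for i, x in enumerate(sampled):
            pref[i] += x - mean_s  # valence: advantage over the alternatives
        for i, state in enumerate(pref):
            if state >= threshold:
                return i  # first prospect to reach the top threshold wins
    return max(range(len(pref)), key=pref.__getitem__)

# Higher thresholds lengthen deliberation (more samples before stopping);
# lower thresholds shorten it, trading accuracy for speed.
choice = dft_choice([[(200, 0.5), (0, 0.5)], [(90, 1.0)]], threshold=20.0)
```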
To test the proposed model against the DFT [123], one might design a study in which participants face description-based decisions under risk. This study would use a within-subject factorial design with two factors, time limitation and the magnitude of the dissimilarity between the alternatives’ expected values, each with three levels (low, medium, and high). While the DFT predicts a reversal of opinion and decreasing accuracy as the time constraint increases, the proposed model predicts the opposite. Namely, the model assumes that rational reasoning is obtained very early; thus, any additional computation time will only add noise to the system, and this noise is predicted to impair reasoning.
Manipulating the magnitude of the dissimilarity between the expected values is offered to ensure that under the low time constraint condition, participants will still be forced to rely on deliberate reasoning. In addition, physiological data can be collected to examine dissociations between intuitive and deliberate processes.

7. Summary

The current review highlights the importance of a nuanced understanding of dual-system models, emphasizing the critical distinction between and within intuitive and deliberative processes. While the current review does not deal with complex decision models directly and focuses on the dual-system approach, it suggests that integrating complex cognition with dual-system theory can enrich its explanatory power by addressing real-world complexities often absent from controlled settings. Traditionally, dual-system models treat intuitive (System 1) and deliberate (System 2) processes as distinct and relatively fixed in their roles. However, frameworks like CPS [40] and DDM [41] reveal that intuitive and deliberate processes can adapt based on the complexity and immediacy of the decision context. For example, intuitive judgments in complex environments are not merely “quick and dirty” solutions. Rather, they are shaped by previous experiences and learned patterns, while deliberate reasoning may adaptively simplify to meet time or other environmental constraints.
This alignment suggests that intuitive and deliberate processes may not belong solely to distinct systems but could function within overlapping, flexible systems influenced by context, experience, and cognitive demands. Recognizing this adaptive capacity within dual-system models enriches our theoretical comprehension of cognitive mechanisms and significantly enhances decision-making models’ predictive capability. By exploiting the interplay between compensatory and noncompensatory processes within each system, the novel analyses reported here both resonate with and diverge from the existing literature, with the potential to provide new insight. As such, the current review might suggest that while some decision-making models adequately capture certain dimensions of cognitive processing, they often fall short of encapsulating the full spectrum of human complexity. If this is the case, judgment and decision-making scholars would benefit from a shift toward more granular analyses within cognitive psychology and decision research.
The current review may also point to other limitations of applying the dual-system approach to real-world scenarios, particularly regarding ecological and external validity. It might be argued that the structured, simplified dilemmas typically used in laboratory settings do not accurately reflect the complexity of decisions encountered in daily life, where factors like time constraints, emotional stakes, and social influences have real consequences [124,125]. Real-world decision-making often involves layered and dynamic considerations that are difficult to capture with compensatory or non-compensatory models alone, as these models generally assume a fixed set of criteria and outcomes [65]. Furthermore, ecological validity is compromised when decision-making scenarios lack contextual relevance, leading to choices that may not generalize beyond the lab [126]. Researchers have emphasized the need for decision models that account for adaptive, heuristic processes that align more closely with individuals’ intuitive responses in everyday settings [127,128]. By acknowledging these limitations, the dual-system framework can be viewed as a foundational yet partial representation of decision-making that requires further adaptation to better capture the complexities of real-life choices.
Of course, several limitations, like potential biases in the literature selection and interpretation, might qualify these conclusions. Moreover, empirical investigations with diverse methodologies across several contexts are crucial to validating and refining the proposed theoretical approach. Finally, integrating theoretical frameworks from decision-making theory, behavioral economics, psychology, and physiology presents unique challenges, particularly when bridging methodologies from the natural sciences and economic theory. Behavioral economics, for instance, offers abstract constructs (e.g., risk aversion and loss aversion), which serve as simplified representations of complex human behavior in economic contexts [68]. These constructs were designed to model reality at a high level, and as such, verifying or refuting them directly may appear to oversimplify nuanced psychological processes. However, examining physiological responses and behaviors in economic situations can provide a more comprehensive view of decision-making by revealing underlying cognitive and emotional mechanisms not always captured in economic models alone [129]. Incorporating insights from psychology and physiology offers a unique opportunity to explore how foundational economic principles manifest in real-time decision processes. While methodologically complex, this interdisciplinary approach seeks to enrich our understanding of decision-making by examining the interactions between economic rationality and physiological responses, adding depth to the traditionally separate fields of economic and psychological research [130]. By acknowledging these methodological distinctions, I wish to offer a multidisciplinary approach that leverages the strengths of each field while carefully navigating the limits of cross-disciplinary assumptions. Thus, this line of research could yield some important theoretical and practical implications, from improving behavioral interventions to crafting more effective marketing strategies.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created for this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Beach, D.; Pedersen, R.B. Process-Tracing Methods: Foundations and Guidelines; University of Michigan Press: Ann Arbor, MI, USA, 2019. [Google Scholar]
  2. Ford, J.K.; Schmitt, N.; Schechtman, S.L.; Hults, B.M.; Doherty, M.L. Process tracing methods: Contributions, problems, and neglected research problems. Organ. Behav. Hum. Decis. Process. 1989, 43, 75–117. [Google Scholar] [CrossRef]
  3. Isen, A.M.; Means, B. The influence of positive affect on decision-making strategy. Soc. Cogn. 1983, 2, 18–31. [Google Scholar] [CrossRef]
  4. Johnson, E.J. Expertise and decision under uncertainty: Performance and process. In The Nature of Expertise; Chi, M.T.H., Glaser, R., Farr, M.J., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 1988; pp. 209–228. [Google Scholar]
  5. Onken, J.; Hastie, R.; Revelle, W. Individual differences in the use of simplification strategies in a complex decision-making task. J. Exp. Psychol. Hum. Percept. Perform. 1985, 11, 14–27. [Google Scholar] [CrossRef]
  6. Witteman, C.; van Geenen, E. Cognitive process analysis. In Foundations for Tracing Intuition; Psychology Press: London, UK, 2009; pp. 53–68. [Google Scholar]
  7. Johnson, E.J.; Payne, J.W.; Bettman, J.R. Information displays and preference reversals. Organ. Behav. Hum. Decis. Process. 1988, 42, 1–21. [Google Scholar] [CrossRef]
  8. Klayman, J. Analysis of predecisional information search patterns. In Analyzing and Aiding Decision Processes; Humphreys, P., Svenson, O., Vari, A., Eds.; North-Holland: Amsterdam, The Netherlands, 1983. [Google Scholar]
  9. Payne, J.W.; Bettman, J.R.; Johnson, E.J. Adaptive strategy selection in decision making. J. Exp. Psychol. Learn. Mem. Cogn. 1988, 14, 534–552. [Google Scholar] [CrossRef]
  10. Johnson, E.J.; Schulte-Mecklenbeck, M.; Willemsen, M.C. Process models deserve process data: Comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychol. Rev. 2008, 115, 263–272. [Google Scholar] [CrossRef]
  11. Norman, E.; Schulte-Mecklenbeck, M. Take a quick click at that! Mouselab and eye-tracking as tools to measure intuition. In Foundations for Tracing Intuition; Psychology Press: London, UK, 2009; pp. 32–52. [Google Scholar]
  12. Ayal, S.; Hochman, G. Ignorance or integration: The cognitive processes underlying choice behavior. J. Behav. Decis. Mak. 2009, 22, 455–474. [Google Scholar] [CrossRef]
  13. Brandstätter, E.; Gigerenzer, G.; Hertwig, R. The priority heuristic: Making choices without trade-offs. Psychol. Rev. 2006, 113, 409–432. [Google Scholar] [CrossRef]
  14. Hochman, G.; Ayal, S.; Ariely, D. Fairness requires deliberation: The primacy of economic over social considerations. Front. Psychol. 2015, 6, 747. [Google Scholar] [CrossRef]
  15. Glöckner, A.; Betsch, T. Do people make decisions under risk based on ignorance? An empirical test of the priority heuristic against cumulative prospect theory. Organ. Behav. Hum. Decis. Process. 2008, 107, 75–95. [Google Scholar] [CrossRef]
  16. Hochman, G.; Glöckner, A.; Yechiam, E. Physiological measures in identifying decision strategies. In Foundations for Tracing Intuition; Psychology Press: London, UK, 2009; pp. 147–167. [Google Scholar]
  17. De Neys, W. Conflict detection, dual processes, and logical intuitions: Some clarifications. Think. Reason. 2014, 20, 169–187. [Google Scholar] [CrossRef]
  18. Epstein, S. Integration of the cognitive and psychodynamic unconscious. Am. Psychol. 1994, 49, 709–724. [Google Scholar] [CrossRef] [PubMed]
  19. Pennycook, G. A framework for understanding reasoning errors: From fake news to climate change and beyond. Adv. Exp. Soc. Psychol. 2023, 67, 131–208. [Google Scholar] [CrossRef]
  20. Sloman, S.A. The empirical case for two systems of reasoning. Psychol. Bull. 1996, 119, 3–22. [Google Scholar] [CrossRef]
  21. Stanovich, K.E.; West, R.F. Individual differences in reasoning: Implications for the rationality debate? In Heuristics and Biases: The Psychology of Intuitive Judgment; Gilovich, T., Griffin, D.W., Kahneman, D., Eds.; Cambridge University Press: Cambridge, UK, 2002; pp. 421–440. [Google Scholar]
  22. Müller-Lyer, F.C. Optische Urteilstäuschungen. Arch. Anat. Physiol. 1889, 2, 263–270. [Google Scholar]
  23. Kahneman, D.; Slovic, P.; Tversky, A. Judgment Under Uncertainty: Heuristics and Biases; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
  24. Yechiam, E.; Hochman, G. Losses as modulators of attention: Review and analysis of the unique effects of losses over gains. Psychol. Bull. 2013, 139, 497–518. [Google Scholar] [CrossRef]
  25. Kirkpatrick, L.A.; Epstein, S. Cognitive–experiential self-theory and subjective probability: Further evidence for two conceptual systems. J. Pers. Soc. Psychol. 1992, 63, 534–544. [Google Scholar] [CrossRef]
  26. Evans, J.S.B. In two minds: Dual-process accounts of reasoning. Trends Cogn. Sci. 2003, 7, 454–459. [Google Scholar] [CrossRef]
  27. Kahneman, D.; Frederick, S. Representativeness revisited: Attribute substitution in intuitive judgment. In Heuristics and Biases: The Psychology of Intuitive Judgment; Gilovich, T., Griffin, D.W., Kahneman, D., Eds.; Cambridge University Press: Cambridge, UK, 2002; pp. 49–81. [Google Scholar]
  28. Evans, J.S.B.T. Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol. 2008, 59, 255–278. [Google Scholar] [CrossRef]
  29. Acker, F. New findings on unconscious versus conscious thought in decision making: Additional empirical data and meta-analysis. Judgm. Decis. Mak. 2008, 3, 292–303. [Google Scholar] [CrossRef]
  30. Bago, B.; De Neys, W. Fast logic? Examining the time course assumption of dual process theory. Cognition 2017, 158, 90–109. [Google Scholar] [CrossRef] [PubMed]
  31. Bruine de Bruin, W.; Parker, A.M.; Fischhoff, B. Individual differences in adult decision-making competence. J. Pers. Soc. Psychol. 2007, 92, 938–956. [Google Scholar] [CrossRef] [PubMed]
  32. Davis, D.G.S.; Staddon, J.E.R.; Machado, A.; Palmer, R.G. The process of recurrent choice. Psychol. Rev. 1993, 100, 320–341. [Google Scholar] [CrossRef] [PubMed]
  33. De Neys, W.; Pennycook, G. Logic fast and slow: Advances in dual-process theorizing. Curr. Dir. Psychol. Sci. 2019, 28, 503–509. [Google Scholar] [CrossRef]
  34. Glöckner, A. Does intuition beat fast and frugal heuristics? A systematic empirical analysis. In Intuition in Judgment and Decision Making; Plessner, H., Betsch, C., Betsch, T., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2007; pp. 309–325. [Google Scholar]
  35. Glöckner, A.; Herbold, A.K. An eye-tracking study on information processing in risky decisions: Evidence for compensatory strategies based on automatic processes. J. Behav. Decis. Mak. 2011, 24, 71–98. [Google Scholar] [CrossRef]
  36. Usher, M.; Russo, Z.; Weyers, M.; Brauner, R.; Zakay, D. The impact of the mode of thought in complex decisions: Intuitive decisions are better. Front. Psychol. 2011, 2, 37. [Google Scholar] [CrossRef]
  37. De Neys, W. On dual-and single-process models of thinking. Perspect. Psychol. Sci. 2021, 16, 1412–1427. [Google Scholar] [CrossRef]
  38. Keren, G.; Schul, Y. Two is not always better than one: A critical evaluation of two-system theories. Perspect. Psychol. Sci. 2009, 4, 533–550. [Google Scholar] [CrossRef]
  39. Melnikoff, D.E.; Bargh, J.A. The mythical number two. Trends Cogn. Sci. 2018, 22, 280–293. [Google Scholar] [CrossRef]
  40. Dörner, D.; Funke, J. Complex problem solving: What it is and what it is not. Front. Psychol. 2017, 8, 1153. [Google Scholar] [CrossRef]
  41. Cronin, M.A.; Gonzalez, C.; Sterman, J.D. Why don’t well-educated adults understand accumulation? A challenge to researchers, educators, and citizens. Organ. Behav. Hum. Decis. Process. 2009, 108, 116–130. [Google Scholar] [CrossRef]
  42. Funke, J. Complex problem solving: A case for complex cognition? Cogn. Process. 2010, 11, 133–142. [Google Scholar] [CrossRef] [PubMed]
  43. Herde, C.N.; Wüstenberg, S.; Greiff, S. Assessment of complex problem solving: What we know and what we don’t know. Appl. Meas. Educ. 2016, 29, 265–277. [Google Scholar] [CrossRef]
  44. Wüstenberg, S.; Greiff, S.; Funke, J. Complex problem solving—More than reasoning? Intelligence 2012, 40, 1–14. [Google Scholar] [CrossRef]
  45. Sloman, S.A. Two systems of reasoning. In Heuristics and Biases: The Psychology of Intuitive Judgment; Gilovich, T., Griffin, D.W., Kahneman, D., Eds.; Cambridge University Press: Cambridge, UK, 2002; pp. 379–398. [Google Scholar]
  46. Chaiken, S. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J. Pers. Soc. Psychol. 1980, 39, 752–766. [Google Scholar] [CrossRef]
  47. Epstein, S.; Pacini, R. Some basic issues regarding dual-process theories from the perspective of cognitive-experiential theory. In Dual-Process Theories in Social Psychology; Chaiken, S., Trope, Y., Eds.; Guilford Press: New York, NY, USA, 1999; pp. 462–482. [Google Scholar]
  48. Hammond, K.R. Human Judgment and Social Policy; Oxford University Press: Oxford, UK, 1996. [Google Scholar]
  49. Lieberman, M.D. Reflective and reflexive judgment processes: A social cognitive neuroscience approach. In Social Judgments: Implicit and Explicit Processes; Forgas, J.P., Williams, K.R., von Hippel, W., Eds.; Cambridge University Press: Cambridge, UK, 2003; pp. 44–67. [Google Scholar]
  50. Smith, E.R.; DeCoster, J. Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Pers. Soc. Psychol. Rev. 2000, 4, 108–131. [Google Scholar] [CrossRef]
  51. Epstein, S. Intuition from the perspective of cognitive-experiential self-theory. In Intuition in Judgment and Decision Making; Plessner, H., Betsch, C., Betsch, T., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2007; pp. 23–37. [Google Scholar]
  52. Finucane, M.L.; Alhakami, A.; Slovic, P.; Johnson, S.M. The affect heuristic in judgments of risks and benefits. J. Behav. Decis. Mak. 2000, 13, 1–17. [Google Scholar] [CrossRef]
  53. Agnoli, F. Development of judgmental heuristics and logical reasoning: Training counteracts the representativeness heuristic. Cogn. Dev. 1991, 6, 195–217. [Google Scholar] [CrossRef]
  54. Isen, A.M.; Nygren, T.E.; Ashby, F.G. Influence of positive affect on the subjective utility of gains and losses: It is just not worth the risk. J. Pers. Soc. Psychol. 1988, 55, 710–717. [Google Scholar] [CrossRef]
  55. Krava, L.A.; Ayal, S.; Hochman, G. Time is money: The effect of mode-of-thought on financial decision-making. Front. Psychol. 2021, 12, 735823. [Google Scholar] [CrossRef]
  56. Bago, B.; Rand, D.G.; Pennycook, G. Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. J. Exp. Psychol. Gen. 2020, 149, 1608–1613. [Google Scholar] [CrossRef] [PubMed]
  57. Goel, V.; Dolan, R.J. Explaining modulation of reasoning by belief. Cognition 2003, 87, B11–B22. [Google Scholar] [CrossRef] [PubMed]
  58. Goel, V.; Buchel, C.; Frith, C.; Dolan, R.J. Dissociation of mechanisms underlying syllogistic reasoning. Neuroimage 2000, 12, 504–514. [Google Scholar] [CrossRef] [PubMed]
  59. Lieberman, M.D. The X- and C-systems. In Social Neuroscience: Integrating Biological and Psychological Explanations of Social Behavior; Harmon-Jones, E., Winkielman, P., Eds.; Guilford Press: New York, NY, USA, 2007; pp. 290–315. [Google Scholar]
  60. Kaufman, L.; Rock, I. The moon illusion I. Science 1962, 136, 1023–1031. [Google Scholar] [CrossRef]
  61. Denes-Raj, V.; Epstein, S.; Cole, J. The generality of the ratio-bias phenomenon. Pers. Soc. Psychol. Bull. 1995, 21, 1083–1092. [Google Scholar] [CrossRef]
  62. Kahneman, D. A perspective on judgment and choice: Mapping bounded rationality. Am. Psychol. 2003, 58, 697–720. [Google Scholar] [CrossRef]
  63. Einhorn, H.J. Use of nonlinear, noncompensatory models as a function of task and amount of information. Organ. Behav. Hum. Perform. 1971, 6, 1–27. [Google Scholar] [CrossRef]
  64. Elrod, T.; Johnson, R.D.; White, J. A new integrated model of noncompensatory and compensatory decision strategies. Organ. Behav. Hum. Decis. Process. 2004, 95, 1–19. [Google Scholar] [CrossRef]
  65. Payne, J.W.; Bettman, J.R.; Johnson, E.J. The Adaptive Decision Maker; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  66. Von Gunten, C.D.; Scherer, L.D. Self–other differences in multiattribute decision making: Compensatory versus noncompensatory decision strategies. J. Behav. Decis. Mak. 2019, 32, 109–123. [Google Scholar] [CrossRef]
  67. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
  68. Kahneman, D.; Tversky, A. Prospect theory: An analysis of decision under risk. Econometrica 1979, 47, 263–291. [Google Scholar] [CrossRef]
  69. Tversky, A.; Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
  70. Tversky, A. Elimination by aspects: A theory of choice. Psychol. Rev. 1972, 79, 281–299. [Google Scholar] [CrossRef]
  71. Fishburn, P.C. Lexicographic orders, utilities, and decision rules: A survey. Manag. Sci. 1974, 20, 1442–1472. [Google Scholar] [CrossRef]
  72. Bröder, A. Take the best, Dawes’ rule, and compensatory decision strategies: A regression-based classification method. Qual. Quant. 2002, 36, 219–238. [Google Scholar] [CrossRef]
  73. Bröder, A.; Schiffer, S. “Take the best” versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. J. Exp. Psychol. Gen. 2003, 132, 277–293. [Google Scholar] [CrossRef]
  74. Rieskamp, J.; Otto, P.E. SSL: A theory of how people learn to select strategies. J. Exp. Psychol. Gen. 2006, 135, 207–236. [Google Scholar] [CrossRef]
  75. Bröder, A. Assessing the empirical validity of the “Take-The-Best” heuristic as a model of human probabilistic inference. J. Exp. Psychol. Learn. Mem. Cogn. 2000, 26, 1332–1346. [Google Scholar] [CrossRef]
  76. Gigerenzer, G.; Goldstein, D.G. Reasoning the fast and frugal way: Models of bounded rationality. Psychol. Rev. 1996, 103, 650–669. [Google Scholar] [CrossRef] [PubMed]
  77. Erev, I.; Ert, E.; Roth, A.E.; Haruvy, E.; Herzog, S.M.; Hau, R.; Hertwig, R.; Stewart, T.; West, R.; Lebiere, C. A choice prediction competition: Choices from experience and from description. J. Behav. Decis. Mak. 2010, 23, 15–47. [Google Scholar] [CrossRef]
  78. Hoerl, C.; McCormack, T. Thinking in and about time: A dual systems perspective on temporal cognition. Behav. Brain Sci. 2019, 42, e244. [Google Scholar] [CrossRef]
  79. Tversky, A.; Kahneman, D. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychol. Rev. 1983, 90, 293–315. [Google Scholar] [CrossRef]
  80. Risen, J.L. Believing what we do not believe: Acquiescence to superstitious beliefs and other powerful intuitions. Psychol. Rev. 2016, 123, 182–207. [Google Scholar] [CrossRef] [PubMed]
  81. Schurr, A.; Erev, I. The effect of base rate careful analysis and the distinction between decisions from experience and from description. Behav. Brain Sci. 2007, 30, 281–282. [Google Scholar] [CrossRef]
  82. Erev, I.; Ert, E.; Yechiam, E. Loss aversion, diminishing sensitivity, and the effect of experience on repeated decisions. J. Behav. Decis. Mak. 2008, 21, 575–597. [Google Scholar] [CrossRef]
  83. Ert, E.; Erev, I. On the descriptive value of loss aversion in decisions under risk: Six clarifications. Judgm. Decis. Mak. 2013, 8, 214–235. [Google Scholar] [CrossRef]
  84. Hochman, G.; Yechiam, E. Loss aversion in the eye and in the heart: The autonomic nervous system’s responses to losses. J. Behav. Decis. Mak. 2011, 24, 140–156. [Google Scholar] [CrossRef]
  85. Yechiam, E.; Ashby, N.J.; Hochman, G. Are we attracted by losses? Boundary conditions for the approach and avoidance effects of losses. J. Exp. Psychol. Learn. Mem. Cogn. 2019, 45, 591–605. [Google Scholar] [CrossRef]
  86. Iani, C.; Gopher, D.; Lavie, P. Effects of task difficulty and invested mental effort on peripheral vasoconstriction. Psychophysiology 2004, 41, 789–798. [Google Scholar] [CrossRef]
  87. Ganster, D.C.; Crain, T.L.; Brossoit, R.M. Physiological measurement in the organizational sciences: A review and recommendations for future use. Annu. Rev. Organ. Psychol. Organ. Behav. 2018, 5, 267–293. [Google Scholar] [CrossRef]
  88. Kahneman, D. Attention and Effort; Prentice-Hall: Hoboken, NJ, USA, 1973. [Google Scholar]
  89. Nieuwenhuis, S.; Aston-Jones, G.; Cohen, J.D. Decision making, the P3, and the locus coeruleus-norepinephrine system. Psychol. Bull. 2005, 131, 510–532. [Google Scholar] [CrossRef]
  90. Bechara, A.; Damasio, H.; Tranel, D.; Damasio, A.R. Deciding advantageously before knowing the advantageous strategy. Science 1997, 275, 1293–1295. [Google Scholar] [CrossRef] [PubMed]
  91. Hochman, G.; Glöckner, A.; Fiedler, S.; Ayal, S. “I can see it in your eyes”: Biased processing and increased arousal in dishonest responses. J. Behav. Decis. Mak. 2016, 29, 322–335. [Google Scholar] [CrossRef]
  92. Yechiam, E.; Telpaz, A.; Hochman, G. The complaint bias in subjective evaluations of incentives. Decision 2014, 1, 147–159. [Google Scholar] [CrossRef]
  93. Goldstein, D.G.; Gigerenzer, G. Models of ecological rationality: The recognition heuristic. Psychol. Rev. 2002, 109, 75–90. [Google Scholar] [CrossRef]
  94. Volz, K.G.; von Cramon, D.Y. Can neuroscience tell a story about intuition? In Intuition in Judgment and Decision Making; Plessner, H., Betsch, C., Betsch, T., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2007; pp. 71–87. [Google Scholar]
  95. McCraty, R.; Atkinson, M.; Bradley, R.T. Electrophysiological evidence of intuition: Part I. The surprising role of the heart. J. Altern. Complement. Med. 2004, 10, 133–143. [Google Scholar] [CrossRef] [PubMed]
  96. Camerer, C.F.; Babcock, L.; Loewenstein, G.; Thaler, R.H. Labor supply of New York City cabdrivers: One day at a time. Q. J. Econ. 1997, 112, 407–441. [Google Scholar] [CrossRef]
  97. Brown, A.L.; Imai, T.; Vieider, F.M.; Camerer, C.F. Meta-analysis of empirical estimates of loss aversion. J. Econ. Lit. 2024, 62, 485–516. [Google Scholar] [CrossRef]
  98. Thaler, R.H.; Tversky, A.; Kahneman, D.; Schwartz, A. The effect of myopia and loss aversion on risk-taking: An experimental test. Q. J. Econ. 1997, 112, 647–661. [Google Scholar] [CrossRef]
  99. Tom, S.M.; Fox, C.R.; Trepel, C.; Poldrack, R.A. The neural basis of loss aversion in decision-making under risk. Science 2007, 315, 515–518. [Google Scholar] [CrossRef]
  100. Benartzi, S.; Thaler, R. Myopic loss aversion and the equity premium puzzle. Q. J. Econ. 1995, 110, 73–92. [Google Scholar] [CrossRef]
  101. Samuelson, W.; Zeckhauser, R. Status quo bias in decision making. J. Risk Uncertain. 1988, 1, 7–59. [Google Scholar] [CrossRef]
  102. Kermer, D.A.; Driver-Linn, E.; Wilson, T.D.; Gilbert, D.T. Loss aversion is an affective forecasting error. Psychol. Sci. 2006, 17, 649–653. [Google Scholar] [CrossRef] [PubMed]
  103. Levin, I.P.; Hart, S.S. Risk preferences in young children: Early evidence of individual differences in reaction to potential gains and losses. J. Behav. Decis. Mak. 2003, 16, 397–413. [Google Scholar] [CrossRef]
  104. Ahern, S.K.; Beatty, J. Physiological signs of information processing vary with intelligence. Science 1979, 205, 1289–1292. [Google Scholar] [CrossRef] [PubMed]
  105. Ahern, S.K.; Beatty, J. Physiological evidence that demand for processing capacity varies with intelligence. In Intelligence and Learning; Friedman, M., Dos, J.P., O’Connor, N., Eds.; Plenum Press: New York, NY, USA, 1981; pp. 121–128. [Google Scholar]
  106. Beatty, J. Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol. Bull. 1982, 91, 276–292. [Google Scholar] [CrossRef]
  107. Bradshaw, J.L. Pupil size and problem solving. Q. J. Exp. Psychol. 1968, 20, 116–122. [Google Scholar] [CrossRef]
  108. Goldwater, B.C. Psychological significance of pupillary movements. Psychol. Bull. 1972, 77, 340–355. [Google Scholar] [CrossRef]
  109. Landers, R.N.; Sanchez, D.R. Game-based, gamified, and gamefully designed assessments for employee selection: Definitions, distinctions, design, and validation. Int. J. Sel. Assess. 2022, 30, 1–13. [Google Scholar] [CrossRef]
  110. Béchard, B.; Hodgetts, H.; Morneau-Guérin, F.; Ouimet, M.; Tremblay, S. Political complexity and the pervading role of ideology in policy-making. J. Dyn. Decis. Mak. 2023, 9, 121–128. [Google Scholar] [CrossRef]
  111. Lamberts, K.; Brockdorff, N.; Heit, E. Feature sampling and random walk models of individual stimulus recognition. J. Exp. Psychol. Gen. 2003, 132, 351–378. [Google Scholar] [CrossRef]
  112. Read, D.; Loewenstein, G. The diversification bias: Explaining the difference between prospective and real-time taste for variety. J. Exp. Psychol. Appl. 1995, 1, 34–49. [Google Scholar] [CrossRef]
  113. Schnall, R.P.; Shlitner, A.; Sheffy, J.; Kedar, R.; Lavie, P. Periodic profound peripheral vasoconstriction: A new marker of obstructive sleep apnea. Sleep 1999, 22, 939–946. [Google Scholar] [CrossRef] [PubMed]
  114. Pillar, G.; Bar, A.; Schnall, R.; Shefy, J.; Lavie, P. Autonomic arousal index: An automated detection based on peripheral arterial tonometry. Sleep 2002, 25, 541–547. [Google Scholar] [CrossRef]
  115. Ayal, S.; Zakay, D.; Hochman, G. Deliberative adjustments of intuitive anchors: The case of diversification behavior. Synthese 2012, 189, 131–145. [Google Scholar] [CrossRef]
  116. Brusovansky, M.; Glickman, M.; Usher, M. Fast and effective: Intuitive processes in complex decisions. Psychon. Bull. Rev. 2018, 25, 1542–1548. [Google Scholar] [CrossRef]
  117. Krava, L.A.; Ayal, S.; Hochman, G. Time is money: The advantages of quick and intuitive financial decision-making. In Behavioral Finance: The Coming of Age; Plessner, H., Betsch, C., Betsch, T., Eds.; Psychology Press: London, UK, 2019; pp. 37–56. [Google Scholar]
  118. Hochman, G.; Erev, I. The partial-reinforcement extinction effect and the contingent-sampling hypothesis. Psychon. Bull. Rev. 2013, 20, 1336–1342. [Google Scholar] [CrossRef]
  119. Ayal, S.; Rusou, Z.; Zakay, D.; Hochman, G. Determinants of judgment and decision-making quality: The interplay between information processing style and situational factors. Front. Psychol. 2015, 6, 1088. [Google Scholar] [CrossRef] [PubMed]
  120. Grosskopf, B.; Erev, I.; Yechiam, E. Foregone with the wind: Indirect payoff information and its implications for choice. Int. J. Game Theory 2006, 34, 285–302. [Google Scholar] [CrossRef]
  121. Ayal, S.; Zakay, D. The perceived diversity heuristic: The case of pseudodiversity. J. Pers. Soc. Psychol. 2009, 96, 559–573. [Google Scholar] [CrossRef]
  122. Gigerenzer, G.; Todd, P.M.; The ABC Research Group. Simple Heuristics That Make Us Smart; Oxford University Press: Oxford, UK, 1999. [Google Scholar]
  123. Busemeyer, J.R.; Townsend, J.T. Decision field theory: A dynamic cognition approach to decision making. Psychol. Rev. 1993, 100, 432–459. [Google Scholar] [CrossRef]
  124. Gigerenzer, G.; Gaissmaier, W. Heuristic decision making. Annu. Rev. Psychol. 2011, 62, 451–482. [Google Scholar] [CrossRef] [PubMed]
  125. Kahneman, D. Thinking, Fast and Slow; Farrar, Straus and Giroux: New York, NY, USA, 2011. [Google Scholar]
  126. Todd, P.M.; Gigerenzer, G. Environments that make us smart: Ecological rationality. Curr. Dir. Psychol. Sci. 2007, 16, 167–171. [Google Scholar] [CrossRef]
  127. Klein, G. Naturalistic decision making. Hum. Factors 2008, 50, 456–460. [Google Scholar] [CrossRef] [PubMed]
  128. Simon, H.A. Invariants of human behavior. Annu. Rev. Psychol. 1990, 41, 1–19. [Google Scholar] [CrossRef] [PubMed]
  129. Lo, A.W.; Repin, D.V. The psychophysiology of real-time financial risk processing. J. Cogn. Neurosci. 2002, 14, 323–339. [Google Scholar] [CrossRef]
  130. Camerer, C.; Loewenstein, G.; Prelec, D. Neuroeconomics: How neuroscience can inform economics. J. Econ. Lit. 2005, 43, 9–64. [Google Scholar] [CrossRef]