Article

Situation Awareness-Based Safety Assessment Method for Human–Autonomy Interaction Process Considering Anchoring and Omission Biases

by Shengkui Zeng, Qidong You, Jianbin Guo * and Haiyang Che
School of Reliability and Systems Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(1), 158; https://doi.org/10.3390/jmse13010158
Submission received: 29 December 2024 / Revised: 13 January 2025 / Accepted: 16 January 2025 / Published: 17 January 2025
(This article belongs to the Section Ocean Engineering)

Abstract:
Autonomy is increasingly used in domains such as maritime, aviation, medicine, and civil engineering. Nevertheless, at the current autonomy level, human takeover in the human–autonomy interaction process (HAIP) is still critical for safety. Whether humans take over relies on situation awareness (SA) about the correctness of autonomy decisions, which is distorted by anchoring and omission biases. Specifically, (i) anchoring bias (the tendency to confirm a prior opinion) causes imperception of key information and miscomprehension of the correctness of autonomy decisions; (ii) omission bias (the tendency toward inaction) causes overestimation of the predicted loss caused by takeover. This paper proposes a novel HAIP safety assessment method considering the effects of these biases. First, an SA-based takeover decision model (SAB-TDM) is proposed. In SAB-TDM, SA perception and comprehension affected by anchoring bias are quantified with Adaptive Control of Thought-Rational (ACT-R) theory and the Anchoring Adjustment Model (AAM); behavioral utility prediction affected by omission bias is quantified with Prospect Theory. Second, guided by SAB-TDM, a dynamic Bayesian network is used to assess HAIP safety. A case study on autonomous ship collision avoidance verifies the effectiveness of the method. Results show that the two biases mutually reinforce each other and seriously threaten HAIP safety.

1. Introduction

1.1. Background

Nowadays, autonomy systems have been widely applied and have succeeded in improving efficiency in domains such as medicine, civil engineering, aviation, and maritime transport [1]. However, autonomy technologies are not yet mature and have caused serious accidents, such as the crashes of the Boeing 737 MAX. Learning from other industries, most maritime autonomy concepts consider human–autonomy interaction (HAI) as a means of compensating for the defects of autonomy and ensuring safety [2]. During the HAI process (HAIP), a human in a Shore Control Centre (SCC) is required to act as a supervisor and initiate takeover when an autonomy decision is incorrect [3]. Their situation awareness (SA) about the correctness of autonomy decisions is the primary rationale of takeover decisions [4]. A well-accepted model by Endsley [5] describes SA as the perception of elements in the environment, the comprehension of their meaning, and the projection of their status. This paper adheres to this definition and specifies SA about the correctness of an autonomy decision as the perception of information reflecting the autonomy decision process, comprehension of the correctness of the autonomy decision, and prediction of future states if the takeover behavior or the autonomy decision were executed. Based on SA prediction, humans can also predict and compare the utility of takeover and autonomy decisions. In the maritime domain, some capabilities of autonomy have been explored to support SA, such as autonomy transparency, which exposes the autonomy decision process to humans [1,6]. Nevertheless, humans seem not yet adapted to the role change from manipulators to supervisors, and correct takeover is difficult for them [2]. This can be attributed to cognitive biases in the takeover decision process. (Cognitive biases are systematic and universally occurring tendencies, inclinations, or dispositions that distort the cognitive process and adversely affect decision making [7]. In existing studies, cognitive biases have been widely found in the fields of management, medical treatment, and emergency response [8,9,10]. Commonly discussed cognitive biases include anchoring bias, omission bias, availability bias, base rate neglect, gambler's fallacy, hindsight bias, and others [11].)
During HAIP in risk scenarios, humans, such as operators in the SCC, can in principle achieve correct SA and takeover decisions based on transparency information and information from other sources (e.g., cameras, GPS, and many other types of sensors onboard the ship) [12]. However, in critical or urgent risk scenarios, humans tend to use simple and quick heuristic cognitive mechanisms, which introduce cognitive biases into takeover decisions [13]. Among the many cognitive biases, anchoring bias and omission bias are two common types that affect SA and takeover decisions [14]. Anchoring bias is the phenomenon in which humans' preference for a previous opinion (i.e., an anchor) makes them ignore information disproving the anchor and keep their comprehension consistent with it [15]. During HAIP, if the anchor is the belief that an autonomy decision is correct, anchoring bias leads to varying degrees of neglect of transparency information and biases comprehension of the correctness of the autonomy decision, which makes humans overlook errors in the autonomy decision. Omission bias is the phenomenon in which a human is more averse to loss caused by action than to loss caused by inaction and thus prefers inaction [16]. During HAIP, omission bias makes humans more averse to the loss caused by takeover than to the loss caused by autonomy decisions, which leads to overestimation of the former and makes humans tend not to take over. Therefore, when anchoring bias and omission bias occur together in HAIP, they generate a compounded negative effect that prevents correct takeover. This paper aims to develop a safety assessment method for HAIP considering anchoring and omission biases. The proposed method is designed for use during the design and development phases of human–autonomy systems. It is intended to assist designers in evaluating the risk-avoidance capabilities of autonomy systems when integrated with human operations and to provide technical support for improving system design.

1.2. Literature Review

The effects of anchoring bias and omission bias have been qualitatively explored in many studies. According to Endsley [15], anchoring bias guides perception and comprehension in a way that confirms prior opinions. If prior opinions are incorrect, this tendency negatively affects subsequent SA [17]. An experiment by Walmsley and Gilbey [10] demonstrated that operators affected by anchoring bias tend to ignore information disproving previous opinions and misinterpret newly perceived evidence. Regret theory [18] and loss-aversion theory [9,19] are often used to explain the mechanism of omission bias. These theories hold that omission bias leads to a more pessimistic subjective cognition of the predicted loss caused by action and hence its overestimation. The above studies indicate that anchoring bias and omission bias are critical causes of SA errors. Given that SA is the critical rationale of correct takeover decisions, it is important to incorporate the effects of anchoring bias and omission bias into HAIP safety assessment.
Many methods have been developed to assess HAIP safety. In studies related to maritime autonomous surface ships (MASSs), traditional risk assessment methods are often used, including fault tree analysis (FTA) [20], Bayesian networks (BNs) [21], Systems Theoretic Process Analysis (STPA) [22], and others. For example, Li et al. [23] used FTA to model the causal relationships among risk factors and mapped the FTA into a BN to assess the risk of a MASS. Zhang et al. [24] used hybrid causal logic (HCL), which integrates FTA and BN, to quantify the accident probability of a MASS. Li et al. [25] utilized STPA to analyze the risk of MASS navigation under the human–machine co-driving mode. Sumon et al. [26] used STPA to analyze the safety of a hydrogen-driven MASS. The above methods focused on the causal relationships between accidents and factors like autonomy failures or human errors, and aimed to find critical risk-influencing factors or propagation paths. They successfully assess HAIP safety at the system level but are inadequate for digging into the takeover decision process to analyze the mechanisms of accidents.
In studies related to human–robot collaboration and autonomous vehicles, HAIP safety assessment is often performed based on trust quantification to analyze the mechanism of incorrect takeover decisions. These studies used trust as the main rationale of takeover decisions and treated it as the mediator between HAIP safety and its affecting factors [27]. For example, Yang et al. [28] treated trust as the probability of accepting autonomy and established a hidden Markov model to assess HAIP safety; in their model, the trust update affected by autonomy reliability was quantified with a Beta distribution [29]. Moreover, some research utilized decision field theory to quantify the update of trust and self-confidence and predicted the occurrence of takeover when their difference falls below a given threshold [30,31,32]. Based on this method, Gao and Lee [33] analyzed the effects of transparency levels on trust and HAIP safety. However, the above methods ignored SA, which is a main rationale for takeover decisions.
The key to solving this problem is to quantitatively model SA and incorporate it into the takeover decision model. As SA is a widely used framework for describing human cognition, its quantification has been extensively researched. For example, the well-known Adaptive Control of Thought-Rational (ACT-R) theory quantifies the perception probability of information with its activation level [34]. Zhao and Smidts [35] extended the activation level of information to two dimensions: its saliency and its value. To quantify SA comprehension, Bayesian networks, which have an advantage in encoding human knowledge, have been used to model the human reasoning process [36]. For example, Naderpour et al. [37,38] used a Bayesian network to construct operators' mental model and quantified their comprehension of the system state. To quantify behavioral utility prediction based on SA, Prospect Theory is a commonly used method [39]; it is often applied in autonomous systems to make them predict the utility of behaviors or decisions like humans do [40,41].

1.3. Gaps and Contribution

Quantitative SA modeling methods provide a chance to overcome the problem that trust-based HAIP safety assessment methods ignore SA. Nevertheless, existing quantitative modeling methods neglect the effects of anchoring bias on SA perception and comprehension, as well as the irrational preference in behavioral utility prediction caused by omission bias. Qualitative research has clarified how anchoring bias and omission bias lead to incorrect takeover decisions, providing sufficient theoretical guidance to fill the above gaps.
This paper proposes a novel HAIP safety assessment method, which considers the effects of anchoring bias and omission bias. Firstly, we investigate the HAIP affected by anchoring bias and omission bias, and identify its characteristics. Based on the analysis, an SA-based takeover decision model (SAB-TDM) is developed to analytically describe the takeover decision based on SA. In SAB-TDM, SA perception and comprehension affected by anchoring bias are represented by ACT-R theory [42] and the Anchoring Adjustment Model (AAM) [43], respectively. Meanwhile, Prospect Theory [44] is used to describe the behavioral utility prediction affected by omission bias. Then, a dynamic Bayesian network (DBN) is used to model HAIP and assess safety, due to its ability to encode human knowledge and describe time dependencies [45,46]. The structure establishment and parameter determination of DBN are supported by the proposed SAB-TDM. The applicability condition of our method is that the autonomy system interacts with a single operator provided with multiple information sources and the takeover decision is made by this operator only. A case study on multi-round collision avoidance for an autonomous ship is used to illustrate the effectiveness of the proposed method.
The contributions and innovations of this paper are as follows: (1) this paper provides a new perspective to analyze HAIP safety by incorporating SA about correctness of autonomy decisions into takeover decisions; (2) to the best of the authors’ knowledge, this is the first work to consider anchoring bias and omission bias in HAIP safety assessment; (3) the SAB-TDM is proposed to quantify SA perception and comprehension affected by anchoring bias with ACT-R and AAM, and to quantify utility prediction affected by omission bias with Prospect Theory; and (4) a DBN-based method is proposed to model the HAIP and assess its safety.

1.4. Article Structure

The remainder of this paper is divided into five sections. Section 2 describes the research problem and the framework of the method. The proposed method is detailed in Section 3 and Section 4: Section 3 proposes the SAB-TDM considering anchoring and omission biases, and Section 4 develops the DBN-based method to model HAIP and assess its safety. Section 5 presents the method application and result analysis. Finally, Section 6 presents the major conclusions of this paper.

2. Research Problem and Framework of Method

This section first analyzes the characteristics of HAIP affected by anchoring bias and omission bias and then introduces the framework of the proposed method. In HAIP, based on transparency information and information from other sources (e.g., cameras, GPS, and many other types of sensors onboard), humans could achieve correct SA and takeover decisions. However, anchoring bias leads to neglect of transparency information and miscomprehension of the correctness of autonomy decisions, and omission bias makes humans overestimate the loss caused by takeover. These two cognitive biases jointly impede correct takeover decisions in HAIP. Based on these characteristics, the SAB-TDM is developed to support the construction of a DBN, which is used to model HAIP and assess its safety.

2.1. Research Problem Statement

HAIP in risk scenarios is shown in Figure 1. An autonomy system first makes a risk-avoidance decision. At the same time, humans achieve SA about the correctness of the autonomy decision. This SA supports the takeover decision, i.e., whether to take over. If humans do not take over within a defined time, the autonomy system automatically executes its risk-avoidance decision. When the autonomy decision is incorrect, human takeover is critical to ensure safety. However, anchoring bias leads to errors in SA, and omission bias distorts the takeover decision, both of which impede correct takeover.
To support SA and the takeover decision, autonomy transparency is developed to expose the autonomy decision process to humans. This paper follows the framework of the Situation awareness-based Agent Transparency (SAT) model [47] and specializes it to meet the needs of HAIP in risk scenarios. Specifically, SAT information includes SAT1, autonomy’s inputs and the proposed actions; SAT2, autonomy’s reasoning of current risk; and SAT3, autonomy’s utility prediction for its risk-avoidance decision. The above SAT information effectively supports HAIP.
To complete risk avoidance, multi-round HAIP is often needed [48], as shown in Figure 2. In each round, due to sensor faults, the autonomy may make wrong decisions along with errors in SAT information. Humans first perceive SAT information through oriented attention allocation [38]. Based on information from other sources (e.g., the visual scene or communication), the correctness of the perceived SAT information can be verified [49]. Any error in SAT information indicates an error in the autonomy decision. When humans cannot clearly comprehend the correctness of the autonomy decision, they need to predict the future states if the takeover behavior or the autonomy decision were performed. Only based on this prediction can the utility of takeover and the autonomy decision be predicted and compared. When humans find errors in the autonomy decision or judge that its predicted utility is not optimal, they will initiate takeover. If the risk is avoided successfully or an accident occurs, the HAIP in this risk scenario is finished. Otherwise, the next round of HAIP is needed.
This paper assumes that the human anchor is set to the belief that the autonomy decision is correct and that the initial belief level in the anchor (i.e., anchor belief) equals initial trust. Guided by the anchor, whose belief updates at the end of each round, anchoring and omission biases cause SA errors and impede correct takeover during multi-round HAIP. The effects of anchoring bias and omission bias are indicated by the numbered arrow lines in Figure 2 and are introduced as follows:
SA perception is round-dependent due to anchoring bias. Affected by anchoring bias, humans tend to perceive information supporting the anchor updated in the last round [50]. SAT information is intended to help humans find errors in autonomy decisions and is thus naturally opposed to the anchor. Therefore, affected by anchoring bias, SAT information is assigned a smaller value and allocated less attention, resulting in inadequate perception of SAT information.
SA comprehension is round-dependent due to anchoring bias. SA comprehension is to judge the correctness of autonomy decisions. Affected by anchoring bias, humans have a preference for the anchor updated in the last round, which distorts the estimation of evidence's causal strength [51,52]. Specifically, the causal strength of evidence indicating that an autonomy decision is without errors is overestimated, and the causal strength of evidence indicating that it has errors is underestimated. This makes humans overlook errors in autonomy decisions and tend not to take over, which means that anchoring bias promotes omission bias.
Behavioral utility prediction has an irrational preference due to omission bias. Behavioral utility prediction is based on SA prediction. Affected by omission bias, humans more readily accept losses caused by inaction than equal or even smaller losses caused by action [16]. This gives them a more pessimistic subjective cognition of losses caused by takeover and leads to its overestimation, so humans fail to identify the condition in which an autonomy decision is not optimal and are inclined not to take over. It can also increase anchor belief and thus promote the effects of anchoring bias.
A human's takeover behavior in the current round updates anchor belief: if they take over, anchor belief decreases; otherwise, it increases. After the generation of a new autonomy decision and new SAT information, the updated anchor continues to bias SA perception and comprehension, which makes them round-dependent. Therefore, under the mutual promotion of anchoring bias and omission bias, a human is prone to ignoring critical SAT information, overlooking errors in the autonomy decision, and believing that the utility of a wrong autonomy decision is optimal. These errors impede the correct takeover decision and lead to an accident when the autonomy makes wrong risk-avoidance decisions.

2.2. Framework of Proposed Method

Based on effects of anchoring bias and omission bias on HAIP, an HAIP safety assessment method is proposed. The overall framework of the method is shown in Figure 3. Firstly, the SAB-TDM is developed to analytically represent a takeover decision based on SA and quantify the effects of anchoring and omission biases. In SAB-TDM, the perception of SAT information and its round dependency caused by anchoring bias are quantified with ACT-R [53]; comprehension about the correctness of the autonomy decision and its round dependency caused by anchoring bias are quantified with an intuitionistic fuzzy set (IFS) [54] and AAM [43]; and the irrational utility prediction of takeover behavior is quantified with Prospect Theory [9]. Secondly, DBN is used to model multi-round HAIP and assess its safety. In DBN, SAT information perception, comprehension about the correctness of the autonomy decision, and utility prediction affected by omission bias guide the structure establishment and parameter determination of a single Bayesian network. The SA perception and comprehension round dependencies caused by anchoring bias guide the structure establishment and parameter determination of adjacent networks. The above methods are detailed in the following two sections.

3. SA-Based Takeover Decision Model Considering Anchoring and Omission Biases

According to the analysis of the characteristics of multi-round HAIP, this section develops the SAB-TDM to represent a human's takeover decision based on SA. The core is to quantify the negative effects of anchoring bias on SA achievement and of omission bias on behavioral utility prediction.

3.1. Overview of SA-Based Takeover Decision Model

A takeover decision in multi-round HAIP is based on SA, which takes SAT information as input and is affected by anchoring bias and omission bias. Figure 4 illustrates the logic of takeover decisions based on SA. Humans first perceive SAT information under the effects of anchoring bias. Based on information from other sources, humans verify the perceived SAT information to comprehend the correctness of the autonomy decision. The comprehension is distorted by anchoring bias and has three result states: without errors, with errors, and hesitancy. Only when humans cannot clearly identify whether errors exist in the autonomy decision do they need to predict and compare the utility of takeover and the autonomy decision, which is affected by omission bias. If the autonomy decision has errors or takeover is optimal, humans will choose to take over.
According to Figure 4, the SAB-TDM can be expressed as
$$
\begin{aligned}
p(TOD_r = a) &= \big( p(CO_r = y \mid PE_r, A_{r-1}) + p(PR_r = ad \mid E_r)\, p(CO_r = h \mid PE_r, A_{r-1}) \big)\, p(PE_r \mid A_{r-1})\, p(A_{r-1}) \\
p(TOD_r = t) &= 1 - p(TOD_r = a)
\end{aligned}
$$
where $TOD_r \in \{a, t\}$ represents the states of the takeover decision, i.e., accepting or taking over autonomy in round $r$; $A_{r-1} \in \{b, u\}$ represents belief or unbelief in the anchor updated in round $r-1$; $PE_r \in \{o, no\}$ is the successful observation or neglect of information; $CO_r \in \{y, n, h\}$ represents the three states of the correctness judgment, i.e., without errors, with errors, and hesitancy; $PR_r \in \{ad, hd\}$ means that the predicted utility of the autonomy decision or of takeover is optimal; and $E_r$ is the evidence from other sources.
$p(TOD_r)$ is decided by SA about the correctness of the autonomy decision and by utility prediction. Affected by anchoring bias, SA perception and comprehension are guided by the anchor belief level $p(A_{r-1})$. In the initial round, $p(A_{r-1} = b) = Trust$ with $Trust \in [0, 1]$, and in the following rounds, $p(A_{r-1} = b) = p(TOD_{r-1} = a)$. The conditional probabilities $p(PE_r \mid A_{r-1})$ and $p(CO_r \mid PE_r, A_{r-1})$ reflect the perception and comprehension round dependency caused by anchoring bias, respectively. $p(PE_r \mid A_{r-1})$ indicates the perception probability of information determined by anchor $A_{r-1}$. $p(CO_r \mid PE_r, A_{r-1})$ indicates that the perception of SAT information $PE_r$ and the anchor $A_{r-1}$ decide a human's belief in the correctness of the autonomy decision. When $CO_r = h$, the utility of takeover and of the autonomy decision needs to be predicted and compared. The conditional probability $p(PR_r \mid E_r)$ represents the behavioral utility prediction affected by omission bias. These conditional probabilities are determined as follows.
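As a minimal illustrative sketch (not part of the paper's implementation; all probability values below are assumptions), Equation (1) can be evaluated as follows:

```python
# Hypothetical sketch of Equation (1) of the SAB-TDM. The numeric
# probabilities are illustrative assumptions, not data from the paper.

def p_accept(p_co_y, p_co_h, p_pr_ad, p_pe, p_anchor_belief):
    """Probability of accepting the autonomy decision in round r.

    p_co_y  : p(CO_r = y | PE_r, A_{r-1})  -- judged 'without errors'
    p_co_h  : p(CO_r = h | PE_r, A_{r-1})  -- judged 'hesitancy'
    p_pr_ad : p(PR_r = ad | E_r)           -- autonomy utility judged optimal
    p_pe    : p(PE_r | A_{r-1})            -- SAT information perceived
    p_anchor_belief : p(A_{r-1} = b)       -- belief in the anchor
    """
    return (p_co_y + p_pr_ad * p_co_h) * p_pe * p_anchor_belief

p_a = p_accept(p_co_y=0.6, p_co_h=0.3, p_pr_ad=0.5, p_pe=0.9,
               p_anchor_belief=0.8)
p_takeover = 1.0 - p_a  # p(TOD_r = t), the second line of Equation (1)
```

In a multi-round run, `p_a` would also become the anchor belief $p(A_r = b)$ fed into the next round.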

3.2. Quantification of SA Perception Affected by Anchoring Bias

The determination of $p(PE_r \mid A_{r-1})$ is to quantify the perception of SAT information and the perception round dependency caused by anchoring bias. It is essentially the quantification of attention allocation guided by the anchor. Related studies often divide the display interface into several areas of interest (AOIs) and uniformly measure the perception probability of information in the same AOI [55]. This paper treats the SAT information at the same level as one AOI and quantifies its perception with ACT-R theory [53], which uses the salience and value of information to measure its activation [35]. A higher activation level means more attention and a higher perception probability.
Affected by anchoring bias, humans adjust their attention allocation for SAT information according to their attitude toward the anchor updated in the last round. The adjustment is reflected in a value change in SAT information: belief in the anchor decreases the value of SAT information, while unbelief increases it. This can be expressed as
$$
A_r^{iV} =
\begin{cases}
A_r^{iV}(A_{r-1} = b), & \text{believe } Anchor_{r-1} \\
A_r^{iV}(A_{r-1} = u), & \text{unbelieve } Anchor_{r-1}
\end{cases}
$$
where $A_r^{iV}$ is the subjective value of SAT$i$ ($i \in \{1, 2, 3\}$) information in round $r$; $A_{r-1} \in \{b, u\}$ indicates belief or unbelief in the anchor in round $r-1$; and $A_r^{iV}(A_{r-1} = b)$ and $A_r^{iV}(A_{r-1} = u)$ represent the value of SAT$i$ information in round $r$ under the condition that humans believe or unbelieve $A_{r-1}$, respectively. Obviously, $A_r^{iV}(A_{r-1} = b) < A_r^{iV}(A_{r-1} = u)$.
Then, the perception round dependency $p(PE_r^i \mid A_{r-1})$ caused by anchoring bias can be calculated with

$$
p(PE_r^i = o \mid A_{r-1}) = \frac{1}{1 + e^{-(A_r^i - \tau)/s}}
$$

$$
A_r^i = \gamma A_r^{iS} + \lambda A_r^{iV}
$$

where $PE_r^i \in \{o, no\}$ is the successful observation or neglect of SAT$i$ information; $A_r^i$ is the activation of SAT$i$ information in round $r$; $\tau$ is the activation threshold; $s$ is the influence of noise in SAT information perception; $A_r^{iS}$ is the salience of the AOI containing SAT$i$ information; and $\gamma$ and $\lambda$ are the weights of salience and value, with $\gamma + \lambda = 1$.
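The anchor-dependent perception model of Equations (2)–(4) can be sketched as follows; all parameter values (salience, values, $\gamma$, $\lambda$, $\tau$, $s$) are illustrative assumptions:

```python
import math

# Sketch of Equations (2)-(4): anchor-dependent perception of SAT^i
# information under ACT-R. All numeric parameters are assumptions.

def perception_prob(salience, value_believe, value_unbelieve,
                    believe_anchor, gamma=0.5, lam=0.5, tau=0.5, s=0.1):
    # Equation (2): anchor belief lowers the subjective value of SAT info
    value = value_believe if believe_anchor else value_unbelieve
    # Equation (4): activation is a weighted sum of salience and value
    activation = gamma * salience + lam * value
    # Equation (3): logistic mapping from activation to observation prob.
    return 1.0 / (1.0 + math.exp(-(activation - tau) / s))

# Believing the anchor (lower value) yields a lower perception probability
p_b = perception_prob(0.6, value_believe=0.3, value_unbelieve=0.8,
                      believe_anchor=True)
p_u = perception_prob(0.6, value_believe=0.3, value_unbelieve=0.8,
                      believe_anchor=False)
```

With these assumed parameters, `p_b` falls below 0.5 while `p_u` exceeds it, reproducing the intended neglect of SAT information under anchor belief.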

3.3. Quantification of SA Comprehension Affected by Anchoring Bias

The determination of $p(CO_r \mid PE_r, A_{r-1})$ is to quantify SA comprehension based on SAT information and the comprehension round dependency caused by anchoring bias. In HAIP, SA comprehension is to judge the correctness of autonomy decisions according to the correctness of the perceived SAT information. The comprehension is distorted to different degrees according to the anchor belief updated in the last round, which makes comprehension round-dependent.
(1) Quantify SA comprehension based on SAT information. The correctness judgment of SAT information and autonomy decisions is benchmarked against information from other sources. $T_r^i = \{t_r^{i1}, \dots, t_r^{ij}, \dots, t_r^{iN}\}$ and $B_r^i = \{b_r^{i1}, \dots, b_r^{ij}, \dots, b_r^{iN}\}$ are used to represent the transparency information set at level SAT$i$ and its benchmark information set, where $N$ is the amount of information at level SAT$i$. SAT information $t_r^{ij}$ and its corresponding benchmark information $b_r^{ij}$ form an information tuple $x_r^{ij} = \langle t_r^{ij}, b_r^{ij} \rangle$. The correctness judgment of $t_r^{ij}$, i.e., $CO_r^{ij}$, depends on a human's cognition of the consistency of the elements in $x_r^{ij}$, which is often fuzzy. This paper uses an IFS to model human cognition, which is fuzzy in nature [54,56].
Let $X^{ij}$ be a universal set. The IFS of $t_r^{ij}$ correctness (i.e., $A^{ij}$) on $X^{ij}$ can be expressed as

$$
A^{ij} = \{ \langle x_r^{ij}, \mu_{A^{ij}}(x_r^{ij}), \nu_{A^{ij}}(x_r^{ij}) \rangle \mid x_r^{ij} \in X^{ij} \}
$$

where $\mu_{A^{ij}}(x_r^{ij}): X^{ij} \to [0, 1]$ and $\nu_{A^{ij}}(x_r^{ij}): X^{ij} \to [0, 1]$ are the membership and non-membership functions, respectively, which satisfy $0 \le \mu_{A^{ij}}(x_r^{ij}) + \nu_{A^{ij}}(x_r^{ij}) \le 1$. The hesitancy function $\pi_{A^{ij}}(x_r^{ij})$ of $x_r^{ij}$ in $A^{ij}$ is defined as

$$
\pi_{A^{ij}}(x_r^{ij}) = 1 - \mu_{A^{ij}}(x_r^{ij}) - \nu_{A^{ij}}(x_r^{ij})
$$
Then, a human's belief that $t_r^{ij}$ is without error, with hesitancy, or with error can be represented as $p(CO_r^{ij} = y) = \mu_{A^{ij}}(x_r^{ij})$, $p(CO_r^{ij} = h) = \pi_{A^{ij}}(x_r^{ij})$, and $p(CO_r^{ij} = n) = \nu_{A^{ij}}(x_r^{ij})$, respectively. The following reasoning rules are set to judge the correctness of $T_r^i$ (i.e., $CO_r^i \in \{y, n, h\}$, representing without error, with error, and hesitancy):
(i)
IF $PE_r^i = no$, THEN $CO_r^i = y$;
(ii)
IF $\exists j \in \{1, 2, \dots, N\}$, $CO_r^{ij} = n$, THEN $CO_r^i = n$;
(iii)
IF $\forall j \in \{1, 2, \dots, N\}$, $CO_r^{ij} = y$, THEN $CO_r^i = y$;
(iv)
IF $\exists j \in \{1, 2, \dots, N\}$, $CO_r^{ij} = h$ and $\nexists j \in \{1, 2, \dots, N\}$, $CO_r^{ij} = n$, THEN $CO_r^i = h$;
(v)
IF $CO_r^{i-1} = n$, $i \in \{2, 3\}$, THEN $CO_r^i = n$.
The correctness of an autonomy decision (i.e., $CO_r \in \{y, n, h\}$, representing without error, with error, and hesitancy) can be reasoned with
(vi)
IF $\exists i \in \{1, 2, 3\}$, $CO_r^i = n$, THEN $CO_r = n$;
(vii)
IF $\forall i \in \{1, 2, 3\}$, $CO_r^i = y$, THEN $CO_r = y$;
(viii)
IF $\exists i \in \{1, 2, 3\}$, $CO_r^i = h$ and $\nexists i \in \{1, 2, 3\}$, $CO_r^i = n$, THEN $CO_r = h$.
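The reasoning rules above can be sketched as a small Python routine; the function names and example states are ours, not from the paper:

```python
# Sketch of reasoning rules (i)-(viii) for judging SAT-level and
# decision-level correctness. States: 'y' without error, 'n' with error,
# 'h' hesitancy. Rule (v), cross-level error propagation, is subsumed in
# judge_decision, since any level with 'n' already yields CO_r = n.

def judge_level(perceived, item_states):
    """Rules (i)-(iv): correctness CO_r^i of one SAT level."""
    if not perceived:                  # (i) unperceived info raises no doubt
        return 'y'
    if 'n' in item_states:             # (ii) some item judged erroneous
        return 'n'
    if all(s == 'y' for s in item_states):  # (iii) all items without error
        return 'y'
    return 'h'                         # (iv) hesitancy, no error found

def judge_decision(level_states):
    """Rules (vi)-(viii): correctness CO_r of the autonomy decision."""
    if 'n' in level_states:
        return 'n'
    if all(s == 'y' for s in level_states):
        return 'y'
    return 'h'
```

For example, an unperceived SAT level contributes `'y'` even if its items contain errors, which is exactly how anchoring-driven neglect masks autonomy faults.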
(2) Quantify comprehension round dependency. According to the above rules, a human's belief in the correctness of an autonomy decision (i.e., $p(CO_r \mid PE_r)$) can be reasoned. $p(CO_r = y \mid PE_r)$ represents the initial causal strength with which the perceived SAT information supports the anchor and can be distorted by anchoring bias. This comprehension round dependency is represented by the AAM [57], which classifies evidence into positive and negative and adjusts their causal strengths with different mechanisms. The classification is based on the reference point $R$ (which can be set to 0.5 [43]). The perceived SAT information is considered positive if $p(CO_r = y \mid PE_r) \ge R$ and negative if $p(CO_r = y \mid PE_r) < R$. Then, the causal strength adjustment of the perceived SAT information guided by the anchor can be expressed with Equations (7)–(9):
$$
p(CO_r = y \mid PE_r, A_{r-1} = b) =
\begin{cases}
1 + \alpha \big( p(CO_r = y \mid PE_r) - R \big), & p(CO_r = y \mid PE_r) < R \\
1, & p(CO_r = y \mid PE_r) \ge R
\end{cases}
$$

$$
p(CO_r = y \mid PE_r, A_{r-1} = u) =
\begin{cases}
0, & p(CO_r = y \mid PE_r) < R \\
\beta \big( p(CO_r = y \mid PE_r) - R \big), & R \le p(CO_r = y \mid PE_r) < 1 \\
1, & p(CO_r = y \mid PE_r) = 1
\end{cases}
$$

$$
\begin{aligned}
p(CO_r = n \mid PE_r, A_{r-1} = b) &= \big(1 - p(CO_r = y \mid PE_r, A_{r-1} = b)\big) \frac{p(CO_r = n \mid PE_r)}{1 - p(CO_r = y \mid PE_r)} \\
p(CO_r = n \mid PE_r, A_{r-1} = u) &= \big(1 - p(CO_r = y \mid PE_r, A_{r-1} = u)\big) \frac{p(CO_r = n \mid PE_r)}{1 - p(CO_r = y \mid PE_r)} \\
p(CO_r = h \mid PE_r, A_{r-1} = b) &= \big(1 - p(CO_r = y \mid PE_r, A_{r-1} = b)\big) \frac{p(CO_r = h \mid PE_r)}{1 - p(CO_r = y \mid PE_r)} \\
p(CO_r = h \mid PE_r, A_{r-1} = u) &= \big(1 - p(CO_r = y \mid PE_r, A_{r-1} = u)\big) \frac{p(CO_r = h \mid PE_r)}{1 - p(CO_r = y \mid PE_r)}
\end{aligned}
$$
where $\alpha$ and $\beta$ ($0 < \alpha \le \beta < 1$) are constants reflecting a human's sensitivity to negative and positive evidence, respectively. According to the gap between $\alpha$ and $\beta$, the anchoring bias can be divided into three levels: high (HAB, $\langle \alpha, \beta \rangle = \langle 0.2, 0.8 \rangle$), middle (MAB, $\langle \alpha, \beta \rangle = \langle 0.5, 0.8 \rangle$), and low (LAB, $\langle \alpha, \beta \rangle = \langle 0.8, 0.8 \rangle$) [58].
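Equations (7)–(9) can be sketched as follows, using the HAB pair $\langle \alpha, \beta \rangle = \langle 0.2, 0.8 \rangle$ from above; the input probabilities are illustrative assumptions:

```python
# Sketch of Equations (7)-(9): anchoring adjustment of comprehension
# probabilities. p_y, p_n, p_h are p(CO_r = y/n/h | PE_r); R is the
# reference point; (alpha, beta) encode the anchoring-bias level (HAB here).
# Input probabilities in the example are illustrative assumptions.

def adjust(p_y, p_n, p_h, believe_anchor, alpha=0.2, beta=0.8, R=0.5):
    if believe_anchor:                        # Equation (7)
        p_y_adj = 1.0 + alpha * (p_y - R) if p_y < R else 1.0
    else:                                     # Equation (8)
        if p_y < R:
            p_y_adj = 0.0
        elif p_y < 1.0:
            p_y_adj = beta * (p_y - R)
        else:
            p_y_adj = 1.0
    # Equation (9): distribute the remaining mass over 'n' and 'h'
    # in proportion to their unadjusted probabilities
    rest, denom = 1.0 - p_y_adj, 1.0 - p_y
    p_n_adj = rest * p_n / denom if denom > 0 else 0.0
    p_h_adj = rest * p_h / denom if denom > 0 else 0.0
    return p_y_adj, p_n_adj, p_h_adj

# Believing the anchor: clearly negative evidence (p_y = 0.2 < R)
# barely dents the belief that the decision is correct
y, n, h = adjust(0.2, 0.5, 0.3, believe_anchor=True)
```

With high anchoring bias, the adjusted belief `y` stays at 0.94 despite the evidence, while the three states still sum to one.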

3.4. Quantification of Utility Prediction Affected by Omission Bias

The determination of $p(PR_r \mid E_r)$ is to quantify the utility prediction and comparison of the autonomy decision and takeover behavior, which is based on SA prediction and has an irrational preference due to omission bias. Affected by omission bias, a human is more averse to loss caused by takeover. The irrational utility prediction of takeover behavior is quantified with Prospect Theory.
The finite risk-avoidance decision space is formulated as D = {d_1, …, d_K}. The prerequisite to predicting the utility of a risk-avoidance decision d_i ∈ D is to predict the future state if d_i were executed. Based on the SA prediction, the actual utility can be measured. The above cognition process can be expressed as
PRS = f_P(E, d_i), \qquad x_i = f_U(PRS) \quad (10)
where P R S is the future state prediction; E is the evidence observed from other sources; f P ( · ) is the SA prediction function; x i is the actual utility of d i ; and f U ( · ) is the actual utility function.
For the autonomy decision d A D D , D H = { d H 1 , , d H K 1 } = D { d A D } is the takeover behavior space. Prospect Theory uses two sigmoidal curves to represent the subjective utility functions of takeover and the autonomy decision, as shown in Figure 5 [9].
Then, the utility of the autonomy decision, u_AD(x_AD), and the takeover utility, u_H(x_Hi), can be expressed as
u_{AD}(x_{AD}) = \begin{cases} (x_{AD})^{\alpha_{PR}}, & x_{AD} \ge 0 \\ -\lambda_{AD} (-x_{AD})^{\alpha_{PR}}, & x_{AD} < 0 \end{cases} \quad (11)
u_{H}(x_{Hi}) = \begin{cases} (x_{Hi})^{\alpha_{PR}}, & x_{Hi} \ge 0 \\ -\lambda_{H} (-x_{Hi})^{\alpha_{PR}}, & x_{Hi} < 0 \end{cases} \quad (12)
where x_AD and x_Hi are the actual utilities of the autonomy decision and takeover behavior d_Hi, respectively; α_PR ∈ (0, 1) is the risk preference index; and λ_AD and λ_H are the loss-aversion indexes of the autonomy decision and takeover, respectively. A human is more averse to loss caused by takeover, which means 1 < λ_AD < λ_H. According to the gap between λ_AD and λ_H, the omission bias can be divided into three levels: the high level (HOB, <λ_AD, λ_H> = <2.25, 18.9>), middle level (MOB, <λ_AD, λ_H> = <2.25, 9.45>), and low level (LOB, <λ_AD, λ_H> = <2.25, 4.72>) [9,59].
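The value functions in Equations (11) and (12) differ only in the loss-aversion index, so a single sketch covers both (a minimal illustration; the parameter values follow the levels listed above):

```python
def prospect_value(x, alpha_pr=0.88, lam=2.25):
    """Prospect Theory value function of Eqs. (11)-(12): concave for gains,
    steeper (scaled by the loss-aversion index lam) for losses."""
    return x ** alpha_pr if x >= 0 else -lam * (-x) ** alpha_pr
```

Using lam = λ_AD = 2.25 reproduces u_AD, while lam = λ_H (e.g., 9.45 at the MOB level) reproduces u_H; the same loss is then felt far more strongly when it stems from takeover.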
Then, the predicted utilities of the autonomy decision (U_AD) and takeover behavior (U_Hi) can be calculated with
U_{AD} = \sum_j u_{AD}(x_{AD}^j) w_{AD}^j, \qquad U_{Hi} = \sum_j u_{H}(x_{Hi}^j) w_{Hi}^j \quad (13)
where x A D j and x H i j are the j th utility of the autonomy decision and takeover behavior d H i , respectively, whose weights are w A D j and w H i j , respectively. The utility prediction space of takeover can be expressed as U H = { U H 1 , , U H K 1 } .
The belief that the predicted utility of an autonomy decision is optimal, i.e., p ( P R r = a d | E r ) , can be expressed as
p(PR_r = ad \mid E_r) = \begin{cases} 1, & U_{AD} \ge \max(U_H) \\ 0, & \text{otherwise} \end{cases} \quad (14)
When humans choose to take over, the option in D_H with the highest utility is selected for execution, that is, hd = \arg\max_{d_i \in D_H}(U_{Hi}).
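The prediction and comparison in Equations (13) and (14) and the selection rule above can be sketched as follows (function names are ours):

```python
def predicted_utility(values, weights, value_fn):
    """Eq. (13): weighted sum of subjective utilities of possible outcomes."""
    return sum(value_fn(x) * w for x, w in zip(values, weights))


def takeover_decision(U_AD, U_H):
    """Eq. (14) plus the selection rule: keep the autonomy decision iff its
    predicted utility is not beaten by any takeover option; otherwise pick
    the takeover option with the highest predicted utility.
    U_H maps each takeover option in D_H to its predicted utility."""
    if U_AD >= max(U_H.values()):
        return None  # autonomy decision is believed optimal, no takeover
    return max(U_H, key=U_H.get)
```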

4. Dynamic Bayesian Network for HAIP Safety Assessment

A novel DBN-based safety assessment method for multi-round HAIP considering the effects of anchoring bias and omission bias is developed in this section. The structure establishment and parameter determination of DBN are guided by SAB-TDM. The flowchart of the proposed method is shown in Figure 6 with four major steps explained as follows.

4.1. Analysis of Multi-Round HAIP (Step 1)

This preparatory step is to obtain knowledge about multi-round HAIP in risk scenarios. It includes (i) the inputs and various algorithms that autonomy uses to make a risk-avoidance decision; (ii) the SAT information generated with an autonomy decision to support a human’s takeover decision; and (iii) the rules for HAI to avoid risk. This knowledge is important input to HAIP modeling and safety assessment.

4.2. Qualitative Modeling (Step 2)

Considering its abilities to encode human knowledge and describe time dependency, DBN is selected to model the HAIP affected by anchoring and omission biases. Figure 7 shows a schematic diagram to clarify the modeling concept. As shown in the example, the elements of SA-based HAIP are represented as nodes in the DBN. The interpretation and establishment of the network are introduced in the following three sub-steps.

4.2.1. Identify the Critical Nodes in DBN

To model HAIP, this sub-step identifies DBN nodes to represent an autonomy decision and human takeover decision in multi-round interaction. As shown in Figure 7, nodes E and Sensor FI are used to describe critical inputs and the bias caused by functional insufficiency of sensors. Moreover, nodes T r 1 , T r 2 , and T r 3 are identified to represent SAT information at three levels. They also indicate the generation of the autonomy decision.
To model the takeover decision based on SA, nodes P E r 1 , P E r 2 , and P E r 3 are identified to represent SAT information perception. Moreover, benchmark information (nodes B r 1 , B r 2 , and B r 3 ), SAT correctness (nodes C O r 1 , C O r 2 , and C O r 3 ), and autonomy decision correctness (node C O r ) are used to represent SA comprehension. Nodes P R r , omission bias, and D are identified to represent utility prediction affected by omission bias. Node T O D r is identified to represent a human’s takeover decision. It also works as an updated anchor affecting SA perception and comprehension in the next round. These nodes will be extended and assigned specific meaning when applied to an actual risk scenario.

4.2.2. Establish Causalities Within a Single Round

The causalities in the single time slice of DBN describe the HAI in a single interaction round. The causality arcs describing an autonomy decision start from E and the Sensor FI node, and end with T r 1 , T r 2 , and T r 3 nodes. In humans’ takeover decision process, the causality arcs start from the nodes related to SAT information perception and correctness judgment, followed by nodes related to autonomy decision correctness and takeover utility prediction, and end with a node representing the takeover decision. Additionally, causalities between the Trust node and SA perception and comprehension nodes (i.e., P E 1 1 , P E 1 2 , P E 1 3 , and C O 1 ) reflect effects of trust (as the initial anchor) and anchoring bias in the initial round.

4.2.3. Establish Causalities Between Adjacent Rounds

This sub-step aims to establish connections between adjacent DBN slices, which reflects the perception and comprehension round dependency caused by anchoring bias. The connections between T O D r 1 and SAT information perception nodes (i.e., P E r 1 , P E r 2 , and P E r 3 ) are used to represent perception round dependency (blue lines in Figure 7). The connections between T O D r 1 and the autonomy decision correctness node (i.e., C O r ) are used to represent comprehension round dependencies (red lines in Figure 7). The above nodes and connections are combined with that in single time slices to describe the dynamic multi-round HAIP.

4.3. Quantitative Modeling (Step 3)

Based on the construction of DBN structure, another critical step in the proposed method is to determine the parameters in DBN. The key is to determine the conditional probability distribution (CPD) of nodes.

4.3.1. Allocate Priors for Root Nodes

As shown in Figure 7, the root nodes in DBN include the Environment, risk-avoidance decision space D , trust level, and Sensor FI. Environment nodes serve as the input variables for both an autonomy decision and human takeover decision, varying according to the current risk scenarios. Risk-avoidance decision space D depends on the capability of the human–autonomy system to handle risk. The trust level can be determined by the actual situation of the analyzed object or different analysis purposes. Sensor FI nodes represent the biases of inputs detected by sensors, which typically follow a normal distribution and can be determined by relevant industry standards [60].

4.3.2. Determine CPD for Nodes Without Round Dependencies

The goal of this sub-step is to determine the CPDs for nodes without round dependencies. For nodes related to the correctness judgment of SAT information, the key to CPD determination is to construct the IFS representing the correctness of SAT information, as seen in Equations (5) and (6). The IFS can be determined based on relevant industry standards; alternatively, effective methods include conducting questionnaire surveys and using expert judgment [61]. After that, the CPDs of nodes CO_r^1, CO_r^2, and CO_r^3 can be determined with reference to reasoning rules (i)–(v) in the SA comprehension quantification of SAB-TDM. The CPD of nodes PR_r, representing takeover utility prediction affected by omission bias, can be determined with reference to Equations (11)–(14) in the utility prediction quantification of SAB-TDM. The CPD of nodes TOD_r refers to Equation (1).

4.3.3. Determine CPD for Nodes with Round Dependencies

This sub-step aims to determine the CPDs for nodes with round dependencies, including nodes PE_r^1, PE_r^2, PE_r^3, and CO_r. Nodes PE_r^1, PE_r^2, and PE_r^3 represent the SAT information perception, which has round dependencies caused by anchoring bias. Their CPDs can be determined with reference to Equations (2)–(4) in the SA perception quantification of SAB-TDM. To determine the CPD of nodes with comprehension round dependencies (i.e., nodes CO_r), the initial causal strength with which perceived SAT information supports the anchor should be determined according to reasoning rules (vi)–(viii) in the SA comprehension quantification of SAB-TDM. After that, the CPD of nodes CO_r can be determined with reference to Equations (7)–(9).

4.4. HAIP Safety Assessment (Step 4)

With the CPDs of all nodes determined, the joint probability distribution of the DBN modeling multi-round HAIP can be calculated as follows:
P(Z_{1:R_n}) = \prod_{r=1}^{R_n} \prod_{i=1}^{n} P(Z_r^i \mid Pa(Z_r^i)) \quad (15)
where P ( Z 1 : R n ) represents the joint distribution; Z r i is the node i in interaction round r ; P a ( Z r i ) is the parent node set of Z r i , which may contain nodes in round r or r 1 ; n is the node number in a single round slice; and R n is the total number of interaction rounds. Subsequently, the HAIP safety can be expressed as
S_{HAIP} = P(\mathrm{Safety} = \mathrm{safe}) \quad (16)
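Equation (15) is the standard DBN factorization. A minimal enumeration sketch (purely illustrative, not the MATLAB implementation used in the case study) could look like:

```python
def joint_probability(assignment, cpds, parents):
    """Eq. (15): multiply P(node | parents) over all (round, node) pairs.
    assignment maps (round, name) -> state; cpds[(round, name)] is a
    function (state, parent_states) -> probability; parents maps a node
    key to the keys of its parents (in round r or r-1)."""
    p = 1.0
    for key, state in assignment.items():
        pa = tuple(assignment[q] for q in parents.get(key, ()))
        p *= cpds[key](state, pa)
    return p
```

For a toy two-round chain with P(Z_1 = true) = 0.7 and P(Z_2 = Z_1) = 0.9, the joint probability of both being true is 0.63; summing such joints over all assignments with Safety = safe yields the S_HAIP of Equation (16).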

5. Case Study

The proposed multi-round HAIP safety assessment method is demonstrated with multi-round collision avoidance for autonomous ships. In an encounter situation, an autonomous navigation system (ANS) and an operator often need to interact for multiple rounds to complete collision avoidance. During multi-round HAIP, an operator’s SA is critical for making a correct takeover decision and ensuring safety. However, anchoring and omission biases lead to errors in the takeover decision, which are significant causes of accidents. Therefore, this risk scenario is selected as the research object to illustrate the effectiveness of the proposed method.

5.1. Case Description

5.1.1. Encounter Situation Design

We construct a typical two-ship encounter situation, where the initial motion parameters of our own ship (OS), i.e., an autonomous ship, and the target ship (TS) are shown in Table 1. OS and TS are in a crossing situation with collision risk. The ship domain is a concept encompassing safe distance, whose violation signifies an imminent collision. Existing studies suggest employing a circular shape for ship domains in open seas [62].

5.1.2. Introduction of HAIP in Autonomous Ship Collision Avoidance

As shown in Figure 8, the process of HAI to complete collision avoidance can be divided into three stages according to the relative distance of two ships [62,63]. In each stage, ANS decides whether to change course, while an operator achieves SA based on SAT information to make a takeover decision. Therefore, three stages of collision avoidance correspond to three HAIP rounds.
Among many algorithms, the Velocity Obstacle (VO) Algorithm has been widely used for its strong interpretability and graphical expressiveness [64]. In the risk scenario of this case study, ANS is assumed to apply VO. The SAT information of ANS is shown in Figure 9. The inputs of VO (i.e., SAT1) are detected by radar, including the velocities, courses, and relative distance and bearing of the two ships. According to these inputs, VO constructs a velocity obstacle cone (VOC) of OS, which represents the velocity space in which the two ships collide. Subsequently, the VOC can be divided into subspaces to assess the current severity degree (SD) and urgency degree (UD) (i.e., SAT2) [65,66]. The predicted utility of ANS’s decision (i.e., SAT3) is indicated by the distance to the closest point of approach (DCPA) and time to the closest point of approach (TCPA) if ANS’s decision is executed (formulated as dcpa_ad and tcpa_ad) [67]. Owing to space limitations, details of VO can be found in [65,66].
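For reference, DCPA and TCPA follow from the usual constant-course, constant-speed assumption; a planar-coordinate sketch (our conventions, not taken from [67]) is:

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (DCPA, TCPA) for two ships keeping course and speed.
    Positions in metres, velocities in m/s, planar x-y coordinates."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0:  # identical velocities: the range never changes
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2                    # time of closest approach
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)   # distance at that time
    return dcpa, tcpa
```

For example, a target 1000 m ahead closing head-on at 10 m/s gives TCPA = 100 s and DCPA = 0 m.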
In each round, to achieve SA about the correctness of ANS’s decision, an operator first perceives the SAT information of ANS. Then, the correctness of the SAT information can be judged against benchmark information from other sources, such as an optical camera onboard the ship. The correctness of the SAT information reflects that of ANS’s decision. If the operator cannot clearly identify the correctness of ANS’s decision, the benchmark information can also support the operator in predicting the utility of different collision avoidance decisions. When the operator finds that ANS’s decision is erroneous or suboptimal, they will take over. The takeover decision process outlined above is distorted by anchoring bias and omission bias. In the initial round, the operator is assumed to be anchored in trusting ANS. Cognitive biases may make the operator ignore SAT information, miscomprehend the correctness of ANS’s decision, and overestimate the loss caused by takeover. Therefore, the operator will tend not to take over.

5.2. Application of Proposed Method

5.2.1. Analysis of Multi-Round HAIP

To model multi-round HAIP, the knowledge about HAIP in autonomous ship collision avoidance should be analyzed first, including critical inputs and the algorithm of ANS, the SAT information, the division of collision avoidance rounds, and the takeover decision process of operators. This knowledge has been introduced in Section 5.1.2.

5.2.2. Qualitative Modeling

(1)
Identify the critical nodes in DBN
The HAIP to complete collision avoidance is identified as nodes in DBN. ANS’s collision avoidance decision and SAT information are represented by sets of ANS-related nodes. SA, behavioral utility prediction, and the takeover decision are represented by sets of operator-related nodes. Table 2 lists nodes mentioned above along with their descriptions.
(2)
Establish causalities within a single round
Within a single round, the causality arcs are established to represent the generation process of ANS’s collision avoidance decision and SAT information, and the operator’s SA achievement and takeover decision.
(3)
Establish causalities between adjacent rounds
Perception and comprehension round dependencies are described by the causality arcs from the operator’s takeover decision node to perception nodes of ANS’s SAT information, and the causality arcs between the operator’s takeover decision node and the node related to correctness judgment of the ANS decision, respectively.
Combining the connections within and between rounds, the construction of the qualitative DBN model can be completed, as shown in Figure 10. Sub-models are used to simplify the network structure. The connection related to sub-models means that the nodes in parent sub-models are all the parent nodes of nodes in child sub-models.

5.2.3. Quantitative Modeling

(1)
Allocate priors for root nodes
The priors for nodes describing the risk scenario can be determined by the designed encounter situation shown in Table 1. Secondly, the collision avoidance decision space is set as D = { 0 ° , ± 5 ° , ± 10 ° , ± 20 ° , ± 30 ° } . Thirdly, the trust level is initialized as 0.8. At last, the bias space for input information is detailed in Table 3 [68].
(2)
Determine CPD for nodes without round dependencies
To determine the CPDs of nodes related to SAT information correctness, we construct the membership and non-membership functions describing the IFS about the correctness of each piece of SAT information, as shown in Figure 11. Based on these membership and non-membership functions, the CPDs of CO_r^1, CO_r^2, and CO_r^3 can be determined with reference to reasoning rules (i)–(v) in the SA comprehension quantification of SAB-TDM.
The CPD determination of the P R r node needs to investigate the way that the operator predicts behavior utility. The behavior utility prediction is based on SA prediction about the DCPA if d i was executed. It means that the SA prediction function is the calculation of DCPA. Based on SA prediction, utility indexes including risk and efficiency are used to model the actual utility prediction of a collision avoidance decision [41,62]. For collision avoidance decision d i D , its risk index x i R and efficiency index x i E can be represented as
x_i^R = \begin{cases} dcpa_i / 1500 - 1, & dcpa_i < 1500 \\ 0, & dcpa_i \ge 1500 \end{cases} \quad (17)
x_i^E = |d_i| / \max(|D|) \quad (18)
where dcpa_i is the DCPA if d_i is executed in the current situation. The weights of x_i^R and x_i^E are set to w_R = 0.8 and w_E = 0.2, respectively. Then, Equations (11)–(14), α_PR = 0.88, and the parameter tuple <λ_AD, λ_H> are used to describe the subjective utility prediction affected by omission bias.
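Equations (17) and (18) translate directly into code (function names are ours; the weighted combination with w_R = 0.8 and w_E = 0.2 follows the text):

```python
def risk_index(dcpa_i, safe_dcpa=1500.0):
    """Eq. (17): zero when DCPA reaches the 1500 m safe threshold,
    a growing loss (down to -1) as DCPA shrinks towards zero."""
    return dcpa_i / safe_dcpa - 1 if dcpa_i < safe_dcpa else 0.0


def efficiency_index(d_i, decision_space):
    """Eq. (18): course change magnitude relative to the largest
    available alteration in the decision space D."""
    return abs(d_i) / max(abs(d) for d in decision_space)
```

For instance, in the decision space D = {0°, ±5°, ±10°, ±20°, ±30°}, a 10° alteration with a predicted DCPA of 750 m yields x^R = -0.5 and x^E = 1/3.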
(3)
Determine CPD for nodes with round dependencies
The perception round dependency is considered in the CPDs of those SAT perception nodes (i.e., P E r 1 , P E r 2 , P E r 3 ). Constants ( γ = 1 / 3 , λ = 2 / 3 , τ = 1 , s = 0.4 [69]) and parameters in Table 4 are used in Equations (2)–(4) to calculate the CPDs of P E r 1 , P E r 2 , and P E r 3 . The comprehension round dependency is considered in CPD of the C O r node. The rules (vi)–(viii) in the SA comprehension quantification of SAB-TDM and parameter tuple < α , β > in Equations (7)–(9) are used to calculate the CPD of the C O r node.

5.2.4. HAIP Safety Assessment

After the determination of priors and CPDs of all nodes, the multi-round HAIP safety can be calculated with Equations (15) and (16). In the case study, HAIP safety is the probability that the collision risk is avoided successfully after three-round HAIP. To simplify the calculation, the joint distribution of DBN is computationally implemented using MATLAB R2023b.

5.3. Result and Discussion

5.3.1. Result Analysis

In the case study, the operator is assumed to be under the middle anchoring bias level (MAB) and omission bias level (MOB). The safety of collision avoidance for autonomous ships is illustrated in Figure 12. To illustrate the effects of anchoring bias and omission bias, the safety without HAI and without considering the effects of these biases is plotted in Figure 12, too. It can be observed that collision risk is effectively reduced with the existence of the operator. It means that the operator who takes over when ANS makes a wrong collision avoidance decision can compensate for the defects of ANS. Nevertheless, the significant gap between safety with and without biases suggests that anchoring bias and omission bias hinder the advantage of HAI in reducing the collision risk of autonomous ships.
To explain the above phenomenon, the takeover probabilities with and without considering anchoring bias and omission bias are plotted in Figure 13. Based on the comparison, the above phenomenon can be explained as follows:
Due to the perception round dependency caused by anchoring bias, the operator is prone to ignoring the SAT information used to judge the correctness of ANS’s decision. Even if those pieces of evidence are perceived, their causal strength in indicating errors in ANS’s decision will be underestimated due to the comprehension round dependency caused by anchoring bias. These round dependencies impede the operator from finding errors in ANS’s decision. Although the operator can compare the predicted utility of ANS’s decision and takeover, omission bias leads to the overestimation of loss caused by the latter, which makes the operator believe that ANS’s wrong decision is optimal. The above effects of anchoring and omission biases impede the operator’s takeover in the current round. They also strengthen anchor belief, which affects SA in the next round. Hence, in each round, the operator’s takeover probabilities are lower than those without considering anchoring and omission biases. As a result, the defects of ANS cannot be compensated for effectively, resulting in accidents.

5.3.2. Sensitivity Analysis of Anchoring Bias Level

In this section, a sensitivity analysis is performed to investigate the effects of anchoring bias levels on HAIP safety. During risk scenarios, the anchoring bias level is the critical parameter that adjusts the strength of the perception and comprehension round dependencies. We assess and compare HAIP safety and the average takeover probability (i.e., \bar{p}(TOD_r = t) = \sum_{r=1}^{3} p(TOD_r = t)/3) under the other anchoring bias levels, HAB and LAB. Results are shown in Figure 14. Obviously, HAIP safety and takeover probability reduce significantly with the increase in the anchoring bias level, which also reflects how anchoring bias reinforces omission bias. These phenomena can be explained as follows:
With the increase in the anchoring bias level, the perception and comprehension round dependencies are strengthened. This makes the operator tend to ignore SAT information. Even if some SAT information is perceived, its causal strength in indicating errors in ANS’s decision will be further underestimated. It becomes harder for the operator to identify the correctness of the autonomy decision, and they tend not to take over when ANS makes a wrong collision avoidance decision. Therefore, the increase in the anchoring bias level can significantly reduce takeover probability and HAIP safety.
To further explain the effects of anchoring bias levels on perception, we compare the average perception probability of the three-level SAT information (i.e., \bar{p}(PE_r^i) = \sum_{i=1}^{3} p(PE_r^i)/3) under different anchoring bias levels. The comparison results are illustrated in Figure 15. It can be observed that under the different anchoring bias levels, SAT information in the initial round has the same perception probability. This is because the perception probability of SAT information in the initial round is only related to the trust level (i.e., the initial anchor belief). In the other rounds, the perception probability reduces with the increase in the anchoring bias level, which reflects the strengthening of the perception round dependency. This phenomenon can be explained as follows:
The perception probability of SAT information depends on the subjective value assigned by the operator. Because SAT information inherently tends to contradict the initial anchor (i.e., trust in ANS), it is assigned a smaller value as anchor belief increases. Affected by the comprehension round dependency, the strength with which the same evidence reinforces anchor belief increases with the anchoring bias level. The higher anchor belief makes the operator more likely to assign a smaller value to SAT information in the next round, which means the strengthening of the perception round dependency. Hence, the perception probability of SAT information reduces with the increase in the anchoring bias level.
To further explain the effects of anchoring bias levels on comprehension, we compare the operator’s comprehension about the correctness of ANS’s decision under different anchoring bias levels. It is formulated as the probability that ANS’s decision process is believed to be without error (i.e., p(CO_r = y)). The comparison results are illustrated in Figure 16. It can be found that p(CO_r = y) increases with the anchoring bias level. This means that it is harder for the operator to find the errors in ANS’s decision as the anchoring bias level increases, which reflects the strengthening of the comprehension round dependency. The phenomenon can be explained as follows:
Comprehension about correctness of ANS’s decision is achieved based on SAT information. Affected by perception round dependency, less SAT information can be perceived with the increase in the anchoring bias level. The causal strength of perceived SAT information to indicate the errors in ANS’s decision will be underestimated due to anchoring bias. The underestimation degree increases with the anchoring bias level and anchor belief. It means that it is harder for the operator to find errors of ANS’s decision with the increase in the anchoring bias level. That will increase the belief of the anchor (i.e., ANS’s decision without error) and further strengthen the comprehension round dependency. Therefore, the probability of the operator believing ANS’s decision to be without errors increases with the anchoring bias level.

5.3.3. Sensitivity Analysis of Omission Bias Level

In this section, a sensitivity analysis is performed to investigate the effects of different omission bias levels on multi-round HAIP safety. In multi-round HAIP during risk scenarios, the omission bias level is the critical parameter that adjusts the degree to which the loss caused by takeover is overestimated. We assess and compare multi-round HAIP safety and takeover probability under the other omission bias levels, i.e., the high and low omission bias levels (HOB and LOB). The comparison results are shown in Figure 17. Obviously, multi-round HAIP safety and takeover probability decrease significantly with the increase in the omission bias level. These phenomena can be explained as follows:
With the increase in the omission bias level, the operator predicts and compares the utility of takeover and ANS’s decision more irrationally. This makes the operator less likely to identify the condition in which ANS’s decision is not optimal, and they tend not to take over. Not taking over in the current round also increases the anchor belief (i.e., the belief that ANS’s decision is without error), thereby reinforcing the effects of anchoring bias and contributing to an incorrect takeover decision in the next round. Therefore, takeover probability and HAIP safety significantly reduce with the increase in the omission bias level.
To further explain the effects of omission bias levels on behavioral utility prediction, we compare the behavioral utility prediction results under different omission bias levels. The result is formulated as the probability that the ANS decision is considered optimal (i.e., p(PR_r = ad)). The comparison results are illustrated in Figure 18. It can be found that in each interaction round, p(PR_r = ad) increases with the omission bias level. This phenomenon can be explained as follows:
Omission bias makes the operator more averse to the loss caused by takeover. It widens the gap between aversion to losses from takeover and aversion to losses from ANS’s decision, so the overestimation of the loss caused by takeover is strengthened. The operator tends to adopt a more pessimistic attitude when predicting the utility of takeover, which makes ANS’s wrong decision more likely to be considered optimal. Hence, the probability that the ANS decision is considered optimal increases with the omission bias level.
Moreover, to investigate the reinforcing effect of omission bias on anchoring bias, we analyze and compare the SAT information perception and the comprehension about the correctness of ANS’s decision under different omission bias levels. The former is formulated as the average perception probability of three-level SAT information in each round (i.e., \bar{p}(PE_r^i) = \sum_{i=1}^{3} p(PE_r^i)/3), and the latter is formulated as the probability that ANS’s decision is believed to be without error (i.e., p(CO_r = y)). The comparison results are plotted in Figure 19. It can be found that with an increase in the omission bias level, the perception probability of SAT information reduces and p(CO_r = y) increases. This means that the increase in the omission bias level strengthens the perception and comprehension round dependencies caused by anchoring bias. The main reason for this phenomenon is as follows:
In the initial round, SAT perception and comprehension are only related to the initial anchor belief (i.e., trust level) and anchoring bias level; thus, p ¯ ( P E r i ) and p ( C O r = y ) are both the same under different omission bias levels. Nevertheless, with the increase in the omission bias level, the operator tends to believe that ANS’s decision is optimal, which will increase the anchor belief. The higher anchor belief will make the operator more likely to assign a smaller value to SAT information and more likely to underestimate the causal strength of evidence indicating errors of ANS’s decision. Therefore, the increase in the omission bias level reduces perception probability of SAT information and probability that the operator finds errors of the autonomy decision, which both reflect the strengthening of anchoring bias.

5.3.4. Analyze the Encounter Situation of Two Autonomous Ships

In this section, an encounter situation in which both ships are autonomous is discussed. We assume that the settings for OS and TS in Figure 8 are identical. The HAIP safety and some of its details in such an encounter situation are illustrated in Figure 20. As shown in Figure 20a, when the collision avoidance behaviors of the two autonomous ships are not limited by clear rules, the HAIP safety is slightly reduced. This is because, in some conditions, the two autonomous ships choose to give way to each other but change courses to the same side. This kind of risky movement may result in the collision risk not being effectively mitigated, or even being increased. Figure 20b plots the probability of such risky movement in each round.
To avoid such risky movement and ensure the success of collision avoidance, clear collision avoidance rules should be provided to determine priority. In fact, the International Regulations for Preventing Collisions at Sea (COLREGs) give rules to identify the give-way ship and stand-on ship, although these rules may not be strictly followed by humans. In this case study, according to Rule 15 of COLREGs, OS is the give-way ship and TS is the stand-on ship. Nevertheless, according to Rule 17, both ships should undertake collision avoidance behavior when the collision risk cannot be avoided by a single ship (i.e., round 3). We assume that the two autonomous ships both strictly follow COLREGs and assess the HAIP safety. The result in Figure 20a shows that HAIP safety increases effectively. This indicates that providing clear, globally uniform collision avoidance rules for autonomous ships is an effective means to improve the safety of ocean shipping.

5.3.5. Validation of Proposed Method

To validate the proposed method, we analyze an HAIP in which ANS uses an alternative collision avoidance algorithm described in [70]. This SA-based algorithm complies with COLREG rules; it extends VO with the ship domain (SD) and integrates the fuzzy inference system based on near-collision (FIS-NC) [71]. This integrated algorithm is referred to as SDVO + FIS-NC. We assess safety and analyze the details of such HAIP, as shown in Figure 21.
As shown in Figure 21, the phenomena in HAIP using SDVO + FIS-NC are similar to those in HAIP using VO. Specifically, (i) safety in collision avoidance increases with the existence of HAI; (ii) HAIP safety and takeover probability reduce with the existence of anchoring and omission biases; and (iii) HAIP safety and takeover probability decrease as the anchoring bias level and omission bias level increase. HAIP safety assessments of HAIP using VO and SDVO + FIS-NC both indicate that anchoring bias and omission bias impede the advantage of HAIP in compensating for the defects of autonomy. The consistency between the conclusions of HAIP using different algorithms validates the effectiveness of the proposed method.

6. Conclusions

To better understand the performance of human–autonomy systems under risk scenarios, effective HAIP safety assessment should dig deep into the takeover decision process based on SA and consider the effects of anchoring bias and omission bias. To meet these requirements, we first developed an SAB-TDM to analytically represent takeover decisions based on SA. The SAB-TDM focuses on quantifying how round dependencies caused by anchoring bias impede SAT information perception and distort comprehension about the correctness of autonomy decisions, and how irrational utility prediction preference caused by omission bias leads to the overestimation of loss caused by takeover. Subsequently, a novel method based on DBN for HAIP safety assessment is proposed. In the proposed method, the multi-round HAIP affected by anchoring bias and omission bias is modeled. Finally, the HAIP in a typical risk scenario, that is, multi-round collision avoidance for autonomous ships, is analyzed to exemplify the proposed method.
The proposed method was successfully applied for the first time to assess HAIP safety under a risk scenario. The assessment result provides details about the SA and takeover decisions in multi-round HAIP. As seen from the case study, anchoring bias and omission bias jointly cause SA errors and impede correct takeover, which seriously threatens HAIP safety. As the anchoring bias level increases, the perception probability of SAT information decreases, and humans are less likely to find the errors in an autonomy decision. An increase in the omission bias level makes humans less likely to identify the condition in which the autonomy decision is not optimal, which also reinforces the perception and comprehension round dependencies caused by anchoring bias. Increasing the anchoring bias level and the omission bias level reduces HAIP safety by 0.01316 and 0.01587, respectively. Therefore, to fully exploit the advantage of HAI in risk avoidance and to ensure safety, effective measures should be taken to reduce the effects of anchoring bias and omission bias, such as de-biasing training and improved autonomy transparency.
This paper proposes an HAIP safety assessment method that considers the effects of anchoring bias and omission bias. Its purpose is to provide technical support for system improvements aimed at promoting correct takeovers. In future work, to further improve HAIP safety, research efforts should also focus on enhancing the capabilities of autonomous systems, for example by improving algorithms and increasing the precision and reliability of sensors.

Author Contributions

Conceptualization, J.G. and S.Z.; methodology, Q.Y., J.G. and S.Z.; validation, Q.Y. and J.G.; formal analysis, Q.Y. and S.Z.; investigation, Q.Y. and H.C.; writing—original draft preparation, Q.Y. and J.G.; writing—review and editing, Q.Y. and S.Z.; visualization, Q.Y. and H.C.; supervision, J.G., S.Z. and H.C.; funding acquisition, J.G. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 72201018, and the Funding Project of Science and Technology on Reliability and Environmental Engineering Laboratory under Grant No. 614200420230102.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in this paper. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Han, C.; Abeysiriwardhane, A.; Chai, S.; Maiti, A. Future directions for human-centered transparent systems for engine room monitoring in shore control centers. J. Mar. Sci. Eng. 2021, 10, 22. [Google Scholar] [CrossRef]
  2. Van de Merwe, K.; Mallam, S.; Nazir, S.; Engelhardtsen, Ø. Supporting human supervision in autonomous collision avoidance through agent transparency. Saf. Sci. 2024, 169, 106329. [Google Scholar] [CrossRef]
  3. Chan, J.; Golightly, D.; Norman, R.; Pazouki, K. Perception of Autonomy and the Role of Experience within the Maritime Industry. J. Mar. Sci. Eng. 2023, 11, 258. [Google Scholar] [CrossRef]
  4. Endsley, M.R. Supporting Human-AI Teams: Transparency, explainability, and situation awareness. Comput. Hum. Behav. 2023, 140, 107574. [Google Scholar] [CrossRef]
  5. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. In Situational Awareness; Routledge: London, UK, 2017; pp. 9–42. [Google Scholar]
  6. Veitch, E.; Alsos, O.A. Human-centered explainable artificial intelligence for marine autonomous surface vehicles. J. Mar. Sci. Eng. 2021, 9, 1227. [Google Scholar] [CrossRef]
  7. Korteling, J.H.; van de Boer-Visschedijk, G.C.; Blankendaal, R.A.; Boonekamp, R.C.; Eikelboom, A.R. Human-versus artificial intelligence. Front. Artif. Intell. 2021, 4, 622364. [Google Scholar] [CrossRef]
  8. Wickham, P.A. The representativeness heuristic in judgements involving entrepreneurial success and failure. Manag. Decis. 2003, 41, 156–167. [Google Scholar] [CrossRef]
  9. Oraby, T.; Bauch, C.T. Bounded rationality alters the dynamics of paediatric immunization acceptance. Sci. Rep. 2015, 5, 10724. [Google Scholar] [CrossRef] [PubMed]
  10. Walmsley, S.; Gilbey, A. Cognitive biases in visual pilots’ weather-related decision making. Appl. Cogn. Psychol. 2016, 30, 532–543. [Google Scholar] [CrossRef]
  11. Juárez Ramos, V. Analyzing the Role of Cognitive Biases in the Decision-Making Process; IGI Global: Hershey, PA, USA, 2018. [Google Scholar]
  12. Kari, R.; Steinert, M. Human factor issues in remote ship operations: Lesson learned by studying different domains. J. Mar. Sci. Eng. 2021, 9, 385. [Google Scholar] [CrossRef]
  13. Xiong, W.; Fan, H.; Ma, L.; Wang, C. Challenges of human—Machine collaboration in risky decision-making. Front. Eng. Manag. 2022, 9, 89–103. [Google Scholar] [CrossRef]
  14. Berthet, V. The impact of cognitive biases on professionals’ decision-making: A review of four occupational areas. Front. Psychol. 2022, 12, 802439. [Google Scholar] [CrossRef] [PubMed]
  15. Endsley, M.R. Combating information attacks in the age of the Internet: New challenges for cognitive engineering. Hum. Factors 2018, 60, 1081–1094. [Google Scholar] [CrossRef]
  16. Teovanović, P.; Purić, D.; Živanović, M.; Lukić, P.; Branković, M.; Stanković, S.; Knežević, G.; Lazarević, B.L.; Žeželj, I. The role of cognitive biases in shaping irrational beliefs: A multi-Study Investigation. Think. Reason. 2024, 1–44. [Google Scholar] [CrossRef]
  17. Endsley, M.R. Final reflections: Situation awareness models and measures. J. Cogn. Eng. Decis. Mak. 2015, 9, 101–111. [Google Scholar] [CrossRef]
  18. Prentice, R.A.; Koehler, J.J. A normality bias in legal decision making. Cornell Law Rev. 2002, 88, 583–650. [Google Scholar]
  19. Zamir, E.; Ritov, I. Loss aversion, omission bias, and the burden of proof in civil litigation. J. Leg. Stud. 2012, 41, 165–207. [Google Scholar] [CrossRef]
  20. Zaib, A.; Yin, J.; Khan, R.U. Determining role of human factors in maritime transportation accidents by fuzzy fault tree analysis (FFTA). J. Mar. Sci. Eng. 2022, 10, 381. [Google Scholar] [CrossRef]
  21. Fan, H.; Enshaei, H.; Jayasinghe, S.G. Human error probability assessment for LNG bunkering based on fuzzy Bayesian network-CREAM model. J. Mar. Sci. Eng. 2022, 10, 333. [Google Scholar] [CrossRef]
  22. Ahn, S.I.; Kurt, R.E.; Turan, O. The hybrid method combined STPA and SLIM to assess the reliability of the human interaction system to the emergency shutdown system of LNG ship-to-ship bunkering. Ocean Eng. 2022, 265, 112643. [Google Scholar] [CrossRef]
  23. Li, P.; Wang, Y.; Yang, Z. Risk assessment of maritime autonomous surface ships collisions using an FTA-FBN model. Ocean Eng. 2024, 309, 118444. [Google Scholar] [CrossRef]
  24. Zhang, D.; Han, Z.; Zhang, K.; Zhang, J.; Zhang, M.; Zhang, F. Use of hybrid causal logic method for preliminary hazard analysis of maritime autonomous surface ships. J. Mar. Sci. Eng. 2022, 10, 725. [Google Scholar] [CrossRef]
  25. Li, W.; Chen, W.; Guo, Y.; Hu, S.; Xi, Y.; Wu, J. Risk Performance Analysis on Navigation of MASS via a Hybrid Framework of STPA and HMM: Evidence from the Human–Machine Co-Driving Mode. J. Mar. Sci. Eng. 2024, 12, 1129. [Google Scholar] [CrossRef]
  26. Sumon, M.M.A.; Kim, H.; Na, S.; Choung, C.; Kjønsberg, E. Systems-Based Safety Analysis for Hydrogen-Driven Autonomous Ships. J. Mar. Sci. Eng. 2024, 12, 1007. [Google Scholar] [CrossRef]
  27. Jin, M.; Lu, G.; Chen, F.; Shi, X.; Tan, H.; Zhai, J. Modeling takeover behavior in level 3 automated driving via a structural equation model: Considering the mediating role of trust. Accid. Anal. Prev. 2021, 157, 106156. [Google Scholar] [CrossRef] [PubMed]
  28. Bhat, S.; Lyons, J.B.; Shi, C.; Yang, X.J. Clustering Trust Dynamics in a Human-Robot Sequential Decision-Making Task. IEEE Robot. Autom. Lett. 2022, 7, 8815–8822. [Google Scholar] [CrossRef]
  29. Guo, Y.; Shi, C.; Yang, X.J. Reverse psychology in trust-aware human-robot interaction. IEEE Robot. Autom. Lett. 2021, 6, 4851–4858. [Google Scholar] [CrossRef]
  30. Zhang, X.; Sun, Y.; Zhang, Y. Evolutionary game and collaboration mechanism of human-computer interaction for future intelligent aircraft cockpit based on system dynamics. IEEE Trans. Hum. Mach. Syst. 2021, 52, 87–98. [Google Scholar] [CrossRef]
  31. Huang, J.; Wu, W.; Zhang, Z.; Tian, G.; Zheng, S.; Chen, Y. Human Decision-Making Modeling and Cooperative Controller Design for Human-Agent Interaction Systems. IEEE Trans. Hum. Mach. Syst. 2022, 52, 1122–1134. [Google Scholar] [CrossRef]
  32. Huang, J.; Wu, W.; Zhang, Z.; Chen, Y. A human decision-making behavior model for human-robot interaction in multi-robot systems. IEEE Access 2020, 8, 197853–197862. [Google Scholar] [CrossRef]
  33. Gao, J.; Lee, J.D. Extending the decision field theory to model operators’ reliance on automation in supervisory control situations. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2006, 36, 943–959. [Google Scholar] [CrossRef]
  34. Zhang, X.; Sun, Y.; Zhang, Y.; Su, S. Multi-agent modelling and situational awareness analysis of human-computer interaction in the aircraft cockpit: A case study. Simul. Model. Pract. Theory 2021, 111, 102355. [Google Scholar] [CrossRef]
  35. Zhao, Y.; Smidts, C. CMS-BN: A cognitive modeling and simulation environment for human performance assessment, part 1—Methodology. Reliab. Eng. Syst. Safe 2021, 213, 107776. [Google Scholar] [CrossRef]
  36. Guo, J.; Ma, S.; Zeng, S.; Che, H.; Pan, X. A risk evaluation method for human-machine interaction in emergencies based on multiple mental models-driven situation assessment. Reliab. Eng. Syst. Safe 2024, 252, 110444. [Google Scholar] [CrossRef]
  37. Naderpour, M.; Lu, J.; Zhang, G. A safety-critical decision support system evaluation using situation awareness and workload measures. Reliab. Eng. Syst. Safe 2016, 150, 147–159. [Google Scholar] [CrossRef]
  38. Naderpour, M.; Lu, J.; Zhang, G. A human-system interface risk assessment method based on mental models. Saf. Sci. 2015, 79, 286–297. [Google Scholar] [CrossRef]
  39. Song, Q.; Fu, W.; Wang, W.; Sun, Y.; Wang, D.; Zhou, J. Quantum decision making in automatic driving. Sci. Rep. 2022, 12, 11042. [Google Scholar] [CrossRef]
  40. Shi, C.; Wang, Y.; Shen, J.; Qi, J. Cooperative Mission Planning of USVs Based on Intention Recognition. Mob. Netw. Appl. 2024, 1–15. [Google Scholar] [CrossRef]
  41. Wu, X.; Liu, K.; Zhang, J.; Yuan, Z.; Liu, J.; Yu, Q. An Optimized Collision Avoidance Decision-Making System for Autonomous Ships under Human-Machine Cooperation Situations. J. Adv. Transp. 2021, 2021, 7537825. [Google Scholar] [CrossRef]
  42. Dutt, V.; Gonzalez, C. Making instance-based learning theory usable and understandable, the instance-based learning tool. Comput. Hum. Behav. 2012, 28, 1227–1240. [Google Scholar] [CrossRef]
  43. Hogarth, R.M.; Einhorn, H.J. Order Effects in Belief Updating: The Belief-Adjustment Model. Cogn. Psychol. 1992, 24, 1–55. [Google Scholar] [CrossRef]
  44. Geng, B.; Brahma, S.; Wimalajeewa, T.; Varshney, P.K.; Rangaswamy, M. Prospect theoretic utility based human decision making in multi-agent systems. IEEE Trans. Signal Process. 2020, 68, 1091–1104. [Google Scholar] [CrossRef]
  45. Mkrtchyan, L.; Podofillini, L.; Dang, V.N. Bayesian belief networks for human reliability analysis: A review of applications and gaps. Reliab. Eng. Syst. Safe 2015, 139, 1–16. [Google Scholar] [CrossRef]
  46. Xi, Y.; Zhang, X.; Han, B.; Zhu, Y.; Fan, C.; Kim, E. Advanced human reliability analysis approach for ship convoy operations via a model of IDAC and DBN: A case from ice-covered waters. J. Mar. Sci. Eng. 2024, 12, 1536. [Google Scholar] [CrossRef]
  47. Chen, J.Y.; Barnes, M.J. Human-agent teaming for multirobot control: A review of human factors issues. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 13–29. [Google Scholar] [CrossRef]
  48. Hao, Y.; Zhao, Y. Research on Stage Division in the Ship Collision Avoidance Process. IOP Conf. Ser. Mater. Sci. Eng. 2019, 612, 052016. [Google Scholar]
  49. Wen, H.; Amin, M.T.; Khan, F.; Ahmed, S.; Imtiaz, S.; Pistikopoulos, E. Assessment of situation awareness conflict risk between human and AI in process system operation. Ind. Eng. Chem. Res. 2023, 62, 4028–4038. [Google Scholar] [CrossRef] [PubMed]
  50. Muthard, E.K.; Wickens, C.D. Change detection after preliminary flight decisions: Linking planning errors to biases in plan monitoring. Proc. Hum. Factors Ergon. Soc. Annu. Meeting 2002, 46, 91–95. [Google Scholar] [CrossRef]
  51. Arnott, D.; Gao, S. Behavioral economics in information systems research: Critical analysis and research strategies. J. Inf. Technol. 2022, 37, 80–117. [Google Scholar] [CrossRef]
  52. Kahle, J.; Pinsker, R.; Pennington, R. Belief revision in accounting: A literature review of the belief-adjustment model. In Advances in Accounting Behavioral Research; Emerald Group Publishing Limited: Leeds, UK, 2005; pp. 1–40. [Google Scholar] [CrossRef]
  53. Gonzalez, C.; Lerch, J.F.; Lebiere, C. Instance-based learning in dynamic decision making. Cogn. Sci. 2003, 27, 591–635. [Google Scholar] [CrossRef]
  54. Kaushik, M.; Kumar, M. An integrated approach of intuitionistic fuzzy fault tree and Bayesian network analysis applicable to risk analysis of ship mooring operations. Ocean Eng. 2023, 269, 113411. [Google Scholar] [CrossRef]
  55. Wickens, C. Attention: Theory, principles, models and applications. Int. J. Hum. Comput. Interact. 2021, 37, 403–417. [Google Scholar] [CrossRef]
  56. Zhang, X.; Hao, X.; Zhang, L.; Liu, L.; Zhang, S.; Ren, R. Multi-Autonomous Underwater Vehicle Full-Coverage Path-Planning Algorithm Based on Intuitive Fuzzy Decision-Making. J. Mar. Sci. Eng. 2024, 12, 1276. [Google Scholar] [CrossRef]
  57. You, Q.; Guo, J.; Zeng, S.; Che, H. A dynamic Bayesian network based reliability assessment method for short-term multi-round situation awareness considering round dependencies. Reliab. Eng. Syst. Safe 2024, 243, 109838. [Google Scholar] [CrossRef]
  58. Hogarth, R.M.; Villeval, M.C. Ambiguous incentives and the persistence of effort: Experimental evidence. J. Econ. Behav. Organ. 2014, 100, 1–19. [Google Scholar] [CrossRef]
  59. Tversky, A.; Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
  60. Wen, H.; Amin, M.T.; Khan, F.; Ahmed, S.; Imtiaz, S.; Pistikopoulos, S. A methodology to assess human-automated system conflict from safety perspective. Comput. Chem. Eng. 2022, 165, 107939. [Google Scholar] [CrossRef]
  61. Yu, Q.; Liu, K.; Yang, Z.; Wang, H.; Yang, Z. Geometrical risk evaluation of the collisions between ships and offshore installations using rule-based Bayesian reasoning. Reliab. Eng. Syst. Safe 2021, 210, 107474. [Google Scholar] [CrossRef]
  62. Liu, J.; Zhang, J.F.; Yan, X.P.; Soares, C. Multi-ship collision avoidance decision-making and coordination mechanism in Mixed Navigation Scenarios. Ocean Eng. 2022, 257, 111666. [Google Scholar] [CrossRef]
  63. Wang, Y.; Fu, S. Framework for process analysis of maritime accidents caused by the unsafe acts of seafarers: A case study of ship collision. J. Mar. Sci. Eng. 2022, 10, 1793. [Google Scholar] [CrossRef]
  64. Huang, Y.; Chen, L.; Negenborn, R.R.; Van Gelder, P. A ship collision avoidance system for human-machine cooperation during collision avoidance. Ocean Eng. 2020, 217, 107913. [Google Scholar] [CrossRef]
  65. Shaobo, W.; Yingjun, Z.; Lianbo, L. A collision avoidance decision-making system for autonomous ship based on modified velocity obstacle method. Ocean Eng. 2020, 215, 107910. [Google Scholar] [CrossRef]
  66. Huang, Y. Supporting Human-Machine Interaction in Ship Collision Avoidance Systems; Delft University of Technology: Delft, The Netherlands, 2019. [Google Scholar]
  67. Hu, Y.; Zhang, A.; Tian, W.; Zhang, J.; Hou, Z. Multi-ship collision avoidance decision-making based on collision risk index. J. Mar. Sci. Eng. 2020, 8, 640. [Google Scholar] [CrossRef]
  68. Maritime Safety Committee. Adoption of the Revised Performance Standards for Radar Equipment; IMO: London, UK, 2004. [Google Scholar]
  69. Cacciabue, P.C.; Decortis, F.; Drozdowicz, B.; Masson, M.; Nordvik, J. COSIMO: A cognitive simulation model of human decision making and behavior in accident management of complex plants. IEEE Trans. Syst. Man Cybern. 1992, 22, 1058–1074. [Google Scholar] [CrossRef]
  70. Namgung, H. Local route planning for collision avoidance of maritime autonomous surface ships in compliance with COLREGs rules. Sustainability 2021, 14, 198. [Google Scholar] [CrossRef]
  71. Namgung, H.; Kim, J. Collision risk inference system for maritime autonomous surface ships using COLREGs rules compliant collision avoidance. IEEE Access 2021, 9, 7823–7835. [Google Scholar] [CrossRef]
Figure 1. Human–autonomy interaction process.
Figure 2. Multi-round human–autonomy interaction process under risk scenario.
Figure 3. Framework of proposed method.
Figure 4. The logic of a takeover decision based on SA.
Figure 5. Subjective utility functions considering omission bias. The black curve and the dark green curve are the subjective utility functions of the autonomy decision and the takeover behavior, respectively.
Figure 6. Flowchart to model multi-round HAIP and assess its safety based on DBN.
Figure 7. A schematic diagram of DBN describing HAIP. Variables and arrows in the figure represent the decision process of autonomy and humans, and their meanings are explained in the following sub-sections.
Figure 8. The three stages and HAI rounds for autonomous ship collision avoidance.
Figure 9. The SAT information of ANS applying VO [65,66].
Figure 10. DBN structure describing HAIP of the case study. Nodes in the network represent the decision process of ANS and the operator and their meanings are given in Table 2.
Figure 11. Membership and non-membership function of SAT information correctness.
Figure 12. The HAIP safety in the case study.
Figure 13. Takeover probability with and without considering the biases.
Figure 14. HAIP safety and average takeover probability under different anchoring bias levels.
Figure 15. Perception probability of SAT information.
Figure 16. Comprehension about correctness of autonomy decision.
Figure 17. HAIP safety and average takeover probability under different omission bias levels.
Figure 18. Results of utility prediction under different omission bias levels.
Figure 19. Perception and comprehension under different omission bias levels. (a) Perception probability of SAT information; (b) comprehension about autonomy decision correctness.
Figure 20. The analysis of an encounter of two autonomous ships. (a) plots the safety without and with applying COLREGs; (b) plots the probability of risky movements without applying COLREGs.
Figure 21. The analysis of HAIP in which ANS applies SDVO + FIS-NC. (a) Safety without HAI, with biases, and without biases; (b) takeover probability in the three HAI rounds; (c,d) HAIP safety and average takeover probability under different anchoring bias levels and omission bias levels.
Table 1. The settings of a two-ship encounter situation.
| Ship | Position (n mile) | Course (°) | Velocity (kn) | Ship Length (m) | Ship Domain (m) |
|------|-------------------|------------|---------------|-----------------|-----------------|
| OS   | (0, 0)            | 0          | 11.1          | 171             | 1000            |
| TS   | (4.02, 6.55)      | 230.8      | 15.9          | 154             | 500             |
Table 2. Description of nodes in DBN.
| Node | Description | Node | Description |
|------|-------------|------|-------------|
| RB | Relative bearing | RD | Relative distance |
| VOS | Velocity of OS | COS | Course of OS |
| VTS | Velocity of TS | CTS | Course of TS |
| Sensor FI 1~6 | Input data bias of sensors | AD | ANS decision |
| RB/RD/VOS/COS/VTS/CTS + A | Input data detected by ANS | UD/SD + A | UD/SD from ANS |
| RB/RD/VOS/COS/VTS/CTS + B | Benchmark of SAT1 information | UD/SD + B | Benchmark of SAT2 information |
| RB/RD/VOS/COS/VTS/CTS + C | Correctness of input data | UD/SD + C | Correctness of UD/SD |
| d_cpa^ad/t_cpa^ad + A | d_cpa^ad/t_cpa^ad predicted by ANS | PE_r^i (r, i ∈ {1, 2, 3}) | Perception of each SAT level |
| d_cpa^ad/t_cpa^ad + B | Benchmark of SAT3 information | CO_r^i (r, i ∈ {1, 2, 3}) | Correctness of each SAT level |
| d_cpa^ad/t_cpa^ad + C | Correctness of d_cpa^ad/t_cpa^ad | CO_r (r ∈ {1, 2, 3}) | ANS decision correctness |
| PR_r (r ∈ {1, 2, 3}) | Utility prediction | D | Decision space |
| Omission bias | Existence of omission bias | Trust | Trust in ANS (initial anchor) |
| TOD_r (r ∈ {1, 2, 3}) | Takeover decision | Safety | Result of collision avoidance |
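The d_cpa^ad/t_cpa^ad nodes above are the standard distance and time of the closest point of approach (DCPA/TCPA) between the two ships. As a reference for these quantities, here is a minimal sketch of the conventional straight-line CPA computation; the function name and units are illustrative, not the paper's implementation (with positions in n miles and speeds in kn, tcpa comes out in hours).

```python
import math

def cpa(p_os, v_os, p_ts, v_ts):
    """DCPA/TCPA under straight-line motion: distance and time of the
    closest point of approach between own ship (OS) and target ship (TS)."""
    px, py = p_ts[0] - p_os[0], p_ts[1] - p_os[1]   # relative position
    vx, vy = v_ts[0] - v_os[0], v_ts[1] - v_os[1]   # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                                   # identical velocities
        return math.hypot(px, py), 0.0
    tcpa = -(px * vx + py * vy) / v2                # minimizer of |p + v*t|
    tcpa = max(tcpa, 0.0)                           # CPA already passed
    dcpa = math.hypot(px + vx * tcpa, py + vy * tcpa)
    return dcpa, tcpa
```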
Table 3. Bias distribution of input information [68].
| Parameter | Distribution | Parameter | Distribution |
|-----------|--------------|-----------|--------------|
| Sensor FI 1 (°) | N(0, 1.5^2) | Sensor FI 4 (°) | N(0, 2.5^2) |
| Sensor FI 2 (n mile) | N(0, 0.15^2) | Sensor FI 5 (kn) | N(0, 0.25^2) |
| Sensor FI 3 (kn) | N(0, 0.25^2) | Sensor FI 6 (°) | N(0, 2.5^2) |
Table 4. The saliency and value of each SAT level.
| SAT Information | Saliency (A_ri^S) | Value (A_ri^V): A_ri = b | Value (A_ri^V): A_ri = u |
|-----------------|-------------------|--------------------------|--------------------------|
| SAT1 | 3 | 0.5 | 4 |
| SAT2 | 4 | 0.5 | 4 |
| SAT3 | 4 | 0.5 | 4 |