Article

Employees’ Trust in Artificial Intelligence in Companies: The Case of Energy and Chemical Industries in Poland

1 Department of Organizational Behavior and Marketing, Faculty of Economic Sciences and Management, Nicolaus Copernicus University, 87-100 Toruń, Poland
2 Department of Econometrics and Statistics, Faculty of Economic Sciences and Management, Nicolaus Copernicus University, 87-100 Toruń, Poland
3 Department of Enterprise Management, Faculty of Economic Sciences and Management, Nicolaus Copernicus University, 87-100 Toruń, Poland
* Author to whom correspondence should be addressed.
Energies 2021, 14(7), 1942; https://doi.org/10.3390/en14071942
Submission received: 21 February 2021 / Revised: 27 March 2021 / Accepted: 29 March 2021 / Published: 1 April 2021
(This article belongs to the Special Issue Integrated Approaches for Enterprise Sustainability)

Abstract:
The use of artificial intelligence (AI) in companies is advancing rapidly. Consequently, multidisciplinary research on AI in business has developed dramatically during the last decade, moving from the focus on technological objectives towards an interest in human users’ perspective. In this article, we investigate the notion of employees’ trust in AI at the workplace (in the company), following a human-centered approach that considers AI integration in business from the employees’ perspective, taking into account the elements that facilitate human trust in AI. While employees’ trust in AI at the workplace seems critical, so far, few studies have systematically investigated its determinants. Therefore, this study is an attempt to fill the existing research gap. The research objective of the article is to examine links between employees’ trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). A quantitative study conducted on a sample of 428 employees from companies of the energy and chemical industries in Poland allowed the hypotheses to be verified. The hypotheses were tested using structural equation modeling (SEM). The results indicate the existence of a positive relationship between general trust in technology and employees’ trust in AI in the company as well as between intra-organizational trust and employees’ trust in AI in the company in the surveyed firms.

1. Introduction

When the concept of artificial intelligence (AI) emerged in the 1960s, it was not supposed to cover such a wide range of applications. Today, AI-based solutions can be found in almost all areas of human life—at home (smart home) [1,2], in the city (smart city) [3,4], at work [5], in education [6], in communication [7], transport [8], health care [9], or entertainment [10]. More and more advanced technologies and solutions using AI are emerging, with a growing impact on the lives of individuals and the functioning of societies as a whole [11,12]. Artificial intelligence is also increasingly important in business, where it has a growing range of applications (from relatively simple “chatbots” used in customer service to more complex analytical solutions based on deep learning) [13,14,15,16,17,18,19].
Artificial intelligence, as the most advanced form of technology development to date, is an object of interest for managers who see in its applications the possibility of increasing their companies’ competitive advantage. Put simply, AI is a system or a machine that mimics human intelligence in performing specific tasks and, in addition, can iteratively improve (learn) from the information it gathers [20,21]. Today’s businesses need to take advantage of the latest technology to grow and compete globally [22,23,24,25]. Effective and efficient implementation of AI solutions in companies requires, in addition to significant financial outlays [26,27], the trust of employees in their usability and functionality [28,29,30].
Because artificial intelligence is a very broad concept, encompassing various technologies and modern solutions that operate on the basis of diverse and often highly complicated algorithms, it is difficult to point to a single commonly accepted definition of AI. Definitions vary depending on the context in which the AI concept is used [21,31]. Regardless of these definitional differences, an important element of advanced technology that determines its categorization into the functional area of artificial intelligence is the ability of a system to make decisions or perform specific tasks with at least a partial representation of human intelligence, as well as the ability to learn and improve on the basis of the information collected.
The presence of artificial intelligence solutions in business has become a fact. The real impact on the functioning of many organizations and the related benefits from the use of AI systems are influencing the growing interest of companies in the latest technologies supporting their development. For most of them, this is only the beginning of the path of change, which will soon revolutionize business. Implementation of AI solutions in companies is not possible without the acceptance of their employees. As highlighted by the past research, employees’ adjustment to advanced technologies (such as AI) that leads to their acceptance and use is a key factor in translating the technological advances into business revenue [28,32,33]. Going further, employees’ trust in artificial intelligence seems to be one of the key factors determining the level of this acceptance, and thus influencing the scale and effectiveness of implementation and use of artificial intelligence solutions in companies. This trust becomes particularly important in the situation of uncertainty and the related need to reduce risk, and therefore in the conditions in which most organizations operate today, including those operating on the Polish market of chemical and energy industry companies.
The trust of employees in artificial intelligence is a special category of trust in the broadest sense. It is a complex, multifaceted and multidimensional variable [34]. In addition, it is latent and difficult to measure directly. As a result, it is difficult both to define and to measure, and it is also difficult to describe its relationships with other variables. A specific feature of this category of trust, however, is that the object of trust in this case is neither people nor the organizations that people create, but technology, i.e., artificial intelligence, which can be considered the most advanced form of technology development to date. Moreover, it refers to employees of companies [34], which means a significant narrowing of the group of entities (individuals) to which the term can be applied.
The existing research on trust in technology has mostly focused on automation, automated systems or e-commerce systems, e.g., [29,30,35,36,37,38]. In contrast, in the subject literature, the research on employees’ trust in AI in the company is scarce; thus, there is a need for more in-depth analysis in this area. In particular, there is a niche regarding research on the relations between employees’ trust in AI in the company and other variables, including those relating to the organizational system in which the employee works and the employee’s individual characteristics. This article is therefore an attempt to fill the existing research gap in this area, the subject of our interest being a specific category of this trust, namely employees’ trust in AI in the company. Given the pioneering character of this research, we see it as contributing substantially to knowledge in the field as well as to the practice of contemporary companies dependent on advanced technologies, and especially on AI solutions.
The aim of the article is to examine links between employees’ trust in AI in the company and three other latent variables proposed by us (i.e., general trust in technology, intra-organizational trust, individual competence trust). The empirical study was conducted based on a sample of 428 employees from companies of the energy and chemical sectors in Poland. During the research process, we formulated three hypotheses which were tested using structural equation modeling (SEM).
The article consists of two parts—theoretical and empirical. In the first part of the paper, we review the literature on trust. We are particularly interested in issues concerning employee trust in the organization in the context of AI implementation. Our considerations also include the above-mentioned categories of trust, which can influence employees’ trust in AI (general trust in technology, intra-organizational trust, and individual competence trust). These considerations are the starting point for the formulation of research hypotheses. Next, we present the research methodology and discuss in more detail the analysis methods used (SEM models). In the next part of the paper, we present the research results and discuss them in the context of the previously presented literature. In the final part, we formulate the main conclusions drawn from our research; we present the contribution and limitations of the study as well as possible future research directions.

2. Theoretical Background and Hypothesis Development

2.1. Nature of Trust

Trust is a complex, multifaceted and multidimensional category [34,39,40,41]. As a result, it is difficult to define the concept of trust in an unambiguous and commonly accepted way. Trust as the basis of social relations is an object of interest for representatives of various disciplines of social sciences, including psychology, sociology, political sciences, economics, and management sciences [42,43,44,45]. Trust is also the subject of interest for humanists—primarily of philosophers and ethicists [46,47]. While the concept of trust is understood and defined in many different ways, it can be assumed that trust is some kind of belief (and sometimes even certainty) that the party has regarding the future behavior or states of the trust object [48,49,50]. Trust is often considered a personality trait, and therefore it can be considered an interpersonal variable [39,51,52,53,54]. The objects of trust are not only people but also institutions and companies that are created by people.
More and more frequently, the category of trust is also considered in the context of technological development, and thus refers not only to the social or institutional system but also to technology [54,55,56,57,58,59]. This is due to the gradual transformation, at least in part, of existing relationships between people into human–technology relationships. This transformation is the result of the extremely dynamic pace of technology development, which has diffused and penetrated into almost every sphere of human life [11]. One such sphere, in which the human–human relationship is gradually being displaced in favor of the human–technology relationship, is professional activity. Many employees in their workplaces today encounter a situation where their partners in the achievement of tasks are not only other people but also intelligent solutions from the area of advanced technology.

2.2. Trust in the Context of Implementing AI in a Company

The presence of artificial intelligence solutions in business is a fact. Artificial intelligence can take over many activities from employees, and this is a great advantage for entrepreneurs looking to implement AI solutions in their own companies. The main positive effects of using artificial intelligence in a company include cost savings (e.g., by reducing employment or through intelligent production quality control), increasing the effectiveness of business processes (e.g., by improving accounting or facilitating employee recruitment), and eliminating so-called human error in the performance of specific tasks [60,61].
The positive effects of the implementation of AI are also considered in the literature from the perspective of company employees. It is noted, for example, that employees, thanks to artificial intelligence, are able to perform tasks that were previously too complicated or dangerous, and this grants people easier access to information and significant time savings [60].
On the other hand, however, the implementation of AI solutions may also lead to negative effects/phenomena, either at the level of the entire company or at the individual level (employee). Doubts that arise in connection with the implementation of AI systems in business may concern, among others, lack of technological readiness of the company to implement such solutions or loss of competitive advantage as a result of faster implementation of artificial intelligence by competitive companies. This may be accompanied by the employees’ doubts about the effects of implementing such solutions (mainly the fear of losing their current jobs), their reluctance to change and even resistance to them [62,63].
The perceived benefits and risks associated with the implementation of AI in companies may therefore vary depending on who makes the assessment. What constitutes an advantage for the employer (e.g., reduction of labor costs) may be perceived by employees as a real threat related to the loss of job (replacement of the employee by artificial intelligence solutions). Reducing these concerns will only be possible if the artificial intelligence is developed and implemented in companies in a way that allows employees to gain trust. It seems that implementation of AI solutions in companies is not possible without the acceptance of their employees. In this context, it can be assumed that employees’ trust in AI in the company is one of the key factors determining the level of this acceptance and thus influencing the scale and effectiveness of implementation and use of AI solutions in companies. This trust becomes particularly important in the situation of uncertainty and the related need to reduce risk, and therefore under the conditions in which most organizations operate today, including chemical and energy industry companies operating on the Polish market.
Taking into account the above, the article attempts to describe the trust under consideration, as well as to analyze its relations with three other variables, described in further parts. The proposal to describe them is based on our literature review [54,58,64,65,66,67,68,69,70] and interviews with experts (see Section 3.2 for more details).

2.3. Employees’ Trust in Artificial Intelligence in the Company (TrAICom)

Our understanding of employees’ trust in AI in the company is derived from the definition of trust in technology. We assume that artificial intelligence is the most advanced form of technology development to date. As it is defined in the literature, trust in technology manifests itself in people’s readiness to be influenced by technology, resulting from its usefulness, predictability of its effects and credibility of its suppliers [54,55,58,59,67,71]. The concept of trust in technology (and thus in AI) refers to the belief that the other side of the relationship, i.e., technology (and in our case, artificial intelligence) will work in a functional, helpful and reliable way, providing positive results [54]. The functionality reflects the expectation that the technology is capable of performing the intended task. Helpfulness includes the adequacy and responsiveness of the help function built into the technology. Reliability refers to the expectation that a given technology will work in a predictable and consistent manner. Similarly, according to Hardré [72], trust in technology is “the degree to which people believe in the veracity or effectiveness of a tool or system to do what it was created for and is purported to do”.
The existing studies support the notion that trust in technology is based on the interpersonal trust concept. In many definitions, trust in technology reflects beliefs about the favorable attributes of a specific technology. Just like in interpersonal trust, research on trust is founded on the perceived qualities of the trustee’s trustworthiness [39,55]. Researchers have used these approaches mainly because people tend to anthropomorphize technologies and attribute human motivation or human qualities to them [73,74,75].
Employees’ trust in technology in the company—and hence employees’ trust in AI in the company being the main subject of our studies—does not develop in a vacuum. Instead, it evolves in a complexity of contexts within a company [35]. Therefore, while analyzing employees’ trust in AI in the company, we consider the following contexts: general trust in technology, the characteristics of the organizational support for trust in technology (intra-organizational trust) as well as the context defined by the characteristics of an individual employee being the user of particular AI solutions (individual competence trust).

2.4. General Trust in Technology (GenTrTech)

In the context of describing employees’ trust in AI, one of the variables that we are interested in is general trust in technology. Because technology lacks volition and moral agency, trust in technology reflects beliefs about a technology’s characteristics rather than its motives or will [53]. According to the relevant literature, general trust in technology refers to people’s (among them employees’) assessment of whether, in their opinion, the suppliers of technology have the knowledge and resources necessary to implement particular solutions [64,67,71]. Furthermore, as noted by McKnight et al. [53], general trust in technology refers to people’s perception of whether the solutions in the field of technology are consistent, reliable, functional, and provide the help needed.
General trust in technology is closely related to the issue of the ethical governance of new technologies. Ethical governance in terms of technology means transparency of process as well as transparency of product itself [76,77]. In particular, ethical issues in regard to technology involve the following: identification of potential harms, providing guidelines on safe design, creating measures protecting the safety of new technology or privacy concerns [58,78]. Another aspect referring to ethical governance of technology and its impact on general trust in technology is ensuring the confidentiality of data and information provided by the technology user. Therefore, creating data privacy policies and procedures that enhance user’s trust in technology nowadays seems to be particularly significant for creating general trust in technology [77,78].
Given the context of our research, general trust in technology seems to be a variable of high importance, as we follow McKnight et al. [53], who claim that people’s general trusting beliefs regarding the attributes of technology influence individual decisions to use technology as well as individual technology acceptance and post-adoption behavior. Some research provides an interesting point of view, arguing that general trust in technology has the nature of confidence, meaning that a technology user deliberately entrusts himself or herself to the technology [57,79]. Taking such a point of view, Kiran and Verbeek [79] argue that rather than being perceived as risky or useful, technology in general is approached by its users as trustworthy. This perspective relates to evidence, mentioned by several authors, that people’s technology knowledge highly impacts the adoption as well as the acceptance of technology [80,81].
Taking the above into account, we assume that people’s general trust in technology is one of the key factors shaping the trust in AI in the company felt by an employee working with a particular AI solution. Therefore, we hypothesize the following:
Hypothesis H1. 
Employees’ general trust in technology has a positive impact on their trust in AI in the company.

2.5. Intra-Organizational Trust (InOrgTr)

Intra-organizational trust is a special kind of trust, distinguished by its positive influence on phenomena and processes taking place in the organization. It concerns relations between employees (horizontal intra-organizational trust) and relations between employees and superiors (vertical intra-organizational trust) [82,83,84]. The concept of intra-organizational trust also includes a category that concerns employee trust in the organization as a whole (institutional trust) [82,85,86]. Intra-organizational trust depends on building relations in the organization based on positive expectations regarding the behavior and intentions of the parties (subordinates, superiors, colleagues, the organization as a whole). These relationships manifest themselves, among others, in mutual kindness, credibility, honesty of the parties to the relationship and willingness to provide support and assistance [87].
Numerous authors stress the role and importance of intra-organizational trust [88,89,90]. They also point out the important relationship between intra-organizational trust and employee involvement in their work [91,92,93] and their job satisfaction [94,95,96].
Intra-organizational trust (created by both interpersonal and institutional components) is an extremely important element of support for the success of strategic changes introduced in companies [91,94]. Such changes certainly include the implementation of advanced technology solutions, including artificial intelligence. A high level of intra-organizational trust reduces employees’ fear and uncertainty about the future and fosters a positive climate of change and acceptance of novelties [97,98]. It is also important to note that a positive effect of a high level of intra-organizational trust is that the most talented employees, who serve as authorities for others, can be retained in the organization [99,100]. Their strong motivation to work, willingness to learn, and openness to change are conducive to implementing strategic solutions. Mutual trust of employees in each other and in their superiors fosters the building of positive relations within the team. Thanks to this trust, the organization can freely exchange information and share knowledge [101,102,103]. The positive impact of intra-organizational trust is therefore visible in the effects of both individual employees and entire teams [104,105,106,107], which, in turn, translates into the effectiveness and competitiveness of the organization as a whole [108,109].
Taking into account the above-described importance of intra-organizational trust in the processes of introducing strategic changes in companies, which, as we assume, include the introduction of AI solutions, in our study we propose the following hypothesis:
Hypothesis H2. 
Intra-organizational trust has a positive impact on employees’ trust in AI in the company.

2.6. Individual Competence Trust (IndComTr)

Because trust is a complex process, there is a variety of factors determining the extent to which humans trust in particular objects of trust. Even if the object of trust is not a person or a social group, the subject that tends to trust these objects is always a person. Thus, it can be expected that trust is a variable closely related to many individual characteristics that can be used to describe a person. Studies on trust (as a general construct) in this respect refer, among others, to such individual traits as a person’s propensity to trust [110] and a person’s specific history of interactions [52,111].
Similar research is conducted in relation to trust in technology or similar constructions, i.e., trust in differently defined objects related to technological development. In this area, researchers attempt to identify and describe influential variables which refer to individual context and guide formation of such kind of trust or are associated with such kind of trust. The relatively abundant literature, supported by the results of the research, mainly concerns factors relating to the category of “trust in automation” or “trust in automated systems”. The variables considered to influence the formation of such a kind of trust include age [112,113,114]; gender [115]; personality traits, such as extroversion and introversion [116] or intuitive and sensing personality [117]; individual’s emotional state/mood [118]; self-confidence [35,119]; person’s past or current experience with the same or similar object of trust [38,56]; pre-existing attitudes and expectations towards an object of trust [37,120]; and knowledge about the purpose of an automated system or how it functions [38].
Introducing new technological solutions in companies, especially in the area of advanced technology, often requires the employee to acquire new knowledge and qualifications. Furthermore, it can be a difficult challenge, requiring the employee to quickly adapt to changes that he or she does not accept or understand, and to cope with the related stress. Introducing a new technology can disrupt employees’ current patterns of behavior. Such changes may involve the modification of employees’ job responsibilities, an added workload, and additional training. Bringing a new technology into a company can be particularly intimidating for employees who are content doing their work as they have always done it, or for employees who possess specific skills and abilities which are no longer needed to the same extent as before and who are simultaneously unable to quickly develop new skills. For such employees, technology changes in a company may be seen as a threat to their positions and a factor that undermines their job competence. They may create feelings of uncertainty, and such uncertainty can heighten employees’ resistance to accepting the changes [121,122].
Taking the above into account, in our study we assume that an important role in shaping employees’ trust in AI in the company may be played by their trust in their own competences, resulting from such features as confidence in job-related knowledge; openness to the need to acquire new job-related knowledge; openness to challenges in the workplace that exceed their existing skills; ability to quickly adapt their behavior to changing situations, including the ability to cope with stressful situations; and acceptance of risk-taking changes together with the ability to convince others. In our study, we refer to this complex latent variable as “individual competence trust”. We assume that the higher the value of the factors that make up this construct, the higher its level will be, and this, in turn, will have a positive impact on the employee’s trust in artificial intelligence solutions present in their workplace. Therefore, we propose the following hypothesis:
Hypothesis H3. 
Employees’ individual competence trust positively impacts their trust in AI in the company.

3. Methods

3.1. Method and Participants

Data used to verify the proposed hypotheses were collected with the use of a self-completion questionnaire. The survey was conducted between February and April 2020. Both the selection of companies and the selection of individual respondents were purposive and resulted directly from the research aim. In the case of companies, the key selection criterion was their size, determined by the number of employees (large companies only). It could be assumed that large companies, acting under conditions of strong competition, have developed R&D departments and/or use advanced technology (including AI systems), and thus their employees, thanks to contact with AI solutions, have had the opportunity to form their own opinions about them. The selection of respondents in each of the companies was made with the support of people employed in these entities. The respondents were people who, as part of their professional duties, have contact (direct or indirect) with high-tech solutions, including artificial intelligence. In the conducted survey, data were obtained from 792 persons meeting the described selection criterion [123]. The survey was carried out in large industrial enterprises operating in Poland. The sample included companies operating in the food, electrical-machinery, cement, fuel and energy, light, lumber, and mineral industries.
In this article, we present the results obtained from a part of the examined sample, i.e., from employees of the chemical and energy industries. The total number of respondents was 428. These were employees from various departments of the surveyed companies—research and development departments were the most represented in the sample (27.1%), followed by marketing and sales, production, technical, finance and accounting, administration, procurement and logistics, IT, and other departments. These individuals were employed in a variety of positions, the vast majority in non-managerial ones (67.8%), and they had different seniority within the company, with a prevalence of people with seniority ranging from 6 to 15 years (29.7%) and an almost equal share of people in the ranges of up to 5 years (22.2%) and 16–25 years (21.7%). The majority of respondents were 41–50 years old (39.5%), and the share of women (48.4%) was almost identical to that of men (50%), with 1.6% declining to answer.

3.2. Variables and Measures

All four variables included in the proposed hypotheses are theoretical constructs: latent variables whose realizations in a given sample are not directly observable and must be inferred from a set of observable indicator variables. For this purpose, a set of statements was proposed for each variable in the survey questionnaire. In the course of the measurement, respondents were asked to respond to these statements by selecting a specific response category on a scale ranging from 0 to 10, where 0 meant “I completely disagree” and 10 meant “I completely agree”. Due to the original nature of the proposed hypotheses, it was not possible to find ready-made measurement scales that could be used to measure our four latent variables. Hence, when proposing particular statements, we decided to use a mixed approach consisting of two steps. The first step included the adaptation of ready-made scales that have already been used by other researchers for the measurement of similar variables in similar research:
  • The starting point for the construction of the statements attributed to the variable “Employees’ trust in artificial intelligence in the company” (TrAICom) was a measurement scale proposed by researchers from the New York State University of Buffalo, who originally used it to measure trust in automated systems [65].
  • The starting point for the construction of the statements attributed to the “General trust in technology” (GenTrTech) variable were the measuring scales proposed by Ganesan [64], Seppänen et al. [67], McKnight et al. [54] and Ejdys [58].
  • The starting point for the construction of statements assigned to the “Intra-organizational trust” (InOrgTr) variable were the measurement scales proposed by Hacker and Willard [66] and Ellonen, Blomqvist and Puumalainen [68].
  • The starting point for the construction of the statements attributed to the “Individual competence trust” (IndComTr) variable were primarily the solutions proposed by Jurek and Olech in a publication published by the Polish Ministry of Labour and Social Policy [70]. Additional support in this respect was provided by Zeffane’s publication [69].
The second step of our approach was aimed at supplementing the scales adapted from the literature on the basis of opinions obtained from experts during the pilot study.

3.3. The Analysis Method Applied

Data obtained from the study were analyzed using Structural Equation Modeling (SEM) [124,125,126,127,128,129]. SEM is a statistical approach to testing hypotheses about the relationships between observed and unobservable variables [128], derived from two main techniques: Confirmatory Factor Analysis (CFA) [124] and multiple regression and path analysis [130]. SEM allows elaborate theoretical models to be tested, taking into account different relationships among variables and analyzing both direct and indirect effects [126].
The SEM structure consists of a model describing the relationships between latent variables, called the latent variable model [124]:
η = Bη + Γξ + ζ,
and a model for the measurement of exogenous and endogenous unobservable variables, referred to as the external model (measurement model):
x = Λ_x ξ + δ,
y = Λ_y η + ε,
where:
η (m × 1), ξ (k × 1)—vectors of latent endogenous and exogenous variables,
B (m × m), Γ (m × k)—coefficient matrices for the latent endogenous and exogenous variables,
ζ (m × 1)—vector of latent errors in the equations,
x (q × 1), y (p × 1)—observed indicators of the latent exogenous and endogenous variables, respectively,
Λ_x (q × k), Λ_y (p × m)—coefficient (loading) matrices for the observed indicators,
δ (q × 1), ε (p × 1)—measurement errors of the observed indicators.
The estimation of CFA and SEM model parameters amounts to finding the parameter values for which the model-implied covariance matrix reproduces the observed covariance matrix as closely as possible [126]. The most commonly used estimators are ML—Maximum Likelihood, GLS—Generalized Least Squares, ULS—Unweighted Least Squares, and WLS—Weighted Least Squares [124,126,131].
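The idea of reproducing the observed covariance matrix can be illustrated with a minimal sketch. For one latent variable measured by four standardized indicators, the model-implied covariance matrix is Σ = ΛΦΛ′ + Θ; with the latent variance fixed at 1 and uncorrelated errors Θ = diag(1 − λ²), the implied covariance of two indicators is simply the product of their loadings. The loadings below are hypothetical, chosen only for illustration:

```python
# Illustrative one-factor CFA: four standardized indicators of one latent
# variable (e.g., TrAICom) with hypothetical loadings.
lam = [0.85, 0.82, 0.80, 0.78]

# Model-implied covariance Sigma = Lambda Phi Lambda' + Theta, with the
# latent variance fixed at 1 and error variances 1 - lambda_i^2, so the
# standardized implied variances on the diagonal equal 1 exactly.
sigma = [[lam[i] * lam[j] if i != j else 1.0
          for j in range(len(lam))] for i in range(len(lam))]

print(sigma[0][1])  # off-diagonal entry = 0.85 * 0.82, i.e. ~0.697
```

Estimation then searches for the loadings (and, in a full SEM, the structural coefficients) that make this implied matrix as close as possible to the sample covariance matrix.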
Evaluating the obtained model is an ambiguous procedure with many variants [132]. Therefore, model fit should be assessed on the basis of several measures simultaneously. Many different fit indices are proposed in the literature, see, for instance, [126,131,133]. For statistical inference, the only formal test available is the χ² test, a traditional measure of the overall fit of the model that assesses the magnitude of the discrepancy between the observed covariance matrix and the one implied by the model. The remaining model-fit measures are descriptive and can be divided into goodness-of-fit indices (e.g., CFI—Comparative Fit Index, TLI—Tucker–Lewis Index, GFI—Goodness of Fit Index) and residual-based measures (RMSEA—Root Mean Square Error of Approximation, SRMR—Standardized Root Mean Square Residual). There is no consensus in the literature on the recommended values of individual measures. On the basis of the available lists of recommendations, e.g., [124,125,128,129,131,134,135,136,137], only minimum acceptable values can be indicated: CFI, TLI, and GFI should be greater than 0.9, while RMSEA and SRMR should be less than 0.1.
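These thresholds are straightforward to operationalize. As a sketch, the RMSEA point estimate can be computed directly from the χ² statistic via the standard formula; the χ² value, degrees of freedom, and sample size below are hypothetical, chosen only so that the relative χ² equals 1.6:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1))),
    where n is the sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def acceptable_fit(cfi, tli, gfi, rmsea_val, srmr):
    """Minimum thresholds: CFI, TLI, GFI > 0.9; RMSEA, SRMR < 0.1."""
    return cfi > 0.9 and tli > 0.9 and gfi > 0.9 and rmsea_val < 0.1 and srmr < 0.1

# Hypothetical values: chi2 = 160 on df = 100 with n = 290 observations
# gives a relative chi-square of 160 / 100 = 1.6 (< 2).
print(round(rmsea(160, 100, 290), 4))  # 0.0456
```

The same helper can be reused to screen any candidate model against the minimum acceptable values listed above.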

4. Results

The structural equation model was developed in two stages. In the first stage, a confirmatory factor analysis was carried out, and then SEM models were built. The CFA model allowed us to determine how the latent variables are identified and explained by the observable variables (items). The structural models, in turn, allowed us to determine the relationships between the latent variables.
Our proposed items for the particular constructs are presented in Appendix A. They were the starting point for building the CFA model. From all of the collected observations (428), those containing missing data (32) or multivariate outliers (106) were removed; the Mahalanobis distance was used to identify the outliers [129,137]. The collinearity and distributional normality of the analyzed indicators were then examined. The indicators turned out to exhibit low or moderate collinearity and not to meet the assumption of multivariate normality. Therefore, at the next stage of the analysis, the CFA and SEM models were estimated using Robust Maximum Likelihood (RML) with the Satorra–Bentler correction to the traditional test statistics and standard errors [138]. Model parameters and the stochastic structure were estimated using the R software.
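The Mahalanobis-distance screening step can be sketched as follows (a minimal illustration, not the authors' exact procedure): each observation's squared distance from the sample centroid is compared with a χ² critical value for the number of variables, here the tabulated value 13.816 for 2 variables at α = 0.001. The planted outlier and cutoff are illustrative:

```python
import numpy as np

def mahalanobis_sq(X: np.ndarray) -> np.ndarray:
    """Squared Mahalanobis distance of each row of X from the sample centroid."""
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # d_i^2 = diff_i' * cov_inv * diff_i, computed row-wise
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

def flag_outliers(X: np.ndarray, critical: float) -> np.ndarray:
    """Boolean mask of multivariate outliers: D^2 above the chi-square
    critical value for the number of variables (from tables, e.g. 13.816
    for 2 variables at alpha = 0.001)."""
    return mahalanobis_sq(X) > critical

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
X[0] = [8.0, -8.0]                      # one planted gross outlier
mask = flag_outliers(X, critical=13.816)
print(bool(mask[0]))                    # True: the planted point is flagged
```

Rows for which the mask is true would be dropped before refitting the model, mirroring the removal of the 106 outlying observations reported above.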
The applied research procedure assumes that the measurement model (CFA) should be correct in terms of the reliability, consistency, and validity of its measures. Meeting these conditions required the number of items per latent variable to be modified: for each latent variable (TrAICom, GenTrTech, InOrgTr, IndComTr), two items were removed. As a result, each variable consists of four items (see Table 1). The results for construct validity and the CFA fit measures are shown in Table 2. Additionally, in Table 2 the correlation coefficients between the latent variables are shown below the main diagonal.
The reliability and validity of the theoretical constructs were assessed using the following measures: Cronbach’s alpha, Composite Reliability (CR), Average Variance Extracted (AVE), and the correlation coefficients. For all constructs, Cronbach’s alpha is above 0.88 and CR is above 0.92, which indicates high reliability and internal consistency of the items included in each of the proposed constructs. At the same time, the AVE values are smaller than CR and exceed 0.5, which means that the items assigned to a given construct are well related to the other items of the same construct (convergent validity). The factor loadings, which determine the direct effects of the latent variable on the items, are statistically significant and indicate a good fit of the model elements. The standardized factor loadings exceed 0.78 (see Figure 1) and the Fornell–Larcker criterion is met, see, for example, [129,139].
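For reference, CR, AVE, and the Fornell–Larcker check can all be computed directly from the standardized loadings. The loadings and correlation below are hypothetical values, chosen only to be in the range reported for the model:

```python
def composite_reliability(loadings):
    """Composite reliability from standardized loadings, assuming
    uncorrelated errors: CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def fornell_larcker(ave_a, ave_b, corr_ab):
    """Discriminant validity holds when sqrt(AVE) of each construct
    exceeds its correlation with the other construct."""
    return ave_a ** 0.5 > abs(corr_ab) and ave_b ** 0.5 > abs(corr_ab)

# Hypothetical loadings for one four-item construct.
lam = [0.88, 0.87, 0.86, 0.85]
print(round(composite_reliability(lam), 3))       # 0.922
print(round(average_variance_extracted(lam), 3))  # 0.748
```

With AVE around 0.75 for two constructs, the Fornell–Larcker criterion would tolerate inter-construct correlations up to about 0.86, consistent with the reported discriminant validity.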
The fit of the measurement model was evaluated by means of the χ² test, goodness-of-fit indices, and residual-based measures. While the result of the χ² test is not satisfactory (the null hypothesis should be rejected), the relative χ² (χ²/df = 1.6 < 2) is satisfactory. The values of CFI, TLI, and GFI are greater than 0.93, and RMSEA and SRMR are smaller than 0.05; all of these measures take acceptable values. The results obtained indicate that the latent variables are well explained by the selected items. Therefore, they can be used to verify the research hypotheses (H1, H2, H3) formulated by us.
The simultaneous impact of all three latent variables (GenTrTech, InOrgTr, IndComTr) on employees’ trust in AI in the company (TrAICom) was examined by applying SEM. The results of the model describing such a relationship are presented in Figure 2 and Table 3.
The fit of the SEM model to the empirical data is satisfactory: the relative χ² is less than 2; the CFI, TLI, and GFI measures are greater than 0.93; and a very good level of fit was achieved for both RMSEA and SRMR (both less than 0.05).

5. Discussion

The research results show that among all constructs used to identify those that affect employees’ trust in AI in the company (TrAICom), the construct with the highest impact strength is general trust in technology (GenTrTech) (β = 0.639, p-value = 0.000). The obtained results regarding the GenTrTech variable indicate that there are no grounds to reject hypothesis H1, according to which employees’ general trust in technology has a positive impact on their trust in AI in the company. The study confirmed that this impact is positive, which means that as employees’ overall technological confidence increases, so does their trust in AI in the company (TrAICom).
The impact of general trust in technology on employees’ trust in the AI they use at their workplace was expected, and our findings align with the evidence found in the relevant literature. The cause-and-effect relationship between general trust in technology (GenTrTech) and employees’ trust in AI in the company (TrAICom) reflects the fact, pointed out by several researchers, that people’s general trusting beliefs regarding the qualities of a technology have an impact on individual technology acceptance and adoption behavior [28,33,53]. In line with prior research, general trust in technology is of a cognitive and confidence-based nature, which generally means that it relies on people’s rational thinking grounded in their general knowledge about technology and its attributes, prior experience, propensity to trust in technology, and self-confidence [33,57,79,80,81]. Thus, general trust in technology is explained by the users’ willingness to take factual information or advice and act on it, as well as by their perception of the technology as helpful, competent, or useful [33]. As highlighted by some researchers, in contrast to the low trust that exists initially between unfamiliar humans, new technologies may produce optimistic beliefs regarding their abilities and functionality at the very moment they are introduced to the market [33,140]. Moreover, an individual’s propensity to trust, combined with the user’s confidence in his or her own ability to use technology, has been recognized as having a positive impact on trusting behavior in novel situations, such as new AI solutions applied in the workplace [53,55,79,141]. Given the above dependencies as well as our study results, it is of increasing importance to develop a climate of trust in technology in daily life at the workplace. In modern work environments, AI solutions are used frequently, and sometimes they even become employees’ daily companions.
That is why, while moving toward the new reality of Industry 4.0, companies should pay considerable attention to providing employees with knowledge about new technologies (e.g., through training) as well as technical and organizational support in order to enhance their general trust in technology, which in turn increases their trust in AI in the company they work for [28]. Furthermore, some authors focus on the need to intensify activities related to the transparency of new technologies, which seems to be an imperative for business in the near future [71,77,142]. This appears particularly important because an increase in employees’ trust in AI in the company can bring several benefits, such as higher job performance, which improves job safety and companies’ efficiency while reducing errors [28].
The results of the study indicate that another factor (construct) with a significant and positive impact on employees’ trust in AI in the company (TrAICom) is intra-organizational trust (InOrgTr) (β = 0.216, p-value = 0.000). The results concerning the InOrgTr variable indicate that there are no grounds for rejecting hypothesis H2, according to which intra-organizational trust has a positive effect on employees’ trust in AI in the company. The strength of this factor, however, is markedly smaller than that of the general trust in technology discussed above.
It was to be expected that intra-organizational trust would have a significant and positive impact on employees’ trust in AI in the company, as such trust is an important factor supporting strategic changes in an organization, and these changes include the implementation of technological solutions using artificial intelligence. One aspect of intra-organizational trust consists of activities that support employees at every stage of their development, especially when it is necessary to assimilate new knowledge, skills, and competences. Intra-organizational trust significantly reduces employees’ fear and uncertainty about newly introduced solutions. It makes them feel that, in the case of any problems with performing their duties, they will receive appropriate support, e.g., in the form of training [88,143]. Moreover, the mutual trust of employees in each other, their superiors, and the organization in which they work fosters the building of positive relations in employee teams, which significantly improves the exchange of information and knowledge sharing within the organization [101,103,144,145]. In the context of implementing AI solutions, it is also important that a high level of intra-organizational trust is conducive to retaining the best employees in the organization, who, on the one hand, provide substantive support for others and, on the other hand, function as authorities [99,100]. Their strong motivation to work, commitment, and openness to change promote the implementation of strategic solutions.
Simultaneously, the results of the study indicate that individual competence trust (IndComTr) is a statistically insignificant factor (β = 0.056, p-value = 0.157) in terms of its impact on employees’ trust in AI in the company (TrAICom). This result makes it necessary to reject hypothesis H3, which assumed that the individual competence trust of employees positively impacts employees’ trust in AI in the company.
The obtained result may be surprising because H3 was proposed taking into account the results of research conducted on similar trust categories (“trust in technology”, “trust in automation”, or “trust in automated systems”), in which numerous influential variables are identified that refer to a person’s individual context and guide the formation of such trust. Among them, there are also variables that appear to be closely related to the items we assigned to the IndComTr construct, for example “self-confidence”, which is discussed in the context of similar categories of trust by, for instance, Case, Sinclair, and Rani [119] and Lee and See [35]. On the other hand, it is worth noting that in our study, the starting point for the construction of the statements assigned to the IndComTr variable were primarily the solutions proposed by Jurek and Olech [70] in the self-assessment questionnaire of competences in the personal and organizational area, which is part of the so-called IE-TC Catalogue of Competent Action. In our opinion, it was possible to relate them to the IndComTr construct we examined, but it should be remembered that they were created for completely different purposes (the measuring scale proposed by those authors was not intended to measure the variable that we define in our study as “individual competence trust”). Moreover, we modified the solutions proposed by Jurek and Olech (both the number of items and their content) based on interviews with experts conducted during the pilot study. All these facts could have reduced the expected validity of the scale we proposed.
Another explanation for the nonconfirmation of hypothesis H3 may be that the competences indicated in the proposed items do not necessarily translate directly into the shaping of employees’ trust in AI in the companies under study, or at least they did not do so for the specific group of employees who constituted our research sample. These were employees from production companies operating in only two industries (energy and chemical), who additionally had direct or indirect contact with advanced technology solutions, including artificial intelligence, and therefore had relatively high and relatively uniform competences related to the use of AI in the workplace. The obtained result may also suggest that there are other individual characteristics of employees, not taken into account in our study, which may have a more decisive influence on the formation of employees’ trust in AI in the company.
The above-mentioned possible explanations for the nonconfirmation of H3 also point to interesting directions for further research. It is worth considering, among other things, conducting analogous surveys among employees with more diverse competences in the area of using AI solutions at their workplaces, or among employees of companies operating in other industries/sectors of the economy. The latter postulate was partly realized by the authors, because the research project referred to in this article also covered employees working in companies operating in the food, electrical-machinery, cement, fuel and energy, light, lumber, and mineral industries. The authors are currently analyzing the data obtained from this part of the sample, and the results of this analytical work will be presented in subsequent publications. If H3 is not confirmed in the postulated future research either, the natural next step seems to be the modification and validation of the proposed scale for measuring “individual competence trust”, or the search for other individual characteristics of employees that correlate positively with the “employees’ trust in AI in the company” construct.

6. Conclusions

The aim of the article was to examine links between employees’ trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). The conducted analysis allowed us to verify the hypotheses that have been formulated in the research process. The developed structural equation model shows the existence of a positive relationship between general trust in technology and employees’ trust in AI in the company as well as between intra-organizational trust and employees’ trust in AI in the company in the surveyed firms.
Given the growing use of AI in business as well as companies’ dependence on employees’ interactions with advanced technologies, among them AI, it is necessary to understand the factors fostering employees’ trust in the AI used in their companies. The present research provides one of the first empirical explorations and validations of key variables for employees’ trust in AI at the workplace. Therefore, we perceive it as contributing to theory and as having important managerial implications. The article contributes to the trust literature by adding to the existing debate on employees’ trust in AI. Specifically, the findings contribute to a better understanding of human–AI collaboration and dynamics as well as the nature of employees’ trust in AI in companies of the energy and chemical sectors and its antecedents.
Moreover, this study contributes to practice in three ways. First, the findings may have implications for managers responsible for implementing advanced technology solutions, including AI, by providing them with guidelines on how to build employees’ trust in this area. This is of high importance because nowadays there is little doubt that the trust employees develop in AI will be central to determining its role in companies moving forward. Second, this knowledge may be of great importance for producers and suppliers of AI solutions because, according to the findings, one of the variables influencing employees’ trust in AI in the company is general trust in technology, which refers to people’s assessment of whether the suppliers of technology have the knowledge and resources necessary to implement these solutions. Third, considering that the governments of most countries treat artificial intelligence as a future main driver of economic growth and job creation, knowledge of the factors building employees’ trust in AI may be invaluable for public institutions involved in supporting commercial pilot projects as well as research and development projects in the field of advanced technology and/or artificial intelligence. This study also raises questions and may open up new avenues for further research on employees’ trust in AI in the company.
Nevertheless, we are aware that our research is not free from limitations. Due to the exploratory nature of the study and the nonrandom sample selection, the results obtained cannot be treated as representative of all employees in the chemical and energy industry companies operating in Poland. However, they may be helpful in the operationalization of the considered latent variables as well as in determining the directions of subsequent research steps conducive to their measurement.
Providing recommendations for further research is an important outcome of any research study. The conducted study inspires in-depth investigations of employees’ trust in AI in the company. It would be interesting to enlarge empirical analysis through the inclusion of the mediators in the research model. Furthermore, it could also be interesting to compare the factors influencing employees’ trust in tangible AI that has some kind of physical representation and virtual AI characterized by having no physical presence. As mentioned above, it is also worth investigating the companies representing other industries in order to find out if the specificity of the industry has an impact on the research results regarding employees’ trust in AI at the workplace. Moreover, a survey among employees with more diverse competencies in the area of using AI solutions in their workplaces may shed new light on the issues being examined.

Author Contributions

Conceptualization, J.Ł., I.E., J.G., A.S. and P.B.; methodology, J.Ł., I.E., J.G., A.S. and P.B.; formal analysis, J.Ł., I.E., J.G., A.S. and P.B.; investigation, J.Ł., I.E., J.G., A.S. and P.B.; data curation, J.Ł.; writing—original draft preparation, J.Ł., I.E., J.G., A.S. and P.B.; writing—review and editing, J.Ł., I.E., J.G., A.S. and P.B.; visualization, J.G. and I.E.; supervision, J.Ł.; funding acquisition, J.Ł., I.E., J.G., A.S. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Centre of Excellence IMSErt, grant number FUTURE/03/2020. “Employee trust in artificial intelligence systems in industrial companies operating in Poland”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data supporting reported results can be found on https://drive.google.com/file/d/1UCNn7bVRBXx8f9f7OMfaSZz1V93FSgN9/view?usp=sharing (accessed on 30 March 2021).

Acknowledgments

The authors thank the companies’ representatives participating in the research, as well as the experts for their contribution in the process of constructing the research instrument.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Employees’ trust in AI in the company
I.1 The AI solutions used in my company are safe
I.2 The AI solutions used in my company are reliable
I.3 I can rely on AI solutions used in my company
I.4 The AI solutions used in my company have the appropriate functionality to perform the required tasks
I.5 The use of AI solutions in my company is intuitive
I.6 I can rely on the functioning of my company IT services
General trust in technology
II.1 Producers of advanced technology, including artificial intelligence (AI), are reliable (they have the knowledge and resources necessary to implement solutions)
II.2 Producers of advanced technology, including artificial intelligence (AI), are honest
II.3 Producers of advanced technology, including AI, have a good reputation
II.4 Producers of advanced technology, including AI, guarantee the confidentiality of the information provided (they ensure data security and privacy)
II.5 Producers of advanced technology, including AI, have good will and offer customers the best possible solutions
II.6 Producers of advanced technology, including AI, provide their customers with substantive and technical support (e.g., training in operation, service)
Intra-organizational trust
III.1 In my company, the opinions of competent (key) employees are consulted before significant (large) changes are implemented (e.g., new technological solutions)
III.2 Employees in my company have a say in matters that concern them (e.g., the scope of their duties, positions)
III.3 In my company, activities are undertaken aimed at substantive support for employees (e.g., training, mentoring)
III.4 Employees in my company share their knowledge with others, help each other learn
III.5 The flow of information in my company is fast and effective
III.6 I can rely on the work of my colleagues
Individual competence trust
IV.1 I feel I have been well trained to do my job
IV.2 I like challenges at work (new tasks, projects, duties that exceed my skills), I treat them as an opportunity for professional development
IV.3 I quickly adapt my behavior to the changing situation
IV.4 I follow all novelties referring to what I do on a daily basis
IV.5 In stressful situations, I quickly gain control over my emotions and concentrate on the task
IV.6 With high self-confidence, I convince others to take risky decisions when I do not see better solutions

References

  1. Sethumadhavan, A. Trust in artificial intelligence. Ergon. Des. 2019, 27, 34.
  2. Lee, D.; Tsai, F.P. Air conditioning energy saving from cloud-based artificial intelligence: Case study of a split-type air conditioner. Energies 2020, 13, 2001.
  3. Allam, Z.; Dhunny, Z.A. On big data, artificial intelligence and smart cities. Cities 2019, 89, 80–91.
  4. Chui, K.T.; Lytras, M.D.; Visvizi, A. Energy sustainability in smart cities: Artificial intelligence, smart monitoring, and optimization of energy consumption. Energies 2018, 11, 2869.
  5. Nica, E.; Miklencicova, R.; Kicova, E. Artificial Intelligence-supported workplace decisions: Big data algorithmic analytics, sensory and tracking technologies, and metabolism monitors. Psychosociol. Issues Hum. Resour. Manag. 2019, 7, 31–36.
  6. McArthur, D.; Lewis, M.; Bishary, M. The roles of artificial intelligence in education: Current progress and future prospects. J. Educ. Technol. 2005, 1, 42–80.
  7. Guzman, A.L.; Lewis, S.C. Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media Soc. 2020, 22, 70–86.
  8. Dimitrakopoulos, G.; Demestichas, P. Intelligent transportation systems: Systems based on cognitive networking principles and management functionality. IEEE Veh. Technol. Mag. 2010, 5, 77–84.
  9. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, 36–40.
  10. Yannakakis, G.N.; Togelius, J. Artificial Intelligence and Games; Springer International Publishing: Cham, Switzerland, 2018; ISBN 9783319635194.
  11. Makridakis, S. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures 2017, 90, 46–60.
  12. Goralski, M.A.; Tan, T.K. Artificial intelligence and sustainable development. Int. J. Manag. Educ. 2020, 18, 100330.
  13. Okuda, T.; Shoda, S. AI-based chatbot service for financial industry. Fujitsu Sci. Tech. J. 2018, 54, 4–8.
  14. Kraus, M.; Feuerriegel, S.; Oztekin, A. Deep learning in business analytics and operations research: Models, applications and managerial implications. Eur. J. Oper. Res. 2020, 281, 628–641.
  15. Vrbka, J.; Nica, E.; Podhorská, I. The application of Kohonen networks for identification of leaders in the trade sector in Czechia. Equilib. Q. J. Econ. Econ. Policy 2019, 14, 739–761.
  16. Kolupaieva, I.; Pustovhar, S.; Suprun, O.; Shevchenko, O. Diagnostics of systemic risk impact on the enterprise capacity for financial risk neutralization: The case of Ukrainian metallurgical enterprises. Oeconomia Copernic. 2019, 10, 471–491.
  17. Kitsios, F.; Kamariotou, M. Artificial Intelligence and Business Strategy towards Digital Transformation: A Research Agenda. Sustainability 2021, 13, 2025.
  18. Çınar, Z.M.; Abdussalam Nuhu, A.; Zeeshan, Q.; Korhan, O.; Asmael, M.; Safaei, B. Machine Learning in Predictive Maintenance towards Sustainable Smart Manufacturing in Industry 4.0. Sustainability 2020, 12, 8211.
  19. Lytras, M.D.; Visvizi, A. Artificial Intelligence and Cognitive Computing: Methods, Technologies, Systems, Applications and Policy Making. Sustainability 2021, 13, 3598.
  20. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Pearson: New York, NY, USA, 2010; ISBN 9780136042594.
  21. Wang, P. On defining artificial intelligence. J. Artif. Gen. Intell. 2019, 10, 1–37.
  22. Krykavskyy, Y.; Pokhylchenko, O.; Hayvanovych, N. Supply chain development drivers in industry 4.0 in Ukrainian enterprises. Oeconomia Copernic. 2019, 10, 273–290.
  23. Kijek, A.; Matras-Bolibok, A. Technological convergence across European regions. Equilib. Q. J. Econ. Econ. Policy 2020, 15, 295–313.
  24. Jakimowicz, A.; Rzeczkowski, D. Do barriers to innovation impact changes in innovation activities of firms during business cycle? The effect of the Polish green island. Equilib. Q. J. Econ. Econ. Policy 2019, 14, 631–676.
  25. Roszko-Wójtowicz, E.; Grzelak, M.M.; Laskowska, I. The impact of research and development activity on the TFP level in manufacturing in Poland. Equilib. Q. J. Econ. Econ. Policy 2019, 14, 711–737.
  26. Mokhova, N.; Zinecker, M. A survey of external and internal factors influencing the cost of equity. Eng. Econ. 2019, 30, 173–186.
  27. Théate, T.; Mathieu, S.; Ernst, D. An Artificial Intelligence Solution for Electricity Procurement in Forward Markets. Energies 2020, 13, 6435.
  28. Thielsch, M.T.; Meeßen, S.M.; Hertel, G. Trust and distrust in information systems at the workplace. PeerJ 2018, 2018, 5483.
  29. Silic, M.; Barlow, J.; Back, A. Evaluating the role of trust in adoption: A conceptual replication in the context of open source systems. AIS Trans. Replication Res. 2018, 4, 1–17.
  30. Li, X.; Hess, T.J.; Valacich, J.S. Why Do We Trust New Technology? A Study of Initial Trust Formation with Organizational Information Systems. J. Strateg. Inf. Syst. 2008, 17, 39–71.
  31. Wilson, H.J.; Daugherty, P.R. Collaborative intelligence: Humans and AI are joining forces. Harv. Bus. Rev. 2018, 96, 114–123.
  32. Davenport, T.H.; Short, J.E. The new industrial engineering: Information technology and business process redesign. Sloan Manag. Rev. 1990, 31, 11–27.
  33. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14.
  34. Lewis, D.J.; Weigert, A. Trust as a social reality. Soc. Forces 1985, 63, 967–985.
  35. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80.
  36. Taddeo, M. Modelling trust in artificial agents, a first step toward the analysis of e-trust. Minds Mach. 2010, 20, 243–257.
  37. Merritt, S.M.; Heimbaugh, H.; Lachapell, J.; Lee, D. I trust it, but I don’t know why: Effects of implicit attitudes toward automation on trust in an automated system. Hum. Factors 2013, 55, 520–534.
  38. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434.
  39. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734.
  40. Nickel, P.J. Trust, staking, and expectations. J. Theory Soc. Behav. 2009, 39, 345–362.
  41. McEvily, B.; Tortoriello, M. Measuring trust in organisational research: Review and recommendations. J. Trust Res. 2011, 1, 23–63.
  42. Kramer, R.M.; Tyler, T.R. Trust in Organizations: Frontiers of Theory and Research; SAGE Publications: Thousand Oaks, CA, USA, 1996.
  43. Evans, A.M.; Krueger, J.I. The psychology (and economics) of trust. Soc. Personal. Psychol. Compass 2009, 3, 1003–1017.
  44. Hough, M.; Jackson, J.; Bradford, B.; Myhill, A.; Quinton, P. Procedural justice, trust, and institutional legitimacy. Polic. A J. Policy Pract. 2010, 4, 203–210.
  45. Raheem, A.R.; Romeika, G.; Kauliene, R.; Streimikis, J.; Dapkus, R. ES-QUAL model and customer satisfaction in online banking: Evidence from multivariate analysis techniques. Oeconomia Copernic. 2020, 11, 59–93.
  46. Hosmer, L.T. Trust: The connecting link between organizational theory and philosophical ethics. Acad. Manag. Rev. 1995, 20, 379–403.
  47. Papadopoulou, P.; Nikolaidou, M.; Martakos, D. What is trust in e-government? A proposed typology. In Proceedings of the Annual Hawaii International Conference on System Sciences, Honolulu, HI, USA, 5–8 January 2010; pp. 1–10.
  48. Smyth, H.; Edkins, A. Relationship management in the management of PFI/PPP projects in the UK. Int. J. Proj. Manag. 2007, 25, 232–240. [Google Scholar] [CrossRef]
  49. Ebert, T. Trust as the Key to Loyalty in Business-to-Consumer Exchanges: Trust Building Measures in the Banking Industry; Springer Gabler: Wiesbaden, Germany, 2009; ISBN 9783834916228. [Google Scholar]
  50. Jantoń-Drozdowska, E.; Majewska, M. Social capital as a key driver of productivity growth of the economy: Across-countries comparison. Equilib. Q. J. Econ. Econ. Policy 2015, 10, 61–83. [Google Scholar] [CrossRef] [Green Version]
  51. Rotter, J.B. Interpersonal trust, trustworthiness, and gullibility. Am. Psychol. 1980, 35, 1–7. [Google Scholar] [CrossRef]
  52. Rotter, J.B. Generalized expectancies for interpersonal trust. Am. Psychol. 1971, 26, 443–452. [Google Scholar] [CrossRef]
  53. McKnight, D.H.; Cummings, L.L.; Chervany, N.L. Initial trust formation in new organizational relationships. Acad. Manag. Rev. 1998, 23, 473–490. [Google Scholar] [CrossRef] [Green Version]
  54. McKnight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. 2011, 2, 12–32. [Google Scholar] [CrossRef]
  55. McKnight, D.H.; Chervany, N.L. What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology. Int. J. Electron. Commer. 2001, 6, 35–59. [Google Scholar] [CrossRef]
  56. Marsh, S.; Dibben, M.R. The role of trust in information science and technology. Annu. Rev. Inf. Sci. Technol. 2005, 37, 465–498. [Google Scholar] [CrossRef]
  57. Taddeo, M. Trust in Technology: A Distinctive and a Problematic Relation. Knowl. Technol. Policy 2010, 23, 283–286. [Google Scholar] [CrossRef] [Green Version]
  58. Ejdys, J. Determinanty zaufania do technologii. Przegląd Organ. 2017, 12, 20–27. [Google Scholar] [CrossRef]
  59. Siau, K.; Wang, W. Building trust in artificial intelligence, machine learning, and robotics. Cut. Bus. Technol. J. 2018, 31, 47–53. [Google Scholar]
  60. Davenport, T.H.; Ronanki, R. Artificial intelligence for the real world. Harv. Bus. Rev. 2018, 96, 108–116. [Google Scholar]
  61. Akerkar, R. Artificial Intelligence for Business; SpringerBriefs in Business; Springer International Publishing: Cham, Switzerland, 2019; ISBN 978-3-319-97435-4. [Google Scholar]
  62. Cheatham, B.; Javanmardian, K.; Samandari, H. Confronting the risks of artificial intelligence. McKinsey Q. 2019, 1–9. [Google Scholar]
  63. Ryczkowski, M.; Zinecker, M. Gender unemployment in the Czech and Polish labour market. Argum. Oeconomica 2020, 2020, 213–229. [Google Scholar] [CrossRef]
  64. Ganesan, S. Determinants of long-term orientation in buyer-seller relationships. J. Mark. 1994, 58, 1–19. [Google Scholar] [CrossRef]
  65. Jian, J.-Y.; Bisantz, A.M.; Drury, C.G. Foundations for an empirically determined scale of trust in automated systems. Int. J. Cogn. Ergon. 2000, 4, 53–71. [Google Scholar] [CrossRef]
  66. Hacker, S.; Willard, M. The Trust Imperative: Performance Improvement through Productive Relationship; ASQ Quality Press: Milwaukee, WI, USA, 2002. [Google Scholar]
  67. Seppänen, R.; Blomqvist, K.; Sundqvist, S. Measuring inter-organizational trust-a critical review of the empirical research in 1990–2003. Ind. Mark. Manag. 2007, 36, 249–265. [Google Scholar] [CrossRef]
  68. Ellonen, R.; Blomqvist, K.; Puumalainen, K. The role of trust in organisational innovativeness. Eur. J. Innov. Manag. 2008, 11, 160–181. [Google Scholar] [CrossRef]
  69. Zeffane, R. Pride and commitment in organizations: Exploring the impact of satisfaction and trust climate. Manag. Organ. Syst. Res. 2009, 51, 163–176. [Google Scholar]
  70. Jurek, P. Metody Pomiaru Kompetencji Zawodowych; Ministerstwo Pracy i Polityki Społecznej: Warszawa, Poland, 2012; ISBN 9788361752752. [Google Scholar]
  71. Rossi, F. Building trust in artificial intelligence. J. Int. Aff. 2019, 72, 127–134. [Google Scholar]
  72. Hardré, P.L. When, how, and why do we trust technology too much? In Emotions, Technology, and Behaviors; Tettegah, S.Y., Espelage, D.L., Eds.; Academic Press: Cambridge, MA, USA, 2016; pp. 85–106. ISBN 9780128018736. [Google Scholar] [CrossRef]
  73. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places; Cambridge University Press: New York, NY, USA, 1996. [Google Scholar]
  74. Nowak, K.L.; Rauh, C. The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction. J. Comput. Commun. 2005, 11, 153–178. [Google Scholar] [CrossRef] [Green Version]
  75. Lankton, N.K.; Harrison Mcknight, D.; Tripp, J. Technology, humanness, and trust: Rethinking trust in technology. J. Assoc. Inf. Syst. 2015, 16, 880–918. [Google Scholar] [CrossRef]
  76. Wortham, R.H.; Theodorou, A. Robot transparency, trust and utility. Connect. Sci. 2017, 29, 242–248. [Google Scholar] [CrossRef] [Green Version]
  77. Winfield, A.F.T.; Jirotka, M. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos. Trans. R. Soc. 2018, 376, 1–13. [Google Scholar] [CrossRef] [Green Version]
  78. Bill, B.; Scott, B.; Sandeep, S. Tech Trends 2020. Deloitte Insights 2020, 5, 1–130. [Google Scholar]
  79. Kiran, A.H.; Verbeek, P.-P. Trusting our selves to technology. Knowl. Technol. Policy 2010, 23, 409–427. [Google Scholar] [CrossRef] [Green Version]
  80. Lippert, S.K. Assessing post-adoption utilisation of information technology within a supply chain management context. Int. J. Technol. Manag. 2007, 7, 36–59. [Google Scholar] [CrossRef]
  81. Thatcher, J.B.; McKnight, D.H.; Baker, E.W.; Arsal, R.E.; Roberts, N.H. The role of trust in postadoption IT exploration: An empirical examination of knowledge management systems. IEEE Trans. Eng. Manag. 2011, 58, 56–70. [Google Scholar] [CrossRef]
  82. Tan, H.; Lim, A. Trust in coworkers and trust in organizations. J. Psychol. Interdiscip. Appl. 2009, 143, 45–66. [Google Scholar] [CrossRef] [PubMed]
  83. Huang, J.T. Be proactive as empowered? The role of trust in one’s supervisor in psychological empowerment, feedback seeking, and job performance. J. Appl. Soc. Psychol. 2012, 42, 103–127. [Google Scholar] [CrossRef]
  84. Schaubroeck, J.M.; Peng, A.C.; Hannah, S.T. Developing trust with peers and leaders: Impacts on organizational identification and performance during entry. Acad. Manag. J. 2013, 56, 1148–1168. [Google Scholar] [CrossRef]
  85. Fulmer, A.C.; Gelfand, M.J. At what level (and in whom) we trust: Trust across multiple organizational levels. J. Manag. 2012, 38, 1167–1230. [Google Scholar] [CrossRef] [Green Version]
  86. Sankowska, A. Analiza Zaufania W Sieciach Badawczo-Rozwojowych; Polskie Wydawnictwo Naukowe: Warszawa, Poland, 2015. [Google Scholar]
  87. Bugdol, M. Wymiary I Problemy Zarządzania Organizacją Opartą Na Zaufaniu; Wydawnictwo Uniwersytetu Jagiellońskiego: Kraków, Poland, 2010. [Google Scholar]
  88. Dirks, K.T.; Ferrin, D.L. The role of trust in organizational settings. Organ. Sci. 2001, 12, 450–467. [Google Scholar] [CrossRef]
  89. Colquitt, J.A.; Scott, B.A.; LePine, J.A. Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. J. Appl. Psychol. 2007, 92, 909–927. [Google Scholar] [CrossRef] [PubMed]
  90. Cheung, M.F.Y.; Wong, C.S.; Yuan, G.Y. Why mutual trust leads to highest performance: The mediating role of psychological contract fulfillment. Asia Pac. J. Hum. Resour. 2017, 55, 430–453. [Google Scholar] [CrossRef]
  91. Morgan, D.E.; Zeffane, R. Employee involvement, organizational change and trust in management. Int. J. Hum. Resour. Manag. 2003, 14, 55–75. [Google Scholar] [CrossRef]
  92. Thomas, G.F.; Zolin, R.; Hartman, J.L. The central role of communication in developing trust and its effect on employee involvement. J. Bus. Commun. 2009, 46, 287–310. [Google Scholar] [CrossRef]
  93. Lopes, H.; Calapez, T.; Lopes, D. The determinants of work autonomy and employee involvement: A multilevel analysis. Econ. Ind. Democr. 2017, 38, 448–472. [Google Scholar] [CrossRef]
  94. Bibb, S.; Kourdi, J. Trust Matters: For Organisational and Personal Success; Palgrave Mcmillan: New York, NY, USA, 2004; ISBN 9780230508330. [Google Scholar]
  95. Matzler, K.; Renzl, B. The relationship between interpersonal trust, employee satisfaction, and employee loyalty. Total Qual. Manag. Bus. Excell. 2006, 17, 1261–1271. [Google Scholar] [CrossRef]
  96. Monji, L.; Ortlepp, K. The Relationship between organisational trust, job satisfaction and intention to leave: An exploratory study. Alternation 2011, 18, 192–214. [Google Scholar]
  97. Möllering, G. Trust: Reason, Routine, Reflexivity; Elsevier: Oxford, UK, 2006. [Google Scholar]
  98. Lewis, D.J.; Weigert, A.J. The social dynamics of trust: Theoretical and empirical research, 1985–2012. Soc. Forces 2012, 91, 25–31. [Google Scholar] [CrossRef] [Green Version]
  99. Malik, A.; Singh, P.; Chan, C. The roles of organizational trust and employee attributions in the context of talent management. Acad. Manag. Annu. Meet. Proc. 2017, 1, 12404. [Google Scholar] [CrossRef]
  100. Ambrosius, J. Strategic talent management in emerging markets and its impact on employee retention: Evidence from brazilian MNCs. Thunderbird Int. Bus. Rev. 2018, 60, 53–68. [Google Scholar] [CrossRef]
  101. Holste, J.S.; Fields, D. Trust and tacit knowledge sharing and use. J. Knowl. Manag. 2010, 14, 128–140. [Google Scholar] [CrossRef] [Green Version]
  102. Lee, P.; Gillespie, N.; Mann, L.; Wearing, A. Leadership and trust: Their effect on knowledge sharing and team performance. Manag. Learn. 2010, 41, 473–491. [Google Scholar] [CrossRef]
  103. McNeish, J.E.; Mann, I. Knowledge sharing and trust in organizations. IUP J. Knowl. Manag. 2010, 2, 18–38. [Google Scholar]
  104. McAllister, D.J. Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Acad. Manag. J. 1995, 38, 24–59. [Google Scholar] [CrossRef] [Green Version]
  105. Dirks, K.T. The effects of interpersonal trust on work group performance. J. Appl. Psychol. 1999, 84, 445–455. [Google Scholar] [CrossRef] [Green Version]
  106. Costa, A.C. Work team trust and effectiveness. Pers. Rev. 2003, 32, 605–622. [Google Scholar] [CrossRef]
  107. De Jong, B.A.; Dirks, K.T.; Gillespie, N. Trust and team performance: A meta-analysis of main effects, moderators, and covariates. J. Appl. Psychol. 2016, 101, 1134–1150. [Google Scholar] [CrossRef] [Green Version]
  108. Rodrigues, A.F.C.; de Oliveira Marques Veloso, A.L. Organizational trust, risk and creativity. Rev. Bus. Manag. 2013, 15, 545–561. [Google Scholar] [CrossRef]
  109. Matherly, L.L.; Al Nahyan, S.S. Building competitiveness through effective governance of national-expatriate knowledge transfer and development of sustainable human capital. Int. J. Organ. Anal. 2015, 23, 456–471. [Google Scholar] [CrossRef]
  110. Gaines, S.O.; Lyde, M.D.; Panter, A.T.; Steers, W.N.; Rusbult, C.E.; Cox, C.L.; Wexler, M.O. Evaluating the circumplexity of interpersonal traits and the manifestation of interpersonal traits in interpersonal trust. J. Pers. Soc. Psychol. 1997, 73, 610–623. [Google Scholar] [CrossRef]
  111. Deutsch, M. Trust and suspicion. J. Confl. Resolut. 1958, 2, 265–279. [Google Scholar] [CrossRef]
  112. Ho, G.; Kiff, L.M.; Plocher, T.; Haigh, K.Z. A model of trust and reliance of automation technology for older users. AAAI Fall Symp. Caring Mach. 2005, 45–50. [Google Scholar]
  113. McBride, S.E.; Rogers, W.A.; Fisk, A.D. Do younger and older adults differentially depend on an automated system? Proc. Hum. Factors Ergon. Soc. 2010, 54, 175–179. [Google Scholar] [CrossRef]
  114. Sanchez, J.; Rogers, W.A.; Fisk, A.D.; Rovira, E. Understanding reliance on automation: Effects of error type, error distribution, age and experience. Theor. Issues Ergon. Sci. 2014, 15, 134–160. [Google Scholar] [CrossRef] [Green Version]
  115. Lee, E.J. Flattery may get computers somewhere, sometimes: The moderating role of output modality, computer gender, and user gender. Int. J. Hum. Comput. 2008, 66, 789–800. [Google Scholar] [CrossRef]
  116. Merritt, S.M.; Ilgen, D.R. Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Hum. Factors 2008, 50, 194–210. [Google Scholar] [CrossRef] [Green Version]
  117. McBride, M.; Carter, L.; Ntuen, C. The impact of personality on nurses’ bias towards automated decision aid acceptance. Int. J. Inf. Syst. Chang. Manag. 2012, 6, 132–146. [Google Scholar] [CrossRef]
  118. Merritt, S.M. Affective processes in human-automation interactions. Hum. Factors 2011, 53, 356–370. [Google Scholar] [CrossRef]
  119. Case, K.; Sinclair, M.A.; Rani, M.R.A. An experimental investigation of human mismatches in machining. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 1999, 213, 197–201. [Google Scholar] [CrossRef] [Green Version]
  120. Mayer, A.K.; Sanchez, J.; Fisk, A.D.; Rogers, W.A. Don’t let me down: The role of operator expectations in human-automation interaction. In Proceedings of the Human Factors and Ergonomics Society, San Francisco, CA, USA, 16–20 October 2006; Volume 50, pp. 2345–2349. [Google Scholar]
  121. Co, H.C.; Patuwo, B.E.; Hu, M.Y. The human factor in advanced manufacturing technology adoption: An empirical analysis. Int. J. Oper. Prod. Manag. 1998, 18, 87–106. [Google Scholar] [CrossRef]
  122. Delaney, R.; D’Agostino, R. The Challenges of Integrating New Technology into an Organization; La Salle University: Philadelphia, PA, USA, 2015. [Google Scholar]
  123. Łapińska, J.; Sudolska, A.; Górka, J.; Escher, I.; Kądzielawski, G.; Brzustewicz, P. Zaufanie Pracowników do Sztucznej Inteligencji w Przedsiębiorstwach Przemysłowych Funkcjonujących w Polsce. Raport z Badania; Instytut Badań Gospodarczych: Olsztyn, Poland, 2020. [Google Scholar]
  124. Bollen, K.A. Structural Equations with Latent Variables; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1989; ISBN 0-471-01171-1. [Google Scholar]
  125. Bollen, K.A.; Long, J.S. Testing Structural Equation Models; SAGE Publications Inc.: Thousand Oaks, CA, USA, 1993. [Google Scholar]
  126. Konarski, R. Modele Równań Strukturalnych; Polskie Wydawnictwo Naukowe: Warszawa, Poland, 2009; ISBN 978-83-011-6094-4. [Google Scholar]
  127. Kline, R.B. Principles and Practice of Structural Equation Modeling, 3rd ed.; The Guilford Press: New York, NY, USA, 2011; ISBN 978-1-60623-876-9. [Google Scholar]
  128. Hoyle, R.H. (Ed.) Handbook of Structural Equation Modeling; Guilford Press: New York, NY, USA, 2012. [Google Scholar]
  129. Hair, J.F., Jr.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis; Pearson Education Limited: Edinburgh, Scotland, 2014; ISBN 9781482219807. [Google Scholar]
  130. Hollander, M.; Wolfe, D. Nonparametric Statistical Methods; Probability; Wiley: New York, NY, USA, 1999. [Google Scholar]
  131. Schermelleh-Engel, K.; Moosbrugger, H.; Müller, H. Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods Psychol. Res. 2003, 8, 23–74. [Google Scholar]
  132. Sagan, A. Model pomiarowy satysfakcji i lojalności. In Statistica; StatSoft Polska: Krakow, Poland, 2003; pp. 75–85. [Google Scholar]
  133. Hooper, D.; Coughlan, J.; Mullen, M.R. Structural equation modelling: Guidelines for determining model fit. Electron. J. Bus. Res. Methods 2008, 6, 53–60. [Google Scholar] [CrossRef]
  134. Schreiber, J.B.; Stage, F.K.; King, J.; Nora, A.; Barlow, E.A. Reporting structural equation modeling and confirmatory factor analysis results: A review. J. Educ. Res. 2006, 99, 323–337. [Google Scholar] [CrossRef]
  135. Januszewski, A. Modele równań strukturalnych w metodologii badań psychologicznych. Problematyka przyczynowości w modelach strukturalnych i dopuszczalność modeli. In Studia z Psychologii w KUL; Gorbaniuk, O., Kostrubiec-Wojtachnio, B., Musiał, D., Wiechetek, M., Błachnio, A., Przepiórka, A., Eds.; Wydawnictwo KUL: Lublin, Poland, 2011; Volume 17, pp. 213–245. ISBN 9788377024737. [Google Scholar]
  136. Asyraf, W.M.; Afthanorhan, B.W. A comparison of partial least square structural equation modeling (PLS-SEM) and covariance based structural equation modeling (CB-SEM) for confirmatory factor analysis. Int. J. Eng. Sci. Innov. Technol. 2013, 2, 198–205. [Google Scholar]
  137. Teo, T.; Tsai, L.T.; Yang, C.C. Applying structural equation modeling (SEM) in educational research: An introduction. In Application of Structural Equation Modeling in Educational Research and Practice; Khine, M.S., Ed.; Sense Publishers: Perth, Australia, 2013; pp. 3–21. ISBN 9789462093324. [Google Scholar]
  138. Satorra, A.; Bentler, P.M. Corrections to test statistics and standard errors in covariance structure analysis. In Latent Variables Analysis: Applications for Developmental Research; von Eye, A., Clogg, C.C., Eds.; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 1994; pp. 399–419. [Google Scholar]
  139. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a silver bullet. J. Mark. Theory Pract. 2011, 19, 139–151. [Google Scholar] [CrossRef]
  140. Dzindolet, M.T.; Peterson, S.A.; Pomranky, R.A.; Pierce, L.G.; Beck, H.P. The role of trust in automation reliance. Int. J. Hum. Comput. Stud. 2003, 58, 697–718. [Google Scholar] [CrossRef]
  141. Jones, S.L.; Shah, P.P. Diagnosing the locus of trust: A temporal perspective for trustor, trustee, and dyadic influences on perceived trustworthiness. J. Appl. Psychol. 2016, 101, 392–414. [Google Scholar] [CrossRef]
  142. AI at Work: It’s Time to Embrace AI; Oracle: Redwood Shores, CA, USA, 2018; pp. 1–9. Available online: https://www.oracle.com/a/ocom/docs/ytt-ai-at-work-report.pdf (accessed on 15 February 2021).
  143. Carmeli, A.; Tishler, A.; Edmondson, A.C. CEO relational leadership and strategic decision quality in top management teams: The role of team trust and learning from failure. Strateg. Organ. 2012, 10, 31–54. [Google Scholar] [CrossRef]
  144. Levin, D.Z.; Cross, R. The strength of weak ties you can trust: The mediating role of trust in effective knowledge transfer. Manag. Sci. 2004, 50, 1463–1613. [Google Scholar] [CrossRef] [Green Version]
  145. Sankowska, A. Relationships between organizational trust, knowledge transfer, knowledge creation, and firm’s innovativeness. Learn. Organ. 2013, 20, 85–100. [Google Scholar] [CrossRef]
Figure 1. Measurement model (standardized loadings are presented on the arrows).
Figure 2. Structural model.
Table 1. Latent variables and their corresponding items.

Employees’ trust in AI in the company (TrAICom):
I.1 The AI solutions used in my company are safe
I.2 The AI solutions used in my company are reliable
I.3 I can rely on AI solutions used in my company
I.5 The use of AI solutions in my company is intuitive

General trust in technology (GenTrTech):
II.1 Producers of advanced technology, including artificial intelligence (AI), are reliable (they have the knowledge and resources necessary to implement solutions)
II.2 Producers of advanced technology, including artificial intelligence (AI), are honest
II.3 Producers of advanced technology, including AI, have a good reputation
II.5 Producers of advanced technology, including AI, have good will and offer customers the best possible solutions

Intra-organizational trust (InOrgTr):
III.1 In my company, the opinions of competent (key) employees are consulted before significant (large) changes are implemented (e.g., new technological solutions)
III.2 Employees in my company have a say in matters that concern them (e.g., the scope of their duties, positions)
III.3 In my company, activities are undertaken aimed at substantive support for employees (e.g., training, mentoring)
III.5 The flow of information in my company is fast and effective

Individual competence trust (IndComTr):
IV.2 I like challenges at work (new tasks, projects, duties that exceed my skills), I treat them as an opportunity for professional development
IV.4 I follow all novelties referring to what I do on a daily basis
IV.5 In stressful situations, I quickly gain control over my emotions and concentrate on the task
IV.6 With high self-confidence, I convince others to take risky decisions when I do not see better solutions
Table 2. Score reliabilities, validities, and correlations of latent variables and values of selected fit statistics for the CFA model.

            Alpha   CR      AVE     GenTrTech  TrAICom  InOrgTr  IndComTr
GenTrTech   0.922   0.946   0.814   1
TrAICom     0.946   0.962   0.862   0.781      1
InOrgTr     0.915   0.940   0.797   0.571      0.609    1
IndComTr    0.886   0.923   0.750   0.337      0.379    0.510    1

χ² = 156.48; df = 98; p-value = 0.000; CFI = 0.984; TLI = 0.981; GFI = 0.932; RMSEA = 0.047; SRMR = 0.036.
Alpha—Cronbach’s Alpha; CR—Composite Reliability; AVE—Average Variance Extracted; CFI—Comparative Fit Index; TLI—Tucker-Lewis Index; GFI—Goodness of Fit Index; RMSEA—Root Mean Square Error of Approximation; SRMR—Standardized Root Mean Square Residual.
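For reference, the CR and AVE columns in Table 2 follow the standard formulas computed from standardized factor loadings: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = (Σλ²)/k. The loading values below are hypothetical (the study's loadings appear in Figure 1, not in this table); this is only a minimal sketch of the computation:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 in a standardized solution."""
    s = sum(loadings) ** 2
    errors = sum(1 - l * l for l in loadings)
    return s / (s + errors)


def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)


# Hypothetical loadings for a four-item construct (not taken from the study):
lams = [0.90, 0.92, 0.88, 0.91]
print(round(composite_reliability(lams), 3))
print(round(average_variance_extracted(lams), 3))
```

With loadings of this magnitude, CR and AVE land in the same range as the values reported in the table, which is why thresholds of 0.7 (CR) and 0.5 (AVE) are easily cleared here.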
Table 3. Parameter estimation of the regression model for employees’ trust in AI in the company.

Variable     B      Std. Err. B   p-Value   β
GenTrTech    0.694  0.078         0.000     0.639
InOrgTr      0.207  0.059         0.000     0.216
IndComTr     0.086  0.061         0.157     0.056

Goodness of fit: df = 98; χ² = 156.479; p-value = 0.000; RMSEA = 0.047; CFI = 0.984; TLI = 0.981; GFI = 0.932; SRMR = 0.036.
B—estimate parameter; β—standardized estimate parameter; CFI—Comparative Fit Index; TLI—Tucker-Lewis Index; GFI—Goodness of Fit Index; RMSEA—Root Mean Square Error of Approximation; SRMR—Standardized Root Mean Square Residual.
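Read as a standardized regression, the β column of Table 3 implies the linear relation TrAICom ≈ 0.639·GenTrTech + 0.216·InOrgTr + 0.056·IndComTr. The sketch below evaluates that relation for hypothetical standardized latent scores (the input values and the function name are illustrative, not part of the study; the residual term is omitted):

```python
# Standardized path coefficients taken from the beta column of Table 3.
BETAS = {"GenTrTech": 0.639, "InOrgTr": 0.216, "IndComTr": 0.056}


def implied_trust_in_ai(scores):
    """Model-implied standardized TrAICom score for the given standardized
    latent predictor scores (residual/error term omitted)."""
    return sum(beta * scores.get(name, 0.0) for name, beta in BETAS.items())


# Hypothetical employee one standard deviation above the mean on all predictors:
print(round(implied_trust_in_ai({"GenTrTech": 1.0, "InOrgTr": 1.0, "IndComTr": 1.0}), 3))
```

The dominant weight of GenTrTech (0.639) versus IndComTr (0.056, non-significant at p = 0.157) mirrors the paper's conclusion that general trust in technology and intra-organizational trust, not individual competence trust, drive employees' trust in AI.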
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Łapińska, J.; Escher, I.; Górka, J.; Sudolska, A.; Brzustewicz, P. Employees’ Trust in Artificial Intelligence in Companies: The Case of Energy and Chemical Industries in Poland. Energies 2021, 14, 1942. https://doi.org/10.3390/en14071942