1. Introduction
Over the past decades, a shift in healthcare needs has become evident. The world population is ageing while fertility is dropping below replacement level [1] (pp. 2–5). Life expectancy is higher than ever [2] and older adults require an increasing amount of healthcare [3,4]. By 2040, the number of older dementia patients is estimated to reach 81.1 million worldwide [5]. These people often require more specialised care [6,7]. For example, loneliness among older adults may lead to excess morbidity and mortality [8]. The growing amount and specialisation of healthcare needs also raise costs [9]. On top of severe healthcare budget cuts in most Western countries, a shortage of educated care professionals is expected in the area of specialised care [2]. This shortage of hands, together with the aftermath of a global financial crisis, foreshadows a future of low-quality eldercare at high cost [10].
The call for care technology is becoming progressively louder, and care robots seem to be in the vanguard of that development [11]. Care robots come in many forms, from surgery machines (e.g., the Da Vinci Surgical System) to cuddly toys (e.g., PARO). Generally, three types of care robots may be distinguished: assistive, monitoring, and companion robots [12]. Assistive robots help with hygiene chores such as washing someone’s hair [13]. Monitoring robots survey matters of behaviour or health [14]. Companion robots provide entertainment and daily management, which is typical for dementia patients [15] as well as for socially isolated seniors [16].
A healthcare robot is not a conventional machine but an agent that makes (partially) independent decisions and executes specialised tasks with little assistance [17]. Healthcare robots provide support for basic activities, assisting seniors and their caretakers [18]. However, despite recent advances in robotics, care robots cannot independently tend to the needs of older adults, nor will they in the foreseeable future. Thus, robots themselves need minders, on the work floor as well as in operational management. That is, current care students will be working with robots as their colleagues [19]. Others will become managers and planners, dealing with teams of care personnel and robots. To prepare healthcare systems for new technology, proper training of (future) care workers is required [3]. Higher vocational students will become the care managers who coach the lower vocational students, who in turn will become the care professionals working in mixed robot teams.
The questions that arise from such observations include: How do the professionals of tomorrow perceive such robot care: as useful and helpful in their jobs, or as threatening, with massive lay-offs expected? Does robot care clash with moral principles of “good care”? These and related questions are addressed in the current paper, which investigates what trainee professionals think of assistive, monitoring, and companion robots in terms of morality, utility, and acceptance. In view of robot care, how do care students consider patient (1) autonomy; (2) non-maleficence; (3) beneficence; and (4) justice [20]? These basic principles of healthcare ethics are generally accepted for assessing medical and care procedures, treatments, interventions, and technologies.
To guarantee autonomy, patients should have full control over decisions that concern their health. They cannot be forced into treatment. Patients should be informed as completely as possible and made aware of dangers, benefits, and success rates. Treatment is performed only with the patient’s fully informed consent. Non-maleficence states that it is better to do nothing than to do something that worsens the patient’s condition. Treatment should not harm the patient, their kin, the environment, or society. An incision may temporarily and locally “harm” a patient as long as the greater benefit is ensured. Beneficence, in turn, entails that all acts, thoughts, and deliberations are performed with the good of the patient in mind. Treatment should be tailored to the individual, and education, training, and technology should be continuously updated, improved, and tested. Justice refers to fairness, with equal rights for everyone to the best treatment: medicine, expertise, and other resources are distributed equally among all. This principle demands that potential conflicts of treatment with current law, legislation, rights, liabilities, and other obligations be addressed.
Our first research question (RQ1), then, is: To what extent do higher and lower vocational care students believe that assistive, monitoring, and companion robots affect a patient’s autonomy, may do harm, may be beneficial, and are “just”? Conversely, do care students fear that robots “take over” a patient’s decision making and hurt patients physically, that companion robots do not help against loneliness, and that denying patients human empathy is unfair [21,22]?
Another concern of care students might be that robots take their jobs [23]. Do students perceive robots as complementary, or are robots considered so capable that they will replace them?
Utility of a robot, in our case, relates to how handy and practical care students think a robot would be during job performance [24].
Acceptance would be the actual agreement to and adoption of robot technology in the work practice, for instance, based on utility and ease of use [25].
About a decade ago, the general tendency among healthcare students was that robots would replace them and that employing robots in care was unacceptable [26]. Ekland [27] and Schulman [28] found that care students were pessimistic about the usefulness of robots in telemedicine. In the nursing of older adults, the workforce did not believe robots to be useful [29].
Would this position have changed over the past years? What is known is that, if people do not expect much of the performance of a technology, the behavioural intention to actually use the system is weak [30]. Likewise, if perceived usefulness and perceived ease of use are low, intentions to use new technology drop [25]. The reverse also seems valid: intention to use and actual use increase if people feel that a device will perform well, is useful, and is easy to use (ibid.). Hence, if care professionals see that a robot is practical and functional, would their initial moral objections become less of an issue [31], or would the two co-exist?
Our RQ2, then, is: To what extent do care students believe that assistive, monitoring, and companion robots are acceptable and that robots will be useful in their future occupations? Furthermore, RQ3 asks: Does high perceived usefulness perhaps downplay earlier ethical concerns? To investigate ethical and occupational concerns of care students with different types of care robots, we designed a questionnaire study, probing the contrast between principles of ethics and considerations of utility and acceptance.
In other words, the current study seeks to investigate how care professionals’ moral concerns relate to different types of care robots, whether these concerns differ between higher and lower educated professionals, how care professionals perceive a care robot’s usefulness, and how their moral concerns relate to perceived utility and acceptability.
3. Results
Table 3 presents the descriptive statistics of the dependent variables per Robot Type. To analyse our Research Questions, we ran a 3 (Robot Type: assisting, monitoring, companion) × 2 (Education Level: higher vs. intermediate vocational) GLM MANOVA on the dependent measures Maleficence, Autonomy, Utility, and Acceptance.
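For readers who wish to retrace this type of analysis, the following is a minimal sketch in Python (statsmodels) of such a two-way MANOVA. The data are synthetic and the column names are assumptions for illustration; this is not the study’s actual dataset or code.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in data; in the actual study, `df` would contain one row per
# participant with the questionnaire scale scores (column names are assumed).
rng = np.random.default_rng(0)
n = 354
df = pd.DataFrame({
    "RobotType": rng.choice(["assisting", "monitoring", "companion"], n),
    "Education": rng.choice(["higher", "intermediate"], n),
    "Maleficence": rng.normal(3, 1, n),
    "Autonomy": rng.normal(3, 1, n),
    "Utility": rng.normal(3, 1, n),
    "Acceptance": rng.normal(3, 1, n),
})

# 3 (Robot Type) x 2 (Education Level) MANOVA on the four dependent measures.
fit = MANOVA.from_formula(
    "Maleficence + Autonomy + Utility + Acceptance ~ C(RobotType) * C(Education)",
    data=df,
)
print(fit.mv_test())  # reports Pillai's trace (V), Wilks' lambda, etc. per effect
```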
The Box’s M value was associated with a p-value of 0.04. Note, however, that Box’s M test is highly sensitive and its outcome should be disregarded unless N ≥ 200 and p ≤ 0.001 [34]. Because our sample exceeded 200 participants while the p-value of 0.04 was well above 0.001, we considered it appropriate to proceed with the MANOVA.
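Box’s M and its chi-square approximation can also be computed directly. The sketch below (NumPy/SciPy, synthetic data) follows the textbook formula for the statistic and is meant purely as an illustration of the assumption check, not as the study’s procedure.

```python
import numpy as np
from scipy import stats

def box_m(groups):
    """Box's M test for equality of covariance matrices across groups.
    `groups` is a list of (n_i x p) arrays, one per cell of the design."""
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)
    m_stat = (ns.sum() - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs)
    )
    # Box's chi-square approximation with correction factor c.
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1.0 / (ns - 1)) - 1.0 / (ns.sum() - k)
    )
    chi2 = m_stat * (1 - c)
    dof = p * (p + 1) * (k - 1) / 2
    return chi2, dof, stats.chi2.sf(chi2, dof)

# Toy example: six design cells (3 robot types x 2 education levels),
# four dependent variables each.
rng = np.random.default_rng(1)
cells = [rng.normal(size=(60, 4)) for _ in range(6)]
chi2, dof, pval = box_m(cells)
print(f"Box's M (chi2 approx.) = {chi2:.2f}, df = {dof:.0f}, p = {pval:.3f}")
```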
Results of the multivariate analysis showed a significant main effect of Robot Type on the dependent variables (V = 0.26, F(8,698) = 13.22, p < 0.001, ηp² = 0.13). However, the main effect of Education Level was not significant (p = 0.111), and neither was the interaction between Education Level and Robot Type (F = 0.78, p > 0.05). Effects of Education Level were therefore not examined further.
The univariate between-subjects effects showed the main effect of Robot Type on participants’ moral considerations (RQ1) and on their evaluations of the different robot types’ utility and acceptance (RQ2). Based on Levene’s F test, the homogeneity of variance assumption was considered satisfactory, with p > 0.05 for each subgroup. Robot Type had a significant effect on Maleficence (F(2,353) = 31.86, p < 0.001, ηp² = 0.15), on Utility (F(2,353) = 3.75, p = 0.024, ηp² = 0.02), and on Acceptance (F(2,353) = 12.58, p < 0.001, ηp² = 0.07). The effect of Robot Type on Autonomy was not significant (p > 0.05).
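The univariate follow-up tests can be illustrated as follows. This is a minimal sketch with synthetic data and assumed column names, showing Levene’s test and a one-way follow-up ANOVA for a single dependent scale.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data for one dependent scale (column names are assumed).
rng = np.random.default_rng(2)
n = 354
df = pd.DataFrame({
    "RobotType": rng.choice(["assisting", "monitoring", "companion"], n),
    "Maleficence": rng.normal(3, 1, n),
})

# Levene's test for homogeneity of variance across the Robot Type groups.
samples = [g["Maleficence"].to_numpy() for _, g in df.groupby("RobotType")]
print(stats.levene(*samples))

# Univariate follow-up ANOVA for this dependent measure.
model = smf.ols("Maleficence ~ C(RobotType)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```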
Pairwise comparisons (Tukey) showed that Acceptance was significantly higher for Companion than for Monitoring robots (ΔM = 0.73, SE = 0.15, p < 0.001, 95% CI = 0.44–1.03) and for Assisting robots (ΔM = 0.46, SE = 0.15, p = 0.002, 95% CI = 0.17–0.75). The difference between Monitoring and Assisting robots was not significant (p > 0.05).
Regarding Maleficence, Assisting robots yielded significantly higher scores than Monitoring robots (ΔM = 0.40, SE = 0.14, p = 0.005, 95% CI = 0.12–0.67) and than Companion robots (ΔM = 1.06, SE = 0.14, p < 0.001, 95% CI = 0.80–1.33). The difference between Monitoring and Companion robots was also significant, with Monitoring robots considered more maleficent than Companion robots (ΔM = 0.67, SE = 0.14, p < 0.001, 95% CI = 0.40–0.93). Utility merely showed a trend, indicating that Monitoring robots seemed less useful than Companion or Assisting robots.
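Such Tukey HSD pairwise comparisons could be obtained as in the sketch below (statsmodels, synthetic data, assumed column names); the same call would be repeated for Maleficence and the other scales.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in data; `Acceptance` represents a participant scale score.
rng = np.random.default_rng(3)
n = 354
df = pd.DataFrame({
    "RobotType": rng.choice(["assisting", "monitoring", "companion"], n),
    "Acceptance": rng.normal(3, 1, n),
})

# Tukey HSD pairwise comparisons of Acceptance between the three robot types;
# the summary lists mean differences, adjusted p-values, and 95% CIs.
tukey = pairwise_tukeyhsd(endog=df["Acceptance"], groups=df["RobotType"], alpha=0.05)
print(tukey.summary())
```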
To answer RQ3, we analysed the extent to which acceptance of a care robot was determined by maleficence and autonomy or by utility. We therefore conducted a multiple regression analysis with interaction terms [35] to predict the Acceptance scores from Autonomy, Utility, and Maleficence and their interactions. Age and Education Level served as control variables (i.e., moderators).
Regarding assumptions, the Durbin–Watson statistic of 2.032 was well within the 1.5–2.5 range, indicating the absence of autocorrelation. The variance-inflation factors (VIF) were acceptable (<5.0), suggesting no multicollinearity due to interdependency of variables (Rogerson, 2001). Three residual outliers (z-scores beyond ±3.29, i.e., the 0.1% most extreme values) were removed. The predictors were mean centred before computing the interaction terms. There were no linearity problems, and the plot of standardized predicted values against residuals showed no heteroscedasticity. Overall, the data were suitable for a regression analysis.
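A minimal sketch of such a moderated regression, including mean-centring and the Durbin–Watson and VIF checks, is given below. The synthetic data, the 0/1 coding of Education Level, and the exact interaction structure are illustrative assumptions rather than the study’s specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in data (column names and coding are assumptions).
rng = np.random.default_rng(4)
n = 354
df = pd.DataFrame({
    "Autonomy": rng.normal(3, 1, n),
    "Utility": rng.normal(3, 1, n),
    "Maleficence": rng.normal(3, 1, n),
    "Age": rng.integers(16, 30, n).astype(float),
    "Education": rng.choice([0.0, 1.0], n),  # 0 = intermediate, 1 = higher (assumed)
})
df["Acceptance"] = (0.2 * df["Autonomy"] + 0.4 * df["Utility"]
                    - 0.3 * df["Maleficence"] + rng.normal(0, 1, n))

# Mean-centre the continuous predictors before building interaction terms.
for col in ["Autonomy", "Utility", "Maleficence", "Age"]:
    df[col + "_c"] = df[col] - df[col].mean()

model = smf.ols(
    "Acceptance ~ Autonomy_c * Utility_c * Maleficence_c + Age_c + Education",
    data=df,
).fit()
print(model.summary())

# Assumption checks: Durbin-Watson for autocorrelation, VIF for multicollinearity.
print("Durbin-Watson:", durbin_watson(model.resid))
exog = model.model.exog
vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
print("max VIF:", max(vifs))
```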
The full model with five predictors (Autonomy, Utility, Maleficence, Age, and Education Level) explained 53.6% of the variance in Acceptance (R² = 0.54, F(5, 348) = 80.26, p < 0.001), also after Bonferroni correction (α = 0.05/5 = 0.01). Adding the interaction terms among Autonomy, Utility, and Maleficence, with Age and Education Level as moderators, contributed a small and only marginally significant (given the Bonferroni correction) increase in explained variance (2%): ΔR² = 0.02, F(6, 343) = 2.54, p = 0.020. The total explained variance of this model was 55.5% (R² = 0.56, F(11, 342) = 38.83, p < 0.001).
Furthermore, potential Maleficence had a significant negative influence on Acceptance (β = −0.33, t(342) = 8.05, p < 0.001). By contrast, increased Autonomy positively influenced Acceptance (β = 0.18, t(342) = 4.39, p < 0.001), as did Perceived Utility (β = 0.41, t(342) = 9.23, p < 0.001). Age also positively influenced Acceptance, but not beyond the Bonferroni-corrected level (β = 0.08, t(342) = 1.99, p = 0.047), whereas Education Level was not significantly related to Acceptance (p > 0.05).
To test the relative weights of moral considerations and utility perceptions in the acceptance of healthcare robots, we performed a hierarchical regression (method: Enter) with Autonomy and Maleficence (both indicating moral concerns) entered as predictors in the first block and Utility in the second block, with Acceptance as the dependent variable. Autonomy and Maleficence together significantly explained Acceptance, R² = 0.48, R²adj = 0.47, F(2,409) = 184.75, p < 0.001, which was mainly due to Maleficence. Utility added a significant 9% of explained variance (ΔR² = 0.09, Fchange(1,408) = 81.97, p < 0.001). The three predictors together explained a substantial amount of variance in Acceptance (R² = 0.56, R²adj = 0.56, Fchange(1,408) = 81.97, p < 0.001), in which the largest part was explained by the health professionals’ moral considerations about care robots.
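The two-block hierarchical regression and the R²-change F test can be sketched as follows; again, the data are synthetic and variable names are assumed for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in data (variable names are assumptions).
rng = np.random.default_rng(5)
n = 412
df = pd.DataFrame({
    "Autonomy": rng.normal(3, 1, n),
    "Maleficence": rng.normal(3, 1, n),
    "Utility": rng.normal(3, 1, n),
})
df["Acceptance"] = (0.2 * df["Autonomy"] - 0.4 * df["Maleficence"]
                    + 0.3 * df["Utility"] + rng.normal(0, 1, n))

# Block 1: moral considerations only; Block 2: add Utility.
m1 = smf.ols("Acceptance ~ Autonomy + Maleficence", data=df).fit()
m2 = smf.ols("Acceptance ~ Autonomy + Maleficence + Utility", data=df).fit()

# R-squared change and the F test for the added block.
r2_change = m2.rsquared - m1.rsquared
k_added = m1.df_resid - m2.df_resid              # number of predictors added
f_change = (r2_change / k_added) / ((1 - m2.rsquared) / m2.df_resid)
p_change = stats.f.sf(f_change, k_added, m2.df_resid)
print(f"Block 1: R2 = {m1.rsquared:.3f}; Block 2: R2 = {m2.rsquared:.3f}")
print(f"dR2 = {r2_change:.3f}, F({k_added:.0f}, {m2.df_resid:.0f}) = {f_change:.2f}, p = {p_change:.4f}")
```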
4. Discussion
The current study examined how trainee healthcare and nursing professionals at intermediate and higher educational levels appraised different types of care robots in terms of moral considerations, perceived utility, and acceptance. We also analysed the relative contributions of perceived utility and ethics to robot acceptance. Results showed that trainee care professionals evaluated assistive robots as more maleficent than either monitoring or companion robots. Companion robots were also more readily accepted as collaborators than monitoring and assistive robots. Furthermore, monitoring robots were considered more maleficent than companion robots. Participants also thought that monitoring robots are less useful than companion or assisting robots. Considerations of autonomy did not differentiate between the robot types. No significant differences between the intermediate and higher educational levels were found. Finally, results showed that moral concerns weigh more heavily than practical utility in accepting to work with a robot, although both contribute significantly and together explain a substantial amount of variance in accepting robots on the work floor.
Healthcare and nursing students were presented with three types of care robots to examine their moral considerations and their acceptance and utility perceptions. Results show that these trainee healthcare professionals saw little harm in companion robots and worried most about assistive robots, irrespective of their schooling. From the debriefing interviews, it appeared that assistive robots were perceived as most maleficent because they can actually physically drop someone or make a wrong move. Importantly, our results provide a more positive view of attitudes toward healthcare robots than previous studies showed. Only several years ago, healthcare students feared that robots would replace them and considered employing robots in care unacceptable and not very useful [26,27,28,29]. Today, trainee healthcare and nursing professionals are willing to accept care robots on the work floor, provided that the technology does not harm the patient and that patient safety is guaranteed. Acceptance levels vary according to the type of robot, with companion robots being the most acceptable and useful.
It has been argued in the extant literature that companion robots that, for instance, can play the favourite music of an elderly patient suffering from dementia could alleviate loneliness and increase feelings of well-being [36,37,38]. Hence, a robot might provide a perfect interface for accessing such a resource, providing music as treatment with a greater degree of interactivity, simulating personal interaction with a human being.
Results further showed that morality was more important than utility in accepting a robot in care, although utility was not trivial. This is an important addition to prevailing theories of technology acceptance [25], which primarily focus on utility and ease of use but neglect possible moral or ethical considerations. The position of the participants in our study is in line with Stephany and Majkowski [24] (p. 131), who opposed the trend that utility considerations often overthrow a “moral sense of care.” In our study, morality and utility together explained a substantial amount of variance in the willingness to accept healthcare robots. Notably, when considering the relative weights in a hierarchical regression analysis, the largest part was explained by moral considerations such as robots doing harm or increasing a patient’s independence. Thus, indeed, moral senses of care overthrow utility considerations in (trainee) care professionals.
Our findings suggest that care robots have the potential to solve urgent problems in care [10], provided that, before implementation, the moral concerns of healthcare and nursing professionals are taken into account. Our results indicate that each type of robot should follow its own line of introduction, because their different functionalities come with different moral concerns. Companion robots may take the lead and pave the way because they were seen as highly useful, most acceptable, and least harmful. For the development and implementation of care technology, our results suggest that companion robots can be employed right away without much resistance from caregivers, both on the work floor (lower/intermediate vocational) and in operational management (higher vocational). It also means that more work has to be put into making robots less threatening. Particularly for assisting machines, this means that robots performing physical tasks should comply with the highest safety regulations and technology standards. Technically, they should be flawless, which should be demonstrated by extensive user tests. Compare, for example, the aviation industry, where fear of flying is countered by showing that flying is the safest form of transportation available [39].
A limitation might be that the measurement of the moral considerations was based on the principles of medical ethics [20], which are generally used to assess medical and care procedures, treatments, interventions, and technologies. However, no clear measurements of these constructs existed yet (cf. the available measurements for utility and acceptance [25]). Therefore, we carefully constructed items following the definitions of these four basic principles of medical ethics. Psychometric analyses showed that the items for “justice” could not be indexed as a reliable scale. Furthermore, the items for “non-maleficence” and “beneficence” showed overlap and had to be merged into one scale rather than treated as two separate constructs. Although “non-maleficence” and “beneficence” are not simply each other’s opposites [20], the overlap in empirical observation is comprehensible. Finally, all scales used for the analyses were internally consistent.
Another methodological consideration is the lower response rate of participants at the lower/intermediate educational level. Therefore, the subsamples in the current study were not equally distributed. Even though MANOVA is a robust technique and Box’s test showed that the assumption of homogeneity of the variance–covariance matrices was not violated [34], it is desirable that groups are of similar size. The relatively low overall response rate (7065 persons were approached, of whom only 406 completed the questionnaire) is due to a mistake in the planning of our study. School holidays are scheduled at different times across regions, and the questionnaire happened to be sent to the students’ school e-mail addresses during a school holiday. Moreover, the deadline for completing the questionnaire fell within this holiday period. Therefore, most likely, many students did not open their school e-mail. For this exploratory study, we reasoned that the number of participants was still sufficient to provide interesting insights.
Relatedly, it is interesting to note that 75% of all vocationally educated participants at the intermediate level attended a Christian (Reformed) school. Because their holidays were scheduled differently from those of the secular schools, this might explain their relatively higher response rate. Whether religion could have any influence on the acceptance of care robots is unclear; it might be interesting to include religion in future research.
An implication of our results is that moral considerations are important in professional healthcare. Previous research emphasized the responsibilities of nurse educators and healthcare employers to provide learning opportunities for new care professionals in technical skills, to maintain patient safety, and to provide “good care” [40]. To achieve these goals, nursing students and trainee care professionals should understand the importance of using evidence-based guidelines and develop a reflective approach toward the performance of technical tasks. This requires nuanced care education that welcomes innovative technology, while such innovations should also be consistently tested against ethical principles. With the increasing importance of healthcare technology, it is imperative that (trainee) healthcare professionals learn skills and gain knowledge concerning health technology and medical informatics [41,42]. This should also include ethical considerations.