1. Introduction
Since Apple introduced Siri in 2011 [1], the concept of voice-activated virtual assistants (VAs) has been widely adopted by many users—from the phones in their pockets to the watches on their wrists—and significantly influences their daily lives. With technological advancements, smart or connected home devices have become more accessible and more powerful than ever. The introduction of VAs helped push smart homes into the mass market, as VAs free users from the need to connect numerous apps to devices by offering simple and natural voice dialogues.
VA functionalities available in the mass market remain limited to basic tasks, such as information retrieval and simple entertainment [2,3]. Technology reviews and research have debated whether VAs are useful in users’ daily lives [4,5,6].
The research community has paid very limited attention to VA design methods that involve users. Hence, this project attempts to fill this gap. The primary focus is how to involve users in the design process while managing their expectations of future technologies.
2. Virtual Assistants (VAs): Voice-Based Interactions and Privacy Concerns
VAs are usually given a humanized voice (typically a female voice by default); notably, the VAs’ personalities impact user behavior [7]. Researchers have examined the effects of and concerns regarding VAs and investigated how voice affects people’s attitudes toward and usage of VAs [8]. VAs are capable of recognizing different users’ voices in order to provide personalized multi-user support (currently available only to English-language users).
VAs relying on voice-based interactions imply that the devices are always listening to people’s conversations and, as a result, raise privacy concerns despite their benefits [9]. To provide the most seamless interaction, devices with VAs are usually designed to be in the “always listening” or even “always watching” mode, aiming for instant responses. Companies such as Google and Amazon accomplished this by combining the power of cloud computing with AI/ML. Nonetheless, they were caught violating users’ privacy, for example, by allowing employees to listen to users’ conversations with their VAs [10,11,12]. Meanwhile, Apple marketed Siri with user privacy as its highest priority; however, this raised concerns regarding Siri’s limited skills and inaccurate responses.
3. Literature on Engaging Users in VA Design
A user study is one of the common methods of collecting users’ feedback on design ideas for intelligent agents, e.g., [13]. User studies are often conducted with prototypes that can offer the interactions proposed in a design concept. Researchers can conduct user studies even before the ideas are technically ready to be made into a prototype, for example, with the Wizard of Oz method, e.g., [14]. Although user studies with the Wizard of Oz method offer designers a way to engage users in the design process of VAs, user studies are mainly suitable for collecting feedback on an existing design concept rather than exploring design ideas together with users.
Lee et al. [15] presented a modified version of the Wizard of Oz method, which asked one participant (‘the VA participant’) to take the role of the VA and interact with another participant (‘the user participant’). Both problems and solutions were explored in their method. In each session, the user participant interacted with a VA whose behavior was simulated by the VA participant. By observing the interactions between the participants, the researchers could collect design insights. They qualitatively analyzed the VA behavior simulated by the users. The authors acknowledged that they obtained unexpected and interesting design insights.
4. Asking What Users Want: A Preliminary Survey Using Pop Culture as a Reference for Future Technologies
People’s understanding of artificial intelligence (AI) technologies stems from films and popular media, among other sources. Their understanding of AI is primarily based on AI as portrayed in pop culture, which may not be accurate or reflect the most recent developments. According to Pilling and Coulton [16], pop culture presents a myth of AI, thus impacting people’s perception of it. Furthermore, the authors argued that a proper understanding of AI technology is essential not only for tech professionals designing intelligent products and services but also for users in general, because AI is progressively becoming embedded in people’s daily lives. Nevertheless, using pop culture to portray AI can be beneficial, as it is easily understood by end users. The initial part of the current project explores pop culture as a reference for future technologies by collecting users’ VA design preferences.
A preliminary survey was conducted via a 38-question online questionnaire. The questionnaire consisted of four parts: demographics, current VA usage experience, VAs in pop culture, and VAs in the future. The 38 questions are shown in Table A1 in Appendix A. The survey order aimed to help participants envision the future of VAs, starting from their personal experience and aided by the influence of pop culture. Regarding the influence of pop culture, three movie trailers with dominant VA characters were selected: Her, Jexi, and Iron Man. The participants were asked whether they would like to own the VA from each movie and why. These initial questions aimed to make use of fictional plots as a reference for future technologies.
Using snowball sampling, the survey received 42 responses (female: 59.5%; male: 40.5%). The respondents’ ages ranged from 18 to 49 years. The majority of the respondents had a bachelor’s degree or higher and spoke Cantonese as their mother tongue. The main reasons respondents stopped using their VAs were low autonomy, a lack of skillsets, and the fact that most respondents viewed VAs only as a feature. Privacy and functionality were rated as the most important criteria for VAs. However, availability, another important VA criterion, showed a mixed distribution in terms of priority. This suggests that the availability of VAs could be an underlying need for the respondents. Apart from the fictional VAs mentioned previously, a few respondents expressed that they would like to have Doraemon, the blue robotic cat from a Japanese comic franchise, as a VA, which is not virtual at all.
By clustering the responses according to whether the respondents wanted the fictional VAs or not, some interesting observations were made. The first group believed that the fictional VAs are too human-like and stated that they prefer human interaction. In contrast, respondents who wanted the VAs described them as companion-like and human-like, using other positive adjectives typically reserved for describing humans. In other words, respondents embraced and rejected VAs for the same reason: they are human-like. Ostensibly, respondents fantasized about the future; however, even though the survey flow was purposely set up to help them think outside the box, the respondents generated very practical tasks or features that already exist.
The preliminary survey results suggest that pop culture can serve as a reference for future technologies and assist users in understanding AI accurately. However, it may simultaneously limit the users’ imagination. As such, this project explored the possibility of using co-design workshops with design fiction as a method of engaging users in VA design.
5. Co-Design Workshops with Design Fiction
In the next phase of the current project, co-design workshops were adopted to engage users in designing VAs. In co-design practices, which actively involve users, a clear understanding of AI technologies on the users’ part is essential in order to generate design ideas [17]. However, explaining AI technologies to users can be extremely difficult. Nonetheless, user-centered, rather than algorithm-focused, explainable AI has recently received attention from the research community [18].
It is logical to predict that future AI technologies will continue to advance. However, guiding users to appropriately envision how advanced these technologies could become is challenging. Schurig and Thomas assumed that future AI would be powerful [17], although it is unclear from their study how powerful it could become. Additionally, they acknowledged that AI technology advances every day, thus making it difficult to define precisely. Their analysis used the concept of smartness and thereby formed a dumb–smart continuum for analyzing products. This presents an anthropomorphic view that equates the smartness of AI with human intelligence. However, smartness is an ambiguous concept, making it difficult to generate and discuss designs without a clear common understanding of the technologies.
While co-design workshops proved to be a viable method of engaging users in VA design, the workshop protocol required a way to help participants imagine the future of AI technologies. For this purpose, this project adopted design fiction.
5.1. Methods and Materials
Design fiction is a way of exploring different approaches to design, probing the material conclusions of one’s imagination, and removing the usual constraints when designing for massive market commercialization [19]. The combination of design artifacts and story narratives helps users think and explore the possibilities of the future. These approaches allow us to remove constraints when designing for commercialization in the real world [19].
The workshops aimed to use design fiction to explore how VAs could address users’ needs. Thus, the design fiction would have to resemble participants’ lifestyles and have props and settings that invite participants to use their imagination. Combining insights and research from the earlier phase of the project, this project’s design fiction world was built on the following future-setting description points:
5.2. Future-Setting Description 1: The World Does Not Only Depend on Small Screens (e.g., Smartphones)
Since smartphones gained popularity in 2007, users have become accustomed to interacting with mobile devices via touchscreens. Touchscreen-based smartphones permeate every aspect of daily life, from working to dating, even in situations where they are not required. This description point attempted to encourage workshop participants to look beyond touchscreen-based interactions.
5.3. Future-Setting Description 2: VAs Can Access All Forms of Knowledge
There are two important concepts in this setting: “knowledge” and “can access.” Knowledge here refers to the intangible, non-actionable information and wisdom accumulated throughout human history. In addition, space was intentionally left for users to decide what kind of knowledge, and how much of it, their VAs could access. Assuming that VAs knew everything by default was believed to limit the users’ exploration of ideas.
5.4. Future-Setting Description 3: VAs Do Not Necessarily Have a Physical Body
When people talk about future technologies, they often conflate AI with robots, which is connected to the myth that AI will eventually replace humans. This description point attempted to leave open how users envision VAs: whether they remain purely virtual companions or are given a physical body to complete their tasks.
5.5. Future-Setting Description 4: VAs Can Socialize with Other VAs
The typical interaction pattern between a VA and a user is that the latter seeks answers or assistance from the “machine.” This interaction pattern is linear and straightforward; the VA does little beyond providing feedback on success, executing an action, or offering a simple apology when it cannot fulfill a request. The interaction remains a one-to-one relationship. In contrast, when a human asks another human for help, the pattern is a dynamic interpersonal interaction. Even if the person being asked cannot resolve the request, (s)he may refer to someone (s)he knows who can help, which, in sociology, is understood as social capital [20]. Furthermore, new topics or requests may emerge during social interactions, possibly introducing new subjects. Hence, this future-setting description was included to determine whether VAs could socialize and have their own communities (similar to Dobby in Harry Potter), in which VAs could share knowledge and information or even gossip with each other.
5.6. Future-Setting Description 5: VAs Can Learn from Users’ Behavior
This future-setting description aims to allow users to tailor and customize their VAs in a way that best suits their daily requirements. VAs can evolve to fit the users’ needs rather than follow a “one-size-fits-all” AI model. Thus, users should gain a greater sense of personalization with this setting, as different VAs can behave differently according to different user behavior.
5.7. Future-Setting Description 6: No Magic—The Laws of Physics Need to Be Obeyed
To avoid unnecessary loss of focus during the co-design process, this extra future-setting description was added to the fictional world to prevent participants from being overimaginative (e.g., generating teleportation ideas). The intention was to remain as realistic as possible.
The fictional world of the design fiction specified above aimed to manage the workshop participants’ imagination of future AI technologies. A series of co-design workshops was conducted. The goal was to observe how participants envisioned their ideal VAs in the fictional world specified previously. The workshops were framed as follows: the workshop organizer role-played a customer representative from a company called “Very Advanced Corp.” and invited participants to test and customize its new VAs.
There were three tasks in the workshop: (1) identify life goals or important things in their lives; (2) visualize the timetable of their daily routine and a typical day of leisure; and (3) sketch or write out features of a VA that could benefit them in their daily lives. The first two tasks were designed to prepare participants for the third task by having them reflect on their needs and desires in daily life. All three tasks followed the same procedure: first, the coordinator briefed the participants on the task, and then the participants performed it. Once they finished, the coordinator invited the participants to share their ideas with the others and might ask follow-up questions to clarify the participants’ ideas.
To capture the most realistic needs of and feedback from real users, recruitment targeted participants from non-design disciplines and diversified sectors. In total, seven co-design workshops were conducted with 21 participants aged 23–61 years; 76% of them were from non-design disciplines. Of the seven workshops, six were audio recorded; the first workshop was not recorded because of technical issues.
6. Results
The ideas generated by the workshop participants were analyzed qualitatively. The qualitative analysis followed these rules:
Particular attention was paid to ideas that followed patterns commonly mentioned by multiple participants.
Only the ideas explicitly expressed by the participants were analyzed.
If insights were inferred from the observations, they would be specifically stated.
The observations and insights are summarized below.
6.1. The Greater the Level of Daily Life Routine, the Weaker the Desire to Use VAs
Participants were clustered into two groups based on the task 2 outcomes: (1) a typical weekday schedule and (2) a highly organized weekday schedule. As the majority of the participants had full-time jobs, a structured weekday schedule was generally expected; outcomes that still showed some variation in activities were placed in the typical weekday schedule cluster. Some participants—NDF5, NDM9, NDF13, and NDF14 (labeling convention: ND = non-designer; D = designer; M = male; F = female; each label ends with a reference number)—expressed their weekdays in an extremely routine manner, stating timestamps down to the minute. This observation is not intended as a judgement of the individuals’ lifestyles; rather, when combined with the task 3 outcomes, it shows that this participant cluster shared VA design/interaction patterns. They rarely utilized VAs in their personal lives. Participant NDM9 shared with the group that he enjoyed his self-assigned engagement time with the VA but that those moments were not essential. Others in the cluster behaved similarly; they hardly created non-essential features, such as preparing an enjoyable video playlist or preparing ingredients for a meal they enjoyed. Furthermore, some openly stated that they did not “need” a VA.
6.2. The Term “Assistant” Makes Participants Consider the Task Completion Direction
Participant NDF8 explicitly stated that her ideas were bound by the word “assistant” and that she therefore assigned tasks to her VA; participants in the same session shared similar sentiments. A similar pattern was observed in other sessions, as the participants’ designs remained mostly task-oriented. Furthermore, it was observed that participants could easily assign VA features or tasks that directly benefitted their daytime jobs, whereas few could as easily design features for their personal lives. Moreover, participants rarely associated VAs with emotional and/or social support.
6.3. Participants Coupled a “Sense of Control” with VA Intelligence
Almost all the participants were against the idea of VAs being proactive; they preferred the VAs to show up on request. There was an interesting observation regarding the participants’ views on VA proactivity. The coordinator asked participants whether they would prefer the VAs to know the answers to their queries and proactively inform them; the participants stated that they did not care. With VAs only responding to their queries, participants felt that they were in charge of their own lives. Additionally, many participants preferred the VAs not to make decisions by themselves, even if they were capable of doing so. Instead, participants preferred that VAs inform them and ask for instructions on every decision. In fact, some of them explicitly voiced that they did not want their VAs to be “wise”; their VAs should be just good enough to complete the required tasks. However, it was observed that participants shifted their viewpoints concerning VA proactivity and intelligence when the coordinator presented different scenarios.
6.4. Participants Have Multiple Dimensions of VA Expectations, Which Vary across Different Contexts of Their Daily Lives
Throughout the sharing and discussions in task 3, even though the participants may have expressed that they did not want the VAs to be smart, they saw their designed VAs as intelligent entities or, more precisely, as human assistants. Generally, the participants could not precisely list the features and requirements of the VAs, and they tended to describe the VAs’ tasks in unorganized ways. However, when the coordinator asked them to elaborate, they would typically share a common scenario. The coordinator, or sometimes other participants, would then follow up by asking about alternate what-if scenarios. Notably, participants would shift their expectations to meet the new scenario, and they usually concluded with: “The VAs should ask me what to do” or “I will take over if I find it necessary.”
In addition to these varying expectations, it was found that participants had multiple dimensions of expectations of their VAs, which are not directly related to skillsets but rather to more complex behavioral expressions. The dimensions that the workshop participants commonly mentioned are summarized here:
Level of vocalness—to what extent VAs proactively inform and/or speak to the users;
Level of autonomy—to what extent VAs make decisions for the users;
Level of sociability—to what extent VAs socialize with other VAs;
Level of privacy—to what extent VAs protect the users’ information;
Analytical vs. factual—to what extent VAs analyze and/or summarize for the users instead of just reporting the facts;
Acting like “me” vs. acting like a machine—to what extent VAs behave like a person instead of a machine.
Participants noticed that the dimensions could be mixed and matched. Participant NDM16 wanted his VA to be analytical and vocal, with a low level of autonomy; contrastingly, participant NDF11 wanted her VA to be less vocal with high autonomy on the same tasks. This suggested that the participants were looking for VAs with unique “characters” in different contexts.
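To make the mixing and matching of these expectation dimensions more concrete, the following Python sketch models them as a simple data structure. This is purely illustrative: the class, the 0.0–1.0 value scale, and the example values attributed to NDM16-like and NDF11-like profiles are assumptions made for this sketch and were not part of the workshop materials or analysis.
```python
from dataclasses import dataclass

# Hypothetical data model: each expectation dimension is expressed as a value
# between 0.0 and 1.0. The dimension names follow the list above; the class
# and the example values are illustrative only.

@dataclass
class VAExpectationProfile:
    vocalness: float    # how proactively the VA informs/speaks to the user
    autonomy: float     # how freely the VA makes decisions for the user
    sociability: float  # how much the VA socializes with other VAs
    privacy: float      # how strictly the VA protects the user's information
    analytical: float   # 0 = purely factual reporting, 1 = fully analytical
    act_like_me: float  # 0 = behaves like a machine, 1 = behaves like "me"

# Two contrasting profiles for the same tasks, loosely echoing participants
# NDM16 (analytical and vocal, low autonomy) and NDF11 (less vocal, high autonomy).
ndm16_like = VAExpectationProfile(vocalness=0.8, autonomy=0.2, sociability=0.5,
                                  privacy=0.6, analytical=0.9, act_like_me=0.4)
ndf11_like = VAExpectationProfile(vocalness=0.2, autonomy=0.9, sociability=0.5,
                                  privacy=0.6, analytical=0.7, act_like_me=0.4)

print(ndm16_like)
print(ndf11_like)
```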
7. Diegetic Prototype: The Regulator
Based on the workshop insights, one major observation was that when users did not need to worry about whether their understanding of future AI technologies was accurate, they focused more on what they could control than on the features/skillsets they wanted. It was also observed how participants’ distrust in technology impacted the way they perceived the future. At present, technological developments appear to be heading toward highly transparent interfaces, amplifying the black-box problem of technology [18]. People do not trust what they cannot see. Moreover, it was noticed that the relationship between participants and their VAs resembled that with their pets or children in real life, just without the physical form.
One of the key findings of the workshops was the diversity of VA expectations across different dimensions, which ranged from functional perspectives to highly conceptual ideas (e.g., acting like “me”). During the workshops, it was impossible to conclude which dimensions were the most important; however, it was noticed that participants shifted their VA expectations throughout the day. This suggests that the context, which includes a dynamic combination of space, time, and the other individuals involved, is also an important criterion for people’s varying VA expectation dimensions. The context-awareness notion and context-shifting properties of VAs resemble the notion of continuously adapting assistance addressed by Islas-Cota et al. [21]. The workshops’ new insight is that users prefer to control the change in context themselves rather than letting VAs detect and adapt to it. This contrasts with previous research proposing that users program their own VAs [22].
This project aimed to design a diegetic prototype that was tangible, offered a simple affordance of control, and related to the expectation dimensions identified in the co-design workshops. Based on these insights, people expect control over different dimensions of VA properties; thus, slider controls were selected for the prototype.
The prototype was designed and constructed with a 3D printer and then shown to some of the workshop participants during follow-up sessions to obtain their feedback. The prototype was similar in size to a smartphone and equipped with two sliders—autonomy and vocalness. With a basic understanding of the design fiction from the previous workshops, the participants immediately identified it as a VA controller and asked questions, including “How do I know if I set it correctly?” and “Can I change it to other dimensions?”. Their comments and questions concerning the settings were incorporated into the next iteration.
The final concept/prototype is called The Regulator (Figure 1), a portable device with user-controlled sliders for different VA properties. The portable form factor aimed to provide control on the go. The swappable knobs let users decide which two aspects they want to regulate, for example, the autonomy and vocalness levels. The swappable knob suggests a new perspective for future consumer smart agents: as AIs have evolved to the point where they know more than general users, users no longer readily utilize all the features/functions of intelligent agents. Users can use The Regulator to control their VAs to the extent they deem personally beneficial in their daily lives.
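Purely as an illustration of how swappable sliders could translate into VA settings, the sketch below models a two-slider controller whose sliders can be reassigned to any of the expectation dimensions. The class, method names, and 0–100 reading scale are hypothetical assumptions for this sketch, not a description of the prototype’s actual implementation; the usage at the end loosely mirrors Scenario 1 below (privacy: medium; sociability: high).
```python
from dataclasses import dataclass, field

# Hypothetical sketch: the dimension names follow the workshop findings above;
# everything else (class, methods, 0-100 readings) is assumed for illustration.

DIMENSIONS = ("vocalness", "autonomy", "sociability", "privacy",
              "analytical", "act_like_me")

@dataclass
class Regulator:
    # Which expectation dimension each physical slider currently controls
    # (the swappable-knob choice) and the latest raw readings (0-100).
    slider_a: str = "autonomy"
    slider_b: str = "vocalness"
    readings: dict = field(default_factory=lambda: {"a": 50, "b": 50})

    def swap(self, slider: str, dimension: str) -> None:
        """Reassign a slider to a different expectation dimension."""
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if slider == "a":
            self.slider_a = dimension
        elif slider == "b":
            self.slider_b = dimension
        else:
            raise ValueError("slider must be 'a' or 'b'")

    def set_reading(self, slider: str, raw: int) -> None:
        """Record a raw slider position, clamped to 0-100."""
        self.readings[slider] = max(0, min(100, raw))

    def settings(self) -> dict:
        """Return the user's current VA settings on a 0.0-1.0 scale."""
        return {
            self.slider_a: self.readings["a"] / 100,
            self.slider_b: self.readings["b"] / 100,
        }

# Usage loosely mirroring Scenario 1 (privacy: medium; sociability: high).
regulator = Regulator()
regulator.swap("a", "privacy")
regulator.swap("b", "sociability")
regulator.set_reading("a", 50)   # medium privacy
regulator.set_reading("b", 90)   # high sociability
print(regulator.settings())      # {'privacy': 0.5, 'sociability': 0.9}
```
The design choice this sketch tries to capture is that the user, not the VA, decides which dimensions are exposed for adjustment at any given moment.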
The following paragraphs present four hypothetical use scenarios for The Regulator, told from the perspective of hypothetical VA owners.
Scenario 1 (Level of privacy: medium; Level of sociability: high)—“Go and gossip with my colleagues’ VAs for me, but don’t disclose too much of my information.”
“Networking is very important for business. It is the best way to stay updated with current events in the market. I am happy that my VA can socialize with other VAs, and now I can have a new way of reaching out and networking with others. The Regulator allows me to control the extent to which my VA can socialize with other VAs. It is like how I pet my dog in the park.”
Scenario 2 (Level of sociability: low; Level of vocalness: high)—“Please tell me what I should do to comfort him when he cries.”
“Being a parent of a newborn is tough. I am very busy taking care of everything for my little boy. I am grateful that my VA can tell me what to do when she senses I am in trouble. She also helps me to apologize to my neighbors when the little one is crying too loudly in the middle of the night. However, of course, I do not want my parents to know that their grandson is crying all the time. I will adjust the level of vocalness when we are all asleep.”
Scenario 3 (Level of autonomy: low; analytical vs. factual: medium-high towards analytical)—“Just tell me what you think and what you find, I will make the decision.”
“We must admit that technologies do outperform humans in some aspects, such as analyzing data. Why not give it a chance? My VA can do the analysis my staff used to do in the old days. I am still the one making all the decisions. I am fine with having my VA organize my personal life with a higher level of autonomy, but definitely not my work.”
Scenario 4 (Level of autonomy: low; Level of vocalness: low)—“I would like to do things by myself.”
“I enjoy brainstorming and interacting with nature. I know I can benefit from a VA in handling some of my daily tasks, such as registering government services, but I still want to maintain my distance from AI technologies in my daily life.”
8. Discussion
In today’s rapidly changing world, it is very difficult for designers and users to envision the future. From the co-design workshops, it was noticed that users had fallen behind technological development. Technologies are too opaque for users to completely understand. The workshop results suggest that there is an urgent need to invent a novel means of restoring a sense of control between technologies and their users. Moreover, the workshop outcome suggests that a function/feature can have different impacts, both positive and negative, across different contexts. This is especially the case when designing ubiquitous technologies.
Based on the survey and workshop results, it is impossible to ignore the influence of pop culture on how people envision future technologies. This study supports the adoption of design fiction as a means of incorporating non-professional participants in the design process.
The current project demonstrated that design fiction could provide a common ground regarding future technologies when users engage in designing future VAs. It is challenging to consolidate every participant’s expectations and vision of future technologies into a co-design process. Design fiction offers a future world for users to immerse themselves in, within which they can explore VA design ideas and express their own designs and inspirations for VAs. Using the description points of a design fiction world, designers can offer users a common ground regarding future technologies without limiting the users’ imagination once they generate their initial ideas. Carefully crafting the descriptions of the world presented in the design fiction therefore becomes critical to managing users’ expectations of future technologies.
The current project’s description points are textual descriptions of the future world in the design fiction. There are other approaches to describing the world of a design fiction, such as a future product catalog [23]. However, this project’s results reveal that a textual format effectively enables users to understand and imagine what the future world could be. Nevertheless, there are pros and cons to textual forms of describing the world. Visual forms may appear more interesting to users and be easier to understand because of their concreteness. However, visual materials may unintentionally introduce bias because concrete scenarios and/or products are shown. Textual descriptions offer more room for the users’ imagination to explore the design space because of the absence of visual materials; a potential drawback is that they may be more abstract, and users may have difficulty immersing themselves in the world and imagining what type of VA designs they would require in it. Future research can compare the effects of textual and visual descriptive forms of design fiction worlds for the purpose of engaging users in VA design.
9. Conclusions
The current project highlights that the problem concerning VAs is not only about features and skillsets but also about how people perceive future technologies, as the recent presentations of most VAs attempt to blur the line between humans and technology. While a few major technology companies control and develop today’s VAs, they attempt to introduce new VA features every year.
Based on this study’s design fiction, however, an alternate possibility for VAs is suggested. When facing the unknown future of technology, people are more concerned about the object in their hands than about its features. People may not know what they want in the future, but they know what they do not want. The findings suggest that people expect an experience-based, relationship-driven interaction with technology.
Author Contributions
Conceptualization, H.C.H.L. and J.C.F.H.; methodology, H.C.H.L.; formal analysis, H.C.H.L.; investigation, H.C.H.L.; resources, H.C.H.L. and J.C.F.H.; data curation, H.C.H.L.; writing—original draft preparation, H.C.H.L.; writing—review and editing, J.C.F.H.; visualization, H.C.H.L.; supervision, J.C.F.H.; project administration, H.C.H.L.; funding acquisition, J.C.F.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted according to the guidelines of the Declaration of Helsinki and the law in Hong Kong SAR.
Informed Consent Statement
Informed consent was obtained from the participants to publish this paper.
Data Availability Statement
Data are available on request due to restrictions (privacy and consortial restrictions) in a pseudonymized and aggregated form from the corresponding author.
Acknowledgments
We would like to thank all the respondents to the online survey and participants in the co-design workshops.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Table A1.
The 38 questions used in the survey are presented in the table below.
# | Question |
---|---|
1 | What is your age group? |
2 | What is your gender? |
3 | What is your highest qualification? |
4 | What is your mother tongue? |
5 | Excluding yourself, how many people are you living with? |
6 | Do you, or the family you live with, employ any domestic helper? |
7 | Which VAs have you interacted with? |
8 | Which language does your VA speak? |
9 | What is the language of your mobile operating system? |
10 | How often do you interact with your VA? |
11 | Please check all the locations/venues where you have interacted with a VA |
12 | Other than your phone, do you have any VA-enabled devices (like a smart speaker or smart watch)? |
13 | What are the reasons you interact with a VA? |
14 | What are the reasons that stop you from interacting with a VA? |
15 | What is your attitude toward VAs? |
16 | Rate your skill level of interacting with VAs |
17 | Does your VA know about your social relations? |
18 | Do any of your social networks (Facebook, Instagram, WeChat, etc.) know about your social relations? (Like who your family members or close friends are in the REAL world) |
19 | Please prioritize the importance of criteria for VAs |
20 | Please rate your level of trust in a “voice-based” VA (like Siri or Alexa) |
21 | Please rate your level of trust in a “text-based” VA (like a chatbot) |
22 | I would like to have a VA like Samantha |
23 | Follow up to #22: Why? |
24 | I would like to have a VA like Jexi |
25 | Follow up to #24: Why? |
26 | I would like to have a VA like Jarvis |
27 | Follow up to #26: Why? |
28 | Do you have any VAs in mind that you would love to have but we did not cover (anything from a movie, anime, book, etc.)? |
29 | Can you share the names and sources with us? |
30 | I think a virtual assistant should work/behave like ____________ |
31 | What would be the pronoun for your VA? |
32 | If you could name your VA, what would the name be? |
33 | If you could add features/functions to your VA at no cost and with no technical limitations, what would they be? |
34 | Let us dream of Tomorrowland: I wish my VA could ____________ |
35 | Follow up to #34: Any more things you wish for your ideal VA? |
36 | In exchange for my ideal VA’s functionalities, I am willing to grant the following access/functions to my VA |
37 | Try to complete this sentence: “I want my VA to be able to _____________ so that I can ____________” |
38 | Any interesting/awkward moments with a VA you can share? |
References
- Apple Launches iPhone 4S, iOS 5 & iCloud. Available online: https://www.apple.com/newsroom/2011/10/04Apple-Launches-iPhone-4S-iOS-5-iCloud/ (accessed on 26 September 2021).
- Pyae, A.; Joelsson, T.N. Investigating the Usability and User Experiences of Voice User Interface: A Case of Google Home Smart Speaker. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, Barcelona, Spain, 3–6 September 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 127–131.
- Brownlee, M. The Voice Assistant Battle! Available online: https://youtu.be/BkpAro4zIwU (accessed on 2 December 2021).
- 8 Best Smart Speakers (2021): Alexa, Google Assistant, Siri. WIRED. Available online: https://www.wired.com/story/best-smart-speakers/#ten (accessed on 26 September 2021).
- The Downfall of the Virtual Assistant (So Far). Computerworld. Available online: https://www.computerworld.com/article/3403332/downfall-virtual-assistant.html (accessed on 26 September 2021).
- We’re Still Not Getting Voice Assistants Right—The Verge. Available online: https://www.theverge.com/2019/8/28/20835294/bbc-beeb-intelligent-assistant-alexa-google-cortana-siri (accessed on 26 September 2021).
- Poushneh, A. Humanizing Voice Assistant: The Impact of Voice Assistant Personality on Consumers’ Attitudes and Behaviors. J. Retail. Consum. Serv. 2021, 58, 102283.
- Pollmann, K.; Ruff, C.; Vetter, K.; Zimmermann, G. Robot vs. Voice Assistant: Is Playing with Pepper More Fun than Playing with Alexa? In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 395–397.
- Alepis, E.; Patsakis, C. Monkey Says, Monkey Does: Security and Privacy on Voice Assistants. IEEE Access 2017, 5, 17841–17851.
- What Private Conversations and Intimate Moments Do Google Home and Amazon Alexa Actually Record? Available online: https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-home-recordings-listen-privacy-amazon-alexa-hack-a9002096.html (accessed on 27 September 2021).
- Malkin, N.; Deatrick, J.; Tong, A.; Wijesekera, P.; Egelman, S.; Wagner, D. Privacy Attitudes of Smart Speaker Users. Proc. Priv. Enhancing Technol. 2019, 250–271.
- Lau, J.; Zimmerman, B.; Schaub, F. “Alexa, Stop Recording”: Mismatches between smart speaker privacy controls and user needs. In Proceedings of the Poster at the 14th Symposium on Usable Privacy and Security (SOUPS 2018), Baltimore, MD, USA, 12–14 August 2018.
- Azmandian, M.; Arroyo-Palacios, J.; Osman, S. Guiding the Behavior Design of Virtual Assistants. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Paris, France, 2–5 July 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 16–18.
- Kim, K.; Park, M.; Lim, Y. Guiding Preferred Driving Style Using Voice in Autonomous Vehicles: An On-Road Wizard-of-Oz Study. In Proceedings of the Designing Interactive Systems Conference 2021, Online, 28 June–2 July 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 352–364.
- Lee, S.; Lee, J.; Lee, K. Designing Intelligent Assistant through User Participations. In Proceedings of the 2017 Conference on Designing Interactive Systems, Edinburgh, UK, 10–14 June 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 173–177.
- Pilling, F.; Coulton, P. Forget the Singularity, Its Mundane Artificial Intelligence That Should Be Our Immediate Concern. Des. J. 2019, 22, 1135–1146.
- Schurig, A.; Thomas, C.G. Designing the Next Generation of Connected Devices in the Era of Artificial Intelligence. Des. J. 2017, 20, S3801–S3810.
- Liao, Q.V.; Gruen, D.; Miller, S. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–15.
- Bleecker, J. Design Fiction: A Short Essay on Design, Science, Fact and Fiction. Near Future Lab. 2009. Available online: https://blog.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/ (accessed on 2 December 2021).
- Coleman, J.S. Social Capital in the Creation of Human Capital. Am. J. Sociol. 1988, 94, S95–S120.
- Islas-Cota, E.; Gutierrez-Garcia, J.O.; Acosta, C.O.; Rodríguez, L.-F. A Systematic Review of Intelligent Assistants. Future Gener. Comput. Syst. 2022, 128, 45–62.
- Fischer, M.H.; Campagna, G.; Choi, E.; Lam, M.S. DIY Assistant: A Multi-Modal End-User Programmable Virtual Assistant. In Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation, Virtual, Canada, 20–25 June 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 312–327.
- Brown, B.; Bleecker, J.; D’Adamo, M.; Ferreira, P.; Formo, J.; Glöss, M.; Holm, M.; Höök, K.; Johnson, E.-C.B.; Kaburuan, E.; et al. The IKEA Catalogue: Design Fiction in Academic and Industrial Collaborations. In Proceedings of the 19th International Conference on Supporting Group Work, Sanibel Island, FL, USA, 13–16 November 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 335–344.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).