A Dialogue System That Models User Opinions Based on Information Content
Abstract
1. Introduction
2. Existing Research
3. Proposed Method
3.1. Opinion Model
3.2. Dialogue System
3.3. Data Collection
- Nouns included in multiple menu names were used as genres.
- One genre was adopted for each menu item.
- If a menu item had multiple components, the main one was used.
- When the genre could be inferred from the restaurant where the menu item is served, that genre was assigned.
- If there were abbreviations or synonyms, they were treated as the same (a minimal code sketch of these rules follows this list).
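The rules above can be read as a small rule-based labeling procedure. The following is a minimal sketch of how they might be applied; the genre nouns, synonym table, and restaurant-to-genre mapping are illustrative placeholders, not the data actually collected.

```python
# Minimal sketch of the genre-labeling heuristics described above.
# The dictionaries below are illustrative placeholders, not the paper's data.

GENRE_NOUNS = {"pizza", "hamburger", "udon", "sushi"}          # nouns shared by many menu names
RESTAURANT_DEFAULT_GENRE = {"(restaurant A)": "hamburger"}     # genre inferable from the restaurant
SYNONYMS = {"burger": "hamburger"}                             # abbreviations / synonyms folded together

def assign_genre(menu_name: str, restaurant: str) -> str:
    """Assign exactly one genre to a menu item, following the listed rules."""
    tokens = [SYNONYMS.get(t, t) for t in menu_name.lower().split()]
    # Rule: a noun appearing in multiple menu names is used as the genre;
    # if several components match, keep the main (here: last) one.
    matches = [t for t in tokens if t in GENRE_NOUNS]
    if matches:
        return matches[-1]
    # Rule: otherwise fall back to the genre implied by the restaurant.
    return RESTAURANT_DEFAULT_GENRE.get(restaurant, "unknown")

print(assign_genre("teriyaki corn pizza", "(restaurant A)"))   # -> "pizza"
print(assign_genre("double cheese burger", "(restaurant A)"))  # -> "hamburger"
```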
3.4. Dialogue to Specify Intention
3.4.1. Determining Strategies Based on Interest and Knowledge
3.4.2. Identification of Intended Concepts
3.4.3. Intention Estimation by Elaboration
- SYSTEM: “What have you eaten at (a restaurant)?” (1)
- USER: “Various things.” (2)
- SYSTEM: “For example, have you had teriyaki corn pizza?” (3)
- SYSTEM: “What have you eaten at (a restaurant)?”
- USER: “Various.”
- SYSTEM: “Have you ever eaten pizza, for example?”
- SYSTEM: “What have you eaten at (a restaurant)?”
- USER: “I have eaten Margherita pizza.”
- SYSTEM: “Is Margherita pizza a pizza?”
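The example dialogues above illustrate how the system elaborates on a vague answer by presenting a concrete instance, or confirms the superordinate concept of a specific answer. A minimal sketch of this elaboration step is shown below; the concept hierarchy and the string handling are simplifying assumptions, not the system's actual implementation.

```python
# Sketch of the elaboration step illustrated by the example dialogues above.
# The concept hierarchy (genre -> example menu items) is a made-up placeholder.

HIERARCHY = {"pizza": ["teriyaki corn pizza", "Margherita pizza"]}

def elaborate(user_reply: str, current_genre: str) -> str:
    """Follow up a vague or specific answer with a more concrete question."""
    if user_reply.strip().lower() in {"various things.", "various."}:
        # Vague answer: present a concrete instance as an example (closed question).
        example = HIERARCHY[current_genre][0]
        return f"For example, have you had {example}?"
    # Specific answer: confirm the superordinate concept of the named item.
    item = user_reply.replace("I have eaten", "").strip(" .")
    return f"Is {item} a {current_genre}?"

print(elaborate("Various things.", "pizza"))
print(elaborate("I have eaten Margherita pizza.", "pizza"))
```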
- Rules for prioritizing the subject of topic presentation. Since the purpose of the dialogue is to determine the user’s experiences, it is desirable for the user to present specific experiences to the dialogue system. When the system instead asks questions based on its own estimates, the probability of achieving the dialogue objective depends on the accuracy of those estimates. Therefore, to obtain the user’s experiences with a small number of dialogue acts, the system should give priority to utterances that elicit topic suggestions from the user whenever possible. A small number of exchanges is desirable because continuing a monotonous dialogue for a long time is burdensome for the user; in other words, user satisfaction can be improved by modeling the user with fewer exchanges.
- Rules for choosing between open and closed questions. In some cases, users may not be able to clearly recall their own experiences in response to the system’s questions. In a chat dialogue, a highly abstract question such as “What did you do yesterday?” may not allow the user to recall concrete details immediately. Therefore, if the user does not provide a clear answer, the dialogue system needs to offer a concrete example.
- Rules prohibiting the repetition of synonymous questions. If the user does not give a meaningful answer to a particular question, the system should try to resolve the problem by other means. For example, if the question “What did you eat?” does not return an intention that the system can interpret, it is undesirable for the system to ask “What did you eat?” again; repeated mechanical responses reduce the user’s motivation to engage in the dialogue. Rephrasing the question as “What did you eat at that time at X?” helps to some extent, because it differs from the earlier “What did you eat at that time?” However, rephrasing cannot solve the case in which the question’s intention was accurately conveyed and still no answer was obtained. Likewise, if the system repeatedly asks “Did you eat X?” with one candidate after another and the user keeps giving essentially the same answer in different words, the probability that the system can infer the user’s intention is low. An exception is when only two or three candidates remain; asking “Then, did you eat X?” is then acceptable in the sense of “if it is not the first candidate, it can only be this one,” and an intention of the form “I thought it was one candidate, but it could be the other” can also be accepted. Nevertheless, if the same question type fails twice in a row, it is not asked a third time. The second failure is acceptable to the user, because the user feels that the system generated a different question from the first one based on the first failure; a third question, however, would not differ from the second, and repeating a failed method may have a negative effect on the user. Therefore, the number of repetitions of closed questions in one cycle is limited to two (a question-selection sketch follows this list).
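The three rules above can be summarized as a simple question-selection policy. The sketch below is one possible reading of them; the function and its arguments are illustrative, not the authors' implementation.

```python
# Sketch of a question-selection policy reflecting the three rules above:
# (1) prefer utterances that let the user volunteer a topic,
# (2) fall back to a closed question with an example when the answer is unclear,
# (3) do not repeat the same closed-question type more than twice in one cycle.
# Function and variable names are illustrative, not the paper's implementation.

def select_question(last_answer_unclear: bool, closed_repeats: int) -> str:
    if not last_answer_unclear:
        # Rule 1: an open question invites the user to present a topic themselves.
        return "open"
    if closed_repeats < 2:
        # Rule 2: unclear answer -> present a concrete example as a closed question.
        return "closed"
    # Rule 3: after two failed closed questions, abandon this cycle
    # and move to a different topic instead of asking a third time.
    return "change_topic"

print(select_question(False, 0))  # open
print(select_question(True, 1))   # closed
print(select_question(True, 2))   # change_topic
```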
3.5. Dialogue to Specify Intention
3.5.1. Intention Estimation Handling Ambiguity
- (1) Nominalizing dictionaries and noun dictionaries, which correspond to superordinate categories and categories.
- (2) Notation fluctuation dictionary of instances.
- (3) Dictionary of word distortion (a normalization sketch using these three resources follows).
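A minimal sketch of how an ambiguous user utterance might be normalized with these three resources is shown below; the dictionary contents and the order of application are assumptions for illustration.

```python
# Sketch of normalizing an ambiguous user utterance with the three resources
# listed above. The dictionary contents are illustrative assumptions.

SUPERORDINATE = {"margherita pizza": "pizza"}                # (1) instance -> superordinate category
NOTATION_VARIANTS = {"margarita pizza": "margherita pizza"}  # (2) notation fluctuation of instances
DISTORTIONS = {"pizzas": "pizza", "burgers": "hamburger"}    # (3) distorted / inflected word forms

def normalize(word: str) -> str:
    w = word.lower()
    w = DISTORTIONS.get(w, w)          # undo word distortion first
    w = NOTATION_VARIANTS.get(w, w)    # then fold notation variants onto one instance
    return w

def superordinate_of(word: str) -> str:
    return SUPERORDINATE.get(normalize(word), "unknown")

print(normalize("Margarita pizza"))        # -> "margherita pizza"
print(superordinate_of("Margarita pizza")) # -> "pizza"
```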
3.5.2. Common Sense Candidate Reasoning
- {restaurant: (restaurant A), genre: hamburger, menu: unknown}
- {restaurant: (restaurant A), genre: hamburger, menu: hamburger}
- {restaurant: (restaurant A), genre: hamburger, menu: cheeseburger}
- Priority is given to the candidate with the highest frequency in the collected data.
- Identifying a third party close to the user and estimating the user’s opinion from that third party’s opinion data (a sketch of the frequency-based filling follows this list).
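A minimal sketch of the frequency-based candidate filling is shown below; the frequency table and slot names are illustrative placeholders, not the collected data.

```python
# Sketch of the candidate reasoning above: an unknown slot in the context is
# filled with the candidate that is most frequent in the collected data.
# The frequency table is an illustrative placeholder, not the paper's corpus.
from collections import Counter

MENU_FREQUENCY = Counter({"hamburger": 120, "cheeseburger": 45, "teriyaki burger": 30})

def fill_unknown_menu(context: dict) -> dict:
    if context.get("menu") == "unknown":
        # Priority is given to the candidate with the highest overall frequency.
        context = dict(context, menu=MENU_FREQUENCY.most_common(1)[0][0])
    return context

ctx = {"restaurant": "(restaurant A)", "genre": "hamburger", "menu": "unknown"}
print(fill_unknown_menu(ctx))  # menu -> "hamburger"
```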
3.6. Opinion Dialogue
- Directly asking.
- Inferring other opinions from the user model.
3.6.1. Estimated Opinion Model
- 1. Evaluation of the proximity between opinions.
- 2. Evaluation of the proximity of opinion tendencies among people.
- 3. Evaluation of the likelihood of the estimation (an estimation sketch follows this list).
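The following sketch illustrates one way such an estimate could be computed: the user's known opinions are compared with those of third parties, and opinions are borrowed from sufficiently similar people. The agreement measure and the threshold are simplified assumptions, not the authors' exact formulation.

```python
# Minimal sketch of estimating an unheard opinion from third parties whose known
# opinions resemble the user's (a collaborative-filtering-style estimate).
# The agreement measure and the 0.8 threshold are simplified assumptions.

def agreement(user_opinions: dict, other_opinions: dict) -> float:
    shared = set(user_opinions) & set(other_opinions)
    if not shared:
        return 0.0
    return sum(user_opinions[k] == other_opinions[k] for k in shared) / len(shared)

def estimate_opinion(user: dict, others: list, item: str, min_agreement: float = 0.8):
    votes = [o[item] for o in others
             if item in o and agreement(user, o) >= min_agreement]
    if not votes:
        return None  # the likelihood of the estimate is too low to use
    return max(set(votes), key=votes.count)

user = {"hamburger": "good", "fries": "salty"}
others = [{"hamburger": "good", "fries": "salty", "cheeseburger": "good"},
          {"hamburger": "bad", "cheeseburger": "bad"}]
print(estimate_opinion(user, others, "cheeseburger"))  # -> "good"
```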
3.6.2. Selection Algorithm Based on Information Content
- Maximizing the number of user opinions.
- Ensuring that the opinions that are obtained are “rare opinions”.
- Both of the above (a scoring sketch follows this list).
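A minimal sketch of information-content-based scoring is shown below. It assumes the information content of an opinion is I(o) = -log2 p(o), with p(o) estimated from the relative frequency of that opinion in the collected data; the three strategies loosely mirror the list above, and the details are simplified assumptions rather than the authors' exact algorithm.

```python
# Sketch of scoring candidate opinions by information content, I(o) = -log2 p(o),
# where p(o) is the relative frequency of opinion o in the collected opinion data.
# "count" maximizes the number of opinions acquired, "max" prefers rare opinions,
# and "sum" combines both by summing information content. Simplified assumptions.
import math

def information_content(opinion: str, frequency: dict) -> float:
    total = sum(frequency.values())
    return -math.log2(frequency[opinion] / total)

def score(candidate_opinions: list, frequency: dict, strategy: str) -> float:
    ics = [information_content(o, frequency) for o in candidate_opinions]
    if strategy == "count":
        return len(ics)            # how many opinions the question would yield
    if strategy == "max":
        return max(ics)            # rarest single opinion it could yield
    return sum(ics)                # "sum": total information content

freq = {"good": 50, "salty": 10, "spicy": 2}
print(score(["good", "spicy"], freq, "max"))
```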
4. Experiment
- 1. Conduct a conversation with the dialogue system.
- 2. Respond to the question, “What is your impression of the conversation?”
- 3. Repeat steps 1 and 2 three times.
- 4. Respond to a questionnaire collecting basic information, personality characteristics [37], and opinion data.
- Did they try to understand you?
- Did they understand you?
- Would you like to talk to them again?
- Satisfied with the dialogue?
- Was the presumption natural?
Interface
- Opinion dialogue is performed a total of four times.
- Topic presentation is performed 12 times in total.
- The user speaks 3 s after the dialogue system generates its utterances.
- Separation with a punctuation mark (end of a sentence).
- Speak 3 s after the previous utterance.
5. Results
- No dialogue breakdowns occurred throughout the entire dialogue.
- The termination conditions for all dialogues were met.
- All dummy questions placed in the opinion survey questionnaire were answered correctly.
6. Discussion
6.1. User Modeling Strategies
6.2. Ambiguity in User Speech
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Dominey, P.F.; Paléologue, V.; Pandey, A.K.; Ventre-Dominey, J. Improving quality of life with a narrative companion. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 127–134.
2. Sabelli, A.M.; Kanda, T.; Hagita, N. A conversational robot in an elderly care center: An ethnographic study. In Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2011, Lausanne, Switzerland, 6–9 March 2011; pp. 37–44.
3. Uchida, T.; Ishiguro, H.; Dominey, P.F. Improving Quality of Life with a Narrative Robot Companion: II–Creating Group Cohesion via Shared Narrative Experience. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 906–913.
4. Thoppilan, R.; De Freitas, D.; Hall, J.; Shazeer, N.; Kulshreshtha, A.; Cheng, H.T.; Le, Q. LaMDA: Language models for dialog applications. arXiv 2022, arXiv:2201.08239.
5. Weizenbaum, J. ELIZA—A computer program for the study of natural language communication between man and machine. Commun. ACM 1966, 9, 36–45.
6. Uchida, T.; Minato, T.; Nakamura, Y.; Yoshikawa, Y.; Ishiguro, H. Female-Type Android’s Drive to Quickly Understand a User’s Concept of Preferences Stimulates Dialogue Satisfaction: Dialogue Strategies for Modeling User’s Concept of Preferences. Int. J. Soc. Robot. 2021, 13, 1499–1516.
7. Konishi, T.; Sano, H.; Ohta, K.; Ikeda, D.; Katagiri, M. Item Recommendation with User Contexts Obtained through Chat Bot. In Proceedings of the Multimedia, Distributed, Cooperative, and Mobile Symposium, Sapporo, Hokkaido, 28–30 June 2017; pp. 487–493. (In Japanese)
8. Higashinaka, R.; Dohsaka, K.; Isozaki, H. Effects of empathy and self-disclosure in dialogue systems. In Proceedings of the Association for Natural Language Processing, 15th Annual Meeting, Tokyo, Japan, 2–7 August 2009; pp. 446–449.
9. Bohm, D.; Factor, D.; Garrett, P. Dialogue: A Proposal 1991. Available online: http://www.david-bohm.net/dialogue/dialogue_proposal.html (accessed on 1 September 2022).
10. Hiraki, N. Assertion Training, 1st ed.; Kaneko Shobo: Tokyo, Japan, 1993.
11. Dinarelli, M.; Stepanov, E.A.; Varges, S.; Riccardi, G. The LUNA spoken dialogue system: Beyond utterance classification. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 5366–5369.
12. Ma, Y.; Nguyen, K.L.; Xing, F.; Cambria, E. A survey on empathetic dialogue systems. Inf. Fusion 2020, 64, 50–70.
13. Goldberg, D.; Nichols, D.; Oki, B.M.; Terry, D. Using collaborative filtering to weave an information tapestry. Commun. ACM 1992, 35, 61–70.
14. Shardanand, U.; Maes, P. Social information filtering: Algorithms for automating “word of mouth”. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 7–11 May 1995; pp. 210–217.
15. Ungar, L.H.; Foster, D.P. Clustering methods for collaborative filtering. In Proceedings of the AAAI Workshop on Recommendation Systems, Madison, WI, USA, 26–31 July 1998; Volume 1, pp. 114–129.
16. Burke, R. Integrating knowledge-based and collaborative-filtering recommender systems. In Proceedings of the Workshop on AI and Electronic Commerce, Orlando, FL, USA, 18–19 July 1999; pp. 69–72.
17. Kobayashi, S.; Hagiwara, M. Non-task-oriented dialogue system considering user’s preference and human relations. Trans. Jpn. Soc. Artif. Intell. 2016, 31, 1.
18. Sakai, K.; Nakamura, Y.; Yoshikawa, Y.; Kano, S.; Ishiguro, H. Dialogal robot that estimates user’s preferences by using subjective similarity. In Proceedings of the IROS 2018 Workshop Fr-WS7 Autonomous Dialogue Technologies in Symbiotic Human-Robot Interaction 2018, Madrid, Spain, 1–5 October 2018.
19. Sumi, K.; Sumi, Y.; Mase, K.; Nakasuka, S.; Hori, K. Information presentation by inferring user’s interests based on individual conceptual spaces. Syst. Comput. Jpn. 2008, 31, 41–55.
20. Maroto-Gómez, M.; Castro-González, Á.; Castillo, J.C.; Malfaz, M.; Salichs, M.Á. An adaptive decision-making system supported on user preference predictions for human-robot interactive communication. User Model. User-Adapt. Interact. 2022, 9, 1–45.
21. Clémençon, S.; Depecker, M.; Vayatis, N. Ranking forests. J. Mach. Learn. Res. 2013, 14, 39–73.
22. Wang, W.; Zhang, Z.; Guo, J.; Dai, Y.; Chen, B.; Luo, W. Task-Oriented Dialogue System as Natural Language Generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 2698–2703.
23. Manuhara, G.W.M.; Muthugala, M.A.V.J.; Jayasekara, A.G.B.P. Design and Development of an Interactive Service Robot as a Conversational Companion for Elderly People. In Proceedings of the 2018 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, 30 May–1 June 2018; pp. 378–383.
24. Muthugala, M.V.J.; Jayasekara, A.B.P. MIRob: An intelligent service robot that learns from interactive discussions while handling uncertain information in user instructions. In Proceedings of the 2016 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, 5–6 April 2016; pp. 397–402.
25. Ni, J.; Pandelea, V.; Young, T.; Zhou, H.; Cambria, E. HiTKG: Towards goal-oriented conversations via multi-hierarchy learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Crete, Greece, 17–20 June 2022; Volume 36, pp. 11112–11120.
26. Higashinaka, R.; Funakoshi, K.; Araki, M.; Tsukahara, Y.; Kobayashi, Y.; Mizukami, M. Text Chat Dialogue Corpus Construction and Analysis of Dialogue Breakdown. J. Nat. Lang. Processing 2016, 23, 59–86.
27. Speer, R.; Chin, J.; Havasi, C. ConceptNet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
28. Miller, G.A. WordNet: A lexical database for English. Commun. ACM 1995, 38, 39–41.
29. Young, T.; Xing, F.; Pandelea, V.; Ni, J.; Cambria, E. Fusing task-oriented and open-domain dialogues in conversational agents. In Proceedings of the AAAI Conference on Artificial Intelligence, Crete, Greece, 17–20 June 2022; Volume 36, pp. 11622–11629.
30. Fernández-Rodicio, E.; Castro-González, Á.; Alonso-Martín, F.; Maroto-Gómez, M.; Salichs, M.Á. Modelling multimodal dialogues for social robots using communicative acts. Sensors 2020, 20, 3440.
31. Havasi, C.; Speer, R.; Alonso, J. ConceptNet 3: A flexible, multilingual semantic network for common sense knowledge. In Proceedings of the Recent Advances in Natural Language Processing, Borovets, Bulgaria, 27–29 September 2007; pp. 27–29.
32. Kawahara, D.; Kurohashi, S. A fully-lexicalized probabilistic model for Japanese syntactic and case structure analysis. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference; Association for Computational Linguistics: Stroudsburg, PA, USA, 2006; pp. 176–183.
33. Lee, J.H.; Kim, M.H.; Lee, Y.J. Information retrieval based on conceptual distance in IS-A hierarchies. J. Doc. 1993, 49, 188–207.
34. Mihalcea, R.; Corley, C.; Strapparava, C. Corpus-Based and Knowledge-Based Measures of Text Semantic Similarity; American Association for Artificial Intelligence: Palo Alto, CA, USA, 2006; Volume 6, pp. 775–780.
35. Goldberg, Y.; Levy, O. word2vec Explained: Deriving Mikolov et al.’s negative-sampling word-embedding method. arXiv 2014, arXiv:1402.3722.
36. Miyamoto, T.; Iwashita, M.; Endo, M.; Nagai, N.; Katagami, D. Influence of Utterance Strategies to Get Closer Psychologically on Evaluation of Dialogue in a Nontask-oriented Dialogue System. Trans. Jpn. Soc. Artif. Intell. 2021, 36, AG21-I. (In Japanese)
37. Oshio, A.; Abe, S.; Cutrone, P. Development, reliability, and validity of the Japanese version of Ten Item Personality Inventory (TIPI-J). Jpn. J. Personal. 2012, 21, 40–52.
38. Clark, H.H. Using Language; Cambridge University Press: Cambridge, UK, 1996.
39. Norman, D.A. The Psychology of Everyday Things; Basic Books: New York, NY, USA, 1988.
40. Costa, P.T.; McCrae, R.R. NEO Personality Inventory-Revised (NEO PI-R); Psychological Assessment Resources: Odessa, FL, USA, 1992.
41. McCrae, R.R.; Yamagata, S.; Jang, K.L.; Riemann, R.; Ando, J.; Ono, Y.; Angleitner, A.; Spinath, F.M. Substance and artifact in the higher-order factors of the Big Five. J. Personal. Soc. Psychol. 2008, 95, 442.
42. Ogawa, Y.; Kikuchi, H. Effect of Agent’s Self-Disclosures on its Personality. In Proceedings of the Human-Agent Interaction Symposium, Kyoto, Japan, 3–5 December 2011.
43. Li, W.; Shao, W.; Ji, S.; Cambria, E. BiERU: Bidirectional emotional recurrent unit for conversational sentiment analysis. Neurocomputing 2022, 467, 73–82.
44. Cordes, L. Who speaks?—Ambiguity and Vagueness in the Design of Cicero’s Dialogue Speakers. In Strategies of Ambiguity in Ancient Literature; De Gruyter: Berlin, Germany, 2021; Volume 114, p. 297.
45. Muthugala, M.A.V.J.; Jayasekara, A.G.B.P. A Review of Service Robots Coping with Uncertain Information in Natural Language Instructions. IEEE Access 2018, 6, 12913–12928.
46. Mavridis, N. A review of verbal and non-verbal human–robot interactive communication. Robot. Auton. Syst. 2015, 63, 22–35.
Rule Name | Immediately Preceding Question Type | Example of User’s Utterance | System Response |
---|---|---|---|
Forget | Any | “I forgot”, “I don’t remember” | Ask a closed question with one level of elaboration.
Not interested | Open topic | “I’m not interested” | “Are you interested in a meal?” and if yes, end the dialogue. |
Label | Description |
---|---|
Open genre | Ask a question about the genre in “WHAT” form.
Closed genre | Ask a question about the genre in “YesNo” form.
Open menu | Ask a question about the menu item in “WHAT” form.
Closed menu | Ask a question about the menu item in “YesNo” form.
Open restaurant | Ask a question about the restaurant in “WHERE” form.
Closed restaurant | Ask a question about the restaurant in “YesNo” form.
Open topic | While changing the topic, ask the experience question “What have you eaten before?”
Closed opinion | Ask about the user’s experience in “YesNo” form while presenting adjectives.
Talk opinion | Give feedback from the system in response to the user’s responses.
Label | Description |
---|---|
Open x | Extract a word from the word type corresponding to x from the user’s utterance and add it to the context. |
Closed x | If the user answers yes to a question from the dialogue system that contains the word corresponding to x, x is included in the context; if the user answers no, it is stored in the no-exp variable. |
Open topic | Empty the context: keep only the elements that appear in the previous dialogue system utterance and leave the rest empty.
Closed opinion | If the user answers “yes” to the system question, the option stored in context is added to the user model; if the user answers “no”, it is added to the no user model. |
Talk opinion | Add nouns that appear in the system’s statements to the context. |
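The table above describes how each dialogue act updates the dialogue context. The sketch below is a simplified reading of those updates; the function, slot names, and the topic-change behavior are illustrative assumptions, not the system's actual code.

```python
# Sketch of the context updates in the table above: open questions extract a word
# of the corresponding type into the context, closed questions add or reject the
# presented word depending on a yes/no answer. Names and details are illustrative.

def update_context(context, no_exp, act, slot, word=None, answer=None):
    if act == "open":
        context[slot] = word                     # word extracted from the user's utterance
    elif act == "closed":
        if answer == "yes":
            context[slot] = word                 # accepted: include in the context
        else:
            no_exp.add(word)                     # rejected: remember as "no experience"
    elif act == "open_topic":
        # Simplification: keep only the restaurant mentioned in the previous
        # system utterance and empty the rest of the context.
        keep = {k: v for k, v in context.items() if k == "restaurant"}
        context.clear(); context.update(keep)
    return context, no_exp

ctx, no_exp = {"restaurant": "(restaurant A)"}, set()
update_context(ctx, no_exp, "closed", "menu", "cheeseburger", "no")
print(ctx, no_exp)
```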
Acts | Definition in Opinion Dialogue System |
---|---|
Illocutionary act | One or more are set for each locutionary act. The definition depends on the locutionary act. |
Locutionary act | Table 2 presents the correspondence with the illocutionary act. One locutionary act is defined for each word type and for each combination of word types. |
Perlocutionary act | More details in Table 3. |
Communicative act | Intention level: The purpose is to slot-fill a CONTEXT. Opinion level: Closed-opinion questions are generated for opinions that can be expressed by nouns in the context. The opinion selection rules are explained in the “Opinion Dialogue” section. Interested or not: See Table 1: “Not interested”. No intention: Indicated in Table 1: “Forget”. |
Question Type | Items |
---|---|
Whether you have been to a particular restaurant | Marugame Udon, McDonald’s, Saizeriya, Yoshinoya, Gyoza no Ohsho, Kura Sushi |
Impressions of specific menus | Good, bad, salty, greasy, sweet, spicy, expensive, cheap, no particular impression |
Whether or not they have had a particular menu item | Have eaten, have never eaten |
Label | Description | Rule |
---|---|---|
Restaurant | Name of restaurant | |
Genre | Name of menu genre | Allow unknown when menu is known |
Menu item | Name of food | |
Opinion | Opinion on menu item |
Type | Threshold |
---|---|
Similar Topic Threshold | 2 |
Similar Third-Party Threshold | 0.2 (20%) |
Opinion Estimation Threshold | 0.2 (20%) |
Personality Trait | Questionnaire Item | Max | Count | Sum | ANOVA p-Value | Sub-Effect Test (Ryan Method) p-Value (Pair)
---|---|---|---|---|---|---
Openness (n = 33) | Talk again | 2.88 | 3.00 | 2.48 | 0.0251 * | 0.00952 (count-sum) |
Estimation naturalness | 2.97 | 3.06 | 2.79 | |||
Understand | 2.85 | 3.00 | 2.82 | |||
Try to understand | 3.64 | 3.82 | 3.33 | 0.0356 * | 0.0109 (count-sum) | |
Satisfaction | 2.91 | 2.85 | 2.39 | 0.0122 * | 0.00652 (max-sum) 0.0157 (count-sum) | |
Extraversion (n = 26) | Talk again | 2.81 | 2.92 | 2.58 | ||
Estimation naturalness | 2.88 | 3.08 | 2.96 | |||
Understand | 2.77 | 2.92 | 2.85 | |||
Try to understand | 3.58 | 3.73 | 3.35 | |||
Satisfaction | 2.73 | 2.85 | 2.38 | 0.0631 + | ||
Agreeableness (n = 21) | Talk again | 2.87 | 2.89 | 2.58 | ||
Estimation naturalness | 3.08 | 3.03 | 2.89 | |||
Understand | 2.89 | 2.97 | 2.87 | |||
Try to understand | 3.63 | 3.76 | 3.42 | |||
Satisfaction | 2.95 | 2.79 | 2.55 | |||
Conscientiousness (n = 32) | Talk again | 2.78 | 2.94 | 2.44 | 0.0496 * | |
Estimation naturalness | 3.00 | 3.06 | 2.84 | |||
Understand | 2.75 | 2.97 | 2.81 | |||
Try to understand | 3.63 | 3.78 | 3.34 | 0.0859 + | ||
Satisfaction | 2.88 | 2.75 | 2.44 | 0.0756 + | ||
Neuroticism (n = 22) | Talk again | 3.07 | 3.00 | 2.66 | ||
Estimation naturalness | 3.21 | 3.14 | 2.93 | |||
Understand | 3.03 | 3.03 | 2.93 | |||
Try to understand | 3.66 | 3.79 | 3.52 | |||
Satisfaction | 3.21 | 2.93 | 2.72 |
 | Max | Count | Sum
---|---|---|---
Sum of information content of directly heard opinions (average) | 11.26 (1.413) | 9.619 (1.380) | 9.741 (1.337) |
Sum of information content of opinions estimated (average) | 39.37 (2.141) | 51.67 (2.253) | 50.33 (2.218) |
Average of the number of opinions directly heard | 5.211 | 4.816 | 4.895 |
Average of the number of opinions estimated | 19.61 | 21.87 | 24.29 |
Number of types of opinions held by the user model (38 total) | 286 | 279 | 320 |
Average number of duplicates of one type of opinion | 2.606 | 2.979 | 2.884 |
 | Agreeableness | Extraversion | Conscientiousness | Neuroticism | Openness
---|---|---|---|---|---
Did they try to understand you? | 0.0404 (0.691) | 0.128 (0.264) | 0.0249 (0.819) | −0.0821 (0.493) | 0.0404 (0.691) |
Did they understand you? | 0.0172 (0.866) | 0.0652 (0.571) | −0.0314 (0.773) | −0.0269 (0.823) | 0.0172 (0.866) |
Would you like to talk to them again? | 0.0531 (0.602) | 0.0966 (0.400) | 0.00909 (0.933) | 0.0321 (0.798) | 0.0531 (0.602) |
Satisfied with the dialogue? | −0.0360 (0.723) | 0.0126 (0.913) | 0.0974 (0.370) | 0.0345 (0.774) | 0.0360 (0.723) |
Was the presumption natural? | 0.0350 (0.731) | 0.0881 (0.443) | 0.0201 (0.853) | −0.00574 (0.962) | 0.0350 (0.731) |
 | Agreeableness | Extraversion | Conscientiousness | Neuroticism | Openness
---|---|---|---|---|---
Did they try to understand you? | 0.132 (0.193) | 0.204 + (0.0730) | 0.168 (0.120) | −0.0118 (0.921) | 0.132 (0.193) |
Did they understand you? | 0.204 + (0.0730) | 0.360 ** (0.00121) | 0.304 ** (0.00425) | 0.143 (0.229) | 0.258 ** (0.00992) |
Would you like to talk to them again? | 0.168 (0.120) | 0.319 ** (0.00442) | 0.265 * (0.0130) | 0.120 (0.315) | 0.252 * (0.0118) |
Satisfied with the dialogue? | −0.0118 (0.921) | 0.377 ** (0.000675) | 0.305 ** (0.00404) | 0.179 (0.132) | 0.280 ** (0.00497) |
Was the presumption natural? | 0.132 (0.193) | 0.315 ** (0.00491) | 0.250 * (0.0193) | 0.124 (0.298) | 0.244 * (0.0150) |
 | Agreeableness | Extraversion | Conscientiousness | Neuroticism | Openness
---|---|---|---|---|---
Did they try to understand you? | 0.0261 (0.797) | 0.127 (0.269) | 0.0117 (0.914) | −0.117 (0.330) | 0.0261 (0.797) |
Did they understand you? | 0.127 (0.269) | −0.0399 (0.729) | −0.0937 (0.388) | −0.125 (0.300) | −0.0639 (0.530) |
Would you like to talk to them again? | 0.0117 (0.914) | 0.00657 (0.954) | −0.0534 (0.623) | −0.0725 (0.545) | −0.0267 (0.793) |
Satisfied with the dialogue? | −0.117 (0.330) | −0.0421 (0.714) | −0.135 (0.213) | −0.0959 (0.423) | −0.0880 (0.387) |
Was the presumption natural? | 0.0261 (0.797) | −0.0170 (0.883) | −0.0698 (0.521) | −0.103 (0.388) | −0.0577 (0.571) |
 | Agreeableness | Extraversion | Conscientiousness | Neuroticism | Openness
---|---|---|---|---|---
Did they try to understand you? | 0.123 (0.225) | 0.181 (0.113) | 0.141 (0.191) | 0.0381 (0.751) | 0.123 (0.225) |
Did they understand you? | 0.183 + (0.0697) | 0.291 ** (0.00969) | 0.205 + (0.0569) | 0.101 (0.400) | 0.183 + (0.0697) |
Would you like to talk to them again? | 0.146 (0.149) | 0.180 (0.115) | 0.154 (0.155) | 0.0791 (0.509) | 0.146 (0.149) |
Satisfied with the dialogue? | 0.174 + (0.0842) | 0.222 + (0.0507) | 0.199 + (0.0649) | 0.152 (0.202) | 0.174 + (0.0842) |
Was the presumption natural? | 0.113 (0.263) | 0.180 (0.116) | 0.117 (0.280) | 0.0929 (0.438) | 0.113 (0.263) |