Design Space for Voice-Based Professional Reporting
Abstract
1. Introduction
2. Related Work
2.1. Voice-Based Reporting
2.2. Design Spaces
- increase awareness of the constraints that come with certain design choices
- help in understanding how design activities and (chosen) constraints have led to a particular design
- help in challenging the (chosen) constraints and realizing opportunities.
3. Materials and Methods
3.1. Voice-Based Reporting Systems Used as Material
3.1.1. Lawyer Dictation
3.1.2. Nurse Dictation
3.1.3. Crane Maintenance Reporting and Support
3.1.4. Elevator Maintenance Reporting
3.2. Forming the Design Space
4. Results
4.1. The Design Space, Its Categories, and Dimensions
4.1.1. Language Processing (LP)
4.1.2. Structure of Reporting (SR)
4.1.3. Technical Limitations in the Work Domain (TD)
4.1.4. Interaction-Related Aspects in the Work Domain (ID)
4.1.5. Organization (OR)
4.2. The Design Process of Voice-Based Reporting Systems
5. Discussion and Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Table: Language Processing (LP) dimensions.

Dimension | Description | Values | Case Values | Steps
---|---|---|---|---
LP1: Required transcription accuracy | How accurate must the results be? Are proofreading and approval required? How harmful are recognition errors? Are there critical items? | 0: no accuracy, for own use only; 1: basic comprehensibility; 2: good readability; 3: critical (proofreading needed) | Lawyer: 2; Nurse: 3; Crane: 1; Elevator: 1 | 4
LP2: Required level of produced data | Will the recognition result be used as such, or will, e.g., concepts or keywords be extracted from the recognized text? | 0: raw text used; 1: key concepts spotted; 2: concepts and keywords spotted; 3: full syntactic parse with semantic tags | Lawyer: 0; Nurse: 0; Crane: 2; Elevator: 2 | 4
LP3: Automatic processing potential | Can the results be utilized automatically, e.g., to order parts? How complex is the required natural language processing? | 0: no automation; 1: existence of report and reporting time used; 2: indicators automatically processed; 3: significant operations happen automatically | Lawyer: 0; Nurse: 0; Crane: 3; Elevator: 3 | 4, 7
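The dimension tables are easy to read as a small machine-readable model: each dimension is an enumerated scale, and each case system is an assignment of values. The sketch below shows one possible Python encoding of the LP table above; the `Dimension` class and all identifiers are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    """One design-space dimension with enumerated values."""
    id: str
    name: str
    values: dict          # value code -> description
    ordinal: bool = True  # 0-3 scales are ordinal; lettered sets are nominal

# LP1 encoded from the table above; LP2 and LP3 would be defined analogously.
LP1 = Dimension("LP1", "Required transcription accuracy", {
    0: "no accuracy, for own use only",
    1: "basic comprehensibility",
    2: "good readability",
    3: "critical (proofreading needed)",
})

# Case values from the table: each system is a point in the design space.
cases = {
    "Lawyer":   {"LP1": 2, "LP2": 0, "LP3": 0},
    "Nurse":    {"LP1": 3, "LP2": 0, "LP3": 0},
    "Crane":    {"LP1": 1, "LP2": 2, "LP3": 3},
    "Elevator": {"LP1": 1, "LP2": 2, "LP3": 3},
}

for name, profile in cases.items():
    print(name, "->", LP1.values[profile["LP1"]])
```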
Table: Structure of Reporting (SR) dimensions.

Dimension | Description | Values | Case Values | Steps
---|---|---|---|---
SR1: Structure of the report | How structured must or should the final document be: fully structured, semi-structured, or unstructured? | 0: no structure; 1: top-level headings; 2: sub-level headings; 3: each information item has a dedicated section (fully structured document) | Lawyer: 0; Nurse: 2; Crane: 3; Elevator: 3 | 1
SR2: Reporting order | How fixed is the order of data entry? | 0: no limitations; 1: main segments in specified order; 2: subheadings in specified order; 3: order fully fixed | Lawyer: 0; Nurse: 0; Crane: 3; Elevator: 2 | 2
SR3: Reporting and main task | Can reporting be integrated into the work processes? | 0: no integration, reporting done separately; 1: reporting done between main tasks; 2: reporting done between subtasks; 3: reporting part of the work | Lawyer: 0; Nurse: 1; Crane: 3; Elevator: 3 | 2
SR4: Fragmentation of reporting | Does reporting happen in one uninterrupted session or in fragments? | 0: in a single session (e.g., end of day); 1: after each main task; 2: after subtasks; 3: continuous reporting as part of the primary task | Lawyer: 3; Nurse: 1; Crane: 2; Elevator: 2 | 2
SR5: Schedule of reporting | What are suitable times to do the reporting, e.g., in the field? | a: reporting during the primary task; b: reporting before and after the task; c: reporting while in transit; d: reporting at the end of the day | Lawyer: b, c, d; Nurse: b; Crane: a; Elevator: a | 2, 7
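The SR dimensions map almost directly onto a report skeleton. A minimal sketch of SR1 at value 2 (semi-structured, sub-level headings), showing how dictated segments could be routed into their sections; the headings and function names are hypothetical examples, not taken from the case systems.

```python
# Hypothetical semi-structured template: main headings with sub-level headings.
TEMPLATE = [
    ("Task",      ["Site", "Work performed"]),
    ("Findings",  ["Observations", "Measurements"]),
    ("Follow-up", ["Spare parts", "Next visit"]),
]

def new_report():
    """Build an empty report whose structure mirrors the template."""
    return {main: {sub: [] for sub in subs} for main, subs in TEMPLATE}

def add_segment(report, main, sub, dictated_text):
    """Route one dictated segment into its section. SR1 value 3 would allow
    exactly one item per dedicated section; here several are permitted."""
    report[main][sub].append(dictated_text)

report = new_report()
add_segment(report, "Findings", "Observations", "hoist brake pads worn")
print(report)
```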
Table: Technical Limitations in the Work Domain (TD) dimensions.

Dimension | Description | Values | Case Values | Steps
---|---|---|---|---
TD1: Processing delays | What kinds of delays may processing and data transmission create? | 0: local processing, minimal delays; 1: distributed processing, delays in results; 2: cloud/server processing, short network delays; 3: cloud/server processing, long network delays | Lawyer: 0; Nurse: 3; Crane: 2; Elevator: 2 | 4, 6
TD2: Real-time requirements | What are the processing time requirements and acceptable delays? How soon will the results be valuable for different parts of the organization? | 0: no requirements (e.g., one week is acceptable); 1: report must be ready and proofed within 24 h; 2: report must be ready within 1 h; 3: report must be ready within 1 min | Lawyer: 2; Nurse: 1; Crane: 3; Elevator: 2 | 4, 7
TD3: Privacy and information security | Can data transfer and processing cause potential privacy or security issues? Should the speaker remain anonymous? | 0: no security concerns; 1: encrypted data transfer; 2: secured device identity and transfer; 3: data transfer via secure network | Lawyer: 1; Nurse: 3; Crane: 2; Elevator: 2 | 5
TD4: Dictation privacy and security per location | Is there a risk of privacy or security issues due to the reporting location and the public nature of speech (eavesdropping)? | 0: no risks (reporting in company facilities); 1: limited risk, non-public spaces; 2: in public, non-critical information; 3: critical information, unsecure locations | Lawyer: 2; Nurse: 0; Crane: 1; Elevator: 1 | 5
TD5: Formal requirements of the domain | Are there legal or comparable requirements imposed on the documents? Are there traditions in the field to follow? | 0: no requirements; 1: reporting required; 2: content of the reports specified; 3: reporting practices, format, and usage regulated | Lawyer: 0; Nurse: 3; Crane: 2; Elevator: 2 | 1
TD6: Language of the domain | How precise and formal is the language in the domain? How much domain-specific terminology and jargon is there? | 0: no specifics; 1: domain jargon; 2: organization has its own terminology; 3: specific language must be used (e.g., legislative restrictions) | Lawyer: 3; Nurse: 2; Crane: 1; Elevator: 1–2 | 4
TD7: Role of the final document | Is the document the main result of the work, part of a process, or does it support work and improve efficiency? | a: report is the primary result; b: report is part of the result; c: report is supportive or for quality control; d: report for personal use (note taking vs. reporting) | Lawyer: a; Nurse: b; Crane: c; Elevator: c | 7
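TD1 and TD2 can be read together as a delay budget: the sum of recognition, network, and proofreading time must fit within the required readiness time. A minimal sketch of that check, where the delay figures are illustrative assumptions rather than measurements from the cases:

```python
# TD2 readiness deadlines from the table, expressed in seconds.
TD2_DEADLINE_S = {0: 7 * 24 * 3600, 1: 24 * 3600, 2: 3600, 3: 60}

def meets_deadline(td2_level: int, recognition_s: float,
                   network_s: float, proofreading_s: float = 0.0) -> bool:
    """True if the end-to-end pipeline fits the required readiness time."""
    total = recognition_s + network_s + proofreading_s
    return total <= TD2_DEADLINE_S[td2_level]

# Crane case: TD2 = 3 (ready within one minute), so cloud round trips must be short.
print(meets_deadline(3, recognition_s=4.0, network_s=1.5))            # True
# Nurse case: TD2 = 1, but LP1 = 3 means proofreading time must also be budgeted.
print(meets_deadline(1, recognition_s=30.0, network_s=5.0,
                     proofreading_s=20 * 60))                         # True
```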
Table: Interaction-Related Aspects in the Work Domain (ID) dimensions.

Dimension | Description | Values | Case Values | Steps
---|---|---|---|---
ID1: Physical load | Is the user under physical load while reporting? | 0: no physical load; 1: light physical load; 2: moderate physical load, limits speech; 3: heavy physical load, speech impossible during task | Lawyer: 0–1; Nurse: 0; Crane: 3; Elevator: 2 | 6
ID2: Cognitive load | Do the main task or other tasks impose cognitive load on the user while reporting? | 0: no other cognitive load; 1: limited cognitive load; 2: significant cognitive load, limits simultaneous reporting; 3: severe cognitive load, simultaneous reporting not possible | Lawyer: 0–1; Nurse: 0; Crane: 1–3; Elevator: 1–3 | 3
ID3: Reserved resources | Are the user's hands and/or eyes busy while reporting? | 0: hands and eyes can be used freely; 1: hands and/or eyes can be used most of the time; 2: hands/eyes busy most of the time; 3: hands/eyes busy all the time | Lawyer: 0–2; Nurse: 0–2; Crane: 2; Elevator: 2 | 3, 6
ID4: Noise | Are the reporting conditions noisy? | 0: no noise (quiet office); 1: low background noise; 2: varying noise, at times prevents reporting; 3: extreme noise, speech-based reporting impossible | Lawyer: 0–1; Nurse: 0–1; Crane: 3; Elevator: 2 | 3, 4, 6
ID5: Devices and speech technology | What kinds of devices are, and can be, used? What is the potential for local or remote processing, and what are the related delays? | a: local hardware for all processing; b: preprocessing in local hardware, server-based recognition and language processing; c: server-based processing | Lawyer: c; Nurse: c; Crane: b, c; Elevator: b, c | 4, 6
ID6: Interaction modalities | What interaction modalities (input, output) can be applied to the reporting interface? | a: speech input; b: gesture input; c: buttons (e.g., in work clothing); d: audio output; e: visual output, small display (smartwatch); f: visual output, large display (mobile device); g: haptic feedback | Lawyer: a, c, f; Nurse: a, c, d, f / a, e; Crane: a, c, d, e, g; Elevator: a, b, c, d, e/f, g | 3, 6
ID7: Speech activation method | What are the suitable speech input activation methods: buttons, touch screens, or audio processing? | a: push to talk; b: press to talk, press to stop; c: press to talk, voice activity detection (VAD) detects end; d: keyword for start, VAD for end; e: automatically detected start and end | Lawyer: a; Nurse: b; Crane: c; Elevator: c, d | 6
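ID7's value c ("press to talk, VAD detects end") is essentially a small state machine: capture starts on a button press and stops after a sustained run of silent frames. The sketch below shows the idea with a simple energy threshold standing in for a real voice activity detector; the threshold, frame size, and silence window are assumptions for illustration.

```python
# Assumed parameters: 20 ms frames, so 25 silent frames = 0.5 s of silence.
SILENCE_FRAMES_TO_STOP = 25
ENERGY_THRESHOLD = 0.02

def record_utterance(frames):
    """Collect frames (started by a button press) until the voice-activity
    heuristic signals the end of the utterance."""
    captured, silent_run = [], 0
    for frame_energy, frame in frames:
        captured.append(frame)
        silent_run = silent_run + 1 if frame_energy < ENERGY_THRESHOLD else 0
        if silent_run >= SILENCE_FRAMES_TO_STOP:
            break
    return captured

# Simulated input: 40 frames of speech followed by silence.
frames = [(0.3, b"spk")] * 40 + [(0.0, b"sil")] * 30
print(len(record_utterance(frames)))  # 65: stops 25 frames into the silence
```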
Table: Organization (OR) dimensions.

Dimension | Description | Values | Case Values | Steps
---|---|---|---|---
OR1: Rules and practices of the organization | Are there strict rules or flexible practices regarding reporting? | 0: no rules; 1: generic instructions on what and how to report; 2: overall reporting process specified; 3: strict rules for process, content, and structure | Lawyer: 0; Nurse: 2; Crane: 3; Elevator: 3 | 1
OR2: Backend system | How is reporting linked to other information systems? How much information do these systems provide that affects the speech interface? | 0: no links; 1: report stored in an organization-level system with other materials; 2: report goes to systems in structured format; 3: other systems and external information, possibly from multiple organizations, direct the reporting | Lawyer: 0; Nurse: 2; Crane: 3; Elevator: 3 | 1, 7
OR3: Users | What are the backgrounds of the users, and what is their level of education and training? | 0: all kinds of users; 1: all users have a certain education; 2: users have a certain education and training; 3: users have matching education and training | Lawyer: 1; Nurse: 3; Crane: 2; Elevator: 2 | 3
OR4: Multilingual, multicultural users | Is the user population multilingual and/or multicultural? | 0: users from different cultures; 1: all can speak the same language (not necessarily natively); 2: proficient in the organization's language; 3: matching background and language fluency | Lawyer: 2; Nurse: 2; Crane: 0, 2; Elevator: 0, 2 | 5
OR5: Users of the document | Are there uses and users for the document besides the original reporter? | a: reporter only (personal note taking); b: peers; c: multiple people in the organization; d: used across organizations; e: used for future planning; f: multiple purposes | Lawyer: a; Nurse: b, c; Crane: f; Elevator: f | 5, 7
OR6: Actors contributing to the report | Do multiple people contribute to the document (including proofreading)? | a: one person per report; b: report is checked and confirmed by others; c: proofread and edited by another person; d: multiple people contribute; e: document evolves over a long time | Lawyer: a; Nurse: c; Crane: a/e; Elevator: a/e | 5, 7
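Taken together, the five tables let designers place a new system next to the four cases. One possible use, sketched below on a subset of the ordinal (0–3) dimensions, is to find the most similar existing system by comparing value profiles; the distance metric and the `new_design` profile are illustrative assumptions, not a method from the paper.

```python
# Case profiles on a subset of ordinal dimensions, taken from the tables above.
profiles = {
    "Lawyer":   {"LP1": 2, "SR1": 0, "TD2": 2, "ID4": 0, "OR1": 0},
    "Nurse":    {"LP1": 3, "SR1": 2, "TD2": 1, "ID4": 0, "OR1": 2},
    "Crane":    {"LP1": 1, "SR1": 3, "TD2": 3, "ID4": 3, "OR1": 3},
    "Elevator": {"LP1": 1, "SR1": 3, "TD2": 2, "ID4": 2, "OR1": 3},
}

def distance(a: dict, b: dict) -> int:
    """Manhattan distance over the shared ordinal dimensions."""
    return sum(abs(a[d] - b[d]) for d in a if d in b)

# A hypothetical new design to be positioned in the design space.
new_design = {"LP1": 1, "SR1": 3, "TD2": 2, "ID4": 2, "OR1": 3}
closest = min(profiles, key=lambda name: distance(new_design, profiles[name]))
print(closest)  # Elevator: identical on this subset of dimensions
```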