A Systematic Mapping of Translation-Enabling Technologies for Sign Languages
Abstract
:1. Introduction
- Provides the scholarly community interested in translation-enabling technologies for sign languages with a broad vision on the subject.
- Quantifies the categories, subcategories, and other relevant criteria that allow sectioning the object of study.
- Displays the results by means of different data visualization techniques.
2. Background and Related Work
2.1. Sign Languages Overview
2.2. Technologies Used in SL Machine Translation
2.3. Applications Currently Available
2.3.1. Mobile Applications Already Available
2.3.2. Applications for Web, Windows, and Android
2.3.3. Other Applications
2.3.4. Wearables Incorporating into SL Translation
2.3.5. Real Time SL to Text and Speech
2.3.6. Systems Incorporating Deep Learning
2.3.7. Systems for Teaching Deaf People to Read
2.4. Findings and Challenges
3. Methods
3.1. Research Questions
- RQ1: How often the topics of interest have been published?
- RQ2: Which specific topics have been addressed?
- RQ3: Where and when were the studies published?
- RQ4: How were the proposals, implementation, or evaluation processes conducted?
- RQ5: Which proposals have derived in specific products?
- RQ6: What are the research trends and gaps?
3.2. Search
- Population: In sign languages context, population may refer to specific translation techniques, avatar deployment, application areas, or specific projects. In our context, the population is composed of sign languages, avatars, and translation studies.
- Intervention: In sign languages, intervention refers to methodologies, tools, or technologies. We do not have a specific intervention to be investigated.
- Comparison: In this study, we compare the different proposals, implementations, and evaluations by identifying the strategies used. No empirical comparison is made, but the alternative strategies are identified.
- Outcomes: The number of identified initiatives.
3.3. Data Extraction
3.4. Analysis and Classification
3.5. Validity Evaluation
- If conclusions cannot be drawn from the data (interpretative validity), it is most likely to draw different conclusions, assuming the research can be repeated
- If there is no generalizability, the study cannot be repeated in different contexts for comparison purposes
- If there are no means to collect correct data, it is likely to get different results when measuring the same attributes.
3.5.1. Descriptive Validity
3.5.2. Theoretical Validity
3.5.3. Generalizability
3.5.4. Interpretive Validity
3.5.5. Repeatability
4. Results
4.1. Frequency of Publication (RQ1)
4.2. Topics (RQ2)
4.3. Venues of Publication (RQ3)
4.4. Approaches (RQ4 and RQ5)
5. Mapping Process Evaluation
6. Discussion
7. Conclusions
Funding
Conflicts of Interest
References
- Ribeiro, P.; Lima, R.; Queiroz, P. Tecnologias para o Ensino da Língua Brasileira de Sinais (LIBRAS): Uma Revisão Sistemática da Literatura. Braz. J. Comput. Educ. 2018, 26, 42–60. [Google Scholar] [CrossRef]
- Fischer, S.L.; Marshall, M.M.; Woodcock, K. Musculoskeletal disorders in sign language interpreters: A systematic review and conceptual model of musculoskeletal disorder development. Work 2012, 42, 173–184. [Google Scholar] [PubMed]
- Fitzpatrick, E.; Stevens, A.; Garritty, C.; Moher, D. The effects of sign language on spoken language acquisition in children with hearing loss: A systematic review protocol. Syst. Rev. 2013, 2, 108. [Google Scholar] [CrossRef] [PubMed]
- Fitzpatrick, E.; Hamel, C.; Stevens, A.; Pratt, M.; Moher, D.; Doucet, S.P.; Neuss, D.; Bernstein, A.; Na, E. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review. Pediatrics 2016, 137, e20151974. [Google Scholar] [CrossRef] [PubMed]
- Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for conducting systematic mapping studies in software engineering: An update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
- Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE-2007-01; EBSE: UK, Durham, 2007. [Google Scholar]
- Ethnologue. Languages of the World. 2019. Available online: https://www.ethnologue.com/ (accessed on 29 June 2019).
- Parton, B.S. Sign language recognition and translation: A multidisciplined approach from the field of artificial intelligence. J. Deaf Stud. Deaf Educ. 2005, 11, 94–101. [Google Scholar] [CrossRef] [PubMed]
- Tsujii, J. Computational Linguistics and Natural Language Processing. In Computational Linguistics and Intelligent Text Processing; Gelbukh, A.F., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
- Martins, P.; Rodrigues, H.; Rocha, T.; Francisco, M.; Morgado, L. Accessible options for Deaf people in e-Learning platforms: Technology solutions for Sign Language translation. Procedia Comput. Sci. 2015, 67, 263–272. [Google Scholar] [CrossRef]
- San-Segundo, R.; Montero, J.; Macías-Guarasa, J.; Córdoba, R.; Ferreiros, J.; Pardo, J. Proposing a speech to gesture translation architecture for Spanish deaf people. J. Vis. Lang. Comput. 2008, 5, 523–538. [Google Scholar] [CrossRef]
- Veale, T.; Conway, A.; Collins, B. The challenges of cross-modal translation: English to sign language translation in the Zardoz system. Mach. Transl. 1998, 13, 81–106. [Google Scholar] [CrossRef]
- Zhao, L.; Kipper, K.; Schuler, W.; Vogler, C.; Badler, N.; Palmer, M. Machine translation system from English to American Sign Language. Lect. Notes Comput. Sci. 2000, 1934, 54–67. [Google Scholar]
- Naert, L.; Larboulette, C.; Gibet, S. Coarticulation Analysis for Sign Language Synthesis. In Proceedings of the Part II of the 11th International Conference, UAHCI 2017, Vancouver, BC, Canada, 9–14 July 2017. [Google Scholar]
- Huenerfauth, M. Generating American sign language animation: Overcoming misconceptions and technical challenges. Univers. Access Inf. Soc. 2008, 6, 419–434. [Google Scholar] [CrossRef]
- Anuja, K.; Suryapriya, S.; Idicula, S. Design and development of a frame based MT system for English-to-ISL. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC’2009), Coimbatore, India, 9–11 December 2009; pp. 1382–1387. [Google Scholar]
- López-Colino, F.; Colás, J. Spanish sign language synthesis system. J. Visual Lang. Comput. 2012, 23, 121–136. [Google Scholar] [CrossRef]
- Cooper, H.; Holt, B.; Bowden, R. Sign language recognition. In Visual Analysis of Human; Springer: London, UK, 2011; pp. 539–562. [Google Scholar]
- Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Meth. 2005, 8, 19–32. [Google Scholar] [CrossRef]
- Handtalk. Hand Talk Translator. 2019. Available online: https://play.google.com/store/apps/details?id=br.com.handtalk&hl=en_US (accessed on 29 August 2019).
- Helloasl. ASL American Sign Language. 2019. Available online: https://play.google.com/store/apps/details?id=tenmb.asl.americansignlanguagepro&hl=en_US (accessed on 29 August 2019).
- López, M. Visualfy, la Idea Española que Ofrece un Asistente Virtual a Las Personas Sordas. 2019. Available online: https://www.xataka.com/otros-dispositivos/visualfy-idea-espanola-que-ofrece-asistente-virtual-a-personas-sordas (accessed on 29 August 2019).
- Raya. textoSIGN, una Útil Herramienta de Conversión de Texto a Lengua de Signos Española para Android. 2012. Available online: https://www.xatakamovil.com/aplicaciones/textosign-una-util-herramienta-de-conversion-de-texto-a-lengua-de-signos-espanola-para-android (accessed on 29 August 2019).
- López, M. Singslator Traduce del Español a la Lengua de Signos Directamente Desde la Web. 2014. Available online: https://www.genbeta.com/web/singslator-traduce-del-espanol-a-la-lengua-de-signos-directamente-desde-la-web (accessed on 29 August 2019).
- Penalva, J. MyVoice Convierte la Lengua de Signos en Voz. 2012. Available online: https://www.xataka.com/otros/myvoice-convierte-el-lenguaje-de-signos-en-voz (accessed on 29 August 2019).
- Álvarez, R. Si no Conoces el Lenguaje de Signos, este Guante es Capaz de Traducirlo en Voz y Texto. 2015. Available online: https://www.xataka.com/investigacion/si-no-conocer-el-el-lenguaje-de-signos-este-guante-es-capaz-de-traducirlo-en-voz-y-texto (accessed on 29 August 2019).
- Garrido, R. Con este Guante Creado en el IPN Pretenden Traducir la Lengua de Señas a Texto. 2015. Available online: https://www.xataka.com.mx/investigacion/con-este-guante-creado-en-el-ipn-pretenden-traducir-la-lengua-de-senas-a-texto (accessed on 29 August 2019).
- Sacristán, L. Un Traductor de Lengua de Signos y un Wearable que Detecta la Epilepsia entre los Nuevos Proyectos de la Fundación Vodafone. 2019. Available online: https://www.xatakamovil.com/vodafone/traductor-lengua-signos-wearable-que-detecta-epilepsia-nuevos-proyectos-fundacion-vodafone (accessed on 29 August 2019).
- Sacristán, L. Así es Showleap: El Traductor de Lengua de Signos a Texto y Voz en Tiempo Real Está Cada Vez Más Cerca. 2019. Available online: https://www.xataka.com/aplicaciones/asi-showleap-traductor-lengua-signos-a-texto-voz-tiempo-real-esta-cada-vez-cerca (accessed on 29 August 2019).
- The Economic Times Meet the New Google Translator: An AI App That Converts Sign Language into Text, Speech. 2018. Available online: https://economictimes.indiatimes.com/magazines/panache/meet-the-new-google-translator-an-ai-app-that-converts-sign-language-into-text-speech/articleshow/66379450.cms (accessed on 29 August 2019).
- Kamen, G. Electromyographic Kinesiology. In Research Methods in Biomechanics; Robertson, G.E., Caldwell, G.E., Hamill, J., Kamen, G., Whittlesey, S., Eds.; Human Kinetics Publishers: Champaign, IL, USA, 2004. [Google Scholar]
- Bailey, J. Google App Translates Sign Language. 2014. Available online: https://www.ajc.com/technology/google-app-translates-sign-language/wgmYzp46ALU5EyEmejOiMM/ (accessed on 29 August 2019).
- Merino, M. Un Algoritmo que Lee el Movimiento de las Manos Abre la Puerta a que los Smartphones Puedan Traducir el Lenguaje de Signos. 2019. Available online: https://www.xataka.com/inteligencia-artificial/algoritmo-que-lee-movimiento-manos-abre-puerta-a-que-smartphones-puedan-traducir-lenguaje-signos (accessed on 29 August 2019).
- Merino, M. Google Apuesta por el Reconocimiento de Voz Para Ayudar a que las Personas Sordas Tengan más Fácil Interactuar en Eventos Sociales. 2019. Available online: https://www.xataka.com/inteligencia-artificial/google-apuesta-reconocimiento-voz-para-ayudar-a-que-personas-sordas-tengan-facil-interactuar-eventos-sociales (accessed on 29 August 2019).
- Sacristán, L. Así es StorySign, la Aplicación que Utiliza la IA de Huawei para Enseñar a Leer a Niños Sordos. 2018. Available online: https://www.xatakandroid.com/aplicaciones-android/asi-storysign-aplicacion-que-utiliza-ia-huawei-para-ensenar-a-leer-a-ninos-sordos (accessed on 29 August 2019).
- Morrissey, S.; Way, A. Joining hands: Developing a sign language machine translation system with and for the deaf community. In Proceedings of the CVHI-2007—Conference and Workshop on Assistive Technologies for People with Vision and Hearing Impairments: Assistive Technology for All Ages, Granada, Spain, 28–31 August 2007. [Google Scholar]
- Morrissey, S. Assistive translation technology for deaf people: Translating into and animating Irish sign language. In Proceedings of the ICCHP 2008—12th International Conference on Computers Helping People with Special Needs, Linz, Austria, 9–11 July 2008. [Google Scholar]
- Viera, J.; Hernández, J.; Rodríguez, D.; Castillo, J. Interactive Application in Spanish Sign Language for a Public Transport Environment. In Proceedings of the 11th International Conference on Cognition and Exploratory Learning in Digital Age (CELDA), Porto, Portugal, 25–27 October 2014. [Google Scholar]
- Ebling, S.; Huenerfauth, M. Bridging the gap between sign language machine translation and sign language animation using sequence classification. In Proceedings of the SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, Dresden, Germany, 11 September 2015; pp. 2–9. [Google Scholar]
- Carlo, G.; Mazzei, A. Last train to “Rebaudengo Fossano”: The case of some names in avatar translation. In Proceedings of the 6th Workshop on the Representation and Processing of the Sign Languages: Beyond the Manual Channel. Language Resources and Evaluation Conference (LREC 2014), Reykjavik, Iceland, 31 May 2014; pp. 63–66. [Google Scholar]
- Geraci, C.; Mazzei, A.; Angster, M. Some issues on Italian to LIS automatic translation: The case of train announcements. In Proceedings of the First Italian Conference on Computational Linguistics CLiC-it 2014 & the Fourth International Workshop (EVALITA 2014), Pisa, Italy, 9–11 December 2014; pp. 191–196. [Google Scholar]
- Paire-Ficout, L.; Alauzet, A.; Chevret, M.; Boucheix, J.; Lefebvre-Albaret, F.; Saby, L.; Jobez, P. Innovative visual design to assure information for all in transportation. In Proceedings of the 28th International Congress of Applied Psychology (ICAP 2014), Paris, France, 8–13 July 2014. [Google Scholar]
- Paire-Ficout, L.; Alauzet, A.; Boucheix, J.; Saby, L.; Lefebvre-Albaret, F.; Groff, J.; Argon, J.; Jobez, P. How not to give up on train travel when you are deaf? In Proceedings of the TRANSED 2015—14th International Conference on Mobility and Transport for Elderly and Disabled Persons, Lisbon, Portugal, 28–31 July 2015. [Google Scholar]
- Motlhabi, M.; Glaser, M.; Tucker, W. SignSupport: A limited communication domain mobile aid for a Deaf patient at the pharmacy. In Proceedings of the Southern African Telecommunication Networks and Applications Conference, Stellenbosch, South Africa, 1–4 September 2013; pp. 173–178. [Google Scholar]
- Yang, O.; Morimoto, K.; Kuwahara, N. Evaluation of Chinese Sign Language animation for mammography inspection of hearing-impaired people. In Proceedings of the 2014 IIAI 3rd International Conference on Advanced Applied Informatics, Kita-Kyushu, Japan, 31 August–4 September 2014; pp. 831–836. [Google Scholar]
- Süzgün, M.; Özdemir, H.; Camgöz, N.; Kındıroğlu, A.; Başaran, D.; Togay, C.; Akarun, L. Hospisign: An interactive sign language platform for hearing impaired. J. Nav. Sci. Eng. 2015, 11, 75–92. [Google Scholar]
- Camgöz, N.; Kındıroğlu, A.; Akarun, L. Sign language recognition for assisting the deaf in hospitals. In Proceedings of the International Workshop on Human Behavior Understanding, Amsterdam, The Netherlands, 16 October 2016; Springer: Cham, Switzerland, 2016; pp. 89–101. [Google Scholar]
- Ahmed, F.; Bouillon, P.; Destefano, C.; Gerlach, J.; Halimi, I.; Hooper, A.; Rayner, E.; Spechbach, H.; Strasly, I.; Tsourakis, N. A Robust Medical Speech-to-Speech/Speech-to-Sign Phraselator. In Proceedings of the INTERSPEECH 2017, Stockholm, Sweden, 20–24 August 2017. [Google Scholar]
- Koehn, P. Statistical Machine Translation; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2010. [Google Scholar]
- Hutchins, J. Multiple uses of machine translation and computerised translation tools. In Proceedings of the International Symposium on Data and Sense Mining, Machine Translation and Controlled Languages (ISMTCL 2009), Besançon, France, 1–3 July 2009; pp. 13–20. [Google Scholar]
- Williams, P.; Sennrich, R.; Post, M.; Koehn, P. Syntax-Based Statistical Machine Translation; Morgan & Claypool Publishers: San Rafael, CA, USA, 2016. [Google Scholar]
- Abiola, O.; Adetunmbi, A.; Oguntimilehin, A. Review of the Various Approaches to Text to Text Machine Translations. Int. J. Comput. Appl. 2015, 120, 7–12. [Google Scholar]
- Song, N.; Yang, H.; Zhi, P. Towards Realizing Sign Language to Emotional Speech Conversion by Deep Learning. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Zhengzhou, China, 21–23 September 2018; Springer: Singapore, 2018; pp. 416–430. [Google Scholar]
- Kajonpong, P. Recognizing American Sign Language Using Deep Learning. Ph.D. Thesis, The University of Texas at San Antonio, San Antonio, TX, USA, 2019. [Google Scholar]
- An, X.; Yang, H.; Gan, Z. Towards realizing sign language-to-speech conversion by combining deep learning and statistical parametric speech synthesis. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Harbin, China, 20–22 August 2016; Springer: Singapore, 2016; pp. 678–690. [Google Scholar]
- Song, N.; Yang, H.; Zhi, P. A deep learning based framework for converting sign language to emotional speech. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; pp. 2047–2053. [Google Scholar]
- Oramas, J.; Moreno, A.; Chiluiza, K. Technology for Hearing Impaired People: A Novel Use of Xstroke Pointer Gesture Recognition Algorithm for Teaching/Learning Ecuadorian Sign Language. Available online: https://pdfs.semanticscholar.org/a55a/a8a5e3da73dd92ce4b81c55d8ae9618d2fe8.pdf (accessed on 12 May 2019).
- Costagliola, G.; Deufemia, V.; Risi, M. Sketch grammars: A formalism for describing and recognizing diagrammatic sketch languages. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR 2005), Seoul, Korea, 31 August–1 September 2005; pp. 1226–1230. [Google Scholar]
- Costagliola, G.; Vincenzo, V.; Risi, M. A multi-layer parsing strategy for on-line recognition of hand-drawn diagrams. In Proceedings of the Visual Languages and Human-Centric Computing (VL/HCC’06), Brighton, UK, 4–8 September 2006; pp. 103–110. [Google Scholar]
- Valli, C. Linguistics of American Sign Language: An Introduction; Gallaudet University Press: Washington, DC, USA, 2011. [Google Scholar]
- Schlenker, P. Sign language and the foundations of anaphora. Annu. Rev. Linguist. 2017, 3, 149–177. [Google Scholar] [CrossRef]
- Wienholz, A.; Nuhbalaoglu, D.; Mani, N.; Herrmann, A.; Onea, E.; Steinbach, M. Pointing to the right side? An ERP study on anaphora resolution in German Sign Language. PLoS ONE 2018, 13, e0204223. [Google Scholar] [CrossRef] [PubMed]
- Steinbach, M.; Onea, E. A DRT analysis of discourse referents and anaphora resolution in sign language. J. Semant. 2015, 33, 409–448. [Google Scholar] [CrossRef]
- Cecchetto, C.; Checchetto, A.; Geraci, C.; Santoro, M.; Zucchi, S. The syntax of predicate ellipsis in Italian Sign Language (LIS). Lingua 2015, 166, 214–235. [Google Scholar] [CrossRef]
- Xu, B.S.; Fu, M. Ellipsis of sign language under the deaf culture and its linguistics analysis. Disabil. Res. 2015, 15, 31–34. [Google Scholar]
- Zorzi, G. Gapping vs. VP-ellipsis in Catalan sign language. Feast. Form. Exp. Adv. Sign Lang. Theory 2018, 1, 70–81. [Google Scholar]
- Costa-jussà, M.; Rapp, R.; Lambert, P.; Eberle, K.; Banchs, R.; Babych, B. Hybrid Approaches to Machine Translation; Springer: Basel, Switzerland, 2016. [Google Scholar]
- Agrawal, N.; Singla, A. Using Named Entity Recognition to Improve Machine Translation; Technical Report; Natural Language Processing; Stanford University: Stanford, CA, USA, 2012. [Google Scholar]
- Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.J. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Philadelphia, PA, USA, 7–12 July 2002; pp. 311–318. [Google Scholar]
- MacWilliams, F.J.; Sloane, N.J.A. The Theory of Error-Correcting Codes; Elsevier: Amsterdam, The Netherlands, 1977; Volume 16, p. 18. [Google Scholar]
- Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE’14), London, UK, 13–14 May 2014; ACM: London, UK, 2014. [Google Scholar]
- PyPI. Scholarly API. Available online: https://pypi.org/project/scholarly/ (accessed on 30 August 2019).
- Petticrew, M.; Roberts, H. Systematic Reviews in the Social Sciences: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
- Petersen, K.; Gencel, C. Worldviews, research methods, and their relationship to validity in empirical software engineering research. In Proceedings of the 2013 Joint Conference of the 23rd International Workshop on Software Measurement and the 2013 Eighth International Conference on Software Process and Product Measurement (IWSM-MENSURA), Ankara, Turkey, 23–26 October 2013; pp. 81–89. [Google Scholar]
- Naranjo-Zeledón, L.; Peral, J.; Ferrández, A.; Chacón-Rivas, M. Systematic mapping data for translation-enabling technologies for sign languages (Version 1) [Data set]. Zenodo 2019. [Google Scholar] [CrossRef]
- Azarbayejani, A.; Wren, C.; Pentland, A. Real-time 3-D tracking of the human body. In Proceedings of the IMAGE’COM, Bordeaux, France, 15 May 1996; pp. 1–6. [Google Scholar]
- ACM. The 2012 ACM Computing Classification System. 2012. Available online: https://www.acm.org/publications/class-2012 (accessed on 15 May 2019).
- Jemni, M.; Elghoul, O. A system to make signs using collaborative approach. In Proceedings of the International Conference on Computers for Handicapped Persons, Linz, Austria, 9–11 July 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 670–677. [Google Scholar]
- Jemni, M.; Elghoul, O.; Makhlouf, S. A web-based tool to create online courses for deaf pupils. In Proceedings of the International Conference on Interactive Mobile and Computer Aided Learning, Amman, Jordan, 17–21 April 2007; pp. 18–20. [Google Scholar]
- Jemni, M.; Elghoul, O. Towards Web-Based automatic interpretation of written text to Sign Language. Proc. ICTA 2007, 7, 12–14. [Google Scholar]
- El Ghoul, O.; Jemni, M. Multimedia Courses Generator for Deaf Children. Int. Arab J. Inf. Technol. (IAJIT) 2009, 6, 458–464. [Google Scholar]
- El Ghoul, O.; Jemni, M. A Multi-layer Model for Sign Language’s Non-Manual Gestures Generation. In Proceedings of the International Conference on Computers for Handicapped Persons, Paris, France, 9–11 July 2014; Springer: Cham, Switzerland, 2014; pp. 466–473. [Google Scholar]
- El Ghoul, O.; Jemni, M. WebSign: A system to make and interpret signs using 3D Avatars. In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Dundee, UK, 23 October 2011. [Google Scholar]
- San-Segundo, R.; Barra, R.; Córdoba, R.; D’Haro, L.F.; Fernández, F.; Ferreiros, J.; Pardo, J.M. Speech to sign language translation system for Spanish. Speech Commun. 2008, 50, 1009–1020. [Google Scholar] [CrossRef] [Green Version]
- San-Segundo, R.; Montero, J.M.; Córdoba, R.; Sama, V.; Fernández, F.; D’Haro, L.F.; García, A. Design, development and field evaluation of a Spanish into sign language translation system. Pattern Anal. Appl. 2012, 15, 203–224. [Google Scholar] [CrossRef]
- San-Segundo, R.; Pardo, J.M.; Ferreiros, J.; Sama, V.; Barra-Chicote, R.; Lucas, J.M.; García, A. Spoken Spanish generation from sign language. Interact. Comput. 2009, 22, 123–139. [Google Scholar] [CrossRef]
- López-Ludeña, V.; González-Morcillo, C.; López, J.C.; Barra-Chicote, R.; Córdoba, R.; San-Segundo, R. Translating bus information into sign language for deaf people. Eng. Appl. Artif. Intell. 2014, 32, 258–269. [Google Scholar] [CrossRef] [Green Version]
- López-Ludeña, V.; González-Morcillo, C.; López, J.C.; Ferreiro, E.; Ferreiros, J.; San-Segundo, R. Methodology for developing an advanced communications system for the Deaf in a new domain. Knowl.-Based Syst. 2014, 56, 240–252. [Google Scholar]
- López-Ludeña, V.; San-Segundo, R.; Montero, J.M.; Córdoba, R.; Ferreiros, J.; Pardo, J.M. Automatic categorization for improving Spanish into Spanish Sign Language machine translation. Comput. Speech Lang. 2012, 26, 149–167. [Google Scholar] [CrossRef] [Green Version]
- Lu, P.; Huenerfauth, M. Collecting a motion-capture corpus of American Sign Language for data-driven generation research. In Proceedings of the NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive Technologies, Los Angeles, CA, USA, 5 June 2010; pp. 89–97. [Google Scholar]
- Lu, P.; Huenerfauth, M. Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation. Comput. Speech Lang. 2014, 28, 812–831. [Google Scholar] [CrossRef]
- Lu, P.; Huenerfauth, M. Synthesizing American Sign Language spatially inflected verbs from motion-capture data. In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), in Conjunction with ASSETS, Dundee, UK, 23 October 2011. [Google Scholar]
- Braffort, A.; Dalle, P. Sign language applications: Preliminary modeling. Univers. Access Inf. Soc. 2008, 6, 393–404. [Google Scholar] [CrossRef]
- Braffort, A. Research on computer science and sign language: Ethical aspects. In Proceedings of the International Gesture Workshop, London, UK, 18–20 April 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 1–8. [Google Scholar]
- Braffort, A.; Bolot, L.; Chételat-Pelé, E.; Choisier, A.; Delorme, M.; Filhol, M.; Devos, N. Sign Language Corpora for Analysis, Processing and Evaluation. In Proceedings of the LREC 2010, Valletta, Malta, 17–23 May 2010. [Google Scholar]
- Fotinea, S.E.; Efthimiou, E.; Caridakis, G.; Karpouzis, K. A knowledge-based sign synthesis architecture. Univers. Access Inf. Soc. 2008, 6, 405–418. [Google Scholar] [CrossRef]
- Fotinea, S.E.; Efthimiou, E.; Kouremenos, D. Generating linguistic content for Greek to GSL conversion. In Proceedings of the 7th Hellenic European Conference on Computer Mathematics and its Applications, Athens, Greece, 22–24 September 2005. [Google Scholar]
- Efthimiou, E.; Fontinea, S.E.; Hanke, T.; Glauert, J.; Bowden, R.; Braffort, A.; Goudenove, F. Dicta-sign–sign language recognition, generation and modelling: A research effort with applications in deaf communication. In Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, Valletta, Malta, 17–23 May 2010; pp. 80–83. [Google Scholar]
- Efthimiou, E.; Fotinea, S.E. An environment for deaf accessibility to educational content. In Proceedings of the ICTA 2007, Hammamet, Tunisia, 12–14 April 2007. [Google Scholar]
- Efthimiou, E.; Fotinea, S.E.; Hanke, T.; Glauert, J.; Bowden, R.; Braffort, A.; Lefebvre-Albaret, F. The dicta-sign wiki: Enabling web communication for the deaf. In Proceedings of the International Conference on Computers for Handicapped Persons, Linz, Austria, 11–13 July 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 205–212. [Google Scholar]
- Efthimiou, E.; Fotinea, S.E.; Dimou, A.L.; Goulas, T.; Kouremenos, D. From grammar-based MT to post-processed SL representations. Univers. Access Inf. Soc. 2016, 15, 499–511. [Google Scholar] [CrossRef]
- Glauert, J.; Elliott, R. Extending the SiGML Notation—A Progress Report. In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Dundee, Scotland, 23 October 2011; Volume 23. [Google Scholar]
- Adamo-Villani, N.; Doublestein, J.; Martin, Z. Sign language for K-8 mathematics by 3D interactive animation. J. Educ. Technol. Syst. 2005, 33, 241–257. [Google Scholar] [CrossRef]
- Adamo-Villani, N.; Wilbur, R. Two novel technologies for accessible math and science education. IEEE Multimed. 2008, 15, 38–46. [Google Scholar] [CrossRef]
- Adamo-Villani, N. 3d rendering of American sign language finger-spelling: A comparative study of two animation techniques. Int. J. Hum. Soc. Sci. 2008, 3, 24. [Google Scholar]
- Adamo-Villani, N.; Wilbur, R.; Eccarius, P.; Abe-Harris, L. Effects of character geometric model on perception of sign language animation. In Proceedings of the 2009 Second International Conference in Visualisation, Barcelona, Spain, 15–17 July 2009; pp. 72–75. [Google Scholar]
- Adamo-Villani, N.; Hayward, K.; Lestina, J.; Wilbur, R.B. Effective animation of sign language with prosodic elements for annotation of digital educational content. In Proceedings of the SIGGRAPH Talks 2010, Los Angeles, CA, USA, 26–30 July 2010. [Google Scholar]
- Huenerfauth, M.; Hanson, V. Sign language in the interface: Access for deaf signers. In Universal Access Handbook; Stephanidis, C., Ed.; CRC Press: Boca Raton, FL, USA, 2009; Volume 38. [Google Scholar]
- Huenerfauth, M. A linguistically motivated model for speed and pausing in animations of American sign language. ACM Trans. Access. Comput. (TACCESS) 2009, 2, 9. [Google Scholar] [CrossRef]
- Huenerfauth, M.; Lu, P.; Rosenberg, A. Evaluating importance of facial expression in American sign language and pidgin signed English animations. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, UK, 24–26 October 2011; pp. 99–106. [Google Scholar]
- Huenerfauth, M.; Lu, P. Effect of spatial reference and verb inflection on the usability of sign language animations. Univers. Access Inf. Soc. 2012, 11, 169–184. [Google Scholar] [CrossRef]
- Filhol, M.; Hadjadj, M.N.; Choisier, A. Non-manual features: The right to indifference. In Proceedings of the 6th Workshop on the Representation and Processing of Sign Language (LREC), Reykjavik, Iceland, 31 May 2014. [Google Scholar]
- Filhol, M.; Hadjadj, M.N.; Testu, B. A rule triggering system for automatic text-to-sign translation. Univers. Access Inf. Soc. 2016, 15, 487–498. [Google Scholar] [CrossRef]
- Filhol, M.; Tannier, X. Construction of a French-LSF corpus. In Proceedings of the Building and Using Comparable Corpora Workshop, Language Resource and Evaluation Conference, Reykjavik, Iceland, 27 May 2014; pp. 2–5. [Google Scholar]
- Kacorri, H.; Lu, P.; Huenerfauth, M. Evaluating facial expressions in American Sign Language animations for accessible online information. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, Las Vegas, Nevada, USA, 21–26 July 2013; pp. 510–519. [Google Scholar]
- Kacorri, H.; Huenerfauth, M.; Ebling, S.; Patel, K.; Willard, M. Demographic and experiential factors influencing acceptance of sign language animation by deaf users. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, Lisbon, Portugal, 26–28 October 2015; pp. 147–154. [Google Scholar]
- Kacorri, H.; Lu, P.; Huenerfauth, M. Effect of displaying human videos during an evaluation study of American Sign Language animation. ACM Trans. Access. Comput. (TACCESS) 2013, 5, 4. [Google Scholar] [CrossRef]
- Kacorri, H.; Huenerfauth, M. Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, Rochester, NY, USA, 20–22 October 2014; pp. 261–262. [Google Scholar]
- Escudeiro, N. Virtual Sign Translator in Serious Games. In Proceedings of the InforAbERTA, Jornadas de Informática, Universidade Aberta, Porto, Portugal, 15 March 2014; pp. 1–22. [Google Scholar]
- Escudeiro, P.; Escudeiro, N.; Reis, R.; Lopes, J.; Norberto, M.; Baltasar, A.B.; Bidarra, J. Virtual Sign—A Real Time Bidirectional Translator of Portuguese Sign Language. Procedia Comput. Sci. 2015, 67, 252–262. [Google Scholar] [CrossRef]
- Escudeiro, P.; Escudeiro, N.; Reis, R.; Barbosa, M.; Bidarra, J.; Baltazar, A.B.; Gouveia, B. Virtual sign translator. In Proceedings of the International Conference on Computer, Networks and Communication Engineering (ICCNCE 2013), Beijing, China, 23–24 May 2013. [Google Scholar]
- Escudeiro, P.; Escudeiro, N.; Reis, R.; Barbosa, M.; Bidarra, J.; Baltasar, A.B.; Norberto, M. Virtual sign game learning sign language. In Proceedings of the 5th International Conference on Education and Educational Technologies, Kuala Lumpur, Malaysia, 23–25 April 2014. [Google Scholar]
- Stokoe, W. Sign Language structure: An outline of the visual communication systems of the American deaf. Stud. Linguist. Occas. Pap. 1960, 8. [Google Scholar] [CrossRef] [PubMed]
- Prillwitz, S.; Leven, R.; Zienert, H.; Hanke, T.; Henning, J. HamNoSys Version 2.0; Hamburg Notation System for Sign Languages. An introductory Guide; International Studies on Sign Language and Communication of the Deaf 5; Signum Press: Hamburg, Germany, 1989. [Google Scholar]
- Jemni, M.; Chabeb, Y.; Elghoul, O. Towards improving accessibility of Deaf people to ICT. In Proceedings of the 3rd International Conference on Information Technology, Amman, Jordan, 9–11 May 2007. [Google Scholar]
- Jemni, M.; Chabeb, Y.; Elghoul, O. An avatar based approach for automatic interpretation of text to Sign language. In Challenges for Assistive Technology, AAATE 07; IOS Press: Amsterdam, The Netherlands, 2007. [Google Scholar]
- Jemni, M.; El Ghoul, O.; Yahia, N.B.; Boulares, M. Sign Language MMS to Make Cell Phones Accessible to the Deaf and Hard-of-hearing Community. In Proceedings of the Conference and Workshop on Assistive Technologies for People with Vision and Hearing Impairments: Assistive Technology for All Ages (CVHI-2007), Granada, Spain, 28–31 August 2007. [Google Scholar]
- San-Segundo, R.; Barra, R.; D’Haro, L.F.; Montero, J.M.; Córdoba, R.; Ferreiros, J. A spanish speech to sign language translation system for assisting deaf-mute people. In Proceedings of the Ninth International Conference on Spoken Language Processing, Pittsburgh, PA, USA, 17–21 September 2006. [Google Scholar]
- San Segundo, R.; Gallo, B.; Lucas, J.M.; Barra-Chicote, R.; D’Haro, L.F.; Fernandez, F. Speech into sign language statistical translation system for deaf people. IEEE Lat. Am. Trans. 2009, 7, 400–404. [Google Scholar]
- López-Ludeña, V.; San-Segundo, R. Statistical Methods for Improving Spanish into Spanish Sign Language Translation. In Proceedings of the 15th Mexican International Conference on Artificial Intelligence, Cancún, Mexico, 23–28 October 2016. [Google Scholar]
- López-Ludeña, V.; San-Segundo, R.; Morcillo, C.G.; López, J.C.; Muñoz, J.M.P. Increasing adaptability of a speech into sign language translation system. Expert Syst. Appl. 2013, 40, 1312–1322. [Google Scholar] [CrossRef]
- López-Ludeña, V.; San Segundo, R.; González-Morcillo, C.; López, J.C.; Ferreiro, E. Adapting a speech into sign language translation system to a new domain. In Proceedings of the INTERSPEECH 2013, Lyon, France, 25–29 August 2013; pp. 1164–1168. [Google Scholar]
- López-Ludeña, V.; San Segundo, R.; Ferreiros, J.; Pardo, J.M.; Ferreiro, E. Developing an information system for deaf. In Proceedings of the INTERSPEECH 2013, Lyon, France, 25–29 August 2013; pp. 3617–3621. [Google Scholar]
- Braffort, A.; Boutora, L. Défi d’annotation DEGELS2012: La segmentation (DEGELS2012 annotation challenge: Segmentation. In Proceedings of the JEP-TALN-RECITAL 2012, Workshop DEGELS 2012: Défi GEste Langue des Signes (DEGELS 2012: Gestures and Sign Language Challenge), Grenoble, France, 4–8 June 2012; pp. 1–8. (In French). [Google Scholar]
- Kacorri, H. TR-2015001: A Survey and Critique of Facial Expression Synthesis in Sign Language Animation. CUNY Academic Works. 2015. Available online: https://academicworks.cuny.edu/gc_cs_tr/403 (accessed on 20 August 2019).
- Kacorri, H.; Huenerfauth, M. Evaluating a dynamic time warping based scoring algorithm for facial expressions in ASL animations. In Proceedings of the SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, Dresden, Germany, 11 September 2015; pp. 29–35. [Google Scholar]
- Naranjo-Zeledón, L.; Peral, J.; Ferrández, A.; Chacón-Rivas, M. Classification-Subclassification Co-Occurrency Frequency Table for Sign Languages Systematic Mapping (Version 1) [Data set]. Zenodo 2019. [Google Scholar] [CrossRef]
- Jung, W.S.; Kim, H.S.; Jeon, J.K.; Kim, S.J.; Lee, H.W. Apparatus for Bi-Directional Sign Language/Speech Translation in Real Time and Method. U.S. Patent No. 15/188,099, 2 October 2018. [Google Scholar]
- Kanevsky, D.; Pickover, C.A.; Ramabhadran, B.; Rish, I. Language Translation in an Environment Associated with a Virtual Application. U.S. Patent No. 9,542,389, 10 January 2017. [Google Scholar]
- Dharmarajan, D. Sign Language Communication with Communication Devices. U.S. Patent No. 9,965,467, 28 September 2017. [Google Scholar]
- Opalka, A.; Kellard, W. Systems and Methods for Recognition and Translation of Gestures. U.S. Patent No. 14/686,708, 11 February 2016. [Google Scholar]
- Kurzweil, R.C. Use of Avatar with Event Processing. U.S. Patent No. 8,965,771, 24 February 2015. [Google Scholar]
- Bokor, B.R.; Smith, A.B.; House, D.E.; Nicol, I.W.B.; Haggar, P.F. Translation of Gesture Responses in a Virtual World. U.S. Patent No. 9,223,399, 29 December 2015. [Google Scholar]
- Kacorri, H.; Huenerfauth, M. Continuous profile models in ASL syntactic facial expression synthesis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; pp. 2084–2093. [Google Scholar]
- Kacorri, H.; Huenerfauth, M. Selecting exemplar recordings of American sign language non-manual expressions for animation synthesis based on manual sign timing. In Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (INTERSPEECH 2016), San Francisco, CA, USA, 13 September 2016. [Google Scholar]
- Kacorri, H.; Syed, A.R.; Huenerfauth, M.; Neidle, C. Centroid-based exemplar selection of ASL non-manual expressions using multidimensional dynamic time warping and mpeg4 features. In Proceedings of the 7th Workshop on the Representation and Processing of the Sign Languages, Language Resources and Evaluation Conference (LREC), Portorož, Slovenia, 23–28 May 2016. [Google Scholar]
- Huenerfauth, M.; Lu, P.; Kacorri, H. Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data. In Proceedings of the SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, Dresden, Germany, 11 September 2015; pp. 22–28. [Google Scholar]
- Huenerfauth, M.; Kacorri, H. Augmenting EMBR virtual human animation system with MPEG-4 controls for producing ASL facial expressions. In Proceedings of the International Symposium on Sign Language Translation and Avatar Technology, Paris, France, 9–10 April 2015; Volume 3. [Google Scholar]
- Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Jogos Sérios para Língua Gestual Portuguesa. In Proceedings of the Anais dos Workshops do Congresso Brasileiro de Informática na Educação, Maceió, Brasil, 26–30 October 2015. [Google Scholar]
- Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Virtual Sign in serious games. In Proceedings of the International Conference on Serious Games, Interaction, and Simulation, Novedrate, Italy, 16–18 September 2015; Springer: Cham, Switzerland, 2015; pp. 42–49. [Google Scholar]
- Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Virtualsign translator as a base for a serious game. In Proceedings of the 3rd International Conference on Technological Ecosystems for Enhancing Multiculturality, Porto, Portugal, 7–9 October 2015; pp. 251–255. [Google Scholar]
- Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Virtualsign game evaluation. In Proceedings of the International Conference on Serious Games, Interaction, and Simulation, Porto, Portugal, 16–17 June 2016; Springer: Cham, Switzerland, 2016; pp. 117–124. [Google Scholar]
- Lu, P.; Huenerfauth, M. CUNY American Sign Language Motion-Capture Corpus: First Release. In Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, the 8th International Conference on Language Resources and Evaluation, Istanbul, Turkey, 21–27 May 2012. [Google Scholar]
- CNLSE. Corpus de la Lengua de Signos Española. Available online: https://www.cnlse.es/es/corpus-de-la-lengua-de-signos-espa%C3%B1ola (accessed on 14 May 2019).
Data Item | Value | RQ |
---|---|---|
General | - | - |
Study ID | Integer | - |
Article Title | Name of the article | - |
Authors Names | Set of names of the authors | - |
Year of Publication | Calendar year | RQ3 |
University/Research Center | Name of the university/research center | RQ2 |
Venue | Name of publication venue | RQ3 |
Country | Name of the country (or countries) | RQ3 |
Characterization | ||
Sign Language-Project | Name of the sign language or project | RQ3, RQ5 |
Classification | According to predefined scheme | RQ1, RQ6 |
Sub-classification | According to predefined scheme | RQ2, RQ6 |
Abstract | Text | RQ4 |
Classification | Subclassification | Frequency |
---|---|---|
Animation Techniques | Avatar | 29 |
- | Notation | 14 |
- | Translation | 12 |
Automatic Translation | Translation | 182 |
- | Avatar | 104 |
- | Animation | 68 |
Avatar | Translation | 18 |
- | Animation | 32 |
- | Notation | 32 |
Computational Model | Avatar | 4 |
- | Animation | 1 |
- | Notation | 1 |
Corpus | Translation | 20 |
- | Example Based | 1 |
- | Avatar | 11 |
Educational | Avatar | 43 |
- | Translation | 24 |
- | Animation | 24 |
Example Based | Translation | 2 |
- | Animation | 1 |
- | Corpus | 1 |
Gesture or Sign Recognition | Translation | 19 |
- | Machine Learning | 2 |
- | Avatar | 14 |
Machine Learning | Translation | 3 |
- | Notation | 1 |
- | Recognition | 1 |
Notation | Translation | 3 |
- | Avatar | 8 |
- | Animation | 10 |
Projects | Translation | 2 |
- | Avatar | 2 |
- | Grammar | 1 |
Rule Based | Translation | 6 |
- | Avatar | 2 |
- | Animation | 4 |
SL Editor | Translation | 2 |
- | Avatar | 5 |
- | Animation | 3 |
SL General-Non technical | Translation | 2 |
- | Avatar | 2 |
- | Animation | 2 |
SL Grammar | Translation | 23 |
- | Rule Based | 1 |
- | Avatar | 18 |
Statistical Based | Translation | 10 |
- | Avatar | 1 |
- | Animation | 1 |
User validation | Translation | 6 |
- | Avatar | 16 |
- | Animation | 18 |
Venue | Class | Type | Count |
---|---|---|---|
Bachelor Thesis | Thesis | Bachelor’s Thesis | 5 |
Book Chapter or Book | Non-refereed | Book Section or Book | 40 |
Conference Paper | Peer-reviewed | Conference proceedings | 404 |
Doctoral Thesis | Thesis | Doctoral dissertation | 23 |
Journal Article | Peer-reviewed | Journal Article | 259 |
Master–Grade Thesis | Thesis | Master’s thesis | 29 |
Paper–unknown source | Non-refereed conference proceedings | Non-refereed articles | 46 |
Patent | Patents and invention disclosures | Granted patent | 6 |
Poster | Peer-reviewed | Conference proceedings | 4 |
Technical report | Peer-reviewed scientific articles | Conference proceedings | 1 |
Web Site Project | Unclassified | Unclassified | 2 |
Workshop Paper | Peer-reviewed | Conference proceedings | 84 |
Authors | Reference | Title | Country | Year |
---|---|---|---|---|
WS Jung, HS Kim, JK Jeon, SJ Kim and HW Lee | [138] | Apparatus for bi-directional sign language/speech translation in real time and method | United States | 2018 |
D Kanevsky, CA Pickover and B Ramabhadran | [139] | Language translation in an environment associated with a virtual application | United States | 2017 |
D Dharmarajan | [140] | Sign language communication with communication devices | United States | 2017 |
A Opalka and W Kellard | [141] | Systems and methods for recognition and translation of gestures | United States | 2016 |
RC Kurzweil | [142] | Use of avatar with event processing | United States | 2015 |
BR Bokor, AB Smith, DE House, BNII William and PF Haggar | [143] | Translation of gesture responses in a virtual world | United States | 2015 |
Phase | Actions | Applied |
---|---|---|
Need for mapping | Motivate the need and relevance | ✔ |
Define objectives and questions | ✔ | |
Consult with target audience to define questions | ✔ | |
Study identification | Choosing search strategy | - |
Snowballing | • | |
Manual | • | |
Conduct database search | ✔ | |
Develop the search | - | |
PICO | ✔ | |
Consult librarians or experts | • | |
Iteratively try finding more relevant papers | • | |
Keywords from known papers | ✔ | |
Use standards, encyclopedias, and thesaurus | • | |
Evaluate the search | ||
Test-set of known papers | • | |
Expert evaluates result | ✔ | |
Search web-pages of key authors | ✔ | |
Test–retest | • | |
Inclusion and Exclusion | - | |
Identify objective criteria for decision | ✔ | |
Add additional reviewer, resolve disagreements between them when needed | • | |
Decision rules | • | |
Data extraction and classification | Extraction process | - |
Identify objective criteria for decision | • | |
Obscuring information that could bias | • | |
Add additional reviewer, resolve disagreements between them when needed | • | |
Test–retest | • | |
Classification scheme | ✔ | |
Research type | • | |
Research method | • | |
Venue type | ✔ | |
Validity discussion | Validity discussion/limitations provided | ✔ |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Naranjo-Zeledón, L.; Peral, J.; Ferrández, A.; Chacón-Rivas, M. A Systematic Mapping of Translation-Enabling Technologies for Sign Languages. Electronics 2019, 8, 1047. https://doi.org/10.3390/electronics8091047
Naranjo-Zeledón L, Peral J, Ferrández A, Chacón-Rivas M. A Systematic Mapping of Translation-Enabling Technologies for Sign Languages. Electronics. 2019; 8(9):1047. https://doi.org/10.3390/electronics8091047
Chicago/Turabian StyleNaranjo-Zeledón, Luis, Jesús Peral, Antonio Ferrández, and Mario Chacón-Rivas. 2019. "A Systematic Mapping of Translation-Enabling Technologies for Sign Languages" Electronics 8, no. 9: 1047. https://doi.org/10.3390/electronics8091047
APA StyleNaranjo-Zeledón, L., Peral, J., Ferrández, A., & Chacón-Rivas, M. (2019). A Systematic Mapping of Translation-Enabling Technologies for Sign Languages. Electronics, 8(9), 1047. https://doi.org/10.3390/electronics8091047