Review

A Review of Design and Evaluation Practices in Mobile Text Entry for Visually Impaired and Blind Persons

Computer Engineering and Informatics Department, University of Patras, 26504 Rio, Greece
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(2), 22; https://doi.org/10.3390/mti7020022
Submission received: 20 January 2023 / Revised: 10 February 2023 / Accepted: 11 February 2023 / Published: 17 February 2023

Abstract

Millions of people with vision impairment or vision loss face considerable barriers in using mobile technology and services due to the difficulty of text entry. In this paper, we review related studies involving the design and evaluation of novel prototypes for mobile text entry for persons with vision loss or impairment. We identify the practices and standards of the research community and compare them against the practices in research for non-impaired persons. We find that there are significant shortcomings in the methodological and result-reporting practices for both population types. In highlighting these issues, we hope to inspire more and better-quality research in the domain of mobile text entry for persons with and without vision impairment.

1. Introduction

As our societies are increasingly permeated by information and communication technologies, the mediation of transactions, interactions, identities and participation through digital means requires a significant level of digital skills on the part of individuals [1]. These skills are critical for the ability to perform tasks and solve problems in the real world with the assistance of digital environments, as well as to perform tasks and solve problems that may exist only in the virtual worlds themselves [2]. Digital skills were traditionally oriented toward the use of a personal computer, but recent years have seen the emergence of mobile platforms (e.g., smartphones, tablets and smartwatches) as the predominant computing devices for a significant part of the population, especially those belonging to vulnerable or at-risk segments of society [3]. Therefore, the inability to take full advantage of mobile technology, especially smartphones, can create or deepen social, cultural and economic exclusion [4]. Furthermore, mobile digital skills are increasingly important due to the benefits of mobile health applications in health and wellness management [5]. The ability to input text on a digital device rests at the core of mobile digital skills. It is a means to provide instructions and data to the various applications and services running on the device (e.g., searching the web, saving contact details and taking notes) or purely a means of digitally mediated communication with other humans.
While input methods to assist mobile text entry have been extensively studied in the recent literature (e.g., see [6]), text entry research has focused much less on the needs of persons with vision problems. Definitions of what constitutes a vision problem and its severity vary across countries and even within research [7]. Using the World Health Organization’s International Classification of Disease as a guide [8], for the purposes of this paper, we use the term Vision-Impaired or Blind Persons (VIBPs) to describe people with uncorrected or uncorrectable impairments of visual acuity, the visual field, light sensitivity, contrast vision, binocular functions or light perception. This description includes a range of conditions such as cataracts, glaucoma, macular degeneration and blindness (including the legal, scientific and literal senses of blindness).
One might assume that the VIBP population group is not sufficiently large to attract more attention from the research community, but in reality, permanent vision impairment or blindness is experienced by over 100 million people worldwide, while unaddressed vision impairments are estimated to affect over 800 million people, with prevalence up to four times higher in low- and middle-income countries or communities [9]. Furthermore, the impact of vision impairment and blindness on the quality of life of the persons affected is significant. Beyond reduced independence in daily life and activities, child VIBPs may suffer from motor, psychological and social development issues, while adult VIBPs are at higher risk of depression and anxiety, and older adult VIBPs are exposed to significantly more physical and psychological risks than the rest of the population [10,11,12,13]. The use of mobile technology can mitigate some of these risks by increasing independence in daily activities and by facilitating communication and interventions that address cognitive and psychological needs. However, the use of mobile technology is strongly contingent on the ability to overcome the barrier of learning and applying the core skill of mobile text entry. If not overcome, this barrier can significantly deepen the societal divide, health risk and exclusion posed by the challenges of low vision or blindness.
A range of assistive technological tools to facilitate mobile text entry by VIBPs has been implemented in the context of research but also as commercially available solutions. These tools rely mostly on audio feedback during text entry and the use of speech-to-text as an input method, and they may leverage other modalities such as haptic feedback. In contrast to the interaction experience of persons with no vision impairment, for whom mobile text entry is primarily visual and therefore unimodal, VIBPs interact with mobile text entry in a heavily multimodal manner. While this multimodal support can offer an adequate experience and performance for users, it can have significant drawbacks due to the reliance on speech and audio, including reduced privacy in non-private contexts and during use in noisy environments [14]. As such, the ability to perform text entry with virtual keyboards is highly desirable due to its practicality and privacy-preserving nature.
In this paper, we present a structured review of recent developments in mobile text entry for VIBPs. Previous related work examining this field of research has focused on aggregate reports of participant performance with input methods but has not examined the methodological approaches adopted by researchers in the design and evaluation of novel prototypes [15,16,17]. Our work expands the previous literature by examining the research process (i.e., the methodology) rather than the research outcome. Furthermore, we present findings from a wider body of research than previously examined (24 papers reporting a total of 26 studies, compared with 11 in [15], 16 in [16] and 19 in [17], the most recent work). Additionally, we sought to discover any differences in practice between research for this population and research carried out in the context of non-impaired populations in order to identify areas of focus or improvement for future work. We contrast our findings from the 26 studies in our literature set against a representative sample of a further 26 high-quality studies that focus on non-impaired persons.
The rest of this paper is structured as follows. In Section 2, we outline our research questions and the methodology for collecting and analyzing the previous literature. In Section 3, we present our findings from the analysis, first for the design approaches for mobile text entry for VIBPs (Section 3.1) and then for the evaluation approaches for the prototypes developed (Section 3.2). We then compare these findings to the community practices in research for non-impaired persons in Section 3.3. Finally, Section 4 and Section 5 present our discussion and conclusions, respectively. Before proceeding, we present a brief overview of the Braille writing system, since it plays a central role in the related literature, and this brief explanation should be helpful to readers who are not familiar with the subject.

The Braille Writing System

In this section, we briefly introduce the Braille writing system, which is fundamental to much of the effort in facilitating reading and writing for VIBPs. Braille represents symbols in linguistic writing scripts using the concept of symbol cells, in which a 3 × 2 layout of numbered dots is uniformly positioned as per Figure 1 (left). Each symbol (e.g., a letter) is represented by activating a combination of dots in the cell, thus forming a unique pattern for each symbol. For example, the letter “t” is represented by an activation of dots 2, 3, 4 and 5, as per Figure 1 (right).
On paper, these dots appear as embossed points which are raised when activated or absent when inactive. Touching the patterns from left to right on paper allows a person to feel the embossed dots and translate them into language for the purpose of reading. Writing Braille can be accomplished by using a special typewriter (the Perkins Brailler, an embossing typewriter with six keys). In the digital age, Braille writing has been enabled on touchscreen devices by various commercial and research prototypes that leverage the basic Braille cell layout. To account for the lack of tactile feedback on a touchscreen, various methods have been proposed for forming patterns in a Braille cell, using one of three interaction paradigms. First, chording is the use of multi-finger presses on the touchscreen with both hands, as shown in Figure 2 (left). Next, gestural input is the gliding of a finger along the positions of the dots to be activated, as shown in Figure 2 (middle). Finally, tapping uses sequential presses of a single finger on the screen to form a pattern, as shown in Figure 2 (right). A short idle time or a special gesture (e.g., tapping on the screen with two fingers) may indicate that the user has completed inputting a pattern using a gesture or a tapping sequence and is ready to move to inputting the next character. Chording is more akin to writing with a Perkins Brailler, which is the device on which most persons with vision impairments will have trained, but it requires the use of both hands. The other two methods have the advantage of requiring only one hand. In all cases, the lack of tactile sensation regarding the locations of dot areas on the screen means that such methods must rely on audio or speech feedback and the tactile characteristics of the device (e.g., the screen’s edge bevel) to guide the user, or on advanced algorithms that allow pattern recognition regardless of actual finger positioning.
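To make the cell-and-dots concept concrete, the following minimal Python sketch (ours, not from any reviewed prototype) encodes a few letters as sets of activated dot numbers and checks whether a set of touched dots forms a known pattern:

```python
# Braille cells encoded as sets of activated dot numbers, using the standard
# 3 x 2 numbering (dots 1-2-3 down the left column, 4-5-6 down the right).
BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "t": {2, 3, 4, 5},  # the example in Figure 1 (right)
}

def recognize(touched_dots: set[int]) -> str | None:
    """Return the letter whose pattern matches the activated dots, if any."""
    for letter, pattern in BRAILLE_DOTS.items():
        if pattern == touched_dots:
            return letter
    return None

print(recognize({2, 3, 4, 5}))  # -> "t"
```

A real input method would additionally have to map touch coordinates to dot positions (or infer the pattern regardless of absolute positioning); this sketch assumes that mapping has already been performed.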

2. Survey Methodology

Literature surveys are a valuable research method aimed at making sense of the state of the art in a given scientific discipline. This is achieved by aggregating, interpreting, explaining and integrating existing research. Among the various types of literature surveys, as described in the taxonomy by the authors of [18], the work presented in this paper is best classified as a scoping review. Scoping reviews focus on the extraction of data and methodological approaches from each research piece in order to provide an accurate depiction of the practices and current state of a field. This can be useful for identifying issues with the current work and challenges or opportunities for the betterment of science and practices in the field of interest.
To conduct literature surveys, researchers can follow advice from a multitude of related guidance papers which focus on specific scientific disciplines (e.g., [19,20]) or aim to provide generic cross-disciplinary guidance (e.g., [21,22,23,24]). For this paper, we draw on the PRISMA methodological approach by Liberati et al. [25], since it contains a complete and independently validated framework for the article selection process and a thorough checklist for the contents and structure of the survey paper itself, thus ensuring the quality of the process and the outputs of the survey. The PRISMA methodology focuses on systematic reviews that consist of syntheses and meta-analyses of study findings, which is not the focus of our study. Rather than focusing on specific outcomes (e.g., the effects of interventions on text entry speed or error corrections), we focus on the methodological approaches used to design and evaluate interventions regardless of the outcome. Our paper is therefore structured in the closest possible adherence to the PRISMA checklist and contains only the items pertinent to our objectives, adapted to the terminology and methods applicable to the domain of human–computer interaction. From the 2020 version of the PRISMA statement checklist [26], we fully or partially omitted 14 out of a total of 27 items as they were not applicable to our research (see Table 1).

2.1. Research Questions

The goal of our scoping study is to discover the body of extant work on text entry research for VIBPs and to examine the methodological approaches used to design such technology, as well as to evaluate it. Our aim is twofold. First, we aim to provide a clear indication of the approaches, baselines and standards for newcomers to the field. Secondly, given our past experience in text entry research for non-vision-impaired persons, we aim to examine whether the work with vision-impaired persons has been methodologically approached differently by the research community and whether the difficulties of working with this population have led to novel approaches or continue to pose significant challenges in research practice and research outcome quality. As such, our research questions are the following:
  • RQ1: What are the design approaches used in text entry research for VIBPs (e.g., theory driven, use of human participants in the design process or use of computational methods)?
  • RQ2: What are the community standards and practices in conducting text entry evaluations with VIBPs (e.g., sample sizes, evaluation tasks, sample characteristics, study design and materials and metrics captured during evaluation)?
  • RQ3: Do the design and evaluation practices for text entry methods for VIBPs differ from the community standards in research addressing non-impaired persons?

2.2. Search Strategy

Our study follows a three-stage approach: a planning stage, a conducting stage and an analysis stage. We followed this approach to ensure that our review provided valid and reliable findings [27].

2.2.1. Planning Stage

In this stage, we defined our information sources, our search strategy and finally the process of selecting studies (inclusion and exclusion criteria).
Data sources: The main source for selecting our papers was the Google Scholar academic search engine. Google Scholar searches all the main digital libraries related to Computer Science and Informatics, including ACM Digital Library, IEEE Xplore, ScienceDirect, and SpringerLink. Additionally, Google Scholar gives the option to explore related works and citations and apply filters (e.g., year of publication). These characteristics make it the most comprehensive academic search engine [28], removing the need to perform the search in each distinct reputable academic repository.
Search terms: In order to retrieve papers related to our review, we selected the following core search queries: “mobile keyboard”, “mobile text entry” and “virtual keyboard”. Each core phrase was combined with three population phrases—“low vision”, “blind” and “visually impaired”—resulting in nine different search queries. Each query was run both with and without the boolean AND operator, using plain-term queries in the former case and paired quoted-term queries when the boolean operator was applied (e.g., [mobile keyboard low vision] and [“mobile keyboard” AND “low vision”]). We retrieved the first 10 pages of results from Google Scholar for each of these queries, resulting in 900 publications from which to perform further screening.
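The query construction amounts to a Cartesian product of the two phrase lists. A minimal sketch (ours; the exact strings submitted to Google Scholar are paraphrased):

```python
# Build the nine search queries from the core and population phrases.
from itertools import product

core_terms = ['"mobile keyboard"', '"mobile text entry"', '"virtual keyboard"']
population_terms = ['"low vision"', '"blind"', '"visually impaired"']

# Quoted, AND-joined variant; the plain variant drops the quotes and the AND.
queries = [f"{core} AND {pop}" for core, pop in product(core_terms, population_terms)]
assert len(queries) == 9  # three core phrases x three population phrases
```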
Inclusion and exclusion criteria: In the planning phase, we identified a two-pass process and related criteria that would be applied after the results of the search queries were collected. In the first pass, we applied a set of general criteria as follows:
  • The publication year was 2013 or later;
  • The publication was a scientific article published in reputable conference proceedings or journals;
  • The publication was written in English (for example, a few papers provided an English title and abstract, but the rest of the paper was written in another language).
For each search query, we applied the “year of publication” filter, electing to include papers published in 2013 or later in order to provide an adequate balance between the breadth of the search and the relevance of the results to the present day. The choice of year was due to the fact that at the end of 2009 (Q4), the majority of shipped devices came with capacitive touchscreens for the first time [29], while in 2013, 90% of shipped devices came with capacitive touchscreens [30]. Therefore, by 2013, consumers and researchers had had enough time to become familiar with this technology, and thus research published thereafter would be relevant to the current state of the art.
In the second pass, more detailed criteria were applied to evaluate each paper carefully and decide if it was relevant to include in our review. For this assessment, we considered whether the paper answered the following questions:
  • Does the paper propose a prototype text entry method or input support system (e.g., error correction) for mobile devices (smartphone, tablet or smartwatch)?
  • Is the proposed prototype intended for use by VIBPs?
  • Do the researchers conduct at least one evaluation study with users (non-impaired or VIBPs)?
  • Do the researchers mention details about the evaluation procedure (study environment, task type, etc.)?
  • Do the researchers present evaluation results (e.g., typing speed or error metrics)?
Papers that did not meet all five criteria, as expressed by the above five questions, were excluded from our review.
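The second pass therefore reduces to a conjunction of five boolean judgments per paper. The following is a hypothetical encoding of this check (the field names are ours, not the authors’):

```python
# Second-pass eligibility check: a paper is retained only if it satisfies
# all five criteria. The dict keys are illustrative field names.
def passes_second_pass(paper: dict) -> bool:
    criteria = [
        paper["proposes_mobile_text_entry_prototype"],  # prototype or input support system
        paper["intended_for_vibp"],                     # targets VIBP users
        paper["has_user_evaluation"],                   # at least one user study
        paper["reports_evaluation_procedure"],          # environment, task type, etc.
        paper["reports_evaluation_results"],            # e.g., speed or error metrics
    ]
    return all(criteria)
```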

2.2.2. Conducting Stage

The conducting stage of the review included the retrieval of relevant papers based on the search queries mentioned previously. Based on the PRISMA approach [25], we adopted a four-stage execution process, shown in Figure 3. However, we modified the process by performing the identification and screening phases twice: once to manually identify and screen a core set of publications and once more by using this core as input for an AI-assisted search tool in order to retrieve further possible results that might have been missed by Google Scholar. In this second iteration, we applied the same screening process and also eliminated any duplicates, as explained below.
Identification: For each search query, we applied the “year of publication” filter (2013 or later), as justified above, in order to include research with touchscreen mobile devices with internet connectivity and capacitive screens. Result retrieval was carried out in the period from 01 September 2022 to 31 October 2022. For each result, we conducted screening, first by title and then by reading the abstract, in order to eliminate obviously irrelevant results. This assessment was performed in a non-blinded manner by two reviewers. Disagreements between reviewers were resolved by consensus. This core set of manually identified results was imported to Zotero, a free and open-source reference management software. Next, we exported the list to BibTeX format and used this as input for Research Rabbit, an AI-driven tool for research discovery [31]. Research Rabbit uses imported publications as “seeds” for searching for related content and creates suggestion lists with similar works, earlier works (that were not already included in the input list) and later works relative to the imported papers. We exported Research Rabbit’s suggestions and imported the new list to Zotero. After removing duplicates, we performed a final round of screening based on the titles and abstracts, and from this process, we developed a set of papers for the detailed eligibility assessment phase. We downloaded the full-text PDFs from the official publishers’ websites or author manuscripts from the authors’ websites in the cases where we did not have access to the final published content. In cases where neither of these sources was available for the full text, we contacted the authors to obtain copies of their manuscripts.
After the manuscripts were collected, the eligibility evaluation phase included the application of the inclusion and exclusion criteria as detailed previously in a two-pass process performed in parallel by two reviewers, splitting the list equally between the two persons. After each reviewer completed his or her part of the list, the outcomes were independently checked by the other reviewer. Publications for which the satisfaction of the criteria was uncertain were discussed jointly and, as in the cases where disagreements existed, a consensus decision was made. In total, 24 publications were selected for inclusion in the analysis stage (see Table 2).

2.2.3. Analysis Stage

For this stage, we created a data extraction sheet to record the basic characteristics of each study and piloted it with five randomly selected results. The sheet was then refined accordingly as we proceeded to work through the rest of the results, using consensual approaches to add, remove or modify data fields and the respective data codings. The data sheet captured the following information, organized in sections (a sketch of one record is given after the list):
  • Prototype
    • Writing script which the user could employ with the prototype (see Section 3.1.1);
    • Input style: single-tap, chording or gestural (see Section 3.1.1);
    • Target device (smartphone, tablet or smartwatch).
  • Design Phase
    • Main design approach (see Section 3.1.2);
    • Use of focus groups in the design;
    • Use of pilot study to inform the design;
    • Number of non-impaired participants in the pilot;
    • Number of blind participants in the pilot;
    • Number of vision-impaired participants in the pilot.
  • Main Study Participants
    • Total number of participants;
    • Number of non-impaired participants;
    • Number of blind participants;
    • Number of vision-impaired participants;
    • Number of female participants;
    • Participant ages (minimum, maximum, average and standard deviation).
  • Main Study Design
    • Ethics approval;
    • Study environment (single lab trial, repeated lab trials and field);
    • Method of participant familiarization;
    • Type of task performed in the study;
    • Phrase set used (in the case of transcription tasks);
    • Phrase set language (in the case of transcription tasks);
    • Number of phrases to be entered (in case of transcription task);
    • Corrections allowed during entry.
  • Main Study Metrics
    • Text entry speed metric(s) used in analysis;
    • Error metric(s) used in analysis.
  • Post-Experiment
    • Use of post- or mid-experiment questionnaires;
    • Use of post- or mid-experiment interviews.
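As an illustration of the sheet’s structure, the following is a hypothetical sketch of a single record; the field names and codings are ours and do not reproduce the authors’ actual spreadsheet:

```python
# Hypothetical structure for one data extraction record (illustrative only).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StudyRecord:
    # Prototype
    writing_script: str                           # e.g., "Braille character"
    input_style: str                              # "single-tap" | "chording" | "gestural"
    target_device: str                            # "smartphone" | "tablet" | "smartwatch"
    # Design phase
    design_approach: str                          # e.g., "user-led" | "designer-led"
    used_focus_groups: bool = False
    pilot_blind_participants: Optional[int] = None
    # Main study participants and design
    n_participants: Optional[int] = None
    n_female: Optional[int] = None
    study_environment: str = "single lab trial"   # or "repeated lab trials" | "field"
    phrase_set: Optional[str] = None
    corrections_allowed: Optional[bool] = None    # None = not reported
    # Metrics and post-experiment instruments
    speed_metrics: list[str] = field(default_factory=list)
    error_metrics: list[str] = field(default_factory=list)
    used_questionnaires: bool = False
```

Coding unreported values as None (rather than a default) makes it straightforward to later count how many studies omitted each detail.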
While going through the 24 retrieved papers, we realized that 2 of them reported on multiple experiments (studies) in the same paper, such as using different prototypes with different populations or the same prototype under different conditions. We decided to treat each study as a separate entry in our data extraction sheet. Therefore, we ended up with data for 26 discrete studies.

2.2.4. Evaluation Studies for Non-Impaired Persons

Our third research question was based on the comparison of the design and evaluation practices for text entry methods for VIBPs and non-impaired persons. We created a second set of evaluation studies based on research papers in the field of text entry aimed at persons without vision impairment or blindness (non-impaired). For this dataset, we used a recently published list [6] with 460 research papers on text entry research published between 2018 and May 2022, further adding any relevant papers we could find dated after May 2022. Result retrieval was carried out in the period from 01 September 2022 to 31 October 2022. Then, we selected papers for inclusion in the analysis, using the same criteria as those applied in the selection process for text entry methods for VIBPs. Specifically, for this process, we considered whether each paper answered the following questions:
  • Does the paper propose a text entry method or input support system (e.g., error correction) for mobile devices (smartphone, tablet or smartwatch)?
  • Do the researchers conduct at least one study with users without vision impairment or blindness?
  • Do the researchers mention the details of the evaluation procedure (study environment, task type, etc.)?
  • Do the researchers present evaluation results (e.g., a typing speed metric)?
There were many more papers in this set compared with the set of papers related to VIBPs. Since our aim was not to provide a complete survey of the field of text entry research for this population, we randomly selected five papers from each publication year (2018–2022). Data on the studies contained in these papers were captured using a modified version of the data extraction sheet, which omitted items related to vision loss or impairment. While capturing data, we realized that two of the selected papers (published in 2020 and 2022) reported on multiple studies (two each). To maintain a balanced number of studies, we kept these papers and randomly removed one other study from the publications in those years, resulting in a total of 26 studies (5 in each publication year except 2020, which contained 6).
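The per-year selection can be summarized as stratified random sampling. A minimal sketch, under the assumption that the candidate papers are held as dicts with a "year" field (the names are ours):

```python
# Stratified random sampling: five papers from each publication year.
import random

def sample_per_year(papers: list[dict], k: int = 5, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)  # fixed seed so the selection is reproducible
    selected = []
    for year in range(2018, 2023):
        pool = [p for p in papers if p["year"] == year]
        selected.extend(rng.sample(pool, k))
    return selected
```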

3. Results

In this section, we discuss our findings for each research question, as stated previously. We begin by discussing the design approaches in text entry research for VIBPs. Next, we describe the structure and characteristics of the evaluation approaches used for the developed prototypes and report on the metrics used in these studies. Finally, we perform the same analysis for studies aimed at non-impaired persons and contrast the findings.

3.1. Design Methods

3.1.1. Design Concept and Target Device

The studies selected for our review focused predominantly on text entry with smartphone devices (22 studies). One study focused on tablet devices [32], one adapted its design for both smartphone and tablet devices [33], and two studies focused on text entry with smartwatches [34,35].
In terms of the input method interaction principle, the facilitation of Braille input was a primary focus for many studies, since this skill is frequently acquired by VIBPs. Traditional Braille input using a Perkins typewriter uses chording with both hands (multiple fingers). This input style has been studied for use in mobile devices, but other methods such as the use of stroke gestures (e.g., [36]) or multiple taps per Braille character with a single hand have been proposed (e.g., [37]). Other studies aimed to improve the usability of QWERTY keyboards for VIBPs (e.g., [38]) or to propose novel gestural (stroke-based) [39] or alternative layout keyboard-based input methods [40]. Broadly, we can classify all these approaches in terms of using single-finger actions (one or more sequential taps required to enter one character), chording (simultaneous tapping with multiple fingers to enter a single character) or gestural entry (single stroke to enter one or more characters) performed on a touchscreen. In this classification, single-finger tapping may include different tapping styles (short or long taps or a combination thereof). Furthermore, we consider gestural input to include whitespace and non-character actions (e.g., space or delete) which may be performed with a gesture. In several articles where the underlying user interface was a keyboard (QWERTY or other), the user was required to explore the keyboard with their fingers in order to locate the desired characters while a text-to-speech system read out the character under the user’s fingers. While sliding over the keyboard to locate the desired character can be considered a form of gesture, here, we are interested solely in characterizing how characters are entered (e.g., via finger lifting or finger tapping) instead of how they are located in a user interface layout. Thus, such input methods are also considered single-finger tapping.
Furthermore, we can characterize input methods according to the intended symbol (character) that they facilitate the entry of. The most obvious are methods which directly map symbols of a language’s alphabet (characters) to one or more interactive elements (keys). For example, the QWERTY keyboard has one key for each character of the alphabet, and the 12-button keypad has 3 characters mapped to each key. Repeated tapping of the same key allows the selection of the desired character in such keyboards. On the other hand, Braille keyboards map multiple keys (1–6) to a single character of the alphabet. Finally, handwriting input with gestures can support constructed scripts (i.e., new writing systems with new symbols) which may directly map onto the characters of an existing alphabet. Palm’s Graffiti (allegedly inspired by Unistroke [41]) is one such well-known example for mobile computing. The Moon alphabet is a less well-known writing system for the blind which is based on symbols that can be created with a single stroke.
As such, we could position the retrieved literature along two categorical axes related to the actions supported or required to perform inputs and the targeted symbols which the user could produce with the method, as presented in Table 2. From this analysis, we note that the majority of the publications were related to improving the usability of Braille input for mobile devices, although there were also several attempts at basing input methods on variations of the standard keyboard style. Only one publication touched on text entry using a constructed script: the work of Heni et al. [40], namely for the Moon alphabet. Given the lack of adoption of the Moon script, this is not surprising. However, we might have expected evaluations of common script using online handwriting recognition, given the improvements in recent years in this domain and the fact that many low-vision users practice the skill of physical handwriting, despite its difficulties [42]. We finally note that the majority of the prototypes involved some form of gesture support, either as the main method for entering characters (e.g., [36,38]) or as a support mechanism, such as to change keyboard modes (from alphabetic to numeric) or to perform special actions such as backspacing or marking the end of a word.
Table 2. Input method characteristics.

Publication | Input Symbology ¹
Anu Bharath et al. [43] | AC
Billah et al. [39] | AC
Buzzi et al. [44] | AC
Gaines et al. [45] | AC
Lai et al. [46] | AC
Lottridge et al. [38] | AC
Rakhmetulla and Arif [47] | AC
Raynal and Roussille [48] | AC
Samanta and Chakraborty [49] | AC
Shi et al. [50] | AC
Alhussaini et al. [32] | BC
Alnfiai and Sampalli [51] | BC
Alnfiai and Sampalli [37] | BC
Alnfiai and Sampalli [52] | BC
Dobosz and Szuścik [53] | BC
Šepić et al. [54] | BC
Facanha et al. [55] | BC
Luna et al. [34] | BC
Luna et al. [35] | BC
Li et al. [56] | BC
Mattheiss et al. [36] | BC
Southern et al. [33] | BC
Zhang and Zeng [57] | BC
Heni et al. [40] | CS

¹ AC = alphabet character; BC = Braille character; CS = constructed symbol.

3.1.2. Design Methodology

In terms of design, Saffer [58] identified four main approaches, of which three employ user input as an integral element of the design of a system (user-centered, activity-centered and system design), while a fourth approach (genius design) relies on designers as the sole source of inspiration and restricts users to the role of validating these designs. The latter category can be considered to include the use of computational optimization as a tool to model human behavior and to rapidly traverse the large design space which is afforded by the permutations of user interface element parameters, such as size or positioning and the arrangement of interactive elements [59]. Computational optimization has been used in the past to guide text entry method design (e.g., [60,61]). Synthesizing from these perspectives, we classified the design approaches for the prototypes described in the retrieved works as follows:
  • User-led: This approach produces designs by involving users in all process stages, including before the conceptual phase, through methodologies such as human-centered design, design thinking or activity-centered design;
  • Designer-led: This approach produces designs based on designer inspiration. Designs are guided by classic human–computer interaction theory and principles or are informed by previous work without any involvement from users in the conceptual phase, although users may be involved in prototype refinement activities;
  • Computation-led: This approach produces designs by making significant use of data-driven or computational optimization approaches without any direct involvement from users, other than users serving as the source of raw data fed into the design process.
Researchers may choose to employ a combination of approaches. For example, human-centered design principles may be used at the start of the process to elicit requirements or to define and prioritize goals, with computational optimization then applied in the prototyping stage to narrow down the design space; alternatively, computational optimization may inform or produce a first iteration of a prototype, followed by co-design or small pilot studies with users to refine it before the main evaluation. Such combinatory approaches are reminiscent of mixed-method research, which places emphasis on both the qualitative (user input) and quantitative (computational optimization) aspects of a design problem [62]. We assume that designers are familiar with core human–computer interaction theory and principles and will apply these in any final design. Therefore, we introduce a fourth category that reflects designs derived using both users and computation in the process:
  • Combination: designs which are the product of combining user involvement and computational methods as part of the design process.
The retrieved works are classified according to their design approach in Table 3. We note that the majority of the articles adopted a designer-led approach. Only two studies adopted a user-led approach, despite the emphasis given to such approaches by the broader research community. Three studies adopted a combination approach. The remaining 19 studies all described prototypes which were generated by the designers alone. This is a surprising finding, given that the practicalities and challenges of vision impairment are not easy for non-impaired designers to understand. We would have expected the research community to dedicate more effort to understanding the contextual nuances of text entry for vision-impaired persons before proceeding to propose novel designs.
As part of the preliminary evaluation process, seven studies used a pilot phase to refine their prototypes (user-led: 2; designer-led: 2; combination: 3). It is again noteworthy that only two designer-led studies employed human participants as a means of refining their prototypes. Only one study (combination) used focus groups as part of the design, and this was only after a pilot evaluation [47]. Pilot studies involved either blind or vision-impaired participants, with only one paper reporting pilot testing with both population types [47]. Pilot participant numbers ranged from as few as 2 [44] to 13 [38].

3.2. Evaluation Methods

3.2.1. Participants

As required by our inclusion criteria, all collected research papers included at least one evaluation study. The number of participants in each evaluation study ranged from 1 to 14 (x̄ = 8.65, σ = 3.4). Three evaluation studies involved up to 5 participants, 12 involved from 6 to 9 participants, and 11 involved 10–14 participants. Of the 26 evaluation studies, 5 did not mention the gender of the participants, and 4 did not mention their age. The ages of the participants varied from 17 to 75 years. It is not clear how many participants belonged to a given age group because some studies mentioned only the range of the participants’ ages or only their average age. The majority of the studies (24) involved a main evaluation with VIBPs. Thirteen evaluation studies involved only blind persons, 1 involved only vision-impaired persons, and 8 involved both blind and vision-impaired persons. The two remaining studies [45,46] did not involve any blind or vision-impaired persons and were instead carried out with non-impaired persons. In [45], there were four non-impaired persons, and the device used for text entry was obscured from their sight (the authors did not disclose the precise manner). In [46], the participants were 13 non-impaired persons, and the device was obscured using paper cones attached to the participants’ wrists.

3.2.2. Study Environment

All reviewed papers conducted their evaluation studies in a controlled environment (lab studies). Notably, 17 evaluation studies were conducted in a single session (one lab experiment), while 9 were conducted in repeated sessions (repeated lab trials). For example, the authors of [36] conducted a long-term evaluation study (2 weeks of daily training on text input), and in [40], the researchers conducted a week-long evaluation (one session per day). The main task in all evaluation studies was a transcription task (participants are asked to transcribe a predefined set of memorable sentences or phrases), an established methodology for evaluating text entry [64]. Before the main task, it is also common to offer a familiarization procedure for the input method. In 15 evaluation studies, a guided tutorial was offered to each participant. In 5 evaluation studies, the participants had free time to practice the evaluated input method, while 6 studies did not mention details about the familiarization procedure.
Another important parameter of a transcription task is the selection of the phrases that the users have to type. Most of the researchers created a custom phrase set (16 evaluation studies). In six evaluation studies, an original or modified MacKenzie and Soukoreff phrase set [65] was used. In one evaluation study, the dataset proposed in [66] was used. One evaluation study selected words from the Open American National Corpus (ANC), and one used Pinyin, a phonetic spelling system in Roman characters for inputting Chinese characters. Finally, one evaluation study did not provide details about the phrase set used. We believe that using custom phrase sets makes it more difficult for other researchers to validate the results or to compare them with other studies. A possible explanation for the use of custom phrase sets could be the need to conduct the evaluation task with non-English phrases; indeed, in 11 evaluation studies, non-English phrases were used.
Two further important characteristics of the evaluation studies are the number of phrases that the users have to type and whether corrections while typing are allowed. Most of the evaluation studies (18) asked the users to type a fixed number of phrases, words or numbers without time constraints. On the other hand, five evaluation studies asked the users to type as many phrases as they could in a given time period. Finally, three evaluation studies did not give details about this parameter. During a transcription task, the decision of whether to allow users to correct their mistypes affects their performance. Ten evaluation studies explicitly reported that corrections were allowed, two reported that corrections were not allowed, and the remaining 14 studies did not provide this information.
It is common practice to offer participants a small reward (e.g., a gift card or cash) as compensation for their time. In six evaluation studies, a flat reward was offered to all participants. In one evaluation study [33], a performance- and accuracy-based reward was given, and in one study [56], a mixed reward method was applied, with a flat reward for all participants and an extra reward for the best participant (performance-based).

3.2.3. Evaluation Metrics

Text entry research is frequently evaluated with an assortment of metrics, which are described in [67,68,69,70]. Researchers are typically interested in quantifying performance in terms of speed and errors made during input. Speed of entry is typically measured in words per minute (WPM), where a “word” is defined as five consecutive printable characters (the average length of actual words) and the measured time includes time spent using backspace or other editing functions. Error rates are measured in various ways. The error rate (ER) metric refers to the ratio of incorrect characters in the submitted text over the length of that text. The corrected error rate (CER) and non-corrected error rate (NCER) measure, respectively, the number of erroneous but subsequently fixed characters and the number of erroneous characters left unfixed, each over the total number of entered characters (including those that were later deleted). Combining the CER and NCER, the total error rate (TER) is a comprehensive metric aiming to capture both the fixed and non-fixed errors as well as the effort expended to fix errors, measuring the total incorrect and corrected characters over the total correct, incorrect and corrected characters. The minimum string distance error rate (MSD-ER) calculates the smallest number of edit operations on the entered text required to precisely match the text that the participants were supposed to enter. Another metric is keystrokes per character (KSPC), which measures the total number of keys pressed during entry over the length of the submitted text. For gestural input, gestures per character (GPC) is the equivalent of the KSPC. In addition to these frequently used metrics, researchers can define their own based on the prototype functionality or behavior which they wish to investigate.
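For concreteness, the following sketch implements the most common of these metrics, following the standard definitions surveyed above; the function and variable names are ours:

```python
# Standard text entry metrics. C = correct characters, IF = incorrect but
# fixed characters, INF = incorrect and not fixed characters.

def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute; a 'word' is five characters. The (len - 1) term
    follows the convention that timing starts on the first keystroke."""
    return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

def total_error_rate(c: int, inf: int, if_: int) -> float:
    """TER: fixed plus unfixed errors over all correct, incorrect and
    corrected characters. CER and NCER use the same denominator with
    only if_ (or only inf) in the numerator."""
    return 100.0 * (inf + if_) / (c + inf + if_)

def kspc(keystrokes: int, transcribed: str) -> float:
    """Keystrokes per character: all key presses over final text length."""
    return keystrokes / len(transcribed)

# Example: 'hello world' (11 chars) entered in 12 s with 13 keystrokes,
# one error fixed along the way and none remaining.
print(round(wpm("hello world", 12.0), 1))    # 10.0 WPM
print(round(total_error_rate(11, 0, 1), 1))  # 8.3 (% TER)
print(round(kspc(13, "hello world"), 2))     # 1.18 KSPC
```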
From our collected set of papers, we examined the metrics employed by the researchers, and we summarize these in Table 4. From this table, we note that WPM remains the predominant metric for speed, though others have used characters per minute, characters per second, chording rate and time per task as alternatives for measuring entry speed. In terms of errors, the predominant metric was the corrected error rate (CER), followed by the non-corrected error rate (NCER), MSD-ER and Other (the latter two were tied for third place). Even though the total error rate has been characterized as the most powerful metric of all those available, since it captures not just error-making behavior but also the effort expended to fix errors [68], it was reported in only five studies. An additional two studies reported both the CER and NCER and therefore could have also reported the TER, but they did not. Thus, challenges in text entry due to errors were left inadequately captured in the majority of the studies we investigated. We also note that the number of error metrics reported in the studies ranged from none (e.g., [55]) to a maximum of five (e.g., [52]), with a mean of x̄ = 1.667 and standard deviation of σ = 1.685 metrics per study. This demonstrates that behaviors in the making and correction of errors were insufficiently reported in many of the studies in this domain.

3.3. Comparisons with Text Entry Research for Non-Impaired Persons

In this section, we present an analysis of the studies on text entry for non-impaired persons, comparing their quantified characteristics to those reported in the previous section on studies for VIBPs. Overall, there was greater variability in the types of prototypes and target devices in this set: seven papers focused on entry with various novel virtual keyboard forms, nine investigated gestural entry both on top of virtual keyboards and independent of any particular keyboard layout, and eight more focused on various methods of input support (e.g., error correction, disambiguation and predictive text). Eleven papers focused on smartphones as the input device, 10 used smartwatch (or wrist-wearable) devices, and 3 considered extra-device sensors (e.g., finger-worn sensors) as the primary input method (Table 5).

3.3.1. Design Methodology

As shown in Figure 4, works aimed at users without impairment exhibited a more balanced split between design informed by theory and previous works and design informed by a combination of methods including computational optimization. Strikingly, across both population types, a purely user-led design approach accounted for a very small percentage of the prototypes presented in the retrieved papers.

3.3.2. Participants

We note that the average number of participants per study was almost twice as large in studies involving non-impaired persons (x̄ = 15.46, σ = 5.04) compared with studies involving VIBPs (x̄ = 8.65, σ = 3.4). The minimum and maximum numbers of participants were correspondingly considerably different, as shown in Figure 5.
While the number of participants in studies involving VIBPs was low, these studies exhibited a better gender balance on average (51.85% female participants) compared with the studies addressing non-impaired persons (36.69%), as shown in Figure 6. Furthermore, they encompassed a wider range of ages, with three of these studies (11.54%) including persons over 65. In contrast, older participants were not included in any of the studies involving non-impaired persons.

3.3.3. Study Environment

In terms of the actual study design, we observed that 6 studies involving non-impaired persons were repeated lab trials, while the rest (20) used single sessions (one lab experiment). No study with in-field settings was found in our selected paper set (Figure 7). All studies involved the execution of transcription tasks as the main evaluation task. Similar to the studies for VIBPs, most studies (76.9%) reported a familiarization process before the start of the actual experiment. In contrast, only 7.7% used a custom phrase set, and no study used a language other than English. The majority of the studies (61.5%) explicitly reported that error corrections were allowed, while the remaining studies did not mention this aspect. The majority (76.9%) also reported that the participants had to enter a fixed number of phrases, with the rest not mentioning the precise number of phrases to be entered. Comparisons with the characteristics of the studies on prototypes addressing VIBPs are shown in Figure 8.

3.3.4. Evaluation Metrics

Speed of entry was predominantly measured using the words per minute (WPM) metric (Figure 9). One study utilized the characters per minute (CPM) metric [85], while the two studies reported in [80] did not report entry speed, since the emphasis was on the time required to correct errors already present in pre-entered text. In terms of error metrics, the total error rate was the most popular choice (reported in 38.5% of the studies), while the ER, CER, NCER, MSD-ER and KSPC were reported in smaller percentages, as shown in Figure 10. Notably, a large proportion of the studies (61.5%) reported novel metrics alongside more established ones, such as the utilized and wasted bandwidth [92], word error rate [71] and actions per word [88], among others. Overall, the studies reported on an average of x̄ = 3.038 (σ = 1.612) metrics.

4. Discussion

The analysis in the preceding sections yielded a range of interesting findings. We now turn to exploring these further in the context of the first two research questions (design approach and evaluation practice), and at the same time, we contrast these findings with those from studies with non-impaired persons.

4.1. Design Approaches

Overall, we note that there was an emphasis primarily on smartphone-based and, to a lesser extent, smartwatch-based interventions. This contrasts with the research for non-impaired persons, where more imaginative approaches to text entry using extra-device hardware and sensing were encountered. While it is natural to focus on input using the same device where information is delivered, the lack of attention to alternative approaches constrains the possibilities that might otherwise be afforded by other means of input that could be more accessible to VIBPs. For example, a solution such as the finger-based chorded input without a touchscreen described in [71] might afford a plausible alternative for persons with visual impairment, since it leverages tactile and motor abilities that are unaffected by vision problems and, at the same time, preserves privacy in public environments. We also note that gesture entry is commonly used in parallel with other input techniques (i.e., chording and tapping) as a means to provide control for auxiliary actions. These gestural actions, however, are defined by researchers and not the users themselves, leading to questions about their suitability and to room for improvement in the design space.
The preceding observations could be a result of the designer-led approach mostly used in studies for VIBPs. It is indeed striking that the overwhelming majority of the prototypes were designed by the researchers themselves and that only five studies discussed prototypes derived with input from persons with vision impairment or blindness during the design process. A lack of user involvement in the design was also observed in studies with non-impaired persons, though this can be explained by researchers having first-hand experience with the functionality during the design and therefore arguably being in a better position to assess the suitability of various alternatives as they explore the design space. Naturally, trying to fully comprehend the experience of a person with vision problems is hard, even with the use of advanced simulation techniques [95]. Therefore, we would have expected stricter adherence to human-centered design principles in research for VIBPs. We also observed that computational optimization played a significant part in informing the design of prototypes in the domain of text entry for non-impaired persons, accounting for almost half of the studies in our research body, while it was encountered in only three studies with VIBPs.

4.2. Evaluation Practice

We explored several aspects of the evaluation process of the studies we examined from a methodological viewpoint. Starting with the participants, we found that studies with VIBPs involved, on average, a much lower count of participants (x̄ = 8.65) compared with the studies with non-impaired persons (x̄ = 16.15). Still, both numbers seem quite low compared with the average number of participants in human–computer interaction experiments (x̄ = 20, σ = 12) reported by Caine [96]. Arguably, the recruitment of VIBPs is harder, as not only do these persons have to be identified and approached, a process that often involves an external organization, but it is also more difficult (and more dangerous) for them to travel to the locus of the experiment (e.g., a university lab). An alternative for researchers is to conduct the experiment at the participants’ preferred locations, but this may not be a viable option in case external research equipment (e.g., eye trackers, cameras or sensing devices) needs to be used. On a more positive note, the studies with VIBPs maintained a better gender balance (x̄ = 51.85% female participants), and a small percentage of the studies included persons aged 65 or over. Overall, these findings raise significant questions about the generalizability of the reported results, especially considering that we did not encounter statistical power analyses in any of the 26 + 26 studies we examined, nor the reporting of effect sizes. In other words, where results were reported without statistical significance, we could not be sure whether this was because the sample sizes were not sufficiently large to expose effects with statistical significance or whether any observed differences were indeed not significant. The external validity of the findings was also threatened by the fact that none of the studies took place in field settings; all consisted exclusively of laboratory experiments. Worse, the majority of the studies with both population types were single-round experiments, with only approximately one-third reporting results from repeated trials spaced out over time with the same participants.
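To illustrate the kind of power analysis whose absence we note above, the following sketch (illustrative only; not drawn from any reviewed study) estimates the participants needed per group to detect a large effect in a between-subjects comparison of two input methods:

```python
# Prospective power analysis for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,  # assumed large effect (Cohen's d); an assumption, not a finding
    alpha=0.05,       # significance level
    power=0.8,        # desired statistical power
)
print(f"Participants needed per group: {n_per_group:.1f}")  # ~25.5
```

Against this benchmark, the average sample sizes observed in both study sets would be underpowered for all but very large effects.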
Delving into the details of how the lab studies were performed, we noted that all involved a form of transcription task, which has been termed the de facto method for text entry studies [64,97], even though its validity has been criticized previously and alternatives have been proposed, such as the image description task or the composition task [64,98,99]. A significant percentage of the studies in both population types did not report important details pertinent to the transcription task, such as whether they used a fixed number of transcribed phrases or allowed error correction during input (see Figure 8). One noteworthy difference is that the majority of the studies for VIBPs used custom phrase sets instead of community-standard ones which have been validated for memorability and suitability for text entry research (61.5% vs. just 7.7% of studies for non-impaired persons). This is partly explained by the fact that a significant percentage of the studies with VIBPs did not use English as the language of input (42.3%), while the studies for non-impaired persons were conducted exclusively in English. This finding correlates with the observation that vision problems are more prevalent in countries with middle and lower incomes [9], which may make researchers more likely to address populations from these areas. However, the use of non-English phrase sets in transcription tasks which were produced without methodological quality assurance places further doubt on the internal consistency of the laboratory studies [100,101]. Finally, we note that almost a quarter of the studies with both population types did not mention any process of familiarizing the participants prior to the experiment. It is therefore possible that the results in these studies were biased, if familiarization was indeed not carried out.
We now comment on the metrics used in the evaluation studies. Studies on the prototypes for VIBPs reported, on average, a narrower array of metrics (x̄ = 1.667, σ = 1.685) compared with the studies involving non-impaired persons (x̄ = 3.038, σ = 1.612). This suggests that participant performance in the studies with VIBPs was possibly insufficiently measured, since a low number of captured metrics is unlikely to provide a truly comprehensive view of performance. While most studies for VIBPs used the WPM metric to measure text entry speed (61.5%), this proportion was lower than that for studies with non-impaired persons (84.6%). This non-standard reporting makes it more difficult to contextualize the results against the background of other research. In terms of error rate reporting, a variety of metrics was used, but even though the total error rate has been lauded as the most complete measure of both errors and the effort (if any) expended to correct them [68], it was used in only 19.2% of the studies with VIBPs and 30.8% of the studies with non-impaired persons. An impressive 57.7% of the studies with non-impaired persons used metrics other than the core ones examined in [68]. The lack of validation for these metrics jeopardizes the informative qualities of the papers that report them, and they may offer incomplete or incorrect explanations of human behavior [102]. These observations demonstrate a lack of consensus in the research community concerning the most appropriate ways to report error performance in text entry studies. This dissonance makes the results in the reviewed literature difficult to contextualize and compare against related work in the field (e.g., as discussed in [103]).

4.3. Limitations

The limitations of our work relate to the approach used to collect and analyze the related literature and to the fact that the related literature for VIBPs is rather scarce. Access limitations (paywalls) for some papers recovered from the search process did not allow us to consider them for inclusion, although these were very few (two papers). We only included contributions published in peer-reviewed conference proceedings and journals, so data from studies reported in undergraduate, postgraduate and doctoral dissertations were not considered in our results. Furthermore, because of the quantity of papers in text entry research for non-impaired persons, we only included a small sample, equal in size to the number of studies for VIBPs, for the purposes of comparison. The sample was randomly selected, and another selection might therefore have yielded slightly different results. However, given the quality of these papers (all published in highly reputable conferences or journals), we believe the sample reflects some of the best work in recent years and therefore provides a solid standard for comparison. We explicitly did not expand our search strategy to include the extant strand of text entry research focusing on older adults. In the future, a scoping review including older adults and other specific population categories might provide a better overview of the entirety of the field of mobile text entry research; this objective, however, was not within the scope of our current work.

5. Conclusions

Vision impairment and blindness affect millions of persons worldwide, and these conditions are more prevalent in geographic areas and communities where the risk of exclusion is higher. The difficulties these persons face when performing text entry on mobile devices place significant barriers on the use of modern services that rest on the delivery and production of information with these devices. As such, these difficulties place vulnerable populations at even greater physical and mental health risk and deepen the digital divide in our societies. In this paper, we reviewed recent research on mobile text entry for persons with vision impairment or blindness, with the objective of identifying the methodological approaches used to design and evaluate related prototypes.

Through this process, we discovered several discrepancies between the published research in this domain and research on mobile text entry for non-impaired persons. We found that prototype designs were primarily based on designer inspiration, with little involvement of the actual end users in the design process, despite the strong emphasis on human-centered design in the wider discipline of human–computer interaction. Furthermore, we noticed that assistive tools such as computational optimization are seldom used to provide design insights for these prototypes. We also found that prototype evaluations tend to include a low number of participants and consist exclusively of lab-based experiments, placing some doubt on the generalizability of their findings. Further issues in the published studies relate to the frequent omission of important experiment design details, such as limits on the quantity of text to be entered, information on the process of participant familiarization with the prototypes and whether error correction was allowed. Studies often rely on non-validated phrase sets, unavailable to other researchers, to provide input stimuli for participants. These issues affect the reproducibility of research and therefore the ability of the wider community to build on this published experience.

Furthermore, we observed wide variation in the number and type of metrics reported in these studies, as well as the omission of important statistical analysis details, such as an estimation of the study power and the reporting of effect sizes. These shortcomings detract from the ability of researchers to compare the outcomes of published research or to solidly position their own results in the context of previous works. At the same time, the analysis of a small but highly representative sample of mobile text entry research for non-impaired users revealed similar weaknesses, but also practices which all text entry researchers might want to consider adopting, such as the use of computational optimization to explore the design space or the use of the total error rate as a more reliable metric for performance.
Addressing the issues uncovered in our review involves varying levels of difficulty. At the lowest level, addressing some of the omissions in the reporting of a study's design and results only requires following a few good examples of previously published research or taking up advice such as that published in [96]. Greater consideration of user input or computational optimization during the design process requires more time and effort on behalf of the researchers, but is definitely achievable. Harder to address are low participant numbers, the lack of validated non-English phrase sets and the absence of common reporting standards (i.e., common metrics). To improve participant numbers, researchers might consider sharing developed prototypes with other research teams globally in order to perform parallel experiments; this would increase not only participant counts but also the geographic and cultural diversity of studies. More work following up on [100,101] is needed to source and validate non-English phrase sets for text entry, not just for the needs of studies involving persons with vision impairment but for the wider text entry community. Finally, agreeing on a common set of metrics (some of which might be considered mandatory) may require concerted efforts across the global research community or, at the very least, leading by example from prominent members, who should ensure that these metrics are not just consistently used but also justified in all their future publications.
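As one concrete, low-cost practice touching on both participant numbers and statistical reporting, an a priori power analysis can justify a study's sample size before recruitment. The sketch below uses the statsmodels library for a within-subjects comparison of two input methods, analyzed as a t-test on paired differences; the medium effect size (Cohen's d = 0.5) is an assumed value that would normally come from pilot data or related literature.

```python
# A priori sample-size estimation for a within-subjects comparison of two
# text entry methods (paired t-test on per-participant differences).
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample/paired t-test power analysis
n = analysis.solve_power(effect_size=0.5,   # assumed Cohen's d (medium)
                         alpha=0.05,        # significance level
                         power=0.8,         # desired statistical power
                         alternative="two-sided")
print(f"Participants needed: {n:.1f}")      # ~33.4, i.e., 34 participants
```

Reporting the assumed effect size, alpha and power alongside the resulting sample size would let readers judge whether a study was adequately powered.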
We hope that our findings may inspire future researchers to become more proactively involved in this important, interesting and valuable field of text entry research for VIBPs, since it currently appears understudied and presents fertile ground for exploration and innovation with a potentially large positive impact. We also hope that our findings may inform future work in the domain of mobile text entry and guide researchers toward a more considerate approach, both in the methodological aspects of their work and in the standards of reporting research in publications.

Funding

This research was co-financed by Greece and the European Union (European Social Fund (ESF)) through the Operational Programme “Human Resources Development, Education and Lifelong Learning 2014–2020” in the context of the project “MoTEVIUs-Mobile text entry for vision impaired users” (5047127).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Allmann, K.; Blank, G. Rethinking Digital Skills in the Era of Compulsory Computing: Methods, Measurement, Policy and Theory. Inform. Commun. Soc. 2021, 24, 633–648.
2. Reddy, P.; Sharma, B.; Chaudhary, K. Digital Literacy: A Review of Literature. Int. J. Technoethics (IJT) 2020, 11, 65–94.
3. de Araujo, M.H.; Reinhard, N. Substituting Computers for Mobile Phones? An Analysis of the Effect of Device Divide on Digital Skills in Brazil. In Proceedings of the Electronic Participation, San Benedetto Del Tronto, Italy, 2–4 September 2019; Panagiotopoulos, P., Edelmann, N., Glassey, O., Misuraca, G., Parycek, P., Lampoltshammer, T., Re, B., Eds.; Springer International Publishing: Cham, Switzerland, 2019. Lecture Notes in Computer Science. pp. 142–154.
4. Lee, H.; Park, N.; Hwang, Y. A New Dimension of the Digital Divide: Exploring the Relationship between Broadband Connection, Smartphone Use and Communication Competence. Telemat. Inform. 2015, 32, 45–56.
5. Kumar, D.; Hemmige, V.; Kallen, M.A.; Giordano, T.P.; Arya, M. Mobile Phones May Not Bridge the Digital Divide: A Look at Mobile Phone Literacy in an Underserved Patient Population. Cureus 2019, 11.
6. Komninos, A.; Simou, I. Text Entry Research-the Last 5 Years (2018–2022). In Proceedings of the TEXT2030 Workshop Held at MobileHCI’22, Vancouver, BC, Canada, 1 October 2022.
7. Flaxman, S.R.; Bourne, R.R.A.; Resnikoff, S.; Ackland, P.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.H.; et al. Global Causes of Blindness and Distance Vision Impairment 1990–2020: A Systematic Review and Meta-Analysis. Lancet Glob. Health 2017, 5, e1221–e1234.
8. International Classification of Diseases 11th Revision (ICD-11). Available online: https://icd.who.int/en (accessed on 7 February 2023).
9. Steinmetz, J.D.; Bourne, R.R.A.; Briant, P.S.; Flaxman, S.R.; Taylor, H.R.B.; Jonas, J.B.; Abdoli, A.A.; Abrha, W.A.; Abualhasan, A.; Abu-Gharbieh, E.G.; et al. Causes of Blindness and Vision Impairment in 2020 and Trends over 30 Years, and Prevalence of Avoidable Blindness in Relation to VISION 2020: The Right to Sight: An Analysis for the Global Burden of Disease Study. Lancet Glob. Health 2021, 9, e144–e160.
10. Elsman, E.B.M.; Al Baaj, M.; van Rens, G.H.M.B.; Sijbrandi, W.; van den Broek, E.G.C.; van der Aa, H.P.A.; Schakel, W.; Heymans, M.W.; de Vries, R.; Vervloed, M.P.J.; et al. Interventions to Improve Functioning, Participation, and Quality of Life in Children with Visual Impairment: A Systematic Review. Surv. Ophthalmol. 2019, 64, 512–557.
11. Brown, R.L.; Barrett, A.E. Visual Impairment and Quality of Life Among Older Adults: An Examination of Explanations for the Relationship. J. Gerontol. Ser. B 2011, 66B, 364–373.
12. Demmin, D.L.; Silverstein, S.M. Visual Impairment and Mental Health: Unmet Needs and Treatment Options. Clin. Ophthalmol. 2020, 14, 4229–4251.
13. Vu, H.T.V.; Keeffe, J.E.; McCarty, C.A.; Taylor, H.R. Impact of Unilateral and Bilateral Vision Loss on Quality of Life. Br. J. Ophthalmol. 2005, 89, 360–363.
14. Stefanis, V.; Komninos, A.; Garofalakis, J. Challenges in Mobile Text Entry Using Virtual Keyboards for Low-Vision Users. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, Essen, Germany, 22–25 November 2020; Association for Computing Machinery: New York, NY, USA, 2020. MUM ’20. pp. 42–46.
15. Siqueira, J.; Soares, F.; Ferreira, D.J.; Silva, C.R.G.; Berretta, L.d.O.; Ferreira, C.B.R.; Felix, I.M.; Soares, A.d.S.; da Costa, R.M.; et al. Braille Text Entry on Smartphones: A Systematic Review of the Literature. In Proceedings of the 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), Atlanta, GA, USA, 10–14 June 2016.
16. Shahid, H.; Ali Shah, M.; Dar, B.K.; Fizzah, F. A Review of Smartphone’s Text Entry for Visually Impaired. In Proceedings of the 2018 24th International Conference on Automation and Computing (ICAC), Newcastle upon Tyne, UK, 6–7 September 2018; pp. 1–7.
17. Shokat, S.; Riaz, R.; Rizvi, S.S.; Khan, K.; Riaz, F.; Kwon, S.J. Analysis and Evaluation of Braille to Text Conversion Methods. Mob. Inf. Syst. 2020, 2020, 1–14.
18. Xiao, Y.; Watson, M. Guidance on Conducting a Systematic Literature Review. J. Plan. Educ. Res. 2019, 39, 93–112.
19. Mengist, W.; Soromessa, T.; Legese, G. Method for Conducting Systematic Literature Review and Meta-Analysis for Environmental Science Research. MethodsX 2020, 7, 100777.
20. Torres-Carrión, P.V.; González-González, C.S.; Aciar, S.; Rodríguez-Morales, G. Methodology for Systematic Literature Review Applied to Engineering and Education. In Proceedings of the 2018 IEEE Global Engineering Education Conference (EDUCON), Santa Cruz de Tenerife, Spain, 17–20 April 2018; pp. 1364–1373.
21. Lame, G. Systematic Literature Reviews: An Introduction. Proc. Des. Soc. Int. Conf. Eng. Des. 2019, 1, 1633–1642.
22. Nightingale, A. A Guide to Systematic Literature Reviews. Surgery (Oxford) 2009, 27, 381–384.
23. Okoli, C. A Guide to Conducting a Standalone Systematic Literature Review. Commun. Assoc. Inf. Syst. 2015, 37.
24. Snyder, H. Literature Review as a Research Methodology: An Overview and Guidelines. J. Bus. Res. 2019, 104, 333–339.
25. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.A.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. PLoS Med. 2009, 6, e1000100.
26. PRISMA Statement Checklist. Available online: https://prisma-statement.org/PRISMAStatement/Checklist (accessed on 13 February 2023).
27. Tranfield, D.; Denyer, D.; Smart, P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. Br. J. Manag. 2003, 14, 207–222.
28. Gusenbauer, M. Google Scholar to Overshadow Them All? Comparing the Sizes of 12 Academic Search Engines and Bibliographic Databases. Scientometrics 2019, 118, 177–214.
29. Canalys Newsroom-Majority of Smart Phones Now Have Touch Screens. Available online: https://www.canalys.com/newsroom/majority-smart-phones-now-have-touch-screens (accessed on 13 February 2023).
30. Walker, G. Fundamentals of Projected-Capacitive Touch Technology. Available online: https://www.walkermobile.com/Touch_Technologies_Tutorial_Latest_Version.pdf (accessed on 13 February 2023).
31. ResearchRabbit. Available online: https://www.researchrabbit.ai (accessed on 13 February 2023).
32. Alhussaini, H.; Ludi, S.; Leone, J. An Evaluation of AccessBraille: A Tablet-Based Braille Keyboard for Individuals with Visual Impairments. In HCI International 2015—Posters’ Extended Abstracts; Stephanidis, C., Ed.; Springer International Publishing: Cham, Switzerland, 2015; Volume 529, pp. 369–374.
33. Southern, C.; Clawson, J.; Frey, B.; Abowd, G.; Romero, M. An Evaluation of BrailleTouch: Mobile Touchscreen Text Entry for the Visually Impaired. In Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services—MobileHCI ’12, San Francisco, CA, USA, 21–24 September 2012; ACM Press: San Francisco, CA, USA, 2012; p. 317.
34. Luna, M.M.; de M. N. Soares, F.A.A.; Nascimento, H.A.D.; Quigley, A. Braille Text Entry on Smartwatches: An Evaluation of Methods for Composing the Braille Cell. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Valencia, Spain, 18–21 June 2019; ACM: Valencia, Spain, 2019; pp. 1–6.
35. Luna, M.M.; Nascimento, H.A.D.; Quigley, A.; Soares, F. Text Entry for the Blind on Smartwatches: A Study of Braille Code Input Methods for a Novel Device. Univers. Access Inf. Soc. 2022.
36. Mattheiss, E.; Regal, G.; Schrammel, J.; Garschall, M.; Tscheligi, M. Dots and Letters: Accessible Braille-Based Text Input for Visually Impaired People on Mobile Touchscreen Devices. In Computers Helping People with Special Needs: 14th International Conference, ICCHP 2014, Paris, France, July 9–11, 2014, Proceedings, Part I 14; Springer: Berlin/Heidelberg, Germany, 2014.
37. Alnfiai, M.; Sampalli, S. An Evaluation of SingleTapBraille Keyboard: A Text Entry Method That Utilizes Braille Patterns on Touchscreen Devices. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, Reno, NV, USA, 23–26 October 2016; ACM: Reno, NV, USA, 2016; pp. 161–169.
38. Lottridge, D.; Yoon, C.; Burton, D.; Wang, C.; Kaye, J. Ally: Understanding Text Messaging to Build a Better Onscreen Keyboard for Blind People. ACM Trans. Access. Comput. 2022, 15, 3533707.
39. Billah, S.M.; Ko, Y.J.; Ashok, V.; Bi, X.; Ramakrishnan, I. Accessible Gesture Typing for Non-Visual Text Entry on Smartphones. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019. CHI ’19. pp. 1–12.
40. Heni, S.; Abdallah, W.; Archambault, D.; Uzan, G.; Bouhlel, M.S. An Empirical Evaluation of MoonTouch: A Soft Keyboard for Visually Impaired People. In Computers Helping People with Special Needs: 15th International Conference, ICCHP 2016, Linz, Austria, July 13–15, 2016, Proceedings, Part II 15; Springer: Berlin/Heidelberg, Germany, 2016.
41. Goldberg, D.; Richardson, C. Touch-Typing with a Stylus. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands, 24–29 April 1993; Association for Computing Machinery: New York, NY, USA, 1993. CHI ’93. pp. 80–87.
42. Wu, Z.; Yu, C.; Xu, X.; Wei, T.; Zou, T.; Wang, R.; Shi, Y. LightWrite: Teach Handwriting to The Visually Impaired with A Smartphone. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. CHI ’21. pp. 1–15.
43. Anu Bharath, P.; Jadhav, C.; Ahire, S.; Joshi, M.; Ahirwar, R.; Joshi, A. Performance of Accessible Gesture-Based Indic Keyboard. In Proceedings of the Human-Computer Interaction—INTERACT 2017, Mumbai, India, 25–29 September 2017; Bernhaupt, R., Dalvi, G., Joshi, A.K., Balkrishan, D., O’Neill, J., Winckler, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017. Lecture Notes in Computer Science. pp. 205–220.
44. Buzzi, M.C.; Buzzi, M.; Leporini, B.; Trujillo, A. Designing a Text Entry Multimodal Keypad for Blind Users of Touchscreen Mobile Phones. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility—ASSETS ’14, Rochester, NY, USA, 20–22 October 2014; ACM Press: Rochester, NY, USA, 2014; pp. 131–136.
45. Gaines, D. Exploring an Ambiguous Technique for Eyes-Free Mobile Text Entry. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, Galway, Ireland, 22–24 October 2018; Association for Computing Machinery: New York, NY, USA, 2018. ASSETS ’18. pp. 471–473.
46. Lai, J.; Zhang, D.; Wang, S.; Kilic, I.D.Y.; Zhou, L. ThumbStroke: A Virtual Keyboard in Support of Sight-Free and One-Handed Text Entry on Touchscreen Mobile Devices. ACM Trans. Manag. Inf. Syst. 2019, 10, 1–19.
47. Rakhmetulla, G.; Arif, A.S. Senorita: A Chorded Keyboard for Sighted, Low Vision, and Blind Mobile Users. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: Honolulu, HI, USA, 2020; pp. 1–13.
48. Raynal, M.; Roussille, P. DUCK: A DeDUCtive Soft Keyboard for Visually Impaired Users. In Harnessing the Power of Technology to Improve Lives; IOS Press: Amsterdam, The Netherlands, 2017; pp. 902–909.
49. Samanta, D.; Chakraborty, T. VectorEntry: Text Entry Mechanism Using Handheld Touch-Enabled Mobile Devices for People with Visual Impairments. ACM Trans. Access. Comput. 2020, 13, 1–29.
50. Shi, W.; Yu, C.; Fan, S.; Wang, F.; Wang, T.; Yi, X.; Bi, X.; Shi, Y. VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; ACM: Glasgow, UK, 2019; pp. 1–12.
51. Alnfiai, M.; Sampalli, S. SingleTapBraille: Developing a Text Entry Method Based on Braille Patterns Using a Single Tap. Procedia Comput. Sci. 2016, 94, 248–255.
52. Alnfiai, M.; Sampali, S. An Evaluation of the BrailleEnter Keyboard: An Input Method Based on Braille Patterns for Touchscreen Devices. In Proceedings of the 2017 International Conference on Computer and Applications (ICCA), Doha, United Arab Emirates, 6–7 September 2017; pp. 107–119.
53. Dobosz, K.; Szuścik, M. OneHandBraille: An Alternative Virtual Keyboard for Blind People. In Man-Machine Interactions 5; Gruca, A., Czachórski, T., Harezlak, K., Kozielski, S., Piotrowska, A., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 659, pp. 62–71.
54. Šepić, B.; Ghanem, A.; Vogel, S. BrailleEasy: One-handed Braille Keyboard for Smartphones. Assist. Technol. 2015, 1030–1035.
55. Façanha, A.R.; Viana, W.; Pequeno, M.C.; Campos, M.d.B.; Sánchez, J. Touchscreen Mobile Phones Virtual Keyboarding for People with Visual Disabilities. In Human-Computer Interaction. Applications and Services: 16th International Conference, HCI International 2014, Heraklion, Crete, Greece, June 22–27, 2014, Proceedings, Part III 16; Springer: Berlin/Heidelberg, Germany, 2014.
56. Li, M.; Fan, M.; Truong, K.N. BrailleSketch: A Gesture-based Text Input Method for People with Visual Impairments. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 20 October–1 November 2017; ACM: Baltimore, MD, USA, 2017; pp. 12–21.
57. Zhang, J.; Zeng, X. Multi-Touch Gesture Recognition of Braille Input Based on Petri Net and RBF Net. Multimed. Tools Appl. 2022, 81, 19395–19413.
58. Saffer, D. Designing for Interaction: Creating Innovative Applications and Devices; New Riders: Indianapolis, IN, USA, 2010.
59. Oulasvirta, A. Optimizing User Interfaces for Human Performance. In Proceedings of the Intelligent Human Computer Interaction, Paris, France, 11–13 December 2017; Horain, P., Achard, C., Mallem, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017. Lecture Notes in Computer Science. pp. 3–7.
60. Dunlop, M.; Levine, J. Multidimensional Pareto Optimization of Touchscreen Keyboards for Speed, Familiarity and Improved Spell Checking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012. CHI ’12. pp. 2669–2678.
61. Feit, A.M. Computational Design of Input Methods. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; Association for Computing Machinery: New York, NY, USA, 2017. CHI EA ’17. pp. 274–279.
62. van Turnhout, K.; Bennis, A.; Craenmehr, S.; Holwerda, R.; Jacobs, M.; Niels, R.; Zaad, L.; Hoppenbrouwers, S.; Lenior, D.; Bakker, R. Design Patterns for Mixed-Method Research in HCI. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, Helsinki, Finland, 26–30 October 2014; Association for Computing Machinery: New York, NY, USA, 2014. NordiCHI ’14. pp. 361–370.
63. Mattheiss, E.; Regal, G.; Schrammel, J.; Garschall, M.; Tscheligi, M. EdgeBraille: Braille-based Text Input for Touch Devices. J. Assist. Technol. 2015, 9, 147–158.
64. Vertanen, K.; Kristensson, P.O. Complementing Text Entry Evaluations with a Composition Task. ACM Trans. Comput.-Hum. Interact. 2014, 21, 1–33.
65. MacKenzie, I.S.; Soukoreff, R.W. Phrase Sets for Evaluating Text Entry Techniques. In Proceedings of the CHI ’03 Extended Abstracts on Human Factors in Computing Systems, Ft. Lauderdale, FL, USA, 5–10 April 2003; Association for Computing Machinery: New York, NY, USA, 2003. CHI EA ’03. pp. 754–755.
66. Yi, X.; Yu, C.; Shi, W.; Bi, X.; Shi, Y. Word Clarity as a Metric in Sampling Keyboard Test Sets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; Association for Computing Machinery: New York, NY, USA, 2017. CHI ’17. pp. 4216–4228.
67. Soukoreff, R.W.; MacKenzie, I.S. Recent Developments in Text-Entry Error Rate Measurement. In Proceedings of the CHI ’04 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004; Association for Computing Machinery: New York, NY, USA, 2004. CHI EA ’04. pp. 1425–1428.
68. Arif, A.S.; Stuerzlinger, W. Analysis of Text Entry Performance Metrics. In Proceedings of the 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Toronto, ON, Canada, 26–27 September 2009; pp. 100–105.
69. MacKenzie, I.S.; Soukoreff, R.W. Text Entry for Mobile Computing: Models and Methods, Theory and Practice. Hum. Comput. Interact. 2002, 17, 147–198.
70. MacKenzie, I.S.; Tanaka-Ishii, K. Text Entry Systems: Mobility, Accessibility, Universality; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2007.
71. Lee, D.; Kim, J.; Oakley, I. FingerText: Exploring and Optimizing Performance for Wearable, Mobile and One-Handed Typing. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. CHI ’21. pp. 1–15.
72. Streli, P.; Jiang, J.; Fender, A.R.; Meier, M.; Romat, H.; Holz, C. TapType: Ten-finger Text Entry on Everyday Surfaces via Bayesian Inference. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; Association for Computing Machinery: New York, NY, USA, 2022. CHI ’22. pp. 1–16.
73. Wong, P.C.; Zhu, K.; Fu, H. FingerT9: Leveraging Thumb-to-finger Interaction for Same-side-hand Text Entry on Smartwatches. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018. CHI ’18. pp. 1–10.
74. Cui, W.; Zhu, S.; Li, Z.; Xu, Z.; Yang, X.D.; Ramakrishnan, I.; Bi, X. BackSwipe: Back-of-device Word-Gesture Interaction on Smartphones. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. CHI ’21. pp. 1–12.
75. Dobosz, K.; Pindel, M. Increasing the Efficiency of Text Input in the 8pen Method. In Proceedings of the Computers Helping People with Special Needs, Virtual, 9–11 September 2020; Miesenberger, K., Manduchi, R., Covarrubias Rodriguez, M., Peňáz, P., Eds.; Springer International Publishing: Cham, Switzerland, 2020. Lecture Notes in Computer Science. pp. 355–362.
76. Xu, Z.; Meng, Y.; Bi, X.; Yang, X.D. Phrase-Gesture Typing on Smartphones. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; Association for Computing Machinery: New York, NY, USA, 2022. UIST ’22. pp. 1–11.
77. Ye, L.; Sandnes, F.E.; MacKenzie, I.S. QB-Gest: Qwerty Bimanual Gestural Input for Eyes-Free Smartphone Text Input. In Proceedings of the Universal Access in Human-Computer Interaction, Design Approaches and Supporting Technologies, Copenhagen, Denmark, 19–24 July 2020; Antona, M., Stephanidis, C., Eds.; Springer International Publishing: Cham, Switzerland, 2020. Lecture Notes in Computer Science. pp. 223–242.
78. Zhong, M.; Yu, C.; Wang, Q.; Xu, X.; Shi, Y. ForceBoard: Subtle Text Entry Leveraging Pressure. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018. CHI ’18. pp. 1–10.
79. Banovic, N.; Sethapakdi, T.; Hari, Y.; Dey, A.K.; Mankoff, J. The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, Taipei, Taiwan, 1–4 October 2019; Association for Computing Machinery: New York, NY, USA, 2019. MobileHCI ’19. pp. 1–12.
80. Cui, W.; Zhu, S.; Zhang, M.R.; Schwartz, H.A.; Wobbrock, J.O.; Bi, X. JustCorrect: Intelligent Post Hoc Text Correction Techniques on Smartphones. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, Virtual Event, 20–23 October 2020; Association for Computing Machinery: New York, NY, USA, 2020. UIST ’20. pp. 487–499.
81. Li, T.; Quinn, P.; Zhai, S. C-PAK: Correcting and Completing Variable-length Prefix-based Abbreviated Keystrokes. ACM Trans. Comput.-Hum. Interact. 2022.
82. Yadav, A.; Arif, A.S. Effects of Keyboard Background on Mobile Text Entry. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, Cairo, Egypt, 25–28 November 2018; Association for Computing Machinery: New York, NY, USA, 2018. MUM ’18. pp. 109–114.
83. Zhang, M.R.; Wen, H.; Cui, W.; Zhu, S.; Andrew Schwartz, H.; Bi, X.; Wobbrock, J.O. AI-Driven Intelligent Text Correction Techniques for Mobile Text Entry. In Artificial Intelligence for Human Computer Interaction: A Modern Approach; Li, Y., Hilliges, O., Eds.; Human–Computer Interaction Series; Springer International Publishing: Cham, Switzerland, 2021; pp. 131–168.
84. Zhang, M.R.; Wen, H.; Wobbrock, J.O. Type, Then Correct: Intelligent Text Correction Techniques for Mobile Text Entry Using Neural Networks. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, New Orleans, LA, USA, 20–23 October 2019; Association for Computing Machinery: New York, NY, USA, 2019. UIST ’19. pp. 843–855.
85. Go, K.; Kikawa, M.; Kinoshita, Y.; Mao, X. Eyes-Free Text Entry with EdgeWrite Alphabets for Round-Face Smartwatches. In Proceedings of the 2019 International Conference on Cyberworlds (CW), Kyoto, Japan, 2–4 October 2019; pp. 183–186.
86. Gong, J.; Xu, Z.; Guo, Q.; Seyed, T.; Chen, X.A.; Bi, X.; Yang, X.D. WrisText: One-handed Text Entry on Smartwatch Using Wrist Gestures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018. CHI ’18. pp. 1–14.
87. Lee, L.H.; Yeung, N.Y.; Braud, T.; Li, T.; Su, X.; Hui, P. Force9: Force-assisted Miniature Keyboard on Smart Wearables. In Proceedings of the 2020 International Conference on Multimodal Interaction, Virtual Event, 25–29 October 2020; Association for Computing Machinery: New York, NY, USA, 2020. ICMI ’20. pp. 232–241.
88. Rakhmetulla, G.; Arif, A.S. SwipeRing: Gesture Typing on Smartwatches Using a Segmented Qwerty Around the Bezel. In Proceedings of the Graphics Interface 2021, Virtual Event, 27–28 May 2021.
89. Vertanen, K.; Fletcher, C.; Gaines, D.; Gould, J.; Kristensson, P.O. The Impact of Word, Multiple Word, and Sentence Input on Virtual Keyboard Decoding Performance. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018. CHI ’18. pp. 1–12.
90. Vertanen, K.; Gaines, D.; Fletcher, C.; Stanage, A.M.; Watling, R.; Kristensson, P.O. VelociWatch: Designing and Evaluating a Virtual Keyboard for the Input of Challenging Text. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019. CHI ’19. pp. 1–14.
91. De Rosa, M.; Fuccella, V.; Costagliola, G.; Adinolfi, G.; Ciampi, G.; Corsuto, A.; Di Sapia, D. T18: An Ambiguous Keyboard Layout for Smartwatches. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; pp. 1–4.
92. Jang, R.; Jung, C.; Mohaisen, D.; Lee, K.; Nyang, D. A One-Page Text Entry Method Optimized for Rectangle Smartwatches. IEEE Trans. Mob. Comput. 2022, 21, 3443–3454.
93. Min, K.B.; Seo, J. Efficient Typing on Ultrasmall Touch Screens with In Situ Decoder and Visual Feedback. IEEE Access 2021, 9, 75187–75201.
94. Xu, Z.; Wong, P.C.; Gong, J.; Wu, T.Y.; Nittala, A.S.; Bi, X.; Steimle, J.; Fu, H.; Zhu, K.; Yang, X.D. TipText: Eyes-Free Text Entry on a Fingertip Keyboard. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, New Orleans, LA, USA, 20–23 October 2019; Association for Computing Machinery: New York, NY, USA, 2019. UIST ’19. pp. 883–899.
95. Jones, P.R.; Somoskeöy, T.; Chow-Wing-Bom, H.; Crabb, D.P. Seeing Other Perspectives: Evaluating the Use of Virtual and Augmented Reality to Simulate Visual Impairments (OpenVisSim). npj Digit. Med. 2020, 3, 1–9.
96. Caine, K. Local Standards for Sample Size at CHI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; ACM: New York, NY, USA, 2016. CHI ’16. pp. 981–992.
97. Reyal, S.; Zhai, S.; Kristensson, P.O. Performance and User Experience of Touchscreen and Gesture Keyboards in a Lab Setting and in the Wild. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015. CHI ’15. pp. 679–688.
98. Gaines, D.; Kristensson, P.O.; Vertanen, K. Enhancing the Composition Task in Text Entry Studies: Eliciting Difficult Text and Improving Error Rate Calculation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. CHI ’21. pp. 1–8.
99. Nicol, E.; Komninos, A.; Dunlop, M.D. A Participatory Design and Formal Study Investigation into Mobile Text Entry for Older Adults. Int. J. Mob. Hum. Comput. Interact. (IJMHCI) 2016, 8, 20–46.
100. Franco-Salvador, M.; Leiva, L.A. Multilingual Phrase Sampling for Text Entry Evaluations. Int. J. Hum.-Comput. Stud. 2018, 113, 15–31.
101. Leiva, L.A.; Sanchis-Trilles, G. Representatively Memorable: Sampling the Right Phrase Set to Get the Text Entry Experiment Right. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; Association for Computing Machinery: Toronto, ON, Canada, 2014. CHI ’14. pp. 1709–1712.
102. Wyrich, M.; Preikschat, A.; Graziotin, D.; Wagner, S. The Mind Is a Powerful Place: How Showing Code Comprehensibility Metrics Influences Code Understanding. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), Madrid, Spain, 22–30 May 2021; pp. 512–523.
103. Meyer, J.T.; Gassert, R.; Lambercy, O. An Analysis of Usability Evaluation Practices and Contexts of Use in Wearable Robotics. J. Neuroeng. Rehabil. 2021, 18, 170.
Figure 1. The Braille cell (left) and the pattern representing the letter t (right).
Figure 2. Examples of Braille input interaction styles using touchscreens.
Figure 3. Research selection process.
Figure 4. Design approaches for prototypes aimed at VIBPs and non-impaired persons.
Figure 5. Participation levels in experiments reported by studies.
Figure 6. Participant group diversity in terms of age and gender.
Figure 7. Evaluation environment settings for prototypes aimed at VIBPs and non-impaired persons.
Figure 8. Evaluation design aspects for prototypes aimed at VIBPs and non-impaired persons.
Figure 9. Proportion of studies using WPM or other text entry speed metrics.
Figure 10. Proportion of studies using each of the error metrics.
Table 1. PRISMA checklist elements omitted from our review due to not being applicable. Items without explicitly mentioned subitems were omitted entirely.

Section | Omitted Item | Omitted Subitem(s)
Abstract | 2. Abstract | -
Methods | 10. Data items | 10a
Methods | 11. Study risk of bias assessment | -
Methods | 12. Effect measures | -
Methods | 13. Synthesis methods | 13a–13f
Methods | 14. Reporting bias assessment | -
Methods | 15. Certainty assessment | -
Results | 16. Study selection | 16b
Results | 18. Risk of bias in studies | -
Results | 19. Results of individual studies | -
Results | 20. Results of syntheses | 20a–20d
Results | 21. Reporting biases | -
Results | 22. Certainty of evidence | -
Other Information | 24. Registration and protocol | 24a–24c
Table 3. Design approach used for text entry methods for VIBPs. The number of papers per approach is reported in the final row of the table.

User-Led | Designer-Led | Computation-Led | Combination
[44,55] | [32,33,34,35,36,37,40,43,45,46,48,50,51,52,53,54,56,57,63] | - | [38,39,47]
2 | 19 | 0 | 3
Table 4. Metrics used during the main evaluation of VIBP prototypes. The number of papers using each metric is reported in the final column of the table.

Category | Metric | Papers | Count
Speed | WPM | [33,35,36,37,38,39,40,45,46,47,49,50,52,53,56] | 15
Speed | Other | [38,43,47,48,54,55] | 6
Errors | ER | [47] | 1
Errors | CER | [37,38,43,45,46,48,49,50,52,56] | 10
Errors | NCER | [37,38,43,46,50,52,56] | 7
Errors | TER | [33,37,38,52,56] | 5
Errors | MSD-ER | [35,36,37,38,52,53] | 6
Errors | KSPC | [37,38,52] | 3
Errors | GPC | [32,56] | 2
Errors | Other | [32,39,45,47,50,55] | 6
Table 5. Input method characteristics (studies with non-impaired persons).

Publication | Target Device 1 | Prototype Type | Primary Interaction 2
Lee et al. [71] | ED | Virtual keyboard | ST
Streli et al. [72] | ED | Virtual keyboard | ST
Wong et al. [73] | ED | Virtual keyboard | ST
Cui et al. [74] | SP | Gestural entry | GS
Dobosz and Pindel [75] | SP | Gestural entry | GS
Xu et al. [76] | SP | Gestural entry | GS
Ye et al. [77] | SP | Gestural entry | GS
Zhong et al. [78] | SP | Gestural entry | GS
Banovic et al. [79] | SP | Input support | ST
Cui et al. [80] | SP | Input support | ST
Li et al. [81] | SP | Input support | ST
Yadav and Arif [82] | SP | Input support | ST
Zhang et al. [83] | SP | Input support | GS
Zhang et al. [84] | SP | Input support | GS
Go et al. [85] | SW | Gestural entry | GS
Gong et al. [86] | SW | Gestural entry | GS
Lee et al. [87] | SW | Gestural entry | GS
Rakhmetulla and Arif [88] | SW | Gestural entry | GS
Vertanen et al. [89] | SW | Input support | ST
Vertanen et al. [90] | SW | Input support | ST
De Rosa et al. [91] | SW | Virtual keyboard | ST
Jang et al. [92] | SW | Virtual keyboard | ST
Min and Seo [93] | SW | Virtual keyboard | ST
Xu et al. [94] | SW | Virtual keyboard | ST
1 SP: smartphone; SW: smartwatch; ED: external device. 2 ST: single-tap; GS: gesture.