Article

Developing an AI-Based Learning System for L2 Learners’ Authentic and Ubiquitous Learning in English Language

1 Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong SAR 999077, China
2 Department of Linguistics and Modern Language Studies, The Education University of Hong Kong, Hong Kong SAR 999077, China
3 National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(23), 15527; https://doi.org/10.3390/su142315527
Submission received: 2 October 2022 / Revised: 13 November 2022 / Accepted: 16 November 2022 / Published: 22 November 2022
(This article belongs to the Special Issue Language Education in the Age of AI and Emerging Technologies)

Abstract

Motivated by the rapid development and application of artificial intelligence (AI) technologies in education and the needs of language learners during the COVID-19 pandemic, an AI-enabled English language learning (AIELL) system featuring authentic and ubiquitous learning for the acquisition of vocabulary and grammar in English as a second language (L2) was developed. The aim of this study was to present the developmental process and methods used to design, develop, evaluate, and validate the AIELL system and to distil key design features for English learning in authentic contexts. Twenty participants took part in the tests, three of whom were also interviewed. Mixed research methods, including a demonstration test, a usability test, and an interview, were employed to collect and analyse the data. The quantitative and qualitative data collected and analysed affirmed the validity and usability of the design and helped identify areas for further improvement of the desired features. This study informs the integration of AI into language teaching and learning guided by mobile learning principles.

1. Introduction

The extensive application of artificial intelligence (AI) in the field of education has been facilitated by the rapid development of AI [1,2,3]. The promotion of AI in education is driven by the current trend of Education 4.0, which refers to technology-based teaching and learning methods shaped by the automation and digitalization of industries [3]. AI is a broad field of computer science that concerns the building of smart machines capable of performing tasks that typically require human intelligence [4]. AI in Education (AIED) has gradually emerged and been established as a promising branch of research in educational technology [5]. AI technologies have been integrated into various learning environments, facilitating the development of subject knowledge and critical skills across various educational levels [6,7,8,9,10].
In the discipline of language education, AI techniques such as natural language processing (i.e., textual AI) and machine learning (i.e., visual AI) have been leveraged to augment learning across settings and scenarios [11,12,13,14]. Automated writing evaluation applications, for example, have been designed and implemented to help both L1 and L2 learners improve their writing via the mechanism of corrective feedback. These practices have achieved significantly positive results in terms of enhancing learners’ writing performance and motivation [15]. Similarly, in the context of language skill development, AI-enabled chatbots have been used as conversational agents to help learners read and speak in a flexible manner [16,17]. This design has been verified to effectively augment learner confidence and engagement [18]. AI-enabled, personalized, adaptive learning systems have been constructed to facilitate the learning of German as a foreign language [19].
During the COVID-19 pandemic, there has been a pressing need for investigations that can unlock the potential of AI-enabled tools for realising authentic and ubiquitous language learning in areas where there is a lack of school-based language learning opportunities due to the constant shutdowns of schools and classes [20,21]. In addition to the progress made in AI overall, efforts to embed AI in mobile technologies have yielded particularly exciting results [22,23,24,25]. However, the integration of AI techniques into designs for language education in different learning contexts has remained limited compared with the integration of pre-defined algorithms [26]. To address the current language learning situation, there is a strong demand to explore new AI applications that are effective in informal contexts. This challenge, if addressed appropriately, can bring about additional benefits, such as reducing the degree of psychological pressure experienced by learners, promoting engagement, increasing motivation, improving social interaction, and increasing learners’ linguistic knowledge and skills [27,28,29]. Moreover, by using adaptive technologies, learners can choose their own learning content and methods and receive immediate feedback; in this way, learning becomes more student-centred and customised [30].
Motivated by both the prospects and the challenges of applying AI to language education in various learning contexts, we developed an AI-based English language learning (AIELL) system supported by mobile technology to empower authentic, ubiquitous learning for L2 learners in the process of developing their English vocabulary and grammar. AIELL differs from most established AI applications for L2 English learners, which either provide only a single learning function or support comprehensive learning modules tied to specific courses. The unique design feature of the AIELL system is that it engages students in building their vocabulary by identifying real-life objects in authentic learning contexts, supported by image recognition techniques and mobile technology, while developing students' vocabulary-related grammar skills using AI detection. The result is an engaging, effective medium for students' self-inquiry about real-life objects in authentic and ubiquitous learning contexts that motivates them to memorise vocabulary and develop their grammar skills. AI-supported language learning is especially important for lower-grade learners, who often experience low motivation and high anxiety when using the "repeat-and-memorise" method of vocabulary learning [22,23]. The design also anticipates the integration of authentic learning and mobile learning-related pedagogical principles (i.e., ubiquitous learning and seamless learning) into AI-based English learning, in response to the call to deeply engage students with educational theories through AI-supported language learning [31,32,33].
The following section reviews relevant studies to provide theoretical grounding for our study. The research purposes and questions that guided the research design and methods are then presented, followed by an introduction to the system's design and features. The final sections describe the methods used to evaluate the system's usability and report the research findings, discussion, conclusions, and implications.

2. Literature Review

2.1. Technology-Supported Language Learning

Technology-supported learning is defined as the incorporation of technology into learning environments to enhance knowledge, skills, and attitudes [24,25]. Many meta-analyses have shown that technology-supported innovative pedagogical approaches, such as blended learning, seamless learning, the flipped classroom, mobile learning, and collaborative learning, have had positive impacts on education [20,22,26,34,35]. In language education, advanced technologies and other educational resources have been incorporated into language teaching and learning to help learners establish their vocabulary and grammar abilities [24,36,37]. In addition to linguistic knowledge, technological enhancements can contribute to the development of the four basic language skills of listening, speaking, reading, and writing, the acquisition of meta-linguistic knowledge, motivation, and the cultivation of critical 21st-century skills [38,39,40]. In technology-enhanced language learning scenarios, the learner, the software, and the learning environment interact dynamically, constructing more diversified, more integrated, and closer relationships [15]. Recognising the potential of AI, educational researchers and practitioners have deemed the integration of technology into language education, schools, and institutions to be reasonable, inevitable, and important [41]. Further efforts are expected to explore and identify effective innovations to achieve this integration.

2.2. Language Learning in Authentic and Ubiquitous Contexts

Authentic and ubiquitous learning contexts allow learners to interact with real objects and gain real-life experiences in the target language, thus increasing their language mastery in the real world [42,43]. Familiarity with the context in which the target language point is learned and applied has many benefits for language learning. It helps learners to strengthen the connection between the language and their background knowledge, deepen their understanding of the language as it is used, and better retain their learning thanks to contextual cueing effects [44,45]. The design of authentic, real-world learning activities should take into consideration both the learners' familiarity with the learning materials and their prior knowledge [46,47]. The input of content that is familiar to learners contributes to their output, as this familiarity can help reduce their anxiety and increase their learning motivation [48,49]. When learning in unfamiliar contexts or using unfamiliar media tools, students should be provided with learning strategies that stimulate their creativity and relate to actual language practices. Research has validated the effectiveness of integrating context-aware techniques into language education [50], and future designs should consider adopting these empirically validated strategies. For instance, Liu's quasi-experimental study implemented a sensor- and handheld-based, AI-related, AR-supported mobile learning system in English learners' listening and speaking activities. The results showed that the students' English listening and speaking skills improved and that they were engaged in the context-aware system's immersive learning environment [51,52]. Hasnine et al. designed an intelligent ubiquitous learning environment, IUEcosystem, that combined a context-aware mechanism with image-to-context generation and real-time analysis of mobile learning. The system enhanced learning motivation and improved learners' memory and use of English vocabulary based on visual clues and locations [53].

2.3. AI-Enabled Language Learning

AI-enabled e-learning refers to the use of AI technologies in e-learning to provide personalised or adaptive content, guidance, paths, feedback, or interfaces for learning [54]. Combining e-learning tools with AI technologies gives rise to a diversity of effective educational tools [9], including but not limited to intelligent feedback systems [55], AI chatbots [56], AI tutors [6], deep-learning performance evaluations [57], and image/gesture recognition [58]. Based on a comprehensive review of AI technologies, Sarker (2022) concluded that AI-based models could enhance the intelligence and functionality of real-world applications and that AI-based solutions could be applied more extensively to real-world applications for language learning.
In language learning, a recent bibliometric analysis advocated embedding AI-enabled functions, such as automated writing evaluation, intelligent tutoring systems for reading and writing, automated error detection, and writing assessment and assistance, to augment language learning [15,23]. Using big data technology, AI can help assess and monitor a learner's cognitive, emotional, and behavioural status and enable real-time feedback. When AI is incorporated into collaborative learning activities, learning becomes more effective [59]. For example, Hwang et al. (2019) combined recognition technologies (object and speech recognition mechanisms that recognise learners' motions and emotions), pedagogical mechanisms, and engaging activity designs to facilitate the speaking accuracy and motivation of learners of English as a foreign language (EFL); such activity designs significantly improved students' speaking performance and learning motivation [60]. Using AI-powered writing assistants in the English writing process, students' writing ideas can be detected and translated, thus improving their writing [61]. El Shazly (2021) integrated AI chatbots into the speaking practices of EFL students. The findings of that study suggested that AI chatbot technology could significantly improve linguistic output gains and had great potential to promote interaction and oral communication [62].

3. Research Purpose and Research Questions

In this study, we designed, developed, and evaluated the AIELL system to facilitate L2 learners’ acquisition of English vocabulary and grammar. In this paper, we document and discuss how the system was developed and evaluated. The following research questions guided the study:
(1)
What is the AIELL system’s developmental process for facilitating students’ acquisition of English vocabulary and grammar?
(2)
What are the AIELL system’s key features for supporting students’ authentic and ubiquitous learning in English?
(3)
What is the correlation between design features and student engagement in the mobile learning environment?

4. Design and Development of the AIELL System

4.1. Design Process

The AIELL system is a web-based English learning platform. Following the general standards and processes of website construction, its development trajectory mainly includes processes of general planning, gathering content, planning structure, design, testing, system implementation, re-testing, and going live (as shown in Figure 1). The design prioritises the orientation attributes and interactive experience of website users [63].

4.2. Design Principles and Key Technologies

The main tasks involved in the design of the AIELL system included website construction, LAN switching, and local data storage and transfer. Three development technologies were utilised to ensure the smooth implementation and operation of the system.

4.2.1. PyCharm Flask for System Construction

PyCharm is a popular Python IDE that offers a suite of tools to improve development productivity in Python. PyCharm offers advanced features for professional web development, has a simple interface for writing code and running operations, and supports the Flask framework [64]. The Flask micro-framework is used for the back-end development of the server. Flask was selected for the following reasons: (1) The main development language used for the AIELL system's machine learning components of image recognition and text syntax was also Python, so using Flask made development and maintenance easy. (2) The Flask framework is flexible, lightweight, and efficient. It can support multiple teams working in parallel, each focusing on a specific function. It is built on the open-source Werkzeug and Jinja 2 libraries and provides a built-in development server, unit testing support, RESTful request dispatching, and secure cookies, all of which are easy to learn and master [65]. (3) Flask includes the flexible Jinja 2 template engine, which improves the front-end code reuse rate, further enhancing development efficiency and facilitating maintenance [66,67].
Figure 2 presents the working process involved in transporting and translating a user request into the application and returning the response to the client end, as structured by the Flask framework [68]. Communication between the web server and the application is mediated by, and must abide by, a set of communication standards specified by the Web Server Gateway Interface (WSGI). The Werkzeug library in Flask is a quality WSGI library [69]. In a request-response cycle, the web server converts the user's request upon receipt from HTTP format to a WSGI-specified format and passes it on to the application for processing and response. The major steps involved in the cycle include the following:
  • The user operates the browser and sends the request.
  • The request is forwarded to the corresponding web server.
  • The web server forwards the request to the web application, and the application processes the request.
  • The application returns the result to the web server.
  • The web server converts the response back into HTTP format and sends it to the browser.
  • The browser receives the response and presents it to the user.
Database access, web form validation, and user authentication are not built into the Flask framework; instead, these functions can be added through extensions. Developers can enable them in a manner that considers the specific needs of their projects. This flexibility is not provided in large frameworks [66]. In our context, these additional functions were not required and were thus not developed.
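To make Flask's back-end role concrete, the following is a minimal sketch of a Flask application of this kind; it is not the authors' source code, and the route names, responses, and port are illustrative assumptions.

# Minimal sketch of a Flask back end for a system such as AIELL
# (assumed routes, responses, and port; not the authors' source code).
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # In AIELL, the learning page would be rendered from a Jinja 2 template;
    # a plain string is returned here to keep the sketch self-contained.
    return "<h1>AIELL demo</h1>"

@app.route("/upload", methods=["POST"])
def upload():
    # Receive an image posted from a learner's mobile browser; the
    # recognition step itself is outlined in Section 4.4.2.
    image = request.files.get("image")
    return jsonify({"status": "received",
                    "filename": image.filename if image else None})

if __name__ == "__main__":
    # Binding to 0.0.0.0 makes the site reachable by devices on the same
    # local network (see Section 4.3).
    app.run(host="0.0.0.0", port=5000)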

4.2.2. jQuery of JavaScript

jQuery is a lightweight library for JavaScript. It contains a variety of functions, including HTML element selection, special effects, animations, Ajax asynchronous requests, and utilities [70,71]. jQuery supports additional plug-ins. jQuery is by far the most popular open-source JavaScript library and is free from incompatibility issues across browsers (see Figure 3).

4.2.3. Ajax Asynchronous Request

In this project, the development of a static website was achieved by applying the Flask framework [72]. The Ajax asynchronous request technique is illustrated in Figure 4.
Ajax uses JavaScript technology to send asynchronous requests to the server without refreshing the entire page. This facilitates user interface design by enabling instant responses to user requests without the need to refresh the page [73]. The inclusion of an unambiguous code structure also decreases the burden of site maintenance.

4.3. Configuration of the Designed Environment

The adoption of a website as the major learning platform provides mobility and allows the use of devices with limited memory and older processors. Figure 5 illustrates how the AIELL system is configured and how the components interact. The teacher deploys the website on the local network and sends the URL to students, who then log on from their portable devices on the same network.

4.4. Key Features and Functions of AIELL

4.4.1. The Local Environment with Flexible and Stable Conditions

Anaconda is an open-source Python distribution and package management system that bundles more than 180 scientific packages (e.g., conda and Python) and their dependencies. Anaconda is used to manage multiple projects by placing each project in a separate virtual environment in which the embedded toolkits are independent of each other. Different versions of the Python interpreter and toolkits can be installed, and the environment is free from version conflicts [74]. To construct the local environment for the AIELL system, we used PyCharm (Professional, version 2021.3) and the TensorFlow-GPU environment. The corresponding versions of CUDA and cuDNN were configured accordingly. The system operated in the Windows 10 environment, applying conda (version 5.11.0, Nvidia, New Castle, DE, USA), conda-build (version 3.21.4, Nvidia, New Castle, DE, USA), Python (version 3.8.8, Guido van Rossum, Dutch), and virtual packages (cuda = 11.5 = 0).
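As an illustration only (not part of the authors' reported setup), a short check such as the following can confirm that the TensorFlow-GPU environment inside the Anaconda virtual environment is correctly configured.

# Sketch for verifying the TensorFlow-GPU environment described above
# (illustrative only; not the authors' configuration script).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))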

4.4.2. AI Image Recognition for Capturing Real-Life Objects in Authentic Learning Contexts Supported by Mobile Devices

Keras is a high-level Python neural network framework that has been incorporated into TensorFlow as its default high-level API [75,76,77]. The Keras library provides a variety of deep learning models built and trained for image recognition [78], including the densenet, efficientnet, imagenet_utils, inception_resnet, mobilenet, nasnet, resnet, vgg, and xception modules, and it was adopted in this AIELL project. The AIELL system uses a basic VGG algorithm for image recognition. When learners upload images from authentic contexts on the front end, the back end receives the image information and activates the VGG model for processing and recognition, and the derived results are transported back and displayed on the front end. More advanced models available in Keras can be used to provide more accurate recognition.
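The paper does not reproduce the recognition code; the following minimal sketch, written against the standard Keras applications API, illustrates the kind of VGG-based recognition step described above. The image path and the number of returned labels are illustrative assumptions.

# Sketch of VGG-based image recognition with the Keras applications API
# (image path and top-3 labels are illustrative assumptions).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = VGG16(weights="imagenet")  # VGG16 pre-trained on ImageNet

# Load a photo uploaded from a learner's mobile device (hypothetical path).
img = image.load_img("uploads/photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
# decode_predictions maps class indices to readable labels,
# e.g. [('n02124075', 'Egyptian_cat', 0.87), ...]
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(label, round(float(prob), 3))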
Based on the image recognition function, object recognition is enabled in AIELL. Using the mobile devices available, learners take photos of objects that they intend to identify both inside and outside the classroom; after uploading these photos to the website, they receive the objects’ names and categories in English (see Figure 6). Authentic learning and ubiquitous learning can be realised by relying on these design features [79,80]. The pictures are stored synchronously and locally on the back end.

4.4.3. Automatic Grammar Correction for Sentence Practice with Object-Related Vocabulary

Another AI-enabled English learning function in AIELL is "sentence practice", which leverages automatic grammar correction. The language tool package for Python (version 2.7.1) is integrated as a Python library and can be imported directly in PyCharm. The tool supports spelling and grammar checks based on the local server. When there is an error, instant feedback is given on both the error and the necessary corrections, and these processes are achieved via an Ajax asynchronous request. When learners apply the vocabulary that they learned from the identified objects, their sentences are instantaneously evaluated, and areas that require modification and further improvement are suggested in real time. For example, when a student writes the incorrect sentence "I loves my Egyptian cat!" after learning the word "cat", the grammar checker evaluates the sentence, provides feedback, and highlights the error. On the user interface, the error is highlighted, with a message explaining why the sentence is wrong ("The pronoun 'I' must be used with a non-third-person form of a verb") (see Figure 7).
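As an illustration, assuming the language tool referred to above is the language_tool_python package (whose 2.7.1 release matches the cited version number), a grammar check of this kind might look as follows; this is a sketch, not the AIELL integration code.

# Sketch of a grammar check with the language_tool_python package
# (assumed to be the "language tool" named in the text; not AIELL's code).
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
sentence = "I loves my Egyptian cat!"

matches = tool.check(sentence)
for m in matches:
    # Each match carries the explanation and suggested replacements that
    # the front end can surface as instant feedback.
    print(m.message, "->", m.replacements)

print(tool.correct(sentence))  # e.g. "I love my Egyptian cat!"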

4.5. Workflow of the AIELL System

The workflow of the AIELL learning scenarios and processes is shown in Figure 8.

5. Evaluation Study

5.1. Participants

An evaluation study was performed to validate the feasibility and usability of the AIELL system. The participants were recruited and informed of the purpose and methods of the study via the consent form that they signed. Ultimately, we evaluated 20 undergraduate and postgraduate students (aged 20 to 28 years) from diverse majors (e.g., Artificial Intelligence and Educational Technology, ICT, English Language, Chinese Language) at a local university.

5.2. Data Collection and Analysis

The evaluation session consisted of three phases. In Phase I, the participants conducted a demonstration test and documented the system’s overall performance. Phase II was an online usability test. Phase III was a voluntary, semi-structured interview in which the participants shared their perceptions of and attitudes towards the system.

5.2.1. Student Demonstration Test

The purpose of the student demonstration test was to invite students to try out the system’s basic functions and workflow in an unstructured manner to gather information about their initial experience of the AIELL system. Thus, the method and focus of observation were not specified in advance. As each participant “walked through” the learning scenarios and processes, trying out all of the functions, that participant took notes on how the system performed, with a focus on technical or operational issues (if any). The participants’ demonstration process using real-life scenarios is shown in Figure 9.

5.2.2. Usability Test

To further test the AIELL system's functions for facilitating mobile and ubiquitous English learning, the Nielsen Heuristics evaluation method was adapted. During the usability test, the participants comprehensively evaluated the structure and functions of the system, guided by this common and effective evaluation method [81]. Minor modifications were made to the framework to better fit the current design. Altogether, the test included 21 questions that encompassed the following 10 dimensions: (1) visibility of system status (Visibility); (2) match between the system and the real world (Match); (3) user control and freedom (Control); (4) consistency and adherence to standards (Consistency); (5) error prevention (especially usability-related errors) (Error); (6) recognition (not recall) (Recognition); (7) flexibility and efficiency of use (Flexibility); (8) aesthetics and minimalism of design (Aesthetics); (9) recognition, diagnosis, and correction of errors (Recovery); and (10) help and documentation (Help). Each of these dimensions was evaluated based on one to three questions (Table 1). In addition to these 10 dimensions, we added an 11th dimension that evaluated the aspect of the mobile learning environment (MLE) [82,83]. There were two items for this dimension, one of which was a semi-structured question. A 5-point Likert scale was adopted to elicit the participants' responses to these 23 items.
The data collected in this online usability test were analysed using IBM SPSS Statistics 27 (IBM, New York, NY, USA). The descriptive analysis of the usability test data focused on identifying the system's overall performance on each dimension. In addition, a Pearson correlation analysis was conducted to show how the heuristic dimensions were correlated with the MLE dimension. Paired-samples t-tests were used to identify the differences between the MLE dimension and each of the other dimensions. The test results informed the design of the AIELL system and enhanced its functionality, especially with respect to its mobile learning features.
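The analyses themselves were run in SPSS; purely as an illustration, an equivalent scripted workflow might look like the following Python sketch, in which the file and column names are assumptions.

# Sketch of the usability-test analysis in Python (the authors used IBM
# SPSS Statistics 27; the file and column names here are assumptions).
import pandas as pd
from scipy import stats

# One row per participant, one column per dimension (including MLE).
df = pd.read_csv("usability_scores.csv")

# Pearson correlation between each heuristic dimension and the MLE dimension.
for dim in ["Visibility", "Match", "Control", "Consistency", "Error",
            "Recognition", "Flexibility", "Aesthetics", "Recovery", "Help"]:
    r, p = stats.pearsonr(df[dim], df["MLE"])
    print(f"{dim} vs MLE: r = {r:.3f}, p = {p:.3f}")

# Paired-samples t-test between a heuristic dimension and MLE.
t, p = stats.ttest_rel(df["Visibility"], df["MLE"])
print(f"Visibility vs MLE: t = {t:.3f}, p = {p:.3f}")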

5.2.3. Interview

After completing the usability test, three participants were invited for a voluntary face-to-face interview. The interview lasted approximately 10 min. The participants provided feedback on their perceptions of and attitudes towards the AIELL system and its innovative approach to English learning. The interview session was semi-structured and composed of four questions, with follow-up clarification questions when necessary. The participants’ responses were recorded and coded into four categories. Below are the interview questions.
(1) 
What features or functions did you try during the demonstration test?
(2) 
Do you believe that this experience will help lower-grade students learn English? Why?
(3) 
Do you believe that this experience will make you a better teacher? Why?
(4) 
Do you believe that the AIELL system is useful in the mobile learning environment?

6. Results

6.1. Overall Performance on Usability of AIELL System

As shown in Figure 10, the statistical data analysis showed that all of the items in each dimension examined, on average, scored above 3.0 points. This finding confirmed the overall feasibility and usability of the AIELL system. Four dimensions (i.e., Consistency, Recognition, Flexibility, and Help) had an average score of less than 4.0 and a comparatively high standard deviation, so further improvements should be made to enhance the system’s performance on these dimensions.

6.1.1. Consistency

Under the Consistency dimension, a low score was obtained for the question about whether “common platform standards are followed”. The website development of the AIELL system conformed to standards of functional continuity and consistency and focused on tailoring the front-end and back-end designs to fit the needs and characteristics of lower-grade English learners. We deemed this to be an efficient and effective approach in that learning designs should be in line with a learner’s physical/psychological development and language learning mechanism [84,85]. Areas that required standard specifications in website development, such as content display, access security, information update, interactivity, information search, and backup, were somewhat neglected.

6.1.2. Recognition

Vocabulary learning in AIELL was achieved authentically and ubiquitously by capturing and recognising real-world objects and scenarios, learning their names, categories, and other attributes in English, and applying them in sentences. The system connected a language form to its semantic, visual representation [86]. During the learning process, learners might need to recall information previously obtained and processed to complete subsequent tasks. Therefore, the score for the item under this dimension, which presented the idea that “Users do not need to recall information from one part of a dialogue to another”, was low.

6.1.3. Flexibility

The AIELL system was mainly developed for lower-grade learners. Therefore, it was acceptable that the item about whether “the website developed catered to users at different levels, from novices to experts” resulted in a low score of 3.2 points on average. However, because the creation of a flexible and adaptable environment for interaction has significant educational value in enhancing willingness to learn and fostering a conducive environment for language learning, extensions and enhancements will be planned to improve the system’s future effectiveness and applicability.

6.1.4. Help

The AIELL system was designed to facilitate the instruction of English as an L2. Accordingly, teacher guidance and instructions were expected to accompany learners throughout the learning cycle, and no system-based instructions were provided in the current version. In the future, relevant guides and instructional resources will be provided to facilitate self-directed learning by learners.

6.2. Correlations between Heuristic Dimensions and MLE

According to the Pearson correlation analysis results (Table 2), a positive correlation was identified between MLE and the heuristic dimensions of Visibility (r = 0.325), Match (r = 0.599), Control (r = 0.465), Consistency (r = 0.534), Error (r = 0.635), Recognition (r = 0.481), and Flexibility (r = 0.370). In contrast, the dimensions of Aesthetics (r = −0.233), Recovery (r = −0.147), and Help (r = −0.164) were negatively correlated with MLE. The dimensions of Error (r = 0.635, p < 0.001), Match (r = 0.599, p < 0.001), and Consistency (r = 0.534, p < 0.001) were significantly correlated with MLE. Therefore, in the mobile learning environment, there was evidence that Error was closely related to helping lower-grade learners improve their English vocabulary and grammar skills (r = 0.635, p < 0.001). The next most helpful dimensions were Match (r = 0.599, p < 0.001) and Consistency (r = 0.534, p < 0.001).
In the mobile learning environment, these dimensions were important to help lower-grade learners of English learn and improve their vocabulary and grammar knowledge and skills, and they were incorporated into the AIELL system. The different dimensions had coefficients of different magnitudes, thus contributing to a multifaceted and multidimensional approach to system construction to meet the learning needs of lower-level learners under MLE conditions.
Additionally, we used paired-samples t-tests to examine whether the scores that the participants assigned to the MLE dimension differed from those assigned to the other dimensions of the Nielsen Heuristics. According to the results shown in Table 3, significant differences were observed for each pair except Visibility–MLE. The mean difference between Visibility and MLE was not statistically significant (p > 0.05). For the remaining heuristic dimensions, the differences between the means were statistically significant, 6.532 ≤ t(19) ≤ 26.897, p < 0.001.

6.3. Interview Responses

In general, the interviewees had positive attitudes towards the AIELL system, as it was easy to use, was tailored to lower-grade learners, and could enhance the quality of mobile learning. For example, one of the students spoke highly of the design features: ‘It showed me that I can use AI technologies to support real-life teaching and ensure that it is engaging enough for learners. It helped me come up with ideas to improve how I teach’. Another student commented, ‘It will be helpful for lower-grade learners. The website design is very considerate of and suitable for lower-grade students’.
The interviewees believed that this learning approach could help students master English and related skills in their early years and connect their lessons to authentic contexts. One of the exemplary responses was as follows: ‘It gives students a great deal of autonomy and allows them to play and manipulate freely in the mobile learning process. You can upload pictures freely for recognition, which is very convenient for students learning in authentic contexts’.
The interviewees also provided suggestions to further improve the AIELL system, including integrating more diverse functions, incorporating contextual support and guidance from the teacher, and enhancing the connections among learning devices.

7. Study Limitations and Implications

The results of the study may have been affected by the small sample size and the restricted testing locations resulting from the COVID-19 pandemic. In the future, more field tests will be conducted to improve the system's functional performance. Future efforts will explore the potential of the AIELL system to facilitate students' mobile learning in authentic, ubiquitous language learning contexts. These efforts will involve experimental studies and the recruitment of more participants for educational research. Technologically, more diverse functions will be integrated, and libraries that contain multiple languages could be embedded to provide a multilingual learning environment. Pedagogically, additional investigations are needed to identify the frameworks, principles, and models required to integrate this innovative design into language instruction.

8. Discussion

By introducing the design process, design principles, and key technologies of the AIELL system, we answered the first research question on how the AIELL system was developed to achieve the objectives of the study. By emphasising priority design features and users' interactive experiences, an approach widely adopted in studies of mobile application/system design [87], the AIELL system fits the needs of students well, as reflected in the usability test results. Moreover, the interview responses suggested that students were satisfied with the object recognition feature and the system's connection with daily-life resources, which may trigger students' motivation and inspire teachers to design learning in informal spaces based on the authenticity of mobile learning. These findings echo relevant studies that advocate greater involvement of informal learning contexts in formal learning supported by mobile technologies [88,89].
Despite these encouraging results, the evaluation study identified areas for improvement in the system. The descriptive analysis of the usability test data indicated that further enhancements could be made to the Consistency, Recognition, Flexibility, and Help dimensions of the system. Because the design was tailored to the levels and characteristics of lower-grade learners, it may not accommodate a wide range of users, which affected the ratings for Consistency and Flexibility in the system design. This issue has been identified in other studies, which suggest generating adaptive mobile applications using ontology-based models combined with user interface design pattern methods [90,91].
On the one hand, these findings remind us to refine the object recognition feature, which is the key feature of the AIELL system. As suggested by relevant studies, the object recognition execution environment can be improved by combining the merits of popular machine learning frameworks for their speed and accuracy [92]. On the other hand, teachers are encouraged to guide mobile learning activities that connect student learning both inside and outside the classroom, thus addressing the cognitive transition among different learning contexts [93,94]. The Pearson correlation analysis indicated that the dimensions of Aesthetics, Recovery, and Help were negatively correlated with mobile learning, whereas the dimensions of Visibility, Match, Control, Consistency, Error, Recognition, and Flexibility were positively correlated. Moreover, the correlations between the mobile learning environment and the Error, Match, and Consistency dimensions were significant. These findings indicate that most of the design features will affect the design of mobile learning for lower-grade learners, particularly the aspects of Error, Match, and Consistency. Thus, we suggest adding more confirmation options and real-world information to the system's design [95].

9. Conclusions

In this study, the first goal was to develop the AIELL system, an AI-enabled mobile learning system for learning the vocabulary and grammar of English as an L2 that supports authentic and ubiquitous learning. Next, we conducted an evaluation study that involved a demonstration test, an online usability test, and an interview, all for the purpose of assessing the feasibility and usability of the AIELL system for promoting AI-enabled mobile learning. Both the quantitative and qualitative data collected indicated that AIELL is a functional, feasible, and usable learning system that can help lower-grade learners of English as an L2 improve their vocabulary and grammar knowledge and skills in a mobile learning environment. The system's potential for promoting students' learning of English vocabulary and grammar was demonstrated by the initial usability data and interview responses. In summary, building on the AI-enabled design features, further studies will be conducted to explore the impact of AIELL-supported learning on students' learning motivation, vocabulary, and grammar skills in English in authentic and ubiquitous learning contexts.

Author Contributions

Conceptualization, F.J. and D.S.; methodology, D.S. and Q.M.; formal analysis, F.J.; investigation, F.J.; resources, F.J.; data curation, F.J. and D.S.; writing—original draft preparation, F.J.; writing—review and editing, F.J., D.S., Q.M. and C.-K.L.; visualization, F.J. and D.S.; supervision, D.S.; project administration, D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of The Education University of Hong Kong (Ref. no. MIT/ER/TPg/202122-07, 3 May 2022) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, K.-Y.; Chang, C.-Y.; Hwang, G.-J. Trends in artificial intelligence-supported e-learning: A systematic review and co-citation network analysis (1998–2019). Interact. Learn. Environ. 2021, 1–19. [Google Scholar] [CrossRef]
  2. Kong, S.C.; Cheung, M.Y.W.; Zhang, G. Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical aware-ness. Educ. Technol. Soc. 2023, 26, 16–30. [Google Scholar]
  3. Hariharasudan, A.; Kot, S. A Scoping Review on Digital English and Education 4.0 for Industry 4.0. Soc. Sci. 2018, 7, 227. [Google Scholar] [CrossRef] [Green Version]
  4. Sarker, I.H. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput. Sci. 2022, 3, 1–20. [Google Scholar] [CrossRef] [PubMed]
  5. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef] [Green Version]
  6. Kim, W.-H.; Kim, J.-H. Individualized AI Tutor Based on Developmental Learning Networks. IEEE Access 2020, 8, 27927–27937. [Google Scholar] [CrossRef]
  7. Laitinen, K.; Laaksonen, S.-M.; Koivula, M. Slacking with the Bot: Programmable Social Bot in Virtual Team Interaction. J. Comput. Commun. 2021, 26, 343–361. [Google Scholar] [CrossRef]
  8. Mandal, S.; Naskar, S.K. Solving arithmetic mathematical word problems: A review and recent advancements. Inf. Technol. Appl. Math. 2019, 699, 95–114. [Google Scholar]
  9. Su, J.; Yang, W. Artificial intelligence in early childhood education: A scoping review. Comput. Educ. Artif. Intell. 2022, 3, 100049. [Google Scholar] [CrossRef]
  10. Villardón-Gallego, L.; García-Carrión, R.; Yáñez-Marquina, L.; Estévez, A. Impact of the Interactive Learning Environments in Children’s Prosocial Behavior. Sustainability 2018, 10, 2138. [Google Scholar] [CrossRef] [Green Version]
  11. Deng, L.; Liu, Y. Deep Learning in Natural Language Processing; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  12. Reinders, H.; Lan, Y.J. Big data in language education and research. Lang. Learn. Technol. 2021, 25, 1–3. [Google Scholar]
  13. Lee, S.-M. A systematic review of context-aware technology use in foreign language learning. Comput. Assist. Lang. Learn. 2019, 35, 294–318. [Google Scholar] [CrossRef]
  14. Chang, C.-K.; Hsu, C.-K. A mobile-assisted synchronously collaborative translation–annotation system for English as a foreign language (EFL) reading comprehension. Comput. Assist. Lang. Learn. 2011, 24, 155–180. [Google Scholar] [CrossRef]
  15. Godwin-Jones, R. Partnering with AI: Intelligent writing assistance and instructed language learning. Lang. Learn. Technol. 2022, 26, 5–24. [Google Scholar]
  16. Lin, P.-Y.; Chai, C.-S.; Jong, M.S.-Y.; Dai, Y.; Guo, Y.; Qin, J. Modeling the structural relationship among primary students’ motivation to learn artificial intelligence. Comput. Educ. Artif. Intell. 2020, 2, 100006. [Google Scholar] [CrossRef]
  17. Fryer, L.K.; Nakao, K.; Thompson, A. Chatbot learning partners: Connecting learning experiences, interest and competence. Comput. Hum. Behav. 2018, 93, 279–289. [Google Scholar] [CrossRef]
  18. Fryer, L.; Coniam, D.; Carpenter, R.; Lăpușneanu, D. Bots for language learning now: Current and future directions. Lang. Learn. Technol. 2020, 24, 8–22. [Google Scholar]
  19. Heift, T. Web Delivery of Adaptive and Interactive Language Tutoring: Revisited. Int. J. Artif. Intell. Educ. 2015, 26, 489–503. [Google Scholar] [CrossRef]
  20. Ahmed, V.; Opoku, A. Technology supported learning and pedagogy in times of crisis: The case of COVID-19 pandemic. Educ. Inf. Technol. 2021, 27, 365–405. [Google Scholar] [CrossRef]
  21. Dong, Y.; Yu, X.; Alharbi, A.; Ahmad, S. AI-based production and application of English multimode online reading using multi-criteria decision support system. Soft Comput. 2022, 26, 10927–10937. [Google Scholar] [CrossRef]
  22. Cheng, L.; Ritzhaupt, A.D.; Antonenko, P. Effects of the flipped classroom instructional strategy on students’ learning outcomes: A meta-analysis. Educ. Technol. Res. Dev. 2019, 67, 793–824. [Google Scholar] [CrossRef]
  23. Huang, X.; Zou, D.; Cheng, K.S.; Chen, X.; Xie, H. Trends, research issues and applications of artificial intelligence in language education. Educ. Technol. Soc. 2023, 26, 12–131. [Google Scholar]
  24. Yeung, W.K.; Sun, D. An exploration of inquiry-based authentic learning enabled by mobile technology for primary science. Int. J. Mob. Learn. Organ. 2021, 15, 1–28. [Google Scholar] [CrossRef]
  25. Shadiev, R.; Wang, X.; Liu, T.; Yang, M. Improving students’ creativity in familiar versus unfamiliar mo-bile-assisted language learning environments. Interact. Learn. Environ. 2022, 1–23. [Google Scholar]
  26. Pikhart, M. Intelligent information processing for language education: The use of artificial intelligence in language learning apps. Procedia Comput. Sci. 2020, 176, 1412–1419. [Google Scholar] [CrossRef]
  27. Chiu, T.K. Digital support for student engagement in blended learning based on self-determination theory. Comput. Hum. Behav. 2021, 124, 106909. [Google Scholar] [CrossRef]
  28. Wei, Y. Toward Technology-Based Education and English as a Foreign Language Motivation: A Review of Literature. Front. Psychol. 2022, 13. [Google Scholar] [CrossRef]
  29. Lenkaitis, C.A. Technology as a mediating tool: Videoconferencing, L2 learning, and learner autonomy. Comput. Assist. Lang. Learn. 2019, 33, 483–509. [Google Scholar] [CrossRef]
  30. Srivani, V.; Hariharasudan, A.; Nawaz, N.; Ratajczak, S. Impact of Education 4.0 among engineering students for learning English language. PLoS ONE 2022, 17, e0261717. [Google Scholar] [CrossRef]
  31. Chen, X.; Xie, H.; Zou, D.; Hwang, G.-J. Application and theory gaps during the rise of Artificial Intelligence in Education. Comput. Educ. Artif. Intell. 2020, 1, 100002. [Google Scholar] [CrossRef]
  32. Kinshuk; Graf, S. Ubiquitous Learning. In Encyclopedia of the Sciences of Learning; Seel, N.M., Ed.; Springer: Boston, MA, USA, 2012. [Google Scholar]
  33. Milrad, M.; Wong, L.H.; Sharples, M.; Hwang, G.J.; Looi, C.K.; Ogata, H. Seamless Learning: An International Perspective on Next-Generation Technology-Enhanced Learning. In Handbook of Mobile Learning; Berge, Z.L., Muilenburg, L.Y., Eds.; Routledge: London, UK, 2013; pp. 95–108. [Google Scholar]
  34. Hwang, G.-J.; Lai, C.-L.; Wang, S.-Y. Seamless flipped learning: A mobile technology-enhanced flipped classroom with effective learning strategies. J. Comput. Educ. 2015, 2, 449–473. [Google Scholar] [CrossRef] [Green Version]
  35. Ma, Q. University L2 Learners. Voices and Experience in Making Use of Dictionary Apps in Mobile Assisted Language Learning (MALL). Int. J. Comput. Lang. Learn. Teach. 2019, 9, 18–36. [Google Scholar] [CrossRef]
  36. Shadiev, R.; Liu, T.; Hwang, W.Y. Review of research on mobile-assisted language learning in familiar, authentic environments. Br. J. Educ. Technol. 2020, 51, 709–720. [Google Scholar] [CrossRef]
  37. Hao, T.; Wang, Z.; Ardasheva, Y. Technology-assisted vocabulary learning for EFL learners: A meta-analysis. J. Res. Educ. Eff. 2021, 14, 645–667. [Google Scholar] [CrossRef]
  38. Park, G.-P.; French, B.F. Gender differences in the Foreign Language Classroom Anxiety Scale. System 2013, 41, 462–471. [Google Scholar] [CrossRef]
  39. Reiber-Kuijpers, M.; Kral, M.; Meijer, P. Digital reading in a second or foreign language: A systematic literature review. Comput. Educ. 2020, 163, 104115. [Google Scholar] [CrossRef]
  40. Lan, Y.-J.; Tam, V.T.T. The impact of 360° videos on basic Chinese writing: A preliminary exploration. Educ. Technol. Res. Dev. 2022, 1–24. [Google Scholar] [CrossRef]
  41. Ma, Q. Examining the role of inter-group peer online feedback on wiki writing in an EAP context. Comput. Assist. Lang. Learn. 2019, 33, 197–216. [Google Scholar] [CrossRef]
  42. Marden, M.P.; Herrington, J. Design principles for integrating authentic activities in an online community of foreign language learners. Issues Educ. Res. 2020, 30, 635–654. [Google Scholar]
  43. Karakaya, K.; Bozkurt, A. Mobile-assisted language learning (MALL) research trends and patterns through bibliometric analysis: Empowering language learners through ubiquitous educational technologies. System 2022, 110, 102925. [Google Scholar] [CrossRef]
  44. Wilson, S.G. The flipped class: A method to address the challenges of an undergraduate statistics course. Teach. Psychol. 2013, 40, 193–199. [Google Scholar] [CrossRef]
  45. Thompson, J.; Childers, G. The impact of learning to code on elementary students’ writing skills. Comput. Educ. 2021, 175, 104336. [Google Scholar] [CrossRef]
  46. Sun, D.; Looi, C.K. Boundary interaction: Towards developing a mobile technology-enabled science curriculum to integrate learning in the informal spaces. Br. J. Educ. Technol. 2018, 49, 505–515. [Google Scholar] [CrossRef]
  47. Li, S.; Wang, W. Effect of blended learning on student performance in K-12 settings: A meta-analysis. J. Comput. Assist. Learn. 2022, 38, 1254–1272. [Google Scholar] [CrossRef]
  48. Carvalho, L.; Martinez-Maldonado, R.; Tsai, Y.S.; Markauskaite, L.; De Laat, M. How can we design for learning in an AI world? Comput. Educ. Artif. Intell. 2022, 3, 100053. [Google Scholar]
  49. Chwo, G.S.M.; Marek, M.; Wu, W.-C.V. Meta-analysis of MALL research and design. System 2018, 74, 62–72. [Google Scholar] [CrossRef]
  50. Chih-Cheng, L.; Hsien-Sheng, H.; Sheng-ping, T.; Hsin-jung, C. Learning English Vocabulary collaboratively in a technology-supported classroom. TOJET Turk. Online J. Educ. Technol. 2014, 13, 162–173. [Google Scholar]
  51. Liu, T.-Y. A context-aware ubiquitous learning environment for language listening and speaking. J. Comput. Assist. Learn. 2009, 25, 515–527. [Google Scholar] [CrossRef]
  52. Lan, Y.J. Immersion, interaction and experience-oriented learning: Bringing virtual reality into FL learning. Lang. Learn. Technol. 2020, 24, 1–15. [Google Scholar]
  53. Hasnine, M.N.; Akçapınar, G.; Mouri, K.; Ueda, H. An Intelligent Ubiquitous Learning Environment and Analytics on Images for Contextual Factors Analysis. Appl. Sci. 2020, 10, 8996. [Google Scholar] [CrossRef]
  54. Kukulska-Hulme, A. Conclusions: A Lifelong Perspective on Mobile Language Learning. In Mobile Assisted Language Learning Across Educational Contexts; Morgana, V., Kukulska-Hulme, A., Eds.; Routledge: London, UK, 2021. [Google Scholar]
  55. Shadiev, R.; Wang, X.; Halubitskaya, Y.; Huang, Y.M. Enhancing foreign language learning outcomes and mitigating cultural attributes inherent in Asian culture in a mobile-assisted language learning environment. Sustainability 2022, 14, 8428. [Google Scholar] [CrossRef]
  56. Seyyedrezaei, M.S.; Amiryousefi, M.; Gimeno-Sanz, A.; Tavakoli, M. A meta-analysis of the relative effectiveness of technology-enhanced language learning on ESL/EFL writing performance: Retrospect and prospect. Comput. Assist. Lang. Learn. 2022, 1–34. [Google Scholar] [CrossRef]
  57. Casey, C. Incorporating cognitive apprenticeship in multi-media. Educ. Technol. Res. Dev. 1996, 44, 71–84. [Google Scholar] [CrossRef]
  58. Wong, L.-H.; King, R.B.; Chai, C.S.; Liu, M. Seamlessly learning Chinese: Contextual meaning making and vocabulary growth in a seamless Chinese as a second language learning environment. Instr. Sci. 2016, 44, 399–422. [Google Scholar] [CrossRef]
  59. Golonka, E.M.; Bowles, A.R.; Frank, V.M.; Richardson, D.L.; Freynik, S. Technologies for foreign language learning: A review of technology types and their effectiveness. Comput. Assist. Lang. Learn. 2014, 27, 70–105. [Google Scholar] [CrossRef]
  60. Hwang, W.-Y.; Manabe, K.; Cai, D.-J.; Ma, Z.-H. Collaborative Kinesthetic English Learning With Recognition Technology. J. Educ. Comput. Res. 2019, 58, 946–977. [Google Scholar] [CrossRef]
  61. Zhao, X. Leveraging Artificial Intelligence (AI) Technology for English Writing: Introducing Wordtune as a Digital Writing Assistant for EFL Writers. RELC J. 2022. [Google Scholar] [CrossRef]
  62. El Shazly, R. Effects of artificial intelligence on English speaking anxiety and speaking performance: A case study. Expert Syst. 2021, 38, e12667. [Google Scholar] [CrossRef]
  63. Huang, M.-H. Designing website attributes to induce experiential encounters. Comput. Hum. Behav. 2003, 19, 425–442. [Google Scholar] [CrossRef]
  64. Grinberg, M. Flask Web Development: Developing Web Applications with Python; O’Reilly Media: Newton, MA, USA, 2018. [Google Scholar]
  65. Spencer, R.; Smalley, S.; Loscocco, P.; Hibler, M.; Andersen, D.; Lepreau, J. The Flask Security Architecture: System Support for Diverse Security Policies. In Proceedings of the 8th USENIX Security Symposium, Washington, DC, USA, 23–26 August 1999. [Google Scholar] [CrossRef]
  66. Ghimire, D. Comparative study on Python web frameworks: Flask and Django. Bachelor’s Thesis, Metropolia University of Applied Sciences, Helsinki, Finland, 2020. [Google Scholar]
  67. Lokhande, P.S.; Aslam, F.; Hawa, N.; Munir, J.; Gulamgaus, M. Efficient way of web development using python and flask. Int. J. Adv. Res. Comput. Sci. 2015, 6, 54–57. [Google Scholar]
  68. Gardner, J. The Web Server Gateway Interface (WSGI). In The Definitive Guide to Pylons; Gardner, J., Ed.; Apress: New York, NY, USA, 2009; pp. 369–388. [Google Scholar]
69. Taneja, S.; Gupta, P.R. Python as a tool for web server application development. JIMS 8i-Int. J. Inf. Commun. Comput. Technol. 2014, 2, 77–83.
70. Liao, Y.; Zhang, Z.; Yang, Y. Web Applications Based on AJAX Technology and Its Framework. In Communications and Information Processing; Zhao, M., Sha, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 320–326.
71. Osmani, A. Learning JavaScript Design Patterns: A JavaScript and jQuery Developer’s Guide; O’Reilly Media: Newton, MA, USA, 2012.
72. Woychowsky, E. AJAX: Creating Web Pages with Asynchronous JavaScript and XML; Prentice Hall: Upper Saddle River, NJ, USA, 2006.
73. Liu, T.; Tan, T.; Chu, Y. QR Code and Augmented Reality-Supported Mobile English Learning System. In Mobile Multimedia Processing; Jiang, X., Ma, M.Y., Chen, C.W., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 37–52.
74. Rolon-Mérette, D.; Ross, M.; Rolon-Mérette, T.; Church, K. Introduction to Anaconda and Python: Installation and setup. Python Res. Psychol. 2016, 16, S5–S11.
75. Gulli, A.; Pal, S. Deep Learning with Keras; Packt Publishing Ltd.: Birmingham, UK, 2017.
76. Ketkar, N.; Moolayil, J. Introduction to Keras. In Deep Learning with Python; Springer: New York, NY, USA, 2017; pp. 97–111.
77. Moolayil, J. An Introduction to Deep Learning and Keras. In Learn Keras for Deep Neural Networks; Apress: Berkeley, CA, USA, 2018; pp. 1–16.
78. Tammina, S. Transfer learning using VGG-16 with Deep Convolutional Neural Network for Classifying Images. Int. J. Sci. Res. Publ. (IJSRP) 2019, 9, 143–150.
79. Laborda, J.G.; Litzler, M.F. Current perspectives in teaching English for specific purposes. Onomázein 2015, 31, 38–51.
80. Looi, C.-K.; Sun, D.; Wu, L.; Seow, P.; Chia, G.; Wong, L.-H.; Soloway, E.; Norris, C. Implementing mobile learning curricula in a grade level: Empirical study of learning effectiveness at scale. Comput. Educ. 2014, 77, 101–115.
81. Hsieh, Y.-Z.; Su, M.-C.; Chen, S.Y.; Chen, G.-D. The development of a robot-based learning companion: A user-centered design approach. Interact. Learn. Environ. 2013, 23, 356–372.
82. Nielsen, J. Finding usability problems through heuristic evaluation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Monterey, CA, USA, 3–7 May 1992; pp. 373–380.
83. Hildebrand, E.A.; Bekki, J.M.; Bernstein, B.L.; Harrison, C.J. Online Learning Environment Design: A Heuristic Evaluation. In Proceedings of the 2013 ASEE Annual Conference & Exposition, Atlanta, GA, USA, 23–26 June 2013; pp. 23.945.1–23.945.11.
84. Tzafilkou, K.; Perifanou, M.; Economides, A.A. Development and validation of a students’ remote learning attitude scale (RLAS) in higher education. Educ. Inf. Technol. 2021, 26, 7279–7305.
85. Frank, M.C.; Braginsky, M.; Yurovsky, D.; Marchman, V.A. Variability and Consistency in Early Language Learning: The Wordbank Project; MIT Press: Cambridge, MA, USA, 2021.
86. Smith, A.C.; Monaghan, P.; Huettig, F. Complex word recognition behaviour emerges from the richness of the word learning environment. In Proceedings of the 14th Neural Computation and Psychology Workshop, Lancaster, UK, 21–23 August 2014; pp. 99–114.
87. Sharples, M.; Corlett, D.; Westmancott, O. The Design and Implementation of a Mobile Learning Resource. Pers. Ubiquitous Comput. 2002, 6, 220–234.
88. Viberg, O.; Andersson, A.; Wiklund, M. Designing for sustainable mobile learning–re-evaluating the concepts “formal” and “informal”. Interact. Learn. Environ. 2021, 29, 130–141.
89. Zimmerman, H.T.; Land, S.M.; Maggiore, C.; Millet, C. Supporting children’s outdoor science learning with mobile computers: Integrating learning on-the-move strategies with context-sensitive computing. Learn. Media Technol. 2019, 44, 457–472.
90. Braham, A.; Buendía, F.; Khemaja, M.; Gargouri, F. Generation of Adaptive Mobile Applications Based on Design Patterns for User Interfaces. Proceedings 2019, 31, 19.
91. Hervás, R.; Bravo, J. Towards the ubiquitous visualization: Adaptive user-interfaces based on the Semantic Web. Interact. Comput. 2011, 23, 40–56.
92. Chow, J.K.; Palmeri, T.J.; Gauthier, I. Haptic object recognition based on shape relates to visual object recognition ability. Psychol. Res. 2021, 86, 1262–1273.
93. Sun, D.; Looi, C.-K.; Yang, Y.; Sun, J. Design and implement boundary activity based learning (BABL) principle in science inquiry: An exploratory study. J. Educ. Technol. Soc. 2020, 23, 147–162.
94. Arini, D.N.; Hidayat, F.; Winarti, A.; Rosalina, E. Artificial intelligence (AI)-based mobile learning in ELT for EFL learners: The implementation and learners’ attitudes. Int. J. Educ. Stud. Soc. Sci. (IJESSS) 2022, 2, 88–95.
95. Jayatilleke, B.G.; Ranawaka, G.R.; Wijesekera, C.; Kumarasinha, M.C.B. Development of mobile application through design-based research. Asian Assoc. Open Univ. J. 2019, 13, 145–168.
Figure 1. AIELL design and development process.
Figure 2. Flask framework working processes.
Figure 3. jQuery functions.
Figure 4. Ajax asynchronous request.
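To make the interaction depicted in Figures 2–4 concrete, the sketch below shows, under stated assumptions, how a Flask route could expose a JSON endpoint that a jQuery Ajax call requests asynchronously. The route path, vocabulary dictionary, and field names are hypothetical illustrations and are not taken from the AIELL implementation.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory vocabulary store; the AIELL system's actual data source is not shown here.
VOCABULARY = {"apple": "a round fruit with firm, juicy flesh"}

@app.route("/api/word")
def lookup_word():
    # A jQuery call such as $.getJSON("/api/word", {term: "apple"}, callback)
    # would request this endpoint asynchronously and update the page without a reload.
    term = request.args.get("term", "").strip().lower()
    definition = VOCABULARY.get(term)
    return jsonify({"term": term, "definition": definition or "not found"})

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```

Returning JSON rather than rendered HTML keeps the page responsive, which is the essential benefit of the asynchronous request pattern shown in Figure 4.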
Figure 5. AIELL system configuration.
Figure 6. Image recognition interface of AIELL.
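Figure 6 presents the image recognition interface. As a minimal sketch of how an object in a learner's photograph could be labelled with a pretrained VGG-16 model in Keras, consider the snippet below; the image file name is a placeholder, and the AIELL system's actual model, label set, and pre- and post-processing may differ.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# Load VGG-16 with ImageNet weights (a common transfer-learning starting point).
model = VGG16(weights="imagenet")

# "photo.jpg" is a placeholder for a picture taken by the learner.
img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top-3 English labels for the recognised object.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```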
Figure 7. The user interface of the “sentence practice” function.
Figure 8. Flowchart of the AIELL learning scenarios and process.
Figure 9. Participants doing a demonstration.
Figure 10. Descriptive analysis of the usability test.
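Figure 10 reports descriptive statistics for the usability test. As an illustrative sketch only, per-dimension means and standard deviations could be computed from questionnaire responses as follows; the ratings shown are invented placeholders, not the study’s data, and the column names simply follow the dimension abbreviations in Table 1.

```python
import pandas as pd

# Placeholder 5-point ratings from a handful of respondents (invented values only).
ratings = pd.DataFrame({
    "Visibility": [4, 5, 4, 3, 5],
    "Match": [5, 4, 4, 5, 4],
    "MLE": [4, 4, 5, 4, 5],
})

# Mean and standard deviation per dimension, as a descriptive analysis would report.
print(ratings.agg(["mean", "std"]).round(2))
```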
Table 1. Usability test based on Nielsen’s heuristic evaluation.

| # | Dimension | Abbreviation | Test Item |
| 1 | Visibility of system status | Visibility | (1) The website keeps the user informed about what is going on through constructive, appropriate, and timely feedback. |
| 2 | Match between the system and the real world | Match | (2) Language forms (e.g., terms, phrases, symbols, and concepts) are similar to those used in the daily environment. |
|   |   |   | (3) Information is arranged in a natural and logical order. |
| 3 | User control and freedom | Control | (4) Users can control the system. |
|   |   |   | (5) Users can exit the system at any time, even when they have made a mistake. |
|   |   |   | (6) The system supports “Undo” and “Redo” actions. |
| 4 | Consistency and adherence to standards | Consistency | (7) Concepts, words, symbols, actions, or situations in the system refer to the same thing. |
|   |   |   | (8) Common platform standards are followed. |
| 5 | Error prevention (usability-related errors in particular) | Error | (9) The system is specifically designed to avoid serious, user-made usability errors. |
|   |   |   | (10) When there is a user-made error, the application gives an appropriate error message. |
| 6 | Recognition (not recall) | Recognition | (11) Objects to be manipulated, options for selection, and actions to be taken are visible. |
|   |   |   | (12) Users do not need to recall information from one part of a dialogue to another. |
|   |   |   | (13) Instructions on how to use the system are visible and easily retrievable. |
| 7 | Flexibility and efficiency of use | Flexibility | (14) The website caters to different levels of users, from novices to experts. |
|   |   |   | (15) Shortcuts or accelerators are provided and highlighted to speed up interaction and task completion for frequent users. |
| 8 | Aesthetics and minimalism of design | Aesthetics | (16) Site dialogues do not contain irrelevant or useless information that would distract users from their tasks. |
|   |   |   | (17) Displays are simple, and multiple-page displays are minimized. |
| 9 | Recognition, diagnosis, and correction of errors | Recovery | (18) Error messages are expressed in simple language. |
|   |   |   | (19) Error messages indicate the errors precisely and give quick, simple, constructive, specific instructions for correction. |
| 10 | Help and documentation | Help | (20) The website has “Help” and “Documentation” functions. |
|   |   |   | (21) Information is easy to search, and steps for completing a task are listed. |
| 11 | The mobile learning environment | MLE | (22) The system will improve the English vocabulary and grammar knowledge and skills of younger learners in a mobile learning environment. |
|   |   |   | (23) Do you have any suggestions for improving the function and role of the system based on the mobile learning environment? |
Table 2. Pearson correlation analysis results (N = 20 for every pair).

|  |  | Visibility | Match | Control | Consistency | Error | Recognition | Flexibility | Aesthetics | Recovery | Help | MLE |
| Visibility | Pearson correlation | 1 | 0.473 * | 0.525 * | 0.563 ** | 0.560 * | 0.587 ** | 0.607 ** | −0.675 ** | −0.586 ** | 0.061 | 0.325 |
|  | Sig. (2-tailed) | — | 0.035 | 0.018 | 0.010 | 0.010 | 0.006 | 0.005 | 0.001 | 0.007 | 0.799 | 0.162 |
| Match | Pearson correlation | 0.473 * | 1 | 0.706 ** | 0.667 ** | 0.827 ** | 0.720 ** | 0.669 ** | −0.648 ** | −0.716 ** | 0.192 | 0.599 ** |
|  | Sig. (2-tailed) | 0.035 | — | 0.001 | 0.001 | 0.000 | 0.000 | 0.001 | 0.002 | 0.000 | 0.416 | 0.005 |
| Control | Pearson correlation | 0.525 * | 0.706 ** | 1 | 0.863 ** | 0.934 ** | 0.952 ** | 0.874 ** | −0.642 ** | −0.487 * | 0.254 | 0.465 * |
|  | Sig. (2-tailed) | 0.018 | 0.001 | — | 0.000 | 0.000 | 0.000 | 0.000 | 0.002 | 0.029 | 0.281 | 0.039 |
| Consistency | Pearson correlation | 0.563 ** | 0.667 ** | 0.863 ** | 1 | 0.908 ** | 0.919 ** | 0.893 ** | −0.677 ** | −0.471 * | 0.083 | 0.534 * |
|  | Sig. (2-tailed) | 0.010 | 0.001 | 0.000 | — | 0.000 | 0.000 | 0.000 | 0.001 | 0.036 | 0.728 | 0.015 |
| Error | Pearson correlation | 0.560 * | 0.827 ** | 0.934 ** | 0.908 ** | 1 | 0.923 ** | 0.884 ** | −0.686 ** | −0.531 * | 0.076 | 0.635 ** |
|  | Sig. (2-tailed) | 0.010 | 0.000 | 0.000 | 0.000 | — | 0.000 | 0.000 | 0.001 | 0.016 | 0.749 | 0.003 |
| Recognition | Pearson correlation | 0.587 ** | 0.720 ** | 0.952 ** | 0.919 ** | 0.923 ** | 1 | 0.949 ** | −0.762 ** | −0.586 ** | 0.178 | 0.481 * |
|  | Sig. (2-tailed) | 0.006 | 0.000 | 0.000 | 0.000 | 0.000 | — | 0.000 | 0.000 | 0.007 | 0.453 | 0.032 |
| Flexibility | Pearson correlation | 0.607 ** | 0.669 ** | 0.874 ** | 0.893 ** | 0.884 ** | 0.949 ** | 1 | −0.814 ** | −0.619 ** | 0.099 | 0.370 |
|  | Sig. (2-tailed) | 0.005 | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | — | 0.000 | 0.004 | 0.677 | 0.108 |
| Aesthetics | Pearson correlation | −0.675 ** | −0.648 ** | −0.642 ** | −0.677 ** | −0.686 ** | −0.762 ** | −0.814 ** | 1 | 0.739 ** | −0.083 | −0.233 |
|  | Sig. (2-tailed) | 0.001 | 0.002 | 0.002 | 0.001 | 0.001 | 0.000 | 0.000 | — | 0.000 | 0.727 | 0.323 |
| Recovery | Pearson correlation | −0.586 ** | −0.716 ** | −0.487 * | −0.471 * | −0.531 * | −0.586 ** | −0.619 ** | 0.739 ** | 1 | −0.106 | −0.147 |
|  | Sig. (2-tailed) | 0.007 | 0.000 | 0.029 | 0.036 | 0.016 | 0.007 | 0.004 | 0.000 | — | 0.656 | 0.537 |
| Help | Pearson correlation | 0.061 | 0.192 | 0.254 | 0.083 | 0.076 | 0.178 | 0.099 | −0.083 | −0.106 | 1 | −0.164 |
|  | Sig. (2-tailed) | 0.799 | 0.416 | 0.281 | 0.728 | 0.749 | 0.453 | 0.677 | 0.727 | 0.656 | — | 0.490 |
| MLE | Pearson correlation | 0.325 | 0.599 ** | 0.465 * | 0.534 * | 0.635 ** | 0.481 * | 0.370 | −0.233 | −0.147 | −0.164 | 1 |
|  | Sig. (2-tailed) | 0.162 | 0.005 | 0.039 | 0.015 | 0.003 | 0.032 | 0.108 | 0.323 | 0.537 | 0.490 | — |

* Correlation is significant at the 0.05 level (2-tailed). ** Correlation is significant at the 0.01 level (2-tailed).
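For reference, correlations of this kind can be reproduced with SciPy. The sketch below uses invented ratings for two dimensions from 20 participants; the values are illustrative placeholders, not the study’s data.

```python
from scipy import stats

# Invented 20-participant ratings for two usability dimensions (illustrative only).
visibility = [4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4]
match = [5, 5, 3, 4, 4, 4, 3, 4, 5, 4, 4, 4, 4, 5, 4, 3, 4, 5, 4, 4]

# Two-tailed Pearson correlation, the statistic reported in Table 2.
r, p = stats.pearsonr(visibility, match)
print(f"r = {r:.3f}, p = {p:.3f}")
```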
Table 3. Paired-samples t-test analysis.

| Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI of the Difference (Lower) | 95% CI of the Difference (Upper) | t | df | Sig. (2-tailed) |
| Pair 1: Visibility − MLE | −0.35000 | 0.93330 | 0.20869 | −0.78680 | 0.08680 | −1.677 | 19 | 0.110 |
| Pair 2: Match − MLE | 4.10000 | 0.78807 | 0.17622 | 3.73117 | 4.46883 | 23.267 | 19 | 0.000 |
| Pair 3: Control − MLE | 8.25000 | 1.37171 | 0.30672 | 7.60802 | 8.89198 | 26.897 | 19 | 0.000 |
| Pair 4: Consistency − MLE | 3.25000 | 0.96655 | 0.21613 | 2.79764 | 3.70236 | 15.038 | 19 | 0.000 |
| Pair 5: Error − MLE | 4.10000 | 0.71818 | 0.16059 | 3.76388 | 4.43612 | 25.531 | 19 | 0.000 |
| Pair 6: Recognition − MLE | 7.55000 | 1.60509 | 0.35891 | 6.79879 | 8.30121 | 21.036 | 19 | 0.000 |
| Pair 7: Flexibility − MLE | 3.10000 | 1.20961 | 0.27048 | 2.53388 | 3.66612 | 11.461 | 19 | 0.000 |
| Pair 8: Aesthetics − MLE | 4.25000 | 1.20852 | 0.27023 | 3.68439 | 4.81561 | 15.727 | 19 | 0.000 |
| Pair 9: Recovery − MLE | 4.35000 | 1.18210 | 0.26433 | 3.79676 | 4.90324 | 16.457 | 19 | 0.000 |
| Pair 10: Help − MLE | 1.60000 | 1.09545 | 0.24495 | 1.08732 | 2.11268 | 6.532 | 19 | 0.000 |
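Each row in Table 3 corresponds to a paired-samples t-test, which can be sketched with SciPy as follows; the two score lists are invented placeholders rather than the study’s responses, and are included only to show the form of the computation.

```python
from scipy import stats

# Invented paired scores: each of 20 participants rates one heuristic dimension
# and the mobile learning environment (MLE) item (illustrative values only).
dimension = [8, 9, 9, 10, 8, 9, 10, 9, 8, 9, 9, 10, 8, 9, 9, 10, 9, 8, 9, 10]
mle = [4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 5, 4, 4, 5, 5]

# Two-tailed paired-samples t-test with df = N - 1 = 19.
t, p = stats.ttest_rel(dimension, mle)
print(f"t(19) = {t:.3f}, p = {p:.3f}")
```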
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
