1. Introduction
A serious game (SG) is defined as a software system that combines game elements with a non-entertaining purpose [1]. SGs are designed to train players on a subject, and they have a wide range of applications in politics, the military, healthcare, and art [2]. Nonetheless, one of the most important application domains is education [3], as SGs can assist teachers and students by presenting learning material in an engaging manner. Learning programming is a multi-layered skill that requires strenuous effort from students to develop problem-solving abilities and abstract thinking [4]. Therefore, many SGs have been developed to assist educators and students in the process of learning programming.
SGs employ several methodologies and frameworks in their design to fulfill both their entertaining and educational roles. Elements such as attractive graphics, engaging narrative, and interactivity increase student motivation and active participation [5]. The pedagogical objectives are met through mechanisms that deliver educational content to students: feedback and support systems are integrated into the game environment, presenting new knowledge and promoting the educational goals.
In the context of an SG, support can be defined as any type of supplementary information, guidance, or assistance provided to help players acquire and achieve the learning objectives of the game. The design and utilization of the various support types can vary depending on the game’s design and intended use case [6]. Text support is a common form that provides written instructions, explanations, or feedback messages; it can appear during specific game events or between levels. Textbook support is a variant of text support consisting of relevant passages from textbooks or other educational materials that provide additional background on the learning content. Hint support, another form of text support, offers brief notifications with clues or prompts that guide the player toward the correct answer or solution [7]. Image support presents visual aids or diagrams that illustrate the learning content, making it easier for players to understand. Video support provides short videos containing additional explanations or demonstrations of the learning content. Finally, working example support demonstrates how the learning content can be applied in situations similar to those presented in-game [8].
We argue that support is the most important factor for the success of an SG, as it can greatly impact the player’s learning experience. It is critical since it offers learners the necessary resources and assistance to facilitate their learning process. Without it, learners may become disengaged, resulting in a lack of interest and motivation to learn. The availability of support builds confidence, reduces stress and anxiety, and encourages learners to experiment and learn from their mistakes. Given that learning is a process that takes time and effort, support encourages learners to persist despite setbacks, mistakes, or challenges. Furthermore, its various forms can accommodate the diverse learning needs of players, resulting in more effective knowledge acquisition.
Nonetheless, every student has a unique learning style and strategy when studying or encountering a new problem [9]. Educators in classrooms acknowledge these different learning preferences and adjust their teaching methods to address the individual needs of their students. Following this model, an SG has the potential to significantly enhance learning efficiency by providing support customized to the unique requirements of each student. Through personalized support, an SG can help students overcome their specific learning obstacles. By emulating the interaction between a teacher and a student in a classroom, individualized support in an SG can help bridge the gap between varying learning styles and preferences, ultimately leading to a more productive and satisfying learning experience.
Artificial intelligence (AI) applications in game-based learning environments have been studied in the past, but recent improvements in AI and data analysis have led to significant research interest in adaptive learning [10]. Adaptive SGs can provide tailored content and experiences to their users by adjusting challenges and activities, maintaining motivation and engagement. Adapting feedback requires modeling the student’s behavior by monitoring and assessing knowledge levels during gameplay. At the core of this process are AI models that collect user data from in-game actions, analyze them, and provide support according to the educational goals. These models rely either on pre-authored expert knowledge or on data-driven approaches that track student development in real time.
Adaptive support has great potential for delivering educational content in SGs and improving learning efficiency. Adaptive SGs can assist educators in overcrowded classrooms where individual help is not easily accessible, and/or act as self-teaching tools. This paper surveys the research on adaptive support in SGs for programming, reviews their design process, discusses the learning results, and presents the latest advancements in the field. It is structured as follows:
Section 2 presents a literature review regarding adaptive systems in SGs. Section 3 defines the methodology and scope of the survey. Next, Section 4 presents the results of the research question analysis. Finally, Section 5, Section 6, and Section 7 summarize and conclude the paper.
2. Related Work
The field of adaptive support is still in its early stages. To our knowledge, there has not yet been a systematic literature review (SLR) in this field for the topic of serious games for programming. Nonetheless, adaptive systems in serious games and learning applications have been studied in terms of player engagement and feedback. We identified five reviews published in recent years with a relevant context in education and adaptivity, as shown in Table 1.
Hooshyar et al. [15] conducted a review of data-driven approaches for player modeling in educational games. They considered data-driven techniques an optimal solution for modeling player knowledge and behavior in adaptive games, as this approach relies less on expert authoring and captures a variety of player data while the game is being played. They found that the main objective of data mining in adaptive educational games was behavior modeling, followed by goal recognition and procedural content generation.
Lopes et al. [16] reviewed 16 papers on adaptive gamification strategies for various learning goals, investigating how the adaptive features work. Gamification applies game thinking and game elements such as badges, leaderboards, and virtual points to promote learning by strategically combining them according to the educational field or situation. Adaptive gamification dynamically customizes the game elements for each user by categorizing their preferences, in contrast to the “one-size-fits-all” approach adopted in most gamified environments. The survey concluded that most adaptive gamification implementations use existing player typologies to categorize users as a base for further adaptation of the gamification elements. It suggests that the adaptive gamification process in learning environments should also consider continuous player-type profiling and adaptability to learning topics as strategies for optimal effectiveness in integrating gamification elements.
Liu et al. [14] conducted a systematic analysis of the integration of adaptivity in educational games using bibliometric, qualitative, thematic, and meta-analysis methods. They identified 62 publications and concluded that adaptivity in games does not positively contribute to learning and game performance compared with non-adaptive games. According to the research, these outcomes were more likely to be affected by factors such as randomization, game genre, and gameplay length than by the adaptivity design. However, the analysis also found that adaptivity had a positive effect on user engagement and focus on learning.
A study by Ninaus and Nebel [13] reviewed methods of acquiring data from serious games and how these analytics were utilized to adapt their learning environments. They found 10 relevant studies, focused mostly on the natural sciences, whose integrated adaptive mechanisms were intended to improve learning. The majority of implementations relied on data-driven approaches instead of cognitive frameworks, and most of them produced positive outcomes. The authors reported a lack of standardized methods and theoretical foundations for analyzing the effects of the adaptations. Although they identified an increased interest in the field, the low number of papers indicates that more research is needed.
Another survey, by McBroom et al. [12], provided an overview of methods for automatic hint generation as support for programming exercises. The paper surveyed the numerous developed techniques by analyzing them as a series of smaller steps. Most hint-producing approaches were based on selecting a next step from past data, identifying goals, or utilizing program features. Other techniques provided feedback by automatically repairing syntax errors in user programs, using either machine learning or search algorithms to find possible corrections. The authors proposed their own framework, called HINTS (Hint Iteration by Narrow-down and Transformation Steps), for describing hint generation techniques and identifying relationships between them to facilitate future comparisons.
Lastly, a study by Aydin et al. [11] examined the adaptation components in educational games. Their review of 26 articles revealed that adaptive game design is applied in various fields, although most of the games concern teaching programming. The study found that games adapt their educational content, item behaviors, and interface, with educational content being the preferred adaptation element. They concluded that the most frequent method for adapting the learning content was adjusting its difficulty. In addition, adaptation in the games was implemented both pre-game and in-game with a range of methods such as deep learning, Bayesian networks, and decision trees.
Previous research on adaptive methods in serious games focused on identifying various techniques and strategies for their implementation. This review shares the common research goal of classifying adaptive methods and player behavior modeling, as demonstrated in the works of Aydin et al. [11], Hooshyar et al. [15], and Lopes et al. [16]. Additionally, some of the previous studies reported a research objective similar to that of the present work, namely the examination of the learning effect of adaptivity [13,14]. Although past research has explored the adaptivity of various game elements such as content and difficulty, the present study is distinct in its explicit focus on adaptive support methods. While one previous study, conducted by McBroom et al. [12], focused on support, it referred solely to hints, whereas the present study includes all types of support. Lastly, this study stands out in its focus on teaching programming, as it is the only one to explicitly investigate the use of SGs for this purpose. By filling this research gap, this study contributes to the broader understanding of the potential benefits and challenges of using SGs for programming education. This paper provides an evaluation of the provided support in terms of learning efficiency and discusses the implications of the design choices. This review is important for researchers and designers of serious games about programming who are considering implementing adaptive support to increase the learning results of their students. In addition, researchers can familiarize themselves with up-to-date adaptive approaches and applications of support in the field.
4. Results Analysis
A quantitative analysis classified the papers by year (Figure 2) and by type of publication (Figure 3). The majority of papers were published after 2016, with journals and conferences being the main publication venues.
4.2. Methods Used to Generate Adaptive Supports (RQ2)
Table 7 summarizes the methodological approaches used in the games to adapt their learning content. Overall, five types were identified: questionnaire, fuzzy logic, Bayesian network, hint factory, and dialog responses. Most of the games adapt their support during gameplay by processing player input in real time, with the exception of two games that adapt it statically, using input collected before the game starts.
Quiz Time!, NanoDoc, HTML Escape, and its expansion FuzAd_Escape utilize fuzzy logic to represent the player’s knowledge level and cognitive state. Fuzzy logic [32] is a heuristic approach that can mimic real-life conditions with partial truth statements. In contrast to binary systems, this multivalued logic provides partial degrees of “true” or “false”, allowing decision-making with imprecise estimates. Through fuzzy logic, the aforementioned games handle the uncertainty of describing a student’s knowledge level in a domain using partial membership values instead of distinct values. For example, a student’s knowledge of the iteration control structure could be 0.6 insufficiently known and 0.4 known. These values are calculated from membership functions that take player actions as input. HTML Escape and FuzAd_Escape use the player’s quiz results as input, while NanoDoc utilizes the results of player-created programs. Quiz Time! also receives the player’s answers to quiz questions to calculate current knowledge, but additionally requires two more inputs: the player’s previous knowledge of computer programming and the frequency of misconceptions made by the player.
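To make the mechanism concrete, the following Python sketch shows how triangular membership functions could map a quiz score to partial knowledge levels, in the spirit of the games described above. The function shapes, breakpoints, and labels are illustrative assumptions, not the implementations used in the reviewed games.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def knowledge_memberships(quiz_score):
    """Map a quiz score in [0, 100] to fuzzy knowledge levels.

    The breakpoints below are illustrative; each game defines its own
    membership functions over its own input signals (quiz answers,
    program output, misconception frequency, etc.).
    """
    return {
        "unknown":              triangular(quiz_score, -1, 0, 40),
        "insufficiently_known": triangular(quiz_score, 20, 50, 80),
        "known":                triangular(quiz_score, 60, 100, 101),
    }

if __name__ == "__main__":
    # A score of 65 yields partial membership in two overlapping sets,
    # roughly 0.5 "insufficiently_known" and 0.125 "known".
    print(knowledge_memberships(65))
```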
The Neverwinter Nights-based game modifies the text of the learning materials that appear in the conversation system between the player and non-player characters (NPCs). This text is adapted according to the learning style of the player, based on the Felder–Silverman model [33], which describes personality dimensions that contribute to learning. The game has three modes: one non-adaptive and two adaptive, differing in how the support is presented. In the first adaptive mode, the characteristics of the player are identified with a 44-question questionnaire filled out before starting the game. These characteristics remain static throughout the game session, and player actions are recorded passively without altering the support presentation. In the second adaptive mode, the student’s learning style is analyzed in real time according to the player’s interactions with the game, and the support presentation is based on the evaluation of the player’s responses in each conversation. Moreover, this mode provides an option to change the presentation type manually during a conversation and records the selection as additional information for calculating the next NPC interaction.
BOTS provides intelligent feedback in the form of personalized hints. It extends the hint factory [34], a data-driven method that creates hints by tracing student states through a solution graph built from previous student interactions. When a solution path is found, a hint is generated by suggesting a potential next step, i.e., the next node on the solution path. However, BOTS uses the output of players’ programs instead of the player actions themselves as data, reducing the size of the solution graph, which would otherwise be immense for its open-ended puzzle problems.
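As a rough illustration of this idea, the sketch below derives a next-step hint from a graph of previously observed states. The state encoding, graph contents, and frequency-based scoring are hypothetical simplifications of the hint factory approach (which in its original form derives a policy from a Markov decision process), not the BOTS implementation itself.

```python
from collections import defaultdict

# Each key is an observed program state (here, simply the program output);
# the value maps successor states to how often students moved to them.
# BOTS derives states from program output rather than from edit actions.
transitions = defaultdict(lambda: defaultdict(int))
goal_states = set()

def record_trace(states, solved):
    """Add one student's sequence of states to the graph."""
    for current, nxt in zip(states, states[1:]):
        transitions[current][nxt] += 1
    if solved:
        goal_states.add(states[-1])

def leads_to_goal(state, seen=None):
    """Check whether any recorded path from this state reaches a goal state."""
    seen = seen or set()
    if state in goal_states:
        return True
    if state in seen:
        return False
    seen.add(state)
    return any(leads_to_goal(s, seen) for s in transitions.get(state, {}))

def next_step_hint(state):
    """Suggest the most frequently taken successor that can still reach a goal."""
    candidates = transitions.get(state)
    if not candidates:
        return None  # unseen state: no data-driven hint available
    reachable = [s for s in candidates if leads_to_goal(s)]
    pool = reachable or list(candidates)
    return max(pool, key=lambda s: candidates[s])

# Usage: record past traces, then ask for a hint from the player's current state.
record_trace(["empty", "one_block", "solution"], solved=True)
record_trace(["empty", "wrong_block"], solved=False)
print(next_step_hint("empty"))  # -> "one_block"
```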
OGITS models student progress with a Bayesian network [35] and adapts the support material accordingly. Bayesian networks are probabilistic graphical models that employ directed acyclic graphs to describe a set of variables and their conditional dependencies. The game implements a Bayesian network whose directed acyclic graph has one node per programming concept, and nodes that require knowledge of previous concepts are connected to them. Using the conditional dependencies between nodes and taking into account the student’s answers to quiz questions, each node is labeled “known” or “unknown”. With the Bayesian network, the game thus estimates which prerequisite concepts have not been learned and directs students to the required resources.
Minerva utilizes a questionnaire to identify student learning styles based on Honey and Mumford’s learning style questionnaire [36] and Bartle’s player types [37], handling students both as players and as learners. The questionnaire contains 32 Likert-scale statements that players answer before entering the game, mapping them to activists, theorists, pragmatists, or reflectors. The detection stage occurs only once, and the result remains static throughout the course of gameplay. Mapping players to a corresponding learning style allows the game to alter the viewing order of the learning content, thereby adapting the support.
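The questionnaire-driven profiling used by Minerva can be illustrated with the short sketch below, which tallies Likert-scale responses per style and reorders the learning content for the dominant style. The statement-to-style assignment, scoring, and content ordering are illustrative assumptions, not Minerva’s actual rules.

```python
STYLES = ("activist", "theorist", "pragmatist", "reflector")

def dominant_style(responses, statement_styles):
    """Sum Likert scores (e.g., 1-5) per style and return the highest-scoring one.

    responses: list of Likert values, one per statement.
    statement_styles: parallel list naming the style each statement probes
    (the real questionnaire has 32 statements; the mapping is game-specific).
    """
    totals = {style: 0 for style in STYLES}
    for score, style in zip(responses, statement_styles):
        totals[style] += score
    return max(totals, key=totals.get)

def order_content(style, content):
    """Reorder learning material according to an assumed style preference."""
    preferred_first = {
        "activist":   ["exercise", "example", "theory"],
        "pragmatist": ["example", "exercise", "theory"],
        "theorist":   ["theory", "example", "exercise"],
        "reflector":  ["theory", "exercise", "example"],
    }[style]
    return sorted(content, key=lambda item: preferred_first.index(item["kind"]))

# Usage: profile once before the game starts, then keep the ordering for the session.
style = dominant_style([5, 2, 4, 1], ["activist", "theorist", "pragmatist", "reflector"])
lesson = [{"kind": "theory"}, {"kind": "example"}, {"kind": "exercise"}]
print(style, order_content(style, lesson))
```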
AutoThinking creates a cognitive model of the player with a Bayesian network, allowing a real-time non-invasive assessment. The model consists of a directed acyclic graph where each skill is represented as a node and prerequisite relationships are expressed as directed edges in a parent–child manner. A set of conditional probability distributions is defined for each node given its parents. The Bayesian network takes the player’s solution as input, classifies it as satisfactory, normal, or unsatisfactory, and adapts the support content accordingly. This process can occur in two phases: in debug mode before submitting the solution, or after executing it.
ENGAGE also uses two Bayesian networks, referred to as the outer loop and the inner loop, to support students. The first implements an adaptive problem-solving task strategy that selects tasks related to topics where students have knowledge gaps. A dynamic Bayesian network models student knowledge from their submitted solutions and quantifies their skill with binary variables on specific learning concepts. When it detects that a concept has not been successfully learned, the game uses a scaffolding approach and dynamically reallocates players to intermediate tasks in order to master that concept. The second Bayesian network generates adaptive hints by suggesting the next steps of a solution in a problem task. Additionally, it displays hints to enhance learning outcomes when it predicts that a concept is not fully learned. The article mentions that both Bayesian networks are planned but not yet implemented.
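The Bayesian modeling shared by OGITS, AutoThinking, and ENGAGE can be sketched with a minimal hand-rolled update over binary skill variables: a prior belief that a concept is known is revised from observed quiz or solution outcomes using assumed slip and guess probabilities. The prior, slip, guess, and threshold values below are illustrative assumptions, not parameters reported by the reviewed games, and the sketch omits the prerequisite edges between concepts for brevity.

```python
def update_knowledge(prior_known, correct, slip=0.1, guess=0.2):
    """Bayesian update of P(concept known) from one observed answer.

    slip  = P(wrong answer | concept known)
    guess = P(correct answer | concept unknown)
    """
    if correct:
        likelihood_known, likelihood_unknown = 1 - slip, guess
    else:
        likelihood_known, likelihood_unknown = slip, 1 - guess
    evidence = likelihood_known * prior_known + likelihood_unknown * (1 - prior_known)
    return likelihood_known * prior_known / evidence

def label_concepts(answers, prior=0.5, threshold=0.7):
    """Label each concept 'known'/'unknown' from its sequence of observed answers."""
    labels = {}
    for concept, observations in answers.items():
        belief = prior
        for correct in observations:
            belief = update_knowledge(belief, correct)
        labels[concept] = "known" if belief >= threshold else "unknown"
    return labels

# Usage: direct the player to resources for concepts labeled 'unknown'.
quiz_log = {"variables": [True, True, True], "loops": [False, True, False]}
print(label_concepts(quiz_log))  # e.g. {'variables': 'known', 'loops': 'unknown'}
```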
4.3. Adaptivity Effectiveness in Learning (RQ3)
Table 8 summarizes the results for learning effectiveness and the adaptivity effect of the reviewed games that provide evaluation data.
HTML Escape collected feedback with a questionnaire survey. The researchers reported positive results, as students stated that the game helped them overcome difficulties with the learning content. FuzAd_Escape used questionnaires for teachers and students, whose answers indicated that the game improved the learning results in HTML. Additionally, FuzAd_Escape was evaluated with a t-test on two groups of students, with only one group playing the game. After a teaching period of three weeks, a test about HTML was given to both groups and their grades were compared. Students who played the game had better scores on the test.
The Neverwinter Nights-based game evaluated its adaptive methods by comparing the results of a pre-test and post-test on SQL across four groups of students. The first group (control) studied the learning material only from a textbook, and the second group played the game in a non-adaptive mode. The third group used the adaptive mode with data collected in advance of playing the game, while the last group played the mode in which support is automatically adapted to the learning style of the player. Students who played the game performed better on the test than those who studied with the textbook. The adaptive mode group had shorter completion times than all other groups and also the highest mean learning effectiveness, although the latter difference was not statistically significant. Interviews were conducted with students, and quantitative and qualitative data were gathered to detect the contribution of the game to student programming skills. The analysis showed that students improved their comprehension of programming concepts and that the game helped them recognize their weak spots. Moreover, it was noted that the game succeeded in teaching students without a teacher.
The Minerva game’s adaptive model was assessed with a formative evaluation involving elementary school students. The researchers compared the learning outcomes of a post-test questionnaire between a group of students who played the game and a control group who studied with a textbook. Additionally, semi-structured interviews were conducted, and game log data were analyzed. Learning efficiency was found to be equal for both groups, although the game was shown to facilitate engagement among the students. A similar evaluation approach was followed for AutoThinking, with an experimental and a control group of students. Computational thinking knowledge was estimated with a pre-test that all participants took. The control group was taught the learning content with traditional teaching, while the experimental group played the game for an equal amount of time. Statistical analysis of the post-test revealed that students who used the game had better scores than the control group, and it was concluded that the game improved students’ computational thinking in terms of both conceptual knowledge and skills.
For Quiz Time!, an evaluation of the game’s learning effectiveness was conducted with 80 university students. The population was split into two equal-sized groups, and students used the game for an academic semester. Each group received a different version of the game: the first a conventional version lacking the dynamic advice generator and knowledge assessment features, the second a fully adaptive application. Statistical analysis with a t-test revealed significantly better quiz grades for the group of students who used the dynamic advice generator support. For NanoDoc, the researchers conducted an empirical study on 102 elementary school students divided randomly by the game into two even groups. The game provided the same support method for both groups but with different types of accessibility: in the first group, players could manually choose when to receive assistance, while in the second, the support was activated adaptively according to the knowledge level of the student. Statistical analysis comparing the players’ programs revealed higher educational efficiency for the group with adaptive support.
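For readers unfamiliar with the group comparisons reported above, the following sketch shows how such an independent-samples t-test could be run in Python. The scores are invented placeholder data, not results from the reviewed studies.

```python
from scipy import stats

# Placeholder post-test scores for two groups (invented, for illustration only):
# one group received adaptive support, the other a non-adaptive version of the game.
adaptive_scores = [78, 85, 69, 90, 74, 88, 81, 77]
non_adaptive_scores = [70, 66, 75, 61, 72, 68, 74, 65]

# Two-sided independent-samples t-test (Welch's variant, which does not
# assume equal variances between the groups).
t_stat, p_value = stats.ttest_ind(adaptive_scores, non_adaptive_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level (commonly 0.05) would indicate
# a statistically significant difference between the group means.
```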
Three of the reviewed games (BOTS, ENGAGE, and OGITS) did not provide any empirical evidence of their impact on learning efficiency. On the other hand, four SGs, namely HTML Escape, FuzAd_Escape, Minerva, and AutoThinking, conducted experiments involving control groups who received traditional teaching or textbook-based instruction alongside experimental groups who played the game. Although these studies reported positive effects on learning efficiency, none of them explicitly measured the effect of game adaptivity on learning effectiveness. However, given that the primary feature of these games was the adaptive support, it is plausible that this was the cause of the positive learning outcomes. Nonetheless, such claims cannot be made with certainty. In contrast, the studies on three SGs, namely Quiz Time!, NanoDoc, and Neverwinter Nights, directly compared the learning outcomes of groups who received adaptive support with those who received non-adaptive support. These studies reported positive learning outcomes for the adaptive support groups, except for Neverwinter Nights, which reported no impact.
5. Discussion
The vast majority of the reviewed games were not publicly available, so it was not possible to examine them and verify their adaptive support methods and mechanics. It would benefit future research if serious game developers provided public access to their games. Moreover, there was a lack of a standard methodology in the presented articles, since various types of quantitative and qualitative techniques were applied. As indicated in similar studies [38], common methodological practices need to be established for the evaluation of serious games.
Although this survey includes articles from 2000–2022, the first study about adaptive serious game support for programming was published in 2014. Since that year, there has been growing interest in the field and a steady increase in the creation of new applications. Recent developments in artificial intelligence and data analysis have allowed researchers to produce new methods for personalization and adaptivity [39]. Additionally, advancements in the game industry and in computing have reduced the costs of game development and offered new tools for designing, prototyping, and authoring games. These improvements overcome technical challenges of the past, such as real-time data processing and the creation of complex immersive worlds, allowing more challenging and sophisticated games to be implemented.
According to the results of RQ1, the most common way of presenting educational content through support was text. Although support was also offered through other means, such as images, videos, or working examples, text had a constant presence in almost all studies. Support via text is easy to implement and provides an effective and direct channel of communication with the user. Text was presented in the reviewed games in a variety of forms, such as tips (FuzAd_Escape, HTML Escape, Minerva), hints (BOTS, ENGAGE, AutoThinking), textbook material (OGITS), dialog (Neverwinter Nights, OGITS, FuzAd_Escape, HTML Escape), or chat (Quiz Time!). Some authors attempted to integrate support into the game context by presenting it through a game element such as a non-player character. However, in most cases, although the content of the message was personalized and adapted to cover the learning insufficiencies of a specific user, it was not delivered as a game mechanic. Integrating learning material through a game mechanic instead of plain text has been reported to produce better learning results [40].
This review presented several techniques that were applied in games to generate adaptive support content (RQ2). Fuzzy logic was used most frequently (NanoDoc, FuzAd_Escape, Quiz Time!, HTML Escape), followed by Bayesian networks (AutoThinking, OGITS, ENGAGE), questionnaires (Neverwinter Nights, Minerva), and the hint factory algorithm (BOTS). All of them were based on data-driven approaches and did not apply models that require training before execution, such as artificial neural networks and deep learning. Data-driven methods tend to be easier to develop and more cost-effective than model-based approaches such as intelligent tutoring systems [41]. The most common use of the collected data in the surveyed games was modeling student knowledge and behavior through user interactions and progress.
The majority of the reviewed studies provided data on how the use of support affects learning effectiveness, as assessed by research question 3 (RQ3). The data in Table 8 show that in most cases the impact of support on learning was positive. However, three of the SGs (BOTS, ENGAGE, and OGITS) did not present any empirical evidence on how their support implementation affected players, while two SGs (Neverwinter Nights and Minerva) reported no change in learning efficiency. It should be noted that the adaptive support methods used by Neverwinter Nights and Minerva were static, based on data collected from questionnaires prior to gameplay (as shown in Table 7). Conversely, all of the real-time adaptive support methods that provided evaluation data demonstrated positive results in terms of learning effectiveness. This suggests that static adaptive methods may perform worse than methods that adapt to player feedback during gameplay. Notably, three SGs (Quiz Time!, NanoDoc, and Neverwinter Nights) directly compared adaptive and non-adaptive support, with two of them (Quiz Time! and NanoDoc) reporting positive results in terms of learning efficiency.
All the participants in the reviewed studies (Table 8) were novices in programming (elementary school) or new to the discipline (university). This suggests that researchers tend to assess adaptive support with students who have no previous experience in programming, probably because this group requires constant assistance due to the difficulties of learning programming in its early stages [4]. A further confirmation of this argument is the developers’ design decision to create mostly puzzle games (Table 5), which is the most popular game genre for teaching introductory programming [42]. However, secondary school students and older learners (age > 25) were not included in the reviewed studies. Further study is needed to investigate adaptive support models across various levels of programming knowledge and how their effectiveness differs with age.
Author Contributions
Conceptualization, methodology, validation, formal analysis, data curation, writing—original draft preparation, writing—review and editing, visualization, P.T. and S.X.; supervision, S.X.; project administration, S.X. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
All data are included in the article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Felicia, P. Handbook of Research on Improving Learning and Motivation through Educational Games; iGi Global: Hershey, PA, USA, 2011. [Google Scholar] [CrossRef]
- Susi, T.; Johannesson, M.; Backlund, P. Serious Games—An Overview; Technical Report; 2007. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2:2416 (accessed on 5 December 2022).
- Bellotti, F.; Kapralos, B.; Lee, K.; Moreno-Ger, P.; Berta, R. Assessment in and of Serious Games: An Overview. Adv. Hum.-Comput. Interact. 2013, 1–11. [Google Scholar] [CrossRef]
- Cheah, C.S. Factors Contributing to the Difficulties in Teaching and Learning of Computer Programming: A Literature Review. Contemp. Educ. Technol. 2020, 12, ep272. [Google Scholar] [CrossRef] [PubMed]
- Avila-Pesántez, D.; Rivera, L.A.; Alban, M.S. Approaches for Serious Game Design: A Systematic Literature Review. ASEE Comput. Educ. (CoED) J. 2017, 8, 12. [Google Scholar]
- Schrader, C. Serious Games and Game-Based Learning. In Handbook of Open, Distance and Digital Education; Springer: Singapore, 2022; Volume 1, pp. 1–14. [Google Scholar] [CrossRef]
- Hooshyar, D.; Pedaste, M.; Yang, Y.; Malva, L.; Hwang, G.-J.; Wang, M.; Lim, H.; Delev, D. From Gaming to Computational Thinking: An Adaptive Educational Computer Game-Based Learning Approach. J. Educ. Comput. Res. 2020, 59, 383–409. [Google Scholar] [CrossRef]
- Toukiloglou, P.; Xinogalos, S. Adaptive Support with Working Examples in Serious Games About Programming. J. Educ. Comput. Res. 2023, 07356331231151393. [Google Scholar] [CrossRef]
- Vermunt, J.D.; Vermetten, Y.J. Patterns in Student Learning: Relationships Between Learning Strategies, Conceptions of Learning, and Learning Orientations. Educ. Psychol. Rev. 2004, 16, 359–384. [Google Scholar] [CrossRef]
- Hooshyar, D.; Lim, H.; Pedaste, M.; Yang, K.; Fathi, M.; Yang, Y. AutoThinking: An Adaptive Computational Thinking Game. In Proceedings of the Innovative Technologies and Learning: Second International Conference, ICITL 2019, Tromsø, Norway, 2–5 December 2019; pp. 381–391. [Google Scholar] [CrossRef]
- Aydin, M.; Karal, H.; Nabiyev, V. Examination of adaptation components in serious games: A systematic review study. Educ. Inf. Technol. 2022, 1, 1–22. [Google Scholar] [CrossRef]
- McBroom, J.; Koprinska, I.; Yacef, K. A Survey of Automated Programming Hint Generation: The HINTS Framework. ACM Comput. Surv. 2021, 54, 1–27. [Google Scholar] [CrossRef]
- Ninaus, M.; Nebel, S. A Systematic Literature Review of Analytics for Adaptivity Within Educational Video Games. Front. Educ. 2021, 5, 611072. [Google Scholar] [CrossRef]
- Liu, Z.; Moon, J.; Kim, B.; Dai, C.-P. Integrating adaptivity in educational games: A combined bibliometric analysis and meta-analysis review. Educ. Technol. Res. Dev. 2020, 68, 1931–1959. [Google Scholar] [CrossRef]
- Hooshyar, D.; Yousefi, M.; Lim, H. A systematic review of data-driven approaches in player modeling of educational games. Artif. Intell. Rev. 2017, 52, 1997–2017. [Google Scholar] [CrossRef]
- Lopes, V.; Reinheimer, W.; Medina, R.; Bernardi, G.; Nunes, F.B. Adaptive gamification strategies for education: A systematic literature review. Braz. Symp. Comput. Educ. (Simpósio Bras. De Inf. Na Educ. SBIE) 2019, 30, 1032. [Google Scholar] [CrossRef]
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [PubMed]
- Toukiloglou, P.; Xinogalos, S. NanoDoc: Designing an Adaptive Serious Game for Programming with Working Examples Support. In Proceedings of the European Conference on Games Based Learning, Lisbon, Portugal, 6–7 October 2022; Volume 16, pp. 628–636. [Google Scholar] [CrossRef]
- Chrysafiadi, K.; Papadimitriou, S.; Virvou, M. Cognitive-based adaptive scenarios in educational games using fuzzy reasoning. Knowl. Based Syst. 2022, 250, 109111. [Google Scholar] [CrossRef]
- Hooshyar, D.; Malva, L.; Yang, Y.; Pedaste, M.; Wang, M.; Lim, H. An adaptive educational computer game: Effects on students’ knowledge and learning attitude in computational thinking. Comput. Hum. Behav. 2020, 114, 106575. [Google Scholar] [CrossRef]
- Troussas, C.; Krouska, A.; Sgouropoulou, C. Collaboration and fuzzy-modeled personalization for mobile game-based learning in higher education. Comput. Educ. 2019, 144, 103698. [Google Scholar] [CrossRef]
- Chrysafiadi, K.; Papadimitriou, S.; Virvou, M. Fuzzy states for dynamic adaptation of the plot of an educational game in relation to the learner’s progress. In Proceedings of the 2020 11th International Conference on Information, Intelligence, Systems and Applications, Piraeus, Greece, 15–17 July 2020; pp. 1–7. [Google Scholar] [CrossRef]
- Papadimitriou, S.; Chrysafiadi, K.; Virvou, M. FuzzEG: Fuzzy logic for adaptive scenarios in an educational adventure game. Multimed Tools Appl. 2019, 78, 32023–32053. [Google Scholar] [CrossRef]
- Papadimitriou, S.; Virvou, M. Adaptivity in scenarios in an educational adventure game. In Proceedings of the 2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), Larnaca, Cyprus, 27–30 August 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Hooshyar, D.; Ahmad, R.B.; Wang, M.; Yousefi, M.; Fathi, M.; Lim, H. Development and Evaluation of a Game-Based Bayesian Intelligent Tutoring System for Teaching Programming. J. Educ. Comput. Res. 2017, 56, 775–801. [Google Scholar] [CrossRef]
- Lindberg, R.S.N.; Hasanov, A.; Laine, T.H. Improving Play and Learning Style Adaptation in a Programming Education Game. In Proceedings of the 9th International Conference on Computer Supported Education, Porto, Portugal, 21–23 April 2017; pp. 450–457. [Google Scholar] [CrossRef]
- Lindberg, R.S.N.; Laine, T.H. Formative evaluation of an adaptive game for engaging learners of programming concepts in K-12. Int. J. Serious Games 2018, 5, 3–24. [Google Scholar] [CrossRef]
- Hicks, A.; Peddycord, B.; Barnes, T. Building Games to Learn from Their Players: Generating Hints in a Serious Game. In Proceedings of the Intelligent Tutoring Systems: 12th International Conference, ITS 2014, Honolulu, HI, USA, 5–9 June 2014; pp. 312–317. [Google Scholar] [CrossRef]
- Hicks, D.; Dong, Y.; Zhi, R.; Cateté, V.; Barnes, T. BOTS: Selecting Next-Steps from Player Traces in a Puzzle Game. In Proceedings of the 8th International Conference on Educational Data Mining, Madrid, Spain, 26–29 June 2015; Available online: https://ceur-ws.org/Vol-1446/GEDM_2015_Submission_10.pdf (accessed on 5 December 2022).
- Soflano, M.; Connolly, T.M.; Hainey, T. An application of adaptive games-based learning based on learning style to teach SQL. Comput. Educ. 2015, 86, 192–211. [Google Scholar] [CrossRef]
- Min, W.; Mott, B.; Lester, J.C. Adaptive Scaffolding in an Intelligent Game-Based Learning Environment for Computer Science. In Proceedings of the Second Workshop on AI-Supported Education for Computer Science, Honolulu, HI, USA, 5 June 2014; pp. 41–50. [Google Scholar]
- Zadeh, L.A. Is there a need for fuzzy logic? Inf. Sci. 2008, 178, 2751–2779. [Google Scholar] [CrossRef]
- Graf, S.; Viola, S.R.; Leo, T.; Kinshuk. In-Depth Analysis of the Felder-Silverman Learning Style Dimensions. J. Res. Technol. Educ. 2007, 40, 79–93. [Google Scholar] [CrossRef]
- Stamper, J.; Barnes, T.; Lehmann, L.; Croy, M. The Hint Factory: Automatic Generation of Contextualized Help for Existing Computer Aided Instruction. In Proceedings of the 9th International Conference on Intelligent Tutoring Systems Young Researchers Track, Raleigh, NC, USA, 29 June–2 July 2016; p. 9. [Google Scholar]
- Heckerman, D. A Tutorial on Learning with Bayesian Networks. arXiv 2022, arXiv:2002.00269. [Google Scholar] [CrossRef]
- Honey, P.; Mumford, A. The Learning Styles Helper’s Guide; Peter Honey Publications: Maidenhead, UK, 2000. [Google Scholar]
- Bartle, R. Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs. J. MUD Res. 1996, 1, 28. [Google Scholar]
- Miljanovic, M.A.; Bradbury, J.S. A Review of Serious Games for Programming. In Proceedings of the Serious Games: 4th Joint International Conference, JCSG 2018, Darmstadt, Germany, 7–8 November 2018; pp. 204–216. [Google Scholar] [CrossRef]
- Streicher, A.; Smeddinck, J.D. Personalized and Adaptive Serious Games. In Proceedings of the Entertainment Computing and Serious Games: International GI-Dagstuhl Seminar 15283, Dagstuhl Castle, Germany, 5–10 July 2015; pp. 332–377. [Google Scholar] [CrossRef]
- Molnar, A.; Kostkova, P. On Effective Integration of Educational Content in Serious Games: Text vs. Game Mechanics. In Proceedings of the 2013 IEEE 13th International Conference on Advanced Learning Technologies, Beijing, China, 15–18 July 2013; pp. 299–303. [Google Scholar] [CrossRef]
- Hooshyar, D.; Lee, C.; Lim, H. A survey on data-driven approaches in educational games. In Proceedings of the 2016 2nd International Conference on Science in Information Technology, Balikpapan, Indonesia, 26–27 October 2016; pp. 291–295. [Google Scholar] [CrossRef]
- Pelánek, R.; Effenberger, T. Design and analysis of microworlds and puzzles for block-based programming. Comput. Sci. Educ. 2020, 32, 66–104. [Google Scholar] [CrossRef]