Article

Online Personalized Learning Path Recommendation Based on Saltatory Evolution Ant Colony Optimization Algorithm

1 School of Management, Shanghai University, Shanghai 200444, China
2 Songjiang No. 2 Middle School, Shanghai 201600, China
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2792; https://doi.org/10.3390/math11132792
Submission received: 12 April 2023 / Revised: 4 June 2023 / Accepted: 16 June 2023 / Published: 21 June 2023
(This article belongs to the Special Issue Big Data and Complex Networks)

Abstract: To address the slow convergence and low accuracy of the traditional ant colony optimization (ACO) algorithm when applied to online learning path recommendation, this study proposes an online personalized learning path recommendation model (OPLPRM) based on the saltatory evolution ant colony optimization (SEACO) algorithm, to achieve fast, accurate, real-time interactive and high-quality learning path recommendations. An online personalized learning path optimization model with a time window was constructed first. This model not only considers the learning order of the recommended learning resources, but also takes the review behavior pattern of learners into consideration, which improves the quality of the learning path recommendation. This study then constructed a SEACO algorithm suitable for online personalized learning path recommendation from the perspective of optimal learning path prediction: it predicts path pheromone evolution by mining historical data and injecting domain knowledge, extracted from domain experts, on which learning paths achieve the best learning effects, thereby reducing invalid search and improving the speed and accuracy of learning path optimization. A simulation experiment was carried out on the proposed model using the real learner behavior data set from the British “Open University” platform. The results illustrate that the proposed online personalized learning path recommendation model, based on the SEACO algorithm, outperforms the traditional ACO algorithm in the optimization speed and accuracy of the learning path, and can quickly and accurately recommend the most suitable learning path according to the changing needs of learners within a limited time.

1. Introduction

With the accelerated development of online education, the ever-increasing amount of online learning resources has led to the problems of “information overload” and “learning lost” for learners, and it has become difficult to select appropriate, high-quality learning resources and plan an efficient online learning path. At the same time, the homogeneous learning resource recommendation methods of existing online learning platforms cannot meet the growing personalized learning needs of learners. Therefore, in order to improve the effectiveness of online learning and optimize the learning experience, online learning platforms need to provide personalized learning support services for learners. At present, personalized learning resource recommendation and personalized learning path recommendation are the main research topics of online personalized learning support services [1]. Compared with personalized learning resource recommendation, online personalized learning path recommendation can not only help learners find suitable learning resources quickly by mining learning behavior data, but also recommend an effective learning order of learning resources.
This data-driven online personalized learning path recommendation helps compensate for the limited support from teachers in the online learning environment and helps learners achieve their learning goals efficiently. Firstly, personalized learning path recommendations provide learners with appropriate learning resources in a structured form and show them the learning path to achieve their learning goals, so that learners have a clear grasp of the entire learning process and are less intimidated psychologically. Secondly, personalized learning path recommendations help learners see the key learning contents clearly, allocate learning time reasonably, and improve learning efficiency. Thirdly, in the process of guiding learners to accomplish their learning goals, personalized learning path recommendations can also monitor learning progress and remind users of it, optimizing the learning experience. However, most existing studies on learning path recommendation only focus on the applicability of learning path recommendation results, ignoring the interactivity of learning and the dynamic agility of recommendations [2]. Meanwhile, further exploration is still needed to improve the recommendation quality of online personalized learning paths through the mining of effective learning behavior patterns [3].
In order to make up for the shortcomings of existing research, this study proposes an online personalized learning path recommendation model (OPLPRM) based on the saltatory evolution ant colony optimization (SEACO) algorithm, which aims to accurately recommend suitable learning paths based on user characteristics in real time. Beyond that, it can support learners in adjusting the learning process freely and flexibly, and improve the quality of recommendation results. That is, the OPLPRM has fast, accurate, real-time interactive and high-quality features. Therefore, this study first constructs a learner model using data mining to ensure the applicability of personalized learning path recommendation results. This data-driven learner model can portray learner differences in terms of education level, learning load and expected learning volume, and track their changes dynamically. Secondly, this study constructs an online personalized learning path optimization model with a time window to further refine the recommended personalized learning paths and make them more applicable to the actual learning scenarios of online learners. The optimization model not only considers the learning order of learning resources, but also integrates the review behavior pattern of learners and supports users in choosing a suitable learning path according to the expected performance and corresponding time commitment. Thirdly, this study constructs a SEACO algorithm for online personalized learning path recommendation to achieve dynamic, real-time learning path optimization and recommendation. Finally, a simulation experiment was carried out on the proposed OPLPRM using the real user learning behavior data set from the British “Open University” platform. Compared with the classical swarm intelligence particle swarm optimization (PSO) algorithm [4] and the new metaheuristic dwarf mongoose optimization (DMO) algorithm [5], the SEACO algorithm proposed in this study significantly improves the speed and accuracy of learning path optimization. Furthermore, the experimental results show that the OPLPRM proposed in this study can significantly improve the speed and quality of personalized learning path recommendations, and the recommendation results help improve the learning efficiency of learners.
The rest of the study is organized as follows. Related work on online personalized learning path recommendation is presented in Section 2. The online personalized learning path optimization model with a time window is presented in Section 3. The personalized learning path network considering the review behavior is described in Section 4. The OPLPRM based on the SEACO algorithm is introduced in Section 5. The experimental results and analysis are provided in Section 6. Finally, Section 7 concludes the study.

2. Related Work

Following the development of online personalized learning resource recommendation, many researchers have proposed dynamic learning path recommendations oriented by learning objectives to further improve the personalized service capability of online learning platforms. This kind of recommendation can not only help online learners find suitable learning resources in the learning process, but also establish an effective learning sequence for learners [6]. It differs from online personalized learning resource recommendation mainly in the following aspects. Firstly, each recommended learning resource in the learning path is coherent with the learning resources recommended before and after it. Secondly, the selection of learning resources in the learning path is not limited by the distance or time of arrival from another learning resource, but by the user’s prerequisites and ability gain [7]. Furthermore, the learning path represents a learning process that follows both the structural relationships between learning resources and the laws of learning behavior. Since dynamic learning path recommendations can better assist learners in carrying out personalized online learning and alleviate the problems of “information overload” and “learning lost” compared with disordered learning resource recommendations, such learning support services have received much attention. Currently, there are two main approaches to dynamic learning path recommendation: one is based on sequential pattern mining and the other is based on combinatorial optimization.

2.1. Learning Path Recommendations Based on Sequential Pattern Mining

Sequential pattern mining refers to mining frequent subsequences from a sequence database for recommendation, and it is often applied to find sequential relationships between data elements in sequence databases. In personalized learning path recommendation, sequential pattern mining is regularly integrated with collaborative filtering techniques [8] to extract frequent learning sequences from learning behavior data, in order to recommend them to similar learners [9]. The AprioriAll algorithm and the GSP algorithm are the most widely used sequential pattern algorithms. However, these algorithms require several scans of the database, generate a large number of candidate sets, and have relatively low operating efficiency. This drawback is especially prominent when the support threshold is small and the frequent sequential patterns are long. In response, some researchers have proposed the FreeSpan and PrefixSpan algorithms for mining sequential patterns [10]. Such algorithms reduce the search space and improve performance by constructing projection databases. However, generating a large number of projection databases implies a large amount of time consumption, and when the volume of sequence data becomes huge, the computation speed decreases. In conclusion, the sequential pattern mining approach is difficult to apply to the increasingly large and complex online learning behavior databases, and only mining the sequential patterns most frequently selected by historical users cannot meet the diverse needs of online learners.

2.2. Learning Path Recommendation Based on Combinatorial Optimization

Combinatorial optimization-based learning path recommendation abstracts the personalized learning path recommendation problem as a combinatorial optimization problem and then searches a learning path network for the personalized learning path that best fits the user’s characteristics or best meets the user’s needs. The learning path network is a graph of nodes describing the structural relationships of learning resources and the laws of learning behaviors, covering all possible learning paths to achieve the learning goals [11]. The optimization algorithm is used to select one learning path from the personalized learning path network and dynamically recommend it to the user based on the characteristics of the learner. This recommendation method is more flexible and scalable than sequential pattern mining, which provides a new possibility for building an intelligent, self-adaptive OPLPRM. However, finding the most suitable and efficient learning paths from massive learning resources is an NP-hard combinatorial optimization problem [12]. Fortunately, there has been much research in this regard, and evolutionary algorithms have become an effective method for finding near-optimal personalized learning paths in the complex and changing online learning environment by virtue of self-organization, self-adaptation and self-learning. Currently, the ant colony optimization (ACO) [13], genetic algorithm (GA) [14] and particle swarm optimization (PSO) [8] algorithms have been applied to find personalized optimal learning paths for learners on e-learning platforms, and have achieved good experimental results. For example, Niknam et al. proposed a bionic intelligent learning path recommendation system based on meaningful learning theory and the ACO algorithm, which can find appropriate learning paths for learners and improve them continuously and dynamically according to learners’ needs [15]. Benmesbah et al. combined the genetic algorithm and the ACO algorithm to propose a two-level self-adaptive learning path recommendation model, adding social network analysis to assign a specific learning pace to learners [16]. Son et al. proposed a multi-objective optimization model for learning path finding based on a MOOC platform to take into account the diverse needs of users, and used a meta-heuristic algorithm to quickly find near-optimal learning paths for learners [17].
As learners’ online personalized learning needs continue to escalate, a good online personalized learning support service should not only proactively provide learners with personalized learning guidance, but also support users in choosing and adjusting the learning process freely and flexibly [18]. Accordingly, OPLPRMs are required to respond to learners’ individual needs in real time. The ACO algorithm has been proven to be more efficient than other evolutionary algorithms for personalized learning path recommendation based on combinatorial optimization [19,20], and its recommendation results are highly adaptive [21,22]. However, the ACO algorithm generally suffers from slow convergence and low solution accuracy, which make it difficult to recommend personalized learning paths for learners accurately and in real time. Therefore, it is necessary to build a new, fast ACO algorithm that can quickly and accurately recommend personalized learning paths for target learners in real time from the explosively growing pool of learning resources. Such an algorithm can improve the personalized service capability of online learning platforms, enhance the perceived agility and ease of use of the platform, and optimize the product experience of learners. In our previous research [23], the SEACO algorithm was initially proposed. However, that research only used the SEACO algorithm to solve the traveling salesman problem, and only used the domain knowledge of shortest path discrimination. Building on our previous research, this study further proposes a SEACO algorithm suitable for online personalized learning path recommendation by using the domain knowledge of optimal learning path discrimination. Furthermore, this study introduces a new online personalized learning path recommendation method with a time window to improve recommendation quality.

3. Online Personalized Learning Path Recommendation Problem with a Time Window

Online personalized learning path recommendation refers to recommending suitable learning resource sequences based on individual characteristics of learners, to help them achieve their learning goals efficiently. The problem can be abstracted as a combinatorial optimization problem of online learning resources, with the goal of maximizing the satisfaction of learners’ needs. In order to better solve the online personalized learning path recommendation problem from the perspective of combinatorial optimization, this study first gives a description of the problem and then constructs a new online personalized learning path optimization model.

3.1. Personalized Learning Path Recommendation Problem Description

An online learning path refers to the learning sequence that learners arrange over online learning resources in order to achieve their learning goals. It includes both the directional linking relationships between learning resources and the learning behavior pattern of learners. The directional linking relationships between learning resources can be represented by a directed weighted network composed of learning resources, where learning resources are network nodes, links between learning resources (i.e., individual learning paths) are connected edges, and the weights of the connected edges represent learning gains, which are expressed by academic performance. In this study, G represents the directed network of learning resources, V means the set of nodes, E is the set of linked edges, and S represents the set of linked edge weights.
Learning behavior patterns refer to the effective laws of learning behaviors that learners adopt in the learning process to achieve their learning goals, often expressed in the way learning resources are organized. From the real learning behavior data of online learners, this study finds that learners do not complete learning resources only once, but repeatedly study them during the learning process in order to achieve their learning goals. This kind of review behavior can help learners better establish knowledge links and improve learning efficiency, but it is ignored by most existing research on learning path mining. Therefore, this study proposes a personalized learning path recommendation that considers the review behavior, aiming to further improve the quality of personalized learning path recommendations. In order to clearly represent the review behavior in the learning resource network, this study adds the dimension of the number of times a learning resource is learned, and the set of maximum learning times of the learning resources is denoted by $K$.
The learning paths of three learners (i.e., Learner 1, Learner 2 and Learner 3), represented by the learning resource directed network, are given in Figure 1, where $a$ to $e$ denote the learning resources and the subscripts of $a$ to $e$ denote the order in which the learning resources are learned. $S_{a_1 b}^{1}$ denotes the weight of learning path $(a_1, b)$ for Learner 1, which is equal to the academic performance of Learner 1. $S_{a_1 b}$ denotes the comprehensive weight of learning path $(a_1, b)$, which is calculated by the similarity analysis of the learner model introduced in detail in Section 4.
Each learning path in Figure 1 is described as follows.
Learner 1: performs only a single learning of the learning resources ($a$ to $e$). The learning path is represented as $\langle a_1, b_1, c_1, d_1, e_1 \rangle$ with a path length of 4 units, and the set of learning path weights is $(S_{a_1 b}^{1}, S_{b_1 c}^{1}, S_{c_1 d}^{1}, S_{d_1 e}^{1})$.
Learner 2: repeatedly learns resources $b$ and $c$. The learning path is denoted as $\langle a_1, b_1, c_1, b_2, c_2, d_1, e_1 \rangle$ with a path length of 6 units, and the set of learning path weights is $(S_{a_1 b}^{2}, S_{b_1 c}^{2}, S_{c_1 b}^{2}, S_{b_2 c}^{2}, S_{c_2 d}^{2}, S_{d_1 e}^{2})$.
Learner 3: repeatedly learns resources $a$, $b$ and $c$. The learning path is denoted as $\langle a_1, b_1, c_1, a_2, b_2, c_2, d_1, e_1 \rangle$ with a path length of 7 units, and the set of learning path weights is $(S_{a_1 b}^{3}, S_{b_1 c}^{3}, S_{c_1 a}^{3}, S_{a_2 b}^{3}, S_{b_2 c}^{3}, S_{c_2 d}^{3}, S_{d_1 e}^{3})$.
Learning path network: the three learners’ learning paths are integrated, and the set of comprehensive learning path weights is $(S_{a_1 b}, S_{a_2 b}, S_{b_1 c}, S_{b_2 c}, S_{c_1 a}, S_{c_1 b}, S_{c_1 d}, S_{c_2 d}, S_{d_1 e})$.
The online personalized learning path recommendation problem is to find and recommend the learning path that best meets the needs of the learner from the learning path network, based on the individual characteristics of the learner.
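Purely as an illustration (not code from the paper), the following Python sketch shows how the Figure 1 learning sequences could be turned into such a directed network whose nodes are (resource, k) pairs; all variable names are hypothetical.

from collections import defaultdict

def path_to_edges(path):
    # Convert a learning sequence into one-step paths ((resource, k), next_resource),
    # where k counts how many times `resource` has been completed so far.
    counts = defaultdict(int)
    edges = []
    for current, nxt in zip(path, path[1:]):
        counts[current] += 1
        edges.append(((current, counts[current]), nxt))
    return edges

# Learning sequences of the three learners in Figure 1.
learner_paths = {
    1: ["a", "b", "c", "d", "e"],
    2: ["a", "b", "c", "b", "c", "d", "e"],
    3: ["a", "b", "c", "a", "b", "c", "d", "e"],
}

# Integrate all learners' edges into one learning path network.
network = defaultdict(set)          # (resource, k) -> set of possible next resources
for learner, path in learner_paths.items():
    for start, nxt in path_to_edges(path):
        network[start].add(nxt)

for start in sorted(network):
    print(start, "->", sorted(network[start]))

Running this sketch yields exactly the nine one-step paths whose comprehensive weights are listed above for the integrated network.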

3.2. Personalized Learning Path Optimization Model with a Time Window

This section aims to construct a fast, accurate, real-time interactive and high-quality OPLPRM to support an online learning platform in providing high-quality personalized services. Therefore, in order to improve the efficiency of learning path optimization and recommend personalized learning paths accurately in real time, this study adds a running time window limit to the online personalized learning path optimization model. The online learning platform administrator can set the maximum optimization time of the learning path $T_{max}$ through the time window, according to the requirements of system operation efficiency. Moreover, the model allows learners to customize the set of learning resources they plan to learn, including the learning scope $V$, the set of maximum times that the learning resources can be learned $K$, the demand weight of expected academic performance $h_s$ and the sensitive weight of time cost $h_t$, so that the model can meet learners’ needs dynamically and adjustably.
In terms of learning path optimization goals, this study sets the objective function to maximize learning efficiency, helping learners find the most efficient learning path from the vast amount of learning resources and complete their learning goals quickly. The optimization goal considers both learning achievement and time cost. If the demand weight of expected academic performance $h_s$ is greater than the sensitive weight of time cost $h_t$, the learner is more concerned with achieving better learning results; if $h_t$ is larger than $h_s$, the learner is more interested in saving learning time.
In terms of learning path optimization constraints, this study makes six main constraints, as follows.
Constraint 1: each learning resource within the learning scope is studied at least once.
Constraint 2: learning paths are directional in nature.
Constraint 3: the actual optimization time of the learning path is less than or equal to the maximum optimization time.
Constraint 4: the number of times a learning resource is actually learned is less than or equal to the maximum number of times.
Constraint 5: the sum of demand weight of expected academic performance and the sensitive weight of time cost is equal to 1.
Constraint 6: the optimal learning path does not exceed the learning scope.
According to the above optimization objective and constraints, the mathematical expression of the online learning path optimization model based on a time window is shown in Formula (1).
$$
\begin{aligned}
E = \max \ & \left[ h_s \sum_{i=1}^{n+1} \sum_{j=1}^{n+1} \sum_{k=1}^{K} S_{i_k j} X_{i_k j} - h_t \left( \sum_{i=1}^{n+1} \sum_{j=1}^{n+1} \sum_{k=1}^{K} X_{i_k j} \right)^{2} \right] \\
\text{s.t.} \ & \sum_{i=1}^{n+1} X_{i_k j} = 1, \quad j \in V,\ i \neq j,\ k \geq 1 \\
& \sum_{j=1}^{n+1} X_{i_k j} = 1, \quad i \in V,\ i \neq j,\ k \geq 1 \\
& S_{i_k j} \neq S_{j_k i} \\
& Z_t \leq T_{max} \\
& k \leq K \\
& h_s + h_t = 1 \\
& n \leq N \\
& X_{i_k j} \in \{0, 1\}, \quad i, j \in V
\end{aligned} \quad (1)
$$
where $N$ denotes the total number of e-learning resources and $n$ denotes the number of learning resources within the learning scope; $S_{i_k j}$ represents the path weight of learning path $(i_k, j)$; $V$ denotes the set of indexes of learning resources within the learning scope; $X_{i_k j}$ is a 0–1 variable indicating whether learning path $(i_k, j)$ is adopted, where a value of 1 indicates that the path is adopted and a value of 0 indicates that it is not; $k$ means that the learning resource has been learned $k$ times; and $Z_t$ denotes the actual optimization time of the learning path.
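For illustration, the following Python sketch (an assumption-based reading of Formula (1), not the published implementation) evaluates the objective $E$ for one candidate learning path represented as a list of one-step paths; the weights and example numbers are hypothetical.

def objective(path_steps, weights, h_s=0.5, h_t=0.5):
    # E = h_s * total comprehensive weight - h_t * (path length)^2
    assert abs(h_s + h_t - 1.0) < 1e-9          # Constraint 5: h_s + h_t = 1
    total_weight = sum(weights[step] for step in path_steps)
    path_length = len(path_steps)               # number of adopted one-step paths
    return h_s * total_weight - h_t * path_length ** 2

# Hypothetical example: a 4-step path with illustrative comprehensive weights.
steps = [(("a", 1), "b"), (("b", 1), "c"), (("c", 1), "d"), (("d", 1), "e")]
weights = {steps[0]: 72.0, steps[1]: 68.0, steps[2]: 80.0, steps[3]: 75.0}
print(objective(steps, weights))                # 0.5 * 295 - 0.5 * 16 = 139.5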

3.3. The Principles That Affect Learning Effects

The learner model is a portrayal of learners’ personalized characteristics and an important basis for the personalized recommendation of online learning paths. Previous studies on the extraction of learners’ personalization characteristics mainly pay attention to cognitive style and knowledge level [24]. However, these two aspects of learner characteristics are not directly available and require additional time for the learner to take a pre-study test. The application of such pre-learning tests in current online learning platforms is very limited, and learners do not always participate in the tests. Furthermore, the test results are status data, which cannot track the changes in personalized characteristics that occur during the learners’ learning process. In order to remedy these shortcomings, this study extracts the following principles that affect learning effects from domain experts:
  • The expected learning volume has a significant positive effect on learning effect, and the larger the expected learning volume, the better the learning effect;
  • The level of learners’ education has a significant positive effect on learning effect, and the higher the education, the better the learning effect;
  • The learning load has a significant negative effect on academic performance, and the greater the learning load, the worse the learning effect.
Therefore, this study constructs a vector of learner characteristics in terms of the learner’s expected learning volume, education level and learning load to portray the personalized characteristics of learners, expressed as $\langle W_1, W_2, W_3 \rangle$. It is used to recommend appropriate personalized learning paths for learners.

4. Personalized Learning Path Network with Review Behavior

As can be seen from Section 3.1, the personalized learning path network is an integration of the learning paths of previous learners. This section will represent the personalized learning path network in the form of a matrix to lay the foundation for constructing the learning path recommendation algorithm, and clarify how to determine the comprehensive weights of paths in the personalized learning path network based on the learner’s personalized feature data.

4.1. Personalized Learning Path Matrix

This study uses an asymmetric matrix $U$ to represent the personalized learning path network. The row elements of the matrix $U$ represent the learning resource that the learner has just completed (i.e., the starting point of the next learning path) and are denoted by $i_k$ (where $i$ denotes the label of the learning resource in the learning scope $V$ and $k$ indicates the $k$-th time the learning resource was completed by the learner). The column elements of the matrix $U$ represent the learning resources that the learner is about to learn (i.e., the end point of the next learning path), denoted by $j$ (where $j$ denotes the label of the learning resource in the learning scope $V$). The elements of the matrix $U$ denote one-step learning paths, represented by $(i_k, j)$.
Before determining the matrix $U$, this study needs to specify the learner’s learning scope $V$ and the set $K$ of maximum learning times of learning resources under the review behavior. As shown in Section 3.2, both variables can be user-defined and adjusted in real time. However, to reduce the burden of learner customization, this study provides the following decision support for learners through the mining of historical data.
(a): Provide a popularity ranking of learning resources to support learners in determining the learning scope V .
The popularity of a learning resource refers to the proportion of learners who chose that learning resource among the group who achieved learning results. For learners whose learning interests are unclear or who are completely unfamiliar with the learning resources, the popularity ranking of the learning resources is valuable reference information. It can effectively reduce the confusion of learners in determining the learning scope and enhance the interactive experience.
(b): Provide a selection interval and default values for the maximum learning times, to support learners in customizing the set $K$ of maximum learning times for each learning resource.
The maximum learning times of each learning resource is defined as the maximum number of times that resource was learned by the group that passed the test; the default value is set to the average number of times the resource was learned by that group.
After determining the learning scope $V$ and the set $K$ of maximum learning times of learning resources, the dimensionality of the matrix $U$ can be determined as $\left(\sum_{i \in V} k_i\right) \times |V|$, where $k_i$ denotes the maximum learning times of learning resource $i$.
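A minimal Python sketch of this dimensionality calculation is shown below; the resource labels and maximum learning times are hypothetical, not taken from the experiments.

def path_matrix_shape(max_times):
    # max_times maps each resource label in the learning scope V to its k_i.
    n_rows = sum(max_times.values())        # sum over i in V of k_i
    n_cols = len(max_times)                 # |V|
    return n_rows, n_cols

# Hypothetical learning scope of five resources with user-customized maximum times.
K = {"a": 2, "b": 2, "c": 2, "d": 1, "e": 1}
print(path_matrix_shape(K))                 # (8, 5)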

4.2. Personalized Learning Path Weights

Based on the personalized learning path matrix U , the construction of the personalized learning path network also requires calculation of the comprehensive weight of each path.
The comprehensive weight of the learning path is obtained by the similarity analysis of the learner model. The similarity $R$ between learners is calculated by the Pearson correlation coefficient [25], as shown in Formula (2), where $R(g, l)$ denotes the similarity between learner $g$ and learner $l$, $W_i^g$ represents the component of learner $g$ on feature $W_i$, and $\overline{W^g}$ denotes the average value over all feature components of learner $g$.
$$R(g, l) = \frac{\sum_{i=1}^{3}\left(W_i^g - \overline{W^g}\right)\left(W_i^l - \overline{W^l}\right)}{\sqrt{\sum_{i=1}^{3}\left(W_i^g - \overline{W^g}\right)^2}\,\sqrt{\sum_{i=1}^{3}\left(W_i^l - \overline{W^l}\right)^2}} \quad (2)$$
For a given target learner $g$, the comprehensive weight of a learning path is the similarity-weighted average of the academic performance of the historical learners who adopted that path, as shown in Formula (3), where $S_{i_k j}^{g}$ denotes the comprehensive weight of learning path $(i_k, j)$ for the target learner $g$, and $L_{i_k j}$ denotes the set of all historical learners who have adopted path $(i_k, j)$.
$$S_{i_k j}^{g} = \frac{\sum_{l \in L_{i_k j}} R(g, l)\, S_{i_k j}^{l}}{\sum_{l \in L_{i_k j}} R(g, l)} \quad (3)$$
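The following Python sketch illustrates Formulas (2) and (3) under the definitions above; it is not the authors’ code, and the feature vectors and scores are hypothetical.

import math

def pearson(w_g, w_l):
    # Formula (2): Pearson correlation over the 3-dimensional feature vector <W1, W2, W3>.
    mean_g = sum(w_g) / len(w_g)
    mean_l = sum(w_l) / len(w_l)
    num = sum((a - mean_g) * (b - mean_l) for a, b in zip(w_g, w_l))
    den = math.sqrt(sum((a - mean_g) ** 2 for a in w_g)) * \
          math.sqrt(sum((b - mean_l) ** 2 for b in w_l))
    return num / den if den else 0.0

def comprehensive_weight(target, history):
    # Formula (3): history is a list of (feature_vector, academic_performance)
    # for the historical learners who adopted the path (i_k, j).
    sims = [pearson(target, w) for w, _ in history]
    denom = sum(sims)
    return sum(r * s for r, (_, s) in zip(sims, history)) / denom if denom else 0.0

# Hypothetical feature vectors (expected learning volume, education level, learning load).
target_learner = (30.0, 3.0, 0.4)
historical = [((28.0, 3.0, 0.5), 75.0), ((40.0, 4.0, 0.2), 88.0)]
print(comprehensive_weight(target_learner, historical))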

5. Personalized Learning Path Recommendation Model Based on SEACO

This section aims to build a fast recommendation algorithm for online personalized learning paths, which can quickly and accurately recommend near-optimal learning paths for learners to achieve their learning goals based on the personalized learning path network, and meet their personalized needs dynamically and in real time. According to the literature review, the ACO algorithm has been proven to perform well in solving the personalized learning path recommendation problem, but it suffers from slow convergence and a tendency to fall into local optima, so it cannot optimize the learning path quickly. Therefore, in order to improve the speed and accuracy of learning path recommendation, this study improves the traditional ACO algorithm from the perspective of optimal learning path prediction and proposes a SEACO algorithm for online personalized learning path recommendation.

5.1. Personalized Learning Path Recommendations Based on Traditional Ant Colony Optimization Algorithm

The operation mechanism of the traditional ACO algorithm [26] mimics the process by which an ant colony finds the shortest foraging path through information exchange. Information between ants is exchanged via path pheromones. Specifically, the shorter the path, the more pheromone the ants leave behind, and the longer the path, the less pheromone they leave behind [27]. Based on the colony’s experience, later ants usually choose paths with stronger pheromones and thus find the shortest foraging paths.
In order to abstract the personalized learning path optimization problem as an ant colony foraging process, and solve it using the ACO algorithm, the following assumptions are made in this study.
$M$ denotes the number of ants. $\rho$ is the path pheromone volatility factor, taking values between 0 and 1. $S$ represents the personalized comprehensive weight matrix of learning paths, where $S_{i_k j}$ is the comprehensive weight of learning path $(i_k, j)$. $\Gamma(t)$ denotes the path pheromone matrix at iteration $t$, where $\tau_{i_k j}(t)$ is the pheromone of learning path $(i_k, j)$ at iteration $t$.
In the personalized learning path optimization problem, the heuristic function $\eta_{i_k j}$ is the comprehensive weight of learning path $(i_k, j)$ divided by 100, as shown in Formula (4).
$$\eta_{i_k j} = \frac{S_{i_k j}}{100} \quad (4)$$
At the initial moment, the pheromone on each learning path is the same, equal to the initial concentration $c$, and the $M$ ants randomly choose their initial locations.
At iteration $t$, the ant located at learning resource $i$ calculates the selection probability of each learning path according to the roulette wheel method [28] based on Formula (5), where $P_{i_k j}(t)$ denotes the selection probability of learning path $(i_k, j)$ at iteration $t$, $\alpha$ is the pheromone importance factor, $\beta$ is the influence factor of the heuristic information, and $N_{i_k}$ represents the set of next learning resources that an ant can choose after learning resource $i$ has been learned $k$ times.
$$P_{i_k j}(t) = \frac{\tau_{i_k j}^{\alpha}(t)\,\eta_{i_k j}^{\beta}}{\sum_{j \in N_{i_k}} \tau_{i_k j}^{\alpha}(t)\,\eta_{i_k j}^{\beta}}, \quad \text{if } j \in N_{i_k} \quad (5)$$
The pheromone update strategy is shown in Formula (6), where $\Delta\tau_{i_k j}^{m}$ denotes the pheromone left by ant $m$ on learning path $(i_k, j)$.
$$\tau_{i_k j}(t+1) = \rho\,\tau_{i_k j}(t) + \sum_{m=1}^{M} \Delta\tau_{i_k j}^{m} \quad (6)$$
$\Delta\tau_{i_k j}^{m}$ is calculated by Formula (7), where $Q$ is the pheromone constant and $E_m$ denotes the objective value of the learning path found by ant $m$ in this iteration, calculated by the objective function of personalized learning path optimization shown in Formula (1) of Section 3.2.
$$\Delta\tau_{i_k j}^{m} = \begin{cases} Q\,E_m, & \text{if path } (i_k, j) \text{ is adopted by ant } m \\ 0, & \text{otherwise} \end{cases} \quad (7)$$
After the pheromone matrix is updated, the ACO algorithm continues with the next iteration until the time specified in the time window is reached. Finally, the ACO algorithm outputs the cumulative optimal learning path for the historical iterations.
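Purely as an illustration (not the authors’ implementation), the following Python sketch shows one step of this process: roulette-wheel path selection per Formula (5) and the pheromone update of Formulas (6) and (7) as reconstructed above. The dictionary-based data structures and parameter defaults are assumptions.

import random

def select_next(node, candidates, tau, eta, alpha=1.0, beta=5.0):
    # Roulette-wheel choice of the next resource j from N_{i_k}, per Formula (5).
    scores = [(j, (tau[(node, j)] ** alpha) * (eta[(node, j)] ** beta)) for j in candidates]
    total = sum(s for _, s in scores)
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, s in scores:
        acc += s
        if acc >= r:
            return j
    return scores[-1][0]

def update_pheromone(tau, ant_solutions, rho=0.1, Q=0.1):
    # Formula (6): apply volatility, then deposit Q * E_m on every path adopted by ant m
    # (Formula (7), as reconstructed above).
    new_tau = {edge: rho * value for edge, value in tau.items()}
    for steps, objective_value in ant_solutions:   # one (path, E_m) entry per ant
        for edge in steps:
            new_tau[edge] += Q * objective_value
    return new_tau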

5.2. SEACO Algorithm for Personalized Learning Path Recommendation

The design mechanism of the SEACO algorithm is to predict the evolutionary trend of path pheromones from the historical evolution of the pheromone matrix of the traditional ACO algorithm, and to inject mined domain knowledge of optimal path prediction, in order to reduce invalid search in the early stage of the algorithm and improve the speed and accuracy of path optimization.
Based on the above mechanism, this section constructs the optimal learning path prediction model, as shown in Formulas (8) and (9).
$$V_{i_k j}^{T+1} = a\,\overline{V}_{i_k j}^{T} + (1 - a)\,V_{i_k j}^{T}, \quad T \geq 1 \quad (8)$$
$$\tau'_{i_k j}(t) = \tau_{i_k j}(t) + \sin\!\left(\frac{\pi}{2} \cdot \frac{V_{i_k j}^{T+1} - \min V_{i_k}^{T+1}}{\max V_{i_k}^{T+1} - \min V_{i_k}^{T+1}} \cdot \frac{S_{i_k j} - \min S_{i_k}}{\max S_{i_k} - \min S_{i_k}}\right) \quad (9)$$
where $\tau_{i_k j}(t)$ denotes the pheromone of path $(i_k, j)$ at iteration $t$; $\tau'_{i_k j}(t)$ denotes the predicted pheromone of path $(i_k, j)$ in the optimal pheromone matrix; $V_{i_k j}^{T+1}$ is the predicted average trend value of the solutions containing path $(i_k, j)$ over the previous $T$ iterations, calculated by the exponential smoothing method in Formula (8), where $a$ is the exponential smoothing coefficient, $\overline{V}_{i_k j}^{T}$ is the average value of the solutions of the previous $T$ iterations containing path $(i_k, j)$, and $V_{i_k j}^{T}$ denotes its predicted value; $V_{i_k}^{T+1}$ is a vector whose elements are the predicted trend values of the learning paths starting from $i_k$; $S_{i_k j}$ represents the comprehensive weight of learning path $(i_k, j)$; and $S_{i_k}$ is a vector whose elements are the comprehensive weights of the learning paths starting from $i_k$.
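Purely for illustration, a Python sketch of this prediction step, under the reconstruction of Formulas (8) and (9) given above, might look as follows; the container layouts and variable names are assumptions, not the released code.

import math

def smooth(observed_avg, previous_prediction, a=0.5):
    # Formula (8): predicted trend = a * observed average + (1 - a) * previous prediction.
    return a * observed_avg + (1 - a) * previous_prediction

def normalize(value, values):
    lo, hi = min(values), max(values)
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def predict_pheromone(tau, v_pred, weights, start_node):
    # Formula (9): boost tau for every path leaving start_node i_k, using the
    # normalized predicted trend and the normalized comprehensive weight.
    edges = [edge for edge in tau if edge[0] == start_node]
    if not edges:
        return dict(tau)
    v_values = [v_pred[e] for e in edges]
    s_values = [weights[e] for e in edges]
    new_tau = dict(tau)
    for e in edges:
        boost = math.sin((math.pi / 2)
                         * normalize(v_pred[e], v_values)
                         * normalize(weights[e], s_values))
        new_tau[e] = tau[e] + boost
    return new_tau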
Based on the optimal learning path prediction model, this study constructs a SEACO algorithm applicable to the problem of fast recommendation of online personalized learning paths. The pseudocode of the SEACO algorithm is presented in Algorithm 1, and the flowchart of the SEACO algorithm is shown in Figure 2.
Algorithm 1: Pseudocode of the SEACO algorithm
Initialization:
Initialize the SEACO parameters: the number of ants $M$, the pheromone importance factor $\alpha$, the heuristic information impact factor $\beta$, the pheromone volatility factor $\rho$ and the pheromone constant $Q$.
Set the maximum number of iterations $T_{max}$, the injection moment of the optimal learning path prediction model $T_{ins}$, the learning scope $V$ and the set of maximum learning times of learning resources $K$.
while $T \leq T_{max}$
  Initialize the positions of the $M$ ants.
  Calculate the selection probability of each path: $P_{i_k j}(t) = \dfrac{\tau_{i_k j}^{\alpha}(t)\,\eta_{i_k j}^{\beta}}{\sum_{j \in N_{i_k}} \tau_{i_k j}^{\alpha}(t)\,\eta_{i_k j}^{\beta}}$, if $j \in N_{i_k}$
  Select the next paths for the $M$ ants one by one according to the roulette wheel method.
  Calculate the fitness of the $M$ ants: $fitness = h_s \sum_{i=1}^{n+1}\sum_{j=1}^{n+1}\sum_{k=1}^{K} S_{i_k j} X_{i_k j} - h_t \left(\sum_{i=1}^{n+1}\sum_{j=1}^{n+1}\sum_{k=1}^{K} X_{i_k j}\right)^2$
  Output the maximum fitness as the result of this iteration.
  Update the pheromone value of each learning path: $\tau_{i_k j}(t+1) = \rho\,\tau_{i_k j}(t) + \sum_{m=1}^{M} \Delta\tau_{i_k j}^{m}$,
  where $\Delta\tau_{i_k j}^{m} = Q\,E_m$ if path $(i_k, j)$ is adopted by ant $m$, and 0 otherwise.
  if $T = T_{ins}$
    Calculate the average trend value of each learning path solution $V_{i_k j}^{T+1}$:
      $V_{i_k j}^{T+1} = a\,\overline{V}_{i_k j}^{T} + (1 - a)\,V_{i_k j}^{T}, \quad T \geq 1$
    Predict the pheromone value of each learning path $\tau'_{i_k j}(t)$:
      $\tau'_{i_k j}(t) = \tau_{i_k j}(t) + \sin\!\left(\dfrac{\pi}{2} \cdot \dfrac{V_{i_k j}^{T+1} - \min V_{i_k}^{T+1}}{\max V_{i_k}^{T+1} - \min V_{i_k}^{T+1}} \cdot \dfrac{S_{i_k j} - \min S_{i_k}}{\max S_{i_k} - \min S_{i_k}}\right)$
    Update the pheromone matrix of the traditional ACO algorithm.
  end if
  Update the best solution.
end while
Return the best solution
End

6. Experimental Results and Analysis

6.1. Experimental Data and Design

The learning behavior data set of 338 learners from the British “Open University” platform studying the AAA course (2013J semester) was used for the simulation experiments. This dataset records learners’ demographic characteristics (e.g., age, education level, gender), learning behaviors (including learning resources and learning time) and academic performance data.
This study used random sampling to select 70% of the dataset as the training set and the remaining 30% as the test set. Learning time was estimated using the total length of learners’ learning paths. The learning resources chosen by more than half of the learners who passed the test (academic performance ≥ 60) were selected to form the generic learning scope, which included 76 learning resources.
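The original preprocessing was performed in MATLAB; purely as an illustration of the steps described above, a Python/pandas sketch with hypothetical file and column names might look as follows.

import pandas as pd

learners = pd.read_csv("learners.csv")            # one row per learner, with 'learner_id' and 'score'
interactions = pd.read_csv("interactions.csv")    # columns: learner_id, resource_id

# 70/30 random split of learner records.
train = learners.sample(frac=0.7, random_state=42)
test = learners.drop(train.index)

# Generic learning scope: resources chosen by more than half of the learners who passed (score >= 60).
passed = learners.loc[learners["score"] >= 60, "learner_id"]
passed_clicks = interactions[interactions["learner_id"].isin(passed)]
share = passed_clicks.groupby("resource_id")["learner_id"].nunique() / len(passed)
learning_scope = share[share > 0.5].index.tolist()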
This experiment was conducted on an Intel Xeon processor (3.80 GHz, 6 cores, 40 GB RAM) using MATLAB R2017b.
The parameters of the SEACO algorithm were set as follows: the number of ants $M$ was 100; the pheromone importance factor $\alpha$ was 1; the heuristic information impact factor $\beta$ was 5; the pheromone volatility factor $\rho$ was 0.1; and the pheromone constant $Q$ was 0.1. The parameter settings of the ACO and SEACO algorithms follow [29,30], with appropriate adjustments to make them more suitable for the personalized learning path optimization problem in this research. The maximum number of iterations $T_{max}$ was 200; the set of maximum learning times $K$ was set to the average number of times each learning resource was learned by the tested population; the injection moment of the optimal learning path prediction model $T_{ins}$ (i.e., the $T_{ins}$-th iteration) was 10, 20 or 30; the demand weight of expected academic performance $h_s$ was 0.5; and the sensitive weight of time cost $h_t$ was 0.5.

6.2. Adaptation Analysis of Personalized Learning Path Recommendation Results

The OPLPRM proposed in this study can be used in real-world applications to recommend appropriate learning paths based on each learner’s individual characteristics. To verify the wide applicability of the model in recommending personalized learning paths, the following experiments were conducted in this study.
Using the OPLPRM, near-optimal learning paths were found from the training dataset and recommended to the learners in the test dataset based on their characteristics. Then, the learning effect of the online personalized learning paths recommended by the SEACO algorithm was compared with that of the learners’ actual learning paths in the test dataset, to verify the wide applicability of the recommendation model in meeting learners’ needs and helping learners improve their learning efficiency.
In this experiment, the training data set included 204 learners and the test data set included 87 learners. The near-optimal learning path for the test data set is shown in Figure 3, and its optimization objective value, calculated by Formula (1), is 0.5348.
As shown in Figure 3, the vertical coordinate represents the label of the learning resource in the generic learning scope, the horizontal coordinate denotes the learning step in the learning path, and the line connecting two points represents the directed learning path. The length of the near-optimal learning path is 155, 65% of its learning resources are repeatedly learned, and the maximum number of learning times is 4.
Next, this study applied the near-optimal paths to the 87 learners in the test dataset and calculated, by Formula (3), the estimated academic performance that each learner could achieve by adopting the recommended learning paths, based on each learner’s personalized learning path network; the summary results are shown in Table 1.
As shown in Table 1, the learning paths recommended by the OPLPRM are able to improve the average academic performance of the learners in the test set from 69.57 to 82.89, improving the performance of 96% of learners. Furthermore, the recommended learning paths helped 60.92% of the learners shorten their learning paths and save learning time. In addition, the recommended learning paths improve the learning efficiency of 75% of learners and help them achieve their learning goals more efficiently. In summary, the experimental results show that the OPLPRM proposed in this study can effectively help learners on the online platform allocate their learning time rationally, improve their learning efficiency and better accomplish their learning goals.

6.3. Analysis of SEACO Algorithm Effectiveness

In order to verify the effectiveness of the SEACO algorithm in improving the speed and accuracy of path optimization, the following experiment was designed.
The pheromone matrix of the ACO algorithm was updated by injecting the optimal learning path prediction model when the traditional ACO algorithm reached the 10th, 20th and 30th iterations, respectively. The results of the SEACO algorithm and the traditional ACO algorithm over the same period were compared to verify the effectiveness of the SEACO algorithm in fast optimization of learning paths.
Based on the above experimental design, this study first conducted simulation experiments on the test dataset. In order to observe the short-term improvement after injecting the optimal learning path prediction model, this study analyzed the best objective values over the three iterations following the injection. We compared the average best objective values obtained by running the two algorithms independently ten times, and the results are shown in Table 2.
As shown in Table 2, the short-term improvement of the SEACO algorithm is most obvious when the optimal path prediction model is injected at the 10th and the 20th iterations, saving up to 12 iterations. Moreover, the earlier the optimal path prediction model is injected, the more obvious the improvement.
Next, in order to observe the improvement from long-term operation of the proposed algorithm, this study injected the optimal learning path prediction model at the 20th iteration and compared the best objective value of the SEACO algorithm up to the 200th iteration with that of the traditional ACO algorithm; the comparison of average best objective values over 10 experiments is given in Figure 4.
As shown in Figure 4, after the optimal path prediction model is injected at the 20th iteration, the algorithm can effectively avoid converging on a local optimal solution too early and improves the optimization speed and quality of the learning path. At the 73rd iteration, the SEACO algorithm already reaches the objective value that the traditional ACO algorithm achieves at the 200th iteration, which is 127 generations ahead of schedule and equivalent to saving 63.5% of the optimization time. In terms of the objective value under the same optimization time, the objective value of the SEACO algorithm at the 200th iteration is 0.536, which is 0.051 higher than that of the traditional ACO algorithm, equivalent to improving the objective value by 10.5%. In summary, the SEACO algorithm proposed in this study can effectively address the slow convergence and low solution accuracy of the traditional ACO algorithm and realize rapid optimization of online personalized learning paths.
In order to further validate the optimization ability of the SEACO algorithm for online personalized learning path recommendation on a larger dataset, this study conducted simulation experiments using all learner data. First, consistent with the above, the optimal learning path prediction model was injected into the traditional ACO algorithm at the 10th, 20th and 30th iterations, respectively; the algorithm then ran for 3 further iterations, and its best objective value was compared with that of the traditional ACO algorithm at the same point to observe the short-term improvement. In addition, the SEACO algorithm was further compared with classical swarm-based heuristic algorithms and new metaheuristic algorithms. Among the classical swarm-based heuristics, particle swarm optimization (PSO) performs well in solving personalized learning path optimization problems [31]. The new metaheuristic algorithms include the dwarf mongoose optimization (DMO) algorithm [5], the improved chaotic grey wolf optimization (ICGWO) algorithm [32], the marine predator algorithm (MPA) [33], etc. Because these metaheuristic algorithms have similar operating principles, we chose the DMO algorithm as a representative of the new metaheuristics for comparison, owing to its performance in solving route optimization problems and the availability of its source code. Table 3 briefly summarizes the descriptions and parameter settings of PSO and DMO.
We compared the average best objective values obtained by running the four algorithms independently ten times each, and the results of the comparative experiment are shown in Table 4 and Table 5.
From Table 4, it can be seen that the SEACO algorithm achieves the highest average best objective value at the 13th, 23rd and 33rd iterations compared with the ACO, PSO and DMO algorithms. From the perspective of time saving, Table 5 shows that injecting the optimal learning path prediction model at the 20th iteration yields the most obvious improvement over ACO, saving up to 13 iterations. Compared with PSO and DMO, SEACO also saves time, up to 22 and 8 iterations, respectively. Moreover, the earlier the optimal learning path prediction model is injected, the more obvious the improvement.
Additionally, we paired the four algorithms in the comparative experiment and used the Wilcoxon rank sum test to determine whether the results of each pair of algorithms differ significantly. All obtained p-values are less than the 5% significance level, indicating statistically significant differences between the algorithms.
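As an illustration of this test procedure (with made-up numbers, not the experimental data), a Python sketch using SciPy might look as follows.

from itertools import combinations
from scipy.stats import ranksums

runs = {                      # ten best objective values per algorithm (hypothetical)
    "SEACO": [0.77, 0.76, 0.78, 0.75, 0.77, 0.76, 0.78, 0.77, 0.76, 0.77],
    "ACO":   [0.68, 0.69, 0.70, 0.68, 0.69, 0.68, 0.70, 0.69, 0.68, 0.69],
    "PSO":   [0.66, 0.65, 0.67, 0.66, 0.65, 0.66, 0.67, 0.66, 0.65, 0.66],
    "DMO":   [0.71, 0.72, 0.70, 0.71, 0.72, 0.71, 0.70, 0.72, 0.71, 0.71],
}

# Pairwise Wilcoxon rank sum tests over the ten independent runs of each algorithm pair.
for a, b in combinations(runs, 2):
    stat, p = ranksums(runs[a], runs[b])
    print(f"{a} vs {b}: p = {p:.4f}")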
Next, in order to analyze the improvement from long-term operation of the algorithm on the whole data set, this study injected the optimal learning path prediction model at the 20th iteration and compared the best objective value of SEACO with that of the traditional ACO algorithm at iteration 200. The comparison of average best objective values over 10 experiments is given in Figure 5.
As can be seen from Figure 5, after the pheromone matrix is updated with the optimal learning path prediction model, the SEACO algorithm can effectively prevent the algorithm from stalling in a local optimal solution at an early stage and improves the quality of optimization. In addition, at the 115th iteration, the SEACO algorithm already reaches the target value that the traditional ACO algorithm achieves at the 200th iteration, which is 85 generations ahead of schedule and equivalent to saving 42.5% of the optimization time. The average best objective value of the SEACO algorithm is 0.769, which is 12.3% higher than that of the traditional ACO algorithm.
In summary, the SEACO algorithm proposed in this study can effectively prevent the traditional ACO algorithm from converging on a local optimal solution too early, improve the speed and quality of learning path optimization, and achieve fast and accurate learning path recommendation.

7. Conclusions and Future Work

7.1. Conclusions

To address the problems of “information overload” and “learning lost” caused by massive learning resources, this study proposes an OPLPRM with fast, accurate, real-time interaction and high-quality features to help learners accomplish their learning goals efficiently, optimize the online personalized learning experience, and improve the personalized service capability and efficiency of online learning platforms. The contribution of this study is reflected in two aspects. First, this study constructs an online personalized learning path optimization model with a time window, which not only considers the sequence relationships between learning resources, but also innovatively considers the review behavior of learners in the learning path optimization. This kind of learning path recommendation considering review behavior is helpful to improve the applicability of learning path recommendation results, and is also more applicable to the actual learning scenarios of learners. Second, in order to quickly and accurately recommend learning paths for learners in real-time interaction scenarios, this study constructs a new SEACO algorithm suitable for online personalized learning path recommendations, from the perspective of optimal learning path prediction. Based on the domain knowledge, an optimal learning path prediction model is constructed to accelerate the convergence of the algorithm and improve the speed and accuracy of learning path optimization by predicting the evolutionary trend of the pheromone matrix and updating the pheromone matrix.
The experiment was conducted using the real online learning behavior dataset of learners on the British “Open University” platform. The experimental results show that (1) the OPLPRM proposed in this study has wide applicability, and can help learners allocate their learning time rationally and improve their learning efficiency; and (2) the OPLPRM based on the SEACO algorithm can effectively speed up the optimization of optimal learning paths, while improving the optimization quality. These provide a feasible solution for improving the efficiency of using online personalized learning platforms and realizing dynamic real-time online learning path optimization recommendation services.

7.2. Future Work

However, this study has the following shortcomings and should be improved in the future.
Firstly, this study only focuses on building learner models from the perspective of individual learners, without considering users’ online social factors. In the future, we plan to improve the applicability of personalized learning path recommendation results from the perspective of online learning groups by mining users’ online social data. By incorporating online social factors, such as group interactions, collaboration and peer influence, we can gain a deeper understanding of how learners’ social networks impact their learning experiences. This will enable us to develop more comprehensive and effective online personalized learning path recommendations that take into account the dynamics of online communities and facilitate collaborative learning.
Secondly, in the learning process, learners with the same characteristics may manifest different learning abilities. This study has not deeply explored the learning abilities of online users. In the future, we plan to improve the quality of personalized learning path recommendations by analyzing users’ learning abilities and other implicit attributes, so that online personalized learning path recommendation services can be more competitive. By delving into learners’ individual learning abilities, cognitive styles, and preferences, we can create more accurate learner models that capture their unique learning needs. This will allow us to tailor the learning paths and content to suit their specific strengths and weaknesses, optimizing their learning outcomes. Additionally, incorporating other implicit attributes, such as motivation, self-regulation skills and prior knowledge, will further enhance the personalization of the learning experience, ensuring that learners receive relevant and engaging recommendations that maximize their learning efficiency.

Author Contributions

Conceptualization, S.L. and X.L.; Data curation, X.L.; Formal analysis, H.C.; Funding acquisition, S.L.; Investigation, J.L.; Methodology, X.L.; Project administration, S.L.; Resources, J.L.; Software, H.C.; Supervision, S.L.; Validation, S.L., H.C. and X.L.; Visualization, K.P. and Z.W.; Writing—original draft, X.L.; Writing—review and editing, K.P. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 71871135 and No. 72271155).

Data Availability Statement

The data presented in this study are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Imran, H.; Belghis-Zadeh, M.; Chang, T.; Graf, S. PLORS: A personalized learning object recommender system. Vietnam J. Comput. Sci. 2016, 3, 3–13.
2. Wan, H.; Yu, S. A recommendation system based on an adaptive learning cognitive map model and its effects. Interact. Learn. Environ. 2020, 32, 1821–1839.
3. Tam, V.; Lam, E.Y.; Fung, S. A new framework of concept clustering and learning path optimization to develop the next-generation e-learning systems. J. Comput. Educ. 2014, 1, 335–352.
4. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948.
5. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Dwarf Mongoose Optimization Metaheuristics for Autoregressive Exogenous Model Identification. Mathematics 2022, 10, 3821.
6. Shi, D.; Wang, T.; Xing, H.; Xu, H. A learning path recommendation model based on a multidimensional knowledge graph framework for e-learning. Knowl. Based Syst. 2020, 195, 105618.
7. Rodriguez-Medina, A.E.; Dominguez-Isidro, S.; Ramirez-Martinell, A. A Microlearning path recommendation approach based on ant colony optimization. J. Intell. Fuzzy Syst. 2022, 42, 4699–4708.
8. Tarus, J.K.; Niu, Z.; Yousif, A. A hybrid knowledge-based recommender system for e-learning based on ontology and sequential pattern mining. Future Gener. Comput. Syst. 2017, 72, 37–48.
9. Salehi, M. Application of implicit and explicit attribute based collaborative filtering and BIDE for learning resource recommendation. Data Knowl. Eng. 2013, 87, 130–145.
10. Zhao, J.; Liu, S.; Zhang, J. Personalized Distance Learning System based on Sequence Analysis Algorithm. Int. J. Online Eng. 2015, 11, 33–36.
11. Gao, Y.; Zhai, X.; Andersson, B.; Zeng, P.; Xin, T. Developing a learning progression of buoyancy to model conceptual change: A latent class and rule space model analysis. Res. Sci. Educ. 2020, 50, 1369–1388.
12. Al-Muhaideb, S.; Menai, M.E.B. Evolutionary computation approaches to the Curriculum Sequencing problem. Nat. Comput. 2011, 10, 891–920.
13. Vanitha, V.; Krishnan, P. A modified ant colony algorithm for personalized learning path construction. J. Intell. Fuzzy Syst. 2019, 37, 6785–6800.
14. Elshani, L.; Nuçi, K.P. Constructing a personalized learning path using genetic algorithms approach. arXiv 2021, arXiv:2104.11276.
15. Niknam, M.; Thulasiraman, P. LPR: A bio-inspired intelligent learning path recommendation system based on meaningful learning theory. Educ. Inf. Technol. 2020, 25, 3797–3819.
16. Benmesbah, O.; Lamia, M.; Hafidi, M. An improved constrained learning path adaptation problem based on genetic algorithm. Interact. Learn. Environ. 2021, 1–18.
17. Son, N.T.; Jaafar, J.; Aziz, I.A.; Anh, B.N. Meta-heuristic algorithms for learning path recommender at MOOC. IEEE Access 2021, 9, 59093–59107.
18. Lu, H.; Wang, H.; Liu, M. How to transform the school education model in the digital economy era?—Interpretation of the report "Schools of the future: Defining new models of education for the fourth industrial revolution". Mod. Educ. Technol. 2021, 31, 42–49.
19. Wang, T.-I.; Wang, K.-T.; Huang, Y.-M. Using a style-based ant colony system for adaptive learning. Expert Syst. Appl. 2008, 34, 2449–2464.
20. Lin, Y.; Gong, Y.; Zhang, J. An adaptive ant colony optimization algorithm for constructing cognitive diagnosis tests. Appl. Soft Comput. 2017, 52, 1–13.
21. Krynicki, K.; Jaen, J.; Navarro, E. An ACO-based personalized learning technique in support of people with acquired brain injury. Appl. Soft Comput. 2016, 47, 316–331.
22. Wong, L.-H.; Looi, C.-K. Adaptable learning pathway generation with ant colony optimization. J. Educ. Technol. Soc. 2009, 12, 309–326.
23. Li, S.; Wei, Y.; Liu, X.; Zhu, H.; Yu, Z. A New Fast Ant Colony Optimization Algorithm: The Saltatory Evolution Ant Colony Optimization Algorithm. Mathematics 2022, 10, 925.
24. Lalitha, T.; Sreeja, P. Personalised self-directed learning recommendation system. Procedia Comput. Sci. 2020, 171, 583–592.
25. Ly, A.; Marsman, M.; Wagenmakers, E.J. Analytic posteriors for Pearson's correlation coefficient. Stat. Neerl. 2018, 72, 4–13.
26. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
27. Wei, X. Task scheduling optimization strategy using improved ant colony optimization algorithm in cloud computing. J. Ambient Intell. Humaniz. Comput. 2020, 1–12.
28. Lipowski, A.; Lipowska, D. Roulette-wheel selection via stochastic acceptance. Phys. A 2012, 391, 2193–2196.
29. Martens, D.; De Backer, M.; Haesen, R.; Vanthienen, J.; Snoeck, M.; Baesens, B. Classification with ant colony optimization. IEEE Trans. Evol. Comput. 2007, 11, 651–665.
30. Ng, S.T.; Zhang, Y. Optimizing construction time and cost using ant colony optimization approach. J. Constr. Eng. Manag. 2008, 134, 721–728.
31. Chu, C.-P.; Chang, Y.-C.; Tsai, C.-C. PC2PSO: Personalized e-course composition based on Particle Swarm Optimization. Appl. Intell. 2011, 34, 141–154.
32. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z. Variants of Chaotic Grey Wolf Heuristic for Robust Identification of Control Autoregressive Model. Biomimetics 2023, 8, 141.
33. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Nonlinear Hammerstein System Identification: A Novel Application of Marine Predator Optimization Using the Key Term Separation Technique. Mathematics 2022, 10, 4217.
Figure 1. Learning paths and learning path network.
Figure 2. The flowchart of the SEACO algorithm.
Figure 3. Recommended near-optimal learning path.
Figure 4. Comparison results of the SEACO algorithm and the traditional ACO algorithm (test data set).
Figure 5. Comparison of experimental results of the SEACO algorithm and traditional ACO algorithm (all data sets).
Table 1. Comparison of recommended learning path performance with actual learning path.

Statistical Quantities | Average Academic Performance | Average Learning Efficiency | Average Length of Learning Path
Recommended learning path | 82.89 | 0.535 | 155
Learners' actual learning path | 69.57 | 0.526 | 196.4
Average optimization value | 13.32 | 0.009 | −41.4
Optimized learner ratio | 96% | 75% | 60.92%
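
As a convenience for readers who wish to reproduce the kind of summary statistics reported in Table 1, the short sketch below shows one way to derive the average optimization value (recommended minus actual) and the optimized learner ratio from paired per-learner records. The field names, the sample records, and the assumption that a learning path counts as "optimized" when it becomes shorter are illustrative choices made here, not the actual experimental data or code.

```python
# Minimal sketch (hypothetical data): derive Table 1-style summary statistics
# from paired per-learner records of recommended vs. actual learning paths.
from statistics import mean

# Illustrative records only; each pairs one learner's outcome on the
# recommended path with the outcome on their actual historical path.
records = [
    {"rec_score": 85.0, "act_score": 70.0, "rec_eff": 0.54, "act_eff": 0.52,
     "rec_len": 150, "act_len": 200},
    {"rec_score": 80.0, "act_score": 69.0, "rec_eff": 0.53, "act_eff": 0.53,
     "rec_len": 160, "act_len": 190},
]

def summarize(records, rec_key, act_key, higher_is_better=True):
    """Average recommended value, average actual value, their difference
    (the 'average optimization value'), and the share of learners improved."""
    rec_avg = mean(r[rec_key] for r in records)
    act_avg = mean(r[act_key] for r in records)
    if higher_is_better:
        improved = sum(r[rec_key] > r[act_key] for r in records)
    else:  # e.g. path length: shorter is assumed to be better
        improved = sum(r[rec_key] < r[act_key] for r in records)
    return rec_avg, act_avg, rec_avg - act_avg, improved / len(records)

for label, rk, ak, hib in [("Academic performance", "rec_score", "act_score", True),
                           ("Learning efficiency", "rec_eff", "act_eff", True),
                           ("Learning path length", "rec_len", "act_len", False)]:
    rec_avg, act_avg, diff, ratio = summarize(records, rk, ak, hib)
    print(f"{label}: recommended={rec_avg:.3f}, actual={act_avg:.3f}, "
          f"average optimization value={diff:.3f}, optimized ratio={ratio:.0%}")
```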
Table 2. Comparison on short-term improvement effect (test data set).

Algorithm | Iteration | Average Best Objective Value | STD
ACO | 13th | 0.257 | 0.0043
ACO | 23rd | 0.320 | 0.0048
ACO | 33rd | 0.351 | 0.0042
SEACO | 13th | 0.296 (saves 12 iterations) | 0.0039
SEACO | 23rd | 0.376 (saves 6 iterations) | 0.0040
SEACO | 33rd | 0.391 (saves 2 iterations) | 0.0039
Table 3. Parameter settings of PSO and DMO.

Algorithm | Description | Parameters
PSO | Inspired by the motion of bird flocks and schooling fish. | Population size N = 100; cognitive component C1 = 0.1; social component C2 = 0.075; minimum inertia Wmin = 0.5; maximum inertia Wmax = 1
DMO | Inspired by the social structure and foraging behavior of dwarf mongooses in their natural environment. | Population size N = 100; number of babysitters Nb = 20; babysitter exchange parameter K = 7; female vocalization α = 2
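
For replication purposes, the following sketch collects the baseline settings listed in Table 3 into plain Python configuration objects. The class names and the linearly decreasing inertia schedule are assumptions made for illustration; Table 3 only specifies the parameter values, not how the baseline implementations consume them.

```python
# Minimal configuration sketch (assumed structure) mirroring Table 3.
from dataclasses import dataclass

@dataclass
class PSOConfig:
    population_size: int = 100   # N
    c1: float = 0.1              # cognitive component
    c2: float = 0.075            # social component
    w_min: float = 0.5           # minimum inertia weight
    w_max: float = 1.0           # maximum inertia weight

    def inertia(self, iteration: int, max_iterations: int) -> float:
        """A common linearly decreasing inertia schedule (an assumption,
        not necessarily the schedule used in the reported experiments)."""
        frac = iteration / max(1, max_iterations)
        return self.w_max - (self.w_max - self.w_min) * frac

@dataclass
class DMOConfig:
    population_size: int = 100        # N
    n_babysitters: int = 20           # Nb
    babysitter_exchange: int = 7      # K
    female_vocalization: float = 2.0  # alpha

pso_cfg = PSOConfig()
dmo_cfg = DMOConfig()
print(pso_cfg.inertia(iteration=13, max_iterations=33), dmo_cfg)
```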
Table 4. Comparison on short-term improvement effect (all data sets).

Algorithm | Iteration | Average Best Objective Value | STD
ACO | 13th | 0.371 | 0.0042
ACO | 23rd | 0.402 | 0.0046
ACO | 33rd | 0.479 | 0.0044
PSO | 13th | 0.319 | 0.0049
PSO | 23rd | 0.348 | 0.0048
PSO | 33rd | 0.415 | 0.0049
DMO | 13th | 0.403 | 0.0059
DMO | 23rd | 0.431 | 0.0060
DMO | 33rd | 0.484 | 0.0060
SEACO | 13th | 0.413 | 0.0038
SEACO | 23rd | 0.449 | 0.0036
SEACO | 33rd | 0.492 | 0.0039
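
Tables 2 and 4 report an average best objective value and a standard deviation at fixed iteration checkpoints. Assuming these columns summarize repeated independent runs, the sketch below shows how such columns can be computed from per-run convergence records; the run data and the choice of the sample standard deviation are hypothetical illustrations, not the experimental setup.

```python
# Minimal sketch (assumed setup): summarize repeated optimization runs at
# fixed checkpoints into "average best objective value" and "STD" columns.
from statistics import mean, stdev

# best_so_far[r][t]: best objective value found by run r up to iteration t+1
# (hypothetical values for illustration only).
best_so_far = [
    [0.20, 0.26, 0.31, 0.35, 0.37],
    [0.19, 0.25, 0.30, 0.34, 0.38],
    [0.21, 0.27, 0.32, 0.36, 0.39],
]

def summarize_at(runs, checkpoint):
    """Mean and sample standard deviation of the best value at a 1-indexed iteration."""
    values = [run[checkpoint - 1] for run in runs]
    return mean(values), stdev(values)

for checkpoint in (1, 3, 5):
    avg, std = summarize_at(best_so_far, checkpoint)
    print(f"iteration {checkpoint}: average best = {avg:.3f}, STD = {std:.4f}")
```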
Table 5. Time saved by SEACO compared with other algorithms (all data sets).

Iteration | vs. ACO | vs. PSO | vs. DMO
13th iteration | 11 iterations | 13 iterations | 3 iterations
23rd iteration | 13 iterations | 22 iterations | 8 iterations
33rd iteration | 1 iteration | 19 iterations | 1 iteration
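
The "time saved" entries in Tables 2 and 5 can be read as the number of extra iterations a baseline needs before its average best objective value first reaches the level SEACO has already attained at a given checkpoint. The sketch below illustrates this reading on two hypothetical convergence curves; the curve values are invented for illustration and do not reproduce the experimental results.

```python
# Minimal sketch: interpret "time saved" as the extra iterations a baseline
# needs to first match the value SEACO has reached at a given checkpoint.
from typing import Optional, Sequence

def iterations_saved(seaco_curve: Sequence[float],
                     baseline_curve: Sequence[float],
                     checkpoint: int) -> Optional[int]:
    """Extra iterations the baseline needs (beyond `checkpoint`, 1-indexed)
    to first reach the value SEACO attains at that checkpoint; None if the
    baseline never reaches it within the recorded horizon."""
    target = seaco_curve[checkpoint - 1]
    for iteration, value in enumerate(baseline_curve, start=1):
        if value >= target:
            return max(0, iteration - checkpoint)
    return None

# Hypothetical convergence curves (average best objective value per iteration).
seaco = [0.20, 0.25, 0.29, 0.33, 0.36, 0.38, 0.40, 0.41, 0.42, 0.43]
aco   = [0.15, 0.18, 0.21, 0.24, 0.27, 0.30, 0.32, 0.34, 0.36, 0.38]

print(iterations_saved(seaco, aco, checkpoint=3))  # -> 3 on these toy curves
```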
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
