1. Introduction
Different learners have different learning styles (LS), which can be defined as cognitive characteristics and ways of perceiving, interacting with, and responding to the learning environment [1,2]. In traditional classrooms, it is difficult to adapt the teaching process to the different learning styles of students. E-learning systems can resolve this issue, first by identifying the learner's learning style and then delivering content in a way that suits him/her best, providing adaptability and personalisation [3,4].
Most educational researchers agree on the relevance of learning styles in the learning process, notably in online learning contexts. Numerous studies have been conducted on student activities in an e-learning environment and their relationship to learning styles [5,6]. In [7], the authors gave a comprehensive overview of the different approaches and algorithms used to predict learning styles, along with their classification and an analysis of their advantages and disadvantages.
Researchers continue to debate the causal relationship between the adaptation of an e-learning environment to a learner's learning style and the quality of the learning process and its results. Many researchers believe that a technique for adapting to a particular learner's learning style is crucial for an adaptive learning system [8,9]. The unifying objective of their research is to accurately adapt courses to learners' learning preferences.
Some academics downplay the relevance of learning-style adaptation in e-learning systems [10,11]. They mainly argue that adaptation to the learning style brings no substantial improvement in the learning process. Additionally, psychological research shows that studying in a learning environment that is inconsistent with a learner's preferences may stimulate the learner to gain new abilities [11].
On the other hand, positive views on the impact of adapting an online learning environment to a particular learning style throughout the online training process predominate [12,13]. This approach is supported by experiments with observable findings in [14].
Although learning-style research dates back more than 30 years, the creation of learning-style-adaptive educational systems only began in the first decade of this millennium [15]. According to Cristea and Stash, it is essential to integrate knowledge about a learner's learning style in an online learning environment, which allows learners to choose the learning method best suited to them and might improve their outcomes [16]. According to Popescu et al., adaptive learning systems can improve satisfaction, efficiency, and effectiveness by adjusting the instruction in the learning process to the learner's preferred learning style [15].
The role of learning styles in an adaptive learning environment is undeniably worth further investigation. Regardless of the differing viewpoints in related studies, we endorse a research path in which learning-style modelling is a built-in feature of an online learning environment. This might give learners the option of defining their learning style and selecting from a variety of educational strategies.
Among the numerous learning-style models and theories [17,18], we chose the Felder–Silverman learning-style model (FSLSM). The Index of Learning Styles (ILS), the psychometric instrument associated with the FSLSM, has not yet been completely validated. This paper seeks to provide a subjective assessment, which takes the learners' opinion into account, for the evaluation of the ILS. An online learning system named Protus was developed as an experimental platform. The implementation of learning-style identification (LSI) in Protus and the effects of this process on personalisation are considered in this paper. Finally, we present the results of a study in which we compared and analysed students' learning preferences acquired from two different sources: a subjective questionnaire and the ILS.
2. Related Work
A learning-style model categorises learners based on how they collect and process information, as well as how they acquire knowledge. Researchers have developed distinct learning-style theories, as well as instruments to assess them. They differ mainly in how they identify the most important characteristics of the learning process. Learning-style models are classified into different LS families based on their understanding of the learning-style paradigm [19].
Attwell argued that it would be best to use different learning styles in different contexts, subjects, and knowledge domains, responding to different learning aims and goals, concluding that there is no single best learning style [20]. We selected Felder and Silverman's learning- and teaching-style model, which focuses on features and learning differences important in engineering education. Four dimensions of learning style are defined [18,21]: Information processing (Active or Reflective learners); Information reception (Visual or Verbal learners); Information perception (Sensing or Intuitive learners); and Information understanding (Sequential or Global learners). The FSLSM is widely used in adaptive educational systems that focus on learning styles. According to several academics, the FSLSM is the ideal model for such systems [22,23]. The FSLSM differs from existing models in that it incorporates important learning-style models such as Kolb's [24] and Pask's [25]. Several other learning-style models treat learning styles as fixed categories, while the FSLSM treats them as tendencies [26]. Özpolat and Akar find this model more suitable for applications covering basic science topics [27]. Although the FSLSM has its opponents, we found more positive opinions about it in the literature [28,29].
Some studies have identified navigational behaviour as a crucial learner feature for accurately assessing learning styles in adaptive e-learning systems [30,31,32]. This must be considered when designing an adaptive learning system that aims to achieve an appropriate personalisation function. Most of the existing e-learning environments, such as CS383 [33], WELSA [34], TSAL [35], E-learning 2.0 [36], iLearn [37], and EDUCA [38], include adaptability features that depend on the student's navigational behaviour, considering learning styles that are not necessarily determined online.
Graf, Liu, and Kinshuk, on the other hand, constrained their study to learners’ activities in an online context only [
31]. They investigated the correlation between learners’ diverse learning styles and their preferences and actions throughout the e-learning process by conducting research on learners’ navigational behaviour in an online course in an adaptive learning environment. According to the findings, information regarding variations in learners’ navigational behaviour may be utilised to design a new pattern in learner modelling that automatically recognises learning styles based on learners’ behaviour in an online course.
The research findings were applied in building DeLeS, an online learning environment in which the authors offer an enhanced method for automatic learning-style recognition in an adaptive learning system [39]. DeLeS's extended structure comprises three different types of data sources: behaviour patterns, cognitive abilities, and navigation patterns. The findings of this study revealed that combining data from multiple sources on students' learning behaviour (navigation patterns and cognitive traits) can increase the accuracy of learning-style identification. Other studies have searched for new methods of automatic learning-style recognition, some relying on big data analysis [40] and others using EEG signals to detect the learning style [41].
A large group of researchers have devoted their studies to assessing whether the ILS is an appropriate tool that produces valid and reliable results, for example in a Semantic Web Rule Language approach based on learning styles for MOOCs [42]. The Index of Learning Styles (ILS), a web-based instrument, assesses learners' preferences on four dimensions: Active/Reflective, Visual/Verbal, Sensing/Intuitive, and Sequential/Global [43]. Some of these studies provided evidence of the construct validity of this instrument [44], while others expressed concerns regarding the robustness of the model [45]. Although most of these studies have provided evidence that the ILS instrument is reliable, valid, and suitable for use in education, many authors hold that research on the evaluation of the ILS should be continued.
Cook conducted a study of the ILS instrument's validity on a sample of internal medicine residents [46]. Tests were performed using two different LS instruments: the ILS was administered twice and the Learning Style Type Indicator (LSTI) once, the latter being used for comparison of the results. He found acceptable reliability and validity of the ILS for assessing two LS dimensions: Active/Reflective and Sensing/Intuitive. However, the data from this study provided only weak support for the validity of the Visual/Verbal and Sequential/Global dimensions. Cook indicated that style classifications based on ILS results have variable reproducibility, despite the acceptable reliability of ILS scores. He suggested further research on the ILS instrument's validity and its comparison with other available tools.
Platsidou and Metallidou [47] compared the psychometric properties of two different learning-style tools: the ILS and the Learning Style Inventory (LSI) [43]. The internal consistency reliability, construct validity, and discriminant validity of these measures were investigated in their study. Based on the results, the authors found psychometric weaknesses and limitations in both instruments. They also criticised the practice of grouping learners according to their learning style as a tool for adaptation and personalisation of the learning environment [47].
Hosford and Siders conducted a study evaluating the ILS instrument, assessing the temporal stability, internal consistency, and factor structure of students' responses to the ILS [48]. Over the two-year study, the findings showed moderate to high reliability, with an acceptable rate of internal consistency. In conclusion, the study found the ILS to be a suitable tool for evaluating learning-style preferences.
In their research at North Carolina University, Felkel and Gosky also evaluated the ILS instrument by assessing its reliability, discriminant validity, and construct validity [49]. They tested validity by checking whether the four dimensions of learning style measured by the ILS are non-overlapping concepts. The study found the ILS instrument to be a trustworthy and valid tool for assessing learning styles.
Zywno conducted a study of the psychometric properties of the ILS instrument in a hypermedia-assisted learning environment [50]. The methodology included test–retest reliability, Cronbach's alpha and factor analysis, interscale correlation, construct validity, internal consistency, and total item correlation. The study's findings revealed that the ILS is a useful instrument for assessing engineering students' learning styles. The author argues that work on the evaluation of the ILS must be continued.
To summarise, the current body of evidence is inconclusive, and further study in this area is needed. Our research question investigates ILS results regarding students' learning styles by having students first study specially designed lessons in Protus and subsequently fill out a subjective questionnaire. Studying the differences between the preferences obtained from the ILS and the subjective questionnaire is important for improving the design of an adaptive, personalised, and flexible online learning system that allows students to change the presentation of the lessons through its interface.
According to the research question and objectives, the null hypothesis H0 is defined: there are no differences between the ILS and the subjective questionnaire. The alternative hypotheses are defined for each dimension:
Hypothesis 1a (H1a). There is a significant difference between the Active/Reflective preferences obtained from the ILS and subjective questionnaires for the Information processing dimension.
Hypothesis 1b (H1b). There is a significant difference between the Sensing/Intuitive preferences obtained from the ILS and subjective questionnaires for the Information perception dimension.
Hypothesis 1c (H1c). There is a significant difference between the Visual/Verbal preferences obtained from the ILS and subjective questionnaires for the Information reception dimension.
Hypothesis 1d (H1d). There is a significant difference between the Sequential/Global preferences obtained from the ILS and subjective questionnaires for the Information understanding dimension.
To our knowledge, this type of study, which takes the learners' opinion into account, has not previously been used for ILS assessment. The results motivated enhancements to the functionalities of the user interface, which are expected to improve the adaptability of Protus.
3. Adaptive Learning System Protus
In this section, we present an intelligent and adaptive web-based educational system named Protus. It was developed to help learners in the learning process of different courses such as Essentials of Programming Languages, E-Business, and Use of Information Technology [32] (Figure 1).
The functionalities of the Protus system are adjusted and highly oriented to learners' and teachers' needs. Several essential goals and requirements for modern personalised e-learning systems are also realised in Protus [30]: separation of two distinct interfaces, for students and for teachers; high-quality system modularisation; a strong separation of distinct system components (domain module, application module, learner model, and adaptation module) within a sharable and dynamic learner model; continuous management of learning preferences, progress, and personal learner data; facilitating communication and collaboration among students as well as between students and instructors; knowledge assessment and increasing the learners' competency level; features for creating new learning content, as well as content transfer from external sources; semantically rich descriptions of the components' functions to support successful interoperability between system components; and ensuring that the system's components are properly coordinated and communicate with each other.
In the following sections, we explain how adaptability and personalisation are achieved in this system and present the FSLSM that was chosen to be implemented in Protus.
Adaptability and personalisation in the Protus system are obtained by the implementation of recommendation techniques (e.g., collaborative filtering, clustering, and association rule mining). The Protus system suggests online learning activities or optimal learning sequences based on learners’ interests, knowledge, learning style, and the browsing history of similar learners with comparable characteristics.
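As an illustration of how such recommendation techniques could work, the following minimal sketch shows user-based collaborative filtering over learner–lesson ratings. It is a sketch under stated assumptions only; the function and variable names are hypothetical and do not reflect the actual Protus implementation.

```python
import numpy as np

def recommend_lessons(ratings, learner_idx, top_k=3):
    """Suggest lessons for one learner from the behaviour of similar learners.

    ratings: 2D array (learners x lessons), 0 = lesson not yet rated/visited.
    Returns indices of up to top_k unseen lessons, ranked by the similarity-
    weighted ratings of the other learners.
    """
    target = ratings[learner_idx]
    # Cosine similarity between the target learner and every other learner.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    sims = (ratings @ target) / (norms + 1e-9)
    sims[learner_idx] = 0.0                      # exclude the learner themself
    # Predicted score for each lesson = similarity-weighted average rating.
    scores = (sims @ ratings) / (np.abs(sims).sum() + 1e-9)
    scores[target > 0] = -np.inf                 # hide lessons already seen
    return np.argsort(scores)[::-1][:top_k]

# Example: 4 learners, 5 lessons; recommend for learner 0.
R = np.array([[5, 0, 3, 0, 0],
              [4, 2, 3, 1, 0],
              [5, 1, 4, 0, 2],
              [1, 5, 0, 4, 4]], dtype=float)
print(recommend_lessons(R, learner_idx=0))
```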
The learner model, as one of the constituent components of the core of the Protus system, stores the information needed to predict student behaviour and thus achieve adaptation. This information concerns [32]: the learner, with cognitive, affective, and social characteristics; the hardware and software characteristics of the learner's environment; the learner's knowledge and feedback on the content; and the way the learner interacts with online content, including metrics such as the number of keystrokes, dwell time, and access patterns.
All those data are categorised through three layers: the learner’s performance, objective information, and learning path.
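A minimal sketch of how such a layered learner model might be represented as a data structure follows; the class and field names are illustrative assumptions, not the actual Protus schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LearnerModel:
    """Illustrative three-layer learner model (not the actual Protus schema)."""
    # Layer 1: performance (test outcomes, grades, competency estimates).
    performance: Dict[str, float] = field(default_factory=dict)
    # Layer 2: objective information (profile data, environment, ILS indices).
    objective_info: Dict[str, str] = field(default_factory=dict)
    ils_indices: Dict[str, int] = field(default_factory=dict)   # e.g. {"ACT/REF": -4}
    # Layer 3: learning path (ordered record of visited learning objects).
    learning_path: List[str] = field(default_factory=list)

    def record_visit(self, learning_object_id: str) -> None:
        self.learning_path.append(learning_object_id)

learner = LearnerModel(objective_info={"study_program": "Finance"})
learner.ils_indices["VIS/VRB"] = 5
learner.record_visit("lesson_loops_visual")
```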
To make Protus intelligent and adaptive, an automatic recommendation system was built. It contains three modules: a learner system-interaction module, an offline module, and a recommendation engine (Figure 2).
The learner system-interaction module records all of a student’s activities such as visited pages, sequence patterns, test outcomes, and grades obtained, and saves them into the server logs. Those data are combined with information previously collected through the learner’s registration process and the learning-style survey. This module also keeps track of the added, modified, and deleted tags.
The offline module is activated periodically, and its goal is to filter the learning content based on the course’s current state, learners’ tags, and learners’ affiliations.
The recommendation engine generates a list of recommendations for each created cluster, based on the tags posted by learners or educators and the evaluation of frequent sequences supplied by Protus.
The recommendation module of Protus relies on the learner's specific learning style, determined by the results of an initial questionnaire based on the FSLSM, which learners fill in as their first activity after completing registration in the system.
4. Materials and Methods
Felder and Soloman's ILS is used as the data-gathering instrument for researching and evaluating learning styles [43]. The ILS is a 44-question multiple-choice learning-styles instrument that assesses personal learning-style preferences along four dimensions: Information Perception, Information Understanding, Information Reception, and Information Processing. The data from this questionnaire are used to generate suitable clusters, i.e., groups of students with similar learning patterns. These results directly influence the look and the content of the learner's interface, determining the way in which a lesson is presented based on the learner's preferred style.
The Information-Processing dimension allows us to differentiate learners who are example-oriented, called Reflectors, from those who are activity-oriented, named Activists [21]. Students are more likely to recall and grasp information when they do something active with it, such as reviewing, practising, or explaining it to others. Reflectors prefer to gather and analyse information before proceeding with any action. Faced with an active learner, the Protus 2.1 system first presents the activity, then a theoretical explanation, followed by a clarification and an example. The order is different for a reflective learner: an example is shown first, followed by an explanation and the theory, and at the end he/she is asked to carry out an activity.
The Information-Perception dimension defines sensing learners, named Sensors, who are known for their patience with details, as well as their ability to memorise knowledge and perform laboratory work. Intuitors are more skilled than Sensors at acquiring new ideas, concepts, and complex mathematical formulations and abstractions. Sensing learners work more slowly and are less inventive than intuitive learners. Sensors are more practical and cautious, while Intuitors dislike repetition and value innovation.
Sensing learners, for example, are expected to be interested in supplementary resources; thus, they can click on the “additional material” button on the screen interface. On the other hand, the interface offers intuitors formulas, abstract material, and concepts. Specific syntax rules or block diagrams are used to provide adequate explanations. The Information-Reception dimension defines verbal and visual learners. Visual learners recall better what they look at in visual forms such as diagrams, drawings, demonstrations, timelines, and flowcharts. Words in the form of written and spoken explanations have a greater impact on verbal learners.
The Information-Understanding dimension defines global and sequential learners. Sequential learners tend to follow the learning material in a straight line, with each stage logically following the one before it. Learners in Protus 2.1 go through lessons in a predetermined order, according to the criteria of the Sequential learning style.
Global learners tend to learn by performing big jumps, passing over learning objects and moving on to more sophisticated information. They are given a broad overview of the course, along with brief descriptions of each unit and the option of gaining access to the unit they choose by clicking the unit hyperlinks rather than completing the course in the sequence.
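To make the adaptation concrete, here is a minimal sketch of how lesson fragments could be reordered according to the Information-Processing category described above; the fragment labels and the function are illustrative assumptions rather than the actual Protus implementation.

```python
# Order in which lesson fragments are shown, per FSLSM category
# (illustrative; based on the behaviour described for Protus 2.1 above).
PRESENTATION_ORDER = {
    "active":     ["activity", "theory", "explanation", "example"],
    "reflective": ["example", "explanation", "theory", "activity"],
}

def arrange_lesson(fragments: dict, processing_style: str) -> list:
    """Return lesson fragments in the order suited to the given style."""
    order = PRESENTATION_ORDER.get(processing_style, PRESENTATION_ORDER["active"])
    return [fragments[name] for name in order if name in fragments]

lesson = {
    "theory": "Loops repeat a block of statements...",
    "example": "for i in range(3): print(i)",
    "explanation": "The loop body runs once per value of i.",
    "activity": "Write a loop that prints the even numbers up to 10.",
}
print(arrange_lesson(lesson, "reflective"))
```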
4.1. Participants
The research was conducted over three months within two courses (E-business and Information Technology Implementation) in the second and third years of a study program in Economics at the Novi Sad School of Business. In total, 71 learners voluntarily took part. The percentage of valid questionnaires was 95.77%; 3 were incomplete and their data were deleted. Thus, we obtained 68 cases for analysis. The participants' gender structure shows that 47 of them were female and 21 were male. They were 24 years old on average (MAD = 2.55).
Figure 3 shows that most of the students were 22 and 23 years old.
The participants were enrolled in four different economics subspecialties: Finance (20), Trade (14), Entrepreneurship (28), and Tourism (6).
4.2. Procedure
We conducted a survey using a subjective questionnaire in order to investigate the learners’ preferences during the learning process, and compared its results with the results obtained using the ILS. The procedure was subdivided into two phases: Experimental Phase 1 and Experimental Phase 2.
4.2.1. Experimental Phase 1
At the beginning of learning with the Protus system, learners filled out the ILS questionnaire (based on FSLSM) to predict their initial learning styles.
Each of the four FSLSM dimensions was covered by 11 questions. The learner could respond to each question by choosing one of the two offered answers. Each answer had an impact on the final result, leading it towards one of the two categories within the corresponding learning-style dimension.
The system assigns one point to each learner's answer in the relevant field, which is then entered into the related table (Table 1). In the next step, the system sums the numbers belonging to the same column. The numbers of replies identified as A and as B are used to determine the final index. Therefore, if all 11 responses were of type A, the index would be −6. In the case of ten responses of type A and just one of type B, the index would be −5. The index would have a value of −4 in the case of nine type-A answers and two type-B answers, and so on.
If the resulting index has a value between −2 and 2, the learner is considered "fairly well-balanced" and has only a modest preference for one of the dimension's categories. If the value of the index is −4, −3, 3, or 4, the learner has a moderate propensity for one of the categories (a "moderate preference") and will find it easier to study in an environment that prioritises that category. If the index is −6, −5, 5, or 6, the learner is strongly inclined towards one of the categories of the dimension (a "strong preference") and may have problems in a learning environment that supports the opposite category.
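A minimal sketch reproducing the scoring just described, i.e., mapping the 11 A/B answers of one dimension to an index between −6 and 6 and its preference category; this is our reading of the procedure, not the actual Protus code.

```python
import math

def ils_dimension_index(answers):
    """Map 11 'A'/'B' answers of one ILS dimension to an index in [-6, 6].

    All 11 A's -> -6, ten A's and one B -> -5, nine A's and two B's -> -4, ...
    Positive values correspond to a preference for the opposite (B) pole.
    """
    a = answers.count("A")
    b = answers.count("B")
    assert a + b == 11, "each ILS dimension has exactly 11 questions"
    magnitude = math.ceil(abs(a - b) / 2)
    return -magnitude if a > b else magnitude

def preference_category(index):
    """Classify the index as described in the text above."""
    if abs(index) <= 2:
        return "fairly well-balanced"
    if abs(index) <= 4:
        return "moderate preference"
    return "strong preference"

answers = ["A"] * 9 + ["B"] * 2
idx = ils_dimension_index(answers)           # -> -4
print(idx, preference_category(idx))         # -4 moderate preference
```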
Protus then determines the numerical value of the learning style and sets up the corresponding learner model. Setting up the learner model is critical for determining the initial options that lead to the system's personalisation.
4.2.2. Experimental Phase 2
To see to what extent the results of the ILS questionnaire match the learner's requirements, we integrated into Protus, specifically for the purpose of this study, newly designed lessons. For each of the four previously mentioned dimensions, we created two lessons on the same topic, each designed to illustrate a particular learning style. Thus, if the learner was classified by the ILS questionnaire in the visual category, he/she was asked to learn from lessons designed in two opposite ways: visual, but also verbal. After learning from both versions of the same lesson for each of the four dimensions, learners were asked to fill out a questionnaire (the subjective questionnaire) consisting of 11 questions that inquired, from different points of view, which of the two presented lessons was more appropriate to the learner based on their own studying experience (Figure 4).
5. Data Analysis
During the practical use of Protus, we noticed a nonuniform distribution of learning-style categories; learning styles are not equally distributed among students. For example, within the Information-Perception dimension, nearly 80% of students had a sensing learning style, while only a small portion had an intuitive learning style. We assigned the learners to one of the two categories within each dimension based on the previously calculated index. Each category included all learners with the same polarity of preference, whether fairly balanced, moderate, or strong.
Figure 5 presents a comparison of the learners' established learning-style preferences within all four dimensions.
The data processing of the learners' answers was conducted in the same way as for the answers from the ILS questionnaire. As a result, each learner was classified into one of the following three LS types for each pair of opposite lesson types within each dimension: a fairly well-balanced type, a type with a moderate preference, or a type with a strong preference.
In the next step, we assigned numeric values to each of those learning-style types within each of the four dimensions. Consequently, for each of the four dimensions, we gave each student a numeric value in the range from −2 to 2, thus depicting his/her inclination toward one of the two opposite learning styles.
Table 2 shows the distribution of those values for each possible instance.
We repeated this for the results obtained from the ILS questionnaire. Therefore, every learner, for each of the four dimensions, was classified into one of the five possible types, first based on the findings of the ILS questionnaire and then based on the results of the questionnaire filled in after learning from the specially designed lessons. In order to estimate to what extent those two values vary for each learner, we calculated the absolute value of their difference (Table 3).
The minimum value of the absolute difference would be zero, in case the learner was classified in the same learning-style group by both questionnaires. The maximum value could be 4 in case the learner was categorised in totally opposite learning-style types by the two questionnaires.
Finally, after calculating the average value of all the absolute differences we obtained the mean absolute difference for each learning-style dimension.
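A minimal sketch of this comparison step: each learner's two classifications (ILS and subjective questionnaire), encoded on the −2..2 scale described above, are compared and the mean absolute difference per dimension is computed. The variable names and sample data are illustrative, not the study data.

```python
import numpy as np

# Encoded preference per learner on one dimension (-2..2), illustrative data:
# -2/-1 = strong/moderate preference for one pole, 0 = fairly well-balanced,
#  1/ 2 = moderate/strong preference for the opposite pole.
ils_scores        = np.array([-2, -1,  0,  2,  1, -2])
subjective_scores = np.array([-1,  1,  0, -2,  1,  0])

abs_diff = np.abs(ils_scores - subjective_scores)   # 0 = same group, 4 = opposite
mean_abs_diff = abs_diff.mean()
print(abs_diff, mean_abs_diff)
```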
To investigate the different reliability and validity aspects of our survey, along with some other numeric data presented in the tables in our paper, we used two software products. The simple calculations were performed in Microsoft Excel 2016, while for the more sophisticated ones we used the Statistical Package for the Social Sciences (SPSS), version 26. SPSS was used to calculate Cronbach's alpha to assess the internal consistency and reliability of the subjective questionnaire. We also used SPSS to perform an Exploratory Factor Analysis (EFA). One of the resulting EFA tables, the Rotated Component Matrix, gave us the data needed to calculate Composite Reliability (CR) and Average Variance Extracted (AVE), the latter of which was used to confirm the convergent validity of our survey. To establish discriminant validity, we used the SPSS Bivariate Correlation module based on Pearson's correlation coefficient. McDonald's omega, as an additional indicator of the reliability of our survey, was calculated using the Hayes OMEGA macro in SPSS.
In order to assess the internal consistency and reliability of the subjective questionnaire in all four domains, Cronbach's alpha was used. The results shown in Table 4 indicate a high Cronbach's alpha, which is slightly lower for the Sensing/Intuitive dimension. The same conclusions were drawn after calculating McDonald's omega; its values were almost the same as those of Cronbach's alpha.
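For reference, a minimal sketch of how Cronbach's alpha can be computed from the item responses of one questionnaire dimension; the data and function are illustrative and do not reproduce the SPSS output used in the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative responses: 6 learners answering 4 items on a 1-5 scale.
responses = np.array([[4, 5, 4, 4],
                      [2, 3, 2, 3],
                      [5, 5, 4, 5],
                      [3, 3, 3, 2],
                      [4, 4, 5, 4],
                      [1, 2, 2, 1]])
print(round(cronbach_alpha(responses), 3))
```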
In addition, we calculated the values of Composite Reliability (CR) and Average Variance Extracted (AVE).
CR was calculated according to the following formula:
\[ CR = \frac{\left(\sum_{i=1}^{n}\lambda_i\right)^2}{\left(\sum_{i=1}^{n}\lambda_i\right)^2 + \sum_{i=1}^{n}\varepsilon_i} \]
where i refers to the item number, ranging from 1 to n; n represents the total number of items; \(\lambda_i\) is the standardised factor loading for item i; and \(\varepsilon_i\) is the error variance of item i (\(\varepsilon_i = 1 - \lambda_i^2\)).
The value of the AVE was obtained using the following formula:
\[ AVE = \frac{\sum_{i=1}^{n}\lambda_i^2}{n} \]
where i refers to the item number, ranging from 1 to n; n represents the total number of items; and \(\lambda_i\) is the standardised factor loading for item i.
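A minimal sketch computing CR and AVE from a set of standardised factor loadings according to the formulas above; the loadings shown are illustrative values, not those obtained in the study.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)   # epsilon_i = 1 - lambda_i^2
    return s ** 2 / (s ** 2 + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Illustrative standardised loadings for the items of one style dimension.
loadings = [0.81, 0.76, 0.69, 0.74]
print(round(composite_reliability(loadings), 3),
      round(average_variance_extracted(loadings), 3))
```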
6. Results
As shown in Table 5, the AVE values for the four dimensions are considerably above the acceptable value of 0.5, with the exception of the Sensing/Intuitive style dimension, so we can claim good convergent validity. The same can be said regarding composite reliability: for all four dimensions of our subjective survey, the CR values are above the minimum acceptable value of 0.7.
The discriminant validity of our survey was also investigated. The square root of the average variance extracted for each style dimension was higher than its correlations with the other style dimensions (Table 6), suggesting acceptable discriminant validity.
The comparison of the results obtained by analysing the ILS and subjective questionnaires showed that the lowest mean absolute difference was 1.28 for the Sensing/Intuitive dimension, while the highest was 1.79 for the Sequential/Global and Active/Reflective dimensions. The mean difference value for the Visual/Verbal dimension was 1.42. Although we can notice divergences in all four dimensions, they are well below the maximum value of 4.
The results obtained from both questionnaires were also interpreted by counting how many times each student answered each of the 11 questions in favour of one specific style in each of the four dimensions. The mean values of all scores for each dimension are presented in Table 7.
The values of the ILS mean scores are very close to the subjective questionnaire ones. Their absolute differences vary from 0.34 for the Sensing/Intuitive dimension to 1.79 for the Sequential/Global dimension.
The results obtained from the subjective questionnaire and the ILS regarding the distribution of different learning styles among students are shown in Figure 6.
The distribution is almost the same in the Information-Perception and Information-Processing dimensions and marginally different for the Information-Reception dimension. Once more, the Information-Understanding domain is the one that has the most divergent figures. According to the ILS, the sequential learning style is the prevailing one (65%), while the subjective questionnaire suggests that the global learning style is the most present among the students (58%).
Comparing the results gathered by the ILS and the subjective questionnaire, we also calculated the percentage of students that changed their learning style. In line with the view that "fairly balanced" categories show only a weak leaning to either of the two opposite learning styles, we took into consideration only cases in which someone with a "strong" or "moderate" preference for a learning style shifted to a "strong" or "moderate" preference for the opposite one. The lowest percentage of style changing is in the Information-Reception dimension (9.09%); it is somewhat higher in the Information-Perception dimension, while the highest percentages are in the Information-Processing (19.18%) and Information-Understanding (19.40%) dimensions.
In order to test our hypotheses, a paired-samples t-test was conducted in SPSS for each learning-style dimension to determine whether there was a significant difference between the results obtained by the ILS questionnaire and those gathered by the subjective questionnaire. For the purpose of this test, preferences previously presented as strong and weak were joined into one category.
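A minimal sketch of the paired-samples t-test used here, based on SciPy rather than SPSS; the two score arrays are illustrative placeholders for the per-learner category values from the two questionnaires, not the study data.

```python
import numpy as np
from scipy import stats

# Per-learner category values for one dimension (illustrative, not the study data):
ils        = np.array([3, 2, 3, 2, 3, 1, 3, 2, 2, 3])
subjective = np.array([2, 2, 1, 2, 2, 1, 2, 3, 1, 2])

t_stat, p_value = stats.ttest_rel(ils, subjective)   # paired-samples t-test
print(f"t({len(ils) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```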
The data presented in Table 8, used to test hypothesis H1a, suggest that there was a significant difference between the results obtained by the ILS questionnaire (M = 2.515; SD = 0.702) and those obtained by the subjective questionnaire (M = 1.765; SD = 0.601); [t(67) = 6.720, p < 0.001]. We can therefore reject the null hypothesis that within the Information-Processing dimension there is no difference between the ILS's and the subjective questionnaire's results.
The testing of hypothesis H1b was conducted using the data shown in Table 9, which led us to conclude that there was not a statistically significant difference between the results obtained by the ILS questionnaire (M = 2.206; SD = 0.771) and those obtained by the subjective questionnaire (M = 2.059; SD = 0.722); [t(67) = 1.559, p = 0.124]. Since p > 0.05, we failed to reject the null hypothesis that within the Information-Perception dimension there is no difference between the results of the ILS and the subjective questionnaires.
From the data presented in Table 10, related to the testing of hypothesis H1c, we concluded that there was a statistically significant difference between the Visual/Verbal preferences obtained by the ILS questionnaire (M = 2.338; SD = 0.745) and those obtained by the subjective questionnaire (M = 1.853; SD = 0.579); [t(67) = 4.500, p < 0.001]. For that reason, we can reject the null hypothesis that within the Information-Reception dimension there is no difference between the ILS's and the subjective questionnaire's results.
The testing of hypothesis H1d was based on the data presented in Table 11. The indicators suggest that there was a statistically significant difference between the results obtained by the ILS questionnaire (M = 2.544; SD = 0.609) and those obtained by the subjective questionnaire (M = 1.573; SD = 0.676); [t(67) = 8.462, p < 0.001]. On that account, we can reject the null hypothesis that within the Information-Understanding dimension there is no difference between the ILS's and the subjective questionnaire's results.
Results of our study suggest that the ILS is not a fully reliable tool for making precise and final conclusions about the learner’s learning style, but its results are still good as a starting point for defining the initial learner’s model.
According to certain studies, a learner's learning style might alter depending on the activity being learned [51,52]. In addition, learning styles may change depending on the learning content and learning duration. The study we conducted in order to find out to what extent the results gathered by the ILS are accurate also confirmed the necessity of providing learners with the possibility of changing the presentation method of the lessons during the learning process. Accordingly, we enhanced the user interface of Protus by adding a new functionality: the experience bar. Using the experience bar, learners may freely choose between presentation approaches and styles throughout the remainder of the course (Figure 7).
7. Discussion and Conclusions
Information about personal learning preferences, referred to in the scientific community as individual LS, is essential to achieving adaptability and personalisation as important features of a modern e-learning environment. Among the numerous models for LS representation, we presented Felder and Silverman's learning-style model, its principles, and its practical use for LS representation in an e-learning environment. The main reasons for this choice were the wide use of the FSLSM, its flexibility and suitability, and, most importantly, the existence of the ILS as an accepted assessment tool associated with this learning model.
Considering the previously investigated controversies of the learning-style paradigm presented in the introductory theoretical review, and the necessity to continue the evaluation of the ILS, we conducted a study among students of the School of Business, Novi Sad, measuring the concurrent validity of the ILS instrument. An adaptive learning system was developed as an experimental platform for studying the use of LS identification in the personalisation process. During the study, we introduced a subjective questionnaire as a control tool for the assessment of the ILS. Namely, we acquired students' learning preferences using two instruments: the ILS at the beginning, and a subjective questionnaire later. This subjective assessment, which takes the learners' opinions into account, was the basis for our conclusions about the ILS's validity.
The results of the study suggest satisfactory validity of the ILS as a tool for defining the initial learner model at the beginning of the learning cycle. A new finding is the difference between the results of the ILS and the results of the subjective assessment.
This part of the results, which aims to answer our research question, suggests that the ILS is not a tool for making comprehensive and final conclusions about the learner's LS. For the hypothesis related to the Information-Perception dimension, we failed to reject the null hypothesis. The results for the other three alternative hypotheses were statistically significant, so we rejected the null hypothesis that within the respective dimensions there is no difference between the ILS's and the subjective questionnaire's results. As a result, it is unproductive to keep the learner's learning style fixed during the course, especially if the learner is dissatisfied with his or her current learning style.
Our research results show that a balanced approach can improve the process of designing an adaptive and flexible online learning system. That goal can be reached by first implementing an additional subjective questionnaire that enhances the results of the ILS questionnaire. Independently of the initially assigned 'cold start' LS, the extension of Protus functionalities with the experience bar allows students to change the initial interface design by choosing the presentation method they find most suitable. This additional information can be included in the calculation of learning styles; incorporating more data into the calculation leads to a more reliable result and therefore improves student modelling. Moreover, this solution may find wider application in different types of personalised e-learning environments, regardless of the assessment tool applied.
A limitation of our study is the uniformity of the participants regarding their study program: only students enrolled in Economics participated in the experiment. Another limitation is that the lessons the students studied before filling out the subjective questionnaire cover only subjects from the Information Technology area. Finally, most of the participants were about the same age (22–24).
Further research of the ILS instrument’s validity and its comparison with other available tools would be valuable. In addition, future research should investigate the evaluation and prediction of student goals, knowledge gaps, motivation, values, trust, and other variables critical to the learning process by analysing the success achieved through the possibility to change learning style using the experience bar, as an adaptive tool for learning-style selection.