Article

A Pattern Mining Method for Teaching Practices

by Bernhard Standl * and Nadine Schlomske-Bodenstein
Institute for Informatics and Digital Education, Karlsruhe University of Education, Bismarckstrasse 10, 76133 Karlsruhe, Germany
* Author to whom correspondence should be addressed.
Future Internet 2021, 13(5), 106; https://doi.org/10.3390/fi13050106
Submission received: 8 March 2021 / Revised: 19 April 2021 / Accepted: 19 April 2021 / Published: 23 April 2021

Abstract

When integrating digital technology into teaching, many teachers experience similar challenges. Nevertheless, sharing these experiences is difficult because subject-specific characteristics usually prevent teaching scenarios from being transferred directly from one subject to another. To address this problem, instructional scenarios can be described as patterns, an approach that has already been applied in educational contexts. Patterns capture proven teaching strategies and describe teaching scenarios in a unified, reusable structure. Since each subject sets different priorities for content, methods, and tools, we show an approach for developing a domain-independent graph database that collects digital teaching practices, starting from a taxonomic structure and passing through the intermediate step of an ontology. Furthermore, we outline a method to identify effective teaching practices as patterns from the interdisciplinary data in the graph database using an association rule algorithm. The results show that an association-based analysis can provide initial indications of effective teaching scenarios.

1. Introduction

Teaching and learning are highly individualized processes, and sharing best practices in teaching can be difficult because every teacher’s teaching characteristics differ. Hence, describing effective teaching scenarios in a reusable way is often an essential requirement for ensuring sustainability in good teaching practices [1]. This led to the idea of using a pattern approach to capture teaching practices [2,3]. If domain-specific teaching and evaluation data are available in a database, pattern mining techniques can help to identify candidates for effective teaching patterns. Once identified, patterns describe the essence of effective teaching scenarios for reuse in practice. The pattern approach, originally introduced by Christopher Alexander in the field of architecture [4,5], was later applied in computer science in object-oriented design [6] and also in education to describe teaching–learning scenarios in schools [7,8,9] and in tertiary education in the field of technology-enhanced learning [10,11]. In this paper, we describe a semi-automated approach, which is based on related work in the field of digital humanities [12,13] and further combines expertise from different domains such as educational science, computer science, and teaching practice.
In order to make high-quality teaching sequences available to teachers for reuse as patterns, they must be analyzed and captured along a suitable structure [1,14]. A taxonomy provides a basic structure for categorizing teaching sequences in an ordered, hierarchical way [15]. To use such a structure as the basis for a database, further semantic meaning of the relations and properties has to be added to the taxonomy. For this purpose, the taxonomy is transformed into an ontology. Representing a domain-specific structure as a (Web) ontology also has the advantage that it can be shared, reused, and further developed according to the uniform OWL (Web Ontology Language) standard. Furthermore, an ontology can serve as the backbone of a graph database for extensive exploration of data and patterns in the graph. Graph databases store data in nodes that are connected by relationships as edges [16]. Unlike relational databases, which have existed since the 1970s [17], a graph database is a non-relational database that provides an effective and efficient solution for highly interconnected data, with direct relationships between the data in nodes [18,19]. The graph model also makes it possible to model data close to the structure being represented [20]. As graph databases represent relationships by design, their data can be systematically analyzed to identify possible patterns of effective teaching.
Even though learning analytics is related to this approach, this work focuses on teaching analytics. While learning analytics explores, in particular, data provided by learners [21,22,23], teaching analytics investigates data describing teaching scenarios from the teacher’s perspective [24]. Even if teaching analytics is frequently considered part of learning analytics, it focuses on the teacher’s perspective on the design, development, and evaluation of methods and tools for teachers, so as to understand learning and teaching processes [25]. The method described in this article captures teaching practices and compares them with student perceptions.
If teaching–learning scenarios are well documented and corresponding evaluation data are available along criteria of good teaching [26], the following questions drive this paper: (1) How can a domain-specific data model be created for capturing domain-specific teaching sequences? (2) How can possible candidates for best practices or patterns be identified from captured teaching sequences? In the next section, we first describe the background of the approach for transforming a teaching domain into a database model and the pattern mining process. This is followed by a description of our proposed approach with an example from the domain of digital education. We close this paper with the description of a possible pattern and conclusions on the presented method.

2. Materials and Methods

This section describes the method used to capture and reuse teaching–learning scenarios. The approach follows three steps, as shown in Figure 1: (1) conceptualization of the teaching domain as a taxonomy; (2) transformation of the taxonomy into a graph database; (3) identification of possible candidates for teaching patterns.
In the next subsections, the first step of conceptualizing the domain of digital education is presented in Section 2.1. Next, it is shown how this concept is first transformed into an ontology and subsequently into a graph database, which is then used to capture instructional data from digital teaching (Section 2.2). In the last step, a method is presented to identify possible patterns of effective teaching settings from the graph database using an association rule algorithm (Section 2.3).

2.1. Conceptualizing a Domain-Specific Teaching Model

To identify domain-specific teaching–learning patterns, teaching sequences are collected in a database that is structured appropriately for the domain. The conceptual modeling process towards a data model for such a domain-specific database follows ontology engineering, for which different approaches are discussed [18,19,27,28,29,30]. A common approach is to first compile a glossary with a list of domain-related terms and their definitions, which are then arranged into a taxonomy that is subsequently transformed into an ontology. Frequently, a thesaurus (identification of synonyms and similarities) and topic maps (attributes and relationships) are considered separately, but in this case they are combined in the ontology step. This process is illustrated in Figure 2. Although the terms in a taxonomy are already connected (by generalization/specialization), the transformation into an ontology adds properties to the nodes and further relationships.
In Figure 3, a taxonomy is illustrated with an example from the domain of digital education. To obtain the broadest possible basis of agreement on the conceptual mapping of a teaching domain, domain experts are usually involved in the design process. In order to develop a consensus among different disciplines, methods such as the Delphi method can also be used [31]. As the description of this procedure is not part of this paper, the taxonomy given in Figure 3 serves here as an example, without claiming to cover the selected domain of digital education completely.
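To make the hierarchical structure concrete, the following minimal sketch (not the project’s actual taxonomy) represents such a hierarchy as nested Python mappings; apart from the classes named in the text (digital didactics, digital knowledge, mobile devices, features of instructional quality), the entries are hypothetical placeholders.

```python
# Minimal sketch of a taxonomy fragment as nested dicts (category -> sub-categories).
# Class names beyond those mentioned in the text are hypothetical placeholders.
taxonomy = {
    "DigitalDidactics": {
        "DigitalKnowledge": {
            "MobileDevices": {"Notebook": {}, "Tablet": {}},
        },
        "FeaturesOfInstructionalQuality": {
            "CognitiveActivation": {},
        },
    },
}

def subclass_pairs(tree, parent=None):
    """Yield (sub-class, super-class) pairs, i.e., the generalization/specialization edges."""
    for name, children in tree.items():
        if parent is not None:
            yield (name, parent)
        yield from subclass_pairs(children, name)

print(list(subclass_pairs(taxonomy)))
```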
The next step shows how this taxonomy is extended with further relations and attributes to form an ontology, which then also serves as the basis for the transformation into a graph database.

2.2. Transformation

While a taxonomy represents a domain in categories and sub-categories, an ontology adds further semantics to the taxonomy with relationships between the (sub-)categories and also defines further details. Using the Web Ontology Language (OWL, specified in [32]), the taxonomy is transformed into an OWL ontology with the modeling software Protégé (https://protege.stanford.edu accessed on 19 April 2021) [33], an open-source ontology tool with a visual editor supporting different modeling languages [34,35]. An OWL ontology basically consists of classes, properties, and optionally instances, defined by a list of triples (subject, predicate, object), where a subject stands in a relation to an object. Considering the root class digital didactics in Figure 3, multiple classes are defined on the next levels, for instance digital knowledge and one of its sub-classes, mobile devices. Whereas (sub-)classes describe concepts, OWL instances describe concrete specializations of a class. In the taxonomy above, no instances are specified, as in our approach the taxonomy serves as a model for the ontology. Based on this, the ontology depicted in Figure 4 was developed from the taxonomy (as can be seen, some sub-classes were added to FeaturesOfInstructionalQuality during the modeling process).
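As a rough illustration (not the authors’ Protégé workflow), an equivalent class hierarchy could also be created programmatically, for instance with the owlready2 library; the ontology IRI and file name below are assumptions.

```python
# Sketch: defining a few OWL classes corresponding to the taxonomy fragment with owlready2.
# The IRI is a placeholder; the paper's ontology was modeled in Protégé.
from owlready2 import Thing, get_ontology

onto = get_ontology("http://example.org/digital-didactics.owl")  # placeholder IRI

with onto:
    class DigitalDidactics(Thing): pass                        # root class from Figure 3
    class DigitalKnowledge(DigitalDidactics): pass
    class MobileDevice(DigitalKnowledge): pass                  # sub-class mentioned in the text
    class FeaturesOfInstructionalQuality(DigitalDidactics): pass

# Export as RDF/XML so it can be opened in Protégé or imported into Neo4j later on.
onto.save(file="digital-didactics.owl", format="rdfxml")
```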
Even though Protégé provides tools for simple data visualization and techniques to analyze data, graph databases offer a more intuitive way to explore a graph-based data representation [36]. As an ontology has a graph structure, a transformation from an ontology to a graph database is possible. From a technical perspective, an ontology created in Protégé can be converted into the graph database management system Neo4j (https://neo4j.com accessed on 19 April 2021) via the neosemantics (https://github.com/neo4j-labs/neosemantics accessed on 19 April 2021) toolkit [37]. In doing so, neosemantics creates a node in the graph database for each class of the ontology; if a class has a relationship to another class, another node is created, otherwise neosemantics adds a property to the node [16]. With the ontology ready, the next step is to import it via neosemantics into a graph database in Neo4j. Figure 5 shows the graph database imported into Neo4j, based on the ontology built in Protégé and then manually edited (the node “Lecture” was added for relating single lessons to lecture series). The class structure of the ontology thus builds the underlying framework of the database [36] and provides the corresponding classes for the actual data nodes. These nodes are connected, and properties specify their names. The database model also shows how it can be further extended. For example, when a new node “iPad” is added, it will be related to the node “MobileDevice”. Adding teaching data will hence increasingly create a dense network of mutually connected information. The more data are available, the more effective pattern mining can be performed. To demonstrate the approach in this paper, we keep the amount of data at this level for better comprehensibility.
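The import itself can be scripted; the sketch below (our illustration, not the paper’s code) runs the neosemantics procedures from Python via the official neo4j driver. Procedure names, the constraint syntax, and the connection details are assumptions that depend on the installed Neo4j and neosemantics versions.

```python
# Sketch: importing the OWL ontology into Neo4j via neosemantics (n10s) from Python.
# Connection details are placeholders; procedure and constraint syntax vary by version.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # neosemantics expects a uniqueness constraint on Resource.uri (syntax assumed for Neo4j >= 4.4)
    session.run(
        "CREATE CONSTRAINT n10s_unique_uri IF NOT EXISTS "
        "FOR (r:Resource) REQUIRE r.uri IS UNIQUE"
    )
    session.run("CALL n10s.graphconfig.init()")
    # Import the ontology exported from Protégé; classes become nodes as described above.
    session.run(
        "CALL n10s.onto.import.fetch($url, 'RDF/XML')",
        url="file:///digital-didactics.owl",
    )

driver.close()
```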

2.3. Pattern Mining

In this section, the method of identifying frequent associations in the graph database as possible pattern candidates is described. As introduced in the field of architecture [4,5], patterns express a relation between a certain context, a problem, and a solution on a medium level of abstraction. Hence, a pattern can be seen as a description of the resolution of a conflict that occurs in a certain context [38]. The basic idea of the pattern approach is to describe practices for reuse in a different context. All patterns share the same structure, which varies between domains but usually comprises a name, context, problem, forces, solution, resulting context, an example, and connections to other patterns. It was also emphasized that a pattern only comes alive if it is connected to other patterns and becomes part of a pattern language [4]. A pattern language therefore consists of multiple single patterns structured in a hierarchical way. In the educational context, this means that larger patterns describe, for example, a general methodological approach, and smaller patterns describe a specific practice in a teaching setting. Hence, a pattern is not an isolated entity and can only exist to the extent that it is supported by other patterns. As a consequence, when deriving frequently occurring practices from a pattern language, multiple patterns are usually combined. A recommended approach is to start with the pattern that is most likely to solve the problem and then look for other patterns related to it [4]. For example, the teacher starts from a pattern that best represents the overall goal (for example, achieving high cognitive activation of students) and then moves down to the relevant lower patterns in the hierarchy (for example, using a certain teaching method or technology).
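To make the listed structure tangible, the following sketch captures a pattern as a small Python data class; the field values and related pattern names are illustrative only and anticipate the example pattern in Section 4.1.

```python
# Sketch of the pattern structure (name, context, problem, forces, solution,
# resulting context, example, connections to other patterns) as a data class.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pattern:
    name: str
    context: str
    problem: str
    forces: str
    solution: str
    resulting_context: str
    example: str = ""
    related: List[str] = field(default_factory=list)  # names of connected patterns

activation = Pattern(
    name="Activation of Students",
    context="Introductory phase of a lesson with digital tools available",
    problem="Learners are not cognitively activated by purely receptive settings",
    forces="Cognitive activation outweighs aspects of classroom management",
    solution="Use problem-based tasks supported by mobile devices such as notebooks",
    resulting_context="Students discuss and think about the content in more detail",
    related=["Use of Mobile Devices", "Problem-Based Task"],  # hypothetical smaller patterns
)
```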
For the identification of new patterns, prior work in this field has already presented a variety of techniques, such as experience-based [8] and collaborative [39,40], but also semi-automated approaches [12]. Identifying teaching patterns that describe effective teaching requires not only describing and characterizing the implementation of teaching practice, but also evaluating its impact. This is the only way to determine whether, for example, a particular use of media or methods has had a specific effect on learning. Designing and implementing teaching sequences in a way that enables effective learning is not only related to the choice of digital technology, but also corresponds to learning outcomes and is grounded in theories of teaching and learning [41]. Using instructional sequences that have been proven to have high instructional quality and a high impact on learning, and making these sequences available to educators so that they can be easily adapted and reused, is therefore of particular interest to educational researchers [1,14]. This can be based on effective characteristics of teaching quality from educational research [42]. For example, aspects of cognitive activation have a greater impact on student outcomes than aspects of classroom management or supportive climate [43]. In addition, features such as the communication of learning objectives and coherence in instruction also have an important impact on students’ learning. Specifically, it was found that students who experienced a lesson with goal clarity and coherence in instruction were more likely to be motivated [26]. Other aspects that point to effective teaching are self-efficacy [44], motivation [45], and learning strategies [46]. Thus, assessing the impact of instruction on learners is an essential feature of the approach described in this paper for evaluating effective instructional strategies and methods. The proposed pattern mining process in this paper has four steps:
  • Identification of frequent associations (Apriori algorithm);
  • Description of a hypothesis;
  • Manual review and comparison of the lesson data;
  • Description of a possible pattern.
The first step is to analyze the data using association rule mining to identify frequent associations across all data in the graph database. In this work, the Apriori algorithm is used, although it should be noted that the algorithm can be computationally expensive, and improvements are possible depending on the application context. Given the simplicity of the example and the small amount of data, the basic variant is presented in this article. In step 2, the results from association rule mining are used to formulate hypotheses and draw initial conclusions about possible patterns. In the third step, the results are manually checked for plausibility against further material such as lesson plans or evaluation data (e.g., surveys, interviews), and conclusions are drawn. This forms the basis for describing a pattern along a structure (problem, context, solution).
The Apriori algorithm for association rule mining was initially presented in [47] for the prediction of customers’ shopping patterns (known from “you may also like to buy” recommendations) and has already been applied for pattern research in the field of digital humanities [12]. The idea of the Apriori algorithm is to first discover frequently occurring itemsets in a dataset (support) and then determine how likely one itemset is to occur given that another itemset is already present (confidence). For example, in the area of teaching, the frequencies of the combined use of different digital tools could first be identified and then evaluated in terms of their co-occurrence with the evaluation data, which may indicate possible patterns depending on support and confidence.
Support and confidence are defined as follows [12,48]: Let $P := \{P_1, P_2, \ldots, P_n\}$ be the set of all available parameters of a teaching sequence, where, for example, $P_1 := \{b_1, b_2, \ldots, b_k\}$ is a set of digital tools, $P_2 := \{s_1, s_2, \ldots, s_m\}$ is a set of evaluation data, and so on. The transaction of a lesson sequence is then given by $T_{LS} := T_1 \cup T_2 \cup \ldots \cup T_n$ such that $T_1 \subseteq P_1, T_2 \subseteq P_2, \ldots, T_n \subseteq P_n$. An association rule $X \Rightarrow Y$ consists of an antecedent $X$ (a set of items with $X \subseteq P$) and a consequent $Y$ (a set of items with $Y \subseteq P$). For the selection of association rules, the following metrics are used:
$$s = \mathrm{support}(X \Rightarrow Y) = \frac{|\{t \in T \mid (X \cup Y) \subseteq t\}|}{|T|}.$$
Then $s$ describes the frequency of the association rule $X \Rightarrow Y$ in the set of transactions $T$. A high value means that the rule describes a large part of the dataset.
$$c = \mathrm{confidence}(X \Rightarrow Y) = \frac{\mathrm{support}(X \cup Y)}{\mathrm{support}(X)}.$$
Then $c$ describes the proportion of transactions containing $X$ that also contain $Y$ and provides the estimate of a conditional probability.
In the search for association rules, lower bounds are defined for a minimum support and a minimum confidence that must be met. The identification of frequent association rules then proceeds in two steps: first, the itemsets that meet the minimum support are determined, and then association rules are searched for within this set based on the minimum confidence. Following the proposed four-step pattern mining approach described above, the next step (2) is to formulate hypotheses about the Apriori algorithm’s results with high support and confidence values and how they might be interpreted. To support these hypotheses, the instructional data collected in the graph database are then qualitatively examined again to derive further insights from the results (3). If the assumptions from the Apriori algorithm and the qualitative verification are confirmed, a pattern is described (4). In the next section, this process is demonstrated through an example.
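To connect the definitions with the mining step, the following self-contained sketch computes support and confidence for one rule over a small set of toy transactions (invented data, not the study’s dataset).

```python
# Toy example: compute support and confidence for the rule {Padlet, Notebook} -> {ActivationHigh}
# exactly as defined above. The transactions are invented for illustration.
transactions = [
    {"Padlet", "Notebook", "ActivationHigh"},
    {"Mentimeter", "Notebook", "ActivationLow"},
    {"Padlet", "Tablet", "ActivationHigh"},
    {"Notebook", "ActivationLow"},
]

X = {"Padlet", "Notebook"}       # antecedent
Y = {"ActivationHigh"}           # consequent

support_xy = sum((X | Y) <= t for t in transactions) / len(transactions)
support_x = sum(X <= t for t in transactions) / len(transactions)
confidence = support_xy / support_x

print(f"support = {support_xy:.2f}, confidence = {confidence:.2f}")
# -> support = 0.25, confidence = 1.00 (only the first transaction contains both X and Y)
```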

3. Experimental Results

To demonstrate the procedures described in the previous sections, this part of the article uses generated sample data for demonstration purposes, as not enough real data had been collected at the time of writing. The following example uses a small sample of teaching data to show how the association algorithm is used to identify frequencies, which in turn can be used to describe a pattern. The data describe five courses (Algorithms 2, Didactics 1, Statistics, Digital Education 1, and Analysis 1) and which digital tools and software were used during the introductory phase of the lessons.
First, the data were retrieved from the graph database (Neo4j) using the Cypher query language for further processing. The query in Listing 1 includes a sub-query (CALL), which first queries the relationship between “Seminar” and “Activation” with a MATCH clause. The result is passed via WITH to another query on the “App” node, and so on. At the end, the results are aggregated into a list with COLLECT and returned as d via RETURN. The output of the database query is shown in Figure 6.
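Since Listing 1 is only reproduced as an image, the following sketch merely approximates a query of that shape and runs it from Python with the official neo4j driver; node labels, relationship types, and property names are assumptions, not the paper’s actual schema.

```python
# Sketch: retrieving lesson transactions from Neo4j with a CALL sub-query,
# roughly following the description of Listing 1. Schema names are hypothetical.
from neo4j import GraphDatabase

QUERY = """
CALL {
    MATCH (s:Seminar)-[:HAS_ACTIVATION]->(a:Activation)
    WITH s, a
    MATCH (s)-[:USES]->(app:App)
    RETURN s.name AS seminar, a.level AS activation, collect(app.name) AS apps
}
RETURN collect([seminar, activation] + apps) AS d
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    record = session.run(QUERY).single()
    dataset = record["d"]   # one transaction (list of items) per lesson sequence
driver.close()

print(dataset)
```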
In order to analyze the data from Listing 1 using the Apriori algorithm, we chose Python with Jupyter Notebook and the library mlxtend (http://rasbt.github.io/mlxtend/ accessed on 19 April 2021), which also offers an association rule function. The code in Listing 2 shows how the association rules were calculated from the data retrieved above.
As the apriori function expects the data as a pandas DataFrame, the results of the database query are first transformed via the TransactionEncoder into a one-hot encoded format in lines 5–7 (which results in a boolean instead of a categorical representation). Figure 7 shows a snippet of the one-hot representation of the data.
In lines 9–10 of Listing 2, confidence is set as the metric for evaluating the association rules generated with the Apriori algorithm. In line 11, a filter is applied in order to identify rules whose consequent is a high student activation. The output is shown in Figure 8.
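Because Listing 2 is likewise only available as an image, the following is a hedged reconstruction of code of the kind the text describes, using the mlxtend functions mentioned above; item names such as “ActivationHigh” and the threshold values are assumptions.

```python
# Sketch: one-hot encode the transactions, mine frequent itemsets with Apriori,
# generate association rules by confidence, and filter for high student activation.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# `dataset` is the list of transactions retrieved from the graph database above.
te = TransactionEncoder()
onehot = te.fit(dataset).transform(dataset)            # boolean one-hot matrix
df = pd.DataFrame(onehot, columns=te.columns_)

frequent_itemsets = apriori(df, min_support=0.25, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.66)

# Keep only rules whose consequent contains the high-activation item.
high_activation = rules[rules["consequents"].apply(lambda c: "ActivationHigh" in c)]
print(high_activation[["antecedents", "consequents", "support", "confidence"]])
```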
The results in Figure 8 show frequent associations with a high impact on student activation. The support of each association is >0.25 and the confidence >0.66. In particular, the combination Padlet, Notebook –> high (support s = 25%, confidence c = 100%) showed a particularly strong association with cognitive activation, which can also be seen in the single associations of Padlet and Notebook in Figure 8. However, the support of this association rule indicates only low importance in this small dataset.

4. Interpretation of Results for Pattern Candidates

First, it must be mentioned that the following interpretations are only intended to demonstrate the procedure; no conclusions can be drawn from the small amount of data here. For this reason, the lower bound of the support was also set very low in order to be able to demonstrate results. To analyze and interpret these results, the four-step process of semi-automated pattern identification suggested above was applied: (1) identification of frequent associations (Apriori algorithm); (2) description of a hypothesis; (3) manual review and comparison of the lesson data; (4) description of a possible pattern. From the above results, the hypothesis could be derived that the use of a notebook with the app Padlet possibly results in higher cognitive activation in the introductory phase of the lesson (in this example from the domain of digital education). Although these interpretations are only suggestive of an effective use of technology, they can still serve as a basis for identifying, describing, and validating a pattern. In the next step, in order to validate this hypothesis and to describe a pattern, additional information is compiled from the available data. For example, the larger context of the use of notebooks during an introductory phase of instruction is considered. What was the teacher’s exact approach? In what sequence were the notebooks used in class? Were there any difficulties? How were these resolved?

4.1. Example Pattern

Based on the approach described above for describing individual patterns, an initial draft of a pattern candidate could be structured as follows:
Pattern Activation of Students
Problem
Instructional settings that contain problem-based tasks are required in order to activate learners [49]. The more students are engaged in the learning process through activating elements, the more likely they are to learn. In a learning environment where students are merely expected to absorb the presented content without thinking deeply about it, and thus without changing their actual world knowledge, they are more likely to forget it.
Context
Cognitive Activation was proven to have the greatest impact on learning as it enables deeper learning processes [43,50]. Digital tools (as for instance notebooks) enable a learning setting where students can be prompted to think about the topics in more detail.
Forces
Aspects of cognitive activation have a greater impact on students’ outcomes than aspects of classroom management. Therefore, cognitive activation should be considered as an essential part for initiating effective teaching.
Solution
Interactive student involvement can have an impact on students’ cognitive activation. The integration of mobile devices such as notebooks during the initial phase of a lesson can lead to higher levels of students’ activation.
Resulting Context
Providing a learning environment for students where they are faced with real problems and are thus activated in a way that they discuss and think about the content in more detail was shown to have the greatest impact on learning.
This pattern serves as an example; typically, patterns are described much more extensively. For further differentiation, additional categories or even examples are added to pattern descriptions during manual completion. These data are then drawn manually from the teaching data in the database that relate to the frequent features found (e.g., brainstorming app, Mentimeter, notebook).

4.2. Towards a Pattern Language

From the pattern representation, the essential aspects for practice can be quickly identified, and further patterns in the network can be taken into account. Considering this, a possible pattern language could appear as shown in Figure 9. The network is arranged into larger and smaller patterns, where, based on Alexander’s idea [4,5], the smaller patterns complete the larger ones. Assuming a teacher is searching for a good approach to activate students, the starting point would be the pattern “Activation of Students”, which the following patterns complete. The information contained therein then determines the individual procedures for teaching.
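As a simple illustration of such a network (pattern names other than “Activation of Students” are hypothetical), a pattern language can be treated as a directed graph and traversed from the pattern matching the overall goal down to the smaller patterns that complete it.

```python
# Sketch: a pattern language as a directed graph; edges point from larger patterns
# to the smaller patterns that complete them. Names are illustrative placeholders.
import networkx as nx

language = nx.DiGraph()
language.add_edges_from([
    ("Activation of Students", "Problem-Based Task"),
    ("Activation of Students", "Use of Mobile Devices"),
    ("Use of Mobile Devices", "Collaborative Brainstorming App"),
])

start = "Activation of Students"
print(sorted(nx.descendants(language, start)))   # all smaller patterns completing the start pattern
```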

5. Conclusions

Teaching, just like learning, is a highly individual process that depends on various aspects such as the teacher’s personality, framework conditions, or previous experiences. It is therefore all the more difficult not only to capture effective teaching scenarios for reuse, but also to make them available. The reason for the difficulties in reuse is that adaptation to new settings with different preconditions is often not straightforward. In this article, an approach was described for systematically identifying effective teaching scenarios from teaching data in order to subsequently make them available as patterns in a uniform description for easy reuse. We presented the initial idea of the basic approach and the proposed procedure, which can be used as a basis for direct follow-up work in this field. The proposed evaluation-based, domain-specific approach, along criteria of good teaching, allows a systematic mining for patterns in graph-based data for capturing teaching scenarios. Implementing association-based algorithms can suggest initial assumptions about frequent associations in the data for describing reusable patterns. Considering the underlying research questions: (1) How can a domain-specific data model be created to capture domain-specific instructional sequences? (2) How can possible candidates for best practices or patterns be identified from the captured instructional sequences? The first research question was addressed through the process of conceptualizing a domain-specific taxonomy and ontologizing it into the database model, and the second was elaborated through the implementation of an association-based algorithm and the proposed four-step process of pattern identification. This involved the identification of frequent association rules (Apriori algorithm), the description of a hypothesis, a manual review and comparison of the lesson data, and the description of a possible pattern. Based on the presented approach for describing and creating a pattern language, future work should extend the analysis by further steps and also develop an automated process for completing pattern descriptions from teaching data. In addition, the automated transfer of the patterns into a learning package for integration on learning platforms may be a possible avenue for future work. This study is limited in various points that must be addressed in future work. Since no real data were included to demonstrate the method, detailed insight into the teaching–learning scenario is needed for further conclusions in order to better capture learning sequences in the taxonomy. As a consequence, real data should be incorporated in future research in order to gain more insights. Once the limitations and future research aims have been addressed, the taxonomy and the graph database can offer new opportunities for innovation in teacher education.

Author Contributions

Conceptualization, B.S. and N.S.-B.; methodology, B.S.; software, B.S.; validation, B.S., N.S.-B.; formal analysis, B.S.; investigation, B.S.; resources, B.S.; data curation, B.S.; writing—original draft preparation, B.S. and N.S.-B.; writing—review and editing, B.S. and N.S.-B.; visualization, B.S.; supervision, B.S.; project administration, B.S.; funding acquisition, B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by BMBF (German Federal Ministry of Education and Research) grant number 01JA2027.

Data Availability Statement

Not applicable; the study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vercoustre, A.M.; McLean, A. Reusing educational material for teaching and learning: Current approaches and directions. Int. J. E-Learn. 2005, 4, 57–68. [Google Scholar]
  2. Fincher, S.; Utting, I. Pedagogical patterns: Their place in the genre. In ITiCSE 2002—Proceedings of the 7th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education; Association for Computing Machinery: New York, NY, USA, 2002; pp. 199–202. [Google Scholar]
  3. Magnusson, E. Pedagogical Patterns—A Method to Capture Best Practices in Teaching and Learning. 2013. Available online: https://www.lth.se/fileadmin/lth/genombrottet/konferens2006/PedPatterns.pdf (accessed on 19 April 2021).
  4. Alexander, C.; Ishikawa, S.; Silverstein, M. A Pattern Language: Towns, Buildings, Construction; University Press: Oxford, UK, 1977; p. 1171. [Google Scholar]
  5. Alexander, C. The Timeless Way of Building; University Press: Oxford, UK, 1979; p. 552. [Google Scholar]
  6. Gamma, E.; Helm, R.; Johnson, R.E.; Vlissides, J. Design Patterns. Elements of Reusable Object-Oriented Software, 1st ed.; Addison-Wesley Longman: Amsterdam, The Netherlands, 1995; p. 416. [Google Scholar]
  7. Bergin, J.; Eckstein, J.; Volter, M.; Sipos, M.; Wallingford, E.; Marquardt, K.; Chandler, J.; Sharp, H.; Manns, M.L. Pedagogical Patterns: Advice for Educators; CreateSpace: Scotts Valley, CA, USA, 2012. [Google Scholar]
  8. Standl, B. Conceptual Modeling and Innovative Implementation of Person-Centered Computer Science Education at Secondary School Level. Ph.D. Thesis, University of Vienna, Vienna, Austria, 2014. [Google Scholar]
  9. Baker, R. Data mining for education. Int. Encycl. Educ. 2010, 7, 112–118. [Google Scholar]
  10. Derntl, M. Patterns for Person Centered E-Learning. Ph.D. Thesis, University of Vienna, Vienna, Austria, 2006. [Google Scholar]
  11. Schön, M.; Ebner, M. Das Gesammelte interpretieren: Educational Data Mining und Learning Analytics. Available online: https://www.pedocs.de/volltexte/2013/8367/pdf/L3T_2013_Schoen_Ebner_Das_Gesammelte_interpretieren.pdf (accessed on 19 April 2021).
  12. Falkenthal, M.; Barzen, J.; Breitenbücher, U.; Brügmann, S.; Joos, D.; Leymann, F.; Wurster, M. Pattern research in the digital humanities: How data mining techniques support the identification of costume patterns. Comput. Sci. Res. Dev. 2017, 32, 311–321. [Google Scholar] [CrossRef]
  13. Weichselbraun, A.; Kuntschik, P.; Francolino, V.; Saner, M.; Dahinden, U.; Wyss, V. Adapting Data-Driven Research to the Fields of Social Sciences and the Humanities. Future Internet 2021, 13, 59. [Google Scholar] [CrossRef]
  14. Agostinho, S.; Bennett, S.J.; Lockyer, L.; Kosta, L.; Jones, J.; Harper, B. An Examination of Learning Design Descriptions in a Repository. 2009. Available online: https://ro.uow.edu.au/edupapers/115/ (accessed on 19 April 2021).
  15. Rich, P. The organizational taxonomy: Definition and design. Acad. Manag. Rev. 1992, 17, 758–781. [Google Scholar] [CrossRef]
  16. Moreira, E.J.V.F.; Ramalho, J.C. SPARQLing Neo4J (Short Paper). In Proceedings of the 9th Symposium on Languages, Applications and Technologies (SLATE 2020), Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Rende, Italy, 13–14 July 2020. [Google Scholar]
  17. Codd, E.F. A Relational Model of Data for Large Shared Data Banks. Commun. ACM 1970, 13, 377–387. [Google Scholar] [CrossRef]
  18. Niu, J.; Issa, R.R.A. Developing taxonomy for the domain ontology of construction contractual semantics: A case study on the AIA A201 document. Adv. Eng. Informatics 2015, 29, 472–482. [Google Scholar] [CrossRef]
  19. Giunchiglia, F.; Zaihrayeu, I. Lightweight Ontologies. 2007. Available online: http://eprints.biblio.unitn.it/1289/ (accessed on 19 April 2021).
  20. Lal, M. Neo4j Graph Data Modeling; Packt Publishing Ltd.: Birmingham, UK, 2015. [Google Scholar]
  21. Greller, W.; Ebner, M.; Schön, M. Learning analytics: From theory to practice–data support for learning and teaching. In International Computer Assisted Assessment Conference; Springer: Berlin/Heidelberg, Germany, 2014; pp. 79–87. [Google Scholar]
  22. Hao, X.; Han, S. An Algorithm for Generating a Recommended Rule Set Based on Learner’s Browse Interest. Int. J. Emerg. Technol. Learn. 2018, 13, 102–116. [Google Scholar] [CrossRef]
  23. Salihoun, M. State of Art of Data Mining and Learning Analytics Tools in Higher Education. Int. J. Emerg. Technol. Learn. 2020, 15, 58–76. [Google Scholar] [CrossRef]
  24. Prieto, L.P.; Sharma, K.; Dillenbourg, P.; Jesús, M. Teaching analytics: Towards automatic extraction of orchestration graphs using wearable sensors. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, Edinburgh, UK, 25–29 April 2016; pp. 148–157. [Google Scholar]
  25. Vatrapu, R.K. Towards semiology of teaching analytics. In Workshop Towards Theory and Practice of Teaching Analytics, at the European Conference on Technology Enhanced Learning, TAPTA; Citeseer: University Park, PA, USA, 2012; Volume 12. [Google Scholar]
  26. Seidel, T.; Rimmele, R.; Prenzel, M. Clarity and coherence of lesson goals as a scaffold for student learning. Learn. Instr. 2005, 15, 539–556. [Google Scholar] [CrossRef]
  27. Eine, B.; Jurisch, M.; Quint, W. Ontology-based big data management. Systems 2017, 5, 45. [Google Scholar] [CrossRef]
  28. Diogo, M.; Cabral, B.; Bernardino, J. Consistency models of NoSQL databases. Future Internet 2019, 11, 43. [Google Scholar] [CrossRef]
  29. Gilchrist, A. Thesauri, taxonomies and ontologies—An etymological note. J. Doc. 2003. [Google Scholar] [CrossRef]
  30. De Nicola, A.; Missikoff, M. A lightweight methodology for rapid ontology engineering. Commun. ACM 2016, 59, 79–86. [Google Scholar] [CrossRef]
  31. Clayton, M.J. Delphi: A technique to harness expert opinion for critical decision-making tasks in education. Educ. Psychol. 1997, 17, 373–386. [Google Scholar] [CrossRef]
  32. OWL 2 Web Ontology Language. Structural Specification and Functional-Style Syntax (Second Edition). 2012. Available online: https://www.w3.org/TR/owl2-overview/ (accessed on 19 April 2021).
  33. Musen, M.A. The protégé project: A look back and a look forward. AI Matters 2015, 1, 4–12. [Google Scholar] [CrossRef]
  34. Reshma, P.K.; Lajish, V.L. Ontology Based Semantic Information Retrieval Model for University Domain. Int. J. Appl. Eng. Res. 2018, 13, 12142–12145. [Google Scholar]
  35. Rezgui, K.; Mhiri, H.; Ghédira, K. An Ontology-based Profile for Learner Representation in Learning Networks. Int. J. Emerg. Technol. Learn. 2014, 9. [Google Scholar] [CrossRef]
  36. Wiegand, S. And Now for Something Completely Different: Using OWL with Neo4j. 2013. Available online: https://neo4j.com/blog/using-owl-with-neo4j/ (accessed on 19 April 2021).
  37. NSMNTX—Neo4j RDF and Semantics Toolkit. 2021. Available online: https://neo4j.com/nsmtx-rdf/ (accessed on 19 April 2021).
  38. Caiza, J.C.; Martín, Y.S.; Del Alamo, J.M.; Guamán, D.S. Organizing design patterns for privacy: A taxonomy of types of relationships. In Proceedings of the 22nd European Conference on Pattern Languages of Programs, Irsee, Germany, 12–16 July 2017; pp. 1–11. [Google Scholar]
  39. Köppe, C. Using pattern mining for competency-focused education. In Proceedings of Second Computer Science Education Research Conference—CSERC’12; ACM Press: Wroclaw, Poland, 2012; pp. 23–26. [Google Scholar] [CrossRef]
  40. Köppe, C.; Nørgård, R.T.; Pedersen, A.Y. Towards a pattern language for hybrid education. In Proceedings of the VikingPLoP 2017 Conference on Pattern Languages of Program, Grube, Schleswig-Holstein, Germany, 30 March–2 April 2017; pp. 1–17. [Google Scholar]
  41. Biggs, J.; Tang, C. Teaching for Quality Learning at University; Open University Press: Milton Keynes, UK, 2003. [Google Scholar]
  42. Seidel, T.; Shavelson, R.J. Teaching effectiveness research in the past decade: The role of theory and research design in disentangling meta-analysis results. Rev. Educ. Res. 2007, 77, 454–499. [Google Scholar] [CrossRef]
  43. Dorfner, T. Instructional Quality Features in Biology Instruction and Their Orchestration in the Form of a Lesson Planning Model. Ph.D. Thesis, LMU, Munich, Germany, 2019. [Google Scholar]
  44. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215. [Google Scholar] [CrossRef]
  45. Deci, E.L.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
  46. Friedrich, H.F.; Mandl, H. Lernstrategien: Zur Strukturierung des Forschungsfeldes. Handb. Lernstrategien 2006, 1, 23. [Google Scholar]
  47. Agrawal, R.; Srikant, R. Fast algorithms for mining association rules. In Proceedings of the 20th VLDB Conference, Santiago de Chile, Chile, 12–15 September 1994; Volume 1215, pp. 487–499. [Google Scholar]
  48. Beierle, C.; Kern-Isberner, G. Maschinelles Lernen. In Methoden Wissensbasierter Systeme: Grundlagen, Algorithmen, Anwendungen; Springer: Wiesbaden, Germany, 2019; pp. 99–160. [Google Scholar] [CrossRef]
  49. Kiel, E. Basiswissen Unterrichtsgestaltung. 1. Geschichte der Unterrichtsgestaltung; Schneider Verlag Hohengehren: Baltmannsweiler, Germany, 2011. [Google Scholar]
  50. Yen, S.J.; Lee, Y.S.; Wu, C.W.; Lin, C.L. An efficient algorithm for maintaining frequent closed itemsets over data stream. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2009; pp. 767–776. [Google Scholar]
Figure 1. Process of the approach.
Figure 2. Process of ontologizing a domain.
Figure 3. Example of a taxonomy representing digital teaching.
Figure 4. OWL ontology visualization from Protégé.
Figure 5. Model of the graph database.
Figure 6. Table output of Cypher query.
Figure 7. Snippet of the one-hot representation of the data.
Figure 8. Output of the algorithm in Listing 2.
Figure 9. Example pattern language.
Listing 1. Cypher query and output.
Listing 2. Calculation of association rules with Python.