Article

Investigating the Knowledge Co-Construction Process in Homogeneous Ability Groups during Computational Lab Activities in Financial Mathematics

1 Department of Molecular Biotechnology and Health Sciences, University of Turin, 10126 Torino, Italy
2 Department of Mathematics “Giuseppe Peano”, University of Turin, 10123 Torino, Italy
3 School of Mathematics and Statistics, University College Dublin, D04 V1W8 Dublin, Ireland
* Authors to whom correspondence should be addressed.
Sustainability 2023, 15(18), 13466; https://doi.org/10.3390/su151813466
Submission received: 31 July 2023 / Revised: 24 August 2023 / Accepted: 4 September 2023 / Published: 8 September 2023

Abstract

Inclusive computational practices are increasingly being employed to enrich knowledge and facilitate sensemaking in STEM education. Embedding computational activities in Computer-Supported Collaborative Learning environments can enhance students’ experiences. This study aimed to investigate the knowledge co-construction process within tailored student-led computational lab activities designed for a Computational Finance module. In particular, it focused on the effects of different lab practices and of group composition on knowledge co-construction. The groups designed for the lab activities were internally homogeneous in terms of student ability. The sample consisted of 396 answers to a weekly survey filled out by all 50 of the undergraduate students who attended the module during the AY 2020/2021. The qualitative analysis relied on an adapted version of the Interaction Analysis Model designed by Gunawardena and colleagues for collaborative knowledge construction. Quantitative analyses were then conducted to study how the different lab practices and the composition of the groups affected the interaction. The findings revealed that, although the lower phases were the most prevalent, significant negotiations of meaning and discussions were activated, especially in tasks guiding towards sensemaking. Furthermore, the groups composed of lower-achieving students were the most engaged in negotiating and improving understanding as a result of the group interaction.

1. Introduction

Nowadays, science and Mathematics increasingly rely on computer-based methodologies. Indeed, there is a growing interest in the integration of computational methods within the scientific field, as it is recognized that they can play a significant role in enhancing knowledge, understanding and sensemaking. This phenomenon serves as the driving force behind a burgeoning body of literature that has delved into the role of computation in STEM subjects and computational thinking practices [1,2,3,4,5,6,7,8]. In this scenario, Computational Finance emerges as a novel interdisciplinary field within Applied Mathematics, carrying significance for the mastery of both Financial Mathematics and computational thinking. While the proliferation of Financial Mathematics programs has been extraordinary in recent years, it is crucial to recognize that research focusing on the Computational Finance curriculum remains limited [9,10,11].
In STEM education, thinking about ways to enhance and improve knowledge acquisition and understanding is crucial. Socio-constructivist theories argue that knowledge is not merely acquired by individuals on their own but, rather, is actively and collaboratively constructed through social interactions and the sharing of different viewpoints and experiences. Learners are considered active knowledge constructors rather than passive information receivers (see [12,13]). According to this perspective, learning and understanding emerge through active engagement with others, as individuals participate in interactions, discussions, meaning negotiations, and problem-solving activities. Computer-Supported Collaborative Learning (CSCL) environments are particularly suitable for promoting this kind of collaboration-based knowledge construction.
In the last few decades, there has been increasing investigation in the CSCL field, which explores how digital technology can support collaboration and learning in multiple domains [14]. As pointed out by Zabolotna et al. [15], a current challenge is how to distinguish and understand the interplay between knowledge construction and group-level regulation that takes place in social interactions between peers in collaborative learning settings.
When designing collaborative activities, one critical issue is group composition. In the literature there is disagreement about the effectiveness of homogeneous versus heterogeneous ability grouping; in both cases, however, collaborative learning seems to be particularly effective for low-achieving students [13]. It is important to note that there are no studies relating students’ achievement level in homogeneous groups to the level of knowledge co-construction reached in collaborative activities. This paper intends to fill this gap.
This study aims to
  • Extend the analysis conducted by Barana et al. in [9] by studying the co-construction of knowledge of undergraduate students in Financial Mathematics within a CSCL environment. To accomplish this, we revised the Interaction Analysis Model (IAM) developed by Gunawardena and colleagues for examining the social construction of knowledge [16] and adapted it to the specific context of Computational Finance.
  • Investigate the impact of computational practices on collaborative knowledge construction, analyzing the levels of collaboration achieved through different student-led computational lab activities designed for a Computational Finance module.
  • Investigate the impact of group composition on the co-construction of knowledge. Specifically, we examine whether and how an internally homogeneous composition of groups in terms of students’ Grade Point Average (GPA) can affect the achievement of higher-level knowledge co-construction.
The data collection was conducted within the labs of the Computational Finance module ACM30070, School of Mathematics and Statistics, University College Dublin (UCD), in the academic year 2020/2021. Computational Finance modules are keystone modules in any Financial Mathematics curriculum, both to master the subject and to acquire competences that are fundamental in any quantitative finance job. A total of 10 labs took place in Spring 2021. The groups participating in the lab activities were homogeneous, i.e., their members had similar GPAs; in addition, the presence of Applied Computational Mathematics (ACM) and Financial Mathematics (FM) students was balanced within groups. Moreover, gender and a possible minority balance were considered while creating the groups. For our study, we analyzed the answers to a weekly survey that all 50 students attending the ACM30070 module completed after each lab. To address the study aims, we performed a qualitative analysis of those answers, starting from the adapted IAM framework, and then statistically validated our findings with a Fisher–Freeman–Halton Exact Test based on 10,000 tables sampled according to the Monte Carlo method.
This research study also contributes to fostering the diversity and inclusion of students in Mathematics programs. A person’s discipline identity is deeply related to one’s perceived self-association with the discipline, which is the strongest predictor of a person’s future career path [17]. Research has demonstrated that learning environments that provide spaces to develop students’ interests, understanding outside of the formal learning environment, and metacognitive reflection can reduce the lack of gender and socioeconomic diversity and give rise to a clear sense of belonging and community [18,19]. Moreover, teaching Mathematics through more open-ended, collaborative, problem-solving approaches contributes to the achievement of more equitable results and a reduction in the gender gap [20,21]. Some authors have also proposed the use of computers as a way to overcome barriers to equity, since technologies can offer valid feedback to push the learner forward, provide realistic data, and display tasks and processes using different registers and media, thus making Mathematics more relevant and realistic [19,22,23].
This paper is structured as follows: Section 2 introduces the literature background on which the study is grounded and the framework through which the analyses are carried out; Section 3 discusses the context of the research, the data-collection process and the analysis methods implemented; Section 4 presents the results obtained and provides explanations for these findings, engaging in a discussion of their meaning and significance; and Section 5 sets out the conclusions drawn from the results obtained and presents possible further developments of the research.

2. Theoretical Framework and Literature Review

2.1. Computation, Computational Thinking and Sensemaking

In recent decades, there has been a growing focus on the role of computation in Mathematics, driven by the continuous development of new technological tools and resources. Lockwood et al. [1] sought to show evidence that computing is an inherent part of doing Mathematics and is a practice that students should develop. While acknowledging the potential limitations of their choice, they employed “computation” synonymously with “computing”. They defined computing in Mathematics as follows: “the practice of using tools to perform mathematical calculations or to develop or implement algorithms in order to accomplish a mathematical goal” [1]. This definition shows the relevance of two main components in computing: calculations and algorithms. As they pointed out, examples of such computation might include developing an algorithm to analyze data, implementing a procedure to generate examples, or writing an algorithm to estimate the errors of numerical calculations. Lockwood et al. [1] interviewed six mathematicians to learn how they perceived the role of computing in their work. The interviews revealed that mathematicians utilize computing in their research to facilitate experimentation, test conjectures, approximate, and visualize. They also discovered that computing reinforces desirable habits of mind and additional disciplinary practices. Moreover, it is closely intertwined with theoretical Mathematics and is something that future generations of Mathematics students should be able to do.
The literature related to the embedding of computational activities into student learning in STEM subjects is wide (e.g., [2,3,4,5,6,7]). Donnelly et al. [2] emphasized the significant role of technology and computation in supporting students’ science learning. Notably, research-based aspects such as meaningful and authentic science experiences, powerful visualizations, collaboration, and autonomy/meta-cognition have been shown to impact students’ learning outcomes positively. In [3], Caballero et al. pointed out that it is important to distinguish between computational modelling and simply writing program statements. They claimed that computational modelling is “the expressing, describing, and/or solving of a science problem by using a computer. It requires additional steps beyond simple programming, such as contextualizing the programming task and making sense of the physical model” [3]. Moreover, one might learn to develop new models to solve new problems. Students who learn to use computational modelling and are confident in their abilities will be better prepared to solve challenging problems. Exploring strategies for incorporating computational perspectives into student learning is the main topic in [4]. Here, Caballero and Hjorth-Jensen introduced, for the first time, the notion of inclusive computational practices, by defining computing as follows: “solving scientific problems using all possible tools, including symbolic computing, computers and numerical algorithms, experiments (often of a numerical character) and analytical paper and pencil solutions” [4]. Their paper demonstrates the efficacy of integrating standard analytical work with diverse algorithms and a computational approach to enhance students’ comprehension of the scientific method and foster a deeper understanding of key concepts.
Implementing computation in STEM education also means considering computational thinking and its importance as a core scientific practice. Recently, there has been an increasing interest in computational thinking. This interest stems from the recognition that knowledge and skills derived from computer science possess wide-ranging applications that can benefit individuals from various fields. In 2006, describing computational thinking, Wing stated (see [24]): “computational thinking is a fundamental skill for everyone, not just for computer scientists. To reading, writing, and arithmetic, we should add computational thinking to every child’s analytical ability. […] Computational thinking involves solving problems, designing systems, and understanding human behavior, by drawing on the concepts fundamental to computer science. Computational thinking includes a range of mental tools that reflect the breadth of the field of computer science.” A more complete definition was given by Wing in a 2014 blog post [25]: “Computational thinking is the thought processes involved in formulating a problem and expressing its solution(s) in such a way that a computer—human or machine—can effectively carry out. Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine. This latter point is important. First, humans compute. Second, people can learn computational thinking without a machine. Also, computational thinking is not just about problem solving, but also about problem formulation.”
With the increasing prevalence of computation in science and Mathematics and the consequent transformation of these disciplines, the need arises to establish a clear definition of computational thinking within these fields. This is the goal that Weintrop et al. sought to achieve in their work [8]. The authors proposed a definition of computational thinking for Mathematics and science in the form of a taxonomy, consisting of four main categories: data practices, modelling and simulation practices, computational problem-solving practices, and systems thinking practices. Each of these categories was composed of a subset of five to seven practices. As the authors pointed out, although they presented the taxonomy as a set of distinct categories, the practices were highly interrelated and dependent on one another. In practice, they are often used in conjunction to achieve specific scientific and mathematical goals.
Computational thinking is applied in several activities in STEM education at different levels, from kindergarten to teacher training (e.g., [26,27,28,29,30,31]).
By applying computation and computational thinking, the process of sensemaking can be greatly enhanced. An example of how computation can facilitate sensemaking in STEM subjects can be found in [32]. Indeed, recently, sensemaking has become a fast-growing topic in science education research, as pointed out by Odden and Russ in [33]. In that paper, through reviewing the literature on sensemaking, they found three primary reasons that researchers are interested in this view of science learning. First, researchers have argued that sensemaking promotes “deep” learning, allowing students to build connections more easily between the new and existing knowledge and encouraging interest in the subject or task. Second, and relatedly, researchers have argued that when students make sense of ideas, it facilitates the process of transferring those ideas to new and different domains. Third, sensemaking is essential to the way that scientists and engineers construct knowledge, and so researchers have argued that promoting it in the science classroom can bring “school science” more in line with disciplinary practices. The authors then described sensemaking by trying to answer the following question: when people are sensemaking, what are they doing? In doing so they reached a proposed definition of sensemaking as follows: “sensemaking is a dynamic process of building or revising an explanation in order to “figure something out”—to ascertain the mechanism underlying a phenomenon in order to resolve a gap or inconsistency in one’s understanding. One builds this explanation out of a mix of everyday knowledge and formal knowledge by iteratively proposing and connecting up different ideas on the subject. One also simultaneously checks that those connections and ideas are coherent, both with one another and with other ideas in one’s knowledge system.” They explained that sensemaking begins when something is puzzling or unexpected, there is some gap in existing knowledge, individual facts or ideas conflict with one another, or some combination of these. The next step is to start projecting ideas for why this would be the case: it is essentially a process of brainstorming in which all the ideas are linked in a chain. Checking that ideas are consistent is the following step. The final step is a coherent explanation that fills in the gap in knowledge or resolves the inconsistency—at this point, things “make sense”. During the sensemaking process, students engage in building and refining their mental models. They actively draw on and integrate various representations and external explanations to enhance their understanding. This process can be strengthened by collaboration. Indeed, collaboration and sensemaking together create a synergetic relationship, where collaboration improves the quality and depth of sensemaking by leveraging different perspectives and knowledge that can be shared.

2.2. Collaborative Learning and the Role of Technology

Learning can be strongly influenced by collaboration. As social constructivist theories argue, knowledge is actively and collaboratively constructed through social interaction. “Social interaction” does not mean a mere process of sharing individually stored knowledge, but means being actively engaged with others in discussions, integration of different perspectives, negotiations of meaning, and problem-solving activities [12,13,34].
In this regard, Roschelle and Teasley, in [35], defined collaboration as “a coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem”, and they focused on the mutual engagement of participants in a coordinated effort to solve problems together. In particular, they studied the process of collaboration by using a microanalysis of one dyad in the context of problem-solving activities. They argued that collaborative problem solving takes place in a negotiated and shared conceptual space, constructed through the external mediational framework of shared language, situation, and activity—not merely inside the cognitive contents of each individual’s head. As their analysis makes clear, the process of collaborative learning is not homogeneous or predictable and it does not necessarily occur simply by putting two or more students together. In other words, collaboration is not simply a result of individuals being physically co-present; it requires a deliberate and ongoing effort from them to coordinate their language and activities with respect to shared knowledge. The aid of technology can then facilitate collaboration. Indeed, they see the support of the computer as a contributing resource that mediates collaboration in several ways, such as a means for disambiguating language, establishing shared references, resolving impasses, testing different strategies, seeing what works, and suggesting interpretations.
The application of technology in support of collaborative learning is the main topic in [36]. That paper focused on the ways computers support social interaction, cooperation, and collaboration for learning and knowledge building. The authors reviewed research conducted over the past 20 years on the application of technology in support of collaborative learning in higher education. Thanks to their review, they identified four instructional reasons for incorporating technology to enhance collaborative learning, namely:
  • To prepare students for the knowledge society (collaboration skills and knowledge creation).
  • To enhance student cognitive performance or foster deep understanding.
  • To add flexibility of time and space for cooperative/collaborative learning.
  • To foster student engagement and keep track of student cooperative/collaborative work (online written discourse).
Furthermore, they put emphasis on the emerging paradigm of Computer-Supported Collaborative Learning (CSCL). As they state, CSCL is a dynamic, interdisciplinary, and international field of research focused on how technology can facilitate the sharing and creation of knowledge and expertise through peer interaction and group learning processes. The CSCL field of inquiry includes a range of situations in which interactions take place among students using computer networks to enhance the learning environment. It includes the use of technology to support asynchronous and synchronous communication between students on campus as well as students who are geographically distributed. The primary aim of CSCL is to provide an environment that supports collaboration between students to enhance their learning processes, facilitate collective learning, and foster group cognition.
As pointed out in [14], the CSCL field is part of the overall development of technology, culture, and society. The authors also agreed that CSCL provides new learning designs that support collaboration and learning in multiple domains and rigorous analyses of emerging social practices supported by digital technology. CSCL, spanning educational, social, learning, and computer sciences, seeks to discover the dynamics of social interactions and collaborative meaning making and sensemaking when students engage in group work, as well as the role played by technology in this process.
When designing collaborative activities, one critical issue to address is group composition. There are studies that explore the differences between homogeneous and heterogeneous groups from several perspectives (e.g., [13,37,38,39]). Some studies have suggested that homogeneous groups are effective in promoting differentiated instruction (e.g., [40]) and that they can enhance engagement, especially in lower-achieving students (e.g., [13]). On the other hand, a consistent body of research supports the use of heterogeneous grouping to enhance learning, since the lower-achieving students can learn from the higher-achieving ones [37,38,39]. However, many studies have found no difference between homogeneous and heterogeneous grouping, observing only that, in both cases, collaborative learning helps low-ability students more [41,42].
In this scenario, investigating collaboration in the knowledge construction process within computer-supported environments has emerged as a significant challenge in contemporary educational research. As a consequence, models such as the Interaction Analysis Model [16] have been designed.

2.3. The Interaction Analysis Model

Gunawardena et al., in [16], designed a framework to examine and understand the patterns and dynamics of interaction and collaboration within computer-supported learning environments. In particular, they put emphasis on the knowledge co-construction process that occurs during asynchronous computer-mediated conferencing (CMC). In their work, the authors critically examined interaction analysis techniques that had been developed for the analysis of computer conferences and then designed a new framework for analyzing the quality of the learning experience in debates, the Interaction Analysis Model (IAM).
In describing interaction, they stated that it is “the process through which negotiation of meaning and co-creation of knowledge occurs. […] Interaction is the essential process of putting together the pieces in the co-creation of knowledge” [16]. Based on this definition, they analyzed the entire transcript of a global online debate conducted through computer conferencing among researchers and education professionals. They outlined five phases of knowledge co-construction. Each phase has between three and five sub-phases describing the operations that occur. The phases are the following:
  • Sharing and comparing of information;
  • Discovering and exploring dissonance or inconsistency among ideas or statements;
  • Negotiation of meaning/co-construction of knowledge;
  • Testing and modification of proposed synthesis or co-construction;
  • Agreement statement(s)/applications of newly constructed meaning.
This model is able to effectively outline the evolution of the sensemaking process described in [33]. Indeed, sensemaking begins by identifying dissonances, which may arise due to a lack of knowledge, or inconsistencies and conflicting ideas: by sharing information within groups, individuals can discover these kinds of dissonances. The next step in sensemaking involves projecting ideas to explain these dissonances, necessitating a negotiation of meaning. This is followed by a test, such as checking that ideas and knowledge constructed through discussion are consistent. In the end, thanks to the group, new knowledge is co-constructed and things “make sense”.
A critical analysis of the use and adequacy of the IAM for investigating the knowledge construction process (with a special interest in emergent technologies) was carried out by Lucas et al. in [43]. As the authors state, the IAM is one of the most frequently used instruments in the study of knowledge construction, and the extent of its use makes it one of the most coherent and empirically validated instruments in the research field. In an attempt to discuss the applicability and suitability of the model, the authors conducted a literature review in different international online databases. According to the authors’ final considerations, the studies presented provide evidence that complex thinking and higher phases of knowledge construction can be achieved with different types of communication tools, given that the activities are appropriately designed. In essence, their findings support the suitability of the model for analyzing knowledge construction but also indicate the need to examine how learning is orchestrated and to re-evaluate certain aspects of the model itself.
Although the IAM was created to investigate knowledge construction in collaborative learning environments specifically mediated by computer communication, the model has been, and continues to be, applied in different educational contexts and not only to asynchronous communication. Such settings, in particular, were exclusion criteria in the review by Lucas et al. [43]. However, as stated by Floren et al. in [44], the IAM also enables the analysis of activities and complex interactions occurring outside the context of asynchronous communication and is potentially suitable for various educational settings.
We conducted a literature review to gain a more general view of the contexts of application of the model. We carried out this review in different international online databases, i.e., Scopus, Web of Science and Google Scholar. We selected 21 studies, published between 1997 and 2023, that used the IAM as an analysis framework (see [9,15,16,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61]). In the various studies, the IAM was used for different aims. For example, it was used to investigate patterns of knowledge construction [15,50,54,59], and to study the extent of different forms of interaction involved [45,49,52,55,56] (e.g., small group collaboration vs. whole class discussions; with or without the support of mobile devices), the applicability of the model itself in an in-person setting [44], and the impact of group composition [53]. None of the studies we selected focused on the impact of group composition by considering the homogeneity or heterogeneity of participants’ skills, even though this is an interesting topic to examine. One of the purposes of this research was to investigate this precise aspect. The contexts of its application have been very varied, e.g., in majors of education, e-commerce, computer science, medicine, veterinary, design, and finance, but also in primary schools. Participants from early primary grades up to PhD students and teachers have been involved. They have been actively engaged in curricular activities, problem-solving activities, or group project tasks. Both small samples (4–6 people) and very large samples (over 300 people) have been analyzed. It is interesting to note that the only study conducted in the Financial Mathematics setting was the pilot study of this research, i.e., [9], which highlights the lack of studies in this area. This study, along with [9], contributes to addressing this existing gap.
In most of the studies included in our review, the IAM was used to analyze asynchronous discussions. However, there is evidence of studies using the IAM also for synchronous discussions and in-person activities. This wide range of applications observed aligns closely with the original proposal put forward by the authors of the IAM in [16,43], namely, to employ the model in other settings.
For both asynchronous and synchronous settings, transcripts of the discussions were used as sources to conduct the analyses. For in-person activities, however, video and audio recordings, student interviews, and surveys were used for analysis. As pointed out by Floren et al. in [44], this can be time-consuming and labor-intensive, but the feasibility of the IAM has been shown to be satisfactory.
It is interesting to note that although the IAM was designed in 1997, it is still used nowadays, and in increasingly varied and different contexts. In particular, it is beginning to be implemented more and more in in-person settings in recent years.
In all studies in which frequencies were presented, phase 1 was the most frequent one. Higher phases (i.e., phase 4 and phase 5) tended to have low frequencies; in some cases, they did not occur at all (see [45,46,47,49,54]).
In several studies, the IAM framework was modified. In fact, phases were added to categorize interventions that were not relevant to the task context (see [50,54,55]) or that were not aimed at sharing knowledge but only providing background or emotional information (see [46,58]). In other studies, descriptions of subphases were changed (e.g., see [15]), or, based on the IAM, the authors proposed a new framework (see [60]).

2.4. Adaptation of the Interaction Analysis Model

After a first explorative analysis of our data, we revised the IAM framework to better adapt it to our setting, namely, synchronous lab activities in Financial Mathematics. In particular, we modified some subphases that were not effectively meaningful in our context. We introduced a new phase, named phase 0, identifying responses in which students did not state anything about group interaction. Concerning the other phases, we worked on the subphases as follows. Subphases 2A, 2B, 2C, 3A, 3B, 3C, 5A, and 5B remained the same, because we found them significant in our context. We deleted some subphases because they were not meaningful in our context. Specifically, we removed 4B, 4C, 4D, and 4E: students attending the third year of the BSc in Financial Mathematics at UCD have no direct industry experience, so they did not have the opportunity to perform “testing against cognitive schema, personal experience”. Moreover, the majority of financial data providers are not open and free to access; instead, they require a paid subscription. Those students therefore had no or very limited access to real-world financial data, so “Testing against formal data collected” was hardly feasible. Finally, in this module there was no research component, thus “Testing against contradictory testimony in literature” was hardly feasible, unless a student independently conducted research on a specific topic. A few additional changes were made to the other subphases. We added some details in subphases 1A, 4A, and 5C, and we merged 1B with 1C, and 1D with 1E. Finally, we added a new subphase 4B starting from the description of phase 4 in [15]. The resulting “new” framework is shown in Table 1; henceforth, we will refer to it as “the adapted framework”.

3. Setting and Research Methodology

In this study we aim to answer the following research questions:
(RQ1)
How do the adapted phases of the Interaction Analysis Model occur, and are they suitable for detecting the knowledge co-construction process in the context of synchronous lab activities in Financial Mathematics?
(RQ2)
How do the characteristics of computational lab activities affect the interaction and, thus, the achievement of higher-level knowledge co-construction?
(RQ3)
How does the composition of the groups, specifically considering the GPA, affect the interaction and, thus, the achievement of higher-level knowledge construction?
To outline the progression of the analysis conducted to meet the research goals, we have structured this section into three subsections: Section 3.1 describes the setting of the research study, Section 3.2 details the data collection process, and Section 3.3 elucidates the data-processing methods employed.

3.1. Setting

Computational Finance is a relatively new, highly interdisciplinary discipline that is very often considered a sub-field of Applied Mathematics, since it studies the mathematical models underlying the evolution of stochastic financial variables. It should be acknowledged that, while it is an emerging discipline, it is an under-researched area from an educational perspective.
In this study, we aim to investigate the knowledge co-construction process and the impact of group composition within tailored student-led computational lab activities designed for a Computational Finance module. In particular, the analyses were conducted within the Computational Finance module (ACM30070) for the academic year 2020/2021.
The ACM30070 module is delivered by the School of Mathematics and Statistics at University College Dublin (UCD). It is a core module for stage 3 students studying for their BSc in Financial Mathematics (FM), and it is optional for stage 3 and 4 students studying for their BSc in Applied and Computational Mathematics (ACM). This module was designed from scratch in 2016 and delivered on a pilot basis in 2017. It was then re-designed several times according to the lecturer’s teaching experience and students’ feedback. In the spring semester 2020/2021—the academic year in which the data of this study were collected—it was delivered as a 5-credit module; 50 students attended the module, 35 FM and 15 ACM. Further improvements to the design were made, leading up to the Advanced Computational Finance module (ACM30110), which has been delivered as a 10-credit module since the AY 2021/2022. A complete description of the module design process, its purpose and objectives and the learning objectives within the context of the BSc program can be found in [10].
It is worth mentioning here that, drawing on the most recent STEM education literature, practices and activities were developed in such a way as to reciprocally use computational thinking to enhance Financial Mathematics mastery and the Financial Mathematics context to foster computational thinking.
In 2020/2021, the module was delivered in 12 weeks, each consisting of four slots of 50 min (two lectures, one tutorial, one lab). The computational practices, delivered during the weekly labs, were initially designed to be offered in a face-to-face setting. In March 2020, due to unexpected COVID-19 restrictions, the module was suddenly moved online, and the lecturer had to modify the teaching delivery accordingly. In the AY 2020/2021, the module was delivered completely online due to the ongoing COVID-19 restrictions. The activities had already been adapted the year before so that they could be delivered either face-to-face or online. The classes were live streamed on Zoom, and they were recorded to take into account students’ connection issues.
A typical week of classes has been described in [10]. In summary, the lectures are devoted to the financial modelling part, while tutorials and labs are intended for several computational practices and problem-solving activities. Tutorials start with a walk-through of particular pieces of the pre-class assignment, which helps put each student on an equal footing before starting the day’s activities. Then, computational practices are performed individually or in small groups, so that students learn through problem-solving and comparison with peers. Lab activities are fully student-led, and the lecturer, the tutor and the teaching assistant facilitate the practices. The same lab structure is proposed each week, but it is applied to different kinds of practices. A detailed description of the design of lab practices and examples of case studies can be found in [11]. For labs, there is no pre-class assignment, and practices are entirely covered in class; students can download the working questions only one hour before the class to read them. During the first part of the lab, students work in groups on modelling, pseudo-coding, data analysis and other related activities. In the second part, each group chooses a representative to present their outcomes to the whole class. The lecturer and the tutor act as moderators of discussions and intervene in the students’ conversations only if required by students.
After each lab, students are invited to fill out a Google Form survey to critically reflect on the activities performed in class. These surveys acted as sources for the analyses presented in this study.
In the AY 2020/2021, to enable the groups to work on the lab activities, “breakout rooms” were used. To facilitate class discussions, a “poll” was used. When the COVID-19 restrictions ended, the module went back to face-to-face delivery (spring semester of the AY 2021/2022).
The groups’ activities, coupled with constant moderator interaction, kept students constantly engaged with the work and contributed to learning construction during the term. Groups were constructed by the lecturer and stayed the same for the term. Group participants had a similar level of competence, i.e., they had a similar GPA. A total of 7 groups, from A to G, were designed for lab activities. Groups A, B and C were those composed of students with the highest GPA. These groups had an average constituent GPA of 3.96, 3.64 and 3.45, respectively. Considering the UCD Module Grades and Grade Point Value correspondence, we are referring to students whose average grades ranged from B to A+, inclusive. Therefore, A, B and C were the groups of high-achieving students. Groups D and E were those composed of intermediate students: their average constituent GPAs were 3.26 and 3.04, respectively, corresponding to students whose average grades ranged from C+ to B-, inclusive. Groups F and G were those composed of low-performing students: their average constituent GPAs were 2.79 and 2.43, respectively, corresponding to students whose average grades were below C. The pass grade in UCD is D-. This kind of composition was decided on to positively set students’ expectations to contribute to the group’s success [62]. Students were made aware of this, so they were all expected to be able to contribute in the same way within their group. Similar research studies have not reported any difference in sex/gender in the design phase or in the analysis phase. Unless specifically required for research purposes, surveys and focus groups are designed to not distinguish sex, and data are not analyzed according to the sex variable.
However, particular attention was given to the sex/gender balance in group composition in this research study. The groups were constructed in such a way that each group had at least two female students in it. When this was not possible, we preferred to have no female students rather than only one, to avoid any sense of discomfort within the group, in line with [36].
As described in [11], lab activities in ACM30070 are intended to build a teaching environment in which students can develop both a deeper and more robust knowledge and a Financial Mathematics identity. The weekly schedule, the technology and logistics chosen, and the staff selection were conceived to support these objectives and to prepare students to be independent learners. As outlined in [11], all lab activities were designed following the steps of the sensemaking process described in the Theoretical Framework and using inclusive computational practices.
The overarching purpose of these lab practices is to help students to develop a robust knowledge allowing them to understand and/or create financial models, translate them into code and both quantitatively and qualitatively compare these models with real-world data. In this setting, computation is not simply a skill to learn but it represents one of the constitutive pillars to master Financial Mathematics within an inquiry-based and student-centered learning educational model. Group work plays a key role in all these processes, such as sensemaking, knowledge construction, fostering computational thinking, and learning enhancement.

3.2. Data Collection

To investigate RQ1–RQ2–RQ3, we analyzed the weekly survey filled out individually on Google Forms by the 50 students who attended ACM30070 in the AY 2020/2021. The sample was composed of the entire class, since all students agreed to participate in the study. The survey stayed open for one week, so this questionnaire, which referred to the synchronous lab activities, can be classified as an individual, asynchronous, computer-mediated one. The questions proposed weekly to students regarding the course were both open questions and Likert-scale questions. They are shown in Figure 1.
To answer the research questions RQ1–RQ2–RQ3, we examined the answers to questions 1, 2, 4, 5, and 7. Questions 4 and 5 directly asked about group dynamics: specifically, they guided students in reflecting on the group’s usefulness and helpfulness. These questions enabled us to delve into the effectiveness of group interactions through the critical considerations provided by students about specific situations that occurred. We specify that the answers expressed students’ self-declarations and self-evaluations of what had happened during the lab; the analyses were not based on recordings of the discussions.
Questions 1 and 2 were selected with the intention of assisting us, as researchers, in identifying specific topics that may have initially been unclear to students but could have been clarified through group interactions. Exploring these questions allowed us to gain insights into the impact of collaborative learning in addressing issues. Although question 7 focused on computational aspects rather than on group dynamics, we took it into account in our analysis to properly address RQ2.

3.3. Data Analysis

We conducted a qualitative analysis on the dataset described in Section 3.2, using the Interaction Analysis Model (IAM) [16] in its version adapted to the Financial Mathematics context, as described in Section 2.4.
We explicitly underline that we considered knowledge co-construction as a progressive process, in which reaching a higher phase implies having passed through the lower phases. This perspective emphasizes the iterative nature of knowledge development, highlighting how each phase builds upon the previous ones. Since we considered post-lab student responses, we analyzed the written sentences trying to detect how students moved through the different phases up to the final one they were able to reach.
As this study aims to be an extension of the research conducted in [9], the initial step involved re-analyzing all the responses of the 6 students who were part of the pilot study using the adapted IAM framework. After comparing our analysis with the one presented in [9], we extended it to the entire dataset.
We assigned the answers as follows:
  • Phase 0: those responses which do not show any interactions or collaboration within the group and are written only in the first person;
  • Phase 1: those answers which simply show that students compared their solutions or ideas without moving to a deeper discussion;
  • Phase 2: those responses which allowed us to detect disagreement or discussion about different emerging ideas;
  • Phase 3: those responses which show evidence of common knowledge, or an agreed solution achieved through collaborative work or discussion after a deep reflection; in this phase, the common knowledge is still shared at the group level;
  • Phase 4: those answers in which we can observe how the students used the technologies to test and find a confirmation of the knowledge shared during the previous discussion;
  • Phase 5: those responses where there was evidence of the modification of individual understanding as a consequence of the interaction, in particular, if there is a statement that all the group members achieved this step.
After assigning phases and subphases to all the responses based on the adapted IAM framework, we created a table displaying the frequencies of each phase and its corresponding subphases. In this table, we highlighted the percentage of occurrences relative to the overall total, as well as the percentage of occurrences of the subphases relative to the total responses classified within their respective phase. In this way, we could verify the occurrence of phases and subphases and observe the distribution of subphases within their respective phases. Based on these results, we reviewed the responses assigned to phases coupled with subphases that had low occurrences. This allowed us to understand whether they were actually relevant or whether they could be reassigned to a subphase with a higher occurrence. In particular, the subphases subject to revision were 1B, 1E, 3E, 4B and 5B.
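To illustrate this tabulation step, the following minimal Python sketch computes the two kinds of percentages described above. It is only an illustration of the procedure: the column names (phase, subphase) and the example codes are assumptions about the data layout, not the actual coded dataset or analysis script.

```python
import pandas as pd

# Hypothetical coded dataset: one row per survey answer, with the assigned
# phase and subphase of the adapted IAM framework (illustrative values only).
coded = pd.DataFrame({
    "phase":    [1, 1, 1, 3, 3, 5, 0, 2],
    "subphase": ["1A", "1A", "1C", "3A", "3C", "5C", "0", "2A"],
})

total = len(coded)

# Percentage of occurrences of each phase relative to the overall total.
phase_counts = coded["phase"].value_counts().sort_index()
phase_pct = (100 * phase_counts / total).round(2)
print(pd.DataFrame({"count": phase_counts, "% of total": phase_pct}))

# Percentage of each subphase relative to the responses of its own phase,
# and relative to the overall total.
sub_counts = coded.groupby(["phase", "subphase"]).size()
within_phase_pct = (100 * sub_counts / sub_counts.groupby(level="phase").transform("sum")).round(2)
overall_pct = (100 * sub_counts / total).round(2)
print(pd.DataFrame({"count": sub_counts,
                    "% within phase": within_phase_pct,
                    "% of total": overall_pct}))
```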
Then, we conducted a quantitative analysis by considering the occurrences of the phases in the 10 labs. We created a contingency table considering the phases as the dependent variable. Given the presence of cells with counts of less than 5, a chi-squared test was not reliable, so we performed the Fisher–Freeman–Halton exact test based on 10,000 tables sampled according to the Monte Carlo method, and computed Cramer’s V coefficient to investigate the possible dependence between labs and the achievement of higher phases of knowledge co-construction. We used the software SPSS 28 (Statistical Package for Social Science) for the analyses.
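For readers unfamiliar with the Monte Carlo variant of the exact test, the following Python sketch illustrates the general idea on made-up data: random permutations of one variable preserve both sets of margins, and the p-value is estimated as the share of sampled tables whose discrepancy statistic is at least as extreme as the observed one. It is a simplified illustration, not the SPSS procedure we used: here the chi-square statistic serves as the discrepancy measure, whereas the Freeman–Halton extension of Fisher’s test is based on exact table probabilities, so the two p-values agree only approximately. The Cramer’s V computation at the end follows the standard formula.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Made-up data for illustration: one (lab, phase) pair per coded answer,
# labs coded 0..9 and adapted IAM phases coded 0..5.
labs = rng.integers(0, 10, size=396)
phases = rng.integers(0, 6, size=396)

def contingency(x, y, n_x, n_y):
    """Cross-tabulate two integer-coded categorical variables."""
    return np.bincount(x * n_y + y, minlength=n_x * n_y).reshape(n_x, n_y)

observed = contingency(labs, phases, 10, 6)
chi2_obs, _, _, _ = chi2_contingency(observed)

# Monte Carlo p-value from 10,000 permutations (both margins stay fixed).
n_sim = 10_000
exceed = 0
for _ in range(n_sim):
    perm_table = contingency(labs, rng.permutation(phases), 10, 6)
    chi2_perm, _, _, _ = chi2_contingency(perm_table)
    if chi2_perm >= chi2_obs:
        exceed += 1
p_mc = (exceed + 1) / (n_sim + 1)

# Cramer's V for an r x c table: sqrt(chi2 / (n * (min(r, c) - 1))).
n = observed.sum()
cramers_v = np.sqrt(chi2_obs / (n * (min(observed.shape) - 1)))
print(f"chi2 = {chi2_obs:.3f}, Monte Carlo p = {p_mc:.4f}, Cramer's V = {cramers_v:.3f}")
```

In the study itself, the test was run in SPSS 28 on the actual contingency tables of phases by labs and, later, of phases by groups.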
To explore the role of group composition, we clustered the responses of the members of each group. We first created a table showing the occurrences of phases in each lab for each group. Then, we created a contingency table considering, for each group, the percentages of the various phases reached across all the labs. Also in this case, we had cells with counts of less than 5, so, considering the phases as the dependent variable, we computed the Fisher–Freeman–Halton exact test and Cramer’s V coefficient to investigate how the group composition can affect the knowledge co-construction process.

4. Results and Discussion

4.1. Analyzing the Knowledge Co-Construction Process

To investigate RQ1, after analyzing all the survey’s answers, we computed the occurrences of phases and subphases. Some responses were missing since students were absent from the lab, so they did not fill out the survey for that week. Moreover, in some cases more than one phase or subphase was identified for one response, so the total number of phases or subphases was greater than 396 despite there being 396 answers available to the research study. The results of the analysis are shown in Table 2 (frequencies of phases).
As shown in Table 2, all phases of the adapted IAM framework occurred in the analyzed answers. These results were in line with the ones obtained in the pilot study conducted by Barana et al. in [9]. Phase 1 was the most frequent, which is in line with the literature review; see, for example, [9,16,45,50,59]. A relevant number of discussions developed beyond phase 3: phase 5 occurred 12.63% of the time. In the available literature, there are no other studies in which the occurrence of phase 5 was above 9%, except for [9]. This means that the activities proposed in the ACM30070 labs enabled a high level of interaction and negotiation, which led to deeper individual understanding, improving on the results reported in the existing literature. There was also a high occurrence of phase 3, which means that students were highly engaged in discussions during the lab activities in their groups. In the reviewed literature, there were some studies that obtained the same results, e.g., [15,45,46,58]. However, there were also several studies that obtained a percentage of occurrence of phase 3 of below 5%, e.g., [16,50,52]. Phase 0 had a low occurrence: it occurred when students did not provide evidence of collaboration, or they claimed that they worked better on their own and that the group was not helpful to them. Phase 4 was the least frequent. Phase 4 implied testing and modification of the negotiated knowledge. As mentioned above, we considered knowledge co-construction as a progressive process. Therefore, a low percentage of phase 4 does not imply that nobody reached that phase: since there was a high percentage of phase 5, students did go through phase 4 to reach phase 5. However, what could also have happened is that students reached phase 5 directly, without testing, since, at this stage, they did not have many instruments to test the solutions against real data.
Table 3 shows the frequencies of observed subphases, their percentages relative to the total responses classified within their respective phase, and their percentages of occurrences relative to the overall total responses.
As can be seen in Table 3, subphases had a balanced distribution within their phase. Moreover, subphases 1A, 1C and 3A were the three most frequent overall. While remaining at a basic level, subphase 1C can be considered the first real step towards the activation of a group discussion, as it consists of “asking and answering questions to clarify details of statements”. It is noteworthy that it was among the most frequent subphases. It is also interesting to observe that 3A, “Negotiation or clarification of terms”, emerged among the top three results: this means that students were actively engaged in constructive discussion. There were no occurrences of subphase 3B; it was not removed from the adapted framework since it could be relevant in this context, and it could be detected in future studies extending the sample.
In Table 4 we reported—per each subphase—an explanatory example of a student’s answer falling in the mentioned subphase.

4.2. Analyzing the Knowledge Co-Construction Process: Dependence on Lab Practices

To answer RQ2, we examined the occurrence of phases in each lab. We computed the percentage of occurrence of the phases with respect to the total number of responses in each lab. The results are reported in Table 5. For each phase of knowledge co-construction, we highlighted in bold the lab in which it occurred with the highest percentage.
The highest occurrence of phase 0 was in Lab 2, which was about implied volatility. Since the relation between option price and volatility is non-linear, a root-finding algorithm is needed to calculate it. Students were provided with a code that breaks down on the initial set of data, and they were asked to understand the reason. The lab activity was presented as a guided debugging practice in VBA. Although the faulty element in this exercise was not a syntax or coding error, but an inconsistency between the provided dataset and the model assumptions, debugging can be classified as a “technical” computational practice.
In their answers to the weekly survey, students showed difficulty in engaging in discussions during this lab. Many of them stated that working by themselves was more suitable for the type of activity proposed. For example, one student wrote:
“We found it quite difficult to work as a group in the lab today. Everyone was getting different answers on their code and everyone was at different stages of the process at different times. We could have shared a screen and just have tried to fix one person’s code but we felt that it was too easy to get distracted in that way or that we wouldn’t be as passionate about fixing someone else’s code more than our own.”
This specific sentence was classified as phase 1, but we decided to quote it, since it perfectly represents what happened during the lab and justifies why phase 0 stood out in this lab rather than in the others. In many groups, students worked on their own, proceeding at different speeds and, thus, struggling to help and compare with each other. We recall that in 2020/2021 the module was delivered entirely online, and the students “met” on Zoom in breakout rooms, so they had to figure out how to deal with a technical practice as a group in an online setting. The groups that used screen sharing and worked all together on a single piece of code achieved a higher level of comprehension, namely, higher phases of the knowledge co-construction process. Groups in which students worked on their own laptops and just compared their solutions stayed at phases 0/1.
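To give a concrete flavour of the computational task underlying this lab, the sketch below shows, in Python rather than in the VBA used in class, how implied volatility can be extracted with a bracketing root-finder, and how a quoted price that is inconsistent with the model’s no-arbitrage bounds makes the computation break down. The parameter values, and the specific kind of data/model inconsistency shown, are illustrative assumptions and do not reproduce the actual lab worksheet or dataset.

```python
import math
from scipy.optimize import brentq

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call option."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, r, T):
    """Invert the non-linear price-volatility relation with Brent's root-finder."""
    return brentq(lambda sigma: bs_call(S, K, r, T, sigma) - price, 1e-6, 5.0)

S, K, r, T = 100.0, 100.0, 0.02, 1.0
print(f"implied vol = {implied_vol(10.0, S, K, r, T):.4f}")  # a consistent quote, roughly 0.23

# A quote below the no-arbitrage lower bound max(S - K*exp(-r*T), 0) is
# inconsistent with the model assumptions: no volatility reproduces it,
# so the root-finder cannot bracket a root and raises an error.
try:
    implied_vol(1.0, S, K, r, T)
except ValueError as err:
    print("inconsistent data:", err)
```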
Lab 7 was the one with the highest occurrence of phase 1 with respect to the other labs; in terms of content, this lab focused on the Monte Carlo method for stock price simulation. Students were asked to write from scratch the Python code for Geometric Brownian motion simulation via the Monte Carlo method, following a step-by-step code guide. It is worth noting that students had been introduced to the topic only from a theoretical perspective, so this practice constituted their very first step towards Monte Carlo simulation via numerical methods. Although the aim was to stimulate them to apply their prior theoretical knowledge to a practical coding problem, they perceived the practice of coding from scratch as a “technical” one.
In their responses, many students mentioned that they went through the code line by line, trying to understand together the rationale behind the loops and procedures instead of entering into detailed discussions about modelling. This is probably because they were working on Monte Carlo code for the very first time, so they stopped at phase 1 since they needed to better understand the basics of the topic. For example, one student wrote:
“The questions did not require a lot of discussion, but for a few that did, such as altering the identity function for a put option, it was helpful to hear my lab mates opinions.”
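As an indication of what such a from-scratch exercise involves, the following minimal numpy sketch simulates Geometric Brownian motion under the risk-neutral measure and prices a European call by Monte Carlo. The parameter values and the choice of payoff are illustrative assumptions, not the lab’s actual step-by-step guide; changing the payoff line to np.maximum(K - S_T, 0.0) gives the put variant mentioned in the quote above.

```python
import numpy as np

# Illustrative parameters (not the values used in the lab worksheet).
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0
n_paths = 100_000

rng = np.random.default_rng(42)

# Geometric Brownian motion sampled directly at maturity under the
# risk-neutral measure: S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z).
Z = rng.standard_normal(n_paths)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Discounted average payoff of a European call, with a Monte Carlo error bar.
payoff = np.maximum(S_T - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
std_err = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"Monte Carlo call price = {price:.4f} +/- {1.96 * std_err:.4f}")
```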
Lab 6 was the one in which phase 2 stood out with respect to the other labs. The worksheet of this lab focused on comparing the Binomial Model and Black Scholes for option pricing, exploring the limitations of the Black Scholes method, and utilizing mathematical tools to improve the accuracy of prices. Therefore, students were stimulated to use their prior knowledge to compare their opinions on the models: this resulted in students discovering dissonance and/or disagreement about the characteristics of the two models and pricing methods. One student wrote:
“All discussed not understanding control variate technique or errors for binomial/BS [Black Scholes] methods and we all realised it was unclear for most people so we need to clarify with the Tutor and the Lecturer.”
From this sentence, classified as phase 2, it is clear that group members did not understand, disagreed, and could not help each other, so much so that they asked their tutor and lecturer for clarification. This answer also shows the role of the lecturer and the tutor during the groups’ discussions: they acted as moderators of the discussion and stepped in to clarify concepts only when a group explicitly asked for help.
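To make the object of the disagreement more tangible, the sketch below compares, in Python, a Cox–Ross–Rubinstein binomial price for a European call with the Black Scholes closed form for an increasing number of time steps, showing how the binomial error shrinks as the tree is refined. The parameters are illustrative assumptions, and the control variate technique mentioned in the quote is not reproduced here.

```python
import math

def bs_call(S, K, r, T, sigma):
    """Black-Scholes closed-form price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * N(d1) - K * math.exp(-r * T) * N(d1 - sigma * math.sqrt(T))

def crr_call(S, K, r, T, sigma, n):
    """European call priced on an n-step Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs (j = number of up moves), then backward induction to time 0.
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

S, K, r, T, sigma = 100.0, 100.0, 0.02, 1.0, 0.2
exact = bs_call(S, K, r, T, sigma)
for n in (10, 50, 250):
    approx = crr_call(S, K, r, T, sigma, n)
    print(f"n = {n:3d}: binomial = {approx:.4f}, error vs Black-Scholes = {approx - exact:+.4f}")
```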
Phase 3 stood out in Lab 5, which concerned the Binomial Model and the Python code to implement it. The related code had been presented during the tutorial, and students had been invited to review it before attending the lab. This lab practice fostered the sensemaking process, since it guided students to discuss the advantages and disadvantages of alternative computational solutions and to optimize the known code. They were also required to test their solution in order to justify it. It is interesting to observe how the specific design of this practice triggered discussions and the co-construction of knowledge within groups. For example, one student wrote the following:
“When were looking at Q4 with the Python code and interpretating it, it was useful to have different points of view and to be able to disagree on certain bits as at the end we all had a deeper understanding.”
This sentence, classified as phase 3, clearly shows the transition through phase 2 (“to be able to disagree on certain bits”) and the achievement of phase 3 (“at the end we all had a deeper understanding”). Starting from the provided code and the dissonances that emerged, students engaged in constructive discussion. However, there were no explicit statements about testing the solution or about the co-construction of new knowledge, so the answer could not be classified in a higher phase.
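As an indication of what discussing and optimizing alternative computational solutions could look like in this lab, the sketch below (a hypothetical example of ours, not the tutorial code) computes the terminal payoffs of a binomial tree in two ways, with an explicit Python loop and with a vectorized NumPy expression, and then tests that the two versions agree; all inputs are assumed for illustration.

```python
import numpy as np

S0, K, sigma, T, N = 100.0, 100.0, 0.2, 1.0, 1000   # illustrative inputs
dt = T / N
u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))

# Version 1: explicit Python loop over the terminal nodes of the tree
payoff_loop = []
for j in range(N + 1):
    ST = S0 * u**j * d**(N - j)
    payoff_loop.append(max(ST - K, 0.0))
payoff_loop = np.array(payoff_loop)

# Version 2: vectorized NumPy alternative
j = np.arange(N + 1)
payoff_vec = np.maximum(S0 * u**j * d**(N - j) - K, 0.0)

# "Testing the solution to justify it": the two implementations must agree
assert np.allclose(payoff_loop, payoff_vec)
print("Both implementations give the same terminal payoffs.")
```

Arguing about which version is clearer, faster or easier to extend is precisely the kind of advantages/disadvantages discussion that this practice was designed to trigger.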
In Lab 3, both phase 4 and phase 5 stood out; its design was similar to that of Lab 5. Lab 3 focused on the Python code implementing the Black Scholes model and on the role of implied volatility. As in Lab 5, the code was already known to the students. In this lab, students were required to discuss how changes in the code syntax and/or inputs affected the results, and the testing activity turned out to be a fundamental part of the practice. As a consequence, students had to understand the connection between data, computation and mathematical modelling. Through discussion, testing and interaction within the group, students made sense of the financial model. For example, one student wrote the following:
“Changing different parts of the formula one by one really allowed us to see what the effect it had on other elements. Also discussing through them individually I learn lots from the other people in my team. Our knowledge came together nicely and I definitely had a better understanding of it all afterwards.”
The student mentioned how changing different parts of the Black Scholes formula and investigating its various components within the group was significant for them. They stated that they gained knowledge and understanding thanks to the testing and discussions within their group. This answer was classified as phase 5. We remark that some questions in the lab’s worksheet were designed to stimulate discussion about the effect of changing certain inputs but, unlike in Lab 5, they did not ask students to test their conclusions. Many students, on their own initiative, used testing to verify what had emerged from the discussions within the group. They were, therefore, able to reach higher phases of knowledge co-construction and deeper learning by testing, pooling their knowledge and wrapping up a shared conclusion.
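The following sketch suggests what “changing different parts of the formula one by one” can look like in code. It is our own illustration under assumed parameter values, not the lab’s worksheet: it bumps each Black Scholes input by 10% to observe the effect on the call price, and then recovers the implied volatility from an assumed observed price by root-finding.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S0, K, r, sigma, T):
    # Black Scholes price of a European call
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

base = dict(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)  # illustrative inputs
print("base price:", bs_call(**base))

# "Changing different parts of the formula one by one": bump each input by 10%
for name in base:
    bumped = dict(base, **{name: base[name] * 1.1})
    print(f"+10% {name}: {bs_call(**bumped):.4f}")

# Implied volatility: the sigma at which the model matches an (assumed) observed price
observed_price = 12.0
implied_vol = brentq(
    lambda s: bs_call(base["S0"], base["K"], base["r"], s, base["T"]) - observed_price,
    1e-6, 5.0)
print("implied volatility:", implied_vol)
```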
To further investigate the possible dependence between the labs’ characteristics, in terms of the type of computational practice and tasks proposed, and the achievement of higher phases of knowledge co-construction, we created a contingency table reporting the occurrence of the phases of knowledge construction within each lab. Since approximately 56% of the cells had counts of less than five, we performed the Fisher–Freeman–Halton exact test based on 10,000 tables sampled according to the Monte Carlo method. The null hypothesis was the following: “there is no association between the variables labs and phases”. The obtained test value was 74.265, with a two-sided significance of less than 0.001, which reveals a significant association between the two variables, labs and phases, leading us to reject the null hypothesis. Cramér’s V was also significant, at 0.207 (p-value < 0.001), indicating a moderate association between the two variables. We can, thus, conclude that there was an association between the labs variable and the phases of knowledge construction variable, suggesting that the different practices required in the various labs affected the quality of the knowledge co-construction process.
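For readers who wish to reproduce the idea of this test, the sketch below shows a generic Monte Carlo test of independence for a contingency table. It permutes the column labels of the expanded observations (which preserves both margins) and uses the chi-squared statistic as the measure of discrepancy rather than the Freeman–Halton probability statistic, together with the computation of Cramér’s V. The table entries are placeholders, not our data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    # Cramér's V = sqrt(chi2 / (n * (min(r, c) - 1)))
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

def mc_independence_test(table, n_sim=10_000, seed=0):
    # Monte Carlo p-value for independence, conditioning on the observed margins
    rng = np.random.default_rng(seed)
    obs_chi2, _, _, _ = chi2_contingency(table, correction=False)
    # Expand the table into one (row, column) pair per observation
    rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
    cols = np.repeat(np.arange(table.shape[1]), table.sum(axis=0))
    exceed = 0
    for _ in range(n_sim):
        sim = np.zeros_like(table)
        np.add.at(sim, (rows, rng.permutation(cols)), 1)   # margins preserved
        sim_chi2, _, _, _ = chi2_contingency(sim, correction=False)
        if sim_chi2 >= obs_chi2:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)

# Placeholder table (NOT the study's data): rows = labs, columns = phases
table = np.array([[5, 12, 3, 7],
                  [2, 15, 8, 4],
                  [9, 1, 6, 10]])
print("Cramer's V:", cramers_v(table))
print("Monte Carlo p-value:", mc_independence_test(table, n_sim=2000))
```

The same procedure applies unchanged to the groups-by-phases table analyzed in the next subsection.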

4.3. Analyzing the Knowledge Co-Construction Process: Dependence on Group Composition

In this section, we turn our attention to the results concerning the impact of group composition. Seven groups, labelled A to G, were designed for the lab activities. Groups A, B and C were composed of high-achieving students; Groups D and E of intermediate students; and Groups F and G of low-performing students. Table 6 shows the total percentage of phases reached across all labs by each group.
In each column of Table 6, we highlighted in bold the two highest percentages for each phase, to examine the groups in which the maximum occurrence of each phase of knowledge construction was registered. We found the following:
  • Phase 0 stood out in groups A (6.35%) and B (7.02%);
  • Phase 1 stood out in groups C (59.09%) and D (50.85%);
  • Phase 2 stood out in group A (15.87%) and G (14.29%);
  • Phase 3 stood out in groups E (40.91%) and F (38.78%);
  • Phase 4 stood out in group E (6.82%);
  • Phase 5 stood out in groups F (22.45%) and G (16.07%).
These results show that the groups composed of high-achieving students were those that cooperated the least (in particular, group B) or that tended to clash the most while failing to resolve these dissonances (in particular, group A). The other group of high-achieving students, group C, was the one in which phase 1 stood out: group members merely compared their ideas and solutions, without activating more advanced levels of collaboration. The same occurred in group D, one of the two groups composed of intermediate students.
It is interesting to observe that the highest phases of the knowledge co-construction process, i.e., phases 3, 4 and 5, stood out in the groups with lower GPAs. This suggests that the groups of low-performing students were more engaged in collaboration and interaction to achieve new knowledge and to consolidate deep understanding and learning, whereas the high-achieving students often tended to be uncooperative, remaining within their comfort zone and rarely pushing their boundaries or challenging themselves.
It is worth mentioning that two concepts are central to supporting motivation: the subjective value of a goal and the expectancies, that is, the expectations of successfully attaining that goal [62]. To positively set students’ expectations of contributing to the group’s success, group members were made aware that they had similar GPAs. In this way, they could all expect to be able to contribute equally within their group. We believe that this could be one of the reasons why the groups with lower GPAs outperformed the others in the proposed activities.
To statistically confirm this result, we created a contingency table with the occurrence of the phases by group. Since about 35% of the cells had counts of less than five, we performed the Fisher–Freeman–Halton exact test instead of a chi-squared test. The null hypothesis was the following: “there is no association between the variables groups and phases”. The obtained test value was 48.292, with a two-sided significance of less than 0.005; the Fisher–Freeman–Halton exact test revealed a significant association (p-value = 0.012) between the two variables, groups and phases, leading us to reject the null hypothesis. We also obtained a significant Cramér’s V coefficient of 0.159 (p < 0.011), indicating a moderate association between the two variables. We can, thus, conclude that there is an association between the variables groups and phases: the group composition may have affected the achievement of the phases of the knowledge co-construction process.
These results shed light on the use of collaborative computational practices as an inclusive teaching strategy. Indeed, we have shown that the lowest-achieving students were those who reached the highest phases of knowledge co-construction. In other words, these groups of students were actively involved in sharing ideas and discussing them in order to agree on common solutions, and, by validating the co-created knowledge, they showed evidence of changing their personal understandings as a result of the group interaction. This result was made possible by the homogeneous composition of the groups: since all students in a group were of approximately the same achievement level, there was no “expert” student who could carry the group by providing ideas and explanations; instead, all the students had to bring their own skills into play and make a personal contribution to reach the common goals.

5. Conclusions

5.1. Summary of Findings

This research study extended the results of the pilot study conducted by Barana et al. [9] to investigate the collaborative knowledge construction process during lab activities in Financial Mathematics. In our work, we investigated the following three research questions:
(RQ1)
How do the adapted phases of the Interaction Analysis Model occur, and are they suitable for detecting the knowledge co-construction process in the context of synchronous lab activities in Financial Mathematics?
(RQ2)
How do the characteristics of the computational lab activities affect the interaction and, thus, the achievement of higher-level knowledge co-construction?
(RQ3)
How does the composition of the groups, specifically considering the GPA, affect the interaction and, thus, the achievement of higher-level knowledge construction?
Our research focused on the lab activities, which are an integral part of a Computational Finance module. During the labs, students worked in groups of homogeneous ability on tailored student-led inclusive computational practices. Every week, after attending the lab, students critically reflected, through an online survey, on the computational practices and on the specific interactions that took place within their group. The answers to these surveys formed the dataset for our analysis. We examined the students’ responses by using an adaptation of the Interaction Analysis Model, originally designed by Gunawardena et al. [16], to investigate knowledge co-construction in a collaborative learning environment. We conducted a review of the literature to explore the applicability and versatility of this model. From this review, it was interesting to observe that the only study conducted in a Financial Mathematics setting was the pilot study of this research [9]. Furthermore, none of the selected studies presented an analysis focused on the impact of group composition in terms of the homogeneity or heterogeneity of participants’ skills. Our research study can, thus, contribute to addressing this gap in the literature.
We classified the students’ responses by using the adapted IAM framework. The results obtained were in line with those presented in the pilot study [9]. Although the lower phases were the most prevalent (Ph 0: 3.28%, Ph 1: 40.40%, Ph 2: 10.61%), as happens in similar studies using the IAM framework in different settings, our research showed that significant negotiations of meaning and discussions for the co-construction of knowledge were activated (Ph 3: 30.56%, Ph 4: 2.53%, Ph 5: 12.63%). In particular, compared with the reviewed literature, our results showed higher frequencies of phases 3 and 5. This means that the activities designed for the ACM30070 module were generally effective in guiding students towards higher-level collaborative knowledge construction, and that the adapted IAM framework detected the co-construction process well within a Financial Mathematics context (RQ1).
When analyzing the occurrence of the phases in each lab to answer the second research question (RQ2), we observed that the labs focusing on practices perceived as more “technical” from the students’ perspective, such as debugging, writing code from scratch or performing guided syntax modifications, were those that most often led to the achievement of the lower phases. In contrast, practices that guided students toward sensemaking, such as those in which students were asked to discuss the advantages and disadvantages of computational solutions or to optimize already-known code, triggered richer interactions, including discussion, negotiation of meaning and testing. Thanks to these kinds of computational activities, the groups achieved higher levels of knowledge co-construction and a deeper understanding of the proposed topics. This result supports the findings of both Donnelly et al. [2] and the pilot study [9]: collaboration, coupled with problem solving and tailored computational-thinking activities, creates suitable conditions for the co-construction of learning in a CSCL environment.
We also investigated the impact of group composition on the knowledge co-construction process. After analyzing the occurrence of the phases within groups to answer the third research question (RQ3), it turned out that the groups composed of students with lower average GPAs were more involved in negotiating and discussing, resulting in markedly improved understanding and knowledge not only at the group level but also at the individual level. In contrast, the groups of students with higher average GPAs cooperated the least or tended to clash the most while failing to resolve these dissonances. This was reflected in the occurrences of the phases in each group and each lab. Thus, a homogeneous group composition may have affected the achievement of the phases of the knowledge co-construction process. This result also contributes to creating an inclusive learning environment that fosters respect and diversity, enables participation, removes barriers, and considers a variety of learning needs and preferences. The proposed group composition, in fact, avoided leader–follower dynamics: each student considered themselves able to contribute to the co-construction of learning and actively participated in the proposed activities. This was reflected in the occurrence of higher-level discussions and knowledge construction in the groups of low-performing students.

5.2. Future Directions

Future developments of our research could further investigate the impact of group composition on the knowledge co-construction process by comparing homogeneous versus heterogeneous groups, or investigate how the presence of ACM and FM students in the groups might affect it. We have already collected the same dataset from the ACM30070 2021/2022 class, in which the students were grouped in a heterogeneous way; we plan to repeat these analyses and compare the findings. Moreover, building on the results of the present study, which have shown that some lab practices foster the achievement of higher levels of knowledge more than others, we are interested in investigating in depth the relationship between computational practices and the level of collaboration and sensemaking activated. We are planning to extend and improve the pilot study [63], in which a thematic analysis will be performed to measure the effectiveness of tailored student-led computational lab activities in fostering sensemaking in Financial Mathematics and in developing computational thinking.
Another possible development could be to study the relationship between the phases reached by the groups and the students’ final exam grades, to further investigate the extent to which group collaboration can affect sensemaking at the individual level. Moreover, we plan to analyze the final surveys that all students filled in at the end of the module, after sitting their final exam. In these surveys, students critically reflected on their self-perception of how the teaching modalities affected their understanding of the subject in view of the final exam.

5.3. Limitations

We remark that this study has a limitation: we analyzed students’ written answers instead of their recorded conversations, so students may have reached phases different from those identified but may not have been able to express this in words. To address this aspect, we are planning to collect a new dataset based on video recordings of the activities and to analyze the discussion transcripts with the newly designed framework.
Another possible limitation could be the size of the dataset. However, we have already collected the final survey and the same dataset for the ACM30070 2021/2022 class, in which the students were grouped in a heterogeneous way; we plan to repeat the analyses on the larger dataset and compare the findings.

Author Contributions

Conceptualization, A.B., G.B., M.M., A.P. and M.S.; Methodology, A.B., G.B., M.M., A.P. and M.S.; Software, A.B., G.B., M.M., A.P. and M.S.; Validation, A.B., G.B., M.M., A.P. and M.S.; Formal analysis, A.B., G.B., M.M., A.P. and M.S.; Investigation, A.B., G.B., M.M., A.P. and M.S.; Resources, A.B., G.B., M.M., A.P. and M.S.; Data curation, A.B., G.B., M.M., A.P. and M.S.; Writing—original draft, A.B., G.B., M.M., A.P. and M.S.; Writing—review & editing, A.B., G.B., M.M., A.P. and M.S.; Visualization, A.B., G.B., M.M., A.P. and M.S.; Supervision, A.B., M.M., A.P. and M.S.; Project administration, A.B., G.B., M.M., A.P. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fondazione Compagnia di San Paolo through the OPERA project.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of UCD, University College Dublin (protocol code LS-20-05-Perrotta; date, 15 May 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to privacy restrictions.

Acknowledgments

The authors acknowledge the support given by the tutors Rian Dolphin and Jamie Kennedy in facilitating the sharing and collection of informed consent statements and facilitating the running of computational lab activities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lockwood, E.; DeJarnette, A.F.; Thomas, M. Computing as a Mathematical Disciplinary Practice. J. Math. Behav. 2019, 54, 100688. [Google Scholar] [CrossRef]
  2. Donnelly, D.F.; Linn, M.C.; Ludvigsen, S. Impacts and Characteristics of Computer-Based Science Inquiry Learning Environments for Precollege Students. Rev. Educ. Res. 2014, 84, 572–608. [Google Scholar] [CrossRef]
  3. Caballero, M.D.; Kohlmyer, M.A.; Schatz, M.F. Implementing and Assessing Computational Modeling in Introductory Mechanics. Phys. Rev. ST Phys. Educ. Res. 2012, 8, 020106. [Google Scholar] [CrossRef]
  4. Caballero, M.D.; Hjorth-Jensen, M. Integrating a Computational Perspective in Physics Courses. In New Trends in Physics Education Research; Magazù, S., Ed.; Nova Sciences Publisher: Hauppauge, NY, USA, 2018; pp. 47–76. ISBN 978-1-5361-3893-1. [Google Scholar]
  5. Caballero, M.D.; Merner, L. Prevalence and Nature of Computational Instruction in Undergraduate Physics Programs across the United States. Phys. Rev. Phys. Educ. Res. 2018, 14, 020129. [Google Scholar] [CrossRef]
  6. Funkhouser, K.; Caballero, M.D.; Irving, P.W.; Sawtelle, V. What Counts in Laboratories: Toward a Practice-Based Identity Survey. In Proceedings of the 2018 Physics Education Research Conference Proceedings, Washington, DC, USA, 21 January 2019. [Google Scholar]
  7. Pawlak, A.; Irving, P.W.; Caballero, M.D. Learning Assistant Approaches to Teaching Computational Physics Problems in a Problem-Based Learning Course. Phys. Rev. Phys. Educ. Res. 2020, 16, 010139. [Google Scholar] [CrossRef]
  8. Weintrop, D.; Beheshti, E.; Horn, M.; Orton, K.; Jona, K.; Trouille, L.; Wilensky, U. Defining Computational Thinking for Mathematics and Science Classrooms. J. Sci. Educ. Technol. 2016, 25, 127–147. [Google Scholar] [CrossRef]
  9. Barana, A.; Marchisio, M.; Perrotta, A.; Sacchet, M. Collaborative Knowledge Construction during Computational Lab Activities in Financial Mathematics. In Proceedings of the 9th International Conference on Higher Education Advances (HEAd’23), Valencia, Spain, 19–22 June 2023; Editorial Universitat Politècnica de València: Valencia, Spain, 2023; pp. 1021–1028. [Google Scholar]
  10. Perrotta, A. A Learner-Centered Approach to Design a Computational Finance Module in Higher Education. In Proceedings of the 7th International Conference on Higher Education Advances (HEAd’21), Valencia, Spain, 22 June 2021; Editorial Universitat Politècnica de València: Valencia, Spain, 2021; pp. 405–412. [Google Scholar]
  11. Perrotta, A.; Dolphin, R. Combining Student-Led Lab Activities with Computational Practices to Promote Sensemaking in Financial Mathematics. In Proceedings of the Eighth Conference on Research in Mathematics Education in Ireland (MEI 8), Dublin, Ireland, 15–16 October 2021; pp. 348–355. [Google Scholar] [CrossRef]
  12. Jonassen, D.H. Objectivism versus Constructivism: Do We Need a New Philosophical Paradigm? ETRD 1991, 39, 5–14. [Google Scholar] [CrossRef]
  13. Murphy, P.K.; Greene, J.A.; Firetto, C.M.; Li, M.; Lobczowski, N.G.; Duke, R.F.; Wei, L.; Croninger, R.M.V. Exploring the Influence of Homogeneous versus Heterogeneous Grouping on Students’ Text-Based Discussions and Comprehension. Contemp. Educ. Psychol. 2017, 51, 336–355. [Google Scholar] [CrossRef]
  14. Ludvigsen, S.; Arnseth, H.C. Computer-Supported Collaborative Learning. In Technology Enhanced Learning; Duval, E., Sharples, M., Sutherland, R., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 47–58. ISBN 978-3-319-02599-5. [Google Scholar]
  15. Zabolotna, K.; Malmberg, J.; Järvenoja, H. Examining the Interplay of Knowledge Construction and Group-Level Regulation in a Computer-Supported Collaborative Learning Physics Task. Comput. Hum. Behav. 2023, 138, 107494. [Google Scholar] [CrossRef]
  16. Gunawardena, C.N.; Lowe, C.A.; Anderson, T. Analysis of a Global Online Debate and the Development of an Interaction Analysis Model for Examining Social Construction of Knowledge in Computer Conferencing. J. Educ. Comput. Res. 1997, 17, 397–431. [Google Scholar] [CrossRef]
  17. Good, C.; Aronson, J.; Harder, J.A. Problems in the Pipeline: Stereotype Threat and Women’s Achievement in High-Level Math Courses. J. Appl. Dev. Psychol. 2008, 29, 17–28. [Google Scholar] [CrossRef]
  18. Heritage, M.; Wylie, C. Reaping the Benefits of Assessment for Learning: Achievement, Identity, and Equity. ZDM Math. Educ. 2018, 50, 729–741. [Google Scholar] [CrossRef]
  19. Nortvedt, G.A.; Buchholtz, N. Assessment in Mathematics Education: Responding to Issues Regarding Methodology, Policy, and Equity. ZDM Math. Educ. 2018, 50, 555–570. [Google Scholar] [CrossRef]
  20. Boaler, J. Promoting ‘Relational Equity’ and High Mathematics Achievement through an Innovative Mixed-ability Approach. Br. Educ. Res. J. 2008, 34, 167–194. [Google Scholar] [CrossRef]
  21. Wright, P. Social Justice in the Mathematics Classroom. Lond. Rev. Educ. 2016, 14, 104–118. [Google Scholar] [CrossRef]
  22. Barana, A.; Marchisio, M.; Sacchet, M. Interactive Feedback for Learning Mathematics in a Digital Learning Environment. Educ. Sci. 2021, 11, 279. [Google Scholar] [CrossRef]
  23. Stacey, K.; Wiliam, D. Technology and Assessment in Mathematics. In Third International Handbook of Mathematics Education; Clements, M.A., Ed.; Springer International Handbooks of Education; Springer: New York, NY, USA, 2013; Volume 27, pp. 721–751. [Google Scholar]
  24. Wing, J. Computational Thinking. Commun. ACM 2006, 49, 33–35. [Google Scholar] [CrossRef]
  25. Wing, J. Computational Thinking Benefits Society. Soc. Issues Comput. 2014, 2014, 26. [Google Scholar]
  26. Barana, A.; Conte, A.; Fissore, C.; Floris, F.; Marchisio, M.; Sacchet, M. The Creation of Animated Graphs to Develop Computational Thinking and Support STEM Education. In Maple in Mathematics Education and Research; Gerhard, J., Kotsireas, I., Eds.; Springer: Cham, Switzerland, 2020; pp. 189–204. [Google Scholar]
  27. Barana, A.; Fissore, C.; Marchisio, M.; Pulvirenti, M. Teacher Training for the Development of Computational Thinking and Problem Posing & Solving Skills with Technologies. In eLearning Sustainment for Never-Ending Learning, Proceedings of the 16th International Scientific Conference ELearning and Software for Education, Bucharest, Romania, 23-24 April 2020; Carol I National Defence University: București, Romania, 2020; Volume 2, pp. 136–144. [Google Scholar]
  28. Gossen, F.; Kuhn, D.; Margaria, T.; Lamprecht, A.-L. Computational Thinking: Learning by Doing with the Cinco Adventure Game Tool. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; pp. 990–999. [Google Scholar]
  29. Borg, A.; Fahlgren, M.; Ruthven, K. Programming as a Mathematical Instrument: The Implementation of an Analytic Framework. In Proceedings of the Mathematics Education in the Digital Age (MEDA) Proceedings, Linz, Austria, 16–18 September 2020; Donevska-Todorova, A., Faggiano, E., Trgalova, J., Lavicza, Z., Weinhandl, R., Clark-Wilson, A., Weigand, H.-G., Eds.; pp. 435–442. [Google Scholar]
  30. Fracchiolla, C.; Meehan, M. Computational Practices in Introductory Science Courses. In Proceedings of the Physics Education Research Conference Proceedings 2021, Virtual, 4–5 August 2021; pp. 129–134. [Google Scholar]
  31. Irving, P.W.; Obsniuk, M.J.; Caballero, M.D. P3: A Practice Focused Learning Environment. Eur. J. Phys. 2017, 38, 055701. [Google Scholar] [CrossRef]
  32. Sand, O.P.; Odden, T.O.B.; Lindstrøm, C.; Caballero, M.D. How Computation Can Facilitate Sensemaking about Physics: A Case Study. In Proceedings of the 2018 Physics Education Research Conference Proceedings, Washington, DC, USA, 21 January 2019. [Google Scholar]
  33. Odden, T.O.B.; Russ, R.S. Defining Sensemaking: Bringing Clarity to a Fragmented Theoretical Construct. Sci. Ed. 2019, 103, 187–205. [Google Scholar] [CrossRef]
  34. Lave, J. Situating Learning in Communities of Practice. In Perspectives on Socially Shared Cognition; Resnick, L.B., Levine, J.M., Teasley, S.D., Eds.; American Psychological Association: Washington, DC, USA, 1991; pp. 63–82. ISBN 978-1-55798-121-9. [Google Scholar]
  35. Roschelle, J.; Teasley, S.D. The Construction of Shared Knowledge in Collaborative Problem Solving. In Computer Supported Collaborative Learning; O’Malley, C., Ed.; Springer: Berlin/Heidelberg, Germany, 1995; pp. 69–97. ISBN 978-3-642-85100-1. [Google Scholar]
  36. Resta, P.; Laferrière, T. Technology in Support of Collaborative Learning. Educ. Psychol. Rev. 2007, 19, 65–83. [Google Scholar] [CrossRef]
  37. Johnson, D.W.; Johnson, R.T. Cooperation and the Use of Technology. In Handbook of Research on Educational Communications and Technology; Jonassen, D., Ed.; MacMillan: London, UK, 1996; pp. 785–812. [Google Scholar]
  38. Webb, N.M.; Palincsar, A.S. Group Processes in the Classroom. In Handbook of Research for Educational Communications and Technology; Berliner, D.C., Calfee, R.C., Eds.; Simon & Schuster: New York, NY, USA, 1996. [Google Scholar]
  39. Tram, N.T.M.; Quyen, B.T.T. The Effects of Grouping Types on Promoting Critical Thinking in EFL Collaborative Writing. Hcmcoujs Soc. Sci. 2019, 9, 108–120. [Google Scholar] [CrossRef]
  40. Tomlinson, C.A. How to Differentiate Instruction in Mixed-Ability Classrooms; Association for Supervision & Curriculum Development: Alexandria, VA, USA, 2005; ISBN 978-0-87120-917-7. [Google Scholar]
  41. Hooper, S.; Hannafin, M.J. Cooperative CBI: The Effects of Heterogeneous versus Homogeneous Grouping on the Learning of Progressively Complex Concepts. J. Educ. Comput. Res. 1988, 4, 413–424. [Google Scholar] [CrossRef]
  42. Wyman, P.J.; Watson, S.B. Academic Achievement with Cooperative Learning Using Homogeneous and Heterogeneous Groups. Sch. Sci. Math. 2020, 120, 356–363. [Google Scholar] [CrossRef]
  43. Lucas, M.; Gunawardena, C.; Moreira, A. Assessing Social Construction of Knowledge Online: A Critique of the Interaction Analysis Model. Comput. Hum. Behav. 2014, 30, 574–582. [Google Scholar] [CrossRef]
  44. Floren, L.C.; Ten Cate, O.; Irby, D.M.; O’Brien, B.C. An Interaction Analysis Model to Study Knowledge Construction in Interprofessional Education: Proof of Concept. J. Interprof. Care 2021, 35, 736–743. [Google Scholar] [CrossRef]
  45. Moore, J.L.; Marra, R.M. A Comparative Analysis of Online Discussion Participation Protocols. J. Res. Technol. Educ. 2005, 38, 191–212. [Google Scholar] [CrossRef]
  46. Osman, G.; Herring, S.C. Interaction, Facilitation, and Deep Learning in Cross-Cultural Chat: A Case Study. Internet High. Educ. 2007, 10, 125–141. [Google Scholar] [CrossRef]
  47. Hou, H.-T.; Chang, K.-E.; Sung, Y.-T. Analysis of Problem-Solving-Based Online Asynchronous Discussion Pattern. Educ. Technol. Soc. 2008, 11, 17–28. [Google Scholar]
  48. Yang, Y.-T.C.; Newby, T.; Bill, R. Facilitating Interactions through Structured Web-Based Bulletin Boards: A Quasi-Experimental Study on Promoting Learners’ Critical Thinking Skills. Comput. Educ. 2008, 50, 1572–1585. [Google Scholar] [CrossRef]
  49. Wang, Q.; Woo, H.L.; Zhao, J. Investigating Critical Thinking and Knowledge Construction in an Interactive Learning Environment. Interact. Learn. Environ. 2009, 17, 95–104. [Google Scholar] [CrossRef]
  50. Hou, H.-T.; Chang, K.-E.; Sung, Y.-T. Using Blogs as a Professional Development Tool for Teachers: Analysis of Interaction Behavioral Patterns. Interact. Learn. Environ. 2009, 17, 325–340. [Google Scholar] [CrossRef]
  51. Heo, H.; Lim, K.Y.; Kim, Y. Exploratory Study on the Patterns of Online Interaction and Knowledge Co-Construction in Project-Based Learning. Comput. Educ. 2010, 55, 1383–1392. [Google Scholar] [CrossRef]
  52. Quek, C.L. Analysing High School Students’ Participation and Interaction in an Asynchronous Online Project-Based Learning Environment. AJET 2010, 26, 327–340. [Google Scholar] [CrossRef]
  53. Hew, K.F.; Cheung, W.S. Higher-Level Knowledge Construction in Asynchronous Online Discussions: An Analysis of Group Size, Duration of Online Discussion, and Student Facilitation Techniques. Instr. Sci. 2011, 39, 303–319. [Google Scholar] [CrossRef]
  54. Hou, H.-T.; Wu, S.-Y. Analyzing the Social Knowledge Construction Behavioral Patterns of an Online Synchronous Collaborative Discussion Instructional Activity Using an Instant Messaging Tool: A Case Study. Comput. Educ. 2011, 57, 1459–1468. [Google Scholar] [CrossRef]
  55. Lan, Y.-F.; Tsai, P.-W.; Yang, S.-H.; Hung, C.-L. Comparing the Social Knowledge Construction Behavioral Patterns of Problem-Based Online Asynchronous Discussion in e/m-Learning Environments. Comput. Educ. 2012, 59, 1122–1135. [Google Scholar] [CrossRef]
  56. Bao, W.; Blanchfield, P.; Hopkins, G. The Effectiveness of Face-to-Face Discussion in Chinese Primary Schools; IATED: Barcelona, Spain, 2016; pp. 3863–3872. [Google Scholar]
  57. Zhou, P.; Yang, Q. Fostering Elementary Students’ Collaborative Knowledge Building in Smart Classroom with Formative Evaluation. In Proceedings of the 2017 International Conference of Educational Innovation through Technology (EITT), Osaka, Japan, 7–9 December 2017; pp. 116–117. [Google Scholar]
  58. Yang, X.; Li, J.; Xing, B. Behavioral Patterns of Knowledge Construction in Online Cooperative Translation Activities. Internet High. Educ. 2018, 36, 13–21. [Google Scholar] [CrossRef]
  59. Socratous, C.; Ioannou, A. A Study of Collaborative Knowledge Construction in STEM via Educational Robotics. In Proceedings of the Rethinking Learning in the Digital Age: Making the Learning Sciences Count, 13th International Conference of the Learning Sciences (ICLS), London, UK, 23–27 June 2018; Kay, J., Luckin, R., Eds.; International Society of the Learning Sciences: London, UK, 2018; Volume 1, pp. 496–503. [Google Scholar]
  60. Wang, C.; Fang, T.; Gu, Y. Learning Performance and Behavioral Patterns of Online Collaborative Learning: Impact of Cognitive Load and Affordances of Different Multimedia. Comput. Educ. 2020, 143, 103683. [Google Scholar] [CrossRef]
  61. Dubovi, I.; Tabak, I. An Empirical Analysis of Knowledge Co-Construction in YouTube Comments. Comput. Educ. 2020, 156, 103939. [Google Scholar] [CrossRef]
  62. Ambrose, S.A.; Bridges, M.W.; DiPietro, M.; Lovett, M.C.; Norman, M.K. How Learning Works: Seven Research-Based Principles for Smart Teaching; Jossey-Bass: San Francisco, CA, USA, 2010. [Google Scholar]
  63. Clarke, C.; Perrotta, A.; Cronin, A. Evaluating Student-Centred Computational Activities in Financial Mathematics. In Proceedings of the Accepted for Publication on MEI9 Conference Proceedings, DCU, Dublin, Ireland, 13–14 October 2023. [Google Scholar]
Figure 1. Weekly survey.
Table 1. The adapted framework.

Phase 0: No evidence of interaction
  0A: No statement regarding group interaction, sentences written only in the first person, or statement of better working individually

Phase 1: Sharing and comparing of information
  1A: A statement of observation or opinion on the proposed exercise topics
  1B: A statement of explanation by one or more group members, along with a potential agreement statement
  1C: Asking and answering questions to clarify details of statements or demands of the exercises

Phase 2: The discovery and exploration of dissonance or inconsistency among ideas, concepts or statements
  2A: Identifying and stating areas of disagreement
  2B: Asking and answering questions to clarify the source and extent of disagreement
  2C: Restating the participant’s position and possibly advancing arguments or considerations in its support by references to the participant’s experience, the literature, formal data collected, or proposal of a relevant metaphor or analogy to illustrate point of view

Phase 3: Negotiation of meaning/co-construction of knowledge
  3A: Negotiation or clarification of terms
  3B: Negotiation of the relative weight to be assigned to types of argument
  3C: Identification of areas of agreement or overlap among conflicting concepts
  3D: Proposal and negotiation of new statements embodying compromise, co-construction, and potentially provision of application examples

Phase 4: Testing and modification of proposed synthesis or co-construction
  4A: Testing proposed synthesis against ‘‘received fact’’ as shared by the participants or their prior knowledge
  4B: Modify the co-constructed knowledge and/or solutions based on test applications and feedback

Phase 5: Agreement statement(s)/applications of newly constructed meaning
  5A: Summarization of agreement(s)
  5B: Applications of new knowledge
  5C: Metacognitive statements by participants illustrating their understanding that their knowledge or way of thinking (cognitive schema) have changed as a result of the group interaction
Table 2. Phases of knowledge construction reached during lab discussions identified by using the adapted IAM.

Phase    Frequency    Percentage
0        13           3.28%
1        163          40.40%
2        43           10.61%
3        123          30.56%
4        10           2.53%
5        50           12.63%
Table 3. Subphases of knowledge construction reached during lab discussions identified by using the adapted framework.

Phase    Sub-Phase    Frequency    Percentage within Phase
0        0A           13           100.00%
1        1A           65           39.88%
1        1B           18           11.04%
1        1C           80           49.08%
2        2A           21           48.84%
2        2B           13           30.23%
2        2C           9            20.93%
3        3A           60           48.78%
3        3B           0            0.00%
3        3C           26           21.14%
3        3D           37           30.08%
4        4A           6            60.00%
4        4B           4            40.00%
5        5A           25           50.00%
5        5B           5            10.00%
5        5C           20           40.00%
Table 4. Examples of answers falling in each subphase.

0A: No statement regarding group interaction, sentences written only in the first person, or statement of better working individually
  “I learn better on my own.” (Lab 2)
  “I pretty much understood everything before I went into the lab, I’d already written code on my own for both techniques so there was really nothing extra to be gained from the lab. Apart from perhaps some clarification on the assumptions of the symmetry of Weiner process which is necessary for antithetic variate.” (Lab 8)

1A: A statement of observation or opinion on the proposed exercise topics
  “It was good to see what others opinions were on the coding and different techniques used and seeing which was the clearest to understand as a lot of people had gotten different bits but not the entire thing to run.” (Lab 6)

1B: A statement of explanation by one or more group members, along with a potential agreement statement
  “The antithetic method in general was a bit unclear to me, I wasn’t really sure on how it worked but Gavin was able to explain it fairly well. He had a deep understanding of it and was well able to pass it on.” (Lab 8)

1C: Asking and answering questions to clarify details of statements or demands of the exercises
  “Open discussion is always good to help learn. We worked together to figure out how to do question 2. I shared my screen to demonstrate the solution and explained it to my peers. This allowed me to ensure that I understood the exercise properly.” (Lab 2)

2A: Identifying and stating areas of disagreement
  “In general working in the group helps if there is any questions that I am unsure of, like question 2 mentioned above. I think when we have to design some pieces of code and bring them to the group, it’s especially useful. I got to see the different ways that other people formatted their code to mine. For example, I used one print statement and\n line separators, while other people in my group did four separate print statements. Also I designed black sholes and binomial pricing functions outside my main control variate function and just called upon them, while other people in my group did those functions within the main body of their control function. It was interesting to see all the different ways that people designed their code and to compare and contrast.” (Lab 6)

2B: Asking and answering questions to clarify the source and extent of disagreement
  “For questions 2 and three in particular it [the group work] was very useful for understanding how the grids work and the boundedness of our model. The mutual confusion about the variables a, b and c led to some good discussions” (Lab 9)

2C: Restating the participant’s position and possibly advancing arguments or considerations in its support by references to the participant’s experience, the literature, formal data collected, or proposal of relevant metaphor or analogy to illustrate point of view
  “When converting the exotic option into a regular European put it was useful to have a group looking through the code as I initially thought that only the single boundary condition would have to change and I was reminded that both the boundary conditions would have to change. “ (Lab 10)

3A: Negotiation or clarification of terms
  “As mentioned before I feel like I fully understand the computation of the “a” value as a result of discussion with my group.” (Lab 4)

3B: Negotiation of the relative weight to be assigned to types of argument
  //

3C: Identification of areas of agreement or overlap among conflicting concepts
  “Me and my team had a lot of debate about the final question. Unfortunately it was me and Stephen arguing our side which was in fact wrong but it was a good loosing debate anyways.” (Lab 8)

3D: Proposal and negotiation of new statements embodying compromise, co-construction, and potentially provide application examples
  “As a group, the discussion about the advantages and disadvantages of the Monte Carlo method helped concrete my understanding of the practicalities of this method. This helped me get a better sense of how we decide what method we use to price options more generally and when we should/shouldn’t use certain methods like Monte Carlo for certain options. For example, we discussed how there’s no benefit to using the method to price vanilla European options because it’s computationally more expensive than the binomial method; but on the other hand the Monte Carlo method allows you to price more general exotic options which may not be as easily calculated with the binomial model.” (Lab 7)

4A: Testing proposed synthesis against ‘‘received fact’’ as shared by the participants or their prior knowledge
  “We went through the solution code as a group and compared it to our own pointing out the differences, and we predicted the change before running the macro and explained our reasons to each other, giving great insight to each others opinions” (Lab 3)

4B: Modify the co-constructed knowledge and/or solutions based on test applications and feedback
  “For Q8 we looked to alter/add code in python for the Monte Carlo Pricing and were able to discuss different methods and try to find the most simplistic and efficient one and add it to our code.” (Lab 7)

5A: Summarization of agreement(s)
  “We had one member share the screen so we could all clearly see which piece of the code we were discussing and we were able to discuss some answers and explain our reasonings, particularly in the bonus question, until we agreed on an answer” (Lab 5)

5B: Applications of new knowledge
  “The only question which maybe was not that clear to all of us as a group was the final question where we had to alter the code so that it would be true for a put option. We realised after discussing it as a group that the delta of a put option is always negative, so in the indicator function I(x), it must return either −1 or 0 for a put option. We then change the sign in the if statement and this indicator function will be true for the put option case where we have a maximum between K-St and 0.” (Lab 7)

5C: Metacognitive statements by participants illustrating their understanding that their knowledge or way of thinking (cognitive schema) have changed as a result of the group interaction
  “Peer discussion is effective (in my opinion) because we all have a similar level of understanding of the topic so by talking about our understanding of a topic, we can see where we all agree and where we disagree about something and from there, get to the bottom what is actually the right way to think about the topic. For example, there was a disagreement about what a parameter was vs. a variable for the Black Scholes model. Through the group discussion, everyone was able to understand what was right and why it was right.” (Lab 1)
Table 5. Percentage of phase occurrence in each lab. For each phase, the lab registering the highest percentage is highlighted in bold.

          Phase 0    Phase 1    Phase 2    Phase 3    Phase 4    Phase 5
Lab 1     0.00%      32.43%     0.00%      37.84%     2.70%      27.03%
Lab 2     9.52%      47.62%     19.05%     19.05%     0.00%      4.76%
Lab 3     4.88%      29.27%     2.44%      26.83%     4.88%      31.71%
Lab 4     4.55%      50.00%     4.55%      34.09%     2.27%      4.55%
Lab 5     0.00%      42.86%     2.86%      45.71%     2.86%      5.71%
Lab 6     0.00%      27.03%     29.73%     29.73%     0.00%      13.51%
Lab 7     4.55%      54.55%     9.09%      20.45%     4.55%      6.82%
Lab 8     2.50%      40.00%     7.50%      37.50%     2.50%      10.00%
Lab 9     2.44%      34.15%     19.51%     29.27%     4.88%      9.76%
Lab 10    2.86%      42.86%     11.43%     28.57%     0.00%      14.29%
Table 6. Percentage of phase occurrence in each group. For each phase, the groups registering the highest percentages are highlighted in bold.

           Phase 0    Phase 1    Phase 2    Phase 3    Phase 4    Phase 5
Group A    6.35%      26.98%     15.87%     33.33%     3.17%      14.29%
Group B    7.02%      38.60%     5.26%      35.09%     0.00%      14.04%
Group C    3.03%      59.09%     6.06%      19.70%     3.03%      9.09%
Group D    0.00%      50.85%     10.17%     30.51%     3.39%      5.08%
Group E    4.55%      29.55%     9.09%      40.91%     6.82%      9.09%
Group F    0.00%      26.53%     12.24%     38.78%     0.00%      22.45%
Group G    1.79%      46.43%     14.29%     21.43%     0.00%      16.07%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
