Teaching Software Engineering Topics Through Pedagogical Game Design Patterns: An Empirical Study
Abstract
1. Introduction
2. Software Engineering Education and Serious Games
3. The ABC Triangle for Game-Based Software Engineering Education
3.1. Concepts and Process
- edge A. The first step is to identify the relationship between game design patterns and teaching and learning functions (already done by Kelle [30]).
- edge B. The second step is to establish a mapping between learning and teaching functions and SE knowledge (SEK). This is achieved through surveys of, and inquiries with, SE education specialists and professionals.
- edge C. The third step is enabled by joining the previous two (A + B); its outcome is a specific subset of GDPs aimed at SEK education.
3.2. Consolidation of Intermediate Results
3.3. Research Methodology
4. EEEE—Expert and Efficient Estimators Enterprise
- Software Estimation: definition, purpose, advantages and more generic concepts, such as sprints and user stories.
- Planning Poker: roles, mechanics and outcomes.
- SCRUM—an agile methodology for software development [40]: general concepts about the process, its phases and roles.
4.1. Narrative and Characters
“You’ve just been hired as a Junior Software Estimator at E4, a software house. You have four colleagues on your team, which is responsible for developing an application for managing local selling stores. Its features range from product browsing and rating (by customers) to stock management and customer accounting (by a manager). All these features are presented as User Stories. The goal of your team is to deliver the project on time and budget.”
- The Guru Master is the wise tutor and narrator. This elder figure introduces the game and guides the player (at his/her request) throughout each step of the game.
- John, the Scrum Master, is the team leader. Less a project manager than a facilitator, he mediates the “Planning Poker” team meeting and is a good communicator, skilled at driving it.
- Natalie, the Product Owner, serves as a communication surrogate between the project stakeholders and the development team. She holds the product vision and conveys it to the team, while being responsible for prioritising the User Stories and validating the completed tasks. She attends the planning meeting to clarify the team’s doubts about the product requirements during user story estimation.
- The remaining team members are the developers, with different experience levels: Peter is more experienced in back-end development, while Adam is proficient in front-end development. Sarah has some experience in both front-end and systems integration, while Barry, the rookie, is the least experienced.
PP: So far, all game elements serve to “set the stage” where the learning process will take place. Overall, they represent real-life setting elements such as team members, development locations and organisational entities. The learner is placed in a believable simulated environment of a real project, promoting immersion in and engagement with the game. The game’s objective is set and explained (gdp:predefined goals), with the Guru Master character acting as a permanently available tutor (or help system, gdp:helpers) that, at any time and at the learner’s discretion, may assist in coping with difficulties that arise during gameplay.
4.2. Gameplay
- Presenting—The Scrum Master presents the team with a User Story. This is a quick and concise description of what the intended system should do. An example of a possible User Story might be: “As a visitor, I want to create an account so that I can access the application.”
- Q&A—The player has to select a question, out of a possible three, to ask the Product Owner (Figure 3c). Choosing a proper question adds to the team’s knowledge about the User Story and increases the chance of a more accurate team estimation.
- Estimation—At this point, the team is ready to estimate the user story. The player chooses which card he/she intends to “play” as his/her estimation value. All team member cards are then shown simultaneously.
- Discussion—If all members played the same card, the process jumps directly to the Consensus step. If not, the members whose cards deviate from the “most played card” explain why they chose them (Figure 3d). If the player happens to be one of those members, then all members explain their estimates.
- Consensus—Finally, the Scrum Master suggests a final estimation value to be agreed upon by the team. At this point, the player can “force” his/her estimate or accept the proposed value.
- Back to step 1, with a new user story.
PP: Throughout this estimation cycle, the learner perceives the Planning Poker mechanics and is challenged to take an active part in the process. Challenging the learner to seek options that will impact the process, and whose evaluation is immediately fed back (gdp:score), consolidates knowledge intake. For example, at step 2 (Q&A), choosing the better question immediately earns the learner a higher score and effectively impacts the other team members’ estimates. Furthermore, choosing poorly triggers specific responses from the Product Owner that hint (gdp:clues) towards a more proper question, so that the learner may improve his/her performance over time. During step 4 (Discussion), the learner uncovers the reasons behind the other team members’ estimates (gdp:clues), progressively building up a mental model of their profiles and of their user story judgement and appraisal.
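To make this cycle concrete, the following is a minimal, self-contained Python sketch of the five-step loop. It is our illustration only: the function names, the NPC estimator and the deck values are assumptions, not the EEEE engine’s actual implementation.

```python
import random
from collections import Counter

CARDS = [1, 2, 3, 5, 8]                     # Planning Poker deck (story points)
TEAM = ["Peter", "Adam", "Sarah", "Barry"]  # NPC developers

def npc_estimate(real_effort):
    """Placeholder NPC estimator: usually exact, sometimes one card off."""
    idx = CARDS.index(real_effort) + random.choice([0, 0, 1, -1])
    return CARDS[max(0, min(len(CARDS) - 1, idx))]

def play_story(story, real_effort):
    print("1. Presenting:", story)
    # 2. Q&A: the player would pick one of three questions here (omitted).
    cards = {name: npc_estimate(real_effort) for name in TEAM}  # 3. Estimation
    cards["You"] = int(input(f"3. Estimation - your card {CARDS}: "))
    print("   Cards revealed:", cards)
    mode, hits = Counter(cards.values()).most_common(1)[0]
    if hits < len(cards):                                       # 4. Discussion
        print(f"4. Discussion: members deviating from {mode} explain.")
    # 5. Consensus: the Scrum Master suggests a value; the player may
    #    accept it or force his/her own estimate.
    accept = input(f"5. Consensus - accept {mode}? (y/n) ") == "y"
    return mode if accept else cards["You"]

print("Agreed estimate:", play_story(
    "As a visitor, I want to create an account.", real_effort=3))
```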
- Player Profile: Experience level, current employment position, number of correctly asked questions and number of accurately estimated user stories.
- Project Monitoring: Completed user stories, current sprint (and user story completion), remaining user stories and sprints (until the deadline).
- Team Members: Profile description of each team member, their roles and expertise.
- Knowledge Base: Definitions and explanations of both Software Estimation and the development-process concepts present in the game.
PP: During the Office scene, the learner gets complementary knowledge sources, containing more detailed and accurate information (gdp:direct information) on the team, process, project and his/her game progression (gdp:progress indicators). Every time an estimation cycle ends, the learner can look for more information on these elements, as a means to clarify doubts that might have arisen during the estimation cycle or to consolidate knowledge. To avoid information overload and promote a sustainable learning process, data are not disclosed all at once. As the game progresses, more becomes available, focusing the student’s attention on what is essential at the time and promoting an incremental intake of new, consolidated knowledge.
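This incremental disclosure could be modelled with a simple gating rule, as in the sketch below; the unlock thresholds and topic texts are hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical sketch of an "unlock as you progress" rule for the Office
# scene's Knowledge Base; thresholds and topic texts are our own invention.
KNOWLEDGE_BASE = [
    (0, "User Story", "A short description of a feature from a user's view."),
    (1, "Sprint", "A fixed time interval with specific work to complete."),
    (3, "Story Points", "A unitless measure of effort, not of time."),
    (5, "SCRUM", "An agile process with successive deliveries."),
]

def available_topics(stories_completed):
    """Only topics whose unlock threshold has been reached are shown."""
    return [(name, text) for threshold, name, text in KNOWLEDGE_BASE
            if stories_completed >= threshold]

print(available_topics(stories_completed=2))  # -> User Story, Sprint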
4.3. Game Engine
4.3.1. User Stories and Questions
- What data will the user need to input besides email and password? (High relevance. If selected, player scores +2 XP.)
- Should the password have requirements like minimum size and the mandatory use of numbers and capitals? (Medium relevance. If selected, player scores +1 XP.)
- What about social sign up with Google or Facebook credentials? (Low relevance. If selected, the player does not score.)
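One plausible way to encode such a question pool, with relevance expressed directly as the XP reward defined in Section 4.3.3, is the following sketch (the mapping structure and function are our assumption, not the game’s implementation):

```python
# Hypothetical encoding of a question pool; the texts and XP values mirror
# the example list above, but the structure itself is our assumption.
QUESTIONS = {
    "What data will the user need to input besides email and password?": 2,
    "Should the password have requirements like minimum size and the "
    "mandatory use of numbers and capitals?": 1,
    "What about social sign up with Google or Facebook credentials?": 0,
}

def ask(question, xp):
    """Return updated XP: +2 for high, +1 for medium, +0 for low relevance."""
    return xp + QUESTIONS[question]

xp = ask("What data will the user need to input besides email and password?", 0)
print(xp)  # -> 2
```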
4.3.2. Team Members, Estimation and Discussion
- The team member performs an accurate estimate, choosing the Real Effort value of the User Story.
- The team member has a 50% chance of performing an accurate estimate, a 25% chance of overestimating (+1) and a 25% chance of underestimating (−1) (gdp:randomness).
- The team member estimates poorly, randomly over- or underestimating (gdp:randomness) by a measure of more than one.
- The confidence level translates into quotes such as “I’m confident that…” or “I’m pretty sure that…” when it is high, “I think that…” or “I believe that…” for medium values, and “I suppose that…” or “I’m not sure, but…” for lower values.
- The difficulty level of the user story, according to its nominal estimate, translates into quotes (preceded by “this user story is…”) such as “very easy”, “easy”, “a bit demanding” or “quite demanding”.
- The focusing task(s) the team member chooses to measure the effort needed to complete the User Story. These tasks are selected according to the team member’s best skill level for each scope component. If the team member’s confidence level is high, more than one task is used to compose the “opinion” (a code sketch of these mechanics follows this list).
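The skill-tiered estimates and the quote composition could look like the sketch below. This is a hedged illustration: we treat ±1 as one step on the estimation scale, the offset for low-skill members and the confidence thresholds are our assumptions, and only the probabilities and quote texts come from the lists above.

```python
import random

def member_estimate(real_effort, skill):
    """Skill-tiered estimate as described above. 'high': exact; 'medium':
    50% exact, 25% overestimate (+1), 25% underestimate (-1); 'low': off by
    more than one in a random direction (gdp:randomness). The exact offset
    for 'low' is an assumption."""
    if skill == "high":
        return real_effort
    if skill == "medium":
        roll = random.random()
        if roll < 0.50:
            return real_effort
        return real_effort + (1 if roll < 0.75 else -1)
    return real_effort + random.choice([-1, 1]) * random.randint(2, 3)

def opinion(confidence, difficulty):
    """Compose a quote from a 0-1 confidence level and a story difficulty."""
    prefix = ("I'm confident that" if confidence > 0.66 else
              "I think that" if confidence > 0.33 else
              "I'm not sure, but")
    return f"{prefix} this user story is {difficulty}."

print(opinion(0.8, "a bit demanding"), "Estimate:", member_estimate(3, "medium"))
```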
4.3.3. Project Metrics and Scoring
- Experience Points (XP): The player gains XP as a way of improving his/her Player Character, which will, in future versions, unlock new abilities and access to additional sources of information. In this version, it conveys a sense of advancement through the game’s narrative, not only by giving feedback on performance but also by promoting engagement. The player only scores experience points (XP) when he/she: (a) selects a proper question to ask the Product Owner (+2 XP for a highly relevant question, +1 XP for a medium-relevance question); and (b) performs an accurate estimate on a User Story (+3 XP for an exact estimate, +1 XP if it deviates by only one).
- Overall Project Score (OPS): This is the actual final game score when a project is finished, and is computed as follows. Let S be the number of effective sprints used to finish the project, S_i the number of ideal sprints the project should have taken to complete, Q the number of high-relevance questions posed to the Product Owner, E the number of exact estimates (same as the Real Effort value) and XP the total experience points of the player. Then, the Overall Project (Final) Score is calculated according to the following equation: OPS = 30 − 10 × (S − S_i) + 10 × Q + 20 × E + XP. Thus, there is an initial 30-point default score, from which a 10-point penalty is subtracted for each “extra” sprint taken beyond the ideal number of sprints needed to complete the project. To this value, 10 points are added for each highly relevant question posed to the Product Owner and 20 points for each “spot-on” estimate. Lastly, the player’s experience points are added.
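As a worked example of the formula above (function and parameter names are ours):

```python
def overall_project_score(sprints_used, sprints_ideal, high_relevance_qs,
                          exact_estimates, xp):
    """Overall Project Score: 30 base points, -10 per extra sprint beyond
    the ideal, +10 per high-relevance question, +20 per exact estimate,
    plus the player's accumulated experience points."""
    return (30
            - 10 * (sprints_used - sprints_ideal)
            + 10 * high_relevance_qs
            + 20 * exact_estimates
            + xp)

# 5 sprints used vs. 4 ideal, 6 high-relevance questions, 3 exact estimates, 18 XP:
print(overall_project_score(5, 4, 6, 3, 18))  # 30 - 10 + 60 + 60 + 18 = 158
```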
4.4. Game Design Patterns
- Clues. These are implicit or explicit bits of information that might hint or direct the player towards a more successful path within the game. The Product Owner answering questions posed by the player (implicit) or team members explaining why they chose a specific estimate (explicit) are examples of this pattern.
- Predefined Goals. Almost every game has a (set of) predefined goal(s). Beyond the typical “win the game”, the initial introduction of the game (by the Guru Master) defines the main objective of “deliver the project on time and budget”. Implicitly, by explaining the mechanics of the game, the Guru Master also defines high estimation accuracy as an expected outcome for successfully achieving the goal of the game.
- Randomness. Every game must have a component of uncertainty; otherwise, it becomes predictable and boring. This is introduced during the computation of the team members’ estimates, so that estimates do not become fully deterministic once the player understands the underlying mechanics.
- Score. This “reward feedback” happens throughout the game, so as to keep the player motivated. Namely, it occurs when asking questions to the Product Owner and estimating User Stories, scoring experience points (XPs). At the end of each project, a final score is also awarded.
- Progress Indicators. In the Office scene, the player can see how well (or poorly) he/she is faring by perceiving the User Stories completion ratio (completed/to complete). This feedback is important to keep the player aware of his/her progress, so that tactical/strategic ideas might be reviewed to improve performance.
- Direct Information. This is explicit, clear knowledge information about concepts, context and game aspects. This is achieved by providing a “Knowledge Base” element, in the Office scene, where the player can review, consolidate and capture specific topics on the learning context (e.g., “sprints”, “User Stories”, “Story Points”, “SCRUM”, etc.).
- Helpers. This pattern introduces aids for the player whenever he/she becomes stuck or lost inside the game. The “Guru Master” character acts as such a mechanism, allowing the player to resort to him whenever deemed necessary.
- Surprises. These are random, unexpected events that impact the course of the game and try to trigger a different response from the player. For this particular pattern, the authors did not find a suitable metaphor within the game. There was the idea of introducing new User Stories during the course of the project, “scrambling” the planned work (as usually happens in real projects). However, this was deemed too intrusive to the primary learning goals, as it would add complexity to the game mechanics at a first game level. It is planned for future versions of the game.
5. Empirical Study with Students
5.1. Goal
RQ: Can a game provide a fun and effective way of learning about Software Estimation?
5.2. Subjects
5.3. Environment
The study setting must be appropriate relative to its goals, the skills required and the activities under study.
5.4. Protocol
- Baseline Group (BL): Firstly, this group answered a 5-minute questionnaire (Background Questionnaire) to ascertain the students’ background and general profile. The goal was to screen out possible “outliers” among the students regarding their basic skills, and (non-) acquaintance with Software Estimation. The group then answered a 10-minute questionnaire (Knowledge Questionnaire) regarding the Software Estimation topic, to measure the amount of knowledge the students would possess (presumably, not much, if any at all) at this point.
- Experimental Group (EG): This was the group that played the game. After answering the same background questionnaire as the BL group, these students then, in pairs, played the game for about 45 minutes. At the end, they answered (individually) the same knowledge questionnaire as the BL group, so as to measure the knowledge attained from playing the game. To rule out external threats to the validity of the results, questions on external factors and overall satisfaction were also posed to these students. During the “playing” stage, the students were constantly, and discreetly, monitored and observed by the authors.
6. Results
6.1. Background
- BG1 I have experience in Software programming.
- BG2.1 …Java.
- BG2.2 …SQL.
- BG2.3 …Swing or JavaFX.
- BG2.4 …HTML.
- BG3 I have considerable knowledge in the Software Engineering area.
- BG4 I have developed projects using the SCRUM methodology.
- BG5 I know what Planning Poker is and I have played it.
6.2. Knowledge Intake
- KW1. I feel I am acquainted with Software Estimation and its applications.
- KW2. Software Estimation allows for control of project development progress in order to meet its goals.
- KW3. An estimate is as accurate as the person’s knowledge about the task at hand.
- KW4. An estimate can be absolute or relative (compared with previous similar User Stories).
- KW5. A Sprint is the short time before a deadline where a developer team works extra hard.
- KW6. A Sprint is a pre-defined time interval during which there is specific work to be completed.
- KW7. In Planning Poker, the estimates are shown at the same time to avoid bias.
- KW8. In Planning Poker, the discussion phase comes after giving an estimate.
- KW9. A User Story is a sentence where a functionality is described.
- KW10. A User Story does not follow a template.
- KW11. Number sequences like Fibonacci (1,2,3,5,8) or T-Shirt sizes (1,2,4,8) may represent the number of hours needed to complete a User Story.
- KW12. An Epic is a User Story that is too big, thus is discarded.
- KW13. Late deliveries and unexpected time delays are a consequence of underestimating.
- KW14. An inefficient resource usage and a low productivity team may be consequences of overestimating.
- KW15. The Product Owner’s main goal is to share his vision with the development team, while also participating in Sprint Meetings to clarify any doubts.
- KW16. The Product Owner also gives estimates.
- KW17. The Scrum Master is the leader of the software development team, and can override an estimate.
- KW18. In Scrum methodology, there are successive software deliveries since the start of the project.
- KW3. An estimate is as accurate as the person’s knowledge about the task at hand. The negative difference for this question was only 0.08 (BL: 0.70 vs. EG: 0.78). Comparing the answers using the Mann–Whitney–Wilcoxon test further showed that there was no statistically significant difference between the responses of both groups (see the SciPy sketch after this list). These results indicate that, regarding this specific question, playing the game did not offer further insight to the subjects, with both groups giving similar answers based on their background knowledge and common sense.
- KW11. Number sequences such as Fibonacci (1, 2, 3, 5, 8) or T-Shirt sizes (1, 2, 4, 8) may represent the number of hours needed to complete a User Story. Although the difference is only 0.33 (BL: 2.00 vs. EG: 2.33), the concept of Story Points was not correctly assimilated by playing the game. Story Points are a measure not of time but of the effort/cost of task completion (without a specific unit). In the game, this concept is consequently not as clear as expected (the “time” factor is never mentioned), so the notion of effort naturally gravitates toward a time-based measure when task duration is considered. Making this distinction explicit in the game should improve the correct understanding of the concept.
- KW13. Late deliveries and unexpected time delays are a consequence of underestimating. This question had a negative difference of 0.39 (BL: 0.50 vs. EG: 0.89), mainly because the concepts of “late delivery” and “unexpected time delays”, although mentioned during the game, never occurred during gameplay. It was expected that the subjects could subjectively perceive and reinforce these as consequences of underestimating, but the results do not support that. The game could be improved by provoking these events during gameplay as a result of underestimating.
- KW17. The Scrum Master is the leader of the software development team, and can override an estimate. This was a trick question (diff: 0.44, BL: 2.00 vs. EG: 2.44), combining a true assumption (“leader”) with a false one (“can override an estimate”), thus resulting in an overall false statement. During gameplay, the Scrum Master character never overrides an estimate, yet nothing is said about whether he is allowed to do so. The subjects assumed it could be possible, as there was no clear evidence to the contrary. Again, demonstrating this constraint during gameplay would enhance the expected learning outcomes.
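Such per-question group comparisons can be reproduced with SciPy’s Mann–Whitney U implementation. The sketch below is purely illustrative: the two answer vectors are invented 5-point Likert responses, not the study’s raw data.

```python
# Hypothetical re-run of a per-question group comparison; the answer
# vectors are invented Likert-scale (1-5) responses, not the study's data.
from scipy.stats import mannwhitneyu

bl_answers = [1, 2, 1, 3, 2, 2, 1, 2, 3, 2]  # Baseline group
eg_answers = [3, 2, 3, 2, 3, 3, 2, 3, 3, 2]  # Experimental group

u_stat, p_value = mannwhitneyu(bl_answers, eg_answers, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # compare p against alpha = 0.05
```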
6.3. Overall Satisfaction
- OS1. I enjoyed playing EEEE.
- OS2. I felt overwhelmed by the amount of information in the game.
- OS3. I felt like I was in a lecture class.
- OS4. I think the game allowed me to experience real situations that would otherwise be hard to simulate in a regular class.
- OS5. I had fun playing the game.
- OS6. I feel I learned new things.
- OS7. I liked this learning method.
- OS8. I would like to learn other topics using this method.
- OS9. The game kept me focused.
- OS10. The main goal of the game was clear.
- OS11. The game gave me good feedback about my performance (scores, metrics, progress bars, etc.).
- OS12. I felt the game was too predictable.
- OS13. The information shown on the screen was enough.
- OS14. The game assisted me in understanding what I needed to do.
- OS15. The game session was too long.
- EF1. I found the whole experience environment intimidating.
- EF2. I would play with my partner again.
- EF3. I kept getting distracted by other colleagues outside my group.
6.4. Threats to Validity
- Environmental factors affected the course of the experiment. As already stated, the experiment took place within the familiar physical space of the course class, so as to prevent the overall setting from influencing the subjects’ performance (as corroborated by the answers to questions EF1, EF2 and EF3).
- The game (fun factor) did not keep the students engaged or focused. On the contrary, the students maintained their focus on, and enjoyment of, the game (questions OS1, OS5 and OS9) throughout the duration of the experiment (question OS15). Possible impediments were also screened, such as information overload (question OS2) and lecture-class aversion (question OS3).
- Learning was not perceived at the end of the game. Not only was learning perceived, it was also enjoyed (questions OS6, OS7 and OS8). The purpose and utility of the game were also acknowledged (questions OS10 and OS14), and some key design aspects of the game were screened (questions OS11, OS12 and OS13). Of course, inside a typical learning environment (a class), students are expected to engage in learning activities, so the perception of learning is somewhat expected. Even so, the questions posed addressed game usage, not learning in class.
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Sommerville, I. Software Engineering, 10th ed.; Pearson: London, UK, 2015.
- Bourque, P.; Fairley, R.E. (Eds.) Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0, 3rd ed.; IEEE Computer Society Press: Los Alamitos, CA, USA, 2014.
- Claypool, K.; Claypool, M. Teaching Software Engineering Through Game Design. SIGCSE Bull. 2005, 37, 123–127.
- Paasivaara, M.; Lassenius, C.; Damian, D.; Räty, P.; Schröter, A. Teaching students global software engineering skills using distributed Scrum. In Proceedings of the 2013 35th International Conference on Software Engineering (ICSE), San Francisco, CA, USA, 18–26 May 2013; pp. 1128–1137.
- Schilling, W.W., Jr.; Sebern, M.J. Teaching Software Engineering: An Active Learning Approach. ASEE Comput. Educ. (CoED) J. 2013, 4, 13.
- Yu, L. Overcoming Challenges in Software Engineering Education: Delivering Non-Technical Knowledge and Skills, 1st ed.; IGI Global: Hershey, PA, USA, 2014.
- Westera, W.; Nadolski, R.; Hummel, H.; Wopereis, I. Serious games for higher education: A framework for reducing design complexity. J. Comput. Assist. Learn. 2008, 24, 420–432.
- Molokken, K.; Jorgensen, M. A review of software surveys on software effort estimation. In Proceedings of the 2003 International Symposium on Empirical Software Engineering, Rome, Italy, 30 September–1 October 2003; pp. 223–230.
- Letra, P.; Paiva, A.C.R.; Flores, N. Game Design Techniques for Software Engineering Management Education. In Proceedings of the 2015 IEEE 18th International Conference on Computational Science and Engineering, Porto, Portugal, 21–23 October 2015; pp. 192–199.
- Ghezzi, C.; Mandrioli, D. The Challenges of Software Engineering Education. In Proceedings of the 27th International Conference on Software Engineering, St. Louis, MO, USA, 15–21 May 2005; ACM: New York, NY, USA, 2005; pp. 637–638.
- Shaw, M. Software Engineering Education: A Roadmap. In Proceedings of the Conference on the Future of Software Engineering, Limerick, Ireland, 4–11 June 2000; ACM: New York, NY, USA, 2000; pp. 371–380.
- Li, J.; Wang, X. Research on Reform in the Teaching of Software Engineering Course. In Advances in Intelligent Systems; Lee, G., Ed.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 141–144.
- Garris, R.; Ahlers, R.; Driskell, J.E. Games, motivation, and learning: A research and practice model. Simul. Gaming 2002, 33, 441–467.
- Ritterfeld, U.; Cody, M.; Vorderer, P. Serious Games: Mechanisms and Effects, 1st ed.; Routledge: New York, NY, USA, 2009.
- Susi, T.; Johannesson, M.; Backlund, P. Serious Games—An Overview; Technical Report HS-IKI-TR-07-001; University of Skövde: Skövde, Sweden, 2007.
- Navarro, E. SimSE: A Software Engineering Simulation Environment. Ph.D. Thesis, University of California, Irvine, CA, USA, 2006.
- Benitti, F.B.V.; Molléri, J.S. Utilização de um RPG no Ensino de Gerenciamento e Processo de Desenvolvimento de Software. In WEI—Workshop sobre Educação em Computação; Sociedade Brasileira de Computação: Belém do Pará, Brazil, 2008; pp. 258–267.
- Fernandes, J.M.; Sousa, S.M. PlayScrum—A Card Game to Learn the Scrum Agile Method. In Proceedings of the 2010 Second International Conference on Games and Virtual Worlds for Serious Applications, Braga, Portugal, 25–26 March 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 52–59.
- Drappa, A.; Ludewig, J. Simulation in Software Engineering Training. In Proceedings of the 22nd International Conference on Software Engineering, Limerick, Ireland, 4–11 June 2000; ACM: New York, NY, USA, 2000; pp. 199–208.
- Bollin, A.; Hochmuller, E.; Samuelis, L. Teaching Software Project Management using Simulations—The AMEISE Environment: From Concepts to Class Room Experience. In Proceedings of the 2012 IEEE 25th Conference on Software Engineering Education and Training, Nanjing, China, 17–19 April 2012; pp. 85–86.
- Shaw, K.; Dermoudy, J. Engendering an Empathy for Software Engineering. In Proceedings of the 7th Australasian Conference on Computing Education, Newcastle, NSW, Australia, 31 January–4 February 2005; Australian Computer Society, Inc.: Darlinghurst, Australia, 2005; Volume 42, pp. 135–144.
- Xia, J.C.; Caulfield, C.; Baccarini, D.; Yeo, S. Simsoft: A game for teaching project risk management. In Proceedings of the 21st Annual Teaching Learning Forum, Perth, Australia, 2–3 February 2012.
- Von Wangenheim, C.G.; Thiry, M.; Kochanski, D. Empirical Evaluation of an Educational Game on Software Measurement. Empir. Softw. Eng. 2009, 14, 418–452.
- Baker, A.; Navarro, E.O.; van der Hoek, A. An experimental card game for teaching software engineering. In Proceedings of the 16th Conference on Software Engineering Education and Training, Madrid, Spain, 20–22 March 2003; pp. 216–223.
- von Wangenheim, C.G.; Savi, R.; Borgatto, A.F. DELIVER!—An Educational Game for Teaching Earned Value Management in Computing Courses. Inf. Softw. Technol. 2012, 54, 286–298.
- Calderón, A.; Ruiz, M.; Orta, E. Integrating Serious Games as Learning Resources in a Software Project Management Course: The Case of ProDec. In Proceedings of the 1st International Workshop on Software Engineering Curricula for Millennials, Buenos Aires, Argentina, 27 May 2017; IEEE Press: Piscataway, NJ, USA, 2017; pp. 21–27.
- Silva, A.C. Jogo Educacional para Apoiar o Ensino de Técnicas para Elaboração de Testes de Unidade. Master’s Thesis, Computação Aplicada, São José, Brazil, 2010.
- Farias, V.; Moreira, C.; Coutinho, E.; Santos, I.S. iTest Learning: Um Jogo para o Ensino do Planejamento de Testes de Software. In Proceedings of the V Fórum de Educação em Engenharia de Software (FEES 2012), Natal, Brazil, 26–27 September 2012.
- Ribeiro, T. iLearnTest: Jogo Educativo para Aprendizagem de Testes de Software. Master’s Thesis, Faculty of Engineering, Porto, Portugal, 2014.
- Kelle, S.; Klemke, R.; Specht, M. Design Patterns for Learning Games. Int. J. Technol. Enhanc. Learn. 2011, 3, 555–569.
- Grosser, M. Effective teaching: Linking teaching to learning functions. S. Afr. J. Educ. 2007, 27, 37–52.
- Shuell, T.; Moran, K. Learning theories: Historical overview and trends. In The International Encyclopedia of Education, 2nd ed.; Husen, T., Postlethwaite, T., Eds.; Pergamon Press: Oxford, UK, 1994; pp. 3340–3345.
- Alexander, C. A Pattern Language: Towns, Buildings, Construction; Oxford University Press: New York, NY, USA, 1977.
- Gamma, E.; Helm, R.; Johnson, R.; Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1995.
- Bjork, S.; Holopainen, J. Patterns in Game Design (Game Development Series); Charles River Media, Inc.: Rockland, MA, USA, 2004.
- Flores, N.H.; Paiva, A.C.; Letra, P. Software Engineering Management Education through Game Design Patterns. Procedia Soc. Behav. Sci. 2016, 228, 436–442.
- Faste, T.; Faste, H. Demystifying “design research”: Design is not research, research is design. In Proceedings of the IDSA Education Symposium, Las Vegas, NV, USA, 15–20 July 2012; p. 15.
- Zelkowitz, M.V.; Wallace, D.R. Experimental models for validating technology. Computer 1998, 31, 23–31.
- Kapp, K.M. The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education, 1st ed.; Pfeiffer: San Francisco, CA, USA, 2012.
- Schwaber, K. Scrum Development Process. In OOPSLA Business Object Design and Implementation Workshop; Sutherland, J., Patel, D., Casanave, C., Hollowell, G., Miller, J., Eds.; Springer: London, UK, 2012; Volume 10.
- Goulao, M.; e Abreu, F.B. Modeling the Experimental Software Engineering Process. In Proceedings of the 6th International Conference on the Quality of Information and Communications Technology (QUATIC 2007), Lisbon, Portugal, 12–14 September 2007; pp. 77–90.
- Kitchenham, B.; Al-Khilidar, H.; Babar, M.A.; Berry, M.; Cox, K.; Keung, J.; Kurniawati, F.; Staples, M.; Zhang, H.; Zhu, L. Evaluating guidelines for reporting empirical software engineering studies. Empir. Softw. Eng. 2008, 13, 97–121.
- Shull, F.; Singer, J.; Sjøberg, D.I. Guide to Advanced Empirical Software Engineering; Springer: Berlin/Heidelberg, Germany, 2007.
- Carver, J.C.; Jaccheri, L.; Morasca, S.; Shull, F. A checklist for integrating student empirical studies with research and teaching goals. Empir. Softw. Eng. 2010, 15, 35–59.
- Likert, R. A Technique for the Measurement of Attitudes; The Science Press: New York, NY, USA, 1932; Volume 22.
- Hollander, M.; Wolfe, D.A.; Chicken, E. Nonparametric Statistical Methods; Wiley: Hoboken, NJ, USA, 2013.
Background questionnaire results—EG vs. BL (Mann–Whitney–Wilcoxon test):

| Question | EG Mean | EG SD | BL Mean | BL SD | H₁ | U | p |
|---|---|---|---|---|---|---|---|
| BG1 | 4.10 | 0.70 | 3.50 | 0.50 | ≠ | 27.5 | 0.062 |
| BG2.1 | 3.70 | 0.46 | 3.90 | 0.30 | ≠ | 40.0 | 0.276 |
| BG2.2 | 2.80 | 0.60 | 3.20 | 0.75 | ≠ | 35.0 | 0.218 |
| BG2.3 | 2.00 | 1.34 | 2.40 | 0.92 | ≠ | 41.0 | 0.478 |
| BG2.4 | 1.40 | 0.80 | 1.90 | 1.22 | ≠ | 39.5 | 0.401 |
| BG3 | 3.00 | 0.63 | 3.10 | 0.83 | ≠ | 43.0 | 0.546 |
| BG4 | 1.30 | 0.64 | 1.40 | 0.49 | ≠ | 42.0 | 0.451 |
| BG5 | 1.10 | 0.30 | 1.30 | 0.90 | ≠ | 49.5 | 0.942 |
Knowledge questionnaire—group statistics:

| Group | N | Mean | Std. Deviation | Std. Error Mean |
|---|---|---|---|---|
| BL | 18 | 1.7833 | 0.5628 | 0.1326 |
| EG | 18 | 1.2339 | 0.4861 | 0.1146 |

Independent-samples t-test (Levene’s test for equality of variances, then t-test for equality of means):

| | F | Sig. | t | df | Sig. (2-Tailed) |
|---|---|---|---|---|---|
| Eq. Var. Assumed | 0.418 | 0.522 | 3.134 | 34.00 | 0.004 |
| Eq. Var. Not Assumed | | | 3.134 | 33.30 | 0.004 |
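The two tables above correspond to a standard Levene's test followed by an independent two-sample t-test, which could be reproduced with SciPy as sketched below; the per-subject score vectors are placeholders, not the study's raw data.

```python
# Hypothetical reproduction of the analysis in the tables above.
from scipy.stats import levene, ttest_ind

bl_scores = [1.8, 2.3, 1.2, 1.6, 2.1, 1.5, 1.9, 1.4]  # Baseline (placeholder)
eg_scores = [1.1, 1.4, 0.9, 1.3, 1.6, 1.0, 1.2, 1.4]  # Experimental (placeholder)

f_stat, f_p = levene(bl_scores, eg_scores)      # H0: equal variances
t_stat, t_p = ttest_ind(bl_scores, eg_scores)   # equal_var=True by default
print(f"Levene F = {f_stat:.3f} (p = {f_p:.3f}); t = {t_stat:.3f} (p = {t_p:.3f})")
```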