3.1. Scalability
Highly scalable models are those that can easily adapt to increasing or decreasing student enrollment, whether through gradual trends or sudden variations, without significant modifications to the modality or to the workload burden. Models in which the instructor(s) are responsible for overseeing all students equally (SI, MI-J) work well for student cohorts below a specific threshold, approximately 70 for the Mechanical Engineering capstone structure at UMaine. Above that number, the burden of meeting with all teams and evaluating deliverables becomes infeasible. Some flexibility may be possible by reducing other teaching duties of the lead instructors, but, in any case, these models have a relatively hard upper limit on feasible cohort size.
Models with support faculty (MI-VF and SI-SF) scale relatively well, as they allow for a more granular approach to increasing or decreasing the technical advising effort. However, these models require a relatively long period of time to scale effectively, making them poorly suited to rapidly increasing enrollment. When several new support faculty members were included, the training and monitoring burden on the lead instructor(s) was high, placing an upper limit on the number of support faculty members who can be included. It is possible that, after several years of execution, enough faculty members would become familiar with the capstone structure that the burden on the lead faculty could be reduced. However, in three consecutive years of including support faculty members, this stage was not reached at UMaine. Attempting to extend one of these methods to a year with a greatly increased student cohort would be expected to place an immense burden on the lead instructor(s), though slow, gradual growth may be feasible with these modalities.
The MI-S model is well suited to scaling, provided that enough faculty members who can coordinate tightly are willing to carry out these duties. Though scaling is not as granular as in the MI-VF or SI-SF modalities, when an additional lead instructor joins the team, they can be trained effectively and manage a significant workload during their first year. While this training places a burden on the other lead instructors during the initial year, it can essentially be eliminated by the second year, as was seen in 2022–2023 at UMaine.
3.2. Project Diversity
It is common for mechanical engineering students to develop specific interests in which to specialize, and it is natural that increased student satisfaction and better learning outcomes will be achieved through a diverse set of projects [19,20]. Increasing project diversity requires appropriate technical expertise from team supervisors and can increase instructor workload compared to standardized projects. A diverse project set can also lead to inequality in student workload or grading.
Figure 5 shows the number of unique projects per capstone student for the seven years of the case study. In many cases, the same problem statement is given to multiple teams; this is counted as one unique project regardless of the number of teams. Larger values of unique projects per student indicate greater project diversity. Project diversity was extremely low in the initial year of the study, which, as noted above, was intentional on the part of the instructor. In subsequent years, the inclusion of multiple lead instructors increased project diversity. The highest value of 0.151 unique projects per student was achieved in the first year of including volunteer support faculty. The inclusion of support faculty provides more diverse technical expertise in team advising, allowing for a wider variety of offered projects. Often, the lead instructors of capstone have industry and teamwork experience, while the support faculty members participate primarily because of their specific technical knowledge. In these models (MI-VF, SI-SF), the lead instructors support the teams in teamwork aspects while delegating technical advising.
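For readers who wish to reproduce this diversity metric, the following minimal sketch computes unique projects per student from a mapping of teams to problem statements; the team and student data shown are hypothetical and illustrate only the calculation, not the actual UMaine records.

```python
# Sketch of the project-diversity metric: unique problem statements divided by
# cohort size. Assignment data below are hypothetical, for illustration only.

def unique_projects_per_student(team_assignments):
    """team_assignments maps each team to (problem statement, list of students)."""
    problem_statements = {problem for problem, _ in team_assignments.values()}
    num_students = sum(len(students) for _, students in team_assignments.values())
    return len(problem_statements) / num_students

# Hypothetical example: three teams, two of which share one problem statement.
teams = {
    "Team A": ("Portable water filtration rig", ["S1", "S2", "S3", "S4", "S5"]),
    "Team B": ("Portable water filtration rig", ["S6", "S7", "S8", "S9", "S10"]),
    "Team C": ("Bicycle-powered generator", ["S11", "S12", "S13", "S14", "S15"]),
}

print(f"{unique_projects_per_student(teams):.3f}")  # 2 projects / 15 students ≈ 0.133
```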
In the years following 2018–2019, the reduction in project diversity was mainly due to student choice. In each of these years, some offered projects went unselected, while others received enough interest for multiple teams to be assigned the same project. The move to the MI-S model with three lead instructors retained a sufficiently broad set of technical expertise to continue with levels of project diversity similar to those achieved with support faculty. The increase in diversity during 2022–2023 is attributed to the experienced instructor team developing an especially diverse set of project options. Other than the SI model, no model severely limits the diversity of student projects.
3.3. Cost
The UMaine Mechanical Engineering Department provides the majority of the funding for capstone projects. Some projects are supported by external clients, including through the purchase of hardware. Projects are typically completed for approximately USD 1000, and problem statements are designed with this figure in mind. However, several factors related to the teaching modality can influence the total cost to the department.
Figure 6 shows the cost per project invested by the Department for each academic year. Data from the 2020–2021 year, during the COVID-19 pandemic, should not be used to draw conclusions, as the build process that year was reduced in scope to deal with public health restrictions. The highest values of cost per team occurred in years with volunteer faculty, a natural expectation when the instructor team expands and guidance is not as tightly coordinated. With fewer faculty coordinating more closely (MI-J and MI-S models), costs can be more tightly controlled, both through the definition of project scopes and through careful hardware usage, such as sharing common components and standard parts like fasteners. The benefits of standardizing project options are evident in the 2016–2017 year (SI model), which achieved the second lowest cost per team, behind only the COVID-19-restricted year.
It should be noted that the costs within this case study, and likely in other implementations of these capstone teaching modalities, are greatly influenced by factors other than the teaching modality. For example, the 2021–2022 year saw a dramatic increase in capstone enrollment, to 112 students. With far more teams than in prior years, the instructor team would normally have offered projects with smaller, cheaper builds to contain costs; however, rollover funds from the 2020–2021 year made that unnecessary. Other factors, such as external client funding, specific student project selections, and available department funding, may have as much influence on the capstone costs to a department as the teaching modality.
3.4. Student Satisfaction
Capstone is a unique experience for most undergraduate engineering students and can be frustrating, especially for students who are unexpectedly confronted with the need for modern engineering competencies such as managing engineering activities, communication, and lifelong learning. Key drivers of student satisfaction are interest level (driven largely by project topic), perceived grading fairness, manageable workload, consistency of information, and project success [2,9,20,21,22]. In particular, most capstone experiences do not require a successful project to earn a high grade or to achieve the student learning outcomes, but students can be frustrated if their projects are seen as complete failures. The choice of teaching modality can have a major impact on many of these aspects.
Student satisfaction in this case study is evaluated through university-administered student evaluations and through instructor observations. At UMaine, prior to 2019–2020, students completed a 29-question paper questionnaire to evaluate each course and instructor. Starting in 2019–2020, UMaine adopted a 19-question online questionnaire. Many of the questions were identical or near-identical, and the results presented in this study focus on three questions that were identical in the two questionnaires. Several caveats must be provided regarding student evaluations. The first is that the data are limited to students who responded; it is possible that, in low-response-rate cases, only data from students with a strongly positive or negative experience are recorded. The literature on the topic has noted either a minimal impact of selection bias [23] or a slight overprediction of true rankings on average [24], although the impact of selection bias on a single course is very difficult to predict. Response rates ranged from 33% in 2021–2022 to 73% in 2017–2018. The second is that student evaluation ratings of courses and instructors are known to be influenced by the grades given and many other factors separate from the teaching modality [25,26,27]; hence, instructor observations are included in the following discussion. Finally, the ratings during the pandemic years of 2019–2020 and 2020–2021 may be significantly impacted by the changes made to comply with public health restrictions, including meeting virtually and reducing hands-on work.
Student evaluation data providing answers to three key questions are given in Figure 7, with the questions provided in Table 2.
Generally, no strong correlations between teaching modality and student evaluations of the course or instructor are evident. Both ratings have generally increased over time, which is attributed, more than to any other factor, to the development of an improved course structure and the revision of deliverables by a relatively consistent team of instructors. In one example, the MI-VF model received both the second lowest and the second highest instructor ratings, even though the same instructors and model were being evaluated. The 2017–2018 year also saw the lowest overall course rating but an above-average instructor rating, showing that the course and instructor ratings are not strongly correlated.
A significant concern in modalities with multiple instructors is grading consistency. Especially for complex assignments, consistent grading across multiple evaluators, even with a detailed rubric, is a noted concern for students and instructors alike [28,29]. This is investigated through the student evaluation question “How fair were the assessment procedures?”, with responses shown in Figure 7. Responses to this question vary little between years, with the exception of 2016–2017, when the reduced number of relevant deliverables led students to believe that grades were being assigned relatively arbitrarily. Years with multiple lead instructors did not lead to a perceived unfairness in grading. In fact, the average response during 2021–2022 and 2022–2023, when each instructor graded content only from the portion of teams assigned to them, was higher than during 2017–2018 and 2018–2019, when both instructors co-graded content from all student teams. The most positive responses were in 2019–2020, when a single instructor was responsible for all grading, and 2020–2021, when four supporting faculty members were responsible for grading the groups they advised, with close coordination from a single lead instructor. This may be evidence that strong guidance from a single lead instructor leads to fairer grading; however, the small separation from year to year indicates that fair grading can be achieved with many modalities, provided the instructor team coordinates well.
The SI model had limited project diversity and few deliverables. Students were mostly focused on designing and building their projects, which led to poor student learning outcomes but a moderate course rating, since students could spend most of their time “tinkering”. However, the lack of deliverables left students feeling as though grades were determined arbitrarily, with little chance for feedback, a likely reason for the low instructor rating. The students did note receiving consistent information in this model.
The models with volunteer faculty (MI-VF and SI-SF) received similar student feedback. Project diversity was high and technical advice was strong, leading to good project outcomes, but the consistency of the information received and communication were issues. This is reflected in strong course ratings but relatively low instructor ratings, as the rating was that of the lead instructor, who was not necessarily the faculty advisor of the team. Although the lead instructor(s) extensively trained supporting faculty members on the expected level of engagement, the actual level varied between teams and was noticed by the students. This was not due to neglect, but to differing backgrounds and levels of experience in mentoring teams. Inconsistency in technical expectations and stakeholder input is a reality of the industrial world and could be integrated into the learning experience. However, student frustration primarily centered on the varying availability of team advisors and on differing instructions for graded course deliverables. Models with support faculty require intensive training and coordination by the lead instructor to achieve high student satisfaction.
In the MI-J and MI-S models, the smaller number of instructors and the high level of attention received by each team led to strong student satisfaction. Students appreciated the high project diversity and relatively consistent information. The low course rating in 2017–2018 can be attributed to the first-time introduction of a large number of deliverables, which were refined in later years. With fewer faculty members involved, messaging could be well coordinated; for example, more students in these models were able to understand that a project device with poor performance can still be a successful learning experience. More coordination is required in the MI-S model to ensure consistent grading, as not all deliverables are seen by all instructors. Coordination is also necessary to ensure an equitable workload across all students, as not all instructors meet with each team regularly. Hence, the MI-S model does present some minor challenges to student satisfaction.
The MI-S model received strong ratings of both the course and the instructors in 2021–2022, and the following year the highest ratings of both across the case study were achieved. This can be attributed not just to the model, but also to the continued development of the course structure, project scope, deliverables, and grading rubrics over the prior seven years. While the teaching modality can impact student satisfaction, continued refinement and knowledge transfer are extremely important. The authors argue that a strong focus on the quality of the capstone experience (evidenced by the compilation of data and the completion of this study) is a major factor in the gradual improvement in student evaluations during this case study.
3.5. Difficulties with Student Evaluations
Obtaining actionable student evaluation data for capstone can be challenging under some instruction modalities. The unique nature of the course compared to most others in the undergraduate curriculum, together with its teamwork aspects, means that the scope of some evaluation questions may be unclear to students. The instructors may act in several roles (lecturer, teamwork mentor, technical advisor), and students may each provide their ratings based on a different interpretation of a question.
The prior UMaine student evaluation method, used until 2019–2020, asked students to evaluate multiple instructors with a single response. The new method allows for individual ratings but leaves some ambiguity over which evaluations students are required to complete. With supporting faculty members included, evaluating the quality of advising for each team is extremely difficult, even with custom-developed questions. Sample sizes may include only a few students, and without prior, similar experiences for comparison, student feedback is often not actionable. Reliably evaluating support faculty may require lead instructors to attend additional meetings, further increasing workloads. Alternatives to traditional student evaluations are recommended for evaluating instructors under most modalities in this case study.
3.6. Faculty Workload
The exploration of teaching modes in this case study has been driven primarily by the balance between project diversity, student learning and assessment, and faculty workload. The course workload must be considered both at the level of each individual faculty member involved and in terms of the total effort contributed by all faculty in a department.
We consider the number of students in project teams managed by an instructor to be a suitable proxy for faculty workload, as shown in Figure 8. Efforts that do not scale with the number of students, including lecturing (1–2 h per week) and managing the course webpage, are relatively minor. The typical structure throughout the case study period is for an instructor to meet with each team of approximately five students weekly for 30 min. An additional 3–6 h per semester are required per group for grading deliverables, including reports and presentations. Despite many of the benefits described above, the MI-J and MI-VF modes of instruction do not significantly reduce instructor workload, because each lead instructor is still responsible for many contact hours, attending weekly meetings with all teams, and reviewing major deliverables from each team. The SI-SF model in 2020–2021 was introduced as a means of reducing the lead instructor workload by delegating responsibility for weekly meetings and deliverable grading to support faculty. While these workload aspects were significantly reduced, this modality created a burden on the lead instructor to train and coordinate the support faculty, especially considering the complex assessment requirements for ABET, which proved as time-consuming as advising teams directly. While the training effort would likely decrease in subsequent years, it is also likely that support faculty members would rotate in and out, limiting any reduction in training effort. During the three years with support faculty in some capacity, the overall lead instructor workload did not noticeably decrease. This modality was also quite burdensome for the Department: while only a single lead instructor was assigned to the course, the Department considered the effort of the support faculty in course assignments and paid an overload fee to the participating support faculty members.
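To make the scaling of this proxy concrete, the sketch below tallies the meeting, grading, and lecturing hours quoted above as a function of the number of teams an instructor advises. It is a rough estimate under an assumed 15-week semester, not a figure reported in this study, and the team counts used are hypothetical.

```python
# Rough per-semester workload estimate for a single instructor, using the
# figures quoted in the text: a weekly 30 min meeting per team, 3-6 h of
# grading per team per semester, and 1-2 h of lecturing per week that does
# not scale with enrollment. The 15-week semester is an assumption.

def instructor_hours_per_semester(num_teams, weeks=15,
                                  meeting_h_per_team_per_week=0.5,
                                  grading_h_per_team=(3, 6),
                                  lecture_h_per_week=(1, 2)):
    """Return a (low, high) estimate of instructor hours for one semester."""
    meetings = num_teams * meeting_h_per_team_per_week * weeks
    low = meetings + num_teams * grading_h_per_team[0] + lecture_h_per_week[0] * weeks
    high = meetings + num_teams * grading_h_per_team[1] + lecture_h_per_week[1] * weeks
    return low, high

# Hypothetical comparison: an instructor advising 8 teams versus 20 teams.
for n in (8, 20):
    low, high = instructor_hours_per_semester(n)
    print(f"{n} teams: roughly {low:.0f}-{high:.0f} h per semester")
```

Because the team-dependent terms dominate, the estimate grows nearly linearly with the number of teams, which is why the student count managed by an instructor serves as a reasonable workload proxy.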
The MI-S modality effectively reduced the workload for the lead instructors without requiring significant investment from many faculty members. The coordination burden, including delivering lectures and organizing the student cohort, was split between the lead instructors, which was not the case in the models with support faculty. Of the modalities included in this case study, this one was considered the most manageable for instructors.