1. Introduction
The adaptation of studies to the European Higher Education Area (EHEA) has led Spanish universities to develop systematic evaluation mechanisms for the teaching–learning process in terms of methodology and teaching competencies [1]. To enhance quality in line with the standards outlined by the European Association for Quality Assurance in Higher Education (ENQA), universities have established internal and external systematic mechanisms for evaluating the quality of their activities. The official regulations in Spain [2,3,4] mandate that university programs incorporate an internal quality assurance system encompassing, among other aspects, evaluation and improvement procedures for teaching and staff. These programs are required to adhere to the criteria and guidelines for quality assurance outlined in the EHEA (Standards and Guidelines for Quality Assurance in the European Higher Education Area, ESG).
In this context, the evaluation of teaching activities has become a significant aspect of this assessment. Consequently, the “DOCENTIA” programs were initiated in Spanish universities in 2007 [5], aligning with standards established by internationally recognized organizations for teacher evaluation. These standards, particularly those outlined in “The Personnel Evaluation Standards” by The Joint Committee on Standards for Educational Evaluation [6], serve as a reference for designing, developing, and evaluating teacher assessments. DOCENTIA programs provide a procedure for evaluating teaching activity across all areas of action of university teaching staff by analyzing four basic dimensions: (1) teaching planning, (2) teaching development, (3) results, and (4) innovation and improvement. The various stakeholders engaged in the learning process, such as department heads, deans, the teaching innovation service, teachers, and students, participate in assessing these four dimensions. Notably, the students’ perspective currently holds significant influence on the appraisal of teaching quality [5,7]. Moreover, in the context of evaluation and improvement procedures, the established protocol includes deliberate mechanisms linking teacher evaluation to their training, acknowledgment, and promotion, as outlined in the support guide of the verification program [4].
In this sense, Spanish universities have developed questionnaires that incorporate the assessment of various items related to the teaching–learning process for each subject and for all the teachers involved in that subject. These questionnaires are normally completed voluntarily and anonymously by students at the end of the semester.
Table 1 shows the questionnaire used at the University of Malaga, indicating the three dimensions to which the students’ evaluations contribute. The items are assessed using a Likert scale ranging from 1 (completely disagree) to 5 (completely agree). The results of these questionnaires contribute 28% to the overall evaluation of teaching activity, while the remainder comes from teaching managers (30%), the innovation service (30%), and the teachers themselves (12%).
Teachers undergoing the accreditation process for promotion require this assessment, which is expressed numerically on a scale from 0 to 100 and categorized as follows: unfavorable (<50), favorable (50–69), very favorable (70–89), and excellent (>90).
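For illustration, the following minimal sketch (Python) shows how the stated weights and categorization could be combined; the sub-scores are hypothetical, and the assumption that each evaluation source is itself expressed on a 0–100 scale is ours, not a detail stated in the paper.

```python
# Sketch of the overall teaching-evaluation score described above.
# Assumption (not from the study): each source provides a 0-100 sub-score
# that is combined with the stated weights before categorization.

WEIGHTS = {
    "students": 0.28,      # QAS questionnaires
    "managers": 0.30,      # teaching managers
    "innovation": 0.30,    # teaching innovation service
    "teachers": 0.12,      # teachers' self-assessment
}

def overall_score(sub_scores):
    """Weighted 0-100 evaluation score combining the four sources."""
    return sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

def categorize(score):
    """Map a 0-100 score to the accreditation categories named in the text."""
    if score < 50:
        return "unfavorable"
    if score < 70:
        return "favorable"
    if score < 90:
        return "very favorable"
    return "excellent"  # treating 90 and above as excellent is an assumption

# Hypothetical sub-scores, for illustration only
scores = {"students": 82.0, "managers": 75.0, "innovation": 68.0, "teachers": 90.0}
total = overall_score(scores)
print(f"{total:.1f} -> {categorize(total)}")  # 76.7 -> very favorable
```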
Questionnaires are a valuable tool in the assessment of teaching quality, enabling the collection of feedback from students and the generation of easily analyzable quantitative data [8,9]. The anonymity provided by questionnaires encourages students to express their opinions freely. However, the widespread use of surveys, driven by the desire to ensure teaching quality, has resulted in potential survey fatigue among students [10,11]. This saturation may affect the quality of responses, as individuals can become overwhelmed by the number of surveys they encounter. Furthermore, student participation in surveys is typically voluntary, and this fatigue can result in low participation, leading to potential underrepresentation and biased feedback.
Making survey participation mandatory could raise ethical concerns, as it infringes upon students’ autonomy and may generate coerced or insincere responses. The imposition of mandatory participation can lead to student resentment and compromise the integrity of the feedback. In such a scenario, students may provide responses just to fulfill the requirement, potentially undermining the purpose of collecting meaningful feedback. Incentivizing students to engage in surveys emerges as a potential solution [12,13]. While incentives can enhance participation rates, the choice of incentives is crucial for ensuring the integrity of feedback.
This study focused on the use of incentives to encourage student participation in the questionnaires for evaluating teaching quality. Notably, we found a lack of publications in the existing literature exploring the implementation of a supplementary evaluation score as a motivational tool for student participation in quality assurance survey (QAS) questionnaires. In this context, the proposed incentive system provides an extra score equivalent to 5% of the overall attainable grade for the student. The QAS questionnaires are provided by the university’s quality service and distributed by the teacher. The responses, which serve as the primary data source in this study, were collected from the student participants through an online platform. It is crucial to emphasize that this online format ensures the anonymity of the student responses. The research was conducted across a total of five subjects, comprising diverse undergraduate and master’s courses from two different degree programs of the University of Malaga, to improve the representativeness of the study results. To assess the impact of the incentive system, the results were compared with those from previous academic years in which the incentive system was not employed. The research questions that guided this study are the following:
How does the proposed guaranteed prize incentive system influence student participation in quality assessment questionnaires?
Is there any significant relationship between the incentive system and students’ final grades?
How does the incentive system influence students’ perceptions regarding the anonymity of their responses to the survey?
The assessment of the impact of the proposed prize incentive system is a critical aspect of this research. Understanding how incentives influence student engagement in the evaluation process is essential for improving the effectiveness of questionnaires as a tool for assessing teaching quality. The proposed methodology aims to assess the motivational impact of the incentive system. Studying the potential relationship between the incentive system and the students’ final grades is crucial to understanding its implications for academic outcomes. Finally, this paper explores students’ perceptions regarding the anonymity of their questionnaire responses, with particular attention to their concerns about the confidentiality of their feedback.
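To make the incentive mechanics described above concrete, the following minimal sketch applies a bonus worth 5% of the maximum attainable grade to a student’s final mark; the 0–10 grading scale and the cap at the maximum grade are illustrative assumptions, not details reported in this study.

```python
# Sketch of the guaranteed-prize incentive: an extra score worth 5% of the
# maximum attainable grade, added only if the student submitted the QAS
# questionnaire. Assumptions: grades on a 0-10 scale, result capped at 10.

MAX_GRADE = 10.0
BONUS = 0.05 * MAX_GRADE  # 0.5 points

def final_grade(raw_grade, submitted_questionnaire):
    grade = raw_grade + (BONUS if submitted_questionnaire else 0.0)
    return min(grade, MAX_GRADE)

print(final_grade(6.3, True))   # 6.8
print(final_grade(9.8, True))   # 10.0 (capped at the maximum grade)
print(final_grade(6.3, False))  # 6.3
```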
3. Results
Figure 1 shows the evolution of student participation in QAS questionnaires for the subjects SOCP and IMS. The modifications in the system for collecting questionnaire responses are also highlighted, along with the period of COVID-19 confinement in Spain. The gaps in the trends result from the fact that none of the authors held the teaching responsibility for the subject during those periods. Consequently, the feedback from those questionnaires remains private to safeguard the personal assessments of the instructors who taught the subject in those academic years.
As can be seen, participation in the questionnaires began declining in both subjects from 2016, and the shift to online survey administration did not halt this trend. The slight increase in participation observed during the COVID-19 confinement can probably be attributed to the exceptional circumstances of that period and the widespread need for interpersonal engagement. Nevertheless, participation in the 2020–2021 academic year remained consistently low. Only with the implementation of the incentive system did a significant surge in participation become evident.
The significant improvement in participation observed in these two subjects during the 2021–2022 academic year led to the extension of the incentive system to the remaining subjects included in this study for the 2022–2023 academic year.
Table 4 presents the results obtained in all five subjects over the three academic years under investigation.
Table 5 shows the responses of students to the short questionnaire regarding the new incentive system. The overall participation rate of students in this questionnaire was 70%.
The advantage of the incentive used in this study is that it imposes no monetary cost on the institution; it involves an extra score added to the student’s final grade. However, as mentioned earlier, it is essential to verify that this additional score does not significantly affect the overall final grades of students, ensuring that it does not unduly influence the academic assessment process. The box-and-whisker diagram in Figure 2 compares the grades achieved by students over the three academic years under examination. As the same teacher taught the SOCP subject for over 5 years, its results span two additional academic years, taking advantage of the data availability.
At first glance, there seems to be no substantial impact of the incentive on the final grades of the two mandatory subjects, SOPC and IMS. Furthermore, if any effect were present in IMS, it would apparently run contrary to expectations. In the TCS subject, the upward trend in the average grade seems to precede the introduction of the incentive. The average grade trends in the MENPP and PASEL subjects appear more erratic.
To quantify whether the incentive significantly affects final grades, a one-way analysis of variance (ANOVA) [15] was employed to examine whether the differences between the average grades across academic years are statistically significant.
Table 6 summarizes the ANOVA results for all subjects at a significance level of 0.05, treating academic years as distinct groups.
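The following is a minimal sketch of this analysis in Python (scipy), with hypothetical grade lists standing in for the real data: each academic year’s final grades form one group, and the null hypothesis of equal mean grades is tested at the 0.05 significance level.

```python
from scipy import stats

# Hypothetical final grades per academic year (stand-ins for the real data).
grades_by_year = {
    "2020-2021": [5.1, 6.4, 7.0, 4.8, 6.2, 5.9],
    "2021-2022": [6.0, 6.8, 7.5, 5.5, 6.9, 6.1],   # incentive introduced
    "2022-2023": [6.3, 7.1, 6.7, 5.8, 7.0, 6.4],   # incentive
}

# One-way ANOVA: are the mean grades equal across academic years?
f_stat, p_value = stats.f_oneway(*grades_by_year.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("At least one academic year differs significantly in mean grade.")
else:
    print("No significant difference in mean grades across academic years.")
```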
We can take this a step further and examine whether specific groups differ significantly from one another in the four subjects where the ANOVA results indicate significant differences in average grades across the three academic years under study.
Table 7 presents the results of pairwise multiple comparison tests, highlighting the academic years with incentive in bold.
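Where the ANOVA indicates a global difference, Tukey’s post hoc test identifies which pairs of years differ. A minimal sketch using statsmodels follows (again with hypothetical data standing in for the real grades); the actual analysis may differ in implementation details.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical grades and their academic-year labels (stand-ins for real data).
grades = np.array([5.1, 6.4, 7.0, 4.8, 6.2, 5.9,
                   6.0, 6.8, 7.5, 5.5, 6.9, 6.1,
                   6.3, 7.1, 6.7, 5.8, 7.0, 6.4])
years = (["2020-2021"] * 6) + (["2021-2022"] * 6) + (["2022-2023"] * 6)

# Tukey's HSD post hoc test at the 0.05 significance level.
result = pairwise_tukeyhsd(endog=grades, groups=years, alpha=0.05)
print(result.summary())  # pairwise mean differences and reject/retain flags
```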
Finally, the outcomes of the QAS questionnaires were also compared for the same academic years across all subjects, except for MENPP, as it has questionnaire results available only for the 2022–2023 academic year. These comparisons are presented in Table 8, Table 9, Table 10 and Table 11.
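The paper does not specify the statistical test behind these per-question comparisons; purely as an illustration, the sketch below uses a Mann–Whitney U test, one plausible choice for ordinal 1–5 Likert responses, to compare two academic years at the 0.05 significance level. The response lists are hypothetical.

```python
from scipy import stats

# Hypothetical 1-5 Likert responses to a single QAS question in two academic
# years (stand-ins only; the study's real per-question results are in Tables 8-11).
responses_year_a = [4, 5, 3, 4, 4, 5, 2, 4]   # e.g., a year without the incentive
responses_year_b = [5, 4, 4, 5, 3, 4, 5, 4]   # e.g., a year with the incentive

# Mann-Whitney U test: a non-parametric comparison suited to ordinal Likert data.
u_stat, p_value = stats.mannwhitneyu(responses_year_a, responses_year_b,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
print("significant difference" if p_value < 0.05 else "no significant difference")
```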
4. Discussion
Survey fatigue represents a facet of respondent burden, commonly defined as the time and effort required to participate in a survey [16]. Survey fatigue can arise in two scenarios: surveys with a substantial number of questions and the administration of consecutive surveys. In the former, it typically manifests as a decline in response rates towards the end of the survey. Conversely, in the latter, it is characterized by a reduction in participant engagement [10].
In our study, we observed fatigue primarily due to the consecutive administration of questionnaires to students within the same semester. This is evidenced by the decline in questionnaire participation over time, as shown in Figure 1. Interestingly, we did not observe fatigue attributable to the number of questions within the survey, as the response rates remained consistent across all 14 questions included in the QAS questionnaire. This finding aligns with previous research indicating that the length of the survey itself may not be the sole determinant of survey fatigue [10]. Instead, our results suggest that the timing and frequency of survey administration play a significant role in exacerbating respondent burden and subsequent survey fatigue among participants.
As can be seen in the results presented in Figure 1 and Table 4, the incentive system consistently leads to a significant increase in student participation in QAS questionnaires across all subjects. The effectiveness of incentivized surveys in increasing participation has been demonstrated in various non-educational environments [13,17,18]. Incentives such as gift vouchers, participation in raffles, and lotteries are commonly employed. However, it appears that the educational environment is not an exception, and a guaranteed prize [19,20,21] is more likely to be successful, as observed in the present study.
The following theories provide frameworks for understanding how incentives influence response rates in surveys by considering factors such as perceived value, costs, and rewards [22,23]:
1. Social exchange theory: This theory posits that individuals weigh the costs and benefits of participating in an activity. In the context of surveys, respondents evaluate the effort required to complete the survey against the perceived rewards or incentives offered. If the perceived benefits outweigh the costs, respondents are more likely to participate;
2. Leverage saliency theory: This theory suggests that respondents are more likely to participate in surveys when they perceive the incentives offered as valuable and relevant. In other words, incentives that are salient and meaningful to respondents are more likely to motivate participation;
3. Benefit–cost theory: This theory emphasizes the comparison between the benefits gained from participating in a survey (such as incentives or rewards) and the costs associated with participation (such as time and effort). If the perceived benefits exceed the perceived costs, respondents are more likely to participate.
The results from the short questionnaire administered in the last week of class indicate that students are very satisfied with the incentive system (Table 5). Overall, 88% of students acknowledged that the incentive in the form of an extra score played a significant role in motivating their participation in the QAS questionnaires. This positive response suggests that the extra score is an effective incentive, aligning with the idea discussed above that a guaranteed prize is a favorable motivating factor. Our interpretation aligns with the literature findings [13,22,23,24], indicating that using a guaranteed prize to incentivize questionnaire participation is consistent with leverage saliency theory. The extra score seems to exert more influence over students’ decisions to complete the survey than the time and effort required.
However, a potential concern arises: approximately 40% of students expressed concern about the compromise of anonymity within the incentive system. This concern appears to arise from the physical presence of the teacher during the verification of the questionnaire submission, even though the device screen only displays an acknowledgment upon survey submission. Addressing this concern could enhance the perceived confidentiality of the process. One solution to the anonymity issue would be for the university’s quality service to take on the responsibility of verifying students’ submissions. This could be achieved through an email notification sent to the teacher with a list of students who have successfully submitted the QAS questionnaire, eliminating the need for direct teacher–student interaction during the verification process. Finally, students reported completing an average of 3.8 questionnaires in the current semester, which is significantly lower than the expected range of 6 to 12 questionnaires. This suggests a notable decline in participation, reinforcing the notion of survey fatigue among students, in line with the consecutive administration of surveys [10]. Addressing this issue is crucial for maintaining the effectiveness of the evaluation process and ensuring a representative response rate.
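A minimal sketch of the submission-verification workflow suggested above (Python; all identifiers and the data layout are hypothetical, not an existing university system) illustrates the key design choice: the quality service keeps the link between student identity and submission, and only a bare participation list reaches the teacher.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    student_id: str        # identity, held by the quality service only
    responses: list[int]   # 1-5 Likert answers, never forwarded with the list

def participation_report(submissions):
    """IDs of students who submitted, decoupled from any response content."""
    return sorted(s.student_id for s in submissions)

# The quality service would e-mail only this list to the teacher, so the extra
# score can be awarded without the teacher observing the submission process.
subs = [Submission("A2341", [4, 5, 3]), Submission("B1109", [5, 4, 4])]
print(participation_report(subs))  # ['A2341', 'B1109']
```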
Regarding whether the incentive significantly influences the academic assessment process, the ANOVA results summarized in Table 6 indicate that the average grades across the five academic years did not differ significantly in the SOPC subject, suggesting that the incentive system had no significant impact on the academic assessment in that subject. In contrast, for the remaining four subjects, the F values exceed their critical values, indicating that the observed differences are unlikely to be attributable to random chance. This points to some factor affecting the average grade, but it does not directly identify the incentive as the cause of the observed effect.
The results of the pairwise multiple comparison tests for these four subjects (Table 7) are not conclusive. For IMS, Tukey’s test [15] suggested no significant difference for the last two academic years (with incentive); nevertheless, as shown in Figure 2, the decrease in average grade from 6.8 (2020–2021) to 5.8 (2021–2023) is not consistent with the effect of an extra score. For TCS, the pairwise multiple comparison test also indicated no significant difference for the last two academic years, one without the incentive and the other with it, suggesting no significant impact of the incentive system on the average final grade. Similar discrepancies were found in Tukey’s comparison tests across academic years for the MENPP and PASEL subjects. In conclusion, the non-significance of some of the Tukey’s post hoc tests does not definitively negate the global significance found in the ANOVAs for those four subjects; it merely suggests that there is no clear evidence of specific pairwise differences. The lack of clarity on the incentive’s effect on the final grade could be attributed to factors such as sample size, effect size, and heterogeneity within academic years. However, this does not necessarily imply that the incentive has no effect; rather, it highlights the complexity of interpreting results in the context of various statistical considerations. Therefore, the influence of the incentive on the final grade remains inconclusive based on the current analysis, emphasizing the need for further investigation or consideration of additional factors to draw more definitive conclusions.
Regarding whether the incentive significantly affects the outcomes of the official QAS questionnaires (Table 8, Table 9, Table 10 and Table 11), no significant impact on the results of any question within the QAS questionnaire was detected for the SOPC, IMS, and TCS subjects at a significance level of 0.05. For the PASEL subject, the questionnaire results, available for only the last two academic years, were analyzed, with notably low participation (7%; six student responses) in the 2021–2022 academic year. This could potentially explain why questions 5, 7, 9, and 13 appeared to be influenced by the incentive system, exhibiting an increase in their assessment when the incentive was applied.
Finally, when examining the crucial question, “14. I am satisfied with the teaching performance of this teacher”, it becomes evident that the incentive system does not affect students’ satisfaction with the teaching performance in any of the subjects. This finding aligns with the results of other authors who did not find significant differences in response distributions between groups that did and did not receive incentives [13,25,26,27], although it should be highlighted that those works are not from the education field.
5. Conclusions
The guaranteed prize incentive consisting of an extra score has proven to be highly effective in significantly boosting student participation rates in the QAS questionnaires used for teaching performance assessment at the University of Malaga. The notably positive feedback from students, with over 85% acknowledging that the incentive motivated their engagement, demonstrates its success in encouraging active participation. However, a noteworthy concern raised by 40% of students regarding the compromise of anonymity in the QAS questionnaire due to the implementation of the incentive must be addressed. This concern seems to arise from the physical presence of teachers during the verification of questionnaire submissions. We suggest that a third-party entity, such as the quality service of the university, take responsibility for verifying student submissions.
The emergence of survey fatigue is evident in the low number of QAS questionnaires that students reported completing per semester, aligning with the official participation rates published by the university. This highlights the importance of addressing this issue to maintain the effectiveness of the assessment process and ensure a representative response rate.
The analysis of the influence of the extra score incentive on final grades remains inconclusive. While no significant differences were found in the final grades for one subject across academic years both with and without incentives, the results for the remaining subjects did not provide a clear indication of the incentive’s impact. This underscores the complexity of interpreting the influence of incentives on academic outcomes and highlights the need for further investigation or consideration of additional factors for more definitive conclusions.
Regarding the QAS questionnaire results, the study indicates that the incentive system does not significantly affect the questionnaire outcomes across the subjects studied.