Digital Issue Movements: Political Repertoires and Drivers of Participation among Belgian Youth in the Context of ‘School Strike for Climate’
Round 1
Reviewer 1 Report
Manuscript review: Digital issue movements: political repertoires and drivers of participation among Belgian youth in the context of “School Strike for Climate”
I would like to start by congratulating the authors for choosing such an interesting topic, and the editor of Sustainability for the opportunity to review this paper.
The paper has potential and, in my opinion, it deserves publication. However, I have some concerns about the research methodology and the conclusion.
- Research methodology: I noticed that the data collection took approximately two years, which seems a little excessive relative to the analysis and writing of the article. Can you explain briefly why it took two years to prepare the article? Despite the two-year gap, I was pleased to see that your conceptual framework is strong, as you present a relevant and recent reference list.
- Conclusion: It may be useful to divide the last section into two sections with subsections, for example: 4. Discussion and 5. Conclusions (5.1. Contributions to Theory; 5.2. Managerial Contributions; 5.3. Limitations; 5.4. Suggestions for Future Research). However, please don't feel obligated to follow this suggestion; I just think it would improve the readability of the article.
Overall, my opinion is that you have a very good article. Congratulations on your research.
Author Response
Please see attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
This article asks how and why young people differ in the forms of collective action they participate in. To answer this question, it draws on survey questionnaires with 498 Belgian high school students who did or did not participate in School Strike for Climate actions, and identifies the different repertoires of collective action the respondents fall into. The authors find that students fell into four groups: one that engaged in both online and offline activism (all-around activists), one that engaged predominantly in offline activism (protesters), one that primarily engaged in online activism (private activists), and one that engaged minimally or not at all (non-engagers). The authors use several psychological predictors to understand the differences between these groups. They find that young women who cared about the environment and about persuading others were more likely to be protesters; young people who believed they could make a difference, did not believe social media made a difference, and wanted to inform others were more likely to be all-around activists; and young men who felt they could make a difference through social media were more likely to be private activists. The authors conclude that this contributes to our understanding of youth activism because it gives a clearer picture of the ways young people engage and of the role that social media (here, Facebook specifically) plays in the process.
My primary question concerns the motivation for the paper. The authors summarize the literature on the participatory turn in youth civic engagement and on how youth political participation has individualized, with an emphasis on issue-specific engagement and lifestyle politics. This is clearly written and summarizes the literature well, but we never get a sense of the puzzle, or of how the analysis of repertoires of participation (specifically, differential participation in online/offline activism) addresses it. I am not seeing how the paper is intervening in this literature. How does the authors' emergent typology of repertoires of participation contribute to the work discussed? The motivations for the independent variables could be further developed as well. Indeed, most of the theoretical explanation for gender, efficacy, and engagement is conveyed as the authors discuss the operationalization of the measures. Some scaffolding in the literature for why these may matter and how they contribute to the broader literature would be useful, especially considering that the authors focus almost entirely on psychological motivations (with the exception of gender and self-identified economic strain, on which there does not appear to be much variability). There are several well-cited studies (including some cited by the authors) that engage with more sociological elements such as peer participation (have you been invited to attend a protest?), political beliefs (liberal or conservative), and political engagement (e.g., reading the newspaper).
The authors also appear to be missing engagement with work that already asks questions related to these repertoires of participation. Brunsting and Postmes (2002) find that online and offline activism are driven by similar factors. Maher and Earl (2019) and Crossley (2015) find that the internet and social media are less a separate method of engagement and more a medium through which all engagement flows. The authors' results clearly fit with this broader work, and engaging with it directly would offer a clearer space to make a theoretical contribution.
Empirically, I think the Latent Class Analysis is well done, and I particularly appreciated the visualization in Figure 1. It helps show how the different groups look and conveys that there are meaningful distinctions between them. With regard to the model fit (Table 2), what counts as a meaningful difference between the class models? The BICs provided have two decimal places (I am guessing this is a typo?), but how meaningful is the difference between 7.52 and 7.56? What would the results look like if you produced five latent classes? Could we just produce a different version of Figure 1, or would it look significantly worse? Put another way, aside from having the lowest BIC score, what ensures that this class number is truly the best fit?
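To make the suggested robustness check concrete, the sketch below illustrates the general idea of enumerating candidate class solutions and comparing their BICs. This is not the authors' pipeline: it uses scikit-learn's GaussianMixture on simulated data as a stand-in for a latent class model (the paper's LCA on categorical participation items would use dedicated LCA software), and the variable names and data are purely hypothetical. The point is only that the comparison across class counts, not the lowest BIC alone, should motivate the chosen solution.

```python
# Hypothetical sketch: comparing candidate class solutions by BIC.
# GaussianMixture stands in for a latent class model here; the logic of
# model selection across numbers of classes is the same.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Simulated stand-in data: 498 respondents, 8 participation indicators.
X = rng.normal(size=(498, 8))

for n_classes in range(2, 7):
    gm = GaussianMixture(n_components=n_classes, n_init=10, random_state=0)
    gm.fit(X)
    # Lower BIC indicates a better fit-parsimony trade-off, but
    # near-identical BICs (e.g., 7.52 vs. 7.56) suggest the competing
    # solutions should also be compared on interpretability, class sizes,
    # and classification certainty before one is preferred.
    print(f"{n_classes} classes: BIC = {gm.bic(X):.2f}")
```

In practice, reporting such a comparison (alongside class profiles for the adjacent solutions) would answer the question of whether the four-class solution is substantively, and not just statistically, the best fit.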
I appreciate the lengths the authors went to in order to identify high school students for their survey. High school students are a hard-to-capture group and, as the authors note, an increasingly important part of the global climate movement. However, the approach the authors took relied on young people from the schools volunteering to participate. What incentives did the authors offer for participation? Did they see any selection bias in terms of the students who volunteered (were they demographically different from the schools they came from)? More detail on this point would help convey the robustness of the results.
Finally, why focus only on Facebook? I am not sure about the students in this study, but here in the United States students are more likely to use Instagram, TikTok, and Snapchat than Facebook. They also use these platforms to connect with people they interact with in person (so if you are trying to convince a friend to go to a protest, you are not just posting on their wall). Is there a possibility that the authors are missing engagement that is happening on other platforms (or that the students are answering based on the platforms they actually use)?
Author Response
Please see attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
The authors do a fine job of addressing the comments on the prior draft of the paper, and the paper should be published.