Article

Assessing Writing in French-as-a-Foreign-Language: Teacher Practices and Learner Uptake

Department of Teacher Education and School Research, Faculty of Educational Sciences, University of Oslo, 0317 Oslo, Norway
Languages 2021, 6(4), 210; https://doi.org/10.3390/languages6040210
Submission received: 20 September 2021 / Revised: 20 November 2021 / Accepted: 12 December 2021 / Published: 17 December 2021
(This article belongs to the Special Issue Recent Developments in Language Testing and Assessment)

Abstract:
Formative assessment and adaptive instruction have been focus areas in Norwegian educational policy for more than a decade. Writing instruction in the language subjects is no exception; assessment of writing should help learners improve their writing skills, and feedback must therefore be adapted to the individual learner’s needs. The present study aims to shed light on the relations between teacher feedback practices and learner uptake in French-as-a-foreign-language upper secondary classes in Norway. Using material from a longitudinal corpus of learner texts that includes teacher feedback (the TRAWL corpus), the study investigates the written feedback practices of three L3 French teachers and explores whether any signs of uptake can be identified in 27 learners’ new pieces of writing. The findings show that although the teachers followed best practice principles for formative assessment and written corrective feedback, less than half of the students showed any signs of uptake in subsequent pieces of writing. With one exception, these were students with an intermediate-high to very high proficiency level in French. The study emphasises the importance of strategies that could encourage learners to use the feedback they receive, thus moving the centre of attention from teacher practices to learner activities.

1. Introduction

Ever since the Norwegian Directorate for Education and Training launched the national Assessment for Learning programme in 2010, formative assessment and differentiated instruction have been emphasised in Norwegian educational policy (Norwegian Directorate for Education and Training 2018). The Assessment for Learning programme aimed at developing feedback practices and cultures that were conducive to learning, in all subjects and for all skills. Writing instruction in language subjects was also affected by this development. Assessment of writing could no longer be summative only; it also had to include comments and suggestions that would assist learners in further developing their skills. In order to achieve this goal, feedback had to be adapted to the individual learner’s needs.
However, what constitutes effective formative feedback in writing instruction is a hotly debated topic. Much-cited and particularly well known is the debate between Truscott and Ferris regarding written error correction, which started with Truscott’s claim that such feedback does not help learners improve their language (Truscott 1996). Ferris (1999) argued the opposite, and since then, several metastudies have tried to establish the effects of error correction on learner written language, often with inconclusive results (see Truscott 2016). Research from the English-as-a-second-language (L2) field in Norway can be interpreted as lending some support to Truscott’s claims: studies show that although teachers provide error correction on learner writing, this feedback is not always perceived by learners as formative in nature, and many learners admit that they do not make use of the feedback they receive (Burner 2016; Saliu-Abdulahi and Hellekjær 2020).
Most students in Norway also study a second foreign language (henceforth called L3), usually French, German, or Spanish. Very few studies have investigated feedback practices and written error correction as a formative assessment tool in these subjects, with Sandvik’s case study from an L3 German classroom as a notable exception (Sandvik 2011). The patterns in L3 teaching are not necessarily similar to those in L2. Previous research suggests that the L3 subject tends to be form focused, while the L2 subject is more content-oriented (Storch and Sato 2020; Vold, forthcoming). Due to this focus on form in foreign language (L3) classrooms, Li (2010) argued that learners in foreign language settings are more predisposed than learners in second language settings to pay attention to the corrections that they receive.
Since little research has been conducted on formative practices in the L3 field in Norway, there is a lack of knowledge not only of how students use teacher feedback, but also of what teacher feedback practices look like. The objective of the present study is twofold: (1) it investigates what characterises the written feedback practices of three L3 French teachers in Norwegian upper secondary schools, and (2) it explores whether any signs of uptake among the learners can be identified in new pieces of writing. The overarching aim is to shed light on the relations between assessment and learning and, thus, to generate knowledge that can contribute to improved feedback practices in foreign language teaching and learning.
Based on a digital and longitudinal corpus of learner texts that include teacher feedback, this study sought to answer the following research questions:
(1)
What do L3 French teachers’ feedback practices regarding learner written texts look like? Specifically:
(a)
Which aspects of the learners’ writing are commented upon by the teachers?
(b)
What types of errors are ignored and which are corrected, through what types of feedback, in which learners’ writings?
(c)
Do teachers tailor and adjust their comments and corrections to the learner’s proficiency level?
(2)
Following the teacher feedback received, are there any signs of uptake in the learners’ subsequent writings?
Proficiency level is just one of several factors that can influence a learner’s individual needs. I chose to focus on this factor for the following two reasons: (1) different proficiency levels are frequently mentioned by L3 teachers when asked about challenges in their everyday classroom work, and (2) the data revealed considerable variation with regard to proficiency levels, which made it interesting to look further into this matter.

2. The Educational Context of the Study

The present study was carried out among students in general studies programmes in Norwegian upper secondary schools (ages 16–18). These programmes include an obligatory L3 subject to qualify students for higher education. The most frequently offered languages are French, German, and Spanish. Most students already have a background in their chosen language from lower secondary school (level I) and continue the subject in upper secondary school at level II. Level II is taught over the first two years of upper secondary, with four teaching hours a week. Some schools offer an additional third year (level III) for students with a special interest in languages.
National policy documents state that the overarching approach to L3 teaching is communicative. The subject curriculum (a joint curriculum for all L3s) is inspired by the Common European Framework of Reference for Languages (Council of Europe 2001) and promotes practical language use and communicative competence (Ministry of Education and Research 2004; Norwegian Directorate for Education and Training 2020). Nevertheless, previous research from Norwegian schools suggests that L3 teaching in Norway is rather traditional and form-focused (Heimark 2013; Llovet Vilà 2018; Vold and Brkan 2020; Vold, forthcoming). Moreover, the presence of the French language in Norwegian society is low, so unless the learners deliberately seek French input, for example through web-based resources, they will rarely encounter French outside school.
In the national Assessment for Learning programme, the following four research-based principles for formative assessment were highlighted (Norwegian Directorate for Education and Training 2011): learners should get to know what is expected of them, they should receive feedback that states the quality of their work, they should receive advice on how to improve, and they should be given opportunities to evaluate their own work and learning process. The programme built on seminal research such as that of Black and Wiliam (1998) and Hattie and Timperley (2007), which stated that feedback should give students information about their performance in relation to specific and clear goals, and help them to identify and to fill the gap between their desired and actual level. Many teachers and school leaders have attended professional development courses within these areas over the last decade. Assessment for learning purposes is therefore central in the educational context under study. Apart from these overarching principles, which all teachers in Norwegian schools are expected to follow, teachers are free to choose the feedback formats they see fit, including how often they want to use grades in their assessment of homework and tests.

3. Literature Review

There is an extensive amount of research on written corrective feedback (WCF) in second and foreign language teaching and learning. Given the plethora of studies that exist, this literature review does not intend to be exhaustive, but aims to summarise some of the most relevant findings and claims from previous research. The summary will help identify areas in need of further research and serve as a background for the analysis and discussion of the findings from the present study.
One major question within research on WCF is whether it is effective, that is, whether it is beneficial for the acquisition process. In a seminal review article, Truscott (1996) concluded that written grammar correction was not helpful in the learning process and should be avoided. He suggested that content feedback and reading activities would be more effective methods for improving learners’ accuracy in writing. Ferris (1999) refuted Truscott’s claim by pointing to several weaknesses in his argument. She claimed that error correction would be effective (at least for many learners) if performed in line with the following best practice principles: error correction should be selective (i.e., teachers should prioritise certain areas and not attend to every single error in the learner’s writing), and it should be clear and accurate, so as not to mislead or confuse the learner. Moreover, indirect correction (i.e., signalling the presence of an error without providing the correct form) should be used for treatable errors (i.e., errors that can be easily corrected by the learner through applying a clear rule, e.g., verb endings), whereas direct correction (providing the correct form) should be used for untreatable errors, such as word choice and problems with sentence structures. For untreatable errors, there are no or only very complicated rules to consult, and the learner, therefore, cannot be expected to find the correct form without help.
The controversial error correction debate, which is still ongoing (Mohebbi 2021), generated a large number of studies that aimed to investigate the effects of different types of error correction in a methodologically sound way. Feedback scope (comprehensive vs. selective) is one aspect that was often studied (Lee 2020). Most of these studies lend support to the claim that selective (or ‘focused’) feedback is more effective (Ferris et al. 2013; Shintani et al. 2014). A recent example is Rahimi (2021), who showed that French-speaking ESL learners who received selective feedback outperformed those who received comprehensive feedback. Sheen and Ellis (2011), based on a synthesis of previous studies on this topic, also concluded that focused feedback is most effective, and that unfocused written feedback seems to have little effect on acquisition.
Ferris’ and many other researchers’ fondness for indirect correction has not seen the same support; studies show that learners often have limited understanding of indirect corrections (Zheng and Yu 2018) and that even presumably highly motivated learners, such as teacher-students, do not make the effort to learn how to amend their mistakes (Cabot 2019). Indirect corrections are often accompanied by a code system indicating the types of errors that have been made. Guénette and Jean (2012) argued that these code systems often are ineffective because learners struggle with understanding them. Their advice is that indirect correction should only be used for linguistic structures of low complexity.
Although studies that compare selective and comprehensive feedback conclude in favour of selective feedback, only a limited number of studies were able to prove a beneficial effect of error correction over no correction on long-lasting learning. Van Beuningen et al. (2012) showed that ESL learners who received comprehensive WCF demonstrated higher gains in linguistic accuracy after one and four weeks compared with learners who self-corrected or practiced writing without WCF. Sheen (2007) and Bitchener (2008) showed that focused corrective feedback on the use of English articles led to lasting gains in linguistic accuracy in L2 writing. However, a more recent study (Shintani and Ellis 2013) found no durable effect of WCF on the use of English articles.
In a review study of WCF research in ESL/EFL contexts, Chong (2019) found that studies aiming to investigate the effect of different types of error correction typically used a pretest-posttest design, focusing on one or a few specific linguistic forms. Although pretest-posttest design studies may be important in that they bring nuances to the polarised error correction debate, Ferris (2010) and Lee (2019, 2020) argued that such studies have limited practical implications because the settings in which they were conducted were so far from real-life classroom situations. On this issue, Ferris and Truscott seem to agree. In a recent interview, Truscott argued that the studies demonstrating positive evidence in favour of error correction are not valid for everyday classroom practices because they are artificial and narrow in focus (Mohebbi 2021). Lee (2020) recommended that more research on WCF should take place in authentic classrooms instead of in experimental or quasi-experimental settings. She calls for more classroom-based, longitudinal, qualitative studies of WCF—a call that the present study contributes to answering.
Although studies conducted in laboratory-like settings still dominate the field, there is a growing interest among researchers in studying naturally occurring teacher feedback and learners’ responses to this feedback. These studies typically include more types of feedback than only error correction, since teachers in authentic classrooms provide feedback on all areas of writing and not only linguistic errors (Lee 2020). Such studies have been more concerned with describing practices and attitudes than with testing feedback effects. Lee’s own studies show that teachers and learners tend to prefer comprehensive WCF, although they might not believe in its effect (Lee 2003, 2004). In a large survey on assessment practices, conducted among language teachers and students in four European countries, Vogt et al. (2020) found that the most widely used feedback mechanisms were marks (percentages, points, letter grades, etc.) and brief comments (e.g., ‘well done’). More detailed comments and indications of how learners could improve their learning were less often made. The authors concluded that such practices fail to treat feedback as a part of assessment that supports learning.
However, local feedback cultures differ considerably, and Nordic countries were not represented in Vogt et al.’s (2020) study. Horverak (2015) documented that feedback cultures in ESL classes in Norway were changing towards more formative and process-oriented practices. There are, nevertheless, challenges; teachers’ heavy workload may hinder such change (Saliu-Abdulahi et al. 2017) and good feedback practices do not automatically entail student learning, since learners do not necessarily make use of the feedback to improve their writing (Burner 2016; Cabot 2019; Saliu-Abdulahi and Hellekjær 2020).
Learners’ use of the feedback they receive is key to its effectiveness. Many researchers have pointed out the importance of learners making active use of WCF (see e.g., Bitchener 2021; Ellis 2009; Ellis and Shintani 2014, ch. 10; Zheng and Yu 2018). The effectiveness of selective feedback compared with comprehensive corrective feedback seems to be related to learners’ use of the corrections. Comprehensive feedback might be perceived as overwhelming (Burner 2016), which in turn makes learners feel unable to profit from it. Burner (2016) also showed that learners could be demotivated by exclusively negative feedback and may need positive feedback as well in order to focus on areas of improvement. Bruno and Santos (2010) stressed that feedback must be clear and concise in order for learners to be able to act on the suggestions made; however, selective and clear feedback is no guarantee that it will be used. Asking learners to revise their texts is one teacher technique that requires learners to engage with the feedback and that has shown some positive long-term effects (Rahimi 2021; Van Beuningen et al. 2012). However, in other studies, text revision did not lead to uptake in new pieces of writing (Truscott and Hsu 2008).
Studies on naturally occurring feedback have brought to the fore individual and contextual factors that might impact learners’ use of WCF—factors that context-independent, (quasi-)experimental studies are not always able to identify (Lee 2020). In a naturalistic case study of four Chinese EFL learners, Han and Hyland (2015) found that learner engagement with WCF varied between individual learners, depending on their beliefs and previous experiences. Zheng and Yu (2018) showed that low-proficiency ESL university students engaged less with feedback than students with higher proficiency levels. This mirrors a finding from Shintani and Ellis’ (2015) quasi-experimental correlational study, stating that students with a high language analytic ability benefited more from feedback than their peers with a lower language analytic ability, though the extra benefit was only present in writing performed shortly after the feedback and not in the long term.
Proficiency levels influence not only learner use, but also teacher and learner preferences. In a study of French L1 writing education, Doquet (2011) found that at lower levels, teachers tended to focus on micro-level issues, such as morphology and spelling, while at higher levels they focused on cohesion, coherence, and content. Similarly, studies show that second language learners prefer feedback on content and style, while foreign language learners appreciate feedback directed at grammar and lexicon (Hedgcock and Lefkowitz 1994). In a study of WCF in ESL teacher training, Cabot (2019) found that teacher-students preferred comprehensive feedback and that for this group, comprehensive feedback might actually be more fruitful than selective feedback. The teacher trainees were not necessarily very proficient ESL learners, but their learning goals were different from those of other learner groups; they wanted not only to improve their own writing, but also to be able to correct their own (future) learners’ writing. For this particular group of advanced learners, the finding that selective feedback is more beneficial than comprehensive feedback did not seem to apply.
To conclude, although studies on WCF have proliferated in the last decades, the field is largely dominated by quasi-experimental effect studies and there is a scarcity of studies on naturally occurring feedback. Most studies concern ESL/EFL and learners at university level. To the best of my knowledge, no studies have addressed French-as-a-foreign language in a Nordic school context. Contextual factors can considerably impact what sort of WCF is provided, what effect it has, and how it is used by the learners (Ferris 1999; Sheen and Ellis 2011; Shintani and Ellis 2015). By examining naturally occurring teacher feedback on written assignments given in real-life classroom situations, and potential signs of learner uptake following that feedback, I follow Lee’s (2020, p. 6) recommendation to ‘[home] in on the realities of the classroom’ in order to produce research with high ecological validity and utility for authentic classrooms.

4. Materials and Methods

4.1. Data Collection and Participants

The data are taken from the TRAWL (Tracking Written Learner Language) corpus, a digital collection of second and foreign language learner texts that includes written teacher feedback on the submitted assignments (TRAWL 2021). TRAWL collected naturally occurring written assignments among Norwegian primary and secondary school students in different geographical regions. Invitations to participate were sent to schools in the TRAWL team members’ network. Participants were not asked to perform specific writing tasks; instead, they were encouraged to share the assignments and texts that they, in any case, would produce. Questionnaires tapping into students’ language backgrounds were collected along with the texts. Teachers and students who agreed to join signed written consent forms. The collection and compilation of the TRAWL corpus followed national guidelines for research ethics (NESH [1993] 2016) and was approved by the Norwegian Centre for Research Data (project number 49447).
Data for the present study consist of the L3 French subcorpus of texts from upper secondary schools. At the time of data analysis, L3 French texts from six upper secondary schools had been collected and prepared for analysis. From three of these schools, teachers’ written feedback was collected along with the texts. The material was collected between 2015 and 2018. For the present study, I investigated selected assignments handed in by 27 L3 French learners in the three schools. The learners were in their second year of upper secondary school (age 17). Four of them attended an International Baccalaureate (IB) class, while the rest followed ordinary general studies programme classes. The three teachers all had many years of teaching experience from Norwegian schools. Two of them were French native speakers. A presentation of the class profiles follows below:
The IB class consisted of seven students, of which four had handed in all the selected assignments and, thus, were included in the study. None of these four students had parents with French as their L1, but three of them had a background from a French-speaking country: one had spent one year there in pre-school, and the other two had spent several school years in a French-speaking area. Only one of the four students had never lived in a French-speaking country. For the students who had spent parts of their childhood in a French-speaking country, French was more of a second than a foreign language. In a school setting, it might still be considered an L3, since the students also study Norwegian (L1) and English (L2). IB programmes generally have a good reputation and attract motivated and high-achieving students. The students follow an international language curriculum.
The first general studies class (GS-A) consisted of 14 students, of which 13 had handed in all three selected assignments and were included in the study. Although several of these 13 students were plurilingual in the sense that they had L1s other than or in addition to Norwegian, none of them listed French as their own or their parents’ L1. One student had lived in a French-speaking country for 1–3 months, while two had spent 2–4 weeks in a French-speaking country.
The second general studies class (GS-B) consisted of 17 students, of which ten had handed in all the three selected assignments and were included in the study. Some of these had a plurilingual background, but it did not include French, and none of them had stayed in a French-speaking country for more than 2–4 weeks. The school had a good reputation and attracted motivated and high-achieving students.

4.2. The Assignments

The TRAWL corpus is longitudinal, which made it possible to analyse texts written by the same learner at different stages of the learning process and to form hypotheses about the possible influence of the teacher’s feedback on the learner’s writing. The students handed in several written assignments during the school year. In order to limit the data to a manageable amount, I picked out three assignments for each class. The first text (T1) was handed in during the first semester. Since I wanted to look for signs of short-term as well as long-term uptake, I selected as the second text (T2) the next text that was handed in after the first one, whether this happened in the autumn or the spring semester. For the third text (T3), I selected one that was handed in later in the spring semester. The IB class wrote the texts as homework, while in the other two schools, the texts were written in a test setting. As is common in naturally occurring assignments, learners could choose between different tasks, although all tasks were about the same topic. The extent to which the teachers provided the learners with specific assessment criteria in written form varied between the assignments. Table 1 gives an overview of the selected assignments.

4.3. Data Analysis

4.3.1. RQ1: Teacher Feedback Practices

In order to answer RQ1 about teacher feedback practices, the learners’ proficiency levels were determined and the error types produced by the learners and the feedback types provided by the teachers were analysed using established taxonomies.
Learner proficiency level: I assigned a proficiency level (low, intermediate-low, intermediate-high, high, or very high) to each student, based on the linguistic quality of the first text (T1). I used criteria developed for the national written exam; the proficiency level labels should not be understood as referring to CEFR levels or any other international scales—instead, they reflect the levels that French subject teachers in Norwegian schools use when they assess their students. The criteria fitted well because most of the assignments given to the students were exam tasks from previous years. The criteria include aspects such as vocabulary range, grammatical accuracy, text coherence, and content. I took all but content into account, since I was primarily interested in linguistic mastery. The level ‘very high’ is not part of the written exam evaluation matrix, but was added to the study because the participants from the IB programme were very proficient users of French and their performances exceeded the scale developed for French-as-a-foreign-language learners in Norwegian schools.
Error types: Errors in second and foreign language writing can be classified in a number of ways (Dobrić and Sigott 2014; Thewissen 2015). I opted for a traditional taxonomy based on linguistic categories (cf. Dobrić and Sigott 2014; Thewissen 2015, p. 40), but I also included content issues in order to capture teachers’ feedback on content and not only language issues. Table 2 provides an overview of the taxonomy of error types:
Not all errors can be univocally classified. Prepositional errors fall somewhere in-between morphosyntactic and lexical errors (Thewissen 2015, p. 42). I classified errors related to short, frequent prepositions with a flexible meaning (à, de, sur, in English to/at/with/for/in, of/from/in/with/by, on/about/upon/out of) as morphosyntactic errors, while errors related to prepositions with a more precise meaning such as devant (in front of) and avant (before), were classified as lexical errors.
Errors could belong to several types simultaneously. For example, in a text on unemployment, a learner wrote:
Surtout sur spécial travail, ils venus concurrence
Especially on special work, they(masc.) come(past participle, masc. plur.) competition
The learner was probably trying to say that, especially with specialised jobs, people encounter hard competition. There are numerous errors in this sentence, but I will focus on the ones marked by the teacher. She marked ‘spécial travail’ and ‘ils venus concurrence’ as erroneous. In both cases, there are both a morphosyntactic error and a lexical error. In the first case, the adjective (spécial) is wrongly placed before the noun (syntactic error related to word order), but also, the wrong adjective is chosen (spécial instead of e.g., spécialisé). In the second case, there is a mix of error types related to the choice of verb (lexical) and verb conjugation (morphological).
I also took note of which errors were not corrected. In many of the texts, uncorrected errors were so numerous that I did not count and categorise each of them, but instead jotted down the dominating categories.
Feedback types: The types of written corrective feedback used by the teachers were categorised according to the taxonomy in Sheen and Ellis (2011, p. 594). This taxonomy distinguishes between direct and indirect corrections; direct corrections include provision of the correct form, whereas indirect corrections do not—they simply indicate that there is an error. Both types (direct and indirect) can come with or without metalinguistic information that gives a linguistic explanation, or some hint, as to what is wrong and why. Thus, the taxonomy has the following four main categories: direct with metalinguistic information, direct without metalinguistic information, indirect with metalinguistic information, and indirect without metalinguistic information. In addition to instances of corrective feedback, teacher comments on content, structure, coherence, and genre conventions were also included in the analysis, because as Lee (2020) pointed out, in authentic classrooms, students get feedback on all areas of writing and not only WCF.
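The four main categories form a 2 × 2 grid (directness × presence of metalinguistic information). As a rough illustration of how such annotations could be tagged for counting, consider the following sketch (hypothetical code, not part of the study’s method; the `Correction` class and its fields are my own invention):

```python
from dataclasses import dataclass

@dataclass
class Correction:
    """One teacher correction, tagged along the two dimensions of the taxonomy."""
    span: str             # the stretch of learner text the teacher marked
    direct: bool          # True if the correct form was provided
    metalinguistic: bool  # True if an explanation or hint was given

    @property
    def category(self) -> str:
        d = "direct" if self.direct else "indirect"
        m = "with" if self.metalinguistic else "without"
        return f"{d} {m} metalinguistic information"

# A crossed-out phrase replaced by the correct form, with no explanation:
c = Correction(span="spécial travail", direct=True, metalinguistic=False)
# c.category -> "direct without metalinguistic information"
```

Tagging each in-text correction this way would allow the counts reported later (e.g., in Table 3) to be aggregated per teacher and per category.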

4.3.2. RQ2: Learner Uptake

In order to answer RQ2, texts written at different stages by the same learner were compared to check for signs of uptake. When looking for signs of uptake, I did not assess progress in overall writing quality, since the assignments were not necessarily comparable in terms of, for example, genre and level of difficulty. Since the aim was to examine naturally occurring instruction, I could not plan for such comparability. Instead, I focused on the specific features highlighted by the teacher in T1 as areas in need of improvement. Highlighted in this context meant that the teacher mentioned the feature(s) in her end-comment to T1, in addition to commenting on the feature(s) in in-text comments. I considered as signs of uptake any improvement made for these features from T1 to T2 and (potentially) to T3. For a student who made many errors in, for example, verb conjugation, I would compare the percentage of correctly conjugated verb forms in T1 and T2. If there were signs of progress on this specific feature in T2, I would repeat the analysis for T3, to check for signs of sustained uptake. This procedure would give an indication of whether there had been any progress, but it does not take full account of the complexity of the language used; if many of the verb occurrences were third person singular forms of frequent verbs such as être and avoir, there would likely be fewer errors than if the student used a varied range of personal pronouns/noun phrases and verbs. Such considerations were taken into account when concluding (see Section 5.2).
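The comparison described above amounts to computing, for the targeted feature, the share of correct forms in each text and checking whether it rises from T1 to T2 and is sustained in T3. A minimal sketch, with hypothetical counts (the function and the numbers are illustrative, not taken from the study):

```python
def accuracy(correct: int, total: int) -> float:
    """Percentage of correctly produced forms (e.g., verb conjugations) in one text."""
    return 100 * correct / total if total else 0.0

# Hypothetical counts for one learner: (correct forms, total occurrences of the feature)
texts = {"T1": (12, 30), "T2": (18, 30), "T3": (20, 28)}

scores = {t: accuracy(c, n) for t, (c, n) in texts.items()}

# A rise from T1 to T2 counts as a sign of uptake;
# a rise maintained in T3 as a sign of sustained uptake.
improved_t2 = scores["T2"] > scores["T1"]
sustained_t3 = improved_t2 and scores["T3"] >= scores["T2"]
```

As noted above, such percentages are only indicative: they ignore the complexity of the forms attempted, which is why the qualitative considerations described in the text were also taken into account.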

5. Findings

The proficiency level analysis described above revealed that all four IB students had a very high proficiency level in French. In GS-A, one student was classified as high proficiency, four as intermediate-high, five as intermediate-low, and three as low proficiency. In GS-B, three students were classified as high proficiency, five as intermediate-high, one as intermediate-low, and one as low proficiency. The differences in proficiency levels between the two general studies groups were not unexpected, since the GS-B school has a very good reputation and is known for attracting high-achieving students, while GS-A is a more ordinary upper secondary school. Against this backdrop of proficiency levels, I will address RQ1 concerning teacher feedback practices before presenting the findings related to RQ2, based on an analysis of signs of uptake identified in each learner’s text production.

5.1. Teacher Feedback Practices

All three teachers used a combination of general end-comments and more detailed in-text comments and corrections. In addition, the IB teacher assessed the collected texts with a grade (see Note 1), using a point system divided into the following three areas: language, message, and format. Table 3 below provides an overview of the overall number of in-text corrections for each teacher and their distribution across CF types and error types. Below, I present the findings concerning teacher feedback practices related to the three subquestions presented above.
(a) Which aspects of the learners’ writing are commented upon by the teachers?
Overall, the teachers focused more on weaknesses and errors than on positive features of the learner texts. This was especially true for in-text comments; all the IB teacher’s in-text comments were corrections, teacher GS-A provided four praising in-text comments against 409 in-text corrections, and teacher GS-B provided 16 praising in-text comments and 421 corrections. Her praising in-text comments most often consisted of a smiley, and it was often unclear exactly what in the text the teacher liked, or whether the praise related to content or language. The end-comments, however, usually served to summarise both the strengths and weaknesses of the student’s text. Teacher GS-A displayed a systematic use of positive remarks in her end-comments, in that she always pointed to positive features of the learner text before moving on to areas in need of improvement. The IB teacher and teacher GS-B also highlighted positive features of the learner texts in their end-comments, but the positive remarks did not necessarily precede the more negative ones, as can be seen in the following example from teacher GS-B (translated from French by the author):
You need to work on prepositions. Have another look at the rules and do the exercises in the book once more. Your text is well structured and contains interesting information, but: many errors related to prepositions, verbs (you need to write in the future tense) and the definite article. Have another look at the rules in the book. Use more linking words.
Turning to corrections, the morphosyntactic category was by far the most prominent, especially in the GS-A class, where 84% of the teacher’s corrections related to morphosyntactic errors. For the other teachers, the corresponding percentages were 66% (GS-B) and 58% (IB). The high number of morphosyntactic corrections reflects the frequency of error types in the students’ texts and does not indicate an excessive emphasis on grammar on the teachers’ part. Moreover, these percentages include corrections that were double-coded as morphosyntactic + lexical, and morphosyntactic + orthographic (see Table 3 for details). The following is a typical example of a correction that targeted both morphosyntax and lexicon:
The learner wrote:
Je vais partir rond à la monde
‘I will go round(ADJ) in the(FEM) world(MASC)’
which the teacher corrected to ‘je vais faire le tour du monde’ (‘I will travel the world’). Through her correction, she provided a more appropriate lexical expression and she corrected the agreement error related to noun + article.
Lexical comments made up 22%, 16%, and 15% of the corrections in the GS-B, GS-A, and IB groups, respectively (including double-coded corrections). One example is when a student wrote ‘il y a beaucoup de commissions’ (‘there are many commissions’), which teacher GS-B corrected to ‘il y a beaucoup d’organisations internationales’ (‘there are many international organisations’). Corrections of spelling made up 11% and 10% of teachers GS-A’s and GS-B’s corrections, respectively, and 6% of the IB teacher’s corrections.
The IB teacher seemed to focus more on textual cohesion issues than the other two teachers. Overall, 16% of the IB teacher’s corrections addressed textual cohesion issues, making this the second most prominent category for this teacher. Corrections in this category were made in order to improve the flow of the text. For example, a learner wrote ‘Sa maman lui a dit de m’amener dans la chambre de Nicolas’ (‘His mother told him to take me to Nicolas’ room’). The IB teacher, realizing that the 3rd person dative pronoun lui (him) referred to Nicolas, corrected this to ‘Sa maman lui a dit de m’amener dans sa chambre’ (‘His mother told him to take me to his room’). Overall, 4% of the GS-B teacher’s corrections addressed textual cohesion issues, while the GS-A teacher did not comment upon such issues at all, although some of her morphosyntactic comments concerned subordinating conjunctions that bind main and subordinate clauses. In most of these cases, the teacher inserted a missing que (that) after verbs such as penser (think) and trouver (find).
Sociolinguistic issues made up 2% (IB) and 1% (GS-A and GS-B) of the corrections. In the GS-A class, most of these comments addressed the choice between tu and vous, whereas, in the other classes, they related to genre conventions and language register. One example is a learner who opened a letter with ‘Salut grand-mère et grand-père’ (‘Hi grandmother and grandfather’), which the GS-B teacher corrected to ‘Chers grand-mère et grand-père’ (‘Dear grandmother and grandfather’), an opening she found more appropriate for a letter. Corrections of punctuation were rare, but made up 5% of the IB teacher’s corrections.
The teachers rarely commented on content issues in their in-text comments, with the possible exception of teacher GS-B, who provided 11 in-text comments (3%) on content. At times, she even engaged emotionally in what the learner had written and added personal reactions to the content. In response to a student’s story about getting food poisoning after eating the typical French-Canadian dish poutine, the teacher reacted with the comment ‘what a catastrophe with the poutine!’ Such emotional engagement with the text content makes her appear as an authentic and interested reader and not simply as a teacher who reads to correct.
The end-comments, on the other hand, often included brief comments on content. These were usually general in nature and came after comments on language aspects, as illustrated by the following example from the IB teacher (translated from French by the author): ‘Good syntax, but too many sloppy errors. You should do a careful proofreading. The content is appropriate and fits well with the story. A bit short.’
(b) What types of errors are ignored and which are corrected, through what types of feedback and in which learners’ writings?
The IB teacher consistently used a comprehensive approach and corrected practically all errors in the learners’ texts. The GS-B teacher also tended to correct all errors; however, in two learners’ texts, she left many errors uncorrected. In terms of proficiency levels, these learners did not stand out from the others, as both had an intermediate level in French. The data cannot indicate whether the teacher deliberately chose to limit her error correction with these two students or whether it was simply a coincidence. The GS-A teacher differed from the other two in that many errors in her learners’ texts were left uncorrected. Accent errors, which are frequent in French learner writing, were hardly ever corrected. Apart from that, no specific error types were systematically left uncorrected. Teacher GS-A’s choice to leave many errors uncorrected thus seems to be a question of quantity rather than of error type: she did not focus her error correction on specific linguistic categories.
The teachers differed in their preferred feedback type. The IB teacher used both direct (48%, n = 31) and indirect (52%, n = 33) corrections. Some of these came with a metalinguistic explanation or hint (five of the direct corrections and ten of the indirect ones). A very clear pattern emerged in how these feedback categories were distributed across error types: the teacher tended to provide indirect feedback for grammar and spelling errors, and direct feedback for lexical errors and issues linked to textual coherence, such as variation between pronouns and noun phrases. Direct corrections were also used for prepositional errors. The GS-A teacher used mostly indirect corrections (71%, n = 292), but also a fair number of direct corrections (29%, n = 117). For indirect correction, she used a code system in which different error types were marked with different colours. This system ensured that even for indirect corrections, the students received a hint in the form of a metalinguistic cue indicating what was wrong. Most of the teacher’s corrections (68%, n = 278) thus fell within the category of indirect correction with metalinguistic information; this metalinguistic information was, however, not very specific. The code system comprised the following five broad categories: agreement and conjugation errors, spelling errors, lexical errors, incomplete sentences, and content issues. In a few cases (n = 8), additional metalinguistic information was given. The direct corrections usually came without any additional metalinguistic information, although some of them were colour-marked and thus included a metalinguistic hint, and some came with a metalinguistic comment. For example, when a learner wrote
Je vais à lycée
‘I go to upper secondary school(INDEF)’
the teacher corrected à to au (à + the definite article le) and added a metalinguistic explanation in the margin (‘à + le = au’). All students received a mixture of direct and indirect corrections, and there was no clear pattern as to what type of errors received what type of feedback. Teacher GS-B used primarily direct corrections. A total of 374 (89%) of the in-text corrections were direct, and seven of these came with a metalinguistic explanation. Only 47 (11%) were indirect and, with one exception, they did not come with metalinguistic hints or explanations.
(c) Do teachers tailor and adjust their comments and corrections to the learner’s proficiency level?
The IB teacher’s students all had a very high proficiency level in French. Teachers GS-A and GS-B had learners of different proficiency levels, but their feedback patterns tended to be similar across all students; teacher GS-A left many errors uncorrected, independent of the student’s proficiency level, while teacher GS-B tended to correct all errors for all learners, apart from the two exceptions mentioned above. Both teachers structured their end-comments in a similar way for all learners, regardless of proficiency level; teacher GS-A systematically highlighted positive features first and then focused on areas in need of improvement. Her comments had this same structure regardless of the quality of the learner text. For example, she wrote the following comment to a relatively low-proficiency student whose text content was hard to understand (translated from Norwegian by the author): ‘You have written a detailed text with some relevant content. Good that you divide it in paragraphs. Unfortunately, there are some sentences that communicate poorly, so you should go through these thoroughly. Feel free to ask. It is also important that you remember to use the definite/indefinite article in front of nouns’. Teacher GS-B also most often summarised the strengths and weaknesses of the text in her end-comments. Four learners received only negative feedback in the end-comments, but these learners had different proficiency levels. The teacher’s praising in-text comments were also spread over learners of different proficiency levels.
Although feedback was structured in similar ways across all proficiency levels, all three teachers demonstrated a differentiated approach in the sense that each learner’s most frequent or most prominent errors were corrected, no matter what type of errors they were. The teachers had not selected structures and phenomena to correct beforehand, but rather made this choice based on each individual’s error patterns. Consequently, the percentage of different error types corrected varied between learner texts; for example, the percentage of morphosyntactic corrections in the IB teacher’s in-text comments ranged from 33% for one student to 73% for another.
In addition, teacher GS-B differentiated her comments in that she often provided concrete advice concerning how the learners could work and what they could do to improve. One example is when she encouraged a student to make a list of basic adjectives along with more sophisticated synonyms and their antonyms, in order to improve vocabulary range and lexical variation. This piece of advice was given to a high-proficiency learner, but lower-proficiency learners were also provided with concrete ideas for how to improve their language.
Moreover, learners were sometimes explicitly asked to correct errors in the text (IB) or to rewrite the text in accordance with the teacher’s advice (GS-B). Such prompts for further work also constitute a way of tailoring feedback to the individual learner. Unfortunately, the data does not allow any conclusions to be drawn as regards the extent to which learners were expected to revise their texts.

5.2. Learner Signs of Uptake

The results from the analysis of signs of short-term and sustained uptake are presented in Table 4. As the table shows, twelve students showed clear signs of uptake in T2, and seven of these also showed clear signs of sustained uptake (T3). For four students, it was unclear whether uptake had taken place, mainly because T2 and T3 contained too few occurrences of the highlighted features to allow a conclusion. For the remaining ten students, no signs of uptake could be identified in T2. One of them, however, showed considerable improvement on the highlighted features from T1 to T3, even though no improvement was visible from T1 to T2.
All but one of the students who showed clear signs of uptake had an intermediate-high to very high proficiency level in French. One student had a low proficiency level; he made progress in that he learnt the difference between the personal pronouns nous (we) and vous (you, plural), which he consistently mixed up in T1 but used correctly in T2 and T3.
There was thus a considerable difference between the learners with a high or intermediate-high proficiency level on the one hand, and those with a low or intermediate-low proficiency level on the other. All learners with a high proficiency level showed some signs of uptake, as did most (but not all) of those with an intermediate-high level. Among students with a low or intermediate-low proficiency level, only one student showed signs of uptake.
Similarly, there was a considerable difference between the classes when it came to signs of uptake. In GS-B, seven out of ten students showed signs of short-term and sustained uptake, while in GS-A, only two of thirteen students showed any signs of uptake at all. In the IB group, three students showed signs of short-term uptake, while for the fourth, no areas for improvement had been identified. There were few clear signs of sustained uptake in this group, which could be due to the limited material. In some cases, it was due to a ‘back and forth’ movement in the learning process. This was the case for IB07, who struggled with the alternation between two past tenses in French, the passé composé (PC) and the imparfait (IMP). In T1, she demonstrated an excessive use of the IMP, which was commented upon by the teacher. In T2, the student followed up on the teacher’s comments from T1, although in a somewhat exaggerated way, leading to tendencies of hypercorrection and overuse of the PC instead. The teacher commented on this pattern in her end-comments to T2, and in T3, the student was back to her exaggerated use of the IMP. Although the student seemingly ended up where she started, this trial-and-error procedure can be interpreted as demonstrating learning in progress and a sign that uptake has taken place. It is well known from SLA research that the learning process is not a straight line and that what looks like a setback might be a sign of learning (Selinker 1972; Gass and Selinker 2001).

6. Discussion

This study set out to investigate teacher feedback practices and learner uptake in L3 French writing instruction in Norwegian upper secondary schools. The three teachers who volunteered to join the project were all highly experienced, and were informed that the project would investigate the development of students’ written skills and the role that teacher feedback might play in this development. The above presentation of these teachers’ feedback practices therefore describes what L3 French written feedback practices may look like, rather than what such practices usually look like.
The three teachers demonstrated differing feedback practices, but there were also similarities: they all commented on several aspects of the learners’ texts, and none of them focused on error correction only. They gave quite detailed comments and corrections, in contrast to the teachers in Vogt et al.’s (2020) study. They commented on positive as well as negative aspects of the texts, although the negative feedback was more extensive. The variation between these three teachers’ feedback practices might be explained by individual styles, but could also be the result of the different learner groups. Although they all taught at upper secondary level in Norwegian schools, their teaching contexts were quite different; while the IB teacher taught a small group of high-achieving students with a high proficiency level in French, teacher GS-A taught a larger and more heterogeneous group of students attending a more average-rated school. Therefore, the fact that the IB teacher provided comprehensive feedback while teacher GS-A left a large number of errors uncorrected more likely has to do with class proficiency level than with different teacher styles. As Lee (2020) pointed out, although research indicates that selective feedback is overall more effective than comprehensive feedback, the latter type might be the best option for highly proficient learners.
Another case in point is that teacher GS-A did not focus on textual cohesion issues, while the other two did, at least to a certain extent. Again, this might have to do with the students’ proficiency levels. Even in the L1 subject, it is common to focus more on micro-level issues at lower levels (Doquet 2011). While this choice is sometimes criticised, there might be good reasons for applying it in L3 teaching, since learners need to be able to form sentences and phrases before they can construct longer and coherent texts.
The three teachers’ practices, in particular when taken together, may serve as examples of good practice; the IB teacher provided direct corrections for lexical errors, prepositional errors, and issues linked to textual coherence, and indirect feedback for grammar and spelling errors. This pattern is in line with research-based principles for error correction, which state that direct feedback should be used with complex and untreatable errors, while indirect corrections should be restricted to treatable errors (Ferris 1999; Guénette and Jean 2012). The GS-A teacher always started by commenting on the positive aspects of the learner text, a widely accepted pedagogical principle for formative assessment (Burner 2016). The GS-B teacher provided many of the learners with concrete advice on how to make further progress in specific areas. In offering such advice, she followed best practice principles for formative assessment (Norwegian Directorate for Education and Training 2011). Teacher GS-B also sporadically acted as an interested, authentic reader, expressing personal reactions to text content. Ferris (1999) argued that writing for an authentic readership promotes learners’ use of feedback.
There were also indications that the teachers adapted their feedback to the teaching context, for example in that comprehensive feedback was used for high-proficiency learners (IB), while more selective feedback was used for learners at lower levels (GS-A). In the IB class, students had a very high proficiency level in French and made very few mistakes, which made it feasible to comment upon all of them without overloading the learners. In the GS-A class, it was not possible to correct all errors, and a selection was therefore required. However, the selection did not seem to be based on error types, but rather on the sheer quantity of corrected errors. The fact that some errors of one type were corrected, while others of the same type were not, could possibly lead to confusion among the students.
On the other hand, feedback was tailored to individual students such that the teachers selected areas for improvement and provided corrections based on individual students’ needs. This is often a better solution than selecting error types to correct for the entire class beforehand, because students will differ with regard to which linguistic phenomena they need and are ready to work with.
Regarding learner uptake, the difference between the classes (IB and GS-B on the one hand and GS-A on the other) reflects a difference in proficiency levels, since the GS-B and IB schools had more high-proficiency students than the GS-A school. The design of the present study does not allow for any conclusions concerning correlations between factors, but based on the above analysis of the teachers’ feedback practices, it is reasonable to assume that the difference in signs of uptake has more to do with the learners’ proficiency levels than with the teachers’ feedback practices. Twelve out of 27 students showed signs of uptake in new pieces of writing, and all but one of these had an intermediate-high to very high proficiency level in French. For the IB students, another possible explanation is that they had better opportunities to use the feedback, since they wrote their texts as homework and not in a test setting as the other two classes did (cf. Table 1). For the GS-A and GS-B classes, however, the conditions were similar, and students in the relatively high-achieving GS-B class showed more signs of uptake than those in the more average GS-A class.
The finding that proficiency level is linked to signs of uptake mirrors conclusions drawn in previous studies, such as Zheng and Yu (2018) and Shintani and Ellis (2015). Again, the current explorative and qualitative study was not designed to establish causal relations between feedback and uptake, and it is possible that these learners would have made similar progress even without feedback. However, when a student demonstrated progress from T1 to T2 on the features specifically highlighted by the teacher in T1, it is reasonable to assume that this progress can at least partly be explained by the feedback received. Thus, the feedback seems to have been used successfully by almost half of the students, whereas the remaining students did not appear to profit from it.
A key question, then, is why some students use the feedback they receive while others do not. Learners with a high proficiency level can more easily understand the corrections. Previous research has stated that teacher feedback should be clear, concise, and easy to understand (Bruno and Santos 2010; Ferris 1999), but in order to motivate all students to actually act on the feedback, teachers also need techniques to involve students in feedback work. The GS-A teacher’s code system for providing metalinguistic information related to indirect error correction might be an example of feedback that is too complicated for some learners (Guénette and Jean 2012), but it represents an attempt to actively engage the learners in using the feedback they have received. Another way of encouraging learners to use the feedback is to ask for revisions, as teacher GS-B sometimes did. However, revision does not necessarily entail learning, because learners might revise more or less mechanically without really understanding the linguistic phenomena in question (Truscott 2016).
When many studies fail to find any relation between WCF and uptake in subsequent pieces of writing, the reason might not be that WCF is inherently ineffective, but that it is effective only when used by the learner (Ellis 2009). Unfortunately, feedback is often left unused, especially, it seems, by students with lower proficiency levels and lower levels of motivation (Zheng and Yu 2018). Students’ levels of motivation were not measured in this study, and it might be that the high-proficiency students were also the most motivated, and therefore used, and thus learnt from, the corrective feedback. Since high levels of motivation, dutifulness, and proficiency often go hand in hand (though definitely not always), it is hard to know which factors cause which effects, and such causal relations are probably not unidirectional. Quantitative studies using advanced statistical analyses could help identify patterns of causes and effects.
The main issue for classroom teachers, however, seems to be to find ways of motivating students to use the feedback. Indirect feedback with code systems and requests for revisions do not always work as intended (Guénette and Jean 2012; Truscott and Hsu 2008) and teachers therefore need other techniques that can push all learners to engage with the feedback. One example of such a technique is when teachers ask their students to create exercises for themselves that serve to test a phenomenon they did not master well in a written assignment. The teacher reviews the exercises and the student’s performance on them to verify that the student has understood. This type of follow-up work is fine-tuned in the sense that it is tailored to the individual learner’s needs, and it ensures students’ active engagement with the feedback, which should lead not only to noticing, but also a deeper cognitive processing of the feedback. Previous research shows, however, that there are few such opportunities for learners in Norwegian classrooms to engage with the feedback (Burner 2016; Saliu-Abdulahi 2019). Setting aside enough time in the classroom to let learners work with feedback in such ways might be essential for success if we also want to reach students of lower proficiency levels.

7. Conclusions

The present study showed that although teachers provided detailed feedback on learners’ texts, with few exceptions only the more proficient learners showed signs of uptake in subsequent pieces of writing. The study supports previous research stating that learners’ use of feedback is key to feedback effect (Bitchener 2021; Ellis 2009). Yet, learners’ use of feedback has received little attention from the research field compared with the ample focus on feedback types and their effects. Although the question of which feedback types ‘work best’ is interesting from a language acquisition perspective, the answer will vary depending on a large range of contextual and individual factors, such as previous language learning experience, proficiency level, age, and motivation. For the practice field, research that focuses on how to ensure that students make use of the feedback they receive might be even more relevant. More studies on naturally occurring feedback, and on students’ motivations for using or not using this feedback, could provide the field with useful answers in this respect, and help move the centre of attention from the teachers and their practices towards the learners, their actions, and the effects of these actions.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the Norwegian national guidelines for research ethics (NESH [1993] 2016), and approved by the Norwegian Centre for Research Data (project number 49447).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Anonymized data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1. This is not to say that teachers GS-A and GS-B never graded texts, but the three assignments selected for this study did not receive a grade.
2. Filled in if ‘yes’ or ‘unclear’ in the previous column.

References

1. Bitchener, John. 2008. Evidence in support of written corrective feedback. Journal of Second Language Writing 17: 102–18.
2. Bitchener, John. 2021. Written corrective feedback. In The Cambridge Handbook of Corrective Feedback in Second Language Learning and Teaching. Edited by Hossein Nassaji and Eva Kartchava. Cambridge: Cambridge University Press, pp. 207–25.
3. Black, Paul, and Dylan Wiliam. 1998. Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice 5: 7–74.
4. Bruno, Inês, and Leonor Santos. 2010. Written comments as a form of feedback. Studies in Educational Evaluation 36: 111–20.
5. Burner, Tony. 2016. Formative assessment of writing in English as a foreign language. Scandinavian Journal of Educational Research 60: 626–48.
6. Cabot, Michel. 2019. Unpacking Meaningful Grammar Feedback: An Analysis of EFL Students’ Feedback Preferences and Learning Moments. Journal of Linguistics and Language Teaching 10: 133–55.
7. Chong, Sin Wang. 2019. A systematic review of written corrective feedback research in ESL/EFL contexts. Language Education and Assessment 2: 70–95.
8. Council of Europe. 2001. The Common European Framework of Reference for Languages. Cambridge: Cambridge University Press.
9. Dobrić, Nikola, and Guenther Sigott. 2014. Towards an error taxonomy for student writing. Zeitschrift für Interkulturellen Fremdsprachenunterricht 19: 111–18. Available online: https://www.researchgate.net/publication/266526852_Towards_an_Error_Taxonomy_for_Student_Writing (accessed on 20 November 2021).
10. Doquet, Claire. 2011. L’écriture débutante, pratiques scripturales à l’école élémentaire. [Beginning Writing: Writing Practices in Elementary School]. Rennes: Presses universitaires de Rennes.
11. Ellis, Rod, and Natsuko Shintani. 2014. Exploring Language Pedagogy through Second Language Acquisition Research. London: Routledge.
12. Ellis, Rod. 2009. A typology of written corrective feedback types. ELT Journal 63: 97–107.
13. Ferris, Dana. 1999. The case for grammar correction in L2 writing classes: A response to Truscott (1996). Journal of Second Language Writing 8: 1–11.
14. Ferris, Dana. 2010. Second language writing research and written corrective feedback in SLA. Studies in Second Language Acquisition 32: 181–201.
15. Ferris, Dana, Hsiang Liu, Aparna Sinha, and Manuel Senna. 2013. Written corrective feedback for individual L2 writers. Journal of Second Language Writing 3: 307–29.
16. Gass, Susan M., and Larry Selinker. 2001. Second Language Acquisition: An Introductory Course, 2nd ed. Mahwah: Lawrence Erlbaum Associates.
17. Guénette, Danielle, and Gladys Jean. 2012. Les erreurs linguistiques des apprenants en langue seconde: Quoi corriger, et comment le faire? [Second Language Learners’ Linguistic Errors: What to Correct, and How?]. Correspondance 18: 15–19.
18. Han, Ye, and Fiona Hyland. 2015. Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. Journal of Second Language Writing 30: 31–44.
19. Hattie, John, and Helen Timperley. 2007. The power of feedback. Review of Educational Research 77: 81–112.
20. Hedgcock, John, and Natalie Lefkowitz. 1994. Feedback on feedback: Assessing learner receptivity to teacher response in L2 composing. Journal of Second Language Writing 3: 141–63.
21. Heimark, Gunn Elin. 2013. Praktisk tilnærming i teori og praksis. Ungdomsskolelæreres forståelse av en praktisk tilnærming i fremmedspråksundervisningen. [A Practical Approach in Theory and in Practice: Lower Secondary School Teachers’ Understanding of a Practical Approach to the Teaching of Foreign Languages]. Ph.D. thesis, University of Oslo, Oslo, Norway.
22. Horverak, May Olaug. 2015. English Writing Instruction in Norwegian Upper Secondary Schools. Acta Didactica Norge 9: Art. 11.
23. Lee, Icy. 2003. L2 writing teachers’ perspectives, practices and problems regarding error feedback. Assessing Writing 8: 216–37.
24. Lee, Icy. 2004. Error correction in L2 secondary writing classrooms: The case of Hong Kong. Journal of Second Language Writing 13: 285–312.
25. Lee, Icy. 2019. Teachers’ frequently asked questions about focused written corrective feedback. TESOL Journal 10: e00427.
  26. Lee, Icy. 2020. Utility of focused/comprehensive written corrective feedback research for authentic L2 writing classrooms. Journal of Second Language Writing 49: 1–7. [Google Scholar] [CrossRef]
  27. Li, Shaofeng. 2010. The effectiveness of corrective feedback in SLA: A metaanalysis. Language Learning 60: 309–65. [Google Scholar] [CrossRef]
  28. Llovet Vilà, Xavier. 2018. Language Teacher Cognition and Curriculum Reform in Norway: Oral Skill Development in the Spanish Classroom. Acta Didactica Norge 12: 12–Art. 12. [Google Scholar] [CrossRef]
  29. Ministry of Education and Research. 2004. Kultur for læring [Culture for Learning]. White Paper 030, 2003–2004. Oslo: Ministry of Education and Research. [Google Scholar]
  30. Mohebbi, Hassan. 2021. 25 years on, the written error correction debate continues: An interview with John Truscott. Asian-Pacific Journal of Second and Foreign Language Education 6: 1–8. [Google Scholar] [CrossRef]
  31. NESH (National Committee for Research Ethics in the Social Sciences and the Humanities). 2016. Guidelines for Research Ethics in the Social Sciences, Humanities, Law and Theology, 4th ed. Oslo: NESH. First Published 1993. [Google Scholar]
  32. Norwegian Directorate for Education and Training. 2018. Observations on the National Assessment for Learning Programme (2010–2018). Final Report. Oslo: Norwegian Directorate for Education and Training. [Google Scholar]
  33. Norwegian Directorate for Education and Training. 2020. Subject Curriculum for Foreign Languages. Oslo: Norwegian Directorate for Education and Training. [Google Scholar]
  34. Norwegian Directorate for Education and Training. 2011. Grunnlagsdokument. Satsingen “Vurdering for Læring” 2010–2014. Oslo: Norwegian Directorate for Education and Training. [Google Scholar]
  35. Rahimi, Mohammad. 2021. A comparative study of the impact of focused vs. comprehensive corrective feedback and revision on ESL learners’ writing accuracy and quality. Language Teaching Research 25: 687–710. [Google Scholar] [CrossRef]
  36. Saliu-Abdulahi, Drita, and Glenn Ole Hellekjær. 2020. Upper secondary school students’ perceptions of and experiences with feedback in English writing instruction. Acta Didactica Norden 14: Art. 6. [Google Scholar] [CrossRef]
  37. Saliu-Abdulahi, Drita, Glenn Ole Hellekjær, and Frøydis Hertzberg. 2017. Teachers’ (formative) feedback practices in EFL writing classes in Norway. Journal of Response to Writing 3: 31–55. [Google Scholar]
  38. Saliu-Abdulahi, Drita. 2019. Teacher and Student Perceptions of Current Feedback Practices in English Writing Instruction. Ph.D. thesis, University of Oslo, Oslo, Norway. [Google Scholar]
  39. Sandvik, Lise. 2011. Via mål til Mening. En Studie av Skriving og Vurderingskultur i Grunnskolens Tysk-Undervisning. Doktoravhandling. Trondheim: NTNU. [Google Scholar]
  40. Selinker, Larry. 1972. Interlanguage. International Review of Applied Linguistics in Language Teaching 10: 201–31. [Google Scholar] [CrossRef]
  41. Sheen, Younghee, and Rod Ellis. 2011. Corrective feedback in language teaching. In Handbook of Research in Second Language Teaching and Learning Volume II. Edited by Eli Hinkel. New York: Routledge, pp. 593–610. [Google Scholar]
  42. Sheen, Younghee. 2007. The effect of focused written corrective feedback and language aptitude on ESL’ learners acquisition of articles. TESOL Quarterly 41: 255–83. [Google Scholar] [CrossRef]
  43. Shintani, Natsuko, and Rod Ellis. 2013. The comparative effect of direct written corrective feedback and metalinguistic explanation on learners’ explicit and implicit knowledge of the English indefinite article. Journal of Second Language Writing 22: 286–306. [Google Scholar] [CrossRef] [Green Version]
  44. Shintani, Natsuko, and Rod Ellis. 2015. Does language analytical ability mediate the effect of written feedback on grammatical accuracy in second language writing? System 49: 110–19. [Google Scholar] [CrossRef] [Green Version]
  45. Shintani, Natsuko, Rod Ellis, and Wataru Suzuki. 2014. Effects of written feedback and revision on learners’ accuracy in using two English grammatical structures. Language Learning 64: 103–31. [Google Scholar] [CrossRef]
  46. Storch, Neomi, and Masatoshi Sato. 2020. Comparing the same task in ESL vs. EFL learning contexts: An activity theory perspective. International Journal of Applied Linguistics 30: 50–69. [Google Scholar] [CrossRef]
  47. Thewissen, Jennifer. 2015. Accuracy across Proficiency Levels. A Learner Corpus Approach. Louvain-la-Neuve: Presses Universitaires de Louvain. [Google Scholar]
  48. TRAWL. 2021. TRAWL: Tracking Written Learner Language. Available online: https://www.hf.uio.no/ilos/english/research/groups/trawl-tracking-written-learner-language/ (accessed on 20 November 2021).
  49. Truscott, John, and Angela Yi-ping Hsu. 2008. Error correction, revision, and learning. Journal of Second Language Writing 17: 292–305. [Google Scholar] [CrossRef]
  50. Truscott, John. 1996. The case against grammar correction in L2 writing classes. Language Learning 46: 327–69. [Google Scholar] [CrossRef]
  51. Truscott, John. 2016. The effectiveness of error correction: Why do meta-analytic reviews produce such different answers? In Epoch Making in English Teaching and Learning: A Special Monograph for Celebration of ETA-ROC’s 25th Anniversary. Edited by Yiu-nam Leung. Taipei: Crane, pp. 129–41. [Google Scholar]
  52. Van Beuningen, Catherine G., Nivja H. De Jong, and Folkert Kuiken. 2012. Evidence on the effectiveness of comprehensive error correction in second language writing. Language Learning 62: 1–41. [Google Scholar] [CrossRef]
  53. Vogt, Karin, Dina Tsagari, Ildikó Csépes, Anthony Green, and Nicos Sifakis. 2020. Linking learners’ perspective on language assessment practices to Teachers’ Assessment Literacy Enhancement (TALE): Insights from four European countries. Language Assessment Quarterly 17: 410–33. [Google Scholar] [CrossRef]
  54. Vold, Eva Thue. Learner spoken output and teacher response in second versus foreign language classrooms. Language Teaching Research. forthcoming. [CrossRef]
  55. Vold, Eva Thue, and Altijana Brkan. 2020. Classroom discourse in lower secondary French-as-a-foreign-language classes in Norway: Amounts and contexts of first and target language use. System 93: 102309. [Google Scholar] [CrossRef]
  56. Zheng, Yao, and Shulin Yu. 2018. Student engagement with teacher written corrective feedback in EFL writing: A case study of Chinese lower-proficiency students. Assessing Writing 37: 13–24. [Google Scholar] [CrossRef]
Table 1. Overview of the assignments that constitute the data material.
The IB class

T1 (11.15.15, homework). A choice between four assignments related to the analysis of a short story: (1) a diary page belonging to a main character; (2) a note sent from one main character to another; (3) a dialogue between main characters; (4) an analysis of the humor in the text. Criteria: language, message and format. Level descriptors focus on vocabulary range, sentence structure and number of errors (language); coherent and effective communication of relevant ideas (message); and text type conventions (format). No specific word limit is mentioned.

T2 (11.29.15, homework). A choice between four assignments related to a film: (1) a newspaper article on a central film scene; (2) a diary page belonging to a main character; (3) a TV interview with the actor playing the main character; (4) a letter from a character describing a central scene. Criteria: same as for T1; 500–600 words.

T3 (01.25.16, homework). A choice between six assignments related to a film: (1) a letter from one character to his wife; (2) an imagined scene between characters; (3) a letter from a character to his family; (4) a reflection text on how the film might influence modern society; (5) a letter to the film director; (6) a text describing a character’s imagined future. Criteria: same as for T1 and T2; approximately 500 words.

General studies class A

T1 (12.02.16, test). A choice between three assignments related to unemployment: (1) a text on how unemployment influences family life; (2) a text about being young and unemployed; (3) a dialogue from a job interview. Criteria: no written criteria provided; a minimum of 250 words.

T2 (02.28.17, test). A choice between three assignments related to French history: (1) a letter to Jeanne d’Arc; (2) a narrative text describing a visit to Versailles under Louis XIV; (3) a personal reflective text on important French historical events. Criteria: a minimum of 250 words; knowledge of French history; expressions of experiences, opinions and emotions; varied and purposeful use of words, sentence structure and textual cohesion markers; correct listing of sources.

T3 (04.26.17, test). A choice between two tasks related to school life in France: (1) a text about a young Norwegian spending a year in France; (2) a descriptive text based on a picture. Criteria: a minimum of 300 words; knowledge of society and culture in the French-speaking world; correct listing of sources.

General studies class B

T1 (09.27.17, test). A text on a planned journey to French-speaking countries. Criteria: no written criteria available.

T2 (11.03.17, test). A text about a four-week journey to four countries in French-speaking Africa. Criteria: variation in content and vocabulary; no copy-paste; use of the passé composé (PC) and the imparfait (IMP).

T3 (04.25.18, test). A choice between three assignments: (1) a text describing a journey to Syria to help refugees; (2) a letter to the UN on women’s rights; (3) a presentation to a school on plans for organizing an international week. Criteria: no written criteria available.
Table 2. Overview of error types.
- morphosyntactic errors: e.g., agreement and conjugation errors (morphology); word order errors and missing phrase elements (syntax)
- lexical errors: inaccurate word choices
- orthographic errors: spelling errors
- textual cohesion errors: e.g., redundant repetition; inadequate relation between anaphoric pronouns and noun phrases, or between time adverbials and verb tense; lack of temporal cohesion
- sociolinguistic errors: e.g., inappropriate choice of address pronouns
- punctuation errors: e.g., missing commas
- content issues: errors or shortcomings in text content
Table 3. Overview of the overall number of in-text corrections for each teacher and their distribution across CF types and error types.
The IB teacher

Error type | Direct CF, with MI | Direct CF, without MI | Indirect CF, with MI | Indirect CF, without MI | Total
Morphosyntactic | 2% (n = 1) | 14% (n = 9) | 9% (n = 6) | 33% (n = 21) | 58% (n = 37)
Lexical | 3% (n = 2) | 9% (n = 6) | 2% (n = 1) | - | 14% (n = 9)
Orthographic | - | 2% (n = 1) | 2% (n = 1) | 3% (n = 2) | 6% (n = 4)
Textual | 3% (n = 2) | 11% (n = 7) | 2% (n = 1) | - | 16% (n = 10)
Sociolinguistic | - | - | 2% (n = 1) | - | 2% (n = 1)
Punctuation | - | 5% (n = 3) | - | - | 5% (n = 3)
Total | 8% (n = 5) | 41% (n = 26) | 16% (n = 10) | 36% (n = 23) | 100% (n = 64)

The GS-A teacher

Error type | Direct CF, with MI | Direct CF, without MI | Indirect CF, with MI | Indirect CF, without MI | Total
Morphosyntactic | 3% (n = 13) | 18% (n = 72) | 48% (n = 196) | 2% (n = 10) | 71% (n = 291)
Lexical | 2% (n = 7) | 0% (n = 2) | 4% (n = 16) | 0% (n = 1) | 6% (n = 26)
Morphosyntactic + lexical | 2% (n = 7) | 1% (n = 4) | 7% (n = 27) | 0% (n = 0) | 9% (n = 38)
Orthographic | 0% (n = 0) | 0% (n = 2) | 6% (n = 26) | 0% (n = 1) | 7% (n = 29)
Morphosyntactic + orthographic | 1% (n = 3) | - | 3% (n = 13) | - | 4% (n = 16)
Textual | - | - | - | - | -
Sociolinguistic | 1% (n = 3) | 0% (n = 1) | - | 0% (n = 1) | 1% (n = 5)
Punctuation | - | 0% (n = 2) | - | - | 0% (n = 2)
Content | - | 0% (n = 1) | - | 0% (n = 1) | 0% (n = 2)
Total | 8% (n = 33) | 21% (n = 84) | 68% (n = 278) | 3% (n = 14) | 100% (n = 409)

The GS-B teacher

Error type | Direct CF, with MI | Direct CF, without MI | Indirect CF, with MI | Indirect CF, without MI | Total
Morphosyntactic | 0% (n = 2) | 56% (n = 234) | 0% (n = 1) | 4% (n = 18) | 61% (n = 255)
Lexical | 1% (n = 5) | 15% (n = 65) | - | 1% (n = 3) | 17% (n = 73)
Morphosyntactic + lexical | - | 3% (n = 13) | - | 1% (n = 5) | 4% (n = 18)
Orthographic | - | 7% (n = 30) | - | 1% (n = 5) | 8% (n = 35)
Morphosyntactic + orthographic | - | 0% (n = 2) | - | 1% (n = 3) | 1% (n = 5)
Textual | - | 3% (n = 12) | - | 1% (n = 6) | 4% (n = 18)
Sociolinguistic | - | 0% (n = 1) | - | 1% (n = 3) | 1% (n = 4)
Punctuation | - | 0% (n = 2) | - | 0 | 0% (n = 2)
Content | - | 2% (n = 8) | - | 1% (n = 3) | 3% (n = 11)
Total | 2% (n = 7) | 87% (n = 367) | 0% (n = 1) | 11% (n = 46) | 100% (n = 421)
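The percentages in Table 3 are each correction count expressed as a share of that teacher's total number of in-text corrections, rounded to whole numbers. As a minimal illustration (not the author's actual tooling; the CF-type labels here are hypothetical shorthands, not the corpus annotation scheme), a tally like the following reproduces the IB teacher's "Total" row:

```python
from collections import Counter

# One label per in-text correction, mirroring the IB teacher's "Total" row
# in Table 3 (labels are illustrative, not the study's annotation tags).
corrections = (
    ["direct CF with MI"] * 5
    + ["direct CF without MI"] * 26
    + ["indirect CF with MI"] * 10
    + ["indirect CF without MI"] * 23
)

counts = Counter(corrections)
total = sum(counts.values())  # 64 corrections in all

for cf_type, n in counts.items():
    # Shares are rounded to whole percentages, as in the table.
    print(f"{cf_type}: {round(100 * n / total)}% (n = {n})")
```

Run over the full annotation set, the same tally keyed on (CF type, error type) pairs would yield every cell of the table.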
Table 4. Signs of uptake in the learners’ texts.
IB01 (very high)
Feature(s) highlighted: Use of the subjunctive.
Signs of uptake (Text 1–2): Yes. In T1, the subjunctive is required in one place but not used. In T2, it is required in three places and used in two of them.
Signs of sustained uptake (Text 3): Unclear. The subjunctive is required in two places and used in one of them.
Conclusion: The student shows tendencies to integrate the subjunctive more into her interlanguage, but there is not enough material to conclude.

IB03 (very high)
Feature(s) highlighted: None; the student makes almost no mistakes.
Signs of uptake (Text 1–2): -
Signs of sustained uptake (Text 3): -
Conclusion: The student demonstrates a very high level of French throughout all three texts.

IB04 (very high)
Feature(s) highlighted: No specific linguistic feature, but the teacher recommends proofreading because there are several ‘sloppy mistakes’: mistakes with structures that the student clearly masters.
Signs of uptake (Text 1–2): Yes. In T1 (111 words), there are seven mistakes that are probably ‘sloppy’ ones (a rate of 1 per 16 words). In T2 (287 words), there are only two such mistakes (1 per 144 words).
Signs of sustained uptake (Text 3): Yes, though less clear than in T2. In T3, there are 12 sloppy mistakes in 462 words (1 per 39 words).
Conclusion: The student clearly knows the rules; it is more a question of increased thoroughness than of uptake in the sense of learning outcome.

IB07 (very high)
Feature(s) highlighted: None explicitly, but the most frequently corrected error type is overuse of the imparfait (IMP) at the expense of the passé composé (PC).
Signs of uptake (Text 1–2): Yes, including tendencies towards hypercorrection. The student uses the PC a great deal, even in some places where the IMP would be preferable. In T2, the teacher points out an excessive use of the PC.
Signs of sustained uptake (Text 3): Unclear. In T3, the student is back to the pattern from T1, with excessive use of the IMP at the expense of the PC. This suggests that the T2 pattern reflected uptake from the teacher’s feedback.
Conclusion: The student’s use of the two past tenses is generally very good in all three texts. She tries to perfect her usage in response to the teacher’s comments.

GS-A 207 (low)
Feature(s) highlighted: Sentences that do not communicate meaningful content; verb conjugation errors; spelling errors.
Signs of uptake (Text 1–2): No. The student writes a much shorter text, but there are still unclear sentences and many conjugation and spelling errors.
Signs of sustained uptake (Text 3): -
Conclusion: Text 3 shows the same problems as texts 1 and 2.

GS-A 208 (intermediate-low)
Feature(s) highlighted: The difference between c’est and il y a; article use; verb conjugation; unclear sentences.
Signs of uptake (Text 1–2): No. The difference between c’est and il y a is not relevant in T2; there are still many problems in the other areas for improvement.
Signs of sustained uptake (Text 3): -
Conclusion: Text 3 shows the same problems as texts 1 and 2.

GS-A 209 (intermediate-high)
Feature(s) highlighted: The reduced partitive article and hiatus.
Signs of uptake (Text 1–2): Unclear. There are no occurrences in T2 where the reduced partitive article is required. There are no hiatus mistakes, but also few opportunities for making them.
Signs of sustained uptake (Text 3): No. In T3 there is again underuse of the reduced partitive article, and there are several examples of hiatus (e.g., que on).
Conclusion: The student does not show any signs of uptake.

GS-A 210 (intermediate-low)
Feature(s) highlighted: Unclear sentences; missing articles before nouns.
Signs of uptake (Text 1–2): Unclear. The student uses articles before nouns, but they are not always placed correctly (tu l’as été première femme). Some unclear sentences.
Signs of sustained uptake (Text 3): No. In T3, many articles are missing before nouns, and there are some sentences with unclear meaning.
Conclusion: The student does not show any clear signs of uptake.

GS-A 211 (intermediate-low)
Feature(s) highlighted: Agreement errors; verb conjugation errors; lexical choices.
Signs of uptake (Text 1–2): None. T2 is very short, but with several agreement and verb conjugation errors, as well as errors in lexical choice.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake.

GS-A 212 (intermediate-high)
Feature(s) highlighted: Agreement errors; verb conjugation errors; the difference between c’est/il y a and que/comme.
Signs of uptake (Text 1–2): None. Still many errors of the same types as in T1.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake.

GS-A 213 (high)
Feature(s) highlighted: Adjective agreement and verb conjugation.
Signs of uptake (Text 1–2): Yes for adjectives, but not for verbs. In T1, 57% of the adjectives (4 of 7) did not agree correctly; in T2, only 18% (2 of 11) do not. For verbs, 18% (9 of 51) were conjugated wrongly in T1; in T2, this rises to 43% (17 of 40).
Signs of sustained uptake (Text 3): Unclear. In T3, 6 of 17 adjectives (35%) do not agree correctly, but several of the seventeen are invariable adjectives that pose no challenge.
Conclusion: The student shows a positive tendency in adjective agreement from T1 to T2.

GS-A 214 (intermediate-high)
Feature(s) highlighted: Excessive use of the indefinite article; verb conjugation.
Signs of uptake (Text 1–2): Unclear. No overuse of the indefinite article, but not many contexts for potential overuse either. Slight progress in verb conjugation, from 39% erroneous (13 of 33) in T1 to 28% erroneous (13 of 46) in T2.
Signs of sustained uptake (Text 3): Unclear. Again, no overuse of the indefinite article, but few contexts for potential overuse. Only 15% of the verbs (7 of 46) are conjugated wrongly, but most of the verbs are very frequent forms of common verbs.
Conclusion: The student shows tendencies towards more appropriate article use and verb conjugation, but it is hard to conclude because T2 and T3 leave little room for mistakes in these areas.

GS-A 215 (intermediate-low)
Feature(s) highlighted: Vous vs. tu; agreement errors; lexical choices; the form of the PC; ce/cet/cette.
Signs of uptake (Text 1–2): None. Many errors of the same kinds in T2.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake from T1 to T2, but T3 is better, especially when it comes to the PC.

GS-A 216 (intermediate-low)
Feature(s) highlighted: Excessive use of the infinitive (missing verb conjugation); determiner (including possessive) and adjective agreement.
Signs of uptake (Text 1–2): None. Many errors of the same kinds in T2.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake.

GS-A 217 (low)
Feature(s) highlighted: Agreement errors.
Signs of uptake (Text 1–2): None. There is a similar rate of agreement errors in T2.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake.

GS-A 218 (intermediate-high)
Feature(s) highlighted: Spelling errors (particularly related to hiatus); adjective agreement errors; verb conjugation errors.
Signs of uptake (Text 1–2): Yes. There is still one instance of hiatus, but verb conjugation errors have decreased from 27% (8 of 30) in T1 to 5% (4 of 75) in T2, and adjective agreement errors from 42% (5 of 12) to 15% (2 of 13).
Signs of sustained uptake (Text 3): No. There is still one instance of hiatus; verb conjugation errors are back to 12% (5 of 42), and adjective agreement errors have increased to 53% (10 of 19).
Conclusion: The student shows signs of uptake from T1 to T2, but in T3 there are again many adjective agreement and verb conjugation errors.

GS-A 219 (low)
Feature(s) highlighted: Unclear sentence content; spelling errors, e.g., the difference between est and et.
Signs of uptake (Text 1–2): None. The text is very short.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any clear signs of uptake.

GS-B 125 (intermediate-high)
Feature(s) highlighted: No linguistic issues highlighted, but one important problem seems to be conjugation of verbs in the 3rd person plural.
Signs of uptake (Text 1–2): Unclear. While all four occurrences of the 3rd person plural had the wrong verb form in T1, all five such occurrences had the appropriate form in T2. On the other hand, the occurrences in T2 were less complex than those in T1 (e.g., pronoun + verb in T2 vs. coordinated noun phrase + verb in T1).
Signs of sustained uptake (Text 3): Unclear. There are eight occurrences of the 3rd person plural in T3, of which six have the correct verb form.
Conclusion: Not enough data to conclude.

GS-B 127 (intermediate-high)
Feature(s) highlighted: No linguistic issues highlighted, but many of the corrections relate to adjective agreement.
Signs of uptake (Text 1–2): Yes, some. In T1, the adjective did not agree correctly in six out of 14 cases (42%). In T2, there are only two occurrences of incorrect adjective agreement (out of 11 occurrences).
Signs of sustained uptake (Text 3): Yes; the adjective agrees correctly in 20 out of 22 occurrences.
Conclusion: The student seems to have made progress in adjective agreement. Although some of the adjectives are invariable and/or used more than once, the sharp decline in the number of errors indicates clear progression.

GS-B 132 (intermediate-high)
Feature(s) highlighted: PC and IMP, form as well as use.
Signs of uptake (Text 1–2): Yes. There are 22 occurrences of the PC and ten of the IMP, and all but two are correctly formed. In only three cases (two IMP and one PC) would the other tense be more appropriate.
Signs of sustained uptake (Text 3): T3 is written in the present tense, but there is one occurrence of the IMP and two of the PC, all appropriately used and constructed.
Conclusion: The student shows considerable progress concerning the use and form of the PC and the IMP.

GS-B 133 (high)
Feature(s) highlighted: The morphology of the PC (the student tends to inflect the participle even when the auxiliary is avoir); the use of indefinite and definite articles.
Signs of uptake (Text 1–2): Yes. T2 is in the past tense, and on only one occasion does the student inflect the participle with the auxiliary avoir; it is correctly inflected with the auxiliary être. The use of definite and indefinite articles is mostly correct.
Signs of sustained uptake (Text 3): Yes. There are many occurrences of the PC with avoir, and the learner no longer inflects the participle. As in T1, there are some errors in the use of definite and indefinite articles.
Conclusion: The student has learnt how to form the PC with both auxiliaries. Progress is less clear when it comes to the use of definite/indefinite articles.

GS-B 135 (intermediate-high)
Feature(s) highlighted: The morphology of the PC.
Signs of uptake (Text 1–2): Yes. In T1, the student used the infinitive instead of the participle in 7 out of 13 cases (54%). In T2, there are 22 occurrences of the PC, all correctly formed with auxiliary + participle.
Signs of sustained uptake (Text 3): T3 is written in the present tense and has only three occurrences of the PC, all correctly formed.
Conclusion: The student has learnt to construct the PC correctly with an auxiliary in the present tense plus a participle.

GS-B 136 (high)
Feature(s) highlighted: Adjective agreement errors; negation; prepositions; need for more lexical variation.
Signs of uptake (Text 1–2): Some signs of uptake for adjective agreement. In T2, 6 of 31 adjectives (19%) do not agree correctly, compared with 5 of 18 (27%) in T1. T2 has two occurrences of negation, both incorrect. Similar preposition errors in T2 as in T1.
Signs of sustained uptake (Text 3): Yes; only 3 out of 31 adjectives (10%) do not agree correctly.
Conclusion: The student seems to have made some progress in adjective agreement.

GS-B 140 (intermediate-high)
Feature(s) highlighted: Future tense; prepositions; the definite article (overall appropriate use, but articles are missing with regions, and article use with towns and some other nouns is sometimes inappropriate).
Signs of uptake (Text 1–2): No. T2 is written in the past tense, so the future tense is not relevant. There are similar preposition errors as in T1. The use of the definite article is mostly correct, but some error types from T1 are repeated: notre l’avion, missing articles before regions.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake; there are similar errors in T3 as well.

GS-B 141 (intermediate-low)
Feature(s) highlighted: Prepositions; the definite article (especially with countries).
Signs of uptake (Text 1–2): No. There are similar preposition and article errors in T2 as in T1.
Signs of sustained uptake (Text 3): -
Conclusion: The student does not show any signs of uptake; there are similar errors in T3 as well.

GS-B 142 (low)
Feature(s) highlighted: None highlighted, but most corrections concern personal pronouns (the student uses vous for nous).
Signs of uptake (Text 1–2): Yes. The student uses nous correctly throughout the entire text.
Signs of sustained uptake (Text 3): Yes. The student uses nous correctly throughout the entire text.
Conclusion: In T1, the student consistently used vous for nous; this confusion is eradicated in T2 and T3.

GS-B 150 (high)
Feature(s) highlighted: No linguistic issues highlighted, but most corrections concern missing articles before countries.
Signs of uptake (Text 1–2): Yes. Countries are mentioned 16 times, in all cases with article use when appropriate. In only two cases is the wrong article chosen.
Signs of sustained uptake (Text 3): Yes, but less clearly than in T2. Countries are mentioned nine times, and in six of them article use is correct. In one case the student forgets the article, in another she chooses the wrong one, and in a third she uses an article where there should not be one.
Conclusion: The student seems to have progressed considerably in article use with countries. From not using articles at all, the student starts to use them consistently (with one exception in T3), although sometimes the wrong article is chosen.
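The uptake judgements in Table 4 rest on comparing simple error rates across texts: the share of obligatory contexts a learner realises incorrectly. A minimal sketch of that comparison, using the adjective-agreement counts reported for student GS-B 127 (the function and variable names are illustrative, not from the study):

```python
def error_rate(errors: int, contexts: int) -> float:
    """Proportion of obligatory contexts realised incorrectly."""
    return errors / contexts

# Adjective agreement for GS-B 127 (counts taken from Table 4):
t1 = error_rate(6, 14)   # Text 1: 6 of 14 adjectives do not agree
t2 = error_rate(2, 11)   # Text 2: 2 of 11
t3 = error_rate(2, 22)   # Text 3: 2 of 22

# A rate that falls from text to text is read as a sign of (sustained) uptake.
shows_uptake = t2 < t1 and t3 <= t2
print(f"T1: {t1:.0%}, T2: {t2:.0%}, T3: {t3:.0%}, uptake: {shows_uptake}")
```

As the table's caveats note, such rates are only suggestive when a text offers few obligatory contexts or when the remaining items (e.g., invariable adjectives) pose no real challenge.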
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Vold, E.T. Assessing Writing in French-as-a-Foreign-Language: Teacher Practices and Learner Uptake. Languages 2021, 6, 210. https://doi.org/10.3390/languages6040210