Proceeding Paper

A Preliminary Study on Exploring the Use of Translation Apps and Post-Editing Strategies of University Students †

Department of Applied English, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Presented at the 2024 IEEE 4th International Conference on Electronic Communications, Internet of Things and Big Data, Taipei, Taiwan, 19–21 April 2024.
Eng. Proc. 2024, 74(1), 33; https://doi.org/10.3390/engproc2024074033
Published: 2 September 2024

Abstract

In the digital age, integrating technology into the English learning experience is imperative. With various language learning technologies emerging, many students use translation applications (apps) as indispensable tools in their English learning. Previous research has primarily focused on the translation strategies of learners of English as a foreign language (EFL), on post-editing strategies for machine translation, and on the practices of professional post-editors and translators. We explored students’ use of translation apps in English language learning and their application of post-editing strategies to understand their perspectives on technology’s role. A questionnaire survey was conducted with university students in Taiwan, and the data were analyzed using descriptive statistics and one-way ANOVA in SPSS version 23. Students used translation apps to learn vocabulary and comprehend English articles. Post-editing strategies such as punctuation and terminology correction were not used frequently. Most students agreed that translation apps serve as valuable English learning supplements. By capturing the prevailing learning dynamics of university students, the results of this study lay a foundation for further investigations.

1. Introduction

In the digital era, acquiring language skills has become ubiquitous with the help of various high-end technologies and applications (apps). Software with versatile functions is designed for and made available on mobile devices such as smartphones and tablets [1]. Technology is seamlessly integrated into the language learning experience. With the advancement of translation tools, translation apps (e.g., Google Translate, DeepL Translate, and ChatGPT) assist students in learning English. For instance, teachers apply Google Translate in the classroom to teach reading, writing, vocabulary, and translation skills, although students are not inclined to use it in translation and writing assignments [2,3]. Such tools can enrich students’ language learning, deepen their engagement, and help them better understand their own abilities [3].
Given that machine translation output is often imperfect, post-editing is an essential procedure. Post-editing (also called machine translation post-editing) is defined as the correction of the output produced by machine translation (MT); based on the requirements of the final product, the target text is amended to improve translation quality [4,5]. Chang et al. [6] suggested that students must enhance their machine translation post-editing abilities so that they learn to recognize the quality of AI-generated texts. MT and translation apps assist students with their translation; students can thereby accelerate their work and strengthen their competence in transcreation.
Previous research has focused on the translation strategies utilized by learners of English as a foreign language (EFL) [7,8,9,10] and the post-editing strategies applied in MT. Researchers have also delved into the practices of professional post-editors and translators [11,12]. We explored students’ use of translation apps in the context of English language learning and their application of post-editing strategies to understand technology’s role in their learning. Post-editing originally refers to the modification of machine-generated texts; since MT underlies many translation tools, we used the translated texts generated by translation apps for this study. Based on the purpose of the study, the following research areas were proposed for analysis:
  • Situations where students learn English with translation apps;
  • Post-editing strategies that students adopt after using translation apps;
  • Students’ perceptions of using translation apps as an English learning supplement.

2. Literature Review

2.1. Translation in English Learning Strategy

Translation is widely used in English training and facilitates learners’ English learning as long as teachers use it for teaching particular English skills [13]. Translation is advantageous for learners acquiring a second language (L2): for instance, they learn new vocabulary, grammatical structures, idiomatic expressions, reading and listening contexts, writing, and the differences and similarities between their first language (L1) and L2 [8,13,14]. Many researchers view translation as a beneficial and supportive method for language learning [7,8,9,13,15], and it is unavoidable for learners to think and comprehend contexts in their mother tongue while learning foreign languages [7,9,15]. On the other hand, several researchers hold negative beliefs about using translation as a language-learning strategy because they regard it as a hindrance to language acquisition [10,15], even though they still think translation helps learners. Highly proficient learners prefer to use English directly, employing translation strategies mainly to identify the differences and similarities between the two languages and improve their English skills; as a result, they use translation strategies less often than learners with low English proficiency [9,16,17]. Regardless of disparate opinions on strategy use, translation plays a significant role in language learning.

2.2. Post-Editing Strategies

In language learning, students use self-monitoring strategies to ensure that they comprehend correctly and to regulate their ways of studying [18,19], similar to the machine translation post-editing procedure. Students monitor, control, and reflect on their learning, improving their language learning outcomes and translation quality [20]. Machine translation post-editing is necessary for correcting raw output [21,22]. A post-editor needs to ensure that the MT-generated text fits the intended meaning of the source language and to correct errors appearing in the target language [23,24]. In light post-editing, the translated text is revised to present information appropriately and remain semantically correct, which includes correcting words, grammar, and punctuation (i.e., linguistic MT errors). In full post-editing, the text must also be grammatically correct and stylistically acceptable, so strategies such as adding supplementary explanations (i.e., correcting pragmatic and affective errors) are adopted [21,25,26,27,28].
It is essential to educate students to apply MT and post-edit its output. To guarantee the fluency and accuracy of MT output, students crosscheck it against various resources and MT tools [29,30]. Through post-editing practice, students can notice different strategies in distinct fields, gain linguistic knowledge and translation skills in both L1 and L2, and enhance the productivity and quality of translated texts [31,32,33,34]. Nonetheless, several researchers are concerned about the role of translation technology in language teaching and learning [33,35]. Although the quality of neural machine translation (NMT) has improved considerably, it still cannot translate as well as humans. Students tend to rely exclusively on its output; without sufficient language proficiency and translation competence, this reliance leads to translations containing errors [34,35,36]. Despite the factors that make post-editing results questionable, students have a positive attitude toward post-editing [33,34,37].

2.3. Translation Tools

MT is a technology that translates texts from one language into another without human intervention [38,39]. Since many MT systems are now delivered as translation apps, we focused on apps that help students translate on mobile devices such as smartphones and tablets. Teachers use translation apps in their teaching, which affects students’ language learning [40,41,42]. Students have shown progress in their translation performance and enhanced translation skills, promoting better conversational competence and cultivating linguistic skills [40,41,43,44]. Thus, translation apps need to be utilized in translation classes, and teachers must be open-minded about their use in class and teach students how to use them wisely [41,43]. Researchers have a positive attitude toward Google Translate because they regard it as a significant tool for improving language competence.

3. Methodology

We collected data from university students in Taiwan using a questionnaire survey consisting of seven demographic questions and 42 questions on English learning, post-editing strategies, and students’ perspectives. A five-point Likert scale was used: 5 = Strongly Agree, 4 = Agree, 3 = Neutral, 2 = Disagree, and 1 = Strongly Disagree. The questionnaire was adapted from existing instruments on learning strategies [9] and post-editing strategies [12,45]. The survey was administered online, and the data collected were analyzed using descriptive statistics and one-way ANOVA in SPSS version 23.
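The one-way ANOVA used here can be reproduced with standard statistical libraries. The sketch below runs SciPy's `f_oneway` on invented five-point Likert responses grouped by CEFR level; the group sizes and scores are hypothetical stand-ins, since the survey data are reported only in aggregate.

```python
import numpy as np
from scipy import stats

# Invented five-point Likert responses for one questionnaire item,
# grouped by CEFR level (group sizes loosely mirror the study's sample).
a2 = np.array([5, 4, 5, 4, 4, 5, 4, 5])              # CEFR A2 (n = 8)
b1 = np.array([4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 3])  # CEFR B1 (n = 12)
c1 = np.array([2, 3])                                # CEFR C1 (n = 2)

# One-way ANOVA: do mean ratings differ across proficiency levels?
f_stat, p_value = stats.f_oneway(a2, b1, c1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Group means differ significantly at the 0.05 level")
```

A significant omnibus F, as in the study, only says that at least one group mean differs; a post-hoc procedure such as Scheffé's test is still needed to locate which pairs differ.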
Thirty-two students were randomly selected for the preliminary questionnaire study. Reliability and consistency were assessed for research quality [46]. Cronbach’s α was adopted to test reliability; generally, a Cronbach’s α of 0.7 is deemed acceptable, and a value greater than or equal to 0.9 is regarded as excellent. The Cronbach’s α of this questionnaire was 0.936, revealing a high level of consistency across the 42 items. Considering the value of Cronbach’s α and the types of questions, items 13 (Cronbach’s α if item deleted = 0.936) and 14 (Cronbach’s α if item deleted = 0.936) were deleted, leaving seven demographic questions and 40 Likert-scale questions. Three professors specializing in English teaching evaluated the questionnaire’s validity, referring to how accurately a measure reflects the reality of the sampled items [47], and provided suggestions for improvement. The questions were reorganized into four sections: (1) background information, (2) the use of translation apps in English learning, (3) post-editing strategies, and (4) students’ perspectives.
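Cronbach's α is straightforward to compute from the raw item-score matrix: α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch, using a made-up response matrix rather than the study's actual questionnaire scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up matrix: 5 respondents x 4 items on a five-point scale.
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```

The "α if item deleted" diagnostic mentioned above is just this same computation repeated with one column removed at a time.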
The research was conducted in the steps depicted in Figure 1: first, the problems were identified; second, the literature review was conducted; third, a questionnaire was devised; fourth, the questionnaire survey was administered; and fifth, the questionnaire was modified based on the survey.

4. Results and Discussion

The respondents included 28 females (87.5%) and 4 males (12.5%). To distinguish students’ English proficiency, they were assessed for language ability using the Common European Framework of Reference for Languages (CEFR). Students at CEFR B1 (Threshold) level represented the majority (N = 12), followed by CEFR A2 (Waystage) level (N = 8), CEFR A1 (Breakthrough) level (N = 5), CEFR B2 (Vantage) level (N = 5), CEFR C1 (Effective Operational Proficiency) level (N = 2), and CEFR C2 (Mastery) level (N = 0).
The situation of students’ English learning with translation apps was examined in items 1 to 26, except items 13, 14, 23, and 24. The average score was 3.63, with item scores ranging from 2.75 to 4.44. Item 26 (M = 4.44), item 8 (M = 4.28), item 7 (M = 4.03), and item 2 (M = 4.03) showed the highest scores: the respondents used translation apps for vocabulary learning and for understanding English articles, but they did not use them before reading English articles, as presented in item 1 (M = 2.75). With the threshold set at 0.05, one-way ANOVA showed a statistically significant difference by English proficiency for item 19, using translation apps to recall the content of English lessons (p = 0.014 < 0.05). Multiple comparisons with Scheffe’s test revealed that students at the CEFR A2 level significantly differed from those at the CEFR C1 level (p = 0.047 < 0.05): A2-level students relied on translation apps to recall the content of English lessons more than C1-level students did.
The average score for students’ post-editing strategies with translation apps was 3.75. Most students perceived that post-editing strategies helped them accomplish their translation more quickly after using translation apps (item 36, M = 4.06), and the standard deviation (0.95) indicated a concentrated distribution. The scores for items 30, 31, 33, and 32 showed that students less frequently corrected punctuation (M = 3.31), terminology use (M = 3.44), and cultural references and metaphorical expressions (M = 3.47), or added supplementary information or explanations (M = 3.72). Item 32 differed significantly across English proficiency levels (p = 0.012 < 0.05); that is, students’ post-editing of supplementary information or explanations in the translated text varied with proficiency. Multiple comparisons with Scheffe’s test showed a significant difference between learners at the CEFR B1 level and those at the CEFR C1 level (p = 0.043 < 0.05): B1-level students added supplementary information or explanations in the translated text more frequently than C1-level students.
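Scheffé's post-hoc procedure, used for the multiple comparisons above, tests each pairwise F statistic against (k−1) times the critical F value of the omnibus ANOVA. A sketch of the pairwise version, with invented group scores standing in for the survey data:

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups, alpha=0.05):
    """Scheffé post-hoc pairwise comparisons for a one-way layout.

    groups: list of 1-D arrays of scores, one per group.
    Returns (i, j, F, p, significant) for each pair of groups.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups])
    N = n.sum()
    means = np.array([g.mean() for g in groups])
    # Pooled within-group mean square (the ANOVA error term)
    ss_within = sum(((g - m) ** 2).sum() for g, m in zip(groups, means))
    ms_within = ss_within / (N - k)
    f_crit = stats.f.ppf(1 - alpha, k - 1, N - k)
    out = []
    for i in range(k):
        for j in range(i + 1, k):
            f = (means[i] - means[j]) ** 2 / (ms_within * (1 / n[i] + 1 / n[j]))
            # Scheffé: compare F against (k - 1) * critical F of the omnibus test
            p = stats.f.sf(f / (k - 1), k - 1, N - k)
            out.append((i, j, f, p, f > (k - 1) * f_crit))
    return out

# Invented Likert scores for three hypothetical proficiency groups.
groups = [np.array([1.0, 1, 1, 2]),
          np.array([1.0, 2, 1, 1]),
          np.array([5.0, 5, 4, 5])]
for i, j, f, p, sig in scheffe_pairwise(groups):
    print(f"group {i} vs {j}: F = {f:.2f}, p = {p:.4f}, significant = {sig}")
```

Scheffé's criterion is conservative for pairwise contrasts, which is why it is a common choice when group sizes are unequal, as with the CEFR levels in this sample.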
Items 23, 24, 27, 40, 41, and 42 examined students’ perceptions. The average score was 3.57, ranging from 3.06 to 4.06. Most participants thought translation apps were helpful English learning supplements (M = 4.06), and the standard deviation (0.91) indicated a relatively concentrated distribution. They had concerns about the translation quality produced by translation apps (item 23, M = 3.06, and item 27, M = 3.09). The difference in item 24 (I think translation apps are good English learning supplements) across English proficiency levels was significant (p = 0.024 < 0.05). Scheffe’s test showed a significant difference between students at the CEFR A2 and C1 levels (p = 0.045 < 0.05), with A2-level students agreeing more strongly (MD = 2.1250). The overall descriptive statistics are presented in Table 1.

5. Conclusions

Most students learn vocabulary and comprehend reading passages via translation apps. In the questionnaire survey, item 19 (I use translation apps to recall the content of English lessons better) was related to English proficiency: students with low English proficiency preferred using translation apps to recall the content of English lessons more than those with high English proficiency. Most students seldom added supplementary information or explanations, modified cultural references and metaphorical expressions, or corrected terminology and punctuation. Item 32 (I add supplementary information or explanations in the translation done by translation apps to ensure the completeness of the translation work) differed significantly among students at different levels: students at the CEFR B1 level adopted this strategy more frequently than students at the CEFR C1 level when post-editing texts. Moreover, students were positive about using translation apps as English learning supplements, and those with low English proficiency agreed with the statement more strongly. Overall, students less skilled in English had more positive attitudes toward English learning and post-editing strategies with translation apps.
Given the small sample of this preliminary study, the results cannot be generalized to all university students. Further research with more participants and questions is therefore necessary, so that the prevailing learning dynamics of university students can be elucidated and the use of English learning apps better understood.

Author Contributions

Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lu, Y.; Xiong, T. The attitudes of high school students and teachers toward mobile apps for learning English: A Q methodology study. Soc. Sci. Humanit. Open 2023, 8, 100555. [Google Scholar] [CrossRef]
  2. Alhaisoni, E.; Alhaysony, M. An investigation of Saudi EFL university students’ attitudes towards the use of Google Translate. Int. J. Engl. Lang. Educ. 2017, 5, 72–82. [Google Scholar] [CrossRef]
  3. Bahri, H.; Mahadi, T.S.T. Google Translate as a supplementary tool for learning Malay: A case study at Universiti Sains Malaysia. Adv. Lang. Lit. Stud. 2016, 7, 161–167. [Google Scholar] [CrossRef]
  4. O’Brien, S. Methodologies for measuring the correlations between post-editing effort and machine translatability. Mach. Translat. 2005, 19, 37–58. [Google Scholar] [CrossRef]
  5. Tezcan, A.; Hoste, V.; Macken, L. Estimating post-editing time using a gold-standard set of machine translation errors. Comput. Speech Lang. 2019, 55, 120–144. [Google Scholar] [CrossRef]
  6. Chang, C.-C.; Chen, B.-J.; Chen, P.-W.; Chin, C.-C.; Liao, P.-S.; Lin, C.-L.; Shih, C.-L. How does translation education face the challenge of AI and use AI. Compil. Transl. Rev. 2021, 14, 157–181. [Google Scholar] [CrossRef]
  7. Edelia, A.D.; Maharsi, I. A survey on translation as a learning strategy by EFL higher education students in English learning. Pedagog. J. Engl. Lang. Teach. 2022, 10, 111–122. [Google Scholar] [CrossRef]
  8. Khan, R.M.I.; Kumar, T.; Supriyatno, T.; Nukapangu, V. The phenomenon of Arabic-English translation of foreign language classes during the pandemic. Ijaz Arab. J. Arab. Learn. 2021, 4, 570–582. [Google Scholar] [CrossRef]
  9. Liao, P. EFL learners’ beliefs about and strategy use of translation in English learning. RELC J. 2006, 37, 191–215. [Google Scholar] [CrossRef]
  10. Thani, A.S.; Ageli, N.R. The use of translation as a learning strategy: A case study of students of the University of Bahrain. Int. J. Linguist. Lit. Transl. 2020, 3, 87–101. [Google Scholar] [CrossRef]
  11. Girletti, S. Working with pre-translated texts: Preliminary findings from a survey on post-editing and revision practices in Swiss corporate in-house language services. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, Ghent, Belgium, 1–3 June 2022; pp. 271–280. [Google Scholar]
  12. Hong, J.; Lee, I.J. A satisfaction survey on the human translation outcomes and machine translation post-editing outcomes. Int. J. Adv. Smart Converg. 2021, 10, 86–96. [Google Scholar] [CrossRef]
  13. Samardali, M.F.S.; Ismael, A.M.H. Translation as a tool for teaching English as a second language. J. Lit. Lang. Linguist. 2017, 40, 64–69. [Google Scholar]
  14. Alaboud, A. The positive effect of translation on improving reading comprehension among female Arabic learners of English as foreign language. Arab. World Engl. J. 2022, 13, 424–436. [Google Scholar] [CrossRef]
  15. Tze Ying, B.; Lay Hoon, A.; Abdul Halim, H.; Majtanova, M. Students’ beliefs on translation strategy in learning German language. GEMA Online J. Lang. Stud. 2018, 18, 69–86. [Google Scholar] [CrossRef]
  16. Qassem, M. Translation strategy and procedure analysis: A cultural perspective. Asia Pac. Transl. Intercult. Stud. 2021, 8, 300–319. [Google Scholar] [CrossRef]
  17. Yulita, D. The correlation of English proficiency level and translation strategies used by Indonesian EFL learners. Lang. Lang. Teach. J. 2021, 24, 240–248. [Google Scholar] [CrossRef]
  18. Cárdenas, A.M. Tackling intermediate students’ fossilized grammatical errors in speech through self-evaluation and self-monitoring strategies. Profile Issues Teach. Prof. Dev. 2018, 20, 195–209. [Google Scholar] [CrossRef]
  19. O’Malley, J.M.; Chamot, A.U. Learning Strategies in Second Language Acquisition; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  20. Yang, Y.; Wang, X. Predicting student translators’ performance in machine translation post-editing: Interplay of self-regulation, critical thinking, and motivation. Interact. Learn. Envir. 2020, 31, 340–354. [Google Scholar] [CrossRef]
  21. Chang, Y. Differences in Machine Translation Post-Editing (MTPE) Strategies between Technical and Tourism Texts: Analysis from the Perspective of Register Theory. Master’s Thesis, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan, 2020. [Google Scholar]
  22. Ye, N.; Jiang, L.; Ma, D.; Zhang, Y.; Zhao, S.; Cai, D. Predicting post-editing effort for English-Chinese neural machine translation. In Proceedings of the 2021 International Conference on Asian Language Processing, Singapore, 11–13 December 2021; pp. 154–158. [Google Scholar]
  23. Carl, M.; Bangalore, S.; Schaeffer, M. New Directions in Empirical Translation Process Research: Exploring the CRITT TPR-DB, 1st ed.; Springer: Cham, Switzerland, 2016; pp. 77–94. [Google Scholar]
  24. do Carmo, F.E.M. Post-editing: A theoretical and practical challenge for translation studies and machine learning. Ph.D. Dissertation, University of Porto, Porto, Portugal, 2017. [Google Scholar]
  25. Koponen, M. Is machine translation post-editing worth the effort? A survey of research into post-editing and effort. J. Spec. Transl. 2016, 25, 131–148. [Google Scholar]
  26. Krings, H.P. Repairing Texts: Empirical Investigations of Machine Translation Post-Editing Processes; The Kent State University Press: Kent, OH, USA, 2001. [Google Scholar]
  27. Massardo, I.; Meer, J.V.D.; O’Brien, S.; Hollowood, F.; Aranberri, N.; Drescher, K. MT Post-Editing Guidelines; TAUS Signature Editions: Amsterdam, The Netherlands, 2016. [Google Scholar]
  28. Shih, C. Re-looking into machine translation errors and post-editing strategies in a changing high-tech context. Compil. Transl. Rev. 2021, 14, 125–166. [Google Scholar] [CrossRef]
  29. Chang, L.-C. Chinese language learners evaluating machine translation accuracy. JALT CALL J. 2022, 18, 110–136. [Google Scholar] [CrossRef]
  30. Tavares, C.; Tallone, L.; Oliveira, L.; Ribeiro, S. The challenges of teaching and assessing technical translation in an era of neural machine translation. Educ. Sci. 2023, 13, 541. [Google Scholar] [CrossRef]
  31. Cui, Y.; Liu, X.; Cheng, Y. A comparative study on the effort of human translation and post-editing in relation to text types: An eye-tracking and key-logging experiment. SAGE Open 2023, 13. [Google Scholar] [CrossRef]
  32. Harto, S.; Hamied, F.A.; Musthafa, B.; Setyarini, S. Exploring undergraduate students’ experiences in dealing with post-editing of machine translation. Indones. J. Appl. Linguist. 2022, 11, 696–707. [Google Scholar] [CrossRef]
  33. Jia, Y.; Lai, S. Post-editing metaphorical expressions: Productivity, quality, and strategies. J. Foreign Lang. Cult. 2022, 6, 28–43. [Google Scholar] [CrossRef]
  34. Varela Salinas, M.-J.; Burbat, R. Google Translate and DeepL: Breaking taboos in translator training. Observational study and analysis. Ibérica 2023, 243–266. [Google Scholar] [CrossRef]
  35. Liu, K.; Kwok, H.L.; Liu, J.; Cheung, A.K.F. Sustainability and influence of machine translation: Perceptions and attitudes of translation instructors and learners in Hong Kong. Sustainability 2022, 14, 6399. [Google Scholar] [CrossRef]
  36. Yu, Z.; Lu, Y. Post-editing performance of English-major undergraduates in China: A case study of C-E translation with pedagogical reflections. In Proceedings of the 17th International Conference on Computer Science and Education, Zhejiang, China, 18–21 August 2022; pp. 408–420. [Google Scholar]
  37. Jia, Y.; Carl, M.; Wang, X. How does the post-editing of neural machine translation compare with from-scratch translation? A product and process study. J. Spec. Transl. 2019, 31, 60–86. [Google Scholar]
  38. Kornacki, M. Computer-Assisted Translation (CAT) Tools in the Translator Training Process; Peter Lang: Berlin, Germany, 2018. [Google Scholar]
  39. Potapova, R.; Potapov, V.; Kuzmin, O. Logistics translator. Concept vision on future interlanguage computer assisted translation. In Proceedings of the International Conference on Speech and Computer, Gurugram, India, 14–16 November 2022; pp. 579–589. [Google Scholar]
  40. Chen, Y. Using a game-based translation learning app and Google apps to enhance translation skills: Amplification and omission. Int. J. Hum.-Comput. Int. 2023, 39, 3894–3908. [Google Scholar] [CrossRef]
  41. Lake, V.E.; Beisly, A.H. Translation apps: Increasing communication with dual language learners. Early Child. Educ. J. 2019, 47, 489–496. [Google Scholar] [CrossRef]
  42. Ross, R.K.; Lake, V.E.; Beisly, A.H. Preservice teachers’ use of a translation app with dual language learners. J. Digit. Learn. Teach. Educ. 2020, 37, 86–98. [Google Scholar] [CrossRef]
  43. Alotaibi, H.; Salamah, D. The impact of translation apps on translation students’ performance. Educ. Inf. Technol. 2023, 28, 10709–10729. [Google Scholar] [CrossRef]
  44. Bin Dahmash, N. I can’t live without Google Translate: A close look at the use of Google Translate app by second language learners in Saudi Arabia. Arab. World Engl. J. 2020, 11, 226–240. [Google Scholar] [CrossRef]
  45. Yang, Z.; Mustafa, H.R. On postediting of machine translation and workflow for undergraduate translation program in China. Hum. Behav. Emerg. Technol. 2022, 2022, 5793054. [Google Scholar] [CrossRef]
  46. Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef]
  47. Almanasreh, E.; Moles, R.; Chen, T.F. Evaluation of methods used for estimating content validity. Res. Soc. Adm. Pharm. 2019, 15, 214–221. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the procedure of this study.
Table 1. Descriptive statistics of students at three levels in variance.

Item   Mean   Standard Deviation   N
A      3.63   1.12                 32
B      3.75   1.16                 32
C      3.57   1.04                 32

(A: the situation of students’ English learning with translation apps; B: post-editing strategies; C: students’ perspectives).

Share and Cite

Zhan, Y.-T.; Hsu, H.-T. A Preliminary Study on Exploring the Use of Translation Apps and Post-Editing Strategies of University Students. Eng. Proc. 2024, 74, 33. https://doi.org/10.3390/engproc2024074033