Exploring the impact of language models, such as ChatGPT, on student learning and assessment

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Recent developments in language models, such as ChatGPT, have sparked debate. These tools can help dyslexic people, for example, to write formal emails from a prompt, and can be used by students to generate assessed work. Proponents argue that language models enhance the student experience and academic achievement. Those concerned argue that language models impede student learning and call for a cautious approach to their adoption. This paper aims to provide insights into the role of language models in reshaping student learning and assessment in higher education. For that purpose, it probes the impact of language models, specifically ChatGPT, on student learning and assessment. It also explores the implications of language models in higher education settings, focusing on their effects on pedagogy and evaluation. Using the Scopus database, a search protocol was employed to identify 25 articles based on relevant keywords and selection criteria. The themes developed from these articles suggest that language models may alter how students learn and are assessed. While language models can provide information for problem-solving and critical thinking, reliance on them without critical evaluation adversely impacts student learning. Language models can also generate teaching and assessment material and evaluate student responses, but their role should be limited to ‘play a specific and defined role’. Integration of language models in student learning and assessment is helpful only if students and educators play an active and effective role in checking the validity, reliability and accuracy of the generated material. Propositions and potential research questions are included to encourage future research.

Original language: English
Article number: e3433
Number of pages: 18
Journal: Review of Education
Volume: 11
Issue number: 3
Early online date: 30 Oct 2023
Publication status: Published - 1 Dec 2023