So we’re embracing LLMs? Now What? A study on enhancing feedback and assessment in higher education through generative AI

A. Williamson, John Murray

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The rapidly evolving landscape of Artificial Intelligence (AI) in educational settings has opened new avenues for enhancing teaching and learning practices. Within this wave of technological advancement, the field of Generative AI, especially through the development of Large Language Models (LLMs) such as ChatGPT, Google's Bard and Gemini, and Meta's Llama 2, has demonstrated remarkable proficiency in parsing and interpreting complex human language. Institutional attitudes are shifting towards embracing this technology, whose capabilities hold the promise of revolutionising the way feedback is delivered in academic environments.

In the landscape of Higher Education (HE), the shift towards authentic assessment has marked a pivotal change, prioritising real-world relevance in learner evaluation. The essence of this approach lies not just in assessing students' abilities but in preparing them for practical applications of their learning, fostering critical thinking and problem-solving skills. Within this context, feedback forms a cornerstone of the learning process, transcending its traditional function of indicating right or wrong answers. By providing learners with meaningful and actionable feedback, educators can highlight areas for improvement that learners can apply to future challenges. The value of feedback lies not just in its capacity to assess but in its ability to empower learners, facilitating deeper engagement.

Assessment plays a crucial role in HE, guiding both student learning and instructional methods. However, the traditional marking process is often weighed down by challenges of scalability and timeliness, particularly in settings with large student cohorts. As a result, the quality and depth of feedback, vital to learner improvement, can suffer.

In response to these challenges, this work explores how LLMs can be leveraged to offer meaningful and contextually relevant feedback built upon initial instructor-provided feedback and scoring, thereby enriching the learner experience whilst grounding the assessment itself in human academic judgement; the aim is to augment existing workflows rather than replace them. Some current approaches to Generative AI seek to remove human academic judgement entirely, having work marked and assessed solely by AI. In this study, by contrast, we examine the academic viability of integrating Generative AI tools into marking workflows, augmenting instructor-produced critique rather than assessing work directly, and we gather student perceptions of using such technologies to provide summative feedback. Rather than relying on widely known proprietary online tools, our work focuses on the development and application of LLMs in an offline setting to meet educational needs and objectives, reducing potential issues of data governance. Through this exploration of LLM applications for improving summative assessment feedback, the study contributes to the broader discourse on integrating AI within education.
Original language: English
Title of host publication: EDULEARN24 Proceedings
Subtitle of host publication: 16th International Conference on Education and New Learning Technologies
Editors: Luis Gómez Chova, Chelo González Martínez, Joanna Lees
Publisher: IATED Academy
Pages: 2486-2494
Number of pages: 9
ISBN (Print): 9788409629381
DOIs
Publication status: Published - 1 Jul 2024
Externally published: Yes
Event: 16th International Conference on Education and New Learning Technologies - Palma, Spain
Duration: 1 Jul 2024 - 3 Jul 2024
https://library.iated.org/publications/EDULEARN24

Publication series

Name: EDULEARN Proceedings
Publisher: IATED Academy
ISSN (Electronic): 2340-1117

Conference

Conference: 16th International Conference on Education and New Learning Technologies
Abbreviated title: EDULEARN24
Country/Territory: Spain
City: Palma
Period: 1/07/24 - 3/07/24
