Human Judgment and Technology in Selecting Texts for English for Academic Purposes Assessment

Activity: Talk or presentation type: Oral presentation

Description

An important source of validity evidence in reading comprehension assessment is demonstrating the parallelism between the real-life reading demands of a particular context and the assessment tasks. This can be done by showing that the reading skills and strategies invoked by the test items are relevant, and that the texts reflect the characteristics of the texts to be read in the target context. Recently developed automated text analysis tools, such as Lexile and Coh-Metrix, are now widely used to match readers with appropriate texts.

This exploratory study evaluates the contribution of automated text analysis to establishing assessment standards and assessment validity. Following Weir’s (2005) socio-cognitive validity framework, we focus on selected textual features and compare automated text analysis results with human evaluators’ judgements of the same features, in order to understand how these different systems perform and how they can be used more effectively in text selection for EAP reading assessment. Based mainly on a qualitative comparison of a small sample of 10 texts, we provide an in-depth analysis of cases in which sometimes human judges and sometimes automated tools fail to identify textual features accurately. As these tools are increasingly used not only by large assessment bodies but also by local practitioners, it is important to discuss where they may fall short of human judgement and where they may reliably be used as fast and efficient tools.
Period: 17 Jun 2023
Event title: 19th European Association for Language Testing and Assessment Conference: Sustainable Language Assessment: Improving Educational Opportunities at the Individual, Local and Global Level
Event type: Conference
Conference number: 19
Location: Helsinki, Finland
Degree of recognition: International