Providers of tests of languages for academic purposes generally claim to provide evidence on the extent to which students are likely to be able to cope with the future demands of reading in specified real-life contexts. Such claims need to be supported by evidence that the texts employed in the test reflect salient features of the texts the test takers will encounter in the target situation, and by evidence that the cognitive processing demands of the accompanying test tasks are comparable with those of target reading activities. This paper focuses on the issue of text comparability. For reasons of practicality, evidence relating to text characteristics is generally based on the expert judgement of individual test writers, arrived at through a holistic interpretation of test specifications. However, advances in automated textual analysis and a better understanding of the value of pooled qualitative judgement have now made more quantitative approaches feasible, approaches that focus analytically on a wide range of individual text characteristics. This paper employs these techniques to explore the comparability between texts used in a test of academic reading comprehension and key texts used by first-year undergraduates at a British university. It offers a principled means for test providers and test users to evaluate this aspect of test validity.
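To illustrate what an analytic, quantitative comparison of text characteristics might look like in practice, the sketch below computes a few simple surface features for two short passages. This is a minimal illustration, not the analysis used in the study: the feature set (mean sentence length, mean word length, type-token ratio), the sample texts, and the function name `text_features` are all assumptions introduced here for demonstration; automated tools used in such research typically cover a far wider range of lexical, syntactic, and discourse-level indices.

```python
import re


def text_features(text):
    """Compute simple surface features of the kind used in automated
    textual analysis: mean sentence length (words per sentence),
    mean word length (characters per word), and type-token ratio
    (distinct words / total words, a rough lexical-diversity index)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "mean_sentence_length": len(words) / len(sentences),
        "mean_word_length": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }


# Hypothetical stand-ins for a test passage and a target-situation text.
test_text = ("The lecture introduces photosynthesis. "
             "Plants convert light energy into chemical energy.")
target_text = ("Photosynthetic organisms transform solar radiation into "
               "chemical potential. This process sustains most ecosystems.")

# Comparing the two feature profiles gives a crude, analytic view of
# how closely the test text resembles the target text.
for name, sample in [("test", test_text), ("target", target_text)]:
    feats = text_features(sample)
    print(name, {k: round(v, 2) for k, v in feats.items()})
```

In a real comparability study, profiles like these would be computed over whole corpora of test and target texts and compared statistically, alongside pooled expert judgements of features that automated measures cannot capture.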