University rankings produced by the media are used by many stakeholders in higher education: students looking for university places; academics looking for university jobs; university managers who need to maintain standing in the competitive arena of student recruitment; and governments who want assurance that public funds spent on universities are delivering a world-class higher education system. Media rankings deliberately draw attention to the performance of each university relative to all others, and as such they are undeniably simple to use and interpret. One danger, however, is that they are open to manipulation and gaming, because many of the measures underlying the rankings are under the control of the institutions themselves. This paper examines media rankings (constructed by amalgamating variables that represent performance across numerous dimensions) to reveal the problems of using a composite index to reflect overall performance. It ends with a proposal for an alternative methodology that leads to groupings rather than point estimates.
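To make the composite-index construction concrete, the following is a minimal illustrative sketch, not the methodology of any actual media ranking: hypothetical indicator scores for four universities are min-max normalised and combined as a weighted sum, and a modest shift in the (arbitrary) weights reorders the resulting league table, illustrating the sensitivity of point-estimate rankings. All names, indicators, weights, and data here are invented for illustration.

```python
def normalise(values):
    """Min-max normalise a list of indicator scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_scores(indicators, weights):
    """Weighted sum of normalised indicator columns, one score per institution."""
    columns = [normalise(col) for col in indicators]
    n = len(columns[0])
    return [sum(w * col[i] for w, col in zip(weights, columns))
            for i in range(n)]

# Hypothetical data: three indicators for four universities A-D.
teaching = [80, 70, 90, 60]
research = [60, 95, 70, 85]
staffing = [75, 65, 80, 90]
names = ["A", "B", "C", "D"]

# Two plausible-looking weighting schemes produce different orderings.
for weights in [(0.5, 0.3, 0.2), (0.3, 0.5, 0.2)]:
    scores = composite_scores([teaching, research, staffing], weights)
    ranking = sorted(names, key=lambda u: -scores[names.index(u)])
    print(weights, ranking)
```

Because the weights have no objective justification, each choice yields a different "overall" ordering from the same underlying data, which is one of the problems with composite indices that the paper examines.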