Online radicalisation is a phenomenon that has received notable attention both from academics exploring the area and from counter-terrorism agencies seeking to reduce the likelihood of vulnerable internet users becoming radicalised. At the forefront of this issue are social media platforms, where research suggests the majority of online radicalisation first occurs; in response, these platforms have implemented safeguarding interventions to protect users from extremist content that may influence them to become radicalised or to engage with extreme ideologies. Despite these efforts, rates of online radicalisation via social media are steadily increasing, motivating this research to explore the phenomenon not only from a user perspective but also by investigating the extreme content that bypasses these safeguarding measures and potentially encourages radicalisation in social media users. The results of the present thesis show that, despite social media sites claiming not to tolerate extremist content, large quantities of extreme content are able to bypass current safeguarding efforts and reach large numbers of users. The results also highlight that extreme content utilises regular conversational terminology while focusing on the themes of preaching, segregation and hostility, with the prominent concepts discussed in the content being demeaning, misogynistic and conspiratorial narratives. Collectively, the results of the present thesis highlight the prevalence of extremist content online and provide a valuable exploration of the content that safeguarding efforts do not detect. Criticisms of current safeguarding interventions are discussed in detail, as well as the implications of these results for enhancing safeguarding and our understanding of extreme content online.