Toxic Online Communities: A Corpus Linguistics-Based Approach to Identifying Risk Factors Associated with Online Extremism
With the increase in internet and social media usage, there has been an associated rise in the proliferation of online extremism (Awan, 2017). Online communities and social media platforms, many accessible even by smartphone, have provided extremists with a means of reaching international audiences and recruiting members (Gerstenfeld, Grant, & Chiang, 2003). The infrastructure of social media sites such as Twitter allows users to propagate extremism online (Kotenko, 2013). Whilst there is experimental evidence that risk factors such as anonymity and length of group membership can affect aggressive or disinhibited behaviour (Spears & Lea, 1994; Postmes et al., 1998; Lea et al., 2001), this evidence is largely limited to small-scale experiments or short observations (Zimmerman & Ybarra, 2014) and does not address the context of social media. One approach to analysing social media use and online behaviour is corpus linguistics, which involves the quantitative comparison of large bodies of text (corpora) with each other to determine differences in language frequency, use, common contexts and associated sentiment (Crawford & Csomay, 2016). These methods can be applied to online data using a range of software tools.
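To make the corpus comparison concrete, the sketch below illustrates one common corpus-linguistic technique, keyness analysis, which ranks words that are unusually frequent in a target corpus relative to a reference corpus using a log-likelihood statistic. This is an illustrative sketch only: the token lists are invented, and a real analysis would use full tokenised corpora and dedicated corpus software.

    import math
    from collections import Counter

    def log_likelihood(freq_a, total_a, freq_b, total_b):
        # Dunning-style log-likelihood keyness score for a word seen
        # freq_a times in corpus A (size total_a) and freq_b times in
        # corpus B (size total_b).
        expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
        expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
        score = 0.0
        if freq_a:
            score += freq_a * math.log(freq_a / expected_a)
        if freq_b:
            score += freq_b * math.log(freq_b / expected_b)
        return 2 * score

    def keywords(target_tokens, reference_tokens, top_n=10):
        # Rank words over-used in the target corpus relative to the reference.
        counts_a, counts_b = Counter(target_tokens), Counter(reference_tokens)
        total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
        scores = {
            word: log_likelihood(counts_a[word], total_a,
                                 counts_b.get(word, 0), total_b)
            for word in counts_a
            if counts_a[word] / total_a > counts_b.get(word, 0) / total_b
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    # Invented token lists standing in for a target and a reference corpus.
    target = "they are a threat they must go now".split()
    reference = "the weather is nice and the match was good".split()
    print(keywords(target, reference, top_n=5))

The same scoring would apply unchanged to much larger corpora, for example the pooled tweets of one user group compared against a general reference corpus.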
The proposed PhD aims to investigate factors relating to online extremism and membership of extremist online communities, largely within social media platforms such as Twitter. The research will combine approaches and theory from Psychology and Criminology concerning potential influencing factors (such as level of user anonymity/identifiability, length of membership of online groups, posting/tweeting frequency, and number and type of followers) with corpus-based methods of online data collection and analysis. The research will also evaluate how external factors, such as the media, Covid-19 and politics, can act as drivers for individuals to disseminate extremism online. A greater understanding of the risk factors associated with extremism is needed to enable the development of detection systems that account for human factors rather than just malware (Bryce, 2015). This research aims to inform the development of software that can systematically detect and counter online extremism, as well as providing recommendations to help prevent extremism online.
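As a sketch of how such influencing factors might be operationalised as quantitative predictors, the example below fits an ordinary least squares model relating per-user anonymity, membership length and posting frequency to a rate of extremist language. All feature values and outcomes are invented for illustration, and the actual operationalisation used in the related published study (Sutch & Carter, 2019) may differ.

    import numpy as np

    # Hypothetical per-user features, one row per Twitter account:
    # anonymity score (0 = identifiable, 1 = anonymous),
    # membership length (years), posting frequency (tweets/day).
    X = np.array([
        [0.9, 0.5, 12.0],
        [0.1, 4.0,  2.0],
        [0.7, 1.0,  8.0],
        [0.3, 3.5,  1.5],
        [0.8, 0.8, 10.0],
    ])

    # Hypothetical outcome: extremist terms per 1,000 tokens in each
    # user's tweets, e.g. counted against a corpus-derived keyword list.
    y = np.array([4.2, 0.3, 2.9, 0.5, 3.8])

    # Ordinary least squares with an intercept column.
    design = np.column_stack([np.ones(len(X)), X])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    intercept, b_anon, b_member, b_freq = coefs
    print(f"anonymity: {b_anon:.2f}, membership length: {b_member:.2f}, "
          f"posting frequency: {b_freq:.2f}")

In practice such a model would be fitted to far more users, with appropriate inferential tests; the point here is only the shape of the analysis, linking user-level risk factors to a corpus-derived language measure.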
Commission for Countering Extremism research project
Hollie was part of a research team commissioned by the UK Commission for Countering Extremism (set up by the British Prime Minister) as one of only 29 experts selected to produce an academic research report detailing the harms of social media and extremism. The project involved attending a dedicated conference organised by the Commission. The resulting research paper was published by the Commission for Countering Extremism and was also drawn on in its report on challenging hateful extremism, developed to inform policy.
ESRC Project (awaiting submission): Drivers of Islamophobic infodemic communications on social media during the Covid-19 pandemic
Hollie worked as a research assistant on this project, alongside Professor Imran Awan, Dr Pelham Carter and Harkeeret Lally. The research will result in a report for the ESRC, several publishable papers, an independent conference organised by the research team (including Hollie), and a large-scale dissemination of infographics designed for both academic and non-academic audiences.
Sutch, H., & Carter, P. (2019). Anonymity, Membership-Length and Postage Frequency as Predictors of Extremist Language and Behaviour among Twitter Users. International Journal of Cyber Criminology, 13(2), 439-459.
Awan, I., Sutch, H., & Carter, P. (2019). Extremism Online: Analysis of extremist material on social media. In: Commission for Countering Extremism: Presentation & Discussion of Academic Research, 1 May 2019, London, UK.