Cyber racism has become a widely recognised scourge of the Internet, interacting with violent radicalisation and cyber bullying to make the world wide web a more dangerous space. Many governments have tried to limit its impact, though few democratic states have been successful. Australia’s particular combination of denial and desire to collaborate with the major Internet corporations is analysed here to reveal the critical role that civil society, backed by law, could play if enabled to push back against the threats that such racism poses to social cohesion.
Almost as soon as the public Internet was launched in the mid-1990s, organised racist groups recognised in it a powerful tool for building and sustaining communities of like-minded people. More than two decades on, the Internet has become saturated with racist hate sites and narratives, so much so that Tim Berners-Lee, creator of the world wide web, repeatedly condemns how the web has become a tool for the manipulation of hate and fear.1, 2
Governments have become increasingly frustrated by their apparent impotence in implementing anti-racism laws in the context of the web and its main social media platforms.3 Many factors contribute to this situation. This article outlines what those factors are, what strategies different countries have adopted to respond to the threats such hate speech poses, and what conclusions governments, social activist organisations, and citizens at large might draw from the overview.
Racism is a particularly pernicious set of attitudes, values and ideologies that mobilises hate and fear, justifies them on the basis of the alleged superiority of one group, and seeks the exclusion, suppression or exploitation of the targeted social groups.4 Since the Second World War, international organisations, both intergovernmental and non-government, have worked to define the threats posed by racism and to find strategies to undermine its legitimacy and contain its spread.5 While there is broad international consensus that racism remains a corrosive and fragmenting perspective wherever it is allowed to flourish in civil society,6 the advent of the Internet has globalised, denationalised, and at times joined together advocates of a racialised world in which their self-avowed racial group should sit at the apex.
Whether the global networks advocate White Power, assert the superiority of a religious community while violently condemning others, or racialise political disputes, they have increasingly adopted similar strategies and discovered that their opportunities grow when they draw on the Internet and the many interventions it affords.7 Different racisms can be thought of as social movements: their centres are drawn from committed activists, and their practices aim to recruit new followers and sustain the engagement of existing supporters, while attacking their opponents and sometimes their competitors. Thus online racism both shares many characteristics with, and can be distinguished from, older forms of racism in everyday life.
Why is the Internet such a great place to do racism?
The Internet, and the World Wide Web, that set of social relations and their expressive codes that use the global infrastructure, need to be understood as technological, social and economic networks. Human beings interact with each of these networks, and are affected by them and the synergies they generate.8 Billions of people sit in front of individual screens – from handheld to wall-hanging and larger – where they are able to communicate, but do so in an environment with little or no real-time feedback of the kind that would provide socially sanctioned modification of their behaviour. This context of dis-inhibition particularly suits people whose personalities tend towards sadism, narcissism, and psychopathy. Such people are found in significant numbers in groups dedicated to racial intolerance, even though they form only a small minority of society at large.9, 10
Australia’s own legal environment adds to the opportunities for racism. Australia followed the U.S.A. in placing a reservation on Article 4 of the 1966 UN International Convention on the Elimination of Racial Discrimination, the section that criminalises race hate speech.11 However, Australia, unlike the U.S.A., has no national Bill of Rights. The only national laws relating to racial vilification are civil (the criminal law deals with actual violence), and they seek to bring about conciliation between the aggrieved parties and the perpetrators. The legal framework has limited reach, and in 2013 and again in 2017 the national government, arguing that the current protections were a breach of free speech, tried unsuccessfully to remove some of the provisions of the Racial Discrimination Act that give offended parties pathways to seek protection from vilification.12
Most jurisdictions in European societies have criminal laws that seek to prevent racist vilification while also giving citizens rights to heightened protection from such attacks.13 Even so, over recent years nativist and racist political groups have tested the limits of such protections, utilising social media such as Facebook to build communities of hate and to attack ethno-religious minorities, especially refugees and asylum seekers. In such contexts major platforms such as Facebook have been challenged by government and civil society to accept far greater responsibility for removing hate material, and are required to do so with alacrity. In Germany, for instance, Facebook executives can be held criminally liable for what is published online.
Australia, however, has no such laws, while the major commercial platforms have made representations to Parliament that they should be excused from any such responsibility for the content they publish.14 Moreover, no civil society organisations operate at the national level that could serve as interlocutors of government or the Internet industry. The government’s own Human Rights Commission, whose capacity remains constrained by limited powers, is itself a regular target of criticism from conservative political groups for its advocacy of greater protections against racism.
Where does racism happen online?
A recent Australian study of online contact with racist material15 suggests that over one third of regular Internet users encounter material they recognise as racist, while many more encounter racist material without recognising it, or while in denial. This scale of encounter is borne out by studies in the U.S.A.16 and Germany,17 suggesting that a significant level of racist encounter pervades social media. The most significant location for these racist encounters remains Facebook, followed by the comment threads of newspaper and other media digital publications. However racism appears in many different situations, not surprisingly given its persistence in societies throughout the world.
The most intense concentration of racist hate speech occurs on websites, message boards and other Internet locations created by people advocating a racist world view. Here the intention is to justify and promote the ideology of the proponents, denigrate their opponents, “hurt” their targets, and recruit additional followers. As White Power advocates in the U.S.A. have claimed,18 the aim of their online campaigns is to “normalise” White Power racism as the taken-for-granted world-view of American citizens. Ethnic minorities are to be intimidated into acquiescence, silence or retreat.
However racism also occurs, just as systematically though less intentionally, in spaces not controlled by racist groups. Studies have shown patterns of racialised power and exclusion on both gay19 and straight20 Internet dating sites, in online gaming and sports,21 and on more general news sites where there may be no conscious determination to facilitate racist discourses or exclude racially-denoted groups.
Moreover there is little consistency among users as to what constitutes racism, so the identification of what they encounter as racist tends to fall into two patterns. Australian research15 demonstrates that people who hold less racist attitudes tend to identify a wider range and intensity of encounters as potentially racist – including not only overtly racist material or attacks, but also less intentional comments and actions. On the other hand, people with strongly racist values are more likely to deny that what they encounter can be described as racist, often limiting the label of racism to threats of violence or overt negative action.
Power to Act on Online Racism
Racism online, a group of hate speech acts ranging from prejudice and denigration through to the advocacy of violence and murder, reflects not only the social conditions within and between different societies, but also the legal contexts that operate in nation states.22 In the U.S.A. racism has been shaped both by the political history of slavery and by the right to freedom of speech embodied in the Constitution, meaning that overt racist hostility is permissible online on any public system. In Australia the Constitution does not specify a Bill of Rights, and racism and racial vilification are seen as essentially civil wrongs until they reach the level of intimidation or threats of violence. The European Convention on Human Rights covering racial discrimination applies to all European states, while the optional protocol on xenophobia and racism applies to signatories to the European convention on cyber-crime (except Australia, which has rejected the additional protocol).23
If we think of the Internet as having three major stakeholders – industry (or capital), states and inter-state agencies, and civil society – then cyber racism affects all three and can be reduced by concerted action amongst them. While the Internet is created by industry (joining technology and commerce), it is facilitated by states, and given real form as it spreads through civil society. The transnational capacity of corporations allows them to avoid the priorities of nation states, and to find haven in states that protect their freedom to operate unrestrained. For instance, one of the most prolific Australian race hate sites has a critical level of its operation embedded in Panama, while another global hate site that harasses Indigenous people in Australia has refuge in Ukraine.
Nation states can introduce legislation that would curtail the freedom of action by hate sites.24 Australia and the U.S.A. have chosen to excuse themselves from this power, which they would have been obligated to use if they had not filed reservations on Article 4 of the UN Convention on the Elimination of Racial Discrimination. The UK Human Rights Act (which incorporates much of the European Convention) empowers criminal prosecutions of identifiable purveyors of serious race hate. Germany has been pursuing Facebook25 as a corporation for its laxity in controlling hate pages directed towards refugees, Muslims and other migrants, threatening criminal charges against senior executives.
Recent submissions to the Australian Senate14, 26 by parties active in cyber-bullying prevention provide an insight into how such perspectives are formed and promoted. Instagram, a Facebook subsidiary, wrote that: “online safety is best achieved when government, industry and the community work together. Given the strong commitment of industry to promote the safety of people when they use our services, we believe that no changes to existing criminal law are required. If anything, we would encourage the Committee to consider carve outs from liability for responsible intermediaries”. Instagram does not refer specifically to racism, though it does mention hate speech in general. Its desire for a “carve out” (i.e. protection of platforms from any responsibility for harm occurring to their users) underpins the submission. However Instagram does not acknowledge the unequal power in such situations, where as a global corporation it has greater leverage than national governments or fragmented communities.
Similarly, Facebook’s submission describes its actions in creating a safer space for users, while arguing that there is no need for any further criminal sanction. Facebook makes no reference to the use of its services to promote racist hate speech, even though there is now widespread research showing that Facebook is the most significant location where users across the world encounter such material. Facebook’s strategy remains one of allowing material to stand until enough user reports draw its attention to the need for action – its preferred approach remains that individuals create their own ring of privacy, rather than limiting the freedom of others to promote (non-violent) hate on its sites.
What has Australia done?
Online race hate in Australia has been dealt with under many different approaches, none particularly satisfactory. The Racial Discrimination Act of 1975, through which Australia sought to implement its ratification of the ICERD, was unable to deal with online racism until the amendments on racial vilification were passed in 1996. Two main cases reveal the issues that have proved most difficult under the amended Act.27, 28
In the case of the Adelaide Institute, which began in 1996 soon after the racial vilification law was introduced, its Holocaust-denying founder was able to hold off for thirteen years the attempts of the Human Rights Commission and the Executive Council of Australian Jewry (ECAJ) to close down its website. This case demonstrated the arduous and costly processes that community groups must follow when faced with an overtly and aggressively racist perpetrator. In the event the case proved irresolvable: when the ECAJ finally proved its claims, the defendant simply passed the whole operation over to another party to run.
The case of Andrew Bolt was rather different. Bolt, a strong advocate of neo-conservative world views29 and a writer for a popular newspaper and its online edition, was found by the Federal Court to have made unsubstantiated vilifying claims about a group of Indigenous people. He was found to have breached the Act, though he garnered support for his position among many of his readers and wider socially conservative groups.30, 31 Bolt did not appeal the court decision, though he did embark on a campaign to have the law revoked. The conservative national government elected in 2013 then tried to have key elements of the law removed; each time it was defeated by public campaigns in defence of the law, and a refusal by the Senate to approve the changes.
The Commonwealth Criminal Code does contain a provision for prosecutions where a public carriage service (like the Internet or the postal service) has been used to harass or intimidate someone, though the grounds of the harassment are not directly relevant to the offence. In one crucial case a vilifying racist Internet post against an Indigenous politician was found to have breached this provision and the perpetrator was convicted of the offence.
Civil society groups are not well resourced in Australia, especially in the field of racism. Their main role is to bring before the public and government the continuing dangers of online racism, while arguing for more effective action. The Online Hate Prevention Institute has documented the spread of racism and other hate speech through Facebook, adding to the pressures for the platform in Australia to respond to social concerns about the normalisation of hate speech.
Conclusion: what can we learn about online racism and its diminution from Australia?
Australia falls between many stools in relation to combating online racism. With its reservation on Article 4 of ICERD it has parallels with the U.S.A. However it has neither a Bill of Rights like the U.S.A. nor human rights laws like Canada or Europe. The criminal law is poorly framed to have any impact, and its absence emboldens racist groups to test the limits of the civil law. The civil law, which is tortuous and expensive to pursue, means that only the most resolute civil society organisations can engage with the proponents of racism. The eSafety Commissioner32 has more recently agreed to address racisms that affect the well-being of young people and the wider community, collaborating with YouTube, Facebook and Twitter in doing so.33
The Cyber Racism and Community Resilience (CRaCR) group has argued that Australia needs at the very least to adopt legislation similar to New Zealand’s Harmful Digital Communications Act,34 which requires publishers of racist material to respond to bona fide complaints from the community and remove the material.15 It also empowers the state to prosecute those who deliberately create and post seriously harmful material. While freedom of speech remains a recognised defence, the notion of harm in the digital world advances the linking of the real and digital worlds, especially in societies where racial tensions exist and where the inflaming of such tensions can undermine safety more widely.
Australia provides a valuable case study for testing the interventions and goals of other countries, as it stands alone in the peculiar combination of denial and avoidance that characterises its political response to cyber-racism. Civil society, however, has recognised some of these issues, though the state and industry together still resist change.
Featured Image: The Joint Parliamentary Committee on Human Rights has handed down its report on changes to the Racial Discrimination Act Photo by Rachael Hocking/AAP
About the Author
Andrew Jakubowicz is Emeritus Professor of sociology at the University of Technology Sydney. His most recent book with colleagues is Cyber Racism and Community Resilience, Palgrave, 2017. His web sites include the Menorah of Fang Bang Lu, and Making Multicultural Australia. He has been an advisor to governments on cultural diversity, and published widely in academic forms, popular online blogs and through well-received television documentaries.
1. World Wide Web Foundation. “Delivering Digital Equality: The Web Foundation’s 2017 – 2022 Strategy”. in Web Foundation 2017, World Wide Web Foundation.
2. Berners-Lee, T. Tim Berners-Lee: I invented the web. Here are three things we need to change to save it. The Guardian, 2017.
3. Capitanchik, D. and M. Whine. “The governance of cyberspace: racism on the Internet”. 1996, London: Institute for Jewish Policy Research.
4. Cerase, A., E. D’Angelo, and C. Santoro. Monitoring Racist and Xenophobic Extremism to Counter Hate Speech Online: Ethical Dilemmas and Methods of a Preventive Approach. VOX Pol, 2015.
5. Council of Europe. Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. ETS No.189, 2003.
6. European Commission against Racism and Intolerance (ECRI). “Combating the Dissemination of Racist, Xenophobic and Antisemitic Material via the Internet”. ECRI General Policy Recommendation No.6, 2000.
7. Douglas, K., et al. “Understanding Cyberhate: Social competition and Social Creativity in Online White Supremacist Groups”. Social Science Computer Review. 2005. 23(1): p. 68-76.
8. Fuchs, C. “The MacBride Report in Twenty-First Century Capitalism, the Age of Social Media, and the BRICS”. Javnost – The Public: Journal of the European Institute for Communication and Culture, 2015. 22.
9. Brown, A. “What is so special about online (as compared to offline) hate speech?” Ethnicities. 2017.
10. Stein, J. “How Trolls Are Ruining the Internet”. Time. 2016.
11. Hunyor, J. “Cyber-racism: Can the RDA Prevent It?” Law Society Journal 2008. May, 34-35.
12. Jakubowicz, A., K. Dunn, and R. Sharples “Australians believe 18C protections should stay”. The Conversation. 2017.
13. Bleich, E., “The Freedom to be Racist? How the United States and Europe Struggle to Preserve Freedom and Combat Racism”. 2011. Oxford: Oxford University Press.
14. Peatling, S. “Facebook and Instagram oppose tougher penalties for cyber bullies”. Sydney Morning Herald, 2017.
15. Jakubowicz, A., et al. “Cyber Racism and Community Resilience: Strategies for Combating Online Race Hate”. 2017, Palgrave Macmillan: London.
16. Lenhart, A., et al. “Online Harassment, Digital Abuse, and Cyberstalking in America”. 2016, Data and Society Research Institute, and Center for Innovative Public Health Research: New York.
17. eco. “Survey: Every Third Person Has Encountered Racist Hate Speech Online”. eco: Association of the Internet Industry, 2016.
18. Anglin, A. A Normie’s Guide to the Alt-Right. The Daily Stormer, 2016.
19. Holt, D., M. Callander, and C.E. Newman. “‘Not everyone’s gonna like me’: Accounting for race and racism in sex and dating web services for gay and bisexual men”. Ethnicities, 2016. 16(1).
20. March, E., et al. “Trolling on Tinder® (and other dating apps): Examining the role of the Dark Tetrad and impulsivity”. Personality and Individual Differences, 2017. 110: p. 139–143.
21. Daniels, J. “Race, Civil Rights, and Hate Speech in the Digital Era”, in Learning Race and Ethnicity: Youth and Digital Media. A. Everett, Editor. 2008, MIT Press: Cambridge, MA.
22. Phillips, W. “This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture”. 2015, Cambridge, Mass.: MIT Press.
23. Jakubowicz, A. “Alt_Right White Lite: trolling, hate speech and cyber racism on social media”. Cosmopolitan Civil Societies: an Interdisciplinary Journal, 2017. 9(3).
24. McGonagle, T. “The troubled relationship between free speech and racist hate speech: the ambiguous roles of the media and internet”. Day of Thematic Discussion “Racist Hate Speech”, 2012.
25. Maier, L. “Germany Investigates Mark Zuckerberg and Facebook Over Slow Removal of Hate Speech”. Forward, 2016.
26. Senate Legal and Constitutional Affairs References Committee. “Submissions, The adequacy of existing offences in the Commonwealth Criminal Code and of state and territory criminal laws to capture cyberbullying”. 2017.
27. Jakubowicz, A. “Cyber Racism”, in More or Less: Democracy and New Media. 2012. Future Leaders: Melbourne.
28. Jakubowicz, A. “Cyber Racism, Cyber Bullying, and Cyber Safety”. Conversation at the AHRC Cyber-Racism Summit 2010, 2010.
29. Bolt, A. “Why this anti-White racism?” The Bolt Report, 2017.
30. Gelber, K. and L. McNamara, “Freedom of speech and racial vilification in Australia: ‘The Bolt case’ in public discourse”. Australian Journal of Political Science. 2013. 48(4): p. 470-484.
31. Aggarwal, B. “The Bolt Case: Silencing Speech or Promoting Tolerance?”, in More or Less: Democracy and New Media. 2012. Future Leaders: Melbourne. p. 238–257.
32. Parliament of Australia, “Enhancing Online Safety for Children Bill 2014, Explanatory Memorandum”. 2014, House of Representatives: Canberra.
33. Office of the eSafety Commissioner. “Young Australians given eSafety tools to counter online hate”. Media Release. 2017.
34. Mason, G. and N. Czapski. “Regulating Cyber Racism”. Melbourne University Law Review (advance). 2017. 41.