MEDIA ARTS: How Twitter and Facebook are influenced by biases and social hierarchies

by Amal Naveed

How do established biases and social hierarchies manifest in algorithmically mediated and/or moderated content online? Discuss this in relation to two or more platforms.

Since the 1990s, there has been a drastic increase in the amount of content published on the web, as well as in the use of social media for both personal and professional purposes. According to DataReportal’s 2022 Global Overview Report, 62.5% of the world’s population are active internet users, and 58.4% are social media users (Kemp, 2022). Naturally, this increase in usage and in the uploading of content, particularly on social networking sites (SNS), has meant an increase in content moderation by the sites themselves, often to safeguard users from harmful or dangerous content, or ‘fake news’. The sites’ own algorithms also deliberately curate content that aligns with users’ interests, to encourage more usage of the internet and, by extension, of the SNS the user is engaging with.

This essay will use the platforms of Twitter and Facebook to highlight how content moderation is influenced by established biases and social hierarchies, particularly race and ethnicity, political affiliation, and professional pursuits shaped by educational and class backgrounds. Content moderation refers to “the regulation of the material that users create and disseminate online” (Alizadeh et al., 2022). Other definitions describe content moderation as a way to “structure participation” of users online, often by the owners of the platforms on which this content circulates (Grimmelmann, 2015). The key feature here is that content moderation is performed by a combination of human moderators trained to identify potentially harmful material and computerised systems that do the same; it is therefore not just the employees of companies who make moderating decisions, but often an algorithmic system as well (Alizadeh et al., 2022).

In the case of Facebook, content moderation in the form of its ‘real name policy’ will be examined, alongside the accusations of racism that have surfaced due to its insistence on ID checks, and the issues of anonymity and safety that the policy has not considered. The case of Twitter will be examined through the permanent banning of President Donald Trump’s account after tweets about the 2021 United States Capitol attack, as compared with previous tweets that were seen by many to violate Twitter’s guidelines but were never the basis for any action against him.

In a 2021 note published on Facebook, Mark Zuckerberg homed in on the new privacy features being launched by the company, and how they would benefit users: “For a service to feel private, there must never be any doubt about who you are communicating with. We’ve worked hard to build privacy into all our products, including those for public sharing” (Zuckerberg, 2021). This emphasis on knowing who users are communicating with stems from Facebook’s ‘real name’ policy, which has long drawn criticism from ordinary users as well as activists.

Facebook’s reasoning for the ‘real name’ policy, renamed the ‘authentic name’ policy after backlash, is to prevent anonymous and pseudonymous users from exploiting others through fraud, impersonation, or abuse (Phillip, 2015). What this policy does not take into account, however, is the large proportion of Facebook’s users who do not hail from the Western world, and whose ‘real names’ therefore do not conform to what Facebook’s arguably Eurocentric database considers authentic. In 2015, Native Americans such as Shane Creepingbear (Phillip, 2015) and Dana Lone Hill reported their accounts being taken down despite submitting multiple forms of ID proving their names were ‘real’; in addition, drag queens such as Sister Roma campaigned against the policy in 2014, after their accounts were suspended when their profile names did not match those on government-issued ID documents (Holpuch, 2015). Facing accusations of racism and discrimination, Facebook has attempted to rework its policy, stating that it does not require users to use their legal names, but rather the names they use in “real life”, in order to keep their identities public (Phillip, 2015).

A wider debate here concerns what exactly prompts Facebook to suspend accounts, and how the company determines whether a user is using their ‘real’ name. Databases aside, one of Facebook’s content moderation options is the ‘report’ feature, through which users can report a profile for harmful content, impersonating someone else, and a variety of other actions that go against the company’s safeguarding policies. The unfortunate reality of this feature, however, is that it is often used by people with biases and prejudices, particularly against people from racial minorities and members of the LGBTQI+ community. Even more dangerously, governments have often reported the accounts of activists who operate under pseudonymous or anonymous accounts to avoid persecution and self-censorship: Vietnamese groups in 2014 estimated that around 44 journalists had their accounts taken down, cutting off a flow of accurate information that was being spread to raise awareness among an international audience (Brandom, 2014).

Facebook’s discouragement of anonymous and pseudonymous accounts also has wider implications for the safety of activists in countries that rank low on the Human Freedom Index. In Pakistan, for example, where blasphemy laws carry a death sentence for disrespectful comments made against Islam, Facebook has played a key role in a number of people being falsely accused of blasphemy and often murdered as a result of mob violence. In 2017, a university student, Mashal Khan, was lynched and shot dead after allegedly posting blasphemous content on Facebook (DAWN, 2017). Facebook’s reputation for content moderation subsequently led the government to demand access to the personal information of several accounts that were allegedly posting such content, with activists criticising the company “for colluding with authorities and censoring content infringing freedom of expression”. Facebook, at the time of these accusations, never publicly admitted to providing the personal details of any such accounts to the Pakistani authorities (BBC, 2017).

De-platforming as a moderation strategy was famously used by Twitter in 2021, when the account of the then-President of the United States was permanently suspended. Twitter’s reasoning for doing so, in a public statement released on the company’s website, was summarised as: “After assessing the language in these Tweets against our Glorification of Violence policy, we have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service” (Twitter, 2021). The tweets in question related to the January 2021 attack on the United States Capitol building by protestors attempting to contest the 2020 US presidential election results, an attack that Trump’s tweet about his non-attendance at the Inauguration was seen to support.

Douek argues that “there is no end-state of content moderation with stable rules or regulatory forms; it will always be a matter of contestation, iteration, and technological evolution” (2021). The contestation at play in Trump’s de-platforming was the ideological battle between the left and right wings of American politics. With Trump’s tweets seen as not only inciting and encouraging violence at the Capitol, but also supporting the notion that the presidential election was rigged, Twitter saw fit to permanently ban his social media presence, a move that Facebook mimicked soon after. What is interesting to note, however, is that this was not the first time Donald Trump had used his Twitter account to incite violence: from racist language describing Muslims to targeted harassment of celebrities, politicians, and other individuals, Trump’s Twitter activity made headlines throughout his presidency, with many users reporting it not only for harassment but also for spreading misinformation. Yet it was not Trump’s racism that prompted Twitter to take action, but rather the alleged threat to American democracy in the aftermath of the Capitol attack. As Twitter had highlighted in its 2019 updated policies for world leaders, while “clear and direct threats of violence against an individual” would not be tolerated, the statement was caveated with the following: “(context matters: as noted above, direct interactions with fellow public figures and/or commentary on political and foreign policy issues would likely not result in enforcement)” (Twitter, 2019). This effectively absolved world leaders of responsibility for remarks such as Trump’s racist demand that a group of Democratic congresswomen, all of them women of colour, “go back where they came from”; those remarks were not, and still are not, enough to get him de-platformed (NPR, 2019).

Twitter’s history of arguably acting too late when it comes to de-platforming its users is not a phenomenon confined to Donald Trump. A 2021 article, “Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter”, cites three other examples of de-platforming: Milo Yiannopoulos in 2016 for targeted harassment of an actress; Alex Jones in 2018 for targeted harassment of a journalist; and Owen Benjamin in 2018 for inappropriate comments about a minor (Jhaver, Boylston, Yang, & Bruckman, 2021). All three individuals were known for hate speech, holding Islamophobic and anti-Semitic views, and espousing neo-Nazi beliefs. While their previous content had been reported by users and flagged as offensive, it was three separate and specific incidents, much like Trump’s, that led to their de-platforming, often years after the initial complaints were made. The inability, and often unwillingness, of social media sites to immediately ban individuals, particularly prominent ones, who engage in hate speech contributes to other individuals with similar beliefs feeling entitled not only to share their opinions on these sites, but also to engage in the kind of targeted harassment that eventually gets them de-platformed. This particular brand of user, part of what Massanari calls “toxic technocultures” (2015), creates an environment of hostility, particularly for those belonging to minority communities who may now find themselves the target of hate speech in both the real and the online world.

De-platforming, however, becomes a more troubled concept when applied to Donald Trump than it was in the cases of Yiannopoulos, Jones, and Benjamin: his was not simply a case of removing the account of a user propagating hate speech and inciting violence. Trump’s identity as the President of the United States meant that de-platforming him while he still held office would effectively silence the leader of the so-called free world. A 2021 Vox article summed up the social media platforms’ attitude towards Trump: “Trump’s ban came after years of the social media giants allowing him to push their limits, creating and adjusting their rules about world leaders to avoid having to take action against him — and to avoid positioning themselves as the arbiters of acceptable political speech” (Morrison, 2021).

This essay has discussed established biases and social hierarchies manifesting in moderated content online, with particular reference to Facebook’s ‘real name’ policy and Twitter’s de-platforming of Donald Trump. While both platforms have moderated content through an insistence on their respective policies in order to safeguard the experience of their users, there is a clear lack of willingness to go the extra mile and create an environment in which hate speech will not be tolerated, whether it comes from an everyday user or an established personality such as a celebrity or politician. This unwillingness to moderate, and the willingness to turn a blind eye to racism, xenophobia, and hate speech, is a problem that social media sites continue to struggle with, in the case of public personalities and everyday users alike.

 

Bibliography

Alizadeh, M., Gilardi, F., Hoes, E., Kluser, K., Kubli, M., & Marchal, N. (2022, January 24). Content Moderation As a Political Issue: The Twitter Discourse Around Trump’s Ban. Retrieved May 2022, from Fabrizio Gilardi: https://www.fabriziogilardi.org/resources/papers/Content-Moderation-Political-Issue.pdf

BBC. (2017, March 17). Pakistan asks Facebook to help fight blasphemy. Retrieved May 2022, from BBC News: https://www.bbc.co.uk/news/world-asia-39300270

Brandom, R. (2014, September 2). Facebook’s Report Abuse button has become a tool of global oppression. Retrieved May 2022, from The Verge: https://www.theverge.com/2014/9/2/6083647/facebook-s-report-abuse-button-has-become-a-tool-of-global-oppression

DAWN. (2017, April 13). Mardan university student lynched by mob over alleged blasphemy: police. Retrieved May 2022, from DAWN: https://www.dawn.com/news/1326729/mardan-university-student-lynched-by-mob-over-alleged-blasphemy-police

Douek, E. (2021). Governing online speech: From ‘posts-as-trumps’ to proportionality and probability. Columbia Law Review, 121(3), 759-833.

Grimmelmann, J. (2015, April). The Virtues of Moderation. Yale Journal of Law & Technology, 17, 42-109.

Holpuch, A. (2015, February 16). Facebook still suspending Native Americans over 'real name' policy. Retrieved May 2022, from The Guardian: https://www.theguardian.com/technology/2015/feb/16/facebook-real-name-policy-suspends-native-americans

Jhaver, A., Boylston, C., Yang, D., & Bruckman, A. (2021, October). Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter. Proc. ACM Hum.-Comput. Interact., 5(CSCW2, Article 381).

Kemp, S. (2022, January 26). Digital 2022: Global Overview Report. Retrieved May 2022, from DataReportal: https://datareportal.com/reports/digital-2022-global-overview-report

Massanari, A. (2015). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.

Morrison, S. (2021, January 20). Facebook and Twitter made special world leader rules for Trump. What happens now? Retrieved May 2022, from Vox: https://www.vox.com/recode/22233450/trump-twitter-facebook-ban-world-leader-rules-exception

NPR. (2019, July 15). 'Go Back Where You Came From': The Long Rhetorical Roots Of Trump's Racist Tweets. Retrieved May 2022, from NPR: https://www.npr.org/2019/07/15/741827580/go-back-where-you-came-from-the-long-rhetorical-roots-of-trump-s-racist-tweets?t=1651533378573

Phillip, A. (2015, February 10). Online ‘authenticity’ and how Facebook’s ‘real name’ policy hurts Native Americans. Retrieved May 2022, from The Washington Post: https://www.washingtonpost.com/news/morning-mix/wp/2015/02/10/online-authenticity-and-how-facebooks-real-name-policy-hurts-native-americans/

Twitter. (2019, October 15). World Leaders on Twitter: principles & approach. Retrieved May 2022, from Twitter: https://blog.twitter.com/en_us/topics/company/2019/worldleaders2019

Twitter. (2021, January 8). Permanent suspension of @realDonaldTrump. Retrieved May 2022, from Twitter: https://blog.twitter.com/en_us/topics/company/2020/suspension

Zuckerberg, M. (2021, March 12). A Privacy-Focused Vision for Social Networking. Retrieved May 2022, from Facebook: https://www.facebook.com/notes/2420600258234172/

 
