Are Social Media Algorithms Racist and Body-Shaming?

Are algorithms biased?

Twitter’s algorithm crops Black people out of large photos, while Instagram’s AI favours thin, white people. N13 explores whether social media algorithms are racially discriminatory.

With the recent uprising around ‘Black Lives Matter’ following the killings of George Floyd, Breonna Taylor and Ahmaud Arbery, racism has become one of the most talked-about subjects today. Black and minority ethnic people continue to face racial abuse, and there have been protests across the world to put an end to it.

But what if machines also discriminate? Are racial discrimination and body shaming baked into the algorithms of social media platforms? Let’s take a look.

Racist social media

Have you ever posted a large picture on Twitter that does not fit entirely into the thumbnail? How does the platform decide which part of the photo to show? Twitter’s algorithm, its creators say, is designed to choose the most appealing part of the photo for the thumbnail. Now, this is a highly subjective call to make.

On September 19, Iqlusion co-founder Tony Arcieri posted a tall photo containing headshots of former American president Barack Obama and Senate majority leader Mitch McConnell. Obama was positioned at the extreme bottom of the photo, while McConnell was at the extreme top.

Although Obama is the more prominent and popular figure around the globe, Twitter did not pick his face for its thumbnail. Instead, it chose the white man in the photo. Arcieri then posted another photo with the same aspect ratio, this time with Obama at the extreme top and McConnell at the extreme bottom. Surprisingly (or not), Twitter again chose McConnell’s face, showing that the choice had nothing to do with the position of the face.

Thousands of other Tweeples repeated the experiment with the faces of other famous people. Twitter chose the white face over the Black face every single time.

The same happened with photos of dogs and cartoon characters. It did not matter where the Black or white subject was placed – left, right, top or bottom – the algorithm always chose the white person, animal or character.

Twitter not alone

It is not just Twitter that discriminates by race when handling images. The issue was first raised by a Vancouver man who found that Zoom kept erasing his Black colleague’s face whenever a virtual background was applied.

Only when he posted about it on Twitter did he realise that even the micro-blogging platform was biased: it retained his face while cropping his Black colleague out of the thumbnail.

Facebook’s AI, too, has faced accusations of racial discrimination. In 2012, the company undertook the Herculean task of removing 83 million fake users from the platform. The algorithm was programmed to remove accounts with names that seemed fake. And guess what? Facebook ended up deactivating the accounts of several Native Americans and members of other marginalised groups, effectively shutting them out of their digital social circles.

Body-shaming algorithm

Instagram’s algorithm, meanwhile, seems to practise a different kind of discrimination. The Facebook-owned platform censors content using artificial intelligence (AI). If it finds that a photo violates its community standards, the platform removes the photo and sends a warning to the person who posted it.

A post can be censored either when the AI detects a violation on its own or when fellow Instagrammers report it. Of late, users have found the Instagram algorithm favouring thin, white people and censoring the rest.

The discrimination came to light when Australian comedian Celeste Barber posted a semi-nude parody photo of herself imitating former Victoria’s Secret model Candice Swanepoel. Although both photos revealed the exact same parts of the body, Instagram allowed Swanepoel’s photo to stay on the platform but removed Barber’s parody.

Both Twitter and Instagram have apologised to users, saying the racial and body-based discrimination was accidental. Instagram has also reinstated the removed photos after finding that the machine erred in its judgement.

How does the algorithm work?

Critics say that, although not intentional, the way these algorithms work is a reflection of society itself.

The AI learns what to do by continually analysing content on the platform. Twitter chief technology officer Parag Agrawal and chief design officer Dantley Davis said in a blog post that the image-cropping system relies on saliency, a prediction of where people are likely to look first.
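Twitter has not released its cropping model, but the mechanics of saliency cropping are easy to sketch. The toy example below uses OpenCV’s classical spectral-residual saliency detector as a stand-in for Twitter’s trained neural network (an assumption made purely for illustration): it finds the image’s most eye-catching point and crops a thumbnail around it.

```python
# A minimal sketch of saliency-based cropping. OpenCV's spectral-residual
# detector stands in for Twitter's trained neural network; this is an
# illustration of the general technique, not Twitter's actual code.
# Requires: pip install opencv-contrib-python numpy
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop a crop_w x crop_h window centred on the most salient point."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Locate the single most "eye-catching" pixel in the saliency map.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

    # Centre the crop window on that pixel, clamped to the image borders.
    h, w = image.shape[:2]
    left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
    top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
    return image[top:top + crop_h, left:left + crop_w]

# Example: thumbnail = saliency_crop(cv2.imread("tall_photo.jpg"), 600, 335)
```

Whatever model supplies the saliency map, the crop simply follows the highest score. Any demographic skew in what the model finds salient is therefore reproduced automatically in every thumbnail.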

One explanation is that the algorithm scanned far more images of white people than of people with other skin tones during its creation, resulting in racial bias. Algorithms learn and function based on the inputs they get, which means they can reflect the biases of the people who build them and the assumptions programmed into them. So if an algorithm is trained on data from one particular group – say, white men – it can perform poorly when dealing with another group, say, women of colour, as the sketch below illustrates.
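This kind of skew is measurable. The hypothetical sketch below (its data, predictions and group labels are invented for illustration, not drawn from any platform) shows the basic check a bias audit would run: score one model’s error rate separately for each demographic group and compare.

```python
# Hypothetical audit sketch: measure one model's error rate separately per
# demographic group. All data below is invented for illustration.
from collections import defaultdict

def error_rate_by_group(y_true, y_pred, groups):
    """Return {group: fraction of wrong predictions} for each group."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if pred != truth:
            wrong[group] += 1
    return {g: wrong[g] / total[g] for g in total}

# Toy labels: this model is perfect on group A and always wrong on group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(error_rate_by_group(y_true, y_pred, groups))
# {'A': 0.0, 'B': 1.0} - a gap this large is the red flag auditors look for.
```

A model trained mostly on one group typically shows a visibly higher error rate on the others; a per-group breakdown like this is what makes that gap impossible to miss.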

Impacts 

Given how intrinsic a part of our lives social media has become, biased and discriminatory algorithms can have a profound and dangerous impact on us.

In January, Detroit police wrongfully arrested a Black man named Robert Julian-Borchak Williams for a shoplifting incident that had taken place two years earlier, although Williams had nothing to do with it. In what is probably the first known case of its kind, a faulty match by the facial recognition technology used by the police led to the arrest.

In another instance, in October 2019, a study revealed that an algorithm used in US hospitals to allocate health care to patients had been discriminating against Black people.

Experts even point to the possibility of automated systems discriminating against women while screening résumés for jobs.

Needless to say, such algorithmic biases against women, marginalised groups and people of colour would only deepen the existing divides in society along the lines of race, gender, sexual orientation and other factors.

Read: Writing on the Wall: Decoding the FB Hate Speech Row

Possible solutions

Twitter is now in the process of giving users more control over how their images are cropped into thumbnails. In July, Instagram and Facebook announced teams that would look for bias in their algorithms and work to make both platforms safe and fair for all. Meanwhile, Snapchat announced an investigation into racism within the company.

Critics argue that, apart from the datasets fed into the system, one of the main reasons for algorithmic discrimination is the lack of diversity in the offices of these social media giants. Although multinational companies have begun efforts to bring more diversity into their workforces, Black and minority ethnic communities are still under-represented. Take the case of Facebook itself: under 4 per cent of roles at the company are held by Black employees, and around 6 per cent by Hispanic employees, according to Facebook’s diversity report.

A more diverse office culture is essential to rectifying algorithmic biases. A more diverse workforce brings a wider range of perspectives to the creation of algorithms, and with it inputs that are inclusive and representative of different groups and communities.

It is also important to educate algorithm designers about the social context of their work, as they are often unaware of the harmful impacts of automation.

There have also been demands for external audits of algorithms to ensure transparency and accountability in the remedial measures taken by social media platforms. 

Before you go...

Here’s an interesting study: How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications

If you would like some additional reading: Google’s algorithms discriminate against women and people of colour
