Luc Cousineau, a masculinities researcher at the University of Waterloo, says that when it comes to content like Tate’s, young men tell him, “I literally cannot get away from it. It doesn’t matter what I do, the algorithm keeps feeding me this content.”  

While TikTok has banned Tate’s personal account and stated that it does not promote misogyny, accounts using Andrew Tate’s image and content remain active on the platform, despite a rule against using another person’s likeness as a profile picture. 

The algorithm regularly promotes this content on TikTok’s For You page, which displays trending videos alongside content tailored to an account’s viewing history.  

Lingering on a post for just a few seconds is enough to prompt the algorithm to suggest similar posts, meaning that users can be pushed toward misogynistic content even if they swipe away from a video after realising it doesn’t align with their beliefs.  
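TikTok does not publish its recommendation code, so the mechanics can only be sketched in outline. The simplified example below is purely illustrative: the dwell-time threshold, topic labels and weights are invented for the sake of the sketch, and nothing in it reflects TikTok’s actual system. What it shows is the general logic researchers describe, in which pausing on a video is read as interest and quietly reshapes the next feed.

```python
# Hypothetical sketch of dwell-time-based recommendation.
# Nothing here reflects TikTok's real code; the threshold and weights are invented.

DWELL_THRESHOLD_SECONDS = 3.0   # a brief pause is treated as a signal of interest
BOOST = 1.5                     # how strongly one pause shifts future rankings


def update_interest_profile(profile: dict, topic: str, dwell_seconds: float) -> None:
    """Raise the weight of a topic whenever the viewer lingers on it,
    even if they never like, share or follow."""
    if dwell_seconds >= DWELL_THRESHOLD_SECONDS:
        profile[topic] = profile.get(topic, 1.0) * BOOST


def rank_feed(profile: dict, candidates: list[dict]) -> list[dict]:
    """Order candidate videos by the viewer's inferred topic interests."""
    return sorted(candidates, key=lambda v: profile.get(v["topic"], 1.0), reverse=True)


# A viewer pauses for a few seconds on one "male self-improvement" video...
profile: dict = {}
update_interest_profile(profile, "male self-improvement", dwell_seconds=4.2)

# ...and adjacent content now outranks everything else in the next feed.
feed = rank_feed(profile, [
    {"id": 1, "topic": "cute animals"},
    {"id": 2, "topic": "male self-improvement"},
    {"id": 3, "topic": "comedy"},
])
print([video["topic"] for video in feed])  # 'male self-improvement' is ranked first
```

In a loop like this, a single lingering view is enough to tilt the whole feed, which is the dynamic captured in the experiment described below.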

A team of researchers from The Observer, The Guardian’s sister paper, created a TikTok account to test how aggressive the platform’s algorithm really is.  

At first, The Guardian reports, the team’s account was served typical social media content: cute animals, funny videos and so on.  

Then they watched a self-help post aimed at men. The video was a legitimate resource about men’s mental health; nevertheless, the algorithm began suggesting Tate video edits.  

After closing the app and opening it again, they were shown four Tate videos out of the 20 displayed on their For You page.  

When they reopened the app a week later, Tate video edits made up the first eight suggested posts.

In part, this is due to Tate’s clever manipulation of social media platforms. Following his removal from Instagram and TikTok in 2022, Tate reportedly enlisted members of Hustlers University to maintain his social media presence. 

These followers were asked to create accounts reposting content from his videos, podcasts and more, sometimes with an expectation of a set number of posts per day or per week. This kept social media saturated with his content despite the absence of Tate’s personal accounts. 

In a 2022 statement to the New York Times, a TikTok spokesperson stated, “Misogyny is a hateful ideology that is not tolerated on TikTok,” adding that an investigation into Tate’s content was ongoing and accounts in violation were being removed.  

At the time of writing, many accounts using Tate’s likeness and content remain active on Instagram, while TikTok has moderated such content more effectively.

Will Fleisher, an AI ethicist at Georgetown University’s Center for Digital Ethics, says social media companies have a responsibility to ensure that their platforms and algorithms are designed in a way that will not cause harm to users.  

“I think all of us have a moral responsibility not to harm others. And so, when you’re designing these systems, you need to design with that in mind.” 

Regulating social media algorithms is easier said than done, however. According to Fleisher, the legal enforcement of this responsibility would not be a simple process.  

“We have good reasons not to want the government or other powerful entities to be restricting people’s speech in an intense way,” says Fleisher.  

“There’s a difficult tradeoff there between avoiding having an overly empowered government censoring people, which can just as easily be used to harm people, and the sort of responsibility we all have to make sure that people aren’t being harmed.” 

When it comes to software developers, “They’re not necessarily in that kind of government situation…they’re not enforcing the laws,” says Fleisher. 

“It’s certainly within their power to, and I think is the moral requirement that they think about and design to mitigate or avoid those kinds of problems,” he adds. 

TikTok’s 2022 statement also claimed that the company was pursuing measures to strengthen detection models against misogynistic content.  

Fleisher says that training an AI system to do this job would be difficult. “You’ve got the goal of making sure you find all the objectionable content, but if you make it too sensitive to subtle features it’s likely to catch other kinds of things and eliminate them.” 
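The tradeoff Fleisher describes is, at bottom, the familiar one between missing harmful posts and wrongly removing harmless ones. The toy example below, with invented posts, scores and thresholds, shows how tightening an automated filter to catch subtler misogyny also sweeps up legitimate content.

```python
# Toy illustration of the sensitivity tradeoff in automated content filtering.
# The posts, scores and thresholds are invented; no real moderation model is this simple.

posts = [
    ("overt misogynistic abuse aimed at a woman", 0.95),             # should be removed
    ("subtle, 'ironic' misogyny in a meme caption", 0.55),           # should be removed
    ("men's mental-health support post", 0.45),                      # should stay up
    ("feminist post quoting abuse in order to criticise it", 0.60),  # should stay up
]


def moderate(posts, threshold):
    """Flag any post whose 'toxicity' score meets the removal threshold."""
    return [(text, score >= threshold) for text, score in posts]


for threshold in (0.9, 0.5):
    print(f"threshold = {threshold}")
    for text, removed in moderate(posts, threshold):
        print(f"  {'REMOVED' if removed else 'kept   '} {text}")

# At 0.9 the subtle misogyny slips through; at 0.5 it is caught, but the
# feminist critique is removed along with it and the support post barely survives.
```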

This means that marginalised creators might find themselves among the accounts being censored. 

Says Fleisher, “It’s a known problem that content filters are filtering out important conversations among people in marginalised groups who are trying to resist oppression, because if it was uttered by other people, it would be problematic.”  

In other words, social media accounts that use flagged language while discussing the oppression faced by a marginalised group can be censored by an algorithm in the same way as accounts using that language as hate speech. 
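A crude keyword filter makes the problem concrete. In the hypothetical sketch below, the blocked term and both posts are invented; the point is that a filter which matches words, rather than speaker or intent, removes the hateful post and the post pushing back against it in exactly the same way.

```python
# Hypothetical keyword filter: it matches words, not speakers, context or intent.
# The blocked term and the example posts are invented for illustration.

BLOCKED_TERMS = {"slur"}  # stand-in for a real list of flagged words


def would_remove(post: str) -> bool:
    """Return True if the post contains any blocked term."""
    return any(term in post.lower() for term in BLOCKED_TERMS)


hateful_post = "a post hurling the slur at a woman"
reclaiming_post = "a woman explaining why that slur is used to silence people like her"

print(would_remove(hateful_post))     # True - removed, as intended
print(would_remove(reclaiming_post))  # True - also removed, silencing the person targeted
```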

Fleisher says it may be impossible for AI to distinguish between the two types of speech. 

The alternative is human moderation, he says, which faces two big impediments.  

“One is the cost of getting an adequate number of experts on a topic who can tell the difference between hate speech and perfectly appropriate camaraderie in marginalised communities,” Fleisher says.  

“It’s a non-trivial expense, but it’s been done. There’s been human content moderation on all these platforms before,” he argues. 

The other impediment? Political. The issue can be traced back to 2016, when Facebook’s human curation team came under fire. 

“That team was accused of anti-conservative bias, and it was a big, huge scandal. Zuckerberg ended up firing all the people on the human content moderation team, and they implemented an algorithm instead,” says Fleisher. 

The result was a proliferation of fake news on Facebook, content the human team had been effective at catching prior to its removal.  

“It seems to me that right now, given our current level of technology, the appropriate way to deal with these problems is human moderation,” Fleisher concludes. 

Like Fleisher, Cousineau worries that the monetary cost of responsibly moderating social media will dissuade companies and developers from doing so.  

“You cannot escape having to talk about not just the social spaces of young men spending time online and learning about masculinity from these people,” says Cousineau, “but the way that the companies that furnish this information to them make money and curate content.” 

Fleisher agrees. “Often people pay attention to things that anger them,” he says. 

In other words, there is precious little motivation for social media companies and software developers to design systems which will eradicate misogynistic content on their apps.  

Controversy drives user interaction, and the cult following amassed by creators with misogynistic beliefs draws in users, benefiting social media companies.  

Before being removed from the platforms, Andrew Tate had amassed an Instagram following of more than 4 million, and his TikTok videos received millions of views.  

“The fundamental problem of having these organisations be in private ownership is that they’re there to make a profit, and it benefits them to have this kind of content that brings in people’s attention,” says Fleisher. 

Some have questioned whether creators like Tate truly believe the extreme views they espouse, or whether they’re in it for the money.

Lawson says that to his mind, that question doesn’t matter.

“Regardless of whether the underlying belief system is legitimate or whether it’s just for clicks and money, I think that’s kind of immaterial to the harm that this content can cause.”

Instead of relying on government regulations or the conscience of social media organisations, some parties are exploring other possible measures. 

Jump to Chapter 7
