After Facebook’s Oversight Board upheld the company’s decision to suspend Donald Trump’s account, a report suggests the social network removed less malicious content in 2020 than in 2019, despite a surge in hate speech around the US elections.
The initial ban was imposed on former President Trump in late January, after posts deemed inflammatory by Twitter and Facebook were followed by the storming of the US Capitol. Trump had finished speaking outside the building shortly before rioters tried to prevent Congress from certifying President Joe Biden’s victory. Five people died as a result of the violence.
Although the Oversight Board, which operates as an independent entity, appears to support Facebook’s crackdown on hate speech, social media agency Reboot has raised questions. Figures released by the major social media sites indicate that Facebook is the only one to have reduced the amount of content it removes. Reboot said figures from Facebook’s transparency reports showed it removed 12.4 billion pieces of content in 2020, a 21% reduction from the 15.6 billion removed in 2019.
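As a quick sanity check on the reduction Reboot cites, the percentage change can be recomputed from the two transparency-report totals quoted above (a sketch; the variable names are ours, not Reboot’s):

```python
# Figures quoted from Facebook's transparency reports, per Reboot.
removed_2019 = 15.6e9  # pieces of content removed in 2019
removed_2020 = 12.4e9  # pieces of content removed in 2020

# Year-over-year reduction, as a fraction of the 2019 total.
reduction = (removed_2019 - removed_2020) / removed_2019
print(f"{reduction:.1%}")  # prints "20.5%", which the report rounds to 21%
```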
By comparison, according to the agency, the same numbers show removal actions increased by 427% over the same period on Instagram, which is owned by Facebook. Reports from the site’s social rivals showed content removal increased 135% on YouTube and 112% on TikTok, while Twitter’s figures remained relatively unchanged.
Facebook rejected Reboot’s methodology. It argued that comparisons between different sites are meaningless, as each has its own criteria for classifying material and taking removal action. It also pointed out that individual sites can change their classification of harmful material and add new categories over time. Merging quarterly figures into an annual total can create an inaccurate picture, the social network said.
Less to dismantle?
However, Reboot co-founder and chief executive Shai Aharony insisted the numbers came from Facebook’s own published figures, as well as those published by competing sites. He rejected Facebook’s suggestion that there is no point in comparing content removal rates between sites, or even between different years on the same site.
“We agree that content removal measures vary from platform to platform, but the factors and rates remain largely consistent,” he said.
Aharony praised Facebook and Instagram for constantly updating their policies and improving detection techniques to, in his own words, make social media “safer”. However, he stands by the conclusion that Facebook removed 21% less content in 2020 than in 2019, and believes that points to two possibilities.
“One can only assume from the data that either less data is captured by their scalable metrics, or less content worthy of deletion is released on the platform to begin with,” he said.
More hate, seen less often
One thing Aharony and Facebook do agree on is that 2020 did not bring a reduction in the amount of hate speech content the site has had to remove.
The site’s own figures showed it removed nearly five times more hateful content in Q4 2020 than in Q4 2019 (26.9 million posts, up from 5.5 million). Similarly, in the last quarter of 2020, covering the election period, removal of hate content linked to dangerous groups quadrupled to 6.4 million posts, compared with 1.6 million in the fourth quarter of 2019.
Concerns over Facebook’s alleged failure to remove hate speech from its platform led to an advertiser boycott last July, focused on the United States, which ultimately involved most of the country’s biggest consumer brands.
Facebook said that although removals are rising, fewer people are actually seeing harmful material. It attributed this to improvements in detection technology, which removes harmful posts before they are widely seen.
As a result, the rate at which users encountered hate speech fell in Q4 2020, from 10 in every 10,000 “content views” to seven. This is a more meaningful figure, the site claimed, as it reflects a decrease in how much harmful material is actually consumed, rather than an increase in its publication and rapid deletion.
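Prevalence figures like this are rates per 10,000 content views rather than raw removal counts. A minimal sketch of the calculation, using made-up raw counts chosen only to reproduce the rates cited above:

```python
def prevalence_per_10k(harmful_views: int, total_views: int) -> float:
    """Harmful content views per 10,000 total content views."""
    return harmful_views / total_views * 10_000

# Hypothetical raw counts, illustrating the rates the article cites.
print(prevalence_per_10k(100_000, 100_000_000))  # 10.0  (Q4 2019 rate)
print(prevalence_per_10k(70_000, 100_000_000))   # 7.0   (Q4 2020 rate)
```

Because the metric is normalised by total views, it can fall even while the absolute number of removals rises, which is exactly the pattern Facebook describes.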
Is AI the answer?
Detection is key for Facebook. It claims the increase in content removal on Instagram reflects improvements in its AI’s ability to detect malicious content. The technology was improved this year, for both sites, with a better understanding of Arabic, Spanish and Portuguese, which could explain why more material has been spotted.
Daily usage of Facebook and Instagram increased 15% in December 2020 from the previous year, so more people were posting and consuming content on each site, and both benefit from the improved detection technology.
This does raise a question, however: why would Facebook remove less content in 2020, despite an increase in hate speech and improved technology for spotting malicious content? The answer could be better detection of fake accounts. These are often set up to send spam or spread harmful messages anonymously, either by an individual or as part of an orchestrated botnet producing disinformation.
Digging deeper into Facebook’s numbers, Reboot found the social network deleted 700 million fewer fake accounts in 2020 than in 2019. Over the same period, 2.6 billion fewer pieces of spam were deleted. Facebook’s figures show these two categories accounted for 97% of all content and account removals over the two years.
Technological advances in detecting fake accounts, and closing them before they can send malicious content or spam, may well be the most likely explanation for why content deletion declined in 2020.
Facebook claimed it has become much more efficient at detecting such malicious activity. Importantly, it does not report how many accounts its technology automatically closes before they have a chance to spread hate or spam. Its figures only include accounts that evade initial detection and are closed once they post malicious content.
This could well explain why content deletions are declining: fewer spam messages are being sent, because the accounts that would have sent them were blocked before they could act, and those blocks do not appear in the official figures.
Improved enforcement and AI detection of fake accounts would appear to be the most logical explanation for why hate speech increased in 2020 while overall content removal appears to have decreased.
This story first appeared on campaignlive.co.uk.