Facebook by numbers: Inappropriate content removal figures revealed

Cristina Cross
May 16, 2018

The inappropriate content includes vilification, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts.

The company has been using artificial intelligence to help pinpoint the bad content, but Rosen said the technology still struggles to distinguish between a Facebook post pushing hate and one simply recounting a personal experience.

You can read the full report here and Facebook has also provided a guide to the report as well as a Hard Questions post about how it measures the impact of its enforcement.

Facebook stated that artificial intelligence has played an essential role in helping the social media company flag content.

Facebook says it disabled almost 1.3 billion fake accounts in the six months through March.

Facebook also managed to increase the amount of content taken down with new AI-based tools which it used to find and moderate content without needing individual users to flag it as suspicious.

"For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams," the company said.


Facebook has come under fire for showing too much zeal on this front, such as removing images of artwork tolerated under its own rules.

The figures are contained in an updated transparency report published by the company which for the first time contains data around content that breaches Facebook's community standards. Facebook's technology did a better job of finding graphic violence and automatically identified 86% of the 3.5 million pieces of that kind of content that was removed during the period.

Only 38 percent of hate speech removals were flagged by automation, which fails to interpret nuances like counter speech, self-referential comments or sarcasm.

The company said in its first quarterly Community Standards Enforcement Report that the overwhelming majority of moderation action was against spam posts and fake accounts. For instance, the company estimated that for every 10,000 times that people looked at content on its social network, 22 to 27 of the views may have included posts that included impermissible graphic violence.

"We're not releasing that in this particular report", said Alex Schultz, the company's vice president of data analytics. The full guidelines are about 8,500 words long, and go into explicit detail around what is and isn't acceptable in terms of violent, sexual, or otherwise controversial content, along with hate speech and threatening language.

The company said government requests for account data rose globally by about 4% during the second half of 2017 compared to the first half of the year.

Had the company not shut down all those fake accounts, its audience of monthly users would have swelled beyond its current 2.2 billion and probably created more potentially offensive material for Facebook to weed out.
