Facebook announced that it has removed over 3.2 billion fake accounts in the April-September period along with taking action on 11.4 million hate speech posts in the same period. The social media company says that it has removed 5.4 billion fake accounts and 15.5 million hate speech posts in total since January.
“Over the past two quarters, we have improved our ability to detect and block attempts to create fake, abusive accounts. We can estimate that every day, we prevent millions of attempts to create fake accounts using these detection systems,” the social networking giant said on Wednesday.
The company said that the majority of these accounts were caught within minutes of registration, before they became part of Facebook's monthly active user (MAU) population.
“Our proactive rate remained above 99 per cent for both quarters. Prevalence for fake accounts continues to be estimated at approximately 5 per cent of our worldwide monthly active users (MAU) on Facebook,” said the company.
Facebook, in its Community Standards Enforcement Report (November 2019 edition), revealed that it is sharing data on how it enforces its policies on Instagram for the first time. The company removed about 835,000 pieces of content related to self-harm and suicide in Q2 2019, of which 77.8 per cent were detected proactively, and about 845,000 pieces in Q3 2019, of which 79.1 per cent were detected proactively.
Facebook removed about 11.6 million pieces of content related to child nudity and the sexual exploitation of children. The company says that over the last four quarters, it has proactively detected over 99 per cent of the content found to violate the company's policy in this area.
Earlier this year, Facebook began allowing its hate speech detection algorithms to automatically remove content that violates its policies.
“One result of that decision has been a sharp spike in the amount of hate speech taken off Facebook,” said the company. Facebook said it is using machine learning-based detection technology that can find and flag hate speech using several different methods.
“Starting in Q2 2019, our systems began removing posts automatically when they received very high scores or matched existing hate speech in our database. In all other cases when our systems detect potential hate speech, they send the post to our review team to determine if it should be removed,” explained Facebook.