Facebook Increases the Use of AI for Content Moderation

In a recent press briefing, Facebook explained that technology has come to play an increasingly integral role in content moderation across the social media platform. The changes were made so that reported content can be checked against Facebook's community guidelines more quickly. The company currently employs 15,000 human reviewers around the world, and by integrating Artificial Intelligence (AI) into its systems, it has been able to remove content that violates Facebook's policies and terms more efficiently.

Between April and June of this year, around 95 percent of the content Facebook removed was identified and taken down by its AI systems without anyone needing to report it. This included 99.6 percent of fake accounts, 99.8 percent of spam, 99.5 percent of violent and graphic content, 98.5 percent of suspected terror content, and 99.3 percent of child nudity and sexual exploitation content, all identified and removed thanks to the integration of AI into the platform. Facebook has also shared that it now prioritizes content for review based on several factors, for example virality, severity, and likelihood of violation.


Categorizing content in this way, regardless of when or where it was shared and whether it was reported by a user or detected by the technology, allows Facebook to get to the highest-severity content first. It also means the 15,000 reviewers on the Global Operations team can spend more of their time on complex cases that call for careful, research-based judgment, while lower-severity reports, which the AI system can identify on its own, take up less of it.
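To make the prioritization concrete, it can be thought of as a ranking problem: each report gets a score built from virality, severity, and likelihood of violation, and reviewers work from the top of the queue. The sketch below is a hypothetical illustration only; the field names, weights, and the linear scoring formula are assumptions, not Facebook's actual implementation.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Report:
    # Negated score stored as priority so heapq pops the highest-scoring report first.
    priority: float = field(init=False)
    post_id: str = field(compare=False)
    virality: float = field(compare=False)        # e.g. predicted reach, normalized to [0, 1]
    severity: float = field(compare=False)        # policy-defined weight of the violation type
    violation_prob: float = field(compare=False)  # classifier confidence that policy is violated

    def __post_init__(self):
        # Hypothetical linear combination; the weights are illustrative only.
        score = 0.4 * self.virality + 0.4 * self.severity + 0.2 * self.violation_prob
        self.priority = -score

queue = []
heapq.heappush(queue, Report("post_1", virality=0.90, severity=0.30, violation_prob=0.70))
heapq.heappush(queue, Report("post_2", virality=0.20, severity=0.95, violation_prob=0.90))

# A reviewer pulls the highest-priority item: post_2 outranks post_1 on severity
# and likelihood of violation despite its lower virality.
print(heapq.heappop(queue).post_id)  # -> post_2
```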

Technology is now undoubtedly playing a much more integral role in how the platform moderates content, but in some cases Facebook still uses a combination of technology, reports from the community, and human review to determine which content violates the Community Standards. These steps help the company understand the context of a post under review. The technology built for this purpose has been dubbed Whole Post Integrity Embeddings (WPIE).
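Facebook did not detail WPIE's internals in the briefing, but the general idea of a "whole post" embedding can be sketched: embeddings of a post's individual components (text, image, and so on) are fused into a single vector, which a downstream classifier scores against each policy category. Every function below is a placeholder standing in for a trained model; the concatenation fusion and linear classifier are assumptions chosen for illustration, not Facebook's architecture.

```python
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Placeholder: a real system would use a trained language model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def embed_image(image_bytes: bytes) -> np.ndarray:
    """Placeholder: a real system would use a trained vision model."""
    rng = np.random.default_rng(len(image_bytes))
    return rng.standard_normal(128)

def whole_post_embedding(text: str, image_bytes: bytes) -> np.ndarray:
    """Fuse per-component embeddings into one vector for the whole post.
    Concatenation is the simplest possible fusion; real systems learn it."""
    return np.concatenate([embed_text(text), embed_image(image_bytes)])

def violation_scores(post_vec: np.ndarray, policy_weights: np.ndarray) -> np.ndarray:
    """Score the fused embedding against each policy category
    (illustrative linear classifier with a sigmoid over the logits)."""
    logits = policy_weights @ post_vec
    return 1.0 / (1.0 + np.exp(-logits))

# Illustrative usage: 3 policy categories, each with its own weight vector.
weights = np.random.default_rng(0).standard_normal((3, 256))
scores = violation_scores(whole_post_embedding("caption text", b"...img..."), weights)
print(scores)  # per-category probabilities that the post violates policy
```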
