We share society’s concern with misinformation, which is why we have taken aggressive steps to combat it — from building an unparalleled global network of over 80 fact-checking partners and promoting accurate information, to removing content when it breaks our rules. Misinformation is complex and constantly evolving, and there is no ‘silver bullet’, which is why we continue to consult with outside experts, grow our fact-checking program, and improve our internal technical capabilities.
Misinformation is false information that is often shared unintentionally. The content is shared on an individual basis and is not part of any coordinated attempt to mislead or deceive people.
Disinformation refers to sharing content with the deliberate intent to mislead as part of a manipulation campaign or information operation. This activity is coordinated and can involve the use of fake accounts. We don’t tolerate this activity and take these actors and their content down as soon as we become aware of them. We also disclose our work in this space via a monthly report in our Newsroom.
Our Approach to Misinformation
We have a three-part strategy for addressing misinformation on Facebook – Remove, Reduce and Inform. Part of this strategy is our third-party fact-checking program.
Third-Party Fact-Checking
We do not believe any single entity – either a private company or government – should have the power to decide what is true and what is false. When one single actor is the arbiter of truth, there is a power imbalance and potential for overreach. With this in mind, we rely on independent fact-checkers to identify and review potential misinformation, which enables us to take action.
We partner with over 80 independent third-party fact-checkers globally, working in over 60 languages. In Asia Pacific, this includes fact-checkers in Indonesia, Singapore, Malaysia, Hong Kong, Taiwan, Myanmar, Thailand, Sri Lanka, Bangladesh, Pakistan, Korea, India and the Philippines.
In the past year, we have extended support to the fact-checking community, including $2 million in grants from Facebook and WhatsApp for third-party fact-checkers in highly affected regions, to help them increase capacity as they do this essential work. We also launched a year-long fellowship with 10 fact-checking organizations, including several from this region, to bring on new team members and help build capacity within the region.
Our partners have been certified through the independent, non-partisan International Fact-Checking Network.
More information about our third-party fact-checking program can be found here.
We have policies in place to address some of the most harmful types of false information and we REMOVE this content as soon as we become aware of it.
- We will remove content that could lead to real world violence or imminent harm.
- We have a policy which prohibits manipulated media or deepfakes – media that has been edited in ways that are not apparent to an average person and is designed to deliberately mislead.
- Late last year, we introduced a policy prohibiting militarized social movements, such as Kenosha Guard in the US, and violence-inducing conspiracy networks, such as QAnon.
- We also prohibit content which is linked to voter suppression.
We remove COVID-19 misinformation that could contribute to imminent physical harm, including false claims about cures, treatments, the availability of essential services, or the location and severity of the outbreak. We also remove claims about COVID-19 vaccines that have been debunked or are unsupported by evidence, such as false claims about the safety, efficacy, ingredients or side effects of the vaccines. Between March and October 2020, we removed 12 million pieces of COVID-19 misinformation content.
In advertising, we prohibit:
- Ads for supplies or products related to COVID-19 that use the public health crisis to create a sense of urgency or incite fear
- Ads that make direct or implied prevention claims, or overstate the impact on health or safety outcomes, in relation to non-medical masks, hand sanitizer and/or surface disinfectant wipes
- Ads that make deceptive, false, or unsubstantiated health claims, including claims that a product or service can provide 100% prevention or immunity, or is a cure for the virus.
We are also removing a number of ad targeting options, such as “vaccine controversies,” that might have been used to help spread this sort of misinformation.
If ads about COVID-19 include political content, or if the content includes advocacy, debate or discussion about social issues (in certain countries), then advertisers are required to get authorized and include a “Paid for by” disclaimer on these ads to run them.
Misinformation or false information that doesn’t fall under these particular policies may still be shared in a way that violates our other Community Standards – for example, hate speech, bullying and harassment, or spam – and we will remove any content we identify as violating those policies.
We’ve also taken additional steps to address hoaxes related to vaccines in advertising, investing in systems to better ensure that ads that include this type of misinformation about vaccines are rejected.
When fact-checkers rate a story as false, altered or partly false, we significantly REDUCE its distribution in the Facebook News Feed and Instagram Feed so that fewer people see it. On Instagram, we also make it harder to find by filtering it from Explore and hashtag pages.
Pages and domains that repeatedly share false news will also see their distribution reduced and their ability to monetize and advertise removed.
We INFORM people by giving them more context so they can decide for themselves what to read, trust and share.
On Facebook and Instagram
There are a number of different labels our third-party fact-checkers can choose from when rating content, including False, Altered, Partly False, Missing Context and Satire.
Content across Facebook and Instagram that has been rated false or altered is prominently labeled so people can better decide for themselves what to read, trust, and share. These labels are shown on top of false and altered photos and videos, including on top of Stories content on Instagram and link out to the assessment from the fact-checker. For content rated partly false or missing context, we’ll apply a lighter-weight warning label.
To help scale the work of our third party fact checkers, we use artificial intelligence to identify identical or similar content to that which has been rated by our fact checkers and automatically apply labels or reduce the distribution of the content.
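As an illustrative sketch only: the matching described above can be thought of as comparing new posts against a store of already-rated claims. The snippet below uses a simple text-similarity heuristic (`difflib`); the claim list, threshold, and function names are hypothetical, and the production systems are far more sophisticated (covering images and video as well as text).

```python
import difflib

# Hypothetical store of claims already rated false by fact-checkers.
DEBUNKED_CLAIMS = [
    "drinking bleach cures covid-19",
    "5g towers spread the coronavirus",
]

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't evade matching."""
    return " ".join(text.lower().split())

def match_debunked(post: str, threshold: float = 0.85):
    """Return the debunked claim this post closely resembles, or None."""
    post_norm = normalize(post)
    for claim in DEBUNKED_CLAIMS:
        ratio = difflib.SequenceMatcher(None, post_norm, normalize(claim)).ratio()
        if ratio >= threshold:
            return claim
    return None
```

A near-duplicate of a rated claim would then inherit the label or distribution reduction automatically, without a fact-checker re-reviewing each copy.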
In 2018, we launched a context button which provides information about the sources of articles people see in News Feed. It includes important details such as when the article was first shared, when the publisher registered on Facebook and links to other stories from the publisher.
In June 2020, we launched a new notification to let people know when a news article they’re about to share is more than 90 days old.
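The age check behind that notification is conceptually simple; the sketch below is a hypothetical illustration of it (the function name and cutoff handling are assumptions, though the 90-day figure comes from the announcement above).

```python
from datetime import date, timedelta

STALENESS_THRESHOLD = timedelta(days=90)  # articles older than this trigger a notice

def needs_staleness_notice(published: date, today: date) -> bool:
    """True when a shared article is more than 90 days old."""
    return today - published > STALENESS_THRESHOLD
```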
Access to reliable information
We have a number of additional measures in place to keep people informed about COVID-19.
In March 2020, we launched the COVID-19 Information Center at the top of News Feed, which includes real-time updates from health authorities, as well as helpful articles, videos and posts about social distancing and preventing the spread of COVID-19. People can also follow the COVID-19 Information Center to receive updates from health authorities directly in their News Feed. Through our COVID-19 Information Center we have connected over 2 billion people to resources from health authorities.
In April 2020, we began showing messages in News Feed to people who liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed. These messages connect people to the World Health Organization for accurate information. In April, we put warning labels on about 50 million pieces of content related to COVID-19 on Facebook, based on around 7,500 articles by our independent fact-checking partners.
Digital literacy programs
We invest in a range of programs, and partner with civil society organizations as well as industry partners on different initiatives to address the underlying issue of digital and news literacy.
One of our flagship programs is We Think Digital which was created in Asia Pacific, for Asia Pacific.
We Think Digital is an online education portal with interactive tutorials aimed at helping people think critically and share thoughtfully online. We designed the program in partnership with experts from across Asia Pacific, and aimed to train 1 million people across 8 countries in Asia Pacific by 2020, with our resources available in 6 different languages.
Our Work Across our Family of Apps
WhatsApp uses advanced machine learning technology that works around the clock to identify and ban accounts engaging in bulk or automated messaging so they cannot be used to spread misinformation.
We are fighting misinformation in three ways:
- Preventing abuse with spam detection technology and product changes that limit how messages are sent
- Educating and empowering people on how to use WhatsApp in a safe and responsible way
- Partnering with government, civil society, and law enforcement.
WhatsApp has made a number of product changes to reduce and address virality on the platform. In 2019, we limited message forwarding to just five chats at once, and introduced the ‘forwarded’ and ‘highly forwarded’ labels to highlight when something has been shared multiple times.
We have since limited the forwarding of highly forwarded messages to just one chat at a time, which has resulted in a 70% reduction in the number of highly forwarded messages on WhatsApp.
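The forwarding limits described above can be sketched as a simple rule keyed on how many times a message has already been forwarded. This is an illustration only: the hop count that marks a message as “highly forwarded” is an assumption here, not WhatsApp’s published value.

```python
FORWARD_LIMIT = 5            # chats a normal message can be forwarded to at once
HIGHLY_FORWARDED_LIMIT = 1   # chats once a message is "highly forwarded"
HIGHLY_FORWARDED_HOPS = 5    # assumed hop count marking a message highly forwarded

def max_forward_targets(forward_hops: int) -> int:
    """How many chats a message may be forwarded to in one action."""
    if forward_hops >= HIGHLY_FORWARDED_HOPS:
        return HIGHLY_FORWARDED_LIMIT
    return FORWARD_LIMIT
```

The design choice is friction, not prohibition: a message can still travel, but each hop of a viral message requires a deliberate, one-chat-at-a-time action.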
Unlike traditional text messaging, WhatsApp bans mass messaging outright. We use machine learning to identify and ban accounts engaged in mass messaging, banning 2 million accounts a month in this way. We have published a white paper on the impact of these efforts.
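As a loose illustration of the kind of behavioral signal such detection can build on, the sketch below flags accounts that open new conversations at an implausible rate. The threshold and function are hypothetical; the real systems combine many signals with machine learning rather than a single rule.

```python
# Illustrative only: a crude rate-based signal for bulk messaging.
MAX_NEW_CHATS_PER_MINUTE = 30  # assumed threshold for this sketch

def looks_like_bulk_sender(new_chats_started: int, minutes_observed: float) -> bool:
    """Flag accounts that start new conversations at an implausible rate."""
    if minutes_observed <= 0:
        return False
    return new_chats_started / minutes_observed > MAX_NEW_CHATS_PER_MINUTE
```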
We also give users control over who can add them to groups, which increases privacy and prevents people from being added to unwanted groups.
We use image detection technology to find content that has been debunked by Facebook’s third party fact checking program on Instagram. Once we find this content, we filter it from hashtags and Explore. We are also working on additional measures to reduce Instagram-specific misinformation, including surfacing this content to our third party fact-checkers.
The way misinformation spreads on Instagram is different from Facebook, since there’s no re-share button and no capability to share clickable links in Feed posts. This lessens the potential for something to spread ‘virally’ on our platform.
We want Messenger to be a safe and trustworthy platform to connect with friends and family. Last year we introduced features like safety notifications, two-factor authentication, and easier ways to block and report unwanted messages.
We have also limited forwarding in Messenger: messages can only be forwarded to five people or groups at a time. Limiting forwarding is an effective way to slow the spread of viral misinformation and harmful content that has the potential to cause real-world harm, and it provides another layer of protection to help keep people safer online.
Many of our other efforts to combat misinformation on Facebook – such as removing content that may lead to real-world harm, and third-party fact-checking – also help limit the spread of misinformation in Messenger.