December 19th, 2016


After the recent American election, Facebook came under heavy fire for not curbing the spread of fake news stories, which many people believe contributed to Hillary Clinton's loss.

Facebook at first denied claims that false news stories played any role in the outcome, but it later admitted that they were a widespread problem. At that stage, the company said it would develop a plan to combat fake news. Yet, beyond plenty of planning, no concrete changes were seen…until now.

The Solution To Stopping Fake News

On December 15, Facebook issued the following press release from Adam Mosseri, VP, News Feed:

“We’re committed to doing our part. Today we’d like to share some updates we’re testing and starting to roll out.

We believe in giving people a voice. We realise that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully. We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.

The work falls into the following four areas. These are just some of the first steps we’re taking to improve the experience for people on Facebook. We’ll learn from these tests, and iterate and extend them over time.”

1. Easier Reporting

Facebook says that they’re testing several ways to make it easier to report a hoax if you see one on the site. Reporting a hoax can be done by clicking the upper right hand corner of a post.

The company says that they've relied heavily on the Facebook community for these reports, which helps them detect more fake news.

[Image: reporting a story as fake]

2. Flagging Stories As Disputed

Facebook has also started a program that works with third-party fact-checking organisations that are signatories of Poynter’s International Fact Checking Code of Principles.

According to Facebook, they’ll use reports from the Facebook community, along with other signals (which the company does not define), to decide which stories to send to these organisations. If the fact-checking organisations identify a story as fake, it will be flagged as disputed, with a link to a corresponding article explaining why. Disputed stories may also appear lower in the News Feed.

[Image: a disputed story]

It will still be possible to share these stories, but you will see a warning that the story has been disputed as you share.

[Image: sharing a disputed story]

Once a story is flagged, it can’t be made into an ad and promoted, either.
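
To make the workflow in this section concrete, here is a minimal sketch in Python of how a disputed flag could gate the behaviours described above: lower News Feed ranking, a warning on share, and no ad promotion. The Story class, the report threshold, and the ranking penalty are all assumptions for illustration, not Facebook's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical data model -- not Facebook's actual code, only an
# illustration of the rules described in this section.
@dataclass
class Story:
    url: str
    base_rank: float            # ranking score before any fake-news signals
    user_reports: int = 0       # "report as hoax" clicks from the community
    disputed: bool = False      # set after third-party fact checkers dispute it
    dispute_link: str = ""      # link to the fact checkers' explanation

REPORT_THRESHOLD = 50           # assumed: enough reports to send for fact checking
DISPUTED_RANK_PENALTY = 0.5     # assumed: disputed stories appear lower in the feed

def needs_fact_check(story: Story) -> bool:
    """Community reports (plus other signals) decide what gets sent out."""
    return story.user_reports >= REPORT_THRESHOLD and not story.disputed

def mark_disputed(story: Story, explanation_url: str) -> None:
    """Called when a signatory fact-checking organisation disputes the story."""
    story.disputed = True
    story.dispute_link = explanation_url

def feed_rank(story: Story) -> float:
    """Disputed stories may appear lower in the News Feed."""
    return story.base_rank * DISPUTED_RANK_PENALTY if story.disputed else story.base_rank

def share_warning(story: Story) -> str | None:
    """Sharing is still allowed, but a warning is shown first."""
    if story.disputed:
        return f"Disputed by third-party fact checkers: {story.dispute_link}"
    return None

def can_promote_as_ad(story: Story) -> bool:
    """Once flagged, a story can't be made into an ad and promoted."""
    return not story.disputed
```

The point of the sketch is that "disputed" is a single piece of state set only after third-party review, which every downstream surface (feed ranking, the share dialog, the ads system) then consults.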

3. Informed Sharing

Facebook says that they’ve found that if reading an article makes people significantly less likely to share it, then that may be a sign that a story has misled people in some way. They’re going to be using that “barometer” as part of an article’s ranking. This will help determine whether or not that article could be classified as fake or a hoax.

4. Disrupting Financial Incentives For Spammers

A lot of fake news is financially motivated: spammers make money by masquerading as well-known news organisations and posting hoaxes that get people to visit their sites, which are often mostly ads.

To combat this, Facebook says that they’re putting several initiatives in place to reduce the financial incentives:

  • On the buying side, they’ve eliminated the ability to spoof domains, which the company hopes will reduce the prevalence of sites that pretend to be real publications (see the sketch after this list).
  • On the publisher side, they’re analysing publisher sites to detect where policy enforcement actions might be necessary.
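
As a sketch of what catching a spoofed domain might involve, the Python below compares the publisher a link claims to be with the domain it actually resolves to, and flags lookalikes. The allowlist and the matching rule are assumptions for the sake of the example, not Facebook's actual enforcement logic.

```python
from urllib.parse import urlparse

# Assumed allowlist of legitimate publisher domains -- purely illustrative.
KNOWN_PUBLISHERS = {"nytimes.com", "washingtonpost.com", "bbc.com"}

def registered_domain(url: str) -> str:
    """Crude domain extraction (last two labels of the hostname);
    a real system would use the public suffix list instead."""
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:]) if host else ""

def looks_spoofed(display_name: str, link_url: str) -> bool:
    """
    Flag links whose displayed publisher does not match the domain they
    actually point to (e.g. a post styled as 'nytimes.com' that links to
    'nytimes.com.co'), or lookalike domains that borrow a known brand
    name without being on the allowlist.
    """
    actual = registered_domain(link_url)
    claimed = display_name.lower().strip()
    if claimed in KNOWN_PUBLISHERS:
        return actual != claimed
    brands = {pub.split(".")[0] for pub in KNOWN_PUBLISHERS}
    return actual not in KNOWN_PUBLISHERS and any(b in actual for b in brands)

# Examples:
print(looks_spoofed("nytimes.com", "https://nytimes.com.co/article"))   # True
print(looks_spoofed("nytimes.com", "https://www.nytimes.com/article"))  # False
```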

Facebook says that “it’s important to us that the stories you see on Facebook are authentic and meaningful. We’re excited about this progress, but we know there’s more to be done. We’re going to keep working on this problem for as long as it takes to get it right.”