Facebook is launching fake news checking in South Africa – here’s how it works


Facebook on Thursday launched fake-news checking in South Africa, following a similar launch in Kenya earlier this week.

Potential fake news stories will be put in front of Africa Check, a third-party fact-checking organisation based at the journalism department of the University of the Witwatersrand in Johannesburg since 2012, and the AFP news service.

The launch comes as Facebook rolls fact-checking out across the world – and just before elections in South Africa, due in 2019.

Africa Check on Thursday announced its involvement in a hard-hitting post that criticised Facebook for failing in its responsibility as a publisher for many years, and pointed out some of the problems with its fact-checking programme. Facebook will pay Africa Check for its services.

Here is how the system works.

Facebook’s increasingly smart algorithms will flag some articles for review automatically, but those systems are in addition to and learn from the reports made by humans.

The three dots at the top right of each post give you the option to “give feedback” on the post, or in some iterations to “report” it. That takes you to another screen, where you can identify it as fake news.

Possible fake news goes to a trusted third-party fact checker for review.


Facebook uses a global network of fact-checkers ranging from the well-known American site Snopes.com to operations largely unknown outside their home countries, such as Indonesia’s Liputan6. The network currently covers 17 countries.

In South Africa, fact-checking will be done by Africa Check or the France-headquartered AFP, which serves the same function in various countries.

Fact-checkers review the material, and rate it.

The reviewing organisation has a range of options, Africa Check's Anim van Wyk said on Thursday, and can classify articles as "true", "false", or "mixture". It can also flag content as "not eligible" for fact-checking if it is satire or clear opinion.

In theory, only articles, pictures, or videos for which the primary claim is false will be flagged.

Fake news is flagged, and users are warned about it.

Facebook does not remove fake news from its platform, but it does downgrade its distribution, so that fewer people see it. This, the company says, strikes the best balance between freedom of expression and not promoting falsehoods.

Where fake news does show up in feeds, it will come with additional information from fact checkers.

If you try to share such an article once it has been flagged, you will be warned it is fake. Those who have previously shared such fake news will also be notified that it has been found to be false.

Repeat offenders are cut off from audience and money.

“If a Facebook Page or website repeatedly shares misinformation, we’ll reduce the overall distribution of the Page or website, not just individual false articles. We’ll also cut off their ability to make money or advertise on our services,” Facebook says.

There is, however, a process for publishers to dispute a finding against them, or to publish a correction. In that case the “strike” can be removed, and the publisher gets access to a normal Facebook audience again.

Africa Check is starting with fake news that has consequences.


Facebook’s third-party fact-checkers can proactively deal with posts on Facebook too. Africa Check will initially “focus on bogus health cures, false crime rumours and things like pyramid schemes – the kind of content that can lead to poor decisions and physical harm,” Van Wyk said.

Business Insider
