While Facebook’s data misuse concerns have taken the spotlight in recent weeks, the platform is also still working to stamp out false information, and to curb the ways in which Facebook can be used to spread unsubstantiated, divisive content.
Last year, Facebook rolled out a new tool to help users verify the trustworthiness of the content they see in their feeds, adding an information marker to each post which, when clicked, provides details on the originating publisher (pulled from Wikipedia), along with links to related articles, giving users resources to further assess an article’s validity.
The tool was initially rolled out to only some users in the U.S., but now Facebook’s expanding it to all U.S. users, while also adding some new features to make it a more valuable resource.
As per Facebook, the two new features being added are:
- More From This Publisher, which will give people a quick snapshot of the other recent stories posted by the publisher
- Shared By Friends, which will show people any of their friends who have shared the article
The idea with the first is to better connect users with more credible sources – if they find a publisher reliable, they can follow it to stay up to date, rather than getting their news from various, potentially less reliable, sources. The second measure may also help reduce the spread of fake news by adding a level of peer assurance – if your friends are all sharing the same post, that could make you more inclined to trust it too.
In themselves, neither measure would necessarily have a significant influence on the spread of more trustworthy news content – if you and your friends are all sharing the latest reports from a fake news outlet, these measures probably won’t help reduce that. But for Facebook to provide the additional Wikipedia information that verifies the source, the publisher actually has to have a Wikipedia page in the first place, which most fake news-peddling outlets won’t. And if you were to look at the list of other posts from the same publisher and see some overly outlandish topics, that could quickly rule them out as a trustworthy outlet.
In addition to these new tools, Facebook’s also rolling out a new test which will provide more information about individual authors.
“People [in the test group] will be able to tap an author’s name in Instant Articles to see additional information, including a description from the author’s Wikipedia entry, a button to follow their Page or Profile, and other recent articles they’ve published.”
What’s most interesting about this is that Facebook already had a form of authorship, which they rolled back last year. Under that system, Page owners could link to the Facebook Page of an article’s author, adding a more direct reference point, and providing more context on the writer. Facebook didn’t offer any explanation at the time as to why they removed the option, but they did add an author tags tool which looked set to replace it.
Now, this new process utilizes that option:
“This information will only display if the publisher has implemented author tags on their website to associate the author’s Page or Profile to the article byline, and the publisher has validated their association to the publisher.”
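For reference, publisher-side author tags are implemented as Open Graph-style meta tags in the head of the article page’s HTML. A minimal sketch is below – the profile URL is a placeholder, and the exact markup should be confirmed against Facebook’s author tags documentation:

```html
<head>
  <!-- Declare the page as an article so the article:* properties apply -->
  <meta property="og:type" content="article" />
  <!-- Associate the byline with the author's Facebook Page or Profile.
       The URL below is a placeholder for illustration. -->
  <meta property="article:author" content="https://www.facebook.com/ExampleAuthorPage" />
</head>
```

With a tag like this in place, and the author’s association to the publisher validated, Facebook can match the article’s byline to the author’s on-platform presence.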
Again, the aim is to add more context to the information being shared on The Social Network – the theory being that the more options people have to confirm and follow reliable sources, the less likely they’ll be to share false and misleading news from questionable sources.
Will that work? It’s hard to say – as shown by the latest investigation into the spread of political propaganda, people have become more reactive to headlines and sensationalism, with the urge to add their own comment and share their thoughts on social often outweighing the need to read the facts and learn more. If that’s become habitual, it could be hard to break, while news outlets are also increasingly incentivized to drive clicks, leading to more sensationalized content (more people will click on extreme headlines than on measured reporting).
But Facebook is trying to do something – rather than washing their hands of the problem and shifting full responsibility onto users, they’re surfacing more context, and giving people more options to scrutinize their media intake. We should all hope it helps slow the spread of false narratives.
Also worth noting: in addition to these measures, Facebook has announced the widespread removal of questionable Pages operated by Russian-affiliated groups.
As per Facebook:
“This morning we removed 70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — that were controlled by the Russia-based Internet Research Agency (IRA). Many of the Pages also ran ads, all of which have been removed. Of the Pages that had content, the vast majority of them (95%) were in Russian — targeted either at people living in Russia or Russian-speakers around the world including from neighboring countries like Azerbaijan, Uzbekistan and Ukraine.”
Facebook says that the IRA has “repeatedly used complex networks of inauthentic accounts to deceive and manipulate people who use Facebook, including before, during and after the 2016 US presidential elections”, which has led to this action.
Removing fake news will remain a massive challenge, and as noted, media consumption habits have also shifted, adding to the difficulty. But Facebook is taking action.