Following Facebook’s recent investigation into Russian-backed ads that ran on their network in the lead-up to the 2016 U.S. Presidential Election, The Social Network has announced a new set of measures to help them tighten control over ad content and ensure such misuse of their platform is minimized where possible.
Facebook recently conducted an internal investigation after questions were raised over the role the platform might have played in swaying voter behavior. After initially dismissing the idea that their system played any significant part, Facebook has now handed over to congressional investigators more than 3,000 ads that had been purchased by a Russian company, seemingly to influence U.S. politics.
The aims of those ads varied significantly – while many didn’t necessarily support one candidate or another, they clearly sought to amplify divisive political issues, thereby destabilizing the election process.
An example of a pro-Trump ad allegedly run by a Russian-backed Facebook Page
The true motives behind the Russian-backed ads are at this stage unclear, but the expanded set of examples Facebook has now provided to Congress may offer more clues as to the outcomes their buyers sought to achieve.
In response to their findings, Facebook has announced five new ad measures – which will have impacts beyond political interference alone:
1. Making advertising more transparent
The first initiative will essentially lift the curtain on Facebook dark posts – ads that are targeted to specific audiences but are not visible on the originating Page, or to anyone outside that targeted group.
You’ll still be able to target your Facebook ads to whatever audience you wish, but the new measures will ensure that anyone will be able to see all the ads your Page is running at any given time.
“To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well – including ads that aren’t targeted to you directly.”
The idea behind this is that it will enable Facebook to highlight discriminatory ad content, and will give users more insight into the motivations of a brand or Page.
Many of the Russian-backed ads were aimed at very specific audience segments, with the intent to incite an emotional response and sway voter behavior. If a user were able to see the different ad variants, they might be less inclined to trust the motivations of such Pages.
And while this system will have clear benefits in the political sphere, it’ll also have implications for regular businesses using Facebook ads. Competitors will be able to see your ad variants, and you’ll need to consider your messaging to each audience segment, knowing that anyone will be able to see all your ad content.
In a general sense, the impact should be minimal, but it will add another consideration to the Facebook ads process.
2. Strengthening enforcement against improper ads
Facebook’s also beefing up their ads review team to help detect and eliminate bad actors before their ads even make it through to an audience.
“Reviewing ads means assessing not just the content of an ad, but the context in which it was bought and the intended audience – so we’re changing our ads review system to pay more attention to these signals. We’re also adding more than 1,000 people to our global ads review teams over the next year, and investing more in machine learning to better understand when to flag and take down ads. Enforcement is never perfect, but we will get better at finding and removing improper ads.”
The addition of more than 1,000 new staff underlines just how seriously Facebook is taking the issue, and acknowledges the potential influence the platform may have had.
More reviewers should mean better quality ads, which can only be a good thing overall, while improved detection systems will help filter out questionable content and ensure clearer links between the advertiser and the intent of each ad.
3. Tightening restrictions on advertiser content
Facebook’s also broadening their interpretation of what constitutes ‘expressions of violence’ in ad content:
“We hold people on Facebook to our Community Standards, and we hold advertisers to even stricter guidelines. Our ads policies already prohibit shocking content, direct threats and the promotion of the sale or use of weapons. Going forward, we are expanding these policies to prevent ads that use even more subtle expressions of violence.”
What, exactly, those more subtle expressions might be remains to be seen, but the general idea of eliminating ads which condone or suggest any type of violence is obviously a positive. It may, however, drag Facebook back into debates around free speech, particularly among those who want to use Facebook ads to promote a cause, depending on how the new restrictions are defined.
4. Increasing requirements for authenticity
This measure directly relates to election-based advertising:
“We’re updating our policies to require more thorough documentation from advertisers who want to run US federal election-related ads. Potential advertisers will have to confirm the business or organization they represent before they can buy ads.”
This brings political campaigning more in line with general advertising guidelines, adding a requirement of authenticity and disclosure to the process.
5. Establishing industry standards and best practices
And the last new measure is more of a rallying call – a note to indicate that Facebook’s looking to work with industry peers and governments to improve their performance on this front.
“In order to fight threats like these, we’re all going to need to work together. We are reaching out to leaders in our industry and governments around the world to share information on bad actors and make sure they stay off all platforms.”
Really, Facebook needs to demonstrate its intent in this regard. The company has long touted the capabilities of its ad platform – its ability to reach almost anyone and use advanced targeting to connect with them at just the right moment, ensuring optimum response. It seemed almost counter-intuitive for Facebook to then claim their platform couldn’t possibly have been used to influence the election – especially when various studies have already demonstrated the power of Facebook’s audience data set, even in specific election contexts.
There’s no doubt Facebook has the capacity to influence, and with more than 2 billion users, that capacity is only increasing – and that’s before you consider their other platforms, WhatsApp (1.3b users), Messenger (1.3b) and Instagram (800m), all of which can also be used for advertising purposes to varying degrees.
The Social Network may want to play this down to avoid regulation, but as various commentators have noted, you can’t tout the effectiveness of your ad products, then downplay that effectiveness when they’re used to influence political movements. Facebook can be used to influence behavior, and clearly was used for that purpose in the 2016 Election.
Hopefully these new measures are a step towards more effective policing of those with questionable motives.