Over the past year, Facebook’s been skirting around the definition of what it is – or more specifically, Facebook’s actively worked to avoid being labeled a media company. And that makes sense – Facebook sees itself as more of a facilitator, a platform for anyone to share their voice. They don’t make editorial decisions; they simply provide the tools through which to share content.
But increasingly, that stance has been tested, particularly as The Social Network has been pushed to crack down on controversial content and misuses of their platform.
Now, that label looks even more fitting, with Facebook announcing a new list of content types which will be ineligible for monetization, based on a set of advertiser standards.
As explained by Facebook:
“At Facebook, we take very seriously our responsibility to earn and maintain the trust of our advertiser partners – and give them the confidence they need to invest in us. Which is why today, we’re introducing new monetization eligibility standards that will provide clearer guidance around the types of publishers and creators eligible to earn money on Facebook, and the kind of content that can be monetized.”
Essentially, Facebook’s making rulings on what’s acceptable content on their platform, which sounds kind of like an editorial decision. Kind of.
So, what kind of content will no longer be eligible for monetization? The full explanation of each category is available in Facebook’s ‘Content Guidelines for Monetization’, but here’s a point-by-point list:
- Misappropriation of Children’s Characters – Content that depicts family entertainment characters engaging in violent, sexualized, or otherwise inappropriate behavior – including videos positioned in a comedic or satirical manner.
- Tragedy & Conflict – Content that focuses on real-world tragedies, including but not limited to depictions of death, casualties, or physical injuries, even if the intention is to promote awareness or education.
- Debated Social Issues – Content that is incendiary, inflammatory, or demeaning, or that disparages people, groups, or causes.
- Violent Content – Content that depicts threats or acts of violence against people or animals, where this is the focal point and is not presented with additional context.
- Adult Content – Content where the focal point is nudity or adult content, including depictions of people in explicit or suggestive positions, or activities that are overly suggestive or sexually provocative.
- Prohibited Activity – Content that depicts, constitutes, facilitates, or promotes the sale or use of illegal or illicit products, services or activities.
- Explicit Content – Content that depicts overly graphic images, blood, open wounds, bodily fluids, surgeries, medical procedures, or gore that is intended to shock or scare.
- Drugs or Alcohol Use – Content depicting or promoting the excessive consumption of alcohol, smoking, or drug use.
- Inappropriate Language – Content that contains excessive use of derogatory language, including language intended to offend or insult particular groups of people.
All of these categories seem fairly logical, with clear reasons why Facebook wouldn’t want to allow such content to be monetized. But then again, ‘debated social issues’ will no doubt raise the hackles of various groups, and lead to further accusations of Facebook censorship.
And this is where the editorial accusation comes in – while all of the other categories are fairly clear-cut, ‘debated social issues’ comes down to a level of judgment: someone has to make a call on what’s appropriate.
The same criticism has been leveled at Facebook over their decision to ban Pages which repeatedly share false news from buying ads – various groups have questioned who decides what’s ‘false news’ in this context. In that case, it’s content flagged by third-party fact checkers, but some still question what gives those organizations the authority to label something as fake.
Essentially, at the end of this chain, Facebook has to decide how it rules on such cases – which, it could be argued, is an editorial decision. If, that is, Facebook were a media company.
Interestingly, that decision – to ban certain Pages from buying ads – extends to this announcement too. Facebook notes that:
“Those who share content that repeatedly violates our Content Guidelines for Monetization, share clickbait or sensationalism, or post misinformation and false news, may be ineligible or may lose their eligibility to monetize.”
Facebook’s plan is to make it increasingly difficult for bad actors to make money on their platform, but again, this is clearly an editorial judgment. Facebook’s making a call on what content can and can’t be monetized, a fairly direct editorial link.
In addition to this, Facebook’s also looking to boost the credibility of their ad metrics – which have taken several hits in recent times – by seeking accreditation from the Media Rating Council, while they’re also partnering with third parties, like DoubleVerify and Integral Ad Science, ‘to ensure the brand safety tools and controls we create serve our advertisers’ needs’.
This is more in line with the recent controversies over YouTube ads appearing alongside questionable content – on that front, Facebook is also adding 3,000 new content reviewers to better safeguard against such issues.
The added security and clarity around Facebook ad content is a positive, but it will definitely raise questions over how The Social Network decides what is and isn’t acceptable – and how it can then maintain its position that it’s not a media company.
Overall, the new guidelines should make Facebook a safer, better place for interaction, but the controversy around their content controls isn’t likely to ease up, and could push some users off to other, less censored platforms.