Instagram’s rolling out some new moderation tools which will use machine learning to detect not only offensive language and spam, but also comments that could discourage other users.
As explained by Instagram:
“Many of you have told us that toxic comments discourage you from enjoying Instagram and expressing yourself freely. To help, we’ve developed a filter that will block certain offensive comments on posts and in live video.”
Users will be able to activate the option within their ‘Comment Settings’. Once in place, the system will use Facebook’s DeepText AI classification system to detect and hide any comments that it deems to be in violation of Instagram’s Community Guidelines.
The system has been ‘taught’ to detect such remarks by analyzing a huge range of example comments, and it uses a range of additional qualifying signals to improve its results.
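To picture what that training step looks like in practice, here’s a rough, purely illustrative Python sketch: a simple bag-of-words classifier standing in for DeepText (which is a far more sophisticated deep-learning system), with made-up example comments in place of Instagram’s actual labeled data.

```python
# Illustrative only: a toy stand-in for the kind of supervised text
# classifier Instagram describes. The labeled examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = offensive, 0 = acceptable.
comments = [
    "great shot, love the colours",
    "you should just quit posting",
    "awesome, where was this taken?",
    "nobody wants to see this garbage",
]
labels = [0, 1, 0, 1]

# Bag-of-words features plus a linear classifier: the simplest version
# of "learning from a huge range of example comments".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Probability that a new comment is offensive.
print(model.predict_proba(["this is garbage"])[0][1])
```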
As explained by Wired:
“As with spam, the comments are rated based both on a semantic analysis of the text and factors such as the relationship between the commenter and the poster, as well as the commenter’s history. Something typed by someone you’ve never met is more likely to be graded poorly than something typed by a friend.”
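In code terms, that weighting might look something like the toy sketch below. The field names, weights and thresholds are all invented for illustration; Instagram hasn’t published how its grading actually works.

```python
# A rough sketch of the weighting Wired describes: the final grade mixes
# a semantic (text-model) score with who is doing the commenting.
from dataclasses import dataclass

@dataclass
class Commenter:
    follows_poster: bool       # does the commenter follow the poster?
    followed_by_poster: bool   # does the poster follow them back?
    prior_flagged_comments: int

def offense_grade(text_score: float, commenter: Commenter) -> float:
    """Return a 0..1 grade; higher means more likely to be hidden."""
    grade = text_score
    # A stranger's borderline comment is graded more harshly...
    if not (commenter.follows_poster or commenter.followed_by_poster):
        grade += 0.15
    # ...and a history of flagged comments counts against them.
    grade += min(commenter.prior_flagged_comments, 5) * 0.05
    return min(grade, 1.0)

# Same text, different commenters, different outcomes.
friend = Commenter(follows_poster=True, followed_by_poster=True, prior_flagged_comments=0)
stranger = Commenter(follows_poster=False, followed_by_poster=False, prior_flagged_comments=3)
print(offense_grade(0.55, friend), offense_grade(0.55, stranger))
```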
Commenters will still see their own comment on their device, reducing their motivation to try again, as they’ll assume the comment is visible to all. But that could also lead to some confusing interactions, especially if a comment is blocked erroneously, which will still occur in some cases.
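The visibility trick itself is simple enough to sketch. The data model below is assumed purely for illustration, as Instagram hasn’t detailed how the feature is implemented.

```python
# A minimal sketch of the "only the author still sees it" behaviour.
from dataclasses import dataclass

@dataclass
class Comment:
    author_id: int
    text: str
    hidden_by_filter: bool = False

def visible_comments(comments: list[Comment], viewer_id: int) -> list[Comment]:
    """Hidden comments stay visible to their own author, invisible to everyone else."""
    return [
        c for c in comments
        if not c.hidden_by_filter or c.author_id == viewer_id
    ]

thread = [
    Comment(author_id=1, text="lovely photo"),
    Comment(author_id=2, text="awful, delete this", hidden_by_filter=True),
]
print([c.text for c in visible_comments(thread, viewer_id=2)])  # the author still sees both
print([c.text for c in visible_comments(thread, viewer_id=3)])  # everyone else sees one
```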
This adds to Instagram’s existing comment moderation tools, including the ability to block specific words or emoji of your choosing, and the option to switch off comments on posts entirely.
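Those older, user-configured controls are also easy to picture. Here’s a minimal, hypothetical sketch of a custom blocklist and a per-post comments toggle; the setting names are invented and simply mirror the options described.

```python
# A hedged sketch of the existing controls: a user-chosen blocklist of
# words/emoji and a per-post "comments off" switch.
BLOCKED_TERMS = {"spam", "ugly", "💩"}   # words or emoji the user chose to block

def allow_comment(text: str, comments_enabled: bool,
                  blocked_terms: set[str] = BLOCKED_TERMS) -> bool:
    """Reject the comment if comments are off or it contains a blocked term."""
    if not comments_enabled:
        return False
    lowered = text.lower()
    return not any(term in lowered for term in blocked_terms)

print(allow_comment("nice shot!", comments_enabled=True))       # True
print(allow_comment("what a 💩 photo", comments_enabled=True))   # False
print(allow_comment("nice shot!", comments_enabled=False))       # False
```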
In some ways, Instagram is the perfect testing ground for such features. With Instagram’s focus on visual content, there’s less text on screen, making it a less enticing platform for trolls and abuse than, say, Facebook or Twitter, where words are given more prominence. That’s not to say abuse isn’t an issue on Instagram, nor to diminish the significance of such actions on the platform, but the lessened emphasis on text may mean Facebook can use Instagram’s trial of DeepText for this purpose as a research point for wider implementation across its other apps.
And it makes sense for Instagram too – the platform is now up to 700 million monthly active users, and growing fast, while it’s particularly popular amongst younger audiences. The younger skew makes the need for such measures even more pressing – suicide remains the second leading cause of death for people aged between 10 and 24 in the U.S. As such, anything that can be done to help is a positive.
In addition to this, Instagram’s also expanding its spam filter to detect content in nine different languages – English, Spanish, Portuguese, Arabic, French, German, Russian, Japanese and Chinese. And worth noting – the new offensive comment filter will only be available in English to begin with.
It’ll be interesting to see how these combined measures improve the user experience on Instagram, and whether their refined system can form a template of sorts for other platforms. As noted, eliminating trolls and abuse is an essential concern, and the platforms are already working together on similar fronts to find better solutions.
Hopefully these new measures deliver positive results, and advance the process.