Twitter has long been criticized for inaction in addressing problems with trolls and abuse. But they have made improvements on this front: over the last 18 months, Twitter’s introduced a range of new tools and options, including:
- A new automated ‘Quality Filter’ which helps detect and remove questionable tweets from your timeline, including those containing threats and offensive or abusive language.
- An expanded mute option, which enables users to block out any words, phrases, hashtags, @handles and emojis that they don’t want to see.
- New restrictions on offending accounts, including 12-hour periods in which an account’s tweet reach is limited.
- New processes to restrict the creation of abusive accounts and a presentation tool which collapses potentially abusive tweets to reduce their exposure.
- An update to the default egg avatar to encourage users to upload a real photo.
All of those measures are positive steps, and they’ve no doubt helped. But evidently they haven’t been enough: in recent weeks, Twitter has come under fire once again for their actions, and inaction, on trolls and abuse.
The latest large-scale controversy came when Twitter suspended the account of actor Rose McGowan after McGowan spoke out about Hollywood producer Harvey Weinstein, and suggested, via tweet, that others within the industry had long known about, and turned a blind eye to, the allegations of sexual abuse against Weinstein.
For Twitter’s part, they explained that McGowan’s account was suspended because she tweeted out a private phone number, which is a violation of their Terms of Service. But even with that explanation, many users weren’t convinced. As the replies to Twitter’s statement show, the general consensus seems to be that Twitter takes action when it suits their commercial interests, and that such decisions have little to do with established rules or guidelines.
The problem, then, would appear to be the system itself: either Twitter isn’t very good at explaining and enforcing their rules on a regular basis, or they’re not great at detecting issues, and both lead to inconsistency.
The added element at play here was Twitter’s recent insistence that they won’t ban U.S. President Donald Trump for potential Terms of Service violations because his tweets are ‘in the public interest’.
Again, this clouds Twitter’s process, and as such, the platform has essentially been forced to go back to the drawing board and come up with a new plan for how it handles on-platform enforcement, or at the least, to provide better explanations as to why some accounts are blocked or suspended while others are allowed to continue with no repercussions.
In response to this, Twitter has now released a new schedule of their planned safety work, which they hope will improve platform transparency and user experience.
There are some interesting ideas in there, with one of particular note being the removal of ‘hateful display names’.
This issue came up recently when former White House Press Secretary Sean Spicer quote tweeted another user, who then quickly changed his display name to criticize Spicer.
BuzzFeed’s labeled this practice ‘nameflaming’, and some have pointed to it as an example of why Twitter doesn’t want to allow tweet editing (because it demonstrates how easily embedded tweets can be altered after the fact). Enforcing a ban on such behavior could be difficult in practice, but it’ll be interesting to see how Twitter goes about it.
In fact, it’ll be interesting to see how Twitter goes about most of these changes. Various measures included here note that Twitter will provide clearer explanations as to why actions have been taken, which is great, but it’ll also likely mean a significant increase in workload for Twitter’s team. No doubt they’ll be able to provide automated responses for various violations, but there’ll also be quite a few cases which require human intervention. How that adds to Twitter’s operational load will be another element to consider.
And while it’s good to see Twitter once again announcing action, the true measure will be in how these changes are actually put into effect, which Twitter knows too. The platform has struggled with its transition from fun app to business, and part of that transition also includes taking care of community members. Twitter’s rolled out changes, as noted above, and they’ve been working to meet user needs in this regard, but they’ve continued to fall down at crucial moments, and those failures then lead to an avalanche of criticism from other users who feel similarly wronged.
In this respect, Twitter’s moves here are about more than the violations and enforcement actions themselves; they’re about Twitter’s capacity, as a company, to take responsibility and improve.
If they can demonstrate that capacity, it’ll be a significant step, but it’ll also require a significant evolution in approach.