Twitter has come under intense scrutiny and criticism in recent times, and one of the platform’s biggest identified problems has been trolls and abuse, along with their historic inaction in addressing such concerns. Twitter has acknowledged as much and made the issue a key focus, and over the last year they’ve introduced a range of tools to protect users and eliminate such behavior.
Those measures include:
- Collapsing or hiding potentially offensive tweets
- The ability to mute certain words from your notifications
- An update of the default egg avatar to encourage users to upload an actual photo
- New transparency around reporting processes
- Warnings placed on potentially offensive profiles and tweets
- A secondary ‘message requests’ DM inbox
And that’s not even all of them – while Twitter has taken a lot of heat over their previous inaction, they deserve credit for the work they’ve done on this front of late, with an increasing array of options to help users improve their on-platform experience.
But have these measures worked?
According to the platform’s latest update, yes they have – Ed Ho, Twitter’s General Manager of Consumer Product and Engineering, says that the platform is now taking action against 10x more accounts, compared to the same time last year.
Among the actions stemming from those enforcement efforts, Ho says they’ve:
- Removed twice the number of repeat offenders who create new accounts after being suspended for violations
- Seen a 25% decline in abuse reports related to accounts that have been suspended for violating platform rules
- Seen a significant reduction in repeat offenders following suspension (65% did not offend a second time)
- Seen a 40% reduction in blocks following @mentions from people users don’t follow, which Ho attributes to their improved quality filters
Those seem like solid results but, as noted by The Verge, Twitter hasn’t released specific numbers, just percentage shifts. It’d be interesting to get an actual volume comparison, but still, it does seem like Twitter’s efforts are working, at least to some degree.
Eliminating trolls and abuse is key to the ongoing viability of the platform – late last year, amidst various reports that Twitter was up for sale, Disney was said to have pulled out of any potential bid due to concerns about the platform’s abuse issues. While protecting users should be motivation enough to act, the financial implications are likely just as significant for the company. If Twitter wants to see long-term success, they need to act on abuse – which they evidently now are doing – and that can only be a positive.
More transparency would help here too. Aside from protecting users, another issue Twitter faces is fake accounts and spam. This week, a new report suggested that Twitter’s security team had purged nearly 90,000 fake accounts after outside researchers discovered a massive botnet peddling links to fake “dating” and “romance” services. That follows a research report released earlier this year which suggested that up to 15% of active Twitter accounts are bots, strategically set up to target specific groups and inflate digital metrics through artificial retweet and mention strategies. Twitter’s current active audience is listed at 328 million monthly active users, which means that, by this count, around 49 million profiles are bots, not real people.
The concerns around fake profiles have even extended to U.S. President Donald Trump, whose following reportedly increased by 5 million people overnight recently – a claim which Twitter has since denied, though without producing any data to support that denial.
Of course, fake profiles and spam are a different issue from trolls and abuse, but Twitter would benefit from being more transparent on both – providing clear numbers which show the true impact of their efforts to tackle such concerns, and which demonstrate that the platform is populated by real, engaged users, interacting in a safe and welcoming manner.
Then again, there may not be much incentive for them to do so – the actual numbers behind Twitter’s anti-abuse efforts may look less impressive than percentage shifts, while removing known fake or spam accounts can only reduce their overall usage stats.
But still, if Twitter wanted to take a real stand on the issue, providing real numbers would give them a concrete basis on which to demonstrate their improvements – they already report legal takedown and copyright notices, so they could add safety enforcement and fake account removals to the same listing.
*Twitter’s transparency reports*
Of course, things are never that simple – there are various reasons why Twitter might be unable, or unwilling, to do this. But with so much discussion around both issues, it may be beneficial for the platform to lead from the front, winning the trust of both users and advertisers by showing their progress.