The decline of empathetic behavior on social media is a growing issue. Our industry has an opportunity to push for an agenda that helps teens and young adults understand the choices they’re making behind their devices. Platforms are exploring ways they can play a role in this process and create productive interactions through shared experiences and understanding.
Here are a few recent examples unpacked:
Collaborating with Suicide Prevention Programs
On the heels of its test to remove “likes” from posts and the launch of “Restrict,” a feature that lets users quietly shadow-ban others who post harmful or offensive comments, Instagram is continuing its effort to support the fight against cyberbullying.
The photo-sharing platform recently unveiled a new AI-powered feature that notifies users when their captions on photos and videos could be considered offensive and gives them the opportunity to edit their post before it’s live.
Beyond aiming to limit the reach of bullying, a key goal of the tool is to provide education around proper Instagram etiquette and what violates the platform’s community guidelines. In order to create the new system, Instagram partnered with suicide prevention programs.
“Earlier this year, we launched a feature that notifies people when their comments may be considered offensive before they’re posted. Results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance,” the company shared in the announcement.
The stance is an important one that prioritizes limiting the reach of bullying but, more importantly, is primed to foster education. The app hopes to inspire people to care about their words and choices online, and to understand how those choices can hurt others and undermine the growth of a positive, diverse community.
Expanding Definitions of Harmful Misinformation
Yet another app putting education at the center of its decisions to combat bullying and harmful misinformation is TikTok.
The video-sharing app recently overhauled its Community Guidelines to incorporate a section dedicated to the sharing of misinformation, an update intended to add transparency to how harmful or unsafe content is defined and regulated on the platform. “It’s important that users have insight into the philosophy behind our moderation decisions and the framework for making such judgements,” the announcement stressed.
At its core, the updates outline how violations are grouped into 10 distinct categories, each of which contains an explanation of the rationale and several detailed bullet points to clarify what type of misbehavior would fall into that category. While TikTok’s rules against misleading content have been in place for a while, until this expansion the focus had primarily been around scams and barring the creation of fake profiles.
Here’s a quick outline of the additional types of content the app is targeting:
- Content that incites fear, hate, or prejudice
- Content that could cause harm to an individual’s health – such as misleading information about medical treatment
- Content that proliferates hoaxes, phishing attempts, or “manipulated content meant to cause harm”
- Content that misleads community members about elections or other civic processes

One critique that has surfaced since the announcement is that the expanded guidelines don’t explain how TikTok will determine what counts as harmful, “misleading” content, leaving enforcement decisions open to interpretation.
For its part, the company notes: “Our global guidelines are the basis of the moderation policies TikTok’s regional and country teams localize and implement in accordance with local laws and norms.”
It will be interesting to see if there are further iterations based on this feedback and how the language and structure of these guidelines will evolve as the community continues to grow.
Letting Users Define the Audience of Their Content
As part of a broader discussion about the rise of ephemeral messaging and its potential for Twitter, the platform is exploring specific dimensions of control, such as who can see or participate in tweet conversations.
During CES 2020, Suzanne Xie, Director of Product Management, outlined several key changes in the works aimed at addressing these issues and promoting a healthier, more positive user experience.
Twitter product lead Kayvon Beykpour articulated the rationale in an interview with WIRED’s Editor in Chief Nick Thompson. “We’re exploring ways for people to control proactively, not reactively hiding a reply…The philosophical approach we took here is, when you start a conversation, as the author of a tweet you should have a little more control over the replies to that tweet.”
During the presentation, Xie shared images illustrating the new process in development. Fundamentally, it allows users to define the audience for each of their tweets, directly from the composer window.
The four core audience settings are as follows:
- Global: Anyone can reply to the tweet
- Group: Only people you follow or mention would be able to reply
- Panel: Only people you directly mention within the tweet text itself would be able to reply
- Statement: No tweet replies would be allowed
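To make the mechanics concrete, here’s a minimal sketch of how a per-tweet reply setting like this could be modeled and enforced. The names below (ReplyAudience, Tweet, canReply) are hypothetical illustrations, not Twitter’s actual implementation or API.

```typescript
// Hypothetical model of a per-tweet reply-audience setting (illustrative only).
type ReplyAudience = "global" | "group" | "panel" | "statement";

interface Tweet {
  authorId: string;
  text: string;
  replyAudience: ReplyAudience;
  mentionedUserIds: string[]; // users @-mentioned in the tweet text
}

// Decides whether `userId` would be allowed to reply under each setting.
function canReply(
  tweet: Tweet,
  userId: string,
  authorFollows: Set<string> // accounts the tweet's author follows
): boolean {
  switch (tweet.replyAudience) {
    case "global":
      // Anyone can reply
      return true;
    case "group":
      // Only people the author follows or mentions
      return authorFollows.has(userId) || tweet.mentionedUserIds.includes(userId);
    case "panel":
      // Only people directly mentioned within the tweet text
      return tweet.mentionedUserIds.includes(userId);
    case "statement":
      // No replies allowed
      return false;
  }
}
```

The key design point is that the permission lives on the individual tweet rather than on the account, so a user could mix open conversations with locked-down “statements” from the same composer.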
Let’s break down a couple of pros and cons of this move.
On the downside, it seems to go against the idea of Twitter serving as a larger, public square where everyone is given a say. On the other hand, it could pave the way for more easily facilitated interview-style conversations or live chats featuring celebrities or influencers, who are often overwhelmed by spam. In this way, the conversations can feel more familiar and authentic without all of the secondary noise.