Hundreds Of People Share Stories About Falling Down YouTube’s Recommendation Rabbit Hole



It all started with a simple keyword search for “transgender” on YouTube. Alex had just come out as trans, and was looking for videos from other queer people who had been through a similar experience. YouTube was helpful at first, but soon enough, it served up a series of videos that portrayed being transgender as a mental illness and something to be ashamed of.

“YouTube reminded me why I hid in the closet for so many years,” Alex said, adding that the platform “will always be a place that reminds LGBT individuals that they are hated and provides the means for bigots to make a living spouting hate speech.”

YouTube’s recommendation algorithm — which drives 70% of the site’s traffic — is designed to maximize ad revenue by keeping viewers watching for as long as possible, even if that means pushing out increasingly extreme content for them to binge. Alex, who is identified here by a pseudonym, is one of hundreds of people who were pulled down dark rabbit holes by the algorithm, and shared their stories with the Mozilla Foundation, a San Francisco-based nonprofit that’s urging YouTube to prioritize user safety over profits. (It is also the sole shareholder of the Mozilla Corp., which makes the Firefox web browser.)

One YouTube user who was curious about scientific discovery videos was sucked into a web of conspiracy theories and fringe content. Another who searched for humorous “fail videos” was later fed dash cam footage of horrific, fatal accidents. A third who watched confidence-building videos from a drag queen ended up in an echo chamber of homophobic rants.


Through the crowdsourced compilation of such anecdotes, which reflect the findings of investigative reporting from news outlets including The New York Times, The Washington Post and HuffPost, the Mozilla Foundation aims to show just how powerful the recommendation algorithm can be.

Earlier this year, YouTube announced it would tweak its algorithm to reduce the spread of harmful misinformation and “borderline content,” and to feature authoritative sources more prominently in search results. The Google-owned company pushed back against Mozilla’s research, and told HuffPost that such changes have already yielded progress.

“While we welcome more research on this front, we have not seen the videos, screenshots or data in question and can’t properly review Mozilla’s claims,” said YouTube spokesperson Farshad Shadloo. “Generally, we’ve designed our systems to help ensure that content from more authoritative sources is surfaced prominently in recommendations,” he added, noting that the number of views of borderline content has been cut in half since YouTube adjusted its algorithm.

But independent researchers have no way to verify that claim, leaving them to rely on largely anecdotal data — and that’s part of the problem, said Brandi Geurkink, a Mozilla campaigner based in Europe.

The Mozilla Foundation, which met with YouTube to discuss such issues last month, is joining other organizations and activists in calling on the platform to provide researchers with access to engagement and impression data as well as a historical archive of videos.


“What’s really missing is data that allows researchers to study this problem at scale in a reliable way,” said Geurkink, pointing to YouTube’s API (application programming interface) rate limit, which makes it difficult for researchers to gather meaningful data about the kind of content YouTube promotes. “There’s not enough transparency.”
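To give a concrete sense of the constraint Geurkink describes, the sketch below shows roughly what querying YouTube's public Data API looks like for a researcher. The endpoint, quota figures and placeholder API key are assumptions drawn from the API's public documentation, not from Mozilla's research; the point is only that a default daily quota on the order of a hundred searches makes auditing recommendations at scale impractical.

```python
# Illustrative sketch only: a researcher-style query against the public
# YouTube Data API v3. The API key, search term and quota figures are
# assumptions for this example, not details reported in the article.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

# Under the API's default daily quota (about 10,000 units), with each
# search.list call costing roughly 100 units, a project gets on the
# order of 100 searches per day -- far too few to map what the platform
# promotes across millions of videos.
def search_videos(query, max_results=50):
    params = {
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": API_KEY,
    }
    resp = requests.get(SEARCH_URL, params=params)
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # Return (video ID, title) pairs for whatever the API surfaces.
    return [(it["id"]["videoId"], it["snippet"]["title"]) for it in items]

if __name__ == "__main__":
    for video_id, title in search_videos("transgender"):
        print(video_id, title)
```

Note, too, that the public API returns search results rather than the personalized recommendations shown on the watch page itself, which is part of why researchers say they are left relying on anecdotes and screenshots.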

Has YouTube recommended extremist content to you? Get in touch: [email protected] or [email protected].




