
Child abuse content thrives on Twitch’s Clips feature, according to a new report.

Bloomberg’s investigative report details the challenges Twitch faces in effectively moderating its livestreaming platform, particularly around Clips, a feature that lets users save brief videos. The report says that of roughly 1,100 clips analyzed, at least 83 contained sexualized content featuring children. Twitch removed the videos promptly upon notification, and a company spokesperson told ReturnByte via email that it has made significant investments in enforcement tools and preventive measures and is committed to further improvements.

Bloomberg highlighted one incident that exemplifies how Clips lends permanence to an otherwise ephemeral platform. It tells the disturbing story of a 12-year-old boy who took to Twitch last spring to “eat a sandwich and play his horn.” He soon began receiving requests from viewers, which (in a grim snapshot of online behavior) escalated to the boy pulling his pants down.

The outlet describes the incident as being over “in an instant.” Still, Clips’ recording feature allowed one viewer, who allegedly followed more than a hundred accounts belonging to children, to keep it. The 20-second clip allegedly drew more than 130 views before Twitch was notified and removed it.

Clips launched in 2016 as a way to preserve otherwise fleeting moments on the platform. The feature records the 25 seconds before (and five seconds after) a viewer taps the clip button. This has the unfortunate side effect of letting predators preserve a harmful moment and spread it elsewhere.
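
As a rough sketch of how a pre-roll window like this can work, the Python below keeps a rolling buffer of recent frames and stitches it to a short post-roll when a clip is requested. The class, frame rate, and method names are hypothetical and not drawn from Twitch’s actual implementation; only the 25-seconds-before and five-seconds-after figures come from the report.

```python
from collections import deque

class RollingClipBuffer:
    """Keeps the most recent frames in memory so a clip can include
    footage from before the button was pressed. Only the
    25-seconds-before / 5-seconds-after window comes from the report;
    everything else here is illustrative."""

    def __init__(self, fps=30, pre_seconds=25, post_seconds=5):
        self.post_frames = post_seconds * fps
        # A deque with maxlen silently drops the oldest frame once
        # the pre-roll window is full.
        self.pre_roll = deque(maxlen=pre_seconds * fps)

    def add_frame(self, frame):
        # Called continuously while the stream is live.
        self.pre_roll.append(frame)

    def make_clip(self, upcoming_frames):
        # Called when a viewer taps "clip": combine the buffered
        # pre-roll with the next few seconds of live frames.
        return list(self.pre_roll) + list(upcoming_frames)[: self.post_frames]
```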

Twitch plans to expand Clips this year as part of a strategy to bring more TikTok-style content to the platform. It intends to launch a discovery feed (also similar to TikTok’s) where users can submit their short videos.

Bloomberg’s report cites the Canadian Centre for Child Protection, which reviewed the 83 abusive videos and concluded that 34 depicted young users displaying their genitals on camera. The majority were allegedly boys between the ages of 5 and 12. An additional 49 clips contained sexualized content in which minors exposed body parts or were subjected to grooming.

The organization said the 34 videos in the first group were viewed 2,700 times; the rest garnered 7,300 views.

Twitch’s response

“Harming young people anywhere online is unacceptable, and we take this matter very seriously,” a Twitch spokesperson wrote to ReturnByte. In response to being alerted to the child sexual abuse material (CSAM), the company says it has developed new models to detect potential grooming behavior and is updating its existing tools to more effectively identify and remove banned users who attempt to create new accounts, including for matters related to youth safety.

Twitch adds that it has stepped up its safety teams’ enforcement against live broadcasts, the source of Clips. “This means that, because clips are created from livestreams, when we disable a stream containing harmful content and suspend the channel, we prevent the creation and spread of harmful clips at the source,” the company wrote. “Importantly, we’ve also worked to ensure that when we remove and disable clips that violate our community rules, they’re not available through public domains or other direct links.”

“We also know that, unfortunately, online harms evolve,” the spokesperson continued. “We have improved the guidelines our internal safety teams use to identify some of these emerging online harms, such as generative AI-enabled child sexual abuse material (CSAM).” Twitch added that it has expanded the list of external organizations it works with to (hopefully) suppress similar content in the future.

Twitch moderation issues

Bloomberg reports that Clips has been one of the least moderated sections of Twitch. It also notes that the company laid off 15 percent of its internal trust and safety team in April 2023 (part of a painful year of tech layoffs) and has grown more dependent on outside partners to combat CSAM.

Twitch’s streaming-focused platform poses a trickier moderation challenge than more traditional video sites like YouTube or Instagram. Those platforms can compare uploaded videos against hashes – digital fingerprints that detect previously known problematic files already published online. “Hash technology looks for something that matches something seen before,” Lauren Coffren of the US National Center for Missing and Exploited Children told Bloomberg. “Streaming means it’s brand new.”
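
To make the contrast concrete, here is a minimal Python sketch of upload-time hash matching; the hash set and function names are illustrative assumptions, and production systems rely on perceptual hashing that tolerates re-encoding rather than the exact SHA-256 comparison shown here.

```python
import hashlib

# Hypothetical set of fingerprints for files already flagged by
# moderators or partner organizations. Real systems typically use
# perceptual hashes, which survive re-encoding, rather than the exact
# cryptographic digests shown here.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of an uploaded file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_content(path):
    """True if the upload exactly matches a previously seen file.
    A live stream has no prior copy to compare against, which is why
    this approach breaks down for streamed content."""
    return file_sha256(path) in KNOWN_BAD_HASHES
```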
