YouTube turns home movies of swimsuit-wearing kids into pedophiles’ buffet, report claims

A New York Times reporter who looked into YouTube’s recommendations of videos featuring scantily clad children says he now has nightmares.

The Google company’s recommendation algorithms take family videos of kids in swimsuits, or dressing, or momentarily nude, and gather them together into a virtual pedophiles’ buffet, Max Fisher reported in a story Monday.

“We talked to child psychologists, sexual trauma specialists, psychologists who work with pedophiles, academic experts on pedophilia, network analysts,” he said in a tweet about the story.

“They all said YouTube has built a vast audience — maybe unprecedented — for child sexual exploitation, with grave risks for kids.”

To illustrate the problematic, artificial intelligence-driven recommendation process, Fisher and co-writer Amanda Taub described what happened when a 10-year-old girl and her friend uploaded a video of themselves playing in a pool: Within a few days, it had garnered 400,000 views.

“YouTube’s automated recommendation system — which drives most of the platform’s billions of views by suggesting what users should watch next — had begun showing the video to users who watched other videos of prepubescent, partially clothed children, a team of researchers has found,” the New York Times story said.

“YouTube had curated the videos from across its archives, at times plucking out the otherwise innocuous home movies of unwitting families, the researchers say. In many cases, its algorithm referred users to the videos after they watched sexually themed content.”

The reporters noted that Wired magazine and other publications reported in February that sexual deviants were using YouTube comments to guide each other to sexualized imagery of children. YouTube said it was deeply concerned and disabled comments on many videos showing kids.

“But the recommendation system, which remains in place, has gathered dozens of such videos into a new and easily viewable repository, and pushed them out to a vast audience,” the Times story continued.

At the root of the problem is YouTube’s AI, which learned from users “who sought out revealing or suggestive images of children,” the Times reported.

After the paper told YouTube what the reporters and researchers had discovered, the platform’s system stopped linking some of the videos together, according to the article.

“Strangely, however, YouTube insisted that the timing was a coincidence,” Fisher tweeted. “When I pushed, YT said the timing might have been related, but wouldn’t say it was.”

A YouTube product director told the paper that the company was committed to removing material that exploited children, and had worked since February to get better at it.

“Protecting kids is at the top of our list,” Jennifer O’Connor said.

However, according to the story, “YouTube has not put in place the one change that researchers say would prevent this from happening again: turning off its recommendation system on videos of children, though the platform can identify such videos automatically.”

YouTube told the paper that it would limit recommendations on videos it deemed to put children at risk, but would not turn off recommendations on videos of children altogether, because recommendations are the biggest driver of the platform’s traffic and removing them would hurt “creators” who rely on the clicks.
