YouTube Algorithms Fail to Filter Graphic Content From Young Viewers

Google’s YouTube video service is filled with disturbing channels and low-quality videos aimed at preschoolers that “border on abuse,” according to a new article.

A new article from writer James Bridle takes a deep dive into the weird world of YouTube Kids videos, whose popular genres and channels include endless series of clips in which children are vomited on by family members, and machinima-like music videos in which stock cartoon characters meet gory, violent ends.

Bridle, a campaigning technology-focused artist and writer, documents how the video platform’s algorithmic curation drives enormous numbers of viewers to content made purely to satisfy those algorithms as closely as possible.

Basically, these algorithms come down to automation. Whether it’s pregnant-Disney-character videos or the rambling word strings that Bridle points to, these clips and their titles are constructed to exploit YouTube’s computerized curation. They contain words and phrases that parents might search for, or that trigger YouTube’s recommended-videos feature, sliding them in alongside innocuous nursery rhymes and legitimate videos from popular kids’ shows.

While the videos are sometimes funny and always strange, we can’t forget that they are targeted at extremely young children, who often watch unattended. What’s more, their generic soundtracks offer no audio cue to parents who rely on sound to tell them when it’s time to supervise their kids’ viewing.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

The mysterious nature of the videos’ production cycle and the way they embody the “attention economy’s” worst traits make them an exploitative version of the propaganda machines that exist on Twitter and Facebook, but one aimed at tiny children, seemingly for the sole purpose of racking up ad views. The traumatizing nature of these videos seems to be beside the point.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. I believe they have an absolute responsibility to deal with this, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos, of any political persuasion. They have so far shown absolutely no inclination to do this, which is in itself despicable. However, a huge part of my troubled response to this issue is that I have no idea how they can respond without shutting down the service itself, and most systems which resemble it. We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay. The asides I’ve kept in parentheses throughout, if expanded upon, would allow one, with very little effort, to rewrite everything I’ve said to be not about child abuse, but about white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.

Read James Bridle’s excellent article in its entirety here.

  • Any unbridled access to the internet – even open-ended, theme-based searches for autogenerated playlists on YouTube – is dangerous for anyone who cares about curated content. A TV station is something you can choose to turn on and, if you trust it, let run. The parallel would be a YouTube channel you trust and choose to play through. Curate your viewing options; don’t leave them open to topical insertion from the gamut of unverified content, even if it’s algorithmically selected. Have a human behind all those decisions, somewhere, whether yourself or a trusted individual/organization.

    I fear for children whose parents just let them watch YouTube on auto-play with no supervision, even if it’s limited to themes or subjects, as this article addresses. That’s effectively like giving them the search bar and letting them view any image related to a keyword. Guaranteed, there’s a twisted/undesirable variant of most any topic out there.

    If you care about what your children see, make sure you see what they see, or trust WHO (not what, like an algorithm) determines what they see.
