Meet BreadTube, the YouTube activists trying to beat the far-right at their own game




Alexander Mitchell Lee, Australian National University

YouTube has gained a reputation for facilitating far-right radicalisation and spreading antisocial ideas.

However, in an interesting twist, the same subversive, comedic, satiric and ironic tactics used by far-right internet figures are now being countered by a group of leftwing YouTubers known as “BreadTube”.

By making videos on the same topics as the far-right, BreadTube videos essentially hijack YouTube’s algorithm, getting recommended to viewers who consume far-right content. BreadTubers want to pop YouTube’s political bubbles to create space for deradicalisation.

The subreddit devoted to BreadTube content describes it as being like “YouTube, but good”.

Pivot to the (political) left

The name “BreadTube” has its origin in the anarcho-socialist book The Conquest of Bread, by Peter Kropotkin. The name emerged organically as a more comedic alternative to “LeftTube”, and captures the dissident leftwing nature of the creators it encompasses.

The movement has no clear origin, but many BreadTube channels started in opposition to “anti-SJW” (social justice warrior) content that gained traction in the mid-2010s.

The main figures associated with BreadTube are Natalie Wynn, creator of ContraPoints; Abigail Thorn, creator of Philosophy Tube; Harris Brewis, creator of Hbomberguy; and Lindsay Ellis, creator of a channel named after herself. Originally the label was imposed on these creators, and while they all identify with it to varying degrees, there remains a vibrant debate as to who is part of the movement.

YouTuber Natalie Wynn’s ContraPoints is among the leading channels for BreadTube content.

BreadTubers are united only by a shared interest in combating the far-right online and a willingness to engage with challenging social and political issues. These creators infuse politics with their other interests such as films, video games, popular culture, histories and philosophy.

Wynn, currently the most popular BreadTuber, has described her channel as a “long theatrical response to fascism” — and a part of “the left’s immune system”. In an interview with The New Yorker, Wynn said she wants to create better propaganda than the far-right, with the aim of winning people over rather than just criticising.

Euphemisms, memes and “inside” internet language are also used in a way that traditional media struggle to replicate. The Southern Poverty Law Center has referenced BreadTubers to help unpack how memes spread among far-right groups, and the difficulty in identifying the line between “trolling” and genuine use of far-right symbols.

BreadTubers use the same titles, descriptions and tags as far-right YouTube personalities, so their content is recommended to the same viewers. In their recent journal article on BreadTube, researchers Dmitry Kuznetsov and Milan Ismangil summed up the strategy thus:

The first layer involves use of search algorithms by BreadTubers to disseminate their videos. The second layer – a kind of affective hijacking – revolves around using a variety of theatrical and didactical styles to convey leftist thought.
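
To illustrate the first layer in the roughest possible terms, the sketch below scores two videos as “related” purely by how much their metadata tags overlap. It is a toy only, not YouTube’s actual recommender, which is proprietary and draws on far richer signals; the tag sets shown are hypothetical.

```python
# Toy illustration only: YouTube's real recommender is proprietary and uses far
# richer signals than tag overlap. This sketch scores two videos as "related"
# by the share of metadata tags they have in common (Jaccard similarity).

def jaccard_similarity(tags_a: set[str], tags_b: set[str]) -> float:
    """Proportion of tags shared between two videos."""
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical tag sets, for illustration only.
far_right_video = {"sjw", "free speech", "debate", "cultural marxism"}
breadtube_video = {"sjw", "free speech", "debate", "philosophy"}
unrelated_video = {"sourdough", "baking", "recipe"}

print(jaccard_similarity(far_right_video, breadtube_video))  # 0.6 -> likely surfaced together
print(jaccard_similarity(far_right_video, unrelated_video))  # 0.0 -> unrelated
```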

What are the results?

The success of BreadTubers is hard to quantify, although they appear to be gaining significant traction. They receive tens of millions of views a month and have been increasingly referenced in media and academia as a case study in deradicalisation.

For example, The New York Times has reported in depth on individuals’ journeys from the far-right to deradicalisation via BreadTube. Further, the r/BreadTube subreddit and the comment sections of BreadTube videos are littered with users describing how they broke away from the far-right.

These anecdotal journeys prove little individually, but collectively they suggest the movement is having an effect.

YouTube’s algorithms are a problem

The claim that YouTube helps promote far-right content is both widely accepted and contested.

The central problem in assessing these claims is that YouTube’s algorithm is secret. YouTube’s fixation on maximising watch time has meant users are recommended content designed to keep them hooked.




Read more:
YouTube’s algorithms might radicalise people – but the real problem is we’ve no idea how they work


Critics say YouTube has historically had a tendency to recommend increasingly extreme content to the site’s rightwing users. Until recently, mainstream conservatives had a limited presence on YouTube and thus the extreme right was over-represented in rightwing political and social commentary.

At its worst, the YouTube algorithm can allegedly create a personalised radicalisation bubble, recommending only far-right content and even introducing the viewer to content that pushes them further in that direction.

YouTube is aware of these concerns and does tinker with its algorithm. But how effectively it does this has been questioned.

Limitations

Ultimately, BreadTubers identify and discuss, but don’t have the answer to, many of the structural causes of alienation that may be driving far-right recruitment.

Economic inequality, lack of existential purpose, distrust in modern media and frustration at politicians are just some of the problems that may have a part to play.

Still, BreadTube may yet be one piece of the puzzle in addressing the problem of far-right content online. Having popular voices that are tuned into internet culture — and which aim to respond to extremist content using the same tone of voice — could be invaluable in turning the tide of far-right radicalisation.

Alexander Mitchell Lee, PhD Candidate, Crawford School of Public Policy, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


Timothy Graham, Queensland University of Technology

Amid the chaos in the US Capitol, stoked largely by rhetoric from President Donald Trump, Twitter has locked his account, with 88.7 million followers, for 12 hours.

Facebook and Instagram quickly followed suit, locking Trump’s accounts — with 35.2 million and 24.5 million followers, respectively — for at least two weeks, the remainder of his presidency. The ban was initially set at 24 hours before being extended.

The locks are the latest effort by social media platforms to clamp down on Trump’s misinformation and baseless claims of election fraud.

They came after Twitter labelled a video posted by Trump and said it posed a “risk of violence”. Twitter removed users’ ability to retweet, like or comment on the post — the first time this has been done.

In the video, Trump told the agitators at the Capitol to go home, but at the same time called them “very special” and said he loved them for disrupting the Congressional certification of President-elect Joe Biden’s win.

That tweet has since been taken down for “repeated and severe violations” of Twitter’s civic integrity policy. YouTube and Facebook have also removed copies of the video.

But as people across the world scramble to make sense of what’s going on, one thing stands out: the events that transpired today were not unexpected.

Given the lack of regulation and responsibility shown by platforms over the past few years, it’s fair to say the writing was on the wall.

The real, violent consequences of misinformation

While Trump is no stranger to contentious and even racist remarks on social media, Twitter’s action to lock the president’s account is a first.

The line was arguably crossed by Trump’s implicit incitement of violence and disorder within the halls of the US Capitol itself.

Nevertheless, it would have been a difficult decision for Twitter (and Facebook and Instagram), with several factors at play. Some of these are short-term, such as the immediate potential for further violence.

Then there’s the question of whether tighter regulation could further incite rioting Trump supporters by feeding into their theories claiming the existence of a large-scale “deep state” plot against the president. It’s possible.




Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans


But a longer-term consideration — and perhaps one at the forefront of the platforms’ priorities — is how these actions will affect their value as commercial assets.

I believe the platforms’ biggest concern is their own bottom line. They are commercial companies legally obliged to pursue profits for shareholders. Commercial imperatives and user engagement are at the forefront of their decisions.

What happens when you censor a Republican president? You can lose a huge chunk of your conservative user base, or upset your shareholders.

Despite what we think of them, or how we might use them, platforms such as Facebook, Twitter, Instagram and YouTube aren’t set up in the public interest.

For them, it’s risky to censor a head of state when they know that content is profitable. Doing it involves a complex risk calculus — with priorities being shareholders, the companies’ market value and their reputation.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day


Walking a tightrope

The platforms’ decisions not only to force the removal of several of Trump’s posts but also to lock his accounts carry the potential for an enormous loss of revenue. It’s a major and irreversible step.

And they are now forced to keep a close eye on one another. If one appears too “strict” in its censorship, it may attract criticism and lose user engagement and ultimately profit. At the same time, if platforms are too loose with their content regulation, they must weather the storm of public critique.

You don’t want to be the last organisation to make the tough decision, but you don’t necessarily want to be the first, either — because then you’re the “trial balloon” who volunteered to potentially harm the bottom line.

For all the major platforms, the stakes over the past few years have been high. Yet there have been plenty of opportunities to stop the situation snowballing to where it is now.

From Trump’s baseless election fraud claims to his false ideas about the coronavirus, time and again platforms have turned a blind eye to serious cases of mis- and disinformation.

The storming of the Capitol is a logical consequence of this neglect, and it has arguably been a long time coming.

The coronavirus pandemic illustrated this. While Trump was partially censored by Twitter and Facebook for misinformation, the platforms failed to take lasting action to deal with the issue at its core.

In the past, platforms have cited constitutional reasons to justify not censoring politicians. They have claimed a civic duty to give elected officials an unfiltered voice.

This line of argument should have ended with the “Unite the Right” rally in Charlottesville in August 2017, when Trump responded to the killing of an anti-fascism protester by claiming there were “very fine people on both sides”.

An age of QAnon, Proud Boys and neo-Nazis

While there’s no silver bullet for online misinformation and extremist content, there’s also no doubt platforms could have done more in the past that may have prevented the scenes witnessed in Washington DC.

In a crisis, there’s a rush to make sense of everything. But we need only look at what led us to this point. Experts have long been crying out for platforms to do more to combat disinformation and its growing domestic roots.

Now, in 2021, extremists such as neo-Nazis and QAnon believers no longer have to lurk in the depths of online forums or commit lone acts of violence. Instead, they can violently storm the Capitol.

It would be a cardinal error not to appraise the severity of the neglect that led us here. In some ways, perhaps that’s the biggest lesson we can learn.


This article has been updated to reflect the news that Facebook and Instagram extended their 24-hour ban on President Trump’s accounts.

Timothy Graham, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
glen carrie / unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more:
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator apparently flying under the radar prior to this incident in Christchurch, our collective focus has now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it has been reported that fewer than 4,000 people saw the initial stream on Facebook, Facebook said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or utterly abdicate it) they pass on the task to others — like the New Zealand internet service providers that blocked access to websites that were re-distributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more:
A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They seem to have a relatively high turnover of staff, who are quickly burnt out by severe workloads while moderating the worst content on the internet. They are supported by only meagre wages and what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
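
To make that concrete, here is a minimal sketch of one family of techniques, perceptual hashing, assuming the Pillow imaging library. It is not PhotoDNA or ContentID, whose algorithms are proprietary; it only illustrates why near-identical re-uploads are easy to flag while heavily zoomed, cropped or re-edited copies can drift past a matching threshold. The file names are hypothetical.

```python
# Minimal sketch of perceptual-hash matching, assuming the Pillow imaging
# library. This is NOT PhotoDNA or ContentID (both proprietary); it only
# illustrates the general idea behind flagging re-uploaded frames.
from PIL import Image

def average_hash(path: str, size: int = 8) -> list[int]:
    """Shrink a frame to a size x size greyscale grid and threshold each cell
    against the mean, producing a compact fingerprint of the image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(a: list[int], b: list[int]) -> int:
    """Count the positions where two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical file names, for illustration only.
known_bad = average_hash("flagged_frame.png")
candidate = average_hash("uploaded_frame.png")

# A small distance suggests the same footage; an aggressive zoom, crop or
# re-edit shifts the fingerprint and can push the distance past the threshold,
# which is roughly how distorted re-uploads evade naive matching.
MATCH_THRESHOLD = 10  # arbitrary for this sketch
if hamming_distance(known_bad, candidate) <= MATCH_THRESHOLD:
    print("likely re-upload: queue for human review")
else:
    print("no automatic match")
```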

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends, from board games to mukbang (social eating) to video games. Livestreaming is also important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes and shift power relationships. A ban on livestreaming would prevent a lot of this activity.

We need a new approach

Facebook and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, one in which both policymakers and the public have a greater understanding of the complexities of managing new and innovative communications technologies.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.