Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
glen carrie / unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more:
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator apparently having flown under the radar before the attack in Christchurch, collective attention has now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube in particular have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for re-uploading and propagating the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it has been reported that fewer than 4,000 people saw the initial stream on Facebook, the company said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or utterly abdicate it), they pass the task on to others, such as the New Zealand internet service providers that blocked access to websites that were re-distributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more:
A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They seem to have relatively high staff turnover, with moderators quickly burnt out by severe workloads as they review the worst content on the internet. They are supported by only meagre wages and what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
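
To make that concrete, here is a minimal, illustrative sketch of perceptual hashing, the general family of techniques that tools like PhotoDNA draw on (the production systems, including ContentID’s fingerprinting, are proprietary and far more robust than this). The filenames are placeholders, and the example uses a simple “difference hash” to show why heavy zooming and cropping can push a re-uploaded frame past an automated matcher’s similarity threshold.

```python
# A minimal sketch of a perceptual "difference hash" (dHash), for illustration only.
# Real systems such as PhotoDNA and ContentID use proprietary, far more robust
# fingerprinting; the filenames below are placeholders, not real data.
# Requires Pillow: pip install Pillow
from PIL import Image


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Shrink to greyscale, then record whether each pixel is brighter than its right neighbour."""
    small = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a frame from an original upload with a frame
# from a re-upload that has been heavily zoomed and cropped.
original = Image.open("original_frame.jpg")  # placeholder filename
zoomed = original.crop((original.width // 4, original.height // 4,
                        3 * original.width // 4, 3 * original.height // 4))

print(hamming_distance(dhash(original), dhash(zoomed)))
# A small distance (a handful of the 64 bits) would be treated as a match;
# aggressive cropping, colour shifts or re-encoding can push it past that
# threshold, which is why hash-matching alone is not enough.
```

The point is not this specific algorithm: any fixed fingerprint can be defeated by transforming the content until it no longer matches, which is why platforms pair automated matching with human review.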

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends, from board games to mukbang (social eating) to video games. Livestreaming is also important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes and shift power relationships. A ban on livestreaming would prevent a lot of this activity.

We need a new approach

Facebook and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara Principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform (a rough sketch of the record-keeping they imply follows the list below), stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.
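
As a rough illustration only, not any platform’s real schema, the principles imply that a platform would need to keep a structured record of every moderation action: something it can aggregate into public numbers, use to notify the affected user, and reference during an appeal. A sketch of what such a record might look like:

```python
# A hypothetical sketch of the record-keeping the Santa Clara Principles imply.
# Field names and values are illustrative assumptions, not any platform's real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ModerationAction:
    content_id: str                        # the post, video or account affected
    user_id: str
    guideline_violated: str                # which content rule was applied
    detection_method: str                  # e.g. "hash match", "user report", "human review"
    action: str                            # "content removed", "account suspended", ...
    actioned_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    user_notified: bool = False            # principle 2: notice to the affected user
    appeal_outcome: Optional[str] = None   # principle 3: result of a timely appeal, if any


def transparency_counts(actions: list[ModerationAction]) -> dict[tuple[str, str], int]:
    """Aggregate actions into the kind of numbers principle 1 asks platforms to publish."""
    counts: dict[tuple[str, str], int] = {}
    for a in actions:
        key = (a.guideline_violated, a.action)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Publishing aggregates like these on a regular schedule, and wiring notification and appeals into the same records, is the kind of openness the principles call for.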

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding of the complexities of managing new and innovative communications technologies.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why news outlets should think twice about republishing the New Zealand mosque shooter’s livestream


Colleen Murrell, Swinburne University of Technology

Like so many times before with acts of mass violence in different parts of the world, news of shootings at two Christchurch mosques on Friday instantly ricocheted around the world via social media.

When these incidents occur, online activity follows a predictable pattern as journalists and others try to learn the name of the perpetrator and any reason behind the killings.

This time they did not have to wait long. In an appalling use of the latest technology, the gunman reportedly livestreamed his killings on Facebook. The footage apparently showed a man moving through the interior of a mosque and shooting at his victims indiscriminately.

Amplifying the spread of this kind of material can be harmful.




Read more:
Since Boston bombing, terrorists are using new social media to inspire potential attackers


Mainstream media outlets posted raw footage from the gunman

The video was later taken down, but not before many had called out the social media company. The ABC’s online technology reporter, Ariel Bogle, blamed the platforms for allowing the video to be shared.

ABC investigative reporter Sophie McNeill asked people on Twitter not to share the video, since the perpetrator clearly wanted it to be widely disseminated. New Zealand police similarly urged people not to share the link and said they were working to have the footage removed.

Following a spate of killings in France in 2016, French mainstream media proprietors decided to adopt a policy of not recycling pictures of atrocities.

The editor of Le Monde, Jérôme Fenoglio, said:

Following the attack in Nice, we will no longer publish photographs of the perpetrators of killings, to avoid possible effects of posthumous glorification.

Today, the name of the Christchurch gunman, his photograph and his Twitter account were easy to find. Later, it was possible to see that his Twitter account had been suspended. On Facebook, it was easy to source pictures, and even a selfie, that the alleged perpetrator had shared on social media before entering the mosque.

But it was not just social media that shared the pictures. Six minutes of raw video was posted by news.com.au, which, after a warning at the front of the clip, showed video from the gunman’s helmet camera as he drove through the streets on his way to the mosque.




Read more:
Mainstream media outlets are dropping the ball with terrorism coverage


The risks of sharing information about terrorism

Sharing this material can be highly problematic. In some past incidents of terrorism and hate crime, pictures of the wrong people have been published around the world on social media and in the mainstream media.

After the Boston Marathon bombing in 2013, the wrong man was identified as a culprit by a crowd-sourced detective hunt on various social media sites.

There is also the real fear that publishing such material could lead to copycat crimes. Along with the photographs and 17 minutes of film, the alleged perpetrator penned a 73-page manifesto, in which he describes himself as “just a regular white man”.

Norwegian extremist Anders Behring Breivik, who killed 69 people on the island of Utøya in 2011, took a similar approach to justifying his acts. Before his killing spree, Breivik wrote a 1,518-page manifesto called “2083: A European Declaration of Independence”.




Read more:
Four ways social media companies and security agencies can tackle terrorism


The public’s right to know

Those who believe in media freedom and the public’s right to know are likely to complain if information and pictures are not available in full view on the internet. Conspiracies fester when people believe they are not being told the truth.

Instant global access to news can also pose problems to subsequent trials of perpetrators, as was shown in the recent case involving Cardinal George Pell.

While some large media platforms, like Facebook and Twitter, are under increasing pressure to clean up their act when it comes to publishing hate crime material, it is nigh on impossible to stop the material popping up in multiple places elsewhere.

Members of the public, and some media organisations, will not stop speculating, playing detective or “rubber-necking” at horror, despite what well-meaning social media citizens may desire. For the media, it’s all about clicks, and unfortunately horror drives clicks.

Colleen Murrell, Associate Professor, Journalism, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.