Dealing with Social Media Trolls



Anti-vaccination mothers have outsized voice on social media – pro-vaccination parents could make a difference


Vaccinations are important to protect against a host of diseases.
www.shutterstock.com

Brooke W. McKeever, University of South Carolina and Robert McKeever, University of South Carolina

A high school student from Ohio made national headlines recently by getting inoculated despite his family’s anti-vaccination beliefs.

Ethan Lindenberger, 18, who had never been vaccinated, had begun to question his parents’ decision not to immunize him. He went online to research and ask questions, posting to Reddit, a social discussion website, about how to be vaccinated. His online quest went viral.

In March 2019, he was invited to testify before a U.S. Senate Committee hearing on vaccines and preventable disease outbreaks. In his testimony, he said that his mother’s refusal to vaccinate him was informed partly by her online research and the misinformation about vaccines she found on the web.

Lindenberger’s mother is hardly alone. Public health experts have blamed online anti-vaccination discussions in part for New York’s worst measles outbreak in 30 years. Anti-vaccine activists also have been cited for the growth of anti-vaccination sentiments in the U.S. and abroad.

We are associate professors who study health communication. We are also parents who read online vaccination-related posts, and we decided to conduct research to better understand people’s communication behaviors related to childhood vaccinations. Our research examined mothers – the voices most central to this online discussion – and our findings show that those who oppose vaccinations communicate most about this issue.

What prompts mothers to speak out

A strong majority of parents in the U.S. support vaccinations, yet anti-vaccination sentiment is rising in the U.S. and globally. The World Health Organization identified the reluctance or refusal to vaccinate despite the availability of vaccines as one of the top 10 threats to global health in 2019.

Mothers are critical decision-makers in determining whether their children should be vaccinated. In our study, we surveyed 455 mothers online to determine who communicates most about vaccinations and why.

In general, previous research has shown that people evaluate opinion climates – what the majority opinion seems to say – before expressing their own ideas about issues. This is true particularly on controversial subjects such as affirmative action, abortion or immigration. If an individual perceives their opinion to be unpopular, they may be less likely to say what they think, especially if an issue receives a lot of media attention, a phenomenon known as the spiral of silence.

If individuals, however, have strong beliefs about an issue, they may express their opinions whether they are commonly held or minority perspectives. These views can dominate conversations as others online find support for their views and join in.

Our recent study found that mothers who contributed information online shared several perspectives. Mothers who didn’t strongly support childhood vaccinations were more likely to seek out, pay attention to and forward information about the issue, and to speak out about it, compared with those who did support childhood vaccinations.

Those who believed that vaccinations were an important issue (whether they were for or against them) were more likely to express an opinion. And those who opposed vaccinations were more likely to post their beliefs online.

Ethan Lindenberger testifies before a congressional committee about his decision to be vaccinated against his family’s wishes.
AP Photo/Carolyn Kaster

How social media skews facts

Social media posts read by millions of people can influence online news content and amplify minority opinions and health myths. For example, Twitter and Reddit posts related to the vaccine-autism myth can drive news coverage.

Those who expressed online opinions about vaccinations also drove news coverage. Other research we co-authored shows that posts related to the vaccine-autism myth were followed by online news stories on the topic in the U.S., Canada and the U.K.

Recent efforts by social media sites, such as Facebook, to stop false health information from spreading can help correct public misinformation. However, it is unclear what types of communication will counter misinformation and myths that are repeated and reinforced online.

Countering skepticism

Our work suggests that those who agree with the scientific facts about vaccination may not feel the need to pay attention to this issue or voice their opinions online. They likely already have made up their minds and vaccinated their children.

But from a health communication perspective, it is important that parents who support vaccination voice their opinions and experiences, particularly in online environments.

Studies show that how much parents trust or distrust doctors, scientists or the government influences where they land in the vaccination debate. Perspectives of other parents also provide a convincing narrative to understand the risks and benefits of vaccination.

Scientific facts and messaging about vaccines, such as information from organizations like the World Health Organization and the Centers for Disease Control and Prevention, are important in the immunization debate.

But research demonstrates that social consensus, informed in part by peers and other parents, is also an effective element in conversations that shape decisions.

If mothers or parents who oppose or question vaccinations continue to communicate, while those who support vaccinations remain silent, a false consensus may grow. This could result in more parents believing that a reluctance to vaccinate children is the norm – not the exception.


Brooke W. McKeever, Associate Professor, University of South Carolina and Robert McKeever, Associate Professor, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australian media regulators face the challenge of dealing with global platforms Google and Facebook



‘Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.’
Shutterstock/Roman Pyshchyk

Terry Flew, Queensland University of Technology

With concerns growing worldwide about the economic power of digital technology giants such as Google and Facebook, there was plenty of interest internationally in Australia’s Digital Platforms Inquiry.

The Australian Competition and Consumer Commission (ACCC) inquiry was seen as undertaking a forensic account of market dominance by digital platforms, and the implications for Australian media and the rights of citizens around privacy and data protection.

The inquiry’s final report, released last month, has been analysed from perspectives such as competition policy, consumer protection and the future of journalism.




Read more: Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


But the major limitation facing the ACCC, and the Australian government, in developing new regulations for digital platforms is jurisdictional authority – given these companies are headquartered in the United States.

More ‘platform-neutral’ approach

Among the ACCC’s 23 recommendations is a proposal to reform media regulations to move from the current platform-specific approaches (different rules for television, radio, and print media) towards a “platform-neutral” approach.

This will ensure comparable functions are effectively and consistently regulated:

Digitalisation and the increase in online sources of news and media content highlight inconsistencies in the current sector-specific approach to media regulation in Australia […]

Digital platforms increasingly perform similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online. Despite this, virtually no media regulation applies to digital platforms.

The ACCC’s recommendations to harmonise regulations across different types of media draw on major Australian public inquiries from the early 2010s, such as the Convergence Review and the Australian Law Reform Commission’s review of the national media classification system. These reports identified the inappropriateness of “silo-ised” media laws and regulations in an age of digital convergence.




Read more: What Australia’s competition boss has in store for Google and Facebook


The ACCC also questions the continued appropriateness of the distinction between platforms and publishers in an age where the largest digital platforms are not simply the carriers of messages circulated among their users.

The report observes that such platforms are increasingly at the centre of digital content distribution. Online consumers increasingly access social news through platforms such as Facebook and Google, as well as video content through YouTube.

The advertising dollar

While the ACCC inquiry focused on the impact of digital platforms on news, we can see how they have transformed the media landscape more generally, and where issues of the wider public good arise.

Their dominance over advertising has undercut traditional media business models. Online now accounts for about 50% of total advertising spend, and the ACCC estimates that 71 cents of every dollar spent on digital advertising in Australia goes to Google or Facebook.

All media are now facing the implications of a more general migration to online advertising, as platforms can better micro-target consumers rather than relying on the broad brush approach of mass media advertising.

The larger issue facing potential competitors to the digital giants is the accumulation of user data. This includes the lack of transparency around algorithmic sorting of such data, and the capacity to use machine learning to apply powerful predictive analytics to “big data”.

In line with recent critiques of platform capitalism, the ACCC is concerned about the lack of information consumers have about what data the platforms hold and how it’s being used.

It’s also concerned the “winner-takes-most” nature of digital markets creates a long term structural crisis for media businesses, with particularly severe implications for public interest journalism.

Digital diversity

Digital platform companies do not sit easily within a recognisable industry sector as they branch across information technology, content media, and advertising.

They’re also not alike. While all rely on the capacity to generate and make use of consumer data, their business models differ significantly.

The ACCC chose to focus only on Google and Facebook, but they are quite different entities.

Google dominates search advertising and is largely a content aggregator, whereas Facebook for the most part provides display advertising that accompanies user-generated social media. This presents its own challenges in crafting a regulatory response to the rise of these digital platform giants.

A threshold issue is whether digital platforms should be understood to be media businesses, or businesses in a more generic sense.

Communications policy in the 1990s and 2000s commonly treated digital platforms as carriers. This shielded them from laws and regulations relating to content that users uploaded to their sites.

But this carriage/content distinction has always coexisted with active measures on the part of the platform companies to manage content that is hosted on their sites. Controversies around content moderation, and the legal and ethical obligations of platform providers, have accelerated greatly in recent years.

To the degree that companies such as Google and Facebook increasingly operate as media businesses, this would bring aspects of their activities within the regulatory purview of the Australian Communication and Media Authority (ACMA).

The ACCC recommended ACMA should be responsible for brokering a code of conduct governing commercial relationships between the digital platforms and news providers.




Read more: Consumer watchdog: journalism is in crisis and only more public funding can help


This would give it powers related to copyright enforcement, allow it to monitor how platforms are acting to guarantee the trustworthiness and reliability of news content, and minimise the circulation of “fake news” on their sites.

Overseas, but over here

Companies such as Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.

The capacity to address competition and market dominance issues is limited by the fact that real action could only meaningfully occur in their home market, the US.

Australian regulators are going to need to work closely with their counterparts in other countries and regions: the US and the European Union are the two most significant in this regard.

Terry Flew, Professor of Communication and Creative Industries, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

US House of Representatives condemns racist tweets in another heady week under President Donald Trump


Bruce Wolpe, University of Sydney

The past three days in US politics have been very difficult – and ugly.

President Donald Trump chose to exploit divisions inside the Democratic Party in the House of Representatives – generational and ideological – by attacking four new women members of Congress, denying their status as Americans and their legitimacy to serve in Congress. They are women of colour and, yes, they are from the far left of the Democratic Party. They have pushed hard against their leaders.

But Trump’s vicious, racist attacks on them have in fact solved the unity problem among the Democrats: they are today (re)united against Trump.




Read more: Two dozen candidates, one big target: in a crowded Democratic field, who can beat Trump?


You can draw a straight line from Trump’s birther attacks on Obama, to his “Mexican rapists” attack when he announced his run for the presidency, to his Muslim immigration ban, to equivocating over Nazis marching in Charlottesville, to sending troops to the US-Mexico border, to shutting down the government, to declaring a national emergency, to what he is doing today.

And his attacks on these lawmakers are based on a lie: three of the congresswomen were born in America. One is an immigrant, now a citizen, and as American as any citizen – just like Trump’s wife.

I worked in the House of Representatives for ten years. I learned early that you do not impugn – you have no right to impugn – the legitimacy of an elected member of Congress. Only the voters can do that.

Other presidents have been racist. Lyndon Johnson worked with the southern segregationists. Nixon railed in private against Jews. But none have spoken so openly, so publicly, without shame or remorse for these sentiments. So this is new territory.

And this is unlike Charlottesville, where there was vocal and visible pushback from Republicans on Trump giving an amber light to the Nazis in the streets. This is how much the political culture and norms have corroded over the past two years.

The Democrats chose to fight back by bringing to the House of Representatives floor a resolution condemning Trump for his remarks. Historians are still scurrying, but it appears this is unprecedented – the House has never in its history, which dates to the 1790s, voted to condemn a president’s remarks. (The Senate censured President Andrew Jackson over banking issues in 1834.)

The House passed the measure almost along party lines, with only four Republicans out of 197 – just 2% – voting for the resolution.

The concluding words in the resolution are these:

Whereas President Donald Trump’s racist comments have legitimised fear and hatred of new Americans and people of color: Now, therefore, be it resolved, That the House of Representatives […] condemns President Donald Trump’s racist comments that have legitimised and increased fear and hatred of new Americans and people of color by saying that our fellow Americans who are immigrants, and those who may look to the President like immigrants, should “go back” to other countries, by referring to immigrants and asylum seekers as “invaders”, and by saying that Members of Congress who are immigrants (or those of our colleagues who are wrongly assumed to be immigrants) do not belong in Congress or in the United States of America.

So Trump is secure within his party – and he believes he has nothing to fear from the testimony of the special counsel, Robert Mueller, next week before the House Judiciary and Intelligence Committees.

Much attention will be paid to the examination of obstruction-of-justice issues when Mueller testifies. But the more meaningful discussion will occur in the assessment by the intelligence committee examining Russian interference in the 2016 election, and the persistence of a Russian threat in 2020.

Mueller ended his Garbo-like appearance before the media in May with these words:

The central allegation of our indictments [is] that there were multiple, systematic efforts to interfere in our election. That allegation deserves the attention of every American.

The US presidential election remains vulnerable and it is not clear that sufficient safeguards are being put in place to protect the country’s democracy.

But it is the unresolved drama over impeachment that will colour Mueller’s appearance on Wednesday.




Read more: Explainer: what is a special counsel and what will he investigate in the Trump administration?


Mueller concluded he could not indict a sitting president. However, he forensically detailed ten instances of possible obstruction of justice. Mueller said that if he believed Trump had not committed a crime he would have said so and that, as a result, he could not “exonerate” Trump.

The key question that will be asked of Mueller is: “If the record you developed on obstruction of justice was applied to any individual who was not president of the United States, would you have sought an indictment?”

And on the answer to that question turns the issue of whether there will be critical mass among House of Representatives Democrats, and perhaps supported by the American people, to vote for a bill of impeachment against Donald J. Trump.

Bruce Wolpe, Non-resident senior fellow, United States Study Centre, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View from The Hill: Victorian Liberal candidates find social media footprints lethal


Michelle Grattan, University of Canberra

Whether or not it’s some sort of record, the Liberals’ loss of two Victorian candidates in a single day is way beyond what Oscar Wilde would have dubbed carelessness.

Already struggling in that state, the Victorian Liberals managed to select one candidate who, judged on his words, was an appalling Islamophobe and another who was an out-and-out homophobe.

The comments that have brought them down weren’t made in the distant past – they date from last year.

Jeremy Hearn was disendorsed as the party’s candidate for the Labor seat of Isaacs, after it came to light that he had written, among other things, that a Muslim was someone who subscribed to an ideology requiring “killing or enslavement of the citizens of Australia if they do not become Muslim”. This was posted in February 2018.

Peter Killin, who was standing in Wills, withdrew over a comment (in reply to another commenter) he posted in December that included suggesting Liberal MP Tim Wilson should not have been preselected because he’s gay.

Scott Morrison rather quaintly explained the unfortunate choice of Killin by saying “he was a very recent candidate who came in because we weren’t able to continue with the other candidate because of section 44 issues”.

Oops and oops again. First the Victorian Liberals picked someone who didn’t qualify legally, and then they replaced that candidate with one who didn’t qualify under any reasonable test of community standards.




Read more: Politics with Michelle Grattan: Tim Colebatch on the battle in Victoria – and the Senate


It’s not just the Liberals who have problems with candidates holding unacceptable views, or behaving badly.

Labor’s Northern Territory number 2 Senate candidate Wayne Kurnoth, who shared anti-Semitic material on social media, recently stood down. Bill Shorten embarrassed himself on Wednesday by saying he hadn’t met the man, despite having been filmed with him.

Then there’s the case of Luke Creasey, the Labor candidate running in Melbourne – held by the Greens’ Adam Bandt – who shared rape jokes and pornographic material on social media. He has done a mea culpa, saying his actions happened “a number of years ago” and “in no way reflect the views I hold today”. Creasey still has his endorsement. Labor Senate leader Penny Wong has defended him, including by distinguishing between a “mistake” and “prejudice”.

It should be remembered that, given the post-nomination timing, these latest candidates unloaded by their parties have not lost their spots or their party designations on the ballot paper.

As Antony Green wrote when a NSW Liberal candidate had to withdraw during the state election (after a previous association with an online forum which reportedly engaged in unsavoury jokes), “the election goes ahead as if nothing had happened”.

It won’t occur this time, but recall the Pauline Hanson experience. In 1996, the Liberals disendorsed Hanson for racist remarks but she remained on the ballot paper with the party moniker. She was duly elected – and no doubt quite a few voters had thought she was the official Liberal candidate.

What goes around comes around – sort of.

This week Hanson’s number 2 Queensland Senate candidate, Steve Dickson, quit all his party positions after footage emerged of his groping and denigrating language at a Washington strip club. But Dickson is still on the Senate ballot paper.

While the latest major party candidates have been dumped for their views, this election has produced a large number of candidates who clearly appear to be legally ineligible to sit in parliament.

Their presence is despite the fact that, after the horrors of the constitution’s section 44 during the last parliament, candidates now have to provide extensive details for the Australian Electoral Commission about their eligibility.

Although the AEC does not have any role in enforcing eligibility, the availability of this data makes it easier in many cases to spot candidates who have legal question marks.

Most of the legally-dubious candidates have come from minor parties, and these parties – especially One Nation, Palmer’s United Australia Party and Fraser Anning’s Conservative National Party – are getting close media attention.

When the major parties discovered prospective candidates who would hit a section 44 hurdle – and there have been several – they quickly replaced them.

But the minor parties don’t seem too worried about eligibility. While most of these people wouldn’t have a hope in hell of being elected, on one legal view there is a danger of a High Court challenge if someone was elected on the preferences of an ineligible candidate.

The section 44 problems reinforce the need to properly fix the constitution, as I have argued before. It will be a miracle if section 44 doesn’t cause issues in the next parliament, because in more obscure cases a problem may not be easy to spot.




Read more: View from The Hill: Section 44 remains a constitutional trip wire that should be addressed


But what of those with beyond-the-pale views?

At one level the fate of the two Victorian Liberal candidates carries the obvious lesson for aspirants: be careful what you post on social media, and delete old posts.

That’s the expedient point. These candidates were caught out by what they put, and left, online.

But there is a deeper issue. Surely vetting of candidates standing for major parties must properly require a very thorough examination of their views and character.

Admittedly sometimes decisions will not be easy – judgements have to be made, including a certain allowance in the case of things said or done a long time before (not applicable with the two in question).

But whether the factional nature of the Victorian division is to blame for allowing these candidates to get through, or the inattention of the party’s powers-that-be (or, likely, a combination of both), it’s obvious that something went badly wrong.

That they were in unwinnable seats (despite Isaacs being on a small margin) should be irrelevant. All those who carry the Liberal banner should be espousing values in line with their party, which does after all claim to put “values” at the heart of its philosophy. The same, of course, goes for Labor.

Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Goodbye Google+, but what happens when online communities close down?



Google+ is the latest online community to close.
Shutterstock/rvlsoft

Stan Karanasios, RMIT University

This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.

If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.

Google+ is no longer available to users.
Google+/Screengrab

But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).




Read more: Sexual subcultures are collateral damage in Tumblr’s ban on adult content


So are we witnessing the decline of online communities and user comments?

Turning off online communities and user-generated content

One of the most well-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by owner Amazon in 2017 with just two weeks’ notice to its users.

This is not confined to online communities; it mirrors a broader trend among organisations to restrict or turn off their user-generated content. Last year the subscription video-on-demand website Netflix said it no longer allowed users to write reviews. It subsequently deleted all existing user-generated reviews.

Other popular websites have disabled their comments sections, including National Public Radio (NPR), The Atlantic, Popular Science and Reuters.

Why the closures?

Organisations have a range of motivations for taking such actions: low uptake, running costs, the challenges of managing moderation, and problems with divisive comments, conflict and a lack of community cohesion.

In the case of Google+, low usage and data breaches appear to have sped up the decision.

NPR explained its motivation to remove user comments by highlighting how in one month its website NPR.org attracted 33 million unique users and 491,000 comments. But those comments came from just 19,400 commenters – roughly 0.06% of those users – and the number of commenters who posted in consecutive months was a fraction of that.

This led NPR’s managing editor for digital news, Scott Montgomery, to say:

We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.

He said audiences had also moved to engage with NPR more on Facebook and Twitter.

Likewise, The Atlantic explained that its comments sections had become “unhelpful, even destructive, conversations” and was exploring new ways to give users a voice.

In the case of IMDB closing its message boards in 2017, the reason given was:

[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.

The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “(…) primary place they (users) choose to post comments and communicate with IMDb’s editors and one another”.

User backlash

Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.

But most organisations take these considerations into account in their decision-making.

The petition to save IMDB’s message boards.
Change.org/Screengrab

For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.

Community members are at the mercy of hosting organisations, some profit-driven, which may have conflicting motivations to those of the users. It’s those organisations that hold the power to change or shut down what can be considered by some to be critical sources of knowledge, engagement and community building.

In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.

This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).

So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.

The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.

It’s also a move away from dealing with the reputational issues related to their use and controlling the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comments spaces.

Lost community knowledge

But within online groups there often exist social and network capital, as well as a stock of valuable knowledge created by such community features.




Read more: Zuckerberg’s ‘new rules’ for the internet must move from words to actions


Often these communities are made up of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.

They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.

Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.

Stan Karanasios, Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Livestreaming terror is abhorrent – but is more rushed legislation the answer?



The perpetrator of the Christchurch attacks livestreamed his killings on Facebook.
Shutterstock

Robert Merkel, Monash University

In the wake of the Christchurch attack, the Australian government has announced its intention to create new criminal offences relating to the livestreaming of violence on social media platforms.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill will create two new crimes:

It will be a criminal offence for social media platforms not to remove abhorrent violent material expeditiously. This will be punishable by 3 years’ imprisonment or fines that can reach up to 10% of the platform’s annual turnover.

Platforms anywhere in the world must notify the Australian Federal Police if they become aware their service is streaming abhorrent violent conduct that is happening in Australia. A failure to do this will be punishable by fines of up to A$168,000 for an individual or A$840,000 for a corporation.

The government is reportedly seeking to pass the legislation in the current sitting week of Parliament. This could be the last sitting week of the current parliament before an election is called. Labor, or some group of crossbenchers, will need to vote with the government if the legislation is to pass. But the draft bill was only made available to the Labor Party last night.

This is not the first time that legislation relating to the intersection of technology and law enforcement has been raced through parliament, to the consternation of parts of the technology industry and other groups. Ongoing concerns around the Assistance and Access Bill demonstrate the risks of such rushed legislation.




Read more: China bans streaming video as it struggles to keep up with live content


Major social networks already moderate violence

The government has defined “abhorrent violent material” as:

[…] material produced by a perpetrator, and which plays or livestreams the very worst types of offences. It will capture the playing or streaming of terrorism, murder, attempted murder, torture, rape and kidnapping on social media.

The major social media platforms already devote considerable resources to content moderation. They are often criticised for their moderation policies, and the inconsistent application of those policies. But content fitting the government’s definition is already clearly prohibited by Twitter, Facebook, and Snapchat.

Social media companies rely on a combination of technology, and thousands of people employed as content moderators to remove graphic content. Moderators (usually contractors, often on low wages) are routinely called on to remove a torrent of abhorrent material, including footage of murders and other violent crimes.




Read more: We need to talk about the mental health of content moderators


Technology is helpful, but not a solution

Technologies developed to assist with content moderation are less advanced than one might hope – particularly for videos. Facebook’s own moderation tools are mostly proprietary. But we can get an idea of the state of the commercial art from Microsoft’s Content Moderator API.

The Content Moderator API is an online service designed to be integrated by programmers into consumer-facing communication systems. Microsoft’s tools can automatically recognise “racy or adult content”. They can also identify images similar to ones in a list. This kind of technology is used by Facebook, in cooperation with the office of the eSafety Commissioner, to help track and block image-based abuse – commonly but erroneously described as “revenge porn”.
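
To make that concrete, here is a minimal sketch of what calling such an image-moderation service looks like. It is modelled on the general shape of Microsoft’s Content Moderator REST endpoints; the region, subscription key and exact field names below are placeholders and assumptions, not a verified specification.

```python
# A minimal sketch of calling an image-moderation REST service, modelled on
# the general shape of Microsoft's Content Moderator API. The region, key,
# endpoint path and response field names are assumptions/placeholders.
import requests

REGION = "australiaeast"                       # placeholder Azure region
ENDPOINT = f"https://{REGION}.api.cognitive.microsoft.com/contentmoderator"
KEY = "YOUR_SUBSCRIPTION_KEY"                  # placeholder credential

def evaluate_image(image_url: str) -> dict:
    """Ask the service to score an image for 'racy or adult' content."""
    resp = requests.post(
        f"{ENDPOINT}/moderate/v1.0/ProcessImage/Evaluate",
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"DataRepresentation": "URL", "Value": image_url},
    )
    resp.raise_for_status()
    return resp.json()  # adult/racy classification scores and flags

scores = evaluate_image("https://example.com/user-upload.jpg")
print(scores.get("IsImageAdultClassified"), scores.get("AdultClassificationScore"))
```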

The Content Moderator API cannot automatically classify an image, let alone a video, as “abhorrent violent content”. Nor can it automatically identify videos similar to another video.

Technology that could match videos is under development. For example, Microsoft is currently trialling a matching system specifically for video-based child exploitation material.

As well as developing new technologies themselves, the tech giants are enthusiastic adopters of methods and ideas devised by academic researchers. But they are some distance from being able to automatically identify re-uploads of videos that violate their terms of service, particularly when uploaders modify the video to evade moderators. The ability to automatically flag these videos as they are uploaded or streamed is even more challenging.

Important questions, few answers so far

Evaluating the government’s proposed legislative amendments is difficult given that details are scant. I’m a technologist, not a legal academic, but even to me the scope and application of the legislation are currently unclear. Before any legislation is passed, a number of questions need to be addressed – too many to list here, but for instance:

Does the requirement to remove “abhorrent violent material” apply only to material created or uploaded by Australians? Does it only apply to events occurring within Australia? Or could foreign social media companies be liable for massive fines if videos created in a foreign country, and uploaded by a foreigner, were viewed within Australia?

Would attempts to render such material inaccessible from within Australia suffice (even though workarounds are easy)? Or would removal from access anywhere in the world be required? Would Australians be comfortable with a foreign law that required Australian websites to delete content displayed to Australians based on the decisions of a foreign government?




Read more: Anxieties over livestreams can help us design better Facebook and YouTube content moderation


Complex legislation needs time

The proposed legislation does nothing to address the broader issues surrounding promotion of the violent white supremacist ideology that apparently motivated the Christchurch attacker. While that does not necessarily mean it’s a bad idea, it would seem very far from a full governmental response to the monstrous crime an Australian citizen allegedly committed.

It may well be that the scope and definitional issues are dealt with appropriately in the text of the legislation. But considering the government seems set on passing the bill in the next few days, it’s unlikely lawmakers will have the time to carefully consider the complexities involved.

While the desire to prevent further circulation of perpetrator-generated footage of terrorist attacks is noble, taking effective action is not straightforward. Yet again, the federal government’s inclination seems to be to legislate first and discuss later.

Robert Merkel, Lecturer in Software Engineering, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital campaigning on sites like Facebook is unlikely to swing the election



Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.
Shutterstock

Glenn Kefford, Macquarie University

With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also potential threats to the integrity of the election.

Invariably, this discussion leads straight to digital.

In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.

Some degree of confusion is understandable, since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is, for the most part, unfounded.




Read more: Chinese social media platform WeChat could be a key battleground in the federal election


Why parties use digital media

In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.

The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.

Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.

Platforms like Facebook offer services that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with tailored messaging.
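
To illustrate, the audience definition handed to an ad platform is essentially a structured filter over user attributes. The sketch below is indicative only – the field names loosely follow the shape of Facebook’s Marketing API targeting specs, and all the values are invented:

```python
# Indicative sketch of an ad-audience "targeting spec": a structured filter
# over user attributes. Field names loosely follow the general shape of
# Facebook's Marketing API targeting objects; all values are invented.
targeting_spec = {
    "geo_locations": {"countries": ["AU"]},    # restrict to Australian users
    "age_min": 35,
    "age_max": 55,
    "interests": [                             # platform-inferred interests
        {"name": "Small business"},
        {"name": "Renewable energy"},
    ],
}

# A campaign attaches different creative (the tailored message) to each such
# segment, rather than broadcasting one message to everyone.
print(targeting_spec)
```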

Voting, persuasion and mobilisation

While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often claimed to have.

What we know is that digital messaging can have a small but significant effect on mobilisation, that there are concerns about how it could be used to demobilise voters, and that it is an effective way to fundraise and organise. But its ability to independently persuade voters to change their votes is estimated to be close to zero.




Read more: Australian political journalists might be part of a ‘Canberra bubble’, but they engage the public too


The exaggeration and lack of clarity around digital are problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.

While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as Trump in the US.

Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps them in their persuasion efforts, but this is through layering their messages across as many mediums as possible, and using the network effect that social media provides.

Data privacy and foreign interference

The two clear and obvious dangers related to digital are data privacy and foreign meddling. We should not accept that our data are shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.

An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a majority of voters see a distinction between commercial entities advertising to us online and parties or other campaigners doing so.

We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.

The more vexing issue is that of foreign interference. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.

This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the broader strategy.




Read more: Australia should strengthen its privacy laws and remove exemptions for politicians


The digital battleground

As the 2019 campaign unfolds, we should remember that, while digital matters, there is no evidence it has an independent election-changing effect.

Australians should be most concerned with how our data are being used and sold, and about any attempts to meddle in our elections by state and non-state actors.

The current regulatory environment fails to meet community standards. More can and should be done to protect us and our democracy.


This article has been co-published with The Lighthouse, Macquarie University’s multimedia news platform.

Glenn Kefford, Senior Lecturer, Department of Modern History, Politics and International Relations, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
glen carrie / unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more: Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator apparently flying under the radar prior to the incident in Christchurch, our collective focus has now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it’s been reported that fewer than 4,000 people saw the initial stream on Facebook, Facebook said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or, utterly abdicate it) they pass on the task to others — like the New Zealand Internet Service Providers that blocked access to websites that were re-distributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more: A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They seem to have a relatively high turnover of staff, who are quickly burnt out by severe workloads while moderating the worst content on the internet. They receive only meagre wages, and what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
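
To see why such tweaks work, consider the simplest member of this family of techniques: perceptual hashing. The sketch below is illustrative only – ContentID and PhotoDNA are proprietary and far more sophisticated – but it shows how a fingerprint can be derived from image content, and why zooming, cropping or colour distortion can flip enough bits to defeat a naive match. The file names are placeholders.

```python
# Illustrative "average hash" perceptual fingerprint, using Pillow.
# Real systems (PhotoDNA, ContentID) are proprietary and far more robust;
# this sketch only shows why simple edits can defeat naive matching.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size greyscale, then threshold on mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i          # one bit per pixel above the mean
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

h1 = average_hash("original_frame.png")         # placeholder file
h2 = average_hash("zoomed_cropped_frame.png")   # placeholder file

# A small distance (a handful of the 64 bits) suggests a match; heavy
# zooming, cropping or colour shifts can push the distance past any
# reasonable threshold, so the re-upload slips through.
print(hamming(h1, h2))
```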

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends from board games to mukbang (social eating), to video games. Livestreaming is important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes, and shift power relationships. A ban on livestreaming would prevent a lot of this activity.

We need a new approach

Facebook and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.




Read more: Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.
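
One way to make these principles concrete is to consider the record a platform would need to keep for every moderation action. The sketch below is a hypothetical data model – it is not drawn from any platform’s actual systems:

```python
# Hypothetical per-action record a platform could keep so the three Santa
# Clara principles become auditable: aggregate numbers (1), notice to the
# affected user (2), and a tracked appeal (3). Illustrative only.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ModerationAction:
    content_id: str
    user_id: str
    guideline_violated: str                       # cited in the user notice (2)
    action: str                                   # e.g. "post_removed", "account_suspended"
    taken_at: datetime
    user_notified_at: Optional[datetime] = None   # principle 2
    appeal_opened_at: Optional[datetime] = None   # principle 3
    appeal_outcome: Optional[str] = None          # e.g. "upheld", "reinstated"

def transparency_counts(actions: List[ModerationAction]) -> dict:
    """Aggregate counts for a public transparency report (principle 1)."""
    counts: dict = {}
    for a in actions:
        counts[a.action] = counts.get(a.action, 0) + 1
    return counts
```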

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding of the complexities of managing new and innovative communications technologies.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


Stuart M Bender, Curtin University

The shocking mass shooting in Christchurch on Friday is notable for the perpetrator’s use of livestreaming video technology to broadcast horrific first-person footage of the attack on social media.

In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.

The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself; it’s not somehow incidental to the crime, or a disgusting trophy for the perpetrator to re-watch later.

In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.

In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.




Read more: Why news outlets should think twice about republishing the New Zealand mosque shooter’s livestream


Performance crime is about notoriety

There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.

In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and livestreamed.

American journalist Gideon Lichfield wrote of the 2015 incident that the killer:

didn’t just want to commit murder – he wanted the reward of attention, for having done it.

Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.

Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.

Steve Stephens, for example, linked his murder of a random elderly victim to retribution for his own failed relationship. He shot the stranger point-blank on video. Vester Flanagan’s appalling murder of two journalists seems to have been motivated by his anger at being fired from the same network.

The Christchurch attack was a brutal, planned mass murder of Muslims in New Zealand, but we don’t yet know whether it was about communicating the ideology of a specific group.

Even though it’s easy to identify explicit references to white supremacist ideas in the document the alleged perpetrator posted online, it is also strewn with confusing and inexplicable internet meme references and red herrings. These could be regarded as trolling attempts to bait the public into interrogating his claims, and magnifying the attention paid to the perpetrator and his gruesome killings.




Read more: Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


How we should respond

While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.

That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.

It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.

Telecommunication service Vodafone moved quickly to block New Zealand users from access to sites that would be likely to distribute the video.

The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.

Simply watching the video can cause trauma

Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the videos, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.

Watching images like these has the potential to provoke vicarious trauma in viewers. Studies since the September 11 attacks suggest that “distant trauma” can be linked to multiple viewings of distressing media images.

While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.

Rewarding the spectacle

Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.

According to British criminologist Majid Yar, the new media environment has created different genres of performance crime. The performances have increased in intensity, and criminality – from so-called “happy slapping” videos circulated among adolescents, to violent sexual assault videos. The recent attack is a terrifying continuation of this trend, which is predicated on a kind of exhibitionism and desire to be identified as the performer of the violence.

Researcher Jane O’Dea, who has studied the role played by the media environment in school shootings, claims that we exist in:

a society of the spectacle that regularly transforms ordinary people into “stars” of reality television or of websites like Facebook or YouTube.

Perpetrators of performance crime are inspired by the attention that will inevitably result from the online archive they create leading up to, and during, the event.




Read more: View from The Hill: A truly inclusive society requires political restraint


We all have a role to play

I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.

Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.

But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.

We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.

Stuart M Bender, Early Career Research Fellow (Digital aesthetics of violence), Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.