View from The Hill: Victorian Liberal candidates find social media footprints lethal


Michelle Grattan, University of Canberra

Whether or not it’s some sort of record, the Liberals’ loss of two Victorian candidates in a single day is way beyond what Oscar Wilde would have dubbed carelessness.

Already struggling in that state, the Victorian Liberals managed to select one candidate who, judged on his words, was an appalling Islamophobe and another who was an out-and-out homophobe.

The comments that have brought them down weren’t made in the distant past – they date from last year.

Jeremy Hearn was disendorsed as the party’s candidate for the Labor seat of Isaacs, after it came to light that he had written, among other things, that a Muslim was someone who subscribed to an ideology requiring “killing or enslavement of the citizens of Australia if they do not become Muslim”. This was posted in February 2018.

Peter Killin, who was standing in Wills, withdrew over a comment (in reply to another commenter) he posted in December that included suggesting Liberal MP Tim Wilson should not have been preselected because he’s gay.

Scott Morrison rather quaintly explained the unfortunate choice of Killin by saying “he was a very recent candidate who came in because we weren’t able to continue with the other candidate because of section 44 issues”.

Oops and oops again. First the Victorian Liberals picked someone who didn’t qualify legally, and then they replaced that candidate with one who didn’t qualify under any reasonable test of community standards.




Read more:
Politics with Michelle Grattan: Tim Colebatch on the battle in Victoria – and the Senate


It’s not just the Liberals who have problems with candidates holding unacceptable views or behaving badly.

Labor’s Northern Territory number 2 Senate candidate Wayne Kurnoth, who shared anti-Semitic material on social media, recently stood down. Bill Shorten embarrassed himself on Wednesday by saying he hadn’t met the man, despite having been filmed with him.

Then there’s the case of Luke Creasey, the Labor candidate running in Melbourne (held by Greens MP Adam Bandt), who shared rape jokes and pornographic material on social media. He has done a mea culpa, saying his actions happened “a number of years ago” and “in no way reflect the views I hold today”. Creasey still has his endorsement. Labor Senate leader Penny Wong has defended him, including by distinguishing between a “mistake” and “prejudice”.

It should be remembered that, given the post-nomination timing, these latest candidates unloaded by their parties have not lost their spots or their party designations on the ballot paper.

As Antony Green wrote when a NSW Liberal candidate had to withdraw during the state election (after a previous association with an online forum which reportedly engaged in unsavoury jokes), “the election goes ahead as if nothing had happened”.

It won’t occur this time, but recall the Pauline Hanson experience. In 1996, the Liberals disendorsed Hanson for racist remarks but she remained on the ballot paper with the party moniker. She was duly elected – and no doubt quite a few voters had thought she was the official Liberal candidate.

What goes around comes around – sort of.

This week Hanson’s number 2 Queensland Senate candidate, Steve Dickson, quit all his party positions after footage emerged of his groping and denigrating language at a Washington strip club. But Dickson is still on the Senate ballot paper.

While the latest major party candidates have been dumped for their views, this election has produced a large number of candidates who clearly appear to be legally ineligible to sit in parliament.

Their presence is despite the fact that, after the horrors of the constitution’s section 44 during the last parliament, candidates now have to provide extensive details for the Australian Electoral Commission about their eligibility.

Although the AEC does not have any role in enforcing eligibility, the availability of this data makes it easier in many cases to spot candidates who have legal question marks.

Most of the legally dubious candidates have come from minor parties, and these parties – especially One Nation, Palmer’s United Australia Party and Fraser Anning’s Conservative National Party – are getting close media attention.

When the major parties discovered prospective candidates who would hit a section 44 hurdle – and there have been several – they quickly replaced them.

But the minor parties don’t seem too worried about eligibility. While most of these people wouldn’t have a hope in hell of being elected, on one legal view there is a danger of a High Court challenge if someone was elected on the preferences of an ineligible candidate.

The section 44 problems reinforce the need to properly fix the constitution, as I have argued before. It will be a miracle if section 44 doesn’t cause issues in the next parliament, because in more obscure cases a problem may not be easy to spot.




Read more:
View from The Hill: Section 44 remains a constitutional trip wire that should be addressed


But what of those with beyond-the-pale views?

At one level the fate of the two Victorian Liberal candidates carries the obvious lesson for aspirants: be careful what you post on social media, and delete old posts.

That’s the expedient point. These candidates were caught out by what they put, and left, online.

But there is a deeper issue. Surely vetting of candidates standing for major parties must properly require a very thorough examination of their views and character.

Admittedly sometimes decisions will not be easy – judgements have to be made, including a certain allowance in the case of things said or done a long time before (not applicable with the two in question).

But whether the factional nature of the Victorian division is to blame for allowing these candidates to get through, or the inattention of the party’s powers-that-be (or, likely, a combination of both), it’s obvious that something went badly wrong.

That they were in unwinnable seats (despite Isaacs being on a small margin) should be irrelevant. All those who carry the Liberal banner should be espousing values in line with their party, which does after all claim to put “values” at the heart of its philosophy. The same, of course, goes for Labor.

Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Goodbye Google+, but what happens when online communities close down?



Google+ is the latest online community to close.
Shutterstock/rvlsoft

Stan Karanasios, RMIT University

This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.

If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.

Google+ is no longer available to users.
Google+/Screengrab

But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).




Read more:
Sexual subcultures are collateral damage in Tumblr’s ban on adult content


So are we witnessing the decline of online communities and user comments?

Turning off online communities and user generated content

One of the most well-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by owner Amazon in 2017 with just two weeks’ notice for its users.

This is not confined to online communities: it mirrors a broader trend among organisations to restrict or turn off their user-generated content. Last year the subscription video-on-demand website Netflix said it no longer allowed users to write reviews. It subsequently deleted all existing user-generated reviews.

Other popular websites have disabled their comments sections, including National Public Radio (NPR), The Atlantic, Popular Science and Reuters.

Why the closures?

Organisations have a range of motivations for taking such actions: low uptake, running costs, the challenges of managing moderation, and problems with divisive comments, conflict and a lack of community cohesion.

In the case of Google+, low usage alongside data breaches appears to have sped up Google’s decision.

NPR explained its motivation to remove user comments by highlighting how in one month its website NPR.org attracted 33 million unique users and 491,000 comments. But those comments came from just 19,400 commenters; the number of commenters who posted in consecutive months was a fraction of that.

This led NPR’s managing editor for digital news, Scott Montgomery, to say:

We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.

He said audiences had also moved to engage with NPR more on Facebook and Twitter.

Likewise, The Atlantic explained that its comments sections had become host to “unhelpful, even destructive, conversations”, and said it was exploring new ways to give users a voice.

In the case of IMDb closing its message boards in 2017, the reason given was:

[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.

The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “(…) primary place they (users) choose to post comments and communicate with IMDb’s editors and one another”.

User backlash

Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.

But most organisations take these aspects into account in their decision-making.

The petition to save IMDb’s message boards.
Change.org/Screengrab

For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.

Community members are at the mercy of hosting organisations, some profit-driven, which may have conflicting motivations to those of the users. It’s those organisations that hold the power to change or shut down what can be considered by some to be critical sources of knowledge, engagement and community building.

In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.

This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).

So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.

The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.

It’s also a move away from dealing with the reputational issues that come with such features, and towards controlling the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comments spaces.

Lost community knowledge

But within online groups there often exists social and network capital, as well as the stock of valuable knowledge that such community features create.




Read more:
Zuckerberg’s ‘new rules’ for the internet must move from words to actions


Often these communities are made up of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.

They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.

Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.

Stan Karanasios, Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Livestreaming terror is abhorrent – but is more rushed legislation the answer?



The perpetrator of the Christchurch attacks livestreamed his killings on Facebook.
Shutterstock

Robert Merkel, Monash University

In the wake of the Christchurch attack, the Australian government has announced its intention to create new criminal offences relating to the livestreaming of violence on social media platforms.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill will create two new crimes:

  1. It will be a criminal offence for social media platforms not to remove abhorrent violent material expeditiously. This will be punishable by 3 years’ imprisonment or fines that can reach up to 10% of the platform’s annual turnover.

  2. Platforms anywhere in the world must notify the Australian Federal Police if they become aware their service is streaming abhorrent violent conduct that is happening in Australia. A failure to do this will be punishable by fines of up to A$168,000 for an individual or A$840,000 for a corporation.

The government is reportedly seeking to pass the legislation in the current sitting week of Parliament. This could be the last sitting week of the current parliament before an election is called. Labor, or some group of crossbenchers, will need to vote with the government if the legislation is to pass. But the draft bill was only made available to the Labor Party last night.

This is not the first time that legislation relating to the intersection of technology and law enforcement has been raced through parliament to the consternation of parts of the technology industry, and other groups. Ongoing concerns around the Assistance and Access bill demonstrate the risks of such rushed legislation.




Read more:
China bans streaming video as it struggles to keep up with live content


Major social networks already moderate violence

The government has defined “abhorrent violent material” as:

[…] material produced by a perpetrator, and which plays or livestreams the very worst types of offences. It will capture the playing or streaming of terrorism, murder, attempted murder, torture, rape and kidnapping on social media.

The major social media platforms already devote considerable resources to content moderation. They are often criticised for their moderation policies, and the inconsistent application of those policies. But content fitting the government’s definition is already clearly prohibited by Twitter, Facebook, and Snapchat.

Social media companies rely on a combination of technology, and thousands of people employed as content moderators to remove graphic content. Moderators (usually contractors, often on low wages) are routinely called on to remove a torrent of abhorrent material, including footage of murders and other violent crimes.




Read more:
We need to talk about the mental health of content moderators


Technology is helpful, but not a solution

Technologies developed to assist with content moderation are less advanced than one might hope – particularly for videos. Facebook’s own moderation tools are mostly proprietary. But we can get an idea of the state of the commercial art from Microsoft’s Content Moderator API.

The Content Moderator API is an online service designed to be integrated by programmers into consumer-facing communication systems. Microsoft’s tools can automatically recognise “racy or adult content”. They can also identify images similar to ones in a list. This kind of technology is used by Facebook, in cooperation with the office of the eSafety Commissioner, to help track and block image-based abuse – commonly but erroneously described as “revenge porn”.

The Content Moderator API cannot automatically classify an image, let alone a video, as “abhorrent violent content”. Nor can it automatically identify videos similar to another video.

Technology that could match videos is under development. For example, Microsoft is currently trialling a matching system specifically for video-based child exploitation material.

As well as developing new technologies themselves, the tech giants are enthusiastic adopters of methods and ideas devised by academic researchers. But they are some distance from being able to automatically identify re-uploads of videos that violate their terms of service, particularly when uploaders modify the video to evade moderators. The ability to automatically flag these videos as they are uploaded or streamed is even more challenging.
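
To make the matching problem concrete, the following is a minimal sketch (in Python) of the general idea behind checking an upload against a list of known-bad images. It is an illustration only: the simple “average hash” below stands in for the far more sophisticated proprietary systems described above, and the function names and threshold are my own assumptions, not any platform’s actual implementation.

    # Illustrative only: a toy perceptual-hash matcher, not any platform's real system.
    from PIL import Image  # requires the Pillow library

    def average_hash(path, hash_size=8):
        # Shrink to a hash_size x hash_size greyscale thumbnail and set one bit
        # per pixel, depending on whether that pixel is brighter than the mean.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming_distance(h1, h2):
        # Count how many bits differ between two hashes.
        return bin(h1 ^ h2).count("1")

    def is_blocked(path, known_bad_hashes, threshold=5):
        # Flag the upload if its hash is close to any hash on the blocklist.
        # Light re-encoding usually survives this check, but heavy zooming,
        # cropping or colour distortion pushes the distance past the threshold,
        # which is how modified re-uploads evade this kind of matching.
        h = average_hash(path)
        return any(hamming_distance(h, bad) <= threshold for bad in known_bad_hashes)

Video makes the task harder again: a clip is effectively thousands of frames, each of which can be trimmed, mirrored or re-coloured, so a matcher has to tolerate far more variation than this sketch does.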

Important questions, few answers so far

Evaluating the government’s proposed legislative amendments is difficult given that details are scant. I’m a technologist, not a legal academic, but the scope and application of the legislation are currently unclear. Before any legislation is passed, a number of questions need to be addressed – too many to list here, but for instance:

Does the requirement to remove “abhorrent violent material” apply only to material created or uploaded by Australians? Does it only apply to events occurring within Australia? Or could foreign social media companies be liable for massive fines if videos created in a foreign country, and uploaded by a foreigner, were viewed within Australia?

Would attempts to render such material inaccessible from within Australia suffice (even though workarounds are easy)? Or would removal from access anywhere in the world be required? Would Australians be comfortable with a foreign law that required Australian websites to delete content displayed to Australians based on the decisions of a foreign government?




Read more:
Anxieties over livestreams can help us design better Facebook and YouTube content moderation


Complex legislation needs time

The proposed legislation does nothing to address the broader issues surrounding promotion of the violent white supremacist ideology that apparently motivated the Christchurch attacker. While that does not necessarily mean it’s a bad idea, it would seem very far from a full governmental response to the monstrous crime an Australian citizen allegedly committed.

It may well be that the scope and definitional issues are dealt with appropriately in the text of the legislation. But considering the government seems set on passing the bill in the next few days, it’s unlikely lawmakers will have the time to carefully consider the complexities involved.

While the desire to prevent further circulation of perpetrator-generated footage of terrorist attacks is noble, taking effective action is not straightforward. Yet again, the federal government’s inclination seems to be to legislate first and discuss later.

Robert Merkel, Lecturer in Software Engineering, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital campaigning on sites like Facebook is unlikely to swing the election



Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.
Shutterstock

Glenn Kefford, Macquarie University

With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also any potential threats we are facing to the integrity of the election.

Invariably, this discussion leads straight to digital.

In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.

Some degree of confusion is understandable, since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is, for the most part, unfounded.




Read more:
Chinese social media platform WeChat could be a key battleground in the federal election


Why parties use digital media

In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.

The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.

Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.

Platforms like Facebook offer services that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with tailored messaging.

Voting, persuasion and mobilisation

While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often claimed to have.

What we know is that digital messaging can have a small but significant effect on mobilisation, that there are concerns about how it could be used to demobilise voters, and that it is an effective way to fundraise and organise. But its ability to independently persuade voters to change their votes is estimated to be close to zero.




Read more:
Australian political journalists might be part of a ‘Canberra bubble’, but they engage the public too


The exaggeration and lack of clarity around digital are problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.

While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as Trump in the US.

Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps them in their persuasion efforts, but this is through layering their messages across as many mediums as possible, and using the network effect that social media provides.

Data privacy and foreign interference

The two clear and obvious dangers related to digital are about data privacy and foreign meddling. We should not accept that our data is shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.

An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a majority of voters see a distinction between commercial entities advertising to us online and parties and other campaigners doing the same.

We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.

The more vexing issue is that of foreign interference. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.

This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the broader strategy.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


The digital battleground

As the 2019 campaign unfolds, we should remember that, while digital matters, there is no evidence it has an independent election-changing effect.

Australians should be most concerned with how our data are being used and sold, and about any attempts to meddle in our elections by state and non-state actors.

The current regulatory environment fails to meet community standards. More can and should be done to protect us and our democracy.


This article has been co-published with The Lighthouse, Macquarie University’s multimedia news platform.

Glenn Kefford, Senior Lecturer, Department of Modern History, Politics and International Relations, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
glen carrie / unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more:
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator apparently flying under the radar prior to this incident in Christchurch, our collective focus is now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it’s been reported that fewer than 4,000 people saw the initial stream on Facebook, Facebook said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or utterly abdicate it) they pass on the task to others — like the New Zealand Internet Service Providers that blocked access to websites that were re-distributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more:
A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They seem to have a relatively high turnover of staff who are quickly burnt-out by severe workloads while moderating the worst content on the internet. They are supported with only meagre wages, and what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends, from board games to mukbang (social eating) to video games. Livestreaming is important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes, and shift power relationships. A ban on livestreaming would prevent a lot of this activity.

We need a new approach

Facebook and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding of the complexities of managing new and innovative communications technologies.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


Stuart M Bender, Curtin University

The shocking mass-shooting in Christchurch on Friday is notable for the perpetrator’s use of livestreaming video technology to broadcast horrific first-person footage of the attack on social media.

In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.

The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself; it’s not somehow incidental to the crime, or a disgusting trophy for the perpetrator to re-watch later.

In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.

In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.




Read more:
Why news outlets should think twice about republishing the New Zealand mosque shooter’s livestream


Performance crime is about notoriety

There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.

In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and livestreamed.

American journalist Gideon Lichfield wrote of the 2015 incident that the killer:

didn’t just want to commit murder – he wanted the reward of attention, for having done it.

Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.

Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.

Steve Stephens, for example, linked his murder of a random elderly victim to retribution for his own failed relationship. He shot the stranger point-blank on video. Vester Flanagan’s appalling murder of two journalists seems to have been motivated by his anger at being fired from the same network.

The Christchurch attack was a brutal, planned mass murder of Muslims in New Zealand, but we don’t yet know whether it was about communicating the ideology of a specific group.

Even though it’s easy to identify explicit references to white supremacist ideas in the attacker’s manifesto, the document is also strewn with confusing and inexplicable internet meme references and red herrings. These could be regarded as trolling attempts to bait the public into interrogating his claims, and magnifying the attention paid to the perpetrator and his gruesome killings.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


How we should respond

While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.

That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.

It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.

Telecommunication service Vodafone moved quickly to block New Zealand users from access to sites that would be likely to distribute the video.

The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.

Simply watching the video can cause trauma

Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the videos, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.

Watching images like these has the potential to provoke vicarious trauma in viewers. Studies since the September 11 attacks suggest that “distant trauma” can be linked to multiple viewings of distressing media images.

While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.

Rewarding the spectacle

Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.

According to British criminologist Majid Yar, the new media environment has created different genres of performance crime. The performances have increased in intensity and criminality – from so-called “happy slapping” videos circulated among adolescents, to violent sexual assault videos. The recent attack is a terrifying continuation of this trend, which is predicated on a kind of exhibitionism and desire to be identified as the performer of the violence.

Researcher Jane O’Dea, who has studied the role played by the media environment in school shootings, claims that we exist in:

a society of the spectacle that regularly transforms ordinary people into “stars” of reality television or of websites like Facebook or YouTube.

Perpetrators of performance crime are inspired by the attention that will inevitably result from the online archive they create leading up to, and during, the event.




Read more:
View from The Hill: A truly inclusive society requires political restraint


We all have a role to play

I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.

Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.

But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.

We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.

Stuart M Bender, Early Career Research Fellow (Digital aesthetics of violence), Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a simple social group of three people – Ashley, Blair and Carmen – who already know one another, and have each others’ email address and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
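
To make the example concrete, here is a purely illustrative sketch, in Python, of the Ashley–Blair–Carmen scenario described above. The structure and names are assumptions made for illustration; they are not Facebook’s actual data model.

    # Illustrative only: a toy model of contact uploads and shadow profiles.
    from collections import defaultdict

    class ContactGraph:
        def __init__(self):
            self.members = set()                    # people who have joined the platform
            self.known_contacts = defaultdict(set)  # person -> users who uploaded them as a contact

        def join(self, person, uploaded_contacts):
            # A new user joins and uploads their phone contacts. Every contact gains
            # a "shadow profile": data held about them whether or not they ever sign
            # up or agree to any data-collection policy.
            self.members.add(person)
            for contact in uploaded_contacts:
                self.known_contacts[contact].add(person)

        def people_you_may_know(self, person):
            # Suggestions built from data that other people uploaded about this person.
            return self.known_contacts[person] & self.members

    graph = ContactGraph()
    graph.join("Ashley", {"Blair", "Carmen"})    # shadow profiles created for Blair and Carmen
    graph.join("Blair", {"Ashley", "Carmen"})    # Blair is immediately suggested Ashley

    print(graph.people_you_may_know("Blair"))    # {'Ashley'}
    print(graph.known_contacts["Carmen"])        # {'Ashley', 'Blair'} - Carmen never joined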

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there are a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator John Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons, by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

It’s time for third-party data brokers to emerge from the shadows



Personal data has been dubbed the “new oil”, and data brokers are very efficient miners.
Emanuele Toscano/Flickr, CC BY-NC-ND

Sacha Molitorisz, University of Technology Sydney

Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.

Graham Mudd, Facebook’s product marketing director, said in a statement:

We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.

Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.

The invisible industry worth billions

In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.

The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.




Read more:
Facebook data harvesting: what you need to know


These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.

Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.

In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.

A growing concern

Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.

The recent Cambridge Analytica furore stemmed from third parties. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.




Read more:
Your online privacy depends as much on your friends’ data habits as your own


Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.

As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.

With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.

More transparency and more respect for users

To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.

Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as morally justifying.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.

In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.

Time to regulate

Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.

Historically, Facebook has made a point of dedicating itself to openness, but Facebook itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.

Facebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Why the business model of social media giants like Facebook is incompatible with human rights



Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance.
EPA/Peter DaSilva

Sarah Joseph, Monash University

Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission to spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.

Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.

Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.

The good

In some ways, social media has been a boon for human rights – most obviously for freedom of speech.

Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.

But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas must, in effect, be bigger, broader and more diverse.

Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.

Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.




Read more:
#MeToo is not enough: it has yet to shift the power imbalances that would bring about gender equality


The bad and the ugly

But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.

The UN recently found that Facebook had been a major platform for spreading hatred against the Rohingya in Myanmar, which in turn led to ethnic cleansing and crimes against humanity.

Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.

YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.

The business model and human rights

Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.

Essentially, those models are based on collecting users’ data and using it for marketing purposes. And the data they hold is extraordinary in its profiling capacities, and in the unprecedented knowledge base and potential power it grants to these private actors.

Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.

Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures target a company’s acquisitions; they do not seek to cap the number of its customers.

In late 2017, Facebook was reported as having more than 2.2 billion active users.
EPA/Ritchie B. Tongo

Power through knowledge

In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.
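As a rough check of those figures, here is a minimal back-of-the-envelope sketch in Python; the eligible-voter base of roughly 235 million is an assumption used purely for illustration, not a figure from the study.

# Back-of-the-envelope check of the "I voted" button figures.
# Assumption: a US voting-eligible population of roughly 235 million in 2010 (illustrative only).
extra_votes = 340_000             # additional votes attributed to the button
eligible_voters = 235_000_000     # assumed eligible-voter base

share_of_electorate = extra_votes / eligible_voters
print(f"{share_of_electorate:.2%}")   # ~0.14%, in line with the figure quoted above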

So Facebook knows how to deploy the button in a way that could sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows the button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.

It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.




Read more:
Can Facebook influence an election result?


Microtargeting

A study has shown that a computer knows more about a person’s personality than their friends or flatmates from an analysis of 70 “likes”, and more than their family from 150 likes. From 300 likes it can outperform one’s spouse.

This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.

Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.

Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.

While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.

Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.

As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.

‘Fake news’

Finally, there is the issue of the spread of misinformation.

While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.

In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.

Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.

Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.

The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.

However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:

… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.

Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:

… false news spreads more than the truth because humans, not robots, are more likely to spread it.

The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.

Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.

Fake news disseminated by social media is argued to have played a role in electing Donald Trump to the presidency.
EPA/Jim Lo Scalzo

What next?

It is premature to assert that the very collection of massive amounts of data is irreconcilable with the right to privacy (and even rights relating to democratic governance).

Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.

Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.

However, by the time such conclusions can be reached, it may be too late to do much about it. This may be an example where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.

At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.

Sarah Joseph, Director, Castan Centre for Human Rights Law, Monash University

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



Many social media users have been shocked to learn the extent of their digital footprint.
Shutterstock

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data that apps ask for is largely intangible and invisible, as well as vague rather than specific.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ emails when promised free pizza.

Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk: once the data is taken out of Facebook, it becomes even easier to copy and distribute.

This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of data from our friends via back-door methods is more common than we might think. The back-door opens when you press “accept” on a permission request that gives an app access to your contacts during installation.

WhatsApp might have your contact information even if you aren’t a registered user.
Screen Shot at 1pm on 26 March 2018

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, and each of them has 234 friends, and so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
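A rough sketch of that arithmetic in Python, for illustration only: it treats every friend list as the 234-friend average and ignores overlap between friend networks, which is why the reported figure is lower than the naive product.

# Naive upper bound on friend-of-friend data harvesting reach.
# Assumptions: each consenting user has the average 234 friends,
# and friend lists do not overlap (in reality they do, so the true reach is lower).
consenting_users = 270_000
average_friends = 234

naive_reach = consenting_users * average_friends
print(f"{naive_reach:,}")   # 63,180,000: an upper bound consistent with the reported figure of up to 50 million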

Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data daily without your friends knowing, and 70% of apps share that data with third parties.




Read more:
7 in 10 smartphone apps share your data with third-party services


We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give all the data on their phone, 95% said yes. But when they focused on just contacts, this decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away a friend’s data on WhatsApp without their permission was wrong.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.