Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
glen carrie / unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more:
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator apparently flying under the radar prior to the attack, collective attention has now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it’s been reported that fewer than 4,000 people saw the initial stream on Facebook, Facebook said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or abdicate it entirely), they pass the task on to others, like the New Zealand internet service providers that blocked access to websites that were redistributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more:
A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They seem to have relatively high staff turnover, with moderators quickly burnt out by severe workloads as they review the worst content on the internet. They are supported by only meagre wages and what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
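To make the evasion tactic above concrete, here is a toy sketch of why small edits can defeat content matching. It uses a deliberately simplified "average hash" over a tiny grid of pixel values; the real systems mentioned in this article (PhotoDNA, ContentID) are proprietary and far more robust, so nothing here reflects their actual algorithms.

```python
import hashlib

def average_hash(pixels):
    # Toy perceptual hash: one bit per pixel, set if the pixel is brighter
    # than the image's mean. Real fingerprinting systems are far more complex.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    # Number of differing bits between two hashes.
    return sum(x != y for x, y in zip(a, b))

image = [[10, 200], [30, 220]]        # original frame (grayscale values)
brightened = [[15, 205], [35, 225]]   # mild colour/brightness shift
cropped = [[200, 200], [220, 220]]    # aggressive zoom into the bright region

# A cryptographic hash of the raw bytes changes completely under any edit,
# which is why exact matching alone cannot catch re-uploads.
exact_a = hashlib.sha256(bytes([p for r in image for p in r])).hexdigest()
exact_b = hashlib.sha256(bytes([p for r in brightened for p in r])).hexdigest()

# The perceptual hash is unchanged by the mild edit (distance 0)...
mild_distance = hamming(average_hash(image), average_hash(brightened))

# ...but the aggressive zoom-and-crop shifts which pixels dominate the mean,
# so the perceptual hash drifts and may fall outside a match threshold.
crop_distance = hamming(average_hash(image), average_hash(cropped))
```

This is the arms race in miniature: the more aggressively an uploader distorts the video, the further its fingerprint drifts from the reference copy, at the cost of visibly degraded footage.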

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends, from board games to mukbang (social eating) to video games. Livestreaming is also important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes and shift power relationships. A ban on livestreaming would prevent much of this activity.

We need a new approach

Facebook’s and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding around the complexities of managing new and innovative communications technologies.The Conversation

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The law is closing in on Facebook and the ‘digital gangsters’


Sacha Molitorisz, University of Technology Sydney and Derek Wilding, University of Technology Sydney

For social media and search engines, the law is back in town.

Prompted by privacy invasions, the spread of misinformation, a crisis in news funding and potential interference in elections, regulators in several countries now propose a range of interventions to curb the power of digital platforms.

A newly published UK report is part of this building global momentum.




Read more:
Why are Australians still using Facebook?


Shortly after Valentine’s Day, a committee of the British House of Commons published its final report into disinformation and “fake news”. It was explicitly directed at Facebook CEO Mark Zuckerberg, and it was less a love letter than a challenge to a duel.

The report found:

Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law.

The committee was particularly vexed by Zuckerberg himself, concluding:

By choosing not to appear before the Committee … Mark Zuckerberg has shown contempt.

Its far-reaching recommendations included giving the UK’s Information Commissioner greater capacity to be “… an effective ‘sheriff in the Wild West of the Internet’.”

The law is back in town

In December 2018, the Australian Competition and Consumer Commission (ACCC) handed down its preliminary report into the impact of digital platforms. It tabled a series of bold proposals.




Read more:
Digital platforms. Why the ACCC’s proposals for Google and Facebook matter big time


Then, on February 12, the Cairncross Review – an independent analysis led by UK economist and journalist Frances Cairncross – handed down its report, A Sustainable Future for Journalism.

Referring to the sustainability of the production and distribution of high-quality journalism, Cairncross wrote: “Public intervention may be the only remedy. The future of a healthy democracy depends on it.”

And a week later, the Digital, Culture, Media and Sport Committee of the House of Commons issued its challenge in its final report on disinformation and “fake news”:

The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight … only governments and the law are powerful enough to contain them.

How do the responses of the three reports compare?

ACCC inquiry broadest in scope

First, it’s important to note that the scope of these three inquiries varied significantly.

The ongoing ACCC inquiry, billed as a world-first and set to hand down its final report in June, is seeking to assess the impact of digital platforms on media and advertising, with a focus on news.




Read more:
Attention economy: Facebook delivers traffic but no money for news media


The Cairncross Review was narrower in intent, addressing “the sustainability of the production and distribution of high quality journalism, and especially the future of the press, in this dramatically changing market.”

And the House of Commons committee had a very direct brief to investigate fake news. It then chose to focus on Facebook.

As such, the three inquiries overlap substantially, but the ACCC investigation is unequivocally the broadest in scope.

Not just distribution platforms

However, all three reports land in roughly the same place when it comes to characterising these businesses. They all see digital platforms as more than just conduits of other people’s content – and this brings certain responsibilities.

The ACCC says digital intermediaries are “considerably more than mere distributors or pure intermediaries” when it comes to the supply of news and journalism.

The Cairncross Review stresses there is a “fundamental difference” between distributors and content creators.

The House of Commons committee proposes “a new category of tech company” as a legal mechanism for having digital platforms assume liability for harmful content.

Need more oversight

A related important point is that all three reviews recommend that digital platforms are brought more squarely into the legal and regulatory environment.

By this, they don’t just mean cross-industry laws that apply to all businesses. There is some of that – for example, adapting competition laws so certain conduct is regulated.




Read more:
Google and Facebook cosy up to media companies in response to the threat of regulation


But these inquiries also raise the prospect of specific rules for platforms as part of communications regulation. How they go about this shows the point at which the inquiries diverge.

News reliability

The ACCC has flagged the need for further work on a platforms code of practice that would bring them into the orbit of the communications regulator, the ACMA.

The platforms would be bound by the code, which would require them to badge content produced under established journalistic standards. It would be the content creators – publishers and broadcasters, not platforms – that would be subject to these standards.

In the UK, Cairncross proposes a collaborative approach under which a new regulator would monitor and report on platforms’ initiatives to improve reliability of news – perhaps, in time, moving to specific regulatory obligations.

Algorithms regulator

In Australia, the ACCC has proposed what others refer to as a new “algorithms regulator”. This would look at how ads and news are ranked in search results or placed in news feeds, and whether vertically integrated digital platforms that arrange advertising favour their own services.

The algorithms regulator would monitor, investigate and report on activity, but would rely on referral to other regulators rather than have its own enforcement powers.

Unsurprisingly, the leading digital platforms in Australia oppose the new algorithms regulator. Equally unsurprisingly, media companies think the proposal doesn’t go far enough.




Read more:
Facebook needs regulation – here’s why it should be done by algorithms


For its part, Cairncross does recommend new codes on aspects such as indexing and ranking of content and treatment of advertising. The codes would be overseen by a new regulator, but they would be developed by the platforms, and a move to a statutory code would occur only if these proved inadequate.

In contrast to both these reviews, the House of Commons committee’s Code of Ethics is concerned with “online harms”. Right from the outset, it would be drawn up and enforced by a new regulator in a similar way to Ofcom, the UK communications regulator, enforcing its Broadcasting Code.

It says this would create “a regulatory system for online content that is as effective as that for offline content industries”. Its forcefulness on this is matched by its recommendation on algorithms: it says the new regulator should have access to “tech companies’ security mechanisms and algorithms, to ensure they are operating responsibly”.

Both the ACCC and Cairncross pointedly avoid this level of intervention.

However, the ACCC does raise the prospect of a new digital platforms ombudsman. Apart from delivering 11 preliminary recommendations, the ACCC also specified nine proposed areas for further analysis and assessment. Among these areas, the ACCC suggested the idea of such an ombudsman to deal with complaints about digital platforms from consumers, advertisers, media companies and businesses.

Data privacy

And then there is data privacy.

This is where the ACCC and the House of Commons committee delivered some of their most significant recommendations. It’s also where regulators in other jurisdictions have been turning their attention, often on the understanding that the market power of digital platforms is largely derived from their ability to access user data.

Earlier this month, Germany’s Federal Cartel Office (Bundeskartellamt) found that Facebook could no longer merge a person’s data from their Instagram, Facebook and WhatsApp accounts without their explicit consent.

In Germany, the law has spoken. In Australia and the UK, it’s still clearing its throat.The Conversation

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney and Derek Wilding, Co-Director, Centre for Media Transition, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why are Australians still using Facebook?


Deborah Lupton, UNSW

This week marks 15 years since Facebook founder Mark Zuckerberg first set up the platform with his college roommate Eduardo Saverin. Since then, Facebook has grown into a giant global enterprise.

The platform now has more than 2.32 billion monthly users and ranks fifth in terms of market value among the world’s top internet companies.

Facebook hasn’t escaped without scandal. It’s been subject to data breaches and allegations that it has failed to protect user privacy. Reports suggest that numerous Facebook users have responded to these incidents by giving up the platform.

But the data say otherwise. Preliminary findings from my recent research suggest that although Australian Facebook users do care about the privacy and security of their personal information, this is not enough to drive them to leave the platform.




Read more:
3 ways Facebook and other social media companies could clean up their acts – if they wanted to


The scandals: Australians didn’t leave Facebook

One of the most prominent scandals Facebook has been caught up in involves allegations made in March 2018: that analytics company Cambridge Analytica was using personal data from Facebook users to help political parties in their election campaigns.

This, and other news stories about Facebook’s use of user data, received widespread international attention, including significant coverage in Australia. Numerous news reports claimed that large numbers of Australians were deleting their Facebook accounts as part of the #DeleteFacebook trend. As one news story contended:

Many Australians are for the first time discovering just how much Facebook knows about them and many are shocked, leading them to quit the platform.

Statistics on Australians’ use of Facebook, however, show no change in numbers since the Cambridge Analytica scandal first received public attention. There were 15 million active monthly Facebook users 12 months ago (just before the scandal erupted), and this figure remained steady over the course of the year. Facebook is still far and away the most widely used social media platform in Australia.

The study: what I wanted to know

In September and October 2018, I conducted a study involving in-depth telephone interviews with 30 Australians who were current or past Facebook users.

An equal number of females and males participated across a broad age distribution (10 participants aged 18-40, 10 participants aged 41-60, and 10 participants aged 61 and over) and geographical distribution (10 participants living in rural towns or areas, 20 participants living in cities or major towns).

During the interview, the participants were asked:

  • whether they were bothered or concerned about Facebook’s use of information about them
  • if they had ever changed their use of Facebook or privacy settings in response to these concerns
  • what kinds of personal information they would not want internet companies like Facebook or apps to access or use
  • what steps these companies should take to protect users’ information.

In the interview questions there was no direct mention of Cambridge Analytica or any other scandal about Facebook. I wanted to see if participants spontaneously raised these events and issues in their responses.




Read more:
Facebook is all for community, but what kind of community is it building?


The benefits: it’s all about connecting

The findings show people continue to use Facebook for a wide variety of reasons. For some people, their business depended on their active Facebook use, so they could advertise their offerings and connect with potential clients:

I know that if I did delete it, I’d be harming myself and my business, so yes, so that’s kind of the main reason I keep it.

For most people, however, the key incentive was the desire to connect with family and friends. This included being able to find old friends and reconnect with them, as well as maintaining ties with current friends and family members.

Several people commented that using Facebook was the best way of knowing about the lives of their adult children, grandchildren or other young relatives.

That’s the only thing that’s kept me on there – because my kids are on there and I just want to see what they’re doing, and what and who they’re hanging around with.

For others, being part of a community (for example, a health-related group) was an important way of alleviating isolation and loneliness.

These comments suggest Facebook is an important tool to support social interactions in a world in which people are more dispersed and physically separated from friends and family.

The drawbacks: mundane trivia, too many ads

The drawbacks of being on Facebook reported by participants included feeling annoyed by aspects such as having to see other people’s mundane trivia, random friend requests or too many ads:

Sometimes there’s a lot of nonsense that goes up on there, people posting you don’t want to get involved in, or I think it’s stupid or rude or whatever it may be.

Some people also talked about not liking the feeling of being watched, and their anxiety about their personal information being accessed, their identity being stolen or their bank details being taken. However, these issues were not considered serious enough for them to leave Facebook.

Apart from targeted advertising, most people were unsure how Facebook might use their personal information. Very few participants mentioned the Cambridge Analytica scandal or related issues, such as the use of Facebook for political campaigning or to disseminate “fake news”. Even when they did refer to these issues, they had difficulty explaining exactly how personal data were involved:

Well, I know Facebook collected the data for that Cambridge business and they collected it via a quiz with an app, and then passed it on to other parties. So I think that’s all they do. I think it’s just maybe for them to earn money off it. I don’t really know.

Data privacy: employing workarounds to stay on Facebook

Most people thought they were careful in not revealing too much information about themselves on Facebook, and therefore protected their data. They reported engaging in practices such as avoiding uploading details about themselves, limiting their number of friends or the type of friends, blocking or unfriending people who annoyed them, clearing their history regularly and being very selective about what photos to upload (including of their children).

Several people mentioned they had recently checked and changed their privacy settings, often in response to a prompt from Facebook to do so:

I keep my personal stuff to myself, and then I share what I want to share through my friends. And I’ve got strict privacy things in place so that I only get things to people that I know, rather than people I don’t know. So that’s fine with me.




Read more:
Amazon, Facebook and Google don’t need to spy on your conversations to know what you’re talking about


These findings show it’s not so much that Australian Facebook users don’t care about their personal data privacy and security. They do think about these issues and have their own ways of managing them.

Australians think Facebook serves them well. They consider other people’s over-sharing or having to see too many ads as more of a problem than alleged political manipulation or other misuse of their information by Facebook.The Conversation

Deborah Lupton, SHARP Professor, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A tale of two media reports: one poses challenges for digital media; the other gives ABC and SBS a clean bill of health



The competitive neutrality report has given the ABC, and SBS, a clean bill of health.
Shutterstock

Denis Muller, University of Melbourne

Two reports out this week – one into the operations of Facebook and Google, the other into the competitive neutrality of the ABC and SBS – present the federal government with significant policy and political challenges.

The first is by far the more important of the two.

It is the interim report by the Australian Competition and Consumer Commission of its Digital Platforms Inquiry, and in a set of 11 preliminary recommendations it proposes far-reaching changes to media regulation.

Of particular interest are its preliminary recommendations for sustaining journalism and news content.

These are based on the premise that there is a symbiotic relationship between news organisations and the big digital platforms. Put simply, the news organisations depend heavily on these platforms to get their news out to their audiences.

The problem, the ACCC says, is that the way news stories are ranked and displayed on the platforms is opaque. All we know – or think we know – is that these decisions are made by algorithms.




Read more:
Constant attacks on the ABC will come back to haunt the Coalition government


The ACCC says this lack of transparency causes concerns that the algorithms and other policies of the platform giants may be operating in a way that affects the production of news and journalistic content.

To respond to this concern, the preliminary recommendation is for a new regulatory authority to be established. It would have the power to peer into these algorithms and monitor, investigate and report on how content – including news content – is ranked and displayed.

The purpose would be to identify the effects of the algorithms and other policies on the production of news and journalistic content.

It would also allow the authority to assess the impact on the incentives for news and journalistic content creation, particularly where news organisations have invested a lot of time and money in producing original content.

In this way, the ACCC is clearly trying to protect and promote the production of public-interest journalism, which is expensive but vital to democratic life. It is how the powerful are held to account, how wrongdoing is uncovered, and how the public finds out what is going on inside forums such as the courts and local councils.

So far, the big news media organisations have concentrated on these aspects of the ACCC interim report and have expressed support for them.

However, there are two other aspects of the report on which their response has been muted.

The first of these is the preliminary recommendation that proposes a media regulatory framework that would cover all media content, including news content, on all systems of distribution – print, broadcast and online.

The ACCC recommends that the government commission a separate independent review to design such a framework. The framework would establish underlying principles of accountability, set boundaries around what should be regulated and how, set rules for classifying different types of content, and devise appropriate enforcement mechanisms.

Much of this work has already been attempted by earlier federal government inquiries – the Finkelstein inquiry and the Convergence Review – both of which produced reports for the Gillard Labor government in 2012.

Their proposals for an overarching regulatory regime for all types of media generated a hysterical backlash from the commercial media companies, who accused the authors of acting like Stalin, Mao, or the Kim clan in North Korea.

So if the government adopts this recommendation from the ACCC, the people doing the design work can expect some heavy flak from big commercial media.

The other aspect of the ACCC report that is likely to provoke a backlash from the media is a preliminary recommendation concerning personal privacy.

Here the ACCC proposes that the government adopt a 2014 recommendation of the Australian Law Reform Commission that people be given the right to sue for serious invasions of privacy.

The media have been on notice over privacy invasion for many years. As far back as 2001, the High Court developed a test of privacy in a case involving the ABC and an abattoir company called Lenah Game Meats.

Now, given the impact on privacy of Facebook and Google, the ACCC has come to the view that the time has arrived to revisit this issue.

The ACCC’s interim report is one of the most consequential documents affecting media policy in Australia for many decades.

The same cannot be said of the other media-related report published this week: that of the inquiry into the competitive neutrality of the public-sector broadcasters, the ABC and SBS.

This inquiry was established in May this year to make good on a promise made by Malcolm Turnbull to Pauline Hanson in 2017.




Read more:
The politics behind the competitive neutrality inquiry into ABC and SBS


He needed One Nation’s support for the government’s changes to media ownership laws, without which they would not have passed the Senate.

Hanson was not promised any particular focus for the inquiry, so the government dressed it up in the dull raiment of competitive neutrality.

While it had the potential to do real mischief – in particular to the ABC – the report actually gives both public broadcasters a clean bill of health.

There are a couple of minor caveats concerning transparency about how they approach the issue of fair competition, but overall the inquiry finds that the ABC and SBS are operating properly within their charters. Therefore, by definition, they are acting in the public interest.

This has caused pursed lips at News Corp, which, along with the rest of the commercial media, took this opportunity to have a free kick at the national broadcasters. But in the present political climate, the issue is likely to vanish without trace.

While the government still has an efficiency review of the ABC to release, it also confronts a political timetable and a set of opinion polls calculated to discourage it from opening up another row over the ABC.The Conversation

Denis Muller, Senior Research Fellow in the Centre for Advancing Journalism, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ACCC wants to curb digital platform power – but enforcement is tricky


Katharine Kemp, UNSW

We need new laws to monitor and curb the power wielded by Google, Facebook and other powerful digital platforms, according to the Australian Competition and Consumer Commission (ACCC).

The Preliminary Report on the Digital Platforms Inquiry found major changes to privacy and consumer protection laws are needed, along with alterations to merger law, and a regulator to investigate the operation of the companies’ algorithms.

Getting the enforcement right will be key to the success of these proposed changes.




Read more:
Digital platforms. Why the ACCC’s proposals for Google and Facebook matter big time


Scrutinising accumulation of market power

The report says Google and Facebook each possess substantial power in markets such as online search and social media services in Australia.

It’s not against the law merely to possess substantial market power. But these companies would breach the misuse of market power law introduced in November 2017 if they engaged in any conduct with the effect, likely effect or purpose of substantially lessening competition – essentially, blocking rivalry in a market.

Moving forward, the ACCC has indicated it will scrutinise the accumulation of market power by these platforms more proactively. Noting that “strategic acquisitions by both Google and Facebook have contributed to the market power they currently hold”, the ACCC says it intends to ask large digital platforms to provide advance notice of any planned acquisitions.

While such pre-notification of certain mergers is required in jurisdictions such as the US, it is not currently a requirement under Australian law, in this or other sectors.

At the moment the ACCC is just asking the platforms to do this voluntarily – but has indicated it may seek to make this a formal requirement if the platforms don’t cooperate with the request. It’s not currently clear how this would be enforced.

The ACCC has also recommended the standard for assessing mergers should be amended to expressly clarify the relevance of data acquired in the transaction as well as the removal of potential competitors.

The law doesn’t explicitly refer to potential competitors in addition to existing competitors at present, and some argue platforms are buying up nascent competitors before the competitive threat becomes apparent.




Read more:
Explainer: what is public interest journalism?


A regulator to monitor algorithms

According to the ACCC, there is a “lack of transparency” in Google’s and Facebook’s arrangements concerning online advertising and content, which are largely governed by algorithms developed and owned by the companies. These algorithms – essentially a complex set of instructions in the software – determine what ads, search results and news we see, and in what order.

The problem is nobody outside these companies knows how they work or whether they’re producing results that are fair to online advertisers, content producers and consumers.

The report recommends a regulatory authority be given power to monitor, investigate and publish reports on the operation of these algorithms, among other things, to determine whether they are producing unfair or discriminatory results. This would only apply to companies that generate more than A$100 million per annum from digital advertising in Australia.




Read more:
Attention economy: Facebook delivers traffic but no money for news media


These algorithms have come under scrutiny elsewhere. The European Commission has previously fined Google €2.42 billion for giving unfair preference to its own shopping comparison services in its search results, relative to rival comparison services, thereby contravening the EU law against abuse of dominance. This decision has been criticised though, for failing to provide Google with a clear way of complying with the law.

The important questions following the ACCC’s recommendation are:

  • what will the regulator do with the results of its investigations?
  • if it determines that the algorithm is producing discriminatory results, will it tell the platform what kind of results it should achieve instead, or will it require direct changes to the algorithm?

The ACCC has not recommended the regulator have the power to make such orders. It seems the most the regulator would do is introduce some “sunshine” to the impacts of these algorithms which are currently hidden from view, and potentially refer the matter to the ACCC for investigation if this was perceived to amount to a misuse of market power.

If a digital platform discriminates against competitive businesses that rely on its platform – say, app developers or comparison services – so that rivalry is stymied, this could be an important test case under our misuse of market power law. This law was amended in 2017 to address longstanding weaknesses but has not yet been tested in the courts.




Read more:
We should levy Facebook and Google to fund journalism – here’s how


Privacy and fairness for consumers

The report recommends substantial changes to the Privacy Act and Australian Consumer Law to reduce the power imbalance between the platforms and consumers.

We know from research that most Australians don’t read online privacy policies; many say they don’t understand the privacy terms offered to them, or they feel they have no choice but to accept them. Two thirds say they want more say in how their personal information is used.

The solutions proposed by the ACCC include:

  • strengthening the consent required under our privacy law, requiring it to be express (it may currently be implied), opt-in, adequately informed, voluntary and specific
  • allowing consumers to require their personal data to be erased in certain circumstances
  • increasing penalties for breaches of the Privacy Act
  • introducing a statutory cause of action for serious invasion of privacy in Australia.



Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


This last recommendation was previously made by the Australian Law Reform Commission in 2014 and 2008, and would finally allow individuals in Australia to sue for harm suffered as a result of such an invasion.

If consent is to be voluntary and specific, companies should not be allowed to “bundle” consents for a number of uses and collections (both necessary and unnecessary) and require consumers to consent to all or none. These are important steps in addressing the unfairness of current data privacy practices.

Together these changes would bring Australia a little closer to the stronger data protection offered in the EU under the General Data Protection Regulation.

But the effectiveness of these changes would depend to a large extent on whether the government would also agree to improve funding and support for the federal privacy regulator, which has been criticised as passive and underfunded.

Another recommended change to consumer protection law would make it illegal to include unfair terms in consumer contracts and impose fines for such a contravention. Currently, for a first-time unfair contract terms “offender”, a court could only “draw a line” through the unfair term such that the company could not force the consumer to comply with it.

Making such terms illegal would increase the incentive for companies drafting standard form contracts to avoid detrimental terms that create a significant imbalance between them and their customers and are not reasonably necessary to protect the company’s legitimate interests.




Read more:
Soft terms like ‘open’ and ‘sharing’ don’t tell the true story of your data


The ACCC might also take action on these standard terms under our misleading and deceptive conduct laws. The Italian competition watchdog last week fined Facebook €10 million for conduct including misleading users about the extent of its data collection practices.

The ACCC appears to be considering the possibility of even broader laws against “unfair” practices, which regulators like the US Federal Trade Commission have used against bad data practices.

Final report in June 2019

As well as 11 recommendations, the report identifies nine areas for “further analysis and assessment” – a breadth that in itself reflects the complexity of the issues facing the ACCC.

The ACCC is seeking responses and feedback from stakeholders on the preliminary report, before creating a final report in June 2019.

Watch this space – or google it.




Read more:
How not to agree to clean public toilets when you accept any online terms and conditions


The Conversation


Katharine Kemp, Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital platforms. Why the ACCC’s proposals for Google and Facebook matter big time


The Competition and Consumer Commission is worried about the ability of the platforms we use to determine the news we read.
Shutterstock

Sacha Molitorisz, University of Technology Sydney and Derek Wilding, University of Technology Sydney

The Australian Competition and Consumer Commission has released the preliminary report of its Digital Platforms Inquiry, and Google and Facebook won’t be happy.

Rather than adopting a softly-softly approach, the ACCC has produced draft recommendations that are extensive and dramatic.

If implemented, they would significantly affect the way the digital platforms make their money, and help direct the content we consume.

What’s more, the inquiry is touted as a world first. Its findings will be closely monitored, and perhaps even adopted, by regulators internationally.

Who should care?

The digital platforms themselves should (and do) care.

Any new regulations designed to foster competition or protect individual privacy (both are among the ACCC’s recommendations) have the potential to harm their revenues.

They’ve a lot to lose. In 2017, nearly A$8 billion was spent on online advertising in Australia, and more than half went to Google and Facebook (p3).

News organisations whose output is disseminated by those platforms should (and do) care too.

As the ACCC notes, more than half of the traffic on Australian news websites comes via Google and Facebook (p8).




Read more:
News outlets air grievances and Facebook plays the underdog in ACCC inquiry


Increasingly, news producers depend on social media and search engines to connect with consumers. Google is used for 95% of searches (98% on mobile devices).

The rise of Google, Facebook and other digital platforms has been accompanied by unprecedented pressures on traditional news organisations.

Most obviously, classified advertising revenue has been unbundled from newspapers.

In 2001, classified advertising revenue stood at A$2 billion. By 2016, it had fallen to A$200 million. The future of newspapers’ ability to produce news is under a cloud, and digital platforms help control the weather.

Of course, advertisers care too.

But the stakeholders with the most to gain or lose are us, Australian citizens.




Read more:
Taking on big tech: where does Australia stand?


Our lives are mediated by Google, Facebook, Apple, Amazon, Twitter and others as never before. Google answers our search queries; Facebook hosts friends’ baby snaps; YouTube (owned by Google) distributes professional and user-generated videos; Instagram (owned by Facebook) hosts our holiday snaps.

As the ACCC notes, they have given us tremendous benefits, for minimal (apparent) cost.

And they’ve done it at lightning speed. Google arrived in 1998, Facebook in 2004 and Twitter in 2006. They are mediating what comes before our eyes in ways we don’t understand and (because they keep their algorithms secret) in ways we can’t understand.

What does the ACCC recommend?

The ACCC’s preliminary recommendations are far-reaching and bold.

First, it suggests an independent review to address the inadequacy of current media regulatory frameworks.

This would be a separate, independent inquiry to “design a regulatory framework that is able to effectively and consistently regulate the conduct of all entities which perform comparable functions in the production and delivery of content in Australia, including news and journalistic content, whether they are publishers, broadcasters, other media businesses, or digital platforms”.

This is a commendable and urgent proposal. Last year, cross-media ownership laws were repealed as anachronistic in a digital age. To protect media diversity and plurality, the government needs to revisit the issue of regulatory frameworks.




Read more:
Starter’s gun goes off on new phase of media concentration as Nine-Fairfax lead the way


Second, it proposes privacy safeguards. Privacy in Australia is dangerously under-protected. Digital platforms such as Google and Facebook generate revenue by knowing their users and targeting advertising with an accuracy unseen in human history.

As the ACCC puts it, “the current regulatory framework, including privacy laws, does not effectively deter certain data practices that exploit the information asymmetries and the bargaining power imbalances that exist between digital platforms and consumers.”

It makes a number of specific preliminary recommendations, including creating a right to erasure and the requirement of “express, opt-in consent”.

It also supports the creation of a civil right to sue for serious invasions of privacy, as recommended by the Australian Law Reform Commission.

Australians lack the protections that Americans enjoy under the US Bill of Rights; we certainly lack the protection afforded under Europe’s sweeping new privacy law.




Read more:
Google slapped hard in Europe over data handling


It wants the penalties for breaches of our existing Privacy Act increased. It recommends the creation of a third-party certification scheme, which would enable the Office of the Australian Information Commissioner to give complying bodies a “privacy seal or mark”.

And it wants a new or existing organisation to monitor attempts by vertically-integrated platforms such as Google to favour their own businesses. This would happen where Google gives prominence in search results to products sold through Google platforms, or prominence to stories from organisations with which it has a commercial relationship.

The organisation would oversee platforms that generate more than A$100 million annually, and which disseminate news, or hyperlinks to news, or snippets of news.

It would investigate complaints and even initiate its own investigations in order to understand how digital platforms are disseminating news and journalistic content and advertising.

As it notes,

The algorithms operated by each of Google and Facebook, as well as other policies, determine which content is surfaced and displayed to consumers in news feed and search results. However, the operation of these algorithms and other policies determining the surfacing of content remain opaque. (p10)

It makes other recommendations, touching on areas including merger law, pre-installed browsers and search engines, takedown procedures for copyright-infringing content, implementing a code of practice for digital platforms and changing the parts of Australian consumer law that deal with unfair contract terms.

Apart from its preliminary recommendations, there are further areas on which it invites comment and suggestions.




Read more:
New data access bill shows we need to get serious about privacy with independent oversight of the law


These include giving media organisations tax offsets for producing public interest news, and making subscribing to news publications tax deductible for consumers.

Platforms could be brought into a co-regulatory system for flagging content that is subject to quality control, creating their own quality mark. And a new ombudsman could deal with consumer complaints about scams, misleading advertising and the ranking of news content.

All of these recommendations and areas of interest will generate considerable debate.

What’s next?

The ACCC will accept submissions in response to its preliminary report until February 15.

At the Centre for Media Transition, we played a background role in one aspect of this inquiry.

Earlier this year, we were commissioned by the ACCC to prepare a report on the impact of digital platforms on news and journalistic content. It too was published on Monday.

Our findings overlap with the ACCC on some points, and diverge on others.




Read more:
Google and Facebook cosy up to media companies in response to the threat of regulation


Many thorny questions remain, but one point is clear: the current regime that oversees digital platforms is woefully inadequate. Right now, as the ACCC notes, digital platforms are largely unregulated.

New ways of thinking are needed. A mix of old laws (or no laws) and new media spells trouble.

The Conversation

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney and Derek Wilding, Co-Director, Centre for Media Transition, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Attention economy: Facebook delivers traffic but no money for news media



A lawsuit has been filed against Facebook, because it allegedly overstated its video statistics for years.
from http://www.shutterstock.com, CC BY-ND

Merja Myllylahti, Auckland University of Technology

Facebook and quality journalism are uneasy companions. Recent headlines suggest the platform’s “lies” about video metrics “smashed” journalism and the platform “crashed and burned” news companies’ referral traffic after it changed its algorithm in January.

The Australian Competition and Consumer Commission is currently holding a digital platform inquiry to investigate what kind of impact social media platforms, search engines and other content aggregators have on the local media market. In the United Kingdom, the Cairncross review is assessing similar issues, looking for ways of sustaining high-quality journalism in a changing market.

My research suggests these enquiries should pay attention to the impact of Facebook and the lessons we’ve already learned about the social platform.




Read more:
Class action against Facebook over facial recognition could pave the way for further lawsuits


Lesson one: overstated metrics

You should not trust all the metrics platform companies offer.

A recent article in The Atlantic revealed that a lawsuit has been filed against Facebook, because it allegedly overstated its video statistics for years, “exaggerating the time spent watching them by as much as 900 percent”.

The article asserts that “hundreds of journalists” lost their jobs after Facebook lured news companies to invest in video. However, Facebook has denied these claims.

The Spinoff, a digital native media company in New Zealand, recently published a similar analysis, arguing that Facebook’s inflated video statistics “smashed New Zealand journalism”: when the platform encouraged news companies to invest in online video production, they followed.

New Zealand media were collateral damage in Facebook’s obsessive desire to grow at all costs.




Read more:
How to stop haemorrhaging data on Facebook


Lesson two: traffic dependency

You should not rely on Facebook traffic.

A recent report by the Tow Center for Digital Journalism shows that in the United States, news companies have a substantial presence on Facebook. The report found that roughly 80% of local news outlets use the social platform.

Additionally, two recent studies demonstrate how news companies rely on Facebook to drive their traffic. My own research of four New Zealand news companies shows that 24% of their traffic came from social media, and most of that came from Facebook. Together, social and search drove 47% of New Zealand news sites’ traffic.

A larger study of 12 newspapers and broadcasters in Europe, conducted by Reuters Institute for the Study of Journalism, notes:

… news organisations are making major investments in social media and report receiving significant amounts of traffic from social media.

These studies confirm news companies’ Facebook dependency in terms of traffic, although the level of dependency differs between media outlets. The Reuters study found that Facebook’s algorithm changes in January had a severe impact on some of the news companies’ traffic. However, the severity of impact differed between media outlets. For example, Le Monde saw its interactions drop by almost a third, but for The Times these grew.

In New Zealand, The Spinoff suffered a substantial drop in its traffic after Facebook’s algorithm tweaks: its traffic from the platform dropped from 50% to 30%. We have seen similar drops in other markets, too.

The Nieman Lab, which reports on digital media innovation, found that following Facebook’s algorithm change, the drop in referral traffic was not universal. Some not-for-profit media organisations benefited.

Bottom line: The decline in referrals to publishers from Facebook is not universal, and in the face of those declines, other sources of traffic are more important than ever.

Indeed, it would be wise for all news outlets to grow their direct traffic which delivers better user engagement and monetisation.

Lesson three: don’t do it for the money

You don’t make much money on Facebook.

My research suggests if news companies abandoned Facebook, they would hardly lose any money. For the companies studied, social media traffic made up 0.03%–0.14% of their total revenue, and social shares 0.009%–0.2% of total revenue. These figures don’t take into account advertising income from platforms or subscription conversion.

The reason news companies continue to distribute their content on social media platforms is because they gain attention. News companies believe they can turn this attention into money, but so far with little success.

However, some news publishers have reported that they have gained digital subscriptions from the platforms. The Reuters study notes Facebook delivers news companies audience engagement that is “considered more cost effective at driving digital subscription sales.”

The bottom line is there is no data yet to verify how well news publishers manage to convert social media attention into digital subscriptions. Google, Facebook, Amazon and Apple are all offering, or are in the process of offering, digital subscription services to news publishers, but how well these will work for news companies remains to be seen.

The Conversation

Merja Myllylahti, Research Fellow, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

News outlets air grievances and Facebook plays the underdog in ACCC inquiry



The ACCC inquiry looks at the impact of digital platforms on the supply of news and journalistic content.
Shutterstock

Andrew Quodling, Queensland University of Technology

The recent Cambridge Analytica scandal and congressional testimony of Facebook CEO Mark Zuckerberg has brought global attention to the power and influence of Facebook as a platform. It has also invigorated discussions about how such platforms should be regulated.

Meanwhile, the Australian Competition and Consumer Commission (ACCC) has been conducting an inquiry into the influence of digital platforms on media and advertising markets in Australia.




Read more:
Google and Facebook cosy up to media companies in response to the threat of regulation


Submissions to the inquiry by a range of media outlets, advertisers, as well as Google and Facebook, were published last week. Although Facebook has expressed interest in participating in regulatory debates, its submission is a disappointing early indication of how we might expect the company to downplay its magnitude and its roles in future regulatory debates.

The purpose of the inquiry

Late in 2017, the Federal Treasurer, Scott Morrison, directed the ACCC to conduct the inquiry into digital platforms, including search engines, social networks and other aggregators. As part of the ongoing inquiry, the ACCC will consider:

the impact of digital platforms on the supply of news and journalistic content and the implications of this for media content creators, advertisers and consumers.

It came about as a result of negotiations between the government and the former independent Senator Nick Xenophon. Xenophon insisted on the inquiry in exchange for his support for the government’s changes to Media Ownership laws.

To some extent, the inquiry retreads familiar ground. Old anxieties about declining revenues for journalistic organisations and the advent of internet technologies and internet-focused stakeholders continue a conversation that has been going for well over a decade.

News outlets air grievances

In total, the ACCC published 57 submissions. This includes contributions from most major Australian media organisations, industry bodies, unions and advertisers.

Many respondents took the opportunity to criticise the narrow scope of the inquiry. The inquiry’s scope is somewhat frustrating considering the complexities digital platforms present. They impact not just media and journalism markets, but also aspects of political, social and everyday life.

While the ABC’s submission was generally favourable in its discussion of online platforms, other Australian media organisations used the inquiry as an opportunity to air grievances about the impact of digital platforms.




Read more:
Government regulation of social media would be a cure far worse than the disease


News Corp accused the platforms of abusing the local market and engaging in anti-competitive practices. Commercial Radio Australia pointed to a lack of regulation compelling transparent and structured audience metrics. Nine complained of declining revenues and a lack of platform-specific regulations, while Foxtel raised the issue of copyright infringement.

Seven West Media and Ten argued that there is a barrier to entry imposed on traditional publishers by the significant existing collection of personal data that platforms like Facebook and Google can leverage.

The platforms respond

In their submissions, Facebook and Google both attempted to build a narrative that emphasised how the tools and systems they provide can empower journalists and other content creators. Meanwhile, they minimised or outright ignored the opportunity to discuss the broader concerns of the broadcasters, publishers and individuals who are stakeholders in the industries Facebook and Google are operating in.

Google’s short response to the inquiry is not particularly interesting, in part due to its brevity and its focus on championing Google’s notionally positive influence for publishers. Facebook had significantly more to say in its 56-page submission, which also gives context to Mark Zuckerberg’s recent comments welcoming the potential for regulation.

Facebook plays the underdog

Facebook’s submission reveals how the company portrays itself to regulators, with an interesting element of self-deprecation. Take for example, the statement that:

Facebook is popular, but it is just one small part of how Australians connect with friends, family and the world around us.

Given a user-base that dwarfs the population of, well, even the most populous countries, Facebook’s most compelling option for presenting itself as an underdog in this space is to compare itself by share of “attention”, rather than share of market.

Facebook presents “multi-homing” – the practice of having and using a variety of apps on your phone – as a key concern. It paints a picture of precarity in a marketplace that it dominates.




Read more:
How to regulate Facebook and the online giants in one word: transparency


Facebook’s arguments about competition also ring hollow because the platform’s design and scale allows it to benefit from significant network effects.

Put simply, a network effect is when existing and new users benefit from the growth of a network. A familiar example can be seen in the services of mobile phone network providers: Telstra and Optus provide cheaper or no-cost calls and messaging between customers of their own networks.

But the similarities end there. While you could still call a friend on a competing mobile network, there is no such interoperability with platforms like Facebook. This design helps Facebook protect its market power by keeping total control over the Facebook platform’s network.

If you decide to leave Facebook, you sever the connections between yourself and other users of the platform. Given Facebook’s focus on augmenting social functions this can, quite literally, be an ostracising endeavour. In spite of both the recent Cambridge Analytica revelations, and several #deletefacebook campaigns, we’re yet to see a significant exodus of users from the platform.
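The compounding advantage described above can be sketched with a back-of-the-envelope calculation. This is purely illustrative – the function and figures are ours, not Facebook’s or the ACCC’s:

```python
# Back-of-the-envelope sketch of a network effect: the number of possible
# connections grows roughly with the square of the user count (the intuition
# behind Metcalfe's law). Purely illustrative figures.

def possible_connections(users: int) -> int:
    """Number of distinct pairs of people in a network of `users` members."""
    return users * (users - 1) // 2

for users in (10, 100, 1000):
    print(users, possible_connections(users))

# Growing the user base tenfold multiplies the potential connections by
# roughly a hundred, which is why an incumbent's lead compounds and why
# leaving the dominant network is so costly for any individual user.
```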

A disappointing response

Facebook has a colossal user base. Over two billion people use the platform each month, and almost three quarters of those people use Facebook on a daily basis. It owns Instagram and WhatsApp – each of which is a profoundly successful platform in its own right.

Facebook is a titan of this industry, and the sooner it stops pretending to be a bit player, the richer our discourse about platforms and their role in society can become.

The Conversation

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
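The Ashley–Blair–Carmen walkthrough above can be sketched in a few lines of code. This is a hypothetical illustration of the general contact-matching idea, not Facebook’s actual implementation; all class and method names are invented:

```python
# A minimal sketch of shadow-profile-style contact matching.
# Hypothetical illustration only -- not Facebook's real data model.

class ContactGraph:
    def __init__(self):
        self.members = set()       # people who have joined the platform
        self.known_contacts = {}   # contact name -> set of members who uploaded it

    def join(self, person, uploaded_contacts):
        """A person joins and uploads their address book."""
        self.members.add(person)
        for contact in uploaded_contacts:
            # Record who knows this contact, whether or not the contact has
            # ever joined -- this record is the "shadow profile".
            self.known_contacts.setdefault(contact, set()).add(person)

    def suggestions(self, person):
        """People You May Know: members who uploaded this person's details."""
        return self.known_contacts.get(person, set()) & self.members


graph = ContactGraph()
graph.join("Ashley", {"Blair", "Carmen"})  # Ashley uploads her contacts
graph.join("Blair", {"Ashley", "Carmen"})  # Blair joins later

# Blair is immediately suggested Ashley, via the record her upload created.
print(sorted(graph.suggestions("Blair")))      # ['Ashley']

# Carmen has never joined, yet the graph already knows two of her contacts.
print(sorted(graph.known_contacts["Carmen"]))  # ['Ashley', 'Blair']
```

The key point the sketch makes concrete: Carmen’s entry exists, and grows, without any action or consent on her part.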

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there is a lot of data on Facebook, and what exactly is “yours” or simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator Kennedy’s exclamation was strong, but it is a fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to loom.

Ideally, the company should look to broaden its governance horizons by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

How to stop haemorrhaging data on Facebook



Every time you open an app, click a link, like a post, read an article, hover over an ad, or connect to someone, you are generating data.
Shutterstock

Belinda Barnet, Swinburne University of Technology

If you are one of 2.2 billion Facebook users worldwide, you have probably been alarmed by the recent coverage of the Cambridge Analytica scandal, a story that began when The Guardian revealed 50 million (now thought to be 87 million) user profiles had been retrieved and shared without the consent of users.

Though the #deletefacebook campaign has gained momentum on Twitter, it is simply not practical for most of us to delete our accounts. It is technically difficult to do, and given that a quarter of the human population is on the platform, there is an undeniable social cost to being absent.




Read more:
Why we should all cut the Facebook cord. Or should we?


It is also not possible to use or even to have a Facebook profile without giving up at least some data: every time you open the app, click a link, like a post, hover over an ad, or connect to someone, you are generating data. This particular type of data is not something you can control, because Facebook considers such data its property.

Every service has a price, and the price for being on Facebook is your data.

However, you can remain on Facebook (and other social media platforms like it) without haemorrhaging data. If you want to stay in touch with those old school friends – despite the fact you will probably never see them again – here’s what you can do, step by step. The following instructions are tailored to Facebook’s settings on mobile.

Your location

The first place to start is with the device you are holding in your hand.
Facebook requests access to your GPS location by default, and unless you were reading the fine print when you installed the application (if you are that one person please tell me where you find the time), it will currently have access.

This means that whenever you open the app it knows where you are, and unless you have changed your location sharing setting from “Always” to “Never” or “Only while using”, it can track your location when you’re not using the app as well.

To keep your daily movements to yourself, go into Settings on Apple iPhone or Android, go to Location Services, and turn off or select “Never” for Facebook.

While you’re there, check for other social media apps with location access (like Twitter and Instagram) and consider changing them to “Never”.

Remember that pictures from your phone are GPS-tagged too, so if you intend to share them on Facebook, revoke your camera’s access to GPS as well.
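Those GPS coordinates travel inside the photo file itself, in a metadata block called EXIF. As a rough illustration of where that data lives, here is a simplified, standard-library-only sketch that checks whether a JPEG contains an EXIF segment. Real EXIF parsing is considerably more involved, and the two-segment “JPEG” built at the end is a synthetic example for demonstration only.

```python
import struct

def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte string contains an EXIF (APP1) segment,
    which is where camera metadata such as GPS coordinates is stored."""
    if data[:2] != b"\xff\xd8":          # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:               # start of scan: no more metadata
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                  # found an APP1/EXIF segment
        i += 2 + length                  # skip to the next segment
    return False

# Synthetic JPEGs for demonstration: one with an (empty) EXIF block,
# one without.
tagged = (b"\xff\xd8"                                            # SOI
          + b"\xff\xe1" + struct.pack(">H", 8) + b"Exif\x00\x00" # APP1
          + b"\xff\xd9")                                         # EOI
plain = b"\xff\xd8\xff\xd9"
```

Running `has_exif` on the two samples shows that the tagged file carries metadata the plain one does not; tools that “strip EXIF” simply rewrite the file without these segments.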

Your content

The next thing to do is control who can see what you post and who can see private information like your email address and phone number, and then apply these settings retrospectively to everything you’ve already posted.

Facebook has a “Privacy Shortcuts” tab under Settings, but we are going to start in Account Settings > Privacy.

You control who sees what you post, and who sees the people and pages you follow, by limiting the audience here.

Change “Who can see your future posts” and “Who can see the people and pages you follow” to “Only Friends”.

In the same menu, if you scroll down, you will see a setting called “Do you want search engines outside of Facebook to link to your profile?” Select No.

After you have made these changes, scroll down and limit the audience for past posts. Apply the new setting to all past posts, even though Facebook will try to alarm you. “The only way to undo this is to change the audience of each post one at a time! Oh my Goodness! You’ll need to change 1,700 posts over ten years.” Ignore your fears and click Limit.




Read more:
It’s time for third-party data brokers to emerge from the shadows


Next, go into Privacy Shortcuts – this is on the navigation bar below Settings – then select Privacy Checkup. Limit who can see your personal information (date of birth, email address, phone number, place of birth if you provided it) to “Only Me”.

Third party apps

Every time you use Facebook to “log in” to a service or application, you are granting both Facebook and the third-party service access to your data.

Following the Cambridge Analytica scandal, Facebook has pledged to investigate and change this, but in the meantime it is best not to use Facebook to log in to third-party services. That includes Bingo Bash, unfortunately.

The third screen of Privacy Checkup shows you which apps have access to your data at present. Delete any that you don’t recognise or that are unnecessary.

In the final step we will be turning off “Facebook integration” altogether. This is optional. If you choose to do this, it will revoke permission for all previous apps, plugins, and websites that have access to your data. It will also prevent your friends from harvesting your data for their apps.

In this case you don’t need to delete individual apps as they will all disappear.

Turning off Facebook integration

If you want to be as secure as it is possible to be on Facebook, you can revoke third-party access to your content completely. This means turning off all apps, plugins and websites.

If you take this step Facebook won’t be able to receive information about your use of apps outside of Facebook and apps won’t be able to receive your Facebook data.

If you’re a business this is not a good idea, as you will need integration to advertise and to test apps. This step is for personal pages.

It may make life a little more difficult for you, in that your next purchase from Farfetch will require you to set up your own account rather than just harvesting your profile. Your Klout score may drop because it can’t see Facebook, and that might feel terrible.

Remember this setting only applies to the data you post and provide yourself. The signals you generate using Facebook (what you like, click on, read) will still belong to Facebook and will be used to tailor advertising.

To turn off Facebook integration, go into Settings, then Apps. Select Apps, websites and games.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


Facebook will warn you about all the Farmville updates you will miss and how you will have a hard time logging in to The Guardian without Facebook. Ignore this and select “Turn off”.

Well done. Your data is now as secure as it is possible to be on Facebook. Remember, though, that everything you do on the platform still generates data.

Belinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.