The devil is in the detail of government bill to enable access to communications data


Monique Mann, Queensland University of Technology

The Australian government has released a draft of its long awaited bill to provide law enforcement and security agencies with new powers to respond to the challenges posed by encryption.

According to the Department of Home Affairs, encryption already impacts 90% of the Australian Security Intelligence Organisation's (ASIO) priority cases, and 90% of data intercepted by the Australian Federal Police. The measures respond to estimates that, by 2020, communications among terrorists and organised crime groups will be entirely encrypted.

The Department of Home Affairs and ASIO can already access encrypted data with specialist decryption techniques – or at points where data are not encrypted. But this takes time. The new bill aims to speed up this process, but these broad and ill-defined new powers have significant scope for abuse.




Read more:
New data access bill shows we need to get serious about privacy with independent oversight of the law


The Department of Home Affairs argues this new framework will not compel communications providers to build systemic weaknesses or vulnerabilities into their systems. In other words, it is not a backdoor.

But it will require providers to offer up details about the technical characteristics of their systems that could help agencies exploit weaknesses that have not been patched. It could also require providers to install software, and to design and build new systems.

Compelling assistance and access

The draft Assistance and Access Bill introduces three main reforms.

First, it increases the obligations of both domestic and offshore organisations to assist law enforcement and security agencies to access information. Second, it introduces new computer access warrants that enable law enforcement to covertly obtain evidence directly from a device (this occurs at the endpoints when information is not encrypted). Finally, it increases existing powers that law enforcement have to access data through search and seizure warrants.

The bill is modelled on the UK’s Investigatory Powers Act, which introduced mandatory decryption obligations. Under the UK Act, the UK government can order telecommunication providers to remove any form of electronic protection that is applied by, or on behalf of, an operator. Whether or not this is technically possible is another question.

Similar to the UK laws, the Australian bill puts the onus on telecommunication providers to give security agencies access to communications. That might mean providing access to information at points where it is not encrypted, but it’s not immediately clear what other requirements can or will be imposed.




Read more:
End-to-end encryption isn’t enough security for ‘real people’


For example, the bill allows the Director-General of Security or the chief officer of an interception agency to compel a provider to do an unlimited range of acts or things. That could mean anything from removing security measures to deleting messages or collecting extra data. Providers will also be required to conceal any action taken covertly by law enforcement.

Further, the Attorney-General may issue a “technical capability notice” directed towards ensuring that the provider is capable of giving certain types of help to ASIO or an interception agency.

This means providers will be required to develop new ways for law enforcement to collect information. As in the UK, it’s not clear whether a provider will be able to offer true end-to-end encryption and still be able to comply with the notices. Providers that breach the law risk facing $10 million fines.

Cause for concern

The bill puts few limits or constraints on the assistance that telecommunication providers may be ordered to offer. There are also concerns about transparency. The bill would make it an offence to disclose information about government agency activities without authorisation. Anyone leaking information about data collection by the government – as Edward Snowden did in the US – could go to jail for five years.

There are limited oversight and accountability structures and processes in place. The Director-General of Security, the chief officer of an interception agency and the Attorney-General can issue notices without judicial oversight. This differs from how it works in the UK, where a specific judicial oversight regime was established, in addition to the introduction of an Investigatory Powers Commissioner.

Notices can be issued to enforce domestic laws and to assist the enforcement of the criminal laws of foreign countries. They can also be issued in the broader interests of national security, or to protect the public revenue. These limits on such exceptional powers are vague and unclear.




Read more:
Police want to read encrypted messages, but they already have significant power to access our data


The range of service providers covered is also extremely broad. It might include telecommunication companies, internet service providers, email providers, social media platforms and a range of other “over-the-top” services. It also covers those who develop, supply or update software, and those who manufacture, supply, install or maintain data processing devices.

The enforcement of criminal laws in other countries may mean international requests for data will be funnelled through Australia as the “weakest-link” of our Five Eyes allies. This is because Australia has no enforceable human rights protections at the federal level.

It’s not clear how the government would enforce these laws against transnational technology companies. For example, if Facebook were issued a fine under the laws, it could simply withdraw its operations or refuse to pay. And $10 million is a drop in the ocean for companies such as Facebook, whose total revenue last year exceeded US$40 billion.

Australia is a surveillance state

As I have argued elsewhere, the broad powers outlined in the bill are neither necessary nor proportionate. Police already have existing broad powers, which are further strengthened by this bill, such as their ability to covertly hack devices at the endpoints when information is not encrypted.

Australia has limited human rights and privacy protections. This has enabled a constant and steady expansion of the powers and capabilities of the surveillance state. If we want to protect the privacy of our communications we must demand it.

The Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 (Cth) is still in a draft stage, and the Department of Home Affairs invites public comment until 10 September 2018. Submit any comments to assistancebill.consultation@homeaffairs.gov.au.

Monique Mann, Vice Chancellor’s Research Fellow in Regulation of Technology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.


New data access bill shows we need to get serious about privacy with independent oversight of the law




Greg Austin, UNSW

The federal government today announced its proposed legislation to give law enforcement agencies yet more avenues to reach into our private lives through access to our personal communications and data. This never-ending story of parliamentary bills defies logic, and does not offer the necessary oversight and protections.

The trend has been led by Prime Minister Malcolm Turnbull, with help from an ever-growing number of security ministers and senior officials. Could it be that the proliferation of government security roles is a self-perpetuating industry leading to ever more government powers for privacy encroachment?

That definitely appears to be the case.

Striking the right balance between data access and privacy is a tricky problem, but the government’s current approach is doing little to solve it. We need better oversight of law enforcement access to our data to ensure it complies with privacy principles and actually results in convictions. That might require setting up an independent judicial review mechanism to report outcomes on an annual basis.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Where is the accountability?

The succession of data access legislation in the Australian parliament is fast becoming a Mad Hatter’s tea party – a characterisation justified by the increasingly unproductive public conversations between the government on one hand, and legal specialists and rights advocates on the other.

If the government says it needs new laws to tackle “terrorism and paedophilia”, then the rule seems to be that the other side will be criticised for bringing up “privacy protection”. The federal opposition has surrendered any meaningful resistance to this parade of legislation.

Rights advocates have been backed into a corner, forced to repeat their concerns over each new piece of legislation, while neither they, the government, our Privacy Commissioner, nor any of the other “commissioners” are called to account on fundamental matters of principle.

Speaking of the commissioner class, Australia just got a new one last week: the Data Commissioner. Strangely, the impetus for this appointment came from the Productivity Commission.

The post has three purposes:

  1. to promote greater use of data,
  2. to drive economic benefits and innovation from greater use of data, and
  3. to build trust with the Australian community about the government’s use of data.

The problem with this logic is that purposes one and two can only be distinguished by the seemingly catch-all character of the first: that if data exists it must be used.

Leaving aside that minor point, the notion that the government needs to build trust with the Australian community on data policy speaks for itself.

National Privacy Principles fall short

There is near universal agreement that the government is managing this issue badly, from the census data management issue to the “My Health Record” debacle. The growing commissioner class has not been much help.

Australia does have personal data protection principles, you may be surprised to learn. They are called “Privacy Principles”. You may be even more surprised to learn that the rights offered in these principles exist only up to the point where any enforcement arm of government wants the data.




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


So it seems that Australians have to rely on the leadership of the Productivity Commission (for economic policy) to guarantee our rights in cyber space, at least when it comes to our personal data.

Better oversight is required

There is another approach to reconciling citizens’ interests in privacy protection with legitimate and important enforcement needs against terrorists and paedophiles: judicial review.

The government argues (unconvincingly, according to police sources) that this process adequately protects citizens by requiring law enforcement to obtain court-ordered warrants to access information. The record in some other countries suggests otherwise: according to official US data, judges almost always wave through applications from enforcement authorities.

There is a second level of judicial review open to the government. This is to set up an independent judicial review mechanism that is obliged to annually review all instances of government access to personal data under warrant, and to report on the virtues or shortcomings of that access against enforcement outcomes and privacy principles.

There are two essential features of this proposal. First, the reviewing officer is a judge and not a public servant (the “commissioner class”). Second, the scope of the function is review of the daily operation of the intrusive laws, not just the post-facto examination of notorious cases of data breaches.

It would take a lengthy academic volume to make the case for judicial review of this kind. But it can be defended simply on economic grounds: such a review process would shine light on the efficiency of police investigations.

According to data released by the UK government, the overwhelming share of arrests for terrorist offences in the UK (many based on court-approved warrants for access to private data) do not result in convictions. There were only 37 convictions out of 441 arrests for terrorist-related offences in the 12 months to March 2018 (a conviction rate of under 10%).




Read more:
Explainer: what is differential privacy and how can it protect your data?


The Turnbull government deserves credit for its recognition of the values of legal review. Its continuing commitment to posts such as the National Security Legislation Monitor – and the appointment of a high-profile barrister to such a post – is evidence of that.

But somewhere along the way, the administration of data privacy is falling foul of a growing bureaucratic mess.

The only way to bring order to the chaos is through robust accountability; and the only people with the authority or legitimacy in our political system to do that are probably judges who are independent of the government.

Greg Austin, Professor UNSW Canberra Cyber, UNSW

This article was originally published on The Conversation. Read the original article.

Here’s what a privacy policy that’s easy to understand could look like



We need a simple system for categorising data privacy settings, similar to the way Creative Commons specifies how work can be legally shared.

Alexander Krumpholz, CSIRO and Raj Gaire, CSIRO

Data privacy awareness has recently gained momentum, thanks in part to the Cambridge Analytica data breach and the introduction of the European Union’s General Data Protection Regulation (GDPR).

One of the key elements of the GDPR is that it requires companies to simplify their privacy-related terms and conditions (T&Cs) so that they are understandable to the general public. As a result, companies have been rapidly updating their T&Cs and notifying their existing users.




Read more:
Why your app is updating its privacy settings and how this will affect businesses


On one hand, these new T&Cs are now simplified legal documents. On the other hand, they are still too long. Unfortunately, most of us still skip reading those documents and simply click “accept”.

Wouldn’t it be nice if we could specify our general privacy preferences in our devices, have them check privacy policies when we sign up for apps, and warn us if the agreements overstep?

This dream is achievable.

Creative Commons as a template

For decades, software was sold or licensed with licence agreements that were several pages long, written by lawyers and hard to understand. Later, software came with standardised licences, such as the GNU General Public License, the BSD licences or the Apache License. Those licences define users’ rights in different use cases and protect the provider from liabilities.

However, they were still hard to understand.

With the foundation of Creative Commons (CC) in 2001, a simplified licence was developed that reduced complex legal copyright agreements to a small set of copyright classes.

These licences are represented by small icons and short acronyms, and can be used for images, music, text and software. This helps creative users to immediately recognise how – or whether – they can use the licensed content in their own work.




Read more:
Explainer: Creative Commons


Imagine you have taken a photo and want to share it with others for non-commercial purposes only, such as to illustrate a story on a not-for-profit news website. You could license your photo as CC BY-NC when uploading it to Flickr. In Creative Commons terms, the abbreviation BY (for attribution) requires the user to credit the owner, and NC (non-commercial) restricts the use to non-commercial applications.

Internet search engines will index these attributes with the files. So, if I search for photos explicitly licensed with those restrictions, via Google for example, I will find your photo. This is possible because even computers can understand these licences.
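
To illustrate that machine-readability, here is a minimal sketch in Python (our own illustration, not official Creative Commons tooling): because licence codes are short, structured strings, software can filter content by what each licence permits.

    # Our own illustrative sketch: licence codes are machine-readable,
    # so software can filter content by what each licence permits.
    photos = [
        {"title": "Harbour at dawn", "licence": "CC BY"},
        {"title": "Street market", "licence": "CC BY-NC"},
        {"title": "Concert crowd", "licence": "All rights reserved"},
    ]

    def allows_commercial_use(licence):
        """True for CC licences without the NC (non-commercial) element."""
        return licence.startswith("CC") and "NC" not in licence.split("-")

    # Prints only "Harbour at dawn"; the others are NC-restricted or unlicensed.
    for photo in photos:
        if allows_commercial_use(photo["licence"]):
            print(photo["title"])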

We need to develop Privacy Commons

Similar to Creative Commons licences under which creative content is given to others, we need Privacy Commons by which companies can inform users how they will use their data.

The Privacy Commons need to be legally binding, simple for people to understand and simple for computers to understand. Here are our suggestions for what a Privacy Commons might look like.

We propose that the Privacy Commons classifications cover at least three dimensions of private data: collection, protection, and spread.

What data is being collected?

This dimension is to specify what level of personal information is collected from the user, and is therefore at risk: for example, name, email, phone number, address, date of birth, biometrics (including photos), relationships, networks, personal preferences and political opinions. These could be categorised at different levels of sensitivity.

How is your data protected?

This dimension specifies:

  • where your data is stored – within an app, on one server, or on servers at multiple locations
  • how it is stored and transported – whether it is plain text or encrypted
  • how long the data is kept – days, months, years or permanently
  • how access to your data is controlled within the organisation – this indicates how well your data is protected against potentially malicious actors like hackers.

How is your data spread?

In other words, who is your data shared with? This dimension tells you whether or not the data is shared with third parties. If the data is shared, will it be de-identified appropriately? Is it shared for research purposes, or sold for commercial purposes? Are there any further controls in place after the data is shared? Will it be deleted by the third party when the user deletes it at the primary organisation?
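
Pulling the three dimensions together, a machine-readable Privacy Commons descriptor might be encoded along the following lines. This is a minimal sketch: the class, field names and sample values are our own illustrative assumptions, not an existing standard.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PrivacyCommons:
        """A hypothetical machine-readable privacy classification."""
        collection: List[str]   # what personal data is collected
        protection: dict        # where and how it is stored, and for how long
        spread: dict            # whether and how it is shared with third parties

    # A hypothetical app's declaration:
    example = PrivacyCommons(
        collection=["name", "email", "location"],
        protection={"encrypted": True, "retention": "12 months",
                    "storage": "servers at multiple locations"},
        spread={"third_parties": True, "de_identified": True,
                "sold_commercially": False},
    )

    # A device could compare such a declaration against the user's stated
    # preferences before allowing an app to be installed.
    assert example.spread["sold_commercially"] is False

Encoded this way, a policy becomes something a device can check automatically, rather than a document a person must read.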




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


Privacy Commons will help companies think about user privacy before offering services. It will also help solve the problem of communication about privacy in the same way that Creative Commons is solving the problems of licensing for humans and computers. Similar ideas have been discussed in the past, such as Mozilla’s privacy icons project. We need to revisit those thoughts in the contemporary context of the GDPR.

Such a system would allow you to specify Privacy Commons settings in the configuration of your children’s devices, so that only appropriate apps can be installed. Privacy Commons could also be applied to inform you about the use of your data gathered for other purposes like loyalty rewards cards, such as FlyBuys.

Of course, Privacy Commons will not solve everything.

For example, it will still be a challenge to address concerns about third party personal data brokers like Acxiom or Oracle collecting, linking and selling our data without most of us even knowing.

But at least it will be a step in the right direction.

Alexander Krumpholz, Senior Experimental Scientist, CSIRO and Raj Gaire, Senior Experimental Scientist, CSIRO

This article was originally published on The Conversation. Read the original article.

94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour



It would take the average person 244 hours per year (six working weeks) to read all privacy policies that apply to them.

Katharine Kemp, UNSW

Australians are agreeing to privacy policies they are not comfortable with and would like companies only to collect data that is essential for the delivery of their service. That’s according to new, nation-wide research on consumer attitudes to privacy policies released by the Consumer Policy Research Centre (CPRC) today.

These findings are particularly important since the government’s announcement last week that it plans to implement “open banking” (which gives consumers better access to and control over their banking data) as the first stage of the proposed “consumer data right” from July 2019.




Read more:
How not to agree to clean public toilets when you accept any online terms and conditions


Consumer advocates argue that existing privacy regulation in Australia needs to be strengthened before this new regime is implemented. In many cases, they say, consumers are not truly providing their “informed consent” to current uses of their personal information.

While some blame consumers for failing to read privacy policies, I argue that not reading is often rational behaviour under the current consent model. We need improved standards for consent under our Privacy Act as a first step in improving data protection.

Australians are not reading privacy policies

Under the Privacy Act, in many cases, the collection, use or disclosure of personal information is justified by the individual’s consent. This is consistent with the “notice and choice” model for privacy regulation: we receive notice of the proposed treatment of our information and we have a choice about whether to accept.

But according to the CPRC Report, most Australians (94%) do not read all privacy policies that apply to them. While some suggest this is because we don’t care about our privacy, there are four good reasons why people who do care about their privacy don’t read all privacy policies.


We don’t have enough time

There are many privacy policies that apply to each of us and most are lengthy. But could we read them all if we cared enough?

According to international research, it would take the average person 244 hours per year (six working weeks) to read all privacy policies that apply to them, not including the time it would take to check websites for changes to these policies. This would be an impossible task for most working adults.

Under our current law, if you don’t have time to read the thousands of words in the policy, your consent can be implied by your continued use of the website which provides a link to that policy.

We can’t understand them

According to the CPRC, one of the reasons users typically do not read policies is that they are difficult to comprehend.

Very often these policies lead with feel-good assurances such as “we care about your privacy”, and leave more concerning matters to be discovered later in vague, open-ended terms, such as:

…we may collect your personal information for research, marketing, for efficiency purposes…

In fact, the CPRC Report states around one in five Australians:

…wrongly believed that if a company had a Privacy Policy, it meant they would not share information with other websites or companies.




Read more:
Consent and ethics in Facebook’s emotional manipulation study


We can’t negotiate for better terms

We generally have no ability to negotiate about how much of our data the company will collect, and how it will use and disclose it.

According to the CPRC Report, most Australians want companies only to collect data that is essential for the delivery of their service (91%) and want options to opt out of data collection (95%).

However, our law allows companies to group into one consent various types and uses of our data. Some are essential to providing the service, such as your name and address for delivery, and some are not, such as disclosing your details to “business partners” for marketing research.

These terms are often presented in standard form, on a take-it-or-leave-it basis. You either consent to everything or refrain from using the service.


We can’t avoid the service altogether

According to the CPRC, over two thirds of Australians say they have agreed to privacy terms with which they are not comfortable, most often because it is the only way to access the product or service in question.

In a 2017 report, the Productivity Commission expressed the view that:

… even in sectors where there are dominant firms, such as social media, consumers can choose whether or not to use the class of product or service at all, without adversely affecting their quality of life.

However, in many cases, we cannot simply walk away if we don’t like the privacy terms.

Schools, for example, may decide what apps parents must use to communicate about their children. Many jobs require people to have Facebook or other social media accounts. Lack of transparency and competition in privacy terms also means there is often little to choose between rival providers.

We need higher standards for consent

There is frequently no real notice and no real choice in how our personal data is used by companies.

The EU General Data Protection Regulation (GDPR), which comes into effect on 25 May 2018, provides one model for improved consent. Under the GDPR, consent:

… should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


The Privacy Act should be amended along these lines to set higher standards for consent, including that consent should be:

  • explicit and require action on the part of the customer – consent should not be implied by the mere use of a website or service and there should be no pre-ticked boxes. Privacy should be the default;

  • unbundled – individuals should be able to choose to consent only to the collection and use of data essential to the delivery of the service, with separate choices of whether to consent to additional collections and uses;

  • revocable – the individual should have the option to withdraw their consent in respect of future uses of their personal data at any time.

While further improvements are needed, upgrading our standards for consent would be an important first step.

Katharine Kemp, Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article was originally published on The Conversation. Read the original article.

Tough new EU privacy regulations could lead to better protections in Australia



The EU’s General Data Protection Regulation comes into force on May 25.

Vincent Mitchell, University of Sydney

Major personal data breaches, such as those that occurred recently at the Commonwealth Bank, Cambridge Analytica and Yahoo, have taught us how vulnerable our privacy is.

Like the cigarette and alcohol markets, it took a long time to prove that poorly regulated data collection can do us harm. And as with passive smoking, we now know that data trading can harm those around us as well as ourselves.

Regulators in the European Union are cracking down on the problem with the introduction of the strict new General Data Protection Regulation (GDPR) from May 25. The hope is that the new rules will shift the balance of power in the market for data away from companies and back to the owners of that data.




Read more:
Online privacy must improve after the Facebook data uproar


The GDPR applies to companies that trade in the EU or process the data of people in the EU. This includes some of Australia’s biggest companies, such as the Commonwealth Bank and Bunnings Warehouse. Since companies that don’t operate in the EU or process the data of people in the EU aren’t required to comply, Australian consumers could soon be facing a two-tier system of privacy protections.

That isn’t all bad news. By choosing to deal with companies with better data protection policies, Australian consumers can create pressure for change in how personal data is handled across the board.

How the GDPR empowers consumers

The GDPR makes it clearer what companies should be doing to protect personal data and empowers consumers like never before.

When dealing with companies operating in the EU, you will now have the right to:

  1. access your own data and any derived or inferred data

  2. rectify errors and challenge decisions based on it, including to object to direct marketing

  3. be forgotten and erased in most situations

  4. move your data more easily, such as when changing insurance companies or banks

  5. object to certain types of data processing and challenge significant decisions based purely on profiling, such as for medical insurance or loans

  6. receive compensation.

This final right will lead to another profound improvement in regulation of the market for personal data.

Consumers as a regulating force

As a result of these new rights and powers, consumers themselves can help regulate company behaviour by monitoring how well they comply with GDPR.

In addition to complaining to authorities, such as the Information Commissioner, when consumers encounter breaches they can complain directly to the company, share stories online and alert fellow users.

This can be powerful – especially when whistleblowers actually work in the industry, as was the case with Cambridge Analytica’s Christopher Wylie.




Read more:
GDPR: ten easy steps all organisations should follow


Companies that don’t protect people’s personal data will face fines from the regulator of up to €20 million, or 4% of global turnover, whichever is greater. In addition, they could be required to pay compensation directly to consumers who have asked investigating authorities to claim on their behalf.

This potentially means that all those millions of EU citizens who were caught up in the Facebook Cambridge Analytica scandal could, in the future, be able to sue Facebook.

From the viewpoint of empowering and motivating consumers to monitor what companies do with their data, this is a momentous change.

A shift in our expectations of data privacy

The way things currently stand, there is an imbalance in the personal data market. Companies take all the profit from our personal data, yet we pay the price as individuals, or as a society, for privacy breaches.

But as a result of GDPR, we are likely to see expectations of how companies should act begin to shift. This will create pressure for change.

You’ve probably already been sent notifications from companies asking you to re-consent to their privacy policies. This is because GDPR expects consent to be more explicit and active – default settings and pre-checked boxes are considered inadequate.

Consumers should also expect companies to make it just as easy to withdraw consent as it is to give it.




Read more:
Why your app is updating its privacy settings and how this will affect businesses


Unlike New Zealand, which has strong privacy laws, Australia – along with the massive data markets of the BRIC countries – does not have personal data protections that are considered “adequate” by EU standards.

Consumers should be wary of vested interest arguments, such as Facebook’s claim that it just wants to connect people. To use an analogy, that’s comparable to an alcohol manufacturer saying it just wants people to have a good time, without highlighting the potential risks of alcohol use.

If you want these greater rights and protections, now is the perfect time to lobby your Members of Parliament and demand the best available protection from all the companies you deal with.

Vincent Mitchell, Professor of Marketing, University of Sydney

This article was originally published on The Conversation. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
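
The mechanism described above can be captured in a few lines of code. This is a toy model of our own, a deliberate simplification; Facebook’s real pipeline is not public.

    from collections import defaultdict

    # contact -> set of members whose uploaded address books mention them
    shadow_profiles = defaultdict(set)

    def join_and_upload(new_member, contacts):
        """A new member joins and uploads their address book."""
        suggestions = set(shadow_profiles[new_member])  # people who already know them
        for contact in contacts:
            shadow_profiles[contact].add(new_member)
        return suggestions

    print(join_and_upload("Ashley", ["Blair", "Carmen"]))  # set(): no one knows Ashley yet
    print(join_and_upload("Blair", ["Ashley", "Carmen"]))  # {'Ashley'}: a ready-made connection
    # Carmen has never joined, yet two uploads have already mapped her circle:
    print(shadow_profiles["Carmen"])  # {'Ashley', 'Blair'}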

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there is a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons, by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

How to stop haemorrhaging data on Facebook



Every time you open an app, click a link, like a post, read an article, hover over an ad, or connect to someone, you are generating data.

Belinda Barnet, Swinburne University of Technology

If you are one of 2.2 billion Facebook users worldwide, you have probably been alarmed by the recent coverage of the Cambridge Analytica scandal, a story that began when The Guardian revealed 50 million (now thought to be 87 million) user profiles had been retrieved and shared without the consent of users.

Though the #deletefacebook campaign has gained momentum on Twitter, it is simply not practical for most of us to delete our accounts. It is technically difficult to do, and given that one quarter of the human population is on the platform, there is an undeniable social cost for being absent.




Read more:
Why we should all cut the Facebook cord. Or should we?


It is also not possible to use or even to have a Facebook profile without giving up at least some data: every time you open the app, click a link, like a post, hover over an ad, or connect to someone, you are generating data. This particular type of data is not something you can control, because Facebook considers such data its property.

Every service has a price, and the price for being on Facebook is your data.

However, you can remain on Facebook (and other social media platforms like it) without haemorrhaging data. If you want to stay in touch with those old school friends – despite the fact you will probably never see them again – here’s what you can do, step by step. The following instructions are tailored to Facebook settings on mobile.

Your location

The first place to start is with the device you are holding in your hand. Facebook requests access to your GPS location by default, and unless you were reading the fine print when you installed the application (if you are that one person, please tell me where you find the time), it will currently have access.

This means that whenever you open the app it knows where you are, and unless you have changed your location sharing setting from “Always” to “Never” or “Only while using”, it can track your location when you’re not using the app as well.

To keep your daily movements to yourself, go into Settings on Apple iPhone or Android, go to Location Services, and turn off or select “Never” for Facebook.

While you’re there, check for other social media apps with location access (like Twitter and Instagram) and consider changing them to “Never”.

Remember that pictures from your phone are GPS tagged too, so if you intend to share them on Facebook, revoke access to GPS for your camera as well.

Your content

The next thing to do is to control who can see what you post, who can see private information like your email address and phone number, and then apply these settings in retrospect to everything you’ve already posted.

Facebook has a “Privacy Shortcuts” tab under Settings, but we are going to start in Account Settings > Privacy.

You control who sees what you post, and who sees the people and pages you follow, by limiting the audience here.

Change “Who can see your future posts” and “Who can see the people and pages you follow” to “Only Friends”.

In the same menu, if you scroll down, you will see a setting called “Do you want search engines outside of Facebook to link to your profile?” Select No.

After you have made these changes, scroll down and limit the audience for past posts. Apply the new setting to all past posts, even though Facebook will try to alarm you. “The only way to undo this is to change the audience of each post one at a time! Oh my Goodness! You’ll need to change 1,700 posts over ten years.” Ignore your fears and click Limit.




Read more:
It’s time for third-party data brokers to emerge from the shadows


Next go in to Privacy Shortcuts – this is on the navigation bar below Settings. Then select Privacy Checkup. Limit who can see your personal information (date of birth, email address, phone number, place of birth if you provided it) to “Only Me”.

Third party apps

Every time you use Facebook to “login” to a service or application you are granting both Facebook and the third-party service access to your data.

Facebook has recently pledged to investigate and change this as a result of the Cambridge Analytica scandal, but in the meantime it is best not to use Facebook to log in to third-party services. That includes Bingo Bash, unfortunately.

The third screen of Privacy Checkup shows you which apps have access to your data at present. Delete any that you don’t recognise or that are unnecessary.

In the final step we will be turning off “Facebook integration” altogether. This is optional. If you choose to do this, it will revoke permission for all previous apps, plugins, and websites that have access to your data. It will also prevent your friends from harvesting your data for their apps.

In this case you don’t need to delete individual apps as they will all disappear.

Turning off Facebook integration

If you want to be as secure as it is possible to be on Facebook, you can revoke third-party access to your content completely. This means turning off all apps, plugins and websites.

If you take this step Facebook won’t be able to receive information about your use of apps outside of Facebook and apps won’t be able to receive your Facebook data.

If you’re a business this is not a good idea as you will need it to advertise and to test apps. This is for personal pages.

It may make life a little more difficult for you in that your next purchase from Farfetch will require you to set up your own account rather than just harvest your profile. Your Klout score may drop because it can’t see Facebook and that might feel terrible.

Remember this setting only applies to the data you post and provide yourself. The signals you generate using Facebook (what you like, click on, read) will still belong to Facebook and will be used to tailor advertising.

To turn off Facebook integration, go into Settings, then Apps. Select Apps, websites and games.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


Facebook will warn you about all the Farmville updates you will miss and how you will have a hard time logging in to The Guardian without Facebook. Ignore this and select “Turn off”.

Well done. Your data is now as secure as it is possible to be on Facebook. Remember, though, that everything you do on the platform still generates data.

Belinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.

Why the business model of social media giants like Facebook is incompatible with human rights




Sarah Joseph, Monash University

Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission to spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.

Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.

Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.

The good

In some ways, social media has been a boon for human rights – most obviously for freedom of speech.

Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.

But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas must effectively be bigger and broader, and more diverse.

Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.

Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.




Read more:
#MeToo is not enough: it has yet to shift the power imbalances that would bring about gender equality


The bad and the ugly

But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.

The UN recently found that Facebook had been a major platform for spreading hatred against the Rohingya in Myanmar, which in turn led to ethnic cleansing and crimes against humanity.

Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.

YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.

The business model and human rights

Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.

Essentially, those models are based on the collection and use for marketing purposes of their users’ data. And the data they have is extraordinary in its profiling capacities, and in the consequent unprecedented knowledge base and potential power it grants to these private actors.

Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.

Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures do not seek to cap the number of a company’s customers, as opposed to its acquisitions.


Power through knowledge

In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.

So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows that button’s political impact, the types of people it is likely to motivate, the party favoured by its deployment or non-deployment, and at what times of day.

It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.




Read more:
Can Facebook influence an election result?


Microtargeting

A study has shown that a computer knows more about a person’s personality than their friends or flatmates from an analysis of 70 “likes”, and more than their family from 150 likes. From 300 likes it can outperform one’s spouse.

This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.

Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.

Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.

While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.

Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.

As with mass data collection, perhaps it may eventually be concluded that that reach is simply incompatible with democratic and human rights.

‘Fake news’

Finally, there is the issue of the spread of misinformation.

While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.

In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.

Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.

Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.

The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.

However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:

… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.

Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:

… false news spreads more than the truth because humans, not robots, are more likely to spread it.

The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.

Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.

Fake news disseminated by social media is argued to have played a role in electing Donald Trump to the presidency.

What next?

It is premature to assert the very collection of massive amounts of data is irreconcilable with the right to privacy (and even rights relating to democratic governance).

Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.

Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.

However, by the time such conclusions may be reached, it may be too late to do much about it. It may be an example where government regulation and international human rights law – and even business acumen and expertise – lags too far behind technological developments to appreciate their human rights dangers.

At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.

Sarah Joseph, Director, Castan Centre for Human Rights Law, Monash University

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



Many social media users have been shocked to learn the extent of their digital footprint.

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data depends not only on our own vigilance, but also on that of the people we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ emails when promised free pizza.

Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher, due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk: once the data is taken out of Facebook, it becomes even easier to copy and distribute.

This shift, from personal privacy to an interdependent privacy that relies on our friends, family and colleagues, is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of our friends’ data via back-door methods is more common than we might think. The back door opens when you press “accept” on a permissions request giving an app access to your contacts during installation.

WhatsApp might have your contact information even if you aren’t a registered user.
Screen Shot at 1pm on 26 March 2018

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, each of whom has 234 more, and so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
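
To get a sense of the scale, a rough back-of-the-envelope calculation helps (a minimal sketch in Python; the 270,000 consenting users and the 234-friend average are the figures cited above, while the overlap discount is purely an illustrative assumption, since real friend lists share many members):

```python
# Rough estimate of friend-network harvesting reach.
# Figures from the article: ~270,000 users granted permission,
# and the average user has ~234 Facebook friends.

consenting_users = 270_000  # people who installed the app and clicked "accept"
avg_friends = 234           # average number of Facebook friends per person

naive_reach = consenting_users * avg_friends
print(f"Naive reach: {naive_reach:,}")  # 63,180,000

# Friend lists overlap heavily, so the naive product overstates the number
# of unique people reached. An overlap factor of ~0.8 (hypothetical, chosen
# only for illustration) lands near the reported "up to 50 million".
overlap_factor = 0.8
estimated_unique = int(naive_reach * overlap_factor)
print(f"Estimated unique profiles: {estimated_unique:,}")  # 50,544,000
```

Even after generous deduplication, each consenting user effectively hands over a couple of hundred other people’s details.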

Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data daily without your friends knowing, and 70% of apps share it with third parties.




Read more:
7 in 10 smartphone apps share your data with third-party services


We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give away all the data on their phone, 95% said yes. But when they focused on just their contacts, this figure decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away a friend’s data on WhatsApp without their permission was wrong.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.

Australia should strengthen its privacy laws and remove exemptions for politicians


David Vaile, UNSW

As revelations continue to unfold about the misuse of personal data by Cambridge Analytica, many Australians are only just learning that Australian politicians have given themselves a free kick to bypass privacy laws.

Indeed, Australian data privacy laws are generally weak when compared with those in the United States, the United Kingdom and the European Union. They fall short both in their specific exemptions for politicians and in the fact that individuals cannot enforce the laws even where they do exist.




Read more:
Australia’s privacy laws gutted in court ruling on what is ‘personal information’


While Australia’s major political parties have denied using the services of Cambridge Analytica, they do engage in substantial data operations – including the Liberal Party’s use of the i360 app in the recent South Australian election. How well this microtargeting of voters works to sway political views is disputed, but the claims are credible enough to spur demand for these tools.


Greens leader Richard Di Natale told RN Breakfast this morning that political parties “shouldn’t be let off the hook”:

All political parties use databases to engage with voters, but they’re exempt from privacy laws so there’s no transparency about what anybody’s doing. And that’s why it’s really important that we go back, remove those exemptions, ensure that there’s some transparency, and allow people to decide whether they think it’s appropriate.

Why should politicians be exempt from privacy laws?

The exemption for politicians was introduced way back in the Privacy Amendment (Private Sector) Bill 2000. The Attorney-General at the time, Daryl Williams, justified the exemption on the basis that freedom of political communication was vital to Australia’s democratic process. He said the exemption was:

…designed to encourage that freedom and enhance the operation of the electoral and political process in Australia.

Malcolm Crompton, the then Privacy Commissioner, argued against the exemption, stating that political institutions:

…should follow the same practices and principles that are required in the wider community.

Other politicians from outside the two main parties, such as Senator Natasha Stott Despoja in 2006, have tried to remove the exemptions for similar reasons, but failed to gain support from the major parties.

What laws are politicians exempt from?

Privacy Act

The Privacy Act gives you control over the way your personal information is handled, including knowing why your personal information is being collected, how it will be used, and to whom it will be disclosed. It also allows you to make a complaint (but not take legal action) if you think your personal information has been mishandled.

“Registered political parties” are exempt from the operation of the Privacy Act 1988, and so are the political “acts and practices” of certain entities, including:

  • political representatives — MPs and local government councillors;
  • contractors and subcontractors of registered political parties and political representatives; and
  • volunteers for registered political parties.

This means that if a company like Cambridge Analytica were contracted to a party or MP in Australia, its activities may well be exempt.




Read more:
Is there such a thing as online privacy? 7 essential reads


Spam Act

Under the Spam Act 2003, organisations cannot email you advertisements without your request or consent. They must also include an unsubscribe notice at the end of a spam message, which allows you to opt out of unwanted repeat messaging. However, the Act says that it has no effect on “implied freedom of political communication”.

Do Not Call Register

Even if you have your number listed on the Do Not Call Register, a political party or candidate can authorise a call to you, at home or at work, if one purpose of the call is fundraising; the Act permits other uses as well.


How do Australian privacy laws fall short?

No right to sue

Citizens can sue for some version of a breach of privacy in the UK, the EU, the US, Canada and even New Zealand. But there is still no constitutional or legal right that an individual (or class) can enforce against an intrusion of privacy in Australia.

After exhaustive consultations in 2008 and 2014, the Australian Law Reform Commission (ALRC) recommended a modest and carefully limited statutory tort – a right to dispute a serious breach of privacy in court. However, both major parties effectively rejected the ALRC recommendation.

No ‘legal standing’ in the US

Legal standing refers to the right to be a party to legal proceedings. As the tech giants that are most adept at gathering and using user data – Facebook, Google, Apple, Amazon – are based in the US, Australians generally do not have legal standing to bring action against them if they suspect a privacy violation. EU citizens, by contrast, have the benefit of the Judicial Redress Act 2015 (US) for some potential misuses of cloud-hosted data.

Poor policing of consent agreements

Consent agreements – the terms and conditions you agree to when you sign up for a service such as Gmail or Messenger – waive rights that individuals might otherwise enjoy under privacy laws. In its response to the Cambridge Analytica debacle, Facebook claims that users consented to the use of their data.




Read more:
Consent and ethics in Facebook’s emotional manipulation study


But these broad user consent agreements are not policed strictly enough in Australia. Agreements that lack protective features amount to “bad consent”. By contrast, a “good consent” agreement should be simple, safe and precautionary by default: clear about its terms, enforceable by users, not subject to unilateral change, and revocable at any time.

The EU’s new General Data Protection Regulation, which comes into effect on May 25, is an example of how countries can protect their citizens’ data offshore.

Major parties don’t want change

Privacy Commissioner Tim Pilgrim said today in The Guardian that the political exemption should be reconsidered. In the past, independents and minor party representatives have objected to the exemption, as well as to the weakness of Australian privacy laws more generally. In 2001, the High Court left open the possibility of a right to sue for breach of privacy.




Read more:
Why big data may be having a big effect on how our politics plays out


But both Liberal and Labor are often in tacit agreement to do nothing substantial about privacy rights. They have not taken up the debates around the collapse of IT security, the increasing abuse of the “consent” model, the dangers of so-called “open data”, or the threats from artificial intelligence, Big Data and metadata retention.

One might speculate that this is because they share a vested interest in making use of voter data for the purpose of campaigning and governing. It’s now time for a new discussion about the rules around privacy and politics in Australia – one in which the privacy interests of individuals are front and centre.

David Vaile, Teacher of cyberspace law, UNSW

This article was originally published on The Conversation. Read the original article.