Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
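To make the mechanics concrete, here is a minimal sketch of that logic in Python. It is a toy model of the example above; the data structure and function are invented for illustration and are not Facebook’s actual implementation.

```python
# Toy model of shadow-profile accumulation -- illustrative only.
shadow_profiles = {}   # contact (name/email/phone) -> set of users who uploaded it
members = set()        # people who have actually joined

def join_and_upload(user, contacts):
    """A new user joins and uploads their address book."""
    members.add(user)
    # Anything already accumulated under this user's identifier is a
    # ready-made friend suggestion.
    suggestions = shadow_profiles.get(user, set())
    for contact in contacts:
        shadow_profiles.setdefault(contact, set()).add(user)
    return suggestions & members

print(join_and_upload("Ashley", {"Blair", "Carmen"}))   # set(): no one to suggest yet
print(join_and_upload("Blair", {"Ashley", "Carmen"}))   # {'Ashley'}: the shadow profile pays off
# Carmen has never joined, yet two uploads now describe her social circle:
print(shadow_profiles["Carmen"])                        # {'Ashley', 'Blair'}
```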

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there is a lot of data on Facebook, and what exactly is “yours”, as opposed to simply “data related to you”, isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised bluntly with Zuckerberg this week.

His exclamation was a strong, but fair, assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons, by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.


How to stop haemorrhaging data on Facebook



[Image: Every time you open an app, click a link, like a post, read an article, hover over an ad, or connect to someone, you are generating data. Shutterstock]

Belinda Barnet, Swinburne University of Technology

If you are one of 2.2 billion Facebook users worldwide, you have probably been alarmed by the recent coverage of the Cambridge Analytica scandal, a story that began when The Guardian revealed 50 million (now thought to be 87 million) user profiles had been retrieved and shared without the consent of users.

Though the #deletefacebook campaign has gained momentum on Twitter, it is simply not practical for most of us to delete our accounts. It is technically difficult to do, and given that one quarter of the human population is on the platform, there is an undeniable social cost for being absent.




Read more:
Why we should all cut the Facebook cord. Or should we?


It is also not possible to use or even to have a Facebook profile without giving up at least some data: every time you open the app, click a link, like a post, hover over an ad, or connect to someone, you are generating data. This particular type of data is not something you can control, because Facebook considers such data its property.

Every service has a price, and the price for being on Facebook is your data.

However, you can remain on Facebook (and other social media platforms like it) without haemorrhaging data. If you want to stay in touch with those old school friends – despite the fact you will probably never see them again – here’s what you can do, step by step. The following instructions are tailored to Facebook settings on mobile.

Your location

The first place to start is with the device you are holding in your hand.
Facebook requests access to your GPS location by default, and unless you were reading the fine print when you installed the application (if you are that one person, please tell me where you find the time), it will currently have access.

This means that whenever you open the app it knows where you are, and unless you have changed your location sharing setting from “Always” to “Never” or “Only while using”, it can track your location when you’re not using the app as well.

To keep your daily movements to yourself, go into Settings on Apple iPhone or Android, go to Location Services, and turn off or select “Never” for Facebook.

While you’re there, check for other social media apps with location access (like Twitter and Instagram) and consider changing them to “Never”.

Remember that pictures from your phone are GPS tagged too, so if you intend to share them on Facebook, revoke access to GPS for your camera as well.

Your content

The next thing to do is to control who can see what you post, who can see private information like your email address and phone number, and then apply these settings in retrospect to everything you’ve already posted.

Facebook has a “Privacy Shortcuts” tab under Settings, but we are going to start in Account Settings > Privacy.

You control who sees what you post, and who sees the people and pages you follow, by limiting the audience here.

Change “Who can see your future posts” and “Who can see the people and pages you follow” to “Only Friends”.

In the same menu, if you scroll down, you will see a setting called “Do you want search engines outside of Facebook to link to your profile?” Select No.

After you have made these changes, scroll down and limit the audience for past posts. Apply the new setting to all past posts, even though Facebook will try to alarm you. “The only way to undo this is to change the audience of each post one at a time! Oh my Goodness! You’ll need to change 1,700 posts over ten years.” Ignore your fears and click Limit.




Read more:
It’s time for third-party data brokers to emerge from the shadows


Next go into Privacy Shortcuts – this is on the navigation bar below Settings. Then select Privacy Checkup. Limit who can see your personal information (date of birth, email address, phone number, place of birth if you provided it) to “Only Me”.

Third party apps

Every time you use Facebook to “log in” to a service or application, you are granting both Facebook and the third-party service access to your data.

As a result of the Cambridge Analytica scandal, Facebook has recently pledged to investigate and change this, but in the meantime it is best not to use Facebook to log in to third-party services. That includes Bingo Bash, unfortunately.

The third screen of Privacy Checkup shows you which apps have access to your data at present. Delete any that you don’t recognise or that are unnecessary.

In the final step we will be turning off “Facebook integration” altogether. This is optional. If you choose to do this, it will revoke permission for all previous apps, plugins, and websites that have access to your data. It will also prevent your friends from harvesting your data for their apps.

In this case you don’t need to delete individual apps as they will all disappear.

Turning off Facebook integration

If you want to be as secure as it is possible to be on Facebook, you can revoke third-party access to your content completely. This means turning off all apps, plugins and websites.

If you take this step Facebook won’t be able to receive information about your use of apps outside of Facebook and apps won’t be able to receive your Facebook data.

If you’re a business, this is not a good idea, as you will need Facebook integration to advertise and to test apps. This step is for personal pages.

It may make life a little more difficult for you, in that your next purchase from Farfetch will require you to set up your own account rather than just harvesting your profile. Your Klout score may drop because it can’t see Facebook, and that might feel terrible.

Remember this setting only applies to the data you post and provide yourself. The signals you generate using Facebook (what you like, click on, read) will still belong to Facebook and will be used to tailor advertising.

To turn off Facebook integration, go into Settings, then Apps. Select Apps, websites and games.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


Facebook will warn you about all the Farmville updates you will miss and how you will have a hard time logging in to The Guardian without Facebook. Ignore this and select “Turn off”.

Well done. Your data is now as secure as it is possible to be on Facebook. Remember, though, that everything you do on the platform still generates data.

Belinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.

Why the business model of social media giants like Facebook is incompatible with human rights



[Image: Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. EPA/Peter DaSilva]

Sarah Joseph, Monash University

Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission to spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.

Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.

Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.

The good

In some ways, social media has been a boon for human rights – most obviously for freedom of speech.

Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.

But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas is, in effect, bigger, broader and more diverse.

Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.

Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.




Read more:
#MeToo is not enough: it has yet to shift the power imbalances that would bring about gender equality


The bad and the ugly

But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.

The UN recently found that Facebook had been a major platform for spreading hatred against the Rohingya in Myanmar, which in turn led to ethnic cleansing and crimes against humanity.

Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.

YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.

The business model and human rights

Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.

Essentially, those models are based on collecting users’ data and using it for marketing purposes. And the data they have is extraordinary in its profiling capacities, and in the consequent unprecedented knowledge base and potential power it grants to these private actors.

Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.

Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures can cap a company’s acquisitions, but not the number of its customers.

[Image: In late 2017, Facebook was reported as having more than 2.2 billion active users. EPA/Ritchie B. Tongo]

Power through knowledge

In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.

So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows that button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.

It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.




Read more:
Can Facebook influence an election result?


Microtargeting

A study has shown that, from an analysis of just 70 “likes”, a computer knows more about a person’s personality than their friends or flatmates do; from 150 likes, more than their family; and from 300 likes, it can outperform their spouse.

This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.
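The technique behind such findings is, at its core, a regression from a user-by-page matrix of likes to a personality score. Here is a minimal sketch on synthetic data; the variable names, sizes and model choice are illustrative assumptions, not the study’s actual pipeline.

```python
# Toy illustration of predicting a personality trait from "likes" alone.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_users, n_pages = 1_000, 500
likes = rng.integers(0, 2, size=(n_users, n_pages))        # binary user x page matrix
trait_loadings = rng.normal(size=n_pages)                  # hypothetical signal per page
openness = likes @ trait_loadings + rng.normal(scale=4.0, size=n_users)

model = Ridge().fit(likes[:800], openness[:800])           # learn the trait from likes
print(round(model.score(likes[800:], openness[800:]), 2))  # predictive fit on held-out users
```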

Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.

Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.

While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.

Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.

As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.

‘Fake news’

Finally, there is the issue of the spread of misinformation.

While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.

In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.

Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.

Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.

The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.

However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:

… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.

Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:

… false news spreads more than the truth because humans, not robots, are more likely to spread it.

The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.

Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.

[Image: Fake news disseminated by social media is argued to have played a role in electing Donald Trump to the presidency. EPA/Jim Lo Scalzo]

What next?

It is premature to assert that the very collection of massive amounts of data is irreconcilable with the right to privacy (and even rights relating to democratic governance).

Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.

Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.

However, by the time such conclusions can be reached, it may be too late to do much about it. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.

At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.

Sarah Joseph, Director, Castan Centre for Human Rights Law, Monash University

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



[Image: Many social media users have been shocked to learn the extent of their digital footprint. Shutterstock]

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ emails when promised free pizza.

Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk: once the data is taken out of Facebook, it becomes even easier to copy and distribute.

This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of data from our friends via back-door methods is more common than we might think. The back-door opens when you press “accept” to permissions to give access to your contacts when installing an app.

[Image: WhatsApp might have your contact information even if you aren’t a registered user. Screenshot, 26 March 2018]

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, each of them has 234, and so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
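The arithmetic of that reach is easy to check. A back-of-the-envelope sketch, using the 234-friend average cited above (real friend lists overlap heavily, which is why the reported harvest was smaller than the raw product):

```python
installers = 270_000   # people who actually used the app
avg_friends = 234      # average Facebook friend count cited above

raw_reach = installers * avg_friends
print(f"{raw_reach:,} friend profiles before de-duplication")   # 63,180,000
# Overlapping friend circles de-duplicate this figure down towards the
# roughly 50 million profiles Cambridge Analytica reportedly obtained.
```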

Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data continuously without your friends knowing, and 70% of apps share it with third parties.




Read more:
7 in 10 smartphone apps share your data with third-party services


We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give away all the data on their phone, 95% said yes. But when they focused on just contacts, this decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away your friend’s data on WhatsApp without their permission was wrong.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.

Australia should strengthen its privacy laws and remove exemptions for politicians


David Vaile, UNSW

As revelations continue to unfold about the misuse of personal data by Cambridge Analytica, many Australians are only just learning that Australian politicians have given themselves a free kick to bypass privacy laws.

Indeed, Australian data privacy laws are generally weak when compared with those in the United States, the United Kingdom and the European Union. They fall short in both specific exemptions for politicians, and because individuals cannot enforce laws even where they do exist.




Read more:
Australia’s privacy laws gutted in court ruling on what is ‘personal information’


While Australia’s major political parties have denied using the services of Cambridge Analytica, they do engage in substantial data operations – including the Liberal Party’s use of the i360 app in the recent South Australian election. How well this microtargeting of voters works to sway political views is disputed, but the claims are credible enough to spur demand for these tools.


Greens leader Richard Di Natale told RN Breakfast this morning that political parties “shouldn’t be let off the hook”:

All political parties use databases to engage with voters, but they’re exempt from privacy laws so there’s no transparency about what anybody’s doing. And that’s why it’s really important that we go back, remove those exemptions, ensure that there’s some transparency, and allow people to decide whether they think it’s appropriate.

Why should politicians be exempt from privacy laws?

The exemption for politicians was introduced way back in the Privacy Amendment (Private Sector) Bill 2000. The Attorney-General at the time, Daryl Williams, justified the exemption on the basis that freedom of political communication was vital to Australia’s democratic process. He said the exemption was:

…designed to encourage that freedom and enhance the operation of the electoral and political process in Australia.

Malcolm Crompton, the then Privacy Commissioner, argued against the exemption, stating that political institutions:

…should follow the same practices and principles that are required in the wider community.

Other politicians from outside the two main parties, such as Senator Natasha Stott Despoja in 2006, have tried to remove the exemptions for similar reasons, but failed to gain support from the major parties.

What laws are politicians exempt from?

Privacy Act

The Privacy Act gives you control over the way your personal information is handled, including knowing why your personal information is being collected, how it will be used, and to whom it will be disclosed. It also allows you to make a complaint (but not take legal action) if you think your personal information has been mishandled.

“Registered political parties” are exempt from the operation of the Privacy Act 1988, and so are the political “acts and practices” of certain entities, including:

  • political representatives — MPs and local government councillors;
  • contractors and subcontractors of registered political parties and political representatives; and
  • volunteers for registered political parties.

This means that if a company like Cambridge Analytica was contracted to a party or MP in Australia, their activities may well be exempt.




Read more:
Is there such a thing as online privacy? 7 essential reads


Spam Act

Under the Spam Act 2003, organisations cannot email you advertisements without your request or consent. They must also include an unsubscribe notice at the end of a spam message, which allows you to opt out of unwanted repeat messaging. However, the Act says that it has no effect on “implied freedom of political communication”.

Do Not Call Register

Even if you have your number listed on the Do Not Call Register, a political party or candidate can authorise a call to you, at home or at work, if one purpose is fundraising. It also permits other uses.


How do Australian privacy laws fall short?

No right to sue

Citizens can sue for some version of a breach of privacy in the UK, EU, US, Canada and even New Zealand. But there is still no constitutional or legal right that an individual (or class) can enforce over intrusion of privacy in Australia.

After exhaustive consultations in 2008 and 2014, the Australian Law Reform Commission (ALRC) recommended a modest and carefully limited statutory tort – a right to dispute a serious breach of privacy in court. However, both major parties effectively rejected the ALRC recommendation.

No ‘legal standing’ in the US

Legal standing refers to the right to be a party to legal proceedings. As the tech giants that are most adept at gathering and using user data – Facebook, Google, Apple, Amazon – are based in the US, Australians generally do not have legal standing to bring action against them if they suspect a privacy violation. EU citizens, by contrast, have the benefit of the Judicial Redress Act 2015 (US) for some potential misuses of cloud-hosted data.

Poor policing of consent agreements

Consent agreements – such as the terms and conditions you agree to when you sign up for a service, such as Gmail or Messenger – waive rights that individuals might otherwise enjoy under privacy laws. In its response to the Cambridge Analytica debacle, Facebook claims that users consented to the use of their data.




Read more:
Consent and ethics in Facebook’s emotional manipulation study


But these broad user consent agreements are not policed strictly enough in Australia. It’s known as “bad consent” when protective features are absent from these agreements. By contrast, a “good consent” agreement should be simple, safe and precautionary by default. That means it should be clear about its terms and give users the ability to enforce them, should not be variable, and should allow users to revoke consent at any time.

New laws introduced by the EU – the General Data Protection Regulation – which come into effect on May 25, are an example of how countries can protect their citizens’ data offshore.

Major parties don’t want change

Privacy Commissioner Tim Pilgrim said today in The Guardian that the political exemption should be reconsidered. In the past, independents and minor party representatives have objected to the exemption, as well as the weakness of Australian privacy laws more generally. In 2001, the High Court left open the possibility of a right to sue for breach of privacy.




Read more:
Why big data may be having a big effect on how our politics plays out


But both Liberal and Labor are often in tacit agreement to do nothing substantial about privacy rights. They have not taken up the debates around the collapse of IT security, nor the increase in abuse of the “consent” model, the dangers of so called “open data”, or the threats from artificial intelligence, Big Data, and metadata retention.

One might speculate that this is because they share a vested interest in making use of voter data for the purpose of campaigning and governing. It’s now time for a new discussion about the rules around privacy and politics in Australia – one in which the privacy interests of individuals are front and centre.

David Vaile, Teacher of cyberspace law, UNSW

This article was originally published on The Conversation. Read the original article.

You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem



[Image: Do you care if your data is being used by third parties? Shutterstock]

Siobhan Lyons, Macquarie University

We all seem worried about privacy. But it’s not only privacy itself we should be concerned about: our attitudes towards privacy are important too.

When we stop caring about our digital privacy, we witness surveillance apathy.

And it’s something that may be particularly significant for marginalised communities, who feel they hold no power to navigate or negotiate fair use of digital technologies.


Read more: Yes, your doctor might google you


In the wake of the 2013 NSA leaks by Edward Snowden, we are more aware of the machinations of online companies such as Facebook and Google. Yet research shows some of us are apathetic when it comes to online surveillance.

Privacy and surveillance

Attitudes to privacy and surveillance in Australia are complex.

According to a major 2017 privacy survey, around 70% of us are more concerned about privacy than we were five years ago.

[Image: Snapshot of Australian community attitudes to privacy 2017. Office of the Australian Information Commissioner]

And yet we still increasingly embrace online activities. A 2017 report on social media conducted by search marketing firm Sensis showed that almost 80% of internet users in Australia now have a social media profile, an increase of around ten points from 2016. The data also showed that Australians are on their accounts more frequently than ever before.

Also, most Australians appear not to be concerned about recently proposed implementation of facial recognition technology. Only around one in three (32% of 1,486) respondents to a Roy Morgan study expressed worries about having their faces available on a mass database.

A recent ANU poll revealed a similar sentiment, with recent data retention laws supported by two thirds of Australians.

So while we’re aware of the issues with surveillance, we aren’t necessarily doing anything about it, or we’re prepared to make compromises when we perceive our safety is at stake.

Across the world, attitudes to surveillance vary. Around half of Americans polled in 2013 found mass surveillance acceptable. France, Britain and the Philippines appeared more tolerant of mass surveillance compared to Sweden, Spain, and Germany, according to 2015 Amnesty International data.


Read more: Police want to read encrypted messages, but they already have significant power to access our data


Apathy and marginalisation

In 2015, philosopher Slavoj Žižek proclaimed that he did not care about surveillance (though he admitted that “perhaps here I preach arrogance”).

This position cannot be assumed by all members of society. Australian academic Kate Crawford argues the impact of data mining and surveillance is more significant for marginalised communities, including people of different races, genders and socioeconomic backgrounds. American academics Shoshana Magnet and Kelley Gates agree, writing:

[…] new surveillance technologies are regularly tested on marginalised communities that are unable to resist their intrusion.

A 2015 White House report found that big data can be used to perpetuate price discrimination among people of different backgrounds. It showed how data surveillance “could be used to hide more explicit forms of discrimination”.


Read more: Witch-hunts and surveillance: the hidden lives of queer people in the military


According to Ira Rubinstein, a senior fellow at New York University’s Information Law Institute, ignorance and cynicism are often behind surveillance apathy. Users are either ignorant of the complex infrastructure of surveillance, or they believe they are simply unable to avoid it.

As the White House report stated, consumers “have very little knowledge” about how data is used in conjunction with differential pricing.

So in contrast to the oppressive panopticon (a circular prison with a central watchtower) envisioned by philosopher Jeremy Bentham, we have what Siva Vaidhyanathan calls the “cryptopticon”. The cryptopticon is “not supposed to be intrusive or obvious. Its scale, its ubiquity, even its very existence, are supposed to go unnoticed”.

But Melanie Taylor, lead artist of the computer game Orwell (which puts players in the role of a surveillance agent), noted that many simply remain indifferent despite heightened awareness:

That’s the really scary part: that Snowden revealed all this, and maybe nobody really cared.

The Facebook trap

Surveillance apathy can be linked to people’s dependence on “the system”. As one of my media students pointed out, no matter how much awareness users have regarding their social media surveillance, invariably people will continue using these platforms. This is because they are convenient, practical, and “we are creatures of habit”.

[Image: Are you prepared to give up the red social notifications from Facebook? nevodka/Shutterstock]

As University of Melbourne scholar Suelette Dreyfus noted in a Four Corners report on Facebook:

Facebook has very cleverly figured out how to wrap itself around our lives. It’s the family photo album. It’s your messaging to your friends. It’s your daily diary. It’s your contact list.

This, along with the complex algorithms Facebook and Google use to collect data and produce “filter bubbles” or “you loops”, is another issue.

Protecting privacy

While some people are attempting to delete themselves from the network, others have come up with ways to avoid being tracked online.

The search engine DuckDuckGo and the privacy-focused Tor Browser allow users to browse without being tracked. Lightbeam, meanwhile, allows users to see how their information is being tracked by third-party companies. And MIT devised a system, called Immersion, to show people the metadata of their emails.

Surveillance apathy is more disconcerting than surveillance itself. Our attitudes about privacy will shape the structure of surveillance, so caring about it is paramount.

Siobhan Lyons, Scholar in Media and Cultural Studies, Macquarie University

This article was originally published on The Conversation. Read the original article.

What should Australian companies be doing right now to protect our privacy?


David Glance, University of Western Australia

Australians are increasingly concerned about how companies handle their personal data, especially online.

Faced with the increasing likelihood that this data will be compromised, either through cyber attacks or mishandling, companies are now being forced into a more comprehensive approach to collecting and protecting customers’ personal data. The question remains – what is the best approach to achieving this goal?

The Organisation for Economic Co-operation and Development (OECD) has proposed that, instead of talking about cybersecurity, companies, organisations and nations should view the problem from a digital security risk management perspective.

Cybersecurity often overlooks risks to data that have nothing to do with a “cyber” element, even if people could agree on a definition of that term. Edward Snowden, for example, used a colleague’s credentials to access the system and copied files to a USB drive.

Digital security risk management involves getting everyone in an organisation to see digital risk as part of the overall risks the organisation faces. The extent of risk any organisation is willing to take in any particular activity depends on that activity’s value. The aim is to manage the risk to a level that is acceptable to all parties.

What do you do about the weak link: humans?

It is worth remembering that in the case of the Equifax breach in which the personal details of up to 143 million customers in the US were leaked, it was largely human errors that were to blame.

Put simply, the person who was responsible for applying the patch (a piece of software designed to update a computer program or its supporting data, to fix or improve it) simply didn’t do their job. The software that was supposed to check whether the patch had been applied also failed to pick this up.

Until humans can be taken out of the equation entirely, it is almost impossible to remain entirely secure, or to avoid the inadvertent disclosure of personal and private information. Insider threat (as this type of risk is known) is difficult to combat, and companies have tried various approaches to managing it, including predictions based on psychological profiling of staff.

Automation and artificial intelligence may offer a way of achieving this in the future, by minimising the amount of sensitive information staff have direct access to and surfacing only the analysis or interpretation of that data.

A litany of recent breaches

If you needed convincing about the vulnerability of personal data on the Internet, you only need look at Gemalto’s data breach website or DataBreaches.net.

Breaches of private and personal information don’t recognise national boundaries: hacks of companies like Yahoo have affected 3 billion users, including millions of Australians.

Of course, Australian companies and organisations have also been involved with spectacular data breaches. Last year saw the Australian Red Cross expose 555,000 customer records online.

Of more concern, the Australian Department of Health published online what it believed were de-identified records of Medicare and pharmaceutical claims for more than 3 million patients. Researchers at the University of Melbourne discovered that the “encrypted” doctor provider numbers could be decrypted.

Are we looking at it in the wrong way?

Whilst there are practical steps companies can take to protect digital systems and data, there are more fundamental questions companies should be asking from a risk perspective. In order to navigate these questions, companies need to understand the data they collect and perhaps surprisingly, this is something most companies struggle to do.

The 13 Australian Privacy Principles from the Office of the Australian Information Commissioner outline the basics of how organisations and agencies should handle personal information. The practical application of these principles involves an approach called Privacy By Design for all applications and services companies offer.

Enter confidential computing

For CSIRO’s Data61, the answer to breaches of this sort is “confidential computing”. Data61 is tasked with data innovation and commercialisation of its research ideas. Confidential computing is the remit of Data61’s latest spin-off, N1 Analytics.

The main aspect of confidential computing involves keeping data encrypted at all times and using special techniques to be able to query data that is still encrypted and only decrypting the answer.

This can even allow others outside an organisation to query internal data directly or link to it with their own data without revealing the actual underlying data to either party.

Aside from allowing the use of sensitive data in research, this approach would allow a company holding financial information, say, to share that data with an insurance company without handing over the sensitive details themselves, while theoretically letting the insurance company carry out extensive data analytics.
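One standard building block for this kind of confidential computing is additively homomorphic encryption, which allows arithmetic on encrypted values so that only the final answer is ever decrypted. The sketch below uses the open-source python-paillier library to illustrate the general idea; it is a stand-in example, not N1 Analytics’ actual system.

```python
# Querying data that stays encrypted -- illustrative sketch only.
# Requires: pip install phe   (the python-paillier library)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts sensitive figures before sharing them.
salaries = [82_000, 57_500, 101_250]
encrypted = [public_key.encrypt(s) for s in salaries]

# An outside analyst can sum the ciphertexts without ever seeing
# an individual value...
encrypted_total = encrypted[0]
for ciphertext in encrypted[1:]:
    encrypted_total = encrypted_total + ciphertext

# ...and only the data owner, who holds the private key, can decrypt
# the aggregate answer.
print(private_key.decrypt(encrypted_total))   # 240750
```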

What companies should do now to protect your data

As a starting point, Australian companies should only collect the minimum of personal information that the business actually needs. This means not collecting extra information simply for marketing purposes at some later date, for example.

Companies then need to explain in simple, clear terms why information is being collected and what it is being used for, and get users to consent to giving that information.

Companies then need to secure the data that is collected. Security involves dedicated staff understanding the data a company keeps, and taking responsibility for its physical security, for controlling who has access, when they have access, and in what form they can access the data.

Lastly, they need to understand and enact a risk management approach to all digital data. This means making that approach part of the overall culture of the company, for every employee.

David Glance, Director of UWA Centre for Software Practice, University of Western Australia

This article was originally published on The Conversation. Read the original article.

The new data retention law seriously invades our privacy – and it’s time we took action



[Image: The government’s new law enabling the collection of metadata raises serious privacy concerns. Shutterstock]

Uri Gal, University of Sydney

Over the past few months, Australians’ civil rights have come under attack.

In April, the government’s data retention law came into effect. The law requires telecommunications companies to store customer metadata for at least two years. Metadata from our phone calls, text messages, emails, and internet activity is now tracked by the government and accessible by intelligence and law enforcement agencies.

Ironically, the law came into effect only a few weeks before Australia marked Privacy Awareness Week. Alarmingly, it is part of a broad trend of eroding civil rights in Western democracies, most noticeably evident by the passage of the Investigatory Powers Act in the UK, and the decision to repeal the Internet Privacy Law in the US.

Why does it matter?

Australia’s data retention law is one of the most comprehensive and intrusive data collection schemes in the western world. There are several reasons why Australians should challenge this law.

First, it undermines the democratic principles on which Australia was founded. It gravely harms individuals’ right to privacy, anonymity, and protection from having their personal information collected.

The Australian Privacy Principles define limited conditions under which the collection of personal information is permissible. It says personal information must be collected by “fair” means.

Despite a recent ruling by the Federal Court, which determined that our metadata does not constitute “personal information”, we should consider whether sweeping collection of all of Australian citizenry’s metadata is consistent with our right to privacy.

Second, metadata – data about data – can be highly revealing and provide a comprehensive depiction of our daily activities, communications and movements.

As detailed here, metadata is broad in scope and can tell more about us than the actual content of our communications. Therefore, claims that the data retention law does not seriously compromise our privacy should be considered as naïve, ill-informed, or dishonest.
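To see why, consider what a single retained record of one phone call might contain. The fields below are an illustrative guess at the categories the scheme covers (subscriber, source, destination, time, duration, location); the actual schemas used by carriers are not public.

```python
# Hypothetical retained metadata for one phone call. No "content"
# is stored, yet the record is highly revealing.
call_record = {
    "caller":        "+61 4xx xxx 001",
    "callee":        "+61 4xx xxx 002",
    "start_time":    "2017-06-13T02:14:00+10:00",   # a 2am call...
    "duration_secs": 1860,                          # ...lasting half an hour
    "caller_tower":  "cell 4301",                   # approximate caller location
    "callee_tower":  "cell 2207",                   # approximate callee location
}

# Two years of such records chart a person's relationships, routines and
# movements without storing a single word of conversation.
```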

Third, the law is justified by the need to protect Australians from terrorist acts. However, despite the government’s warnings, the risk of getting hurt in a terrorist attack in Australia has been historically, and is today, extremely low.

To date, the government has not presented any concrete empirical evidence to indicate that this risk has substantially changed. Democracies such as France, Germany and Israel – which face more severe terrorist threats than Australia – have not legalised mass data collection and instead rely on more targeted means to combat terrorism that do not jeopardise their democratic foundations.

Fourth, the data retention law is unlikely to achieve its stated objective and thwart serious terrorist activities. There are a range of widely-accessible technologies that can be used to circumvent the government’s surveillance regime. Some of them have previously been outlined by the now-prime minister, Malcolm Turnbull.

Therefore, in addition to damaging our civil rights, the law’s second lasting legacy is likely to be the approximately A$740 million it will add to the budget over the next ten years.

How can the law be challenged?

There are several things we can do to challenge the law. For example, there are technologies that we can start using today to increase our online privacy.

A full review of all available options is beyond the scope of this article, but here are three effective ones.

  1. Virtual private networks (VPNs) can hide browsing information from internet service providers. Aptly, April 13, the day the data retention law came into effect, has been declared the Australian “get a VPN day”.

  2. Tor – The Onion Router is free software that can help protect the anonymity of its users and conceal their internet activity from surveillance and analysis.

  3. Encrypted messaging applications – unprotected applications can be easily tracked. Consequently, applications such as Signal and Telegram that offer data encryption solutions have been growing in popularity.

Australian citizens have the privilege of electing their representatives. An effective way to oppose continuing state surveillance is to vote for candidates whose views truly reflect the democratic principles that underpin modern Australian society.

The Australian public needs to have an honest, critical and open debate about the law and its social and ethical ramifications. The absence of such a debate is dangerous. The institutional accumulation of power is a slippery slope – once gained, power is not easily given up by institutions.

And the political climate in Australia is ripe for further deterioration of civil rights, as evident in the government’s continued efforts to increase its regulation of the internet. Therefore, it is important to sound a clear and public voice that opposes such steps.

Finally, we need to call out our elected representatives when they make logically muddled claims. In a speech to parliament on Tuesday this week, Turnbull said:

The rights and protections of the vast overwhelming majority of Australians must outweigh the rights of those who will do them harm.

The data retention law is a distortion of the logic embedded in this statement, because it indiscriminately targets all Australians. We must not allow the pernicious intent of a handful of terrorists to be used as an excuse to harm the rights of all Australians and change the fabric of our society.

Uri Gal, Associate Professor in Business Information Systems, University of Sydney

This article was originally published on The Conversation. Read the original article.

How the law allows governments to publish your private information



[Image: Controversy has recently surrounded Centrelink and its handling of ‘overpayments’ and personal information. AAP/Dave Hunt]

Bruce Baer Arnold, University of Canberra

Recent controversy over the government’s use of information provided to Human Services and Veterans’ Affairs demonstrates there are major holes in Australia’s privacy regime that we need to fix.

Australians are accustomed to providing personal information to federal and state governments. We do it repeatedly throughout our lives. We do so to claim entitlements. We also do so as the basis of public administration – the contemporary “information state”.

In making that state possible we trust we will not be treated as a file number or an incident. We will not be doxed.

A key aspect of that trust, consistent with international rights law since the 1940s, is that our privacy will be protected. We assume officials – and private sector entities they use as their agents – will not be negligent in safeguarding personal information.

We also assume they will not share personal information with other agencies unless there is a substantive need for that sharing – for example, for national security or to prevent harm to an individual. And we expect they will not disclose personal information to the media or directly to the community at large as a way of silencing criticism or resolving disputes.

Australia has a sophisticated body of administrative law and ombudsmen. So, there is no need for public shaming of people who disagree with ministers, officials or databases.

The complicated and inconsistent body of privacy law highlighted by law reform commissions over the past two decades attempts to provide legal protection for personal information. It is overseen by under-resourced watchdogs that – amid threats of termination – are inclined to lick the ministerial hand that feeds them.

That law has major weaknesses, illustrated by the Centrelink controversy and the furore over the Veterans’ Affairs Legislation Amendment (Digital Readiness and Other Measures) Bill. The Commonwealth is able to ignore ostensible protections under the Privacy Act and other statutes. That is quite lawful. It has been so for many years, evident in the watchdog’s finding in L v Commonwealth Agency.

The watchdog’s guidelines state that where someone:

… makes adverse comments in the media about the way [a body] has treated them … it may be reasonable to expect that the entity may respond publicly to these comments in a way that reveals personal information specifically relevant to the issues that the individual has raised.

Put simply, if you complain publicly about a Commonwealth agency that holds personal information relating to you, that agency can lawfully give the information to the media or publish it directly. It can do so to correct what the minister deems to be “misinformation”.

There is no requirement that your complaint be malicious, fraudulent, vexatious or otherwise wrong. Disclosure is at the minister’s discretion, not subject to independent review. You have no legal remedies unless it could be proved that the official was malicious or corrupt.

We have seen such a disclosure. The Department of Human Services gave personal information to a journalist for publication about a person who disagreed with action by Centrelink to recover an alleged overpayment of an entitlement.

There has been much discussion in the media and the national parliament about the vigour with which the government is seeking to recover overpayments. Worryingly, it remains uncertain whether many of the alleged overpayments actually exist.

Ongoing changes to entitlements policy, the hollowing out of key agencies by the annual “efficiency dividend” (that is, ongoing cuts to budgets), and the problematic design and management of very large information technology projects mean overpayments might not have occurred.

Public disclosure of someone’s personal information thus looks very much like bullying, if not a deliberate effort to chill legitimate criticism and discussion of publicly funded programs.

The veterans’ affairs minister and the shadow minister have apparently not done their homework. The new Digital Readiness Bill – passed in the House of Representatives but not in the Senate – allows the minister to publicly disclose medical and other personal information about veterans. The rationale for that disclosure is to correct misinformation.

Understandably, veterans are unhappy. Legal practitioners and academics wonder about the scope for public shaming through release of department information that might not be correct.

The national Privacy Commissioner has been complacent. Labor’s veterans’ affairs spokeswoman, Amanda Rishworth, has belatedly expressed concern. The minister has simply referred to the establishment of an independent review by the Australian Government Solicitor and his department. It is difficult to understand why privacy wasn’t properly considered before the bill went into parliament.

There are too many loopholes in Australia’s privacy regime. Government agencies also need to toughen up in the face of criticism – legitimate or otherwise – and not respond by bullying people through publication of personal information.

Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article was originally published on The Conversation. Read the original article.