New data access bill shows we need to get serious about privacy with independent oversight of the law




MICK TSIKAS/AAP

Greg Austin, UNSW

The federal government today announced its proposed legislation to give law enforcement agencies yet more avenues to reach into our private lives through access to our personal communications and data. This never-ending succession of parliamentary bills defies logic, and fails to offer the necessary oversight and protections.

The trend has been led by Prime Minister Malcolm Turnbull, with help from an ever-growing number of security ministers and senior officials. Could it be that the proliferation of government security roles is a self-perpetuating industry, generating ever more government powers to encroach on privacy?

That definitely appears to be the case.

Striking the right balance between data access and privacy is a tricky problem, but the government’s current approach is doing little to solve it. We need better oversight of law enforcement access to our data to ensure it complies with privacy principles and actually results in convictions. That might require setting up an independent judicial review mechanism to report outcomes on an annual basis.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Where is the accountability?

The succession of data access legislation in the Australian parliament is fast becoming a Mad Hatter’s tea party – a characterisation justified by the increasingly unproductive public conversations between the government on one hand, and legal specialists and rights advocates on the other.

If the government says it needs new laws to tackle “terrorism and paedophilia”, the rule seems to be that the other side will be criticised for bringing up “privacy protection”. The federal opposition has abandoned any meaningful resistance to this parade of legislation.

Rights advocates have been backed into a corner, forced to repeat their concerns over each new piece of legislation, while neither they, nor the government, nor our Privacy Commissioner and all the other “commissioners”, are called to account on fundamental matters of principle.

Speaking of the commissioner class, Australia just got a new one last week: the Data Commissioner. Strangely, the impetus for this appointment came from the Productivity Commission.

The post has three purposes:

  1. to promote greater use of data,
  2. to drive economic benefits and innovation from greater use of data, and
  3. to build trust with the Australian community about the government’s use of data.

The problem with this logic is that the first two purposes are barely distinguishable, except for the seemingly catch-all character of the first: if data exists, it must be used.

Leaving aside that minor point, the notion that the government needs to build trust with the Australian community on data policy speaks for itself.

National Privacy Principles fall short

There is near universal agreement that the government is managing this issue badly, from the census data management issue to the “My Health Record” debacle. The growing commissioner class has not been much help.

Australia does have personal data protection principles, you may be surprised to learn. They are called “Privacy Principles”. You may be even more surprised to learn that the rights offered in these principles exist only up to the point where any enforcement arm of government wants the data.




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


So it seems that Australians have to rely on the leadership of the Productivity Commission (for economic policy) to guarantee our rights in cyber space, at least when it comes to our personal data.

Better oversight is required

There is another approach to reconciling citizens’ interests in privacy protection with legitimate and important enforcement needs against terrorists and paedophiles: that is judicial review.

The government argues, unconvincingly according to police sources, that the existing process adequately protects citizens by requiring law enforcement to obtain court-ordered warrants before accessing information. The record in other countries suggests otherwise: according to official US data, judges almost always wave through applications from enforcement authorities.

There is a second level of judicial review open to the government. This is to set up an independent judicial review mechanism that is obliged to annually review all instances of government access to personal data under warrant, and to report on the virtues or shortcomings of that access against enforcement outcomes and privacy principles.

There are two essential features of this proposal. First, the reviewing officer is a judge and not a public servant (the “commissioner class”). Second, the scope of the function is review of the daily operation of the intrusive laws, not just the post-facto examination of notorious cases of data breaches.

It would take a lengthy academic volume to make the case for judicial review of this kind. But it can be defended simply on economic grounds: such a review process would shine light on the efficiency of police investigations.

According to data released by the UK government, the overwhelming share of arrests for terrorist offences in the UK (many based on court-approved warrants for access to private data) do not result in convictions. There were 37 convictions out of 441 arrests for terrorist-related offences in the 12 months up to March 2018.
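The arithmetic behind that figure is worth making explicit. A quick sketch, using only the numbers quoted above, shows how low the conviction rate actually is:

```python
# Conviction rate for UK terrorist-related arrests in the 12 months to
# March 2018, using the figures quoted above (illustrative arithmetic only).
arrests = 441
convictions = 37

conviction_rate = convictions / arrests
print(f"{conviction_rate:.1%} of arrests resulted in a conviction")  # roughly 8.4%
```

In other words, more than nine in ten of those court-approved intrusions did not end in a conviction, which is precisely the kind of outcome an annual judicial review would surface.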




Read more:
Explainer: what is differential privacy and how can it protect your data?


The Turnbull government deserves credit for its recognition of the values of legal review. Its continuing commitment to posts such as the National Security Legislation Monitor – and the appointment of a high-profile barrister to such a post – is evidence of that.

But somewhere along the way, the administration of data privacy is falling foul of a growing bureaucratic mess.

The only way to bring order to the chaos is through robust accountability; and the only people with the authority or legitimacy in our political system to do that are probably judges who are independent of the government.

Greg Austin, Professor UNSW Canberra Cyber, UNSW

This article was originally published on The Conversation. Read the original article.


What could a My Health Record data breach look like?



Health information is an attractive target for offenders.
Tammy54/Shutterstock

Cassandra Cross, Queensland University of Technology

Last week marked the start of a three-month period in which Australians can opt out of the My Health Record scheme before having an automatically generated electronic health record.

Some Australians have already opted out of the program, including Liberal MP Tim Wilson and former Queensland LNP premier Campbell Newman, who argue it should be an opt-in scheme.

But much of the concern about My Health Record centres on privacy. So what is driving these concerns, and what might a My Health Record data breach look like?

Data breaches

Data breaches exposing individuals’ private information are becoming increasingly common and can include demographic details (name, address, birthdate), financial information (credit card details, pin numbers) and other details such as email addresses, usernames and passwords.

Health information is also an attractive target for offenders. They can use this to perpetrate a wide variety of offences, including identity fraud, identity theft, blackmail and extortion.




Read more:
Another day, another data breach – what to do when it happens to you


Last week hackers stole the health records of 1.5 million Singaporeans, including Prime Minister Lee Hsien Loong, who may have been targeted for sensitive medical information.

Meanwhile in Canada, hackers reportedly stole the medical histories of 80,000 patients from a care home and held them to ransom.

Australia is not immune. Last year Australians’ Medicare details were advertised for sale on the dark net by a vendor who had sold the records of at least 75 people.

Earlier this year, Family Planning NSW experienced a breach of its booking system, which exposed client data of those who had contacted the organisation within the past two and a half years.

Further, in the first report since the introduction of mandatory data breach reporting, the Privacy Commissioner revealed that of the 63 notifications received in the first quarter, 15 were from health service providers. This makes health the leading industry for reported breaches.

Human error

It’s important to note that not all data breaches are perpetrated from the outside or are malicious in nature. Human error and negligence also pose a threat to personal information.

The federal Department of Health, for instance, published a supposedly “de-identified” data set relating to details from the Medicare Benefits Scheme and the Pharmaceutical Benefits Scheme of 2.5 million Australians. This was done for research purposes.

But researchers were able to re-identify the details of individuals using publicly available information. In a resulting investigation, the Privacy Commissioner concluded that the Privacy Act had been breached three times.

The latest Data Breach Investigations Report from US telecommunications company Verizon notes that health care is the only sector where the insider threat is greater than the external one, with human error a large contributor.

There are promises of strong security surrounding My Health Records but, in reality, it’s a matter of when, not if, a data breach of some sort occurs.

Human error is one of the biggest threats.
Shutterstock

Privacy controls

My Health Record allows users to set the level of access they’re comfortable with across their record. This can target specific health-care providers or relate to specific documents.

But the onus of this rests heavily on the individual. This requires a high level of computer and health literacy that many Australians don’t have. The privacy control process is therefore likely to be overwhelming and ineffective for many people.




Read more:
My Health Record: the case for opting out


With the default option set to “general access”, any organisation involved in the person’s care can access the information.

Regardless of privacy controls, other agencies can also access information. Section 70 of the My Health Records Act 2012 states that details can be disclosed to law enforcement for a variety of reasons including:

(a) the prevention, detection, investigation, prosecution or punishment of criminal offences.

While no applications have been received to date, it is reasonable to expect this may occur in the future.

There are also concerns about sharing data with health insurance agencies and other third parties. While not currently authorised, there is intense interest from companies that can see the value in this health data.

Further, My Health Record data can be used for research, policy and planning. Individuals must opt out of this separately, through the privacy settings, if they don’t want their data to be part of this.

What should you do?

Health data is some of the most personal and sensitive information we have and includes details about illnesses, medications, tests, procedures and diagnoses. It may contain information about our HIV status, mental health profile, sexual activity and drug use.

These areas can attract a lot of stigma so keeping this information private is paramount. Disclosure may not just impact the person’s health and well-being, it may also affect their relationships, their employment and other facets of their life.

Importantly, these details can’t be reset or reissued. Unlike passwords and credit card details, they are static. Once exposed, it’s impossible to “unsee” or “unknow” what has been compromised.

Everyone should make their own informed decision about whether to stay in My Health Record or opt out. Ultimately, it’s up to individuals to decide what level of risk they’re comfortable with, and the value of their own health information, and proceed on that basis.




Read more:
My Health Record: the case for opting in


Cassandra Cross, Senior Lecturer in Criminology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

New data tool scores Australia and other countries on their human rights performance



Despite the UN’s Universal Declaration of Human Rights, it remains difficult to monitor governments’ performance because there are no comprehensive human rights measures.
from http://www.shutterstock.com, CC BY-ND

K. Chad Clay, University of Georgia

This year, the Universal Declaration of Human Rights will mark its 70th anniversary, but despite progress in some areas, it remains difficult to measure or compare governments’ performance. We have yet to develop comprehensive human rights measures that are accepted by researchers, policymakers and advocates alike.

With this in mind, my colleagues and I have started the Human Rights Measurement Initiative (HRMI), the first global project to develop a comprehensive suite of metrics covering international human rights.

We have now released our beta dataset and data visualisation tools, publishing 12 metrics that cover five economic and social rights and seven civil and political rights.

Lack of human rights data

People often assume the UN already produces comprehensive data on nations’ human rights performance, but it does not, and likely never will. The members of the UN are governments, and governments are the very actors that are obligated by international human rights law. It would be naïve to hope for governments to effectively monitor and measure their own performance without political bias. There has to be a role for non-state measurement.




Read more:
Australia’s Human Rights Council election comes with a challenge to improve its domestic record


We hope that the data and visualisations provided by HRMI will empower practitioners, advocates, researchers, journalists and others to speak clearly about human rights outcomes worldwide and hold governments accountable when they fail to meet their obligations under international law.

These are the 12 human rights measured by the Human Rights Measurement Initiative (HRMI) project during its pilot stage. The UN’s Universal Declaration of Human Rights defines 30 human rights.
Human Rights Measurement Initiative, CC BY

The HRMI pilot

At HRMI, alongside our existing methodology for economic and social rights, we are developing a new way of measuring civil and political human rights. In our pilot, we sent an expert survey directly to human rights practitioners who are actively monitoring each country’s human rights situation.

That survey asked respondents about their country’s performance on the rights to assembly and association, opinion and expression, political participation, freedom from torture, freedom from disappearance, freedom from execution, and freedom from arbitrary or political arrest and imprisonment.

Based on those survey responses, we develop data on the overall level of respect for each right. The scores are calculated using a statistical method that makes responses comparable across experts and countries, and each is reported with an uncertainty band to be transparent about how confident we are in a country’s placement. We also provide information on the groups our respondents believed were especially at risk of each type of human rights violation.

Human rights in Australia

One way to visualise data on our website is to look at a country’s performance across all 12 human rights for which we have released data at this time. For example, the graph below shows Australia’s performance across all HRMI metrics.

Human rights performance in Australia. Data necessary to calculate a metric for the right to housing at a high-income OECD assessment standard is currently unavailable for Australia.
CC BY

As shown here, Australia performs quite well on some indicators, but quite poorly on others. Looking at civil and political rights (in blue), Australia demonstrates high respect for the right to be free from execution, but does much worse on the rights to be free from torture and arbitrary arrest.

Our respondents often attributed this poor performance on torture and imprisonment to the treatment of refugees, immigrants and asylum seekers, as well as Indigenous peoples, by the Australian government.

Looking across the economic and social rights (in green), Australia shows a range of performance, doing quite well on the right to food, but performing far worse on the right to work.




Read more:
Ten things Australia can do to be a human rights hero


Freedom from torture across countries

Another way to visualise our data is to look at respect for a single right across several countries. The graph below shows, for example, overall government respect for the right to be free from torture and ill treatment in all 13 of HRMI’s pilot countries.

Government respect for the right to be free from torture, January to June 2017.
Human Rights Measurement Initiative (HRMI)

Here, the middle of each blue bar (marked by the small white lines) represents the average estimated level of respect for freedom from torture, while the length of the blue bars demonstrates our certainty in our estimates. For instance, we are much more certain regarding Mexico’s (MEX) low score than Brazil’s (BRA) higher score. Due to this uncertainty and the resulting overlap between the bars, there is only about a 92% chance that Brazil’s score is better than Mexico’s.
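HRMI’s exact computation isn’t reproduced here, but the idea behind an overlap probability like that 92% figure can be sketched by treating each country’s score as an uncertain estimate and sampling. The means and spreads below are invented for illustration; they are not HRMI’s real figures:

```python
import random

random.seed(42)

# Hypothetical mean scores and uncertainty spreads on a 0-10 scale.
# These numbers are made up for illustration, NOT HRMI's actual estimates.
brazil_mean, brazil_sd = 4.1, 0.8
mexico_mean, mexico_sd = 2.5, 0.8

N = 100_000  # Monte Carlo draws
brazil_higher = sum(
    random.gauss(brazil_mean, brazil_sd) > random.gauss(mexico_mean, mexico_sd)
    for _ in range(N)
)
print(f"P(Brazil's score exceeds Mexico's) is roughly {brazil_higher / N:.0%}")
```

The point of the sketch: even when one country’s average clearly sits above another’s, overlapping uncertainty bands mean the comparison is probabilistic, not certain.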

In addition to being able to say that torture is probably more prevalent in Mexico than in Brazil, and how certain we are in that comparison, we can also compare the groups of people that our respondents said were at greatest risk of torture. This information is summarised in the two word clouds below; larger words indicate that that group was selected by more survey respondents as being at risk.

These word clouds show the attributes that place a person at risk of torture in Brazil (left) and Mexico (right), January to June 2017.
Human Rights Measurement Initiative (HRMI), CC BY

There are both similarities and differences between the groups that were at highest risk in Brazil and Mexico. Based on the survey responses our human rights experts in Brazil gave us, we know that black people, those who live in favelas or quilombolas, those who live in rural or remote areas, landless rural workers, and prison inmates are largely the groups referred to by the terms “race,” “low social or economic status,” or “detainees or suspected criminals”.

On the other hand, in Mexico, imprisoned women and those suspected of involvement with organised crime are the detainees or suspected criminals that our respondents stated were at high risk of torture. Migrants, refugees and asylum seekers travelling through Mexico on the way to the United States are also at risk.

There is much more to be learned from the visualisations and data on our website. After you have had the opportunity to explore, we would love to hear your feedback here about any aspect of our work so far. We are just getting started, and we thrive on collaboration with the wider human rights community.

K. Chad Clay, Assistant Professor of International Affairs, University of Georgia

This article was originally published on The Conversation. Read the original article.

How to stop haemorrhaging data on Facebook



Every time you open an app, click a link, like a post, read an article, hover over an ad, or connect to someone, you are generating data.
Shutterstock

Belinda Barnet, Swinburne University of Technology

If you are one of 2.2 billion Facebook users worldwide, you have probably been alarmed by the recent coverage of the Cambridge Analytica scandal, a story that began when The Guardian revealed 50 million (now thought to be 87 million) user profiles had been retrieved and shared without the consent of users.

Though the #deletefacebook campaign has gained momentum on Twitter, it is simply not practical for most of us to delete our accounts. Deletion is technically difficult, and given that a quarter of the human population is on the platform, there is an undeniable social cost to being absent.




Read more:
Why we should all cut the Facebook cord. Or should we?


It is also not possible to use or even to have a Facebook profile without giving up at least some data: every time you open the app, click a link, like a post, hover over an ad, or connect to someone, you are generating data. This particular type of data is not something you can control, because Facebook considers such data its property.

Every service has a price, and the price for being on Facebook is your data.

However, you can remain on Facebook (and other social media platforms like it) without haemorrhaging data. If you want to stay in touch with those old school friends, despite the fact you will probably never see them again, here’s what you can do, step by step. The following instructions are tailored to Facebook settings on mobile.

Your location

The first place to start is with the device you are holding in your hand. Facebook requests access to your GPS location by default, and unless you were reading the fine print when you installed the application (if you are that one person, please tell me where you find the time), it will currently have access.

This means that whenever you open the app it knows where you are, and unless you have changed your location sharing setting from “Always” to “Never” or “Only while using”, it can track your location when you’re not using the app as well.

To keep your daily movements to yourself, go into Settings on Apple iPhone or Android, go to Location Services, and turn off or select “Never” for Facebook.

While you’re there, check for other social media apps with location access (like Twitter and Instagram) and consider changing them to “Never”.

Remember that pictures from your phone are GPS tagged too, so if you intend to share them on Facebook, revoke access to GPS for your camera as well.

Your content

The next thing to do is control who can see what you post and who can see private information like your email address and phone number, and then apply these settings retrospectively to everything you’ve already posted.

Facebook has a “Privacy Shortcuts” tab under Settings, but we are going to start in Account Settings > Privacy.

You control who sees what you post, and who sees the people and pages you follow, by limiting the audience here.

Change “Who can see your future posts” and “Who can see the people and pages you follow” to “Only Friends”.

In the same menu, if you scroll down, you will see a setting called “Do you want search engines outside of Facebook to link to your profile?” Select No.

After you have made these changes, scroll down and limit the audience for past posts. Apply the new setting to all past posts, even though Facebook will try to alarm you. “The only way to undo this is to change the audience of each post one at a time! Oh my Goodness! You’ll need to change 1,700 posts over ten years.” Ignore your fears and click Limit.




Read more:
It’s time for third-party data brokers to emerge from the shadows


Next go in to Privacy Shortcuts – this is on the navigation bar below Settings. Then select Privacy Checkup. Limit who can see your personal information (date of birth, email address, phone number, place of birth if you provided it) to “Only Me”.

Third party apps

Every time you use Facebook to “login” to a service or application you are granting both Facebook and the third-party service access to your data.

Facebook has recently pledged, as a result of the Cambridge Analytica scandal, to investigate and change this, but in the meantime it is best not to use Facebook to log in to third-party services. That includes Bingo Bash, unfortunately.

The third screen of Privacy Checkup shows you which apps have access to your data at present. Delete any that you don’t recognise or that are unnecessary.

In the final step we will be turning off “Facebook integration” altogether. This is optional. If you choose to do this, it will revoke permission for all previous apps, plugins, and websites that have access to your data. It will also prevent your friends from harvesting your data for their apps.

In this case you don’t need to delete individual apps as they will all disappear.

Turning off Facebook integration

If you want to be as secure as it is possible to be on Facebook, you can revoke third-party access to your content completely. This means turning off all apps, plugins and websites.

If you take this step Facebook won’t be able to receive information about your use of apps outside of Facebook and apps won’t be able to receive your Facebook data.

If you run a business page this is not a good idea, as you will need integration to advertise and to test apps. This step is for personal pages.

It may make life a little more difficult: your next purchase from Farfetch will require you to set up your own account rather than simply harvesting your Facebook profile, and your Klout score may drop because it can no longer see Facebook. That might feel terrible.

Remember this setting only applies to the data you post and provide yourself. The signals you generate using Facebook (what you like, click on, read) will still belong to Facebook and will be used to tailor advertising.

To turn off Facebook integration, go into Settings, then Apps. Select Apps, websites and games.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


Facebook will warn you about all the Farmville updates you will miss and how you will have a hard time logging in to The Guardian without Facebook. Ignore this and select “Turn off”.

Well done. Your data is now as secure as it is possible to be on Facebook. Remember, though, that everything you do on the platform still generates data.

Belinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.

It’s time for third-party data brokers to emerge from the shadows



Personal data has been dubbed the “new oil”, and data brokers are very efficient miners.
Emanuele Toscano/Flickr, CC BY-NC-ND

Sacha Molitorisz, University of Technology Sydney

Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.

Graham Mudd, Facebook’s product marketing director, said in a statement:

We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.

Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.

The invisible industry worth billions

In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.

The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.




Read more:
Facebook data harvesting: what you need to know


These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.

Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.
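To make “data segments” concrete, here is a hypothetical sketch of how a broker record built from inferred interests might be structured. The field names are invented for illustration, and only the example categories come from the FTC report quoted above:

```python
from dataclasses import dataclass, field

@dataclass
class ConsumerProfile:
    """A hypothetical broker record; per the FTC, a real broker may
    attach around 3,000 inferred segments to a single consumer."""
    consumer_id: str
    demographics: dict = field(default_factory=dict)
    segments: set = field(default_factory=set)  # inferred interest categories

profile = ConsumerProfile(
    consumer_id="consumer-0001",
    demographics={"age_band": "35-44", "home_owner": True},
    segments={"dog owner", "winter activity enthusiast", "expectant parent"},
)
print(len(profile.segments), "segments attached to", profile.consumer_id)
```

The sensitive categories are just more entries in that set, which is what makes the aggregation so consequential: nothing in the data structure distinguishes “dog owner” from “diabetes interest”.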

In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.

A growing concern

Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.

The recent Cambridge Analytica furore stemmed from third parties. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.




Read more:
Your online privacy depends as much on your friends’ data habits as your own


Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.

As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.

With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.

More transparency and more respect for users

To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.

Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as morally justifying.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.

In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.

Time to regulate

Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.

Historically, Facebook has made a point of dedicating itself to openness, but Facebook itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.

Facebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



Many social media users have been shocked to learn the extent of their digital footprint.
Shutterstock

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ emails when promised free pizza.

Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher, due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk in that once the data is taken out of Facebook it becomes even easier to copy and distribute.

This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of data from our friends via back-door methods is more common than we might think. The back-door opens when you press “accept” to permissions to give access to your contacts when installing an app.

WhatsApp might have your contact information even if you aren’t a registered user.

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, and each of them has 234 and, so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
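As a back-of-envelope sketch, the arithmetic behind that reach looks like the following. The figures come from the article itself; the note about overlap is an illustrative assumption, since friend lists are not disjoint in practice.

```python
# Rough reach estimate: each app installer exposes their own friend list.
installers = 270_000      # users who granted the app permission
avg_friends = 234         # average Australian Facebook friend count

naive_reach = installers * avg_friends
print(f"{naive_reach:,}")  # 63,180,000
```

Because many installers share friends, the de-duplicated figure is lower, which is consistent with the reported estimate of up to 50 million affected users.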

Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data daily without your friends knowing, and 70% of apps share it with third parties.




Read more:
7 in 10 smartphone apps share your data with third-party services


We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give all the data on their phone, 95% said yes. But when they focused on just contacts, this decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away your friend’s data on WhatsApp without their permission was wrong.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.

The US election hack, fake news, data theft: the cyber security lessons from 2017



Cyber attacks have the potential to cause economic disruption, coerce changes in political behaviour and subvert systems of governance.
from http://www.shutterstock.com, CC BY-ND

Joe Burton, University of Waikato

Cyber security played a prominent role in international affairs in 2017, with impacts on peace and security.

Increased international collaboration and new laws that capture the complexity of communications technology could be among solutions to cyber security issues in 2018.


Read more: Artificial intelligence cyber attacks are coming – but what does that mean?


The US election hack and the end of cyber scepticism

The big story of the past year has been the subversion of the US election process and the ongoing controversies surrounding the Trump administration. The investigations into the scandal are unresolved, but it is important to recognise that the US election hack has dispelled any lingering scepticism about the impact of cyber attacks on national and international security.

From the self-confessed “mistake” Secretary Clinton made in setting up a private email server, to the hacking of the Democratic National Committee’s servers and the leaking of Democratic campaign chair John Podesta’s emails to WikiLeaks, the 2016 presidential election was in many ways defined by cyber security issues.

Many analysts had been debating the likelihood of a “digital Pearl Harbour”, an attack producing devastating economic disruption or physical effects. But they missed the more subtle and covert political scope of cyber attacks to coerce changes in political behaviour and subvert systems of governance. Enhancing the security and integrity of democratic systems and electoral processes will surely be on the agenda in 2018 in the Asia Pacific and elsewhere.

Anti-social media

The growing impact of social media and the connection with cyber security has been another big story in 2017. Social media was meant to be a great liberator, to democratise, and to bring new transparency to politics and societies. In 2017, it has become a platform for fake news, misinformation and propaganda.

Social media sites clearly played a role in displacing authoritarian governments during the Arab Spring uprisings. Few expected they would be used by authoritarian governments in an incredibly effective way to sow and exploit divisions in democratic countries. The debate we need to have in 2018 is how we can deter the manipulation of social media, prevent the spread of fake news and encourage the likes of Facebook and Twitter to monitor and police their own networks.

If we don’t trust what we see on these sites, they won’t be commercially successful, and they won’t serve as platforms to enhance international peace and security. Social media sites must not become co-opted or corrupted. Facebook should not be allowed to become Fakebook.

Holding data to ransom

The spread of the Wannacry virus was the third big cyber security story of 2017. Wannacry locked down computers and demanded a ransom (in bitcoin) for the electronic key that would release the data. The virus spread in a truly global attack to an estimated 300,000 computers in 150 countries. It led to losses in the region of four billion dollars – a small fraction of the global cyber crime market, which is projected to grow to $6 trillion by 2021. In the Asia Pacific region, cyber crime is growing by 45% each year.


Read more: Cyberspace aggression adds to North Korea’s threat to global security


Wannacry was an important event because it pointed not only to the growth in cyber crime but also to the dangers inherent in the development and proliferation of offensive cyber security capabilities. The exploit for Windows XP systems that was used to spread the virus had been stockpiled by the US National Security Agency (NSA). It ended up being leaked on the internet and then used to generate ransom revenue.

A fundamental challenge in 2018 is to constrain the use of offensive cyber capabilities and to rein in the growth of the cyber-crime market through enhanced cooperation. This will be no small task, but there have been some positive developments.

According to US network security firm FireEye, the recent US-China agreement on commercial cyber espionage has led to an estimated 90% reduction in data breaches in the US emanating from China. Cyber cooperation is possible and can lead to bilateral and global goods.

Death of cyber norms?

The final big development, or rather lack of development, has been at the UN. The Group of Governmental Experts (GGE) process, established in 2004 to strengthen the security of global information and telecommunications systems, failed to reach a consensus on its latest report on the status of international laws and norms in cyberspace. The main problem has been that there is no definite agreement on the applicability of existing international law to cyber security. This includes issues such as when states might be held responsible for cyber attacks emanating from their territory, or their right to use countermeasures in cyber self-defence.

Some analysts have proclaimed this to be “the end of cyber norms”. This betrays a pessimism about UN level governance of the internet that is deeply steeped in overly state-centric views of security and a reluctance to cede any sovereignty to international organisations.

It is true that norms won’t be built from the top down. But the UN does and should have an important role to play in cyber security as we move into 2018, not least because of its universality and global reach.

The NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) in Tallinn, Estonia recently launched the Tallinn Manual 2.0, which examines the applicability of international law to cyber attacks that fall below the use of force and occur outside of armed conflict.

These commendable efforts could move forward hand in hand with efforts to build consensus on new laws that more accurately capture the complexity of new information and communications technology. In February 2017, Brad Smith, the head of Microsoft, proposed a digital Geneva Convention that would outlaw cyber attacks on civilian infrastructure.

In all this we must recognise that cyber security is not a binary process. It is not about “ones and zeros”, but rather about a complex spectrum of activity that needs multi-level, multi-stakeholder responses that include international organisations. This is a cyber reality that we should all bear in mind when we try to find solutions to cyber security issues in 2018.

Joe Burton, Senior Lecturer, Institute for Security and Crime Science, University of Waikato

This article was originally published on The Conversation. Read the original article.

Australia relies on data from Earth observation satellites, but our access is high risk



The NASA satellite Landsat-8 collects frequent global multispectral imagery of the Earth’s surface.
NASA

Stuart Phinn, The University of Queensland

This article is part of a series Australia’s place in space, where we’ll explore the strengths and weaknesses, along with the past, present and the future of Australia’s space presence and activities.


Rockets, astronomy and humans on Mars: there’s a lot of excited talk about space and what new discoveries might come if Australia’s federal government commits to expanding Australia’s space industry.

But one space industry is often left out of the conversation: Earth observation (EO).


Read more: Why it’s time for Australia to launch its own space agency


EO refers to the collection of information about Earth, and delivery of useful data for human activities. For Australia, the minimum economic impact of EO from space-borne sensors alone is approximately A$5.3 billion each year.

And yet the default position of our government seems to be that the provision of EO resources will come from other countries’ investments, or commercial partners.

This means the extensive Commonwealth-state-local government and industry reliance on access to EO services remains a high risk.

What is EO (Earth observation)?

You’ve almost certainly relied on EO at some point already today.

The wide range of government, industry and societal uses of Earth observation in Australia.
Australian Earth Observation Community Coordination Plan 2026

EO describes the activities used to gather data about the Earth from satellites, aircraft, remotely piloted systems and other platforms. It delivers information for our daily weather and oceanographic forecasts, disaster management systems, water and power supply, infrastructure monitoring, mining, agricultural production, environmental monitoring and more.

Global positioning and navigation, communications, and information derived from satellites looking at, and away from, Earth are referred to as “downstream” space activities.

“Upstream” activities are the industries building infrastructure (satellites, sensors), launch vehicles and ground facilities for operating space-based equipment. In this arena, countries such as Russia focus on building, launching and operating satellites and space craft. Others (such as Canada, Italy, UK) target developing industries and government activities that use these services. The US and China maintain a balance.

Components of Australia’s Earth-observation space capabilities (click to zoom for a clearer view)
Australian Earth Observation Community Coordination Plan 2026, Author provided

Australia spends very little on space

Although we rely so heavily on downstream space activities in our economic and other operations, Australia invests very little in space: only 0.003% of GDP, according to 2014 figures.

https://datawrapper.dwcdn.net/7fXSG/2/

Other countries have taken very proactive roles in enabling these industries to develop. Most government space agencies around the world invest 11% to 51% of their funds in developing EO capacity. These investments allow industries and government to build downstream applications and services from secure 24/7 satellite data streams.

https://datawrapper.dwcdn.net/P3Fis/5/

Historically, Australia has invested heavily in research and research infrastructure to produce world leading capabilities in the science of astronomy, space-debris tracking and space exploration communications.

In EO there are no comparable national programs or infrastructure, nor have we contributed to international capability at the same levels as these areas. This seems strange given:

  • our world leading status in applied research and extensive government use of these data as fully operational essential and critical information streams
  • all of the reports calling for increased government support and enablement of the “space” industry cite our reliance on EO as essential, but then present no path forward for it
  • there are now a number of well established and growing small companies focused on delivering essential environmental, agricultural, grazing, energy supply and infrastructure monitoring services using EO, and
  • we have a well organised EO community across research, industry and government, with a clearly articulated national strategic plan to 2026.
Example of an information delivery service built from Earth observation data streams to deliver property-level information to graziers and other land-holders (click to zoom for a clearer view).
P Tickle, FarmMap4D, Author provided

Building Australia’s EO capacity

EO plays a vital role in many aspects of Australian life. Australia’s state and Commonwealth agencies, along with research institutions and industry have already built essential tools to routinely deliver satellite images in a form that can be developed further by private industry and delivered as services.

But our lack of a coordinating space agency adds a layer of fragility to vital EO operations as they currently stand.


Read more: The 50 year old space treaty needs adaptation


This places a very large amount of Commonwealth, state and local government activity, economic activity and essential infrastructure at risk, as multiple recent national reviews have noted.

Our federal government started to address the problem with its 2013 Satellites Utilisation Policy, and will hopefully build on this following the current rounds of extensive consultation for the Space Industry Capability Review.

Although our private EO upstream and downstream industry capabilities are currently small, they are world leading, and if they were enabled with government-industry support in a way that the Canadian Space Agency, the European Space Agency/European Commission and UK Space Agency do, we could build this sector.

If Australia is to realistically participate in the “Space 2.0” economy, we need to act now and set clear goals for the next five, ten and 20 years. EO can be a pillar for this activity, enabling significant expansion of our upstream and downstream industries. This generates jobs and growth and addresses national security concerns.

That should be a win for all sectors in Australia – and we can finally give back and participate globally in space.


Data sources for figure “Proportion of space budget spent on different capacities”: NASA; ESA – here and here; JAXA; PDF report on China.

Stuart Phinn, Professor of Geography, Director – Remote Sensing Research Centre, Chair – Australian Earth Observation Community Coordination Group, The University of Queensland

This article was originally published on The Conversation. Read the original article.

The new data retention law seriously invades our privacy – and it’s time we took action



The government’s new law enabling the collection of metadata raises serious privacy concerns.
shutterstock

Uri Gal, University of Sydney

Over the past few months, Australians’ civil rights have come under attack.

In April, the government’s data retention law came into effect. The law requires telecommunications companies to store customer metadata for at least two years. Metadata from our phone calls, text messages, emails, and internet activity is now tracked by the government and accessible by intelligence and law enforcement agencies.

Ironically, the law came into effect only a few weeks before Australia marked Privacy Awareness Week. Alarmingly, it is part of a broad trend of eroding civil rights in Western democracies, most noticeably evident by the passage of the Investigatory Powers Act in the UK, and the decision to repeal the Internet Privacy Law in the US.

Why does it matter?

Australia’s data retention law is one of the most comprehensive and intrusive data collection schemes in the western world. There are several reasons why Australians should challenge this law.

First, it undermines the democratic principles on which Australia was founded. It gravely harms individuals’ right to privacy, anonymity, and protection from having their personal information collected.

The Australian Privacy Principles define limited conditions under which the collection of personal information is permissible. It says personal information must be collected by “fair” means.

Despite a recent ruling by the Federal Court, which determined that our metadata does not constitute “personal information”, we should consider whether sweeping collection of all of Australian citizenry’s metadata is consistent with our right to privacy.

Second, metadata – data about data – can be highly revealing and provide a comprehensive depiction of our daily activities, communications and movements.

As detailed here, metadata is broad in scope and can tell more about us than the actual content of our communications. Therefore, claims that the data retention law does not seriously compromise our privacy should be considered naïve, ill-informed, or dishonest.
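To make this concrete, here is a small, entirely hypothetical sketch: a handful of call-metadata records (who was called and when, never what was said) are enough to suggest something sensitive about the caller. The records and labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical metadata: destination and hour only -- no call content.
call_records = [
    {"to": "oncology-clinic", "hour": 9},
    {"to": "oncology-clinic", "hour": 14},
    {"to": "insurance-hotline", "hour": 15},
    {"to": "oncology-clinic", "hour": 10},
]

# A simple frequency count already hints at a health condition.
most_called, count = Counter(r["to"] for r in call_records).most_common(1)[0]
print(most_called, count)  # oncology-clinic 3
```

No transcript is needed: the pattern of contacts alone carries the sensitive inference.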

Third, the law is justified by the need to protect Australians from terrorist acts. However, despite the government’s warnings, the risk of getting hurt in a terrorist attack in Australia has been historically, and is today, extremely low.

To date, the government has not presented any concrete empirical evidence to indicate that this risk has substantially changed. Democracies such as France, Germany and Israel – which face more severe terrorist threats than Australia – have not legalised mass data collection and instead rely on more targeted means to combat terrorism that do not jeopardise their democratic foundations.

Fourth, the data retention law is unlikely to achieve its stated objective and thwart serious terrorist activities. There are a range of widely-accessible technologies that can be used to circumvent the government’s surveillance regime. Some of them have previously been outlined by the now-prime minister, Malcolm Turnbull.

Therefore, in addition to damaging our civil rights, the law’s second lasting legacy is likely to be its contribution to increasing the budgetary debt by approximately A$740 million over the next ten years.

How can the law be challenged?

There are several things we can do to challenge the law. For example, there are technologies that we can start using today to increase our online privacy.

A full review of all available options is beyond the scope of this article, but here are three effective ones.

  1. Virtual private networks (VPNs) can hide browsing information from internet service providers. Aptly, April 13, the day the data retention law came into effect, has been declared the Australian “get a VPN day”.

  2. Tor – The Onion Router is free software that can help protect the anonymity of its users and conceal their internet activity from surveillance and analysis.

  3. Encrypted messaging applications – unprotected applications can be easily tracked. Consequently, applications such as Signal and Telegram that offer data encryption solutions have been growing in popularity.

Australian citizens have the privilege of electing their representatives. An effective way to oppose continuing state surveillance is to vote for candidates whose views truly reflect the democratic principles that underpin modern Australian society.

The Australian public needs to have an honest, critical and open debate about the law and its social and ethical ramifications. The absence of such a debate is dangerous. The institutional accumulation of power is a slippery slope – once gained, power is not easily given up by institutions.

And the political climate in Australia is ripe for further deterioration of civil rights, as evident in the government’s continued efforts to increase its regulation of the internet. Therefore, it is important to sound a clear and public voice that opposes such steps.

Finally, we need to call out our elected representatives when they make logically muddled claims. In a speech to parliament on Tuesday this week, Turnbull said:

The rights and protections of the vast overwhelming majority of Australians must outweigh the rights of those who will do them harm.

The data retention law is a distortion of the logic embedded in this statement because it indiscriminately targets all Australians. We must not allow the pernicious intent of a handful of terrorists to be used as an excuse to harm the rights of all Australians and change the fabric of our society.

Uri Gal, Associate Professor in Business Information Systems, University of Sydney

This article was originally published on The Conversation. Read the original article.

Cloud, backup and storage devices: how best to protect your data


How much data do you still store only on your mobile, tablet or laptop?
Shutterstock/Neirfy

Adnene Guabtni, Data61

We are producing more data than ever before, with more than 2.5 quintillion bytes produced every day, according to computer giant IBM. That’s a staggering 2,500,000,000,000 gigabytes of data and it’s growing fast.

We have never been so connected through smart phones, smart watches, laptops and all sorts of wearable technologies inundating today’s marketplace. There were an estimated 6.4 billion connected “things” in 2016, up 30% from the previous year.

We are also continuously sending and receiving data over our networks. This unstoppable growth is unsustainable without some kind of smartness in the way we all produce, store, share and backup data now and in the future.

In the cloud

Cloud services play an essential role in achieving sustainable data management by easing the strain on bandwidth, storage and backup solutions.

But is the cloud paving the way to better backup services or is it rendering backup itself obsolete? And what’s the trade-off in terms of data safety, and how can it be mitigated so you can safely store your data in the cloud?

The cloud is often thought of as an online backup solution that works in the background on your devices to keep your photos and documents, whether personal or work related, backed up on remote servers.

In reality, the cloud has a lot more to offer. It connects people together, helping them store and share data online and even work together online to create data collaboratively.

It also makes your data ubiquitous, so that if you lose your phone or your device fails you simply buy a new one, sign in to your cloud account and voila! – all your data are on your new device in a matter of minutes.

Do you really back up your data?

Another important advantage of cloud-based backup services is automation and ease of use. With traditional backup solutions, such as using a separate drive, people often discover, a little too late, that they did not back up certain files.

Relying on the user to do backups is risky, so automating it is exactly where cloud backup is making a difference.

Cloud solutions have begun to evolve from online backup services to primary storage services. People are increasingly moving from storing their data on their device’s internal storage (hard drives) to storing them directly in cloud-based repositories such as DropBox, Google Drive and Microsoft’s OneDrive.

Devices such as Google’s Chromebook do not use much local storage to store your data. Instead, they are part of a new trend in which everything you produce or consume on the internet, at work or at home, would come from the cloud and be stored there too.

Recently announced cloud technologies such as Google’s Drive File Stream or Dropbox’s Smart Sync are excellent examples of how cloud storage services are heading in a new direction with less data on the device and a bigger primary storage role for the cloud.

Here is how it works. Instead of keeping local files on your device, placeholder files (sort of empty files) are used, and the actual data are kept in the cloud and downloaded back onto the device only when needed.

Edits to the files are pushed to the cloud so that no local copy is kept on your device. This drastically reduces the risk of data leaks when a device is lost or stolen.
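A minimal sketch of that placeholder pattern follows. A plain dictionary stands in for the cloud back end, and the class names and details are illustrative, not either vendor’s actual implementation.

```python
class PlaceholderFile:
    """A local stub that fetches its real bytes from remote storage on demand."""

    def __init__(self, name, remote_store):
        self.name = name
        self.remote = remote_store  # stands in for the cloud API
        self._cache = None          # no data on the device yet

    def open(self):
        # "Hydrate" the placeholder on first access only.
        if self._cache is None:
            self._cache = self.remote[self.name]
        return self._cache

    def save(self, data):
        # Edits are pushed straight to the cloud; drop the local copy.
        self.remote[self.name] = data
        self._cache = None

remote = {"report.txt": b"draft v1"}
f = PlaceholderFile("report.txt", remote)
print(f.open())      # b'draft v1' -- downloaded only when opened
f.save(b"draft v2")  # no durable copy stays on the device
```

Because the device holds only stubs and transient caches, losing it exposes far less data than losing a laptop full of synced files.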

So if your entire workspace is in the cloud, is backup no longer needed?

No. In fact, backup is more relevant than ever, as disasters can strike cloud providers themselves, with hacking and ransomware affecting cloud storage too.

Backup has always had the purpose of reducing risk through redundancy, by duplicating data across multiple locations. The same can apply to cloud storage, which can be duplicated across multiple cloud locations or multiple cloud service providers.
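In miniature, that redundancy principle is just “copy to several independent places and verify each copy”. The sketch below uses local folders to stand in for separate storage providers; the function name and checksum choice are illustrative.

```python
import hashlib
import pathlib
import shutil
import tempfile

def replicate(source: pathlib.Path, destinations) -> str:
    """Copy one file to several independent locations, verifying each
    copy against the original's SHA-256 checksum."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    for dest in destinations:
        copy = pathlib.Path(dest) / source.name
        shutil.copy2(source, copy)
        if hashlib.sha256(copy.read_bytes()).hexdigest() != digest:
            raise IOError(f"copy to {dest} is corrupt")
    return digest

# Two temporary folders stand in for two separate cloud providers.
with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    src = pathlib.Path(a) / "notes.txt"
    src.write_text("irreplaceable data")
    replicate(src, [b])  # the data now survives the loss of either folder
```

Real multi-cloud backup adds authentication, versioning and scheduling, but the core safeguard is the same: independent copies, each verified.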

Privacy matters

Yet beyond the disruption of the backup market, the number-one concern about the use of cloud services for storing user data is privacy.

Data privacy is strategically important, particularly when customer data are involved. Many privacy-related problems can happen when using the cloud.

There are concerns about the processes used by cloud providers for privacy management, which often trade privacy for convenience. There are also concerns about the technologies cloud providers put in place to overcome privacy-related issues, which are often not effective.

When it comes to technology, encryption tools protecting your sensitive data have actually been around for a long time.

Encryption works by scrambling your data with a very large digital number (called a key) that you keep secret so that only you can decrypt the data. Nobody else can decode your data without that key.

Using encryption tools to encrypt your data with your own key before transferring it into the cloud is a sensible thing to do. Some cloud service providers are now offering this option and letting you choose your own key.
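As a toy illustration of the principle, the sketch below encrypts data with a one-time pad (byte-wise XOR with a random key) before it would ever leave the device. This is a teaching sketch only: real client-side encryption tools use vetted ciphers such as AES, and a one-time-pad key must be as long as the data and never reused.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the matching key byte; XOR is its own inverse,
    # so the same function both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"my private notes"
key = secrets.token_bytes(len(plaintext))  # the key never leaves your device
ciphertext = xor_bytes(plaintext, key)     # this is all the cloud ever sees

assert xor_bytes(ciphertext, key) == plaintext  # only the key holder recovers it
```

The point mirrors the text: whoever holds the key controls access, so encrypting with your own key before upload keeps the provider (and any intruder on its servers) from reading your data.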

Share vs encryption

But if you store data in the cloud for the purpose of sharing it with others – and that’s often the precise reason that users choose to use cloud storage – then you might require a process to distribute encryption keys to multiple participants.

This is where the hassle can start. People you share data with would need to get the key too, in some way or another. Once you share that key, how would you revoke it later on? How would you prevent it from being re-shared without your consent?

More importantly, how would you keep using the collaboration features offered by cloud providers, such as Google Docs, while working on encrypted files?

These are the key challenges ahead for cloud users and providers. Solutions to those challenges would truly be game-changing.

Adnene Guabtni, Senior Research Scientist/Engineer, Data61

This article was originally published on The Conversation. Read the original article.