Australians want to support government use and sharing of data, but don’t trust their data will be safe



A new survey reveals community attitudes towards the use of personal data by government and researchers.
Shutterstock

Nicholas Biddle, Australian National University and Matthew Gray, Australian National University

Never before has so much data about us been held by governments and the companies we interact with. And never has this data been so useful for analytical purposes.

But with such opportunities come risks and challenges. If personal data is going to be used for research and policy purposes, we need effective data governance arrangements in place, and community support (social licence) for this data to be used.

The ANU Centre for Social Research and Methods has recently undertaken a survey of a representative sample of Australians to learn their views about how personal data is used, stored and shared.

While Australians report a high level of support for the government to use and share data, there is less confidence that the government has the right safeguards in place or can be trusted with people’s data.




Read more:
Soft terms like ‘open’ and ‘sharing’ don’t tell the true story of your data


What government should do with data

In the ANUPoll survey of more than 2,000 Australian adults (available for download at the Australian Data Archive) we asked:

On the whole, do you think the Commonwealth Government should or should not be able to do the following?

Six potential data uses were given.

Do you think the Commonwealth Government should or should not be able to … ?
ANU Centre for Social Research and Methods Working Paper

Overall, Australians are supportive of the Australian government using data for purposes such as allocating resources to those who need it the most, and ensuring people are not claiming benefits to which they are not entitled.

They were slightly less supportive of providing data to researchers, though most still agreed or strongly agreed that it was worthwhile.

Perceptions of government data use

Community attitudes to the use of data by government are tied to perceptions about whether the government can keep personal data secure, and whether it’s behaving in a transparent and trustworthy manner.

To measure views of the Australian population on these issues, respondents were told:

Following are a number of statements about the Australian government and the data it holds about Australian residents.

They were then asked to what extent they agreed or disagreed that the Australian government:

  • could respond quickly and effectively to a data breach
  • has the ability to prevent data being hacked or leaked
  • can be trusted to use data responsibly
  • is open and honest about how data are collected, used and shared.

Respondents did not express strong support for the view that the Australian government is able to protect people’s data, or is using data in an appropriate way.

To what extent do you agree or disagree that the Australian Government … ?
ANU Centre for Social Research and Methods Working Paper



Read more:
What are tech companies doing about ethical use of data? Not much


We also asked respondents to:

[think] about the data about you that the Australian Government might currently hold, such as your income tax data, social security records, or use of health services.

We then asked for their level of concern about five specific forms of data breaches or misuse of their own personal data.

We found that there are considerable concerns about different forms of data breaches or misuse.

More than 70% of respondents were concerned or very concerned about the accidental release of personal information, deliberate hacking of government systems, and data being provided to consultants or private sector organisations who may misuse the data.

Level of concern about specific forms of data breaches or misuse of a person’s own data …
ANU Centre for Social Research and Methods Working Paper

More than 60% were concerned or very concerned about their data being used by the Australian government to make unfair decisions. And more than half were concerned or very concerned about their data being provided to academic researchers who may misuse their information.




Read more:
Facebook’s data lockdown is a disaster for academic researchers


Trust in government to manage data

The data environment in Australia is changing rapidly. More digital information about us is being created, captured, stored and shared than ever before, and there is a greater capacity to link information across multiple sources of data, and across multiple time periods.

While this creates opportunities, it also creates the risk that the data will be used in a way that is not in our best interests.

There is policy debate at the moment about how data should be used and shared. If we don’t make use of the data available, that has costs in terms of worse service delivery and less effective government. So, locking data up is not a cost-free option.

But sharing data or making data available in a way that breaches people’s privacy can be harmful to individuals, and may generate a significant (and legitimate) public backlash. This would reduce the chance of data being made available in any form, and mean that the potential benefits of improving the wellbeing of Australians are lost.

If government, researchers and private companies want to be able to make use of the richness of the new data age, there is an urgent and continuing need to build up trust across the population, and to put policies in place that reassure consumers and users of government services.

Nicholas Biddle, Associate Professor, ANU College of Arts and Social Sciences, Australian National University and Matthew Gray, Director, ANU Centre for Social Research and Methods, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


We’ve been hacked – so will the data be weaponised to influence election 2019? Here’s what to look for


Michael Jensen, University of Canberra

Prime Minister Scott Morrison recently said both the Australian Parliament and its major political parties were hacked by a “sophisticated state actor.”

This raises concerns that a foreign adversary may be intending to weaponise, or strategically release documents, with an eye towards altering the 2019 election outcome.




Read more:
A state actor has targeted Australian political parties – but that shouldn’t surprise us


While the hacking of party and parliamentary systems is normally a covert activity, influence operations are necessarily noisy and public in order to reach citizens – even if efforts are made to obscure their origins.

If a state actor has designs to weaponise materials recently hacked, we will likely see them seek to inflame religious and ethnic differences, as well as embarrass the major parties in an effort to drive votes to minor parties.

If this comes to pass, there are four things Australians should look for.

1. Strategic interest for a foreign government to intervene

If the major parties have roughly the same policy position in relation to a foreign country, a foreign state would have little incentive to intervene, for example, in favour of Labor against the Coalition.

They may, however, attempt to amplify social divisions between the parties as a way of reducing the ability of Australians to work together after the election.

They may also try to drive down the already low levels of support for democracy and politicians in Australia to further undermine Australian democracy.

Finally, they may also try to drive the vote away from the major parties to minor parties which might be more favourable to their agenda.

This could be achieved by strategically releasing hacked materials which embarrass the major parties or their candidates, moving voters away from those parties and towards minor parties. These stories will likely be distributed first on social media platforms and later amplified by foreign and domestic broadcast media.

It is no secret that Russia and China seek a weakening of the Five Eyes security relationship between Australia, New Zealand, Canada, the United States, and the United Kingdom. If weakened, that would undermine the alliance structure which has helped prevent major wars for the last 70 years.

2. Disproportionate attention by foreign media to a local campaign

In the US, although Tulsi Gabbard’s polling numbers rank her near the bottom of declared and anticipated candidates for the Democratic nomination, she has received significant attention from Russia’s overt or “white” propaganda outlets, Sputnik and RT (formerly Russia Today).

The suspected reason for this attention is that some of her foreign policy positions on the Middle East are consistent with Russian interests in the region.

In Australia, we might find greater attention than normal directed at One Nation or Fraser Anning – as well as the strategic promotion of Green candidates in certain places to push political discussion further right and further left at the same time.

3. Promoted posts on Facebook and other social media platforms

Research into the 2016 US election found widespread violations of election law. The vast majority of promoted ads on Facebook during the election campaign were from groups which failed to file with the Federal Election Commission and some of this unregistered content came from Russia.

Ads placed by Russia’s Internet Research Agency, which is under indictment by the Mueller investigation, ended up disproportionately in the newsfeeds of Facebook users in Wisconsin and Pennsylvania – two of the three states that looked like a lock for Clinton until the very end of the campaign.

What makes Facebook and many other social media platforms particularly of concern is the ability to use data to target ads using geographic and interest categories. One can imagine that if a foreign government were armed with voting data hacked from the parties, this process would be all the more effective.




Read more:
New guidelines for responding to cyber attacks don’t go far enough


Seats in Australia which might be targeted include seats like Swan (considered a marginal seat with competition against the Liberals on both the left and the right) and the seats of conservative politicians on GetUp’s “hitlist” – such as Tony Abbott’s and Peter Dutton’s seats of Warringah and Dickson.

4. Focus on identity manipulation, rather than fake news

The term “fake news” suffers from conceptual ambiguities – it means different things to different people. “Fake news” has been used not just as a form of classification to describe material which “mimics news media content in form but not in organisational process or intent” but also used to describe satire and even as an epithet used to dismiss disagreeable claims of a factual nature.

Studies of propaganda show that information need not be factually false to effectively manipulate target audiences.

The best propaganda uses claims which are factually true, placing them into a different context which can be used to manipulate audiences or by amplifying negative aspects of a group, policy or politician, without placing that information in a wider context.

For example, to amplify concerns about immigrants, one might highlight the immigrant background of someone convicted of a crime, irrespective of the overall propensity for immigrants to commit crimes compared to native-born Australians.

This creates what communication scholars call a “representative anecdote” through which people come to understand and think about a topic with which they are otherwise unfamiliar. While immigrants may or may not be more likely to commit crimes than other Australians, the reporting creates that association.

Among the ways foreign influence operations function is through the politicisation of identities. Previous research has found evidence of efforts to heighten ethnic and racial differences through Chinese language WeChat official accounts operating in Australia as well as through Russian trolling efforts which have targeted Australia. This is the same pattern followed by Russia during the 2016 US election.

Liberal democracies are designed to handle conflicts over interests through negotiation and compromise. Identities, however, are less amenable to compromise. These efforts may not be “fake news” but they are effective in undermining the capacity of a democratic nation to mobilise its people in pursuit of common goals.




Read more:
How digital media blur the border between Australia and China


The Russian playbook

No country is immune from the risk of foreign influence operations. While historically these operations might have involved the creation of false documents and on the ground operations in target countries, today materials can be sourced, faked, and disseminated from the relative security of the perpetrating country. They may include both authentic and faked documents – making it hard for a campaign to charge that certain documents are faked without affirming the validity of others.

Most importantly, in a digitally connected world, these operations can scale up quickly and reach substantially larger populations than previously possible.

While the Russian interference in the 2016 US election has received considerable attention, Russia is not the only perpetrator and the US is not the only target.

But the Russians created a playbook which other countries can readily draw upon and adapt. The question remains as to who that might be in an Australian context.

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five projects that are harnessing big data for good



Often the value of data science lies in the work of joining the dots.
Shutterstock

Arezou Soltani Panah, Swinburne University of Technology and Anthony McCosker, Swinburne University of Technology

Data science has boomed over the past decade, following advances in mathematics, computing capability, and data storage. Australia’s Industry 4.0 taskforce is busy exploring ways to improve the Australian economy with tools such as artificial intelligence, machine learning and big data analytics.

But while data science offers the potential to solve complex problems and drive innovation, it has often come under fire for unethical use of data or unintended negative consequences – particularly in commercial cases where people become data points in annual company reports.

We argue that the data science boom shouldn’t be limited to business insights and profit margins. When used ethically, big data can help solve some of society’s most difficult social and environmental problems.

Industry 4.0 should be underwritten by values that ensure these technologies are trained towards the social good (known as Society 4.0). That means using data ethically, involving citizens in the process, and building social values into the design.

Here are five data science projects that are putting these principles into practice.




Read more:
The future of data science looks spectacular


1. Finding humanitarian hot spots

Social and environmental problems are rarely easy to solve. Take the hardship and distress in rural areas due to the long-term struggle with drought. Australia’s size and the sheer number of people and communities involved make it difficult to pair those in need with support and resources.

Our team joined forces with the Australian Red Cross to figure out where the humanitarian hot spots are in Victoria. We used social media data to map everyday humanitarian activity to specific locations and found that the hot spots of volunteering and charity activity are located in and around Melbourne CBD and the eastern suburbs. These kinds of insights can help local aid organisations channel volunteering activity in times of acute need.

Distribution of humanitarian actions across inner Melbourne and local government areas. Blue dots and red dots represent scraped Instagram posts around the hashtags #volunteer and #charity.
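The hotspot analysis described above boils down to counting geotagged posts per area for the hashtags of interest. A minimal sketch of that aggregation step is below; the sample posts and area names are invented for illustration, and real data would come from a scraping or API pipeline rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical sample of scraped posts as (hashtag, local government area)
# pairs. In the actual project these came from geotagged Instagram posts.
posts = [
    ("#volunteer", "Melbourne"),
    ("#charity", "Melbourne"),
    ("#volunteer", "Boroondara"),
    ("#charity", "Melbourne"),
    ("#volunteer", "Melbourne"),
]

def hotspot_counts(posts, hashtags=("#volunteer", "#charity")):
    """Count posts per area for the hashtags of interest,
    ranked with the most active area first."""
    counts = Counter(area for tag, area in posts if tag in hashtags)
    return counts.most_common()

print(hotspot_counts(posts))
```

The ranked counts are what get plotted on the map: each area's total becomes a dot size or colour intensity.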

2. Improving fire safety in homes

Accessing data – the right data, in the right form – is a constant challenge for data science. We know that house fires are a serious threat, and that fire and smoke alarms save lives. Targeting houses without fire alarms can help mitigate that risk. But there is no single reliable source of information to draw on.

In the United States, Enigma Labs built open data tools to model and map risk at the level of individual neighbourhoods. To do this effectively, their model combines national census data with a geocoder tool (TIGER), as well as analytics based on local fire incident data, to provide a risk score.

Fire fatality risk scores calculated at the level of Census block groups.
Enigma Labs
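Blending several data sources into one risk score typically means weighting normalised features and summing them. The sketch below illustrates the general shape of such a model; the field names and weights are assumptions for illustration, not Enigma Labs' actual methodology.

```python
# Illustrative sketch of a per-area fire-risk score. The features and
# weights here are invented assumptions, not Enigma Labs' real model.

def fire_risk_score(pct_old_housing, pct_no_smoke_alarm, incidents_per_1000):
    """Blend census-derived housing features with the local fire
    incident rate into a 0-100 score; higher means higher modelled risk."""
    score = (
        0.3 * pct_old_housing
        + 0.4 * pct_no_smoke_alarm
        + 0.3 * min(incidents_per_1000 * 10, 100)  # cap incident contribution
    )
    return round(score, 1)

# A block group with older housing, many homes without alarms, and a
# moderate incident history scores higher than a newer, safer area.
high = fire_risk_score(80, 60, 5)
low = fire_risk_score(10, 5, 0.5)
print(high, low)
assert high > low
```

In a real model the weights would be fitted against historical fatality data rather than chosen by hand, but the pipeline is the same: join census features to incident records by geography, then score each block group.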

3. Mapping police violence in the US

Ordinary citizens can be involved in generating social data. There are many crowdsourced, open mapping projects, but often the value of data science lies in the work of joining the dots.

The Mapping Police Violence project in the US monitors, makes sense of, and visualises police violence. It draws on three crowdsourced databases, but also fills in the gaps using a mix of social media, obituaries, criminal records databases, police reports and other sources of information. By drawing all this information together, the project quantifies the scale of the problem and makes it visible.

A visualisation of the frequency of police violence in the United States.
Mapping Police Violence



Read more:
Data responsibility: a new social good for the information age


4. Optimising waste management

The Internet of Things is made up of a host of connected devices that collect data. When embedded in the ordinary objects all around us, and combined with cloud-based analysis and computing, these objects become smart – and can help solve problems or inefficiencies in the built environment.

If you live in Melbourne, you might have noticed BigBelly bins around the CBD. These smart bins have solar-powered trash compactors that regularly compress the garbage inside throughout the day. This eliminates waste overflow and reduces unnecessary carbon emissions, cutting collection trips by around 80%.

Real-time data analysis and reporting is provided by a cloud-based data management portal, known as CLEAN. The tool identifies trends in waste overflow, which helps with bin placement and planning of collection services.

BigBelly bins are being used in Melbourne’s CBD.
Kevin Zolkiewicz/Flickr, CC BY-NC

5. Identifying hotbeds of street harassment

A group of four women – and many volunteer supporters – in Egypt developed HarassMap to engage with, and inform, the community in an effort to reduce sexual harassment. The platform they built uses anonymised, crowdsourced data to map harassment incidents that occur in the street in order to alert its users of potentially unsafe areas.

The challenge for the group was to provide a means for generating data for a problem that was itself widely dismissed. Mapping and informing are essential data science techniques for addressing social problems.

Mapping of sexual harassment reported in Egypt.
HarassMap



Read more:
Cambridge Analytica’s closure is a pyrrhic victory for data privacy


Building a better society

Turning the efforts of data science to social good isn’t easy. Those with the expertise have to be attuned to the social impact of data analytics. Meanwhile, access to data, or linking data across sources, is a major challenge – particularly as data privacy becomes an increasing concern.

While the mathematics and algorithms that drive data science appear objective, human factors often combine to embed biases, which can result in inaccurate modelling. Digital and data literacy, along with a lack of transparency in methodology, combine to raise mistrust in big data and analytics.

Nonetheless, when put to work for social good, data science can provide new sources of evidence to assist government and funding bodies with policy, budgeting and future planning. This can ultimately result in a better connected and more caring society.

Arezou Soltani Panah, Postdoc Research Fellow (Social Data Scientist), Swinburne University of Technology and Anthony McCosker, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

If privacy is increasing for My Health Record data, it should apply to all medical records



Everyone was up in arms about a lack of privacy with My Health Record, but the privacy protections are the same for other types of patient data.
Shutterstock

Megan Prictor, University of Melbourne; Bronwyn Hemsley, University of Technology Sydney; Mark Taylor, University of Melbourne, and Shaun McCarthy, University of Newcastle

In response to the public outcry against the potential for My Health Record data to be shared with police and other government agencies, Health Minister Greg Hunt recently announced moves to change the legislation.

The laws underpinning the My Health Record as well as records kept by GPs and private hospitals currently allow those records to be shared with the police, Centrelink, the Tax Office and other government departments if it’s “reasonably necessary” for a criminal investigation or to protect tax revenue.

If passed, the policy of the Digital Health Agency (which runs the My Health Record) not to release information without a court order will become law. This would mean the My Health Record has greater privacy protections in this respect than other medical records, which doesn’t make much sense.




Read more:
Opting out of My Health Records? Here’s what you get with the status quo


Changing the law to increase privacy

Under the proposed new bill, state and federal government departments and agencies would have to apply for a court order to obtain information stored in the My Health Record.

The court would need to be satisfied that sharing the information is “reasonably necessary”, and that there is no other effective way for the person requesting it to access the information. The court would also need to weigh up whether the disclosure would “unreasonably interfere” with the person’s privacy.

If granted, a court order to release the information would require the Digital Health Agency to provide information from a person’s My Health Record without the person’s consent, and even if they objected.

If a warrant is issued for a person’s health records, the police can sift through them as they look for relevant information. They could uncover personally sensitive material that is not relevant to the current proceedings. Since the My Health Record allows the collection of information across health providers, there could be an increased risk of non-relevant information being disclosed.




Read more:
Using My Health Record data for research could save lives, but we must ensure it’s ethical


But what about our other medical records?

Although we share all sorts of personal information online, we like to think of our medical records as sacrosanct. But the law underpinning My Health Record came from the wording of the Commonwealth Privacy Act 1988, which applies to all medical records held by GPs, specialists and private hospitals.

Under the Act, doctors don’t need to see a warrant before they’re allowed to share health information with enforcement agencies. The Privacy Act principles mean doctors only need a “reasonable belief” that sharing the information is “reasonably necessary” for the enforcement activity.

Although public hospital records do not fall under the Privacy Act, they are covered by state laws that have similar provisions. In Victoria, for instance, the Health Records Act 2001 permits disclosure if the record holder “reasonably believes” that the disclosure is “reasonably necessary” for a law enforcement function and it would not be a breach of confidence.

In practice, health care providers are trained on the utmost importance of protecting the patient’s privacy. Their systems of registration and accreditation mean they must follow a professional code of ethical conduct that includes observing confidentiality and privacy.

Although the law doesn’t require it, it is considered good practice for health professionals to insist on seeing a warrant before disclosing a patient’s health records.

In a 2014 case, the Federal Court considered whether a psychiatrist had breached the privacy of his patient. The psychiatrist had given some of his patient's records to Queensland police in response to a warrant. The court said the existence of a warrant was evidence the doctor had acted appropriately.

In a 2015 case, it was decided a doctor had interfered with a patient's privacy when disclosing the patient's health information to police. In this case, there was no warrant and no formal criminal investigation.




Read more:
What could a My Health Record data breach look like?


Unfortunately, there are recent examples of medical records being shared with government departments in worrying ways. In Australia, it has been alleged the immigration department tried, for political reasons, to obtain access to the medical records of people held in immigration detention.

In the UK, thousands of patient records were shared with the Home Office to trace immigration offenders. As a result, it was feared some people would become too frightened to seek medical care for themselves and children.

We can’t change the fact different laws at state and federal level apply to our paper and electronic medical records stored in different locations. But we can try to change these laws to be consistent in protecting our privacy.

If it’s so important to change the My Health Records Act to ensure our records can only be “unlocked” by a court order, the same should apply to the Privacy Act as well as state-based laws. Doing so might help to address public concerns about privacy and the My Health Record, and further inform decisions about opting out or staying in the system.

Megan Prictor, Research Fellow in Law, University of Melbourne; Bronwyn Hemsley, Professor of Speech Pathology, University of Technology Sydney; Mark Taylor, Associate professor, University of Melbourne, and Shaun McCarthy, Director, University of Newcastle Legal Centre, University of Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The devil is in the detail of government bill to enable access to communications data


Monique Mann, Queensland University of Technology

The Australian government has released a draft of its long awaited bill to provide law enforcement and security agencies with new powers to respond to the challenges posed by encryption.

According to the Department of Home Affairs, encryption already affects 90% of the Australian Security Intelligence Organisation’s (ASIO) priority cases, and 90% of data intercepted by the Australian Federal Police. The measures respond to estimates that communications among terrorists and organised crime groups will be entirely encrypted by 2020.

The Department of Home Affairs and ASIO can already access encrypted data with specialist decryption techniques – or at points where data are not encrypted. But this takes time. The new bill aims to speed up this process, but these broad and ill-defined new powers have significant scope for abuse.




Read more:
New data access bill shows we need to get serious about privacy with independent oversight of the law


The Department of Home Affairs argues this new framework will not compel communications providers to build systemic weaknesses or vulnerabilities into their systems. In other words, it is not a backdoor.

But it will require providers to offer up details about technical characteristics of their systems that could help agencies exploit weaknesses that have not been patched. The required assistance may also include installing software, and designing and building new systems.

Compelling assistance and access

The draft Assistance and Access Bill introduces three main reforms.

First, it increases the obligations of both domestic and offshore organisations to assist law enforcement and security agencies to access information. Second, it introduces new computer access warrants that enable law enforcement to covertly obtain evidence directly from a device (this occurs at the endpoints when information is not encrypted). Finally, it increases existing powers that law enforcement have to access data through search and seizure warrants.

The bill is modelled on the UK’s Investigatory Powers Act, which introduced mandatory decryption obligations. Under the UK Act, the UK government can order telecommunication providers to remove any form of electronic protection that is applied by, or on behalf of, an operator. Whether or not this is technically possible is another question.

Similar to the UK laws, the Australian bill puts the onus on telecommunication providers to give security agencies access to communications. That might mean providing access to information at points where it is not encrypted, but it’s not immediately clear what other requirements can or will be imposed.




Read more:
End-to-end encryption isn’t enough security for ‘real people’


For example, the bill allows the Director-General of Security or the chief officer of an interception agency to compel a provider to do an unlimited range of acts or things. That could mean anything from removing security measures to deleting messages or collecting extra data. Providers will also be required to conceal any action taken covertly by law enforcement.

Further, the Attorney-General may issue a “technical capability notice” directed towards ensuring that the provider is capable of giving certain types of help to ASIO or an interception agency.

This means providers will be required to develop new ways for law enforcement to collect information. As in the UK, it’s not clear whether a provider will be able to offer true end-to-end encryption and still be able to comply with the notices. Providers that breach the law risk facing $10 million fines.

Cause for concern

The bill puts few limits or constraints on the assistance that telecommunication providers may be ordered to offer. There are also concerns about transparency. The bill would make it an offence to disclose information about government agency activities without authorisation. Anyone leaking information about data collection by the government – as Edward Snowden did in the US – could go to jail for five years.

There are limited oversight and accountability structures and processes in place. The Director-General of Security, the chief officer of an interception agency and the Attorney-General can issue notices without judicial oversight. This differs from how it works in the UK, where a specific judicial oversight regime was established, in addition to the introduction of an Investigatory Powers Commissioner.

Notices can be issued to enforce domestic laws and assist the enforcement of the criminal laws of foreign countries. They can also be issued in the broader interests of national security, or to protect the public revenue. These are vague and unclear limits on these exceptional powers.




Read more:
Police want to read encrypted messages, but they already have significant power to access our data


The range of service providers covered is also extremely broad. It might include telecommunication companies, internet service providers, email providers, social media platforms and a range of other “over-the-top” services. It also covers those who develop, supply or update software, and manufacture, supply, install or maintain data processing devices.

The enforcement of criminal laws in other countries may mean international requests for data will be funnelled through Australia as the “weakest-link” of our Five Eyes allies. This is because Australia has no enforceable human rights protections at the federal level.

It’s not clear how the government would enforce these laws on transnational technology companies. For example, if Facebook was issued a fine under the laws, it could simply withdraw operations or refuse to pay. Also, $10 million is a drop in the ocean for companies such as Facebook whose total revenue last year exceeded US$40 billion.

Australia is a surveillance state

As I have argued elsewhere, the broad powers outlined in the bill are neither necessary nor proportionate. Police already have broad powers, which are further strengthened by this bill, such as their ability to covertly hack devices at the endpoints, where information is not encrypted.

Australia has limited human rights and privacy protections. This has enabled a constant and steady expansion of the powers and capabilities of the surveillance state. If we want to protect the privacy of our communications we must demand it.

The Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 (Cth) is still in a draft stage and the Department of Home Affairs invites public comment until 10 September 2018. Submit any comments to assistancebill.consultation@homeaffairs.gov.au.

Monique Mann, Vice Chancellor’s Research Fellow in Regulation of Technology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

New data access bill shows we need to get serious about privacy with independent oversight of the law




MICK TSIKAS/AAP

Greg Austin, UNSW

The federal government today announced its proposed legislation to give law enforcement agencies yet more avenues to reach into our private lives through access to our personal communications and data. This never-ending stream of parliamentary bills defies logic, and fails to offer the necessary oversight and protections.

The trend has been led by Prime Minister Malcolm Turnbull, with help from an ever-growing number of security ministers and senior officials. Could it be that the proliferation of government security roles is a self-perpetuating industry leading to ever more government powers for privacy encroachment?

That definitely appears to be the case.

Striking the right balance between data access and privacy is a tricky problem, but the government’s current approach is doing little to solve it. We need better oversight of law enforcement access to our data to ensure it complies with privacy principles and actually results in convictions. That might require setting up an independent judicial review mechanism to report outcomes on an annual basis.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Where is the accountability?

The succession of data access legislation in the Australian parliament is fast becoming a Mad Hatter’s tea party – a characterisation justified by the increasingly unproductive public conversations between the government on one hand, and legal specialists and rights advocates on the other.

If the government says it needs new laws to tackle “terrorism and paedophilia”, then the rule seems to be that the other side will be criticised for bringing up “privacy protection”. The federal opposition has surrendered any meaningful resistance to this parade of legislation.

Rights advocates have been backed into a corner by being forced to repeat their concerns over each new piece of legislation while neither they nor the government, nor our Privacy Commissioner, and all the other “commissioners”, are called to account on fundamental matters of principle.

Speaking of the commissioner class, Australia just got a new one last week: the Data Commissioner. Strangely, the impetus for this appointment came from the Productivity Commission.

The post has three purposes:

  1. to promote greater use of data,
  2. to drive economic benefits and innovation from greater use of data, and
  3. to build trust with the Australian community about the government’s use of data.

The problem with this logic is that purposes one and two can only be distinguished by the seemingly catch-all character of the first: that if data exists it must be used.

Leaving aside that minor point, the notion that the government needs to build trust with the Australian community on data policy speaks for itself.

National Privacy Principles fall short

There is near universal agreement that the government is managing this issue badly, from the census data management issue to the “My Health Record” debacle. The growing commissioner class has not been much help.

Australia does have personal data protection principles, you may be surprised to learn. They are called “Privacy Principles”. You may be even more surprised to learn that the rights offered in these principles exist only up to the point where any enforcement arm of government wants the data.




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


So it seems that Australians have to rely on the leadership of the Productivity Commission (for economic policy) to guarantee our rights in cyber space, at least when it comes to our personal data.

Better oversight is required

There is another approach to reconciling citizens’ interests in privacy protection with legitimate and important enforcement needs against terrorists and paedophiles: that is judicial review.

The government argues, unconvincingly according to police sources, that this process adequately protects citizens by requiring law enforcement to obtain court-ordered warrants to access information. The record in some other countries suggests otherwise, with judges almost always waving through any application from enforcement authorities, according to official US data.

There is a second level of judicial review open to the government. This is to set up an independent judicial review mechanism that is obliged to annually review all instances of government access to personal data under warrant, and to report on the virtues or shortcomings of that access against enforcement outcomes and privacy principles.

There are two essential features of this proposal. First, the reviewing officer is a judge and not a public servant (the “commissioner class”). Second, the scope of the function is review of the daily operation of the intrusive laws, not just the post-facto examination of notorious cases of data breaches.

It would take a lengthy academic volume to make the case for judicial review of this kind. But it can be defended simply on economic grounds: such a review process would shine light on the efficiency of police investigations.

According to data released by the UK government, the overwhelming share of arrests for terrorist offences in the UK (many based on court-approved warrants for access to private data) do not result in convictions. There were 37 convictions out of 441 arrests for terrorist-related offences – about 8% – in the 12 months up to March 2018.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The Turnbull government deserves credit for its recognition of the values of legal review. Its continuing commitment to posts such as the National Security Legislation Monitor – and the appointment of a high-profile barrister to such a post – is evidence of that.

But somewhere along the way, the administration of data privacy is falling foul of a growing bureaucratic mess.

The only way to bring order to the chaos is through robust accountability; and the only people with the authority or legitimacy in our political system to do that are probably judges who are independent of the government.

Greg Austin, Professor UNSW Canberra Cyber, UNSW

This article was originally published on The Conversation. Read the original article.

What could a My Health Record data breach look like?



Health information is an attractive target for offenders.
Tammy54/Shutterstock

Cassandra Cross, Queensland University of Technology

Last week marked the start of a three-month period in which Australians can opt out of the My Health Record scheme before having an automatically generated electronic health record.

Some Australians have already opted out of the program, including Liberal MP Tim Wilson and former Queensland LNP premier Campbell Newman, who argue it should be an opt-in scheme.

But much of the concern about My Health Record centres on privacy. So what is driving these concerns, and what might a My Health Record data breach look like?

Data breaches

Data breaches exposing individuals’ private information are becoming increasingly common and can include demographic details (name, address, birthdate), financial information (credit card details, pin numbers) and other details such as email addresses, usernames and passwords.

Health information is also an attractive target for offenders. They can use this to perpetrate a wide variety of offences, including identity fraud, identity theft, blackmail and extortion.




Read more:
Another day, another data breach – what to do when it happens to you


Last week hackers stole the health records of 1.5 million Singaporeans, including Prime Minister Lee Hsien Loong, who may have been targeted for sensitive medical information.

Meanwhile in Canada, hackers reportedly stole the medical histories of 80,000 patients from a care home and held them to ransom.

Australia is not immune. Last year Australians’ Medicare details were advertised for sale on the dark net by a vendor who had sold the records of at least 75 people.

Earlier this year, Family Planning NSW experienced a breach of its booking system, which exposed client data of those who had contacted the organisation within the past two and a half years.

Further, in the first report since the introduction of mandatory data breach reporting, the Privacy Commissioner revealed that of the 63 notifications received in the first quarter, 15 were from health service providers. This makes health the leading industry for reported breaches.

Human error

It’s important to note that not all data breaches are perpetrated from the outside or are malicious in nature. Human error and negligence also pose a threat to personal information.

The federal Department of Health, for instance, published a supposedly “de-identified” data set of Medicare Benefits Schedule and Pharmaceutical Benefits Scheme records for 2.5 million Australians. This was done for research purposes.

But researchers were able to re-identify the details of individuals using publicly available information. In a resulting investigation, the Privacy Commissioner concluded that the Privacy Act had been breached three times.
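The mechanics of such a linkage attack are simple to sketch. The records, names and attributes below are invented, and the real re-identification work relied on richer auxiliary data, but the principle is the same: join the released records to a public dataset on the quasi-identifiers the two share.

```python
# A toy linkage attack: re-identifying a "de-identified" release by
# joining it to public data on shared quasi-identifiers.
# All records and names below are invented.

deidentified = [
    {"birth_year": 1975, "postcode": "2600", "sex": "F", "condition": "diabetes"},
    {"birth_year": 1988, "postcode": "4000", "sex": "M", "condition": "asthma"},
]

public_register = [
    {"name": "Jane Citizen", "birth_year": 1975, "postcode": "2600", "sex": "F"},
    {"name": "John Smith", "birth_year": 1988, "postcode": "4000", "sex": "M"},
]

QUASI_IDENTIFIERS = ("birth_year", "postcode", "sex")

def reidentify(released, auxiliary):
    """Match released records to named records sharing all quasi-identifiers."""
    index = {
        tuple(row[k] for k in QUASI_IDENTIFIERS): row["name"]
        for row in auxiliary
    }
    matches = []
    for row in released:
        name = index.get(tuple(row[k] for k in QUASI_IDENTIFIERS))
        if name is not None:
            matches.append((name, row["condition"]))
    return matches

print(reidentify(deidentified, public_register))
```

The danger is that each quasi-identifier looks harmless on its own; it is the combination that singles a person out.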

The latest data breach investigation from US telecommunications company Verizon notes that health care is the only sector where the threat from inside is greater than from the outside. Human error contributes largely to this.

There are promises of strong security surrounding My Health Records but, in reality, it’s a matter of when, not if, a data breach of some sort occurs.

Human error is one of the biggest threats.
Shutterstock

Privacy controls

My Health Record allows users to set the level of access they’re comfortable with across their record. This can target specific health-care providers or relate to specific documents.

But the onus of this rests heavily on the individual. This requires a high level of computer and health literacy that many Australians don’t have. The privacy control process is therefore likely to be overwhelming and ineffective for many people.




Read more:
My Health Record: the case for opting out


With the default option set to “general access”, any organisation involved in the person’s care can access the information.

Regardless of privacy controls, other agencies can also access information. Section 70 of the My Health Records Act 2012 states that details can be disclosed to law enforcement for a variety of reasons including:

(a) the prevention, detection, investigation, prosecution or punishment of criminal offences.

While no applications have been received to date, it is reasonable to expect this may occur in the future.

There are also concerns about sharing data with health insurance agencies and other third parties. While not currently authorised, there is intense interest from companies that can see the value in this health data.

Further, My Health Record data can be used for research, policy and planning. Individuals must opt out of this separately, through the privacy settings, if they don’t want their data to be part of this.

What should you do?

Health data is some of the most personal and sensitive information we have and includes details about illnesses, medications, tests, procedures and diagnoses. It may contain information about our HIV status, mental health profile, sexual activity and drug use.

These areas can attract a lot of stigma, so keeping this information private is paramount. Disclosure may not only affect a person’s health and well-being; it may also affect their relationships, their employment and other facets of their life.

Importantly, these details can’t be reset or reissued. Unlike passwords and credit card details, they are static. Once exposed, it’s impossible to “unsee” or “unknow” what has been compromised.

Everyone should make their own informed decision about whether to stay in My Health Record or opt out. Ultimately, it’s up to individuals to decide what level of risk they’re comfortable with, and the value of their own health information, and proceed on that basis.




Read more:
My Health Record: the case for opting in


Cassandra Cross, Senior Lecturer in Criminology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

New data tool scores Australia and other countries on their human rights performance



Despite the UN’s Universal Declaration of Human Rights, it remains difficult to monitor governments’ performance because there are no comprehensive human rights measures.
from http://www.shutterstock.com, CC BY-ND

K. Chad Clay, University of Georgia

This year, the Universal Declaration of Human Rights will mark its 70th anniversary, but despite progress in some areas, it remains difficult to measure or compare governments’ performance. We have yet to develop comprehensive human rights measures that are accepted by researchers, policymakers and advocates alike.

With this in mind, my colleagues and I have started the Human Rights Measurement Initiative (HRMI), the first global project to develop a comprehensive suite of metrics covering international human rights.

We have now released our beta dataset and data visualisation tools, publishing 12 metrics that cover five economic and social rights and seven civil and political rights.

Lack of human rights data

People often assume the UN already produces comprehensive data on nations’ human rights performance, but it does not, and likely never will. The members of the UN are governments, and governments are the very actors that are obligated by international human rights law. It would be naïve to hope for governments to effectively monitor and measure their own performance without political bias. There has to be a role for non-state measurement.




Read more:
Australia’s Human Rights Council election comes with a challenge to improve its domestic record


We hope that the data and visualisations provided by HRMI will empower practitioners, advocates, researchers, journalists and others to speak clearly about human rights outcomes worldwide and hold governments accountable when they fail to meet their obligations under international law.

These are the 12 human rights measured by the Human Rights Measurement Initiative (HRMI) project during its pilot stage. The UN’s Universal Declaration of Human Rights defines 30 human rights.
Human Rights Measurement Initiative, CC BY

The HRMI pilot

At HRMI, alongside our existing methodology for economic and social rights, we are developing a new way of measuring civil and political human rights. In our pilot, we sent an expert survey directly to human rights practitioners who are actively monitoring each country’s human rights situation.

That survey asked respondents about their country’s performance on the rights to assembly and association, opinion and expression, political participation, freedom from torture, freedom from disappearance, freedom from execution, and freedom from arbitrary or political arrest and imprisonment.

Based on those survey responses, we develop data on the overall level of respect for each of the rights. These data are calculated using a statistical method that ensures responses are comparable across experts and countries, and with an uncertainty band to provide transparency about how confident we are in each country’s placement. We also provide information on who our respondents believed were especially at risk for each type of human rights violation.

Human rights in Australia

One way to visualise data on our website is to look at a country’s performance across all 12 human rights for which we have released data at this time. For example, the graph below shows Australia’s performance across all HRMI metrics.

Human rights performance in Australia. Data necessary to calculate a metric for the right to housing at a high-income OECD assessment standard is currently unavailable for Australia.
CC BY

As shown here, Australia performs quite well on some indicators, but quite poorly on others. Looking at civil and political rights (in blue), Australia demonstrates high respect for the right to be free from execution, but does much worse on the rights to be free from torture and arbitrary arrest.

Our respondents often attributed this poor performance on torture and imprisonment to the treatment of refugees, immigrants and asylum seekers, as well as Indigenous peoples, by the Australian government.

Looking across the economic and social rights (in green), Australia shows a range of performance, doing quite well on the right to food, but performing far worse on the right to work.




Read more:
Ten things Australia can do to be a human rights hero


Freedom from torture across countries

Another way to visualise our data is to look at respect for a single right across several countries. The graph below shows, for example, overall government respect for the right to be free from torture and ill treatment in all 13 of HRMI’s pilot countries.

Government respect for the right to be free from torture, January to June 2017.
Human Rights Measurement Initiative (HRMI)

Here, the middle of each blue bar (marked by the small white lines) represents the average estimated level of respect for freedom from torture, while the length of the blue bars reflects our uncertainty about each estimate. For instance, we are much more certain regarding Mexico’s (MEX) low score than Brazil’s (BRA) higher score. Due to this uncertainty and the resulting overlap between the bars, there is only about a 92% chance that Brazil’s score is better than Mexico’s.
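A probability like that 92% figure can be computed by treating the two country scores as independent, approximately normal estimates. The numbers below are illustrative stand-ins (chosen so the result lands near 92%), not HRMI’s published estimates, and HRMI’s actual model is more sophisticated; the sketch only shows the underlying idea.

```python
import math

def prob_greater(mean_a, sd_a, mean_b, sd_b):
    """P(A > B) when A and B are independent normal estimates.

    The difference A - B is normal with mean (mean_a - mean_b) and
    standard deviation sqrt(sd_a**2 + sd_b**2), so P(A > B) is the
    standard normal CDF evaluated at the standardised difference z.
    """
    z = (mean_a - mean_b) / math.sqrt(sd_a ** 2 + sd_b ** 2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative scores only -- not HRMI's published values.
brazil_mean, brazil_sd = 4.2, 0.5
mexico_mean, mexico_sd = 3.2, 0.5

p = prob_greater(brazil_mean, brazil_sd, mexico_mean, mexico_sd)
print(f"P(Brazil > Mexico) = {p:.2f}")  # prints: P(Brazil > Mexico) = 0.92
```

Wider bars (larger standard deviations) shrink this probability towards 50%, which is why overlapping bars mean a less confident comparison.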

In addition to being able to say that torture is probably more prevalent in Mexico than in Brazil, and how certain we are in that comparison, we can also compare the groups of people that our respondents said were at greatest risk of torture. This information is summarised in the two word clouds below; larger words indicate that that group was selected by more survey respondents as being at risk.

These word clouds show the attributes that place a person at risk of torture in Brazil (left) and in Mexico (right), January to June 2017.
Human Rights Measurement Initiative (HRMI), CC BY

There are both similarities and differences between the groups that were at highest risk in Brazil and Mexico. Based on the survey responses our human rights experts in Brazil gave us, we know that black people, those who live in favelas or quilombolas, those who live in rural or remote areas, landless rural workers, and prison inmates are largely the groups referred to by the terms “race,” “low social or economic status,” or “detainees or suspected criminals”.

On the other hand, in Mexico, imprisoned women and those suspected of involvement with organised crime are the detainees or suspected criminals that our respondents stated were at high risk of torture. Migrants, refugees and asylum seekers travelling through Mexico on the way to the United States are also at risk.

The ConversationThere is much more to be learned from the visualisations and data on our website. After you have had the opportunity to explore, we would love to hear your feedback here about any aspect of our work so far. We are just getting started, and we thrive on collaboration with the wider human rights community.

K. Chad Clay, Assistant Professor of International Affairs, University of Georgia

This article was originally published on The Conversation. Read the original article.

How to stop haemorrhaging data on Facebook



Every time you open an app, click a link, like a post, read an article, hover over an ad, or connect to someone, you are generating data.
Shutterstock

Belinda Barnet, Swinburne University of Technology

If you are one of 2.2 billion Facebook users worldwide, you have probably been alarmed by the recent coverage of the Cambridge Analytica scandal, a story that began when The Guardian revealed 50 million (now thought to be 87 million) user profiles had been retrieved and shared without the consent of users.

Though the #deletefacebook campaign has gained momentum on Twitter, it is simply not practical for most of us to delete our accounts. It is technically difficult to do, and given that one quarter of the human population is on the platform, there is an undeniable social cost to being absent.




Read more:
Why we should all cut the Facebook cord. Or should we?


It is also not possible to use or even to have a Facebook profile without giving up at least some data: every time you open the app, click a link, like a post, hover over an ad, or connect to someone, you are generating data. This particular type of data is not something you can control, because Facebook considers such data its property.

Every service has a price, and the price for being on Facebook is your data.

However, you can remain on Facebook (and other social media platforms like it) without haemorrhaging data. If you want to stay in touch with those old school friends – despite the fact you will probably never see them again – here’s what you can do, step by step. The following instructions are tailored to Facebook settings on mobile.

Your location

The first place to start is with the device you are holding in your hand. Facebook requests access to your GPS location by default, and unless you were reading the fine print when you installed the application (if you are that one person, please tell me where you find the time), it will currently have access.

This means that whenever you open the app it knows where you are, and unless you have changed your location sharing setting from “Always” to “Never” or “Only while using”, it can track your location when you’re not using the app as well.

To keep your daily movements to yourself, go into Settings on Apple iPhone or Android, go to Location Services, and turn off or select “Never” for Facebook.

While you’re there, check for other social media apps with location access (like Twitter and Instagram) and consider changing them to “Never”.

Remember that pictures from your phone are GPS tagged too, so if you intend to share them on Facebook, revoke access to GPS for your camera as well.

Your content

The next thing to do is to control who can see what you post, who can see private information like your email address and phone number, and then apply these settings in retrospect to everything you’ve already posted.

Facebook has a “Privacy Shortcuts” tab under Settings, but we are going to start in Account Settings > Privacy.

You control who sees what you post, and who sees the people and pages you follow, by limiting the audience here.

Change “Who can see your future posts” and “Who can see the people and pages you follow” to “Only Friends”.

In the same menu, if you scroll down, you will see a setting called “Do you want search engines outside of Facebook to link to your profile?” Select No.

After you have made these changes, scroll down and limit the audience for past posts. Apply the new setting to all past posts, even though Facebook will try to alarm you. “The only way to undo this is to change the audience of each post one at a time! Oh my Goodness! You’ll need to change 1,700 posts over ten years.” Ignore your fears and click Limit.




Read more:
It’s time for third-party data brokers to emerge from the shadows


Next, go into Privacy Shortcuts – this is on the navigation bar below Settings. Then select Privacy Checkup. Limit who can see your personal information (date of birth, email address, phone number, place of birth if you provided it) to “Only Me”.

Third party apps

Every time you use Facebook to “login” to a service or application you are granting both Facebook and the third-party service access to your data.

Facebook has pledged to investigate and change this recently as a result of the Cambridge Analytica scandal, but in the meantime, it is best not to use Facebook to login to third party services. That includes Bingo Bash unfortunately.

The third screen of Privacy Checkup shows you which apps have access to your data at present. Delete any that you don’t recognise or that are unnecessary.

In the final step we will be turning off “Facebook integration” altogether. This is optional. If you choose to do this, it will revoke permission for all previous apps, plugins, and websites that have access to your data. It will also prevent your friends from harvesting your data for their apps.

In this case you don’t need to delete individual apps as they will all disappear.

Turning off Facebook integration

If you want to be as secure as it is possible to be on Facebook, you can revoke third-party access to your content completely. This means turning off all apps, plugins and websites.

If you take this step Facebook won’t be able to receive information about your use of apps outside of Facebook and apps won’t be able to receive your Facebook data.

If you’re a business this is not a good idea as you will need it to advertise and to test apps. This is for personal pages.

It may make life a little more difficult for you in that your next purchase from Farfetch will require you to set up your own account rather than just harvest your profile. Your Klout score may drop because it can’t see Facebook and that might feel terrible.

Remember this setting only applies to the data you post and provide yourself. The signals you generate using Facebook (what you like, click on, read) will still belong to Facebook and will be used to tailor advertising.

To turn off Facebook integration, go into Settings, then Apps. Select Apps, websites and games.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


Facebook will warn you about all the Farmville updates you will miss and how you will have a hard time logging in to The Guardian without Facebook. Ignore this and select “Turn off”.

Well done. Your data is now as secure as it is possible to be on Facebook. Remember, though, that everything you do on the platform still generates data.

Belinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.

It’s time for third-party data brokers to emerge from the shadows



Personal data has been dubbed the “new oil”, and data brokers are very efficient miners.
Emanuele Toscano/Flickr, CC BY-NC-ND

Sacha Molitorisz, University of Technology Sydney

Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.

Graham Mudd, Facebook’s product marketing director, said in a statement:

We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.

Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.

The invisible industry worth billions

In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.

The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.




Read more:
Facebook data harvesting: what you need to know


These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.

Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.
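As a toy illustration of how raw purchase signals might be rolled up into such segments: the category names below echo those cited by the FTC, but the trigger purchases and rules are invented for this sketch, not any broker’s real methodology.

```python
# Hypothetical segmentation rules -- categories echo those named in the
# FTC report, but the trigger purchases are invented for illustration.
SEGMENT_RULES = {
    "expectant parent": {"prenatal vitamins", "maternity wear"},
    "dog owner": {"dog food", "flea treatment"},
    "winter activity enthusiast": {"ski pass", "thermal gloves"},
}

def segments_for(purchases):
    """Return every segment whose trigger purchases overlap the history."""
    seen = set(purchases)
    return sorted(
        segment for segment, triggers in SEGMENT_RULES.items()
        if triggers & seen
    )

print(segments_for(["dog food", "ski pass", "milk"]))
# prints: ['dog owner', 'winter activity enthusiast']
```

Real brokers infer segments from thousands of signals, including data bought from other brokers, which is why even an innocuous purchase history can place someone in a sensitive category.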

In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.

A growing concern

Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.

The recent Cambridge Analytica furore stemmed from third parties. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.




Read more:
Your online privacy depends as much on your friends’ data habits as your own


Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.

As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.

With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.

More transparency and more respect for users

To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.

Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as morally justifying.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.

In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.

Time to regulate

Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.

Historically, Facebook has made a point of dedicating itself to openness, but Facebook itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.

The ConversationFacebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.