Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you’re a voter, a director, a doctor, a gun owner, on welfare, pay tax, have a driver’s licence or Medicare card – our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows the federal, state, territory and local governments to provide pensions, elections, parks, courts and hospitals, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. The legislation was previously called the “Data Sharing and Release” Bill; the word “transparency” has replaced “release”, apparently to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Office of the Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Robert, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics’ concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don’t have a choice.

However, the bill does exclude some data sharing. If you’re a criminologist researching law enforcement, for example, you won’t have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches. Yet, as RoboDebt revealed, they often resort to denial. Universities are also often wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of de-identified personal information, and much of the proposed sharing will rely on data matching.
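To see why this matters, here is a minimal sketch in Python of a linkage attack. Everything in it is hypothetical – the datasets, field names and values are invented, and real attacks draw on far richer auxiliary data – but it shows how a few quasi-identifiers can join a “de-identified” record back to a name.

```python
# A hypothetical sketch of re-identification by data matching.
# The datasets, field names and values are invented for illustration only.

# A "de-identified" extract: direct identifiers removed, but
# quasi-identifiers kept for research utility.
health_extract = [
    {"postcode": "2601", "birth_year": 1983, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2617", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]

# A separate, identified dataset (for example a public register or scraped profiles).
public_register = [
    {"name": "Jane Citizen", "postcode": "2601", "birth_year": 1983, "sex": "F"},
    {"name": "John Example", "postcode": "2617", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def link(deidentified, identified, keys=QUASI_IDENTIFIERS):
    """Join two datasets on shared quasi-identifiers."""
    index = {tuple(r[k] for k in keys): r for r in identified}
    matches = []
    for record in deidentified:
        hit = index.get(tuple(record[k] for k in keys))
        if hit:  # a unique combination of quasi-identifiers points to one person
            matches.append({**record, "name": hit["name"]})
    return matches

for match in link(health_extract, public_register):
    print(match["name"], "->", match["diagnosis"])
```

Robust de-identification has to reason about which combinations of attributes are unique in the population, not merely strip names.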

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy




Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Keep calm, but don’t just carry on: how to deal with China’s mass surveillance of thousands of Australians



Shutterstock

Bruce Baer Arnold, University of Canberra

National security is like sausage-making. We might enjoy the tasty product, but want to look away from the manufacturing.

Recent news that Chinese company Zhenhua Data is profiling more than 35,000 Australians isn’t a surprise to people with an interest in privacy, security and social networks. We need to think critically about this, knowing we can do something to prevent it from happening again.

Reports indicate Zhenhua provides services to the Chinese government. It may also provide services to businesses in China and overseas.

The company operates under Chinese law and doesn’t appear to have a presence in Australia. That means we can’t shut it down or penalise it for a breach of our law. Also, Beijing is unlikely to respond to expressions of outrage from Australia or condemnation by our government – especially amid recent sabre-rattling.




Read more:
Journalists have become diplomatic pawns in China’s relations with the West, setting a worrying precedent


Zhenhua is reported to have data on more than 35,000 Australians – a list dominated by political leaders and prominent figures. Names, birthdays, addresses, marital status, photographs, political associations, relatives and social media account details are among the information extracted.

It seems Zhenhua has data on a wide range of Australians, including public figures such as Victorian supreme court judge Anthony Cavanough, Australia’s former ambassador to China Geoff Raby, former NSW premier and federal foreign affairs minister Bob Carr, tech billionaire Mike Cannon-Brookes and singer Natalie Imbruglia.

It’s not clear how individuals are being targeted. The profiling might be systematic. It might instead be conducted on the basis of a specific industry, academic discipline, public prominence or perceived political influence.

It’s unlikely Zhenhua profiles random members of the public. That means there’s no reason for average citizens without a China connection to be worried.

Still, details of the intelligence gathering elude us, so the best practice for the public is to maintain as much online privacy as possible.

Overall, we don’t know much about Zhenhua’s goals. And what we do know came from a leak to a US academic who sensibly fled China in 2018, fearing for his safety.

Pervasive surveillance is the norm

Pervasive surveillance is now a standard feature of all major governments, which often rely on surveillance-for-profit companies. Governments in the West buy services from big data analytic companies such as Palantir.

Australia’s government gathers information outside our borders, too. Take the bugging of the Timor-Leste government, a supposed friend rather than enemy.

How sophisticated is the plot?

Revelations about Zhenhua have referred to the use of artificial intelligence and the “mosaic” method of intelligence gathering. But this is probably less exciting than it sounds.

Reports indicate much of the data was extracted from online open sources. Access to much of this would have simply involved using algorithms to aggregate targets’ names, dates, qualifications and work history data found on publicly available sites.

The algorithms then help put the individual pieces of the “mosaic” together and fill in the holes on the basis of each individual’s relationships with others, such as their peers, colleagues or partners.
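As a rough, hypothetical illustration of that assembly step, the Python sketch below merges fragments about one person from different open sources and fills a gap using an associate’s profile. The names, sources and fields are all invented; real tooling is far more elaborate.

```python
# A hypothetical sketch of "mosaic" profile assembly from open-source fragments.
# All names, sources and attributes are invented for illustration.

from collections import defaultdict

fragments = [
    {"person": "A. Example", "source": "university page", "employer": "Example Uni"},
    {"person": "A. Example", "source": "conference bio", "field": "defence policy"},
    {"person": "A. Example", "source": "social media", "associates": ["B. Sample"]},
    {"person": "B. Sample", "source": "news article", "city": "Canberra"},
]

# Merge fragments into one profile per person.
profiles = defaultdict(dict)
for frag in fragments:
    profiles[frag["person"]].update(
        {k: v for k, v in frag.items() if k not in ("person", "source")}
    )

# Fill a missing attribute (here, city) by borrowing it from a known associate.
for profile in profiles.values():
    if "city" not in profile:
        for associate in profile.get("associates", []):
            guess = profiles.get(associate, {}).get("city")
            if guess:
                profile["city"] = f"{guess} (inferred from associate)"
                break

print(dict(profiles["A. Example"]))
```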

Some of the data for the mosaic may come from hacking or be gathered directly by the profiler. According to the ABC, some data that landed in Zhenhua’s lap was taken from the dark web.

One seller might have spent years copying data from university networks. For example, last year the Australian National University acknowledged major personal data breaches had taken place, potentially extending back 19 years.

This year there was also the unauthorised (and avoidable) access by cybercriminals to NSW government data on 200,000 people.

While it may be confronting to know a foreign state is compiling information on Australian citizens, it should be comforting to learn sharing this information can be avoided – if you’re careful.

What’s going on in the black box?

One big question is what Zhenhua’s customers in China’s political and business spheres might do with the data they’ve compiled on Australian citizens. Frankly, we don’t know. National security is often a black box and we are unlikely ever to get verifiable details.

Apart from distaste at being profiled, we might say being watched is no big deal, especially given many of those on the list are already public figures. Simply having an AI-assisted “Who’s Who” of prominent Australians isn’t necessarily frightening.

However, it is of concern if the information collected is used for disinformation – for instance, to erode trust in political processes or to subvert elections.

For instance, a report published in June by the Australian Strategic Policy Institute detailed how Chinese-speaking people in Australia were being targeted by a “persistent, large-scale influence campaign linked to Chinese state actors”.

In June, Prime Minister Scott Morrison announced China was supposedly behind a major state-based attack against several of Australia’s sectors, including all levels of government.
Shutterstock

Deep fake videos are another form of subversion of increasing concern to governments and academics, particularly in the US.




Read more:
Deepfake videos could destroy trust in society – here’s how to restore it


Can we fix this?

We can’t make Zhenhua and its competitors disappear. Governments think they are too useful.

Making everything visible to state surveillance is now the ambition of many law enforcement bodies and all intelligence agencies. It’s akin to Google and its competitors wanting to know (and sell) everything about us, without regard for privacy as a human right.

We can, however, build resilience.

One way is to require government agencies and businesses to safeguard their databases. That hasn’t been the case with the NSW government, the Commonwealth government, Facebook, dating services and major hospitals.

In Australia, we need to adopt recommendations by law reform inquiries and establish a national right to privacy. The associated privacy tort would give data custodians an incentive to protect the information they hold, and would also encourage the public to avoid oversharing online.

In doing so, we might be better placed to condemn both China and other nations participating in unethical intelligence gathering, while properly acknowledging our own wrongdoings in Timor-Leste.

Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

US coronavirus data will now go straight to the White House. Here’s what this means for the world


Erin Smith, Edith Cowan University

Led by physicians, scientists and epidemiologists, the US Centers for Disease Control and Prevention (CDC) is one of the most reliable sources of knowledge during disease outbreaks. But now, with the world in desperate need of authoritative information, one of the foremost agencies for fighting infectious disease has gone conspicuously silent.

For the first time since 1946, when the CDC came to life in a cramped Atlanta office to fight malaria, the agency is not at the front line of a public health emergency.




Read more:
Americans still trust doctors and scientists during a public health crisis


On April 22, CDC director Robert Redfield stood at the White House briefing room lectern and conceded that the coronavirus pandemic had “overwhelmed” the United States. Following Redfield at the podium, President Donald Trump said the CDC director had been “totally misquoted” in his warning that COVID-19 would continue to pose serious difficulties as the US moved into its winter ‘flu season in late 2020.

Invited to clarify, Redfield confirmed he had been quoted correctly in giving his opinion that there were potentially “difficult and complicated” times ahead.

Trump tried a different tack. “You may not even have corona coming back,” the president said, once again contradicting the career virologist. “Just so you understand.”

CDC director Robert Redfield and President Donald Trump offer contrasting interpretations at an April 22 White House briefing.

The exchange was interpreted by some pundits as confirmation that the CDC’s venerated expertise had been sidelined as the coronavirus continued to ravage the US.

In the latest development, the New York Times reported this week the CDC has even been bypassed in its data collection, with the Trump administration ordering hospitals to send COVID-19 data directly to the White House.

Diminished role

When facing previous public health emergencies the CDC was a hive of activity, holding regular press briefings and developing guidance that was followed by governments around the world. But during the greatest public health emergency in a century, it appears the CDC has been almost entirely erased by the White House as the public face of the COVID-19 pandemic response.

This diminished role is obvious to former leaders of the CDC, who say their scientific advice has never before been politicised to this extent.

As the COVID-19 crisis was unfolding, several CDC officials issued warnings, only to promptly disappear from public view. Nancy Messonnier, director of the CDC’s National Center for Immunization and Respiratory Diseases, predicted on February 25 that the virus was not contained and would grow into a pandemic.

The stock market plunged and Messonnier was removed from future White House press briefings. Between March 9 and June 12 there was no CDC presence at White House press briefings on COVID-19.

The CDC has erred during the pandemic, most significantly in its initial efforts to develop a test for COVID-19. The testing kits proved to be faulty – a problem compounded by sluggish efforts to rectify the situation, and then by severe delays in distributing enough tests to the public.

But many public health specialists are nevertheless baffled by the CDC’s low profile as the pandemic continues to sweep the globe.

“They have been sidelined,” said Howard Koh, former US assistant secretary for health. “We need their scientific leadership right now.”

What does it mean for the world?

The CDC being bypassed in the collection of COVID-19 data is another body blow to the agency’s standing.

Hospitals have instead been ordered to send all COVID-19 patient information to a central database in Washington DC.

This will have a range of likely knock-on effects. For starters, the new database will not be available to the public, prompting inevitable questions over the accuracy and transparency of data which will now be interpreted and shared by the White House.

The Department of Health and Human Services, which issued the new order, says the change will help the White House’s coronavirus task force allocate resources. But epidemiologists and public health experts around the world fear the new system will make it harder for people outside the White House to track the pandemic or access information.




Read more:
Even during the coronavirus pandemic, the role of public health workers is unrecognized


This affects all nations, because one of the CDC’s roles is to provide sound, independent public health guidance on issues such as infectious diseases, healthy living, travel health, emergency and disaster preparedness, and drug efficacy. Other jurisdictions can then adapt this information to their local context — expertise that has become even more essential during a pandemic, when uncertainty is the norm.

It is difficult to recall a previous public health emergency when political pressure led to a change in the interpretation of scientific evidence.

What happens next?

Despite the inevitable challenges that come with tackling a pandemic in real time, the CDC remains the best-positioned agency – not just in the US but the entire world – to help us manage this crisis as safely as possible.

In the absence of US leadership, nations should start thinking about developing their own national centres for disease control. In Australia’s case, these discussions have been ongoing since the 1990s, stymied by cost and lack of political will.

COVID-19, and the current sidelining of the CDC, may be the impetus needed to finally dust off those plans and make them a reality.

Erin Smith, Associate Professor in Disaster and Emergency Response, School of Medical and Health Sciences, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australians want to support government use and sharing of data, but don’t trust their data will be safe



A new survey reveals community attitudes towards the use of personal data by government and researchers.
Shutterstock

Nicholas Biddle, Australian National University and Matthew Gray, Australian National University

Never has more data been held about us by government or companies that we interact with. Never has this data been so useful for analytical purposes.

But with such opportunities come risks and challenges. If personal data is going to be used for research and policy purposes, we need effective data governance arrangements in place, and community support (social licence) for this data to be used.

The ANU Centre for Social Research and Methods has recently undertaken a survey of a representative sample of Australians to learn their views about how personal data is used, stored and shared.

While Australians report a high level of support for the government to use and share data, there is less confidence that the government has the right safeguards in place or can be trusted with people’s data.




Read more:
Soft terms like ‘open’ and ‘sharing’ don’t tell the true story of your data


What government should do with data

In the ANUPoll survey of more than 2,000 Australian adults (available for download at the Australian Data Archive) we asked:

On the whole, do you think the Commonwealth Government should or should not be able to do the following?

Six potential data uses were given.

Do you think the Commonwealth Government should or should not be able to … ?
ANU Centre for Social Research and Methods Working Paper

Overall, Australians are supportive of the Australian government using data for purposes such as allocating resources to those who need it the most, and ensuring people are not claiming benefits to which they are not entitled.

They were slightly less supportive about providing data to researchers, though most still agreed or strongly agreed that it was worthwhile.

Perceptions of government data use

Community attitudes to the use of data by government are tied to perceptions about whether the government can keep personal data secure, and whether it’s behaving in a transparent and trustworthy manner.

To measure views of the Australian population on these issues, respondents were told:

Following are a number of statements about the Australian government and the data it holds about Australian residents.

They were then asked to what extent they agreed or disagreed that the Australian government:

  • could respond quickly and effectively to a data breach
  • has the ability to prevent data being hacked or leaked
  • can be trusted to use data responsibly
  • is open and honest about how data are collected, used and shared.

Respondents did not express strong support for the view that the Australian government is able to protect people’s data, or is using data in an appropriate way.

To what extent do you agree or disagree that the Australian Government … ?
ANU Centre for Social Research and Methods Working Paper



Read more:
What are tech companies doing about ethical use of data? Not much


We also asked respondents to:

[think] about the data about you that the Australian Government might currently hold, such as your income tax data, social security records, or use of health services.

We then asked for their level of concern about five specific forms of data breaches or misuse of their own personal data.

We found that there are considerable concerns about different forms of data breaches or misuse.

More than 70% of respondents were concerned or very concerned about the accidental release of personal information, deliberate hacking of government systems, and data being provided to consultants or private sector organisations who may misuse the data.

Level of concern about specific forms of data breaches or misuse of a person’s own data …
ANU Centre for Social Research and Methods Working Paper

More than 60% were concerned or very concerned about their data being used by the Australian government to make unfair decisions. And more than half were concerned or very concerned about their data being provided to academic researchers who may misuse their information.




Read more:
Facebook’s data lockdown is a disaster for academic researchers


Trust in government to manage data

The data environment in Australia is changing rapidly. More digital information about us is being created, captured, stored and shared than ever before, and there is a greater capacity to link information across multiple sources of data, and across multiple time periods.

While this creates opportunities, it also creates the risk that the data will be used in a way that is not in our best interests.

There is policy debate at the moment about how data should be used and shared. If we don’t make use of the data available, that has costs in terms of worse service delivery and less effective government. So, locking data up is not a cost-free option.

But sharing data or making data available in a way that breaches people’s privacy can be harmful to individuals, and may generate a significant (and legitimate) public backlash. This would reduce the chance of data being made available in any form, and mean that the potential benefits of improving the wellbeing of Australians are lost.

If government, researchers and private companies want to be able to make use of the richness of the new data age, there is an urgent and continuing need to build up trust across the population, and to put policies in place that reassure consumers and users of government services.

Nicholas Biddle, Associate Professor, ANU College of Arts and Social Sciences, Australian National University and Matthew Gray, Director, ANU Centre for Social Research and Methods, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We’ve been hacked – so will the data be weaponised to influence election 2019? Here’s what to look for


Michael Jensen, University of Canberra

Prime Minister Scott Morrison recently said both the Australian Parliament and its major political parties were hacked by a “sophisticated state actor.”

This raises concerns that a foreign adversary may intend to weaponise the stolen material – strategically releasing documents with an eye towards altering the 2019 election outcome.




Read more:
A state actor has targeted Australian political parties – but that shouldn’t surprise us


While the hacking of party and parliamentary systems is normally a covert activity, influence operations are necessarily noisy and public in order to reach citizens – even if efforts are made to obscure their origins.

If a state actor has designs to weaponise materials recently hacked, we will likely see them seek to inflame religious and ethnic differences, as well as embarrass the major parties in an effort to drive votes to minor parties.

If this comes to pass, there are four things Australians should look for.

1. Strategic interest for a foreign government to intervene

If the major parties have roughly the same policy position in relation to a foreign country, a foreign state would have little incentive to intervene, for example, in favour of Labor against the Coalition.

They may, however, attempt to amplify social divisions between the parties as a way of reducing the ability of Australians to work together after the election.

They may also try to drive down the already low levels of support for democracy and politicians in Australia to further undermine Australian democracy.

Finally, they may also try to drive the vote away from the major parties to minor parties which might be more favourable to their agenda.

This could be achieved by strategically releasing hacked materials which embarrass the major parties or their candidates, moving voters away from those parties and towards minor parties. These stories will likely be distributed first on social media platforms and later amplified by foreign and domestic broadcast media.

It is no secret that Russia and China seek a weakening of the Five Eyes security relationship between Australia, New Zealand, Canada, the United States, and the United Kingdom. If weakened, that would undermine the alliance structure which has helped prevent major wars for the last 70 years.

2. Disproportionate attention by foreign media to a local campaign

In the US, although Tulsi Gabbard’s polling numbers rank her near the bottom of declared and anticipated candidates for the Democratic nomination, she has received significant attention from Russia’s overt or “white” propaganda outlets, Sputnik and RT (formerly Russia Today).

The suspected reason for this attention is that some of her foreign policy positions on the Middle East are consistent with Russian interests in the region.

In Australia, we might find greater attention than normal directed at One Nation or Fraser Anning – as well as the strategic promotion of Green candidates in certain places to push political discussion further right and further left at the same time.

3. Promoted posts on Facebook and other social media platforms

Research into the 2016 US election found widespread violations of election law. The vast majority of promoted ads on Facebook during the election campaign were from groups which failed to file with the Federal Election Commission, and some of this unregistered content came from Russia.

Ads placed by Russia’s Internet Research Agency, which is under indictment by the Mueller investigation, ended up disproportionately in the newsfeeds of Facebook users in Wisconsin and Pennsylvania – two of the three states that looked like a lock for Clinton until the very end of the campaign.

What makes Facebook and many other social media platforms particularly of concern is the ability to use data to target ads using geographic and interest categories. One can imagine that if a foreign government were armed with voting data hacked from the parties, this process would be all the more effective.




Read more:
New guidelines for responding to cyber attacks don’t go far enough


Australian seats that might be targeted include Swan (considered a marginal seat, with competition against the Liberals on both the left and the right) and the seats of conservative politicians on GetUp’s “hitlist” – such as Tony Abbott’s and Peter Dutton’s seats of Warringah and Dickson.

4. Focus on identity manipulation, rather than fake news

The term “fake news” suffers from conceptual ambiguities – it means different things to different people. “Fake news” has been used not just as a form of classification to describe material which “mimics news media content in form but not in organisational process or intent” but also used to describe satire and even as an epithet used to dismiss disagreeable claims of a factual nature.

Studies of propaganda show that information need not be factually false to effectively manipulate target audiences.

The best propaganda uses claims which are factually true, either placing them in a different context to manipulate audiences, or amplifying negative aspects of a group, policy or politician without the wider context.

For example, to amplify concerns about immigrants, one might highlight the immigrant background of someone convicted of a crime, irrespective of the overall propensity for immigrants to commit crimes compared to native born Australians.

This creates what communication scholars call a “representative anecdote” through which people come to understand and think about a topic with which they are otherwise unfamiliar. While immigrants may or may not be more likely to commit crimes than other Australians, the reporting creates that association.

Among the ways foreign influence operations function is through the politicisation of identities. Previous research has found evidence of efforts to heighten ethnic and racial differences through Chinese language WeChat official accounts operating in Australia as well as through Russian trolling efforts which have targeted Australia. This is the same pattern followed by Russia during the 2016 US election.

Liberal democracies are designed to handle conflicts over interests through negotiation and compromise. Identities, however, are less amenable to compromise. These efforts may not be “fake news” but they are effective in undermining the capacity of a democratic nation to mobilise its people in pursuit of common goals.




Read more:
How digital media blur the border between Australia and China


The Russian playbook

No country is immune from the risk of foreign influence operations. While historically these operations might have involved the creation of false documents and on the ground operations in target countries, today materials can be sourced, faked, and disseminated from the relative security of the perpetrating country. They may include both authentic and faked documents – making it hard for a campaign to charge that certain documents are faked without affirming the validity of others.

Most importantly, in a digitally connected world, these operations can scale up quickly and reach substantially larger populations than previously possible.

While the Russian interference in the 2016 US election has received considerable attention, Russia is not the only perpetrator and the US is not the only target.

But the Russians created a playbook which other countries can readily draw upon and adapt. The question remains as to who that might be in an Australian context.The Conversation

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five projects that are harnessing big data for good



Often the value of data science lies in the work of joining the dots.
Shutterstock

Arezou Soltani Panah, Swinburne University of Technology and Anthony McCosker, Swinburne University of Technology

Data science has boomed over the past decade, following advances in mathematics, computing capability, and data storage. Australia’s Industry 4.0 taskforce is busy exploring ways to improve the Australian economy with tools such as artificial intelligence, machine learning and big data analytics.

But while data science offers the potential to solve complex problems and drive innovation, it has often come under fire for unethical use of data or unintended negative consequences – particularly in commercial cases where people become data points in annual company reports.

We argue that the data science boom shouldn’t be limited to business insights and profit margins. When used ethically, big data can help solve some of society’s most difficult social and environmental problems.

Industry 4.0 should be underwritten by values that ensure these technologies are trained towards the social good (known as Society 4.0). That means using data ethically, involving citizens in the process, and building social values into the design.

Here are five data science projects that are putting these principles into practice.




Read more:
The future of data science looks spectacular


1. Finding humanitarian hot spots

Social and environmental problems are rarely easy to solve. Take the hardship and distress in rural areas due to the long-term struggle with drought. Australia’s size and the sheer number of people and communities involved make it difficult to pair those in need with support and resources.

Our team joined forces with the Australian Red Cross to figure out where the humanitarian hot spots are in Victoria. We used social media data to map everyday humanitarian activity to specific locations and found that the hot spots of volunteering and charity activity are located in and around Melbourne CBD and the eastern suburbs. These kinds of insights can help local aid organisations channel volunteering activity in times of acute need.

Distribution of humanitarian actions across inner Melbourne and local government areas. Blue dots and red dots represent scraped Instagram posts around the hashtags #volunteer and #charity.
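For readers curious about the mechanics, here is a minimal, hypothetical Python sketch of the hot-spot counting. It assumes the geotagged, hashtagged posts have already been collected (the project’s actual scraping and geocoding pipeline is not reproduced), and the areas and counts are invented.

```python
# A hypothetical sketch of counting humanitarian-tagged posts per area.
# Assumes posts were already collected and geocoded; all values are invented.

from collections import Counter

posts = [
    {"hashtag": "#volunteer", "area": "Melbourne CBD"},
    {"hashtag": "#charity", "area": "Melbourne CBD"},
    {"hashtag": "#volunteer", "area": "Box Hill"},
    {"hashtag": "#charity", "area": "Melbourne CBD"},
]

HUMANITARIAN_TAGS = {"#volunteer", "#charity"}

# Tally posts per local government area to reveal hot spots.
hotspots = Counter(p["area"] for p in posts if p["hashtag"] in HUMANITARIAN_TAGS)

for area, count in hotspots.most_common():
    print(area, count)
```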

2. Improving fire safety in homes

Accessing data – the right data, in the right form – is a constant challenge for data science. We know that house fires are a serious threat, and that fire and smoke alarms save lives. Targeting houses without fire alarms can help mitigate that risk. But there is no single reliable source of information to draw on.

In the United States, Enigma Labs built open data tools to model and map risk at the level of individual neighbourhoods. To do this effectively, their model combines national census data with a geocoder tool (TIGER), as well as analytics based on local fire incident data, to provide a risk score.

Fire fatality risk scores calculated at the level of Census block groups.
Enigma Labs
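As a loose illustration of how such a model blends datasets into a per-area score, here is a hypothetical Python sketch. The weights, field names and figures are invented; it does not reproduce Enigma Labs’ actual model or the TIGER geocoding step.

```python
# A hypothetical sketch of a per-area fire-risk score that blends census
# indicators with incident history. Weights and figures are invented.

census = {
    # share of older housing stock and of low-income households, by block group
    "BG-001": {"old_housing": 0.60, "low_income": 0.40},
    "BG-002": {"old_housing": 0.20, "low_income": 0.10},
}

incidents = {"BG-001": 12, "BG-002": 2}  # recent fire incidents per block group

def risk_score(block_group):
    """Blend census indicators and incident history into a 0-1 score."""
    c = census[block_group]
    incident_rate = min(incidents.get(block_group, 0) / 20, 1.0)  # cap at 1.0
    return round(0.4 * c["old_housing"] + 0.3 * c["low_income"] + 0.3 * incident_rate, 2)

for bg in census:
    print(bg, risk_score(bg))
```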

3. Mapping police violence in the US

Ordinary citizens can be involved in generating social data. There are many crowdsourced, open mapping projects, but often the value of data science lies in the work of joining the dots.

The Mapping Police Violence project in the US monitors, makes sense of, and visualises police violence. It draws on three crowdsourced databases, but also fills in the gaps using a mix of social media, obituaries, criminal records databases, police reports and other sources of information. By drawing all this information together, the project quantifies the scale of the problem and makes it visible.

A visualisation of the frequency of police violence in the United States.
Mapping Police Violence



Read more:
Data responsibility: a new social good for the information age


4. Optimising waste management

The Internet of Things is made up of a host of connected devices that collect data. When embedded in the ordinary objects all around us, and combined with cloud-based analysis and computing, these objects become smart – and can help solve problems or inefficiencies in the built environment.

If you live in Melbourne, you might have noticed BigBelly bins around the CBD. These smart bins have solar-powered trash compactors that regularly compress the garbage inside throughout the day. This eliminates waste overflow, cuts waste collection by around 80%, and reduces unnecessary carbon emissions.

Real-time data analysis and reporting is provided by a cloud-based data management portal, known as CLEAN. The tool identifies trends in waste overflow, which helps with bin placement and planning of collection services.

BigBelly bins are being used in Melbourne’s CBD.
Kevin Zolkiewicz/Flickr, CC BY-NC

5. Identifying hotbeds of street harassment

A group of four women – and many volunteer supporters – in Egypt developed HarassMap to engage with, and inform, the community in an effort to reduce sexual harassment. The platform they built uses anonymised, crowdsourced data to map harassment incidents that occur in the street in order to alert its users of potentially unsafe areas.

The challenge for the group was to provide a means for generating data for a problem that was itself widely dismissed. Mapping and informing are essential data science techniques for addressing social problems.

Mapping of sexual harassment reported in Egypt.
HarassMap



Read more:
Cambridge Analytica’s closure is a pyrrhic victory for data privacy


Building a better society

Turning the efforts of data science to social good isn’t easy. Those with the expertise have to be attuned to the social impact of data analytics. Meanwhile, access to data, or linking data across sources, is a major challenge – particularly as data privacy becomes an increasing concern.

While the mathematics and algorithms that drive data science appear objective, human factors often combine to embed biases, which can result in inaccurate modelling. Gaps in digital and data literacy, along with a lack of transparency in methodology, also fuel mistrust in big data and analytics.

Nonetheless, when put to work for social good, data science can provide new sources of evidence to assist government and funding bodies with policy, budgeting and future planning. This can ultimately result in a better connected and more caring society.

Arezou Soltani Panah, Postdoc Research Fellow (Social Data Scientist), Swinburne University of Technology and Anthony McCosker, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

If privacy is increasing for My Health Record data, it should apply to all medical records



Everyone was up in arms about a lack of privacy with My Health Records, but the privacy is the same for other types of patient data.
from http://www.shutterstock.com

Megan Prictor, University of Melbourne; Bronwyn Hemsley, University of Technology Sydney; Mark Taylor, University of Melbourne, and Shaun McCarthy, University of Newcastle

In response to the public outcry against the potential for My Health Record data to be shared with police and other government agencies, Health Minister Greg Hunt recently announced moves to change the legislation.

The laws underpinning the My Health Record as well as records kept by GPs and private hospitals currently allow those records to be shared with the police, Centrelink, the Tax Office and other government departments if it’s “reasonably necessary” for a criminal investigation or to protect tax revenue.

If passed, the policy of the Digital Health Agency (which runs the My Health Record) not to release information without a court order will become law. This would mean the My Health Record has greater privacy protections in this respect than other medical records, which doesn’t make much sense.




Read more:
Opting out of My Health Records? Here’s what you get with the status quo


Changing the law to increase privacy

Under the proposed new bill, state and federal government departments and agencies would have to apply for a court order to obtain information stored in the My Health Record.

The court would need to be satisfied that sharing the information is “reasonably necessary”, and that there is no other effective way for the person requesting it to access the information. The court would also need to weigh up whether the disclosure would “unreasonably interfere” with the person’s privacy.

If granted, a court order to release the information would require the Digital Health Agency to provide information from a person’s My Health Record without the person’s consent, and even if they objected.

If a warrant is issued for a person’s health records, the police can sift through them as they look for relevant information. They could uncover personally sensitive material that is not relevant to the current proceedings. Since the My Health Record allows the collection of information across health providers, there could be an increased risk of non-relevant information being disclosed.




Read more:
Using My Health Record data for research could save lives, but we must ensure it’s ethical


But what about our other medical records?

Although we share all sorts of personal information online, we like to think of our medical records as sacrosanct. But the law underpinning My Health Record came from the wording of the Commonwealth Privacy Act 1988, which applies to all medical records held by GPs, specialists and private hospitals.

Under the Act, doctors don’t need to see a warrant before they’re allowed to share health information with enforcement agencies. The Privacy Act principles mean doctors only need a “reasonable belief” that sharing the information is “reasonably necessary” for the enforcement activity.

Although public hospital records do not fall under the Privacy Act, they are covered by state laws that have similar provisions. In Victoria, for instance, the Health Records Act 2001 permits disclosure if the record holder “reasonably believes” that the disclosure is “reasonably necessary” for a law enforcement function and it would not be a breach of confidence.

In practice, health care providers are trained on the utmost importance of protecting the patient’s privacy. Their systems of registration and accreditation mean they must follow a professional code of ethical conduct that includes observing confidentiality and privacy.

Although the law doesn’t require it, it is considered good practice for health professionals to insist on seeing a warrant before disclosing a patient’s health records.

In a 2014 case, the federal court considered whether a psychiatrist had breached the privacy of his patient. The psychiatrist had given some of his patient’s records to Queensland police in response to a warrant. The court said the existence of a warrant was evidence the doctor had acted appropriately.

In a 2015 case, it was decided a doctor had interfered with a patient’s privacy when disclosing the patient’s health information to police. In this case, there was no warrant and no formal criminal investigation.




Read more:
What could a My Health Record data breach look like?


Unfortunately, there are recent examples of medical records being shared with government departments in worrying ways. In Australia, it has been alleged the immigration department tried, for political reasons, to obtain access to the medical records of people held in immigration detention.

In the UK, thousands of patient records were shared with the Home Office to trace immigration offenders. As a result, it was feared some people would become too frightened to seek medical care for themselves or their children.

We can’t change the fact different laws at state and federal level apply to our paper and electronic medical records stored in different locations. But we can try to change these laws to be consistent in protecting our privacy.

If it’s so important to change the My Health Records Act to ensure our records can only be “unlocked” by a court order, the same should apply to the Privacy Act as well as state-based laws. Doing so might help to address public concerns about privacy and the My Health Record, and further inform decisions about opting out or staying in the system.

Megan Prictor, Research Fellow in Law, University of Melbourne; Bronwyn Hemsley, Professor of Speech Pathology, University of Technology Sydney; Mark Taylor, Associate professor, University of Melbourne, and Shaun McCarthy, Director, University of Newcastle Legal Centre, University of Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The devil is in the detail of government bill to enable access to communications data


Monique Mann, Queensland University of Technology

The Australian government has released a draft of its long awaited bill to provide law enforcement and security agencies with new powers to respond to the challenges posed by encryption.

According to the Department of Home Affairs, encryption already affects 90% of the Australian Security Intelligence Organisation’s (ASIO) priority cases, and 90% of data intercepted by the Australian Federal Police. The measures respond to estimates that communications among terrorists and organised crime groups will be entirely encrypted by 2020.

The Department of Home Affairs and ASIO can already access encrypted data with specialist decryption techniques – or at points where data are not encrypted. But this takes time. The new bill aims to speed up this process, but these broad and ill-defined new powers have significant scope for abuse.




Read more:
New data access bill shows we need to get serious about privacy with independent oversight of the law


The Department of Home Affairs argues this new framework will not compel communications providers to build systemic weaknesses or vulnerabilities into their systems. In other words, it is not a backdoor.

But it will require providers to offer up details about technical characteristics of their systems that could help agencies exploit weaknesses that have not been patched. The required assistance also extends to installing software, and to designing and building new systems.

Compelling assistance and access

The draft Assistance and Access Bill introduces three main reforms.

First, it increases the obligations of both domestic and offshore organisations to assist law enforcement and security agencies to access information. Second, it introduces new computer access warrants that enable law enforcement to covertly obtain evidence directly from a device (this occurs at the endpoints when information is not encrypted). Finally, it increases existing powers that law enforcement have to access data through search and seizure warrants.

The bill is modelled on the UK’s Investigatory Powers Act, which introduced mandatory decryption obligations. Under the UK Act, the UK government can order telecommunication providers to remove any form of electronic protection that is applied by, or on behalf of, an operator. Whether or not this is technically possible is another question.

Similar to the UK laws, the Australian bill puts the onus on telecommunication providers to give security agencies access to communications. That might mean providing access to information at points where it is not encrypted, but it’s not immediately clear what other requirements can or will be imposed.




Read more:
End-to-end encryption isn’t enough security for ‘real people’


For example, the bill allows the Director-General of Security or the chief officer of an interception agency to compel a provider to do an unlimited range of acts or things. That could mean anything from removing security measures to deleting messages or collecting extra data. Providers will also be required to conceal any action taken covertly by law enforcement.

Further, the Attorney-General may issue a “technical capability notice” directed towards ensuring that the provider is capable of giving certain types of help to ASIO or an interception agency.

This means providers will be required to develop new ways for law enforcement to collect information. As in the UK, it’s not clear whether a provider will be able to offer true end-to-end encryption and still be able to comply with the notices. Providers that breach the law risk facing $10 million fines.

Cause for concern

The bill puts few limits or constraints on the assistance that telecommunication providers may be ordered to offer. There are also concerns about transparency. The bill would make it an offence to disclose information about government agency activities without authorisation. Anyone leaking information about data collection by the government – as Edward Snowden did in the US – could go to jail for five years.

There are limited oversight and accountability structures and processes in place. The Director-General of Security, the chief officer of an interception agency and the Attorney-General can issue notices without judicial oversight. This differs from how it works in the UK, where a specific judicial oversight regime was established, in addition to the introduction of an Investigatory Powers Commissioner.

Notices can be issued to enforce domestic laws and assist the enforcement of the criminal laws of foreign countries. They can also be issued in the broader interests of national security, or to protect the public revenue. These are vague and unclear limits on these exceptional powers.




Read more:
Police want to read encrypted messages, but they already have significant power to access our data


The range of service providers covered is also extremely broad. It might include telecommunication companies, internet service providers, email providers, social media platforms and a range of other “over-the-top” services. It also covers those who develop, supply or update software, and those who manufacture, supply, install or maintain data processing devices.

The enforcement of criminal laws in other countries may mean international requests for data will be funnelled through Australia as the “weakest-link” of our Five Eyes allies. This is because Australia has no enforceable human rights protections at the federal level.

It’s not clear how the government would enforce these laws on transnational technology companies. For example, if Facebook was issued a fine under the laws, it could simply withdraw operations or refuse to pay. Also, $10 million is a drop in the ocean for companies such as Facebook whose total revenue last year exceeded US$40 billion.

Australia is a surveillance state

As I have argued elsewhere, the broad powers outlined in the bill are neither necessary nor proportionate. Police already have existing broad powers, which are further strengthened by this bill, such as their ability to covertly hack devices at the endpoints when information is not encrypted.

Australia has limited human rights and privacy protections. This has enabled a constant and steady expansion of the powers and capabilities of the surveillance state. If we want to protect the privacy of our communications we must demand it.

The Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 (Cth) is still in a draft stage, and the Department of Home Affairs invites public comment until 10 September 2018. Submit any comments to assistancebill.consultation@homeaffairs.gov.au.

Monique Mann, Vice Chancellor’s Research Fellow in Regulation of Technology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

New data access bill shows we need to get serious about privacy with independent oversight of the law




Greg Austin, UNSW

The federal government today announced its proposed legislation to give law enforcement agencies yet more avenues to reach into our private lives through access to our personal communications and data. This never-ending story of parliamentary bills defies logic, and is not offering the necessary oversight and protections.

The trend has been led by Prime Minister Malcolm Turnbull, with help from an ever-growing number of security ministers and senior officials. Could it be that the proliferation of government security roles is a self-perpetuating industry leading to ever more government powers for privacy encroachment?

That definitely appears to be the case.

Striking the right balance between data access and privacy is a tricky problem, but the government’s current approach is doing little to solve it. We need better oversight of law enforcement access to our data to ensure it complies with privacy principles and actually results in convictions. That might require setting up an independent judicial review mechanism to report outcomes on an annual basis.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Where is the accountability?

The succession of data access legislation in the Australian parliament is fast becoming a Mad Hatter’s tea party – a characterisation justified by the increasingly unproductive public conversations between the government on one hand, and legal specialists and rights advocates on the other.

If the government says it needs new laws to tackle “terrorism and paedophilia”, then the rule seems to be that the other side will be criticised for bringing up “privacy protection”. The federal opposition has surrendered any meaningful resistance to this parade of legislation.

Rights advocates have been backed into a corner by being forced to repeat their concerns over each new piece of legislation while neither they nor the government, nor our Privacy Commissioner, and all the other “commissioners”, are called to account on fundamental matters of principle.

Speaking of the commissioner class, Australia just got a new one last week: the Data Commissioner. Strangely, the impetus for this appointment came from the Productivity Commission.

The post has three purposes:

  1. to promote greater use of data,
  2. to drive economic benefits and innovation from greater use of data, and
  3. to build trust with the Australian community about the government’s use of data.

The problem with this logic is that purposes one and two can only be distinguished by the seemingly catch-all character of the first: that if data exists it must be used.

Leaving aside that minor point, the notion that the government needs to build trust with the Australian community on data policy speaks for itself.

Australian Privacy Principles fall short

There is near-universal agreement that the government is managing this issue badly, from the 2016 census failure to the “My Health Record” debacle. The growing commissioner class has not been much help.

Australia does have personal data protection principles, you may be surprised to learn. They are called the Australian Privacy Principles. You may be even more surprised to learn that the rights these principles offer exist only up to the point where any enforcement arm of government wants the data.




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


So it seems Australians must rely on the leadership of the Productivity Commission (a body concerned with economic policy) to guarantee our rights in cyberspace, at least when it comes to our personal data.

Better oversight is required

There is another approach to reconciling citizens’ interest in privacy protection with legitimate and important enforcement needs against terrorists and paedophiles: judicial review.

The government argues, unconvincingly according to police sources, that this process adequately protects citizens by requiring law enforcement to obtain court-ordered warrants to access information. The record in some other countries suggests otherwise, with judges almost always waving through any application from enforcement authorities, according to official US data.

There is a second level of judicial review open to the government. This is to set up an independent judicial review mechanism that is obliged to annually review all instances of government access to personal data under warrant, and to report on the virtues or shortcomings of that access against enforcement outcomes and privacy principles.

There are two essential features of this proposal. First, the reviewing officer would be a judge, not a public servant (the “commissioner class”). Second, the scope of the function would be review of the day-to-day operation of the intrusive laws, not just after-the-fact examination of notorious data breaches.

It would take a lengthy academic volume to make the case for judicial review of this kind. But it can be defended simply on economic grounds: such a review process would shine light on the efficiency of police investigations.

According to data released by the UK government, the overwhelming share of arrests for terrorist offences in the UK (many based on court-approved warrants for access to private data) does not result in convictions. There were 37 convictions out of 441 arrests for terrorism-related offences in the 12 months to March 2018.
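That works out to a conviction rate of roughly 8% (37 ÷ 441 ≈ 0.084), meaning more than nine in ten of those arrests did not end in a conviction.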




Read more:
Explainer: what is differential privacy and how can it protect your data?


The Turnbull government deserves credit for recognising the value of legal review. Its continuing commitment to posts such as the National Security Legislation Monitor – and the appointment of a high-profile barrister to that post – is evidence of that.

But somewhere along the way, the administration of data privacy is falling foul of a growing bureaucratic mess.

The only way to bring order to the chaos is through robust accountability, and the only people in our political system with the authority and legitimacy to provide it are probably judges who are independent of the government.

Greg Austin, Professor UNSW Canberra Cyber, UNSW

This article was originally published on The Conversation. Read the original article.

What could a My Health Record data breach look like?



Health information is an attractive target for offenders.
Tammy54/Shutterstock

Cassandra Cross, Queensland University of Technology

Last week marked the start of a three-month period in which Australians can opt out of the My Health Record scheme before an electronic health record is automatically created for them.

Some Australians have already opted out of the program, including Liberal MP Tim Wilson and former Queensland LNP premier Campbell Newman, who argue it should be an opt-in scheme.

But much of the concern about My Health Record centres on privacy. So what is driving these concerns, and what might a My Health Record data breach look like?

Data breaches

Data breaches exposing individuals’ private information are becoming increasingly common. Exposed data can include demographic details (name, address, birthdate), financial information (credit card details, PINs) and other details such as email addresses, usernames and passwords.

Health information is also an attractive target for offenders. They can use this to perpetrate a wide variety of offences, including identity fraud, identity theft, blackmail and extortion.




Read more:
Another day, another data breach – what to do when it happens to you


Last week hackers stole the health records of 1.5 million Singaporeans, including Prime Minister Lee Hsien Loong, who may have been targeted for sensitive medical information.

Meanwhile in Canada, hackers reportedly stole the medical histories of 80,000 patients from a care home and held them to ransom.

Australia is not immune. Last year Australians’ Medicare details were advertised for sale on the dark net by a vendor who had sold the records of at least 75 people.

Earlier this year, Family Planning NSW experienced a breach of its booking system, which exposed client data of those who had contacted the organisation within the past two and a half years.

Further, in the first report since the introduction of mandatory data breach reporting, the Privacy Commissioner revealed that of the 63 notifications received in the first quarter, 15 were from health service providers. This makes health the leading industry for reported breaches.

Human error

It’s important to note that not all data breaches are perpetrated from the outside or are malicious in nature. Human error and negligence also pose a threat to personal information.

The federal Department of Health, for instance, published a supposedly “de-identified” data set of Medicare Benefits Schedule and Pharmaceutical Benefits Scheme records covering 2.5 million Australians. This was done for research purposes.

But researchers were able to re-identify individuals in the data set using publicly available information. In the resulting investigation, the Privacy Commissioner concluded that the Privacy Act had been breached three times.
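Re-identification of this kind is conceptually simple: it works by joining a “de-identified” table to publicly known facts on shared quasi-identifiers such as birth year, postcode and sex. The sketch below is a hypothetical illustration only (the column names and records are invented, and it is not the researchers’ actual method), but it shows the basic linkage step.

```python
# Hypothetical sketch of a linkage attack: joining a "de-identified" data set
# to public information on shared quasi-identifiers. All fields and records
# here are invented for illustration.
import pandas as pd

# "De-identified" health records: names removed, quasi-identifiers retained.
deidentified = pd.DataFrame({
    "birth_year": [1957, 1983, 1983],
    "postcode":   ["2600", "4000", "2913"],
    "sex":        ["F", "M", "M"],
    "procedure":  ["cardiac stent", "knee reconstruction", "HIV test"],
})

# Publicly available information (e.g. electoral rolls, news reports, social media).
public_info = pd.DataFrame({
    "name":       ["Alice Example", "Bob Sample"],
    "birth_year": [1957, 1983],
    "postcode":   ["2600", "2913"],
    "sex":        ["F", "M"],
})

# Joining on the shared quasi-identifiers re-attaches names to "anonymous" records.
reidentified = deidentified.merge(public_info, on=["birth_year", "postcode", "sex"])
print(reidentified[["name", "procedure"]])
```

The actual re-identification of the Medicare data reportedly relied on richer linkage, such as known dates of medical events for public figures, but the underlying principle of combining individually innocuous attributes is the same.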

The latest Data Breach Investigations Report from US telecommunications company Verizon notes that health care is the only sector where the threat from inside is greater than the threat from outside, and human error is a large contributor to this.

There are promises of strong security surrounding My Health Record but, in reality, it’s a matter of when, not if, a data breach of some sort occurs.

Human error is one of the biggest threats.
Shutterstock

Privacy controls

My Health Record allows users to set the level of access they’re comfortable with across their record. This can target specific health-care providers or relate to specific documents.

But the onus rests heavily on the individual, and managing these settings requires a level of computer and health literacy that many Australians don’t have. The privacy control process is therefore likely to be overwhelming and ineffective for many people.




Read more:
My Health Record: the case for opting out


With the default option set to “general access”, any organisation involved in the person’s care can access the information.

Regardless of privacy controls, other agencies can also access information. Section 70 of the My Health Records Act 2012 states that details can be disclosed to law enforcement for a variety of reasons including:

(a) the prevention, detection, investigation, prosecution or punishment of criminal offences.

While no applications have been received to date, it is reasonable to expect this may occur in the future.

There are also concerns about sharing data with health insurance agencies and other third parties. While not currently authorised, there is intense interest from companies that can see the value in this health data.

Further, My Health Record data can be used for research, policy and planning. Individuals who don’t want their data used in this way must opt out separately through the privacy settings.

What should you do?

Health data is some of the most personal and sensitive information we have and includes details about illnesses, medications, tests, procedures and diagnoses. It may contain information about our HIV status, mental health profile, sexual activity and drug use.

These areas can attract significant stigma, so keeping this information private is paramount. Disclosure may affect not just a person’s health and well-being, but also their relationships, employment and other facets of their life.

Importantly, these details can’t be reset or reissued. Unlike passwords and credit card details, they are static. Once exposed, it’s impossible to “unsee” or “unknow” what has been compromised.

Everyone should make their own informed decision about whether to stay in My Health Record or opt out. Ultimately, it’s up to individuals to weigh the level of risk they’re comfortable with against the value of their own health information, and proceed on that basis.




Read more:
My Health Record: the case for opting in


Cassandra Cross, Senior Lecturer in Criminology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.