Why the government’s proposed facial recognition database is causing such alarm



Andrew Hastie said the broad objectives of the identity-matching system were sound, but key changes were needed to ensure privacy and transparency.
Lukas Coch/AAP

Sarah Moulds, University of South Australia

Since before the 2019 election, the Morrison government has been keen to introduce a new scheme that would allow government agencies, telcos and banks to use facial recognition technology to collect and share images of people across the country.

While there are some benefits to such a system – making it easier to identify the victims of natural disasters, for example – it has been heavily criticised by human rights groups as an attempt to introduce mass surveillance to Australia and an egregious breach of individual privacy.

The plan hit a roadblock when the government-controlled Parliamentary Joint Committee on Intelligence and Security (PJCIS) handed down an extensive report calling for significant changes to the legislation to ensure stronger privacy protections and other safeguards against misuse.




Read more:
Close up: the government’s facial recognition plan could reveal more than just your identity


What are the identity-matching laws?

The identity-matching bills aim to set up a national database of images captured through facial recognition technology, along with other information used to identify people, such as driver’s licence, passport and visa photos. This information could then be shared between government agencies and, in some cases, private organisations like telcos and banks, provided certain legal criteria are met.

The proposed database follows an agreement reached by the Commonwealth and the states and territories in 2017 to facilitate the “secure, automated and accountable” exchange of identity information to help combat identity crime and promote community safety.

Critical to this agreement was that the system include “robust privacy safeguards” to guard against misuse.

The agreement gave the federal government the green light to introduce laws to set up the identity-matching system.




Read more:
Why regulating facial recognition technology is so problematic – and necessary


Access to the service could potentially encompass a wide range of purposes. For example, a government agency could use the system to identify people thought to be involved in identity fraud or considered threats to national security.

But the bill also includes more pedestrian uses, such as in cases of “community safety” or “road safety”.

The proposed laws contain some safeguards against misuse, including criminal sanctions when an “entrusted person” discloses information for an unauthorised purpose. In addition, access by banks or other companies and local councils can only occur with the consent of the person seeking to have their identity verified.

However, much of the detail about precisely who can access the system and what limits apply is not set out in the bills. This will be determined through government regulation or subsequent intergovernmental agreements.

Concerns about scope and safeguards

The Coalition government’s bills were first introduced in 2018, but didn’t come up for a vote. After the government reintroduced the bills in July, the PJCIS launched an inquiry and invited public submissions.

Legal bodies have argued that amendments are needed to tighten the boundaries of who can access the identity-matching services and for what purposes. They note that as currently drafted, the proposed laws give too much discretionary power to government officials and actually create opportunities for identity theft.




Read more:
DNA facial prediction could make protecting your privacy more difficult


This is particularly problematic when coupled with the potential for the rapid spread of facial recognition technology in Australian streets, parks and transport hubs.

The Human Rights Law Centre said the proposed system is “more draconian” than the one launched in the UK. Another concern is that it could be used by a wide range of agencies to confirm the identity of any Australian with government-approved documentation (such as a passport or driver’s licence), regardless of whether they are suspected of a crime.

The Australian Human Rights Commission also pointed to research suggesting the software used to capture or match facial imagery could result in higher error rates for women and people from certain ethnic groups.

What’s next for the bills?

When handing down the committee’s unanimous report, Andrew Hastie said the broad objectives of the identity-matching system were sound, but key changes were needed to ensure privacy protections and transparency.

While the PJCIS cannot actually stop the bills from being passed, it has a strong track record of turning its recommendations into legislative amendments.

The states and territories also have an interest in ensuring a national identity-matching scheme gets the balance right when it comes to addressing identity crime and assisting law enforcement and protecting individual privacy.

The question is whether these calls for improvements will be loud enough to put these bills back on the drawing board.

The future of the legislation will tell us something important about the strength of human rights protections in Australia, which rely heavily on parliamentary bodies like the PJCIS to help raise the alarm when it comes to rights-infringing laws.The Conversation

Sarah Moulds, Lecturer of Law, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Here’s how tech giants profit from invading our privacy, and how we can start taking it back



Your online activity can be turned into an intimate portrait of your life – and used for profit.
Shutterstock.com

Katharine Kemp, UNSW

Australia’s consumer watchdog has recommended major changes to our consumer protection and privacy laws. If these reforms are adopted, consumers will have much more say about how we deal with Google, Facebook, and other businesses.

The proposals include a right to request erasure of our information; choices about whether we are tracked online and offline; potential penalties of A$10 million or more for companies that misuse our information or impose unfair privacy terms; and default settings that favour privacy.




Read more:
Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


The report from the Australian Competition and Consumer Commission (ACCC) says consumers have growing concerns about the often invisible ways companies track us and disclose our information to third parties. At the same time, many consumers find privacy policies almost impossible to understand and feel they have no choice but to accept.

My latest research paper details how companies that trade in our personal data have incentives to conceal their true practices, so they can use vast quantities of data about us for profit without pushback from consumers. This can preserve companies’ market power, cause harm to consumers, and make it harder for other companies to compete on improved privacy.

The vicious cycle of privacy abuse.
Helen J. Robinson, Author provided

Privacy policies are broken

The ACCC report points out that privacy policies tend to be long, complex, hard to navigate, and often create obstacles to opting out of intrusive practices. Many of them are not informing consumers about what actually happens to their information or providing real choices.

Many consumers are unaware, for example, that Facebook can track their activity online when they are logged out, or even if they are not a Facebook user.




Read more:
Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Some privacy policies are outright misleading. Last month, the US Federal Trade Commission reached a US$5 billion settlement with Facebook as a penalty for repeatedly misleading users: personal information could be accessed by third-party apps without a user’s consent if one of the user’s Facebook “friends” gave consent.

If this fine sounds large, bear in mind that Facebook’s share price went up after the FTC approved the settlement.

The ACCC is now investigating privacy representations by Google and Facebook under the Australian Consumer Law, and has taken action against the medical appointment booking app HealthEngine for allegedly misleading patients while selling their information to insurance brokers.

Nothing to hide…?

Consumers generally have very little idea about what information about them is actually collected online or disclosed to other companies, and how that can work to their disadvantage.

A recent report by the Consumer Policy Research Centre explained how companies most of us have never heard of – data aggregators, data brokers, data analysts, and so on – are trading in our personal information. These companies often collect thousands of data points on individuals from various companies we deal with, and use them to provide information about us to companies and political parties.

Data companies have sorted consumers into lists on the basis of sensitive details about their lifestyles, personal politics and even medical conditions, as revealed by reports by the ACCC and the US Federal Trade Commission. Say you’re a keen jogger, worried about your cholesterol, with broadly progressive political views and a particular interest in climate change – data companies know all this about you and much more besides.

So what, you might ask. If you’ve nothing to hide, you’ve nothing to lose, right? Not so. The more our personal information is collected, stored and disclosed to new parties, the more our risk of harm increases.

Potential harms include fraud and identity theft (suffered by 1 in 10 Australians); being charged higher retail prices, insurance premiums or interest rates on the basis of our online behaviour; and having our information combined with information from other sources to reveal intimate details about our health, financial status, relationships, political views, and even sexual activity.




Read more:
Why you might be paying more for your airfare than the person seated next to you


In written testimony to the US House of Representatives, legal scholar Frank Pasquale explained that data brokers have created lists of sexual assault victims, people with sexually transmitted diseases, Alzheimer’s, dementia, AIDS, sexual impotence or depression. There are also lists of “impulse buyers”, and lists of people who are known to be susceptible to particular types of advertising.

Major upgrades to Australian privacy laws

According to the ACCC, Australia’s privacy law is not protecting us from these harms, and falls well behind privacy protections consumers enjoy in comparable countries in the European Union, for example. This is bad for business too, because weak privacy protection undermines consumer trust.

Importantly, the ACCC’s proposed changes wouldn’t just apply to Google and Facebook, but to all companies governed by the Privacy Act, including retail and airline loyalty rewards schemes, media companies, and online marketplaces such as Amazon and eBay.

Australia’s privacy legislation (and most privacy policies) only protect our “personal information”. The ACCC says the definition of “personal information” needs to be clarified to include technical data like our IP addresses and device identifiers, which can be far more accurate in identifying us than our names or contact details.
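To see why such technical data identifies people so effectively, consider a sketch (in Python, with entirely hypothetical values and field names) of how a tracker can link separate visits to one profile without ever collecting a name:

```python
import hashlib

def device_fingerprint(ip: str, device_id: str, user_agent: str) -> str:
    """Hash a handful of technical identifiers into one stable pseudonymous key."""
    raw = f"{ip}|{device_id}|{user_agent}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]

# Two separate visits carrying the same technical data collapse to one profile,
# even though no name, email or phone number was ever collected.
visit_a = device_fingerprint("203.0.113.7", "device-a1b2c3", "Mobile Safari 12")
visit_b = device_fingerprint("203.0.113.7", "device-a1b2c3", "Mobile Safari 12")
assert visit_a == visit_b
```

Names change and are shared by many people; a device identifier rarely changes and belongs to exactly one device, which is the ACCC’s point.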




Read more:
Explainer: what is surveillance capitalism and how does it shape our economy?


Whereas some companies currently keep our information for long periods, the ACCC says we should have a right to request erasure to limit the risks of harm, including from major data breaches and reidentification of anonymised data.

Companies should stop pre-ticking boxes in favour of intrusive practices such as location tracking and profiling. Default settings should favour privacy.

Currently, there is no law against “serious invasions of privacy” in Australia, and the Privacy Act gives individuals no direct right of action. According to the ACCC, this should change. It also supports plans to increase maximum corporate penalties under the Privacy Act from A$2.1 million to A$10 million (or 10% of turnover or three times the benefit, whichever is larger).

Increased deterrence from consumer protection laws

Our unfair contract terms law could be used to attack unfair terms imposed by privacy policies. The problem is, currently, this only means we can draw a line through unfair terms. The law should be amended to make unfair terms illegal and impose potential fines of A$10 million or more.

The ACCC also recommends Australia adopt a new law against “unfair trading practices”, similar to those used in other countries to tackle corporate wrongdoing including inadequate data security and exploitative terms of use.

So far, the government has acknowledged that reforms are needed but has not committed to making the recommended changes. The government’s 12-week consultation period on the recommendations ends on October 24, with submissions due by September 12.The Conversation

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Fingerprint and face scanners aren’t as secure as we think they are



Biometric systems are increasingly used in our civil, commercial and national defence applications.
Shutterstock

Wencheng Yang, Edith Cowan University and Song Wang, La Trobe University

Despite what every spy movie in the past 30 years would have you think, fingerprint and face scanners used to unlock your smartphone or other devices aren’t nearly as secure as they’re made out to be.

While it’s not great if your password is made public in a data breach, at least you can easily change it. If the scan of your fingerprint or face – known as “biometric template data” – is revealed in the same way, you could be in real trouble. After all, you can’t get a new fingerprint or face.

Your biometric template data are permanently and uniquely linked to you. The exposure of that data to hackers could seriously compromise user privacy and the security of a biometric system.

Current techniques provide effective protection against such breaches, but advances in artificial intelligence (AI) are rendering these protections obsolete.




Read more:
Receiving a login code via SMS and email isn’t secure. Here’s what to use instead


How biometric data could be breached

If a hacker wanted to access a system that was protected by a fingerprint or face scanner, there are a number of ways they could do it:

  1. your fingerprint or face scan (template data) stored in the database could be replaced by a hacker to gain unauthorised access to a system

  2. a physical copy or spoof of your fingerprint or face could be created from the stored template data (with play doh, for example) to gain unauthorised access to a system

  3. stolen template data could be reused to gain unauthorised access to a system

  4. stolen template data could be used by a hacker to unlawfully track an individual from one system to another.

Biometric data need urgent protection

Nowadays, biometric systems are increasingly used in our civil, commercial and national defence applications.

Consumer devices equipped with biometric systems are found in everyday electronic devices like smartphones. MasterCard and Visa both offer credit cards with embedded fingerprint scanners. And wearable fitness devices are increasingly using biometrics to unlock smart cars and smart homes.

So how can we protect raw template data? A range of encryption techniques have been proposed. These fall into two categories: cancellable biometrics and biometric cryptosystems.




Read more:
When your body becomes your password, the end of the login is nigh


In cancellable biometrics, complex mathematical functions are used to transform the original template data when your fingerprint or face is being scanned. This transformation is non-reversible, meaning there’s no risk of the transformed template data being turned back into your original fingerprint or face scan.
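One of the simplest cancellable-biometric transforms is a seeded random projection. The sketch below is illustrative only (the feature vector, dimensions and seeds are made up), but it shows the two properties that matter: the projection is non-reversible because it is many-to-one, and it is revocable because a new seed yields a fresh, unlinkable template from the same finger or face.

```python
import numpy as np

def cancellable_template(features: np.ndarray, user_seed: int) -> np.ndarray:
    """Transform a raw biometric feature vector with a seeded random projection.

    Projecting 128 dimensions down to 64 makes the mapping many-to-one, so the
    stored template cannot be inverted back into the original scan. If the
    database is breached, issuing the user a new seed produces a brand-new,
    unlinkable template from the very same finger or face.
    """
    rng = np.random.default_rng(user_seed)
    projection = rng.standard_normal((64, features.shape[0]))
    return projection @ features

raw_scan = np.random.default_rng(0).standard_normal(128)  # stand-in for real features
stored = cancellable_template(raw_scan, user_seed=101)
reissued = cancellable_template(raw_scan, user_seed=202)  # re-enrolled after a breach
```

Production systems use far more sophisticated transforms, but the revocation logic is the same: throw away the old seed and the leaked template becomes useless.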

In a case where the database holding the transformed template data is breached, the stored records can be deleted. Additionally, when you scan your fingerprint or face again, the scan will result in a new unique template even if you use the same finger or face.

In biometric cryptosystems, the original template data are combined with a cryptographic key to generate a kind of “black box”. The cryptographic key is the “secret”, and a fresh scan (the query data) is the “key” that unlocks the black box so the secret can be retrieved. The cryptographic key is released only upon successful authentication.
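The “black box” idea can be sketched with a toy fuzzy commitment scheme, a classic biometric cryptosystem. Everything below is illustrative: the bit lengths and the crude repetition code are stand-ins for the proper error-correcting codes real systems use.

```python
import hashlib

REP = 5  # each key bit is stored 5 times, so up to 2 bit-flips per chunk are absorbed

def enroll(template_bits, key_bits):
    """Bind a secret key to a biometric template; store only helper data + key hash."""
    codeword = [b for b in key_bits for _ in range(REP)]      # repeat each key bit
    helper = [t ^ c for t, c in zip(template_bits, codeword)]  # key hidden under template
    return helper, hashlib.sha256(bytes(key_bits)).hexdigest()

def release(query_bits, helper, key_hash):
    """Recover the key from a noisy re-scan; fail if the scan is too different."""
    noisy_codeword = [q ^ h for q, h in zip(query_bits, helper)]
    # majority-vote each REP-bit chunk back down to a single key bit
    key = [int(sum(noisy_codeword[i:i + REP]) > REP // 2)
           for i in range(0, len(noisy_codeword), REP)]
    return key if hashlib.sha256(bytes(key)).hexdigest() == key_hash else None
```

A genuine re-scan with a few flipped bits still releases the key; a sufficiently different scan decodes to the wrong key, the hash check fails, and nothing is released.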

AI is making security harder

In recent years, new biometric systems that incorporate AI have really come to the forefront of consumer electronics. Think: smart cameras with built-in AI capability to recognise and track specific faces.

But AI is a double-edged sword. While new developments, such as deep artificial neural networks, have enhanced the performance of biometric systems, potential threats could arise from the integration of AI.

For example, researchers at New York University created a tool called DeepMasterPrints. It uses deep learning techniques to generate fake fingerprints that can unlock a large number of mobile devices. It’s similar to the way that a master key can unlock every door.

Researchers have also demonstrated how deep artificial neural networks can be trained so that the original biometric inputs (such as the image of a person’s face) can be obtained from the stored template data.




Read more:
Facial recognition is increasingly common, but how does it work?


New data protection techniques are needed

Thwarting these types of threats is one of the most pressing issues facing designers of secure AI-based biometric recognition systems.

Existing encryption techniques designed for non-AI-based biometric systems are incompatible with AI-based biometric systems. So new protection techniques are needed.

Academic researchers and biometric scanner manufacturers should work together to secure users’ sensitive biometric template data, thus minimising the risk to users’ privacy and identity.

In academic research, special focus should be put on the two most important aspects: recognition accuracy and security. As this research falls within Australia’s science and research priority of cybersecurity, both government and private sectors should provide more resources to the development of this emerging technology.The Conversation

Wencheng Yang, Post Doctoral Researcher, Security Research Institute, Edith Cowan University and Song Wang, Senior Lecturer, Engineering, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Seven ways the government can make Australians safer – without compromising online privacy



We need a cyber safety equivalent to the Slip! Slop! Slap! campaign to nudge behavioural change in the community.
Shutterstock

Damien Manuel, Deakin University

This is part of a major series called Advancing Australia, in which leading academics examine the key issues facing Australia in the lead-up to the 2019 federal election and beyond. Read the other pieces in the series here.

When it comes to data security, there is an inherent tension between safety and privacy. The government’s job is to balance these priorities with laws that will keep Australians safe, improve the economy and protect personal data from unwarranted surveillance.

This is a delicate line to walk. Recent debate has revolved around whether technology companies should be required to help law enforcement agencies gain access to the encrypted messages of suspected criminals.

While this is undoubtedly an important issue, the enacted legislation – the Telecommunications and Other Legislation Amendment (Assistance and Access) Act – fails on both fronts. Not only is it unlikely to stop criminals, it could make personal communications between everyday people less secure.

Rather than focus on the passage of high-profile legislation that betrays a clear misunderstanding of the technology in question, the government would do better to invest in a comprehensive cyber security strategy that will actually have an impact.

Achieving the goals set out in the strategy we already have would be a good place to start.




Read more:
The difference between cybersecurity and cybercrime, and why it matters


Poor progress on cyber security

The Turnbull government launched Australia’s first Cyber Security Strategy in April 2016. It promised to dramatically improve the online safety of all Australian families and businesses.

In 2017, the government released the first annual update to report on how well it was doing. On the surface some progress had been made, but a lot of items were incomplete – and the promised linkages to businesses and the community were not working well.

Unfortunately, there was never a second update. Prime ministers were toppled, cabinets were reshuffled and it appears the Morrison government lost interest in truly protecting Australians.

So, where did it all go wrong?

A steady erosion of privacy

Few Australians paid much notice when vested interests hijacked technology law reforms. The amendment of the Copyright Act in 2015 forced internet service providers (ISPs) to block access to sites containing pirated content. Movie studios now had their own version of China’s “Great Firewall” to block and control internet content in Australia.

In 2017, the government implemented its data retention laws, which effectively enabled specific government agencies to spy on law-abiding citizens. The digital trail (metadata) people left through phone calls, SMS messages, emails and internet activity was retained by telecommunications carriers and made accessible to law enforcement.

The public was assured only limited agencies would have access to the data to hunt for terrorists. In 2018, we learned that many more agencies were accessing the data than originally promised.

Enter the Assistance and Access legislation. Australia’s technology sector strongly objected to the bill, but the Morrison government’s consultation process was a whitewash. The government ignored advice on the damage the legislation would do to the developing cyber sector outlined in the Cyber Security Strategy – the very sector the Turnbull government had been counting on to help rebuild the economy in this hyper-connected digital world.




Read more:
What skills does a cybersecurity professional need?


While the government focuses on the hunt for terrorists, it neglects the thousands of Australians who fall victim each year to international cybercrime syndicates and foreign governments.

Australians lose money to cybercrime via scam emails and phone calls designed to harvest passwords, banking credentials and other personal information. Losses from some categories of cybercrime have increased by more than 70% in the last 12 months. The impact of cybercrime on Australian business and individuals is estimated at $7 billion a year.

So, where should government focus its attention?

Seven actions that would make Australia safer

If the next government is serious about protecting Australian businesses and families, here are seven concrete actions it should take immediately upon taking office.

1. Review the Cyber Security Strategy

Work with industry associations, the business and financial sectors, telecommunication providers, cyber startups, state government agencies and all levels of the education sector to develop a plan to protect Australians and businesses. The plan must be comprehensive, collaborative and, most importantly, inclusive. It should be adopted at the federal level and by states and territories.

2. Make Australians a harder target for cybercriminals

The United Kingdom’s National Cyber Security Centre is implementing technical and process controls that help people in the UK fight cybercrime in smart, innovative ways. The UK’s Active Cyber Defence program uses top-secret intelligence to prevent cyber attacks and to detect and block malicious email campaigns used by scammers. It also investigates how people actually use technology, with the aim of implementing behavioural change programs to improve public safety.

3. Create a community education campaign

A comprehensive community education program would improve online behaviours and make businesses and families safer. We had the iconic Slip! Slop! Slap! campaign from 1981 to help reduce skin cancer through community education. Where is the equivalent campaign for cyber safety to nudge behavioural change in the community at all levels from kids through to adults?

4. Improve cyber safety education in schools

Build digital literacy into education from primary through to tertiary level so that young Australians understand the consequences of their online behaviours. For example, they should know the risks of sharing personal details and nude selfies online.




Read more:
Cybersecurity of the power grid: A growing challenge


5. Streamline industry certifications

Encourage the adoption of existing industry certifications, and stop special interest groups from introducing more. There are already more than 100 industry certifications. Minimum standards for government staff should be defined, including for managers, technologists and software developers.

The United States Department of Defense introduced minimum industry certification requirements for people in government who handle data. The Australian government should do the same by making a number of vendor-agnostic certifications mandatory in each job category.

6. Work with small and medium businesses

The existing cyber strategy doesn’t do enough to engage with the business sector. Small and medium businesses form a critical part of the larger business supply-chain ecosystem, so the ramifications of a breach could be far-reaching.

The Australian Signals Directorate recommends businesses follow “The Essential Eight” – a list of strategies businesses can adopt to reduce their risk of cyber attack. This is good advice, but it doesn’t address the human side of exploitation, called social engineering, which tricks people into disclosing passwords that protect sensitive or confidential information.

7. Focus on health, legal and tertiary education sectors

The health, legal and tertiary education sectors have a low level of cyber maturity. These are among the top four sectors reporting breaches, according to the Office of the Australian Information Commissioner.

While health sector breaches could lead to personal harm and blackmail, breaches in the legal sector could result in the disclosure of time-sensitive business transactions and personal details. And the tertiary education sector – a powerhouse of intellectual research – is ripe for foreign governments to steal the knowledge underpinning Australia’s future technologies.

A single person doing the wrong thing and making a mistake can cause a major security breach. More than 900,000 people are employed in the Australian health and welfare sector, and the chance of one of these people making a mistake is unfortunately very high.The Conversation

Damien Manuel, Director, Centre for Cyber Security Research & Innovation (CSRI), Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tim Wilson’s ‘retirement tax’ website doesn’t have a privacy policy. So how is he using the data?


Andre Oboler, La Trobe University

A growing debate over Labor’s policy to end cash rebates for excess franking credits has led to calls for the chair of parliament’s economics committee, Liberal MP Tim Wilson, to resign.

Labor has accused Wilson of using a parliamentary inquiry into the policy to spearhead a partisan campaign against it.

Part of the controversy revolves around a website Wilson is promoting – stoptheretirementtax.com – that initially required people who wanted to register to attend public hearings for the inquiry to agree to put their name to a petition against the policy. Wilson described this as a “mistake” that has since been fixed.

But there’s another issue with the website that’s worth taking a look at: if it complies with privacy law.

Political parties are exempt from the usual privacy rules, so we need to know if stoptheretirementtax.com is a Liberal party website or government website. The answer has implications for whether privacy law may have been breached, and if the data collected can be used for political campaigning in the upcoming federal election.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


A party or parliamentary website?

Stoptheretirementtax.com was registered anonymously on October 31. While it’s a requirement of website registration for owners to be publicly listed, in this case a domain privacy service was used to hide those details.

By mid-November the site was being shared by a financial services company with their clients, who said that Wilson had sent the website details to them. In several tweets promoting the inquiry in November, Wilson didn’t mention the site.

The site was promoted publicly in January, when Wilson tweeted six times that people should use it to register for hearings in Queensland and New South Wales.

In these tweets, Wilson identified himself as both the Liberal MP for Goldstein and the Chair of the Economics Committee.

By contrast, stoptheretirementtax.com doesn’t mention Wilson’s electorate or political party. The bottom of the site has the Australian coat of arms with the words “Chair of the House Economics Committee”. Wilson’s parliamentary contact details appear alongside a statement that reads:

Authorised by Tim Wilson MP, Chair of the Standing Committee on Economics.

The confusion around whether stoptheretirementtax.com is an official government website begins with the website’s domain name. It’s based on a slogan coined by Wilson Asset Management, a financial services company that is actively campaigning against Labor’s policy on franking credits. The site also uses a photograph the company has used in their campaign, and Wilson has said Wilson Asset Management were consulted in the site’s development.

Then there is the text, which reads:

At the next election your financial security will be on the ballot … Labor are attacking your full tax refund. After the election they want to scrap refundable franking credits. That will hit your security in retirement and risk pushing many vulnerable retirees below the poverty line.




Read more:
The Australian public cares about privacy: do politicians?


What data is being collected?

Stoptheretirementtax.com is collecting personal information. Visitors who wish to send a submission to the inquiry or register to attend public hearings are required to provide their name, email address, mailing address and phone number.

Visitors who want to send a submission to the standing committee on economics are offered a box with pre-filled text. A small note reads: “feel free to edit, or write your own”. A second box invites visitors to share their story.

Design features such as the colouring of the text could be seen to discourage editing of the first box while directing people to the second, meaning many people who submit a response will likely end up including the pre-filled text in their submission.

When registering for the public hearings, users are offered two check boxes (pre-checked), which state:

I want to be registered for the petition against the retirement tax

I want to be contacted on future activities to stop the retirement tax.

Until Sunday, it was impossible to register for a hearing without also signing the petition. Tim Wilson has said this was an “error”. The required check box for hearings and the design of the submission boxes may in fact be dark patterns – the use of design features to manipulate users into making the decision the site owner wants.
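A pre-checked opt-in is trivial to build, which is part of why it counts as a dark pattern: the path of least resistance is the one the site owner wants. A hypothetical sketch of such markup (illustrative only, not taken from the actual site) might look like:

```html
<!-- Hypothetical sketch of a pre-checked opt-in form; not the actual
     markup from stoptheretirementtax.com -->
<form action="/register" method="post">
  <label>Name <input type="text" name="name" required></label>
  <label>Email <input type="email" name="email" required></label>

  <!-- The "checked" attribute makes signing up the default: users must
       actively opt out, reversing the usual consent model -->
  <label>
    <input type="checkbox" name="petition" checked>
    I want to be registered for the petition against the retirement tax
  </label>
  <label>
    <input type="checkbox" name="contact" checked>
    I want to be contacted on future activities to stop the retirement tax
  </label>

  <button type="submit">Register</button>
</form>
```

A privacy-respecting design would leave both boxes unchecked and decouple hearing registration from the petition entirely.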

The site contains no privacy policy or indication of who the data is shared with or how it will be used.

On Monday, a page for the inquiry was added to the Australian Parliament’s website describing itself as “the official page of the committee”. It states that submissions to the inquiry can be made via the Parliament’s submission system or by email. It also explains that “pre-registration is not required to participate” in the hearings.

A matter of privacy

Australian privacy is largely regulated by the Privacy Act and the Australian Privacy Principles it contains. Registered political parties are exempt, but stoptheretirementtax.com does not appear to come from a registered political party.

To assert it is campaign material from a registered political party at this stage would raise electoral law issues. The Commonwealth Electoral Act requires that registered political parties identify themselves in the authorisation statement on their political materials. Stoptheretirementtax.com has no such authorisation.

The Privacy Act does apply to government agencies, including ministers, departments and people:

holding or performing the duties of an appointment… made… by a Minister.

The Chair of a Standing Committee is “appointed by the prime minister”, making them an agency subject to the Australian Privacy Principles.

The Australian Privacy Principles requirements for government agencies include:

  • being open and transparent about how personal information is managed, including having a privacy policy
  • explaining why they are collecting, holding, using or disclosing personal information
  • only collecting personal information if it is reasonably necessary for, or directly related to, one of their functions or activities
  • only collecting personal information by lawful and fair means
  • disclosing who else the personal information would usually be shared with

A failure to comply with the Australian Privacy Principles may put personal information at risk and can attract the attention of the Information Commissioner, who regulates privacy.

What about parliamentary privilege?

The Australian Law Reform Commission noted in 2008 that:

Ministers engaging in their official capacity are bound by the Privacy Act, while MPs engaging in political acts and practices are not.

A Committee Chair would likely be similarly bound only while acting in that capacity.

Some of the time, while acting in their capacity, they may be effectively exempt from the Privacy Act due to parliamentary privilege.

Section 16(2) of the Parliamentary Privileges Act reasserts a right of immunity going back to the Bill of Rights of 1688. It covers:

all words spoken and acts done in the course of, or for purposes of or incidental to, the transacting of the business of a House or of a committee.

That doesn’t mean the principles don’t apply, just that enforcing corrective action may be beyond the reach of the courts. Parliament has its own processes that could still be used to address concerns.

The usual rules, enforceable by the courts, may still apply in circumstances where a committee chair is acting in that capacity, but outside the business of the committee.

Advocacy activities, like running a petition or soliciting contact details for political action, may not be something “for the purpose” of, or “incidental” to, the business of a committee. In fact, publishing an overtly political website may itself step outside the protection – it is the committee and its parliamentary work, not the activities of the chair per se, that attract the privilege.




Read more:
Australians’ trust in politicians and democracy hits an all-time low: new research


Reaching a resolution

The best resolution would be for Tim Wilson to take down the site (particularly in light of the new official site), pass to the Committee Secretariat any information it requires (such as submissions), then delete all personal information he has collected through the stoptheretirementtax.com website.

A full disclosure of who data may have been shared with, where it was held and how it was secured would also help. If data has been disclosed to anyone other than the Parliamentary Committee, those who have been impacted should be informed. The Information Commissioner should be consulted for guidance and assistance.

The broader lesson is that privacy must be taken seriously. The Australian Privacy Principles are designed to ensure transparency and accountability. The lack of a privacy policy on the website should have served as a warning.The Conversation

Andre Oboler, Senior Lecturer, Master of Cyber-Security Program (Law), La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five projects that are harnessing big data for good



Often the value of data science lies in the work of joining the dots.
Shutterstock

Arezou Soltani Panah, Swinburne University of Technology and Anthony McCosker, Swinburne University of Technology

Data science has boomed over the past decade, following advances in mathematics, computing capability, and data storage. Australia’s Industry 4.0 taskforce is busy exploring ways to improve the Australian economy with tools such as artificial intelligence, machine learning and big data analytics.

But while data science offers the potential to solve complex problems and drive innovation, it has often come under fire for unethical use of data or unintended negative consequences – particularly in commercial cases where people become data points in annual company reports.

We argue that the data science boom shouldn’t be limited to business insights and profit margins. When used ethically, big data can help solve some of society’s most difficult social and environmental problems.

Industry 4.0 should be underwritten by values that ensure these technologies are trained towards the social good (known as Society 4.0). That means using data ethically, involving citizens in the process, and building social values into the design.

Here are five data science projects that are putting these principles into practice.




Read more:
The future of data science looks spectacular


1. Finding humanitarian hot spots

Social and environmental problems are rarely easy to solve. Take the hardship and distress in rural areas due to the long-term struggle with drought. Australia’s size and the sheer number of people and communities involved make it difficult to pair those in need with support and resources.

Our team joined forces with the Australian Red Cross to figure out where the humanitarian hot spots are in Victoria. We used social media data to map everyday humanitarian activity to specific locations and found that the hot spots of volunteering and charity activity are located in and around Melbourne CBD and the eastern suburbs. These kinds of insights can help local aid organisations channel volunteering activity in times of acute need.

Distribution of humanitarian actions across inner Melbourne and local government areas. Blue dots and red dots represent scraped Instagram posts around the hashtags #volunteer and #charity.

2. Improving fire safety in homes

Accessing data – the right data, in the right form – is a constant challenge for data science. We know that house fires are a serious threat, and that fire and smoke alarms save lives. Targeting houses without fire alarms can help mitigate that risk. But there is no single reliable source of information to draw on.

In the United States, Enigma Labs built open data tools to model and map risk at the level of individual neighbourhoods. To do this effectively, their model combines national census data with a geocoder tool (TIGER), as well as analytics based on local fire incident data, to provide a risk score.
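The core of this approach is joining datasets on a shared geographic key and deriving a score from the combined fields. A minimal sketch of the idea, with entirely illustrative data, field names and weighting (not Enigma Labs’ actual model), might look like:

```python
# Illustrative sketch of neighbourhood-level risk scoring: join census
# data with local fire-incident counts per census block group, then
# derive a simple comparative risk score. All values and field names
# here are made up for the example.
import pandas as pd

# Toy census data: block group ID and number of occupied housing units
census = pd.DataFrame({
    "block_group": ["A", "B", "C"],
    "housing_units": [400, 250, 600],
})

# Toy local fire-incident history for the same block groups
incidents = pd.DataFrame({
    "block_group": ["A", "B", "C"],
    "fires_last_5y": [12, 2, 9],
})

def risk_scores(census, incidents):
    # Join the two sources on the shared geographic key
    merged = census.merge(incidents, on="block_group")
    # Incidents per housing unit, rescaled to 0-100 for readability
    rate = merged["fires_last_5y"] / merged["housing_units"]
    merged["risk_score"] = (100 * rate / rate.max()).round(1)
    return merged

print(risk_scores(census, incidents)[["block_group", "risk_score"]])
```

The real model is far richer – it folds in geocoding (TIGER) and many census variables – but the pattern of joining disparate sources on geography is the same.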

Fire fatality risk scores calculated at the level of Census block groups.
Enigma Labs

3. Mapping police violence in the US

Ordinary citizens can be involved in generating social data. There are many crowdsourced, open mapping projects, but often the value of data science lies in the work of joining the dots.

The Mapping Police Violence project in the US monitors, makes sense of, and visualises police violence. It draws on three crowdsourced databases, but also fills in the gaps using a mix of social media, obituaries, criminal records databases, police reports and other sources of information. By drawing all this information together, the project quantifies the scale of the problem and makes it visible.

A visualisation of the frequency of police violence in the United States.
Mapping Police Violence



Read more:
Data responsibility: a new social good for the information age


4. Optimising waste management

The Internet of Things is made up of a host of connected devices that collect data. When embedded in the ordinary objects all around us, and combined with cloud-based analysis and computing, these objects become smart – and can help solve problems or inefficiencies in the built environment.

If you live in Melbourne, you might have noticed BigBelly bins around the CBD. These smart bins have solar-powered trash compactors that compress the garbage inside throughout the day. This eliminates waste overflow and reduces unnecessary carbon emissions, cutting waste collections by 80%.

Real-time data analysis and reporting is provided by a cloud-based data management portal, known as CLEAN. The tool identifies trends in waste overflow, which helps with bin placement and planning of collection services.

BigBelly bins are being used in Melbourne’s CBD.
Kevin Zolkiewicz/Flickr, CC BY-NC

5. Identifying hotbeds of street harassment

A group of four women – and many volunteer supporters – in Egypt developed HarassMap to engage with, and inform, the community in an effort to reduce sexual harassment. The platform they built uses anonymised, crowdsourced data to map harassment incidents that occur in the street in order to alert its users of potentially unsafe areas.

The challenge for the group was to provide a means for generating data for a problem that was itself widely dismissed. Mapping and informing are essential data science techniques for addressing social problems.

Mapping of sexual harassment reported in Egypt.
HarassMap



Read more:
Cambridge Analytica’s closure is a pyrrhic victory for data privacy


Building a better society

Turning the efforts of data science to social good isn’t easy. Those with the expertise have to be attuned to the social impact of data analytics. Meanwhile, access to data, or linking data across sources, is a major challenge – particularly as data privacy becomes an increasing concern.

While the mathematics and algorithms that drive data science appear objective, human factors often combine to embed biases, which can result in inaccurate modelling. Limited digital and data literacy, along with a lack of transparency in methodology, combine to raise mistrust in big data and analytics.

Nonetheless, when put to work for social good, data science can provide new sources of evidence to assist government and funding bodies with policy, budgeting and future planning. This can ultimately result in a better connected and more caring society.The Conversation

Arezou Soltani Panah, Postdoc Research Fellow (Social Data Scientist), Swinburne University of Technology and Anthony McCosker, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Travelling overseas? What to do if a border agent demands access to your digital device



New laws enacted in New Zealand give customs agents the right to search your phone.
Shutterstock

Katina Michael, Arizona State University

New laws enacted in New Zealand this month give border agents the right to demand travellers entering the country hand over passwords for their digital devices. We outline what you should do if it happens to you, in the first part of a series exploring how technology is changing tourism.


Imagine returning home to Australia or New Zealand after a long-haul flight, exhausted and red-eyed. You’ve just reclaimed your baggage after getting through immigration when you’re stopped by a customs officer who demands you hand over your smartphone and the password. Do you know your rights?

Both Australian and New Zealand customs officers are legally allowed to search not only your personal baggage, but also the contents of your smartphone, tablet or laptop. It doesn’t matter whether you are a citizen or visitor, or whether you’re crossing a border by air, land or sea.




Read more:
How to protect your private data when you travel to the United States


New laws that came into effect in New Zealand on October 1 give border agents:

…the power to make a full search of a stored value instrument (including power to require a user of the instrument to provide access information and other information or assistance that is reasonable and necessary to allow a person to access the instrument).

Those who don’t comply could face prosecution and NZ$5,000 in fines. Border agents have similar powers in Australia and elsewhere. In Canada, for example, hindering or obstructing a border guard could cost you up to C$50,000 or five years in prison.

A growing trend

Australia and New Zealand don’t currently publish data on these kinds of searches, but there is a growing trend of device search and seizure at US borders. There was a more than fivefold increase in the number of electronic device inspections between 2015 and 2016 – bringing the total number to 23,000 per year. In the first six months of 2017, the number of searches was already almost 15,000.

In some of these instances, people have been threatened with arrest if they didn’t hand over passwords. Others have been charged. In cases where they did comply, people have lost sight of their device for a short period, or devices were confiscated and returned days or weeks later.




Read more:
Encrypted smartphones secure your identity, not just your data


On top of device searches, there is also canvassing of social media accounts. In 2016, the United States introduced an additional question on online visa application forms, asking people to divulge social media usernames. As this form is usually filled out after the flights have been booked, travellers might feel they have no choice but to part with this information rather than risk being denied a visa, despite the question being optional.

There is little oversight

Border agents may have a legitimate reason to search an incoming passenger – for instance, if a passenger is suspected of carrying illicit goods, banned items, or agricultural products from abroad.

But searching a smartphone is different from searching luggage. Our smartphones carry our innermost thoughts, intimate pictures, sensitive workplace documents, and private messages.

The practice of searching electronic devices at borders could be compared to police having the right to intercept private communications. But in such cases in Australia, police require a warrant to conduct the intercept. That means there is oversight, and a mechanism in place to guard against abuse. And the suspected crime must be proportionate to the action taken by law enforcement.

What to do if it happens to you

If you’re stopped at a border and asked to hand over your devices and passwords, make sure you have educated yourself in advance about your rights in the country you’re entering.

Find out whether what you are being asked is optional or not. Just because someone in a uniform asks you to do something, it does not necessarily mean you have to comply. If you’re not sure about your rights, ask to speak to a lawyer and don’t say anything that might incriminate you. Keep your cool and don’t argue with the customs officer.




Read more:
How secure is your data when it’s stored in the cloud?


You should also be smart about how you manage your data generally. You may wish to switch on two-factor authentication, which requires a second verification step on top of your password. And store sensitive information in the cloud on a secure European server while you are travelling, accessing it only on a needs basis. Data protection is taken more seriously in the European Union as a result of the recently enacted General Data Protection Regulation.

Microsoft, Apple and Google all indicate that handing over a password to one of their apps or devices is in breach of their services agreement, privacy management, and safety practices. That doesn’t mean it’s wise to refuse to comply with border force officials, but it does raise questions about the position governments are putting travellers in when they ask for this kind of information.The Conversation

Katina Michael, Professor, School for the Future of Innovation in Society & School of Computing, Informatics and Decision Systems Engineering, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

If privacy is increasing for My Health Record data, it should apply to all medical records



Everyone was up in arms about a lack of privacy with My Health Record data, but the privacy protections are the same for other types of patient data.
from http://www.shutterstock.com

Megan Prictor, University of Melbourne; Bronwyn Hemsley, University of Technology Sydney; Mark Taylor, University of Melbourne, and Shaun McCarthy, University of Newcastle

In response to the public outcry against the potential for My Health Record data to be shared with police and other government agencies, Health Minister Greg Hunt recently announced moves to change the legislation.

The laws underpinning the My Health Record as well as records kept by GPs and private hospitals currently allow those records to be shared with the police, Centrelink, the Tax Office and other government departments if it’s “reasonably necessary” for a criminal investigation or to protect tax revenue.

If passed, the policy of the Digital Health Agency (which runs the My Health Record) not to release information without a court order will become law. This would mean the My Health Record has greater privacy protections in this respect than other medical records, which doesn’t make much sense.




Read more:
Opting out of My Health Records? Here’s what you get with the status quo


Changing the law to increase privacy

Under the proposed new bill, state and federal government departments and agencies would have to apply for a court order to obtain information stored in the My Health Record.

The court would need to be satisfied that sharing the information is “reasonably necessary”, and that there is no other effective way for the person requesting it to access the information. The court would also need to weigh up whether the disclosure would “unreasonably interfere” with the person’s privacy.

If granted, a court order to release the information would require the Digital Health Agency to provide information from a person’s My Health Record without the person’s consent, and even if they objected.

If a warrant is issued for a person’s health records, the police can sift through them as they look for relevant information. They could uncover personally sensitive material that is not relevant to the current proceedings. Since the My Health Record allows the collection of information across health providers, there could be an increased risk of non-relevant information being disclosed.




Read more:
Using My Health Record data for research could save lives, but we must ensure it’s ethical


But what about our other medical records?

Although we share all sorts of personal information online, we like to think of our medical records as sacrosanct. But the law underpinning My Health Record came from the wording of the Commonwealth Privacy Act 1988, which applies to all medical records held by GPs, specialists and private hospitals.

Under the Act, doctors don’t need to see a warrant before they’re allowed to share health information with enforcement agencies. The Privacy Act principles mean doctors only need a “reasonable belief” that sharing the information is “reasonably necessary” for the enforcement activity.

Although public hospital records do not fall under the Privacy Act, they are covered by state laws that have similar provisions. In Victoria, for instance, the Health Records Act 2001 permits disclosure if the record holder “reasonably believes” that the disclosure is “reasonably necessary” for a law enforcement function and it would not be a breach of confidence.

In practice, health care providers are trained on the utmost importance of protecting the patient’s privacy. Their systems of registration and accreditation mean they must follow a professional code of ethical conduct that includes observing confidentiality and privacy.

Although the law doesn’t require it, it is considered good practice for health professionals to insist on seeing a warrant before disclosing a patient’s health records.

In a 2014 case, the federal court considered whether a psychiatrist had breached the privacy of his patient. The psychiatrist had given some of his patient’s records to Queensland police in response to a warrant. The court said the existence of a warrant was evidence the doctor had acted appropriately.

In a 2015 case, it was decided a doctor had interfered with a patient’s privacy by disclosing the patient’s health information to police. In this case, there was no warrant and no formal criminal investigation.




Read more:
What could a My Health Record data breach look like?


Unfortunately, there are recent examples of medical records being shared with government departments in worrying ways. In Australia, it has been alleged the immigration department tried, for political reasons, to obtain access to the medical records of people held in immigration detention.

In the UK, thousands of patient records were shared with the Home Office to trace immigration offenders. As a result, it was feared some people would become too frightened to seek medical care for themselves and their children.

We can’t change the fact different laws at state and federal level apply to our paper and electronic medical records stored in different locations. But we can try to change these laws to be consistent in protecting our privacy.

If it’s so important to change the My Health Records Act to ensure our records can only be “unlocked” by a court order, the same should apply to the Privacy Act as well as state-based laws. Doing so might help to address public concerns about privacy and the My Health Record, and further inform decisions about opting out or staying in the system.The Conversation

Megan Prictor, Research Fellow in Law, University of Melbourne; Bronwyn Hemsley, Professor of Speech Pathology, University of Technology Sydney; Mark Taylor, Associate professor, University of Melbourne, and Shaun McCarthy, Director, University of Newcastle Legal Centre, University of Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The devil is in the detail of government bill to enable access to communications data


Monique Mann, Queensland University of Technology

The Australian government has released a draft of its long awaited bill to provide law enforcement and security agencies with new powers to respond to the challenges posed by encryption.

According to the Department of Home Affairs, encryption already affects 90% of the Australian Security Intelligence Organisation’s (ASIO) priority cases, and 90% of data intercepted by the Australian Federal Police. The measures respond to estimates that communications among terrorists and organised crime groups will be entirely encrypted by 2020.

The Department of Home Affairs and ASIO can already access encrypted data with specialist decryption techniques – or at points where data are not encrypted. But this takes time. The new bill aims to speed up this process, but these broad and ill-defined new powers have significant scope for abuse.




Read more:
New data access bill shows we need to get serious about privacy with independent oversight of the law


The Department of Home Affairs argues this new framework will not compel communications providers to build systemic weaknesses or vulnerabilities into their systems. In other words, it is not a backdoor.

But it will require providers to offer up details about technical characteristics of their systems that could help agencies exploit weaknesses that have not been patched. The required assistance may also include installing software, and designing and building new systems.

Compelling assistance and access

The draft Assistance and Access Bill introduces three main reforms.

First, it increases the obligations of both domestic and offshore organisations to assist law enforcement and security agencies to access information. Second, it introduces new computer access warrants that enable law enforcement to covertly obtain evidence directly from a device (this occurs at the endpoints when information is not encrypted). Finally, it increases existing powers that law enforcement have to access data through search and seizure warrants.

The bill is modelled on the UK’s Investigatory Powers Act, which introduced mandatory decryption obligations. Under the UK Act, the UK government can order telecommunication providers to remove any form of electronic protection that is applied by, or on behalf of, an operator. Whether or not this is technically possible is another question.

Similar to the UK laws, the Australian bill puts the onus on telecommunication providers to give security agencies access to communications. That might mean providing access to information at points where it is not encrypted, but it’s not immediately clear what other requirements can or will be imposed.




Read more:
End-to-end encryption isn’t enough security for ‘real people’


For example, the bill allows the Director-General of Security or the chief officer of an interception agency to compel a provider to do an unlimited range of acts or things. That could mean anything from removing security measures to deleting messages or collecting extra data. Providers will also be required to conceal any action taken covertly by law enforcement.

Further, the Attorney-General may issue a “technical capability notice” directed towards ensuring that the provider is capable of giving certain types of help to ASIO or an interception agency.

This means providers will be required to develop new ways for law enforcement to collect information. As in the UK, it’s not clear whether a provider will be able to offer true end-to-end encryption and still be able to comply with the notices. Providers that breach the law risk facing $10 million fines.

Cause for concern

The bill puts few limits or constraints on the assistance that telecommunication providers may be ordered to offer. There are also concerns about transparency. The bill would make it an offence to disclose information about government agency activities without authorisation. Anyone leaking information about data collection by the government – as Edward Snowden did in the US – could go to jail for five years.

There are limited oversight and accountability structures and processes in place. The Director-General of Security, the chief officer of an interception agency and the Attorney-General can issue notices without judicial oversight. This differs from how it works in the UK, where a specific judicial oversight regime was established, in addition to the introduction of an Investigatory Powers Commissioner.

Notices can be issued to enforce domestic laws and to assist the enforcement of the criminal laws of foreign countries. They can also be issued in the broader interests of national security, or to protect the public revenue. These are vague and ill-defined limits on exceptional powers.




Read more:
Police want to read encrypted messages, but they already have significant power to access our data


The range of service providers covered is also extremely broad. It might include telecommunication companies, internet service providers, email providers, social media platforms and a range of other “over-the-top” services. It also covers those who develop, supply or update software, and those who manufacture, supply, install or maintain data processing devices.

The enforcement of criminal laws in other countries may mean international requests for data will be funnelled through Australia as the “weakest-link” of our Five Eyes allies. This is because Australia has no enforceable human rights protections at the federal level.

It’s not clear how the government would enforce these laws on transnational technology companies. For example, if Facebook was issued a fine under the laws, it could simply withdraw operations or refuse to pay. Also, $10 million is a drop in the ocean for companies such as Facebook whose total revenue last year exceeded US$40 billion.

Australia is a surveillance state

As I have argued elsewhere, the broad powers outlined in the bill are neither necessary nor proportionate. Police already have existing broad powers, which are further strengthened by this bill, such as their ability to covertly hack devices at the endpoints when information is not encrypted.

Australia has limited human rights and privacy protections. This has enabled a constant and steady expansion of the powers and capabilities of the surveillance state. If we want to protect the privacy of our communications we must demand it.

The Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 (Cth) is still in a draft stage, and the Department of Home Affairs invites public comment up until 10 September 2018. Submit any comments to assistancebill.consultation@homeaffairs.gov.au.The Conversation

Monique Mann, Vice Chancellor’s Research Fellow in Regulation of Technology, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.