Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you’re a voter, a director, a doctor, a gun owner, on welfare, pay tax, have a driver’s licence or Medicare card – our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows the federal, state, territory and local governments to provide pensions, elections, parks, courts and hospitals, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. The legislation was previously called the “Data Sharing and Release” bill; “transparency” has now replaced “release” in its title, apparently to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Office of the Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

Person hitting a 'share' button on a keyboard.
In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Robert, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics’ concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don’t have a choice.

However, the bill does exclude some data sharing. If you’re a criminologist researching law enforcement, for example, you won’t have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches. Yet, as RoboDebt revealed, they often respond with denial. Universities are also frequently wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of “de-identified” personal information, and much of the proposed sharing will rely on data matching.
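To see why, consider a minimal linkage-attack sketch. All records below are invented; the point is that a handful of quasi-identifiers (postcode, birth year, gender) shared between a “de-identified” dataset and an identified public one can be enough to re-identify people.

```python
# Illustrative only: every record here is invented.
# A "de-identified" dataset still carries quasi-identifiers that can be
# matched against an identified public dataset.

deidentified_health = [
    {"postcode": "2617", "birth_year": 1975, "gender": "F", "condition": "diabetes"},
    {"postcode": "2600", "birth_year": 1990, "gender": "M", "condition": "asthma"},
]

public_register = [
    {"name": "Alice Smith", "postcode": "2617", "birth_year": 1975, "gender": "F"},
    {"name": "Bob Jones", "postcode": "2600", "birth_year": 1990, "gender": "M"},
    {"name": "Carol White", "postcode": "2612", "birth_year": 1982, "gender": "F"},
]

def reidentify(health_rows, register):
    """Join on quasi-identifiers; a unique match re-identifies the record."""
    matches = []
    for row in health_rows:
        candidates = [
            p for p in register
            if (p["postcode"], p["birth_year"], p["gender"])
            == (row["postcode"], row["birth_year"], row["gender"])
        ]
        if len(candidates) == 1:  # unique combination => re-identified
            matches.append((candidates[0]["name"], row["condition"]))
    return matches

print(reidentify(deidentified_health, public_register))
```

Even two or three attributes can single someone out when their combination is unique in the population, which is why removing names alone offers weak protection.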

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy




Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

TikTok can be good for your kids if you follow a few tips to stay safe


Tashatuvango/Shutterstock

Joanne Orlando, Western Sydney University

The video-sharing app TikTok is a hot political potato amid concerns over who has access to users’ personal data.

The United States has moved to ban the app. Other countries, including Australia, have expressed concern.

But does this mean your children who use this app are at risk? If you’re a parent, let me explain the issues and give you a few tips to make sure your kids stay safe.

A record-breaker

Never has an app for young people been so popular. By April this year the TikTok app had been downloaded more than 2 billion times worldwide.

The app recently broke all records for the most downloaded app in a quarterly period, with 315 million downloads globally in the first three months of 2020.

Its popularity with young Aussies has sky-rocketed. Around 1.6 million Australians use the app, including about one in five people born since 2006. That’s an estimated 537,000 young Australians.

Like all social media apps, TikTok siphons data about its users such as email address, contacts, IP address and geolocation information.

TikTok has already been fined $US5.8 million (A$8 million) to settle US government claims it illegally collected personal information from children.

Because TikTok is owned by Chinese company ByteDance, US President Donald Trump and others are also worried about the app handing over this data to the Chinese state. TikTok denies it does this.




Read more:
China could be using TikTok to spy on Australians, but banning it isn’t a simple fix


Just days ago US President Donald Trump signed an executive order seeking to ban TikTok from operating or interacting with US companies.

Youngsters still TikToking

There is no hint of this stopping our TikToking children. For them it’s business as usual, creating and uploading videos of themselves lip-syncing, singing, dancing or just talking.

The most recent trend on TikTok – Taylor Swift Love Story dance – has resulted in more than 1.5 million video uploads in around two weeks alone.

But the latest political issues with TikTok raise questions about whether children should be on this platform right now. More broadly, as we see copycat sites such as Instagram Reels launched, should children be using any social media platforms that focus on them sharing videos of themselves at all?

The pros and cons

The TikTok app has filled a genuine social need for this young age group. Social media sites can offer a sense of belonging to a group, such as a group focused on a particular interest, experience, social group or religion.

TikTok celebrates diversity and inclusivity. It can provide a place where young people can join together to support each other in their needs.

During the COVID-19 pandemic, TikTok has had huge numbers of videos with coronavirus-related hashtags such as #quarantine (65 billion views), #happyathome (19.5 billion views) and #safehands (5.4 billion views).

Some of these videos are funny, some include song and dance. The World Health Organisation even posted its own youth-oriented videos on TikTok to provide young people with reliable public health advice about COVID-19.

The key benefit is that the platform became a place where young people from all corners of the planet joined together to make sense of the pandemic and take its stressful edge off, for themselves and others their age. Where else could they do that? The mental health benefits this offers can be important.

Let’s get creative

Another benefit lies in the creativity TikTok centres on. Passive use of technology, such as scrolling and checking social media with no purpose, can lead to addictive types of screen behaviours for young people.

By contrast, planning and creating content, such as making their own videos, is a meaningful use of technology and curbs addictive screen behaviours. In other words, if young people are going to use technology, using it creatively, purposefully and with meaning is the type of use we want to encourage.

Users of TikTok must be at least 13 years old, although it does offer a limited version of the app for under-13s.

Know the risks

As on all social media platforms, children are engaging in a space in which others can contact them. They may also be engaging with adult concepts they are not yet mature enough for, such as love gone wrong or suggestively twerking to songs.




Read more:
The secret of TikTok’s success? Humans are wired to love imitating dance moves


The platform moves very quickly, with a huge volume of videos, likes and comments uploaded every day. Taking it all in can lead to cognitive overload. This can be distracting for children and decrease focus on other aspects of their life including schoolwork.

Three young girls video themselves on a smartphone.
How to stay safe and still have fun with TikTok.
Luiza Kamalova/Shutterstock

So here are a few tips for keeping your child safe, as well as getting the most out of the creative/educational aspects of TikTok.

  1. as with any social network, use privacy settings to limit how much information your child is sharing

  2. if your child is creating a video, make sure it is reviewed before it’s uploaded to ensure it doesn’t include content that can be misconstrued or have negative implications

  3. if a child younger than 13 wants to use the app, there’s a section for this younger age group that includes extra safety and privacy features

  4. if you’re okay with your child creating videos for TikTok, then doing it together or helping them plan and film the video can be a great parent-child bonding activity

  5. be aware of the collection of data by TikTok, encourage your child to be aware of it, and help them know what they are giving away and the implications for them.

Happy (safe) TikToking!

Joanne Orlando, Researcher: Children and Technology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Private browsing: What it does – and doesn’t do – to shield you from prying eyes on the web



The major browsers have privacy modes, but don’t confuse privacy for anonymity.
Oleg Mishutin/iStock via Getty Images

Lorrie Cranor, Carnegie Mellon University and Hana Habib, Carnegie Mellon University

Many people look for more privacy when they browse the web by using their browsers in privacy-protecting modes, called “Private Browsing” in Mozilla Firefox, Opera and Apple Safari; “Incognito” in Google Chrome; and “InPrivate” in Microsoft Edge.

These private browsing tools sound reassuring, and they’re popular. According to a 2017 survey, nearly half of American internet users have tried a private browsing mode, and most who have tried it use it regularly.

However, our research has found that many people who use private browsing have misconceptions about what protection they’re gaining. A common misconception is that these browser modes allow you to browse the web anonymously, surfing the web without websites identifying you and without your internet service provider or your employer knowing what websites you visit. The tools actually provide much more limited protections.

Other studies conducted by the Pew Research Center and the privacy-protective search engine company DuckDuckGo have similar findings. In fact, a recent lawsuit against Google alleges that internet users are not getting the privacy protection they expect when using Chrome’s Incognito mode.

How it works

While the exact implementation varies from browser to browser, what private browsing modes have in common is that once you close your private browsing window, your browser no longer stores the websites you visited, cookies, user names, passwords and information from forms you filled out during that private browsing session.

Essentially, each time you open a new private browsing window you are given a “clean slate” in the form of a brand new browser window that has not stored any browsing history or cookies. When you close your private browsing window, the slate is wiped clean again and the browsing history and cookies from that private browsing session are deleted. However, if you bookmark a site or download a file while using private browsing mode, the bookmarks and file will remain on your system.
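The “clean slate” behaviour described above can be captured in a toy model. This is illustrative Python, not any browser’s real implementation: session state vanishes when the window closes, while bookmarks and downloads persist outside it.

```python
# Toy model of a private browsing session: not a real browser's code,
# just the "clean slate" behaviour described above.

class PrivateSession:
    def __init__(self):
        # A fresh private window starts with no history and no cookies.
        self.history = []
        self.cookies = {}

    def visit(self, url, set_cookies=None):
        self.history.append(url)
        self.cookies.update(set_cookies or {})

    def close(self):
        # Closing the window wipes the slate clean again.
        self.history.clear()
        self.cookies.clear()

# Bookmarks and downloads live outside the session, so they survive.
bookmarks = []

session = PrivateSession()
session.visit("https://example.com", {"sid": "abc123"})
bookmarks.append("https://example.com")
session.close()

print(session.history, session.cookies, bookmarks)
# history and cookies are empty after close(); the bookmark remains
```

Note the asymmetry the model makes explicit: only state held *inside* the session object is wiped, which mirrors why bookmarks and downloaded files stay on your device.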

Although some browsers, including Safari and Firefox, offer some additional protection against web trackers, private browsing mode does not guarantee that your web activities cannot be linked back to you or your device. Notably, private browsing mode does not prevent websites from learning your internet address, and it does not prevent your employer, school or internet service provider from seeing your web activities by tracking your IP address.

Reasons to use it

We conducted a research study in which we identified reasons people use private browsing mode. Most study participants wanted to protect their browsing activities or personal data from other users of their devices. Private browsing is actually pretty effective for this purpose.

We found that people often used private browsing to visit websites or conduct searches that they did not want other users of their device to see, such as those that might be embarrassing or related to a surprise gift. In addition, private browsing is an easy way to log out of websites when borrowing someone else’s device – so long as you remember to close the window when you are done.

Smart phone displaying Google incognito mode
Private browsing can help cover your internet tracks by automatically deleting your browsing history and cookies when you close the browser.
Avishek Das/SOPA Images/LightRocket via Getty Images

Private browsing provides some protection against cookie-based tracking. Since cookies from your private browsing session are not stored after you close your private browsing window, it’s less likely that you will see online advertising in the future related to the websites you visit while using private browsing.

[Get the best of The Conversation, every weekend. Sign up for our weekly newsletter.]

Additionally, as long as you have not logged into your Google account, any searches you make will not appear in your Google account history and will not affect future Google search results. Similarly, if you watch a video on YouTube or other service in private browsing, as long as you are not logged into that service, your activity does not affect the recommendations you get in normal browsing mode.

What it doesn’t do

Private browsing does not make you anonymous online. Anyone who can see your internet traffic – your school or employer, your internet service provider, government agencies, people snooping on your public wireless connection – can see your browsing activity. Shielding that activity requires more sophisticated tools that use encryption, like virtual private networks.

Private browsing also offers few security protections. In particular, it does not prevent you from downloading a virus or malware to your device. Additionally, private browsing does not offer any additional protection for the transmission of your credit card or other personal information to a website when you fill out an online form.

It is also important to note that the longer you leave your private browsing window open, the more browsing data and cookies it accumulates, reducing your privacy protection. Therefore, you should get in the habit of closing your private browsing window frequently to wipe your slate clean.

What’s in a name

It is not all that surprising that people have misconceptions about how private browsing mode works; the word “private” suggests a lot more protection than these modes actually provide.

Furthermore, a 2018 research study found that the disclosures shown on the landing pages of private browsing windows do little to dispel misconceptions that people have about these modes. Chrome provides more information about what is and is not protected than most of the other browsers, and Mozilla now links to an informational page on the common myths related to private browsing.

However, it may be difficult to dispel all of these myths without changing the name of the browsing mode and making it clear that private browsing stops your browser from keeping a record of your browsing activity, but it isn’t a comprehensive privacy shield.

Lorrie Cranor, Professor of Computer Science and of Engineering & Public Policy, Carnegie Mellon University and Hana Habib, Graduate Research Assistant at the Institute for Software Research, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Giving your details to restaurants and cafes: your rights, their obligations and privacy concerns



Shutterstock

Mahmoud Elkhodr, CQUniversity Australia

While lockdown restrictions have eased in many places, the coronavirus threat isn’t over yet. The number of cases globally has surpassed 9 million, and infections have slowly crept back in Victoria.




Read more:
In many countries the coronavirus pandemic is accelerating, not slowing


Restaurants, pubs and cafes have been among the first places to which people have flocked for some respite from social isolation. In many cases, diners must provide their personal details to these venues for potential contact tracing later on.

Unfortunately, there’s a lack of clarity regarding what the best options are for businesses, and many aren’t following official guidelines.

Keeping records

In the rush to reopen while also abiding by government requirements, many businesses are resorting to collecting customer information using pen and paper.

This entails sharing the stationery, which goes against the basic principles of social distancing. Your written details can also be seen by other diners and staff, triggering privacy concerns.

You wouldn’t normally leave your name, phone number, email, address or any combination of these on a piece of paper in public – so why now?

Businesses collecting personal information from customers must abide by the Australian Privacy Principles under the Privacy Act 1988. This requires they “take reasonable steps to protect the personal information collected or held”.

The federal government has also released an updated guide to collecting personal information for contact tracing purposes. Establishments must use this guide in conjunction with individual directions or orders from certain states and territories. See some below.

QLD Must keep contact information about all guests and staff including name, address, mobile phone number and the date/time period of patronage for a period of 56 days.

More details here.

ACT Businesses should ask for the first name and contact phone number of each attendee.

More details here.

SA Only real estate agents, and wedding and funeral businesses, should collect personal information from customers; restaurants are exempt.

More details here.

NSW Keep the name and mobile number or email address for all staff and dine-in customers for at least 28 days.

More details here.

The guide also outlines how businesses should handle customers’ contact information. The relevant parts are:

  1. you should only collect the personal information required under the direction or order

  2. you should notify individuals before you collect personal information

  3. you should securely store this information once you have collected it.

One point specifically notes:

Do not place the names and phone numbers or other details in a book or on a notepad or computer screen where customers may see it.

Thus, many establishments are clearly not sticking to official guidance. So could you refuse to give your details in such cases?

Venues are required by law to collect the necessary details as per their state or territory’s order. Venues can deny entry to people who refuse.
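The guide’s three points can be sketched in code. This is a hypothetical illustration, not an official tool: the field names and the 28-day retention period (taken from the NSW requirement above) are assumptions for the example.

```python
# Hypothetical sketch of guideline-compliant record keeping:
# collect only the required fields, and destroy records once the
# retention period lapses. Field names and the 28-day period are
# illustrative assumptions (NSW-style), not an official schema.
from datetime import datetime, timedelta

RETENTION = timedelta(days=28)

class ContactRegister:
    def __init__(self):
        self._records = []  # in memory here; real storage must be secure

    def add(self, name, phone, when=None):
        # Point 1: collect only the personal information the order requires.
        self._records.append({
            "name": name,
            "phone": phone,
            "collected_at": when or datetime.now(),
        })

    def purge_expired(self, now=None):
        # Destroy records once the retention period has passed.
        now = now or datetime.now()
        self._records = [
            r for r in self._records
            if now - r["collected_at"] <= RETENTION
        ]

    def __len__(self):
        return len(self._records)

reg = ContactRegister()
reg.add("Diner A", "0400 000 000", when=datetime(2020, 6, 1))
reg.add("Diner B", "0400 000 001", when=datetime(2020, 6, 25))
reg.purge_expired(now=datetime(2020, 7, 1))
print(len(reg))  # only the record inside the 28-day window remains
```

Secure storage (point 3) and notifying customers (point 2) sit outside what a code snippet can show, but scheduling the purge so records don’t linger beyond the mandated period is the part venues most often get wrong on paper.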

What would a comprehensive solution look like?

For contact tracing to work effectively, it should be implemented systematically, not in a piecemeal way. This means there should be a system that securely collects, compiles, and analyses people’s data in real time, without impinging on their privacy.

It’s perhaps too much to ask hospitality businesses to take the lead on this. Ideally, government agencies should have done it already.

The COVIDSafe app could have provided this service, but with it being optional — and contact tracing by businesses being mandatory — it’s not a viable option. That’s not to mention issues with the running of the app, including its Bluetooth requirements, battery drain and a history of problems on iPhones.




Read more:
How safe is COVIDSafe? What you should know about the app’s issues, and Bluetooth-related risks


Nonetheless, there are some free technologies that can offer better alternatives to the manual collection of customers’ details. These include:

All these tools have a similar setup process and provide similar services. Let’s take a look at one of the most popular ones, Google Forms.

Using Google Forms

Google Forms is a tool that comes free with a Google account. The “contact information template” is a good starting point for businesses wanting to make a secure log of visitor details.

In Google Forms, you can create a workable contact tracing form within minutes.

Once you create a form to collect customers’ information, you just have to share a URL, and customers can fill the form on their own device.

You can generate a shareable URL for your Google form.

Data gathered via Google Forms is stored securely on the Google Drive account and can only be accessed through the same login that was used to create the form. The transmission of data from the customer’s device to Google Drive (where the data is then stored) is also secure.

Or use a QR code

If you want to make the whole process even easier, and not use a clunky URL, then using a QR code (linked to the URL of your Google form) is a great option. For this, you can use any free external QR code generator. These will generate a QR code which, when scanned by a smartphone, will direct the user to your URL.

This code can also be printed and hung on a wall, or stuck to tables where it’s easy to access without any human-to-human contact. A comprehensive guide to creating and accessing Google Forms can be found here.

QR code created using the website https://www.qr-code-generator.com/

That said, although the process of setting up and using such tools is very simple, there may still be people who are too mistrusting of the way their data is used, and may refuse to hand it over.

Mahmoud Elkhodr, Lecturer in Information and Communication Technologies, CQUniversity Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How safe is COVIDSafe? What you should know about the app’s issues, and Bluetooth-related risks



Shutterstock

James Jin Kang, Edith Cowan University and Paul Haskell-Dowland, Edith Cowan University

The Australian government’s COVIDSafe app has been up and running for almost a fortnight, with more than five million downloads.

Unfortunately, since its release many users – particularly those with iPhones – have been in the dark about how well the app works.

Digital Transformation Agency head Randall Brugeaud has now admitted the app’s effectiveness on iPhones “deteriorates and the quality of the connection is not as good” when the phone is locked and the app is running in the background.

There has also been confusion regarding where user data is sent, how it’s stored, and who can access it.

Conflicts with other apps

Using Bluetooth, COVIDSafe collects anonymous IDs from others who are also using the app, assuming you come within range of them (and their smartphone) for at least 15 minutes.
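That contact rule can be sketched in simplified form. The IDs, timestamps and matching logic below are invented for illustration and are not COVIDSafe’s actual protocol:

```python
# Simplified model of proximity logging: keep another user's anonymous
# ID only if their device stays in range for at least 15 minutes.
# Invented IDs and timestamps; not the app's real protocol.
from datetime import datetime, timedelta

MIN_CONTACT = timedelta(minutes=15)

def close_contacts(sightings):
    """sightings maps anon_id -> list of datetimes when a Bluetooth
    ping from that ID was seen. Returns IDs whose sightings span at
    least MIN_CONTACT."""
    contacts = []
    for anon_id, times in sightings.items():
        times = sorted(times)
        if times[-1] - times[0] >= MIN_CONTACT:
            contacts.append(anon_id)
    return contacts

sightings = {
    "anon-1234": [datetime(2020, 5, 1, 12, 0), datetime(2020, 5, 1, 12, 20)],
    "anon-5678": [datetime(2020, 5, 1, 12, 0), datetime(2020, 5, 1, 12, 5)],
}
print(close_contacts(sightings))  # only the 20-minute encounter qualifies
```

The sketch also shows why Bluetooth must stay on: without a continuous stream of pings, the app cannot tell a fleeting pass-by from a genuine 15-minute contact.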

Bluetooth must be kept on at all times (or at least turned on when leaving home). But this setting is specifically advised against by the Office of the Australian Information Commissioner.

It’s likely COVIDSafe isn’t the only app that uses Bluetooth on your phone. So once you’ve enabled Bluetooth, other apps may start using it and collecting information without your knowledge.

Bluetooth is also energy-intensive, and can quickly drain phone batteries, especially if more than one app is using it. For this reason, some may be reluctant to opt in.

There have also been reports of conflicts with specialised medical devices. Diabetes Australia has received reports of users encountering problems using Bluetooth-enabled glucose monitors at the same time as the COVIDSafe app.

If this happens, the current advice from Diabetes Australia is to uninstall COVIDSafe until a solution is found.

Bluetooth can still track your location

Many apps require a Bluetooth connection and can track your location without actually using GPS.

Bluetooth “beacons” are progressively being deployed in public spaces – with one example in Melbourne supporting visually impaired shoppers. Some apps can use these to log locations you have visited or passed through. They can then transfer this information to their servers, often for marketing purposes.

To avoid apps using Bluetooth without your knowledge, you should deny Bluetooth permission for all apps in your phone’s settings, and then grant permissions individually.

If privacy is a priority, you should also read the privacy policy of all apps you download, so you know how they collect and use your information.

Issues with iPhones

The iPhone operating system (iOS), depending on the version, doesn’t allow COVIDSafe to work properly in the background. The only solution is to leave the app running in the foreground. And if your iPhone is locked, COVIDSafe may not be recording all the necessary data.

You can change your settings to stop your iPhone going into sleep mode. But this again will drain your battery more rapidly.

Brugeaud said older models of iPhones would also be less capable of picking up Bluetooth signals via the app.

It’s expected these issues will be fixed following the integration of contact tracing technology developed by Google and Apple, which Brugeaud said would be done within the next few weeks.




Read more:
The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


Vulnerabilities to data interception

If a user tests positive for COVID-19 and consents to their data being uploaded, the information is then held by the federal government on an Amazon Web Services server in Australia.

Data from the app is stored on a user’s device and transmitted in an encrypted form to the server. Although it’s technically possible to intercept such communications, the data would still be encrypted and therefore offer little value to an attacker.

The government has said the data won’t be moved offshore or made accessible to US law enforcement. But various entities, including Australia’s Law Council, have said the privacy implications remain murky.

That said, it’s reassuring the Amazon data centre (based in Sydney) has achieved a very high level of security as verified by the Australian Cyber Security Centre.

Can the federal government access the data?

The federal government has said the app’s data will only be made available to state and territory health officials. This has been confirmed in a determination under the Biosecurity Act and is due to be implemented in law.

Federal health minister Greg Hunt said:

Not even a court order during an investigation of an alleged crime would be allowed to be used [to access the data].

Although the determination and proposed legislation clearly define the who and how of access to COVIDSafe data, history indicates the government may not be best placed to look after our data.

It seems the government has gone to great lengths to promote the security and privacy of COVIDSafe. However, the government commissioned the development of the app, so someone will have the means to obtain the information stored within the system – the “keys” to the vault.

If the government did covertly obtain access to the data, it’s unlikely we would find out.

And while contact information stored on user devices is deleted on a 21-day rolling basis, the Department of Health has said data sent to Amazon’s server will “be destroyed at the end of the pandemic”. It’s unclear how such a date would be determined.

Ultimately, it comes down to trust – something which seems to be in short supply.




Read more:
The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy


The Conversation


James Jin Kang, Lecturer, Computing and Security, Edith Cowan University and Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


Katharine Kemp, UNSW and Graham Greenleaf, UNSW

The Australian government will need to correct earlier misstatements and improve privacy protections to gain the trust of the millions of Australians being called on to download the COVIDSafe contact tracing app.

The draft Privacy Amendment (Public Health Contact Information) Bill 2020, or the “COVIDSafe bill”, released yesterday, is the first step towards parliamentary legislation providing privacy protections for users of the app.

The COVIDSafe bill includes some significant improvements on the protections offered by federal health minister Greg Hunt’s current determination under the Biosecurity Act, which put rules in place to encourage uptake of the app. However, the bill falls short on other substantial concerns.

Improvements incorporated in the bill

The COVIDSafe bill includes several amendments to the privacy protections originally set out in the determination, which the legislation is intended to replace.

The bill, like the determination, would make it illegal to gather or use data collected by the app for purposes other than those specified. Such an offence would be punishable by up to five years in prison.

Importantly, the bill also permits individuals to take some enforcement action on their own behalf if the privacy protections are breached, rather than relying on the government to bring criminal proceedings. It does this by making a breach of those protections an “interference with privacy” under the Privacy Act. This means users can make a complaint to the federal privacy commissioner.

The bill also improves the kind of consent needed to upload a user’s list of contacts to the central data store, if the user tests positive for COVID-19. Instead of allowing anyone with control of a mobile phone to consent, the bill requires consent from the actual registered COVIDSafe user.

The legislation will also apply to state and territory health officials, so that data they access for contact tracing purposes is covered by the protections if they misuse it.




Read more:
The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy


Not 1.5 metres, not 15 minutes

A crucial problem with the bill is it allows the government to collect much more personal data than is necessary for contact tracing.

Just before the app’s release, federal services minister Stuart Roberts said the app would only collect data of other app users within 1.5 metres, for at least 15 minutes. He also said when a user tests positive the app would allow the user to consent to the upload of only those contacts.

Neither of these statements is true.

According to the Privacy Impact Assessment of COVIDSafe, the app collects data about all other users who came within Bluetooth signal range – even for a minute – within the preceding 21 days, and, with the consent of a user who tests positive, uploads all of it to the central data store.

While the Department of Health more recently said it would prevent state and territory health authorities from accessing contacts other than those that meet the “risk parameters”, the bill includes no data collection or use restrictions based on the distance or duration of contact.
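The publicly stated risk parameters (roughly 1.5 metres for 15 minutes or more) could in principle be applied as a simple filter over uploaded contacts before health authorities see them. The sketch below is hypothetical – the record format is invented, and in practice a Bluetooth app can only estimate distance from signal strength – but it shows how little code such a restriction would take, which underlines how notable its absence from the bill is.

```python
# Hypothetical risk-parameter filter, for illustration only.
MIN_DURATION_MIN = 15   # minimum contact duration, in minutes
MAX_DISTANCE_M = 1.5    # maximum estimated distance, in metres

def meets_risk_parameters(contact):
    """Return True if a contact record meets the stated risk thresholds."""
    return (contact["duration_min"] >= MIN_DURATION_MIN
            and contact["distance_m"] <= MAX_DISTANCE_M)

uploads = [
    {"id": "u1", "duration_min": 20, "distance_m": 1.0},  # close contact
    {"id": "u2", "duration_min": 1,  "distance_m": 0.5},  # too brief
    {"id": "u3", "duration_min": 30, "distance_m": 4.0},  # too far away
]
close_contacts = [c for c in uploads if meets_risk_parameters(c)]
print([c["id"] for c in close_contacts])  # only 'u1' qualifies
```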

The government should correct its misstatements and minimise the data collected and decrypted to that which is necessary, to the extent that is technically possible.

An overly narrow definition of protected data

The privacy protections in the bill only apply to certain data. And the definition of that data does not capture critical personal data created and used in the process of COVIDSafe contact tracing.

The bill defines “COVID app data” as data collected or generated through the operation of the app which has been stored on a mobile phone or device. This would include the encrypted contacts stored on a user’s phone.

But if the user tests positive and uploads those encrypted contacts to the national data store, the decrypted records of their contacts over the last 21 days do not clearly fall within that definition. Data transformed or derived from that data by state and territory health officers would also fall outside the definition.

“COVID app data” should be re-defined to expressly include these types of data.

No source code

Ministers have said COVIDSafe’s source code, or at least the parts of it which do not pose “security issues”, would be made available within a fortnight after the app’s release. Yet, there is no sign of this.

The full source code should be made public at least a week prior to the COVIDSafe Act being enacted so experts can identify weaknesses in privacy protections.

The bill also fails to provide any guarantee of independent scientific advice on whether the app is continuing to be of practical benefit, or should be terminated.

Loopholes in the rules against coercion

The bill contains some good protections against coercing people into downloading or using the COVIDSafe app, but these need to be strengthened by prohibiting requirements to disclose whether the app is installed, and by banning discriminatory conditions. This is especially necessary given various groups, including chambers of commerce, have already proposed (illegal) plans to make participation or entry conditional on app usage.

Some behavioural economists have proposed making government payments, tax breaks or other financial rewards dependent on individuals using the app. The bill should make clear that no discount, payment or other financial incentive may be conditional on a person downloading or using the app.

The government must abide by its promise that use of the COVIDSafe app is voluntary. Coercion or “pseudo-voluntary” agreement should not be used to circumvent this.

‘Google knows everything about you’ doesn’t cut it

Many have argued Australians who do not yet trust the COVIDSafe app should download it anyway since Google, Facebook, Uber or Amazon already “know far more about you”. But the fact that some entities are being investigated for data practices which disadvantage consumers is not a reason to diminish the need for privacy protections.

And government invasions of privacy can have even more dramatic and immediate impacts on our liberty.

Parliament will debate the COVIDSafe Bill in the sitting expected to start May 12, and a Senate Committee will continue to investigate it. Many are likely to wait for improved protections in the final legislation before making the choice to opt in.




Read more:
Coronavirus contact-tracing apps: most of us won’t cooperate unless everyone does


The Conversation


Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW and Graham Greenleaf, Professor of Law and Information Systems, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

COVIDSafe tracking app reviewed: the government delivers on data security, but other issues remain


Mahmoud Elkhodr, CQUniversity Australia

About 1.13 million people had downloaded the federal government’s COVIDSafe app by 6am today, just 12 hours after its release last night, said Health Minister Greg Hunt. The government is hoping at least 40% of the population will make use of the app, designed to help reduce the spread of the coronavirus disease.

Previously dubbed TraceTogether – in line with a similar app rolled out in Singapore – the coronavirus contact tracing app has been an ongoing cause of contention among the public. Many people have voiced concerns about an erosion of privacy and the potential misuse of citizen data by the government.

But how does COVIDSafe work? And to what extent has the app addressed our privacy concerns?




Read more:
Coronavirus contact-tracing apps: most of us won’t cooperate unless everyone does


Getting started

The app’s landing page outlines its purpose: to help Australian health authorities trace and prevent COVID-19’s spread by contacting people who may have been in proximity (within about 1.5 metres) to a confirmed case for 15 minutes or more.

The second screen explains how Bluetooth technology is used to record users’ contact with other app users. This screen says collected data is encrypted and can’t be accessed by other apps or users without a decryption mechanism. It also says the data is stored locally on users’ phones and isn’t sent to the government (remote server storage).

These screens that show up upon app installation explain the app’s functions and guide users through registration.

COVIDSafe requires certain permissions to run.

In subsequent screens, the app links to its privacy policy, seeks user consent to retrieve registration details, and lets users register by entering their name, age range, postcode and mobile number.

This is followed by a declaration page where the user must give consent to enable Bluetooth, “location permissions” and “battery optimiser”.

Regarding location permissions, it’s important to note this isn’t the same as turning on location services. Location permissions must be enabled for COVIDSafe to access Bluetooth on Android and Apple devices. And access to your phone’s battery optimiser is required to keep the app running in the background.

Once the user is registered, a notification should confirm the app is up and running.

Users will have to manually grant some permissions.

Importantly, COVIDSafe doesn’t have an option for users to exit or “log-off”.

Currently, the only way to stop the app is to uninstall it, or turn off Bluetooth. The app’s reliance on prolonged Bluetooth usage also has users worried it might quickly drain their phone batteries.

Preliminary tests

Upon preliminary testing of the app, it seems the federal government has delivered on its promises surrounding data security.

Tests run for one hour showed the app didn’t transmit contact data to any external or remote server; the only external communication was a “handshake” with a remote server, which is simply a way of establishing a secure connection.

Additional tests should be carried out on this front.

This screenshot shows test results run via the Wireshark software to determine whether data from COVIDSafe was being transmitted to external servers.

Issues for iPhone users

According to reports, if COVIDSafe is being used on an iPhone in low-power mode, this may impact the app’s ability to track contacts.

Also, iPhone users must have the app open (in the foreground) for Bluetooth functionality to work. The federal government plans to fix this hitch “in a few weeks”, according to The Guardian.




Read more:
The coronavirus contact tracing app won’t log your location, but it will reveal who you hang out with


This complication may be because Apple’s operating system generally doesn’t allow apps to run Bluetooth-related tasks, or perform Bluetooth-related events unless running in the foreground.

Source code

“Source code” is the term used to describe the set of instructions written during the development of a program. These instructions are understandable to other programmers.

In a privacy impact assessment response from the Department of Health, the federal government said it would make COVIDSafe’s source code publicly available, “subject to consultation with” the Australian Cyber Security Centre. It’s unclear exactly when or how much of the source code will be released.

Making the app’s source code publicly available, or making it “open source”, would allow experts to examine the code to evaluate security risks (and potentially help fix them). For example, experts could determine whether the app collects any personal user information without user consent. This would ensure COVIDSafe’s transparency and enable auditing of the app.

Releasing the source code isn’t only important for transparency, but also for understanding the app’s functionality.

Some COVIDSafe users reported the app wouldn’t accept their mobile number until they turned off wifi and used their mobile network (4G) instead. Until the app is made open source, it’s difficult to say exactly why this happens.




Read more:
Explainer: what is contact tracing and how does it help limit the coronavirus spread?


Civic duty

Overall, it seems COVIDSafe is a promising start to the national effort to ease lockdown restrictions, a luxury already afforded to some states including Queensland.

Questions have been raised about whether the app will later be made compulsory to download, to reach the 40% uptake target. But current growth in download numbers suggests such enforcement may not be necessary as more people rise to their “civic duty”.

That said, only time will reveal the extent to which Australians embrace this new contact tracing technology. The Conversation

Mahmoud Elkhodr, Lecturer in Information and Communication Technologies, CQUniversity Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Is the government’s coronavirus app a risk to privacy?




Rick Sarre, University of South Australia

Few people can fault the government’s zeal in staring down the coronavirus and steering a path for Australia to emerge on the other side ready to do business again.

Unlike the crowds amassing in some US cities to declare their scorn for “stay at home” rules, Australians, generally speaking, have been supportive of federal and state government strategies to tackle the pandemic.

Prime Minister Scott Morrison has added a potential new weapon to his armoury – a COVID-19 tracing app. Government Services Minister Stuart Robert has been spruiking the plan to introduce the app, which is based on technology in use in Singapore.




Read more:
The coronavirus contact tracing app won’t log your location, but it will reveal who you hang out with


But the idea of a government potentially monitoring our daily travels and interactions has drawn suspicion or even scorn. Nationals MP Barnaby Joyce says he won’t be downloading the app.

Robert has since gone on the offensive, explaining the process and playing down any concerns.

So if your app has been within 15 minutes’ duration of someone within 1.5 metres proximity, there’ll be a ping or swapping of phone numbers, and that’ll stay on your phone. And then of course if you test positive … you’ll give consent and those numbers will be provided securely to health professionals, and they’ll be able to call people you’ve been in contact with … Those numbers will be on your phone, nowhere else, encrypted. You can’t access them, no one else can.

Downloading the app is to be voluntary. But its effectiveness would be enhanced, Robert says, if a significant proportion of the population embraced the idea.

On ABC Radio National Breakfast this week he backed away from a previously mentioned minimum 40% community commitment. Instead, Robert said: “Any digital take-up … is of great value.”

He has strong support from other quarters. Epidemiologist Marion Kainer said the adoption of such an app would allow contact tracing to occur much more quickly.

Having the rapid contact tracing is essential in controlling this, so having an app may allow us to open up society to a much greater extent than if we didn’t have an app.

This all sounds well and good. But there are potential problems. Our starting point is that governments must ensure no policy sacrifices our democratic liberties in the pursuit of a goal that could be attained by other, less intrusive, schemes.

The immediate concern comes down to the age-old (and important) debate about how much freedom we are prepared to give up in fighting an existential threat, be it a virus, terrorism, or crime more generally.

Law academic Katharine Kemp last week highlighted her concerns about the dangers of adopting a poorly thought-through strategy before safeguards are in place.

The app, she said:

will require a clear and accurate privacy policy; strict limits on the data collected and the purposes for which it can be used; strict limits on data sharing; and clear rules about when the data will be deleted.

Other commentators have warned more broadly against “mission creep”: that is, with the tool in place, what’s to stop a government insisting upon an expanded surveillance tool down the track?

True, downloading the app is voluntary, but the government has threatened that the price of not volunteering is a longer time-frame for the current restrictions. That threat fails any “pub” test of voluntariness.




Read more:
Latest coronavirus modelling suggests Australia on track, detecting most cases – but we must keep going


On the other hand, there is a privacy trade-off that most people are willing to make if the benefits are manifestly clear. For example, our in-car mapping devices are clever enough (based on the speed of other road users with similar devices) to warn us of traffic problems ahead.

Remember, too, that Australians have had a 20-year love affair with smart technologies. We’re a generation away from the naysayers who argued successfully against the Hawke government’s failed Australia Card in the mid-1980s.

By the same token, the Coalition does not have a strong record of inspiring confidence in large-scale data collection and retrieval. One need only recall the lack of enthusiasm healthcare provider organisations showed for the My Health Record system. In 2019, the National Audit Office found the system had failed to manage its cybersecurity risks adequately.

So where do we go from here? The government sought to allay public concerns about the metadata retention scheme, a program introduced in 2015 to amass private telecommunications data, by giving a role to the Commonwealth Ombudsman to assess police agencies’ compliance with their legislated powers. In the case of the COVID-19 tracing app, the government has, appropriately, enlisted the support of the Office of the Australian Information Commissioner. Robert has said:

Right now a privacy impact assessment is being conducted, the Privacy Commissioner is involved, and all of that will be made public.

While that is an admirable sentiment, one would hope the government would put specific legislation in place to set out all of the conditions of use, and that the commissioner would not be asked for her view unless and until that legislation is in order. The Law Council of Australia has today joined this chorus.

Once the commissioner gives the “all clear”, I will be happy to download the app. Let’s hope it then works as intended.The Conversation

Rick Sarre, Adjunct Professor of Law and Criminal Justice, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Privacy vs pandemic: government tracking of mobile phones could be a potent weapon against COVID-19


Patrick Fair, Deakin University

Borders, beaches, pubs and churches are closed, large events are cancelled, and travellers are subject to 14 days’ isolation – all at significant cost to taxpayers and the economy. But could telecommunications technology offer a more targeted approach to controlling the spread of the COVID-19 coronavirus?

One possibility is to use location history data from the mobile phones of confirmed cases, to help track and trace the spread of infection.

Some people can be contagious without knowing, either because they have not yet developed symptoms, or because their symptoms are mild. These individuals cannot be identified until they become sufficiently unwell to seek medical assistance. Finding them more quickly could help curb the spread of the disease.

This suggestion clearly raises complex privacy issues.




Read more:
Explainer: what is contact tracing and how does it help limit the coronavirus spread?


All mobile service providers in Australia are required to hold two years of data relating to the use of each mobile phone on their network, including location information.

For anyone who tests positive with COVID-19, this data could be used to list every location where they (or, more accurately, their phone) had been over the preceding few weeks. Using that list, it would then be possible to identify every phone that had been in close proximity to the person’s phone during that time. The owners of those phones could then be tested, even though they may not necessarily have developed symptoms or suspected that they had come into contact with the coronavirus.
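The cross-referencing described above amounts to finding every phone that shared a location and time window with a confirmed case’s phone. A highly simplified Python sketch, assuming a hypothetical format of cell-level location records (real retained data gives only coarse, tower-level positions, and real analysis would need far finer matching):

```python
from collections import defaultdict

def co_located_phones(pings, case_phone):
    """Return phones that shared a (cell, hour) slot with the case's phone.

    `pings` is an iterable of (phone_id, cell_id, hour) tuples — an
    invented simplification of retained network location records.
    """
    # Index every phone seen in each (cell, hour) slot.
    slots = defaultdict(set)
    for phone, cell, hour in pings:
        slots[(cell, hour)].add(phone)
    # Collect everyone who shared a slot with the case's phone.
    case_slots = {(c, h) for p, c, h in pings if p == case_phone}
    nearby = set()
    for slot in case_slots:
        nearby |= slots[slot]
    nearby.discard(case_phone)
    return nearby

pings = [("case", "cellA", 9), ("x", "cellA", 9), ("y", "cellB", 9)]
print(co_located_phones(pings, "case"))  # only phone 'x' shared a slot
```

Even this toy version makes the privacy stakes obvious: the same index that answers “who was near a confirmed case?” can answer “who was near anyone?”.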

The government could do this in a systematic way. It could assemble everyone’s location history into a single, searchable database that could then be cross-referenced against the locations of known clusters of infection. This would allow contact tracing throughout the entire population, creating a more proactive way to track down suspected cases.

The privacy problem

You may well ask: do we want the government to assemble a searchable database showing the locations of almost every person over 16 in Australia over the past month?

Some people will undoubtedly find it a confronting prospect to be contacted by the government and told that surveillance analysis suggests they need to be isolated or tested. Others will be concerned that such a database, or the broad surveillance capability that underpins it, could be used to intrude on our privacy in other ways.

Several countries are already using mobile phone data in the fight against the coronavirus. The UK government is reportedly in talks with major mobile phone operators to use location data to analyse the outbreak’s spread.

India, Hong Kong, Israel, Austria, Belgium and Germany are also among the countries taking advantage of mobile data to tackle the pandemic.

The Singapore government has launched an app called Trace Together, which allows mobile users to voluntarily share their location data. Iran’s leaders have been accused of being rather less transparent, amid reports that its coronavirus “diagnosis” app also logs people’s whereabouts.

Is it legal anyway?

We may well take the view that the privacy risks are justified in the circumstances. But does the Australian government actually have the power to use our data for this purpose?

The Telecommunications Act requires carriers to keep telecommunications data secure, but also allows federal, state and territory governments to request access to it for purposes including law enforcement, national security, and protecting public revenue.

Being infected with COVID-19 is not a crime, and while a pandemic is arguably a threat to national security, it is not specifically listed under the Act. Limiting the outbreak would undoubtedly benefit public revenue, but clearly the primary intent of contact tracing is as a public health measure.

There is another law that could also compel mobile carriers to hand over users’ data. During a “human biosecurity emergency period”, the Biosecurity Act 2015 allows the federal health minister to take any action necessary to prevent or control the “emergence, establishment or spread” of the declared emergency disease. A human biosecurity emergency period was declared on Sunday 23 March.




Read more:
Explainer: what are the laws mandating self-isolation and how will they be enforced?


In recent years there has been a great deal of debate over the use of telecommunications data for surveillance purposes. The introduction of the mandatory data retention regime was contentious, as was the broad power granted to multiple agencies to access the data for law enforcement.

One reason for the controversy was the relatively low threshold for use of these laws: authorities could access data relating to any suspected offence punishable by three years or more in prison.

Australia is now facing a crisis that is orders of magnitude more serious. Many Australians would be willing to see their information used in this way if it saves lives, limits the economic impact, and impedes the spread of COVID-19.

The Commonwealth has the legal power to do it, the security and privacy issues can be managed, and the benefits may be significant.The Conversation

Patrick Fair, Adjunct Professor, School of Information Technology, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why the government’s proposed facial recognition database is causing such alarm



Andrew Hastie said the broad objectives of the identity-matching system were sound, but key changes were needed to ensure privacy and transparency.
Lukas Coch/AAP

Sarah Moulds, University of South Australia

Since before the 2019 election, the Morrison government has been keen to introduce a new scheme that would allow government agencies, telcos and banks to use facial recognition technology to collect and share images of people across the country.

While there are some benefits to such a system – making it easier to identify the victims of natural disasters, for example – it has been heavily criticised by human rights groups as an attempt to introduce mass surveillance to Australia and an egregious breach of individual privacy.

The plan hit a roadblock when the government-controlled Parliamentary Joint Committee on Intelligence and Security (PJCIS) handed down an extensive report calling for significant changes to the legislation to ensure stronger privacy protections and other safeguards against misuse.




Read more:
Close up: the government’s facial recognition plan could reveal more than just your identity


What are the identity-matching laws?

The identity-matching bills aim to set up a national database of images captured through facial recognition technology and other pieces of information used to identify people, such as driver’s licences, passports and visa photos. This information could then be shared between government agencies, and in some cases, private organisations like telcos and banks, provided certain legal criteria are met.

The proposed database follows an agreement reached by the Commonwealth and the states and territories in 2017 to facilitate the “secure, automated and accountable” exchange of identity information to help combat identity crime and promote community safety.

Critical to this agreement was that the system include “robust privacy safeguards” to guard against misuse.

The agreement gave the federal government the green light to introduce laws to set up the identity-matching system.




Read more:
Why regulating facial recognition technology is so problematic – and necessary


Access to the service could potentially encompass a wide range of purposes. For example, a government agency could use the system to identify people thought to be involved in identity fraud or considered threats to national security.

But the bill also includes more pedestrian uses, such as in cases of “community safety” or “road safety”.

The proposed laws contain some safeguards against misuse, including criminal sanctions when an “entrusted person” discloses information for an unauthorised purpose. In addition, access by banks or other companies and local councils can only occur with the consent of the person seeking to have their identity verified.

However, much of the detail about precisely who can access the system and what limits apply is not set out in the bills. This will be determined through government regulation or subsequent intergovernmental agreements.

Concerns about scope and safeguards

The Coalition government’s bills were first introduced in 2018, but didn’t come up for a vote. After the government reintroduced the bills in July, the PJCIS launched an inquiry and invited public submissions.

Legal bodies have argued that amendments are needed to tighten the boundaries of who can access the identity-matching services and for what purposes. They note that as currently drafted, the proposed laws give too much discretionary power to government officials and actually create opportunities for identity theft.




Read more:
DNA facial prediction could make protecting your privacy more difficult


This is particularly problematic when coupled with the potential for the rapid spread of facial recognition technology in Australian streets, parks and transport hubs.

The Human Rights Law Centre said the proposed system is “more draconian” than the one launched in the UK. Another concern is that it could be used by a wide range of agencies to confirm the identity of any Australian with government-approved documentation (such as a passport or driver’s licence), regardless of whether they are suspected of a crime.

The Australian Human Rights Commission also pointed to research suggesting the software used to capture or match facial imagery could result in higher error rates for women and people from certain ethnic groups.

What’s next for the bills?

When handing down the committee’s unanimous report, Andrew Hastie said the broad objectives of the identity-matching system were sound, but key changes were needed to ensure privacy protections and transparency.

While the PJCIS cannot actually stop the bills from being passed, it has a strong track record of turning its recommendations into legislative amendments.

The states and territories also have an interest in ensuring a national identity-matching scheme gets the balance right when it comes to addressing identity crime and assisting law enforcement and protecting individual privacy.

The question is whether these calls for improvements will be loud enough to put these bills back on the drawing board.

The future of the legislation will tell us something important about the strength of human rights protections in Australia, which rely heavily on parliamentary bodies like the PJCIS to help raise the alarm when it comes to rights-infringing laws.The Conversation

Sarah Moulds, Lecturer of Law, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.