China’s ‘surveillance creep’: how big data COVID monitoring could be used to control people post-pandemic


Ausma Bernot, Griffith University; Alexander Trauth-Goik, University of Wollongong, and Sue Trevaskes, Griffith University

China has used big data to trace and control the outbreak of COVID-19. This has involved a significant endeavour to build new technologies and expand its already extensive surveillance infrastructure across the country.

In our recent study, we show how the State Council, the highest administrative government unit in China, plans to retain some of those new capabilities and incorporate them into the broader scheme of mass surveillance at a national level. This is likely to lead to tighter citizen monitoring in the long term.

This phenomenon of adopting a system of surveillance for one purpose and using it beyond its originally intended aims is known as "function creep".

In China, this involves the use of big data initially collected to monitor people’s COVID status and movements around the country to keep the pandemic under control. The Chinese government has been quite successful at this, despite recent spikes in infections in eastern China.

But this big data exercise has also served as an opportunity for authorities to patch gaps in the country’s overall surveillance infrastructure and make it more cohesive, using the COVID crisis as cover to avoid citizen backlash.

Mass testing at a factory in Wuhan, where COVID was first detected in 2019.
AP

How China’s COVID surveillance system worked

Two key shifts have occurred to enable more comprehensive surveillance during the pandemic.

First, a more robust system was constructed to collect and monitor big data related to pandemic control.

Second, these data were then collated at the provincial level and transferred to a national, unified platform where they were analysed. This analysis focused on calculating each individual's level of risk of possible exposure to COVID.

This is how it worked. Every night, Chinese citizens received a QR code, known as a "health code", on their mobile phone. The code required users to upload personal information to a dedicated app to verify their identity (such as their national ID number and a biometric selfie), along with their body temperature, any COVID symptoms, and their recent travel history.

The system then assessed whether they had been in close contact with an infected person. If users received a green code on their phone, they were good to go. An orange code mandated seven days of home isolation, and a red code 14 days.
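
To make the mechanics concrete, here is a minimal sketch in Python of how such a colour-coded risk classification could work. It is purely illustrative: the field names, thresholds and rules are assumptions based on the behaviour described above, not the actual system's logic, which drew on far richer data collated on the national platform.

```python
from dataclasses import dataclass

# Hypothetical check-in record. Field names are assumptions for
# illustration only, not the schema of the actual health-code system.
@dataclass
class CheckIn:
    national_id: str
    temperature_c: float
    has_symptoms: bool
    visited_high_risk_area: bool
    close_contact_with_case: bool

def assign_health_code(record: CheckIn) -> tuple[str, int]:
    """Return a (colour, isolation_days) pair using assumed rules that
    mirror the behaviour described in the article: green = free movement,
    orange = 7-day home isolation, red = 14-day isolation."""
    if record.close_contact_with_case or record.temperature_c >= 38.0:
        return ("red", 14)
    if record.has_symptoms or record.visited_high_risk_area:
        return ("orange", 7)
    return ("green", 0)

print(assign_health_code(CheckIn("example-id", 36.5, False, False, False)))
# ('green', 0)
```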

The system was not perfect. Some people suspected their codes remained red because they were from the hotspot province of Hubei, or questioned why their codes unexpectedly turned red for just one day. Others reported the codes incorrectly identified their exposure risk.

Staff checking people’s green ‘health codes’ at the gate of the entrance to a park in Shanghai.
Yang Jianzheng/AP

How Chinese people feel about this data collection

Multiple studies suggest that although the system was intrusive, this state-controlled, big data monitoring was supported by the public because of how effective it was in containing the epidemic.

A recent study found the public viewed this comprehensive data collection as positive and that it helped strengthen the legitimacy of the Chinese Communist Party.

The Chinese public also viewed the initial criticism from Western countries as unfair and hypocritical, given many subsequently adopted varying forms of big data collection systems themselves.




Read more:
From ground zero to zero tolerance – how China learnt from its COVID response to quickly stamp out its latest outbreak


One scholar, Chuncheng Liu, canvassed Chinese social media and observed a notable social backlash against this type of criticism. After the state of South Australia released a new QR code system, for example, one comment read:

China QR code – ‘invasion of privacy, invasion of human rights’. Australian QR Code – ‘Fantastic new tool’.

On the flip side, there has been some public resistance in China over the potential for health codes to be re-engineered and used for other purposes.

The city of Hangzhou was the first to implement the health codes in February 2020. However, in May 2020, when the municipal government proposed re-purposing the app for post-pandemic uses (such as mapping people's lifestyle habits), it was met with strong citizen backlash.

Concerns were further exacerbated when health code data was hacked in Beijing in December 2020. The hackers published the selfies that celebrities had used for biometric identity verification, as well as their COVID testing data.

How these systems can be used for other purposes

When big data systems become as expansive as they are now in China, they can shape, direct and even coerce behaviours en masse. The implications of this in a surveillance state are concerning.

In the Guangxi autonomous region in March 2020, for example, one party member suggested using pandemic surveillance to “search for people that couldn’t previously be found”, effectively turning a health service into a security tool.




Read more:
How China is controlling the COVID origins narrative — silencing critics and locking up dissenters


Another example is how China’s notorious “social credit system” was revamped during the pandemic.

The system was originally set up before the pandemic to rate myriad “trustworthy” and “untrustworthy” behaviours among individuals and businesses. Good scores came with benefits such as cheaper transportation.

During the pandemic, this system was expanded to reward people for "good pandemic behaviour" and punish "bad pandemic behaviour". Two academics in the Netherlands found punishments were imposed for selling medical supplies at inflated prices, selling counterfeit supplies, or violating quarantine.

Such behaviour could get a person blacklisted, which might deny them the ability to travel or even serve as a civil servant, among other restrictions.




Read more:
Hundreds of Chinese citizens told me what they thought about the controversial social credit system


As we argue, it is crucial these surveillance systems embed principles of transparency and accountability within their design. If these systems aren’t thoroughly tested or their potential future uses questioned, people can become habituated to top-down surveillance and function creep.

To what extent these new surveillance systems will direct the behaviours of people in China remains to be seen. A lot depends on how the public reacts to them, especially as they are used for non-health purposes after the pandemic.

Ausma Bernot, PhD Candidate, Griffith University; Alexander Trauth-Goik, University of Wollongong, and Sue Trevaskes, Head of School (Interim), Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Police access to COVID check-in data is an affront to our privacy. We need stronger and more consistent rules in place


Graham Greenleaf, UNSW and Katharine Kemp, UNSW

The Australian Information Commissioner this week called for a ban on police accessing QR code check-in data, unless for COVID-19 contact tracing purposes.

State police have already accessed this data on at least six occasions for unrelated criminal investigations, including in Queensland and Western Australia — the latter of which has now banned this. Victorian police also attempted access at least three times, according to reports, but were unsuccessful.

The ACT is considering a law preventing police from engaging in such activity, but the position is different in every state and territory.

We need cooperation and clarity regarding how COVID surveillance data is handled, to protect people’s privacy and maintain public trust in surveillance measures. There is currently no consistent, overarching law that governs these various measures — which range from QR code check-ins to vaccine certificates.




Read more:
Australia has all but abandoned the COVIDSafe app in favour of QR codes (so make sure you check in)


Last week the Office of the Australian Information Commissioner released a set of five national COVID-19 privacy principles as a guide to “best practice” for governments and businesses handling personal COVID surveillance data.

But we believe these principles are vague and fail to address a range of issues, including whether or not police can access our data. We propose more detailed and consistent laws to be enacted throughout Australia, covering all COVID surveillance.

Multiple surveillance tools are being used

There are multiple COVID surveillance tools currently in use in Australia.

Proximity tracking through the COVIDSafe app has been available since last year, aiming to identify individuals who have come into contact with an infected person. But despite costing millions to develop, the app has reportedly disclosed only 17 unique unknown cases.

Over the past year we’ve also seen widespread attendance tracking via QR codes, now required by every state and territory government. This is probably the most extensive surveillance operation Australia has ever seen, with millions of check-ins each week. Fake apps have even emerged in an effort to bypass contact tracing.

In addition, COVID status certificates showing vaccination status are now available on MyGov (subject to problems of registration failure and forgery). They don’t yet display COVID test results or COVID recovery status (as they do in countries in the European Union).

It’s unclear exactly where Australian residents will need to show COVID status certificates, but this will likely include for travel between states or local government areas, attendance at events (such as sport events and funerals) and hospitality venues, and in some “no jab no job” workplaces.

As a possible substitute for hotel quarantine, South Australia is currently testing precise location tracking to enable home quarantine. This combines geolocation tracking of phones with facial recognition of the person answering the phone.

The proposed principles don’t go far enough

The vague privacy principles proposed by Australia’s privacy watchdogs are completely inadequate in the face of this complexity. They are mostly “privacy 101” requirements of existing privacy laws.

Here they are summarised, with some weaknesses noted.

  1. Data minimisation. The personal information collected should be limited to the minimum necessary to achieve a legitimate purpose.
  2. Purpose limitation. Information collected to mitigate COVID-19 risks “should generally not be used for other purposes”. The term “generally” is undefined, and police are not specifically excluded.
  3. Security. “Reasonable steps” should be taken to protect this data. Data localisation (storing it in Australia) is mentioned in the principles, but data encryption is not.
  4. Data retention/deletion. The data should be deleted once no longer needed for the purpose for which it was collected. But there is no mention of a “sunset clause” requiring whole surveillance systems to also be dismantled when no longer needed.
  5. Regulation under privacy law. The data should be protected by “an enforceable privacy law to ensure individuals have redress if their information is mishandled”. The implied call for South Australia and Western Australia to enact privacy laws is welcome.

A proposal for detailed and consistent laws

Since COVID-19 surveillance requirements are justified as “emergency measures”, they also require emergency quality protections.

Last year, the federal COVIDSafe Act provided the strongest privacy protections for any category of personal information collected in Australia. Although the app was a dud, the Act was not.

The EU has enacted thorough legislation for EU COVID digital certificates, which are being used across EU country borders. We can learn from this and establish principles that apply to all types of COVID surveillance in Australia. Here’s what we recommend:

  1. Legislation, not regulations, of “emergency quality”. Regulations can be changed at will by the responsible minister, whereas changes in legislation require parliamentary approval. Regarding COVID surveillance data, a separate act in each jurisdiction should state the main rules and there should be no exceptions to these — not even for police or ASIO.
  2. Prevent unjustifiable discrimination. This would include preventing discrimination against those who are unable to get vaccinated such as for health reasons, or those without access to digital technology such as mobile phones. In the EU, it’s free to obtain a paper certificate and these must be accepted.
  3. Prohibit and penalise unauthorised use of data. Permitted uses of surveillance data should be limited, with no exceptions for police or intelligence. COVID status certificates may be abused by employers or venues that decide to grant certain rights or privileges based on them, without authorisation by law.
  4. Give individuals the right to sue. If anyone breaches the acts we propose above for each state, individuals concerned should be able to sue in the courts for compensation for an interference with privacy.
  5. Prevent surveillance creep. The law should make it as difficult as possible for any extra uses of the data to be authorised, say for marketing or town planning.
  6. Minimise data collection. The minimum data necessary should be collected, and not collected with other data. If data is only needed for inspection, it should not be retained.
  7. Ongoing data deletion. Data must be deleted periodically once it is no longer needed for pandemic purposes. In the EU, COVID certificate data inspected for border crossings is not recorded or retained.
  8. A “sunset clause” for the whole system. Emergency measures should provide for their own termination. The law requires the COVIDSafe app to be terminated when it’s no longer required or effective, along with its data. A similar plan should be in place for QR-code data and COVID status certificates.
  9. Active supervision and reports. Privacy authorities should have clear obligations to report on COVID surveillance operations, and express views on termination of the system.
  10. Transparency. Overarching all of these principles should be requirements for transparency. This should include publicly releasing medical/epidemiological advice on necessary measures, open-source software in all cases of digital COVID surveillance, initial privacy impact assessments and sunset clause recommendations.

COVID-19 has necessitated the most pervasive surveillance most of us have ever experienced. But such surveillance is really only justifiable as an emergency measure. It must not become a permanent part of state surveillance.




Read more:
Coronavirus: digital contact tracing doesn’t have to sacrifice privacy




Graham Greenleaf, Professor of Law and Information Systems, UNSW and Katharine Kemp, Senior Lecturer, Faculty of Law & Justice, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How hackers can use message mirroring apps to see all your SMS texts — and bypass 2FA security



Syed Wajid Ali Shah, Deakin University; Jongkil Jay Jeong, Deakin University, and Robin Doss, Deakin University

It's now well known that usernames and passwords aren't enough to securely access online services. A recent study highlighted that more than 80% of all hacking-related breaches happen due to compromised and weak credentials, with three billion username/password combinations stolen in 2016 alone.

As such, the implementation of two-factor authentication (2FA) has become a necessity. Generally, 2FA aims to provide an additional layer of security to the relatively vulnerable username/password system.

It works too. Figures suggest users who enabled 2FA ended up blocking about 99.9% of automated attacks.

But as with any good cybersecurity solution, attackers can quickly come up with ways to circumvent it. They can bypass 2FA through the one-time codes sent as an SMS to a user’s smartphone.

Yet many critical online services in Australia still use SMS-based one-time codes, including myGov and the Big 4 banks: ANZ, Commonwealth Bank, NAB and Westpac.




Read more:
A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?


So what’s the problem with SMS?

Major vendors such as Microsoft have urged users to abandon 2FA solutions that leverage SMS and voice calls. This is because SMS is notorious for its poor security, leaving it open to a host of different attacks.

For example, SIM swapping has been demonstrated as a way to circumvent 2FA. SIM swapping involves an attacker convincing a victim's mobile service provider they themselves are the victim, and then requesting the victim's phone number be switched to a device of their choice.

SMS-based one-time codes have also been shown to be compromised through readily available tools such as Modlishka, which leverage a technique called reverse proxy. This facilitates communication between the victim and a service being impersonated.

So in the case of Modlishka, it will intercept communication between a genuine service and a victim, and will track and record the victim's interactions with the service, including any login credentials they may use.

In addition to these existing vulnerabilities, our team has found additional vulnerabilities in SMS-based 2FA. One particular attack exploits a feature provided on the Google Play Store to automatically install apps from the web to your Android device.

Due to syncing services, if a hacker manages to compromise your Google login credentials on their own device, they can then install a message mirroring app directly onto your smartphone.

If an attacker has access to your credentials and manages to log into your Google Play account on a laptop (although you will receive a prompt), they can then install any app they’d like automatically onto your smartphone.

The attack on Android

Our experiments revealed a malicious actor can remotely access a user's SMS-based 2FA with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronise users' notifications across different devices.

Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily-available message mirroring app on a victim’s smartphone via Google Play.

This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure.

Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly.

For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.

Although multiple conditions must be fulfilled for the aforementioned attack to work, it still demonstrates the fragile nature of SMS-based 2FA methods.

More importantly, this attack doesn’t need high-end technical capabilities. It simply requires insight into how these specific apps work and how to intelligently use them (along with social engineering) to target a victim.

The threat is even more real when the attacker is a trusted individual (e.g., a family member) with access to the victim’s smartphone.

What’s the alternative?

To remain protected online, you should check whether your initial line of defence is secure. First check your password to see if it’s compromised. There are a number of security programs that will let you do this. And make sure you’re using a well-crafted password.
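
One widely used way to check a password without revealing it is the k-anonymity range lookup offered by the Have I Been Pwned "Pwned Passwords" service: only the first five characters of the password's SHA-1 hash ever leave your machine. The sketch below, using only Python's standard library, assumes that public endpoint; it is illustrative rather than an endorsement of any particular service.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    using the Have I Been Pwned range API. Only the first five hex
    characters of the SHA-1 hash are sent (k-anonymity), never the
    password or its full hash."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A breached example password; a non-zero result means it should never be reused.
    print(pwned_count("password123"))
```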

We also recommend you limit the use of SMS as a 2FA method if you can. You can instead use app-based one-time codes, such as through Google Authenticator. In this case the code is generated within the Google Authenticator app on your device itself, rather than being sent to you.
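
These app-based codes are typically time-based one-time passwords (TOTP, standardised in RFC 6238): the app and the service share a secret, and each independently derives the same short-lived code from that secret and the current time, so no code ever travels over the phone network. A minimal Python sketch of the algorithm, using only the standard library, looks like this (the secret shown is a throwaway example, not tied to any real account):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate an RFC 6238 time-based one-time password from a
    base32-encoded shared secret, the same scheme used by authenticator
    apps. The code is computed on-device from the secret and the clock."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example with a throwaway secret; real secrets come from the service's QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```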

However, this approach can also be compromised by hackers using some sophisticated malware. A better alternative would be to use dedicated hardware devices such as YubiKey.

The YubiKey, first developed in 2008, is an authentication device designed to support one-time password and 2FA protocols without having to rely on SMS-based 2FA.
Shutterstock

These are small USB (or near-field communication-enabled) devices that provide a streamlined way to enable 2FA across different services.

Such physical devices need to be plugged into or brought into close proximity of a login device as a part of 2FA, therefore mitigating the risks associated with visible one-time codes, such as codes sent by SMS.

It must be stressed that an underlying condition for any 2FA alternative is that users themselves must maintain some level of active participation and responsibility.

At the same time, further work must be carried out by service providers, developers and researchers to develop more accessible and secure authentication methods.

Essentially, these methods need to go beyond 2FA and towards a multi-factor authentication environment, where multiple methods of authentication are simultaneously deployed and combined as needed.




Read more:
Can I still be hacked with 2FA enabled?




Syed Wajid Ali Shah, Research Fellow, Centre for Cyber Security Research and Innovation, Deakin University; Jongkil Jay Jeong, CyberCRC Research Fellow, Centre for Cyber Security Research and Innovation (CSRI), Deakin University, and Robin Doss, Research Director, Centre for Cyber Security Research and Innovation, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data



Katharine Kemp, UNSW

The Federal Court has found Google misled some users about personal location data collected through Android devices for two years, from January 2017 to December 2018.

The Australian Competition & Consumer Commission (ACCC) says this decision is a “world first” in relation to Google’s location privacy settings. The ACCC now intends to seek various orders against Google. These will include monetary penalties under the Australian Consumer Law (ACL), which could be up to A$10 million or 10% of Google’s local turnover.

Other companies too should be warned that representations in their privacy policies and privacy settings could lead to similar liability under the ACL.

But this won’t be a complete solution to the problem of many companies concealing what they do with data, including the way they share consumers’ personal information.

How did Google mislead consumers about their location history?

The Federal Court found Google’s previous location history settings would have led some reasonable consumers to believe they could prevent their location data being saved to their Google account. In fact, selecting “Don’t save my Location History in my Google Account” alone could not achieve this outcome.

Users needed to change an additional, separate setting to stop location data from being saved to their Google account. In particular, they needed to navigate to “Web & App Activity” and select “Don’t save my Web & App Activity to my Google Account”, even if they had already selected the “Don’t save” option under “Location History”.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


ACCC Chair Rod Sims responded to the Federal Court’s findings, saying:

This is an important victory for consumers, especially anyone concerned about their privacy online, as the Court’s decision sends a strong message to Google and others that big businesses must not mislead their customers.

Google has since changed the way these settings are presented to consumers, but is still liable for the conduct the court found was likely to mislead some reasonable consumers for two years in 2017 and 2018.

ACCC has misleading privacy policies in its sights

This is the second recent case in which the ACCC has succeeded in establishing misleading conduct in a company’s representations about its use of consumer data.

In 2020, the medical appointment booking app HealthEngine admitted it had disclosed more than 135,000 patients’ non-clinical personal information to insurance brokers without the informed consent of those patients. HealthEngine paid fines of A$2.9 million, including approximately A$1.4 million relating to this misleading conduct.




Read more:
How safe are your data when you book a COVID vaccine?


The ACCC has two similar cases in the wings, including another case regarding Google’s privacy-related notifications and a case about Facebook’s representations about a supposedly privacy-enhancing app called Onavo.

In bringing proceedings against companies for misleading conduct in their privacy policies, the ACCC is following the US Federal Trade Commission, which has sued many US companies for misleading privacy policies.

The ACCC has more cases in the wings about data privacy.
Shutterstock

Will this solve the problem of confusing and unfair privacy policies?

The ACCC’s success against Google and HealthEngine in these cases sends an important message to companies: they must not mislead consumers when they publish privacy policies and privacy settings. And they may receive significant fines if they do.

However, this will not be enough to stop companies from setting privacy-degrading terms for their users, if they spell such conditions out in the fine print. Such terms are currently commonplace, even though consumers are increasingly concerned about their privacy and want more privacy options.

Consider the US experience. The US Federal Trade Commission brought action against the creators of a flashlight app for publishing a privacy policy which didn’t reveal the app was tracking and sharing users’ location information with third parties.




Read more:
We need a code to protect our online privacy and wipe out ‘dark patterns’ in digital design


However, in the agreement settling this claim, the solution was for the creators to rewrite the privacy policy to disclose that users’ location and device ID data are shared with third parties. The question of whether this practice was legitimate or proportionate was not considered.

Major changes to Australian privacy laws will also be required before companies will be prevented from pervasively tracking consumers who do not wish to be tracked. The current review of the federal Privacy Act could be the beginning of a process to obtain fairer privacy practices for consumers, but any reforms from this review will be a long time coming.


This is an edited version of an article that originally appeared on UNSW Newsroom.

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Privacy erosion by design: why the Federal Court should throw the book at Google over location data tracking



Jeannie Marie Paterson, The University of Melbourne and Elise Bant, The University of Western Australia

The Australian Competition and Consumer Commission has had a significant win against Google. The Federal Court found Google misled some Android users about how to disable personal location tracking.

Will this decision actually change the behaviour of the big tech companies? The answer will depend on the size of the penalty awarded in response to the misconduct.




Read more:
ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data


In theory, the penalty is A$1.1 million per contravention. There is a contravention each time a reasonable person in the relevant class is misled. So the total award could, in theory, amount to many millions of dollars.

But the actual penalty will depend on how the court characterises the misconduct. We believe Google’s behaviour should not be treated as a simple accident, and the Federal Court should issue a heavy fine to deter Google and other companies from behaving this way in future.

Misleading conduct and privacy settings

The case arose from the representations made by Google to users of Android phones in 2018 about how it obtained personal location data.

The Federal Court held Google had misled some consumers by representing that “having Web & App Activity turned ‘on’ would not allow Google to obtain, retain and use personal data about the user’s location”.

In other words, some consumers were misled into thinking they could control Google’s location data collection practices by switching “off” Location History, whereas Web & App Activity also needed to be disabled to provide this protection.




Read more:
The ACCC is suing Google for misleading millions. But calling it out is easier than fixing it


The ACCC also argued consumers reading Google’s privacy statement would be misled into thinking personal data was collected for their own benefit rather than Google’s. However, the court dismissed this argument on the grounds that reasonable users wanting to turn the Location History “off”

would have assumed that Google was obtaining as much commercial advantage as it could from use of the user’s personal location data.

This is surprising and might deserve further attention from regulators concerned to protect consumers from corporations “data harvesting” for profit.

How much should Google pay?

The penalty and other enforcement orders against Google will be made at a later date.

The aim of the penalty is to deter Google specifically, and other firms like Google, from engaging in misleading conduct again. If penalties are too low they may be treated by wrongdoing firms as merely a “cost of doing business”.

However, in circumstances where there is a high degree of corporate culpability, the Federal Court has shown willingness to award higher amounts than in the past. This has occurred even where the regulator has not sought higher penalties. In the recent Volkswagen Aktiengesellschaft v ACCC judgement, the full Federal Court confirmed an award of A$125 million against Volkswagen for making false representations about compliance with Australian diesel emissions standards.

The Federal Court found Google’s information about local data tracking was misleading.
Shutterstock

In setting Google’s penalty, a court will consider factors such as the nature and extent of the misleading conduct and any loss to consumers. The court will also take into account whether the wrongdoer was involved in “deliberate, covert or reckless conduct, as opposed to negligence or carelessness”.

At this point, Google may well argue that only some consumers were misled, that it was possible for consumers to be informed if they read more about Google’s privacy policies, that it was only one slip-up, and that its contravention of the law was unintentional. These might seem to reduce the seriousness or at least the moral culpability of the offence.

But we argue they should not unduly cap the penalty awarded. Google’s conduct may not appear as “egregious and deliberately deceptive” as the Volkswagen case.

But equally Google is a massively profitable company that makes its money precisely from obtaining, sorting and using its users’ personal data. We think therefore the court should look at the number of Android users potentially affected by the misleading conduct and Google’s responsibility for its own choice architecture, and work from there.

Only some consumers?

The Federal Court acknowledged not all consumers would be misled by Google’s representations. The court accepted many consumers would simply accept the privacy terms without reviewing them, an outcome consistent with the so-called privacy paradox. Others would review the terms and click through to more information about the options for limiting Google’s use of personal data to discover the scope of what was collected under the “Web & App Activity” default.




Read more:
The privacy paradox: we claim we care about our data, so why don’t our actions match?


This might sound like the court was condoning consumers’ carelessness. In fact the court made use of insights from economists about the behavioural biases of consumers in making decisions.

Consumers have limited time to read legal terms and limited ability to understand the future risks arising from those terms. Thus, if consumers are concerned about privacy they might try to limit data collection by selecting various options, but are unlikely to be able to read and understand privacy legalese like a trained lawyer or with the background understanding of a data scientist.

If one option is labelled “Location History”, it is entirely rational for everyday consumers to assume turning it off limits location data collection by Google.

The number of consumers misled by Google’s representations will be difficult to assess. But even if a small proportion of Android users were misled, that will be a very large number of people.

There was evidence before the Federal Court that, after press reports of the tracking problem, the number of consumers switching off the "Web" option increased by 500%. Moreover, Google makes considerable profit from the large amounts of personal data it gathers and retains, and profit is important when it comes to deterrence.

Google’s choice architecture

It has also been revealed that some employees at Google were not aware of the problem until an exposé in the press. An urgent meeting was held, referred to internally as the “Oh Shit” meeting.

The individual Google employees at the “Oh Shit” meeting may not have been aware of the details of the system. But that is not the point.

It is the company's fault that is at issue here. And a company's culpability is not just determined by what some executive or senior employee knew or didn't know about its processes. Google's corporate mindset is manifested or revealed in the systems it designs and puts in place.




Read more:
Inducing choice paralysis: how retailers bury customers in an avalanche of options


Google designed the information system that faced consumers trying to manage their privacy settings. This kind of system design is sometimes referred to as “choice architecture”.

Here the choices offered to consumers steered them away from opting out of Google collecting, retaining and using personal location data.

The “Other Options” (for privacy) information failed to refer to the fact that location tracking was carried out via other processes beyond the one labelled “Location History”. Plus, the default option for “Web & App Activity” (which included location tracking) was set as “on”.

This privacy-eroding system arose via the design of the "choice architecture". It therefore warrants a serious penalty.

Jeannie Marie Paterson, Professor of Law, The University of Melbourne and Elise Bant, Professor of Law, The University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What is “upskirting” and what are your rights to privacy under the law?



Rick Sarre, University of South Australia

Queensland federal MP Andrew Laming has been accused of taking an inappropriate photograph of a young woman, Crystal White, in 2019, in which her underwear was showing. When challenged about the photo this week, he reportedly replied:

it wasn’t meant to be rude. I thought it was funny

Inappropriate photography is a criminal offence in Queensland. Whether or not Laming’s behaviour amounted to an offence for which he could be charged is a matter for the police to determine. (White is reportedly considering taking her complaint to police.)

So, what do the laws say about this kind of behaviour, and what rights to privacy do people have when it comes to indecent photographs taken by others?

What can ‘upskirting’ include?

A new term has entered the lexicon in this regard: “upskirting”. The act of upskirting is generally defined as taking a sexually intrusive photograph of someone without their permission.

It is not a recent phenomenon. There have been incidents in which people (invariably men) have placed cameras on their shoes and photographed “up” a woman’s skirt for prurient purposes. Other instances have involved placing cameras under stairs where women in dresses or skirts were likely to pass by.

The broader category of “upskirting” can also include indecent filming of anyone without their knowledge, including photographing topless female bathers at a public beach, covertly filming women undressing in their bedrooms, or installing a camera in a dressing room, public toilet or a swimming pool changing room.




Read more:
Andrew Laming: why empathy training is unlikely to work


With every new electronic device that comes on the market comes the possibility of inappropriate use and, thus, the creation of new criminal offences.

We saw that with the advent of small listening devices. With this technology, it was now possible to record private conversations, so legislators had to create offences under the law to deal with any inappropriate use.

The same thing happened with small (and now very affordable) drones, which made it possible to capture images of people in compromising positions, even from a distance. Our laws have been adjusted accordingly.

And in recent years, lawmakers have been faced with the same potential for inappropriate use with mobile phones. Such devices are now ubiquitous and improved technology has allowed people to record and photograph others at a moment’s notice — often impulsively, without proper thought.

How have legislators responded in Australia?

There is a patchwork array of laws across the country dealing with this type of photography and video recording.

In South Australia, for instance, it is against the law to engage in “indecent filming” of another person under part 5A of the state’s Summary Offences Act.

The term “upskirting” itself was used when amendments were made in 2007 to Victoria’s Summary Offences Act. This made it an offence for a person to observe or visually capture another person’s genital region without their consent.

In New South Wales, the law is equally specific in setting out the type of filming that is punishable under the law. It outlaws the filming of another person’s “private parts” for “sexual arousal or sexual gratification” without the consent of the person being filmed.

Queensland’s law, meanwhile, makes it an offence to:

observe or visually record another person, in circumstances where a reasonable adult would expect to be afforded privacy […] without the other person’s consent

Interestingly, the Queensland law is more broadly worded than the NSW, Victorian or South Australian laws since it makes it an offence to take someone’s picture in general, rather than specifying that it needs to be sexually explicit.

The maximum penalty for such an offence in Queensland is three years’ imprisonment.

What would need to be proven for a conviction

Just like any criminal offence, the prosecution in a case like this must first determine, before laying a charge, whether there’s enough evidence that could lead to a conviction and, moreover, whether such a prosecution is in the public interest.

Once the decision to charge is made, a conviction will only be possible if the accused pleads guilty or is found guilty beyond reasonable doubt. (Being a misdemeanour, this could only be by a magistrate, not a jury.)

The role of the criminal law here is to bring offending behaviour to account while also providing a deterrent for the future conduct of that person or any other persons contemplating such an act.




Read more:
View from The Hill: Morrison should appoint stand-alone minister for women and boot Andrew Laming to crossbench


As with any criminal law, its overarching purpose is to indicate society’s disdain for the behaviour. The need to protect victims from such egregious and lewd behaviour is an important consideration too.

Any decision by a Queensland magistrate to convict a person alleged to have taken an indecent photo would hang on three facts:

  • whether the photo was taken by the person accused
  • whether the victim believed she should have been afforded privacy
  • and whether she offered no consent to have the photo taken.

Other mitigating factors might come into play, however, including whether the photograph was impulsive and not premeditated, whether the image was immediately deleted, and whether the alleged offender showed any regret or remorse for his actions.

Recently a Queensland man, Justin McGufficke, pleaded guilty to upskirting offences in NSW after he took pictures up the skirts of teenage girls at a zoo while they were looking at animals.

In another case, a conviction for upskirting was deemed sufficient to deny a man permission to work with children in Victoria.

In a moment of impulsivity — and with the easy access of the mobile phone — anything can happen in today’s world. Poor judgements are common. Women are invariably the targets.

The laws on filming, recording and in some cases distributing the images of another person are clear — and the potential consequences for the accused are substantial. One would hope that any potential offenders are taking note.

Rick Sarre, Emeritus Professor of Law and Criminal Justice, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Not just complacency: why people are reluctant to use COVID-19 contact-tracing apps




Farkhondeh Hassandoust, Auckland University of Technology

This week’s announcement of two new COVID-19 vaccine pre-purchase deals is encouraging, but doesn’t mean New Zealanders should become complacent about using the NZ COVID Tracer app during the summer holidays.

The immunisation rollout won’t start until the second quarter of 2021, and the government is encouraging New Zealanders to continue using the app, including the recently upgraded bluetooth function, as part of its plan to manage the pandemic during the holiday period.




Read more:
How to keep COVID-19 at bay during the summer holidays — and help make travel bubbles a reality in 2021


During the past weeks, the number of daily scans has dropped significantly, down from just over 900,000 scans per day at the end of November to fewer than 400,000 in mid-December.

With no active cases of COVID-19 in the community, complacency might be part of the issue in New Zealand, but as our research in the US shows, worries about privacy and trust continue to make people reluctant to use contact-tracing apps.

Concerns about privacy and surveillance

We surveyed 853 people from every state in the US to identify the factors promoting or inhibiting their use of contact-tracing applications. Our survey reveals two seemingly contradictory findings.

Individuals are highly motivated to use contact-tracing apps, for the sake of their own health and that of society as a whole. But the study also found people are concerned about privacy, social disapproval and surveillance.

The findings suggest people’s trust in the data collectors is dependent on the technology features of these apps (for example, information sensitivity and anonymity) and the privacy protection initiatives instigated by the authorities.

With the holiday season just around the corner — and even though New Zealand is currently free of community transmission — our findings are pertinent. New Zealanders will travel more during the summer period, and it is more important than ever to use contact-tracing apps to improve our chances of getting on top of any potential outbreaks as quickly as possible.

How, then, to overcome concerns about privacy and trust and make sure New Zealanders use the upgraded app during summer?

The benefits of adopting contact-tracing apps are mainly in shared public health, and it is important these societal health benefits are emphasised. In order to quell concerns, data collectors (government and businesses) must also offer assurance that people’s real identity will be concealed.

It is the responsibility of the government and the office of the Privacy Commissioner to ensure all personal information is managed appropriately.




Read more:
An Australia–NZ travel bubble needs a unified COVID contact-tracing app. We’re not there


Transparency and data security

Our study also found that factors such as peer and social influence, regulatory pressures and previous experiences with privacy loss underlie people’s readiness to adopt contact-tracing apps.

The findings reveal that people expect regulatory protection if they are to use contact-tracing apps. This confirms the need for laws and regulations with strict penalties for those who collect, use, disclose or decrypt collected data for any purpose other than contact tracing.

The New Zealand government is working with third-party developers to complete the integration of other apps by the end of December to enable the exchange of digital contact-tracing information from different apps and technologies.

The Privacy Commissioner has already endorsed the bluetooth upgrade of the official NZ COVID Tracer app because of its focus on users’ privacy. And the Ministry of Health aims to release the source code for the app so New Zealanders can see how their personal data has been managed.

Throughout the summer, the government and ministry should emphasise the importance of using the contact-tracing app and assure New Zealanders about the security and privacy of their personal data.

Adoption of contact-tracing apps is no silver bullet in the battle against COVID-19, but it is a crucial element in New Zealand's collective public health response to the global pandemic.

Farkhondeh Hassandoust, Lecturer, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

83% of Australians want tougher privacy laws. Now’s your chance to tell the government what you want




Normann Witzleb, Monash University

Federal Attorney-General Christian Porter has called for submissions to the long-awaited review of the federal Privacy Act 1988.

This is the first wide-ranging review of privacy laws since the Australian Law Reform Commission produced a landmark report in 2008.

Australia has in the past often hesitated to adopt a strong privacy framework. The new review, however, provides an opportunity to improve data protection rules to an internationally competitive standard.

Here are some of the ideas proposed — and what’s at stake if we get this wrong.




Read more:
It’s time for privacy invasion to be a legal wrong


Australians care deeply about data privacy

Personal information has never had a more central role in our society and economy, and the government has a strong mandate to update Australia’s framework for the protection of personal information.

In the Australian Privacy Commissioner’s 2020 survey, 83% of Australians said they’d like the government to do more to protect the privacy of their data.

The intense debate about the COVIDSafe app earlier this year also shows Australians care deeply about their private information, even in a time of crisis.

Privacy laws and enforcement can hardly keep up with the ever-increasing digitalisation of our lives. Data-driven innovation provides valuable services that many of us use and enjoy. However, the government’s issues paper notes:

As Australians spend more of their time online, and new technologies emerge, such as artificial intelligence, more personal information about individuals is being captured and processed, raising questions as to whether Australian privacy law is fit for purpose.

The pandemic has accelerated the existing trend towards digitalisation and created a range of new privacy issues including working or studying at home, and the use of personal data in contact tracing.

Australians are rightly concerned they are losing control over their personal data.

So there’s no question the government’s review is sorely needed.

Issues of concern for the new privacy review

The government’s review follows the Australian Competition and Consumer Commission’s Digital Platforms Inquiry, which found that some data practices of digital platforms are unfair and undermine consumer trust. We rely heavily on digital platforms such as Google and Facebook for information, entertainment and engagement with the world around us.

Our interactions with these platforms leave countless digital traces that allow us to be profiled and tracked for profit. The Australian Competition and Consumer Commission (ACCC) found that the digital platforms make it hard for consumers to resist these practices and to make free and informed decisions regarding the collection, use and disclosure of their personal data.

The government has committed to implement most of the ACCC’s recommendations for stronger privacy laws to give us greater consumer control.

However, the reforms must go further. The review also provides an opportunity to address some long-standing weaknesses of Australia’s privacy regime.

The government’s issues paper, released to inform the review, identified several areas of particular concern. These include:

  • the scope of application of the Privacy Act, in particular the definition of “personal information” and current private sector exemptions

  • whether the Privacy Act provides an effective framework for promoting good privacy practices

  • whether individuals should have a direct right to sue for a breach of privacy obligations under the Privacy Act

  • whether a statutory tort for serious invasions of privacy should be introduced into Australian law, allowing Australians to go to court if their privacy is invaded

  • whether the enforcement powers of the Privacy Commissioner should be strengthened.

While most recent attention relates to improving consumer choice and control over their personal data, the review also brings back onto the agenda some never-implemented recommendations from the Australian Law Reform Commission’s 2008 review.

These include introducing a statutory tort for serious invasions of privacy, and extending the coverage of the Privacy Act.

Exemptions for small business and political parties should be reviewed

The Privacy Act currently contains several exemptions that limit its scope. The two most contentious exemptions have the effect that political parties and most business organisations need not comply with the general data protection standards under the Act.

The small business exemption is intended to reduce red tape for small operators. However, largely unknown to the Australian public, it means the vast majority of Australian businesses are not legally obliged to comply with standards for fair and safe handling of personal information.

Procedures for compulsory venue check-ins under COVID health regulations are just one recent illustration of why this is a problem. Some people have raised concerns that customers’ contact-tracing data, in particular collected via QR codes, may be exploited by marketing companies for targeted advertising.

Under current privacy laws, cafe and restaurant operators are exempt from complying with certain privacy obligations.
Shutterstock

Under current privacy laws, cafe and restaurant operators are generally exempt from privacy obligations, such as undertaking due diligence checks on the third-party providers they use to collect customers' data.

The political exemption is another area of need of reform. As the Facebook/Cambridge Analytica scandal showed, political campaigning is becoming increasingly tech-driven.

However, Australian political parties are exempt from complying with the Privacy Act and anti-spam legislation. This means voters cannot effectively protect themselves against data harvesting for political purposes and micro-targeting in election campaigns through unsolicited text messages.

There is a good case for arguing political parties and candidates should be subject to the same rules as other organisations. It’s what most Australians would like and, in fact, wrongly believe is already in place.




Read more:
How political parties legally harvest your data and use it to bombard you with election spam


Trust drives innovation

Trust in digital technologies is undermined when data practices come across as opaque, creepy or unsafe.

There is increasing recognition that data protection drives innovation and adoption of modern applications, rather than impedes it.


The COVIDSafe app is a good example. When that app was debated, the government accepted that robust privacy protections were necessary to achieve a strong uptake by the community.

We would all benefit if the government saw that this same principle applies to other areas of society where our precious data is collected.


Information on how to make a submission to the federal government review of the Privacy Act 1988 can be found here.




Read more:
People want data privacy but don’t always know what they’re getting




Normann Witzleb, Associate Professor in Law, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you’re a voter, a director, a doctor, a gun owner, on welfare, pay tax, have a driver’s licence or Medicare card – our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows the federal, state, territory and local governments to provide pensions, elections, parks, courts and hospitals, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. Previously called the “Data Sharing and Release” legislation, the word “transparency” has now replaced “release” to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Office of the Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Robert, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics' concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don't have a choice.

However, the bill does exclude some data sharing. If you're a criminologist researching law enforcement, for example, you won't have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches. Although as RoboDebt revealed, they often stick to denial. Universities are also often wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of de-identified personal information and lots of sharing will emphasise data matching.

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy




Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.