Vaccination status – when your medical information is private and when it’s not



Megan Prictor, The University of Melbourne

In the US, some National Basketball Association (NBA) players have recently asserted their right to privacy over their COVID vaccination status. In Australia, discussion of vaccine passports has also highlighted this issue.

We value the idea that our medical information is private and subject to special protection and that our doctor can’t freely share it with others. Yet suddenly, it seems we might be asked to hand over information about our vaccination status in many different situations.

It might be so we can keep doing our job, go into shops and restaurants or travel. It might make us uneasy. But can we refuse to tell others our vaccination status on privacy grounds? What does the law in Australia say about who can ask for it, and why, and what they can do with it?




Read more:
‘Are you double dosed?’ How to ask friends and family if they’re vaccinated, and how to handle it if they say no


What we already disclose

Vaccinations and medical exemptions are recorded on the Australian Immunisation Register operated by the federal government.

Information from the register is used to create immunisation history statements and COVID digital certificates. This information can then flow through to check-in apps to let us prove our vaccination status when we are asked to.

It’s understandable to think our health information should be secret – kept between us and our doctor. But the law – principally the Australian Privacy Act and health records laws in many states – allows it to be collected by other people if certain conditions are met. And it’s not only the doctor’s clinic and other health services where this information is allowed to move around.

For instance, the No Jab, No Play legislation in Victoria, designed to increase immunisation rates in young children, means proof of a child’s vaccination status must be given in order for the child to access kindergarten.

Adults have to disclose information about medical conditions and disabilities to organisations like VicRoads in order to obtain a driver licence. We might even disclose a health condition to our employer so “reasonable adjustments” can be made to help us keep working.

So there are many examples of disclosing health information well beyond the doctor’s clinic walls, and all of them are provided for by law.




Read more:
Health workers are among the COVID vaccine hesitant. Here’s how we can support them safely


Sensitive information

Our vaccination status is classified as “health information” under Australia’s privacy laws.

Health information falls into a larger category of “sensitive information” – information that requires the most careful handling. The Australian Privacy Principles (APPs) in our Privacy Act set out the rules for how this information can be collected, used and disclosed.

The APPs say a business or employer (an APP entity) can only collect sensitive information like our vaccination status under certain conditions. An example is if the information is reasonably necessary for the business’s activities and we give our consent.

For this consent to be valid it must be given freely. People can’t be threatened or intimidated into disclosing their vaccination status.

Employers can mandate vaccination – as some businesses are doing – if it is “lawful and reasonable”. In this situation, an employee refusing to disclose their vaccination status would likely be in breach of a lawful and reasonable direction by their employer. Any consequences would be covered by the terms of their employment contract.




Read more:
The 9 psychological barriers that lead to COVID-19 vaccine hesitancy and refusal


Public health and consequences

The collection of our vaccination status might also be allowed by other Australian laws, such as public health orders and directions. The mandatory collection of vaccination status in the aged-care sector is a good example.

Where proof of vaccination becomes a requirement of entering a premises or working in a particular job, we can choose to keep that information private, but not without consequences. Our privacy is not protected absolutely – the trade-off might be that we are denied entry or refused employment.

Information about a person’s vaccination status can only be collected by “lawful and fair” means – for example, by asking them directly – not by deception or without their knowledge.

Separate rules say what can then be done with the information. Generally, it can’t be used for a different purpose than it was collected for, or shared with other people or organisations, unless an exception applies.

Although private sector employers’ handling of employee records is exempt from the Australian Privacy Principles, they should still store this information securely and make sure it is not used and disclosed unnecessarily.

covid vaccination proof on mobile phone
Many Australians will soon be asked to show proof of vaccination to enter venues or workplaces.



Read more:
If privacy is increasing for My Health Record data, it should apply to all medical records


But isn’t privacy a human right?

Privacy is recognised as a fundamental human right in the Universal Declaration of Human Rights and other international human rights documents.

Australia is a signatory to the International Covenant on Civil and Political Rights, which states: “no-one shall be subjected to arbitrary or unlawful interference with his privacy” (Article 17.1).

But this right is not absolute and it can be limited by national measures “in time of public emergency” (Article 4.1). On the flip side, any requirement to disclose vaccination status is shaped by human rights principles so that the requirement must be reasonable, proportionate and necessary.

It must also take into account the risk of discrimination. Our Human Rights Commission has outlined how certain people might be at particular risk of discrimination related to sharing their vaccine status. They might have difficulty using technology or not have access to it. So, even those who have been vaccinated might find it difficult to provide proof.

The World Health Organisation says people who don’t disclose their vaccination status shouldn’t be denied participation in public life.

Although health information is protected under Australian law, the law also allows this information to be collected, used and shared when reasonably necessary.

Privacy is not absolute. The COVID emergency limits some privacy protections in favour of public health goals. We need to be alert to the trade-offs and potential discrimination – particularly when access to jobs and services depends on the disclosure of vaccine status.

Megan Prictor, Senior Research Fellow in Law, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Police access to COVID check-in data is an affront to our privacy. We need stronger and more consistent rules in place


Graham Greenleaf, UNSW and Katharine Kemp, UNSW

The Australian Information Commissioner this week called for a ban on police accessing QR code check-in data, unless for COVID-19 contact tracing purposes.

State police have already accessed this data on at least six occasions for unrelated criminal investigations, including in Queensland and Western Australia — the latter of which has now banned this. Victorian police also attempted access at least three times, according to reports, but were unsuccessful.

The ACT is considering a law preventing police from engaging in such activity, but the position is different in every state and territory.

We need cooperation and clarity regarding how COVID surveillance data is handled, to protect people’s privacy and maintain public trust in surveillance measures. There is currently no consistent, overarching law that governs these various measures — which range from QR code check-ins to vaccine certificates.




Read more:
Australia has all but abandoned the COVIDSafe app in favour of QR codes (so make sure you check in)


Last week the Office of the Australian Information Commissioner released a set of five national COVID-19 privacy principles as a guide to “best practice” for governments and businesses handling personal COVID surveillance data.

But we believe these principles are vague and fail to address a range of issues, including whether or not police can access our data. We propose that more detailed and consistent laws be enacted throughout Australia, covering all COVID surveillance.

Multiple surveillance tools are being used

There are multiple COVID surveillance tools currently in use in Australia.

Proximity tracking through the COVIDSafe app has been available since last year, aiming to identify individuals who have come into contact with an infected person. But despite costing millions to develop, the app has reportedly disclosed only 17 unique unknown cases.

Over the past year we’ve also seen widespread attendance tracking via QR codes, now required by every state and territory government. This is probably the most extensive surveillance operation Australia has ever seen, with millions of check-ins each week. Fake apps have even emerged in an effort to bypass contact tracing.

In addition, COVID status certificates showing vaccination status are now available on MyGov (subject to problems of registration failure and forgery). They don’t yet display COVID test results or COVID recovery status, as equivalent certificates in European Union countries do.

It’s unclear exactly where Australian residents will need to show COVID status certificates, but this will likely include for travel between states or local government areas, attendance at events (such as sporting events and funerals) and hospitality venues, and in some “no jab no job” workplaces.

As a possible substitute for hotel quarantine, South Australia is currently testing precise location tracking to enable home quarantine. This combines geolocation tracking of phones with facial recognition of the person answering the phone.

The proposed principles don’t go far enough

The vague privacy principles proposed by Australia’s privacy watchdogs are completely inadequate in the face of this complexity. They are mostly “privacy 101” requirements of existing privacy laws.

Here they are summarised, with some weaknesses noted.

  1. Data minimisation. The personal information collected should be limited to the minimum necessary to achieve a legitimate purpose.
  2. Purpose limitation. Information collected to mitigate COVID-19 risks “should generally not be used for other purposes”. The term “generally” is undefined, and police are not specifically excluded.
  3. Security. “Reasonable steps” should be taken to protect this data. Data localisation (storing it in Australia) is mentioned in the principles, but data encryption is not.
  4. Data retention/deletion. The data should be deleted once no longer needed for the purpose for which it was collected. But there is no mention of a “sunset clause” requiring whole surveillance systems to also be dismantled when no longer needed.
  5. Regulation under privacy law. The data should be protected by “an enforceable privacy law to ensure individuals have redress if their information is mishandled”. The implied call for South Australia and Western Australia to enact privacy laws is welcome.

A proposal for detailed and consistent laws

Since COVID-19 surveillance requirements are justified as “emergency measures”, they also require emergency-quality protections.

Last year, the federal COVIDSafe Act provided the strongest privacy protections for any category of personal information collected in Australia. Although the app was a dud, the Act was not.

The EU has enacted thorough legislation for EU COVID digital certificates, which are being used across EU country borders. We can learn from this and establish principles that apply to all types of COVID surveillance in Australia. Here’s what we recommend:

  1. Legislation, not regulations, of “emergency quality”. Regulations can be changed at will by the responsible minister, whereas changes in legislation require parliamentary approval. Regarding COVID surveillance data, a separate act in each jurisdiction should state the main rules and there should be no exceptions to these — not even for police or ASIO.
  2. Prevent unjustifiable discrimination. This would include preventing discrimination against those who are unable to get vaccinated such as for health reasons, or those without access to digital technology such as mobile phones. In the EU, it’s free to obtain a paper certificate and these must be accepted.
  3. Prohibit and penalise unauthorised use of data. Permitted uses of surveillance data should be limited, with no exceptions for police or intelligence. COVID status certificates may be abused by employers or venues that decide to grant certain rights or privileges based on them, without authorisation by law.
  4. Give individuals the right to sue. If anyone breaches the acts we propose above for each state, individuals concerned should be able to sue in the courts for compensation for an interference with privacy.
  5. Prevent surveillance creep. The law should make it as difficult as possible for any extra uses of the data to be authorised, say for marketing or town planning.
  6. Minimise data collection. The minimum data necessary should be collected, and not combined with other data. If data is only needed for inspection, it should not be retained.
  7. Ongoing data deletion. Data must be deleted periodically once it is no longer needed for pandemic purposes. In the EU, COVID certificate data inspected for border crossings is not recorded or retained.
  8. A “sunset clause” for the whole system. Emergency measures should provide for their own termination. The law requires the COVIDSafe app to be terminated when it’s no longer required or effective, along with its data. A similar plan should be in place for QR-code data and COVID status certificates.
  9. Active supervision and reports. Privacy authorities should have clear obligations to report on COVID surveillance operations, and express views on termination of the system.
  10. Transparency. Overarching all of these principles should be requirements for transparency. This should include publicly releasing medical/epidemiological advice on necessary measures, open-source software in all cases of digital COVID surveillance, initial privacy impact assessments and sunset clause recommendations.

COVID-19 has necessitated the most pervasive surveillance most of us have ever experienced. But such surveillance is really only justifiable as an emergency measure. It must not become a permanent part of state surveillance.




Read more:
Coronavirus: digital contact tracing doesn’t have to sacrifice privacy




Graham Greenleaf, Professor of Law and Information Systems, UNSW and Katharine Kemp, Senior Lecturer, Faculty of Law & Justice, UNSW, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Privacy erosion by design: why the Federal Court should throw the book at Google over location data tracking



Jeannie Marie Paterson, The University of Melbourne and Elise Bant, The University of Western Australia

The Australian Competition and Consumer Commission has had a significant win against Google. The Federal Court found Google misled some Android users about how to disable personal location tracking.

Will this decision actually change the behaviour of the big tech companies? The answer will depend on the size of the penalty awarded in response to the misconduct.




Read more:
ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data


In theory, the penalty is A$1.1 million per contravention. There is a contravention each time a reasonable person in the relevant class is misled. So the total award could, in theory, amount to many millions of dollars.

But the actual penalty will depend on how the court characterises the misconduct. We believe Google’s behaviour should not be treated as a simple accident, and the Federal Court should issue a heavy fine to deter Google and other companies from behaving this way in future.

Misleading conduct and privacy settings

The case arose from the representations made by Google to users of Android phones in 2018 about how it obtained personal location data.

The Federal Court held Google had misled some consumers by representing that “having Web & App Activity turned ‘on’ would not allow Google to obtain, retain and use personal data about the user’s location”.

In other words, some consumers were misled into thinking they could control Google’s location data collection practices by switching “off” Location History, whereas Web & App Activity also needed to be disabled to provide this protection.




Read more:
The ACCC is suing Google for misleading millions. But calling it out is easier than fixing it


The ACCC also argued consumers reading Google’s privacy statement would be misled into thinking personal data was collected for their own benefit rather than Google’s. However, the court dismissed this argument on the grounds that reasonable users wanting to turn the Location History “off”

would have assumed that Google was obtaining as much commercial advantage as it could from use of the user’s personal location data.

This is surprising and might deserve further attention from regulators concerned to protect consumers from corporations “data harvesting” for profit.

How much should Google pay?

The penalty and other enforcement orders against Google will be made at a later date.

The aim of the penalty is to deter Google specifically, and other firms like Google, from engaging in misleading conduct again. If penalties are too low they may be treated by wrongdoing firms as merely a “cost of doing business”.

However, in circumstances where there is a high degree of corporate culpability, the Federal Court has shown willingness to award higher amounts than in the past. This has occurred even where the regulator has not sought higher penalties. In the recent Volkswagen Aktiengesellschaft v ACCC judgement, the full Federal Court confirmed an award of A$125 million against Volkswagen for making false representations about compliance with Australian diesel emissions standards.

The Federal Court found Google’s information about location data tracking was misleading.

In setting Google’s penalty, a court will consider factors such as the nature and extent of the misleading conduct and any loss to consumers. The court will also take into account whether the wrongdoer was involved in “deliberate, covert or reckless conduct, as opposed to negligence or carelessness”.

At this point, Google may well argue that only some consumers were misled, that it was possible for consumers to be informed if they read more about Google’s privacy policies, that it was only one slip-up, and that its contravention of the law was unintentional. These might seem to reduce the seriousness or at least the moral culpability of the offence.

But we argue they should not unduly cap the penalty awarded. Google’s conduct may not appear as “egregious and deliberately deceptive” as Volkswagen’s.

But equally Google is a massively profitable company that makes its money precisely from obtaining, sorting and using its users’ personal data. We think therefore the court should look at the number of Android users potentially affected by the misleading conduct and Google’s responsibility for its own choice architecture, and work from there.

Only some consumers?

The Federal Court acknowledged not all consumers would be misled by Google’s representations. The court accepted many consumers would simply accept the privacy terms without reviewing them, an outcome consistent with the so-called privacy paradox. Others would review the terms and click through to more information about the options for limiting Google’s use of personal data to discover the scope of what was collected under the “Web & App Activity” default.




Read more:
The privacy paradox: we claim we care about our data, so why don’t our actions match?


This might sound like the court was condoning consumers’ carelessness. In fact the court made use of insights from economists about the behavioural biases of consumers in making decisions.

Consumers have limited time to read legal terms and limited ability to understand the future risks arising from those terms. Thus, if consumers are concerned about privacy they might try to limit data collection by selecting various options, but are unlikely to be able to read and understand privacy legalese like a trained lawyer or with the background understanding of a data scientist.

If one option is labelled “Location History”, it is entirely rational for everyday consumers to assume turning it off limits location data collection by Google.

The number of consumers misled by Google’s representations will be difficult to assess. But even if a small proportion of Android users were misled, that will be a very large number of people.

There was evidence before the Federal Court that, after press reports of the tracking problem, the number of consumers switching off the “Web & App Activity” option increased by 500%. Moreover, Google makes considerable profit from the large amounts of personal data it gathers and retains, and profit is important when it comes to deterrence.

Google’s choice architecture

It has also been revealed that some employees at Google were not aware of the problem until an exposé in the press. An urgent meeting was held, referred to internally as the “Oh Shit” meeting.

The individual Google employees at the “Oh Shit” meeting may not have been aware of the details of the system. But that is not the point.

It is the company’s fault that is in question. And a company’s culpability is not just determined by what some executive or senior employee knew or didn’t know about its processes. Google’s corporate mindset is manifested in the systems it designs and puts in place.




Read more:
Inducing choice paralysis: how retailers bury customers in an avalanche of options


Google designed the information system that faced consumers trying to manage their privacy settings. This kind of system design is sometimes referred to as “choice architecture”.

Here the choices offered to consumers steered them away from opting out of Google collecting, retaining and using personal location data.

The “Other Options” (for privacy) information failed to refer to the fact that location tracking was carried out via other processes beyond the one labelled “Location History”. Plus, the default option for “Web & App Activity” (which included location tracking) was set as “on”.

This privacy-eroding system arose via the design of the “choice architecture”. It therefore warrants a serious penalty.

Jeannie Marie Paterson, Professor of Law, The University of Melbourne and Elise Bant, Professor of Law, The University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data



Katharine Kemp, UNSW

The Federal Court has found Google misled some users about personal location data collected through Android devices for two years, from January 2017 to December 2018.

The Australian Competition & Consumer Commission (ACCC) says this decision is a “world first” in relation to Google’s location privacy settings. The ACCC now intends to seek various orders against Google. These will include monetary penalties under the Australian Consumer Law (ACL), which could be up to A$10 million or 10% of Google’s local turnover.

Other companies too should be warned that representations in their privacy policies and privacy settings could lead to similar liability under the ACL.

But this won’t be a complete solution to the problem of many companies concealing what they do with data, including the way they share consumers’ personal information.

How did Google mislead consumers about their location history?

The Federal Court found Google’s previous location history settings would have led some reasonable consumers to believe they could prevent their location data being saved to their Google account. In fact, selecting “Don’t save my Location History in my Google Account” alone could not achieve this outcome.

Users needed to change an additional, separate setting to stop location data from being saved to their Google account. In particular, they needed to navigate to “Web & App Activity” and select “Don’t save my Web & App Activity to my Google Account”, even if they had already selected the “Don’t save” option under “Location History”.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


ACCC Chair Rod Sims responded to the Federal Court’s findings, saying:

This is an important victory for consumers, especially anyone concerned about their privacy online, as the Court’s decision sends a strong message to Google and others that big businesses must not mislead their customers.

Google has since changed the way these settings are presented to consumers, but is still liable for the conduct the court found was likely to mislead some reasonable consumers for two years in 2017 and 2018.

ACCC has misleading privacy policies in its sights

This is the second recent case in which the ACCC has succeeded in establishing misleading conduct in a company’s representations about its use of consumer data.

In 2020, the medical appointment booking app HealthEngine admitted it had disclosed more than 135,000 patients’ non-clinical personal information to insurance brokers without the informed consent of those patients. HealthEngine paid fines of A$2.9 million, including approximately A$1.4 million relating to this misleading conduct.




Read more:
How safe are your data when you book a COVID vaccine?


The ACCC has two similar cases in the wings, including another case regarding Google’s privacy-related notifications and a case about Facebook’s representations about a supposedly privacy-enhancing app called Onavo.

In bringing proceedings against companies for misleading conduct in their privacy policies, the ACCC is following the US Federal Trade Commission which has sued many US companies for misleading privacy policies.

The ACCC has more cases in the wings about data privacy.

Will this solve the problem of confusing and unfair privacy policies?

The ACCC’s success against Google and HealthEngine in these cases sends an important message to companies: they must not mislead consumers when they publish privacy policies and privacy settings. And they may receive significant fines if they do.

However, this will not be enough to stop companies from setting privacy-degrading terms for their users, if they spell such conditions out in the fine print. Such terms are currently commonplace, even though consumers are increasingly concerned about their privacy and want more privacy options.

Consider the US experience. The US Federal Trade Commission brought action against the creators of a flashlight app for publishing a privacy policy which didn’t reveal the app was tracking and sharing users’ location information with third parties.




Read more:
We need a code to protect our online privacy and wipe out ‘dark patterns’ in digital design


However, in the agreement settling this claim, the solution was for the creators to rewrite the privacy policy to disclose that users’ location and device ID data are shared with third parties. The question of whether this practice was legitimate or proportionate was not considered.

Major changes to Australian privacy laws will also be required before companies will be prevented from pervasively tracking consumers who do not wish to be tracked. The current review of the federal Privacy Act could be the beginning of a process to obtain fairer privacy practices for consumers, but any reforms from this review will be a long time coming.


This is an edited version of an article that originally appeared on UNSW Newsroom.

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What is “upskirting” and what are your rights to privacy under the law?



Rick Sarre, University of South Australia

Queensland federal MP Andrew Laming has been accused of taking an inappropriate photograph of a young woman, Crystal White, in 2019 in which her underwear was showing. When challenged about the photo this week, he reportedly replied:

it wasn’t meant to be rude. I thought it was funny

Inappropriate photography is a criminal offence in Queensland. Whether or not Laming’s behaviour amounted to an offence for which he could be charged is a matter for the police to determine. (White is reportedly considering taking her complaint to police.)

So, what do the laws say about this kind of behaviour, and what rights to privacy do people have when it comes to indecent photographs taken by others?

What can ‘upskirting’ include?

A new term has entered the lexicon in this regard: “upskirting”. The act of upskirting is generally defined as taking a sexually intrusive photograph of someone without their permission.

It is not a recent phenomenon. There have been incidents in which people (invariably men) have placed cameras on their shoes and photographed “up” a woman’s skirt for prurient purposes. Other instances have involved placing cameras under stairs where women in dresses or skirts were likely to pass by.

The broader category of “upskirting” can also include indecent filming of anyone without their knowledge, including photographing topless female bathers at a public beach, covertly filming women undressing in their bedrooms, or installing a camera in a dressing room, public toilet or a swimming pool changing room.




Read more:
Andrew Laming: why empathy training is unlikely to work


With every new electronic device that comes on the market comes the possibility of inappropriate use and, thus, the creation of new criminal offences.

We saw that with the advent of small listening devices. With this technology, it was now possible to record private conversations, so legislators had to create offences under the law to deal with any inappropriate use.

The same thing happened with small (and now very affordable) drones, which made it possible to capture images of people in compromising positions, even from a distance. Our laws have been adjusted accordingly.

And in recent years, lawmakers have been faced with the same potential for inappropriate use with mobile phones. Such devices are now ubiquitous and improved technology has allowed people to record and photograph others at a moment’s notice — often impulsively, without proper thought.

How have legislators responded in Australia?

There is a patchwork array of laws across the country dealing with this type of photography and video recording.

In South Australia, for instance, it is against the law to engage in “indecent filming” of another person under part 5A of the state’s Summary Offences Act.

The term “upskirting” itself was used when amendments were made in 2007 to Victoria’s Summary Offences Act. This made it an offence for a person to observe or visually capture another person’s genital region without their consent.

In New South Wales, the law is equally specific in setting out the type of filming that is punishable under the law. It outlaws the filming of another person’s “private parts” for “sexual arousal or sexual gratification” without the consent of the person being filmed.

Queensland’s law, meanwhile, makes it an offence to:

observe or visually record another person, in circumstances where a reasonable adult would expect to be afforded privacy […] without the other person’s consent

Interestingly, the Queensland law is more broadly worded than the NSW, Victorian or South Australian laws since it makes it an offence to take someone’s picture in general, rather than specifying that it needs to be sexually explicit.

The maximum penalty for such an offence in Queensland is three years’ imprisonment.

What would need to be proven for a conviction

Just like any criminal offence, the prosecution in a case like this must first determine, before laying a charge, whether there’s enough evidence that could lead to a conviction and, moreover, whether such a prosecution is in the public interest.

Once the decision to charge is made, a conviction will only be possible if the accused pleads guilty or is found guilty beyond reasonable doubt. (As a summary offence, the matter would be decided by a magistrate, not a jury.)

The role of the criminal law here is to bring offending behaviour to account while also providing a deterrent for the future conduct of that person or any other persons contemplating such an act.




Read more:
View from The Hill: Morrison should appoint stand-alone minister for women and boot Andrew Laming to crossbench


As with any criminal law, its overarching purpose is to indicate society’s disdain for the behaviour. The need to protect victims from such egregious and lewd behaviour is an important consideration too.

Any decision by a Queensland magistrate to convict a person alleged to have taken an indecent photo would hang on three facts:

  • whether the photo was taken by the accused
  • whether a reasonable adult would expect the victim to be afforded privacy in the circumstances
  • and whether the victim did not consent to the photo being taken.

Other mitigating factors might come into play, however, including whether the photograph was impulsive and not premeditated, whether the image was immediately deleted, and whether the alleged offender showed any regret or remorse for his actions.

Recently a Queensland man, Justin McGufficke, pleaded guilty to upskirting offences in NSW after he took pictures up the skirts of teenage girls at a zoo while they were looking at animals.

In another case, a conviction for upskirting was deemed sufficient to deny a man permission to work with children in Victoria.

In a moment of impulsivity, and with a mobile phone always within reach, anything can happen in today’s world. Poor judgements are common. Women are invariably the targets.

The laws on filming, recording and in some cases distributing the images of another person are clear — and the potential consequences for the accused are substantial. One would hope that any potential offenders are taking note.

Rick Sarre, Emeritus Professor of Law and Criminal Justice, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Not just complacency: why people are reluctant to use COVID-19 contact-tracing apps



Mark Coote/Bloomberg via Getty Images

Farkhondeh Hassandoust, Auckland University of Technology

This week’s announcement of two new COVID-19 vaccine pre-purchase deals is encouraging, but doesn’t mean New Zealanders should become complacent about using the NZ COVID Tracer app during the summer holidays.

The immunisation rollout won’t start until the second quarter of 2021, and the government is encouraging New Zealanders to continue using the app, including the recently upgraded bluetooth function, as part of its plan to manage the pandemic during the holiday period.




Read more:
How to keep COVID-19 at bay during the summer holidays — and help make travel bubbles a reality in 2021


Over recent weeks, the number of daily scans has dropped significantly, down from just over 900,000 scans per day at the end of November to fewer than 400,000 in mid-December.

With no active cases of COVID-19 in the community, complacency might be part of the issue in New Zealand, but as our research in the US shows, worries about privacy and trust continue to make people reluctant to use contact-tracing apps.

Concerns about privacy and surveillance

We surveyed 853 people from every state in the US to identify the factors promoting or inhibiting their use of contact-tracing applications. Our survey reveals two seemingly contradictory findings.

Individuals are highly motivated to use contact-tracing apps, for the sake of their own health and that of society as a whole. But the study also found people are concerned about privacy, social disapproval and surveillance.

The findings suggest people’s trust in the data collectors is dependent on the technology features of these apps (for example, information sensitivity and anonymity) and the privacy protection initiatives instigated by the authorities.

With the holiday season just around the corner — and even though New Zealand is currently free of community transmission — our findings are pertinent. New Zealanders will travel more during the summer period, and it is more important than ever to use contact-tracing apps to improve our chances of getting on top of any potential outbreaks as quickly as possible.

How, then, to overcome concerns about privacy and trust and make sure New Zealanders use the upgraded app during summer?

The benefits of adopting contact-tracing apps are mainly in shared public health, and it is important these societal health benefits are emphasised. In order to quell concerns, data collectors (government and businesses) must also offer assurance that people’s real identity will be concealed.

It is the responsibility of the government and the office of the Privacy Commissioner to ensure all personal information is managed appropriately.




Read more:
An Australia–NZ travel bubble needs a unified COVID contact-tracing app. We’re not there


Transparency and data security

Our study also found that factors such as peer and social influence, regulatory pressures and previous experiences with privacy loss underlie people’s readiness to adopt contact-tracing apps.

The findings reveal that people expect regulatory protection if they are to use contact-tracing apps. This confirms the need for laws and regulations with strict penalties for those who collect, use, disclose or decrypt collected data for any purpose other than contact tracing.

The New Zealand government is working with third-party developers to complete the integration of other apps by the end of December to enable the exchange of digital contact-tracing information from different apps and technologies.

The Privacy Commissioner has already endorsed the bluetooth upgrade of the official NZ COVID Tracer app because of its focus on users’ privacy. And the Ministry of Health aims to release the source code for the app so New Zealanders can see how their personal data has been managed.

Throughout the summer, the government and ministry should emphasise the importance of using the contact-tracing app and assure New Zealanders about the security and privacy of their personal data.

Adoption of contact-tracing apps is no silver bullet in the battle against COVID-19, but it is a crucial element in New Zealand’s collective public health response to the global pandemic.

Farkhondeh Hassandoust, Lecturer, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

83% of Australians want tougher privacy laws. Now’s your chance to tell the government what you want



Shutterstock

Normann Witzleb, Monash University

Federal Attorney-General Christian Porter has called for submissions to the long-awaited review of the federal Privacy Act 1988.

This is the first wide-ranging review of privacy laws since the Australian Law Reform Commission produced a landmark report in 2008.

Australia has in the past often hesitated to adopt a strong privacy framework. The new review, however, provides an opportunity to improve data protection rules to an internationally competitive standard.

Here are some of the ideas proposed — and what’s at stake if we get this wrong.




Read more:
It’s time for privacy invasion to be a legal wrong


Australians care deeply about data privacy

Personal information has never had a more central role in our society and economy, and the government has a strong mandate to update Australia’s framework for the protection of personal information.

In the Australian Privacy Commissioner’s 2020 survey, 83% of Australians said they’d like the government to do more to protect the privacy of their data.

The intense debate about the COVIDSafe app earlier this year also shows Australians care deeply about their private information, even in a time of crisis.

Privacy laws and enforcement can hardly keep up with the ever-increasing digitalisation of our lives. Data-driven innovation provides valuable services that many of us use and enjoy. However, the government’s issues paper notes:

As Australians spend more of their time online, and new technologies emerge, such as artificial intelligence, more personal information about individuals is being captured and processed, raising questions as to whether Australian privacy law is fit for purpose.

The pandemic has accelerated the existing trend towards digitalisation and created a range of new privacy issues including working or studying at home, and the use of personal data in contact tracing.

Australians are rightly concerned they are losing control over their personal data.

So there’s no question the government’s review is sorely needed.

Issues of concern for the new privacy review

The government’s review follows the Australian Competition and Consumer Commission’s (ACCC) Digital Platforms Inquiry, which found that some data practices of digital platforms are unfair and undermine consumer trust. We rely heavily on digital platforms such as Google and Facebook for information, entertainment and engagement with the world around us.

Our interactions with these platforms leave countless digital traces that allow us to be profiled and tracked for profit. The ACCC found that the digital platforms make it hard for consumers to resist these practices and to make free and informed decisions regarding the collection, use and disclosure of their personal data.

The government has committed to implement most of the ACCC’s recommendations for stronger privacy laws to give us greater consumer control.

However, the reforms must go further. The review also provides an opportunity to address some long-standing weaknesses of Australia’s privacy regime.

The government’s issues paper, released to inform the review, identified several areas of particular concern. These include:

  • the scope of application of the Privacy Act, in particular the definition of “personal information” and current private sector exemptions

  • whether the Privacy Act provides an effective framework for promoting good privacy practices

  • whether individuals should have a direct right to sue for a breach of privacy obligations under the Privacy Act

  • whether a statutory tort for serious invasions of privacy should be introduced into Australian law, allowing Australians to go to court if their privacy is invaded

  • whether the enforcement powers of the Privacy Commissioner should be strengthened.

While most recent attention relates to improving consumer choice and control over their personal data, the review also brings back onto the agenda some never-implemented recommendations from the Australian Law Reform Commission’s 2008 review.

These include introducing a statutory tort for serious invasions of privacy, and extending the coverage of the Privacy Act.

Exemptions for small business and political parties should be reviewed

The Privacy Act currently contains several exemptions that limit its scope. The two most contentious exemptions have the effect that political parties and most business organisations need not comply with the general data protection standards under the Act.

The small business exemption is intended to reduce red tape for small operators. However, largely unknown to the Australian public, it means the vast majority of Australian businesses are not legally obliged to comply with standards for fair and safe handling of personal information.

Procedures for compulsory venue check-ins under COVID health regulations are just one recent illustration of why this is a problem. Some people have raised concerns that customers’ contact-tracing data, particularly data collected via QR codes, may be exploited by marketing companies for targeted advertising.

A woman uses a QR code at a restaurant
Under current privacy laws, cafe and restaurant operators are exempt from complying with certain privacy obligations.
Shutterstock

Under current privacy laws, cafe and restaurant operators are generally exempt from privacy obligations such as undertaking due diligence checks on the third-party providers used to collect customers’ data.

The political exemption is another area of need of reform. As the Facebook/Cambridge Analytica scandal showed, political campaigning is becoming increasingly tech-driven.

However, Australian political parties are exempt from complying with the Privacy Act and anti-spam legislation. This means voters cannot effectively protect themselves against data harvesting for political purposes and micro-targeting in election campaigns through unsolicited text messages.

There is a good case for arguing political parties and candidates should be subject to the same rules as other organisations. It’s what most Australians would like and, in fact, wrongly believe is already in place.




Read more:
How political parties legally harvest your data and use it to bombard you with election spam


Trust drives innovation

Trust in digital technologies is undermined when data practices come across as opaque, creepy or unsafe.

There is increasing recognition that data protection drives innovation and adoption of modern applications, rather than impedes it.

A woman looks at her phone in the twilight.
Trust in digital technologies is undermined when data practices come across as opaque, creepy, or unsafe.
Shutterstock

The COVIDSafe app is a good example. When that app was debated, the government accepted that robust privacy protections were necessary to achieve a strong uptake by the community.

We would all benefit if the government saw that this same principle applies to other areas of society where our precious data is collected.


Information on how to make a submission to the federal government review of the Privacy Act 1988 can be found here.




Read more:
People want data privacy but don’t always know what they’re getting




Normann Witzleb, Associate Professor in Law, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you vote, direct a company, practise medicine, own a gun, receive welfare, pay tax, or hold a driver’s licence or Medicare card – our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows federal, state, territory and local governments to run elections and provide pensions, parks, courts and hospitals, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. Previously called the “Data Sharing and Release” legislation, the word “transparency” has now replaced “release” to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Office of the Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

Person hitting a 'share' button on a keyboard.
In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Robert, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics’ concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don’t have a choice.

However, the bill does exclude some data sharing. If you’re a criminologist researching law enforcement, for example, you won’t have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches, although, as RoboDebt revealed, they often respond with denial. Universities are also often wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of de-identified personal information, and much of the proposed sharing will rely on data matching.
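The re-identification risk is easy to demonstrate. The following sketch is purely illustrative (both datasets and all field names are invented), but it shows how matching on a few shared quasi-identifiers, here postcode, birth year and sex, can put names back on “de-identified” records:

```python
# Illustrative linkage attack: matching "de-identified" records against
# a public dataset on shared quasi-identifiers. All data here is invented.

# De-identified health extract: names removed, quasi-identifiers kept.
health_records = [
    {"postcode": "2600", "birth_year": 1975, "sex": "F", "condition": "asthma"},
    {"postcode": "4000", "birth_year": 1988, "sex": "M", "condition": "diabetes"},
]

# Public dataset (imagine an electoral roll) that includes names.
public_roll = [
    {"name": "Jane Citizen", "postcode": "2600", "birth_year": 1975, "sex": "F"},
    {"name": "John Voter", "postcode": "4000", "birth_year": 1988, "sex": "M"},
]

def reidentify(health, roll):
    """Join the two datasets on the quasi-identifiers they share."""
    matches = []
    for record in health:
        key = (record["postcode"], record["birth_year"], record["sex"])
        candidates = [p for p in roll
                      if (p["postcode"], p["birth_year"], p["sex"]) == key]
        # A unique match links a name to a supposedly anonymous record.
        if len(candidates) == 1:
            matches.append({"name": candidates[0]["name"],
                            "condition": record["condition"]})
    return matches

print(reidentify(health_records, public_roll))
```

Real linkage attacks work the same way at scale: the more fields two datasets share, the more records become uniquely matchable.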

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy




Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

TikTok can be good for your kids if you follow a few tips to stay safe


Tashatuvango/Shutterstock

Joanne Orlando, Western Sydney University

The video-sharing app TikTok is a hot political potato amid concerns over who has access to users’ personal data.

The United States has moved to ban the app. Other countries, including Australia, have expressed concern.

But does this mean your children who use this app are at risk? If you’re a parent, let me explain the issues and give you a few tips to make sure your kids stay safe.

A record-breaker

Never has an app for young people been so popular. By April this year the TikTok app had been downloaded more than 2 billion times worldwide.

The app recently broke all records for the most downloaded app in a quarterly period, with 315 million downloads globally in the first three months of 2020.

Its popularity with young Aussies has sky-rocketed. Around 1.6 million Australians use the app, including about one in five people born since 2006. That’s an estimated 537,000 young Australians.

Like all social media apps, TikTok siphons data about its users such as email address, contacts, IP address and geolocation information.

TikTok was fined $US5.8 million (A$8 million) to settle US government claims it illegally collected personal information from children.

Because TikTok is owned by a Chinese company, ByteDance, US President Donald Trump and others are also worried about the app handing over this data to the Chinese state. TikTok denies it does this.




Read more:
China could be using TikTok to spy on Australians, but banning it isn’t a simple fix


Just days ago the Trump administration signed an executive order to seek a ban on TikTok operating or interacting with US companies.

Youngsters still TikToking

There is no hint of this stopping our TikToking children. For them it’s business as usual, creating and uploading videos of themselves lip-syncing, singing, dancing or just talking.

The most recent trend on TikTok – Taylor Swift Love Story dance – has resulted in more than 1.5 million video uploads in around two weeks alone.

But the latest political issues with TikTok raise questions about whether children should be on this platform right now. More broadly, as we see copycat sites such as Instagram Reels launched, should children be using any social media platforms that focus on them sharing videos of themselves at all?

The pros and cons

The TikTok app has filled a genuine social need for this young age group. Social media sites can offer a sense of belonging to a group, such as a group focused on a particular interest, experience, social group or religion.

TikTok celebrates diversity and inclusivity. It can provide a place where young people can join together to support each other in their needs.

During the COVID-19 pandemic, TikTok has had huge numbers of videos with coronavirus-related hashtags such as #quarantine (65 billion views), #happyathome (19.5 billion views) and #safehands (5.4 billion views).

Some of these videos are funny, some include song and dance. The World Health Organisation even posted its own youth-oriented videos on TikTok to provide young people with reliable public health advice about COVID-19.

The key benefit is that the platform became a place where young people from all corners of the planet could join together to understand the pandemic and take the stressful edge off it for themselves and others their age. Where else could they do that? The mental health benefits this offers can be important.

Let’s get creative

Another benefit lies in the creativity TikTok centres on. Passive use of technology, such as scrolling and checking social media with no purpose, can lead to addictive types of screen behaviours for young people.

By contrast, planning and creating content, such as making their own videos, is meaningful use of technology and curbs addictive behaviours. In other words, if young people are going to use technology, using it creatively, purposefully and with meaning is the type of use we want to encourage.

Users of TikTok must be at least 13 years old, although it does have a limited app for under 13s.

Know the risks

As on all social media platforms, children are engaging in a space in which others can contact them. They may also be engaging with adult concepts they are not yet mature enough for, such as love gone wrong or suggestively twerking to songs.




Read more:
The secret of TikTok’s success? Humans are wired to love imitating dance moves


The platform moves very quickly, with a huge number of videos, likes and comments uploaded every day. Taking it all in can lead to cognitive overload. This can distract children and reduce their focus on other aspects of their lives, including schoolwork.

Three young girls video themselves on a smartphone.
How to stay safe and still have fun with TikTok.
Luiza Kamalova/Shutterstock

So here are a few tips for keeping your child safe, as well as getting the most out of the creative/educational aspects of TikTok.

  1. as with any social network, use privacy settings to limit how much information your child is sharing

  2. if your child is creating a video, make sure it is reviewed before it’s uploaded to ensure it doesn’t include content that can be misconstrued or have negative implications

  3. if a child younger than 13 wants to use the app, there’s a section for this younger age group that includes extra safety and privacy features

  4. if you’re okay with your child creating videos for TikTok, then doing it together or helping them plan and film the video can be a great parent-child bonding activity

  5. be aware of the collection of data by TikTok, encourage your child to be aware of it, and help them know what they are giving away and the implications for them.

Happy (safe) TikToking!

Joanne Orlando, Researcher: Children and Technology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Private browsing: What it does – and doesn’t do – to shield you from prying eyes on the web



The major browsers have privacy modes, but don’t confuse privacy for anonymity.
Oleg Mishutin/iStock via Getty Images

Lorrie Cranor, Carnegie Mellon University and Hana Habib, Carnegie Mellon University

Many people look for more privacy when they browse the web by using their browsers in privacy-protecting modes, called “Private Browsing” in Mozilla Firefox, Opera and Apple Safari; “Incognito” in Google Chrome; and “InPrivate” in Microsoft Edge.

These private browsing tools sound reassuring, and they’re popular. According to a 2017 survey, nearly half of American internet users have tried a private browsing mode, and most who have tried it use it regularly.

However, our research has found that many people who use private browsing have misconceptions about what protection they’re gaining. A common misconception is that these browser modes allow you to browse the web anonymously, surfing the web without websites identifying you and without your internet service provider or your employer knowing what websites you visit. The tools actually provide much more limited protections.

Other studies conducted by the Pew Research Center and the privacy-protective search engine company DuckDuckGo have similar findings. In fact, a recent lawsuit against Google alleges that internet users are not getting the privacy protection they expect when using Chrome’s Incognito mode.

How it works

While the exact implementation varies from browser to browser, what private browsing modes have in common is that once you close your private browsing window, your browser no longer stores the websites you visited, cookies, user names, passwords and information from forms you filled out during that private browsing session.

Essentially, each time you open a new private browsing window you are given a “clean slate” in the form of a brand new browser window that has not stored any browsing history or cookies. When you close your private browsing window, the slate is wiped clean again and the browsing history and cookies from that private browsing session are deleted. However, if you bookmark a site or download a file while using private browsing mode, the bookmarks and file will remain on your system.
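The “clean slate” behaviour can be modelled with a toy analogy. This is not real browser code (the class and site names are invented); it simply shows that a normal session writes its cookie jar back when it closes, while a private session starts empty and discards everything:

```python
# Toy model of private-browsing cookie handling: a normal session persists
# cookies between windows; a private session discards them when it closes.

class BrowserSession:
    def __init__(self, private=False, saved_cookies=None):
        self.private = private
        # A private window always starts with an empty cookie jar.
        self.cookies = {} if private else dict(saved_cookies or {})

    def visit(self, site):
        # Sites set cookies so they can recognise the browser later.
        self.cookies[site] = "tracking-id"

    def close(self):
        # Only a normal session writes its cookies back to disk.
        return {} if self.private else self.cookies

saved = {}
normal = BrowserSession(saved_cookies=saved)
normal.visit("shop.example")
saved = normal.close()        # the shop's cookie survives for next time

private = BrowserSession(private=True, saved_cookies=saved)
private.visit("gift.example")
saved_after = private.close() # the slate is wiped: nothing is kept
```

Note that the private window ignores the saved cookies entirely, which is why sites don’t recognise you from earlier normal-mode visits, and why nothing from the private session carries forward.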

Although some browsers, including Safari and Firefox, offer some additional protection against web trackers, private browsing mode does not guarantee that your web activities cannot be linked back to you or your device. Notably, private browsing mode does not prevent websites from learning your internet address, and it does not prevent your employer, school or internet service provider from seeing your web activities by tracking your IP address.

Reasons to use it

We conducted a research study in which we identified reasons people use private browsing mode. Most study participants wanted to protect their browsing activities or personal data from other users of their devices. Private browsing is actually pretty effective for this purpose.

We found that people often used private browsing to visit websites or conduct searches that they did not want other users of their device to see, such as those that might be embarrassing or related to a surprise gift. In addition, private browsing is an easy way to log out of websites when borrowing someone else’s device – so long as you remember to close the window when you are done.

Smart phone displaying Google incognito mode
Private browsing can help cover your internet tracks by automatically deleting your browsing history and cookies when you close the browser.
Avishek Das/SOPA Images/LightRocket via Getty Images

Private browsing provides some protection against cookie-based tracking. Since cookies from your private browsing session are not stored after you close your private browsing window, it’s less likely that you will see online advertising in the future related to the websites you visit while using private browsing.


Additionally, as long as you have not logged into your Google account, any searches you make will not appear in your Google account history and will not affect future Google search results. Similarly, if you watch a video on YouTube or other service in private browsing, as long as you are not logged into that service, your activity does not affect the recommendations you get in normal browsing mode.

What it doesn’t do

Private browsing does not make you anonymous online. Anyone who can see your internet traffic – your school or employer, your internet service provider, government agencies, people snooping on your public wireless connection – can see your browsing activity. Shielding that activity requires more sophisticated tools that use encryption, like virtual private networks.

Private browsing also offers few security protections. In particular, it does not prevent you from downloading a virus or malware to your device. Additionally, private browsing does not offer any additional protection for the transmission of your credit card or other personal information to a website when you fill out an online form.

It is also important to note that the longer you leave your private browsing window open, the more browsing data and cookies it accumulates, reducing your privacy protection. Therefore, you should get in the habit of closing your private browsing window frequently to wipe your slate clean.

What’s in a name

It is not all that surprising that people have misconceptions about how private browsing mode works; the word “private” suggests a lot more protection than these modes actually provide.

Furthermore, a 2018 research study found that the disclosures shown on the landing pages of private browsing windows do little to dispel misconceptions that people have about these modes. Chrome provides more information about what is and is not protected than most of the other browsers, and Mozilla now links to an informational page on the common myths related to private browsing.

However, it may be difficult to dispel all of these myths without changing the name of the browsing mode and making it clear that private browsing stops your browser from keeping a record of your browsing activity, but it isn't a comprehensive privacy shield.

Lorrie Cranor, Professor of Computer Science and of Engineering & Public Policy, Carnegie Mellon University and Hana Habib, Graduate Research Assistant at the Institute for Software Research, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.