Privacy erosion by design: why the Federal Court should throw the book at Google over location data tracking



Jeannie Marie Paterson, The University of Melbourne and Elise Bant, The University of Western Australia

The Australian Competition and Consumer Commission has had a significant win against Google. The Federal Court found Google misled some Android users about how to disable personal location tracking.

Will this decision actually change the behaviour of the big tech companies? The answer will depend on the size of the penalty awarded in response to the misconduct.




Read more:
ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data


In theory, the penalty is A$1.1 million per contravention. There is a contravention each time a reasonable person in the relevant class is misled. So the total award could, in theory, amount to many millions of dollars.

But the actual penalty will depend on how the court characterises the misconduct. We believe Google’s behaviour should not be treated as a simple accident, and the Federal Court should issue a heavy fine to deter Google and other companies from behaving this way in future.

Misleading conduct and privacy settings

The case arose from the representations made by Google to users of Android phones in 2018 about how it obtained personal location data.

The Federal Court held Google had misled some consumers by representing that “having Web & App Activity turned ‘on’ would not allow Google to obtain, retain and use personal data about the user’s location”.

In other words, some consumers were misled into thinking they could control Google’s location data collection practices by switching “off” Location History, whereas Web & App Activity also needed to be disabled to provide this protection.




Read more:
The ACCC is suing Google for misleading millions. But calling it out is easier than fixing it


The ACCC also argued consumers reading Google’s privacy statement would be misled into thinking personal data was collected for their own benefit rather than Google’s. However, the court dismissed this argument on the grounds that reasonable users wanting to turn the Location History “off”

would have assumed that Google was obtaining as much commercial advantage as it could from use of the user’s personal location data.

This is surprising and might deserve further attention from regulators concerned to protect consumers from corporations “data harvesting” for profit.

How much should Google pay?

The penalty and other enforcement orders against Google will be made at a later date.

The aim of the penalty is to deter Google specifically, and other firms like Google, from engaging in misleading conduct again. If penalties are too low they may be treated by wrongdoing firms as merely a “cost of doing business”.

However, in circumstances where there is a high degree of corporate culpability, the Federal Court has shown willingness to award higher amounts than in the past. This has occurred even where the regulator has not sought higher penalties. In the recent Volkswagen Aktiengesellschaft v ACCC judgment, the full Federal Court confirmed an award of A$125 million against Volkswagen for making false representations about compliance with Australian diesel emissions standards.

The Federal Court found Google’s information about location data tracking was misleading.
Shutterstock

In setting Google’s penalty, a court will consider factors such as the nature and extent of the misleading conduct and any loss to consumers. The court will also take into account whether the wrongdoer was involved in “deliberate, covert or reckless conduct, as opposed to negligence or carelessness”.

At this point, Google may well argue that only some consumers were misled, that it was possible for consumers to be informed if they read more about Google’s privacy policies, that it was only one slip-up, and that its contravention of the law was unintentional. These might seem to reduce the seriousness or at least the moral culpability of the offence.

But we argue these factors should not unduly cap the penalty awarded. Google’s conduct may not appear as “egregious and deliberately deceptive” as Volkswagen’s.

But equally, Google is a massively profitable company that makes its money precisely from obtaining, sorting and using its users’ personal data. We therefore think the court should look at the number of Android users potentially affected by the misleading conduct and Google’s responsibility for its own choice architecture, and work from there.

Only some consumers?

The Federal Court acknowledged not all consumers would be misled by Google’s representations. The court accepted many consumers would simply accept the privacy terms without reviewing them, an outcome consistent with the so-called privacy paradox. Others would review the terms and click through to more information about the options for limiting Google’s use of personal data to discover the scope of what was collected under the “Web & App Activity” default.




Read more:
The privacy paradox: we claim we care about our data, so why don’t our actions match?


This might sound like the court was condoning consumers’ carelessness. In fact, the court made use of insights from economists about the behavioural biases of consumers in making decisions.

Consumers have limited time to read legal terms and limited ability to understand the future risks arising from those terms. Thus, consumers concerned about privacy might try to limit data collection by selecting various options, but they are unlikely to read and understand privacy legalese like a trained lawyer, or to bring the background understanding of a data scientist.

If one option is labelled “Location History”, it is entirely rational for everyday consumers to assume turning it off limits location data collection by Google.

The number of consumers misled by Google’s representations will be difficult to assess. But even if a small proportion of Android users were misled, that will be a very large number of people.

There was evidence before the Federal Court that, after press reports of the tracking problem, the number of consumers switching off the “Web & App Activity” option increased by 500%. Moreover, Google makes considerable profit from the large amounts of personal data it gathers and retains, and profit is important when it comes to deterrence.

Google’s choice architecture

It has also been revealed that some employees at Google were not aware of the problem until an exposé in the press. An urgent meeting was held, referred to internally as the “Oh Shit” meeting.

The individual Google employees at the “Oh Shit” meeting may not have been aware of the details of the system. But that is not the point.

It is the company’s fault that is in question. And a company’s culpability is not determined solely by what some executive or senior employee knew or didn’t know about its processes. Google’s corporate mindset is manifested or revealed in the systems it designs and puts in place.




Read more:
Inducing choice paralysis: how retailers bury customers in an avalanche of options


Google designed the information system that faced consumers trying to manage their privacy settings. This kind of system design is sometimes referred to as “choice architecture”.

Here the choices offered to consumers steered them away from opting out of Google collecting, retaining and using personal location data.

The “Other Options” (for privacy) information failed to refer to the fact that location tracking was carried out via other processes beyond the one labelled “Location History”. Moreover, the default option for “Web & App Activity” (which included location tracking) was set to “on”.

This privacy-eroding system arose via the design of the “choice architecture”. It therefore warrants a serious penalty.

Jeannie Marie Paterson, Professor of Law, The University of Melbourne and Elise Bant, Professor of Law, The University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data



Katharine Kemp, UNSW

The Federal Court has found Google misled some users about personal location data collected through Android devices for two years, from January 2017 to December 2018.

The Australian Competition & Consumer Commission (ACCC) says this decision is a “world first” in relation to Google’s location privacy settings. The ACCC now intends to seek various orders against Google. These will include monetary penalties under the Australian Consumer Law (ACL), which could be up to A$10 million or 10% of Google’s local turnover.

Other companies too should be warned that representations in their privacy policies and privacy settings could lead to similar liability under the ACL.

But this won’t be a complete solution to the problem of many companies concealing what they do with data, including the way they share consumers’ personal information.

How did Google mislead consumers about their location history?

The Federal Court found Google’s previous location history settings would have led some reasonable consumers to believe they could prevent their location data being saved to their Google account. In fact, selecting “Don’t save my Location History in my Google Account” alone could not achieve this outcome.

Users needed to change an additional, separate setting to stop location data from being saved to their Google account. In particular, they needed to navigate to “Web & App Activity” and select “Don’t save my Web & App Activity to my Google Account”, even if they had already selected the “Don’t save” option under “Location History”.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


ACCC Chair Rod Sims responded to the Federal Court’s findings, saying:

This is an important victory for consumers, especially anyone concerned about their privacy online, as the Court’s decision sends a strong message to Google and others that big businesses must not mislead their customers.

Google has since changed the way these settings are presented to consumers, but is still liable for the conduct the court found was likely to mislead some reasonable consumers for two years in 2017 and 2018.

ACCC has misleading privacy policies in its sights

This is the second recent case in which the ACCC has succeeded in establishing misleading conduct in a company’s representations about its use of consumer data.

In 2020, the medical appointment booking app HealthEngine admitted it had disclosed more than 135,000 patients’ non-clinical personal information to insurance brokers without the informed consent of those patients. HealthEngine paid fines of A$2.9 million, including approximately A$1.4 million relating to this misleading conduct.




Read more:
How safe are your data when you book a COVID vaccine?


The ACCC has two similar cases in the wings: another regarding Google’s privacy-related notifications, and a case about Facebook’s representations about a supposedly privacy-enhancing app called Onavo.

In bringing proceedings against companies for misleading conduct in their privacy policies, the ACCC is following the US Federal Trade Commission, which has sued many US companies over misleading privacy policies.

The ACCC has more cases in the wings about data privacy.
Shutterstock

Will this solve the problem of confusing and unfair privacy policies?

The ACCC’s success against Google and HealthEngine in these cases sends an important message to companies: they must not mislead consumers when they publish privacy policies and privacy settings. And they may receive significant fines if they do.

However, this will not be enough to stop companies from setting privacy-degrading terms for their users, if they spell such conditions out in the fine print. Such terms are currently commonplace, even though consumers are increasingly concerned about their privacy and want more privacy options.

Consider the US experience. The US Federal Trade Commission brought action against the creators of a flashlight app for publishing a privacy policy which didn’t reveal the app was tracking and sharing users’ location information with third parties.




Read more:
We need a code to protect our online privacy and wipe out ‘dark patterns’ in digital design


However, in the agreement settling this claim, the solution was for the creators to rewrite the privacy policy to disclose that users’ location and device ID data are shared with third parties. The question of whether this practice was legitimate or proportionate was not considered.

Major changes to Australian privacy laws will also be required before companies will be prevented from pervasively tracking consumers who do not wish to be tracked. The current review of the federal Privacy Act could be the beginning of a process to obtain fairer privacy practices for consumers, but any reforms from this review will be a long time coming.


This is an edited version of an article that originally appeared on UNSW Newsroom.

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How safe are your data when you book a COVID vaccine?



Joan Henderson, University of Sydney and Kerin Robinson, La Trobe University

The Australian government has appointed the commercial company HealthEngine to establish a national booking system for COVID-19 vaccinations.

Selected through a Department of Health limited select tender process, the platform is being used by vaccine providers who don’t have their own booking system.

However, HealthEngine has a track record of mishandling confidential patient information.

Previous problems

In 2019 the Australian Competition and Consumer Commission took HealthEngine to court for allegedly skewing reviews and ratings of medical practices on its platform and selling more than 135,000 patients’ details to private health insurance brokers.

The Federal Court fined HealthEngine A$2.9 million in August 2020, just eight months ago.

Department of Health associate secretary Caroline Edwards told a Senate hearing the issues were “historical in nature, weren’t intentional and did not involve the sharing of clinical or medical related information”.

How might the alleged misconduct, which earned HealthEngine A$1.8 million, be considered “historical in nature” and “not intentional”?

Edwards added that HealthEngine had strengthened its privacy and security processes, following recommendations in the ACCC’s digital platforms inquiry report. Regarding the new contract, she said:

[…] the data available to HealthEngine through what it’s been contracted to do does not include any clinical information or any personal information over what’s required for people to book.

That’s somewhat reassuring, considering the larger amount of information usually requested from patients booking an appointment (as per HealthEngine’s current Privacy Policy).

The list of personal information HealthEngine may collect from patients booking an appointment with a health professional.
Screenshot

Importantly, HealthEngine then owns this information. This raises an important question: why is so much personal information requested just to book an ordinary appointment?

A need for accessible information

While using HealthEngine to book a vaccination is not mandatory, individual practices will determine whether patients can make appointments over the phone, or are directed to use HealthEngine’s platform or another existing platform.

Personal details currently requested through HealthEngine’s vaccination booking system are:

HealthEngine’s Privacy Policy for COVID-19 vaccination bookings.
Screenshot

This list is substantially shorter than the one concerning non-COVID related bookings. That said, there’s still more information being gathered than would be required for the sole purpose of arranging a patient’s vaccination.

What is the justification for this system to collect data about patients’ non-COVID medical and health services, or the pages they visit?

A representative from the Department of Health told The Conversation that all patient data collected through the COVID vaccination booking system was owned by the department, not HealthEngine. But what need would the department have to collect web analytics data about what sites a patient visits?

An underlying administrative principle of any medical appointment platform is that it should collect the minimum amount of data needed to fulfil its purpose.

Also, HealthEngine’s website reveals the company has, appropriately, created an additional privacy policy for its COVID-19 vaccination booking platform. However, this is currently embedded within its pre-existing policy, so it’s unlikely many people will find it, let alone read it.

For transparency, the policy should be easy to find, clearly labelled and presented as distinct from HealthEngine’s regular policies. A standalone page would be feasible, given the value of the contract is more than A$3.8 million.

What protections are in place?

Since the pandemic began, concerns have been raised regarding the lack of clear information and data privacy protection afforded to patients by commercial organisations.

Luckily, there are safeguards in place to regulate how patient data are handled. The privacy of data generated through health-care provision (such as in general practices, hospitals, community health centres and pharmacies) is protected under state and territory or Commonwealth laws.

Data reported (on a compulsory basis) by vaccinating clinicians to the Australian Immunisation Register fall within the confines of the Australian Immunisation Register Act 2015 and its February 2021 amendment.




Read more:
Queensland Health’s history of software mishaps is proof of how hard e-health can be


Also, data collected through the Department of Health’s vaccination registration system are legally protected under the Privacy Act 1988, as are data collected via HealthEngine’s government-approved COVID-19 vaccination booking system.

But there’s still a lack of clarity regarding what patients are being asked to consent to, the amount of information collected and how it’s handled. It’s a critical legal and ethical requirement that patients have the right to consent to the use of their personal information.

If the privacy policy of a booking system is unclear, this presents risk for patients who have challenges with the English language, literacy, or are potentially distracted by pain or anxiety while making an appointment.
Shutterstock

Gaps in our knowledge

As health information managers, we had further questions regarding the government’s decision to appoint HealthEngine as a national COVID-19 vaccination booking provider. The Conversation put these questions to HealthEngine, which forwarded them to the Department of Health. They were as follows.

  1. Is there justification for the rushed outsourcing of the national appointment platform, given the number of vaccine recipients whose data will be collected?
  2. How did the department’s “limited select tender” process ensure equity?
  3. Who will own data collected via HealthEngine’s optional national booking system?
  4. What rights will the “owner” of the data have to give third-party access via the sharing or selling of data?
  5. What information will vaccine recipients be given on their right to not use HealthEngine’s COVID-19 vaccination booking system (or any appointment booking system) if they’re uncomfortable providing personal information to a commercial entity?
  6. How will these individuals be reassured they may still receive a vaccine, should they not wish to use the system?

In response, a department representative provided information already available online here, here, here and here. They gave no clarification about how patients might be guided if they’re directed to the HealthEngine platform but don’t want to use it.

They advised the data collected by HealthEngine:

can not be used for secondary purposes, and can only be disclosed to third-party entities as described in HealthEngine’s tailored Privacy Policy and Collection Notice, as well as the department’s Privacy Notice.

But according to HealthEngine’s privacy policy, this means patient data could still be provided to other health professionals a patient selects, and de-identified information given to the Department of Health. The policy states HealthEngine may also disclose patients’ personal information to:

  • third-party IT and software providers such as Adobe Analytics
  • professional advisers such as lawyers and auditors, for the purpose of providing goods or services to HealthEngine
  • courts, tribunals and law enforcement, as required by law or to defend HealthEngine’s legal rights and
  • other parties, as consented to by the patient, or as required by law.

Ideally, the answers to our questions would have helped shed light on the extent to which patient privacy was considered in the government’s decision. But inconsistencies between what is presented in the privacy policies and the Department of Health’s response have not clarified this.




Read more:
The government is hyping digitalised services, but not addressing a history of e-government fails




Joan Henderson, Senior Research Fellow (Hon). Editor, Health Information Management Journal (HIMJ), University of Sydney and Kerin Robinson, Adjunct Associate Professor, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The AstraZeneca vaccine and over-65s: we may not have all the data yet, but limiting access could be counterproductive


Kylie Quinn, RMIT University

Last week, a German vaccine advisory committee recommended the AstraZeneca vaccine only be used in 18-64-year-olds, citing a lack of data on the efficacy of the vaccine in people over 65.

Subsequently, the European regulator, the European Medicines Agency, conditionally approved the vaccine for anyone over 18.

What can we make of this? Should we be giving this vaccine to older people or not?

While we don’t yet have all the data we’d like, we don’t have reason to believe this vaccine won’t be at least somewhat effective in older adults. To exclude them from receiving it wouldn’t necessarily be the right approach.

The recommendation

STIKO, a German vaccine advisory committee that reports to the country’s government, was responsible for the draft recommendation which caused the stir. It released a similar final recommendation at the weekend.

While the German government may elect to follow STIKO’s advice or the European Medicines Agency’s guidelines, the latter’s approval carries significant weight. Equivalent to the Therapeutic Goods Administration (TGA) in Australia, this body decides which vaccines may legally be supplied in Europe.

The AstraZeneca vaccine has already received approvals, not singling out older age groups, from multiple international regulators, including those in the United Kingdom, India and Mexico.




Read more:
Germany may not give the Oxford-AstraZeneca vaccine to over-65s, but that doesn’t mean it won’t work


Why did STIKO make this recommendation?

STIKO’s advice is based on the fact it didn’t have enough data to definitively say whether the vaccine will work in older people — not because it won’t.

According to the data we have so far from AstraZeneca’s phase 3 trials, only two out of 660 people in the trial aged over 65 got sick with COVID-19. Two sick people isn’t enough for a strong statistical analysis.

AstraZeneca initially enrolled younger people in its trials, with older people enrolled only later. So data on older people in the original trials and another trial in the United States are still on the way.

AstraZeneca’s early trials didn’t include as many older people as younger people.
Shutterstock

What do we know about the vaccine?

We have very good safety data for the AstraZeneca vaccine in older people. Older people actually have significantly lower levels of early side effects after vaccination. This makes sense, as older people’s immune systems don’t tend to react as strongly to vaccines, which would reduce many of these early side effects.

But the vaccine has been shown to induce strong immune responses in older people, which are likely to provide a degree of protection. The European Medicines Agency’s press release on their decision refers to a reasonable likelihood of protection based on this data.

So, just looking at immune responses, it’s reasonable to anticipate the AstraZeneca vaccine will be of some benefit, at least, to older people.




Read more:
Why we should prioritise older people when we get a COVID vaccine


What do we know from other vaccines?

Often, vaccines aren’t as effective in older people as compared to younger people, because their immune responses can be less robust. For example, in 2010-2011 in the US, the flu vaccine was 60% effective in the general population, but only 38% effective in people over 65.

There’s more information on efficacy in older people for other COVID-19 vaccines. Notably, the Pfizer vaccine maintained efficacy of 93.7% for people over 55, compared to 95% overall. Accordingly, it would be reasonable to prioritise the Pfizer vaccine for older people.

But we’re beginning to see that vaccine supply and distribution can be unpredictable, with supply issues for Pfizer and AstraZeneca starting to affect vaccine rollout.

Importantly, all COVID-19 vaccines assessed so far, including the AstraZeneca vaccine, provide a high level of protection against severe disease and death across variants of the SARS-CoV-2 virus.

Older people are more susceptible to the coronavirus.
Shutterstock

Limiting access limits options for older people

The question that advisory committees and regulators are weighing up is, should the AstraZeneca vaccine, or any vaccine, be recommended for older people if we know:

  • the vaccine has low risk of side effects

  • the vaccine has a fair but unconfirmed likelihood of providing some benefit

  • COVID-19 has a higher likelihood of severe disease and death in this demographic.

This is tricky to navigate and advice may differ across different vaccines and countries. For example, China is delaying vaccine rollout to older people while it waits for more data.

But conditional approval is a reasonable path to take. It allows for some uncertainty and maintains contact with the manufacturer. It also recognises that the likely benefit of giving older people any available vaccine could outweigh the hypothetical risk that it might not work in the midst of a crushing pandemic.




Read more:
The Oxford vaccine has unique advantages, as does Pfizer’s. Using both is Australia’s best strategy


In any case, approvals from regulators, such as the European Medicines Agency and the TGA, have the most impact — defining who the vaccine can be supplied to in a country.

If regulatory guidelines are kept open to all age groups above 18, it will facilitate access to vaccines for many people who could benefit from them. The next steps are distributing these vaccines, and educating and updating the public with the latest information as it comes to hand.

Crucially, we should support older people in vaccine decisions with two things: good information and easy access to an array of safe, protective vaccines.

Kylie Quinn, Vice-Chancellor’s Research Fellow, School of Health and Biomedical Sciences, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Data from 45 countries show containing COVID vs saving the economy is a false dichotomy




Michael Smithson, Australian National University

There is no doubt the COVID-19 crisis has incurred widespread economic costs. There is understandable concern that stronger measures against the virus, from social distancing to full lockdowns, worsen its impact on economies.

As a result, there has been a tendency to consider the problem as a trade-off between health and economic costs.

This view, for example, has largely defined the approach of the US federal government. “I think we’ve learned that if you shut down the economy, you’re going to create more damage,” said US Treasury Secretary Steve Mnuchin in June, as the Trump administration resisted calls to decisively combat the nation’s second COVID wave.

But the notion of a trade-off is not supported by data from countries around the world. If anything, the opposite may be true.

Data from 45 nations

Let’s examine available data for 45 nations from the Organisation for Economic Co-operation and Development, using COVID-19 data and economic indicators.

The COVID-19 statistic we’ll focus on is deaths per million of population. No single indicator is perfect, and these rates don’t always reflect contextual factors that apply to specific countries, but this indicator allows us to draw a reasonably accurate global picture.

The economic indicators we’ll examine are among those most widely used for overall evaluations of national economic performance. Gross domestic product (GDP) per capita is an index of national wealth. Exports and imports measure a country’s international economic activity. Private consumption expenditure is an indicator of how an economy is travelling.

Effects on GDP per capita

Our first chart plots nations’ deaths per million from COVID-19 against the percentage change in per capita GDP during the second quarter of 2020.

The size of each data point shows the scale of deaths per million as of June 30, using a logarithmic, or “log”, scale – a way to display a very wide range of values in compact graphical form.


Log(deaths per million) by percentage change in Q2 2020 GDP per capita.


If suppressing the virus, thereby leading to fewer deaths per million, resulted in worse national economic downturns, then the “slope” in figure 1 would be positive. But the opposite is true, with the overall correlation being -0.412.

The two outliers are China, in the upper-left corner, with a positive change in GDP per capita, and India at the bottom. China imposed successful hard lockdowns and containment procedures that meant economic effects were limited. India imposed an early hard lockdown but its measures since have been far less effective. Removing both from our data leaves a correlation of -0.464.
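The sign checks described above, including re-testing the correlation after dropping outliers, can be sketched in a few lines of code. The numbers below are invented stand-ins, not the OECD figures the article analyses; only the method (a Pearson correlation between log death rates and GDP change, with and without outliers) mirrors the text.

```python
import numpy as np

# Hypothetical (deaths per million, % change in Q2 GDP per capita) values
# for six made-up countries -- NOT the real OECD data used in the article.
deaths = np.array([10, 50, 200, 400, 800, 1000], dtype=float)
gdp_change = np.array([-3.0, -5.0, -9.0, -11.0, -12.5, -14.0])

# The article works with log(deaths per million), which compresses the
# very wide range of national death rates.
log_deaths = np.log10(deaths)

# Pearson correlation between log death rate and GDP change.
# A negative value means higher death rates go with worse downturns.
r = np.corrcoef(log_deaths, gdp_change)[0, 1]

# Dropping suspected outliers (here, the first and last rows) and
# recomputing mirrors the article's check that removing China and
# India does not flip the sign of the relationship.
r_trimmed = np.corrcoef(log_deaths[1:-1], gdp_change[1:-1])[0, 1]

print(r, r_trimmed)
```

For this toy data both correlations come out strongly negative; the article's point is simply that a trade-off between health and economic outcomes would instead show up as a positive slope.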

Exports and imports

Our second chart shows the relationship between deaths per million and percentage change in exports.

If there was a clear trade-off between containing the virus and enabling international trade, we would see a positive relationship between the changes in exports and death-rates. Instead, there appears to be no relationship.


Log(deaths per million) by percentage change in Q2 2020 exports.


Our third chart shows the relationship between deaths per million and percentage change in imports. As with exports, a trade-off would show in a positive relationship. But there is no evidence of such a relationship here either.


Log(deaths per million) by percentage change in Q2 2020 imports.


Consumer spending

Our fourth chart shows the relationship between deaths per million and percentage change in private consumption expenditure. This complements the picture we get from imports and exports, by tracking consumer spending as an indicator of internal economic activity.


Log(deaths per million) by percentage change in Q2 2020 private consumption.


Again, no positive relationship. Instead, the overall negative relationship suggests those countries that succeeded (at least temporarily) in suppressing the virus were better off economically than those countries adopting a more laissez-faire approach.

National wealth

As a postscript to this brief investigation, let’s take a quick look at whether greater national wealth seems to have helped countries deal with the virus.

Our fifth and final chart plots cases per million (not deaths per million) against national GDP per capita.


Log(GDP per capita) by log(cases per million).


If wealthier countries were doing better at suppressing transmission, the relationship should be negative. Instead, the clusters by region suggest it’s a combination of culture and politics driving the effectiveness of nations’ responses (or lack thereof).

In fact, if we examine the largest cluster, of European countries (the green dots), the relationship between GDP per capita and case rates is positive (0.379) – the opposite of what we would expect.




Read more:
Vital Signs: the cost of lockdowns is nowhere near as big as we have been told


It’s not a zero-sum game

The standard economic indicators reviewed here show that, overall, countries that have contained the virus also tend to have had less severe economic impacts than those that haven't.

No one should be misled into believing there is a zero-sum choice between saving lives and saving the economy. That is a false dichotomy.

If there is anything to be learned regarding how to deal with future pandemics, it is that rapidly containing the virus may well lessen its economic impact.

The Conversation

Michael Smithson, Professor, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Very convincing evidence’: Pfizer now has the data it needs to apply for COVID vaccine approval



Shutterstock

Kylie Quinn, RMIT University

On Wednesday, Pfizer and BioNTech announced their mRNA vaccine has demonstrated a remarkable 95% efficacy in the “final efficacy analysis” of its phase 3 trial.

The news comes hot on the heels of Pfizer/BioNTech’s interim analysis last week, which pointed to greater than 90% efficacy, and Moderna’s announcement on Monday, also based on an interim analysis, that its vaccine is 94.5% efficacious.

The word “efficacy” describes how well the vaccine offers protection against the target disease during the trial, whereas the word “effectiveness” refers to how well the vaccine protects against the disease in the real world.

This “final efficacy analysis” represents the Pfizer/BioNTech study’s “primary endpoint” — which means there are enough volunteers in the study who have developed COVID-19 to perform a solid evaluation of whether the vaccine is working.

Before the study began, statisticians designing the study identified that 164 people with confirmed COVID-19 would be enough cases to evaluate efficacy (more than 43,000 participants are enrolled in the trial in total).

There were 94 COVID-19 cases at last week's interim analysis, and the count reached 170 this week: 162 among placebo recipients and only eight among vaccine recipients. This is very convincing evidence that the vaccine protects against developing COVID-19 disease.
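Those case counts are enough to reproduce the headline figure. As a rough back-of-envelope sketch (assuming the trial's 1:1 randomisation gives roughly equal numbers of participants in each arm, so case counts can stand in for attack rates):

```python
# Back-of-envelope check of the reported 95% efficacy. Vaccine efficacy is
# 1 - (attack rate in the vaccine arm / attack rate in the placebo arm).
# With roughly equal arm sizes, the raw case counts approximate the ratio
# of attack rates.
placebo_cases = 162
vaccine_cases = 8

efficacy = 1 - vaccine_cases / placebo_cases
print(f"approximate efficacy: {efficacy:.1%}")  # about 95%
```

The precise published figure depends on the exact person-time in each arm, but the approximation lands where the press release does.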

The fact the primary endpoint was reached so quickly indicates cases are surging in the United States across a lot of the sites where the trial is taking place. Yes, these surging cases are providing more data than anticipated for phase 3 clinical studies; but they also highlight the urgency of the situation in the US.




Read more:
90% efficacy for Pfizer’s COVID-19 mRNA vaccine is striking. But we need to wait for the full data


Deeper insights

Pfizer/BioNTech have provided three additional important pieces of information.

First, the vaccine appears to be safe. Volunteers in the study were asked to report different symptoms after receiving the vaccine, and the most common symptoms of note were fatigue and headaches (3.8% of participants experienced more severe fatigue, and 2% headaches).

Second, the vaccine appears to protect against severe disease. The trial saw ten people become severely unwell with COVID-19, only one of whom had received the vaccine. This is a huge relief, because severe COVID-19 puts immense pressure on health-care systems.

Third, they’ve reported the vaccine has 94% efficacy in older people. This is crucial as older adults are bearing the brunt of COVID-19. In Australia, people over 65 make up only 20% of cases but almost 50% of all ICU admissions and more than 95% of deaths from COVID-19.

This efficacy in older people exceeds what many researchers had anticipated, as vaccines often don’t work as well in this group.

An elderly lady wearing a mask walks with a frame in a garden.
The Pfizer/BioNTech vaccine appears to work equally well in older people.
Shutterstock

It’s not a competition

The Moderna vaccine has also shown promising results on those first two measures — safety and protecting against severe disease. We await data on its efficacy in older people.

This rapid-fire succession of press releases may feel like Pfizer/BioNTech and Moderna are competing for the “biggest” efficacy, but competition is not the driving factor.

The primary endpoints are pre-defined by both companies and, when the study reaches them, an interim or final analysis can be performed. Data and safety monitoring boards, independent from the companies, perform these analyses.




Read more:
Moderna’s COVID vaccine reports 95% efficacy. It means we might have multiple successful vaccines


From a scientific perspective, it’s plausible these two vaccines would have similar efficacy, because they use very similar mRNA vaccine designs. In fact, it’s reassuring they are similar because, in science, we must be able to repeat our results. This gives us confidence the data are correct and that we’ll see similar results outside the lab.

In any case, competition is redundant when you consider the size of the problem. Nearly eight billion people around the world urgently need a vaccine. Pfizer/BioNTech and Moderna have each indicated they can make enough vaccine for around 500 million people next year. That still leaves around seven billion people needing a vaccine: more than enough of a market for both companies, and more.

Any way you look at it, the real competition is against the virus.

What’s next?

In the coming days, Pfizer/BioNTech will apply to the US Food and Drug Administration (FDA) for an emergency use approval for their vaccine. Moderna and other vaccine developers likely won’t be far behind once they reach their primary endpoints.

Applications to other regulatory bodies around the world will follow, including the Therapeutic Goods Administration in Australia. A successful emergency use approval with the FDA can accelerate approvals with other bodies.




Read more:
How to read results from COVID vaccine trials like a pro


This study will continue for two years to collect “secondary endpoints” — more in-depth details on how the vaccine works and its safety longer term. It will aim to answer three important questions:

  • longevity: how long the vaccine protects people for

  • infection: these latest results show that the vaccine prevents people from getting sick and showing symptoms of COVID-19. But we also need to see whether the vaccine protects people from getting infected in the first place

  • transmission: whether the vaccine reduces the likelihood of an infected but vaccinated person passing the virus on to another person.

It’s fairly straightforward to measure whether a vaccine prevents people from developing disease — you wait for people to report symptoms that could be COVID-19 and then perform a COVID test. Longer timelines and more complicated, laborious lab work are needed to learn about longevity, infection and transmission.

So, there are more insights into the virus and vaccines to come. But these studies are an exciting landmark in vaccine development.




Read more:
Why we should prioritise older people when we get a COVID vaccine


The Conversation


Kylie Quinn, Vice-Chancellor’s Research Fellow, School of Health and Biomedical Sciences, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

83% of Australians want tougher privacy laws. Now’s your chance to tell the government what you want



Shutterstock

Normann Witzleb, Monash University

Federal Attorney-General Christian Porter has called for submissions to the long-awaited review of the federal Privacy Act 1988.

This is the first wide-ranging review of privacy laws since the Australian Law Reform Commission produced a landmark report in 2008.

Australia has in the past often hesitated to adopt a strong privacy framework. The new review, however, provides an opportunity to improve data protection rules to an internationally competitive standard.

Here are some of the ideas proposed — and what’s at stake if we get this wrong.




Read more:
It’s time for privacy invasion to be a legal wrong


Australians care deeply about data privacy

Personal information has never had a more central role in our society and economy, and the government has a strong mandate to update Australia’s framework for the protection of personal information.

In the Australian Privacy Commissioner’s 2020 survey, 83% of Australians said they’d like the government to do more to protect the privacy of their data.

The intense debate about the COVIDSafe app earlier this year also shows Australians care deeply about their private information, even in a time of crisis.

Privacy laws and enforcement can hardly keep up with the ever-increasing digitalisation of our lives. Data-driven innovation provides valuable services that many of us use and enjoy. However, the government’s issues paper notes:

As Australians spend more of their time online, and new technologies emerge, such as artificial intelligence, more personal information about individuals is being captured and processed, raising questions as to whether Australian privacy law is fit for purpose.

The pandemic has accelerated the existing trend towards digitalisation and created a range of new privacy issues including working or studying at home, and the use of personal data in contact tracing.

Australians are rightly concerned they are losing control over their personal data.

So there’s no question the government’s review is sorely needed.

Issues of concern for the new privacy review

The government’s review follows the Australian Competition and Consumer Commission’s Digital Platforms Inquiry, which found that some data practices of digital platforms are unfair and undermine consumer trust. We rely heavily on digital platforms such as Google and Facebook for information, entertainment and engagement with the world around us.

Our interactions with these platforms leave countless digital traces that allow us to be profiled and tracked for profit. The Australian Competition and Consumer Commission (ACCC) found that the digital platforms make it hard for consumers to resist these practices and to make free and informed decisions regarding the collection, use and disclosure of their personal data.

The government has committed to implement most of the ACCC’s recommendations for stronger privacy laws to give us greater consumer control.

However, the reforms must go further. The review also provides an opportunity to address some long-standing weaknesses of Australia’s privacy regime.

The government’s issues paper, released to inform the review, identified several areas of particular concern. These include:

  • the scope of application of the Privacy Act, in particular the definition of “personal information” and current private sector exemptions

  • whether the Privacy Act provides an effective framework for promoting good privacy practices

  • whether individuals should have a direct right to sue for a breach of privacy obligations under the Privacy Act

  • whether a statutory tort for serious invasions of privacy should be introduced into Australian law, allowing Australians to go to court if their privacy is invaded

  • whether the enforcement powers of the Privacy Commissioner should be strengthened.

While most recent attention relates to improving consumer choice and control over their personal data, the review also brings back onto the agenda some never-implemented recommendations from the Australian Law Reform Commission’s 2008 review.

These include introducing a statutory tort for serious invasions of privacy, and extending the coverage of the Privacy Act.

Exemptions for small business and political parties should be reviewed

The Privacy Act currently contains several exemptions that limit its scope. The two most contentious exemptions have the effect that political parties and most business organisations need not comply with the general data protection standards under the Act.

The small business exemption is intended to reduce red tape for small operators. However, largely unknown to the Australian public, it means the vast majority of Australian businesses are not legally obliged to comply with standards for fair and safe handling of personal information.

Procedures for compulsory venue check-ins under COVID health regulations are just one recent illustration of why this is a problem. Some people have raised concerns that customers’ contact-tracing data, in particular collected via QR codes, may be exploited by marketing companies for targeted advertising.

A woman uses a QR code at a restaurant
Under current privacy laws, cafe and restaurant operators are exempt from complying with certain privacy obligations.
Shutterstock

Under current privacy laws, cafe and restaurant operators are generally exempt from complying with privacy obligations to undertake due diligence checks on third-party providers used to collect customers’ data.

The political exemption is another area of need of reform. As the Facebook/Cambridge Analytica scandal showed, political campaigning is becoming increasingly tech-driven.

However, Australian political parties are exempt from complying with the Privacy Act and anti-spam legislation. This means voters cannot effectively protect themselves against data harvesting for political purposes and micro-targeting in election campaigns through unsolicited text messages.

There is a good case for arguing political parties and candidates should be subject to the same rules as other organisations. It’s what most Australians would like and, in fact, wrongly believe is already in place.




Read more:
How political parties legally harvest your data and use it to bombard you with election spam


Trust drives innovation

Trust in digital technologies is undermined when data practices come across as opaque, creepy or unsafe.

There is increasing recognition that data protection drives innovation and adoption of modern applications, rather than impedes it.

A woman looks at her phone in the twilight.
Trust in digital technologies is undermined when data practices come across as opaque, creepy, or unsafe.
Shutterstock

The COVIDSafe app is a good example.
When that app was debated, the government accepted that robust privacy protections were necessary to achieve a strong uptake by the community.

We would all benefit if the government saw that this same principle applies to other areas of society where our precious data is collected.


Information on how to make a submission to the federal government review of the Privacy Act 1988 can be found here.




Read more:
People want data privacy but don’t always know what they’re getting


The Conversation


Normann Witzleb, Associate Professor in Law, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How political parties legally harvest your data and use it to bombard you with election spam



Robin Worrall/Unsplash, CC BY-SA

Erica Mealy, University of the Sunshine Coast

On Monday October 26, five days ahead of Queensland’s election, many voters received an unsolicited text message from Clive Palmer’s mining company Mineralogy, accusing Labor of planning to introduce a “death tax” and providing a link to an online how-to-vote card for Palmer’s United Australia Party.

Screenshot of campaign text message
Screenshot of a text message sent by Clive Palmer’s Mineralogy.
Author provided

Many recipients angrily wondered how Palmer’s firm had got hold of their contact details, and why they were receiving information that had already been thoroughly debunked.

It’s not clear how many voters received the message, although Deputy Premier Steven Miles accused Palmer of sending it to “hundreds of thousands of Queenslanders”. The message was also sent to many permanent and interstate residents not eligible to vote in the election.

Screenshot of election text message.
Not all of the recipients of this message were in the relevant electorate.
Author provided

But the issue goes deeper than Palmer's dubious tactics, even if his message was a particularly egregious example. This and similar messages have been sent to voters outside the relevant electorate. For example, one message from an independent candidate for the electorate of Macalister was received by a resident of Stafford.

In fact, there’s no law to prevent registered political parties — and the contractors and volunteers who work on their behalf — collecting your contact details and bombarding you with messages, regardless of whether you consented or not.

The problem of spam text messages was also prevalent during the 2019 federal election, when the tactics of Palmer’s United Australia Party in particular were called into question, prompting the party to pledge to stop the practice.




Read more:
From robo calls to spam texts: annoying campaign tricks that are legal


Political candidates, including independents and members of registered political parties, can request access to the Australian Electoral Commission’s database of voters’ contact details, to use in their campaign messaging. And it doesn’t stop there: they can also buy access to voters’ data from “information aggregator” companies such as Sensis, including voters’ names, home addresses, phone numbers and e-mail addresses.

Information aggregators also collect and analyse your publicly available data. Your Twitter, Facebook, LinkedIn, TikTok or Instagram posts can easily be scraped. If your phone number or email address appears publicly, such as in an advert for a community event or a public comment on a town planning submission, it can be collected. Few people realise how large their digital footprint really is.

This can reveal not only your contact details but also your political views. Publicly share a post about environmental concerns or social justice issues and you may have just pigeonholed yourself as a left-leaning voter, potentially putting you in line for targeted campaign messages.

Australia has laws against unsolicited spam, so how do political parties get away with this? Because they are entirely exempt from anti-spam legislation.

How politicians dodge spam laws

Private businesses have to abide by strict federal laws about data privacy and spam. The Privacy Act 1988 and the Spam Act 2003 were enacted to protect the public from unwanted and harmful information sharing.

The Privacy Act regulates who may have access to your personal information, how it must be stored and what must happen should that data be compromised. For example, if your data is hacked you must be notified.

As summarised by the Australian Communications and Media Authority (ACMA), spam is unwanted marketing messages sent via email, text or instant messaging containing offers, advertisements or promotions. Permission to contact you by these means can be granted as part of the terms and conditions of sale or use of a product, through a specific check box, or if you make your email or phone number public. Specific exemptions are made in the Spam Act for registered charities, government organisations, educational institutions and registered political parties.

Of course, in an open democracy it makes sense to allow elected officials to communicate directly with the voting public, particularly at election time. But aside from the nuisance and (legal) invasion of privacy, there are two main problems with the current free-for-all.

Problem 1: data security

If a data breach occurs for a non-exempt organisation, such as a bank or government organisation, any person who could be harmed from having information shared must be notified. The types of harm include the potential for identity theft and fraud.

But political parties, being exempt from privacy laws, are also exempt from this responsibility. This means if a political party has a data breach and shares your contact details, it doesn’t have to tell you.

Political parties reportedly maintain detailed databases of their constituents. These databases contain not just personal information held by the electoral commission, but any interactions with elected members, including complaints and contacts with electorate offices.

Problem 2: misinformation

Palmer's text messages were a blanket salvo rather than being tailored to particular voters. Hundreds of voters, and many non-voters, received the same message, despite repeated explicit denials from Labor that it is considering introducing a "death tax".

In Australia, with an arguably free press and available fact-checking, the public can seek balanced, factual information if they are motivated to do so. But in the internet age, many people are vulnerable to “fake news”, whether through naivete or because of “confirmation bias” — the increased likelihood of believing information that fits with their pre-existing worldview.

What can you do about it?

Screenshot of election campaign message
Clive Palmer’s follow-up message sent on October 29.
Author provided

Until the law changes, there are limited ways to combat political text and email intrusion. The first is judicious use of the block and delete buttons and e-mail spam filters. While not foolproof, this does reduce the potential for receiving messages again from that same number or email. However, this tactic would not have helped avoid a second round of messages sent by Palmer on October 29 from a different number.

The second way to combat these messages is to prevent your data and opinions from reaching political databases. In Australia, there is currently no reliable service to help remove your data from the public view, so the best option is to keep it from getting out in the first place.

To do this, you must always read the terms and conditions before giving away personal data. If you have time, audit your entire public online presence to find all the places on the internet that store your personal data, including on all social media platforms and on personal, professional or community web pages. You must always remain vigilant about protecting your information, which is no simple task.

The Conversation

Erica Mealy, Lecturer in Computer Science, University of the Sunshine Coast

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you’re a voter, a director, a doctor, a gun owner, on welfare, pay tax, have a driver’s licence or Medicare card – our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows the federal, state, territory and local governments to provide pensions, elections, parks, courts and hospitals, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. Previously called the “Data Sharing and Release” legislation, the word “transparency” has now replaced “release” to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Office of the Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

Person hitting a 'share' button on a keyboard.
In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Robert, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics' concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don't have a choice.

However, the bill does exclude some data sharing. If you're a criminologist researching law enforcement, for example, you won't have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches, although, as RoboDebt revealed, they often stick to denial. Universities are also often wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of de-identified personal information and lots of sharing will emphasise data matching.

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy


The Conversation


Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Keep calm, but don’t just carry on: how to deal with China’s mass surveillance of thousands of Australians



Shutterstock

Bruce Baer Arnold, University of Canberra

National security is like sausage-making. We might enjoy the tasty product, but want to look away from the manufacturing.

Recent news that Chinese company Zhenhua Data is profiling more than 35,000 Australians isn’t a surprise to people with an interest in privacy, security and social networks. We need to think critically about this, knowing we can do something to prevent it from happening again.

Reports indicate Zhenhua provides services to the Chinese government. It may also provide services to businesses in China and overseas.

The company operates under Chinese law and doesn’t appear to have a presence in Australia. That means we can’t shut it down or penalise it for a breach of our law. Also, Beijing is unlikely to respond to expressions of outrage from Australia or condemnation by our government – especially amid recent sabre-rattling.




Read more:
Journalists have become diplomatic pawns in China’s relations with the West, setting a worrying precedent


Zhenhua is reported to have data on more than 35,000 Australians – a list saturated by political leaders and prominent figures. Names, birthdays, addresses, marital status, photographs, political associations, relatives and social media account details are among the information extracted.

It seems Zhenhua has data on a wide range of Australians, including public figures such as Victorian supreme court judge Anthony Cavanough, Australia’s former ambassador to China Geoff Raby, former NSW premier and federal foreign affairs minister Bob Carr, tech billionaire Mike Cannon-Brookes and singer Natalie Imbruglia.

It’s not clear how individuals are being targeted. The profiling might be systematic. It might instead be conducted on the basis of a specific industry, academic discipline, public prominence or perceived political influence.

It’s unlikely Zhenhua profiles random members of the public, so average citizens without a China connection have little reason to be worried.

Still, details around the intelligence gathering elude us, so the best practice for the public is to maintain as much online privacy as possible, whenever possible.

Overall, we don’t know much about Zhenhua’s goals. And what we do know came from a leak to a US academic who sensibly fled China in 2018, fearing for his safety.

Pervasive surveillance is the norm

Pervasive surveillance is now a standard feature of all major governments, which often rely on surveillance-for-profit companies. Governments in the West buy services from big data analytic companies such as Palantir.

Australia’s government gathers information outside our borders, too. Take the bugging of the Timor-Leste government, a supposed friend rather than enemy.

How sophisticated is the plot?

Revelations about Zhenhua have referred to the use of artificial intelligence and the “mosaic” method of intelligence gathering. But this is probably less exciting than it sounds.

Reports indicate much of the data was extracted from online open sources. Access to much of this would have simply involved using algorithms to aggregate targets’ names, dates, qualifications and work history data found on publicly available sites.

The algorithms then help put the individual pieces of the “mosaic” together and fill in the holes on the basis of each individual’s relationships with others, such as their peers, colleagues or partners.

Some of the data for the mosaic may come from hacking or be gathered directly by the profiler. According to the ABC, some data that landed in Zhenhua’s lap was taken from the dark web.

One seller might have spent years copying data from university networks. For example, last year the Australian National University acknowledged major personal data breaches had taken place, potentially extending back 19 years.

This year there was also the unauthorised (and avoidable) access by cybercriminals to NSW government data on 200,000 people.

While it may be confronting to know a foreign state is compiling information on Australian citizens, it should be comforting to learn sharing this information can be avoided – if you’re careful.

What’s going on in the black box?

One big question is what Zhenhua’s customers in China’s political and business spheres might do with the data they’ve compiled on Australian citizens. Frankly, we don’t know. National security is often a black box and we are unlikely ever to get verifiable details.

Apart from distaste at being profiled, we might say being watched is no big deal, especially given many of those on the list are already public figures. Simply having an AI-assisted “Who’s Who” of prominent Australians isn’t necessarily frightening.

However, it is of concern if the information collected is being used for disinformation, such as through any means intended to erode trust in political processes, or subvert elections.

For instance, a report published in June by the Australian Strategic Policy Institute detailed how Chinese-speaking people in Australia were being targeted by a “persistent, large-scale influence campaign linked to Chinese state actors”.

Illustration of surveillance camera with Chinese flag draped over.
In June, Prime Minister Scott Morrison announced China was supposedly behind a major state-based attack against several of Australia’s sectors, including all levels of government.
Shutterstock

Deep fake videos are another form of subversion of increasing concern to governments and academics, particularly in the US.




Read more:
Deepfake videos could destroy trust in society – here’s how to restore it


Can we fix this?

We can’t make Zhenhua and its competitors disappear. Governments think they are too useful.

Making everything visible to state surveillance is now the ambition of many law enforcement bodies and all intelligence agencies. It’s akin to Google and its competitors wanting to know (and sell) everything about us, without regard for privacy as a human right.

We can, however, build resilience.

One way is to require government agencies and businesses to safeguard their databases. That hasn’t been the case with the NSW and Commonwealth governments, Facebook, dating services and major hospitals.

In Australia, we need to adopt recommendations by law reform inquiries and establish a national right to privacy. The associated privacy tort would give data custodians a real incentive to protect personal information, and would also encourage the public to avoid oversharing online.

In doing so, we might be better placed to condemn both China and other nations participating in unethical intelligence gathering, while properly acknowledging our own wrongdoings in Timor-Leste. The Conversation

Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.