How hackers can use message mirroring apps to see all your SMS texts — and bypass 2FA security



Syed Wajid Ali Shah, Deakin University; Jongkil Jay Jeong, Deakin University, and Robin Doss, Deakin University

It’s now well known that usernames and passwords aren’t enough to securely access online services. A recent study highlighted that more than 80% of all hacking-related breaches happen due to compromised and weak credentials, with three billion username/password combinations stolen in 2016 alone.

As such, the implementation of two-factor authentication (2FA) has become a necessity. Generally, 2FA aims to provide an additional layer of security to the relatively vulnerable username/password system.

It works too. Figures suggest users who enabled 2FA ended up blocking about 99.9% of automated attacks.

But as with any good cybersecurity solution, attackers can quickly come up with ways to circumvent it. They can bypass 2FA through the one-time codes sent as an SMS to a user’s smartphone.

Yet many critical online services in Australia still use SMS-based one-time codes, including myGov and the Big 4 banks: ANZ, Commonwealth Bank, NAB and Westpac.




Read more: A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?


So what’s the problem with SMS?

Major vendors such as Microsoft have urged users to abandon 2FA solutions that leverage SMS and voice calls. This is because SMS is notorious for its poor security, leaving it open to a host of different attacks.

For example, SIM swapping has been demonstrated as a way to circumvent 2FA. SIM swapping involves an attacker convincing a victim’s mobile service provider that they themselves are the victim, and then requesting the victim’s phone number be switched to a device of their choice.

SMS-based one-time codes can also be compromised through readily available tools such as Modlishka, which leverages a technique called reverse proxying. This facilitates communication between the victim and a service being impersonated.

So in the case of Modlishka, it will intercept communication between a genuine service and a victim, and will track and record the victim’s interactions with the service, including any login credentials they may use.

In addition to these existing vulnerabilities, our team has found further vulnerabilities in SMS-based 2FA. One particular attack exploits a feature provided on the Google Play Store to automatically install apps from the web to your Android device.

Due to syncing services, if a hacker manages to compromise your Google login credentials on their own device, they can then install a message mirroring app directly onto your smartphone.

If an attacker has access to your credentials and manages to log into your Google Play account on a laptop (although you will receive a prompt), they can then install any app they’d like automatically onto your smartphone.

The attack on Android

Our experiments revealed a malicious actor can remotely access a user’s SMS-based 2FA codes with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronise users’ notifications across different devices.

Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily-available message mirroring app on a victim’s smartphone via Google Play.

This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure.

Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly.

For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.

Although multiple conditions must be fulfilled for the aforementioned attack to work, it still demonstrates the fragile nature of SMS-based 2FA methods.

More importantly, this attack doesn’t need high-end technical capabilities. It simply requires insight into how these specific apps work and how to intelligently use them (along with social engineering) to target a victim.

The threat is even more real when the attacker is a trusted individual (e.g., a family member) with access to the victim’s smartphone.

What’s the alternative?

To remain protected online, you should check whether your initial line of defence is secure. First check your password to see if it’s compromised. There are a number of security programs that will let you do this. And make sure you’re using a well-crafted password.

We also recommend you limit the use of SMS as a 2FA method if you can. You can instead use app-based one-time codes, such as through Google Authenticator. In this case the code is generated within the Google Authenticator app on your device itself, rather than being sent to you.
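For the technically curious, the way authenticator apps derive their codes locally can be sketched in a few lines. This is an illustrative implementation of the time-based one-time password (TOTP) algorithm standardised in RFC 6238, not Google Authenticator’s actual source code:

```python
import hmac, hashlib, struct, time

def totp(secret, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1).
    The code is computed from a shared secret and the current
    30-second time window, so nothing is sent over the network."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))        # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890" at t=59s.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the server and the app each hold the secret and the clock, a valid code can be produced and checked without ever transmitting it by SMS.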

However, this approach can also be compromised by hackers using some sophisticated malware. A better alternative would be to use dedicated hardware devices such as YubiKey.

The YubiKey, first developed in 2008, is an authentication device designed to support one-time password and 2FA protocols without having to rely on SMS-based 2FA.

These are small USB (or near-field communication-enabled) devices that provide a streamlined way to enable 2FA across different services.

Such physical devices need to be plugged into or brought into close proximity of a login device as a part of 2FA, therefore mitigating the risks associated with visible one-time codes, such as codes sent by SMS.
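The reason such devices resist interception can be sketched as a challenge-response exchange. Real security keys such as the YubiKey use public-key signatures under the FIDO2/WebAuthn standards; this simplified illustration substitutes an HMAC over a shared secret purely to show the principle that no reusable code is ever displayed or transmitted:

```python
import hmac, hashlib, os

# Secret provisioned on the device at enrolment (a stand-in for the device's
# private key; real FIDO2 keys use public-key signatures instead of HMAC).
DEVICE_SECRET = os.urandom(32)

def device_respond(challenge):
    """The hardware key signs the server's challenge; the secret never leaves it."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge, response):
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)            # fresh random nonce for every login
response = device_respond(challenge)
assert server_verify(challenge, response)
```

Since every login uses a fresh nonce, replaying a captured response fails — unlike an SMS code, which is valid for anyone who can read it within its window.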

It must be stressed an underlying condition to any 2FA alternative is the user themselves must have some level of active participation and responsibility.

At the same time, further work must be carried out by service providers, developers and researchers to develop more accessible and secure authentication methods.

Essentially, these methods need to go beyond 2FA and towards a multi-factor authentication environment, where multiple methods of authentication are simultaneously deployed and combined as needed.




Read more: Can I still be hacked with 2FA enabled?




Syed Wajid Ali Shah, Research Fellow, Centre for Cyber Security Research and Innovation, Deakin University; Jongkil Jay Jeong, CyberCRC Research Fellow, Centre for Cyber Security Research and Innovation (CSRI), Deakin University, and Robin Doss, Research Director, Centre for Cyber Security Research and Innovation, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Calling out China for cyberattacks is risky — but a lawless digital world is even riskier



Alexander Gillespie, University of Waikato

Today’s multi-country condemnation of cyberattacks by Chinese state-sponsored agencies was a sign of increasing frustration at recent behaviour. But it also masks the real problem — international law isn’t strong or coherent enough to deal with this growing threat.

The coordinated announcement by several countries, including the US, UK, Australia and New Zealand, echoes the most recent threat assessment from the US intelligence community: cyber threats from nation states and their surrogates will remain acute for the foreseeable future.

Joining the chorus against China may be diplomatically risky for New Zealand and others, and China has already described the claims as “groundless and irresponsible”. But there is no doubt the problem is real.

The latest report from New Zealand’s Government Communications Security Bureau (GCSB) recorded 353 cyber security incidents in the 12 months to the middle of 2020, compared with 339 incidents in the previous year.

Given the focus is on potentially high-impact events targeting organisations of national significance, this is likely only a small proportion of the total. But the GCSB estimated state-sponsored attacks accounted for up to 30% of incidents recorded in 2019-20.

Since that report, more serious incidents have occurred, including attacks on the stock exchange and Waikato Hospital. The attacks are becoming more sophisticated and inflicting greater damage.

Globally, there are warnings that a major cyberattack could be as deadly as a weapon of mass destruction. The need to de-escalate is urgent.

Global solutions missing

New Zealand would be relatively well-prepared to cope with domestic incidents using criminal, privacy and even harmful digital communications laws. But most cybercrime originates overseas, and global solutions don’t really exist.

In theory, the attacks can be divided into two types — those by criminals and those by foreign governments. In reality, the line between the two is blurred.

Dealing with foreign criminals is slightly easier than combating attacks by other governments, and Prime Minister Jacinda Ardern has recognised the need for a global effort to fight this kind of cybercrime.




Read more: With cyberattacks growing more frequent and disruptive, a unified approach is essential


To that end, the government recently announced New Zealand was joining the Council of Europe’s Convention on Cybercrime, a global regime signed by 66 countries based on shared basic legal standards, mutual assistance and extradition rules.

Unfortunately, some of the countries most often suspected of allowing international cybercrime to be committed from within their borders have not signed, meaning they are not bound by its obligations.

That includes Russia, China and North Korea. Along with several other countries not known for their tolerance of an open, free and secure internet, they are trying to create an alternative international cybercrime regime, now entering a drafting process through the United Nations.

Cyberattacks as acts of war

Dealing with attacks by other governments (as opposed to criminals) is even harder.

Only broad principles exist, including that countries refrain from the threat or use of force against the territorial integrity or political independence of any state, and that they should behave in a friendly way towards one another. If one is attacked, it has an inherent right of self-defence.




Read more: Improving cybersecurity means understanding how cyberattacks affect both governments and civilians


Malicious state-sponsored cyber activity involving espionage, ransoms or breaches of privacy might qualify as unfriendly and in bad faith, but it does not amount to an act of war.

However, cyberattacks directed by other governments could amount to acts of war if they cause death, serious injury or significant damage to the targeted state. Cyberattacks that meddle in foreign elections may, depending on their impact, dangerously undermine peace.

And yet, despite these extreme risks, there is no international convention governing state-based cyberattacks in the ways the Geneva Conventions cover the rules of warfare or arms control conventions limit weapons of mass destruction.

Drawing a red line on cybercrime: US President Joe Biden meets Russian President Vladimir Putin in Geneva in June.

Risks of retaliation

The latest condemnation of Chinese-linked cyberattacks notwithstanding, the problem is not going away.

At their recent meeting in Geneva, US President Joe Biden told his Russian counterpart, Vladimir Putin, the US would retaliate against any attacks on its critical infrastructure. A new US agency aimed at countering ransomware attacks would respond in “unseen and seen ways”, according to the administration.

Such responses would be legal under international law if there were no alternative means of resolution or reparation, and could be argued to be necessary and proportionate.

Also, the response can be unilateral or collective, meaning the US might call on its friends and allies to help. New Zealand has said it is open to the proposition that victim states can, in limited circumstances, request assistance from other states to apply proportionate countermeasures against someone acting in breach of international law.




Read more: Ransomware, data breach, cyberattack: What do they have to do with your personal information, and how worried should you be?


A drift towards lawlessness

But only a month after Biden drew his red line with Putin, another massive ransomware attack crippled hundreds of service providers across 17 countries, including New Zealand schools and kindergartens.

The Russian-affiliated ransomware group REvil that was probably behind the attacks mysteriously disappeared from the internet a few weeks later.




Read more: Cyber Cold War? The US and Russia talk tough, but only diplomacy will ease the threat


Things are moving fast and none of it is very reassuring. In an interconnected world facing a growing threat from cyberattacks, we appear to be drifting away from order, stability and safety and towards the darkness of increasing lawlessness.

The coordinated condemnation of China by New Zealand and others has considerably upped the ante. All parties should now be seeking a rules-based international solution, or the risk will only grow.

Alexander Gillespie, Professor of Law, University of Waikato

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Did someone drop a zero? Australia’s digital economy budget spend should be 10 times bigger


Marek Kowalkiewicz, Queensland University of Technology

The federal budget for 2021-22 promises A$1.2 billion over the next six years to support the Digital Economy Strategy, a plan to make Australia “a leading digital economy and society by 2030”.

The Digital Economy Strategy proclaims:

We are well placed to be a leading digital economy and have strong foundations, but many countries are investing heavily in their digital futures.

This may sound like a lot, but a closer look at the strategy and funding announcements, compared with what other countries are doing, shows we may not be so well placed after all.

Countries such as France and Singapore have implemented similar initiatives, with one key difference: they are spending about ten times as much money as Australia.




Read more: Cuts, spending, debt: what you need to know about the budget at a glance


The world picture

To see how Australia compares worldwide, we can look to the most comprehensive global analysis of the digital evolution of nations, the Digital Intelligence Index produced by researchers at Tufts University in the United States.

This index looks at many factors, such as digital payment and logistics infrastructure, internet usage, regulations and research, to give each country scores for the current state of its digital economy and also how fast the digital economy is developing.

In the 2020 edition, Australia ranked as the 17th digital economy in the world — behind Sweden, Taiwan, New Zealand, and the leading nation, Singapore. In 2017 Australia came 11th, so we are already dropping down the rankings.

Just to maintain our position, we need to improve at least as rapidly as those behind us. Prime Minister Scott Morrison has acknowledged this, noting “we must keep our foot on the digital accelerator to secure our economic recovery from COVID-19”.

However, the Digital Intelligence Index ranks Australia 88th of the 90 countries analysed when it comes to our speed of improvement. The only two countries slower than Australia are Hungary and Nigeria, and there are 87 digital economies developing faster than us.

Since 2017, countries such as Slovenia, Egypt, Greece and Pakistan, which used to grow more slowly, are moving faster, increasing the pressure from the back of the pack.

Denmark and Sweden, two countries ahead of us in the Digital Evolution ranking above, used to grow slower, giving us a chance to overtake them. Not anymore. They have now picked up speed, and are increasing the gap we need to cover even to catch up with them.

The right ideas, but not enough funding

The Digital Economy Strategy package, announced in the budget, covers a broad range of initiatives. They are grouped into eight priorities, covering education, support for small and medium enterprises (SMEs), cyber security, artificial intelligence (AI), drone technologies, data sharing, support of government services, and tax incentives.

It is promising to see government’s dedicated investment, particularly in securing future skills and building Australia’s AI capability. But it is concerning to see the spending on some priorities fails to reflect the importance of these topics.

The federal government recognised the need for upskilling Australians. According to the Australia’s Digital Pulse report compiled by Deloitte and the Australian Computer Society, we will need 60,000 new technology workers every year for the next five years, just to meet the growing demand. Yet only 7,000 students graduated with IT degrees in Australia in 2019.

The new budget will support graduate and cadet programs, including through additional funding assigned to AI. Unfortunately, the government’s new programs will barely put a dent in our projected skills shortage of about 50,000 workers annually. The new programs will provide scholarships for only up to 468 graduates over a six-year period.

Artificial intelligence is another key topic. AI is upturning industries globally, and creating opportunities for emerging and transforming businesses. The federal government allocated $124.2 million to this priority, distributed among initiatives lasting between four and six years.

Compare this with France, which has allocated €1.5 billion (A$2.3 billion) to AI initiatives running between 2018 and 2022. Given France’s economy is roughly twice the size of Australia’s, an equivalent commitment from Australia would be slightly over A$1 billion — almost 10 times the promised A$124.2 million.

Not enough funding for private enterprise

A huge chunk of the $1.2 billion promised in the budget will be spent on the Enhancing Government Services Delivery priority. Aside from two small expenses of $13.2 million, it consists of just two large initiatives.

The first will deliver an enhanced version of the government’s online service platform, myGov. The second is for digital health, funding My Health Record and Australian Digital Health Agency activities. Together, they will consume more than half of the entire Digital Economy Strategy budget.


This seems grossly unbalanced and skewed toward digital transformation of the public sector, rather than supporting Australia’s digital economy holistically.

Are we really keeping our foot on the digital accelerator, or just pretending to?

We need to do better

Australia’s budget spending on the Digital Economy Strategy for 2021-22 is planned to be just shy of $500 million (with the remainder of the announced $1.2 billion to be spent over the following five years). That’s less than 0.1% of Australia’s entire projected budget spending. How does it compare to leading digital economies?

In Singapore (the world’s top digital economy), a single initiative to support organisations in adopting digital solutions and technologies received S$1 billion (A$960 million) in funding this year. That’s just shy of 1% of Singapore’s entire budget in 2021. Again, the commitment is around ten times higher than Australia’s investment.

To stop sliding down the rankings, Australia needs to put its (our) money where its mouth is. Countries ahead of us (Singapore) and behind us (France) are investing ten times as much as we do in digital economy initiatives.

Are we really well placed to be a leading digital economy? Like so much in life, you get what you pay for.




Read more: To change our economy we need to change our thinking




Marek Kowalkiewicz, Professor and Founding Director of QUT Centre for the Digital Economy, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We spent six years scouring billions of links, and found the web is both expanding and shrinking



Paul X. McCarthy, UNSW and Marian-Andrei Rizoiu, University of Technology Sydney

The online world is continuously expanding — always aggregating more services, more users and more activity. Last year, the number of websites registered on the “.com” domain surpassed 150,000,000.

However, more than a quarter of a century since its first commercial use, the growth of the online world is now slowing down in some key categories.

We conducted a multi-year research project analysing global trends in online diversity and dominance. Our research, published today in the Public Library of Science (PLOS), is the first to reveal some long-term trends in how businesses compete in the age of the web.

We saw a dramatic consolidation of attention towards a shrinking (but increasingly dominant) group of online organisations. So, while there is still growth in the functions, features and applications offered on the web, the number of entities providing these functions is shrinking.

Web diversity nosedives

We analysed more than six billion user comments from the social media website Reddit dating back to 2006, as well as 11.8 billion Twitter posts from as far back as 2011. In total, our research used a massive 5.6TB trove of data from more than a decade of global activity.

This dataset was more than four times the size of the original data from the Hubble Space Telescope, which helped Brian Schmidt and colleagues do their Nobel-prize winning work in 1998 to prove the universe’s expansion is accelerating.

With the Reddit posts, we analysed all the links to other sites and online services — more than one billion in total — to understand the dynamics of link growth, dominance and diversity through the decade.

We used a measure of link “uniqueness”. On this scale, 1 represents maximum diversity (all links have their own domain) and 0 is minimum diversity (all links are on one domain, such as “youtube.com”).

A decade ago, there was a much greater variety of domains within links posted by users of Reddit, with more than 20 different domains for every 100 random links users posted. Now there are only about five different domains for every 100 links posted.
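A measure like this is straightforward to compute. The following is an illustrative sketch (not the study’s actual code) of a unique-domain ratio over a list of posted links:

```python
from urllib.parse import urlparse

def link_uniqueness(urls):
    """Ratio of distinct domains to total links: 1.0 means every link
    has its own domain; the ratio approaches 0 as links concentrate
    on a handful of domains."""
    domains = [urlparse(u).netloc.lower() for u in urls]
    # Treat "www.youtube.com" and "youtube.com" as the same domain.
    domains = [d[4:] if d.startswith("www.") else d for d in domains]
    return len(set(domains)) / len(domains)

links = ["https://youtube.com/a", "https://www.youtube.com/b",
         "https://example.org/c", "https://news.site/d"]
print(link_uniqueness(links))  # 3 distinct domains over 4 links → 0.75
```

On this measure, the Reddit data described above fell from roughly 0.2 (20 domains per 100 links) to about 0.05 over the decade.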

Web diversity is nosediving: our Reddit analysis showed the pool of top-performing sources online is shrinking.

In fact, between 60% and 70% of all attention on key social media platforms is focused on just ten popular domains.

Beyond social media platforms, we also studied linkage patterns across the web, looking at almost 20 billion links over three years. These results reinforced the “rich are getting richer” online.

The authority, influence and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.
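PageRank itself can be approximated with a short power-iteration loop. This is a textbook sketch rather than the study’s implementation; here `adj[i]` lists the pages that page `i` links to:

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank with damping factor d.
    adj[i] is the list of pages that page i links to."""
    n = len(adj)
    pr = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - d) / n] * n              # teleportation term
        for i, outs in enumerate(adj):
            if outs:
                share = d * pr[i] / len(outs)
                for j in outs:               # pass rank along out-links
                    nxt[j] += share
            else:                            # dangling page: spread evenly
                for j in range(n):
                    nxt[j] += d * pr[i] / n
        pr = nxt
    return pr

# Pages 0 and 1 both link to page 2; page 2 links back to page 0.
ranks = pagerank([[2], [2], [0]])
# Page 2, with the most in-links, ends up with the highest rank.
```

The “rich get richer” effect falls out of exactly this mechanics: pages that already attract links accumulate rank, which in turn attracts more links.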




Read more: The internet’s founder now wants to ‘fix the web’, but his proposal misses the mark


App diversity is on the rise

The web started as a source of innovation, new ideas and inspiration — a technology that opened up the playing field. It’s now also becoming a medium that actually stifles competition and promotes monopolies and the dominance of a few players.

Our findings resolve a long-running paradox about the nature of the web: does it help grow businesses, jobs and investment? Or does it make it harder to get ahead by letting anyone and everyone join the game? The answer, it turns out, is it does both.

While the diversity of sources is in decline, there is a countervailing force of continually increasing functionality with new services, products and applications — such as music streaming services (Spotify), file sharing programs (Dropbox) and messaging platforms (Messenger, Whatsapp and Snapchat).

Functional diversity grows continuously online.

Website ‘infant mortality’

Another major finding was the dramatic increase in the “infant mortality” rate of websites — with the big kids on the block guarding their turf more staunchly than ever.

We examined new domains that were continually referenced or linked to in social media after their first appearance. We found that while almost 40% of the domains created in 2006 were active five years on, only a little more than 3% of those created in 2015 remain active today.

The dynamics of online competition are becoming clearer and clearer. And the loss of diversity is concerning. Unlike the natural world, there are no sanctuaries; competition is part of both nature and business.

Our study has profound implications for business leaders, investors and governments everywhere. It shows the network effects of the web don’t just apply to online businesses. They have permeated the entire economy and are rewriting many previously accepted rules of economics.

For example, the idea that businesses can maintain a competitive advantage based on where they are physically located is increasingly tenuous. Meanwhile, there are new opportunities for companies to set up shop from anywhere in the world and serve a global customer base that’s both mainstream and niche.

Innovative global products and services, such as TikTok, Klarna and SkyScanner, continue to emerge from a range of creators around the world.

The best way to encourage diversity is to have more global online businesses focused on providing diverse services, by addressing consumers’ increasingly niche needs.

In Australia, we’re starting to see this through homegrown companies such as Canva, SafetyCulture and iWonder. Hopefully many more will appear in the decade ahead.




Read more: If it’s free online, you are the product




Paul X. McCarthy, Adjunct Professor, UNSW and Marian-Andrei Rizoiu, Lecturer in Computer Science, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Privacy erosion by design: why the Federal Court should throw the book at Google over location data tracking



Jeannie Marie Paterson, The University of Melbourne and Elise Bant, The University of Western Australia

The Australian Competition and Consumer Commission has had a significant win against Google. The Federal Court found Google misled some Android users about how to disable personal location tracking.

Will this decision actually change the behaviour of the big tech companies? The answer will depend on the size of the penalty awarded in response to the misconduct.




Read more: ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data


In theory, the penalty is A$1.1 million per contravention. There is a contravention each time a reasonable person in the relevant class is misled. So the total award could, in theory, amount to many millions of dollars.

But the actual penalty will depend on how the court characterises the misconduct. We believe Google’s behaviour should not be treated as a simple accident, and the Federal Court should issue a heavy fine to deter Google and other companies from behaving this way in future.

Misleading conduct and privacy settings

The case arose from the representations made by Google to users of Android phones in 2018 about how it obtained personal location data.

The Federal Court held Google had misled some consumers by representing that “having Web & App Activity turned ‘on’ would not allow Google to obtain, retain and use personal data about the user’s location”.

In other words, some consumers were misled into thinking they could control Google’s location data collection practices by switching “off” Location History, whereas Web & App Activity also needed to be disabled to provide this protection.




Read more: The ACCC is suing Google for misleading millions. But calling it out is easier than fixing it


The ACCC also argued consumers reading Google’s privacy statement would be misled into thinking personal data was collected for their own benefit rather than Google’s. However, the court dismissed this argument on the grounds that reasonable users wanting to turn the Location History “off”

would have assumed that Google was obtaining as much commercial advantage as it could from use of the user’s personal location data.

This is surprising and might deserve further attention from regulators concerned to protect consumers from corporations “data harvesting” for profit.

How much should Google pay?

The penalty and other enforcement orders against Google will be made at a later date.

The aim of the penalty is to deter Google specifically, and other firms like Google, from engaging in misleading conduct again. If penalties are too low they may be treated by wrongdoing firms as merely a “cost of doing business”.

However, in circumstances where there is a high degree of corporate culpability, the Federal Court has shown willingness to award higher amounts than in the past. This has occurred even where the regulator has not sought higher penalties. In the recent Volkswagen Aktiengesellschaft v ACCC judgement, the full Federal Court confirmed an award of A$125 million against Volkswagen for making false representations about compliance with Australian diesel emissions standards.

The Federal Court found Google’s information about location data tracking was misleading.

In setting Google’s penalty, a court will consider factors such as the nature and extent of the misleading conduct and any loss to consumers. The court will also take into account whether the wrongdoer was involved in “deliberate, covert or reckless conduct, as opposed to negligence or carelessness”.

At this point, Google may well argue that only some consumers were misled, that it was possible for consumers to be informed if they read more about Google’s privacy policies, that it was only one slip-up, and that its contravention of the law was unintentional. These might seem to reduce the seriousness or at least the moral culpability of the offence.

But we argue they should not unduly cap the penalty awarded. Google’s conduct may not appear as “egregious and deliberately deceptive” as the Volkswagen case.

But equally Google is a massively profitable company that makes its money precisely from obtaining, sorting and using its users’ personal data. We think therefore the court should look at the number of Android users potentially affected by the misleading conduct and Google’s responsibility for its own choice architecture, and work from there.

Only some consumers?

The Federal Court acknowledged not all consumers would be misled by Google’s representations. The court accepted many consumers would simply accept the privacy terms without reviewing them, an outcome consistent with the so-called privacy paradox. Others would review the terms and click through to more information about the options for limiting Google’s use of personal data to discover the scope of what was collected under the “Web & App Activity” default.




Read more: The privacy paradox: we claim we care about our data, so why don’t our actions match?


This might sound like the court was condoning consumers’ carelessness. In fact the court made use of insights from economists about the behavioural biases of consumers in making decisions.

Consumers have limited time to read legal terms and limited ability to understand the future risks arising from those terms. Thus, if consumers are concerned about privacy they might try to limit data collection by selecting various options, but are unlikely to be able to read and understand privacy legalese like a trained lawyer or with the background understanding of a data scientist.

If one option is labelled “Location History”, it is entirely rational for everyday consumers to assume turning it off limits location data collection by Google.

The number of consumers misled by Google’s representations will be difficult to assess. But even if a small proportion of Android users were misled, that will be a very large number of people.

There was evidence before the Federal Court that, after press reports of the tracking problem, the number of consumers switching off the “Web” option increased by 500%. Moreover, Google makes considerable profit from the large amounts of personal data it gathers and retains, and profit is important when it comes to deterrence.

Google’s choice architecture

It has also been revealed that some employees at Google were not aware of the problem until an exposé in the press. An urgent meeting was held, referred to internally as the “Oh Shit” meeting.

The individual Google employees at the “Oh Shit” meeting may not have been aware of the details of the system. But that is not the point.

It is the company’s fault that is in question. And a company’s culpability is not just determined by what some executive or senior employee knew or didn’t know about its processes. Google’s corporate mindset is manifested or revealed in the systems it designs and puts in place.




Read more:
Inducing choice paralysis: how retailers bury customers in an avalanche of options


Google designed the information system that faced consumers trying to manage their privacy settings. This kind of system design is sometimes referred to as “choice architecture”.

Here the choices offered to consumers steered them away from opting out of Google collecting, retaining and using personal location data.

The “Other Options” (for privacy) information failed to mention that location tracking was carried out via other processes beyond the one labelled “Location History”. Plus, the default option for “Web & App Activity” (which included location tracking) was set to “on”.

This privacy-eroding system arose from the design of the “choice architecture”. It therefore warrants a serious penalty.

Jeannie Marie Paterson, Professor of Law, The University of Melbourne and Elise Bant, Professor of Law, The University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ACCC ‘world first’: Australia’s Federal Court found Google misled users about personal location data


Henry Perks / Unsplash

Katharine Kemp, UNSW

The Federal Court has found Google misled some users about personal location data collected through Android devices for two years, from January 2017 to December 2018.

The Australian Competition & Consumer Commission (ACCC) says this decision is a “world first” in relation to Google’s location privacy settings. The ACCC now intends to seek various orders against Google. These will include monetary penalties under the Australian Consumer Law (ACL), which could be up to A$10 million or 10% of Google’s local turnover.

Other companies too should be warned that representations in their privacy policies and privacy settings could lead to similar liability under the ACL.

But this won’t be a complete solution to the problem of many companies concealing what they do with data, including the way they share consumers’ personal information.

How did Google mislead consumers about their location history?

The Federal Court found Google’s previous location history settings would have led some reasonable consumers to believe they could prevent their location data being saved to their Google account. In fact, selecting “Don’t save my Location History in my Google Account” alone could not achieve this outcome.

Users needed to change an additional, separate setting to stop location data from being saved to their Google account. In particular, they needed to navigate to “Web & App Activity” and select “Don’t save my Web & App Activity to my Google Account”, even if they had already selected the “Don’t save” option under “Location History”.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


ACCC Chair Rod Sims responded to the Federal Court’s findings, saying:

This is an important victory for consumers, especially anyone concerned about their privacy online, as the Court’s decision sends a strong message to Google and others that big businesses must not mislead their customers.

Google has since changed the way these settings are presented to consumers, but is still liable for the conduct the court found was likely to mislead some reasonable consumers for two years in 2017 and 2018.

ACCC has misleading privacy policies in its sights

This is the second recent case in which the ACCC has succeeded in establishing misleading conduct in a company’s representations about its use of consumer data.

In 2020, the medical appointment booking app HealthEngine admitted it had disclosed more than 135,000 patients’ non-clinical personal information to insurance brokers without the informed consent of those patients. HealthEngine paid fines of A$2.9 million, including approximately A$1.4 million relating to this misleading conduct.




Read more:
How safe are your data when you book a COVID vaccine?


The ACCC has two similar cases in the wings, including another case regarding Google’s privacy-related notifications and a case about Facebook’s representations about a supposedly privacy-enhancing app called Onavo.

In bringing proceedings against companies for misleading conduct in their privacy policies, the ACCC is following the US Federal Trade Commission, which has sued many US companies over misleading privacy policies.

The ACCC has more cases in the wings about data privacy.
Shutterstock

Will this solve the problem of confusing and unfair privacy policies?

The ACCC’s success against Google and HealthEngine in these cases sends an important message to companies: they must not mislead consumers when they publish privacy policies and privacy settings. And they may receive significant fines if they do.

However, this will not be enough to stop companies from setting privacy-degrading terms for their users, if they spell such conditions out in the fine print. Such terms are currently commonplace, even though consumers are increasingly concerned about their privacy and want more privacy options.

Consider the US experience. The US Federal Trade Commission brought action against the creators of a flashlight app for publishing a privacy policy which didn’t reveal the app was tracking and sharing users’ location information with third parties.




Read more:
We need a code to protect our online privacy and wipe out ‘dark patterns’ in digital design


However, in the agreement settling this claim, the solution was for the creators to rewrite the privacy policy to disclose that users’ location and device ID data are shared with third parties. The question of whether this practice was legitimate or proportionate was not considered.

Major changes to Australian privacy laws will also be required before companies will be prevented from pervasively tracking consumers who do not wish to be tracked. The current review of the federal Privacy Act could be the beginning of a process to obtain fairer privacy practices for consumers, but any reforms from this review will be a long time coming.


This is an edited version of an article that originally appeared on UNSW Newsroom.

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A new online safety bill could allow censorship of anyone who engages with sexual content on the internet



Shutterstock

Zahra Zsuzsanna Stardust, UNSW

Under new draft laws, the eSafety Commissioner could order your nude selfies, sex education material or slash fiction to be taken down from the internet with just 24 hours’ notice.

Officially, the Morrison government’s new bill aims to improve online safety.

But in doing so, it gives broad, discretionary powers to the commissioner, with serious ramifications for anyone who engages with sexual content online.

Broad new powers

After initial consultation in 2019, the federal government released the draft online safety bill last December. Public submissions closed on the weekend.

The bill contains several new initiatives, from cyberbullying protections for children to new ways to remove non-consensual intimate imagery.

eSafety Commissioner Julie Inman Grant
Julie Inman Grant was appointed as the government’s eSafety Commissioner in 2016.
Lukas Coch/AAP

Crucially, it gives the eSafety Commissioner — a federal government appointee — a range of new powers.

It contains rapid website-blocking provisions to prevent the circulation of “abhorrent violent material” (such as live-streaming terror attacks). It reduces the timeframe for “takedown notices” (where a hosting provider is directed to remove content) from 48 to 24 hours. It can also require search engines to delete links and app stores to prevent downloads, with civil penalties of up to $111,000 for non-compliance.

But one concerning element of the bill that has not received wide public attention is its takedown notices for so-called “harmful online content”.

A move towards age verification

Due to the impracticality of classifying the entire internet, regulators are now moving towards systems that require access restrictions for certain content and make use of user complaints to identify harmful material.

In this vein, the proposed bill will require online service providers to use technologies to prevent children gaining access to sexual material.




Read more:
Coalition plans to improve online safety don’t address the root cause of harms: the big tech business model


Controversially, the bill gives the commissioner power to impose their own specific “restricted access system”.

This means the commissioner could decide that, to access sexual content, users must upload their identity documents, scan their fingerprints, undergo facial recognition technology or have their age estimated by artificial intelligence based on behavioural signals.

But there are serious issues with online verification systems, and comparable countries have already considered and abandoned them. The United Kingdom dropped its plans in 2019, following implementation difficulties and privacy concerns.

The worst-case scenario here is that governments collect databases of people’s sexual preferences and browsing histories, which can then be leaked, hacked, sold or misused.

eSafety Commissioner as ‘chief censor’

The bill also creates an “online content scheme”, which identifies content that users can complain about.

The bill permits any Australian internet user to make complaints about “class 1” and “class 2” content that is not subject to a restricted access system. These categories are extremely broad, ranging from actual, to simulated, to implied sexual activity, as well as explicit nudity.

In practice, people can potentially complain about any material depicting sex that they find on the internet, even on specific adult sites, if there is no mechanism to verify the user’s age.

Screen shot of YouPorn website
The potential for complaints about sexual material online is very broad under the proposed laws.
http://www.shutterstock.com

The draft laws then allow the commissioner to conduct investigations and issue removal notices as they “think fit”. There are no criteria for what warrants removal, no requirement to give reasons, and no process for users to be notified or have an opportunity to respond to complaints.

With no requirement to publish transparent enforcement data, the commissioner could simply remove content that is neither harmful nor unlawful, and is specifically exempt from liability for damages or civil proceedings.

This means users will have little clarity on how to actually comply with the scheme.

Malicious complaints and self-censorship

The potential ramifications of the bill are broad. They are likely to affect sex workers, sex educators, LGBTIQ health organisations, kink communities, online daters, artists and anyone who shares or accesses sexual content online.

While previous legislation was primarily concerned with films, print publications, computer games and broadcast media, this bill applies to social media, instant messaging, online games, websites, apps and a range of electronic and internet service providers.

Open palms holding a heart shape and a condom.
Sex education material may be subject to complaints.
http://www.shutterstock.com

It means links to sex education and harm reduction material for young people could be deleted by search engines. Hook up apps such as Grindr or Tinder could be made unavailable for download. Escort advertising platforms could be removed. Online kink communities like Fetlife could be taken down.

The legislation could embolden users – including anti-pornography advocates, disgruntled customers or ex-partners – to make vexatious complaints about sexual content, even where there is nothing harmful about it.

The complaints system is also likely to have a disproportionate impact on sex workers, especially those who turned to online work during the pandemic, and who already face a high level of malicious complaints.

Sex workers consistently report restrictive terms of service as well as shadowbanning and deplatforming, where their content is stealthily or selectively removed from social media.




Read more:
How the ‘National Cabinet of Whores’ is leading Australia’s coronavirus response for sex workers


The requirement for service providers to restrict children’s access to sexual content also provides a financial incentive to take an over-zealous approach. Providers may employ artificial intelligence at scale to screen and detect nudity (which can confuse sex education with pornography), apply inappropriate age verification mechanisms that compromise user privacy, or, where this is too onerous or expensive, take the simpler route of prohibiting sexual content altogether.

In this sense, the bill may operate in a similar way to United States “FOSTA-SESTA” anti-trafficking legislation, which prohibits websites from promoting or facilitating prostitution. This resulted in the pre-emptive closure of essential sites for sex worker safety, education and community building.

New frameworks for sexual content moderation

Platforms have been notoriously poor when it comes to dealing with sexual content. But governments have not been any better.

We need new ways to think about moderating sexual content.

Historically, obscenity legislation has treated all sexual content as if it was lacking in value unless it was redeemed by literary, artistic or scientific merit. Our current classification framework of “offensiveness” is also based on outdated notions of “morality, decency and propriety”.




Read more:
The Chatterley Trial 60 years on: a court case that secured free expression in 1960s Britain


Research into sex and social media suggests we should not simply conflate sex with risk.

Instead, some have proposed human rights approaches. These draw on a growing body of literature that sees sexual health, pleasure and satisfying sexual experiences as compatible with bodily autonomy, safety and freedom from violence.

Others have pointed to the need for improved sex education, consent skills and media literacy to equip users to navigate online space.

What’s obvious is we need a more nuanced approach to decision-making that imagines sex beyond “harm”, thinks more comprehensively about safer spaces, and recognises the cultural value in sexual content.

Zahra Zsuzsanna Stardust, Adjunct Lecturer, Centre for Social Research in Health, Research Assistant, Faculty of Law and Justice, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.