Digital campaigning on sites like Facebook is unlikely to swing the election



Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.
Shutterstock

Glenn Kefford, Macquarie University

With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also any potential threats we are facing to the integrity of the election.

Invariably, this discussion leads straight to digital.

In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.

Some degree of confusion is understandable, since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is, for the most part, unfounded.




Read more:
Chinese social media platform WeChat could be a key battleground in the federal election


Why parties use digital media

In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.

The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.

Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.

Platforms like Facebook offer services that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with tailored messaging.

Voting, persuasion and mobilisation

While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often claimed to have.

What we know is that digital messaging can have a small but significant effect on mobilisation, that there are concerns about how it could be used to demobilise voters, and that it is an effective way to fundraise and organise. But its ability to independently persuade voters to change their votes is estimated to be close to zero.




Read more:
Australian political journalists might be part of a ‘Canberra bubble’, but they engage the public too


The exaggeration and lack of clarity around digital is problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.

While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as Trump in the US.

Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps them in their persuasion efforts, but this is through layering their messages across as many mediums as possible, and using the network effect that social media provides.

Data privacy and foreign interference

The two clear and obvious dangers related to digital are about data privacy and foreign meddling. We should not accept that our data is shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.

An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a majority of voters see a distinction between commercial entities advertising to us online and parties or other campaigners doing the same.

We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.

The more vexing issue is that of foreign interference. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.

This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the broader strategy.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


The digital battleground

As the 2019 campaign unfolds, we should remember that, while digital matters, there is no evidence it has an independent election-changing effect.

Australians should be most concerned with how our data are being used and sold, and about any attempts to meddle in our elections by state and non-state actors.

The current regulatory environment fails to meet community standards. More can and should be done to protect us and our democracy.


This article has been co-published with The Lighthouse, Macquarie University’s multimedia news platform.

Glenn Kefford, Senior Lecturer, Department of Modern History, Politics and International Relations, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


As responsible digital citizens, here’s how we can all reduce racism online



No matter how innocent you think it is, what you type into search engines can shape how the internet behaves.
Hannah Wei / unsplash, CC BY

Ariadna Matamoros-Fernández, Queensland University of Technology

Have you ever considered that what you type into Google, or the ironic memes you laugh at on Facebook, might be building a more dangerous online environment?

Regulation of online spaces is starting to gather momentum, with governments, consumer groups, and even digital companies themselves calling for more control over what is posted and shared online.

Yet we often fail to recognise the role that each of us, as ordinary citizens, plays in shaping the digital world.

The privilege of being online comes with rights and responsibilities, and we need to actively ask what kind of digital citizenship we want to encourage in Australia and beyond.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Beyond the knee-jerk

The Christchurch terror attack prompted policy change by governments in both New Zealand and Australia.

Australia recently passed a new law that will impose penalties on social media platforms if they fail to remove violent content after it becomes available online.

Platforms may well be lagging behind in their content moderation responsibilities, and still need to do better in this regard. But this kind of “kneejerk” policy response won’t solve the spread of problematic content on social media.

Addressing hate online requires coordinated efforts. Platforms must improve the enforcement of their rules (not just announce tougher measures) to guarantee users’ safety. They may also need to consider a serious redesign, because the way they currently organise, select and recommend information often amplifies systemic problems in society, such as racism.




Read more:
New livestreaming legislation fails to take into account how the internet actually works


Discrimination is entrenched

Of course, biased beliefs and content don’t just live online.

In Australia, racial discrimination has been perpetuated in public policy, and the country has an unreconciled history of Indigenous dispossession and oppression.

Today, Australia’s political mainstream is still lenient with bigots, and the media often contributes to fearmongering about immigration.

However, we can all play a part in reducing harm online.

There are three aspects we might reconsider when interacting online so as to deny oxygen to racist ideologies:

  • a better understanding of how platforms work
  • the development of empathy to identify differences in interpretation when engaging with media (rather than focusing on intent)
  • working towards a more productive anti-racism online.

Online lurkers and the amplification of harm

White supremacists and other reactionary pundits seek attention on mainstream and social media. New Zealand Prime Minister Jacinda Ardern refused to name the Christchurch gunman to prevent fuelling his desired notoriety, and so did some media outlets.

The rest of us might draw comfort from not having contributed to amplifying the Christchurch attacker’s desired fame. It’s likely we didn’t watch his video or read his manifesto, let alone upload or share this content on social media.

But what about apparently less harmful practices, such as searching on Google and social media sites for keywords related to the gunman’s manifesto or his live video?

It’s not the intent behind these practices that should be the focus of this debate, but their consequences. Our everyday interactions on platforms influence search autocomplete algorithms and the hierarchical organisation and recommendation of information.

In the Christchurch tragedy, even if we didn’t share or upload the manifesto or the video, the zeal to access this information drove traffic to problematic content and amplified harm for the Muslim community.

Normalisation of hate through seemingly lighthearted humour

Reactionary groups know how to capitalise on memes and other jokey content that degrades and dehumanises.

By using irony to deny the racism in these jokes, these far-right groups connect and immerse new members in an online culture that deliberately uses memetic media to have fun at the expense of others.

The Christchurch terrorist attack showed this connection between online irony and the radicalisation of white men.

However, humour, irony and play – which are protected under platform policies – serve to cloak racism in more mundane, everyday contexts.




Read more:
Racism in a networked world: how groups and individuals spread racist hate online


Just as everyday racism shares discourses and vocabularies with white supremacy, lighthearted racist and sexist jokes are as harmful as online fascist irony.

Humour and satire should not be hiding places for ignorance and bigotry. As digital citizens we should be more careful about what kind of jokes we engage with and laugh at on social media.

What’s harmful and what’s a joke might not be apparent when interpreting content from a limited worldview. Developing empathy for others’ interpretations of the same content is a useful skill for minimising the amplification of racist ideologies online.

As scholar danah boyd argues:

The goal is to understand the multiple ways of making sense of the world and use that to interpret media.

Effective anti-racism on social media

A common practice in challenging racism on social media is to call it out publicly and show support for its victims. But critics of social media’s callout culture argue that these tactics often do not work as effective anti-racism tools, as they are performative rather than advancing real advocacy.

An alternative is to channel outrage into more productive forms of anti-racism. For example, you can report hateful online content either individually or through organisations that are already working on these issues, such as The Online Hate Prevention Institute and the Islamophobia Register Australia.

Most major social media platforms struggle to understand how hate articulates in non-US contexts. Reporting content can help platforms understand culturally specific coded words, expressions, and jokes (most of which are mediated through visual media) that moderators might not understand and algorithms can’t identify.

As digital citizens we can work together to deny attention to those that seek to discriminate and inflict harm online.

We can also learn how our everyday interactions might have unintended consequences and actually amplify hate.

However, these ideas do not diminish the responsibility of platforms to protect users, nor do they negate the role of governments to find effective ways to regulate platforms in collaboration and consultation with civil society and industry.

Ariadna Matamoros-Fernández, Lecturer in Digital Media at the School of Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Don’t click that link! How criminals access your digital devices and what happens when they do



A link is a mechanism for data to be delivered to your device.
Unsplash/Marvin Tolentino

Richard Matthews, University of Adelaide and Kieren Niĉolas Lovell, Tallinn University of Technology

Every day, often multiple times a day, you are invited to click on links sent to you by brands, politicians, friends and strangers. You download apps on your devices. Maybe you use QR codes.

Most of these activities are secure because they come from sources that can be trusted. But sometimes criminals impersonate trustworthy sources to get you to click on a link (or download an app) that contains malware.

At its core, a link is just a mechanism for data to be delivered to your device. Code can be built into a website which redirects you to another site and downloads malware to your device en route to your actual destination.

When you click on unverified links or download suspicious apps you increase the risk of exposure to malware. Here’s what could happen if you do – and how you can minimise your risk.




Read more:
How suppliers of everyday devices make you vulnerable to cyber attack – and what to do about it


What is malware?

Malware is defined as malicious code that:

will have adverse impact on the confidentiality, integrity, or availability of an information system.

In the past, malware described malicious code that took the form of viruses, worms or Trojan horses.

Viruses embedded themselves in genuine programs and relied on these programs to propagate. Worms were generally stand-alone programs that could install themselves using a network, USB or email program to infect other computers.

Trojan horses took their name from the Greeks’ infamous gift to the city of Troy, recounted in Homer’s Odyssey. Much like the wooden horse, a Trojan horse looks like a normal file until some predetermined action causes the code to execute.

Today’s generation of attacker tools are far more sophisticated, and are often a blend of these techniques.

These so-called “blended attacks” rely heavily on social engineering – the ability to manipulate someone into doing something they wouldn’t normally do – and are often categorised by what they ultimately do to your systems.

What does malware do?

Today’s malware comes in easy-to-use, customised toolkits distributed on the dark web or by well-meaning security researchers attempting to fix problems.

With a click of a button, attackers can use these toolkits to send phishing emails and spam SMS messages to deploy various types of malware. Here are some of them.


  • a remote administration tool (RAT) can be used to access a computer’s camera and microphone, and to install other types of malware

  • keyloggers can be used to capture passwords, credit card details and email addresses

  • ransomware is used to encrypt private files and then demand payment in return for the password

  • botnets are used for distributed denial of service (DDoS) attacks and other illegal activities. DDoS attacks can flood a website with so much virtual traffic that it shuts down, much like a shop being filled with so many customers you are unable to move

  • cryptominers will use your computer hardware to mine cryptocurrency, which will slow your computer down

  • hijacking or defacement attacks are used to deface a site or embarrass you by posting pornographic material to your social media

An example of a defacement attack on The Utah Office of Tourism Industry from 2017.
Wordfence



Read more:
Everyone falls for fake emails: lessons from cybersecurity summer school


How does malware end up on your device?

According to insurance claim data of businesses based in the UK, over 66% of cyber incidents are caused by employee error. Although the data attributes only 3% of these attacks to social engineering, our experience suggests the majority of these attacks would have started this way.

For example, employees may fail to follow dedicated IT and information security policies, may not be informed of how much of their digital footprint has been exposed online, or may simply be taken advantage of. Merely posting what you are having for dinner on social media can open you up to attack from a well-trained social engineer.

QR codes are equally risky if users open the link a code points to without first validating where it leads, as indicated by this 2012 study.

Even opening an image in a web browser and running a mouse over it can lead to malware being installed. This is quite a useful delivery tool considering the advertising material you see on popular websites.

Fake apps have also been discovered on both the Apple and Google Play stores. Many of these attempt to steal login credentials by mimicking well known banking applications.

Sometimes malware is placed on your device by someone who wants to track you. In 2010, the Lower Merion School District settled two lawsuits brought against them for violating students’ privacy and secretly recording using the web camera of loaned school laptops.

What can you do to avoid it?

In the case of the Lower Merion School District, students and teachers suspected they were being monitored because they “saw the green light next to the webcam on their laptops turn on momentarily.”

While this is a great indicator, many hacker tools will ensure webcam lights are turned off to avoid raising suspicion. On-screen cues can give you a false sense of security, especially if you don’t realise that the microphone is always being accessed for verbal cues or other forms of tracking.

Facebook CEO Mark Zuckerberg covers the webcam of his computer. It’s commonplace to see information security professionals do the same.
iphonedigital/flickr

Basic awareness of the risks in cyberspace will go a long way to mitigating them. This is called cyber hygiene.

Using good, up-to-date virus and malware scanning software is crucial. However, the most important tip is to update your device regularly to ensure it has the latest security patches.

Hover over links in an email to see where you are really going. Avoid shortened links, such as bit.ly URLs, and QR codes, unless you can check where the link is going by using a URL expander.
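The point of hovering is that the host in the underlying link, not the visible text, is what your browser actually contacts. A minimal sketch (all URLs here are hypothetical examples, not from any real phishing campaign) shows how a trusted brand name can be buried inside an attacker-controlled hostname:

```python
# Sketch: extract the host a link really points to, so trusted-looking
# link text can be compared against the actual destination.
from urllib.parse import urlparse

def real_host(href: str) -> str:
    """Return the lowercase hostname of a link's actual destination."""
    return urlparse(href).netloc.lower()

# The visible link text claims to be the bank's login page...
displayed_text = "https://www.mybank.com/login"
# ...but the underlying href buries the brand inside an attacker's host.
actual_href = "https://www.mybank.com.evil-example.net/login"

print(real_host(actual_href))  # 'www.mybank.com.evil-example.net'
```

The giveaway is that only the rightmost registrable part of the hostname (here `evil-example.net`) determines who receives your traffic, no matter how many familiar words are stacked in front of it.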

What to do if you already clicked?

If you suspect you have malware on your system, there are simple steps you can take.

Open your webcam application. If you can’t access the device because it is already in use, this is a telltale sign that you might be infected. Higher than normal battery usage or a machine running hotter than usual are also good indicators that something isn’t quite right.

Make sure you have good anti-virus and anti-malware software installed. Products such as Malwarebytes, or the Estonian start-up Seguru, can be installed on your phone as well as your desktop to provide real-time protection. If you are running a website, make sure you have good security installed. Wordfence works well for WordPress blogs.

More importantly though, make sure you know how much data about you has already been exposed. Google yourself – including a Google image search against your profile picture – to see what is online.

Check all your email addresses on the website haveibeenpwned.com to see whether your passwords have been exposed. Then make sure you never use any of those passwords again on other services. Basically, treat them as compromised.
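The password check at haveibeenpwned.com is worth understanding, because the site never needs to see your actual password. Its Pwned Passwords API uses a k-anonymity scheme: only the first five hex characters of the password’s SHA-1 hash leave your machine, and the matching against the returned hash suffixes happens locally. A minimal offline sketch (the network call is shown only as a comment):

```python
# Sketch of the k-anonymity scheme behind the Pwned Passwords API:
# hash the password with SHA-1, split the hex digest into a 5-character
# prefix (which is sent) and the remaining suffix (which never is).
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)  # '5BAA6' -> query https://api.pwnedpasswords.com/range/5BAA6
# The response lists hash suffixes with breach counts; you check locally
# whether `suffix` appears, so the full hash is never transmitted.
```

This is why using the service does not itself expose your password: even the operator only ever learns a 5-character hash prefix shared by many possible passwords.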

Cyber security has technical aspects, but remember: any attack that doesn’t affect a person or an organisation is just a technical hitch. Cyber attacks are a human problem.

The more you know about your own digital presence, the better prepared you will be. All of our individual efforts better secure our organisations, our schools, and our family and friends.

Richard Matthews, Lecturer Entrepreneurship, Commercialisation and Innovation Centre | PhD Candidate in Image Forensics and Cyber | Councillor, University of Adelaide and Kieren Niĉolas Lovell, Head of TalTech Computer Emergency Response Team, Tallinn University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A tale of two media reports: one poses challenges for digital media; the other gives ABC and SBS a clean bill of health



The competitive neutrality report has given the ABC, and SBS, a clean bill of health.
Shutterstock

Denis Muller, University of Melbourne

Two reports out this week – one into the operations of Facebook and Google, the other into the competitive neutrality of the ABC and SBS – present the federal government with significant policy and political challenges.

The first is by far the more important of the two.

It is the interim report by the Australian Competition and Consumer Commission of its Digital Platforms Inquiry, and in a set of 11 preliminary recommendations it proposes far-reaching changes to media regulation.

Of particular interest are its preliminary recommendations for sustaining journalism and news content.

These are based on the premise that there is a symbiotic relationship between news organisations and the big digital platforms. Put simply, the news organisations depend heavily on these platforms to get their news out to their audiences.

The problem, the ACCC says, is that the way news stories are ranked and displayed on the platforms is opaque. All we know – or think we know – is that these decisions are made by algorithms.




Read more:
Constant attacks on the ABC will come back to haunt the Coalition government


The ACCC says this lack of transparency causes concerns that the algorithms and other policies of the platform giants may be operating in a way that affects the production of news and journalistic content.

To respond to this concern, the preliminary recommendation is for a new regulatory authority to be established. It would have the power to peer into these algorithms and monitor, investigate and report on how content – including news content – is ranked and displayed.

The purpose would be to identify the effects of the algorithms and other policies on the production of news and journalistic content.

It would also allow the authority to assess the impact on the incentives for news and journalistic content creation, particularly where news organisations have invested a lot of time and money in producing original content.

In this way, the ACCC is clearly trying to protect and promote the production of public-interest journalism, which is expensive but vital to democratic life. It is how the powerful are held to account, how wrongdoing is uncovered, and how the public finds out what is going on inside forums such as the courts and local councils.

So far, the big news media organisations have concentrated on these aspects of the ACCC interim report and have expressed support for them.

However, there are two other aspects of the report on which their response has been muted.

The first of these is the preliminary recommendation that proposes a media regulatory framework that would cover all media content, including news content, on all systems of distribution – print, broadcast and online.

The ACCC recommends that the government commission a separate independent review to design such a framework. The framework would establish underlying principles of accountability, set boundaries around what should be regulated and how, set rules for classifying different types of content, and devise appropriate enforcement mechanisms.

Much of this work has already been attempted by earlier federal government inquiries – the Finkelstein inquiry and the Convergence Review – both of which produced reports for the Gillard Labor government in 2012.

Their proposals for an overarching regulatory regime for all types of media generated a hysterical backlash from the commercial media companies, who accused the authors of acting like Stalin, Mao, or the Kim clan in North Korea.

So if the government adopts this recommendation from the ACCC, the people doing the design work can expect some heavy flak from big commercial media.

The other aspect of the ACCC report that is likely to provoke a backlash from the media is a preliminary recommendation concerning personal privacy.

Here the ACCC proposes that the government adopt a 2014 recommendation of the Australian Law Reform Commission that people be given the right to sue for serious invasions of privacy.

The media have been on notice over privacy invasion for many years. As far back as 2001, the High Court developed a test of privacy in a case involving the ABC and an abattoir company called Lenah Game Meats.

Now, given the impact on privacy of Facebook and Google, the ACCC has come to the view that the time has arrived to revisit this issue.

The ACCC’s interim report is one of the most consequential documents affecting media policy in Australia for many decades.

The same cannot be said of the other media-related report published this week: that of the inquiry into the competitive neutrality of the public-sector broadcasters, the ABC and SBS.

This inquiry was established in May this year to make good on a promise made by Malcolm Turnbull to Pauline Hanson in 2017.




Read more:
The politics behind the competitive neutrality inquiry into ABC and SBS


He needed One Nation’s support for the government’s changes to media ownership laws, without which they would not have passed the Senate.

Hanson was not promised any particular focus for the inquiry, so the government dressed it up in the dull raiment of competitive neutrality.

While it had the potential to do real mischief – in particular to the ABC – the report actually gives both public broadcasters a clean bill of health.

There are a couple of minor caveats concerning transparency about how they approach the issue of fair competition, but overall the inquiry finds that the ABC and SBS are operating properly within their charters. Therefore, by definition, they are acting in the public interest.

This has caused pursed lips at News Corp which, along with the rest of the commercial media, took this opportunity to have a free kick at the national broadcasters. But in the present political climate, the issue is likely to vanish without trace.

While the government still has an efficiency review of the ABC to release, it also confronts a political timetable and a set of opinion polls calculated to discourage it from opening up another row over the ABC.

Denis Muller, Senior Research Fellow in the Centre for Advancing Journalism, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ACCC wants to curb digital platform power – but enforcement is tricky


Katharine Kemp, UNSW

We need new laws to monitor and curb the power wielded by Google, Facebook and other powerful digital platforms, according to the Australian Competition and Consumer Commission (ACCC).

The Preliminary Report on the Digital Platforms Inquiry found major changes to privacy and consumer protection laws are needed, along with alterations to merger law, and a regulator to investigate the operation of the companies’ algorithms.

Getting the enforcement right will be key to the success of these proposed changes.




Read more:
Digital platforms. Why the ACCC’s proposals for Google and Facebook matter big time


Scrutinising accumulation of market power

The report says Google and Facebook each possess substantial power in markets such as online search and social media services in Australia.

It’s not against the law to possess substantial market power alone. But these companies would breach our November 2017 misuse of market power law if they engaged in any conduct with the effect, likely effect or purpose of substantially lessening competition – essentially, blocking rivalry in a market.

Moving forwards, the ACCC has indicated it will scrutinise the accumulation of market power by these platforms more proactively. Noting that “strategic acquisitions by both Google and Facebook have contributed to the market power they currently hold”, the ACCC says it intends to ask large digital platforms to provide advance notice of any planned acquisitions.

While such pre-notification of certain mergers is required in jurisdictions such as the US, it is not currently a requirement in other sectors under Australian law.

At the moment the ACCC is just asking the platforms to do this voluntarily – but has indicated it may seek to make this a formal requirement if the platforms don’t cooperate with the request. It’s not currently clear how this would be enforced.

The ACCC has also recommended the standard for assessing mergers should be amended to expressly clarify the relevance of data acquired in the transaction as well as the removal of potential competitors.

The law doesn’t explicitly refer to potential competitors in addition to existing competitors at present, and some argue platforms are buying up nascent competitors before the competitive threat becomes apparent.




Read more:
Explainer: what is public interest journalism?


A regulator to monitor algorithms

According to the ACCC, there is a “lack of transparency” in Google’s and Facebook’s arrangements concerning online advertising and content, which are largely governed by algorithms developed and owned by the companies. These algorithms – essentially a complex set of instructions in the software – determine what ads, search results and news we see, and in what order.

The problem is nobody outside these companies knows how they work or whether they’re producing results that are fair to online advertisers, content producers and consumers.

The report recommends a regulatory authority be given power to monitor, investigate and publish reports on the operation of these algorithms, among other things, to determine whether they are producing unfair or discriminatory results. This would only apply to companies that generate more than A$100 million per annum from digital advertising in Australia.




Read more:
Attention economy: Facebook delivers traffic but no money for news media


These algorithms have come under scrutiny elsewhere. The European Commission has previously fined Google €2.42 billion for giving unfair preference to its own shopping comparison services in its search results, relative to rival comparison services, thereby contravening the EU law against abuse of dominance. This decision has been criticised, though, for failing to provide Google with a clear way of complying with the law.

The important questions following the ACCC’s recommendation are:

  • what will the regulator do with the results of its investigations?
  • if it determines that the algorithm is producing discriminatory results, will it tell the platform what kind of results it should achieve instead, or will it require direct changes to the algorithm?

The ACCC has not recommended the regulator have the power to make such orders. It seems the most the regulator would do is shed some “sunshine” on the impacts of these algorithms, which are currently hidden from view, and potentially refer the matter to the ACCC for investigation if this was perceived to amount to a misuse of market power.

If a digital platform discriminates against competitive businesses that rely on its platform – say, app developers or comparison services – so that rivalry is stymied, this could be an important test case under our misuse of market power law. This law was amended in 2017 to address longstanding weaknesses but has not yet been tested in the courts.




Read more:
We should levy Facebook and Google to fund journalism – here’s how


Privacy and fairness for consumers

The report recommends substantial changes to the Privacy Act and Australian Consumer Law to reduce the power imbalance between the platforms and consumers.

We know from research that most Australians don’t read online privacy policies; many say they don’t understand the privacy terms offered to them, or they feel they have no choice but to accept them. Two-thirds say they want more say in how their personal information is used.

The solutions proposed by the ACCC include:

  • strengthening the consent required under our privacy law, requiring it to be express (it may currently be implied), opt-in, adequately informed, voluntary and specific
  • allowing consumers to require their personal data to be erased in certain circumstances
  • increasing penalties for breaches of the Privacy Act
  • introducing a statutory cause of action for serious invasion of privacy in Australia.



Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


This last recommendation was previously made by the Australian Law Reform Commission in 2008 and again in 2014, and would finally allow individuals in Australia to sue for harm suffered as a result of such an invasion.

If consent is to be voluntary and specific, companies should not be allowed to “bundle” consents for a number of uses and collections (both necessary and unnecessary) and require consumers to consent to all or none. These are important steps in addressing the unfairness of current data privacy practices.

Together these changes would bring Australia a little closer to the stronger data protection offered in the EU under the General Data Protection Regulation.

But the effectiveness of these changes would depend to a large extent on whether the government would also agree to improve funding and support for the federal privacy regulator, which has been criticised as passive and underfunded.

Another recommended change to consumer protection law would make it illegal to include unfair terms in consumer contracts and impose fines for such a contravention. Currently, for a first-time unfair contract terms “offender”, a court could only “draw a line” through the unfair term such that the company could not force the consumer to comply with it.

Making such terms illegal would increase the incentive for companies drafting standard form contracts to exclude detrimental terms that create a significant imbalance between them and their customers and are not reasonably necessary to protect their legitimate interests.




Read more:
Soft terms like ‘open’ and ‘sharing’ don’t tell the true story of your data


The ACCC might also take action on these standard terms under our misleading and deceptive conduct laws. The Italian competition watchdog last week fined Facebook €10 million for conduct including misleading users about the extent of its data collection and practices.

The ACCC appears to be considering the possibility of even broader laws against “unfair” practices, which regulators like the US Federal Trade Commission have used against bad data practices.

Final report in June 2019

As well as 11 recommendations, the report mentions nine areas for “further analysis and assessment”, which in itself reflects the complexity of the issues facing the ACCC.

The ACCC is seeking responses and feedback from stakeholders on the preliminary report, before creating a final report in June 2019.

Watch this space – or google it.




Read more:
How not to agree to clean public toilets when you accept any online terms and conditions




Katharine Kemp, Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital platforms. Why the ACCC’s proposals for Google and Facebook matter big time


The Competition and Consumer Commission is worried about the ability of the platforms we use to determine the news we read.
Shutterstock

Sacha Molitorisz, University of Technology Sydney and Derek Wilding, University of Technology Sydney

The Australian Competition and Consumer Commission has released the preliminary report of its Digital Platforms Inquiry, and Google and Facebook won’t be happy.

Rather than adopting a gently-gently approach, the ACCC has produced draft recommendations that are extensive and dramatic.

If implemented, they would significantly affect the way the digital platforms make their money, and help direct the content we consume.

What’s more, the inquiry is touted as a world first. Its findings will be closely monitored, and perhaps even adopted, by regulators internationally.

Who should care?

The digital platforms themselves should (and do) care.

Any new regulations designed to foster competition or protect individual privacy (both are among the ACCC’s recommendations) have the potential to harm their revenues.

They’ve a lot to lose. In 2017, nearly A$8 billion was spent on online advertising in Australia, and more than half went to Google and Facebook (p3).

News organisations whose output is disseminated by those platforms should (and do) care too.

As the ACCC notes, more than half of the traffic on Australian news websites comes via Google and Facebook (p8).




Read more:
News outlets air grievances and Facebook plays the underdog in ACCC inquiry


Increasingly, news producers depend on social media and search engines to connect with consumers. Google is used for 95% of searches (98% on mobile devices).

The rise of Google, Facebook and other digital platforms has been accompanied by unprecedented pressures on traditional news organisations.

Most obviously, classified advertising revenue has been unbundled from newspapers.

In 2001, classified advertising revenue stood at A$2 billion. By 2016, it had fallen to A$200 million. The future of newspapers’ ability to produce news is under a cloud, and digital platforms help control the weather.

Of course, advertisers care too.

But the stakeholders with the most to gain or lose are us, Australian citizens.




Read more:
Taking on big tech: where does Australia stand?


Our lives are mediated by Google, Facebook, Apple, Amazon, Twitter and others as never before. Google answers our search queries; Facebook hosts friends’ baby snaps; YouTube (owned by Google) distributes professional and user-generated videos; Instagram (owned by Facebook) hosts our holiday snaps.

As the ACCC notes, they have given us tremendous benefits, for minimal (apparent) cost.

And they’ve done it at lightning speed. Google arrived in 1998, Facebook in 2004 and Twitter in 2006. They are mediating what comes before our eyes in ways we don’t understand and (because they keep their algorithms secret) in ways we can’t understand.

What does the ACCC recommend?

The ACCC’s preliminary recommendations are far-reaching and bold.

First, it suggests an independent review to address the inadequacy of current media regulatory frameworks.

This would be a separate, independent inquiry to “design a regulatory framework that is able to effectively and consistently regulate the conduct of all entities which perform comparable functions in the production and delivery of content in Australia, including news and journalistic content, whether they are publishers, broadcasters, other media businesses, or digital platforms”.

This is a commendable and urgent proposal. Last year, cross-media ownership laws were repealed as anachronistic in a digital age. To protect media diversity and plurality, the government needs to revisit the issue of regulatory frameworks.




Read more:
Starter’s gun goes off on new phase of media concentration as Nine-Fairfax lead the way


Second, it proposes privacy safeguards. Privacy in Australia is dangerously under-protected. Digital platforms such as Google and Facebook generate revenue by knowing their users and targeting advertising with an accuracy unseen in human history.

As the ACCC puts it, “the current regulatory framework, including privacy laws, does not effectively deter certain data practices that exploit the information asymmetries and the bargaining power imbalances that exist between digital platforms and consumers.”

It makes a number of specific preliminary recommendations, including creating a right to erasure and the requirement of “express, opt-in consent”.

It also supports the creation of a civil right to sue for serious invasions of privacy, as recommended by the Australian Law Reform Commission.

Australians lack the protections that Americans enjoy under the US Bill of Rights; we certainly lack the protection afforded under Europe’s sweeping new privacy law.




Read more:
Google slapped hard in Europe over data handling


It wants the penalties for breaches of our existing Privacy Act increased. It recommends the creation of a third-party certification scheme, which would enable the Office of the Australian Information Commissioner to give complying bodies a “privacy seal or mark”.

And it wants a new or existing organisation to monitor attempts by vertically integrated platforms such as Google to favour their own businesses. This would happen where Google gives prominence in search results to products sold through Google platforms, or prominence to stories from organisations with which it has a commercial relationship.

The organisation would oversee platforms that generate more than A$100 million annually, and which disseminate news, or hyperlinks to news, or snippets of news.

It would investigate complaints and even initiate its own investigations in order to understand how digital platforms are disseminating news and journalistic content and advertising.

As it notes,

The algorithms operated by each of Google and Facebook, as well as other policies, determine which content is surfaced and displayed to consumers in news feed and search results. However, the operation of these algorithms and other policies determining the surfacing of content remain opaque. (p10)

It makes other recommendations, touching on areas including merger law, pre-installed browsers and search engines, takedown procedures for copyright-infringing content, implementing a code of practice for digital platforms and changing the parts of Australian consumer law that deal with unfair contract terms.

Apart from its preliminary recommendations, there are further areas on which it invites comment and suggestions.




Read more:
New data access bill shows we need to get serious about privacy with independent oversight of the law


These include giving media organisations tax offsets for producing public interest news, and making subscribing to news publications tax deductible for consumers.

Platforms could be brought into a co-regulatory system for flagging content that is subject to quality control, creating their own quality mark. And a new ombudsman could deal with consumer complaints about scams, misleading advertising and the ranking of news content.

All of these recommendations and areas of interest will generate considerable debate.

What’s next?

The ACCC will accept submissions in response to its preliminary report until February 15.

At the Centre for Media Transition, we played a background role in one aspect of this inquiry.

Earlier this year, we were commissioned by the ACCC to prepare a report on the impact of digital platforms on news and journalistic content. It too was published on Monday.

Our findings overlap with the ACCC on some points, and diverge on others.




Read more:
Google and Facebook cosy up to media companies in response to the threat of regulation


Many thorny questions remain, but one point is clear: the current regime that oversees digital platforms is woefully inadequate. Right now, as the ACCC notes, digital platforms are largely unregulated.

New ways of thinking are needed. A mix of old laws (or no laws) and new media spells trouble.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney and Derek Wilding, Co-Director, Centre for Media Transition, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Travelling overseas? What to do if a border agent demands access to your digital device



New laws enacted in New Zealand give customs agents the right to search your phone.
Shutterstock

Katina Michael, Arizona State University

New laws enacted in New Zealand this month give border agents the right to demand travellers entering the country hand over passwords for their digital devices. We outline what you should do if it happens to you, in the first part of a series exploring how technology is changing tourism.


Imagine returning home to Australia or New Zealand after a long-haul flight, exhausted and red-eyed. You’ve just reclaimed your baggage after getting through immigration when you’re stopped by a customs officer who demands you hand over your smartphone and the password. Do you know your rights?

Both Australian and New Zealand customs officers are legally allowed to search not only your personal baggage, but also the contents of your smartphone, tablet or laptop. It doesn’t matter whether you are a citizen or visitor, or whether you’re crossing a border by air, land or sea.




Read more:
How to protect your private data when you travel to the United States


New laws that came into effect in New Zealand on October 1 give border agents:

…the power to make a full search of a stored value instrument (including power to require a user of the instrument to provide access information and other information or assistance that is reasonable and necessary to allow a person to access the instrument).

Those who don’t comply could face prosecution and NZ$5,000 in fines. Border agents have similar powers in Australia and elsewhere. In Canada, for example, hindering or obstructing a border guard could cost you up to C$50,000 or five years in prison.

A growing trend

Australia and New Zealand don’t currently publish data on these kinds of searches, but there is a growing trend of device search and seizure at US borders. There was a more than fivefold increase in the number of electronic device inspections between 2015 and 2016 – bringing the total number to 23,000 per year. In the first six months of 2017, the number of searches was already almost 15,000.

In some of these instances, people have been threatened with arrest if they didn’t hand over passwords. Others have been charged. In cases where they did comply, people have lost sight of their device for a short period, or devices were confiscated and returned days or weeks later.




Read more:
Encrypted smartphones secure your identity, not just your data


On top of device searches, there is also canvassing of social media accounts. In 2016, the United States introduced an additional question on online visa application forms, asking people to divulge social media usernames. As this form is usually filled out after the flights have been booked, travellers might feel they have no choice but to part with this information rather than risk being denied a visa, despite the question being optional.

There is little oversight

Border agents may have a legitimate reason to search an incoming passenger – for instance, if a passenger is suspected of carrying illicit goods, banned items, or agricultural products from abroad.

But searching a smartphone is different from searching luggage. Our smartphones carry our innermost thoughts, intimate pictures, sensitive workplace documents, and private messages.

The practice of searching electronic devices at borders could be compared to police having the right to intercept private communications. But in such cases in Australia, police require a warrant to conduct the intercept. That means there is oversight, and a mechanism in place to guard against abuse. And the suspected crime must be proportionate to the action taken by law enforcement.

What to do if it happens to you

If you’re stopped at a border and asked to hand over your devices and passwords, make sure you have educated yourself in advance about your rights in the country you’re entering.

Find out whether what you are being asked is optional or not. Just because someone in a uniform asks you to do something, it does not necessarily mean you have to comply. If you’re not sure about your rights, ask to speak to a lawyer and don’t say anything that might incriminate you. Keep your cool and don’t argue with the customs officer.




Read more:
How secure is your data when it’s stored in the cloud?


You should also be smart about how you manage your data generally. You may wish to switch on two-factor authentication, which requires a one-time code in addition to your password. And store sensitive information in the cloud on a secure European server while you are travelling, accessing it only on a needs basis. Data protection is taken more seriously in the European Union as a result of the recently enacted General Data Protection Regulation.
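For the curious, the one-time codes generated by most authenticator apps are derived from a shared secret and the current time using the standard TOTP scheme (RFC 6238). A minimal sketch of how such a code is computed, using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """Derive a time-based one-time password (TOTP, RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // period)
    msg = struct.pack(">Q", counter)          # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32)
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
code = totp(secret)  # a fresh 6-digit code every 30 seconds
```

Because each code expires after one 30-second window, a code seen once is useless afterwards — which is why this second factor adds real protection to an account.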

Microsoft, Apple and Google all indicate that handing over a password to one of their apps or devices is in breach of their services agreement, privacy management, and safety practices. That doesn’t mean it’s wise to refuse to comply with border force officials, but it does raise questions about the position governments are putting travellers in when they ask for this kind of information.

Katina Michael, Professor, School for the Future of Innovation in Society & School of Computing, Informatics and Decision Systems Engineering, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital government isn’t working in the developing world. Here’s why



Digital government is primarily a social and political phenomenon driven by human behaviour.
Shutterstock

Rania Fakhoury, Université Libanaise

The digital transformation of society has brought many immediate benefits: it’s created new jobs and services, boosted efficiency and promoted innovation. But when it comes to improving the way we govern, the story is not that simple.

It seems reasonable to imagine introducing digital information and communication technologies into public sector organisations – known as “digital government” or “e-government” – would have a beneficial impact on the way public services are delivered. For instance, by enabling people to claim rebates for medical bills via a government website.

When implemented well, e-government can reduce the cost of delivering government and public services, and ensure better contact with citizens – especially in remote or less densely populated areas. It can also contribute to greater transparency and accountability in public decisions, stimulate the emergence of local e-cultures, and strengthen democracy.




Read more:
Welcome to E-Estonia, the tiny nation that’s leading Europe in digital innovation


But implementing e-government is difficult and uptake among citizens can be slow. While Denmark – the number one ranked country in online service delivery in 2018 – sees 89% of its citizens using e-services, many other countries are struggling. In Egypt, for example, uptake of e-services is just 2%.

E-Government Development Index (EGDI) of global regions in 2018.
United Nations E-Government Survey 2018

I argue the implementation of digital government is an intractable problem for developing countries. But there are small steps we can take right now to make the issues more manageable.

Few digital government projects succeed

The nature of government is complex and deeply rooted in the interactions among social, political, economic, organisational and global systems. At the same time, technology is itself a source of complexity – its impacts, benefits and limitations are not yet widely understood by stakeholders.

Given this complexity, it’s not uncommon for digital government projects to fail, and not just in the developing world. In fact, 30% of projects are total failures. Another 50-60% are partial failures, due to budget overruns and missed timing targets. Fewer than 20% are considered a success.

In 2016, government spending on technology worldwide was around US$430 billion, with a forecast of US$476 billion by 2020. Failure rates for these kinds of projects are therefore a major concern.

What’s gone wrong in developing countries?

A major factor contributing to the failure of most digital government efforts in developing countries has been the “project management” approach. For too long, government and donors saw the introduction of digital services as a stand-alone “technical engineering” problem, separate from government policy and internal government processes.

But while digital government has important technical aspects, it’s primarily a social and political phenomenon driven by human behaviour – and it’s shaped by the local political and country context.

Change therefore depends mainly upon “culture change” – a long and difficult process that requires public servants to engage with new technologies. They must also change the way they regard their jobs, their mission, their activities and their interaction with citizens.




Read more:
Narendra Modi, India’s social media star, struggles to get government online


In developing countries, demand for e-services is lacking, both inside and outside the government. External demand from citizens is often silenced by popular cynicism about the public sector, and by inadequate channels for communicating demand. As a result, public sector leaders feel too little pressure from citizens for change.

For example, Vietnam’s attempt in 2004 to introduce an Education Management Information System (EMIS) to track school attendance, among other things, was cancelled due to lack of buy-in from political leaders and senior officials.

Designing and managing a digital government program also requires a high level of administrative capacity. But developing countries most in need of digital government are also the ones with the least capacity to manage the process, creating a risk of “administrative overload”.

How can we start to solve this problem?

Approaches to digital government in developing countries should emphasise the following elements.

Local leadership and ownership

In developing countries, most donor-driven e-government projects attempt to transplant what was successful elsewhere, without adapting to the local culture, and without adequate support from those who might benefit from the service.

Of the roughly 530 information technology projects funded by the World Bank from 1995 to 2015, 27% were evaluated as moderately unsatisfactory or worse.

The swiftest solution for change is to ensure projects have buy-in from locals – both governments and citizens alike.

Public sector reform

Government policy, reflected in legislation, regulations and social programs, must be reformulated to adapt to new digital tools.

The success of digital government in Nordic countries results from extensive public sector reforms. In the United States, investments in information technology by police departments, which lowered crime rates, were powered by significant organisational changes.

In developing countries, little progress has been made in the last two decades in reforming the public sector.

Accept that change will be slow

Perhaps the most easily overlooked lesson about digital government is that it takes a long time to achieve the fundamental digitisation of a public sector. Many developing countries are attempting to achieve in the space of a few decades what took centuries in what is now the developed world. The Canadian International Development Agency found:

In Great Britain, for example, it was only in 1854 that a series of reforms was launched aimed at constructing a merit-based public service shaped by rule of law. It took a further 30 years to eliminate patronage as the modus operandi of public sector staffing.




Read more:
‘Digital by default’ – efficient eGovernment or costly flop?


Looking to the future

Effective strategies for addressing the problem of e-government in developing countries should combine technical infrastructure with social, organisational and policy change.

The best way forward is to acknowledge the complexities inherent in digital government and to break them into more manageable components. At the same time, we must engage citizens and leaders alike to define social and economic values.

Local leaders in developing countries, and their donor partners, require a long-term perspective. Fundamental digital government reform demands sustained effort, commitment and leadership over many generations. Taking the long view is therefore an essential part of a global socio-economic plan.

Rania Fakhoury, Chercheur associé à LaRIFA, Université Libanaise

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Use this app twice daily’: how digital tools are revolutionising patient care



New electronic devices are being used by people of all ages to track activity, measure sleep and record nutrition.
Shutterstock

Caleb Ferguson, Western Sydney University; Debra Jackson, University of Technology Sydney, and Louise Hickman, University of Technology Sydney

Imagine you’ve recently had a heart attack.

You’re a lucky survivor. You’ve received high-quality care from nurses and doctors whilst in hospital and you’re now preparing to go home with the support of your family.

The doctors have made it clear that the situation is grim. It’s a case of: change your lifestyle or die. You’ve got to stop smoking, increase your physical activity, eat a healthy balanced diet (whilst reducing your salt), and make sure you take all your medicine as prescribed.




Read more:
Evidence-based medicine is broken: why we need data and technology to fix it


But before you leave the hospital, the cardiology nurse wants to talk to you. There are a few apps you can download on your smartphone that will help you manage your recovery, including the transition from hospital to home and all the health-related behavioural changes necessary to reduce the risk of another heart attack.

Rapid advancements in digital technologies are revolutionising healthcare. The benefits are numerous, but the rate of development is difficult to keep up with. And that’s creating challenges for both healthcare professionals and patients.

What are digital therapeutics?

Digital therapeutics can be defined as any intervention that is digitally delivered and has a therapeutic effect on a patient. They can be used to treat medical conditions in a similar way to drugs or surgery.

Current examples of digital therapeutics include apps for managing medications and cardiovascular health, apps to support mental health and well being, or augmented and virtual reality tools for patient education.

Paper-based letters, health records, prescription charts and education pamphlets are outdated. We can now send emails, enter information into electronic databases and access electronic medication charts.

And patient education is no longer a static, one-way communication. The digital revolution facilitates dynamic and personalised education, and a two-way interaction between patient and therapist.

How do digital therapeutics help?

Digital health care improves overall quality of care, even in cases where a patient lives hundreds of kilometres away from their doctor.

Take diabetes for example. This condition affects 1.7 million Australians. It’s a major risk factor for developing cardiovascular disease and stroke. So it’s important that people with diabetes manage their condition to reduce their risk.

A recent study evaluated a team-based online game, which was delivered by an app to provide diabetes self-management education. The participants who received the app in this trial had meaningful and sustained improvements in their diabetes, as measured by their HbA1c (blood glucose levels).

App-based games of this kind hold promise for improving chronic disease outcomes at scale.

New electronic devices are also being used by people of all ages to track activity, measure sleep and record nutrition. This information provides instant and accurate feedback to individuals and their therapists, allowing for adjustments where necessary. The logged information can also be combined into large data sets to reveal patterns over time and inform future treatments.




Read more:
How virtual reality spiders are helping people face their arachnophobia


Digital therapeutics are spawning a new language within the healthcare industry. “Connected health” reflects the increasingly digital ways clinicians and patients communicate. A few examples include text messaging, telehealth, and video consultations with health professionals.

There is increasing evidence that digitally delivered care (including apps and text message based interventions) can be good for your health and can help you manage chronic conditions, such as diabetes and cardiovascular disease.

But not all health apps are the same

Whilst the digital health revolution is exciting, results of research studies should be carefully interpreted by patients and providers.

Innovation led to some 325,000 mobile health apps being available in 2017. This raises significant governance issues relating to patient safety (including data protection) when using digital therapeutics.

A recent review identified that most studies have a relatively short duration of intervention and only reflect short-term follow up with participants. The long-term effect of these new therapeutic interventions remains largely unknown.

The current speed of technological development means the usual safety mechanisms face new ethical and regulatory challenges. Who is doing the prescribing? Who is responsible for the efficacy, storage and accuracy of data? How are these technologies being integrated into existing care systems?

Digital health needs a collaborative approach

Digital health presents seismic disruption to patient care, particularly when new technologies are cheap and readily accessible to patients who might lack the insight required to recognise normality or cause for alarm. Technology can be enabling and empowering for self-management; however, a lot more needs to be done to link these new technologies into the current health system.

Take the new Apple Watch functionality of heart rate notifications for example. Research like the Apple Heart Study suggests this exciting innovation could lead to significantly improved detection rates of heart rhythm disorders, and enhanced stroke prevention efforts.

But when a patient receives a high heart rate notification, what should they do? Ignore it? Go to a GP? Head straight to the emergency department? And what is the flow-on impact on the health system?




Read more:
Why virtual reality won’t replace cadavers in medical school


Many of these questions remain unanswered, suggesting an urgent need for research that examines how technology is implemented into existing healthcare systems.

If we are to produce useful digital therapeutics for real-world problems, then it is critical that the end-users are engaged in the process. Patients and healthcare professionals will need to work with software developers to design applications that meet the complex healthcare needs of patients.

Caleb Ferguson, Senior Research Fellow, Western Sydney University; Debra Jackson, Professor, University of Technology Sydney, and Louise Hickman, Associate Professor of Nursing, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

A new levy on digital giants like Google, Facebook and eBay is a step towards a fairer way of taxing


Antony Ting, University of Sydney

The government is reportedly considering a new tax on the digital economy. While no details of the tax are available yet, the digital services tax recently proposed by the European Commission may give us an idea what the tax might look like.

In essence, the proposal will impose a 3% tax on the turnover of large digital economy companies in the European Union. Similar ideas have been suggested in the UK and France.

The current international tax system was designed before the internet was invented, and this new tax is a response to that gap. Under the current system, a foreign company is not subject to income tax in Australia unless it has a significant physical presence in the country. The key word here is "physical".

It is well known that modern multinationals such as Google can derive substantial revenue and profits from Australia without a significant physical presence here. It is no surprise that this 20th-century tax principle struggles to deal with the 21st-century economy.

This problem is well known but the solution is far more elusive.

Attempts to tax digital companies

The best long-term response to the rise of the digital economy would be to rewrite the rules so that the international tax regime takes into account more than a company's "physical" presence. However, this reform would require international consensus on a new set of rules to allocate taxing rights over the profits of multinationals among different countries.

In particular, it would mean more taxing rights for source countries where the revenue is generated. The formidable political resistance is not difficult to imagine.

The OECD has attempted to address this fundamental issue, but in vain so far. Its report on the taxation of the digital economy in the Base Erosion and Profit Shifting project did not provide any recommendations for improving the system. The recent report on its continuing work on the digital economy again shows little progress.

While the EU also recognises that the long-term solution should be a major reform of the international tax regime, the slow progress of the OECD’s effort is seriously testing the patience of many countries. Therefore, the EU has proposed the digital services tax as an “interim” measure.

Google as an example

The Senate inquiry into corporate tax avoidance revealed that Google derives billions of dollars of revenue from Australia every year but has been paying very little tax. In particular, the revenue reported to the Australian Securities and Investments Commission in 2015 was less than A$500 million, with net profits of A$47 million.

The government responded by introducing the Multinational Anti-Avoidance Law in 2016, targeting the particular tax structures used by multinational enterprises such as Google.

Google Australia’s 2016 annual report states that the company has restructured its business. Though not stated explicitly, the restructure was most likely undertaken in response to the introduction of this law.

As a result of the restructure, both the revenue and net profits of Google Australia increased by a factor of 2.2.

However, here is the bad news. Though Google has reported significantly more profits in Australia, the profit margins of the local company remain very low compared to its worldwide group. For example, the net profit margin of Google Australia was 9% while that of the group was 22%.

Of course, a business may have different profit margins in different countries for genuine commercial reasons. However, based on our understanding of the tax structures of these multinationals, it’s likely that significant amounts of profits are booked in low-tax or even zero-tax jurisdictions.

This example suggests that while the Multinational Anti-Avoidance Law is achieving its objectives, it alone is unlikely to be enough.

A digital services tax in Australia

The digital services tax is a turnover tax, not an income tax. This circumvents the restrictions imposed by the current international income tax regime.

The targets of this tax include the revenues large multinationals earn from providing advertising space (for example, Google), trading platforms (for example, eBay) and the transmission of data collected about users (for example, Facebook).

If Australia follows the model of the digital services tax, the new tax may generate a substantial amount of revenue. For example, Google Australia's revenue reported in its 2016 annual report was A$1.1 billion. A 3% tax on that amount would be A$33 million.
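The arithmetic behind that estimate is worth spelling out: a turnover tax applies the rate to revenue, not profit, so low reported margins don't reduce the bill. A minimal sketch (the 3% rate is from the EU proposal and the revenue figure from Google Australia's 2016 annual report, as cited above):

```python
# Illustrative only: a turnover tax is levied on revenue, not profit.
DIGITAL_SERVICES_TAX_RATE = 0.03  # EU-proposed 3% rate

def turnover_tax(revenue_aud: float, rate: float = DIGITAL_SERVICES_TAX_RATE) -> float:
    """Tax payable on turnover (revenue), regardless of profit margin."""
    return revenue_aud * rate

revenue_2016 = 1.1e9  # A$1.1 billion, Google Australia's reported 2016 revenue
print(f"Tax payable: A${turnover_tax(revenue_2016):,.0f}")  # Tax payable: A$33,000,000
```

Note the contrast with an income tax: the A$33 million figure is unaffected by whether the company books a 9% or a 22% net profit margin in Australia.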

Along with the digital services tax proposal, the EU proposed the concept of “significant digital presence” as the long-term solution for the international tax system. The exact details are subject to further consultation. However, the relevant factors may include a company’s annual revenue from digital services, the number of users of such services, and the number of online contracts concluded on the platform.

The fate of this proposal is unclear, and it is likely to be fiercely debated among countries. In any case, the proposals for a digital services tax and a digital presence concept suggest a paradigm shift in tax policymakers' thinking about the challenges posed by the digital economy, and that shift would be difficult, if not impossible, to resist.

Antony Ting, Associate Professor, University of Sydney

This article was originally published on The Conversation. Read the original article.