Seven ways the government can make Australians safer – without compromising online privacy



We need a cyber safety equivalent to the Slip! Slop! Slap! campaign to nudge behavioural change in the community.
Shutterstock

Damien Manuel, Deakin University

This is part of a major series called Advancing Australia, in which leading academics examine the key issues facing Australia in the lead-up to the 2019 federal election and beyond. Read the other pieces in the series here.

When it comes to data security, there is an inherent tension between safety and privacy. The government’s job is to balance these priorities with laws that will keep Australians safe, improve the economy and protect personal data from unwarranted surveillance.

This is a delicate line to walk. Recent debate has revolved around whether technology companies should be required to help law enforcement agencies gain access to the encrypted messages of suspected criminals.

While this is undoubtedly an important issue, the enacted legislation – the Telecommunications and Other Legislation Amendment (Assistance and Access) Act – fails on both fronts. Not only is it unlikely to stop criminals, it could make personal communications between everyday people less secure.

Rather than focus on the passage of high-profile legislation that reflects a misunderstanding of the technology in question, the government would do better to invest in a comprehensive cyber security strategy that will actually have an impact.

Achieving the goals set out in the strategy we already have would be a good place to start.




Read more:
The difference between cybersecurity and cybercrime, and why it matters


Poor progress on cyber security

The Turnbull government launched Australia’s first Cyber Security Strategy in April 2016. It promised to dramatically improve the online safety of all Australian families and businesses.

In 2017, the government released the first annual update to report on how well it was doing. On the surface some progress had been made, but a lot of items were incomplete – and the promised linkages to businesses and the community were not working well.

Unfortunately, there was never a second update. Prime ministers were toppled, cabinets were reshuffled and it appears the Morrison government lost interest in truly protecting Australians.

So, where did it all go wrong?

A steady erosion of privacy

Few Australians paid much notice when vested interests hijacked technology law reforms. The amendment of the Copyright Act in 2015 forced internet service providers (ISPs) to block access to sites containing pirated content. Movie studios now had their own version of China’s “Great Firewall” to block and control internet content in Australia.

In 2017, the government implemented its data retention laws, which effectively enabled specific government agencies to spy on law-abiding citizens. The digital trail (metadata) people left through phone calls, SMS messages, emails and internet activity was retained by telecommunications carriers and made accessible to law enforcement.

The public was assured only limited agencies would have access to the data to hunt for terrorists. In 2018, we learned that many more agencies were accessing the data than originally promised.

Enter the Assistance and Access legislation. Australia’s technology sector strongly objected to the bill, but the Morrison government’s consultation process was a whitewash. The government ignored advice on the damage the legislation would do to the developing cyber sector outlined in the Cyber Security Strategy – the very sector the Turnbull government had been counting on to help rebuild the economy in this hyper-connected digital world.




Read more:
What skills does a cybersecurity professional need?


While the government focuses on the hunt for terrorists, it neglects the thousands of Australians who fall victim each year to international cybercrime syndicates and foreign governments.

Australians lose money to cybercrime via scam emails and phone calls designed to harvest passwords, banking credentials and other personal information. Losses from some categories of cybercrime have increased by more than 70% in the last 12 months. The impact of cybercrime on Australian business and individuals is estimated at $7 billion a year.

So, where should government focus its attention?

Seven actions that would make Australia safer

If the next government is serious about protecting Australian businesses and families, here are seven concrete actions it should take immediately upon taking office.

1. Review the Cyber Security Strategy

Work with industry associations, the business and financial sectors, telecommunication providers, cyber startups, state government agencies and all levels of the education sector to develop a plan to protect Australians and businesses. The plan must be comprehensive, collaborative and, most importantly, inclusive. It should be adopted at the federal level and by states and territories.

2. Make Australians a harder target for cybercriminals

The United Kingdom’s National Cyber Security Centre is implementing technical and process controls that help people in the UK fight cybercrime in smart, innovative ways. The UK’s Active Cyber Defence program uses top-secret intelligence to prevent cyber attacks and to detect and block malicious email campaigns used by scammers. It also investigates how people actually use technology, with the aim of implementing behavioural change programs to improve public safety.
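Much of that work rests on unglamorous, checkable controls. As an illustration only – this is not the NCSC’s actual tooling, and the domain below is a placeholder – here is a minimal Python sketch that checks whether a domain publishes SPF and DMARC records, the DNS policies mail servers use to reject the spoofed senders behind many scam email campaigns.

```python
# Minimal sketch of an anti-spoofing check: does a domain publish SPF and DMARC
# records? Illustrative only -- not the NCSC's tooling; the domain is a placeholder.
import dns.resolver  # pip install dnspython

def get_txt_records(name: str) -> list[str]:
    """Return the TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return ["".join(part.decode() for part in record.strings) for record in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

def check_email_spoofing_controls(domain: str) -> dict:
    """Report whether a domain publishes SPF and DMARC policies."""
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    return {"domain": domain, "has_spf": bool(spf), "has_dmarc": bool(dmarc)}

if __name__ == "__main__":
    # Hypothetical domain -- substitute your own organisation's domain.
    print(check_email_spoofing_controls("example.gov.au"))
```

Running a check like this against your own organisation’s domain is a quick way to see whether the most basic anti-spoofing protections are in place.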

3. Create a community education campaign

A comprehensive community education program would improve online behaviours and make businesses and families safer. We had the iconic Slip! Slop! Slap! campaign from 1981 to help reduce skin cancer through community education. Where is the equivalent campaign for cyber safety to nudge behavioural change in the community at all levels from kids through to adults?

4. Improve cyber safety education in schools

Build digital literacy into education from primary through to tertiary level so that young Australians understand the consequences of their online behaviours. For example, they should know the risks of sharing personal details and nude selfies online.




Read more:
Cybersecurity of the power grid: A growing challenge


5. Streamline industry certifications

Encourage the adoption of existing industry certifications, and stop special interest groups from introducing more. There are already more than 100 industry certifications. Minimum standards for government staff should be defined, including for managers, technologists and software developers.

The United States Defence Department introduced minimum industry certification for people in government who handle data. The Australian government should do the same, making a number of vendor-agnostic certifications mandatory in each job category.

6. Work with small and medium businesses

The existing cyber strategy doesn’t do enough to engage with the business sector. Small and medium businesses form a critical part of the larger business supply-chain ecosystem, so the ramifications of a breach could be far-reaching.

The Australian Signals Directorate recommends businesses follow “The Essential Eight” – a list of strategies businesses can adopt to reduce their risk of cyber attack. This is good advice, but it doesn’t address the human side of exploitation, called social engineering, which tricks people into disclosing passwords that protect sensitive or confidential information.

7. Focus on health, legal and tertiary education sectors

The health, legal and tertiary education sectors have a low level of cyber maturity. These are among the top four sectors reporting breaches, according to the Office of the Australian Information Commissioner.

While health sector breaches could lead to personal harm and blackmail, breaches in the legal sector could result in the disclosure of time-sensitive business transactions and personal details. And the tertiary education sector – a powerhouse of intellectual research – is ripe for foreign governments to steal the knowledge underpinning Australia’s future technologies.

A single person doing the wrong thing or making a mistake can cause a major security breach. More than 900,000 people are employed in the Australian health and welfare sector, and the chance of one of them making a mistake is unfortunately very high.

Damien Manuel, Director, Centre for Cyber Security Research & Innovation (CSRI), Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A state actor has targeted Australian political parties – but that shouldn’t surprise us



Prime Minister Morrison said there was no evidence of electoral interference linked to a hack of the Australian Parliament House computer network.
from www.shutterstock.com

Tom Sear, UNSW

Australia’s political digital infrastructure is a target in an ongoing nation-state cyber competition that falls just below the threshold of open conflict.

Today Prime Minister Scott Morrison made a statement to parliament, saying:

The Australian Cyber Security Centre recently identified a malicious intrusion into the Australian Parliament House computer network.

During the course of this work, we also became aware that the networks of some political parties – Liberal, Labor and the Nationals – have also been affected.




Read more:
‘State actor’ makes cyber attack on Australian political parties


But cyber measures targeting Australian government infrastructure are the “new normal”. What is most unusual about this recent attack is the government’s response.

The new normal

The Australian Signals Directorate (ASD) – which incorporates the Australian Cyber Security Centre (ACSC) – analyses and responds to cyber security threats.

In January, the ASD reported that across three financial years (2015-16 to 2017-18) there were 1,097 cyber incidents affecting unclassified and classified government networks that were “considered serious enough to warrant an operational response.”

These figures include all identified intrusions. The prime minister fingered a “sophisticated state actor” for the activity discussed today.

Cyber power states capable of adopting “sophisticated” measures might include the United States, Israel, Russia, perhaps Iran and North Korea. Suspicion currently falls on China.

Advanced persistent threats

Cyber threat actors with such abilities are often identified by handles known as Advanced Persistent Threats, or APTs.

An APT is a group with a style. They are identifiable by the type of malware (malicious software) they like to deploy, their methods and even their working hours.

For example, APT28 is associated with Russian measures to interfere in the 2016 US election.

Some APTs have even been publicly traced by cyber security companies to specific buildings in China.

APT1, or Unit 61398, may be linked to the intrusions against the Australian Bureau of Meteorology and possibly the Melbourne International Arts Festival. Unit 61398 has been traced to a nondescript office building in Shanghai.

The “advanced” in APT refers to the “sophistication” mentioned by the PM.




Read more:
How we trace the hackers behind a cyber attack


New scanning tool released

The ACSC today publicly released a “scanning tool, configured to search for known malicious web shells that we have encountered in this investigation.”

The release supports this being called a state-sponsored intrusion. A web shell is a form of malware and an exploitation vector often used by APTs. It is uploaded to a web server remotely, giving an adversary a foothold from which to escalate privileges, issue commands and compromise the wider network.
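The ACSC’s tool isn’t reproduced here, but the general idea behind a web shell scanner is straightforward: walk a web server’s document root and flag files that match known-bad signatures. The sketch below is illustrative only – the hash and patterns are placeholders, not real indicators of compromise.

```python
# Minimal sketch of a signature-based web shell scanner. NOT the ACSC tool;
# the hash and patterns below are placeholders, not genuine indicators of compromise.
import hashlib
import pathlib

KNOWN_BAD_SHA256 = {
    # Hypothetical placeholder hash -- a real scanner ships current threat intelligence.
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
}
SUSPICIOUS_PATTERNS = [b"eval(base64_decode(", b"cmd.exe /c"]

def scan_webroot(root: str) -> None:
    """Flag files under the web root that match known-bad hashes or generic indicators."""
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
            print(f"[HASH MATCH] {path}")
        elif any(pattern in data for pattern in SUSPICIOUS_PATTERNS):
            print(f"[PATTERN MATCH] {path}")

if __name__ == "__main__":
    scan_webroot("/var/www/html")  # assumed web root; adjust for your server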

One well-known shell, “China Chopper”, is delivered by a small web application and can then be used to “brute force” passwords against an authentication portal.

If such malware was used in this incident, that would explain why politicians and those working at Australian Parliament House were asked to change their passwords.

Journalism and social media coverage of incidents such as these pivot on speculation about whether an adversary state is responsible, and which one it might be.

Malware and the way it is deployed are close to a signature for an APT, and require teams to deliver and subsequently monitor them. That the ACSC has released such a specific scanning tool is a clue as to why it and the prime minister can make such claims.

An intrusion of Australian Parliament House is symbolically powerful, but whether any actual data was taken at an unclassified level might not be of great intelligence import.

The prime minister’s announcement today suggests Australian political parties have been exposed.

How elections are hacked

In 2018 I detailed how there are a few options for an adversary seeking to “hack” an election.




Read more:
If it ain’t broke, don’t fix it: Australia should stay away from electronic voting


The first is to “go loud” and undermine the public’s belief in the players, the process, or the outcome itself. This might involve stealing information from a major party, for example, and then anonymously leaking it.

Or it might mean attacking and changing the data held by the Australian Electoral Commission or the electoral rolls each party holds. This would force the agency to publicly admit a concern, which in turn would undermine confidence in the system.

This is likely why today the prime minister said in his statement:

I have instructed the Australian Cyber Security Centre to be ready to provide any political party or electoral body in Australia with immediate support, including making their technical experts available.

They have already briefed the Electoral Commissions and those responsible for cyber security for all states and territories.

They have also worked with global anti-virus companies to ensure Australia’s friends and allies have the capacity to detect this malicious activity.

Vulnerability of political parties

Opposition Leader Bill Shorten’s response alluded to what might be another concern of our security and electoral agencies. He said:

… our party political structures perhaps are more vulnerable. Political parties are small organisations with only a few full-time staff, they collect, store and use large amounts of information about voters and communities.

I have previously suggested the real risk to any election is the manipulation of social media, and a more successful and secretive campaign to alter the outcome of the Australian election might focus on a minor party.

An adversary could steal the membership and donor database and electoral roll of a party with poor security, locate the social media accounts of those people, and then slowly use social media manipulations to influence an active, vocal group of voters.

Shades of grey

This is unlikely to have been the first attempt by a “sophisticated state actor” to target networks of Australian political parties. It’s best not to consider such intrusions as if they “did or didn’t work.”

There are shades of grey.

Adversaries clearly penetrated a key network and then leveraged access into others. But the duration of such a presence or whether they are even still in a network is challenging to ascertain. Equally, the government has not suggested data has been removed.

Detecting the intrusion without losing data may be a result of improved security awareness at Parliament House and in party networks. The government and its administration have been taking action.

The Department of Parliamentary Services – that supplies ICT to parliament house – has improved security in “network design changes to harden the internal ICT network against cyber attack”.

This month a Joint Committee opened a new inquiry into government resilience following a report from the National Audit Office last year which found “relatively low levels of effectiveness of Commonwealth entities in managing cyber risks”.

Government response is what’s new

As the ASD’s figures and my own observations suggest, this is likely not the first intrusion of this kind – though it may involve an APT with more “sophisticated” malware than previous attempts. But the government’s response, and the fallout from it, is certainly new.

What is increasingly clear is that attribution has become more feasible and, especially within the Five Eyes intelligence alliance – Australia, Britain, Canada, New Zealand and the United States – more common.

Sometimes in cyber security it’s challenging to tell the difference between the signal and the noise. The persistent presence of Russian-sponsored trolls in Australian online politics, the blurring of digital borders with China and cyber-enabled threats to our democratic infrastructure: these are not new.

Australia is not immune to the new immersive information war. Digital border protection might yet become an issue in the 2019 election. In addition to raising concerns, our politicians and cyber security agencies will need to develop a strong and clear strategic communication approach to both the Australian public and our adversaries as these incidents escalate.

Tom Sear, PhD Candidate, UNSW Canberra Cyber, Australian Defence Force Academy, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘State actor’ makes cyber attack on Australian political parties



While the government has not identified the state actor, China is being blamed.
Shutterstock

Michelle Grattan, University of Canberra

“A sophisticated state actor” has hacked the networks of the major political parties, Prime Minister Scott Morrison has told Parliament.

Recently the Parliament House network was disrupted, and the intrusion into the parties’ networks was discovered when this was being dealt with.

While the government has not identified the “state actor”, the Chinese are being blamed.

Morrison gave the reassurance that “there is no evidence of any electoral interference. We have put in place a number of measures to ensure the integrity of our electoral system”.

In his statement to the House Morrison said: “The Australian Cyber Security Centre recently identified a malicious intrusion into the Australian Parliament House computer network.

“During the course of this work, we also became aware that the networks of some political parties – Liberal, Labor and the Nationals – have also been affected.

“Our security agencies have detected this activity and acted decisively to confront it. They are securing these systems and protecting users”.

The Centre would provide any party or electoral body with technical help to deal with hacking, Morrison said.

“They have already briefed the Electoral Commissions and those responsible for cyber security for all states and territories. They have also worked with global anti-virus companies to ensure Australia’s friends and allies have the capacity to detect this malicious activity,” he said.

“The methods used by malicious actors are constantly evolving and this incident reinforces yet again the importance of cyber security as a fundamental part of everyone’s business.

“Public confidence in the integrity of our democratic processes is an essential element of Australian sovereignty and governance,” he said.

“Our political system and our democracy remains strong, vibrant and is protected. We stand united in the protection of our values and our sovereignty”.

Bill Shorten said party political structures were perhaps more vulnerable than government institutions – and progressive parties particularly so.

“We have seen overseas that it is progressive parties that are more likely to be targeted by ultra-right wing organisations.

“Political parties are small organisations with only a few full-time staff, they collect, store and use large amounts of information about voters and communities. These institutions can be a soft target and our national approach to cyber security needs to pay more attention to non-government organisations,” Shorten said.

Although the authorities are pointing to a “state actor”, national cyber security adviser Alastair MacGibbon told a news conference: “We don’t know who is behind this, nor their intent.

“We, of course, will continue to work with our friends and colleagues, both here and overseas, to work out who is behind it and hopefully their intent”.

Asked what the hackers had got their hands on, MacGibbon said: “We don’t know”.

Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Don’t click that link! How criminals access your digital devices and what happens when they do



A link is a mechanism for data to be delivered to your device.
Unsplash/Marvin Tolentino

Richard Matthews, University of Adelaide and Kieren Niĉolas Lovell, Tallinn University of Technology

Every day, often multiple times a day, you are invited to click on links sent to you by brands, politicians, friends and strangers. You download apps on your devices. Maybe you use QR codes.

Most of these activities are secure because they come from sources that can be trusted. But sometimes criminals impersonate trustworthy sources to get you to click on a link (or download an app) that contains malware.

At its core, a link is just a mechanism for data to be delivered to your device. Code can be built into a website which redirects you to another site and downloads malware to your device en route to your actual destination.

When you click on unverified links or download suspicious apps you increase the risk of exposure to malware. Here’s what could happen if you do – and how you can minimise your risk.




Read more:
How suppliers of everyday devices make you vulnerable to cyber attack – and what to do about it


What is malware?

Malware is defined as malicious code that:

will have adverse impact on the confidentiality, integrity, or availability of an information system.

In the past, malware described malicious code that took the form of viruses, worms or Trojan horses.

Viruses embedded themselves in genuine programs and relied on these programs to propagate. Worms were generally stand-alone programs that could install themselves using a network, USB or email program to infect other computers.

Trojan horses took their name from the wooden horse the Greeks left for the Trojans during the Trojan war, as told in Homer’s Odyssey. Much like that wooden horse, a Trojan horse looks like a normal file until some predetermined action causes the code to execute.

Today’s generation of attacker tools are far more sophisticated, and are often a blend of these techniques.

These so-called “blended attacks” rely heavily on social engineering – the ability to manipulate someone into doing something they wouldn’t normally do – and are often categorised by what they ultimately will do to your systems.

What does malware do?

Today’s malware comes in easy-to-use, customised toolkits distributed on the dark web or by well-meaning security researchers attempting to fix problems.

With a click of a button, attackers can use these toolkits to send phishing emails and spam SMS messages to deploy various types of malware. Here are some of them.


  • a remote administration tool (RAT) can be used to access a computer’s camera and microphone, and to install other types of malware

  • keyloggers can be used to monitor for passwords, credit card details and email addresses

  • ransomware is used to encrypt private files and then demand payment in return for the password

  • botnets are used for distributed denial of service (DDoS) attacks and other illegal activities. DDoS attacks can flood a website with so much virtual traffic that it shuts down, much like a shop being filled with so many customers you are unable to move.

  • cryptominers will use your computer hardware to mine cryptocurrency, which will slow your computer down

  • hijacking or defacement attacks are used to deface a site or embarrass you by posting pornographic material to your social media

An example of a defacement attack on The Utah Office of Tourism Industry from 2017.
Wordfence



Read more:
Everyone falls for fake emails: lessons from cybersecurity summer school


How does malware end up on your device?

According to insurance claim data of businesses based in the UK, over 66% of cyber incidents are caused by employee error. Although the data attributes only 3% of these attacks to social engineering, our experience suggests the majority of these attacks would have started this way.

For example, by employees not following dedicated IT and information security policies, not being informed of how much of their digital footprint has been exposed online, or simply being taken advantage of. Merely posting what you are having for dinner on social media can open you up to attack from a well-trained social engineer.

QR codes are just as risky if users open the link a code points to without first checking where it leads, as indicated by this 2012 study.

Even opening an image in a web browser and running a mouse over it can lead to malware being installed. This is quite a useful delivery tool considering the advertising material you see on popular websites.

Fake apps have also been discovered on both the Apple and Google Play stores. Many of these attempt to steal login credentials by mimicking well known banking applications.

Sometimes malware is placed on your device by someone who wants to track you. In 2010, the Lower Merion School District settled two lawsuits brought against them for violating students’ privacy and secretly recording using the web camera of loaned school laptops.

What can you do to avoid it?

In the case of the Lower Merion School District, students and teachers suspected they were being monitored because they “saw the green light next to the webcam on their laptops turn on momentarily.”

While this is a great indicator, many hacker tools will ensure webcam lights are turned off to avoid raising suspicion. On-screen cues can give you a false sense of security, especially if you don’t realise that the microphone is always being accessed for verbal cues or other forms of tracking.

Facebook CEO Mark Zuckerberg covers the webcam of his computer. It’s commonplace to see information security professionals do the same.
iphonedigital/flickr

Basic awareness of the risks in cyberspace will go a long way towards mitigating them. This is called cyber hygiene.

Using good, up-to-date virus and malware scanning software is crucial. However, the most important tip is to update your device to ensure it has the latest security updates.

Hover over links in an email to see where you are really going. Avoid shortened links, such as bit.ly and QR codes, unless you can check where the link is going by using a URL expander.
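A shortened link can also be expanded without visiting it: a script can follow the redirects server-side and report the final destination before you decide to click. The sketch below uses Python’s requests library; the short link shown is a placeholder, not a real destination.

```python
# Minimal sketch of a URL "expander": follow a shortened link's redirects and
# print the final destination, without rendering the page in a browser.
import requests

def expand_url(short_url: str) -> str:
    """Return the final URL a shortened link resolves to."""
    response = requests.head(short_url, allow_redirects=True, timeout=10)
    return response.url

if __name__ == "__main__":
    print(expand_url("https://bit.ly/example"))  # hypothetical short link
```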

What to do if you already clicked?

If you suspect you have malware on your system, there are simple steps you can take.

Open your webcam application. If you can’t access the device because it is already in use this is a telltale sign that you might be infected. Higher than normal battery usage or a machine running hotter than usual are also good indicators that something isn’t quite right.

Make sure you have good anti-virus and anti-malware software installed. Tools such as Malwarebytes and Seguru can be installed on your phone as well as your desktop to provide real-time protection. If you are running a website, make sure you have good security installed. Wordfence works well for WordPress blogs.

More importantly though, make sure you know how much data about you has already been exposed. Google yourself – including a Google image search against your profile picture – to see what is online.

Check all your email addresses on the website haveibeenpwned.com to see whether your passwords have been exposed. Then make sure you never reuse those passwords on other services. Basically, treat them as compromised.
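Have I Been Pwned also offers a Pwned Passwords service that can be queried without sending your actual password anywhere: only the first five characters of the password’s SHA-1 hash leave your machine, a technique known as k-anonymity. A minimal sketch, assuming the range endpoint behaves as publicly documented:

```python
# Minimal sketch of checking a password against the Pwned Passwords range API
# using k-anonymity: only the first five characters of the SHA-1 hash are sent.
import hashlib
import requests

def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if never)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(password_breach_count("correct horse battery staple"))
```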

Cyber security has technical aspects, but remember: any attack that doesn’t affect a person or an organisation is just a technical hitch. Cyber attacks are a human problem.

The more you know about your own digital presence, the better prepared you will be. All of our individual efforts better secure our organisations, our schools, and our family and friends.

Richard Matthews, Lecturer Entrepreneurship, Commercialisation and Innovation Centre | PhD Candidate in Image Forensics and Cyber | Councillor, University of Adelaide and Kieren Niĉolas Lovell, Head of TalTech Computer Emergency Response Team, Tallinn University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Online trolling used to be funny, but now the term refers to something far more sinister



The definition of “trolling” has changed a lot over the last 15 years.
Shutterstock

Evita March, Federation University Australia

It seems like internet trolling happens everywhere online these days – and it’s showing no signs of slowing down.

This week, the British press and Kensington Palace officials have called for an end to the merciless online trolling of Duchesses Kate Middleton and Meghan Markle, which reportedly includes racist and sexist content, and even threats.

But what exactly is internet trolling? How do trolls “behave”? Do they intend to harm, or amuse?

To find out how people define trolling, we conducted a survey with 379 participants. The results suggest there is a difference in the way the media, the research community and the general public understand trolling.

If we want to reduce abusive online behaviour, let’s start by getting the definition right.




Read more:
How empathy can make or break a troll


Which of these cases is trolling?

Consider the comments that appear in the image below:


Screenshot

Without providing any definitions, we asked if this was an example of internet trolling. Of participants, 44% said yes, 41% said no and 15% were unsure.

Now consider this next image:


Screenshot

Of participants, 69% said this was an example of internet trolling, 16% said no, and 15% were unsure.

These two images depict very different online behaviour. The first image depicts mischievous and comical behaviour, where the author perhaps intended to amuse the audience. The second image depicts malicious and antisocial behaviour, where the author may have intended to cause harm.

There was more consensus among participants that the second image depicted trolling. That aligns with a more common definition of internet trolling as destructive and disruptive online behaviour that causes harm to others.

But this definition has only really evolved in more recent years. Previously, internet trolling was defined very differently.




Read more:
We researched Russian trolls and figured out exactly how they neutralise certain news


A shifting definition

In 2002, one of the earliest definitions of internet “trolling” described the behaviour as:

luring others online (commonly on discussion forums) into pointless and time-consuming activities.

Trolling often started with a message that was intentionally incorrect, but not overly controversial. By contrast, internet “flaming” described online behaviour with hostile intentions, characterised by profanity, obscenity, and insults that inflict harm to a person or an organisation.

So, modern day definitions of internet trolling seem more consistent with the definition of flaming, rather than the initial definition of trolling.

To highlight this intention to amuse compared to the intention to harm, communication researcher Jonathan Bishop suggested we differentiate between “kudos trolling” to describe trolling for mutual enjoyment and entertainment, and “flame trolling” to describe trolling that is abusive and not intended to be humorous.

How people in our study defined trolling

In our study, which has been accepted to be published in the journal Cyberpsychology, Behavior, and Social Networking, we recruited 379 participants (60% women) to answer an online, anonymous questionnaire where they provided short answer responses to the following questions:

  • how do you define internet trolling?

  • what kind of behaviours constitute internet trolling?

Here are some examples of how participants responded:

Where an individual online verbally attacks another individual with intention of offending the other (female, 27)

People saying intentionally provocative things on social media with the intent of attacking / causing discomfort or offence (female, 26)

Teasing, bullying, joking or making fun of something, someone or a group (male, 29)

Deliberately commenting on a post to elicit a desired response, or to purely gratify oneself by emotionally manipulating another (male, 35)

Based on participant responses, we suggest that internet trolling is now more commonly seen as an intentional, malicious online behaviour, rather than a harmless activity for mutual enjoyment.

A word cloud representing how survey participants described trolling behaviours.

Researchers use ‘trolling’ as a catch-all

Clearly there are discrepancies in the definition of internet trolling, and this is a problem.

Research does not differentiate between kudos trolling and flame trolling. Some members of the public might still view trolling as a kudos behaviour. For example, one participant in our study said:

Depends which definition you mean. The common definition now, especially as used by the media and within academia, is essentially just a synonym to “asshole”. The better, and classic, definition is someone who speaks from outside the shared paradigm of a community in order to disrupt presuppositions and try to trigger critical thought and awareness (male, 41)

Not only does the definition of trolling differ from researcher to researcher, but there can also be discrepancy between the researcher and the public.

As a term, internet trolling has significantly deviated from its early, 2002 definition and become a catch-all for all antisocial online behaviours. The lack of a uniform definition of internet trolling leaves all research on trolling open to validity concerns, which could leave the behaviour remaining largely unchecked.




Read more:
Our experiments taught us why people troll


We need to agree on the terminology

We propose replacing the catch-all term of trolling with “cyberabuse”.

Cyberbullying, cyberhate and cyberaggression are all different online behaviours with different definitions, but they are often referred to uniformly as “trolling”.

It is time to move away from the term trolling to describe these serious instances of cyberabuse. While it may have been empowering for the public to picture these internet “trolls” as ugly creatures living under the bridge, this imagery may have begun to downplay the seriousness of their online behaviour.

Continuing to use the term trolling, a term that initially described a behaviour that was not intended to harm, could have serious consequences for managing and preventing the behaviour.

Evita March, Senior Lecturer in Psychology, Federation University Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The future of the internet looks brighter thanks to an EU court opinion



What is illegal in one country may be perfectly legal in all other countries.
Shutterstock

Dan Jerker B. Svantesson, Bond University

Imagine an internet where you couldn’t access any content unless it complied with every law of all the countries in the world.

In this scenario, you would be prevented from expressing views that were critical of many of the world’s dictatorships. You would not be able to question aspects of some religions due to blasphemy laws. And some of the photos you post of your children would be illegal.

A development like this is not as far-fetched as it currently may seem.

Every country wants its laws respected online. The scenario above may be an unavoidable outcome if countries are successful in seeking to impose their laws globally. Even though they can’t prosecute the person who posted the content, they can try to force the internet platforms that host the content to remove or block it.

A legal opinion released last week in a case currently before the courts in the European Union argues content should generally only be blocked in countries where it breaches the law, not globally. This is a sensible approach, and a necessity if we wish to continue to enjoy the benefits currently offered by the internet.




Read more:
Country rules: the ‘splinternet’ may be the future of the web


A trend of global orders

There have been numerous examples of courts seeking to impose their content restrictions globally by ordering the major internet platforms to remove or block access to specific content.

The most recent high profile case is a 2017 decision by the Supreme Court of Canada, in which the court sought to compel Google to block certain search results globally. That dispute is still ongoing after a US court sided with Google.

Courts in Australia and the United States have also opted for global content restrictions, without regard for the impact on internet users in other countries. For example, in the Australian case, Justice Pembroke ordered Twitter to block all future postings globally – regardless of topic – by a particular Twitter user.

This is troubling. After all, what is illegal in one country may be perfectly legal in all other countries. Why should the harshest laws determine what can be posted online? Why should duties imposed by one country trump rights afforded to us by the laws in many other countries – particularly international human rights laws?




Read more:
Innocence or arrogance? US court oversteps on internet regulation


The Google France case

The latest case to address this question is an ongoing dispute in the EU. The French data protection authority (CNIL) sought to force search engines to remove search results (known as de-referencing) globally where those results violate the EU’s so-called “right to be forgotten” legislation.

The right to be forgotten is an aspect of the EU’s data privacy law that, in simplified terms, gives people the right to have online content blocked on search engines, where the content is no longer relevant.

Google disputed this and the matter has reached the EU’s highest court – the Court of Justice of the European Union (CJEU). On 10 January 2019, an Advocate General of the court issued his opinion on the matter (so far only available in French). Such opinions are not binding on the court, but the judgment often follows the reasoning of the Advocate General. The judges are now beginning their deliberations in this case and their judgment will be given at a later date.

In his opinion, the Advocate General concluded that, in relation to the right to be forgotten, search engines:

…must take every measure available to it to ensure full and effective de-referencing within the EU.

He went on to say that de-referencing of the search results should only apply inside the EU.

But he didn’t rule out the possibility that:

…in certain situations, a search engine operator may be required to take de-referencing actions at the worldwide level.

This is similar to a nuanced approach advocated for by the Swedish data protection authority in a parallel case currently before the Swedish courts.




Read more:
Google court ruling creates a more forgetful internet


The significance

If the EU court adopts the approach of the Canadian Supreme Court and seeks to impose EU law globally, many other countries – including repressive dictatorships – are likely to view this as a “green light” to impose their laws globally.

But if the EU court adopts the more measured approach proposed in the Advocate General’s opinion, we may see a reversal of the current dangerous trend of global content restriction orders.

It may be months until we see the final judgment. But the stakes are high and the future of the internet, as we know it, hangs in the balance.

Dan Jerker B. Svantesson, Co-Director Centre for Commercial Law, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Racism in a networked world: how groups and individuals spread racist hate online



We could see even sharper divisions in society in the future if support for racism spreads online.
Markus Spiske/Unsplash

Ana-Maria Bliuc, Western Sydney University; Andrew Jakubowicz, University of Technology Sydney, and Kevin Dunn, Western Sydney University

Living in a networked world has many advantages. We get our news online almost as soon as it happens, we stay in touch with friends via social media, and we advance our careers through online professional networks.

But there is a darker side to the internet that sees far-right groups exploit these unique features to spread divisive ideas, racial hate and mistrust. Scholars of racism refer to this type of racist communication online as “cyber-racism”.

Even the creators of the internet are aware they may have unleashed a technology that is causing a lot of harm. Since 2017, the inventor of the World Wide Web, Tim Berners-Lee, has focused many of his comments about the dangers of manipulation of the internet around the spread of hate speech, saying that:

Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken.

Our team conducted a systematic review of ten years of cyber-racism research to learn how different types of communicators use the internet to spread their views.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Racist groups behave differently to individuals

We found that the internet is indeed a powerful tool used to influence and reinforce divisive ideas. And it’s not only organised racist groups that take advantage of online communication; unaffiliated individuals do it too.

But the way groups and individuals use the internet differs in several important ways. Racist groups are active on different communication channels to individuals, and they have different goals and strategies they use to achieve them. The effects of their communication are also distinctive.

Individuals mostly engage in cyber-racism to hurt others, and to confirm their racist views by connecting with like-minded people (seeking “confirmation bias”). Their preferred communication channels tend to be blogs, forums, news commentary websites, gaming environments and chat rooms.

Channels, goals and strategies used by unaffiliated people when communicating cyber-racism.

Strategies they use include denying or minimising the issue of racism, denigrating “non-whites”, and reframing the meaning of current news stories to support their views.

Groups, on the other hand, prefer to communicate via their own websites. They are also more strategic in what they seek to achieve through online communication. They use websites to gather support for their group and their views through racist propaganda.

Racist groups manipulate information and use clever rhetoric to help build a sense of a broader “white” identity, which often goes beyond national borders. They argue that conflict between different ethnicities is unavoidable, and that what most would view as racism is in fact a natural response to the “oppression of white people”.

Channels, goals and strategies used by groups when communicating cyber-racism.




Read more:
How the alt-right uses milk to promote white supremacy


Collective cyber-racism has the main effect of undermining the social cohesion of modern multicultural societies. It creates division, mistrust and intergroup conflict.

Meanwhile, individual cyber-racism seems to have a more direct effect by negatively affecting the wellbeing of targets. It also contributes to maintaining a hostile racial climate, which may further (indirectly) affect the wellbeing of targets.

What they have in common

Despite their differences, groups and individuals both share a high level of sophistication in how they communicate racism online. Our review uncovered the disturbingly creative ways in which new technologies are exploited.

For example, racist groups make themselves attractive to young people by providing interactive games and links to music videos on their websites. And both groups and individuals are highly skilled at manipulating their public image via various narrative strategies, such as humour and the interpretation of current news to fit with their arguments.




Read more:
Race, cyberbullying and intimate partner violence


A worrying trend

Our findings suggest that if these online strategies are effective, we could see even sharper divisions in society as the mobilisation of support for racism and far-right movements spreads online.

There is also evidence that currently unaffiliated supporters of racism could derive strength through online communication. These individuals might use online channels to validate their beliefs and achieve a sense of belonging in virtual spaces where racist hosts provide an uncontested and hate-supporting community.

This is a worrying trend. We have now seen several examples of violent action perpetrated offline by isolated individuals who radicalise into white supremacist movements – for example, in the case of Anders Breivik in Norway, and more recently of Robert Gregory Bowers, who was the perpetrator of the Pittsburgh synagogue shooting.

In Australia, unlike most other liberal democracies, there are effectively no government strategies that seek to reduce this avenue for the spread of racism, despite many Australians expressing a desire that this be done.

Ana-Maria Bliuc, Senior Lecturer in Social Psychology, Western Sydney University; Andrew Jakubowicz, Emeritus Professor of Sociology, University of Technology Sydney, and Kevin Dunn, Dean of the School of Social Science and Psychology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five projects that are harnessing big data for good



Often the value of data science lies in the work of joining the dots.
Shutterstock

Arezou Soltani Panah, Swinburne University of Technology and Anthony McCosker, Swinburne University of Technology

Data science has boomed over the past decade, following advances in mathematics, computing capability, and data storage. Australia’s Industry 4.0 taskforce is busy exploring ways to improve the Australian economy with tools such as artificial intelligence, machine learning and big data analytics.

But while data science offers the potential to solve complex problems and drive innovation, it has often come under fire for unethical use of data or unintended negative consequences – particularly in commercial cases where people become data points in annual company reports.

We argue that the data science boom shouldn’t be limited to business insights and profit margins. When used ethically, big data can help solve some of society’s most difficult social and environmental problems.

Industry 4.0 should be underwritten by values that ensure these technologies are trained towards the social good (known as Society 4.0). That means using data ethically, involving citizens in the process, and building social values into the design.

Here are five data science projects that are putting these principles into practice.




Read more:
The future of data science looks spectacular


1. Finding humanitarian hot spots

Social and environmental problems are rarely easy to solve. Take the hardship and distress in rural areas due to the long-term struggle with drought. Australia’s size and the sheer number of people and communities involved make it difficult to pair those in need with support and resources.

Our team joined forces with the Australian Red Cross to figure out where the humanitarian hot spots are in Victoria. We used social media data to map everyday humanitarian activity to specific locations and found that the hot spots of volunteering and charity activity are located in and around Melbourne CBD and the eastern suburbs. These kinds of insights can help local aid organisations channel volunteering activity in times of acute need.

Distribution of humanitarian actions across inner Melbourne and local government areas. Blue dots and red dots represent scraped Instagram posts around the hashtags #volunteer and #charity.
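At heart, the underlying analysis is an aggregation problem: attach each geocoded post to a local government area and count posts per hashtag. A minimal sketch of that step follows; the input file and column names are hypothetical, not the project’s actual dataset.

```python
# Minimal sketch of counting geocoded #volunteer and #charity posts per local
# government area (LGA). File and column names are hypothetical.
import pandas as pd

posts = pd.read_csv("instagram_posts.csv")  # columns: post_id, hashtag, lga_name

hotspots = (
    posts[posts["hashtag"].isin(["#volunteer", "#charity"])]
    .groupby(["lga_name", "hashtag"])
    .size()                      # posts per (LGA, hashtag) pair
    .unstack(fill_value=0)       # one column per hashtag
    .sort_values(by="#volunteer", ascending=False)
)
print(hotspots.head(10))  # the ten LGAs with the most volunteering posts
```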

2. Improving fire safety in homes

Accessing data – the right data, in the right form – is a constant challenge for data science. We know that house fires are a serious threat, and that fire and smoke alarms save lives. Targeting houses without fire alarms can help mitigate that risk. But there is no single reliable source of information to draw on.

In the United States, Enigma Labs built open data tools to model and map risk at the level of individual neighbourhoods. To do this effectively, their model combines national census data with a geocoder tool (TIGER), as well as analytics based on local fire incident data, to provide a risk score.

Fire fatality risk scores calculated at the level of Census block groups.
Enigma Labs
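A simplified version of this kind of scoring – not Enigma Labs’ actual model; the file names, columns and weights below are made up for illustration – joins census attributes to incident counts and ranks areas by a weighted combination:

```python
# Minimal sketch of a per-area fire risk score combining census data with
# historical incident data. Files, columns and weights are hypothetical.
import pandas as pd

census = pd.read_csv("census_block_groups.csv")   # block_group_id, households, pct_older_homes
incidents = pd.read_csv("fire_incidents.csv")     # block_group_id, incident_count

merged = census.merge(incidents, on="block_group_id", how="left").fillna({"incident_count": 0})
merged["incidents_per_1000_households"] = 1000 * merged["incident_count"] / merged["households"]

# Toy risk score: weight the historical incident rate and the age of the housing stock.
merged["risk_score"] = (
    0.7 * merged["incidents_per_1000_households"].rank(pct=True)
    + 0.3 * merged["pct_older_homes"].rank(pct=True)
)
print(merged.sort_values("risk_score", ascending=False).head())
```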

3. Mapping police violence in the US

Ordinary citizens can be involved in generating social data. There are many crowdsourced, open mapping projects, but often the value of data science lies in the work of joining the dots.

The Mapping Police Violence project in the US monitors, makes sense of, and visualises police violence. It draws on three crowdsourced databases, but also fills in the gaps using a mix of social media, obituaries, criminal records databases, police reports and other sources of information. By drawing all this information together, the project quantifies the scale of the problem and makes it visible.

A visualisation of the frequency of police violence in the United States.
Mapping Police Violence
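“Joining the dots” here is largely record linkage: combining overlapping databases and removing duplicate reports of the same incident. A minimal sketch, with hypothetical file and column names rather than the project’s real sources:

```python
# Minimal sketch of merging several crowdsourced incident databases and dropping
# near-duplicate records. File and column names are hypothetical.
import pandas as pd

sources = ["database_a.csv", "database_b.csv", "database_c.csv"]
frames = [pd.read_csv(path, parse_dates=["date"]) for path in sources]  # name, date, city, state

combined = pd.concat(frames, ignore_index=True)
combined["name"] = combined["name"].str.strip().str.lower()  # normalise before matching

# Treat records with the same name, date and state as the same incident.
deduplicated = combined.drop_duplicates(subset=["name", "date", "state"])
print(f"{len(combined)} raw records -> {len(deduplicated)} unique incidents")
```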



Read more:
Data responsibility: a new social good for the information age


4. Optimising waste management

The Internet of Things is made up of a host of connected devices that collect data. When embedded in the ordinary objects all around us, and combined with cloud-based analysis and computing, these objects become smart – and can help solve problems or inefficiencies in the built environment.

If you live in Melbourne, you might have noticed BigBelly bins around the CBD. These smart bins have solar-powered trash compactors that regularly compress the garbage inside throughout the day. This eliminates waste overflow and reduces unnecessary carbon emissions, with an 80% reduction in waste collection.

Real-time data analysis and reporting is provided by a cloud-based data management portal, known as CLEAN. The tool identifies trends in waste overflow, which helps with bin placement and planning of collection services.

BigBelly bins are being used in Melbourne’s CBD.
Kevin Zolkiewicz/Flickr, CC BY-NC

5. Identifying hotbeds of street harassment

A group of four women – and many volunteer supporters – in Egypt developed HarassMap to engage with, and inform, the community in an effort to reduce sexual harassment. The platform they built uses anonymised, crowdsourced data to map harassment incidents that occur in the street in order to alert its users of potentially unsafe areas.

The challenge for the group was to provide a means for generating data for a problem that was itself widely dismissed. Mapping and informing are essential data science techniques for addressing social problems.

Mapping of sexual harassment reported in Egypt.
HarassMap



Read more:
Cambridge Analytica’s closure is a pyrrhic victory for data privacy


Building a better society

Turning the efforts of data science to social good isn’t easy. Those with the expertise have to be attuned to the social impact of data analytics. Meanwhile, access to data, or linking data across sources, is a major challenge – particularly as data privacy becomes an increasing concern.

While the mathematics and algorithms that drive data science appear objective, human factors often combine to embed biases, which can result in inaccurate modelling. Limited digital and data literacy, along with a lack of transparency in methodology, combine to raise mistrust in big data and analytics.

Nonetheless, when put to work for social good, data science can provide new sources of evidence to assist government and funding bodies with policy, budgeting and future planning. This can ultimately result in a better connected and more caring society.

Arezou Soltani Panah, Postdoc Research Fellow (Social Data Scientist), Swinburne University of Technology and Anthony McCosker, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The internet has done a lot, but so far little for economic growth


The internet is everywhere, except in the economic growth figures.
Shutterstock

Chris Doucouliagos, Deakin University and Tom Stanley, Deakin University

The internet is transforming every aspect of our lives. It has become indispensable. But, so far, according to a new meta-analysis we have published in the Journal of Economic Surveys, the internet has done next to nothing for economic growth.

Vast resources have been thrown at information and communication technologies. Yet despite exponential growth in ICT and its integration into almost all aspects of our lives, economic growth is not demonstrably faster (and at the moment is demonstrably slower) than it was beforehand.

As Nobel Prize-winning economist Robert Solow famously put it, “you can see the computer age everywhere but in the productivity statistics.”




Read more:
What is 5G? The next generation of wireless, explained


This productivity paradox has caused angst and raised questions about whether the trillions invested in ICT could have been better invested elsewhere.

Our study of studies

We reassessed ICT through a meta-analysis of 59 econometric studies incorporating 466 different observations in both developed and developing countries. We divided ICT into three categories: computing, mobile and landline telephone connections, and the internet. For developed countries, we found that computing had had a moderate impact on growth. Mobile and landline telephone technologies also had a small effect.
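A meta-analysis pools the effect sizes reported by individual studies, typically weighting each estimate by its precision. The sketch below shows the basic inverse-variance (fixed-effect) calculation with made-up numbers – an illustration of the method only, not the estimates in our paper, which also deal with between-study differences and publication bias.

```python
# Minimal sketch of pooling study-level effect sizes with inverse-variance
# weights (fixed-effect form). The numbers are made up for illustration.
import numpy as np

# Hypothetical per-study estimates of the growth effect of internet adoption,
# with their standard errors.
effects = np.array([0.02, -0.01, 0.00, 0.03, -0.02])
std_errors = np.array([0.015, 0.020, 0.010, 0.025, 0.018])

weights = 1.0 / std_errors**2                      # more precise studies count more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.4f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.4f} to {pooled + 1.96 * pooled_se:.4f})")
```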




Read more:
How landline phones made us happy and connected


But the internet has had no effect, at least not as far as can be ascertained from the research to date.

The promise not yet delivered

Ever since the Industrial Revolution, innovation and technological change have driven rising productivity and economic growth.

Information and communications technologies ought to follow in those footsteps.

Instead, productivity growth in US manufacturing has slid from 2% per year between 1992 and 2004 to minus 0.3% per year between 2005 and 2016.

Where ICT innovations do lead to an increase in productivity, it’s often a one-off boost rather than an ongoing increase year after year.

Where the internet sends us backwards

More disquieting, there is some evidence suggesting that rather than contributing to economic performance, some parts of ICT can harm it.

The internet can be an enabler of procrastination. Cyberslacking can take up to three hours of work a day.

It isn’t all bad. Many of us get a lot of joy from catching up on social media and watching dog and cat videos. But if everyone is distracted by it, little gets done.




Read more:
Ten reasons teachers can struggle to use technology in the classroom


The internet has also enabled greater flexibility in work, another plus. But if it contributes little to economic growth, it is worth asking whether our economic managers should continue to fund its expansion.

No saviour for developing nations

For developing countries, generating economic growth is pressing because resources are scarce. ICT has been held out as a saviour.

Yet, it has almost always been found that more obvious innovations, such as running water, electricity, and primary education for girls, have bigger payoffs.

Our own findings show that developing countries benefit from landline and mobile phone technologies but not at all from computing, at least not yet. ICT might need to reach a critical size before its effects matter.

But maybe later, down the track

The time it takes for ICT investment to generate economic growth might be longer than expected, and it might need to reach an even bigger critical mass before that happens.

But it’s hard to avoid the conclusion that, for the immediate future, growth will continue to depend upon more traditional sources: trade between nations, education, new ideas, the rule of law, sound political institutions, and curtailing inequality.




Read more:
How rising inequality is stalling economies by crippling demand


Unfortunately, these are under threat from growing nationalism and protectionism in the United States and elsewhere. The evidence to date suggests that we would be better off fighting those threats than investing still more in an information technology revolution that has yet to deliver.

Chris Doucouliagos, Professor of Economics, Department of Economics, Deakin Business School and Alfred Deakin Institute for Citizenship and Globalisation, Deakin University and Tom Stanley, Professor of Meta-Analysis, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.