6 actions Australia’s government can take right now to target online racism



Paul Fletcher, Australia’s recently appointed minister for communications, cyber safety and the arts, says he wants to make the internet safe for everyone.
Markus Spiske / unsplash, CC BY

Andrew Jakubowicz, University of Technology Sydney

Paul Fletcher was recently appointed as Australia’s Minister for Communications, Cyber Safety and the Arts.

One of his stated priorities is to:

continue the Morrison Government’s work to make the internet a safer place for the millions of Australians who use it every day.

Addressing online racism is a vital part of this goal.

And not just because racism online is hurtful and damaging – which it is. This is also important because sometimes online racism spills into the real world with deadly consequences.




Read more:
Explainer: trial of alleged perpetrator of Christchurch mosque shootings


An Australian man, brought up in Australia’s online environment, is the alleged murderer of 50 Muslims at prayer in Christchurch. The planning and live streaming of the attack took place on the internet, and across international boundaries.

We must critically assess how this happened, and be clearheaded and non-ideological about actions to reduce the likelihood of such an event happening again.

There are six steps Australia’s government can take.

1. Reconsider international racism convention

Our government should remove its reservation on Article 4 of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD).

In 1966 Australia declined to sign up to Article 4(a) of the ICERD, making it the only signatory at the time to file a reservation on that article. It’s this section that mandates the criminalisation of race-hate speech and racist propaganda.

The ICERD entered into Australian law, minus Article 4(a), through the 1975 Racial Discrimination Act (RDA).

Article 4 concerns, such as they were, entered the law twenty years later as “unlawful” harassment and intimidation, with no criminal sanctions. This occurred through the 1995 amendments that inserted Section 18C into the RDA, with its right for complainants to seek civil remedies through the Human Rights Commission.

With Article 4 ratified, the criminal law could encompass the worst cases of online racism, and the police would have some framework to pursue the worst offenders.




Read more:
Explainer: what is Section 18C and why do some politicians want it changed?


2. Extend international collaboration

Our government should extend Australia’s participation in the European cybercrime convention by adopting the First Additional Protocol.

In 2001 the Council of Europe opened the Budapest Convention on Cybercrime for signature, establishing the first international instrument to address crimes committed over the internet. The First Additional Protocol, criminalising acts of a racist and xenophobic nature, was adopted in 2002.

Australia’s government – Labor at the time – initially considered including the First Additional Protocol in its 2009 cybercrime legislation, then withdrew it soon after. Without it, Australia is limited in how it can collaborate with other signatory countries in tracking down cross-border cyber racism.

3. Amend the eSafety Act

The Enhancing Online Safety Act (until 2017, the Enhancing Online Safety for Children Act) established the eSafety Commissioner’s Office to pursue acts that undermine the safe use of the internet, especially bullying.

Communications Minister Fletcher should amend the eSafety Act to extend the options available to people who are harassed and intimidated, including provisions similar to those found in New Zealand legislation. In effect, people harassed online could then take action themselves, or require the commissioner to act to protect them.

Such changes should be supported by staff able to speak the languages and operate in the cultural frames of those who are the most vulnerable to online race hate. These include Aboriginal Australians, Muslims, Jews and people of African and Asian descent.

4. Commit to retaining 18C

Section 18C of the RDA, known as the racial vilification provisions, allows individuals offended or intimidated by online race hate to seek redress.

The LNP government made two failed attempts between 2013 and 2019 to remove or dilute Section 18C on free-speech grounds.

Rather than leaving this question dangling into the future, the government should commit itself to retaining 18C.

Even if this does happen, unless Article 4 of the ICERD is ratified as discussed above, Australia will still have no effective laws that target online race-hate speech by pushing back against perpetrators.

Legislation introduced by the Australian government in April 2019 does make companies such as Facebook more accountable for hosting violent content online, but does not directly target perpetrators of race hate. It’s private online groups that can harbour and grow race hate hidden from the law.




Read more:
New livestreaming legislation fails to take into account how the internet actually works


5. Review best practice in combating cyber racism

Australia’s government should conduct a public review of best practice worldwide in relation to combating cyber racism. For example, it could plan for an options paper for public discussion by the end of 2020, and legislation where required in 2021.

European countries now have a good sense of how their protocol on cyber racism has worked. In particular, it facilitates inter-country collaboration and empowers police to pursue organised race-hate speech as a criminal enterprise.

Other countries such as New Zealand and Canada, with whom we often compare ourselves, have moved far beyond the very limited action taken by Australia.

6. Provide funds to stop racism

In conjunction with the states plus industry and civil society organisations, the Australian government should promote and resource “push back” against online racism. This can be addressed by reducing the online space in which racists currently pursue their goals of normalising racism.

Civil society groups such as the Online Hate Prevention Institute and All Together Now, and interventions like the currently stalled NSW Government program on Remove Hate from the Debate, are good examples of strategies that could achieve far more with sustained support from the federal government.

Such action characterises many European societies. Another good example is the World Wide Web Foundation (W3F) in North America, whose #Fortheweb campaign highlights safety issues for web users facing harassment and intimidation through hate speech.




Read more:
Racism in a networked world: how groups and individuals spread racist hate online


Slow change over time

Speaking realistically, the aim through these mechanisms cannot be to “eliminate” racism, which has deep structural roots. Rather, our goal should be to contain racism, push it back into ever smaller pockets, target perpetrators and force publishers to be far more active in limiting their users’ impacts on vulnerable targets.

Without criminal provisions, infractions of civil law are essentially let “through to the keeper”. The main players know this very well.

Our government has a responsibility to ensure publishers and platforms know what the community standards are in Australia. Legislation and regulation should enshrine, promote and communicate these standards – otherwise the vulnerable remain unprotected, and the aggressors continue smirking.

Andrew Jakubowicz, Emeritus Professor of Sociology, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The behavioural economics of discounting, and why Kogan would profit from discount deception



The consumer watchdog has accused Kogan Australia of misleading customers, by touting discounts on more than 600 items it had previously raised the price of.
http://www.shutterstock.com

Ralph-Christopher Bayer, University of Adelaide

Kogan Australia has grown from a garage to an online retail giant in a little more than a decade. Key to its success have been its discount prices.

But apparently not all of those discounts have been legit, according to the Australian Competition and Consumer Commission.

The consumer watchdog has accused the home electronics and appliances retailer of misleading customers, by touting discounts on more than 600 items whose prices it had sneakily raised by at least the same percentage.
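The arithmetic of a markup-then-discount cycle is worth making explicit. The sketch below uses hypothetical figures for illustration (not the ACCC’s evidence): raising a price by a markup p and then discounting the inflated price by d leaves a net multiplier of (1 + p)(1 − d) on the original price.

```python
# Net effect on price of a markup followed by an advertised discount.
# Hypothetical figures for illustration; not the ACCC's evidence.

def net_multiplier(markup: float, discount: float) -> float:
    """Final price as a fraction of the original, pre-markup price."""
    return (1 + markup) * (1 - discount)

# A 10% price rise followed by "10% off" leaves the shopper paying
# 99% of the original price: a real saving of 1%, not 10%.
print(round(net_multiplier(0.10, 0.10), 4))  # 0.99

# A markup of 1/0.9 - 1 (about 11.1%) makes the "discounted" price
# exactly equal to the original, so the saving is zero.
print(round(net_multiplier(1 / 0.9 - 1, 0.10), 4))  # 1.0
```

In other words, unless the earlier markup is smaller than roughly d/(1 − d), the advertised saving is mostly or entirely illusory.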

Kogan is yet to have its day in court, so we won’t dwell on its case specifically.

The ACCC alleges Kogan’s ‘TAXTIME’ promotion offered a 10% discount on items whose prices had all been raised by the equivalent percentage.
ACCC

But the scenario does raise an interesting question. How effective are these types of price manipulation? After all, checking and comparing prices is dead easy online. So what could a retailer possibly gain?

Well, as it happens, potentially quite a lot.

Because consumers are human beings, our actions aren’t necessarily rational. We have strong emotional reactions to price signals. The sheer ubiquity of discounting demonstrates that it works.

Let’s review a couple of findings from behavioural (and traditional) economics that help explain why discounting – both real and fake – is such an effective marketing ploy.

Save! Save! Save!

In standard economics, consumers are assumed to base their purchasing decisions on absolute prices. They make “rational” decisions, and the “framing” of the price does not matter.

Psychologists Daniel Kahneman and Amos Tversky challenged this assumption with their insights into consumer behaviour. Their best-known contribution to behavioural economics is “prospect theory” – a psychologically more realistic alternative to the classical theory of rational choice.

Kahneman and Tversky argued that behaviour is based on relative changes from a reference point, not absolute states. Framing a price as involving a discount therefore influences our perception of its value.

The prospect of buying something leads us to compare two different changes: the positive change in perceived value from taking ownership of a good (the gain); and the negative change experienced from handing over money (the loss). We buy if we perceive the gain to outweigh the loss.

Suppose you are looking to buy a toaster. You see one for $99. Another is $110, with a 10% discount – making it $99. Which one would you choose?

Evaluating the first toaster’s value to you is reasonably straightforward. You will consider the item’s attributes against other toasters and how much you like toast versus some other benefit you might attain for $99.

Standard economics says your decision simply weighs the loss of $99 against the gain of owning the toaster.

For the second toaster you might do all the same calculations about features and value for money. But behavioural economics tells us the discount will provoke a more complex emotional reaction than the first toaster.

Research shows most of us will tend to “segregate” the price from the discount; we will feel separately the emotion from the loss of spending $99 and the gain of “saving” $11.

Economist Richard Thaler demonstrated this in a study involving 87 undergraduate students at Cornell University. He quizzed them on a series of scenarios like the following:

Mr A’s car was damaged in a parking lot. He had to spend $200 to repair the damage. The same day the car was damaged, he won $25 in the office football pool.
Mr B’s car was damaged in a parking lot. He had to spend $175 to repair the damage.
Who was more upset?

Just five students said both would be equally upset, while 63 (more than 72%) said Mr B. Similar hypotheticals elicited equally emphatic results.

Economists now refer to this as the “silver lining effect” – segregating a small gain from a larger loss results in greater psychological value than integrating the gain into a smaller loss.

The result is we feel better handing over money for a discounted item than the same amount for a non-discounted item.
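The segregation effect can be sketched with a prospect-theory-style value function. The function below is a deliberately simplified illustration – a power function with exponent 0.88, omitting Kahneman and Tversky’s loss-aversion multiplier so the effect of diminishing sensitivity is easy to see – and the dollar figures are Thaler’s scenario, not new data.

```python
# Illustrative prospect-theory-style value function: concave for
# gains, convex for losses (diminishing sensitivity). Loss aversion
# is deliberately left out; this is a sketch, not a calibrated model.

def value(x: float, alpha: float = 0.88) -> float:
    """Perceived value of a gain (x > 0) or a loss (x < 0)."""
    return x ** alpha if x >= 0 else -((-x) ** alpha)

# Mr A: a $200 loss plus a separate $25 gain (segregated frames)
mr_a = value(-200) + value(25)   # about -88.9

# Mr B: a single $175 loss (integrated frame), same net outcome
mr_b = value(-175)               # about -94.2

# The small gain is felt on the steep part of the curve near zero,
# so the segregated framing feels better: Mr B is the more upset.
print(mr_a > mr_b)  # True
```

With the full loss-aversion multiplier included, the comparison can flip for larger gains, which is why the silver lining effect is usually stated for gains that are small relative to the loss.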

Must end soon!

Another behavioural trick associated with discounts is creating a sense of urgency, by emphasising the discount period will end soon.

Again, the fact people typically evaluate prospects as changes from a reference point comes into play.

The seller’s strategy is to shift our reference points so we compare the current price with a higher price in the future. This makes not buying feel like a future loss. Since most humans are loss-averse, we may be nudged to avoid that loss by buying before the discount expires.

Expiry warnings also work through a second behavioural channel: anticipated regret.

Some of us are strongly influenced by whether we think we will regret a choice in the future.

Economic psychologist Marcel Zeelenberg and colleagues demonstrated this in experiments with students at the University of Amsterdam. Their conclusion: regret-aversion better explains choices than risk-aversion, because anticipation of regret can promote both risk-averse and risk-seeking choices.

Depending on the extent to which we have this trait, an expiry warning can compel us to buy now, in case we need the item in the future and would regret not having bought it at a discount.

Discounting is thus an effective strategy for getting us to buy products we don’t actually need.

Look no further!

But what about the fact that it is so easy to compare prices online? Why doesn’t this fact nullify the two effects we’ve just discussed?

Here the standard economics of consumer search would agree: even a perfectly rational consumer can be misled.

If a consumer judges a discount promotion is genuine, they have a tendency to assume it is less likely they will find a lower price elsewhere. This belief makes them less likely to continue searching.

In experiments on this topic, my colleague Changxia Ke and I have found a discernible “discount bias”. The effect is not necessarily large, depending on circumstances, but even a small nudge towards choosing a retailer with discounted items over another could end up being worth millions.




Read more:
Why consumers fall for ‘sales’, but companies may be using them too much


Once a consumer has made a decision and bought an item, they are even less likely to search for prices. They therefore may never learn a discount was fake.

There are entire industries where it is general practice to frame prices this way. Paradoxically, because this makes consumers search less for better deals, it allows all sellers to charge higher prices.

The bottom line: beware the emotional appeal of the discount. Whether real or fake, the human tendency is to overrate them.

Ralph-Christopher Bayer, Professor of Economics , University of Adelaide

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Online tools can help people in disasters, but do they represent everyone?



Social media helped some people cope with the Townsville floods earlier this year.
AAP Image/Andrew Rankin

Billy Tusker Haworth, University of Manchester; Christine Eriksen, University of Wollongong, and Scott McKinnon, University of Wollongong

With natural hazard and climate-related disasters on the rise, online tools such as crowdsourced mapping and social media can help people understand and respond to a crisis. They enable people to share their location and contribute information.

But are these tools useful for everyone, or are some people marginalised? It is vital these tools include information provided from all sections of a community at risk.

Current evidence suggests that is not always the case.




Read more:
‘Natural disasters’ and people on the margins – the hidden story


Online tools let people help in disasters

Social media played an important role in coordinating response to the 2019 Queensland floods and the 2013 Tasmania bushfires. Community members used Facebook to coordinate sharing of resources such as food and water.

Crowdsourced mapping helped in response to the humanitarian crisis after the 2010 Haiti earthquake. Some of the most useful information came from public contributions.

Twitter provided similar critical insights during Hurricane Irma in South Florida in 2017.

Research shows these public contributions can help in disaster risk reduction, but they also have limitations.

In the rush to develop new disaster mitigation tools, it is important to consider whether they will help or harm the people most vulnerable in a disaster.

Who is vulnerable?

Extreme natural events, such as earthquakes and bushfires, are not considered disasters until vulnerable people are exposed to the hazard.




Read more:
Understanding the root causes of natural disasters


To determine people’s level of vulnerability we need to know:

  1. the level of individual and community exposure to a physical threat
  2. their access to resources that affect their capacity to cope when threats materialise.

Some groups in society will be more vulnerable to disaster than others. This includes people with immobility issues, caring roles, or limited access to resources such as money, information or support networks.

When disaster strikes, the pressure on some groups is often magnified.

The devastating scenes in New Orleans after Hurricane Katrina in 2005 and in Puerto Rico after Hurricane Maria in 2017 revealed the vulnerability of children in such disasters.

Unfortunately, emergency management can exacerbate the vulnerability of marginalised groups. For example, a US study last year showed that in the years after disasters, wealth increased for white people and declined for people of colour. The authors suggest this is linked to inequitable distribution of emergency and redevelopment aid.

Until recently, policies and practice have mainly been written by, and for, the predominant groups in our society, especially heterosexual white men.

Research shows how this can create gender inequities or exclude the needs of LGBTIQ communities, former refugees and migrants or domestic violence victims.




Read more:
More men die in bushfires: how gender affects how we plan and respond


We need to ask: do new forms of disaster response help everyone in a community, or do they reproduce existing power imbalances?

Unequal access to digital technologies

Research has examined the “techno-optimism” – the belief that technologies will solve our problems – that accompanies the use of online tools to share information for disaster management.

These technologies inherently discriminate if access to them discriminates.

In Australia, the digital divide remains largely unchanged in recent years. In 2016-17 nearly 1.3 million households had no internet connection.

Lower digital inclusion is seen in already vulnerable groups, including the unemployed, migrants and the elderly.

Global internet penetration rates show uneven access between economically poorer parts of the world, such as Africa and Asia, and wealthier Western regions.

Representations of communities are skewed on the internet. Particular groups participate to varying degrees on social media and in crowdsourcing activities. For example, some ethnic minorities have poorer internet access than other groups, even within the same country.

For crowdsourced mapping on platforms such as OpenStreetMap, studies find participation biases relating to gender. Men map far more than women at local and global scales.

Research shows participation biases in community mapping activities towards older, more affluent men.

Protect the vulnerable

Persecuted minorities, including LGBTIQ communities and religious minorities, are often more vulnerable in disasters. Digital technologies, which expose people’s identities and fail to protect privacy, might increase that vulnerability.

Unequal participation means those who can participate may become further empowered, with more access to information and resources. As a result, gaps between privileged and marginalised people grow wider.

For example, local Kreyòl-speaking Haitians from poorer neighbourhoods contributed information via SMS for use on crowdsourced maps during the 2010 Haiti earthquake response.

But the information was translated and mapped in English for Western humanitarians. As they didn’t speak English, vulnerable Haitians were further marginalised by being unable to directly use and benefit from maps resulting from their own contributions.

Participation patterns in mapping do not reflect the true makeup of our diverse societies. But they do reflect where power lies – usually with dominant groups.

Any power imbalances that come from unequal online participation are pertinent to disaster risk reduction. They can amplify community tensions, social divides and marginalisation, and exacerbate vulnerability and risk.

With greater access to the benefits of online tools, and improved representation of diverse and marginalised people, we can better understand societies and reduce disaster impacts.

We must remain acutely aware of digital divides and participation biases. We must continually consider how these technologies can better include, value and elevate marginalised groups.

Billy Tusker Haworth, Lecturer in GIS and Disaster Management, University of Manchester; Christine Eriksen, Senior Lecturer in Geography and Sustainable Communities, University of Wollongong, and Scott McKinnon, Vice-Chancellor’s Postdoctoral Research Fellow, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Goodbye Google+, but what happens when online communities close down?



Google+ is the latest online community to close.
Shutterstock/rvlsoft

Stan Karanasios, RMIT University

This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.

If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.

Google+ is no longer available to users.
Google+/Screengrab

But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).




Read more:
Sexual subcultures are collateral damage in Tumblr’s ban on adult content


So are we witnessing the decline of online communities and user comments?

Turning off online communities and user generated content

One of the best-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by owner Amazon in 2017 with just two weeks’ notice to its users.

Nor is this confined to online communities: it mirrors a broader trend among organisations to restrict or turn off user-generated content. Last year the subscription video-on-demand website Netflix stopped allowing users to write reviews, and subsequently deleted all existing user-generated reviews.

Other popular websites have disabled their comments sections, including National Public Radio (NPR), The Atlantic, Popular Science and Reuters.

Why the closures?

Organisations have a range of motivations for taking such actions, ranging from low uptake, running costs, the challenges of managing moderation, as well as the problem around divisive comments, conflicts and lack of community cohesion.

In the case of Google+, low usage alongside data breaches appear to have sped up its decision.

NPR explained its motivation to remove user comments by highlighting how in one month its website NPR.org attracted 33 million unique users and 491,000 comments. But those comments came from just 19,400 commenters; the number of commenters who posted in consecutive months was a fraction of that.

This led NPR’s managing editor for digital news, Scott Montgomery, to say:

We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.

He said audiences had also moved to engage with NPR more on Facebook and Twitter.

Likewise, The Atlantic explained that its comments sections had become “unhelpful, even destructive, conversations”, and said it was exploring new ways to give users a voice.

In the case of IMDB closing its message boards in 2017, the reason given was:

[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.

The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “(…) primary place they (users) choose to post comments and communicate with IMDb’s editors and one another”.

User backlash

Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.

But most organisations factor these reactions into their decision-making.

The petition to save IMDB’s message boards.
Change.org/Screengrab

For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.

Community members are at the mercy of hosting organisations, some profit-driven, which may have conflicting motivations to those of the users. It’s those organisations that hold the power to change or shut down what can be considered by some to be critical sources of knowledge, engagement and community building.

In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.

This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).

So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.

The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.

It’s also a move away from dealing with the reputational issues related to their use and controlling the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comments spaces.

Lost community knowledge

But within online groups there often exists social and network capital, as well as the stock of valuable knowledge that such community features create.




Read more:
Zuckerberg’s ‘new rules’ for the internet must move from words to actions


Often these communities are made of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.

They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.

Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.

Stan Karanasios, Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The trouble with Big W: don’t blame online for killing discount department stores


Gary Mortimer, Queensland University of Technology

After weeks of speculation, Woolworths has confirmed it will close 30 of its Big W stores in Australia, as well as two distribution centres. This represents about 16% of its 183-strong network.

The obvious culprit, and the one identified by many analysts, is online shopping.

As one industry analyst explained: “The physical department store footprint is likely to continue to shrink as online sales penetration increases further.”

Online shopping is certainly a factor, but it is not the primary reason for Big W’s troubles.

Though online shopping in the department and variety stores category is growing fast (by 29.6% in 2018, according to the NAB Online Retail Sales Index), the total amount of money spent online by Australian shoppers – A$28.8 billion – is still only about 9% of what is spent in traditional bricks-and-mortar stores.


Online retail sales growth by industry in the 12 months to December 2018,
NAB Online Retail Sales Index

More important to Big W’s woes is the growth of so-called category killers, which are disrupting the entire discount department store business model. It is a threat to which Big W has failed to respond with the agility of rival Kmart.

Departed departments

If you’re old enough you may remember getting your wall paint mixed in the Big W hardware department, or buying car accessories from its automotive department. There was also a large “sight and sound” department filled with televisions, sound systems, videos and CDs. Discount department stores truly lived up to the idea of a variety store.

‘You know the price is low, everyday’: A television advertisement for Big W in 1994.

But the profitability of all these market segments for department stores has been eroded by the growth of “category killers” – retailers specialising in the same product categories.

Examples include Officeworks for office supplies, Rebel for sports equipment, JB Hi-Fi for audiovisual, Supercheap Auto for car parts, and Bunnings for hardware. All have taken market share from the discounters. These stores compete on price, offer a superior range, and enjoy shoppers’ trust in the expertise of specialist staff.

Speed of change

The popularity of category killers explains in large part the stagnant sales and talk of store closures throughout the department store segment.

Harris Scarfe and Best and Less are reportedly struggling. The Reject Shop’s net profit for the first half fell from an expected A$17 million to less than A$11 million. David Jones’ half-year profit fell 39% to A$36 million. Myer reported a 2.8% drop in total sales for the same time frame.




Read more:
What does the future hold for our traditional department stores?


Wesfarmers expects earnings from its department store brands Kmart and Target to fall about 8% this financial year. Eight Target stores closed during the first half of the financial year, with another six closures expected by the end of June.

Cutting losses

Kmart is considered Australia’s discount department store “darling”. Yet a decade ago it was on life support. Under the direction of chief executive Guy Russo it had doubled its profits by 2015.

A key to the turnaround was recognising that it needed to quickly reduce or exit categories in which it could not compete, such as hardware, automotive, fishing, consumer electronics and sporting goods. It turned instead to homewares, soft furnishings, manchester and kitchenware.

No such swiftness is apparent in Big W’s moves.

Big W’s chief executive from January to November 2016, Sally MacDonald, reportedly wanted to close stores and make other major changes but was thwarted by the board of Woolworths Group, Big W’s owner.

Such differences in strategic vision explain why MacDonald left the role within the year.

This process of “right-sizing” therefore seems long overdue. To what extent it makes Woolworths a sustainable business, however, will depend on its future responses to changing circumstances.

What is certain is that discount department stores aren’t what they used to be; and if they want to be around in future, they probably can’t be what they are now.The Conversation

Gary Mortimer, Associate Professor in Marketing and Consumer Behaviour, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

As responsible digital citizens, here’s how we can all reduce racism online



No matter how innocent you think it is, what you type into search engines can shape how the internet behaves.
Hannah Wei / unsplash, CC BY

Ariadna Matamoros-Fernández, Queensland University of Technology

Have you ever considered that what you type into Google, or the ironic memes you laugh at on Facebook, might be building a more dangerous online environment?

Regulation of online spaces is starting to gather momentum, with governments, consumer groups, and even digital companies themselves calling for more control over what is posted and shared online.

Yet we often fail to recognise the role that all of us, as ordinary citizens, play in shaping the digital world.

The privilege of being online comes with rights and responsibilities, and we need to actively ask what kind of digital citizenship we want to encourage in Australia and beyond.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Beyond the knee-jerk

The Christchurch terror attack prompted policy change by governments in both New Zealand and Australia.

Australia recently passed a new law that imposes penalties on social media platforms if they fail to remove violent content promptly once it becomes available online.

Platforms may well be lagging behind in their content moderation responsibilities, and still need to do better in this regard. But this kind of “knee-jerk” policy response won’t solve the spread of problematic content on social media.

Addressing hate online requires coordinated efforts. Platforms must improve the enforcement of their rules (not just announce tougher measures) to guarantee users’ safety. They may also need to consider a serious redesign, because the way they currently organise, select and recommend information often amplifies systemic problems in society, such as racism.




Read more:
New livestreaming legislation fails to take into account how the internet actually works


Discrimination is entrenched

Of course, biased beliefs and content don’t just live online.

In Australia, racial discrimination has been perpetuated in public policy, and the country has an unreconciled history of Indigenous dispossession and oppression.

Today, Australia’s political mainstream is still lenient with bigots, and the media often contributes to fearmongering about immigration.

However, we can all play a part in reducing harm online.

There are three aspects we might reconsider when interacting online so as to deny oxygen to racist ideologies:

  • a better understanding of how platforms work
  • the development of empathy to identify differences in interpretation when engaging with media (rather than focusing on intent)
  • working towards a more productive anti-racism online.

Online lurkers and the amplification of harm

White supremacists and other reactionary pundits seek attention on mainstream and social media. New Zealand Prime Minister Jacinda Ardern refused to name the Christchurch gunman to prevent fuelling his desired notoriety, and so did some media outlets.

The rest of us might draw comfort from not having contributed to amplifying the Christchurch attacker’s desired fame. It’s likely we didn’t watch his video or read his manifesto, let alone upload or share this content on social media.

But what about apparently less harmful practices, such as searching on Google and social media sites for keywords related to the gunman’s manifesto or his live video?

It’s not the intent behind these practices that should be the focus of this debate, but their consequences. Our everyday interactions on platforms influence search autocomplete algorithms and the hierarchical organisation and recommendation of information.

In the Christchurch tragedy, even if we didn’t share or upload the manifesto or the video, the zeal to access this information drove traffic to problematic content and amplified harm for the Muslim community.

Normalisation of hate through seemingly lighthearted humour

Reactionary groups know how to capitalise on memes and other jokey content that degrades and dehumanises.

By using irony to deny the racism in these jokes, these far-right groups connect and immerse new members in an online culture that deliberately uses memetic media to have fun at the expense of others.

The Christchurch terrorist attack showed this connection between online irony and the radicalisation of white men.

However, humour, irony and play – which are protected under platform policies – serve to cloak racism in more mundane, everyday contexts.




Read more:
Racism in a networked world: how groups and individuals spread racist hate online


Just as everyday racism shares discourses and vocabularies with white supremacy, lighthearted racist and sexist jokes are as harmful as online fascist irony.

Humour and satire should not be hiding places for ignorance and bigotry. As digital citizens we should be more careful about what kind of jokes we engage with and laugh at on social media.

What’s harmful and what’s a joke might not be apparent when interpreting content from a limited worldview. Developing empathy for others’ interpretations of the same content is a useful skill for minimising the amplification of racist ideologies online.

As scholar danah boyd argues:

The goal is to understand the multiple ways of making sense of the world and use that to interpret media.

Effective anti-racism on social media

A common practice in challenging racism on social media is to publicly call it out, and show support for those who are victims of it. But critics of social media’s callout culture maintain that these tactics often do not work as an effective anti-racism tool, as they are performative rather than having an advocacy effect.

An alternative is to channel outrage into more productive forms of anti-racism. For example, you can report hateful online content either individually or through organisations that are already working on these issues, such as The Online Hate Prevention Institute and the Islamophobia Register Australia.

Most major social media platforms struggle to understand how hate is articulated in non-US contexts. Reporting content can help platforms understand culturally specific coded words, expressions and jokes (most of which are mediated through visual media) that moderators might not understand and algorithms can’t identify.

As digital citizens we can work together to deny attention to those that seek to discriminate and inflict harm online.

We can also learn how our everyday interactions might have unintended consequences and actually amplify hate.

However, these ideas do not diminish the responsibility of platforms to protect users, nor do they negate the role of governments to find effective ways to regulate platforms in collaboration and consultation with civil society and industry.The Conversation

Ariadna Matamoros-Fernández, Lecturer in Digital Media at the School of Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Seven ways the government can make Australians safer – without compromising online privacy



We need a cyber safety equivalent to the Slip! Slop! Slap! campaign to nudge behavioural change in the community.
Shutterstock

Damien Manuel, Deakin University

This is part of a major series called Advancing Australia, in which leading academics examine the key issues facing Australia in the lead-up to the 2019 federal election and beyond. Read the other pieces in the series here.

When it comes to data security, there is an inherent tension between safety and privacy. The government’s job is to balance these priorities with laws that will keep Australians safe, improve the economy and protect personal data from unwarranted surveillance.

This is a delicate line to walk. Recent debate has revolved around whether technology companies should be required to help law enforcement agencies gain access to the encrypted messages of suspected criminals.

While this is undoubtedly an important issue, the enacted legislation – the Telecommunications and Other Legislation Amendment (Assistance and Access) Act – fails on both fronts. Not only is it unlikely to stop criminals, it could make personal communications between everyday people less secure.

Rather than focus on the passage of high-profile legislation that clearly portrays a misunderstanding of the technology in question, the government would do better to invest in a comprehensive cyber security strategy that will actually have an impact.

Achieving the goals set out in the strategy we already have would be a good place to start.




Read more:
The difference between cybersecurity and cybercrime, and why it matters


Poor progress on cyber security

The Turnbull government launched Australia’s first Cyber Security Strategy in April 2016. It promised to dramatically improve the online safety of all Australian families and businesses.

In 2017, the government released the first annual update to report on how well it was doing. On the surface some progress had been made, but a lot of items were incomplete – and the promised linkages to businesses and the community were not working well.

Unfortunately, there was never a second update. Prime ministers were toppled, cabinets were reshuffled and it appears the Morrison government lost interest in truly protecting Australians.

So, where did it all go wrong?

A steady erosion of privacy

Few Australians took much notice when vested interests hijacked technology law reforms. The amendment of the Copyright Act in 2015 forced internet service providers (ISPs) to block access to sites containing pirated content. Movie studios now had their own version of China’s “Great Firewall” to block and control internet content in Australia.

In 2017, the government implemented its data retention laws, which effectively enabled specific government agencies to spy on law-abiding citizens. The digital trail (metadata) people left through phone calls, SMS messages, emails and internet activity was retained by telecommunications carriers and made accessible to law enforcement.

The public was assured only limited agencies would have access to the data to hunt for terrorists. In 2018, we learned that many more agencies were accessing the data than originally promised.

Enter the Assistance and Access legislation. Australia’s technology sector strongly objected to the bill, but the Morrison government’s consultation process was a whitewash. The government ignored advice on the damage the legislation would do to the developing cyber sector outlined in the Cyber Security Strategy – the very sector the Turnbull government had been counting on to help rebuild the economy in this hyper-connected digital world.




Read more:
What skills does a cybersecurity professional need?


While the government focuses on the hunt for terrorists, it neglects the thousands of Australians who fall victim each year to international cybercrime syndicates and foreign governments.

Australians lose money to cybercrime via scam emails and phone calls designed to harvest passwords, banking credentials and other personal information. Losses from some categories of cybercrime have increased by more than 70% in the last 12 months. The impact of cybercrime on Australian business and individuals is estimated at $7 billion a year.

So, where should government focus its attention?

Seven actions that would make Australia safer

If the next government is serious about protecting Australian businesses and families, here are seven concrete actions it should take immediately upon taking office.

1. Review the Cyber Security Strategy

Work with industry associations, the business and financial sectors, telecommunication providers, cyber startups, state government agencies and all levels of the education sector to develop a plan to protect Australians and businesses. The plan must be comprehensive, collaborative and, most importantly, inclusive. It should be adopted at the federal level and by states and territories.

2. Make Australians a harder target for cybercriminals

The United Kingdom’s National Cyber Security Centre is implementing technical and process controls that help people in the UK fight cybercrime in smart, innovative ways. The UK’s Active Cyber Defence program uses top-secret intelligence to prevent cyber attacks and to detect and block malicious email campaigns used by scammers. It also investigates how people actually use technology, with the aim of implementing behavioural change programs to improve public safety.

3. Create a community education campaign

A comprehensive community education program would improve online behaviours and make businesses and families safer. We had the iconic Slip! Slop! Slap! campaign from 1981 to help reduce skin cancer through community education. Where is the equivalent campaign for cyber safety to nudge behavioural change in the community at all levels from kids through to adults?

4. Improve cyber safety education in schools

Build digital literacy into education from primary through to tertiary level so that young Australians understand the consequences of their online behaviours. For example, they should know the risks of sharing personal details and nude selfies online.




Read more:
Cybersecurity of the power grid: A growing challenge


5. Streamline industry certifications

Encourage the adoption of existing industry certifications, and stop special interest groups from introducing more. There are already more than 100 industry certifications. Minimum standards for government staff should be defined, including for managers, technologists and software developers.

The United States Department of Defense introduced minimum industry certification requirements for people in government who handle data. The Australian government should do the same by making a number of vendor-agnostic certifications mandatory in each job category.

6. Work with small and medium businesses

The existing cyber strategy doesn’t do enough to engage with the business sector. Small and medium businesses form a critical part of the larger business supply-chain ecosystem, so the ramifications of a breach could be far-reaching.

The Australian Signals Directorate recommends businesses follow “The Essential Eight” – a list of strategies businesses can adopt to reduce their risk of cyber attack. This is good advice, but it doesn’t address the human side of exploitation, called social engineering, which tricks people into disclosing passwords that protect sensitive or confidential information.

7. Focus on health, legal and tertiary education sectors

The health, legal and tertiary education sectors have a low level of cyber maturity. These are among the top four sectors reporting breaches, according to the Office of the Australian Information Commissioner.

While health sector breaches could lead to personal harm and blackmail, breaches in the legal sector could result in the disclosure of time-sensitive business transactions and personal details. And the tertiary education sector – a powerhouse of intellectual research – is ripe for foreign governments to steal the knowledge underpinning Australia’s future technologies.

A single person doing the wrong thing and making a mistake can cause a major security breach. More than 900,000 people are employed in the Australian health and welfare sector, and the chance of one of these people making a mistake is unfortunately very high.The Conversation

Damien Manuel, Director, Centre for Cyber Security Research & Innovation (CSRI), Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Online trolling used to be funny, but now the term refers to something far more sinister



The definition of “trolling” has changed a lot over the last 15 years.
Shutterstock

Evita March, Federation University Australia

It seems like internet trolling happens everywhere online these days – and it’s showing no signs of slowing down.

This week, the British press and Kensington Palace officials have called for an end to the merciless online trolling of Duchesses Kate Middleton and Meghan Markle, which reportedly includes racist and sexist content, and even threats.

But what exactly is internet trolling? How do trolls “behave”? Do they intend to harm, or amuse?

To find out how people define trolling, we conducted a survey with 379 participants. The results suggest there is a difference in the way the media, the research community and the general public understand trolling.

If we want to reduce abusive online behaviour, let’s start by getting the definition right.




Read more:
How empathy can make or break a troll


Which of these cases is trolling?

Consider the comments that appear in the image below:


Screenshot

Without providing any definitions, we asked if this was an example of internet trolling. Of participants, 44% said yes, 41% said no and 15% were unsure.

Now consider this next image:


Screenshot

Of participants, 69% said this was an example of internet trolling, 16% said no, and 15% were unsure.

These two images depict very different online behaviour. The first image depicts mischievous and comical behaviour, where the author perhaps intended to amuse the audience. The second image depicts malicious and antisocial behaviour, where the author may have intended to cause harm.

There was more consensus among participants that the second image depicted trolling. That aligns with a more common definition of internet trolling as destructive and disruptive online behaviour that causes harm to others.

But this definition has only really evolved in more recent years. Previously, internet trolling was defined very differently.




Read more:
We researched Russian trolls and figured out exactly how they neutralise certain news


A shifting definition

In 2002, one of the earliest definitions of internet “trolling” described the behaviour as:

luring others online (commonly on discussion forums) into pointless and time-consuming activities.

Trolling often started with a message that was intentionally incorrect, but not overly controversial. By contrast, internet “flaming” described online behaviour with hostile intentions, characterised by profanity, obscenity, and insults that inflict harm to a person or an organisation.

So, modern day definitions of internet trolling seem more consistent with the definition of flaming, rather than the initial definition of trolling.

To highlight this intention to amuse compared to the intention to harm, communication researcher Jonathan Bishop suggested we differentiate between “kudos trolling” to describe trolling for mutual enjoyment and entertainment, and “flame trolling” to describe trolling that is abusive and not intended to be humorous.

How people in our study defined trolling

In our study, which has been accepted to be published in the journal Cyberpsychology, Behavior, and Social Networking, we recruited 379 participants (60% women) to answer an online, anonymous questionnaire where they provided short answer responses to the following questions:

  • how do you define internet trolling?

  • what kind of behaviours constitute internet trolling?

Here are some examples of how participants responded:

Where an individual online verbally attacks another individual with intention of offending the other (female, 27)

People saying intentionally provocative things on social media with the intent of attacking / causing discomfort or offence (female, 26)

Teasing, bullying, joking or making fun of something, someone or a group (male, 29)

Deliberately commenting on a post to elicit a desired response, or to purely gratify oneself by emotionally manipulating another (male, 35)

Based on participant responses, we suggest that internet trolling is now more commonly seen as an intentional, malicious online behaviour, rather than a harmless activity for mutual enjoyment.

A word cloud representing how survey participants described trolling behaviours.

Researchers use ‘trolling’ as a catch-all

Clearly there are discrepancies in the definition of internet trolling, and this is a problem.

Research does not differentiate between kudos trolling and flame trolling. Some members of the public might still view trolling as a kudos behaviour. For example, one participant in our study said:

Depends which definition you mean. The common definition now, especially as used by the media and within academia, is essentially just a synonym to “asshole”. The better, and classic, definition is someone who speaks from outside the shared paradigm of a community in order to disrupt presuppositions and try to trigger critical thought and awareness (male, 41)

Not only does the definition of trolling differ from researcher to researcher, but there can also be discrepancy between the researcher and the public.

As a term, internet trolling has deviated significantly from its early 2002 definition and become a catch-all for all antisocial online behaviour. The lack of a uniform definition leaves all research on trolling open to validity concerns, which could leave the behaviour largely unchecked.




Read more:
Our experiments taught us why people troll


We need to agree on the terminology

We propose replacing the catch-all term of trolling with “cyberabuse”.

Cyberbullying, cyberhate and cyberaggression are all different online behaviours with different definitions, but they are often referred to uniformly as “trolling”.

It is time to move away from the term trolling to describe these serious instances of cyberabuse. While it may have been empowering for the public to picture these internet “trolls” as ugly creatures living under the bridge, this imagery may have begun to downplay the seriousness of their online behaviour.

Continuing to use the term trolling, a term that initially described a behaviour that was not intended to harm, could have serious consequences for managing and preventing the behaviour.The Conversation

Evita March, Senior Lecturer in Psychology, Federation University Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Not another online petition! But here’s why you should think before deleting it


It’s a lazy form of activism, but that doesn’t mean signing online petitions is useless.
from shutterstock.com

Sky Croeser, Curtin University

Online petitions are often seen as a form of “slacktivism” – small acts that don’t require much commitment and are more about helping us feel good than effective activism. But the impacts of online petitions can stretch beyond immediate results.

Whether they work to create legislative change, or just raise awareness of an issue, there’s some merit to signing them. Even if nothing happens immediately, petitions are one of many ways we can help build long-term change.

A history of petitions

Petitions have a long history in Western politics. They developed centuries ago as a way for people to have their voices heard in government or ask for legislative change. But they’ve also been seen as largely ineffective in this respect. One study found only three out of 2,589 petitions submitted to the Australian House of Representatives between 1999 and 2007 even received a ministerial response.

Before the end of the second world war, fewer than 16 petitions a year were presented to Australia’s House of Representatives. The new political landscape of the early 1970s saw that number leap into the thousands.

In the 2000s, the House received around 300 petitions per year, and even with online tools, it’s still nowhere near what it was in the 70s. According to the parliamentary website, an average of 121 petitions have been presented each year since 2008.




Read more:
Changing the world one online petition at a time: how social activism went mainstream


Although petitions rarely achieve direct change, they are an important part of the democratic process. Many governments have attempted to facilitate petitioning online. For example, the Australian parliamentary website helps citizens through the process of developing and submitting petitions. This is one way the internet has made creating and submitting petitions easier.

There are also independent sites that campaigners can use, such as Change.org and Avaaz. It can take under an hour to go from an idea to an online petition that’s ready to share on social media.

As well as petitions being a way for citizens to make requests of their governments, they are now used more broadly. Many petitions reach a global audience – they might call for change from companies, international institutions, or even society as a whole.

What makes for an effective petition?

The simplest way to gauge whether a petition has been successful is to look at whether the requests made were granted. The front page of Change.org displays recent “victories”. These include a call to axe the so-called “tampon tax” (the GST on menstrual products), which states and territories agreed to remove from January 2019.

Change.org also counts the petition for gender equality on cereal boxes as a victory, after Kellogg’s issued a statement saying it would update its packaging in 2019 to include images of both males and females. This petition had only 600 signatures, compared with the 75,000 against the tampon tax.

In 2012, a coalition of organisations mobilised a campaign against two proposed US laws that many saw as likely to restrict internet freedom. A circulating petition gathered 4.5 million signatures, which helped put pressure on US representatives not to vote for the bills.

However, all of these petitions were part of larger efforts. There have been campaigns to remove the tax on menstrual products since it was first imposed, there’s a broad movement for more equal gender representation, and there’s significant global activism against online censorship. None of these petitions can claim sole victory. But each may have pushed its cause over the line, or simply added weight to the groundswell of existing support.

Online petitions can have the obvious impact of changing the very thing they’re campaigning for. However, the type of petition also makes a difference to what change it can achieve.

Choosing a petition worth signing

Knowing a few characteristics of successful petitions can be useful when you’re deciding whether it’s worth your time to sign and share something. Firstly, there should be a target and specific call for action.

These can take many forms: petitions might request a politician vote “yes” on a specific law, demand changes to working conditions at a company, or even ask an advocacy organisation to begin campaigning around a new issue. Vague targets and unclear goals aren’t well suited to petitions. Calls for “more gender equality in society” or “better rights for pets”, for example, are unlikely to achieve success.

Secondly, the goal needs to be realistic. This is so it’s possible to succeed and so supporters feel a sense of optimism. Petitioning for a significant change in a foreign government’s policy – for example, a call from world citizens for better gun control in the US – is unlikely to lead to results.




Read more:
Why #metoo is an impoverished form of feminist activism, unlikely to spark social change


It’s easier to get politicians to change their vote on a single, relatively minor issue than to achieve sweeping legal changes. It’s also more likely a company will change its packaging than completely overhaul its approach to production.

Thirdly, and perhaps most importantly, a petition’s chance of success depends largely on the strength of the community supporting it. Petitions rarely work on their own. In her book Twitter and Tear Gas, Turkish writer Zeynep Tufekci argues the internet allows us to organise action far more quickly than in the past, outpacing the hard but essential work of community organising.

We can get thousands of people signing a petition and shouting in the streets well before we build coalitions and think about long-term strategies. But the most effective petitions will work in combination with other forms of activism.

Change happens gradually

Even petitions that don’t achieve their stated aims or minor goals can play a role in activist efforts. Sharing petitions is one way to bring attention to issues that might otherwise remain off the agenda.

Most online petitions include the option of allowing further updates and contact. Organisations often use a petition to build momentum around an ongoing campaign. Creating, or even signing, online petitions can be a form of micro-activism that helps people start thinking of themselves as capable of creating change.

Signing petitions – and seeing that others have also done so – can help us feel we are part of a collective, working with others to shape our world.

It’s reasonable to think carefully about what we put our names to online, but we shouldn’t be too quick to dismiss online petitions as ineffective, or “slack”. Instead, we should think of them as one example of the diverse tactics that help build change over time.The Conversation

Sky Croeser, Lecturer, School of Media, Creative Arts and Social Inquiry, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Racism in a networked world: how groups and individuals spread racist hate online



We could see even sharper divisions in society in the future if support for racism spreads online.
Markus Spiske/Unsplash

Ana-Maria Bliuc, Western Sydney University; Andrew Jakubowicz, University of Technology Sydney, and Kevin Dunn, Western Sydney University

Living in a networked world has many advantages. We get our news online almost as soon as it happens, we stay in touch with friends via social media, and we advance our careers through online professional networks.

But there is a darker side to the internet that sees far-right groups exploit these unique features to spread divisive ideas, racial hate and mistrust. Scholars of racism refer to this type of racist communication online as “cyber-racism”.

Even the creators of the internet are aware they may have unleashed a technology that is causing a lot of harm. Since 2017, the inventor of the World Wide Web, Tim Berners-Lee, has focused many of his comments about the dangers of manipulation of the internet on the spread of hate speech, saying that:

Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken.

Our team conducted a systematic review of ten years of cyber-racism research to learn how different types of communicators use the internet to spread their views.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Racist groups behave differently to individuals

We found that the internet is indeed a powerful tool used to influence and reinforce divisive ideas. And it’s not only organised racist groups that take advantage of online communication; unaffiliated individuals do it too.

But the way groups and individuals use the internet differs in several important ways. Racist groups are active on different communication channels from individuals, and they pursue different goals with different strategies. The effects of their communication are also distinctive.

Individuals mostly engage in cyber-racism to hurt others, and to confirm their racist views by connecting with like-minded people (a form of confirmation bias). Their preferred communication channels tend to be blogs, forums, news commentary websites, gaming environments and chat rooms.

Channels, goals and strategies used by unaffiliated people when communicating cyber-racism.

Strategies they use include denying or minimising the issue of racism, denigrating “non-whites”, and reframing the meaning of current news stories to support their views.

Groups, on the other hand, prefer to communicate via their own websites. They are also more strategic in what they seek to achieve through online communication. They use websites to gather support for their group and their views through racist propaganda.

Racist groups manipulate information and use clever rhetoric to help build a sense of a broader “white” identity, which often goes beyond national borders. They argue that conflict between different ethnicities is unavoidable, and that what most would view as racism is in fact a natural response to the “oppression of white people”.

Channels, goals and strategies used by groups when communicating cyber-racism.




Read more:
How the alt-right uses milk to promote white supremacy


The main effect of collective cyber-racism is to undermine the social cohesion of modern multicultural societies. It creates division, mistrust and intergroup conflict.

Meanwhile, individual cyber-racism seems to have a more direct effect, negatively affecting the wellbeing of targets. It also contributes to maintaining a hostile racial climate, which may further (indirectly) affect the wellbeing of targets.

What they have in common

Despite their differences, groups and individuals both show a high level of sophistication in how they communicate racism online. Our review uncovered the disturbingly creative ways in which new technologies are exploited.

For example, racist groups make themselves attractive to young people by providing interactive games and links to music videos on their websites. And both groups and individuals are highly skilled at manipulating their public image via various narrative strategies, such as humour and the interpretation of current news to fit with their arguments.




Read more:
Race, cyberbullying and intimate partner violence


A worrying trend

Our findings suggest that if these online strategies are effective, we could see even sharper divisions in society as the mobilisation of support for racism and far-right movements spreads online.

There is also evidence that currently unaffiliated supporters of racism could derive strength through online communication. These individuals might use online channels to validate their beliefs and achieve a sense of belonging in virtual spaces where racist hosts provide an uncontested and hate-supporting community.

This is a worrying trend. We have now seen several examples of violent action perpetrated offline by isolated individuals who have radicalised into white supremacist movements – for example, Anders Breivik in Norway and, more recently, Robert Gregory Bowers, the perpetrator of the Pittsburgh synagogue shooting.

In Australia, unlike most other liberal democracies, there are effectively no government strategies that seek to reduce this avenue for the spread of racism, despite many Australians expressing a desire that this be done.The Conversation

Ana-Maria Bliuc, Senior Lecturer in Social Psychology, Western Sydney University; Andrew Jakubowicz, Emeritus Professor of Sociology, University of Technology Sydney, and Kevin Dunn, Dean of the School of Social Science and Psychology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.