As responsible digital citizens, here’s how we can all reduce racism online



No matter how innocent you think it is, what you type into search engines can shape how the internet behaves.
Hannah Wei / unsplash, CC BY

Ariadna Matamoros-Fernández, Queensland University of Technology

Have you ever considered that what you type into Google, or the ironic memes you laugh at on Facebook, might be building a more dangerous online environment?

Regulation of online spaces is starting to gather momentum, with governments, consumer groups, and even digital companies themselves calling for more control over what is posted and shared online.

Yet we often fail to recognise the role that all of us, as ordinary citizens, play in shaping the digital world.

The privilege of being online comes with rights and responsibilities, and we need to actively ask what kind of digital citizenship we want to encourage in Australia and beyond.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Beyond the knee-jerk

The Christchurch terror attack prompted policy change by governments in both New Zealand and Australia.

Australia recently passed a new law that imposes penalties on social media platforms that fail to remove violent content promptly once it appears online.

Platforms may well be lagging in their content moderation responsibilities, and still need to do better. But this kind of “knee-jerk” policy response won’t stop the spread of problematic content on social media.

Addressing hate online requires coordinated efforts. Platforms must improve the enforcement of their rules (not just announce tougher measures) to guarantee users’ safety. They should also consider a serious redesign, because the way they currently organise, select and recommend information often amplifies systemic problems in society, such as racism.




Read more:
New livestreaming legislation fails to take into account how the internet actually works


Discrimination is entrenched

Of course, biased beliefs and content don’t just live online.

In Australia, racial discrimination has been perpetuated in public policy, and the country has an unreconciled history of Indigenous dispossession and oppression.

Today, Australia’s political mainstream is still lenient with bigots, and the media often contributes to fearmongering about immigration.

However, we can all play a part in reducing harm online.

There are three aspects we might reconsider when interacting online so as to deny oxygen to racist ideologies:

  • a better understanding of how platforms work
  • the development of empathy to identify differences in interpretation when engaging with media (rather than focusing on intent)
  • working towards a more productive anti-racism online.

Online lurkers and the amplification of harm

White supremacists and other reactionary pundits seek attention on mainstream and social media. New Zealand Prime Minister Jacinda Ardern refused to name the Christchurch gunman to prevent fuelling his desired notoriety, and so did some media outlets.

The rest of us might draw comfort from not having contributed to amplifying the Christchurch attacker’s desired fame. It’s likely we didn’t watch his video or read his manifesto, let alone upload or share this content on social media.

But what about apparently less harmful practices, such as searching on Google and social media sites for keywords related to the gunman’s manifesto or his live video?

It’s not the intent behind these practices that should be the focus of this debate, but their consequences. Our everyday interactions on platforms influence search autocomplete algorithms and the hierarchical organisation and recommendation of information.
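
To make that concrete, here is a minimal sketch (in Python, with invented names; real autocomplete systems are proprietary and far more sophisticated) of how aggregated search behaviour can feed back into what everyone else is shown:

```python
from collections import Counter

# Toy model of the feedback loop described above: every search is
# logged, and autocomplete simply offers the most frequent past
# queries matching what the user has typed so far.
query_log = Counter()

def record_search(query: str) -> None:
    """Log one search, as a platform might."""
    query_log[query.lower().strip()] += 1

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Return the k most-searched queries starting with the prefix."""
    prefix = prefix.lower()
    matches = Counter({q: n for q, n in query_log.items()
                       if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(k)]

# The more often a query is searched, the more prominently it is
# suggested to everyone else, regardless of any one user's intent.
```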

In the Christchurch tragedy, even if we didn’t share or upload the manifesto or the video, the zeal to access this information drove traffic to problematic content and amplified harm for the Muslim community.

Normalisation of hate through seemingly lighthearted humour

Reactionary groups know how to capitalise on memes and other jokey content that degrades and dehumanises.

By using irony to deny the racism in these jokes, these far-right groups connect and immerse new members in an online culture that deliberately uses memetic media to have fun at the expense of others.

The Christchurch terrorist attack showed this connection between online irony and the radicalisation of white men.

However, humour, irony and play – which are protected under platform policies – serve to cloak racism in more mundane and everyday contexts.




Read more:
Racism in a networked world: how groups and individuals spread racist hate online


Just as everyday racism shares discourses and vocabularies with white supremacy, lighthearted racist and sexist jokes are as harmful as online fascist irony.

Humour and satire should not be hiding places for ignorance and bigotry. As digital citizens we should be more careful about what kind of jokes we engage with and laugh at on social media.

What’s harmful and what’s a joke might not be apparent when interpreting content from a limited worldview. Developing empathy for others’ interpretations of the same content is a useful skill that can minimise the amplification of racist ideologies online.

As scholar danah boyd argues:

The goal is to understand the multiple ways of making sense of the world and use that to interpret media.

Effective anti-racism on social media

A common practice in challenging racism on social media is to publicly call it out, and show support for those who are victims of it. But critics of social media’s callout culture contend that these tactics often do not work as effective anti-racism tools, because they are performative rather than genuine advocacy.

An alternative is to channel outrage into more productive forms of anti-racism. For example, you can report hateful online content either individually or through organisations that are already working on these issues, such as The Online Hate Prevention Institute and the Islamophobia Register Australia.

Most major social media platforms struggle to understand how hate articulates in non-US contexts. Reporting content can help platforms understand culturally specific coded words, expressions, and jokes (most of which are mediated through visual media) that moderators might not understand and algorithms can’t identify.

As digital citizens we can work together to deny attention to those that seek to discriminate and inflict harm online.

We can also learn how our everyday interactions might have unintended consequences and actually amplify hate.

However, these ideas do not diminish the responsibility of platforms to protect users, nor do they negate the role of governments to find effective ways to regulate platforms in collaboration and consultation with civil society and industry.

Ariadna Matamoros-Fernández, Lecturer in Digital Media at the School of Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Seven ways the government can make Australians safer – without compromising online privacy



We need a cyber safety equivalent to the Slip! Slop! Slap! campaign to nudge behavioural change in the community.
Shutterstock

Damien Manuel, Deakin University

This is part of a major series called Advancing Australia, in which leading academics examine the key issues facing Australia in the lead-up to the 2019 federal election and beyond. Read the other pieces in the series here.

When it comes to data security, there is an inherent tension between safety and privacy. The government’s job is to balance these priorities with laws that will keep Australians safe, improve the economy and protect personal data from unwarranted surveillance.

This is a delicate line to walk. Recent debate has revolved around whether technology companies should be required to help law enforcement agencies gain access to the encrypted messages of suspected criminals.

While this is undoubtedly an important issue, the enacted legislation – the Telecommunications and Other Legislation Amendment (Assistance and Access) Act – fails on both fronts. Not only is it unlikely to stop criminals, it could make personal communications between everyday people less secure.

Rather than focus on the passage of high-profile legislation that clearly portrays a misunderstanding of the technology in question, the government would do better to invest in a comprehensive cyber security strategy that will actually have an impact.

Achieving the goals set out in the strategy we already have would be a good place to start.




Read more:
The difference between cybersecurity and cybercrime, and why it matters


Poor progress on cyber security

The Turnbull government launched Australia’s first Cyber Security Strategy in April 2016. It promised to dramatically improve the online safety of all Australian families and businesses.

In 2017, the government released the first annual update to report on how well it was doing. On the surface some progress had been made, but a lot of items were incomplete – and the promised linkages to businesses and the community were not working well.

Unfortunately, there was never a second update. Prime ministers were toppled, cabinets were reshuffled and it appears the Morrison government lost interest in truly protecting Australians.

So, where did it all go wrong?

A steady erosion of privacy

Few Australians paid much notice when vested interests hijacked technology law reforms. The amendment of the Copyright Act in 2015 forced internet service providers (ISPs) to block access to sites containing pirated content. Movie studios now had their own version of China’s “Great Firewall” to block and control internet content in Australia.

In 2017, the government implemented its data retention laws, which effectively enabled specific government agencies to spy on law-abiding citizens. The digital trail (metadata) people left through phone calls, SMS messages, emails and internet activity was retained by telecommunications carriers and made accessible to law enforcement.

The public was assured only limited agencies would have access to the data to hunt for terrorists. In 2018, we learned that many more agencies were accessing the data than originally promised.

Enter the Assistance and Access legislation. Australia’s technology sector strongly objected to the bill, but the Morrison government’s consultation process was a whitewash. The government ignored advice on the damage the legislation would do to the developing cyber sector outlined in the Cyber Security Strategy – the very sector the Turnbull government had been counting on to help rebuild the economy in this hyper-connected digital world.




Read more:
What skills does a cybersecurity professional need?


While the government focuses on the hunt for terrorists, it neglects the thousands of Australians who fall victim each year to international cybercrime syndicates and foreign governments.

Australians lose money to cybercrime via scam emails and phone calls designed to harvest passwords, banking credentials and other personal information. Losses from some categories of cybercrime have increased by more than 70% in the last 12 months. The impact of cybercrime on Australian business and individuals is estimated at $7 billion a year.

So, where should government focus its attention?

Seven actions that would make Australia safer

If the next government is serious about protecting Australian businesses and families, here are seven concrete actions it should take immediately upon taking office.

1. Review the Cyber Security Strategy

Work with industry associations, the business and financial sectors, telecommunication providers, cyber startups, state government agencies and all levels of the education sector to develop a plan to protect Australians and businesses. The plan must be comprehensive, collaborative and, most importantly, inclusive. It should be adopted at the federal level and by states and territories.

2. Make Australians a harder target for cybercriminals

The United Kingdom’s National Cyber Security Centre is implementing technical and process controls that help people in the UK fight cybercrime in smart, innovative ways. The UK’s Active Cyber Defence program uses top-secret intelligence to prevent cyber attacks and to detect and block malicious email campaigns used by scammers. It also investigates how people actually use technology, with the aim of implementing behavioural change programs to improve public safety.
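
The article doesn’t detail ACD’s individual measures, but email authentication records such as DMARC are one widely deployed control against spoofed email campaigns. As a minimal sketch (assuming the third-party dnspython package), checking whether a domain publishes a DMARC policy looks like this:

```python
# Minimal sketch: check whether a domain publishes a DMARC policy,
# one of the email-authentication controls used against spoofed
# email campaigns. Requires the third-party dnspython package.
import dns.resolver

def has_dmarc(domain: str) -> bool:
    """Return True if _dmarc.<domain> publishes a DMARC TXT record."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(
        rdata.to_text().strip('"').startswith("v=DMARC1")
        for rdata in answers
    )

if __name__ == "__main__":
    print(has_dmarc("gov.uk"))  # example domain
```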

3. Create a community education campaign

A comprehensive community education program would improve online behaviours and make businesses and families safer. We had the iconic Slip! Slop! Slap! campaign from 1981 to help reduce skin cancer through community education. Where is the equivalent campaign for cyber safety to nudge behavioural change in the community at all levels from kids through to adults?

4. Improve cyber safety education in schools

Build digital literacy into education from primary through to tertiary level so that young Australians understand the consequences of their online behaviours. For example, they should know the risks of sharing personal details and nude selfies online.




Read more:
Cybersecurity of the power grid: A growing challenge


5. Streamline industry certifications

Encourage the adoption of existing industry certifications, and stop special interest groups from introducing more. There are already more than 100 industry certifications. Minimum standards for government staff should be defined, including for managers, technologists and software developers.

The United States Department of Defense introduced minimum industry certifications for people in government who handle data. The Australian government should do the same, picking a number of vendor-agnostic certifications as mandatory in each job category.

6. Work with small and medium businesses

The existing cyber strategy doesn’t do enough to engage with the business sector. Small and medium businesses form a critical part of the larger business supply-chain ecosystem, so the ramifications of a breach could be far-reaching.

The Australian Signals Directorate recommends businesses follow “The Essential Eight” – a list of strategies businesses can adopt to reduce their risk of cyber attack. This is good advice, but it doesn’t address the human side of exploitation, called social engineering, which tricks people into disclosing passwords that protect sensitive or confidential information.

7. Focus on health, legal and tertiary education sectors

The health, legal and tertiary education sectors have a low level of cyber maturity. These are among the top four sectors reporting breaches, according to the Office of the Australian Information Commissioner.

While health sector breaches could lead to personal harm and blackmail, breaches in the legal sector could result in the disclosure of time-sensitive business transactions and personal details. And the tertiary education sector – a powerhouse of intellectual research – is ripe for foreign governments to steal the knowledge underpinning Australia’s future technologies.

A single person doing the wrong thing and making a mistake can cause a major security breach. More than 900,000 people are employed in the Australian health and welfare sector, and the chance of one of these people making a mistake is unfortunately very high.

Damien Manuel, Director, Centre for Cyber Security Research & Innovation (CSRI), Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Online trolling used to be funny, but now the term refers to something far more sinister



The definition of “trolling” has changed a lot over the last 15 years.
Shutterstock

Evita March, Federation University Australia

It seems like internet trolling happens everywhere online these days – and it’s showing no signs of slowing down.

This week, the British press and Kensington Palace officials have called for an end to the merciless online trolling of Duchesses Kate Middleton and Meghan Markle, which reportedly includes racist and sexist content, and even threats.

But what exactly is internet trolling? How do trolls “behave”? Do they intend to harm, or amuse?

To find out how people define trolling, we conducted a survey with 379 participants. The results suggest there is a difference in the way the media, the research community and the general public understand trolling.

If we want to reduce abusive online behaviour, let’s start by getting the definition right.




Read more:
How empathy can make or break a troll


Which of these cases is trolling?

Consider the comments that appear in the image below:


Screenshot

Without providing any definitions, we asked if this was an example of internet trolling. Of participants, 44% said yes, 41% said no and 15% were unsure.

Now consider this next image:


Screenshot

Of participants, 69% said this was an example of internet trolling, 16% said no, and 15% were unsure.

These two images depict very different online behaviour. The first image depicts mischievous and comical behaviour, where the author perhaps intended to amuse the audience. The second image depicts malicious and antisocial behaviour, where the author may have intended to cause harm.

There was more consensus among participants that the second image depicted trolling. That aligns with a more common definition of internet trolling as destructive and disruptive online behaviour that causes harm to others.

But this definition has only really evolved in more recent years. Previously, internet trolling was defined very differently.




Read more:
We researched Russian trolls and figured out exactly how they neutralise certain news


A shifting definition

In 2002, one of the earliest definitions of internet “trolling” described the behaviour as:

luring others online (commonly on discussion forums) into pointless and time-consuming activities.

Trolling often started with a message that was intentionally incorrect, but not overly controversial. By contrast, internet “flaming” described online behaviour with hostile intentions, characterised by profanity, obscenity, and insults that inflict harm to a person or an organisation.

So, modern day definitions of internet trolling seem more consistent with the definition of flaming, rather than the initial definition of trolling.

To highlight this intention to amuse compared to the intention to harm, communication researcher Jonathan Bishop suggested we differentiate between “kudos trolling” to describe trolling for mutual enjoyment and entertainment, and “flame trolling” to describe trolling that is abusive and not intended to be humorous.

How people in our study defined trolling

In our study, which has been accepted to be published in the journal Cyberpsychology, Behavior, and Social Networking, we recruited 379 participants (60% women) to answer an online, anonymous questionnaire where they provided short answer responses to the following questions:

  • how do you define internet trolling?

  • what kind of behaviours constitute internet trolling?

Here are some examples of how participants responded:

Where an individual online verbally attacks another individual with intention of offending the other (female, 27)

People saying intentionally provocative things on social media with the intent of attacking / causing discomfort or offence (female, 26)

Teasing, bullying, joking or making fun of something, someone or a group (male, 29)

Deliberately commenting on a post to elicit a desired response, or to purely gratify oneself by emotionally manipulating another (male, 35)

Based on participant responses, we suggest that internet trolling is now more commonly seen as an intentional, malicious online behaviour, rather than a harmless activity for mutual enjoyment.

A word cloud representing how survey participants described trolling behaviours.

Researchers use ‘trolling’ as a catch-all

Clearly there are discrepancies in the definition of internet trolling, and this is a problem.

Research does not differentiate between kudos trolling and flame trolling. Some members of the public might still view trolling as a kudos behaviour. For example, one participant in our study said:

Depends which definition you mean. The common definition now, especially as used by the media and within academia, is essentially just a synonym to “asshole”. The better, and classic, definition is someone who speaks from outside the shared paradigm of a community in order to disrupt presuppositions and try to trigger critical thought and awareness (male, 41)

Not only does the definition of trolling differ from researcher to researcher, but there can also be discrepancy between the researcher and the public.

As a term, internet trolling has deviated significantly from its early 2002 definition to become a catch-all for antisocial online behaviours. The lack of a uniform definition leaves all research on trolling open to validity concerns, which could leave the behaviour largely unchecked.




Read more:
Our experiments taught us why people troll


We need to agree on the terminology

We propose replacing the catch-all term of trolling with “cyberabuse”.

Cyberbullying, cyberhate and cyberaggression are all different online behaviours with different definitions, but they are often referred to uniformly as “trolling”.

It is time to move away from the term trolling to describe these serious instances of cyberabuse. While it may have been empowering for the public to picture these internet “trolls” as ugly creatures living under the bridge, this imagery may have begun to downplay the seriousness of their online behaviour.

Continuing to use the term trolling, a term that initially described a behaviour that was not intended to harm, could have serious consequences for managing and preventing the behaviour.

Evita March, Senior Lecturer in Psychology, Federation University Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Not another online petition! But here’s why you should think before deleting it


It’s a lazy form of activism, but that doesn’t mean signing online petitions is useless.
from shutterstock.com

Sky Croeser, Curtin University

Online petitions are often seen as a form of “slacktivism” – small acts that don’t require much commitment and are more about helping us feel good than effective activism. But the impacts of online petitions can stretch beyond immediate results.

Whether they work to create legislative change, or just raise awareness of an issue, there’s some merit to signing them. Even if nothing happens immediately, petitions are one of many ways we can help build long-term change.

A history of petitions

Petitions have a long history in Western politics. They developed centuries ago as a way for people to have their voices heard in government or ask for legislative change. But they’ve also been seen as largely ineffective in this respect. One study found only three out of 2,589 petitions submitted to the Australian House of Representatives between 1999 and 2007 even received a ministerial response.

Before the end of the second world war, fewer than 16 petitions a year were presented to Australia’s House of Representatives. The new political landscape of the early 1970s saw that number leap into the thousands.

In the 2000s, the House received around 300 petitions per year, and even with online tools, it’s still nowhere near what it was in the 70s. According to the parliamentary website, an average of 121 petitions have been presented each year since 2008.




Read more:
Changing the world one online petition at a time: how social activism went mainstream


Although petitions rarely achieve direct change, they are an important part of the democratic process. Many governments have attempted to facilitate petitioning online. For example, the Australian parliamentary website helps citizens through the process of developing and submitting petitions. This is one way the internet has made creating and submitting petitions easier.

There are also independent sites that campaigners can use, such as Change.org and Avaaz. It can take under an hour to go from an idea to an online petition that’s ready to share on social media.

As well as petitions being a way for citizens to make requests of their governments, they are now used more broadly. Many petitions reach a global audience – they might call for change from companies, international institutions, or even society as a whole.

What makes for an effective petition?

The simplest way to gauge if a petition has been successful is to look at whether the requests made were granted. The front page of Change.org displays recent “victories”. These include a call to axe the so-called “tampon tax” (the GST on menstrual products), which states and territories agreed to remove from January 2019.

Change.org also boasts the petition for gender equality on cereal boxes as a victory, after Kellogg’s issued a statement saying it would update its packaging in 2019 to include images of both males and females. That petition had only 600 signatures, compared with the 75,000 against the tampon tax.

In 2012, a coalition of organisations mobilised a campaign against two proposed US laws that many saw as likely to restrict internet freedom. A circulating petition gathered 4.5 million signatures, which helped put pressure on US representatives not to vote for the bills.

However, all of these petitions were part of larger efforts. There have been campaigns to remove the tax on menstrual products since it was first imposed, there’s a broad movement for more equal gender representation, and there’s significant global activism against online censorship. None of these petitions can claim sole victory. But each may have pushed its cause over the line, or simply added weight to the groundswell of existing support.

Online petitions can have the obvious impact of changing the very thing they’re campaigning for. However, the type of petition also makes a difference to what change it can achieve.

Choosing a petition worth signing

Knowing a few characteristics of successful petitions can be useful when you’re deciding whether it’s worth your time to sign and share something. Firstly, there should be a target and specific call for action.

These can take many forms: petitions might request a politician vote “yes” on a specific law, demand changes to working conditions at a company, or even ask an advocacy organisation to begin campaigning around a new issue. Vague targets and unclear goals aren’t well suited to petitions. Calls for “more gender equality in society” or “better rights for pets”, for example, are unlikely to achieve success.

Secondly, the goal needs to be realistic. This is so it’s possible to succeed and so supporters feel a sense of optimism. Petitioning for a significant change in a foreign government’s policy – for example, a call from world citizens for better gun control in the US – is unlikely to lead to results.




Read more:
Why #metoo is an impoverished form of feminist activism, unlikely to spark social change


It’s easier to get politicians to change their vote on a single, relatively minor issue than to achieve sweeping legal changes. It’s also more likely a company will change its packaging than completely overhaul its approach to production.

Thirdly, and perhaps most importantly, a petition’s chance of success depends largely on the strength of the community supporting it. Petitions rarely work on their own. In her book Twitter and Tear Gas, Turkish writer Zeynep Tufekci argues the internet allows us to organise action far more quickly than in the past, outpacing the hard but essential work of community organising.

We can get thousands of people signing a petition and shouting in the streets well before we build coalitions and think about long-term strategies. But the most effective petitions will work in combination with other forms of activism.

Change happens gradually

Even petitions that don’t achieve their stated aims or minor goals can play a role in activist efforts. Sharing petitions is one way to bring attention to issues that might otherwise remain off the agenda.

Most online petitions include the option of allowing further updates and contact. Organisations often use a petition to build momentum around an ongoing campaign. Creating, or even signing, online petitions can be a form of micro-activism that helps people start thinking of themselves as capable of creating change.

Signing petitions – and seeing that others have also done so – can help us feel we are part of a collective, working with others to shape our world.

It’s reasonable to think carefully about what we put our names to online, but we shouldn’t be too quick to dismiss online petitions as ineffective, or “slack”. Instead, we should think of them as one example of the diverse tactics that help build change over time.

Sky Croeser, Lecturer, School of Media, Creative Arts and Social Inquiry, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Racism in a networked world: how groups and individuals spread racist hate online



We could see even sharper divisions in society in the future if support for racism spreads online.
Markus Spiske/Unsplash

Ana-Maria Bliuc, Western Sydney University; Andrew Jakubowicz, University of Technology Sydney, and Kevin Dunn, Western Sydney University

Living in a networked world has many advantages. We get our news online almost as soon as it happens, we stay in touch with friends via social media, and we advance our careers through online professional networks.

But there is a darker side to the internet that sees far-right groups exploit these unique features to spread divisive ideas, racial hate and mistrust. Scholars of racism refer to this type of racist communication online as “cyber-racism”.

Even the creators of the internet are aware they may have unleashed a technology that is causing a lot of harm. Since 2017, the inventor of the World Wide Web, Tim Berners-Lee, has focused many of his comments about the manipulation of the internet on the spread of hate speech, saying that:

Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken.

Our team conducted a systematic review of ten years of cyber-racism research to learn how different types of communicators use the internet to spread their views.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Racist groups behave differently to individuals

We found that the internet is indeed a powerful tool used to influence and reinforce divisive ideas. And it’s not only organised racist groups that take advantage of online communication; unaffiliated individuals do it too.

But the way groups and individuals use the internet differs in several important ways. Racist groups are active on different communication channels to individuals, and they have different goals and strategies they use to achieve them. The effects of their communication are also distinctive.

Individuals mostly engage in cyber-racism to hurt others, and to confirm their racist views by connecting with like-minded people (a form of confirmation bias). Their preferred communication channels tend to be blogs, forums, news commentary websites, gaming environments and chat rooms.

Channels, goals and strategies used by unaffiliated people when communicating cyber-racism.

Strategies they use include denying or minimising the issue of racism, denigrating “non-whites”, and reframing the meaning of current news stories to support their views.

Groups, on the other hand, prefer to communicate via their own websites. They are also more strategic in what they seek to achieve through online communication. They use websites to gather support for their group and their views through racist propaganda.

Racist groups manipulate information and use clever rhetoric to help build a sense of a broader “white” identity, which often goes beyond national borders. They argue that conflict between different ethnicities is unavoidable, and that what most would view as racism is in fact a natural response to the “oppression of white people”.

Channels, goals and strategies used by groups when communicating cyber-racism.




Read more:
How the alt-right uses milk to promote white supremacy


Collective cyber-racism has the main effect of undermining the social cohesion of modern multicultural societies. It creates division, mistrust and intergroup conflict.

Meanwhile, individual cyber-racism seems to have a more direct effect, negatively affecting the wellbeing of targets. It also contributes to maintaining a hostile racial climate, which may further (indirectly) affect targets’ wellbeing.

What they have in common

Despite their differences, groups and individuals both share a high level of sophistication in how they communicate racism online. Our review uncovered the disturbingly creative ways in which new technologies are exploited.

For example, racist groups make themselves attractive to young people by providing interactive games and links to music videos on their websites. And both groups and individuals are highly skilled at manipulating their public image via various narrative strategies, such as humour and the interpretation of current news to fit with their arguments.




Read more:
Race, cyberbullying and intimate partner violence


A worrying trend

Our findings suggest that if these online strategies are effective, we could see even sharper divisions in society as the mobilisation of support for racism and far-right movements spreads online.

There is also evidence that currently unaffiliated supporters of racism could derive strength through online communication. These individuals might use online channels to validate their beliefs and achieve a sense of belonging in virtual spaces where racist hosts provide an uncontested and hate-supporting community.

This is a worrying trend. We have now seen several examples of violent action perpetrated offline by isolated individuals who radicalise into white supremacist movements – for example, in the case of Anders Breivik in Norway, and more recently of Robert Gregory Bowers, who was the perpetrator of the Pittsburgh synagogue shooting.

In Australia, unlike most other liberal democracies, there are effectively no government strategies that seek to reduce this avenue for the spread of racism, despite many Australians expressing a desire that this be done.

Ana-Maria Bliuc, Senior Lecturer in Social Psychology, Western Sydney University; Andrew Jakubowicz, Emeritus Professor of Sociology, University of Technology Sydney, and Kevin Dunn, Dean of the School of Social Science and Psychology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trolls, fanboys and lurkers: understanding online commenting culture shows us how to improve it



The way user interfaces are designed can impact the kind of community that gathers.
Shutterstock

Renee Barnes, University of the Sunshine Coast

Do you call that a haircut? I hope you didn’t pay for it.

Oh please this is rubbish, you’re a disgrace to yourself and your profession.

These are just two examples of comments that have followed articles I have written in my career. While they may seem benign compared with the sort of violent and vulgar comments that are synonymous with cyberbullying, they are examples of the uncivil and antisocial behaviour that plagues the internet.

If these comments were directed at me in any of my interactions in everyday life – when buying a coffee or at my monthly book club – they would be incredibly hurtful and certainly not inconsequential.

Drawing on my own research, as well as that of researchers in other fields, my new book “Uncovering Online Commenting Culture: Trolls, Fanboys and Lurkers” attempts to help us understand online behaviours, and outlines productive steps we can all take towards creating safer and kinder online interactions.




Read more:
Rude comments online are a reality we can’t get away from


Steps we all can take

Online abuse is a social problem that just happens to be powered by technology. Solutions are needed that not only defuse the internet’s power to amplify abuse, but also encourage crucial shifts in social norms and values within online communities.

Recognise that it’s a community

The first step is to ensure we view our online interactions as an act of participation in a community. What takes place online will then begin to line up with our offline interactions.

If any of the cruel comments that often form part of online discussion were said to you in a restaurant, you would expect witnesses around you to support you. We must have the same expectations online.

Know our audience

We learn to socialise offline based on visual and verbal cues given by the people with whom we interact. When we move social interactions to an online space where those cues are removed or obscured, a fundamental component of how we moderate our own behaviour is also eliminated. Without these social cues, it’s difficult to determine whether content is appropriate.

Research has shown that most social media users imagine a very different audience to the actual audience reading their updates. We often imagine our audience as people we associate with regularly offline. However, a political statement that close family and friends might support could offend former colleagues in our broader online network.

Understand our own behaviour

Emotion plays a role in fuelling online behaviour – emotive comments can inspire further emotive comments in an ongoing feedback loop. Aggression can thus incite aggression in others, but it can also establish a behavioural norm within the community that aggression is acceptable.




Read more:
How empathy can make or break a troll


Understanding our online behaviour can help us take an active role in shaping the norms and values of our online communities by demonstrating appropriate behaviour.

It can also inform education initiatives for our youngest online users. We must teach them to remain conscious of the disjuncture between our imagined audience and the actual audience, thereby ingraining productive social norms for generations to come. Disturbingly, almost 70% of those aged between 18 and 29 have experienced some form of online harassment, compared with one-third of those aged 30 and older.

What organisations and institutions can do

That is not to say that we should absolve the institutions that profit from our online interactions. Social networks such as Facebook and Twitter also have a role to play.

User interface design

The design of user interfaces affects how easily we can interact, the types of individuals who comment, and how we behave.

Drawing on psychological research, we can link particular personality traits with antisocial behaviour online. This is significant because simple changes to the interfaces we use to communicate can influence which personality types will be inclined to comment.

Using interface design to encourage participation from those who will leave positive comments, and creating barriers for those inclined to leave abusive ones, is one step that online platforms can take to minimise harmful behaviours.

For example, those who are highly agreeable prefer anonymity when communicating online. Therefore, eliminating anonymity on websites (an often touted response to hostile behaviour) could discourage those agreeable individuals who would leave more positive comments.

Moderation policies

Conscientious individuals are linked to more pro-social comments. They prefer high levels of moderation, and systems where quality comments are highlighted or ranked by other users.
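
The research doesn’t prescribe a particular ranking formula, but one well-known approach to surfacing quality comments, popularised by Reddit’s “best” comment sort, scores each comment by the lower bound of the Wilson confidence interval on its upvote fraction, so a couple of early upvotes can’t outrank a consistently well-rated comment. A minimal sketch:

```python
from math import sqrt

def wilson_lower_bound(upvotes: int, total_votes: int,
                       z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for the upvote fraction.

    New comments with few votes get a conservative (low) score until
    enough votes accumulate, which stops a handful of early upvotes
    from outranking comments the community has consistently endorsed.
    """
    if total_votes == 0:
        return 0.0
    p = upvotes / total_votes
    centre = p + z * z / (2 * total_votes)
    margin = z * sqrt((p * (1 - p) + z * z / (4 * total_votes))
                      / total_votes)
    return (centre - margin) / (1 + z * z / total_votes)

# 8 of 10 upvotes ranks above 1 of 1, despite the lower raw fraction:
print(wilson_lower_bound(8, 10) > wilson_lower_bound(1, 1))  # True
```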

Riot Games, publisher of the notorious multiplayer game League of Legends, has had great success in mitigating offensive behaviour by putting measures in place to promote the gaming community’s shared values. This included a tribunal of players who could determine punishment for people involved in uncivilised behaviour.

Analytics and reporting

Analytical tools, visible data on who visits a site, and a real-time guide to who is reading comments can help us configure a more accurate imagining of our audience. This could help eliminate the risk of unintentional offence.

Providing clear processes for reporting inappropriate behaviour, and acting quickly to punish it, will also encourage us to take an active role in cleaning up our online communities.




Read more:
How we can keep our relationships during elections: don’t talk politics on social media


We can and must expect more of our online interactions. Our behaviour and how we respond to the behaviour of others within these communities will contribute to the shared norms and values of an online community.

However, there are institutional factors that can affect the behaviours displayed. It is only through a combination of both personal and institutional responses to antisocial behaviour that we will create more inclusive and harmonious online communities.

Renee Barnes, Senior Lecturer, Journalism, University of the Sunshine Coast

This article was originally published on The Conversation. Read the original article.

Online conspiracy theorists are more diverse (and ordinary) than most assume



Fox Mulder (David Duchovny) in The X-Files is fond of joining seemingly unrelated dots to create a conspiracy theory – but in reality, the picture is more nuanced.

Colin Klein, Australian National University; Peter Clutton, Australian National University, and Vince Polito, Macquarie University

Conspiracy theories are known for connecting apparently unrelated events. Consider the X-Files’ Fox Mulder holed up in his office, frantically joining seemingly random dots. Or the American radio host Alex Jones connecting leaked Clinton emails and fuzzy rover pictures to conclude that NASA is running a child slave colony on Mars.

Cognitive psychologists have often claimed that conspiracy theorists possess a “monological” belief system, in which belief in one conspiracy leads to belief in others. Eventually, they explain every significant event, however unrelated, through the same conspiratorial “logic”.

On such a view, conspiracy theorists are fundamentally irrational, perhaps even pathologically so. But is this an oversimplification?

So-called “big data” approaches to psychology can give a unique perspective on these questions. By using large datasets gathered from social media websites, one can look at people interacting in everyday settings. Importantly, these approaches can capture everybody, not just the most vocal members of a community.

In a recent paper we used online comments to examine individuals who are interested in these types of ideas. We examined a complete set of comments over eight years from the conspiracy forum of reddit.com.

Our dataset included 2.2 million comments from roughly 130,000 distinct user names. Our analyses used topic modelling, a type of linguistic analysis that tries to find common themes across a large collection of documents.
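
To make the technique concrete, here is a minimal topic-modelling sketch using scikit-learn’s latent Dirichlet allocation on a toy corpus. It illustrates the method only; it is not the study’s actual pipeline:

```python
# Minimal topic-modelling sketch with scikit-learn's LDA. The corpus
# and topic count are toy stand-ins, not the study's pipeline (which
# covered 2.2 million comments and identified 12 subgroups).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "the moon landing footage was staged in a studio",
    "nasa staged the moon landing, wake up",
    "the moderators keep deleting my posts on this forum",
    "mods and admins censor anything critical of this forum",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words per topic - the "common themes" across documents.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"topic {i}: {', '.join(top)}")
```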

We were able to identify 12 distinct subgroups of individuals who used language in different ways and who varied widely in their interests and their posting habits. What they talked about strongly suggested that they held different beliefs and attitudes about a range of conspiracies.

Eleven of these had consistent enough interests to be readily interpretable. We assigned each of them a name and created aggregate sample comments, as shown in this diagram.



We found that there were posters who fit the “monological” pattern (we dubbed them True Believers), writing at length on a wide variety of different topics. However, they were only the tip of the iceberg. Most posters had more specific interests.

Indeed, many seemed to be attracted as much to the social aspects of discussing conspiracies as the content of specific conspiracies. The group we called the Meta-redditors, for example, was most notable for their discussion of other forums on Reddit and complaints about moderation policies.

Similarly, there were subgroups (the Downtrodden) who appeared to be communicating about conspiracy theories as a way of expressing general frustration with authority.

Some were suspicious of US foreign policy (Anti-imperialists). Others may be using conspiracy topics as a way to express racist or otherwise socially unacceptable ideas (Anti-semites). There was even a distinct subgroup who appeared to be posting primarily to debunk conspiracy beliefs (Sceptics).

(An interactive chart is available at https://datawrapper.dwcdn.net/H6Iwh/3/)

This suggests that individuals involved with conspiracy theories may have quite different motivations, beliefs and attitudes. It may not be useful to consider them as a homogeneous group.

What does all this mean for the study of conspiracy theories? It’s complicated.

On the one hand, it has long been known that online conspiracy theorising can have a variety of negative social effects, from political disengagement to actual violence. These are effects we should aim to mitigate.

On the other hand, our study also reveals something more surprising. The vast majority of posters were no more active in the conspiracy forum than they were on other reddit forums. This suggests that, for most individuals, conspiracy theories are not an all-encompassing obsession that overrides other interests, but just one interest among many. Even the most obsessed posters also spend a lot of time discussing Star Wars and trading cute cat photos on other parts of the site.

As noted, many members of online forums appear to use conspiracy theories to express legitimate doubts about structures of power, even if the details may appear striking to us. Reddit’s conspiracy forum links on its front page to a “list of confirmed conspiracies”, many of which are historically accurate. In the 1960s, for instance, the US FBI did infiltrate domestic protest groups to disrupt and discredit them. The public health service in Alabama did conduct long-term medical experiments on African American men.

Hence, we think that conspiracy theorising may be best understood as a symptom of a breakdown of trust in institutions like government and the media. Rebuilding that trust is, alas, a difficult proposal. However, our work suggests that recognising the varied and complex motivations and attitudes of conspiracy believers is an important step forward.

Colin Klein, Senior Lecturer in Philosophy, Australian National University; Peter Clutton, Australian National University, and Vince Polito, Postdoctoral Research Fellow in Cognitive Science, Macquarie University

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



Many social media users have been shocked to learn the extent of their digital footprint.
Shutterstock

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.
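
If you want to see what your own archive contains, a rough approach is to scan the downloaded file for anything call- or message-related. The sketch below makes no assumptions about Facebook’s exact file layout, which varies over time; the archive name is hypothetical:

```python
# Sketch: list what a downloaded data archive actually contains.
# Export layouts vary by platform and over time; nothing here
# assumes Facebook's exact file names.
import zipfile

def summarise_export(path: str,
                     keywords=("call", "sms", "message")) -> None:
    """Print archive entries whose names hint at call or message data."""
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if any(k in name.lower() for k in keywords):
                print(name)

summarise_export("facebook-export.zip")  # hypothetical file name
```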


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ emails when promised free pizza.

Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk in that once the data is taken out of Facebook, it becomes even easier to copy and distribute.

This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of data about our friends via back-door methods is more common than we might think. The back door opens when you press “accept” on a permissions request that gives an app access to your contacts during installation.

WhatsApp might have your contact information even if you aren’t a registered user.
Screenshot taken 26 March 2018

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, each of whom has 234 friends, and so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
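
The arithmetic behind those figures is worth spelling out; a back-of-the-envelope check in Python:

```python
# Back-of-the-envelope arithmetic using the figures cited above.
installers = 270_000   # users who granted the app permission
avg_friends = 234      # average Australian Facebook friend count

# Ignoring overlap between friend lists, the theoretical reach is:
print(installers * avg_friends)   # 63,180,000 profiles

# Even with substantial overlap, the reported ~50 million harvested
# profiles requires only about 185 unique friends per installer:
print(50_000_000 / installers)    # ~185.2
```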

Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data daily without your friends knowing, and 70% of apps share it with third parties.




Read more:
7 in 10 smartphone apps share your data with third-party services


We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give all the data on their phone, 95% said yes. But when they focused on just contacts, this decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away your friend’s data on WhatsApp without their permission was wrong.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.

New online tool can predict your melanoma risk



People who are unable to tan and who have moles on their skin are among those at heightened risk of developing melanoma.
from shutterstock.com

Phoebe Roth, The Conversation

Australians over the age of 40 can now calculate their risk of developing melanoma with a new online test. The risk predictor tool estimates a person’s melanoma risk over the next 3.5 years based on seven risk factors.

Melanoma is the third most common cancer in Australia and the most dangerous form of skin cancer.

The seven risk factors the tool uses are age, sex, ability to tan, number of moles at age 21, number of skin lesions treated, hair colour and sunscreen use.

The tool was developed by researchers at the QIMR Berghofer Medical Research Institute. Lead researcher Professor David Whiteman explained he and his team determined the seven risk factors by following more than 40,000 Queenslanders since 2010, and analysing their data.




Read more:
Interactive body map: what really gives you cancer?


The seven risk factors are each weighted differently. The tool’s algorithm uses these to assign a person into one of five risk categories: very much below average, below average, average, above average, and very much above average.
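
QIMR Berghofer’s actual weights are not published in this article, so the sketch below is purely illustrative of the mechanics: a weighted score over the seven factors, binned into the five categories. Every name and number in it is invented:

```python
# Purely illustrative: the real tool's weights were estimated from
# data on 40,000+ Queenslanders and are not reproduced here.
ILLUSTRATIVE_WEIGHTS = {
    "age_over_40": 0.05,        # per year of age over 40
    "male": 0.8,                # 1 if male, else 0
    "cannot_tan": 1.2,          # 1 if unable to tan, else 0
    "moles_at_21": 0.04,        # per mole at age 21
    "lesions_treated": 0.3,     # per skin lesion treated
    "fair_or_red_hair": 0.9,    # 1 if fair/red hair, else 0
    "rarely_uses_sunscreen": 0.5,
}

THRESHOLDS = [  # (upper bound of score, category label)
    (1.0, "very much below average"),
    (2.0, "below average"),
    (3.0, "average"),
    (4.5, "above average"),
    (float("inf"), "very much above average"),
]

def risk_category(factors: dict[str, float]) -> str:
    """Map a weighted sum of the seven risk factors to a category."""
    score = sum(ILLUSTRATIVE_WEIGHTS[name] * value
                for name, value in factors.items())
    return next(label for bound, label in THRESHOLDS if score < bound)
```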

“This online risk predictor will help identify those with the highest likelihood of developing melanoma so that they and their doctors can decide on how to best manage their risk,” Professor Whiteman said.

After completing the short test, users will be offered advice, such as whether they should see their doctor. A result of “above average” or “very much above average” comes with a recommendation to visit the doctor to explore options for managing melanoma risk.

But Professor Whiteman cautions that people with a below average risk shouldn’t become complacent.

“Even if you are at below average risk, it doesn’t mean you are at low risk – just lower than the average Australian,” he said.




Read more:
Explainer: how does sunscreen work, what is SPF and can I still tan with it on?


An estimated one in 17 Australians will be diagnosed with melanoma by their 85th birthday.

The test is targeted at people aged 40 and above, as this was the age range of the cohort studied.

However, melanoma remains the most common cancer in Australians under 40.

Professor Whiteman said that the test may be useful for those under 40, but it may not be as accurate, as that wasn’t the demographic it was based on.

But he added complete accuracy couldn’t be guaranteed even for the target demographic.

“I don’t think it’s possible that we’ll ever get to 100%. I think that’s a holy grail that we aspire to, but in reality, cancers are very complex diseases and their causality includes many, many, factors, including unfortunately some random factors.”

The prognosis for melanoma patients is significantly better when it is detected earlier. The University of Queensland’s Professor of Dermatology H. Peter Soyer explained that the five-year survival rate for melanoma is 90%. But this figure jumps to 98% for patients diagnosed at the very early stages.

“At the end of the day, everything that raises awareness for melanomas and for skin cancer is beneficial,” Professor Soyer said.

Dr Hassan Vally, a senior lecturer in epidemiology at La Trobe University, said the way risk is often communicated is hard for people to grasp. But he said this model would provide people with a tangible measure of their risk of disease, and point them towards what they may be able to do to reduce it.

“Everything comes back to how people perceive their risk, and how can they make sense of it.

“If it makes people more aware of their risks of disease that’s a good thing, and if that awareness leads to people taking action and improving their health then that’s great.”

Phoebe Roth, Editorial Intern, The Conversation

This article was originally published on The Conversation. Read the original article.