Online trolling used to be funny, but now the term refers to something far more sinister



The definition of “trolling” has changed a lot over the last 15 years.
Shutterstock

Evita March, Federation University Australia

It seems like internet trolling happens everywhere online these days – and it’s showing no signs of slowing down.

This week, the British press and Kensington Palace officials have called for an end to the merciless online trolling of Duchesses Kate Middleton and Meghan Markle, which reportedly includes racist and sexist content, and even threats.

But what exactly is internet trolling? How do trolls “behave”? Do they intend to harm, or amuse?

To find out how people define trolling, we conducted a survey with 379 participants. The results suggest there is a difference in the way the media, the research community and the general public understand trolling.

If we want to reduce abusive online behaviour, let’s start by getting the definition right.




Read more:
How empathy can make or break a troll


Which of these cases is trolling?

Consider the comments that appear in the image below:


Screenshot

Without providing any definitions, we asked if this was an example of internet trolling. Of participants, 44% said yes, 41% said no and 15% were unsure.

Now consider this next image:


Screenshot

Of participants, 69% said this was an example of internet trolling, 16% said no, and 15% were unsure.

These two images depict very different online behaviour. The first image depicts mischievous and comical behaviour, where the author perhaps intended to amuse the audience. The second image depicts malicious and antisocial behaviour, where the author may have intended to cause harm.

There was more consensus among participants that the second image depicted trolling. That aligns with a more common definition of internet trolling as destructive and disruptive online behaviour that causes harm to others.

But this definition has only really evolved in more recent years. Previously, internet trolling was defined very differently.




Read more:
We researched Russian trolls and figured out exactly how they neutralise certain news


A shifting definition

In 2002, one of the earliest definitions of internet “trolling” described the behaviour as:

luring others online (commonly on discussion forums) into pointless and time-consuming activities.

Trolling often started with a message that was intentionally incorrect, but not overly controversial. By contrast, internet “flaming” described online behaviour with hostile intentions, characterised by profanity, obscenity, and insults that inflict harm to a person or an organisation.

So, modern-day definitions of internet trolling are more consistent with the definition of flaming than with the initial definition of trolling.

To highlight this intention to amuse compared to the intention to harm, communication researcher Jonathan Bishop suggested we differentiate between “kudos trolling” to describe trolling for mutual enjoyment and entertainment, and “flame trolling” to describe trolling that is abusive and not intended to be humorous.

How people in our study defined trolling

In our study, which has been accepted for publication in the journal Cyberpsychology, Behavior, and Social Networking, we recruited 379 participants (60% women) to complete an anonymous online questionnaire, providing short-answer responses to the following questions:

  • how do you define internet trolling?

  • what kind of behaviours constitute internet trolling?

Here are some examples of how participants responded:

Where an individual online verbally attacks another individual with intention of offending the other (female, 27)

People saying intentionally provocative things on social media with the intent of attacking / causing discomfort or offence (female, 26)

Teasing, bullying, joking or making fun of something, someone or a group (male, 29)

Deliberately commenting on a post to elicit a desired response, or to purely gratify oneself by emotionally manipulating another (male, 35)

Based on participant responses, we suggest that internet trolling is now more commonly seen as an intentional, malicious online behaviour, rather than a harmless activity for mutual enjoyment.

A word cloud representing how survey participants described trolling behaviours.
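A word cloud like this simply scales each word by how often it appears across responses. As a rough sketch of that underlying computation (the toy stopword list is our own simplification; real analyses use a fuller one), the frequencies can be derived from the four participant responses quoted above:

```python
import re
from collections import Counter

# The four participant responses quoted above.
responses = [
    "Where an individual online verbally attacks another individual "
    "with intention of offending the other",
    "People saying intentionally provocative things on social media "
    "with the intent of attacking / causing discomfort or offence",
    "Teasing, bullying, joking or making fun of something, someone or a group",
    "Deliberately commenting on a post to elicit a desired response, or to "
    "purely gratify oneself by emotionally manipulating another",
]

# A toy stopword list -- our assumption, for illustration only.
STOPWORDS = {"a", "an", "the", "of", "or", "to", "on", "with", "by",
             "where", "another"}

def word_frequencies(texts):
    """Count content words across all texts -- the sizes behind a word cloud."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

freqs = word_frequencies(responses)
```

In these four responses, "individual" is the only content word that appears more than once, so it would render largest in the cloud.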

Researchers use ‘trolling’ as a catch-all

Clearly there are discrepancies in the definition of internet trolling, and this is a problem.

Research does not differentiate between kudos trolling and flame trolling. Some members of the public might still view trolling as a kudos behaviour. For example, one participant in our study said:

Depends which definition you mean. The common definition now, especially as used by the media and within academia, is essentially just a synonym to “asshole”. The better, and classic, definition is someone who speaks from outside the shared paradigm of a community in order to disrupt presuppositions and try to trigger critical thought and awareness (male, 41)

Not only does the definition of trolling differ from researcher to researcher, but there can also be discrepancy between the researcher and the public.

As a term, internet trolling has deviated significantly from its early, 2002 definition and become a catch-all for all antisocial online behaviours. The lack of a uniform definition leaves research on trolling open to validity concerns, which could leave the behaviour largely unchecked.




Read more:
Our experiments taught us why people troll


We need to agree on the terminology

We propose replacing the catch-all term of trolling with “cyberabuse”.

Cyberbullying, cyberhate and cyberaggression are all different online behaviours with different definitions, but they are often referred to uniformly as “trolling”.

It is time to move away from the term trolling to describe these serious instances of cyberabuse. While it may have been empowering for the public to picture these internet “trolls” as ugly creatures living under the bridge, this imagery may have begun to downplay the seriousness of their online behaviour.

Continuing to use trolling, a term that initially described behaviour not intended to harm, could have serious consequences for managing and preventing the behaviour. The Conversation

Evita March, Senior Lecturer in Psychology, Federation University Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Not another online petition! But here’s why you should think before deleting it


It’s a lazy form of activism, but that doesn’t mean signing online petitions is useless.
from shutterstock.com

Sky Croeser, Curtin University

Online petitions are often seen as a form of “slacktivism” – small acts that don’t require much commitment and are more about helping us feel good than effective activism. But the impacts of online petitions can stretch beyond immediate results.

Whether they work to create legislative change, or just raise awareness of an issue, there’s some merit to signing them. Even if nothing happens immediately, petitions are one of many ways we can help build long-term change.

A history of petitions

Petitions have a long history in Western politics. They developed centuries ago as a way for people to have their voices heard in government or ask for legislative change. But they’ve also been seen as largely ineffective in this respect. One study found only three out of 2,589 petitions submitted to the Australian House of Representatives between 1999 and 2007 even received a ministerial response.

Before the end of the second world war, fewer than 16 petitions a year were presented to Australia’s House of Representatives. The new political landscape of the early 1970s saw that number leap into the thousands.

In the 2000s, the House received around 300 petitions per year, and even with online tools, it’s still nowhere near what it was in the 70s. According to the parliamentary website, an average of 121 petitions have been presented each year since 2008.




Read more:
Changing the world one online petition at a time: how social activism went mainstream


Although petitions rarely achieve direct change, they are an important part of the democratic process. Many governments have attempted to facilitate petitioning online. For example, the Australian parliamentary website helps citizens through the process of developing and submitting petitions. This is one way the internet has made creating and submitting petitions easier.

There are also independent sites that campaigners can use, such as Change.org and Avaaz. It can take under an hour to go from an idea to an online petition that’s ready to share on social media.

As well as petitions being a way for citizens to make requests of their governments, they are now used more broadly. Many petitions reach a global audience – they might call for change from companies, international institutions, or even society as a whole.

What makes for an effective petition?

The simplest way to gauge if a petition has been successful is to look at whether the requests made were granted. The front page of Change.org displays recent “victories”. These include a call to axe the so-called “tampon tax” (the GST on menstrual products), which states and territories agreed to remove from January 2019.

Change.org also boasts the petition for gender equality on cereal boxes as a victory, after Kellogg's issued a statement saying it would update its packaging in 2019 to include images of males and females. This petition had only 600 signatures, compared with the 75,000 against the tampon tax.

In 2012, a coalition of organisations mobilised a campaign against two proposed US laws that many saw as likely to restrict internet freedom. A circulating petition gathered 4.5 million signatures, which helped put pressure on US representatives not to vote for the bills.

However, all of these petitions were part of larger efforts. There have been campaigns to remove the tax on menstrual products since it was first imposed, there’s a broad movement for more equal gender representation, and there’s significant global activism against online censorship. None of these petitions can claim sole victory. But they may have pushed these campaigns over the line, or simply added weight to the groundswell of existing support.

Online petitions can have the obvious impact of changing the very thing they’re campaigning for. However, the type of petition also makes a difference to what change it can achieve.

Choosing a petition worth signing

Knowing a few characteristics of successful petitions can be useful when you’re deciding whether it’s worth your time to sign and share something. Firstly, there should be a target and specific call for action.

These can take many forms: petitions might request a politician vote “yes” on a specific law, demand changes to working conditions at a company, or even ask an advocacy organisation to begin campaigning around a new issue. Vague targets and unclear goals aren’t well suited to petitions. Calls for “more gender equality in society” or “better rights for pets”, for example, are unlikely to achieve success.

Secondly, the goal needs to be realistic. This is so it’s possible to succeed and so supporters feel a sense of optimism. Petitioning for a significant change in a foreign government’s policy – for example, a call from world citizens for better gun control in the US – is unlikely to lead to results.




Read more:
Why #metoo is an impoverished form of feminist activism, unlikely to spark social change


It’s easier to get politicians to change their vote on a single, relatively minor issue than to achieve sweeping legal changes. It’s also more likely a company will change its packaging than completely overhaul its approach to production.

Thirdly, and perhaps most importantly, a petition’s chance of success depends largely on the strength of community supporting it. Petitions rarely work on their own. In her book Twitter and Teargas, Turkish writer Zeynep Tufekci argues the internet allows us to organise action far more quickly than in the past, outpacing the hard but essential work of community organising.

We can get thousands of people signing a petition and shouting in the streets well before we build coalitions and think about long-term strategies. But the most effective petitions will work in combination with other forms of activism.

Change happens gradually

Even petitions that don’t achieve their stated aims or minor goals can play a role in activist efforts. Sharing petitions is one way to bring attention to issues that might otherwise remain off the agenda.

Most online petitions include the option of allowing further updates and contact. Organisations often use a petition to build momentum around an ongoing campaign. Creating, or even signing, online petitions can be a form of micro-activism that helps people start thinking of themselves as capable of creating change.

Signing petitions – and seeing that others have also done so – can help us feel we are part of a collective, working with others to shape our world.

It’s reasonable to think carefully about what we put our names to online, but we shouldn’t be too quick to dismiss online petitions as ineffective, or “slack”. Instead, we should think of them as one example of the diverse tactics that help build change over time. The Conversation

Sky Croeser, Lecturer, School of Media, Creative Arts and Social Inquiry, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Racism in a networked world: how groups and individuals spread racist hate online



We could see even sharper divisions in society in the future if support for racism spreads online.
Markus Spiske/Unsplash

Ana-Maria Bliuc, Western Sydney University; Andrew Jakubowicz, University of Technology Sydney, and Kevin Dunn, Western Sydney University

Living in a networked world has many advantages. We get our news online almost as soon as it happens, we stay in touch with friends via social media, and we advance our careers through online professional networks.

But there is a darker side to the internet that sees far-right groups exploit these unique features to spread divisive ideas, racial hate and mistrust. Scholars of racism refer to this type of racist communication online as “cyber-racism”.

Even the creators of the internet are aware they may have unleashed a technology that is causing a lot of harm. Since 2017, the inventor of the World Wide Web, Tim Berners-Lee, has focused many of his comments about the dangers of manipulation of the internet around the spread of hate speech, saying that:

Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken.

Our team conducted a systematic review of ten years of cyber-racism research to learn how different types of communicators use the internet to spread their views.




Read more:
How the use of emoji on Islamophobic Facebook pages amplifies racism


Racist groups behave differently to individuals

We found that the internet is indeed a powerful tool used to influence and reinforce divisive ideas. And it’s not only organised racist groups that take advantage of online communication; unaffiliated individuals do it too.

But the way groups and individuals use the internet differs in several important ways. Racist groups are active on different communication channels to individuals, and they have different goals and strategies they use to achieve them. The effects of their communication are also distinctive.

Individuals mostly engage in cyber-racism to hurt others, and to confirm their racist views by connecting with like-minded people (a form of “confirmation bias”). Their preferred communication channels tend to be blogs, forums, news commentary websites, gaming environments and chat rooms.

Channels, goals and strategies used by unaffiliated people when communicating cyber-racism.

Strategies they use include denying or minimising the issue of racism, denigrating “non-whites”, and reframing the meaning of current news stories to support their views.

Groups, on the other hand, prefer to communicate via their own websites. They are also more strategic in what they seek to achieve through online communication. They use websites to gather support for their group and their views through racist propaganda.

Racist groups manipulate information and use clever rhetoric to help build a sense of a broader “white” identity, which often goes beyond national borders. They argue that conflict between different ethnicities is unavoidable, and that what most would view as racism is in fact a natural response to the “oppression of white people”.

Channels, goals and strategies used by groups when communicating cyber-racism.




Read more:
How the alt-right uses milk to promote white supremacy


Collective cyber-racism has the main effect of undermining the social cohesion of modern multicultural societies. It creates division, mistrust and intergroup conflict.

Meanwhile, individual cyber-racism seems to have a more direct effect, negatively affecting the wellbeing of targets. It also contributes to maintaining a hostile racial climate, which may further (indirectly) affect the wellbeing of targets.

What they have in common

Despite their differences, groups and individuals both share a high level of sophistication in how they communicate racism online. Our review uncovered the disturbingly creative ways in which new technologies are exploited.

For example, racist groups make themselves attractive to young people by providing interactive games and links to music videos on their websites. And both groups and individuals are highly skilled at manipulating their public image via various narrative strategies, such as humour and the interpretation of current news to fit with their arguments.




Read more:
Race, cyberbullying and intimate partner violence


A worrying trend

Our findings suggest that if these online strategies are effective, we could see even sharper divisions in society as the mobilisation of support for racism and far-right movements spreads online.

There is also evidence that currently unaffiliated supporters of racism could derive strength through online communication. These individuals might use online channels to validate their beliefs and achieve a sense of belonging in virtual spaces where racist hosts provide an uncontested and hate-supporting community.

This is a worrying trend. We have now seen several examples of violent action perpetrated offline by isolated individuals who radicalise into white supremacist movements – for example, in the case of Anders Breivik in Norway, and more recently of Robert Gregory Bowers, who was the perpetrator of the Pittsburgh synagogue shooting.

In Australia, unlike most other liberal democracies, there are effectively no government strategies that seek to reduce this avenue for the spread of racism, despite many Australians expressing a desire that this be done. The Conversation

Ana-Maria Bliuc, Senior Lecturer in Social Psychology, Western Sydney University; Andrew Jakubowicz, Emeritus Professor of Sociology, University of Technology Sydney, and Kevin Dunn, Dean of the School of Social Science and Psychology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trolls, fanboys and lurkers: understanding online commenting culture shows us how to improve it



The way user interfaces are designed can impact the kind of community that gathers.
Shutterstock

Renee Barnes, University of the Sunshine Coast

Do you call that a haircut? I hope you didn’t pay for it.

Oh please this is rubbish, you’re a disgrace to yourself and your profession.

These are just two examples of comments that have followed articles I have written in my career. While they may seem benign compared with the sort of violent and vulgar comments that are synonymous with cyberbullying, they are examples of the uncivil and antisocial behaviour that plagues the internet.

If these comments were directed at me in any of my interactions in everyday life – when buying a coffee or at my monthly book club – they would be incredibly hurtful and certainly not inconsequential.

Drawing on my own research, as well as that of researchers in other fields, my new book “Uncovering Online Commenting Culture: Trolls, Fanboys and Lurkers” attempts to help us understand online behaviours, and outlines productive steps we can all take towards creating safer and kinder online interactions.




Read more:
Rude comments online are a reality we can’t get away from


Steps we all can take

Online abuse is a social problem that just happens to be powered by technology. Solutions are needed that not only defuse the internet’s power to amplify abuse, but also encourage crucial shifts in social norms and values within online communities.

Recognise that it’s a community

The first step is to ensure we view our online interactions as an act of participation in a community. What takes place online will then begin to line up with our offline interactions.

If any of the cruel comments that often form part of online discussion were said to you in a restaurant, you would expect witnesses around you to support you. We must have the same expectations online.

Know our audience

We learn to socialise offline based on visual and verbal cues given by the people with whom we interact. When we move social interactions to an online space where those cues are removed or obscured, a fundamental component of how we moderate our own behaviour is also eliminated. Without these social cues, it’s difficult to determine whether content is appropriate.

Research has shown that most social media users imagine a very different audience to the actual audience reading their updates. We often imagine our audience as people we associate with regularly offline, however a political statement that may be supported by close family and friends could be offensive to former colleagues in our broader online network.

Understand our own behaviour

Emotion plays a role in fuelling online behaviour – emotive comments can inspire further emotive comments in an ongoing feedback loop. Aggression can thus incite aggression in others, but it can also establish a behavioural norm within the community that aggression is acceptable.




Read more:
How empathy can make or break a troll


Understanding our online behaviour can help us take an active role in shaping the norms and values of our online communities by demonstrating appropriate behaviour.

It can also inform education initiatives for our youngest online users. We must teach them to remain conscious of the disjuncture between our imagined audience and the actual audience, thereby ingraining productive social norms for generations to come. Disturbingly, almost 70% of those aged between 18 and 29 have experienced some form of online harassment, compared with one-third of those aged 30 and older.

What organisations and institutions can do

That is not to say that we should absolve the institutions that profit from our online interactions. Social networks such as Facebook and Twitter also have a role to play.

User interface design

The design of user interfaces affects the ease with which we interact, the types of individuals who comment, and how we behave.

Drawing on psychological research, we can link particular personality traits with antisocial behaviour online. This is significant because simple changes to the interfaces we use to communicate can influence which personality types will be inclined to comment.

Using interface design to encourage participation from those who will leave positive comments, and creating barriers for those inclined to leave abusive ones, is one step that online platforms can take to minimise harmful behaviours.

For example, those who are highly agreeable prefer anonymity when communicating online. Therefore, eliminating anonymity on websites (an often touted response to hostile behaviour) could discourage those agreeable individuals who would leave more positive comments.

Moderation policies

Conscientious individuals are linked to more pro-social comments. They prefer high levels of moderation, and systems where quality comments are highlighted or ranked by other users.

Riot Games, publisher of the notorious multiplayer game League of Legends, has had great success in mitigating offensive behaviour by putting measures in place to promote the gaming community’s shared values. This included a tribunal of players who could determine punishment for people involved in uncivilised behaviour.

Analytics and reporting

Analytical tools, visible data on who visits a site, and a real-time guide to who is reading comments can help us form a more accurate picture of our audience. This could reduce the risk of unintentional offence.

Providing clear processes for reporting inappropriate behaviour, and acting quickly to punish it, will also encourage us to take an active role in cleaning up our online communities.




Read more:
How we can keep our relationships during elections: don’t talk politics on social media


We can and must expect more of our online interactions. Our behaviour and how we respond to the behaviour of others within these communities will contribute to the shared norms and values of an online community.

However, there are institutional factors that can affect the behaviours displayed. It is only through a combination of both personal and institutional responses to antisocial behaviour that we will create more inclusive and harmonious online communities. The Conversation

Renee Barnes, Senior Lecturer, Journalism, University of the Sunshine Coast

This article was originally published on The Conversation. Read the original article.

Online conspiracy theorists are more diverse (and ordinary) than most assume



Fox Mulder (David Duchovny) in The X-Files is fond of joining seemingly unrelated dots to create a conspiracy theory – but in reality, the picture is more nuanced.

Colin Klein, Australian National University; Peter Clutton, Australian National University, and Vince Polito, Macquarie University

Conspiracy theories are known for connecting apparently unrelated events. Consider the X-Files’ Fox Mulder holed up in his office, frantically joining seemingly random dots. Or the American radio host Alex Jones connecting leaked Clinton emails and fuzzy rover pictures to conclude that NASA is running a child slave colony on Mars.

Cognitive psychologists have often claimed that conspiracy theorists possess a “monological” belief system, in which belief in one conspiracy leads to belief in others. Eventually, they explain every significant event, however unrelated, through the same conspiratorial “logic”.

On such a view, conspiracy theorists are fundamentally irrational, perhaps even pathologically so. But is this an oversimplification?

So-called “big data” approaches to psychology can give a unique perspective on these questions. By using large datasets gathered from social media websites, one can look at people interacting in everyday settings. Importantly, these approaches can capture everybody, not just the most vocal members of a community.

In a recent paper we used online comments to examine individuals who are interested in these types of ideas. We examined a complete set of comments over eight years from the conspiracy forum of reddit.com.

Our dataset included 2.2 million comments from roughly 130,000 distinct user names. Our analyses used topic modelling, a type of linguistic analysis that tries to find common themes across a large collection of documents.
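Actual topic models (such as latent Dirichlet allocation) are statistical, but the underlying intuition — that comments sharing vocabulary likely share a theme — can be sketched in a few lines. The example comments and stopword list below are invented for illustration, not drawn from the study's data:

```python
import re
from collections import Counter

# Invented example comments -- not from the actual Reddit dataset.
comments = [
    "the government is hiding the truth about the moon landing",
    "declassified files show the government hiding evidence",
    "aliens built the pyramids and nasa knows it",
    "nasa has photos of aliens it will never release",
]

# A toy stopword list, assumed for this sketch.
STOPWORDS = {"the", "is", "and", "it", "of", "has", "will", "about", "a", "show"}

def bag(text):
    """Bag-of-words representation of one comment, minus stopwords."""
    return Counter(w for w in re.findall(r"[a-z]+", text.lower())
                   if w not in STOPWORDS)

def similarity(a, b):
    """Jaccard similarity over the two comments' vocabularies."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

bags = [bag(c) for c in comments]

def best_match(i):
    """Index of the comment sharing the most vocabulary with comment i --
    a crude stand-in for assigning comments to a common theme."""
    return max((j for j in range(len(bags)) if j != i),
               key=lambda j: similarity(bags[i], bags[j]))
```

Here the two "government" comments pair with each other, as do the two "aliens/nasa" comments — a miniature version of the theme-finding that topic modelling does probabilistically at scale.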

We were able to identify 12 distinct subgroups of individuals who used language in different ways and who varied widely in their interests and their posting habits. What they talked about strongly suggested that they held different beliefs and attitudes about a range of conspiracies.

Eleven of these had consistent enough interests to be readily interpretable. We assigned each of them a name and created aggregate sample comments, as shown in this diagram.


CC BY-SA

We found that there were posters who fit the “monological” pattern (we dubbed them True Believers), writing at length on a wide variety of different topics. However, they were only the tip of the iceberg. Most posters had more specific interests.

Indeed, many seemed to be attracted as much to the social aspects of discussing conspiracies as the content of specific conspiracies. The group we called the Meta-redditors, for example, was most notable for their discussion of other forums on Reddit and complaints about moderation policies.

Similarly, there were subgroups (the Downtrodden) who appeared to be communicating about conspiracy theories as a way of expressing general frustration with authority.

Some were suspicious of US foreign policy (Anti-imperalists). Others may be using conspiracy topics as a way to express racist or otherwise socially unacceptable ideas (Anti-semites). There was even a distinct subgroup who appeared to be posting primarily in order to debunk conspiracy beliefs (Sceptics).

https://datawrapper.dwcdn.net/H6Iwh/3/

This suggests that individuals involved with conspiracy theories may have quite different motivations, beliefs and attitudes. It may not be useful to consider them as a homogeneous group.

What does all this mean for the study of conspiracy theories? It’s complicated.

On the one hand, it has long been known that online conspiracy theorising can have a variety of negative social effects, from political disengagement to actual violence. These are effects we should aim to mitigate.

On the other hand, our study also reveals something more surprising. The vast majority of posters were no more active in the conspiracy forum than they were on other reddit forums. This suggests that, for most individuals, conspiracy theories are not an all-encompassing obsession that overrides other interests, but just one interest among many. Even the most obsessed posters also spend a lot of time discussing Star Wars and trading cute cat photos on other parts of the site.
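A claim like this rests on a simple per-user measure: the share of each user's comments that fall in a given forum. A minimal sketch, using invented (user, subreddit) records in place of the real Reddit data:

```python
from collections import Counter

# Hypothetical (user, subreddit) comment records, invented for illustration.
records = [
    ("alice", "conspiracy"), ("alice", "movies"),
    ("alice", "movies"), ("alice", "aww"),
    ("bob", "conspiracy"), ("bob", "conspiracy"),
    ("bob", "conspiracy"), ("bob", "aww"),
]

def forum_share(records, user, forum):
    """Fraction of a user's comments posted in the given forum."""
    per_forum = Counter(f for u, f in records if u == user)
    total = sum(per_forum.values())
    return per_forum[forum] / total if total else 0.0
```

In this toy data, alice posts only a quarter of her comments in the conspiracy forum (one interest among many), while bob concentrates three-quarters of his there — the kind of contrast the study measured across 130,000 users.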

As noted, many members of online forums appear to use conspiracy theories to express legitimate doubts about structures of power, even if the details may appear striking to us. Reddit’s conspiracy forum links on its front page to a “list of confirmed conspiracies”, many of which are historically accurate. In the 1960s, for instance, the US FBI did infiltrate domestic protest groups to disrupt and discredit them. The public health service in Alabama did conduct long-term medical experiments on African American men.

Hence, we think that conspiracy theorising may be best understood as a symptom of a breakdown of trust in institutions like government and the media. Rebuilding that trust is, alas, a difficult proposal. However, our work suggests that recognising the varied and complex motivations and attitudes of conspiracy believers is an important step forward. The Conversation

Colin Klein, Senior Lecturer in Philosophy, Australian National University; Peter Clutton, Australian National University, and Vince Polito, Postdoctoral Research Fellow in Cognitive Science, Macquarie University

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



Many social media users have been shocked to learn the extent of their digital footprint.
Shutterstock

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ emails when promised free pizza.

Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk in that once the data is taken out of Facebook it becomes even easier to copy and distribute.

This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of data from our friends via back-door methods is more common than we might think. The back-door opens when you press “accept” on a permission request that gives an app access to your contacts when installing it.

WhatsApp might have your contact information even if you aren’t a registered user.

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.




Read more:
Explainer: what is differential privacy and how can it protect your data?


The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, each of them has 234, and so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
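As a rough back-of-the-envelope sketch of those numbers (assuming the 234-friend average quoted above and ignoring overlap between friend lists, so this is an upper bound rather than a measured figure):

```python
# Back-of-the-envelope estimate of how far one app's permissions can reach.
# Assumes every consenting installer has the average number of friends and
# that no two friend lists overlap, so this overstates the true reach.

consenting_users = 270_000  # people who reportedly granted the app permission
avg_friends = 234           # average Facebook friends per Australian user

upper_bound = consenting_users * avg_friends
print(f"Upper bound on reachable profiles: {upper_bound:,}")
```

Overlap between friend lists shrinks this figure of roughly 63 million toward the reported total of up to 50 million profiles.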

Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data daily without your friends knowing, and 70% of apps share it with third parties.




Read more:
7 in 10 smartphone apps share your data with third-party services


We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give all the data on their phone, 95% said yes. But when they focused on just contacts, this decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away your friend’s data on WhatsApp without their permission was wrong.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.

New online tool can predict your melanoma risk



People who are unable to tan and who have moles on their skin are among those at heightened risk of developing melanoma.
from shutterstock.com

Phoebe Roth, The Conversation

Australians over the age of 40 can now calculate their risk of developing melanoma with a new online test. The risk predictor tool estimates a person’s melanoma risk over the next 3.5 years based on seven risk factors.

Melanoma is the third most common cancer in Australia and the most dangerous form of skin cancer.

The seven risk factors the tool uses are age, sex, ability to tan, number of moles at age 21, number of skin lesions treated, hair colour and sunscreen use.

The tool was developed by researchers at the QIMR Berghofer Medical Research Institute. Lead researcher Professor David Whiteman explained he and his team determined the seven risk factors by following more than 40,000 Queenslanders since 2010, and analysing their data.




Read more:
Interactive body map: what really gives you cancer?


The seven risk factors are each weighted differently. The tool’s algorithm uses these to assign a person into one of five risk categories: very much below average, below average, average, above average, and very much above average.

“This online risk predictor will help identify those with the highest likelihood of developing melanoma so that they and their doctors can decide on how to best manage their risk,” Professor Whiteman said.

After completing the short test, users will be offered advice, such as whether they should see their doctor. A reading of “above average” or “very much above average” will recommend a visit to the doctor to explore possible options for managing their melanoma risk.

But Professor Whiteman cautions that people with a below average risk shouldn’t become complacent.

“Even if you are at below average risk, it doesn’t mean you are at low risk – just lower than the average Australian,” he said.




Read more:
Explainer: how does sunscreen work, what is SPF and can I still tan with it on?


An estimated one in 17 Australians will be diagnosed with melanoma by their 85th birthday.

The test is targeted at people aged 40 and above, as this was the age range of the cohort studied.

However, melanoma remains the most common cancer in Australians under 40.

Professor Whiteman said that the test may be useful for those under 40, but it may not be as accurate, as that wasn’t the demographic it was based on.

But he added complete accuracy couldn’t be guaranteed even for the target demographic.

“I don’t think it’s possible that we’ll ever get to 100%. I think that’s a holy grail that we aspire to, but in reality, cancers are very complex diseases and their causality includes many, many factors, including unfortunately some random factors.”

The prognosis for melanoma patients is significantly better when it is detected earlier. The University of Queensland’s Professor of Dermatology H. Peter Soyer explained that the five-year survival rate for melanoma is 90%. But this figure jumps to 98% for patients diagnosed at the very early stages.

“At the end of the day, everything that raises awareness for melanomas and for skin cancer is beneficial,” Professor Soyer said.

Dr Hassan Vally, a senior lecturer in epidemiology at La Trobe University, said the way risk is often communicated is hard for people to grasp. But he said this model would provide people with a tangible measure of their risk of disease, and point them towards what they may be able to do to reduce it.

“Everything comes back to how people perceive their risk, and how can they make sense of it.

“If it makes people more aware of their risks of disease that’s a good thing, and if that awareness leads to people taking action and improving their health then that’s great.”

Phoebe Roth, Editorial Intern, The Conversation

This article was originally published on The Conversation. Read the original article.

You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem



Do you care if your data is being used by third parties?
from www.shutterstock.com

Siobhan Lyons, Macquarie University

We all seem worried about privacy. But it’s not only privacy itself we should be concerned about: our attitudes towards privacy also matter.

When we stop caring about our digital privacy, we witness surveillance apathy.

And it’s something that may be particularly significant for marginalised communities, who feel they hold no power to navigate or negotiate fair use of digital technologies.


Read more: Yes, your doctor might google you


In the wake of the NSA leaks in 2013 led by Edward Snowden, we are more aware of the machinations of online companies such as Facebook and Google. Yet research shows some of us are apathetic when it comes to online surveillance.

Privacy and surveillance

Attitudes to privacy and surveillance in Australia are complex.

According to a major 2017 privacy survey, around 70% of us are more concerned about privacy than we were five years ago.

Snapshot of Australian community attitudes to privacy 2017.
Office of the Australian Information Commissioner

And yet we still increasingly embrace online activities. A 2017 report on social media conducted by search marketing firm Sensis showed that almost 80% of internet users in Australia now have a social media profile, an increase of around ten points from 2016. The data also showed that Australians are on their accounts more frequently than ever before.

Also, most Australians appear not to be concerned about the recently proposed implementation of facial recognition technology. Only around one in three (32% of 1,486) respondents to a Roy Morgan study expressed worries about having their faces held in a mass database.

A recent ANU poll revealed a similar sentiment, with recent data retention laws supported by two thirds of Australians.

So while we’re aware of the issues with surveillance, we aren’t necessarily doing anything about it, or we’re prepared to make compromises when we perceive our safety is at stake.

Across the world, attitudes to surveillance vary. Around half of Americans polled in 2013 found mass surveillance acceptable. France, Britain and the Philippines appeared more tolerant of mass surveillance compared to Sweden, Spain, and Germany, according to 2015 Amnesty International data.


Read more: Police want to read encrypted messages, but they already have significant power to access our data


Apathy and marginalisation

In 2015, philosopher Slavoj Žižek proclaimed that he did not care about surveillance (though admittedly suggesting that “perhaps here I preach arrogance”).

This position cannot be assumed by all members of society. Australian academic Kate Crawford argues the impact of data mining and surveillance is more significant for marginalised communities, including people of different races, genders and socioeconomic backgrounds. American academics Shoshana Magnet and Kelley Gates agree, writing:

[…] new surveillance technologies are regularly tested on marginalised communities that are unable to resist their intrusion.

A 2015 White House report found that big data can be used to perpetuate price discrimination among people of different backgrounds. It showed how data surveillance “could be used to hide more explicit forms of discrimination”.


Read more: Witch-hunts and surveillance: the hidden lives of queer people in the military


According to Ira Rubinstein, a senior fellow at New York University’s Information Law Institute, ignorance and cynicism are often behind surveillance apathy. Users are either ignorant of the complex infrastructure of surveillance, or they believe they are simply unable to avoid it.

As the White House report stated, consumers “have very little knowledge” about how data is used in conjunction with differential pricing.

So in contrast to the oppressive panopticon (a circular prison with a central watchtower) envisioned by philosopher Jeremy Bentham, we have what Siva Vaidhyanathan calls the “cryptopticon”. The cryptopticon is “not supposed to be intrusive or obvious. Its scale, its ubiquity, even its very existence, are supposed to go unnoticed”.

But Melanie Taylor, lead artist of the computer game Orwell (which puts players in the role of a surveillance agent), noted that many simply remain indifferent despite heightened awareness:

That’s the really scary part: that Snowden revealed all this, and maybe nobody really cared.

The Facebook trap

Surveillance apathy can be linked to people’s dependence on “the system”. As one of my media students pointed out, no matter how much awareness users have regarding their social media surveillance, invariably people will continue using these platforms. This is because they are convenient, practical, and “we are creatures of habit”.

Are you prepared to give up the red social notifications from Facebook?
nevodka/shutterstock

As University of Melbourne scholar Suelette Dreyfus noted in a Four Corners report on Facebook:

Facebook has very cleverly figured out how to wrap itself around our lives. It’s the family photo album. It’s your messaging to your friends. It’s your daily diary. It’s your contact list.

This, along with the complex algorithms Facebook and Google use to collect data and produce “filter bubbles” or “you loops”, is another issue.

Protecting privacy

While some people are attempting to delete themselves from the network, others have come up with ways to avoid being tracked online.

The search engine DuckDuckGo and the Tor Browser allow users to search and browse without being tracked. Lightbeam, meanwhile, allows users to see how their information is being tracked by third-party companies. And MIT devised a system called Immersion to show people the metadata of their emails.

Surveillance apathy is more disconcerting than surveillance itself. Our very attitudes about privacy will inform the structure of surveillance itself, so caring about it is paramount.

Siobhan Lyons, Scholar in Media and Cultural Studies, Macquarie University

This article was originally published on The Conversation. Read the original article.

Here’s how Australia can act to target racist behaviour online



Racists take advantage of social media algorithms to find people with similar beliefs.
from www.shutterstock.com

Andrew Jakubowicz, University of Technology Sydney

Although racism online feels like an insurmountable problem, there are legal and civil actions we can take right now in Australia to address it.

Racism expressed on the social media sites provided by Facebook and the Alphabet stable (which includes Google and YouTube) ranges from advocacy of white power and support for the extermination of Jews to calls for political action against Muslim citizens because of their faith. Increasingly it occurs within the now “private” pages of groups that “like” racism.

The Simon Wiesenthal Center 2017 Digital Terrorism and Hate Report card.
Simon Wiesenthal Center

At the heart of the problem is the clash between the commercial goals of social media companies (based around creating communities, building audiences, and publishing and curating content to sell to advertisers) and the ethical responsibilities those companies ascribe to themselves towards their users.

Although some platforms show growing awareness of the need to respond more quickly to complaints, it’s a very slow process to automate.

Australia should focus on laws that protect internet users from overt hate, and civil actions to help balance out power relationships.


Read more: Tech companies can distinguish between free speech and hate speech


Three actions on the legal front

At the global level, Australia could withdraw its reservation to Article 4 of the International Convention to Eliminate All Forms of Racial Discrimination. Such a move has been flagged in the past, but stymied by opposition from an alliance of free speech and social conservative activists and politicians.

The convention is a global agreement to outlaw racism and racial discrimination, and Article 4 committed signatories to criminalise race hate speech. Australia’s reservation reflected conservative governments’ reluctance to use the criminal law, similar to the civil law debate over section 18C of the Racial Discrimination Act in 2016-17.

New data released by the eSafety Commissioner showed young people are subjected to extensive online hate. Amongst other findings, 53% of young Muslims said they had faced harmful online content; Indigenous people and asylum seekers were also frequent targets of online hate. Perhaps this could lead governments and opposition parties to a common cause.


Read more: Australians believe 18C protections should stay


Secondly, while Australian law has adopted the European Convention on Cybercrime, it could move further and adopt the additional protocol. This outlaws racial vilification, and the advocacy of xenophobia and racism.

The impact of these international agreements would be to make serious cases of online racial vilification criminal acts in Australia, and to make the executive employees of platforms that refuse to remove them personally criminally liable. This situation has emerged in Germany, where Facebook executives have been threatened with the use of such laws. Mark Zuckerberg visited Germany to pledge opposition to anti-immigrant vilification in 2016.

Finally, Australia could adopt a version of New Zealand’s approach to harmful digital communication. Here, platforms are held ultimately accountable for the publication of online content that seriously offends, and users can challenge the failure of platforms to take down offensive material in the realm of race hate. Currently complaints via the Australian Human Rights Commission do elicit informal cooperation in some cases, but citizen rights are limited.

Taken together, these elements would mark out to providers and users of internet services that there is a shared responsibility for reasonable civility.

Digital platforms can allow racist behaviour to be anonymous.
from www.shutterstock.com

Civil strategies

In addition to legal avenues, civil initiatives can empower those who are the targets of hate speech, and disempower those who are the perpetrators of race hate.

People who are targeted by racists need support and affirmation. This approach underpins the eSafety Commissioner’s development of a Young and Safe portal, which offers stories and scenarios designed to build confidence and grow skills in young people. The portal is being extended to address concerns about women and children, racism, and other forms of bullying.

The Online Hate Prevention Institute (OHPI) has become a reservoir of insights and capacities to identify and pursue perpetrators. As proposed by OHPI, a CyberLine could be created for tipping off and reporting race hate speech online, for follow-up and possible legal action. Such a hotline would also serve as a discussion portal on what racism looks like and what responses are appropriate.

Anti-racism workshops (some have already been run by the eSafety Commissioner) have aimed to push back against hate, and build structures where people can come together online. Modelling and disseminating best practice against race hate speech offers resources to wider communities that can then be replicated elsewhere.

The Point magazine (an online youth-centred publication for the government agency Multicultural New South Wales) reported two major events where governments sponsored industry/community collaboration to find ways forward against cyber racism.

What makes a diverse Australia?

The growth of online racism marks the struggle between a dark and destructive social movement that wishes to suppress or minimise the recognition of cultural differences, and an emergent social movement that treasures cultural differences and egalitarian outcomes in education and wider society.

Advocacy organisations can play a critical role in advancing an agenda of civility and responsibility through the state, the economy and civil society. The social movements of inclusion will ultimately put pressure on the state and in the economy to ensure the major platforms do in fact accept full responsibilities for the consequences of their actions. If a platform refuses to publish hate speech or acts to remove it when it receives valid complaints, such views remain a private matter for the individual who holds them, not a corrosive undermining of civil society.

We need to rebalance the equation between civil society, government and the internet industry, so that when the population confronts the industry, demonstrating it wants answers, we will begin to see responsibility emerge.

Governments also need to see their role as more strongly ensuring a balance between the right to a civil discourse and the profitability of platforms. Currently the Australian government seems not to accept that it has such a role, even though a number of states have begun to act.


The Cyber Racism and Community Resilience Project (CRaCR) explores why cyber racism has grown in Australia and globally, and what concerned communities have done and can do about it. This article summarises the recommendations CRaCR made to industry partners.

Andrew Jakubowicz, Professor of Sociology, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.