Why social media platforms banning Trump won’t stop — or even slow down — his cause


Bronwyn Carlson, Macquarie University

Last week Twitter permanently suspended US President Donald Trump in the wake of his supporters’ violent storming of the US Capitol. Trump was also suspended from Facebook and Instagram indefinitely.

Heads quickly turned to the right-wing Twitter alternative Parler — which seemed to be a logical place of respite for the digitally dethroned president.

But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.

These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.

Trump received 74,223,744 votes (46.9% of the total) in the 2020 election, so the magnitude of his following is clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.

In his first appearance since the Capitol attack, Trump described the impeachment process as “a continuation of the greatest witch hunt in the history of politics”.

Not budging

With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.

Tweeting via @POTUS, the official Twitter account for the president, he said he might consider building his own platform. Twitter promptly removed this tweet. He also tweeted: “We will not be SILENCED!”.

This threat carries some weight, as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.

For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.

The internet is also notoriously hard to police – it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.

It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:

I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.

As far as efforts to quash such extremism go, they’re usually too little, too late.

Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.




Read more:
Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


The psychology of hate

Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.

Others have highlighted the many options still available for Trump fans to congregate since the departure of Parler, which was used to communicate plans ahead of the siege at the Capitol. Gab is one platform many Trump supporters have flocked to.

It’s important to remember hate speech, racism and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.

And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.

When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshipping elites from within government, business and media are running a “deep state” against Trump.

Social media and white supremacy: a love story

The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.

Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.

It recognises social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favour “free speech” over the safety of vulnerable communities online.

For instance, Matamoros-Fernández’s research found in Australia, platforms such as Facebook “favoured the offenders over Indigenous people” by tending to lean in favour of free speech.

Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.

On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply put, there’s no strong incentive for platforms to tackle the issues of hate speech and racism until failing to do so hurts their profits.

After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.




Read more:
Profit, not free speech, governs media companies’ decisions on controversy


The need for a paradigm shift

When it comes to imagining a future with less hate, racism and violence, a key mistake is looking for solutions within the existing structure.

Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.

But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.

Trump’s presidency will end in the coming days, but extremist groups and the broader movement they occupy will remain, both in real life and online.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day




Bronwyn Carlson, Professor, Indigenous Studies, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

No, Twitter is not censoring Donald Trump. Free speech is not guaranteed if it harms others




Katharine Gelber, The University of Queensland

The recent storming of the US Capitol has led a number of social media platforms to remove President Donald Trump’s account. In the case of Twitter, the ban is permanent. Others, like Facebook, have taken him offline until after President-elect Joe Biden’s inauguration next week.

This has led to a flurry of commentary in the Australian media about “free speech”. Treasurer Josh Frydenberg has said he is “uncomfortable” with Twitter’s removal of Trump, while the acting prime minister, Michael McCormack, has described it as “censorship”.

Meanwhile, MPs like Craig Kelly and George Christensen continue to ignore the evidence and promote misinformation about the nature of the violent, pro-Trump mob that attacked the Capitol.

A growing number of MPs are also reportedly calling for consistent and transparent rules to be applied by online platforms in a bid to combat hate speech and other types of harmful speech.

Some have conflated this effort with the restrictions on Trump’s social media usage, as though both of these issues reflect the same problem.

Much of this commentary is misguided, wrong and confusing. So let’s pull it apart a bit.

There is no free speech “right” to incite violence

There is no free speech argument in existence that suggests incitement to lawlessness and violence is protected speech.

Quite to the contrary. Nineteenth-century free speech proponent John Stuart Mill argued the sole reason one’s liberty may be interfered with (including restrictions on free speech) is “self-protection” — in other words, to protect people from harm or violence.




Read more:
Parler: what you need to know about the ‘free speech’ Twitter alternative


Additionally, incitement to violence is a criminal offence in all liberal democratic orders. There is an obvious reason for this: violence is harmful. It harms those who are immediately targeted (five people died in the riots last week) and those who are intimidated as a result of the violence to take action or speak up against it.

It also harms the institutions of democracy themselves, which rely on elections rather than civil wars and a peaceful transfer of power.

To suggest taking action against speech that incites violence is “censoring” the speaker is completely misleading.

There is no free speech “right” to appear on a particular platform

There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.

It is ludicrous to suggest there is. If this “right” were to exist, it would mean any citizen could demand to have their opinions aired on the front page of the Sydney Morning Herald and, if refused, claim their free speech had been violated.




Read more:
Trump’s Twitter tantrum may wreck the internet


What does exist is a general right to express oneself in public discourse, relatively free from regulation, as long as one’s speech does not harm others.

Trump still possesses this right. He has a podium in the West Wing designed for this specific purpose, which he can make use of at any time.

Were he to do so, the media would cover what he says, just as they covered his comments prior to, during and immediately after the riots. This included him telling the rioters that he loved them and that they were “very special”.

Trump told his supporters before the Capitol was overrun: ‘if you don’t fight like hell, you’re not going to have a country anymore’.
Jacquelyn Martin/AP

Does the fact he’s the president change this?

In many free speech arguments, political speech is accorded a higher level of protection than other forms of speech (such as commercial speech, for example). Does the fact this debate concerns the president of the United States change things?

No, it does not. There is no doubt Trump has been given considerable leeway in his public commentary prior to — and during the course of — his presidency. However, he has now crossed a line into stoking imminent lawlessness and violence.

This cannot be protected speech just because it is “political”. If this were the case, it would suggest the free speech of political elites can and should have no limits at all.

Yet, in all liberal democracies – even the United States which has the strongest free speech protection in the world – free speech has limits. These include the incitement of violence and crime.

Are social media platforms over-censoring?

The last decade or so has seen a vigorous debate over the attitudes and responses of social media platforms to harmful speech.

The big tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.

However, platforms have recently started to respond to demands and public pressure to address the harms of the speech they facilitate – from countering violent extremism to fake accounts, misinformation, revenge porn and hate speech.

They have developed community standards for content moderation that are publicly available. They release regular reports on their content moderation processes.

Facebook has even created an independent oversight board to arbitrate disputes over its decision-making on content moderation.

They do not always do very well at this. One of the core problems is their desire to create algorithms and policies that are applicable universally across their global operations. But such a thing is impossible when it comes to free speech. Context matters in determining whether and under what circumstances speech can harm. This means they make mistakes.




Read more:
Why the business model of social media giants like Facebook is incompatible with human rights


Where to now?

The calls by MPs Anne Webster and Sharon Claydon to address hate speech online are important. They are part of the broader push internationally to find ways to ensure the benefits of the internet can be enjoyed more equally, and that a person’s speech does not silence or harm others.

Arguments about harm are longstanding, and have been widely accepted globally as forming a legitimate basis for intervention.

But the suggestion Trump has been censored is simply wrong. It misleads the public into believing all “free speech” claims have equal merit. They do not.

We must work to ensure harmful speech is regulated in order to ensure broad participation in the public discourse that is essential to our lives — and to our democracy. Anything less is an abandonment of the principles and ethics of governance.

Katharine Gelber, Professor of Politics and Public Policy, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


Timothy Graham, Queensland University of Technology

Amid the chaos in the US Capitol, stoked largely by rhetoric from President Donald Trump, Twitter has locked his account, with 88.7 million followers, for 12 hours.

Facebook and Instagram quickly followed suit, locking Trump’s accounts — with 35.2 million and 24.5 million followers, respectively — for at least two weeks, the remainder of his presidency. This ban was extended from an initial 24 hours.

The locks are the latest effort by social media platforms to clamp down on Trump’s misinformation and baseless claims of election fraud.

They came after Twitter labelled a video posted by Trump and said it posed a “risk of violence”. Twitter removed users’ ability to retweet, like or comment on the post — the first time this has been done.

In the video, Trump told the agitators at the Capitol to go home, but at the same time called them “very special” and said he loved them for disrupting the Congressional certification of President-elect Joe Biden’s win.

That tweet has since been taken down for “repeated and severe violations” of Twitter’s civic integrity policy. YouTube and Facebook have also removed copies of the video.

But as people across the world scramble to make sense of what’s going on, one thing stands out: the events that transpired today were not unexpected.

Given the lack of regulation and responsibility shown by platforms over the past few years, it’s fair to say the writing was on the wall.

The real, violent consequences of misinformation

While Trump is no stranger to contentious and even racist remarks on social media, Twitter’s action to lock the president’s account is a first.

The line was arguably crossed by Trump’s implicit incitement of violence and disorder within the halls of the US Capitol itself.

Nevertheless, it would have been a difficult decision for Twitter (and Facebook and Instagram), with several factors at play. Some of these are short-term, such as the immediate potential for further violence.

Then there’s the question of whether tighter regulation could further incite rioting Trump supporters by feeding into their theories claiming the existence of a large-scale “deep state” plot against the president. It’s possible.




Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans


But a longer-term consideration — and perhaps one at the forefront of the platforms’ priorities — is how these actions will affect their value as commercial assets.

I believe the platforms’ biggest concern is their own bottom line. They are commercial companies legally obliged to pursue profits for shareholders. Commercial imperatives and user engagement are at the forefront of their decisions.

What happens when you censor a Republican president? You can lose a huge chunk of your conservative user base, or upset your shareholders.

Despite what we think of them, or how we might use them, platforms such as Facebook, Twitter, Instagram and YouTube aren’t set up in the public interest.

For them, it’s risky to censor a head of state when they know that content is profitable. Doing it involves a complex risk calculus — with priorities being shareholders, the companies’ market value and their reputation.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day


Walking a tightrope

The platforms’ decisions to not only force the removal of several of Trump’s posts but also to lock his accounts carries enormous potential loss of revenue. It’s a major and irreversible step.

And they are now forced to keep a close eye on one another. If one appears too “strict” in its censorship, it may attract criticism and lose user engagement and ultimately profit. At the same time, if platforms are too loose with their content regulation, they must weather the storm of public critique.

You don’t want to be the last organisation to make the tough decision, but you don’t necessarily want to be the first, either — because then you’re the “trial balloon” who volunteered to potentially harm the bottom line.

For all major platforms, the past few years have presented high stakes. Yet there have been plenty of opportunities to stop the situation snowballing to where it is now.

From Trump’s baseless election fraud claims to his false ideas about the coronavirus, time and again platforms have turned a blind eye to serious cases of mis- and disinformation.

The storming of the Capitol is a logical consequence, and one that has arguably been a long time coming.

The coronavirus pandemic illustrated this. While Trump was partially censored by Twitter and Facebook for misinformation, the platforms failed to take lasting action to deal with the issue at its core.

In the past, platforms have cited constitutional reasons to justify not censoring politicians. They have claimed a civic duty to give elected officials an unfiltered voice.

This line of argument should have ended with the “Unite the Right” rally in Charlottesville in August 2017, when Trump responded to the killing of an anti-fascism protester by claiming there were “very fine people on both sides”.

An age of QAnon, Proud Boys and neo-Nazis

While there’s no silver bullet for online misinformation and extremist content, there’s also no doubt platforms could have done more in the past that may have prevented the scenes witnessed in Washington DC.

In a crisis, there’s a rush to make sense of everything. But we need only look at what led us to this point. Experts on disinformation have been crying out for platforms to do more to combat disinformation and its growing domestic roots.

Now, in 2021, extremists such as neo-Nazis and QAnon believers no longer have to lurk in the depths of online forums or commit lone acts of violence. Instead, they can violently storm the Capitol.

It would be a cardinal error not to recognise the severity and importance of the neglect that led us here. In some ways, perhaps that’s the biggest lesson we can learn.


This article has been updated to reflect the news that Facebook and Instagram extended their 24-hour ban on President Trump’s accounts.

Timothy Graham, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Do social media algorithms erode our ability to make decisions freely? The jury is out




Lewis Mitchell and James Bagrow, University of Vermont

Social media algorithms, artificial intelligence, and our own genetics are among the factors influencing us beyond our awareness. This raises an ancient question: do we have control over our own lives? This article is part of The Conversation’s series on the science of free will.


Have you ever watched a video or movie because YouTube or Netflix recommended it to you? Or added a friend on Facebook from the list of “people you may know”?

And how does Twitter decide which tweets to show you at the top of your feed?

These platforms are driven by algorithms, which rank and recommend content for us based on our data.

As Woodrow Hartzog, a professor of law and computer science at Northeastern University, Boston, explains:

If you want to know when social media companies are trying to manipulate you into disclosing information or engaging more, the answer is always.

So if we are making decisions based on what’s shown to us by these algorithms, what does that mean for our ability to make decisions freely?

What we see is tailored for us

An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients. Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements.

The ingredients used are the data we provide through our actions online – knowingly or otherwise. Every time you like a post, watch a video, or buy something, you provide data that can be used to make predictions about your next move.
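To make the recipe metaphor concrete, here is a minimal illustrative sketch in Python (with made-up data; it is not any platform’s actual ranking system) of how past “likes” can be turned into a content ranking:

```python
# Toy sketch of the "recipe" described above (illustrative only, not any
# platform's real ranking algorithm): score each candidate post by how often
# the user has engaged with that topic before, then show the highest scorers.
from collections import Counter

past_likes = ["politics", "politics", "cats", "politics", "cooking"]
topic_affinity = Counter(past_likes)  # the "ingredients": data from past actions

candidate_posts = [
    {"id": 1, "topic": "cats"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "gardening"},
]

# The "recipe": rank posts by the user's affinity for their topic.
ranked = sorted(candidate_posts,
                key=lambda post: topic_affinity[post["topic"]],
                reverse=True)
print([post["id"] for post in ranked])  # [2, 1, 3]: politics-heavy user sees politics first
```

Real recommender systems use far richer signals and machine learning models, but the basic loop is the same: collect behavioural data, score content, show the highest scorers.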

These algorithms can influence us, even if we’re not aware of it. As the New York Times’ Rabbit Hole podcast explores, YouTube’s recommendation algorithms can drive viewers to increasingly extreme content, potentially leading to online radicalisation.

Facebook’s News Feed algorithm ranks content to keep us engaged on the platform. It can produce a phenomenon called “emotional contagion”, in which seeing positive posts leads us to write positive posts ourselves, and seeing negative posts means we’re more likely to craft negative posts — though this study was controversial partially because the effect sizes were small.

Also, so-called “dark patterns” are designed to trick us into sharing more, or spending more on websites like Amazon. These are tricks of website design such as hiding the unsubscribe button, or showing how many people are buying the product you’re looking at right now. They subconsciously nudge you towards actions the site would like you to take.




Read more:
Sludge: how corporations ‘nudge’ us into spending more


You are being profiled

Cambridge Analytica, the company involved in the largest known Facebook data leak to date, claimed to be able to profile your psychology based on your “likes”. These profiles could then be used to target you with political advertising.

“Cookies” are small pieces of data which track us across websites. They are records of actions you’ve taken online (such as links clicked and pages visited) that are stored in the browser. When they are combined with data from multiple sources including from large-scale hacks, this is known as “data enrichment”. It can link our personal data like email addresses to other information such as our education level.

These data are regularly used by tech companies like Amazon, Facebook, and others to build profiles of us and predict our future behaviour.

You are being predicted

So, how much of your behaviour can be predicted by algorithms based on your data?

Our research, published in Nature Human Behaviour last year, explored this question by looking at how much information about you is contained in the posts your friends make on social media.

Using data from Twitter, we estimated how predictable peoples’ tweets were, using only the data from their friends. We found data from eight or nine friends was enough to be able to predict someone’s tweets just as well as if we had downloaded them directly (well over 50% accuracy, see graph below). Indeed, 95% of the potential predictive accuracy that a machine learning algorithm might achieve is obtainable just from friends’ data.

Average predictability from your circle of closest friends (blue line). A value of 50% means getting the next word right half of the time — no mean feat as most people have a vocabulary of around 5,000 words. The curve shows how much an AI algorithm can predict about you from your friends’ data. Roughly 8-9 friends are enough to predict your future posts as accurately as if the algorithm had access to your own data (dashed line).
Bagrow, Liu, & Mitchell (2019)
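To illustrate the idea, here is a toy sketch (not the method used in the study, which relied on information-theoretic estimates over real Twitter data): a simple next-word model built only from friends’ posts can already guess some of a target user’s words.

```python
# Toy illustration (not the study's method): build a simple bigram model from
# "friends'" posts and see how often it guesses the next word in a target
# user's posts. The real study estimated predictability with
# information-theoretic (entropy) measures over real Twitter data.
from collections import Counter, defaultdict

friends_posts = [
    "the election results are in and the count is done",
    "watching the election count tonight with friends",
    "the count is slow but the results are coming",
]
target_posts = ["the election count is slow tonight"]

# Count word-pair frequencies in the friends' text.
bigrams = defaultdict(Counter)
for post in friends_posts:
    words = post.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

# Predict each next word in the target user's text from the friends-only model.
hits = total = 0
for post in target_posts:
    words = post.split()
    for prev, actual in zip(words, words[1:]):
        total += 1
        if bigrams[prev]:
            guess = bigrams[prev].most_common(1)[0][0]
            hits += guess == actual
print(f"next-word accuracy from friends' data alone: {hits / total:.0%}")
```

With real data and stronger models, this kind of friends-only prediction approaches the accuracy of a model trained on the user’s own posts, which is the paper’s central result.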

Our results mean that even if you #DeleteFacebook (which trended after the Cambridge Analytica scandal in 2018), you may still be able to be profiled, due to the social ties that remain. And that’s before we consider the things about Facebook that make it so difficult to delete anyway.




Read more:
Why it’s so hard to #DeleteFacebook: Constant psychological boosts keep you hooked


We also found it’s possible to build profiles of non-users — so-called “shadow profiles” — based on their contacts who are on the platform. Even if you have never used Facebook, if your friends do, there is the possibility a shadow profile could be built of you.

On social media platforms like Facebook and Twitter, privacy is no longer tied to the individual, but to the network as a whole.

No more free will? Not quite

But all hope is not lost. If you do delete your account, the information contained in your social ties with friends grows stale over time. We found predictability gradually declines to a low level, so your privacy and anonymity will eventually return.

While it may seem like algorithms are eroding our ability to think for ourselves, it’s not necessarily the case. The evidence on the effectiveness of psychological profiling to influence voters is thin.

Most importantly, when it comes to the role of people versus algorithms in things like spreading (mis)information, people are just as important. On Facebook, the extent of your exposure to diverse points of view is more closely related to your social groupings than to the way News Feed presents you with content. And on Twitter, while “fake news” may spread faster than facts, it is primarily people who spread it, rather than bots.

Of course, content creators exploit social media platforms’ algorithms to promote content, on YouTube, Reddit and other platforms, not just the other way round.

At the end of the day, underneath all the algorithms are people. And we influence the algorithms just as much as they may influence us.




Read more:
Don’t just blame YouTube’s algorithms for ‘radicalisation’. Humans also play a part




Lewis Mitchell, Senior Lecturer in Applied Mathematics and James Bagrow, Associate Professor, Mathematics & Statistics, University of Vermont

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Is social media damaging to children and teens? We asked five experts



They need to have it to fit in, but social media is probably doing teens more harm than good.
Shutterstock

Alexandra Hansen, The Conversation

If you have kids, chances are you’ve worried about their presence on social media.

Who are they talking to? What are they posting? Are they being bullied? Do they spend too much time on it? Do they realise their friends’ lives aren’t as good as they look on Instagram?

We asked five experts if social media is damaging to children and teens.

Four out of five experts said yes

The four experts who ultimately found social media is damaging pointed to its negative effects on mental health, disturbed sleep, cyberbullying, social comparison, privacy concerns and body image.

However, they also conceded it can have positive effects in connecting young people with others, and living without it might even be more ostracising.

The dissenting voice said it’s not social media itself that’s damaging, but how it’s used.

Here are their detailed responses:


If you have a “yes or no” health question you’d like posed to Five Experts, email your suggestion to: alexandra.hansen@theconversation.edu.au


Karyn Healy is a researcher affiliated with the Parenting and Family Support Centre at The University of Queensland and a psychologist working with schools and families to address bullying. Karyn is co-author of a family intervention for children bullied at school. Karyn is a member of the Queensland Anti-Cyberbullying Committee, but not a spokesperson for this committee; this article presents only her own professional views.

Alexandra Hansen, Chief of Staff, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Goodbye Google+, but what happens when online communities close down?



Google+ is the latest online community to close.
Shutterstock/rvlsoft

Stan Karanasios, RMIT University

This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.

If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.

Google+ is no longer available to users.
Google+/Screengrab

But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).




Read more:
Sexual subcultures are collateral damage in Tumblr’s ban on adult content


So are we witnessing the decline of online communities and user comments?

Turning off online communities and user generated content

One of the most well-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by owner Amazon in 2017 with just two weeks’ notice for its users.

This is not only confined to online communities but mirrors a trend among organisations to restrict or turn off their user-generated content. Last year the subscription video-on-demand website Netflix said it no longer allowed users to write reviews. It subsequently deleted all existing user-generated reviews.

Other popular websites have disabled their comments sections, including National Public Radio (NPR), The Atlantic, Popular Science and Reuters.

Why the closures?

Organisations have a range of motivations for taking such actions: low uptake, running costs, the challenges of managing moderation, and problems with divisive comments, conflict and a lack of community cohesion.

In the case of Google+, low usage alongside data breaches appears to have sped up the decision.

NPR explained its motivation to remove user comments by highlighting how in one month its website NPR.org attracted 33 million unique users and 491,000 comments. But those comments came from just 19,400 commenters; the number of commenters who posted in consecutive months was a fraction of that.

This led NPR’s managing editor for digital news, Scott Montgomery, to say:

We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.

He said audiences had also moved to engage with NPR more on Facebook and Twitter.

Likewise, The Atlantic explained that its comments sections had become “unhelpful, even destructive, conversations” and was exploring new ways to give users a voice.

In the case of IMDB closing its message boards in 2017, the reason given was:

[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.

The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “(…) primary place they (users) choose to post comments and communicate with IMDb’s editors and one another”.

User backlash

Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.

But most organisations factor these considerations into their decision-making.

The petition to save IMDb’s message boards.
Change.org/Screengrab

For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.

Community members are at the mercy of hosting organisations, some profit-driven, which may have conflicting motivations to those of the users. It’s those organisations that hold the power to change or shut down what can be considered by some to be critical sources of knowledge, engagement and community building.

In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.

This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).

So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.

The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.

It’s also a move away from dealing with the reputational issues related to their use and controlling the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comments spaces.

Lost community knowledge

But within online groups there often exists social and network capital, as well as the stock of valuable knowledge that such community features create.




Read more:
Zuckerberg’s ‘new rules’ for the internet must move from words to actions


Often these communities are made of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.

They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.

Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.

Stan Karanasios, Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Livestreaming terror is abhorrent – but is more rushed legislation the answer?



The perpetrator of the Christchurch attacks livestreamed his killings on Facebook.
Shutterstock

Robert Merkel, Monash University

In the wake of the Christchurch attack, the Australian government has announced its intention to create new criminal offences relating to the livestreaming of violence on social media platforms.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill will create two new crimes:

It will be a criminal offence for social media platforms not to remove abhorrent violent material expeditiously. This will be punishable by 3 years’ imprisonment or fines that can reach up to 10% of the platform’s annual turnover.

Platforms anywhere in the world must notify the Australian Federal Police if they become aware their service is streaming abhorrent violent conduct that is happening in Australia. A failure to do this will be punishable by fines of up to A$168,000 for an individual or A$840,000 for a corporation.

The government is reportedly seeking to pass the legislation in the current sitting week of Parliament. This could be the last of the current parliament before an election is called. Labor, or some group of crossbenchers, will need to vote with the government if the legislation is to pass. But the draft bill was only made available to the Labor Party last night.

This is not the first time that legislation relating to the intersection of technology and law enforcement has been raced through parliament to the consternation of parts of the technology industry and other groups. Ongoing concerns around the Assistance and Access Bill demonstrate the risks of such rushed legislation.




Read more:
China bans streaming video as it struggles to keep up with live content


Major social networks already moderate violence

The government has defined “abhorrent violent material” as:

[…] material produced by a perpetrator, and which plays or livestreams the very worst types of offences. It will capture the playing or streaming of terrorism, murder, attempted murder, torture, rape and kidnapping on social media.

The major social media platforms already devote considerable resources to content moderation. They are often criticised for their moderation policies, and the inconsistent application of those policies. But content fitting the government’s definition is already clearly prohibited by Twitter, Facebook, and Snapchat.

Social media companies rely on a combination of technology, and thousands of people employed as content moderators to remove graphic content. Moderators (usually contractors, often on low wages) are routinely called on to remove a torrent of abhorrent material, including footage of murders and other violent crimes.




Read more:
We need to talk about the mental health of content moderators


Technology is helpful, but not a solution

Technologies developed to assist with content moderation are less advanced than one might hope – particularly for videos. Facebook’s own moderation tools are mostly proprietary. But we can get an idea of the state of the commercial art from Microsoft’s Content Moderator API.

The Content Moderator API is an online service designed to be integrated by programmers into consumer-facing communication systems. Microsoft’s tools can automatically recognise “racy or adult content”. They can also identify images similar to ones in a list. This kind of technology is used by Facebook, in cooperation with the office of the eSafety Commissioner, to help track and block image-based abuse – commonly but erroneously described as “revenge porn”.

The Content Moderator API cannot automatically classify an image, let alone a video, as “abhorrent violent content”. Nor can it automatically identify videos similar to another video.
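For a sense of how “images similar to ones in a list” matching works, here is a toy perceptual-hash sketch (illustrative only; it is not Microsoft’s API or any platform’s production system). Note that it can only flag content close to an already-known image, which is exactly the limitation described above.

```python
# Illustrative sketch only: hash-based "similar image" matching of the kind
# described above (comparing new uploads against a list of known images).
# Real systems are far more robust; this is a toy difference-hash (dHash) over
# an 8x9 brightness grid, assumed to have been produced by resizing and
# greyscaling the image beforehand.

def dhash(grid):
    """grid: 8 rows x 9 columns of brightness values (0-255)."""
    bits = []
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits  # 64-bit fingerprint

def hamming(a, b):
    """Number of differing bits; a small distance means a likely match."""
    return sum(x != y for x, y in zip(a, b))

known = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]       # "banned" image
upload = [[(r * 9 + c + 3) % 256 for c in range(9)] for r in range(8)]  # slightly altered copy

distance = hamming(dhash(known), dhash(upload))
print("flag for review" if distance <= 10 else "no match", f"(distance={distance})")
```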

Technology that could match videos is under development. For example, Microsoft is currently trialling a matching system specifically for video-based child exploitation material.

As well as developing new technologies themselves, the tech giants are enthusiastic adopters of methods and ideas devised by academic researchers. But they are some distance from being able to automatically identify re-uploads of videos that violate their terms of service, particularly when uploaders modify the video to evade moderators. The ability to automatically flag these videos as they are uploaded or streamed is even more challenging.

Important questions, few answers so far

Evaluating the government’s proposed legislative amendments is difficult given that details are scant. I’m a technologist, not a legal academic, but the scope and application of the legislation are currently unclear. Before any legislation is passed, a number of questions need to be addressed – too many to list here, but for instance:

Does the requirement to remove “abhorrent violent material” apply only to material created or uploaded by Australians? Does it only apply to events occurring within Australia? Or could foreign social media companies be liable for massive fines if videos created in a foreign country, and uploaded by a foreigner, were viewed within Australia?

Would attempts to render such material inaccessible from within Australia suffice (even though workarounds are easy)? Or would removal from access anywhere in the world be required? Would Australians be comfortable with a foreign law that required Australian websites to delete content displayed to Australians based on the decisions of a foreign government?




Read more:
Anxieties over livestreams can help us design better Facebook and YouTube content moderation


Complex legislation needs time

The proposed legislation does nothing to address the broader issues surrounding promotion of the violent white supremacist ideology that apparently motivated the Christchurch attacker. While that does not necessarily mean it’s a bad idea, it would seem very far from a full governmental response to the monstrous crime an Australian citizen allegedly committed.

It may well be that the scope and definitional issues are dealt with appropriately in the text of the legislation. But considering the government seems set on passing the bill in the next few days, it’s unlikely lawmakers will have the time to carefully consider the complexities involved.

While the desire to prevent further circulation of perpetrator-generated footage of terrorist attacks is noble, taking effective action is not straightforward. Yet again, the federal government’s inclination seems to be to legislate first and discuss later.

Robert Merkel, Lecturer in Software Engineering, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital campaigning on sites like Facebook is unlikely to swing the election




Glenn Kefford, Macquarie University

With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also any potential threats we are facing to the integrity of the election.

Invariably, this discussion leads straight to digital.

In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.

Some degree of confusion is understandable since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is for the most part, unfounded.




Read more:
Chinese social media platform WeChat could be a key battleground in the federal election


Why parties use digital media

In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.

The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.

Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.

Platforms like Facebook offer services that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with tailored messaging.

Voting, persuasion and mobilisation

While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often claimed to have.

What we know is that digital messaging can have a small but significant effect on mobilisation, that there are concerns about how it could be used to demobilise voters, and that it is an effective way to fundraise and organise. But its ability to independently persuade voters to change their votes is estimated to be close to zero.




Read more:
Australian political journalists might be part of a ‘Canberra bubble’, but they engage the public too


The exaggeration and lack of clarity around digital is problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.

While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as Trump in the US.

Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps them in their persuasion efforts, but this is through layering their messages across as many mediums as possible, and using the network effect that social media provides.

Data privacy and foreign interference

The two clear and obvious dangers related to digital are about data privacy and foreign meddling. We should not accept that our data is shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.

An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a majority of voters see a distinction between commercial entities advertising to us online compared to parties and other campaigners.

We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.

The more vexing issue is that of foreign interference. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.

This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the broader strategy.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


The digital battleground

As the 2019 campaign unfolds, we should remember that, while digital matters, there is no evidence it has an independent election-changing effect.

Australians should be most concerned with how our data are being used and sold, and about any attempts to meddle in our elections by state and non-state actors.

The current regulatory environment fails to meet community standards. More can and should be done to protect us and our democracy.


This article has been co-published with The Lighthouse, Macquarie University’s multimedia news platform.

Glenn Kefford, Senior Lecturer, Department of Modern History, Politics and International Relations, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a simple social group of three people – Ashley, Blair and Carmen – who already know one another, and have each others’ email address and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
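A minimal sketch of that logic (illustrative only; the data structures and names are invented, not Facebook’s implementation) might look like this:

```python
# Minimal sketch of the contact-upload logic described above (illustrative only).
# Uploading a contact list creates or enriches a profile keyed on the contact's
# phone/email, whether or not that person has ever joined.
profiles = {}  # key: person -> {"member": bool, "known_by": set of uploaders}

def upload_contacts(uploader, contacts):
    profiles.setdefault(uploader, {"member": True, "known_by": set()})["member"] = True
    for contact in contacts:
        shadow = profiles.setdefault(contact, {"member": False, "known_by": set()})
        shadow["known_by"].add(uploader)

def people_you_may_know(user):
    return [p for p, data in profiles.items()
            if user in data["known_by"] and data["member"] and p != user]

# Ashley joins first and uploads Blair's and Carmen's details.
upload_contacts("ashley", ["blair", "carmen"])
# Blair joins later: a ready-made suggestion exists, built before he signed up.
upload_contacts("blair", ["ashley", "carmen"])
print(people_you_may_know("blair"))  # ['ashley']
print(profiles["carmen"])            # non-member, yet known to two users
```

The point of the sketch is that Carmen’s entry exists, and keeps growing richer, without her ever agreeing to anything.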

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there are a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered as copyright-able work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator John Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Four ways social media companies and security agencies can tackle terrorism


Robyn Torok, Edith Cowan University

Prime Minister Malcolm Turnbull has joined Britain’s Prime Minister Theresa May in calling on social media companies to crack down on extremist material being published by users.

It comes in the wake of the recent terror attacks in Australia and Britain.

Facebook is considered a hotbed for terrorist recruitment, incitement, propaganda and the spreading of radical thinking. Twitter, YouTube and encrypted services such as WhatsApp and Telegram are also implicated.

Addressing the extent of such content on social media requires international cooperation from large social media platforms themselves and encrypted services.

Some of that work is already underway by many social media operators, with Facebook’s rules on this leaked only last month. Twitter says that in one six-month period it has suspended 376,890 accounts related to the promotion of terrorism.

While these measures are a good start, more can be done. A focus on disruption, encryption, recruitment and creating counter-narratives is recommended.

Disruption: remove content, break flow-on

Disruption of terrorists on social media involves reporting and taking down of radical elements and acts of violence, whether that be radical accounts or posted content that breaches community safety and standards.

Both the timing and the thoroughness of this removal are critical.

Disruption is vital for removing extreme content and breaking the flow-on effect while someone is in the process of being recruited by extremists.

Taking down accounts and content is difficult as there is often a large volume of content to remove. Sometimes it is not removed as quickly as needed. In addition, extremists typically have multiple accounts and can operate under various aliases at the same time.

Encryption: security authorities need access

When Islamic extremists use encrypted channels, it makes the fight against terrorism much harder. Extremists readily shift from public forums to encrypted areas, and often work in both simultaneously.

Encrypted networks are fast becoming a problem because of the “burn time” (destruction of messages) and the fact that extremists can communicate mostly undetected.

Operations to attack and kill members of the public in the West have been propagated on these encrypted networks.

The extremists set up a unique way of communicating within encrypted channels to offer advice. That way a terrorist can directly communicate with the Islamic State group and receive directives to undertake an attack in a specific country, including operational methods and procedures.

This is extremely concerning, and authorities – including intelligence agencies and federal police – require access to encrypted networks to do their work more effectively. They need the ability to access servers to obtain vital information to help thwart possible attacks on home soil.

This access will need to be granted in consultation with the companies that offer these services. But such access could be challenging and there could also be a backlash from privacy groups.

Recruitment: find and follow key words

It was once thought that the process of recruitment occurred over extended periods of time. This is true in some instances, and it depends on a multitude of individual experiences, personality types, one’s perception of identity, and the types of strategies and techniques used in the recruitment process.

There is no one path toward violent extremism, but what makes the process of recruitment quicker is the neurolinguistic programming (NLP) method used by terrorists.

Extremists use NLP across multiple platforms and are quick to usher their recruits into encrypted chats.

Key terms are always used alongside NLP, such as “in the heart of green birds” (which is used in reference to martyrdom), “Istishhad” (operational heroism of loving death more than the West loves life), “martyrdom” and “Shaheed” (becoming a martyr).

If social media companies know and understand these key terms, they can help by removing any reference to them on their platforms. This is being done by some platforms to a degree, but in many cases social media operators still rely heavily on users reporting inappropriate material.
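As a rough illustration, a keyword-flagging pass is simple to sketch (hypothetical code with placeholder terms, not any platform’s real system); the hard parts are spelling variants, other languages and context, which is why user reports still matter.

```python
# Hypothetical sketch of keyword-based flagging (placeholder terms, not a real
# watchlist). Real systems must also handle spelling variants, other languages
# and context, which is why platforms still lean heavily on user reports.
WATCHLIST = {"example banned phrase", "another flagged term"}

def flag_post(text):
    """Return the watchlist terms found in a post, if any."""
    lowered = text.lower()
    return [term for term in WATCHLIST if term in lowered]

posts = [
    "nothing to see here",
    "spreading an EXAMPLE BANNED PHRASE to new recruits",
]
for post in posts:
    hits = flag_post(post)
    if hits:
        print(f"flag for review: {hits} in: {post!r}")
```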

Create counter-narratives: banning alone won’t work

Since there are so many social media applications, each with a high volume of material that is both very dynamic and fluid, any attempts to deal with extremism must accept the limitations and challenges involved.

Attempts to shut down sites, channels, and web pages are just one approach. It is imperative that efforts are not limited to such strategies.

Counter-narratives are essential, as these deconstruct radical ideologies and expose their flaws in reasoning.

But these counter-narratives need to be more sophisticated given the ability of extremists to manipulate arguments and appeal to emotions, especially by using horrific images.

This is particularly important for those on the social fringe, who may feel a sense of alienation.

It is important for these individuals to realise that such feelings can be addressed within the context of mainstream Islam without resorting to radical ideologies that leave them open to exploitation by experienced recruiters. Such recruiters are well practised and know how to identify individuals who are struggling, and how to usher them along radical pathways.

Ultimately, there are ways around all procedures that attempt to tackle the problem of terrorist extremism on social media. But steps are slowly being taken to reduce the risk and spread of radical ideologies.

This must include counter-narratives, as well as the timely eradication of extremist material based on keywords and of any material from key radical preachers.

Robyn Torok, PhD – researcher and analyst, Edith Cowan University

This article was originally published on The Conversation. Read the original article.