Why social media platforms banning Trump won’t stop — or even slow down — his cause


Bronwyn Carlson, Macquarie University

Last week Twitter permanently suspended US President Donald Trump in the wake of his supporters’ violent storming of Capitol Hill. Trump was also suspended from Facebook and Instagram indefinitely.

Heads quickly turned to the right-wing Twitter alternative Parler — which seemed to be a logical place of respite for the digitally de-throned president.

But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.

These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.

Trump received 74,223,744 votes (46.9%) in the election, making the magnitude of his following clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.

In his first appearance since the Capitol attack, Trump described the impeachment process as “a continuation of the greatest witch hunt in the history of politics”.

Not budging

With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.

Tweeting via @POTUS, the president’s official Twitter account, he said he might consider building his own platform. Twitter promptly removed this tweet. He also tweeted: “We will not be SILENCED!”.

This threat may carry some weight, as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.

For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.

The internet is also notoriously hard to police – it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.

It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:

I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.

As far as efforts to quash such extremism go, they’re usually too little, too late.

Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.






The psychology of hate

Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.

Others have highlighted the many options still available for Trump fans to congregate since the demise of Parler, which was used to communicate plans ahead of the siege on the Capitol. Gab is one platform many Trump supporters have flocked to.

It’s important to remember hate speech, racism and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.

And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.

When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshipping elites from within government, business and media are running a “deep state” against Trump.

Social media and white supremacy: a love story

The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.

Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.

It recognises social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favour “free speech” over the safety of vulnerable communities online.

For instance, Matamoros-Fernández’s research found in Australia, platforms such as Facebook “favoured the offenders over Indigenous people” by tending to lean in favour of free speech.

Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.

On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply put, platforms have no strong incentive to tackle hate speech and racism — not until failing to do so hurts their profits.

After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.






The need for a paradigm shift

When it comes to imagining a future with less hate, racism and violence, a key mistake is looking for solutions within the existing structure.

Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.

But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.

Trump’s presidency will end in the coming days, but extremist groups and the broader movement they occupy will remain, both in real life and online.








Bronwyn Carlson, Professor, Indigenous Studies, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

No, Twitter is not censoring Donald Trump. Free speech is not guaranteed if it harms others




Katharine Gelber, The University of Queensland

The recent storming of the US Capitol has led a number of social media platforms to remove President Donald Trump’s account. In the case of Twitter, the ban is permanent. Others, like Facebook, have taken him offline until after President-elect Joe Biden’s inauguration next week.

This has led to a flurry of commentary in the Australian media about “free speech”. Treasurer Josh Frydenberg has said he is “uncomfortable” with Twitter’s removal of Trump, while the acting prime minister, Michael McCormack, has described it as “censorship”.

Meanwhile, MPs like Craig Kelly and George Christensen continue to ignore the evidence and promote misinformation about the nature of the violent, pro-Trump mob that attacked the Capitol.

A growing number of MPs are also reportedly calling for consistent and transparent rules to be applied by online platforms in a bid to combat hate speech and other types of harmful speech.

Some have conflated this effort with the restrictions on Trump’s social media usage, as though both of these issues reflect the same problem.

Much of this commentary is misguided, wrong and confusing. So let’s pull it apart a bit.

There is no free speech “right” to incite violence

There is no free speech argument in existence that suggests an incitement of lawlessness and violence is protected speech.

Quite the contrary. Nineteenth-century free speech proponent John Stuart Mill argued the sole reason one’s liberty may be interfered with (including restrictions on free speech) is “self-protection” — in other words, to protect people from harm or violence.






Additionally, incitement to violence is a criminal offence in all liberal democratic orders. There is an obvious reason for this: violence is harmful. It harms those who are immediately targeted (five people died in the riots last week) and those who are intimidated as a result of the violence to take action or speak up against it.

It also harms the institutions of democracy themselves, which rely on elections rather than civil wars and a peaceful transfer of power.

To suggest taking action against speech that incites violence is “censoring” the speaker is completely misleading.

There is no free speech “right” to appear on a particular platform

There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.

It is ludicrous to suggest there is. If this “right” were to exist, it would mean any citizen could demand to have their opinions aired on the front page of the Sydney Morning Herald and, if refused, claim their free speech had been violated.






What does exist is a general right to express oneself in public discourse, relatively free from regulation, as long as one’s speech does not harm others.

Trump still possesses this right. He has a podium in the West Wing designed for this specific purpose, which he can make use of at any time.

Were he to do so, the media would cover what he says, just as they covered his comments prior to, during and immediately after the riots. This included him telling the rioters that he loved them and that they were “very special”.

Trump told his supporters before the Capitol was overrun: “if you don’t fight like hell, you’re not going to have a country anymore”.

Does the fact he’s the president change this?

In many free speech arguments, political speech is accorded a higher level of protection than other forms of speech (such as commercial speech, for example). Does the fact this debate concerns the president of the United States change things?

No, it does not. There is no doubt Trump has been given considerable leeway in his public commentary prior to — and during the course of — his presidency. However, he has now crossed a line into stoking imminent lawlessness and violence.

This cannot be protected speech just because it is “political”. If this were the case, it would suggest the free speech of political elites can and should have no limits at all.

Yet, in all liberal democracies – even the United States which has the strongest free speech protection in the world – free speech has limits. These include the incitement of violence and crime.

Are social media platforms over-censoring?

The last decade or so has seen a vigorous debate over the attitudes and responses of social media platforms to harmful speech.

The big tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.

However, platforms have recently started to respond to demands and public pressure to address the harms of the speech they facilitate – from countering violent extremism to fake accounts, misinformation, revenge porn and hate speech.

They have developed community standards for content moderation that are publicly available. They release regular reports on their content moderation processes.

Facebook has even created an independent oversight board to arbitrate disputes over their decision making on content moderation.

They do not always do very well at this. One of the core problems is their desire to create algorithms and policies that are applicable universally across their global operations. But such a thing is impossible when it comes to free speech. Context matters in determining whether and under what circumstances speech can harm. This means they make mistakes.






Where to now?

The calls by MPs Anne Webster and Sharon Claydon to address hate speech online are important. They are part of the broader push internationally to find ways to ensure the benefits of the internet can be enjoyed more equally, and that a person’s speech does not silence or harm others.

Arguments about harm are longstanding, and have been widely accepted globally as forming a legitimate basis for intervention.

But the suggestion Trump has been censored is simply wrong. It misleads the public into believing all “free speech” claims have equal merit. They do not.

We must work to ensure harmful speech is regulated in order to ensure broad participation in the public discourse that is essential to our lives — and to our democracy. Anything less is an abandonment of the principles and ethics of governance.

Katharine Gelber, Professor of Politics and Public Policy, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Happy birthday Instagram! 5 ways doing it for the ‘gram has changed us




Tama Leaver, Curtin University; Crystal Abidin, Curtin University, and Tim Highfield, University of Sheffield

Tomorrow marks Instagram’s tenth birthday. Having amassed more than a billion active users worldwide, the app has changed radically in that decade. And it has changed us.

1. Instagram’s evolution

When it was launched on October 6, 2010 by Kevin Systrom and Mike Krieger, Instagram was an iPhone-only app. The user could take photos (and only take photos — the app could not load existing images from the phone’s gallery) within a square frame. These could be shared, with an enhancing filter if desired. Other users could comment or like the images. That was it.

As we chronicle in our book, the platform has grown rapidly and been at the forefront of an increasingly visual social media landscape.

In 2012, Facebook purchased Instagram in a deal worth US$1 billion (A$1.4 billion), which in retrospect probably seems cheap. Instagram is now one of the most profitable jewels in the Facebook crown.

Instagram has integrated new features over time, but it did not invent all of them.

Instagram Stories, with more than half a billion daily users, was shamelessly borrowed from Snapchat in 2016. It lets users post 10-second content bites that disappear after 24 hours. These rivers of casual and intimate content (later integrated into Facebook) are widely considered to have revitalised the app.

Similarly, IGTV is Instagram’s answer to YouTube’s longer-form video. And if the recently-released Reels isn’t a TikTok clone, we’re not sure what else it could be.






2. Under the influencers

Instagram is largely responsible for the rapid professionalisation of the influencer industry. Insiders estimated the influencer industry would grow to US$9.7 billion (A$13.5 billion) in 2020, though COVID-19 has since taken a toll on this as with other sectors.

As early as 2011, professional lifestyle bloggers throughout Southeast Asia were moving to Instagram, turning it into a brimming marketplace. They sold ad space via post captions and monetised selfies through sponsored products. Such vernacular commerce pre-dates Instagram’s Paid Partnership feature, which launched in late 2017.

Behind-the-scenes snaps can enhance Insta-authenticity.

The use of images as a primary mode of communication, as opposed to the text-based modes of the blogging era, facilitated an explosion of aspiring influencers. The threshold for turning oneself into an online brand was dramatically lowered.

Instagrammers relied more on photography and their looks — enhanced by filters and editing built into the platform.

Soon, the “extremely professional and polished, the pretty, pristine, and picturesque” started to become boring. Finstagrams (“fake Instagram”) and secondary accounts proliferated and allowed influencers to display behind-the-scenes snippets and authenticity through calculated performances of amateurism.

3. Instabusiness as usual

As influencers commercialised Instagram captions and photos, those who had owned online shops turned hashtag streams into advertorial campaigns. They relied on the labour of followers to publicise their wares and amplify their reach.

Bigger businesses followed suit and so did advice from marketing experts for how best to “optimise” engagement.

In mid-2016, Instagram belatedly launched business accounts and tools, allowing companies easy access to back-end analytics. The introduction of the “swipeable carousel” of story content in early 2017 further expanded commercial opportunities for businesses by multiplying ad space per Instagram post. This year, in the tradition of Instagram corporatising user innovations, it announced Instagram Shops would allow businesses to sell products directly via a digital storefront. Users had previously done this via links.

The original Instagram logo paid tribute to the Polaroid aesthetic.





4. Sharenting

Instagram isn’t just where we tell the visual story of ourselves, but also where we co-create each other’s stories. Nowhere is this more evident than the way parents “sharent”, posting their children’s daily lives and milestones.

Many children’s Instagram presence begins before they are even born. Sharing ultrasound photos has become a standard way to announce a pregnancy. Over 1.5 million public Instagram posts are tagged #genderreveal.

Sharenting raises privacy questions: who owns a child’s image? Can children withdraw publishing permission later?

Sharenting entails handing over children’s data to Facebook as part of the larger realm of surveillance capitalism. A saying that emerged around the same time as Instagram was born still rings true: “When something online is free, you’re not the customer, you’re the product”. We pay for Instagram’s “free” platform with our user data and our children’s data, too, when we share photos of them.

Many babies appear on Instagram before they are even born.





5. Seeing through the frame

The apparent “Instagrammability” of a meal, a place, or an experience has seen the rise of numerous visual trends and tropes.

Short-lived Instagram Stories and disappearing Direct Messages add more spaces to express more things without the threat of permanence.






The events of 2020 have shown that our ways of seeing on Instagram reveal both the possibilities and the pitfalls of social media.

In June, racial justice activism on #BlackoutTuesday, while extremely popular, had the effect of swamping the #BlackLivesMatter hashtag with black squares.

Instagram is rife with disinformation and conspiracy theories which hijack the look and feel of authoritative content. The template of popular Instagram content means familiar aesthetics can be weaponised to spread misinformation.

Ultimately, the last decade has seen Instagram become one of the main lenses through which we see the world, personally and politically. Users communicate and frame the lives they share with family, friends and the wider world.








Tama Leaver, Associate Professor in Internet Studies, Curtin University; Crystal Abidin, Senior Research Fellow & ARC DECRA, Internet Studies, Curtin University, and Tim Highfield, Lecturer in Digital Media and Society, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.