Category Archives: social networking
Apple’s new ‘app tracking transparency’ has angered Facebook. How does it work, what’s all the fuss about, and should you use it?

Paul Haskell-Dowland, Edith Cowan University and Nikolai Hampton, Edith Cowan University

Apple users across the globe are adopting the latest operating system update, called iOS 14.5, featuring the now-obligatory new batch of emojis.
But there’s another change that’s arguably less fun but much more significant for many users: the introduction of “app tracking transparency”.
This feature promises to usher in a new era of user-oriented privacy, and not everyone is happy — most notably Facebook, which relies on tracking web users’ browsing habits to sell targeted advertising. Some commentators have described it as the beginnings of a new privacy feud between the two tech behemoths.
So, what is app tracking transparency?
App tracking transparency is a continuation of Apple’s push to be recognised as the platform of privacy. The new feature allows apps to display a pop-up notification that explains what data the app wants to collect, and what it proposes to do with it.
There is nothing users need to do to gain access to the new feature, other than install the latest iOS update, which happens automatically on most devices. Once upgraded, apps that use tracking functions will display a request to opt in or out of this functionality.

How does it work?
As Apple has explained, the app tracking transparency feature is a new “application programming interface”, or API — a suite of programming commands used by developers to interact with the operating system.
The API gives software developers a few pre-canned functions that allow them to do things like “request tracking authorisation” or use the tracking manager to “check the authorisation status” of individual apps.
In more straightforward terms, this gives app developers a uniform way of requesting these tracking permissions from the device user. It also means the operating system has a centralised location for storing and checking what permissions have been granted to which apps.
What is missing from the fine print is that there is no technical mechanism that actually prevents an app from tracking a user. The app tracking transparency framework is merely a pop-up box.
It is also interesting to note the specific wording of the pop-up: “ask app not to track”. If the application is using legitimate “device advertising identifiers”, answering no will result in this identifier being set to zero. This will reduce the tracking capabilities of apps that honour Apple’s tracking policies.
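The real API is exposed to apps in Swift and Objective-C, but the flow described above (ask the user once, remember the answer centrally, and hand back a zeroed identifier when tracking is refused) can be sketched conceptually. The Python below is purely illustrative, assuming a hypothetical central permission store managed by the operating system; none of these names are Apple's.

```python
from enum import Enum

class TrackingStatus(Enum):
    NOT_DETERMINED = "notDetermined"  # the user has not been asked yet
    DENIED = "denied"                 # the user tapped "Ask App Not to Track"
    AUTHORIZED = "authorized"         # the user tapped "Allow"

# Hypothetical stand-in for the operating system's central permission registry.
_permissions = {}

def request_tracking_authorization(app_id, user_allows):
    """Simulate the one-off pop-up: record the user's choice for this app."""
    if _permissions.get(app_id, TrackingStatus.NOT_DETERMINED) is TrackingStatus.NOT_DETERMINED:
        _permissions[app_id] = TrackingStatus.AUTHORIZED if user_allows else TrackingStatus.DENIED
    return _permissions[app_id]

def tracking_authorization_status(app_id):
    """The check any app (or the OS) can run later without re-prompting the user."""
    return _permissions.get(app_id, TrackingStatus.NOT_DETERMINED)

def advertising_identifier(app_id, real_idfa):
    """If tracking was refused, the advertising identifier comes back as all zeros."""
    if tracking_authorization_status(app_id) is TrackingStatus.AUTHORIZED:
        return real_idfa
    return "00000000-0000-0000-0000-000000000000"

# A user declines tracking for a hypothetical app, so its identifier is zeroed.
request_tracking_authorization("com.example.newsreader", user_allows=False)
print(advertising_identifier("com.example.newsreader", "REAL-DEVICE-IDFA"))
```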
However, if an app is really determined to track you, there are many techniques that could allow it to create surreptitious user-specific identifiers, which may be difficult for Apple to detect or prevent.
For example, while an app might not use Apple’s “device advertising identifier”, it would be easy for the app to generate a little bit of “random data”. This data could then be passed between sites under the guise of normal operations such as retrieving an image with the data embedded in the filename. While this would contravene Apple’s developer rules, detecting this type of secret data could be very difficult.
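To make that risk concrete, here is a hypothetical Python sketch of the kind of covert identifier described above: the app invents a random value once, then smuggles it inside what looks like an ordinary image request. It is not taken from any real app; it simply shows why such behaviour is hard to tell apart from normal network traffic.

```python
import secrets
from urllib.parse import urlencode

def make_covert_id():
    """Generate a random per-install value once, then reuse it for every request."""
    return secrets.token_hex(8)  # e.g. 'f3a1c9d27b40e685'

def disguised_image_url(covert_id):
    """Embed the identifier in what appears to be a routine image fetch."""
    query = urlencode({"img": f"banner_{covert_id}.png"})
    return f"https://cdn.example-ads.invalid/assets?{query}"

tracker_id = make_covert_id()
print(disguised_image_url(tracker_id))
# To a reviewer this looks like fetching a banner image, yet the ad server
# receives the same identifier on every request from this device.
```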
Read more:
Your smartphone apps are tracking your every move – 4 essential reads
Apple seems prepared to crack down hard on developers who don’t play by the rules. The most recent additions to Apple’s App Store guidelines explicitly tell developers:
You must receive explicit permission from users via the App Tracking Transparency APIs to track their activity.
It’s unlikely major app developers will want to fall foul of this policy — a ban from the App Store would be costly. But it’s hard to imagine Apple sanctioning a really big player like Facebook or TikTok without some serious behind-the-scenes negotiation.
Why is Facebook objecting?
Facebook is fuelled by web users’ data. Inevitably, anything that gets in the way of its gargantuan revenue-generating network is seen as a threat. In 2020, Facebook’s revenue from advertising exceeded US$84 billion – a 21% rise on 2019.
The issues are deep-rooted and reflect the two tech giants’ very different business models. Apple’s business model is the sale of laptops, computers, phones and watches – with a significant proportion of its income derived from the vast ecosystem of apps and in-app purchases used on these devices. Apple’s app revenue was reported at US$64 billion in 2020.
With a vested interest in ensuring its customers are loyal and happy with its devices, Apple is well positioned to deliver privacy without harming profits.
Should I use it?
Ultimately, it is a choice for the consumer. Many apps and services are offered ostensibly for free to users. App developers often cover their costs through subscription models, in-app purchases or in-app advertising. If enough users decide to embrace privacy controls, developers will either change their funding model (perhaps moving to paid apps) or attempt to find other ways to track users to maintain advertising-derived revenue.
If you don’t want your data to be collected (and potentially sold to unnamed third parties), this feature offers one way to restrict the amount of your data that is trafficked in this way.
But it’s also important to note that tracking of users and devices is a valuable tool for advertising optimisation by building a comprehensive picture of each individual. This increases the relevance of each advert while also reducing advertising costs (by only targeting users who are likely to be interested). Users also arguably benefit, as they see more (relevant) adverts that are contextualised for their interests.
It may slow down the rate at which we receive personalised ads in apps and websites, but this change won’t be an end to intrusive digital advertising. In essence, this is the price we pay for “free” access to these services.
Read more:
Facebook data breach: what happened and why it’s hard to know if your data was leaked
Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Nikolai Hampton, School of Science, Edith Cowan University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The ebb and flow of COVID-19 vaccine support: what social media tells us about Australians and the jab

Bela Stantic, Griffith University; Rodney Stewart, Griffith University, and Sharyn Rundle-Thiele, Griffith University

Australia’s COVID-19 vaccination rollout has hit yet another crossroads. Public confidence has wavered following the federal government’s announcement last week that the Pfizer vaccine was the preferred choice for people under age 50.
The advice was based on an extremely low risk of severe blood clots forming in younger recipients of the AstraZeneca vaccine. Many patients under 50 have since cancelled or been turned away from their vaccine appointments, according to reports.
Our Griffith University team is monitoring vaccine support levels among Australians. We’re doing this by analysing “big data” gleaned from social media platforms.
According to our analysis, the biggest drop in COVID-19 vaccine acceptance rates in Australia happened when blood clotting incidents were reported in some European countries, prompting rollouts to be stopped.
An evolving debate
Our team trawled through social media feeds for two months, collecting data on public attitudes towards the vaccine. We also watched these opinions change and evolve in response to important media announcements.
We found the Australian public cares about the vaccine’s effectiveness, side effects and roll-out process. Social media sentiment in particular is helping us identify misinformation in a way more traditional survey methods can’t.
Our findings, which have been provided to Queensland Health, are aiding decision makers in devising the best strategies to provide vaccine information to the public.
Standard survey approaches
Carrying out surveys can be costly and time-consuming. It’s hard to get large samples because many people approached won’t participate. It’s also difficult to return to respondents later to understand how their beliefs may be changing over time.
Between October last year and February this year, the Gold Coast Public Health Unit ran a survey asking people if they intended to get the COVID-19 vaccine.
Almost 19,000 survey invitations were given to people who visited fever clinics at the Gold Coast University Hospital and Robina Health Precinct. From these, 2,706 responses came back.
Results showed just over 50% of respondents “definitely intended” to receive the COVID-19 vaccine. Around 15% said they “probably” or “definitely” wouldn’t receive the vaccine.

Similarly, a study conducted by researchers at the Australian National University showed one in five (21.7%) respondents would “probably” not or “definitely” not receive a vaccine.
While such surveys provide a snapshot from one point in time, big data analytics can examine social media data (such as from Twitter) in real time and provide ongoing insight.
Nearly 100,000 posts from 42,000 accounts
We applied algorithms to social media content published between January 24 and March 24. In just two months, more than 97,000 Twitter posts from more than 42,000 Australian accounts (with 308,331 “likes”) were collected.
These posts attracted a further 49,642 comments from another 15,648 unique accounts. This sample size is much bigger than the surveys mentioned above. Notably, the data we collected showed us how vaccine hesitancy had changed during that time.
We used techniques called “sentiment polarity” calculations and “topic modelling analysis” and also looked at the number of likes received by posts for and against the vaccine.
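The exact pipeline isn't detailed here, but sentiment polarity scoring of tweets can be sketched with an off-the-shelf lexicon analyser. The example below uses the open-source VADER model purely as an illustration (not necessarily the tooling the Griffith team used) and weights each hypothetical post by its likes:

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyser = SentimentIntensityAnalyzer()

# Hypothetical tweets standing in for the collected Australian posts.
tweets = [
    {"text": "Booked my COVID vaccine today, feeling relieved and grateful", "likes": 120},
    {"text": "Not getting the jab after those blood clot reports, too risky", "likes": 45},
]

def like_weighted_polarity(posts):
    """Average the VADER compound score of each post, weighted by its likes."""
    total_weight = sum(p["likes"] for p in posts) or 1
    weighted_sum = sum(
        analyser.polarity_scores(p["text"])["compound"] * p["likes"] for p in posts
    )
    return weighted_sum / total_weight

print(f"Like-weighted sentiment polarity: {like_weighted_polarity(tweets):+.2f}")
```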
During the two months, we were able to identify links between changes in sentiment and specific media announcements from trusted news sources. The announcements had an obvious impact on people’s opinions.
Negative reporting had a direct impact
Vaccine support started at around 80% in January. We then saw declining support as COVID cases in Australia dropped. But when the media showed people receiving the Pfizer vaccine in February, support grew again.

Negative stories started to appear in mid-to-late February, and support levels on social media feeds dropped. In late February, the media reported on a poorly trained doctor who gave higher-than-recommended vaccine doses to two elderly people.
Reports then emerged of multiple European Union countries suspending the AstraZeneca COVID-19 vaccine over concerns about blood clotting as a potential side effect. This marked the biggest drop in support, from more than 80% to below 60%.

In late March, support bounced back when the same countries resumed rolling out the AstraZeneca vaccine, and news emerged that GP clinics in Australia were gearing up to do the same.
There are some limitations to our research method. For instance, the views of Twitter users don’t necessarily represent the general population. That said, our data pool does seem to reflect a fairly diverse group of users sharing opinions by posting, re-tweeting and liking posts.
All of these opinions are captured and incorporated into our analysis. Considering the large volume of data used, as well as insights from previous successful predictions, we are confident in our ability to provide an accurate near real-time analysis.
Addressing what the public wants addressed
Big data analysis can deliver fast results that show not just the prevalence of vaccine hesitancy, but also help us understand the factors that drive it.
Further, by focusing on the regions or demographics which have the most doubts — whether this is certain age groups, or people with a given level of education — big data analysis can keep high-level decision makers informed about how the public feels.
This in turn helps them identify key issues and vulnerable areas to which they can direct targeted messaging. In this way, the news sources the public respects and trusts can (and must) be used to improve health outcomes for all.
Read more:
Just the facts, or more detail? To battle vaccine hesitancy, the messaging has to be just right
Bela Stantic, Professor, Director of Big data and smart analytics lab – IIIS, Griffith University; Rodney Stewart, Professor, Griffith School of Engineering, Griffith University, and Sharyn Rundle-Thiele, Professor and Director, Social Marketing @ Griffith, Griffith University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Laws making social media firms expose major COVID myths could help Australia’s vaccine rollout

Tauel Harper, University of Western Australia
With a vaccine rollout impending, key groups have backed calls for the Australian government to force social media platforms to share details about popular coronavirus misinformation.
An open letter was put forth by independent group Reset Australia. It was endorsed by the Doherty Institute, Immunisation Coalition and Immunisation Foundation of Australia, along with the research group I’m working with, Coronavax — which reports community concerns about the COVID-19 vaccination program to government and health workers.
The danger of coronavirus and vaccine-related misinformation should not be underestimated. That said, big tech companies need to be engaged the right way to help the Australian public avoid what could potentially be a lifetime of health problems.
A leaderboard for COVID myths
We’re living in a dangerous time for both journalism and public education. We don’t have the legal infrastructure or public forums required to address the spread of coronavirus misinformation. Reset’s proposal intends to address these shortcomings, to better regulate this content in Australia.
It states there should be a mandate requiring social media platforms to provide more details on the highest-trending online posts spreading misinformation about COVID.
These “live lists” would be updated in real time and would let politicians, researchers, medical experts, journalists and the public keep track of which communities are being exposed to coronavirus and vaccine-related lies and what the major stories are.
The proposal suggests the eSafety commissioner should determine how the information is shared publicly to help prevent the potential victimisation of particular individuals.
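In practice, a “live list” of this kind is a continuously refreshed leaderboard: posts flagged as misinformation, ranked by engagement, with personal identifiers stripped before publication. A minimal Python sketch, with all field names and sample data invented, might look like this:

```python
from hashlib import sha256

def pseudonymise(account_handle):
    """Replace an account handle with a stable, non-obvious token before publication."""
    return sha256(account_handle.encode()).hexdigest()[:10]

def live_list(flagged_posts, top_n=5):
    """Rank posts flagged as COVID misinformation by total engagement."""
    def engagement(post):
        return post["shares"] + post["comments"] + post["likes"]

    ranked = sorted(flagged_posts, key=engagement, reverse=True)
    return [
        {
            "claim": post["claim_summary"],
            "engagement": engagement(post),
            "community": post["group_topic"],       # e.g. "wellness", "local politics"
            "poster": pseudonymise(post["account"]),
        }
        for post in ranked[:top_n]
    ]

sample = [
    {"claim_summary": "Vaccines alter your DNA", "shares": 900, "comments": 300,
     "likes": 2500, "group_topic": "wellness", "account": "@someuser"},
]
print(live_list(sample))
```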
Conspiracies can slip through the cracks
Many people rely on news (or what they think is news) presented on social media. Unlike traditional journalism, this isn’t fact-checked and has no editorial oversight to ensure accuracy. Moreover, the vast scale of this misinformation extends beyond platforms’ best efforts to curb it.

While social media analytic sites such as CrowdTangle provide some insight for researchers, it’s not enough.
For example, the data CrowdTangle shares from Facebook is limited to public posts in large public pages and groups. We can see engagement for these posts (numbers of likes and comments) but not reach (how many people have seen a particular post).
Reset’s open letter recommends extending data access across the entire social networking site, including (in Facebook’s case) posts on people’s personal profiles (not to be confused with private conversations via Facebook Messenger).
While this does raise privacy concerns, the system would be set up so personal identifiers are removed. Instead of paying social media platforms in exchange for data, we would be putting pressure on them via the law and, at base, their “social license to operate”.
Taking down extremists isn’t the goal
Far-right conspiracy group QAnon has managed to entrench itself in certain pockets in Australia. Its believers claim there is a “deep state” plot against former US President Donald Trump.
This group’s conspiracies have extended to include the bogus claim that COVID is an invention of political elites to ensure compliance from the people and usher in oppressive rules. As the theory goes, the vaccine itself is also a tool for indoctrination and/or population control.
Read more:
Why QAnon is attracting so many followers in Australia — and how it can be countered
Public figures have further amplified the conspiracies, with celebrity chef Pete Evans seemingly spearheading the celebrity faction of the QAnon “cause” in Australia.
The real value of Reset’s policy recommendation, however, is not in trying to change these peoples’ views. Rather, what researchers require are more details on trends and levels of engagement with certain types of content.
One focus would be to identify groups of people exposed to misinformation who could potentially be swayed in the direction of conspiracies.
If we can figure out which particular demographics are more involved in spreading misinformation, or perhaps more vulnerable to it, this would help with efforts to engage with these communities.
We already know young people are generally less confident about receiving a COVID vaccine than people over 65, but we have less insight into what their concerns are, or whether there are particular rumours circulating online that are making them wary of vaccinations.
Once these are identified, they can be prioritised in the minds of health workers and policy makers, such as by creating educational content in a group’s specific language to help dispel any myths.
Read more:
Why social media platforms banning Trump won’t stop — or even slow down — his cause
Pressure on platforms is mounting
There is the argument that sharing links to online misinformation could help spread it further. We’ve already seen unscrupulous journalists repeat popular terms from online conspiracists (such as “Dictator Dan”, in reference to Victorian Premier Daniel Andrews) in their own coverage to engage a particular audience.
But ultimately, the information being highlighted is already out there, so it’s better for us to take it on openly and honestly. It’s also not just a matter of monitoring misinformation, but also monitoring legitimate public concern about any vaccine side effects.
The increased visibility of the public’s concerns will force government, researchers, journalists and health professionals to engage more directly with those concerns.

The goal now is to invite Facebook, Twitter and Google to help us develop a tool that highlights public issues while also protecting users’ privacy.
If compelled by Australian law, the platforms will likely be concerned about their legal liability for any data passed into the public domain. This is understandable, considering the Cambridge Analytica debacle happened because Facebook was too open with users’ data.
Then again, Facebook already has CrowdTangle, and Twitter has also been relatively amenable in the fight against COVID misinformation. There are good reasons to suggest these platforms will continue to invest in fighting misinformation, even if just to protect their reputation and profits.
Like it or not, social media have changed the way we discuss issues of public importance — and have certainly changed the game for public communication. What Reset Australia is proposing is an important step in addressing the spread and influence of COVID misinformation in our communities.
Tauel Harper, Lecturer, Media and Communication, University of Western Australia
This article is republished from The Conversation under a Creative Commons license. Read the original article.
There’s no such thing as ‘alternative facts’. 5 ways to spot misinformation and stop sharing it online

Mark Pearson, Griffith University
The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.
Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.
We are living in a dangerous age, where the internet makes it possible to spread misinformation far and wide, and most people lack the basic fact-checking abilities to discern fact from fiction — or, worse, the desire to develop a healthy skepticism at all.
Read more:
Stopping the spread of COVID-19 misinformation is the best 2021 New Year’s resolution
Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.
Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills:
1. Distinguishing verified facts from myths, rumours and opinions
Cold, hard facts are the building blocks for considered and reasonable opinions in politics, media and law.
And there are no such things as “alternative facts” — facts are facts. Just because a falsity has been repeated many times by important people and their affiliates does not make it true.
We cannot expect the average citizen to have the skills of an academic researcher, journalist or judge in determining the veracity of an asserted statement. However, we can teach people some basic strategies before they mistake mere assertions for actual facts.
Does a basic internet search show these assertions have been confirmed by usually reliable sources – such as non-partisan mainstream news organisations, government websites and expert academics?
Students are taught to look at a site’s URL — a .gov or .edu ending, for example — as a good hint about the factual basis of an assertion.
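As a rough illustration of that heuristic (and it is only a heuristic: a .gov or .edu address is a hint of authority, not a guarantee of accuracy), a few lines of Python can flag such domains:

```python
from urllib.parse import urlparse

AUTHORITATIVE_SUFFIXES = (".gov", ".edu", ".gov.au", ".edu.au")

def looks_authoritative(url):
    """Rough heuristic: does the host end in a government or education suffix?"""
    host = urlparse(url).hostname or ""
    return host.endswith(AUTHORITATIVE_SUFFIXES)

print(looks_authoritative("https://www.cdc.gov/coronavirus"))          # True
print(looks_authoritative("https://totally-real-news.example/story"))  # False
```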
Searches and hashtags in social media are much less reliable as verification tools because you could be fishing within the “bubble” (or “echo chamber”) of those who share common interests, fears and prejudices – and are more likely to be perpetuating myths and rumours.
2. Mixing up your media and social media diet
We need to break out of our own “echo chambers” and our tendencies to access only the news and views of those who agree with us, on the topics that interest us and where we feel most comfortable.
For example, over much of the past five years, I have deliberately switched between various conservative and liberal media outlets when something important has happened in the US.
By looking at the coverage of the left- and right-wing media, I can hope to find a common set of facts both sides agree on — beyond the partisan rhetoric and spin. And if only one side is reporting something, I know to question this assertion and not just take it at face value.
3. Being skeptical and assessing the factual premise of an opinion
Journalism students learn to approach the claims of their sources with a “healthy skepticism”. For instance, if you are interviewing someone and they make what seems to be a bold or questionable claim, it’s good practice to pause and ask what facts the claim is based on.
Students are taught in media law this is the key to the fair comment defence to a defamation action. This permits us to publish defamatory opinions on matters of public interest as long as they are reasonably based on provable facts put forth by the publication.
The ABC’s Media Watch used this defence successfully (at trial and on appeal) when it criticised a Sydney Sun-Herald journalist’s reporting that claimed toxic materials had been found near a children’s playground.
This assessment of the factual basis of an opinion is not reserved for defamation lawyers – it is an exercise we can all undertake as we decide whether someone’s opinion deserves our serious attention and republication.
Read more:
Teaching children digital literacy skills helps them navigate and respond to misinformation
4. Exploring the background and motives of media and sources
A key skill in media literacy is the ability to look behind the veil of those who want our attention — media outlets, social media influencers and bloggers — to investigate their allegiances, sponsorships and business models.
For instance, these are some key questions to ask:
- who is behind that think tank whose views you are retweeting?
- who owns the online newspaper you read and what other commercial interests do they hold?
- is your media diet dominated by news produced from the same corporate entity?
- why does someone need to be so loud or insulting in their commentary; is this indicative of their neglect of important facts that might counter their view?
- what might an individual or company have to gain or lose by taking a position on an issue, and how might that influence their opinion?
Just because someone has an agenda does not mean their facts are wrong — but it is a good reason to be even more skeptical in your verification processes.
Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?
5. Reflecting and verifying before sharing
We live in an era of instant republication. We immediately retweet and share content we see on social media, often without even having read it thoroughly, let alone having fact-checked it.
Mindful reflection before pressing that sharing button would allow you to ask yourself, “Why am I even choosing to share this material?”
You could also help shore up democracy by engaging in the fact-checking processes mentioned above to avoid being part of the problem by spreading misinformation.
Mark Pearson, Professor of Journalism and Social Media, Griffith Centre for Social and Cultural Research, Griffith University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Trump’s time is up, but his Twitter legacy lives on in the global spread of QAnon conspiracy theories

Verica Rupar, Auckland University of Technology and Tom De Smedt, University of Antwerp
“The lie outlasts the liar,” writes historian Timothy Snyder, referring to outgoing president Donald Trump and his contribution to the “post-truth” era in the US.
Indeed, the mass rejection of reason that erupted in a political mob storming Capitol Hill mere weeks before the inauguration of Joe Biden tests our ability to comprehend contemporary American politics and its emerging forms of extremism.
Much has been written about Trump’s role in spreading misinformation and the media failures that enabled him. His contribution to fuelling extremism, flirting with the political fringe, supporting conspiracy theories and, most of all, Twitter demagogy created an environment in which he has been seen as an “accelerant” in his own right.
While the scale of the international damage is yet to be calculated, there is something we can measure right now.
In September last year, the London-based Media Diversity Institute (MDI) asked us to design a research project that would systematically track the extent to which US-originated conspiracy theory group QAnon had spread to Europe.
Titled QAnon 2: spreading conspiracy theories on Twitter, the research is part of the international Get the Trolls Out! (GTTO) project, focusing on religious discrimination and intolerance.
Twitter and the rise of QAnon
GTTO media monitors had earlier noted the rise of QAnon support among Twitter users in Europe and were expecting a further surge of derogatory talk ahead of the 2020 US presidential election.
We examined the role religion played in spreading conspiracy theories, the most common topics of tweets, and what social groups were most active in spreading QAnon ideas.
We focused on Twitter because its increasing use — some sources estimate 330 million people used Twitter monthly in 2020 — has made it a powerful political communication tool. It has given politicians such as Trump the opportunity to promote, facilitate and mobilise social groups on an unprecedented scale.
Read more:
QAnon and the storm of the U.S. Capitol: The offline effect of online conspiracy theories
Using AI tools developed by data company Textgain, we analysed about half-a-million Twitter messages related to QAnon to identify major trends.
By observing how hashtags were combined in messages, we examined the network structure of QAnon users posting in English, German, French, Dutch, Italian and Spanish. Researchers identified about 3,000 different hashtags related to QAnon used by 1,250 Twitter profiles.
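Textgain’s tools themselves are proprietary, but the underlying idea (treating hashtags that appear together in the same message as linked nodes in a network) can be sketched in a few lines of Python; the sample tweets are invented:

```python
import re
from collections import Counter
from itertools import combinations

# Invented messages standing in for the half-million tweets analysed in the study.
tweets = [
    "The storm is coming #QAnon #WWG1WGA #Trump2020",
    "Wake up people #QAnon #DeepState",
    "#DeepState media lies again #WWG1WGA",
]

def hashtag_edges(posts):
    """Count how often each pair of hashtags appears in the same post."""
    edges = Counter()
    for post in posts:
        tags = sorted(set(re.findall(r"#\w+", post.lower())))
        edges.update(combinations(tags, 2))
    return edges

for (tag_a, tag_b), weight in hashtag_edges(tweets).most_common(3):
    print(f"{tag_a} -- {tag_b}: co-occurs {weight} time(s)")
```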

An American export
Every fourth QAnon tweet originated in the US (300). Far behind were tweets from other countries: Canada (30), Germany (25), Australia (20), the United Kingdom (20), the Netherlands (15), France (15), Italy (10), Spain (10) and others.
We examined QAnon profiles that share each other’s content, Trump tweets and YouTube videos, and found over 90% of these profiles shared the content of at least one other identified profile.
Seven main topics were identified: support for Trump, support for EU-based nationalism, support for QAnon, deep state conspiracies, coronavirus conspiracies, religious conspiracies and political extremism.
Hashtags rooted in US evangelicalism sometimes portrayed Trump as Jesus, as a superhero, or clad in medieval armour, with underlying Biblical references to a coming apocalypse in which he will defeat the forces of evil.
Overall, the coronavirus pandemic appears to function as an important conduit for all such messaging, with QAnon acting as a rallying flag for discontent among far-right European movements.
Measuring the toxicity of tweets
We used Textgain’s hate-speech detection tools to assess toxicity. Tweets written in English had a high level of antisemitism. In particular, they targeted public figures such as Jewish-American billionaire investor and philanthropist George Soros, or revived old conspiracies about secret Jewish plots for world domination. Soros was also a popular target in other languages.
We also found a highly polarised debate around the coronavirus public health measures employed in Germany, often using Third Reich rhetoric.
New language to express negative sentiments was coined and then adopted by others — in particular, pejorative terms for face masks and slurs directed at political leaders and others who wore masks.
Accompanying memes ridiculed political leaders, displaying them as alien reptilian overlords or antagonists from popular movies, such as Star Wars Sith Lords and the cyborg from The Terminator.
Most of the QAnon profiles tap into the same sources of information: Trump tweets, YouTube disinformation videos and each other’s tweets. It forms a mutually reinforcing confirmation bias — the tendency to search for, interpret, favour, and recall information that confirms prior beliefs or values.
Read more:
Despite being permanently banned, Trump’s prolific Twitter record lives on
Where does it end?
Harvesting discontent has always been a powerful political tool. In a digital world this is more true than ever.
By mid 2020, Donald Trump had six times more followers on Twitter than when he was elected. Until he was suspended from the platform, his daily barrage of tweets found a ready audience in ultra-right groups in the US who helped his misinformation and inflammatory rhetoric jump the Atlantic to Europe.
Social media platforms have since attempted to reduce the spread of QAnon. In July 2020, Twitter suspended 7,000 QAnon-related accounts. In August, Facebook deleted over 790 groups and restricted the accounts of hundreds of others, along with thousands of Instagram accounts.
Read more:
Trump’s Twitter feed shows ‘arc of the hero,’ from savior to showdown
In January this year, all Trump’s social media accounts were either banned or restricted. Twitter suspended 70,000 accounts that share QAnon content at scale.
But further Textgain analysis of 50,000 QAnon tweets posted in December and January showed toxicity had almost doubled, including 750 tweets inciting political violence and 500 inciting violence against Jewish people.
Those tweets were being systematically removed by Twitter. But calls for violence ahead of the January 20 inauguration continued to proliferate, Trump’s QAnon supporters appearing as committed and vocal as ever.
The challenge for both the Biden administration and the social media platforms themselves is clear. But our analysis suggests any solution will require a coordinated international effort.
Verica Rupar, Professor, Auckland University of Technology and Tom De Smedt, Postdoctoral research associate, University of Antwerp
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Why social media platforms banning Trump won’t stop — or even slow down — his cause
Bronwyn Carlson, Macquarie University
Last week Twitter permanently suspended US President Donald Trump in the wake of his supporters’ violent storming of Capitol Hill. Trump was also suspended from Facebook and Instagram indefinitely.
Heads quickly turned to the right-wing Twitter alternative Parler — which seemed to be a logical place of respite for the digitally de-throned president.
But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.
These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.
With an election vote count of 74,223,744 (46.9%), the magnitude of Trump’s following is clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.
Not budging
With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.
Tweeting via the official Twitter account for the president @POTUS, he said he might consider building his own platform. Twitter promptly removed this tweet. He also tweeted: “We will not be SILENCED!”.
This threat may carry some weight, as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.
For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.
The internet is also notoriously hard to police – it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.
It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:
I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.
As far as efforts to quash such extremism go, they’re usually too little, too late.
Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.
The psychology of hate
Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.
Others have highlighted the many options still available for Trump fans to congregate since the departure of Parler, which was used to communicate plans ahead of the siege at the Capitol. Gab is one platform many Trump supporters have flocked to.
It’s important to remember hate speech, racism and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.
And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.
When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshipping elites from within government, business and media are running a “deep state” against Trump.
Social media and white supremacy: a love story
The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.
Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.
It recognises social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favour “free speech” over the safety of vulnerable communities online.
For instance, Matamoros-Fernández’s research found that in Australia, platforms such as Facebook “favoured the offenders over Indigenous people” by tending to lean in favour of free speech.
Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.
On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply put, there’s no strong incentive for platforms to tackle hate speech and racism — at least not until failing to do so hurts their profits.
After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.
Read more:
Profit, not free speech, governs media companies’ decisions on controversy
The need for a paradigm shift
When it comes to imagining a future with less hate, racism and violence, a key mistake is looking for solutions within the existing structure.
Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.
But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.
Trump’s presidency will end in the coming days, but extremist groups and the broader movement they are part of will remain, both in real life and online.
Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day
Bronwyn Carlson, Professor, Indigenous Studies, Macquarie University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Despite being permanently banned, Trump’s prolific Twitter record lives on
Audrey Courty, Griffith University
For years, US President Donald Trump pushed the limits of Twitter’s content policies, raising pressure on the platform to exercise tougher moderation.
Ultimately, the violent siege of the US Capitol forced Twitter’s hand and the platform permanently banned Trump’s personal account, @realDonaldTrump.
But this doesn’t mean the 26,000 or so tweets posted during his presidency have vanished. They are now a matter of public record — and have been preserved accordingly.
Read more:
Twitter permanently suspends Trump after U.S. Capitol siege, citing risk of further violence
The de-platforming of Donald Trump
The loss of public access to Trump’s original Twitter posts means every hyperlink to one of his tweets is now defunct. Embedded tweets are still visible as simple text, but can no longer be traced to their source.
Adding to this, retweets of the president’s messages no longer appear on the forwarding user’s feed. Quote tweets have been replaced with the message: “This Tweet is unavailable” and replies can’t be viewed in one place anymore.
But even if Trump’s account had not been suspended, he would have had to part with it at the end of his presidency anyway, since he used it extensively for presidential purposes.
Under the US government’s ethics regulations, US officials are prevented from benefiting personally from their public office, and this applies to social media accounts.
Former US ambassador to the United Nations, Nikki Haley, also used her personal and political Twitter accounts to conduct official business as ambassador. The account was wiped and renamed in 2019 once her role ended.
Where did all the information go?
Despite being permanently suspended, Trump’s prolific Twitter record is not lost. Under the Presidential Records Act, all of Trump’s social media communications are considered public property, including non-public messages sent via direct chat features.
The act defines presidential records as any materials created or received by the president (or immediate staff or advisors) in the course of conducting his official duties.
It was passed in 1978, out of concern that former president Richard Nixon would destroy the tapes which ultimately led to his resignation. Today, it remains a way to force governments to be transparent with the public.
And although Trump tweeted extensively from his personal Twitter account created in 2009, @realDonaldTrump, it has undoubtedly been used for official purposes.
From banning transgender military service to threatening the use of nuclear weapons against North Korea, his tweets on this account constitute an important part of the presidential record.
As such, the US National Archives says it will preserve all of them, including deleted posts — as well as all posts from @POTUS, the official presidential account.
The Trump administration will have to turn over the digital records for both accounts on January 20, which will eventually be made available to the public on a Trump Library website.
Still, the president reserves the right to invoke as many as six specific restrictions to public access for up to 12 years.
We don’t know whether Trump will invoke restrictions. But even if he does, grassroots initiatives have already archived all of his tweets.
For example, the Trump Twitter Archive is a free, public resource that lets users search and filter through more than 56,000 tweets by Trump since 2009, including deleted tweets since 2016.

A matter of public record
In 2017, Trump told Fox News he believed he may have never been elected without Twitter — and that he viewed it as an effective means for pushing his message.
Twitter also benefited from this relationship. Trump’s 88 million followers (as of when his account was suspended) generated endless streams of user engagement for the social media giant.
Trump’s approach to using Twitter was unprecedented. He bypassed traditional media channels, instead tweeting for political and diplomatic purposes — including to make important policy announcements.
His tweets set the agenda for US politics during his presidency. For example, they influenced foreign relations between the US and Mexico, North Korea, China and Iran. They were also used to endorse allies and attack rivals.
Read more:
Twitter diplomacy: how Trump is using social media to spur a crisis with Mexico
The closest thing to a town square
For all the reasons listed above, the value of Trump’s Twitter record extends beyond historical research. It’s a way to hold him accountable for what he has said and done.
And this will soon be on display as the US Democratic Party looks to impeach him for the second time for “inciting insurrection”.
Trump’s administration of “alternative facts” has continuously stonewalled a number of enquiries — going as far as refusing to testify before Congress on certain matters.
From this point of view, Trump’s Twitter feed was arguably one of the few places where his claims and decisions could really be scrutinised. And indeed, news coverage of the president often relied heavily on it.
The amplification effect
The media’s reliance on President Trump’s tweets ultimately highlights a key feature of today’s hybrid media system: it is highly responsive to a populist communication style.
Trump’s use of Twitter indirectly contributed to his election success in 2016, by helping boost media coverage of his campaign. Researchers also observed him strategically increasing his Twitter activity in line with waning news interest.
Through a constant stream of provocative remarks, Trump exploited news values and continuously inserted himself into the news cycle. And for journalists under pressure to churn out content, his impassioned messages were the perfect sound bites.
Now, stripped of his favourite mouthpiece, it’s uncertain whether Trump will find another way to exert his influence. But one thing is for sure: his time on Twitter will go down in history.
Audrey Courty, PhD candidate, School of Humanities, Languages and Social Science, Griffith University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
No, Twitter is not censoring Donald Trump. Free speech is not guaranteed if it harms others

Katharine Gelber, The University of Queensland
The recent storming of the US Capitol has led a number of social media platforms to remove President Donald Trump’s account. In the case of Twitter, the ban is permanent. Others, like Facebook, have taken him offline until after President-elect Joe Biden’s inauguration next week.
This has led to a flurry of commentary in the Australian media about “free speech”. Treasurer Josh Frydenberg has said he is “uncomfortable” with Twitter’s removal of Trump, while the acting prime minister, Michael McCormack, has described it as “censorship”.
Meanwhile, MPs like Craig Kelly and George Christensen continue to ignore the evidence and promote misinformation about the nature of the violent, pro-Trump mob that attacked the Capitol.
A growing number of MPs are also reportedly calling for consistent and transparent rules to be applied by online platforms in a bid to combat hate speech and other types of harmful speech.
Some have conflated this effort with the restrictions on Trump’s social media usage, as though both of these issues reflect the same problem.
Much of this commentary is misguided, wrong and confusing. So let’s pull it apart a bit.
There is no free speech “right” to incite violence
There is no free speech argument in existence that suggests an incitement of lawlessness and violence is protected speech.
Quite to the contrary. Nineteenth century free speech proponent John Stuart Mill argued the sole reason one’s liberty may be interfered with (including restrictions on free speech) is “self-protection” — in other words, to protect people from harm or violence.
Read more:
Parler: what you need to know about the ‘free speech’ Twitter alternative
Additionally, incitement to violence is a criminal offence in all liberal democratic orders. There is an obvious reason for this: violence is harmful. It harms those who are immediately targeted (five people died in the riots last week) and those who are intimidated as a result of the violence to take action or speak up against it.
It also harms the institutions of democracy themselves, which rely on elections rather than civil wars and a peaceful transfer of power.
To suggest taking action against speech that incites violence is “censoring” the speaker is completely misleading.
There is no free speech “right” to appear on a particular platform
There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.
It is ludicrous to suggest there is. If this “right” were to exist, it would mean any citizen could demand to have their opinions aired on the front page of the Sydney Morning Herald and, if refused, claim their free speech had been violated.
Read more:
Trump’s Twitter tantrum may wreck the internet
What does exist is a general right to express oneself in public discourse, relatively free from regulation, as long as one’s speech does not harm others.
Trump still possesses this right. He has a podium in the West Wing designed for this specific purpose, which he can make use of at any time.
Were he to do so, the media would cover what he says, just as they covered his comments prior to, during and immediately after the riots. This included him telling the rioters that he loved them and that they were “very special”.

Does the fact he’s the president change this?
In many free speech arguments, political speech is accorded a higher level of protection than other forms of speech (commercial speech, for example). Does the fact this debate concerns the president of the United States change things?
No, it does not. There is no doubt Trump has been given considerable leeway in his public commentary prior to — and during the course of — his presidency. However, he has now crossed a line into stoking imminent lawlessness and violence.
This cannot be protected speech just because it is “political”. If this was the case, it would suggest the free speech of political elites can and should have no limits at all.
Yet, in all liberal democracies – even the United States which has the strongest free speech protection in the world – free speech has limits. These include the incitement of violence and crime.
Are social media platforms over-censoring?
The last decade or so has seen a vigorous debate over the attitudes and responses of social media platforms to harmful speech.
The big tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.
However, platforms have recently started to respond to demands and public pressure to address the harms of the speech they facilitate – from countering violent extremism to fake accounts, misinformation, revenge porn and hate speech.
They have developed community standards for content moderation that are publicly available. They release regular reports on their content moderation processes.
Facebook has even created an independent oversight board to arbitrate disputes over their decision making on content moderation.
They do not always do very well at this. One of the core problems is their desire to create algorithms and policies that are applicable universally across their global operations. But such a thing is impossible when it comes to free speech. Context matters in determining whether and under what circumstances speech can harm. This means they make mistakes.
Read more:
Why the business model of social media giants like Facebook is incompatible with human rights
Where to now?
The calls by MPs Anne Webster and Sharon Claydon to address hate speech online are important. They are part of the broader push internationally to find ways to ensure the benefits of the internet can be enjoyed more equally, and that a person’s speech does not silence or harm others.
Arguments about harm are longstanding, and have been widely accepted globally as forming a legitimate basis for intervention.
But the suggestion Trump has been censored is simply wrong. It misleads the public into believing all “free speech” claims have equal merit. They do not.
We must work to ensure harmful speech is regulated in order to ensure broad participation in the public discourse that is essential to our lives — and to our democracy. Anything less is an abandonment of the principles and ethics of governance.
Katharine Gelber, Professor of Politics and Public Policy, The University of Queensland
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?
Timothy Graham, Queensland University of Technology
Amid the chaos in the US Capitol, stoked largely by rhetoric from President Donald Trump, Twitter has locked his account, with 88.7 million followers, for 12 hours.
Facebook and Instagram quickly followed suit, locking Trump’s accounts — with 35.2 million followers and 24.5 million, respectively — for at least two weeks, the remainder of his presidency. This ban was extended from 24 hours.
The locks are the latest effort by social media platforms to clamp down on Trump’s misinformation and baseless claims of election fraud.
They came after Twitter labelled a video posted by Trump and said it posed a “risk of violence”. Twitter removed users’ ability to retweet, like or comment on the post — the first time this has been done.
In the video, Trump told the agitators at the Capitol to go home, but at the same time called them “very special” and said he loved them for disrupting the Congressional certification of President-elect Joe Biden’s win.
That tweet has since been taken down for “repeated and severe violations” of Twitter’s civic integrity policy. YouTube and Facebook have also removed copies of the video.
But as people across the world scramble to make sense of what’s going on, one thing stands out: the events that transpired today were not unexpected.
Given the lack of regulation and responsibility shown by platforms over the past few years, it’s fair to say the writing was on the wall.
The real, violent consequences of misinformation
While Trump is no stranger to contentious and even racist remarks on social media, Twitter’s action to lock the president’s account is a first.
The line was arguably crossed by Trump’s implicit incitement of violence and disorder within the halls of the US Capitol itself.
Nevertheless, it would have been a difficult decision for Twitter (and Facebook and Instagram), with several factors at play. Some of these are short-term, such as the immediate potential for further violence.
Then there’s the question of whether tighter regulation could further incite rioting Trump supporters by feeding into their theories claiming the existence of a large-scale “deep state” plot against the president. It’s possible.
Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans
But a longer-term consideration — and perhaps one at the forefront of the platforms’ priorities — is how these actions will affect their value as commercial assets.
I believe the platforms’ biggest concern is their own bottom line. They are commercial companies legally obliged to pursue profits for shareholders. Commercial imperatives and user engagement are at the forefront of their decisions.
What happens when you censor a Republican president? You can lose a huge chunk of your conservative user base, or upset your shareholders.
Despite what we think of them, or how we might use them, platforms such as Facebook, Twitter, Instagram and YouTube aren’t set up in the public interest.
For them, it’s risky to censor a head of state when they know that content is profitable. Doing it involves a complex risk calculus — with priorities being shareholders, the companies’ market value and their reputation.
Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day
Walking a tightrope
The platforms’ decisions to not only force the removal of several of Trump’s posts but also to lock his accounts carries enormous potential loss of revenue. It’s a major and irreversible step.
And they are now forced to keep a close eye on one another. If one appears too “strict” in its censorship, it may attract criticism and lose user engagement and ultimately profit. At the same time, if platforms are too loose with their content regulation, they must weather the storm of public critique.
You don’t want to be the last organisation to make the tough decision, but you don’t necessarily want to be the first, either — because then you’re the “trial balloon” who volunteered to potentially harm the bottom line.
For all major platforms, the past few years have presented high stakes. Yet there have been plenty of opportunities to stop the situation snowballing to where it is now.
From Trump’s baseless election fraud claims to his false ideas about the coronavirus, time and again platforms have turned a blind eye to serious cases of mis- and disinformation.
The storming of the Capitol is a logical consequence of failures that have arguably been a long time coming.
The coronavirus pandemic illustrated this. While Trump was partially censored by Twitter and Facebook for misinformation, the platforms failed to take lasting action to deal with the issue at its core.
In the past, platforms have cited constitutional reasons to justify not censoring politicians. They have claimed a civic duty to give elected officials an unfiltered voice.
This line of argument should have ended with the “Unite the Right” rally in Charlottesville in August 2017, when Trump responded to the killing of an anti-fascism protester by claiming there were “very fine people on both sides”.
An age of QAnon, Proud Boys and neo-Nazis
While there’s no silver bullet for online misinformation and extremist content, there’s also no doubt platforms could have done more in the past that may have prevented the scenes witnessed in Washington DC.
In a crisis, there’s a rush to make sense of everything. But we need only look at what led us to this point. Experts on disinformation have been crying out for platforms to do more to combat disinformation and its growing domestic roots.
Now, in 2021, extremists such as neo-Nazis and QAnon believers no longer have to lurk in the depths of online forums or commit lone acts of violence. Instead, they can violently storm the Capitol.
It would be a cardinal error to not appraise the severity and importance of the neglect that led us here. In some ways, perhaps that’s the biggest lesson we can learn.
This article has been updated to reflect the news that Facebook and Instagram extended their 24-hour ban on President Trump’s accounts.
Timothy Graham, Senior Lecturer, Queensland University of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.