Tag Archives: social media
The ebb and flow of COVID-19 vaccine support: what social media tells us about Australians and the jab

Bela Stantic, Griffith University; Rodney Stewart, Griffith University, and Sharyn Rundle-Thiele, Griffith University
Australia’s COVID-19 vaccination rollout has hit yet another crossroads. Public confidence has wavered following the federal government’s announcement last week that the Pfizer vaccine was the preferred choice for people under age 50.
The advice was based on an extremely low risk of severe blood clots forming in younger recipients of the AstraZeneca vaccine. Many patients under 50 have since cancelled or been turned away from their vaccine appointments, according to reports.
Our Griffith University team is monitoring vaccine support levels among Australians. We’re doing this by analysing “big data” gleaned from social media platforms.
According to our analysis, the biggest drop in COVID-19 vaccine acceptance rates in Australia happened when blood clotting incidents were reported in some European countries, prompting rollouts to be stopped.
An evolving debate
Our team trawled through social media feeds for two months, collecting data on public attitudes towards the vaccine. We also watched these opinions change and evolve in response to important media announcements.
We found the Australian public cares about the vaccine’s effectiveness, side effects and roll-out process. Social media sentiment in particular is helping us identify misinformation in a way more traditional survey methods can’t.
Our findings, which have been provided to Queensland Health, are aiding decision makers in devising the best strategies to provide vaccine information to the public.
Standard survey approaches
Carrying out surveys can be costly and time-consuming. It’s hard to get large samples because many people approached won’t participate. It’s also difficult to return to respondents later to understand how their beliefs may be changing over time.
Between October last year and February this year, the Gold Coast Public Health Unit ran a survey asking people if they intended to get the COVID-19 vaccine.
Almost 19,000 survey invitations were given to people who visited fever clinics at the Gold Coast University Hospital and Robina Health Precinct. From these, 2,706 responses came back.
Results showed just over 50% of respondents “definitely intended” to receive the COVID-19 vaccine. Around 15% said they “probably” or “definitely” wouldn’t receive the vaccine.

Similarly, a study conducted by researchers at the Australian National University showed one in five (21.7%) respondents would “probably” not or “definitely” not receive a vaccine.
While such surveys provide a snapshot from one point in time, big data analytics can examine social media data (such as from Twitter) in real time and provide ongoing insight.
Nearly 100,000 posts from 42,000 accounts
We applied algorithms to social media content published between January 24 and March 24. In just two months, more than 97,000 Twitter posts from more than 42,000 Australian accounts (with 308,331 “likes”) were collected.
These posts attracted a further 49,642 comments from another 15,648 unique accounts. This sample size is much bigger than the surveys mentioned above. Notably, the data we collected showed us how vaccine hesitancy had changed during that time.
We used techniques called “sentiment polarity” calculations and “topic modelling analysis” and also looked at the number of likes received by posts for and against the vaccine.
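To give a sense of how this works in practice, below is a minimal sketch of sentiment-polarity scoring and topic modelling in Python, using the open-source VADER and gensim libraries. This is not our actual pipeline: the models are stand-ins for whatever a production system would use, and the example tweets are invented for illustration.

```python
# A minimal, illustrative sketch of sentiment polarity and topic modelling
# on tweets. It is NOT the study's actual pipeline: VADER and LDA stand in
# for the team's models, and the example tweets are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer  # pip install nltk
from gensim import corpora, models                     # pip install gensim

nltk.download("vader_lexicon", quiet=True)

tweets = [
    "Booked my vaccine appointment today, feeling hopeful!",
    "Worried about side effects, the blood clot reports are scary.",
    "The rollout at my local GP clinic was quick and easy.",
]

# 1. Sentiment polarity: VADER's compound score ranges from -1 to +1.
sia = SentimentIntensityAnalyzer()
for tweet in tweets:
    score = sia.polarity_scores(tweet)["compound"]
    stance = "supportive" if score >= 0.05 else "hesitant" if score <= -0.05 else "neutral"
    print(f"{score:+.2f} {stance}: {tweet}")

# 2. Topic modelling: a tiny LDA run to surface themes such as
#    effectiveness, side effects and the rollout process.
docs = [t.lower().replace(",", "").split() for t in tweets]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```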
During the two months, we were able to identify links between changes in sentiment and specific media announcements from trusted news sources. The announcements had an obvious impact on people’s opinions.
Negative reporting had a direct impact
Vaccine support started at around 80% in January. We then saw declining support as COVID cases in Australia dropped. But when the media showed people receiving the Pfizer vaccine in February, support grew again.

Negative stories started to appear from mid-to-late February, and support levels on social media feeds dropped. In late February, news outlets reported that an inadequately trained doctor had given two elderly people higher than recommended doses of the vaccine.
Reports then emerged of multiple European Union countries suspending the AstraZeneca COVID-19 vaccine over concerns about blood clotting as a potential side effect. This marked the biggest drop in support, from more than 80% to below 60%.

In late March, support bounced back when the same countries resumed rolling out the AstraZeneca vaccine, and news emerged that GP clinics in Australia were gearing up to do the same.
There are some limitations to our research method. For instance, the views of Twitter users don’t necessarily represent the general population. That said, our data pool does seem to reflect a fairly diverse group of users sharing opinions by posting, re-tweeting and liking posts.
All of these opinions are captured and incorporated into our analysis. Considering the large volume of data used, as well as insights from our past successful predictions, we are confident in our ability to provide accurate near real-time analysis.
Addressing what the public wants addressed
Big data analysis can deliver fast results that show not just the prevalence of vaccine hesitancy, but also help us understand the factors that drive it.
Further, by focusing on the regions or demographics which have the most doubts — whether this is certain age groups, or people with a given level of education — big data analysis can keep high-level decision makers informed about how the public feels.
This in turn helps them identify key issues and vulnerable areas, and direct targeted messaging to them. In this way, the news sources the public respects and trusts can (and must) be used to improve health outcomes for all.
Read more:
Just the facts, or more detail? To battle vaccine hesitancy, the messaging has to be just right
Bela Stantic, Professor, Director of Big data and smart analytics lab – IIIS, Griffith University; Rodney Stewart, Professor, Griffith School of Engineering, Griffith University, and Sharyn Rundle-Thiele, Professor and Director, Social Marketing @ Griffith, Griffith University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Laws making social media firms expose major COVID myths could help Australia’s vaccine rollout

Tauel Harper, University of Western Australia
With a vaccine rollout impending, key groups have backed calls for the Australian government to force social media platforms to share details about popular coronavirus misinformation.
An open letter was put forth by independent group Reset Australia. It was endorsed by the Doherty Institute, Immunisation Coalition and Immunisation Foundation of Australia, along with the research group I’m working with, Coronavax — which reports community concerns about the COVID-19 vaccination program to government and health workers.
The issue of coronavirus and vaccine-related misinformation should not be underestimated. That said, big tech companies need to be engaged in the right way to help the Australian public avoid what could potentially be a lifetime of health problems.
A leaderboard for COVID myths
We’re living in a dangerous time for both journalism and public education. We don’t have the legal infrastructure or public forums required to address the spread of coronavirus misinformation. Reset’s proposal intends to address these shortcomings, to better regulate this content in Australia.
It proposes a mandate requiring social media platforms to provide more details about the highest-trending posts spreading misinformation about COVID.
These “live lists” would be updated in real time and would let politicians, researchers, medical experts, journalists and the public keep track of which communities are being exposed to coronavirus and vaccine-related lies and what the major stories are.
The proposal suggests the eSafety commissioner should determine how the information is shared publicly to help prevent the potential victimisation of particular individuals.
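To make the “live list” idea concrete, here is a minimal sketch of how such a leaderboard could be computed once platforms supplied flagged posts with engagement counts. The schema, weighting and numbers are invented for illustration; a real list would depend on the data platforms actually expose.

```python
# A sketch of the proposed "live list": rank flagged posts by engagement so
# researchers and journalists can see which COVID myths are trending. The
# schema (post_id, claim, shares, comments, flagged) is invented for
# illustration; real data would have to come from the platforms themselves.
import pandas as pd

posts = pd.DataFrame([
    {"post_id": 1, "claim": "5G causes COVID", "shares": 3200, "comments": 850, "flagged": True},
    {"post_id": 2, "claim": "Vaccine alters DNA", "shares": 5100, "comments": 1400, "flagged": True},
    {"post_id": 3, "claim": "Official clinic opening times", "shares": 90, "comments": 12, "flagged": False},
])

# Engagement here is a simple weighted sum; a production system would also
# need reach (views), which platforms do not currently expose.
posts["engagement"] = posts["shares"] + 2 * posts["comments"]
live_list = (posts[posts["flagged"]]
             .sort_values("engagement", ascending=False)
             .loc[:, ["claim", "engagement"]])
print(live_list.to_string(index=False))
```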
Conspiracies can slip through the cracks
Many people rely on news (or what they think is news) presented on social media. Unlike traditional journalism, this isn’t fact-checked and has no editorial oversight to ensure accuracy. Moreover, the vast scale of this misinformation extends beyond platforms’ best efforts to curb it.

While social media analytic sites such as CrowdTangle provide some insight for researchers, it’s not enough.
For example, the data CrowdTangle shares from Facebook is limited to public posts in large public pages and groups. We can see engagement for these posts (numbers of likes and comments) but not reach (how many people have seen a particular post).
Reset’s open letter recommends extending data access to cover the entire social networking site, including (in Facebook’s case) posts on people’s personal profiles (not to be confused with private conversations via Facebook Messenger).
While this does raise privacy concerns, the system would be set up so personal identifiers are removed. Instead of paying social media platforms in exchange for data, we would be putting pressure on them via the law and, at base, their “social license to operate”.
Taking down extremists isn’t the goal
Far-right conspiracy group QAnon has managed to entrench itself in certain pockets in Australia. Its believers claim there is a “deep state” plot against former US President Donald Trump.
This group’s conspiracies have extended to include the bogus claim that COVID is an invention of political elites to ensure compliance from the people and usher in oppressive rules. As the theory goes, the vaccine itself is also a tool for indoctrination and/or population control.
Read more:
Why QAnon is attracting so many followers in Australia — and how it can be countered
Public figures have further amplified the conspiracies, with celebrity chef Pete Evans seemingly spearheading the celebrity faction of the QAnon “cause” in Australia.
The real value of Reset’s policy recommendation, however, is not in trying to change these people’s views. Rather, what researchers require are more details on trends and levels of engagement with certain types of content.
One focus would be to identify groups of people exposed to misinformation who could potentially be swayed in the direction of conspiracies.
If we can figure out which particular demographics are more involved in spreading misinformation, or perhaps more vulnerable to it, this would help with efforts to engage with these communities.
We already know young people are generally less confident about receiving a COVID vaccine than people over 65, but we have less insight into what their concerns are, or whether particular rumours circulating online are making them wary of vaccinations.
Once these are identified, they can be prioritised in the minds of health workers and policy makers, such as by creating educational content in a group’s specific language to help dispel any myths.
Read more:
Why social media platforms banning Trump won’t stop — or even slow down — his cause
Pressure on platforms is mounting
There is the argument that sharing links to online misinformation could help spread it further. We’ve already seen unscrupulous journalists repeat popular terms from online conspiracists (such as “Dictator Dan”, in reference to Victorian Premier Daniel Andrews) in their own coverage to engage a particular audience.
But ultimately, the information being highlighted is already out there, so it’s better for us to take it on openly and honestly. It’s also not just a matter of monitoring misinformation, but also monitoring legitimate public concern about any vaccine side effects.
The increased visibility of the public’s concerns will force government, researchers, journalists and health professionals to engage more directly with those concerns.

The goal now is to invite Facebook, Twitter and Google to help us develop a tool that highlights public issues while also protecting users’ privacy.
If compelled by Australian law, the platforms would likely be concerned about their legal liability for any data passed into the public domain. This is understandable, considering the Cambridge Analytica debacle happened because Facebook was too open with users’ data.
Then again, Facebook already has CrowdTangle, and Twitter has been relatively amenable in the fight against COVID misinformation. There are good reasons to suggest these platforms will continue to invest in fighting misinformation, even if just to protect their reputation and profits.
Like it or not, social media have changed the way we discuss issues of public importance — and have certainly changed the game for public communication. What Reset Australia is proposing is an important step in addressing the spread and influence of COVID misinformation in our communities.
Tauel Harper, Lecturer, Media and Communication, UWA, University of Western Australia
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Why social media platforms banning Trump won’t stop — or even slow down — his cause
Bronwyn Carlson, Macquarie University
Last week Twitter permanently suspended US President Donald Trump in the wake of his supporters’ violent storming of Capitol Hill. Trump was also suspended from Facebook and Instagram indefinitely.
Heads quickly turned to the right-wing Twitter alternative Parler — which seemed to be a logical place of respite for the digitally de-throned president.
But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.
These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.
With an election vote count of 74,223,744 (46.9%), the magnitude of Trump’s following is clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.
Not budging
With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.
Tweeting via the official Twitter account for the president @POTUS, he said he might consider building his own platform. Twitter promptly removed this tweet. He also tweeted: “We will not be SILENCED!”.
This threat may carry some weight, as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.
For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.
The internet is also notoriously hard to police – it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.
It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:
I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.
As far as efforts to quash such extremism go, they’re usually too little, too late.
Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.
The psychology of hate
Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.
Others have highlighted the many options still available for Trump fans to congregate since the demise of Parler, which was used to communicate plans ahead of the siege at the Capitol. Gab is one platform many Trump supporters have flocked to.
It’s important to remember hate speech, racism and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.
And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.
When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshipping elites from within government, business and media are running a “deep state” against Trump.
Social media and white supremacy: a love story
The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.
Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.
It recognises social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favour “free speech” over the safety of vulnerable communities online.
For instance, Matamoros-Fernández’s research found that in Australia, platforms such as Facebook “favoured the offenders over Indigenous people” by tending to lean in favour of free speech.
Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.
On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply, there’s no strong incentive for platforms to tackle the issues of hate speech and racism — not until not doing so negatively impacts profits.
After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.
Read more:
Profit, not free speech, governs media companies’ decisions on controversy
The need for a paradigm shift
When it comes to imagining a future with less hate, racism and violence, a key mistake is looking for solutions within the existing structure.
Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.
But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.
Trump’s presidency will end in the coming days, but extremist groups and the broader movement they form part of will remain, both in real life and online.
Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day
Bronwyn Carlson, Professor, Indigenous Studies, Macquarie University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
No, Twitter is not censoring Donald Trump. Free speech is not guaranteed if it harms others

Katharine Gelber, The University of Queensland
The recent storming of the US Capitol has led a number of social media platforms to remove President Donald Trump’s account. In the case of Twitter, the ban is permanent. Others, like Facebook, have taken him offline until after President-elect Joe Biden’s inauguration next week.
This has led to a flurry of commentary in the Australian media about “free speech”. Treasurer Josh Frydenberg has said he is “uncomfortable” with Twitter’s removal of Trump, while the acting prime minister, Michael McCormack, has described it as “censorship”.
Meanwhile, MPs like Craig Kelly and George Christensen continue to ignore the evidence and promote misinformation about the nature of the violent, pro-Trump mob that attacked the Capitol.
A growing number of MPs are also reportedly calling for consistent and transparent rules to be applied by online platforms in a bid to combat hate speech and other types of harmful speech.
Some have conflated this effort with the restrictions on Trump’s social media usage, as though both of these issues reflect the same problem.
Much of this commentary is misguided, wrong and confusing. So let’s pull it apart a bit.
There is no free speech “right” to incite violence
There is no free speech argument in existence that suggests an incitement of lawlessness and violence is protected speech.
Quite to the contrary. Nineteenth century free speech proponent John Stuart Mill argued the sole reason one’s liberty may be interfered with (including restrictions on free speech) is “self-protection” — in other words, to protect people from harm or violence.
Read more:
Parler: what you need to know about the ‘free speech’ Twitter alternative
Additionally, incitement to violence is a criminal offence in all liberal democratic orders. There is an obvious reason for this: violence is harmful. It harms those who are immediately targeted (five people died in the riots last week) and those who are intimidated as a result of the violence to take action or speak up against it.
It also harms the institutions of democracy themselves, which rely on elections rather than civil wars and a peaceful transfer of power.
To suggest taking action against speech that incites violence is “censoring” the speaker is completely misleading.
There is no free speech “right” to appear on a particular platform
There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.
It is ludicrous to suggest there is. If this “right” were to exist, it would mean any citizen could demand to have their opinions aired on the front page of the Sydney Morning Herald and, if refused, claim their free speech had been violated.
Read more:
Trump’s Twitter tantrum may wreck the internet
What does exist is a general right to express oneself in public discourse, relatively free from regulation, as long as one’s speech does not harm others.
Trump still possesses this right. He has a podium in the West Wing designed for this specific purpose, which he can make use of at any time.
Were he to do so, the media would cover what he says, just as they covered his comments prior to, during and immediately after the riots. This included him telling the rioters that he loved them and that they were “very special”.

Does the fact he’s the president change this?
In many free speech arguments, political speech is accorded a higher level of protection than other forms of speech (such as commercial speech, for example). Does the fact this debate concerns the president of the United States change things?
No, it does not. There is no doubt Trump has been given considerable leeway in his public commentary prior to — and during the course of — his presidency. However, he has now crossed a line into stoking imminent lawlessness and violence.
This cannot be protected speech just because it is “political”. If this was the case, it would suggest the free speech of political elites can and should have no limits at all.
Yet, in all liberal democracies – even the United States which has the strongest free speech protection in the world – free speech has limits. These include the incitement of violence and crime.
Are social media platforms over-censoring?
The last decade or so has seen a vigorous debate over the attitudes and responses of social media platforms to harmful speech.
The big tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.
However, platforms have recently started to respond to demands and public pressure to address the harms of the speech they facilitate – from countering violent extremism to fake accounts, misinformation, revenge porn and hate speech.
They have developed community standards for content moderation that are publicly available. They release regular reports on their content moderation processes.
Facebook has even created an independent oversight board to arbitrate disputes over their decision making on content moderation.
They do not always do very well at this. One of the core problems is their desire to create algorithms and policies that are applicable universally across their global operations. But such a thing is impossible when it comes to free speech. Context matters in determining whether and under what circumstances speech can harm. This means they make mistakes.
Read more:
Why the business model of social media giants like Facebook is incompatible with human rights
Where to now?
The calls by MPs Anne Webster and Sharon Claydon to address hate speech online are important. They are part of the broader push internationally to find ways to ensure the benefits of the internet can be enjoyed more equally, and that a person’s speech does not silence or harm others.
Arguments about harm are longstanding, and have been widely accepted globally as forming a legitimate basis for intervention.
But the suggestion Trump has been censored is simply wrong. It misleads the public into believing all “free speech” claims have equal merit. They do not.
We must work to ensure harmful speech is regulated in order to ensure broad participation in the public discourse that is essential to our lives — and to our democracy. Anything less is an abandonment of the principles and ethics of governance.
Katharine Gelber, Professor of Politics and Public Policy, The University of Queensland
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?
Timothy Graham, Queensland University of Technology
Amid the chaos in the US Capitol, stoked largely by rhetoric from President Donald Trump, Twitter has locked his account, with 88.7 million followers, for 12 hours.
Facebook and Instagram quickly followed suit, locking Trump’s accounts — with 35.2 million and 24.5 million followers respectively — for at least two weeks, the remainder of his presidency. This ban was extended from an initial 24 hours.
The locks are the latest effort by social media platforms to clamp down on Trump’s misinformation and baseless claims of election fraud.
They came after Twitter labelled a video posted by Trump and said it posed a “risk of violence”. Twitter removed users’ ability to retweet, like or comment on the post — the first time this has been done.
In the video, Trump told the agitators at the Capitol to go home, but at the same time called them “very special” and said he loved them for disrupting the Congressional certification of President-elect Joe Biden’s win.
That tweet has since been taken down for “repeated and severe violations” of Twitter’s civic integrity policy. YouTube and Facebook have also removed copies of the video.
But as people across the world scramble to make sense of what’s going on, one thing stands out: the events that transpired today were not unexpected.
Given the lack of regulation and responsibility shown by platforms over the past few years, it’s fair to say the writing was on the wall.
The real, violent consequences of misinformation
While Trump is no stranger to contentious and even racist remarks on social media, Twitter’s action to lock the president’s account is a first.
The line was arguably crossed by Trump’s implicit incitement of violence and disorder within the halls of the US Capitol itself.
Nevertheless, it would have been a difficult decision for Twitter (and Facebook and Instagram), with several factors at play. Some of these are short-term, such as the immediate potential for further violence.
Then there’s the question of whether tighter regulation could further incite rioting Trump supporters by feeding into their theories claiming the existence of a large-scale “deep state” plot against the president. It’s possible.
Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans
But a longer-term consideration — and perhaps one at the forefront of the platforms’ priorities — is how these actions will affect their value as commercial assets.
I believe the platforms’ biggest concern is their own bottom line. They are commercial companies legally obliged to pursue profits for shareholders. Commercial imperatives and user engagement are at the forefront of their decisions.
What happens when you censor a Republican president? You can lose a huge chunk of your conservative user base, or upset your shareholders.
Despite what we think of them, or how we might use them, platforms such as Facebook, Twitter, Instagram and YouTube aren’t set up in the public interest.
For them, it’s risky to censor a head of state when they know that content is profitable. Doing it involves a complex risk calculus — with priorities being shareholders, the companies’ market value and their reputation.
Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day
Walking a tightrope
The platforms’ decisions to not only force the removal of several of Trump’s posts but also to lock his accounts carry the potential for enormous revenue loss. It’s a major and irreversible step.
And they are now forced to keep a close eye on one another. If one appears too “strict” in its censorship, it may attract criticism and lose user engagement and ultimately profit. At the same time, if platforms are too loose with their content regulation, they must weather the storm of public critique.
You don’t want to be the last organisation to make the tough decision, but you don’t necessarily want to be the first, either — because then you’re the “trial balloon” who volunteered to potentially harm the bottom line.
For all major platforms, the past few years have presented high stakes. Yet there have been plenty of opportunities to stop the situation snowballing to where it is now.
From Trump’s baseless election fraud claims to his false ideas about the coronavirus, time and again platforms have turned a blind eye to serious cases of mis- and disinformation.
The storming of the Capitol is a logical consequence of a situation that has arguably been a long time coming.
The coronavirus pandemic illustrated this. While Trump was partially censored by Twitter and Facebook for misinformation, the platforms failed to take lasting action to deal with the issue at its core.
In the past, platforms have cited constitutional reasons to justify not censoring politicians. They have claimed a civic duty to give elected officials an unfiltered voice.
This line of argument should have ended with the “Unite the Right” rally in Charlottesville in August 2017, when Trump responded to the killing of an anti-fascism protester by claiming there were “very fine people on both sides”.
An age of QAnon, Proud Boys and neo-Nazis
While there’s no silver bullet for online misinformation and extremist content, there’s also no doubt platforms could have done more in the past that may have prevented the scenes witnessed in Washington DC.
In a crisis, there’s a rush to make sense of everything. But we need only look at what led us to this point. Experts on disinformation have been crying out for platforms to do more to combat disinformation and its growing domestic roots.
Now, in 2021, extremists such as neo-Nazis and QAnon believers no longer have to lurk in the depths of online forums or commit lone acts of violence. Instead, they can violently storm the Capitol.
It would be a cardinal error to not appraise the severity and importance of the neglect that led us here. In some ways, perhaps that’s the biggest lesson we can learn.
This article has been updated to reflect the news that Facebook and Instagram extended their 24-hour ban on President Trump’s accounts.
Timothy Graham, Senior Lecturer, Queensland University of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Do social media algorithms erode our ability to make decisions freely? The jury is out

Lewis Mitchell and James Bagrow, University of Vermont
Social media algorithms, artificial intelligence, and our own genetics are among the factors influencing us beyond our awareness. This raises an ancient question: do we have control over our own lives? This article is part of The Conversation’s series on the science of free will.
Have you ever watched a video or movie because YouTube or Netflix recommended it to you? Or added a friend on Facebook from the list of “people you may know”?
And how does Twitter decide which tweets to show you at the top of your feed?
These platforms are driven by algorithms, which rank and recommend content for us based on our data.
As Woodrow Hartzog, a professor of law and computer science at Northeastern University, Boston, explains:
If you want to know when social media companies are trying to manipulate you into disclosing information or engaging more, the answer is always.
So if we are making decisions based on what’s shown to us by these algorithms, what does that mean for our ability to make decisions freely?
What we see is tailored for us
An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients. Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements.
The ingredients used are the data we provide through our actions online – knowingly or otherwise. Every time you like a post, watch a video, or buy something, you provide data that can be used to make predictions about your next move.
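As a deliberately toy illustration of such a recipe, the sketch below ranks candidate posts by a user’s past engagement with each topic. Real recommender systems use far richer signals and machine-learnt models, but the basic shape is the same: personal data in, ranked content out.

```python
# A toy version of the "digital recipe": score each candidate post by how
# often the user has engaged with its topic before, then recommend the
# highest scorers. All data here is invented for illustration.
from collections import Counter

# Ingredients: the user's past actions (topics of posts they liked).
liked_topics = ["cooking", "cooking", "travel", "cooking", "music"]
affinity = Counter(liked_topics)  # {"cooking": 3, "travel": 1, "music": 1}

candidates = [
    {"title": "10-minute pasta", "topic": "cooking"},
    {"title": "Cheap flights to Bali", "topic": "travel"},
    {"title": "New album reviews", "topic": "music"},
    {"title": "Knitting basics", "topic": "knitting"},
]

# Recipe: rank candidates by the user's affinity for their topic.
# (Counter returns 0 for unseen topics, so "knitting" ranks last.)
ranked = sorted(candidates, key=lambda p: affinity[p["topic"]], reverse=True)
for post in ranked:
    print(affinity[post["topic"]], post["title"])
```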
These algorithms can influence us, even if we’re not aware of it. As the New York Times’ Rabbit Hole podcast explores, YouTube’s recommendation algorithms can drive viewers to increasingly extreme content, potentially leading to online radicalisation.
Facebook’s News Feed algorithm ranks content to keep us engaged on the platform. It can produce a phenomenon called “emotional contagion”, in which seeing positive posts leads us to write positive posts ourselves, and seeing negative posts means we’re more likely to craft negative posts — though this study was controversial partially because the effect sizes were small.
Also, so-called “dark patterns” are designed to trick us into sharing more, or spending more on websites like Amazon. These are tricks of website design such as hiding the unsubscribe button, or showing how many people are buying the product you’re looking at right now. They subconsciously nudge you towards actions the site would like you to take.
Read more:
Sludge: how corporations ‘nudge’ us into spending more
You are being profiled
Cambridge Analytica, the company involved in the largest known Facebook data leak to date, claimed to be able to profile your psychology based on your “likes”. These profiles could then be used to target you with political advertising.
“Cookies” are small pieces of data which track us across websites. They are records of actions you’ve taken online (such as links clicked and pages visited) that are stored in the browser. When they are combined with data from multiple sources including from large-scale hacks, this is known as “data enrichment”. It can link our personal data like email addresses to other information such as our education level.
These data are regularly used by tech companies like Amazon, Facebook, and others to build profiles of us and predict our future behaviour.
You are being predicted
So, how much of your behaviour can be predicted by algorithms based on your data?
Our research, published in Nature Human Behaviour last year, explored this question by looking at how much information about you is contained in the posts your friends make on social media.
Using data from Twitter, we estimated how predictable people’s tweets were, using only the data from their friends. We found data from eight or nine friends was enough to predict someone’s tweets just as well as if we had downloaded them directly (well over 50% accuracy; see graph below). Indeed, 95% of the potential predictive accuracy that a machine learning algorithm might achieve is obtainable from friends’ data alone.

[Graph: predictive accuracy versus the number of friends’ accounts used. Source: Bagrow, Liu & Mitchell (2019)]
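For readers curious about the mechanics, the sketch below illustrates the idea with a deliberately crude stand-in: a smoothed unigram language model built from friends’ tweets, scored by cross-entropy on a user’s own words (lower means more predictable). Our published study used far more careful nonparametric entropy estimators; this is only a conceptual toy, and the text samples are invented.

```python
# An illustrative (not the paper's) measure of how much friends' tweets
# reveal about a user: fit a Laplace-smoothed unigram model to the friends'
# words and compute cross-entropy on the user's words. Lower cross-entropy
# means the user is more predictable from their friends alone.
import math
from collections import Counter

def cross_entropy(model_text, target_text, alpha=1.0):
    """Bits per word of target_text under a Laplace-smoothed unigram
    model fitted to model_text."""
    counts = Counter(model_text.lower().split())
    words = target_text.lower().split()
    vocab = set(counts) | set(words)           # shared vocabulary
    total = sum(counts.values()) + alpha * len(vocab)
    bits = sum(-math.log2((counts[w] + alpha) / total) for w in words)
    return bits / len(words)

friends = "the vaccine rollout starts monday at the clinic near the station"
user = "heading to the clinic monday for the vaccine rollout"
stranger = "quarterly earnings beat analyst forecasts on strong cloud revenue"

print(f"user from friends:     {cross_entropy(friends, user):.2f} bits/word")
print(f"stranger from friends: {cross_entropy(friends, stranger):.2f} bits/word")
```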
Our results mean that even if you #DeleteFacebook (which trended after the Cambridge Analytica scandal in 2018), you may still be able to be profiled, due to the social ties that remain. And that’s before we consider the things about Facebook that make it so difficult to delete anyway.
Read more:
Why it’s so hard to #DeleteFacebook: Constant psychological boosts keep you hooked
We also found it’s possible to build profiles of non-users — so-called “shadow profiles” — based on their contacts who are on the platform. Even if you have never used Facebook, if your friends do, there is the possibility a shadow profile could be built of you.
On social media platforms like Facebook and Twitter, privacy is no longer tied to the individual, but to the network as a whole.
No more free will? Not quite
But all hope is not lost. If you do delete your account, the information contained in your social ties with friends grows stale over time. We found predictability gradually declines to a low level, so your privacy and anonymity will eventually return.
While it may seem like algorithms are eroding our ability to think for ourselves, it’s not necessarily the case. The evidence on the effectiveness of psychological profiling to influence voters is thin.
Most importantly, when it comes to the role of people versus algorithms in things like spreading (mis)information, people are just as important. On Facebook, the extent of your exposure to diverse points of view is more closely related to your social groupings than to the way News Feed presents you with content. And on Twitter, while “fake news” may spread faster than facts, it is primarily people who spread it, rather than bots.
Of course, the influence runs both ways: content creators also exploit social media platforms’ algorithms to promote their content on YouTube, Reddit and other platforms.
At the end of the day, underneath all the algorithms are people. And we influence the algorithms just as much as they may influence us.
Read more:
Don’t just blame YouTube’s algorithms for ‘radicalisation’. Humans also play a part
Lewis Mitchell, Senior Lecturer in Applied Mathematics and James Bagrow, Associate Professor, Mathematics & Statistics, University of Vermont
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Parler: what you need to know about the ‘free speech’ Twitter alternative

Audrey Courty, Griffith University
Amid claims of social media platforms stifling free speech, a new challenger called Parler is drawing attention for its anti-censorship stance.
Last week, Harper’s Magazine published an open letter signed by 150 academics, writers and activists concerning perceived threats to the future of free speech.
The letter, signed by Noam Chomsky, Francis Fukuyama, Gloria Steinem and J.K. Rowling, among others, reads:
The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted.
Debates surrounding free speech and censorship have taken centre stage in recent months. In May, Twitter started adding fact-check labels to tweets from Donald Trump.
More recently, Reddit permanently removed its largest community of Trump supporters.
In this climate, Parler presents itself as a “non-biased, free speech driven” alternative to Twitter. Here’s what you should know about the US-based startup.
Read more:
Is cancel culture silencing open debate? There are risks to shutting down opinions we disagree with
What is Parler?
Parler reports more than 1.5 million users and is growing in popularity, especially as Twitter and other social media giants crack down on misinformation and violent content.
Parler is very similar to Twitter in appearance and function, albeit clunkier. Like Twitter, Parler users can follow others and engage with public figures, news sources and other users.
Public posts are called “parleys” rather than “tweets” and can contain up to 1,000 characters.

Users can search for hashtags, make comments, “echo” posts (similar to a retweet) and “vote” (similar to a like) on posts. There’s also a direct private messaging feature, just like Twitter.
Given this likeness, what actually is unique about Parler?
Fringe views welcome?
Parler’s main selling point is its claim it embraces freedom of speech and has minimal moderation. “If you can say it on the street of New York, you can say it on Parler”, founder John Matze explains.
This branding effort capitalises on allegations competitors such as Twitter and Facebook unfairly censor content and discriminate against right-wing political speech.
While other platforms often employ fact checkers, or third-party editorial boards, Parler claims to moderate content based on American Federal Communications Commission guidelines and Supreme Court rulings.
So if someone shared demonstrably false information on Parler, Matze said it would be up to other users to fact-check them “organically”.
And although Parler is still dwarfed by Twitter (330 million users) and Facebook (2.6 billion users), the platform’s anti-censorship stance continues to attract users turned off by the regulations of larger social media platforms.
Twitter’s recent decision to hide tweets from Trump for “glorifying violence” partly prompted the Trump campaign to consider moving to a platform such as Parler.
Matze also claims Parler protects users’ privacy by not tracking or sharing their data.
Is Parler really a free speech haven?
Companies such as Twitter and Facebook have denied they are silencing conservative voices, pointing to blanket policies against hate speech and content inciting violence.
Parler’s “free speech” stance has led various American Republicans, including Senator Ted Cruz, to promote the platform.
Many conservative influencers such as Katie Hopkins, Laura Loomer and Alex Jones have sought refuge on Parler after being banned from other platforms.
Although it brands itself as a bipartisan safe space, Parler is mostly used by right-wing media, politicians and commentators.
Moreover, a closer look at its user agreement suggests it moderates content in much the same way as any other platform, perhaps even more.
The company states:
Parler may remove any content and terminate your access to the Services at any time and for any reason or no reason.
Parler’s community guidelines prohibit a range of content including spam, terrorism, unsolicited ads, defamation, blackmail, bribery and criminal behaviour.
Although there are no explicit rules against hate speech, there are policies against “fighting words” and “threats of harm”. This includes “a threat of or advocating for violation against an individual or group”.
There are rules against content that is obscene, sexual or “lacks serious literary, artistic, political and scientific value”. For example, visuals of genitalia, female nipples, or faecal matter are barred from Parler.
Meanwhile, Twitter allows “consensually produced adult content” if it’s marked as “sensitive”. It also has no policy against the visual display of excrement.
As a private company, Parler can remove whatever content it wants. Some users have already been banned for breaking rules.
What’s more, in spite of claims it does not share user data, Parler’s privacy policy states data collected can be used for advertising and marketing.
Read more:
Friday essay: Twitter and the way of the hashtag
No marks of establishment
Given its limited user base, Parler has yet to become the “open town square” it aspires to be.
The platform is in its infancy and its user base is much less representative than larger social media platforms.
Despite Matze saying “left-leaning” users tied to the Black Lives Matter movement were joining Parler to challenge conservatives, Parler lacks the diverse audience needed for any real debate.
Matze also said he doesn’t want Parler to be an “echo chamber” for conservative voices. In fact, he is offering a US$20,000 “progressive bounty” for an openly liberal pundit with 50,000 followers on Twitter or Facebook to join.
Clearly, the platform has a long way to go before it bursts its conservative bubble.
Read more:
Don’t (just) blame echo chambers. Conspiracy theorists actively seek out their online communities
Audrey Courty, PhD candidate, School of Humanities, Languages and Social Science, Griffith University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Disasters expose gaps in emergency services’ social media use
Tan Yigitcanlar, Queensland University of Technology; Ashantha Goonetilleke, Queensland University of Technology, and Nayomi Kankanamge, Queensland University of Technology
Australia has borne the brunt of several major disasters in recent years, including drought, bushfires, floods and cyclones. The increasing use of social media is changing how we prepare for and respond to these disasters. Not only emergency services themselves but also their social media channels are now much-sought-after sources of disaster information and warnings.
We studied Australian emergency services’ social media use in times of disaster. Social media can provide invaluable and time-critical information to both emergency services and communities at risk. But we also found problems.
Read more:
Drought, fire and flood: how outer urban areas can manage the emergency while reducing future risks
How do emergency services use social media?
The 2019-20 Australian bushfires affected 80% of the population directly or indirectly. Social media were widely used to spread awareness of the bushfire disaster and to raise funds – albeit sometimes controversially – to help people in need.
The escalating use and importance of social media in disaster management raises an important question:
How effective are social media pages of Australian state emergency management organisations in meeting community expectations and needs?
To answer this question, QUT’s Urban Studies Lab investigated the community engagement approaches of social media pages maintained by various Australian emergency services. We placed Facebook and Twitter pages of New South Wales State Emergency Services (NSW-SES), Victoria State Emergency Services (VIC-SES) and Queensland Fire and Emergency Services (QLD-FES) under the microscope.
Our study made four key findings.
First, emergency services’ social media pages are intended to:
- disseminate warnings
- provide an alternative communication channel
- receive rescue and recovery requests
- collect information about the public’s experiences
- raise disaster awareness
- build collective intelligence
- encourage volunteerism
- express gratitude to emergency service staff and volunteers
- raise funds for those in need.
Read more:
With costs approaching $100 billion, the fires are Australia’s costliest natural disaster
Examples of emergency services’ social media posts are shown below.
Second, Facebook pages of emergency services attract more community attention than their Twitter pages. Services need to make their Twitter pages more attractive because, unlike Facebook, Twitter allows streamlined data download for social media analytics (a sketch of such a download follows these findings). A widely used emergency service Twitter page means more data for analysis, and potentially more accurate policies and actions.
Third, Australia lacks a legal framework for the use of social media in emergency service operations. Developing such a framework would help organisations maximise their use of social media, especially in financial matters such as donations.
Fourth, the credibility of public-generated information can sometimes be questionable. Authorities need to be able to respond rapidly to such information to avoid the spread of misinformation or “fake news” on social media.
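As an illustration of the streamlined Twitter download mentioned in the second finding, the sketch below uses the tweepy library against Twitter’s v2 recent-search endpoint to pull recent public tweets mentioning an emergency service. The bearer token and query are placeholders, and access requires Twitter developer credentials.

```python
# A minimal sketch of the "streamlined data download" Twitter offers:
# pulling recent public tweets that mention an emergency service, using
# tweepy (v4) against Twitter's v2 recent-search endpoint.
# "BEARER_TOKEN" is a placeholder for your own API credential.
import tweepy  # pip install tweepy

client = tweepy.Client(bearer_token="BEARER_TOKEN")

response = client.search_recent_tweets(
    query="NSW SES flood -is:retweet",  # illustrative query
    tweet_fields=["created_at", "public_metrics"],
    max_results=100,
)
for tweet in response.data or []:
    print(tweet.created_at, tweet.public_metrics["retweet_count"], tweet.text[:80])
```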
Services could do more with social media
Our research highlighted that emergency services could use social media more effectively. We do not see these services analysing social media data to inform their activities before, during and after disasters.
In another study on the use of social media analytics for disaster management, we developed a novel approach to show how emergency services can identify disaster-affected areas using real-time social media data. For that study, we collected Twitter data with location information on the 2010-11 Queensland floods. We were able to identify disaster severity by analysing the emotional or sentiment values of tweets.
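The sketch below illustrates the core of that approach: average the sentiment of geotagged tweets per area and flag the most negative areas as likely hotspots. The rows and threshold are invented for illustration; they are not the study’s actual data or model.

```python
# A sketch of the severity-mapping idea: average the (negative) sentiment
# of geotagged tweets per area and treat the most negative areas as the
# hardest hit. All rows and the threshold below are invented.
import pandas as pd

tweets = pd.DataFrame([
    {"suburb": "Rocklea",   "sentiment": -0.8},
    {"suburb": "Rocklea",   "sentiment": -0.6},
    {"suburb": "St Lucia",  "sentiment": -0.3},
    {"suburb": "Chermside", "sentiment":  0.1},
])

severity = (tweets.groupby("suburb")["sentiment"]
                  .agg(mean_sentiment="mean", n_tweets="count")
                  .sort_values("mean_sentiment"))
# Flag areas whose average sentiment falls below an illustrative threshold.
severity["hotspot"] = severity["mean_sentiment"] < -0.4
print(severity)
```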
Read more:
Explainer: how the internet knows if you’re happy or sad
This work generated the disaster severity map shown below. The map agrees with the actual figures in the report of the Queensland Floods Commission of Inquiry with over 90% accuracy.

[Map: disaster severity during the 2010-11 Queensland floods, derived from tweet sentiment. Source: the authors]
Concerns about using social media to manage disaster
The first commonly voiced concern about social media use in disaster management is the digital divide. While the issue of under-represented people and communities remains important, the use of technology is spreading widely. There were 3.4 billion social media users worldwide in 2019, and that number is growing at an accelerating rate.
Read more:
Online tools can help people in disasters, but do they represent everyone?
Besides, many Australian cities and towns are investing in smart city strategies and infrastructure. These localities provide free public Wi-Fi connections. And almost 90% of Australians now own a smartphone.
The second concern is information accuracy, or “fake news”, on social media. Clearly, sharing false information and rumours compromises the value of the information social media provides. Social media images and videos tagged with location information can, however, provide more reliable, eyewitness information.
Another concern is the difficulty of receiving social media messages from severely affected areas. For instance, the disaster might have brought down internet or 4G/5G coverage, or people might have been evacuated from areas at risk. This can mean few social media posts from the actual disaster zone, with increasing numbers of posts from the places to which people have been relocated.
In such a scenario, alternative social media analytics are on offer. We can use content analysis and sentiment analysis to determine the disaster location and impact.
How to make the most of social media
Social media and its applications are generating new and innovative ways to manage disasters and reduce their impacts. These include:
- increasing community trust in emergency services by social media profiling
- crowd-sourcing the collection and sharing of disaster information
- creating awareness by incorporating gamification applications in social media
- using social media data to detect disaster intensity and hotspot locations
- running real-time data analytics.
In sum, social media could become a mainstream information provider for disaster management. The need is likely to become more pressing as human-induced climate change increases the severity and frequency of disasters.
Today, as we confront the COVID-19 pandemic, social media analytics are helping to ease its impacts. Artificial intelligence (AI) technologies are greatly reducing processing time for social media analytics. We believe the next-generation AI will enable us to undertake real-time social media analytics more accurately.
Read more:
Coronavirus: How Twitter could more effectively ease its impact
Tan Yigitcanlar, Associate Professor of Urban Studies and Planning, Queensland University of Technology; Ashantha Goonetilleke, Professor, Queensland University of Technology, and Nayomi Kankanamge, PhD Candidate, School of Built Environment, Queensland University of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.
When a virus goes viral: pros and cons to the coronavirus spread on social media

Axel Bruns, Queensland University of Technology; Daniel Angus, Queensland University of Technology; Timothy Graham, Queensland University of Technology, and Tobias R. Keller, Queensland University of Technology
News and views about coronavirus have spread via social media in a way no health emergency has before.
Platforms like Twitter, Facebook, TikTok and Instagram have played critical roles in sharing news and information, but also in disseminating rumours and misinformation.
Getting the message out
Early on, snippets of information circulated on Chinese social media platforms such as Weibo and WeChat before state censors banned discussions. These posts already painted a grim picture, and Chinese users continue to play cat and mouse with the internet police in order to share unfiltered information.
As the virus spread, so did the social media conversation. On Facebook and Twitter, discussions have often taken place ahead of official announcements: calls to cancel the Australian Formula One Grand Prix were trending on Twitter days before the official decision.
Similarly, user-generated public health explainers have circulated while official government agencies in many countries discuss campaign briefs with advertising agencies.
Many will have come across (and, hopefully, adopted) hand-washing advice set to the lyrics of someone’s favourite song:
Widespread circulation of graphs has also explained the importance of “flattening the curve” and social distancing.
Debunking myths
Social media have been instrumental in responding to COVID-19 myths and misinformation. Journalists, public health experts, and users have combined to provide corrections to dangerous misinformation shared in US President Donald Trump’s press conferences:
Other posts have highlighted potentially deadly assumptions in the UK government’s herd immunity approach to the crisis:
Users have also pointed out inconsistencies in the Australian cabinet’s response to Home Affairs Minister Peter Dutton’s coronavirus diagnosis.
The circulation of such content through social media is so effective because we tend to pay more attention to information we receive through our networks of social contacts.
Similarly, professional health communicators like Dr Norman Swan have been playing an important role in answering questions and amplifying public health messages, while others have set up resources to keep the public informed on confirmed cases:
Even just seeing our leaders’ poor hygienic practices ridiculed might lead us to take better care ourselves:
Some politicians, like Australian Prime Minister Scott Morrison, blandly dismiss social media channels as sources of crisis information, despite more than a decade’s research showing their importance.
This is deeply unhelpful: they should be embracing social media channels as they seek to disseminate urgent public health advice.
Stoking fear
The downside of all that user-driven sharing is that it can lead to mass panic and irrational behaviour – as we have seen with the panic-buying of toilet paper and other essentials.
The panic spiral spins even faster when social media trends are amplified by mainstream media reporting, and vice versa: even a handful of widely shared images of empty supermarket shelves might lead consumers to buy what’s left, if media reporting makes the problem appear much larger than it really is.
News stories and tweets showing empty shelves are much more news- and share-worthy than fully stocked shelves: they’re exceptional. But a focus on these pictures distorts our perception of what is actually happening.
The promotion of such biased content by the news media then creates a higher “viral” potential, and such content gains much more public attention than it otherwise would.
Levels of fear and panic are already higher during times of crisis, of course. As a result, some of us – including journalists and media outlets – might also be willing to believe new information we would otherwise treat with more scepticism. This skews the public’s risk perception and makes us much more susceptible to misinformation.
A widely shared Twitter post showed how panic buying in (famously carnivorous) Glasgow had skipped the vegan food section:
Closer inspection revealed the photo originated from Houston during Hurricane Harvey in 2017 (the dollar signs on the food prices are a giveaway).
This case also illustrates the ability of social media discussion to self-correct, though this can take time, and corrections may not travel as far as the initial falsehoods. The potential for social media to stoke fear depends on the difference in reach between the falsehood and its correction.
The spread of true and false information is also directly affected by the platform architecture: the more public the conversations, the more likely it is that someone might encounter a falsehood and correct it.
In largely closed, private spaces like WhatsApp, or in closed groups or private profile discussions on Facebook, we might see falsehoods linger for considerably longer. A user’s willingness to correct misinformation can also be affected by their need to maintain good relationships within their community. People will often ignore misinformation shared by friends and family.
And unfortunately, the platforms’ own actions can also make things worse: this week, Facebook’s efforts to control “fake news” posts appeared to affect legitimate stories by mistake.
Rallying cries
Their ability to sustain communities is one of the great strengths of social media, especially as we are practising social distancing and even self-isolation. The internet still has a sense of humour which can help ease the ongoing tension and fear in our communities:
Younger generations are turning to newer social media platforms such as TikTok to share their experiences and craft pandemic memes. A key feature of TikTok is the uploading and repurposing of short music clips by platform users – the clip “It’s Corona Time” has been used in over 700,000 posts.
We have seen substantial self-help efforts conducted via social media: school and university teachers who have been told to transition all of their teaching to online modes at very short notice, for example, have begun to share best-practice examples via the #AcademicTwitter hashtag.
The same is true for communities affected by event shutdowns and broader economic downturns, from freelancers to performing artists. Faced with bans on mass gatherings, some artists are finding ways to continue their work: providing access to 600 live concerts via digital concert halls or streaming concerts live on Twitter.
Such patterns are not new: we encountered them in our research as early as 2011, when social media users rallied together during natural disasters such as the Brisbane floods, Christchurch earthquakes, and Sendai tsunami to combat misinformation, amplify the messages of official emergency services organisations, and coordinate community activities.
Especially during crises, most people just want themselves and their community to be safe.
Axel Bruns, Professor, Creative Industries, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology; Timothy Graham, Senior Lecturer, Queensland University of Technology, and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.