Laws making social media firms expose major COVID myths could help Australia’s vaccine rollout



Shutterstock

Tauel Harper, University of Western Australia

With a vaccine rollout impending, key groups have backed calls for the Australian government to force social media platforms to share details about popular coronavirus misinformation.

An open letter, put forward by the independent group Reset Australia, has been endorsed by the Doherty Institute, Immunisation Coalition and Immunisation Foundation of Australia, along with the research group I’m working with, Coronavax — which reports community concerns about the COVID-19 vaccination program to government and health workers.

The issue of coronavirus and vaccine-related misinformation should not be understated. That said, big tech companies need to be engaged the right way to help the Australian public avoid what could potentially be a lifetime of health problems.

A leaderboard for COVID myths

We’re living in a dangerous time for both journalism and public education. We don’t have the legal infrastructure or public forums required to address the spread of coronavirus misinformation. Reset’s proposal intends to address these shortcomings, to better regulate this content in Australia.

It proposes a mandate requiring social media platforms to provide more details on the highest-trending online posts spreading misinformation about COVID.

These “live lists” would be updated in real time and would let politicians, researchers, medical experts, journalists and the public keep track of which communities are being exposed to coronavirus and vaccine-related lies and what the major stories are.

The proposal suggests the eSafety commissioner should determine how the information is shared publicly to help prevent the potential victimisation of particular individuals.

Conspiracies can slip through the cracks

Many people rely on news (or what they think is news) presented on social media. Unlike traditional journalism, this isn’t fact-checked and has no editorial oversight to ensure accuracy. Moreover, the vast scale of this misinformation extends beyond platforms’ best efforts to curb it.

Since last year, a host of fake coronavirus cures have circulated and been sold illegally on the dark web. Among these was one hoax ‘cure’ in the form of ‘blood’ from supposedly recovered coronavirus patients.
Shutterstock

While social media analytic sites such as CrowdTangle provide some insight for researchers, it’s not enough.

For example, the data CrowdTangle shares from Facebook is limited to public posts in large public pages and groups. We can see engagement for these posts (numbers of likes and comments) but not reach (how many people have seen a particular post).
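
To make that gap concrete, here is a minimal Python sketch of what a researcher can and cannot compute from that kind of public-post export. The field names and numbers are invented for illustration and are not CrowdTangle’s actual schema: engagement can be totalled for each post, but there is no reach field to sum.

    # Illustrative only: rows loosely resembling a public-post export,
    # with invented numbers. Engagement (likes, comments, shares) is
    # visible; reach (how many people saw the post) is simply absent.
    posts = [
        {"page": "Example Health Group", "likes": 1200, "comments": 340, "shares": 90},
        {"page": "Example Wellness Page", "likes": 310, "comments": 55, "shares": 20},
    ]

    for post in posts:
        engagement = post["likes"] + post["comments"] + post["shares"]
        print(f'{post["page"]}: engagement = {engagement}')
        # There is no reach field to report, which is exactly the
        # limitation described above.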

Reset’s open letter recommends extending data access across the entire social networking site, including (in Facebook’s case) posts on people’s personal profiles (not to be confused with private conversations via Facebook Messenger).

While this does raise privacy concerns, the system would be set up so personal identifiers are removed. Instead of paying social media platforms in exchange for data, we would be putting pressure on them via the law and, at base, their “social license to operate”.

Taking down extremists isn’t the goal

Far-right conspiracy group QAnon has managed to entrench itself in certain pockets in Australia. Its believers claim there is a “deep state” plot against former US President Donald Trump.

This group’s conspiracies have extended to include the bogus claim that COVID is an invention of political elites to ensure compliance from the people and usher in oppressive rules. As the theory goes, the vaccine itself is also a tool for indoctrination and/or population control.




Read more:
Why QAnon is attracting so many followers in Australia — and how it can be countered


Public figures have further amplified the conspiracies, with celebrity chef Pete Evans seemingly spearheading the celebrity faction of the QAnon “cause” in Australia.

The real value of Reset’s policy recommendation, however, is not in trying to change these people’s views. Rather, what researchers require are more details on trends and levels of engagement with certain types of content.

One focus would be to identify groups of people exposed to misinformation who could potentially be swayed in the direction of conspiracies.

If we can figure out which particular demographics are more involved in spreading misinformation, or perhaps more vulnerable to it, this would help with efforts to engage with these communities.

We already know young people are generally less confident about receiving a COVID vaccine than people over 65, but we have less insight into what their concerns are, or whether particular rumours circulating online are making them wary of vaccination.

Once these concerns are identified, health workers and policy makers can prioritise them, for example by creating educational content in a group’s own language to help dispel the myths.




Read more:
Why social media platforms banning Trump won’t stop — or even slow down — his cause


Pressure on platforms is mounting

There is the argument that sharing links to online misinformation could help spread it further. We’ve already seen unscrupulous journalists repeat popular terms from online conspiracists (such as “Dictator Dan”, in reference to Victorian Premier Daniel Andrews) in their own coverage to engage a particular audience.

But ultimately, the information being highlighted is already out there, so it’s better for us to take it on openly and honestly. It’s also not just a matter of monitoring misinformation, but also monitoring legitimate public concern about any vaccine side effects.

The increased visibility of the public’s concerns will force government, researchers, journalists and health professionals to engage more directly with those concerns.

Pfizer vaccines on a conveyor belt
The Therapeutic Goods Administration has granted provisional approval for Pfizer’s coronavirus vaccine to be rolled out in Australia. It’s the first to receive regulatory approval.
Shutterstock

The goal now is to invite Facebook, Twitter and Google to help us develop a tool that highlights public issues while also protecting users’ privacy.

If compelled by Australian law, the platforms will likely be concerned about their legal liability for any data passed into the public domain. This is understandable, considering the Cambridge Analytica debacle happened because Facebook was too open with users’ data.

Then again, Facebook already has CrowdTangle, and Twitter has also been relatively amenable in the fight against COVID misinformation. There are good reasons to suggest these platforms will continue to invest in fighting misinformation, even if just to protect their reputation and profits.

Like it or not, social media have changed the way we discuss issues of public importance — and have certainly changed the game for public communication. What Reset Australia is proposing is an important step in addressing the spread and influence of COVID misinformation in our communities.The Conversation

Tauel Harper, Lecturer, Media and Communication, UWA, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There’s no such thing as ‘alternative facts’. 5 ways to spot misinformation and stop sharing it online



Shutterstock

Mark Pearson, Griffith University

The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.

Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.

We are living in a dangerous age where the internet makes it possible to spread misinformation far and wide and most people lack the basic fact-checking abilities to discern fact from fiction — or, worse, the desire to develop a healthy skepticism at all.




Read more:
Stopping the spread of COVID-19 misinformation is the best 2021 New Year’s resolution


Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.

Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills:

1. Distinguishing verified facts from myths, rumours and opinions

Cold, hard facts are the building blocks for considered and reasonable opinions in politics, media and law.

And there are no such things as “alternative facts” — facts are facts. Just because a falsity has been repeated many times by important people and their affiliates does not make it true.

We cannot expect the average citizen to have the skills of an academic researcher, journalist or judge in determining the veracity of an asserted statement. However, we can teach people some basic strategies before they mistake mere assertions for actual facts.

Does a basic internet search show these assertions have been confirmed by usually reliable sources – such as non-partisan mainstream news organisations, government websites and expert academics?

Students are taught to look to the URL of more authoritative sites — such as .gov or .edu — as a good hint at the factual basis of an assertion.
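
For readers inclined to automate that first hint, a toy Python sketch of the check might look like the following. The list of domain endings is an illustrative assumption, not an exhaustive or definitive marker of reliability.

    # Toy heuristic: does a source URL sit on a domain type students are
    # told to treat as more authoritative (.gov, .edu)? Illustrative only.
    from urllib.parse import urlparse

    def looks_authoritative(url: str) -> bool:
        host = urlparse(url).hostname or ""
        return host.endswith((".gov", ".edu", ".gov.au", ".edu.au"))

    print(looks_authoritative("https://www.health.gov.au/news"))    # True
    print(looks_authoritative("https://totally-real-news.example"))  # False

A pass from a check like this is only a starting hint, as the next point about social media searches makes clear.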

Searches and hashtags in social media are much less reliable as verification tools because you could be fishing within the “bubble” (or “echo chamber”) of those who share common interests, fears and prejudices – and are more likely to be perpetuating myths and rumours.

2. Mixing up your media and social media diet

We need to break out of our own “echo chambers” and our tendencies to access only the news and views of those who agree with us, on the topics that interest us and where we feel most comfortable.

For example, over much of the past five years, I have deliberately switched between various conservative and liberal media outlets when something important has happened in the US.

By looking at the coverage of the left- and right-wing media, I can hope to find a common set of facts both sides agree on — beyond the partisan rhetoric and spin. And if only one side is reporting something, I know to question this assertion and not just take it at face value.

3. Being skeptical and assessing the factual premise of an opinion

Journalism students learn to approach the claims of their sources with a “healthy skepticism”. For instance, if you are interviewing someone and they make what seems to be a bold or questionable claim, it’s good practice to pause and ask what facts the claim is based on.

Students are taught in media law this is the key to the fair comment defence to a defamation action. This permits us to publish defamatory opinions on matters of public interest as long as they are reasonably based on provable facts put forth by the publication.

The ABC’s Media Watch used this defence successfully (at trial and on appeal) when it criticised a Sydney Sun-Herald journalist’s reporting that claimed toxic materials had been found near a children’s playground.

This assessment of the factual basis of an opinion is not reserved for defamation lawyers – it is an exercise we can all undertake as we decide whether someone’s opinion deserves our serious attention and republication.




Read more:
Teaching children digital literacy skills helps them navigate and respond to misinformation


4. Exploring the background and motives of media and sources

A key skill in media literacy is the ability to look behind the veil of those who want our attention — media outlets, social media influencers and bloggers — to investigate their allegiances, sponsorships and business models.

For instance, these are some key questions to ask:

  • who is behind that think tank whose views you are retweeting?

  • who owns the online newspaper you read and what other commercial interests do they hold?

  • is your media diet dominated by news produced from the same corporate entity?

  • why does someone need to be so loud or insulting in their commentary; is this indicative of their neglect of important facts that might counter their view?

  • what might an individual or company have to gain or lose by taking a position on an issue, and how might that influence their opinion?

Just because someone has an agenda does not mean their facts are wrong — but it is a good reason to be even more skeptical in your verification processes.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


5. Reflecting and verifying before sharing

We live in an era of instant republication. We immediately retweet and share content we see on social media, often without even having read it thoroughly, let alone having fact-checked it.

Mindful reflection before pressing that sharing button would allow you to ask yourself, “Why am I even choosing to share this material?”

You could also help shore up democracy by engaging in the fact-checking processes mentioned above to avoid being part of the problem by spreading misinformation.The Conversation

Mark Pearson, Professor of Journalism and Social Media, Griffith Centre for Social and Cultural Research, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trump’s time is up, but his Twitter legacy lives on in the global spread of QAnon conspiracy theories



http://www.shutterstock.com

Verica Rupar, Auckland University of Technology and Tom De Smedt, University of Antwerp

“The lie outlasts the liar,” writes historian Timothy Snyder, referring to outgoing president Donald Trump and his contribution to the “post-truth” era in the US.

Indeed, the mass rejection of reason that erupted in a political mob storming Capitol Hill mere weeks before the inauguration of Joe Biden tests our ability to comprehend contemporary American politics and its emerging forms of extremism.

Much has been written about Trump’s role in spreading misinformation and the media failures that enabled him. His contribution to fuelling extremism, flirting with the political fringe, supporting conspiracy theories and, most of all, Twitter demagogy created an environment in which he has been seen as an “accelerant” in his own right.

If the scale of international damage is yet to be calculated, there is something we can measure right now.

In September last year, the London-based Media Diversity Institute (MDI) asked us to design a research project that would systematically track the extent to which US-originated conspiracy theory group QAnon had spread to Europe.

Titled QAnon 2: spreading conspiracy theories on Twitter, the research is part of the international Get the Trolls Out! (GTTO) project, focusing on religious discrimination and intolerance.

Twitter and the rise of QAnon

GTTO media monitors had earlier noted the rise of QAnon support among Twitter users in Europe and were expecting a further surge of derogatory talk ahead of the 2020 US presidential election.

We examined the role religion played in spreading conspiracy theories, the most common topics of tweets, and what social groups were most active in spreading QAnon ideas.

We focused on Twitter because its increasing use — some sources estimate 330 million people used Twitter monthly in 2020 — has made it a powerful political communication tool. It has given politicians such as Trump the opportunity to promote, facilitate and mobilise social groups on an unprecedented scale.




Read more:
QAnon and the storm of the U.S. Capitol: The offline effect of online conspiracy theories


Using AI tools developed by data company Textgain, we analysed about half-a-million Twitter messages related to QAnon to identify major trends.

By observing how hashtags were combined in messages, we examined the network structure of QAnon users posting in English, German, French, Dutch, Italian and Spanish. Researchers identified about 3,000 different hashtags related to QAnon used by 1,250 Twitter profiles.
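
The report does not publish its code, but a minimal sketch of the hashtag co-occurrence idea, assuming tweets are available as plain text and using the open-source networkx library, might look like this (the tweets and hashtags below are invented for illustration):

    # Minimal sketch of a hashtag co-occurrence network. The tweets and
    # tags are invented; a real analysis would use the collected messages.
    import re
    from itertools import combinations
    import networkx as nx

    tweets = [
        "The #deepstate is hiding the truth #wwg1wga",
        "#wwg1wga patriots rise up #greatawakening",
        "Say no to the #plandemic #deepstate",
    ]

    graph = nx.Graph()
    for tweet in tweets:
        tags = set(re.findall(r"#(\w+)", tweet.lower()))
        # Each pair of hashtags used in the same tweet becomes an edge;
        # repeated co-occurrence increases the edge weight.
        for a, b in combinations(sorted(tags), 2):
            current = graph.get_edge_data(a, b, default={"weight": 0})["weight"]
            graph.add_edge(a, b, weight=current + 1)

    # Hashtags that repeatedly appear together form the clusters that a
    # study like this groups into topics.
    print(graph.edges(data=True))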

Protestors with flag showing US flag and QAnon logo
Making the connection: demonstrators in Berlin in 2020 display QAnon and US imagery.
http://www.shutterstock.com

An American export

Roughly one in four of the QAnon profiles originated in the US (about 300 of the 1,250 identified). Far behind were profiles from other countries: Canada (30), Germany (25), Australia (20), the United Kingdom (20), the Netherlands (15), France (15), Italy (10), Spain (10) and others.

We examined QAnon profiles that share each other’s content, Trump tweets and YouTube videos, and found over 90% of these profiles shared the content of at least one other identified profile.

Seven main topics were identified: support for Trump, support for EU-based nationalism, support for QAnon, deep state conspiracies, coronavirus conspiracies, religious conspiracies and political extremism.




Read more:
Far-right activists on social media telegraphed violence weeks in advance of the attack on the US Capitol


Hashtags rooted in US evangelicalism sometimes portrayed Trump as Jesus, as a superhero, or clad in medieval armour, with underlying Biblical references to a coming apocalypse in which he will defeat the forces of evil.

Overall, the coronavirus pandemic appears to function as an important conduit for all such messaging, with QAnon acting as a rallying flag for discontent among far-right European movements.

Measuring the toxicity of tweets

We used Textgain’s hate-speech detection tools to assess toxicity. Tweets written in English had a high level of antisemitism. In particular, they targeted public figures such as Jewish-American billionaire investor and philanthropist George Soros, or revived old conspiracies about secret Jewish plots for world domination. Soros was also a popular target in other languages.

We also found a highly polarised debate around the coronavirus public health measures employed in Germany, often using Third Reich rhetoric.

New language to express negative sentiments was coined and then adopted by others — in particular, pejorative terms for face masks and slurs directed at political leaders and others who wore masks.

Accompanying memes ridiculed political leaders, displaying them as alien reptilian overlords or antagonists from popular movies, such as Star Wars Sith Lords and the cyborg from The Terminator.

Most of the QAnon profiles tap into the same sources of information: Trump tweets, YouTube disinformation videos and each other’s tweets. It forms a mutually reinforcing confirmation bias — the tendency to search for, interpret, favour, and recall information that confirms prior beliefs or values.




Read more:
Despite being permanently banned, Trump’s prolific Twitter record lives on


Where does it end?

Harvesting discontent has always been a powerful political tool. In a digital world this is more true than ever.

By mid 2020, Donald Trump had six times more followers on Twitter than when he was elected. Until he was suspended from the platform, his daily barrage of tweets found a ready audience in ultra-right groups in the US who helped his misinformation and inflammatory rhetoric jump the Atlantic to Europe.

Social media platforms have since attempted to reduce the spread of QAnon. In July 2020, Twitter suspended 7,000 QAnon-related accounts. In August, Facebook deleted over 790 groups and restricted the accounts of hundreds of others, along with thousands of Instagram accounts.




Read more:
Trump’s Twitter feed shows ‘arc of the hero,’ from savior to showdown


In January this year, all Trump’s social media accounts were either banned or restricted. Twitter suspended 70,000 accounts that had been sharing QAnon content at scale.

But further Textgain analysis of 50,000 QAnon tweets posted in December and January showed toxicity had almost doubled, including 750 tweets inciting political violence and 500 inciting violence against Jewish people.

Those tweets were being systematically removed by Twitter. But calls for violence ahead of the January 20 inauguration continued to proliferate, with Trump’s QAnon supporters appearing as committed and vocal as ever.

The challenge for both the Biden administration and the social media platforms themselves is clear. But our analysis suggests any solution will require a coordinated international effort.The Conversation

Verica Rupar, Professor, Auckland University of Technology and Tom De Smedt, Postdoctoral research associate, University of Antwerp

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why social media platforms banning Trump won’t stop — or even slow down — his cause


Bronwyn Carlson, Macquarie University

Last week Twitter permanently suspended US President Donald Trump in the wake of his supporters’ violent storming of Capitol Hill. Trump was also suspended from Facebook and Instagram indefinitely.

Heads quickly turned to the right-wing Twitter alternative Parler — which seemed to be a logical place of respite for the digitally de-throned president.

But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.

These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.

With an election vote count of 74,223,744 (46.9%), the magnitude of Trump’s following is clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.

In his first appearance since the Capitol attack, Trump described the impeachment process as ‘a continuation of the greatest witch hunt in the history of politics’.

Not budging

With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.

Tweeting via the official Twitter account for the president @POTUS, he said he might consider building his own platform. Twitter promptly removed this tweet. He also tweeted: “We will not be SILENCED!”.

This threat may come with some standing as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.

For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.

The internet is also notoriously hard to police – it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.

It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:

I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.

As far as efforts to quash such extremism go, they’re usually too little, too late.

Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.




Read more:
Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


The psychology of hate

Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.

Others have highlighted the many options still available for Trump fans to congregate since the departure of Parler, which was used to communicate plans ahead of the siege at the Capitol. Gab is one platform many Trump supporters have flocked to.

It’s important to remember hate speech, racism and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.

And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.

When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshipping elites from within government, business and media are running a “deep state” against Trump.

Social media and white supremacy: a love story

The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.

Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.

It recognises social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favour “free speech” over the safety of vulnerable communities online.

For instance, Matamoros-Fernández’s research found in Australia, platforms such as Facebook “favoured the offenders over Indigenous people” by tending to lean in favour of free speech.

Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.

On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply put, there’s no strong incentive for platforms to tackle the issues of hate speech and racism — not unless failing to do so hurts their profits.

After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.




Read more:
Profit, not free speech, governs media companies’ decisions on controversy


The need for a paradigm shift

When it comes to imagining a future with less hate, racism and violence, a key mistake is looking for solutions within the existing structure.

Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.

But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.

Trump’s presidency will end in the coming days, but extremist groups and the broader movement they occupy will remain, both in real life and online.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day


The Conversation


Bronwyn Carlson, Professor, Indigenous Studies, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Despite being permanently banned, Trump’s prolific Twitter record lives on


Audrey Courty, Griffith University

For years, US President Donald Trump pushed the limits of Twitter’s content policies, raising pressure on the platform to exercise tougher moderation.

Ultimately, the violent siege of the US Capitol forced Twitter’s hand and the platform permanently banned Trump’s personal account, @realDonaldTrump.

But this doesn’t mean the 26,000 or so tweets posted during his presidency have vanished. They are now a matter of public record — and have been preserved accordingly.




Read more:
Twitter permanently suspends Trump after U.S. Capitol siege, citing risk of further violence


The de-platforming of Donald Trump

The loss of public access to Trump’s original Twitter posts means every online hyperlink to a tweet is now defunct. Embedded tweets are still visible as simple text, but can no longer be traced to their source.

Adding to this, retweets of the president’s messages no longer appear on the forwarding user’s feed. Quote tweets have been replaced with the message: “This Tweet is unavailable” and replies can’t be viewed in one place anymore.

But even if Trump’s account had not been suspended, he would have had to part with it at the end of his presidency anyway, since he used it extensively for presidential purposes.

Under the US government’s ethics regulations, US officials are prevented from benefiting personally from their public office, and this applies to social media accounts.

Former US ambassador to the United Nations, Nikki Haley, also used her personal and political Twitter accounts to conduct official business as ambassador. The account was wiped and renamed in 2019 once her role ended.

Where did all the information go?

Despite being permanently suspended, Trump’s prolific Twitter record is not lost. Under the Presidential Records Act, all of Trump’s social media communications are considered public property, including non-public messages sent via direct chat features.

The act defines presidential records as any materials created or received by the president (or immediate staff or advisors) in the course of conducting his official duties.

It was passed in 1978, out of concern that former president Richard Nixon would destroy the tapes which ultimately led to his resignation. Today, it remains a way to force governments to be transparent with the public.

And although Trump tweeted extensively from his personal Twitter account created in 2009, @realDonaldTrump, it has undoubtedly been used for official purposes.

From banning transgender military service to threatening the use of nuclear weapons against North Korea, his tweets on this account constitute an important part of the presidential record.

As such, the US National Archives says it will preserve all of them, including deleted posts — as well as all posts from @POTUS, the official presidential account.

The Trump administration will have to turn over the digital records for both accounts on January 20, which will eventually be made available to the public on a Trump Library website.

Still, the president reserves the right to invoke as many as six specific restrictions to public access for up to 12 years.

We don’t know whether Trump will invoke restrictions. But even if he does, grassroots initiatives have already archived all of his tweets.

For example, the Trump Twitter Archive is a free, public resource that lets users search and filter through more than 56,000 tweets by Trump since 2009, including deleted tweets since 2016.

Screenshot taken from https://www.thetrumparchive.com/
The Trump Twitter Archive, started in 2016, is currently one of few extensive online databases providing access to the president’s past tweets.
Screenshot

A matter of public record

In 2017, Trump told Fox News he believed he may have never been elected without Twitter — and that he viewed it as an effective means for pushing his message.

Twitter also benefited from this relationship. Trump’s 88 million followers (as of when his account was suspended) generated endless streams of user engagement for the social media giant.

Trump’s approach to using Twitter was unprecedented. He bypassed traditional media channels, instead tweeting for political and diplomatic purposes — including to make important policy announcements.

His tweets set the agenda for US politics during his presidency. For example, they influenced foreign relations between the US and Mexico, North Korea, China and Iran. They were also used to endorse allies and attack rivals.




Read more:
Twitter diplomacy: how Trump is using social media to spur a crisis with Mexico


The closest thing to a town square

For all the reasons listed above, the value of Trump’s Twitter record extends beyond historical research. It’s a way to hold him accountable for what he has said and done.

And this will soon be on display as the US Democratic Party looks to impeach him for the second time for “inciting insurrection”.

Trump’s administration of “alternative facts” has continuously stonewalled a number of enquiries — going as far as refusing to testify before Congress on certain matters.

From this point of view, Trump’s Twitter feed was arguably one of few places where his claims and decisions could really be scrutinised. And indeed, news coverage of the president often relied heavily on it.

The amplification effect

The media’s reliance on president Trump’s tweets ultimately highlights a key aspect that governs today’s hybrid media system. That is, it’s highly responsive to a populist communication style.

Trump’s use of Twitter indirectly contributed to his election success in 2016, by helping boost media coverage of his campaign. Researchers also observed him strategically increasing his Twitter activity in line with waning news interest.

Through a constant stream of provocative remarks, Trump exploited news values and continuously inserted himself into the news cycle. And for journalists under pressure to churn out content, his impassioned messages were the perfect sound bites.

Now, stripped of his favourite mouthpiece, it’s uncertain whether Trump will find another way to exert his influence. But one thing is for sure: his time on Twitter will go down in history.The Conversation

Audrey Courty, PhD candidate, School of Humanities, Languages and Social Science, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

No, Twitter is not censoring Donald Trump. Free speech is not guaranteed if it harms others



Alex Brandon/AP

Katharine Gelber, The University of Queensland

The recent storming of the US Capitol has led a number of social media platforms to remove President Donald Trump’s account. In the case of Twitter, the ban is permanent. Others, like Facebook, have taken him offline until after President-elect Joe Biden’s inauguration next week.

This has led to a flurry of commentary in the Australian media about “free speech”. Treasurer Josh Frydenberg has said he is “uncomfortable” with Twitter’s removal of Trump, while the acting prime minister, Michael McCormack, has described it as “censorship”.

Meanwhile, MPs like Craig Kelly and George Christensen continue to ignore the evidence and promote misinformation about the nature of the violent, pro-Trump mob that attacked the Capitol.

A growing number of MPs are also reportedly calling for consistent and transparent rules to be applied by online platforms in a bid to combat hate speech and other types of harmful speech.

Some have conflated this effort with the restrictions on Trump’s social media usage, as though both of these issues reflect the same problem.

Much of this commentary is misguided, wrong and confusing. So let’s pull it apart a bit.

There is no free speech “right” to incite violence

There is no free speech argument in existence that suggests an incitement of lawlessness and violence is protected speech.

Quite to the contrary. Nineteenth century free speech proponent John Stuart Mill argued the sole reason one’s liberty may be interfered with (including restrictions on free speech) is “self-protection” — in other words, to protect people from harm or violence.




Read more:
Parler: what you need to know about the ‘free speech’ Twitter alternative


Additionally, incitement to violence is a criminal offence in all liberal democratic orders. There is an obvious reason for this: violence is harmful. It harms those who are immediately targeted (five people died in the riots last week) and those who are intimidated as a result of the violence to take action or speak up against it.

It also harms the institutions of democracy themselves, which rely on elections rather than civil wars and a peaceful transfer of power.

To suggest taking action against speech that incites violence is “censoring” the speaker is completely misleading.

There is no free speech “right” to appear on a particular platform

There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.

It is ludicrous to suggest there is. If this “right” were to exist, it would mean any citizen could demand to have their opinions aired on the front page of the Sydney Morning Herald and, if refused, claim their free speech had been violated.




Read more:
Trump’s Twitter tantrum may wreck the internet


What does exist is a general right to express oneself in public discourse, relatively free from regulation, as long as one’s speech does not harm others.

Trump still possesses this right. He has a podium in the West Wing designed for this specific purpose, which he can make use of at any time.

Were he to do so, the media would cover what he says, just as they covered his comments prior to, during and immediately after the riots. This included him telling the rioters that he loved them and that they were “very special”.

Trump told his supporters before the Capitol was overrun: ‘if you don’t fight like hell, you’re not going to have a country anymore’.
Jacquelyn Martin/AP

Does the fact he’s the president change this?

In many free speech arguments, political speech is accorded a higher level of protection than other forms of speech (such as commercial speech, for example). Does the fact this debate concerns the president of the United States change things?

No, it does not. There is no doubt Trump has been given considerable leeway in his public commentary prior to — and during the course of — his presidency. However, he has now crossed a line into stoking imminent lawlessness and violence.

This cannot be protected speech just because it is “political”. If this was the case, it would suggest the free speech of political elites can and should have no limits at all.

Yet, in all liberal democracies – even the United States which has the strongest free speech protection in the world – free speech has limits. These include the incitement of violence and crime.

Are social media platforms over-censoring?

The last decade or so has seen a vigorous debate over the attitudes and responses of social media platforms to harmful speech.

The big tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.

However, platforms have recently started to respond to demands and public pressure to address the harms of the speech they facilitate – from countering violent extremism to fake accounts, misinformation, revenge porn and hate speech.

They have developed community standards for content moderation that are publicly available. They release regular reports on their content moderation processes.

Facebook has even created an independent oversight board to arbitrate disputes over their decision making on content moderation.

They do not always do very well at this. One of the core problems is their desire to create algorithms and policies that are applicable universally across their global operations. But such a thing is impossible when it comes to free speech. Context matters in determining whether and under what circumstances speech can harm. This means they make mistakes.




Read more:
Why the business model of social media giants like Facebook is incompatible with human rights


Where to now?

The calls by MPs Anne Webster and Sharon Claydon to address hate speech online are important. They are part of the broader push internationally to find ways to ensure the benefits of the internet can be enjoyed more equally, and that a person’s speech does not silence or harm others.

Arguments about harm are longstanding, and have been widely accepted globally as forming a legitimate basis for intervention.

But the suggestion Trump has been censored is simply wrong. It misleads the public into believing all “free speech” claims have equal merit. They do not.

We must work to ensure harmful speech is regulated in order to ensure broad participation in the public discourse that is essential to our lives — and to our democracy. Anything less is an abandonment of the principles and ethics of governance.The Conversation

Katharine Gelber, Professor of Politics and Public Policy, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


Timothy Graham, Queensland University of Technology

Amid the chaos in the US Capitol, stoked largely by rhetoric from President Donald Trump, Twitter has locked his account, with 88.7 million followers, for 12 hours.

Facebook and Instagram quickly followed suit, locking Trump’s accounts — with 35.2 million followers and 24.5 million, respectively — for at least two weeks, the remainder of his presidency. This ban was extended from 24 hours.

The locks are the latest effort by social media platforms to clamp down on Trump’s misinformation and baseless claims of election fraud.

They came after Twitter labelled a video posted by Trump and said it posed a “risk of violence”. Twitter removed users’ ability to retweet, like or comment on the post — the first time this has been done.

In the video, Trump told the agitators at the Capitol to go home, but at the same time called them “very special” and said he loved them for disrupting the Congressional certification of President-elect Joe Biden’s win.

That tweet has since been taken down for “repeated and severe violations” of Twitter’s civic integrity policy. YouTube and Facebook have also removed copies of the video.

But as people across the world scramble to make sense of what’s going on, one thing stands out: the events that transpired today were not unexpected.

Given the lack of regulation and responsibility shown by platforms over the past few years, it’s fair to say the writing was on the wall.

The real, violent consequences of misinformation

While Trump is no stranger to contentious and even racist remarks on social media, Twitter’s action to lock the president’s account is a first.

The line was arguably crossed by Trump’s implicit incitement of violence and disorder within the halls of the US Capitol itself.

Nevertheless, it would have been a difficult decision for Twitter (and Facebook and Instagram), with several factors at play. Some of these are short-term, such as the immediate potential for further violence.

Then there’s the question of whether tighter regulation could further incite rioting Trump supporters by feeding into their theories claiming the existence of a large-scale “deep state” plot against the president. It’s possible.




Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans


But a longer-term consideration — and perhaps one at the forefront of the platforms’ priorities — is how these actions will affect their value as commercial assets.

I believe the platforms’ biggest concern is their own bottom line. They are commercial companies legally obliged to pursue profits for shareholders. Commercial imperatives and user engagement are at the forefront of their decisions.

What happens when you censor a Republican president? You can lose a huge chunk of your conservative user base, or upset your shareholders.

Despite what we think of them, or how we might use them, platforms such as Facebook, Twitter, Instagram and YouTube aren’t set up in the public interest.

For them, it’s risky to censor a head of state when they know that content is profitable. Doing it involves a complex risk calculus — with priorities being shareholders, the companies’ market value and their reputation.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day


Walking a tightrope

The platforms’ decisions to not only force the removal of several of Trump’s posts but also to lock his accounts carries enormous potential loss of revenue. It’s a major and irreversible step.

And they are now forced to keep a close eye on one another. If one appears too “strict” in its censorship, it may attract criticism and lose user engagement and ultimately profit. At the same time, if platforms are too loose with their content regulation, they must weather the storm of public critique.

You don’t want to be the last organisation to make the tough decision, but you don’t necessarily want to be the first, either — because then you’re the “trial balloon” who volunteered to potentially harm the bottom line.

For all major platforms, the past few years have presented high stakes. Yet there have been plenty of opportunities to stop the situation snowballing to where it is now.

From Trump’s baseless election fraud claims to his false ideas about the coronavirus, time and again platforms have turned a blind eye to serious cases of mis- and disinformation.

The storming of the Capitol is a logical consequence of what has arguably been a long time coming.

The coronavirus pandemic illustrated this. While Trump was partially censored by Twitter and Facebook for misinformation, the platforms failed to take lasting action to deal with the issue at its core.

In the past, platforms have cited constitutional reasons to justify not censoring politicians. They have claimed a civic duty to give elected officials an unfiltered voice.

This line of argument should have ended with the “Unite the Right” rally in Charlottesville in August 2017, when Trump responded to the killing of an anti-fascism protester by claiming there were “very fine people on both sides”.

An age of QAnon, Proud Boys and neo-Nazis

While there’s no silver bullet for online misinformation and extremist content, there’s also no doubt platforms could have done more in the past that may have prevented the scenes witnessed in Washington DC.

In a crisis, there’s a rush to make sense of everything. But we need only look at what led us to this point. Experts on disinformation have been crying out for platforms to do more to combat disinformation and its growing domestic roots.

Now, in 2021, extremists such as neo-Nazis and QAnon believers no longer have to lurk in the depths of online forums or commit lone acts of violence. Instead, they can violently storm the Capitol.

It would be a cardinal error to not appraise the severity and importance of the neglect that led us here. In some ways, perhaps that’s the biggest lesson we can learn.


This article has been updated to reflect the news that Facebook and Instagram extended their 24-hour ban on President Trump’s accounts.The Conversation

Timothy Graham, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

#IStandWithDan, #DictatorDan, #DanLiedPeopleDied: 397,000 tweets reveal the culprits behind a dangerously polarised debate


Timothy Graham, Queensland University of Technology; Axel Bruns, Queensland University of Technology; Daniel Angus, Queensland University of Technology; Edward Hurcombe, Queensland University of Technology, and Samuel Hames, Queensland University of Technology

The Victorian government’s handling of the state’s second coronavirus wave attracted massive Twitter attention, both in support of and against the state’s premier Daniel Andrews.

Our research, published in the journal Media International Australia, reveals much of this attention was driven by a small, hyper-partisan core of highly active participants.

We found a high proportion of active campaigners were anonymous “sockpuppet” accounts — created by people using fake profiles for the sole purpose of magnifying their view.

What’s more, very little activity came from computer-controlled “bot” accounts. But where it did, it was more common from the side campaigning against Andrews.

A larger concern which emerged was the feedback cycle between anti-Andrews campaigners (both genuine and inauthentic), political stakeholders and partisan mainstream media which flung dangerous, fringe ideas into the spotlight.

A few highly-charged accounts driving debate

In mid-to-late 2020, thousands of Australian Twitter users split themselves into two camps: those who supported Andrews’ handling of the second wave and those who didn’t.

We looked at 397,000 tweets from 40,000 Twitter accounts engaging in content with three hashtags: #IStandWithDan, #DictatorDan and #DanLiedPeopleDied.

Our comprehensive analysis revealed pro-Dan activity greatly outnumbered the dissent. #IStandWithDan featured in 275,000 tweets. This was about 2.5 times more than #DictatorDan and 13 times more than #DanLiedPeopleDied.

This hashtag network shows the polarised Twitter discussions in support of (red) and against (blue) Victorian Premier Daniel Andrews.

Activity on both sides was mostly driven by a small but highly-active subset of participants.

The top 10% of accounts posting #IStandWithDan were behind 74% of the total number of these tweets. This figure was similar for the top 10% of accounts posting anti-Andrews hashtags — and the same pattern applied to retweet behaviour.
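
Those percentages reflect a simple concentration measure: rank accounts by how many tagged tweets they posted, then take the share of all tweets contributed by the top 10% of accounts. A hedged Python sketch, with invented per-account counts, is below.

    # Sketch of the concentration measure described above: what share of
    # all hashtag tweets came from the most active 10% of accounts.
    # The per-account counts are invented for illustration.
    tweets_per_account = {
        "account_a": 900, "account_b": 450, "account_c": 300,
        "account_d": 40, "account_e": 25, "account_f": 10,
        "account_g": 8, "account_h": 5, "account_i": 3, "account_j": 2,
    }

    counts = sorted(tweets_per_account.values(), reverse=True)
    top_n = max(1, len(counts) // 10)           # the top 10% of accounts
    share = sum(counts[:top_n]) / sum(counts)   # their share of all tweets
    print(f"Top {top_n} account(s) posted {share:.0%} of the tweets")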

Our findings challenge the idea of Twitter as the true voice of the public. Rather, what we saw was a small number of pro- and anti-government campaigners that could mobilise particular Twitter communities on an ad hoc basis.

This suggests it only takes a small (but concentrated) effort to get a political hashtag trending in Australia.




Read more:
The story of #DanLiedPeopleDied: how a hashtag reveals Australia’s ‘information disorder’ problem


Who started the campaigns?

Our analysis showed Liberal state MP Tim Smith was instrumental in making the #DictatorDan hashtag go viral.

It was in low circulation until May 17, when Smith created a Twitter poll asking whether Andrews should be labelled “Dictator Dan” or “Chairman Dan”.

Subsequent growth of #DictatorDan activity was driven largely by far-right commentator Avi Yemini and his followers, along with a key group for fringe right-wing politics in Australian Twitter.

Meanwhile, #DanLiedPeopleDied went viral later on August 12, sparked by another right-wing group led by a handful of outspoken members. This group managed to get the hashtag trending nationally.

This attracted Yemini’s attention. The same day the hashtag started trending, he posted seven tweets and seven retweets with it to his then 128,000 followers. A considerable increase in activity ensued.

The hashtag #IStandWithDan had little activity until July 8, when it suddenly went viral with nearly 1,600 tweets. This spike coincided with the announcement of stage 3’s “stay at home” restrictions for metropolitan Melbourne and the Mitchell Shire.

Activity surrounding #IStandWithDan was driven by factors including the various stages of lockdown, attacks on Andrews from conservative media and the emergence of anti-Andrews Twitter campaigners.

Tweeting (loudly) from the shadows

We analysed the top 50 most active accounts tweeting each hashtag, to figure out how many of them didn’t belong to who they claimed and were in fact anonymous sockpuppet accounts.

We found 54% of the top 50 accounts posting anti-Andrews hashtags qualified as sockpuppets. This figure was 34% for accounts posting #IStandWithDan.

The onslaught from anonymously-run accounts on both sides had a massive impact. Just 27 sockpuppet accounts were behind 9% of all #DictatorDan tweets and 14% of all #DanLiedPeopleDied tweets.

Similarly, 17 accounts were responsible for 6% of all #IStandWithDan tweets.

Inauthentic activity

Many of the anti-Andrews accounts were created more recently than those posting pro-Andrews hashtags. The imbalance between new accounts posting pro- and anti-Andrews hashtags probably isn’t by chance.

Bar plot showing distribution of account creation years per hashtag
This graph shows the distribution of when accounts from both sides were created. From the accounts pushing the #DictatorDan tag, 19% were created this year — compared to 10.7% of accounts posting #IStandWithDan.

It’s more likely anti-Andrews activists deliberately created sockpuppet accounts to give the impression of greater support for their agenda than actually exists among the public.

The aim would be to use these fake accounts to fool Twitter’s algorithms into giving certain hashtags greater visibility.

Interestingly, despite accusations of bot activity from both sides, our work revealed bots actually accounted for a negligible amount of overall hashtag activity.

Of the top 1,000 accounts most frequently tweeting each hashtag, there were just 50 anti-Andrews bot accounts (which sent 264 tweets) and 11 pro-Dan accounts that posted #IStandWithDan (which sent 44 tweets).

Polarisation creates a feedback loop with media

Some of the ways news media engaged with (and amplified) the debate around Victoria’s lockdown helped stoke further division. On September 17, Sky News published the headline:

‘Dictator Dan’ is trying to build a ‘COVID Gulag’.

Herald Sun columnist Andrew Bolt repeated the “Dictator Dan” label in both his blogs and widely read opinion columns, which were part of a much-criticised series attacking the premier.

Here, we witnessed the continuing problem of the “oxygen of amplification”, whereby news commentators amplify false, misleading and/or problematic content (intentionally or unintentionally) and thereby aid its creators.

The #DictatorDan hashtag was used to cast doubt on Andrews’ lockdown measures and establish a false equivalence between the two rivalling Twitter communities, despite Andrews’ having strong approval ratings throughout the pandemic.

The “debate” surrounding the premier’s lockdown measures even gained international attention in a Washington Post article, which Sky News used in a bid to legitimise its “Dictator Dan” narrative. Yet, in the end, Victoria emerged as the gold standard for second-wave coronavirus responses.




Read more:
Finally at zero new cases, Victoria is on top of the world after unprecedented lockdown effort


A polarised Twittersphere might be entertaining at times, but it sustains a vicious feedback loop between users and partisan media. Irresponsible news commentary provides fuel for Twitter users. This leads to more polarity, which leads to more media attention.

Those with a voice in the public sphere should ask critical moral questions about when (and whether) they engage with hyper-partisan content. In the case of COVID-19, it can carry life and death consequences.The Conversation

Timothy Graham, Senior Lecturer, Queensland University of Technology; Axel Bruns, Professor, Creative Industries, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology; Edward Hurcombe, Research associate, Queensland University of Technology, and Samuel Hames, Data Scientist, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Calls for an ABC-run social network to replace Facebook and Google are misguided. With what money?



shutterstock.

Fiona R Martin, University of Sydney

If Facebook prevented Australian news from being shared on its platform, could the ABC start its own social media service to compensate? While this proposal from the Australia Institute is a worthy one, it’s an impossible ask in the current political climate.

The suggestion is one pillar of the think tank’s new Tech-xit report.

The report canvasses what the Australian government should do if Facebook and Google withdraw their news-related services from Australia, in reaction to the Australian Competition and Consumer Commission’s draft news media bargaining code.

Tech-xit rightly notes the ABC is capable of building social media that doesn’t harvest Australians’ personal data. However, it overlooks the costs and challenges of running a social media service — factors raised in debate over the new code.

Platforms react (badly) to the code

The ACCC’s code is a result of years of research into the effects of platform power on Australian media.

It requires Facebook and Google to negotiate with Australian news businesses about licensing payments for hosting news excerpts, providing access to news user data and information on pending news feed algorithm changes.

Predictably, the tech companies are not happy. They argue they make far less from news than the ACCC estimates, have greater costs and return more benefit to the media.

If the code becomes law, Facebook has threatened to stop Australian users from sharing local or international news. Google told Australians its free services would be “at risk”, although it later said it would negotiate if the draft law was changed in its favour.

Facebook’s withdrawal, which the Tech-xit report sees as being likely if the law passes, would reduce Australians’ capacity to share vital news about their communities, activities and businesses.




Read more:
If Facebook really pulls news from its Australian sites, we’ll have a much less compelling product


ABC to the rescue?

Cue the ABC then, says Jordan Guiao, the report’s author. Guiao is the former head of social media for both the ABC and SBS, and now works at the institute’s Centre for Responsible Technology.

He argues that, if given the funding, ABC Online could reinvent itself to become a “national social platform connecting everyday Australians”. He says all the service would have to do is add

distinct user profiles, user publishing and content features, group connection features, chat, commenting and interactive discussion capabilities.

As a trusted information source, he proposes the ABC could enable “genuine exchange and influence on decision making” and “provide real value to local communities starved of civic engagement”.

Financial reality check

It’s a bold move to suggest the ABC could start yet another major network when it has just had to cut A$84 million from its budget and lose more than 200 staff.

The institute’s idea is very likely an effort to persuade the Morrison government it should redirect some of that funding back to Aunty, which has a history of digital innovation with ABC Online, iView, Q&A and the like.

However, the government has repeatedly denied it has cut funding to the national broadcaster. It hasn’t provided catch-up emergency broadcasting funds since the ABC covered our worst-ever fire season. This doesn’t bode well for a change of mind on future allocations.

The government also excluded the ABC and SBS as beneficiaries of the news media bargaining code negotiations.

The ABC doesn’t even have access to start-up venture capital the way most social media companies do. According to Crunchbase, Twitter and Reddit — the two most popular news-sharing platforms after Facebook — have raised roughly US$1.5 billion and US$550 million respectively in investment rounds, allowing them to constantly innovate in service delivery.

Operational challenges

In contrast, over the past decade, ABC Online has had to reduce many of the “social” services it once offered. This is largely due to the cost of moderating online communities and managing user participation.

Social media content moderation requires an abundance of time, money and human resources.
Shutterstock

First, news comments sections were canned, and online communities such as the Four Corners forums and The Drum website were closed.

Last year, the ABC’s flagship site for regional and rural user-created stories, ABC Open, was also shut down.

Even if the government were to inject millions into an “ABC Social”, it’s unlikely the ABC could deal with the problems of finding and removing illegal content at scale.

It’s an issue that still defeats the major social media platforms, and the ABC has neither the machine-learning expertise nor the funds for an army of outsourced moderators.

The move would also expose the ABC to accusations it was crowding out private innovation in the platform space.

A future without Facebook

It’s unclear whether Facebook will go ahead with its threat to prevent Australian users from sharing news on its platform, given the difficulty of working out exactly who counts as an Australian user.

For instance, the Australian public includes dual citizens, temporary residents, international students and business people, and expatriates.

If Facebook does follow through, why burden the ABC with the duty of recreating social media? Its withdrawal could be a boon for Twitter, Reddit and whatever may come next.

In the meantime, if we restored the ABC’s funding, it could develop more inventive ways to share local news online that can’t be threatened by Facebook and Google.




Read more:
Latest $84 million cuts rip the heart out of the ABC, and our democracy


The Conversation


Fiona R Martin, Associate Professor in Convergent and Online Media, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Do social media algorithms erode our ability to make decisions freely? The jury is out



Charles Deluvio/Unsplash, CC BY-SA

Lewis Mitchell and James Bagrow, University of Vermont

Social media algorithms, artificial intelligence, and our own genetics are among the factors influencing us beyond our awareness. This raises an ancient question: do we have control over our own lives? This article is part of The Conversation’s series on the science of free will.


Have you ever watched a video or movie because YouTube or Netflix recommended it to you? Or added a friend on Facebook from the list of “people you may know”?

And how does Twitter decide which tweets to show you at the top of your feed?

These platforms are driven by algorithms, which rank and recommend content for us based on our data.

As Woodrow Hartzog, a professor of law and computer science at Northeastern University, Boston, explains:

If you want to know when social media companies are trying to manipulate you into disclosing information or engaging more, the answer is always.

So if we are making decisions based on what’s shown to us by these algorithms, what does that mean for our ability to make decisions freely?

What we see is tailored for us

An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients. Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements.

The ingredients used are the data we provide through our actions online – knowingly or otherwise. Every time you like a post, watch a video, or buy something, you provide data that can be used to make predictions about your next move.
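As a rough illustration of that recipe, the toy sketch below scores candidate posts against a profile built from a user’s past likes and ranks the feed accordingly. It is a simplified, hypothetical example with made-up topics and numbers, not any platform’s actual ranking code.

# Toy engagement-based ranking: a hypothetical example, not a real platform's algorithm.
# The "profile" is simply how often this user has liked posts about each topic.
profile = {"politics": 12, "cats": 30, "finance": 2}
candidate_posts = [
    {"id": 1, "topics": ["cats"]},
    {"id": 2, "topics": ["politics", "finance"]},
    {"id": 3, "topics": ["cats", "politics"]},
]
def predicted_engagement(post, profile):
    # Score a post by summing the user's historical interest in each of its topics.
    return sum(profile.get(topic, 0) for topic in post["topics"])
# Show the posts the user is most likely to engage with first.
feed = sorted(candidate_posts, key=lambda p: predicted_engagement(p, profile), reverse=True)
print([post["id"] for post in feed])  # -> [3, 1, 2]

Every new like then feeds back into the profile, which is why the more you engage, the more tailored (and engaging) the feed becomes.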

These algorithms can influence us, even if we’re not aware of it. As the New York Times’ Rabbit Hole podcast explores, YouTube’s recommendation algorithms can drive viewers to increasingly extreme content, potentially leading to online radicalisation.

Facebook’s News Feed algorithm ranks content to keep us engaged on the platform. It can produce a phenomenon called “emotional contagion”, in which seeing positive posts leads us to write positive posts ourselves, and seeing negative posts means we’re more likely to craft negative posts (though the study demonstrating this was controversial, partly because the effect sizes were small).

Also, so-called “dark patterns” are designed to trick us into sharing more, or spending more on websites like Amazon. These are tricks of website design such as hiding the unsubscribe button, or showing how many people are buying the product you’re looking at right now. They subconsciously nudge you towards actions the site would like you to take.




Read more:
Sludge: how corporations ‘nudge’ us into spending more


You are being profiled

Cambridge Analytica, the company involved in the largest known Facebook data leak to date, claimed to be able to profile your psychology based on your “likes”. These profiles could then be used to target you with political advertising.

“Cookies” are small pieces of data which track us across websites. They are records of actions you’ve taken online (such as links clicked and pages visited) that are stored in the browser. When they are combined with data from multiple sources including from large-scale hacks, this is known as “data enrichment”. It can link our personal data like email addresses to other information such as our education level.

These data are regularly used by tech companies like Amazon, Facebook, and others to build profiles of us and predict our future behaviour.
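In practice, “enrichment” often amounts to joining records from different sources on a shared identifier such as an email address. The sketch below uses invented records and hypothetical field names to show the idea.

# Minimal sketch of data enrichment: joining two sources on a shared identifier.
# All records and field names here are invented for illustration.
browsing_log = [
    {"email": "jo@example.com", "pages_visited": ["/mortgage-rates", "/prams"]},
]
leaked_marketing_list = [
    {"email": "jo@example.com", "education": "postgraduate", "postcode": "2000"},
]
# Index one source by the shared key, then merge matching records.
by_email = {row["email"]: row for row in leaked_marketing_list}
enriched = [{**visit, **by_email.get(visit["email"], {})} for visit in browsing_log]
print(enriched[0])
# {'email': 'jo@example.com', 'pages_visited': ['/mortgage-rates', '/prams'],
#  'education': 'postgraduate', 'postcode': '2000'}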

You are being predicted

So, how much of your behaviour can be predicted by algorithms based on your data?

Our research, published in Nature Human Behaviour last year, explored this question by looking at how much information about you is contained in the posts your friends make on social media.

Using data from Twitter, we estimated how predictable people’s tweets were, using only the data from their friends. We found data from eight or nine friends was enough to predict someone’s tweets just as well as if we had downloaded them directly (well over 50% accuracy; see graph below). Indeed, 95% of the potential predictive accuracy that a machine learning algorithm might achieve is obtainable just from friends’ data.

Average predictability from your circle of closest friends (blue line). A value of 50% means getting the next word right half of the time — no mean feat as most people have a vocabulary of around 5,000 words. The curve shows how much an AI algorithm can predict about you from your friends’ data. Roughly 8-9 friends are enough to predict your future posts as accurately as if the algorithm had access to your own data (dashed line).
Bagrow, Liu, & Mitchell (2019)
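The flavour of that analysis can be conveyed with a much simpler stand-in than the estimator used in the paper: the sketch below trains a toy next-word predictor only on (invented) friends’ tweets and checks how often it guesses the target user’s words. It is purely illustrative, not the method from the published study.

from collections import Counter, defaultdict
# Toy illustration only: invented tweets and a simple bigram predictor, not the
# information-theoretic estimator used in the published study.
friend_tweets = [
    "the vaccine rollout starts next week",
    "the vaccine rollout is on track",
    "lockdown ends next week in melbourne",
]
user_tweets = ["the vaccine rollout starts in melbourne next week"]
# Learn, from friends' tweets only, which word most often follows which.
follows = defaultdict(Counter)
for tweet in friend_tweets:
    words = tweet.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
# Predict each of the user's words from the preceding word and count the hits.
hits = total = 0
for tweet in user_tweets:
    words = tweet.split()
    for prev, nxt in zip(words, words[1:]):
        total += 1
        if follows[prev] and follows[prev].most_common(1)[0][0] == nxt:
            hits += 1
print(f"Guessed {hits} of {total} of the user's next words using friends' tweets alone")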

Our results mean that even if you #DeleteFacebook (which trended after the Cambridge Analytica scandal in 2018), you may still be able to be profiled, due to the social ties that remain. And that’s before we consider the things about Facebook that make it so difficult to delete anyway.




Read more:
Why it’s so hard to #DeleteFacebook: Constant psychological boosts keep you hooked


We also found it’s possible to build profiles of non-users — so-called “shadow profiles” — based on their contacts who are on the platform. Even if you have never used Facebook, if your friends do, there is the possibility a shadow profile could be built of you.

On social media platforms like Facebook and Twitter, privacy is no longer tied to the individual, but to the network as a whole.

No more free will? Not quite

But all hope is not lost. If you do delete your account, the information contained in your social ties with friends grows stale over time. We found predictability gradually declines to a low level, so your privacy and anonymity will eventually return.

While it may seem like algorithms are eroding our ability to think for ourselves, it’s not necessarily the case. The evidence on the effectiveness of psychological profiling to influence voters is thin.

Most importantly, when it comes to the role of people versus algorithms in things like spreading (mis)information, people are just as important. On Facebook, the extent of your exposure to diverse points of view is more closely related to your social groupings than to the way News Feed presents you with content. And on Twitter, while “fake news” may spread faster than facts, it is primarily people who spread it, rather than bots.

Of course, the influence runs both ways: content creators on YouTube, Reddit and other platforms also exploit the algorithms to promote their own content.

At the end of the day, underneath all the algorithms are people. And we influence the algorithms just as much as they may influence us.




Read more:
Don’t just blame YouTube’s algorithms for ‘radicalisation’. Humans also play a part


The Conversation


Lewis Mitchell, Senior Lecturer in Applied Mathematics and James Bagrow, Associate Professor, Mathematics & Statistics, University of Vermont

This article is republished from The Conversation under a Creative Commons license. Read the original article.