Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots




Ryan Ko, The University of Queensland

Recently Facebook, Reddit, Google, LinkedIn, Microsoft, Twitter and YouTube committed to removing coronavirus-related misinformation from their platforms.

COVID-19 is being described as the first major pandemic of the social media age. In troubling times, social media helps distribute vital knowledge to the masses.
Unfortunately, it also carries a flood of misinformation, much of which is spread through social media bots.

These fake accounts are common on Twitter, Facebook, and Instagram. They have one goal: to spread fear and fake news.

We witnessed this in the 2016 United States presidential election, with arson rumours during the bushfire crisis, and we’re seeing it again in relation to the coronavirus pandemic.




Read more:
Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Busy busting bots

This figure shows the top Twitter hashtags tweeted by bots over 24 hours.
Bot Sentinel

The exact scale of misinformation is difficult to measure. But its global presence can be felt through snapshots of Twitter bot involvement in COVID-19-related hashtag activity.

Bot Sentinel is a website that uses machine learning to identify potential Twitter bots, giving each account a score and rating. According to the site, on March 26 bot accounts tweeted #coronavirus 828 times, #COVID19 544 times and #Coronavirus 255 times within 24 hours.

These hashtags respectively ranked 1st, 3rd and 7th among all top-trolled Twitter hashtags.

It’s important to note the actual number of coronavirus-related bot tweets is likely much higher, as Bot Sentinel only recognises hashtag terms (such as #coronavirus) and wouldn’t pick up tweets that mention “coronavirus” or “COVID19” without the hashtag symbol.
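To illustrate the gap, here is a minimal Python sketch (with made-up tweet texts) showing how a hashtag-only match undercounts compared with also matching the plain words:

```python
import re

# Hypothetical tweet texts, for illustration only
tweets = [
    "Stock up now! #coronavirus #COVID19",
    "The coronavirus queues at my supermarket were insane today",
    "Officials say COVID19 tests are delayed again",
]

# What a hashtag-only tool counts
hashtag_pattern = re.compile(r"#(coronavirus|covid19)\b", re.IGNORECASE)
# What it misses: the same words used without a hashtag
plain_pattern = re.compile(r"(?<!#)\b(coronavirus|covid19)\b", re.IGNORECASE)

hashtag_hits = sum(len(hashtag_pattern.findall(t)) for t in tweets)
plain_hits = sum(len(plain_pattern.findall(t)) for t in tweets)

print(f"hashtag mentions: {hashtag_hits}")     # counted
print(f"plain-text mentions: {plain_hits}")    # missed by hashtag-only matching
```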

How are bots created?

Bots are usually managed by automated programs called bot “campaigns”, and these are controlled by human users. The actual process of creating such a campaign is relatively simple. There are several websites that teach people how to do this for “marketing” purposes. In the underground hacker economy on the dark web, such services are available for hire.

While it’s difficult to attribute bots to the humans controlling them, the purpose of bot campaigns is obvious: create social disorder by spreading misinformation. This can increase public anxiety, frustration and anger against authorities in certain situations.

A 2019 report published by researchers from the Oxford Internet Institute revealed a worrying trend in organised “social media manipulation by governments and political parties”. They reported:

Evidence of organised social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically.

The modus operandi of bots

Typically, in the context of COVID-19 messages, bots spread misinformation through two main techniques.

The first involves content creation, wherein bots start new posts with pictures that validate or mirror existing worldwide trends. Examples include pictures of shopping baskets filled with food, or hoarders emptying supermarket shelves. This generates anxiety and confirms what people are reading from other sources.

The second technique involves content augmentation. In this, bots latch onto official government feeds and news sites to sow discord. They retweet alarming tweets or add false comments and information in a bid to stoke fear and anger among users. It’s common to see bots talking about a “frustrating event”, or some social injustice faced by their “loved ones”.

The example below shows a Twitter post from Queensland Health’s official Twitter page, followed by comments from accounts named “Sharon” and “Sara” which I have identified as bot accounts. Many real users reading Sara’s post would undoubtedly feel a sense of injustice on behalf of her “mum”.

The official tweet from Queensland Health and the bots’ responses.

While we can’t be 100% certain these are bot accounts, many factors point to this very likely being the case. Our ability to accurately identify bots will get better as machine learning algorithms in programs such as Bot Sentinel improve.

How to spot a bot

To learn the characteristics of a bot, let’s take a closer look at Sharon’s and Sara’s accounts.

Screenshots of the accounts of ‘Sharon’ and ‘Sara’.

Both profiles lack human uniqueness, and display some telltale signs they may be bots (a rough scoring sketch based on these signs follows the list):

  • they have no followers

  • they only recently joined Twitter

  • they have no last names, and have alphanumeric handles (such as Sara89629382)

  • they have only tweeted a few times

  • their posts have one theme: spreading alarmist comments

Bot ‘Sharon’ tried to rile others up through her tweets.

  • they mostly follow news sites, government authorities, or human users who are highly influential in a certain subject (in this case, virology and medicine).
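These signs can be combined into a crude screening heuristic. The sketch below is illustrative only: the feature names, thresholds and scoring are my own assumptions, not Bot Sentinel’s method, and a high score is a prompt for closer manual inspection rather than proof an account is a bot.

```python
from dataclasses import dataclass

@dataclass
class AccountProfile:
    """Minimal account features, as collected by hand or via the Twitter API."""
    followers_count: int
    account_age_days: int
    tweet_count: int
    handle: str        # e.g. "Sara89629382"
    display_name: str  # e.g. "Sara" (no surname)

def bot_suspicion_score(a: AccountProfile) -> int:
    """Return a 0-5 score; each point maps to one telltale sign from the list above.
    The thresholds are assumptions chosen purely for illustration."""
    score = 0
    if a.followers_count == 0:
        score += 1  # no followers
    if a.account_age_days < 60:
        score += 1  # only recently joined Twitter
    if a.tweet_count < 20:
        score += 1  # has only tweeted a few times
    if any(ch.isdigit() for ch in a.handle):
        score += 1  # alphanumeric handle such as Sara89629382
    if " " not in a.display_name.strip():
        score += 1  # single name, no surname
    return score

# Hypothetical profile resembling the "Sara" account described above
sara = AccountProfile(followers_count=0, account_age_days=14,
                      tweet_count=5, handle="Sara89629382", display_name="Sara")
print(bot_suspicion_score(sara))  # 5 -> worth a closer manual look
```

The last two signs in the list (alarmist, single-theme posts and which accounts are followed) require content analysis and aren’t captured by a simple count like this.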

My investigation into Sharon revealed the bot had attempted to amplify anger over a news article about the federal government’s coronavirus response.

The language used (“Health can’t wait. Economic (sic) can”) suggests a potentially non-native English speaker.

It seems Sharon was trying to stoke the flames of public anger by calling out “bad decisions”.

Looking through Sharon’s tweets, I discovered Sharon’s friend “Mel”, another bot with its own programmed agenda.

Bot ‘Mel’ spread false information about a possible delay in COVID-19 results, and retweeted hateful messages.

What was concerning was that a human user was engaging with Mel.

An account that seemed to belong to a real Twitter user began engaging with ‘Mel’.

You can help tackle misinformation

Currently, it’s simply too hard to attribute the true source of bot-driven misinformation campaigns. This can only be achieved with the full cooperation of social media companies.

The motives of a bot campaign can range from creating mischief to exercising geopolitical control. And some researchers still can’t agree on what exactly constitutes a “bot”.

But one thing is for sure: Australia needs to develop legislation and mechanisms to detect and stop these automated culprits. Organisations running legitimate social media campaigns should dedicate time to using a bot detection tool to weed out and report fake accounts.

And as a social media user in the age of the coronavirus, you can also help by reporting suspicious accounts. The last thing we need is malicious parties making an already worrying crisis worse.




Read more:
You can join the effort to expose Twitter bots




Ryan Ko, Chair Professor and Director of Cyber Security, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Timothy Graham, Queensland University of Technology and Tobias R. Keller, Queensland University of Technology

In the first week of 2020, the hashtag #ArsonEmergency became the focal point of a new online narrative surrounding the bushfire crisis.

The message: the cause is arson, not climate change.

Police and bushfire services (and some journalists) have contradicted this claim.

We studied about 300 Twitter accounts driving the #ArsonEmergency hashtag to identify inauthentic behaviour. We found many accounts using #ArsonEmergency were behaving “suspiciously”, compared to those using #AustraliaFire and #BushfireAustralia.

Accounts peddling #ArsonEmergency carried out activity similar to what we’ve witnessed in past disinformation campaigns, such as the coordinated behaviour of Russian trolls during the 2016 US presidential election.

Bots, trolls and trollbots

The most effective disinformation campaigns use bot and troll accounts to infiltrate genuine political discussion, and shift it towards a different “master narrative”.

Bots and trolls have been a thorn in the side of fruitful political debate since Twitter’s early days. They mimic genuine opinions, akin to what a concerned citizen might display, with a goal of persuading others and gaining attention.

Bots are usually automated (acting without constant human oversight) and perform simple functions, such as retweeting or repeatedly pushing one type of content.

Troll accounts are controlled by humans. They try to stir controversy, hinder healthy debate and simulate fake grassroots movements. They aim to persuade, deceive and cause conflict.

We’ve observed both troll and bot accounts spouting disinformation about the bushfires on Twitter. We were able to identify these accounts as inauthentic in two ways.

First, we used sophisticated software tools including tweetbotornot, Botometer, and Bot Sentinel.

There are various definitions for the terms “bot” and “troll”. Bot Sentinel says:

Propaganda bots are pieces of code that utilize Twitter API to automatically follow, tweet, or retweet other accounts bolstering a political agenda. Propaganda bots are designed to be polarizing and often promote content intended to be deceptive… Trollbot is a classification we created to describe human controlled accounts who exhibit troll-like behavior.

Some of these accounts frequently retweet known propaganda and fake news accounts, and they engage in repetitive bot-like activity. Other trollbot accounts target and harass specific Twitter accounts as part of a coordinated harassment campaign. Ideology, political affiliation, religious beliefs, and geographic location are not factors when determining the classification of a Twitter account.

These machine learning tools compared the behaviour of known bots and trolls with the accounts tweeting the hashtags #ArsonEmergency, #AustraliaFire, and #BushfireAustralia. From this, they provided a “score” for each account suggesting how likely it was to be a bot or troll account.
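The researchers don’t publish their scoring pipeline here, but Botometer (one of the tools named above) has a public Python client, so a minimal sketch of querying per-account scores could look like the following. The credentials and account names are placeholders, and the exact response fields vary between Botometer versions, so treat this as an assumption-laden illustration rather than the study’s actual method.

```python
import botometer  # pip install botometer

# Placeholder credentials: you need your own Twitter app keys and a RapidAPI key.
# (Older versions of the client used `mashape_key` instead of `rapidapi_key`.)
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...",
                          **twitter_app_auth)

# Hypothetical handles; in the study these would be accounts tweeting #ArsonEmergency.
accounts = ["@example_account_1", "@example_account_2"]

for screen_name, result in bom.check_accounts_in(accounts):
    # `result` is a dict of bot-likelihood scores; the key layout differs between
    # Botometer versions, so here we simply print the display scores to inspect them.
    print(screen_name, result.get("display_scores"))
```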

We also manually analysed the Twitter activity of suspicious accounts and the characteristics of their profiles, to validate the origins of #ArsonEmergency, as well as the potential motivations of the accounts spreading the hashtag.

Who to blame?

Unfortunately, we don’t know who is behind these accounts, as we can only access trace data such as tweet text and basic account information.

This graph shows how many times #ArsonEmergency was tweeted between December 31, 2019 and January 8, 2020:

On the vertical axis is the number of tweets over time which featured #ArsonEmergency. On January 7, there were 4726 tweets.
Author provided

Previous bot and troll campaigns have been thought to be the work of foreign interference, such as Russian trolls, or PR firms hired to distract and manipulate voters.

The New York Times has also reported on perceptions that media magnate Rupert Murdoch is influencing Australia’s bushfire debate.




Read more:
Weather bureau says hottest, driest year on record led to extreme bushfire season


Weeding out inauthentic behaviour

In late November, some Twitter accounts began using #ArsonEmergency to counter evidence that climate change is linked to the severity of the bushfire crisis.

Below is one of the earliest examples of an attempt to replace #ClimateEmergency with #ArsonEmergency. The accounts tried to get #ArsonEmergency trending to drown out dialogue acknowledging the link between climate change and bushfires.

We suspect the origins of the #ArsonEmergency debacle can be traced back to a few accounts.
Author provided

The hashtag was only tweeted a few times in 2019, but gained traction this year in a sustained effort by about 300 accounts.

A much larger proportion of bot-like and troll-like accounts pushed #ArsonEmergency than pushed #AustraliaFire or #BushfireAustralia.

The narrative was then adopted by genuine accounts who furthered its spread.

On multiple occasions, we noticed suspicious accounts countering expert opinions while using the #ArsonEmergency hashtag.

The inauthentic accounts engaged with genuine users in an effort to persuade them.
Author provided

Bad publicity

Since media coverage has shone a light on the disinformation campaign, #ArsonEmergency has gained even more prominence, but in a different light.

Some journalists are acknowledging the role of disinformation in the bushfire crisis, and countering the narrative that Australia has an arson emergency. However, the campaign does indicate Australia has a climate denial problem.

What’s clear to me is that Australia has been propelled into the global disinformation battlefield.




Read more:
Watching our politicians fumble through the bushfire crisis, I’m overwhelmed by déjà vu


Keep your eyes peeled

It’s difficult to debunk disinformation, as it often contains a grain of truth. In many cases, it leverages people’s previously held beliefs and biases.

Humans are particularly vulnerable to disinformation in times of emergency, or when addressing contentious issues like climate change.

Online users, especially journalists, need to stay on their toes.

The accounts we come across on social media may not represent genuine citizens and their concerns. A trending hashtag may be trying to mislead the public.

Right now, it’s more important than ever for us to prioritise factual news from reliable sources – and identify and combat disinformation. The Earth’s future could depend on it.

Timothy Graham, Senior lecturer, Queensland University of Technology and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.