Anger is all the rage on Twitter when it’s cold outside (and on Mondays)



Shutterstock

Heather R. Stevens, Macquarie University; Ivan Charles Hanigan, University of Sydney; Paul Beggs, Macquarie University, and Petra Graham, Macquarie University

The link between hot weather and aggressive crime is well established. But can the same be said for online aggression, such as angry tweets? And is online anger a predictor of assaults?

Our newly published study suggests the answer is a clear “no”. We found angry tweet counts actually increased in cooler weather. And as daily maximum temperatures rose, angry tweet counts decreased.

We also found the incidence of angry tweets is highest on Mondays, and perhaps unsurprisingly, angry Twitter posts are most prevalent after big news events such as a leadership spill.

This is the first study to compare patterns of assault and social media anger with temperature. Given anger spreads through online communities faster than any other emotion, the findings have broad implications – especially under climate change.

A caricature of US President Donald Trump, who’s been known to fire off an angry tweet.
Shutterstock

Algorithms are watching you

Of Australia’s 24.6 million people, 18 million, or 73%, are active social media users. Some 4.7 million Australians, or 19%, use Twitter. This widespread social media use provides researchers with valuable opportunities to gather information.

When you publicly post, comment or even upload a selfie, an algorithm can scan it to estimate your mood (positive or negative) or your emotion (such as anger, joy, fear or surprise).

This information can be linked with the date, time of day, location or even your age and sex, to determine the “mood” of a city or country in near real time.

Our study involved 74.2 million English-language Twitter posts – or tweets – from 2015 to 2017 in New South Wales.

We analysed them using the publicly available We Feel tool, developed by the CSIRO and the Black Dog Institute, to see if social media can accurately map our emotions.

Some 2.87 million tweets (or 3.87%) contained words or phrases considered angry, such as “vicious”, “hated”, “irritated”, “disgusted” and the very popular “f*cked”.
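As a rough illustration of how this kind of keyword-based flagging works, here is a minimal sketch. It is not the actual We Feel algorithm; the word list (drawn from the examples above) and the matching rule are assumptions for demonstration only.

```python
# Minimal keyword-based anger flagging, for illustration only.
# This is NOT the We Feel algorithm; the word list and matching rule
# are assumptions for demonstration.
import re

ANGRY_TERMS = {"vicious", "hated", "irritated", "disgusted", "f*cked"}

def is_angry(tweet_text: str) -> bool:
    """Return True if the tweet contains any term from the anger word list."""
    tokens = set(re.findall(r"[a-z*']+", tweet_text.lower()))
    return not tokens.isdisjoint(ANGRY_TERMS)

tweets = [
    "Absolutely disgusted by the service today",
    "What a lovely sunny afternoon",
]
angry_count = sum(is_angry(t) for t in tweets)
print(f"{angry_count} of {len(tweets)} tweets flagged as angry")  # 1 of 2
```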

Hot-headed when it’s cold outside

On average, the number of angry tweets was highest when the temperature was below 15℃, and lowest in warm temperatures (25-30℃).

The number of angry tweets slightly increased again in very high temperatures (above 35℃), although with fewer days in that range there was less certainty about the trend.

On the ten days with the highest daily maximum temperatures, the average angry tweet count was 2,482 per day. On the ten coldest days, the average angry tweet count was higher, at 3,354 per day.
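Comparing counts across temperature bands like this amounts to binning each day by its maximum temperature and averaging the daily angry tweet counts within each bin. The sketch below shows that aggregation using hypothetical column names and values, not the study’s dataset.

```python
# Binning daily angry tweet counts by maximum temperature.
# Column names and values are hypothetical, not the study's dataset.
import pandas as pd

daily = pd.DataFrame({
    "max_temp_c":   [12.0, 14.5, 22.0, 27.5, 29.0, 36.2, 38.0],
    "angry_tweets": [3400, 3250, 2900, 2500, 2450, 2600, 2650],
})

bins = [float("-inf"), 15, 25, 30, 35, float("inf")]
labels = ["<15", "15-25", "25-30", "30-35", ">35"]
daily["temp_band"] = pd.cut(daily["max_temp_c"], bins=bins, labels=labels)

# Average daily angry tweet count within each temperature band
print(daily.groupby("temp_band", observed=True)["angry_tweets"].mean())
```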




Read more:
Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots


The pattern of angry tweets was opposite to that of physical assaults, which are more prevalent in hotter weather – with some evidence of a decline in extreme heat.

So why the opposite patterns? We propose two possible explanations.

First, hot and cold weather triggers a physiological response in humans. Temperature affects our heart rate, the amount of oxygen to our brain, hormone regulation (including testosterone) and our ability to sleep. In some people, this in turn affects physical aggression levels.

Hot weather means more socialising, and potentially less time for tweeting.
Shutterstock

Second, weather triggers changes to our routine. Research suggests aggressive crimes increase because warmer weather encourages behaviour that fosters assaults. This includes more time outdoors, increased socialising and drinking alcohol.

Those same factors – time outdoors and more socialising – may reduce the opportunity or motivation to tweet. And the effects of alcohol (such as reduced mental clarity and physical precision) make composing a tweet harder, and therefore less likely.

This theory is supported by our finding that both angry tweet counts and overall tweet counts were lowest on weekends, holidays and the hottest days.




Read more:
Car accidents, drownings, violence: hotter temperatures will mean more deaths from injury


It’s possible that as people vent their frustrations online, they feel better and are then less inclined to commit an assault. However, this theory isn’t well supported.

The relationship is more likely due to the vastly different demographics of Twitter users and assault offenders.

Assault offenders are most likely to be young men from low socio-economic backgrounds. In contrast, about half of Twitter users are female, and they’re more likely to be middle-aged and in a higher income bracket compared with other social media users.

Our study did not consider why these two groups differ in response to temperature. However, we are currently researching how age, sex and other social and demographic factors influence the relationships between temperature and aggression.

Twitter users are more likely to be middle aged.
Shutterstock

The Monday blues

Our study primarily set out to see whether temperatures and angry tweet counts were related. But we also uncovered other interesting trends.

Average angry tweet counts were highest on a Monday (2,759 per day) and lowest on weekends (Saturdays, 2,373; Sundays, 2,499). This supports research that found an online mood slump on weekdays.

We also found the ten days with the highest angry tweet counts coincided with major news events. These events included:

  • the federal leadership spill in 2015 when Malcolm Turnbull replaced Tony Abbott as prime minister

  • a severe storm front in NSW in 2015, then a major cold front a few months later

  • two mass shootings in the United States: Orlando in 2016 and Las Vegas in 2017

  • sporting events including the Cricket World Cup in 2015.

Days with high angry tweet counts correlated with major news events.
Shutterstock

Twitter in a warming world

Our study was limited in that Twitter users are not necessarily representative of the broader population. For example, Twitter is a preferred medium for politicians, academics and journalists. These users may express different emotions, or less emotion, in their posts than other social media users.

However, the influence of temperature on social media anger has broad implications. Of all the emotions, anger spreads through online communities the fastest. So temperature changes and corresponding social media anger can affect the wider population.

We hope our research helps health and justice services develop more targeted measures based on temperature.

And with climate change likely to affect assault rates and mood, more research in this field is needed.




Read more:
Nine things you love that are being wrecked by climate change


The Conversation


Heather R. Stevens, Doctoral student in Environmental Sciences, Macquarie University; Ivan Charles Hanigan, Data Scientist (Epidemiology), University of Sydney; Paul Beggs, Associate Professor and Environmental Health Scientist, Macquarie University, and Petra Graham, Senior Research Fellow, Macquarie Business School, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trump’s Twitter tantrum may wreck the internet


Michael Douglas, University of Western Australia

US President Donald Trump, who tweeted more than 11,000 times in the first two years of his presidency, is very upset with Twitter.

Earlier this week Trump tweeted complaints about mail-in ballots, alleging voter fraud – a familiar Trump falsehood. Twitter attached a label to two of his tweets with links to sources that fact-checked the tweets, showing Trump’s claims were unsubstantiated.

Trump retaliated with the power of the presidency. On May 28 he made an “Executive Order on Preventing Online Censorship”. The order focuses on an important piece of legislation: section 230 of the Communications Decency Act 1996.




Read more:
Can you be liable for defamation for what other people write on your Facebook page? Australian court says: maybe


What is section 230?

Section 230 has been described as “the bedrock of the internet”.

It affects companies that host content on the internet. It provides in part:

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

This means that, generally, the companies behind Google, Facebook, Twitter and other “internet intermediaries” are not liable for the content on their platforms.

For example, if something defamatory is written by a Twitter user, the company Twitter Inc will enjoy a shield from liability in the United States even if the author does not.




Read more:
A push to make social media companies liable in defamation is great for newspapers and lawyers, but not you


Trump’s executive order

Within the US legal system, an executive order is a “signed, written, and published directive from the President of the United States that manages operations of the federal government”. It is not legislation. Under the Constitution of the United States, Congress – the equivalent of our Parliament – has the power to make legislation.

Trump’s executive order claims to protect free speech by narrowing the protection section 230 provides for social media companies.

The text of the order includes the following:

It is the policy of the United States that such a provider [who does not act in “good faith”, but stifles viewpoints with which they disagree] should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider …

To advance [this] policy … all executive departments and agencies should ensure that their application of section 230 (c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.

The order attempts to do a lot of other things too. For example, it calls for the creation of new regulations concerning section 230, and what “taken in good faith” means.

The reaction

Trump’s action has some support. Republican senator Marco Rubio said if social media companies “have now decided to exercise an editorial role like a publisher, then they should no longer be shielded from liability and treated as publishers under the law”.

Critics argue the order threatens, rather than protects, freedom of speech, thus threatening the internet itself.

The status of this order within the American legal system is an issue for American constitutional lawyers. Experts were quick to suggest the order is unconstitutional; it seems contrary to the separation of powers enshrined in the US Constitution (which partly inspired Australia’s Constitution).

Harvard Law School constitutional law professor Laurence Tribe has described the order as “totally absurd and legally illiterate”.

That may be so, but the constitutionality of the order is an issue for the US judiciary. Many judges in the United States were appointed by Trump or his ideological allies.

Even if the order is legally illiterate, it should not be assumed it will lack force.

What this means for Australia

Section 230 is part of US law. It is not in force in Australia. But its effects are felt around the globe.

Social media companies who would otherwise feel safe under section 230 may be more likely to remove content when threatened with legal action.

The order might cause these companies to change their internal policies and practices. If that happens, policy changes could be implemented at a global level.

Compare, for example, what happened when the European Union introduced its General Data Protection Regulation (GDPR). Countless companies in Australia had to ensure they were meeting European standards. US-based tech companies such as Facebook changed their privacy policies and disclosures globally – they did not want to meet two different privacy standards.

If section 230 is diminished, it could also affect Australian litigation by providing another target for people hurt by damaging content shared on social media, or content made accessible by internet search. When your neighbour defames you on Facebook, for example, you can sue both the neighbour and Facebook.

That was already the law in Australia. But with a toothless section 230, if you win, the judgement could be enforceable in the US.

Currently, suing certain American tech companies is not always a good idea. Even if you win, you may not be able to enforce the Australian judgement overseas. Tech companies are aware of this.

In 2017, Twitter did not even bother sending anyone to respond to litigation in the Supreme Court of New South Wales involving leaks of confidential information by tweet. When tech companies like Google have responded to Australian litigation, it might be understood as a weird brand of corporate social responsibility: a way of keeping up appearances in an economy that makes them money.

A big day for ‘social media and fairness’?

When Trump made his order, he called it a big day for “fairness”. This is standard Trump fare. But it should not be dismissed outright.

As our own Australian Competition and Consumer Commission recognised last year in its Digital Platforms Inquiry, companies such as Twitter have enormous market power. Their exercise of that power does not always benefit society.

In recent years, social media has advanced the goals of terrorists and undermined democracy. So if social media companies can be held legally liable for some of what they cause, it may do some good.

As for Twitter, the inclusion of the fact check links was a good thing. It’s not like they deleted Trump’s tweets. Also, they’re a private company, and Trump is not compelled to use Twitter.

We should support Twitter’s recognition of its moral responsibility for the dissemination of information (and misinformation), while still leaving room for free speech.

Trump’s executive order is legally illiterate spite, but it should prompt us to consider how free we want the internet to be. And we should take that issue more seriously than we take Trump’s order.

The Conversation

Michael Douglas, Senior Lecturer in Law, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disasters expose gaps in emergency services’ social media use


Tan Yigitcanlar, Queensland University of Technology; Ashantha Goonetilleke, Queensland University of Technology, and Nayomi Kankanamge, Queensland University of Technology

Australia has borne the brunt of several major disasters in recent years, including drought, bushfires, floods and cyclones. The increasing use of social media is changing how we prepare for and respond to these disasters. Emergency services – and in particular their social media channels – are now much-sought-after sources of disaster information and warnings.

We studied Australian emergency services’ social media use in times of disaster. Social media can provide invaluable and time-critical information to both emergency services and communities at risk. But we also found problems.




Read more:
Drought, fire and flood: how outer urban areas can manage the emergency while reducing future risks


How do emergency services use social media?

The 2019-20 Australian bushfires affected 80% of the population directly or indirectly. Social media were widely used to spread awareness of the bushfire disaster and to raise funds – albeit sometimes controversially – to help people in need.

The escalating use and importance of social media in disaster management raises an important question:

How effective are social media pages of Australian state emergency management organisations in meeting community expectations and needs?

To answer this question, QUT’s Urban Studies Lab investigated the community engagement approaches of social media pages maintained by various Australian emergency services. We placed Facebook and Twitter pages of New South Wales State Emergency Services (NSW-SES), Victoria State Emergency Services (VIC-SES) and Queensland Fire and Emergency Services (QLD-FES) under the microscope.

Our study made four key findings.

First, emergency services’ social media pages are intended to:

  • disseminate warnings
  • provide an alternative communication channel
  • receive rescue and recovery requests
  • collect information about the public’s experiences
  • raise disaster awareness
  • build collective intelligence
  • encourage volunteerism
  • express gratitude to emergency service staff and volunteers
  • raise funds for those in need.



Read more:
With costs approaching $100 billion, the fires are Australia’s costliest natural disaster


Examples of emergency services’ social media posts are shown below.

NSW-SES collecting data from the public through their posts.
Facebook
VIC-SES sharing weather warnings to inform the public.
Facebook
QLD-FES posting fire condition information to increase public awareness.
Facebook
QLD-FES showing the direction of a cyclone and warning the community.
Facebook

Second, Facebook pages of emergency services attract more community attention than Twitter pages. Services need to make their Twitter pages more attractive as, unlike Facebook, Twitter allows streamlined data download for social media analytics. A widely used emergency service Twitter page means more data for analysis, and potentially more accurate policies and actions.

Third, Australia lacks a legal framework for the use of social media in emergency service operations. Developing such a framework would help organisations maximise their use of social media, especially in financial matters such as donations.

Fourth, the credibility of public-generated information can sometimes be questionable. Authorities need to be able to respond rapidly to such information to avoid the spread of misinformation or “fake news” on social media.

Services could do more with social media

Our research highlighted that emergency services could use social media more effectively. We do not see these services analysing social media data to inform their activities before, during and after disasters.

In another study on the use of social media analytics for disaster management, we developed a novel approach to show how emergency services can identify disaster-affected areas using real-time social media data. For that study, we collected Twitter data with location information on the 2010-11 Queensland floods. We were able to identify disaster severity by analysing the emotional or sentiment values of tweets.
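In outline, this kind of analysis scores each geotagged tweet for distress-related sentiment and aggregates the scores by location. The sketch below uses a crude keyword count and hypothetical data; the study’s actual sentiment analysis would have been more sophisticated.

```python
# A rough sketch: score geotagged tweets for distress-related terms and
# aggregate by suburb to flag potentially disaster-affected areas.
# The scoring rule, term list and data are hypothetical, not the study's method.
from collections import defaultdict

NEGATIVE_TERMS = {"flooded", "trapped", "help", "evacuate", "rising", "scared"}

def severity_score(text: str) -> int:
    """Count distress-related terms as a crude severity proxy."""
    return sum(term in text.lower() for term in NEGATIVE_TERMS)

tweets = [
    {"suburb": "Rocklea", "text": "Street completely flooded, water still rising"},
    {"suburb": "Rocklea", "text": "We are trapped upstairs, please send help"},
    {"suburb": "Chermside", "text": "Rainy day but all fine here"},
]

totals = defaultdict(int)
for t in tweets:
    totals[t["suburb"]] += severity_score(t["text"])

# Higher totals suggest areas worth prioritising for closer assessment.
for suburb, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(suburb, score)
```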




Read more:
Explainer: how the internet knows if you’re happy or sad


This work generated the disaster severity map shown below. The map matches the actual figures in the report of the Queensland Floods Commission of Inquiry with more than 90% accuracy.

Disaster severity map created through Twitter analytics.
Authors

Concerns about using social media to manage disaster

The first commonly voiced concern about social media use in disaster management is the digital divide. While the issue of underrepresented people and communities remains important, the use of technology is spreading widely. There were 3.4 billion social media users worldwide in 2019, and the growth in numbers is accelerating.




Read more:
Online tools can help people in disasters, but do they represent everyone?


Besides, many Australian cities and towns are investing in smart city strategies and infrastructure. These localities provide free public Wi-Fi connections. And almost 90% of Australians now own a smartphone.

The second concern is information accuracy or “fake news” on social media. Evidently, sharing false information and rumours compromises the information social media provides. Social media images and videos tagged with location information can provide more reliable, eye-witness information.

Another concern is difficulty in receiving social media messages from severely affected areas. For instance, the disaster might have brought down internet or 4G/5G coverage, or people might have been evacuated from areas at risk. This might lead to limited social media posts from the actual disaster zone, with increasing numbers of posts from the places to which people have been relocated.

In such a scenario, alternative social media analytics are on offer. We can use content analysis and sentiment analysis to determine the disaster location and impact.

How to make the most of social media

Social media and its applications are generating new and innovative ways to manage disasters and reduce their impacts. These include:

  • increasing community trust in emergency services by social media profiling
  • crowd-sourcing the collection and sharing of disaster information
  • creating awareness by incorporating gamification applications in social media
  • using social media data to detect disaster intensity and hotspot locations
  • running real-time data analytics.

In sum, social media could become a mainstream information provider for disaster management. The need is likely to become more pressing as human-induced climate change increases the severity and frequency of disasters.

Today, as we confront the COVID-19 pandemic, social media analytics are helping to ease its impacts. Artificial intelligence (AI) technologies are greatly reducing processing time for social media analytics. We believe the next-generation AI will enable us to undertake real-time social media analytics more accurately.




Read more:
Coronavirus: How Twitter could more effectively ease its impact


The Conversation


Tan Yigitcanlar, Associate Professor of Urban Studies and Planning, Queensland University of Technology; Ashantha Goonetilleke, Professor, Queensland University of Technology, and Nayomi Kankanamge, PhD Candidate, School of Built Environment, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots



Shutterstock

Ryan Ko, The University of Queensland

Recently Facebook, Reddit, Google, LinkedIn, Microsoft, Twitter and YouTube committed to removing coronavirus-related misinformation from their platforms.

COVID-19 is being described as the first major pandemic of the social media age. In troubling times, social media helps distribute vital knowledge to the masses.
Unfortunately, this comes with myriad misinformation, much of which is spread through social media bots.

These fake accounts are common on Twitter, Facebook, and Instagram. They have one goal: to spread fear and fake news.

We witnessed this in the 2016 United States presidential election and in the arson rumours during the bushfire crisis, and we’re seeing it again in relation to the coronavirus pandemic.




Read more:
Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Busy busting bots

This figure shows the top Twitter hashtags tweeted by bots over 24 hours.
Bot Sentinel

The exact scale of misinformation is difficult to measure. But its global presence can be felt through snapshots of Twitter bot involvement in COVID-19-related hashtag activity.

Bot Sentinel is a website that uses machine learning to identify potential Twitter bots, using a score and rating. According to the site, on March 26 bot accounts were responsible for 828 counts of #coronavirus, 544 counts of #COVID19 and 255 counts of #Coronavirus hashtags within 24 hours.

These hashtags respectively took the 1st, 3rd and 7th positions of all top-trolled Twitter hashtags.

It’s important to note the actual number of coronavirus-related bot tweets is likely much higher, as Bot Sentinel only recognises hashtag terms (such as #coronavirus), and wouldn’t pick up the same words used without a hashtag, such as “coronavirus”, “COVID19” or “Coronavirus”.
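The distinction comes down to matching hashtag tokens rather than plain words. Below is a small, generic illustration of that difference; it is not Bot Sentinel’s actual matching logic.

```python
# Why hashtag-only counting misses plain-text mentions.
# A generic sketch, not Bot Sentinel's actual matching logic.
import re

def count_mentions(tweets, term):
    """Count tweets mentioning `term` as a hashtag versus as a plain word."""
    hashtag = sum(bool(re.search(rf"#{term}\b", t, re.IGNORECASE)) for t in tweets)
    plain = sum(bool(re.search(rf"(?<!#)\b{term}\b", t, re.IGNORECASE)) for t in tweets)
    return hashtag, plain

tweets = [
    "Stay safe everyone #coronavirus",
    "The coronavirus outbreak is getting worse",
    "Latest COVID19 numbers look grim",
]

print(count_mentions(tweets, "coronavirus"))  # (1, 1): one hashtag use, one plain mention
```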

How are bots created?

Bots are usually managed by automated programs called bot “campaigns”, and these are controlled by human users. The actual process of creating such a campaign is relatively simple. There are several websites that teach people how to do this for “marketing” purposes. In the underground hacker economy on the dark web, such services are available for hire.

While it’s difficult to attribute bots to the humans controlling them, the purpose of bot campaigns is obvious: create social disorder by spreading misinformation. This can increase public anxiety, frustration and anger against authorities in certain situations.

A 2019 report published by researchers from the Oxford Internet Institute revealed a worrying trend in organised “social media manipulation by governments and political parties”. They reported:

Evidence of organised social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically.

The modus operandi of bots

Typically, in the context of COVID-19 messages, bots would spread misinformation through two main techniques.

The first involves content creation, wherein bots start new posts with pictures that validate or mirror existing worldwide trends. Examples include pictures of shopping baskets filled with food, or hoarders emptying supermarket shelves. This generates anxiety and confirms what people are reading from other sources.

The second technique involves content augmentation. In this, bots latch onto official government feeds and news sites to sow discord. They retweet alarming tweets or add false comments and information in a bid to stoke fear and anger among users. It’s common to see bots talking about a “frustrating event”, or some social injustice faced by their “loved ones”.

The example below shows a Twitter post from Queensland Health’s official twitter page, followed by comments from accounts named “Sharon” and “Sara” which I have identified as bot accounts. Many real users reading Sara’s post would undoubtedly feel a sense of injustice on behalf of her “mum”.

The official tweet from Queensland Health and the bots’ responses.

While we can’t be 100% certain these are bot accounts, many factors point to this very likely being the case. Our ability to accurately identify bots will get better as machine learning algorithms in programs such as Bot Sentinel improve.

How to spot a bot

To learn the characteristics of a bot, let’s take a closer look at Sharon’s and Sara’s accounts.

Screenshots of the accounts of ‘Sharon’ and ‘Sara’.

Both profiles lack human uniqueness, and display some telltale signs they may be bots (a rough scoring sketch follows this list):

  • they have no followers

  • they only recently joined Twitter

  • they have no last names, and have alphanumeric handles (such as Sara89629382)

  • they have only tweeted a few times

  • their posts have one theme: spreading alarmist comments

Bot ‘Sharon’ tried to rile others up through her tweets.

  • they mostly follow news sites, government authorities, or human users who are highly influential in a certain subject (in this case, virology and medicine).
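Those telltale signs can be combined into a rough screening score, as sketched below. The thresholds and weights are illustrative assumptions, not a validated detector such as Bot Sentinel, and a high score should prompt manual review rather than be treated as proof.

```python
# A rough heuristic score based on the telltale signs listed above.
# Thresholds and weights are illustrative assumptions, not a validated detector.
def bot_likelihood(account: dict) -> int:
    score = 0
    score += account.get("followers", 0) == 0               # no followers
    score += account.get("account_age_days", 9999) < 30     # recently joined
    score += account.get("handle_has_long_digits", False)   # e.g. Sara89629382
    score += account.get("tweet_count", 9999) < 20          # only a few tweets
    score += account.get("alarmist_post_ratio", 0.0) > 0.8  # single alarmist theme
    return score  # 0 (few bot-like traits) to 5 (many bot-like traits)

sara = {
    "followers": 0,
    "account_age_days": 12,
    "handle_has_long_digits": True,
    "tweet_count": 7,
    "alarmist_post_ratio": 1.0,
}
print(bot_likelihood(sara))  # 5: worth manual review, not proof of a bot
```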

My investigation into Sharon revealed the bot had attempted to exacerbate anger on a news article about the federal government’s coronavirus response.

The language: “Health can’t wait. Economic (sic) can” indicates a potentially non-native English speaker.

It seems Sharon was trying to stoke the flames of public anger by calling out “bad decisions”.

Looking through Sharon’s tweets, I discovered Sharon’s friend “Mel”, another bot with its own programmed agenda.

Bot ‘Mel’ spread false information about a possible delay in COVID-19 results, and retweeted hateful messages.

What was concerning was that a human user was engaging with Mel.

An account that seemed to belong to a real Twitter user began engaging with ‘Mel’.

You can help tackle misinformation

Currently, it’s simply too hard to attribute the true source of bot-driven misinformation campaigns. This can only be achieved with the full cooperation of social media companies.

The motives of a bot campaign can range from creating mischief to exercising geopolitical control. And some researchers still can’t agree on what exactly constitutes a “bot”.

But one thing is for sure: Australia needs to develop legislation and mechanisms to detect and stop these automated culprits. Organisations running legitimate social media campaigns should dedicate time to using a bot detection tool to weed out and report fake accounts.

And as a social media user in the age of the coronavirus, you can also help by reporting suspicious accounts. The last thing we need is malicious parties making an already worrying crisis worse.




Read more:
You can join the effort to expose Twitter bots


The Conversation


Ryan Ko, Chair Professor and Director of Cyber Security, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

When a virus goes viral: pros and cons to the coronavirus spread on social media



Tim Gouw/Unsplash, CC BY

Axel Bruns, Queensland University of Technology; Daniel Angus, Queensland University of Technology; Timothy Graham, Queensland University of Technology, and Tobias R. Keller, Queensland University of Technology

News and views about coronavirus have spread via social media in a way that no health emergency has done before.

Platforms like Twitter, Facebook, TikTok and Instagram have played critical roles in sharing news and information, but also in disseminating rumours and misinformation.

Getting the message out

Early on, snippets of information circulated on Chinese social media platforms such as Weibo and WeChat, before state censors banned discussions. These posts already painted a grim picture, and Chinese users continue to play cat and mouse with the Internet police in order to share unfiltered information.

As the virus spread, so did the social media conversation. On Facebook and Twitter, discussions have often taken place ahead of official announcements: calls to cancel the Australian Formula One Grand Prix were trending on Twitter days before the official decision.

Similarly, user-generated public health explainers have circulated while official government agencies in many countries discuss campaign briefs with advertising agencies.

Many will have come across (and, hopefully, adopted) hand-washing advice set to the lyrics of someone’s favourite song:

Widely circulated graphs have also explained the importance of “flattening the curve” and social distancing.

Debunking myths

Social media have been instrumental in responding to COVID-19 myths and misinformation. Journalists, public health experts, and users have combined to provide corrections to dangerous misinformation shared in US President Donald Trump’s press conferences:

Other posts have highlighted potentially deadly assumptions in the UK government’s herd immunity approach to the crisis:

Users have also pointed out inconsistencies in the Australian cabinet’s response to Home Affairs Minister Peter Dutton’s coronavirus diagnosis.

The circulation of such content through social media is so effective because we tend to pay more attention to information we receive through our networks of social contacts.

Similarly, professional health communicators like Dr Norman Swan have been playing an important role in answering questions and amplifying public health messages, while others have set up resources to keep the public informed on confirmed cases:

Even just seeing our leaders’ poor hygienic practices ridiculed might lead us to take better care ourselves:

Some politicians, like Australian Prime Minister Scott Morrison, blandly dismiss social media channels as a crucial source of crisis information, despite more than a decade’s research showing their importance.

This is deeply unhelpful: they should be embracing social media channels as they seek to disseminate urgent public health advice.

Stoking fear

The downside of all that user-driven sharing is that it can lead to mass panics and irrational behaviour – as we have seen with the panic-buying of toilet paper and other essentials.

The panic spiral spins even faster when social media trends are amplified by mainstream media reporting, and vice versa: even a handful of widely shared images of empty supermarket shelves might lead consumers to buy what’s left, if media reporting makes the problem appear much larger than it really is.

News stories and tweets showing empty shelves are much more news- and share-worthy than fully stocked shelves: they’re exceptional. But a focus on these pictures distorts our perception of what is actually happening.

The promotion of such biased content by the news media then creates a higher “viral” potential, and such content gains much more public attention than it otherwise would.

Levels of fear and panic are already higher during times of crisis, of course. As a result, some of us – including journalists and media outlets – might also be willing to believe new information we would otherwise treat with more scepticism. This skews the public’s risk perception and makes us much more susceptible to misinformation.

A widely shared Twitter post showed how panic buying in (famously carnivorous) Glasgow had skipped the vegan food section:

Closer inspection revealed the photo originated from Houston during Hurricane Harvey in 2017 (the dollar signs on the food prices are a giveaway).

This case also illustrates the ability of social media discussion to self-correct, though this can take time, and corrections may not travel as far as initial falsehoods. The potential for social media to stoke fears depends on the gap in reach between the original falsehood and its correction.

The spread of true and false information is also directly affected by the platform architecture: the more public the conversations, the more likely it is that someone might encounter a falsehood and correct it.

In largely closed, private spaces like WhatsApp, or in closed groups or private profile discussions on Facebook, we might see falsehoods linger for considerably longer. A user’s willingness to correct misinformation can also be affected by their need to maintain good relationships within their community. People will often ignore misinformation shared by friends and family.

And unfortunately, the platforms’ own actions can also make things worse: this week, Facebook’s efforts to control “fake news” posts appeared to affect legitimate stories by mistake.

Rallying cries

Their ability to sustain communities is one of the great strengths of social media, especially as we are practising social distancing and even self-isolation. The internet still has a sense of humour which can help ease the ongoing tension and fear in our communities:

Younger generations are turning to newer social media platforms such as TikTok to share their experiences and craft pandemic memes. A key feature of TikTok is the uploading and repurposing of short music clips by platform users – the clip “It’s Corona Time” has been used in over 700,000 posts.

We have seen substantial self-help efforts conducted via social media: school and university teachers who have been told to transition all of their teaching to online modes at very short notice, for example, have begun to share best-practice examples via the #AcademicTwitter hashtag.

The same is true for communities affected by event shutdowns and broader economic downturns, from freelancers to performing artists. Faced with bans on mass gatherings, some artists are finding ways to continue their work: providing access to 600 live concerts via digital concert halls or streaming concerts live on Twitter.

Such patterns are not new: we encountered them in our research as early as 2011, when social media users rallied together during natural disasters such as the Brisbane floods, Christchurch earthquakes, and Sendai tsunami to combat misinformation, amplify the messages of official emergency services organisations, and coordinate community activities.

Especially during crises, most people just want themselves and their community to be safe.

The Conversation

Axel Bruns, Professor, Creative Industries, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology; Timothy Graham, Senior Lecturer, Queensland University of Technology, and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The coronavirus and Chinese social media: finger-pointing in the post-truth era


Haiqing Yu, RMIT University

As public health authorities in China and the world fight the novel coronavirus, they face two major communication obstacles: the eroding trust in the media, and misinformation on social media.

As cities, towns, villages and residential compounds have been shut down or placed under curfew, social media have played a central role in crisis communication.

Chinese social media platforms, from WeChat and Weibo, to QQ, Toutiao, Douyin, Zhihu and Tieba, are the lifeline for many isolated and scared people who have been housebound for over two weeks, relying on their mobile phones to access information, socialise, and order food.

A meme being shared on WeChat reads: ‘When the epidemic is over, men will understand why women suffer from postnatal depression after one-month confinement upon childbirth.’
Author provided

These platforms constitute the mainstream media in the war on the coronavirus.

I experienced the most extraordinary Chinese New Year with my parents in China and witnessed the power of Chinese social media, especially WeChat, in spreading and controlling information and misinformation.

China is not only waging a war against the coronavirus. It is engaged in a media war against misinformation and “rumour” (as termed by the Chinese authorities and social media platforms).

This banner being shared on WeChat reads: ‘Those who do not come clean when having a fever are class enemies hidden among the people.’
Author provided

Information about the virus suddenly increased from January 21, after the central government publicly acknowledged the outbreak the previous day and Zhong Nanshan, China’s leading respiratory expert and anti-SARS hero, declared on the state broadcaster CCTV the virus was transmissible from person to person.




Read more:
Coronavirus: how health and politics have always been inextricably linked in China


On WeChat, the Chinese all-in-one super app with over 1.15 billion monthly active users, there has been only one dominant topic: the coronavirus.

Rumour mongers and rumour busters

In Wired, Robert Dingwall wrote “fear, finger-pointing, and militaristic action against the virus are unproductive”, asking if it is time to adjust to a new normal of outbreaks.

To many Chinese, this new normal of fear and militaristic action is already real in everyday life.

Finger-pointing, however, can be precarious in the era of information control and post-truth.

One of many spoof Cultural Revolution posters being shared on social media to warn people of the consequence of not wearing masks.
Author provided

On WeChat and other popular social media platforms, information about the virus from official, semi-official, unofficial and personal sources is abundant in chat groups, “Moments”, WeChat official accounts, and newsfeeds (mostly from Tencent News and Toutiao).

Information includes personal accounts of life under lockdown, duanzi (jokes, parodies, humorous videos), heroism of volunteers, generosity of donations, quack remedies, scaremongering about deaths and price hikes, and the conspiracy theory of the US waging a biological war against China.

TikTok video (shared on WeChat) on the life of a man in isolation at home and his ‘social life’.

There is also veiled criticism of the government and government officials for mismanagement, bad decisions, despicable behaviours and lack of accountability.

At the same time, the official media and Tencent have stepped up their rumour-busting effort.

They regularly publish rumour-busting pieces. They mobilise the “50-cent army” (wumao) and volunteer wumao (ziganwu) as their truth ambassadors.

Tencent has taken on the responsibility to provide “transparent” communication. It opened a new function through its WeChat mini-program Health, providing real-time updates of the epidemic and comprehensive information – including fake news busting.

The government has told people to only post and forward information from official channels and warned of severe consequences for anyone found guilty of disseminating “rumours”, including permanently blocking WeChat groups, blocking social media accounts, and possible jail terms.

A warning to WeChat users not to spread fake news about the coronavirus.
Author provided

Chinese people, accustomed to having posts deleted, face increased peer pressure in their chat groups to comply with the heightened censorship regime. Amid the panic the general advice is: don’t repost anything.

They are asked to be savvy consumers, able to distinguish fake news, half-truths or rumours, and to trust only one source of truth: the official channels.

But the skills to detect and contain false content are becoming rarer and more difficult to obtain.

Coronavirus and the post-truth

We live in the post-truth era, where every “truth” is driven by subjective, elusive, self-confirming and emotional “facts”.




Read more:
Post-truth politics and why the antidote isn’t simply ‘fact-checking’ and truth


Any news source can take you in the wrong direction.

We have seen that in the eight doctors from Wuhan who went from being branded rumourmongers to being hailed as whistleblowers and heroes within a month.

Dr Li Wenliang, the first to warn others of the “SARS-like” virus in December 2019, died from the novel coronavirus in the early hours of February 7 2020. There is an overwhelming sense of loss, mourning and unspoken indignation at his death in various WeChat groups.

WeChat users mourning the death of Dr Li Wenliang.
Author provided

In the face of this post-truth era, we must ask the questions: what is “rumour”, who defines “rumour”, and how does “rumour” occur in the first place?

Information overload is accompanied by information pollution. Detecting and containing false information on social media has been a technical, sociological and ideological challenge.

With a state-led campaign to “bust rumours” and “clean the web” in a controlled environment at a time of crisis, these questions are more urgent than ever.

As media scholar Yong Hu said in 2011, when “official lies outpace popular rumors” the government and its information control mechanism constitute the greatest obstruction of the truth.

On the one hand, the government has provided an environment conducive to the spread of rumours, and on the other it sternly lashes out against rumours, placing itself in the midst of an insoluble contradiction.

As the late Dr Li Wenliang said: “[To me] truth is more important than my case being redressed; a healthy society should not only allow one voice.”

A screenshot from WeChat quoting Dr Li Wenliang: ‘[To me] truth is more important than my case being redressed; a healthy society should not only allow one voice.’
Author provided

China can lock down its cities, but it cannot lock down rumours on social media.

In fact, the Chinese people are not worried about rumours. They are worried about where to find truth and voice facts: not one single source of truth, but multiple sources of facts that will save lives.

The Conversation

Haiqing Yu, Associate Professor, School of Media and Communication, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Does social media make us more or less lonely? Depends on how you use it



Research by Relationships Australia released in 2018 revealed one in six Australians experience emotional loneliness, which means they lack meaningful relationships in their lives.
SHUTTERSTOCK

Roger Patulny, University of Wollongong

Humans are more connected to each other than ever, thanks to smartphones, the web and social media. At the same time, loneliness is a huge and growing social problem.

Why is this so? Research shows social media use alone can’t cure loneliness – but it can be a tool to build and strengthen our genuine connections with others, which are important for a happy life.

To understand why this is the case, we need to understand more about loneliness, its harmful impact, and what this has to do with social media.

The scale of loneliness

There is great concern about a loneliness epidemic in Australia. In the 2018 Australian Loneliness Report, more than one-quarter of survey participants reported feeling lonely three or more days a week.

Studies have linked loneliness to early mortality, increased cardiovascular disease, poor mental health and depression, suicide, and increased social and health care costs.

But how does this relate to social media?




Read more:
How to be a healthy user of social media


More and more Australians are becoming physically isolated. My previous research demonstrated that face-to-face contact in Australia is declining, and this is accompanied by a rise in technology-enabled communication.

Enter social media, which for many is serving as a replacement for physical connection. Social media influences nearly all relationships now.

Navigating the physical/digital interface

While there is evidence of more loneliness among heavy social media users, there is also evidence suggesting social media use decreases loneliness among highly social people.

How do we explain such apparent contradictions, wherein both the most and least lonely people are heavy social media users?

Research reveals social media is most effective in tackling loneliness when it is used to enhance existing relationships, or forge new meaningful connections. On the other hand, it is counterproductive if used as a substitute for real-life social interaction.

Thus, it is not social media itself, but the way we integrate it into our existing lives which impacts loneliness.

I wandered lonely in the cloud

While social media’s implications for loneliness can be positive, they can also be contradictory.

Tech-industry enthusiasts highlight social media’s benefits, such as how it offers easy, algorithmically enhanced connection to anyone, anywhere in the world, at any time. But this argument often ignores the quality of these connections.

Psychologist Robert Weiss makes a distinction between “social loneliness” – a lack of contact with others – and “emotional loneliness”, which can persist regardless of how many “connections” you have, especially if they do not provide support, affirm identity and create feelings of belonging.




Read more:
A month at sea with no technology taught me how to steal my life back from my phone


Without close, physical connections, shallow virtual friendships can do little to alleviate emotional loneliness. And there is reason to think many online connections are just that.

Evidence from past literature has associated heavy social media use with increased loneliness. This may be because online spaces are often oriented to performance, status, exaggerating favourable qualities (such as by posting only “happy” content and likes), and frowning on expressions of loneliness.

On the other hand, social media plays a vital role in helping us stay connected with friends over long distances, and organise catch-ups. Video conferencing can facilitate “meetings” when physically meeting is impractical.

Platforms like Facebook and Instagram can be used to engage with new people who may turn into real friends later on. Similarly, sites like Meetup can help us find local groups of people whose interests and activities align with our own.

And while face-to-face contact remains the best way to help reduce loneliness, help can sometimes be found through online support groups.

Why so lonely?

There are several likely reasons for our great physical disconnection and loneliness.

We’ve replaced the 20th century idea of stable, permanent careers spanning decades with flexible employment and gig work. This prompts regular relocation for work, which results in disconnection from family and friends.

The way we build McMansions (large, multi-room houses) and sprawl our suburbs is often antisocial, with little thought given to developing vibrant, walkable social centres.




Read more:
Size does matter: Australia’s addiction to big houses is blowing the energy budget


Single-person households are expected to increase from about 2.1 million in 2011 to almost 3.4 million in 2036.

All of the above means the way we manage loneliness is changing.

In our book, my co-authors and I argue people manage their feelings differently than in the past. Living far from friends and family, isolated individuals often deal with negative emotions alone, through therapy, or through connecting online with whoever may be available.

Social media use is pervasive, so the least we can do is bend it in a way that facilitates our real-life need to belong.

It is a tool that should work for us, not the other way around. Perhaps, once we achieve this, we can expect to live in a world that is a bit less lonely.

The Conversation

Roger Patulny, Associate Professor of Sociology, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

As fires rage, we must use social media for long-term change, not just short-term fundraising



Comedian Celeste Barber’s fundraising efforts have gained monumental support. But we need to think of long-term engagement in climate action too.
Facebook

Emma Hutchison, The University of Queensland

With 26 fatalities, half a billion animals impacted and 10.7 million hectares of land burnt, Australia faces a record-breaking bushfire season.

Yet, amid the despondency, moving stories have emerged of phenomenal fundraising conducted through social media.

At the forefront is Australian comedian Celeste Barber, whose Facebook fundraiser has raised more than AUD$45 million – the largest amount in the platform’s history.

Presenting shocking visuals, sites such as Instagram, Twitter and Facebook have been instrumental in communicating the severity of the fires.

But at a time when experts predict worsening climate conditions and longer fire seasons, short bursts of compassion and donations aren’t enough.

For truly effective action against current and future fires, we need to use social media to drive lasting transformations in our attitudes and in our ability to address climate change.

Get out of your echo chamber

Links between social media and public engagement are complex. Their combination can be helpful, as we’re witnessing, but doesn’t necessarily help solve problems requiring long-term attention.




Read more:
Climate change is bringing a new world of bushfires


Online spaces can cultivate polarising, and sometimes harmful, debate.

Past research indicates the presence of online echo chambers, and users’ tendency to seek interaction with others holding the same beliefs as them.

If you’re stuck in an echo chamber, Harvard Law School lecturer Erica Ariel Fox suggests breaking the mould by going out of your way to understand diverse opinions.

Before gearing up to disagree with others, she recommends acknowledging the contradictions and biases you yourself hold, and embracing the opposing sides of yourself.

In tough times, many start to assign blame – often with political or personal agendas.

In the crisis engulfing Australia, we’ve seen this with repeated accusations from conservatives claiming the Greens party have made fire hazard reduction more difficult.

In such conversations, larger injustices and the underlying political challenges are often forgotten. The structural conditions underpinning the crisis remain unchallenged.

Slow and steady

We need to rethink our approach to dealing with climate change, and its harmful effects.

First, we should acknowledge there is no quick way to resolve the issue, despite the immediacy of the threats it poses.

Political change is slow, and needs steady growth. This is particularly true for climate politics, an issue which challenges the social and economic structures we rely on.

Our values and aspirations must also change, and be reflected in our online conversations. Our dialogue should shift from blame to a culture of appreciation, and growing concern for the impact of climate degradation.

Users should continue to explore and learn online, but need to do so in an informed way.

Reading Facebook and Twitter content is fine, but this must be complemented with reliable news sources. Follow authorised user accounts providing fact-based articles and guidance.

Before you join an online debate, it’s important you can back your claims. This helps prevent the spread of misinformation online, which is unfortunately rampant.

A 2018 Reuters Institute report found that, measured by people’s interactions (sharing, commenting and reacting), false news from a small number of Facebook outlets “generated more or as many interactions as established news brands”.

Also, avoid regressive discussions with dead-ends. Social media algorithms dictate that the posts you engage with set the tone for future posts targeted at you, and more engagement with posts will make them more visible to other users too. Spend your time and effort wisely.

And lastly, the internet has made it easier than ever to contact political leaders, whether it’s tweeting at your prime minister, or reaching out to the relevant minister on Facebook.




Read more:
Listen to your people Scott Morrison: the bushfires demand a climate policy reboot


Tangible change-making

History has proven meaningful social and political progress requires sustained public awareness and engagement.

Australian comedian Celeste Barber started fundraising with a goal of $30,000.
Celeste Barber/Facebook

Consider Australia’s recent legislation on marriage equality, or the historical transformation of women’s rights.

These issues affect people constantly, but fixing them required debate over long periods.

We should draw on the awareness raised over the past weeks, and not let dialogue about the heightened threat of bushfires fizzle out.

We must not return to our practices of do-nothingism as soon as the immediate disaster subsides.

Although bushfire fundraisers have collected millions, a European Social Survey of 44,387 respondents from 23 countries found that – while most participants were worried about climate change – less than one-third were willing to pay higher taxes on fossil fuels.

If we want climate action, we must expect more from our governments but also from ourselves.

Social media should be used to consistently pressure government to take principled stances on key issues, not short-sighted policies geared towards the next election.

Opening the public’s eyes

There’s no denying social media has successfully driven home the extent of devastation caused by the fires.

A clip from Fire and Rescue NSW, viewed 7.8 million times on Twitter alone, gives audiences a view of what it’s like fighting on the frontlines.

Images of burnt, suffering animals and destroyed homes, resorts, farms and forests have signalled the horror of what has passed and what may come.

Social media can be a formidable source of inspiration and action. It’s expected to become even more pervasive in our lives, and this is why it must be used carefully.

While showings of solidarity are incredibly helpful, what happens in the coming weeks and months, after the fires pass, is what will matter most.

The Conversation

Emma Hutchison, Associate Professor and ARC DECRA Fellow, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

6 things to ask yourself before you share a bushfire map on social media



NASA’s Worldview software gives you a satellite view of Earth right now, and can help track the spread of fires.
Nasa Worldview

Juan Pablo Guerschman, CSIRO

In recent days, many worrying bushfire maps have been circulating online, some appearing to suggest all of Australia is burning.

You might have seen this example, decried by some as misleading, prompting this Instagram post by its creator:

As he explained, the image isn’t a NASA photo. What a satellite actually “sees” is quite different.

I’ll explain how we use data collected by satellites to estimate how much of an area is burning, or has already been burnt, and what this information should look like once it’s mapped.




Read more:
A crisis of underinsurance threatens to scar rural Australia permanently


Reflective images

When astronauts look out their window in space, this is what they see:

It’s similar to what you might see from an aeroplane window, but higher and covering a wider area.

As you read this, many unmanned satellites are orbiting and photographing Earth. These images are used to monitor fires in real-time. They fall into two categories: reflective and thermal.

Reflective images capture information in the visible range of the electromagnetic spectrum (in other words, what we can see). But they also capture information in wavelengths we can’t see, such as infrared wavelengths.

If we use only the visible wavelengths, we can render the image similar to what we might see with the naked eye from a satellite. We call these “true colour” images.

This is a true colour image of south-east Australia, taken on January 4th 2020 from the MODIS instrument on the Aqua satellite. Fire smoke is grey, clouds are white, forests are dark green, brown areas are dryland agricultural areas, and the ocean is blue.
NASA Worldview / https://go.nasa.gov/307pDDX
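To give a sense of how such a composite is assembled, here is a minimal sketch in Python. It assumes the red, green and blue reflectance bands have already been loaded as 2D NumPy arrays; the arrays here are fabricated placeholders so the sketch runs, not real MODIS data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assume red, green and blue are 2D arrays of surface reflectance
# (values roughly 0-1), e.g. read from a satellite data file.
# Fabricated random arrays stand in for real data here.
rng = np.random.default_rng(0)
red, green, blue = (rng.random((100, 100)) for _ in range(3))

# Stack the three visible bands into an RGB image and clip to the
# displayable range: this is all a "true colour" composite is.
true_colour = np.clip(np.dstack([red, green, blue]), 0, 1)

plt.imshow(true_colour)
plt.title("True colour composite (red, green, blue bands)")
plt.axis("off")
plt.show()
```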

Note that the image doesn’t have political boundaries, as these aren’t physical features. To make satellite imagery useful for navigation, we overlay the map with location points.

The same image shown as true colour, with the relevant geographical features overlaid.
NASA Worldview / https://go.nasa.gov/2TafEMH

From this, we can infer where the fires are by looking at the smoke. However, the fires themselves are not directly visible.

‘False colour’ images

Shortwave infrared bands are less sensitive to smoke and more sensitive to fire, which means they can tell us where fire is present.

Converting these wavelengths into visible colours produces what we call “false colour” images. For instance:

The same image, this time shown as false colour. Now, the fire smoke is partially transparent grey while the clouds aren’t. Red shows the active fires and brown shows where bushfires have recently burnt.
NASA Worldview / https://go.nasa.gov/2NhzRfN

In this shortwave infrared image, we start to “see” under the smoke, and can identify active fires. We can also learn more about the areas that are already burnt.
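Building a false-colour composite works the same way as the true-colour sketch above, except that invisible wavelengths are assigned to display channels. The band-to-channel mapping below is a common illustrative choice, not necessarily the exact combination used in the image shown.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assume swir, nir and red are 2D reflectance arrays for the shortwave
# infrared, near infrared and red bands. Fabricated here so the sketch runs.
rng = np.random.default_rng(1)
swir, nir, red = (rng.random((100, 100)) for _ in range(3))

# Map invisible wavelengths onto display colours:
# SWIR -> red channel, NIR -> green channel, red band -> blue channel.
# Actively burning pixels have high SWIR reflectance, so they appear red.
false_colour = np.clip(np.dstack([swir, nir, red]), 0, 1)

plt.imshow(false_colour)
plt.title("False colour composite (SWIR, NIR, red bands)")
plt.axis("off")
plt.show()
```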

Thermal and hotspots

As their name suggests, thermal images measure how hot or cold everything in the frame is. Active fires are detected as “hotspots” and mapped as points on the surface.

While reflective imagery is only useful when obtained by a satellite during daytime, thermal hotspots can be measured at night – doubling our capacity to observe active fires.

The same image shown as false colour, with hotspots overlaid in red.
NASA Worldview / https://go.nasa.gov/2rZNIj9

This information can be used to create maps showing the aggregation of hotspots over several days, weeks or months.

Geoscience Australia’s Digital Earth hotspots service shows hotspots across the continent in the last 72 hours. It’s worth reading the “about” section to learn about the limitations and potential sources of error in the map.
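As a rough illustration of how a 72-hour view differs from a long aggregation, here is a sketch using pandas. It assumes a FIRMS-style CSV of hotspot detections with latitude, longitude and acq_date columns; the file name and column names are assumptions, not the Digital Earth format.

```python
import pandas as pd

# Hypothetical hotspot export with one row per thermal detection.
hotspots = pd.read_csv("hotspots.csv", parse_dates=["acq_date"])

# Keep only detections from the last 72 hours...
cutoff = pd.Timestamp.now() - pd.Timedelta(hours=72)
recent = hotspots[hotspots["acq_date"] >= cutoff]

# ...versus aggregating everything in the file, which can make the
# burning area look far larger than it is at any one moment.
print(f"{len(recent)} detections in the last 72 hours, "
      f"{len(hotspots)} in the whole file")

# If you plot these points, keep markers small: each hotspot is roughly
# a 1 km pixel, so oversized icons exaggerate the area on fire.
recent.plot.scatter(x="longitude", y="latitude", s=2)
```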




Read more:
Spread the word: the value of local information in disaster response


When hotspots, which show “hot” pixels, are displayed as oversized icons or aggregated over long periods, the results can be misleading. They can suggest a much larger area is on fire than is actually burning.

For example, it would be wrong to believe all the areas in red in the map below are burning or have already burnt. It’s also unclear over what period of time the hotspots were aggregated.

The ‘world map of fire hotspots’ from the Environmental Investigation Agency.
Environmental Investigation Agency / https://eia-international.org/news/watching-the-world-burn-fires-threaten-the-worlds-tropical-forests-and-millions-of-people/

Get smart

Considering all of the above, there are some key questions you can ask to gauge the authenticity of a bushfire map. These are:

  • Where does this map come from, and who produced it?

  • Is this a single satellite image, or one using hotspots overlaid on a map?

  • What are the colours representing?

  • Do I know when this was taken?

  • If this map depicts hotspots, over what period of time were they collected? A day, a whole year?

  • Is the size of the hotspots representative of the area that is actually burning?

So, the next time you see a bushfire map, think twice before pressing the share button.The Conversation

Juan Pablo Guerschman, Senior Research Scientist, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Timothy Graham, Queensland University of Technology and Tobias R. Keller, Queensland University of Technology

In the first week of 2020, hashtag #ArsonEmergency became the focal point of a new online narrative surrounding the bushfire crisis.

The message: the cause is arson, not climate change.

Police and bushfire services (and some journalists) have contradicted this claim.

We studied about 300 Twitter accounts driving the #ArsonEmergency hashtag to identify inauthentic behaviour. We found many accounts using #ArsonEmergency were behaving “suspiciously”, compared to those using #AustraliaFire and #BushfireAustralia.

Accounts peddling #ArsonEmergency carried out activity similar to what we’ve witnessed in past disinformation campaigns, such as the coordinated behaviour of Russian trolls during the 2016 US presidential election.

Bots, trolls and trollbots

The most effective disinformation campaigns use bot and troll accounts to infiltrate genuine political discussion, and shift it towards a different “master narrative”.

Bots and trolls have been a thorn in the side of fruitful political debate since Twitter’s early days. They mimic genuine opinions, akin to what a concerned citizen might display, with a goal of persuading others and gaining attention.

Bots are usually automated (acting without constant human oversight) and perform simple functions, such as retweeting or repeatedly pushing one type of content.

Troll accounts are controlled by humans. They try to stir controversy, hinder healthy debate and simulate fake grassroots movements. They aim to persuade, deceive and cause conflict.

We’ve observed both troll and bot accounts spouting disinformation regarding the bushfires on Twitter. We were able to distinguish these accounts as being inauthentic for two reasons.

First, we used sophisticated software tools including tweetbotornot, Botometer, and Bot Sentinel.

There are various definitions of the terms “bot” and “troll”. Bot Sentinel says:

Propaganda bots are pieces of code that utilize Twitter API to automatically follow, tweet, or retweet other accounts bolstering a political agenda. Propaganda bots are designed to be polarizing and often promote content intended to be deceptive… Trollbot is a classification we created to describe human controlled accounts who exhibit troll-like behavior.

Some of these accounts frequently retweet known propaganda and fake news accounts, and they engage in repetitive bot-like activity. Other trollbot accounts target and harass specific Twitter accounts as part of a coordinated harassment campaign. Ideology, political affiliation, religious beliefs, and geographic location are not factors when determining the classification of a Twitter account.

These machine learning tools compared the behaviour of known bots and trolls with the accounts tweeting the hashtags #ArsonEmergency, #AustraliaFire, and #BushfireAustralia. From this, they provided a “score” for each account suggesting how likely it was to be a bot or troll account.
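For readers curious what such scoring looks like in practice, below is a minimal sketch using the botometer Python client. The credentials and account names are placeholders, and the exact parameters, response fields and threshold vary between Botometer versions, so treat this as an illustration of the approach rather than the authors’ actual pipeline.

```python
import botometer

# Placeholder credentials - you need your own RapidAPI and Twitter app keys.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Accounts that tweeted the hashtag (hypothetical screen names).
accounts = ["@example_account_1", "@example_account_2"]

for screen_name, result in bom.check_accounts_in(accounts):
    # 'cap' is Botometer's "complete automation probability"; the exact
    # response keys and the 0.8 cut-off are assumptions for illustration.
    cap = result.get("cap", {}).get("english")
    if cap is None:
        print(screen_name, "could not be scored")
    else:
        print(screen_name, cap, "suspicious" if cap > 0.8 else "looks human")
```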

We also manually analysed the Twitter activity of suspicious accounts and the characteristics of their profiles, to validate the origins of #ArsonEmergency, as well as the potential motivations of the accounts spreading the hashtag.

Who to blame?

Unfortunately, we don’t know who is behind these accounts, as we can only access trace data such as tweet text and basic account information.

This graph shows how many times #ArsonEmergency was tweeted between December 31 last year and January 8 this year:

The vertical axis shows the number of tweets per day featuring #ArsonEmergency. On January 7, there were 4,726 tweets.
Author provided
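Producing a daily count like this from raw tweet data is straightforward. Here is a minimal pandas sketch, assuming a hypothetical tweets.csv with created_at and text columns (not the dataset used in the study):

```python
import pandas as pd

# Hypothetical dump of collected tweets with a timestamp and the tweet text.
tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

# Flag tweets containing the hashtag (case-insensitive) and count per day.
has_tag = tweets["text"].str.contains("#ArsonEmergency", case=False, na=False)
daily_counts = (tweets[has_tag]
                .set_index("created_at")
                .resample("D")
                .size())

print(daily_counts)  # one row per day, e.g. 2020-01-07 -> number of tweets
```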

Previous bot and troll campaigns have been thought to be the work of foreign interference, such as Russian trolls, or PR firms hired to distract and manipulate voters.

The New York Times has also reported on perceptions that media magnate Rupert Murdoch is influencing Australia’s bushfire debate.




Read more:
Weather bureau says hottest, driest year on record led to extreme bushfire season


Weeding-out inauthentic behaviour

In late November, some Twitter accounts began using #ArsonEmergency to counter evidence that climate change is linked to the severity of the bushfire crisis.

Below is one of the earliest examples of an attempt to replace #ClimateEmergency with #ArsonEmergency. The accounts tried to get #ArsonEmergency trending to drown out dialogue acknowledging the link between climate change and bushfires.

We suspect the origins of the #ArsonEmergency debacle can be traced back to a few accounts.
Author provided

The hashtag was only tweeted a few times in 2019, but gained traction this year in a sustained effort by about 300 accounts.

A much larger proportion of bot-like and troll-like accounts pushed #ArsonEmergency than pushed #AustraliaFire or #BushfireAustralia.

The narrative was then adopted by genuine accounts who furthered its spread.

On multiple occasions, we noticed suspicious accounts countering expert opinions while using the #ArsonEmergency hashtag.

The inauthentic accounts engaged with genuine users in an effort to persuade them.
Author provided

Bad publicity

Since media coverage exposed the disinformation campaign, #ArsonEmergency has gained even more prominence, but in a different light.

Some journalists are acknowledging the role of disinformation in the bushfire crisis – and countering the narrative that Australia has an arson emergency. However, the campaign does indicate Australia has a climate denial problem.

What’s clear to me is that Australia has been propelled into the global disinformation battlefield.




Read more:
Watching our politicians fumble through the bushfire crisis, I’m overwhelmed by déjà vu


Keep your eyes peeled

It’s difficult to debunk disinformation, as it often contains a grain of truth. In many cases, it leverages people’s previously held beliefs and biases.

Humans are particularly vulnerable to disinformation in times of emergency, or when addressing contentious issues like climate change.

Online users, especially journalists, need to stay on their toes.

The accounts we come across on social media may not represent genuine citizens and their concerns. A trending hashtag may be trying to mislead the public.

Right now, it’s more important than ever for us to prioritise factual news from reliable sources – and identify and combat disinformation. The Earth’s future could depend on it.The Conversation

Timothy Graham, Senior lecturer, Queensland University of Technology and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.