Calls for an ABC-run social network to replace Facebook and Google are misguided. With what money?



Shutterstock

Fiona R Martin, University of Sydney

If Facebook prevented Australian news from being shared on its platform, could the ABC start its own social media service to compensate? While this proposal from the Australia Institute is a worthy one, it’s an impossible ask in the current political climate.

The suggestion is one pillar of the think tank’s new Tech-xit report.

The report canvasses what the Australian government should do if Facebook and Google withdraw their news-related services from Australia, in reaction to the Australian Competition and Consumer Commission’s draft news media bargaining code.

Tech-xit rightly notes the ABC is capable of building social media that doesn’t harvest Australians’ personal data. However, it overlooks the costs and challenges of running a social media service — factors raised in debate over the new code.

Platforms react (badly) to the code

The ACCC’s code is a result of years of research into the effects of platform power on Australian media.

It requires Facebook and Google to negotiate with Australian news businesses about licensing payments for hosting news excerpts, providing access to news user data and information on pending news feed algorithm changes.

Predictably, the tech companies are not happy. They argue they make far less from news than the ACCC estimates, have greater costs and return more benefit to the media.

Facebook has threatened to stop Australian users from sharing local or international news if the code becomes law. Google notified Australians its free services would be “at risk”, although it later said it would negotiate if the draft law was changed in its favour.

Facebook’s withdrawal, which the Tech-xit report sees as being likely if the law passes, would reduce Australians’ capacity to share vital news about their communities, activities and businesses.




Read more:
If Facebook really pulls news from its Australian sites, we’ll have a much less compelling product


ABC to the rescue?

Cue the ABC then, says Jordan Guiao, the report’s author. Guiao is the former head of social media for both the ABC and SBS, and now works at the institute’s Centre for Responsible Technology.

He argues that, if given the funding, ABC Online could reinvent itself to become a “national social platform connecting everyday Australians”. He says all the service would have to do is add

distinct user profiles, user publishing and content features, group connection features, chat, commenting and interactive discussion capabilities.

He proposes that, as a trusted information source, the ABC could enable “genuine exchange and influence on decision making” and “provide real value to local communities starved of civic engagement”.

Financial reality check

It’s a bold move to suggest the ABC could start yet another major network when it has just had to cut A$84 million from its budget and lose more than 200 staff.

The institute’s idea is very likely an effort to persuade the Morrison government it should redirect some of that funding back to Aunty, which has a history of digital innovation with ABC Online, iView, Q&A and the like.

However, the government has repeatedly denied it has cut funding to the national broadcaster. It hasn’t provided catch-up emergency broadcasting funds since the ABC covered our worst ever fire season. This doesn’t bode well for a change of mind on future allocations.

The government also excluded the ABC and SBS as beneficiaries of the news media bargaining code negotiations.

The ABC doesn’t even have access to start-up venture capital the way most social media companies do. According to Crunchbase, Twitter and Reddit — the two most popular news-sharing platforms after Facebook — have raised roughly US$1.5 billion and US$550 million respectively in investment rounds, allowing them to constantly innovate in service delivery.

Operational challenges

In contrast, over the past decade, ABC Online has had to reduce many of the “social” services it once offered. This is largely due to the cost of moderating online communities and managing user participation.

Illustration of person removing a social media post.
Social media content moderation requires an abundance of time, money and human resources.
Shutterstock

First, news comments sections were canned, and online communities such as the Four Corners forums and The Drum website were closed.

Last year, the ABC’s flagship site for regional and rural user-created stories, ABC Open, was also shut down.

Even if the government were to inject millions into an “ABC Social”, it’s unlikely the ABC could deal with the problems of finding and removing illegal content at scale.

It’s an issue that still defeats social media platforms, and the ABC does not have the machine learning expertise or funds for an army of outsourced moderators.

The move would also expose the ABC to accusations it was crowding out private innovation in the platform space.

A future without Facebook

It’s unclear whether Facebook will go ahead with its threat of preventing Australian users from sharing news on its platform, given the difficulties with working out exactly who an Australian user is.

For instance, the Australian public includes dual citizens, temporary residents, international students and business people, and expatriates.

If it does, why burden the ABC with the duty to recreate social media? Facebook’s withdrawal could be a boon for Twitter, Reddit and whatever may come next.

In the meantime, if we restored the ABC’s funding, it could develop more inventive ways to share local news online that can’t be threatened by Facebook and Google.




Read more:
Latest $84 million cuts rip the heart out of the ABC, and our democracy


The Conversation


Fiona R Martin, Associate Professor in Convergent and Online Media, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Do social media algorithms erode our ability to make decisions freely? The jury is out



Charles Deluvio/Unsplash, CC BY-SA

Lewis Mitchell and James Bagrow, University of Vermont

Social media algorithms, artificial intelligence, and our own genetics are among the factors influencing us beyond our awareness. This raises an ancient question: do we have control over our own lives? This article is part of The Conversation’s series on the science of free will.


Have you ever watched a video or movie because YouTube or Netflix recommended it to you? Or added a friend on Facebook from the list of “people you may know”?

And how does Twitter decide which tweets to show you at the top of your feed?

These platforms are driven by algorithms, which rank and recommend content for us based on our data.

As Woodrow Hartzog, a professor of law and computer science at Northeastern University, Boston, explains:

If you want to know when social media companies are trying to manipulate you into disclosing information or engaging more, the answer is always.

So if we are making decisions based on what’s shown to us by these algorithms, what does that mean for our ability to make decisions freely?

What we see is tailored for us

An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients. Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements.

The ingredients used are the data we provide through our actions online – knowingly or otherwise. Every time you like a post, watch a video, or buy something, you provide data that can be used to make predictions about your next move.
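To make the “digital recipe” metaphor concrete, here is a minimal sketch of how a feed-ranking algorithm might score posts. Everything in it (the weights, the feature names, the data) is invented for illustration; real platforms use far more elaborate machine learning models.

```python
# A toy "digital recipe": score each post for one user, then rank the feed.
# All weights and field names here are invented for illustration.

def score_post(post, user_history):
    """Return an engagement score; higher scores surface earlier in the feed."""
    score = 0.0
    for topic in post["topics"]:
        # Reward topics this user has engaged with before.
        score += 2.0 * user_history.get(topic, 0)
    # Favour newer posts over older ones.
    score += 1.0 / (1 + post["age_hours"])
    # Posts that already attract likes get a further boost.
    score += 0.1 * post["likes"]
    return score

posts = [
    {"id": 1, "topics": ["sport"], "age_hours": 2, "likes": 40},
    {"id": 2, "topics": ["politics"], "age_hours": 1, "likes": 5},
]
user_history = {"politics": 7, "sport": 1}  # counts of past likes per topic

feed = sorted(posts, key=lambda p: score_post(p, user_history), reverse=True)
print([p["id"] for p in feed])  # prints [2, 1]: the recipe decides what you see first
```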

These algorithms can influence us, even if we’re not aware of it. As the New York Times’ Rabbit Hole podcast explores, YouTube’s recommendation algorithms can drive viewers to increasingly extreme content, potentially leading to online radicalisation.

Facebook’s News Feed algorithm ranks content to keep us engaged on the platform. It can produce a phenomenon called “emotional contagion”, in which seeing positive posts leads us to write positive posts ourselves, and seeing negative posts means we’re more likely to craft negative posts — though this study was controversial partially because the effect sizes were small.

Also, so-called “dark patterns” are designed to trick us into sharing more, or spending more on websites like Amazon. These are tricks of website design such as hiding the unsubscribe button, or showing how many people are buying the product you’re looking at right now. They subconsciously nudge you towards actions the site would like you to take.




Read more:
Sludge: how corporations ‘nudge’ us into spending more


You are being profiled

Cambridge Analytica, the company involved in the largest known Facebook data leak to date, claimed to be able to profile your psychology based on your “likes”. These profiles could then be used to target you with political advertising.

“Cookies” are small pieces of data which track us across websites. They are records of actions you’ve taken online (such as links clicked and pages visited) that are stored in the browser. When they are combined with data from multiple sources including from large-scale hacks, this is known as “data enrichment”. It can link our personal data like email addresses to other information such as our education level.

These data are regularly used by tech companies like Amazon, Facebook, and others to build profiles of us and predict our future behaviour.

You are being predicted

So, how much of your behaviour can be predicted by algorithms based on your data?

Our research, published in Nature Human Behaviour last year, explored this question by looking at how much information about you is contained in the posts your friends make on social media.

Using data from Twitter, we estimated how predictable people’s tweets were, using only the data from their friends. We found data from eight or nine friends was enough to predict someone’s tweets just as well as if we had downloaded them directly (well over 50% accuracy, see graph below). Indeed, 95% of the potential predictive accuracy that a machine learning algorithm might achieve is obtainable just from friends’ data. A toy illustration of the idea follows the figure.

Average predictability from your circle of closest friends (blue line). A value of 50% means getting the next word right half of the time — no mean feat as most people have a vocabulary of around 5,000 words. The curve shows how much an AI algorithm can predict about you from your friends’ data. Roughly 8-9 friends are enough to predict your future posts as accurately as if the algorithm had access to your own data (dashed line).
Bagrow, Liu, & Mitchell (2019)
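To give a flavour of how prediction from friends’ data can work, here is a toy sketch using a simple bigram model over invented snippets of text. This is not the method used in our study, which relied on information-theoretic estimates over large Twitter histories; it only illustrates the principle that your friends’ words carry information about yours.

```python
# Toy illustration: predict a person's next word using only text written
# by their friends, via a most-frequent-follower (bigram) rule.
from collections import Counter, defaultdict

friends_text = "the match was great the match was close we loved the match".split()
target_text = "the match was great we loved it".split()

# For each word, count which words follow it in the friends' text.
following = defaultdict(Counter)
for prev, nxt in zip(friends_text, friends_text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequent follower of `word` in the friends' corpus, if any."""
    return following[word].most_common(1)[0][0] if following[word] else None

# Score how often the friends-only model guesses the target's next word.
pairs = list(zip(target_text, target_text[1:]))
hits = sum(predict_next(prev) == nxt for prev, nxt in pairs)
print(f"next-word accuracy from friends' text alone: {hits / len(pairs):.0%}")
```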

Our results mean that even if you #DeleteFacebook (which trended after the Cambridge Analytica scandal in 2018), you could still be profiled, due to the social ties that remain. And that’s before we consider the things about Facebook that make it so difficult to delete anyway.




Read more:
Why it’s so hard to #DeleteFacebook: Constant psychological boosts keep you hooked


We also found it’s possible to build profiles of non-users — so-called “shadow profiles” — based on their contacts who are on the platform. Even if you have never used Facebook, if your friends do, there is the possibility a shadow profile could be built of you.

On social media platforms like Facebook and Twitter, privacy is no longer tied to the individual, but to the network as a whole.

No more free will? Not quite

But all hope is not lost. If you do delete your account, the information contained in your social ties with friends grows stale over time. We found predictability gradually declines to a low level, so your privacy and anonymity will eventually return.

While it may seem like algorithms are eroding our ability to think for ourselves, it’s not necessarily the case. The evidence on the effectiveness of psychological profiling to influence voters is thin.

Most importantly, when it comes to the role of people versus algorithms in things like spreading (mis)information, people are just as important. On Facebook, the extent of your exposure to diverse points of view is more closely related to your social groupings than to the way News Feed presents you with content. And on Twitter, while “fake news” may spread faster than facts, it is primarily people who spread it, rather than bots.

Of course, content creators exploit social media platforms’ algorithms to promote content, on YouTube, Reddit and other platforms, not just the other way round.

At the end of the day, underneath all the algorithms are people. And we influence the algorithms just as much as they may influence us.




Read more:
Don’t just blame YouTube’s algorithms for ‘radicalisation’. Humans also play a part


The Conversation


Lewis Mitchell, Senior Lecturer in Applied Mathematics and James Bagrow, Associate Professor, Mathematics & Statistics, University of Vermont

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Happy birthday Instagram! 5 ways doing it for the ‘gram has changed us



Unsplash/Adi Goldstein, CC BY

Tama Leaver, Curtin University; Crystal Abidin, Curtin University, and Tim Highfield, University of Sheffield

Tomorrow marks Instagram’s tenth birthday. Having amassed more than a billion active users worldwide, the app has changed radically in that decade. And it has changed us.

1. Instagram’s evolution

When it was launched on October 6, 2010 by Kevin Systrom and Mike Krieger, Instagram was an iPhone-only app. The user could take photos (and only take photos — the app could not load existing images from the phone’s gallery) within a square frame. These could be shared, with an enhancing filter if desired. Other users could comment or like the images. That was it.

As we chronicle in our book, the platform has grown rapidly and been at the forefront of an increasingly visual social media landscape.

In 2012, Facebook purchased Instagram in a deal worth US$1 billion (A$1.4 billion), which in retrospect probably seems cheap. Instagram is now one of the most profitable jewels in the Facebook crown.

Instagram has integrated new features over time, but it did not invent all of them.

Instagram Stories, with more than half a billion daily users, was shamelessly borrowed from Snapchat in 2016. It lets users post 10-second content bites that disappear after 24 hours. The rivers of casual and intimate content (later integrated into Facebook) are widely considered to have revitalised the app.

Similarly, IGTV is Instagram’s answer to YouTube’s longer-form video. And if the recently-released Reels isn’t a TikTok clone, we’re not sure what else it could be.




Read more:
Facebook is merging Messenger and Instagram chat features. It’s for Zuckerberg’s benefit, not yours


2. Under the influencers

Instagram is largely responsible for the rapid professionalisation of the influencer industry. Insiders estimated the influencer industry would grow to US$9.7 billion (A$13.5 billion) in 2020, though COVID-19 has since taken a toll on this sector, as on many others.

As early as 2011, professional lifestyle bloggers throughout Southeast Asia were moving to Instagram, turning it into a brimming marketplace. They sold ad space via post captions and monetised selfies through sponsored products. Such vernacular commerce pre-dates Instagram’s Paid Partnership feature, which launched in late-2017.

Girl takes selfie on street
Behind the scenes snaps can enhance Insta-authenticity.
Unsplash/Afif Kusuma, CC BY

The use of images as a primary mode of communication, as opposed to the text-based modes of the blogging era, facilitated an explosion of aspiring influencers. The threshold for turning oneself into an online brand was dramatically lowered.

Instagrammers relied more on photography and their looks — enhanced by filters and editing built into the platform.

Soon, the “extremely professional and polished, the pretty, pristine, and picturesque” started to become boring. Finstagrams (“fake Instagram”) and secondary accounts proliferated and allowed influencers to display behind-the-scenes snippets and authenticity through calculated performances of amateurism.

3. Instabusiness as usual

As influencers commercialised Instagram captions and photos, those who had owned online shops turned hashtag streams into advertorial campaigns. They relied on the labour of followers to publicise their wares and amplify their reach.

Bigger businesses followed suit, and so did advice from marketing experts on how best to “optimise” engagement.

In mid-2016, Instagram belatedly launched business accounts and tools, allowing companies easy access to back-end analytics. The introduction of the “swipeable carousel” of story content in early 2017 further expanded commercial opportunities for businesses by multiplying ad space per Instagram post. This year, in the tradition of Instagram corporatising user innovations, it announced Instagram Shops would allow businesses to sell products directly via a digital storefront. Users had previously done this via links.

Old polaroid camera.
The original Instagram logo paid tribute to the Polaroid aesthetic.
Unsplash/Josh Carter, CC BY



Read more:
Friday essay: Twitter and the way of the hashtag


4. Sharenting

Instagram isn’t just where we tell the visual story of ourselves, but also where we co-create each other’s stories. Nowhere is this more evident than the way parents “sharent”, posting their children’s daily lives and milestones.

Many children’s Instagram presence begins before they are even born. Sharing ultrasound photos has become a standard way to announce a pregnancy. Over 1.5 million public Instagram posts are tagged #genderreveal.

Sharenting raises privacy questions: who owns a child’s image? Can children withdraw publishing permission later?

Sharenting entails handing over children’s data to Facebook as part of the larger realm of surveillance capitalism. A saying that emerged around the same time as Instagram was born still rings true: “When something online is free, you’re not the customer, you’re the product”. We pay for Instagram’s “free” platform with our user data and our children’s data, too, when we share photos of them.

Couple holds ultrasound print out.
Many babies appear on Instagram before they are even born.
Meryl Spadaro/Unsplash, CC BY



Read more:
The real problem with posting about your kids online


5. Seeing through the frame

The apparent “Instagrammability” of a meal, a place, or an experience has seen the rise of numerous visual trends and tropes.

Short-lived Instagram Stories and disappearing Direct Messages add more spaces to express more things without the threat of permanence.




Read more:
Friday essay: seeing the news up close, one devastating post at a time


The events of 2020 have shown our ways of seeing on Instagram reveal the possibilities and pitfalls of social media.

In June, racial justice activism on #BlackoutTuesday, while extremely popular, also had the effect of swamping the #BlackLivesMatter hashtag with black squares.

Instagram is rife with disinformation and conspiracy theories which hijack the look and feel of authoritative content. The template of popular Instagram content can see familiar aesthetics weaponised to spread misinformation.

Ultimately, the last decade has seen Instagram become one of the main lenses through which we see the world, personally and politically. Users communicate and frame the lives they share with family, friends and the wider world.




Read more:
#travelgram: live tourist snaps have turned solo adventures into social occasions


The Conversation


Tama Leaver, Associate Professor in Internet Studies, Curtin University; Crystal Abidin, Senior Research Fellow & ARC DECRA, Internet Studies, Curtin University, and Tim Highfield, Lecturer in Digital Media and Society, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Parler: what you need to know about the ‘free speech’ Twitter alternative



Wikimedia

Audrey Courty, Griffith University

Amid claims of social media platforms stifling free speech, a new challenger called Parler is drawing attention for its anti-censorship stance.

Last week, Harper’s Magazine published an open letter signed by 150 academics, writers and activists concerning perceived threats to the future of free speech.

The letter, signed by Noam Chomsky, Francis Fukuyama, Gloria Steinem and J.K. Rowling, among others, reads:

The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted.

Debates surrounding free speech and censorship have taken centre stage in recent months. In May, Twitter started adding fact-check labels to tweets from Donald Trump.

More recently, Reddit permanently removed its largest community of Trump supporters.

In this climate, Parler presents itself as a “non-biased, free speech driven” alternative to Twitter. Here’s what you should know about the US-based startup.




Read more:
Is cancel culture silencing open debate? There are risks to shutting down opinions we disagree with


What is Parler?

Parler reports more than 1.5 million users and is growing in popularity, especially as Twitter and other social media giants crack down on misinformation and violent content.

Parler appears similar to Twitter in its appearance and functions.
screenshot

Parler is very similar to Twitter in appearance and function, albeit clunkier. Like Twitter, Parler users can follow others and engage with public figures, news sources and other users.

Public posts are called “parleys” rather than “tweets” and can contain up to 1,000 characters.

Users can comment, ‘echo’ or ‘vote’ on parleys.
screenshot

Users can search for hashtags, make comments, “echo” posts (similar to a retweet) and “vote” (similar to a like) on posts. There’s also a direct private messaging feature, just like Twitter.

Given this likeness, what actually is unique about Parler?

Fringe views welcome?

Parler’s main selling point is its claim it embraces freedom of speech and has minimal moderation. “If you can say it on the street of New York, you can say it on Parler”, founder John Matze explains.

This branding effort capitalises on allegations competitors such as Twitter and Facebook unfairly censor content and discriminate against right-wing political speech.

While other platforms often employ fact checkers, or third-party editorial boards, Parler claims to moderate content based on American Federal Communications Commission guidelines and Supreme Court rulings.

So if someone shared demonstrably false information on Parler, Matze said it would be up to other users to fact-check them “organically”.

And although Parler is still dwarfed by Twitter (330 million users) and Facebook (2.6 billion users) the platform’s anti-censorship stance continues to attract users turned off by the regulations of larger social media platforms.

Twitter recently hid tweets from Trump for “glorifying violence”, partly prompting the Trump campaign to consider moving to a platform such as Parler.

Far-right American political activist and conspiracy theorist Laura Loomer is among Parler’s most popular users.
screenshot

Matze also claims Parler protects users’ privacy by not tracking or sharing their data.

Is Parler really a free speech haven?

Companies such as Twitter and Facebook have denied they are silencing conservative voices, pointing to blanket policies against hate speech and content inciting violence.

Parler’s “free speech” has resulted in various American Republicans, including Senator Ted Cruz, promoting the platform.

Many conservative influencers such as Katie Hopkins, Laura Loomer and Alex Jones have sought refuge on Parler after being banned from other platforms.

Although it brands itself as a bipartisan safe space, Parler is mostly used by right-wing media, politicians and commentators.

Moreover, a closer look at its user agreement suggests it moderates content much like any other platform, and perhaps even more heavily.

The company states:

Parler may remove any content and terminate your access to the Services at any time and for any reason or no reason.

Parler’s community guidelines prohibit a range of content including spam, terrorism, unsolicited ads, defamation, blackmail, bribery and criminal behaviour.

Although there are no explicit rules against hate speech, there are policies against “fighting words” and “threats of harm”. This includes “a threat of or advocating for violation against an individual or group”.

Parler CEO John Matze clarified the platform’s rules after banning users, presumably for breaking one or more of the listed rules.

There are rules against content that is obscene, sexual or “lacks serious literary, artistic, political and scientific value”. For example, visuals of genitalia, female nipples, or faecal matter are barred from Parler.

Meanwhile, Twitter allows “consensually produced adult content” if it’s marked as “sensitive”. It also has no policy against the visual display of excrement.

As a private company, Parler can remove whatever content it wants. Some users have already been banned for breaking rules.

What’s more, in spite of claims it does not share user data, Parler’s privacy policy states data collected can be used for advertising and marketing.




Read more:
Friday essay: Twitter and the way of the hashtag


No marks of establishment

Given its limited user base, Parler has yet to become the “open town square” it aspires to be.

The platform is in its infancy and its user base is much less representative than larger social media platforms.

Despite Matze saying “left-leaning” users tied to the Black Lives Matter movement were joining Parler to challenge conservatives, Parler lacks the diverse audience needed for any real debate.

Upon joining the platform, Parler suggests following several politically conservative users.
screenshot

Matze also said he doesn’t want Parler to be an “echo chamber” for conservative voices. In fact, he is offering a US$20,000 “progressive bounty” for an openly liberal pundit with 50,000 followers on Twitter or Facebook to join.

Clearly, the platform has a long way to go before it bursts its conservative bubble.




Read more:
Don’t (just) blame echo chambers. Conspiracy theorists actively seek out their online communities


The Conversation


Audrey Courty, PhD candidate, School of Humanities, Languages and Social Science, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Anger is all the rage on Twitter when it’s cold outside (and on Mondays)



Shutterstock

Heather R. Stevens, Macquarie University; Ivan Charles Hanigan, University of Sydney; Paul Beggs, Macquarie University, and Petra Graham, Macquarie University

The link between hot weather and aggressive crime is well established. But can the same be said for online aggression, such as angry tweets? And is online anger a predictor of assaults?

Our study, just published, suggests the answer is a clear “no”. We found angry tweet counts actually increased in cooler weather. And as daily maximum temperatures rose, angry tweet counts decreased.

We also found the incidence of angry tweets is highest on Mondays, and perhaps unsurprisingly, angry Twitter posts are most prevalent after big news events such as a leadership spill.

This is the first study to compare patterns of assault and social media anger with temperature. Given anger spreads through online communities faster than any other emotion, the findings have broad implications – especially under climate change.

A caricature of US President Donald Trump, who’s been known to fire off an angry tweet.
Shutterstock

Algorithms are watching you

Of Australia’s 24.6 million people, 18 million, or 73%, are active social media users. Some 4.7 million Australians, or 19%, use Twitter. This widespread social media use provides researchers with valuable opportunities to gather information.

When you publicly post, comment or even upload a selfie, an algorithm can scan it to estimate your mood (positive or negative) or your emotion (such as anger, joy, fear or surprise).

This information can be linked with the date, time of day, location or even your age and sex, to determine the “mood” of a city or country in near real time.

Our study involved 74.2 million English-language Twitter posts – or tweets – from 2015 to 2017 in New South Wales.

We analysed them using the publicly available We Feel tool, developed by the CSIRO and the Black Dog Institute, to see if social media can accurately map our emotions.

Some 2.87 million tweets (or 3.87%) contained words or phrases considered angry, such as “vicious”, “hated”, “irritated”, “disgusted” and the very popular “f*cked”.
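As a rough illustration of how keyword-based emotion detection works, here is a minimal sketch. The word list and tweets are invented, and the We Feel tool itself is considerably more sophisticated.

```python
# Minimal lexicon-based anger detection: flag a tweet as angry if it
# contains any word from an anger lexicon. Illustrative only.
import re

ANGRY_WORDS = {"vicious", "hated", "irritated", "disgusted"}

def is_angry(tweet):
    """Return True if any lexicon word appears in the tweet."""
    tokens = set(re.findall(r"[a-z*]+", tweet.lower()))
    return bool(tokens & ANGRY_WORDS)

tweets = [
    "I hated waiting in that queue, so irritated right now",
    "what a lovely sunny day in Sydney",
]
angry = sum(is_angry(t) for t in tweets)
print(f"{angry} of {len(tweets)} tweets flagged as angry")  # 1 of 2
```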

Hot-headed when it’s cold outside

On average, the number of angry tweets was highest when the temperature was below 15℃, and lowest in warm temperatures (25-30℃).

The number of angry tweets slightly increased again in very high temperatures (above 35℃), although with fewer days in that range there was less certainty about the trend.

On the ten days with the highest daily maximum temperatures, the average angry tweet count was 2,482 per day. On the ten coldest days, the average was higher, at 3,354 per day.




Read more:
Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots


The pattern of angry tweets was opposite to that of physical assaults, which are more prevalent in hotter weather – with some evidence of a decline in extreme heat.

So why the opposite patterns? We propose two possible explanations.

First, hot and cold weather triggers a physiological response in humans. Temperature affects our heart rate, the amount of oxygen to our brain, hormone regulation (including testosterone) and our ability to sleep. In some people, this in turn affects physical aggression levels.

Hot weather means more socialising, and potentially less time for tweeting.
Shutterstock

Second, weather triggers changes to our routine. Research suggests aggressive crimes increase because warmer weather encourages behaviour that fosters assaults. This includes more time outdoors, increased socialising and drinking alcohol.

Those same factors – time outdoors and more socialising – may reduce the opportunity or motivation to tweet. And the effects of alcohol (such as reduced mental clarity and physical precision) make composing a tweet harder, and therefore less likely.

This theory is supported by our finding that both angry tweet counts and overall tweet counts were lowest on weekends, holidays and the hottest days.




Read more:
Car accidents, drownings, violence: hotter temperatures will mean more deaths from injury


It’s possible that as people vent their frustrations online, they feel better and are then less inclined to commit an assault. However, this theory isn’t well supported.

The relationship is more likely due to the vastly different demographics of Twitter users and assault offenders.

Assault offenders are most likely to be young men from low socio-economic backgrounds. In contrast, about half of Twitter users are female, and they’re more likely to be middle-aged and in a higher income bracket compared with other social media users.

Our study did not consider why these two groups differ in response to temperature. However, we are currently researching how age, sex and other social and demographic factors influence the relationships between temperature and aggression.

Twitter users are more likely to be middle-aged.
Shutterstock

The Monday blues

Our study primarily set out to see whether temperatures and angry tweet counts were related. But we also uncovered other interesting trends.

Average angry tweet counts were highest on a Monday (2,759 per day) and lowest on weekends (Saturdays, 2,373; Sundays, 2,499). This supports research that found an online mood slump on weekdays.

We determined that major news events correlated with the ten days where the angry tweet count was highest. These events included:

  • the federal leadership spill in 2015 when Malcolm Turnbull replaced Tony Abbott as prime minister

  • a severe storm front in NSW in 2015, then a major cold front a few months later

  • two mass shootings in the United States: Orlando in 2016 and Las Vegas in 2017

  • sporting events including the Cricket World Cup in 2015.

Days with high angry tweet counts correlated with major news events.
Shutterstock

Twitter in a warming world

Our study was limited in that Twitter users are not necessarily representative of the broader population. For example, Twitter is a preferred medium for politicians, academics and journalists. These users may express different emotions, or less emotion, in their posts than other social media users.

However, the influence of temperature on social media anger has broad implications. Of all the emotions, anger spreads through online communities the fastest. So temperature changes and corresponding social media anger can affect the wider population.

We hope our research helps health and justice services develop more targeted measures based on temperature.

And with climate change likely to affect assault rates and mood, more research in this field is needed.




Read more:
Nine things you love that are being wrecked by climate change


The Conversation


Heather R. Stevens, Doctoral student in Environmental Sciences, Macquarie University; Ivan Charles Hanigan, Data Scientist (Epidemiology), University of Sydney; Paul Beggs, Associate Professor and Environmental Health Scientist, Macquarie University, and Petra Graham, Senior Research Fellow, Macquarie Business School, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trump’s Twitter tantrum may wreck the internet


Michael Douglas, University of Western Australia

US President Donald Trump, who tweeted more than 11,000 times in the first two years of his presidency, is very upset with Twitter.

Earlier this week Trump tweeted complaints about mail-in ballots, alleging voter fraud – a familiar Trump falsehood. Twitter attached a label to two of his tweets with links to sources that fact-checked the tweets, showing Trump’s claims were unsubstantiated.

Trump retaliated with the power of the presidency. On May 28 he made an “Executive Order on Preventing Online Censorship”. The order focuses on an important piece of legislation: section 230 of the Communications Decency Act 1996.




Read more:
Can you be liable for defamation for what other people write on your Facebook page? Australian court says: maybe


What is section 230?

Section 230 has been described as “the bedrock of the internet”.

It affects companies that host content on the internet. It provides in part:

(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

This means that, generally, the companies behind Google, Facebook, Twitter and other “internet intermediaries” are not liable for the content on their platforms.

For example, if something defamatory is written by a Twitter user, the company Twitter Inc will enjoy a shield from liability in the United States even if the author does not.




Read more:
A push to make social media companies liable in defamation is great for newspapers and lawyers, but not you


Trump’s executive order

Within the US legal system, an executive order is a “signed, written, and published directive from the President of the United States that manages operations of the federal government”. It is not legislation. Under the Constitution of the United States, Congress – the equivalent of our Parliament – has the power to make legislation.

Trump’s executive order claims to protect free speech by narrowing the protection section 230 provides for social media companies.

The text of the order includes the following:

It is the policy of the United States that such a provider [who does not act in “good faith”, but stifles viewpoints with which they disagree] should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider …

To advance [this] policy … all executive departments and agencies should ensure that their application of section 230 (c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.

The order attempts to do a lot of other things too. For example, it calls for the creation of new regulations concerning section 230, and what “taken in good faith” means.

The reaction

Trump’s action has some support. Republican senator Marco Rubio said if social media companies “have now decided to exercise an editorial role like a publisher, then they should no longer be shielded from liability and treated as publishers under the law”.

Critics argue the order threatens, rather than protects, freedom of speech, thus threatening the internet itself.

The status of this order within the American legal system is an issue for American constitutional lawyers. Experts were quick to suggest the order is unconstitutional; it seems contrary to the separation of powers enshrined in the US Constitution (which partly inspired Australia’s Constitution).

Harvard Law School constitutional law professor Laurence Tribe has described the order as “totally absurd and legally illiterate”.

That may be so, but the constitutionality of the order is an issue for the US judiciary. Many judges in the United States were appointed by Trump or his ideological allies.

Even if the order is legally illiterate, it should not be assumed it will lack force.

What this means for Australia

Section 230 is part of US law. It is not in force in Australia. But its effects are felt around the globe.

Social media companies who would otherwise feel safe under section 230 may be more likely to remove content when threatened with legal action.

The order might cause these companies to change their internal policies and practices. If that happens, policy changes could be implemented at a global level.

Compare, for example, what happened when the European Union introduced its General Data Protection Regulation (GDPR). Countless companies in Australia had to ensure they were meeting European standards. US-based tech companies such as Facebook changed their privacy policies and disclosures globally – they did not want to meet two different privacy standards.

If section 230 is diminished, it could also impact Australian litigation by providing another target for people hurt by damaging content shared on social media or made accessible by internet search. When your neighbour defames you on Facebook, for example, you can sue both the neighbour and Facebook.

That was already the law in Australia. But with a toothless section 230, if you win, the judgement could be enforceable in the US.

Currently, suing certain American tech companies is not always a good idea. Even if you win, you may not be able to enforce the Australian judgement overseas. Tech companies are aware of this.

In 2017 litigation, Twitter did not even bother sending anyone to respond to litigation in the Supreme Court of New South Wales involving leaks of confidential information by tweet. When tech companies like Google have responded to Aussie litigation, it might be understood as a weird brand of corporate social responsibility: a way of keeping up appearances in an economy that makes them money.

A big day for ‘social media and fairness’?

When Trump made his order, he called it a big day for “fairness”. This is standard Trump fare. But it should not be dismissed outright.

As our own Australian Competition and Consumer Commission recognised last year in its Digital Platforms Inquiry, companies such as Twitter have enormous market power. Their exercise of that power does not always benefit society.

In recent years, social media has advanced the goals of terrorists and undermined democracy. So if social media companies can be held legally liable for some of what they cause, it may do some good.

As for Twitter, the inclusion of the fact check links was a good thing. It’s not like they deleted Trump’s tweets. Also, they’re a private company, and Trump is not compelled to use Twitter.

We should support Twitter’s recognition of its moral responsibility for the dissemination of information (and misinformation), while still leaving room for free speech.

Trump’s executive order is legally illiterate spite, but it should prompt us to consider how free we want the internet to be. And we should take that issue more seriously than we take Trump’s order.

The Conversation

Michael Douglas, Senior Lecturer in Law, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disasters expose gaps in emergency services’ social media use


Tan Yigitcanlar, Queensland University of Technology; Ashantha Goonetilleke, Queensland University of Technology, and Nayomi Kankanamge, Queensland University of Technology

Australia has borne the brunt of several major disasters in recent years, including drought, bushfires, floods and cyclones. The increasing use of social media is changing how we prepare for and respond to these disasters. Not only emergency services themselves but also their social media channels are now much-sought-after sources of disaster information and warnings.

We studied Australian emergency services’ social media use in times of disaster. Social media can provide invaluable and time-critical information to both emergency services and communities at risk. But we also found problems.




Read more:
Drought, fire and flood: how outer urban areas can manage the emergency while reducing future risks


How do emergency services use social media?

The 2019-20 Australian bushfires affected 80% of the population directly or indirectly. Social media were widely used to spread awareness of the bushfire disaster and to raise funds – albeit sometimes controversially – to help people in need.

The escalating use and importance of social media in disaster management raises an important question:

How effective are social media pages of Australian state emergency management organisations in meeting community expectations and needs?

To answer this question, QUT’s Urban Studies Lab investigated the community engagement approaches of social media pages maintained by various Australian emergency services. We placed Facebook and Twitter pages of New South Wales State Emergency Services (NSW-SES), Victoria State Emergency Services (VIC-SES) and Queensland Fire and Emergency Services (QLD-FES) under the microscope.

Our study made four key findings.

First, emergency services’ social media pages are intended to:

  • disseminate warnings
  • provide an alternative communication channel
  • receive rescue and recovery requests
  • collect information about the public’s experiences
  • raise disaster awareness
  • build collective intelligence
  • encourage volunteerism
  • express gratitude to emergency service staff and volunteers
  • raise funds for those in need.



Read more:
With costs approaching $100 billion, the fires are Australia’s costliest natural disaster


Examples of emergency services’ social media posts are shown below.

NSW-SES collecting data from the public through their posts.
Facebook
VIC-SES sharing weather warnings to inform the public.
Facebook
QLD-FES posting fire condition information to increase public awareness.
Facebook
QLD-FES showing the direction of a cyclone and warning the community.
Facebook

Second, Facebook pages of emergency services attract more community attention than Twitter pages. Services need to make their Twitter pages more attractive as, unlike Facebook, Twitter allows streamlined data download for social media analytics. A widely used emergency service Twitter page means more data for analysis and potentially more accurate policies and actions.

Third, Australia lacks a legal framework for the use of social media in emergency service operations. Developing such a framework would help organisations maximise their use of social media, especially in the case of financial matters such as donations.

Fourth, the credibility of public-generated information can sometimes be questionable. Authorities need to be able to respond rapidly to such information to avoid the spread of misinformation or “fake news” on social media.

Services could do more with social media

Our research highlighted that emergency services could use social media more effectively. We do not see these services analysing social media data to inform their activities before, during and after disasters.

In another study on the use of social media analytics for disaster management, we developed a novel approach to show how emergency services can identify disaster-affected areas using real-time social media data. For that study, we collected Twitter data with location information on the 2010-11 Queensland floods. We were able to identify disaster severity by analysing the emotional or sentiment values of tweets.
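A minimal sketch of the underlying idea: average the sentiment of geotagged tweets per area, and treat more strongly negative averages as a signal of more severe impact. The place names and sentiment scores below are invented, and the analysis in our study was considerably more involved.

```python
# Toy disaster-severity signal: average tweet sentiment per area, where
# sentiment runs from -1 (very negative) to 1 (very positive).
from collections import defaultdict
from statistics import mean

# (area, sentiment) pairs; in practice the scores come from a sentiment
# classifier and the areas from geotagged tweets.
tweets = [
    ("Brisbane", -0.9), ("Brisbane", -0.7), ("Brisbane", -0.8),
    ("Toowoomba", -0.4), ("Toowoomba", -0.2),
    ("Cairns", 0.1),
]

by_area = defaultdict(list)
for area, sentiment in tweets:
    by_area[area].append(sentiment)

# More negative average sentiment -> higher severity score.
severity = {area: -mean(scores) for area, scores in by_area.items()}
for area, score in sorted(severity.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: severity {score:.2f}")
```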




Read more:
Explainer: how the internet knows if you’re happy or sad


This work generated the disaster severity map shown below. The map agrees with the actual figures in the report of the Queensland Floods Commission of Inquiry with over 90% accuracy.

Disaster severity map created through Twitter analytics.
Authors

Concerns about using social media to manage disaster

The first commonly voiced concern about social media use in disaster management is the digital divide. While the issue of underrepresented people and communities remains important, the use of technology is spreading widely. There were 3.4 billion social media users worldwide in 2019, and the growth in numbers is accelerating.




Read more:
Online tools can help people in disasters, but do they represent everyone?


Besides, many Australian cities and towns are investing in smart city strategies and infrastructures. These localities provide free public Wi-Fi connections. And almost 90% of Australians now own a smart phone.

The second concern is information accuracy or “fake news” on social media. Clearly, sharing false information and rumours compromises the reliability of what social media provides. Social media images and videos tagged with location information can provide more reliable, eye-witness information.

Another concern is difficulty in receiving social media messages from severely affected areas. For instance, the disaster might have brought down internet or 4G/5G coverage, or people might have been evacuated from areas at risk. This might lead to limited social media posts from the actual disaster zone, with increasing numbers of posts from the places people are relocated.

In such a scenario, alternative social media analytics are on offer. We can use content analysis and sentiment analysis to determine the disaster location and impact.

How to make the most of social media

Social media and its applications are generating new and innovative ways to manage disasters and reduce their impacts. These include:

  • increasing community trust in emergency services by social media profiling
  • crowd-sourcing the collection and sharing of disaster information
  • creating awareness by incorporating gamification applications in social media
  • using social media data to detect disaster intensity and hotspot locations
  • running real-time data analytics.

In sum, social media could become a mainstream information provider for disaster management. The need is likely to become more pressing as human-induced climate change increases the severity and frequency of disasters.

Today, as we confront the COVID-19 pandemic, social media analytics are helping to ease its impacts. Artificial intelligence (AI) technologies are greatly reducing processing time for social media analytics. We believe the next-generation AI will enable us to undertake real-time social media analytics more accurately.




Read more:
Coronavirus: How Twitter could more effectively ease its impact


The Conversation


Tan Yigitcanlar, Associate Professor of Urban Studies and Planning, Queensland University of Technology; Ashantha Goonetilleke, Professor, Queensland University of Technology, and Nayomi Kankanamge, PhD Candidate, School of Built Environment, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots



Shutterstock

Ryan Ko, The University of Queensland

Recently Facebook, Reddit, Google, LinkedIn, Microsoft, Twitter and YouTube committed to removing coronavirus-related misinformation from their platforms.

COVID-19 is being described as the first major pandemic of the social media age. In troubling times, social media helps distribute vital knowledge to the masses. Unfortunately, this comes with myriad misinformation, much of which is spread through social media bots.

These fake accounts are common on Twitter, Facebook, and Instagram. They have one goal: to spread fear and fake news.

We witnessed this in the 2016 United States presidential elections, with arson rumours in the bushfire crisis, and we’re seeing it again in relation to the coronavirus pandemic.




Read more:
Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Busy busting bots

This figure shows the top Twitter hashtags tweeted by bots over 24 hours.
Bot Sentinel

The exact scale of misinformation is difficult to measure. But its global presence can be felt through snapshots of Twitter bot involvement in COVID-19-related hashtag activity.

Bot Sentinel is a website that uses machine learning to identify potential Twitter bots, using a score and rating. According to the site, on March 26 bot accounts were responsible for 828 counts of #coronavirus, 544 counts of #COVID19 and 255 counts of #Coronavirus hashtags within 24 hours.

These hashtags respectively took the 1st, 3rd and 7th positions of all top-trolled Twitter hashtags.

It’s important to note the actual number of coronavirus-related bot tweets is likely much higher, as Bot Sentinel only recognises hashtag terms (such as #coronavirus), and wouldn’t pick up on “coronavirus”, “COVID19” or “Coronavirus”.
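To illustrate the gap, here is a small sketch comparing a hashtag-only count with a count that also matches bare mentions of the same terms. The tweets are invented examples.

```python
# Hashtag-only matching understates the true volume of mentions:
# matching the bare words as well catches tweets without a "#".
import re

tweets = [
    "Stay home! #coronavirus",
    "Coronavirus cases are rising again",
    "COVID19 has shut the border",
]

hashtag_only = sum(bool(re.search(r"#coronavirus\b", t, re.I)) for t in tweets)
any_mention = sum(bool(re.search(r"#?(coronavirus|covid19)\b", t, re.I)) for t in tweets)

print(f"hashtag matches: {hashtag_only}, total mentions: {any_mention}")
# -> hashtag matches: 1, total mentions: 3
```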

How are bots created?

Bots are usually managed by automated programs called bot “campaigns”, and these are controlled by human users. The actual process of creating such a campaign is relatively simple. There are several websites that teach people how to do this for “marketing” purposes. In the underground hacker economy on the dark web, such services are available for hire.

While it’s difficult to attribute bots to the humans controlling them, the purpose of bot campaigns is obvious: create social disorder by spreading misinformation. This can increase public anxiety, frustration and anger against authorities in certain situations.

A 2019 report published by researchers from the Oxford Internet Institute revealed a worrying trend in organised “social media manipulation by governments and political parties”. They reported:

Evidence of organised social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically.

The modus operandi of bots

Typically, in the context of COVID-19 messages, bots would spread misinformation through two main techniques.

The first involves content creation, wherein bots start new posts with pictures that validate or mirror existing worldwide trends. Examples include pictures of shopping baskets filled with food, or hoarders emptying supermarket shelves. This generates anxiety and confirms what people are reading from other sources.

The second technique involves content augmentation. In this, bots latch onto official government feeds and news sites to sow discord. They retweet alarming tweets or add false comments and information in a bid to stoke fear and anger among users. It’s common to see bots talking about a “frustrating event”, or some social injustice faced by their “loved ones”.

The example below shows a Twitter post from Queensland Health’s official twitter page, followed by comments from accounts named “Sharon” and “Sara” which I have identified as bot accounts. Many real users reading Sara’s post would undoubtedly feel a sense of injustice on behalf of her “mum”.

The official tweet from Queensland Health and the bots’ responses.

While we can’t be 100% certain these are bot accounts, many factors point to this very likely being the case. Our ability to accurately identify bots will get better as machine learning algorithms in programs such as Bot Sentinel improve.

How to spot a bot

To learn the characteristics of a bot, let’s take a closer look at Sharon’s and Sara’s accounts.

Screenshots of the accounts of ‘Sharon’ and ‘Sara’.

Both profiles lack human uniqueness, and display some telltale signs they may be bots (a rough scoring sketch follows the list):

  • they have no followers

  • they only recently joined Twitter

  • they have no last names, and have alphanumeric handles (such as Sara89629382)

  • they have only tweeted a few times

  • their posts have one theme: spreading alarmist comments

Bot ‘Sharon’ tried to rile others up through her tweets.

  • they mostly follow news sites, government authorities, or human users who are highly influential in a certain subject (in this case, virology and medicine).
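As a rough illustration only (this is not how Bot Sentinel works), the signals above could be combined into a simple hand-tuned score. The thresholds and field names are invented; real detectors rely on machine learning rather than rules like these.

```python
# Count how many of the listed telltale signs an account displays.
import re

def bot_signals(account):
    """Return the number of bot-like signals (0 means none)."""
    signs = 0
    signs += account["followers"] == 0            # no followers
    signs += account["days_since_joined"] < 60    # only recently joined
    signs += bool(re.fullmatch(r"[A-Za-z]+\d{5,}", account["handle"]))  # alphanumeric handle
    signs += account["tweet_count"] < 10          # has only tweeted a few times
    return signs

account = {"handle": "Sara89629382", "followers": 0,
           "days_since_joined": 12, "tweet_count": 4}
print(f"{account['handle']}: {bot_signals(account)}/4 signals")  # 4/4 signals
```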

My investigation into Sharon revealed the bot had attempted to exacerbate anger on a news article about the federal government’s coronavirus response.

The language: “Health can’t wait. Economic (sic) can” indicates a potentially non-native English speaker.

It seems Sharon was trying to stoke the flames of public anger by calling out “bad decisions”.

Looking through Sharon’s tweets, I discovered Sharon’s friend “Mel”, another bot with its own programmed agenda.

Bot ‘Mel’ spread false information about a possible delay in COVID-19 results, and retweeted hateful messages.

What was concerning was that a human user was engaging with Mel.

An account that seemed to belong to a real Twitter user began engaging with ‘Mel’.

You can help tackle misinformation

Currently, it’s simply too hard to attribute the true source of bot-driven misinformation campaigns. This can only be achieved with the full cooperation of social media companies.

The motives of a bot campaign can range from creating mischief to exercising geopolitical control. And some researchers still can’t agree on what exactly constitutes a “bot”.

But one thing is for sure: Australia needs to develop legislation and mechanisms to detect and stop these automated culprits. Organisations running legitimate social media campaigns should dedicate time to using a bot detection tool to weed out and report fake accounts.

And as a social media user in the age of the coronavirus, you can also help by reporting suspicious accounts. The last thing we need is malicious parties making an already worrying crisis worse.




Read more:
You can join the effort to expose Twitter bots


The Conversation


Ryan Ko, Chair Professor and Director of Cyber Security, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

When a virus goes viral: pros and cons to the coronavirus spread on social media



Tim Gouw/Unsplash, CC BY

Axel Bruns, Queensland University of Technology; Daniel Angus, Queensland University of Technology; Timothy Graham, Queensland University of Technology, and Tobias R. Keller, Queensland University of Technology

News and views about coronavirus have spread via social media in a way that no health emergency has done before.

Platforms like Twitter, Facebook, TikTok and Instagram have played critical roles in sharing news and information, but also in disseminating rumours and misinformation.

Getting the message out

Early on, snippets of information circulated on Chinese social media platforms such as Weibo and WeChat, before state censors banned discussions. These posts already painted a grim picture, and Chinese users continue to play cat and mouse with the internet police in order to share unfiltered information.

As the virus spread, so did the social media conversation. On Facebook and Twitter, discussions have often taken place ahead of official announcements: calls to cancel the Australian Formula One Grand Prix were trending on Twitter days before the official decision.

Similarly, user-generated public health explainers have circulated while official government agencies in many countries discuss campaign briefs with advertising agencies.

Many will have come across (and, hopefully, adopted) hand-washing advice set to the lyrics of someone’s favourite song.

Widespread circulation of graphs has also explained the importance of “flattening the curve” and social distancing.
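The reasoning behind those graphs is easy to reproduce. Here is a minimal sketch using the textbook SIR epidemic model (my assumption, not taken from any of the circulated graphics); the parameter values are purely illustrative. Lowering the transmission rate, which is what social distancing aims to do, stretches the outbreak over a longer period and lowers its peak.

def sir_curve(beta, gamma=0.1, days=200, i0=0.001):
    """Daily infected fraction for a basic SIR model (simple Euler steps)."""
    s, i = 1.0 - i0, i0
    curve = []
    for _ in range(days):
        new_infections = beta * s * i   # transmission
        new_recoveries = gamma * i      # recovery
        s -= new_infections
        i += new_infections - new_recoveries
        curve.append(i)
    return curve

unmitigated = sir_curve(beta=0.35)  # business as usual
distanced = sir_curve(beta=0.18)    # distancing cuts the contact rate
# The distanced epidemic peaks later and much lower: the "flattened" curve.
print(max(unmitigated), max(distanced))

The point of the comparison is the peak height, since that is what determines whether hospitals are overwhelmed.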

Debunking myths

Social media have been instrumental in responding to COVID-19 myths and misinformation. Journalists, public health experts, and users have combined to provide corrections to dangerous misinformation shared in US President Donald Trump’s press conferences.

Other posts have highlighted potentially deadly assumptions in the UK government’s herd immunity approach to the crisis.

Users have also pointed out inconsistencies in the Australian cabinet’s response to Home Affairs Minister Peter Dutton’s coronavirus diagnosis.

The circulation of such content through social media is so effective because we tend to pay more attention to information we receive through our networks of social contacts.

Similarly, professional health communicators like Dr Norman Swan have been playing an important role in answering questions and amplifying public health messages, while others have set up resources to keep the public informed on confirmed cases.

Even just seeing our leaders’ poor hygiene practices ridiculed might lead us to take better care ourselves.

Some politicians, like Australian Prime Minister Scott Morrison, blandly dismiss the idea that social media channels are a crucial source of crisis information, despite more than a decade of research showing their importance.

This is deeply unhelpful: they should be embracing social media channels as they seek to disseminate urgent public health advice.

Stoking fear

The downside of all that user-driven sharing is that it can lead to mass panics and irrational behaviour – as we have seen with the panic-buying of toilet paper and other essentials.

The panic spiral spins even faster when social media trends are amplified by mainstream media reporting, and vice versa: even a handful of widely shared images of empty shelves in supermarkets might lead consumers to buy what’s left, if media reporting makes the problem appear much larger than it really is.

News stories and tweets showing empty shelves are much more news- and share-worthy than fully stocked shelves: they’re exceptional. But a focus on these pictures distorts our perception of what is actually happening.

The news media’s promotion of such biased content then raises its “viral” potential, and it gains much more public attention than it otherwise would.

Levels of fear and panic are already higher during times of crisis, of course. As a result, some of us – including journalists and media outlets – might also be willing to believe new information we would otherwise treat with more scepticism. This skews the public’s risk perception and makes us much more susceptible to misinformation.

A widely shared Twitter post showed how panic buying in (famously carnivorous) Glasgow had skipped the vegan food section:

Closer inspection revealed the photo originated from Houston during Hurricane Harvey in 2017 (the dollar signs on the food prices are a giveaway).

This case also illustrates the ability of social media discussion to self-correct, though this can take time, and corrections may not travel as far as initial falsehoods. The gap in reach between a falsehood and its correction is one measure of social media’s potential to stoke fears.

The spread of true and false information is also directly affected by the platform architecture: the more public the conversations, the more likely it is that someone might encounter a falsehood and correct it.

In largely closed, private spaces like WhatsApp, or in closed groups or private profile discussions on Facebook, we might see falsehoods linger for considerably longer. A user’s willingness to correct misinformation can also be affected by their need to maintain good relationships within their community. People will often ignore misinformation shared by friends and family.

And unfortunately, the platforms’ own actions can also make things worse: this week, Facebook’s efforts to control “fake news” posts appeared to affect legitimate stories by mistake.

Rallying cries

Their ability to sustain communities is one of the great strengths of social media, especially as we are practising social distancing and even self-isolation. The internet still has a sense of humour, which can help ease the ongoing tension and fear in our communities.

Younger generations are turning to newer social media platforms such as TikTok to share their experiences and craft pandemic memes. A key feature of TikTok is the uploading and repurposing of short music clips by platform users – the music clip “It’s Corona Time” has been used in over 700,000 posts.

We have seen substantial self-help efforts conducted via social media: school and university teachers who have been told to move all their teaching online at very short notice, for example, have begun to share best-practice examples via the #AcademicTwitter hashtag.

The same is true for communities affected by event shutdowns and broader economic downturns, from freelancers to performing artists. Faced with bans on mass gatherings, some artists are finding ways to continue their work: providing access to 600 live concerts via digital concert halls or streaming concerts live on Twitter.

Such patterns are not new: we encountered them in our research as early as 2011, when social media users rallied together during natural disasters such as the Brisbane floods, Christchurch earthquakes, and Sendai tsunami to combat misinformation, amplify the messages of official emergency services organisations, and coordinate community activities.

Especially during crises, most people just want themselves and their community to be safe.

The Conversation

Axel Bruns, Professor, Creative Industries, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology; Timothy Graham, Senior Lecturer, Queensland University of Technology, and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The coronavirus and Chinese social media: finger-pointing in the post-truth era


Haiqing Yu, RMIT University

As public health authorities in China and the world fight the novel coronavirus, they face two major communication obstacles: the eroding trust in the media, and misinformation on social media.

As cities, towns, villages and residential compounds have been locked down or placed under curfew, social media have played a central role in crisis communication.

Chinese social media platforms, from WeChat and Weibo, to QQ, Toutiao, Douyin, Zhihu and Tieba, are the lifeline for many isolated and scared people who have been housebound for over two weeks, relying on their mobile phones to access information, socialise, and order food.

A meme being shared on WeChat reads: ‘When the epidemic is over, men will understand why women suffer from postnatal depression after one-month confinement upon childbirth.’
Author provided

These platforms constitute the mainstream media in the war on the coronavirus.

I experienced the most extraordinary Chinese New Year with my parents in China and witnessed the power of Chinese social media, especially WeChat, in spreading and controlling information and misinformation.

China is not only waging a war against the coronavirus. It is engaged in a media war against misinformation and “rumour” (as termed by the Chinese authorities and social media platforms).

This banner being shared on WeChat reads: ‘Those who do not come clean when having a fever are class enemies hidden among the people.’
Author provided

The volume of information about the virus surged from January 21, after the central government publicly acknowledged the outbreak the previous day and Zhong Nanshan, China’s leading respiratory expert and anti-SARS hero, declared on the state broadcaster CCTV that the virus was transmissible from person to person.




Read more:
Coronavirus: how health and politics have always been inextricably linked in China


On WeChat, the Chinese all-in-one super app with over 1.15 billion monthly active users, there has been only one dominant topic: the coronavirus.

Rumour mongers and rumour busters

In Wired, Robert Dingwall wrote “fear, finger-pointing, and militaristic action against the virus are unproductive”, asking if it is time to adjust to a new normal of outbreaks.

To many Chinese, this new normal of fear and militaristic action is already real in everyday life.

Finger-pointing, however, can be precarious in the era of information control and post-truth.

One of many spoof Cultural Revolution posters being shared on social media to warn people of the consequence of not wearing masks.
Author provided

On WeChat and other popular social media platforms, information about the virus from official, semi-official, unofficial and personal sources is abundant in chat groups, “Moments”, WeChat official accounts, and newsfeeds (mostly from Tencent News and Toutiao).

Information includes personal accounts of life under lockdown, duanzi (jokes, parodies, humorous videos), heroism of volunteers, generosity of donations, quack remedies, scaremongering about deaths and price hikes, and a conspiracy theory that the US is waging biological war against China.

TikTok video (shared on WeChat) on the life of a man in isolation at home and his ‘social life’.

There is also veiled criticism of the government and government officials for mismanagement, bad decisions, despicable behaviours and lack of accountability.

At the same time, the official media and Tencent have stepped up their rumour-busting effort.

They regularly publish rumour-busting pieces. They mobilise the “50-cent army” (wumao) and volunteer wumao (ziganwu) as their truth ambassadors.

Tencent has taken on the responsibility of providing “transparent” communication. It launched a new feature through its WeChat mini-program Health, providing real-time updates on the epidemic and comprehensive information – including fake-news busting.

The government has told people to only post and forward information from official channels and warned of severe consequences for anyone found guilty of disseminating “rumours”, including permanently blocking WeChat groups, blocking social media accounts, and possible jail terms.

A warning to WeChat users not to spread fake news about the coronavirus.
Author provided

Chinese people, accustomed to having posts deleted, face increased peer pressure in their chat groups to comply with the heightened censorship regime. Amid the panic the general advice is: don’t repost anything.

They are asked to be savvy consumers, able to distinguish fake news, half-truths or rumours, and to trust only one source of truth: the official channels.

But the skills to detect and contain false content are becoming rarer and more difficult to obtain.

Coronavirus and the post-truth era

We live in the post-truth era, where every “truth” is driven by subjective, elusive, self-confirming and emotional “facts”.




Read more:
Post-truth politics and why the antidote isn’t simply ‘fact-checking’ and truth


Any news source can take you in the wrong direction.

We have seen this with the eight doctors from Wuhan who were transformed from “rumourmongers” into whistleblowers and heroes within a month.

Dr Li Wenliang, the first to warn others of the “SARS-like” virus in December 2019, died from the novel coronavirus in the early hours of February 7 2020. There is an overwhelming sense of loss, mourning and unspoken indignation at his death in various WeChat groups.

WeChat users mourning the death of Dr Li Wenliang.
Author provided

In the face of this post-truth era, we must ask the questions: what is “rumour”, who defines “rumour”, and how does “rumour” occur in the first place?

Information overload is accompanied by information pollution. Detecting and containing false information on social media has been a technical, sociological and ideological challenge.

With a state-led campaign to “bust rumours” and “clean the web” in a controlled environment at a time of crisis, these questions are more urgent than ever.

As media scholar Yong Hu said in 2011, when “official lies outpace popular rumors” the government and its information control mechanism constitute the greatest obstruction of the truth.

On the one hand, the government has provided an environment conducive to the spread of rumours, and on the other it sternly lashes out against rumours, placing itself in the midst of an insoluble contradiction.

As the late Dr Li Wenliang said: “[To me] truth is more important than my case being redressed; a healthy society should not only allow one voice.”

A screenshot from WeChat quoting Dr Li Wenliang: ‘[To me] truth is more important than my case being redressed; a healthy society should not only allow one voice.’
Author provided

China can lock down its cities, but it cannot lock down rumours on social media.

In fact, the Chinese people are not worried about rumours. They are worried about where to find truth and how to give voice to facts: not one single source of truth, but multiple sources of facts that will save lives.

The Conversation

Haiqing Yu, Associate Professor, School of Media and Communication, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.