We’re in danger of drowning in a coronavirus ‘infodemic’. Here’s how we can cut through the noise



Paul Hanaoka/Unsplash

Connal Lee, University of South Australia

The disease caused by the novel coronavirus that has so far killed more than 1,100 people now has a name – COVID-19.

The World Health Organisation (WHO) didn’t want the name to refer to a place, animal or certain group of people and needed something pronounceable and related to the disease.

“Having a name matters to prevent the use of other names that can be inaccurate or stigmatising,” said WHO director-general Tedros Adhanom Ghebreyesus.

The organisation has been battling misinformation about the coronavirus, with some experts warning rumours are spreading more rapidly than the disease itself.




Read more:
Coronavirus fears: Should we take a deep breath?


The WHO describes the overabundance of information about the coronavirus as an “infodemic”. Some information is accurate, but much of it isn’t – and it can be difficult to tell what’s what.

What’s the problem?

Misinformation can spread unnecessary fear and panic. During the 2014 Ebola outbreak, rumours about the disease led to panic-buying, with many people purchasing Ebola virus protection kits online. These contained hazmat suits and face masks, which were unnecessary for protection against the disease.

As we’ve seen with the coronavirus, misinformation can prompt blame and stigmatisation of infected and affected groups. Since the outbreak began, Chinese Australians, who have no connection or exposure to the virus, have reported an increase in anti-Chinese language and abuse both online and on the streets.




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


Misinformation can also undermine people’s willingness to follow legitimate public health advice. In extreme cases, people don’t acknowledge the disease exists, and fail to take proven precautionary measures.

In other cases, people may not seek help due to fears, misconceptions or a lack of trust in authorities.

The public may also grow bored or apathetic due to the sheer quantity of information out there.

Mode of transmission

The internet can be an ally in the fight against infectious diseases. Accurate messages about how the disease spreads and how to protect yourself and others can be distributed promptly and accessibly.

But inaccurate information spreads rapidly online. Users can find themselves inside echo chambers, embracing implausible conspiracy theories and ultimately distrusting those in charge of the emergency response.

The infodemic continues offline as information spreads via mobile phone, traditional media and in the work tearoom.

Previous outbreaks show authorities need to respond to misinformation quickly and effectively, while remaining aware that not everybody will believe the official line.

Responding to the infodemic

Last week, rumours emerged that the coronavirus was transmitted through infectious clouds in the air that people could inhale.

The WHO promptly responded to these claims, noting this was not the case. WHO’s Director of Global Infectious Hazard Preparedness, Sylvie Briand, explained:

Currently the virus is transmitted through droplets and you need a close contact to be infected.

This simple intervention demonstrates how a timely response can be effective. However, it may not convince everyone.




Read more:
Coronavirus fears: Should we take a deep breath?


Official messages need to be consistent to avoid confusion and information overload. However, coordination can be difficult, as we’ve seen this week.

Chinese health officials have offered potentially overly optimistic predictions, saying the outbreak will be over by April. Meanwhile, the WHO has given dire warnings, saying the virus poses a bigger threat than terrorism.

These inconsistencies can be understandable as governments try to placate fears while the WHO encourages us to prepare for the worst.

Health authorities should keep reiterating key messages, like the importance of regularly washing your hands. This is a simple and effective measure that helps people feel in control of their own protection. But it can be easily forgotten in a sea of information.

It’s worth reminding people to regularly wash their hands.
CDC/Unsplash

A challenge is that authorities may struggle to compete with the popularity of sensationalist stories and conspiracy theories about how diseases emerge, spread and what authorities are doing in response. Conspiracies may be more enjoyable than the official line, or may help some people preserve their existing, problematic beliefs.

Sometimes a prompt response won’t successfully cut through this noise.

Censorship isn’t the answer

Although censoring a harmful view could limit its spread, it could also make that view popular. Hiding negative news or over-reassuring people can leave them vulnerable and unprepared.

Censorship and media silence during the 1918 Spanish flu, which included not releasing the numbers of the affected and dead, downplayed the seriousness of the pandemic.

When the truth emerges, people lose trust in public institutions.

Past outbreaks illustrate that building trust and legitimacy is vital to get people to adhere to disease prevention and control measures such as quarantines. Trying to mitigate fear through censorship is problematic.

Saving ourselves from drowning in a sea of (mis)information

The internet is useful for monitoring infectious disease outbreaks. Tracking keyword searches, for example, can detect emerging trends.

Observing online communication offers an opportunity to respond quickly to misunderstandings and to build a picture of which rumours gain the most traction.
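To make the monitoring idea concrete, here is a minimal sketch of one way to flag an emerging spike in keyword or rumour activity: compare each day’s volume against a trailing baseline. The function, window and threshold are illustrative assumptions, not a description of any health authority’s actual system.

```python
import statistics

def detect_spikes(daily_counts, window=7, threshold=3.0):
    """Return indices of days whose count far exceeds the trailing baseline."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mean = statistics.mean(baseline)
        spread = statistics.pstdev(baseline) or 1.0  # guard against a perfectly flat baseline
        if daily_counts[i] > mean + threshold * spread:
            spikes.append(i)
    return spikes

# A quiet week of searches for a rumour, then a sudden jump on the last day.
counts = [12, 15, 11, 14, 13, 12, 16, 90]
print(detect_spikes(counts))  # [7]
```

Real monitoring systems add smoothing, seasonality corrections and human review, but the core idea is the same: today’s volume is judged against a recent baseline.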

Health authorities’ response to the infodemic should include a strategy for engaging with, and even listening to, those who spread or believe inaccurate stories, to gain a deeper understanding of how infodemics spread.

Connal Lee, Associate Lecturer, Philosophy, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

9 ways to talk to people who spread coronavirus myths



from www.shutterstock.com

Claire Hooker, University of Sydney

The spread of misinformation about the novel coronavirus disease, now known as COVID-19, seems greater than the spread of the infection itself.

The World Health Organisation (WHO), government health departments and others are trying to alert people to these myths.

But what’s the best way to tackle these if they come up in everyday conversation, whether that’s face-to-face or online? Is it best to ignore them, jump in to correct them, or are there other strategies we could all use?




Read more:
The coronavirus and Chinese social media: finger-pointing in the post-truth era


Public health officials expect misinformation about disease outbreaks where people are frightened. This is particularly so when a disease is novel and the science behind it is not yet clear. It’s also the case when we still don’t know how many people are likely to become sick, have a life-threatening illness or die.

Yet we can all contribute to the safe control of the disease and to minimising its social and economic impacts by addressing misinformation when we encounter it.

To avoid our efforts backfiring, we need to know how to do this effectively and constructively.




Read more:
We depend so much more on Chinese travellers now. That makes the impact of this coronavirus novel


What doesn’t work

Abundant research shows what doesn’t work. Telling people not to panic, or that their perceptions and beliefs are incorrect, can actually strengthen their commitment to their incorrect views.

Over-reactions are common when new risks emerge, and they usually pass. So it’s often best not to engage in the first place.




Read more:
Listen up, health officials – here’s how to reduce ‘Ebolanoia’


What can I do?

If you wish to effectively counter misinformation, you need to pay more attention to your audience than to the message you want to convey. See our tips below.

Next, you need to be trusted.

People only listen to sources they trust. This involves putting in the time and effort to make sure your knowledge is correct and reliable; discussing information fairly (what kind of information would make you change your own mind?); and being honest enough to admit when you don’t know, and even more importantly, when you are wrong.

Here’s how all this might work in practice.

1. Understand how people perceive and react to risks

We all tend to worry more about risks we perceive as new, uncertain and dreaded, and that affect a large group in a short time – all features of the new coronavirus.

Our worries increase significantly if we do not feel we, or the governments acting for us, have control over the virus.




Read more:
Coronavirus fears: Should we take a deep breath?


2. Recognise people’s concerns

People can’t process information unless they see their worries being addressed.

So instead of offering facts (“you won’t catch coronavirus from your local swimming pool”), articulate their worry (“you’ve caught colds in swimming pools before, and now you’re worried someone might transmit the virus before they know they are infected”).

Being heard helps people re-establish a sense of control.




Read more:
How to cut through when talking to anti-vaxxers and anti-fluoriders


3. Be aware of your own feelings

Usually when we want to correct someone, it’s because we’re worried about the harms their false beliefs will cause.

But if we are emotional, what we communicate is not our knowledge, but our disrespect for the other person’s views. This usually produces a defensive reaction.

Manage your own outrage first before jumping in to correct others. This might mean saving a discussion for another day.




Read more:
4 ways to talk with vaccine skeptics


4. Ask why someone is worried

If you ask why someone is worried, you might discover your assumptions about that person are wrong.

Explaining their concerns to you helps people explore their own views. They might become aware of what they don’t know or of how unlikely their information sounds.




Read more:
Everyone can be an effective advocate for vaccination: here’s how


5. Remember, the facts are going to change

Because there is still considerable uncertainty about how severe the epidemic will be, information and the government’s response to it are going to change.

So you will need to frequently update your own views. Know where to find reliable information.

For instance, state and federal health departments, the WHO and the US Centers for Disease Control websites provide authoritative and up-to-date information.

6. Admit when you’re wrong

Being wrong is likely in an uncertain situation. If you are wrong, say so early.

If you asked your family or employees to take avoidance measures you now realise aren’t really necessary, then admit it and apologise. This helps restore the trust you need to communicate effectively the next time you need to raise an issue.

7. Politely provide your own perspective

Phrases like, “here’s why I am not concerned about that” or “I actually feel quite confident about doing X or Y” offer ways to communicate your knowledge without attacking someone else’s views.

You can and should be explicit about what harms you worry misinformation can cause. An example could be, “I’m worried that avoiding Chinese restaurants will really hurt their business. I’m really conscious of wanting to support Chinese Australians right now.”




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


8. On social media, model the behaviour you want to see

It’s harder to be effective on social media, where outrage, not listening, is common. Often your goal might be to promote a reasoned, civil discussion, not to defend one particular belief over another. Use very reliable links.




Read more:
False information fuels fear during disease outbreaks: there is an antidote


9. Don’t make it worse online

Your online comment can unintentionally reinforce misinformation, for example by giving it more prominence. Check the Debunking Handbook for some strategies to avoid this.

Make sure your posts or comments are polite, specific, factual and very brief.

Acknowledging common values or points of connection by using phrases such as “I’m worried about my grandmother, too”, or by being supportive (“It’s so great that you’re proactive about looking after your staff”), can help.

Remember why this is important

The ability to respond to emergencies rests on having civil societies. The goal is to keep relationships constructive and dialogue open – not to be right.

Claire Hooker, Senior Lecturer and Coordinator, Health and Medical Humanities, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

6 things to ask yourself before you share a bushfire map on social media



NASA’s Worldview software gives you a satellite view of Earth right now, and can help track the spread of fires.
NASA Worldview

Juan Pablo Guerschman, CSIRO

In recent days, many worrying bushfire maps have been circulating online, some appearing to suggest all of Australia is burning.

You might have seen this example, decried by some as misleading, which prompted a clarifying Instagram post from its creator.

As he explained, the image isn’t a NASA photo. What a satellite actually “sees” is quite different.

I’ll explain how we use data collected by satellites to estimate how much of an area is burning, or has already been burnt, and what this information should look like once it’s mapped.




Read more:
A crisis of underinsurance threatens to scar rural Australia permanently


Reflective images

When astronauts look out their window in space, they see something similar to the view from an aeroplane window, but from higher up and covering a much wider area.

As you read this, many unmanned satellites are orbiting and photographing Earth. These images are used to monitor fires in real-time. They fall into two categories: reflective and thermal.

Reflective images capture information in the visible range of the electromagnetic spectrum (in other words, what we can see). But they also capture information in wavelengths we can’t see, such as infrared wavelengths.

If we use only the visible wavelengths, we can render the image similar to what we might see with the naked eye from a satellite. We call these “true colour” images.

This is a true colour image of south-east Australia, taken on January 4, 2020, by the MODIS instrument on the Aqua satellite. Fire smoke is grey, clouds are white, forests are dark green, brown areas are dryland agricultural areas, and the ocean is blue.
NASA Worldview / https://go.nasa.gov/307pDDX

Note that the image doesn’t have political boundaries, as these aren’t physical features. To make satellite imagery useful for navigation, we overlay the map with location points.

The same image shown as true colour, with the relevant geographical features overlaid.
NASA Worldview / https://go.nasa.gov/2TafEMH

From this, we can predict where the fires are by looking at the smoke. However, the fires themselves are not directly visible.

‘False colour’ images

Shortwave infrared bands are less sensitive to smoke and more sensitive to fire, which means they can tell us where fire is present.

Converting these wavelengths into visible colours produces what we call “false colour” images. For instance:

The same image, this time shown as false colour. Now, the fire smoke is partially transparent grey while the clouds aren’t. Red shows the active fires and brown shows where bushfires have recently burnt.
NASA Worldview / https://go.nasa.gov/2NhzRfN

In this shortwave infrared image, we start to “see” under the smoke, and can identify active fires. We can also learn more about the areas that are already burnt.
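As a rough illustration of how such a composite is assembled – assuming you already have the shortwave-infrared, near-infrared and red bands of a scene as arrays (a band combination often used for fire monitoring, not any specific NASA product) – a false-colour image is just those bands contrast-stretched and stacked into the red, green and blue channels:

```python
import numpy as np

def false_colour(swir, nir, red):
    """Stack three bands into an RGB image. Mapping shortwave infrared to the
    red channel makes active fires and fresh burn scars stand out, while
    smoke, which shortwave infrared largely penetrates, appears translucent."""
    def stretch(band):
        # Contrast-stretch each band to [0, 1] between its 2nd and 98th percentiles.
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return np.dstack([stretch(swir), stretch(nir), stretch(red)])

# Each input is a 2-D array of surface reflectances covering the same area.
rng = np.random.default_rng(0)
image = false_colour(rng.random((4, 4)), rng.random((4, 4)), rng.random((4, 4)))
print(image.shape)  # (4, 4, 3)
```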

Thermal and hotspots

As their name suggests, thermal images measure how hot or cold everything in the frame is. Active fires are detected as “hotspots” and mapped as points on the surface.

While reflective imagery is only useful when obtained by a satellite during daytime, thermal hotspots can be measured at night – doubling our capacity to observe active fires.

The same image shown as false colour, with hotspots overlaid in red.
NASA Worldview / https://go.nasa.gov/2rZNIj9

This information can be used to create maps showing the aggregation of hotspots over several days, weeks or months.

Geoscience Australia’s Digital Earth hotspots service shows hotspots across the continent in the last 72 hours. It’s worth reading the “about” section to learn the limitations or potential for error in the map.




Read more:
Spread the word: the value of local information in disaster response


When hotspots, which show “hot” pixels, are displayed as very large icons, or are aggregated over long periods, the results can be deceiving. They can suggest a much larger area is on fire than is actually burning.

For example, it would be wrong to believe all the areas in red in the map below are burning or have already burnt. It’s also unclear over what period of time the hotspots were aggregated.

The ‘world map of fire hotspots’ from the Environmental Investigation Agency.
Environmental Investigation Agency / https://eia-international.org/news/watching-the-world-burn-fires-threaten-the-worlds-tropical-forests-and-millions-of-people/
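To see how much the aggregation window matters, here is a minimal sketch that filters hotspot detections by age; the `detected_at` field and the toy data are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def recent_hotspots(hotspots, hours=72):
    """Keep only detections newer than `hours` ago."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return [h for h in hotspots if h["detected_at"] >= cutoff]

# One detection every five days for a year: a map aggregating all of them
# looks alarming, yet almost none of those sites are still burning.
now = datetime.now(timezone.utc)
hotspots = [{"detected_at": now - timedelta(days=d)} for d in range(0, 365, 5)]
print(len(hotspots), "detections in a year-long window")       # 73
print(len(recent_hotspots(hotspots)), "in the last 72 hours")  # 1
```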

Get smart

Considering all of the above, there are some key questions you can ask to gauge the authenticity of a bushfire map. These are:

  • Where does this map come from, and who produced it?

  • Is this a single satellite image, or one using hotspots overlaid on a map?

  • What are the colours representing?

  • Do I know when this was taken?

  • If this map depicts hotspots, over what period of time were they collected? A day, a whole year?

  • Is the size of the hotspots representative of the area that is actually burning?

So, the next time you see a bushfire map, think twice before pressing the share button.

Juan Pablo Guerschman, Senior Research Scientist, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Timothy Graham, Queensland University of Technology and Tobias R. Keller, Queensland University of Technology

In the first week of 2020, hashtag #ArsonEmergency became the focal point of a new online narrative surrounding the bushfire crisis.

The message: the cause is arson, not climate change.

Police and bushfire services (and some journalists) have contradicted this claim.

We studied about 300 Twitter accounts driving the #ArsonEmergency hashtag to identify inauthentic behaviour. We found many accounts using #ArsonEmergency were behaving “suspiciously”, compared to those using #AustraliaFire and #BushfireAustralia.

Accounts peddling #ArsonEmergency carried out activity similar to what we’ve witnessed in past disinformation campaigns, such as the coordinated behaviour of Russian trolls during the 2016 US presidential election.

Bots, trolls and trollbots

The most effective disinformation campaigns use bot and troll accounts to infiltrate genuine political discussion, and shift it towards a different “master narrative”.

Bots and trolls have been a thorn in the side of fruitful political debate since Twitter’s early days. They mimic genuine opinions, akin to what a concerned citizen might display, with a goal of persuading others and gaining attention.

Bots are usually automated (acting without constant human oversight) and perform simple functions, such as retweeting or repeatedly pushing one type of content.

Troll accounts are controlled by humans. They try to stir controversy, hinder healthy debate and simulate fake grassroots movements. They aim to persuade, deceive and cause conflict.

We’ve observed both troll and bot accounts spouting disinformation regarding the bushfires on Twitter. We were able to identify these accounts as inauthentic in two ways.

First, we used sophisticated software tools including tweetbotornot, Botometer, and Bot Sentinel.

There are various definitions for the word “bot” or “troll”. Bot Sentinel says:

Propaganda bots are pieces of code that utilize Twitter API to automatically follow, tweet, or retweet other accounts bolstering a political agenda. Propaganda bots are designed to be polarizing and often promote content intended to be deceptive… Trollbot is a classification we created to describe human controlled accounts who exhibit troll-like behavior.

Some of these accounts frequently retweet known propaganda and fake news accounts, and they engage in repetitive bot-like activity. Other trollbot accounts target and harass specific Twitter accounts as part of a coordinated harassment campaign. Ideology, political affiliation, religious beliefs, and geographic location are not factors when determining the classification of a Twitter account.

These machine learning tools compared the behaviour of known bots and trolls with the accounts tweeting the hashtags #ArsonEmergency, #AustraliaFire, and #BushfireAustralia. From this, they provided a “score” for each account suggesting how likely it was to be a bot or troll account.
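The tools above are trained classifiers built on many behavioural features. The intuition behind such a score can be caricatured in a few lines; the features and thresholds below are invented for illustration and are not how tweetbotornot, Botometer or Bot Sentinel actually work.

```python
def bot_likeness(account):
    """Toy score in [0, 1]: the fraction of simple red flags an account raises."""
    flags = [
        account["tweets_per_day"] > 100,   # hyperactive posting rate
        account["account_age_days"] < 30,  # very recently created
        account["retweet_ratio"] > 0.9,    # almost never posts original content
    ]
    return sum(flags) / len(flags)

suspect = {"tweets_per_day": 240, "account_age_days": 12, "retweet_ratio": 0.95}
print(bot_likeness(suspect))  # 1.0 – every red flag raised
```

Real classifiers weigh hundreds of such signals and output probabilities rather than verdicts, which is one reason manual validation still matters.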

We also manually analysed the Twitter activity of suspicious accounts and the characteristics of their profiles, to validate the origins of #ArsonEmergency, as well as the potential motivations of the accounts spreading the hashtag.

Who to blame?

Unfortunately, we don’t know who is behind these accounts, as we can only access trace data such as tweet text and basic account information.

This graph shows how many times #ArsonEmergency was tweeted between December 31 last year and January 8 this year:

On the vertical axis is the number of tweets over time which featured #ArsonEmergency. On January 7, there were 4726 tweets.
Author provided

Previous bot and troll campaigns have been thought to be the work of foreign interference, such as Russian trolls, or PR firms hired to distract and manipulate voters.

The New York Times has also reported on perceptions that media magnate Rupert Murdoch is influencing Australia’s bushfire debate.




Read more:
Weather bureau says hottest, driest year on record led to extreme bushfire season


Weeding-out inauthentic behaviour

In late November, some Twitter accounts began using #ArsonEmergency to counter evidence that climate change is linked to the severity of the bushfire crisis.

Below is one of the earliest examples of an attempt to replace #ClimateEmergency with #ArsonEmergency. The accounts tried to get #ArsonEmergency trending to drown out dialogue acknowledging the link between climate change and bushfires.

We suspect the origins of the #ArsonEmergency debacle can be traced back to a few accounts.
Author provided

The hashtag was only tweeted a few times in 2019, but gained traction this year in a sustained effort by about 300 accounts.

A much larger proportion of bot-like and troll-like accounts pushed #ArsonEmergency than pushed #AustraliaFire or #BushfireAustralia.

The narrative was then adopted by genuine accounts who furthered its spread.

On multiple occasions, we noticed suspicious accounts countering expert opinions while using the #ArsonEmergency hashtag.

The inauthentic accounts engaged with genuine users in an effort to persuade them.
Author provided

Bad publicity

Since media coverage has shone light on the disinformation campaign, #ArsonEmergency has gained even more prominence, but in a different light.

Some journalists are acknowledging the role of disinformation in the bushfire crisis – and countering the narrative that Australia has an arson emergency. However, the campaign does indicate Australia has a climate denial problem.

What’s clear to me is that Australia has been propelled into the global disinformation battlefield.




Read more:
Watching our politicians fumble through the bushfire crisis, I’m overwhelmed by déjà vu


Keep your eyes peeled

It’s difficult to debunk disinformation, as it often contains a grain of truth. In many cases, it leverages people’s previously held beliefs and biases.

Humans are particularly vulnerable to disinformation in times of emergency, or when addressing contentious issues like climate change.

Online users, especially journalists, need to stay on their toes.

The accounts we come across on social media may not represent genuine citizens and their concerns. A trending hashtag may be trying to mislead the public.

Right now, it’s more important than ever for us to prioritise factual news from reliable sources – and identify and combat disinformation. The Earth’s future could depend on it.

Timothy Graham, Senior lecturer, Queensland University of Technology and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Fake news’ is already spreading online in the election campaign – it’s up to us to stop it


Claims of ‘fake news’ and misinformation campaigns have already arisen in the federal election campaign, a problem the political parties and tech companies are ill-equipped to address.
Ritchie B. Tongo/EPA

Michael Jensen, University of Canberra

We’re only days into the federal election campaign and already the first instances of “fake news” have surfaced online.

Over the weekend, Labor demanded that Facebook remove posts it says are “fake news” about the party’s plans to introduce a “death tax” on inheritances. Labor also called on the Coalition to publicly disavow the misinformation campaign.

An inauthentic tweet purportedly sent from the account of Australian Council of Trade Unions secretary Sally McManus also made the rounds, claiming that she, too, supported a “death tax”. It was retweeted many times – including by Sky News commentator and former Liberal MP Gary Hardgrave – before McManus put out a statement saying the tweet had been fabricated.

What the government and tech companies are doing

In the wake of the cyber-attacks on the 2016 US presidential election, the Australian government began taking seriously the threat that “fake news” and online misinformation campaigns could be used to try to disrupt our elections.

Last year, a taskforce was set up to try to protect the upcoming federal election from foreign interference, bringing together teams from Home Affairs, the Department of Finance, the Australian Electoral Commission (AEC), the Australian Federal Police (AFP) and the Australian Security Intelligence Organisation (ASIO).

The AEC also created a framework with Twitter and Facebook to remove content deemed to be in violation of Australian election laws. It also launched an aggressive campaign to encourage voters to “stop and consider” the sources of information they consume online.




Read more:
We’ve been hacked – so will the data be weaponised to influence election 2019? Here’s what to look for


For their part, Facebook and Twitter rolled out new features aimed specifically at safeguarding the Australian election. Facebook announced it would ban foreign advertising in the run-up to the election and launch a fact-checking partnership to vet the accuracy of information being spread on the platform. However, Facebook will not be implementing requirements that users wishing to post ads verify their locations until after the election.

Twitter also implemented new rules requiring that all political ads be labelled to show who sponsored them and those sending the tweets to prove they are located in Australia.

While these moves are all a good start, they are unlikely to be successful in stemming the flow of manipulative content as election day grows closer.

Holes in the system

First, a foreign entity intent on manipulating the election can get around address verification rules by partnering with domestic actors to promote paid advertising on Facebook and Twitter. Furthermore, Russia’s intervention in the US election showed that “troll” or “sockpuppet” accounts, as well as botnets, can easily spread fake news content and hyperlinks in the absence of a paid promotion strategy.

Facebook has also implemented measures that actually reduce transparency in its advertising. To examine how political advertising works on the platform, ProPublica built a browser plugin last year to collect Facebook ads and show which demographic groups they were targeting. Facebook responded by blocking the plugin. The platform’s own ad library, while expansive, also does not include any of the targeting data that ProPublica had made public.




Read more:
Russian trolls targeted Australian voters on Twitter via #auspol and #MH17


A second limitation faced by the AEC, social media companies, and government agencies is timing. The framework set up last year by the AEC to address content in possible violation of electoral rules has proven too slow to be effective. First, the AEC needs to be alerted to questionable content. Then, it will try to contact whoever posted it, and if it can’t, the matter is escalated to Facebook. This means that days can pass before the material is addressed.

Last year, for instance, when the AEC contacted Facebook about sponsored posts attacking left-wing parties from a group called Hands Off Our Democracy, it took Facebook more than a month to respond. By then, the group’s Facebook page had disappeared.



Portions of AEC letter to Facebook legal team for Australia and New Zealand detailing steps for addressing questionable content, sent 30 August 2018.
FOI request by ABC News, Author provided

The length of time required to take down illegal content is critical because research on campaigning shows that the window of opportunity to shift a political discussion on social media is often quite narrow. For this reason, an illegal ad likely will have achieved its purpose by the time it is flagged and measures are taken to remove it.

Indeed, from 2015 to 2017, Russia’s Internet Research Agency, identified by US authorities as the main “troll farm” behind Russia’s foreign political interference, ran over 3,500 ads on Facebook with a median duration of just one day.

Even if content is flagged to the tech companies and accounts are blocked, this measure itself is unlikely to deter a serious misinformation campaign.

The Russian Internet Research Agency spent millions of dollars and conducted research over a period of years to inform their strategies. With this kind of investment, a determined actor will have gamed out changes to platforms, anticipated legal actions by governments and adapted its strategies accordingly.

What constitutes ‘fake news’ in the first place?

Finally, there is the problem of what counts as “fake news” and what counts as legitimate political discussion. The AEC and other government agencies are not well positioned to police truth in politics. There are two aspects to this problem.

The first is that the majority of manipulative content directed at democratic societies is not obviously or demonstrably false. In fact, a recent study of Russian propaganda efforts in the United States found the majority of this content “is not, strictly speaking, ‘fake news’.”

Instead, it is a mixture of half-truths and selected truths, often filtered through a deeply cynical and conspiratorial worldview.

There’s a different issue with the Chinese platform WeChat, where there is a systematic distortion of news shared on public or “official accounts”. Research shows these accounts are often subject to considerable censorship – including self-censorship – so they do not infringe on the Chinese government’s official narrative. If they do, the accounts risk suspension, or their posts can be deleted. Evidence shows that official WeChat accounts in Australia often change their content and tone in response to changes in Beijing’s media regulations.




Read more:
Who do Chinese-Australian voters trust for their political news on WeChat?


For this reason, suggestions that platforms like WeChat be considered “an authentic, integral part of a genuinely multicultural, multilingual mainstream media landscape” are dangerously misguided, as official accounts play a role in promoting Beijing’s strategic interests rather than providing factual information.

The public’s role in stamping out the problem

If the AEC is not in a position to police truth online and combat manipulative speech, who is?

Research suggests that in a democracy, political elites play a strong role in shaping opinions and amplifying the effects of foreign influence and misinformation campaigns.

For example, when Republican John McCain was running for the US presidency against Barack Obama in 2008, he faced a question at a rally about whether Obama was “an Arab” – a lie that had been spread repeatedly online. Instead of breathing more life into the story, McCain provided a swift rebuttal.

After the Labor “death tax” Facebook posts appeared here last week, some politicians and right-wing groups shared the post on their own accounts. (It should be noted, however, that Hardgrave apologised for retweeting the fake tweet by McManus.)

Beyond that, the responsibility for combating manipulative speech during elections falls to all citizens. It’s absolutely critical in today’s world of global digital networks for the public to recognise they “are combatants in cyberspace”.

The only sure defence against manipulative campaigns – whether from foreign or domestic sources – is for citizens to take seriously their responsibilities to critically reflect on the information they receive and separate fact from fiction and manipulation.

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Lies, ‘fake news’ and cover-ups: how has it come to this in Western democracies?



Malcolm Turnbull has blamed the conservative faction in the Liberal Party for the ‘insurgency’ that led to his resignation as prime minister.
Lukas Coch/AAP

Joseph Camilleri, La Trobe University

The Liberal leadership spill and Malcolm Turnbull’s downfall is but the latest instalment in a game of musical chairs that has dominated Australian politics for the best part of a decade.

For many, it has been enough to portray Tony Abbott as the villain of the story. Others have pointed to Peter Dutton and his allies as willing, though not-so-clever, accomplices. There’s also been a highlighting of the herd instinct: once self-serving mutiny gathers steam, others will want to follow.

But this barely scratches the surface. And the trend is not confined to Australia.




Read more:
Dutton v Turnbull is the latest manifestation of the splintering of the centre-right in Australian politics


We need only think of Donald Trump’s America, Britain’s Brexit saga or the rise of far-right populist movements in Europe. Politics in the West seem uneasily suspended between farce and tragedy, as deception, accusations of “fake news” and infighting have become commonplace.

In Australia, the revolving prime ministerial door has had much to do with deep tensions surrounding climate change and energy policy more generally.

In Britain, a longstanding ambivalence towards European integration has deeply divided mainstream parties and plunged the country into “Brexit chaos”, a protracted crisis greatly exacerbated by government incompetence and political expediency.

In Italy, the steady erosion of support for the establishment parties has paved the way for a governing coalition that includes a far-right party committed to cracking down on “illegal”, specifically Muslim, immigration.

Yet, beyond these differences are certain common, cross-cultural threads which help explain the present Western malaise.

Simply put, we now have a glaring and widening gap between the enormity of the challenges facing Western societies and the capacity of their political institutions to address them.

Neoliberalism at work

The political class in Australia, as in Europe and North America, is operating within an institutional framework that is compromised by two powerful forces: the dominance of the neoliberal order and relentless globalisation.

The interplay of these two forces goes a long way towards explaining the failure of political elites. They offer neither a compelling national narrative nor a coherent program for the future. Instead, the public is treated to a series of sideshows and constant rivalries over the spoils of office.




Read more:
Partially right: rejecting neoliberalism shouldn’t mean giving up on social liberalism


How does the neoliberal creed underpin the state of current political discourse and practice? The shorthand answer is by setting economic growth as the overriding national objective. Such growth, we are told, requires the public sector to be squeezed and the private sector to be given free rein.

And when economic performance falls short of the mark, pressing social and environmental needs are unmet, or a global financial crisis exposes large-scale financial crimes and shoddy lending practices, these are simply dismissed as inconvenient truths.

Compounding the impact of this highly restrictive economic agenda is globalisation or, to be more accurate, the phenomenal growth of cross-border flows of goods and services, capital, money, carbon emissions, technical know-how, arms, information, images and people. The sheer scale, speed and intensity of these flows make them impervious to national control.




Read more:
It’s not just the economy, stupid; it’s whether the economy is fair


But governments and political parties want to maintain the pretence they can stem the tide. To admit they cannot is to run the risk of appearing incompetent or irrelevant. Importantly, they risk losing the financial or political support of powerful interests that benefit from globalisation, such as the coal lobby.

And so, deception and self-deception become the only viable option. So it is that several US presidents, including Trump, and large segments of the US Congress have flagrantly contradicted climate science or downplayed its implications.

Much the same can be said of Australia. When confronted with climate sceptics in the Liberal ranks, the Turnbull government chose to prioritise lowering electricity prices while minimising its commitment to carbon emission reductions.

The erosion of truth and trust

In the face of such evasion and disinformation, large segments of the population, especially those who are experiencing hard times or feel alienated, provide fertile ground for populist slogans and the personalities willing to mouth them.

Each country has its distinctive history and political culture. But everywhere we see the same refusal to face up to harsh realities. Some will deny the science of climate change. Others will want to roll back the unprecedented movements of people seeking refuge from war, discrimination or abject poverty.

Others still will pretend the state can regulate the accelerating use of information technology, even though the technology is already being used to threaten people’s privacy and reduce control over personal data. Both the state and corporate sector are subjecting citizens to unprecedented levels of surveillance.




Read more:
The Turnbull government is all but finished, and the Liberals will now need to work out who they are


Lies, “fake news” and cover-ups are not, of course, the preserve of politicians. They have become commonplace in so many of our institutions.

The extraordinary revelations from the Banking Royal Commission make clear that Australia’s largest banks and other financial enterprises have massively defrauded customers, given short shrift to both the law and regulators and consistently disregarded the truth.

And now, as a result of another Royal Commission, we have a belated appreciation of the rampant sexual abuse of children in the Catholic Church, which has been consistently covered up by religious officials.

These various public and private arenas, where truth is regularly concealed, denied or obscured, have had a profoundly corrosive effect on the fabric of society, and inevitably on the public sphere. They have severely diminished the social trust on which the viability of democratic processes vitally depends.

There is no simple remedy to the current political disarray. The powerful forces driving financial flows and production and communication technologies are reshaping culture, the global economy and policy-making processes in deeply troubling ways.

Truth and trust are now in short supply. Yet, they are indispensable to democratic processes and institutions.

A sustained national and international conversation on ways to redeem truth and trust has become one of the defining imperatives of our time.


Joseph Camilleri will speak more on this topic in three interactive public lectures entitled Brave New World at St Michael’s on Collins in Melbourne on Sept. 11, 18 and 25.

Joseph Camilleri, Emeritus Professor of International Relations, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Outlawing fake news will chill the real news



The largest television company in the US recently issued a coordinated campaign of scripted warnings about fake news.

Sandeep Gopalan, Deakin University

The term “fake news” has gained prominence in recent years thanks to US President Donald Trump’s attacks against the media during the 2016 US election. In 2017, it was one of Collins Dictionary’s words of the year.

Unsurprisingly, politicians use the fake news label to discredit media stories that portray them in a negative light. And it’s back in the headlines after the largest television company in the US – Sinclair Broadcasting Group – issued a coordinated campaign of scripted warnings about fake news in terms that echo Trump’s sentiments:

The sharing of biased and false news has become all too common on social media … Some members of the media use their platforms to push their own personal bias … This is extremely dangerous to our democracy.


Trump tweeted in support of Sinclair’s message, slamming the mainstream media in the process.


Meanwhile, a new study suggests that actual fake news may have helped Trump to secure the election. Ohio State University researchers found a high statistical association between belief in fake news items and voting in 2016.




Read more:
In Italy, fake news helps populists and far-right triumph


Whatever the impact of fake news on election outcomes, some governments are introducing legislation to control the problem. But these laws are more likely to limit free speech, chill the real news, and create unintended consequences.

Public trust in media is low

Trump and other politicians’ attacks mirror widely held suspicions about the media. A recent poll by Monmouth University showed that more than 77% of Americans believed that mainstream media reports fake news. One in three believed this happened regularly, whereas 46% thought it only happened occasionally.

Fake news was defined broadly: 25% thought it referred to wrong facts, whereas 65% believed it also covered editorial decisions and news coverage. Some 87% of Americans thought interest groups plant fake news on platforms such as Facebook and YouTube. Of concern, 42% believed media reported fake news to push an agenda, and 35% trusted Trump more than CNN.

Australians also have low confidence in the media. According to the 2018 Edelman Trust Barometer, just 32% trust the media – the second lowest score out of the 28 countries surveyed.

New laws take aim at fake news

The congruence of public distrust and politicians’ self-interest has reached an obvious denouement: legislation.

The most egregious of these laws was just passed by the Malaysian Parliament’s Lower House. The Anti-Fake News Act 2018, which imposes jail terms of up to six years, will become law once the Senate approves it. The law defines fake news broadly to include:

…any news, information, data and reports, which is or are wholly or partly false, whether in the form of features, visuals or audio recordings or in any other form capable of suggesting words or ideas.

The law is particularly dangerous because it has extra-territorial application – foreigners can be dealt with “as if the offence was committed” within Malaysia. In other words, it is not just Malaysian journalists who could be locked up – foreign media can also be locked up if Malaysian law enforcement can reach them.




Read more:
Why are Sinclair’s scripted news segments such a big deal?


Malaysia is not an isolated instance. The Philippines is considering a similar law. The Irish Parliament is also considering a bill to criminalise the use of bots on social media platforms to promote fake news – such as those thought to have been used by Russia to influence the US election.

India proposed a law that would suspend the accreditation of journalists for fake news, but retracted the order within a day due to a backlash.

Problems with regulating speech

It is unclear whether the Malaysian law – and other national variants – is masquerading as an attempt to promote real news when it is actually an attempt at censorship by stealth. Regardless, even assuming good intentions, anti-fake-news laws are incapable of tackling the menace.

Fake news is a slippery concept. Who decides what is fake? And how do we manage the distinction between facts and opinion? There is no bright-line definition that would provide clarity, and each item has to be assessed on its own. Moreover, not all fake news is harmful – a precondition for regulation.




Read more:
In Italy, fake news helps populists and far-right triumph


Regulation would turn judges into fact-checkers for potentially millions of news items or social media posts – an impossible task even without crowded dockets. Replacing judges with bureaucrats might improve efficiency marginally, but would generate a censorship state.

Buttressed with criminal penalties, these laws will chill free speech and substantially diminish the marketplace for ideas. Media outlets will be overly cautious with negative consequences for transparency and accountability. In addition, the laws are unlikely to advance the cause of real news – they have no connection to the incentives for providing truthful information.

The current system is sufficient

Countries committed to free speech should not adopt anti-fake-news laws. The current legal regime represents a pragmatic compromise. Our system of free speech tolerates the risk of inaccurate news for several reasons.




Read more:
Why Facebook is the reason fake news is here to stay


Firstly, it is difficult to establish intention to fabricate falsehoods and harm, and the causal link between the two. And giving the state tools to police speech is dangerous, with fear alone generating self-censorship. Also, judges and bureaucrats are not experts at separating fake from real news – public debate in the marketplace of ideas is more efficient. Finally, modern news does not stop at geographic boundaries, and national law cannot solve a transnational problem.

This does not mean that social media platforms should be free to spread falsehoods and compromise elections. Some options for preventing the proliferation of fake news that could crowd out real news include accreditation to distinguish legitimate news outlets, liability for search engines and distributors where actual harm and intent to fabricate can be established in private litigation, and accessible remedies for defamation. However, such regulation goes well beyond the scope of current anti-fake-news laws.

Sandeep Gopalan, Pro Vice-Chancellor (Academic Innovation) & Professor of Law, Deakin University

This article was originally published on The Conversation. Read the original article.

The US election hack, fake news, data theft: the cyber security lessons from 2017



Cyber attacks have the potential to cause economic disruption, coerce changes in political behaviour and subvert systems of governance.
from http://www.shutterstock.com, CC BY-ND

Joe Burton, University of Waikato

Cyber security played a prominent role in international affairs in 2017, with impacts on peace and security.

Increased international collaboration and new laws that capture the complexity of communications technology could be among solutions to cyber security issues in 2018.


Read more: Artificial intelligence cyber attacks are coming – but what does that mean?


The US election hack and the end of cyber scepticism

The big story of the past year has been the subversion of the US election process and the ongoing controversies surrounding the Trump administration. The investigations into the scandal are unresolved, but it is important to recognise that the US election hack has dispelled any lingering scepticism about the impact of cyber attacks on national and international security.

From the self-confessed “mistake” Secretary Clinton made in setting up a private email server, to the hacking of the Democratic National Committee’s servers and the leaking of Democratic campaign chair John Podesta’s emails to WikiLeaks, the 2016 presidential election was in many ways defined by cyber security issues.

Many analysts had been debating the likelihood of a “digital Pearl Harbor”, an attack producing devastating economic disruption or physical effects. But they missed the more subtle and covert political use of cyber attacks to coerce changes in political behaviour and subvert systems of governance. Enhancing the security and integrity of democratic systems and electoral processes will surely be on the agenda in 2018 in the Asia Pacific and elsewhere.

Anti-social media

The growing impact of social media and the connection with cyber security has been another big story in 2017. Social media was meant to be a great liberator, to democratise, and to bring new transparency to politics and societies. In 2017, it has become a platform for fake news, misinformation and propaganda.

Social media sites clearly played a role in displacing authoritarian governments during the Arab Spring uprisings. Few expected they would be used by authoritarian governments in an incredibly effective way to sow and exploit divisions in democratic countries. The debate we need to have in 2018 is how we can deter the manipulation of social media, prevent the spread of fake news and encourage the likes of Facebook and Twitter to monitor and police their own networks.

If we don’t trust what we see on these sites, they won’t be commercially successful, and they won’t serve as platforms to enhance international peace and security. Social media sites must not become co-opted or corrupted. Facebook should not be allowed to become Fakebook.

Holding data to ransom

The spread of the WannaCry virus was the third big cyber security story of 2017. WannaCry locked down computers and demanded a ransom (in bitcoin) for the electronic key that would release the data. The virus spread in a truly global attack to an estimated 300,000 computers in 150 countries. It led to losses in the region of US$4 billion – a small fraction of the global cyber crime market, which is projected to grow to US$6 trillion by 2021. In the Asia Pacific region, cyber crime is growing by 45% each year.


Read more: Cyberspace aggression adds to North Korea’s threat to global security


WannaCry was an important event because it pointed not only to the growth in cyber crime but also to the dangers inherent in the development and proliferation of offensive cyber security capabilities. The exploit for Windows XP systems that was used to spread the virus had been stockpiled by the US National Security Agency (NSA). It ended up being released on the internet and then used to generate revenue.

A fundamental challenge in 2018 is to constrain the use of offensive cyber capabilities and to rein in the growth of the cyber-crime market through enhanced cooperation. This will be no small task, but there have been some positive developments.

According to US network security firm FireEye, the recent US-China agreement on commercial cyber espionage has led to an estimated 90% reduction in data breaches in the US emanating from China. Cyber cooperation is possible and can lead to bilateral and global goods.

Death of cyber norms?

The final big development, or rather lack of development, has been at the UN. The Group of Governmental Experts (GGE) process, established in 2004 to strengthen the security of global information and telecommunications systems, failed to reach a consensus on its latest report on the status of international laws and norms in cyberspace. The main problem has been that there is no definite agreement on the applicability of existing international law to cyber security. This includes issues such as when states might be held responsible for cyber attacks emanating from their territory, or their right to use countermeasures in cyber self-defence.

Some analysts have proclaimed this to be “the end of cyber norms”. This betrays a pessimism about UN level governance of the internet that is deeply steeped in overly state-centric views of security and a reluctance to cede any sovereignty to international organisations.

It is true that norms won’t be built from the top down. But the UN does and should have an important role to play in cyber security as we move into 2018, not least because of its universality and global reach.

The NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) in Tallinn, Estonia recently launched the Tallinn Manual 2.0, which examines the applicability of international law to cyber attacks that fall below the use of force and occur outside of armed conflict.

These commendable efforts could move forward hand in hand with efforts to build consensus on new laws that more accurately capture the complexity of new information and communications technology. In February 2017, Brad Smith, the head of Microsoft, proposed a digital Geneva Convention that would outlaw cyber attacks on civilian infrastructure.

In all this we must recognise that cyber security is not a binary process. It is not about “ones and zeros”, but rather about a complex spectrum of activity that needs multi-level, multi-stakeholder responses that include international organisations. This is a cyber reality that we should all bear in mind when we try to find solutions to cyber security issues in 2018.

Joe Burton, Senior Lecturer, Institute for Security and Crime Science, University of Waikato

This article was originally published on The Conversation. Read the original article.