We’re in danger of drowning in a coronavirus ‘infodemic’. Here’s how we can cut through the noise



Paul Hanaoka/Unsplash

Connal Lee, University of South Australia

The novel coronavirus that has so far killed more than 1,100 people now has a name – COVID-19.

The World Health Organisation (WHO) didn’t want the name to refer to a place, animal or certain group of people and needed something pronounceable and related to the disease.

“Having a name matters to prevent the use of other names that can be inaccurate or stigmatising,” said WHO director-general Tedros Adhanom Ghebreyesus.

The organisation has been battling misinformation about the coronavirus, with some experts warning rumours are spreading more rapidly than the disease itself.




Read more:
Coronavirus fears: Should we take a deep breath?


The WHO describes the overabundance of information about the coronavirus as an “infodemic”. Some information is accurate, but much of it isn’t – and it can be difficult to tell what’s what.

What’s the problem?

Misinformation can spread unnecessary fear and panic. During the 2014 Ebola outbreak, rumours about the disease led to panic-buying, with many people purchasing Ebola virus protection kits online. These contained hazmat suits and face masks, which were unnecessary for protection against the disease.

As we’ve seen with the coronavirus, misinformation can prompt blame and stigmatisation of infected and affected groups. Since the outbreak began, Chinese Australians, who have no connection or exposure to the virus, have reported an increase in anti-Chinese language and abuse both online and on the streets.




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


Misinformation can also undermine people’s willingness to follow legitimate public health advice. In extreme cases, people don’t acknowledge the disease exists, and fail to take proven precautionary measures.

In other cases, people may not seek help due to fears, misconceptions or a lack of trust in authorities.

The public may also grow bored or apathetic due to the sheer quantity of information out there.

Mode of transmission

The internet can be an ally in the fight against infectious diseases. Accurate messages about how the disease spreads and how to protect yourself and others can be distributed promptly and accessibly.

But inaccurate information spreads rapidly online. Users can find themselves inside echo chambers, embracing implausible conspiracy theories and ultimately distrusting those in charge of the emergency response.

The infodemic continues offline as information spreads via mobile phone, traditional media and in the work tearoom.

Previous outbreaks show authorities need to respond to misinformation quickly and effectively, while remaining aware that not everybody will believe the official line.

Responding to the infodemic

Last week, rumours emerged that the coronavirus was transmitted through infectious clouds in the air that people could inhale.

The WHO promptly responded to these claims, noting this was not the case. WHO’s Director of Global Infectious Hazard Preparedness, Sylvie Briand, explained:

Currently the virus is transmitted through droplets and you need a close contact to be infected.

This simple intervention demonstrates how a timely response can be effective. However, it may not convince everyone.




Read more:
Coronavirus fears: Should we take a deep breath?


Official messages need to be consistent to avoid confusion and information overload. However, coordination can be difficult, as we’ve seen this week.

Chinese health officials have made potentially overly optimistic predictions, saying the outbreak will be over by April. Meanwhile, the WHO has given dire warnings, saying the virus poses a bigger threat than terrorism.

These inconsistencies can be understandable as governments try to placate fears while the WHO encourages us to prepare for the worst.

Health authorities should keep reiterating key messages, like the importance of regularly washing your hands. This is a simple and effective measure that helps people feel in control of their own protection. But it can be easily forgotten in a sea of information.

It’s worth reminding people to regularly wash their hands.
CDC/Unsplash

A challenge is that authorities may struggle to compete with the popularity of sensationalist stories and conspiracy theories about how diseases emerge, spread and what authorities are doing in response. Conspiracies may be more enjoyable than the official line, or may help some people preserve their existing, problematic beliefs.

Sometimes a prompt response won’t successfully cut through this noise.

Censorship isn’t the answer

Although censoring a harmful view could limit its spread, it could also make that view popular. Hiding negative news or over-reassuring people can leave them vulnerable and unprepared.

Censorship and media silence during the 1918 Spanish flu, which included not releasing numbers of affected and dead, downplayed the seriousness of the pandemic.

When the truth emerges, people lose trust in public institutions.

Past outbreaks illustrate that building trust and legitimacy is vital to get people to adhere to disease prevention and control measures such as quarantines. Trying to mitigate fear through censorship is problematic.

Saving ourselves from drowning in a sea of (mis)information

The internet is useful for monitoring infectious disease outbreaks. Tracking keyword searches, for example, can detect emerging trends.

Observing online communication offers an opportunity to quickly respond to misunderstandings and to build a picture of what rumours gain the most traction.
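This kind of keyword tracking can be as simple as flagging days when a term's search or mention volume jumps well above its recent baseline. The sketch below is a minimal illustration, not any agency's actual system; the counts are invented.

```python
from statistics import mean, stdev

def detect_spikes(daily_counts, window=7, threshold=3.0):
    """Flag days where a keyword's count jumps well above its recent baseline."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # A day spikes if it exceeds the baseline mean by `threshold` standard deviations.
        if daily_counts[i] > mu + threshold * max(sigma, 1e-9):
            spikes.append(i)
    return spikes

# Hypothetical daily mention counts for a rumour-related keyword
counts = [12, 15, 11, 14, 13, 12, 16, 90, 310, 280]
print(detect_spikes(counts))  # → [7, 8]
```

Day 9 isn't flagged because by then the spike itself has inflated the rolling baseline – a reminder that even simple trend detectors need their windows chosen carefully.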

Health authorities’ response to the infodemic should include a strategy for engaging with, and even listening to, those who spread or believe inaccurate stories, to gain a deeper understanding of how infodemics spread.

Connal Lee, Associate Lecturer, Philosophy, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

9 ways to talk to people who spread coronavirus myths



from www.shutterstock.com

Claire Hooker, University of Sydney

The spread of misinformation about the novel coronavirus, now known as COVID-19, seems greater than the spread of the infection itself.

The World Health Organisation (WHO), government health departments and others are trying to alert people to these myths.

But what’s the best way to tackle these if they come up in everyday conversation, whether that’s face-to-face or online? Is it best to ignore them, jump in to correct them, or are there other strategies we could all use?




Read more:
The coronavirus and Chinese social media: finger-pointing in the post-truth era


Public health officials expect misinformation about disease outbreaks where people are frightened. This is particularly so when a disease is novel and the science behind it is not yet clear. It’s also the case when we still don’t know how many people are likely to become sick, have a life-threatening illness or die.

Yet we can all contribute to the safe control of the disease and to minimising its social and economic impacts by addressing misinformation when we encounter it.

To avoid our efforts backfiring, we need to know how to do this effectively and constructively.




Read more:
We depend so much more on Chinese travellers now. That makes the impact of this coronavirus novel


What doesn’t work

Abundant research shows what doesn’t work. Telling people not to panic, or that their perceptions and beliefs are incorrect, can actually strengthen their commitment to those incorrect views.

Over-reactions are common when new risks emerge and these over-reactions will pass. So, it’s often the best choice to not engage in the first place.




Read more:
Listen up, health officials – here’s how to reduce ‘Ebolanoia’


What can I do?

If you wish to effectively counter misinformation, you need to pay more attention to your audience than to the message you want to convey. See our tips below.

Next, you need to be trusted.

People only listen to sources they trust. This involves putting in the time and effort to make sure your knowledge is correct and reliable; discussing information fairly (what kind of information would make you change your own mind?); and being honest enough to admit when you don’t know, and even more importantly, when you are wrong.

Here’s how all this might work in practice.

1. Understand how people perceive and react to risks

We all tend to worry more about risks we perceive to be new, uncertain and dreaded, and that impact a large group in a short time – all features of the new coronavirus.

Our worries increase significantly if we do not feel we, or the governments acting for us, have control over the virus.




Read more:
Coronavirus fears: Should we take a deep breath?


2. Recognise people’s concerns

People can’t process information unless they see their worries being addressed.

So instead of offering facts (“you won’t catch coronavirus from your local swimming pool”), articulate their worry (“you’ve caught colds in swimming pools before, and now you’re worried someone might transmit the virus before they know they are infected”).

Being heard helps people re-establish a sense of control.




Read more:
How to cut through when talking to anti-vaxxers and anti-fluoriders


3. Be aware of your own feelings

Usually when we want to correct someone, it’s because we’re worried about the harms their false beliefs will cause.

But if we are emotional, what we communicate is not our knowledge, but our disrespect for the other person’s views. This usually produces a defensive reaction.

Manage your own outrage first before jumping in to correct others. This might mean saving a discussion for another day.




Read more:
4 ways to talk with vaccine skeptics


4. Ask why someone is worried

If you ask why someone is worried, you might discover your assumptions about that person are wrong.

Explaining their concerns to you helps people explore their own views. They might become aware of what they don’t know or of how unlikely their information sounds.




Read more:
Everyone can be an effective advocate for vaccination: here’s how


5. Remember, the facts are going to change

Because there is still considerable uncertainty about how severe the epidemic will be, information and the government’s response to it are going to change.

So you will need to frequently update your own views. Know where to find reliable information.

For instance, state and federal health departments, the WHO and the US Centers for Disease Control websites provide authoritative and up-to-date information.

6. Admit when you’re wrong

Being wrong is likely in an uncertain situation. If you are wrong, say so early.

If you asked your family or employees to take avoidance measures you now realise aren’t really necessary, then admit it and apologise. This helps restore the trust you need to communicate effectively the next time you need to raise an issue.

7. Politely provide your own perspective

Phrases like, “here’s why I am not concerned about that” or “I actually feel quite confident about doing X or Y” offer ways to communicate your knowledge without attacking someone else’s views.

You can and should be explicit about what harms you worry misinformation can cause. An example could be, “I’m worried that avoiding Chinese restaurants will really hurt their business. I’m really conscious of wanting to support Chinese Australians right now.”




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


8. On social media, model the behaviour you want to see

It’s harder to be effective on social media, where outrage, not listening, is common. Often your goal might be to promote a reasoned, civil discussion, not to defend one particular belief over another. Use very reliable links.




Read more:
False information fuels fear during disease outbreaks: there is an antidote


9. Don’t make it worse online

Your online comment can unintentionally reinforce misinformation, for example by giving it more prominence. Check the Debunking Handbook for some strategies to avoid this.

Make sure your posts or comments are polite, specific, factual and very brief.

Acknowledging common values or points of connection by using phrases such as “I’m worried about my grandmother, too”, or by being supportive (“It’s so great that you’re proactive about looking after your staff”), can help.

Remember why this is important

The ability to respond to emergencies rests on having civil societies. The goal is to keep relationships constructive and dialogue open – not to be right.

Claire Hooker, Senior Lecturer and Coordinator, Health and Medical Humanities, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

6 things to ask yourself before you share a bushfire map on social media



NASA’s Worldview software gives you a satellite view of Earth right now, and can help track the spread of fires.
Nasa Worldview

Juan Pablo Guerschman, CSIRO

In recent days, many worrying bushfire maps have been circulating online, some appearing to suggest all of Australia is burning.

You might have seen this example, decried by some as misleading, prompting this Instagram post by its creator:

As he explained, the image isn’t a NASA photo. What a satellite actually “sees” is quite different.

I’ll explain how we use data collected by satellites to estimate how much of an area is burning, or has already been burnt, and what this information should look like once it’s mapped.




Read more:
A crisis of underinsurance threatens to scar rural Australia permanently


Reflective images

When astronauts look out their window in space, they see the Earth’s surface below them. It’s similar to what you might see from an aeroplane window, but from higher up and covering a wider area.

As you read this, many unmanned satellites are orbiting and photographing Earth. These images are used to monitor fires in real-time. They fall into two categories: reflective and thermal.

Reflective images capture information in the visible range of the electromagnetic spectrum (in other words, what we can see). But they also capture information in wavelengths we can’t see, such as infrared wavelengths.

If we use only the visible wavelengths, we can render the image similar to what we might see with the naked eye from a satellite. We call these “true colour” images.

This is a true colour image of south-east Australia, taken on January 4th 2020 from the MODIS instrument on the Aqua satellite. Fire smoke is grey, clouds are white, forests are dark green, brown areas are dryland agricultural areas, and the ocean is blue.
NASA Worldview / https://go.nasa.gov/307pDDX

Note that the image doesn’t have political boundaries, as these aren’t physical features. To make satellite imagery useful for navigation, we overlay the map with location points.

The same image shown as true colour, with the relevant geographical features overlaid.
NASA Worldview / https://go.nasa.gov/2TafEMH

From this, we can predict where the fires are by looking at the smoke. However, the fires themselves are not directly visible.

‘False colour’ images

Shortwave infrared bands are less sensitive to smoke and more sensitive to fire, which means they can tell us where fire is present.

Converting these wavelengths into visible colours produces what we call “false colour” images. For instance:

The same image, this time shown as false colour. Now, the fire smoke is partially transparent grey while the clouds aren’t. Red shows the active fires and brown shows where bushfires have recently burnt.
NASA Worldview / https://go.nasa.gov/2NhzRfN

In this shortwave infrared image, we start to “see” under the smoke, and can identify active fires. We can also learn more about the areas that are already burnt.
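Making a false-colour image is essentially a channel-mapping exercise: three non-visible (or differently ordered) bands are scaled and assigned to the red, green and blue display channels. The sketch below uses tiny synthetic arrays rather than real MODIS data, and the band-to-channel mapping is just one common choice, not the exact one NASA Worldview uses.

```python
import numpy as np

def false_colour_composite(swir, nir, red):
    """Map shortwave-infrared, near-infrared and red bands onto the
    R, G and B display channels, scaling each band to the 0-1 range."""
    def scale(band):
        band = band.astype(float)
        rng = band.max() - band.min()
        return (band - band.min()) / rng if rng else np.zeros_like(band)
    # Actively burning pixels reflect strongly in SWIR,
    # so they show up red in the composite.
    return np.dstack([scale(swir), scale(nir), scale(red)])

# Tiny synthetic 2x2 scene: the top-left pixel is a fire (high SWIR)
swir = np.array([[0.9, 0.1], [0.1, 0.1]])
nir  = np.array([[0.2, 0.6], [0.6, 0.5]])
red  = np.array([[0.1, 0.2], [0.2, 0.2]])
rgb = false_colour_composite(swir, nir, red)
print(rgb.shape)  # (2, 2, 3)
```

The fire pixel ends up with the maximum red-channel value, which is exactly why active fires appear as red patches in false-colour imagery.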

Thermal and hotspots

As their name suggests, thermal images measure how hot or cold everything in the frame is. Active fires are detected as “hotspots” and mapped as points on the surface.

While reflective imagery is only useful when obtained by a satellite during daytime, thermal hotspots can be measured at night – doubling our capacity to observe active fires.

The same image shown as false color, with hotspots overlaid in red.
NASA Worldview / https://go.nasa.gov/2rZNIj9

This information can be used to create maps showing the aggregation of hotspots over several days, weeks or months.

Geoscience Australia’s Digital Earth hotspots service shows hotspots across the continent in the last 72 hours. It’s worth reading the “about” section to learn the limitations or potential for error in the map.
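The 72-hour window Geoscience Australia uses matters: a hotspot map is only as honest as the time span over which its points are aggregated. A minimal sketch of that filtering step, using invented hotspot records with a fixed "now" for reproducibility:

```python
from datetime import datetime, timedelta

def recent_hotspots(hotspots, hours=72, now=None):
    """Keep only hotspots detected within the last `hours` --
    aggregating over longer windows exaggerates the active fire area."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=hours)
    return [h for h in hotspots if h["detected"] >= cutoff]

now = datetime(2020, 1, 8, 12, 0)
hotspots = [
    {"lat": -36.1, "lon": 148.0, "detected": datetime(2020, 1, 8, 3, 0)},  # last night
    {"lat": -35.9, "lon": 147.8, "detected": datetime(2020, 1, 2, 9, 0)},  # nearly a week old
]
print(len(recent_hotspots(hotspots, hours=72, now=now)))  # 1
```

Drop the window (or stretch it to a year) and every hotspot ever detected stays on the map, which is how the misleading "all of Australia is burning" impressions arise.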




Read more:
Spread the word: the value of local information in disaster response


When hotspots, which show “hot” pixels, are shown as extremely big icons, or are collected over long periods, the results can be deceiving. They can make the area on fire appear much larger than it really is.

For example, it would be wrong to believe all the areas in red in the map below are burning or have already burnt. It’s also unclear over what period of time the hotspots were aggregated.

The ‘world map of fire hotspots’ from the Environmental Investigation Agency.
Environmental Investigation Agency / https://eia-international.org/news/watching-the-world-burn-fires-threaten-the-worlds-tropical-forests-and-millions-of-people/

Get smart

Considering all of the above, there are some key questions you can ask to gauge the authenticity of a bushfire map. These are:

  • Where does this map come from, and who produced it?

  • Is this a single satellite image, or one using hotspots overlaid on a map?

  • What are the colours representing?

  • Do I know when this was taken?

  • If this map depicts hotspots, over what period of time were they collected? A day, a whole year?

  • Is the size of the hotspots representative of the area that is actually burning?

So, the next time you see a bushfire map, think twice before pressing the share button.

Juan Pablo Guerschman, Senior Research Scientist, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight


Timothy Graham, Queensland University of Technology and Tobias R. Keller, Queensland University of Technology

In the first week of 2020, hashtag #ArsonEmergency became the focal point of a new online narrative surrounding the bushfire crisis.

The message: the cause is arson, not climate change.

Police and bushfire services (and some journalists) have contradicted this claim.

We studied about 300 Twitter accounts driving the #ArsonEmergency hashtag to identify inauthentic behaviour. We found many accounts using #ArsonEmergency were behaving “suspiciously”, compared to those using #AustraliaFire and #BushfireAustralia.

Accounts peddling #ArsonEmergency carried out activity similar to what we’ve witnessed in past disinformation campaigns, such as the coordinated behaviour of Russian trolls during the 2016 US presidential election.

Bots, trolls and trollbots

The most effective disinformation campaigns use bot and troll accounts to infiltrate genuine political discussion, and shift it towards a different “master narrative”.

Bots and trolls have been a thorn in the side of fruitful political debate since Twitter’s early days. They mimic genuine opinions, akin to what a concerned citizen might display, with a goal of persuading others and gaining attention.

Bots are usually automated (acting without constant human oversight) and perform simple functions, such as retweeting or repeatedly pushing one type of content.

Troll accounts are controlled by humans. They try to stir controversy, hinder healthy debate and simulate fake grassroots movements. They aim to persuade, deceive and cause conflict.

We’ve observed both troll and bot accounts spouting disinformation regarding the bushfires on Twitter. We were able to distinguish these accounts as being inauthentic for two reasons.

First, we used sophisticated software tools including tweetbotornot, Botometer, and Bot Sentinel.

There are various definitions for the word “bot” or “troll”. Bot Sentinel says:

Propaganda bots are pieces of code that utilize Twitter API to automatically follow, tweet, or retweet other accounts bolstering a political agenda. Propaganda bots are designed to be polarizing and often promote content intended to be deceptive… Trollbot is a classification we created to describe human controlled accounts who exhibit troll-like behavior.

Some of these accounts frequently retweet known propaganda and fake news accounts, and they engage in repetitive bot-like activity. Other trollbot accounts target and harass specific Twitter accounts as part of a coordinated harassment campaign. Ideology, political affiliation, religious beliefs, and geographic location are not factors when determining the classification of a Twitter account.

These machine learning tools compared the behaviour of known bots and trolls with the accounts tweeting the hashtags #ArsonEmergency, #AustraliaFire, and #BushfireAustralia. From this, they provided a “score” for each account suggesting how likely it was to be a bot or troll account.
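Those tools use trained machine-learning models, but the underlying idea of a per-account score can be illustrated with a toy heuristic. Everything below – the signal names, the thresholds, the example account – is invented for illustration, not how tweetbotornot, Botometer or Bot Sentinel actually work.

```python
def bot_likeness_score(account):
    """Toy heuristic: the fraction of suspicious behavioural signals
    an account trips, giving a 0-1 bot-likeness score."""
    signals = [
        account["tweets_per_day"] > 100,      # inhuman posting rate
        account["retweet_ratio"] > 0.9,       # almost never original content
        account["account_age_days"] < 30,     # freshly created
        account["default_profile_image"],     # no profile customisation
    ]
    return sum(signals) / len(signals)

# Hypothetical account exhibiting every suspicious signal
suspicious = {"tweets_per_day": 250, "retweet_ratio": 0.95,
              "account_age_days": 10, "default_profile_image": True}
print(bot_likeness_score(suspicious))  # 1.0
```

A score is only evidence, not proof: real classifiers weight dozens of such features and still mislabel some genuine users, which is why the researchers also checked accounts manually.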

We also manually analysed the Twitter activity of suspicious accounts and the characteristics of their profiles, to validate the origins of #ArsonEmergency, as well as the potential motivations of the accounts spreading the hashtag.

Who to blame?

Unfortunately, we don’t know who is behind these accounts, as we can only access trace data such as tweet text and basic account information.

This graph shows how many times #ArsonEmergency was tweeted between December 31 last year and January 8 this year:

On the vertical axis is the number of tweets over time which featured #ArsonEmergency. On January 7, there were 4726 tweets.
Author provided

Previous bot and troll campaigns have been thought to be the work of foreign interference, such as Russian trolls, or PR firms hired to distract and manipulate voters.

The New York Times has also reported on perceptions that media magnate Rupert Murdoch is influencing Australia’s bushfire debate.




Read more:
Weather bureau says hottest, driest year on record led to extreme bushfire season


Weeding-out inauthentic behaviour

In late November, some Twitter accounts began using #ArsonEmergency to counter evidence that climate change is linked to the severity of the bushfire crisis.

Below is one of the earliest examples of an attempt to replace #ClimateEmergency with #ArsonEmergency. The accounts tried to get #ArsonEmergency trending to drown out dialogue acknowledging the link between climate change and bushfires.

We suspect the origins of the #ArsonEmergency debacle can be traced back to a few accounts.
Author provided

The hashtag was only tweeted a few times in 2019, but gained traction this year in a sustained effort by about 300 accounts.

A much larger proportion of bot- and troll-like accounts pushed #ArsonEmergency than pushed #AustraliaFire or #BushfireAustralia.

The narrative was then adopted by genuine accounts who furthered its spread.

On multiple occasions, we noticed suspicious accounts countering expert opinions while using the #ArsonEmergency hashtag.

The inauthentic accounts engaged with genuine users in an effort to persuade them.
Author provided

Bad publicity

Since media coverage has shone light on the disinformation campaign, #ArsonEmergency has gained even more prominence, but in a different light.

Some journalists are acknowledging the role of disinformation in the bushfire crisis – and countering the narrative that Australia has an arson emergency. However, the campaign does indicate Australia has a climate denial problem.

What’s clear to me is that Australia has been propelled into the global disinformation battlefield.




Read more:
Watching our politicians fumble through the bushfire crisis, I’m overwhelmed by déjà vu


Keep your eyes peeled

It’s difficult to debunk disinformation, as it often contains a grain of truth. In many cases, it leverages people’s previously held beliefs and biases.

Humans are particularly vulnerable to disinformation in times of emergency, or when addressing contentious issues like climate change.

Online users, especially journalists, need to stay on their toes.

The accounts we come across on social media may not represent genuine citizens and their concerns. A trending hashtag may be trying to mislead the public.

Right now, it’s more important than ever for us to prioritise factual news from reliable sources – and identify and combat disinformation. The Earth’s future could depend on it.

Timothy Graham, Senior lecturer, Queensland University of Technology and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The real news on ‘fake news’: politicians use it to discredit media, and journalists need to fight back


Wes Mountain/The Conversation, CC BY-ND

Andrea Carson, La Trobe University and Kate Farhall, RMIT University

During the 2019 election, a news story about the Labor Party supporting a “death tax” – which turned out to be fake – gained traction on social media.

Now, Labor is urging a post-election committee to rule on whether digital platforms like Facebook are harming Australian democracy by allowing the spread of fake news.

While the joint standing committee on electoral matters (JSCEM) will not report until July next year, our latest research finds that politicians are key culprits turning the term “fake news” into a weapon.

Following the election of Donald Trump as president of the United States, we investigated if Australian politicians were using the terms “fake news”, “alternative facts” and “post-truth”, as popularised by Trump, to discredit opponents.




Read more:
Merchants of misinformation are all over the internet. But the real problem lies with us


With colleagues Scott Wright, William Lukamto and Andrew Gibbons, we investigated if elite political use of this language had spread to Australia. For six months after Trump’s victory, we searched media reports, Australian parliamentary proceedings (Hansard), and politicians’ websites, press releases, Facebook and Twitter communications.

We discovered a US contagion effect. Australian politicians had “weaponised” fake news language to attack their opponents, much in the way that Trump had when he first accused a CNN reporter of being “fake news”.

President-elect Donald Trump refused to take a question from CNN reporter Jim Acosta, calling him fake news in January 2017.

Significantly, these phrases were largely absent in Australian media and parliamentary archives before Trump’s venture into politics.

Our key findings were:

  • Conservative politicians are the most likely users of “fake news” language. This finding is consistent with international studies.

  • Political users were either fringe politicians who use the term to attract more media coverage, or powerful politicians who exploit the language to discredit the media first, and political opponents second.

  • The discourse of fake news peaks during parliamentary sitting times. However, often journalists introduce it at “doorstops” and press conferences, allowing politicians a free kick to attack them.

  • ABC journalists were the most likely targets of the offending label.

  • Concerningly, when the media were accused of being fake news, they reported it but seldom contested this negative framing of themselves, giving people no reason to doubt its usage.

Here is one example of how journalists introduce the term, only to have it used against them.

Journalist: Today, we have seen a press conference by President Trump where he has discussed at length this issue as fake news. Prime Minister Turnbull do you believe there is such a thing as fake news?

Prime minister: A very great politician, Winston Churchill, once said that politicians complaining about the newspapers, is like a sailor complaining about the sea — there’s not much point. That is the media we live with.

This kind of sequence suggests journalists play a role in driving and reinforcing fake news discourse to the likely detriment of trust in media.

One Nation’s Malcolm Roberts provides the most extreme example of the weaponisation of fake news discourse against mainstream media:

Turns out the ABC, in-between spewing fake news about our party, ruined ANZAC day for diggers… . The ABC are a clear and present threat to democracy.

Roberts was not alone. Politicians from three conservative parties claimed the ABC produced fake news to satisfy so-called leftist agendas.

What we discovered is a dangerous trend: social media users copy the way in which their politicians turn “fake news” against media and spread it on the digital platforms.

Despite this, our findings, published in the International Journal of Communication, offer hope as well as lessons to protect Australian democracy from disinformation.

First, our study of politicians of the 45th Parliament in 2016 shows it was a small but noisy minority that used fake news language (see table below). This suggests there is still time for our parliamentarians to reverse this negative communication behaviour and serve as public role models. Indeed, two Labor politicians, Bill Shorten and Stephen Jones, led by example in 2017 and rejected the framing of fake news language when asked about it by journalists.

Figure 1: Total number of instances of fake news discourse use between 8 November 2016 and 8 May 2017, by politician. N = 22 MPs; N = 152 events. *MPs who use fake news discourse to refute it rather than allege it.
Authors

Second, we argue the media’s failure to refute fake news accusations has adverse consequences for public debate and trust in media. We recommend journalists rethink how they respond when politicians accuse them of being fake news or of spreading dis- and misinformation when its usage is untrue.

Third, academics such as Harvard’s Claire Wardle argue that to address the broader problem of information disorders on the web, we all should shun the term “fake news”. She says the phrase:

is being used globally by politicians to describe information that they don’t like, and increasingly, that’s working.

On the death tax fake news during the 2019 election, Carson’s research for a forthcoming book chapter found the spread of this false information was initiated by right-wing fringe politicians and political groups, beginning with One Nation’s Malcolm Roberts and Pauline Hanson.

One Nation misappropriated a real news story discussing inheritance tax from Channel Seven’s Sunrise program, which it then used against Labor on social media. Among the key figures amplifying this false story were the Nationals’ George Christensen and Matt Canavan. As with the findings in our study, social media users parroted this message, further spreading the false information.




Read more:
How fake news gets into our minds, and what you can do to resist it


While Labor is urging the JSCEM to admonish the digital platforms for allowing the false information about the “death tax” to spread, it might do well to reflect that the same digital platforms along with paid television ads enabled the campaigning success of its mischievous “Mediscare” campaign in 2016.

In a separate study, Carson with colleagues Shaun Ratcliff and Aaron Martin, found this negative campaign, while not responsible for an electoral win, did reverse a slump in Labor’s support to narrow its electoral defeat.

Perhaps the JSCEM should also consider the various ways in which our politicians employ “fake news” to the detriment of our democracy. The Conversation

Andrea Carson, Associate Professor, Department of Politics, Media and Philosophy, La Trobe University and Kate Farhall, Postdoctoral research fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Governments are making fake news a crime – but it could stifle free speech


Alana Schetzer, University of Melbourne

The rapid spread of fake news can influence millions of people, impacting elections and financial markets. A study on the impact of fake news on the 2016 US presidential election, for instance, found that fake news stories about Hillary Clinton were “very strongly linked” to the defection of voters who had supported Barack Obama in the previous election.

To stem the rising influence of fake news, some countries have made the creation and distribution of deliberately false information a crime.

Singapore is the latest country to have passed a law against fake news, joining others like Germany, Malaysia, France and Russia.




Read more:
Media Files: Australians’ trust in news media is falling as concern over ‘fake news’ grows


But using the law to fight the wave of fake news may not be the best approach. Human rights activists, legal experts and others fear these laws have the potential to be misused to stifle free speech, or unintentionally block legitimate online posts and websites.

Legislating free speech

Singapore’s new law gives government ministers significant powers to determine what is fake news, and the authority to order online platforms to remove content if it’s deemed to be against the public interest.

What is considered to be of public interest is quite broad, but includes threats to security, the integrity of elections, and the public perception of the government. This could be open to abuse. It means any content that could be interpreted as embarrassing or damaging to the government is now open to being labelled fake news.

And free speech and human rights groups are concerned that legally banning fake news could be used as a way to restrict free speech and target whistleblowers.




Read more:
Freedom of speech: a history from the forbidden fruit to Facebook


Similar problems have arisen in Malaysia and Russia. Both nations have been accused of using their respective laws against fake news to further censor free speech, especially criticism of the government.

Malaysia’s previous government outlawed fake news last year, making it a crime punishable by a fine of up to 500,000 Malaysian ringgit (A$171,000), six years’ imprisonment, or both. The new government has vowed to repeal the law, but has yet to do so.

Russia banned fake news – which it labels as any information that shows “blatant disrespect” for the state – in April. Noncompliance can carry a jail sentence of 15 days.

Discriminating between legitimate and illegitimate content

But the problems that come with legislating against fake news are not restricted to countries with questionable track records of electoral integrity and free speech.

Even countries like Germany are facing difficulties enforcing their laws in a way that doesn’t unintentionally also target legitimate content.

Germany’s law came into effect on January 1, 2018. It targets social media platforms such as Facebook and Twitter, and requires them to remove posts featuring hate speech or fake information within 24 hours. A platform that fails to adhere to this law may face fines up to 50 million euros.

But the government is now reviewing the law because too much information is being blocked that shouldn’t be.

The Association of German Journalists has complained that social media companies are being too cautious and refusing to publish anything that could be wrongly interpreted under the law. This could lead to increasing self-censorship, possibly of information in the public interest.

In Australia, fake news is also a significant problem, with more and more people unable to distinguish it from legitimate reports.

During Australia’s federal election in May, fake news claiming the Labor Party planned on introducing a death tax spread across Facebook and was adopted by the Liberal Party in attack ads.




Read more:
Lies, obfuscation and fake news make for a dispiriting – and dangerous – election campaign


But there has been no serious talk of passing a law banning fake news here. Instead, Australian politicians from all sides have been pressuring the biggest social media platforms to be more vigilant and remove fake news before it becomes a problem.

Are there any alternatives to government regulation?

Unlike in pre-internet days, when governments could limit or ban content at its point of publication, simply passing a law against fake news may not be the best way to deal with the problem.

The European Union, which is experiencing a rise in support for extreme right-wing political parties, introduced a voluntary code of practice against online disinformation in 2018. Facebook and other social media giants have since signed up.

But there are already concerns the code was “softened” to minimise the amount of content that would need to be removed or edited.

Whenever governments get involved in policing the media – even for the best-intended reasons – there is always the possibility of corruption and a reduction in genuine free speech.

Industry self-regulation is also problematic, as social media companies often struggle to objectively police themselves. Compelling these companies to take responsibility for the content on their sites through fines and other punitive measures, however, could be effective.




Read more:
After defamation ruling, it’s time Facebook provided better moderation tools


Another alternative is for media industry groups to get involved.

Media freedom watchdog Reporters Without Borders, for instance, has launched the Journalism Trust Initiative, which could lead to a future certification system that would act as a “guarantee” of quality and accuracy for readers. The agreed standards are still being discussed, but will include issues such as company ownership, sources of revenue, independence and ethical compliance. The Conversation

Alana Schetzer, Sessional Tutor and Journalist, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Fake news’ is already spreading online in the election campaign – it’s up to us to stop it


Claims of ‘fake news’ and misinformation campaigns have already arisen in the federal election campaign, a problem the political parties and tech companies are ill-equipped to address.
Ritchie B. Tongo/EPA

Michael Jensen, University of Canberra

We’re only days into the federal election campaign and already the first instances of “fake news” have surfaced online.

Over the weekend, Labor demanded that Facebook remove posts it says are “fake news” about the party’s plans to introduce a “death tax” on inheritances. Labor also called on the Coalition to publicly disavow the misinformation campaign.

An inauthentic tweet purportedly sent from the account of Australian Council of Trade Unions secretary Sally McManus also made the rounds, claiming that she, too, supported a “death tax”. It was retweeted many times – including by Sky News commentator and former Liberal MP Gary Hardgrave – before McManus put out a statement saying the tweet had been fabricated.

What the government and tech companies are doing

In the wake of the cyber-attacks on the 2016 US presidential election, the Australian government began taking seriously the threat that “fake news” and online misinformation campaigns could be used to try to disrupt our elections.

Last year, a taskforce was set up to try to protect the upcoming federal election from foreign interference, bringing together teams from Home Affairs, the Department of Finance, the Australian Electoral Commission (AEC), the Australian Federal Police (AFP) and the Australian Security Intelligence Organisation (ASIO).

The AEC also created a framework with Twitter and Facebook to remove content deemed to be in violation of Australian election laws. It also launched an aggressive campaign to encourage voters to “stop and consider” the sources of information they consume online.




Read more:
We’ve been hacked – so will the data be weaponised to influence election 2019? Here’s what to look for


For their part, Facebook and Twitter rolled out new features aimed specifically at safeguarding the Australian election. Facebook announced it would ban foreign advertising in the run-up to the election and launch a fact-checking partnership to vet the accuracy of information being spread on the platform. However, Facebook will not be implementing requirements that users wishing to post ads verify their locations until after the election.

Twitter also implemented new rules requiring that all political ads be labelled to show who sponsored them, and that those sending the tweets prove they are located in Australia.

While these moves are all a good start, they are unlikely to be successful in stemming the flow of manipulative content as election day grows closer.

Holes in the system

First, a foreign entity intent on manipulating the election can get around address verification rules by partnering with domestic actors to promote paid advertising on Facebook and Twitter. Furthermore, Russia’s intervention in the US election showed that “troll” or “sockpuppet” accounts, as well as botnets, can easily spread fake news content and hyperlinks in the absence of a paid promotion strategy.

Facebook has also implemented measures that actually reduce transparency in its advertising. To examine how political advertising works on the platform, ProPublica built a browser plugin last year to collect Facebook ads and show which demographic groups they were targeting. Facebook responded by blocking the plugin. The platform’s own ad library, while expansive, also does not include any of the targeting data that ProPublica had made public.




Read more:
Russian trolls targeted Australian voters on Twitter via #auspol and #MH17


A second limitation faced by the AEC, social media companies, and government agencies is timing. The framework set up last year by the AEC to address content in possible violation of electoral rules has proven too slow to be effective. First, the AEC needs to be alerted to questionable content. Then, it will try to contact whoever posted it, and if it can’t, the matter is escalated to Facebook. This means that days can pass before the material is addressed.

Last year, for instance, when the AEC contacted Facebook about sponsored posts attacking left-wing parties from a group called Hands Off Our Democracy, it took Facebook more than a month to respond. By then, the group’s Facebook page had disappeared.



Portions of AEC letter to Facebook legal team for Australia and New Zealand detailing steps for addressing questionable content, sent 30 August 2018.
FOI request by ABC News, Author provided

The length of time required to take down illegal content is critical because research on campaigning shows that the window of opportunity to shift a political discussion on social media is often quite narrow. For this reason, an illegal ad likely will have achieved its purpose by the time it is flagged and measures are taken to remove it.

Indeed, from 2015 to 2017, Russia’s Internet Research Agency, identified by US authorities as the main “troll farm” behind Russia’s foreign political interference, ran over 3,500 ads on Facebook with a median duration of just one day.

Even if content is flagged to the tech companies and accounts are blocked, this measure itself is unlikely to deter a serious misinformation campaign.

The Russian Internet Research Agency spent millions of dollars and conducted research over a period of years to inform their strategies. With this kind of investment, a determined actor will have gamed out changes to platforms, anticipated legal actions by governments and adapted its strategies accordingly.

What constitutes ‘fake news’ in the first place?

Finally, there is the problem of what counts as “fake news” and what counts as legitimate political discussion. The AEC and other government agencies are not well positioned to police truth in politics. There are two aspects to this problem.

The first is that the majority of manipulative content directed at democratic societies is not obviously or demonstrably false. In fact, a recent study of Russian propaganda efforts in the United States found the majority of this content “is not, strictly speaking, ‘fake news’.”

Instead, it is a mixture of half-truths and selected truths, often filtered through a deeply cynical and conspiratorial worldview.

There’s a different issue with the Chinese platform WeChat, where there is a systematic distortion of news shared on public or “official accounts”. Research shows these accounts are often subject to considerable censorship – including self-censorship – so they do not infringe on the Chinese government’s official narrative. If they do, the accounts risk suspension or deletion of their posts. Evidence shows that official WeChat accounts in Australia often change their content and tone in response to changes in Beijing’s media regulations.




Read more:
Who do Chinese-Australian voters trust for their political news on WeChat?


For this reason, suggestions that platforms like WeChat be considered “an authentic, integral part of a genuinely multicultural, multilingual mainstream media landscape” are dangerously misguided, as official accounts play a role in promoting Beijing’s strategic interests rather than providing factual information.

The public’s role in stamping out the problem

If the AEC is not in a position to police truth online and combat manipulative speech, who is?

Research suggests that in a democracy, political elites play a strong role in shaping opinions and amplifying the effects of foreign influence and misinformation campaigns.

For example, when Republican John McCain was running for the US presidency against Barack Obama in 2008, he faced a question at a rally about whether Obama was “an Arab” – a lie that had been spread repeatedly online. Instead of breathing more life into the story, McCain provided a swift rebuttal.

After the Labor “death tax” Facebook posts appeared here last week, some politicians and right-wing groups shared the post on their own accounts. (It should be noted, however, that Hardgrave apologised for retweeting the fake tweet by McManus.)

Beyond that, the responsibility for combating manipulative speech during elections falls to all citizens. It’s absolutely critical in today’s world of global digital networks for the public to recognise they “are combatants in cyberspace”.

The only sure defence against manipulative campaigns – whether from foreign or domestic sources – is for citizens to take seriously their responsibilities to critically reflect on the information they receive and separate fact from fiction and manipulation. The Conversation

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Lies, ‘fake news’ and cover-ups: how has it come to this in Western democracies?



Malcolm Turnbull has blamed the conservative faction in the Liberal Party for the ‘insurgency’ that led to his resignation as prime minister.
Lukas Coch/AAP

Joseph Camilleri, La Trobe University

The Liberal leadership spill and Malcolm Turnbull’s downfall is but the latest instalment in a game of musical chairs that has dominated Australian politics for the best part of a decade.

For many, it has been enough to portray Tony Abbott as the villain of the story. Others have pointed to Peter Dutton and his allies as willing, though not-so-clever, accomplices. Others still have highlighted the herd instinct: once self-serving mutiny gathers steam, others will want to follow.

But this barely scratches the surface. And the trend is not confined to Australia.




Read more:
Dutton v Turnbull is the latest manifestation of the splintering of the centre-right in Australian politics


We need only think of Donald Trump’s America, Britain’s Brexit saga or the rise of far-right populist movements in Europe. Politics in the West seems uneasily suspended between farce and tragedy, as deception, accusations of “fake news” and infighting have become commonplace.

In Australia, the revolving prime ministerial door has had much to do with deep tensions surrounding climate change and energy policy more generally.

In Britain, a longstanding ambivalence towards European integration has deeply divided mainstream parties and plunged the country into “Brexit chaos”, a protracted crisis greatly exacerbated by government incompetence and political expediency.

In Italy, the steady erosion of support for the establishment parties has paved the way for a governing coalition that includes a far-right party committed to cracking down on “illegal”, specifically Muslim, immigration.

Yet, beyond these differences are certain common, cross-cultural threads which help explain the present Western malaise.

Simply put, we now have a glaring and widening gap between the enormity of the challenges facing Western societies and the capacity of their political institutions to address them.

Neoliberalism at work

The political class in Australia, as in Europe and North America, is operating within an institutional framework that is compromised by two powerful forces: the dominance of the neoliberal order and relentless globalisation.

The interplay of these two forces goes a long way towards explaining the failure of political elites. They offer neither a compelling national narrative nor a coherent program for the future. Instead, the public is treated to a series of sideshows and constant rivalries over the spoils of office.




Read more:
Partially right: rejecting neoliberalism shouldn’t mean giving up on social liberalism


How does the neoliberal creed underpin the state of current political discourse and practice? The shorthand answer is by setting economic growth as the overriding national objective. Such growth, we are told, requires the public sector to be squeezed and the private sector to be given free rein.

And when economic performance falls short of the mark, pressing social and environmental needs are unmet, or a global financial crisis exposes large-scale financial crimes and shoddy lending practices, these are simply dismissed as inconvenient truths.

Compounding the impact of this highly restrictive economic agenda is globalisation or, to be more accurate, the phenomenal growth of cross-border flows of goods and services, capital, money, carbon emissions, technical know-how, arms, information, images and people. The sheer scale, speed and intensity of these flows make them impervious to national control.




Read more:
It’s not just the economy, stupid; it’s whether the economy is fair


But governments and political parties want to maintain the pretence they can stem the tide. To admit they cannot is to run the risk of appearing incompetent or irrelevant. Importantly, they risk losing the financial or political support of powerful interests that benefit from globalisation, such as the coal lobby.

And so, deception and self-deception become the only viable option. So it is that several US presidents, including Trump, and large segments of the US Congress have flagrantly contradicted climate science or downplayed its implications.

Much the same can be said of Australia. When confronted with climate sceptics in the Liberal ranks, the Turnbull government chose to prioritise lowering electricity prices while minimising its commitment to carbon emission reductions.

The erosion of truth and trust

In the face of such evasion and disinformation, large segments of the population, especially those who are experiencing hard times or feel alienated, provide fertile ground for populist slogans and the personalities willing to mouth them.

Each country has its distinctive history and political culture. But everywhere we see the same refusal to face up to harsh realities. Some will deny the science of climate change. Others will want to roll back the unprecedented movements of people seeking refuge from war, discrimination or abject poverty.

Others still will pretend the state can regulate the accelerating use of information technology, even though the technology is already being used to threaten people’s privacy and reduce control over personal data. Both the state and corporate sector are subjecting citizens to unprecedented levels of surveillance.




Read more:
The Turnbull government is all but finished, and the Liberals will now need to work out who they are


Lies, “fake news” and cover-ups are not, of course, the preserve of politicians. They have become commonplace in so many of our institutions.

The extraordinary revelations from the Banking Royal Commission make clear that Australia’s largest banks and other financial enterprises have massively defrauded customers, given short shrift to both the law and regulators and consistently disregarded the truth.

And now, as a result of another Royal Commission, we have a belated appreciation of the rampant sexual abuse of children in the Catholic Church, which has been consistently covered up by religious officials.

These various public and private arenas, where truth is regularly concealed, denied or obscured, have had a profoundly corrosive effect on the fabric of society, and inevitably on the public sphere. They have severely diminished the social trust on which the viability of democratic processes vitally depends.

There is no simple remedy to the current political disarray. The powerful forces driving financial flows and production and communication technologies are reshaping culture, the global economy and policy-making processes in deeply troubling ways.

Truth and trust are now in short supply. Yet, they are indispensable to democratic processes and institutions.

A sustained national and international conversation on ways to redeem truth and trust has become one of the defining imperatives of our time.


Joseph Camilleri will speak more on this topic in three interactive public lectures entitled Brave New World at St Michael’s on Collins in Melbourne on Sept. 11, 18 and 25. The Conversation

Joseph Camilleri, Emeritus Professor of International Relations, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.